this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably Javascript for what sounds like a basic crossword app:
At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.
Fine: commands like those are notoriously fussy, and everybody looks them up anyway.
ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random index between 0 and the array’s length minus 1, and maybe storing that index in a second array (or a set) if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman
I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like
"s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a"
. I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.
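for what it’s worth, the "tricky" tagging step is maybe twenty lines. this is a sketch only: the quoted-letter input format is inferred from the single example line above (with "*" as a block square), and all the function names are mine, not the author’s.

```javascript
// Parse the generator's quoted-letter format into a 2D array of characters.
// Each line looks like "s""c""a""r""*"... ; '*' marks a block square.
function parseGrid(text) {
  return text.trim().split('\n')
    .map(line => [...line.matchAll(/"([^"])"/g)].map(m => m[1]));
}

// Read the maximal run of letters through (r, c) in direction (dr, dc):
// (0, 1) for across, (1, 0) for down.
function wordAt(grid, r, c, dr, dc) {
  if (grid[r][c] === '*') return null;
  // Walk back to the start of the run...
  while (grid[r - dr]?.[c - dc] && grid[r - dr][c - dc] !== '*') { r -= dr; c -= dc; }
  // ...then read forward to the next block square or edge.
  let word = '';
  while (grid[r]?.[c] && grid[r][c] !== '*') { word += grid[r][c]; r += dr; c += dc; }
  return word;
}

// Tag every letter cell with the across and down words it belongs to.
function tagCells(text) {
  const grid = parseGrid(text);
  return grid.map((row, r) => row.map((ch, c) => ch === '*' ? null : ({
    letter: ch,
    across: wordAt(grid, r, c, 0, 1),
    down: wordAt(grid, r, c, 1, 0),
  })));
}
```

from there, rendering the tagged cells as an HTML table with scoring info is plain templating, not "the better part of an evening."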
fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-treaded toy and example code. wonder why that is? (check out the author’s other articles for a hint)
I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is
most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy
I’m not going to claim to be an LLM expert; I’ve used them a bit to try to figure out which of my tasks they can and can’t help with. I don’t like them, so I don’t usually use them recreationally.
I’ll put my stakes on the table too. I’ve been programming for very close to my entire life; my mum taught me to code on a Commodore 64 when I was a tiny kid. Now I’m middle-aged, and I’ve spent my entire professional life either making software or teaching software development and/or software-adjacent areas (maths, security, etc.). I’ve always preferred to call myself a “programmer” rather than a “software engineer” or the like - I do have a degree, but I’ve always considered myself a programmer first, and a teacher/researcher/whatever second.
I think the point made in the article we’re talking about is both too soon and too late. It’s too soon because - for all my worries about what LLMs and other AI might eventually be - at the current moment they’re definitely not AutoDeveloper 3000. I’ve mentioned my personal experiences. Here is a benchmark of LLM performance on actual, real-world GitHub issues - they don’t do very well on those at all, at least for the time being. All the professional programmers I personally know still program, and when they do use LLMs, they use them to generate example code rather than to write their production code for them - basically like Stack Overflow, except one you can trust even less than actual Stack Overflow. None of them use the generated code directly - also just like you wouldn’t with Stack Overflow. At the moment, they’re tools only; they don’t do well autonomously.
But the article is also too late, because the kind of programming I got hooked on, the kind that became a lifelong passion, isn’t really what professional development is like anymore, and hasn’t been for a long time - long before LLMs. I spend much more time maintaining crusty old code than writing novel, neat, greenfield code, and the detective work that goes into maintaining a large codebase is work that LLMs are of little use in. Sure, they can explain code - but I don’t need a tool to explain what code does (I can read); I need to know why the code is there. The answer to that question is rarely in the code at all: it’s often a real-world consideration, an organizational factor, a weird interaction with hardware, or a workaround for an odd quirk of some other piece of software.

I don’t spend my time coming up with elegant, neat algorithms and doing all the cool shit I dreamt of as a kid and learnt about at university - I spend most of my time doing code detective work, fighting idiosyncratic build systems, and dealing with all the infuriating edge cases the real world seems to have an infinite supply of (and that ML-based tools tend to struggle with). Also, I go to lots of meetings - many of which aren’t just the dumb corporate rituals we all love to hate, but a bunch of professionals getting together to discuss the best way to solve a problem none of us knows exactly how to solve. The kind of programming I fell in love with isn’t something anyone would pay a professional to do anymore, and hasn’t been for a very long time.
I haven’t been in web dev for over a decade. Most active web devs I know say that the impressive demos of GPT-4 making an HTML page from a napkin sketch would have been career-ending 15 years ago, but don’t even resemble what they spend all their time doing at work now: they tear their hair out over infuriating edge cases, they try to figure out why other people wrote specific bits of code, they fight uncooperative tooling and frameworks, they try to decipher vague and contradictory requirements, and they maintain large and complex applications written in an uncooperative language.
The biggest direct influence LLMs have so far had on me is to completely destroy my enthusiasm for publishing my own (non-professional) code or articles about code on the web.