this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably Javascript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random index between 0 and the array’s length minus 1, and maybe storing that index in a second array if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman
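
for the record, here’s the entire non-NP-complete solution as a quick Python sketch (a set standing in for the “second array”, and /usr/share/dict/words standing in for whatever dictionary file he had):

import random

# just fucking pull the file into memory
with open("/usr/share/dict/words") as f:
    words = f.read().splitlines()

picked = []
seen = set()  # the "second array" that guarantees uniqueness
while len(picked) < 100:
    i = random.randrange(len(words))  # random index in [0, length - 1]
    if i not in seen:
        seen.add(i)
        picked.append(words[i])

print("\n".join(picked))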

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-treaded toy and example code. wonder why that is? (check out the author’s other articles for a hint)

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

  • Soyweiser@awful.systems

    But I knew the task would be tricky

    Is it just me or is this not even that tricky (just a bit of work, so I agree with him on the free evening thing, esp when you are a bit rusty)? Anyway, note how he does give a timeframe for doing this himself (an evening) but doesn’t mention how long he worked on the ChatGPT stuff, nor does he mention whether he succeeded at his project at all

    E: anyway what he needs is an editor.

    • self@awful.systemsOP

      this is the exact kind of clown who’d go “uh actually I have an editor” and fire up ChatGPT again

    • datarama@awful.systems

      It’s not tricky at all, but it is tedious. It’s tedious precisely because it isn’t tricky. There’s little essential complexity in the task (so it isn’t fun to solve unless you’re a beginner), but it’s buried in a lot of incidental complexity.

      The thing I’ve personally gotten most actual real-world utility out of LLMs for is … writing VimL scripts, believe it or not. VimL is a language that’s almost entirely made out of incidental complexity, and the main source of friction (at least to me) is that while I use Vim all the time, I rarely write new VimL scripts, so I forget (repress?) all the VimL trivia that aren’t just simple compositions of my day-to-day commands. This is exactly what you’d expect LLMs to be good at: The stakes are low, the result is easy to test, the scripts I need are slight variations over various boring things we’ve already done a ton of times, and writing them requires zero reasoning ability, just a large pile of trivia. I’d prefer it if Vim had a nicer scripting language, but here we are.

      They still screw it up, of course, but given that I never want a VimL script to be very large anyway, that’s easy to fix.

      • Soyweiser@awful.systems

        Yeah, I was already mentally running through different data structures and how to convert between various ones to solve the crossword puzzle thing (before I went ‘wtf am I doing’), and was already annoyed by a bit of the tedium of the problem.

        And that is interesting that it works well for scripting like that.

        I do now wonder how much of working with LLMs for code is partially the rubber duck effect: while talking to an LLM and trying to get it to generate the code you want, are you already working out the problem more and more?

        • datarama@awful.systems

          No need to speculate! Lots of programmers say - in those exact words - that they use LLMs as rubber ducks that talk back. As one of my friends (who uses ChatGPT a lot, for that exact purpose) likes to put it: The AI has no brain, you have to bring your own.

          • Soyweiser@awful.systems

            Well, I had to speculate as I had not talked to people about this and don’t use LLMs myself (or at least not directly glares at google search results). So thanks!

        • datarama@awful.systems

          I am! I use Neovim as my daily driver on my own machine.

          I still need some VimL scripts for when I need to work with systems that have Vim but not Neovim, so things I want to always be there, I’ve generally done in VimL. Anything that involves a bit more complexity I do in Lua (or call out to an external script).

  • datarama@awful.systems

    This response is going to be rambling.

    For the example problem: If the dictionary file comfortably fits in memory and this was just a one-off hack, I probably wouldn’t even have to think about the solution; it’s a bash one-liner (or a couple lines of Python) and I can certainly write it faster than I could prompt an LLM for it. If I’m reading the file on a Raspberry Pi or the file is enormous, I’d use one of the reservoir sampling algorithms. If performance isn’t all that important I’d just do the naive one (which I could probably hack up in a couple of minutes); if I needed an optimal one I’d have to look at some of my old code (or search the internet). An LLM could probably do the optimal version faster than I could (if prompted specifically to do so) … but obviously I’d have to check if it got it right, anyway, so I’m not sure where the final time would land.
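
    For concreteness, the naive reservoir sampler really is a couple-of-minutes hack. A Python sketch (Algorithm R, assuming nothing more than a line-oriented file):

    import random

    def reservoir_sample(lines, k=100):
        # Algorithm R: keep the first k lines, then let line i evict a
        # random reservoir slot with probability k/(i+1)
        reservoir = []
        for i, line in enumerate(lines):
            if i < k:
                reservoir.append(line)
            else:
                j = random.randrange(i + 1)  # uniform over [0, i]
                if j < k:
                    reservoir[j] = line
        return reservoir

    with open("/usr/share/dict/words") as f:
        print("".join(reservoir_sample(f)))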

    I am sure, however, that it’d be less enjoyable. And this (like I think the author is trying to express) is saddening. It’s neat that the hardware guy in the story could also solve a software problem, but a bit sad that he can do it without actually learning anything, just by prompting a machine built out of appropriated labour - I imagine this is what artists and illustrators feel about the image generators. It feels like skills it took a long time to build up are devaluing, and the future the AI boosters are selling - one where our role is reduced to quality controlling AI-generated barf, if there’s a role left for us at all - is a bleak one. I don’t know how well-founded this feeling actually is: In a world that has internet connections, Stack Overflow, search engines and libraries for most of the classic algorithms, the value of being able to blam out a reservoir sampling algorithm from memory was very close to zero anyway.

    It sure wasn’t that ability I got hired for: I’ve mentioned before that I’ve not had much luck trying to use LLMs for things that resemble my work. I help maintain an open-source OS for industrial embedded applications. The nice thing about open source is that whenever we need to solve some problem someone else already solved and put under an appropriate license, we can just use their solution directly without dragging anything through an LLM. But this also definitionally means that we spend pretty much all our time on problems that haven’t been solved publicly (and that LLMs haven’t seen examples of). For us, at the moment, LLMs don’t help with any of the tasks we actually could use help with. Neither does Stack Overflow.

    But the explicit purpose of generative AI is the devaluation of intellectual and creative labour, and right now, a lot of money is being spent on an attempt to make people like me redundant. Perhaps this is just my anxiety speaking, but it makes me terribly uneasy.

    • thesmokingman@programming.dev

      I’ve been conducting DevOps and SRE interviews for years now. There’s a huge difference between someone that can copypasta SO code and someone that understands the SO code. LLMs are just another extension of that. GitHub Copilot is great for quickly throwing together an entire Terraform file. Understanding how to construct the project, how to tie it all together, how to test it, and the right things to feed into Copilot requires actually having some skill with the work.

      I might hire this person at a very junior level if they exhibited a desire to actually understand what’s going on with the code. Here an LLM can serve as a “mentor” by spitting out code very quickly. Assuming you take the time to understand that code, it can help. If you just commit, push, deploy, you can’t figure out the deeper problems that span files and projects.

      To me the only jobs that might not be safe are for executives a good programmer probably doesn’t want to work for.

      • datarama@awful.systems

        I’m not personally concerned that any currently-existing ML system can take my job. The state-of-the-art ones can barely help me in my job, let alone do it for me. The things I’ve found them to be good at are things I spend very little time at work actually doing.

        But they’re vaguely shaped like something that can take our jobs, and I don’t know if they’ll turn into that. So I worry - in part also for the purely personal reason that I’m a disabled, middle-aged guy who’s seen better days; a hypothetical future labour market that has no need for programmer-shaped brains anymore is one that people like me would probably do very poorly in.

        • thesmokingman@programming.dev

          I think that’s a really fair far-future take. I think the most reasonable approach isn’t knee-jerk in either direction (they’re taking our jobs vs they are no threat). I feel that good programmers of the future are going to take advantage of AI capabilities. Copilot does a great job of quickly writing boilerplate code I could write at a slower rate. That in turn gives me more time to focus on things like chunking the problem into method names it could figure out how to write or just writing that complicated business logic myself. All of that comes from my architecture experience and ability to suss out what stakeholders really want, then deliver a minimum viable solution quickly enough to iterate or deliver. The emphasis becomes a focus on soft skills and systems thinking, which is something I feel can come naturally to good programmers today. Getting soft skills isn’t so easy and that might push a lot of folks out.

          No matter what, I feel like a solid programmer is one who knows how to adapt. If you can do that, you can adapt to a future where our code jobs are very different from where they are today. I’m pretty young; I started writing Perl web apps, switched to PHP, did random shit, learned JavaScript, did some Rails, then found my passion in DevOps/SRE. My selling point pre-leadership was my ability to code, not just write YAML, on top of infra knowledge. I think even in an AI future there’s still an edge or two available, even if it’s just soft skills.

          On a related note, if LLMs get good enough to shove us out, the writing will be on the wall and we should have plenty of time to use said LLMs to write killer software for future us before executives grok the change.

          • datarama@awful.systems

            I’m afraid this is going to be a bit of a rambling answer.

            Some context: Many devices in the industrial embedded sector have extreme environmental requirements. Some of them have to keep functioning if they’re being blasted with a snowstorm or if they’re right next to horrible exhaust heat. The processors that can handle that sort of abuse are often a lot less powerful than desktop or even mobile consumer processors, and storage is terribly expensive. At the same time, a lot of the software that developers and users reasonably expect to be present has grown awfully large and resource-hungry. A system crash can be very, very unpleasant - and as every dev knows, more code means more potential for bugs.

            What all of this means, taken together, is that we’re all very, very happy when we manage to come up with something that contributes a large negative number of lines of code to the platform. If we figure out something that allows us to make a lot of other code redundant so we can throw it away, everyone is happy. This is the opposite of what tools that enable very rapid generation of repetitive code help with - we spend more time trying to come up with smart ways to avoid ending up with more code. Don’t get me wrong - we use generated code for a lot of tasks where that makes sense, but the part we seem to be spending all our time on at my job, LLMs don’t help very much at present.

            The cheery part: I’ve mentioned elsewhere that one of the problems mentioned in the article wasn’t “tricky”, but rather it was just tedious. These sorts of tasks don’t really require deep reasoning or creativity - they just require a lot of trivia, and they’re things that have been done a billion times already, but the languages in common use don’t necessarily have mechanisms that make them easily abstractable. There’s probably a lot of software that doesn’t get written simply because people can’t be arsed to do the boring part. 90% of that currently-unwritten software is going to be crap because 90% of everything is crap, but if LLMs help get that last 10% off the floor, then that’s great.

            Historically, whenever software has gotten significantly easier and cheaper to make, we’ve ended up discovering that there’s a lot more things we can do with software we hadn’t done before because it’d be too expensive or bothersome, and this has usually meant that demand for software has gone up. A current-day web dev can whip something up in a couple of days that would have been a major team undertaking in 2010, and completely technically infeasible in 1998. If you showed a modern web framework to a late-1990s web developer, they’d see a tool that had automated their entire job away - but there’s a lot more web developers today than there were in 1998.

            The dark part: We’re discussing a “programmers are over” article. There have been a lot of them in the media in the last year, and while I don’t think that’s an accurate description of the world I actually see around me, this is not at all a fun time to have an anxiety disorder. I’ve spent most of my life filing away the more obviously neurodivergent bits of my personality, and I worked as a teacher for a while - but I am what I am, and “soft skills” will never be my strength.

            There’s not a billion-dollar industry in “better autocomplete”, but there would be one in “mass layoff as a service”, and that’s what many of the big players in the field are pumping enormous amounts of money into trying to achieve.

            • thesmokingman@programming.dev

              I love your rambling responses! You add a lot of detail and you’re talking about a side of code I don’t touch.

              I think a safety net for you that will continue to exist for your entire lifetime is embedded work for the US government or related contracts. I’ve got buds writing embedded code for defense contracts. Stuff like that will take decades to adopt LLMs because of how contracts work and the security process. I’ve got friends at DHS that just finished a fucking ColdFusion migration. Some friends are writing Ada for bombers. Your skills fit that niche pretty well and it’s stable work. The idea is not to use the newest and greatest but rather to test in depth with old setups.

              • self@awful.systemsOP

                if the capitalists succeed in their omnipresent goal to vastly reduce the perceived value of your labor, you can always write terrible code that kills in one of the most tedious languages ever invented

                do these ideas give you comfort

                • datarama@awful.systems

                  This is the point where the anxiety patient has to make a rambling reality check.

                  It’s obvious they want mass-layoff-as-a-service. They openly say so themselves. But it’s less obvious, at least at this point in time, that generative AI (at least in models like the current ones) actually can create that. I’m worried because I extrapolate from current trends - but my worries are pretty likely to be wrong. I’m too good at worrying for my own good, and at this point, mass layoffs and immiseration are still involuntary speculative fiction. In general, when transformative technologies have come along, people have worried about all the wrong things - people worried that computers would make people debilitatingly bad at math, not that computers would eventually enable surveillance capitalism.

                  We’re currently in the middle of an AI bubble. There are companies that have enormous valuations despite not even having a product, and enormous amounts of resources are being poured into systems that nobody at present knows how to make money from. The legal standing of the major industry players is still unestablished, and world-leading experts disagree about what these models can realistically be expected to do and what they can’t. The hype itself is almost certainly part of a deliberate strategy: When ChatGPT landed a year ago, OpenAI had already finished training GPT-4 (which began a long time prior). When they released that, it looked like they leapt from GPT-3 to GPT-4 in a few months. The image input capability that came out a few months ago was in the original GPT-4 model (according to their publication at the time); they just disabled it until recently. All of this has been very good at keeping the hype bubble inflated, which has both had the effect of getting investors (and other tech companies) to pour money into the project and making a lot of people really worried for their livelihoods. I freak out whenever I see a flashy demo showing that an LLM can solve some problem that no developer actually needs to use their brain for solving, because freaking out is unfortunately what comes naturally to me when the stakes are high.

                  I don’t think this is like the crypto bubble. Unlike crypto, people are using LLMs and diffusion models to produce things, ranging from sometimes-useful code and “good enough” illustrations for websites, to spam, homework assignments and cover letters, to nonconsensual deepfake porn and phishing. We now have an infinite bullshit machine, and lots of what people do at work involve producing and managing bullshit. But it’s not all bullshit. A couple months ago, the “jagged frontier” paper gave some examples of tasks for management consultants, with and without LLM assistance. Unsurprisingly, writing fluffy and eloquent memos was much more productive with an LLM in tow, but complex analytical tasks actually saw some of the consultants get less productive than the control group. In my own attempts to use them in programming, my tentative conclusion is that at the moment they help to some extent when the stumbling block is about knowledge, but not really much when it’s about reasoning or skill. And more crucially, it seems that an LLM without a human holding its hand isn’t very good at programming (see the abysmal issue resolution rate for Github issues in the SWE-Bench paper). At the moment, they’re code generators rather than automatic programmers, and no programmer I know works as a code generator. Crucially, not a single one of them (who doesn’t also struggle with anxiety) worries about losing their jobs to LLMs - especially the ones who regularly use them.

                  A while ago, I read a blog post by Laurence Tratt, in which he mentions that he gets lots of productivity out of LLMs when he needs a quick piece of Javascript for some web work (something he doesn’t work with daily), but very little for his day job in programming language implementation. This, it seems to me, likely isn’t because programming language implementation is harder than web dev or because there’s not enough programming language implementation literature in the training set (there’s a lot of it, judging by how much PLT trivia even small models can spit out) - it’s because someone like him has high ambitions when working with programming language implementation, and he knows so much about it that the things he doesn’t know are things the LLM also doesn’t know.

                  I don’t know if my worries are reasonable. I’m the sort of person who often worries unreasonably, and I’ve never felt as uncertain about the future of my field as I do at the moment. The one thing I’m absolutely sure of is that there’s no future in which I write code for the US military, though.

                • thesmokingman@programming.dev

                  Do you have anything to else to offer or is your solution to roll over and do nothing? Some of us still have families and networks to support so we can’t just devote all our time to sniping labor on the internet in preparation for the glorious revolution. Given the discussions you have on your instance, I’m kinda disappointed this tepid response is the best you have.

                  I should have seen this coming.

              • gerikson@awful.systems

                Assuming that the current largesse of US defense contracts will survive the LLM-induced collapse of the middle classes is … a take.

                US defense spending is seen as a political holy cow at the moment but its well-paying superstructure is as vulnerable to attacks from the nativist/neo-isolationist right as from the left. Add in a sprinkling of attacks on “woke” corporations and that bomber program is not as safe as you think.

              • datarama@awful.systems

                I’m not American, and if “mass layoff as a service” actually ends up working, the safety net my country does have is going to break apart rather rapidly.

                It’s dependent on people and companies paying taxes, and that’s going to be a problem if middle class employment implodes, domestic businesses are destroyed and all the money is captured by Silicon Valley oligarchs and put in tax havens.

                • thesmokingman@programming.dev

                  Oh my goodness! I apologize for assuming. I read the wrong thing into your comments.

                  Do the whims of Silicon Valley greatly affect your market? The most I’ve interacted with non-US markets is running some near-shore consulting with one of the majors (spun up a Mexican firm but the executives wanted to pay local rates for remote US work which is a fucking joke). I also know, should I leave the US, I will have a much lower salary. I’ve hired a fair amount of remote talent in the Americas and India for various jobs; I think a good chunk of that work is the kind that could be replaced by LLMs in the next two decades or so.

    • self@awful.systemsOP

      I help maintain an open-source OS for industrial embedded applications.

      fuck yes. there’s something weirdly exciting about work like that — not only is it a unique set of constraints, but it’s very likely that an uncountable number of people (myself possibly included) have interacted with your code without ever knowing they did

      But the explicit purpose of generative AI is the devaluation of intellectual and creative labour, and right now, a lot of money is being spent on an attempt to make people like me redundant. Perhaps this is just my anxiety speaking, but it makes me terribly uneasy.

      absolutely same. I keep seeing other programmers uncritically fall for poorly written puff pieces like this and essentially do everything they can to replace themselves with an LLM, and the pit drops out of my stomach every time. I’ve never before seen someone misunderstand their own career and supposed expertise so thoroughly that they don’t understand that the only future in that direction is one where they’re doing a much more painful version of the same job (programming against cookie-cutter LLM code) for much, much less pay. it’s the kind of goal that seems like it could only have been dreamed up by someone who’s never personally survived poverty, not to mention the damage LLM training is doing to the concept of releasing open source code or even just programming for yourself, since there’s nothing you can do to stop some asshole company from pilfering your code.

      • fnix@awful.systems

        the only future in that direction is one where they’re doing a much more painful version of the same job (programming against cookie cutter LLM code) for much, much less pay.

        To the extent that LLMs actually make programming more “productive”, isn’t the situation analogous to the way the power loom was bad for skilled handweavers whilst making textiles more affordable for everyone else?

        I should perhaps say that I’m saying this as someone who is just starting out as a web developer (really chose the right time for that, hah). I try to avoid LLMs and even strictly unnecessary libraries for now, because I like learning about how everything works under the hood and want to get an intimate grasp of what I’m doing. But I can also see that ultimately that’s not what people pay you for, and that once you’ve built up sufficient skill to quickly parse LLM output, the demands of the market may make using them unavoidable.

        To be honest, I feel as conflicted & anxious about it all as others already mentioned. Maybe I am just too green to fully understand the value that I would eventually bring, but can I really, in good conscience, say that a customer should pay me more when someone else can provide a similar product that’s “good enough” at a much lower price?

        Sorry for being another bummer. :(

        • datarama@awful.systems

          I’m not going to claim to be an LLM expert; I’ve used them a bit to try to figure out which of my tasks they can and can’t help with. I don’t like them, so I don’t usually use them recreationally.

          I’ll put my stakes on the table too. I’ve been programming for very close to my entire life; my mum taught me to code on a Commodore 64 when I was a tiny kid. Now I’m middle-aged, and I’ve spent my entire professional life either making software or teaching software development and/or software-adjacent areas (maths, security, etc.). I’ve always preferred to call myself a “programmer” rather than a “software engineer” or the like - I do have a degree, but I’ve always considered myself a programmer first, and a teacher/researcher/whatever second.

          I think the point made in the article we’re talking about is both too soon and too late. It’s too soon because - for all my worries about what LLMs and other AI might eventually be - at the current moment they’re definitely not AutoDeveloper 3000. I’ve mentioned my personal experiences. Here is a benchmark of LLM performance on actual, real-world Github issues - they don’t do very well on those at all, at least for the time being. All professional programmers I personally know still program, and when they do use LLMs, they use them to generate example code rather than to write their production code for them, basically like Stack Overflow, except one you can trust even less than actual Stack Overflow. None of them use its generated code directly - also like you wouldn’t with Stack Overflow. At the moment, they’re tools only; they don’t do well autonomously.

          But the article is also too late, because the kind of programming I got hooked on and that became a lifelong passion isn’t really what professional development is like anymore, and hasn’t been for a long time, long before LLMs. I spend much more time maintaining crusty old code than writing novel, neat, greenfield code - and the kind of detective work that goes into maintaining a large codebase is often one that LLMs are of little use in. Sure, they can explain code - but I don’t need a tool to explain what code does (I can read), I need to know why the code is there. The answer to this question is rarely directly related to anything else in the code, it’s often due to a real-world consideration, an organizational factor, a weird interaction with hardware, or a workaround for an odd quirk of some other piece of software. I don’t spend my time coming up with elegant, neat algorithms and doing all the cool shit I dreamt of as a kid and learnt about at university - I spend most of my time doing code detective work, fighting idiosyncratic build systems, and dealing with all the infuriating edge cases the real world seems to have an infinite supply of (and that ML-based tools tend to struggle with). Also, I go to lots of meetings - many of which aren’t just the dumb corporate rituals we all love to hate, but a bunch of professionals getting together to discuss the best way to solve a problem none of us know exactly how to solve. The kind of programming I fell in love with isn’t something anyone would pay a professional to do anymore, and hasn’t been for a very long time.

          I haven’t been in web dev for over a decade. Most active web devs I know say that the impressive demos of GPT-4 making a HTML page from a napkin sketch would have been career-ending 15 years ago, but doesn’t even resemble what they spend all their time doing at work now: They tear their hair out over infuriating edge cases, they try to figure out why other people wrote specific bits of code, they fight uncooperative tooling and frameworks, they try to decipher vague and contradictory requirements, and they maintain large and complex applications written in an uncooperative language.

          The biggest direct influence LLMs have so far had on me is to completely destroy my enthusiasm for publishing my own (non-professional) code or articles about code on the web.

        • self@awful.systemsOP

          I’m not sure the power loom analogy works, because power looms are (to my non-weaver knowledge) fit for purpose. if power looms’ output required significant rework by a skilled weaver (being paid significantly less for essentially the same amount of work done more tediously, per my point above), relied on stolen patterns from all of the world’s handweavers, and they were crushingly inefficient to run per woven piece, I seriously doubt history would remember them as a successful invention

          unfortunately, we’re living in uniquely awful times, and decades of tech’s strange, manipulated culture have turned many programmers into nihilistic utopians with no ability to think things through on a systemic level. generative AI as a whole is nothing but an underhanded wage reduction tactic, but (by design) our industry doesn’t have the solidarity to fight it in any way that works (see the Writers’ Guild’s successful strike)

          • datarama@awful.systems

            The power loom analogy works very well, actually. Their spot in history is, in part, because of who got to write the history books.

            The inventors and entrepreneurs who developed them spent lots of time spying on weavers - who understandably weren’t cooperative, when they found out what the machines were intended to do. The quality of their products was so shoddy that the weavers’ first attempt at a legal challenge actually tried to have them defined as fraudulent, because they figured the poor-quality fabric would ruin the reputation of the English textile industry. In the early days, they actually did require frequent fix-up jobs.

            Not all of the entrepreneurs who built factories were monstrous assholes; some of them were quite considerate people who paid professional weavers a decent wage to work for them (these weavers still often hated their new working conditions). Some did this out of legitimate concern for their communities (it was a smaller world, and many of them personally knew the very people whose jobs they were degrading), and some did so because they were afraid that Luddites would break into their factories and destroy all the expensive machines. Most of them were put out of business; they were easy to undercut by owners who instead used indentured children taken from orphanages.

            They did drive the price of clothing down, but unfortunately that didn’t directly translate to all-around increased economic prosperity immediately: Aside from all the weavers being put out of business, entire communities suffered economic collapse because they were built around those weavers’ income.

            You’re right that programmers often have little class consciousness. I’m a union member myself (and so are most of my programmer friends and colleagues) - but unfortunately, I’m not sure how much some unions in a tiny country can do against the economic might of Silicon Valley.

            • self@awful.systemsOP

              huh, explained like that the power loom analogy does much better than I thought in encapsulating this anxiety; at its core, it’s a (very justified) fear that we haven’t learned anything from history and that the loudest and most foolish of our profession are gleefully marching us towards an awful fate

              I’ve been doing some reading on the origins of technolibertarianism (though as with all my reading I’m far behind where I’d like to be) and it’s fucking insane the lengths Silicon Valley has gone to in order to make unionization a taboo topic among American tech workers

          • swlabr@awful.systems

            Totally agree.

            IMO a better analogy would be clothing sweatshops rather than the power loom. Same utilitarian effect of textile affordability increases. Same ethical fuckery with exploitation of labour.

        • locallynonlinear@awful.systems

          Commoditization is a real market force, and yes, it will come for this industry as it has for others.

          Personally, I think we need to be much, much more creative and open to understanding ourselves and the potential of the future. It’s hard to know specifics, but there are broad domains.

          Lately, I’ve been hacking at home with more hardware, and creating interesting low scale, low energy input systems that help me… garden. Analyzing soil samples, planning plots and low energy irrigation, etc, etc. It’s been fun because the work is less about programming in depth and more broad systems thinking. I even have ideas for making a small scale company off this. At that point, purely the programming won’t be the bottleneck.

          If it helps, as an engineer, take a step back and think about nature and how systems and niches within systems evolve. Nature isn’t actually in the business of replacing due to redundancy, it’s in the business of compounding dependency via waste resources, and the shifting roles as a result of that. We need to be ready to creatively take our experience, perspective, and energy gradient to new places. It’s no different for any other part of nature.

          • datarama@awful.systems

            I mean, we’ve been commoditizing our own skills for the entire duration of our profession: Libraries, higher-level languages, open-source. This is the nature of programming, really; we’d be bad at our jobs if we didn’t do that. Today’s afternoon hack would have taken an entire team several months of work a few decades ago, and many of the projects teams start today were unthinkable a few decades ago. This isn’t because we’re a ton better, it’s because a lot of the tough work has already been done.

            Historically, every major increase in programmer productivity has led to demand for software rising faster than the even-more-productive programmers could keep up with, though.

      • locallynonlinear@awful.systems

        since there’s nothing you can do to stop some asshole company from pilfering your code.

        Currently. Though I think that there is a future where adversarial machine learning might be able to greatly increase the cost of training on pilfered data by encoding human generated inputs in a way that runs counter to training algorithms.

        https://glaze.cs.uchicago.edu/

        • corbin@awful.systems

          Even if there were Glaze/Nightshade for computer programs, it could be reverse-engineered just like any other code obfuscation. This is the difference between code and most other outputs of labor: code is syntactic and formal, allowing for decidable objective analyses.

          • datarama@awful.systems

            Well, some analyses are decidable, anyway. ;-)

            But you’re right, of course. The only real data poisoning you could do with code is sharing deliberately bad code … but then you’re also not sharing useful open source code with your fellow humans; you’re just spamming.

            At any rate, I’m not sure that future major gains in LLM coding ability are going to come from simply shoving more code in. The ones we have today have already ingested a substantial chunk of all the open-source code that exists on the public web, and (as the SWE-Bench results I’ve shared elsewhere show) they still struggle if they aren’t substantially guided by a human.

          • locallynonlinear@awful.systems

            There’s a difference between “can” and “cost”. Code is syntactic and formal, true, but what about pseudocode that is perfectly intelligible to a human? There is, after all, a difference between sharing “compiled” code that is meant to be fed directly into a computer and sharing “conceptual” code that is meant to be contextualized into knowledge. After all, isn’t “code” just the formalization of language, with a different purpose and trade-off?

  • bitofhope@awful.systems

    shuf -n 100 /usr/share/dict/words

    Master hacker

    I would have expected JS standard library to contain something along the lines of random.sample but apparently not. A similar thing exists in something called underscore.js and I gotta say it’s incredibly in-character for JavaScript to outsource incredibly common utility functions to a module called “_”.

    Language bashing aside, there’s something to enjoy about these credulous articles proclaiming AI superiority. It’s not the writing itself, but the self-esteem boost regarding my own skills. I have little trouble doing these junior dev whiteboard interview exercises without LLM help, guess that’s pretty impressive after all!

    • zogwarg@awful.systems

      Absolutely this: shuf would easily come up in a normal Google search (even with Google’s deteriorated relevancy).

      For fun, “two” lines of bash + jq can easily achieve the result even without shuf (yes I know this is pointlessly stupid)

      # one JSON string per dictionary line
      cat /usr/share/dict/words | jq -R . > words.json
      # od renders /dev/urandom as a stream of unsigned 32-bit integers
      cat /dev/urandom | od -A n -D | jq -r -n '
        import "words" as $w;
        ($w | length) as $l |
        # scale each random integer down to an index into the word list,
        # emitting words until 100 distinct indices have been seen
        label $out | foreach ( inputs * $l / 4294967295 | floor ) as $r (
          {i:0,a:[]} ;
          .i = (if .a[$r] then .i  else .i + 1 end) | .a[$r] = true ;
          if .i > 100 then break $out else $w[$r] end
        )
      '
      

      Incidentally, this is code that ChatGPT would be utterly incapable of producing, even as a toy example, because it’s such a niche use of jq.

    • buzziebee@lemmy.world

      It’s so incredibly easy to randomly select a few lines from a file that it really doesn’t need to be in the standard library. Something like 4 lines of code could do it. Could probably even do it in a single unreadable line of code.
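
      For instance, with Python’s standard library it’s an import plus one only mildly unreadable line:

      import random
      print("\n".join(random.sample(open("/usr/share/dict/words").read().splitlines(), 100)))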

  • locallynonlinear@awful.systems

    ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this),

    It’s worse than that, because there have been incredibly simple, efficient ways to k-sample a stream, with all sorts of guarantees about its distribution and no buffering required, for decades. And it took me all of 1 minute to use a traditional search engine to find all kinds of articles detailing this.
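
    (That’s reservoir sampling. For the curious, a Python sketch of the efficient variant - Li’s “Algorithm L” - which jumps ahead geometrically instead of rolling a die per item:)

    import math
    import random

    def reservoir_sample_l(stream, k=100):
        # fill the reservoir with the first k items
        it = iter(stream)
        reservoir = []
        for _ in range(k):
            try:
                reservoir.append(next(it))
            except StopIteration:
                return reservoir
        # then skip over geometrically-distributed runs of items,
        # replacing a random reservoir slot after each skip
        w = math.exp(math.log(random.random()) / k)
        while True:
            skip = math.floor(math.log(random.random()) / math.log(1 - w))
            try:
                for _ in range(skip):
                    next(it)
                reservoir[random.randrange(k)] = next(it)
            except StopIteration:
                return reservoir
            w *= math.exp(math.log(random.random()) / k)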

    If you can’t be bothered to learn a thing, it isn’t surprising when you end up worshiping the magic of the thing.

    • self@awful.systemsOP

      reading back, I wonder if they were looking for a bash command or something that’d do it? which both isn’t programming, and makes their inability to find an answer in seconds much worse

    • naevaTheRat@lemmy.dbzer0.com

      So I haven’t programmed in a long time, but isn’t a simple approach for this sort of thing (if you want low numbers like 100) just something like:

      from a distribution I like over (0, len(file)), draw 100 samples; read the line at each sample

      or, if the file is big:

      sort the samples, stream the file; whenever the current line number equals the next sample, add that line to the output array and pop the sample

      Like that is literally off the top of my head. I’m sure there are real approaches, but if googling is too hard, isn’t shit like that obvious?

      edit: wait, you’d have to dedupe this. also the real approach is called (unspellable French word for pit of holy water etc) sampling
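
      A Python sketch of that second approach, for what it’s worth (random.sample draws distinct indices, which takes care of the dedupe):

      import random

      def sample_lines(path, k=100):
          with open(path) as f:
              n = sum(1 for _ in f)  # pass 1: count the lines
          wanted = iter(sorted(random.sample(range(n), k)))  # k distinct line numbers
          target, out = next(wanted), []
          with open(path) as f:  # pass 2: stream the file, grabbing chosen lines
              for i, line in enumerate(f):
                  if i == target:
                      out.append(line.rstrip("\n"))
                      target = next(wanted, None)
                      if target is None:
                          break
          return out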

  • swlabr@awful.systems

    Ah yes, the future of coding. Instead of directly searching for stack overflow answers, we raise the sea level every time we need to balance a tree.

    AI chuds got the wrong message about the guy that tried to use tensorflow to write fizzbuzz.

    Edit: I looked it up. Tensorflow fizzbuzz guy is also an AI chud it seems

  • sc_griffith@awful.systems

    Our puzzle generator printed its output in an ugly text format… I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem

    I’m so confused. how could the generator have output something that amounts to a crossword, but not in such a way that this task is trivial? does he mean that his puzzle generator produces an unsorted list of words? what the fuck is he talking about

    • self@awful.systemsOP

      you know, you’re fucking right. I was imagining taking a dictionary and generating every valid crossword for an N x N grid from it, but like you said he claims to already have a puzzle generator. how in fuck is that puzzle generator’s output just a list of words (or a list of whatever the fuck "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a" is supposed to mean, cause it’s not valid syntax for a list of single characters with words delimited by * in most languages, and also why is that your output format for a crossword?) if it’s making valid crossword puzzles?

      fractally wrong is my favorite kind of wrong, and so many of these AI weirdos go fractal

      • 200fifty@awful.systems

        I… think (hope??) the “*” is representing filled in squares in the crossword and that he has a grid of characters. But in that case the problem is super easy, you just need to print out HTML table tags between each character and color the table cell black when the character is “*”. It takes like 10 minutes to solve without chatgpt already. :/
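
        Something like this sketch, assuming the generator hands over a list of row strings:

        def grid_to_html(rows):
            # one <td> per square; "*" becomes a black cell
            html = ["<table>"]
            for row in rows:
                cells = "".join(
                    '<td style="background:black"></td>' if ch == "*" else f"<td>{ch}</td>"
                    for ch in row
                )
                html.append(f"<tr>{cells}</tr>")
            html.append("</table>")
            return "\n".join(html)

        # made-up 5x3 grid built from the article's letters
        print(grid_to_html(["scar*", "kunis", "*area"]))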

        • sc_griffith@awful.systems

          I rejected that as too easy to be what he meant, but as soon as I read your words I knew in my heart you were right. associating these letters to words is essentially fizzbuzz difficulty, he can’t do it, and he’s writing in the new yorker that he can’t do it. I’m feeling genuine secondhand embarrassment for him
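
          to put a number on “fizzbuzz difficulty”: the whole across/down association is two loops over the grid. a sketch, again assuming a list of row strings with “*” for black squares:

          def tag_words(grid):
              # map each (row, col) letter cell to the across and down words containing it
              h, w = len(grid), len(grid[0])
              tags = {}

              def runs(cells):
                  # yield maximal runs of letter cells within one row or column
                  run = []
                  for (r, c) in cells:
                      if grid[r][c] == "*":
                          if run:
                              yield run
                          run = []
                      else:
                          run.append((r, c))
                  if run:
                      yield run

              for r in range(h):
                  for run in runs((r, c) for c in range(w)):
                      word = "".join(grid[rr][cc] for rr, cc in run)
                      for cell in run:
                          tags.setdefault(cell, {})["across"] = word
              for c in range(w):
                  for run in runs((r, c) for r in range(h)):
                      word = "".join(grid[rr][cc] for rr, cc in run)
                      for cell in run:
                          tags.setdefault(cell, {})["down"] = word
              return tags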

        • self@awful.systemsOP

          fuck me, so the only reason scoring was “tricky” to them was because this asshole chose unstructured text as their interchange format instead of, say, JSON? and even given that baffling design flaw in their puzzle generator (which is starting to feel like code GPT found and regurgitated that they didn’t know how to modify to make it suitable for their purposes) I can think of like 5 different ways to include scoring data and none of them are hard to implement

  • ericbomb@lemmy.world

    I felt like there was a 100% chance that there was a python library that you could just import and use in two lines.

    Turns out it’s like 4 lines depending on which of the multiple ones you use.

    I do love internet people who make cool things because they are smarter than me and share.

  • Mike Knell@blat.at

    @self This was the point where I started wanting to punch things:

    “At one company where I worked, someone got in trouble for using HipChat, a predecessor to Slack, to ask one of my colleagues a question. “Never HipChat an engineer directly,” he was told. We were too important for that.”

    Bless his heart. That, dearie, isn’t “engineers are so special”, it’s managers wanting to preserve old-fashioned lines of communication and hierarchy because they fear becoming irrelevant. Gatekeeping access to other people’s knowledge to make yourself important goes back millennia.

  • iamnearlysmart@awful.systems

    Really love the bit about how gpt is able to tackle the simple stuff so easily. If an original insight, I take my hat off to you. I came to the edge of it, but never quite really saw it as you point out.

    • self@awful.systemsOP

      if you never have, find YouTube videos of folks trying to use an LLM to generate code for a mildly obscure language. one I watched that gave the game away was where someone tried to get ChatGPT to write a game in Commodore BASIC, which they then pasted directly into a Commodore 64 emulator to run. not only did the resulting “game” perform like a nonsensical mashup of the simple example code from two old programming books, there was a gigantic edit in the middle of the video where they had to stop and make a significant number of fixes to the LLM’s output, where it either fictionalized something like a line number or constant, or where the mashup of the two examples just didn’t function. after all that programming on their part for an incredibly substandard result, their conclusion was still (of course) that the LLM did an amazing job

  • aubertlone@lemmy.world

    Yah I skimmed thru this article a couple days ago when I came across it.

    The author did not bother doing any legwork.

    He claims to be a programmer, but doesn’t want to spend a little bit of time investigating a tool that helps him code faster.

  • count@mastodon.social

    @self

    >I keep thinking of Lee Sedol. Sedol was…

    Okay. Lee is his family name! Come on, how could the New Yorker fuck this up?