Good overview of the current sitch wrt AI doomers and others.

  • datarama@awful.systems

    This is more or less also the camp I’m in. I don’t consider myself a “reformer” either; I don’t think there’s any way to turn this technology into something good, at least not under the current socioeconomic conditions. I’m not worried that the robot cultists will end up creating an electronic god (or that they’ll accidentally create an electronic satan instead); I’m worried about the collateral social damage that’s going to accrue from an infinite firehose of corporate money propping up competing robot cultists who think they’re building electronic gods.

    As always, the real paperclip maximizer was the corporations we founded along the way.

    • Oggie@woof.group

      @datarama @200fifty
      I am kind of scared of the electronic god angle, not because it will be one, but because I think it might be a hideously small step to obeying what an autocomplete bot tells you to do, in some bizarre corollary of Roko’s basilisk, because you hope it will turn into a god.

      In other words, I have some real concern for when these crazy things start spitting out instructions that don’t turn into Looking Glass surrealism by the third step, and people start insisting we follow them blindly.

      • maol@awful.systems

        The AI seems superintelligent because it’s designed to rot people’s fucking brains…

      • earthquake@lemm.ee

        We already had the guy whose AI girlfriend encouraged him to try to kill the Queen. A stochastic [violence] basilisk is, sadly, probably inevitable.

    • maol@awful.systems

      Energy consumption alone makes it non-viable. The only way they can do it is with cheap electricity, preferably from somewhere far away so the users can’t see the power plants being expanded or even built to supply these AI companies. I live in Ireland, and the number of data centres here is already starting to affect our fucking electricity supply. Whose electricity are they going to steal to generate their jpegs? “Sorry, people of Kazakhstan, I know you want to run your dialysis machines and turn the lights on at night so your kids can do their homework, but we have some very rich people who need to churn out pornographic caricatures of women they don’t like …”

        • mmby@mastodon.social

          @datarama though it can be used ethically, if the labor is donated, or if something made from public goods is regulated to remain a public good

          right now, it’s an Elsevier business model: receive the work of others at no cost and sell it as a service

          • datarama@awful.systems

            I’d argue that right now, generative AI companies are actually doing what I’d have thought impossible: being even worse than the Elsevier business model. At least Elsevier isn’t randomly hoovering up every single bit of research data and every paper on the internet without permission and monetizing it.

            This is one of those bits of collateral damage: in an earlier and more innocent era, writing about weird bits of domain knowledge or about various technical misadventures on the Internet felt great; you’d hope some people would find it and that it’d help or amuse them. Now it feels rather bleak: you know all your writings will be ingested by AI companies and used against people.