• GorillasAreForEating@awful.systems (OP)

    The accomplishment I’m referring to is creating GPT/DALL-E. Yes, it’s overhyped, unreliable, arguably unethical and probably financially unsustainable, but when I do my best to ignore the narratives and drama surrounding it and just try the damn thing out for myself, I’m still impressed with it as a technical feat. At the very, very least I think it’s a plausible competitor to Google Translate for the languages I’ve tried, and I have to admit I’ve found it genuinely useful for writing regular expressions and a few other minor programming tasks.
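
    (To make “minor programming tasks” concrete: here’s a purely hypothetical sketch of the kind of regex I mean - not something I actually prompted for - e.g. asking it to match ISO 8601 dates in Python.)

    ```python
    import re

    # Match ISO 8601 dates like 2023-11-05. Purely illustrative: it checks
    # the shape of the date, not whether the month/day values are valid.
    ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

    print(ISO_DATE.findall("released 2022-11-30, updated 2023-03-14"))
    # -> ['2022-11-30', '2023-03-14']
    ```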

    In all my years of sneering at Yud and his minions I didn’t think their fascination with AI would amount to anything more than verbose blogposts and self-published research papers. I simply did not expect that the rationalists would build an actual, usable AI instead of merely talking about hypothetical AIs and pocketing the donor money, and it is in this context that I say I underestimated the enemy.

    With regard to “mocking the promptfans and calling them names”: I do think ridicule can be a powerful weapon, but it won’t work well if we overstate the technology’s actual shortcomings. And frankly, sneerclub as it exists today is more about entertainment than about actually serving as a counter to the rationalist movement.

    • datarama@awful.systems

      The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.

      When I was in university a very long time ago, our AI professor went with a definition I’ve kept with me ever since: an “AI system” is a system performing a task at the very edge of what we’d thought computers were capable of until then. Chess-playing and pathfinding used to be “AI”; now they’re just “algorithms”. At the moment, natural language processing and image generation are “AI”. If we take a more restrictive definition and equate “AI” with “machine learning” (tossing out nearly the entire field from 1960 to about 2000), then we’ve had very sophisticated AI systems for a decade and a half - the scariest examples being the recommender systems deployed by the consumer surveillance industry. IBM Watson (remember that very brief hype cycle?) was winning Jeopardy contests and providing medical diagnoses in the early 2010s, and image classifiers progressed from fun parlor tricks to horrific surveillance technology.
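
      (As a purely illustrative aside - a minimal sketch, in Python, of the kind of pathfinding that used to count as “AI” and is now a first-year textbook routine; the adjacency-dict graph format is just an assumption for the example.)

      ```python
      from collections import deque

      def shortest_path(graph, start, goal):
          """Breadth-first search over an adjacency-dict graph.
          Once 'AI', now a textbook algorithm."""
          queue = deque([[start]])
          seen = {start}
          while queue:
              path = queue.popleft()
              node = path[-1]
              if node == goal:
                  return path
              for nxt in graph.get(node, []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return None  # no route exists

      # shortest_path({"a": ["b", "c"], "b": ["d"], "c": []}, "a", "d")
      # -> ['a', 'b', 'd']
      ```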

      The big difference, and what makes it feel very different now, is in my opinion largely that GPT much more closely matches our cultural mythology of what an “AI” is: A system you can converse with in natural language, just like HAL-9000 or the computers from Star Trek. But using these systems for a while pretty quickly reveals that they’re not quite what they look like: They’re not digital minds with sophisticated world models, they’re text generators. It turns out, however, that quite a lot of economically useful work can be wrung out of “good enough” text generators (which is perhaps less surprising if you consider how much any human society relies on storytelling and juggling around socially useful fictions). This is of course why capital is so interested and why enormous sums of money are flowing in: GPT is shaped as a universal intellectual-labour devaluator. I bet Satya Nadella is much more interested in “mass layoff as a service” than he is in fantasies about Skynet.

      Second, unlike in earlier hype cycles, OpenAI made GPT-3.5 onwards available to the general public with a friendly UI. This time it’s not just a bunch of Silicon Valley weirdos and other nerds interacting with the tech - it’s your boss, your mother, your colleagues. We’ve all been primed by the aforementioned cultural mythology, so now everybody is looking at something that resembles a predecessor of HAL-9000, Star Trek computers and Skynet - and otherwise normal people are worrying about things that were previously only the domain of those same Silicon Valley weirdos.

      Roko’s Basilisk is as ridiculous a concept as it ever was, though.

      • GorillasAreForEating@awful.systems (OP)

        > The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.

        For some context: prior to the release of ChatGPT I didn’t realize that OpenAI had personnel affiliated with the rationalist movement (Altman, Sutskever, maybe others?), so I didn’t make the association, and I didn’t really know about anything OpenAI did prior to GPT-2 or so.

        So, prior to ChatGPT the only “rationalist” AI research I was aware of was the non-peer-reviewed (and often self-published) theoretical papers that Yud and MIRI put out, plus the work of a few ancillary startups that seemed to go nowhere.

        The rationalists seemed to be all talk and no action, so really I was surprised that a rationalist-affiliated organization had any marketable software product at all, “AI” or not.

        And FWIW, I was taught a different definition of AI when I was in college, but it seems to be one of those terms that different people define in different ways.

        • datarama@awful.systems

          My old prof was being slightly tongue-in-cheek, obviously. But only slightly: he’d been active in the field since back when it looked like Lisp machines were poised to take over the world, neural nets looked like they’d never amount to much, and all we’d need to get to real thinking machines was to hire lots of philosophers to write symbolic-logic descriptions of common-sense tasks. He’d seen exciting AI turn into boring algorithms many, many times - and seen many more “almost there now!” approaches that turned out to lead nowhere in particular.

          He retired years ago, but I know he still keeps himself updated. I should send him an email and ask if he has any thoughts about what’s currently going on in the field.