“There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now,” he said.

just days after poor lil sammyboi and co went out and ran their mouths! the horror!

Sources told Reuters that the warning to OpenAI’s board was one factor among a longer list of grievances that led to Altman’s firing, among them concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed…, but it wasn’t fundamentally about a concern like that.

god I want to see the boardroom leaks so bad. STOP TEASING!

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

this appears to be a vaguely good statement, but I’m gonna (cynically) guess that it’s more steered by the fact that MS has now repeatedly burned their fingers on human-interaction AI shit, and is reaaaaal reticent about the impending exposure

wonder if they’ll release a business policy update about usage suitability for *GPT and friends
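
To make Smith’s “safety brake” analogy concrete, here’s a toy sketch in Python of a circuit-breaker-style wrapper that drops unsafe outputs and, after repeated faults, halts the automated system until a human resets it. Every name and threshold below is invented for illustration; nothing here describes an actual Microsoft design.

    class SafetyBrakeEngaged(Exception):
        """Raised once the brake trips; only a human reset re-enables the system."""

    class SafetyBrake:
        def __init__(self, max_faults: int = 3):
            self.max_faults = max_faults  # consecutive bad outputs tolerated
            self.faults = 0
            self.engaged = False

        def run(self, action, is_safe):
            """Execute action() only while the brake is open.

            is_safe(result) is the external check; repeated failures trip
            the brake, after which every call raises until human_reset().
            """
            if self.engaged:
                raise SafetyBrakeEngaged("brake engaged; awaiting human reset")
            result = action()
            if is_safe(result):
                self.faults = 0  # a healthy output clears the fault counter
                return result
            self.faults += 1
            if self.faults >= self.max_faults:
                self.engaged = True  # stop the automated system entirely
                raise SafetyBrakeEngaged("too many unsafe outputs; back to human control")
            return None  # drop this one unsafe action, keep running

        def human_reset(self):
            """Deliberately a separate, manual step: the human stays in control."""
            self.faults = 0
            self.engaged = False

The point of the pattern, as with an electrical circuit breaker, is that the trip is automatic but the reset is not.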

  • datarama@awful.systems · 1 year ago

    What do you mean? The ridiculous AI-generated “news” spat out on MSN, or something more?

    • froztbyte@awful.systemsOP · 1 year ago

      Tay is the first thing that comes to mind. I don’t remember all the names concretely, just that they’ve repeatedly had egg on their face

      this, while not of their own making, is nonetheless something they have immense exposure to. and thus what I posit: that they’ve become sufficiently sensitized to bad PR from this stuff that they thought to just try to get ahead of it

      (over and above risk management for future stock price protection)

      • datarama@awful.systems · 1 year ago

        I think the lesson they’ve learned from Tay is that you absolutely don’t want a machine learning system that adapts from human feedback unless you’ve got humans on board to select which feedback you do and don’t want it to learn from.

        And hey - their new friends have a Kenyan clickworker sweatshop for that.
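
        In code terms, that lesson might look like the sketch below: an online-learning loop that only ever trains on feedback a human reviewer has explicitly approved. All names are invented for illustration; this is not how Tay (or any real product) was built.

            from dataclasses import dataclass

            @dataclass
            class Feedback:
                prompt: str
                reply: str
                approved: bool | None = None  # None = not yet reviewed by a human

            def human_review(item: Feedback) -> bool:
                """Stand-in for the human-in-the-loop step, e.g. a moderation queue.

                A crude placeholder rule; in practice a person makes this call.
                """
                return "hitler" not in item.reply.lower()

            def select_training_batch(queue: list[Feedback]) -> list[Feedback]:
                """Gate the learner: unreviewed or rejected feedback never reaches it."""
                for item in queue:
                    if item.approved is None:
                        item.approved = human_review(item)
                return [item for item in queue if item.approved]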

        • Deborah@hachyderm.io · 1 year ago

          If you name your ML system after a railway bridge that collapsed, killing all aboard, inspiring one of the worst works of poetry in the English language, maybe you are asking for trouble.

          https://en.wikipedia.org/wiki/The_Tay_Bridge_Disaster

          Is that fair? No. Relevant? No. Did the creators of Tay know about the Tay Bridge Disaster? Also probably no. Is it funny to consider a poem so artfully terrible that no ML product could replicate its badness*? Oh hell yes.

          * Have I tried? Yes, obviously. Only on free bing GPT tho.