After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.

  • gerikson@awful.systemsOP · 1 year ago

    This comment from the HN discussion is too funny

    https://news.ycombinator.com/item?id=37725746

    The number of AI safety sessions I’ve joined where the speakers have no real AI experience, talking about potentially bad futures based on zero CS experience and little ‘evidence’ beyond existing sci-fi books and anecdotes, has left me very jaded on the subject as a ‘discipline’.

    • froztbyte@awful.systems · 1 year ago

      “who needs to listen to the poet/writers/painters/sculptors/… anyway? they’re just there to make things that look good in my palazzo garden!”

      • 200fifty@awful.systems · 1 year ago

        Yes, there are a lot of bunk AI safety discussions. But there are legitimate concerns as well.

        Hey, don’t worry, someone’s standing up for–

        AI is close to human level.

        Uh, never mind