This is good! Though he neglects to mention the group of people (including myself) who have yet to be sold on AI’s usefulness at all: all critics of practical AI harms are lumped under ‘reformers’, implying they still see the technology as valuable, just currently misguided.
Like, ok, so what if China develops it first? Now they can… generate more convincing spam, write software slightly faster with more bugs, and starve all their artists to death? … Oh no, we’d better hurry up and compete with that!
This is more or less also the camp I’m in. I don’t consider myself a “reformer” either; I don’t think there’s any way to turn this technology into something good, at least not under the current socioeconomic conditions. I’m not worried that the robot cultists will end up creating an electronic god (or that they’ll accidentally create an electronic satan instead); I’m worried about the collateral social damage that’s going to accrue from an infinite firehose of corporate money propping up competing robot cultists who think they’re building electronic gods.
As always, the real paperclip maximizer was the corporations we founded along the way.
@datarama @200fifty
I am kind of scared of the electronic god angle, not because it will be one, but because I think it might be a hideously small step to obeying what an autocomplete bot tells you to do, in some bizarre Roko’s basilisk corollary, because you hope it will turn into a god.
In other words, I have some real concern for when these crazy things start spitting out instructions that don’t turn into Looking Glass surrealism by the third step, and people insisting we follow it blindly.
We already had the guy whose AI girlfriend encouraged him to try to kill the Queen. A stochastic [violence] basilisk is probably, sadly, inevitable.
The AI seems superintelligent because it’s designed to rot people’s fucking brains…
Energy consumption alone makes it non-viable. The only way they can do it is with cheap electricity, preferably from somewhere far away so the users can’t see the power plants being expanded or even built to supply these AI companies. I live in Ireland and the amount of data centres here is already starting to affect our fucking electricity supply. Whose electricity are they going to steal to generate their jpegs? “Sorry, people of Kazakhstan, I know you want to run your dialysis machines and turn the lights on at night so your kids can do their homework, but we have some very rich people who need to churn out pornographic caricatures of women they don’t like …”
@maol @datarama Can’t believe The Matrix was real
@datarama @200fifty http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html
@datarama @200fifty or people who are just straight up using it as it is used right now: for labor extraction
Absolutely. That’s what generative AI is for, fundamentally.
@datarama though it can be used ethically, if the labor is donated or something that is made from public goods is regulated to remain a public good
right now, it’s an Elsevier business model: receive the work of others at no cost and sell it as a service
I’d argue that right now, generative AI companies are actually doing what I’d have thought to be impossible: It’s even worse than the Elsevier business model. At least Elsevier isn’t randomly hoovering up every single bit of research data and papers on the internet without permission and monetizing it.
This is one of those bits of collateral damage: in an earlier and more innocent era, writing about weird bits of domain knowledge or records of various technical misadventures on the Internet felt great; you’d hope some people would find it and it’d help or amuse them. Now it feels rather bleak: you know all your writings will be ingested by AI companies and used against people.
@datarama @200fifty still the clearest take i’ve seen on it, now 10 years ago http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-soaking-in-it.html