GPT-4 is a technical accomplishment. I think it’s ridiculous to even entertain the notion that it might be becoming “sentient”, and I’m not at all convinced that there is any path from advanced autocomplete to the superintelligence that will kill all the infidels and upload the true believers into digital heaven.
You could (correctly) point out that all the heavy lifting wrt. developing the transformer architecture had already been done by Google, and that OpenAI’s big innovations were “everything on the internet is the training set” (meaning it’s going to be very difficult to make a test set that isn’t full of things that look a lot like the training set - virtually guaranteeing impressive performance on human exam questions) and securing enough funding to make that feasible. I’ve said elsewhere that LLMs are as much (or more) an accomplishment of Big Data as of AI … but at this point in time, those two fields are largely one and the same, anyway.
Prior to LLMs (and specifically OpenAI’s large commercial models), we didn’t have a single software system that could write poetry, generate code, explain code, answer zoology questions, rewrite arbitrary texts in arbitrary other styles, invent science fiction scenarios, explore alternate history, simulate Linux terminals for fictional people, and play chess. It’s not very good at most of what it does (it doesn’t write good poetry, a lot of its code is buggy, it provides lethal dietary advice for birds, its fiction is formulaic, etc.) - but the sheer generality of the system, and the fact that it can be interacted with using natural language, are things we didn’t have before.
There is certainly some mechanical turking going on behind the scenes (“viral GPT fails” tend to get prodded out of it very quickly!), but it can’t all be mechanical turking - it would not be possible for a human being to read and answer arbitrary questions about a 200-page novel as quickly as GPT-4-Turbo (or Claude) does it, or to blam out task-specific Python scripts as quickly as GPT-4 with Code Interpreter does it.
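To put rough numbers on the novel example (every figure below is an assumption I’ve picked, not a measurement):

```python
# Back-of-envelope check on the "hidden human" theory.
# All figures are assumptions, chosen to be generous to the human.

PAGES = 200
WORDS_PER_PAGE = 300          # assumed average for a novel
HUMAN_WPM = 250               # assumed skilled-reader pace
MODEL_RESPONSE_SECONDS = 30   # assumed; real latency varies, but it's this order

words = PAGES * WORDS_PER_PAGE            # 60,000 words
human_hours = words / HUMAN_WPM / 60      # ~4 hours just to read the book once
print(f"human: ~{human_hours:.1f} h to read; model: ~{MODEL_RESPONSE_SECONDS} s to answer")
```

Roughly four hours of reading versus half a minute of latency - whatever is behind the curtain, it isn’t someone speed-reading the novel on demand.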
I’m all for making fun of promptfans and robot cultists, but I also don’t think these systems are the useless toys they were a few years ago.
How much of this is Sutskever’s work? I don’t know. But @GorillasAreForEating was talking about OpenAI, not just him.
(if this is violating the debate rule, my apologies.)
It’s excellent at what it does, which is create immense reams of spam, make the internet worse in profitable ways, and generate at scale barely sufficient excuses to lay off workers. Any other use case, as far as I’ve seen, remains firmly at the toy level.
But @GorillasAreForEating was talking about OpenAI, not just him.
Taking a step back… this is far removed from the point of origin: @Hanabie claims Sutskever specifically is “allowed to be weird” because he’s a genius. If we move the goalposts back to where they started, it becomes clear it’s not accurate to categorise the pushback as “OpenAI has no technical accomplishments”.
I ask that you continue to mock rationalists who style themselves the High Poobah of Shape Rotators, chanting about turning the spam vortex into a machine God, and also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ. Even if the spam vortex is impressive on a technical level!
I suppose the goalpost shifting is my fault: the original comment was about Sutskever, but I shifted to talking about OpenAI in general, in part because I don’t really know to what extent Sutskever is individually responsible for OpenAI’s tech.
also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ.
I think people are missing the irony in that comment.
Guilty as charged: I missed the irony in it.
(I’m the sort of person, unfortunately, who often misses irony.)
I’m still not convinced Hanabie was being ironic, but if so, missing the satire is a core tradition of Sneer Club that I am keeping alive for future generations.
I think there’s a non-ironic element too. Sutskever can be both genuinely smart and a weird cultist; just because someone is smart in one domain doesn’t mean they aren’t immensely foolish in others.
It’s excellent at what it does, which is create immense reams of spam, make the internet worse in profitable ways, and generate at scale barely sufficient excuses to lay off workers. Any other use case, as far as I’ve seen, remains firmly at the toy level.
I agree! What I meant about not being very good at what it does is that it writes poetry - but it’s bad poetry. It generates code - but it’s full of bugs. It answers questions about what to feed a pet bird - but its answer is as likely as not to kill your poor non-stochastic parrot. This, obviously, is exactly what you need for a limitless spam machine. Alan Blackwell - among many others - has pointed out that LLMs are best viewed as automated bullshit generators. But the implications of a large-scale bullshit generator are exactly what you describe: it can flood the remainder of the useful internet with crap, and be used as an excuse to displace labour (the latter being because while not all jobs are “bullshit jobs”, a lot of jobs involve a number of bullshit tasks).
I ask that you continue to mock rationalists who style themselves the High Poobah of Shape Rotators, chanting about turning the spam vortex into a machine God, and also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ.
Obviously.
I’ve said this before: I’m not at all worried about the robot cultists creating a machine god (or screwing up and accidentally creating a machine satan instead), I’m worried about the collateral damage from billions of corporate dollars propping up labs full of robot cultists who think they’re creating machine gods. And unfortunately, GPT and its ilk have upped the ante on that collateral damage compared to when the cultists were just sitting around making DOTA-playing bots.
but… it’s not general at all. it just predicts text.
Sure. What I mean by “generality” is that it can be used for many substantially different tasks - it turns out that there are many tasks that can be approached (though - in this case - mostly pretty poorly) just by predicting text. I don’t mean it in the sense of “general intelligence”, which I don’t know how to meaningfully define (and I’m skeptical it even constitutes a meaningful concept).
In my opinion, this ultimately says more about the role of text in our society than it does about the “AI” itself, though. If a lot of the interaction between humans and various social and technical systems is done using text, then there will be many things a text-predictor can do.
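A toy sketch of what I mean by “the task lives in the prompt” - `complete` below is a made-up stand-in for whatever text-prediction API you like, not a real library call:

```python
# One operation - "predict plausible next text" - dressed up as many tasks.

def complete(prompt: str) -> str:
    # Stand-in for a real model call; returns a placeholder so the sketch runs.
    return f"<model continuation of: {prompt.splitlines()[0]!r}>"

tasks = {
    "poetry":    "Write a haiku about autumn rain:\n",
    "code":      "# A Python function that reverses a string:\n",
    "zoology":   "Q: What do axolotls eat?\nA:",
    "rewriting": "Rewrite in pirate speak: 'The meeting is postponed.'\n",
}

# Nothing task-specific lives in the system itself: the "task" is entirely
# in the prompt, i.e. in the textual conventions of the society that wrote it.
for name, prompt in tasks.items():
    print(f"{name}: {complete(prompt)}")
```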
Very well said, thank you.