was this said in sarcasm or support?
Yes.
wat
He’s the one cultist who actually accomplished something
Exactly. I wonder if people really didn’t get what I was saying 😄
I couldn’t tell what you were saying. Answering “yes” to the question of whether you were writing “in sarcasm or support” is not at all informative.
It means it was meant to be sarcastic and supportive. I thought answering “yes” to an “or” question is familiar to most people on the internet these days.
tbh, one thing I’m tired of from the internet is exactly the post-ironic passing-off of “just kidding but also sincere” as wit. It’s actually quite old (mid-to-late 2000s) and frequently a vehicle for the most vile views on the internet. And it’s not clever.
I hate to say it, but even sneerclub can get a bit biased and tribal sometimes. He who fights with monsters and so on
I suspect watching the rationalists as they bloviate and hype themselves up and repeatedly fail for years on end has lulled people into thinking that they can’t do anything right, but I think that’s clearly not the case anymore. Despite all the cringe and questionable ethics, OpenAI has made a real and important accomplishment.
They’re in the big leagues now. We should not underestimate the enemy.
(this gets dangerously close to the debate rule, so I’ll leave it to mods to draw the line in reply to this)
What, specifically, are you referencing as the accomplishment? Money? Access to power? Because while I’d agree on those things, it still isn’t really all that notable - that’s been the SFBA dynamic for years. It is why the internet was for years so full of utterly worthless companies, whose only claim on our awareness was built on being able to spend their way there.
For OpenAI, the money: wasn’t free, still short, already problematic. I’ve seen enough of those going around, from the inside, to say fairly comfortably that I suspect the rosy veneer they present is about as thorough as an old-school film prop front.
The power? Well, leveraged and lent power, enabled by specific people… and, arguably, now curtailed - because he tried to assert his own views against that power. Because he tried to bite the hand that feeds, and he nearly had all his toys taken away.
A team? Eh, lots of people have built teams. A company? Same. Something of a product? Same. None of these elevate him to genius.
Do I think the man is, in some manner, intelligent? Yes. In some particular domains he’s arguably one of the luminaries of his field (or, in a far darker possibility, an extremely good thief). I might be able to accept “genius” for this latter definition under some measure of proof, if that were the substantive point of argument. But: it is not.
There is no proof that anything OpenAI has produced is anywhere near their claims. Every visible aspect is grifty, with notable boasts that again and again (so far) fall flat (arguably because these boasts are made out of self-serving interest).
As to “underestimating the enemy”: I hope the above demonstrates to you that I do not, and think of this fairly comprehensively. Which is why I can tell you this quite certainly: mocking the promptfans and calling them names for their extremely overcomplicated mechanical turk remains one of the best strategies available for handling these ego-fucking buffoon nerds and all their little fans
GPT-4 is a technical accomplishment. I think it’s ridiculous to even entertain the notion that it might be getting “sentient”, and I’m not at all convinced that there is any way from advanced autocomplete to the superintelligence that will kill all the infidels and upload the true believers into digital heaven.
You could (correctly) point out that all the heavy lifting wrt. developing the transformer architecture had already been done by Google, and OpenAI’s big innovation was “everything on the internet is the training set” (meaning that it’s going to be very difficult to make a test set that isn’t full of things that look a lot like the training set - virtually guaranteeing impressive performance on human exam questions) and securing enough funding to make that feasible. I’ve said elsewhere that LLMs are as much (or more) an accomplishment in Big Data as they are one of AI … but at this point in time, those two fields are largely one and the same, anyway.
Prior to LLMs (and specifically OpenAI’s large commercial models), we didn’t have a software system that could write poetry, generate code, explain code, answer zoology questions, rewrite arbitrary texts in arbitrary other styles, invent science fiction scenarios, explore alternate history, simulate Linux terminals of fictional people, and play chess. It’s not very good at most of what it does (it doesn’t write good poetry, a lot of its code is buggy, it provides lethal dietary advice for birds, its fiction is formulaic, etc.) - but the sheer generality of the system, and the fact that it can be interacted with using natural language, are things we didn’t have before.
There is certainly some mechanical turking going on behind the scenes (“viral GPT fails” tend to get prodded out of it very quickly!), but it can’t all be mechanical turking - it would not be humanly possible for a human being to read and answer arbitrary questions about a 200-page novel as quickly as GPT-4-Turbo (or Claude) does it, or to blam out task-specific Python scripts as quickly as GPT-4 with Code Interpreter does it.
I’m all for making fun of promptfans and robot cultists, but I also don’t think these systems are the useless toys they were a few years ago.
How much of this is Sutskever’s work? I don’t know. But @GorillasAreForEating was talking about OpenAI, not just him.
(if this is violating the debate rule, my apologies.)
It’s excellent at what it does, which is create immense reams of spam, make the internet worse in profitable ways, and generate at scale barely sufficient excuses to lay off workers. Any other use case, as far as I’ve seen, remains firmly at the toy level.
Taking a step back… this is far removed from the point of origin: @Hanabie claims Sutskever specifically is “allowed to be weird” because he’s a genius. If we move the goalposts back to where they started, it becomes clear it’s not accurate to categorise the pushback as “OpenAI has no technical accomplishments”.
I ask that you continue to mock rationalists who style themselves the High Poobah of Shape Rotators, chanting about turning the spam vortex into a machine God, and also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ. Even if the spam vortex is impressive on a technical level!
but… it’s not general at all. it just predicts text.
Very well said, thank you.
The accomplishment I’m referring to is creating GPT/DALL-E. Yes, it’s overhyped, unreliable, arguably unethical and probably financially unsustainable, but when I do my best to ignore the narratives and drama surrounding it and just try out the damn thing for myself I find that I’m still impressed with it as a technical feat. At the very, very least I think it’s a plausible competitor to google translate for the languages I’ve tried, and I have to admit I’ve found it to be actually useful when writing regular expressions and a few other minor programming tasks.
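To give a concrete sense of the kind of minor programming task I mean (a hypothetical example of my own, not an actual model output): asking for a pattern that pulls an ISO-style date out of a log line gets you something like this.

```python
import re

# Hypothetical example: extract a YYYY-MM-DD date from a log line.
pattern = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

line = "2023-11-17 12:04:55 ERROR something broke"
match = pattern.search(line)
if match:
    year, month, day = match.groups()
    print(year, month, day)  # prints: 2023 11 17
```

Trivial, sure - but it’s exactly the sort of fiddly boilerplate where asking in plain English and sanity-checking the answer is faster than writing it cold.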
In all my years of sneering at Yud and his minions I didn’t think their fascination with AI would amount to anything more than verbose blogposts and self-published research papers. I simply did not expect that the rationalists would build an actual, usable AI instead of merely talking about hypothetical AIs and pocketing the donor money, and it is in this context that I say I underestimated the enemy.
With regards to “mocking the promptfans and calling them names”: I do think that ridicule can be a powerful weapon, but I don’t think it will work well if we overestimate the actual shortcomings of the technology. And frankly sneerclub as it exists today is more about entertainment than actually serving as a counter to the rationalist movement.
The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.
When I was in university a very long time ago, our AI professor went with a definition I’ve kept with me ever since: an “AI system” is a system performing a task at the very edge of what we’d thought computers were capable of until then. Chess-playing and pathfinding used to be “AI”, now they’re just “algorithms”. At the moment, natural language processing and image generation are “AI”. If we take a more restrictive definition and define “AI” as “machine learning” (tossing out nearly the entire field from 1960 to about 2000), then we’ve had very sophisticated AI systems for a decade and a half - the scariest examples being the recommender systems deployed by the consumer surveillance industry. IBM Watson (remember that very brief hype cycle?) was winning Jeopardy contests and providing medical diagnoses in the early 2010s, and image classifiers progressed from fun parlor tricks to horrific surveillance technology.
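To illustrate the “AI becomes algorithm” point: grid pathfinding via breadth-first search was once a textbook “AI” problem, and today it reads as a completely routine algorithm. A minimal sketch (my own toy example, not tied to anything specific in this thread):

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Breadth-first search on a grid: '.' is free, '#' is a wall.
    Returns the number of steps from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        # Expand the four orthogonal neighbours.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

grid = ["..#.",
        "..#.",
        "...."]
print(shortest_path_length(grid, (0, 0), (0, 3)))  # prints: 7
```

In the 1970s a search like this was research; now it’s an undergrad exercise. That’s the treadmill the definition captures.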
The big difference, and what makes it feel very different now, is in my opinion largely that GPT much more closely matches our cultural mythology of what an “AI” is: A system you can converse with in natural language, just like HAL-9000 or the computers from Star Trek. But using these systems for a while pretty quickly reveals that they’re not quite what they look like: They’re not digital minds with sophisticated world models, they’re text generators. It turns out, however, that quite a lot of economically useful work can be wrung out of “good enough” text generators (which is perhaps less surprising if you consider how much any human society relies on storytelling and juggling around socially useful fictions). This is of course why capital is so interested and why enormous sums of money are flowing in: GPT is shaped as a universal intellectual-labour devaluator. I bet Satya Nadella is much more interested in “mass layoff as a service” than he is in fantasies about Skynet.
Second, unlike earlier hype cycles, OpenAI made GPT-3.5 onwards available to the general public with a friendly UI. This time, it’s not just a bunch of Silicon Valley weirdos and other nerds interacting with the tech - it’s your boss, your mother, your colleagues. We’ve all been primed by the aforementioned cultural mythology, so now everybody is looking at something that resembles a predecessor of HAL-9000, Star Trek computers and Skynet - and you have otherwise normal people worrying about things that were previously only the domain of those same Silicon Valley weirdos.
Roko’s Basilisk is as ridiculous a concept as it ever was, though.