At various points, on Twitter, Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.” …
When he’s not tweeting about e/acc, Verdon runs Extropic, which he started in 2022. Some of his startup capital came from a side NFT business, which he started while still working at Google’s moonshot lab X. The project began as an April Fools joke, but when it started making real money, he kept going: “It’s like it was meta-ironic and then became post-ironic.” …
On Twitter, Jezos described the company as an “AI Manhattan Project” and once quipped, “If you knew what I was building, you’d try to ban it.”
even more horrifying — they see culture as a system of equations they can use AI to generate solutions for, and the correct set of solutions will give them absolute control over culture. they apply this to all aspects of society. these assholes didn’t understand hitchhiker’s guide to the galaxy or any of the other sci fi they cribbed these ideas from, and it shows
The ultimate STEMlord misunderstanding of culture: something absolutely rife in the Silicon Valley tech-sphere.
These dudes wouldn’t recognize culture if it unsafed its Browning and shot them in the kneecaps.
Now I’m wondering, does the Ma Deuce have a safety? This is important information if I ever have to defend my atoll from the e/acc smoker attack
Apparently not; some soldiers appear to have bodged their own safeties by doing things like jamming an expended case underneath the trigger.
It’s like pickup artistry on a societal scale.
It really does illustrate the way they see culture not as, like, a beautiful evolving dynamic system that makes life worth living, but instead as a stupid game to be won or a nuisance getting in the way of their world domination efforts
remember that Yudkowsky’s CEV (Coherent Extrapolated Volition) idea was literally to analytically solve ethics
In an essay that somehow manages to offhandedly mention both evolutionary psychology and hentai anime in the same paragraph.
It’s like when he wore a fedora and started talking about 4chan greentexts in his first major interview. He just cannot help himself.
P.S. The New York Times recently listed “internet philosopher” Eliezer Yudkowsky as one of the major figures in the modern AI movement; this is the picture they chose to use.
You may not like it, but this is what peak rationality looks like.
the cookie cutter glasses are load bearing
That hat could only work with New Year’s Eve glasses from a year with “00”.
I read that & it made me actively dumber.
I already knew that EY’s shtick involves trying to invent frameworks from first principles, ignorant that entire fields of study exist (& there is SO MUCH of that here) but… “Once upon a time, back at the age of sixteen, I made one hell of a huge philosophical blunder about how morality worked… Youthful foolishness I don’t intend to ever repeat”
Christ, that’s just normal adolescent development! Teenagers try to build a moral framework.
He would be less cringe if he opined on AIs using Lawrence Kohlberg as a moral framework, and it’s always cringe to use Lawrence Kohlberg.
I mean say what you want about the tenets of outdated developmental psychologists, Dude, at least it’s a school of thought.