I’d say it’s half-serious satire; she did end up dating a “high net-worth individual”, after all.
If Caroline Ellison hadn’t actually been in a relationship with a “high net-worth individual” I would have said it was straightforward satire, but given the context I think she’s using a mask of irony to pretend she isn’t revealing her true self.
Her words may be satirical, but her actions were more like “this but unironically”.
Here is the document that mentions EA as a risk factor, with some quotes below:
Fourth, the defendant may feel compelled to do this fraud again, or a version of it, based on his use of idiosyncratic, and ultimately for him pernicious, beliefs around altruism, utilitarianism, and expected value to place himself outside of the bounds of the law that apply to others, and to justify unlawful, selfish, and harmful conduct. Time and time again the defendant has expressed that his preferred path is the one that maximizes his version of societal value, even if it imposes substantial short-term harm or carries substantial risks to others… In this case, the defendant’s professed philosophy has served to rationalize a dangerous brand of megalomania—one where the defendant is convinced that he is above the law and the rules of the road that apply to everyone else, who he necessarily deems inferior in brainpower, skill, and analytical reasoning.
SEO will pillage the commons.
My personal conspiracy theory (not sure if I actually believe this yet):
The idea that people would use generative AI to make SEO easier (and thus make search engine results worse) was not an unfortunate side effect of generative AI, it was the entire purpose. It’s no coincidence that OpenAI teamed up with Google’s biggest rival in search engines; we’re now seeing an arms race between tech giants using spambot generators to overwhelm the enemy’s filters.
The decision to make ChatGPT public was not about concern for openness (if it were, they would have made the earlier versions of GPT public too); it’s more that they had a business partner lined up and Google search had become enshittified enough that they thought they could pull off a successful “disruption”.
I did not. Got any details?
Also FWIW I discovered this yesterday: https://archive.ph/SFCwS
No idea if it’s true, but even if so I don’t think it would exonerate him (though it would put Aella in a worse light)
Yudkowsky is pretty open about being a sexual sadist
Here’s the old sneerclub thread about the leaked emails linking Scott Alexander to the far right
Scott Alexander’s review of Seeing Like A State is here: https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/
The review is mostly positive, but then it also has passages like this:
Well, for one thing, [James C.] Scott basically admits to stacking the dice against High Modernism and legibility. He admits that the organic livable cities of old had life expectancies in the forties because nobody got any light or fresh air and they were all packed together with no sewers and so everyone just died of cholera. He admits that at some point agricultural productivity multiplied by like a thousand times and the Green Revolution saved millions of lives and all that, and probably that has something to do with scientific farming methods and rectangular grids. He admits that it’s pretty convenient having a unit of measurement that local lords can’t change whenever they feel like it. Even modern timber farms seem pretty successful. After all those admissions, it’s kind of hard to see what’s left of his case.
and
Professors of social science think [check cashing] shops are evil because they charge the poor higher rates, so they should be regulated away so that poor people don’t foolishly shoot themselves in the foot by going to them. But on closer inspection, they offer a better deal for the poor than banks do, for complicated reasons that aren’t visible just by comparing the raw numbers. Poor people’s understanding of this seems a lot like the metis that helps them understand local agriculture. And progressives’ desire to shift control to the big banks seems a lot like the High Modernists’ desire to shift everything to a few big farms. Maybe this is a point in favor of something like libertarianism?
Weirdly rationalists also sometimes read this book and take all the wrong lessons from it.
Scott Alexander is a crypto-reactionary, and I think he reviewed it as a way to expose his readers to neoreactionary ideas under the guise of superficial skepticism, in the same manner as the anti-reactionary FAQ. The book’s author might be an anarchist, but a lot of the arguments could easily work in a libertarian context.
And yet the market is said to be “erring” and to have “irrationality” when it disagrees with rationalist ideas. Funny how that works.
the cell’s ribosomes will translate mRNA into a protein. It’s a little bit like an executable file for biology.
Also, because mRNA basically has root level access to your cells, your body doesn’t just shuttle it around and deliver it like the postal service. That would be a major security hazard.
I am not saying pleiotropy doesn’t exist. I’m saying it’s not as big of a deal as most people in the field assume it is.
Genes determine a brain’s architectural prior just as a small amount of Python code determines an ANN’s architectural prior, but the capabilities come only from scaling with compute and data (quantity and quality).
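For what it’s worth, a minimal sketch of what that comment means by an “architectural prior” (the function names and layer sizes here are my own, just for illustration): the code that fixes a network’s architecture really can be a handful of lines, while everything the network can actually *do* lives in the weights, which only become useful through training at scale.

```python
import numpy as np

# The entire "architectural prior" of a tiny 2-layer MLP:
# layer sizes, ReLU nonlinearity, and how the layers compose.
def init_mlp(n_in, n_hidden, n_out, seed=0):
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0.0, 0.1, (n_in, n_hidden)),
        "W2": rng.normal(0.0, 0.1, (n_hidden, n_out)),
    }

def forward(params, x):
    h = np.maximum(0.0, x @ params["W1"])  # ReLU hidden layer
    return h @ params["W2"]

params = init_mlp(4, 16, 2)
y = forward(params, np.ones(4))  # untrained output: essentially noise
```

The point of the analogy is that this prior says nothing about what the trained network will be capable of; that depends entirely on the data and compute poured into fitting the weights.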
When you’re entirely shameless about your Engineer’s Disease
I suppose when talking about science to a popular audience it can be hard not to make generalizations and oversimplifications, and if it’s done poorly, that oversimplification can cross over into plain old inaccuracy (if I were to be charitable to Yud, I would say that this is what happened here).
To wit: even the “K’nex connector with 4 ports” model of carbon doesn’t really explain the bonding of aromatic molecules like benzene or carbon nanotubes; I’ve likewise seen people confidently make the generalization “noble gases don’t react”, apparently unaware of the existence of noble gas compounds.
Big Yud himself responded:
edit: there’s lesswrong thread too: https://www.lesswrong.com/posts/8viKzSrYhb6EFk6wg/why-yudkowsky-is-wrong-about-covalently-bonded-equivalents
evidently I need to pay more attention to the non-sneerclub sections of this site.
He’s been doing interviews on podcasts. The NYT also recently listed “internet philosopher” Eliezer Yudkowsky as one of the key figures of the modern artificial intelligence movement.
Thank you, that link is exactly what I was looking for (and also sated my curiosity about how Yudkowsky got involved with Bostrom and Hanson, I had heard they met on the extropian listserv but I had never seen any proof).
It’s like when he wore a fedora and started talking about 4chan greentexts in his first major interview. He just cannot help himself.
P.S. The New York Times recently listed “internet philosopher” Eliezer Yudkowsky as one of the major figures in the modern AI movement; this is the picture they chose to use.
update: Verdon is now accusing another AI researcher of exposing him: https://twitter.com/GillVerd/status/1730796306535514472
I still find it amusing that Siskind complained about being “doxxed” when he used his real first and middle name.
When they made an alt-right equivalent of Patreon they called it “Hatreon”. This stuff is like a game to them.