I did fake Bayesian math with some plausible numbers, and found that if I started out believing there was a 20% per decade chance of a lab leak pandemic, then if COVID was proven to be a lab leak, I should update to 27.5%, and if COVID was proven not to be a lab leak, I should stay around 19-20%
This is so confusing: why bother doing “fake” math? How does he justify these numbers? Let’s look at the footnote:
Assume that before COVID, you were considering two theories:
- Lab Leaks Common: There is a 33% chance of a lab-leak-caused pandemic per decade.
- Lab Leaks Rare: There is a 10% chance of a lab-leak-caused pandemic per decade.
And suppose before COVID you were 50-50 about which of these were true. If your first decade of observations includes a lab-leak-caused pandemic, you should update your probability over theories to 76-24, which changes your overall probability of pandemic per decade from 21% to 27.5%.
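To be fair, the arithmetic inside that footnote does at least follow from the numbers he picked. Here's a quick sketch of the update he's describing (my own reconstruction for checking, not anything he published):

```python
# Reproducing the footnote's arithmetic (my reconstruction, not Scott's code).
# Two hypotheses about the per-decade chance of a lab-leak pandemic:
p_common = 0.33   # "Lab Leaks Common"
p_rare = 0.10     # "Lab Leaks Rare"
prior = 0.5       # 50-50 between the two theories

# Prior predictive: overall chance of a lab-leak pandemic per decade
prior_pandemic = prior * p_common + (1 - prior) * p_rare
print(prior_pandemic)        # 0.215, i.e. the ~21%

# Observe one decade containing a lab-leak pandemic; apply Bayes' rule
posterior_common = (prior * p_common) / prior_pandemic
print(posterior_common)      # ~0.77, i.e. roughly the 76-24 he quotes

# Posterior predictive for the next decade
posterior_pandemic = posterior_common * p_common + (1 - posterior_common) * p_rare
print(posterior_pandemic)    # ~0.276, i.e. the 27.5%
```

So the update mechanics are fine as far as they go; the real question is where the 33% and 10% came from in the first place.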
Oh, he doesn’t, he just made the numbers up! “I don’t have actual evidence to support my claims, so I’ll just make up data and call myself a ‘good Bayesian’ to look smart.” Seriously, how could a reasonable person have been expected to be concerned about lab leaks before COVID? It simply wasn’t something in the public consciousness. This looks like some serious hindsight bias to me.
I don’t entirely accept this argument - I think whether or not it was a lab leak matters in order to convince stupid people, who don’t know how to use probabilities and don’t believe anything can go wrong until it’s gone wrong before. But in a world without stupid people, no, it wouldn’t matter.
Ah, no need to make the numbers make sense, because stupid people wouldn't understand the argument anyway. Quite literally: “To be fair, you have to have a really high IQ to understand my shitty blog posts. The Bayesian math is extremely subtle…” And convince stupid people of what, exactly? He doesn't say, so what was the point of all the fake probabilities? What a prick.
Scott is saying essentially that “one data point doesn't shift the overall estimate that much” (usually true)… “so therefore you don't need to change your opinions when something happens,” which is just so profoundly stupid. Just so wrong on so many levels. It's not even correct Bayesianism!
??? Motherfucker, have you heard of the paradox of the heap? What about all that other shit you just said?
What is this really about, Scott???
OH ok. I see now. I mean I’ve always seen, really, that you and your friends work really hard to come up with ad hoc mental models to excuse every bit of wrongdoing that pops up in any of the communities you’re in.
Again, this isn’t correct Bayesian updating. The formula is the formula. Biasing against recency is not in it. And that’s just within Bayesian reasoning!
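To spell that out: in a correct Bayesian update, the order the evidence arrives in doesn't change the posterior at all (assuming conditionally independent observations), so there's nowhere for a "discount the recent stuff" factor to hide. A throwaway sketch with made-up likelihoods:

```python
# Minimal sketch: with conditionally independent observations, Bayes' rule
# gives the same posterior no matter what order the evidence arrives in,
# so "biasing against recency" simply isn't part of the formula.
from functools import reduce

def update(prior, likelihood_h, likelihood_not_h):
    """One Bayesian update of P(H) on a single observation."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# Hypothetical likelihoods P(obs|H), P(obs|not H) for three observations
obs = [(0.8, 0.3), (0.2, 0.6), (0.9, 0.5)]

posterior_forward = reduce(lambda p, o: update(p, *o), obs, 0.5)
posterior_reverse = reduce(lambda p, o: update(p, *o), reversed(obs), 0.5)

# Same answer (up to float rounding): evidence order doesn't matter.
print(posterior_forward, posterior_reverse)
```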
YEAH BECAUSE IT’S A PERFECT WORLD YOU DINGUS.
Complete sidenote, but I hate how effective altruism has gone from “charities should spend more money on their charity and not on executive bonuses, here are the ones that don't actually help anyone” to “I believe I will save infinity humans by colonizing Mars, so you can just starve to death today”.
I suspect a large portion of people in EA leadership were already on the latter train and posturing as the former. The former is actually kinda problematic in its own way! If a problem were solvable purely by throwing money at it, then what would be the need for a charity at all?