...sure, the types of intellectual misbehavior from scientists were completely different, and much more defensible. But clearly the scientists started pronouncing complete confidence that lab leak was bullshit *before* any of the (weak) evidence that, it's now argued, settled the question emerged over the past 5 years.
Posts by David Manheim
This is a stupid argument. Yes, they were studying the thing that had the highest risk of zoonotic spillover (which means *not* lab leak - pmc.ncbi.nlm.nih.gov/articles/PMC... ) because that's obviously the thing to be studying, and thus it was what tons of people were researching.
Have you tried Codex desktop or Claude desktop yet?
I agree that zoonotic origin is much more likely. That wasn't what I was arguing about.
Are you claiming that the "actual data" we got since 2020 was surprisingly strong, given what we already knew then? Because the math here is simple, and says that we've gotten very little marginal information.
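The odds form of Bayes' rule makes "marginal information" quantifiable: evidence can be measured in bits (the log of the likelihood ratio), and nudging an already-confident belief upward takes far fewer bits than forming that belief did. A minimal sketch; the specific probabilities are illustrative, not from the thread:

```python
import math

def evidence_bits(prior, posterior):
    """Bits of evidence (log2 of the likelihood ratio) required to move
    a belief from `prior` to `posterior` under Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return math.log2(posterior_odds / prior_odds)

# Going from a coin-flip to fairly confident takes 2 full bits...
print(round(evidence_bits(0.5, 0.8), 2))  # 2.0
# ...but confirming an already-confident 80% up to 90% takes barely 1.
print(round(evidence_bits(0.8, 0.9), 2))  # 1.17
```

On this accounting, evidence that merely confirms what was already ~80% expected in 2020 carries little marginal information, which is the shape of the argument above.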
One problem with making predictions in public is that when I say I'm 75% sure of something, and someone else responds that they are 99% sure I'm wrong, they are using numbers as rhetoric, and I'm trying to make sure that I'd be right approximately 3 out of 4 times.
The epidemiologists on the other side of the debate never looked at the types of evidence that are needed to consider the question. But human-origin spread events are common. Just look at the many SARS-CoV accidents, or the obviously human-caused 1977 flu pandemic.
I agree that the loudest lab leak promoters were idiots.
But based on their public claims, the German intelligence agency, the CIA (under Biden), and many others clearly have evidence that WIV was working specifically on this. And we knew for years before that there was work on coronaviruses in the BSL-2 labs.
I said that natural origin is more likely. But the evidence needed to move from believing something is 80% likely to "scientific evidence" at the p<0.05 level, i.e. 95% likely, is larger than the evidence needed to move from 20% to 80%. They don't have that much evidence, much less enough for their claimed confidence levels.
Empirical data can't show certain things. You need induction to show something holds for all integers, and similarly, you need something other than empirical tests of current models to show that supposed alignment will scale indefinitely with increased capability.
This is the whole alignment problem. All of it, encapsulated: low-probability behaviors by the model, "just make your training environment = your test environment," "what's a capability vs. a propensity," non-adversarial generalization, Goodhart's law. The whole damn thing.
It's not "don't you know who I am," it's "why are you being lazy and making me explain when you are literally in front of a device capable of telling you for yourself."
I have no expectation that you should know who I am, but if nothing else, my bio links to my org's website with a list of publications.
But I said that zoonotic origins are very likely!
What I disagree with, and keep saying is unsupportable, is that the "new" evidence that came out since 2021 gives us significantly more than what we already knew. It confirmed things, but it didn't tell us things we didn't already expect to be true.
I agree that there were crazy theories being promoted. But finding crazy people who disagree with you isn't proof that you're right, just like the documented conspiracies by natural origins folks to suppress lab leak theories doesn't act as evidence there was a lab leak.
bsky.app/profile/davi...
You could check my publications in PLOS Computational Biology, Health Security, Clinical Infectious Diseases, and Risk Analysis (I've been working on related issues for a decade), or you could snarkily assume that if I disagree, I must not know what I'm talking about.
Your call.
...I didn't say otherwise. The evidence makes it more likely than not, but not overwhelmingly likely enough to rule out the other plausible explanation.
I'm just pointing out that the expert overconfidence here is typical, and as usual, not well justified - not that it's likely to be wrong.
The reason for a BSL-4 lab is to study the things that could cause dangerous disease, usually the ones present in the nearby environment.
"WIV contained the exact scenario for what produces the greatest chance of spillover and they called that a crazy conspiracy"
So did EVERY OTHER BSL-4 LAB. You need different and surprising additional evidence: something that wouldn't be true in a world where it was a natural event.
LLMs are missing an operating system!
Great post by William Waites on the SoTA (Society for Technological Advancement) blog, laying out the argument for what the early history of computing tells us about current and future AI system design. sotaletters.substack.com/p/the-telety...
Hoskin's original quote was "Every measure which becomes a target becomes a bad measure"
And if you care, see my original tweet thread with the sourcing here: x.com/davidmanheim...
Also, per Stigler's law of eponymy, he neither came up with the idea: rss.onlinelibrary.wiley.com/doi/full/10....
Nor is the common phrasing his own! (And even the quote from Strathern wasn't originally her phrasing, as she cited Hoskin!)
Comment from a math professor on the quality of the latest proofs.
The claims of hype are often correct - unlike the accompanying assertion that the models aren't vastly improving and can't ever match humans. Critics seem blinded by their commitment to complaining about snake oil, AI hallucinations, missing world models, or lack of symbolic reasoning.
...because we always find natural reservoirs for precursor viruses, because animal virus distributions are so well mapped?
We barely manage to track anything but the most contagious human viruses. Absence of evidence isn't (strong) evidence of absence!
Sure, you'll suggest higher taxes on those poor billionaires to avoid donating your fair share!
Claude refuses to help invent conspiracy theories.
Then, after calling Claude a "glorified and electrified rock," the user complains that it's "being emotionally manipulative" and then claims that LLMs will be "used to fine tune human thought as it globalizes our will and ideals."
I talked about the poker example in my older article about multiparty dynamics and the impact on safety of AI systems, but didn't really explore it from this angle. I should probably revisit it.
If the optimal move in a game like poker is a combination of randomizing strategy and computing a combinatorial explosion of possible states and counter strategies, it seems raw "intelligence" scales somewhat poorly.
So I guess some of this also boils down to luck vs. skill in a given domain.
Probably a key question is whether hardness of problems in competitive domains scales so fast that even polynomially increasing marginal computational power on one side doesn't lead to significant advantage.
It also relates to the aleatory baseline, and how much optimal strategy is randomizing.
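One way to see the "aleatory baseline" point: in a zero-sum game whose equilibrium is fully mixed, playing the minimax randomization neutralizes any amount of opponent computation. A toy sketch using matching pennies (my illustrative example, not from the posts above):

```python
def opponent_ev(p, q):
    """Expected value for the mismatcher in matching pennies, where the
    matcher plays heads with probability p and the opponent with q.
    The opponent scores +1 on a mismatch and -1 on a match."""
    match = p * q + (1 - p) * (1 - q)  # probability the coins match
    return (1 - match) - match

# Against the minimax 50/50 strategy, every opponent strategy earns 0;
# no extra "intelligence" in choosing q helps.
for q in (0.0, 0.3, 1.0):
    print(opponent_ev(0.5, q))  # 0.0 each time

# But any deviation from 50/50 is exploitable by a smarter opponent:
print(opponent_ev(0.2, 1.0))  # 0.6
```

To the extent a domain's optimal play is randomization like this, raw computational advantage buys nothing past the equilibrium; skill only matters against opponents who deviate from it.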
Sam Altman said he regrets calling the New Yorker profile "incendiary," but never edited his blog post. Now the SF DA is using the same term in a call for deescalation that implicitly frames journalism as a public safety threat.
x.com/GerritD/sta...
...after very clearly and obviously coming around to supporting a candidate specifically because everyone cared enough about getting Orban out that they were willing to be reasonable in consolidating support behind whichever person could actually make that happen.