What do 81k people want from AI?
Fascinating new report from Anthropic captures their hopes, fears and concerns.
Interesting to see such stark differences between how people responded in developing countries vs high-income countries. ⤵️ www.anthropic.com/fe...
Posts by Michael Eddy
AI has limits. It may shift shortlists. Distinguishing bias vs. improvement is complex & we have more work to do. Our study is small & shouldn’t be overinterpreted.
But if we care about how scarce resources get allocated, we should test these tools openly
Learn more 👇
Responsible AI use mattered:
• Applicant transparency + opt-out
• Data security & privacy protections
• Clear human accountability
• Human-in-the-loop review
• Explicit learning agenda
AI supported decisions. Humans made them.
In addition to backtesting, we also ran this in parallel with our live 2025–26 cycle (in a sandbox).
Decision-makers only saw AI reports after our usual process was done, allowing us to test whether AI added information, not just speed.
It did.
It was also much faster and cheaper.
~Two days; 10x cheaper than our standard first round.
And we surfaced expert-level insights about 4 months earlier than usual.
We backtested on four years of data (209 applications; 67 shortlisted; 28 funded). On our strongest signal of quality—who ultimately got funded—AI performed on par with generalist reviewer rankings. It surfaced somewhat different shortlists, but matched humans on funding decisions.
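One simple way to operationalize that kind of backtest is precision-at-k: how many of a ranking's top-k picks were ultimately funded. A minimal sketch, assuming a ranked list and a funded set (the proposal IDs and the exact metric here are illustrative, not the actual study's):

```python
def precision_at_k(ranking, funded, k):
    """Fraction of the top-k ranked proposals that were ultimately funded."""
    top_k = set(ranking[:k])
    return len(top_k & set(funded)) / k

# Hypothetical example: an AI-ranked pool vs. the proposals that got funded.
ai_ranking = ["p3", "p1", "p7", "p2", "p5"]
funded = {"p1", "p7", "p9"}
print(round(precision_at_k(ai_ranking, funded, 3), 2))  # 0.67
```

Computing the same score for human reviewer rankings gives a like-for-like comparison on the funding outcome.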
The output wasn’t just a ranked list.
It generated structured, comparative explanations about why proposals were stronger or weaker relative to others.
That kind of pool-wide comparison is hard to do consistently with humans alone.
Then we changed the core question.
Instead of asking, “Is this proposal good?”
We asked, “Which of these two is stronger, and why?”
That led to 1,000+ head-to-head matchups across the full pool. Think March Madness but on steroids.
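In code, that tournament-style approach boils down to aggregating head-to-head results into a pool-wide ranking. A minimal sketch (not the actual system; the win-rate scoring and proposal names are hypothetical):

```python
from collections import defaultdict
from itertools import combinations

def rank_from_matchups(matchups):
    """Aggregate head-to-head results into a pool-wide ranking.

    matchups: list of (winner, loser) pairs, one per comparison.
    Returns proposals sorted by win rate, best first.
    """
    wins = defaultdict(int)
    games = defaultdict(int)
    for winner, loser in matchups:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    return sorted(games, key=lambda p: wins[p] / games[p], reverse=True)

# Hypothetical round robin: every pair in a 4-proposal pool compared once,
# with the judge always preferring the alphabetically earlier proposal.
pool = ["A", "B", "C", "D"]
matchups = [(a, b) for a, b in combinations(pool, 2)]
print(rank_from_matchups(matchups))  # ['A', 'B', 'C', 'D']
```

Asking "which of these two is stronger, and why?" also leaves an audit trail: each matchup comes with a rationale, not just a score.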
Instead of “chatbot vibes,” we built a structured multi-agent system w/ Claude Code.
Not just replicating what reviewers do, but extending it: literature reviews, landscape scans, backgrounders & structured rubric scoring, previously not possible for early-stage proposals.
At Stanford Impact Labs, we asked a straightforward question:
Can AI help us accelerate science funding without lowering standards or cutting humans out of the loop?
We put it to the test.
Funders, public procurement, and journal editors all face similar challenges: how to allocate scarce resources most effectively
Most applications are rejected. Decisions can take months. We've all been there.
That delay is a huge invisible tax on science & impact
Could AI reduce that tax?
"Science Funding Goes Beyond the Universities" by @calebwatney.bsky.social, IFP
www.wsj.com/opinion/scie...
What does it take to build a scientific field?
New @RenPhilanthropy playbook out with lessons learned from successful cases below 👇
www.renaissancephila...
We all need stronger BS filters. As signal gets harder to distinguish from noise, real rigor becomes rarer—and more valuable.
3/3
In the conferences I'm attending & across social media, I'm seeing more claims that sound good, but collapse under basic scrutiny. The framing is slick; the rigor is missing.
Bad ideas have always existed. What’s new is how effortlessly they can now masquerade as insight.
2/3
In the age of AI, sounding right is easier than being right.
It’s never been simpler to spin ideas that 𝘧𝘦𝘦𝘭 true—confident tone, polished narrative. AI can turn half-baked thoughts into viral TED-talk soundbites.
But rhetorical sparkle isn’t substance.
1/3
Dunning-Kruger effect in practice:
In 53k forecasts across 100 projects, people who are more confident are actually much less accurate.
New 📄 by @evavivalt.bsky.social & @sdellavi.bsky.social
Check out the new Export Boom Atlas from @GrowthTeams: an interactive map of 80+ export booms from developing economies, showing how countries from Vietnam to Morocco to Costa Rica achieved rapid sector-level growth that created jobs and prosperity. 🌍 https://exportbooms.org
I love it when papers have accompanying websites!
Also, neat to find out that @stanfordimpactlabs.bsky.social had a small role in supporting this work!
📄 & 💻 by @marshallburke.bsky.social & an impressive team!
adaptationatlas.org/...
www.nber.org/papers/...
@ukri.org has announced an £11.5m investment in AI-driven evidence synthesis (METIUS).
Backed by a $126m global alliance including @wellcometrust.bsky.social, the project aims to make research more accessible for policymakers on climate, education, justice and development.
Fascinating paper on a timely question... kudos!!
New research by Pierre Azoulay, Danielle Li, Bhaven Sampat and me.
Earlier this year, the President’s budget proposed a 40% cut to the budget of the NIH. This motivated us to ask: what if the NIH had been 40% smaller?
With a few notable exceptions, I’m struck by how few funders are openly sharing what they’re learning from AI in decision-making.
What am I missing?
We’ve embedded this directly in our latest RFP. (screenshot 👇)
This is about experimentation, transparency, and learning—together with the research & funding community.
Here’s what “responsible” means to us:
➡️ AI augments—not replaces—human decision-making
➡️ New workflows that weren’t previously possible
➡️ Robust safeguards
➡️ A learning agenda to test key claims
That’s why @StanfordImpact is piloting responsible AI use in our funding processes.
Our goal: accelerate impact-focused science R&D.
✅ Faster decisions
✅ Lower costs
✅ Reduced burden on applicants
Right now:
⚠️ Many funders ban AI outright.
🙈 Others ignore it.
🤖 And slick AI vendors make bold, untested claims.
None of this felt right to me.
Ever applied for funding… and then had to wait months for a response? ⏳
What if funders could move faster and make better decisions—so applicants can secure funding & get straight to work?
AI could help. But the current landscape is messy. 🧵