What do 81k people want from AI?
Fascinating new report from Anthropic captures their hopes, fears and concerns.
Interesting to see such stark differences between how people responded in developing countries vs high-income countries. ⤵️ www.anthropic.com/fe...
Posts by Michael Eddy
AI has limits. It may shift shortlists. Distinguishing bias vs improvement is complex & we have more work to do. Our study is small & shouldn't be overinterpreted.
But if we care about how scarce resources get allocated, we should test these tools openly.
Learn more 👇
Responsible AI use mattered:
⢠Applicant transparency + opt-out
• Data security & privacy protections
⢠Clear human accountability
⢠Human-in-the-loop review
⢠Explicit learning agenda
AI supported decisions. Humans made them.
In addition to backtesting, we also ran this in parallel with our live 2025–26 cycle (in a sandbox).
Decision-makers only saw AI reports after our usual process was done, allowing us to test whether AI added information, not just speed.
It did.
It was also much faster and cheaper.
~Two days; 10x cheaper than our standard first round.
And we surfaced expert-level insights about 4 months earlier than usual.
We backtested on four years of data (209 applications; 67 shortlisted; 28 funded). On our strongest signal of quality (who ultimately got funded), AI performed on par with generalist reviewer rankings. It surfaced somewhat different shortlists, but matched humans on funding decisions.
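A backtest like this can be framed as a precision-at-k check: of the top-k applications each ranker shortlists, how many were ultimately funded? A minimal sketch with hypothetical application IDs (the thread doesn't publish its actual evaluation code):

```python
def precision_at_k(ranking, funded, k):
    """Fraction of the top-k ranked applications that were ultimately funded."""
    return len(set(ranking[:k]) & funded) / k

# Hypothetical IDs; the funded set is the outcome label used as the quality signal.
ai_ranking = ["a3", "a7", "a1", "a9", "a2", "a5"]
reviewer_ranking = ["a7", "a3", "a2", "a1", "a8", "a5"]
funded = {"a3", "a7", "a2"}

print(precision_at_k(ai_ranking, funded, 4))        # 0.5
print(precision_at_k(reviewer_ranking, funded, 4))  # 0.75
```

Comparing the two scores on held-out cycles is one simple way to ask whether an AI shortlist matches human reviewers on the funding outcome, even when the shortlists themselves differ.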
The output wasnāt just a ranked list.
It generated structured, comparative explanations about why proposals were stronger or weaker relative to others.
That kind of pool-wide comparison is hard to do consistently with humans alone.
Then we changed the core question.
Instead of asking, "Is this proposal good?"
We asked, "Which of these two is stronger, and why?"
That led to 1,000+ head-to-head matchups across the full pool. Think March Madness but on steroids.
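One standard way to turn head-to-head matchups like these into a pool-wide ranking is a Bradley-Terry fit; here's a minimal sketch with hypothetical match data (the thread doesn't say which aggregation method was actually used):

```python
from collections import defaultdict

def bradley_terry(matches, iters=200):
    """Fit Bradley-Terry strengths from (winner, loser) pairs via
    fixed-point iteration: p_i = wins_i / sum_j [ n_ij / (p_i + p_j) ]."""
    wins = defaultdict(int)
    n = defaultdict(int)  # matchups played between each unordered pair
    players = set()
    for w, l in matches:
        wins[w] += 1
        n[frozenset((w, l))] += 1
        players |= {w, l}
    p = {x: 1.0 for x in players}
    for _ in range(iters):
        new = {}
        for i in players:
            denom = sum(n[frozenset((i, j))] / (p[i] + p[j])
                        for j in players if j != i and n[frozenset((i, j))])
            new[i] = wins[i] / denom if denom else p[i]
        total = sum(new.values())                      # normalize so strengths
        p = {x: v * len(players) / total for x, v in new.items()}  # stay bounded
    return p

# Hypothetical matchups: (winner, loser)
matches = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
strengths = bradley_terry(matches)
ranking = sorted(strengths, key=strengths.get, reverse=True)
print(ranking)  # A ranks first
```

The appeal over one-off scoring is consistency: every proposal is judged on the same comparative question, and transitivity violations get smoothed into a single strength estimate.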
Instead of "chatbot vibes," we built a structured multi-agent system w/ Claude Code.
Not just replicating what reviewers do, but extending it: literature reviews, landscape scans, backgrounders & structured rubric scoring, none of which was previously possible for early-stage proposals.
At Stanford Impact Labs, we asked a straightforward question:
Can AI help us accelerate science funding without lowering standards or cutting humans out of the loop?
We put it to the test.
Funders, public procurement, and journal editors all face similar challenges: how to allocate scarce resources most effectively.
Most applications are rejected. Decisions can take months. We've all been there.
That delay is a huge invisible tax on science & impact.
Could AI reduce that tax?
"Science Funding Goes Beyond the Universities" by @calebwatney.bsky.social, IFP
www.wsj.com/opinion/scie...
What does it take to build a scientific field?
New @RenPhilanthropy playbook out with lessons learned from successful cases below 👇
www.renaissancephila...
We all need stronger BS filters. As signal gets harder to distinguish from noise, real rigor becomes rarer, and more valuable.
3/3
In the conferences I'm attending & across social media, I'm seeing more claims that sound good, but collapse under basic scrutiny. The framing is slick; the rigor is missing.
Bad ideas have always existed. What's new is how effortlessly they can now masquerade as insight.
2/3
In the age of AI, sounding right is easier than being right.
It's never been simpler to spin ideas that *seem* true: confident tone, polished narrative. AI can turn half-baked thoughts into viral TED-talk soundbites.
But rhetorical sparkle isnāt substance.
1/3
Dunning-Kruger effect in practice:
In 53k forecasts across 100 projects, people who are more confident are actually much less accurate.
New 📄 by @evavivalt.bsky.social & @sdellavi.bsky.social
Check out the new Export Boom Atlas from @GrowthTeams: an interactive map of 80+ export booms from developing economies, showing how countries from Vietnam to Morocco to Costa Rica achieved rapid sector-level growth that created jobs and prosperity. 🔗 https://exportbooms.org
I love it when papers have accompanying websites!
Also, neat to find out @stanfordimpactlabs.bsky.social had a small role in supporting this work!
📄 & 💻 by @marshallburke.bsky.social & an impressive team!
adaptationatlas.org/...
www.nber.org/papers/...
@ukri.org has announced an £11.5m investment in AI-driven evidence synthesis (METIUS).
Backed by a $126m global alliance including @wellcometrust.bsky.social, the project aims to make research more accessible for policymakers on climate, education, justice and development.
Fascinating paper on a timely question... kudos!!
New research by Pierre Azoulay, Danielle Li, Bhaven Sampat and me.
Earlier this year, the President's budget proposed a 40% cut to the budget of the NIH. This motivated us to ask: what if the NIH had been 40% smaller?
With a few notable exceptions, I'm struck by how few funders are openly sharing what they're learning from AI in decision-making.
What am I missing?
We've embedded this directly in our latest RFP. (screenshot 👇)
This is about experimentation, transparency, and learning, together with the research & funding community.
Here's what "responsible" means to us:
➡️ AI augments, not replaces, human decision-making
➡️ New workflows that weren't previously possible
➡️ Robust safeguards
➡️ A learning agenda to test key claims
That's why @StanfordImpact is piloting responsible AI use in our funding processes.
Our goal: accelerate impact-focused science R&D.
✅ Faster decisions
✅ Lower costs
✅ Reduced burden on applicants
Right now:
⚠️ Many funders ban AI outright.
🙈 Others ignore it.
🤖 And slick AI vendors make bold, untested claims.
None of this felt right to me.
Ever applied for funding… and then had to wait months for a response? ⏳
What if funders could move faster and make better decisions, so applicants can secure funding & get straight to work?
AI could help. But the current landscape is messy. 🧵