
Posts by Forethought

AI for Decision Advice As AI gets smarter, people will rely on it for high-stakes decisions. Forethought considers how an ideal AI advisor might behave.

In a new post, Tom Davidson drafts a model spec to guide how AI gives advice in key scenarios, and compares some ideal examples of AI advice to what today's leading models actually say.

Read it here: www.forethought.org/research/ai...

4 days ago

As AI gets smarter, people will increasingly turn to it for advice on important decisions, so the quality of AI advice really matters.

4 days ago
The value of moral diversity Several models for thinking about the value of moral diversity as the number of powerholders scales.

How is moral diversity valuable for achieving a near-best future? A new post introduces several models for thinking about the value of moral diversity as the number of powerholders scales.

Read it here: newsletter.forethought.org/p/the-value...

1 week ago
AI and Epistemics: The Good, Bad and Ugly AI could transform how we collectively figure out what's true. Forethought maps out the good, bad, and ugly of AI's potential impact on societal epistemics.

Read it here: www.forethought.org/research/ai...

1 week ago

AI could dramatically transform how we collectively determine what's true—for better or worse. In a new post, the authors map out the possible impacts of AI on society's epistemics: the good, the bad and the ugly.

1 week ago
Defense-favoured coordination tech Forethought sketches six near-term AI-enabled coordination technologies designed to help groups find deals, settle disputes, and hold each other accountable.

In a new post, the authors present design sketches exploring how AI-enabled coordination tech could be built to favor defense over offense.

Read it here: www.forethought.org/research/de...

2 weeks ago

Near-term AI could make it dramatically easier for groups to find deals, resolve disputes, and hold each other accountable. But the same tools could enable collusion and worse.

2 weeks ago
AIs Should Have Proactive Prosocial Drives Forethought argues that AIs should (sometimes) take proactive actions to benefit society, not just follow instructions.

New post: AIs should (sometimes) be proactively prosocial.

Read it here: www.forethought.org/research/ai...

x.com/willmacaski...

2 weeks ago
AI for AI for Epistemics AI could help us to build stronger AI-powered systems to help people track what is true. This brings important opportunities and risks.

What if... we could use AI to help build the kind of AI that would empower us to work out what's true?

Introducing: AI for AI for epistemics.

www.forethought.org/research/ai...

2 weeks ago
Concrete Projects in AGI Preparedness Eight concrete projects to help prepare for superintelligence, including AI character evaluation, automated macrostrategy, and tools for improving epistemics.

New post: concrete projects to prepare for superintelligence.

Read it here: www.forethought.org/research/co...

x.com/willmacaski...

3 weeks ago
The importance of AI character Forethought argues that AI character—e.g. how obedient, honest, or altruistic AI systems are—will shape power, conflict, and society far more than is recognized. Work to shape AI character could be hu...

New post: William MacAskill and Tom Davidson argue that AI character is a big deal.

Read it here: www.forethought.org/research/the...

3 weeks ago
Should We Lock in Post-AGI Agreements Under Uncertainty? Some mutually beneficial agreements, between major powers or individuals, depend on shared uncertainty about post-AGI outcomes. We consider which deals are worth enabling before an intelligence explosion.

New post: should we lock in post-AGI agreements under uncertainty?

Read it here: www.forethought.org/research/sh...

1 month ago
Moral Public Goods and the Future of Humanity Moral public goods are widely valued but underfunded. Learn how coordination, governance, and power distribution could shape humanity’s long-term future.

We argue that making sure future people can coordinate to fund moral public goods could be a big deal for how well the long-term future goes: www.forethought.org/research/mo...

1 month ago

"Moral public goods" are things many people value for ethical reasons, but where no individual's contribution is worth it unless others contribute too, creating large potential gains from coordination.

1 month ago
Can Liberal Democracy Survive AGI? — Sam Hammond | ForeCast Sam Hammond and Fin Moorhouse discuss how AGI could reshape the nation-state, drawing on Sam's “AI and Leviathan” essay series. Read a transcript of t...

New podcast episode: chatting with economist Sam Hammond about what happens to public institutions when AI collapses transaction costs.

www.youtube.com/watch?v=grG...

2 months ago
AI Tools for Strategic Awareness: Forecasting & OSINT How near-term AI could power forecasting, scenario planning, and OSINT tools to improve strategic awareness and decision-making.

Today we’re publishing another set of design sketches, illustrating what some of these tools might look like more concretely.

You can read the full article here: www.forethought.org/research/de...

2 months ago

Tools for strategic awareness could deepen people’s understanding of what’s actually going on around them, making it easier for them to make good decisions in their own interests. This would have big implications both for individuals and for collective decision-making.

2 months ago
UN Charter Lessons for International AGI Governance How the UN Charter was created—and what its successes and limits suggest for future international governance of advanced AI and AGI.

And the UN charter piece is here: www.forethought.org/research/th...

2 months ago
International Organization Voting Rules for AGI Governance Rough research note on how international organizations vote—unanimity, majority, weighted voting, and vetoes—and what this means for AGI governance.

The overview of international organisations is here: www.forethought.org/research/an...

2 months ago

We’ve recently published two pieces of background research that informed our thinking on an international AGI project:
• An overview of some international organisations, with their voting structures
• The UN Charter: a case study in international governance

2 months ago
Design Sketches for a More Sensible AI Future Explore practical AI tools that improve reasoning, forecasting, coordination, and strategic awareness to help navigate the transition to advanced AI.

There’s an overview of the whole series here: www.forethought.org/research/de...

2 months ago
AI Tools for Trust: Community Notes, Rhetoric Detection & More Five AI technologies to combat misinformation: community notes, rhetoric detection, reliability tracking, epistemic evals, and provenance tracing.

You can read the full post here: www.forethought.org/research/de...

2 months ago

The first set of design sketches focuses on collective epistemics: tools that make it easy to know what’s trustworthy and reward honesty.

2 months ago

Technologies powered by near-term AI systems could transform our ability to reason and coordinate, significantly improving our chances of safely navigating the transition to advanced AI.

Last week, we launched a series of design sketches for specific technologies that we think could help.

2 months ago
Angels-on-the-Shoulder: 5 AI Tools for Better Decisions Five “angels-on-the-shoulder” AI designs that help people make better decisions.

You can read the full article here: www.forethought.org/research/de...

2 months ago

We think tools like this will be possible soon, and could meaningfully help humanity to navigate the transition to advanced AI. Today we’re publishing a set of design sketches describing some of these tools in more detail.

2 months ago

Imagine having a technological analogue to an ‘angel on the shoulder’: a customised tool or tools that help you make better decisions in real time, decisions that you more deeply endorse after the fact.

2 months ago
Short AI Timelines Aren’t Always Higher-Leverage Are 2–10 year AGI timelines really the highest leverage? We compare 2027/2035/2045 scenarios and explain when medium timelines can offer higher impact.

Should we focus on worlds where AGI comes in the next few years? People often argue yes, because short timelines have higher leverage. We're not so sure. New post arguing that for many people, 2035+ timelines might be highest leverage: www.forethought.org/research/sh...

2 months ago
Forethought is Hiring Researchers (with Mia Taylor) This is a bonus episode to say that Forethought is hiring researchers. After an overview of the roles, we hear from Research Fellow Mia Taylor about working at Forethought. The application deadline has been extended to November 1st 2025. Apply here: fore

Wondering whether to apply to our open roles? Research Fellow Mia Taylor joined 5 weeks ago. We just released a new episode of ForeCast, hearing from her about why she joined, what it's like to work here, and who the work is likely (and unlikely) to suit.

pnc.st/s/forecast/...

6 months ago
Forethought Researcher Referrals We may reach out to the person based on the information you provide, but it might be good for you also to encourage the person to apply. You can see more about the role here. You might want to think about things like:
• Who is a great researcher who cares about these topics and might like more freedom than industry or academia can give them?
• Who are some of your favourite bloggers or LessWrong commenters?
• Who are the smartest early-career researchers you know?
We will pay £10,000 if we end up hiring them as a Senior Research Fellow, or £5,000 if we end up hiring them as a Research Fellow. They must pass a 3-month probation, you must be the only person to refer them, and we must not have previously planned to reach out to them. We'll try to use reasonable judgement in cases of ambiguity, aiming to err on the side of being generous.

We’re also offering a referral bounty of up to £10,000 (submit here: forms.gle/xbsC6K9QBAw...).

6 months ago