Hey #atmosphereconf, we just proved that the right feed algorithms can reduce political polarization. And now we're building open-source recommender infrastructure that you can use for your ATProto app.
rankingchallenge.substack.com/p/its-possib...
Posts by Susan Leavy
I am deeply honored to have been elected as Co-Chair of the United Nations' Independent International Scientific Panel on AI alongside Nobel Peace Prize laureate @mariaressa.bsky.social.
Lecture streaming here tomorrow at 9:00am CET www.iaseai.org/schedule-2026
There is little evidence of overall impacts on labour markets so far, though early-career workers in some AI-exposed occupations have seen declining employment compared with late 2022.
(11/19)
Since the last Report, we’ve seen new evidence of many emerging risks. For example, AI-generated content has become highly realistic and more useful for fraud, scams, and non-consensual intimate imagery. There is growing evidence that AI systems help malicious actors carry out cyberattacks.
(10/19)
But new capabilities also pose risks. The Report highlights 8 emerging risks:
Misuse:
→ AI-generated content & criminal activity
→ Influence & manipulation
→ Cyberattacks
→ Bio & chemical risks
Malfunctions:
→ Reliability issues
→ Loss of control
Systemic risks:
→ Labor market impacts
→ Risks to human autonomy
(9/19)
These capabilities are increasingly translating into real-world impact.
At least 700 million people now use leading AI systems weekly. In the US, use of AI has spread faster than that of computers and the internet.
(8/19)
But capabilities are also “jagged”: the same model may solve complex problems yet fail at some seemingly simple tasks.
(7/19)
On capabilities: AI systems continue to improve significantly.
Leading models now achieve gold-medal performance on the International Mathematical Olympiad.
AI coding agents can complete 30-minute programming tasks with 80% reliability—up from 10-minute tasks a year ago.
(6/19)
2️⃣ Some risks, from deepfakes to cyberattacks, shifted further from theoretical concerns to real-world challenges.
3️⃣ Many safety measures improved, but remain fallible. Developers increasingly implement multiple layers of safeguards to compensate.
(5/19)
This report provides policymakers with the information they need to make these decisions.
In 2025:
1️⃣ Capabilities continued advancing rapidly, especially in coding, science, and autonomous operation.
(4/19)
AI poses an “evidence dilemma” to policymakers—capabilities evolve quickly, but scientific evidence emerges far more slowly.
Acting too early risks entrenching ineffective policies, but waiting for strong evidence may leave society vulnerable to risks.
(3/19)
Over 100 independent experts contributed to the Report, including Nobel laureates and Turing Award winners, along with an advisory panel with nominees from more than 30 countries and international organisations, including the EU, OECD and UN.
internationalaisafetyreport.org/publication/...
(2/19)
Today we’re releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks, and safety measures to date. 🧵
(1/19)
It was equally a pleasure to receive this distinction alongside an exceptional group of colleagues, whose contributions have had a profound impact on the field of AI as we know it today. www.youtube.com/watch?v=0zXS...
I was very honoured to receive the Queen Elizabeth Prize for Engineering from His Majesty King Charles III this week, and pleased to hear his thoughts on AI safety as well as his hopes that we can minimize the risks while collectively reaping the benefits.
Full talk available here: youtu.be/UgZVc0-00t0?...
As AI continues to develop at unprecedented speed, the work of the International AI Safety Report is essential in keeping pace with necessary regulation, developments and safety outputs. @susanly.bsky.social acts as a senior advisor with this project, addressing Capabilities and Risk Implications.
These developments raise further questions about control, monitoring, and governance as AI systems become more capable.
Read the Key Update: internationalaisafetyreport.org/publication/...
(10/10)
▶️ Emerging oversight challenges: AI models increasingly demonstrate an ability to distinguish evaluation tasks from real-world tasks, raising critical questions about our ability to reliably test their capabilities before deployment.
(9/10)
▶️ Stronger safeguards from developers: Leading AI developers recently activated enhanced protections on their most capable models as a precautionary measure, given possibilities like misuse to build weapons.
(8/10)
▶️ Signals of real-world adoption: In a recent survey, a majority of scientists report using AI tools daily to help design experiments, process data, and write reports.
Yet we still don’t know much about AI use in other domains, nor about how AI use affects productivity overall.
(7/10)
▶️ The rise of “reasoning” models: Recent gains have come mainly from techniques allowing models to generate interim steps before producing answers.
AI capabilities can advance through post-training techniques and more computing power at inference time, not just through scaling model size.
(6/10)
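The idea behind spending more compute at inference time can be illustrated with a toy sketch. This is not how any real model works; `noisy_solver`, the fixed answer `42`, and the success probability `p` are all hypothetical stand-ins. The sketch shows only the general principle: sampling several independent "reasoning attempts" and taking a majority vote tends to beat a single attempt.

```python
import random

# Toy stand-in for a model's single reasoning attempt: returns the correct
# answer with probability p, otherwise an adjacent wrong answer.
def noisy_solver(correct_answer, p=0.6, rng=random):
    if rng.random() < p:
        return correct_answer
    return correct_answer + rng.choice([-1, 1])

def majority_vote(samples):
    # More inference-time compute: several attempts, most common answer wins.
    return max(set(samples), key=samples.count)

def accuracy(n_samples, trials=2000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = [noisy_solver(42, rng=rng) for _ in range(n_samples)]
        if majority_vote(votes) == 42:
            correct += 1
    return correct / trials

# Accuracy rises with the number of sampled attempts per question,
# even though the underlying "solver" is unchanged.
print(accuracy(1), accuracy(5), accuracy(15))
```

Under these assumptions, accuracy improves as attempts per question increase, which is the sense in which capabilities can advance through inference-time compute rather than model size alone.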
▶️ Impressive performance improvements: Several AI systems can now solve International Mathematical Olympiad problems at gold medal level and complete a majority of problems in several benchmarks of real-world software engineering tasks.
(5/10)
Our first Key Update is the work of 100 international, independent AI experts.
Some of the key findings it covers include: ⬇️
(4/10)
These Key Updates are shorter, focused reports on critical developments in AI, published between editions of the full report.
This will ensure policymakers have access to up-to-date scientific reporting on the capabilities & risks of advanced AI systems.
(3/10)
As Chair of the International AI Safety Report, an effort backed by 30+ countries & international organisations, I've led a Key Update, published today, focused on advances in AI capabilities and their implications for risks.
You can read it here: internationalaisafetyreport.org/publication/...
(2/10)
AI is evolving too quickly for an annual report to suffice. To help policymakers keep pace, we're introducing the first Key Update to the International AI Safety Report. 🧵⬇️
(1/10)
What does a verified adult mean? Any idea how they plan to verify?
Email - so many - that’s not what I wanted to do when I grew up!