
Posts by Andrew Strait


🚨New paper🚨

From a technical perspective, safeguarding open-weight model safety is AI safety in hard mode. But there's still a lot of progress to be made. Our new paper covers 16 open problems.

🧵🧵🧵

5 months ago 18 3 1 2

This is such a cool paper from my UK AISI colleagues. We need more methods for building resistance to malicious tampering of open-weight models. @scasper.bsky.social and team below have offered one for reducing biorisk.

8 months ago 5 0 0 0
Chart: quarterly capital expenditures — hockey-stick growth in the capex of Amazon, Microsoft, Google and Meta, almost entirely on data centres; nearly $100 billion collectively in the most recent quarter.

The AI infrastructure build-out is so gigantic that in the past 6 months, it contributed more to the growth of the U.S. economy than *all of consumer spending*.

The 'magnificent 7' spent more than $100 billion on data centers and the like in the past three months *alone*

www.wsj.com/tech/ai/sili...

8 months ago 820 320 78 269

Highly recommend for your beach summer reading.

global.oup.com/academic/pro...

8 months ago 12 2 0 0

Man, even the brocast community appears to be reading @shannonvallor.bsky.social 's book.

8 months ago 84 11 2 9

Congrats to @kobihackenburg.bsky.social for producing the largest study of AI persuasion to date. So many fascinating findings. Notable that (a) current models are extremely good at persuasion on political issues and (b) post-training is far more significant than model size or personalisation.

8 months ago 9 7 0 0

Massive credit to the lead authors Christopher Summerfield, Lennart Luttegau, Magda Dubois, Hannah Rose Kirk, Kobi Hackenburg, Catherine Fist, Nicola Ding, Rebecca Anselmetti, Coz Ududec, Katarina Slama and Mario Giulianelli.

9 months ago 0 0 0 0

Ultimately, we advocate for more rigorous scientific methods. This includes using robust statistical analysis, proper control conditions, and clear theoretical frameworks to ensure the claims made about AI capabilities are credible and well-supported.

9 months ago 2 0 1 0

A key recommendation is to be more precise with our language. We caution against using mentalistic terms like 'knows' or 'pretends' to describe model outputs, as it can imply a level of intentionality that may not be warranted by the evidence.

9 months ago 3 2 1 0

For example, we look at how some studies use elaborate, fictional prompts to elicit certain behaviours. We question whether the resulting actions truly represent 'scheming' or are a form of complex instruction-following in a highly constrained context.

9 months ago 2 0 1 0

We discuss how the field can be susceptible to over-interpreting AI behavior, much like researchers in the past may have over-attributed linguistic abilities to chimps. We critique the reliance on anecdotes and a lack of rigorous controls in some current studies.

9 months ago 1 0 1 0

Our paper, 'Lessons from a Chimp,' compares current research into AI scheming with the historic effort to teach language to apes. We argue there are important parallels and cautionary tales to consider.

9 months ago 1 0 1 0

Recent studies of AI systems have identified signals that they 'scheme': covertly and strategically pursuing goals misaligned with those of the human user. But do the underlying studies follow solid research practice? My colleagues at UK AISI took a look.

arxiv.org/pdf/2507.03409

9 months ago 4 2 1 1

Addressing AI-enabled crime will require coordinated policy, technical and operational responses as the technology continues to develop. Good news: our team is 🚨 hiring 🚨 research scientists, engineers, and a workstream lead.

Come join our Criminal Misuse team:

lnkd.in/eS9-Dj5i
lnkd.in/e_dqU6QF

9 months ago 1 0 0 0

Our Criminal Misuse team is focusing on three key AI capabilities that are being exploited by criminals:

- Multimodal generation
- Advanced planning and reasoning
- AI agent capabilities

9 months ago 0 0 1 0

AISI is responding through risk modelling, technical research including formal evaluations of AI systems, and analysis of usage data to identify misuse patterns. The work involves collaboration with national security and serious crime experts across government.

9 months ago 0 0 1 0
How will AI enable the crimes of the future? | AISI Work How we're working to track and mitigate against criminal misuse of AI.

New blog on the growing use of AI in criminal activities, including cybercrime, social engineering and impersonation scams. As AI becomes more widely available through consumer applications and mobile devices, the barriers to criminal misuse will decrease.

www.aisi.gov.uk/work/how-wil...

9 months ago 3 0 1 0
A screenshot of the NYT piece on chatbots with the quote: "The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a 'temporary pattern liberator'."

this is the most dangerous shit I have ever seen sold as a product that wasn’t an AR-15

10 months ago 155 38 10 2
Two Marines in army combat outfits and guns are seen detaining a young black man in a black and white top, wearing sunglasses with air pods in his ears.

BREAKING: US Marines deployed to Los Angeles have carried out the first known detention of a civilian, the US military confirms.

It was confirmed to Reuters after they shared this image with the US military.

10 months ago 7408 3244 789 708
Research Engineer - Societal Resilience London, UK

For those who prefer this in GenAlpha:

Fr fr it's giving lowkey GOATED research engineering vibes, slaying data pipelines and agent evals, periodt.

job-boards.eu.greenhouse.io/aisi/jobs/46...

10 months ago 1 0 1 0

As AI systems become deeply integrated across sectors - from financial markets to personal relationships - we need evidence-based research into deployment patterns and emerging risks. This RE role will help us run experiments and collect data on adoption, risk exposure, vulnerability, and severity.

10 months ago 2 0 1 0
Research Engineer - Societal Resilience London, UK

We're hiring a Research Engineer for the Societal Resilience team at the AI Security Institute. The role involves building data pipelines, web scraping, ML engineering, and creating simulations to monitor these developments as they happen.

job-boards.eu.greenhouse.io/aisi/jobs/46...

10 months ago 9 3 1 0
The boy who came back: the near-death, and changed life, of my son Max — "It was, we were told, a case of sudden infant death syndrome interrupted. What followed would transform my understanding of parenting, disability and the breadth of what makes a meaningful life."

I wrote for the Guardian’s Saturday magazine about my son Max, who changed how I see the world. Took ages. More jokes after the first bit.

Thanks Merope Mills for being the most patient and generous editor.

www.theguardian.com/lifeandstyle...

10 months ago 857 184 128 76
Grants | The AI Security Institute (AISI) — "View AISI grants. The AI Security Institute is a directorate of the Department of Science, Innovation, and Technology that facilitates rigorous research to enable advanced AI governance."

Help us build a more resilient future in the age of advanced AI.

Find all the details about our Challenge Fund and Priority Research Areas for societal resilience here:

www.aisi.gov.uk/grants#chall...

#AIChallenge #ResearchFunding

10 months ago 2 1 1 0

We're also looking for:
➡️ Deeper studies into societal risk severity, vulnerability & exposure (non-robust systems, scams, overreliance on companion apps, etc.).
➡️ Downstream mitigations for 'defense in depth'.

10 months ago 1 0 2 0

We're interested in many kinds of projects, including:

➡️ Adoption & integration studies: How are different sectors & frontline workers really using advanced AI? For what tasks? How often?
➡️ Novel & creative datasets to understand real-world AI usage.

10 months ago 1 0 1 0

➡️ Emotional reliance, undue influence & AI addiction
➡️ Infosphere degradation (impacting education, science, journalism)
➡️ Agentic systems causing collusion or cascading failures
➡️ Labour market impacts & displacement
And much more.

10 months ago 1 0 1 0

What risks are we focused on? Those causing or exacerbating severe psychological, economic, & physical harm:
➡️ Overreliance on AI in critical infrastructure
➡️ AI enabling fraud or criminal misuse

10 months ago 1 0 1 0
Grants | The AI Security Institute (AISI) — "View AISI grants. The AI Security Institute is a directorate of the Department of Science, Innovation, and Technology that facilitates rigorous research to enable advanced AI governance."

We're able to offer grants of up to £200k over 12 months.

We've updated our Priority Research Areas for societal resilience. Check out the kinds of research questions we're keen to fund. This list will evolve as new challenges emerge.

www.aisi.gov.uk/grants#chall...

10 months ago 2 1 1 0

🚨Funding Klaxon!🚨

Our Societal Resilience team at UK AISI is working to identify, monitor & mitigate societal risks from the deployment of advanced AI systems. But we can't do it alone. If you're tackling similar questions, apply to our Challenge Fund.

#AI #SocietalResilience #Funding

10 months ago 7 4 1 0