Last month, members of the #GAIGANow Team attended the #IASEAI2026 Conference. Our conversations highlighted growing momentum in the field, the need for stronger dialogue between AI safety and AI ethics communities, and the importance of global coordination.
#AISafety #AIEthics
panelists on stage
Should governments impose an LLM token tax? 💰💰
If AI gradually replaces important parts of human work, we'll need other forms of taxation.
Important discussion by @akorinek.bsky.social, Anna Salomons, @dianecoyle1859.bsky.social w/ @kncukier.bsky.social recently at #IASEAI2026 at UNESCO in Paris.
Stephen Casper (MIT) on stage speaking
We almost certainly won't make AI safe by making safe AI.
Others are still going to create unsafe AI.
– @scasper.bsky.social at #IASEAI2026 Open-Weight AI Risk Management Workshop
I led one of the discussion groups, and we came up with some promising new ideas for making open-weight models safe.
Geoffrey Hinton speaking
Geoffrey Hinton:
Proving safety of AI (Stuart Russell's goal) is not going to work for neural network-based AI.
Pushing for non-agentic AI (@yoshuabengio.bsky.social's goal) could work, but the big tech companies are moving towards more agency.
⚡ Risk mitigation is all we can do!
#IASEAI2026
Anthony Aguirre on stage at UNESCO
3 AI Races – Big tech companies are racing:
1. For attention/engagement online
2. For AGI to replace human labor
3. For superintelligence, in pursuit of power
Whoever wins, humanity loses.
– Anthony Aguirre @futureoflife.org at #IASEAI2026
Gillian Hadfield on stage with Yoshua Bengio and Susan Leavy
AI bots used to persuade humans to send emails to famous AI researchers; now the bots send the emails themselves, claiming sentience etc. (Stuart Russell)
This agency brings profound risks, the theme of #IASEAI2026
AI bots have no ID and are not traceable: should this be regulated? – @ghadfield.bsky.social