Last week, a 20-year-old man threw a Molotov cocktail at Sam Altman's mansion; two days later, people fired a gun at it. Earlier that week, someone fired gunshots into the house of a city councilman who had approved a data center.
Why the AI backlash has turned violent:
Posts by James Galbraith
🎉 EarlGrey ParTEA v0.1.6 is here! Huge thanks to everyone who's tried the pipeline and shared feedback — your suggestions directly shaped these new features. 🍵
#TEworldwide #transposableelements #bioinformatics #genomics
A solar eclipse (with the Earth blocking the sun) viewed from Artemis II.
Solar eclipse from Artemis II. 😍😍😍😍😍
images-assets.nasa.gov/image/art002...
Question for people who were alive for Bay of Pigs or other tense geopolitical moments like now: How did you focus on things other than the potential destruction of humanity?
Movements to shut down or ban data centers are amassing power and notching victories. Wikipedia has banned AI-generated content in articles. Publishers and entertainment studios are being pushed to reject AI-produced content outright.
In other words: It's open season for refusing AI.
In the new TMK, we're joined by @katemac.bsky.social and @70sbachchan.bsky.social from @thepolycrisis.bsky.social to chat about the war in Iran, fossil fuel demand destruction, forced energy transitions, plus much more. www.patreon.com/posts/154551...
IMPACT OF PARENTHOOD ON UNIVERSITY EMPLOYMENT. Line graph shows how the probability of holding a research position changes from four years before to seven years after having children.
Becoming a parent is much more detrimental to women’s academic careers than it is to men’s
Read the full story: go.nature.com/4v4rxmQ
A new study found that sycophancy is a pervasive function of leading chatbots, which are likely to give users bad/antisocial advice when asked about real-world interpersonal conflicts -- warping users' judgement and promoting dependence.
futurism.com/artificial-i...
BREAKING: The Senate just set up a Greens-led inquiry into a 25% gas export tax.
Big gas corporations are ripping us all off, paying almost no tax while people are hurting.
This inquiry will expose gas corp greed, pushing govt for fairer tax & cost of living help in the budget.
These shitty laws will get slapped down by the High Court, but some poor person is going to have to go through that long horrific process
www.theguardian.com/australia-ne...
Do you want to move to Oxford and join our lab? There is a computational biologist position open atm to develop and implement tools to analyse TE expression in single-cell long-read data and more. Apply! #TEsky #UniversityofOxford #postdoc
Now out!
We show that TEs can be horizontally transferred between fungal species via Starships. Once transferred, these TEs can become active, changing the genome organization and affecting the lifestyle of the recipient fungus.
www.nature.com/articles/s41...
@oggenfussursula.bsky.social #TEsky
Goldman Sachs reports that 300 million full-time jobs could be replaced by AI by 2030. Labor turnover is high and hiring has slowed. 71% of Americans worry that AI will cause permanent job loss. As young people about to enter the workforce for the first time, the fear of unemployment is understandable, but we cannot save ourselves with the very tool that is putting us at risk.

The irony is that as Penn pours endless money and energy into AI advancement in its attempt to get ahead, the University is only quickening its own demise. AI cannot coexist with education — it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought. With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.

Editorials represent the majority view of members of The Daily Pennsylvanian Editorial Board who meet regularly to discuss issues relevant to the Penn community. This body is led by Editorial Board Chair Jack Lakis and is entirely separate from the newsroom. Questions or comments should be directed to letters@thedp.com.
An unaccounted for part of the economy is how much young people virulently hate AI, despite how aggressively it's being forced on them. They realize it's making their friends dumber and ruining the world and they want nothing to do with it.
From the Penn student paper:
www.thedp.com/article/2026...
It's a very good angle to come at it from, and the only reason we didn't include that is that we wanted to stick to the clearest cases of conflict between the University's policies on AI and procurement, which couldn't be washed away with vague corporate speak.
After just 5 days over 350 staff and students from across the U of Edinburgh have signed our Open Letter calling on the University to end its contract with OpenAI.
You can do this at your institution too. Find its "AI Principles", "Procurement Policy", etc., and look for how they conflict with OpenAI.
"when the briefs are unclear"
babe understanding the brief, figuring out how to produce the solution, working toward this goal IS THE ASSIGNMENT
Holy shit we might have a cure for sickle cell anemia
Forget turtles, it’s transposons all the way down! 🧬🏃♀️🧬
Solidarity with Emily Tucker and her open letter to students. Tucker writes, "But the great thing is, you don’t have to go along with this, and I urge you not to. You can refuse to use the chatbot. You can tell your professors that you don’t want them to use it or to require you to use it."
A big thanks to Arthur, @zeerak.bsky.social @adamlopez.bsky.social and all the folks at @technomoralfutures.bsky.social for help in writing this letter.
They don't have to be raging Luddites like me, keen to smash the silicon frames infesting our knowledge factories. Maybe they're worried about only one aspect. Or maybe they're in the X-risk camp - we may disagree on some matters, but many of them want to stop these companies too.
Just two days later we have 100s of signatures from across the University. That's all it takes.
So reach out to like-minded individuals at your institution. See if there is a similar document outlining your institution's AI Principles - contracting OpenAI, Anthropic, or xAI is bound to conflict.
So what can you do? Arthur and I started discussing putting this lesson together just last Friday and started reaching out to others with similar views, seeing the QuitGPT and BDS movements as an inspiration. By Monday we'd started drafting the letter, and on Thursday we released it for signatures.
In addition, OpenAI recently signed a massive contract with the US Military and provides its services to ICE, a paramilitary force tearing people away from their families and communities based solely on race and ethnicity. That's not inclusive or equitable.
ChatGPT does not promote inclusivity or equity. For starters, @karenhao.bsky.social's Empire of AI and @404media.co's reporting show the labour conditions of Global South workers training these models are atrocious: they are paid a pittance to sort through graphic descriptions & images of violence and CSAM
ChatGPT is not accessed responsibly. While the University does host Llama locally, the 15 ChatGPT models it provides are not locally hosted. We have no idea how much energy is being used in answering our prompts or how that energy was generated.
huggingface.co/blog/sasha/e...
ChatGPT is not secure. It has one of the worst security ratings of any LLM provider. Surely a university which considers itself the "birthplace of Artificial Intelligence (AI) in Europe" has the capacity to self host open weight models?
businessdigitalindex.com/research/ai-...
ChatGPT is not safe. It has coached numerous young people to take their own lives. Why is the University supplying such a product to its staff and students?
www.aiaaic.org/aiaaic-repos...
In contracting OpenAI, the University presumably felt that this was in line with its stated AI Principles of 'Safe, Secure, Responsible Access, Inclusivity, and Equity'. Many didn't seem to question this, but you don't have to dig too deep to find that OpenAI is none of these.
As one of the authors of this, I thought I'd share a bit about how we got here and how you can do what we've done at your institution.
Over the two years I've been at the University of Edinburgh, I've grown increasingly concerned by fellow academics uncritically using LLMs, especially OpenAI's ChatGPT.