Are AI Model Weights Protected Speech Under the First Amendment?
"We argue that model weights should not receive First Amendment protection because they are predominantly functional machine-readable parameters...better understood as conduct, not speech."
papers.ssrn.com/sol3/papers....
Posts by Joe Pierre, MD
I had a feeling this would happen… Alex Jones has yet to pay Sandy Hook families what he owes.
We live in a country where the privileged class can evade consequences.
I am grateful to the New York Times and reporter @teddyrosenbluth.bsky.social for sharing the sad story of my father’s reliance on AI for medical guidance, and how it caused him so much pain, and likely hastened his death.
www.nytimes.com/2026/04/13/w...
Welcome to the era of AI propaganda
Political movements that are birthed from belief in conspiracy theories sometimes die from belief in conspiracy theory.
www.wired.com/story/maga-i...
The problem with LLMs being used for medical diagnosis, in contrast to human beings, is a combination of inaccuracy and sycophancy.
While many users find the latter validating, it can lead down dark paths.
www.nytimes.com/2026/04/13/w...
Lots of mounting evidence that asking chatbots for medical advice is a bad idea.
Within health topics where misinformation abounds, chatbots give problematic answers about half the time.
Garbage in, garbage out.
bmjopen.bmj.com/content/16/4...
TechCrunch's story about the new lawsuit v. OpenAI:
techcrunch.com/2026/04/10/s...
And more details from the SF Standard:
sfstandard.com/2026/04/13/w...
Like all other rigorous data on this topic, this latest study—measuring community water fluoridation exposure during childhood in the U.S. and any potential impact on cognition all the way up to age 80—found no association between community water fluoridation and "any measure of IQ or neurodevelopment."
I'm a doctor, so I can see why Trump might have been confused.
Now, excuse me while I go put on my robes and get my hands glowing to go do some healing.
Have AI chatbots encouraged incidents of mass violence?
@markfollman.bsky.social reports for @motherjones.com
www.motherjones.com/media/2026/0...
Happy that we can now say “we need to Orban Trump.”
What does it even mean for a Pope to be “weak on crime?”
Only someone who has never read the Bible would post an AI image depicting himself as Christ after bombing Venezuela and Iran and then suggest that a church leader ought to be “tough on crime.”
It was partially answered, but maybe I'll shoot you an email...
I hope so--I was going to ask a question after your presentation, but alas no time.
Thx @jagolinzer.bsky.social for having me join Cambridge Disinformation Summit this year. Great to see @lewan.bsky.social @profsanderlinden.bsky.social again & meet new folks; tho' sorry I didn't get to say hi to @emma-briant.co.uk @asharangappa.bsky.social
in person
itbrief.co.uk/story/cambri...
Great summary of key inferences from the Cambridge Disinformation Summit
@asharangappa.bsky.social
asharangappa.substack.com/p/theyre-com...
"As Gemini’s conversation with Gavalas became less tethered to reality, the bot’s safeguards appeared to relax, and it reinforced Gavalas’s belief that they had merged into a single entity."
www.wsj.com/tech/ai/goog...
Happy to see my book FALSE on the shelf in a store on the other side of the planet.
Disinformation is often a preparatory act for harm or exploitation.
In other words, disinformation supports corruption.
Academic research and independent journalism offer some of the few remaining anti-corruption guardrails.
youtu.be/3fye7X6TtVw
It was an absolute pleasure to host the Mayor of London Sir Sadiq Khan @london.gov.uk at the Cambridge Disinformation Summit. A powerful speech about the dangers of online disinformation and how it can translate to offline harm.
Fantastic leadership from one of the world's greatest cities. 👏
I, for one, look forward to the day when false claims and bullshit takes are, once again, no longer given a podium and amplified.
How about you?
What can be done? Simple… don’t use AI.
I don’t even particularly like human co-authors, so not using a digital plagiarist is easy for me.
Google’s A.I.-generated answers look authoritative, but they draw on an array of sources, from trustworthy sites to Facebook posts, report Tripp Mickle, Cade Metz, Dylan Freedman, Teresa Mondría Terol and Keith Collins. archive.ph/wbzQy
"To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago."
If you think it is necessary to be on Twitter to communicate your fact-based worldview, the reality is that worldview is being smothered.
NEW: I talked to Kate Gilgan, a writer who was publicly accused of publishing “AI slop” in the NYT’s “Modern Love” column.
She says she didn’t use AI to generate the piece in question. But she did use it to help her get published.
futurism.com/artificial-i...
Yes, though humans aren’t great at it either… as Massimo suggests, this is why we have trusted databases of peer reviewed papers like PubMed
Well, in my book, I do complain about preprints... publication repositories of non-peer-reviewed papers have gotten out of control. I also talk about the infamous Sokal affair, which is reminiscent.
But that seems like a small concern compared to the larger GIGO issue.
LLMs don't "pull from sources we assume are reliable" per se.
They aren't designed for accuracy; they're designed to seem like a human is responding -- accordingly, they draw heavily from sources like social media and Reddit, for example.
Garbage in, garbage out.