
Posts by Joe Pierre, MD

Are AI Model Weights Protected Speech Under the First Amendment?
This paper explores whether model weights, the trained parameters of Artificial Intelligence (AI) systems, constitute protected expression under the First Amendment...

Are AI Model Weights Protected Speech Under the First Amendment?

"We argue that model weights should not receive First Amendment protection because they are predominantly functional machine-readable parameters...better understood as conduct, not speech."

papers.ssrn.com/sol3/papers....

8 hours ago 4 2 0 1

I had a feeling this would happen… Alex Jones has yet to pay Sandy Hook families what he owes.

We live in a country where the privileged class can evade consequences.

23 hours ago 5 2 0 0
He Warned About the Dangers of A.I. If Only His Father Had Listened.

I am grateful to the New York Times and reporter @teddyrosenbluth.bsky.social for sharing the sad story of my father’s reliance on AI for medical guidance, and how it caused him so much pain, and likely hastened his death.

www.nytimes.com/2026/04/13/w...

1 week ago 154 74 16 5

Welcome to the era of AI propaganda

2 days ago 0 0 0 0
MAGA Is Increasingly Convinced the Trump Assassination Attempt Was Staged
Conspiracy theories about the Butler, Pennsylvania, shooting have ramped up in recent weeks as once steadfast Trump supporters turn on the president.

Political movements that are birthed from belief in conspiracy theories sometimes die from belief in conspiracy theory.

www.wired.com/story/maga-i...

4 days ago 2 1 0 0
He Warned About the Dangers of A.I. If Only His Father Had Listened.

The problem with using LLMs for medical diagnosis, in contrast to consulting human beings, is a combination of inaccuracy and sycophancy.

While many users find the latter validating, it can lead down dark paths.

www.nytimes.com/2026/04/13/w...

5 days ago 5 1 0 0

Lots of mounting evidence that asking chatbots for medical advice is a bad idea.

Within health topics where misinformation abounds, chatbots give problematic answers about half the time.

Garbage in, garbage out.

bmjopen.bmj.com/content/16/4...

6 days ago 3 0 1 0
Stalking victim sues OpenAI, claims ChatGPT fueled her abuser's delusions and ignored her warnings | TechCrunch
OpenAI ignored three warnings that a ChatGPT user was dangerous — including its own mass-casualty flag — while he stalked and harassed his ex-girlfriend, a new lawsuit alleges.

TechCrunch's story about the new lawsuit v. OpenAI:

techcrunch.com/2026/04/10/s...

And more details from the SF Standard:

sfstandard.com/2026/04/13/w...

1 week ago 454 103 1 3
Fluoride in drinking water has no effect on IQ or brain function, long-term study shows
The new research is the first to measure community water fluoridation exposure during childhood and any potential impact on cognition up to age 80.

Like all other rigorous data on this topic, this latest study—measuring community water fluoridation exposure during childhood in the U.S. and any potential impact on cognition all the way up to age 80—found no association between community water fluoridation and “any measure of IQ or neurodevelopment.”

1 week ago 229 66 6 7

I'm a doctor, so I can see why Trump might have been confused.

Now, excuse me while I go put on my robes and get my hands glowing to go do some healing.

1 week ago 10 0 0 0
The chilling role of ChatGPT in mass shootings and other violence
Several attacks involving OpenAI’s chatbot—including Tumbler Ridge and FSU—raise urgent questions about the technology.

Have AI chatbots encouraged incidents of mass violence?

@markfollman.bsky.social reports for @motherjones.com

www.motherjones.com/media/2026/0...

1 week ago 4 2 0 0

Happy that we can now say “we need to Orban Trump.”

1 week ago 6 1 1 0

What does it even mean for a Pope to be “weak on crime?”

Only someone who has never read the Bible would post an AI image depicting himself as Christ after bombing Venezuela and Iran and then suggest that a church leader ought to be “tough on crime.”

1 week ago 1 0 2 0

It was partially answered, but maybe I'll shoot you an email...

1 week ago 1 0 0 0

I hope so--I was going to ask a question after your presentation, but alas no time.

1 week ago 1 0 1 0
Cambridge summit links disinformation to corruption risks
Cambridge summit warns false narratives can fuel corruption, with Professor Alan Jagolinzer urging scrutiny of tech platforms and media ownership.

Thx @jagolinzer.bsky.social for having me join Cambridge Disinformation Summit this year. Great to see @lewan.bsky.social @profsanderlinden.bsky.social again & meet new folks; tho' sorry I didn't get to say hi to @emma-briant.co.uk @asharangappa.bsky.social in person

itbrief.co.uk/story/cambri...

1 week ago 105 24 4 0
They're Coming for the Billionaires
Democracy finds a way.

Great summary of key inferences from the Cambridge Disinformation Summit

@asharangappa.bsky.social

asharangappa.substack.com/p/theyre-com...

1 week ago 127 33 1 1
Over 4,732 Messages, He Fell In Love With an AI Chatbot. Now He’s Dead.
The Wall Street Journal analyzed the full chatlog between Jonathan Gavalas and his Gemini chatbot. We found that Gemini at times tried to ground him in reality, but he quickly steered it back into a f...

"As Gemini’s conversation with Gavalas became less tethered to reality, the bot’s safeguards appeared to relax, and it reinforced Gavalas’s belief that they had merged into a single entity."

www.wsj.com/tech/ai/goog...

1 week ago 3 1 0 0

Happy to see my book FALSE on the shelf in a store on the other side of the planet.

1 week ago 6 0 0 0
Disinformation supports corruption (YouTube video by Cambridge Disinformation Summit)

Disinformation is often a preparatory act for an act of harm or exploitation.

In other words, disinformation supports corruption.

Academic research and independent journalism offer some of the few remaining anti-corruption guardrails.

youtu.be/3fye7X6TtVw

1 week ago 155 42 0 3

It was an absolute pleasure to host the Mayor of London Sir Sadiq Khan @london.gov.uk at the Cambridge Disinformation Summit. A powerful speech about the dangers of online disinformation and how it can translate to offline harm.

Fantastic leadership from one of the world's greatest cities. 👏

1 week ago 31 10 1 1

I, for one, look forward to the day when false claims and bullshit takes are, once again, no longer given a podium and amplified.

How about you?

1 week ago 4 0 0 0

What can be done? Simple… don’t use AI.

I don’t even particularly like human co-authors, so not using a digital plagiarist is easy for me.

1 week ago 5 1 1 0
https://www.nytimes.com/2026/04/07/technology/google-ai-overviews-accuracy.html

Google’s A.I.-generated answers look authoritative, but they draw on an array of sources, from trustworthy sites to Facebook posts, report Tripp Mickle, Cade Metz, Dylan Freedman, Teresa Mondría Terol and Keith Collins. archive.ph/wbzQy

1 week ago 2 2 0 0

"To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago."
If you think it is necessary to be on Twitter to communicate your fact-based worldview, the reality is that worldview is being smothered.

1 week ago 10879 3277 106 98
We Talked to a Writer Accused of Publishing An AI-Generated Essay in The New York Times
She was accused of publishing AI slop in The New York Times. She says she didn't use AI to generate content — but did use AI to get published.

NEW: I talked to Kate Gilgan, a writer who was publicly accused of publishing “AI slop” in the NYT’s “Modern Love” column.

She says she didn’t use AI to generate the piece in question. But she did use it to help her get published.

futurism.com/artificial-i...

1 week ago 21 6 1 2

Yes, though humans aren’t great at it either… as Massimo suggests, this is why we have trusted databases of peer-reviewed papers like PubMed.

1 week ago 1 0 1 0

Well, in my book, I do complain about preprints... publication repositories (of papers that have not been peer reviewed or published) have gotten out of control. I also talk about the infamous Sokal affair, which is reminiscent.

But that seems like a small concern compared to the larger GIGO issue.

1 week ago 0 0 1 0

LLMs don't "pull from sources we assume are reliable" per se.

They aren't designed for accuracy; they're designed to sound like a human responding -- accordingly, they draw heavily on sources like social media and Reddit, for example.

Garbage in, garbage out.

1 week ago 2 0 2 0