
Posts by Erin LeDell

2 weeks ago
Bluesky's 2025 $100M Series B Lays Foundation for Open Social Web - Bluesky In April 2025, Bluesky raised $100 million in Series B funding led by Bain Capital Crypto. Since our Series A, we've grown from 13 million to over 43 million global users.

Last April, we raised $100M in Series B funding. This round gives us the ability to scale our team to meet the rapid growth of Bluesky and the AT Protocol. Read more: bsky.social/about/blog/0...

1 month ago
Carmen Sandiego from the 90s cartoon

Happy International Women's Day to the original International Woman

1 month ago
Admiring Our Heroes for International Women’s Day: Five Women In Tech In honor of International Women’s Day, we asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us. Anna Politkovskaya Jillian

This International Women’s Day, five women at EFF talk about the women who have inspired them. www.eff.org/deeplinks/2...

1 month ago

1. LLM-generated code tries to run code from online software packages. Which is normal but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
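One cheap way to guard against this pattern, sketched here as my own illustration (not from the post): since the squatted package really does exist under the hallucinated name, a bare "does it exist on PyPI?" check is not enough, but a very young project with an LLM-suggested name is a red flag worth reviewing before `pip install`. The function names below are illustrative; the endpoint queried is PyPI's public per-project JSON metadata API.

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

def pypi_first_upload(package):
    """Earliest upload time for `package` on PyPI, or None if unregistered."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except OSError:
        return None  # 404 etc.: the name has no PyPI project at all
    times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    return min(times) if times else None

def is_suspect_age(first_upload, now=None, min_age_days=90):
    """Flag dependencies that are unregistered or only recently registered."""
    if first_upload is None:
        return True  # hallucinated name: nothing (yet) behind it
    now = now or datetime.now(timezone.utc)
    return (now - first_upload) < timedelta(days=min_age_days)
```

Usage would be `is_suspect_age(pypi_first_upload(name))` for every dependency an LLM proposes; anything flagged gets a human look instead of an automatic install. Age is only one heuristic, and 90 days is an arbitrary threshold.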

1 year ago
A bright, greeny-white splotch in the centre of the image is the nucleus and coma of comet 3I/ATLAS. To the upper right streams the major dust tail. The background comprises distant stars against a black background, with the NGC 4691 galaxy at upper left.

This is an incredible image of comet 3I/ATLAS, taken by Satoru Murata of the ICQ Comet Observations group on 16 November 2025 from western New Mexico.

Structure within the major dust tail from the comet is clearly visible, together with two smaller jets trailing the nucleus and maybe even an anti-tail.

4 months ago

The way Python and R foster inclusion directly contributes to their success: joyful places to exist, a steady flow of new maintainers, and a delightful collection of niche tools empowered by wildly different expertise coming together.

Watch the new Python documentary for more on the PSF's work here

5 months ago
A national recognition; but science and open source are bitter victories I have recently been awarded France’s national order of merit, for my career, in science, in open source, and around AI. The speech that I gave carries messages important to me (French below;...

A speech about what drives me, how science and open source are bitter victories, unable to improve the world if society does not embrace them for the better:
gael-varoquaux.info/personnal/a-...

6 months ago
The official home of the Python Programming Language

TLDR; The PSF has made the decision to put our community and our shared diversity, equity, and inclusion values ahead of seeking $1.5M in new revenue. Please read and share. pyfound.blogspot.com/2025/10/NSF-...
🧵

5 months ago
Lilac-breasted Roller
Lillabrystet Ellekrage
Coracias caudatus
#birds #birding #Kenya #photography #nature #naturephotography #wildlifephotography #wildlife #ornithology #birdphotography #animalphotography

6 months ago

Look at that. And New Mexico is not a rich state. Just one that figured out some priorities.

7 months ago
Post image

Meta trained a special “aggregator” model that learns how to combine and reconcile different answers into a more accurate final one, instead of relying on simple majority voting or reward model ranking on multiple model answers.
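For contrast, the simple majority-voting baseline that the post says Meta moved beyond can be sketched in a few lines. This is only that baseline; the trained aggregator model itself is not reproduced here.

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most common answer among independently sampled model outputs.

    Ties are broken by first appearance, per Counter.most_common ordering.
    """
    (winner, _count), = Counter(answers).most_common(1)
    return winner

# Five sampled answers to the same question; voting keeps the modal one.
samples = ["42", "42", "41", "42", "40"]
print(majority_vote(samples))  # → 42
```

The weakness voting has, and that an aggregator model can address, is that it counts surface-identical strings rather than reconciling partially correct answers.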

7 months ago
highlighted text: language models are optimized to be good test-takers, and guessing when uncertain improves test performance

full text: 

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such “hallucinations” persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious—they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded—language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This “epidemic” of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.


Hallucinations are accidentally created by evals

They come from post-training. Reasoning models hallucinate more because we do more rigorous post-training on them

The problem is we reward them for being confident
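The incentive argument can be made concrete with a little expected-score arithmetic (my own illustration, not taken from the paper): under accuracy-only grading, any guess with a nonzero chance of being right beats abstaining, while adding a penalty for wrong answers can flip that.

```python
def expected_score(p_correct, wrong_penalty=0.0):
    """Expected benchmark score for answering: +1 if correct,
    -wrong_penalty if incorrect. Abstaining ("I don't know") scores 0."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # the model is only 30% confident in its guess
print(expected_score(p))                     # positive: guessing beats abstaining
print(expected_score(p, wrong_penalty=1.0))  # negative: abstaining is now optimal
```

With `wrong_penalty=0` (plain accuracy), guessing strictly dominates abstaining at every confidence level, which is exactly the pressure the thread describes.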

cdn.openai.com/pdf/d04913be...

7 months ago

Plugging something into the tiny computer that you keep in your pocket. The one that has all your passwords, information, and location... and giving control away to a random AI and a company you know nothing about…

7 months ago
Post image

A nasal spray reduced the risk of Covid infections in a double blind, placebo controlled randomized trial
jamanetwork.com/journals/jam...

7 months ago

It's fine if this is all seven overvalued companies in an AI trenchcoat, right?

Right?

7 months ago

"When in doubt, don't ask ChatGPT for health advice."

8 months ago
AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

“The AI in the study probably prompted doctors to become over-reliant on its recommendations, ‘leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,’ the scientists said in the paper.”

8 months ago
Post image

Somebody on LinkedIn said what we're all thinking.

8 months ago

I have some academic lady friends I’ve known for 20+yrs. This industry can be so cold, competitive, and selfish, but these women are so kind, generous, steadfast, and fun. We’ll sometimes get busy and go months without chatting, then reconnect as if no time has passed. I’m so grateful for them…

8 months ago
Post image

"Capitalism is temporary. Dykes are forever"
Seen in NYC

9 months ago

Maybe if your country is the wealthiest in the world but the richest tenth of the country have two thirds of the wealth and the bottom 50% only have 2.5% of the wealth, you don't have the wealthiest country in the world, you just have feudalism.

8 months ago
ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It Welcome to the era of ‘gaslight driven development.’ Soundslice added a feature the chatbot thought existed after engineers kept finding screenshots from the LLM in its error logs.

ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It

🔗 www.404media.co/chatgpt-hall...

8 months ago

Legends never die!

8 months ago
Post image

In a stunning moment of self-delusion, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.

9 months ago

It is *bananas* that they would give vibe coding tools (and _Replit_, of all platforms 🤣) production deploy access! With no backups! We gave better backup tools to teenagers on Glitch remixing apps a decade ago.

9 months ago

9 months ago
Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) .@Replit goes rogue during a code freeze and shutdown and deletes our entire database

This thread is incredible.

9 months ago
ChatGPT advises women to ask for lower salaries, study finds A new study has found that large language models (LLMs) like ChatGPT consistently advise women to ask for lower salaries than men.

Study finds A.I. LLMs advise women to ask for lower salaries than men. When prompted w/ a user profile of same education, experience & job role, differing only by gender, ChatGPT advised the female applicant to request $280K salary; Male applicant=$400K.
thenextweb.com/news/chatgpt...

9 months ago

make art! if it's the end of the world, you might as well make art! if it's not the end of the world, then the future will be better because people made art right now!

9 months ago