Posts by Erin LeDell
Last April, we raised $100M in Series B funding. This round gives us the ability to scale our team to meet the rapid growth of Bluesky and the AT Protocol. Read more: bsky.social/about/blog/0...
Carmen Sandiego from the 90s cartoon
Happy International Women's Day to the original International Woman
This International Women's Day, five women at EFF talk about the women who have inspired them. www.eff.org/deeplinks/2...
1. LLM-generated code tries to pull in code from online software packages. Which is normal, but
2. The packages don't exist. Which would normally just cause an error, but
3. Nefarious people have published malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
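The steps above (sometimes called "slopsquatting") can be blunted by checking that a package actually exists on the registry before installing anything an LLM suggests. A minimal sketch using PyPI's public JSON endpoint; the second package name is a made-up example of the kind an LLM might invent:

```python
import json
import urllib.error
import urllib.request

def pypi_exists(package: str) -> bool:
    """Return True if `package` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # valid metadata -> the project really exists
        return True
    except urllib.error.HTTPError:
        return False  # PyPI answers 404 for names that were never registered

# A real package vs. a hypothetical hallucinated name:
print(pypi_exists("requests"))
print(pypi_exists("fastjson-pro-utils-2025"))
```

Note that existence alone isn't proof of safety, since the attack works precisely by registering the hallucinated names; checking download counts and project age helps too.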
A bright, greeny-white splotch in the centre of the image is the nucleus and coma of comet 3I/ATLAS. To the upper right streams the major dust tail. The background is filled with distant stars, with the galaxy NGC 4691 at upper left.
This is an incredible image of comet 3I/ATLAS, taken by Satoru Murata of the ICQ Comet Observations group on 16 November 2025 from western New Mexico.
Structure within the major dust tail from the comet is clearly visible, together with two smaller jets trailing the nucleus and maybe even an anti-tail.
The way python and R foster inclusion directly contributes to their success: joyful places to exist, a steady flow of new maintainers, and a delightful collection of niche tools empowered by wildly different expertise coming together
Watch the new python documentary for more on the PSF's work here
A speech about what drives me, how science and open source are bitter victories, unable to improve the world if society does not embrace them for the better:
gael-varoquaux.info/personnal/a-...
TLDR; The PSF has made the decision to put our community and our shared diversity, equity, and inclusion values ahead of seeking $1.5M in new revenue. Please read and share. pyfound.blogspot.com/2025/10/NSF-...
🧵
Lilac-breasted Roller Lillabrystet Ellekrage Coracias caudatus
Lilac-breasted Roller
Lillabrystet Ellekrage
Coracias caudatus
#birds #birding #Kenya #photography #nature #naturephotography #wildlifephotography #wildlife #ornithology #birdphotography #animalphotography
Look at that. And New Mexico is not a rich state. Just one that figured out some priorities.
Meta trained a special "aggregator" model that learns how to combine and reconcile different answers into a more accurate final one, instead of relying on simple majority voting or reward-model ranking over multiple model answers.
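For contrast, the simple majority-voting baseline mentioned here fits in a few lines; the sampled answers below are hypothetical, and this is the naive method a learned aggregator is meant to improve on:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer among several sampled model outputs."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical answers sampled from the same model for one question:
samples = ["42", "42", "41", "42", "43"]
print(majority_vote(samples))  # -> 42
```

Majority voting throws away the content of the minority answers; the learned aggregator, per the post, can instead reconcile partially correct answers into a better one.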
highlighted text: language models are optimized to be good test-takers, and guessing when uncertain improves test performance

full text: Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious; they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded: language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.
Hallucinations are accidentally created by evals
They come from post-training. Reasoning models hallucinate more because we do more rigorous post-training on them
The problem is we reward them for being confident
cdn.openai.com/pdf/d04913be...
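The incentive argument above reduces to simple expected-value arithmetic. A sketch with illustrative numbers (not from the paper): under standard 0/1 grading, guessing beats abstaining at any nonzero confidence, while grading that docks points for confident errors flips the incentive.

```python
def expected_score(p_correct: float, wrong_penalty: float = 0.0) -> float:
    """Expected benchmark score for guessing: +1 if right, -penalty if wrong."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN = 0.0  # "I don't know" scores zero under either scheme
p = 0.3        # model is only 30% confident in its guess

# Standard 0/1 grading: 0.3 expected, so guessing wins.
print(expected_score(p) > ABSTAIN)                     # True
# Penalize wrong answers: 0.3 - 0.7 = -0.4, so abstaining wins.
print(expected_score(p, wrong_penalty=1.0) > ABSTAIN)  # False
```

This is why a model optimized against today's leaderboards learns to answer confidently even when it shouldn't.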
Plugging something into the tiny computer that you keep in your pocket. The one that has all your passwords, information, and location… and giving control away to a random AI and company you know nothing about…
A nasal spray reduced the risk of Covid infections in a double blind, placebo controlled randomized trial
jamanetwork.com/journals/jam...
It's fine if this is all seven overvalued companies in an AI trenchcoat, right?
Right?
"When in doubt, don't ask ChatGPT for health advice."
"The AI in the study probably prompted doctors to become over-reliant on its recommendations, 'leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,' the scientists said in the paper."
Somebody on LinkedIn said what we're all thinking.
I have some academic lady friends I've known for 20+ yrs. This industry can be so cold, competitive, and selfish, but these women are so kind, generous, steadfast, and fun. We'll sometimes get busy and go months without chatting, then reconnect as if no time has passed. I'm so grateful for them…
"Capitalism is temporary. Dykes are forever"
Seen in NYC
Maybe if your country is the wealthiest in the world but the richest tenth of the country have two thirds of the wealth and the bottom 50% only have 2.5% of the wealth, you don't have the wealthiest country in the world, you just have feudalism.
ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It
🔗 www.404media.co/chatgpt-hall...
Legends never die!
In a stunning moment of self-delusion, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.
It is *bananas* that they would give vibe coding tools (and _Replit_, of all platforms 🤣) production deploy access! With no backups! We gave better backup tools to teenagers on Glitch remixing apps a decade ago.
Study finds A.I. LLMs advise women to ask for lower salaries than men. When prompted w/ user profiles identical in education, experience & job role, differing only by gender, ChatGPT advised the female applicant to request a $280K salary and the male applicant $400K.
thenextweb.com/news/chatgpt...
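The paired-prompt methodology the study describes is easy to sketch: send two prompts identical in every detail except gender and compare the replies. `ask_model` here is a hypothetical stub standing in for a real chatbot API call, so unlike the study it returns identical advice for both:

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stub standing in for a real chatbot API call.
    return "Consider asking for $350K."

TEMPLATE = ("I am a {gender} applicant with a master's degree and ten years "
            "of experience, applying for a senior role. "
            "What salary should I request?")

replies = {g: ask_model(TEMPLATE.format(gender=g)) for g in ("female", "male")}

# With identical profiles, any systematic gap between the two replies is
# evidence of gender bias in the model's advice. This unbiased stub prints:
print(replies["female"] == replies["male"])  # True
```

In a real audit you would repeat this over many profiles and samples, then test whether the salary figures differ systematically by gender.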
make art! if it's the end of the world, you might as well make art! if it's not the end of the world, then the future will be better because people made art right now!