"A kneecapped Wayback Machine isn’t just bad news for accountability journalism—it will also be a blow to the legal system, as pages archived by the tool are frequently cited as evidence in litigation across the United States."
Posts by Om B. Bhatt
Excited to launch the accompanying free RLHF Course for my book. To kick it off, I've released:
- Welcome video
- Lecture 1: Overview of RLHF & Post-training
- Lecture 2: IFT, Reward Models, Rejection Sampling
- Lecture 3: RL Math
- Lecture 4: RL Implementation
Landing page: rlhfbook.com/course
Publishers say they’re blocking the Internet Archive because of AI scraping. But shutting out a nonprofit library won’t stop AI—it will damage the public’s best record of the web.
This year, I had the privilege of mentoring four highly motivated Iranian students through #Neuromatch. Despite the challenges of war, unstable internet, and disrupted research, we tried to continue our project together.
Today I received a note from a grad student who lives in Tehran. Her note gives you firsthand experience of what it’s like to live in a city that is being bombed, and what it’s like to be young and feel despair about your future.
rezashadmehr.blogspot.com/2026/03/hope...
A black and white photo of Grace Hopper standing in front of a computer bank. She is wearing a dark knee-length dress and has glasses. She is holding a piece of paper that says "COBOL" in her right hand, and flipping a switch on the computer with her left.
Computer science pioneer and United States Navy rear admiral Grace Hopper was born #OTD in 1906. 🧪 👩‍🔬
As far as I am aware, she is the only person who has both a supercomputer and a US Navy destroyer named after her. (1/n)
Image: Computer History Museum
I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 📕👇
Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!
Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!
Why don’t neural networks learn all at once, but instead progress from simple to complex solutions? And what does “simple” even mean across different neural network architectures?
Sharing our new paper @iclr_conf led by Yedi Zhang with Peter Latham
arxiv.org/abs/2512.20607
Every research project starts with a question and ends with a folder called “final_FINAL_reallyfinal2”
In conclusion, this should win Best Picture:
I would describe it as a high dimensional container of a lot of different immiscible liquids.
"Go get me new ideas in X" - samples from immiscible pools. Nothing new.
"Examine X with Y lens. Now, relax assumption Z" - emulsification. Samples new space.
Intensely moving. I was effortlessly knocked out of my default 'bsky research paper mode' into heavy, heavy introspection. And I'm so grateful for that
Why isn’t modern AI built around principles from cognitive science or neuroscience? Starting a substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question, as part of a first series of posts giving my current views on the relation between these fields. 1/3
If A Machine Tells You: 'I Shall Come Back To Finish Those Calculations For You', Then Goes Away And Does Not Return, Did It Break A Promise Or Did It Break Down?

Report on Why? Competition, Problem No. 3

There were two entries to this competition. One, from Oxford, declared in verse:

Your alternatives offered are not
An at all contradictory lot
Nor are they contrary;
We urge you be wary:
Your faithless machine just forgot!

But does a machine really tell you anything?! Or, if a machine makes an utterance, is it an utterance or to be regarded as speech? If speech, is it supposed to be made directly by the machine or indirectly by its maker? If it is made indirectly by the maker (by pushing buttons, etc.) then it is not the speech (or utterance) of the machine but the speech (not only utterance) of the maker. If so, he can't say the sentence in question (a) because he is not going anywhere, (b) because if you promise yourself something, you can't use the second person ('I shall finish those calculations for you'); therefore the sentence should either be rephrased or abandoned as twice meaningless. Otherwise the problem depends on the relation between the above utterance and the Cogito. For only if the machine utters this proposition after declaring 'Sum ergo cogito' can it be meaningful speech. If the machine neglected first to establish its resemblance to the maker (see Gen. i.26 and other metaphysicians) then it is not an independent actor, ergo non cogitat et quod sequitur non loquitur either. These fundamentals established, the rest should be worked out by those concerned with moral technology. (Remark. If you are a philosopher, you had better not mess about with machines anyhow.)
This is from a satirical magazine written by Anthony Kenny and Julius Kovesi in Oxford, 1958-59. I have reviewed so many papers in the last 5 years that are basically just this.
Sunday read:
"Philosophy as Conceptual Engineering"
Amie L. Thomasson proposes that philosophy can focus on conceptual engineering: improving, refining, and constructing concepts to better serve social, legal, and scientific purposes.
#Philosophy
www.thephilosopher1923.org/post/philoso...
A study reveals the Network of Theseus (NoT), which enables neural networks to shift trained architectures into new models without performance loss. This method decouples training from inference, promoting efficient AI designs and exploration of architecture options. https://arxiv.org/abs/2512.04198
A tweet from @midware_midwife reads “gemini ur not supposed to say yes………” Below it is a chat screenshot showing Caravaggio’s painting Narcissus—a young man staring at his reflection in the water. The message next to it says “this is us fr fr,” and the AI replies, “Yes. That is exactly us.”
haven't seen this posted here, looks like maybe a big deal, crushes a bunch of stuff. arxiv.org/abs/2506.21734
Very excited to be presenting my work "Estimating and Correcting Yes-No Bias in Language Models" (done with @neuranna.bsky.social) at the poster session @ #CogSci2025 today! Please come check it out starting 1pm 🙏🏻!