and in “Memory-Statistics Tradeoff in Continual Learning with Structural Regularization,” Vladimir Braverman, @uuujf.bsky.social, and more study the statistical performance of a continual learning problem with two linear regression tasks in a well-specified random design setting: (12/12)
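To make the setting concrete, here is a minimal numpy sketch of sequential linear regression with a quadratic structural penalty, the general flavor of regularization the paper analyzes; the variable names, penalty form, and constants below are our illustration, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 200

# Two related linear regression tasks in a well-specified random design:
# y = X @ w_true + noise, with different ground-truth weights per task.
w1_true = rng.normal(size=d)
w2_true = w1_true + 0.3 * rng.normal(size=d)  # task 2 perturbs task 1

def make_task(w_true):
    X = rng.normal(size=(n, d))               # random design
    return X, X @ w_true + 0.1 * rng.normal(size=n)

X1, y1 = make_task(w1_true)
X2, y2 = make_task(w2_true)

# Task 1: plain least squares.
w1_hat = np.linalg.lstsq(X1, y1, rcond=None)[0]

# Task 2: least squares plus a quadratic structural penalty toward w1_hat,
#   min_w ||X2 w - y2||^2 + lam * ||w - w1_hat||^2,
# whose closed form is (X2'X2 + lam I)^{-1} (X2'y2 + lam w1_hat).
lam = 5.0
w2_hat = np.linalg.solve(X2.T @ X2 + lam * np.eye(d),
                         X2.T @ y2 + lam * w1_hat)

# Larger lam remembers task 1 better but fits task 2 worse.
print("distance to task-1 truth:", np.linalg.norm(w2_hat - w1_true))
print("distance to task-2 truth:", np.linalg.norm(w2_hat - w2_true))
```

Sweeping lam from 0 upward traces a memory-statistics tradeoff in miniature: remembering task 1 costs statistical accuracy on task 2.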
.@shunchi.dev, @danielkhashabi.bsky.social, @jienengchen.bsky.social, & more introduce the first open platform that benchmarks world 🌎 models in a closed-loop world that mirrors real agent-environment interactions in “World-in-World: World Models in a Closed-Loop World”: (11/12)
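For intuition, closed-loop evaluation differs from open-loop in that the world model's own outputs feed back into the agent's next decision, so errors can compound. A toy sketch (our illustration; the platform's actual API and models are not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(obs, action):
    # Stand-in learned dynamics: a fixed linear map plus the action.
    return 0.9 * obs + action

def agent(obs):
    # Stand-in policy: push the observation toward zero.
    return -0.5 * obs

obs = rng.normal(size=3)
for t in range(5):
    action = agent(obs)             # the agent reacts to the model's output...
    obs = world_model(obs, action)  # ...and the model reacts to the agent
    print(f"step {t}: |obs| = {np.linalg.norm(obs):.3f}")
```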
“Massively Multimodal Foundation Models: A Framework for Capturing Interactions with Specialized Mixture-of-Experts” by Xing Han, Suchi Saria, and researchers from @utaustin.bsky.social and @mit.edu proposes a framework that quantifies temporal dependencies to guide mixture-of-expert routing: (9/12)
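As a rough illustration of mixture-of-experts routing, the sketch below folds an assumed per-expert "dependency" score into the gate logits; the framework's actual routing signal and architecture are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 8, 3
x = rng.normal(size=d)                         # fused multimodal features

W_gate = rng.normal(size=(n_experts, d))       # gating network (toy)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

dependency_bonus = np.array([0.5, 0.0, -0.5])  # assumed cross-modal scores
logits = W_gate @ x + dependency_bonus
weights = np.exp(logits - logits.max())
weights /= weights.sum()                       # softmax gate

# Output is the gate-weighted combination of expert outputs.
y = sum(w * (E @ x) for w, E in zip(weights, experts))
print("gate weights:", np.round(weights, 3))
```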
In “Generative Blocks World: Moving Things Around in Pictures,” @anandbhattad.bsky.social and UIUC researchers let users interact with the scene of a generated image by manipulating simple geometric abstractions 🔷: (8/12)
Aayush Mishra, @danielkhashabi.bsky.social, and @aliu33.bsky.social ask whether the internal computations 💻 of in-context learning can be used to improve the quality of supervised fine-tuning in “IA2: Alignment with ICL Activations Improves Supervised Fine-Tuning”: (7/12)
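The core idea can be caricatured as an auxiliary alignment term added to the fine-tuning loss. A toy numpy sketch, with placeholder activations and an assumed weight alpha (not the IA2 objective itself):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
sft_acts = rng.normal(size=(4, 16))  # hidden states during the SFT forward pass
icl_acts = rng.normal(size=(4, 16))  # hidden states with ICL demonstrations

ce_loss = 2.3                        # placeholder task loss
alpha = 0.1                          # alignment weight (assumed)
total = ce_loss + alpha * mse(sft_acts, icl_acts)
print(f"total loss = {total:.3f}")
```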
“MOBODY: Model Based Off-Dynamics Offline Reinforcement Learning” by Yihong Guo, @aliu33.bsky.social, Yu Yang, and @pan-xu.bsky.social proposes an algorithm that learns the target domain’s dynamics and optimizes a policy on transitions generated from that learned model, letting it explore the target domain 🎯: (6/12)
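A minimal model-based sketch of the general recipe, assuming a linear dynamics model and a placeholder policy (not MOBODY's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
ds, da = 4, 2  # toy state/action dimensions

# Offline target-domain transitions (s, a, s'): scarce in off-dynamics RL,
# so we fit a dynamics model and roll the policy out inside it.
S = rng.normal(size=(500, ds))
A = rng.normal(size=(500, da))
true_M = 0.3 * rng.normal(size=(ds + da, ds))
S_next = np.hstack([S, A]) @ true_M + 0.05 * rng.normal(size=(500, ds))

# Fit a linear dynamics model by least squares: s' ≈ [s, a] @ M_hat.
X = np.hstack([S, A])
M_hat = np.linalg.lstsq(X, S_next, rcond=None)[0]

def policy(s):
    # Placeholder policy; a real method would optimize this on model rollouts.
    return -0.1 * s[:da]

# Roll the policy out in the *learned* target dynamics to explore the
# target domain without collecting new real transitions.
s = rng.normal(size=ds)
synthetic = []
for _ in range(20):
    a = policy(s)
    s_next = np.concatenate([s, a]) @ M_hat
    synthetic.append((s, a, s_next))
    s = s_next
print(f"generated {len(synthetic)} synthetic target-domain transitions")
```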
In “Replicable Reinforcement Learning with Linear Function Approximation,” @optimistsinc.bsky.social, @marcelhussing.bsky.social, @mkearnsphilly.bsky.social, @aaroth.bsky.social, @sikatasengupta.bsky.social, & more develop replicable methods for linear function approximation in RL: (5/12)
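Replicability roughly means two runs on independent samples return the same answer with high probability. One standard tool behind such guarantees is randomized rounding onto a grid with shared randomness; a sketch of that general technique (not the paper's specific method):

```python
import numpy as np

def replicable_round(w, grid=0.5, seed=42):
    """Round each coordinate to a grid whose offset comes from shared
    randomness; nearby estimates from two runs snap to the same point."""
    offset = np.random.default_rng(seed).uniform(0, grid, size=w.shape)
    return np.round((w - offset) / grid) * grid + offset

# Two runs estimate linear value-function weights from different samples...
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
run1 = w_true + 0.05 * rng.normal(size=3)
run2 = w_true + 0.05 * rng.normal(size=3)

# ...but after rounding with shared randomness they usually agree exactly.
print(replicable_round(run1))
print(replicable_round(run2))
print("identical:", np.allclose(replicable_round(run1), replicable_round(run2)))
```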
• “An Open Simulation Platform for Team Training in Robotic Surgery,” which presents an open-source simulation platform for training a robotic 🤖 surgery team
Then, at ICLR... (4/12)
• “Console-Free Mixed Reality Teleoperation of the da Vinci Research Kit,” which presents a novel mixed-reality-based teleoperation and visualization framework for the da Vinci Research Kit
• “A Robotic Simulation Environment for Ultrasound Imaging of Soft Tissue” (3/12)
First, at ISMR, Peter Kazanzides will present the following papers w/ @jhu.edu, @vanderbilt.edu, & Politecnico di Milano researchers:
• “An Effectiveness Study of Dithering for Improved Force Estimation on the dVRK-Si System” (2/12)
Check out the work our researchers will be presenting at #ISMR2026 🩺🤖 and @iclr-conf.bsky.social next week! 🧵 (1/12)
Congratulations! Weiting “Steven” Tan successfully defended his dissertation “Towards Multimodal Conversational AI: Understanding, Reasoning, and Generation” under the guidance of advisor Philipp Koehn. Steven plans to continue his research in industry. We in the department are extremely proud of our students who have successfully completed their PhD. Congratulations on this achievement and best wishes as you begin an exciting new phase of life!
Congratulations, Steven!
Consistently approaching core problems through the lens of computational theory, Raman Arora and his students publish regularly in venues like @icmlconf.bsky.social, where they have introduced new methods to improve the safety and utility of modern AI systems. Learn more about their recent advances:
"Lantern lowers the barrier to entry, financially and technically, so more researchers, students, and communities can explore robotics applications." (6/6/)
Learn more about Lantern 🏮 here:
“If interactive robots only exist in high-tech labs, we limit who gets to innovate with them and who benefits from them,” explains @victor-antony.bsky.social. (5/6)
They discovered that, despite its simplicity, Lantern can still deliver compelling experiences, suggesting that meaningful human-robot interaction doesn’t always require complexity—sometimes subtle motion and touch are enough. (4/6)
To see whether this simple robot could still support meaningful human-robot connection, real applications, and expressive engagement, its developers tested it in real-world contexts to learn how people interacted with it and what kinds of meanings they projected onto it. (3/6)
Many social robots used in HRI research are expensive, complex, and accessible only to well-funded labs, but Lantern is an open-source platform that costs just $40 to make. It can expand and contract like it’s breathing, vibrates gently, and can be customized with different outer materials. (2/6)
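For a feel of how little code a "breathing" behavior takes, here is a toy loop mapping a slow sinusoid to a servo angle; the constants and hardware-free print loop are our assumptions, not Lantern's actual firmware:

```python
import math

# A slow sinusoid mapped to a servo angle makes a soft shell expand and
# contract like breathing.
BREATH_PERIOD_S = 4.0          # one inhale + exhale
MIN_ANGLE, MAX_ANGLE = 20, 70  # assumed servo travel

def breath_angle(t):
    phase = math.sin(2 * math.pi * t / BREATH_PERIOD_S)  # -1..1
    return MIN_ANGLE + (MAX_ANGLE - MIN_ANGLE) * (phase + 1) / 2

for step in range(8):  # print a few samples instead of driving hardware
    t = step * 0.5
    print(f"t={t:.1f}s  servo angle={breath_angle(t):5.1f}°")
```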
Accessible robotics matters—enter Lantern 🏮, a minimalist robotic platform for human-robot interaction (HRI) research and education. Designed by @victor-antony.bsky.social & other researchers, Lantern won the Sustainability Demonstration Recognition at the 2026 @hri-conference.bsky.social. 🧵 (1/6)
...Thus, my focus has shifted from only implementing quick instant answers to helping users conduct deeper research that AI can't replicate.” 🔎
Try out his demo here: sean2d.com/sec-demo
And learn more about the upcoming Hopstart competition here: engineering.jhu.edu/cle/hopstart/ (8/8)
...While they’re much better now, they unfortunately rely on AI to generate them—which makes them inherently unreliable. Plus, it’s annoying to use for research, since AI unnecessarily editorializes and summarizes information rather than highlighting quality human info... (7/8)
“When I first started coding my first prototype, generative AI wasn’t integrated everywhere like it is now,” Durkis-Dervogne says. “Before, most search engines’ instant answers were very lacking... (6/8)
“Once I experienced how nice it was having most of the internet accessible offline and without ads or distractions, I realized this project could be much more than that,” he says.
This was long before the integration of AI tools across the internet. 🌐 (5/8)
The idea sprang from one of Durkis-Dervogne’s first coding projects 🖥️ in high school, when he created a frontend that merged several sources of offline data into a single interface. (4/8)
You can view excerpts from relevant pages or even an entire article without ever leaving the results page, making the search experience feel more like reading a book 📖 than wading through a minefield of links and inaccurate #AI summaries. (3/8)
His engine works by taking publicly accessible, reliable sources—like Wikipedia, StackOverflow, and more—and converting them to minimal pages, making information quickly accessible in “zen mode”: faster, free from advertising and spam, and available offline. 🪷 (2/8)
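As a sketch of the "minimal page" idea, the hypothetical helper below fetches a page, strips non-content tags, and caches plain text for offline reading; it is our illustration, not the actual engine:

```python
from html.parser import HTMLParser
from urllib.request import urlopen
from pathlib import Path

class TextExtractor(HTMLParser):
    """Keep visible text; drop script/style/nav/footer subtrees."""
    SKIP = {"script", "style", "nav", "footer"}

    def __init__(self):
        super().__init__()
        self.skipping = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skipping:
            self.skipping -= 1

    def handle_data(self, data):
        if not self.skipping and data.strip():
            self.chunks.append(data.strip())

def save_minimal_page(url, cache_dir="cache"):
    """Fetch url, reduce it to plain text, and cache it for offline reading."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    out = Path(cache_dir)
    out.mkdir(exist_ok=True)
    name = url.rstrip("/").split("/")[-1] or "index"
    path = out / f"{name}.txt"
    path.write_text("\n".join(parser.chunks), encoding="utf-8")
    return path

# e.g. save_minimal_page("https://en.wikipedia.org/wiki/Stylesheet_language")
```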
The Search Engine Company UI. The search bar entry is "stylesheet languages." One result is shown, titled "Comparison of stylesheet languages, Wikipedia." There are four paragraphs of text scraped from the Wikipedia entry, with a "View full source" button at the bottom.
Hopstart, the annual startup competition hosted by the Johns Hopkins Center for Leadership Education, is this Friday! As part of our countdown, we’re featuring startup entries from JHU CS students—kicking off with Seán Durkis-Dervogne’s clutter-free search engine! 🧵 (1/8)
Computer Science & CLSP Seminar Series: Making LLMs Reason Better, Faster, and Longer. April 27, 2026, 12 p.m. 216 Hodson Hall. Mirella Lapata, University of Edinburgh.
Join us and @jhuclsp.bsky.social for a joint seminar featuring @edinburgh-uni.bsky.social’s Mirella Lapata! Learn more here: www.cs.jhu.edu/event/cs-cls...
Congratulations! Yiwen Shao successfully defended his dissertation “Leveraging Spatial Information for 1-Stage Target ASR Under Multi-Channel Multi-Speaker Scenarios” under the guidance of advisor Sanjeev Khudanpur. Yiwen plans to continue advancing speech and multimodal AI research in industry at Tencent HY, with a focus on audio encoders, representation learning, and large audio-language models. We in the department are extremely proud of our students who have successfully completed their PhD. Congratulations on this achievement and best wishes as you begin an exciting new phase of life!
Congratulations, Yiwen!