All materials are openly licensed (CC-BY) on Figshare. Use them, adapt them, build on them.
👉 doi.org/10.6084/m9.f...
If you're helping faculty navigate AI at your institution, I hope this is useful.
Posts by Lorena A Barba
The session walks through:
→ How tools like deep research actually work behind the scenes
→ When to let AI run autonomously vs. when to stay hands-on
→ How to connect AI to the tools you already use
→ How to write down your expertise so AI can follow it consistently
I just published open materials from our "AI Academy" for engineering faculty.
Session 4 tackled the big shift: AI now does much more than answer your questions. It can go away, do research, and come back with a deliverable—faculty need the framework to understand this.
As Brian Granger said in his keynote: "The Jupyter notebook format is the ideal document for AI."
And if we're going to add AI to computational workflows, we need to preserve the transparency that makes science trustworthy.
Full analysis on LinkedIn:
www.linkedin.com/feed/update/...
Current multi-agent frameworks (LangChain, AutoGen) orchestrate agents in the background.
User prompts → agents debate invisibly → result appears.
Jupyter AI makes agent coordination transparent. Human-in-the-loop, all the way down.
This respects what #Jupyter actually is: a medium for dialogue. Not a REPL with a GUI. Not an IDE with a chatbot bolted on. A conversational space where humans negotiate truth through computing.
Here's the genius: they're using @jupyter.org's Real-Time Collaboration (RTC) protocol—originally designed for humans—as the rail for AI interaction.
AI agents get cursors and chat privileges, just like human collaborators. They just "enter the chat" when needed.
In the demo: a Code-Cell-Editor agent encountered a statistics problem. It knew its limitations, so it @-mentioned a Statistician persona. The handoff happened visibly in the notebook—not in some hidden backend.
The model: agents are teammates, not black boxes.
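The visible-handoff idea above can be sketched in a few lines. This is a minimal toy, not the Jupyter AI or RTC API: every class and name here (SharedChannel, Agent, the @-mention convention) is a hypothetical illustration of the pattern, assuming agents post to one shared transcript that humans can read.

```python
import re

class SharedChannel:
    """Every message lands in a visible transcript, like a notebook chat."""
    def __init__(self):
        self.transcript = []  # list of (sender, text) pairs, all in the open
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def post(self, sender, text):
        self.transcript.append((sender, text))
        # Deliver to any @-mentioned agent -- the routing itself is visible,
        # because the mention sits in the shared transcript.
        for name in re.findall(r"@(\w+)", text):
            if name in self.agents:
                self.agents[name].receive(sender, text)

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills
        self.last_request = None

    def handle(self, channel, task):
        if task in self.skills:
            channel.post(self.name, f"Done: {task}")
        else:
            # Knows its limits: hand off by @-mentioning a specialist persona.
            channel.post(self.name, f"@Statistician can you help with {task}?")

    def receive(self, sender, text):
        self.last_request = (sender, text)

channel = SharedChannel()
coder = Agent("CodeCellEditor", skills={"write-numpy"})
stats = Agent("Statistician", skills={"anova"})
channel.register(coder)
channel.register(stats)

coder.handle(channel, "anova")  # outside its skills -> visible handoff
print(channel.transcript[-1])
```

The design point this toy makes: the handoff is just another message in the shared transcript, so a human collaborator can watch it, interrupt it, or answer the @-mention themselves.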
If you use AI daily, you know "chat fatigue":
prompt ➡ wait ➡ read ➡ re-prompt ➡ repeat
Constantly context-switching between your work and a chat sidebar. Jupyter AI's new direction avoids this entirely.
At #JupyterCon, I saw the future of AI collaboration for data science and beyond—and it's not in a chat window.
Abigayle Mercer and Zach Sailer demoed an experimental new version of Jupyter AI, and it changes the plot for multi-agent systems.
A thread with my thoughts… 🧵
The serious conversations about pedagogical approaches that harness AI productively rather than making human learning obsolete 𝘢𝘳𝘦 𝘯𝘰𝘵 𝘺𝘦𝘵 𝘩𝘢𝘱𝘱𝘦𝘯𝘪𝘯𝘨! How are other educators navigating this?
🎥 Check out this short video demo—and I'm curious to hear your thoughts:
youtu.be/6q5CtXD5koY
We're facing a 𝘄𝗶𝗰𝗸𝗲𝗱 𝗰𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲: motivating genuine learning when the tools have become this capable. Assignments should foster critical thinking rather than just test procedural knowledge. But the pace and the scale of the changes required are 𝘮𝘰𝘯𝘴𝘵𝘳𝘰𝘶𝘴.
What I witnessed:
– The AI assistant analyzed the entire Jupyter notebook
– Executed code cells autonomously
– Wrote proper NumPy functions
– Solved exercises completely
– Even explained its reasoning
All faster than students could read the problem statement.
𝗖𝗼𝗺𝗲𝘁 𝗿𝘂𝗻𝘀 𝗰𝗶𝗿𝗰𝗹𝗲𝘀 𝗮𝗿𝗼𝘂𝗻𝗱 𝗝𝘂𝗽𝘆𝘁𝗲𝗿
Friends, I watched an AI agent complete my students' coding exercises in real time, and I'm shook 🤯
I tested Perplexity's Comet browser on a Jupyter notebook with eigenvalue problems I assigned to my class… (video at the bottom of this thread)
This raises a critical question: Are certain genres of scholarly work—especially simple literature reviews or trend summaries—no longer valuable as original scholarship? It's time for academia to rethink what we consider "original."
What do you think? #PeerReview
The paper's abstract promised a "review of key trends" in AI for engineering education.
My reasoning for immediate rejection: anyone can generate this with a single, well-crafted prompt. I tested it in Gemini 2.5 Pro, and the results were stunningly good and likely similar to the submitted article.
I just rejected a paper because it was pointless in the age of AI.
For the first time, I recommended a manuscript be rejected because its content was so easily replicable by an AI model with deep research capabilities.
#AcademicPublishing #AIinResearch
Bonus:
In support of my statement in highlight 1), above, you need to see this video by Prof. Giordano Scarciotti of Imperial College London (posted May 10, 2025):
youtu.be/lSbnMBb6INA
7) Beware wearables: Smart glasses can photograph exam questions and get live AI answers. Your syllabus needs wearables policies NOW.
Bottom line: Students will use AI anyway. Let's redesign courses to help them thrive with it.
Full piece: doi.org/10.6084/m9.f...
8/8
6) The AI polarity: It can amplify learning OR create cognitive laziness. The difference is in how we design assignments and teach usage. How might you design for active AI use and promote user patterns that result in positive outcomes?
7/8
5) AI is much more than chat. It's autonomous agents doing research, filling forms, completing coursework. One prompt = entire literature review. Will you change expectations of what students do in your class?
6/8
4) Entry-level jobs were down 15% in 2024, and unemployment for new grads hit a 4-year high this year. Meanwhile, companies now require AI use on their teams and are conducting AI-enabled job interviews. Are we preparing students for this?
5/8
3) Students perceive that they know AI better than faculty (they're probably right). This gap is creating stress and missed opportunities for everyone. What are you going to do about this?
4/8
2) ChatGPT now has "Study Mode" and Gemini has "Guided Learning"—both promise Socratic tutoring versus immediate answers. But will students choose the hard path when instant answers are a click away? (They can turn Study mode on/off!) 🤔
3/8
1) Google just gave all US college students free access to Gemini Pro; all they need is an .edu email for verification. This means the validity of your take-home assignments is cooked: Gemini can do complex work for students and they don't need to think…
2/8
🧵 Fall 2025 faculty: I wrote 7 pages on what you need to know about AI before classes start in a week or two—it's posted as PDF in the @figshare.com service under CC-BY: link below… Here are the highlights:
1/8
Good benchmarks allow a user to choose the best method for a particular application. “But the first question is, what do we mean by ‘better’?” Fascinating story by @drmichaelbrooks.bsky.social on benchmarking in #AI. Thx @labarba.bsky.social for the suggestion! 🧪 www.nature.com/articles/d41...
Had a hallway conversation at #SciPy2025 about #GenAI in coding education → one week later, a (partial) solution is shipped in @googlecolab.bsky.social 🤯
The magic of conference serendipity never gets old. Longer post with story on LinkedIn: www.linkedin.com/feed/update/...
Presentation slides for my talk yesterday at #SciPy2025
Barba, Lorena A. (2025). Embracing GenAI in Engineering Education: Lessons from the Trenches. figshare. Presentation. doi.org/10.6084/m9.f...
Here's a practical intro for anyone who values simplicity, openness, and wants to build their own online presence the #OpenSource way.
Full tutorial (~40 min): youtu.be/j-tXer7dIes