Lots of tools, like GroundedAI, can help, but I think this is far from a solved problem unless you build in checking steps or use a specialist model/tool. Even then, those checking & validation steps may not work that well if done simplistically, such as just asking the model to check refs.
Posts by PubTech Radar
Not sure about this. Logically you would think so, but my sense is some of the newer mainstream models aren't improving in this area. Especially if you specifically ask for a reference to support X, it's very hit and miss what you'll get - DOIs are often incorrect even when the reference is right.
Small rant. What on earth has happened to #ChatGPT? Conversation with 9 straightforward questions: 8 incorrect answers & 1 partially correct. 7 links to supposedly verified sources (after problems in the previous round of conversation) - all hallucinated. Every conversation I have seems problematic ☹️
It was utterly fascinating to get screenshots of some student project work that I was loosely involved with. I didn't recognise any of the (not uni-provided) tech platforms being used. At one point, it seemed everyone had Kimi on their phones (atypical group though & prob just latest trend).
I agree, I doubt many think that way. My interaction with students is incredibly limited, but the few I do interact with are typically tech-related Chinese students at UK Unis who are 100% embedded in Chinese tech.
Gosh, I am full of the joys of spring today 😉. I went to the talk posted by @eve.gd (bsky.app/profile/eve....), and I'm still pondering where we might end up - there definitely aren't enough diverse voices developing this tech.
Bit of an aside, but I can also imagine staff whose teaching effectiveness gets quietly assessed through evidence of what students are asking in their AI. (These are IT systems rather than library systems, after all.)
There's also a safeguarding angle – what's a university's responsibility if a student becomes harmfully overdependent on an AI system it endorses, and I assume monitors? I assume there will be a cohort that embraces that monitoring, and an equal cohort challenged by it.
Given that your AI history, personalisation & projects aren't particularly portable, do you really want to put a lot of work into a system you can't take with you when you leave? I've been part of some interesting business discussions about who owns this work & how much, if anything, you can take with you to a new job.
Non-university AI seems sensible to me, except for 100% legit academic work. I suspect we'll see some horror stories about students being flagged for 'non-compliant' academic use under policies they've never read, and possibly retrospective log-trawling to prove a case.
It's an interesting post. I wasn't sure if the argument was that students are stopping using AI, or switching to non-university AI, or some combination of both happening, but to different groups.
Worth spending time with this article from Scott Cunningham:
Summary and my thoughts here: substack.com/@pubtechrada...
Original article here:
substack.com/home/post/p-...
Both I think - definitely Windows. It’s an excel add-in. I have no real use for it so was just playing around.
Have you tried the (beta) Claude plugin for Excel yet? Wondering if it's me or problems at Claude's end, but it's been really shockingly bad on simple tasks like drawing a graph and doing some calculations - it supports all the AI naysayers' worst fears about hallucinations.
I'm a beginner NotebookLM user, but as a person doing basic research tasks with a set of docs, it's incredibly useful & I would have loved it as a student. It's not perfect; more carefully worded questions give better results, and yes, image outputs can be riddled with errors, but still very useful.
I read this article and had questions about the framing/accuracy of a couple of points. I haven't commented & don't feel fully confident responding, as I'm not an expert in technical details. A bit hypocritical, I know, but I'd want someone to do the same for my posts when I get things wrong.
I’m working on a small, experimental tool that creates narrative, multi-signal overviews for scholarly books (citations, usage, attention).
Looking for a few (university) presses (and others) to test an MVP and give candid feedback.
Open data. No black boxes. DM me.
#scholarlypublishing #books
AI & Me - a snapshot of how people in publishing are using AI right now.
Please fill out my AI usage survey if you work in publishing.
Which models are you using? How often? What's delivering value? Takes under 5 minutes. Contributors get early access to results before I share them more widely towards the end of the month: ai-and-me.innovationideas.co.uk
This is a strange book. Lots of interesting points, but I didn't get the conflation of LLM AI with all uses of AI - there are lots of good uses of machine learning. The "AI is bad" framing was overdone, and I think it weakened their arguments. If 50+% of US adults are using LLMs, it's hard to argue they aren't useful.
Love this demo of Crixet (now Prism). For all the talk of robotic labs and AI accelerating science, the reality is Victor Powell, the developer, in his kitchen, minding the baby (the high-value work), while talking to an AI tool to handle the writing: www.youtube.com/watch?v=ce-S...
@mrstew.bsky.social might know.
I think you're being a bit harsh on yourself! There's a world of difference between someone with genuine expertise sharing knowledge and the AI Grifter crowd.
I love these! I think I'm probably closest to The Tool Magpie - though hopefully the more discerning end of the spectrum :-)
🚨 New issue of PubTech Radar is out: lnkd.in/eaCsVfja
British Library staff asked for a decent pay. Instead they got ‘a few money-saving present ideas’, such as ‘consider not giving presents this holiday season’. They are on strike this week. I wrote about it for @lrb.co.uk. www.lrb.co.uk/blog/2025/de...
Photo of an AI-generated whiteboard about the Llama 3 herd of models paper
From Pietro Schirano (@skirano) on X writing about Nano Banana Pro: "Here’s my favorite use case so far: take papers or really long articles and turn them into a detailed whiteboard photo."
How to video: x.com/skirano/stat...
😀
“everything produced by an AI is a hallucination…” is becoming a ritual disclaimer. Hopefully, people will drop this one soon.
Google Scholar gets into the "AI powered" space. Assuming this can use all the full text they have indexed, this might be a game changer. The timing of this release maybe suggests Gemini 3 is being used? scholar.googleblog.com/2025/11/scho... . Apparently some hit a waitlist, though I have access (1)
A new report from Scholastica and Maverick Publishing surveyed 83 small and medium journal publishers from 21 countries about their tech challenges: lp.scholasticahq.com/technology-n...
Some stats:
* Current AI adoption is limited. Only 8% of publishers are using AI tools extensively.