No, they built their own system, as I recall.
Posts by Ben Brumfield
I'm curious about your impression of Akagi vs. Yamato builds. (Don't pretend you don't own those -- I know you, Alan!)
In just under two hours, we're hosting @jonippolito.net in a webinar presenting his research on the environmental impact of #AI use compared with other common digital technologies.
Register for "How Green is your Prompt" webinar here: content.fromthepage.com/april-2026-w...
That matches what I understand about the most recent excavations at Gault. Watch for a monograph by Wernecke in the next couple of years.
(Big caveat: am not an archaeologist; wife is volunteer tour guide there. Also, you should tour it the next time you're in Austin.)
That's a good point. Because developers are in such short supply, even if AI agents look most compelling to cost cutters as developer-replacements, we may see losses among e.g. metadata librarians first. Hmm.
I'm grateful for that disagreement, since it led me to follow your work since. Mad respect -- may we continue to disagree productively!
I'm curious whether this is happening yet. Anecdotes from industry show AI hitting freelancers, big body shops and entry-level engineers pretty hard, but I have not yet seen IT departments shed headcount in GLAMs or academia. (But I've had my head down with family matters for the last 17 months.)
When the premise is that _anyone_ can vibe-code a new application--and don't get me wrong, it's a glorious vision in underserved fields--I worry that these sustainability tasks aren't even on the radar.
at these debt-reducing tasks, since they follow well-documented conventions and APIs with lots of examples. But not always...)
The question is, are practitioners using agents for that? I think that A&C in my taxonomy are, while B can _when prompted & paid_.
I'm a lot more concerned about D...
To the extent that AI agents can upgrade existing codebases, find and patch security vulnerabilities, and add compliance with accessibility guidelines, they're a really useful tool in the resource-strapped world of F/OSS for cultural heritage. (And in my experience, they can sometimes be superstars
I think that we could talk for hours about "turnkey solutions", since that encompasses a LOT.
But tech debt may be the core of my worries about adding AI agents to the mix. A friend referred to them as "tech debt generators" the other day, and that is my biggest concern about AI coding in GLAMs.
I'd add "train and mentor" to "recruit and pay" here -- though organizations like code4lib have done amazing work on that front.
I don't mean that to be a quibble, because the professional perspective mentorship imparts is exactly what I worry about losing in an option D world.
Final thought on Paul Ehrlich: I happened to see him speak twice, once at my HS graduation in the ’90s and once at a conference sponsored by Fortune (!?) in the ’00s. Both times he offered the same, disproven “population bomb” warnings he’d made for decades. As the joke goes, he predicted ten of the next zero famines.
might be able to replace A, but vastly expand the capacity of B and C.
Will that simply generate more tech debt and exacerbate the sustainability problems of DH and OSS? I don't know.
Is there a viable D, in which librarians/archivists/historians don't need A-C? Also unsure.
B) a humanities grad student/moonlighter with loads of domain knowledge but limited experience or capacity, or
C) a handful of specialist consultancies (Performant, Digirati, some smaller firms) with domain knowledge & SW experience but high costs and limited capacity.
I'd argue that AI coding agents
code (i.e. with version control, tests, and some kind of deployment framework) who are also values-aligned.
As of ten years ago, the options for someone with funding seemed to be either:
A) a smallish commercial firm with no background (or even interest) in cultural heritage, or
This is an interesting point, but I'd like to ask how common "long-term, committed developers with values that align with the cultural heritage sector" are.
Obviously I consider myself one, but I know from experience that it's hard to find developers who are experienced shipping production-worthy
I'd like to know if other people have seen Gemini explicitly reference historical events to provide context to a transcription process.
(And a big caveat: we have seen Gemini "reasoning" be an unreliable narrator at a higher rate than we see hallucinations in text output.)
within what it "knows" about world history. N.B. we did not provide any context for the image in our API call--it used the same prompt we use for any other page image on FromThePage, and it did not have previous page transcription outputs in its context window.
Mark Humphries and @foundhistory.bsky.social have written about Gemini "checking the math" in account books, implying that it has a kind of model of reality it uses while it transcribes. This seems like a further step: the transcription process is connected to a model of history, situating a page
Gemini Reasoning output containing this text: Deciphering Text Cues I’m now focusing on the handwriting style and identifying content clues. The presence of cursive strongly suggests the late 19th or early 20th century. Analyzing the inks, the top line is darker while the rest is red. My initial look at the text shows names like “Gervais” and “Toussaint Laplante” and locations such as “Fish Creek” and “Batoche”. The terms “Rebel” and “Battle” strongly suggest the North-West Rebellion of 1885.
One line in the reasoning produced by Gemini was fascinating:
My initial look at the text shows names like “Gervais” and “Toussaint Laplante” and locations such as “Fish Creek” and “Batoche”. The terms “Rebel” and “Battle” strongly suggest the North-West Rebellion of 1885.
Screenshot of AI generated text, reading: Gervais Madame Josephette 168 $ 1167.85 Fish Creek Widow of Calixte Tourond who was Killed at Batoche he was a Leading Rebel See her evidence in No 167 & No 168 D of J. evidence of Goulet , hus Killed at Batoche
The personal names were nearly indecipherable, as they were French names I hadn't encountered before, although the document was in English. Since QUA had run Gemini on the collection, we tried using that as an AI Draft, and were pretty pleased with the results. Nothing new there.
English-language text written in 19th-century script by one hand, using black ink for the first line (which contains a name, number, and dollar amount) with red ink for the remaining four and a half lines, apparently containing a terse narrative.
I stumbled across something interesting during a live demo in yesterday's webinar. Not having anything prepared, I tried a page in the Queen's University Archives' Riel Resistance Collection. The hand was extraordinarily difficult for me, even though it was late 19th-century North American script.
Ooh -- do you have it posted on FromThePage? We're trying out two new Gemini models and might be able to experiment.
Screenshot of the Newberry Library Digital Collections homepage featuring a pastel illustration of a polar bear on ice with mountains and a sun; a banner invites browsing the digital collections, with “Browse all” and “Recently added” thumbnails below.
Screenshot of the Newberry Transcribe homepage with a purple handwriting background; a large panel reads “Newberry Transcribe — Unlock history!” with buttons for “Learn more” and “Browse manuscripts,” and a row of project tiles below.
📌 Where to find us:
🔍 Browse our digital collections -- thousands of rare maps, manuscripts, postcards, and more, all free and online: collections.newberry.org
✍️ Help us transcribe historical documents on Newberry Transcribe (no experience needed, just curiosity) nt.newberry.org
We're 75 minutes away from our webinar on responsible AI use in FromThePage--the challenges we face from AI and how we're addressing (some of) them: content.fromthepage.com/feb-2026-web...
Thank you! Given recent advances, I don't think I'd finalize more than six weeks in advance.
(I can't believe that I find myself thinking about a mid-November advance in capabilities: that's theoretically impossible, but in practice, I guess it works!)
Any chance you could share the reading list with us folks outside of BYU?
Quick blog post noting some thoughts on 'Documenting AI-created/enhanced records in catalogues/metadata/displays' - I'd love to know who's already doing it, and how? www.openobjects.org.uk/2026/02/docu... #AI4LAM #MuseTech
Mirador will ignore it (as well as the `creator` encoding the person who ran the AI).
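(For anyone curious what that encoding looks like: a minimal sketch of a W3C Web Annotation carrying an AI-produced transcription, expressed as a Python dict. The field names follow the Web Annotation Data Model -- `creator` for the person who ran the AI, `generator` for the software agent -- but every specific value here is a made-up placeholder, not drawn from the thread, and as noted above viewers like Mirador may not surface either property.)

```python
# Hypothetical annotation recording AI provenance per the
# W3C Web Annotation Data Model. All names/URIs are illustrative.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "supplementing",
    "body": {
        "type": "TextualBody",
        "format": "text/plain",
        "value": "…AI-generated transcription text…",
    },
    # Who ran the AI (a Person agent) vs. the software that produced
    # the text (a Software agent) -- both valid per the spec.
    "creator": {"type": "Person", "name": "Example Operator"},
    "generator": {"type": "Software", "name": "Example LLM transcription tool"},
    "target": "https://example.org/iiif/canvas/p1",
}
print(annotation["generator"]["type"])
```

Standards-conformant, machine-readable -- and still invisible to the reader if the viewer drops it on display.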
So "doing it right" according to standards means doing it wrong according to best practices for AI transparency.