I'm proud that Responsible Datasets in Context (RDIC) also won a DH Award for Best Training Materials.
This is a project with @electrostani.bsky.social, @miriamposner.com, Sylvia Fernández, and Anna Preus.
www.responsible-datasets-in-context.com
Posts by Cameron Blevins
screenshot of Bears Will Be Boys essay that shows the title and a smiling cartoon bear with thick black glasses
Fun to see that "Bears Will Be Boys," which I wrote with folks from the @puddingviz.bsky.social, won a 2025 Digital Humanities Award!
pudding.cool/2025/07/kids...
Generative AI has forced me to think a lot more concretely about all the discrete components of doing historical work - and where the value of those different pieces lies
Given how enrollments are tied to funding, this is also going to require a pendulum swing on the student demand side of things back towards in-person classes. At least at my institution, that’s a tough sell for students.
Just confirming that Made by History will now be back up and running in the next few weeks with a new home and new partners at the Philadelphia Inquirer. Exciting! So, if you have pitches, now's the time to start sending them to madebyhistory@inquirer.com 🗃️ #MadebyHistory #HistoryMatters
Example from Canvas feature Intelligence Insights showing dashboard of Students in Need of Attention
Ex. A teacher uses Canvas's "Intelligence Insights" tool to surface "Students in Need of Attention." Technically, it's an AI-powered tracking system. But that's not obvious. Is a teacher expected to a) recognize this, and b) disclose it? community.instructure.com/en/kb/articl...
"Students have the fundamental right to know when, where, and how AI systems are being used to evaluate them, track them, or make decisions about their educational future." Agreed! However...AI tools are getting so deeply enmeshed in ed tech that they're often quite hidden even to instructors +
"Student AI Bill of Rights" is a nicely scoped document from @studentdefense.bsky.social. It raises some interesting questions around enforcement/implementation +
defendstudents.org/all/student-...
Fair enough!
I’m sure it’s a very rickety analogy, but it does feel like there’s something there about the immediate reward hit with minimal friction
Interesting parallels here in the “flow” of vibe coding vs. gambling
Gave a talk today on some of the pragmatic ways I use agentic tools in my work, including for building tools and using local models. I recorded some simple examples and shared everything here: anastasiasalter.net/PragmaticAge... (the site itself is a silly example 👾)
New issue of my newsletter: “Vibe Analysis” — Despite its unserious name, vibe coding shows promise for elements of serious scholarly work
Featuring good work by @sarahebull.bsky.social, @jasonheppler.org and @cblevins.bsky.social
newsletter.dancohen.org/archive/vibe...
This is so, so well-articulated.
Couldn't agree more - the million-dollar question is how much friction you need to learn those basics and fundamentals, how much friction is needed to maintain them, and how to ensure that foundation is strong enough before you let them loose
I've actually been thinking about this a lot. Ex. I wonder if Claude Code ends up with something like a default palette for history data viz - ie. "history" activates a certain vector space that creates an "old-timey paper" look
Book cover for Cameron Blevins, Paper Trails: The US Post and the Making of the American West
Screenshot of Cameron Blevins visualization "How Fast Was the Mail?"
I did some light iteration for markers, bar charts, etc. but not much on palette. I was struck that it ended up having a similar color scheme as my book cover (although I might be reading too much into it) +
Which loops back to the eternal teaching question around generative AI: what kind of / how much friction is necessary for learning vs. what kind of / how much friction is extraneous?
Hopefully I'll have time down the road to clean this up and also run an accuracy check, but in the meantime I've added a warning note to the README.
This is helpful, thanks! A quick spot check suggests the bulk of them are when Gemini "saw" xx:3x as xx:8x (hard even for human readers) and just copied 8x directly into the decimal. If I were starting again I'd probably include instructions for this in the transcription prompt +
I haven’t! Are you referring to the Decatur AL row from 1892?
Thanks Laura!
Thanks Jonathan!
It’s definitely a risk, but I think this smaller scale project is a sweet spot where you can catch those kinds of things by comparing the source and the output. Ex. You don’t need to know about geolocation to say “Wait why isn’t Pittsburgh showing up on the map?”
Thanks Sarah! It was really helpful to see what you were able to do with a similar mapping project bsky.app/profile/sara...
Thanks so much Shannon!
Great question - are you talking more about accuracy of the data itself or whether the code might be somehow misrepresenting that data - ex. dropping values, inaccurately geolocating things, etc.?
Thanks Shannon!
It also wasn't expected to be fiscally self-sustaining 100 years ago!
Thanks! I think that's probably the way to go for teaching. As I note in the post, there's still a gap with transcription btwn what the models are capable of vs. usability. Fwiw I mindlessly followed Mark Humphries' example here and it worked for me: generativehistory.substack.com/i/179954530/...