Posts by Dr. Casey Fiesler
But if you're an academic I'd recommend checking out The Conversation. Your university may even have a partnership with them and they might make a direct connection. The nice thing is that they CC license the articles so they tend to be republished by larger venues. theconversation.com/us
This is also why, as you'll see from the list above, I haven't written anywhere other than The Conversation for years, in part because Slate got rid of Future Tense. :(
But here's a list of publications with submission information if you want to try! www.theopedproject.org/submissions
Thought I'd answer this broadly since it's a good question with a complicated answer (from me). :)
I've written lots of op eds (caseyfiesler.com/press/) but *never* successfully cold pitched one. They've all been because (1) someone reached out to me or (2) I already knew an editor because of (1).
collage of headlines: What TikTok's West Elm Caleb says about dating, social media, and us; Congress doesn't understand something big about TikTok; Meta's Threads is surging but mass migration from Twitter is likely to remain an uphill battle; AI chatbots are intruding into online communities where people are trying to connect with other humans; why Archive of Our Own's surprise Hugo nomination is such a big deal; fandom's fate is not tied to Tumblr's; Taylor Swift's Midnights album gives TikTok fan groups more fodder and community; Bluesky isn't the new Twitter but its resemblance to the old one is drawing millions of new users; fears about TikTok's policy changes point to a deeper problem in the tech industry
I was making a new slide for a talk where I'm mentioning reasons I like studying the internet, and ended up with this little collage of some of the op eds I've written over 10+ years, and I think it's really cool so wanted to share. :)
Oof.
That is a given! This post was meant to be about the scholarly evaluation component. (Not saying that some places don't totally suck in this regard, of course!)
And of course I can't leave this thread without also saying that, assuming we can't solve the academic incentives problem, the obvious solution is *forcing* reviewing duties based on number of submitted papers. And I still don't understand why we don't do that.
Also what would it look like for hiring to force evaluation based on quality over quantity, and to signal that they do that?
Candidates will submit their three best papers, and must remove publication lists from their CV.
(Yes I recognize why this would be controversial.)
What would it look like for hiring + promotion/tenure to explicitly value (and signal that they value) components of academic work that are NOT publishing papers?
We welcome applicants who engage in public scholarship
who create software and artifacts
who have strong editorial experience
... etc.
I feel like lots of people have been yelling "we need to change the incentive structures of academia!" for many years but nothing ever happens. :-\
But like *what* actually needs to happen to disincentivize whatever has resulted in a 3,000+ jump in paper submissions in 4 years???
I'm not at CHI but I hear that there's the expected "oh god what do we do about reviewing, we have had an exponential explosion in paper submissions" discussion.
I think a good start would be to stop making everyone feel like they have to publish 10 first author CHI papers to get a job. :-\
I had not!
This is the part people don't seem to get. (And I'm not sure what I mean by "people" here, maybe the people in my social media comments saying things like "humans get things wrong all the time!" or "LLMs are more accurate every day!")
Many. People. Assume. Accuracy.
www.nytimes.com/2026/04/07/t...
Hey thanks!! Though for THIS video in particular I realize I was slightly misleading in one of the comparisons I made between voice and text so I'm going to replace it to be more precise.
Yesterday I threw out what I had planned for my AI & Society class and we spent the whole time on this. I had them choose some of Husk's videos to watch (www.youtube.com/@HuskIRL/sho...) and then tasked them with explaining WHY the funny outcomes are happening - i.e. a lesson about how LLMs work.
You may like this! youtube.com/shorts/wRoZS...
He saw the wrong problem. :)
Yeah the fact that he responded to not-the-actual-issue-at-all was what I found troubling.
I think that is a completely different issue rather than fitting into that taxonomy. People with all kinds of assumptions about LLM capabilities have deep ethical concerns!
Indeed, that is exactly what he said. (Adding a tool.)
This is actually kind of the point I'm trying to make here... it's not a stopwatch so why does it need stopwatch technology? Like there's absolutely no reason why a language model needs a timer.
oh no worries! I'm not offended by people assuming I don't spend as much time on TikTok as I do ;)
Coincidentally! www.youtube.com/shorts/sOaRv...
ha yeah I actually first saw this yesterday!
I don't know if I'm moving the needle here, but a big part of what I'm trying to accomplish with this is just to help more people understand what LLMs are actually doing so they have an intuition for limitations. www.youtube.com/playlist?lis...
Anyway, I wish AI companies would do the same thing.
For what it's worth, it was 10 minutes in that video but when I tried the same thing (in voice mode) it told me 6. So I guess the faith in me is nice heh.
Yeah, though I do think that it's probably worse than it has to be. Do you know if there has been work on RLHF that explicitly rewards "I don't know" to see if the needle can be moved on that?
Since I started my series of videos about how LLMs work I have been fighting for my life in the comments against two very different types of people: those who insist LLMs are far more capable than they actually are and those who insist they literally can't do anything. It's weird.
(2) People see examples like the one in the video above, and assume that LLMs cannot do anything correctly. Because if an LLM can't even tell you how many R's are in strawberry then obviously it's completely useless.
If AI companies don't care about (1) then maybe they should at least care about (2)?