Hey y'all! Very pleased to announce that we'll be hosting Communication Science Futures again in East Lansing later this year. The conference in 2024 was a blast, and we're looking forward to running it back!
Keynote speaker: James Pennebaker
Submission deadline: June 5th.
→ commscifutures.com
Posts by Hannah Overbye-Thompson
Fabulous work by @rachaelkee.bsky.social and @richardhuskey.bsky.social
The Necessary Evolution of Mass Communication Research in a Fragmenting Media Landscape: With the ongoing evolution of media channels, debates over the concept of mass communication have been reignited. When we live in a society of filter bubbles and AI-generated content, the very notion of a large uniform audience has been undermined. Indeed, the process of mass communication looks different today than in the early days of the field, which naturally affects how to define and measure media effects. In this forum, leading communication scholars provide arguments as to whether we should keep using the term “mass communication,” adapt its definition, or develop entirely new concepts that better reflect our fragmenting media environment.
Does “mass communication” still exist — and if so, what should it mean today?
In this Forum for @jmcquarterly.bsky.social, we reflect on that question: "The Necessary Evolution of Mass Communication Research in a Fragmenting Media Landscape."
Open-access: doi.org/10.1177/1077...
Good news! 🎉
The registration issue has been resolved — everything should now work smoothly.
🔗 Direct link: www.icahdq.org/event/Hackat...
🗓 Registration is open until April 5, 2026
Looking forward to seeing you at the ICA Hackathon 2026 @SU School for Data Science and Computational Thinking! 🚀💡
Current explanations for political divides in entertainment media use identify divergent preferences for or evaluations of content. According to the theory of normative social behavior (TNSB), extratextual information such as cues about the audience may also influence exposure intentions due to viewers’ perceptions of ingroup norms. Social media users discuss and form communities around entertainment content while conveying partisan and racial identities. A preregistered experiment exposed Black and White partisans (N = 1,259) to tweets in which a television show was endorsed by co- or out-partisans who were racial in- or out-group members. Exposure intentions were stronger when endorsement came from co-partisans; however, this effect was stronger for White partisans. Treatment effects were mediated by perceived ingroup norms and perceptions of how much of the audience consisted of ingroup members. Implications of multiple identities (i.e., race and partisanship) for the TNSB and the study of partisan entertainment divides are discussed.
🚨New pub alert!🚨 Now available open-access in @hcr-journal.bsky.social, I show how endorsements of entertainment media from ingroup members, particularly co-partisans, affect exposure intentions, with differential effects across racial lines. #PolComm #PoliSci #Politics #MediaStudies 🧵
New year, new chapter! I am incredibly excited to share that starting in fall 2026, I'll be joining Michigan State University as a tenure-track Assistant Professor in Advertising + PR.
A big thank you to everyone who has been extra kind during my job market year & Go Green! 💚🤍
I've noticed w/research it's often hard to find validated measures of constructs without cobbling together scales from multiple papers; now when possible I try to contribute by validating scales. Below is a scale that I hope is helpful that measures the perceptual attributes of DOI + reinvention 🧪
Fabulous study by @felix-dietrich.de @aliciaernst.bsky.social @rkreling.bsky.social et al., examining how algorithmic curation affects music streaming UX. Key finding: More algorithmic recommendations = less enjoyment, BUT listening sessions w/algorithmic curation were perceived as more novel 🧪
New study (2025) examines how AI autonomy affects user agency and attitudes. Key finding: AI autonomy triggers psychological reactance through threats to freedom, BUT personalization benefits cancel this out + users with higher agency feel more threatened by autonomous AI 🧪
doi.org/10.1080/0883...
New study (2025) examines if people can detect bias in AI training data. Key finding: Training data cues were largely ineffective; users relied on AI performance instead to judge bias + consistent with prior work on AI bias, the majority of participants failed to notice any bias in training data 🧪
New paper (2025) by @len-s.bsky.social proposes the PMSIS model: parents can use racially diverse entertainment media + "foreground co-viewing" + active mediation to improve children's intergroup socialization 🧪 doi.org/10.1093/annc...
Great work Sovannie 👏👏👏 #commsky
New study by @janadreston.bsky.social @anneo.bsky.social & @germanneubaum.bsky.social reveals how users understand algorithms. Key findings: 71% have a basic understanding of algorithms but only 33% can explain how they work; users see themselves as passive actors when interacting with algorithms 🧪
Personally, I had a lot of fun on this project. It was my first time leading a mixed-methods study and an all-student team. I hope this research is useful for informing design, policy, and education efforts that help people feel more empowered in the algorithmic age.
Demographics mattered too:
👩🦱 Women & people of color often described avoidant attitudes: seeing risks but feeling powerless. This makes sense, as they are often the targets of algorithmic bias
👨 White men sometimes saw systemic risks but reported higher efficacy.
Qual findings:
⚠️ Risks clustered around mental health, privacy, fairness, and polarization.
💡 Efficacy beliefs were split into: Powerlessness, Strategic consumption (user tactics) & Collective responsibility (policy, regulation, audits)
Quant findings:
📊 People saw organizational algorithms as riskier than personal ones.
📊 But they also felt less able to mitigate bias in those systems.
In other words, the higher the stakes, the less control people feel.
Drawing from the Risk Perception Attitude framework, we studied how people think about algorithmic bias in both:
- Organizational algorithms (e.g., hiring, healthcare, policing)
- Individual-use algorithms (e.g., search engines, facial filters)
Excited to share my new paper with @garciaerick.bsky.social Xinyi Zhang & @laurentwang.bsky.social.
We ask: Do people see algorithmic bias as a risk—and do they feel capable of addressing it? Answer... It depends! More below 👇🧪 #commsky
doi.org/10.1080/1044...
New study by Aquino et al. provides a fabulous look at differing opinions about algorithmic bias held by healthcare professionals. 72 experts had 3 key disagreements: whether bias exists (most say yes, some no), who's responsible for fixing it & whether to include race/ethnicity data in AI systems 🧪
New study by @drjt.bsky.social examines if attention control explains the 🔗 between inspection time tasks and intelligence. Key finding: attention control fully mediated the inspection time-intelligence relationship + people with better sustained attention showed less performance decline over time 🧪
🎉 Huge congrats to our team @overbye.bsky.social, Kristy Hamilton and @jacobtfisher.online for receiving a Top Student Paper award in the Communication & Social Cognition Division at #NCA25! 🏆
4. The Oracle of Bacon 🥓🎬
A classic: plug in any actor and see how many steps it takes to reach Kevin Bacon (or any other actor).
Based on co-appearances in films. oracleofbacon.org
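Under the hood, a Bacon number is just a shortest path in a co-appearance graph, which breadth-first search finds directly. A minimal sketch with a toy graph (all actors and edges here are illustrative, not the site's actual data):

```python
from collections import deque

# Toy co-appearance graph: actors are nodes; an edge means the two
# actors appeared in the same film. Purely illustrative data.
costars = {
    "Kevin Bacon": ["Tom Hanks", "John Lithgow"],
    "Tom Hanks": ["Kevin Bacon", "Meg Ryan"],
    "John Lithgow": ["Kevin Bacon"],
    "Meg Ryan": ["Tom Hanks", "Billy Crystal"],
    "Billy Crystal": ["Meg Ryan"],
}

def bacon_number(graph, start, target="Kevin Bacon"):
    """Breadth-first search: fewest co-appearance steps from start to target."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        actor, dist = queue.popleft()
        for costar in graph.get(actor, []):
            if costar == target:
                return dist + 1
            if costar not in seen:
                seen.add(costar)
                queue.append((costar, dist + 1))
    return None  # no path between the two actors

print(bacon_number(costars, "Billy Crystal"))  # → 3
```

Because BFS explores the graph level by level, the first time it reaches the target is guaranteed to be via a shortest path.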
3. The Beer Graph 🍺
Curious how Lagers relate to Stouts?
This interactive network lets you explore how beers are connected by taste, aroma, and appearance.
Fun use of similarity graphs!
seekshreyas.github.io/beerviz/
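The idea behind a similarity graph like this one can be sketched in a few lines: describe each item as a feature vector and draw an edge whenever two items are similar enough (here, cosine similarity above a threshold). The beer styles and feature numbers below are made up for illustration and are not the site's data:

```python
from math import sqrt

# Hypothetical taste/aroma/appearance feature vectors (illustrative only).
beers = {
    "Lager":   [0.20, 0.30, 0.90],
    "Pilsner": [0.25, 0.35, 0.85],
    "Stout":   [0.90, 0.80, 0.10],
    "Porter":  [0.85, 0.75, 0.20],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def similarity_graph(items, threshold=0.95):
    """Edge between two items when their cosine similarity meets the threshold."""
    names = list(items)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(items[a], items[b]) >= threshold:
                edges.append((a, b))
    return edges

print(similarity_graph(beers))  # → [('Lager', 'Pilsner'), ('Stout', 'Porter')]
```

With these toy vectors, the light styles cluster together and the dark styles cluster together, which is exactly the structure a similarity graph is meant to surface.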
2. The Hidden Network of Trees 🌳
Trees communicate underground using fungal networks, sharing nutrients, warning of threats, and shaping forest life.
A lovely example of why we study two-mode networks
🍄📡 www.youtube.com/watch?v=DUqE...
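Forests like this are a nice motivation for two-mode (bipartite) networks: one node set for trees, one for fungal networks, with ties for membership. A common analysis step is projecting onto one mode, linking two trees whenever they share a fungus. A minimal sketch with made-up tree/fungus labels:

```python
# Two-mode network as an affiliation dict: each tree maps to the set of
# fungal networks it participates in. Labels are illustrative placeholders.
tree_fungi = {
    "Birch A": {"fungus 1", "fungus 2"},
    "Fir B":   {"fungus 1"},
    "Oak C":   {"fungus 3"},
}

def project_one_mode(two_mode):
    """One-mode projection: connect nodes that share at least one affiliation."""
    names = list(two_mode)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if two_mode[a] & two_mode[b]:  # any fungal network in common?
                edges.append((a, b))
    return edges

print(project_one_mode(tree_fungi))  # → [('Birch A', 'Fir B')]
```

The projection shows which trees can plausibly exchange nutrients or signals, while the original two-mode structure preserves *which* fungal network carries each tie.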