Register for free to hear from leading voices including: @abridgman.bsky.social, Sally Guy, Helen A. Hayes, Emily Laidlaw, @petermacleod.bsky.social, @taylorowen.bsky.social, Ava Smithing, Tracy Vaillancourt and @ethanz.bsky.social.
tinyurl.com/4hs6ywh2
Posts by Centre for Media, Technology & Democracy
We rescheduled Securing Canada's Digital Sovereignty: A New Playbook for Youth Online Safety! Join us on April 30th in Ottawa to hear from youth advocates, policy experts and leading researchers about the current online harms policy landscape and explore potential solutions. 🧵
Are you a Canadian aged 17–23? Your voice matters in shaping AI & age assurance policy.
Following our fourth #GenZAI Forum in Halifax, you can now share your perspective through Make.org's civic dialogue platform here: tinyurl.com/yyzny4mn
This event was organized by Paradigms, the Centre, and The Attention Studio, with support from The Waltons Trust. Stay tuned for the full episode of Left To Their Own Devices, coming soon!
Photos by Johnny Makes
After a live Q&A with the audience, guests met with representatives from organizations on the frontlines of digital rights, online safety, and youth advocacy to learn how they can get involved and reclaim their attention from Big Tech.
The discussion revisited the core argument of Haidt's bestseller, The Anxious Generation, while touching on recent legislation restricting social media use for kids and how youth voices can be brought into tech policy.
Last night in NYC, @jonathanhaidt.bsky.social joined Ava Smithing for a live recording of Left To Their Own Devices, followed by a community showcase of organizations from across the US and Canada that Haidt called "the Woodstock of tech reform." 🧵
After a live Q&A, you’ll have the chance to connect with organizations on the front lines of digital rights, online safety, and youth advocacy at our community showcase.
Join us in New York City on March 31 for a FREE evening with @jonathanhaidt.bsky.social, bestselling author of The Anxious Generation, in conversation with Ava Smithing, Gen Z host of Left to Their Own Devices. 🧵
Unlike social media companies, which compete with news outlets by capturing ad revenue, “AI companies are absorbing the substance of journalism and delivering it directly to consumers as their own product.”
Our latest policy brief by @taylorowen.bsky.social and @abridgman.bsky.social tested 2,267 Canadian news stories across ChatGPT, Gemini, Claude and Grok. They found that AI platforms failed to credit news sources around 82% of the time.
Canada's culture minister says it's time for a "serious conversation" about AI and news media. From @anjakaradeglija.bsky.social, @cdnpress.bsky.social (March 17, 2026) 🧵
According to Esli Chan (PhD Candidate, Expert on Extremism & Gender), "Normalization of the underlying ideology is particularly harmful for youth who are viewing Clav's content because it can affirm rigid notions of how masculinity should be performed, reinforcing toxic ideals."
Although many social media users engage with looksmaxxing content ironically, memes can be a pathway toward more extremist or radical subcultures by normalizing this type of discourse as part of everyday online culture.
Source: Media Ecosystem Observatory data from posts shared by Canadian influencers, news outlets and politicians
Variants of these terms show how the terminology is being repurposed into other forms of online discourse, demonstrating its growing presence online.
Source: Media Ecosystem Observatory data from posts shared by Canadian influencers, news outlets and politicians
Posts featuring terms like 'maxxing' and 'mogging' have increased substantially in 2026, suggesting growing adoption of language associated with harmful behaviour.
But these terms aren't new — they originated in incel/manosphere online subcultures in the mid-2000s.
Looksmaxxers like Clavicular recommend extreme practices to optimize their appearance, such as 'bonesmashing,' jaw surgery, and steroid use. Bonesmashing refers to striking one's face with a hammer to reshape its bone structure.
Data from the Centre's Media Ecosystem Observatory shows that 'looksmaxxing' is on the rise in Canada's online ecosystem, peaking in February following a viral video of Kick streamer Clavicular "getting brutally frame mogged by an ASU frat leader." 🧵
The safety of our speakers and guests is our top priority. We are actively working to reschedule the convening and will share a new date as soon as possible. Thank you to everyone who planned to join us; we look forward to bringing this important conversation together very soon.
🚨 Due to the severe ice storm forecast for tomorrow and expected travel disruptions, we’ve made the difficult decision to cancel Securing Canada’s Digital Sovereignty: A New Playbook for Youth Online Safety, scheduled for March 11 in Ottawa.
Are you a Gen Z Canadian (17–23)? We want to hear your thoughts on AI & data privacy!
We just hosted our third #GenZAI forum, where 100 young Canadians drafted policy recommendations on AI data collection. Thanks to Make.org, you can join the conversation here: tinyurl.com/yv6jz3rt
MEO researcher Esli Chan spoke about our latest conspiracy brief on iHeart Radio CA's The Andrew Carter Morning Show! Have a listen here: www.iheart.com/podcast/962-...
You can read @taylorowen.bsky.social and Helen Hayes' policy memo on scoping AI chatbots into a revised Online Harms Act and their response to OpenAI's letter to Minister Solomon here: tinyurl.com/39zve3ey
While OpenAI's voluntary commitments are a good start, they are no substitute for legislation establishing an independent regulator with authority to require risk assessments, set age-appropriate design standards, ensure compliance and enforce consequences when systems fail.
In a Feb. 26 letter to Minister Solomon, OpenAI disclosed that the Tumbler Ridge shooter created a second ChatGPT account that its detection systems missed, and that under its updated referral protocol it would now report the first banned account to law enforcement.
Owen and Hayes argue that OpenAI's decision not to contact Canadian law enforcement after the shooter's ChatGPT account was flagged and suspended in June 2025 is another example of real-world harms caused by AI systems.
In the wake of the Tumbler Ridge mass shooting, the Centre's Founding Director @taylorowen.bsky.social and Associate Director of Policy Helen Hayes published a policy memo calling on the Canadian government to scope AI chatbots into a revised Online Harms Act. 🧵
@mathieulavigne.bsky.social spoke with @rorywh.bsky.social from @nationalobserver.com about our latest brief on online conspiracy theories and institutional distrust in Canada, from the Centre's Media Ecosystem Observatory (MEO).