
Posts by Centre for Media, Technology & Democracy

Securing Canada’s Digital Sovereignty: A New Playbook for Youth Online Safety
Youth online safety is at a turning point. Learn from experts about AI, platforms, and the state of Canada’s online safety policy.

Register for free to hear from leading voices including: @abridgman.bsky.social, Sally Guy, Helen A. Hayes, Emily Laidlaw, @petermacleod.bsky.social, @taylorowen.bsky.social, Ava Smithing, Tracy Vaillancourt and @ethanz.bsky.social.
tinyurl.com/4hs6ywh2


We rescheduled Securing Canada's Digital Sovereignty: A New Playbook for Youth Online Safety! Join us on April 30th in Ottawa to hear from youth advocates, policy experts and leading researchers about the current online harms policy landscape and explore potential solutions. 🧵

React and contribute to collective reflection: AI and Age Assurance. A journey to explore and share your point of view.

Are you a Canadian aged 17–23? Your voice matters in shaping AI & age assurance policy.

Following our fourth #GenZAI Forum in Halifax, you can now share your perspective through Make.org's civic dialogue platform here: tinyurl.com/yyzny4mn


This event was organized by Paradigms, the Centre, and The Attention Studio, with support from The Waltons Trust. Stay tuned for the full episode of Left To Their Own Devices, coming soon!

Photos by Johnny Makes


After a live Q&A with the audience, guests met with representatives from organizations on the frontlines of digital rights, online safety, and youth advocacy to learn how they can get involved and reclaim their attention from Big Tech.


The discussion revisited the core argument of Haidt's bestseller, The Anxious Generation, while touching on recent legislation restricting social media use for kids and how youth voices can be scoped into tech policy.


Last night in NYC, @jonathanhaidt.bsky.social joined Ava Smithing for a live recording of Left To Their Own Devices, followed by a community showcase of organizations from across the US and Canada that Haidt called "the Woodstock of tech reform." 🧵

Left to Their Own Devices: A Conversation with Jonathan Haidt
What does it take to build a digital world that actually serves the people in it?

Space is limited. Reserve your free spot now at: tinyurl.com/urpnh5fm


After a live Q&A, you’ll have the chance to connect with organizations on the front lines of digital rights, online safety, and youth advocacy at our community showcase.


Join us in New York City on March 31 for a FREE evening with @jonathanhaidt.bsky.social, bestselling author of The Anxious Generation, in conversation with Ava Smithing, Gen Z host of Left to Their Own Devices. 🧵

Culture minister says 'serious conversation' needed about AI systems and news media
OTTAWA — Culture Minister Marc Miller says the government must have a serious conversation about AI systems’ use of news. “Having the news cannibalized and regurgitated undermines the spirit of the us...

Read the full article here: halifax.citynews.ca/2026/03/17/c...


Unlike social media companies, which compete with news outlets by capturing ad revenue, “AI companies are absorbing the substance of journalism and delivering it directly to consumers as their own product.”


Our latest policy brief, by @taylorowen.bsky.social and @abridgman.bsky.social, tested 2,267 Canadian news stories across ChatGPT, Gemini, Claude and Grok. It found that AI platforms failed to credit news sources around 82% of the time.


Canada's culture minister says it's time for a "serious conversation" about AI and news media. From @anjakaradeglija.bsky.social, @cdnpress.bsky.social (March 17, 2026) 🧵


According to Esli Chan (PhD Candidate, Expert on Extremism & Gender), "Normalization of the underlying ideology is particularly harmful for youth who are viewing Clav's content because it can affirm rigid notions of how masculinity should be performed, reinforcing toxic ideals."


Although many social media users engage with looksmaxxing content ironically, memes can be a pathway toward more extremist or radical subcultures by normalizing this type of discourse as part of everyday online culture.

Source: Media Ecosystem Observatory data from posts shared by Canadian influencers, news outlets and politicians

Term variants show how this terminology is being repurposed into other forms of discourse, demonstrating its growing presence online.

Source: Media Ecosystem Observatory data from posts shared by Canadian influencers, news outlets and politicians

Posts featuring terms like 'maxxing' and 'mogging' have increased substantially in 2026, suggesting growing adoption of language associated with harmful behaviour.

But these terms aren't new — they originated in incel/manosphere online subcultures in the mid-2000s.


Looksmaxxers like Clavicular recommend extreme practices to optimize their appearance, such as 'bonesmashing,' jaw surgery, and steroid use. Bonesmashing refers to striking one's face with a hammer to reshape its bone structure.


Data from the Centre's Media Ecosystem Observatory shows that 'looksmaxxing' is on the rise in Canada's online ecosystem, peaking in February following a viral video of Kick streamer Clavicular "getting brutally frame mogged by an ASU frat leader." 🧵


The safety of our speakers and guests is our top priority. We are actively working to reschedule the convening and will share a new date as soon as possible. Thank you to everyone who planned to join us; we look forward to bringing this important conversation together very soon.


🚨 Due to the severe ice storm forecast for tomorrow and expected travel disruptions, we’ve made the difficult decision to cancel Securing Canada’s Digital Sovereignty: A New Playbook for Youth Online Safety, scheduled for March 11 in Ottawa.

React and contribute to collective reflection: AI and Privacy. A journey to explore and share your point of view.

Are you a Gen Z Canadian (17–23)? We want to hear your thoughts on AI & data privacy!

We just hosted our third #GenZAI forum, where 100 young Canadians drafted policy recommendations on AI data collection. Thanks to Make.Org, you can join the conversation here: tinyurl.com/yv6jz3rt

The Andrew Carter Morning Show (Monday March 2, 2026) - The Andrew Carter Podcast | iHeart
Trudie Mason, Esli Chan, John Moore, Dr. Mitch Shulman, Alicia Monette, Dr. Japji Anna Bas

MEO researcher Esli Chan spoke about our latest conspiracy brief on iHeart Radio CA's The Andrew Carter Morning Show! Have a listen here: www.iheart.com/podcast/962-...

Scoping AI Chatbots into a revised Online Harms Act: The Case for Immediate Action (Centre for Media, Technology and Democracy, February 24, 2026)
The Centre’s Founding Director, Taylor Owen, and Helen Hayes, Associate Director of Policy, are calling for immediate action to scope AI chatbots into a revised Online Harms Act.

You can read @taylorowen.bsky.social and Helen Hayes' policy memo on scoping AI chatbots into a revised Online Harms Act and their response to OpenAI's letter to Minister Solomon here: tinyurl.com/39zve3ey


While OpenAI's voluntary commitments are a good start, they are no substitute for legislation establishing an independent regulator with authority to require risk assessments, set age-appropriate design standards, ensure compliance and enforce consequences when systems fail.


In a Feb. 26 letter to Minister Solomon, OpenAI disclosed that the Tumbler Ridge shooter created a second ChatGPT account that its detection systems missed, and that under its updated referral protocol it would now report the first banned account to law enforcement.


Owen and Hayes argue that OpenAI's decision not to contact Canadian law enforcement after the shooter's ChatGPT account was flagged and suspended in June 2025 is another example of real-world harms caused by AI systems.


In the wake of the Tumbler Ridge mass shooting, the Centre's Founding Director @taylorowen.bsky.social and Associate Director of Policy Helen Hayes published a policy memo calling on the Canadian government to scope AI chatbots into a revised Online Harms Act. 🧵


@mathieulavigne.bsky.social spoke with @rorywh.bsky.social from @nationalobserver.com about our latest brief on online conspiracy theories and institutional distrust in Canada, from the Centre's Media Ecosystem Observatory (MEO).
