
Posts by Open MIC

Anthropic and Washington: investors cannot ignore defence AI risks

Audrey Mocle is executive director of the Open Media and Information Companies Initiative, and Michael Clements is executive director of the Business and Human Rights Centre.

Human rights due diligence, military AI transparency and linking capital to data safeguards can shift markets towards rights-respecting innovation.

Google. Meta. OpenAI. These are some of the tech companies that once made public commitments to not use AI for warfare – and have quietly walked them back.

Learn more about the implications: https://loom.ly/bYRmToI

#AIgovernance #militaryAI #AIrisk

4 weeks ago

AI built for civilian use is already shaping surveillance and targeting in conflict zones.

Public pensions are indirectly exposed to this shift.

This Q&A shows who's at risk ⬇️

1 month ago

AI is reshaping how information is produced and distributed, creating new risks for information integrity.

It's bad, but maybe not ALL bad.

Learn more and watch our webinar: materialtech.beehiiv.com/p/ai-and-information-int...

1 month ago

TOMORROW – March 10 at 11 a.m. ET

Join Open MIC for a webinar on AI and Misinformation, and how #investors can address the threat.

us02web.zoom.us/webinar/register/WN_2UMn...

1 month ago

One week away!

Webinar on Tuesday, March 10, 2026, 11 a.m. - 12 p.m. ET

AI and Misinformation: How Investors Can Address the Threat

us02web.zoom.us/webinar/register/WN_2UMn...

1 month ago

NEW REPORT: investors must urge companies to mitigate security risks caused by rushed AI adoption

https://materialtech.beehiiv.com/p/ai-security-privacy

#aisecurity #shareholderengagement #materialtech #aiprivacy #airisk #shareholderactivism #investorbrief #responsibleinvesting

1 month ago

Reminder: Webinar on Tuesday, March 10, 2026, 11 a.m. - 12 p.m. ET

AI and Misinformation: How Investors Can Address the Threat

✍️ Register here: us02web.zoom.us/webinar/register/WN_2UMn...

1 month ago
Welcome! You are invited to join a webinar: AI and Misinformation: How Investors Can Address the Threat. After registering, you will receive a confirmation email about joining the webinar. This web session aims to help investors understand the risks related to AI and misinformation while exploring what they can do to press for more accountable corporate AI policies and practices.

The webinar will consist of a panel discussion led by Michael Connor of Open MIC, followed by questions from the audience.

us02web.zoom.us/webinar/regi...

2 months ago

Panelists include:

Gordon Crovitz, Co-founder and Co-CEO, @newsguard.bsky.social

Michael Khoo, Climate Disinformation Program Director, @foeus.bsky.social, and policy co-chair, @caadcoalition.bsky.social

Shuwei Fang, Fellow at @shorensteinctr.bsky.social

2 months ago

Join us for a webinar: Tuesday, March 10, 2026, 11 a.m. - 12 p.m. ET

AI and Misinformation: How Investors Can Address the Threat

✍️ Register here: us02web.zoom.us/webinar/register/WN_2UMn...

2 months ago

For investors, human rights due diligence matters—especially when tech companies sell cloud and AI tools to military end-users.

More from @iasj.bsky.social: iasj.org/investors-signal-urgent-...

4 months ago
Subscribe | Material Tech
Fostering greater corporate accountability at media & technology companies through shareholder engagement.

This transition aligns with the launch of Material Tech, our new multimedia platform for timely, data-driven insights on how digital technologies shape society and the economy.
Subscribe: materialtech.beehiiv.com/subscribe

4 months ago

As technology accelerates, Open MIC remains committed to providing top-quality information and analysis on AI and workers’ rights, privacy and data minimization, disinformation, environmental impacts of data centers, and dual-use risks of emerging technologies.

4 months ago

Under Michael’s leadership, Open MIC built a globally respected model for investor-led advocacy on technology and human rights—work that Audrey will continue to expand.

4 months ago

She succeeds Michael Connor, Open MIC’s founding Executive Director. Michael will continue supporting the organization as senior adviser and project director.

4 months ago

We are entering our next chapter with the appointment of Audrey Mocle as Executive Director.

Audrey has served as Deputy Director since 2022 and brings deep experience in corporate human rights responsibility, investor engagement, and technology governance.

4 months ago

At #IGF2025 in June, experts from @heartlandorg.bsky.social, BHRRC, @openmicmedia.bsky.social, and UN B-Tech held a roundtable on dual-use tech, used by both civilians + military.

Key takeaways here: https://loom.ly/-h_Wn0k

5 months ago

“I certainly think that investors are one of the last really strong bulwarks against the unconstrained development of this new technology,” Mocle says.

5 months ago
Post image

Union shareholders are fighting back against abusive AI.

Audrey Mocle of @openmicmedia.bsky.social weighs in on the power shareholders have to hold Big Tech accountable.

More from @inthesetimes.com: https://loom.ly/NZz5RTA

5 months ago

In our new Future of Finance episode, @michaelconnor.bsky.social, Executive Director of @openmicmedia.bsky.social, explains how shareholder proposals drive change — even when they don’t pass.

Watch: www.youtube.com/watch?v=PkWF...
Listen: www.podbean.com/eas/pb-kcutz...

5 months ago

• Engagement → board-level oversight (e.g., privacy)
• AI risks: misinfo/bias, labor impacts, energy & water use
• Vote proxies, instruct managers, add guardrails in VC/PE
• Smart policy builds trust

5 months ago

Our director joined @georgesdyer.bsky.social on the @ien-online.bsky.social Future of Finance podcast to unpack how investors can steer AI toward long-term value — and how engagement can move the market toward safer innovation.

5 months ago
Post image

Understanding AI: Four-Part Series starts today — 6:00 PM ET

@datasociety.bsky.social is co-hosting a four-part event series exploring the societal implications of #AI.

🌐 Visit the Data & Society website for information and livestream: https://loom.ly/OgpNTyo

7 months ago

Visit our Events page for more information about these events, and links to register: loom.ly/zWtyuPQ

9 months ago

6/23 - 6/27: Internet Governance Forum (IGF) 2025 (@intgovforum.bsky.social)‬
7/7 - 7/22: World Summit on the Information Society (WSIS)+20 High-Level Event 2025 (WSIS Process)
7/8: AI for Human Rights: Smarter, Faster, Fairer Monitoring (part of AI for Good Summit 2025)

9 months ago

Human Rights | Technology | Artificial Intelligence

Upcoming global events offer insights and opportunities to shape policy on emerging technologies and their impacts on communities around the world 👇

9 months ago
Weaponizing AGI: How Speculative Futures Undermine Worker Protections | TechPolicy.Press
Natalia Luka critiques speculative narratives about AI's future and how they are driving policy inaction.

‪@nataliyan.bsky.social‬ of the Berkeley AI Research Lab urges stakeholders to concentrate on the belief systems around AI, which are at least as disruptive as the products themselves.

More in ‪@techpolicypress.bsky.social‬: loom.ly/vXtGsew

10 months ago

Instead, employers are simply masking politically and economically motivated downsizing as "inevitable AI progress." Even worse, legislators are intentionally failing to pass laws that will protect workers if AI does come for their jobs.

10 months ago

"Layoffs described as progress"

Tech companies and the politicians who love them 💕🤖 are cutting jobs with the excuse that AI is better than people. Not only is that bad for workers, but in most cases it's just not true. AI is not nearly as advanced as they're claiming.

10 months ago
Army bringing in big tech executives as lieutenant colonels
The Army is swearing in top tech executives from Meta, OpenAI and Palantir as senior officers to be part-time advisors.

Going a big step beyond simply hiring tech companies to build out defense systems, the U.S. Army is actually swearing in tech executives to consult internally.

Palantir, OpenAI, Meta and Thinking Machines Lab are represented by newly minted lieutenant colonels.

10 months ago