#AIPolicy

Almost True, an AI Lie - YouTube
youtu.be/ULGXtN7fIrI #ArtificialIntelligence #AIHallucination #CyberSecurity #Misinformation #TechEthics #AIRisk #DigitalTrust #MachineLearning #AIPolicy #FutureOfWork #TechLeadership #ResponsibleAI #DataIntegrity #AILiteracy #EmergingTech


Bernie Sanders introduced a bill to pause new AI data centers until federal safeguards are in place, citing risks to jobs, society, and public safety. www.sanders.senate.gov/wp-content/u... #AIPolicy

Federal Court Blocks U.S. Government Action Against Anthropic in AI Policy Dispute A federal judge has ruled in favor of Anthropic in its legal dispute with the U.S. government, ordering the administration to rescind its designation of

A US federal court ordered the government to rescind Anthropic’s “supply-chain risk” designation and halt efforts to restrict federal ties with the company. #AIPolicy


Australia released defence AI policy settings requiring legal compliance, lifecycle governance, human accountability, and risk-based controls aligned with international commitments. #AIPolicy

Gottheimer Questions Anthropic on Code Leaks Rep. Josh Gottheimer pressed Anthropic's CEO on repeated Claude Code leaks, citing national security risks. Anthropic called it a packaging error, not a breach, with no customer data exposed.

📍Gottheimer Presses Anthropic on Repeated Code Leaks.

Rep. Josh Gottheimer pressed Anthropic's CEO on repeated Claude Code leaks, citing national security risks. Anthropic called it a packaging error, not a breach...

#Claude #Security #ClaudeLeaks #LLMModels #AIPolicy

factide.com/gottheimer-p...


The White House released a national AI policy framework calling for child safety protections, federal preemption of state AI laws, and measures to boost U.S. AI innovation. #AIPolicy

Deepfakes: The New Face of Cyberbullying and Why Parents, Schools, and Lawmakers Must Act As a former teacher who worked in a high school when Snapchat was born, I witnessed the birth of sexting and its impact on teens. I recall asking a parent whether he was checking his daughter’s phone ...

Deepfakes target minors and are a growing crisis inside our schools. My piece in @thefulcrum-us.bsky.social breaks down what schools should do now, and why AI companies can't wait. Thoughts?

🔗 thefulcrum.us/media-techno...

#AIPolicy #Deepfakes #TakeItDownAct #TechAccountability #DigitalSafety

Enforce is hiring: AI Expert

Enforce, a unit of the Irish Council for Civil Liberties (ICCL), is hiring an expert in law, economics, computer science, or another relevant domain to help ensure supervision and enforcement of human rights in the context of AI and automated decision-making systems. Applications will be assessed on a rolling basis until 5pm Dublin time on 15 April 2026. Please apply early.

Duration: A full-time role is funded until 31 August 2027, with the possibility of renewal based on funding extension.
Starting salary: €65,000–€100,000 per annum, depending on prior experience and local social charges.
Start date: As soon as possible, preferably before 4 May 2026.
Location: Remote in the EU. ICCL Enforce is based in Dublin. You may have to travel occasionally, particularly to Brussels.

What you will do
Enforce investigates, advocates, and litigates to protect people and their rights. In Enforce, you will be expected to:
- engage with senior stakeholders and experts in key jurisdictions;
- influence the implementation and enforcement of law and policy, including implementing acts of the EU AI Act;
- investigate and expose harmful use of AI;
- track technology and market developments; and
- most particularly, track policy developments.
You will be expected to work independently, potentially leading projects, with the support of the Director of Enforce, to whom you will report.

Qualifications
Required:
- Expertise in at least one of the following aspects of AI: legal and regulatory, economic (including markets and labour), and/or computer science.
- Excellent written and spoken English.
- Ability to write concisely and communicate clearly.
- Ability to work with colleagues and other stakeholders without expertise in AI or automated decision-making systems.
- Legal right to work in an EU country.
- Capacity to learn and to take advantage of feedback.
Desired:
- Strong academic, industry, or policy record. You may have a law degree, or an MSc/PhD in economics, statistics, computer science, engineering, or a related field.
- Familiarity with EU digital law, particularly the AI Act and GDPR.
- A record of communication with and influence on stakeholders, including experts, lawmakers, regulators, standards bodies, industry, and media.
- Existing relationships with stakeholders.
- Existing relationships with philanthropic foundations.

To apply
Send the following documents to info@iccl.ie with 'Enforce A.I. specialist - Application' in the subject line of the email before 5pm Dublin time on 15 April 2026:
1. A CV of maximum three pages.
2. A description, in less than a page, of the first project you would like to pursue in this role: how you would approach it based on your strengths, and the impact of the project. If you used generative AI to aid in your application, explain why in a separate document, along with all the prompts and tools you used.

About Enforce
Enforce is a unit of the Irish Council for Civil Liberties. Ireland's unique responsibility for monitoring human rights on large digital platforms gives us an international focus. We share technical expertise with legislators in strategic jurisdictions, and we investigate tech, industry practice, and markets. Our expertise supports other civil society organisations, too. We also take bad actors to court in key jurisdictions. Learn more at https://iccl.ie/enforce/

Enforce at the Irish Council for Civil Liberties is hiring an expert in #AI.

Full-time role. Remote in the EU.

Apply early. Deadline 5pm Dublin time on 15 April 2026.

#AIPolicy #EUAIAct #DigitalRights #TechPolicy

www.iccl.ie/digital-data/enforce-is-...


🇺🇸 The Trump administration wants one federal AI policy — and it's pushing back on strict state laws.

'Minimally burdensome' is the phrase. A task force will challenge state overreach.

The US AI regulation battle is just starting.

#AIPolicy #US #Regulation #Law #AI

Enforce is hiring: AI Expert

Enforce at the @iccl.bsky.social is hiring.

Full-time role. Remote in the EU.

Apply early. Deadline 5pm Dublin time on 15 April 2026.

www.iccl.ie/digital-data...

#AI #AIPolicy #EUAIAct #DigitalRights #TechPolicy
@abeba.bsky.social @rocher.lc @techpolicypress.bsky.social @leevisaari.bsky.social

Catholic Division begins developing policy to govern ‘when, how, where’ AI will be used - Prince Albert Daily Herald The Prince Albert Catholic School Division has taken the first steps toward developing an Artificial Intelligence (AI) policy, with the goal of addressing a growing technological trend. During the board of education’s regular meeting on Monday, March 23, the board passed a motion to approve the development of a policy regarding the acceptable use of Artificial Intelligence.

Catholic Division begins developing policy to govern ‘when, how, where’ AI will be used


#AI #AIPolicy #PrinceAlbert #PrinceAlbertCatholicSchoolDivision
paherald.sk.ca/catholic-division-begins...


🤖 AI IS NOW SCREENING YOU:

USCIS is using pattern-detection AI to flag "wage manipulation" and fraud.

Not confirmed publicly. Immigration lawyers see it in denials.

An algorithm now decides who gets to work in America.

#H1B #USCIS #AIPolicy

AI Policy Corner: Layered Governance in AI Labs: Defining Boundaries Across the Policy Stack | Montreal AI Ethics Institute ✍️By Tejasvi Nallagundla. Tejasvi is an Undergraduate Student in Computer Science, Artificial Intelligence and Global Studies and an Undergraduate Affiliate at the Governance and Responsible AI Lab…

Tejasvi Nallagundla, an undergraduate with @grailcenter.bsky.social, explores layered governance in AI labs and how defining boundaries across the policy stack can improve accountability and coordination with @MontrealEthics.
montrealethics.ai/ai-policy-co...
#AI #AIGovernance #AIpolicy #ResponsibleAI


California mandates AI safety proof for state contracts. 4-month timeline for new vetting processes covering bias prevention and civil rights protections. #AIPolicy #California www.implicator.ai/newsom-signs-ai-safety-o...

C4R's AI Policy - Using Artificial Intelligence Tools Rigorously and Transparently.

We’ve developed a framework to set guardrails around how AI is used in creating and promoting our scientific rigor curriculum. As AI and LLMs keep changing, our approach will continue to adapt.

⭐️ Read our full AI Policy at buff.ly/1ZgrjMu

#ResponsibleAI
#AIGovernance
#AIpolicy

Adapting to Technological Change

How we can use AI effectively in education 💻📱💻



“How are we supposed to believe that a student who cannot say ‘Good morning!’ in English without a translator has written first-class academic work at a British university? It is an uncomfortable question, but one that increasingly sits at the centre of a much larger debate: what does academic integrity look like in the age of artificial intelligence?” - Sophie Mills



To read more, turn to page 81 of the Spring edition! 🌷🌱🌷

https://issuu.com/educationchoicesmagazine/docs/education_choices_spring_2026_online/81



Bron Mills #educationchoicesmagazine #educationcornerpodcast #educationchoices #education #EDIB #DEI #equality #diversity #inclusion #ai #aiatuniversity #academicintegrity #aipolicy #aiinschools #artificalintelligence #aitranslation



Pro-AI Group to Spend $100mn on US Midterms Pro-AI group pledges $100mn for Nov 8, 2026 midterms (Financial Times, Mar 30, 2026); spending equals ~3.2% of 2022 outside-spend (OpenSecrets, 2022) and could reshape AI regulation.

Pro-AI Group to Spend $100mn on US Midterms: Pro-AI group pledges $100mn for Nov 8, 2026 midterms (Financial Times, Mar 30, 2026); spending equals ~3.2% of 2022 outside-spend (OpenSecrets, 2022) and could reshape AI… 👈 Read full analysis #AIFunding #Midterms2026 #AICampaign #Election2026 #AIPolicy


David Sacks, US AI Czar, steps down. The AI regulatory landscape just got murkier — and more important to watch. #AI #AIPolicy

Financial insights visualization: a federal court blocks the Trump administration's restrictions on Anthropic. Multiple tweets report that Anthropic won a preliminary injunction or court order stopping the government from enforcing a ban on federal use of its AI tools or related blacklist measures (tweets 1, 3, 4, 17, 36, 49). The Pentagon's 'supply chain risk' or national security designation is central to the dispute: several tweets say the court blocked the Pentagon or administration from labeling Anthropic a supply chain risk or national security threat (tweets 5, 9, 16, 20, 22, 41, 46).

Court blocks Anthropic blacklist. On Mar. 26, Judge Rita Lin preliminarily barred the Trump admin/Pentagon from enforcing a “supply chain risk” designation, citing likely First Amendment retaliation. Stayed 7 days for appeal. Matters for AI procurement. #AIpolicy #GovTech

Judge Blocks Pentagon's Anthropic AI Ban (2026) A federal judge halted the Pentagon's 'supply chain risk' ban on Anthropic's Claude AI, ruling it appears designed to punish the company. Read the full ruling details.

Judge Blocks Pentagon's Anthropic AI Ban (2026)
A federal judge halted the Pentagon's 'supply chain risk' ban on Anthropic's Claude AI, ruling it appears designed to punish the company. Read the full rul...

#Anthropic #ClaudeAI #AIPolicy #TrumpAI #TechLaw
https://scrollworthy.org/trending/anthropic


📣 "This ruling will restore the certainty that businesses need." — @ACTonline President @morganwreed: "U.S. small businesses are our most innovative and regularly provide breakthrough solutions to immediate needs, both for the government and private sector." 🙌 #Anthropic #AIPolicy


Silicon Valley’s own David Sacks just got booted as the White House AI & crypto czar after a Trump fundraiser. What does this shake‑up mean for tech policy? Dive in for the full scoop. #DavidSacks #AIpolicy #CryptoPolicy

🔗 aidailypost.com/news/david-s...


AI policy moves fast and you need to move faster.

Get direct intelligence from the officials writing the global AI blueprint.

Subscribe to RegulatingAI on Substack. https://substack.com/@regulatingai
#regulatingai #aigovernance #substack #aipolicy #newsletter

Naomi Klein & Karen Hao: The Empire of AI and the Fight for Our Future | Chan Centre Insights
YouTube video by the Chan Centre for the Performing Arts

#ArtificialIntelligence #AIPolicy

www.youtube.com/watch?v=Z1B_...


#Canada needs a #PrivacyRights & #Public #AIPolicy

Sure, we could let #Judges go at it after the fact…
Or we could be more proactive.

🍁 #CANpoli 🍁 #CDNpoli 🍁 #CanadaSky 🌌 #Canadiana

Liability Gap: Schools Deploy AI Tools as Tech Giants Face First Addiction Lawsuit 78% of Australian schools use AI tools but lack governance. A US court just found Meta and Google liable for deliberately addictive design. Schools face a liability and safety crisis.

Liability Gap: Schools Deploy AI Tools as Tech Giants Face First Addiction Lawsuit

#AusEd #AIPolicy #TechLiability #ChildSafety #AusNews #Breaking

thedailyperspective.org/article/2026-03-26-liabi...


What are your thoughts? Please share your policies or principles below. 👇

#IOPsychology #GradSchool #AIPolicy #ResearchEthics #AcademicAI


Over 500 AI governance standards—and counting—have created a complex, often conflicting landscape.

This piece maps a path toward alignment and human-centered governance.
🔗 link.springer.com/rwe/10.1007/...

#AIGovernance #AIpolicy #ResponsibleAI
bsky.app/profile/dsch...

Judge Questions Legality of Government Ban on AI Company Anthropic — Democracy Observer article


A judge challenged a government ban on an AI company, raising First Amendment concerns about government overreach and its impact on...

Read more: democracy.observer/posts/judge-questions-le...

#DemocracyObserver #Accountability #AIpolicy

Trump unveils national AI policy framework - Replaye The Trump administration released a national AI framework addressing safety, jobs, energy use and free speech as the U.S. seeks to stay competitive globally.

Trump unveils national AI policy framework
replaye.com/trump-unveil...

#News #Trump #DonaldTrump #AI #AIPolicy #ArtificialIntelligence
