Check out our new op-ed! With @taniaduarte.bsky.social @markwong.bsky.social, @suoman.bsky.social, & @timdavies.org.uk.
We argue that these collabs normalize close relationships w/Big-Tech, setting them up to be key actors in governance who provide tech solutions to important social ‘problems’.
Posts by Mark Wong
So sorry to see this @abeba.bsky.social. This censorship/complicity is unacceptable and the stress you were put under was inexcusable. AI for good only for the few and business-as-usual. Your critical work is absolutely incredible and speaks truth against big tech power. In solidarity & support.
A short blogpost detailing my experience of censorship at the AI for Good Summit with links to both original and censored versions of slides and links to my talk
aial.ie/blog/2025-ai...
Hats off! 3️⃣ of our team just got elected at the Social Policy Association @socialpolicyuk.bsky.social AGM 2025 💪
@clemmiehilloconnor.bsky.social was elected as Exec Committee Member 👏
@serena-pattaro.bsky.social & @markwong.bsky.social
as Editorial Board Members for the Journal of Social Policy 🙌
Our own @markwong.bsky.social - elected as Editorial Board Member for the Journal of Social Policy - is Senior Lecturer with expertise in data & AI policies, racial justice & participatory methodologies. Mark aims to tap his global networks to elevate JSP as a top venue for debate on policy & AI 🙌
Congrats to @markwong.bsky.social & Aunam Quyoum whose Code of Practice (with Ankita Mishra) has been endorsed by the UK Government's Department for Science, Innovation & Technology & cited in @auditscotland.bsky.social's & @scvo.scot's resources 👏
Info: www.primecommunities.online/outputs/code...
A timely read from @markwong.bsky.social on the Centre's blog this week, ahead of today's #SpringStatement, and a great thread from Mark here highlighting the key points 👇
📝UK Government going full steam ahead with AI but left the people behind
Read in full 🔗 www.gla.ac.uk/research/az/...
See more details and resources signposted in the blog post: *UK Government going full steam ahead with AI but left the people behind* www.gla.ac.uk/research/az/...
@uofgussp.bsky.social @uofglasgow.bsky.social @uofgsocsci.bsky.social @uofgnews.bsky.social @ukri.org
What we need is to involve the public in AI governance.
This will bring diverse perspectives into determining and auditing how AI should or should not be used in government. See more: what we are doing in the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project.
Co-design methods, e.g. people’s panels, ensure the lived expertise of adversely-racialised people is valued and listened to in the AI ecosystem. This echoes @demos-uk.bsky.social's call for government to shift from ‘citizen engagement to citizen participation’ to mobilise mission-led government.
Research I’ve led at University of Glasgow shows preventing inequalities in digital services and AI requires involving the public. Our co-created code of practice provides an example of how the government can develop digital services in more equitable ways. (see links in blog)
Policies need to ensure AI is fair and beneficial for everyone before it gets further rolled out in government departments. UK government needs to involve the public in deciding how and why AI is used in the #publicsector. #ResponsibleAI is about considering who is most impacted & rebalancing who has power.
I wrote a blogpost for @uofgpolicy.bsky.social: UK Government going full steam ahead with AI but left the people behind. Ahead of the #SpringStatement, the Prime Minister & Chancellor claimed today AI makes substantial cuts to civil service ‘more than possible’. But this faith in AI is misplaced.
This work is done as part of the @UKRI_News
funded project, 'Protecting Minoritised Ethnic Community Online' (thanks to the UKRI strategic priorities fund & REPHRAIN). We want to thank everyone: the team, project partners, and participants who contributed to this work. 8/n
Our paper contributes to the growing debates on the importance of centering the role #marginalised communities play in data and AI and amplifying the voices of those most impacted. Read our article to find out more about how #codesign is important for trustworthy services 7/n
Our evidence reveals nuanced realities of emotions, frustration, and hopes that racialised people have towards making digital services fairer and more trustworthy. The article highlights co-design as a desired path by racialised peoples towards realising change and justice. 6/n
Ample evidence in #criticalAI studies has revealed #AIharms to racialised people. AI models cause harm by transmitting discrimination, toxicity, misinformation, and negative stereotypes. What is less known is how people make sense of and navigate these systems and harms. 5/n
What we found were issues relating to trust, data privacy, and poorer-quality access to services. Such experiences are shaped by the fears and lived experience of racism. We outline our case for a co-design approach to guide public and private sectors’ decision-making and #policy 4/n
We argue it is imperative to understand, and value, racialised minorities’ #livedexperience to inform and improve digital services’ design. We drew on qualitative interviews and workshops with people who identify as minoritised ethnic individuals across England and Scotland. 3/n
@AunamQuyoum
and I discussed the vulnerabilities minoritised ethnic people face in datafication processes & how they are racialised within data/algorithmic systems. The pace of change in policy and innovation remains slow, while #AI and #datadriven discrimination is rife. 2/n
📣'Valuing lived experience & co-design solutions to counter racial inequality in data & algorithmic systems in UK’s digital services' by Aunam Quyoum + me, in the Information, Communication, and Society journal. How people navigate racism in data @uofglasgow.bsky.social www.tandfonline.com/doi/full/10....