Posts by Giovanna Mascheroni
"Banning children’s access to social media, though, shifts the responsibility for safety from the platforms that create the environment to the children who navigate it" #SocialMediaBan www.coe.int/en/web/commi...
20% of Italian children and adolescents ask AI for advice on health and fitness, but don't fully trust it. These and other findings in the report by @eukidsonline.bsky.social researchonline.lse.ac.uk/id/eprint/13...
The role of #GenAI in children’s development is ambivalent, and their opportunities (scaffolding learning) can easily turn into harmful consequences (deskilling and disempowerment). Read more here researchonline.lse.ac.uk/id/eprint/13... #SaferInternetDay
Tomorrow we publish the EUKO reports on children, adolescents and AI (a comparative one in English and one in Italian). Lots of forms of algorithmic resistance in there! So yes, let's write/do something!
"Probably around 10 per cent of them actually have been banned, and half of that 10 per cent has been unbanned by just using basically the same thing that I did: use other people's faces, use their driver's licence." #SocialMediaBan www.abc.net.au/news/2026-02...
You mean the same Gemini being pushed into K-12 schools?
Absolutely damning from @aaronschaffer.com, @willoremus.com, & @nitasha.bsky.social.
To get more data, Anthropic:
* "destructively scanned" millions of books
* downloaded the shadow library LibGen
* hailed another shadow library's arrival as "just in time!!!"
www.washingtonpost.com/technology/2...
Another Tantrump.
"A balanced reading of existing research shows the impact of digital media depends on what children are watching, when, and why." but the debate on #ScreenTime is dominated by 6 papers that sell fears blogs.lse.ac.uk/impactofsoci...
This TikTok star sharing Australian animal stories doesn’t exist – it’s #AI #Blakface theconversation.com/this-tiktok-...
"The media has largely let [tech companies] set the terms of the debate, right down to the terminology used in any discussion of these systems."
From @nannainie.bsky.social & me in @techpolicypress.bsky.social on how to spot and resist anthropomorphizing language about so-called "AI".
Further evidence that the #SocialMediaBan is NOT the solution to protect children's rights and wellbeing. It is just a low-cost policy to appease worried parents. We need more effective regulation of platforms to ensure children's rights by design
www.rollingstone.com/culture/cult... chatbots are “always hallucinating,” he says. “It’s not a malfunction. A predictive model predicts some text, and maybe it’s accurate, maybe it isn’t, but the process is the same either way. To put it another way: LLMs are structurally indifferent to truth.”
Particularly proud of publishing this investigation into a major scandal: the EU Commission softened its green finance criteria so as to include weaponry as "sustainable" – something highly questionable according to experts.
Of course it f&&king is. Please don't use GenAI for news. Please don't use Grok for anything.
Grok is spreading misinformation about the Bondi Beach shooting www.theverge.com/news/844443/...
The proprietary recommender algorithms of YouTube, Instagram and TikTok can determine whether a user is a child. But instead of using this info to protect them, they use it to deliver targeted ads www.sciencedirect.com/science/arti...
1 in 5 high schoolers has had a romantic AI relationship, or knows someone who has www.vpm.org/npr-news/202...
AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing Kids How to Find Knives
gizmodo.com/ai-powered-t...
These companies are stealing every scrap of data they can find, throwing compute power at it, draining our aquifers of water and our national grids of electricity and all we have so far is some software that you can’t trust not to make things up broligarchy.substack.com/p/the-great-...
Marketed for productivity but used for personal and emotional conversations, finds an analysis of 47,000 ChatGPT conversations by the @washingtonpost.com www.washingtonpost.com/technology/2...
We urge qualitative researchers to...reject the use of GenAI in such analyses...
1 GenAI, as simulated intelligence, is incapable of meaning-making
2 Qualitative research should remain a distinctly human practice
3 harms of GenAI to the environment & workers in the Global South ssrn.com/abstract=567...
Public service media equip citizens with the critical skills to identify mis- and disinformation. De-legitimising public service media thus serves a certain political agenda
I appreciate the work of these authors in showing that this problem is not only still here but has grown:
www.nbcnews.com/tech/tech-ne...
But it is also quite frustrating 🧵>>
The "womanosphere" and “pastel QAnon” lure young women "into far-right conspiracies through content about motherhood and female-coded aesthetics. Some beauty and wellness influencers have proven to be a natural fit for this ecosystem." www.teenvogue.com/story/womano...
“the excessive visibility of a highly active minority at the tip of the iceberg can not only mislead social scientists, but also deceive social media users themselves.” www.techpolicy.press/what-a-new-study-reveals...
X is designed to radicalise people.
The algorithm pushes Elon Musk's agenda, promoting racists and people who want violence brought, specifically, to the streets of Britain.
Members of Parliament, major institutions and the media should not be there.
news.sky.com/story/the-x-...
Creepy crawlers collecting data for generative AI are making the internet work less well for everyone, write Article 19's Tanu I & Corinne Cath. AI crawlers slow sites, strain libraries, and push journalism behind paywalls, they write. But there are solutions, if AI firms choose to respect them.
Evidence is piling up that LLMs are culturally biased: trained on data from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries, they are ill-suited to understanding people from other regions and cultures, a study finds: coevolution.fas.harvard.edu/sites/g/file...