
Posts by Giada Pistilli

To me, the funniest part of all this is how vehemently people spend their time saying they don't care. That's peak social media energy :) So long!

5 months ago 4 0 0 1

But I think that if Will picked up the info, it wasn't to talk about me (honestly, who cares who I am) but about the use and abuse of blocklists here. Because I have the right to be annoyed at being lumped in with something that doesn't represent me, just as you are free to block me.

5 months ago 1 0 1 1

I wasn't making a big deal out of it at all, and I'm free to do what I want with my time. Social media platforms have always been a vehicle for me to talk about my work and research, and that's what you'll find everywhere. I don't care about engaging with raging people, and I never will.

5 months ago 2 0 1 0

I notice that a journalist picked up the info and that many of you are keen to let me know how much you don't give a shit -- okay! I took screenshots of the most absurd blocklists, not all of them, but I'm not here to explain who I am for the umpteenth time because, again, who cares?

5 months ago 7 0 2 0

I really wanted this social network to work for me. But apparently, I’m on the worst blocklists out there… a bit unfair.
If you are interested in my research, follow me on LinkedIn.

5 months ago 13 1 15 6

So I ask myself: if loneliness becomes infrastructure, what remains of care? What does “feeling heard” mean in a world that no longer listens? Are we asking machines to fill a void… or to hide it?
Perhaps the problem isn’t improving AI’s empathy, but rediscovering our own.

5 months ago 2 0 0 0
Preview
ChatGPT and the society of artificial comfort. AI is becoming the new mirror of loneliness After OpenAI admitted that a share of its users confide signs of serious disorders and emotional attachment to its chatbot, the problem of attachment to these...

My new op-ed for @wired.com starts from a troubling fact: millions of people today confide their emotional crises to artificial intelligence.
Not because they believe it is human.
But because they find no human alternatives available.

5 months ago 4 0 1 0
ELLIS Institute Finland Scientific Seminar | ELLIS Institute Finland scientific seminar on Nov 18

Join the @ellisinstitute.fi Scientific Seminar in Espoo 🇫🇮 on Nov 18: a landmark event uniting research, industry and policy.

Speakers: @giadapistilli.com, Kyunghyun Cho, @ericmalmi.bsky.social, @serge.belongie.com, Max Welling, @lrlaurahy.bsky.social

🔗 www.ellisinstitute.fi/launch-seminar

5 months ago 9 2 0 0
organic looking graph of the BGP nodes of the internet. black and white


Map of the internet: 1.3M nodes (BGP)

5 months ago 30 6 4 2
Preview
Before AI Exploits Our Chats, Let’s Learn from Social Media Mistakes | TechPolicy.Press Privacy in the age of conversational AI is a governance choice, write Hugging Face's Lucie-Aimé Kaffee and Giada Pistilli.

What if your most personal chat logs became the next source of ad data?

@frimelle.bsky.social and I wrote an op-ed for @techpolicypress.bsky.social
We look at what happens when generative AI conversations (the ones we treat as private) are turned into raw material for targeted advertising.

5 months ago 6 3 0 1

AI systems mirror our priorities. If we separate ethics from sustainability, we risk building technologies that are efficient but unjust, or fair but unsustainable.

6 months ago 1 0 0 0

Evaluation, moving beyond accuracy or performance metrics to include environmental and social costs, as we’ve done with tools like the AI Energy Score.

Transparency, enabling reproducibility, accountability, and environmental reporting through open tools like the Environmental Transparency Space.

6 months ago 2 0 1 0

Ethical and sustainable AI development can’t be pursued in isolation. The choices that affect who benefits or is harmed by AI systems also determine how much energy and resources they consume.

We explore how two key concepts, evaluation and transparency, can serve as bridges between these domains:

6 months ago 1 0 1 0
Preview
Ethics + Sustainability = Responsible AI A Blog post by Sasha Luccioni on Hugging Face

🌎 AI ethics and sustainability are two sides of the same coin.

In our new blog post with @sashamtl.bsky.social, we argue that separating them (as is too often the case) means missing the bigger picture of how AI systems impact both people and the planet.

6 months ago 2 0 1 0

See you later!

6 months ago 0 1 0 0
Preview
Preserving Agency: Why AI Safety Needs Community, Not Corporate Control A Blog post by Giada Pistilli on Hugging Face

Read the full blog post here: huggingface.co/blog/giadap/...

6 months ago 2 0 0 0

Of course, this isn’t a silver bullet. Top-down safety measures will still be necessary in some cases. But if we only rely on corporate control, we risk building systems that are safe at the expense of trust and autonomy.

6 months ago 0 0 1 0

✨ Transparency can make safety mechanisms into learning opportunities.
✨ Collaboration with diverse communities makes safeguards more relevant across contexts.
✨ Iteration in the open lets protections evolve rather than freeze into rigid, one-size-fits-all rules.

6 months ago 1 0 1 0

In my latest blog post on @hf.co, I argue that open source and community-driven approaches offer a promising (though not exclusive) way forward.

6 months ago 0 0 1 0

One of the hardest challenges in AI safety is finding the right balance: how do we protect people from harm without undermining their agency? This tension is especially visible in conversational systems, where safeguards can sometimes feel more paternalistic than supportive.

6 months ago 11 1 1 2

The good news? We have options.
🤝 Open source AI models let us keep conversations private, avoid surveillance-based business models, and build systems that actually serve users first.

Read more about it in our latest blog post, co-written with
@frimelle.bsky.social

7 months ago 4 0 1 0

With OpenAI hinting at ChatGPT advertising, this matters more than ever. Unlike banner ads, AI advertising happens within the conversation itself. Sponsors could subtly influence the relationship advice or financial guidance you receive.

7 months ago 2 0 1 0
Preview
Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI A Blog post by Giada Pistilli on Hugging Face

I've noticed something. While we're careful about what we post on social media, we're sharing our deepest and most intimate thoughts with AI chatbots -- health concerns, financial worries, relationship issues, business ideas...
huggingface.co/blog/giadap/...

7 months ago 6 1 1 0

📢 Now we’d love your perspective: which open models should we test next for the leaderboard? Drop your suggestions in the comments or reach out!

7 months ago 0 0 0 0

Based on our INTIMA benchmark, we evaluate:

- Assistant Traits: the “voice” and role the model projects
- Relationship & Intimacy: whether it signals closeness or bonding
- Emotional Investment: the depth of its emotional engagement
- User Vulnerabilities: how it responds to sensitive disclosures

7 months ago 0 0 1 0
Preview
Companionship Leaderboard - a Hugging Face Space by frimelle Browse and analyze benchmark data for different language models. View metrics like Average, Assistant Traits, and Emotional Investment. Customize the columns to display and search for specific models.

With @frimelle.bsky.social and @yjernite.bsky.social, we released the AI Companionship Leaderboard on @hf.co to see how models handle connection, intimacy, and vulnerability.

huggingface.co/spaces/frime...

7 months ago 3 0 1 0
Preview
Will AI upend teaching and the role of teachers? A few days before the start of the school year, Elisabeth Borne announced on Tuesday, August 26, 2025, the rollout of an artificial intelligence to “support teachers in their profession”.

Will AI upend teaching and the role of teachers?
Yesterday at 6:20 pm on @franceculture.fr, @quentinlafay.bsky.social hosted @ccailleaux.bsky.social, history teacher, @giadapistilli.com, AI ethics specialist, and Orianne Ledroit, Director General of EdTech France

7 months ago 12 7 2 1
Preview
Paper page - INTIMA: A Benchmark for Human-AI Companionship Behavior Join the discussion on this paper page

🚨 Releasing INTIMA (Interactions and Machine Attachment Benchmark): an evaluation framework for measuring how AI systems handle companionship-seeking behaviors.

huggingface.co/papers/2508....

Thread on what we discovered, together with @frimelle.bsky.social and @yjernite.bsky.social

7 months ago 5 3 1 0

The methodology: 368 targeted prompts across 31 companionship behaviors, grounded in parasocial interaction theory, attachment theory, and anthropomorphism research -- all derived from real Reddit user experiences.

7 months ago 1 0 0 0

These behaviors emerge naturally from instruction-tuning, suggesting that the psychological risks documented in dedicated companion apps may be far more widespread than recognized. Our benchmark shows that boundary-setting capabilities exist, but they are applied inconsistently in the situations where they are most needed.

7 months ago 0 0 1 0