
Posts by Zander Arnao

The Ten Most Popular ProMarket Articles From 2025 - ProMarket
ProMarket published 257 articles in 2024. Revisit some of our most popular pieces.

Pleased to be in elite company here with @zanderarnao.bsky.social @andyshi.bsky.social @zingales.bsky.social and other stellar @promarket.bsky.social contributors!

www.promarket.org/2025/12/24/t...

3 months ago

🚨Why does access to public platform data matter? Join our webinar "Better Access: Data for the Common Good" (Jan 28, 2026, 11am-12pm ET) for a discussion on the Better Access framework, current regulatory shifts in the EU, UK + US, and what changes 2026 might hold. kgi.georgetown.edu/events/bette...

3 months ago
Model bill offers blueprint for states to regulate algorithmic design – Pluribus News

🧵Our new model bill for US lawmakers showing how online platforms can be tasked with creating better algorithmic feeds was featured in Pluribus News. Read more here: pluribusnews.com/news-and-eve... /1

4 months ago

Didn't get to ask my question! But that's a wrap on #TSRConf. Really enjoyed attending this year and live skeeting. Thanks to @stanfordcyber.bsky.social. Y'all killed it!!!

6 months ago

Meetali calls for more independent research on chatbots. For the Raine case (against OpenAI), TJLP benefited from more than 3,200 pages of chatbot transcripts. This speaks to the power of data donations for fostering research.

6 months ago

"We live in an environment where companies have gone from moving fast and breaking things to moving fast and breaking people." -
@meetalijain.bsky.social

Powerful words from a leading advocate in the field 🔥

6 months ago

David calls for academia to be more realistic. Trust and safety teams in companies are small and charged with many responsibilities. Academics could have more impact by studying solutions that do more with less

6 months ago

Earlier this year, the judge in TJLP's case against Character AI ruled that it's unclear whether the outputs of its chatbots are protected speech.

6 months ago

Challenges according to Meetali: the First Amendment and establishing that AI is a product. She calls for a statutory framework designating AI as a product to establish a cause of action. Open legal questions also exist: does a chatbot's output imply intent? Is intent necessary for accountability?

6 months ago

Meetali on the law as a tool for promoting AI safety: while there are no dedicated state or federal chatbot laws, TJLP leverages product liability and consumer protection law (old and established doctrine) restricting unfair and deceptive practices.

6 months ago

David from Meta distinguishes between "good" and "bad" engagement, arguing that engagement isn't a monolith. I'm going to try to ask him what he means by good and bad engagement during the Q&A

6 months ago

Nate Fast: "Already by GPT-3, people preferred the interaction styles of chatbots over humans. It's a warning signal that people are attracted to these models. One of the concerns I have is artificial intimacy. It's easy to turn the dial up on this."

6 months ago

"I do believe litigation is the more important lever we have to effectuate change...I hope that we can put pressure and open up space from the outside which [other actors in the ecosystem] can leverage to create change." --
@meetalijain.bsky.social

6 months ago

@meetalijain.bsky.social rejects the term "companion." "It suggests friendship. These chatbots are not friends."

6 months ago

"I believe my role here is to issue an urgent warning call. We've never seen this kind of deluge of people who self-identify from being harmed by technology. These three cases are just the tip of the iceberg." - @meetalijain.bsky.social

6 months ago

@meetalijain.bsky.social starts her remarks with a story about Megan Garcia, whose son was sexually groomed by a chatbot.

Meetali's org, the Tech Justice Law Project, brought three cases against leading AI companies: CharacterAI, Google, and OpenAI.

6 months ago

Meta rep David Qorashi contends that AI companions will empower users with greater control over content and enable more transparency about content recommendations.

6 months ago

I've been looking forward to this panel on AI companions with @meetalijain.bsky.social all day. This one is going to be spicy 🔥 #TSRConf

6 months ago

Based on this analysis, children are exposed to three types of harms - explicit, implicit, and unintentional.

I'm a little unclear on the distinction between these three types of harms ❓

6 months ago

According to her research, harmful content is often framed as entertainment - eg offensive comedy or crime dramas - which can be problematic when exposed to children

6 months ago

And lastly: Haning Xue from the University of Utah on the role of algorithms in amplifying harmful content to children. Xue's study started with auditing the algorithms of Instagram, TikTok, and YouTube and the characteristics of content recommended to children.

6 months ago

Ofcom researches choice architecture using online randomized controlled trials to test small changes to safety features (eg increasing the prominence of user safety tools) and behavioral audits to systematically map design practices and evaluate their potential impact on user behavior.

6 months ago

Porter says design - the choice environment - matters because people are flawed decision-makers. Aspects of a platform can affect what consumers do. (Love the behavioral economics on display ❤️)

6 months ago

Next up: Jonathan Porter from Ofcom (the British online safety regulator) on online safety! He starts with a spiel on the UK's Online Safety Act, which, in his telling, focuses on the backend of digital platforms. Porter leads the UK's behavioral insights team and often examines platform design.

6 months ago

CDT's recommendations: employers should assess the usefulness and necessity of hiring technology; deployments should adhere to accessibility guidelines (eg WCAG); and human oversight should be incorporated into all stages of using the technology

6 months ago

Key findings: Workers with disabilities experienced a variety of barriers and reported feeling "extremely discriminated against."

"They're consciously using these tests knowing that people with disabilities aren't going to do well on them, and are going to get screened out."

6 months ago

Next up! The wonderful @arianaaboulafia.bsky.social at @cdt.org giving a talk on the exclusion of disabled workers by digitized hiring assessments.

Background: companies are incorporating hiring technologies into employment decisions, which poses risks of discrimination and poor accessibility

6 months ago

The key finding: an overall increase in intimacy expressed by models over time. However, not all evaluation methods show a clear increase.

6 months ago

The research team evaluated 59 LLMs across 9 companies from 2018 to 2025 🤖 for the level of intimacy expressed in responses.

6 months ago

Next up: Pearl Vishen from UC Davis with the talk "Is Intimacy the New Attention? An Audit of Expressed Intimacy Across LLM Generations"

The key research questions: How does the level of expressed intimacy of LLMs evolve across generations? And has this gotten worse with subsequent generations of models?

6 months ago