
Posts by Matt DeVerna

Preparing to give a 1.5 hour presentation and live demo later today about using Claude Code for research when Claude Code is having persistent status problems for the first time in a while… 😬

5 days ago 4 0 1 0
Inside a pro-Conservative influence operation on Community Notes
Inside a pro-Conservative group effort to influence X's Community Notes

NEW on @indicator.media:

I found a group of X accounts that worked together to remove Community Notes from British Conservative Party accounts during the 2024 UK general election.

1 week ago 44 28 1 2
Screenshot of the TL;DR section of a Substack blog post, which reads as follows:

- I have private datasets (ChatGPT logs, WhatsApp data, YouTube traces) that are too sensitive to share but could answer dozens of research questions my small team will never think to ask.
- I want to build a public platform where anyone can submit a structured research idea in plain English. My team runs the analysis using AI coding agents against the private data, and the contributor gets full credit and all the results.
- The model has real limitations: agents make mistakes, contributors cannot iterate freely with the data, and verification requires trust. I am starting with a small pilot scoped to tractable problems.
- I want feedback on whether this is useful, where it will fail, and what I am missing before I build it.


I have not read this yet but find the TLDR intriguing.

open.substack.com/pub/kirangar...

@gvrkiran.bsky.social is great, so I am hoping to help foster the feedback he has requested.

Please only engage constructively.

1 week ago 2 0 0 0
Project Glasswing: Securing critical software for the AI era
A new initiative to secure the world’s most critical software and give defenders a durable advantage in the coming AI-driven era of cybersecurity.

www.anthropic.com/glasswing

1 week ago 1 0 0 1

Deadline coming up! Always one of the best conferences of the year.

1 week ago 1 1 0 0

In March, TIP Center Director Jeff Hancock traveled to London to share expertise at a special evidence session hosted by the UK Parliament’s Science, Innovation, and Technology Committee exploring whether the UK Government should ban access to #SocialMedia for children under the age of 16.

2 weeks ago 2 2 1 0

New in Nature Human Behaviour: How Deceptive Online Networks Reached Millions in the US 2020 Elections www.nature.com/articles/s41...

- Reached at least 37M Facebook and 3M Instagram users
- 3 networks out of 49 responsible for >70% of users reached
- Exposed users older, more conservative

2 weeks ago 160 83 2 2
Uncovering simultaneous breakthroughs with a robust measure of disruptiveness
An embedding-based disruption measure not only robustly captures disruptive works but also reveals simultaneous discoveries.

The Higgs mechanism was proposed in 1964 by three independent teams.

But here is the puzzle🤔: the "disruption index" says Higgs's paper is among the least disruptive ever.

So what is going on?

In our new paper, just out in #ScienceAdvances, we take up this puzzle: doi.org/10.1126/scia... 👇

1/7

2 weeks ago 42 20 1 2
A llama sweating while writing a paper at a desk. A sign says "Deadline! March 31 11:59pm AOE"


❗The full paper submission deadline for COLM is ~14 hours from now (11:59pm AOE)!

Please submit your final PDFs on the same page where you uploaded your abstracts. And please use the provided LaTeX templates; do not handwrite your manuscript like this llama is!

Good luck!

2 weeks ago 8 4 0 0
Overreliance on AI in Information-seeking from Video Content
The ubiquity of multimedia content is reshaping online information spaces, particularly in social media environments. At the same time, search is being rapidly transformed by generative AI, with large...

Interesting new paper from @lajello.bsky.social and friends.

Using an experimental design with a deceptive AI system, they provide some evidence that people may blindly rely on AI-provided information in video-based information-seeking tasks.

3 weeks ago 6 0 1 0
Dutch Court Orders X, Grok to Stop AI-Generated Sexual Abuse Content
Dutch court bans Grok's nudify tool and hits xAI with €100,000-a-day fines in Europe's first binding injunction against an AI image generator.

A Dutch court has ruled Grok must stop generating non-consensual undressing images of Dutch residents worldwide and child sexual abuse material in the Netherlands, with €100,000-a-day fines for non-compliance, reports Ramsha Jahangir.

3 weeks ago 2507 822 104 110
Image of the text at https://colmweb.org/submission-instructions.html


~45 hours until the abstract deadline! Submit abstracts on OpenReview by 3/26 11:59pm AOE, full papers 3/31.

Final reminders & submission instructions for COLM are below. Note that as of the March 31 deadline, papers must not be under review for ICML or committed to ACL.

colmweb.org/submission-i...

3 weeks ago 7 4 0 1

We've extended the #CySoc2026 deadline to March 25 to give authors a few additional days to prepare their submissions! Please consider submitting your work and sharing this information!

More information: cy-soc.github.io/2026/
Submission portal: easychair.org/conferences?...

4 weeks ago 4 5 0 3

A few extra days for CySoc submissions!

4 weeks ago 0 1 0 0

We're excited to hear from Abby André and Jonathan Gilmour from the Impact Project as Joint Keynote speakers!

1 month ago 8 4 0 0

Get your submissions in for CySoc @icwsm.bsky.social!!

1 month ago 3 2 0 0
The train has left the station: Agentic AI and the future of social science research | Brookings
A new era of agentic AI agents has begun. What does it mean for social scientists? Solomon Messing and Joshua Tucker discuss.

Thoughtful piece from @jatucker.bsky.social and @solmg.bsky.social about agentic AI and social science.

1 month ago 5 1 1 0
CySoc 2026 - International Workshop on Cyber Social Threats

📣CFP: 7th edition of the International Workshop on Cyber Social Threats (CySoc)

We welcome papers examining a diverse range of issues related to harmful online communication.

📅Submission: March 22nd, 2026
📅Notification: April 8th, 2026

🔗 Details: cy-soc.github.io/2026/

1 month ago 4 5 0 0

🚨🚨🚨

1 month ago 1 1 0 0

Organized with love with @yang3kc.bsky.social @frapierri.bsky.social @yelenamejova.bsky.social @ugurkursuncu.bsky.social @mrjimmyblack.com

1 month ago 2 1 0 0

We are looking forward to your amazing submissions to the CySoc workshop at ICWSM 2026!

Learn more here: cy-soc.github.io/2026/

Note: the previously circulated submission deadline has been shifted.

1 month ago 5 4 1 1

Yikes...

1 month ago 1 0 0 0

🤦‍♂️

1 month ago 0 0 0 0

👀👀

1 month ago 1 0 0 0

- Set up the GitHub command-line tool, gh
- Have Claude Code create something and open a pull request
- Leave inline comments with detailed instructions on GitHub
- Ask CC to pull them down and make a plan to address them
- Rinse and repeat

Nice balance between automation and quality control, IMHO.
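A minimal sketch of that loop using the gh CLI; the repo (OWNER/REPO), PR number (42), and titles below are hypothetical placeholders, and gh is assumed to be installed and authenticated via `gh auth login`:

```shell
# Sketch of the Claude Code + GitHub review loop described above.
# OWNER/REPO and PR number 42 are placeholders.
review_loop() {
  # 1. Claude Code pushes a branch and opens a pull request
  gh pr create --title "Draft: new feature" --body "Opened by Claude Code"

  # 2. After leaving inline review comments on GitHub,
  #    pull down the top-level PR discussion...
  gh pr view 42 --comments

  # 3. ...and the inline (line-anchored) review comments,
  #    which Claude Code can turn into a plan to address
  gh api "repos/OWNER/REPO/pulls/42/comments" --jq '.[].body'
}
```

The function is only a template: in practice you would run each command interactively and feed the comment bodies back into Claude Code before repeating the cycle.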

1 month ago 1 0 7 0
Large Language Models Require Curated Context for Reliable Political Fact-Checking -- Even with Reasoning and Web Search
Large language models (LLMs) have raised hopes for automated end-to-end fact-checking, but prior studies report mixed results. As mainstream chatbots increasingly ship with reasoning capabilities and ...

Explore the preprint ⤵️

1 month ago 2 1 0 0
AI Chatbots Struggle at Fact-Checking, but Curated Evidence Can Help
Can AI chatbots reliably tell you whether a political claim is true or false? And if not, what would it take to make them trustworthy fact-checkers?

Matt and co-authors Kai-Cheng Yang, Harry Yaojun, and Filippo Menczer found that today's leading models perform poorly, even when equipped with advanced reasoning and web search capabilities.

👉 The key to better performance? Giving them access to high-quality, curated evidence.

Read the summary ⤵️

1 month ago 3 1 1 0

Can #AI #chatbots reliably tell you whether a political claim is true or false? If not, what would it take to make them trustworthy fact-checkers?

A new study led by Matt DeVerna tackles these questions by evaluating 15 #LLMs on more than 6K claims fact-checked by PolitiFact over an 18-year period.

1 month ago 6 2 1 0
Work With Us - NYU’s Center for Social Media, AI, and Politics

@csmapnyu.org is hiring two postdocs.

Amazing group, highly recommend applying.

1 month ago 8 3 0 0

Abstract submissions close on March 3rd!

We are also extending a ✨ call for mentored reviewers ✨: if you advise excellent graduate or postdoctoral researchers, you are welcome to recommend them to review for IC2S2 2026. Email IC2S2@uvm.edu to nominate mentored reviewers (or faculty colleagues).

1 month ago 14 12 1 2