
Posts by Marco Minici

Screaming in the Void: Introducing my Podcasts
Introducing a series of posts about my podcasts

florio.dev/screaming-in...

1 month ago

Do you believe that groups of users can coordinate across different social platforms to try to influence other people's opinions? Well, in the new episode of targz, @marcominici.bsky.social, from ICAR-CNR, describes the technique they used in their research to identify such malevolent groups.

Link below! 👇

1 month ago

📢 New paper! We study urban location recommenders and their feedback loop with human mobility. Simulating this loop reveals a paradox: people explore more individually, yet city visits and encounters concentrate. Cities coevolve with AI, and inequality can grow.
📄 link.springer.com/article/10.1...

3 months ago
screenshot of the title and authors of the Science paper that are linked in the next post

Our new article in @science.org enables social media reranking outside of platforms' walled gardens.

We apply LLM-powered reranking of highly polarizing political content to N=1256 participants' feeds. Downranking cools tensions with the opposite party, but upranking inflames them.

4 months ago
Today, social media platforms hold the sole power to study the effects of feed-ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1256 participants on X during the 2024 US presidential campaign. Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by more than 2 points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective polarization. This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings.
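The abstract's intervention can be sketched minimally: score each post for AAPA-like content, then reorder the feed to downrank or uprank high-scoring posts. This is an illustrative stand-in, not the study's actual pipeline; the keyword scorer and function names below are hypothetical, whereas the real experiment used a large language model to classify posts.

```python
def aapa_score(text):
    """Stand-in for the LLM classifier: fraction of hostile keywords.

    Purely illustrative; the real study scored antidemocratic attitudes
    and partisan animosity (AAPA) with an LLM, not a keyword list.
    """
    hostile = {"enemy", "traitor", "destroy", "corrupt"}
    words = text.lower().split()
    return sum(w.strip(".,!") in hostile for w in words) / max(len(words), 1)

def rerank(feed, direction="down"):
    """Reorder posts by AAPA score.

    direction='down' pushes high-AAPA posts to the bottom of the feed;
    direction='up' pulls them to the top. Python's sort is stable, so
    posts with equal scores keep their original order.
    """
    return sorted(feed, key=aapa_score, reverse=(direction == "up"))

feed = [
    "The other party will destroy the country!",
    "New bakery opened downtown, great croissants.",
    "Interesting policy debate on the budget tonight.",
]
downranked = rerank(feed, "down")  # hostile post moves to the bottom
upranked = rerank(feed, "up")     # hostile post moves to the top
```

The key design point, mirrored from the paper, is that this reordering needs only the participant's rendered feed, not platform cooperation: the same scored feed can be shifted in either direction to measure the causal effect of exposure.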

New paper in Science:

In a platform-independent field experiment, we show that reranking content expressing antidemocratic attitudes and partisan animosity in social media feeds alters affective polarization.

🧵

4 months ago

What does coordinated inauthentic behavior look like on TikTok?

We introduce a new framework for detecting coordination in video-first platforms, uncovering influence campaigns using synthetic voices, split-screen tactics, and cross-account duplication.
📄 https://arxiv.org/abs/2505.10867
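One of the signals mentioned above, cross-account duplication, can be sketched as a small clustering step: accounts that publish identical content fingerprints (e.g. hashes of a video's audio track or transcript) get linked, and connected components become candidate coordinated clusters. This is a minimal sketch of the general idea, not the paper's framework; the function and the `min_shared` threshold are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def duplication_clusters(posts, min_shared=1):
    """Group accounts that shared identical content fingerprints.

    posts maps account -> set of fingerprints. Accounts sharing at least
    min_shared fingerprints are linked; connected components of size > 1
    are returned as candidate coordinated clusters (union-find).
    """
    parent = {a: a for a in posts}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for a, b in combinations(posts, 2):
        if len(posts[a] & posts[b]) >= min_shared:
            parent[find(a)] = find(b)  # union the two components

    groups = defaultdict(set)
    for a in posts:
        groups[find(a)].add(a)
    return [g for g in groups.values() if len(g) > 1]

posts = {
    "acct1": {"h1", "h2"},
    "acct2": {"h2", "h3"},  # shares fingerprint h2 with acct1
    "acct3": {"h9"},        # no overlap: not flagged
}
clusters = duplication_clusters(posts)
```

In practice the fingerprints would come from perceptual hashing of video or audio rather than exact strings, so near-duplicates would also match; that extension is omitted here.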

11 months ago

We constantly ask our apps where to visit, eat or drink.
AI tells us, and most of the time, we follow it. The loop continues.
But do AIs favor certain places? How would we even know if we don’t own the platforms?
We modeled this complex phenomenon, and the results are fascinating!
Spoiler: rich get…

1 year ago

IU's Observatory on Social Media defends citizens from online manipulation – the opposite of censorship
osome.iu.edu/research/blo...

1 year ago
IOHunter: Graph Foundation Model to Uncover Online Information Operations

Preprint is available on arXiv arxiv.org/abs/2412.14663

1 year ago

This work would not have been possible without the other amazing coauthors @luceriluc.bsky.social @frafabbri.bsky.social @emilioferrara.bsky.social

Bonus Pic: myself beyond excited to stand next to my poster!

1 year ago

Our work provides a scalable approach for online moderation teams, public institutions, and independent organizations to audit the health of online environments—especially crucial during political events such as election cycles.

1 year ago

2. We explore how our multimodal framework exhibits foundation model behavior in detecting online information operations. Our results show that pretraining IOHunter on past IO datasets enables it to generalize to new, emerging IOs with only a few labeled examples for fine-tuning.

1 year ago

Key takeaways:

1. We propose a multimodal framework that effectively integrates textual and graph information using a cross-attention mechanism, which is then processed by a GNN.
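The integration step described above can be sketched with toy numbers: text embeddings act as queries that attend over per-node graph features, and the fused node representations are then passed through a message-passing layer. This is a minimal untrained sketch of the generic pattern (cross-attention followed by a GNN layer), not IOHunter's actual architecture; all shapes and the mean-aggregation layer are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_emb, graph_emb):
    """Text embeddings (queries) attend over graph features (keys/values).

    Returns one fused vector per node. Untrained and projection-free,
    purely to show the data flow of the cross-attention fusion step.
    """
    d = text_emb.shape[1]
    Q, K, V = text_emb, graph_emb, graph_emb
    attn = softmax(Q @ K.T / np.sqrt(d))  # rows sum to 1
    return attn @ V

def gnn_layer(H, A):
    """One mean-aggregation message-passing layer with ReLU."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return np.maximum(A @ H / deg, 0)

rng = np.random.default_rng(0)
n, d = 5, 8
text_emb = rng.normal(size=(n, d))             # per-account text features
graph_emb = rng.normal(size=(n, d))            # per-account structural features
A = (rng.random((n, n)) < 0.4).astype(float)   # toy adjacency matrix
out = gnn_layer(cross_attention(text_emb, graph_emb), A)
```

In the trained setting, a classification head on top of `out` would score each account as an IO driver or an organic user.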

1 year ago

Can we effectively detect covert Information Operations (IOs) that attempt to manipulate socio-political debates on social media?

This is the focus of our work, "IOHunter: Graph Foundation Model to Uncover Online Information Operations", just presented at #AAAI #AAAI2025

1 year ago

The three horsemen of social media: brain rot, anxiety and foreign interference
voxeurop.eu/en/social-me...

1 year ago
IOHunter: Graph Foundation Model to Uncover Online Information Operations

Read our preprint available on arXiv at: arxiv.org/abs/2412.14663

1 year ago

Our effort highlights the critical role of multi-modality in modeling malicious user behavior, the value of attention for weighting the modalities, and how we can advance toward a GFM for the IO detection task by pre-training our architecture on a dataset of previous IOs.

1 year ago

Our work demonstrates how a multi-modal framework based on GNN+LM and massive pre-training produces a model that effectively generalizes to IOs not present in the original training dataset — the most realistic scenario for IO detection.

1 year ago

Our model delivers substantial improvements over current IO detection methods across three learning tasks:

1️⃣ Supervised IO Detection
2️⃣ Scarcely-Labeled Supervised IO Detection
3️⃣ Cross-IO Detection (with minimal or no labeled data from emerging IOs)

1 year ago

Maintaining the integrity of online discourse is essential for safeguarding fair democratic processes.

Our multi-modal learning framework IOHunter integrates both content and contextual information to identify actors attempting to manipulate online discussions - i.e., IO Drivers

1 year ago

"IOHunter: Graph Foundation Model to Uncover Online Information Operations" goes to AAAI'25!
This is the result of an incredible collaboration with @luceriluc.bsky.social @frafabbri.bsky.social and @emilioferrara.bsky.social

Read the entire thread for a summary and the link to the preprint.

1 year ago
Exposing Cross-Platform Coordinated Inauthentic Activity in the Run-Up to the 2024 U.S. Election

Figure 1

Figure 2 & Table 5

Figure 3

New evidence of cross-platform foreign interference on social media during the 2024 U.S. Election that drove the spread of highly-partisan, low-credibility, and conspiratorial content, from Cinus, Minici, @luceriluc.bsky.social @emilioferrara.bsky.social arxiv.org/pdf/2410.22716

1 year ago