
Posts by Sergio Graziosi

Latest Changes (16/04/2026 - V6.18.0.0) - Forum announcements

#EPPI-Reviewer new release: Version 6.18.0.0 includes a new streamlined system to evaluate and refine LLM prompts. It supports Claude, GPT-5.x, and more models, and adds new functions to facilitate the evaluation and maintenance of Living Evidence workflows.
More details: eppi.ioe.ac.uk/cms/Default....

1 week ago 3 2 0 0

This is the fundamentally false premise that people believe, despite all evidence to the contrary.🤷🏿‍♂️

The truth is:

1) Negative, divisive content is not the most engaging content. It's not even close.

2) Algorithms reward negative, divisive content, because billionaires buy platforms and make it so.

2 weeks ago 275 89 12 6

If I say that AOC is a legit candidate for VP or president of the United States, and that her being in the White House will save *tens of millions of lives* around the world...

Dozens of "non-racist" Dems will descend on my mentions to try to tell me that no we need a white man in the White House.🤡

2 weeks ago 76 7 2 0
Algorithmic Bias

fresh off the press from yours truly: oecs.mit.edu/pub/b61joemo...

I offer an overview of algorithmic bias. I trace its historical roots, examine canonical scholarship and notable real-world incidents, and explore how algorithmic bias emerged as a field of study.

1/

1 month ago 505 250 9 19

"Being reasonable" is the key I've been struggling to fit in the right place.
Today, the only way to be reasonable is to be seen as being entirely unreasonable. Or the flip side: if what you do aligns with accepted standards, then it's likely you're doing the wrong thing.
We are without a compass.

1 month ago 1 0 0 1
Preview: Shabana Mahmood accused of mimicking Trump as she announces asylum plans
Home secretary’s proposals to overhaul immigration system include end to permanent refugee status

‼️ Today's speech by the Home Secretary marks the end of a week in which we've been drip-fed a series of hostile and cruel asylum policies.

Each one points in the same direction: further penalising and harming people who are simply seeking safety here.

www.theguardian.com/uk-news/2026... 1/🧵

1 month ago 21 11 1 2

Post a banger that’s not in English
Sona Jobarteh - Bannaya
www.youtube.com/watch?v=hlK7...

2 months ago 0 0 0 0

Post a banger that’s not in English
Daniele Silvestri - Cohiba
www.youtube.com/watch?v=egJc...

2 months ago 0 0 0 0

Big news! @bma.org.uk now advising doctors not to engage with Palantir's NHS data platform! They say there must be a "complete break from Palantir technologies in the NHS", due to the company's track record, e.g. with ICE, and the risk to patient trust. www.bmj.com/content/392/...

2 months ago 30 23 0 1

Imagine if the US and Britain had real opposition parties. Imagine being as clear eyed as @judeinlondon.com

2 months ago 15 5 0 0

I wrote about how all our politicians and media are incompetent.

I guess that means I’m officially back in the writing game

2 months ago 110 51 6 1
Main Responsibilities
Operational and Financial Administration:
• Coordinate the day-to-day operational and administrative activities of the AI Accountability Lab.
• Oversee lab budgets and financial planning for research grants and cost centres; monitor expenditure and prepare financial reports for the PI and Faculty Finance Office.
• Ensure adherence to Trinity and funder financial policies and procurement procedures.
• Maintain internal administrative systems, documentation, and records to support efficient project delivery.
• Liaise with central Finance, the Research Development Office, and external funding bodies on budgetary and governance matters.
• Coordinate HR administrative processes for the lab, including recruitment, onboarding, contract administration, and research staff extensions in liaison with central HR and the Research Office.
• Create and maintain shared resources, calendars, and internal documentation.
• Coordinate relationships and collaboration with other research centres, government bodies, policy makers, civil society partners, and media as required by the research team.
• Plan and schedule internal lab meetings, reading groups, invited speakers, collaborative work sessions, and other events as required.


Research Coordination and Governance:
• Track project timelines, deliverables, reporting schedules, and ethical compliance across multiple funded projects.
• Ensure research activities meet Trinity and funder governance requirements (e.g., ethics, data management, GDPR, and reporting).
• Coordinate workshops, seminars, and collaborative events with research centres and external partners as required.
• Maintain shared documentation and project records to support effective collaboration.
• Ensure that research outputs comply with ethical and scientific standards and satisfy the terms and conditions of relevant funding bodies.


Communications and External Engagement
• Coordinate internal and external communications for the lab, including its website, newsletters, and social media presence.
• Liaise with policy makers, media, and stakeholder organisations on behalf of the lab, in consultation with the PI.
• Support the dissemination of research outputs, such as academic publications, policy briefs, and public events in collaboration with ADAPT’s and TCD’s communications staff.
• Work with Trinity Communications to prepare press releases and highlight lab achievements.
• Represent the lab externally with media if required.
Planning and Cross-Centre Coordination
• Support the PI in delivering AIAL’s annual operational plan and reporting on progress against objectives.
• Coordinate activities across multiple projects and partnerships to ensure consistency and alignment with institutional priorities.
• Contribute to process improvements that strengthen the lab’s administrative and operational framework.
• Act as a key contact for internal governance reviews, audits, and funder compliance checks.
Any other duties or responsibilities as assigned by the PI or their delegate in support of the effective operation of the AI Accountability Lab and its objectives.


I’m looking for my right-hand person to come help me run the @aial.ie

- Job Title: Lab Coordinator, AI Accountability Lab (0.8 FTE)
- Pay Scale: (€58,999 - €69,325 per annum pro-rata)
- Closing Date: 11-Feb-2026 12:00

Apply here: my.corehr.com/pls/trrecrui...

Main Responsibilities👇🏾

3 months ago 64 65 4 6
Introduction to EPPI Reviewer – software for conducting systematic reviews
27 January 2026, 14:00 - 15:00 UTC


Take a look at this webinar introducing EPPI Reviewer and how to use it in the systematic review process. The session will also touch on recently added AI tools!

Sign up here ➡️ www.cochrane.org/events/intro...

@eppicentre.bsky.social @eppi-reviewer.bsky.social

3 months ago 5 5 0 0
About the PhD: 
Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, conducted in a siloed and ad hoc manner, with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent “ground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of quantification and measurement approaches we choose to employ? Particularly, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.


are you displeased with today’s AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me

apply here: aial.ie/hiring/phd-a...

pls repost

3 months ago 190 139 6 12

BBC report at a headline level that Grok will now not make sexually exploitative images any more. An actual reporter comes on and explains that it will do no such thing, and will just make the images invisible in the UK. These are two very, very different things.

3 months ago 5968 1810 97 80

Had a very shaky moment on a bend going at about 10mph yesterday. Didn't see any warning sign, so I thought I could carry the little speed I had. I was "right", but also wrong: I was right by a margin that was much too small, and just about managed to stay upright, whoopsie.
So yeah, play it safe!

3 months ago 1 0 0 0

2. COVID now adds to disease burden in an unpredictable, perennial way.
3. We maddeningly restrict access to vaccines without considering the wider and long-term costs of disease.
4. Nothing has been done nationally to improve indoor air quality.
5. Vaccines and NPIs synergise.

4 months ago 242 53 2 1

So, to those who say "masks don't work", cite the flawed Cochrane report, insist upon an unfeasible RCT, accuse folks of panic, and undermine public health on social and mainstream media during a flu/RSV epidemic...

1. Seasonal viruses may be "normal", but they do immense harm.

4 months ago 447 203 13 41

5. A world where capital then uses that process of mechanization to dispossess labor, to displace skilled workers into jobs where they have reduced value and agency, and to turn the products of that labor to their own ends (mostly, consolidating wealth) rather than to the benefit of humankind.

4 months ago 162 25 5 2

4. What we do have is a choice between a world in which concentrated capital takes yet one further step along the path of the industrial revolution, appropriating and mechanizing the knowledge that currently resides within labor, as with the Jacquard loom.

4 months ago 178 25 2 3
Latest Changes (27/10/2025 - V6.17.1.0) - Forum announcements
This forum is kept largely for historic reasons and for our latest changes announcements. (It was focused around the older EPPI Reviewer version 4.)

#EPPI-Reviewer new release:
Version 6.17.1.0 includes a new type of "duplicates report", a new "auto reconciliation" mode for priority screening (for now restricted to selected people/reviews), and more.
All details: eppi.ioe.ac.uk/cms/Default....

4 months ago 1 1 0 0

I wish I didn’t have to share this. But the BBC has decided to censor my first Reith Lecture.

They deleted the line in which I describe Donald Trump as “the most openly corrupt president in American history.” /1

4 months ago 10092 5108 339 692

We have chaos and Ed Miliband, they're just not evenly distributed.

5 months ago 409 70 13 5

You wouldn't do what you do if you weren't more optimistic than me! 😔

5 months ago 0 0 0 0

Not on prime time, at least. And I think that's worse, and I don't think we disagree on it being worse.
I think you believe that such views would have been aired more, somehow, or equally - and I 100% don't see it happening.
5/5(end)

5 months ago 0 0 1 0

Example: BBC's coverage of Gaza has been terrible overall. But it did contain "genocide is bad, actually" views, and they got expressed in "prime time". Yeah, with caveats and expressed from the sidelines. But present.
Take the BBC out of the game, and I fear we'd have had no such views at all.
4/5

5 months ago 0 0 1 0

Which is probably what we disagree on. Having a dominant player that NEEDS to convince people of its impartiality is, in my view, a BIG/powerful restraint. It sure dull-ifies everything, but ensures that, even if marginalised, "unwelcome" truths still get some air time.
3/5

5 months ago 0 0 1 0

My view is that there currently exists no detectable force able to counteract the power of money. So we'd get a media landscape with 2 types of outlets: "pretend impartial", which never hurts the hand that feeds it, and "Mail-like", with 1-5% "others".
And that is worse than now.
2/5

5 months ago 0 0 1 0

Ah, no. I agree on that too. The outlets that successfully pretend (and the BBC does, in one relevant sense, but not all) are THE danger.
What I disagree with is probably: what happens when the BBC disappears. I reckon we get a landscape 99% driven by deep pockets and nothing else.
1/5

5 months ago 0 0 1 0

[Surely we agree the BBC doesn't produce *only* harm!]

5 months ago 0 0 1 0