
Posts by Dr Afrodita Marcu 🇷🇴🇬🇧🇪🇺

F1000Research Article: Cataloguing and theorising open research practices in the arts, humanities and social sciences: Problematising and diversifying ‘Open Science’. Read the latest article version by Jenni Adams, Miranda Barnes, Samuel Moore, Stephen Pinfield at F1000Research.

"Cataloguing and theorising open research practices in the arts, humanities and social sciences: Problematising and diversifying ‘Open Science’"

New preprint from members of the @morphss.bsky.social team

f1000research.com/articles/15-...

6 days ago 8 3 0 1

"AI tools are trained using large amounts of open-access publications, leading to tensions among researchers who did not anticipate their work being used in this way. AI tools pose new questions about what is permitted by copyright law – and whether the law should develop to accommodate them."

1 week ago 0 0 0 0

One talk I'm particularly interested in is on the topic of AI and its impact on Open publishing: "AI is posing new challenges for journal production, not just from AI-assisted submissions but also from reviewers using AI tools".

1 week ago 1 0 1 0
London Open Science & Scholarship Festival 2026 – open for booking! | UCL Open@UCL Blog

Looking forward to attending various talks on Open Science at the London Open Science & Scholarship Festival next week blogs.ucl.ac.uk/open-access/...

1 week ago 0 0 1 0
onelife_livedwell (probably Tumblr)

One of the most overlooked skills in living with an energy-limiting condition isn't endurance, it's discernment, the ability to tell the difference between "I could push through" and "I should rest." That tiny pause where you check in with yourself before acting? That's where you preserve tomorrow's capacity.


1 week ago 103 24 3 0
So what kind of response to AI do we need? Rather than just turning to literacy for the answers, we need to carefully consider what kind of ‘text’ AI is. Is it amenable to a literacies response? The stakes are high if we do not think carefully about an appropriate educative response. Not only will we not use AI effectively, ethically or well, we will stop looking for a more suitable and perhaps more robust response to it. We also risk literacy being coopted for compliance and productivity purposes, so it operates as a kind of ‘soft governance’ for participation in the digital economy (Pangrazio and Sefton-Green 2024), just as it did in the late nineteenth century when it was used to teach values and morality.

If literacy is the right response, then it needs to be more nuanced in how it is operationalised. Currently, it is used in ways that are both too narrow to capture the digital platforms and political and economic systems it is embedded in, but also too broad to capture the huge variations of how it is employed.


On "AI Literacy Day" I suggest reading "The (im)possibility of AI literacy" by @lucipangrazio.bsky.social questioning whether "literacy" is even the right response to AI and, if so, how a meaningful AI literacy could build on the history of "critical digital literacies" doi.org/10.1080/1743...

3 weeks ago 25 10 1 2

"Academics raise concerns over ChatGPT owner’s links to US military and claim tools are ‘looking to replace knowledge workers’."

3 weeks ago 0 0 0 0
Peer Community In (PCI): Leaving Publishers Out of the Peer Review Process Traditionally, academic peer review is managed by a journal editor who oversees the process and decides on acceptance. In this system, the publisher acts as the main gatekeeper,...

Peer Community In (PCI): Leaving Publishers Out of the Peer Review Process (online🌐, April 29📆, 1pm🕐)

leeds.libcal.com/event/4495806

With Prof Chris Chambers (Cardiff University) and @maddipow.bsky.social

Cc: @cardiffunilib.bsky.social @pci-regreports.bsky.social

3 weeks ago 2 4 0 1

The UKRN Conference 2026 is open for registration!

Join us in Manchester, 8–9 July, for keynotes from Stian Westlake & Charlotte Pennington, sessions from UKRN groups, and contributions from the community.

Early Bird fees close 15 May. Register: www.conference.ukrn.org

1 month ago 15 12 0 0
Research integrity is locked into an arms race with agentic AI slop - LSE Impact Advances in agentic AI combined with increasingly large reserves of openly accessible and machine-readable data are creating a perfect storm for the mass-production of AI authored research papers. Adr...

💥New | Research integrity is locked into an arms race with agentic AI slop

✍️ @aidybarnett.bsky.social & Matt Spick

#ResearchIntegrity #ResearchSlop #AcademicPublishing

1 month ago 13 10 0 4
Understanding Over‐ and Under‐Involvement in Therapeutic Relationships Between Nursing Staff and Patients in Forensic Mental Health Settings: A Qualitative Synthesis Introduction Nursing staff often struggle with balancing care and security in forensic settings, which can lead to over- or under-involvement. These are a cause for concern as this can directly impa...

New paper by my PhD student: Understanding Over‐ and Under‐Involvement in Therapeutic Relationships Between Nursing Staff and Patients in Forensic Mental Health Settings: A Qualitative Synthesis, in the Journal of Psychiatric and Mental Health Nursing: onlinelibrary.wiley.com/doi/10.1111/...

1 month ago 0 0 0 0
AI is inventing academic articles – and scholars are citi... From fake footnotes to phantom studies, AI-generated citations are slipping into real academic publishing. Scholars and publishers fear this ‘scholarly slop’ is polluting truth and science

AI is inventing academic articles – and scholars are citing them bit.ly/4rjLy5t

1 month ago 7 4 0 0
‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI As AI has upended the way students learn, academics worry about the future of the humanities - and society at large

www.theguardian.com/technology/n...
"potentially catastrophic effects on cognitive abilities and critical thinking skills". Exactly my thoughts...

1 month ago 0 0 0 0

"The academic publishers Elsevier, Taylor and Francis and Springer Nature all confirmed to The Observer that scholarly slop is real – and growing “at scale”, according to an Elsevier spokesperson. None were able to share their data, but there are signs the problem is prevalent." Cheers guys.

1 month ago 13 11 1 1
The choices universities and colleges make about AI are political Ahead of this week's Digifest, Michael Webb and Rebecca Flook confront the complex values systems behind general purpose AI technology

'The systems now being woven into education are shaped by a remarkably small group of people. Not “the internet” as the source of training material. Not “society” influencing the way we use these tools.'

Fascinating that this comes from within Jisc. 1/3

1 month ago 124 60 3 10
The one science reform we can all agree on, but we're too cowardly to do OR: the long overdue forest fire

Many of the problems of open access have been caused by funding organisations providing APC money that goes straight to the publishing industry. It's very optimistic to expect that the same (neoliberal) funders will want to prevent researchers from publishing in for-profit journals.

1 month ago 13 4 0 0

We are very excited to share the full programme for the London Open Science & Scholarship Festival 2026 and announce that bookings are officially open! ✨

Find all the details on the Open@UCL blog 👉 buff.ly/S7yECkO

1 month ago 15 15 2 5

Latest substack on Iran by @snellarthur.bsky.social is a must read

substack.com/home/post/p-...

1 month ago 200 94 8 14
Qualitative Health Research - Volume 36, Number 2-3 Table of contents for Qualitative Health Research, 36, 2-3

Qualitative Health Research

Special Issue: Intersections (existing, emerging, and imagined) between Artificial Intelligence and Qualitative Health Research

journals.sagepub.com/toc/qhra/36/...

1 month ago 1 1 0 0

Coming up later this month:

1 month ago 3 2 0 0
Fear of stigma blamed as 0.1 per cent of papers declare AI use Worries over admitting ChatGPT use for editing and drafting may explain extremely low disclosure rates, study suggests

Only one in 40 scientific papers suspected of deploying AI writing tools admits using them, says a study, which suggests the stigma of admitting ChatGPT use might explain the exceptionally low figure www.timeshighereducation.com/news/fear-st... via @jgro-the.bsky.social

1 month ago 0 1 0 0

“Computer literacy. Internet literacy. Social media literacy. Mobile literacy. Virtual reality literacy…The pitch to train schoolchildren on the latest tech has stayed roughly the same since the introduction of personal computers in the late 1970s…”

And yes indeed, we do fall for it every time.

1 month ago 56 21 0 0
How tech turned against women As AI-generated sexualised images proliferate and app-facilitated abuse spreads, we are sleepwalking into a new age of gender inequality. It is time to regulate properly

"large language models such as ChatGPT were consistently advising women to ask for lower salaries than men in recruitment processes,... AI tools already in use by more than half of England’s councils were downplaying women’s medical conditions, potentially resulting in unequal care"

2 months ago 953 514 23 64
Take your academic writing skills to the next level Whether writing a paper or a book, find out how to improve your academic writing at each stage of the process

Whether writing a paper or a book, find out how to improve your academic writing at each stage of the process: www.timeshighereducation.com/campus/take-your-academi... #AcademicWriting #AcademicChatter #ECRchat #PhDSky #Academia

2 months ago 1 2 0 0
‘Let’s treat writing as shared infrastructure rather than private struggle’ Academic writing is often framed as something faculty should simply manage better; when they struggle, the blame is put on the individual academic. But this explanation doesn’t hold, as Rachel Gabriele explains

Academic #writing is often framed as something faculty should simply manage better; when they struggle, the blame is put on the individual academic. But this explanation doesn’t hold, as Rachel Gabriele explains: https://ow.ly/kFFA50YiI3g #Academia #HigherEd #AcademicSky

2 months ago 10 5 0 1
The promise and pitfalls of AI in health | LSHTM There has been considerable hype about the transformative potential of AI across many domains of society. Hetan will consider the promise and pitfalls of AI in health, and how we should think about

I’m giving the 40th Annual Health Services Research Lecture (which will be the inaugural Nick Black lecture) in March on ‘the promise and pitfalls of AI in health’. All welcome!
www.lshtm.ac.uk/newsevents/e...

2 months ago 5 3 0 0
If progress is not to falter, students must be trained in open research The how and why of conducting transparent, rigorous, ethical research must be explicitly taught, say Madeleine Pownall, Charlotte Pennington and Flavio Azevedo

“Open research is about more than the tightening of analytical and methodological standards. The movement also invites us to reconsider how, and by whom, knowledge is created, shared and evaluated”

By @maddipow.bsky.social, @drcpennington.bsky.social, & @flavioazevedo.bsky.social

#MetaSci #OpenSci

2 months ago 27 13 0 0
Google puts users at risk by downplaying health disclaimers under AI Overviews Exclusive: Google fails to include safety warnings when users are first presented with AI-generated medical advice

Gina Neff, prof. of responsible AI at Queen Mary University of London: the “‘problem with bad AI Overviews is by design’ and Google was to blame. ‘AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous.’”

2 months ago 7 3 0 0
Assetizing academic content and the emergence of the ‘assetizen’: education platforms, publisher databases, and AI model training - Higher Education Higher Education - Academic content, such as teaching materials and academic publications, has become an economic resource. This has occurred through assetization as the key economic regime in...

New OA article just out on "assetizing academic content" led by @jkom.bsky.social with me, @keanbirch.bsky.social & Klaus Beiter, exploring how academic materials are turned into value-generating digital assets by HE institutions, edtech platforms, and AI companies link.springer.com/article/10.1...

2 months ago 99 61 2 4

Is the notion of a reproducibility crisis in science "exaggerated"?

After ERC's Maria Leptin suggests just that, @fionamcintyre.bsky.social talks to those studying the issue.

To judge whether there's a crisis, we would need to know the "normal" level of reproducibility, says @martmichaelis.bsky.social

2 months ago 5 3 2 1