
Posts by Sven E. Hug

It will come back, though.

2 weeks ago 0 0 1 0

New working paper 🚨🚨🚨

What was the origin of modern economic growth?

Joel Mokyr had a Nobel-winning answer: growth took off when science and technology began to reinforce each other.

But can we test this quantitatively?

This paper does so – read more ⬇️ 🧵

4 months ago 27 13 2 0

Too bad. But a good sign for the game. 😃

5 months ago 1 0 0 0

Have you played it? What are your first impressions?

5 months ago 1 0 1 0

🧵 1/
🚨 New paper out in PLOS ONE! w/ @caropradier.bsky.social @benzpierre.bsky.social @natsush.bsky.social @ipoga.bsky.social @lariviev.bsky.social
We studied 43k authors and 264k citation links in U.S. economics to ask:
👉 Why do some papers cite others?
🔗 journals.plos.org/plosone/arti...

5 months ago 33 23 1 3
We’re now OpenAlex - OpenAlex blog For years, we’ve been working under the name OurResearch. That name sat at the top of our org chart, with three child projects under it: OpenAlex, Unpaywall, and Unsub. Starting today, things are simp...

OurResearch rebrands to OpenAlex.

blog.openalex.org/were-now-ope...

6 months ago 15 9 0 1
Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions?...

We built the simplest possible social media platform. No algorithms. No ads. Just LLM agents posting and following.

It still became a polarization machine.

Then we tried six interventions to fix social media.

The results were… not what we expected.

arxiv.org/abs/2508.03385

8 months ago 301 106 14 44

That being said, I'm looking forward to the insights from the 'referee consensus model'.
2/2

8 months ago 2 0 1 0

Not very surprising to me, as in traditional journal peer review, the editor(s) are expected to reconcile views among individual referees. This provides an additional perspective while keeping the decision-making power with the editor(s).
1/2

8 months ago 0 0 1 0

😊

9 months ago 1 0 0 0

We often have to judge who is knowledgeable—precisely when we are not. Can humans really do that? Our new paper in Psychological Science shows that, surprisingly, we can. drive.google.com/file/d/1b15E...

10 months ago 102 30 5 2

There is a large literature on grant peer review but afaik nobody has looked at review scores like you have. Interesting!

11 months ago 0 0 0 0

who says that science doesn't generate profit?

11 months ago 50 24 5 7
Research Topic Choice: Motivations, Strategies, and Consequences Abstract. Scientists’ choices of what research topics to pursue are highly consequential and have been the subject of many studies. However, these studies are dispersed across several fields and liter...

How do scientists choose which topic to study?
Decades of studies on this, but they're dispersed across fields and not synthesized. Fortunately for us, Sidney
@sdxiang.bsky.social has written a fantastic review of this literature, focusing on econ and soc lit!
direct.mit.edu/qss/article/...

1 year ago 27 6 2 1

Thanks for the quick and clear answer! 👍

1 year ago 1 0 1 0

Why would one want a package solution? Why not?

1 year ago 1 0 1 0
Principles of Evaluative Bibliometrics in a DORA/CoARA Context The document, "Principles of Evaluative Bibliometrics in a DORA/CoARA Context," provides a comprehensive examination of evaluative bibliometrics, exploring its role within research evaluation. It begi...

Advocates of research assessment reforms and bibliometricians sometimes have a rocky and heated relationship. 🔥

This paper, written by three bibliometricians, attempts to reconcile the two camps.

What are your thoughts on this issue?

#CoARA
#DORA

zenodo.org/records/1467...

1 year ago 6 2 0 0

Out now in Nature Human Behaviour: Our 68-country #survey on public attitudes to #science 📣
It shows: People still #trust scientists and support an active role of scientists in society and policy-making. #OpenAccess available here: www.nature.com/articles/s41... @natureportfolio.bsky.social
(1/13)

1 year ago 361 164 7 21
Screenshot of paper "Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities" by Mohammad Hosseini, Serge P. J. M. Horbach, Kristi Holmes and Tony Ross-Hellauer 

Quantitative Science Studies 1–24.
https://doi.org/10.1162/qss_a_00337

Abstract
Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing results. Due to this relationship, the characteristics of the employed technologies directly impact OS objectives. Generative Artificial Intelligence (GenAI) is increasingly used by researchers for tasks such as text refining, code generation/editing, reviewing literature, and data curation/analysis. Nevertheless, concerns about openness, transparency, and bias suggest that GenAI may benefit from greater engagement with OS. GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values, such as fairness, transparency, and integrity, and may harm various social actors. In this paper, we explore the possible positive and negative impacts of GenAI on OS. We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems. However, due to GenAI’s limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research. Hence, sufficient checks, validation, and critical assessments are essential when incorporating GenAI into research workflows.


1/ 🚨 NEW PAPER! “Open Science at the Generative AI Turn”
In a new study just published in Quantitative Science Studies, we explore how GenAI both enables and challenges Open Science, and why GenAI will benefit from adopting Open Science values. 🧵
doi.org/10.1162/qss_...
#OpenScience #AI #GenAI

1 year ago 16 11 1 1
Renovating the Theatre of Persuasion. ManyLabs as Collaborative Prototypes for the Production of Credible Knowledge | Preprint screenshot


Renovating the Theatre of Persuasion. ManyLabs as Collaborative Prototypes for the Production of Credible Knowledge; a new preprint & thread.
In it, I'll say a little about theatres of persuasion, and why new collaborative structures change how they look osf.io/preprints/me... #sts #metascience 1/

1 year ago 41 23 2 0

😂🤣

1 year ago 0 0 0 0

Does training of peer reviewers work?

"Evidence from 10 RCTs suggests that training peer reviewers may lead to little or no improvement in the quality of peer review."

Cochrane systematic review 🔓: www.cochranelibrary.com/cdsr/doi/10....

2 years ago 26 15 2 0

With all the new influx of users, I’d love to see a community around #sciencepolicy #scipol #bibliometrics #scientometrics #scisci #metascience. Please share if you want to be part of it, use the tags to find others, or just say hi 👋

1 year ago 35 16 4 0
The forced battle between peer-review and scientometric research assessment: Why the CoARA initiative is unsound Abstract. Endorsed by the European Research Area, a Coalition for Advancing Research Assessment (CoARA), primarily composed of research institutions and fu

The perennial dispute between quantitative and qualitative research assessment is once again heating up.

The match-up this time:
Evaluative scientometrics
vs
CoARA

🔓The forced battle between peer-review and scientometric research assessment

academic.oup.com/rev/advance-...

1 year ago 8 4 0 0
Challenges in Research Policy This open access volume examines significant challenges in research policy, offering expert insights and policy recommendations on critical issues.

New edited volume:
Challenges in Research Policy
And it's open access! 👍

link.springer.com/book/10.1007...

1 year ago 17 9 0 0

It's raining preprints, hallelujah 🎶

Here is my latest preprint (review article) >> Sustaining the ‘frozen footprints’ of scholarly communication through open citations

osf.io/preprints/so...

1 year ago 8 1 0 1
20 things you didn’t know about Google Scholar Google Scholar celebrates two decades of breaking down barriers to academic research and making it accessible to everyone, everywhere.

Google Scholar 20th Anniversary: 20 things you didn't know about Google Scholar
blog.google/outreach-ini...

1 year ago 15 10 1 0

Hello Bart! 👋🏻
Taking the opportunity to express my appreciation for your research - and for contributing to the beautiful blue place! 🦋

1 year ago 1 0 0 0