#ResearchEvaluation

From impact to value, time to rethink how we measure meaningful research.

Read the blog here: blogs.lse.ac.uk/impactofsoci...

#researchimpact #impactacademy #researchevaluation

We should focus less on research impact and more on research value - LSE Impact: Institutional definitions of research impact align poorly with the practices and values of staff; could a focus on research value lead to better outcomes?

👀ICYMI: "a narrow focus on auditing outputs overlooks the wider benefits that emerge across the whole lifecycle of research"

#ResearchImpact #ResearchEvaluation


We should focus less on research impact and more on research value

#ResearchEvaluation #Australia #WCRI2026 #WCRI

blogs.lse.ac.uk/impactofsoci...


Job Deadline - 31st March!

Postdoctoral Visitor in Metadata & Research Evaluation: www.yorku.ca/research/wp-...

#metrics #universities #researchevaluation #highereducation #metadata


💥New | We should focus less on research impact and more on research value

✍️ Ruth O’Connor, Sejul Malde, Wendy Russell and Maya Haviland

#AcademicSky #ResearchEvaluation #ResearchImpact



Kathleen Gregory, Stefanie Haustein, Constance Poitras, Emma Roblin, Anton Ninkov, Chantal Ripp, Isabella Peters, Digging deeper into data citations: recognizing and rewarding data work, Research Evaluation, Volume 35, 2026, rvag008, https://doi.org/10.1093/reseval/rvag008

When we read a paper, we see text, figures, and conclusions. But interviews with researchers suggest that up to 75% of research effort is data work: collecting, cleaning, documenting, and preparing data. doi.org/10.1093/rese... #OpenScience #DataCitation #ResearchEvaluation #ResponsibleMetrics

New paper in #ResearchEvaluation explores how researchers actually cite data. Key insight: data citations are far more complex than simple indicators of data reuse. A timely reminder: metrics alone cannot capture the real value of data work. doi.org/10.1093/rese... #OpenScience #DataCitation #Data

Can generative AI effectively perform quality evaluation within social sciences? A case study in library and information science - Scientometrics: Thus far, the usage of generative artificial intelligence (GAI) has mainly been explored for content-based evaluations. Research quality evaluations and studies focusing on fields in the social scienc...

New Paper regarding quality evaluations using artificial intelligence: link.springer.com/article/10.1...
#bibliometrics #researchevaluation

Can AI support the assessment of REF research environments? - LSE Impact: ChatGPT can closely replicate expert evaluations of REF research environments. How might research managers use these tools in the run-up to the next REF?

💥New | Can AI support the assessment of REF research environments?

✍️Kayvan Kousha, @mikethelwall.bsky.social & @lizziegadd.bsky.social

#REF2029 #ResearchEnvironments #ResearchEvaluation


Learn about the h-index with Jorge Hirsch.
@grandlabo.com explains this metric for measuring scientific impact in a clear, accessible video.

#hindex #ScienceMetrics #ResearchEvaluation

Research institutions tout the value of scholarship that crosses disciplines – but academia pushes interdisciplinary researchers out: Researchers who focus on one specialty are more likely to rise through the academic ranks, even though wicked societal problems require crosscutting work to solve.

Research institutions tout the value of scholarship that crosses disciplines – but academia pushes interdisciplinary researchers out

#ResearchEvaluation #Research #WCRI2026 #WCRI

theconversation.com/research-ins...

Research evaluation systems are too slow to measure AI-accelerated research - Impact of Social Sciences: Evaluations of AI research & their impacts are often out of date by the time they are published. Can research evaluation systems catch up?

💥New: Research evaluation systems are too slow to measure AI-accelerated research

✍️Tony Bader

#ResearchEvaluation #HealthResearch #AcademicSky


"while altmetrics provide valuable insights into the broader digital visibility of research, they should be interpreted as complementary rather than definitive indicators of scholarly impact"
#Altmetrics #ScholarlyImpact #ResearchEvaluation #ImpactFactor #ResearchMetrics

Scepticism over increased use of AI in research assessment - Research Information: Report calls for robust national oversight in the UK, comprising sector-wide guidance on usage for the Research Excellence Framework

Scepticism over increased use of AI in research assessment

#ResearchEvaluation #AI #GenAI #WCRI2026 #WCRI

www.researchinformation.info/news/sceptic...

Patton, C. (2024). Replicability and the humanities: the problem with universal measures of research quality. Research Evaluation, 34. https://doi.org/10.1093/reseval/rvaf052

A new article by Chloe Patton in #ResearchEvaluation shows how debates about #OpenScience often slip into absurdity – like demanding #replication from the #Humanities. You can’t replicate history, culture, or interpretation the way you replicate a physics experiment: doi.org/10.1093/rese...

(PDF) From Citations to ChatGPT Predictions: What “Research Quality” Means Today | This presentation explores how artificial intelligence, particularly large language models such as ChatGPT, is reshaping contemporary approaches...

Today at my alma mater, I spoke about how research evaluation is quietly shifting from citations to #ChatGPT -style predictions: doi.org/10.13140/RG.... We may be heading from “publish or perish” to the new absurdity: “write ChatGPT-friendly or perish.” #AI #ResearchEvaluation #Scientometrics #LLM

Does hype really sell claims of research impact? - Impact of Social Sciences: Gemma Derrick finds the use of ‘hype’ in impact statements has significantly less impact than feared.

🗃️ "As one panel demurred: “this impact is crazy” leading to a downgrading of the application."

#ResearchImpact #ResearchEvaluation

Post Doctoral Research Associate, “Between Economy and Democracy: Reorganising Research Evaluation through Metadata in the Digital Era” | King's College London

If you wish to study the effect of #opendata in #researchevaluation, please consider this postdoc position in London #openscience ( @lizziegadd.bsky.social ) www.kcl.ac.uk/jobs/126965-...

The “least worst” exercise – What direction will research evaluation in Australia take? - Impact of Social Sciences: Will a seemingly light-touch and data-driven approach to Australia’s research assessment exercise (ERA) prove effective?

💥 New: The “least worst” exercise – What direction will research evaluation in Australia take?

✍️ Ksenia Sawczak

#HigherEd #ERA #ResearchEvaluation

When the Scoreboard Becomes the Game, It’s Time to Recalibrate Research Metrics - The Scholarly Kitchen: Today’s guest post discusses research metrics and their relationship to research integrity, inclusivity, and long-term impact.

scholarlykitchen.sspnet.org/2025/09/11/guest-post-when-the-scoreboard-becomes-the-game-its-time-to-recalibrate-research-metrics/

#ResearchMetrics #ScholarlyPublishing #AcademicIntegrity #ResearchCulture #MetricsMatter #ResponsibleResearch #AcademicLife #OpenScience #ResearchEvaluation


✨ RDA & Science Policy: White Papers Released ✨

Following the ‪Research Data Alliance May 2025 workshops, new white papers have been produced covering:
🔹 National #PID Strategies
🔹 Journal #ResearchDataPolicy Frameworks
🔹 #ResearchEvaluation Reform

Download here 👇

Researchers suggest one-a-year publication limit - Research Professional News: “Probably controversial” proposals intended to spark renewed efforts to tackle “publish or perish...mania”

Researchers suggest one-a-year publication limit

#ResearchEvaluation #AcademicPublishing #ResearchIntegrity #WCRI2026 #WCRI

www.researchprofessionalnews.com/rr-news-worl...


8/8 📚 Read the full open-access study: "The cultural impact of the impact agenda in Australia, UK and USA" in Research Evaluation. Time to rethink how we measure and support meaningful research contributions! 🌍 #OpenScience #ResearchEvaluation
9/9


6. Real impact: In case studies, h-index ranked a 2-paper author with 31K citations (1000+ co-authors each) same as a 7-paper author with 446 citations (small teams). SBCI properly distinguished their contributions. #ResearchEvaluation #FairMetrics
7/8
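The h-index behaviour described in the post above can be sketched in a few lines of Python. The citation profiles below are hypothetical numbers chosen only to match the totals mentioned in the post (2 papers with ~31K citations vs. 7 papers with 446 citations), not data from the SBCI paper:

```python
def h_index(citations):
    """h-index: the largest h such that the author has
    h papers with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation profiles (illustrative, not from the paper):
author_a = [15500, 15500]                # 2 papers, 31K citations total
author_b = [430, 10, 2, 1, 1, 1, 1]      # 7 papers, 446 citations total

print(h_index(author_a))  # 2
print(h_index(author_b))  # 2 -- same score despite very different profiles
```

Because the h-index can never exceed an author’s paper count, the heavily cited 2-paper author is capped at h = 2, and a suitably skewed 7-paper profile lands on the same score; this is the kind of flattening the post says SBCI distinguishes.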


Two thoughts after reading through several studies on metrics-based #ResearchEvaluation and evaluative #Bibliometrics

(thread, 1/5)

Stewart Manley, Simultaneous submissions without simultaneous peer review, Research Evaluation, Volume 34, 2025, rvaf027, https://doi.org/10.1093/reseval/rvaf027

Stewart Manley published his brilliant idea, the “exclusive option”, in #ResearchEvaluation. Authors could submit to multiple journals at once, and interested editors request an exclusive right to review: doi.org/10.1093/rese... No duplicated #peerreview. No endless delays. #TimeToChange

Post image

The RESSH Conference was organised by #ENRESSH and hosted by the Federation of Finnish Learned Societies.
👏 Many thanks to the organizers for an inspiring event focused on building more responsible, inclusive, and meaningful research evaluation systems.
#RESSH2025 #ResearchEvaluation

Post image

Honored to receive an Award of Appreciation from the Ministry of Education and Science of Ukraine for my contribution to the evaluation of research projects. Proud to stand with Ukrainian science.
#Ukraine #Science #ResearchEvaluation #OpenScience

Setting the course: our first two years in the focal area Evaluation & Culture | The Evaluation and Culture focal area at CWTS is dedicated to studying, discussing, and advocating for renewed forms of scholarly communication, fair research evaluation, and inclusive research cultur...

📢 New blog post! The Evaluation and Culture focal area at CWTS reflects on two years of work toward fairer research evaluation, inclusive cultures, and better scholarly communication.

Read here 👉 www.leidenmadtrics.nl/articles/set...

#researchculture #scholarlycommunication #researchevaluation

RESSH2025 Conference | RESSH2025 conference of the international association ENRESSH (European Network for Research Evaluation in the SSH) is organized 19-21 May, 2025, in Helsinki, Finland. It brings together specialists o...

❓Attending #RESSH2025 in May? The Coalition for Advancing Research Assessment #CoARA +Helsinki Initiative are organising a workshop to identify key challenges in current #researchevaluation practices & solutions that embrace #OpenScience values.

➡️Sign up now! vastuullinentiede.fi/en/events/re...
