From impact to value: time to rethink how we measure meaningful research.
Read the blog here: blogs.lse.ac.uk/impactofsoci...
#researchimpact #impactacademy #researchevaluation
👀ICYMI: "a narrow focus on auditing outputs overlooks the wider benefits that emerge across the whole lifecycle of research"
#ResearchImpact #ResearchEvaluation
We should focus less on research impact and more on research value
#ResearchEvaluation #Australia #WCRI2026 #WCRI
blogs.lse.ac.uk/impactofsoci...
Job Deadline - 31st March!
Postdoctoral Visitor in Metadata & Research Evaluation: www.yorku.ca/research/wp-...
#metrics #universities #researchevaluation #highereducation #metadata
💥New | We should focus less on research impact and more on research value
✍️ Ruth O’Connor, Sejul Malde, Wendy Russell and Maya Haviland
#AcademicSky #ResearchEvaluation #ResearchImpact
Kathleen Gregory, Stefanie Haustein, Constance Poitras, Emma Roblin, Anton Ninkov, Chantal Ripp, Isabella Peters, Digging deeper into data citations: recognizing and rewarding data work, Research Evaluation, Volume 35, 2026, rvag008, https://doi.org/10.1093/reseval/rvag008
When we read a paper, we see text, figures, and conclusions. But interviews with researchers suggest that up to 75% of research effort is data work: collecting, cleaning, documenting, and preparing data. doi.org/10.1093/rese... #OpenScience #DataCitation #ResearchEvaluation #ResponsibleMetrics
New paper in #ResearchEvaluation explores how researchers actually cite data. Key insight: data citations are far more complex than simple indicators of data reuse. A timely reminder: metrics alone cannot capture the real value of data work. doi.org/10.1093/rese... #OpenScience #DataCitation #Data
New paper on quality evaluation using artificial intelligence: link.springer.com/article/10.1...
#bibliometrics #researchevaluation
💥New | Can AI support the assessment of REF research environments?
✍️Kayvan Kousha, @mikethelwall.bsky.social & @lizziegadd.bsky.social
#REF2029 #ResearchEnvironments #ResearchEvaluation
Learn about the h-index with Jorge Hirsch.
@grandlabo.com explains this metric for measuring scientific impact in a clear, accessible video.
#hindex #ScienceMetrics #ResearchEvaluation
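As a companion to the video above, a minimal sketch of how the h-index is computed. The citation counts in the example are made up for illustration; the function simply implements Hirsch's definition: the largest h such that the author has h papers with at least h citations each.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have h or more citations each."""
    h = 0
    # Rank papers from most to least cited; the h-index is the last
    # rank at which the citation count still meets or exceeds the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation profiles:
print(h_index([20000, 11000]))                  # two blockbuster papers -> 2
print(h_index([150, 100, 80, 60, 30, 16, 10]))  # seven solid papers -> 7
```

The sketch also hints at a known limitation discussed elsewhere in this feed: an author with two enormously cited papers can never exceed h = 2, regardless of total citations.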
Research institutions tout the value of scholarship that crosses disciplines – but academia pushes interdisciplinary researchers out
#ResearchEvaluation #Research #WCRI2026 #WCRI
theconversation.com/research-ins...
💥New: Research evaluation systems are too slow to measure AI-accelerated research
✍️Tony Bader
#ResearchEvaluation #HealthResearch #AcademicSky
"while altmetrics provide valuable insights into the broader digital visibility of research, they should be interpreted as complementary rather than definitive indicators of scholarly impact"
#Altmetrics #ScholarlyImpact #ResearchEvaluation #ImpactFactor #ResearchMetrics
Scepticism over increased use of AI in research assessment
#ResearchEvaluation #AI #GenAI #WCRI2026 #WCRI
www.researchinformation.info/news/sceptic...
Patton, C. (2024). Replicability and the humanities: the problem with universal measures of research quality. Research Evaluation, 34. https://doi.org/10.1093/reseval/rvaf052
A new article by Chloe Patton in #ResearchEvaluation shows how debates about #OpenScience often slip into absurdity – like demanding #replication from the #Humanities. You can’t replicate history, culture, or interpretation the way you replicate a physics experiment: doi.org/10.1093/rese...
Today at my alma mater, I spoke about how research evaluation is quietly shifting from citations to #ChatGPT -style predictions: doi.org/10.13140/RG.... We may be heading from “publish or perish” to the new absurdity: “write ChatGPT-friendly or perish.” #AI #ResearchEvaluation #Scientometrics #LLM
🗃️ "As one panel demurred: “this impact is crazy” leading to a downgrading of the application."
#ResearchImpact #ResearchEvaluation
If you wish to study the effect of #opendata on #researchevaluation, please consider this postdoc position in London #openscience ( @lizziegadd.bsky.social ) www.kcl.ac.uk/jobs/126965-...
💥 New: The “least worst” exercise – What direction will research evaluation in Australia take?
✍️ Ksenia Sawczak
#HigherEd #ERA #ResearchEvaluation
scholarlykitchen.sspnet.org/2025/09/11/guest-post-when-the-scoreboard-becomes-the-game-its-time-to-recalibrate-research-metrics/
#ResearchMetrics #ScholarlyPublishing #AcademicIntegrity #ResearchCulture #MetricsMatter #ResponsibleResearch #AcademicLife #OpenScience #ResearchEvaluation
✨ RDA & Science Policy: White Papers Released ✨
Following the Research Data Alliance May 2025 workshops, new white papers have been produced covering:
🔹 National #PID Strategies
🔹 Journal #ResearchDataPolicy Frameworks
🔹 #ResearchEvaluation Reform
Download here 👇
Researchers suggest one-a-year publication limit
#ResearchEvaluation #AcademicPublishing #ResearchIntegrity #WCRI2026 #WCRI
www.researchprofessionalnews.com/rr-news-worl...
8/8 📚 Read the full open-access study: "The cultural impact of the impact agenda in Australia, UK and USA" in Research Evaluation. Time to rethink how we measure and support meaningful research contributions! 🌍 #OpenScience #ResearchEvaluation
9/9
6. Real impact: In case studies, the h-index ranked a 2-paper author with 31K citations (1,000+ co-authors each) the same as a 7-paper author with 446 citations (small teams). SBCI properly distinguished their contributions. #ResearchEvaluation #FairMetrics
7/8
Two thoughts after reading through several studies on metrics-based #ResearchEvaluation and evaluative #Bibliometrics
(thread, 1/5)
Stewart Manley, Simultaneous submissions without simultaneous peer review, Research Evaluation, Volume 34, 2025, rvaf027, https://doi.org/10.1093/reseval/rvaf027
Stewart Manley published his brilliant idea, the “exclusive option”, in #ResearchEvaluation. Authors could submit to multiple journals at once, and an interested editor requests the exclusive right to review: doi.org/10.1093/rese... No duplicated #peerreview. No endless delays. #TimeToChange
The RESSH Conference was organised by #ENRESSH and hosted by the Federation of Finnish Learned Societies.
👏 Many thanks to the organizers for an inspiring event focused on building more responsible, inclusive, and meaningful research evaluation systems.
#RESSH2025 #ResearchEvaluation
Honored to receive an Award of Appreciation from the Ministry of Education and Science of Ukraine for my contribution to the evaluation of research projects. Proud to stand with Ukrainian science.
#Ukraine #Science #ResearchEvaluation #OpenScience
📢 New blog post! The Evaluation and Culture focal area at CWTS reflects on two years of work toward fairer research evaluation, inclusive cultures, and better scholarly communication.
Read here 👉 www.leidenmadtrics.nl/articles/set...
#researchculture #scholarlycommunication #researchevaluation
❓Attending #RESSH2025 in May? The Coalition for Advancing Research Assessment #CoARA +Helsinki Initiative are organising a workshop to identify key challenges in current #researchevaluation practices & solutions that embrace #OpenScience values.
➡️Sign up now! vastuullinentiede.fi/en/events/re...