Final day!🎤 This morning kicks off with Prof. Jakob Edler (MIOIR/Fraunhofer ISI) on demand-based innovation policy—exploring how demand can drive innovation and policy design. #Evaluation2025 #InnovationPolicy
Capitalising on administrative data isn’t just smart, it’s essential. Faye Gracey at #evaluation2025 shows how linked admin data can drive better decisions, complement other methods, and improve lives. Governance, time and infrastructure matter, but the investment is worth it.
Allan Williams at #evaluation2025 - Evaluating publicly funded research & innovation means grappling with long-term, diffuse, and unpredictable impact. No easy metrics — but careful thinking, storytelling, and systems-aware methods matter
“Theories of Change on Trial” at #evaluation2025 asked: are ToCs helping us learn, or are they just old ideas in tidy diagrams that ignore complexity and local knowledge? A great debate on how ToCs can support real-world change when used with care.
This afternoon, Erik Arnold leads a session on designing evaluations to tackle societal challenges. Next up, Diogo Machado explores the role of AI in science policy—practical applications and future directions.
#Evaluation2025 #PolicyInnovation #ImpactEvaluation
Today kicks off with Erik Arnold on evaluating STI organisations—how to assess research funders and performers, with examples from Finland and Luxembourg.
Followed by Mike Thelwall (remote) on altmetrics—web-based metrics as impact indicators. #Evaluation2025
Brilliant quote from the Participation or Pretence? panel at #evaluation2025:
We need to “move from being extractive to facilitators of meaning-making.” Participation isn’t performance — it’s a shift in power, practice, and purpose.
Luke Roberts at #evaluation2025 reminds us that complexity isn’t optional — it’s reality. Reclaiming complexity in evaluation means recognising interconnectedness, power, and justice.
#SystemsThinking
Dr Luke Roberts at #evaluation2025 challenges us to reclaim evaluation for complexity. Evaluation must embrace emergence, navigate complex systems, and avoid flattening lived experience.
It’s time to shift from control to curiosity
#SystemsThinking #Complexity
At #evaluation2025, a striking reminder about positionality: When we evaluate complex systems, we become part of them.
That means we risk being co-opted — especially if we uncritically adopt the language of "business as usual."
#SystemsThinking #Evaluation #Reflexivity
At #evaluation2025, Pierre Canet showed how Gen AI (like ChatGPT) can speed up document reviews — extracting relevant excerpts fast (1 afternoon vs. 1 week manually).
Great for surfacing quotes, not replacing human analysis. Promising, with caveats.
At #evaluation2025, we heard about the Iron Law of Major Projects (Bent Flyvbjerg):
“Over budget, over time, under benefits — over and over again.” Just 0.4% of major projects are delivered on time and on budget while achieving their intended benefits.
Evaluation can help us break the cycle.
“This is not random error. Planning is biased.”
At #evaluation2025 we’re seeing the data: most major projects go over budget and under deliver on benefits.
Systemic optimism in planning highlights the need for better evaluation, not just better spreadsheets.
Day 3️⃣ wrapped with insightful discussions on steering effects and productive group work sessions.
Thanks to all our speakers for sparking thoughtful conversations!
#Evaluation2025 #SciencePolicy #GlobalLearning
Up now at #evaluation2025: Sarah Morton and Ailsa Cook share insights on working in the messy middle between evaluation, monitoring and accountability. #matteroffocus
This afternoon, Maria Nedeva explores how evaluation steers research systems. Participants are diving into the strategic impacts of evaluation on policy and practice.
#Evaluation2025 #EvaluationTools #PolicyImpact
Midweek momentum!
Today, John Rigby introduces #bibliometrics & #scientometrics followed by Erik Arnold & Kate Barker on peer review and national research assessments. How do we measure what matters? #Evaluation2025
Listening to Kirstine Szifris at #evaluation2025 has me reflecting on my own journey as an accidental evaluator.
I didn’t plan this path, but I’ve found real value in working with people to explore what we do, how we do it, and whether it helps. It’s about relationships, trust and learning.
Government needs good evidence on the value delivered from its spending. At #evaluation2025, listening to a panel on leveraging data and evaluation to maximise public value — using evidence to guide investment, not just justify it.
Alex Hurrell at #evaluation2025:
We’re in the midst of a data explosion — now’s the time to take stock of how we approach evaluation to ensure it delivers value.
More data isn’t the goal. The goal is better use, better insight, better decisions.
Alex Hurrell at #evaluation2025:
Don’t think of evaluation in isolation — it’s part of a wider system of monitoring, learning and adaptation. #mel
Evaluation should be embedded, continuous, and connected to real-time decision-making.
A powerful reminder at #evaluation2025: many of us work in data-rich but analysis-poor environments.
We need to invest in both capability and capacity — not just automation, but human insight to make sense of information and drive improvement.
Evaluation isn’t just a task — it’s a culture.
When we engage with it meaningfully, we unlock opportunities to learn, adapt and improve. #evaluation2025
Not everyone will have "evaluation" in their job title — but evaluation should be seeded across the whole organisation.
It’s everyone’s business if we want learning and improvement to stick.
#evaluation2025 #LearningCulture
Do we make the most of the data we already have — or just keep collecting more? A timely challenge at #evaluation2025 about using existing data better, not just generating more of it.
The Scottish Government Evaluation Action Plan 2024–27 aims to:
✅ Make evaluation central to policymaking
✅ Build skills and capacity across government
✅ Promote learning and improvement
✅ Strengthen how evidence is used and shared
#evaluation2025
gov.scot/publications/scottish-government-evaluation-action-plan
Better use of data is crucial to public service improvement.
Ivan McKee stresses the importance of joined-up systems and data sharing across organisations to make that happen. #evaluation2025
Kicking off day 2 of #evaluation2025, Ivan McKee MSP sets the tone:
Good decisions need good evidence. Evaluation is central to how government targets resources and drives improvement
We wrapped Day 2 with Erik Arnold on qualitative methods—interviews, surveys, case studies. A full-spectrum view of evaluation approaches, and group projects are also in full swing!
#Evaluation2025 #Methods
Next up, Cristina Rosemberg (Managing Partner, Technopolis) is unpacking advanced statistical tools—difference-in-differences (DiD), propensity score matching (PSM), and synthetic controls.
Participants are applying these to real-world scenarios in group exercises. #Evaluation2025 #QuantMethods #PolicyAnalytics
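For readers new to these methods, here is a minimal sketch of the core difference-in-differences calculation. The group means are purely illustrative numbers (not from the session): DiD compares the before/after change in a treated group against the same change in a control group, so that common trends cancel out.

```python
# Minimal difference-in-differences (DiD) sketch.
# All outcome means below are hypothetical, for illustration only.
# DiD estimate = (treated after - treated before) - (control after - control before)

treated_pre, treated_post = 10.0, 15.0   # treated group's mean outcome, before/after the programme
control_pre, control_post = 9.0, 11.0    # control group's mean outcome, before/after

did_estimate = (treated_post - treated_pre) - (control_post - control_pre)
print(did_estimate)  # 3.0 — the change attributable to treatment under the parallel-trends assumption
```

In practice the estimate comes from a regression with group, period, and interaction terms, which also yields standard errors; the arithmetic above is just the intuition the regression formalises.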