Back at #Anthropy2026.
Posts by Ryan Miemczyk
Paying a bill with a note that also explains why some of us statistically have fewer bills left to pay. How beautifully bleak.
If you're navigating this tension in the sector, I'd love to hear how. Drop a reply or DM. And funders: what would it take to fund the infrastructure, not just the activities? #SocialImpact #CharityData #ImpactMeasurement
The best-evidenced programmes usually have sustained funding. Not because money made them better at delivery, but because it gave them time to build systems, train staff, and iterate. Evidence needs investment.
The timeframes don't match either. A 12-month grant rarely captures long-term outcomes. You're measuring people 3 months after a 6-week programme and calling it impact. That's a structural problem, not a delivery one.
Baseline and endline surveys sound simple. They're not. You need to reach people twice, months apart. People move. Disengage. Get discharged. Attrition is real. But funders sometimes read it as failure.
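One practical upshot: if you know attrition is coming, build it into your recruitment target rather than treating drop-off as a surprise. A minimal sketch (the 40% drop-off figure is illustrative, not a claim from this thread):

```python
import math

def baseline_target(endline_needed: int, expected_attrition: float) -> int:
    """How many people to survey at baseline so that, after expected
    drop-off, you still have enough matched endline responses."""
    if not 0 <= expected_attrition < 1:
        raise ValueError("expected_attrition must be in [0, 1)")
    return math.ceil(endline_needed / (1 - expected_attrition))

# e.g. you want 30 matched pre/post pairs and expect 40% drop-off:
print(baseline_target(30, 0.4))  # -> 50
```

Budget and funder conversations get easier when that over-recruitment is stated up front as part of the design.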
Good data collection costs money. Staff time, survey tools, database setup, analysis. It's rarely included as a fundable cost. So charities cut corners — then get judged for producing weak evidence.
Funders want robust evidence of impact. But they often won't fund the infrastructure that makes robust evidence possible. Let's talk about that. 🧵
Sometimes data doesn't have to solve a serious problem. Sometimes it just has to be fun.
🔗 impctlab.uk/lioli/
(No, I will not be taking questions about my life choices.)
I built four tabs of charts to try and answer this. Economics, geography, episode breakdowns, the lot.
Do I have a definitive answer? No.
Does the dashboard look nice? Yes.
The headline finding: Kirstie wins. A lot.
But is she actually persuasive, or is it just inertia bias (people staying put because moving house is genuinely terrifying)?
🧵 I built an interactive data dashboard on Love It or List It UK because apparently that's how I spend my time now.
All in all, RCTs suffer from the same problems as any other approach, and those are usually poor methodology and design. Just because it’s an RCT doesn’t mean it’s above reflection.
Worth having a read of a few papers that outline some of the issues relating to RCTs. This is a good start:
pubmed.ncbi.nlm.nih.gov/23543723/
Stuck with unrealistic evaluation asks? DM me. I help charities design evaluations that actually work for their budget and context. #SocialImpact #CharityData #ImpactMeasurement
The best evaluation matches your resources. A well-designed survey beats a badly executed RCT every time.
Three approaches that work for small-scale programmes: pre/post with validated tools, contribution analysis, or case-based evidence with clear mechanisms.
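A minimal sketch of the first option, pre/post with a validated tool. The scores below are made up for illustration (imagine a short wellbeing scale such as SWEMWBS), and the helper just reports the mean change alongside a paired t statistic:

```python
from statistics import mean, stdev
from math import sqrt

def paired_change(pre: list[float], post: list[float]) -> tuple[float, float]:
    """Mean change and paired t statistic for matched pre/post scores."""
    assert len(pre) == len(post) and len(pre) > 1, "need matched pairs"
    diffs = [b - a for a, b in zip(pre, post)]  # per-person change
    d_mean = mean(diffs)
    se = stdev(diffs) / sqrt(len(diffs))        # standard error of the mean change
    return d_mean, d_mean / se

# Illustrative scores for 6 matched participants:
pre = [18, 21, 19, 23, 20, 17]
post = [22, 24, 19, 26, 25, 20]
change, t_stat = paired_change(pre, post)
print(round(change, 2), round(t_stat, 2))  # -> 3.0 4.39
```

Nothing fancy, which is the point: matched pairs, a validated scale, and an honest write-up of attrition will satisfy most funders of small programmes.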
What funders actually need isn't a control group. They need confidence that change happened AND that your service caused it.
Here's the reality: RCTs cost £50k+ to run properly. Most small charities have evaluation budgets under £5k. The maths doesn't work.
Your funder wants a control group for your 12-person pilot. This is a real conversation I had last week. 🧵
On my way to London to deliver a workshop with an awesome client. I’ve time to kill before my train home though, so if anyone wants to grab a coffee between 2 and 6pm, drop me a message. x
Got a ToC gathering dust? Challenge: read it aloud. Does it explain causation or just sequencing? Drop your ToC questions below. #SocialImpact #CharityData #ImpactMeasurement
Pro tip: Write your ToC in prose first. Full sentences. Then diagram it. This forces you to articulate the actual logic.
Your assumptions matter more than your boxes. The real ToC lives in those 'if/then/because' statements you're afraid to write down.
Common mistake: confusing activities with mechanisms. 'We run workshops' is an activity. 'Participants gain confidence through peer support' is a mechanism.
The test: Can someone outside your org read it and understand WHY each step leads to the next? If not, it's a work plan with arrows.
A real ToC shows *how* change happens, not just what you'll do. It's about causal pathways: if we do X, then Y will happen *because* Z.
Most Theory of Change diagrams are just fancy to-do lists. There, I said it. 🧵
If you're in the charity space and want to build this kind of tracking into your programmes without drowning in spreadsheets, drop a reply or DM — happy to share what's worked. #SocialImpact #CharityData #ImpactMeasurement
Common objection: "Our participants don't complete follow-up surveys." Reality check — that's a design problem, not a data problem. Short, single-channel, timed prompts dramatically improve completion rates.