Funders and those who invest their money in good causes all want to know whether the programmes they support are making a real difference. Are people happier? More connected? Leading better lives because of what they’ve invested in?
That impact may seem obvious to those involved in delivering the programmes. Case studies, storytelling and traditional evaluations can provide rich testimony to the difference made on a day-to-day basis. But resources are finite, and funders have an understandable appetite for metrics. They want ways to compare the impact of one intervention with another. That’s why, as we’ve discussed before, wellbeing data has become such a powerful tool in recent years. The standard ONS4 questions – asking people to rate their life satisfaction, happiness, anxiety and sense of purpose – help us track change, understand impact, and even put a monetary value on outcomes.
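To make the mechanics concrete, here is a minimal Python sketch of how that kind of valuation is typically done: average the change in 0–10 life-satisfaction scores before and after a programme, and apply a monetary value per point-year of improvement (the basis of the WELLBY approach discussed below). The survey responses are invented, and the £13,000 figure is only an illustrative central value broadly in line with HM Treasury’s supplementary Green Book guidance.

```python
# Minimal sketch (invented data): tracking change in the ONS4
# life-satisfaction question and putting a monetary value on it.
# Assumes the standard convention that a one-point change on the
# 0-10 scale, sustained for one person for one year, is one "WELLBY".

# Invented pre/post responses to "Overall, how satisfied are you
# with your life nowadays?" (0 = not at all, 10 = completely).
pre_scores = [4, 6, 5, 7, 3, 6]
post_scores = [6, 7, 7, 8, 5, 7]

# Illustrative central value per WELLBY; treat as an assumption,
# not a fixed figure.
VALUE_PER_WELLBY_GBP = 13_000

avg_change = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)
wellbys = avg_change * len(pre_scores)  # person-years of improvement, if sustained

print(f"Average life-satisfaction change: {avg_change:+.2f} points")
print(f"WELLBYs (if sustained for one year): {wellbys:.1f}")
print(f"Illustrative monetised benefit: £{wellbys * VALUE_PER_WELLBY_GBP:,.0f}")
```

The simplicity is the appeal: a single, comparable number per programme. But everything downstream depends on respondents being able to answer that 0–10 question in the first place.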
It’s an approach that’s gaining momentum among funders. It also helps many charities better understand and communicate the value of what they do, as highlighted by the three programmes featured in our report for the Olympic and Paralympic funder Spirit of 2012.
But it’s not without its limitations. Chief among these is the fact that the questions – simple and well-established as they are – don’t work for everyone.
In our report for Spirit of 2012, Helping funders measure what matters, we highlight this critical gap: the current ways we measure wellbeing can sometimes exclude neurodivergent people and those with learning disabilities – even when they are core to the mission of many of the programmes funded.
When the data doesn’t fit the people
Projects like Get Out Get Active (GOGA), for instance, are aimed at supporting disabled and non-disabled people to enjoy being active together. These interventions often prioritise individuals who are neurodivergent or have learning disabilities. But if those participants can’t meaningfully respond to the standard wellbeing questions, their experiences – and the impact on their lives – are effectively invisible in the data.
The groups who should be most central to our impact assessments are therefore at risk of being left out or misrepresented. And that, in turn, risks distorting the evidence we rely on to make funding decisions.
Encouraging innovation, facing barriers
Spirit of 2012 has actively encouraged grantees to experiment with more inclusive approaches to measurement. Some developed ‘easy read’ versions of the questions. Others used smiley-face scales or collected responses through guided conversations rather than written surveys. There are no doubt other examples too, such as the simplified version for young people with learning disabilities developed by #BeeWell.
These adaptations make a huge difference in capturing the experiences of people who are otherwise difficult to reach in traditional evaluations. But they present a challenge too: the data from these tools can’t currently be used in cost-benefit analyses, because there’s no agreed method to translate simplified responses into the standard 0–10 wellbeing scale used to calculate WELLBYs (wellbeing-adjusted life years).
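To see why that translation is the sticking point, consider a hypothetical sketch. The five-point smiley scale, the crosswalk values and the responses below are all invented; the point is that any mapping onto the 0–10 scale is currently an unvalidated assumption, and different plausible mappings produce different scores – and therefore different WELLBY counts – from identical answers.

```python
# Hypothetical illustration of the crosswalk problem: translating a
# 5-point smiley-face scale onto the standard 0-10 wellbeing scale.
# Neither mapping below is validated; that is exactly the gap.

smiley_responses = [2, 4, 3, 5, 4, 3]  # invented 1-5 smiley ratings

# Two plausible but unvalidated crosswalks from 1-5 to 0-10:
linear_map = {1: 0.0, 2: 2.5, 3: 5.0, 4: 7.5, 5: 10.0}
midpoint_map = {1: 1.0, 2: 3.0, 3: 5.0, 4: 7.0, 5: 9.0}

def mean_mapped(responses, crosswalk):
    """Average 0-10 score implied by a given crosswalk."""
    return sum(crosswalk[r] for r in responses) / len(responses)

print(f"Linear crosswalk:   {mean_mapped(smiley_responses, linear_map):.2f} / 10")
print(f"Midpoint crosswalk: {mean_mapped(smiley_responses, midpoint_map):.2f} / 10")
# Identical answers, different implied scores - so any WELLBY figure
# built on either mapping inherits that arbitrary choice.
```

Until a crosswalk like this is validated against the standard questions, any cost-benefit figure built on it rests on an arbitrary choice, which is why these responses can’t yet feed into WELLBY calculations.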
That’s a real sticking point for funders trying to compare across projects, make evidence-based decisions, or demonstrate value for money.
What can funders do?
This isn’t an unsolvable problem, but it is one that needs attention, collaboration and leadership. There are a number of ways we can help move this forward:
- Encourage flexibility and innovation in wellbeing measurement, especially when working with target groups where standard questions don’t apply.
- Invest in research and partnerships with academics and evaluation specialists to develop validated, inclusive tools – including visual or simplified wellbeing scales. These can fill evidence gaps and allow us to monetise wellbeing benefits for these groups, so they can be treated on the same footing as others in analyses of government policy.
- Support qualitative approaches where appropriate, recognising that many charities are expert storytellers. Sometimes the richest insights don’t come from a number, but from a story.
Above all, we need to ask ourselves: Are we really measuring what matters, for the people who matter most?
If the goal is to fund work that improves lives, we must make sure our measurement tools are up to the task. That means making space for new approaches, championing inclusion in data as well as delivery, and ensuring no one is left out of the evidence base.
Because fair funding starts with fair measurement. And that’s something we all have a stake in getting right.