Measuring and managing impact: eight pointers
Eastside Primetimers consultant Rosie Chadwick shares her experiences and practical tips about how best to measure and build on your impact.
The importance of measuring and managing impact is widely accepted. Good governance requires it and improvements in delivery depend on it - beneficiaries deserve no less. Despite this, for organisations that are new to impact measurement or simply stretched on many fronts, putting effective approaches in place can be a daunting prospect. This set of practice pointers offers some pragmatic suggestions based on my personal experience of supporting organisations on their impact journey.
1. Be clear about what changes and why
It helps to be clear about a few things at the outset and to map them out. These include the difference you aim to make, the changes you expect to see along the way, what you will do to effect these changes, what gives you confidence that doing 'x' will lead to 'y', and what factors might influence success.
2. Don't measure everything all the time
Tempting as it may be, don't feel you need to measure everything all the time. Depending on what you're doing, the numbers involved and how well-evidenced the work already is, gathering evidence from a particular cohort or a random sample of participants may suffice.
3. Talk with stakeholders about what forms of evidence they would find most helpful
Asking different stakeholders (beneficiaries, funders, key partners) what kinds of evidence they would find most helpful can link you in with others' expertise and increase the chances of stakeholders engaging with your evidence once it is collected.
4. Make the most of the evidence you already hold
One organisation I worked with recorded rich data on its client interactions in its Customer Relationship Management (CRM) system. The data was all in narrative form, however, which made it very hard to summarise. With just small adjustments, they were able to use their CRM to generate much more usable information about how they had helped clients to access benefits, extra school support or more suitable housing.
5. Draw on 'standard' measures where available and appropriate
While the measures you use clearly need to be the right ones for your organisation and beneficiaries, it would be surprising if at least some of these weren't common to other organisations as well. The Big Lottery Fund (now The National Lottery Community Fund) has just produced a helpful guide to outcome frameworks and standalone measures - a good starting point when considering 'industry standard' measures that may be fit for purpose. While not required to use it, an arts organisation I'm working with has benefitted greatly from linking in with the 'Culture Counts' metrics in the Arts Council's Impact and Insight Toolkit.
6. Consider small steps to make your evidence more robust
While large-scale randomised controlled trials are beyond the reach of many organisations, small adjustments can still make evidence more robust. Examples include:
gathering evidence from more than one source
including 'before' and 'after' questions in a single survey (when running separate 'before' and 'after' surveys isn't possible or would involve too much administration)
making the most of comparison opportunities (e.g. comparing different project sites)
taking a more structured approach to case study collection, with open questions to storytellers about what has changed for them and why this matters
allowing for reporting of negative experiences and adverse outcomes - things you don't expect and may not want to hear!
being open and transparent about what results are based on (how many people, over what period etc.)
7. Make sure everyone's on board
It's vital that everyone involved understands what data needs to be collected, why it matters - ultimately, to do a better job for beneficiaries - and their own part in this. This helps counter any lingering sense that data gathering is an unnecessary burden.
8. Create opportunities to engage with the evidence
While it can be tempting to simply report on the evidence and move on, building in opportunities to engage with, and learn from, the evidence is key to driving better data quality and use. This can usefully extend beyond Board meetings and discussion by staff and volunteer teams, to seeking feedback from beneficiaries and other stakeholders.