The importance of evaluation cannot be overstated. Where possible, we advise you to dedicate some time to tracking the outcome(s) of your work.
Evaluating the impact of your solutions enables you to understand if you have achieved your goals, lets you build on what works, and helps you direct resources to activities with maximum impact. Furthermore, it is particularly valuable to share these learnings across the LGA network.
That said, it is not always feasible (or desirable) to conduct a full-scale evaluation. Most councils are working with limited resources and are having to adapt to changing policies and circumstances. The following section will help you to set a reasonable evaluation plan and choose appropriate methods at this time.
3.1. Define your outcomes and metrics
3.1.1. Set your outcomes
When setting up an evaluation, first identify your desired outcomes. Earlier in the guide [1.1. Determine which behaviour to change] we outlined the difference between an ‘ultimate’ outcome, which is the final behaviour you wish to change, and intermediate or ‘proximate’ outcomes. We suggest drafting or using a Theory of Change to map out how you expect your solution to remove barriers and address proximate outcomes in order to change the ultimate behaviour.
Ultimate outcomes
Think about the ultimate outcome that you aim to achieve: Are you looking to increase vaccination rates within a specific group? Correct misinformation around vaccines? Encourage people to take their second dose? Or prompt them to maintain social distancing once they have got their vaccine?
Proximate/intermediate outcomes
Once you have identified your ultimate outcome, think of any intermediate outcomes that you need to change in order to achieve your main goal. Your solution might lead to an increase in vaccine take-up, but might do so by first changing attitudes, beliefs, or intentions around vaccines.
There are a number of factors that are strongly associated with vaccination and can be considered intermediate outcomes: 1) holding positive attitudes towards vaccination, 2) perceiving vaccination to be a social expectation, 3) anticipating regret at not vaccinating, and 4) believing that COVID-19 is serious and that one is at risk of contracting it (Godinho et al., 2016; Bish et al., 2011).
Read more about How to develop a Theory of Change.
3.1.2. Determine your metrics
Once you have defined your outcomes, identify metrics that allow you to track them.
First consider whether you can measure your ultimate outcome. Then examine whether you can measure any other proximate or intermediate outcomes. In order to assess the impact of your solution, keep in mind that you should be able to directly link your solution to the outcome.
- Are you able to measure behaviour, such as actual vaccination?
- Are you able to measure attitudes, intentions, beliefs or self-reported vaccination through surveys?
- Can you directly link your solution to the outcome? In other words, can you determine whether the person who was exposed to your solution later went on to get vaccinated?
- At what level can you capture your metrics? Individual level? Household level? Neighbourhood level?
- Can you capture repeated measures of your outcomes to monitor change over time?
Using proxy metrics
When tracking your ultimate or intermediate outcomes is not possible, consider finding proxy metrics. Proxy metrics are data points that can be used to represent other metrics.
Examine the extent to which your proxy metric provides data specifically about your target audience. Consider whether it is granular or representative enough to be meaningful.
Below are some examples of proxy metrics that could be used as substitutes for tracking vaccination intentions:
- Click through rates on links providing information on vaccines
- Social media reactions to posts (likes, comments, shares, views)
- Number of downloads or delivered leaflets
- Number of people attending vaccine-related online events
- Number of scheduled vaccination appointments
Example: The London Borough of Hounslow tested different behaviourally informed messages on Facebook. Their goal was to address concerns about the vaccine trials being rushed and ultimately encourage vaccination among minority groups. They added links to the Facebook posts that directed residents to the council’s COVID-19 webpage. They tracked click throughs and used them as a proxy metric to evaluate the effectiveness of their messaging.
Read the full case study: London Borough of Hounslow: Used the messenger principle to address vaccine misinformation
3.2. Choose your evaluation method
Deciding which evaluation method to use will largely depend on your time, resources, and practical feasibility. Below you will find a list of commonly used methods that you can use to evaluate your interventions:
A/B tests (also known as randomised controlled trials)
About: A/B tests are a robust evaluation method. They rely on randomly assigning people to either a control group or one or more treatment groups and then comparing outcomes between these groups. People in the treatment group(s) will receive the solution that you have designed, whereas people in the control group will not.
Strengths: A/B tests provide accurate results. If done well, they ensure that any impact you record is directly attributable to your solution rather than to external factors.
Limitations: Setting up A/B tests can require resources and planning. You will have to ensure that you can control who receives your solution in order to randomly assign them into either group. However, most social media platforms have in-built tools to do this.
Example: The London Borough of Hounslow conducted an A/B test on Facebook to test the effectiveness of different messages aimed at addressing people’s concerns about the vaccine being rushed. They targeted people within a 10-mile radius of Hounslow and used click-through rates as a metric for engagement.
Read the full case study: London Borough of Hounslow: Used the messenger principle to address vaccine misinformation
Resources: Learn more about A/B testing and design your experiment in this A/B testing tool by ideas42 and access a guide on RCTs by Nesta
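If your A/B test tracks click-through rates, you can check whether the difference between groups is larger than chance would predict using a two-proportion z-test. The sketch below uses only the Python standard library; the click and impression counts are made-up illustrative figures, not data from any real campaign.

```python
from statistics import NormalDist

# Hypothetical A/B test results: clicks out of impressions for a
# standard message (control) and a behaviourally informed message
# (treatment). These numbers are illustrative only.
clicks_a, shown_a = 120, 4000   # control group
clicks_b, shown_b = 168, 4000   # treatment group

rate_a = clicks_a / shown_a
rate_b = clicks_b / shown_b

# Two-proportion z-test: is the gap in click-through rates bigger
# than we would expect from random variation alone?
pooled = (clicks_a + clicks_b) / (shown_a + shown_b)
se = (pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b)) ** 0.5
z = (rate_b - rate_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Control CTR:   {rate_a:.1%}")
print(f"Treatment CTR: {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

A small p-value (conventionally below 0.05) suggests the treatment message genuinely outperformed the control, rather than the difference being noise; with very small audiences, even a large-looking gap in rates may not be statistically reliable.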
Pre-post surveys
About: You can measure the impact of your solution by getting your target audience to respond to a survey before and after rolling-out the solution. By comparing outcomes before and after you will be able to assess the effectiveness of your solution. You can track different outcomes in the survey, such as self-reported vaccinations, intentions to vaccinate, attitudes or beliefs.
Strengths: It is an effective method to use when you cannot randomise who will be exposed to your communication or solution.
Limitations: It is less accurate than an A/B test or randomised controlled trial. Other external factors might explain changes in outcomes before and after the implementation of your solution.
Example: Wolverhampton Council have developed an online survey to track changes in attitudes. They deliver the survey before and after launching campaigns to understand which messages work and with which groups.
Read the full case study: Wolverhampton City Council: Applying behavioural science principles as quick wins to COVID work
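A pre-post comparison can be as simple as comparing average survey responses before and after the campaign. The sketch below assumes a hypothetical five-point intention-to-vaccinate scale; the scores are invented for illustration.

```python
# Hypothetical pre-post survey: residents rate their intention to get
# vaccinated on a 1-5 scale before and after a messaging campaign.
# These figures are illustrative, not real survey data.
pre_scores = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
post_scores = [4, 3, 4, 4, 3, 3, 5, 3, 4, 4]

pre_mean = sum(pre_scores) / len(pre_scores)
post_mean = sum(post_scores) / len(post_scores)
change = post_mean - pre_mean

print(f"Mean intention before: {pre_mean:.2f}")
print(f"Mean intention after:  {post_mean:.2f}")
print(f"Change: {change:+.2f}")
# Caveat: without a control group, any change may also reflect
# external factors (news coverage, national campaigns, etc.).
```

In practice you would also want a large enough sample, and ideally the same respondents at both time points, before reading much into the change.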
One-off survey experiments
About: You can launch one-off surveys where you randomly assign survey participants into a number of groups. Each group will be shown a different communication. You can record respondents’ beliefs, attitudes or intentions as outcome metrics. Doing so will allow you to compare the effectiveness of different types of communications and identify the most impactful ones.
Strengths: One-off survey experiments require less planning than an A/B test and can provide accurate results.
Limitations: Getting your target audience to take the survey might be challenging. Consider using incentives to ensure participation.
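The key ingredient of a survey experiment is random assignment: each respondent must see one message variant chosen at random. One way to do this, sketched below with placeholder variant names, is to derive the assignment from the respondent's ID so that the same person always sees the same variant if they reopen the survey.

```python
import random

# Placeholder message variants for a one-off survey experiment;
# the names are illustrative, not taken from any real campaign.
VARIANTS = ["social-norms message", "safety-reassurance message", "control message"]

def assign_variant(respondent_id: str, seed: int = 42) -> str:
    """Deterministically assign a respondent to a message variant.

    Seeding a private random generator with the respondent ID means
    assignment is random across people but stable per person.
    """
    rng = random.Random(f"{seed}:{respondent_id}")
    return rng.choice(VARIANTS)

# With enough respondents, assignments are roughly balanced.
assignments = [assign_variant(f"resident-{i}") for i in range(300)]
for variant in VARIANTS:
    print(variant, assignments.count(variant))
```

Because each group is a random slice of the same audience, differences in their recorded attitudes or intentions can be attributed to the message they saw rather than to pre-existing differences between groups.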
Qualitative feedback
About: You can conduct a focus group to test your behavioural solution or communication. Gathering qualitative feedback from residents can help you understand if it might work and fine tune your solution before a full roll-out.
Strengths: Qualitative feedback provides rich input that can help to answer why a particular solution might (not) work.
Limitations: Feedback based on qualitative data is hard to quantify and organising focus groups might be time consuming. This method might be best used during the solution development phase but not as a means of conducting a final evaluation.
Example: The London Borough of Hackney conducted a series of focus groups with residents to test the effectiveness of a set of COVID-19 communications. This allowed them to improve their designs and pick the best one.
Read the full case study: Hackney Council: Used resident focus groups to test vaccine hesitancy messaging
Monitoring clickthroughs, downloads, or queries
About: You can monitor how your target audience interacts with your solution. For example, you can track the number of views, clicks, or downloads of a social media post or an online resource.
Strengths: Often quick and easy to implement (the data are readily available).
Limitations: Unless the data you monitor closely represents your ultimate or intermediate outcomes, the ability to evaluate the impact of your solution is limited.
Example: Norfolk County Council developed a behaviourally informed COVID-19 prevention toolkit for small businesses. They estimated the impact of their solution by monitoring the number of times the toolkit was downloaded from the council’s website.
Read the full case study: Norfolk County Council: Designed toolkits for small businesses to encourage preventative behaviours
3.3. Learn, adapt and scale
Once you have selected your evaluation method and analysed your data, you should be able to tell which of the solutions you implemented worked particularly well and which were low performers.
You might have also collected valuable feedback throughout the evaluation phase. For example, if you engaged in focus group discussions, you might have received comments from residents when they reacted to your proposed solutions.
Use these learnings to improve your solutions and, once you have the capacity, evaluate them again. This iterative process will ensure that you systematically build on what works and implement behavioural solutions that have the strongest positive impact.
Scale and disseminate
If you have conducted research and found relevant learnings, it is immensely valuable to share these internally within your council and with practitioners across the LGA network.
Please share your learnings with us! We would like to promote and disseminate them across our network.