How Do I Know I Made A Difference?

You need to know how your work affects the people it is trying to help. Read more about impact evaluations here.

An evaluation has four key stages: design, data collection, data analysis, and presenting your results.

1. Design your evaluation

There are two main elements that should be included in your evaluations whenever possible: pre-post intervention design and comparison groups.

The idea behind a pre-post intervention design is simple. For example, if you want to know whether your diet worked, you measure your weight before and after the diet. Then you will know whether your weight has changed. Similarly, if you want to know whether your intervention to reduce anti-migrant hate on Facebook worked, you need to measure hate before and after your intervention and check whether there are any differences.
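
To make this concrete, here is a minimal sketch in Python. All scores are invented for illustration; in practice they would come from however you measure hate in your own context (for example, coder-assigned scores for sampled posts).

```python
# Pre-post sketch with hypothetical data: each number is a hate-speech
# score (say, 1-10) that a coder assigned to a sampled Facebook post.
pre_scores = [7, 8, 6, 9, 7, 8]    # posts sampled before the intervention
post_scores = [5, 6, 4, 7, 5, 6]   # posts sampled after the intervention

pre_mean = sum(pre_scores) / len(pre_scores)
post_mean = sum(post_scores) / len(post_scores)

# A negative change means measured hate went down after the intervention.
print(f"Pre mean: {pre_mean:.2f}, post mean: {post_mean:.2f}")
print(f"Change: {post_mean - pre_mean:+.2f}")
```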

Comparison groups are made up of people who have not participated in your program. Collecting data from them, too, allows you to check what would have happened if you had not run your project. 

Comparison groups allow you to control for other things that might affect your program. For example: what if, during your intervention to improve tolerance toward religious minorities on Facebook, a member of a religious minority commits a heinous crime against a local girl? It is very likely that this act, and the media discussion about it, will increase intolerance toward religious minorities on Facebook. In this case, having a comparison group is the only way you can still measure the impact of your program, because while both the program group and the comparison group will have been affected by news of the crime, only the program group will show the effects of your intervention. Learn more about comparison groups in the following video.

How Do Comparison Groups Work
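
One common way to put pre-post measurement and a comparison group together is a difference-in-differences calculation: subtract the comparison group's change (which captures outside events like the crime described above) from the program group's change. A sketch, with invented group averages:

```python
# Difference-in-differences sketch with hypothetical average intolerance
# scores. Both groups worsen after the crime is reported in the media,
# but only the program group also received the intervention.
program_pre, program_post = 6.0, 6.5        # program group means
comparison_pre, comparison_post = 6.0, 8.0  # comparison group means

program_change = program_post - program_pre           # +0.5
comparison_change = comparison_post - comparison_pre  # +2.0

# The comparison group's change estimates what would have happened
# anyway; subtracting it isolates the effect of the program itself.
effect = program_change - comparison_change
print(f"Estimated program effect: {effect:+.1f}")  # -1.5 in this example
```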

2. Collect your data

There are two broad types of research approaches that you can use to collect your data: qualitative and quantitative.

Qualitative tools aim to understand why your program did or did not achieve its aims. For example, you can analyze the language used by Facebook users over time, or you can interview people who post in a Facebook group and ask them what they think of your intervention program.

Quantitative tools aim to quantify (to transform into numbers, like measuring your weight in kilograms) the change that your program achieved. For example, you can count the likes and shares of intolerant Facebook posts. Or you can give a score to all the sentences written by the users of the Facebook group where you are conducting your intervention, on a scale from 1 to 10, where 1 = maximum level of intolerance and 10 = maximum level of tolerance. In parallel, you could conduct a questionnaire among a large group of Facebook users and assess their level of tolerance, too.
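
As an illustration of the scoring approach just described, the sketch below aggregates hypothetical sentence scores. The numbers and the "clearly intolerant" cut-off are invented for the example.

```python
# Hypothetical sentence scores from the Facebook group, on the scale
# described above (1 = maximum intolerance, 10 = maximum tolerance).
sentence_scores = [3, 4, 2, 5, 4, 3, 6, 4]

average_tolerance = sum(sentence_scores) / len(sentence_scores)
# Treating scores of 3 or below as "clearly intolerant" is an arbitrary
# choice made for this example.
share_intolerant = sum(1 for s in sentence_scores if s <= 3) / len(sentence_scores)

print(f"Average tolerance score: {average_tolerance:.1f} / 10")
print(f"Share of clearly intolerant sentences: {share_intolerant:.0%}")
```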

When you assess the impact of an online campaign, it’s important that you use as many different approaches as you can: 

1. Collect all the metrics of online impact that are available to you (a short calculation sketch follows this list). The most basic ones are:

  • impressions (number of times a post is displayed)
  • reach (number of people who received impressions of a post)
  • engagement (likes, clicks, or comments on a post)

2. Ask real people what they think about your campaign (e.g., your videos) via:

  • an online questionnaire with members of the audience (e.g., members of a Facebook group where you posted your video)
  • individual interviews or focus groups (in person or online) with members of your audience
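
As promised above, here is a short sketch of how those basic metrics can be turned into a comparable figure. All numbers are made up; real values would come from your platform's analytics, and engagement rate per impression is just one reasonable choice of summary.

```python
# Sketch: turning raw platform metrics into a comparable rate.
# All figures are invented for illustration.
posts = [
    {"name": "video_1", "impressions": 12_000, "reach": 8_500, "engagements": 340},
    {"name": "video_2", "impressions": 4_200, "reach": 3_100, "engagements": 260},
]

for post in posts:
    # Engagement per impression lets you compare posts of very different
    # sizes: a small post can outperform a viral one on this measure.
    rate = post["engagements"] / post["impressions"]
    print(f"{post['name']}: reach {post['reach']:,}, engagement rate {rate:.1%}")
```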

Tip

If you don’t have access to the members of your audience, you can select people who are similar to them (in terms of age, gender, and political attitudes), ask them to watch your videos, and then have them complete a questionnaire or participate in a focus group or an individual interview.

Combining different approaches is very important, because observable online behavior does not fully or consistently reflect people’s real opinions. For example, when you’re counting views and shares, you don’t know if people watch or share a video because they like it, because they dislike it, or because they’re bored. The only way to really know is to ask them what they think.

There are also ethical considerations involved in conducting evaluations. Watch this short video to find out more.

Do No Harm Evaluation Tips

3. Analyze your data

Once you have collected your data, you need to analyze it. To analyze interview and focus group data, the easiest approach is to identify recurring themes in what participants say. Themes are recurring sentences, ideas, or sentiments.

You can then compare the themes found in the data from your program participants against those from the comparison group. Here are step-by-step instructions for conducting data analysis for program evaluation.
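
A minimal sketch of that comparison, assuming you have already coded each participant statement with a theme label (the theme names and counts here are invented):

```python
from collections import Counter

# Hypothetical coded interview data: each entry is the theme a coder
# assigned to one participant statement.
program_themes = ["empathy", "empathy", "curiosity", "distrust", "empathy"]
comparison_themes = ["distrust", "distrust", "curiosity", "distrust", "fear"]

program_counts = Counter(program_themes)
comparison_counts = Counter(comparison_themes)

# Side-by-side counts of each theme hint at what the program changed.
for theme in sorted(set(program_themes) | set(comparison_themes)):
    print(f"{theme}: program {program_counts[theme]}, "
          f"comparison {comparison_counts[theme]}")
```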

4. Present your results

Any result is valid and useful, whether or not it shows the program succeeded in the way you hoped. Be honest and transparent when presenting the results of your evaluation. It may be tempting to misrepresent data or cover up bad results in order to communicate the success of a program and avoid sharing findings that show no, or even negative, impact. We strongly advise transparency and honesty: evaluations are useful both when they show that a program is successful and when they show that it is not.

Helpful examples of good evaluation reports can be found here.


Contributed by Professor Greg Barton and Dr Matteo Vergani, Alfred Deakin Institute for Citizenship and Globalisation, Deakin University
