Are you thinking about evaluation? Top 10 questions answered

Mikaela Green on Jan 5, 2026

We’ve recently been working with the Cancer Campaigns Community of Practice: a brilliant group of comms colleagues from across the NHS and various cancer charities. We’ve been supporting their efforts to align their approach to evaluation, and we’ve noted some common concerns and questions along the way, which might be helpful for you too…

Do we really need to evaluate everything?

Short answer: no. One of the biggest myths about evaluation is that it has to be exhaustive. In practice, the most useful evaluations are focused and proportionate. You can’t measure everything. Instead, the key question to ask upfront is: what decisions will this evaluation help us to make? Go from there.

Is evaluation just about proving impact?

Sometimes, but not only. Impact evaluation (measuring long-term change) is just one type of evaluation, and often the hardest to do well in comms. Equally valuable are formative evaluations (shaping ideas before launch), process evaluations (understanding whether something was delivered as intended) and outcome evaluations (exploring short- or medium-term changes, like awareness).

What’s the difference between outputs and outcomes?

Outputs are what you produce: for example, ads delivered, materials distributed or people reached. Outcomes are what changes: for example, awareness, knowledge, intentions or behaviours.

What’s the difference between correlation, contribution and attribution?

Correlation is when two things move together but one hasn’t necessarily caused the other: for example, GP visits increased during the campaign period, but so did cases of flu. Contribution suggests that your campaign helped to influence a change, alongside other factors. Attribution means you can directly link a change to the campaign or intervention itself, which is unlikely unless you ran a highly structured trial and removed most of the competing variables.

What you’re probably asking is: did my campaign or intervention cause the change? A common misconception is that attribution should always be the goal. In reality, credible contribution is often the most realistic and most honest claim.
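To make that concrete, here’s a minimal sketch in Python (all figures hypothetical, invented purely for illustration) showing how two series can move together almost perfectly without one causing the other:

```python
# Hypothetical weekly figures: GP visits and flu cases both rise over the
# same weeks a campaign happens to run.
import statistics

gp_visits = [100, 110, 125, 150, 180, 210, 230, 240]  # weekly GP visits
flu_cases = [20, 25, 35, 55, 80, 105, 120, 130]       # weekly flu cases

# Pearson correlation between the two series (requires Python 3.10+)
corr = statistics.correlation(gp_visits, flu_cases)
print(f"correlation: {corr:.2f}")  # close to 1: the series move together

# A campaign running over these same weeks would also correlate strongly
# with GP visits, yet flu season remains a competing explanation. Without
# a structured design (e.g. a comparison region that saw no campaign), a
# careful claim of contribution, not attribution, is the honest ceiling.
```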

If we can’t prove attribution, is the evaluation still worthwhile?

Absolutely. Understanding how and why something appears to be working (or not) is hugely valuable. Theory-based approaches, like using a clear theory of change, help you to test assumptions, identify weak spots and refine future activity, even when attribution isn’t possible.

How much should we spend on evaluation?

A helpful rule of thumb: around 10% of overall project spend, so a £50,000 campaign might set aside roughly £5,000 for evaluation. That doesn’t mean you must always hit that figure, particularly where a large media spend would make 10% disproportionate, but it’s a useful sense-check. If evaluation spend is close to zero, learning will be limited.

When should we start thinking about evaluation?

The strongest evaluations are planned before activity begins, not bolted on at the end. Thinking early helps you to clarify objectives, agree what “success” looks like, choose realistic and ethical methods for collecting evidence and avoid scrambling for data later.

What makes ‘good’ evidence?

Good evidence should be appropriate, robust and collected ethically. Appropriate means measuring what matters, which means working out what’s important to the project from the outset. Robust means that evidence is collected systematically, via a transparent process that others could replicate. Ethical collection involves informed consent, with particular consideration given to vulnerable populations.

Isn’t qualitative evidence just “nice stories”?

This is a common myth. Some stakeholders may only be interested in ‘big numbers’, but well-designed qualitative evidence (like structured feedback or deliberative discussions) can reveal insights that numbers alone never will. Qualitative evidence is especially useful for understanding barriers and motivations, explaining why an outcome did or did not happen, and identifying unintended or negative effects.

What are some useful frameworks for collecting evidence?

Frameworks such as the GCS Evaluation Cycle, from the UK Government Communication Service, and Theory of Change offer structured ways to plan and collect evidence, with clear examples of best practice.

If you’d like help planning or preparing your evaluation, visit the Contact page of our website to get in touch.