
4 – Ensuring monitoring and evaluation

Monitoring refers to setting targets and milestones to measure progress and achievement, and to checking whether the inputs are producing the planned outputs, i.e. whether implementation is consistent with design intent; this implies we can adjust our approach during the monitoring period. Evaluation is not just about demonstrating eventual success; it also provides insight into why things do not work (learning from mistakes has equal value). Monitoring and evaluation are not about finding out everything (which would be intimidating), but focus on the things that matter.

Project monitoring
In a generic project management context, monitoring involves “oversee[ing] the progress of project work and updat[ing] the project plans to reflect actual performance” (Axelos). In the Welsh healthcare context, the Quality and Safety Framework (WG; 2021) describes a universal duty of “quality management” to ensure that care meets the six domains of quality (care that is safe, effective, patient-centred, timely, efficient and equitable). It describes a system that continuously connects quality assurance, planning and improvement activity. Periodic measuring and monitoring permit:

  • Assurance of implementation progress, in keeping with delivery expectations around scale and pace
  • A mechanism to capture and share emerging learning at local level (especially where a contemporaneous lessons log is maintained) and on a regional or national basis (typically via interim reports), thus deriving maximal value from early and ongoing implementation experience
  • Assurance that projects get the resources they need for successful delivery (monitoring may identify additional support requirements or reconfigurations to address planning gaps)
  • Recording and management (ownership, mitigations, etc.) of issues and risks, whether anticipated or emergent
  • Remedial course correction, or unscheduled project termination in the face of insurmountable risks, actual harms, resource constraints or other factors that undermine the business case.

Project evaluation
Evaluation refers to the structured process of assessing the success of a project or programme in meeting its aims and reflecting on the lessons learned. The key difference between monitoring and evaluation is that evaluation places a value judgement on the information gathered during a project (Research Councils UK; 2011), including the monitoring data. The assessment of a project’s success (its evaluation) can differ depending on whose value judgement is used. Evaluation permits:

  • Assessment of whether a project has achieved its intended goals
  • Understanding how the project has achieved its intended purpose, or why it may not have done so
  • Identifying how efficient the project was in converting resources (funded and in-kind) into activities, outputs (objectives) and outcomes (goals or aims)
  • Assessment of how sustainable and meaningful the project was for participants
  • Informing decision makers about next steps.

Service evaluations may evolve into research proposals (perhaps aiming to resolve unanswered questions) or lead to review of the existing business case (see section 3), resulting in a decision to scale up a successful, innovative project (see section 5), continue as-is or with improvements, or to stop the project. Evaluation is:

  • Often viewed, mistakenly, only in terms of a visible endpoint product (such as an evaluation report), but more robust when implemented as a “before, during and after” activity that runs alongside the project itself.
  • Best planned prospectively, with contemporaneous data collection (i.e. monitoring) during implementation for summative assessment of pre-defined outcomes; retrospective evaluations can still be worthwhile, but are subject to additional bias.
  • Best carried out or overseen by someone from outside the project (again to reduce bias) and with representative contributions (e.g. with both service provider and user participation).
  • Enhanced by inclusion of both quantitative (numbers e.g. costs) and qualitative (narrative or “lived experience” e.g. via interviews) data.
  • Aided by the use of logic model and/ or evaluation plan templates (see below).

Logic models
Logic models can help sense-check the elements that must come together to successfully plan, deliver and evaluate a project. They can be integrated into project plans from the outset, or inform a bespoke monitoring and evaluation plan by teasing out the following:

  • Inputs: The key things we need to invest/ have in place to support the activity
  • Activities: What we do with the inputs
  • Outputs: What we produce as a result of the activities
  • Outcomes: What our products will achieve for people or services (aims; these can vary over time e.g. short, medium or long-term and should be SMART)
  • Impacts: High-level, ultimate ambitions e.g. the quadruple aims of A healthier Wales
  • Barriers: What we may find difficult to influence or overcome (e.g. external factors)
  • Assumptions: What we hope is already in place (supportive conditions, etc.)

A logic model tries to establish sequential links between the above elements, in multi-row table or diagram form. Sometimes logic models are easier to populate right-to-left (starting with impacts and working back towards inputs) rather than left-to-right (starting with inputs). For background information on logic models, refer to the following resources:

  • Logic models (Data Cymru): Using logic models to plan, map and identify the activities and inputs that lead to results, and to understand desired changes and who would be accountable for them.
  • Using logic models in evaluation (The Strategy Unit): This briefing was prepared for NHS England by the Strategy Unit, as part of a programme of training to support national and locally based evaluation of the Vanguard programme and sites.

For a simple logic model template, see “Additional support resources” (below).
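
Purely as an illustration (and not part of any national template), the elements above could be captured as a simple structured record before being transferred into a table or diagram. The sketch below uses Python; the field names and example entries are assumptions made for the sketch only.

    # Illustrative sketch only: capturing logic model elements as structured data
    # before laying them out in a multi-row table or diagram.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LogicModel:
        inputs: List[str] = field(default_factory=list)       # what we need to invest/ have in place
        activities: List[str] = field(default_factory=list)   # what we do with the inputs
        outputs: List[str] = field(default_factory=list)      # what we produce as a result
        outcomes: List[str] = field(default_factory=list)     # what this achieves for people or services
        impacts: List[str] = field(default_factory=list)      # high-level, ultimate ambitions
        barriers: List[str] = field(default_factory=list)     # external factors we may struggle to influence
        assumptions: List[str] = field(default_factory=list)  # supportive conditions we hope are in place

    # Hypothetical example, populated "right-to-left": starting from the intended
    # impact and working back towards the inputs.
    model = LogicModel(
        impacts=["Contribution to the quadruple aims of A healthier Wales"],
        outcomes=["Shorter waits for a first appointment within 12 months"],
        outputs=["Weekly community clinic delivered in each practice"],
        activities=["Recruit and train staff", "Schedule and run clinics"],
        inputs=["Cluster funding", "Clinic space", "Staff time"],
        barriers=["Workforce availability"],
        assumptions=["Practices willing to host clinics"],
    )
    print(model)

Each list corresponds to one column of a logic model template, so the same content can be transferred directly into a table or diagram for the project plan.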

Evaluation plans
There is no magic formula for developing a universal evaluation plan. If evaluation was something of an afterthought (it happens!), a reflective and inclusive post-project review can recover some value by asking “What went well? What went less well? How would we do it differently next time?” A simple prospective evaluation plan might ask the following:

  • What do we want to know? The evaluation question(s); the “things that matter”
  • How will we know it? The indications of success (or harm) we will use
  • How will we collect indicator data? The data source(s) and analysis method
  • When/ where will data be collected? Timeframes and tools e.g. point-of-care
  • Who will do this? Monitoring and evaluation roles and responsibilities

For a simple evaluation plan template, see “Additional support resources” (below); make sure to address each evaluation question on its own row. Cluster Governance: A Guide to Good Practice offers examples of real-world cluster evaluations to learn from: audiology advanced practitioners; treatment unit; Mind in the Vale of Glamorgan evaluation/ therapies; and care home ANP innovation/ evaluation.
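
Purely to illustrate the “one row per evaluation question” structure described above, the sketch below records each question alongside its indicators, data sources, timeframes and responsibilities, and writes them to a simple spreadsheet-style file. The column names and example content are assumptions made for the sketch, not a prescribed template.

    # Illustrative sketch only: one row per evaluation question, mirroring the
    # five prompts above. All example content is hypothetical.
    import csv

    evaluation_plan = [
        {
            "question": "Did the new clinic reduce waiting times?",                    # what do we want to know?
            "indicator": "Median days from referral to first appointment",             # how will we know it?
            "data_collection": "Booking system extract, analysed quarterly",           # how will we collect indicator data?
            "when_where": "Monthly, at point-of-care, for months 0-12",                # when/ where will data be collected?
            "responsibility": "Practice manager (collection); cluster lead (review)",  # who will do this?
        },
    ]

    # Write the plan out so each evaluation question sits on its own row.
    with open("evaluation_plan.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=evaluation_plan[0].keys())
        writer.writeheader()
        writer.writerows(evaluation_plan)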

PCMW/ ACD monitoring and evaluation plan
The Primary Care Model for Wales (PCMW) and Accelerated Cluster Development (ACD) Programme implementation monitoring and evaluation plan sets out how progress against these transformation ambitions will be assured, how emerging learning will be shared, and how local and regional plans will be joined up. It describes the step-wise introduction of several supporting tools and products:

  • Key indicator dashboard: a live tile on the Primary Care Information Portal for reporting metrics around PCMW, ACD and other primary care outcomes
  • Self-reflection tool: an annual online questionnaire asking clusters what went well, less well or could be done differently
  • Cluster Development Framework: sets out standards and maturity criteria expected for demonstrating implementation progress
  • Peer review process: describes how clusters and regional partnership boards will be involved in developmental appraisal once per IMTP cycle
  • National implementation progress report: an annual interim (monitoring) report that summarises progress and key learning to date
  • Contribution analysis: a method for approaching endpoint evaluation that is suited to understanding complexity.

See also the ACD Toolkit, which details the PCMW/ ACD monitoring and evaluation plan.

Additional support resources
The following resources provide further background on defining monitoring and evaluation requirements:

  • Cluster project planning: Cluster Governance: A Guide to Good Practice covers recording and monitoring, and evaluation (Appendix 12).
  • Resources to help you develop your cluster: Includes a section on evaluation (part of Cluster working in Wales)
  • Logic models in evaluation: Cluster Governance: A Guide to Good Practice covers logic model components, discussion of evaluation types and provides logic model and evaluation plan templates.
  • Introductory guide to evaluation (Data Cymru): This guide will support your understanding of what evaluation is and why it is important for your projects, programmes and policies; give you the 'basics', so you understand why and when you might undertake evaluation; provide you with direction as to the approaches and processes you might use to undertake effective evaluation; and provide pointers to further guidance and support.
  • The Magenta Book: Guidance for evaluation (HM Treasury): The Magenta Book is the recommended central government guidance on evaluation that sets out best practice for departments to follow; recommended by Data Cymru as the “go to” evaluation resource.
  • Monitoring and evaluation expertise is a scarce resource; it may be available via local public health teams, or from an academic partner (who can add rigour and aid dissemination of learning, typically in return for data access).