Approaches to evaluation

Broadly, there are two useful distinctions to be made in considering general approaches to evaluation.

The first is between formative and summative evaluation; the second, between outcome and process evaluation.

Formative or summative evaluation

Summative evaluation takes a point at the (supposed) end of the intervention or process in question, and attempts to sum up the results against previously agreed, generalised criteria, typically using a manageably limited range of measures, usually quantified. (Much of medical research is based on this paradigm, as it is broadly suitable for assessing medications and surgery, for example.)

This approach is inherently difficult when addressing entrenched needs - which tend to require longer-term interventions - and still more so with complex needs, which are harder to capture in a few simple measures. Nevertheless, a reasonable proxy measure can sometimes be suggested. Housing First, for example, made effective use of one such measure - housing stability, measured quantitatively by length of tenure after re-settlement.

Formative evaluation, by contrast, is on-going, actively intervening in the learning and evolving process; it also tends to be more qualitative than quantitative. It is better suited to complex needs and systemic interventions. Although formative evaluation is usually provided by an external agency, action learning and reflective practice are both, in effect, forms of informal formative evaluation.

(For more on the distinction, see: Danuco, opposite)

Outcome and process assessments

Finally, there is a useful distinction to be made between outcome assessments - whether formative or summative - and fidelity (or 'process') evaluations, which attempt to judge how far any one approach conforms in practice to the model description, the 'ideal type', for such an intervention. As suggested earlier, the application of otherwise well-evidenced standardised treatments to those with complex needs tends to raise such questions of fidelity to the model.

For the assessment of PIEs, the Pizazz and the PIEAbacus begin as process assessments, asking to what extent the actual practice of a service is aligned with the main themes, and their practical expression, in the PIE framework.

Yet the Pizazz process, as it becomes part of the on-going self-directed development planning of services, becomes a formative assessment, feeding back into action planning and future development.

Further reading, listening and viewing

1: On evaluation per se

  • PIE assessment - what is the point? (PIELink page): HERE
  • Service evaluation by outcomes (PIELink page): HERE
  • Evaluations of specific interventions (PIELink page): HERE
  • Whole systems evaluation (PIELink page): HERE
  • Formative vs summative evaluation (PIELink page): HERE
  • Outcome and process assessments (PIELink page): HERE

2: On complex needs evaluation and research issues generally

  • Annie Danuco, on formative vs summative evaluation: HERE
  • Becky Rice and Juliette Howe on person-centred research for complex needs: HERE
  • Grant Everitt on the range and sheer complexity of data in work with complex needs: HERE
  • Stephanie Barker and Nick Maguire on the lack of studies researching peer support: HERE
  • Sophie Boobis on researchers learning from a dialogue with evolving practice (video): HERE
  • McDonald & Tomlin: on mindfulness evaluation with young people, with cautions over a premature preference for meta-analysis: HERE
  • Emma Belton: on the challenges in researching behaviour change in young people; and the search for alternative evaluation approaches: HERE
  • Mental Health Foundation: Progression Together, a report with honest comments on difficulties with evaluation studies: HERE
  • Robin Johnson: 'Do complex needs need complex needs services?' (Pts 1&2): HERE
  • Zack Ahmed on using Participatory Appraisal in involving users in local area needs research: HERE
  • Collaborate/Newcastle University Business School on complexity and a new paradigm: HERE and (excerpts): HERE
  • Sophie Boobis: Evaluation of a Dialogical Psychologically Informed Environment: HERE
  • Brett Grellier: report on a mindfulness programme in three homelessness hostels: HERE
  • Sophie Boobis on evaluation of facilitated PIEs training: HERE
  • Robin Johnson (in conversation) on outcomes measurement: HERE


3: On PIEs assessment specifically

  • The Pizazz as a research tool: HERE
  • The iAbacus team on the iAbacus process - developing the questions: HERE
  • 'Useful questions', the Pizazz process handbook: HERE