On evaluation

For those working with individuals with very complex needs - those who have experienced, or are at risk of, exclusion, rejection and systemic failure - evaluation of our efforts is both challenging and crucial. When considering the evaluation of psychologically informed environments for this population, we need to keep in mind a number of distinctions, which bear upon the suitability of different evaluation approaches to different circumstances or focuses of action.


Firstly, granted the prevailing funding climate, there is widespread interest in service evaluation. Both service providers and commissioners will want evidence that their efforts have been effective. Despite the complexity and entrenched nature of the difficulties, such local evaluations of particular services tend to revolve around a relatively narrow set of short-term 'outcomes' for clients.

(For some thoughts on how to manage these expectations more effectively, see Johnson, in conversation; and for the complexity of data, see Everitt; both panel right).

Secondly, there is research on the effectiveness of specific interventions - specific activities such as CBT, mindfulness or motivational interviewing. Here the principal concern for complex needs services research is to ensure that evidence on interventions proven effective for the majority is still relevant to those most excluded (see Johnson, right panel, on complex needs services).

Typically, standard treatments need considerable revision, adaptation and customisation ('personalisation') to meet more complex needs; and this may take their actual use far from the evidenced original. Other approaches, more suited to complex and entrenched needs and to exclusion, are harder to craft, and produce softer, more qualitative conclusions; but they may be more relevant.

Thirdly, granted the complexity of need and the multiplicity of services to meet specific needs, we are increasingly seeing attempts at systems change, and at whole systems evaluation - processes and measures of systemic integration between (primarily) local services.

Particularly valuable in this context is recent work by Collaborate and Newcastle University (see panel right: 'A whole new world: funding and commissioning for complexity') on the challenge for commissioning to develop a very different way of working - one which challenges the 'contracts culture' of the past 20+ years, and proposes a 'new paradigm' in which partnership and trust, rather than competition and monitoring, are the way forward.


Another useful distinction is that between formative and summative evaluation. Summative evaluation fixes a point at the supposed end of the process in question and attempts to finalise the results there - inherently difficult with entrenched need - against criteria agreed in advance and generalised, typically with a manageably limited range of measures. (Much health service research is based on this paradigm, as it is broadly suitable for assessing medications and surgery, for example.)

Formative evaluation, by contrast, is ongoing, actively intervening in the learning and evolving process; and it tends to be more qualitative than quantitative. It is better suited to complex needs and systemic interventions. Although formative evaluation is usually provided by an external agency, action learning and reflective practice are both, in effect, forms of informal formative evaluation. (For more on the distinction, see Danuco, opposite.)


Finally, there is a useful distinction to be made between outcomes assessments - of all kinds - and fidelity (or 'process') evaluations, which attempt to judge - and usually to ensure - how far any one approach conforms in practice to the model description, the 'ideal type', for such an intervention. The application of well-evidenced standardised treatments to those with complex needs tends to raise such questions of fidelity to the model.

(NB: Some approaches, such as Housing First, despite some early concerns over 'mission drift', have managed this dilemma by distinguishing those areas where strict standardisation is expected from others where a high degree of customisation and personalisation is allowed - even required.)

The proposed PIEs assessment and service specification tool - currently still in development - is an example of an attempt to devise a format for assessing the extent to which any service actually uses the PIE framework, and so can call itself a PIE with confidence.



Further reading, listening and viewing

1:  On complex needs research issues generally

Annie Danuco on formative vs summative evaluation HERE

Becky Rice and Juliette Howe on person-centred research for complex needs HERE

Grant Everitt on the range and sheer complexity of data in work with complex needs HERE

Stephanie Barker and Nick Maguire on the lack of studies researching peer support HERE

Sophie Boobis on researchers learning from a dialogue with evolving practice (video) HERE

McDonald & Tomlin on mindfulness evaluation with young people, with cautions over a premature preference for meta-analysis HERE

Emma Belton on the challenges in researching behaviour change in young people, and the search for alternative evaluation approaches HERE

Mental Health Foundation: Progression Together, a report with honest comments on difficulties with evaluation studies, HERE

Robin Johnson on complex needs and standardised treatments HERE

Zack Ahmed on using Participatory Appraisal in involving users in local area needs research HERE

Collaborate/Newcastle University Business School on complexity and a new paradigm HERE and (excerpts) HERE

2: On PIEs assessment specifically

Sophie Boobis: Evaluation of a Dialogical Psychologically Informed Environment HERE

Brett Grellier: report on a mindfulness programme in three homelessness hostels HERE

Sophie Boobis on evaluation of facilitated PIEs training HERE

The iAbacus team on the iAbacus process - developing the questions HERE

The Pizazz working party outline (forthcoming)

Robin Johnson in conversation on outcomes measurement HERE