"Rejecters" make the case for practice-based evidence
“There is no evidence that using an evidence-based approach to health care actually improves outcomes, and plenty of anecdotal evidence that it doesn’t.”
This about sums up the attitude of the “rejecters” – one of three classes of practitioner, whose astringent criticism of the new conventional wisdom of prevention science gets an airing in the latest edition of Clinical Child Psychology and Psychiatry.
In “Improvers, Adapters and Rejecters — the link between evidence-based practice and evidence-based practitioners”, Nick Midgley, a psychotherapist at the Anna Freud Centre, sets out the variety of attitudes to evidence-based practice current in his field of child mental health.
At the extreme are those for whom the whole evidence-based practice (EBP) movement is based on “an outrageously exclusionary and dangerously normative approach promoting dependency on pre-interpreted, pre-packaged sources”.
These are the group Midgley calls the “rejecters”. For others, the ideals of evidence-based practice are more valued, and the debate concerns how best to put them into practice.
Whether in medicine, education or mental health, the culture of evidence-based practice pervades almost every aspect of our public lives, he argues.
That such an approach should have become a universal impulse is a tribute to the work of the British epidemiologist Archie Cochrane and his 1972 critique of the medical establishment Effectiveness and Efficiency.
In that book, Cochrane argued that patterns of care in medicine were “chaotic, individualistic, often ineffective and sometimes harmful,” largely due to the fact that medicine itself had not organized its knowledge “in any systematic, reliable and cumulative way”. Cochrane’s solution was evidence based medicine (EBM), the forerunner of the more wide-ranging evidence-based practice movement.
“At best, this development has led to a deeper understanding of ‘what works for whom’ and a corresponding improvement in the provision of high-quality care,” Midgley writes. “At worst, it has been a way of ‘rationalizing’ services by withdrawing funding for any forms of treatment that cannot be proven to work within the very restrictive definitions of ‘evidence’ used by many of the advocates of EBP.”
A convincing balance has still to be struck between two professional caricatures, he argues. One is of the enlightened practitioner, typified as someone who is always “integrating individual clinical expertise with the best available external clinical evidence from systematic research” while taking into account client values, preferences and expectations. The other is of a born-again equivalent, whose dogmatism leads to assertions that “children with X should be offered Y because the evidence-based guidelines say so”.
And the present reality is that neither has a very strong foothold in the UK system of child mental health.
Midgley claims: “Most clinicians do not change the way they make clinical decisions based on ‘best available external clinical evidence from systematic research’”.
It is in attempting to unpick this failure of implementation that he identifies his three practitioner classes.
For the “improvers” the main issue is better implementation of research findings in the practice setting. In pursuit of this improvement they focus on the process of “diffusion,” “dissemination,” “knowledge transfer,” “translation,” “transportability” and “spread” – identifying the obstacles to these processes and working hard to remedy them. For this group, “if this linear process isn’t taking place (or not enough), then that is probably only because greater efforts need to be made to ‘disseminate’ findings.”
Too much store set by “scientific” evidence
Adapters he describes as the moderators – not convinced by the argument that there are merely “obstacles” and “drivers” to implementation, and ready to list more fundamental reasons why EBP does not more readily translate. “For them the definition of ‘evidence’ is far too restrictive. Too much store is set by ‘context free,’ ‘scientific’ evidence (exemplified by the randomized controlled trial), in which the internal validity of the research design itself is given priority over the external validity of the findings.”
Their argument is that smaller-scale, qualitative research needs to be given greater prominence. Improving methods of meta-analysis for qualitative research findings represents an important step in this direction, he says, and such developments are having some beneficial impact.
Thus, adapters largely accept that a predominantly linear model of translating information from one domain (research) into another (practice) is tenable, but they argue that definitions of evidence need to be widened in order to make research more clinically meaningful. [See, for example, Looking for an escape from the impossible trial]
It is an argument rejecters do not accept. Instead they have been drawn to the idea of “practice-based evidence”, a mischievous turnaround inspired by the tradition of “action research” where the emphasis is on building knowledge from the bottom upwards and resisting any attempt to base clinical practice on the evidence of mainstream research.
Midgley believes none of these approaches captures the whole truth. “Not everyone is convinced that ‘practice based evidence’ is the way to solve the problem of how to improve clinical practice. The knowledge created by such activity often has only local value and may not challenge clinical practices in the way that systematic research based on more ‘scientific’ criteria may do.”
The question of how best to improve clinical practice in child mental health, and how to integrate knowledge from the clinical and the research fields, is still an open one, he says.