You’re going to be unfaithful, so why not make it part of the service?
All the valuable efforts of recent years to build new children’s services on an understanding of what works and to evaluate them through rigorous trials have provided some compelling evidence about the virtues of fidelity.
Fidelity belongs to the world of service implementation. To implement a program with fidelity means to replicate it exactly as the developer describes. In the best cases, the details of what needs to be done, with and by whom, when, where and how, will have been written down in a service manual.
Failure to follow these instructions generally means that a program will have less impact on outcomes for children – even none at all. And when it comes to the evaluation, the lack of fidelity and the resulting loss of consistency will weaken any comparison between trials, if it doesn’t altogether obscure the value of the outcome evidence.
Sadly, it’s not difficult to find examples of this problem. Much of the recent fuss about whether or not the UK Sure Start program has been effective boils down to problems of fidelity. An examination of parenting support in the community by the national evaluation team found that staff in many areas said they used evidence-based programs, but closer inspection revealed what seems to have been a contagious impulse to adapt them.
Practitioners removed components, or they added extra modules, or they changed the length or frequency of contact with service users. In many places, despite the multitude of existing parenting courses, they devised their own by borrowing bits from the others.
In such conditions, establishing what works gets enormously complicated. And understanding the changes people make to interventions – a previously neglected part of the process – becomes a pressing concern.
We might speculate that the reasons for infidelity can be crudely characterized as ‘pragmatic’ or ‘only human nature’. Pragmatic explanations include funding cuts, insufficient staffing, poor communication, weak leadership and hasty roll-out. ‘Only human nature’ captures a tendency among practitioners to play around, perhaps because they value their own experience over the research evidence or because they’re not convinced that the evidence is relevant to the client group they work with.
The urge to tinker runs so deep that experts in the field, such as Brian Bumbarger from the Prevention Research Center at Penn State University, recommend that ‘drift’ – his term for the tendency to stray from the manual – should be assumed and anticipated, rather than considered the exception. Fidelity is not a naturally occurring phenomenon, he says; program drift is the default.
He observes that any training and support practitioners receive to help them to implement an evidence-based intervention generally focuses on what they are supposed to do, rather than on why they are supposed to do it. He compares this to giving a man a fish rather than teaching him to fish.
He argues that it is as important to understand the logic behind a program as it is to know how it operates, since this insight will improve the likelihood of informed adaptations.
If drift is the norm rather than the exception, researchers need to understand how fidelity to proven models can be strengthened, and to be sufficiently pragmatic to be able to say which elements of a service are essential and which are auxiliary or peripheral.
That way we might be able to change the question from ‘Why is fidelity so hard?’ to ‘How much infidelity is tolerable?’.