If you don’t know what broke, how can you fix it?
Blueprints conference participants generally want to know what a program needs to do to make the grade. As we reported at the beginning of the week, two more programs have lately joined the Promising Programs list, and there's a natural tendency to focus on the recipe for that kind of success. [See: Promising to help kids turn their backs on the bottle.]
But improving children's health and development is not simply a matter of implementing what works. It's also about learning from faults and failures. And the successes themselves need continuous scrutiny: they may pass the Blueprints test at a certain critical moment, but new trials will follow, and the more widespread and thorough those trials are, the more variable the effects are likely to be.
Two instructive examples of the consequences of discovering what does not work were reported at the Denver conference.
The first concerned Quantum Opportunities, a mentoring program that aims to establish meaningful, long-term relationships between struggling high-school students and a mentor or case manager. It is also designed to foster commitment to the community and involvement with the child's school.
Initial experimental evaluations showed substantial positive effects, and in due course Quantum Opportunities came to be counted among the Blueprints Model Programs.
Armed with the results from the first series of evaluations, researchers undertook a more extensive ten-site randomized controlled trial.
Expectations were high, although effect sizes generally shrink considerably when programs move up a gear from their development phase to full-scale implementation. The change is usually taken to reflect the absence of any marked placebo effect, lower levels of staff motivation, and inevitable deviation from the model.
However, in the case of Quantum Opportunities the initial positive effects all but disappeared. The picture across the ten sites was mixed: some children benefited, but overall no appreciable improvement could be attributed to the program.
Such results are naturally deeply disappointing to a program provider, but they are valuable nonetheless, and they deserve to be viewed with interest by scientists.