Sustainability: how to keep on keeping on
It’s one thing to set up a community-based prevention program with a pump-priming grant and on a wave of enthusiasm for the collaborative effort, but it can be quite another to keep it running efficiently and effectively.
For prevention science, understanding what it takes to keep the motor running, and to ensure that the energy continues to be invested in the right direction, falls under the emerging heading of “sustainability”.
But just as those working in the environmental sciences and global resource management continue to wrestle with what “sustainability” might mean in everyday practice beyond the dream of maintaining a benign process or state indefinitely, so program designers are asking themselves “sustaining what?”.
In prevention work the broad assumption tends to be that those who hold the purse strings and the responsibility for a program’s early success will develop a long-term plan.
Anecdotal evidence suggests, however, that the collaborative effort and the associated programming all too often disintegrate at the end of the initial funding period, or that implementation is carried on by a few members to the detriment of quality.
Among those getting under the skin of sustainability as it relates to program implementation are researchers at the Princeton-based firm Scheirer Consulting.
They suggest that sustainability can be measured under any one of four main headings:
- continuing project activities within the funded organization
- sustaining benefits for the intended stakeholders
- maintaining the capacity of a collaborative structure
- maintaining attention to the issues addressed by the program.
Sustainability, they argue, is a change process – multi-faceted, ongoing and cyclical.
With those definitions in mind they examined the sustainability of 48 projects funded under the New Jersey Health Initiatives program supported by the Robert Wood Johnson Foundation.
All of them had been off Robert Wood Johnson Foundation funding for at least two years. Around three-quarters reported successfully sustaining some of the project activities, which included delivering health and mental health services, building formal collaborative relationships, and developing and implementing a new organizational structure. Indeed, a third of those reported increasing their capacity.
In terms of sustained benefits for intended stakeholders, 60% reported serving more stakeholders since funding ceased.
The majority of projects were still providing clients with the original level of benefit. More than 62% said the original collaborative structure was still in place. Finally, 70% reported unwavering attention to the issues addressed in the initial grant.
The results themselves may not mean much outside the New Jersey context, but the variation in the findings, depending on which aspect of sustainability is under scrutiny, points to the value of robust definitions.
If planning for sustainability is to be effective, it will always be necessary to say what is to be sustained and why; the processes that promote or inhibit one factor may well differ from those affecting another.
Moreover, sustaining only certain aspects of an evidence-based program is unlikely to be sufficient to preserve the integrity of the whole: program fidelity and sustainability are two sides of the same coin.
Prevention scientists and program developers are beginning to acknowledge that they need to prepare implementers to make informed decisions about what to adapt and whether to try something new whenever they encounter obstacles.
In their implementation model, Mark Greenberg and colleagues have shown that “adaptation with fidelity” is possible as long as changes are made in areas not directly responsible for program outcomes and do not detract from the underlying theory of change. [See: You’re going to be unfaithful, so why not make it part of the service?]
No knee-jerk reaction to a decrease in funding is ever likely to work.
See: Mary Ann Scheirer and others, “Defining sustainability outcomes of health programs: Illustrations from an on-line survey”, Evaluation and Program Planning, Vol. 31, November 2008, pp. 335–346.