A paragraph in the article by McAra and McVie, reported in Prevention Action today (Youth Justice: Is doing nothing better than doing something?), reminded me of the lunacy of many government targets.
They cite a 2005 article by David Smith, founder of the Edinburgh Study of Youth Transitions and Crime. Drawing on the work of Friedrich Lösel, Director of the Cambridge Institute of Criminology, Smith notes that across more than 500 evaluations of crime reduction strategies the average effect size was less than 0.1 of a standard deviation. That means the crime rate for the intervention group would end up about 5 percent lower than for the control group.
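To see where that 5 percent comes from: one standard way to translate a small standardized effect size into a difference in rates is Rosenthal and Rubin's binomial effect size display (BESD). Here is a minimal sketch of that conversion, on the assumption that this, or something equivalent, lies behind the figure; the article itself does not spell out the method:

```python
def besd_rates(d: float) -> tuple[float, float]:
    """Convert a standardized mean difference (Cohen's d) into the
    binomial effect size display: the implied rates for the control
    and intervention groups, centred on 50%."""
    r = d / (d**2 + 4) ** 0.5       # convert d to a correlation r
    return 0.5 + r / 2, 0.5 - r / 2  # control rate, intervention rate

control, intervention = besd_rates(0.1)
print(f"control: {control:.1%}, intervention: {intervention:.1%}")
# control: 52.5%, intervention: 47.5% -- roughly a 5-point gap
```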
Two questions spring to mind. Why do governments of all kinds set targets well beyond these bounds? And why do practitioners and service designers expect more from local interventions?
If the returns are so low, it follows that several modes of intervention – public health as well as targeted, prevention and early intervention as well as treatment – will be needed to make a dent in levels of anti-social behavior. Yet very little effort goes into thinking about how to link such programs.
I found the McAra and McVie study compelling. The results are not new, but they carry more weight for being contemporary and based on a strong longitudinal study; and they are interesting in the context of an enlightened youth justice system undermined by inefficient or just plain bad decision-making.
As a scientist I was interested in the potential for an experiment. The decisions of police officers, Juvenile Liaison Officers and Reporters are by any standard pretty random. So why not make the process formally random for a while, and find out if doing nothing really is better than doing something?
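Such a trial would need to be large. Detecting an effect as small as 0.1 of a standard deviation takes a substantial sample, as a back-of-the-envelope power calculation shows. This is a sketch using the standard two-sample normal approximation; the 5 percent significance level and 80 percent power are conventional assumptions of mine, not figures from the article:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm to detect a standardized mean difference d,
    via the usual normal-approximation formula:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / d**2)

print(n_per_group(0.1))  # ~1570 young people per arm to detect d = 0.1
```

In other words, randomizing for a while would mean routing a few thousand cases through the lottery before "doing nothing versus doing something" could be answered with any confidence.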