Teaching by rote finds fresh legs in Baltimore
30 April 2009

Research inside the Baltimore public school system has put fresh wind in the sails of Direct Instruction, a teaching program developed back in the 1960s, by showing that pupils in schools where it was implemented performed significantly better in reading tests.

The research team from the National Institute for Direct Instruction examined results for over 40,000 six- and seven-year-old children from 119 schools.

Collected over a period of six years, the data showed an improvement across the board, but children in schools using Direct Instruction (DI) performed significantly better (effect size 0.63), the researchers report.

The study also found that support from the program originators helped to increase impact. Of the 16 schools that implemented DI, 11 received technical support from the national body for the duration of the study. Pupils at these schools scored even better (effect size 0.82).

Low reading achievement across the city system prompted Baltimore to implement a variety of reading programs.

Some DI schools received support directly from the National Institute; others went ahead without support or used different providers. These variations made further comparisons possible.

The children were tested annually throughout the longitudinal study using the Comprehensive Test of Basic Skills (CTBS) – a widely used measure for children of this age group and one supported by national norm data.

Designed by Professor Siegfried Engelmann at the University of Oregon, Direct Instruction is a teaching model grounded in tightly planned lessons, using small learning increments and carefully defined teaching tasks.

It is based on the theory that clear, evenly paced instruction will eliminate misinterpretation and so quicken the pace of learning. Teachers using DI must stick closely to the script; students are regularly tested and regrouped according to their progress.

So strictly must teachers adhere to the plan that DI has been disparaged as a ‘teacher-proof’ curriculum.

In Baltimore, DI was implemented only in literacy classes, but it can form the backbone of school-wide restructuring. Comprehensive School Reform (CSR) focuses on reorganizing and revitalizing entire schools rather than on the piecemeal implementation of specialized and potentially uncoordinated programs.

The Best Evidence Encyclopedia lists Direct Instruction as a “top-rated” program for comprehensive school reform. Nearly 50 evaluations of the model have been carried out. Although only two used experimental designs, most at least collected data for a control group. Meta-analysis has shown the model to produce an overall effect size of 0.21.

References
Stockard J (2008), “Improving First Grade Reading Achievement in a Large Urban District: The Effects of NIFDI-Supported Implementation of Direct Instruction in the Baltimore City Public School System,” National Institute for Direct Instruction, Oregon

Borman G D, Hewes G M, Overman L T and Brown S (2003), “Comprehensive school reform and achievement: A meta-analysis,” Review of Educational Research, 73(2), p. 125

Explainers

Direct Instruction

Designed by Professor Siegfried Engelmann at the University of Oregon, Direct Instruction is a teaching model grounded in carefully planned lessons, using small learning increments and defined teaching tasks.

It is based on the theory that clear, evenly paced instruction will eliminate misinterpretation and so quicken the pace of learning. Teachers using Direct Instruction must stick to strictly scripted lesson plans; students are regularly tested and regrouped according to their progress.

effect size

An effect size is calculated to indicate the impact of a program in standard units. So a larger effect size means the program had a greater impact on child outcomes than one with a smaller effect size, and the use of standard units means that scores can be compared across a number of different evaluations or programs.

Effect sizes differ from probability values (p-values). A p-value only tells you how likely it is that a result at least as large would arise by chance if there were no real effect; it does not tell you anything about the strength of a relationship or effect. In the case of evaluating a program aimed at reducing depression, for example, effect size calculations can tell you to what degree depression was reduced.

In the case of evaluating programs, it is often suggested that an effect size (or d) of 0.2 is a small effect, 0.5 a moderate effect and 0.8 a large effect. Effect sizes are based on standard units derived from means and standard deviations: an effect size of 0.33 denotes that a treatment led to a one-third of a standard deviation improvement in outcome, and an effect size of 0.5 denotes a one-half of a standard deviation increase. Because effect sizes are expressed in these standardized units, they allow direct comparisons across studies.
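To make the arithmetic concrete, here is a minimal Python sketch of a standardized mean difference (Cohen's d), one common way of calculating the effect size described above; the group names and scores are invented purely for illustration and are not drawn from the Baltimore study.

```python
# A minimal sketch of an effect size calculation (Cohen's d):
# the difference between group means divided by the pooled standard deviation.
# The scores below are invented purely for illustration.

import statistics


def cohens_d(program_scores, control_scores):
    """Standardized mean difference between two groups of scores."""
    mean_diff = statistics.mean(program_scores) - statistics.mean(control_scores)
    n_p, n_c = len(program_scores), len(control_scores)
    var_p = statistics.variance(program_scores)  # sample variance (n - 1 denominator)
    var_c = statistics.variance(control_scores)
    pooled_sd = (((n_p - 1) * var_p + (n_c - 1) * var_c) / (n_p + n_c - 2)) ** 0.5
    return mean_diff / pooled_sd


# Hypothetical reading scores for a program group and a control group
program = [48, 50, 52, 54, 56, 52]
control = [46, 48, 50, 52, 54, 50]
print(round(cohens_d(program, control), 2))  # 0.71 -- moderate to large on the scale above
```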

However, a small effect size (of say d = 0.1) does not necessarily mean an unimportant effect. Indeed, most ‘proven’ prevention and early intervention programs demonstrate only small or moderate effects. As Kathleen McCartney & Robert Rosenthal (2000, p. 175) point out, ‘just as children are best understood in context, so are effect sizes’. Issues of cost-benefit often come into play. For example, if a program is relatively inexpensive to provide (in terms of financial, provider and time investment) and results in small effect sizes, it may still be far more favourable than another program with far greater costs yet only slightly larger effects.

See: McCartney, K. & Rosenthal, R. (2000). "Effect Size, Practical Importance, and Social Policy for Children". Child Development, 71(1), 173-180.

Best Evidence Encyclopedia

The Best Evidence Encyclopedia or BEE website provides information about the strength of the evidence supporting programs available to all school students. The content takes the form of accessible summaries of systematic reviews. To be included, reviews must cover all relevant studies, focus on experimental or strong quasi-experimental designs and summarize the size of effect on child outcomes attributable to the intervention. The website was created by the Johns Hopkins University Center for Data-Driven Reform in Education (CDDRE) under funding from the Institute of Education Sciences in the US Department of Education.

randomized controlled trials

Sometimes referred to as experimental evaluations, randomized controlled trials or RCTs randomly allocate potential beneficiaries of an intervention to a program or treatment group (who receive the intervention) or a control group (who do not). Outcomes for the two groups are then compared.

They are most often used to test medicines or medical procedures, but they are becoming more common in social interventions, particularly in relation to early years programs and education interventions in the US.

RCTs are considered the most reliable way of testing the effect of an intervention on outcomes for the potential beneficiary. Since the subjects of a trial are allocated at random to program and control groups, both are statistically equivalent, and comparisons of outcomes will reflect the effect of the intervention and not the characteristics of the groups.

Most importantly, RCTs eliminate selection effects. For example, if entry to the program tested was not random, the outcome might be the result of one group wanting the intervention more than another.

RCTs are strong at estimating the size of the difference in predefined outcomes between program and control groups. It is possible, therefore, to estimate how much change is the result of the intervention.
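As an illustration of that logic, here is a minimal Python sketch of random allocation followed by a comparison of mean outcomes; the pupil IDs, group sizes and simulated scores are all invented for illustration and do not represent any actual trial.

```python
# A minimal sketch of core RCT logic: randomly allocate subjects to a
# program or control group, then compare mean outcomes between the groups.
# All pupil IDs and scores here are simulated for illustration only.

import random
import statistics

random.seed(0)

pupils = list(range(200))            # potential beneficiaries
random.shuffle(pupils)               # random allocation removes selection effects
program_group = set(pupils[:100])    # receive the intervention
control_group = set(pupils[100:])    # do not

# In a real trial these outcomes would be measured after the intervention;
# here they are simulated so the comparison step can be shown end to end.
outcomes = {p: random.gauss(55 if p in program_group else 50, 10) for p in pupils}

program_mean = statistics.mean(outcomes[p] for p in program_group)
control_mean = statistics.mean(outcomes[p] for p in control_group)
print(f"Estimated effect of the intervention: {program_mean - control_mean:.1f} points")
```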

Other evaluation designs, including quasi-experimental designs that include a control group, can detect associations between an intervention and an outcome, but they cannot rule out the possibility that the association was caused by a third factor linked to both.
