By Daryl F. Mellard
Center for Research on Learning, University of Kansas
In the previous posting, I emphasized that progress monitoring is critical to a successful implementation of RTI. If school staffs are not willing to specify, collect, analyze, and use their formative assessment data to inform their decisions about students’ learning and performance and about their choices of curriculum and instructional practices, they may as well stay with horoscopes to guide their decisions.
The research literature is pretty narrow and not very deep regarding progress monitoring, but some guidance is offered. In this posting, I’ll synthesize three studies that two colleagues, Allison Layland and Barbara Parsons, and I recently described in our literature review on progress monitoring. These three studies look at the application of curriculum-based measurement (CBM; Deno, 1985). CBM is one approach to progress monitoring with a long track record of successful application in elementary settings. CBM systematically assesses the different skills covered by the curriculum or intervention, as frequently as once a week.
In the first study, Espin and Halverson (1999) tested 147 10th-grade students to examine CBM in written expression at the secondary level, as compared with its elementary-level use. They compared various measures of students’ writing (including number of words written, number of words spelled correctly, characters per word, and number of sentences written). The validity of these measures was assessed by comparing them with indicators of students’ general writing proficiency: California state achievement test scores, first- and second-semester English grades, and students’ group placement (SLD, basic English, regular English, or advanced English).
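To make these surface-level writing measures concrete, here is a minimal sketch of how they might be computed from a writing sample. This is an illustration, not the scoring procedure from the study; in particular, the small `KNOWN_WORDS` set stands in for a real spelling dictionary, and the word and sentence tokenization rules are simplifying assumptions.

```python
import re

# Illustrative word set standing in for a real spelling dictionary (assumption).
KNOWN_WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "a", "lazy",
               "dog", "it", "runs", "fast", "and", "never", "stops"}

def cbm_writing_scores(sample: str) -> dict:
    """Score a writing sample on four surface-level CBM writing measures."""
    words = re.findall(r"[A-Za-z']+", sample)          # crude word tokenizer
    sentences = re.findall(r"[.!?]+", sample)          # terminal punctuation
    spelled_correctly = sum(1 for w in words if w.lower() in KNOWN_WORDS)
    chars_per_word = (sum(len(w) for w in words) / len(words)) if words else 0.0
    return {
        "total_words": len(words),
        "words_spelled_correctly": spelled_correctly,
        "chars_per_word": round(chars_per_word, 2),
        "sentences": len(sentences),
    }

sample = "The quick brown fox jumps over a lazy dog. It runz fast and never stops!"
print(cbm_writing_scores(sample))
```

Scores like these could be collected weekly and charted against a goal line, which is the basic CBM progress-monitoring routine.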
The results of this study show that secondary-level CBM procedures for writing need to be more complex than those used at the elementary level. This finding might be expected, in that the task requirements and level of expectation are much higher at the secondary level. On the downside, these more complicated measures will demand a larger time commitment and more training from secondary-level teachers than elementary measures do.
A second study of CBM examined the effects of peer-assisted learning strategies (PALS) and CBM on math performance in a high school (grades 9-12) setting (Calhoon & Fuchs, 2003). (For a definition and description of the PALS program, refer to McMaster, Fuchs & Fuchs, 2006; Phillips, Fuchs & Fuchs, 1994; and Phillips, Hamlett, Fuchs & Fuchs, 1993.) In this study, PALS was combined with CBM efforts. PALS was implemented twice weekly and CBM was conducted weekly for 15 weeks.
Findings from this study indicate that with the combination of PALS and CBM, students’ computational math skills improved significantly, but their scores on math concepts and applications were not statistically different from pretest measures. This finding contrasts with studies at the elementary level, which showed improvement in all measured higher-order math skills. One value of this study was its demonstration that CBM can be applied in the secondary setting.
Most recently, Twyman and Tindal (2007) and Ketterlin-Geller, McCoy, Twyman, and Tindal (2006) investigated the reliability of a concept maze task to assist middle-school teachers in making accurate decisions regarding students’ content learning. A concept maze is a task that requires students to select the best answer to complete a sentence from a list of possible choices, thus measuring content comprehension rather than overall general reading comprehension. Reading skill may be necessary to succeed on a traditional maze, but reading skill alone is not sufficient to demonstrate the conceptual knowledge required on the concept maze.
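The task structure described above can be sketched as follows. The items, choices, and proportion-correct scoring here are hypothetical illustrations of the format, not materials or procedures from the cited studies.

```python
# Minimal sketch of a concept maze: each item is a sentence with a blank
# and a set of content-area choices; only one choice reflects the target
# concept. All item content below is invented for illustration.
from dataclasses import dataclass

@dataclass
class MazeItem:
    stem: str        # sentence containing a blank
    choices: tuple   # possible completions
    answer: str      # the choice reflecting the target concept

def score_concept_maze(items, responses):
    """Return the proportion of items where the student chose the target concept."""
    correct = sum(1 for item, resp in zip(items, responses) if resp == item.answer)
    return correct / len(items)

items = [
    MazeItem("An organism that makes its own food is a ___.",
             ("producer", "consumer", "decomposer"), "producer"),
    MazeItem("Energy enters most food webs through ___.",
             ("photosynthesis", "respiration", "digestion"), "photosynthesis"),
]
print(score_concept_maze(items, ["producer", "respiration"]))  # 0.5
```

Because the distractors are drawn from the same content area, a student cannot succeed on general reading skill alone, which is the point of the concept maze design.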
Results support the use of a concept maze that focuses on attributes as a measure of change in content areas. The authors explain that a concept maze could be used in combination with other measures, including a vocabulary CBM, in a content area to determine to what extent students are on par with or discrepant from their peers at a single point in time, as well as how they are progressing individually and relative to classmates.
Our formative assessment methodologies will have to improve to support RTI implementation in the secondary setting. It seems to me that we don’t want to wait longer than a week to learn whether our interventions are working. On the other hand, high school-level tasks require an integration and application of skills and declarative and procedural knowledge. Progress monitoring is a good area in which to develop a better framework for content-focused formative assessments.
Great Site!
I cannot agree more that clear student performance data is required to drive student achievement. I have seen many Professional Learning Teams use Data-Driven Curriculum systems (http://www.empoweredhighschools.com/blog/?p=42), which are similar to CBM. When teams analyze student performance on standards, they almost universally see no relation to the grades they gave, and they are shocked by how poorly their students did. However, they are usually pleased that they now have a better understanding of what is happening with their students and, as professionals, can begin to truly help them achieve.
Until teams can produce programs that ensure that approximately 80% of their students are achieving the required standards or expectations, it is difficult for a high school to know which students truly need Tier Two support and which are simply struggling in poor programs. Bad programs will flood and overwhelm the school’s Tier Two interventions.
Posted by: Howard McMackin | January 24, 2009 at 09:05 PM