Velocity Norms for Academic Growth

Shuttleworth (1934) suggested that growth standards for stature should be expressed in terms of progress rather than status. Tanner (1952) provided a theoretical framework for the development of clinical standards for growth and advocated velocity standards. Bayley (1956) made the first effort to produce standards for height that took account of tempo. Her paper foreshadowed the landmark paper by Tanner, Whitehouse and Takaishi (1966) on longitudinal standards for height velocity and weight velocity. Incremental growth charts for height and weight have since been produced for use in the United States (Baumgartner, Roche & Himes, 1986; Roche & Himes, 1980).

Have you ever heard of growth velocity norms for academic growth—that is, norms for the growth rate of reading ability or mathematical understanding? There are three reasons you haven’t, and they persisted for most of the 20th century: (a) the absence of sufficient longitudinal data on which to base investigations of academic growth; (b) the limitations of the analytical methods available to educational researchers who wished to study growth; and (c) the challenges of educational measurement (e.g., dimensionality, and the lack of scale comparability and common units across instruments). Yet, I submit, at the dawn of the 21st century these obstacles have been overcome.

The two most recent reauthorizations of the Elementary and Secondary Education Act (ESEA) required states to assess reading and mathematics in multiple grades, and states have now been accumulating those results for more than a decade. Large-scale longitudinal analyses of reading and mathematics growth are therefore now feasible.

Rogosa, Brandt and Zimowski (1982) advocated the use of longitudinal data collection designs gathering more than two waves of serial measures on the same individuals, accompanied by an analytical methodology focused on the individual growth curve. In their landmark book, Raudenbush and Bryk (2002) included a chapter on formulating models for individual change. Singer and Willett (2003) gave book-length treatment to the modeling of individual change. Perhaps the most enabling resource for the educational research community was Singer’s (1998) article demonstrating how to implement multilevel (including growth) models using one of the most widely available general-purpose statistical packages.
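In the notation of Raudenbush and Bryk (2002), the simplest such model is a two-level linear individual growth model, sketched here in generic form:

Level 1 (within student):  Y_{ti} = \pi_{0i} + \pi_{1i} a_{ti} + e_{ti}
Level 2 (between students):  \pi_{0i} = \beta_{00} + r_{0i},  \pi_{1i} = \beta_{10} + r_{1i}

where Y_{ti} is the score of student i on occasion t, a_{ti} is the student’s age or grade at that occasion, \pi_{1i} is that student’s individual growth rate, and \beta_{10} is the average growth rate in the population.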

Finally, near the end of the 20th century, a new scale was developed for measuring reading ability. Its significant advantage over previous scales was a new kind of general objectivity, attained by calibrating the scale to an external text-complexity continuum and double-anchoring the scale at two substantively important points, much as temperature scales are anchored at the freezing and boiling points of water (Williamson, 2015).
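The temperature analogy can be made precise. Fixing two anchor points (\theta_1, L_1) and (\theta_2, L_2), that is, two substantively important locations on the underlying continuum together with the scale values assigned to them, determines a unique linear transformation (a generic illustration of double-anchoring, not the specific Lexile calibration):

L(\theta) = L_1 + \frac{L_2 - L_1}{\theta_2 - \theta_1} (\theta - \theta_1)

just as assigning 0° to the freezing point of water and 100° to the boiling point fixes the Celsius scale.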

Combining longitudinal data, multilevel modeling and state-of-the-art measurement scales from The Lexile® Framework for Reading and The Quantile® Framework for Mathematics, Williamson (2016) premiered incremental velocity norms for average reading growth and average mathematics growth. Based on an individual growth model, the incremental velocities reflect the long-term developmental growth of students in a well-established reference population (n > 100,000). Now, it is possible to refer the reading or mathematics growth rates of students observed during schooling to a clearly defined population of growth curves derived from serial measures of students whose reading ability and mathematical understanding were systematically assessed over time.
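As an illustration of how such norms can be derived, here is a minimal sketch in Python; it is not Williamson’s published procedure, and the file and column names are hypothetical. It fits a random-slope growth model to serial measures, recovers each student’s growth rate, and summarizes the distribution of those rates as percentiles.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Long-format longitudinal data: one row per (student, occasion),
# with columns "student", "grade", and "score" (hypothetical file).
data = pd.read_csv("reading_scores.csv")

# Random-intercept, random-slope growth model:
#   score_ti = pi_0i + pi_1i * grade_ti + e_ti,
# with intercepts and slopes varying across students.
model = smf.mixedlm("score ~ grade", data,
                    groups=data["student"], re_formula="~grade")
result = model.fit()

# Each student's velocity = average slope + student-specific deviation.
avg_slope = result.fe_params["grade"]
velocities = np.array(
    [avg_slope + re["grade"] for re in result.random_effects.values()])

# Incremental velocity norms: percentiles of the growth-rate distribution.
for p in (10, 25, 50, 75, 90):
    print(f"P{p}: {np.percentile(velocities, p):.1f} scale units per grade")

A student’s observed growth rate can then be referred to these percentiles, much as a child’s height velocity is referred to a velocity chart.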

References
Baumgartner, R. N., Roche, A. F., & Himes, J. H. (1986). Incremental growth tables: Supplementary to previously published charts. The American Journal of Clinical Nutrition, 43, 711-722.
Bayley, N. (1956). Growth curves of height and weight by age for boys and girls, scaled according to physical maturity. Journal of Pediatrics, 48, 187-194.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage Publications.
Roche, A. F., & Himes, J. H. (1980). Incremental growth charts. The American Journal of Clinical Nutrition, 33, 2041-2052.
Rogosa, D. R., Brandt, D., & Zimowski, M. (1982). A growth curve approach to the measurement of change. Psychological Bulletin, 92, 726-748.
Shuttleworth, F. K. (1934). Standards of development in terms of increments. Child Development, 5, 89-91.
Singer, J. D. (1998). Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. Journal of Educational and Behavioral Statistics, 23(4), 323-355.
Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press.
Tanner, J. M. (1952). The assessment of growth and development in children. Archives of Disease in Childhood, 27, 10-33.
Tanner, J. M., Whitehouse, R. H., & Takaishi, M. (1966). Standards from birth to maturity for height, weight, height velocity, and weight velocity: British children, 1965. Archives of Disease in Childhood, 41, 454-471, 613-635.

Doing More With Less

The Council of the Great City Schools recently released a report analyzing the amount of testing administered across city schools. According to the report, students spend roughly 20-25 hours per year on a variety of mandated assessments, some federally mandated and some mandated by a particular state or district. Over 13 years of schooling, that adds up to roughly 260 to 325 hours spent testing.

If that strikes you as excessive, you’re not alone. On Saturday, the Obama administration argued that standardized testing should take up no more than 2% of class time:

“Learning is about so much more than just filling in the right bubble,” President Obama said in a video posted on Facebook. “So we’re going to work with states, school districts, teachers and parents to make sure that we’re not obsessing about testing.”

Obama said that “in moderation, smart, strategic” tests can help assess the progress of children in schools and help them learn. But he said parents are concerned that too much time is being spent on testing, and that teachers are under too much pressure to prepare students for exams.

The President’s call to reduce the amount of standardized testing reflects the concern of parents and educators around the country that students are spending far too much time on high-stakes tests. That said, it would be far better to do more with the tests we already have than to test more. Assessments linked to developmental scales, like the Lexile Framework for Reading, provide educators with a range of possibilities. Having access to a student’s Lexile measure means being able not only to monitor the student’s reading growth, but also to differentiate instruction and target that student appropriately. As Obama argued, there is a place for smart, strategic tests: assessments that equip teachers with the information they need to keep students learning.

Welcome Back: Starting with Success

A brand new school year is here, offering teachers, students, and parents the opportunity for a fresh and positive outlook on the coming months in the classroom. In “Starting the School Year Right,” in the August edition of The School Administrator, Thomas R. Guskey emphasizes that the first two weeks of school are critical for students and parents to feel good about what students know and what they can achieve in the coming months.

Many teachers formally or informally assess the ability level of their students at the beginning of the school year. But Guskey cautions that these first assessments need to “help students experience successful learning” during the first two weeks of the year. It may be more important for the educator to firmly establish what students know than what they don’t know.

Guskey’s right. Educators can help put students at ease early in the year by ensuring that the material they receive is at or near their ability level. With regard to reading, many students across the United States are assessed in the spring, and many states report Lexile measures as an indication of a student’s reading level. A student’s Lexile measure allows educators to match that student to targeted material, a useful way to develop confidence and promote motivation.

Because reading levels in a single classroom vary considerably, teachers would be well advised to differentiate material so that students are able to understand the text and experience success. The Lexile Framework for Reading offers tools to measure text, as well as the ‘Find a Book’ tool, which provides the Lexile measures of trade books and textbooks across a wide range of categories and genres. Matching a book’s text measure to a student’s Lexile measure can be a strong asset in helping struggling readers succeed.

Similar to the Lexile scale, the Quantile Framework for Mathematics places students’ mathematical ability and the difficulty of mathematical skills and concepts on the same scale. The Quantile measures of specific mathematics skills and concepts can be found on the Quantile website, where topics are aligned to state standards as well as to the Common Core State Standards.

When student Quantile measures are available from state assessments or other products aligned to the Quantile Framework, targeting student needs in the mathematics classroom becomes much more manageable, allowing content to be tailored to each student’s ability level.

Dr. Guskey offers numerous suggestions for facilitating positive experiences for students. Critical stakeholders include not only students and teachers, but also parents and administrators; this community of supporters has a strong influence on the long-term success of our children. We often speak of differentiating instruction to meet the needs of our students, but differentiation can mean much more when students recognize their own abilities and use that information to grow into motivated, self-assured learners throughout their academic careers.

Policy Brief: Bending the Reading Growth Trajectory

Written by our own Dr. Malbert Smith, our second policy brief was released Thursday.

As I’ve mentioned before, MetaMetrics is focused on improving education for learners of all ages, and we will be releasing policy briefs that cover research on a variety of educational issues, such as closing the achievement gap, next-generation assessments, and college- and career-readiness. The policy briefs will explore potential ways to address these critical issues by focusing on education as the foundation of student success and the stepping stone to social and economic growth in our country.

The second brief is titled “Bending the Reading Growth Trajectory: Instructional Strategies to Promote Reading Skills and Close the Readiness Gap.” An executive summary is below and the entire brief is available in both HTML and PDF formats:

The January 26 edition of Education Week summarizes the postsecondary readiness gap in unequivocal terms: “High school completion does not equal college readiness.” This reality is the foundation of the Common Core State Standards for English Language Arts, which are designed to prepare students “to read and comprehend independently and proficiently the kinds of complex texts commonly found in college and careers.” But what exactly does this mean for educators, and how can they help prepare students for the reading demands of their academic and professional pursuits? Research has validated some instructional strategies—such as exposing middle and high school students to more complex text, using benchmark assessments to supplement year-end tests, and mitigating summer loss—all of which can address the velocity and deceleration of reading growth in order to enhance comprehension skills and support students on higher learning trajectories. As idealized growth trajectories are adopted in response to Common Core—and states continue to collect more and better longitudinal data—we will be even better positioned to think strategically about how we can modify instruction to support students as they progress toward college- and career-readiness.
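To make the summary’s “velocity” and “deceleration” concrete, one common way to capture both (a generic sketch, not necessarily the model used in the brief) is a quadratic individual growth curve:

Y_i(t) = \beta_{0i} + \beta_{1i} t + \beta_{2i} t^2, \qquad \text{velocity: } Y_i'(t) = \beta_{1i} + 2 \beta_{2i} t

Here a negative \beta_{2i} produces deceleration, growth that slows as students get older; “bending the trajectory” amounts to using instruction to raise the velocity curve.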

Want to subscribe to our policy briefs? Visit www.Lexile.com and click on Register in the top right corner. Be sure to check the box next to News Releases!

Measuring Teacher Effectiveness: Take A Broad View

Recently, teacher effectiveness and evaluation have been gaining legislative and media attention. The current Race to the Top application (U.S. Department of Education, 2009) asks states to “design and implement rigorous, transparent, and fair evaluation systems for teachers and principals that (a) differentiate effectiveness using multiple rating categories that take into account data on student growth…as a significant factor, and (b) are designed and developed with teacher and principal involvement.” Many districts and states now face the challenge of how to thoroughly evaluate whether a teacher is effective in the classroom. Many states are modifying existing laws against using student data to evaluate teachers, and policymakers are suggesting ways to quantify teacher evaluation by basing it on student test scores.

The Tennessee report, for example, indicates that 50% of a Tennessee teacher’s evaluation will be based on student achievement data: 35% on the Tennessee Value-Added Assessment System (TVAAS) and 15% on other measures of student achievement. And The New York Times reports that New York schools have implemented a new teacher evaluation system that bases 40% of a teacher’s rating on student test scores, including scores from tests developed within the school district and state standardized tests.

Projects like the Measures of Effective Teaching, supported by the Bill & Melinda Gates Foundation, supplement the focus on student test scores alone by offering a broader analysis that takes five types of data into account:

  • Student achievement gains on state standardized assessments and supplemental assessments designed to measure higher-order conceptual thinking
  • Classroom observations and teacher reflections
  • Teachers’ pedagogical content knowledge
  • Student perceptions of the classroom instructional environment
  • Teachers’ perceptions of working conditions and instructional support at their schools

That’s good to hear. This project comes at a critical time; as states look for reliable ways to gauge teacher effectiveness, it’s encouraging to see organizations committed to the hard work of determining the key indicators of what makes an effective teacher.

MetaMetrics Partners with Interactive Achievement

We’re happy to announce our new partnership with Interactive Achievement, a software company that ‘provides educators with accurate assessments of student performance on state standards.’ Interactive Achievement is the developer of OnTrac, a web-based system that ‘delivers standard-aligned content, assessments, and instant reports for precise analysis of student achievement.’ OnTrac allows teachers to build their own tests, choosing from a bank of established test items. Once students complete a test, teachers have instant access to online reports.

We have leveled the passages in the OnTrac system using The Lexile Framework for Reading, allowing teachers to select passages across a range of Lexile levels. Here’s more:

…educators can now custom-develop [assessments] using the Lexile measures of their students and the test items. Lexile measures will help educators select test passages that students should be able to read and understand, leading to more valid information on the growth required for students to achieve a state’s proficiency levels.

This is powerful information to have.  Having access to the Lexile level of reading passages helps inform the choices teachers make as they design benchmark assessments.  Here’s the President of Interactive Achievement, Jacob Gibson:

“Since over half of U.S. students already receive Lexile measures from high-stakes tests, assigning Lexile measures to the OnTRAC passages allows educators to use a common metric to build assessments that can provide the benchmark data they need to help students achieve at the highest levels.”

We’re thrilled with this new partnership and glad that teachers will have easy access to a tool that allows them to customize assessments based on their students’ reading levels.

MetaMetrics is an educational measurement organization. Our renowned psychometric team develops scientific measures of student achievement that link assessment with targeted instruction to improve learning.