Many curricular frameworks for teaching mathematics tend to be only a list of mathematics topics to be learned, with no clear elaboration of key ideas or organizing principles. Because of this, students may not be taught to integrate mathematical ideas, which causes gaps in their knowledge and limits their understanding. The Quantile® Framework for Mathematics, used in conjunction with the Common Core State Standards for Mathematics (CCSSM), helps teachers identify key connections and provide ways to ensure that students gain a comprehensive understanding of mathematics.
Mathematics is hierarchical and lends itself to learning progressions: development of new mathematical concepts depends on a student’s understanding of prerequisite concepts. Learning progressions are curricular frameworks that sequence content and guide teachers on the proportional use of instructional time. The CCSSM lend themselves to the development of learning progressions because they identify critical areas for instruction and align content across K-12 so that new material clearly builds on concepts learned previously.
The Quantile Framework for Mathematics and its taxonomy provide a unique way to support the implementation of the CCSSM and address individual student needs by reporting both student ability and the difficulty of concepts on the same scale: the Quantile scale. The taxonomy of the Quantile Framework comprises approximately five hundred skills and concepts called QTaxons. Each QTaxon is linked to related QTaxons, and each grouping forms a knowledge cluster. Knowledge clusters form a tightly woven web that encompasses the mathematics learned from kindergarten through high school. By combining information about student mathematical ability, the difficulty of mathematical concepts, and the relationships among those concepts, teachers can effectively target instruction for their students.
For a more detailed description, be sure to check out our latest white paper: Weaving Mathematical Connections from Counting to Calculus: Knowledge Clusters and The Quantile® Framework for Mathematics
The National Center on Response to Intervention (NCRTI) gave the Scholastic Math Inventory (SMI) its highest marks for validity and reliability as a progress-monitoring tool. Progress monitoring allows educators to track student achievement over time, something many assessment tools make difficult: they may diagnose specific topics in which students struggle, but they often do not provide sufficient feedback to help educators monitor student growth over time or inform their instruction.
Response to Intervention is a methodology for identifying and providing timely intervention for struggling students. With funding from the US Department of Education, the American Institutes for Research and researchers from Vanderbilt University and the University of Kansas formed NCRTI to assist states and districts in implementing proven Response to Intervention strategies. NCRTI rated SMI as offering “convincing evidence” for the following areas:
• Reliable performance level score
• Valid performance level score
• Availability of alternate forms
• Sensitivity to student improvement
• Identification of end-of-year benchmarks
• Specification of rates of improvement
SMI provides computer-adaptive benchmark assessments for students from grade two through a first course in Algebra. Each time a student completes an assessment, SMI reports a metric called a Quantile® measure. Because all of the assessments, from grade two through Algebra I, report on the same Quantile scale, teachers can use this measure to monitor student growth not only within a school year but also from year to year.
The student measure also helps teachers analyze students’ readiness for instruction on particular mathematics skills and concepts. The Quantile® Framework for Mathematics provides Quantile measures for the individual skills and concepts taught in grades two through Algebra I (and, in fact, for kindergarten through high school). A teacher can compare the measure of the concept being taught with the student measure to gauge whether the student is ready for instruction on that topic. If the student measure does not match the measure of the skill or concept, the teacher can identify related topics with measures that do match. In this way, the measure provided by the SMI assessment can be used to target instruction and provide students with related material for which they are prepared.
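As a rough illustration of this comparison, here is a minimal sketch in Python. The 50Q tolerance band and the function name are illustrative assumptions for the sketch, not part of the Quantile Framework:

```python
# Hedged sketch of comparing a student's Quantile measure with a skill's
# Quantile measure. The 50Q band below is an illustrative assumption,
# not an official readiness threshold.
def readiness(student_q: int, skill_q: int, band: int = 50) -> str:
    """Suggest whether a student is ready for instruction on a skill."""
    gap = student_q - skill_q
    if gap >= -band:
        return "ready for instruction"
    return "consider prerequisite skills first"

# A student at 750Q facing a 790Q skill: the 40Q gap falls inside the band.
print(readiness(750, 790))  # → ready for instruction
print(readiness(600, 790))  # → consider prerequisite skills first
```

A teacher would, of course, weigh this alongside classroom evidence; the point is only that putting student and skill on one scale makes the comparison a simple subtraction.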
The Quantile Framework can work with most mathematics assessments to provide a student Quantile measure. Please visit the Quantile website to see the instruments that currently report Quantile measures.
Here’s an interesting new study by Freakonomics author Steven Levitt. Wondering how incentives affect student performance, Levitt and his fellow researchers studied student performance under a wide variety of incentive structures and found a few things worth considering. First, they found that not only does money work, but the amount of money makes a difference: students performed better when offered a large sum than when offered a smaller one.
Second, they found that students responded differently to losses than to gains. This appears to be in keeping with larger studies that have found that adults, too, are more averse to losses than to gains. For example, students performed better when they were given money before the test and told they would lose it if they fared poorly than when they were offered money for performing well on the test.
Third, the researchers found that less tangible incentives, rewards like trophies and badges, worked well for students. And fourth, they learned that students were much more likely to perform well when the payoff was immediate, e.g. immediately following the test rather than weeks later.
Nothing all that surprising. We’ve long known that adults respond in similar fashion. Business incentives work best when they are both concrete and provided in a relatively short amount of time, e.g. not at the end of the year. We also know that adults are much more aggressive about keeping what they have and a bit more relaxed about losing out on future gains. Still, this latest study provides interesting food for thought. Most districts focus on the gains made by the end of the year as measured through high stakes assessments. But for many students those assessments mean little. Levitt’s study suggests some subtle ways we might be able to incentivize students to devote more energy and effort to these critical assessments.
The Automated Student Assessment Prize (ASAP) essay-grading competition was created to assess how well different vendors of automated essay scoring (AES) engines, along with public competitors from around the world, could score student-generated essays relative to their human counterparts. The competition was sponsored by the William and Flora Hewlett Foundation and hosted by the data-prediction platform kaggle.com. As it turns out, the competition confirmed what has been known for years: AES engines not only show good agreement with their human counterparts, they agree with humans better than humans agree with one another. The results of the study have been written up in a recent report and touted by some as the final green light for letting AES engines score student essays, particularly in high-stakes situations such as standardized tests, since they are not only more consistent but also much faster and orders of magnitude less expensive.
Competitors were given training data that included the text of thousands of essays written to eight different prompts, along with the scores two human raters assigned to each essay. Based on this training data, competitors had to use their AES engines to predict how human raters would score new essays without knowing the human-assigned scores. This is the underlying problem with the competition: AES engines use a number of features teased out of the text to predict how two human raters would score an essay; they are not actually measuring (or at least estimating) the quality of the essay based on the harder-to-define constructs of writing, the way the humans who originally scored them did. Of course, humans often lack consistency in their scoring, but their scores are still based on things that computers do not yet understand.
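For context, rater agreement on ordinal essay scores like these is typically measured with quadratic weighted kappa, the metric the ASAP competition used: exact agreement scores 1, and disagreements are penalized by their squared distance. A minimal sketch (the function name and the example scores are illustrative):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Agreement between two raters on an ordinal scale; 1.0 = perfect."""
    n = max_score - min_score + 1
    # Observed confusion matrix between the two raters' scores.
    observed = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        observed[a - min_score, b - min_score] += 1
    # Quadratic disagreement weights: 0 on the diagonal, growing with distance.
    weights = np.array([[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)]
                        for i in range(n)])
    # Expected confusion matrix if the two raters scored independently.
    expected = np.outer(observed.sum(axis=1),
                        observed.sum(axis=0)) / observed.sum()
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Two raters who agree exactly on three of five essays scored 1-3.
print(quadratic_weighted_kappa([1, 1, 2, 2, 3], [1, 2, 2, 3, 3], 1, 3))
# → 0.6875
```

An AES engine "wins" by maximizing this statistic against the human scores, which is exactly why it rewards predicting raters rather than measuring writing.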
Additionally, human raters score the quality of an essay based on its causal attributes, whereas an AES algorithm will also likely exploit noncausal attributes. As it turns out, the number of characters in an essay is highly correlated with its quality: better writers can write more in a fixed period of time, and writing more gives you more space to communicate your ideas, which is the goal of writing. But while the number of characters generated is a good predictor of essay quality, it is not a causal attribute of it. A sentence repeated 100 times is clearly no better a piece of writing than the same sentence written once.
Guidance counselors have been working to create programs that prepare high-school seniors for life in the post-secondary world. Many states have already created programs that move beyond academic assessments to focus on personal fulfillment and life skills. Two leading voices behind this push are Janice Dreis and Larry Rehage. As guidance counselors at New Trier Township High School in Winnetka, Illinois, Dreis and Rehage noticed that the majority of high school classes and programs did not prepare students for the real-life challenges of the post-secondary world, challenges that bring high school seniors a great deal of anxiety. “They’ll soon be transitioning into the real world, and there can be a huge amount of anxiety, but schools rarely address this. We wanted to create opportunities to engage seniors in this final year that should be a capstone year.”
Dreis and Rehage champion programs that can help ease the anxiety of senior year and create a supportive school environment. The programs range from volunteer projects to senior seminars and leadership programs. Rehage and Dreis travel around the U.S. encouraging the creation of similar programs in the nation’s high schools. They do not force any specific program on schools; instead, they encourage teachers and administrators to talk to students to see which programs they would find beneficial and helpful.
Three programs of particular interest are the Year Long Service Learning Project, the Senior Instructional Leadership Core (SILC) program, and campus-based seminars. The Year Long Service Learning Project is a community-based project that helps develop problem-solving and communication skills while providing a way to give back to the school community. The Senior Instructional Leadership Core program places students in classrooms to assist teachers with instruction. The campus-based seminars bring in speakers who give helpful presentations on preparing for college, stress, and time management.
Many students have reported that these programs have been helpful. Senior Brittany King of Chartiers Valley High School (Chartiers Valley School District) in Bridgeville, Pa., volunteered for her school’s leadership program as an assistant in a classroom for special-needs students. Participation in the program helped clinch her career choice: she is now majoring in Special Education at West Virginia University. Reflecting on the program, she stated, “I feel like I’m doing something worthwhile, and it helped me figure out what I want to do.”
Kudos to Dreis and Rehage for their commitment to raising awareness of these programs. In addition to increased academic demands, the post-secondary world is full of non-academic challenges, and it’s good to see counselors working to prepare students for both.