Apparently U.S. Education Secretary Arne Duncan is publicly acknowledging the severe limitations of conventional standardized testing. In a recent speech, Duncan said, “Students, parents, and educators know there is more to a sound education than picking the right selection for a multiple choice question.” I'm glad to hear him talking about "Assessment 2.0" or "next-generation assessments." (Click here for the New York Times article about the speech.)
Jonathan Martin, Principal of St. Gregory College Prep in Tucson, Arizona, wrote on the blog Connected Principals, "I want us to be held accountable for educational excellence, and I believe that by using external measurements, we are able to demonstrate that accountability. I think too that we can use this data for marketing our school as we seek to grow it. But most of all, I want to know how well our school is doing compared to others so I can receive the hard truth about where we are not doing well enough, and I can know where to focus for improvement."
Martin continues, "I do think we should evaluate ourselves by our school’s own standards, but not only by them; it is too easy for us to be seduced by our own biases. We know our schools, and we love our schools, and sometimes it is hard to see our blind spots or fully appreciate where we may be under-achieving. But that I seek and appreciate external measurements doesn’t mean I love or like scantron multiple choice bubble tests of basic skills that are administered once a year to “grade” a teacher or school. I don’t."
Mr. Martin says that what he likes about “next-gen” assessments is, first, "that we can now evaluate higher order thinking skill development in tests that are not multiple choice, but authentic assessments where students write essays reviewing and responding critically to documents and offering thoughtful solutions to complex problems."
Second, Mr. Martin appreciates "new assessments which are computer adaptive, able to shape themselves to individual student proficiency levels, give immediate feedback to students, teachers and parents, and provide the information we need to better personalize instruction."
These “2.0” approaches are what Secretary Duncan called for in his speech. As for the first, he says:
New assessments will better measure the higher-order thinking skills so vital to success in the global economy of the 21st century… To be on track today for college and careers, students need to show that they can analyze and solve complex problems, communicate clearly, synthesize information, apply knowledge, and generalize learning to other settings.
The PARCC consortium will test students’ ability to read complex text, complete research projects, excel at classroom speaking and listening assignments, and work with digital media. Problems can be situated in real-world environments, where students perform tasks, or include multi-stage scenarios and extended essays.
As for the second approach, Mr. Duncan said:
Most of the assessment done in schools today is after the fact and designed to indicate only whether students have learned. Not enough is being done to assess students’ thinking as they learn to boost and enrich learning, and track student growth. [New] assessments will make widespread use of smart technology. They will provide students immediate feedback, use computer adaptive testing, and incorporate accommodations for a range of students.
The SMARTER consortium will test students by using computer adaptive technology that will ask students questions pitched to their skill level, based on their previous answers. And a series of interim evaluations during the school year will inform students, parents, and teachers about whether students are on track.
Better assessments, given earlier in the school year, can better measure what matters—growth in student learning. And teachers will be empowered to differentiate instruction in the classroom, propelling the continuous cycle of improvement in student learning that teachers treasure.
I agree with Principal Martin that no single assessment will ever be perfect. We need multiple approaches and multiple ways to assess how our students are learning and how well our teachers are performing. This fact, which seems so obvious, is what makes me worry when I hear about school rankings based on state tests, or teacher hiring and pay decisions based on value-added assessments (which use gains in student test scores to estimate how much "value" a teacher has added).
I agree with Mr. Martin:
"We must not ever substitute data and “evidence” for an educator’s judgment. Data inform judgment, but data must never replace discretion and wise judgment."
On the other hand, "let’s not refuse to improve data collection because of the inappropriate abuse of poor data; let’s seek to improve it and use it appropriately."
As our students in Ridgewood continue to take the NJASK year after year, it's encouraging to think that the "powers that be" may be (quietly) acknowledging that we need a better way. Two that I am interested in checking out are the College & Work Readiness Assessment (CWRA) and the NWEA’s Measures of Academic Progress (MAP). Both of these offer hope that maybe we can finally move on from outdated techniques and use creative new tools to truly measure and improve learning.
Welcome to Laurie Goodman's blog. I use this space to share news and opinions about education and schools in Ridgewood, the state of New Jersey, and the nation, in addition to other issues I'm personally interested in. I invite you to share your thoughts, feelings, questions, or opinions, too, by posting comments on any blog entry. Please observe basic courtesy: keep your comments focused on issues; no personal attacks or bullying, please. Contact me directly at: firstname.lastname@example.org