This post was written by NCTE member Chris Hass, a member of the NCTE Standing Committee on Literacy Assessment.
Several years ago, my principal called for a grade-level meeting to discuss the disparity between the ELA grades in our third-grade classrooms. In particular, he wanted to talk about the fact that my ELA scores were so much higher than my colleagues’—an indication that maybe I was going too easy on my kids. To begin, he passed out an agenda that included our average grades for the most recent quarter.
Class 1 82%
Class 2 85%
Class 3 84%
Class 4 82%
My Class 91%
“Why do you suppose we see such a large difference between some of these classroom averages?” he asked.
His tone and demeanor didn’t suggest he was calling me out. I sensed he genuinely wanted to understand. As principals ideally do, he had come to listen. My teammates studied the numbers on the sheet, then exchanged glances with one another. We were an incredibly tight, supportive group who respected one another a great deal, even if our views on teaching, learning, and assessment weren’t often aligned. After a prolonged silence, they looked uneasily over to me and waited.
“Well,” I began, trying to choose my words carefully. “I think we see a difference between scores because what I’m assessing—and how I’m assessing it—is different in some ways from how others are generating their grades. I’m trying to assess all those things that are important in literacy development—not just the ones that have traditionally been assessed. I’ve been told so many times, ‘We assess what we value.’ That’s what I’m striving to do now.”
We Assess What We Value
Most literacy teachers will likely tell you they value the same things—supporting students to develop a love of reading and writing while scaffolding them to incorporate new skills, strategies, and understandings into their daily practices. But if we value the same things, why do our assessments look so different across classrooms, schools, and districts?
There are many reasons for this. Chief among these is the well-documented way high-stakes standardized testing has narrowed the curriculum and attempted to redefine what counts as growth, learning, and achievement in the literacy classroom. But as professionals we have choices about what and how we assess within our own classrooms. As such, we must carefully consider what skills, strategies, and daily practices should count as evidence of literacy growth and what data sources allow us to best assess this.
As I explained in our meeting that day, my literacy assessment practices varied from our school’s institutional norms in a couple of ways. The first was that my students earned grades for reading and enjoying their books each day. If I was going to assess what I valued, I needed to make certain my students’ grades reflected the fact that they were growing into lifelong readers who independently sought out genres, authors, and titles they knew would fuel their reading lives. These texts not only delighted them but provided daily opportunities to put into practice what they were learning in the classroom.
Having students keep a very simple log of the books they were reading each day—in addition to the kidwatching notes I collected while scanning the room or during our one-on-one and small-group reading conferences—provided me vital information about their ability to:
- read on a daily basis
- stay focused on the text
- self-select high-interest books
- read from a variety of titles
- finish (most) books they began
- show interest in what they read
Ideally, this sort of data would be shared in narrative form, detailing what the kids were already doing well as readers and where we could support them to go next. I shared this with parents via emails, conversations, and short notes home. However, since my school required me to also assign quantitative values to reading, I developed rubrics that allowed me to translate reading logs and kidwatching data into numerical scores.
The second way my literacy assessment practices varied from our school’s norms was that very few of the scores in my gradebook came from paper-and-pencil tests. To truly know what my readers were doing to create meaning from text, I needed to diversify the ways I was assessing them. While a few isolated passages on a test might provide one form of data, capturing what my readers knew and could do in actual practice required a wide variety of data points. These included:
- reading conference notes
- student work samples from both reading and writing
- annotated news articles or short texts
- kidwatching notes taken during literature discussions
- written conversations about a text, whether teacher-student or student-student
Again, the ideal vehicle for sharing this data would be a written narrative report, but given the expectations of my school, I developed rubrics allowing me to translate these data points into numerical scores as well. While the resulting grades I entered in the gradebook still didn’t tell families nearly enough about their children’s literacy development, at least those quantitative measures more accurately reflected everything my students had demonstrated to me about their growth as readers.
Ultimately, I was pleased with these efforts to navigate a healthy balance between my theoretical understanding of literacy assessment and the institutional mandates I could not change. By the end of our meeting, my principal not only accepted my rationale for these assessment practices—he fully supported them. In this case, everyone won—my kids, their families, and our school. And while not every administrator may be willing to place such trust in the professional knowledge of their teachers—a sad commentary on the current state of public education—we must keep fighting and doing what is best for our kids. The fate of sound literacy practices may well depend upon it.
Chris Hass is a second- and third-grade teacher at the Center for Inquiry in Columbia, South Carolina.
It is the policy of NCTE in all publications, including the Literacy & NCTE blog, to provide a forum for the open discussion of ideas concerning the content and the teaching of English and the language arts. Publicity accorded to any particular point of view does not imply endorsement by the Executive Committee, the Board of Directors, or the membership at large, except in announcements of policy, where such endorsement is clearly specified.