1. How do you determine from these responses if they are comprehending well enough to understand the level of reading required for the next grade?
2. If they are experiencing difficulties in comprehension, how is this shown in your assessment project and what do you do to remediate this?
3. How do you translate this into constructive feedback for students and parents?
These are great questions. Here are my answers:
1. How do I assess if a student is comprehending? Let's be careful not to pretend that we actually know what it means to be reading at a grade 8 level versus a grade 9 level. Psychometricians might like to believe they can empirically diagnose a student's reading level down to a decimal point, but I would wager that most teachers understand the pitfalls of such arrogant claims.
At best, multiple choice tests offer us spurious precision in measuring a child's ability to decode text. Because reading is first and foremost about constructing meaning, the comfort we might gain from a multiple choice test's pseudo-structure must be seen for what it is: a subjective rating masquerading as an objective assessment.
Grades, stickers, happy faces, check marks, gold stars and other bribes that are born out of reductionist assessments such as multiple choice tests are not a kind of feedback to be accommodated; rather, they are problems to be solved and practices to be avoided.
Rather than concerning myself with "are they ready for the next grade," which is an infinitely ambiguous question, I concern myself more with questions like:
Do students enjoy reading?
Do they read willingly?
Do they construct meaning from reading?
How do I know if a student is constructing meaning? I observe it. Here's a sample of Liam's project. On the left is a poem he selected, and on the right are his thoughts. Click on the picture below to view it larger.
Look at his response! How could I ever have created a multiple choice question that would allow Liam to share with me his connection between a line of fan-fiction poetry from The Legend of Zelda and a Canadian Broadcasting Corporation radio talk show featuring a 20-minute discussion on the concept of time?
Some might say this is too much work, that they don't have time to look at a student's work like this. To this I say: don't be intimidated. The bulk of my assessing occurs while students are actually doing the work.
How do I know he's ready for the next grade? I didn't need to number-crunch or analyze data; I simply needed to observe him do this.
As I write this blog post, Liam is sitting next to me. I think he just summarized all this nicely:
You can't construct meaning in a preconceived bubble.
2. How do I know if a student is having difficulties comprehending? In three words, here is my answer: I observe it. There is no substitute for what a teacher can see with their own eyes when observing and interacting with students while they are learning.
Here's a sample from Lewis's project. About two minutes into this project, I saw him staring at his monitor like a deer in headlights. I quickly assessed that he was having difficulty. I walked over to him and asked if he needed help. He said yes. Below is a sample from his project. He asked me to provide an excerpt for him to read (it's on the left). His thoughts are on the right. Click it to view it larger.
He was intimidated by words he couldn't understand - I know this because he told me. I told him to launch http://www.dictionary.com/ so he could look up these words. He then showed his thinking by sharing with me which words he had to look up. Even after using the dictionary, his reliance on copying the definitions tells me he still isn't really getting it.
But what's the alternative? If I gave him a multiple choice exam, he would stare at it, dejected, thinking he's an idiot who can't read. To save his own pride, he would likely abandon all effort, madly filling in the bubbles only to declare the whole thing "stupid".
A multiple choice test for Lewis would, at best, show me what he can't do. This kind of project allows Lewis the opportunity to show me what he can do. Designing and implementing remediation with this kind of project is intuitive. In contrast, I have no idea how I would remediate for Lewis if I was given his bubble sheet item analysis.
Quite frankly, Lewis and I would both be at a loss if given such data.
3. The best feedback parents can receive about their child's learning is for them to see their child's learning. Back in the old days, show-and-tell, science fairs and barn dances were exhibitions of learning. Communities came together to observe and listen to students while they performed their learning - and if things went really well, parents and community members might have actually interacted with the students. No one needed to translate the results - everyone could see with their own eyes that learning was or was not taking place. However, if we throw a number or letter grade at parents, or God forbid try to show them an item analysis sheet like the one above, no wonder they need someone to translate the assessment. Summative assessment does not need to be diabolically complicated: gather information and share that information. You might add a hint of an evaluation, if needed, but otherwise, that's it. We don't need stanines, quartiles, class averages or even grades and tests. Just gather and share. Rinse and repeat.
An understandable rebuttal to this might be that teachers, parents and community members might not be willing to engage in this - they might not have the time or be willing to expend the effort. Too often, this is as sad as it is true. In an attempt to counter this apathy, technology might provide a kind of solution: online communities such as discussion forums, blogs, Moodles, Nings and other social networking software can alleviate problems brought on by the limitations of time and place.
IN CLOSING, I want to address one more criticism I face with my alternatives to language arts and science multiple choice exams. I have had teachers ask me if I will be grading these projects. Of course, the question is a setup: what they are implying is that I am cheating my students by not going through their projects with a fine-tooth comb, and that I am doing them a disservice by not spending as many or more hours grading the projects as the students spent actually performing.
To this I say, how long does it take to rip a bubble sheet through a scoring machine?
The difference between a multiple choice exam and a performance assessment is not that the multiple choice exam can be counted and the performance can't. When it comes to quantifying either one, their respective strengths become their own weaknesses. The utility of a multiple choice exam and its artificially convenient inter-rater reliability comes at an alarming price: authenticity is sacrificed for (perceived) reliability. Meanwhile, the genuineness of a performance assessment can be rich with both quality and quantity, but its validity comes at a cost: quantifiability may be sacrificed for authenticity.
It might be argued that neither can be properly quantified, but if Linda McNeil from Rice University is correct in saying that measurable outcomes may be the least significant results of learning, maybe we shouldn't be too bothered by it all.
Could you imagine a school district of teachers coming together on a district inservice day where they could sit down and share a wide range of performance assessments? Just imagine the kind of rich dialogue that would revolve around real learning. Now that would be professional development!