Today I was grading the tests my students took yesterday while I was at our MSU classes. I was unaware that we were even planning on assessing students yesterday; the majority of the assessment covered material we have been working on, such as writing numbers in different ways, skip counting, etc. However, there were certain questions that our students haven't explicitly been exposed to. Although they have the conceptual knowledge to solve these, they clearly struggled with the format in which they were asked to apply these skills. A trend I saw on several students' tests was that they were extremely confused by tables comparing the number of stamps two different students owned. Each picture of a stamp was worth ten stamps, as noted by the key underneath the table. Although my students are entirely capable of counting by tens and finding the difference between the two amounts, the majority of them answered this incorrectly. It was a fill-in-the-blank question, so students put what they thought was the obvious answer and simply counted how many more pictures of stamps one child had than the other.
I think that if my students had had more experience working with tables and understanding that keys even exist, they would have been successful. Since I wasn't there during the actual testing session, I'm not sure how my MT explained it or what clues she gave. This reveals to me that my students understand one-to-one correspondence, but were not aware that these everyday object pictures could represent different values (although they do understand this when it relates to base ten blocks). A next step would be to present a similar type of problem comparing the number of objects between two people; however, I would use base ten rods as the objects. I would start by asking students how many more rods person A had than person B. This would transition into a discussion about what each base ten rod really represents (10 units). We could then make our own key about what each rod represents and scaffold this idea to real-world objects in tables. Another next step would be to present the same problem to students and have them talk in their table groups about how they would each solve it. Since I didn't grade all the tests, I would be curious to see whether other students answered it correctly, or what misconceptions they had. While monitoring these student discussions, I would decide whether to select and sequence individual students' ideas or each group's strategy.
Also think about the fact that the "test" is usually a "summative" assessment (though of course it provides formative information as well). It might be useful to think of minor questions (even conversations you can have with the students in response to different tasks or problems) that can help give you insight into what they are thinking. I think this speaks to the importance of having many forms (and grain sizes) of formative assessment.