The Look in Their Eyes
I’ve heard it said by colleagues and conference speakers that when a class “gets it” they can tell by the look in the students’ eyes.
Wouldn’t it be grand if in the future laptops could scan irises and surmise whether and when a student understands the concept being presented? Holy cow, THAT technology would allow immediate adaptive learning, changing the pace of instruction to meet the needs of the student and eliminating the need for grading exams. Let’s set up a Kickstarter to get that show on the road. I’d be the first to invest.
But while we’re waiting, let’s examine how we currently assess what students know and don’t know. Lacking the intuitive insights of my colleagues, I’m limited to asking students questions and evaluating their answers to see to what degree they “got it.” To do this I use LectureTools, a web application that allows me to pose multiple-choice, free-response, numerical, and image-based questions during or outside class and gather answers instantly to present and discuss. [Truth in advertising: I had a hand in the design of LectureTools, so I am not unbiased; nonetheless, the results I present here are germane to any student response system.]
I had tried to use clickers in class but found them somewhat limited in the question types allowed, and they didn’t include the note-taking and other participatory features students wanted. I’ve been using LectureTools for multiple years now in my “Extreme Weather” class, which is simultaneously face-to-face and broadcast live on the Internet, with about 150 students per semester. What I found was illuminating (for me at least) and may help me predict student success (and failure) far earlier in the semester.
Tracking Students’ Understanding
I believe I have the best job at the University of Michigan. Yeah, the football coach makes a bundle of money and is in the news all the time, but I get paid to talk about the weather. If the football coach has a bad season he gets a lot of heat. But me, if I tank in class there is little repercussion save for the personal sting to my pride. Still, coaches understand that for athletes to perform at their best on game day they need to perform well during practice. Now, with classroom technology, I can demonstrate that the same is true in my class.
With LectureTools I am able to ascertain how many multiple-choice and image-based questions each student gets right every class. After collecting these data I compared the percent of questions each student got right during class time with the grades the students earned on the first two exams. The results (Figure 1) show, first, that the second exam was harder than the first, as grades were lower overall. More important, though, the results also show that the students who did less well on in-class questions performed less well on subsequent exams.
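For readers curious what this comparison looks like computationally, here is a minimal sketch in Python. The numbers are invented stand-ins for the per-student data; the actual LectureTools export format and field names are not shown in this post.

```python
# Hypothetical per-student records: (fraction of in-class questions
# answered correctly, score on the first exam out of 100).
import math

records = [
    (0.95, 91), (0.88, 84), (0.72, 70),
    (0.60, 65), (0.45, 52), (0.90, 89),
]

def pearson_r(pairs):
    """Pearson correlation between the two columns of `pairs`."""
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(records)
print(f"in-class accuracy vs. exam score: r = {r:.2f}")
```

With real class data the interesting output isn’t a single correlation so much as the list of students sitting in the low tail of in-class accuracy, since those are the ones the next paragraph argues we should reach early.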
Such results are not surprising, yet they illustrate the value of monitoring student responses to questions as an integral part of the class structure. By posing questions during class time you are collecting information that can identify students who need extra support. Knowing this justifies whatever energy it takes to author questions and embed them in the flow of my class. Why wait for the first exam to identify students in need of extra attention?
Tracking Students’ Well-Being
My ability to predict student success may well be improved with new data like their responses to questions in class, as described above, but undoubtedly there are other factors that influence a student’s performance. One factor I have never considered, because I had no way to measure it, is the students’ physical and emotional state.
I have observed over the last few years that a majority of the students who withdrew from my course in mid-semester commented on a crisis of health or emotion in their lives. On a lark, this semester I created an image-based question in LectureTools to ask students at the beginning of each class (example, Figure 2) that requested a self-assessment of their current physical and emotional state.
Clearly there is a wide variation in students’ perceptions of their physical and emotional state. To analyze these data I performed cluster analysis on students’ reported emotional state prior to the first exam and found that temporal trends in this measure of emotional state could be clustered into six categories.
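The clustering itself need not be exotic; something as simple as k-means over each student’s sequence of self-reported emotional state would do. The sketch below assumes a 1–5 response scale and uses tiny synthetic series with two clusters so it fits on the page; the analysis described above was run on real responses and found six.

```python
# A minimal k-means sketch for clustering temporal trends in
# self-reported emotional state. The scale (1-5 per class meeting)
# and the toy series below are assumptions for illustration.
import random

def kmeans(series, k, iters=50, seed=0):
    """Cluster equal-length sequences with plain k-means (Euclidean)."""
    rng = random.Random(seed)
    centers = rng.sample(series, k)
    for _ in range(iters):
        # Assign each series to its nearest center.
        groups = [[] for _ in range(k)]
        for s in series:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(s, centers[j])))
            groups[i].append(s)
        # Recompute each center as the mean of its group.
        new_centers = []
        for j, g in enumerate(groups):
            if g:
                new_centers.append([sum(col) / len(g) for col in zip(*g)])
            else:
                new_centers.append(centers[j])
        if new_centers == centers:
            break
        centers = new_centers
    return centers, groups

# Four students, four class meetings each: two steady-high trends
# and two declining trends.
series = [
    [5, 5, 4, 5], [5, 4, 5, 5],   # steady high
    [4, 3, 2, 1], [5, 4, 2, 2],   # declining
]
centers, groups = kmeans(series, k=2)
```

On real data one would cluster the full pre-exam window (k=6, per the post) and then look at each cluster’s exam outcomes, which is what Figure 3 summarizes.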
Perhaps not surprisingly, Figure 3 shows that student outcomes on the first exam were very much related to the students’ self-assessment of their emotional state prior to the exam. This result is hard evidence for the intuition that students perform better when they are in a better emotional state.
Armed with these observations, I realize I should monitor both students’ responses to content questions early in the semester and their well-being (though it’s possible the former would identify the latter). The bigger question is how to design interventions and/or mentorship that will aid these students earlier in the semester.
The results shown here can serve as a ‘control’ against which to measure changes in outcomes due to interventions. If, in the second week of the semester, for example, I intervened with students showing low scores on in-class questions and/or reporting low emotional state, would this lead to improvements in subsequent student outcomes? These results demand deeper investigation, but they illustrate that tools like LectureTools offer data that have not previously been available. These data will likely challenge some assumed relationships and affirm others, but either way they represent new opportunities to explore how learning happens (or doesn’t) in our classrooms.