Banning Laptops in Class is the Wrong Answer

Recent articles by Valerie Strauss (@valeriestrauss) in the Washington Post and Dan Rockwell (@Leadershipfreak) in the New Yorker pander to the insecurities of college instructors and ignore research showing that web applications designed to invite participation can lead to improved engagement in class.  I’ve been a professor at a research university for over 35 years and can say with some confidence that students have always had the ability to lose focus in class.  Students, when bored, will daydream, read the newspaper, stare out the window or find any number of other ways to disengage. The laptop just adds another option.

We have three choices when confronted with the challenge of students being distracted by laptop use. We can ban their use, ignore their use or use tools on the laptop that deliberately engage the students in the learning goals of the course. I argue the first two are ridiculous responses.

Philosophically one must ask why we are concerned about students being distracted. Since students must still pass whatever learning assessments we offer, isn’t it on them to learn time management skills sufficient to be successful? They need to learn these skills sooner or later, so why not in college? Is the issue that we’re worried about their learning, or is it that their distraction is a reflection of how unengaging our teaching is?

With the support of the National Science Foundation I have been studying this issue for a few years. My interest is whether tools can be designed to promote participation in class, especially in large classes, where such opportunities have been rare. I use a web-based tool that allows students to take notes, answer questions, ask questions and indicate when they’re confused.

What my group found first was that the ability for all to ask questions dramatically increased student participation: even students who are reluctant to ask questions verbally (often female students and students for whom English is not their first language) are now asking questions at a rate at or above that of male students, and the fraction of students asking questions has increased to over 50%.

Second, we found that students reported feeling far more engaged in the material because they had more opportunities to participate (see previous post). If you give students more opportunities to participate, our results show they do. Technology facilitates the creation of multiple ways to engage students during class.

Third, the analytics of what students are doing in class can be related to their learning outcomes. Our research shows that mining the patterns of participation provides predictive tools for student understanding that can be used to focus feedback far earlier in the semester. Contrary to the arguments presented in those articles, we have a great deal of data showing that laptops can increase engagement.
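
To make the kind of mining I mean concrete, here is a minimal sketch of an early-warning model, assuming a hypothetical per-student export of participation counts with a pass/fail label; the file name and column names are illustrative stand-ins, not an actual LectureTools schema.

```python
# Minimal early-warning sketch: predict course outcomes from participation.
# The CSV and its columns are hypothetical stand-ins for a per-student export.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("participation.csv")
features = ["questions_answered", "questions_asked", "notes_words", "confusion_flags"]
X, y = df[features], df["passed_course"]  # 1 = passed, 0 = did not pass

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Students with a low predicted probability of passing get early feedback.
at_risk = df.loc[model.predict_proba(X)[:, 1] < 0.5, "student_id"]
```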

I don’t doubt that technology invites distractions (that’s precisely why I take a laptop to faculty meetings). The reality is that the spread of these devices is not going to stop, and the dream that we can ban Internet-enabled devices from the classroom is foolish. The more important question is how we can use these devices to create more engaging learning environments.


Do Hybrid Courses Incubate Mediocrity?

My Hybrid Course

Have I inadvertently designed a monster?

In the winter semester I offered a hybrid course, Extreme Weather (AOSS 102), at the University of Michigan (UM) that was simultaneously face-to-face, streamed live using Echo360 and recorded. The course used LectureTools (as described earlier) so students could participate remotely to answer questions, ask questions, take notes and identify when they were confused. Hence my 168 students could actively engage in the course material regardless of whether they physically came to class or participated remotely.

However, in analyzing the copious data that LectureTools affords I have discovered that students with higher incoming grade point averages (GPAs) behaved remarkably differently from those with lower GPAs (aside: at the University of Michigan, and at many other American universities and colleges, student performance is measured on a 4.0 grade point scale). The lower GPA students tended to answer slightly fewer questions, took significantly fewer notes and were far more likely to participate remotely than the higher GPA students. All this raises two questions (a sketch of the stratification analysis follows the list below):

  1. Are hybrid courses creating an environment that incubates mediocre participation or
  2. Does the availability of streaming provide opportunities for lower GPA students to participate at some level when they otherwise might not have participated at all?
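
Here is a minimal sketch of the stratification behind the figures that follow, assuming a hypothetical per-student table; the file and column names are illustrative, not the actual LectureTools export.

```python
# Sketch of stratifying participation behavior by incoming GPA.
# The CSV and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("hybrid_course.csv")  # one row per student

# Bin students by incoming GPA on the 4.0 scale, then compare mean behavior.
df["gpa_bin"] = pd.cut(df["incoming_gpa"], bins=[0.0, 2.5, 3.0, 3.5, 4.0])

print(df.groupby("gpa_bin", observed=True)[
    ["frac_questions_answered", "notes_words", "frac_remote_sessions"]
].mean())
```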

“Smarter” Students Behave Differently

Figure 1. Exam scores on each of three hourly exams as a function of incoming GPA.

First of all I learned, perhaps not surprisingly, that student outcomes in my course were related to the students’ incoming GPA. The relationship between GPA and grades has been well defined by others, including UM’s Tim McKay (cf. here). Figure 1 shows that the students who had better GPAs tended in general to do better in my course, though there was still considerable scatter.

But as an instructor there’s nothing I can do about my students’ incoming GPA, so I explored the degree to which students with differing incoming GPAs behaved differently. In other words, what is it that “smart” students do that weaker students don’t?

The results show that higher GPA students behaved systematically differently than their lower GPA counterparts in at least three ways:

Figure 2. Fraction of questions answered in class, fraction of questions answered correctly and the ratio of the two, illustrating that lower GPA students answer fewer questions and tend to get a lower fraction of answered questions correct.

1. Variations in Participation

First, when stratified by incoming GPA it appears that lower GPA students tend to respond to a slightly lower fraction of the questions posed in class (Figure 2), and when they do answer questions they tend to get fewer correct.  The variation in participation  across GPA is modest but fairly consistent.

Getting fewer questions correct in class is not necessarily surprising but the reduced level of effort (answering fewer questions) is problematic.  The lower participation rate suggests a reduced sense of responsibility.  That said, the level of questions attempted for the lowest GPA students was still around 70%.

Figure 3. The number of words typed over the semester and the number of slides containing notes over the semester as a function of incoming GPA.

2. Variations in Note-Taking

Second, and more significantly, the number of words students type in notes and the number of slides to which they type notes increase dramatically with incoming GPA, as shown in Figure 3. Students with higher GPAs tend to type considerably more notes in class.

Again the reduced level of participation by lower GPA students is troubling. Does this provide evidence that the lower GPA students have lower GPAs because of poorer study methods? Perhaps they do not know how to effectively participate in class or do not value note-taking as a mechanism for engaging with the material.

3. Weaker Students Avoid the Classroom

Figure 4. Variation in how students participated in class. Higher GPA students tended to physically come to class while lower GPA students tended to either participate remotely or not at all.

Third, and most surprising for me, the nature of how students chose to participate in this hybrid course was decidedly different for higher GPA students than for lower GPA students. Figure 4 shows that higher GPA students tended to participate in person in the classroom. Lower GPA students, on the other hand, tended either to participate remotely or to simply miss class more often.

Musings

These results are a tad discouraging for those offering hybrid courses.  Is it possible that the very design of hybrid courses, even those that promote and encourage synchronous participation,  may systematically provide weaker students with an environment where they can avoid taking active measures to participate?  On the other hand, if the course were only face-to-face would the lower GPA students have come to class and participated at a higher level?  I have no evidence that this would be true.

My takeaway, and the one I will carry into the next semester, is that the students who have lower GPAs coming into the course and who opt to participate remotely should be exposed to results such as these so they can understand the potential consequences of lower participation. Moreover, as more is learned from the vast dataset afforded by systems like LectureTools, it should be shared with academic advisors, who presumably best know their advisees’ academic records and can share the findings with those who need it most.

Evidence that Class Participation Affects Student Outcomes

Imagine a world where you had complete access to every keystroke and click your students make in relation to your course. How would that help you? How would you use that information to guide the student? Is there even evidence that class participation affects student outcomes?

This past semester I used LectureTools in class (see previous story); the tool collects a great amount of information on student participation in class. Having all these data, I opted to test the hypothesis that class participation affects student outcomes.

The Course

The course, AOSS 102, Extreme Weather, was offered in the winter semester, 2014, and included 165 students. Grades were based on the results of three hourly exams (60%), homework (15%), participation (15%) and “common good points” (10%). Exams were open book and open computer, as the goal is less the memorization of facts than understanding how to find information when needed. The course was streamed live every class day, so students had the choice of physically coming to class or viewing lecture remotely and synchronously. In either situation students could use LectureTools, which allows the students to:

  1. Type notes synchronized with the lecture slides;
  2. Answer questions posed by the instructor;
  3. Self-assess understanding and indicate when they are confused;
  4. Pose questions to the instructor and view responses;
  5. Draw on the instructor’s lecture slides; and
  6. Print lecture slides and notes for off-line review.

Participation was incentivized by a response measure, which credited both trying to answer a question and getting the question correct (when gradable). The addition of the gradable component provided extra incentive for students to try to answer a question accurately. Without it they could simply click any answer without thinking and still get participation credit.
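
As an illustration, a minimal sketch of such a two-part response measure appears below; the even split between attempt credit and correctness credit is an assumption for illustration, not the actual course weighting.

```python
# Sketch of a two-part participation score: credit for attempting plus
# credit for correctness on gradable questions. The 0.5/0.5 split is an
# illustrative assumption, not the course's actual weighting.
def participation_score(attempted: bool, correct: bool | None) -> float:
    """Score one in-class question in [0, 1].

    `correct` is None for ungradable (e.g. free-response) questions,
    which earn full credit on attempt alone.
    """
    if not attempted:
        return 0.0
    if correct is None:
        return 1.0
    return 0.5 + (0.5 if correct else 0.0)
```

Under this scheme a wrong answer on a gradable question still earns half credit for trying, while clicking nothing earns nothing.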

My study design split the course into three periods bounded by the three hourly exams. As each exam covered a unique, if interconnected, corpus of content, this allowed a view of how participation changed over the semester and how that participation was related to specific outcomes.
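
A sketch of that bookkeeping, with hypothetical exam dates and column names:

```python
# Sketch of bucketing in-class responses into the three exam periods.
# Exam dates, the CSV, and column names are hypothetical.
import pandas as pd

responses = pd.read_csv("responses.csv", parse_dates=["timestamp"])
exam_dates = pd.to_datetime(["2014-02-05", "2014-03-12", "2014-04-21"])

responses["period"] = pd.cut(
    responses["timestamp"],
    bins=[pd.Timestamp.min, *exam_dates],
    labels=["exam 1", "exam 2", "exam 3"],
)

# Per-student participation counts within each exam period.
per_period = responses.groupby(["student_id", "period"], observed=True).size()
```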

Results

Figure 1. The average grades on each of three hourly exams as a function of the number of questions the student answered prior to that exam.

Value of Answering Questions

Lo and behold, the number of questions a student attempted to answer in class wound up being positively related to the students’ outcomes. Figure 1 shows that, on average, grades were higher for those students who attempted to answer more questions.

This result affirms those who use student response systems in class. Apparently the simple act of trying to answer a question is related to student outcomes. Of course I don’t know why some students didn’t answer: whether they just didn’t know the answer, didn’t have enough time, or were distracted by other things at the time of the question. Regardless, this is evidence that class participation affects student outcomes and affirms that the effort to pose questions germane to the content during class has value.
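
For readers who want to run the same check on their own class data, a minimal sketch (file and column names are hypothetical):

```python
# Sketch of the Figure 1 analysis: relate questions attempted to exam score
# within each exam period. The CSV and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("attempts_and_grades.csv")  # one row per student per period

for period, grp in df.groupby("period"):
    r, p = pearsonr(grp["questions_attempted"], grp["exam_score"])
    print(f"{period}: r = {r:.2f} (p = {p:.3f})")
```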

Value of Measuring Correctness

Figure 2. Average grade for students on each of three exams versus the number of questions they answered correctly during class prior to the exam.

Beyond posing questions and recording participation, my research also shows that recording and grading students’ responses has potentially even more value. Student correctness results were again grouped by the three exam periods and compared with the corresponding exam grades.

Results from my class (Figure 2) illustrate a strong relationship between in-class performance and student outcomes. It makes intuitive sense that if a student knows the material during class time they’re more likely to know the material during exams. But the strength of the relationship suggests those who did not understand the material in class were less likely to learn it for the exam.

These two charts suggest it is worth monitoring both student participation in questions and how students perform on gradable questions to allow identification of students at risk of failure earlier in the semester.

These early results are the tip of the iceberg for all the data being collected with LectureTools. These data need to be combined with other data from the local LMS and/or student information systems to create the best possible schemes for predicting student success, but even the data from LectureTools alone appear to allow instructors to identify weaker students earlier and make a strong case for more data collection in our classrooms.


Predicting Student Success in a Large Class

The Look in Their Eyes

I’ve heard it said by colleagues and conference speakers that when a class “gets it” they can tell by the look in the students’ eyes.

Wouldn’t it be grand if in the future laptops could scan irises and surmise whether and when a student understands the concept being presented?  Holy cow, THAT technology would allow immediate adaptive learning, changing the pace of instruction to meet the needs of the student and eliminating the need for grading exams.  Let’s set up a Kickstarter to get that show on the road.  I’d be the first to invest.

But while we’re waiting let’s examine how we currently assess what students know and don’t know. Lacking the intuitive insights of my colleagues, I’m limited to asking students questions and evaluating their answers to see to what degree they “got it.” To do this I use LectureTools, a web application that allows me to pose multiple choice, free response, numerical and image-based questions during or outside class and gather answers instantly to present and discuss. [Truth in advertising: I had a hand in the design of LectureTools so I am not unbiased; nonetheless, the results I present here are germane to any student response system.]

I had tried to use clickers in class but found them somewhat limited in the question types allowed, and they didn’t offer the additional note-taking and participatory features students wanted. I’ve been using LectureTools for multiple years now in my “Extreme Weather” class, which is simultaneously face-to-face and broadcast live on the Internet with a population of about 150 students per semester. What I found was illuminating (for me at least) and may help me in predicting student success (and failure) far earlier in the semester.

Tracking Students’ Understanding

I believe I have the best job at the University of Michigan. Yeah, the football coach makes a bundle of money and is in the news all the time, but I get paid to talk about the weather. If the football coach has a bad season he gets a lot of heat. But me, if I tank in class there is little repercussion save for the personal sting to my pride. Still, coaches understand that for athletes to perform at their best on game day they need to perform well during practice. Now with classroom technology I can demonstrate that the same is true in my class.

Figure 1. Average exam grades categorized by the average number of questions students got right in class.

With LectureTools I am able to ascertain how many multiple-choice and image-based questions each student gets right every class. After collecting these data I compared the percent of questions each student got right during class time with the grades the students got on the first two exams. The results (Figure 1) show first that the second exam was harder than the first, as grades were lower overall. More important, though, the results also show that the students who did less well on in-class questions performed less well on subsequent exams.
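
The comparison itself is straightforward; here is a minimal sketch with hypothetical file and column names:

```python
# Sketch of the Figure 1 comparison: bin students by percent of in-class
# questions answered correctly, then average exam grades per bin.
# The CSV and column names are hypothetical.
import pandas as pd

df = pd.read_csv("class_performance.csv")  # one row per student
df["correct_bin"] = pd.cut(
    df["pct_correct_in_class"],
    bins=[0, 40, 60, 80, 100],
    labels=["<40%", "40-60%", "60-80%", ">80%"],
)
print(df.groupby("correct_bin", observed=True)[["exam1", "exam2"]].mean())
```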

Such results are not surprising yet illustrate the value of monitoring student responses to questions as an integral part of the class structure. By posing questions during class time you are collecting information that can identify students who need extra support. Knowing this justifies whatever energy it takes to author questions and embed them in the flow of my class. Why wait for the first exam to identify students in need of extra attention?

Tracking Students’ Well-Being

My ability to predict student success may well be improved with new data like their responses to questions in class as described above, but undoubtedly there are other factors that influence a student’s performance. One factor that I have never considered, as I had no way to measure it, is the students’ physical and emotional state.

Figure 2. Example of results from a student wellness question for a specific class day. Note the general collinearity of physical and emotional wellness.

I have observed over the last few years that a majority of the students who withdrew from my course in mid-semester commented on a crisis of health or emotion in their lives. On a lark, this semester I created an image-based question to ask students in LectureTools at the beginning of each class (example, Figure 2) that requested a self-assessment of their current physical and emotional state.

Clearly there is a wide variation in students’ perceptions of their physical and emotional state.  To analyze these data I performed cluster analysis on students’ reported emotional state prior to the first exam and found that temporal trends in this measure of emotional state could be clustered into six categories.
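
As a sketch of that clustering step (k-means stands in here for whatever clustering method was actually used; the data layout is hypothetical):

```python
# Sketch of clustering students' emotional-state trajectories before exam 1.
# k-means is an illustrative choice; the CSV and columns are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans

wellness = pd.read_csv("wellness.csv")  # student_id, class_day, emotional_rating

# One row per student, one column per class day before the first exam.
traj = wellness.pivot(index="student_id", columns="class_day",
                      values="emotional_rating").dropna()

traj["cluster"] = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(traj)
```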


Figure 3. Students’ patterns of emotional state prior to the first exam were clustered and reveal a relationship between emotional state and the resulting exam grade.

Perhaps not surprisingly, Figure 3 shows that student outcomes on the first exam were very much related to the students’ self-assessment of their emotional state prior to the exam. This result is hard evidence for the intuitive notion that students perform better when they are in a better emotional state.

Now What?

Armed with these observations I realize I should monitor student responses to content questions early in the semester and monitor students’ well-being (though it’s possible the former would identify the latter).  The bigger question is how to design interventions and/or mentorship that will aid these students earlier in the semester.

The results shown here can serve as a ‘control’ against which to measure changes in outcomes due to interventions. If in the second week of the semester, for example, I create an intervention to work with students showing low scores on in-class questions and/or reporting low emotional state, will this lead to improvements in subsequent student outcomes? These results demand deeper investigation but illustrate that tools like LectureTools offer new data that have not been previously available. These data will likely challenge some assumed relationships and affirm others, but either way they represent new opportunities to explore how learning happens (or doesn’t) in our classrooms.


Conduct Class from Anywhere

Teach from Anywhere

I’m a college professor who probably works more than 60 hours a week, and I suspect I am not uncommon in my profession. We love to do research, we teach multiple classes and interact with many students, and we support our institutions’ needs for service.

One of the challenges I face each semester is the conflict between participating in research meetings and presenting at conferences while maintaining my teaching schedule. My response has been to reschedule classes, find colleagues who could cover a class for me, or skip research meetings and conferences I would otherwise attend.

But no more…

Step 1:

This semester I started using technology from Zoom.us (http://zoom.us/) to deliver lecture back to the classroom from wherever I am. I have someone (my teaching assistant or a designated student) set up either the podium computer or a laptop attached to the local video projector to run Zoom, with a webcam attached to that computer. We make the connection before class; I can see the class, and I can choose whether they see me or any window on my screen. It’s brilliant!

Step 2:

I use LectureTools (truth in advertising, I had a hand in creating LectureTools) so I can pose questions to students and they can respond to my questions and/or ask questions back to me.  Of course students can ask questions verbally and I’d hear them with Zoom but for the many who are uncomfortable asking questions (think students who lack confidence in their English or students uncomfortable with the content of the course) this offers additional opportunities to participate in class.

With this combination I can conduct class as if I were in the classroom and still provide an interactive classroom experience. Students report higher levels of engagement (1,2), and this approach can transform even larger classes into active learning spaces.

Step 3:

After class I can check who participated and at what level. Did they attend class during the class time? Did they answer questions I posed? Did they ask questions or indicate they were confused? In many ways teaching remotely arguably offers no less pedagogical value than conducting class face-to-face, and with LectureTools it may actually provide a more active learning approach than a traditional face-to-face class.


Science is a Contact Sport: Car #1


This video is from car #1, which stayed to the west of the center of the storm. Students in this vehicle were able to film the formation of the tornado, which quickly grew massive, with multiple vortices.


Science is a Contact Sport

The 2008 University of Michigan tornado chase team including (left to right) Paul Schmidt, Joel Dressen (Texas Tech), Matt Onderlinde, David Wright, Jennifer DeHart, Candace Wood (Texas Tech), Joe Merchant, Kim Billmaier, and Brad Charboneau. (Brandon Wills, not shown)

I teach a course in “Extreme Weather” that is based, in part, on my experience leading students into the field to chase “supercell” thunderstorms (i.e. mammoth thunderstorms that can generate large and dangerous hail, lightning and tornadoes).  While the experience is scientifically stimulating it is also visually and emotionally breathtaking …and can be deadly.

The challenge is how to leverage my experiences to stimulate non-science majors to explore the science of extreme weather events.  Short of inviting the whole class to join us in the field the best I can do is present them with stories and data from the chase and invite them to decide how they might react.  Here is one such experience from 2008.

  1. Science is a Contact Sport
  2. Location and motion of vehicles
  3. Video from Car #1
  4. Video from Car #3
  5. Story from Car #2

Science is a Contact Sport: Car #2

My car (#2) sped east and was overtaken by blinding rain and debris. In the nervous moments that followed, I chose to stop in the middle of the highway and turn the vehicle across the highway and into the wind to take advantage of the car’s aerodynamic design. My first concern was the dark twister bearing down on me; my second concern was that another vehicle might be following me and not be able to stop. Options were poor.

Straw, branches and all sorts of other debris pelted the vehicle. Visions of a stray cow flying into the car (I have watched too many twister movies) seemed not out of the question. Remembering that my sister and brother (also meteorologists) have a pact that should one of us die prematurely the others get rights to our last photo (bound to be a doozy), I tried to take pictures, but it was so dark near the funnel the camera wouldn’t work.

The winds increased dramatically and the car was rocked and pushed backward across the highway by winds estimated at over 150 mi/hr. Fortunately, the car remained upright (a shoutout to the aerodynamics of the Chevy Cobalt).

After the tornado passed, the wheel wells of the car (which had been blasted by the twister) were totally stuffed with debris, and straw was sticking out of every exposed crack around the windows and doors. Fortunately, thanks to a car wash in the next town, the rental company never suspected anything.


Science is a Contact Sport: The Role of Bad Luck

On May 22, 2008, I and a team of undergraduate students from the University of Michigan positioned ourselves outside Oberlin, KS, for an encounter with a developing supercell thunderstorm. The goal on this day was to film the genesis of the thunderstorm and hope that it might ultimately produce a tornado.  As we pondered our next move, a dark cloud formed to our south. The cloud seemed to extend to the ground, making it impossible to distinguish any features. Radar indicated that the cloud contained rotation, but many clouds on this day had rotation, and not one had spawned a tornado.

The sky at this point was so dark despite it being mid-day that blinking flasher lights from passing cars reflected off the dry road surface. Strong turbulent motions were visible inside the cloud as it moved northward towards us.

Suddenly a large v-shaped tornado descended from the approaching thunderstorm and moved directly toward us. Given seconds to make a decision, two of the three vehicles in the team moved eastward to avoid the tornado, while the third vehicle stayed behind with a plan to head west if necessary. Bluntly put, we were in a very bad situation and, in hindsight, should have withdrawn to the west minutes earlier. Nonetheless, the video above shows the location of each of the three vehicles when the tornado formed and how each moved with the goal of 1) staying alive and 2) collecting data as close as safely possible to the tornado.


Science is a Contact Sport: Car #3


This video is from car #3, which was initially directly in the path of the tornado but was able to move to the east of the center of the tornado. Here you can see the formation of multiple vortices as the tornado bears down.
