Teaching with Classroom Response Systems

Resources for engaging and assessing students with clickers

Archive for the ‘Student Perceptions’ Category

The Costs (and Benefits) of Clickers

This week’s Chronicle of Higher Education includes an essay by Michael Bugeja, director of the Greenlee School of Journalism and Communication at Iowa State University, titled “Classroom Clickers and the Cost of Technology.”  (You’ll need a subscription to the Chronicle to use that link, unfortunately.)  In his essay, Bugeja expands on a few points he made about clickers in a prior essay.  I thought I would respond to a few of his points here.

I agree with some of Bugeja’s takeaways from his institution’s experiences with clicker vendors.  He argues that students should be involved in decisions about instructional technology, that chief information officers should be consulted by departments making such decisions, that faculty adopting technologies should be aware of not-so-obvious costs of using these technologies, and that administrators should be prudent when conducting cost-benefit analyses of new instructional technologies.

Those are all very sensible points.  However, I see some problems in the ways Bugeja uses clickers as an example in support of these points.  The fundamental weakness of the essay is that Bugeja seems to be doing a cost-benefit analysis on clickers without paying much attention to the benefits portion of that analysis.  As well-referenced as the cost portion of his analysis is, he fails to consider any of the research looking into the impact of teaching with clickers on student learning.

For instance, he quotes Ira David Socol of Michigan State University as saying, “The idea of wasting money on a device no more sophisticated pedagogically than raising your hand drives me nuts…”  However, there’s strong evidence (Stowell and Nelson, 2007) that when the hand-raising method is used, fewer students participate and students are more hesitant to answer questions honestly than when a classroom response system is used.  Those are significant differences and to ignore them is to fail to accurately describe key benefits of using clickers.

Bugeja also writes that had students at his institution been asked to weigh in on the cost-benefit question regarding clickers, “they probably would have said no because of excessive student fees.”  I can’t speak for students at Iowa State, but a number of published studies of student perceptions of clickers, including Trees and Jackson (2007), MacGeorge et al. (2007), and Kaleta and Joosten (2007), indicate that students respond positively to clickers, particularly when clickers are used in ways that engage them in class and provide them with feedback on their learning.  It should be noted that in the studies I just listed, students were required to purchase their own clickers.  Thus, there is evidence that students see the benefits of clickers outweighing the costs.

A second weakness of Bugeja’s argument is that he discusses the cost side of the cost-benefit analysis by focusing on the cost to install and maintain infrared-based classroom response systems.  IR systems are indeed costly to install and maintain and a bit of a pain for faculty and students to use.  However, arguing that classroom response systems aren’t worth the cost because infrared-based systems are costly is a bit like arguing that automobiles aren’t worth purchasing because steam-powered cars are a pain to use.  Very few colleges and universities are still using infrared-based clicker systems.  The radio frequency systems now in common use eliminate almost all of the installation, maintenance, and usage problems of the infrared systems.

As Bugeja points out, at Iowa State relatively few faculty members used clickers when the infrared system was the only one available.  When the easier-to-use and more-reliable radio frequency system was made available, “users then multiplied throughout the university.”  Bugeja makes good points about the costs involved in supporting early versions of clicker systems, but given how usage increased when more mature technologies were made available, I think a stronger takeaway is that institutions should be cautious when implementing new technologies.  Waiting for “version 2” can help institutions avoid costs.  Universities now in the process of rolling out clickers widely can take advantage of the more mature radio frequency technologies and thus avoid all the hassle of the older systems.

There’s more I could say about this essay, but I’ll stop here for now.  I encourage you to respond to Michael Bugeja’s essay as well as my thoughts in the comments section below.

Clicker Conference: Tim Stelzer Keynote

I’m back from the Inaugural Conference on Classroom Response Systems, hosted by the Delphi Center for Teaching and Learning at the University of Louisville.  I had a great time meeting people whom I knew only by their research, and some of the sessions were very well done.  Thanks to the Delphi Center, particularly director Gale Rhodes and associate director Marianne Hutti, for putting together such an enriching conference.  I’m already looking forward to next year!

Tim Stelzer, research associate professor of physics at the University of Illinois and one of the founders of i>clicker, presented the morning keynote address.  His presentation was very engaging, featuring a nice blend of information and humor.  His slides were particularly impressive, having been designed in the Presentation Zen style.  Many of his slides consisted primarily of one fullscreen image and just a few words of text.  This put the focus on Stelzer and his message while reinforcing that message visually.  He included a few well-chosen video clips and animations that helped him make his points (with a little humor) as well as an ongoing clicker-enabled game that kept his audience engaged.

One point Stelzer made that stood out to me was that in the past, being highly educated was strongly correlated with remembering lots of facts.  This is still true today to some extent.  Consider Ken Jennings, the guy who won all those games on Jeopardy!  He’s considered highly intelligent, not for higher-order thinking skills (problem solving, critical thinking, etc.), but for remembering lots of trivia.

Stelzer made the point that with all the information available to students via the Internet, factual recall doesn’t play the same role it used to play in learning.  The challenge now in higher education is to develop students’ higher-order thinking skills, and Stelzer feels that classroom response systems can facilitate pedagogies that help teachers meet that challenge.  This is a valid point, and it’s one of the reasons I included in my book so many examples of clicker questions aimed at higher-order thinking skills.

Stelzer also described the genesis of the i>clicker classroom response system.  The first electronic system he used at Illinois (after abandoning the flash card method due to poor student participation, caused by lack of accountability) involved hardwiring jacks in all of the seats in a lecture hall.  The students connected their TI-83 calculators to these jacks.  This system allowed a couple of neat features that, to my knowledge, aren’t available in current systems.  One was that it allowed the instructor to call up a seating chart showing each student’s name and how that student voted in response to a clicker question.  This allowed an instructor to say, for instance, “John, I see that you answered B but the students sitting next to you answered C.  Why don’t you discuss this question with them and see if you can come to a consensus?”

Another feature of the system was that when a student answered a multiple-choice question, the system could be programmed to send the student a response determined by the student’s answer choice.  So if a student selected choice A, the system might reply, “Have you considered X?”, where X would be some example or concept relevant to the answer choice.  I imagine this feature would be very useful in helping students think more deeply about their answer choices.  It’s also a feature that could be implemented in some of the systems available now that use cell phones, smart phones, and laptops.  In fact, this might already be a feature of some of these systems.  If you know that to be the case, please let me know.

(Coincidentally, Friday night at the conference reception, I met Kevin Patton of St. Charles Community College in Missouri.  He and I were talking about clickers and somehow hit upon this very same idea–giving feedback to students after their answers right on their clickers, feedback tailored to their particular answers.)
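Stelzer’s description got me thinking about what such answer-specific feedback might look like in practice.  Here’s a minimal sketch, purely hypothetical: the function name, answer choices, and feedback messages are all made up for illustration, and this isn’t a description of the old Illinois system or of any current vendor’s software.

```python
# Hypothetical sketch of answer-specific feedback: each answer choice maps to a
# tailored prompt that gets pushed back to the student who submitted that choice.
# Nothing here reflects an actual clicker system's API.

FEEDBACK_BY_CHOICE = {
    "A": "Have you considered what happens in the limiting case?",
    "B": "Good choice. Can you explain why choice A is tempting but wrong?",
    "C": "This choice overlooks the assumption stated in the question stem.",
    "D": "Try sketching the situation before you answer again.",
}

def respond_to_vote(student_id: str, choice: str) -> str:
    """Look up and deliver the feedback message for a student's answer choice."""
    message = FEEDBACK_BY_CHOICE.get(choice, "Please submit a valid choice (A-D).")
    # In a web- or phone-based system, this is where the message would be sent
    # over the network to the individual student's device.
    print(f"To {student_id}: {message}")
    return message

respond_to_vote("student-042", "A")
```

The interesting design question, it seems to me, is who writes these messages: for this to be worthwhile, the instructor has to anticipate the reasoning behind each wrong answer, much as one does when writing good distractors.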

One of the research findings Stelzer shared was particularly interesting, too.  According to surveys of students at the University of Colorado at Boulder, where over 17,000 clickers are in use, the factor most highly correlated with negative student reactions to clickers was “sporadic use.”  If clickers aren’t used very often, students tend not to like them.  That’s pretty good evidence that students see some value in the use of clickers, I think.

Stelzer’s talk was videotaped and will be posted online, probably at the i>clicker site, where his top ten tips for using clickers are already available.

I have more thoughts from the conference I’ll be posting in the next few days.

Clickers and Student Gender

Jeanne Dillon of Argosy University recently emailed me to ask if I knew of any research on the role of gender in the impact of the use of classroom response systems.  Since something is preventing me from replying to her email, I thought I would respond here.  (Jeanne, if you’re reading this, please email me again or leave a comment below.  Thanks!)

Taking a look through my clickers bibliography, I found several research articles that discussed gender differences.  Several studies–Freeman and Blayney (2005), MacGeorge et al. (2007), Nicol and Boyle (2003), Rice and Bunz (2003)–found no statistically significant difference between male and female students’ perceptions of the use of classroom response systems (satisfaction, perceived benefits, etc.).  Scornavacca and Marshall (2007) found the same result when looking at a text-messaging-based classroom response system.

Len (2007) found that some of his astronomy students were “self-testers,” preferring to answer clicker questions independently, and some were “collaborators,” preferring to discuss clicker questions with peers before answering.  He found that gender had no impact on the likelihood that a student was a self-tester or a collaborator, which is a somewhat surprising result given commonly held beliefs about gender differences and collaborative learning.

The only study I found that looked at the role of gender in the impact of the use of clickers on student learning (as opposed to student perceptions) was Reay, Li, and Bao (2008).  That study found that when clickers weren’t used, the gain for male students from pre-test to post-test was statistically greater than the gain for female students.  When clickers were used, the gains were the same, indicating that clickers (and the question sequence pedagogy the instructors used) reduced the performance gap between male and female students.  As I noted in my post about this study, it’s tough to isolate the effect of the clickers here from the overall effect of the question sequence pedagogy used.

In summary, there’s some evidence that gender doesn’t play a role in how students perceive the usefulness of teaching with clickers.  There’s almost no evidence at this point for or against gender playing a role in the impact of clickers on student learning.  I would encourage Jeanne and others conducting research on classroom response systems to look for any gender differences.  This would seem to be an area of research with great potential.

If you know of any other research along these lines, please let me know.  Thanks!

EDUCAUSE Day Four

On the final day of the EDUCAUSE Annual Conference, I attended a session titled “Growing and Sustaining Student Response Systems at Large Campuses: Three Stories” presented by Christopher Higgins of the University of Maryland, Nancy O’Laughlin of the University of Delaware, and Michael Arenth of the University of Pittsburgh.  The presenters’ slides are available, and Inside Higher Ed ran a story on the session, too.

University of Maryland

There were three classroom response systems in use at the University of Maryland as of a few years ago, so the IT office got together with Undergraduate Studies and the Center for Teaching Excellence to form a review committee that recommended the adoption of the TurningPoint system.  Key factors included keeping student data on campus (because of FERPA), cost to students, integration with PowerPoint and Maryland’s course management system, and reporting options.

They now have over 12,000 clickers in the system with at least 75 faculty members using clickers, mostly with courses in business and the natural sciences.  (Some departments purchased their own sets, so IT isn’t sure how many faculty are using clickers in these departments.)  I haven’t spoken with many business faculty about how they use clickers, although the business and management section of my bibliography is one of the larger ones.  I might try to track down a couple of Maryland business faculty to find out how they are using clickers.

Challenges included registering student clickers, which required two different registration systems for a while.  Also, the software isn’t as robust on Macs, which poses a problem for some faculty.  They also went from 10 classrooms with receivers and software to 150 in a single semester, which was challenging!  TurningPoint’s receivers also needed upgrading last academic year, which posed some logistical problems.

Currently, the IT office handles technical support for faculty using clickers, while the Center for Teaching Excellence handles training and promotion.  The two units seem to work well together, offering joint training sessions that have gone over well.  IT finds it necessary to have a staff member devoted almost entirely to clicker support at the start of a semester.

Christopher Higgins is particularly excited about TurningPoint’s new ResponseWare Web system, which enables any Web-enabled device (laptop, iPhone, etc.) to function as a clicker.  He likes the fact that the system leverages existing hardware that can also perform other functions, as well as the fact that the Web system is cheaper–$20 per student per year or $40 per student for four years.  Christopher found that many students took advantage of an Apple promotion this fall to purchase iPod Touches and iPhones along with their Mac laptops, so a lot of students at Maryland have devices that can run the new TurningPoint system.

University of Delaware

The adoption committee at Delaware included not only faculty members and IT staff, but staff from the assessment offices and, I think, students, as well.  (I may have misheard that last point.)  They standardized on Interwrite PRS and spent the summer of 2006 training faculty and installing receivers and software in all classrooms with at least 75 seats.  (They now have receivers and software in all classrooms with at least 35 seats, which is most of the classrooms on campus.)  By the fall semester about 3,600 students and 40 faculty were using clickers.  More faculty started using clickers in the fall of 2007, but this year there are relatively few faculty new to clickers since most faculty have heard about them and decided whether or not to use them.

Clickers are popular in courses in the natural sciences, as well as psychology, political science, and nursing.  Many first-year undergraduate courses use clickers, which means that faculty teaching “downstream” courses are now more likely to use clickers, as well, since most of their students already own the devices.

Clickers are used in non-academic settings on campus, too.  Residential Life uses them to collect information on student experiences and opinions in the dorms.  The library and the office of assessment use them, as well.

Challenges to the support of classroom response systems on campus included a move to a new unique student identifier.  The Interwrite PRS system allows students to enter and store their unique identifiers on their clickers, but it took some work to have all the students request a new unique identifier on the Delaware Web site.  Other challenges included handling new versions of the software and a move from one course management system (WebCT) to another (Sakai).

One process Nancy mentioned that I particularly liked is that when faculty request clickers for their courses from the bookstore, there’s a checkbox on the form that asks them if they are new to using clickers.  Faculty who check this box are then sent resources by Nancy’s office and added to Nancy’s mailing list.  This helps faculty connect to useful pedagogical and technical resources and helps Nancy know who’s using clickers on campus.

Nancy also mentioned that she’s found it helpful to give faculty members their own receivers so they can practice as much as they need to outside of the classroom.  She finds that students know when their teachers aren’t comfortable with a technology, so time for practice is important.

Another point Nancy made was that the code of student conduct at Delaware has been amended to mention clickers.  Students are to respond for themselves, not on behalf of other students.  She indicated that faculty appreciate having this clause in the code since it means there’s a process they can follow if they suspect students of cheating by bringing other students’ clickers to class.

University of Pittsburgh

Things at Pittsburgh have been a little more chaotic.  A review committee consisting of IT staff, facilities staff, and registrar staff decided in 2003 not to adopt a single system on campus, so there are now a few different systems in use.  There’s some movement toward standardizing on eInstruction, but there doesn’t seem to be a central decision-making office enforcing that decision, so faculty are still free to use other systems.

Clickers are popular in biological sciences, physics, nursing, and pharmacy.  Also, the School of Social Work uses them frequently in their gambling addiction counselor program.  I wouldn’t mind talking to some of those faculty to find out how they use clickers in that setting.

Michael Arenth named a few challenges they’ve faced at Pittsburgh, including managing faculty expectations (particularly for faculty who get excited by clickers but don’t plan on the time necessary to learn the systems), cheating (students who bring other students’ clickers to class to cheat on attendance grades), and set-up between classes, since until recently systems weren’t being permanently installed in classrooms.

I believe Michael said that Pittsburgh was still using infrared clicker technologies until fairly recently, when they switched to radio frequency.  (Most people I’ve talked to made the switch a couple of years ago.)  He noted that the IT group on campus had to approve the use of radio frequencies for this purpose.  I hadn’t heard of this kind of approval before, so I found this point interesting.

Common Issues

All three campuses have surveyed faculty and students about clickers, and they used some common questions to enable comparisons among the three campuses.  They found that faculty frequently use clickers to measure student comprehension, measure student opinion, obtain anonymous responses, monitor attendance, and facilitate quizzes.  The presenters spoke only briefly about these results, and it was unclear to me to what extent faculty use comprehension or opinion questions to generate small-group or classwide discussion or to practice “agile teaching” by responding to the results of clicker questions during class.  I was, however, happy to see that clickers were used more for formative assessment (measuring comprehension and opinions) than summative assessment (quizzes and tests), since I think that’s where clickers really shine.

An audience member at the presentation asked about the student response to clickers.  The panel indicated that students like the interactivity that classroom response systems provide.  They confirmed what I’ve now heard from multiple sources, that students want to see some value added to their learning experience as a result of the clickers.  If a faculty member just asks a question and quickly moves on, there’s no interactivity and little impact on student learning.  Students don’t respond well to this.

Finally, I spoke with Danny Sohier of Université Laval in Québec after the session.  His school is using clickers to conduct end-of-semester course evaluations during class.  They found that online course evaluations resulted in low response rates, a problem I’ve heard about from many institutions.  They now use clickers to collect student responses to multiple-choice evaluation questions during class in some courses, inviting students to respond to open-ended questions online outside of class.  Danny indicated that this arrangement is working pretty well.  I might follow up with him to learn more about this process.

That’s it for my notes on this session.  I was glad to see a clicker session on the agenda at EDUCAUSE.  I was a little surprised at the number of audience members who asked questions at the end of the session and at the nature of those questions.  It seems there are a lot of institutions that are still just starting to work on adoption and support issues.  That indicates to me that use of classroom response systems will continue to grow over the next few years.

Article: Len (2007)

Patrick M. Len recently commented on an earlier post about clicker question banks to share links to his blog, where he regularly posts astronomy and physics clicker questions he has used.  Since he was kind enough to share those links and to make his clicker questions available online for others to use, I thought I would take a look at one of his recent articles on classroom response systems.

Reference: Len, P. M. (2007). Different reward structures to motivate student interaction with electronic response systems in astronomy. Astronomy Education Review, 5(2), 5-15.

Summary: In this study, Len explores the impact of two different “reward structures” used for clicker activities in a medium-to-large astronomy survey course at Cuesta College:

  • Introductory questions were asked at the start of class. These questions were graded on effort, not accuracy of student responses. Students were allowed to discuss their answers with each other before voting. Some did, and some did not.
  • Review questions were asked at the end of class. These questions were graded on effort, as well, but if at least 80% of the class answered the day’s questions correctly, those participation points were doubled. This led to some raucous class-wide discussions about the questions.

Sample questions of each type, many of which are conceptual understanding or application questions, are available online in appendices to the article.
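To make these two reward structures concrete, here’s a minimal sketch of how a day’s participation points might be computed. The point values (one point per question answered) are my own assumption, and I’m reading “at least 80% of the class answered the day’s questions correctly” as 80% of the submitted answers matching the answer key; Len’s actual grading scheme may differ on both counts.

```python
# Minimal sketch of Len's two reward structures under assumed point values.
# Introductory questions: effort-only grading.
# Review questions: effort grading plus a class-wide success bonus.

def introductory_points(student_answers):
    """Effort-only grading: any submitted answer earns the point."""
    return sum(1 for answer in student_answers if answer is not None)

def review_points(class_answers, answer_key, threshold=0.80):
    """Effort grading with a success bonus: if at least `threshold` of all
    submitted answers match the key, everyone's points for the day double."""
    submitted = correct = 0
    for answers in class_answers.values():
        for given, key in zip(answers, answer_key):
            if given is not None:
                submitted += 1
                correct += (given == key)
    bonus = 2 if submitted and correct / submitted >= threshold else 1
    return {student: bonus * introductory_points(answers)
            for student, answers in class_answers.items()}

day = {"pat": ["B", "C"], "sam": ["B", None], "lee": ["B", "C"]}
print(review_points(day, answer_key=["B", "C"]))  # all submitted answers correct, so points double
```

A structure like this makes it easy to see why the success bonus produced such raucous class-wide discussion: every student has a stake in every other student’s answer.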

Individual students were identified via their responses to a survey as independent workers (“self-testers” in Len’s terminology) or collaborators during the introductory questions. Two pre/post instruments were used to explore differences in these two types of students: the Survey of Attitudes Toward Astronomy (SATA) and the Astronomy Diagnostic Test (ADT).

One key finding of the study was that collaborators (those students who chose to work together to answer the introductory questions) became less confident in their astronomy knowledge and skills and valued astronomy less over the course of the semester, as measured by the SATA. Collaborators also “reported a lower pretest proficiency in science,” according to the ADT, even though they were as accurate in their answers to introductory questions as their self-tester peers.

Len concludes that this one-semester course in astronomy had a significant, negative impact on the beliefs and attitudes about science of these students. He recommends that since these students are “predisposed toward collaborative behavior,” instructors should think carefully about how to use clickers to structure collaborations in ways that increase student confidence and help them value astronomy more.

One other finding was that the self-testers rated the helpfulness of the instructor’s lecture for their learning more highly than the collaborators did. This complements other findings (Graham, Tripp, Seawright, and Joeckel, 2007) that students who prefer not to participate find clickers less helpful.

Commentary: There’s a lot of data here to make sense of, but I think Len has successfully argued that students who self-report that they aren’t as good at math and science as their peers (a) prefer to collaborate when given the opportunity and (b) became less confident in themselves and less positive toward science during this course. His recommendation to structure collaborative activities (with or without clickers) in ways that are sensitive to these affective issues is a sound one.

Along those lines, it’s possible that the attitudes and beliefs about science held by the collaborator students would have worsened more over the course of the semester had they not been allowed to collaborate on introductory questions. If they had not been allowed to do so, they likely would have done more poorly on these questions (instead of answering them as accurately as their self-tester peers), which in turn would have discouraged them more.

This issue of students in physics and astronomy courses becoming less interested in science because of these courses has been reported elsewhere in the Physics Education Research (PER) community (notably by Carl Wieman’s research groups at the University of Colorado and the University of British Columbia), and I think it’s an important challenge in science education. I’m glad to see this article by Len helping to explore this issue.

Len’s central question–the impact of different reward structures on students in his courses–is only partially answered, in my opinion. It’s clear that the “success-bonus” reward structure used for the review questions encouraged students to collaborate. However, given the way he describes the class environment when students answered his review questions (“Some students shouted for assistance from the rest of the class; others attempted to coach the rest of the class on how to answer, indicating their answer on the overhead projector using fingers, on the screen using laser pointers, or vocally”), it’s unclear to what extent critical reasoning, as opposed to persuasion and peer pressure, was a factor in these collaborations. An investigation of more structured approaches to implementing this reward structure would be beneficial.

As usual, your comments are welcome!

Article: Jenkins (2007)

Reference: Jenkins, A. (2007). Technique and technology: Electronic voting systems in an English literature lecture. Pedagogy, 7(3), 526-533.

Summary: In this article, Alice Jenkins of Glasgow University in Scotland describes her use of clickers to teach an undergraduate poetry class with 110 students. Her primary use of clickers was for formative assessment immediately following a portion of a lecture on a particular topic, leading into “agile” teaching and class-wide discussion. Types of questions included the following:

  • Application Questions – In the article, Jenkins focuses on using clickers to teach students metrical analysis of poems. She provides an example of this type of question, asking students to predict certain properties of the next line of a given poem, as well as an analysis of student responses to her example question.
  • Critical Thinking Questions – She mentions asking her students “to assess certain formal qualities of poems” using a Likert scale and to choose an adjective that best describes a poem’s diction.

Jenkins also makes an interesting point about the use of the “hand-raising” method of answering questions in class. She asserts that this method works better for questions with two possible answers, since students can be asked to raise their hands for one answer and keep their hands down for the other answer. This allows students to answer a little more independently than the usual method of asking for a show of hands for each answer choice sequentially. However, she also mentions that non-participating students confuse the results here, since their lack of raised hands would be interpreted as “votes” for one of the two answer choices.
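To illustrate that confound with made-up numbers: in a class of 110, suppose 60 students raise their hands for one reading of a line, 35 deliberately keep their hands down for the other reading, and 15 simply aren’t engaging with the question. The instructor sees a 60-50 split, when the actual split among participating students is 60-35. A clicker system, which requires an explicit response for every option, avoids conflating the abstainers with one of the answer choices.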

Jenkins is also fond of the “I don’t know” option (which isn’t possible with binary “hands-up” questions) since it discourages students from voting randomly (useful presumably because it encourages students to ask themselves, “How confident am I in this answer?”) and provides Jenkins with a sense of the difficulty level of a given question.

Jenkins was observed by a colleague during the classes in which she used clickers. The colleague reported that Jenkins had “asked for and received oral responses from the students 28 times.” Jenkins said she was “astonished” to hear this, presumably because this high level of interaction was unusual for this large course.

Jenkins also surveyed her students about the use of clickers. One interesting result was that 60% of her students said they “worked out the answers to all the questions” when clickers were used, versus only 10% without clickers. The primary uses of clickers students identified as beneficial were (a) helping students assess their own understanding, (b) allowing for anonymous responses, (c) helping the instructor assess student learning, and (d) increasing participation.

Commentary: This is the first published article I’ve found describing the use of clickers in a humanities class, so I was pretty excited to discover it. (Stuart, Brown, and Draper (2004), also from Glasgow University, describe the use of clickers in a philosophical logic course, but that kind of course is fairly unusual in the humanities.) I think there’s a lot of potential for the use of clickers in the humanities, and the interesting application and critical thinking questions Jenkins describes in this article are great examples of that potential. I hope that this article encourages others in the humanities to consider using clickers.

One of the reasons I think that instructors in the humanities have difficulty seeing value in the use of clickers is that their experience with multiple-choice questions is based on the use of such questions on exams, where they are usually factual questions. Asking these kinds of questions in class with clickers isn’t usually particularly exciting or useful, so I can understand why humanities instructors might not see value in clickers.

However, one can use clickers to ask “one-best-answer” questions that encourage critical thinking. For these questions, students are asked to choose from several answers, more than one of which has some merit. The point of these kinds of questions isn’t to find out if students can identify the “right” answer, since there are no “right” answers. Instead, the point is to engage students in a question (by asking all students to think about the question independently and commit to an answer) to lead into a rich class-wide discussion of the material. These kinds of “one-best-answer” questions don’t work well on exams (unless one requests that students defend their choices, I guess), but they can work very well in class.

Finally, I think it’s really interesting that 60% of Jenkins’ students said they thoughtfully considered questions asked via clickers versus 10% who said they would do so for questions not asked via clickers. I’m reminded of a student of Elizabeth Barkley’s, captured in a video Elizabeth shared at a conference I attended. In commenting on the Think-Pair-Share collaborative learning technique, the student said something like, “With Think-Pair-Share, I know I’m going to have to pair up and share my thoughts on a question, so I think about the question. If I know I’m not going to ‘pair’ or ‘share,’ then why should I ‘think’?” Jenkins’ survey results support this assertion!

Article: Reay, Li, and Bao (2008)

The news article I mentioned in my last post referred to a recent Ohio State study on the impact of classroom response system use in physics courses.  Since this study has received other press, I thought I would comment on it here.

Reference: Reay, N. W., Li, P., & Bao, L. (2008). Testing a new voting machine methodology. American Journal of Physics, 72(2), 171-178.

Summary: This study examines the effect of the use of classroom response systems on student performance in introductory electricity and magnetism courses.  For three consecutive semesters, student performance, as measured by results on common exam questions and on a particular concept inventory (the Conceptual Survey of Electricity and Magnetism), was compared between two sections, one in which classroom response systems were used and one in which they were not used.

For the authors, use of clickers typically involved question sequences.  Each question sequence typically focused on a single concept explored in multiple contexts and was thus designed to promote transfer of knowledge between contexts.  The authors describe most of their question sequences as either “easy-difficult-difficult” sequences or “rapid-fire” sequences.  The latter type featured relatively easy questions that students were required to answer quickly.  The authors include examples of each type of question sequence.

Students in clicker sections answered 72%, 68%, and 63% of common exam questions correctly over the three semesters.  Students in non-clicker sections answered 64%, 56%, and 52% of exam questions correctly, yielding differences of 8, 12, and 11 percentage points between the two sections and indicating that student performance was improved by the use of clicker question sequences.

The concept inventory used was administered at the start and end of each semester.  During the first and third semesters, students in the clicker sections performed better on the post-test than students in the non-clicker sections.  The differences were statistically significant with p-values of 0.005 and 0.009, respectively.  During the second semester, there was not a statistically significant difference between the two sections, although attendance in the second semester clicker section was low, averaging around 50%.  (Normalized gains between the pre-tests and post-tests were higher in the clicker sections for all three semesters, although the authors didn’t compute these gains or associated p-values.)
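For readers outside the physics education research community, the “normalized gain” mentioned here is presumably the standard Hake gain, which expresses a class’s improvement from pre-test to post-test as a fraction of the improvement that was possible:

\[
\langle g \rangle = \frac{\text{post-test \%} - \text{pre-test \%}}{100\% - \text{pre-test \%}}
\]

For example, a section averaging 40% on the pre-test and 70% on the post-test has a normalized gain of (70 - 40) / (100 - 40) = 0.5.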

The authors also note that the gain from pre-test to post-test on the concept inventory was statistically the same for male and female students in the clicker sections.  However, the gain for male students was statistically greater than the gain for female students in the non-clicker sections.  The authors conclude that “the voting machines reduced the gap between male and female student performances on tests.”  They note that they hope to explore this finding further in future research.

Student response to the use of clickers was generally very positive on end-of-semester surveys.  The responses were not as positive during the third semester, and the authors hypothesize that these results might be due to overuse of clicker questions during that semester or to the fact that clicker questions that semester were “lightly graded based on attendance.”

Comments: These results are very persuasive, and the finding about gender differences is particularly encouraging.  The authors did a great job of explaining their research and teaching methods and the complications that arose in their study.  The example question sequences the authors included, of both the “easy-difficult-difficult” and “rapid-fire” types, are appreciated, since they give a concrete sense of what kinds of clicker questions were used in the courses.  As a result, the authors provide a useful description of how clicker questions can be used to promote transfer of knowledge between contexts, an important but difficult-to-achieve learning goal.

My main concern with this study is that it doesn’t seem to assess the impact of the use of classroom response systems as much as it seems to assess the impact of an entire pedagogy, one that happens to use classroom response systems.  The “control” sections not only didn’t use clickers, but it’s unclear if they used any version of the pedagogical approach used in the “experimental” sections to promote transfer of knowledge.  The authors note that “non-voting machine lecturers had access to question sequence material, but otherwise taught in a traditional manner.”  No details are provided on what a “traditional manner” means in this instance.

As a result, I think this study rather successfully argues that a pedagogy involving (a) question sequences designed to promote transfer, (b) variations on the standard peer instruction method, and (c) classroom response systems can be a very effective pedagogy.  That’s actually a great result and I hope that instructors in physics and other disciplines pay attention to it.

However, these findings don’t isolate the impact of the clicker technology itself.  For instance, what about the fact that the technology allows all students to respond to questions independently and be held accountable for responding?  Or the fact that instructors can view results of questions during class and make “on the fly” teaching decisions based on those results?  Or the fact that students can see where they stand in relation to their peers via the results display?  These are properties of the technology itself, and it’s unclear from this particular study what role they play in student learning.
