Teaching with Classroom Response Systems

Resources for engaging and assessing students with clickers

Archive for the ‘Upper-Level Courses’ Category

Jeffrey R. Young’s Chronicle article, “Reaching the Last Technology Holdouts at the Front of the Classroom,” has apparently struck a nerve among professors, particularly those who are critical of educational technology. As I write this, the article has 59 comments on the Chronicle site, which is far more than most articles receive. Even the graph accompanying the article has received 13 comments!

Since clickers are mentioned in the article and in many of the comments, I thought I would weigh in here on the blog…

First, it’s worth noting that Chris Dede, the Harvard University learning technologies professor interviewed for the article, doesn’t make the argument that professors who don’t use technology are shirking their duties. Several of those who left comments seem to think so, however. For example, here’s comment #33 from Emily in NY:

“Dede does nothing in this article but set up a false dichotomy between professors committed to outdated, boring and irrelevant teaching methods and those eagerly embracing the modern technologies that contemporary students crave.”

Here’s the closest Dede comes to that argument, in the National Educational Technology Plan he helped draft for the US Department of Education in March:

“The challenge for our education system is to leverage the learning sciences and modern technology to create engaging, relevant, and personalized learning experiences for all learners that mirror students’ daily lives and the reality of their futures.”

Dede’s arguments in the Chronicle article are focused on motivating professors to tap into the latest research on learning and to continue improving their teaching practices over time. From the report he drafted, it’s clear he thinks that technology can help with that, but he doesn’t seem to be arguing that professors who don’t use technology are irresponsible, just that those who stick with the same teaching methods you’d find in a classroom circa 1900 are. Sure, technology can be a big part of change, but many of the teaching innovations mentioned in the article (such as David Pace’s work on enhancing history teaching) don’t involve any technology.

Speaking of false dichotomies, however, here’s one from comment #18 by user “tee_bee”:

“What matters is that students learn–and a skilled teacher with a blackboard is still going to do a far better job than a bozo with some clickers and powerpoint slides.”

True, a skilled teacher is going to do a better job than a bozo any day, regardless of technology. But comparing a skilled teacher to a bozo isn’t really the point here. Might technology (including clickers) help a skilled teacher be even more effective? Yes, that happens. And might technology help a relatively novice teacher become more effective? Yes, that happens, too. Those are the kinds of changes in teaching that are worth thinking about and encouraging, and I think that’s a point that Chris Dede would agree with.

How might teaching with clickers help a good teacher be even more effective? Several comments on the Chronicle article were skeptical of clickers’ potential for doing this. For example, here’s what user “ikant” said in comment #21:

“I’m young, tech-savvy, and pretty unconvinced by this article. I can’t speak for all fields, of course, but I’m pretty skeptical that good class discussions and quality writing in the humanities are particularly improved by clickers etc… the heart of what I do is in trying to educe questions, critical thought and excitement about books which students might previously have thought were utterly irrelevant to them, and (my evaluations indicate that) I do this very well with no particular technological bells and whistles in the classroom. Am I missing something?”

I’m glad that this instructor is capable of leading effective class discussions, fostering critical thinking, and increasing student motivation in the classroom. Let me be clear: Doing so is entirely possible without clickers! However, not all instructors are as skilled as “ikant” appears to be, and even for instructors like “ikant,” it’s possible that clickers would enhance an already productive classroom environment. Some examples can be found in past posts on this blog.

Here’s a similar comment (#26 on the Chronicle site) from user “csgirl”:

“The reason I don’t use blogs and clickers is that they simply are not appropriate to the material I teach. Clickers in particular are useless to me – I care about the strategies my students are using to solve problems, not whether they can click the right answer in a quiz.”

This is a common misconception about clickers, that they’re only good for quizzing students on basic conceptual understanding and recall. Here’s another formulation of it, from user “chewy18” in comment #53:

“They might work well for understanding basic concepts or in preparation for recognition/recall examinations where the test question is a line long and the answer a word or two in length. What about those of us who teach upper division courses where we struggle with students who have not, until they reach senior status, even been exposed to the analytical reasoning process. Suddenly they discover that life is, after all, not a multiple choice test and developing an argument that could go either way, is a requirement. How does that appeal to the clicker technology?”

Sure, clickers work well for assessing basic conceptual understanding and factual recall, but they’re useful for teaching at the higher levels of Bloom’s Taxonomy, too. Past posts on this blog offer more examples that demonstrate this.

And for “csgirl,” here’s a great collection of resources on using clickers and peer instruction in computer science from Daniel Zingaro.

Finally, you can imagine how this comment (#37) from user “fizmath” made me feel:

“The teacher/physician analogy is lousy. We have real data to show that new medical tech benefits patients. You can’t say the same about blogs, videoconferencing and those stupid clickers.”

(This is a response to Chris Dede’s analogy that teachers who don’t update their teaching methods over time are akin to physicians who don’t update their medical practices over time.)

Want some research? Try these studies, all of which are well designed and support the claim that clickers used in appropriate ways enhance student learning:

  • Stowell & Nelson (2007) – Clickers provided instructors with more accurate assessment of student learning during class than other response methods, including a show of hands.
  • Yourstone, Kraye, & Albaum (2008) – The use of clickers for end-of-class quizzes improved student exam scores by four points over the use of pencil-and-paper quizzes discussed the next day in class, likely because of the immediate feedback clickers provided to students on their learning.
  • Hoekstra (2008) – Clickers helped students be more attentive during class (since they knew clicker questions could be asked at any time) and participate in more meaningful ways (both before votes were submitted and after results were displayed).
  • Smith et al. (2009) – Students actually learned from each other when discussing clicker questions in pairs prior to voting. They don’t “simply choose the answer most strongly supported by neighbors they perceive to be knowledgeable.”
  • Mayer et al. (2009) – Clickers made it easier for instructors to ask their students questions during class and for students to respond to those questions, leading to improved student learning through better class discussions.

My summary for those skeptical of using clickers in the classroom: Read the literature, find out how those in your discipline are using clickers effectively, and see (preferably by experimentation) whether those methods might help you enhance your teaching, regardless of how effective you are currently as a teacher. If a classroom response system doesn’t help you do your job better, then don’t use one. They’re not for everyone. However, don’t write clickers off without first investigating their potential. They’re far more useful and versatile than you might think at first.

Image: “Innovation” by Flickr user thinkpublic, Creative Commons licensed

Reference: Webking, R., & Valenzuela, F. (2006). Using audience response systems to develop critical thinking. In D. A. Banks (Ed.), Audience Response Systems in Higher Education: Applications and Cases. Hershey, PA: Information Science Publishing.

Summary: Webking and Valenzuela describe ways they use classroom response systems in their political science courses at the University of Texas-El Paso to foster critical thinking through active participation and class discussions. After noting some commonly cited advantages of teaching with clickers—easier attendance and participation record-keeping, greater participation through anonymity and accountability, and the collection of data to inform agile teaching decisions—the authors provide several concrete examples of clicker questions they have found valuable for developing their students’ critical thinking skills.

The authors’ first example is a sequence of clicker questions that serve to guide students through a close reading of a few passages in the play Antigone. At one point in the play, Antigone makes a statement that seems to very clearly express her belief that obedience to the gods trumps obedience to the king. At another point, however, she makes a somewhat cryptic statement that calls this previous assertion into question. Webking and Valenzuela start with an understand-level question that asks students to clarify this second statement. They follow this with an application-level question asking students to identify a logical consequence of her cryptic statement, one which seems to run counter to her earlier statement about serving the gods. Their third question is an analysis-level one, and it asks students to reconcile the two seemingly contradictory statements by Antigone by identifying a hidden motivation of hers that makes her statements consistent.

Webking and Valenzuela also describe how they use a particularly challenging, analysis-level question about Plato’s Euthyphro. The question asks students to identify the central argument of a particular passage, one that deals with the relationship between justice and piousness. The question is one that Jean McGivney-Burelle would call a “horizontal question” since students answering the question are typically split evenly among three answer choices. Webking and Valenzuela note that one of the three popular responses can’t be supported by the text. Students who argue for this answer choice quickly realize that they were projecting their own perspectives on the text, not arguing from the text. This is a useful metacognitive moment for these students. The class discussion then focuses on the remaining two popular answer choices. Making sense of these two choices requires the students to grapple with categorical logic, the kind that is well-represented by Venn diagrams. Once the students have discussed their way to the correct answer, they realize the value of categorical logic in making sense of arguments like the ones Plato makes—another metacognitive moment.
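For readers who don’t know the dialogue, the categorical relation at stake is, on the standard reading of the Euthyphro, that piety is a part of justice: every pious act is just, but not every just act is pious. The sketch below is only my illustration of that kind of containment relation, the sort a Venn diagram captures; it is not taken from Webking and Valenzuela’s actual answer choices.

    % A sketch (mine, not the authors') of the categorical relation at issue
    % in the Euthyphro passage, on the standard reading that the pious is a
    % proper part of the just -- the kind of containment a Venn diagram shows.
    \[
    \text{Pious} \subsetneq \text{Just}
    \quad\Longleftrightarrow\quad
    \forall x\,\bigl(\text{Pious}(x) \rightarrow \text{Just}(x)\bigr)
    \;\wedge\;
    \exists x\,\bigl(\text{Just}(x) \wedge \neg \text{Pious}(x)\bigr)
    \]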

The Plato example comes from one of the authors’ smaller, upper-level courses, and they assert that “it is in a smaller class that the [classroom response] system is at its best in encouraging discussion and precise argument.” They reach this conclusion, in part, because of the ability of their classroom response system to report to the instructor individual student responses to clicker questions as those responses are submitted. The authors use these individual, real-time results to guide their post-vote discussions, focusing on “groups which had difficulties in reaching consensus, students or groups which answered particularly quickly or particularly slowly, students who disagreed with their groups, students who changed their minds, and so on.” They argue that the ability to see individual, real-time results is important in leading effective post-vote discussions since it allows instructors to analyze “each student’s rational odyssey with each question.”

Also in the article are two examples of student perspective questions the authors use to motivate particular topics in their courses. In one example, they ask students to identify questions they aren’t likely to ask someone they’ve just met. Invariably, students identify the questions about religion and politics. The authors point out to students that one reasonable conclusion from this is that religion and politics are the least important things to know about when getting to know someone. This motivates students to want to learn why this social phenomenon exists.

Comments: This would be a great article to give a faculty member in political science or philosophy who’s interested in getting started teaching with clickers. Webking and Valenzuela provide a concrete, interesting example of a guided close reading of a text (Antigone) using clicker questions of increasing difficulty. This is a great model for instructors in the humanities and social sciences interested in helping their students develop critical thinking and close reading skills. I wish, however, that they had included some voting data in this example and had discussed how they use the results of these questions to guide discussions, as they did with their Plato example.

The Plato example is a great model of clicker use in text-based courses, too. One reason is that the approach Webking and Valenzuela use leads students to appreciate the nature of argument in their discipline. They write, “In time, and actually not very much time, students learn to care more about the strength of the argument than about having their initial position defended as right.” The authors present a useful list of options for leading these kinds of class discussions—focusing on groups that were conflicted, students who answered quickly or slowly, students who changed their minds, etc.

The authors assert that the quality of discussions they can foster depends on the availability to the instructor of real-time, individual voting data. Not all classroom response systems have this feature and, in my experience, instructors who have the option of looking at individual results as they come in don’t frequently take advantage of this option. I think that perhaps the availability of real-time, individual results isn’t as critical as Webking and Valenzuela assert. I’ll often have my students vote on a question individually, then discuss it in groups, then vote again. I’ll sometimes ask for a student who changed his or her mind from the first vote to the second vote to explain his or her reasoning. I can also see asking for a student who disagreed with his or her group to contribute to the post-vote discussion.  (That’s a nice idea, one that I’ll have to try soon!)

My approach, using the aggregate and not individual voting data, relies on students who fit certain profiles volunteering to share their perspectives with the class. Webking and Valenzuela’s approach doesn’t rely on volunteers, but it isn’t quite cold-calling, either, since they select students only after the students have had a chance to consider and respond to the clicker question. I’d like to call this “warm-calling” since the students have had a chance to warm up to the question and since the instructors aren’t calling on students without any knowledge of what those students might contribute to the discussion. I’m not familiar with many instructors who practice warm-calling.  If you do, I’d love to hear from you in the comments about your experiences doing so.

Image: “Coffin Sculpture of Antigone” by Flickr user Xuan Rosamanios / Creative Commons licensed

University of British Columbia professor of earth and ocean sciences Roland Stull recently gave his popular course on the science of storms a clicker makeover.  Persuaded by research from Carl Wieman’s Science Education Initiative at UBC, he now structures his class sessions around conceptual understanding clicker questions, using a version of the standard peer instruction technique.  Stull has his students read their textbook and respond to online quiz questions the night before class.  He has a TA analyze their answers for common areas of confusion, then adjusts his plans for class to address those areas.  Stull notes a variety of benefits to this teaching approach:

“It’s a lot more fun for me to teach the class,” Stull said in an interview in his UBC office. “Not only are the students interacting with themselves, but they are much more willing to ask me questions during class.”

The Georgia Straight article about Stull’s use of clickers quotes Alan Webb, a University of Waterloo accounting professor who published a study ostensibly showing that teaching with clickers actually decreases student participation in class.  However, as I noted in my review of this study, what Webb actually showed was that indicating the correct answer to a clicker question prior to class discussion of the question decreases student participation.

At the University of Buffalo School of Dentistry, instructors John Maggio and Chester Gary have students respond to questions during class using their laptops as response devices.  Since the school requires students to have laptops so they can access electronic textbooks, using “virtual clicker” software on those laptops makes sense.  Maggio finds that his students have rather short attention spans, so he uses clicker questions to keep them engaged during his 90-minute classes, asking as many as twelve questions per class.  The frequent questions and the fact that some are graded on accuracy (not just effort) keep his students from using their laptops to distract themselves.

Just like Roland Stull at UBC, John Maggio says that his clicker questions have increased participation in his class:

“They raise their hands much more often, they’re discussing things much more, they’re participating more than they ever have,” [Maggio] says, noting that his classes featured very little discussion or debate before the introduction of the audience-response technology.

One of the criticisms I often hear about teaching with clickers is that doing so gives shy students an excuse not to summon the courage to speak out in class.  These two news articles would indicate that’s not the case, after all.

Article: Mayer et al. (2009)

Reference: Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34(1), 51-57.

Summary: In this article, Richard Mayer and his collaborators, nine in all, describe the results of an experiment comparing the use of clickers to non-clicker alternatives.  A large-enrollment educational psychology course, taken mostly by junior and senior psychology majors, was taught one year in a “traditional” method, without the use of in-class questioning or clickers.  The next year, the same course (with very similar students) was taught using in-class questioning facilitated by clickers.  In the third year, in-class questions were used, but instead of having students respond using clickers, students wrote their responses down on paper quizzes, passed those papers in to the instructor, then indicated their responses to the questions with a show of hands.

Differences among the three courses were kept to a minimum.  The same instructor taught all three courses, and the lecture materials were the same, with the exception of the additional questions added to the clicker and no-clicker groups.  Reading assignments and exam questions were identical, as well.  Having the students respond to questions in writing in the no-clicker class meant that their initial responses to a question were largely made independently of their peers, just as in the clicker class.  (The answers they signified during the shows of hands were, on the other hand, not necessarily independent.)

There were some differences, however.  The in-class questions in the clicker and no-clicker groups were graded (1 point for answering incorrectly, 2 points for answering correctly), which meant grade incentives were a possible motivator in those two groups.  There was no parallel grade incentive in the “control” group.  Also, in the no-clicker class, the paper quizzes were typically administered at the end of a class session for logistical reasons (distributing and collecting the quizzes took time), whereas in the clicker class, questions were asked at various points during class.

The authors’ findings were certainly interesting.  When they compared midterm and exam performance across the three courses, they found that the clicker class performed significantly better on the exams, averaging 75.1 points out of a possible 90.  The no-clicker class averaged 72.3, and the control group averaged 72.2.  (The difference here was statistically significant with p=.003.)  So the clicker class ended up with an average grade in the course 1/3 of a letter grade higher than the other two classes, a B instead of a B-.  And the paper quizzes plus hand-raising made “no discernible difference” in student learning outcomes.
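To see where the “1/3 of a letter grade” figure comes from, it helps to convert the exam averages into percentages of the 90 possible points.  The letter-grade cutoffs below are an assumed standard plus/minus scale, not something reported in the article.

    % Exam averages as percentages of the 90 possible points.
    % The letter grades shown assume a standard +/- grading scale (my assumption).
    \[
    \tfrac{75.1}{90} \approx 83.4\% \ (\text{B}), \qquad
    \tfrac{72.3}{90} \approx 80.3\% \ (\text{B}{-}), \qquad
    \tfrac{72.2}{90} \approx 80.2\% \ (\text{B}{-})
    \]

On that assumed scale, the roughly three-percentage-point gap is about one grade step, consistent with the B versus B- comparison above.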

Even more interesting was the following.  The clicker class performed almost identically to the other two classes on exam questions that were similar to questions asked (via clickers or paper quizzes) in class.  However, on exam questions that were dissimilar to in-class questions, the clicker class performed significantly better (50.2 vs. 47.9 and 48.2, p=.002).

The authors conclude from these data that the logistical difficulty of implementing the paper quizzes (distributing the quizzes, collecting the quizzes, and so on) interfered with any benefit gained from questioning students in this manner.  They also note that doing the questioning at the end of a class session might reduce the impact of the questioning on the students’ learning.  The use of clickers made questioning students “seamless” for the instructor and allowed the instructor to test and provide feedback to students closer in time to the initial learning experience.

The authors also note that some of the components of active learning–“(a) paying more attention to the lecture in anticipation of having to answer questions, (b) mentally organizing and integrating learned knowledge in order to answer questions, and (c) developing metacognitive skills for gauging how well they understood the lecture material”–might serve to explain why the clicker class outperformed the other two classes on exam questions dissimilar to in-class questions.

Comments: These results are fairly persuasive.  The authors did a good job of controlling for potentially confounding variables, and the use of three groups–clickers, no clickers, and control–meant that they could isolate the effect of the clickers from the effect of having students respond to questions during class.  Their conclusion–that clickers make questioning easier for both instructors and students and so allow questioning to have more impact–makes sense to me.

Another possible explanation for the higher learning gains in the clicker class is that the students in the clicker class were able to see the display of results of the clicker questions, whereas the students in the no-clicker class had to rely on a show of hands to see where their peers stood on a question.  Since it’s been shown that the hand-raising method leads to inaccurate representations of student understanding (see, for instance, Stowell and Nelson, 2007), it could be that the more accurate reporting of student responses to questions allowed by the classroom response system led to students taking the process more seriously in one way or another.

It’s also worth noting that after questions were asked and answered by students in both the clicker and no-clicker classes, not too much happened.  The instructor would state the correct answer, have a student volunteer share reasons for the correct answer, then share his own reasons for the correct answer.  There wasn’t much in the way of agile teaching (doing something different in class in response to the results of a clicker question) or peer instruction (having students discuss questions with each other prior to answering).  There wasn’t much discussion of incorrect answers, apparently.  All of these processes have potential pedagogical benefits.  Had they been employed, the difference in learning outcomes between the clicker class and the other two classes might have been even greater.

I should also point out that the article doesn’t clearly state the instructor’s experience teaching with clickers, although it seems a safe bet that the instructor was new to using clickers.  Instructor experience is another important variable, as is the nature and difficulty of the questions used.  A few sample questions were included in the article, but it would have been helpful to know how difficult the students found these questions.  Did most students answer them correctly?  Did a lot of students answer them incorrectly?

It was @RogerFreedman who pointed me (via Twitter) to this short essay about the use of clickers in small political science classes.  In the essay, University of Denver political science professor Tom Knecht shares several reasons why he uses clickers in his small (15-25 student) classes.  Knecht echoes many of the reasons I provided for using clickers in a recent post, so, as LeVar Burton used to say on Reading Rainbow, “You don’t have to take my word for it.”

  • Knecht uses clickers for formative assessment, gauging his students’ understanding of points he makes during his lectures.  He finds that his students are often hesitant to ask questions when they don’t understand something, so clickers help him discover what’s unclear.
  • He also uses clickers for graded quizzes, motivating his students to prepare for class.  Clickers allow him to distribute these quiz questions throughout a class session, instead of clustering them at the beginning or end of class on a paper quiz.
  • He also finds that the anonymity of students’ responses (as far as their peers are concerned) motivates his students to engage more fully in classwide discussions, particularly around questions on sensitive topics.  (These kinds of topics can arise frequently in political science courses.)  Since all students are asked to respond to his clicker questions, they are all more prepared to engage in the discussion that follows, which enhances that discussion.

Political science courses, like others in the social sciences, often involve questions that have correct and incorrect answers, critical-thinking questions that have multiple defensible answers, and student opinion questions.  As a result, clickers are great tools for these courses, as we see here.

Clickers in Upper-Division Physics

A couple of weeks ago, Stephanie Chasteen shared a series of blog posts on teaching with clickers in upper-division physics courses: Part 1, Part 2, Part 3, and Part 4.  I’m often asked if clickers work well in upper-division courses, yet I’ve not met many faculty members who use them in such courses.  So I was glad to see this series by Stephanie.  It’s adapted from a talk she gave at the American Association of Physics Teachers conference a few months ago, and it includes videos that feature interviews with faculty and students about teaching and learning with clickers.  Here are some highlights from Stephanie’s posts…

One of the students interviewed in the video in Part 1 of the series says that she likes clicker questions because they allow her to take a concept and metaphorically put it in her pocket.  I like that metaphor.  It indicates that the clicker question allows her to confirm that she understands a concept, which is useful during class since it helps prepare her for what follows.  This idea that clicker questions allow students to test themselves on concepts during class is one that shows up often in student surveys as a positive aspect of using clickers.  This self-testing is a type of formative assessment, and Stephanie notes it’s important to include even in small classes.

Another type of formative assessment is also mentioned in the same video.  Steven Pollock, whom I interviewed for my book, mentions that prior to using clickers he found himself making assumptions about what his students did and did not understand.  He notes that clickers provide him actual data on his students’ learning so he doesn’t have to rely on his assumptions.  I wonder if this aspect of using clickers is even more important in upper-level courses since common student misconceptions in these courses may not be as well known as in lower-level courses.

Several different types of clicker questions are mentioned in Stephanie’s series: conceptual questions, application questions, review questions used at the start of class, and procedural questions asking students to identify the next correct step in a derivation.  I like the conceptual question Steven shared that distinguishes between students approaching physics from a classical mechanics point of view and those using a quantum mechanics approach.  I can imagine this kind of question is particularly useful for students making the transition to an upper-level course like quantum mechanics.

One of the arguments against using clickers in upper-level courses that Stephanie says she hears is that students in these courses are sophisticated learners.  They don’t need the structure of clicker-facilitated peer instruction to help them learn.  Stephanie presents a strong counter-argument, that since these students are more sophisticated learners, they actually get more out of the peer instruction method, more seriously engaging in small-group and classwide discussions.

Stephanie also shares some interesting data on student perceptions of clickers in upper-level courses.  Students who took a non-clicker upper-level course were asked how they would feel if they had taken the course with clickers.  They were resistant, arguing that clickers were for lower-level courses.  However, students who actually went through a clicker-enhanced upper-level course were extremely enthusiastic about their use.  Stephanie points out that students aren’t always able to predict how they’ll respond to a particular teaching approach, which is an important point to remember when trying out a new approach in one’s teaching.

Take a look at Stephanie’s blog posts for more thoughts on using clickers in upper-level courses, including thoughts on their role in creating “times for telling.”  Stephanie also contributes to the clickers efforts at the Carl Wieman Science Education Initiative at the University of British Columbia, where they’ve put together a 36-page guide to using clickers in the sciences.

Article: Crossgrove & Curran (2008)

Reference: Crossgrove, K., & Curran, K. L. (2008). Using clickers in nonmajors- and majors-level biology courses: Student opinion, learning, and long-term retention of course material. CBE-Life Sciences Education, 7(1), 146-154.

Summary: The authors report on their study of the impact of clickers in two courses, an introductory biology course for non-majors and a genetics course taken by sophomore biology majors. Both authors came to these courses with experience using active learning techniques and had experience with clickers prior to this study.

The authors surveyed their students, and according to the survey responses, students were generally very positive about the use of clickers. The non-majors in the introductory course were more positive than the majors on some points, including the usefulness of clickers in helping students score higher on exams.

Students’ average performance on final exams during the semester prior to the use of clickers was compared to average performance during the semesters in which clickers were used. There was no statistically significant difference in performance, although, as the authors note, the cohort of students changed each semester.

Within each course that used clickers, however, student performance on exam questions covering topics that were treated in class with clicker questions was statistically significantly better than student performance on exam questions covering topics that did not involve clicker questions. This was true for all question types–factual recall, conceptual understanding, application, and analysis.

The authors also asked for student volunteers to take tests on the course material four months after the course ended. Although only a few students did so (14 or 15 for each course), the non-major students retained information taught via clickers at a significantly higher rate than they retained information not taught via clickers (dropping from 88% to 83% for clicker material, 92% to 75% for non-clicker material). The majors did a poor job of retaining material whether or not it was taught via clickers (dropping from 86% to 60% for clicker material, 87% to 61% for non-clicker material). It’s worth noting, however, that most of the clicker-material questions for the majors were “harder,” application-level questions, which may have contributed to the poor showing.

Comments: To explain the finding that the major students weren’t as positive about the use of clickers as the non-major students, the authors cite the different class sizes of the two courses (large in the case of non-majors, smaller in the case of majors). Other possibilities that occurred to me include the following:

  • Different exam formats, perhaps because of a greater match between in-class clicker questions and exam questions in the introductory course,
  • Different active learning techniques, perhaps because clicker questions didn’t enhance the problem-based learning approach used in the genetics course to the degree that they enhance the peer instruction approach used in the introductory course,
  • Affective domain issues, perhaps because the mostly pre-med genetics students were more sensitive to grading issues than the non-major students, or
  • Differences in expectations for learning, perhaps because the genetics students expected to spend time memorizing information, not interacting with information via clicker questions.

I would be interested in seeing more research (by the authors or by others) into the use of clickers with upper-level students exploring these issues.

I found it an interesting result that the students’ performance on exam questions related to topics taught with clicker questions was better than their performance on other questions and that this result didn’t depend on the kind of exam question.  However, it would be useful to know what kinds of clicker questions were asked in these courses. Are there certain kinds of clicker questions (conceptual understanding, application, etc.) that lead to better exam performance or retention of knowledge?

The authors noted that the standard deviation on questions involving topics taught via clickers was smaller than on other questions. They point out that this was noted in a previous study, as well. This statistic might be worth examining in future studies about clickers.
