Teaching with Classroom Response Systems

Resources for engaging and assessing students with clickers


Social Media and Waterboarding

A couple of weeks ago on this blog, I shared a tweet by Colin Morris, a student at Kent State University in Ohio.  His comment (via Twitter) was, “44% OF MY U.S. HISTORY CLASS THINKS WATERBOARDING IS A SURFING TERM. I take back everything I’ve said about these ‘clickers’ being useless.”  After I shared this tweet on my blog, a few interesting things happened.

[Image: Twitter avatars of Colin Morris (left) and Jonathan Rose (right)]
  1. Colin Morris, the student who tweeted this (on the left above), found out about my post via Twitter and commented on it, indicating that he saw pedagogical value in clickers but objected to the cost of his clicker, particularly as a senior who won’t use it in future courses.
  2. Then Twitter user @iclickercrs, apparently affiliated with the i>clicker company, tweeted about my blog post.  Another Twitter user, @JonathanRose, a professor at Queen’s University in Ontario, Canada (on the right above), saw this tweet and decided to see how many of his (Canadian) students knew what waterboarding is.  He contacted Colin Morris, who directed Jonathan to the Kent State University instructor who posed this clicker question during class, John Jameson.  Jonathan Rose then used the same question in his introductory political science course.
  3. Jonathan then posted the results of both questions–the student responses from Kent State and the ones from his Queen’s University students.  Here are the results [PDF].  As you can see, only 28% of his students thought that waterboarding is a surfing term.  Also, compared to the Kent State students, more of his students viewed waterboarding as torture rather than as an interrogation technique.
  4. To bring things full circle, Jonathan Rose tweeted about these results and @iclickercrs re-tweeted Jonathan’s tweet.  I saw this re-tweet, then tracked down Jonathan.  He let me know about items 2 and 3 above, filling in the gaps in my knowledge of this whole social media process.

Watching this all unfold has been fascinating, not only for the uses of and reactions to clicker questions, but for the way that Twitter has facilitated connections that might not have happened otherwise.

One Last Update: Colin Morris blogged about this, too, noting the importance of keeping in mind potential audiences when using social media.

Article: Crouch & Mazur (2001)

Reference: Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977.

Summary: In this now-classic article, Catherine Crouch and Eric Mazur present data on ten years of the use of peer instruction in introductory physics courses.  The article describes Mazur’s teaching practices for these courses, including ConcepTests (multiple-choice questions that help students develop conceptual understanding independent of computational skills), pre-class reading quizzes (used to motivate students to read their textbooks before class, allowing Mazur to shift the transfer of information outside of class and free up class time for the assimilation of information), and peer instruction with and without clickers.

For assessment, Crouch and Mazur compare student performance on pre- and post-tests–the Force Concept Inventory (FCI), a widely used multiple-choice test of conceptual understanding in first-semester physics–before Mazur began using peer instruction and after.  They use normalized gain as their metric, which is determined by the formula (post-pre)/(100%-pre).  Thus, if a student scores a 70% on the pre-test and an 80% on the post-test, their normalized gain is (80-70)/(100-70), which is approximately 0.33.  Another student who moved from a 90% to a 95% would have a gain of 0.5, indicating that the student gained 50% of the improvement s/he could have gained from pre-test to post-test.
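To make the arithmetic concrete, here’s a quick sketch of the normalized gain calculation in code; the function and the two example students simply restate the paragraph above and are not taken from the article.

```python
def normalized_gain(pre: float, post: float) -> float:
    """Hake's normalized gain: the fraction of the possible improvement
    (from the pre-test score up to a perfect score) actually achieved.
    Scores are percentages from 0 to 100."""
    return (post - pre) / (100 - pre)

# The two hypothetical students described above:
print(normalized_gain(70, 80))  # 10/30, approximately 0.33
print(normalized_gain(90, 95))  # 5/10 = 0.5
```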

Using normalized gain on the FCI as a metric enables Crouch and Mazur to make comparisons to national data.  In Richard Hake’s 6000-student study of “traditional” and “interactive” physics courses, the average normalized gain for students in traditional courses was 0.23, whereas the average for students in interactive courses was 0.48, a substantial difference.  The semester before Mazur started using peer instruction, his normalized gain was 0.25, consistent with Hake’s findings for “traditional” lecture courses.  The first semester Mazur used peer instruction, his normalized gain was 0.49, consistent with Hake’s findings for interactive courses.

Perhaps most interesting is that as Mazur gained experience with these teaching methods (and made refinements to them, like the replacement of flash cards with clickers in his second year using peer instruction), his normalized gain increased by several percentage points each year, hitting 0.74 the sixth time he implemented peer instruction.  Thus he was, in a sense, three times as effective in helping his students master concepts in first-semester physics.

Comments: I tend to review more recent articles on teaching with clickers on this blog, but I couldn’t resist posting something about this classic article.  Mazur’s peer instruction technique is the most commonly used approach to teaching with clickers, and that’s due in large part to the persuasiveness of the data he has collected on its impact in his courses.  This article presents solid evidence that having students read their textbooks before class and grapple with tough conceptual understanding questions in small groups during class is a superior way to teach first-semester physics.

It’s also worth noting that Mazur’s normalized gain improved over time.  I’ll occasionally read an article by an instructor who taught a section of a course with clickers and a section without, compared student performance in the two sections, and found that using clickers had little or no impact.  These experiments often have a variety of design problems, but, regardless, it’s important to note that instructors can improve in their use of a particular teaching method over time.  Expecting great results the first or second time out is sometimes unrealistic, and big learning gains are sometimes only possible after a few semesters’ experience.

Article: Mayer et al. (2009)

Reference: Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34(1), 51-57.

Summary: In this article, Richard Mayer and his collaborators, nine in all, describe the results of an experiment comparing the use of clickers to non-clicker alternatives.  A large-enrollment educational psychology course, taken mostly by junior and senior psychology majors, was taught one year using a “traditional” approach, without in-class questioning or clickers.  The next year, the same course (with very similar students) was taught using in-class questioning facilitated by clickers.  In the third year, in-class questions were used, but instead of having students respond using clickers, students wrote their responses down on paper quizzes, passed those papers in to the instructor, then indicated their responses to the questions with a show of hands.

Differences among the three courses were kept to a minimum.  The same instructor taught all three courses, and the lecture materials were reused, with the exception of the questions added in the clicker and no-clicker years.  Reading assignments and exam questions were identical as well.  Having the students respond to questions in writing in the no-clicker class meant that their initial responses to a question were largely made independently of their peers, just as in the clicker class.  (The answers they signified during the shows of hands were, on the other hand, not necessarily independent.)

There were some differences, however.  The in-class questions in the clicker and no-clicker groups were graded (1 point for answering incorrectly, 2 points for answering correctly), which meant grade incentives were a possible motivator in those two groups.  There was no parallel grade incentive in the “control” group.  Also, in the no-clicker class, the paper quizzes were typically administered at the end of a class session for logistical reasons (distributing and collecting the quizzes took time), whereas in the clicker class, questions were asked at various points during class.

The authors’ findings were certainly interesting.  When they compared midterm and exam performance across the three courses, they found that the clicker class performed significantly better on the exams, averaging 75.1 points out of a possible 90.  The no-clicker class averaged 72.3, and the control group averaged 72.2.  (The difference here was statistically significant with p=.003.)  So the clicker class ended up with an average grade in the course 1/3 of a letter grade higher than the other two classes, a B instead of a B-.  And the paper quizzes plus hand-raising yielded “no discernible difference on student learning outcomes.”

Even more interesting was the following.  The clicker class performed almost identically to the other two classes on exam questions that were similar to questions asked (via clickers or paper quizzes) in class.  However, on exam questions that were dissimilar to in-class questions, the clicker class performed significantly better (50.2 vs. 47.9 and 48.2, p=.002).
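The article reports these comparisons with p-values; a one-way ANOVA is a standard way to compare three group means like these, though I won’t claim it’s the exact test the authors ran.  Here’s a rough sketch with made-up scores (not the study’s data), just to show what such a comparison looks like in practice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up exam scores (out of 90) for three hypothetical sections,
# loosely centered on the means reported above; NOT the study's data.
clicker = rng.normal(loc=75.1, scale=8, size=100)
no_clicker = rng.normal(loc=72.3, scale=8, size=100)
control = rng.normal(loc=72.2, scale=8, size=100)

# One-way ANOVA: do the three group means differ by more than chance alone would suggest?
f_stat, p_value = stats.f_oneway(clicker, no_clicker, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```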

The authors conclude from these data that the logistical difficulty of implementing the paper quizzes (distributing the quizzes, collecting the quizzes, and so on) interfered with any benefit gained from questioning students in this manner.  They also note that doing the questioning at the end of a class session might reduce the impact of the questioning on the students’ learning.  The use of clickers made questioning students “seamless” for the instructor and allowed the instructor to test and provide feedback to students closer in time to the initial learning experience.

The authors also note that some of the components of active learning–”(a) paying more attention to the lecture in anticipation of having to answer questions, (b) mentally organizing and integrating learned knowledge in order to answer questions, and (c) developing metacognitive skills for gauging how well they understood the lecture material”–might serve to explain why the clicker class outperformed the other two classes on exam questions dissimilar to in-class questions.

Comments: These results are fairly persuasive.  The authors did a good job of controlling for potentially confounding variables, and the use of three groups–clickers, no clickers, and control–meant that they could isolate the effect of the clickers from the effect of having students respond to questions during class.  Their conclusion–that clickers make questioning easier for both instructors and students and so allow questioning to have more impact–makes sense to me.

Another possible explanation for the higher learning gains in the clicker class is that the students in the clicker class were able to see the display of results of the clicker questions, whereas the students in the no-clicker class had to rely on a show of hands to see where their peers stood on a question.  Since it’s been shown that the hand-raising method leads to inaccurate representations of student understanding (see, for instance, Stowell and Nelson, 2007), it could be that the more accurate reporting of student responses made possible by the classroom response system led students to take the questions more seriously.

It’s also worth noting that after questions were asked and answered by students in both the clicker and no-clicker classes, not much happened.  The instructor would state the correct answer, have a student volunteer share reasons for the correct answer, then share his own reasons for the correct answer.  There wasn’t much in the way of agile teaching (doing something different in class in response to the results of a clicker question) or peer instruction (having students discuss questions with each other prior to answering).  There wasn’t much discussion of incorrect answers, apparently.  All of these processes have potential pedagogical benefits.  Had they been employed, the difference in learning outcomes between the clicker class and the other two classes might have been even greater.

I should also point out that the article doesn’t clearly state the instructor’s experience teaching with clickers, although it seems a safe bet that the instructor was new to using clickers.  Instructor experience is another important variable, as is the nature and difficulty of the questions used.  A few sample questions were included in the article, but it would have been helpful to know how difficult the students found these questions.  Did most students answer them correctly?  Did a lot of students answer them incorrectly?

Article: Campt & Freeman (2009)

Reference: Campt, D., & Freeman, M. (2009). Talk through the hand: Using audience response keypads to augment the facilitation of small group dialogue. The International Journal of Public Participation, 3(1), 80-107.

Summary: This article by David Campt and Matthew Freeman describes ways to use clickers to facilitate dialogue among small-to-medium-sized groups of people (6 to 40 people) with common interests but diverse perspectives.  For example, the authors mention using clickers with residents of an urban neighborhood facing tough questions that involve race and class, as well as with employees from multiple levels of hierarchy within a business discussing the mission and functions of the business.  The authors describe themselves as “dialogue facilitators” and their work as a collaborative process that uses “dialogue, inquiry, and deliberation to inspire participants, build working relationships, and make decisions about collaborative actions they will take to improve their communities.”  (Wilson, P. (2004). Deep democracy: The inner practice of civic engagement. Fieldnotes: A Journal of the Shambhala Institute, 3, 1-6.)

The authors describe a few different types of clicker questions they use to foster dialogue, including demographic questions exploring participants’ diverse backgrounds, experience questions asking participants “whether or how frequently they may have had specific experiences,” opinion questions about internal and external issues relevant to the community, and fact questions designed to explore differences between objective facts (such as statistics about demographics in the United States) and participant perceptions of those facts.

The authors also describe dialogue focused on collaborative action as having several phases, including introducing participants to each other and to the dialogue process, sharing participant experiences and perceptions, exploring diversity and commonalities with the goal of understanding the “underlying social conditions” that produce diverse perspectives, and exploring possibilities for action.  The authors describe several ways that clickers can enhance dialogue in each phase, but they focus primarily on the earlier phases.

For instance, asking demographic clicker questions during the introduction phase can help participants learn about each other more quickly, particularly around demographic characteristics that aren’t immediately visible, such as political affiliation or sexual orientation.  These questions can provide “teachable moments” about group processes, such as reminding participants to be respectful of those with backgrounds different from their own, and help enhance “participants’ sense of empathy for others.”

During the introductory phase, clicker questions can also help to surface common intentions among participants.  The authors note that when there are two “sides” on a contentious issue, often both sides have similar goals but different opinions about reaching those goals.  Asking a clicker question that makes evident participants’ common intentions can help defuse some of the tension in the room that might otherwise arise.

Furthermore, “fact” questions can help bring important facts into the subsequent conversation, often “demonstrating that people in the group know less than they think they do about an issue of relevance,” leading to more open-minded attitudes.

The authors also discuss the use of participant experience questions (e.g. “How long has it been since the last time you can recall witnessing an act of racial discrimination?”) in the second phase of their dialogue facilitation: helping participants understand the variety of perspectives they have on the topic at hand.  Asking such a question, then hearing from a few participants, then commenting on any pattern that emerges (e.g. “It seems that more of the people of color have recent stories.”) is one approach.  However, having those patterns emerge through the results of a clicker question can demonstrate such patterns more quickly and prevent participants from thinking the facilitator is finding patterns that he or she wants to see in the responses.  The authors also note the use of demographic comparison questions, parsing the results of an experience question according to some demographic characteristic of the participants.

Other uses are discussed, as well, including showing matches between clicker question results and national polling data for some questions, helping participants come to decisions about collaborative action steps, and providing both facilitator and participants with information about participants’ feelings about a session at the end of the session.

Finally, the authors make the point several times that clicker questions and their results serve to generate productive dialogue.  They are not an end in themselves.

Comments: While I typically discuss the use of clickers in college and university settings on this blog, I wanted to share and comment on this article since the authors have a particularly nuanced and informed approach to fostering dialogue, with and without clickers, that college and university instructors reading this blog might find useful, particularly those who discuss controversial or sensitive issues in the classroom.  Their writing is also informed by a research literature on fostering dialogue that would likely be unfamiliar to most academics.  I’m also excited by the growing use of clickers and other response systems in non-academic educational settings, such as the community dialogues described in this article, as well as church services, corporate presentations, and social science research.

I found the authors’ description of the types of clicker questions they use to align nicely with the types of clicker questions I group under the umbrella term “student perspective questions.”  I usually think of these questions as being about student demographics, student experiences, or student opinions.  I hadn’t thought about putting factual questions in this category, but it makes sense.  Seeing how students (or dialogue participants) perceive objective facts serves a similar purpose as these other types of questions: helping members of the community better understand each other and helping the teacher or facilitator better understand the community.  When used to demonstrate to students or participants that they know less than they think they do about a particular topic, these questions also serve to generate a “time for telling.”

When reading about the use of clicker questions to surface common intentions (as described above), I wondered if there’s a risk of having participants feel like such a question is rigged, that the facilitator is asking it mainly as a set-up to make the point that “we all have something in common.”  If there’s a risk of that, I wonder what Campt and Freeman might do to minimize that risk.  I also wonder what they might do if this kind of question backfires, showing that the participants have less in common than they think they do.

A few other questions occurred to me as I was reading the paper’s section on directions for future research on the use of response systems in dialogue facilitation.  The authors ask, “Are there people whose verbal participation in dialogues increases as keypad use increases?”  I would also ask, might a participant who finds out he or she is in the distinct minority on a particular issue be less likely to participate in discussion?  The authors also ask whether the ability to provide anonymous feedback might have some distorting effect on reported opinions.  I wondered that as well, thinking about how contentious or smart-aleck participants might abuse the ability to respond anonymously.

I also wonder if there might be a role for pair or small-group discussion prior to voting in these settings.  Peer instruction is a common application of clickers in educational settings; might something similar play a role in dialogue facilitation?  Also, what about asking the same questions at the start and end of a dialogue session as a way to show participants how they’ve changed their perspectives over the course of the dialogue?  Might that be useful in some contexts?

Why Use Clickers?

I just had to share this tweet I saw a few weeks ago.


@colinmorris: 44% OF MY U.S. HISTORY CLASS THINKS WATERBOARDING IS A SURFING TERM. I take back everything I’ve said about these “clickers” being useless.

Note: I’ve had a couple of problems with this blog recently–the RSS feed stopped working and I haven’t had any time to post.  I’ve now fixed both problems.

I’m a huge fan of the World’s Technology Podcast from the BBC and PRI.  Clark Boyd puts together a great collection of technology stories from around the world every week–not the usual stories about the latest gadgets, but stories about how technology is shaping society and culture.  Great stuff.

A couple of months back, Clark ran a story on Chinese students using cell phones to cheat.  I tweeted Clark and suggested that he look at some of the positive ways teachers are using cell phones in the classroom.  In particular, I recommended what Greg Kulowiec has been doing in his 9th grade history classes in Massachusetts.  I’ve blogged about Greg’s use of Poll Everywhere, and Greg’s posted a great video showing how he uses this text-messaging-based response system to ask his students ethical questions about the Holocaust.

Well, Clark Boyd took me up on my recommendation and interviewed Greg about his use of technology in his classes.  The interview ran near the beginning of episode 256 of Clark’s podcast.  Greg talks about how he had his students use their cell phones during class to call people they knew and quiz them about the US Constitution, and how he uses those same phones as part of a classroom response system to engage students during class.

Greg also describes a couple of ways he leverages the camera functions on his students’ cell phones.  He had his students take photos during a class trip to the New England Aquarium and send them to his Evernote account for later use in an Animoto video, for instance.  He also plans to have his students create their own Evernote accounts so they can take photos of references as they do research, send those photos to their Evernote accounts, and use Evernote’s tagging ability to organize their research notes.  Very cool stuff.  I think it’s time I signed up for an Evernote account.

Props to Clark Boyd for being open to listener suggestions and to Greg Kulowiec for being willing to share his innovative uses of technology.  You can follow Greg’s continuing experiments on his blog, The History 2.0 Classroom.

Leveraging Pre-Class Reading Quizzes

One of the questions I’m asked most often when I present about teaching with clickers is the “coverage” question: How do you cover all the content you need to in a course if you spend class time having students think about, vote on, and discuss clicker questions?  All that active learning during class must mean you can’t cover all the same content, right?

Although I find the term “cover” problematic, I understand these questions.  Particularly in courses that are prerequisites for other courses, there’s a need to make sure students learn a certain (usually large) amount of material.  In talking with faculty who teach with clickers, I’ve heard several different kinds of responses to the “coverage” question, ones I detail in my book.  One response is to move some of the learning that would have taken place during class to out-of-class time.  One way to do this is by having students read their textbooks before class, which I’ve done in my math courses for several years now.  This means that students come to class with some exposure to and understanding of the material, which allows class time to be spent helping students make sense of that material and go deeper via clicker questions and other active learning techniques.

However, since studies show that only about 30% of students will read their textbooks before class without some kind of incentive, it’s helpful to have students complete pre-class reading quizzes online.  This semester, I’m having my students do so via our course blog.  I post three or four open-ended questions about the textbook section we’ll be addressing in class, and they respond to those questions in the comments below the blog post.  (I’m using the Semi-Private Comments WordPress plugin to make sure students can’t see each other’s responses.)  I grade them on effort, and the quizzes count toward a class participation grade.  I’ve found these pre-class reading quizzes do the job well.  Judging by their responses to the reading quiz questions, between 80 and 90 percent of my students read the textbook before class and make at least some sense of it.

An added benefit to having students complete pre-class reading quizzes is that I can draw on student responses to open-ended quiz questions to create in-class clicker questions.  Here’s an example:

Consider Question 1 on the Introductory Problems handout and Example 1 in Section 1.6.  These two problems involve input-output relationships between different sectors of an economy.  In what ways are these problems essentially different?  Which of the following is the best answer to this question?

  1. The output from one sector in the example is entirely used up by the other sectors. In the handout, the output is only partly used and a net excess is provided.
  2. Example 1 asks for the total annual outputs of the coal, electric, and steel sectors. Whereas question 1 is looking for the production levels for an outside demand.
  3. In the original example, we’re solving the system to meet a single expectation from a foreign country of three demands, whereas the book’s example is looking to maximize the productivity.  This means the book’s example has multiple solutions and we’re looking for the best of them, where as our original only has one.
  4. The two problems are different as the first is trying to find the initial inputs to achieve certain outputs while the second problem is about finding the market price.

The exact same question was posed on the pre-class reading quiz the night before as an open-ended question.  The answer choices you see here are actual student responses to that open-ended question.  During class, I had my students respond to this clicker question, letting them know that four of them should recognize their own words in the answer choices.

The votes were split 30% / 0% / 43% / 26% among the four answer choices, which is a great distribution for generating discussion about the question.  It helped that the most popular answer (#3) was partially incorrect.  (The book example did not, in fact, deal with maximizing productivity.)  The other two answers selected by the students (#1 and #4) are both correct, although #4 gets at the heart of the difference between the two examples more than #1 does.

Some of the students were bothered by the fact that this question doesn’t have a single correct answer.  However, since I’m trying to help my students improve their ability to communicate mathematical and technical ideas, it’s worth spending time on a question like this one, where the quality of the explanation is an important factor.

We had a funny moment when the student who supplied the popular but incorrect answer choice (#3) spoke up after we had discussed why that choice was incorrect.  He didn’t directly own up to his answer, but instead said something like, “I think the student who gave that answer probably didn’t catch on to the fact that productivity wasn’t being maximized.  He probably has a much better understanding of the example now.”

It can be challenging to write clicker questions with answer choices that align well with student understandings and misunderstandings of a topic.  Taking the students’ very own responses as answer choices is one way to get around this.  It also communicates to students that the pre-class reading quizzes are an integral part of their learning experience.
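As an aside, for readers curious about the mathematics behind the input-output problems referenced above: the distinction the correct answer choices point to roughly matches the difference between a “closed” exchange economy, in which each sector’s output is entirely consumed by the sectors themselves, and an “open” economy whose production must also meet an outside demand.  Here’s a rough sketch of both computations; the consumption matrix and demand vector are made up for illustration, not taken from the handout or the textbook.

```python
import numpy as np

# Made-up consumption matrix for three sectors (say coal, electric, steel):
# entry C[i, j] is the fraction of sector j's output consumed by sector i.
C = np.array([
    [0.0, 0.4, 0.6],
    [0.6, 0.1, 0.2],
    [0.4, 0.5, 0.2],
])  # each column sums to 1, so all output is used internally (a closed economy)

I = np.eye(3)

# Closed/exchange model: find equilibrium levels x with C x = x, i.e. (I - C) x = 0.
# The matrix I - C is singular, so there is a whole line of solutions; any nonzero
# vector in its null space works (found here via the SVD).
null_vec = np.linalg.svd(I - C)[2][-1]
x_closed = np.abs(null_vec) / np.abs(null_vec).max()  # scale so the largest entry is 1
print("Closed-model equilibrium (up to scaling):", np.round(x_closed, 3))

# Open model (the handout-style problem): sectors consume only part of one another's
# output, and production must also cover an external demand d.
C_open = 0.5 * C                    # made up: internal consumption cut in half
d = np.array([50.0, 30.0, 20.0])    # made-up external demand
x_open = np.linalg.solve(I - C_open, d)
print("Open-model production levels:", np.round(x_open, 2))
```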

Multiple Mark Questions

I’ve recently become a fan of the “mark all that apply” type of question my classroom response system facilitates.  I call these “multiple mark” questions in my book.  Here’s one I used in the linear algebra course I’m teaching this fall.

[Image: the multiple mark clicker question, with the distribution of student responses]

This question is adapted from one of the questions written by Project MathQuest out of Carroll College.  Their version of the question wasn’t a multiple mark question.  Instead, it included a fifth option, “More than one of the above are possible.”  While that option makes the question more interesting and more challenging for the student, it also yields inconclusive data about student learning since students submitting that response may have different ideas about which of the four options are possible.  That’s not all bad, of course.  Given my interview with Kelly Cline, one of the PIs for Project MathQuest, I can imagine Kelly leveraging that ambiguity into a productive classwide discussion of the question.

However, I decided to turn this question into a multiple mark question by adding the instruction, “Mark all that are possible.”  As you can see from the results, 20 of the 20 students present that day indicated that option 3 was possible, 19 of the students indicated options 1 and 2 were possible, and 14 of the students felt that option 4 was possible.  This was very useful feedback for me, since I could quickly tell that the class was in agreement on options 1 through 3, but option 4 deserved some further discussion.
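For anyone curious how “mark all that apply” results get tallied, here’s a minimal sketch with made-up responses (not my students’ actual clicks), where each student’s submission is the set of options he or she marked.

```python
from collections import Counter

# Made-up "mark all that apply" submissions, one set of marked options per student.
responses = [
    {1, 2, 3},
    {1, 2, 3, 4},
    {2, 3},
    {1, 2, 3, 4},
    {3, 4},
]

# Per-option counts: how many students marked each option.
counts = Counter(option for marked in responses for option in marked)
for option in sorted(counts):
    print(f"Option {option}: {counts[option]} of {len(responses)} students")
```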

I’ll admit, however, that I got a little tripped up on my own question logic here.  As it turns out, all four options are possible, which was not my intent when I included this question in my lesson plan.  Option 2 is only possible if the third plane intersects the two overlapping planes and option 4 is only possible if the three planes are parallel because they are in fact the same plane.  The way I’ve worded the question, these wrinkles aren’t addressed, making all four options possible.  As a result, the question doesn’t do a great job at uncovering student understanding of these wrinkles.

Here’s the question I should have asked instead:

Suppose you have a system of 3 linear equations in 3 variables.  Which of the following conditions would guarantee that the system has an infinite number of solutions?  Mark all that apply.

  1. All three equations represent the same plane.
  2. Two of the equations represent the same plane.
  3. The three equations represent planes that intersect along a line.
  4. The three equations represent parallel planes.

With this question, only options 1 and 3 are correct.  With option 2, it could be that the third plane is parallel to but distinct from the two overlapping planes, yielding no solutions instead of infinitely many solutions.  With option 4, it could be that the three planes are not the same plane, again yielding no solutions instead of infinitely many solutions.  This wording of the question puts the special cases in their proper places.
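One way to sanity-check answer choices like these is to test concrete systems: a square system Ax = b has infinitely many solutions exactly when the coefficient matrix and the augmented matrix have the same rank and that rank is less than the number of variables.  Here’s a rough sketch; the example systems are mine, chosen to match the cases discussed above.

```python
import numpy as np

def solution_type(A, b):
    """Classify the linear system A x = b by comparing matrix ranks."""
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_aug:
        return "no solutions"
    if rank_A < A.shape[1]:
        return "infinitely many solutions"
    return "unique solution"

# Option 1: all three equations represent the same plane, x + y + z = 1.
print(solution_type(np.array([[1., 1, 1], [2, 2, 2], [3, 3, 3]]),
                    np.array([1., 2, 3])))   # infinitely many solutions

# Option 2's wrinkle: two equations give the same plane, but the third is
# parallel and distinct, so the system is inconsistent.
print(solution_type(np.array([[1., 1, 1], [2, 2, 2], [1, 1, 1]]),
                    np.array([1., 2, 5])))   # no solutions

# Option 3: three distinct planes (x = 0, y = 0, x + y = 0) meeting along a line.
print(solution_type(np.array([[1., 0, 0], [0, 1, 0], [1, 1, 0]]),
                    np.array([0., 0, 0])))   # infinitely many solutions
```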

Have you used multiple mark questions?  Do you find them more difficult to write?  How do your students respond to them?

More on the Classroom of the Future

Back in January, I blogged about a New York Times article describing MIT’s Technology Enhanced Active Learning (TEAL) classrooms.  Just today, Diana Senechal blogged about the article, too, as well as her own experiences as an adult student in a physics class that uses clickers.  A few important questions were raised in Diana’s post and in the comments that followed it–questions about the prep time teachers need to teach with clickers and about which students we should be trying to benefit through our teaching.  I weighed in on those questions on Diana’s blog post, but I thought I would reproduce my comments here in case my readers would like to weigh in, too.

I’ve taught math courses with clickers for five years now, and (full disclosure) I’ve written a book on teaching with clickers, one that draws upon interviews I conducted with 50 faculty members in different disciplines, including physics.  As you might expect, I have a few thoughts about the questions raised here!

The first thing I noticed reading this post and its comments was the juxtaposition of the MIT student’s comment that using clicker-facilitated active learning during class means professors don’t have to prepare as much and Mike Anderson’s comment that using the IFAT quizzes he describes took more, not less, preparation time.

I think Mike’s hit the nail on the head: Figuring out what misconceptions students are likely to have, which is required for coming up with plausible wrong answers to multiple-choice questions, is challenging work.  And doing what the MIT physics professors are doing–designing intensive learning experiences that help students resolve misconceptions and build their knowledge–is even more challenging.  It requires a great deal of understanding of student learning and motivation.

Speaking of student motivation, the question was raised above about which students benefit from more active classroom learning experiences.  I would argue that as teachers, we have a responsibility to try to motivate and teach all our students, not just the ones who are self-motivated or the ones who learn best by listening to a lecture.  I think it’s great that Diana enjoys and benefits from a great lecture.  The evidence suggests, however, that such students are in the minority.  Combining lectures with more participatory learning experiences is likely to benefit more students’ learning.

I’ll also point out that the pedagogy behind Mike’s IFAT quizzes is very similar to the pedagogy behind effective instruction with clickers–getting students to actively engage with problems and to discuss those problems with peers and their instructors, and providing instructors with useful feedback on student learning, feedback that can inform future instruction.  As Ricki points out, it’s the pedagogy that counts more than the technology.

That being said, clickers provide a few advantages that other technologies don’t.  Clickers allow me to hold my students accountable for their class participation since the system tracks individual student responses.  At the same time, clickers provide students with a level of anonymity since their peers can’t see how they responded, making it safer for them to take risks and be wrong.  (Asking a question of a class of students and taking the first student response privileges those students who are quicker, more confident, and more experienced.  It leaves all the other students out of the loop, unfortunately.)  And the instant display of results (in the form of a bar graph) provides the instructor with useful information for making on-the-fly teaching choices and can have an impact on student motivation.  If, for instance, students see that most of them answered a question incorrectly, they’re more likely to pay attention to the explanation that follows.

So, dear readers, what say you?  Any thoughts on the prep-time issue or the question of which students benefit most from active-engagement teaching techniques?

Along with my colleagues Kien Lim (University of Texas at El Paso) and Kelly Cline (Carroll College), I’m organizing a contributed paper session at the Joint Mathematics Meetings in San Francisco in January 2010.  If you use clickers in your mathematics teaching, I encourage you to submit an abstract for a talk.  Details below.  Thanks!

Engaging Students with Classroom Voting

Thursday morning, January 14th
Derek Bruff, Vanderbilt University, Kien Lim, University of Texas at El Paso, and Kelly Cline, Carroll College

Classroom voting is a teaching method in which students are asked to respond to multiple-choice or numeric-result questions posed by their instructors during class, often using handheld transmitters (“clickers”) that allow for the instant display of distributions of responses. Classroom voting can be used to make on-the-fly teaching choices that are responsive to student learning needs, to generate small-group and whole-class discussion, and to create “times for telling” in which student misconceptions are uncovered and addressed. Clickers allow students to respond to questions independently and without their peers knowing how they have responded while allowing instructors to track student responses and thus expect full participation.

We seek papers on classroom voting that focus on at least one of these areas: teaching objectives (e.g., writing effective questions, engendering cognitive conflicts, addressing misconceptions), instructional strategies (e.g., peer instruction, team-based learning, methods of guiding class discussions), new technologies (e.g., using cell phones as clickers, integration with online resources), impact on students (e.g., enhanced student learning, increased student engagement, improved retention), overcoming constraints (e.g., limited class time for active learning), development of new materials (e.g. new sets of classroom voting questions) and strategies for getting started at the course and department level.

Information on submitting an abstract can be found here.
