Teaching with Classroom Response Systems

Resources for engaging and assessing students with clickers

Archive for the ‘Student Participation’ Category

Article: James, Barbieri, & Garcia (2008)

Reference: James, M. C., Barbieri, F., & Garcia, P. (2008). What are they talking about? Lessons learned from a study of peer instruction. Astronomy Education Review, 7(1).

Summary: This study is a follow-up to James (2006), a study of the effect of grading incentives on student discourse during clicker questions.  In the earlier study, student conversations were audio-recorded in two different astronomy courses, each taught by a different instructor and each using a different grading scheme for clicker questions.  The main finding of the earlier study was that a low-stakes grading scheme (one in which incorrect answers counted as much as correct answers) encouraged richer student-to-student discussions prior to voting on clicker questions than a high-stakes grading scheme (one in which incorrect answers counted only a third as much as correct answers).

A key drawback to James’ earlier study was that the two courses being compared were different in significant ways other than the grading scheme used for clicker questions.  In particular, they had different topics and different instructors.  That drawback has been mostly eliminated in the current study by James, Barbieri, and Garcia.  In this study, the same instructor taught the same course, an introduction to astronomy course with about 180 students, in two consecutive semesters.  In both semesters, clicker questions contributed 12.5% of the students’ overall course grades.  In the first semester, incorrect answers counted one-third as much as correct answers, but in the second semester, incorrect answers counted 90% as much as correct answers.

The instructor used a version of the standard peer instruction technique.  Students were not asked to vote on clicker questions independently, but were instead asked to discuss the questions in pairs prior to voting.  Random samples of students were audio-recorded during these pair discussions throughout both semesters.  The audio-recordings were analyzed in two different ways to measure “discourse bias,” “the difference between the fractional contributions to a conversation between partners.”  For instance, if one partner contributed 70% of the time and the other contributed 30% of the time, then the pair’s discourse bias would be 40%.

First, each idea shared by the students during the discussions was coded according to ten categories, including categories such as restating question elements, stating answer preferences, and providing justifications.  (One side finding was that there was no correlation between the “type” of clicker question and the nature of the ideas shared by students during the discussion.)  Second, the total number of words produced by each student during the discussions was counted.  Both techniques provided measures of each student’s contribution to the discussions.
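To make the discourse-bias measure concrete, here is a minimal sketch (my illustration, not code from the study) that computes the bias from two partners’ contribution counts, whether those counts are ideas or words:

```python
def discourse_bias(contributions_a: int, contributions_b: int) -> float:
    """Discourse bias for a pair, as a percentage: the absolute difference
    between each partner's fractional contribution to the conversation."""
    total = contributions_a + contributions_b
    if total == 0:
        return 0.0  # no recorded contributions from either partner
    return abs(contributions_a - contributions_b) / total * 100

# The worked example from the article: a 70%/30% split gives a bias of 40%.
print(discourse_bias(7, 3))    # 40.0
# Perfectly balanced participation gives a bias of 0%.
print(discourse_bias(12, 12))  # 0.0
```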

The results strongly indicated that the low-stakes grading scheme encouraged more balanced participation by students during pair discussions.  For example, when using the first measure of discourse bias (counting ideas), the average bias for the high-stakes class was 33.2%, whereas the average bias for the low-stakes class was 19.5%.  That is, pair conversations in the high-stakes class were more often dominated by one of the two students.  The second measure of discourse bias (counting words) yielded similar results: an average bias of 39.8% in the high-stakes class and 26.6% in the low-stakes class.

The authors also note that the low-stakes grading scheme promoted more independent student responses to clicker questions following pair discussions.  In the high-stakes class, only 7.6% of the time did two partners submit different answers to clicker questions, whereas in the low-stakes class, this occurred 17.1% of the time.  The authors conclude from this that in the high-stakes class, students’ concern for earning points motivated them to submit their partners’ answers to clicker questions even when they didn’t really believe those answers.

Comments: This study improves on James’ earlier study and provides persuasive evidence that low-stakes grading schemes for clicker questions promote more meaningful student participation in small-group discussions prior to voting.  True, this wasn’t a double-blind, randomized control-group experiment (in which students would be randomly assigned to the two grading schemes and the instructor wouldn’t know which grading scheme was used with each group of students), but such experiments are practically impossible to implement in educational settings.  Short of that “gold standard,” this is a very well-designed and persuasive study, in part because many of the possibly confounding variables in the earlier study were eliminated and in part because of the use of direct, qualitative measures of student participation.

Willoughby and Gustafson (2009) conducted a similar study in astronomy courses, audio-recording student discussions in some sections and not in others.  They found that students in the sections that were audio-recorded “block-voted” more when a high-stakes grading scheme was used for clicker questions and less when a low-stakes grading scheme was used.  In the sections where audio-recorders were not used, there was no statistically significant difference in block-voting rates.  They concluded that the presence of the audio-recorders might have influenced student voting behaviors (an example of the Hawthorne effect).  If true, the presence of the audio-recorders might also have influenced the block-voting behavior that James, Barbieri, and Garcia observed; either way, both studies give all the more reason to use low-stakes grading schemes when using clickers for formative assessment.

I would have liked to have seen a little additional information provided in the article about the use of clicker questions in these courses.  Were students asked to vote on their own before discussing the clicker questions in pairs?  (I don’t think they were, but this isn’t stated in the article.)  Also, what instructions were given to students prior to the peer discussion times?  I’ve seen some evidence (Lucas, 2009) and heard some advice (from Doug Duncan) that the instructions given to students prior to peer instruction can affect the quality of the discussions.  And while it was found here that the type of clicker question did not correlate with the kinds of ideas shared during peer instruction, it would have been informative to know what kinds of clicker questions were used.

What’s not directly addressed in this article, however, is the assumption that more student participation during small-group discussion of clicker questions leads to greater student learning.  This was an issue I raised in my comments on James’ earlier study, which included data on student performance on final exams.  In response to that study I asked whether greater class participation led to greater student learning or whether students who knew the material better simply dominated class discussions.  While it’s possible that the latter is true, evidence from non-clicker studies strongly suggests that more active participation in class discussions leads to greater student learning.  I wish this assumption (that participation leads to student learning) had been stated as such in the article.

The takeaway here is that low-stakes grading schemes for clicker questions lead to greater student participation and to clicker question results that more accurately reflect students’ actual understanding (or lack of understanding).  These results have important implications for instructors using clickers to motivate student participation and inform agile teaching choices.

Article: Freeman, Blayney, & Ginns (2006)

Reference: Freeman, M., Blayney, P., & Ginns, P. (2006). Anonymity and in class learning: The case for electronic response systems. Australasian Journal of Educational Technology, 22(4), 568-580.

Summary: In an effort to investigate the importance to students of the anonymity afforded by classroom response systems, Freeman, Blayney, and Ginns surveyed the 139 students in an introductory management accounting course at the end of the term. Notably, the majority of students in this course were female (73%), 20-22 years old (93%), with non-English speaking backgrounds (82%).

Each three-hour class session in this course featured a number of “formative, mainly rules based, multiple choice questions.”  Students responded to these questions via clickers or hand-raising in alternating class sessions and were encouraged, but not required, to discuss their answers with their peers before responding.  The instructor would announce the correct answer to each question immediately after the distribution of responses was shared with the class and then practice “agile teaching” by using the results of the question to guide subsequent lecture and discussion.  This alternation ensured that students experienced both a response method that allowed them to remain anonymous (clickers) and one that did not (hand-raising).

Sixty-eight percent of students agreed or strongly agreed with the statement “I preferred answering in-class questions when my answers were anonymous to the instructor.”  Almost as many students (62%) preferred answering questions when their answers were anonymous to their peers.  About the same number of students (63%) agreed or strongly agreed that “anonymity was more important with in-class questions when [they] were uncertain about the answer.”

Additionally, students were asked to rank in order of preference four potential response methods: clickers, hand-raising, volunteering to respond when they knew the answer, and “cold-calling.”  Average student rankings of these four methods indicated a preference for anonymity, as the methods were ranked (on average) in the same order as I just listed them.

The authors conclude from these survey data that students value the anonymity (peer-to-peer anonymity and student-to-instructor anonymity) enabled by a classroom response system.  Since other response methods, such as hand-raising and response cards, are less anonymous, this argues for the use of clickers.

The authors make a couple of additional points about the potential interplay of culture and the use of clickers.  One is that peer instruction was not as highly valued by these students as other aspects of learning with clickers, according to survey results.  It’s possible that the cultural diversity of the students (using the percentage of students with non-English-speaking backgrounds as a rough proxy) made peer instruction less useful for these students than for the more homogeneous groups of students surveyed in other studies.

The other is that demographic variables, including native language, had no significant impact on the student responses to survey questions, particularly the questions about the importance of anonymity.  “Using first language as a rough proxy for culture, and in particular openness to criticism, these results might be seen to contradict Banks (2003) who suggested that cultural background could impact preference for ERS usage.”  The authors suggest further study of the role of culture in students’ valuing of anonymity.

Comments: For a relatively simple survey-based study, this article raises some interesting questions.  For instance, given that the instructor of this course practiced agile teaching, altering his instruction based on the distribution of student responses to in-class questions, I would think that response methods that generated more honest responses from students would lead to more useful agile teaching.  That is to say, if students were less likely to indicate confusion about a question when the hand-raising method was used (because their answers would not be anonymous to their peers), then the instructor might overestimate his students’ comprehension of the topic at hand when relying on the response distribution and subsequently spend less time on the topic than might actually be warranted.  If true, then students who prefer anonymous response methods and who have instructors who practice agile teaching might benefit more from an anonymous response method than students whose instructors do not practice agile teaching.  Since I have yet to read a study that compared use of clicker questions with and without agile teaching, it’s unclear at this point how important this issue might be.

The questions raised about the interface of culture, participation, criticism, and anonymity are also very interesting ones.  They are ones that haven’t been on my radar until recently, when I heard a presentation by Parvanak “Pary” Fassihi, who described her use of clickers to engage students in English-as-a-Second-Language (ESL) classes at Boston University and the Boston campus of the Showa Women’s University (of Japan).  She shared that some of her students come from cultures where it is seen as impolite to disagree with others publicly.  These students are often very hesitant to engage in the small-group and classwide discussion of clicker questions that Pary tries to generate in her courses.  (I’ll note here that I interviewed Pary for my book.  She had a lot of interesting things to say about the role of clickers in her language instruction courses, but this observation about cultural views on public disagreement didn’t come up in our interview.)

Pary’s observation is consistent with the finding by Freeman, Blayney, and Ginns that their students, many of whom came from diverse cultures given that few of them spoke English as a first language, did not value peer instruction as highly as might be expected.  I imagine it is also consistent with Banks’ suggestion, mentioned in the article, that “cultural background could impact preference” for clicker usage.  I haven’t read the Banks article, but I hope to do so soon and review it here on the blog.

One criticism I would have of the article at hand is that comparing the clicker and hand-raising response methods doesn’t quite isolate the effect of anonymity, since clickers offer a number of advantages over hand-raising that might be at play here.  Given the nature of the survey questions used, this isn’t a significant issue, since student perception of anonymity was the focus, not a control-group comparison of two different response methods.  However, it is possible that the students’ preference for clickers over hand-raising over hearing from student volunteers could be an indication not of their preference for anonymity but of their preference for response methods that encourage more complete and/or more independent participation and/or greater accountability for participation.  Thus, I would argue that those data in particular don’t necessarily imply that students value anonymity when responding to in-class questions.

(Cold-calling also leads to more complete and independent participation and greater accountability for participation since students have to stay on their toes in case they are called upon.  However, I’m comfortable arguing that the stress generated by this method would outweigh any perceived benefits regarding participation or accountability.)

A better comparison might be between having students respond via clickers with and without a display of individual participant responses.  (Many classroom response systems have the ability to display on screen not only a histogram showing the distribution of responses but also a list of individual students and their particular responses.)  That would better isolate the anonymity factor from other advantages of clickers over alternate response methods.  It’s not a perfect solution, in part because students in a large class aren’t likely to know each other’s names and so might not be bothered by having their names displayed next to their responses and in part because this is a feature of classroom response systems that is infrequently used by instructors so its use in this context might seem too artificial.  Also, this plan would not isolate peer-to-peer anonymity from student-to-instructor anonymity.  However, I think it would yield more relevant data than comparing clicker use with hand-raising.

Article: King & Joshi (2008)

Reference: King, D. B., & Joshi, S. (2008). Gender differences in the use and effectiveness of personal response devices. Journal of Science Education and Technology, 17(6), 544-552.

Summary: In this paper, King and Joshi present the results of a study of student participation and performance in two semesters of a chemistry course for engineering students, with a particular focus on gender differences.  In the first semester, one section of the course used clickers without including clicker questions in the students’ grades in any way, while the other two sections did not use clickers at all.  In the second semester, only one section of the course was offered.  Clickers were used in this section, and clicker questions contributed to a participation grade for the students (5% of the overall course grade, with full credit awarded to students who answered at least 75% of the clicker questions throughout the semester, correctly or not).
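To make that grading rule concrete, here is a minimal sketch (my illustration, not code from the study); how students below the 75% threshold are handled is an assumption, since the article describes only the full-credit case.

```python
def participation_credit(answered: int, total_questions: int,
                         weight: float = 0.05, threshold: float = 0.75) -> float:
    """Fraction of the overall course grade earned via clicker participation.

    Full credit (5% of the course grade) goes to students who answered at
    least 75% of the clicker questions, correct or not.  Treating students
    below the threshold as earning no credit is an assumption for this sketch.
    """
    response_rate = answered / total_questions
    return weight if response_rate >= threshold else 0.0

print(participation_credit(80, 100))  # 0.05 -- an 80% response rate clears the 75% bar
print(participation_credit(60, 100))  # 0.0  -- below the threshold (assumed no credit)
```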

The authors found that in the first semester’s clicker section, when clicker questions were not included in student grades, there was a statistically significant difference in the response rates of male and female students.  Female students answered 62% of clicker questions on average, whereas male students only answered 48% of questions.  In the second semester, when clicker questions were included in students’ grades, there was no significant difference in the response rates of male and female students.

The authors also found that students who were “active participators” (those who answered at least 75% of clicker questions in a semester) had higher final grades than students who were not active participators.  This difference was statistically significant for male students, but not for female students.  These results suggest that although male students participated less frequently than female students, male students who were active participators benefitted more from participation via clicker questions than female students.

The differences in final grades between active and non-active participators were consistent whether or not clicker questions were graded.  The authors conclude from this that “while the average grade improvement was the same during each term, the benefit of requiring clicker usage is that a greater number of students receive this benefit when participation is tied to their course grade.”  This argues for grading clicker questions, particularly for male students, who not only participate less when clicker questions aren’t graded, but also appear to benefit more from being active participators.

The authors also looked at student performance on final exam questions that were “related” to clicker questions asked during the semester.  As the authors expected, students who answered clicker questions correctly tended to do better on related final exam questions.  More surprising was that students who answered clicker questions incorrectly also did better on related final exam questions than students who didn’t respond to those clicker questions at all, indicating that class participation via clicker questions helped prepare students for exams.

It is worth noting that the correlation between answering clicker questions incorrectly and doing well on related final exam questions was not observed in the second semester in which clickers were used.  Recall that in the second semester, clicker questions were included in students’ grades.  The authors argue that this led some students to simply click in to earn participation points without really trying to answer the clicker questions.  Thus, including clicker questions in students’ grades is likely to encourage more students to participate, but enough disengaged students are likely to click in (incorrectly, in most instances) that the impact of clicker questions on student performance is harder to see in the data.

Comments: In their literature review, the authors note that there is evidence that teachers tend “to ask questions of and praise male students more than female students.”  This potential bias is another good reason to use the pick-a-random-student feature of some classroom response systems.  Having the system select at random a student from among those who responded to a clicker question prevents this kind of bias.

King and Joshi’s main results–that students who respond to clicker questions (correctly or not) benefit from the participation and that grading clicker questions (on effort) leads to more students (particularly male students) participating in this useful way–are interesting and persuasive.  The authors did a good job of blending a quasi-experimental design (grading clicker questions in one semester, not grading them in another semester) with data collected within a single semester to argue these points.

Given the authors’ comments about students in the second semester just clicking in for participation points, I wonder if asking students for their confidence in their answers would have helped parse out these students to yield more meaningful data from that semester.  For instance, if students who weren’t really trying to answer clicker questions could be persuaded to signify that they had low (and not high) confidence in their answers, one could remove responses with low confidence from the data set to see if answering clicker questions incorrectly still had a positive correlation with success on related exam questions.
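Here is a minimal sketch of the analysis I’m suggesting, assuming each clicker response were tagged with a self-reported confidence level; the data structure and field names are hypothetical, not anything King and Joshi collected.

```python
from scipy.stats import pointbiserialr

def incorrect_answer_effect(responses):
    """Correlate 'answered the clicker question incorrectly' with performance
    on the related exam question, after dropping low-confidence responses.

    responses: list of dicts with (hypothetical) keys 'confidence'
    ('high' or 'low'), 'clicker_correct' (bool), and 'exam_score' (float).
    """
    kept = [r for r in responses if r["confidence"] == "high"]
    answered_incorrectly = [0 if r["clicker_correct"] else 1 for r in kept]
    exam_scores = [r["exam_score"] for r in kept]
    # Point-biserial correlation between a binary indicator and a continuous score
    return pointbiserialr(answered_incorrectly, exam_scores)
```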

I should also add that it was unclear from the article what kinds of clicker questions were used in these courses–difficult ones, easy ones, recall questions, conceptual understanding questions, etc.  It was also unclear if students were asked to discuss clicker questions in small groups or as a class or if the instructors practiced “agile teaching,” responding in meaningful ways to the distribution of responses for particular clicker questions.  More description of these contextual factors would give the authors’ results more meaning.

King and Joshi’s results about gender–that male students tend to participate less frequently than female students (when not motivated by grades) and that male students who do participate benefit more in terms of performance on final exams–are also very interesting.  I’m reminded of findings shared by Hoekstra (2008) that male students tend to prefer to respond to clicker questions on their own, whereas female students tend to collaborate with other students prior to answering when given the option.  Hoekstra found that the male students liked to test themselves by seeing if they could answer a clicker question without external help.  It’s unclear from the King and Joshi article whether students were allowed or required to discuss clicker questions prior to responding to them, but if students were given the option of responding on their own, it might be that male students who self-test via clicker questions (whether voluntarily or when prompted to do so by the inclusion of clicker questions in course grades) benefit from doing so, leading to greater learning gains for these participating male students.

Clickers in (Very Large) Economics Courses

Jennifer Imazeki teaches a 500-student microeconomics course at San Diego State University, and she recently blogged about her use of clickers in this course.  In her post, she describes some advantages of campus standardization, her grading scheme, and her students’ (generally positive) response to learning with clickers.  She also describes a couple of teaching choices she’s made that I find particularly interesting.

Last semester, I also made a quiz available on Blackboard that students could take if they missed class; I take the higher of their clicker score or quiz score for a given day.

Imazeki notes that by giving her students the option of taking an online quiz, she minimizes the number of students who show up to class just to get their participation points.  Since these students are often somewhat disruptive in class, this works out well for the students who attend class while also giving Imazeki a way to keep tabs on the students who don’t attend class.

Imazeki also describes her use of the “pick-a-random-student-who-responded” feature of her classroom response system.

I tend to use this when I have asked the class to brainstorm examples or asked them a question that doesn’t really have a ‘wrong’ answer. Although students don’t love it, they don’t seem to hate it either.

As I’ve mentioned before, this feature can help prevent cheating with clickers (one student responding with an absent student’s clicker).  In a class of 500 students, it also offers a useful way to “cold call” students without having them feel like they’ve been singled out.

Imazeki also notes that she’ll have students draft responses to an open-ended question, then display a clicker question with preset answer choices.  Students are then asked to select the answer choice that matches their draft response.  With graphing questions such as the one she notes as an example (“Use a supply and demand graph to show what happens to price and quantity if X happens”), this approach seems particularly useful since drawing the right graph is a harder task than selecting the right graph from a set of options.

In a subsequent blog post, Jennifer Imazeki shares more results from a survey of her students about her use of clickers.  She notes what percentage of her students agreed with a variety of statements about the positive impact of clickers.  Then she writes:

The percentage agreeing with these statements has risen each of the three semesters I’ve taught the large lecture and the percentage disagreeing has fallen.

I think this is an important point.  Often when we try something new in our teaching, it doesn’t work out as well as we would like.  We haven’t really figured out how to make it work, and the students aren’t used to it either.  Sometimes a teacher trying out something new and receiving poor feedback from her students gives up on the innovation and reverts to her previous teaching methods.  However, as Imazeki’s data show, our use of instructional techniques can improve over time with practice and feedback.

(I’ve just noticed that this is the first time I’ve blogged about using clickers in the discipline of economics.  I have a couple of economics examples in my book, however, and I’ve heard from several economics instructors, including ones who teach very large classes like the one described here, that clickers can work very well in their discipline.)

Article: Hoekstra (2008)

And here’s the fifth and final part of student participation week here on the blog.  In some ways, I’ve saved the best for last.  Today’s paper is by Angel Hoekstra of the University of Colorado-Boulder.  When I interviewed Angel for my book, I got a preview of some of her findings, and I was impressed by her use of qualitative research methods to explore student perceptions of learning with clickers, so I was expecting interesting and useful results from her published work.  Having finally gotten around to reading that published work, I’m glad to say my expectations have been met.

Reference: Hoekstra, A. (2008). Vibrant student voices: Exploring effects of the use of clickers in large college courses. Learning, Media, & Technology, 33(4), 329-341.

Summary: Hoekstra investigated student perceptions of learning with clickers in multiple sections of a general chemistry course over a period of three years.  Most of the students in the course (apparently between 200 and 300 students per section) take the course to fulfill degree requirements, but typically less than 10% of the students are chemistry majors.  Survey results of these students indicated that 80-90% of them are “concerned about whether or not they will pass the course” and less than 10% of them “would feel comfortable enough to respond to a professor’s question by raising their hand in the large lecture hall.”

Clicker questions were typically used in the course after 10-12 minutes of lecture to assess student understanding of the material just explained.  “Classic” peer instruction was not used.  Instead, each clicker question was asked once and students were encouraged (but not required or prodded) to discuss the questions with their neighbors before voting.  Instructions given to students for these discussions were fairly vague (e.g. “Feel free to work with your neighbors”).  Also, clicker questions were followed by instructor explanations, not classwide discussion.  Correct answers to clicker questions earned three points each; incorrect answers earned one point each.  Clicker questions contributed only 5% of the students’ overall course grades.

Hoekstra investigated student perceptions of clickers through three semesters of student surveys administered using clickers, observations of 27 class sessions, and in-depth interviews with 28 students averaging 56 minutes in length.  Since students in the initial round of interviews had favorable reactions to clickers, students with less favorable views of clickers were interviewed in the second round.  Observation notes and interview transcripts were analyzed through in vivo coding, “a form of open coding designed to allow conceptual categories to emerge from the data.”

Key results are as follows:

  • Interviews indicated strongly that students pay more attention during lecture because they know that clicker questions are asked frequently during lecture.
  • Students stated in interviews that “they looked forward to times when they were able to talk with their peers” during clicker questions.
  • Observations revealed that students frequently voiced their reactions (positive, negative, surprised) to the display of results and answers of clicker questions.
  • Interviewees also indicated that the results displays provided them useful and regular feedback on their learning in the course.  Some even indicated that the clicker questions were most helpful when they answered them incorrectly since these were opportunities to resolve misconceptions.
  • About 15-20% of students chose not to engage in peer discussion of clicker questions.  The decision to engage was typically influenced by the difficulty of the clicker question and the student’s “affinity for working with others.”  Interestingly, during more difficult clicker questions, female students were more likely to engage in peer discussion than male students, who tended to use these questions as tests of their own understanding.  Other reasons for working alone included not having done the reading before class, a disinterest in hearing possibly incorrect explanations from their peers when the correct explanation would be forthcoming from the instructor, and the fact that accountability for peer interaction was difficult in the large class.
  • Most students found the general noise and activity levels in the classroom during peer discussion stimulating.  Some students found it distracting and would have preferred times of quiet as they answered clicker questions.
  • Many students felt that clicker questions increased their anxiety levels during the initial weeks of the course but as they became comfortable with the technology and with peer instruction, they found that the clicker questions decreased their overall anxiety about the course.

Hoekstra uses a quote from Trees and Jackson (2007) to summarize many of her findings: “The success of clickers is in many ways dependent on social, not technological, factors.”

Comments: Where to begin?  This study is a rich source of insight into the many ways that students interact with clicker questions and with each other during times of peer instruction.  I’ve briefly summarized the findings above, but the paper includes details, examples, and very illustrative quotations from student interviews.  I realize that some find qualitative research less meaningful than quantitative research, but I think the scope and rigor of Hoekstra’s work adds much credibility to her findings.

Before commenting on a few specific findings, I thought I might connect the teaching environment described in Hoekstra’s paper with some studies I blogged about earlier in “student participation week” here on the blog.  Given the results of Lucas (2009), the vague instructions given to students for discussing clicker questions might have reduced the quantity and quality of student participation in those discussions.  Since the grading scheme for clicker questions was both high-stakes (because correct answers earned three times as many points as incorrect answers) and low-stakes (because clicker questions contributed only 5% of the students’ course grades), it is unclear from the results of James (2006) and Willoughby and Gustafson (2009) whether this grading scheme would have enhanced or inhibited student participation.  Hoekstra’s work did not include a control group of any kind, so one can’t say whether student participation in the courses she studied was less than or greater than it would have been under different conditions, but her results seem to indicate that most students engaged in meaningful and productive peer discussions in spite of the vague instructions given to them and the somewhat high-stakes grading scheme.

As for Hoekstra’s specific findings, I think they lend support to a statement I made in my book: “Knowing that a deliverable [a clicker question] may at any time be requested from students can help students maintain attention and engagement during a class session.”  Clickers make it easy to request frequent deliverables of students during class, and Hoekstra’s findings indicate that this is an important reason to use clickers.  Hoekstra’s findings also support other reasons I frequently provide for using clickers: sharing the results of a clicker question can enhance student engagement, clicker questions provide students with useful feedback on their learning, and clicker questions can be useful in structuring class time for students.

Hoekstra’s findings about gender and student participation are thought-provoking.  I’ve debated the importance of an initial, independent vote prior to peer instruction time with several instructors who tend to skip this initial vote (particularly my friends in the math department at Carroll College).  Hoekstra’s findings indicated that for difficult clicker questions, female students might not get as much out of an initial, independent vote as male students, whereas male students might not appreciate jumping straight into peer instruction without a chance to respond independently.

This certainly complicates the debate about initial, independent votes, as well as a teacher’s choices during peer instruction times.  I’ve frequently found it to be the case that the students in my class who hesitate to engage in peer discussion are male students (so Hoekstra’s findings ring true to me), and I typically prod these students to discuss clicker questions with peers.  I may not do that as much in the future given these results.

The situation is further complicated by the finding that some students don’t appreciate the noise and activity levels during clicker questions.  This point reminded me of Richard Felder’s work on learning styles.  Felder’s model distinguishes between active learners, those who prefer to learn through discussion and interaction, and reflective learners, those who prefer to think quietly first.  He makes the great point that traditional lectures do a poor job of supporting both types of learners, since they typically provide students with little time for discussion or quiet reflection.  The “classic” peer instruction model serves both types of learners well, however, since students are invited to respond to clicker questions first on their own and then to discuss them with their peers.  (Larry Michaelsen’s team-based learning model works similarly.)

The finding that some students decide not to engage in peer discussions because they want to wait to hear the correct explanation from the instructor was interesting, as well.  Some instructors (for example, Dennis Jacobs, who teaches chemistry at Notre Dame and is profiled in my book) are very intentional about having students surface and debate reasons for and against all of the answer choices to a clicker question.  The methods these instructors use can help students move away from merely taking notes and memorizing explanations and develop critical thinking skills.  It’s possible that the students in Hoekstra’s study who preferred waiting for their instructor’s explanations to peer discussion might have been motivated to engage in peer discussion and thus sharpen their critical thinking skills with more directive instructions and/or requirements on the part of the instructors.

It’s also worth noting that students who weren’t interested in hearing their peers’ incorrect explanations for clicker questions were also concerned about sharing their own incorrect explanations and thus confusing their peers.  Responding to those concerns might be an important part of motivating these students to engage in peer discussions.

I’ll finish my comments with a response to the quote from Trees and Jackson (2007).  I’ll agree that social factors are likely more important than technological factors in the success of teaching with clickers.  I would qualify that statement, however, to note that (a) the technology can enhance those social factors when used well and (b) the teaching choices that instructors make when using clickers can have a significant impact on those social factors.  As a result, we need not think of those social factors as out of our influence as instructors.

That’s the end of student participation week here on the blog.  That’s also likely the end of five-posts-in-a-week here, too!  I’ll return to my usual format next week, although I have found a few more interesting looking articles on student participation to read soon…

Article: Willoughby & Gustafson (2009)

Here’s part four of student participation week.  I’m blogging all week about recent research on the impact of teaching with clickers on student participation in class.

Reference: Willoughby, S. D., & Gustafson, E. (2009). Technology talks: Clickers and grading incentive in the large lecture hall. American Journal of Physics, 77(2), 180-183.

Summary: Taking an approach similar to that of James (2006)–the subject of Monday’s blog post–Willoughby and Gustafson investigate the impact of grading incentives on student participation in astronomy courses through the analysis of audio-recordings of student small-group discussions of clicker questions.  They improve on James’ study by comparing high-stakes and low-stakes grading schemes within different sections of the same course taught by the same instructor; James considered two different courses, each with a different instructor.  Another difference is that James studied discussions between pairs of students, whereas Willoughby and Gustafson looked at discussions among groups of four students.

The course in this study was an introductory astronomy course for non-majors taught by one of the authors of the study.  Four sections of the course were studied, two in the spring 2007 semester and two in the fall 2007 semester.  In each of the four sections, clicker scores contributed 4% of the students’ course grades.  In the two “high-stakes” sections, correct answers to clicker questions were worth one point each and incorrect answers were worth nothing.  In the “low-stakes” sections, all answers to clicker questions (correct or not) were worth one point each.

Audio-recordings of student small-group discussions were only collected in the spring 2007 semester.  For these sections, student groups in the high-stakes section voted as a block in response to clicker questions 69% of the time, whereas student groups in the low-stakes section block-voted only 45% of the time, a statistically significant difference (p<0.0005).  Since there was no statistically significant difference in block-voting between the high-stakes and low-stakes sections in the fall, when audio-recorders were not used, the authors conclude that the difference observed in the spring might be due to the Hawthorne effect.  That is, since students were visually reminded that they were being studied (by the audio recorders), they might have altered their behavior.  The authors plan to conduct a follow-up experiment designed to clarify the effect of recorders on student behavior.

The authors note that the results for the spring semester (when audio-recorders were used) are consistent with James’ results.  They point out that James reported much higher rates of block-voting in both the high-stakes and low-stakes classes, but this makes sense given that (a) consensus is more difficult to achieve among four students than between pairs of students and (b) clicker scores contributed much more to students’ course grades in James’ study.

Although in the spring semester sections, students in the high-stakes section answered clicker questions correctly more often than students in the low-stakes section (57% vs. 50%), both groups of students performed equally well in terms of course grades and performance on the Astronomy Diagnostic Test (ADT), a “reliable and validated exam on general astronomy knowledge usually taught in high school science courses.”  (The ADT was also used by Len, 2007.)  The authors conclude that this provides evidence that the results of clicker questions in high-stakes settings may not be as accurate as those in low-stakes settings, a conclusion echoed by James.

Analysis of the audio-recordings in the spring semester indicated not only that students spoke more in the low-stakes section than in the high-stakes section, but also that the nature of their conversations was different.  In the low-stakes section, students more frequently stated answer preferences, asked for clarification, restated the question, and articulated new questions.  The authors conclude that high-stakes grading of clicker questions “will not lead to an increase in frank discussions among the students.”

Comments: These findings provide additional evidence in favor of the claims that James made in his article: that high-stakes grading schemes inhibit balanced and open student discussion during peer instruction time and lead to clicker question results that are less accurate assessments of student understanding.  As I said on Monday, these are important findings for instructors making choices about how to grade clicker questions, particularly instructors wishing to encourage useful peer instruction discussions and generate data on student learning useful for making agile teaching decisions.

As with James’ study, I’m impressed by the qualitative methods employed by Willoughby and Gustafson.  Audio-recording student conversations during clicker questions, then coding those conversations to identify patterns in the nature of those conversations, is a very useful method for understanding small-group learning dynamics.  The methods used here lend a lot of weight to the authors’ conclusion that high-stakes environments negatively impact the quality of peer discussions.

The degree to which block-voting was apparently influenced by the presence of the audio-recorders in this study is surprising to me.  I’m not sure why this would be the case.  Perhaps students who were recorded in the low-stakes class worried that if their small groups voted as a block without actually agreeing on the answer to a clicker question, the audio-recording would reveal the discrepancy to their instructors, so they voted more honestly.  Students in the high-stakes class might have felt a similar concern, but that concern was outweighed by their interest in scoring more points on clicker questions by answering accurately.

If that’s the case, however, it undercuts the assertion that lower-stakes grading schemes yield clicker results that more accurately reflect student understanding.  On the other hand, since the clicker questions contributed only 4% to the students’ overall course grades in the semester in which audio-recorders were not used, both sections were effectively low-stakes from the students’ point of view, which would explain the lack of difference in block-voting patterns between the two sections.  Recall that in James’ study, clicker questions contributed much higher percentages of the students’ course grades, magnifying the difference between high-stakes and low-stakes grading schemes.

I look forward to hearing the results of the authors’ follow-up study, the one designed to shed light on the effect of the presence of audio-recorders on student discussions and voting.

Article: Carnaghan & Webb (2007)

Here’s part three of student participation week.  I’m blogging all week about recent research on the impact of teaching with clickers on student participation in class.

Reference: Carnaghan, C., & Webb, A. (2007). Investigating the effects of group response systems on student satisfaction, learning, and engagement in accounting education. Issues in Accounting Education, 22(3), 391-409.

Summary: Carnaghan and Webb explore the impact of clicker usage on student learning and participation in four sections of an introductory management accounting course.  Three of the sections had 40 or fewer students; one had 72 students.  In two of the sections, clickers were used in the first half of the semester, then not used in the second half.  In the other two sections, clickers were only used in the second half of the semester.  This study design allowed the authors to investigate the effects of adding clickers to a course as well as taking them out of a course at the halfway point.

When clickers were used, students were encouraged to discuss their responses to the clicker questions before submitting them.  Once the votes were in, the results were displayed to the students in a histogram with the correct response highlighted.  If a significant number of students missed the question, “a student volunteer was asked to explain his or her response, followed by further discussion.”  If most students answered correctly (which was often the case since the average for the questions was 84% correct), it’s unclear what happened.  Presumably the instructor said a few words about the question and moved on.

When clickers were not used, the very same multiple-choice questions were used and students were again encouraged to discuss their responses before committing to them.  No polling mechanism was used, and thus neither instructor nor students were aware of the distribution of responses.  Instead, the instructor asked for a volunteer to answer the question.  If the volunteer was incorrect, more volunteers were enlisted “until either the correct answer was provided or the level of confusion indicated significant student difficulties.  The instructor would then display the correct answer, followed by further discussion and an explanation of the correct approach to solving the problem.”

To assess the impact of clicker use on student learning, scores on two midterm exams were compared.  Using clickers improved student performance on midterm exam questions that were related to the multiple-choice questions used in class, but only for the second midterm exam, not the first midterm exam.  According to student survey results, students perceived the use of clickers as helpful to their learning.

To assess the impact of clicker use on student participation, the authors looked at student survey results but also had a teaching assistant record how often each student asked the instructor a question or answered an instructor’s question during class.

  • Losing Clickers: Survey results indicated that students who had clickers taken away from them at the midpoint of the semester felt significantly less comfortable participating in the absence of the clickers.  However, these students actually asked more questions in the second half of the course, though they answered fewer questions.
  • Gaining Clickers: The students who initially did not use clickers felt slightly more comfortable participating with clickers than without, but this difference was not statistically significant.  When clickers were introduced at the midpoint of the semester, this group of students asked and answered fewer questions.
  • Overall: Taken as a whole, the students asked an average of 2.3 questions per student over the eleven observed class sessions when clickers were in use, but an average of 3.2 questions per student over the eleven observed class sessions when clickers were not used.  Similarly, the rate for answering questions was 5.9 questions per student with clickers and 6.2 questions per student without clickers.

As noted above, the average rate of correct responses for clicker questions was 84%.  The authors found a negative correlation between the percentage of students asking questions during class and the class score on the clicker questions.  As the authors state, “It appears that a [CRS] may discourage discussion in classes where the feedback from the system indicates a majority of students understand the concepts being reviewed.”

Comments: I first heard about this study in a July 2007 article on clickers in university settings in Maclean’s Magazine, a Canadian news magazine.  Here’s the quote that got my attention:

Even odder was the fact that students in the clicker classes interacted less with their professors by asking fewer questions. “It actually suppressed verbal participation,” says Webb, who was initially puzzled by the result. His subsequent theory: “I think students who got the questions wrong saw how many of their classmates got it right. If they were in the minority, they wouldn’t want to look foolish in front of their peers and they didn’t ask questions.”

I would argue that Webb is correct in the quote above, and that this study provides evidence not that the use of clickers discourages participation, but that the use of clickers with correct answers indicated on the display of results discourages participation.

Consider again what occurred in the clicker classes in this study.  When the results of a clicker question were displayed, the correct answer was indicated.  Some students probably assumed that since they knew the correct answer, they fully understood the question (whether or not they had responded correctly themselves) and so were less likely to ask questions or otherwise participate.  As Webb points out, students who answered the question incorrectly were probably less likely to participate at this point because they knew they were wrong and didn’t want to look foolish in front of their peers, particularly when they also knew that most of their peers answered the question correctly (as was often the case given the 84% success rate of students on clicker questions).

In the non-clicker classes, after students had time to think about and discuss the question at hand, the instructor called on volunteers to share their reasoning.  The students who joined in the discussion at this point did not have the correct answer confirmed for them by the instructor, and that uncertainty about the correct answer likely encouraged healthy discussion.  Students were more likely to engage in the discussion in order to find out the correct answer, and students with incorrect answers had no particular reason to stay silent during the discussion.

Thus, the use of a correct answer indicator quite possibly explains the differences in participation rates between clicker sections and non-clicker sections.  However, as I mentioned in yesterday’s post, just showing students the histogram of results of a clicker question might inhibit further student discussion of the question, particularly if one answer choice is far and away more popular than the others.  Students are likely to read such results as indicating that the popular answer is the correct one and to disengage from subsequent discussion for the reasons mentioned above.

The authors assert that the only differences between the clicker sections and the non-clicker sections in their study were “those related solely to characteristics of the [CRS] technology.”  That assertion is, for the most part, true.  However, two features of the technology–the display of results and the correct answer indicator–significantly changed the classwide discussion strategy implemented by the instructor.  As I have said on this blog before, one can’t study whether or not using classroom response systems enhances student learning or student participation.  Rather, one can only study whether or not using classroom response systems in particular ways affects those outcome measures.  Had the authors studied two types of clicker classes, one in which correct answer indicators were used and one in which they were not, then we might have a better answer to the question of the impact of this particular way of using clickers.

As it is, however, this article only provides evidence that the ways in which clickers were used in this study–displaying results along with correct answer indicators to students for clicker questions that were generally fairly easy–are not effective at encouraging student participation during class.

I’ll finish by noting that the preprint of the article I read back in 2007 is still available online.  However, the final published version of the article includes some very important revisions (particularly to the statistics used), so I would discourage readers of this blog from reading the preprint.

Article: Lucas (2009)

Here’s part two of student participation week.  I’ll be blogging all week about recent research on the impact of teaching with clickers on student participation in class.

Reference: Lucas, A. (2009). Using peer instruction and i>clickers to enhance student participation in calculus. PRIMUS, 19(3), 219-231.

Summary: In this article, Lucas assesses his use of clicker-facilitated peer instruction in his calculus courses.  Lucas has his students respond individually to clicker questions, then displays the results to the class (as a histogram), then has the students discuss the questions in small groups prior to a second vote and classwide discussion.  His grading scheme sounds high-stakes initially, since students receive only half-credit for wrong answers, but since he only uses clicker grades when students’ numerical course grades fall between two letter grades, the stakes are actually fairly low.  (According to the article I discussed yesterday, James (2006), this should encourage balanced peer discussion.)

There was a moderately strong correlation between students’ clicker scores and their overall course grades (r = 0.57).  Lucas notes that this means instructors might use clicker scores early in the semester to identify students who are struggling in a course.  Homework scores not only take more effort to collect but were also less strongly correlated with course performance in Lucas’ case.
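As a rough illustration of that early-warning idea (my sketch, not anything from Lucas’ article), one could flag students whose running clicker scores fall near the bottom of the class; the 25th-percentile cutoff below is an arbitrary assumption.

```python
import numpy as np

def flag_struggling(clicker_scores: dict, cutoff_percentile: float = 25) -> list:
    """Return students whose early-semester clicker scores fall below a chosen
    percentile of the class.  The cutoff is an assumption for illustration."""
    scores = np.array(list(clicker_scores.values()))
    cutoff = np.percentile(scores, cutoff_percentile)
    return [student for student, score in clicker_scores.items() if score < cutoff]

# Toy data: fraction of clicker questions answered correctly so far
early_scores = {"A": 0.90, "B": 0.45, "C": 0.80, "D": 0.55, "E": 0.70}
print(flag_struggling(early_scores))  # ['B'] in this toy example
```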

Based on end-of-semester student surveys in two calculus courses, one featuring clicker-facilitated peer instruction and the other taught in a more traditional manner, students who participate in peer instruction activities place a greater value on student-student learning (as opposed to instructor-student learning) than students who do not.

Lucas was interested in exploring the impact of the instructions he gave his students on their participation in peer discussion.  He videotaped two tables of eight students each discussing a particular question.  One table was given no instructions; the other was told to first discuss the question in detail in pairs using pencil and paper to explain their answers to each other and then discuss the question with other students at their tables.

The students that were given no instructions deferred to one of the “high status” students at the table, even though that student was incorrect, instead of defending their own, correct answers.  Lucas defined a “high status” student for the purposes of this study as one who ended up with a B+ or higher in the course, assuming “that students receiving high grades were regarded by their peers as having higher status.”  Furthermore, Lucas states that at this table, “there was very little mathematical dialogue” in the time allocated for discussion.

In contrast, at the table where students were given instructions to discuss the question in pairs using pencil and paper, the video indicated that the students spent most of the discussion time doing exactly that.  Furthermore, for the two pairs at the table that consisted of one high-status student and one non-high-status student, the non-high-status students contributed to the pair discussions.  In each case, both students were initially incorrect (with the same wrong answers) but, through balanced discussions involving mathematical reasoning communicated in writing, they were able to arrive at the correct solution.

Lucas concludes that the instructions given to students prior to peer instruction impact the nature of the peer discussions and that in a math class, encouraging students to discuss clicker questions using pencil and paper enhances the quality of those discussions.

Comments: James (2006), the subject of yesterday’s post, argues that the grading scheme used with clicker questions impacts the nature of the discussions that occur during peer instruction time.  Lucas here argues that the instructions teachers give students for peer instruction time are also important.  I think Lucas is onto something here, although his argument is weakened by the fact that he only analyzed the discussions among two groups of students about a single clicker question.  Further studies are necessary, I think.  It would be fairly easy for Lucas and other instructors to vary the instructions they give students prior to peer instruction, then see which sets of instructions lead to greater convergence to correct answers from the first vote to the second vote.

I think Lucas’ findings were enhanced by his use of video, however.  Video- or audio-taping student conversations provides a useful tool for better understanding the nature and dynamics of peer discussions.  James’ results are certainly stronger because of his analysis of such audio-recordings.

There are other factors that might impact the nature of discussions during peer instruction time, of course.  Eric Mazur and Nathaniel Lasry, in particular, have mentioned the display of the results of the initial clicker vote as one potentially important factor.  If there’s consensus around a single response (right or wrong), students seeing the histogram might assume that the popular answer is the correct one and thus, assuming they now understand the correct answer, disengage from subsequent discussion of the question.  Thanks to Mazur’s and Lasry’s observations as well as my own, I’ve been much more intentional this semester about whether and when I show my students these initial results.  There’s potential for a study of this factor, too.

Lucas’ definition of “high status” is a practical one, certainly, and a useful one, too, I think.  James explored the connection between high-performing students and contributions to peer discussions in his study, too.  There are other definitions of status, however.  For instance, when I interviewed Edna Ross for my book, she described some of the ways in which race and gender affect student-to-student discussions during peer instruction time.  If better instructions and lower stakes help motivate lower-performing students to participate more meaningfully in peer instruction (as Lucas’ and James’ results seem to indicate), might these methods also help defuse some of the negative ways that race and gender impact peer instruction?  Given the results of Reay, Li, and Bao (2008), indicating that their clicker-facilitated question-sequence pedagogy reduced the performance gap between male and female students, the answer is quite possibly yes.  There’s another study idea for you…

Two final comments: I like the idea that clicker scores might function as an easily obtained early warning indicator for students struggling in a course.  Implementing this would involve scoring clicker questions on accuracy (for this purpose if not as part of students’ grades), as well as taking a look at individual student clicker scores early in the semester.
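
Here is a rough sketch of what that early-warning check might look like, assuming nothing more than a record of which clicker questions each student has answered correctly so far.  The 40% threshold, the ten-question minimum, and the data layout are hypothetical illustrations, not anything from Lucas’ article.

```python
# Sketch of an early-warning check: flag students whose clicker accuracy over
# the first few weeks of the semester is low.  The threshold and
# minimum-question values below are arbitrary illustrations.

def accuracy(responses):
    """responses: list of booleans, True meaning a correct clicker answer."""
    return sum(responses) / len(responses) if responses else 0.0


def flag_struggling(clicker_records, threshold=0.40, min_questions=10):
    """Return (student, accuracy) pairs below the threshold, lowest first."""
    flagged = [(student, accuracy(answers))
               for student, answers in clicker_records.items()
               if len(answers) >= min_questions and accuracy(answers) < threshold]
    return sorted(flagged, key=lambda pair: pair[1])


records = {
    "student_a": [True, False, True, True, False, True, True, True, False, True],
    "student_b": [False, False, True, False, False, False, True, False, False, False],
}
print(flag_struggling(records))   # [('student_b', 0.2)]
```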

Also, Lucas’ finding that implementing peer instruction in the classroom leads students to value learning from their peers more is an interesting one.  This result indicates that the teaching methods we use can have an impact on students’ metacognition, their learning about learning.  And if you believe that student-to-student learning is valuable (as many do), then we can have a positive impact on our students’ metacognition by implementing peer instruction.

I’ll add here that Adam Lucas, the author of this article, and I will be facilitating a minicourse on teaching with clickers and classroom voting at the January 2010 Joint Mathematics Meetings in San Francisco.  Math faculty interested in getting started teaching with clickers are encouraged to join us!

Article: James (2006)

It’s student participation week here on the blog.  Last week I grabbed a handful of articles on classroom response systems to read while proctoring my linear algebra exams, and, as it turned out, all the articles dealt with the impact of teaching with clickers on student participation.  I’ll be blogging about these articles all week.  Enjoy!

Reference: James, M. C. (2006). The effect of grading incentive on student discourse in peer instruction. American Journal of Physics, 74(8), 689-691.

Summary: James looked at the impact of grade incentives on student participation in two introductory astronomy courses.  The “high-stakes” course was a general intro course for non-majors with 180 students in which clicker questions “counted for 12.5% of the overall course grade… and incorrect responses earned one-third the credit earned by a correct response.”  The “low-stakes” course was a course on space travel and the possibility of extraterrestrial life for non-majors with 84 students in which clicker questions “counted for 20% of the course grade and incorrect responses earned as much credit as correct responses.”
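
For concreteness, here is a small illustration of how the two incentive schemes score the same set of responses; the ten-question record is hypothetical.

```python
# Contrast of the two grading incentives described above: in the high-stakes
# course an incorrect response earns one-third credit; in the low-stakes
# course it earns full credit.  The ten-question record is hypothetical.

def clicker_points(responses, incorrect_credit):
    """responses: list of booleans (True = correct answer)."""
    return sum(1.0 if correct else incorrect_credit for correct in responses)


record = [True] * 8 + [False] * 2                                  # 8 right, 2 wrong

print(round(clicker_points(record, incorrect_credit=1 / 3), 2))   # 8.67 of 10
print(round(clicker_points(record, incorrect_credit=1.0), 2))     # 10.0 of 10
```

Only under the high-stakes rule does a wrong answer cost anything, which is presumably what makes deferring to the partner who seems more likely to be right an attractive strategy.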

In each course, a few clicker questions were asked in each class session.  Students were asked to discuss each clicker question with a neighbor before responding individually to the question.  The peer instruction conversations between at least two dozen students in each course were audiotaped on three separate occasions (near the beginning, the middle, and the end of the semester), and each statement made by these students was coded using a set of ten categories (restating question elements, stating answer preference, providing justification for a way of thinking, and so on).

Key findings from the discourse analysis and other data include the following.

  • In the high-stakes classroom, conversations within pairs of students tended to be dominated by one of the two partners.  Furthermore, the dominant partner was typically the student who ended up with the higher grade in the course.  These correlations were not present in the low-stakes course, where conversations within pairs tended to be more balanced with each student contributing.
  • In the high-stakes classroom, conversation partners responded with different responses to clicker questions only 7.6% of the time.  In the low-stakes classroom, they did so 36.8% of the time.  James concludes that “when there is a grading incentive that strongly favors correct responses to CRS questions, the question response statistics displayed by the CRS after each question may exaggerate the degree of understanding that actually exists” and thus impede agile teaching that responds to student difficulties.

Comments: I was impressed with James’ analysis of audio-recordings of student conversations during class.  I think this qualitative research method is a powerful way of “uncovering” learning dynamics within the classroom.  The results of his analysis provide convincing data for his assertion that “the grading incentives instructors adopt for incorrect question responses impacts the nature and quality of the peer discussion that takes place.”

James’ finding that student conversations were more balanced in the lower-stakes classroom is an important one for instructors to consider when determining their grading schemes for clicker questions.  His finding that in the high-stakes class, dominant students tended to be students who ended up with higher grades in the course, however, makes me wonder if students dominated because they had a better grasp of the material or if they ended up with a better grasp of the material because they contributed so much during peer discussions.

Had the students been asked to respond individually to the clicker questions before peer instruction, data from those initial votes could have been used to settle this question, I think.  If the dominant students usually answered correctly before peer discussion, then it’s more likely they dominated because they were right.  If the dominant students didn’t answer correctly at higher rates than the other students, then it’s likely the dominant students worked out the correct answers through contributing to the peer discussion.
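
If those pre-discussion votes had been collected, the check could be as simple as the following sketch; the data layout and field names are hypothetical.

```python
# Sketch of the comparison proposed above: did the "dominant" partners answer
# correctly on an individual, pre-discussion vote more often than the other
# students?  The records below are hypothetical placeholders.

def pre_vote_accuracy(students):
    """Fraction of students in the group who were correct before discussion."""
    return sum(s["pre_vote_correct"] for s in students) / len(students) if students else 0.0


def compare_groups(records):
    dominant = [s for s in records if s["dominant"]]
    others = [s for s in records if not s["dominant"]]
    return pre_vote_accuracy(dominant), pre_vote_accuracy(others)


records = [
    {"dominant": True,  "pre_vote_correct": True},
    {"dominant": True,  "pre_vote_correct": False},
    {"dominant": False, "pre_vote_correct": False},
    {"dominant": False, "pre_vote_correct": True},
]

dominant_rate, other_rate = compare_groups(records)
print(dominant_rate, other_rate)
# A clearly higher dominant_rate would suggest students dominated because they
# already understood the material; similar rates would suggest the dominant
# students worked out the answers during the discussion itself.
```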

This is an important question because it points toward an assumption that I believe many readers of James’ article will make: that the more students are able to contribute to peer discussions, the more they learn.  I don’t believe James actually makes that assumption here; he’s simply describing the effects of grading incentive on who talks more.  However, there’s a large body of research that supports this assumption, so it’s a reasonable one to make.  Under this assumption, it’s a good thing if more students contribute to peer discussions, so instructors should use lower-stakes grading schemes.

Surprisingly, in the low-stakes classroom, student exam scores weren’t correlated with contributions during peer instruction.  This result seems to undercut the above assumption that discussion is useful to student learning.  Perhaps this lack of correlation is more of a statistical issue, however: a correlation shrinks whenever the range of either variable is compressed.  It could be that students in the low-stakes classroom all did pretty well on the exams, which would account for a weak correlation.  For that matter, if these students all contributed at similar levels, that would weaken any correlation, as well.
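
A quick simulation (not from James’ study; all numbers are made up) illustrates how that kind of range restriction can hide a real relationship:

```python
# Illustration of restriction of range: a genuine correlation between
# participation and exam scores shrinks when the sample is limited to
# students who all did well on the exam.  All numbers are made up.
import random

random.seed(1)

participation = [random.gauss(50, 15) for _ in range(500)]
exam = [0.5 * p + random.gauss(50, 10) for p in participation]   # built-in relationship


def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


print("full class r:", round(pearson_r(participation, exam), 2))

# Keep only the students who "did pretty well" on the exam.
kept = [(p, e) for p, e in zip(participation, exam) if e >= 80]
p_kept, e_kept = zip(*kept)
print("high scorers only r:", round(pearson_r(p_kept, e_kept), 2))
```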

The finding that student pairs in the low-stakes classroom more frequently submitted different answers is a very useful one.  Practicing agile teaching by responding to the results of a clicker question will more likely enhance student learning if the results of the clicker question are accurate.  That’s why a classroom response system is a more useful response mechanism than hand-raising or flash cards, according to Stowell and Nelson (2007).  This is a strong argument in favor of low-stakes clicker questions.  If it is indeed the case that most students did pretty well in the low-stakes course, it might be because their instructor had better data with which to make agile teaching decisions.

This raises another reason that having students respond to the clicker questions individually before peer instruction would have provided a useful source of data for James.  Had students who initially missed clicker questions in both courses ended up doing better on exams in the low-stakes course, this would have potentially provided evidence for the agile-teaching effect, the contribution-to-discussion effect, or both.

I think James does a good job of not overstating his results, as compelling as they are.  It’s important to point out, however, that this wasn’t a control group experiment.  The topics of the two courses (and thus the nature of the clicker questions) were different, as were the instructors.  Both of these elements could explain the differences in participation, independent of the grading incentives used.

Update: Mark James emailed me after reading this post and suggested that exam performance can help distinguish between more knowledgeable and less knowledgeable students for the purpose of analyzing peer instruction conversations (as he did in this study).  However, since exam performance is a measure of general knowledge of course content, it is less useful for assessing the specific impact of clicker-facilitated peer instruction on student learning, particularly given that only a subset of exam questions were on topics similar to those explored during clicker questions.  This makes sense to me and would explain the lack of correlation seen in the study between contributing to peer instruction discussions and exam performance.

James also pointed out a follow-up study, which I plan to read and blog about in the future.  Here’s the reference:

James, M. C., Barbieri, F., & Garcia, P. (2008). What are they talking about? Lessons learned from a study of peer instruction. Astronomy Education Review, 7(1).

Article: Fies & Marshall (2008)

Reference: Fies, C., & Marshall, J. (2008). The C3 framework: Evaluating classroom response system interactions in university classrooms. Journal of Science Education and Technology, 17, 483-499.

Summary: The authors interviewed and observed nine instructors in different disciplines and surveyed students in one of their own courses (in which clickers and hand-raising were used to respond to questions during alternating weeks) in order to analyze ways in which both teachers and students make use of and react to classroom response systems.  The authors organize most of their findings in a “framework of interaction” they call the C3 framework.

The authors observed that some instructors focused more on performance goals, using clickers to take attendance and administer quizzes.  Other instructors focused more on mastery goals, using clickers to practice agile teaching, facilitate peer instruction, ask one-best-answer questions, create “teachable moments” by creating on-the-fly questions in response to student comments, and make use of student suggestions for answer choices.  The authors note that instructors tended to use clickers in ways that matched their existing orientation toward performance or mastery goals; using clickers did not significantly change their goals.

Feedback from the students studied indicated that students appreciated finding out where they stood among their peers and that clickers increased their participation in class.  There were also indications that in the weeks that clickers were used, student understanding deepened, although comments on the end-of-semester meta-reflection indicated that students were not always aware of this.  It should be noted that the students studied were mostly female pre-service teachers taking a physical sciences course.

The C3 interaction framework developed by the authors includes three components, each of which can be viewed from a teacher or student’s perspective:

  • What concerns does the teacher/student have?  Performance goals or mastery goals?
  • Where should a class be centered?  On the students or on the teacher?
  • Who should have control of the interactions in class?  The students or the teacher?

They note that each of these Cs is a continuum, not a binary choice, and that the element of control tends to be a function of the other two elements.

Comments: Probably the most important point the authors make is that clickers can be used in ways that align with an instructor’s existing orientation toward performance or mastery goals or toward teacher-centeredness or student-centeredness.  I would argue that using clickers can lead instructors to adopt more mastery-oriented and student-centered teaching practices, but as Fies and Marshall point out, this change is not automatic.  Given the flexibility of the technology, instructors are quite able to adapt it to their existing teaching practices.

The authors point out that a more restrictive technology, one that only supports mastery orientations and student-centeredness, might encourage more instructors to change their teaching practices accordingly.  However, such a technology might not be of interest to instructors not already teaching in student-centered ways.

What then might help instructors think about their teaching choices in ways that open them to consider mastery orientations and student-centered teaching?  I wonder what kind of assistance in using clickers the instructors in this study were given.  Perhaps pedagogically-oriented workshops, working groups, or consultations would have helped the instructors consider other options for using clickers than the ones that aligned with their existing teaching practices.

Some instructors have “teachable moments” of their own when using clickers, finding out from the results of a well-chosen clicker question that their explanations of certain topics were not making sense to students.  This can lead instructors to change their teaching practices to include more formative assessment of student learning.

What ideas do you have for helping instructors consider student-centered teaching practices (with or without clickers) in ways that make sense in their particular teaching contexts?

Regarding the C3 framework, it seems to me that the framework describes a single continuum, not three potentially independent continuums.  This single continuum would have mastery goals, student-centeredness, and student control at one end and performance goals, teacher-centeredness, and teacher control at the other end.  If the framework really describes three potentially independent continuums, I would like to hear about an instructor that had mastery goals but was teacher-centered or a student-centered instructor with performance goals, for example.

One of the student comments reported in the article stood out to me as a reason for not participating in class that I hadn’t heard before: “I do not want to seem like I am trying to hog class time.”  I often hear that students are hesitant to speak out in class for fear of being wrong or of having a perspective not shared by their peers.  This idea of deference to other students’ learning opportunities is a new one in my experience, although it makes sense to me, particularly with certain students.
