Ancillary fee anxiety

[Image: Anxiety Cat is anxious about ancillary fees]

I had originally planned to write (and actually wrote a draft of) a post exploring my questions and concerns about asking students to pay for access to a web-based classroom response system (WBCRS henceforth), like Lecture Tools (now integrated into Echo 360), Top Hat, or Learning Catalytics. My major concern? These tools are essentially ways to teach huge classes better – to bring in the interactivity and communication that are difficult to achieve in a large-class setting – so the fee amounts to a kind of “large class tax” on students. (I’ve used Lecture Tools for several terms – see my previous posts here, here, and here.)

I’d hoped to gain some clarity, and maybe spark some conversation with colleagues, about the issues relating to using a WBCRS at a cost to students. As part of my thinking, I considered some of the other ancillary items we routinely ask students to purchase (i.e., items not usually included in their tuition, but required for a course). I originally thought that a teaching tool was really different from a required textbook, dissection kit, safety glasses, or lab coat. Now I’m not only concerned about the ethics/fairness of asking students to purchase licenses for a WBCRS, but also about requiring textbooks and disposable lab coats!


BYOD and classroom web response systems – my intersession experience

I recently finished teaching an intersession introductory microbiology course. It was a relatively small class (at least, for me) – just over 50 students – and it was a blended, flipped class. (I may post more about the flipping/blending later.) For the in-person classes, I used a couple of Bring-Your-Own-Device (BYOD) web-based classroom interaction systems: Lecture Tools and Learning Catalytics. (In previous offerings of the course, I used clickers.) In this post, I’ll refer to these types of systems as WRS (Web Response Systems). We had access to both systems (at no cost to the students*), and used Lecture Tools regularly.

As I discussed in an earlier post, I had hoped this experience would help me make decisions about moving away from clickers to a WRS. Meeting my students in class for only three hours, once a week, for six weeks was perhaps not the best way to gather a lot of data, but it was nice to try out new technology in a smaller class. Here are some of the things I observed/learned:


Thinking (and reading) about grading

I just finished my intersession course (yay!), and am trying to catch up on some reading. Schinske and Tanner’s “Teaching More by Grading Less (or Differently)”, recently published in CBE-Life Sciences Education, includes lots of good stuff: a brief history of grading in higher ed, the purposes of grading (providing feedback and motivation to students; comparing students; measuring student knowledge/mastery), and “strategies for change” to help instructors who want to maximize the benefits of grading while reducing the pitfalls. There are many interesting points and suggestions in this paper, and hopefully it will be one of the ones we discuss at an upcoming oCUBE journal club meeting.

In the meantime … anyone else want to chat about some of the stuff discussed in the paper? <:-)

Reference:
Schinske, J., and Tanner, K. (2014). Teaching More by Grading Less (or Differently). CBE-Life Sciences Education 13(2): 159–166.
http://www.lifescied.org/content/13/2/159.short


Test question quandary: multiple-choice exams reduce higher-level thinking

Last fall, I read an article by Kathrin F. Stanger-Hall, “Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes” (CBE-Life Sciences Education, 2012, 11(3): 294–306). I was interested and disturbed by the findings … though not entirely surprised by them. When I got the opportunity to choose a paper for the oCUBE Journal Club, this was the one that first came to mind, as I’ve wanted to talk to other educators about it. I’m looking forward to talking with oCUBErs, but I suspect that there are many other educators would also be interested in this paper and in some of the questions/concerns it prompts.

The study:

[Figure 4 from Stanger-Hall (2012): “Student evaluations at the end of the semester. The average student evaluation scores from the MC + SA class are shown relative to the MC class (baseline).”] Maybe reports of student evaluations of teaching should also include a breakdown of the assessments used in each class?

Stanger-Hall conducted a study with two large sections of an introductory biology course, taught in the same term by the same instructor (herself), with different question types on each section’s tests. One section was tested on midterms by multiple-choice (MC) questions only, while midterms in the other section included a mixture of MC questions and constructed-response (CR) questions (e.g., short answer, essay, fill-in-the-blank), referred to as MC+SA in the article. She had a nice sample size: 282 students in the MC section and 231 in the MC+SA section.

All students were introduced to Bloom’s Taxonomy of thinking skills, informed that 25–30% of exam questions would test higher-level thinking*, and given guidance on study strategies and study time. Although (self-reported) study time was similar across sections, students in the MC+SA section performed better on the portion of the final exam common to both groups, and reported using more active (vs. passive) study strategies. Despite their higher performance, the MC+SA students did not like the CR questions, and rated “fairness in grading” lower than students in the MC-only section. (I was particularly struck by Figure 4, illustrating this finding.)
