How innovative are we?

Me, back in my punk phase.

Around these parts, we tend to innovate a lot. We write posts about the cool new things we do, partly because we like telling people about such stuff, and partly because we think you might like to try such stuff too.

Of course, we’re also very sensitive and certainly don’t fetishise innovation: it’s got limits and there’s a time and a place for it all. We’re not going to make you do something you don’t want to, or need to.

But there’s another question that sits to one side in all of this: just how innovative are we in any case?


Holiday Greetings

Some holiday cheer as the calendar year winds down:

Should I fix you some sandwiches?

The College of Saint Rose has seen its enrollment decline by 15 percent since 2008. In an attempt to balance its budget, it has eliminated twenty-three academic programs. More than ten percent of its full-time faculty are slated for termination. Programs scheduled for closure include bachelor’s degrees in philosophy, religious studies, sociology, geology, Spanish, and economics.

Burlington College continues to teeter on the edge of the abyss, for reasons I discussed in August and December of 2014. Burlington’s full-time undergraduate enrollment shrank by one-third from fall 2014 to fall 2015, to 123 students. The college has announced a $2,000 reduction in tuition for the next academic year. In reality the college is just cutting its discount rate in the hope that a lower sticker price will appeal to a larger number of potential students. Unfortunately, these are the same people who have the greatest need for financial aid.

Meanwhile Arizona State University (ASU) and edX have reported a few of the results from the inaugural year of their MOOC-based Global Freshman Academy. Of the 34,000+ people who registered for the Academy’s three MOOCs, only 323 are now eligible to earn academic credit. Some people might say that a completion rate of less than one percent makes the initiative a massive failure. But it’s highly doubtful that the majority of the people who registered for the MOOCs did so to obtain academic credit. It’s also unlikely that the MOOC participants on average were as motivated as the typical student on campus. It could even be the case that many of those who registered for the MOOCs were not as academically prepared for college-level courses as the average ASU first-year student.

As I’ve written previously, I see MOOCs as an extremely low-cost, low-risk, and convenient alternative to much of what happens in many university classrooms. ASU and edX are starting small, gathering data, and iterating. Soon they will be offering a full year’s worth of college courses at a price that is less than half of what Burlington College and College of Saint Rose are charging.

Defending Powerpoint: The Instructor, Not the Tool, Makes the Lesson

The Guardian’s Andrew Smith recently published a piece on its technology blog lamenting the overuse of PowerPoint (PPT)-based lectures in the college classroom, citing as faults both the boredom it causes and the critical thinking it supposedly inhibits. I agree that PPT can lead to lectures with such attributes, but as with so many teaching methods, the tool itself should not be blamed for the faults of a lesson.

PowerPoint: the death of critical thinking?

To be as anecdotal as the author, I too have heard terrible, boring lectures grounded in pretty fonts, three or four bullet points, and droning voices that belong in a sleep-aid app. But I’ve also attended PPT lectures that were brilliant and thought-provoking, and been put to sleep by a fair share of lectures with no visual aids at all.

A good lecture has more to do with the skill of the presenter than with the tool itself. A dynamic presenter can create an interesting, informative, and provocative lecture using PPT, Prezi, or any other presentation software, while the same tool in the hands of a less skilled teacher can of course lead to confusion, boredom, and passivity. The chalk/white board (itself a piece of technology) can be just as misused: many instructors have limited abilities at what I’ve always called ‘boardcraft’, the art of using the board effectively to communicate clearly with students. A set of prepared notes, using the board or no technology at all, can lead to an amazing, provocative lecture–or not. It is the skill and training of the instructor, and how they use their instructional tools, rather than the tools themselves, that lead to desirable results in a classroom setting.

For example, the ALPS team strongly supports simulations and games as methods of teaching students. The most important part of using a game for learning, though, is the debriefing process that occurs after the activity has ended, which requires the instructor to pull the experiences of the students from them and help them think through how the content lessons are exemplified or challenged by the gameplay.

The teaching tool does not determine the interest level or critical nature of the lesson.

This can be done with prepared PPT slides with provocative questions, or the whiteboard, or online discussion boards, or via old school classroom discussion. The tool is far less important than the instructor’s ability to tie personal experience in the game to the overall lesson.

Another assertion in the article that I find troubling is the implied trade-off between using PPT and the board. Yes, PPT is linear in its approach. But there is nothing stopping an instructor from going off-script to follow up on a point made by a student. Depending on the technology setup in your classroom, it can be pretty easy to switch between the two. One of the classrooms I teach in has the projector on a separate wall from the board, so I often use both. In another, the projector blocks the board–but it is the work of a moment to ‘Pic-Mute’ the projector and pull the screen up so I can use the board.

There are other methods of ensuring that PPT does not make your lessons boring and uncritical. Consider avoiding bullet points entirely and instead using images, clips, and questions as the touch points for your lesson, allowing the content for that slide to be presented and discussed more dynamically. Alternatively, you can always include slides at the end that are not necessarily part of the main lesson, but which you can jump to very easily if a relevant point comes up in the course of the lecture. For example, I will frequently drop polling data at the end of my slides on related topics that students sometimes bring up (say, opinions on various social issues for a lecture on Civil Liberties). This gives me options–I can jump to those slides if they come up, but I’m not required to do so if they do not. Jumping in and out of PPT itself is also very easy, and I will do it readily if I suddenly recall a news clip, video, or primary source that I did not include in the main lecture. No one has to be tied to the pre-prepared slides unless they let themselves be.

Clearly, I am a fan of PPT, but like any tool, it has its limitations in the classroom. My philosophy is that we have a toolbox full of methods, of which PPT is one, and we should use the tool that helps us craft the best lesson for a given piece of content. It is our job as instructors to create lessons that are not boring and which provoke critical thinking; the blame for a class that fails to meet that mark falls squarely on us, not on PowerPoint.

A Quick Exercise on Confirmation Bias and Hypothesis Testing

This neat exercise featured in the New York Times takes only a few seconds to play and includes a useful set of examples of how confirmation bias impacts government policy and corporate America.

Basically, you are presented with a sequence of three numbers and asked to guess the rule that governs it. You can enter any three numbers you like, and the system will tell you whether or not your sequence follows the rule. When you are ready to guess, you enter it, but you receive no second chances. Apparently 78% of people make a guess without getting a single ‘no’–and most get the rule wrong.

The example in the NY Times is ‘2, 4, 8’. A number of possible rules come to mind: the sequence must contain multiples of 2, or even numbers, or each number must double the one before it. The actual rule in this case is even simpler: each number must be larger than the one before it, meaning that ‘4, 8, 16’ works, but so does ‘1, 10, 3593’.

They don’t mention it in the article, but this exercise can be adapted to teach hypothesis testing. Used in class, you can put the sequence on the board and have students suggest other sequences, which you then judge as either following or not following the rule. They have to use this information to come up with the right answer.
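If you’d rather run the exercise on a projector than judge sequences by hand, here is a minimal sketch of how it might look. This is my own illustrative version, not code from the Times piece, and it simplifies the final step: typing ‘guess’ simply reveals the rule, since in class the instructor can judge spoken guesses.

```python
# A minimal, illustrative version of the "2, 4, 8" exercise.
# The hidden rule matches the one in the NYT piece: each number
# must be larger than the one before it.

def follows_rule(seq):
    """Return True if the sequence is strictly increasing."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def play():
    print("The sequence 2, 4, 8 follows a hidden rule.")
    print("Test any three numbers (e.g. '3 6 12'), or type 'guess' when ready.")
    while True:
        entry = input("> ").strip()
        if entry.lower() == "guess":
            # Simplified ending: reveal the rule instead of scoring a typed guess.
            print("The rule: each number must be larger than the one before it.")
            break
        try:
            seq = [float(x) for x in entry.split()]
        except ValueError:
            seq = []
        if len(seq) != 3:
            print("Please enter three numbers separated by spaces, or 'guess'.")
            continue
        print("Follows the rule!" if follows_rule(seq) else "Does not follow the rule.")

if __name__ == "__main__":
    play()
```

The point of the design, on screen or on the board, is the same: students only learn anything useful by deliberately seeking out sequences that produce a ‘no’.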

If this sounds familiar, it may be because one of the very first entries on this blog was about the board game Zendo, which does precisely this, only with physical pieces rather than numbers. I still use Zendo on day 1 of my methods class, and find it a really useful tool for teaching a variety of methodological skills. This numerical version is a great, easy activity to pull out for a quick fix on helping students with their logical thinking.

Avoiding Feature Creep, Part 1

This post is inspired by the disappearance of some of the last chalkboards on my campus because of a building renovation. I regard the chalkboard as one of the best teaching tools ever invented. Chalkboards are absurdly easy to use. They have no moving parts and no need for electricity, so they always work. They are relatively inexpensive and never need upgrading.

The new classrooms that are being constructed will invariably get a combination of whiteboards and multimedia equipment. I find whiteboards a pain to erase, and on them I produce noticeably worse penmanship and diagrams than I do on chalkboards. The multimedia equipment allows for the presentation of information in a variety of ways — video clips, PowerPoint slides, the display of documents being edited in real time — but only when all the complex pieces are working properly. The fact that we are able to continue teaching, and teach well, when this equipment fails demonstrates that the computer, the projector, the speakers, and everything else are actually not essential to doing our jobs.

Generally I want a tool that does one thing or a very small combination of things well, like a pen that makes writing on a piece of paper feel good or a chair that doesn’t make my back sore when I sit in it. Or a cell phone that functions as . . . a phone. Whether I’m in the classroom or online, the tool should make me more effective at teaching because it simplifies the process of learning for students.

Unfortunately many of the people who design tools ignore the fact that the utility of any tool usually decreases in proportion to the tool’s number of features and the manner by which those features are accessed. For example, compare the screens of these two online learning management systems:

Blackboard — Chernobyl control room?

Canvas — I can fly this.


Or, if you are selecting a simulation for the classroom, compare the Survive or Die game:

Deck of Cards

and Statecraft:

Statecraft

I’ll talk about why avoiding feature creep is important for overall course design in a future post.


Back to the Future with Blended Courses

I recently returned from the Online Learning Consortium’s conference on blended learning. A blended, or hybrid, course is one in which lecture content has been moved online, and less-frequent classroom sessions focus on the higher-order tasks of application, evaluation, and synthesis.

Here is the advice that veterans of blended course design gave at the conference:

  • Set student expectations in advance. Students who are new to blended courses frequently conclude that they are a bad combination of the online and face-to-face worlds. It’s up to instructors to frame the experience as one that provides greater access to and more effective interaction with faculty. Pitching the course as an experiment is probably the worst message to send.
  • Online content and face-to-face exercises must correspond to but not duplicate each other. Students’ classroom participation in team- or project-based activities, for example, needs to align with the key concepts of the online content so that both sides of the course unfold in a coherently scheduled, mutually reinforcing manner. Frequent assessments that prevent students from progressing through the content until they demonstrate proficiency are highly useful in this regard. If the online content replicates what happens in the classroom, or if the two are not integrated with each other, students will either stop engaging with the former or stop being physically present in the latter.
  • Students need to understand that “online time” does not replace “homework time.” They will still need to devote significant effort outside of class to research, writing, or the completion of problem sets. This message can be highlighted as part of the orientation to using online content that students will need at the beginning of the semester.
  • Conversely, instructors need to be careful not to overwhelm students with material in excess of what students would encounter in the course’s traditional version. 
  • Online video should be broken into 5-10 minute pieces with a Goldilocks-style assessment exercise after each piece — neither too easy nor too difficult. This fosters students’ engagement with the content by giving them the feeling that they’re being fairly challenged. If the assessments are perceived as too difficult or as irrelevant busy work, student motivation to access the content will decrease.
  • When producing video, don’t be afraid to be a real human. Students are not looking for a Taylor Swift-level of production value.
  • Use replicable tools, methods, and content to drive down the financial and emotional costs of creating additional blended courses in the future.

Grading Discussion in Online Courses

If enrollment holds steady, on June 29 I will start teaching two seven-week online graduate courses.* I’ve been teaching these courses every summer for several years, and I’ve decided to experiment this summer with a different system for grading student discussions.

I incorporate student discussion into all my courses, whether they are on campus or online, because I believe it fosters student engagement. But–yet again–discussion in these two courses last year demonstrated that there is often a gap between what I believe students should do and how students decide to achieve whatever objectives they have set for themselves.

The shift was also prompted by the adoption of a different instructional tool. When I began teaching these courses, my university used Blackboard as its course management system. Anyone who has used Blackboard knows that it lacks an intuitive user interface and requires both students and instructors to click through innumerable screens. I created this rubric for class discussion, but there was no easy way to link it to what students were writing. The rubric was also much too complicated to use for evaluating every discussion post by every student. My assessment of discussion defaulted to digging into the student analytics feature after the mini-semester had ended and weighing the total number of a student’s posts against a scale I had created. Students got little direct feedback from me on how well they were performing in this component of the course while it was still running.
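For concreteness, a post-count scale of that sort might look something like the sketch below. The thresholds here are invented for illustration; they are not my actual scale.

```python
# Hypothetical post-count grading scale; the thresholds are
# invented for illustration, not the actual scale described above.

def discussion_grade(total_posts: int) -> str:
    """Map a student's total number of discussion posts to a letter grade."""
    scale = [(12, "A"), (9, "B"), (6, "C"), (3, "D")]  # minimum posts -> grade
    for minimum, grade in scale:
        if total_posts >= minimum:
            return grade
    return "F"

print(discussion_grade(10))  # a student with 10 posts earns a "B"
```

A scale like this counts activity but says nothing about the quality of that activity.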

Last year a few students did not participate at all in the weekly discussions. Because of how I structure my courses, they were able to exercise other options and still perform well in terms of their final grades. But their absence from the discussions meant that their peers were not learning from them and they were not learning from their peers. And it looked to me that the lack of transparency in how I evaluated discussion made this outcome more likely.

This time around the courses will be delivered via Canvas instead of Blackboard. Canvas allows the instructor to create interactive rubrics that can be linked to specific assignments or posts in a discussion. The instructor clicks on the rubric’s boxes and the resulting grade is generated. Students see how their work will be assessed without having to click through a myriad of webpages, and they get immediate feedback from the instructor. 

So I created this new rubric, simpler than the old one but still containing the criteria that I think are most important for peer learning in a professional environment, for grading each student’s discussion posts on a week-by-week basis.** I’ll let you know how it works.

*The courses are the politics of the Middle East and comparative political development, part of an M.A. program in international relations. If you’re interested in acquiring some transferable graduate credit hours, learning about a new subject, or learning how to design and teach an online course on a compressed schedule, get in touch–you don’t need to be admitted to the degree program to enroll in either course.

**My wife/colleague showed me how to do this.