Researching Effects of Metacognitive Exercises

During the fall semester, I hope* to investigate whether metacognitive prompts are associated with differences in students’ exam scores — or, at minimum, whether students report that their learning strategies (a.k.a. study skills) change over time. I plan on collecting data on the following schedule:

  1. Pre-exam survey
  2. Exam 1
  3. Post-exam survey
  4. Pre-exam survey
  5. Exam 2
  6. Post-exam survey
  7. Pre-exam survey
  8. Final exam

The pre-exam survey asks students how frequently they do each of the actions below for the course, on a scale of never, seldom, sometimes, often, or always:

  1. Write notes on main ideas when reading an assigned text for the first time.
  2. Actively participate and pay attention during class.
  3. Write notes during class.
  4. Reorganize notes when not in class.
  5. Periodically review notes when not in class.
  6. Start assignments early instead of waiting until the last minute.
  7. Carefully read an assignment’s instructions and rubric before starting it.
  8. Review instructor feedback and assignment rubric after completing an assignment.
  9. Study in an environment that is productive for my learning.
  10. Seek help from others (classmates, friends, instructor, tutor, etc.) if I have questions.

The post-exam survey asks, in questions 1 to 5, how frequently students did each of the actions, on a scale of never, once, a few times, several times, or more than once a day over more than one day. For questions 6 to 8, students are asked to provide a brief answer in their own words:

  1. Studied in an environment that was productive for my learning.
  2. Reviewed the notes I had written.
  3. Reviewed instructor feedback and rubrics on graded assignments.
  4. Studied in intervals over an extended period of time prior to the exam instead of cramming.
  5. Sought help from others (classmates, friends, instructor, tutor, etc.) if I had questions.
  6. Which of your learning strategies helped you prepare the most for the exam? Why?
  7. Which of your learning strategies helped you prepare the least for the exam? Why?
  8. What changes to your learning strategies, if any, do you think you should make? Why?

Students will earn 5 points toward their final grade (on a scale of 1,000 points) for completing each survey, regardless of their responses. Each survey is on the Canvas LMS and will be accessible for a defined time period.

It looks like about twenty-five students at most will be enrolled in this course, so I won’t be able to do a statistical analysis of the data, but maybe I’ll be able to identify some patterns.
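
With a group that small, honest eyeballing beats inferential statistics. Below is a minimal sketch of how I might tally the Likert responses, assuming each survey is exported from Canvas as a CSV file; the file names and the question column name are hypothetical.

```python
# A minimal sketch for spotting patterns in a small class (n <= 25).
# Assumes each survey has been exported from Canvas as a CSV with one
# column per question; the file and column names here are hypothetical.
import csv
from collections import Counter

SCALE = ["never", "seldom", "sometimes", "often", "always"]

def tally(path, question_col):
    """Count how many students chose each point on the scale."""
    with open(path, newline="") as f:
        counts = Counter(
            row[question_col].strip().lower() for row in csv.DictReader(f)
        )
    return {level: counts.get(level, 0) for level in SCALE}

# Compare the same question across the three pre-exam surveys to see
# whether self-reported habits drift over the semester.
for survey in ("pre_exam1.csv", "pre_exam2.csv", "pre_final.csv"):
    print(survey, tally(survey, "Periodically review notes"))
```

Even a raw count per administration would show whether, say, note reviewing becomes more frequent after the first exam's post-mortem.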

*The best-laid plans of mice and men often go awry during pandemics.

Maximizing the Medium II

A few weeks ago, I wrote about using one technological platform to circumvent the design constraints of another. Here is another, more serendipitous, example of finding a technological means for achieving an instructional objective.

For an upcoming undergraduate course, I decided to make Twine stories part of my exams. My previous posts on Twine for a team writing project are here, here, and here. (Hard to believe it’s been seven years since I last used it — how time flies.) For now, it is only important to know that Twine is freeware that enables users to create interactive texts in the form of HTML files.

I wanted each exam to have two parts that students complete in sequence — first, a series of multiple choice questions on concepts; second, an essay-type question in which students demonstrate their ability to apply the same concepts by extending a Twine story’s plot line. It is fairly easy (if tedious) to create multiple choice test questions in the Canvas LMS, and one can set a content module to require that students complete each item in the module in order. But initially I didn’t know how to include the Twine story for each exam’s second part.

Continue reading “Maximizing the Medium II”

Another Change to Teammate Evaluations

Jumping into the timecrowave again. Past posts on teammate evaluations:

  • Simplifying my life with Google Forms
  • What most students thought was a mysterious calculation
  • Distributing points instead of forced ranking
  • Calculating differently

For the upcoming fall semester, I’m making another tweak to the system. Instead of ranking teammates or distributing a set number of points among them, students will rate each other’s contributions on a three-level scale. And rather than emailing each team a link to a different Google Form, I now have one Google Form for the entire class. I can either email the link to the whole class or — more likely, because it’s easier on my end — post the link in the Canvas LMS. Or, as I discussed in my last post, I can embed the Form’s iframe into a Canvas assignment.

Since I’ve set the Form to collect students’ email addresses, I’ll be able to discard the responses of any student who rates a team he or she does not belong to.
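
That cross-check is easy to script once the responses are downloaded. Here is a minimal sketch, assuming the Form’s CSV export contains “Email Address” and “Team” columns and that a separate roster file maps emails to teams; the roster file and its column names are my own invention.

```python
# A minimal sketch of the cross-check described above. Assumes the Form's
# responses are downloaded as a CSV with "Email Address" and "Team" columns;
# the roster file and its column names are hypothetical.
import csv

def load_roster(path):
    """Map each student's email to the team they actually belong to."""
    with open(path, newline="") as f:
        return {row["email"].lower(): row["team"] for row in csv.DictReader(f)}

def split_responses(responses_path, roster_path):
    """Separate legitimate ratings from ratings of the wrong team."""
    roster = load_roster(roster_path)
    keep, discard = [], []
    with open(responses_path, newline="") as f:
        for row in csv.DictReader(f):
            email = row["Email Address"].strip().lower()
            (keep if roster.get(email) == row["Team"] else discard).append(row)
    return keep, discard

kept, discarded = split_responses("form_responses.csv", "roster.csv")
print(f"kept {len(kept)} responses, discarded {len(discarded)}")
```

In practice the discard pile should be empty; the point is simply that the check costs nothing once email collection is switched on.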

The evaluation is worth up to 50 points out of 1,000 in the course grading scale; the last item in the Form is simply a method of encouraging students to reflect on how well they and their teammates collaborated (instead of mindlessly entering numbers). As I did last semester, I will set the corresponding assignment in the Canvas gradebook as worth nothing, to avoid complaints about “losing” points because of their peers’ evaluation of their work.

Keeping up with the literature

[Image caption: OK, actually a bookshop, but the same idea]

I don’t know about you, but this is one of the hardest things to do.

Being of an age where I can remember trawling through the card index at the university library, and then spending a day or three browsing the stacks to discover some interesting piece in a journal I’d not heard of until that point [which is how I ended up with my PhD topic, but that’s another story], I am now swamped by a constant wave of new publications.

Which is great, but also problematic. Maybe because of that early struggle to find stuff, I now worry that I might be missing something, even as I find I don’t have enough time to read it all, let alone ruminate on it.

Currently, my system works like this.

I’m signed up to about 50 journals for new issue alerts. I keep a spreadsheet of these journals so I know I’ve not missed anything from them: some of the alerts are a bit ropey, so maybe twice a year I’ll go to my library website and check for missing back issues.

Right now, I’d totally recommend doing that, since many publishers seem to have loosened up access to journals that your place might not usually subscribe to.

I’ll download PDFs, reading the abstracts as I go, plus the full piece if it’s particularly salient. You might consider this piece when writing your own next one, because it’s certainly true for me.

In addition, I have a Google Scholar alert set up for a keyword from my research, which produces about one email a day of links to new content. Again, I try to access as much of it as I can.

And then there’s the stuff that I read about on Twitter or other blogs.

Again, I’m not sure this is the ideal way, but it’s the one I’ve worked with for many years, so it’s comfortable for me, which is also important.

From experience, the most difficult thing is letting stuff build up. A few years ago I left the library ‘visit’ for well over a year and I ended up with several hundred pieces, about which I could tell you pretty much nothing.

Anyone got any better models for doing this? Stick it in the comments below.

Maximizing the Medium I

Probably all of us have encountered the constraints of educational technology — in a particular situation, it doesn’t do quite what we want it to do, so we try to figure out a workaround. Here is one example:

For the coming academic year, my undergraduate students will complete multiple metacognitive exercises that will supply me with data for some pedagogical research. The exercises consist of surveys that ask students to evaluate the effectiveness of their study habits before and after exams (I’ll describe this in detail in a future post).

Initially, I tried creating these surveys in the Canvas LMS quiz tool, because I can set Canvas to automatically reward students with a certain number of points if they complete a survey. I find point rewards to be necessary because most of the undergraduates I teach won’t do anything unless it has a transparent effect on their course grade. However, I rapidly hit several obstacles — e.g., as far as I can tell, one can easily duplicate an “assignment” in Canvas, but not a “quiz.”

In contrast, it is ridiculously easy to copy, rename, and revise survey instruments in Google Forms. But Google Forms isn’t connected to the Canvas gradebook, and I did not want to have to repeatedly jump between Google Forms and Canvas to record points each time a student completed a survey. Also, I prefer putting as much of my course content as possible in Canvas, because invariably, the more I expect students to use different technological platforms, the more emails I receive about their learned helplessness.
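
In principle, the point transfer could be scripted rather than done by hand: Canvas has a REST API whose submissions endpoint accepts a posted grade. A rough sketch of that route follows; it is not the solution this post builds toward, and the domain, token, IDs, and email-to-user-ID mapping below are all hypothetical placeholders.

```python
# A rough sketch (not the approach I settled on) of scripting the point
# transfer via the Canvas REST API's submissions endpoint. The domain,
# token, IDs, and email-to-user-ID mapping are hypothetical placeholders.
import requests

CANVAS = "https://example.instructure.com/api/v1"     # hypothetical domain
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # per-user access token

# Emails pulled from the Google Form's response sheet, plus a
# hand-maintained mapping from email to Canvas user ID.
completers = ["student@example.edu"]
user_ids = {"student@example.edu": 111}

def award_points(course_id, assignment_id, user_id, points):
    """Post a grade on the survey's companion Canvas assignment."""
    url = f"{CANVAS}/courses/{course_id}/assignments/{assignment_id}/submissions/{user_id}"
    resp = requests.put(url, headers=HEADERS,
                        data={"submission[posted_grade]": points})
    resp.raise_for_status()

for email in completers:
    award_points(course_id=12345, assignment_id=67890,
                 user_id=user_ids[email], points=5)
```

But that still means maintaining a token, a script, and a roster mapping for every survey in every course.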

What to do?

Continue reading “Maximizing the Medium I”

Designing to constraints

[Image caption: Seamless integration]

It’s summertime, so in between the flood warnings (seriously), it’s time to be doing some Big Thinking about teaching.

As part of my new role at the Open University, I’m contributing to a new Masters in IR, including the development of a simulation exercise.

I’ll be writing a lot more about this simulation in the next couple of years, mainly because the constraints are very different from those I’ve worked to before, with a big pile of knock-on consequences.

As a completely new programme, we’ve got relatively more space to manoeuvre than would usually be the case, but the constraints still loom rather large. As such, I’m dwelling on the third step of my usual approach to such situations.

For those unfamiliar with the OU, it’s the UK’s largest university (with nearly the enrolment of the entire University of California system), working almost entirely on a distance-learning model. We have a lot of adult learners and a very flexible approach to working up to qualifications: you take one module at a time.

The new Masters will be entirely remote, with a taught module that runs for 36 weeks, followed by a dissertation. For most of those 36 weeks, we provide a collection of teaching materials – written and audio/visual – through our website, with structured activities for students, building up to interim and final pieces of assessment.

My role, as part of the central teaching staff, is to create those materials, which have to be able to stand being used by students for several years before a refresh, with activities supervised and moderated by a large team of associates, who handle the bulk of the direct interactions with students.

The upshot here is that I’ve been trying to work up a negotiation simulation that fits a number of requirements that are usually not that conducive to such things:

  • Student numbers will be variable across iterations;
  • I can’t assume all students will be doing this via our website (we have a significant number of students with various accessibility challenges, so they might only be able to learn via a printed version of our materials);
  • As such, synchronous interaction is not an option;
  • Even asynchronous interaction will be a problem for some;
  • And I can’t assume any prior knowledge of negotiation.

As the old joke about getting directions in Ireland goes, you wouldn’t start from here.

But that’s been precisely why I’ve enjoyed my first months here: it’s not run-of-the-mill and I’m being forced to think about how to manage the situation, rather than simply reinvent the wheel.

For those of you not moving jobs, remember that you too are working to constraints; you might just have internalised them to a degree. None of us gets a completely free hand, or even something close to one.

The response here is to work with the constraints, not against them.

Whether it’s an oddly-shaped room, a limit on your timetabled time with students, necessary adjustments for students with disabilities, built-in assessment obligations, or a departmental edict against X, Y or Z, it’s the same thing: whatever might be blocked, other things become possible.

The beauty of education is that it’s not uniform and that there’s no one correct way to do it: variety is a good thing, for so many reasons.

In my case, I’ve used those constraints to explore the options with the rest of the team. That meant presenting a number of basic models to them, with their benefits and disadvantages, all grounded in the question of what purpose this simulation is fulfilling within the programme.

Off the back of that discussion, I’m now working up an approach that combines at least two of those models, which we’ll discuss again in September. And as we settle on things, I’ll write more about how that might work and the further integration and delivery challenges that have to be addressed.

Review of McGuire’s Teach Students How to Learn

I stumbled across Teach Students How to Learn by Saundra Yancy McGuire (Stylus, 2015). Like The New Science of Learning by Doyle and Zakrajsek, it contains some useful advice. Here is a brief review:

The bad

The book has an excessive amount of personal anecdote — such as conversations with and exam scores of individual students — but no presentation of statistically significant findings on overall changes in students’ performance. The author also favorably discusses learning styles and the Myers-Briggs inventory, neither of which is scientifically supported. A more concise presentation with a greater emphasis on empirical evidence would be more persuasive.

The good

McGuire’s focus is on teaching students about the benefits of metacognition, including a specific method of introducing them to Bloom’s taxonomy (Chapter 4). Why is this effective? In high school, many students earn high grades without much effort, so they enter college suffering from illusory superiority and ignorant of the actual learning process. Coaching students on specific study strategies (Chapter 5) will therefore benefit them. One example: as professors, we typically know what shortcuts to employ to efficiently find and retain information contained in a book. Students, in contrast, may not know what an index is or how to use one. McGuire also rightly discusses the role of motivation in student learning (Chapters 7-9), pointing out that there are both student-related and professor-related barriers to motivation, and that instructors can mitigate both.

A final comment

The underlying assumption of this book is that students want to learn, and that if they are equipped with the right tools, college becomes a more valuable and rewarding experience for them and their professors. While I think this is a noble and generally accurate sentiment, I’m seeing an increasing number of U.S. undergraduate students for whom college is simply a credentialing process. For these students, the diploma is the goal; learning is not.

‘sup, D?

As we roll into the summer break, I have little to offer in the way of insight on matters pedagogic, so I’ll leave you to ponder the following:

This got a lot of attention over on Twitter last week, with opinion less than settled about whether it was A Good Take or A Bad Take. You can read the many, many comments by clicking through.

Me, I’m a pretty relaxed person about names, not least because of several decades of mangling of my surname and occasional uncertainty about what my actual title is.

I know I’d rather have a student contact me to ask something and call me Simon (or even Si), than for them to be too worried about honorifics to email at all.

Yes, there are pitfalls all over the place – from hierarchies of power to false matey-ness – so probably the best thing is to talk to students about it directly when you first meet them.

And as the OP put it, it’s not something for the syllabus, because no-one reads that, apparently.

Which might raise some questions about why your syllabus isn’t working. But that can be one for next time.

Call for Editor(s): Journal of Political Science Education

The American Political Science Association is seeking applications and nominations for editorship of the Journal of Political Science Education. Applications can be from individuals or teams, and are due by September 1. Full details are here.

A big thank you to the outgoing editorial team for their excellent management of this journal over the last few years.

Failing to succeed

One thing that has been really good about being part of ALPS has been the community around it.

For example, this week’s post is inspired by my former colleague and general force of nature, Maxine David, who pushed out this thread the other day (click to read it all):

Essentially, Maxine’s asking the same question that I think we’ve all asked at some point: what are we trying to achieve in our classes?

As you’ll see from the responses to the thread, I started to sketch out a position, but I’d like to expand on it here some more.

Amanda and Nina have long championed failure in the classroom as a valuable learning experience for students. Their argument – which I also hold to – is that hitting nominal targets is good, but not a complete education: not hitting them encourages students to reflect more on the process of learning (and application) that they’ve undertaken. Think of it as analogous to playing a game, where missing the (rather different) target makes you go back and try again, with the question of why it didn’t work before in your mind.

This model requires us to acknowledge that learning has multiple targets.

Yes, we want students to know stuff and know how to do stuff (which we can catch with summative assessments), but we also want students to know how to know all this. Becoming a reflexive learner and a critical thinker is a core skill for building capacity to learn throughout the rest of one’s life and it’s a skill that has no easy metric, nor any obvious threshold.

And thresholds were my first thought when I read Maxine’s thread.

When we assess, we typically look for evidence of meeting some threshold: does the student demonstrate that they know enough about X or enough about how to do Y? Those thresholds are present in our grading and those institutional matrices that benchmark us all to common standards.

[Cough]

Maxine rightly points out that we cannot really ever separate out the formative and summative elements of assessment: if we genuinely value the development of reflexive learning, then we absolutely shouldn’t be trying to separate them out in the first place.

But this position is vanishingly rare in academia these days. Yes, I tell my doctoral students that a good viva should see every single person coming out of the room having learnt something, but even that’s not a given.

Easy as it would be to blame the pressures of QA culture and metrification for all this, we also have to recognise that we often don’t create opportunities within our own classes. Even if we aren’t allowed to make adjustments for support received (as Maxine suggests), we should still be trying to instil a culture of collaboration, reflection and development among our students and between them and us.

In so doing we might start to reclaim some of that learning opportunity that will serve everyone in the class well, wherever they are and whatever they do.

UPDATE:

You might have seen that England is going through some very pointed discussions about racism, following the European football championships. This tweet from one of the national team players exactly captures the point: