Call for Proposals: Work Smarter, Not Harder

Charity Butcher, Tavishi Bhasin, Maia Hallward, and Elizabeth Gordon are creating a book for political science faculty who want to simultaneously increase their research productivity and teaching effectiveness. They are seeking proposals for chapters that will outline how faculty can align their teaching and research interests through:

  • The scholarship of teaching and learning.
  • Textbook authorship.
  • Faculty-student research at both the undergraduate and graduate levels.
  • Study abroad and field research.
  • Designing courses to generate new areas of scholarship.

If you are interested, please submit a 300-400 word abstract and a short bio (around 50 words) to cbutche2@kennesaw.edu by September 30th. Your proposal should describe the essay you would like to contribute, explicitly connecting your chapter to one of the areas above or indicating an additional unlisted area where you think your chapter could fit. Abstracts will be reviewed on a rolling basis, with all decisions completed by October 15. Accepted abstracts will be included in a book proposal to be submitted to Springer as part of the Political Pedagogies book series (edited by Jamie Frueh and David Hornsby).

They anticipate final essays of roughly 3,000 words to be submitted by February 1, 2022. Final essays should include specific, tangible examples of teaching and research practices that other professors could reproduce in their own contexts to improve and expand their research and teaching. For more details, please see the full call for proposals at:

http://facultyweb.kennesaw.edu/cbutche2/docs/Call%20for%20Proposals%20-%20Aligning%20Teaching%20and%20Research.pdf.

Asynchronous Field Research Exercises

To answer Maria’s question in her comment on my last post about syllabus design, here is an example of one of my field research assignments:

To earn full credit: upload your first post by 6:00 p.m. Wednesday and respond to at least one other person’s post by 9:00 a.m. Friday; see the rubric.

  1. Read:
  2. Go to The Opportunity Atlas at https://www.opportunityatlas.org. Click “Begin Exploring.” Click “OK.” Enter Newport’s zip code in the search bar at the upper left. Make sure “Household Income” is selected for “Outcomes.” Select a street in the blue area and a street in the red area.
  3. Walk a portion of the two streets that you selected. What do you see in the built environment that you think relates to economic opportunity in these locations?
  4. Take a photo that indicates what you notice; post the photo and your observations in the discussion. Identify the location shown in the photo. Cite at least one of the readings in your first discussion post.

Here is my rubric for grading the online discussion:

Syllabus Design for an Unpredictable Semester

Another post on preparing for the fall semester, which, given the pandemic, stands a good chance of going awry despite our best-laid plans. Here are a few suggestions on how to cope with the uncertainty:

Obligatory pandemic cat photo

Make it plain to students that, as with life, change is likely, if not inevitable. I put an escape clause in all my syllabi: “The instructor reserves the right to change any policies related to this course at his discretion.” The pandemic has given me an additional example to use when explaining to students why this statement is included.

Create multiple, flexible paths for student success. My preferred method is to calculate the final course grade on a 1,000-point scale, where a total of 950 or above equates to an A, but where the total number of points available from all assignments and exams typically tops out at 1,100. This provides students with a buffer — they do not need to complete every assignment or always earn a perfect score for a good course grade. The system gives students an option to exercise in case of illness or some other emergency.
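In code form, the buffer works something like the sketch below. Only the A cutoff and the 1,100-point total come from my actual scale; the other cutoffs are placeholders.

```python
# A minimal sketch of the buffered grading scale described above. Only the
# A cutoff (950) and the total points available (1,100) come from my actual
# system; the B and C cutoffs below are placeholders.

A_CUTOFF = 950           # 950+ out of 1,000 earns an A
TOTAL_AVAILABLE = 1100   # points offered across all assignments and exams

def course_grade(points_earned: int) -> str:
    """Map a point total on the 1,000-point scale to a letter grade."""
    if points_earned >= A_CUTOFF:
        return "A"
    if points_earned >= 850:  # placeholder B cutoff
        return "B"
    if points_earned >= 750:  # placeholder C cutoff
        return "C"
    return "D or F"

# A student can skip a 100-point assignment outright and still earn an A,
# because 1,100 - 100 = 1,000 points remain available.
print(course_grade(1000 - 50))  # earned 950 of the remaining 1,000 -> "A"
```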

Instead of a few high-stakes assessments at the end of the semester, schedule regular lower-stakes assessments throughout the semester. Not putting most of your eggs in the same calendrical basket lessens the risk of your course imploding when something goes haywire. Shifting from a cumulative to a more formative assessment regime also generates more learning and reduces students’ end-of-semester anxiety.

Create assignments that do not require a physical presence in the classroom. A year ago, when teaching completely online, I designed a series of student field research exercises tied to asynchronous discussions. The system worked well, so I’m doing it again. If holding class on campus on certain days becomes impossible, or even just very complicated, I can easily toggle the class to the online LMS/VLE. If class does meet as scheduled, I can simply engage students in a review of their research and online discussion.

These are just a few ideas. If you have more, let us know. We are always looking for guest contributors.

Researching Effects of Metacognitive Exercises

During the fall semester, I hope* to investigate whether metacognitive prompts are associated with differences in students’ exam scores — or, at minimum, whether students report that their learning strategies (a.k.a. study skills) change over time. I plan on collecting data on the following schedule:

  1. Pre-exam survey
  2. Exam 1
  3. Post-exam survey
  4. Pre-exam survey
  5. Exam 2
  6. Post-exam survey
  7. Pre-exam survey
  8. Final exam

The pre-exam survey asks students how frequently they do each of the actions below for the course, on a scale of never, seldom, sometimes, often, or always:

  1. Write notes on main ideas when reading an assigned text for the first time.
  2. Actively participate and pay attention during class.
  3. Write notes during class.
  4. Reorganize notes when not in class.
  5. Periodically review notes when not in class.
  6. Start assignments early instead of waiting until the last minute.
  7. Carefully read an assignment’s instructions and rubric before starting it.
  8. Review instructor feedback and assignment rubric after completing an assignment.
  9. Study in an environment that is productive for my learning.
  10. Seek help from others (classmates, friends, instructor, tutor, etc.) if I have questions.

The post-exam survey asks, in questions 1 to 5, how frequently students did each of the actions, on a scale of never, once, a few times, several times, or more than once a day over more than one day. For questions 6 to 8, students are asked to provide a brief answer in their own words:

  1. Studied in an environment that was productive for my learning.
  2. Reviewed the notes I had written.
  3. Reviewed instructor feedback and rubrics on graded assignments.
  4. Studied in intervals over an extended period of time prior to the exam instead of cramming.
  5. Sought help from others (classmates, friends, instructor, tutor, etc.) if I had questions.
  6. Which of your learning strategies helped you prepare the most for the exam? Why?
  7. Which of your learning strategies helped you prepare the least for the exam? Why?
  8. What changes to your learning strategies, if any, do you think you should make? Why?

Students will earn 5 points toward their final grade (on a scale of 1,000 points) for completing each survey, regardless of their responses. Each survey is on the Canvas LMS and will be accessible for a defined time period.

At most about twenty-five students will be enrolled in this course, so I won’t be able to do any meaningful statistical analysis of the data, but maybe I’ll be able to identify some patterns.
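If I do look for patterns, the analysis will probably be no fancier than the sketch below. The scale mapping matches the pre-exam survey; the CSV layout and column names are hypothetical, since the raw Canvas survey exports would need reshaping into this form first.

```python
# A sketch of low-tech pattern-spotting across the three pre-exam surveys.
# The scale matches the survey above; the CSV layout is hypothetical.

import pandas as pd

SCALE = {"never": 0, "seldom": 1, "sometimes": 2, "often": 3, "always": 4}
QUESTIONS = [f"q{i}" for i in range(1, 11)]  # the ten survey items

# Assumed layout: one row per student per survey, with columns
# student, survey ("pre1", "pre2", "pre3"), and q1 ... q10 (scale labels).
df = pd.read_csv("pre_exam_surveys.csv")

numeric = df[QUESTIONS].apply(lambda col: col.map(SCALE))
numeric["survey"] = df["survey"]

# Median response per question per administration: a median that rises
# from pre1 to pre3 suggests students report using that strategy more.
print(numeric.groupby("survey")[QUESTIONS].median())
```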

*Best laid plans of mice and men often go awry during pandemics.

Maximizing the Medium II

A few weeks ago, I wrote about using one technological platform to circumvent the design constraints of another. Here is another, more serendipitous, example of finding a technological means for achieving an instructional objective.

For an upcoming undergraduate course, I decided to make Twine stories part of my exams. My previous posts on Twine for a team writing project are here, here, and here. (Hard to believe it’s been seven years since I last used it — how time flies.) For now, it is only important to know that Twine is freeware that enables users to create interactive texts in the form of HTML files.

I wanted my exams to each have two parts that students complete in sequence — first, a series of multiple-choice questions on concepts; second, an essay-type question in which students demonstrate their ability to apply the same concepts by extending a Twine story’s plot line. It is fairly easy (if tedious) to create multiple-choice test questions in the Canvas LMS. One can also set a content module to require that students complete each item in the module in order. But initially I didn’t know how to include the Twine story for each exam’s second part.


Another Change to Teammate Evaluations

Jumping into the timecrowave again. Past posts on teammate evaluations:

  • Simplifying my life with Google Forms
  • What most students thought was a mysterious calculation
  • Distributing points instead of forced ranking
  • Calculating differently

For the upcoming fall semester, I’m making another tweak to the system. Instead of ranking teammates or distributing a set number of points, students will rate each other’s contributions on a three-level scale. And rather than email each team a link to a different Google Form, I now have one Google Form for the entire class. I can either email the link to the whole class, or — more likely because it’s easier on my end — I can post the link in the Canvas LMS. Or, as I discussed in my last post, I can embed the Form’s iframe into a Canvas assignment.

Since I’ve set the Form to collect students’ email addresses, I’ll be able to discard the responses of any student who rates a team he or she does not belong to.
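In case it’s useful to anyone, the check is roughly the sketch below once the Form’s responses are downloaded as a CSV. File and column names here are hypothetical.

```python
# A sketch of the roster check described above, assuming the Form's
# responses have been downloaded as a CSV. File and column names are
# hypothetical.

import pandas as pd

roster = pd.read_csv("roster.csv")        # columns: email, team
responses = pd.read_csv("responses.csv")  # columns: email, team_rated, ...

merged = responses.merge(roster, on="email", how="left")

# Keep only responses in which students rated their own team; discard
# the rest, per the policy above.
valid = merged[merged["team_rated"] == merged["team"]]
discarded = merged[merged["team_rated"] != merged["team"]]
print(f"Kept {len(valid)} responses; discarded {len(discarded)}.")
```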

The evaluation is worth up to 50 points out of 1,000 in the course grading scale; the last item in the Form is simply a method of encouraging students to reflect on how well they and their teammates collaborated (instead of mindlessly entering numbers). As I did last semester, I will set the corresponding assignment in the Canvas gradebook as worth nothing, to avoid complaints about “losing” points because of their peers’ evaluation of their work.

Maximizing the Medium I

Probably all of us have encountered the constraints of educational technology — in a particular situation, it doesn’t do quite what we want it to do, so we try to figure out a workaround. Here is one example:

For the coming academic year, my undergraduate students will complete multiple metacognitive exercises that will supply me with data for some pedagogical research. The exercises consist of surveys that ask students to evaluate the effectiveness of their study habits before and after exams (I’ll describe this in detail in a future post).

Initially, I tried creating these surveys in the Canvas LMS quiz tool, because I can set Canvas to automatically reward students with a certain number of points if they complete a survey. I find point rewards to be necessary because most of the undergraduates I teach won’t do anything unless it has a transparent effect on their course grade. However, I rapidly hit several obstacles — e.g., as far as I can tell, one can easily duplicate an “assignment” in Canvas, but not a “quiz.”

In contrast, it is ridiculously easy to copy, rename, and revise survey instruments in Google Forms. But Google Forms isn’t connected to the Canvas gradebook, and I did not want to have to repeatedly jump between Google Forms and Canvas to record points each time a student completed a survey. Also, I prefer putting as much of my course content as possible in Canvas, because invariably, the more I expect students to use different technological platforms, the more emails I receive about their learned helplessness.

What to do?
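For a sense of the problem: the brute-force transfer, repeated after every survey, looks something like the sketch below — download the Form’s responses as a CSV, award the completion points in a script, and adapt the output by hand to Canvas’s gradebook import layout. It is exactly the kind of back-and-forth I wanted to avoid. File and column names are hypothetical, and this is a guess at the shape of the workflow, not my eventual solution.

```python
# The brute-force transfer I wanted to avoid repeating every week: award
# completion points from a Form's CSV export, then adapt the output to
# Canvas's gradebook import layout by hand. File and column names are
# hypothetical.

import pandas as pd

POINTS_PER_SURVEY = 5  # completion credit, regardless of responses

roster = pd.read_csv("roster.csv")               # columns: email, name
responses = pd.read_csv("survey_responses.csv")  # includes an "email" column

completed = set(responses["email"])
roster["survey_1"] = [
    POINTS_PER_SURVEY if email in completed else 0
    for email in roster["email"]
]

# Still to do by hand: match this file to Canvas's import layout and upload.
roster.to_csv("survey_1_scores.csv", index=False)
```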


Review of McGuire’s Teach Students How to Learn

I stumbled across Teach Students How to Learn by Saundra Yancy McGuire (Stylus, 2015). Like The New Science of Learning by Doyle and Zakrajsek, it contains some useful advice. Here is a brief review:

The bad

The book has an excessive amount of personal anecdote — such as conversations with and exam scores of individual students — but no presentation of statistically significant findings on overall changes in students’ performance. The author also favorably discusses learning styles and the Myers-Briggs inventory, neither of which is scientifically supported. A more concise presentation with a greater emphasis on empirical evidence would be more persuasive.

The good

McGuire’s focus is on teaching students about the benefits of metacognition, including a specific method of introducing them to Bloom’s taxonomy (Chapter 4). Why is this effective? Many students earn high grades in high school without much effort, so they enter college suffering from illusory superiority and ignorant of how learning actually works. Coaching students on specific study strategies (Chapter 5) therefore benefits them. One example: as professors, we typically know which shortcuts to employ to efficiently find and retain information contained in a book. Students, in contrast, may not know what an index is or how to use one. McGuire also rightly discusses the role of motivation in student learning (Chapters 7-9), pointing out that there are both student-related and professor-related barriers to motivation, and that instructors can mitigate these barriers.

A final comment

The underlying assumption of this book is that students want to learn, and that if they are equipped with the right tools, college becomes a more valuable and rewarding experience for them and their professors. While I think this is a noble and generally accurate sentiment, I’m seeing an increasing number of U.S. undergraduate students for whom college is simply a credentialing process. For these students, the diploma is the goal; learning is not.

Call for Editor(s): Journal of Political Science Education

The American Political Science Association is seeking applications and nominations for editorship of the Journal of Political Science Education. Applications can be from individuals or teams, and are due by September 1. Full details are here.

A big thank you to the outgoing editorial team for their excellent management of this journal over the last few years.

Formative Assessment: Abort, Retry, Fail?

Two can play this game

Something of a response to Simon’s June 1 post on transitioning from pedagogical theory to teaching practice: he wrote, in part, “assessment is always formative and should be always linked to the feedback and adaptation process.” In theory, I agree. In practice, while I can lead students to feedback, I am still unable to make them read it.

As I’ve written before, the Canvas LMS has a “student viewed” time stamp feature that shows whether a student has looked at my feedback on an assignment — my comments and a tabular rubric with cells that I’ve highlighted — after I have graded it. Judging by the absence of these time stamps, many students simply ignore this information.

An example, with data: my annual spring semester course on comparative politics. In 2018 and 2019, I taught this course in the physical classroom. In 2020, the latter half of the course was online because of the coronavirus pandemic. In 2021, the course was delivered online for the entire semester. For each iteration, I tallied the number of students who looked at the first three, the third-to-last, and the second-to-last reading responses after I graded them. Results are below. N is the number of students in the class; not every student in a class completed every assignment. The eyeball columns indicate how many students viewed an assignment after I had graded it; the eyeball with a slash indicates the opposite.

While I can understand students not bothering to revisit assignments that they earned full marks on, I don’t understand why students who earn less than full marks frequently ignore information that would allow them to do better in the future. Anyone have an explanation?
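For anyone who wants to replicate the tally in their own course, the counting itself is trivial once the “viewed” information has been collected. A minimal sketch, assuming the time stamps have been recorded by hand into a CSV (I’m not aware of a bulk export for them, and the layout below is hypothetical):

```python
# A sketch of the tally described above, assuming the "student viewed"
# information has been collected by hand into a CSV. Assumed layout:
# one row per student per assignment per year, with columns
# year, assignment, viewed_after_grading ("yes" or "no").

import pandas as pd

df = pd.read_csv("viewed_feedback.csv")

counts = (
    df.groupby(["year", "assignment"])["viewed_after_grading"]
      .value_counts()
      .unstack(fill_value=0)
)
# Rows are year/assignment pairs; the "yes" and "no" columns correspond to
# the eyeball and slashed-eyeball columns in the table.
print(counts)
```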