Helping students do better in essays: what are the options for the seminar tutor?

This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Liudmila Mikalayeva.

Daniela Jaklová Střihavková tells her story as a highly motivated seminar tutor in a first-year course on social work. The problem she sets out to solve is a mismatch between the grader’s expectations and the students’ performance: in 2016-2017 the average grade for exam essays was between 16 and 18 points, far from the maximum of 30. What can the tutor do to close this gap between expectations and performance?

The setup of the course will be familiar to many young instructors: while a senior professor gives weekly content-focused lectures and grades students’ work at the end of the course, a junior tutor is in charge of interactive seminars where students discuss and apply knowledge. The examination consists mainly of two essays, written in proper academic language, that test students’ understanding of the nature of social work and of the social worker’s role. The essays are challenging, since students receive little training in writing them in the course itself.

Jaklová Střihavková very reasonably suggests improving the course through constructive alignment: course design should guarantee the fit between teaching goals, teaching methods and assignments. So if a major teaching goal is to enable students to define the role of the social worker in a concrete case, they should receive guidance on how to do it, have an opportunity to practice it and receive feedback before the exam.

Practically, the author introduced two additional seminar sessions where students practiced a task similar to that of the exam essay. Combining individual and group work, she provided space for the students to confront the complex task and reflect on their own performance. While she cannot prove that the essays received better grades because of these changes, both the grader and the students were happier with the learning outcome.

The seminar tutor’s effort to bridge the gap between the grader’s expectations and students’ actual work was, however, only partly successful. Even after the additional seminars, students continued to feel unsure about what the grader expected from them, and the grader was still unhappy with how they used disciplinary vocabulary. I see three issues explaining the persistence of the gap.

A relatively minor point is that oral exercises may not be effective enough to support students’ success in written tasks. A much more important drawback, underlined by the author herself, is the absence of clear and explicit grading criteria: the professor would need to make an effort to detail the requirements. And, most significantly, the course structure itself is at the core of the problem: the person grading students’ work is not directly in touch with the students and is senior enough to have forgotten how challenging it is for undergraduates to understand and use academic jargon and to navigate the often-implicit expectations placed on their work.

Jaklová Střihavková is right to point out that to improve learning outcomes students need space to reflect on the curriculum, but young and senior instructors should become more reflective as well. Clarifying expectations, aligning content, teaching approaches and assignments, and communicating among themselves and with the students are key, and cannot be replaced by teaching experience alone. Students as well as instructors will benefit from it.

Unprepared students? Try an Online Quiz

This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Alistair Jones.

We’ve all had the problem of students turning up to class without adequate preparation. To gain the full benefits of any classroom discussion or debate, student preparation is essential. Yet there are too many other distractions, resulting in unprepared students attending class. Natália Gachallová explains how she got her students to prepare for class and the benefits that ensued, embedding the explanation in the existing literature.

Preparation for an introduction to Latin anatomical nomenclature, you might think, would be essential. The obvious incentive is the mark of 75% required to pass the first part of the course. Yet experience suggested this was not the case, so Gachallová decided to innovate her teaching. She introduced fortnightly summative online mini-quizzes, which would reinforce prior learning as well as provide a useful revision tool. There was also a reward component: students gained extra marks for the final exam based on their average score in the quizzes.

Such innovation can be time-consuming, especially given the number and size of the classes involved; in this case, there is a cohort of over 130 students! Gachallová does not mention how much time went into preparing these quizzes, especially in comparison to the preparation undertaken previously. Follow-up questions were used in class to stimulate debate – an example would be interesting to see.

An online student survey was used to measure the effectiveness of this approach, and it produced remarkable findings. Around 85% of respondents claimed to find the quizzes beneficial. Conversely, some respondents complained that the quizzes were too challenging, and voiced concerns over spelling mistakes leading to lost marks.

The benefits are visible in students’ grades. Both the average mark and the overall pass rate improved. The exception is the tail of the class, where pass rates at the third attempt at the final exam were almost unchanged.

Yet Gachallová also takes into consideration the extra marks students gained from the online quizzes. Her findings showed that most students did not need them: most of those who passed the exam would have passed without the extra marks, while a very small number failed despite gaining them. The reward component was meaningful for only about 5% of all students.
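To make that check concrete, here is a minimal sketch of the decisiveness calculation. Only the 75% pass mark comes from the post; the scores and bonus values below are invented for illustration, and this is not Gachallová’s actual analysis code:

```python
# Minimal sketch of checking whether quiz bonuses were decisive.
# The 75% pass mark comes from the post; all scores and bonuses are invented.

PASS_MARK = 75  # mark required to pass the first part of the course

# (exam score without bonus, quiz bonus) pairs -- hypothetical sample data
students = [(80, 3), (74, 2), (70, 4), (76, 0), (73, 1)]

# A bonus is decisive when the student passes with it but not without it.
decisive = sum(
    1 for exam, bonus in students if exam + bonus >= PASS_MARK > exam
)

print(f"Bonus decisive for {decisive} of {len(students)} students "
      f"({100 * decisive / len(students):.0f}%)")
```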

The key message from this chapter is simple. If students engage with the learning materials throughout the academic year, they are far more likely to pass. Online quizzes appear to motivate students to engage with class preparation. Formative assessments AND formative feedback can increase student success in the summative assessments.

Some of you may consider a Latin Medical Terminology course rather niche, or feel that online quizzes are not appropriate for other subjects. Yet that is to miss the bigger picture. There is a need to engage with students, to encourage them to take responsibility for their own learning. One way is through the use of technology. The fact that a small incentive was attached to the marks may be inconsequential – something for future exploration.

If students are not going to engage with classroom preparation, why hold the class? If students merely parrot what has been said in class, has any learning actually happened? Consider the six levels of Bloom’s Taxonomy (1956): knowledge, comprehension, application, analysis, synthesis and evaluation. If using technology and incentives to innovate our teaching can help our students reach a deeper level of knowledge, then it is worth the experimentation.



Wheel Reinvention Avoidance

A short one today, as I’m in the middle of the UACES conference in Lisbon, trying out my novel panel formats and worrying about the tech side of it all: more on that next week.

On Sunday, we had our regular pre-conference L&T workshop, which was structured to let us discuss in small groups what and how we teach. The initial idea had been to focus this on different substantive specialisations, but it was evident that we all ended up working on more general issues and questions.

One of the most useful thoughts in this was the notion that this kind of sharing is really helpful, because it lets us avoid having to do everything from scratch.

Rather than building from the ground up, the exchange of ideas can mean both the avoidance of things that don’t work and the opening-up of new vistas.

To take the latter, I got a couple of really interesting ideas from the discussion that I’m now looking to implement with my students.

The first is assessing students’ lecture notes.

This could be as simple as just checking they’ve made such notes, or – and this is where I can see real potential – you could ask for those notes to be augmented with further reading and note-making, or framed with some elaboration of how different elements hang together, or coupled to a reflective piece of writing.

The second is getting students to present each other’s work.

This forces students to think and act a lot more to make their work understandable enough for someone else to talk through, and to appreciate how others interpret their work: did they stress what I wanted them to in their presentation?

Neither is such a radical idea, but they contain the core of some new things to explore.

Does the Question Determine the Answer?

Regular readers of this blog know that I sometimes ponder the clarity of my assignment and exam prompts (some past posts on this subject are here, here, and here). Students sometimes don’t hit what, in my mind, the question targets, so I revise in the hopes of creating a prompt that is more transparent. But I don’t want prompts to be answerable with a Jeopardy-like regurgitation of facts. I want students to exert some cognitive effort to figure out how to apply concepts that are relevant to the question at hand.

Usually this situation occurs with my undergraduates, but I’m noticing it more frequently with master’s degree students. A recent example is an assignment from my graduate-level introduction to comparative politics course:


What Do Grades Mean?

What do grades actually mean? I began pondering this question while designing a course for the fall semester. Theoretically a grade indicates the amount of knowledge or skill that a student possesses. But really? Those of us working in the USA are quite familiar with grade inflation. A final grade of C today probably doesn’t indicate the same level of knowledge or skill proficiency as the C from fifty years ago. There is also the persistent problem of knowing whether our assessment tools are measuring the types of learning that we think, or want, them to measure. And it is probably safe to assume that, both in and out of the classroom, there is a lot of learning happening that we just aren’t interested in trying to measure. The situation gets even more complex given that — again, in the USA — a “learning activity” often won’t function as intended if students believe that it has no discernible effect on their course grades.

I structure my syllabi so that the sum total of points available from all assessed work is greater than what is needed for any particular final grade. For example, a student might need to accumulate at least 950 points over the semester for an A, but there could be 1,040 points available. I do this to deliberately create wiggle room for students — with so many assignments, students don’t need to get perfect scores on, or complete, all of them. While this leads to higher grades in my courses than if I graded strictly on a bell curve, I want to give students plenty of opportunities to practice, fail, and improve. And I firmly believe that sloppy writing indicates sloppy thinking, while good writing indicates the converse. So in reality what I’m doing with most of my assignments is evaluating the writing abilities of my students.
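Here is a minimal sketch of that arithmetic. Only the 950-point A cut-off and the 1,040-point total come from the post; the other thresholds are invented for illustration:

```python
# Minimal sketch of a points-based grading scheme with built-in surplus.
# Only the 950-point A cut-off and 1,040-point total come from the post;
# the other thresholds are invented for illustration.

TOTAL_AVAILABLE = 1040  # more points on offer than any grade requires

# Grade thresholds in descending order: (minimum points, letter grade)
THRESHOLDS = [(950, "A"), (850, "B"), (750, "C"), (650, "D")]

def final_grade(points: int) -> str:
    """Map accumulated points to a letter grade; below every cut-off is an F."""
    for cutoff, letter in THRESHOLDS:
        if points >= cutoff:
            return letter
    return "F"

print(final_grade(960))       # A
print(TOTAL_AVAILABLE - 950)  # 90 points of wiggle room for the top grade
```

The design point is the surplus: because 1,040 points are on offer, a student can skip or flub assignments worth up to 90 points and still earn an A.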

This system often produces a bimodal grade distribution that is skewed to the right. Expend a lot of effort and demonstrate a certain level of proficiency, and you will get a grade somewhere between an A and a B-. Choose not to expend the effort, or consistently demonstrate an inability to perform at a minimum level, and you will get a D or an F. I’m comfortable with this result, in part because I know from the cognitive science research on learning that repeated exposure and frequent testing build long-term memory.

This leads me to the reason for doubting that grades in my courses mean the same thing as they do in courses where the only assessment is done through mid-term and final exams composed of multiple-choice questions. Yes, the proportion of A’s in the latter might be lower than in the former, but I bet that on average my students are retaining more. At least I like to think that’s the case. There is no way for me to be sure.

More Changes to a Course on Development, Part 3

The final exam for this course last year asked each student to write an economic rationale in support of one of two policy options, using information from course readings as evidence. Generally students did not do well on the exam, mainly because they did not discuss applicable concepts like moral hazard and discounting the future. These concepts appeared in several course readings and were discussed in class. While I didn’t explicitly mention them in the exam prompt, the benefits of including them in the rationale should have been obvious given course content.

Now I’m thinking of a question like this for the final exam:

What has a greater influence on economic development in Egypt: law (institutions) or geography (luck)? Why?

In your answer, reference the items below and relevant course readings listed in the syllabus:

The downside here is that I’m giving up an authentic writing exercise in favor of (I assume) even more transparent alignment with course objectives.

Just What Is Your Best Exam Format?

Now that I’m done with hours upon hours of post-semester meetings and painting my house’s front door, I can comment on Simon’s recent post about open-book exams.

Abandon all hope, ye who enter here.

One’s choice of exam format reflects two questions that are often in conflict. Will the exam give students a reasonable chance of demonstrating whether they have acquired the knowledge that they were supposed to have acquired? Can the instructor accurately, impartially, and practically assess the exam results? For example . . .

  • Oral exam: great for testing exegetical ability on the fly, but extremely tiresome and unfeasible if the instructor is teaching more than a couple dozen students in a course.
  • Multiple choice questions: very easy for the instructor to grade and minimizes student complaints, but encourages binge-and-purge memorization.
  • The timed essay exam, whether open- or closed-book: also tiresome to grade, often susceptible to instructor bias, and, perhaps most importantly, reinforces the unproductive notion that writing (and thus thinking) does not need to be a careful, deliberative process.

How does all this affect me? Over the years I have moved away from formal exams and toward a final culminating assignment — such as a take-home essay question that I reveal in the last week of the semester — intended to test how well students are able to apply concepts to an unfamiliar situation. But lately I’ve become disenchanted with this format, too.

Simon’s post prompted me to think back to my own days as a student. Exams in physics, mathematics, and engineering consisted of, essentially, solving a variety of puzzles — full marks required both supplying the correct solution and documenting how one arrived at it. The primary ability being tested was concept application. One prepared for these exams by working on practice puzzles involving the same concepts. Courses in political science, history, and whatnot had timed essay exams. To prepare for these, I would guess at likely questions and create outlines of essays that answered these questions. I would repeatedly hand-write an outline to memorize it, then turn it into prose during the exam. Even if my guesses weren’t entirely accurate, they were often close enough for the outlines to be very useful.

I’m now wondering if there is a way to turn the outline creation process into the equivalent of an exam. Something that gets graded, but not as part of a scaffolded research paper assignment.

Opening the book on exams

We’re just about getting to the end of semester’s block of teaching weeks, so my attention is turning to final assessment once again.

Let’s take it back, let’s take it back, let’s take it back to the Law School…

With my first-years I’ve inherited a module on the EU that used to be mine some time ago, and I’ve stuck with its assessment regime out of curiosity as much as anything else.

As I’ve discussed elsewhere here, we’re piloting our new computer-based assessment system on the module, so I was keen to see how that changed things. Much of my attention in that regard has been to do with the coursework, but we’re also doing the final exam on it too.

It turns out that this is an excellent opportunity for me to get into open-book exams.

My student memory of these is of watching law students carting a dozen or more lever-arch files (ask your parents) into an exam hall, usually with at least one person having the entire thing spill out across the corridor outside or (on one tremendous occasion) across a busy street and towards a nearby canal.

Happy days. But not very enticing.

But because so much of the work has moved online, not least the exam itself, this seems like a good moment to revisit the format.

For those who’ve not encountered it before, an open-book exam is simply one where you can bring and use any materials you like during the exam period. The idea is that it’s much more like a situation you might encounter in real-life than sitting in a bare room, answering questions you’ve hopefully prepared for, but using only what you can haul from the back of your mind.

The reason it’s not been so popular has been a mix of the aforementioned mess, the fear that students will just copy out other people’s work, and the vague air that it’s not ‘right’.

Of course, I’m a big believer in changing what you do when situations change, so why not try an open-book format?

It helps that the system is still able to detect plagiarism (final submissions are run through the usual software), plus it can note when a student suddenly dumps several hundred words at once.
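The post doesn’t describe how that detection actually works, but the underlying idea is straightforward. As a purely hypothetical toy sketch, assuming the system logs how many characters each editing action adds, it might flag large single-event insertions like this:

```python
# Toy illustration of a bulk-paste heuristic. This is an assumption about
# the general idea, not the actual logic of the assessment system.

def flag_bulk_insertions(chars_added_per_edit: list[int],
                         threshold: int = 500) -> list[int]:
    """Return indices of editing events where text arrived in one large dump."""
    return [i for i, n in enumerate(chars_added_per_edit) if n >= threshold]

# Typical typing adds text in small increments; a 2,000-character jump stands out.
log = [40, 55, 32, 2000, 48]
print(flag_bulk_insertions(log))  # [3]
```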

Moreover, giving an open-book exam removes any feeling of needing to accommodate students over factual errors: my lovely mnemonics will be left to one side should I meet anyone who tries to tell me about the Council of Europe leading the EU.

Of course, an open-book exam – while superficially attractive to students – is a big bear-trap. The temptation to ‘go check something’ will be very high, taking time away from actually writing an answer to the question asked. As those law students used to discover (when we talked to them on our way to the bar), it’s one thing to have access to lots of information, but quite another if you don’t know how to find the right information.

So, we’ll see. My impression so far has been that a lot of my students haven’t really clocked the different issues involved. If nothing else, if they’re relying on my flipped lectures as much as I think they are, then they’ll discover rather quickly that those are in possibly the least-helpful format for an exam.

Let’s hope those lecture notes are in good order.

It’s not me, it’s you: framing grade-inflation

I’m still enough of a kid to be excited to see the place I work at mentioned in the news, especially if it’s in an outlet my mum might see.

Of course, it’d be better if the context of this particular mention were different, but I guess you can’t have it all.

This all comes off the back of the on-going debate in government about grade inflation.

https://twitter.com/educationgovuk/status/1109703893657673728

I wrote about all this last summer, and I’m not sure I’ve gotten much further in my thinking about this, except to note the shift in framing to combating ‘artificial’ grade inflation.

While this might seem to start to take account of the other factors at play, what it singularly doesn’t do is set out a means of calculating this in practice.

Obviously, there are changes in student characteristics that have a direct bearing and these are relatively simple to capture: socio-economic status; entry grades; progressive performance in each year of study.

However, there are also obviously changes in the teaching environment: staffing changes; changes in pedagogic approach; changing curricula (we’ve made our final-year dissertation optional this year, for example); changing provision of learning resources outside the degree programme, at the library or in welfare; and changes in programme regulations.


Audio Feedback and Transparency as Teaching Interventions

This is a review of “Enhancing formative assessment as the way of boosting students’ performance and achieving learning outcomes,” chapter 8 of Early Career Academics’ Reflections on Learning to Teach in Central Europe, by Nikita Minin, Masaryk University.

Nikita Minin of Masaryk University is motivated by a goal we can all appreciate: ensuring that his students achieve the learning outcomes of his course. In his case, the course is a graduate seminar on theories of IR and energy security, and the learning outcomes include improving students’ skills in critical thinking and writing. He noticed that students in his class did not seem to improve much on these skills, and he introduced three teaching interventions in an attempt to fix this.

First, Minin provided more intensive instruction on the writing assignments at the start of the course, providing a grading rubric and examples of successful student work. Second, he gave students audio rather than written feedback on their papers. Finally, using a sequential assessment system, he gave formative feedback first and grades much later in the course. Minin assessed the impact of these three interventions by comparing course sections with and without them, and concluded that the first two achieved the objective of improving student achievement of the learning outcomes.

The interventions described in the chapter are in line with current thinking on in-course assessment. While Minin does not use the language of transparent teaching, his first intervention falls squarely within the approach of the Transparency in Learning and Teaching (TILT) project. Transparency calls on instructors to openly communicate the purpose of an assignment, the tasks students are to complete, and the criteria for success, and Minin does exactly that in this first intervention. Given the data so far from the TILT project, it is not surprising that Minin saw some success by taking this approach. Likewise, now-ubiquitous learning management systems allow feedback to be given in multiple formats, including audio and video. For years now, advocates of audio-based feedback have claimed that it can be a more effective tool than written feedback. Minin’s observations, therefore, also fit nicely with existing work.

Where the chapter falls short, then, is not in the design of its interventions, but in the claims made from the available data. The sample sizes are tiny, with just five students receiving the interventions. With final grades used as the primary dependent variable, it is difficult to tease out the independent impact of each of the three changes. Using final grades is also an issue when the experimenter is the person who assigns them, as bias is harder to avoid than when more objective or blind measures are used. Lang’s (2016) book Small Teaching: Everyday Lessons from the Science of Learning tells us that engaging in self-reflection is itself an intervention, and Minin’s use of minute-paper-style self-reflections to assess the impact of feedback, while an interesting and potentially useful idea in itself, means that a fourth intervention was used in the course. While I do not doubt Minin’s observations that his interventions had a positive impact, as they are backed by existing research, the evidence in the chapter does not strongly advance our confidence in those findings.

However, I have never been one to dismiss good teaching ideas simply because of a lack of strong evidence from a particular instructor.  Minin highlights a crucial concern—that we should never assume that our courses are teaching what we intend them to teach, and that ‘time and effort’ do not necessarily achieve the desired results, even for graduate students. Reflecting on this, seeking out innovative solutions, and then assessing the impact is a process we should all be following, and Minin sets a great example.