Faculty Pathologies I: Inadequate Administrative Processes

At the invitation of the blog owner, Chad, I’m back for a two-part series continuing the theme of what I’ve learned in the last year of building my own business doing dissertation and academic coaching and freelance editing. This series covers common problems I’ve seen while working with faculty on research, project, and time management. This is part 1 of 2.

Faculty usually begin their careers trained to do one thing: research. If they’re lucky, they’ve been trained to teach at least a little bit, too. But no one begins their career trained in administration and management. Those are, theoretically, on-the-job skills that you pick up along the way. As a result, most faculty have vastly underdeveloped systems for managing administrative processes: committee work, cycles of paperwork like monthly meeting agendas, required paperwork for grants and other funding, and the most dreaded one of all – email.

For most of us, email becomes the default way of managing committee work, paperwork, and other not-research-but-still-necessary business. That means a system for managing our email becomes a necessity. That system needs two parts: incoming management and archiving management.

Managing incoming email should be something you do deliberately, not haphazardly. I recommend setting aside two or three times per day to process your inbox. Anything that can be answered in three sentences or fewer gets a response; everything else gets deleted, archived immediately if appropriate, or placed in a specific folder or given a tag/flag indicating that follow-up is required. Then have one dedicated time each day for churning through the items that require more detailed follow-up. Set a designated amount of time for this and stick to it. That doesn’t mean you can’t tackle one or two semi-quick items if you have 10 minutes between meetings, but it does mean that email becomes a designated, deliberate task rather than an interstitial one.

The second part of email management is archiving. The goal is an inbox that contains only active items: ongoing conversations, tasks you’re working on, things you need to follow up on. Everything that’s closed should be either deleted or archived into a system of folders. Most of us are reasonably good at this, but it’s a good idea to make a quick inbox cleanout part of your Friday shutdown routine: archive anything completed that week that hasn’t already been put away, so you can start the next week with an empty inbox.

These and other skills are things I can help you develop through academic coaching. If you’re interested, the summer is a great time to start: it gives you a chance to develop and solidify new or better habits before the chaos of term time arrives. Feel free to take a look around my website at http://www.leannecpowner.com/coaching/ and drop me an email at Leanne@leannecpowner.com. The initial consultation is free. You can also follow me on Facebook at https://www.facebook.com/LeanneCPowner/ or Twitter @LeanneCPowner for free daily writing tips.

Grad Pathologies II: Breaking a Project into Reasonable Goals

At the invitation of the blog owner, Chad, I’m back for a two-part series continuing the theme of what I’ve learned in the last year of building my own business doing dissertation and academic coaching and freelance editing. This series covers common problems I’ve seen while working with grad students on their dissertations. This is part 2 of 2. (Later there will be a two-part series on common faculty issues.)

The most common problem I see in graduate students is a lack of midrange planning. The second most common is difficulty breaking projects into meaningful, doable tasks. It’s not uncommon to see “write theory chapter” on a student’s daily To Do list. The problem is that this isn’t a helpful way to express the task. When are you done? What exactly are you writing? Is this really the thing you should be working on right now?

The solution to this problem is to think of your To Do list in terms of SMART goals. SMART goals are:

  • Specific: The goal is precise about what it calls for; you can tell exactly what the desired output will be.
  • Measurable: The goal has a specific metric or target associated with it, so you can tell when you have completed the task.
  • Attainable: The goal is something you can reasonably achieve that day (or week, depending on what period you make your lists for), given your other commitments for that day/week.
  • Relevant: The goal is directly connected to your weekly or semester goals. If it’s not helping you reach those goals, should it be on your list?
  • Timely: The goal is the next logical step in the project, building toward the following logical step. It should be necessary for upcoming work, not just ‘for the future.’

So a better daily To Do list item for our hypothetical grad student would be “Write 300 words on scope section of theory chapter.” It tells us what part of the project, and where in that project, we should direct our attention. It tells us how much work we need to do to call this task “done.” Presumably, writing the theory chapter is one of the student’s goals for the semester, and this is the next section waiting to be written.

Grad Pathologies I: Lack of Midrange Planning

Some of you may have noticed my absence from ALPS for the last few months. I’ve been busily building my own business doing dissertation and academic coaching and freelance editing. At the invitation of the blog owner, Chad, I’m back for a two-part series on common problems that I’ve seen working with grad students on their dissertations. (Later there will be a two-part series on common faculty issues.)

The biggest, most frequent problem I observe in graduate students (and to be honest, in faculty too) is a lack of midrange planning. The military, into which I married, captures it best with the concepts of strategic, operational, and tactical planning. Strategic planning is overall goals and big picture thinking. We academics are pretty good at this: finish this paper by the conference, collect this data so we can write the next paper, complete the dissertation. We’re also pretty good at the hands-on tactical planning: the daily to-do list is a good example of tactical planning.

Where we fail badly is in connecting the two: the operational level. Operational planning makes our daily task lists meaningful for achieving our strategic goals. It’s about thinking and planning ahead so that our strategic goals are realized on the timeline we want. Operational planning lets you finish the conference paper before the night before it’s due, by getting you started well in advance of the deadline in reasonable increments.

The tool for doing this is the Semester-At-A-Glance calendar. It’s a simple one-page calendar that shows four months at a time, allowing you to see everything that’s going on and budget your time to larger projects accordingly.

Briefly, the steps to using the Semester-At-A-Glance calendar are: identify your priority goals for the term – no more than two or three key things you want to accomplish. Then break those goals down into weekly tasks or goals and assign them to weeks on the calendar. (USE PENCIL!!) You can have more than one goal per week, but the boxes are deliberately small for a reason. If you can’t fit all of your semester goals onto the calendar at once, you need to rethink how reasonable your semester goals are.

I will host a webinar on using the Semester-At-A-Glance calendar on Friday, April 20, at noon Central time (1 PM Eastern). You can find the URL for the webinar on my website, http://www.leannecpowner.com/coaching/ . I use Zoom.us for my webcasting, which will require you to download and install a small browser plugin to view the webinar. And of course, you can always download the most current Semester-At-A-Glance calendar from my website.

Active Learning from Day 1: Comparing Textbooks

I like to start my semesters the way I intend to continue them, with an active learning activity on the first day of class. But what do you do when you don’t have any content yet? You let the students develop the content themselves. My last several (regular) posts for this group will focus on activities for the first day of class that don’t require any student background knowledge yet get them used to the idea that they’ll be out of their seats and interacting regularly in this course.

In Introduction to World Politics/International Relations, students are frequently unsure what exactly they’ve signed up to study. A significant minority think they’re studying the politics of other countries, others think they’re doing global current events, and still others think it’s the foreign policies of great powers. Over the years I’ve built a collection of textbook samples that vary widely in their approaches to the field. I give pairs of students a worksheet that asks them to compare two textbooks to the textbook we’re using in the class. What topics are included in all of the books? Can you find a topic that is in one book but not the other two? For fun, I throw a couple of ambiguously titled comparative politics textbooks into the box. The point of the exercise is for them to define the central core of topics in the field of IR, and then we identify some of the contested or less-central issues that appear in only a minority of books.

When I have a 75-minute class, I then ask them to compare two different editions of the same book, at least two editions apart. (This part of the activity is only possible because I’ve been collecting textbooks for 15 years and raided some retiring faculty members’ stashes as well; your campus library may have old titles that were previously in use at your school.) We compare topics that were prominent in immediate-post-Cold War titles to those that appear in post-9/11 titles, and in a few cases, we can even compare Cold War books too (I’ve got two Morgenthaus that I’ll sometimes entrust to students and a couple early Russett and Starrs). Students are usually quick to notice that the central core topics haven’t changed that much but that a lot more has been added to the scope of the field since the end of the Cold War. We talk about the implications of that for what we teach and study, and how.

Data Visualization in the Classroom

Today’s post is guest-authored by Alexander Von Hagen-Jamar, a postdoctoral researcher with the STANCE research program in the Department of Political Science at Lund University. His research and teaching focus on international relations, international security, state building and capacity, and empirical methodology.

In 2013, I spent two trimesters teaching at Carleton College in Northfield, Minnesota. While there, I had the opportunity to design a course about any subject I wished (within my expertise). I chose to organize the class topically around the consequences of violent political conflict. The course’s other core learning goal was skill-oriented: I wanted to help the students develop applied quantitative literacy in context, and through doing so, encourage them to think deliberately about communication in a variety of media. To do that, I designed a series of assignments centered around a group data visualization assignment.

Theory Chapter Tetris and Other Ruminations of ‘Teaching’ Online

In a previous post, I talked about how I was embarking on a new career trajectory that involved teaching writing and project management skills online via videoconference, and that I was thinking intensely about how to involve active learning principles in this teaching.

I have to say, it’s been a real challenge. I’ve done my best to leverage the capabilities of Zoom.us, the free videoconference platform that I use. Zoom allows screen sharing as well as on-screen annotation by both parties. I’ve done some collaborative outlining and collaborative editing with clients to teach those crucial skills. But at some level, it seems like not enough.

My most recent innovation was ‘Theory Chapter Tetris.’ A client had a theory chapter that she described as “a hot mess,” and I would largely concur. She wanted to keep as much of what she had already written as possible, though. To me, that sounds like a game of Tetris: take a bunch of misshapen pieces of various sizes and fit them together into solid chunks without a lot of gaps or extra bits sticking out. So I turned her sections into pieces and we manipulated them into a series of chapter outline options.

Interactive Resources for Teaching Stats

The internet has allowed the creation and dissemination of a wide range of tools useful to those of us who teach statistics in our research methods courses. I found two to be particularly helpful.

  1. Guessthecorrelation.com – As its name implies, the site shows students a scatterplot of points and asks them to guess the correlation. My students were asked to play three games and upload screenshots of their final scores as evidence that they had completed the assignment. Many went on to play more than three; the sound effects and points make it a very addictive game. What it brought home to them very effectively is that correlation is about how tightly the points cluster around the (imaginary) best-fit line, not about the slope of that line. Students enjoyed playing a game as homework; it was certainly less onerous than practicing calculating correlations by hand.
  2. The Rice Virtual Lab in Statistics: Sampling Distributions – This Java-based simulation lets you draw a population distribution of any shape, take repeated samples of any size, and then plot the sampling distribution of the means (or several other statistics). I was able to demonstrate to the class in just a few minutes that the shape of the parent distribution doesn’t matter; the sample means will still be approximately normally distributed. I even let students draw some of the distributions and run some of the simulations themselves, so they were convinced it wasn’t just the values I was picking. Watching their faces, I could tell this really blew their minds. They probably would have blindly accepted it if I had just told them this is how it is, but having seen it, they had a much easier time accepting that the same property holds for regression coefficients. (The chance to visit the central limit theorem was a bonus for undergrads.) The whole activity took less time than a lecture on the same material. (A similar simulation exists for confidence intervals as well.)
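The same sampling-distribution demonstration can also be run in a few lines of code, if you prefer a scripted version to the applet. Here is a minimal sketch (the exponential parent distribution, sample size, and repetition count are my choices for illustration, not part of the Rice lab):

```python
import random
import statistics

random.seed(0)

# Parent distribution: exponential with mean 1 -- heavily skewed,
# nothing like a bell curve.
n, reps = 50, 5000

# Draw 5,000 samples of size 50 and record each sample's mean.
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(reps)
]

# Central limit theorem in action: the sample means cluster around
# the population mean (1.0), with spread near sigma / sqrt(n) =
# 1 / sqrt(50), roughly 0.14, and their histogram is approximately
# bell-shaped despite the skewed parent.
center = statistics.fmean(sample_means)
spread = statistics.stdev(sample_means)
```

Plotting a histogram of `sample_means`, or letting students swap in a different parent distribution, makes the point visually, much as the applet does.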

What about you? What are your favorite interactive sites for teaching research methods or statistics?

Using Stats in the Regular Classroom: The 3 S’s Approach to Interpretation

One of the obstacles to using statistically tested articles in the regular classroom is that most students don’t know (or don’t remember) how to interpret the results. I developed a very simple scheme, the 3 S’s, to help them understand results tables and quantitative articles more generally. The basic framework was designed with linear regression in mind, but after a few practice runs you should be able to introduce it in the context of more complicated models (e.g., in probit/logit the size of the coefficients isn’t really meaningful by itself).

The first S: Sign. Hypothesis testing is generally about whether the relationship we find goes in the direction we think it should. This is captured by the sign of the coefficient: whether the relationship is positive (upward slope) or negative (downward slope). So the first thing we’re interested in, when we’re testing a hypothesis, is whether we’ve gotten the sign right. Does what we found match what we expected?

The second S: Sureness. Now that we’ve found the sign, or direction, of the relationship, how sure are we that the sign is right? This is the concept of statistical significance, simplified down to its core element. Sureness asks whether the value we found is “far enough” away from 0 for us to be sure the sign is right. If the value is very close to zero and we’re very uncertain (statistically speaking) about it, we can’t trust that the sign is right: the true value could actually lie on the other side of 0, making our sign incorrect. If the value is “far enough” from 0, we can be reasonably sure the sign is correct. I usually gloss over the concept of “far enough” and explain that they’ll cover standard errors in their statistics or methods course. For now it’s enough to know that we can be reasonably sure about most large coefficients, and even about some small ones if the estimate is very precise.

The third S: Size. Only after we’re sure the sign is right can we meaningfully talk about the size of the relationship. Size isn’t the only thing that matters; in fact, it’s usually the least important part of interpreting statistical results. The size of the relationship tells us how much the value of the dependent (outcome) variable changes for each one-unit change in the independent (input) variable. I have sometimes found it helpful to write out the middle-school equation for a line, y = mx + b, and explain the effect of coefficients by asking: if x changed from 4 to 5, how much would y change? What if it went from 58 to 59? And so on.
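The three S’s can be walked through on a toy regression. In this sketch the data are simulated (the “true” slope of 0.5 and the noise level are invented for illustration), and the slope and standard-error formulas are the standard simple-OLS ones:

```python
import math
import random

random.seed(1)

# Made-up data from a "true" relationship y = 2 + 0.5x, plus noise.
n = 50
x = list(range(n))
y = [2.0 + 0.5 * xi + random.gauss(0, 2) for xi in x]

# Standard simple-OLS formulas for the slope and its standard error.
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
rss = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
se_b1 = math.sqrt(rss / (n - 2) / sxx)

# The three S's:
# Sign:     is b1 positive or negative?
# Sureness: is |b1 / se_b1| "far enough" from 0 (roughly > 2)?
# Size:     each one-unit change in x moves y by about b1 units.
t = b1 / se_b1
```

Because we invented the data, students can check that the three questions come out as expected: the sign is positive, the t-ratio is comfortably above 2, and the size recovered is close to the 0.5 we built in.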

You can find a helpful PowerPoint that walks through this logic – the Crash Course Statistics package – on my website.

Introducing Computer Programming in Political Science Classes

Today’s post is a guest posting from Jane Lawrence Sumner of the University of Minnesota. Jane’s research and teaching interests are in international political economy and research methods.

****************

Undergraduate methods classes frequently use statistical software, even though students typically have little background in programming. While we seldom ask students to do anything that requires that background, the idea of having to program can be a roadblock for many of them. To alleviate this, I designed an activity that introduces students to the fundamentals of programming in a non-technical way. The objective was for students to understand that programming primarily involves giving the computer very good, very specific instructions for the task at hand.

Students were split into small groups, and each group was given a piece of paper with a hand-drawn cartoon on it. They were told to write instructions for reproducing the drawing, which they would then give to another student. After twenty minutes, they stopped. Each group stood with their backs to the whiteboard while they read their instructions to a student from another group, positioned at the whiteboard with a marker. Students were not allowed to modify their instructions and were deliberately unable to see the student with the marker, so they could not course-correct. The student with the marker was allowed to ask only one question: “Can you please repeat that?”

Only one of the drawings actually resembled the original. The students enjoyed drawing and seeing the monstrosities emerge on the board, and loved the great reveal at the end of what the drawing should have looked like, but the key pedagogical point came in the discussion at the end, when I asked, “What made this difficult? What would have made it easier?” The points that emerged organically were the same objectives I’d set out to convey: many short, precise steps were easier to follow than fewer long ones. Students also determined that a clear, constant reference point (like a coordinate system) would have made direction and scale easier to communicate; that naming parts of the drawing for later reference was easier than repeating “the curvy line that ended up at the first straight line” over and over; and that when each step depended on the one before it, things could quickly go awry.
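The lessons the students discovered map directly onto real code. As a sketch (the tiny “drawing language” here is entirely invented for illustration), each instruction is one short, precise step, and named points provide the stable reference the students wished for:

```python
# A toy "drawing language": each instruction is one short, precise
# step, and named points give a stable reference for later steps --
# the qualities the students discovered at the whiteboard.
def run(instructions):
    points = {}   # named reference points
    lines = []    # drawn segments, as ((x1, y1), (x2, y2)) pairs
    for op, *args in instructions:
        if op == "point":           # point NAME X Y
            name, px, py = args
            points[name] = (px, py)
        elif op == "line":          # line NAME1 NAME2
            a, b = args
            lines.append((points[a], points[b]))
    return lines

# A square, described as many small, unambiguous steps.
program = [
    ("point", "a", 0, 0), ("point", "b", 4, 0),
    ("point", "c", 4, 4), ("point", "d", 0, 4),
    ("line", "a", "b"), ("line", "b", "c"),
    ("line", "c", "d"), ("line", "d", "a"),
]
square = run(program)
```

A mislabeled point or a skipped step garbles the whole figure, which is exactly the failure mode the whiteboard exercise makes vivid.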

Doing “Measurement”

I like to introduce measurement by doing it. My class on measurement starts with a wide collection of fruit lined up on the chalkboard ledge. I ask students to select three items from the collection and compare them on three dimensions. (I often provide a matrix of appropriate size on a handout, or project one on the screen, to give students a framework for their work.) We usually briefly discuss some of the dimensions they could compare on: color, taste, size, weight, water content or density, sweetness, etc. After giving students about five minutes to complete their measurement matrix, we compare some of the measurements they made and discuss what measurement is: the systematic comparison, evaluation, and assignment of values to objects or phenomena. They have just engaged in ‘measuring’ the fruit, even though no tape measures or scales were involved.

We then usually move on to a discussion of precision and accuracy in measurement, often by discussing the ways in which they measured weight and color. Replicability looms large in this part of the discussion, with a focus on ways to reduce the subjectivity of measurement so that other students or researchers would obtain the same values that they did. How would we obtain reliable measurements of color? Color comparison charts (an external reference) are one option, but how would that handle the bicolored apple? Are there uses of the variable ‘color’ where the coarse measurement of red, yellow, green, etc., is sufficient? In some terms I’ve numbered the items, and we then discuss how the simple act of naming (classifying) an object as an apple or mango or whatever constitutes a form of measurement.

Just to complicate the comparison and discussion, I usually include at least one bicolored apple (red and green), one can of juice (often pineapple), some type of dried fruit, a tomato, a bell pepper, and an eggplant. (The latter three are technically fruit since they have seeds on the inside, even though most Americans think of them as vegetables.) A can of fruit cocktail is also a good ‘wrench’ to throw into the mix. Whether the dried fruit or the juice “count” as fruit is always a good discussion. If I can get unusual fruit like starfruit, kiwi, persimmon, or plantain, I like to include those alongside the usual suspects of apples, oranges, bananas, grapes, plums and the like. I typically spend no more than about $15-20 on fruit for the activity, and frequently far less in the summer.

When we’re done with the class, I usually invite them to take whatever fruit they want from the collection for snacking. I take home the rest and use it myself, or leave it in the department lounge for others.