
My student evaluations came back from my first semester back teaching. I had sort of forgotten that they might exist, since the prompt for them had hit the students before the end of term. Then there had been final grading, and commencement, and, frankly, this was all in my rearview mirror. I admit to a bit of trepidation, but I was also curious about what they would look like. As a library director, you can often go without any concrete feedback about your performance. They didn't disappoint.

It reminded me of a poorly considered annual review form. The document the students saw asked them questions that had no relevance for our course or their studies. Some of the questions were designed to elicit data to match promotion norms (were office hours available, for example), but it sought too much information. I felt it was largely asking the wrong questions; I'm not sure what the students felt. It was pages long, which I'm sure causes survey fatigue and attrition, reducing the overall response rate. In my case, it was about 20%, which seemed a reasonable number.

We Few, We Happy Few

At the same time, it's a minuscule sample. As an operational manager, there is no way I would make a decision based on such a small number of people. Since the views of that group were further dispersed, the feedback was roughly equivalent to a handful of random students giving me feedback in the hallway after class. There is no good way to determine what's situational (were they frustrated with, or elated by, me or someone else on the day they completed the survey) and what reflects an actionable goal.

It didn't help that some of the feedback was … inaccurate. As someone who held weekly office hours and had no attendees, I was a bit surprised at how many students indicated that they'd regularly partaken of office hours. To be clear: I don't care that they said they attended when they hadn't actually attended. It is far more interesting to me that they would feel that was the correct answer to provide. Do they think their answer reflects on them to some third party (since both I and the student know they didn't attend)?

The whole process reminded me of nothing so much as annual performance reviews. We are in the middle of that process at the law library at the moment. We were provided a very long (eight-page) performance review form to complete. Every staff person completes a copy and a manager completes a second copy. Most of the form is marked as optional, and it incorporates largely unhelpful assessments. It is, like so many annual reviews, an assessment of vibes. There is nothing inherently wrong with trying to assess vibes, except that vibes aren't the same as performance.

I am no longer in a position to impact these documents. At my last law library, we tossed the previous review form and settled on a single page. There was an overall selection (meets expectations, needs improvement, does not meet expectations) and then some space for comments.

We got rid of "exceeds expectations" because it is not objective, and not everyone has a job where they would be able to exceed expectations in a measurable way. The highest rating can easily become a way for managers to show preference for one group over another, or for individuals over their colleagues. Also, an "exceeds" one year should not be repeatable; if you're exceeding every year, then you're meeting expectations. It should be enough for everyone to either be meeting expectations or not.

I took a look at a couple of recent performance review forms I've worked with, and they share some similarities. The first was a government form. It was, like the university one I am using at the moment, a one-size-fits-all form. Supervise? Fill out this page. Don't supervise? Don't fill it out. And the questions: how do I measure "good" versus "satisfactory" cleanliness and maintenance of my equipment?

A screenshot of the government performance review form: a table with rating categories across the top (ranging from "superior" to "unsatisfactory") and a left-hand column of performance labels like "Follows policies and procedures," "Job attentiveness," and "Cleanliness and equipment maintenance."

One organization I worked at had a bit more latitude and attempted to gamify the categories. If anything, though, this is less clear than "meets" versus "exceeds" or "good" versus "satisfactory." The labeled categories ("properly," "keeps in mind," "owns") seem insufficiently exact to help someone know whether they are meeting the meaning of the word and, if not, how to meet it.

A screenshot of that organization's evaluation form: along the top, instead of words, are five icons of a person in a movie seat reacting to a performance, ranging from an empty chair, to asleep, to attentive, to leaning forward and clapping, to jumping on the seat while applauding. The left column contains performance labels like "Owns the work and sees it to completion" and "Properly gauges need for soliciting appropriate information."

Fortunately, the optional segments of our current performance review form are the ones with the vibe assessments. I have dropped them from my reviews for my staff and, going forward, will ask staff to use only the required parts of the form as well (which cuts out about five of the eight pages).

The student evaluation forms are a bit better, in that they tend to ask something that can potentially be measured: Office hours? Did the faculty member use most of the allotted class time? I'm not sure they're useful measurements, but they can be counted.

Verbatims

My preference for reviews is to have what are called verbatims on surveys: individually typed answers to the questions. The student evaluations were a riot. Of the dozen students who completed the survey, only one was really wondering what I was doing in a classroom. Literally, "where did they get this guy". That student was balanced by "he was one of my favorite professors in law school" and "provided great feedback".

This provided far more valuable information than the other assessments did. I mean, for example, what did I learn from 20% of my class telling me they regularly attended office hours when I know that no one did? Or from their assessment that I had or had not fully utilized the time allotted for the course? The verbatims gave me insight into why they felt the way they did. I tended to agree with the student who said that, while my teaching was fine, the class seemed pointless (it was, at the time, required).

It's the same way I feel about employee performance reviews. A check box doesn't tell anyone as much as a bit of prose. How does an employee know the difference between a "good" and a "satisfactory"? A sentence or two will give them so much more information. Also, writing prose forces me, as the person completing the performance review, to be much more even-handed in my assessment. I should be able to tell if I am letting emotion or recency bias get in the way of my assessment of a person's performance.

One thing every employee should do when they are preparing for their performance review is to look at their calendar for the period under review. Click through each week and see if you can identify the key activities you engaged in. Some of those will be obvious: you gave a presentation, attended a conference, wrote a paper, taught a class, met a performance KPI. Others may have slipped your mind. Put them on a list and include that with any performance review document. If there’s not a good way to add it, include it as an attachment for your performance review meeting.

These can be hugely helpful to a manager because, frankly, we forget things. At the same time, we have our own memories. Hardly a year goes by in which I don't include something on the performance review that the employee did not themselves remind me about, even though I considered it a significant event or outcome. But the employee's list is a key reminder, and the timing is important.

The more I've thought about teaching, the more I see these similarities with management. When I think about managing early-career librarians, many of them are about the same age as a law student who has recently finished their undergraduate degree. There isn't a good reason, that I can tell, not to take the same sort of approach to feedback, to measurable outcomes, and to expectations.

At the same time, it makes me wonder if I can better prepare students for the student evaluations. As a lecturer, can I provide them with some resource similar to the employee year-in-review document? For example, is there a way to give them an overview showing what we've spent the preceding 12 weeks doing? How their performance has improved over time? How what we've accomplished met the learning objectives we discussed in the syllabus? In other words, is there a way to provide them with those mental nudges that might help them provide me even more useful feedback?

I have heard that student evaluations will diminish as a factor in faculty promotion and tenure. Given what I’ve seen, I can see why. It’s not entirely arbitrary but it’s not actionable either. When you add in the potential for other biases, even if I am unlikely to experience them, I can’t say it’s a terrible thing to lower their impact. I have a bit of information to work with but it was neither swingeingly awful nor outstanding. If anything, I think I already had a good sense of what I would do differently merely by having enough years behind me to know where I need to make adaptations.