Every year when the award nominations come out, there are dark mutterings about biased, complicated goings-on behind the scenes that result in nominations of Stories-And-People-I-Don’t-Like and prevent the nominations of Stories-And-People-I-Like. Usually these accusations are wildly silly, but sometimes they touch on the natural and understandable bias that is structurally part of the award process. Because these awards have labels like “best”, people think that they are (or are supposed to be) abstract Assessments of Quality — the “Best Novel” is thought somehow to be indisputably “better” than the novels that were not so chosen, or certainly those novels that were not nominated. But this is self-evidently false — not only is “quality” an elusive characteristic, but most award processes aren’t really designed to capture such proxies for quality as exist. As I like to tell my students about the litigation process, “Trials aren’t about determining the truth; they’re about measuring the strength of the evidence.” A similar argument can be made about the awards process.
Let’s start by acknowledging that any measurement captures only the data it is structured to capture. The Nebula Awards, for example, are the result of two votes by the members of SFWA. Thus, the award captures the sentiments of the voting membership — and that’s all it can capture. While it would be nice to imagine that the members of SFWA feel some obligation to vote only for the Highest Quality Works, based on some imaginary scale or set of criteria, I doubt that this is so; certainly it’s not true of me when I vote for the Nebulas. For one thing, I haven’t read all the novels, novellas, novelettes, short stories, YA works, etc. for the given year, and the ones I did choose to read were selected for idiosyncratic reasons: a friend wrote one, or a new author I was curious about, or somebody told me to look at it, or (often) it happened to come out on a podcast just when I needed something to listen to during my commute. The great majority of eligible works I haven’t read, and I’m sure that there are many that are “better” than the ones I did read, which won’t get my vote for that reason.
Contrast this to juried awards, where the panelists are honor-bound to attempt to measure “quality.” While this is still difficult, it’s amazing how much agreement there can be. I happen to serve on the review board of an organization that receives fiction submissions. Board members are randomly assigned to these submissions, and are asked to rate them on a uniform scale. While the board members have wildly different tastes in fiction, it’s astonishing how closely our ratings of those submissions align. This gives me some level of confidence that, at least among professional writers, there are standards that can be more-or-less agreed on.
But go back to the Nebulas. One thing I often notice among the Nebula nominees is that they tend to be nice people — that is, they are often writers who are gracious, kind, and generous to other writers. This should not surprise us. Writers who are kind to other writers are more likely to have friends among the membership of SFWA than writers who are dismissive, rude, or offensive to other writers. I’m sure that a lot of SFWA members, like me, go out of their way to read books and stories by their friends, just so that they can talk to them about their work. So, when Nebula time comes along, more stories by those nice people have been read, and consequently more are in contention.
Does it strike you as unfair or illegitimate that people vote for works by their friends? I don’t think it should. First, as I said, this is a voting process, and people will vote based on those criteria that appeal to them. Second, professional writers are, I suggest, more likely to have good writers among their friends than bad writers. People who are popular among the membership of SFWA tend, on the whole, to be excellent craftspeople.
I’ve also noticed that there are a lot of stories in online markets getting nominations. This also is not surprising — online markets are easy to get to on the fly, and so it’s more likely that those works have been read. Since an online nomination and voting process was initiated, it’s more likely (it seems to me) that people reading online stories would be voting than people who read only print stories.
The upshot of all this is that neither the Nebulas nor the Hugos are really designed to measure quality on some absolute scale, unlike the juried awards, which I think at least attempt it. Does that mean that those votes are insignificant? Not at all. What is popular among writers, or popular among fans attending cons, is an important piece of information.
Naturally there is bias here — precisely the bias of the people who vote on the awards. Nancy Kress has pointed out, for example, that it’s statistically easier for a woman author to win the Nebula than the Hugo; I think this is directly related to the different populations that are doing the voting. The awards naturally and inevitably reflect the prejudices of the writers (in the one case) or the fans (in the other) who choose them.
I disagree, therefore, that the natural consequences of the award selection process create “unfairness” or “illegitimacy” in that process. The awards are what they are, and they’re a lot of fun. And you won’t find any bad stories among the nominees. But to look for objectivity or “justice” in them is misguided.