The truth behind the lie that “Wine tasting is junk science”
Over the last few years, a ton of stuff has been published about how inaccurate critics’ reviews are. You’ve heard it all: We’re influenced by price. We give different reviews to the same wine. Different critics give widely varying scores to the same wines. (For a summary of the various complaints, see this article, which appeared yesterday on Yahoo Finance.)
All of the individual criticisms are largely true. In a moment, I’ll tell you why none of that matters, but first, I want to try to figure out why some people get so psychologically bent out of shape about wine critics.
The latest to do so is David Derbyshire (great name), who writes for the British publication The Guardian. Here’s the link to his article, Wine-tasting: it’s junk science.
Why don’t people get so upset about restaurant critics or movie critics? You’ll never see an article headlined RESTAURANT REVIEWS ARE JUNK SCIENCE. That’s because restaurant reviewers don’t pretend to be offering anything but their opinion.
Well, neither do wine critics. If you want a “scientific” analysis of a wine, send it to ETS. But how useful would that be for the consumer? Not very. Consumers don’t want scientific analyses of wine and they don’t need them. They want to know what it smells and tastes like, and how it feels in the mouth, and maybe a few other things. For these, they turn to critics.
What’s wrong with that?
We could settle this whole thing in 5 seconds if all the wine critics would take the pledge. What pledge? Admit that your review is the way you responded to that particular wine at that particular time. Don’t claim to be scientific about it, just assure readers that you’re doing your level best and have no conflicts of interest. I sometimes think critics invite this outside criticism because of their implication that wine tasting is science when it’s not. All the critics know this. They all know how fallible they are. They all know they could be fooled, and rather easily at that. But very few of them will admit it. They hide under a veil of authority and pretense, and that’s precisely why this field of wine reviewing is becoming suspect.
If social media has taught us anything, it’s to be transparent in our dealings. Transparency doesn’t cost people their reputation; it enhances it. And the most transparent thing a critic can do is to tell people, Hey, I could be wrong, but this is my opinion, just sayin’.
Yes…critics’ scores and tasting notes are only one person’s palate and opinion…but so many people rely on them! The average consumer relies on critics’ scores, and so does the winery. We have a long way to go, don’t you think?
Yes, you are correct. The problem many people have is that the scores critics give are often disassociated from that pledge, and often from the review itself. 93 points implies a degree of precision and stasis that is just impossible to achieve. Sadly, most consumers are unaware of who gives scores and what those scores actually mean. 91 points means something different to you, Richard Jennings and Robert Parker. If it meant the same thing to each reviewer, consumers would benefit more, rather than having to become their own experts on various critics’ palates. But yes, in and of itself, critical wine analysis is what it is, and nothing is inherently or theoretically wrong with it. There are just lots of practical problems with it….
Makes you wonder what a double-blind movie or food review would look like.
But I think you are right that transparency is key. As long as you are honest about what you are providing, the consumer has the opportunity to evaluate whether they find it useful or not.
Hodgson’s article is solid, and you do little to refute the “lie” that he purportedly represents. And restaurant critics have a helluva lot more to gauge than a wine critic; with the ambiance, menu, service, drinks, quality of preparation, presentation, and overall quality, it’s much more objective than wine tasting. I would imagine that if Hodgson ran the same type of test with restaurants, critics’ opinions would align much more closely. You even admit that wine critics’ reviews represent “the way you responded to that particular wine at that particular time,” which sounds like some postmodern excuse, denying the role of objectivity in favor of protecting the so-called art of wine criticism.
If you look at a restaurant review as a whole, you might have an argument, Mike, but Zagat, for example, still gives a single numerical score to the FOOD, which in itself is not significantly different from the review of a wine and relies heavily on the subjective palate, mood, and other outside influences a critic faces. For a restaurant food review to be any good, maybe the critic should taste the food blind in a neutral environment, as many wine reviews are done.
Steve, I thought this was a very well-stated argument and a well-written article.
I think the backlash against critics is based on the fact that they are trying to quantify something that is so subjective. I’m sure that art & movie & music critics also deal with the same backlash. Adding the 100-point scale gives the illusion of mathematical accuracy, so maybe that is where the “junk science” argument comes into play.
My latest opinion is that the further you progress in the wine industry, whether you are a natural winemaker or a negociant or a sommelier or a wine critic, more and more people will line up to tell you why your way of doing things is wrong. While it never hurts to listen to dissenting opinions, you can’t let the critics get you down (which is a funny thing to say to a professional critic, but so it goes).
Yup, Kyle…what you say is dead-on. The TN is merely the critic’s attempt to describe their take on that wine…at that point in time…and the condition of their palate at that time. But, then, the assigning of a 100-pt score to that wine implies a scientific precision (at least in my mind) that is totally unwarranted and laughable.
Can you imagine the scorn and ridicule that would be heaped upon the movie and restaurant critics if they started using a 100-pt scale to accompany their criticism?? They’d be laughed off the face of the NYTimes, I suspect. Yet why do these vaunted wine critics imply their criticism is so precise as to warrant assigning a 100-pt score to a wine? Stupid, it is.
Which is why I always liked Charlie’s 3-meadow muffin scores.
Tom
Steve, I admire your take on this; you have always been more willing than most to admit the subjectivity of wine ratings. Question: Do you think newspaper movie reviews are more subjective than wine reviews, less subjective, or about the same?
Patrick: I think movie reviews are more subjective than wine reviews. Not by a lot though.
Why the hell would anyone think that “wine tasting” is science? There is a field of study called sensory science, and it is treated as science, with elaborate statistical evaluation of data from a large group of trained tasters. This field forms the basis for much of the formulation of the processed foods we eat. It is used in the wine business sometimes to do new product development and to validate particular experimental endeavors. Wine tasting by a critic or a group of critics in an uncontrolled environment is not science, and I’m surprised anyone would think it is. Go talk to Hildegarde Heymann at Davis if you are interested in sensory science with regard to wine.
Wine tasting, or perhaps I should say, wine scoring, isn’t even science, much less junk science. To assume that one isolated data point is telling you much more than an impression at a single point in time is absurd. Without enough data points to do a regression analysis of the scores it is pretty much meaningless from a scientific point of view. Doesn’t mean the score isn’t useful, it just isn’t scientific.
Wait, what!?! This Robert Hodgson chap’s big gotcha to the competition circuit is that judges tasting in panel groups show a +/- of 4 when scoring on a 100 point scale? Wow, for that format of tasting and reviewing, that is some surprisingly good accuracy. That article actually made me feel more inclined to trust competition results. I used to be pretty biased against them since the format would seem to prevent anything even resembling a coherent result.
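For what it’s worth, the +/- 4 figure being discussed is just a spread around a mean, which anyone can compute from repeated scorings. A minimal sketch in Python (the three scores below are invented purely for illustration, not taken from Hodgson’s data):

```python
import statistics

# Hypothetical scores one judge gave the same wine, poured blind three
# times in the same flight. Numbers invented for illustration only.
repeat_scores = [90, 86, 94]

mean = statistics.mean(repeat_scores)
spread = statistics.stdev(repeat_scores)  # sample standard deviation

print(f"mean {mean:.1f}, +/- {spread:.1f} points")  # mean 90.0, +/- 4.0 points
```

On a 100-point scale that is the difference between a silver and a gold medal, which is the whole debate in a nutshell.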
Luckily Steve, I think I have the answer that should solve all our problems and forever shut down the comment section on the 100 point scale debate. Just like the scale I use to weigh my harvest bins, it’s time to bring in the bureaucrats! Every year, the NIST should visit every Certified Wine Critic or Deputy Wine Reviewer and administer a standardized test. After scoring 100 wines (chosen by a secret panel that is in turn chosen by a non-partisan group of wine dignitaries who come from a body of elected wine officials representing each AVA!), they will be given a Seal of Accuracy (Bonus Alert: You would get a cool badge or medal so you could flash some bling when hanging with your sommelier pals) and must publish their achieved standard deviation along with any numerical score they publish. I feel that we as wine consumers have the right to know what STDs our critics have! Ah, nothing can solve the world’s problems like when the power of science merges with the faultless truth of politics.
I can’t think of a movie, restaurant, or art critic who assigns a point scale to what they review…
Beethoven’s 9th, by the Berlin Philharmonic, von Karajan conducting…93 points?
Mirrors, smoke and hand-jobs.
Ahhh…The decadent delights of this industry.
Mr. Wagner–Sadly, you are wrong. 92 at best.
Mr. Hill–The three-star system used by Connoisseurs’ Guide has always been good and useful but it has limits just as any notational system of ratings will have.
The idea that giving 100 points to a movie is idiotic is undone by the very fact that there are multiple websites that do just that. And while the SF Chron’s famous clapping-man rating has only four levels of sophistication, it is as open to random misses as any wine rating system.
As for restaurant reviews being closer to each other than wine reviews, I guess I would have to point you in the direction of the reviews that appear in the SF Chron. I love M. Bauer, not so fond of others and find a couple of the less frequent reviewers to be ignorant. Yet even Bauer is given to inconsistency from time to time–because he is human and because he has likes and dislikes in style and likes and dislikes on a given day just as we all do.
So, let’s get back to the basic premise. “Wine reviews are junk science”. The person who penned those words knows nothing about wine reviews. Those of us who do this for a living would never say that what we do is science. It is a cheap journalistic trick to set up an inaccurate red herring so it can be knocked down.
Let’s stop assuming that consumers are idiots. If they are really interested in wine, they will find a critic whose palate they trust and agree with and listen to their point of view! You are right that a tasting is an opinion in a moment. Taste a little earlier and it’s different; taste a little later and it’s different, regardless of whether it’s waiting in a bottle or a glass (although in the glass it evolves more quickly). Wine reviews, just like music reviews, restaurant reviews, book reviews, and movie reviews, are all one person’s opinion in one moment! Wine reviewers have a broader range of knowledge to give us more details on what they notice and taste. If you read a review and then taste the wine and disagree, well, don’t follow that person’s reviews anymore. We are past the point where we look at these people as gods who are always right. I mean, how many of you out there actually buy a wine just because of its Parker rating? I guess I’m not sure why (if you have a brain) this is an issue at all!
Maybe he was feeling grumpy because a lot of wine review writers think they are God’s gift to the written word, when in fact a lot of them are just full of themselves.
Nothing wrong with quantifying one’s take on some performance; just don’t get so precise as a 30-pt system (= 100-pt system). Also, the references to movie (or theater) reviews are way off base. There is no parallel. Something rated for how it tastes in one’s mouth is way different from all that goes into drama, which is about the fullness and intellectual complexity of the human condition. Finally, Zagat’s number is based on the average of numerous consumer evaluations. A somewhat similar methodology works with the reviews at Rotten Tomatoes, which can parlay a binary system into a 100-pt scale in the composite.
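The binary-to-composite trick mentioned above is simple to illustrate: each review is reduced to a thumbs-up or thumbs-down verdict, and the published number is just the percentage of positive verdicts. A rough sketch (the verdicts below are made up for illustration, not real aggregator data):

```python
# Each review reduced to a binary verdict (True = positive), the way an
# aggregator like Rotten Tomatoes works. Verdicts invented for illustration.
verdicts = [True, True, False, True, True, False, True, True]

score = 100 * sum(verdicts) / len(verdicts)
print(f"{score:.0f}% positive")  # 6 of 8 positive -> 75% positive
```

Note that the resulting “score” measures consensus among reviewers, not intensity of any one reviewer’s opinion, which is why it behaves so differently from a single critic’s 100-point rating.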
I found Derbyshire’s article in The Guardian thought-provoking and educational, but agree with you, Steve, that most who review wines do so with the understanding that it is an opinion, not scientific fact. Do I think there are inherent biases and situational factors that influence every tasting? Absolutely. That’s why it’s best to read from several sources (including crowdsourced reviews) in order to get a more comprehensive view. In the end, though, the end consumer is subject to the same handling, serving, health and environmental biases that the reviewer is, so even if the review itself were more controlled and precise, the consumption would rarely, if ever, match the conditions the wine was reviewed under. There would still be disconnects between honest reviews and any given consumer’s actual experience.
Great post. Perhaps wine critics’ ratings would be less controversial if they used a “one to five stars” rating system. The 100 point scale implies a certain level of precision and accuracy that just isn’t there. We know wine is really scored on a curve.
That said, if I ever produced a wine that received a 98 pt score I’m certain I would applaud the critic’s opinion, skills, and consistency! LOL
It should be noted that the two most prominent ratings organizations won’t admit they’re fallible. And one often runs pseudo-scientific explanations about how its results are repeatable.
Thank you, Steve, for your honesty.
REGARDING HODGSON’S INVESTIGATION (NOT “NEWS” IN CERTAIN CIRCLES — GIVEN ITS 2009 ORIGIN), WORTH READING . . .
From The Wall Street Journal “Weekend” Section
(November 20, 2009, Page W6):
“A Hint of Hype, A Taste of Illusion;
They pour, sip and, with passion and snobbery, glorify or doom wines.
But studies say the wine-rating system is badly flawed.
How the experts fare against a coin toss.”
[Link: http://online.wsj.com/article/SB10001424052748703683804574533840282653628.html]
Essay by Leonard Mlodinow
[Caltech professor and co-author with Stephen Hawking of “The Grand Design”]
AND THIS . . .
From the Los Angeles Times “Business” Section
(January 29, 2009, Page Unknown):
“Wine Judges Are Rather Unsteady, Study Finds;
Only 10% in a four-year study of California State Fair judging
were able to consistently give the same rating, or something close,
to the same wine sampled multiple times in a large blind tasting.”
[Link: http://articles.latimes.com/2009/jan/29/business/fi-wine29]
By Jerry Hirsch
Times Staff Writer
ON THE HISTORICAL ORIGIN OF MOVIE REVIEW RATINGS . . .
Excerpts from Wall Street Journal
(June 23, 2009):
“Let’s Rate the Ranking Systems of Film Reviews;
The Stars, Grades and Thumbs Applied to Movies
Suffer From Lackluster Performance, Low Production Values”
[Link: http://online.wsj.com/article/SB123265679206407369.html]
[See accompanying exhibit]
By Carl Bialik
“The Numbers Guy” Column
More than 80 years ago, Hollywood’s star system was born — not the studio machine for building franchises around actors, but the method of rating movies with a certain number of stars.
The first appearance may have been on July 31, 1928, in the New York Daily News, which several critics and film historians remember as the pioneer in the field of quantifying movies’ merits. . . .
Today, the star system is ubiquitous but far from simple for critics who must fit an Oscar hopeful and a low-ambition horror movie on the same scale. Even those critics who don’t assign stars or grades find their carefully wrought opinions converted into numbers — or a thumbs up or thumbs down — and mashed together with other critics’ opinions. Critics tend to loathe the system and succumb to it at the same time. It all makes for an odd scale that, under the veneer of objective numerical measurement, is really just an apples-to-oranges mess. . . .
What about movie critics? It is very common that one critic speaks very highly of a movie while another speaks very poorly of it. I think it is too subjective and has nothing to do with scientific results for a specific wine. I do not agree with the Guardian’s article, because consumers need to see different critics to understand what is available and who thinks what about a specific wine.