
On the “subjective vs. objective” tasting front, what have we learned?



Heavy philosophical opining over at Jamie Goode’s blog the other day. Jamie sat down with “academic philosopher” Professor Barry Smith to talk about the philosophical aspects of wine tasting and specifically about “objectivity and subjectivity,” an old and slippery topic that will never be fully resolved, I think, because the question itself is misleading (more on this later).

The Professor did raise an interesting point: He said “all the great wine critics…say…taste is subjective,” but then these same critics “tell you which vintage is better…and which domain is better” and so, the Professor concludes, “They don’t really believe [tasting] is entirely subjective” because, if it is, then they should not be able to state so definitively (so “normatively” in Smith’s words) that something is better than something else, “normative” being a philosophical term implying the existence of objective standards or “norms.”

Well, the Prof does seem to have identified a paradox. How can tasting be subjective if the taster is giving normative judgments on things? But here’s the problem. No wine critic I’ve ever heard of has said that tasting is just a bunch of random subjectivity; I certainly never did. Let me explain why this whole thing of “objective or subjective” is misleading.

Some pronouncements are objectively true. If I say “Two plus two equals four,” that is fundamentally objective, at least in the Universe we inhabit. If I say “Lafite is more expensive than Two Buck Chuck” that is also objectively true.

With judging wine, though, things get more complicated. Consider: Let’s say we expose three different critics to a single wine, blind, and each reacts differently (as is to be expected). That can’t be explained by the wine: It is what it is—its chemical composition is the same for each of the critics. Therefore the difference is in the critics’ perceptions of the wine. The professor understands this conundrum (which is relativistic): The wine’s chemical properties are absolutely objective (i.e. they exist in the real world and can be measured), and yet the critics’ reactions are absolutely subjective. How are we to make sense of this paradox?

Here’s where the Professor introduces a novel solution: “an intermediate level…in between the chemistry and the variable perceptions.” What is this “intermediate level”? The Professor says it’s “flavour.” “Flavours are emergent properties; they depend on but are not reducible to the chemistry.”

Confused? Me too. I reread this part of the Professor’s answer a couple times and have to say I never did fully grasp it, perhaps because the Professor didn’t make himself clear (it wouldn’t be the first time a highly-trained academic found himself unable to express his theories in plain English). As near as I can tell, this “intermediate level” would form a bridge of sorts between the strictly objective chemistry of the wine (which we all acknowledge exists, independent of our personal reactions to it) and the subjective, personal impression the wine makes on us.

I think this is overthinking things. It has a bit of the “How many angels can dance on the head of a pin?” quality of Talmudic disputation: argument for the sake of argument. Just because you can arrange words so that they take the form of a question doesn’t mean the question makes sense; yet too much of our discourse rests on the premise that, if I can ask it and make it sound like a real question, then it must have a real answer. It doesn’t.

Look, wine tasting shouldn’t be this complicated; it doesn’t require the skills of an epistemologist. A majority of professional wine tasters will usually agree on the more salient or obvious aspects of a wine—that it’s sweet, for example, or that it has heavy brettanomyces (or that it’s sparkling, for that matter). It’s in the subtler realms that disagreement sets in (is the wine just a bit reduced? Is it too old? Over-oaked? Tannins too rough?). We should not expect agreement on such subtleties among wine critics, whose palates after all are not laboratory devices but flesh and blood; but that doesn’t mean that wine tasting is either totally objective (it isn’t) or totally subjective (if it were, we wouldn’t have broad agreement on those salient aspects of taste).

To expect total agreement is to rest one’s thinking on several illusions: (a) that wine tasting is a scientific pursuit (it has elements of science but is not in itself scientific), (b) that the taster will be consistent over time concerning the same wine (she will not be, which the Professor also discerns when he implies a “temporal dimension” to flavor), and (c) moving well beyond wine, that there is such a thing as an “objective reality” that all humans perceive in the same way.

Yes…and no. Again, it’s the difference between the more salient aspects and the subtler ones. All humans will agree that the Sun rises in the East (if you disagree, then you’re nuts), but not all witnesses to a hit-and-run will agree that the car that struck the pedestrian was blue. The former (the Sun rising) is a salient perception; the latter (the car’s color) is more subject to differing perceptions. When it comes to such subtleties, humans will always disagree; critics certainly will about wines. That makes life more complicated, and frustrating, and uncertain; but also more interesting, and it forces us, in the end, to arrive at our own conclusions.

  1. Steve,

    I’m afraid you’ve misunderstood the question Barry Smith is asking, a question that your distinction between “salient” vs. “subtle” features of a wine will not answer. The question is whether wine flavors are “in the mind” or “in the wine.” His answer is neither. There is a third entity, “flavor,” constructed out of emergent properties caused by the wine chemistry but not reducible to wine chemistry since it requires the input of mental processing, that must be posited to make sense of wine tasting. Your distinction doesn’t answer the question because the skeptic about wine tasting can simply claim, with regard to subtle features of wine, that critics are just making stuff up, imagining features that aren’t there since they are not explainable in terms of chemical properties. Yet that is hardly a response that you should welcome.

    Take balance, for instance. There is no chemical account of balance in a wine. Yet it is not purely subjective or imagined–it is in part caused by chemical properties in the wine. Hence the intermediate level Smith argues for. (Emergent properties are not mysterious–they are widely accepted as real among scientists who think about such matters.) Judgments about balance cannot be explained by a model that assumes subjectivity and objectivity are the only options.

    This is hardly a trivial matter since the vindication of your career depends on being able to answer such questions. The anti-intellectualism in your post is a bit disturbing.

  2. “Consider: Let’s say we expose three different critics to a single wine, blind, and each reacts differently (as is to be expected). That can’t be explained by the wine: It is what it is — its chemical composition is the same for each of the critics. Therefore the difference is in the critics’ perceptions of the wine.”

    Each critic is tasting a different wine, because each critic is tasting from three different bottles.

    If we control the wine’s bottling to a single barrel, the variable of cork taint or premature cork failure arises — even within the same 12 bottle case.

    If we control the wine’s bottling to a single barrel, the variable of different stemware arises. (Impitoyable? INAO? Riedel? Others?)

    If we control the wine’s bottling to a single barrel, the variable of which scoring scale is embraced arises. (20 point? 100 point? 5 stars? 3 “puffs”? Siskel and Ebert-like binary “thumbs up/thumbs down”? Others?)

    No pour of wine in any one glass is the same across all three critics. Each pour of wine is a distinctly different sensory experience.

    See my “part two” comment on a Wine Spectator-organized comparative tasting that did control for all those variables: California wine critic James Laube sitting down with European wine critic James Suckling, tasked with scoring/place ranking a selection of 1985 and 1990 vintage California Cabs/Cab-blends and red Bordeaux.

  3. In an article titled “The Cabernet Challenge,” Wine Spectator (September 15, 1996, pp. 32–48) had their two lead red wine critics, James Laube and James Suckling, compare and contrast, from the same bottle in real time, various 1985 and 1990 vintage California Cabs/Cab-blends and red Bordeaux.

    And the results after controlling for all the identified variables above?

    Two divergent numerical scores and place-order rankings, for wines tasted out of the same bottle, using the same stemware, in the same shared room, in “real time.”

    With score differences of upwards of 9 points on their 100 point scale.

    First example: the two critics comparing 1990 Chateau Margaux . . .

    JAMES LAUBE: “A tight, hard-edged and unyielding young wine. Some cedar and currant flavors attempt a coup on the finish, but they’re tightly wrapped in tannin. 86 points.” [20th place personal ranking in the comparative 1990 California versus Bordeaux tasting.]

    JAMES SUCKLING: “Slightly dumb now. Ripe, almost raisiny aromas and flavors that develop a minty, menthol accent. Full-bodied and rich with loads of tannins. Needs time. Better after 2005. 90 points.” [10th place personal ranking in the comparative 1990 California versus Bordeaux tasting.]

    The magazine “officially” awarded the wine a 96-point “classic” score (March 31, 1993 issue), invoking these words:

    “A seductive, tantalizing wine with gorgeous aromas and flavors of tobacco, cedar, berry and cassis, superb soft tannins and a long, long finish. Drink after 1998.”

    Second example: the two critics comparing 1990 Beringer “Reserve” Cabernet . . .

    JAMES LAUBE: “Dense and massive, but for all its weight and intensity it delivers a rich, ripe mouthful of currant, cherry, plum, anise and cedary, toasty oak flavors. With its impressive length, depth and concentration, this wine should age with ease for another decade. 98 points.” [1st place personal ranking in the comparative 1990 California versus Bordeaux tasting.]

    JAMES SUCKLING: “Smashes you over the head with masses of fruit and full tannins. Full-bodied, with a long finish. A little tiring to taste, even more so to drink! Better after 2005. 89 points.” [15th place personal ranking in the comparative 1990 California versus Bordeaux tasting.]

    (Bob’s aside: I don’t have at my immediate fingertips the magazine’s “official” awarded score.)

    Third example: the two critics comparing 1985 Lynch-Bages . . .

    JAMES LAUBE: “Classic Bordeaux from the first cheesy, cedary whiff – an aroma rarely duplicated by California Cabernets. Drinks better, with currant and anise notes, and earthy, funky flavors, but struggles to maintain focus. Tannins still a bit raw. Taste several times, with consistent notes. 87 points.” [14th place personal ranking in the comparative 1985 California versus Bordeaux tasting.]

    JAMES SUCKLING: “Our wine of the year in 1988 and still well worth it. The first bottle was slightly cheesy but the second one was superb, showing outstanding ripe berry, cherry and currant flavors and layers of silky fine tannins. Sexy and exciting. Drink now or hold. 95 points.” [2nd place personal ranking in the comparative 1985 California versus Bordeaux tasting.]

    (Bob’s aside: the wine’s “cheesy” aroma and flavor were also commented upon by Robert Parker in his contemporary review, who attributed it to brett. I don’t have at my immediate fingertips Wine Spectator’s “official” awarded score.)

    Self-evidently, Laube and Suckling “agree to disagree” on the relative scores and ranking comparisons of the wines.

    This is the most rigorous side-by-side taste test I have ever seen conducted by Wine Spectator.

    And given the published results that call into question the “repeat-ability” of 100 point scale scores/rankings, it doesn’t surprise me that Wine Spectator has never replicated this comparative tasting format.

  4. Bob Henry says:

    The Wine Spectator experience reported on above gives credence to Caltech professor Leonard Mlodinow’s assertions in this “op-ed” piece on the lack of “repeat-ability” when scoring wines using a 100-point wine scale.

    Excerpts from The Wall Street Journal “Weekend” Section
    (November 20, 2009, Page W6):

    “A Hint of Hype, A Taste of Illusion;
    They pour, sip and, with passion and snobbery, glorify or doom wines.
    But studies say the wine-rating system is badly flawed.
    How the experts fare against a coin toss.”


    Essay by Leonard Mlodinow

    [… teaches randomness at Caltech. His book titled “The Drunkard’s Walk: How Randomness Rules Our Lives” includes a chapter on the fallacy of wine scoring scales]

    . . .

    Given the high price of wine and the enormous number of choices, a system in which industry experts comb through the forest of wines, judge them, and offer consumers the meaningful shortcut of medals and ratings makes sense.

    But what if the successive judgments of the same wine, by the same wine expert, vary so widely that the ratings and medals on which wines base their reputations are merely a powerful illusion? That is the conclusion reached in two recent papers in the “Journal of Wine Economics.”

    Both articles were authored by the same man, a unique blend of winemaker, scientist and statistician. The unlikely revolutionary is a soft-spoken fellow named Robert Hodgson, a retired professor who taught statistics at Humboldt State University. Since 1976, Mr. Hodgson has also been the proprietor of Fieldbrook Winery, a small operation that puts out about 10 wines each year, selling 1,500 cases.

    A few years ago, Mr. Hodgson began wondering how wines, such as his own, can win a gold medal at one competition, and “end up in the pooper” at others. He decided to take a course in wine judging, and met G.M. “Pooch” Pucilowski, chief judge at the California State Fair wine competition, North America’s oldest and most prestigious. Mr. Hodgson joined the Wine Competition’s advisory board, and eventually “begged” to run a CONTROLLED SCIENTIFIC STUDY of the tastings, conducted in the same manner as the real-world tastings. The board agreed, but expected the results to be kept confidential.

    . . .

    In his first study, each year, for four years, Mr. Hodgson served actual panels of California State Fair Wine Competition judges — some 70 judges each year — about 100 wines over a two-day period. He employed the same blind tasting process as the actual competition. In Mr. Hodgson’s study, however, every wine was presented to each judge THREE different times, each time drawn from the SAME bottle.

    The results astonished Mr. Hodgson. The judges’ wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100. A wine rated 91 on one tasting would often be rated an 87 or 95 on the next. Some of the judges did much worse, and only about one in 10 regularly rated the same wine within a range of ±2 points.

    Mr. Hodgson also found that the judges whose ratings were most consistent in any given year landed in the middle of the pack in other years, suggesting that their consistent performance that year had simply been due to chance.
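    The repeat-ability finding above lends itself to a toy simulation. This is my own illustrative sketch, not Hodgson’s actual data or method: it assumes each blind rating is the wine’s “true” score plus normally distributed tasting noise, with a noise level (standard deviation of about 2.5 points) chosen so the triplicate spread lands near the reported ±4 points.

```python
import random
import statistics

random.seed(1)

# Hypothetical simulation (not Hodgson's data): a judge's perceived
# score for the same wine is its "true" score plus Gaussian noise.
TRUE_SCORE = 91   # assumed true quality on the 80-100 scale
NOISE_SD = 2.5    # assumed per-tasting noise, in points
N_JUDGES = 70     # roughly the panel size in the study

def judge_triplicate():
    """Three blind ratings of the same wine by one judge."""
    return [round(random.gauss(TRUE_SCORE, NOISE_SD)) for _ in range(3)]

spreads = []      # max minus min of each judge's three ratings
consistent = 0    # judges whose three ratings fit in a +/-2 band
for _ in range(N_JUDGES):
    ratings = judge_triplicate()
    spread = max(ratings) - min(ratings)
    spreads.append(spread)
    if spread <= 4:  # all three ratings within +/-2 of their midpoint
        consistent += 1

print("median spread across 3 tastings:", statistics.median(spreads))
print("judges within +/-2 points:", consistent, "of", N_JUDGES)
```

    With noise of this size, most simulated judges miss the ±2 band, which is the shape of Hodgson’s result, though the exact fractions here depend entirely on the assumed noise level.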

    . . .

    This September, Mr. Hodgson dropped his other bombshell. This time, from a private newsletter called The California Grapevine, he obtained the complete records of wine competitions, listing not only which wines won medals, but which did not. Mr. Hodgson told me that when he started playing with the data he “noticed that the probability that a wine which won a gold medal in one competition would win nothing in others was high.” The medals seemed to be spread around at random, with each wine having about a 9% chance of winning a gold medal in any given competition.

    To test that idea, Mr. Hodgson restricted his attention to wines entering a certain number of competitions, say five. Then he made a bar graph of the number of wines winning 0, 1, 2, etc. gold medals in those competitions. The graph was nearly identical to the one you’d get if you simply made five flips of a coin weighted to land on heads with a probability of 9%. The distribution of medals, he wrote, “mirrors what might be expected should a gold medal be awarded by chance alone.”
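    Hodgson’s chance-alone benchmark is just a binomial distribution. A short sketch (my own, plugging in the 9% gold rate and five competitions reported above) of the medal counts expected by pure chance:

```python
from math import comb

# Null model from the excerpt: each entry has ~9% chance of a gold
# medal in any competition, independently across competitions.
p = 0.09   # per-competition gold-medal probability
n = 5      # number of competitions entered

# Binomial probability of winning exactly k golds in n competitions.
probabilities = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

for k, prob in enumerate(probabilities):
    print(f"{k} golds: {prob:.1%} of wines expected by chance")
```

    Under this model most wines win no gold at all and almost none win more than two, which is the bar-graph shape Hodgson compared against the real competition records.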

    [Bob Henry’s aside: Link to study: http[colon]//www[dot]wine-economics[dot]org/journal/content/Volume4/number1/Full%20Texts/1_wine%20economics_vol%204_1_Robert%20Hodgson[dot]pdf.]

    Mr. Hodgson’s work was publicly dismissed as an absurdity by one wine expert, and “hogwash” by another. But among wine makers, the reaction was different. “I’m not surprised,” said Bob Cabral, wine maker at critically acclaimed Williams-Selyem Winery in Sonoma County. In Mr. Cabral’s view, wine ratings are influenced by uncontrolled factors such as the time of day, the number of hours since the taster last ate and the other wines in the lineup. He also says critics taste too many wines in too short a time. As a result, he says, “I would expect a taster’s rating of the same wine to vary by at least three, four, five points from tasting to tasting.”

    . . .

  5. redmond barry says:

    The philosopher is full of brett.
    A “flavor” (not “FLAVOUR,” which might be a Platonic form, if there were such things) might be one of a number of qualia, if we knew what those were.
    The number of angels that can dance on the head of a pin, which is more a Scholastic than Talmudic question, is: all of them.

  6. redmond barry says:

    The main difficulty with wine tasting as it is usually conducted is the same as tasting pastrami without rye. One can make an organoleptic evaluation of several Oakville bench Cabs with slices of bread, but until they have been compared accompanied by a double pork chop brined and seared medium rare, with hashed browned potatoes and haricots verts sautéed with almond slices one doesn’t really know much. I’ve just described what might be called an emergent property.

  7. Postscript.

    Underscoring Humboldt State University professor emeritus Robert Hodgson’s research on the California State Fair Wine Competition judges . . .

    “The judges’ wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100.”

    . . . let me quote this Steve blog:

    “Thoughts on block bottlings of Pinot Noir and Chardonnay in California”

    From the post dated April 11, 2011:

    “(When wines are 3 or 4 points apart, their relative standings can easily switch, given the vagaries of time and bottle variation.)”
