
Do red wines get higher scores than whites due to “bias”?



Is there “a critical bias toward red wines” among wine critics? That’s the thesis of a thought-provoking study that examined 64,000 scores from leading publications and found some fascinating tendencies:

  • reds score higher than whites
  • red wines are over-represented above 90 points
  • whites are over-represented below 90 points

So pronounced were these findings, the authors write, that, as the score crosses the critical 90-point threshold, “selling price and selling price variation increased quickly…[with some] lower-rated reds costing more than more highly-rated whites.” For example, a 90-point Napa Cabernet might cost $75 whereas a 93-point Chablis might go for $45.

I came across an article about the study at Jeff Siegel’s Wine Curmudgeon blog. (Sorry, I don’t think the full study is available online, although it exists as a PDF.)

Siegel found the results curious: “Something is going on,” he wrote. I agree. But what could it be?

Siegel himself postulated various explanations. Critics may rate red wine higher “because it’s more prestigious.” This leads to a cascade of results: Producers invest more money making red wines than whites “…because consumers are willing to pay for that prestige,” and that greater investment in the production process may result in better wines.

During my decades as a wine critic, I thought about this topic intensely, although I never reached any definitive conclusions. But the tendency is pretty obvious when you consider that, at the leading wine periodicals, there are more (often far more) 100-point scores for reds than for whites. (This was true for me, too. I never gave a perfect score to a white wine.)

Let’s consider the question of bias, or preconceptions. If you know you’re tasting, say, First-Growth Bordeaux or Grand Cru Burgundy or Sauternes for that matter, from a great vintage, you’re more open to the possibility of giving it 100 points than if you’re tasting, say, a Temecula Tempranillo. So, to eliminate that bias, we taste single-blind. But even if you don’t know the individual bottles, if you’re a professional wine critic and your tasting was set up by a staff person, you’re still most likely going to be told the general category. “We’re tasting Premier Cru red Burgundy today from the 2011 vintage,” or “This flight consists of 2013 Napa Valley Cabernets and Bordeaux blends under $40.” Armed with these telltale bits of information, the brain will begin to draw certain conclusions, albeit unconsciously: a below-$40 Napa Cab cannot possibly get 100 points (so the reasoning goes); the best it can aspire to is 96, maybe 97 points, and so that’s what the critic finds when he tastes the wines.

So let’s make the tasting double-blind: nothing is known about the wines except for the color. This is where the bias for red wines (if there is one) comes in. You cannot prevent the critic from knowing the color. (You can always use black glasses, but I know of no critic who routinely uses them in assessing wines.)

The more I think about it, the more I believe there is a bias toward red wines, and I think Siegel stumbled upon the truth. Red wine is perceived as “more prestigious.” To understand why, you have to look at history. The French invented the system of categorizing wines by status (Grand Cru, First Growth and the like), and they tended to reserve their highest categories for red wines. In turn, the British essentially invented the game of writing about and critiquing wine, in the eighteenth and nineteenth centuries, and they overwhelmingly favored French red wines over whites. They therefore gave their highest plaudits to red wines. Our American and British systems of wine reviewing today—from Oz Clarke to Robert Parker—are direct descendants of those British wine writers of yesteryear. The inherent bias toward red wines has filtered down over the centuries and still exists.

Which raises the question: Are red wines actually better than white wines? Well, there is the argument that they’re more complex: more skin and seed contact, more oak (usually, at the high end), and so on. Does more complexity = “better”? That’s a hard case to prove. At some point, what we know, or think we know, about wine gets so inextricably bound up with the pure and simple physical experience of tasting it that it’s impossible to separate the two. Which, come to think of it, is perhaps what makes wine so great: its pleasure is as much intellectual as hedonistic.

If point scores give you reassurance, go ahead and trust them…



…no matter how many articles like this one you read that tell you to ignore them.

Now, the first thing I’m going to tell you is that the author of the article, MJ Skegg, a good writer, got all nine of his bullet points correct! MJ is the wine writer for the Portland, Oregon, Mercury, and yes, he’s right, for the most part, when he makes his accusations against scores:

  1. It’s all subjective
  2. Wine critics are human
  3. The wines start to look the same
  4. Experts are inconsistent
  5. They ignore context
  6. They inflate prices
  7. The scores keep getting bigger
  8. The system is (allegedly) corrupt
  9. They’re prescriptive

I might dispute some of his points a little, and I will in a second; but by and large he’s correct (although he’s not really breaking new ground; other writers and bloggers have made the same points for years). So how come I say that, for all the correctitude of his points, they still are not (as the lawyers say) dispositive?

Because you could say the same things about any system of wine reviewing! Go down the list and substitute any system you want; each of them is capable of being critiqued for all nine of MJ’s reasons. So that means no system is better or worse than any other. You might as well pick and choose the one that works for you. That any system of judgment created by humans is fallible is obvious; that doesn’t mean we should throw the baby out with the bathwater.

Besides, we all know that, of all the reviewing systems in the world, the 100-point system is the most popular. Like the old saying goes, fifty million Frenchmen can’t be wrong. Therefore, if you’re using it (and I do, when I’m looking for a wine), you shouldn’t feel guilty.

I will admit, as I have before, that MJ’s fifth point, “they ignore context,” is true. It’s hard to pack context into a number! However, every point score I’ve ever seen, including my own, also had a text review attached, which is where you’ll find the context. Granted, a 40-word text review isn’t very capacious, and I always found myself wishing I could write 100 words, or even more, for my reviews; one could write a book on some wines. But you have to draw the line someplace. In one of his articles, MJ’s reviews sound just like they came from Wine Enthusiast, only without the number! Not much context there. I also don’t quite “get” the accusation that scores inflate prices. Not sure how that works. Wine prices have been going up (like prices for everything else) since, like, forever. Take a peek at Edmund Penning-Rowsell’s “The Wines of Bordeaux” to track classified-growth Bordeaux prices over the centuries. Robert Parker did not create the demand for the First Growths; it’s been there since before America was a country.

So I would tell consumers, Hell, yeah, MJ’s brief concerning scores is spot-on. But rather than undermining scores, he actually makes the case for them, and for the wine critics who use them. Critics are human, just as MJ points out. They are fallible; they have their foibles; nor are they consistent. But don’t you want a human giving you their take? They, like you, me and MJ, are just out there, doing their jobs. If you find a critic you can relate to, at least you know whom you’re dealing with, as opposed to crowd-sourced reviewing platforms, which are a mobocracy. If Steve Tanzer or Paul Gregutt floats your boat—if you know them (or feel as if you do) through their writings—if you trust them—if you understand that, as MJ implies, point scores are figurative rather than literal, and you know how to use them as part (but not the whole) of your buying decision—if you feel that you can use all the help you can get in making that buying decision (and don’t we all?)—then go right ahead, use point scores. Like I said, when I’m exploring a wine or region I’m not that familiar with, I always turn to my trusted bevy of 100-point-based critics, and I’ve not often been disappointed.

* * *

Sorry for not posting yesterday. I’m in Oregon. These travel days don’t leave a lot of extra time for creative writing, and I don’t want to put up crap.


A Sauvignon Blanc tasting that raises questions about point scores



We had a perfectly lovely blind tasting yesterday: 12 Sauvignon Blancs, six of them from Jackson Family Wines wineries and the others from around the world. It was a bit of a hodgepodge, but I just wanted to assemble a range that showed the extremes of style, from an Old World, low- or no-oak, high-acid, pyrazine-driven tartness to a bigger, richer, riper New World style of [partial] barrel fermentation. Here, briefly, are the results. The entire group of tasters was very close in its conclusions—a highly calibrated group where we achieved near consensus.

My scores:

94 Matanzas Creek 2014 Sauvignon Blanc, Sonoma County

93 Robert Mondavi 2013 To Kalon Vineyard Reserve Fumé Blanc, Napa Valley

93 Matanzas Creek 2013 Journey Sauvignon Blanc, Sonoma County

92 Stonestreet 2013 Alexander Mountain Estate Aurora Point Sauvignon Blanc, Alexander Valley

90 Merry Edwards 2014 Sauvignon Blanc, Russian River Valley

89 Peter Michael 2014 L’Après-Midi Sauvignon Blanc, Knights Valley

88 Jackson Estate 2014 Stitch Sauvignon Blanc (Marlborough) NOTE: This is not a Jackson Family Wine.

87 François Cotat 2014 La Grande Côte, Sancerre

87 Arrowood 2014 Sauvignon Blanc, Alexander Valley

87 Cardinale 2014 Intrada Sauvignon Blanc (Napa Valley)

86 Goisot 2014 Exogyra Virgula Sauvignon Blanc (Saint-Bris)

85 Sattlerhof 2014 Gamlitzer Sauvignon Blanc, Austria

The JFW wines certainly did very well, taking three of the top four places. The surprise was the Matanzas Creek Sonoma County—it’s not one of the winery’s top-tier Sauvignon Blancs (which are Bennett Valley, Helena Bench and Journey) but the basic regional blend. But then, I’ve worked with small lots from all of Matanzas’s vineyards, and I know how good the source fruit is. This is really a delightful wine, and a testament to the fact that great wine doesn’t have to be expensive. It’s also a testament to the art of blending.

But I want to talk about the François Cotat, as it raises important and interesting intellectual considerations.

The Cotat immediately followed the Mondavi To Kalon, always one of my favorite Sauvignon Blancs, and the first thing I wrote, on sniffing it, was “Much leaner.” Of course the alcohol on the Cotat is quite a bit lower, and the acidity much higher: it was certainly an Old World wine. But here was my quandary. In terms of the reviewing system I practiced for a long time, this is not a high-scoring wine; my 87 points, I think, is right on the money. It’s a good wine, in fact a very good wine, but rather austere, delicate and sour (from a California point of view). I could and did appreciate its style, but more than 87 points? I don’t think so.

And yet, I immediately understood what a versatile wine this is. You could drink and enjoy it with almost anything; and I was sure that food would soften and mellow it, making it an ideal companion. Then I thought of a hypothetical 100-point Cabernet Sauvignon that is—let’s face it—a very un-versatile kind of wine. It blows you away with opulence, and deserves its score, by my lights. But the range of foods you can pair it with is comparatively narrow.

So here’s the paradox: The higher-scoring wine is less versatile with food, while the lower-scoring wine provides pleasure with so many foods. It is a puzzle, a conundrum. I don’t think I’m quite ready to drop the 100-point system as my tasting vernacular, but things are becoming a little topsy-turvy in my head.

* * *

While I am affiliated with Jackson Family Wines, the postings on this site are my own and do not necessarily represent the postings, strategies or opinions of Jackson Family Wines.

Another wine-rating system, this time based on 1,000 points



Forget about arguing over the differences between 96 and 97 points. Now we can debate the finer distinctions between a score of 875 and 876. Or 943 and 944. Or 563 and 562. Whaaat?? That’s right. There’s a new wine-rating kid in town, called Wine Lister, and it uses not the familiar 100-point system but a 1,000-point system.

No, this is not The Onion. How’s it work? Well, according to their website, they gather data from multiple sources “to give a truly holistic assessment of each wine,” and the reason for a 1,000-point system is that Wine Lister “can actually differentiate to this level of precision [which protects] the nuance and meticulousness of the exercise.”

Well, yes, I suppose a 1,000-point system can be described as more “nuanced” than a 100-point system. But really, people who believe in score inflation now have a powerful new arrow in their quiver with which to criticize numerical ratings. From their press release, Wine Lister seems to be using only three critics at this point: Jancis Robinson, Antonio Galloni and Bettane+Desseauve (a French-based operation with a Wine-Searcher-style website).

At first consideration, the notion of a 1,000-point system sounds dubious. It does present us defenders of the 100-point scale with a certain conundrum: after all, if the 100-point system is good, then a 1,000-point system has to be better, right? Maybe even ten times better. Of course, this can lead to a logical absurdity: How about a 10,000-point system? A million-point system? You see the problem.
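To make the precision question concrete, here is a minimal sketch, assuming a simple linear rescaling from 100 points to 1,000. The mapping is my own illustration, not Wine Lister’s published methodology:

```python
# Purely illustrative: assume a 100-point score is mapped linearly
# onto a 1,000-point scale. This mapping is an assumption for the
# sake of argument, not Wine Lister's actual formula.

def rescale_to_1000(score_100: float) -> int:
    """Map a 100-point score onto a 1,000-point scale."""
    return round(score_100 * 10)

# Two scores a critic can barely tell apart land ten "points" apart:
print(rescale_to_1000(96))  # 960
print(rescale_to_1000(97))  # 970

# The extra digit carries real information only if a taster can
# reliably distinguish differences of a tenth of a point.
```

In other words, the finer scale doesn’t create precision; it just displays whatever precision (or imprecision) the underlying judgments already have.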

Of more interest to me than how many points the best system ought to have are the larger questions concerning the need for a new rating system, and the entrepreneurial bet Wine Lister’s owners are making in launching one at this time. Consumers already have many, many wine rating and reviewing sources to turn to, both online and in print. They don’t seem to be demanding yet another one. Why does Wine Lister feel its time has come?

Well, maybe it has. Any startup is a gamble, and in the entrepreneurial world of wine reviewing, which seems to be undergoing tumultuous changes, anyone can be a winner. Antonio Galloni took a huge gamble when he quit Wine Advocate to launch Vinous, which has turned out to be a huge success. Will Wine Lister be? I don’t know, but it has good credentials. What it has to prove is that it’s more than a simple compilation of Jancis-Antonio-Bettane+Desseauve reviews. They’re also factoring in Wine-Searcher data, and there’s even an auction-value component (although most consumers won’t care about that). But beyond being a “hub of information” (from the press release), I think Wine Lister’s limitation is that wine consumers seem to want a personal connection to the recommender they listen to, which an algorithm cannot provide. I could be wrong. I’ll be following them on Twitter @Wine_Lister and we’ll see what happens.

* * *

While I am affiliated with Jackson Family Wines, the postings on this site are my own and do not necessarily represent the postings, strategies or opinions of Jackson Family Wines.

Scores, stores and wineries: a new analysis



Every day, I get blast email advertisements from wineries or wine stores touting the latest 90-plus point score from Suckling, Parker, Vinous or some other esteemed critic. Here’s an example that came in on Saturday: I’m reproducing everything except the actual winery/wine.

_____ Winery’s ____ Napa Red Wine 2013 Rated 92JS.

Notice how the “92JS” is printed in the same font type and size as the name of the winery and wine. That assigns them equal importance; the rating and critic are virtually part of the brand. Later in the ad, they have the full “James Suckling Review” followed by a full “Wine Spectator Review” [of 90 points]. This is followed by the winery’s own “Wine Tasting Notes,” which by and large echo Spectator’s and Suckling’s descriptions.

Built along similar lines was a recent email ad for a certain Brunello: The headline was “2011 ____ Brunello di Montalcino DOCG”; immediately beneath it, in slightly smaller point size, was “94 Points Vinous / Antonio Galloni.”

We can see that, in these headlines and sub-heads, through physical proximity on the page or screen, the ads’ creators have linked the name of the winery and the wine to the name of the famous critic and his point score. One of the central tenets of advertising is to get the most important part of the message across immediately and strongly. (This is why so many T.V. commercials begin with the advertiser’s name—you hear and see it before you can change the channel or click the “mute” button.) In like fashion, most of us will quickly read a headline (even if we don’t want to) before skipping the rest of the ad. The headline thus stays in the brain: “Winery” “Wine Critic” “90-plus point score.” That’s really all the winery or wine store wants you to retain. They don’t expect you to read the entire ad, or to immediately buy the wine based on the headline. They do expect that the “Winery” “Wine Critic” “90-plus point score” information will stay embedded in your brain cells, which will make you more likely to buy the wine the next time you’re looking for something, or at least to have a favorable view of it.

This reliance of wineries and wine stores on famous critics’ reviews and scores is as strong as ever. There has been a well-publicized revolt against it by sommeliers and bloggers, but their resistance has all the power of a wet noodle. You might as well thrash against the storm; it does no good. The dominance of the famous wine critic is so ensconced in this country (and throughout large parts of Asia) that it shows no signs of being undermined anytime soon. You can regret it; you can rant against it; you can list all the reasons why it’s unhealthy, but you can’t change the facts.

Wineries are complicit in this phenomenon; they are co-dependents in this 12-Step addiction to critics. Wineries, of course, live and die by the same sword: A bad review is not helpful, but wineries will never publish a bad review. They assume (rightly) that bad reviews will quickly be swept away by the never-ending tsunami of information swamping consumers.

Which brings us back to 90-point scores. They’re everywhere. You can call it score inflation, you can argue that winemaking quality is higher, or that vintages are better, but for whatever reason, 90-plus points is more common than ever. Ninety is the new 87. Wineries love a score of 90, but I’ve heard that sometimes they’re disappointed they didn’t get 93, 94 or higher. Even 95 points has been lessened by its ubiquity.

Hosemaster lampooned this, likening 100-point scores to Oprah Winfrey giving out cars to the studio audience on her T.V. show. (“You get a car! And you get a car! And you get a car! And YOU get a car! Everybody gets a car!”) Why does this sort of thing happen? Enquiring minds want to know. In legalese, one must ask, “Cui bono?”—Who benefits? In Oprah’s case, she’s not paying for the cars herself; they’re provided by the manufacturers, who presumably take a tax writeoff. It’s a win-win-win situation for Oprah, the automakers and the audience.

Cui bono when it comes to high scores? The wineries, of course, and the wine stores that sell their wines (and put together the email blast advertisements). And what of the critics?

Step into the tall weeds with me, reader. A wine critic who gives a wine a high score gets something no money can buy: exposure. His name goes out on all those email blast advertisements (and other forms of marketing). That name is seen by tens of thousands of people, thereby making the famous wine critic more famous than ever. Just as the wine is linked to the critic in the headline, the critic’s name is linked to the 90-plus wine; both are meta-branded. (It’s the same thing as when politicians running for public office vie for the endorsement of famous Hollywood stars, rock stars and sports figures: the halo effect of fame and glamor by association.) There is therefore a motive on the part of critics to amplify their point scores.

But motive alone neither proves a case nor makes anyone guilty. We cannot impute venality to this current rash of high scores; we can merely take note of it. Notice also that the high scores are coming from older critics. Palates do, in fact, change over the years. Perhaps there’s something about a mature palate that is easier to please than a beginner’s palate. Perhaps older critics aren’t as angry, fussy or nit-picky about wine as younger ones, or as ambitious. They’re more apt to look for sheer pleasure and less apt to look for the slightest perceived imperfection. With age comes mellowness; mellowness is more likely to smile upon the world than to criticize it.

Anyhow, it is passing strange to see how intertwined the worlds of wineries, wine stores and wine critics have become. Like triple stars caught in each other’s orbits, they gyre and gimble in the wabe, in a weird but strangely fascinating pas de trois that, for the moment at least, shows no signs of abating.

The Critic vs. the Computer: A case of perceptual discrepancy



Did you know that I prefer organic wines to non-organic wines? I didn’t, either. But then I read this new paper from the American Association of Wine Economists, entitled “Does Organic Wine Taste Better? An Analysis of Experts’ Ratings,” and I found out that, yup, I do.

Well, kinda sorta. See, the paper’s authors decided to study “data from the three influential wine expert publications: Wine Advocate, Wine Enthusiast, and Wine Spectator,” and as it turned out, “During our period of study [74,148 wines produced in California between 1998 and 2009], the main tasters for California wines for Wine Advocate, Wine Enthusiast and Wine Spectator were Robert Parker, Steve Heimoff, and James Laube, respectively.”

The big P-H-L! They took our scores, crunched them in that esoteric way only economists can, and lo and behold, “Our results indicate that the adoption of wine eco-certification has a statistically significant and positive effect on wine ratings.”

How much? Not a lot: “Being eco-certified,” the authors found, “increases the score of the wine by 0.46 point on average.”

Well, one hardly knows where to begin. Right off the bat, I have a problem if the lesson people take away is that P-H-L (and, by extension, major critics) prefer organic wines to non-organic ones. Less than half a point of difference? I suppose if they fed 74,148 scores into a computer and found a 0.46-point difference, then who am I to argue with HAL? But a 0.46-point difference doesn’t seem like very much to me. It’s not even round-uppable to the higher score (87.46 rounds down to 87).
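For what it’s worth, the rounding arithmetic is easy to check. This is just my own back-of-the-envelope illustration, with a hypothetical base score:

```python
# A 0.46-point average bump rarely changes the integer score a
# reader actually sees. The base score here is hypothetical.
base_score = 87.0   # hypothetical score for a conventional wine
eco_bonus = 0.46    # average effect reported by the paper

print(base_score + eco_bonus)         # 87.46
print(round(base_score + eco_bonus))  # 87 -- same displayed score
```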

But wait, there’s more. The following factors also had an impact on the scores of organically-certified wines, according to the paper:

  • “a 1% increase in the number of cases will decrease score by 0.003.”
  • “An increase in the number of years of certification experience by one [winery] decreases score by 0.09 point.”

Confused? I am. So the more cases of wine the winery produces, the lower the score; but the longer the winery has been certified organic, the lower the score, too!
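To put rough magnitudes on those two coefficients, here’s a quick sketch. The winery scenario is hypothetical, and treating the effects as linear over big changes is a simplification of the paper’s model:

```python
# Coefficients as reported by the paper; the scenario is invented.
cases_effect = -0.003      # score change per 1% increase in cases
cert_years_effect = -0.09  # score change per extra year certified

# A hypothetical winery that doubles production (+100%) and has been
# certified for ten years would, extrapolating linearly, lose:
change = 100 * cases_effect + 10 * cert_years_effect
print(round(change, 2))  # -1.2 points
```

Which is to say: by these numbers, the certification-experience effect dwarfs the production effect, and both push scores down. Make of that what you will.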

How about the winemaker’s hair color? Did they include that?

The authors also counted the number of words in each review and found this: “Next, we examine the impact that eco-certification has on the number of words used in wine notes. As shown in regression (1) of Table 6, wine notes of eco-certified wines are not significantly longer than those of conventional wines. However, as shown in regressions (2) and (3), eco-certification increases the average number of positive words by 0.4 but has no statistically significant impact on the number of negative words.”

My interpretation of this is that it’s gibberish. The authors compiled a list of words [Table 7] but I don’t understand how they infer whether their use is positive or negative. Is “jammy” positive or negative? Do Parker, Laube and I even use it in the same way? How about “offbeat”? Is that good or bad? And “peat”: if I tasted that in an Islay Scotch it would be good, but in a Chardonnay?

The authors also state something that I don’t think is objectively true, or, even if it is, is irrelevant. “Second, as a related point, wine experts have a better knowledge about wine eco-certification and are able to differentiate between different types of eco-labels, namely organic wine and wine made with organically grown grapes, which represent different wine production processes with different impacts on quality.”

I’m not going to sit here and tell you I know the difference between different types of eco-labels. There are so damn many (different certifying agencies, “natural,” biodynamic, etc.), I get confused—and, while I’ll let Parker and Laube speak for themselves, I bet they get confused, too. Besides, if “All the publications claim blind review,” as the paper’s authors write, then we critics don’t even see the labels when we’re tasting and reviewing (much less would we have a tech sheet in front of us).

But finally, this statistic seems to me to be the last nail in the coffin of the study: “On average, 1.1% of the wines in the sample are eco-certified.” By my calculations, that’s a little over 800 wines—out of 74,148. I fail to see how you can extrapolate any useful information from such a small sample, compared to the huge number of wines in the study. Apples and oranges.
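The back-of-the-envelope count behind “a little over 800” is simple; here’s my arithmetic, using the figures from the paper:

```python
# Figures from the paper; the arithmetic is mine.
total_wines = 74_148
eco_share = 0.011  # 1.1% of the sample is eco-certified

print(round(total_wines * eco_share))  # 816 -- a little over 800
```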

I’m no economist, it goes without saying. If I were, I guess I’d spend my days crunching numbers and coming up with interesting factoids. But I have to say, I don’t see the point of this particular study—not if it’s going to be used to make a claim that I don’t regard as true. For the record, let me say that I do not think organic wine is better. And you know what? I don’t care what the numbers say.
