My respectful reply to criticism of my reviews

Well, it happened again. A winemaker took umbrage at the scores I gave his wines, and emailed me with all the reasons why I was wrong.

So let me take a few minutes to explain. As I told the winemaker, I never mind it if someone reaches out to me to complain about my reviews. It’s fine to call me. We can agree to disagree, and just because you disagree doesn’t mean you have to be disagreeable. (That goes for me as well as for winemakers.)

That doesn’t mean I like it when I hear from a disappointed winemaker. I’m only human. I’d much rather someone call me up and say, “Way to go, Heimoff!” than “You really got that wrong.” Usually, these unhappy winemakers cite three “facts” to prove I’m wrong. They’ll tell me that the grapes came from a great vineyard, and therefore the wine can’t deserve a middling score. Or they’ll tell me that other critics gave it higher scores than I did, so I must have missed discerning its true qualities. Or they’ll simply cite their sincerity and passion as reasons why their wines should have scored better.

To all of which I say: That’s silly. Just because a wine comes from a “great vineyard” doesn’t mean that it has to be a great wine. We all understand that, don’t we? I should think so. And don’t even get me started on comparing my reviews to those of other critics. That’s fine, if you want to do it, but it carries no weight with me if you point out that ___ and ___ gave your wine 90-plus points while I didn’t. As for the sincerity thing–“We work our tails off to make the best wine we can”–a score in the 80s isn’t meant to suggest that you don’t care, or that you’re not trying hard. I assume that every winemaker in the business is working his or her tail off and trying his or her best. The point is that trying one’s best isn’t good enough. The resulting wine has to deserve a great score.

I guess I should add a fourth “fact” often given to me by unhappy winemakers. They’ll review their own wine, find qualities in it that I didn’t, and hope thereby to persuade me that I somehow missed all that good stuff. Well, I think winemakers are the least objective appraisers of their own wines! They’re like doting parents who can’t bring themselves to see all the qualities–good and bad–in their children. We all know parents like that, don’t we? The same thing goes for dog owners. I know certain dogs that are not very nice animals. They’re angry, they snap at people and other dogs, they bark when there’s no reason to. And in some of these cases, their mommies and daddies are clueless that their pet has an attitude problem. It’s that way with some winemakers, too. Of course they love their product, and it’s only natural they’d be defensive about it when and if it’s criticized. But winemakers also need to stand back and at least try to be objective. If they think highly enough of a critic to be upset if that critic doesn’t fall in love with their wine, then instead of complaining to the critic, they should read his words and try to understand the nature of the criticism. On the other hand, if they think the critic doesn’t have the chops to understand their wine, then why would they care what he says?

In most cases, when I don’t enthuse over a California wine, it’s because it suffers from one or more of the following issues:

1. too sweet in residual sugar

2. too fruity-extracted, i.e., a fruit bomb

3. too soft or, conversely, too tart

4. unbalanced in alcohol. I don’t object to high ABV, in and of itself, but I don’t like a wine that tastes and feels hot, which even some wines in the low to mid-14s do

5. an overall simplicity or one-dimensionality

Notice that I’m not even mentioning true flaws, such as excessive brett, TCA, botrytis moldiness, heat damage, etc. I’m talking about wines that are technically “good” (by Wine Enthusiast standards) and drinkable, but just don’t deserve high scores.

Every winemaker wants those 90-plus scores. Part of me deplores that marketing wine has come down to that. But that is obviously beyond my control. I wish it weren’t so (and I know for a fact that every critic who uses the 100-point system feels the same way). I think we were as surprised as anyone when, in the 1990s and 2000s, the situation reached that point. I myself often drink wines at home that I’ve scored (or would score) in the middle 80s, and I like them. They’re good, sound, interesting wines that just don’t happen to have the extra levels of complexity required to lift a score over 90.

I have the utmost respect for California’s winemakers. I understand their jobs, not in the technical sense perhaps, but in the applied sense of having to sell their products. Some of them don’t have to worry about what people like me think; most of them do. It gives me no pleasure to disappoint them, but that’s my job, just as making wine is theirs. It just doesn’t work to turn out average-quality wines, whatever your excuse, and expect them to get 90 points or higher, especially at high prices. That dog won’t hunt. American consumers have too many choices from around the world these days for that to work anymore.

  1. Bob Henry says:

    Addendum.

    Tom, et al.:

    Persistence pays . . .

    I noted after the fact that only the first page of the Wines & Vines article titled “Glossy Buying Guides: Smoke & Mirrors” is reproduced here:

    http://www.wineforall.com/writing.html

    (Or at least I wasn’t successful in “turning the page” beyond 57.)

    I dug deeper into my “hoary archive” of wine news clippings and found a Word-formatted copy on my laptop.

    Reproducing pertinent excerpts below.

    ~~ Bob

    From Wines & Vines
    (July 2006, page 57ff):

    “Glossy Buying Guides: Getting Past the Smoke & Mirrors”

    [Link: not available]

    By W. R. Tish
    “Off the Vine” Column

    . . .

    Label Reproductions: Follow The Money

    Full-color label reproductions may be the most mysterious and misunderstood aspect of the three main buying guides. Wine Spectator runs full-color labels only for “Spectator Selections.” These wines, deemed to have special qualities or value, are essentially showcased at the very front of the buying guide. Wineries are neither charged nor asked to pay for these labels. All things considered, this practice reflects an effort by WS to provide a colorful shortlist, which, not coincidentally, forms the bulk of the tear-out “shopping list” in each issue.

    By stark contrast, Wine Enthusiast and Wine & Spirits use label reproductions to generate revenue. In WE’s case, the labels are called “paid promotions” at the bottom half-inch of the buying guide’s informational box. W&S makes no reference to the nature of the labels, though editor Josh Greene has informed me that a paragraph explaining that the label reproductions are a form of paid advertising will appear in all future issues of the magazine, as well as on the Web site. In both cases, the magazines offer wineries/importers the opportunity to purchase label insertions after wines have been rated. In terms of how the labels are displayed, Wine & Spirits runs the labels directly adjacent to respective tasting notes, as they appear in the normal flow of the buying guide. Wine Enthusiast apparently gives wineries more for their label money: All label-accompanied wines are showcased — in more than one location in the magazine, as well as in WE’s online database — but without taster’s initials.

    It’s no surprise that many marketers take advantage of the label reproduction opportunities in WE and W&S. Not only do they present a chance to have labels displayed directly adjacent to their (usually high-scoring) reviews, but the cost ($800 in WE and $500 in W&S) is considerably lower than display advertising. Consider the statistics on page 61 from these same two issues of the three major glossies.

    The obvious question here is: Do readers know? Based on my own research, the answer is an overwhelming “No.” I estimate that 95 of 100 shoppers have no clue whatsoever that the labels are anything other than pure editorial endorsement.

    And why should they? Bottle shots in these magazines aren’t ads, are they? Photos of winemakers and vineyards aren’t ads, are they? Graphics in general that appear in editorial stories aren’t ads, are they? Indeed, I have press and trade colleagues who were surprised to learn that the labels in Wine Enthusiast and Wine & Spirits are inserted on a “pay to play” basis.

    Is this a situation of national urgency? Of course not. But the situation, as it stands now, points to a serious disconnect between appearance and reality. What surprises me is not really that Wine Enthusiast and Wine & Spirits sell labels, or that wineries buy them when the score is right. Rather, it is the benign silence of Wine Spectator. It seems that without a prominent disclaimer asserting that label reproductions in its magazine involve no money whatsoever, WS editors run the risk of their policy being misinterpreted once people start to realize that labels in wine magazines are not strictly editorial. And yes, it is only a matter of time before average consumers and active bloggers raise the issue in forums outside the big magazines.

    . . .

    Watch for Part II of my analysis in the September issue of Wines & Vines, when I focus on the current state of the major glossies beyond their buying guides.

  2. Corrie Foos says:

    Hi Steve:

    I have a good story. A Napa wine shop recommended a bottle of syrah, and when I tried it, I found it not to my taste and gave it an 82 rating in my CellarTracker account.

    The winemaker emailed me in an earnest desire to find out why I rated it so low. We had some back and forth, and he offered to provide me with another bottle on the chance that the one I had was corked. I also authorized him to look at my cellar listings in CT, and he came back to say he understood. My cellar is full of fruit bombs, and that’s not the style of his wines, which have less alcohol and less extraction. Though our wine styles don’t coincide, I was extremely impressed with him and his concern for my experience with his wine.

    I have also read Tim Hanni’s book, “Why You Like the Wines You Like.” This gave me a new perspective on ‘right and wrong’ in wines.

    Bottom line: I love fruit forward Napa cabs and Paso GSMs. I like young wines because of the fruit flavor. When a wine is described as delicate, nuanced, and low alcohol, I’m likely not to like it.

    I would therefore read your ratings in the context of your personal tastes, which I think is the right way to read ratings from any source.

    I have learned this from movie reviewers. Some reviewers are almost a ‘contra-indicator’ of my taste…not right or wrong, just different.

    I always look forward to your blogs and comments.

  3. Corrie Foos, thanks for your observations and nice words about my blog. I agree with you: the best critics are consistent–you may not agree with them, but at least you know where they’re coming from!

  4. Bob Henry says:

    Steve,

    Regarding this observation . . .

    “. . . the best critics are consistent–you may not agree with them, but at least you know where they’re coming from!”

    . . . I always thought it was perplexing that for the longest time, Wine Spectator didn’t have “signed” reviews in the buying guide.

    If that was a tacit sign of “reviewing by assembled committee,” then say so.

    Otherwise, let a reviewer take full credit or criticism for his/her words.

    Knowing the preferences and prejudices of any reviewer is essential to putting the words in context.

    See my next (longest) post on Mr. Parker and his preferences — culled from studiously reading his writings (and taking copious notes over the years).

    ~~ Bob

  5. Bob Henry says:

    Excerpts from Robert Parker on How He “Rates” Wines:

    Source: Robert Parker, The Wine Advocate (issue 84, dated 12-11-92):

    “Long-time readers know that I am more critical of older wines than many other writers. To merit high ratings, an older wine must still be fully alive with its personality intact.”

    Source: Robert Parker, The Wine Advocate (issue 90, dated 12-20-93):

    “Readers should recognize that when tasting old bottles the expression, ‘There are no great wines, only great bottles,’ is applicable. . . . Long-time readers have noted that I prefer my wines younger rather than older. Therefore, regardless of its historical significance, no wine which tastes old and decrepit will receive a good review. Those old wines that receive enthusiastic evaluations do so because they remain well-preserved and loaded with remarkable quantities of rich, pure fruit. They possess a freshness, in addition to the profound complexity that developed with significant bottle age. . . . bottles that received perfect or exceptional reviews are living, rich, concentrated, compelling wines that justify the enormous expense and considerable patience collectors invest in maturing the finest young wines from top vintages.”

    Source: Robert Parker, The Wine Advocate (issue 103, dated 2-23-96):

    “Long-time readers know that I am a fruit fanatic, and if a wine does not retain this essential component, it is not going to receive a satisfactory review.”

    Source: Robert Parker, The Wine Advocate (issue 109, dated 6-27-97):

    “The 1990 Le Pin [red Bordeaux, rated 98 points] is a point or two superior to the 1989 [Le Pin, rated 96 points], but at this level of quality comparisons are indeed tedious. Both are exceptional vintages, and the scores could easily be reversed at other tastings.”

    Source: Robert Parker, The Wine Advocate (issue 111, dated 6-27-97):

    “. . . Many of the wines reviewed have been tasted many times, and the score represents a cumulative average of the wine’s performance in tastings to date. Scores, however, do not reveal the important facts about a wine. The written commentary that accompanies the ratings is a better source of information regarding the wine’s style and personality, its relative quality vis-à-vis its peers, and its value and aging potential than any score could ever indicate.”

    Source: Robert Parker, The Wine Advocate (unknown issue from 2002):

    “. . . Readers often wonder what a 100-point score means, and the best answer is that it is pure emotion that makes me give a wine 100 instead of 96, 97, 98 or 99.”

  6. Jason Brandt Lewis says:

    Bob, in the “early days,” the Spec *did* use a tasting panel. It’s only relatively recently that they switched to “initialed” reviews.

    I often disagree with — or rather, “diverge” from — Parker in terms of the wines (or rather, style of wine) I like. But what makes Parker useful is his consistency: when he uses certain words in his reviews, I know it’s a wine for me to avoid (or embrace, depending upon the description).

  7. Jason,

    The concept of an anonymous tasting panel poses these questions: Who serves on it? What are their personal preferences/prejudices?

    Laube and Steiman live on the West Coast. The other writers are based on the East Coast or in Europe.

    The logistics of lining up their in-person appearances at the Spectator’s offices (San Francisco and New York) to power through hundreds of wines are daunting for the tasting coordinators.

    This no doubt contributed to assigning a “beat” to each critic, and to signed reviews.

    ~~ Bob

  8. Jason Brandt Lewis says:

    Bob — again — this goes back many years, when the Spec was based in San Francisco, rather than Napa. I’m talking the 1980s, maybe into the 1990s (I *really* don’t pay that much attention). Members of the panel were named in the magazine, but no single writer/reviewer was credited with writing any specific review. I agree that — especially as the “wine world” expanded — the logistics favor the current “beat” method.

    That said, the “panel method” was also the case at, for example, the now-defunct Wine World magazine, where I was writing. I was responsible for any and all tasting notes that appeared within the articles I wrote, but when I tasted as a part of the Tasting Panel, the notes were gathered and written by (IIRC) the Editor.

    Without trying to sound like a broken record, this was also the case at Connoisseurs’ Guide to California Wine, where — in the old, OLD days — 12 wines (all blind) were sampled from three stations. Each bottle was hidden, and a glass of wine was placed in front of the bottle (so you could smell the wine without any other wines previously in your glass), but you tasted from your own glass. At the end of the evening, you turned in your notes to Charlie Olken and/or Earl Singer, but they wrote the notes.

    In merely the OLD days, this was changed to trying 16 wines a night in two flights of 8. Tasters sat around the table and evaluated the first flight of wines. Then the wines were discussed, rated, and identified. This was followed by a second flight of 8. After all the tasting and discussion was completed, dinner was served, and people had a chance to taste the wines with food, toss in an additional comment or two if desired, and then the notes were turned in.

    I never noticed any great variability in the TNs at CGCW, despite the broad and varied membership of the panel. That said, I agree there is an advantage to having ONE person review the wines . . . as long as that person is consistent!

  9. Bob Henry says:

    Jason,

    The old, OLD days at Wine Spectator . . .

    ~~ Bob

    THE “EVOLUTION” OF THE WINE SPECTATOR SCORING SCALE

    Quoting from Wine Spectator (March 16-31, 1982, page 12):

    “Scoring

    “The Wine Spectator Tasting Panel uses a nine-point tasting scale, first introduced in 1974 by the Oenological and Viticultural Research Institute of South Africa, and modified by researchers at the University of California-Davis.

    “Panelists are required to grade a wine against five sections (unacceptable to superior) and to provide written comments about each wine tasted. The section division is:

    Unacceptable … 1 point
    Average quality with some defects … 2 to 3 points
    Average quality … 4 to 6 points
    Above average quality with some superior qualities … 7 to 8 points
    Superior … 9 points

    “Space is provided on the tasting sheet for panelists to describe appearance, aroma, taste, and to list general comments. Following the scoring, a panel discussion is held on each flight of wines.

    “Total points given for each wine are tallied and an average score calculated. Only the top four wines (or more if ties occur at any or all of the four levels) are reviewed in detail. All other wines are listed only as having been tasted in the flight.”
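
    To make the tallying procedure concrete, here is a minimal sketch in Python of the arithmetic described above (average each wine’s 1-9 panelist scores, then review the top four plus any ties at the cutoff); the wine names, scores, and five-member panel are hypothetical, invented purely for illustration:

        # Sketch of the 1982-era panel tally described above; all data hypothetical.
        panel_scores = {
            "Wine A": [7, 8, 7, 6, 8],  # one 1-9 score per panelist
            "Wine B": [5, 6, 5, 5, 6],
            "Wine C": [8, 9, 8, 8, 9],
            "Wine D": [4, 5, 4, 6, 5],
            "Wine E": [7, 7, 8, 7, 7],
        }

        # Tally: average each wine's scores across the panel.
        averages = {wine: sum(s) / len(s) for wine, s in panel_scores.items()}

        # Only the top four wines are reviewed in detail; ties at the cutoff
        # are all included ("or more if ties occur").
        ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
        cutoff = ranked[min(3, len(ranked) - 1)][1]  # score of the 4th-place wine
        reviewed = [(wine, avg) for wine, avg in ranked if avg >= cutoff]

        for wine, avg in reviewed:
            print(f"{wine}: {avg:.1f} (reviewed in detail)")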

    Further quoting from Wine Spectator (March 16-31, 1982, page 12):

    “Criteria

    “In selecting the wines to be evaluated by The Wine Spectator Tasting Panel, care is taken to be as fair and equitable as possible.

    “We believe that the Wine Spectator Tasting Panel is the most significant series of wine tasting reports available to the consumer. This is why:

    • All flights contain no more than 12 wines. From each flight of wines tasted, only the top four wines are fully rated with scores and condensed tasting notes from the full panel. When ties occur, all wines in the first four places are reviewed fully.

    • All judging panels will normally consist of five or six members, most of whom are selected for their reputations as winemakers, wine merchants, educators, or wine-interested consumers with acknowledged palates. Further, every effort is made to select panel members to taste and rate wine for which they have established a particular knowledge and expertise.

    • All wines are tasted blind against others of like type. No “ringer,” or European-American comparisons are permitted. When a selected wine is available in more than one vintage, a mixing of vintages is allowed.

    • All wines are poured in each flight, then each panelist tastes and rates each wine individually. A panel discussion of each flight of wines is held following the tasting and rating.

    • All wines are rated on a modified UC-Davis nine-point scale, recommended for The Wine Spectator Tasting Panel by Emeritus Professor Maynard Amerine.

    • All wines are purchased at Southern California retail prices and are selected for their general availability in most major U.S. markets. However, because of distribution and pricing variables, availability will vary throughout the country.”

    In 1985, Wine Spectator dropped its 9-point scale and adopted a 100-point scale. One consequence of this change is that the printed 9-point-scale reviews from the late 1970s to the mid-1980s are absent from the biannual “Buyer’s Guide” softcover review books.

    Quoting from Wine Spectator (March 15, 1994, page 90):

    “How We Do the Tastings . . . . Ratings are based on potential quality, on how good the wines will be when they are at their peaks. ….”

    [ Bob Henry’s comment: High numerical scores for most ageable young wines can be misleading, as these wines generally do not reach their peak “potential quality” (sic) until many years after their public release. For example, if a red wine merits a 96-point score reflecting its “potential quality” some ten-plus years into the future, then how do we assign a comparable 100-point “quality” score when the wine is sampled today, in less-than-peak condition? ]

    . . . .

    Quoting from Wine Spectator (March 15, 1994, page 90):

    “About the 100-Point Scale

    “Ratings reflect how highly our tasting panel regards each wine relative to other wines. Although the ratings summarize the wines’ overall quality, read the tasting notes carefully to determine if their style and character appeal to you.

    95 – 100 Classic, a great wine

    90 – 94 Outstanding, a wine with special character and style

    80 – 89 Good to very good, a wine with special qualities

    70 – 79 Average, a drinkable wine that may have minor flaws

    60 – 69 Below average, drinkable but not recommended

    50 – 59 Poor, undrinkable, not recommended”
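
    As a quick reference, here is a minimal Python sketch that transcribes the quoted bands into a lookup; the function name is my own, and the band text is taken verbatim from the scale above:

        def ws_category(score: int) -> str:
            """Map a 100-point score to the band in the 1994 scale quoted above."""
            if score >= 95:
                return "Classic, a great wine"
            if score >= 90:
                return "Outstanding, a wine with special character and style"
            if score >= 80:
                return "Good to very good, a wine with special qualities"
            if score >= 70:
                return "Average, a drinkable wine that may have minor flaws"
            if score >= 60:
                return "Below average, drinkable but not recommended"
            return "Poor, undrinkable, not recommended"

        print(ws_category(92))  # Outstanding, a wine with special character and style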

    Further quoting from the same March 15, 1994 issue in the “Letters” section (page 90):

    “Grading Procedure

    “In Wine Spectator, wines are always rated on a scale of 100. I assume you assign values to certain properties of the wines (aftertaste, tannins for reds, acidity for whites, etc.), and combined they form a total score of 100. An article in Wine Spectator describing your tasting and scoring procedure would be helpful to all of us.

    (Signed)

    Thierry Marc Carriou
    Morgantown, N.Y.”

    “Editor’s note: In brief, our editors do not assign specific values to certain properties of a wine when we score it. We grade it for overall quality as a professor grades an essay test. We look, smell and taste for many different attributes and flaws, then we assign a score based on how much we like the wine overall.”
