There has been some interesting discussion on eBob recently about Wine Advocate #169, where Robert Parker’s new hires first publish recommendations. Apparently Dr. Jay Miller has caused quite a stir with five 100-point wines and a slew of 95-pointers in his initial coverage of Spanish wines. As Dr. Vino pointed out yesterday, does this mean the Wine Advocate is rating its wines the same way residents of the mythical Lake Wobegon judge their children? I don’t think so, but it does bring up some good points about the 100-point scale and how to calibrate your palate to these new critics.
As a recent practitioner of the Parker 100-point system, I can sympathize with Dr. Miller and his new colleagues as they traveled the world to taste hundreds of wines and record their notes and scores. The pressure must have been pretty high as they wrote their impressions of each wine’s color, aromas, flavor, and overall quality/aging potential. What I like about this approach is that each component has its own sub-score, and typically at a large tasting you don’t know a wine’s final score until you are finished and add up these components. The only time you are aware of the final score during a tasting is when you give a wine most or all of the points in each area. In my short experience, this has happened only once, while recently tasting at Pax Cellars. So it’s kind of surprising that five 100-point wines would come out of a few months of tasting, but not unheard of given that very nearly one thousand wines were rated (how he managed to taste that many wines is another story).
But what this really brings to the surface for me is that Dr. Miller’s use of the Parker scale differs from that of others who use it. As with any subjective enterprise, wine rating is an imprecise activity affected by a number of changing conditions: the temperature and lighting of the room, the glassware used, the condition of the bottle, corks, what you had to eat earlier, and so on. All these things affect the result, and any person rating a wine will most likely arrive at slightly different scores for the same wine tasted under different conditions. Hopefully this variation will be less than 5%, so there will not be too much spread in the final scores. Your mileage will almost certainly be different, so you have to experience some of the same wines yourself to get attuned to the reviewer’s preferences.
This also had me pondering what a 100-point wine is in the first place. I believe it is the best wine a taster has had, with no flaws and maximal complexity. I’ve never had a wine I would rate a perfect 100, but I think I will know one when I drink it. Or maybe I won’t, and all my “best” wines will be rated 98 or 99 points. It’s hard to say, really, since I’ve not tasted and rated very many wines.
So the bottom line for me is that I will try to taste some of these same wines and see whether my ratings agree with Dr. Miller’s. Since I expect they will not, I will at least have a rule of thumb to go by when reading his recommendations. It will also have me scrutinizing the written impression of a wine more and looking at the raw score less. That’s probably a good thing in the long run.
Hi Tim,
I think you clearly articulate the challenges inherent in using any type of rating scale (points, stars, etc.), as well as the difficulty of standardizing reviews of any product whose “quality” is based on the tastes of the reviewer. I have always found it difficult to understand the subtle differences between an 89-point wine and a 91-point wine (or even larger gaps at times). These ratings are only part of the whole research process, and they should be viewed as such.
Your post also highlights the important role of the Wine 2.0 sites. For the first time, consumers have the opportunity to share their opinions and research the opinions of a diverse group of individuals. The trick is standardizing those opinions and deriving a meaningful comparison among the many reviewers. At OpenBottles we address this problem by reducing the rating to the lowest common denominator that everyone can relate to: whether the wine tastes good (it is an aesthetic good, after all). Using this measure, we can provide a meaningful representation of the information that is most relevant to consumers. The difference between 89 and 91 is removed, as is the debate about whether the tastes of any one individual are skewed toward specific characteristics.
So the bottom line for me is that the more information consumers have, the better.
Cheers,
Sagi
It’s especially difficult for many of us who may not have the means to dabble in the 95+ range very often. The 100 rating stands out, but honestly I wouldn’t know a 100 from a 99 or a 98 if I tasted it.