As I worked through my analysis of wine scores (see Post #3 and Post #4), I became more interested in the individual reviewers for each platform, and in their biases and predilections. Nothing in the analysis could show what those biases actually are, but I did want to at least see if there was evidence of bias. I can’t say I found bias, but I did find some erratic scoring that has affected my wine buying in negative ways.
Based on the goal of reducing or eliminating the chances of feeling “robbed” (i.e., relying on a high rating for a wine to make a splurge buy, only to be disappointed once I got the wine), I compared reviewers to identify “high outlier” reviews. An outlier is:
- A review in which the reviewer scored a wine 4 or more points higher than a reviewer from the other system scored the same wine. So, if a WS reviewer scored a wine 89 and a WE reviewer scored the same wine 93, the WE review would be flagged as an “outlier” (not the WS review).
I could have defined an outlier as either 4 points low or 4 points high. I based it on high outliers because those are the ones that influence my wine buying most (I think), and because, quite simply, all the commercial pressure is to over-rate rather than under-rate wines. The downside of a low outlier is that I may miss an opportunity to buy a really good wine because the low score put me off it. The downside of a high outlier is that I may buy a wine I’m ultimately disappointed in, and that is the downside I was most worried about.
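For anyone curious how the flagging works mechanically, here is a minimal sketch in pandas. It assumes a table with one row per wine and the WE and WS scores (and reviewer names) already merged in; the column names, reviewer labels, and numbers are illustrative, not my actual dataset.

```python
import pandas as pd

# Hypothetical layout: one row per wine, with the WE and WS scores and
# reviewer names already merged in (names and numbers are made up).
wines = pd.DataFrame({
    "wine":        ["Wine A", "Wine B", "Wine C"],
    "we_score":    [93, 90, 88],
    "we_reviewer": ["WE Reviewer 1", "WE Reviewer 2", "WE Reviewer 1"],
    "ws_score":    [89, 91, 87],
    "ws_reviewer": ["WS Reviewer 1", "WS Reviewer 2", "WS Reviewer 1"],
})

# A review is a "high outlier" when it scores the wine 4 or more points
# above the other system's score for the same wine.
wines["we_outlier"] = wines["we_score"] - wines["ws_score"] >= 4
wines["ws_outlier"] = wines["ws_score"] - wines["we_score"] >= 4

# Tally outlier reviews by reviewer (the basis for the table below).
print(wines.loc[wines["we_outlier"], "we_reviewer"].value_counts())
print(wines.loc[wines["ws_outlier"], "ws_reviewer"].value_counts())
```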
Based on this definition, reviews for 63 of the 231 red wines were outliers. The table below provides a tally of the outlier reviews by reviewer. Of the 63 outliers, 51 were by WE reviewers and 12 by WS reviewers. Matt Kettmann had the highest rate of outlier reviews: over half of his scores were 4 or more points higher than the WS score for the same wine. Jim Gordon and Roger Voss were in the next level down, with about one-third of their reviews being outliers.

For the outlier reviews, I looked at Vivino as a third score source. The table below tabulates alignment with Vivino for the WE and WS reviews. Whichever of the WE or WS scores was closer to the V100 score was counted as “better aligned” for that wine. In general, WS scores aligned better with Vivino: for 55 percent of the 231 wines, the WS score was closer to the V100 score. For the outlier wines, though, the WS score aligned better with the V100 score for 68 percent of the wines. So…for more than two-thirds of the cases where the WE score was 4 or more points higher than the WS score, the V100 score for that wine aligned better with the WS score.
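The alignment check is just a distance comparison. Here is a sketch, again with made-up numbers, that labels each outlier wine by whichever score sits closer to the V100 score (ties are arbitrarily counted toward WS in this sketch).

```python
import pandas as pd

# Outlier wines with WE, WS, and Vivino (V100) scores; values are made up.
outliers = pd.DataFrame({
    "wine":     ["Wine A", "Wine D", "Wine E"],
    "we_score": [93, 94, 92],
    "ws_score": [89, 88, 87],
    "v100":     [90, 92, 88],
})

# Whichever score sits closer to V100 is "better aligned" for that wine;
# ties are counted toward WS here, an arbitrary choice for this sketch.
we_gap = (outliers["we_score"] - outliers["v100"]).abs()
ws_gap = (outliers["ws_score"] - outliers["v100"]).abs()
outliers["better_aligned"] = (we_gap < ws_gap).map({True: "WE", False: "WS"})

# Share of outlier wines where each system aligned better with Vivino.
print(outliers["better_aligned"].value_counts(normalize=True))
```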

Takeaways
- OK, scores…they can be useful. But picking a wine based on score alone is sort of like choosing a mate based on IQ alone. The review is where the reviewer presents the personality of the wine. So I should pay closer attention to the reviews, not just the scores, and see how well my own assessment of a wine matches both.
- Look at more than one review and score for a splurge purchase. Definitely.
- Pay attention to reviewers. I should rely on reviewers whose tastes align with mine, and whose scores match what I might give that wine. I should give a wide berth to reviewers who have steered me wrong in the past.
- For all these takeaways, it is important to keep my own notes on wines, including which reviewer and which score influenced my “buy” decisions. Keeping those notes, and then circling back when I actually try the wine, will help me determine whether that guidance was helpful.
In Post #6 I apply all of these takeaways to myself. I keep notes on the wines I buy, and the ones I drink, but I haven’t done much with them. Time to put those notes to use!

