In this seasonal series, the good people of Crossfader detail what they want pop culture to get them for Crossmas this year. This time around, it’s . . .
To Stop Focusing on Review Scores
2017: The year all our heroes failed us. I emerged relatively unscathed; none of my heroes were revealed to be sexual predators or harassers or anything of the sort. The only one who did anything that got under my skin was The Rock Critic, a burly St. Louis YouTuber who has introduced me to some awesome music over the years and whom I’ve always respected even when I’ve disagreed with him. His crime? Participating in the annual “let’s cherry-pick random Pitchfork scores we disagree with” circlejerk, a ritual I hope we discard in 2018, along with the broader habit of hand-wringing over review scores while ignoring the actual review.
I’m focusing on Pitchfork’s scoring system because Pitchfork is the biggest online music site, my focus is on music, and their system is especially baffling. Systems built on intervals of 1 or .5, whether out of 5 or out of 10, are the norm for a reason: the gap between grades is large enough that each grade implies something distinct. We can generally agree on what a 1 out of 5 or a 5 out of 5 indicates. But what does a 5.5 out of 10 indicate that a 5.6 out of 10 does not? The .1 intervals Pitchfork uses are too small for the difference between grades to mean anything, and they come off as arbitrary.
[Image: sad fans trying to appeal to Pitchfork’s emotional deadness]
I can’t say for certain which score set off the circlejerk, but my money is on the review of the newest Blink-182 record. It’s a clear case of another problem with scoring systems: a review can do a poor job of justifying or explaining the score attached to it. While there’s an occasional jab at CALIFORNIA’s familiarity and obsession with the titular state, the review paints the record as likeable and full of conviction, which is about all that pop-punk can aspire to be. Pitchfork tends to treat its 8s as excellent, but even on that curve the review reads like a low 7 or a high 6, not a 5.5. I’d understand if people were upset that the score doesn’t reflect the review, but that didn’t seem to be the complaint, and I don’t know how anyone who liked CALIFORNIA could read the review itself and scoff at it. Furthermore, it’s not like CALIFORNIA was universally beloved or anything; I wasn’t a fan at all, though my opinion is colored by the fact that I’ve never been a Blink fan.
Two of the other rather infamous reviews that got brought up were Tool’s LATERALUS and The Mars Volta’s FRANCES THE MUTE. Saying that Pitchfork gave one of the most acclaimed metal records of the decade a 1.9 is a little dishonest, given that the review is clearly a joke aimed at pissing off cultish Tool fans. It’s not a consistent joke, but it’s a joke nonetheless, and it’s not the only negative LATERALUS review out there by any means. Furthermore, I saw some people claim the Mars Volta review only said the album was “too long,” but reading the review makes it clear the author expects a lot of his prog rock, given a negative reference to Dream Theater, a mixed opinion of the band’s previous record, and his opening monologue against the genre and indie rock. Once again, the review is quite negative, but I think it’s well-explained, if a little kinder than the score indicates. For comparison, check out one of the most negative reviews Crossfader has ever run, of an album I truly loved: Ghost’s MELIORA. Much like the Mars Volta review, it creates a picture of the genre, the album’s place in it, and what the reviewer wants out of his prog rock or metal. Proposing alternatives and pointing to other acts you think succeed at what the band is attempting is always preferable to a tirade of pure negativity.
[Image: driving a car with a sack over your head, something Carter would prefer to relistening to Ghost]
Crossfader’s system is not perfect, but it boils scoring systems down to their simplest form: do we recommend this or not? For some readers, that’s all they ever want out of a review, and it’s hard to extract from a number. Every writer at a site slants the same system a little bit, and claiming one review represents the opinions of everyone there is hyperbolic. Getting mad at overly dramatic scores is understandable, but the days of Pitchfork writing intentionally inflammatory scores seem to be a thing of the past; getting mad at those was kind of playing into Pitchfork’s hands anyway. Some worry that negative reviews will immediately turn listeners away, depriving them of something magical, but that’s ultimately on the listener who sees one negative thing associated with a record and never gives it a listen themselves. I love Rise Against, and that love isn’t affected by Pitchfork trashing their third record; if someone does not listen to SIREN SONG OF THE COUNTER CULTURE because of that score, that’s on them, not Pitchfork. My opinion of a record can certainly shift in response to a contradictory opinion or score, but mostly a review helps me see the point of view others take when listening to the same music as me. A score offers none of that perspective; it compresses a reviewer’s past listening experiences, with the band and with other music, into a single data point that tells you very little.
This was a year when a 7.0 review of BREATH OF THE WILD triggered death threats against the reviewer, provoked not by the way the review was written but by the final score alone. Getting angry at the end result while ignoring the method is self-defeating and not conducive to expanding your own critical eye. The best critics are the ones with the best methods, not the ones whose conclusions most resemble yours. Reviews can certainly be flawed, and taking issue with them is fine, but let’s all agree that in 2018, when we disagree with a score, we actually read the review and try to understand where the reviewer is coming from, rather than just getting angry about it.