Metacritic

Metacritic is a problem; few will attempt to deny that. Far too often — which is to say, ever — publishers rely on it as something more than a potentially accurate snapshot of a game’s critical reception. Gamers sometimes look to it as either a definitive statement on whether a game is good or bad, or as a means for pointing out how a review is ‘wrong.’ To say Metacritic is outright ruining the industry would, in my opinion, be a stretch, but it clearly is not doing it any good.

For the uninitiated, Metacritic is a reviews aggregator. It collects reviews of videogames, movies, TV shows, and music albums from a variety of publications, presenting a ‘Metascore’ for each title. This is a weighted average of all the review scores the site tracks, meaning certain publications’ reviews have more impact on the Metascore than others. It’s problematic enough when scores are the only thing readers look at, rather than the text that accompanies them, but Metacritic breaks the opinions conveyed in dozens of reviews down into a single number that readers and game publishers alike often look to when discussing the merits of a game.
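To be clear about what that means in practice, here is a minimal sketch of how a weighted average works. Metacritic does not publish its weights or which outlets it favors, so every outlet name, score, and weight below is an assumption for illustration only, not the site’s real formula.

```python
# Minimal sketch of a weighted average. Metacritic's actual weights are
# secret, so the outlets, scores, and weights here are invented for
# illustration -- this is not the site's real formula.
reviews = [
    {"outlet": "Big Outlet",   "score": 90, "weight": 1.5},  # assumed heavier weight
    {"outlet": "Mid Outlet",   "score": 70, "weight": 1.0},
    {"outlet": "Small Outlet", "score": 40, "weight": 0.5},  # assumed lighter weight
]

metascore = sum(r["score"] * r["weight"] for r in reviews) / sum(r["weight"] for r in reviews)
print(round(metascore))  # 75 -- three very different verdicts collapsed into one number
```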

The scores used to calculate the Metascore have issues before they are even averaged. Metacritic operates on a 0-100 scale. While it’s simple to convert some scores into this scale (if it’s necessary at all), others are not so easy. 1UP, for example, uses letter grades. The manner in which these scores should be converted into Metacritic scores is a matter of some debate; Metacritic says a B- is equal to a 67 because the grades A+ through F- have to be mapped to the full range of its scale, when in reality most people would view a B- as being more positive than a 67. This also doesn’t account for the different interpretations of scores that outlets have — some treat 7 as an average score, which I see as a problem in and of itself, while others see 5 as average. Trying to compensate for these variations is a nigh-impossible task and, lest we forget, Metacritic will assign scores to reviews that do not provide them.
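To put the conversion problem in concrete terms, here is a rough sketch of what ‘stretching’ letter grades across the full 0-100 scale looks like. The grade list and the even spacing are my own assumptions rather than Metacritic’s published table — which is rather the point: the resulting numbers depend entirely on choices the reader never sees.

```python
# Rough sketch of mapping letter grades onto a 0-100 scale. The grade list
# and the even spacing are assumptions for illustration, not Metacritic's
# actual conversion table.
grades = ["A+", "A", "A-", "B+", "B", "B-",
          "C+", "C", "C-", "D+", "D", "D-", "F"]

step = 100 / (len(grades) - 1)  # spread the grades evenly from 100 down to 0
converted = {g: round(100 - i * step) for i, g in enumerate(grades)}

print(converted["B-"])  # 58 under this mapping -- well below how most
                        # readers would instinctively read a B-
print(converted["C"])   # 42, even though many outlets treat a C as 'average'
```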

Anyone who reviews videogames — or any form of entertainment, really — will tell you the score is but one part of the puzzle; in some cases, it’s looked upon as a necessary evil, as certain outlets’ experiments with ditching scores altogether have been deemed failures. While some might look to the score to get a quick impression of how the reviewer in question felt about a game, reading the text is almost always a critical step in understanding what he or she thought of the game. Two examples that immediately jump to mind: Frank Cifaldi gave Monkey Island 2: Special Edition a C on 1UP, not because the game itself is anything short of terrific, but because the Special Edition package itself was terribly lacking; and Jim Sterling of Destructoid famously gave Deadly Premonition a 10 despite its many flaws. If you looked only at the scores, which Metacritic encourages people to do even if it does include links to the full reviews, you might look at these two games much differently than if you had read the accompanying reviews.

The act of simplifying reviews into a single Metascore also feeds into a misconception some hold about reviews. If you browse the comments of a review anywhere on the web (particularly those of especially big games), you’re likely to come across people criticizing the reviewer for his or her take on a game. People seem to mistake reviews for something that should be ‘objective.’ “Stop giving your opinion and tell us about the game” is a notion you’ll see expressed from time to time, as if it is the job of a reviewer to go down a list of items that need to be addressed — objectively! — and nothing else. In reality, aside from commenting on certain aspects — whether or not a game’s framerate holds up, or if its online setup is prone to disconnects — a review is typically an inherently subjective piece of criticism, not an objective determination of whether a game is worth your money. Reviews tell you what a particular person thinks of a game, which is why you’ll see some publications provide multiple takes (and even multiple scores) on the same game: not everyone agrees on what’s good and what isn’t, nor should they. Providing people with an average score to point to when a review deviates from the pack is not helpful to anyone.

(As an aside, the existence of Metascores doesn’t help the situation where readers become angry with reviewers because a score does not meet their pre-existing notion of what score a game should have received — even if they’ve never played it themselves. To be fair, this isn’t a game-specific phenomenon; just look at how the positive reviews of The Dark Knight Rises on Rotten Tomatoes have only a handful of comments whereas the few reviews labeled as ‘rotten’ have hundreds.)

Fallout: New Vegas

Keeping this in mind, it doesn’t make sense to average review scores together. Even if all reviewers were to use an identical 100-point scale, what purpose does it serve to give us an average? As noted by IGN’s Keza MacDonald, “A Metacritic average undermines the whole concept of what a review is supposed to be: an experienced critic’s informed and entertaining opinion. Instead it turns reviews into a crowd-sourced number, an average. You can’t average out opinions.” And she’s right: if one person scores a game a 10 and another a 4, telling us its Metascore is 7 accomplishes nothing.

This might all sound unimportant in terms of how it affects the industry, but as noted above, it’s not just gamers who look at Metacritic. Publishers do, too, and in some cases they rely on these scores too heavily, as evidenced by well-publicized stories about bonuses being tied to Metacritic. Most famously, Obsidian’s Chris Avellone revealed on Twitter earlier this year that the developer missed out on receiving a bonus for its work on Fallout: New Vegas, which it was not entitled to royalties on, because it failed to reach the required Metascore. An 85 or higher was required to receive the bonus; the game ended up at 84. Some might argue developers should never agree to such terms in the first place, but they may not always have the leverage needed to get their way. And don’t mistake the number of times this arrangement has been revealed publicly for the number of bonuses tied to Metascores — it happens more often than you might think.

While it’s difficult to blame publishers for wanting review scores to be high, as they can have an impact on sales, the practice of tying bonuses directly to something like a Metascore is flat-out wrong. And it’s something that should concern more than just the developers whose livelihoods are dependent upon the way these averages are calculated. Contracts with these sorts of incentives put pressure on developers to design games in a way that is conducive to receiving higher review scores, rather than merely creating the game they want and that gamers will enjoy. This isn’t some conspiracy theory, either, as then-EA Sports president Peter Moore acknowledged the possibility in 2010. “You can break Metacritic down and say, ‘We can get two extra points by doing this,’ but it may not actually enhance the gamers’ experience, and that is where there is a line we have to be careful we don’t cross. It is a bit of a slippery slope if you focus everything on Metacritic.” Even if EA Sports avoided designing games in this fashion under Moore’s watch, it doesn’t mean developers throughout the industry resist the urge to do what’s necessary to stay in business.

The knowledge that their scores could have a direct impact on developers getting paid also puts pressure on reviewers that did not previously exist. Sure, reviews could have influenced gamers’ decisions to purchase a game, but that is an indirect effect. Now their scores could potentially have a direct correlation with bonuses being paid out, and while one hopes reviewers are not affected by this (and among those I know, I don’t believe they are), ideally it would not even be a concern in the first place.

Metascores aren’t confined to Metacritic, either; you’ll find them in the right column of game pages on Steam alongside the game’s genre, release date, and features. Even having said all of this, I’d be lying if I claimed my impression of an unfamiliar game on Steam isn’t colored to some degree when I see a Metascore below, say, 75. Yet it would be a mistake to write off games simply because of what their average review score is; if I did that I never would have touched Costume Quest, Mercenaries 2, The Simpsons Game, Ticket to Ride, or Earth Defense Force 2017, all of which I enjoyed despite all five falling into the 69-74 Metascore range.

Solving this problem seems simple, although that doesn’t mean a solution is realistic. Unfortunately, the popularity of sites like Metacritic and Rotten Tomatoes makes it unlikely that we’ll see Metascores going away anytime soon. I do, however, hope developers push back and publishers realize the fallacy of putting so much weight on a number determined by a secret formula that may or may not be representative of what critics actually think of a game.

By Chris Pereira