Why Do Critics and Audiences Often Disagree on Movie Ratings?

Jake Morrison
26 Min Read

You finish a film you genuinely loved. You pull up Rotten Tomatoes expecting confirmation, and the Tomatometer sits at 38%. Or the opposite happens: a film critics hailed as a masterpiece puts you to sleep by the second act. This gap between critics’ and audience ratings is not a glitch in the system. It is the system working exactly as it was built to work, by different people, for different purposes.

This article breaks down why those numbers so rarely align. Not to tell you which side is right, but to help you understand what each score measures, where the disagreement comes from, and how to use both readings together when deciding what to watch next.

The answer involves how rating platforms are structured, how professional critics are trained, how marketing shapes expectations, and why cultural identity plays a bigger role in film reception than most people realise.

What the Ratings Gap Actually Looks Like

Before getting into why the divide exists, it helps to see how wide it actually gets. These are not small, rounding-error differences. Some of the most discussed films in recent memory show gaps of 40, 50, even 60 percentage points between critics’ and audiences’ scores on Rotten Tomatoes.

“Suicide Squad” (2016) scored 26% with critics but 65% with audiences. “Mother!” (2017), Darren Aronofsky’s psychological horror film, earned 69% from critics and a rare F from CinemaScore audiences. “Top Gun: Maverick” (2022) landed at 96% with critics and 99% with audiences, one of the few cases where both groups landed in near-identical territory. These examples sit at different ends of the spectrum, but each one tells you something specific about the nature of the divide.

Films Critics Loved That Audiences Rejected

“Birdman” won the Academy Award for Best Picture in 2015 and sits at 91% on the Tomatometer. Its audience score is a noticeably lower 79%. “The Tree of Life” by Terrence Malick scores 84% with critics; audiences gave it 60%. Paul Schrader’s “First Reformed” earned 96% critical approval and a 60% audience score.

The pattern across these films is consistent. Each one prioritises formal ambition over conventional storytelling. They are slow, often deliberately difficult, and structured in ways that reward viewers who bring film knowledge into the screening. For a casual viewer expecting a clear narrative arc, the experience can feel frustrating rather than rewarding.

Films Audiences Loved That Critics Dismissed

The reverse pattern is just as reliable. “Venom” (2018) scored 30% with critics and 81% with audiences. The first “Transformers” film (2007) sits at 57% critically but earned 70% from audiences. “Uncharted” (2022) scored 40% with critics and 90% with audiences.

These films are built for entertainment delivery, not artistic statement. They offer spectacle, familiar characters, and the satisfaction of watching a story resolve cleanly. Critics tend to measure them against what cinema can achieve at its best. Audiences measure them against whether a Friday night at the cinema felt worth the ticket price. Those are genuinely different tests.

How Movie Rating Systems Are Actually Built

A significant portion of the confusion around movie ratings differences comes from treating scores on different platforms as if they measure the same thing. They do not. Rotten Tomatoes, Metacritic, IMDb, and CinemaScore each use a different methodology, a different voter pool, and a different definition of what a score is supposed to represent.

Understanding those differences does not just explain the gap between critics and audiences. It changes how you read any film score you encounter.

Rotten Tomatoes divides critic reviews into two binary categories: fresh or rotten. A film is fresh if a critic gives it a passing grade, rotten if they do not. The Tomatometer percentage is the share of fresh reviews, not an average of scores. The Audience Score works differently: it averages star ratings submitted by registered users who mark themselves as having seen the film.

Metacritic converts critic reviews into a 0-100 numerical score and averages them, weighting scores from major publications more heavily. IMDb averages ratings from 1 to 10 submitted by any registered user. CinemaScore surveys actual cinema audiences on opening night using a grading scale from A+ to F.

Why a 60% on Rotten Tomatoes Means Something Different to a Critic

The binary model creates results that look straightforward but can be deeply misleading. Consider two films. Film A receives a 6/10 from every critic who reviews it. Under the Tomatometer system, it scores 100% fresh because every review passes the threshold. Film B receives a 10/10 from half its critics and a 4/10 from the other half. That scores 50% fresh because half the reviews fall below the threshold.

By the Tomatometer, Film A looks like a unanimous hit, and Film B looks like a disappointment. In terms of actual critical enthusiasm, Film B generated stronger reactions in both directions, while Film A generated mild consensus. The score does not tell you that.
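To make the arithmetic concrete, here is a minimal sketch in Python that recomputes the Film A and Film B example under both models: a Tomatometer-style share of passing reviews, and a plain average of the same scores. The 6/10 passing threshold is an assumption for illustration; in practice the fresh-or-rotten call rests with each critic or publication.

```python
# Two hypothetical films, each reviewed by ten critics on a 10-point scale.
film_a = [6] * 10             # every critic gives 6/10
film_b = [10] * 5 + [4] * 5   # half give 10/10, half give 4/10

def tomatometer(scores, passing=6):
    """Share of reviews at or above the passing grade, as a percentage."""
    fresh = sum(1 for s in scores if s >= passing)
    return 100 * fresh / len(scores)

def average(scores):
    """Plain mean of the same scores on the 10-point scale."""
    return sum(scores) / len(scores)

for name, scores in [("Film A", film_a), ("Film B", film_b)]:
    print(f"{name}: {tomatometer(scores):.0f}% fresh, {average(scores):.1f}/10 average")

# Output:
# Film A: 100% fresh, 6.0/10 average
# Film B: 50% fresh, 7.0/10 average
```

Notice that Film B ends up with the higher average despite the far lower Tomatometer. That is the distortion the binary model hides.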

This is why comparing a Tomatometer percentage to an IMDb average or a Metacritic score as if they are equivalent is a mistake many film discussions make repeatedly.

How Audience Score Systems Can Be Gamed or Skewed

Audience scores carry a different kind of distortion: they can be targeted by organised campaigns. Review bombing refers to coordinated efforts by online communities to drive scores up or down based on reasons that have nothing to do with the film’s quality.

“Captain Marvel” (2019) and “The Last Jedi” (2017) both experienced documented review-bombing campaigns before significant numbers of the general public had seen either film. Rotten Tomatoes responded by removing the pre-release “Want to See” score and later introducing verified ratings, which tie the default Audience Score to reviewers who can confirm they bought a ticket.

Even without deliberate manipulation, self-selection bias shapes audience scores. The people who vote on film review platforms are not a random cross-section of everyone who saw a film. They are the subset motivated enough to rate it, which skews toward people with strong feelings in either direction.

The Core Reasons Critics and Audiences Disagree on Movie Ratings

Strip away the platform mechanics and the rating manipulation, and the disagreement comes down to something more fundamental. Critics and general audiences are not watching the same film in the same way. They bring different tools, different expectations, and different definitions of what a film is supposed to do.

Critics Watch Films as a Profession, Not a Pastime

A working critic does not come to a film as a blank slate. They come with years of viewing history, formal or self-directed training in film theory, and a mental library of hundreds or thousands of films they are unconsciously referencing during every screening.

When a critic describes a film as derivative, they often mean something specific: this film borrows heavily from an identifiable earlier work, and does not add enough to justify the comparison. A viewer who has not seen the source material experiences the same film as entirely original. Neither is wrong about what they watched. They are working from different archives.

Critics also attend screenings as a professional obligation. That changes the relationship to the experience. They take notes. They watch for structure. They assess performances against a career’s worth of reference points. That professional distance is not a flaw, but it does mean they are measuring something different from a viewer who simply wants to know whether they enjoyed themselves.

Audiences Prioritise Entertainment, Critics Prioritise Craft

This is the core tension, stated plainly. A general audience member walks into a film asking one essential question: Will this be worth my time? They want to feel something, be entertained, follow a story that engages them, and leave the cinema satisfied.

A critic walks in asking a different set of questions. Is the screenplay structured with purpose? Does the cinematography serve the story or distract from it? Is the pacing consistent with the film’s intentions? Are the performances calibrated, or are they performances designed to look like performances?

These questions produce different verdicts. A film can score high on entertainment and low on craft. It can score high on craft and be almost unwatchable for a viewer who came for a good time. The disagreement between critics and audiences is often just this tension made visible in a number.

How Genre Shapes the Divide Between Critic Reviews and Audience Scores

Not all genres produce the same size of ratings gap. Some types of film are almost guaranteed to split critics and audiences, while others tend to produce more consistent reactions across both groups. Understanding which genres create predictable divides helps explain why the same gap keeps appearing across decades of movie reviews.

Why Horror Films Are Judged by Different Rules

Horror is one of the most consistently divided genres in terms of critic vs audience reception. Critics assess a horror film through the lens of craft: how does the director build tension? What does the film say beneath its surface? Is the fear earned through atmosphere and character, or manufactured through loud audio cues and sudden cuts?

Audiences assess horror through a simpler measure: was I scared? Did the film deliver the feeling I came for?

“Hereditary” (2018) scored 89% with critics and 65% with audiences. It is a masterfully constructed film by almost any formal standard. It is also slow, psychologically dense, and builds toward a conclusion that rewards patience over immediate gratification. For viewers who came expecting conventional horror delivery, the experience was frequently described as frustrating. For critics, those same qualities were exactly the point.

Why Superhero and Action Films Split Scores So Consistently

Blockbuster action films and superhero properties are designed around a specific promise: large-scale spectacle, familiar characters, and emotional payoff delivered reliably. They are built to satisfy, and they usually do. Audiences respond to that satisfaction in their scores.

Critics apply a different filter. When a film uses the same structural beats as the twenty films before it in the same franchise, critics flag the repetition as a creative limitation, not a feature. From a craft perspective, spectacle does not compensate for formula.

This explains why a film like “Avengers: Endgame” earned 94% from critics, who recognised its genuine achievements within the genre, while so many mid-tier superhero films sit in the 50-65% critical range even as audiences score them 20 to 30 points higher. The audience got what they came for. The critics wanted the films to reach further.

The Role of Marketing, Hype, and Expectations in Movie Ratings Differences

A film score is never generated in a vacuum. It is always measured against what the viewer expected before the credits rolled. And those expectations are largely built by marketing, which means studios have indirect but significant influence over how audiences score films, even after release.

How Pre-Release Hype Inflates Audience Scores

Opening weekend audiences are not a representative sample of everyone who will ever see the film. They are, by definition, the people most excited to see it: fans of the franchise, followers of the director or cast, people who watched the trailer multiple times. Their expectations are already elevated, and their emotional investment makes them more likely to leave satisfied.

This is why opening-weekend audience scores for heavily marketed films tend to run higher than scores recorded two or three weeks later, once the broader and less invested audience has watched it, often at home through a streaming release. The score stabilises downward as the sample becomes more representative. The initial number, the one that gets quoted in headlines, is almost always the most optimistic reading.

Critics, who typically screen films before the marketing campaign reaches full saturation, are less influenced by the hype cycle. They encounter the film closer to its actual content than to its promotional promise.

How Disappointment Crashes Audience Ratings

The inverse of hype inflation is hype disappointment. When a marketing campaign makes promises the film does not keep, the gap between expectation and delivery produces backlash that shows up directly in audience scores.

“Indiana Jones and the Dial of Destiny” (2023) and “Lightyear” (2022) both experienced notable audience score drops relative to their opening weekend numbers as the broader viewing public formed their verdicts. The trailers had built expectations that the films could not consistently meet for every viewer.

This does not mean the films were bad in any objective sense. It means the distance between the marketed promise and the experience was wide enough for a significant portion of the audience to register disappointment in their score. That disappointment is real data, but it is data about expectation management as much as film quality.

Cultural Background and Identity Shape How Films Are Rated

Film criticism in English-language markets has historically been conducted by a relatively narrow demographic: predominantly white, Western, and trained in a specific tradition of European and American cinema. General audiences are considerably more varied. That demographic gap is another structural reason why critic scores and audience scores diverge on certain films.

When Representation Changes What a Film Means to an Audience

“Black Panther” (2018) earned 96% from critics and 79% from audiences, which looks like a modest gap. But the film’s cultural significance to Black audiences specifically went far beyond what any numeric score could capture. For viewers who had rarely seen themselves represented at the centre of a major studio blockbuster, the experience carried weight that altered the meaning of the film beyond its technical execution.

“Crazy Rich Asians” (2018) and “Coco” (2017) produced similar dynamics. Critics assessed craft, originality, and execution. For audiences whose own identities and communities were reflected on screen, those categories were only part of the equation. The personal significance of representation shapes scores in ways that craft-focused critical frameworks do not account for.

Neither group is applying the wrong standard. They are answering different questions about the same film.

International Audiences and the Limits of Western Critical Standards

A Korean film reviewed primarily by English-language critics trained in Western film grammar is being assessed through a framework that was not built for it. Story structure, pacing conventions, emotional expression styles, and comedic timing all differ across film traditions. What reads as slow or tonally inconsistent within a Western critical framework may be precisely calibrated within the tradition it comes from.

“Parasite” won the Palme d’Or and the Academy Award for Best Picture while also scoring 99% with critics and 90% with audiences, a rare convergence. But it is the exception. Many South Korean, Japanese, and Indian films reviewed by Western critics show notable divergences from the scores given by audiences embedded in those cinematic traditions, a reminder that critical standards carry cultural assumptions that do not always travel cleanly across borders.

Can Critics and Audiences Ever Actually Agree?

Given how many structural and psychological forces push critics and audiences toward different verdicts, the more interesting question is not why they disagree, but when they manage to agree. Films that score well with both groups are not common, but they do exist, and they tend to share recognisable qualities.

The Films That Score Well With Both Groups and Why

“Everything Everywhere All at Once” (2022) scored 95% with critics and 88% with audiences. “The Dark Knight” (2008) sits at 94% critically and 94% with audiences. “Parasite” earned 99% and 90%, respectively. These are not random outcomes.

Each of these films combines genuine formal ambition with emotional accessibility. They are technically precise in ways critics can trace and discuss, but they also tell stories that land with feeling, forward momentum, and characters an audience can invest in without a film studies background. That combination is difficult to achieve, which is why the films that manage it tend to be discussed for years after release.

What “Certified Fresh” With High Audience Scores Actually Signals

When a film carries both a high Tomatometer score and a high audience score, it usually means one of two things. Either the film genuinely achieved something across multiple dimensions of quality, or it was crafted so precisely to appeal to both groups that it sits in a kind of middle ground, admirable without being challenging.

Films in the first category tend to be the ones remembered as generation-defining. Films in the second tend to be forgotten within a few years, remembered as polished and competent but not essential. You are better placed than any algorithm to decide which category a film falls into once you have seen it yourself.

Should You Trust Critic Reviews or Audience Scores When Choosing What to Watch?

This is the question that makes all the analysis above practical. You are standing in front of a streaming library with limited time. You have a score from critics and a score from audiences. Which one should you follow?

The honest answer is that neither score should be followed blindly, but each one tells you something specific, and knowing what that something is makes you a better reader of both.

When to Follow the Critics

Critic scores are most reliable when you are looking for a film that rewards close attention. If you want to explore a director’s career, move into a genre you are unfamiliar with, or find a film that offers something beyond pure entertainment delivery, a high critical score is a meaningful signal.

A film that scores 85% or higher critically has been assessed by people who watch films as a professional discipline. Their endorsement means the film achieves something technically or thematically that stands up to close examination. It does not guarantee you will enjoy it. But it does mean the film is likely doing something with purpose.

When to Follow the Audience Score

Audience scores are most reliable when your priority is enjoyment, rewatchability, or finding something that works for a group with different viewing preferences. A film that scores 80% or higher with audiences has satisfied a wide range of viewers, including people who were not necessarily predisposed to love it.

For family viewing, genre comfort picks, or films you want to watch without thinking too hard after a long week, a strong audience score is often the more relevant data point. The audience was not grading on a craft rubric. They were answering the question you are probably asking: Is this worth my evening?

Using both scores together, understanding what each one measures, is the most informed approach. A film with 40% from critics and 85% from audiences is almost certainly a crowd-pleasing genre film that the critical establishment found formulaic. A film with 90% from critics and 55% from audiences is likely ambitious, challenging, and potentially very rewarding if you are in the right frame of mind for it.
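As a rough illustration of that reading, here is a small rule of thumb expressed in Python. The cut-offs and labels are assumptions chosen to mirror the examples above, not anything the rating platforms publish, so treat it as a starting point rather than a verdict.

```python
def read_the_gap(critic_score, audience_score):
    """Illustrative rule of thumb for interpreting a critic/audience split.

    Both inputs are percentages (0-100). The thresholds are arbitrary
    assumptions that mirror the examples discussed in the article.
    """
    gap = critic_score - audience_score
    if critic_score >= 80 and audience_score >= 80:
        return "broad agreement: likely both well crafted and accessible"
    if gap <= -30:
        return "crowd-pleaser: audiences satisfied, critics saw formula"
    if gap >= 30:
        return "critics' pick: ambitious and demanding, rewarding in the right mood"
    return "modest split: read a few reviews before deciding"

print(read_the_gap(40, 85))   # crowd-pleaser
print(read_the_gap(90, 55))   # critics' pick
print(read_the_gap(96, 99))   # broad agreement
```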

Conclusion

The gap between how critics and general audiences rate films is not a problem waiting to be solved. It reflects two genuinely different relationships with cinema, two different sets of questions being asked of the same two hours, and two different definitions of what makes a film worth watching.

Critics bring trained attention, historical context, and a professional obligation to assess craft. Audiences bring personal investment, emotional honesty, and the fundamental question of whether the experience was worth their time. Both perspectives are legitimate. Neither is complete on its own.

When you understand what critics vs audience ratings are actually measuring, both scores become more useful, not less. You stop expecting them to agree and start using the gap between them as information. A wide gap tells you something specific about a film’s relationship to genre conventions, audience expectations, and critical standards. A narrow gap, especially when both scores are high, tells you something else.

The most informed film viewers are not the ones who trust critics over audiences or audiences over critics. They are the ones who understand what each score means before they decide what to watch. That understanding is what we hope this article has given you. If you want to put it to use immediately, check out our guide to the best underrated movies on streaming right now, films where the numbers tell an especially interesting story.

Jake is a film critic and pop culture writer who has been covering movies, music, and streaming for over a decade. He has strong opinions and backs them up. Whether it's a deep read on a classic film or a quick take on what's worth watching this weekend, his writing respects the reader's time.