Quantifying Film Reviews

Thanks to an email tip, I just took a quick look at “Do experts and novices evaluate movies the same way?” (also via BPS Research Digest), a study conducted by Jonathan A. Plucker, James C. Kaufman, Jason S. Temple, and Meihua Qian, and published in the journal Psychology and Marketing as part of a special issue devoted to the theme of marketing movies. The authors explore differences in how professional critics, “amateur” online critics, and general “novice” audiences evaluate films, seeking to determine which groups tend to give movies the highest ratings.

The authors focused their research on films that opened widely, that is, on 1,000 or more screens, which raises some potentially thorny problems I’d like to address later. The sources for their samples of reviews made a certain amount of sense. Critics’ reviews were culled from the numerical rankings on Metacritic.com, while the ratings from amateur critics were taken from IMDB discussion boards. Finally, the “novice” moviegoer numbers were taken from student surveys, a sample that also introduces some complications, given that students may be more generous than older audiences (a point the authors acknowledge in their discussion of the project’s “limitations”). A bigger definitional problem, for me, is that users of IMDB discussion boards are self-selecting in a way that might bias them in favor of positive reviews. I would imagine (though I could be wrong) that reviewers writing for a personal blog might be more critical of mainstream films, in particular, than user-generated reviewers on IMDB, especially given IMDB’s bias toward newer films and a “popular canon.”

To be fair, the authors are attentive to the fact that the timing of reviews matters considerably. Critics’ reviews typically appear before amateur and novice reviewers have a chance to see a film, and students completing an anonymous survey might respond differently than they would if their reviews were more public. Further, to give Plucker et al. credit, they recognize that their categories are not mutually exclusive but instead represent a continuum, one that is increasingly complicated by the rise of film criticism appearing in a variety of internet publications.

Given the sample the authors chose, it is probably no surprise that they discovered that professional critics tend to offer the lowest ratings while novice moviegoers rank films more highly. By focusing on films that open on more than 1,000 screens, the study excludes a number of critically acclaimed films, such as Million Dollar Baby or Juno, that deliberately use slow roll-outs in order to build positive word-of-mouth (or that target adult audiences who are less wedded to seeing films on opening night). I’m not suggesting that critic and novice rankings would have been reversed for these two films, but by placing too much emphasis on heavily marketed, high-concept films that open widely, we may lose some subtleties about how different audiences evaluate a film. Specificity matters quite a bit here, and it would be worth exploring distinctions among individual films: how do critical evaluations of a plucky indie film compare to those of bigger-budget films? By not naming a single movie title, the authors streamline what is often a much more volatile process. We also lose quite a bit when it comes to relative reach: Roger Ebert and Manohla Dargis will always have a wider audience than I do as a mostly amateur blogger, not to mention greater access to the film industry itself.

Another concern I have is the way they reduce reviews to their numeric rankings. Most, though not all, of the critics I read eschew numeric or starred ratings, and my decision about whether or not to see a certain movie can depend on any number of factors that have little to do with who rates a film highly (in Fayetteville, quite a bit depends on what’s available at any given time). That said, I think they’re probably right to suggest that these numbers can help guide the practices of marketers as they seek out the “tastemakers” who might champion certain films. Their conclusions also seem to imply that sites such as FlickTweets, which compile film reviews posted to Twitter, may actually help expand positive buzz for a given film. More than anything, though, a closer look at specific cases would probably tell us more about how these rankings evolve and how “amateur” critics may review films differently than their professional peers.

2 Comments »

  1. McChris Said,

    May 14, 2009 @ 4:30 pm

    I skimmed the paper after you blogged about it, and what’s most interesting to me is the research question they seem to be exploring. It doesn’t seem like they’re interested in how professional critics are different from amateur critics, but how committed movie-goers are like professional critics. I’ve read a little of this psychological creativity research, and it seems like much of its project is to emphasize the role of mundane creativity, comparing creative decisions in quotidian spheres to the work of institutionally sanctioned artists. I don’t claim to understand it much beyond that, but it seems like it aims to nurture “creativity” in the service of producing more productive capitalist subjects, as well as help “creatives” negotiate the corporate world. I don’t think that the authors made any serious attempt at studying film texts or institutions, but they do seem to be interested in eliding the differences between the professional and the amateur.

    I did find it a little strange that the lit review didn’t mention any reception research or even more quant-friendly mass communication research, but I think that may be justified considering their research question.

  2. Chuck Said,

    May 14, 2009 @ 4:42 pm

    I’ll admit that I’m less familiar with the question about “creativity” that they were driving at, and I should have looked at those details more carefully, but you’re right to point out that they do trace out a similarity between professional critics and “committed moviegoers” and that they are attentive to the ways that the differences between professional and amateur are being elided. Their citation of Gladwell’s terminology is actually pretty telling, in terms of their specific goals (and was actually helpful in demarcating the different functions that amateur critics can serve).

    But it’s pretty clear that their research goals (and the questions they are asking) are pretty distinct from my own.

