Thanks to an email tip, I just took a quick look at “Do experts and novices evaluate movies the same way?” (also via BPS Research Digest), a study conducted by Jonathan A. Plucker, James C. Kaufman, Jason S. Temple, and Meihua Qian, and published in the journal Psychology and Marketing as part of a special issue devoted to the theme of marketing movies. The authors seek to explore differences between how professional critics, “amateur” online critics, and general “novice” audiences evaluate films, in order to determine which groups tend to give movies the highest ratings.
The authors focused their research on films that opened widely, that is, on 1,000 or more screens, which raises some potentially thorny problems I’d like to address later. The sources for their samples of reviews made some amount of sense: critics’ reviews were culled from the numerical rankings on Metacritic.com, while the ratings from amateur critics were taken from IMDB discussion boards. Finally, the “novice” moviegoer numbers were taken from student surveys, a sample that also introduces some complications, given that students may be more generous than older audiences (a point the authors acknowledge in their discussion of the project’s “limitations”). A bigger definitional problem, for me, is that users of IMDB discussion boards are self-selecting in a way that might bias them in favor of positive reviews. I would imagine (but could be wrong) that reviewers writing for a personal blog might be more critical of mainstream films, in particular, than those posting user-generated reviews on IMDB, especially given IMDB’s bias toward newer films and a “popular canon.”
To be fair, the authors recognize that the timing of reviews matters considerably: critics’ reviews typically appear before amateur and novice reviewers have had a chance to see a film, and students completing an anonymous survey might respond differently than they would if their reviews were more public. Further, to give Plucker et al. credit, they acknowledge that their categories are not mutually exclusive but instead represent a continuum, one that is increasingly complicated by the rise of film criticism appearing in a variety of internet publications.
Given the sample the authors chose, it is probably no surprise that they discovered that professional critics tend to offer the lowest ratings while novice moviegoers rate films more highly. By focusing on films that open on more than 1,000 screens, the study excludes a number of critically acclaimed films, such as Million Dollar Baby or Juno, that deliberately use slow roll-outs in order to build positive word-of-mouth (or that target adult audiences who are less wedded to seeing films on opening night). I’m not suggesting that critic and novice rankings would have been reversed for these two films, but by placing too much emphasis on heavily marketed, high-concept films that open widely, we may lose some subtleties about how different audiences evaluate a film. Being specific matters quite a bit here, and it would be worth exploring distinctions between individual films: how do critical evaluations of that plucky indie film compare to those of bigger-budget films? By not naming a single movie title, the authors streamline what is often a much more volatile process. We also lose quite a bit when it comes to relative reach. Roger Ebert and Manohla Dargis will always have a wider audience than I do as a mostly amateur blogger, not to mention greater access to the film industry itself.
Another concern I have is how they reduce the reviews to their numeric rankings. Most, though not all, of the critics I read eschew numeric or starred ratings, and my decision about whether or not to see a certain movie can depend on any number of factors that have little to do with who rates a film highly (in Fayetteville quite a bit depends on what’s available at any given time). That being said, I think they’re probably right to suggest that these numbers can help guide the practices of marketers as they seek out the “tastemakers” who might champion certain films. Their conclusions also seem to imply that sites such as FlickTweets, which compile film reviews posted to Twitter, may actually help to expand positive buzz for a given film. More than anything, though, a closer look at specific cases would probably tell us more about how these rankings evolve and how “amateur” critics may review films differently than their professional peers.