For the xbCulture components I have the ability to rate an item (a film, book, etc.) with a number of stars (or other symbols). Personally I have long kept paper, and later spreadsheet, lists of films I have seen, giving each a rating. Initially I used a 5-point scale, but I found this a bit coarse - I would often want to use a half point to fit something between the OK (3 star) and good (4 star) categories.

So when I moved to a spreadsheet I switched to rating films out of 10 - from 1 to 10 stars. This was better, but over time some quirks emerged. I found that my ratings, over a few hundred films, tended to cluster towards the upper half of the scale. In part this is because I tend not to go to see things I don't think I'll like, but as a result I sometimes still found myself wanting half points.

Of course part of the difficulty is that a single scale is a crude device for classifying a pretty diverse range of films. One work might rank really highly for its style but be crap in terms of content; another might have something really good to say but be almost unwatchable - do they both get five stars? This is not a problem a rating scale can solve.

There are, naturally, psychology studies [1] into the use of different scales. In general these scales are known as Likert scales after the person who first researched them. The summary seems to be that a scale should have an odd number of points (not having a mid-point leads people to choose randomly when they want the middle) and have at least five and not more than ten discrete points. There is some evidence that a seven-point scale performs best in terms of repeatability and consistency, but the improvement is small. There is also evidence that going above seven makes a scale harder to use (about seven items being the limit of short-term processing in the brain). It also seems that in questionnaires using descriptions (one or two words) rather than numbers works much better.

Having stayed with the 10-point scale when I switched to a flat-file database, it became a little difficult to change, but with the move to the online version I decided to implement the ratings as a seven-point scale. This also gave me the opportunity, during import, to adjust the old ratings and slightly improve the clustering around points 6, 7 and 8. There were very few 2, 3 or 4 star ratings, so in remapping 10 points to 7 I was able to shift the spread slightly (1 & 2 -> 1, 3 & 4 -> 2, 5 -> 3, 6 -> 4, 7 & 8 -> 5, 9 -> 6, 10 -> 7).
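For anyone curious, the remapping above is simple enough to express as a lookup table. This is just an illustrative sketch - the function and table names are mine, not part of xbCulture - showing the 10-to-7 conversion exactly as described:

```python
# Illustrative sketch of the 10-point to 7-point remapping described above:
# 1 & 2 -> 1, 3 & 4 -> 2, 5 -> 3, 6 -> 4, 7 & 8 -> 5, 9 -> 6, 10 -> 7.
# Names here are hypothetical, not taken from the xbCulture code.

OLD_TO_NEW = {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 4, 7: 5, 8: 5, 9: 6, 10: 7}

def remap_rating(old: int) -> int:
    """Convert a rating on the old 1-10 scale to the new 1-7 scale."""
    if old not in OLD_TO_NEW:
        raise ValueError(f"rating must be between 1 and 10, got {old}")
    return OLD_TO_NEW[old]
```

Using a table rather than arithmetic makes the uneven compression explicit: the sparse low end (1-4) is squeezed into two points while the crowded 6-8 region keeps more room.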

I also took the opportunity to include a special zero point, reserved for things so bad that I didn't finish reading the book, or walked out of the screening before the end. This happens to me very rarely - I usually stick with it to the bitter end - but it is equivalent to having a "not applicable (n/a)" option in a questionnaire.

References & Footnotes
  1. Preston & Colman (1999), "Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences".