Tuesday, October 12, 2010

A Look at My Review Scores

Some days I get bored or maybe I’m just procrastinating. Today is one of those days, and the result is that I dove into the scores I give in my reviews. There is a lot of debate about the usefulness/uselessness of scores on reviews. Generally, I find them useless, but I give each of my reviews a score anyway, for my own reasons. One of those reasons is that I can play with the statistics of the scores. I can actually do quite a bit with them and probably will in future bouts of boredom/procrastination.

In some of the recent discussion on the quality of reviews, Adam of The Wertzone posed the question of whether ‘scoring’ varies between books provided by publishers and books not provided by publishers. So, I took a look at the scores of my reviews. I have a total of 184 scores from my nearly five years of blogging; 114 of those scores were for books provided by publishers and 70 were for books I purchased or otherwise acquired on my own. The chart below shows the frequency of scores for all of my reviews, for reviews of books provided by publishers, and for books not provided by publishers.


The most interesting thing I see in this chart is that there appears to be no real difference in the distribution of scores for books provided by publishers vs. books not provided by publishers. This is good – it indicates that I’m consistent in my reviewing regardless of the source of the book. I also think it indicates that my reading choices don’t vary much either – basically, I read what looks interesting at the moment, and what governs those choices doesn’t seem to differ between books publishers have provided and books they haven’t.
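
For anyone curious how a chart like this is put together, here is a rough sketch of the idea in Python with matplotlib. The score lists below are placeholders for illustration, not my actual data, and the half-point buckets are just an assumption about how the scores get binned.

import matplotlib.pyplot as plt

# Placeholder scores for illustration only; not the actual review data.
pub_scores = [7.0, 7.5, 8.0, 6.5, 7.75]   # books provided by publishers
own_scores = [8.0, 6.0, 7.25, 8.5, 7.5]   # books purchased or otherwise acquired

# Assumed half-point buckets for the histogram.
bins = [5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0]

# Plot the frequency of scores for all reviews and for each group side by side.
plt.hist([pub_scores + own_scores, pub_scores, own_scores],
         bins=bins,
         label=["all reviews", "publisher-provided", "not publisher-provided"])
plt.xlabel("review score")
plt.ylabel("number of reviews")
plt.legend()
plt.savefig("score_histogram.png")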

And yes, my review scores are skewed toward the high end. This is because I pick books that I think I’ll like. Yes, I do challenge myself from time to time, but in general I want to enjoy the books I read, and I have a pretty good idea of what I like. I touch a bit more on how I score my reviews here.

Thoughts?

11 comments:

Unknown said...

You're an honest reviewer. Your scores reflect that.

Anonymous said...

My only real thought is I'm amazed you charted this out.

I agree with Sam. You are an honest reviewer and the chart does reflect that. It's interesting to see it laid out like this, though.

Neth said...

Looking at the chart a bit closer, it looks like the books publishers send me are a bit more likely to be merely good, rather than really good, compared to the books I read that weren't provided by publishers.

And thanks guys.

Alex J. Cavanaugh said...

Sure! Why read books you suspect you won't enjoy? Go for the sure thing. Life is too short.

Ryan said...

Cool chart. I agree with Sam and Sarah that you are honest in your reviews, and it shows. I think scores, grades, whatever, can be arbitrary and subjective, but in some cases, like when I read a review from a reviewer I trust, i.e. you and a few others, I think there is more legitimacy to that given score.

MrMathMan said...

If you want some statistical analysis to verify that there is no difference between the scores in these two groups, I'd love to do that. Got an Excel file of all this?

I think the point about Pat's scores being meaningless is fair. 7.25? Almost nothing gets less than a 7.

There is a difference between a fan and a critic. Pat is a fan. Many critics just become negative. Neth, you do a decent job of being a blend of the two, which is my favorite kind (i.e., Rotten Tomatoes).

Neth said...

@MrMathMan, if you really want to do a more rigorous (and valid) statistical analysis, I'll send the scores in an Excel sheet. For this I did simple histograms and didn't take the time to cull a couple of scores from the list that I know should be removed (knowing that it would make little difference).

I can tell that there is some statistical difference here, but it's not much. From what I can tell, my scores for books provided by publishers are just a bit lower.

MrMathMan said...

Sure! Send me the file. My name plus gmail will get it to me.

Neth said...

And thanks to MrMathMan for doing an independent review. Here is what he sent me:

You're officially in the clear. There is NO evidence that the scores of publisher-provided reviews are different from those of the books you bought. The mean score for the pub books is 7.44. The other is 7.31. A difference of this size happens by chance about 50% of the time. In other words, the difference between the two groups is the type that would happen just by random variability.


Good job!

JD

Two-sample T for pub vs not

       N  Mean  StDev  SE Mean
pub  107  7.44   1.03    0.099
not   67  7.31   1.33     0.16

Difference = mu (pub) - mu (not)
Estimate for difference: 0.132
95% CI for difference: (-0.245, 0.508)
T-Test of difference = 0 (vs not =): T-Value = 0.69  P-Value = 0.489  DF = 114
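
For anyone who wants to run the same kind of check themselves: the output above looks like an unpooled (Welch's) two-sample t-test, and something similar can be reproduced with SciPy. A minimal sketch, with placeholder score lists standing in for the real Excel data:

import numpy as np
from scipy import stats

# Placeholder scores for illustration; substitute the real lists of
# publisher-provided and self-acquired review scores.
pub_scores = [7.5, 8.0, 7.0, 6.5, 7.75]
own_scores = [8.0, 6.0, 7.25, 7.5, 8.5]

# Welch's two-sample t-test: does not assume the two groups have equal variances.
t_stat, p_value = stats.ttest_ind(pub_scores, own_scores, equal_var=False)

print("mean (pub): %.2f" % np.mean(pub_scores))
print("mean (not): %.2f" % np.mean(own_scores))
print("t = %.2f, p = %.3f" % (t_stat, p_value))
# A large p-value (roughly 0.5 in the analysis above) means a difference of that
# size is the sort of thing random variability alone would produce.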

Stevie said...

Do you think that you're biased against books sent by publishers, or perhaps biased toward books that you've selected on your own? For myself, I think I'd be more favorably inclined toward books I've chosen myself, since I'm already interested, whereas if someone selects a book for me, I might not like it as much.

Neth said...

@Stevie, the statistics say that I'm not biased toward either and that I approach them the same way. That's the way I want it - I want to approach each book the same way regardless of where it came from.
