Comment by JoshTriplett
17 days ago
They do a good job of supporting whatever comparisons you want to make, which is useful if you have different preferences; that said, I do think they draw clear conclusions in many cases, capturing strengths, weaknesses, and "best of" recommendations.
rtings.com's "conclusions", "scores", and "recommendations" do not always correspond to their own numbers.
Exactly. Their ten-point scales have no obvious relationship to the underlying measurements (where measurements are provided at all), and they rescale the points every so often. (They call this their "methodology version.")
I've noticed that, when a new (expensive, high-commission-generating) product comes out, it often has middling scores at first, and then, a few months later, they've revised their methodology to show how much better the pricey new product is.
1) I trust rtings to not change their position on the basis of what makes them money; that trust is their whole brand.
2) I have not seen products jump from middling to high, but I have seen scores change with new methodologies, and sometimes that has the net effect of lowering the scores of older devices. Typically that seems to reflect a change in technology, or in what people are looking for in the market. For instance, I would expect (though I haven't checked) that, with the substantially increased interest in high-refresh-rate monitors, the "gaming" score has been re-anchored to gamers' expectations of higher frame rates. That would have the net effect of lowering the score of what was previously considered a "good" monitor, as the sketch below illustrates. This seems like an inherent property of improving technologies: last year's "great" can be this year's "good" and next year's "meh".
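To make that rescaling effect concrete, here is a minimal sketch. It is purely hypothetical: rtings does not publish a formula like this, and the `gaming_score` function, the expectation ceilings, and all numbers are invented for illustration.

```python
# Hypothetical illustration (not rtings' actual methodology): a "gaming"
# sub-score that normalizes measured refresh rate against the expectation
# ceiling of the day. All numbers are invented.

def gaming_score(refresh_hz: float, expected_max_hz: float) -> float:
    """Scale a measured refresh rate onto a 0-10 score, capped at 10."""
    return min(10.0, 10.0 * refresh_hz / expected_max_hz)

monitor_hz = 144  # last year's "good" gaming monitor

# Methodology v1: 144 Hz is the ceiling of gamer expectations.
print(gaming_score(monitor_hz, expected_max_hz=144))  # 10.0

# Methodology v2: expectations move to 240 Hz; same hardware, lower score.
print(gaming_score(monitor_hz, expected_max_hz=240))  # 6.0
```

The hardware hasn't changed between the two prints; only the denominator has, which is exactly the "last year's great is this year's good" effect.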
Personally, I never pay much attention to the 0-10 scores in the first place, and always just make tables of the features and measurements I care about. The only exception is for underlying measurements that are complex and need summarizing (e.g. "Audio Reproduction Accuracy").
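For what that workflow might look like, here is a minimal sketch; the monitor names and measurements are made up, not real rtings data:

```python
# A small sketch of the "make your own table" approach: collect only the
# measurements you care about and sort on them, ignoring composite 0-10
# scores. Models and figures below are invented for illustration.

monitors = [
    {"model": "Monitor A", "refresh_hz": 144, "contrast": 3000, "response_ms": 4.0},
    {"model": "Monitor B", "refresh_hz": 240, "contrast": 1100, "response_ms": 1.5},
    {"model": "Monitor C", "refresh_hz": 165, "contrast": 3500, "response_ms": 5.5},
]

# Sort by whichever measurement matters most to you, e.g. native contrast.
for m in sorted(monitors, key=lambda m: m["contrast"], reverse=True):
    print(f'{m["model"]:<10} {m["refresh_hz"]:>4} Hz  {m["contrast"]:>5}:1  {m["response_ms"]:>4} ms')
```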