Some experts in digital libraries (Michael L. Nelson, Martin Klein, and Manoranjan Magudamudi) ran an interesting evaluation comparing expert rankings to search engine rankings. The paper is called "Correlation of Expert and Search Engine Rankings", and it was released on 21 October 2008.
Expert ranking means that experts contribute the rankings, rather than it being an automated machine task. They chose good examples to test on: lists from ARWU, IMDb, Billboard, the ATP, Fortune, Money, US News, and the WTA.
Their question is "Does authority mean quality?" and the answer is "although authority means quality, quality does not necessarily mean authority".
"US News & World Report publishes a list of (among others) top 50 graduate business schools to answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50."
Interestingly, they state that if a webpage doesn't rank in the first few pages of results, it's as if it doesn't exist. I think this is true of search engine rankings, but I know a lot of low-ranking blogs that are popular through word of mouth and social networks. Jill is right: rankings really aren't the be-all and end-all.
"We then created a program that will create an ordinal ranking of the URLs in a SE independent of any keyword query. We then used Kendall’s Tau (t ) to test for statistically significant (p < t =" 0.60)" t =" 0.80)"> moderate (0.40 < t ="0.60)" t =" 0.80)">
They found that the bigger the list, the fewer the correlations, and in fact they found very few overall. They say that PageRank shows its limitations because it's a conventional hyperlink-based method that doesn't take quality into account. They also note that Cho and Baeza-Yates found PageRank to be biased against new pages, even ones of the highest quality.
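That bias is easy to reproduce with a toy power-iteration PageRank. This is only a sketch on a made-up four-page graph (the page names and link structure are hypothetical, not from the paper), but it shows that the score depends purely on link structure, so a high-quality new page with few inlinks sinks to the bottom regardless of its content:

```python
def pagerank(links, d=0.85, iters=100):
    """Plain power-iteration PageRank over a dict {page: [outlinks]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:         # share rank equally among outlinks
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

# Hypothetical toy web: "new_page" has excellent content but only one
# inlink, while the older pages link densely to each other.
web = {
    "old_a": ["old_b", "old_c"],
    "old_b": ["old_a", "old_c"],
    "old_c": ["old_a", "old_b", "new_page"],
    "new_page": ["old_a"],
}
ranks = pagerank(web)
# "new_page" ends up with the lowest score: the link graph, not quality,
# drives the ranking.
```

Nothing in the computation ever sees the page's content, which is exactly the limitation the authors point to.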
Really important papers to read from their refs:
B. Amento, L. Terveen, and W. Hill. "Does “Authority” Mean Quality? Predicting Expert Quality Ratings of Web Documents". In Proceedings of SIGIR ’00, pages 296–303, 2000.