The business of rankings
Professor Tan Sri Dato' Dzulkifli Abd Razak
Learning Curve : Perspective
New Sunday Times - 08/22/2010
THE rankings game is here again. This time it involves two types of rankings that allow us to understand what the game is all about from different perspectives.
Firstly, it is hardly surprising that Harvard University in the United States once again, for the eighth consecutive year, topped the recent world university rankings published by Shanghai Jiaotong University.
The results have been sharply criticised in Europe because the list is dominated by American institutions, with the University of California, Berkeley in second place, followed by Stanford University.
The ranking focuses almost entirely on a university's scientific research achievements and does not cover the humanities. It is allegedly not an accurate reflection of an institution's overall "performance", a term that has itself created controversy over how universities are assessed. More about that later.
The criteria include the numbers of the following: Nobel Prizes and Fields Medals won by faculty members and alumni, highly cited researchers on staff, and articles by faculty published in the journals Nature and Science.
Schools in the United States reportedly accounted for 54 per cent of the top 100 universities listed.
And not surprisingly, "in Europe ... officials say the criteria are biased against European schools". But then who says rankings are free from prejudices?
The Shanghai Jiaotong University ranking was initiated in 2002 with the aim "to benchmark the performance of Chinese universities, amid efforts by Beijing to create a set of world-class research institutions", a benchmark almost always based on American norms and systems.
This bias is reflected in the results, where even the highest-ranked non-US institutions are relegated to lower positions.
Britain's Cambridge and Oxford Universities are in fifth and 10th places respectively this year.
The European continent's top-rated institution was the Swiss Federal Institute of Technology in Zurich, at 23rd, while Pierre and Marie Curie University in Paris was the highest-ranked French school, in 39th position. The University of Tokyo, in 20th place, was the best rated in the Asia-Pacific. The top Chinese schools were not among the 100 best globally.
Secondly, a report released at almost the same time by Forbes magazine and the Center for College Affordability and Productivity (CCAP) in Washington DC clearly demonstrated how different rankings can produce very different results.
In this case, Williams College was named the No. 1 tertiary institution in the US for 2010.
The Massachusetts private liberal arts college, with about 2,000 students, "surpassed top Ivy League schools such as Princeton University, Harvard and Yale University", which came in second, eighth and 10th respectively on the Forbes list.
Amherst College, another small liberal arts school, ranked third, while Massachusetts Institute of Technology placed fifth.
The United States Military Academy, which ranked No. 1 last year, dropped to fourth place this year.
The ranking was compiled from both qualitative and quantitative information acquired from college students, based on criteria Forbes and the CCAP considered, "such as student satisfaction, postgraduate success, student debt, four-year graduation rate and how many students and faculty win prestigious awards such as Rhodes Scholarships or Nobel Prizes".
Sites such as RateMyProfessors.com and MyPlan.com, which gauge how contented students are with their colleges, also contributed to the ranking.
Salary information from Payscale.com was used to gauge the success of alumni.
Rankings are not the linear measure of the "quality" of education that they are made out to be.
This is further supported by the 2009 report of the Swedish National Agency for Higher Education, which was commissioned by the government to survey and analyse issues related to ranking.
It noted that "weighted rankings of entire higher education institutions are particularly problematic".
Quite simply, they provide too little information. They also assume that all the potential target groups share a common and very precise definition of quality, a definition that may be seriously questioned in terms of indicators, methods, reliability and, particularly, relevance.
"An awareness that simplification of information also involves a loss of information is required, and that information simplified to such an extent — called for in weighted ranking lists — is hardly information worth the name, in the context of the quality of higher education."
The once highly regarded Times Higher Education Supplement-Quacquarelli Symonds (THES-QS) World University Rankings, a partnership that is now defunct, is a perfect illustration of the unwholesome dimension of rankings.
And as they say in business, caveat emptor (let the buyer beware). The business of ranking is no different!
* The writer is the Vice-Chancellor of Universiti Sains Malaysia. He can be contacted at vc@usm.my