Rankings are sexy; data are ugly.

Over the weekend, SMU hosted the Education Writers Association meeting of higher education reporters. The conference covered many issues, from the use of big data to college athletics and sexual assault. One of the most fascinating panels considered the role of rankings and ratings. With the Obama administration’s emphasis on creating a college rating system, this is one of the hottest issues in higher education, particularly at the federal level. Colleges and universities, along with their lobbying interests, have pushed back against the system. Political leaders demand accountability for the large sums that governments spend on higher education. It seems everyone has a higher education ranking system these days. U.S. News isn’t the only game in town; Washington Monthly, Money, and Forbes all have higher education rankings. Rankings are sexy. They sell magazines, and students use them to make critical decisions about where to go to school. The problem with college rankings is that the data used to build them are fundamentally flawed.

Everyone agrees that there is wide diversity among the 6,000 colleges and universities in the United States. As I’ve written before, this is one of the greatest strengths of American higher education. The problem is how to create a set of variables that can be used to measure all colleges. And even if we could identify these variables, do we have valid, preferably audited, data to populate them? Four problems plague efforts to create a ratings system for higher education.

No serious learning data.

We simply have no sound measure of what students learn in college. Obviously, this is what we most want to know: what are students learning? The Collegiate Learning Assessment attempts to get at this question, but it has its own problems. If students are going to college to learn something, how do we create a ranking system when we have no way to measure learning?

No serious instructional data.

Not only can we not adequately measure learning, but we also can’t measure the quality of instruction nationally. One of the best measures we have for faculty is RateMyProfessors.com. Before you laugh at the idea of relying on a website that awards chili peppers for a professor’s hotness, consider that Money used the site as a measure of faculty quality in its rankings. For years, we have struggled with how to weigh quality of teaching as part of a tenure portfolio. We have no serious way to assess teaching broadly across faculty and institutions.

Serious flaws with the federal data.

What data we do have come largely from national data sets compiled by the federal government. However, the quality and usefulness of these data are a challenge. Changing definitions and shifts in what is collected make it difficult to track variables over time. In addition, many institutions argue that the federal data misrepresent the work that they do. For example, community colleges argue that federal data dramatically underestimate their true graduation rates by failing to count transfer students and other ways their students differ, as the sketch below illustrates.
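To see why that complaint matters, here is a minimal sketch of the arithmetic, using invented numbers and a deliberately simplified "federal-style" calculation (the real IPEDS methodology has more moving parts): when transfer students sit in the denominator but can never count as completers, the published rate drops sharply.

```python
# Hypothetical cohort of 100 first-time, full-time community college students.
# All numbers are invented for illustration only.
cohort = 100
graduated_here = 25        # completed a credential at this institution
transferred_out = 40       # left for a four-year school before completing here
other = cohort - graduated_here - transferred_out  # still enrolled or dropped out

# Simplified federal-style rate: only completions at the original institution
# count as success, so transfers are in the denominator but never the numerator.
federal_style_rate = graduated_here / cohort

# A transfer-adjusted rate that treats successful transfer as a positive outcome.
transfer_adjusted_rate = (graduated_here + transferred_out) / cohort

print(f"Federal-style rate:     {federal_style_rate:.0%}")      # 25%
print(f"Transfer-adjusted rate: {transfer_adjusted_rate:.0%}")  # 65%
```

The same hypothetical cohort looks like a failing institution under one definition and a reasonably successful one under the other, which is exactly the gap community colleges point to.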

We have to rely too much on proxies.

In the end, the biggest problem is that we must rely too heavily on proxies in researching higher education. We do not have enough sources of data on key aspects of what we do in higher education, so we are forced to come up with proxy measures and variables. By their nature, these measures are imprecise and can’t entirely capture what is happening in higher education. Proxies may be better than nothing, but they come nowhere near explaining teaching, learning, and institutional activities.

In many ways, the entire edifice of rankings is a proxy: a proxy for an industry that offers incomplete data. Some of this is the fault of colleges and universities that don’t want to be measured; some is the fault of media and political leaders fixated on winners and losers.

Federal policy makers, the higher education lobby, and colleges themselves fight over President Obama’s desire to create a rating system for institutions. Yet the most significant point is that much of the variation isn’t between institutions but between students. Why does one student succeed while another fails? Those individual characteristics and circumstances are the holy grail for improving the data available on higher education. Ironically, while such data are best collected through qualitative means, we continue to fight over how to quantify the work of institutions.

We have to find a way to say one college is better than another. This is sexy. It sells.

Unfortunately, the ugly part, the underlying data problems, too often goes unfixed.
