Rankings just don’t work in higher education

Sexual assault is one of the most pressing challenges facing higher education right now. From the Stanford University case to the crisis at Baylor, universities across the country are trying to figure out how to reduce sexual violence and manage the regulatory and public relations environment surrounding sexual assault. Of course, the best way to solve a complicated issue in higher education is to create a rankings system. Surely not with sexual assault, you say? Think again! Yes, we have a sexual assault ranking! Proving, once again, that rankings just don't work in higher education.

On June 7th, the Washington Post released a story documenting sexual assault reports on campus using data collected under the Clery Act, the federal campus safety law that requires colleges and universities to report crime on their campuses.

The headline read: "These colleges have the most reports of rape."

To be fair to the Washington Post, the story did fairly comprehensively explain that many victim advocates see increased reporting as a positive sign that fewer sexual assaults are going unreported.

However, it doesn’t take long to realize what happens next. Stories pop up everywhere:

“Brown University and University of Connecticut tie for number one in rape.”

“Dartmouth comes in second in sexual assault.”

You would think we were talking about athletics teams or SAT scores.

This is the problem with lists in our modern media environment.

The clickbait headline writes itself: The top universities for rape… Number 6 will shock you!

Rankings obscure the complexity of an issue

Whether we are talking about rankings of sexual assault or the U.S. News rankings, rankings obscure the complexity involved in an issue.

Sexual assault is a huge challenge. Everything we know about the issue is disturbing. A survey from the Association of American Universities found that more than 20% of female undergraduates had been victims of sexual misconduct.

The U.S. Department of Education's expectations have been difficult to understand, and universities don't know how to satisfy the federal government when dealing with sexual assault. Successful reporting requires a tremendous network of counselors, law enforcement, student affairs staff, clergy, and other professionals both on and off campus.

Policies have to be clear, consistently followed, and widely disseminated.

These aren't easy tasks, and universities should be wrestling with how to stop this epidemic.

Silly rankings that fail to consider all of the issues involved in sexual assault do nothing but distract us from the work of reducing and eliminating it on campus.

Rankings measure the wrong things

An inherent problem with rankings is that they inevitably measure the wrong things. Often, this is because measuring the right thing is difficult or nearly impossible to do.

If we truly wanted to compile a ranking around sexual assault, the culture of drugs and alcohol, attitudes toward women, and the culture around reporting would all need to be considered.

How do you measure the culture of sexual assault reporting? This is a complicated question with real methodological challenges.

No worries—just compile a ranking with the absolute number of assaults. And maybe another one that shows the number of rapes per 1,000 students.

Problem solved!

Even the creators of college rankings acknowledge the methodological and data problems with such lists.

Only two public schools are listed in the Top 25 of the U.S. News rankings. Do we think that's because top public universities don't compare favorably to top private universities? Or is there, perhaps, a flaw in the rankings that privileges the position of private universities?

Rankings fail to account for institutional diversity

Longtime readers know that I believe institutional diversity is one of the most important and underappreciated aspects of American higher education.

Simply put, institutional diversity is the range of different colleges and universities in our system of higher education.

Two-year or four-year institutions. Public, private, or for-profit. Rural, suburban, or urban. Large or small.

Liberal arts colleges. Research universities.

Serving adult students or 18-to-22-year-olds.

Selective or open access.

There is a huge variation among colleges and universities. Putting all universities in a list fails to acknowledge the relevant institutional differences at play.

The Washington Post notes the example of New York University, which, as an urban institution, has mostly off-campus housing. Undoubtedly, this affects the number of rapes reported on campus.

Rankings always face these kinds of challenges because the weighting of different variables constitutes a value judgment.

Emphasizing SAT scores over the number of Pell grant recipients skews the final rankings.

Adding research expenditures into a ranking or considering student-faculty ratios are value judgments about which type of higher education is better than another.

To create a list, you have to treat institutions largely the same and make a judgment about which variables to credit and which to ignore.

As a result, every ranking system is biased whether it wants to admit it or not.

The problem with federal data collection

I want to be clear: I do believe that the Clery Act is positive for higher education and that institutions should be reporting their crime statistics.

However, we have to be careful with the kinds of data we collect on higher education at the federal level.

The Washington Post story, which covered the complexity of sexual assault fairly well, was co-opted into a misleading ranking with implied conclusions about campus safety.

This type of misuse is why many of us in higher education were skeptical of the Obama administration's attempt to create a rankings (or ratings) system for higher education.

We were worried about the values the Department of Education was going to impose on higher education through the rankings. Moreover, the increased collection of data needed to truly implement a rankings system would open up more data for misuse.

The Department of Education and the federal government have a reasonable expectation that colleges share relevant data for accountability.

As a higher education scholar, I rely on data collected by the federal government. Yet, we must consider the ways that data can be misused. I’m not advocating that we shouldn’t be collecting information on higher education. We need good data.

We just need to recognize that the propensity of many to turn everything into a ranking often opens up a data system to abuse.

Rankings just don’t work in higher education

For all of these reasons, rankings just don't work in higher education. They are great for selling magazines or getting people to click on your website, but they do not provide the kind of useful information that students and families need when making higher education decisions.

Whether we're talking about deciding where to go to college or measuring sexual assault, shouldn't we focus on creating information that actually benefits people?

Otherwise, what’s the point of rankings?

 
