Is This Any Way To Pick A College?

Academic prowess, graduation rates, or the record of the football team? We rank the rankings.

This dog went to a top-ten college...for dogs. (Photo: Martha T/Flickr)

There are more than 7,000 colleges in the U.S., and 21.8 million students enrolled in them. That's potentially 21.8 million opinions about what makes a school "the best."

The penalty for a bad choice can be huge. The cost of a degree continues to soar, graduation rates vary widely from college to college, and a growing body of evidence suggests that picking a supposedly "top" school doesn't necessarily pay off later in life.

With so many variables (cost, location, curriculum, reputation), parents and students often turn to rankings and reviews. The most famous and influential is probably the one by U.S. News & World Report. There are also rankings in CNN Money and Forbes, plus published guides from The Princeton Review, Barron's, Fiske and The College Board. The Department of Education has its own College Navigator site. Later this year, the Obama Administration is expected to release its own Postsecondary Institution Ratings System, or PIRS, which, it proposes, will eventually be tied to federal student aid dollars.

Each of these ratings systems and guides has its own particular recipe for weighing the available data. We're not going to get into the long-running debate about the merits of this or that formula, but we can take a hard look at the ingredients: the strengths and weaknesses of criteria such as selectivity, prestige, and graduation rates. How useful is the information? How fair? And how easily can colleges game the rankings?

As you'll see, this is a subjective judgment based on limited evidence, but, hey, that also describes these rankings systems themselves.

1) Selectivity

For decades, the primary way to rank universities was to look at which ones graduated the "best men." From the 1930s to the 1950s, for example, Prentice and Kunkel published a guide that listed colleges based on how many of their alumni appeared in Who's Who.

This method was simple and transparent.

It also largely mirrored social class, and it was somewhat circular: the "best colleges" were where the "best men" went, so the "best men" (most, back then, were men) kept going there.

Selectivity today is defined as the ratio of students who are accepted to those who apply (a metric that colleges can and do mess with, by soliciting more applications). It also means looking at the SAT or ACT scores and high school grades of entering students. The U.S. News & World Report rankings, first published in 1983, made selectivity a significant part of their formula. It still accounts for 12.5 percent of the rankings all by itself. It also indirectly influences other measures, like academic reputation and retention rates (the percentage of students who return from year to year).
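Under the hood, a formula like this is just a weighted sum of scores. Here's a minimal sketch of the arithmetic; the 12.5 percent selectivity weight mirrors the figure cited above, but every other weight and every school statistic below is invented for illustration:

```python
# Rough sketch of a rankings-style composite score. Only the 12.5 percent
# selectivity weight comes from the article; all other numbers are made up.

def acceptance_rate(accepted, applied):
    """Selectivity: the share of applicants who are accepted (lower = more selective)."""
    return accepted / applied

def composite_score(metrics, weights):
    """Weighted sum of 0-100 metric scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * score for name, score in metrics.items())

# Hypothetical school: 5,000 students accepted out of 40,000 applicants.
rate = acceptance_rate(5_000, 40_000)   # 0.125
metrics = {
    "selectivity": 100 * (1 - rate),    # 87.5: more selective -> higher score
    "reputation": 70.0,                 # survey-based score (see section 2)
    "graduation": 80.0,                 # graduation rate as a percentage
}
weights = {"selectivity": 0.125, "reputation": 0.225, "graduation": 0.65}

print(round(composite_score(metrics, weights), 2))  # 78.69
```

Note how the gaming described above works in this arithmetic: soliciting more applications while admitting the same number of students shrinks the acceptance rate, which mechanically raises the selectivity score.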

But here's the problem: the vast majority of American college students today go to nonselective institutions that admit just about everybody. And so what use can they make of this information?

We give "selectivity" one mortarboard.


2) Reputation

How good would you say Princeton's undergraduate business program is?
That's a trick question: there is no such program. Yet when college presidents were asked this question, they gave the nonexistent program top marks. That's known as the "halo effect."

In some categories, U.S. News bases 22.5 percent of its formula on reputation, in part by asking the leadership of peer institutions for their opinions. As the halo effect shows, that's a less than foolproof process. Our rating:

one mortarboard

3) Learning

It sure would be nice, in theory, if we had a way to directly measure what students actually learn in college. But the kinds of state standardized tests that form the basis of K-12 accountability are screamingly unpopular at the college level. (Honestly, they're not all that popular at the K-12 level, either.) Two mortarboards in theory, but in practice?

one mortarboard

4) Graduation Rates

Instead of judging colleges by whom they exclude, it could be fairer and a lot more useful to examine what happens to the students they let in. The most basic approach is to look at the graduation rate, as Forbes, U.S. News and other rankings do.

There's just one problem: If you're going to rank schools by graduation rate, how do you adjust for differences in the quality of entrants or the level of resources the school has available? A cash-strapped public university that takes all comers is likely to have a lower graduation rate than a private school with tiny classes and a big endowment. In the California State University system, just to pick one very large example, the four-year graduation rate is just 16 percent.

And don't forget, the mere number of students with diplomas doesn't speak to the quality of that diploma.

Bottom line: graduation is a crucial, if limited, yardstick. Graduation rates are most useful when comparing colleges within a similar category.

two mortarboards

5) Earning Power

One of the biggest reasons people go to college is to get a better-paying job. So why not judge colleges, at least in part, by their graduates' incomes? It's a simple return-on-investment question, especially important given the huge amounts we're borrowing to go to school. And, as noted above, there's growing evidence that higher income is not necessarily linked to attending expensive private schools.

The only colleges currently regulated based on value for money are for-profits. According to a federal regulation known as the "gainful employment" rule, for-profit schools whose graduates can't pay back their loans can lose eligibility for federal student aid. The Obama administration's new ratings system would probably include graduate income as one measure. But as of right now, federal law prohibits linking records of students at individual schools to federal income or employment data.

In the meantime, the only employment and salary information we have is largely self-reported on third-party sites or collected by institutions themselves. In both cases, the data is likely to be incomplete. Also, some have argued that this method is unfairly biased against colleges that educate a lot of future teachers, social workers, and artists, and in favor of tech- and engineering-heavy schools.

two mortarboards

6) Broader Outcomes

Income and employment don't tell us everything we need to know about the value of higher education. Far from it.

Educated people, by the numbers, are healthier, live longer, vote more, and have stronger marriages. And that's to say nothing of the intangible benefits to individuals of a liberal arts education and, to society, of having an educated and informed citizenry.

There must be colleges that do a better or worse job of developing those qualities. Unfortunately, what we don't have are agreed-upon ways of measuring them.

That's why the recent Gallup-Purdue survey is so interesting, with its finding that going to a top-rated school had no impact on later success or happiness.

And while that survey challenged our definitions of "prestige," the pollsters did find a strong link between great teaching and learning experiences in college and graduates who were happy and engaged years later.

"If you are a graduate who was emotionally supported during college, it more than doubles your odds of being engaged in work and triples your odds of thriving," says Brandon Busteed, director of Gallup's education practice. "So we're talking about life-altering differences."

Gallup hopes to market some of these survey services to universities, but that's still a ways off. If reliable ratings that include broader long-term outcomes emerge, we'd give them:

three mortarboards

7) Individual Needs

Given this incomplete, fuzzy view of colleges, a would-be college student would be well advised to consider their own individual needs. Noodle, a for-profit startup, takes a throw-everything-at-the-wall, consumer-friendly approach to college ratings, adding together the Department of Education's information on cost of attendance, retention and graduation rates with subjective and self-reported information.

Noodle will give you more than just a list of colleges matching your search criteria: if you enter your grades and transcript information, it can tell you which ones you are likely to get into, and which ones are rated most highly by students like you. "If you ask me what the best law schools are, I can give you a general list, but the answer we need to get to is asking, 'Why do you want to go to law school?'" says CEO John Katzman. The service then returns recommendations based on those motivations.

Of course, any system is only going to be as good as the data we have, which is wildly incomplete. With millions of students and countless variables, there's never going to be a ranking that can find the one true "best" college.

A customized approach, like Noodle's, keeps that fact front and center.

four mortarboards