Three Takeaways on College Ratings from the PIRS Symposium

The Department of Education held a technical symposium last week to discuss what kind of data and analysis the federal government should use for President Obama’s accessibility, affordability, and outcomes rating for U.S. colleges (official title: the Postsecondary Institution Rating System, or PIRS).

Three key takeaways from the meeting:

First, the current higher education data infrastructure urgently needs improvement. This message came from just about every presenter, and it was probably the most important point of the day. There was general consensus that student-level data (the student unit record), rather than institution-level data, would provide a much stronger foundation for the ratings.

One example: When a student transfers from a community college to a four-year college, it should count as a success for the community college. But because institution-level data cannot distinguish between a student who drops out and one who transfers, that student counts as a failure for that community college. A student unit record could easily distinguish between the two.
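To make that distinction concrete, here is a minimal sketch, using made-up field names and records rather than any actual unit-record schema, of how data that follows individual students across institutions can separate a transfer from a dropout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Enrollment:
    """One enrollment spell in a hypothetical student unit record."""
    student_id: str
    institution: str
    start_term: str
    end_term: Optional[str]  # None means still enrolled

# Made-up records: each student can have multiple spells across institutions.
records = [
    Enrollment("S1", "Community College A", "2012-FA", "2013-SP"),
    Enrollment("S1", "Four-Year U",         "2013-FA", None),      # went on to a 4-year school
    Enrollment("S2", "Community College A", "2012-FA", "2013-SP"), # no later spell anywhere
]

def classify_departures(records, institution):
    """Label each student who left `institution` as a 'transfer' or a 'dropout'."""
    by_student = {}
    for r in records:
        by_student.setdefault(r.student_id, []).append(r)

    outcomes = {}
    for sid, spells in by_student.items():
        ended_here = [s for s in spells if s.institution == institution and s.end_term]
        if not ended_here:
            continue  # never enrolled here, or still enrolled
        # A fuller version would also check that the other spell starts after the
        # departure; this sketch only checks for enrollment at another institution.
        elsewhere = any(s.institution != institution for s in spells)
        outcomes[sid] = "transfer" if elsewhere else "dropout"
    return outcomes

print(classify_departures(records, "Community College A"))
# -> {'S1': 'transfer', 'S2': 'dropout'}
```

With only institution-level counts, both S1 and S2 would look identical to Community College A: students who enrolled and then left.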

Second, there is growing skepticism that a single rating system can accomplish both of its original goals:

1) providing consumer information, and

2) serving as an accountability tool to reallocate federal financial aid.

The two goals aren’t necessarily contradictory, but there isn’t much overlap between them either. A design choice that serves one goal often works against the other, so the better the ratings do on one goal, the less relevant they become for the other. For example, to provide consumer information, you’d want to group institutions by location, since most students only consider schools within a limited geographic area. Yet a geography-based grouping makes no sense for an accountability system. Why hold colleges in New York to a different standard than colleges in Texas?

Third, much debate remains over how to create peer/comparison groups, that is, rating schools against similar schools. Several presenters had experience using peer groupings in their own projects. The difficulties they described, and their discomfort with using those groupings for accountability, further convinced me that a regression method (comparing broad groups of colleges after statistically accounting for differences in inputs, such as the percentage of students receiving Pell Grants) is better than the peer-group method.
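To illustrate what a regression-adjusted comparison might look like, here is a minimal sketch using invented numbers; it is not the Department’s methodology, just an input-adjusted comparison of the kind described above, where each college is judged by how far its graduation rate sits above or below what its Pell share would predict:

```python
import numpy as np

# Invented institution-level data, for illustration only.
colleges   = ["A", "B", "C", "D", "E"]
pell_share = np.array([0.55, 0.30, 0.70, 0.45, 0.20])  # input: share of Pell recipients
grad_rate  = np.array([0.35, 0.60, 0.30, 0.50, 0.72])  # outcome: graduation rate

# Ordinary least squares: grad_rate ~ intercept + pell_share.
X = np.column_stack([np.ones_like(pell_share), pell_share])
coef, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)

# A college's input-adjusted performance is its residual: the actual outcome
# minus the outcome predicted from its inputs.
predicted = X @ coef
residuals = grad_rate - predicted

for name, actual, pred, resid in sorted(
        zip(colleges, grad_rate, predicted, residuals), key=lambda t: -t[3]):
    print(f"College {name}: graduated {actual:.0%}, predicted {pred:.0%} ({resid:+.3f})")
```

A real rating system would use many more inputs and far richer data, but the basic idea is the same: compare each college to a statistical expectation rather than to a hand-picked list of peers.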

A prototype of the rating system is due to be released this spring, with the full ratings to come sometime during the 2014-15 academic year.