Reforms to Increase Transparency in Higher Education

By Mark Schneider
Vice President and Institute Fellow
American Institutes for Research

Testimony Presented to the House
Subcommittee on Higher Education
May 24, 2017

 

Higher education is one of the largest investments that individuals make over the course of a lifetime. To help students make the most of this investment, federal higher-education policy supports portable grants, loans, and tax credits available to prospective students and allows them to choose from a diverse array of providers. When the system was designed, policymakers assumed that providing students with voucher-like Pell Grants (and, later, tax benefits) and allowing them to choose would reward schools that offer high-quality programs and punish those that fall short. In the aggregate, it was hoped, these choices would create market forces that would hold colleges and universities accountable for what they charge and the quality of the education they deliver.

Market competition works best when consumers can find and use clear, comparable information about the costs and quality of different offerings. If such information is lacking, either because it does not exist or because it is difficult to find and use, then market competition will be based on other attributes that may or may not be related to the key dimensions that enhance quality and efficiency. In the case of higher education, that means students might judge campuses based on their proximity to home, amenities (lazy rivers, climbing walls, top chefs), or, in some cases, tuition (as a proxy for quality). In the aggregate, choices based on these dimensions might reward campuses that have a geographic monopoly or those that inflate their tuition, stunting the ability of market forces to improve the system as a whole.

To be sure, evaluating the quality of post-secondary institutions and programs is a difficult task, even when information is plentiful. Part of this is because of the nature of the good: A post-secondary education is an “experience good,” meaning it is difficult to assess a school’s value until after you’ve actually enrolled. In some cases, the true value is not recognized until many years in the future when graduates learn how much their degree is rewarded in the labor market. And most students only purchase a post-secondary education once or twice, meaning they have little opportunity to learn from experience.

Consumers also face a dearth of clear, comparable data on the cost and quality of different offerings. Some basic pieces of information, such as the actual out-of-pocket costs for a given student at a given institution, are available only at the very end of the college-application process, after students have settled on a set of choices (and schools often change the terms of their financial-aid package from year to year).

Other information is incomplete: Federal graduation rates, which provide a basic measure of the likelihood of completing a credential, still cover only first-time, full-time students, excluding those who transfer in and complete a credential or transfer out and complete one somewhere else—although coverage is improving.

Data on how much students learn are largely nonexistent. And information on how graduates of particular programs fare after finishing school—in terms of finding a job and contributing to society—is also not systematically available outside of a handful of states or institutions. Popular private rankings suffer from the same limitations.

The federal government, in concert with the states and institutions, could do more to increase transparency and enhance market accountability in higher education. Reporting the data it already collects more effectively, and collecting better data on basic measures of cost, quality, and outcomes, would provide a number of benefits.

First, students could use the information to avoid investing in schools or programs that do not provide a positive return on investment and to discover options that they may have eliminated on the basis of incomplete or faulty information. For instance, while many argue that a bachelor’s degree is the best path to the middle class, a closer look at the earnings of workers with associate’s degrees or certificates in technical fields, or those who complete apprenticeships, reveals that there are many other affordable, worthwhile opportunities to consider.[i]

Second, researchers and policymakers could more readily judge where investments in federal aid are paying off and where reforms could improve efficiency and reduce waste. Though the Office of Federal Student Aid sits on millions of student-level records that measure the receipt of grants and loans, completion or separation status, and loan repayment, very little of that data is used to inform the policymaking or budgeting process. And almost none of those administrative data are made available to researchers who could help answer pressing questions.

Third, private firms could use new, more granular data to come up with all manner of rankings and ratings to reflect the unique preferences of different students. The most popular rankings tend to reward admissions selectivity and spending over actual measures of student learning or value-added. Better data on post-graduation outcomes would provide a fuller picture of institutional quality and, eventually, encourage institutions to compete on how well their graduates fare after leaving school rather than how well they scored on their entrance exams. Early evidence suggests that the earnings data released on the newly revamped College Scorecard affected student choices.[ii]

Fourth, private lenders and funders could use labor-market outcome data to improve underwriting and extend credit on the basis of a student’s potential rather than the student’s past experience with credit products. Without reliable data on the likely return on investment to different options, lenders are forced to rely on credit scores and the availability of credit-worthy co-signers. These measures exclude students who may have high potential but no credit history.[iii]

With so much at stake for taxpayers and for students, the nation must improve its data collection and the way in which it makes these data available.

What can be done?

I focus on a few areas in which the federal government could improve the flow of data to consumers.

  • First, I look at IPEDS, the nation’s premier data collection on higher education—a data collection that everyone loves to hate. Related to that, I discuss the disclosures that schools are required to make and how we might better organize and present that information.
  • Second, I look at how we can improve the collection of data on post-completion student outcomes.
  • Third, I look at some opportunities for re-purposing existing administrative data collected by various federal agencies. This will require creating a different culture of data sharing and building an infrastructure to allow the merging of data often governed by different laws regarding use.

While there are opportunities to enhance transparency, it is important to place clear restrictions on how federal regulators can use such data, to ensure these efforts are designed to serve a specific audience, and to protect students’ privacy. And most of these suggested changes cannot be made without explicit action by Congress.

IPEDS

The primary source of data on post-secondary education is the Integrated Postsecondary Education Data System (IPEDS), which requires institutions that participate in federal student-aid programs to fill out a series of surveys each year. The surveys focus on 12 distinct topics, including: institutional characteristics, institutional prices, admissions, enrollment, student financial aid, degrees and certificates conferred, student persistence and success, and institutional resources.[iv] This extensive coverage of so many aspects of higher education—the topics covered, the very questions asked, and the mixing of consumer and regulatory information—is the result of a long process of accretion in which legislation demands that new pieces of data be collected but never eliminates questions or whole surveys that have outlived their usefulness or pose burdens in excess of benefits. (NCES has documented the legislative mandates behind different IPEDS surveys, showing its limited ability to eliminate items or surveys.)

In IPEDS, the collected data are aggregated to the institution level, providing a snapshot of an institution’s enrollments, finances, staffing, prices, and some student outcomes in a particular year. IPEDS is the only source of comparable institution-level data on student success, such as retention and graduation rates. The IPEDS data are extensive but flawed. Moreover, most of the data collected are never used by schools or researchers: NCES has tracked the use of each item in every one of the IPEDS surveys and has found that most items are never viewed by anyone.

Here are some specific actions that Congress could consider to reduce the burden of IPEDS on institutions. The first two suggested actions have often been put forward before:

  1. Simplify the Human Resources Survey

This survey is likely the most burdensome and most disliked survey in all of IPEDS. It is also likely that much of the data it produces is inferior to data gathered by others, such as the American Association of University Professors or the College and University Professional Association for Human Resources. Indeed, when I was chair of the Political Science Department at Stony Brook, I always looked to the AAUP data to justify personnel requests to my dean and provost and never once used IPEDS data.

The Human Resources survey is needed to meet requirements under the Civil Rights Act of 1964, as amended by the Equal Employment Opportunity Act of 1972, and current disclosures required under the Higher Education Opportunity Act.

Like so many other fixes to IPEDS, changing the HR survey requires Congressional action. Among the fixes Congress might consider:

  • Limit any Human Resources survey to biennial collections.
  • Limit data elements of the Human Resources survey to requirements under the Civil Rights Act of 1964 as amended by the Equal Employment Opportunity Act of 1972.
  • Return to the practice of exempting institutions with fewer than 15 full-time staff from submitting any documentation on employees.

  2. Drop the Academic Libraries Survey

For years, many have argued that the costs of this survey far outweigh the benefits. Congress could consider allowing a non-profit organization to gain rights to the survey instrument, dropping it from IPEDS.


These two suggested actions are “perennials” that have circulated for years. There are some more fundamental changes that Congress might consider.
 

  3. Use sample surveys rather than universe surveys

Congress could request that NCES hold Technical Review Panels to explore which IPEDS items are needed at the institution level and for which national estimates would suffice.

Here’s one clear example of where sample data could replace universe data: IPEDS collects data for the U.S. Census Survey of State and Local Government Finance. Since the Census only reports national estimates, are data from every institution really needed?

Data collected from both public and private institutions to support gross national product estimates could also likely be gathered via sample surveys.
 

  4. Relief for small schools

There are many small schools in the IPEDS universe. Indeed, the majority of schools in IPEDS (60%) have undergraduate enrollments of fewer than 500 students, and around half of those have enrollments of fewer than 250 students. Having these small schools fill out the same IPEDS forms with the same regularity as a mega-university such as UT-Austin clearly puts a disproportionate burden on them.

Annual surveys of every small institution might be justified, but alternate collection schedules for some surveys are worth studying. Congress could consider the extent to which sample surveys of these small schools, or shifting some data collections from annual to biennial, might serve the public interest while reducing burden.
 

  5. Use existing administrative records instead of surveys

Congress could consider instructing the U.S. Department of Education to study how existing data sources can be used to produce information that is now collected by IPEDS. Two examples come immediately to mind:

  1. FSA already collects extensive information on student loans and federal student grants, such as Pell Grants. Why do institutions have to report these data again via IPEDS?
  2. The IPEDS finance survey contains information similar to what institutions file with the Office of Postsecondary Education through the EZAudit system. Periodically there have been discussions about coordinating these data collections—but both collections continue independently.

Note that reforms such as these would require NCES, FSA, and OPE to better coordinate their data collections. Historically, FSA in particular has been a reluctant partner in efforts such as these. But Congress could help change that.
 

  6. Changing FSA’s mission as a PBO

FSA has been classified as a Performance-Based Organization (PBO) since the 1998 reauthorization of the Higher Education Act. Its orientation is essentially that of a bank, focused solely on the administration of financial-aid programs rather than on reporting data or facilitating research. Title I, Part D of the 1998 HEA lays out seven purposes for FSA as a PBO:

  (A) “to improve service to students and other participants in the student financial assistance programs authorized under subchapter IV of this chapter and part C of subchapter I of chapter 34 of title 42, including making those programs more understandable to students and their parents
  (B) to reduce the costs of administering those programs
  (C) to increase the accountability of the officials responsible for administering the operational aspects of these programs
  (D) to provide greater flexibility in the management and administration of the Federal student financial assistance programs
  (E) to integrate the information systems supporting the Federal student financial assistance programs
  (F) to implement an open, common, integrated system for the delivery of student financial assistance…
  (G) to develop and maintain a student financial assistance system that contains complete, accurate, and timely data to ensure program integrity.”[v]

Under its current mandate, FSA is primarily, and rightly, concerned with its core responsibilities: assessing eligibility for aid, disbursing the aid, and tracking repayment. FSA is required to report some basic data on loan-default rates, and its data center provides access to aggregate data on loan disbursements; the distribution of repayment plans; the frequency of forbearance, deferment, and delinquency; and institution-level data on defaults, program reviews, and financial responsibility scores.[vi] However, FSA has often been less than responsive to requests for data and research that would benefit the rest of the nation.

There are several paths Congress could consider to improve FSA’s role in providing data for accountability and transparency. One step might be inserting new goals into FSA’s “Purposes of the PBO” that would call for a more active role in reporting NSLDS data, assessing the effectiveness of federal investments, and facilitating research.

While its role as a bank and originator of direct federal student loans must remain paramount, FSA’s structure as a PBO provides an opportunity to make it more responsive on data dissemination. Specifically, the chief operating officer must create an annual performance plan for FSA in consultation with students, institutions, Congress, lenders, and others. That plan could include the development and dissemination of data measuring the results of the taxpayers’ $130 billion annual investment in student financial aid. A formal revision of FSA’s “Purposes as a PBO” could make this a core part of FSA’s mission.

More specifically, point (G) could be revised to include uses for FSA data beyond program integrity, such as “to develop and maintain a student financial assistance system that contains complete, accurate, and timely data to provide updates on the state of the federal loan portfolio, assess the effectiveness of federal investments, and ensure program integrity.”
 

  7. Organizing and simplifying disclosures

In addition to formal reporting requirements, institutions must disclose information on a number of topics to prospective students and the public. The latest reauthorization of the Higher Education Act (in 2008) contained 40 separate disclosure requirements (nine of which applied only to loan borrowers).[vii] However, there is evidence that compliance with those disclosure requirements is spotty.[viii]

Disclosure requirements range from essential aspects of institutional activity—student financial-aid information, student outcomes, and health and safety—to peripheral aspects—availability of voter-registration forms and information about intercollegiate athletic programs. The disclosure requirements are often extensive and detailed.

Congress could consider whether all of these are necessary. Perhaps equally important, if these disclosures are deemed important, then to increase transparency and ease of access Congress might ask ED to study the creation of an Institutional Disclosures Page where all federal disclosures could be organized and made available to students and families. Such a single location would also make it easier to check institutional compliance with Congressional mandates.

 

Improving the measurement of student outcomes

The data available to the federal government for measuring student outcomes are limited. The success of students and institutions should be measured by how much students learn while attending and how much they earn after they leave. There is some agreement on how to assess labor-market outcomes. In contrast, measuring student learning outcomes, what many would call the most basic product of higher education, is far more contentious.

A recent report by ETS argued that there is a need for a “systematic, data-driven, comprehensive approach to understanding the quality of post-secondary education…with direct, valid, and reliable measures of student learning.” In that report, ETS explores the challenges of creating such a measurement system—including the difficulty of defining the different dimensions that should be included in a measure of student learning, ranging from workplace skills to academic expertise and encompassing both “hard skills” and so-called “soft skills” such as teamwork and creativity.[ix] Given the breadth of these demands, little consensus now exists on how to move forward. In turn, it is probably misguided for the federal government to invest scarce time and resources in trying to develop measures of learning outcomes for post-secondary education.[x]

The federal government has made some important moves toward making earnings data available—but more can be done.

In contrast to IPEDS, which measures what takes place within an institution, earnings data address a different question shared by policymakers, students, and families: what happens to students after they complete their studies. After all, the rhetoric surrounding higher education claims that it is the best human-capital investment individuals and governments can make. But as with any investment, ultimately the returns matter. Can Congress help make information about the return on investment (ROI) more available to consumers?

The most ambitious attempt to make these data available was the College Scorecard. However, that effort shows the challenges of gathering and presenting earnings data.

Even though the College Scorecard published data on the earnings of students up to 10 years after they enrolled in a post-secondary institution, much of the data available to measure the labor-market success of students is inadequate. Most notably, the earnings measures in the Scorecard were based only on students who received federal financial aid, and they were aggregated at the institution rather than the program level. As a result, the Scorecard, currently the federal government’s main source of post-secondary earnings data, does not adequately measure variation in earnings outcomes. In addition, the Scorecard data does not distinguish between students who completed credentials and those who did not.

As a result, we know very little about how students from different institutions and different programs of study fare after college. This makes it impossible to adequately measure the return on investment (ROI) of students or taxpayers, raising significant questions about what we are actually getting for the billions of dollars that the federal government, state governments, and families invest in post-secondary education. While we know that, on average, post-secondary education is a good investment, ROI varies widely across colleges and universities—and even more widely across different fields of study.[xi]
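To fix ideas, here is one simple way an analyst might operationalize ROI. This formulation is my own illustrative simplification, not the Scorecard’s method or any federal definition, and the baseline, discount rate, and horizon are all choices the analyst must make:

$$\mathrm{ROI} \;=\; \frac{1}{C}\left[\;\sum_{t=1}^{T}\frac{E_t - B_t}{(1+r)^t} \;-\; C\;\right]$$

Here E_t is expected annual earnings with the credential in year t, B_t is baseline earnings without it, r is a discount rate, T is the time horizon, and C is the total cost of the credential (tuition and fees plus forgone earnings). Program-level earnings data supply the E_t term; without them, neither students nor taxpayers can compute this quantity for the choices actually in front of them.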

To measure ROI at the institution and program level, one would need to merge two different sets of data. The first is individual student-level “transcript” data showing the year a student completed a course of study, the institution that awarded the post-secondary credential, and the field of study (identified by the federal Classification of Instructional Programs code, known as the CIP code). The second is wage data. At present, these wage data mostly come from state unemployment insurance (UI) wage systems, although the Scorecard used the more comprehensive, unduplicated W-2 wage data from the IRS.

Merging student-level data with either source of wage data relies on Social Security numbers, and the merging is usually done by the agency that holds the wage data (to protect privacy). The individual-level data are never made public. Rather, the data are aggregated at the program level, inspected to suppress any small programs (as a rule of thumb, programs that contain fewer than 10 cases are suppressed), and returned to the education agency that provided the transcript-level data.

There are currently no nationwide standards governing how these data are reported. For example, to minimize the number of missing programs caused by small enrollments, states that release merged transcript/wage data often combine several cohorts. Practices differ somewhat across states, but this is a technical issue that could (and should) be resolved by the federal government. A sketch of the basic pipeline follows.
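To make the mechanics concrete, here is a minimal sketch, in Python, of the merge-aggregate-suppress pipeline described above, including the pooling of several cohorts. The file names, column names, wage-measurement window, and the 10-case threshold are all illustrative assumptions, not any agency’s actual schema or rules.

    # Minimal sketch (Python/pandas). All file names, column names, the wage
    # window, and the 10-case threshold are illustrative assumptions.
    import pandas as pd

    SUPPRESSION_THRESHOLD = 10  # rule of thumb: suppress cells with fewer than 10 cases

    # Student-level "transcript" records: institution, CIP code, completion year.
    transcripts = pd.read_csv("transcripts.csv")  # columns: ssn, inst_id, cip_code, completion_year
    # UI wage records held by the workforce agency: one row per worker per year.
    wages = pd.read_csv("ui_wages.csv")           # columns: ssn, wage_year, annual_wage

    # Merge on Social Security number. In practice the wage-holding agency
    # performs this step so individual records never leave its custody.
    merged = transcripts.merge(wages, on="ssn")

    # Measure wages one full calendar year after completion (an assumed window),
    # and pool three completion cohorts to reduce small-cell suppression.
    merged = merged[merged["wage_year"] == merged["completion_year"] + 1]
    pooled = merged[merged["completion_year"].between(2011, 2013)]

    # Aggregate to the institution/program level.
    summary = (pooled.groupby(["inst_id", "cip_code"])["annual_wage"]
                     .agg(n="count", median_wage="median")
                     .reset_index())

    # Suppress small programs before returning results to the education agency.
    summary.loc[summary["n"] < SUPPRESSION_THRESHOLD, "median_wage"] = None

Only the aggregated, suppressed table would be returned to the education agency; the individual-level merge never leaves the wage-holding agency’s custody.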

There is also a question about what to do with students who enroll in but do not complete a program. Most states focus on the wages of completers, but, as is well known, large numbers of students never finish. The federal Scorecard data tracked cohorts of students but did not distinguish between completers and leavers. The transcript data can also include demographic information (e.g., race or gender). This could provide valuable information about the differential success of different types of students, but it adds complexity to the aggregated data.

Yet another challenge is the level of data the federal government needs to assess student success. As noted, the Scorecard used data only on students who participated in a Title IV program. Because the Department of Education must know whether students are in good standing with an institution of higher education in order to know when students must begin repaying their loans, the NSLDS maintains detailed records of the enrollment of students receiving federal aid at any Title IV-approved institution. Moreover, Title IV student-level data chart the path of the students in whom the nation’s taxpayers are investing the most money. And there is certainly a compelling federal interest in knowing the extent to which Title IV students are succeeding in the pursuit of post-secondary credentials.

As noted, the federal Scorecard reported wage data at the institution level, the only level at which the NSLDS can currently collect data. The Department of Education may overcome this limit in the next several years because institutions must now report to FSA information on the programs in which students are enrolled. (This information is needed because the 150% Subsidized Loan Limitation provisions are based on the borrower’s enrollment in a specific program.) Because student outcomes vary greatly across programs of study, both within and across institutions, these program-level data are essential. In short, to the extent that FSA collects student-level indicators of success at the program level for students who have received federal student loans and/or Pell Grants, the nation has the potential to better measure the payoff of its large investment in post-secondary students.

But note that these efforts require cooperation among different government agencies, which hold different data systems that must be integrated for maximum effect. That, however, leads to yet another set of issues requiring Congressional action.

 

Improving intergovernmental data sharing agreements

There are many data systems housed in different federal agencies. By merging these existing data systems, we can measure the return that taxpayers and students earn on the time and money they invest in higher education.

It is important to remember that these data systems were created for many different purposes—and not for the measurement of student success and return on investment.

For example, 

  • The Federal Student Aid student-level data system was designed to track the disbursement of Title IV funds.
  • The American Community Survey has detailed data on educational attainment, occupation, and other outcomes that could be tied to more specific student-level information.
  • The Census Bureau’s Longitudinal Employer-Household Dynamics program holds extensive wage data that states have agreed to share through their Unemployment Insurance earnings data. These too could be tied to more specific student-level information.
  • And of course the IRS holds individual-level wage data, in some ways the ultimate measure of student success. These data too could be merged with student-level information, as was done for the College Scorecard.

The point is that scattered across many different agencies are the data that we need to better measure what taxpayers are getting back from the billions upon billions of dollars the nation spends every year on higher education. But to use them, these disparate data have to be merged.

The problem is that merging these data is difficult and cumbersome. Each necessary data sharing agreement is currently a hand-crafted effort, requiring a great deal of time and energy—all handicapped by the complex rules and laws governing each of the different data systems. This means that MoUs between agencies for data sharing are often negotiated, renegotiated, and then negotiated again, with numerous lawyers and data owners involved at every step. Complex rules then govern the level at which the data can be reported.

I by no means want to suggest that protecting the privacy of students and taxpayers is not of the highest priority. However, the rules governing each of these different data systems all too often lead to paralysis, preventing the generation of the evidence we need to support good decision making.

As a result, we end up spending months, if not years, handcrafting data sharing agreements, because there is no infrastructure to support a regularized path to combining these multiple data sources. There are some areas in which Congress could encourage data sharing and increased access to improve transparency and accountability.

The Commission on Evidence-Based Policymaking is expected to report the results of its two-year investigation. The commission is explicitly focusing on key issues related to the use of survey and administrative data:

  • Existing barriers to accessing and using data the government already collects
  • Strategies for better integrating existing data with appropriate infrastructure and security, to support policy research and evaluation
  • Practices for monitoring and assessing outcomes of government programs
  • Whether a data clearinghouse could enhance program evaluation and research opportunities

The results of the Commission’s work will, one hopes, provide a roadmap for making better use of existing administrative data systems for accountability. But regardless of the Commission’s recommendations, legislation will be necessary to coordinate the different laws, rules, and regulations that currently impede the merging of already existing data. And Congress needs to weigh the benefits of these merged data against the increased privacy risks of combining them.

Concluding Comment

There are multiple paths Congress could consider to improve data collections in a way that could make data more useful, usable, and used by students and policymakers. All of these can strengthen the foundation for better consumer choice and, through better choice, better institutional performance. However, as the nation considers these paths, the federal government needs to be careful about mixing consumer-information tools and regulatory tools. While there may be overlap between the information consumers need and the information regulators need, mixing the two can create problems. And the way in which data are collected, curated, and displayed varies greatly depending on the primary focus of the effort.

Notes


[i] For additional information, see College Measures: www.air.org/collegemeasures.

[ii] Michael Hurwitz and Jonathan Smith, Student Responsiveness to Earnings Data in the College Scorecard, Social Science Research Network, May 2016, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2768157.

[iii] For more information on the private student loan market and its loan underwriting methods, see Andrew P. Kelly and Kevin J. James, Looking Backward or Looking Forward: Exploring the Private Student Loan Market, AEI, June 2016, www.aei.org/wp-content/uploads/2016/05/Looking-Backward-or-Looking-Forward.pdf.

[iv] See U.S. Department of Education, National Center for Education Statistics, “IPEDS 2016-17 Data Collection System – 2016-17 Survey Materials,” https://surveys.nces.ed.gov/ipeds/VisResults.aspx?mode=results.

[v] 20 U.S.C. § 1018.

[vi] For its most recent annual report, see U.S. Department of Education, Federal Student Aid, 2016 Annual Report, 2016, https://www2.ed.gov/about/reports/annual/2016report/fsa-report.pdf.

[vii] See U.S. Department of Education, National Center for Education Statistics, and National Postsecondary Education Cooperative, Information Required to Be Disclosed Under the Higher Education Act of 1965: Suggestions for Dissemination, November 2009, http://nces.ed.gov/pubs2010/2010831rev.pdf.

[viii] Kevin Carey and Andrew P. Kelly, The Truth Behind Higher Education Disclosure Laws, AEI and Education Sector, 2011, http://www.air.org/sites/default/files/Higher-Education-Disclosure-Laws.pdf.

[x] The specter of a testing regime for colleges and universities that would immediately be compared to the mandatory tests of No Child Left Behind should alone be enough to give the government pause.

[xi] See the various reports and databases at College Measures, http://www.air.org/collegemeasures.