Undergraduate Financial Aid in the United States

Financial Aid Policy: Questions and Concerns

Author
Judith Scott-Clayton
Project
Commission on the Future of Undergraduate Education

This section discusses some of the questions and concerns that are frequently raised regarding financial aid policy. Key points of contention are explained and, where possible, research evidence is summarized.


What is the evidence that financial aid improves college access and completion outcomes?

Distinguishing the true causal effect of financial aid from preexisting differences is conceptually challenging, because aid programs often systematically target recipients based on characteristics (such as need, merit, or motivation to enroll) that may independently influence outcomes of interest. Rigorous research, however, convincingly shows that net prices do influence college enrollment, persistence, and completion decisions. As early as 1988, research reviews indicated that a $1,000 decrease in net price was generally associated with a 3- to 5-percentage-point increase in college attendance.50 Subsequent research using more rigorous experimental and quasi-experimental methods found positive effects of a similar magnitude, across a range of contexts.51

Research has found positive effects of aid receipt not just on enrollment overall but on college choice, persistence, degree completion, and beyond. For example, one recent randomized evaluation of the Buffett Scholarship program in Nebraska (which considers both need and merit) finds that scholarship winners were significantly more likely to switch from two-year to four-year institutions and were more likely to persist there as well.52 Other studies have found that both need-based and merit-based state aid programs can improve bachelor’s degree completion rates.53 Preliminary evidence on relatively new place-based “promise” programs such as those in Kalamazoo and Tennessee suggests they may have particularly large impacts on enrollment and graduation per dollar of aid.54 New evidence on post-college outcomes suggests that students who receive grants as undergraduates also have higher graduate school enrollment, higher earnings, and higher homeownership rates than similar students who do not receive the same grants.55

Despite the preponderance of positive results in the literature, some notable null findings demonstrate that program design matters and positive impacts are not guaranteed. For example, two examinations of broad-based state merit aid programs using national data found no effects on overall degree completion, and a study of the Adams Scholarship in Massachusetts found that the merit-based program reduced degree attainment by inducing students to switch to under-resourced in-state institutions.56 And two recent studies found that none of the higher education tax benefits—credits and deductions valued at over $15 billion in 2013–2014—influences college enrollment, perhaps because the benefits are not realized until months after the enrollment decision is made.57

Also unclear is whether loans or work-study have the same effects as grants. While evidence from outside the United States suggests student loans can have a substantial impact on college access, a 2008 review of the U.S. literature concluded that students are not as sensitive to loans as to grants (though the review could not determine whether loans are nonetheless cost-effective, since the vast majority of loans provided are ultimately repaid to the government).58 Evidence on work-study has been mixed, perhaps because the effects of the program genuinely vary from context to context.59 One study found that effects were most positive for low-income students at public institutions, in part because these students are more likely to work anyway—in less-desirable off-campus jobs—in the absence of Federal Work-Study (FWS).60


Which design features are most important in financial aid programs?

Practitioners and scholars increasingly acknowledge two critical features of financial aid program design: complexity and timing. While the increasing availability of financial aid is a good thing for students and families, it also means that figuring out the net price they will personally pay—early enough to do anything about it—is more complicated than ever. Sticker prices may be relatively easy to locate online, but getting good estimates of likely aid eligibility at different institutions can be much more challenging. Just because the information exists somewhere online does not mean students and their families ever see it. This lack of transparency can undermine the effectiveness of financial aid, making it harder to reach students who need aid most. Misperceptions about college costs are widespread and are most prevalent among students from the lowest-income backgrounds, likely contributing to persistent gaps in postsecondary attainment.61 High-achieving low-income students often do not even apply to highly selective schools (a phenomenon known as “undermatch”), in part because they are unaware of the substantial aid available at such institutions.

As a result of this complexity and confusion, many students fail to access aid for which they would qualify. While FAFSA application rates have risen over time—from 50 percent of undergraduates in 1999–2000 to 70 percent in 2011–2012—substantial numbers of eligible students still fail to apply. Of the 30 percent of students who failed to file a FAFSA, one-third would have qualified for a Pell Grant.62 In addition, many FAFSA filers apply after important deadlines, in turn decreasing the likelihood of receiving state and institutional aid for which they would otherwise be eligible.63 Similar problems may explain the lack of impact of education tax benefits: the value of the benefit is not known in many cases until several months after enrollment, and many households fail to optimize which of the available benefits they claim.64

Two influential studies provide dramatic evidence regarding the consequences of complexity. In one, researchers randomly selected a subset of low-income families who visited tax-preparation centers and offered them personal assistance with completing and submitting the FAFSA. The intervention took less than ten minutes and cost less than $100 per participant but increased immediate college entry rates by 8 percentage points (24 percent) for high school seniors and 1.5 percentage points (16 percent) for independent participants with no prior college experience.65 After three years, participants in the full treatment group had accumulated significantly more time in college than the control group. In the other study, researchers randomly selected high-achieving, low-income students from a College Board database and mailed them packets of information on net costs and application procedures at different types of institutions, along with vouchers for automatic application fee waivers.66 The intervention cost only $6 per student but significantly increased enrollment rates at highly selective colleges and universities. Whether this intervention would be similarly effective among less high-achieving groups is not obvious, but these two experiments taken together suggest that simplifying the aid information and application process may be a highly cost-effective strategy for reducing inequality in college attainment.
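A rough back-of-the-envelope calculation clarifies why the FAFSA-assistance result is read as highly cost-effective. The sketch below uses the approximate figures reported above (a cost of roughly $100 per participant and an 8-percentage-point enrollment effect for high school seniors); it is an illustration of the arithmetic only, not an analysis from the study itself.

    # Illustrative cost-effectiveness arithmetic, using the approximate figures
    # cited above; not the study authors' own calculation.
    cost_per_participant = 100.0   # rough per-participant cost of the FAFSA assistance
    enrollment_effect = 0.08       # 8-percentage-point gain in immediate entry (seniors)

    # Every participant incurs the cost, but only the marginal 8 percent are
    # induced to enroll, so the implied cost per additional entrant is:
    cost_per_additional_entrant = cost_per_participant / enrollment_effect
    print(f"Implied cost per additional college entrant: ${cost_per_additional_entrant:,.0f}")
    # -> roughly $1,250, small relative to the cost of a year of grant aid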

While the U.S. Department of Education has made progress in recent years in reducing the number of questions on the FAFSA and enabling some students to automatically import tax information from the Internal Revenue Service (IRS), these incremental improvements have had a limited impact on the application experience overall. In particular, they have not enabled students to easily discern their eligibility well in advance of application or substantially reduced the hassle factors.67 Since the main determinants of Title IV aid eligibility are already collected via the IRS Form 1040, some have proposed eliminating the FAFSA completely and instead determining eligibility using income and other data from tax forms, much as the education tax benefits already do.68 Similarly, some scholars have recommended streamlining the education tax benefits to make them easier to understand and enable families to claim them earlier, closer to when costs are actually incurred.69


Should financial aid have performance requirements?

Evidence suggests that aid programs that incorporate achievement incentives are particularly effective, especially when the goal is to improve college performance and completion (rather than college entry alone). For example, randomized evaluations of performance-based scholarships run by the nonprofit social policy research organization MDRC found significant positive effects on persistence and graduation.70 A quasi-experimental study of West Virginia’s PROMISE scholarship, which required a minimum GPA and successful completion of 30 credits per year to renew, found that the program increased GPAs and credits completed in the first three years of college. In the fourth and final year of the scholarship—while students were still receiving the money but no longer faced the achievement incentives—the program’s effects disappeared, suggesting that the performance requirements, and not just the money itself, were driving effectiveness (the impacts in the first three years were enough to improve on-time degree completion by 7 percentage points).71

Academic incentives may improve not only performance after college entry but college preparation and initial enrollment as well. For example, a study of the introduction of Tennessee’s state merit aid program, which provided large college scholarships to students with minimum high school GPA and SAT/ACT test scores, found that the scholarship significantly improved high school achievement as measured by ACT test scores (the increases in test scores were too large to be explained simply by increases in retesting).72 A similar study of a program in Texas that paid eleventh- and twelfth-grade students and teachers for earning passing scores on Advanced Placement (AP) exams found that the policy not only improved AP exam scores but increased college enrollment rates as well as college academic performance even for those students who would have gone to college anyway.73

An important caveat is that performance incentives must be salient to students in order to be effective. If students first learn of academic standards when they learn they have not met them, it may be too late to recover. A recent study of federal Satisfactory Academic Progress standards finds that the policy functions primarily as a cost control—by cutting off low-performing students from receiving additional aid—rather than as an incentive that increases attainment over the long term.74


Unintended consequences: The “Bennett Hypothesis” and fiscal federalism

As the volume of available aid for college grows, one concern often raised is whether this simply encourages institutions to increase tuition even faster. This is referred to as the “Bennett Hypothesis” after former U.S. Secretary of Education William Bennett, who raised the concern. Some evidence supports it, but primarily among private sector institutions. For example, proprietary schools that are eligible to receive federal Title IV aid charge significantly more than similar institutions that are not eligible for federal aid.75 And one study found that at selective nonprofit institutions, up to two-thirds of Pell Grant awards were clawed back from students through reductions in institutional grant aid.76 However, at the public institutions most Pell recipients attend, the same study found no evidence of such claw-backs.

A broader concern raised recently is how federal and state investments in higher education interact. As federal investments have increased, has this served to buffer reductions in state and local appropriations, or might it accelerate them? Limited research is available to answer this question, but some evidence suggests state governments take federal support into account when setting their higher education budgets. For example, when “maintenance of effort” provisions were inserted into the American Recovery and Reinvestment Act of 2009, requiring states to commit at least as much postsecondary funding as they had in 2006 if they wanted to receive the maximum in higher-education-related federal stimulus dollars, many states opted to reduce their expenditures to almost exactly the required minimum.77 U.S. senators on both sides of the aisle have also noted the perverse incentives created when federal support for K-12 education, health, and transportation is made contingent upon state maintenance-of-effort provisions while support for higher education generally is not.78


Are students overburdened with debt?

Without question, debt loads have increased substantially over time. Students today borrow nearly three times more per year on average than students who enrolled twenty-five years ago (though slightly less than students of a decade ago).79 Borrowing is higher for students at four-year institutions than at two-year institutions and higher for those at private institutions than at public ones. Among students who complete a bachelor’s degree, 61 percent have student loan debt. The average amount among those with any debt is $26,900.80 Less than 0.3 percent of bachelor’s degree recipients leave college with more than $100,000 in undergraduate debt, despite the seeming prevalence of these unusual cases in media accounts.81 Most individuals with student debt in excess of $100,000 have graduate debt.

Although today’s students borrow far more than previous generations did, little evidence supports the idea that their debt burden is unmanageable on average. The vast majority of borrowers are able to repay, thanks to the strong earnings prospects of those with higher education.82 Some studies have found that people with student loan debt have lower rates of homeownership and lower psychological well-being, though other analysts caution that more rigorous evidence is needed to determine whether these relationships are truly causal.83 While graduating with less debt may be preferable to graduating with more, evidence suggests that college attainment itself has a far stronger effect on future outcomes than students’ level of debt per se.84 For example, one state grant program that significantly reduced undergraduate debt led to increases in graduate school enrollment—and thus increases in graduate school debt—such that recipients ended up with, if anything, slightly more debt than nonrecipients. But they also had higher earnings and higher rates of homeownership—effects more likely attributable to other program mechanisms (such as improved GPAs and reduced time to degree) than to the reduction of undergraduate debt.85

Of course, averages mask important heterogeneity and risk—particularly in the first few years after leaving school. Many students do not even know how much they have taken out in loans, let alone what their monthly repayment will be.86 The default loan repayment plan asks students to pay back their student debt over a ten-year period right after college, when earnings are lowest and most variable, creating nontrivial risk around students’ ability to repay.87 Four years after getting a bachelor’s degree, nearly one in five graduates is making payments that exceed 15 percent of their income.88 Moreover, the current provisions intended to protect students against default (including loan deferment, forbearance, and various pay-as-you-earn, income-based, income-contingent, or extended loan repayment plans) are themselves so complex that many students at risk fail to take advantage of them before they get into repayment trouble. This loan repayment risk varies substantially by race. Black borrowers are three times as likely to default as white borrowers, and among black bachelor’s degree holders, 48 percent see their undergraduate loan debt grow in the first four years after graduation (due to interest accumulation), compared with just 17 percent of white graduates.89 Borrowers are much less likely to fall behind on their loans in countries that automatically enroll them in income-contingent repayment plans (such as Australia and the United Kingdom) or that have a longer expected repayment timeframe (twenty and twenty-five years in Germany and Sweden, respectively).90
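The repayment-burden arithmetic behind figures like the 15 percent threshold above can be made concrete with a standard amortization calculation. In the sketch below, the debt figure matches the average cited earlier, while the interest rate and early-career salary are illustrative assumptions rather than values taken from the studies cited.

    # Sketch of the standard ten-year repayment burden. The debt level matches the
    # average cited above; the interest rate and salary are assumed for illustration.
    def monthly_payment(principal, annual_rate, years):
        """Ordinary fixed-rate amortization formula."""
        r = annual_rate / 12           # monthly interest rate
        n = years * 12                 # number of monthly payments
        return principal * r / (1 - (1 + r) ** -n)

    debt = 26_900      # average debt among bachelor's completers who borrow
    rate = 0.05        # assumed fixed annual interest rate
    salary = 35_000    # assumed early-career annual salary

    payment = monthly_payment(debt, rate, 10)
    share_of_income = payment * 12 / salary
    print(f"Monthly payment: ${payment:,.0f}; share of income: {share_of_income:.0%}")
    # With these assumptions the payment is about $285 a month, near 10 percent of
    # income; a lower salary or a higher balance pushes the burden past 15 percent.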

Perhaps counterintuitively, the borrowers most likely to run into trouble are not the ones with particularly high levels of debt but students who leave college without earning a credential. Students with more debt tend to have higher levels of attainment and higher earnings.91 A recent analysis of borrowers found that those with less than $5,000 in debt had a default rate almost twice as high as those with $100,000 in debt (34 percent versus 18 percent).92 Even small debts can spiral out of control for students who leave college without a credential. Scholars have suggested reforming student loan repayment options to minimize students’ repayment risks and better communicate both risks and protections upfront.93


What are the advantages and challenges of “making college free”?

President Barack Obama’s proposal in 2015 to eliminate tuition for America’s community college students could be a case study in the messaging power of “free”; it caught people’s attention in a way that prior efforts to lower the price of college have not. Googling “Obama free community college” returns 18.7 million hits (down from a whopping 75 million results shortly after the proposal was announced), compared with just 141,000 for “Obama Pell Grant increase.” What many people do not realize is that about 40 percent of community college students already receive enough grant aid to fully cover their tuition (including 85 percent of Pell recipients at community college).94 But the current system requires students to navigate the complex aid application process and take a leap of faith in the meantime. Free community college thus may improve access even for those who already qualify for substantial aid. Moreover, tuition and fees are not the only costs college students face. Transportation, books, and food alone can easily add up to more than the cost of tuition. If tuition were free, low-income students could instead use their other aid to pay for more of these additional costs of enrollment.
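A stylized student budget illustrates the point. All of the dollar figures in the sketch below are assumptions chosen for illustration, not values from the sources cited; the comparison simply shows how free tuition changes what other grant aid can cover.

    # Stylized community college budget; every dollar figure is an illustrative
    # assumption, not a value from the sources cited in the text.
    tuition_and_fees = 3_500   # assumed annual sticker price
    living_costs = 5_000       # assumed books, transportation, and food
    grant_aid = 5_800          # assumed grant package (e.g., a full Pell award)

    # Status quo: grant aid is credited against tuition and fees first,
    # and only the remainder is available for other costs of attendance.
    remainder = max(0, grant_aid - tuition_and_fees)
    print(f"Status quo: ${remainder:,} of grant aid left for living costs")

    # With free tuition, the entire grant can offset living costs instead.
    print(f"Free tuition: ${min(grant_aid, living_costs):,} of grant aid left for living costs")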

The success of local “promise” programs, which preceded the Obama administration’s own College Promise proposal, suggests that such programs could have substantial impacts on enrollment and completion. But the local programs that inspired President Obama, such as the Tennessee Promise, have often been part of broader reforms designed to improve student persistence and completion. These other reforms—such as improving student advising and making it easier for students to transfer courses—require resources, careful planning, and knowledge of local context. Whether a national program can replicate the early successes of state and local programs remains an open question of active debate.

Some have extended the Obama proposal to suggest that all public higher education should be free.95 Lower sticker prices certainly simplify the marketing message, and many other countries do offer free postsecondary education. But complete reliance on public finance is not without risk. In many countries, free higher education comes at the cost of state-specified caps on enrollment and/or lower quality.96 The advantage of a higher-tuition, higher-aid model is that it makes use of private resources from those students who can afford to pay, while enabling any given level of public subsidies to go further by better targeting students who need assistance most. A central challenge for policy-makers going forward is whether the problems of complexity and confusion that undermine the effectiveness of financial aid can be solved, without necessarily making college completely free.


Are stakeholders doing enough to ensure that students use their financial aid for institutions and programs that serve them well?

Postsecondary institutions are increasingly stratified in terms of both inputs and outputs, so students’ choice of institution is more consequential than ever.97 But students can have difficulty assessing institutional quality in advance. If college students are misinformed or uncertain about the value of different programs, this may lead to underinvestment or misallocated investments in education.98 The concern that students may use federal and state financial support for programs that have little benefit—and, with student loans, could even leave them worse off—has led to new efforts at the state and federal levels to improve both information and accountability.

Reporting and rewarding measures of institutional performance can, in theory, generate both better information and stronger financial incentives to improve the decision-making processes of prospective students, policy-makers, and institutions.99 Students can benefit from improved information by identifying programs that better fit their goals, preparation, and budgets. State and federal policy-makers can use performance reporting to assess whether institutions are using their grant aid efficiently to improve student outcomes.100 Even before formal stakes are attached to such measures, simply tracking and reporting them can help stimulate organizational learning.101

In his 2013 State of the Union address, President Obama gave voice to the accountability movement by calling for institutions to be “[held] accountable for cost, value, and quality,” eventually by linking measures of institutional performance to federal aid.102 In September 2015, the Obama administration took a major step toward this goal by releasing an updated version of its College Scorecard, which for the first time provided information not just on college costs and graduation rates but on median post-college earnings at over four thousand institutions nationwide. The accountability agenda is even more advanced at the state level. As of 2015, thirty-two states were already utilizing performance or “outcomes-based” formulae to distribute funding for public institutions, with another five in the process of implementing such a plan.103 While in most states the portion of state funding that is performance-based remains small—typically less than 10 percent—two states (Tennessee and Ohio) now base most of their institutional funding on performance metrics.104
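To make the mechanics concrete, the sketch below shows one highly simplified way an outcomes-based formula can split a state appropriation between an enrollment-based base and a performance pool. The 10 percent performance share echoes the typical magnitude noted above, but the structure, metrics, and weights are hypothetical and do not correspond to any particular state's formula.

    # Hypothetical outcomes-based funding split. The performance share echoes the
    # "typically less than 10 percent" figure above; the metrics and weights are
    # invented for illustration and match no actual state formula.
    def institution_allocation(state_appropriation, performance_share,
                               enrollment_share, outcome_shares, weights):
        """One institution's funding: the base pool is distributed by enrollment
        share and the performance pool by a weighted share of statewide outcomes."""
        base_pool = state_appropriation * (1 - performance_share)
        performance_pool = state_appropriation * performance_share
        outcome_score = sum(weights[m] * share for m, share in outcome_shares.items())
        return base_pool * enrollment_share + performance_pool * outcome_score

    funding = institution_allocation(
        state_appropriation=1_000_000_000,   # hypothetical statewide appropriation
        performance_share=0.10,              # small share tied to outcomes
        enrollment_share=0.05,               # institution enrolls 5% of students statewide
        outcome_shares={"degrees": 0.04, "transfers": 0.06},  # shares of statewide outcomes
        weights={"degrees": 0.7, "transfers": 0.3},           # weights sum to 1
    )
    print(f"Institution's allocation: ${funding:,.0f}")
    # An institution whose outcome shares lag its enrollment share receives
    # slightly less than a pure enrollment-based split would provide.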

Prior research suggests that improving information on its own, without providing individualized outreach and guidance, may have limited impact.105 The wrong type of information can also potentially distort students’ choices in adverse ways. For example, post-college average earnings data may discourage students from enrolling in programs that have stronger payoffs in the long term than in the short term, or programs that generate nonmonetary benefits that are not captured in average earnings.106

Rigorous evidence regarding the effectiveness of state performance policies is also somewhat discouraging. Two recent quasi-experiments compared trends over time in states adopting new policies and in states that did not, finding evidence of unintended strategic responses. Some institutions appear to enroll fewer low-income students in reaction to performance incentives, while some community colleges appear to increase the production of short-term certificates, but not associate’s degrees, when completion rates are introduced as a performance metric.107 Thus, efforts to improve information and accountability must balance the value of strengthened incentives against the potential for unintended distortions and strategic behavior.


ENDNOTES

50. Larry L. Leslie and Paul T. Brinkman, The Economic Value of Higher Education, American Council on Education/Macmillan Series on Higher Education (New York: Macmillan Publishing, 1988).

51. See Page and Scott-Clayton, “Improving College Access in the United States: Barriers and Policy Responses,” for a recent review.

52. Joshua Angrist, Sally Hudson, and Amanda Pallais, “Evaluating Econometric Evaluations of Post-Secondary Aid,” The American Economic Review 105 (5) (2015): 502–507.

53. Benjamin L. Castleman and Bridget Terry Long, “Looking beyond Enrollment: The Causal Effect of Need-Based Grants on College Access, Persistence, and Graduation,” Working Paper 19306 (Cambridge, Mass.: National Bureau of Economic Research, 2013); Susan Dynarski, “Building the Stock of College-Educated Labor,” Journal of Human Resources 43 (3) (2008): 576–610; Judith Scott-Clayton, “On Money and Motivation: A Quasi-Experimental Analysis of Financial Incentives for College Achievement,” Journal of Human Resources 46 (3) (2011): 614–646.

54. Bartik and Lachowska, “The Short-Term Effects of the Kalamazoo Promise Scholarship on Student Outcomes”; Bartik, Hershbein, and Lachowska, “Longer-Term Effects of the Kalamazoo Promise Scholarship on College Enrollment, Persistence, and Completion”; Carruthers and Fox, “Aid for All: College Coaching, Financial Aid, and Postsecondary Persistence in Tennessee.”

55. Eric Bettinger, Oded Gurantz, Laura Kawano, and Bruce Sacerdote, “The Long Run Impacts of Merit Aid: Evidence from California’s Cal Grant,” Working Paper 22347 (Cambridge, Mass.: National Bureau of Economic Research, 2016); Judith Scott-Clayton and Basit Zafar, “Financial Aid, Debt Management, and Socioeconomic Outcomes: Post-College Effects of Merit-Based Aid,” Working Paper 22574 (Cambridge, Mass.: National Bureau of Economic Research, 2016).

56. Maria D. Fitzpatrick and Damon Jones, “Higher Education, Merit-Based Scholarships and Post-Baccalaureate Migration,” Working Paper 18530 (Cambridge, Mass.: National Bureau of Economic Research, 2012); David L. Sjoquist and John V. Winters, “Building the Stock of College-Educated Labor Revisited,” Journal of Human Resources 47 (1) (2012): 270–285; Sarah R. Cohodes and Joshua S. Goodman, “Merit Aid, College Quality, and College Completion: Massachusetts’ Adams Scholarship as an In-Kind Subsidy,” American Economic Journal: Applied Economics 6 (4) (2014): 251–285.

57. George B. Bulman and Caroline M. Hoxby, “The Returns to the Federal Tax Credits for Higher Education,” Working Paper 20833 (Cambridge, Mass.: National Bureau of Economic Research, 2015); Caroline M. Hoxby and George B. Bulman, “The Effects of the Tax Deduction for Postsecondary Tuition: Implications for Structuring Tax-Based Aid,” Economics of Education Review 51 (2016): 23–60.

58. Alex Solis, “Credit Access and College Enrollment,” paper presented at the 2015 meeting of the American Economic Association, Boston, Mass., January 2015, https://www.aeaweb.org/aea/2015conference/program/retrieve.php?pdfid=862; Marc Gurgand, Adrien J. S. Lorenceau, and Thomas Mélonio, “Student Loans: Liquidity Constraint and Higher Education in South Africa,” Working Paper 117 (Paris: Agence Française de Développement, 2011); Donald E. Heller, “The Impact of Loans on Student Access,” in Sandy Baum, Michael McPherson, and Patricia Steele, eds., The Effectiveness of Student Aid Policies: What the Research Tells Us (New York: The College Board, 2008), 39–68. Also see Erin Dunlop, “What Do Stafford Loans Actually Buy You? The Effect of Stafford Loan Access on Community College Students,” CALDER Working Paper No. 94 (Washington, D.C.: National Center for Analysis of Longitudinal Data in Education Research, 2013); Mark Wiederspan, “Denying Loan Access: The Student-Level Consequences When Community Colleges Opt Out of the Stafford Loan Program,” Economics of Education Review 51 (2016): 79–96.

59. Adela Soliz and Bridget Terry Long, “The Causal Effect of Federal Work-Study on Student Outcomes in the Ohio Public University System,” CAPSEE Working Paper (New York: Center for Analysis of Postsecondary Education and Employment Conference at Columbia University, 2014); Judith Scott-Clayton, “The Causal Effect of Federal Work-Study Participation: Quasi-Experimental Evidence from West Virginia,” Educational Evaluation and Policy Analysis 33 (4) (2011): 506–527; Judith Scott-Clayton and Veronica Minaya, “Should Student Employment Be Subsidized? Conditional Counterfactuals and the Outcomes of Work-Study Participation,” Economics of Education Review 52 (2016): 1–18.

60. Scott-Clayton and Minaya, “Should Student Employment Be Subsidized? Conditional Counterfactuals and the Outcomes of Work-Study Participation.”

61. See review by Page and Scott-Clayton, “Improving College Access in the United States: Barriers and Policy Responses.”

62. Author’s calculations based on data from the 2011–2012 National Postsecondary Student Aid Study (NPSAS).

63. Jacqueline E. King, Missed Opportunities: Students Who Do Not Apply for Financial Aid (Washington, D.C.: American Council on Education, 2004).

64. Nicholas Turner, “Why Don’t Taxpayers Maximize Their Tax-Based Student Aid? Salience and Inertia in Program Selection,” The B.E. Journal of Economic Analysis & Policy: Contributions 11 (1) (2011): 1–24.

65. Eric P. Bettinger, Bridget Terry Long, Philip Oreopoulos, and Lisa Sanbonmatsu, “The Role of Application Assistance and Information in College Decisions: Results from the H&R Block FAFSA Experiment,” The Quarterly Journal of Economics 127 (3) (2012): 1205–1242.

66. Caroline Hoxby and Sarah Turner, “Expanding College Opportunities for High-Achieving, Low Income Students,” Stanford Institute for Economic Policy Research (SIEPR) Discussion Paper 12-014 (Stanford, Calif.: SIEPR, 2013).

67. The Department of Education recently implemented a new “data-retrieval” tool that enables applicants to automatically prefill their FAFSA with tax elements from the IRS. A major limitation of this tool, however, has been timing: states and institutions may have FAFSA deadlines well before income tax data are available from the IRS. Some state deadlines fall in February or simply tell students to file “as early as possible after January 1.” Basing eligibility only on prior-prior year income tax data (for instance, 2014 tax year information for students enrolling in 2016) is an important new change just going into effect for 2016–2017 that aims not only to enable students to file the FAFSA sooner but to allow more students to benefit from the data-retrieval tool. Time will tell whether this has a more appreciable impact than previous attempts at incremental reform.

68. See Susan M. Dynarski and Judith E. Scott-Clayton, “The Cost of Complexity in Federal Student Aid: Lessons from Optimal Tax Theory and Behavioral Economics,” National Tax Journal (2006): 319–356, which documents that most of the information on the FAFSA is unnecessary. Students’ Pell eligibility can be determined with a high level of precision using just a handful of elements from the form, primarily income and family size. Simplification proposals include the Financial Aid Simplicity and Transparency (FAST) Act introduced by Senators Lamar Alexander and Michael Bennet in January 2015, as well as earlier proposals by the Institute for College Access and Success in 2007. Susan M. Dynarski and Judith Scott-Clayton, “College Grants on a Postcard: A Proposal for Simple and Predictable Federal Student Aid,” Hamilton Project Discussion Paper (Washington, D.C.: Brookings Institution, 2007); Sandy Baum and Judith Scott-Clayton, “Redesigning the Pell Grant Program for the Twenty-First Century,” Hamilton Project Discussion Paper (Washington, D.C.: Brookings Institution, 2013).

69. Hoxby and Bulman, “The Effects of the Tax Deduction for Postsecondary Tuition.”

70. Reshma Patel and Ireri Valenzuela (with Drew McDermott), Moving Forward: Early Findings from the Performance-Based Scholarship Demonstration in Arizona (New York: MDRC, 2013); Lashawn Richburg-Hayes, Thomas Brock, Allen LeBlanc, Christina H. Paxson, Cecilia E. Rouse, and Lisa Barrow, Rewarding Persistence: Effects of a Performance-Based Scholarship Program for Low-Income Parents (New York: MDRC, 2009).

71. Scott-Clayton, “On Money and Motivation.”

72. Amanda Pallais, “Taking a Chance on College: Is the Tennessee Education Lottery Scholarship a Winner?” Journal of Human Resources 44 (1) (2009): 199–222.

73. C. Kirabo Jackson, “A Little Now for a Lot Later: An Evaluation of a Texas Advanced Placement Incentive Program,” Journal of Human Resources 45 (3) (2010): 591–639.

74. Judith Scott-Clayton and Lauren Schudde, “Performance Standards in Need-Based Student Aid,” paper presented at the NBER Education Meeting, May 2015.

75. Cellini and Goldin, “Does Federal Student Aid Raise Tuition? New Evidence on For-Profit Colleges.”

76. Lesley J. Turner, “The Road to Pell is Paved with Good Intentions: The Economic Incidence of Federal Student Grant Aid,” unpublished manuscript, University of Maryland, 2013, http://econweb.umd.edu/~turner/Turner_FedAidIncidence.pdf.

77. F. King Alexander, “Make ‘Maintenance of Effort’ Permanent,” Inside Higher Ed (January 28, 2010), archived at https://www.insidehighered.com/views/2010/01/28/make-maintenance-effort-permanent.

78. See transcript from U.S. Senate, Committee on Health, Education, Labor, and Pensions (HELP) hearing, June 3, 2015, http://www.help.senate.gov/hearings/reauthorizing-the-higher-education-act-ensuring-college-affordability.

79. College Board, Trends in Student Aid 2015, Table 3. Federal loans per FTE were $4,795 in 2014–2015, compared with $1,636 in 1990–1991 and $5,103 in 2002–2003.

80. College Board, Trends in Student Aid 2015.

81. Judith Scott-Clayton, “Student Loan Debt: Who Are the 1%?” Economix: Explaining the Science of Everyday Life (December 2, 2011), http://economix.blogs.nytimes.com/2011/12/02/student-loan-debt-who-are-the-1/.

82. Beth Akers and Matthew M. Chingos, Is a Student Loan Crisis on the Horizon? (Washington, D.C.: Brookings Institution, 2014), http://www.brookings.edu/research/reports/2014/06/24-student-loan-crisis-akers-chingos.

83. For original research, see Meta Brown and Sydnee Caldwell, Young Student Loan Borrowers Retreat from Housing and Auto Markets (New York: Federal Reserve Bank of New York, 2013), http://libertystreeteconomics.newyorkfed.org/2013/04/young-student-loan-borrowers-retreat-from-housing-and-auto-markets.html#.V7YLYfkrLct; Katrina Walsemann, Gilbert C. Gee, and Danielle Gentile, “Sick of Our Loans: Student Borrowing and Mental Health of Young Adults in the United States,” Social Science and Medicine 124 (2015): 85–93. For counterarguments, see Beth Akers, Reconsidering the Conventional Wisdom on Student Loan Debt and Home Ownership (Washington, D.C.: Brookings Institution, 2014), https://www.brookings.edu/research/reconsidering-the-conventional-wisdom-on-student-loan-debt-and-home-ownership/; Beth Akers, Unanswered Questions on Student Debt and Emotional Well-Being (Washington, D.C.: Brookings Institution, 2015), https://www.brookings.edu/research/unanswered-questions-on-student-debt-and-emotional-well-being/.

84. Susan M. Dynarski, The Trouble with Student Loans? Low Earnings, Not High Debt (Washington, D.C.: Brookings Institution, 2016), https://www.brookings.edu/research/the-trouble-with-student-loans-low-earnings-not-high-debt/.

85. Scott-Clayton and Zafar, “Financial Aid, Debt Management, and Socioeconomic Outcomes: Post-College Effects of Merit-Based Aid.”

86. Beth Akers and Matthew M. Chingos, Are College Students Borrowing Blindly? (Washington, D.C.: Brookings Institution, 2014), http://www.brookings.edu/research/reports/2014/12/10-borrowing-blindly-akers-chingos.

87. Susan Dynarski and Daniel Kreisman, “Loans for Educational Opportunity: Making Borrowing Work for Today’s Students,” Hamilton Project Discussion Paper (Washington, D.C.: Brookings Institution, 2013).

88. Judith Scott-Clayton, “Early Labor Market and Debt Outcomes for Bachelor’s Degree Recipients: Heterogeneity by Institution Type and Major, and Trends over Time,” CAPSEE Working Paper (New York: Center for Analysis of Postsecondary Education and Employment, 2016).

89. Judith Scott-Clayton and Jing Li, “Black-White Disparity in Student Loan Debt More than Triples after Graduation,” Evidence Speaks Report (Washington, D.C.: Brookings Institution, 2016), https://www.brookings.edu/research/black-white-disparity-in-student-loan-debt-more-than-triples-after-graduation/.

90. Susan Dynarski, “America Can Fix Its Student Loan Crisis: Ask Australia,” The New York Times (July 10, 2016).

91. Dynarski, The Trouble with Student Loans?; Adam Looney and Constantine Yannelis, “A Crisis in Student Loans? How Changes in the Characteristics of Borrowers and in the Institutions They Attended Contributed to Rising Loan Defaults,” Brookings Papers on Economic Activity 2015 (2) (2015): 1–89.

92. Meta Brown, Andrew Haughwout, Donghoon Lee, Joelle Scally, and Wilbert van der Klaauw, “Looking at Student Loan Defaults through a Larger Window,” in Liberty Street Economics (New York: Federal Reserve Bank of New York, 2015); Looney and Yannelis, “A Crisis in Student Loans?: How Changes in the Characteristics of Borrowers and in the Institutions They Attended Contributed to Rising Loan Defaults.”

93. For example, Dynarski and Kreisman, in “Loans for Educational Opportunity: Making Borrowing Work for Today’s Students,” have proposed that students be automatically enrolled in an income-contingent repayment system that would collect repayments as a proportion of income automatically through the tax system. The repayment period would extend up to thirty years or until the loan is paid off, whichever comes first.

94. Author’s calculations based on NPSAS: 2012 data, accessed via NCES QuickStats.

95. See, for example, Sara Goldrick-Rab, “Public Higher Education Should Be Universal and Free,” The New York Times (January 20, 2016), http://www.nytimes.com/roomfordebate/2016/01/20/should-college-be-free/public-higher-education-should-be-universal-and-free.

96. As the British economist Nicholas Barr explains, “Countries typically pursue three efficiency goals in higher education: larger quantity, higher quality, and constant or falling public spending. Systems that rely on public finance can generally achieve any two, but only at the expense of the third: a system can be large and tax-financed, but with worries about quality (France, Germany, Greece, Italy); or high-quality and tax-financed, but small (the United Kingdom until 1990); or large and high-quality, but fiscally expensive (as in Scandinavia).” Nicholas Barr, Paying for Higher Education: What Policies, in What Order? (London: London School of Economics, 2010), 3–4.

97. William G. Bowen, Matthew M. Chingos, and Michael S. McPherson, Crossing the Finish Line: Completing College at America’s Public Universities (Princeton, N.J.: Princeton University Press, 2009); John Bound, Michael F. Lovenheim, and Sarah Turner, “Why Have College Completion Rates Declined? An Analysis of Changing Student Preparation and Collegiate Resources,” American Economic Journal: Applied Economics 2 (3) (2010): 129–157.

98. Matthew Wiswall and Basit Zafar, “How Do College Students Respond to Public Information about Earnings?” Journal of Human Capital 9 (2) (2015): 117–169; Justine S. Hastings, Christopher A. Neilson, Anely Ramirez, and Seth D. Zimmerman, “(Un)informed College and Major Choice: Evidence from Linked Survey and Administrative Data,” Economics of Education Review 51 (2016): 136–151; Louis Jacobson and Robert J. LaLonde, “Using Data to Improve the Performance of Workforce Training,” Hamilton Project Discussion Paper (Washington, D.C.: Brookings Institution, 2013).

99. James J. Heckman, Carolyn Heinrich, and Jeffrey Smith, “The Performance of Performance Standards,” Journal of Human Resources 37 (4) (2002): 778–811; Alastair Muriel and Jeffrey Smith, “On Educational Performance Measures,” Fiscal Studies 32 (2) (2011): 187–206; Kevin J. Dougherty and Vikash Reddy, Performance Funding for Higher Education: What Are the Mechanisms? What Are the Impacts? ASHE Higher Education Report (San Francisco, Calif.: Jossey-Bass, 2013).

100. Executive Office of the President, Using Federal Data to Measure and Improve the Performance of U.S. Institutions of Higher Education (Washington, D.C.: Executive Office of the President, 2015), https://collegescorecard.ed.gov/assets/UsingFederalDataToMeasureAndImprovePerformance.pdf.

101. Dougherty and Reddy, Performance Funding for Higher Education: What Are the Mechanisms? What Are the Impacts?

102. U.S. Department of Education, “Education Department Releases College Scorecard to Help Students Choose Best College for Them,” press release (Washington, D.C.: U.S. Department of Education, 2013), http://www.ed.gov/news/press-releases/education-department-releases-college-scorecard-help-students-choose-best-college-them.

103. National Conference of State Legislatures, Performance-Based Funding for Higher Education (Denver, Colo.: National Conference of State Legislatures, 2015), http://www.ncsl.org/research/education/performance-funding.aspx.

104. Martha Snyder, Driving Better Outcomes: Typology and Principles to Inform Outcomes-Based Funding Models (Washington, D.C.: HCM Strategists, 2015).

105. Bettinger, Long, Oreopoulos, and Sanbonmatsu, “The Role of Application Assistance and Information in College Decisions: Results from the H&R Block FAFSA Experiment”; Philip B. Levine, “Transparency in College Costs,” Economic Studies Working Paper (Washington, D.C.: Brookings Institution, 2014), http://www.brookings.edu/research/papers/2014/11/12-transparency-in-college-costs-levine.

106. Judith Scott-Clayton and Veronica Minaya, “Labor Market Outcomes and Postsecondary Accountability: Are Imperfect Metrics Better than None?” working paper prepared for NBER Productivity in Higher Education Conference, June 2016; summary online at http://conference.nber.org/confer//2016/PHEs16/Minaya_Scott-Clayton.pdf.

107. Robert Kelchen and Luke J. Stedrak, “Does Performance-Based Funding Affect Colleges’ Financial Priorities?” Journal of Education Finance 41 (3) (2016): 302–321; Nicholas W. Hillman, David A. Tandberg, and Alisa H. Fryar, “Evaluating the Impacts of ‘New’ Performance Funding in Higher Education,” Educational Evaluation and Policy Analysis 37 (4) (2015): 501–519. A broader review of the literature by Nicholas Hillman identifies twelve studies that find mostly null or even negative results of performance funding policies. See Nicholas Hillman, Why Performance-Based College Funding Doesn’t Work (New York: The Century Foundation, 2016).