
The latest Boogey Man: Frontloading

It's happened three times in the last several months: I've been invited into, or stumbled into, a discussion of "frontloading."  In each case, the people talking about it were generally convinced it exists, and generally believed it's a widely practiced approach.

In case you don't know, frontloading is the presumed practice of enrollment managers (of course) who make big institutional aid awards to entice freshmen to enroll, and then reduce or eliminate those awards after the freshman year.  Journalists, especially, point to aggregated data suggesting that the average amount of institutional aid for non-freshmen is lower than for freshmen. "Aha!" they scream, "The smoking gun!"

Well, not so fast.  I'm willing to admit there may be a few colleges in the US where frontloading happens, probably in a clandestine manner, and in at least one instance I was made aware of, for a very logical and justifiable reason.  But most enrollment managers I've asked have the same reaction to the concept: To do so would be career suicide.  That does not deter those who love to skewer enrollment management and lay the problems of higher education squarely on our backs.

To test the idea, I asked a Facebook group of 9,000 college admissions officers, high school counselors, and independent college consultants about the practice.  This is not a group of wallflowers, and its members call it like they see it; even so, I asked them to message me privately if there were colleges where this routinely happened.  I got a couple of "I think maybe it happens" responses, and exactly one comment from a counselor who said she was sure it happened.

I have told people repeatedly that there are many possible reasons why the data look the way they do:


  • The freshman data covers first-time, full-time, degree-seeking students.  All of them are technically eligible for institutional aid.
  • The "all students" data includes all undergraduates: full-time, part-time, and non-degree-seeking students, many of whom are less likely to qualify for aid.
  • The "all students" group also contains transfers, who may qualify for less aid.
  • Students who earned aid as freshmen can lose it in subsequent years due to lack of academic progress or performance.
  • Students with the most institutional aid may also be the neediest, and thus the least likely to be around past the freshman year.
These reasons seem to fall on the deaf ears of people who are eager to prove something they already believe to be true.
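
To make that composition argument concrete, here is a minimal sketch with entirely hypothetical numbers (not real IPEDS data): every aided student keeps the exact same award all four years, yet the "all students" average still comes out lower than the freshman average, simply because the two groups contain different kinds of students.

```python
# Hypothetical enrollment mix; no award is ever reduced or removed.
AWARD = 20_000  # institutional aid per aided student, held constant

# Freshman cohort: first-time, full-time, degree-seeking -- all aid-eligible.
freshmen = [AWARD] * 90 + [0] * 10  # 90 of 100 freshmen aided

# "All students" adds groups that are less likely to hold institutional aid.
upperclass_aided    = [AWARD] * 180       # persisters keep their full award
upperclass_unaided  = [0] * 60            # never had institutional aid
transfers           = [AWARD // 2] * 30   # assumed smaller transfer awards
part_time_nondegree = [0] * 40            # generally ineligible

all_students = (freshmen + upperclass_aided + upperclass_unaided
                + transfers + part_time_nondegree)

def avg(xs):
    return sum(xs) / len(xs)

print(f"Average award, freshmen:     ${avg(freshmen):,.0f}")      # $18,000
print(f"Average award, all students: ${avg(all_students):,.0f}")  # ~$13,900
# The second number is lower even though no individual award was ever cut.
```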

So I tried another angle: running the same comparison on Pell Grant participation.  The Pell Grant, of course, is a federal program, neither awarded by nor controlled by the colleges.  What would happen if we looked at Pell Grant participation rates among freshmen and the rest of the student body?  I think the visualization below demonstrates it quite well.

There are four columns: 

  • Percent of freshmen with Pell
  • Percent of all other undergraduates (non-freshmen) with Pell
  • The percentage of undergraduates who are freshmen (at a traditional college, this should be about 25%-30%; bigger or smaller numbers can signal a different mix of students)
  • And finally, the right-hand column, showing the difference between the first two
The bars are color-coded: blue bars show more freshmen with Pell, red bars show fewer freshmen with Pell, and gray bars show no difference.  You'll note that most bars are blue; if this were institutional aid, you might leap to the conclusion of frontloading.  That's exactly what journalists do.
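
If you want to reproduce the columns yourself, here is a rough sketch of the arithmetic in pandas.  The column names and counts are illustrative assumptions, not the actual dataset behind the visualization (the interactive version is built in a BI tool); the point is just how the four columns and bar colors are derived.

```python
import pandas as pd

# Hypothetical IPEDS-style counts for three made-up institutions.
df = pd.DataFrame({
    "institution":   ["College A", "College B", "College C"],
    "freshmen":      [1_000, 400, 2_500],    # first-time, full-time cohort
    "freshmen_pell": [350, 90, 1_200],
    "all_ug":        [3_800, 1_700, 9_000],  # all undergraduates
    "all_ug_pell":   [1_100, 450, 3_300],
})

# Columns 1 and 2: Pell rates for freshmen and for all other undergraduates.
df["pct_freshmen_pell"] = df["freshmen_pell"] / df["freshmen"]
other_students = df["all_ug"] - df["freshmen"]
df["pct_other_pell"] = (df["all_ug_pell"] - df["freshmen_pell"]) / other_students

# Column 3: freshmen as a share of all undergraduates (~25%-30% is typical).
df["pct_freshmen"] = df["freshmen"] / df["all_ug"]

# Column 4: the difference, which drives the bar color.
df["diff"] = df["pct_freshmen_pell"] - df["pct_other_pell"]
df["bar_color"] = df["diff"].map(
    lambda d: "blue" if d > 0 else ("red" if d < 0 else "gray")
)

print(df[["institution", "pct_freshmen_pell", "pct_other_pell",
          "pct_freshmen", "diff", "bar_color"]].round(3))
```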

But it's not institutional aid.  It's federal aid.  And yet, the differences are very large at some places.

You can hover over the top of any column to sort by that column.  And you can use the filters at the right to limit the colleges included in the view by region, control, or Carnegie type.

Is this evidence strong enough to convince you?  If not, let me know why in the comments below.


