
The Race Goes On

Unless you live under a rock, you probably know that colleges are, in general, interested in increasing the number of students who apply for admission.  There are a couple of reasons for this, but they're all mostly based on the way things used to be: That is, before colleges started intentionally trying to increase applications.  The good old days, some might say.

In general, increasing applications used to mean a) you could select better students, who would be easier to teach, and who might reflect well on your college, or b) you as an admissions director could sleep a little better, because you were more certain you could fill the class, or c) your admission rate would go down, which is generally considered a sign of prestige.  After all, the best colleges have low admission rates, right?

Well, yes, one does have to admit that the colleges that spring to mind when one says "excellent" all tend to have low admission rates.  Lots of people want to go there, and thus, it must be good.  The trained eye might be able to spot the forgery, but what about the average person?

This week, we have another journalistic treatise presumably exposing colleges for the ways in which they attempt to increase applications.  The tactics listed in this article are nothing new: Reduce the essay, waive the fee, encourage more low-income kids.  Barely mentioned was the "Fast App/VIP App/Priority App" that many colleges use, which allows them to count as an "applicant" anyone who clicks an email link that says "Start your application."

However, application increases only pay off when you generate them from students who have a reasonable propensity to enroll.  Prestige can be measured by a little-used variable that punishes you when you increase applications to look more selective at the cost of decreasing your yield.  It's called the Draw Rate, and it's a powerful indicator of market position.  The calculation is simple: yield rate divided by admit rate.
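The calculation is simple enough to sketch in a few lines of Python.  The numbers below are invented purely for illustration:

```python
def draw_rate(applications, admits, enrolled):
    """Draw rate = yield rate / admit rate."""
    yield_rate = enrolled / admits        # share of admitted students who enroll
    admit_rate = admits / applications    # share of applicants who are admitted
    return yield_rate / admit_rate

# Hypothetical school: 10,000 applications, 5,000 admits, 1,500 enrolled.
# Yield is 30%, admit rate is 50%, so the draw rate is 0.6.
print(draw_rate(10_000, 5_000, 1_500))
```

A draw rate above 1.0 generally means a college enrolls a larger share of its admits than it admits of its applicants, which is the territory of the most sought-after institutions.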

Here's a secret: For some portion of the freshman class, let's say 33%, recruitment doesn't come into play at all.  A large chunk of your enrollment is natural; that is, those students are likely to enroll no matter what you do.  The next 33% are going to enroll presuming you do everything correctly, make it affordable, and help them understand how they fit.  But the last group, that final third of the class, comes from students who have little predisposition to enroll.  Your recruitment tactics focus on them, and you spend most of your time trying to find them, get them to apply, and then to enroll.  They may make up as much as 75% of your applicant pool, but they enter it with a lower level of interest.

The problem is that usually, a big increase in applications comes not from the first or second group, and not even the third, but rather a fourth group, the "Ain't no way I'm going to enroll short of a miracle" group.  The bigger problem is that you don't always know exactly who these students are. This is one of the reasons demonstrated interest has become a topic of discussion.

When you artificially increase applications, and you have to cover your ass by admitting more, your yield is going to drop.  And so will your draw rate.
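A toy before-and-after comparison (all numbers invented) makes the mechanism concrete: the admit rate falls, which looks good on paper, but the draw rate falls with it:

```python
# Before an application push: 10,000 apps, 5,000 admits, 1,500 enrolled.
apps, admits, enrolled = 10_000, 5_000, 1_500
draw_before = (enrolled / admits) / (admits / apps)   # 0.30 / 0.50 = 0.60

# After: 5,000 extra low-propensity applications arrive, and admits
# rise to 7,000 just to land roughly the same class.
apps, admits, enrolled = 15_000, 7_000, 1_600
draw_after = (enrolled / admits) / (admits / apps)    # ~0.23 / ~0.47 ≈ 0.49

# The admit rate looks more selective (47% vs. 50%),
# but the draw rate dropped from 0.60 to about 0.49.
print(draw_before, draw_after)
```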

So, let's look at the data.  These charts start out very busy, so you should interact with them by selecting just a region or Carnegie type.  But even at their busiest, you can see: a) applications are up, b) admits are up, and c) yield rates are down at almost every type of institution, with the exception of the big, private research universities.  The ones you can rattle off without thinking too much.

But look at the Draw Rates, on the last two charts.  Draw rates are down across the board, mostly because capacity is relatively constant, the supply of students is down, and competition is up.  The only winners in the battle to increase prestige? The ones who were prestigious in the first place.  The money spent trying to join that club, or sometimes even just to look more like them, could have been put to better use.

Use the boxes across the top to see the six points of this Tableau Story Points visualization.  Note that the last one exposes some data anomalies that are inherent in IPEDS, often due to typos or new IR staff who count the wrong thing (my alma mater in 2010–2011, for instance).

What do you see? And what do you think?  Is the race for prestige dooming us? Or is it just the latest evolutionary stage in the natural process of competition?



