
More on the Admissions Arms Race

In a recent post, I wrote about the Admissions Arms Race, and who had come out victorious.  The short answer was "almost no one."  I rolled up admission rates (the percent of applicants admitted) and yield rates (the percent of those offered admission who enroll) and showed them over time.  These variables are common parlance in college admissions; everyone with experience seems to know them.  But I showed them only aggregated by type of institution, and averages often mask the details beneath them.  To add some detail, I've now plotted them for every four-year, degree-granting institution that enrolls freshmen.

In that post I also introduced "Draw Rate," a term few had heard of.  It's a simple calculation: take the yield rate and divide it by the admit rate.  So, for instance, Harvard, with a yield rate of about 84% and an admit rate of about 6% (2012), has a Draw Rate of about 14.  Given that the industry average is about 0.6 (that's point six, not six), you see the market position of Harvard, even in comparison to some of its rivals: Princeton, Yale, and MIT, for instance, all of which hover around the still formidable 8 range.

The beauty of the draw rate is that it can't be fooled: If you get more selective just by generating fake or soft applications, your yield rate is going to go down.  Try some numbers for yourself.  Reasonable numbers, please; I don't like to argue with absurdity.
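If you'd rather let Python try the numbers, here's a minimal sketch.  The figures are hypothetical (none of them come from IPEDS); the point is just the arithmetic: padding the pool with soft applications makes a college look more selective, but yield falls in proportion, so the draw rate doesn't budge.

```python
def draw_rate(applicants: int, admits: int, enrolled: int) -> float:
    """Draw rate = yield rate / admit rate."""
    admit_rate = admits / applicants
    yield_rate = enrolled / admits
    return yield_rate / admit_rate

# Baseline (made-up college): 10,000 genuine applicants,
# 5,000 admitted, 1,500 enroll.  Admit 50%, yield 30%.
base = draw_rate(10_000, 5_000, 1_500)   # ~0.60, the industry average cited above

# Now pad the pool with 5,000 soft applications.  If the 5,000 offers
# are spread proportionally, a third go to soft applicants who never
# enroll, so enrollment drops to about 1,000.  Admit 33%, yield 20%.
padded = draw_rate(15_000, 5_000, 1_000)  # still ~0.60

# And the Harvard figure from above: yield ~84%, admit ~6%.
harvard = 0.84 / 0.06                     # ~14

print(round(base, 2), round(padded, 2), round(harvard, 1))
```

The "more selective" college halved its admit rate, but the draw rate came out exactly where it started, which is why the metric is hard to game.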

Over the last couple of decades, colleges have been pursuing prestige by attempting to get more selective. It's a good example of post hoc, ergo propter hoc thinking: Prestigious colleges are selective, so if we appear to be more selective, we'll become prestigious. (And parents engage in the same behavior when they see that successful people graduate from prestigious institutions, and therefore want a prestigious name on their child's diploma.  They think the prestige caused the success, when it's often family success that generates the admission in the first place.  Read Gladwell's paragraph on selection effects and treatment effects; it's in Section 3 of this article.)

See for yourself: Select public or private, a Carnegie type, a region, and then, if you want, a state within the region.  I started with three years, but you can put in whatever range you want.

As an aside, another thing I like about this view is that it shows the problems with IPEDS data, such as missing information and obvious, erratic spikes up or down that suggest data errors.  I use IPEDS data a lot, and it can be very frustrating.

But mostly, it shows that there have been some winners over time.  And they're mostly the ones who have been winning all along.

You can't market your way to the top in higher ed.


