
Six-year graduation rates at four-year colleges and universities

Graduation rates are always a hot topic in higher education, but often for the wrong reason.  To demonstrate, I offer my parents.  Here is a portrait of Agnes and Mark, married May 4, 1946.


One night while I was talking to my brother, he asked, "Do you think mom was the way she was because dad was the way he was, or do you think dad was the way he was because mom was the way she was?"  To which I replied, "yes."  My point, of course, is that in complex relationships, it's always difficult--impossible, actually--to disentangle cause and effect.

And, despite the Student Affairs perspective that graduation rates are a treatment effect, I maintain that they are actually a selection effect.  As I've written about before, it's pretty easy to predict a college's six-year graduation rate if you know one data point: The mean SAT score of the incoming class.  That's because the SAT rolls a lot of predictive factors into one index number.  These include academic preparation, parental attainment, ethnicity, and wealth, on the student side, and selectivity, on the college side.

When a college doesn't have to--or chooses not to--take many risks in the admissions process, it tends to select students who are more likely to graduate.  That skews the incoming class wealthier (Asian and Caucasian populations have the highest income levels in America), higher ability (the SAT is a good proxy for some measure of academic achievement, and often measures academic opportunity), and second generation.  And when you combine all those things--or you select so few poor students that you can afford to fund them fully--guess what?  Graduation rates go up.

If this doesn't make any sense, read the Blueberry Speech.  Or ask yourself this question: If 100 MIT students enrolled at your local community college, what percentage would graduate? 

But graduation rates are still interesting to look at, once you have that context.  The visualization below contains three views, using the tabs across the top.  You'll have to make a few clicks to get the information you need.

The first view (Single Group) starts with a randomly selected institution, Oklahoma State.  Choose another institution by clicking on the box, typing any part of the name, and selecting it from the list.

The yellow bars show the entering cohorts, and the blue bars show the number of graduating students.  Note: The blue bar shows graduates in the year shown (4,755 in 2019, which you can see by hovering over the bar), while the yellow bar shows the entering class from six years prior (7,406 students, who entered in 2013).
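The arithmetic behind the bars is simple: pair each year's graduates with the cohort that entered six years earlier. Using the Oklahoma State figures above:

```python
# Six-year rate: graduates in a given year divided by the cohort
# that entered six years earlier (Oklahoma State figures from above).
entered_2013 = 7406    # yellow bar: entering class of fall 2013
graduated_2019 = 4755  # blue bar: graduates by 2019

rate = graduated_2019 / entered_2013
print(f"Six-year graduation rate: {rate:.1%}")  # -> 64.2%
```

That is all a published graduation rate is: a ratio of two cohort counts, with everything interesting hiding in who made it into the denominator.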

The top row shows graduation rates at all institutions nationally, and the second row shows percentages for the selected institution.  You can choose any single ethnicity at the top left, using the filter.

The second view (Single Institution) shows all ethnicities at a single institution.  The randomly selected demonstration institution is Gustavus Adolphus College in Minnesota, but of course you can choose any institution in the data set.  Highlight a single ethnic group using the highlight function (I know some people are frightened of interacting with these visualizations, but you can't break anything).

Note: For the sake of clarity, I start with a minimum of 10 students in each year's cohort.  Small schools in the Northeast, for instance, might enroll one Asian/Pacific Islander student in their incoming class each year, so the graduation rate could swing wildly from 0% to 100%.  If you want to live dangerously, you can change this by pulling the slider downward.
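The minimum-cohort rule can be sketched in a few lines. This is not the visualization's actual implementation, just an illustration of the filter with invented cohort counts: any group-year cohort below the threshold is skipped before a rate is computed.

```python
# Sketch of the minimum-cohort filter; the cohort figures are invented.
MIN_COHORT = 10

# (group, entry year, students entered, students graduated)
cohorts = [
    ("Asian/Pacific Islander", 2013, 1, 1),  # 1 student: 100% -- pure noise
    ("Asian/Pacific Islander", 2014, 1, 0),  # 1 student: 0%   -- pure noise
    ("White", 2013, 412, 310),
]

for group, year, entered, graduated in cohorts:
    if entered < MIN_COHORT:
        continue  # cohort too small for a meaningful rate
    print(f"{group} {year}: {graduated / entered:.0%}")
```

The two one-student cohorts would have reported 100% and 0% in consecutive years; the filter suppresses them and keeps only the cohort large enough to mean something.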

The final view (Sectors) shows aggregates of institutional types.  It starts with graduation rates for Hispanic/Latino students, but you can change it to any group you want.

Have fun learning about graduation rates.  Just don't assume they are mostly driven by what happens at the institution once the admissions office has its say.
