Big data could help mitigate the affirmative action ban

It isn’t perfect, but data and analytics could capture the disadvantages applicants face and the diversity they may represent, says Carlo Ratti

July 7, 2023
[Image: data graphs projected onto a Black woman's face. Source: iStock]

When the US Supreme Court released its verdict outlawing race-based affirmative action last week, I was reminded of a young postdoc I hired for my lab at the Massachusetts Institute of Technology many years ago. The researcher – let’s call her Sasha – hailed from a former Soviet republic, and my colleagues thought her résumé was not up to our standards. Yet, I reasoned, if Sasha had achieved so much in an environment that afforded her so little, why not see what she could do at MIT?

The bet paid off – she became one of our best researchers and today is a well-known professor at a leading US university.

This is not a tale about race-based affirmative action, yet it might teach us something about how to move forward.

Why is affirmative action so important? First, every individual should have equal opportunities regardless of their background. Admissions officers cannot just look at the finish line – grades and test scores – when the starting blocks are different. Second, diversity enriches educational environments, leading to better outcomes for everyone.

The Supreme Court argues that structural racism is no longer a sufficient disadvantage to justify positive discrimination. This is obviously wrong. As Justice Ketanji Brown Jackson wrote in her dissent, “Deeming race irrelevant in law does not make it so in life.” So we need to make affirmative action work – but without specific reference to race.

Our first response might be to give increased weight to race-neutral variables, such as parental income and address. Such efforts are welcome, but they are not a perfect replacement. Within the same zip code or income bracket, a poor Black student is more likely to come from generations of poverty, attend worse-off schools, and even breathe polluted air or drink contaminated water.

Studies have demonstrated that a variety of alternatives to race-based affirmative action, such as race-neutral holistic review or systems that focus on income or geographic diversity, do not work as well as expected. Strikingly, ignoring race not only makes a class less racially diverse, as expected; it also makes it less socio-economically diverse. No algorithm gets better when you restrict its access to information. Or, as our dean of admissions at MIT, Stu Schmill, recently wrote, “If you take away a carpenter’s tools, they will have a much harder time building.”

A further issue is that all the variables we have discussed so far – from race to census-based socio-economics – are coarse, focusing on groups rather than individuals. Yet the rise of big data is driving a computational revolution in the social sciences. In 20 years of research, I have seen a trove of huge empirical datasets emerge to describe urban areas, unlocking a “new science of cities” that allows us to understand the greatest metropolis down to the smallest block. Big data and analytics could also help admissions officers quantitatively capture the kinds of disadvantages applicants face and the kinds of diversity they may represent.

Think about all the variables that impact a student’s life but are invisible in a college application. A truly fair system would take into account not only an applicant’s high school zip code but also the quality of their pre-kindergarten programme and the levels of lead in their water pipes. We are far from being able to obtain every relevant variable, but in 2023 we could get a lot more.
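To make this concrete, here is a minimal sketch of how such contextual variables might be folded into a single score. The variables, weights and threshold are purely hypothetical assumptions for illustration; a real admissions system would need validated, audited models and far richer data.

```python
from dataclasses import dataclass

@dataclass
class ApplicantContext:
    """Hypothetical contextual variables invisible in a typical application."""
    school_funding_percentile: float  # 0.0 (worst-funded) .. 1.0 (best-funded)
    pre_k_quality: float              # 0.0 .. 1.0, quality of early education
    water_lead_ppb: float             # lead concentration in home water supply

def disadvantage_index(ctx: ApplicantContext,
                       lead_action_level_ppb: float = 15.0) -> float:
    """Combine contextual variables into one 0..1 disadvantage score.

    Higher means more disadvantage. The linear weighting below is an
    illustrative assumption, not a proposed or existing admissions model.
    """
    # Cap lead exposure at the (assumed) regulatory action level.
    lead_burden = min(ctx.water_lead_ppb / lead_action_level_ppb, 1.0)
    return round(
        0.4 * (1.0 - ctx.school_funding_percentile)
        + 0.3 * (1.0 - ctx.pre_k_quality)
        + 0.3 * lead_burden,
        3,
    )
```

The point of the sketch is not the particular weights but the shape of the computation: fine-grained, individual-level inputs replacing coarse group proxies such as zip code alone.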

Just as important as bringing more data in, colleges should also get more data out. Before Harvard’s admissions process was challenged, much of it was a black box to the public; it should not have taken a lawsuit at the Supreme Court to remedy that. As universities scramble to devise new admissions policies, every school could become a test bed for shared innovation.

Admissions offices could embrace experimentation and data collection, even within a single class. Instead of devising a single new policy, they could try different ones and track the results over time in terms of admissions, campus experiences and later careers. Even small differences would be measurable, and they could help to challenge a variety of misconceptions. Harvard has long argued that letting in too many poor students would compromise its academic excellence; what if it admitted a few more and tested this out?
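The experiment described above can be sketched in a few lines: assign applicants to different candidate policies ("arms") and track an outcome per arm over time. The policy names and the outcome metric here are hypothetical placeholders.

```python
import random
from collections import defaultdict

def assign_arms(applicant_ids, policies, seed=42):
    """Deterministically shuffle applicants across candidate policy arms."""
    rng = random.Random(seed)
    ids = list(applicant_ids)
    rng.shuffle(ids)
    # Round-robin over the shuffled list gives near-equal arm sizes.
    return {applicant: policies[i % len(policies)]
            for i, applicant in enumerate(ids)}

def average_outcome_per_arm(assignment, outcomes):
    """Average a tracked outcome (e.g. later-career measure) per policy arm."""
    sums, counts = defaultdict(float), defaultdict(int)
    for applicant, arm in assignment.items():
        if applicant in outcomes:  # outcomes arrive years later, possibly sparse
            sums[arm] += outcomes[applicant]
            counts[arm] += 1
    return {arm: sums[arm] / counts[arm] for arm in counts}
```

Even this toy version makes the key design choice visible: the comparison only works if arms are assigned before outcomes are known, and if the tracked outcomes span admissions, campus experience and later careers rather than entry credentials alone.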

Of course, a data-driven process cannot carry us all the way to justice. Data collection is hard, and privacy will always be a concern. We should also remember the controversies that swirled around the SAT Adversity Score – a proposal to supplement standardised test scores with socio-economic factors from a student’s school and neighbourhood. The project drew criticism as opaque, simplistic and presumptuous in assuming that adversity could be so easily quantified (it was ultimately reworked into the more modest SAT Landscape). Open experimentation and transparency could be powerful correctives, but they might also generate new culture wars and lawsuits.

There is no perfect way to optimise elite university admissions – there are too many incredible applicants and too few slots. In the long term, the best solution is probably to ensure that there are better paths to a good education and social mobility beyond a seat on the shiny USS Princeton. But for now, in the absence of race-based affirmative action, universities owe it to themselves – and to their applicants – to innovate and find new algorithms.

More data will promote more justice – and help ensure that no Sasha goes unnoticed in the admissions pool.

Carlo Ratti is professor of practice of urban technologies and planning at the Massachusetts Institute of Technology, where he directs the Senseable City Lab.

