We hear it all the time: I’m no good at math. Math is hard. Math is boring. Mathematician and bestselling author Cathy O’Neil knows how to make it interesting.

One way? Call attention to the ways algorithms with built-in biases can ruin people’s lives. O’Neil does it with powerful stories in her newest book, Weapons of Math Destruction, because she sees a big problem: As mathematical models, also known as algorithms, become more prevalent and influential, how can we hold them accountable? How can we make sure big data isn’t being used to cause harm?

O’Neil told about 300 people at the Carnegie Library Lecture Hall on Monday evening how she’s witnessed algorithms causing harm: She worked as an analyst on Wall Street during the financial crisis of 2008 and saw firsthand how corruption and greed masked their true intent behind ‘math.’ She quit and, in response, joined the Occupy movement. Thinking that selling ads people wanted would be less harmful than her Wall Street job, she then worked for a firm that targeted specific customers to see its ads and saw again how big data and algorithms can be used to perpetuate inequality (she pointed to an example of ads for for-profit colleges being shown only to low-income people). She also highlighted more specific examples, like what happened to Kyle Behm.

Behm was attending Vanderbilt University when symptoms of bipolar disorder caused him to take some time off. A year and a half later, he was healthy enough to return to school, though at a different university. He also started looking for a part-time job, and a friend who worked at a Kroger grocery store put in a good word for him with his manager. It seemed simple: All Behm would have to do was fill out the online paperwork, including a personality test, and he’d be a shoo-in.

Behm’s results on the personality test, one many chain retailers use, “red-lighted” him, as his friend would later explain, meaning his answers had knocked him into the “do not hire” category automatically. This happened over and over, at company after company. Behm said the questions on the personality tests were similar to the questions he was asked at the hospital for an assessment of his mental health.

In essence, O’Neil said, Kyle was being blackballed from finding a minimum-wage job because of his mental health status. Eventually, Kyle’s father would file a class-action lawsuit against the companies his son applied to, alleging that their hiring algorithms were biased. O’Neil said that’s a reality in many sectors of modern life: Math and algorithms are being used unfairly before governments and watchdogs are able to catch up and stop it.

“This one test was expanded…at that massive [scale],” she said. “And the point of the [Americans with Disabilities Act] was to prevent this very thing from happening, this systemic filtering out of people with mental health status. And this is exactly what was happening, we think.”

Audience members listen to Cathy O’Neil, a mathematician and author of the book, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” as she speaks at the Carnegie Library Lecture Hall on Sept. 25, 2017. (Photo by Ryan Loew/PublicSource)

O’Neil outlines some of the problems algorithms and big data can pose for everyday people and calls for more transparency from tech companies that use those tools.

As O’Neil defines them, ‘weapons of math destruction’ (or WMDs, as she calls them) are the ways in which data and mathematical algorithms are used to target low-income people (with predatory loans, for example), reinforce racism (with unfair sentencing practices in the criminal justice system) and, generally, amplify inequality. WMDs, O’Neil says, share three characteristics: They’re widespread, meaning they touch a lot of people’s lives; they’re mysterious, meaning that the data being collected and the algorithms’ formulas aren’t open to the public; and they’re destructive, meaning that when used on a large scale, they can perpetuate systemic inequality.

“I’ve noticed something. ‘Weapons of math destruction’ aren’t just destructive for the individual, they’re destructive for society,” O’Neil said.

In Behm’s case, that meant not only was he kept out of a job, but so were others who suffered from mental health disorders. O’Neil also considered the impact WMDs have had on the education system.

In an attempt to close the achievement gap, in which wealthier students consistently score higher on standardized tests than poorer students, some states adopted a system of teacher evaluations. The idea was that if schools could get rid of the bad teachers, their students, regardless of socioeconomic status, would do better. But that’s not how it worked.

The algorithms used in the evaluations unfairly targeted teachers whose students didn’t perform well on standardized tests, but they didn’t take into account the many reasons that could be happening. Those scores hinged on a number of uncertain factors that could cost teachers their jobs or block worthy teachers from achieving tenure. A lack of understanding about math and how it works allows faulty systems to be built in the first place, and to go unaccountable once they are, O’Neil said.
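To see why such ratings can be so unstable, consider a minimal sketch, not the actual model any district used, of a hypothetical “value-added”-style score computed from a single class of about 25 students. Every name and number below is made up for illustration; the point is only that with so few students, random factors the rating never sees can swing the same teacher from “failing” to “excellent” year to year.

import random

random.seed(0)

# Hypothetical sketch: a teacher's yearly rating is the class average of
# (actual - predicted) test scores. The teacher's true effect is small and
# constant; the student-level noise the model ignores is much larger.
TRUE_TEACHER_EFFECT = 2.0   # points this hypothetical teacher truly adds
STUDENT_NOISE_SD = 15.0     # spread from factors outside the teacher's control
CLASS_SIZE = 25

def yearly_rating(true_effect, class_size=CLASS_SIZE):
    """One year's rating: average residual over a single small class."""
    residuals = [random.gauss(true_effect, STUDENT_NOISE_SD)
                 for _ in range(class_size)]
    return sum(residuals) / class_size

# The same teacher, rated five years in a row, can look great one year and
# terrible the next purely because of noise.
for year in range(1, 6):
    print(f"Year {year}: rating = {yearly_rating(TRUE_TEACHER_EFFECT):+.1f}")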

“That’s not what math is for. Math is meant to clarify, not to intimidate,” she said.

Cathy O’Neil, a mathematician and author of the book, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” speaks at the Carnegie Library Lecture Hall on Sept. 25, 2017. (Photo by Ryan Loew/PublicSource)

In many of the examples O’Neil pointed to, she said the algorithms are built with good intentions: to make companies’ hiring processes more cost-effective, to improve the education system or to make the criminal justice system fairer. But oftentimes, algorithms can cause more harm than good because of people’s implicit biases, uncertainty about the factors being used in the algorithm or incomplete data.

“Algorithms don’t make things fair, they actually propagate whatever practices we had that the data reflected back to us,” she said. “They just automate the status quo.”
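That claim, that a model trained on past decisions simply repeats them, can be made concrete with a small, hypothetical sketch. The example below is illustrative only and is not drawn from the talk or the book: a toy screening “model” fit to biased historical hiring records learns nothing except the old hire rates, and so keeps rejecting applicants from the group that was rejected before.

# Toy illustration of "automating the status quo" (hypothetical data and names).
# Historical records: (neighborhood_group, was_hired). Group "A" applicants were
# hired far more often than group "B" applicants, regardless of skill.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """'Train' by memorizing each group's historical hire rate."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def screen(model, group, threshold=0.5):
    """Advance an applicant only if their group's past hire rate clears a bar."""
    return model[group] >= threshold

model = train(history)
for group in ("A", "B"):
    print(group, f"past hire rate = {model[group]:.0%},",
          "advance" if screen(model, group) else "reject")
# Group A advances, group B is rejected -- the old pattern, now automated.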

O’Neil is clear: she doesn’t hate algorithms. She actually loves them, she said, pointing to her favorites, the algorithms behind fantasy sports and Google Maps. Both of those systems provide a good service and their potential for a “worst-case scenario” is low. But to solve the problems with other algorithms, she said, the field of data science needs a heavier focus on ethics.

With schools around the country developing close ties to big tech companies, as Carnegie Mellon University has with Uber, it can seem more convenient for schools and their students to focus on building more algorithms rather than thinking about how to make them more ethical. Those who build algorithms should be asking how their models will be used and whether they’re fair, O’Neil said. In addition, she says government algorithms should be fully transparent, from the data that goes into them to the expected outcomes. Breaking up the massive tech companies with antitrust laws is also important, she said, as are class-action lawsuits against companies that misuse algorithms. One concerning scenario, she said, would be Facebook founder Mark Zuckerberg legally using his site’s algorithm to convince people to vote for him for president, an office he has been speculated to be seeking.

To combat all these dangers, O’Neil also envisions a data rights bill that would outline how a person’s individual data could be used by companies and governments. In her work now, O’Neil said, she wants to audit algorithms and provide more transparency to those models that control more and more of our lives. There’s a lot she doesn’t know, she said, but she has an idea of where to start.

“I don’t know how we organize, but I really want there to be an answer to the question I get often which is, ‘What can I do?’” she said. “And the very first thing we can do, of course, is stop trusting algorithms. When someone says, ‘Trust this, this is math,’ say, ‘No, show me the evidence.’”

You can reach this story’s author, J. Dale Shoemaker, at 412-515-0069 or by email at dale@publicsource.org. You can follow him on Twitter at @JDale_Shoemaker.


J. Dale Shoemaker was a reporter for PublicSource between 2017 and 2019.

One reply on “Mathematician Cathy O’Neil talks big data, ethics and how algorithms can perpetuate inequity and what we can do about it”

  1. Ms. O’Neil obviously knows her stuff, but her talk left me unsatisfied. Instead of focusing on what she’s identified as the substance of the problem (people who write algorithms are motivated by factors other than social improvement), she spent a lot of her time working to dumb down her content for a popular audience.

    E.g. “if you have data and a definition of success, you have an algorithm.” I don’t think so…you’d just have data and a definition of success! The algorithm is the mathematical/computational thing trying to connect the dots, the actual nature and structure of which she didn’t address at all. Perhaps using an elementary example would’ve helped illuminate what she was talking about; as far as I understood them, the examples of her kids’ food preferences didn’t seem to bear much on the question of how algorithms are written in the world and by whom.

    I felt also like she fell into some too-easy tropes. One I recorded verbatim was “Fox News is a propaganda machine, and deliberately so. It doesn’t even call itself news.” It’s right there, next to “Fox”! You are free to not like them at all, but this seems like a pretty ineffective critique.

    She focused on the algorithms that can be used to predict recidivism, which typically ingest data points including poverty, school suspensions, family instability, mental health problems, and drug addiction. Her point was that by scoring people using these criteria, the algorithm (and the judiciary employing it) are “creating their own reality,” i.e. jailing people based on presumptions about what sort of person should be going to jail. She identifies this (rightly, I think) as a vicious feedback loop, since going to jail puts enormous stresses on a person’s life and those around them, pushing people toward the economic and social margins, where off-the-books (illegal) work might be the only type available. Fair point.

    I see two problems with her claim here: 1) Ms. O’Neil failed to address the alternative, which is…judges making decisions based on their thoughts/feelings/presumptions about the accused person standing in front of them. Given the choice between human frailties and biases versus the frailties and biases of a mathematical model (which incorporates the human ones!), which should we prefer, and why? I see good arguments in both directions, but she never addressed alternatives to algorithms. 2) Practically nobody (except the private prison industry) wants to see people jailed—but if you had to guess who would be likely to commit property crimes, wouldn’t you guess that poor people would be likelier than middle-class people? And if you had to guess who would commit violent crimes, wouldn’t people from violent homes be likelier than those from stable homes? We may abhor the implication of cause and effect writ large over whole populations, but the first person to bring them up would be the accused’s lawyer when mitigation rolls around. It’s well known that these factors increase the likelihood of a person committing certain types of crime, and no great surprise. In light of that, if the judiciary is charged with making guesses about an individual’s future behavior, it doesn’t seem inappropriate to include some of these data points.

    Finally: I haven’t read Ms. O’Neil’s book, though I’m guessing that some or all of my critiques are addressed there, because it’s a book and not a public talk condensed into a short timeframe. I just wish she’d assumed the audience was a bit more sophisticated so she could’ve gone further in her presentation.
