We hear it all the time: I’m no good at math. Math is hard. Math is boring. Mathematician and bestselling author Cathy O’Neil knows how to make it interesting.
One way? Call attention to the ways algorithms with built-in biases can ruin people’s lives. O’Neil does it with powerful stories in her newest book, Weapons of Math Destruction, because she sees a big problem: As mathematical models, also known as algorithms, become more prevalent and influential, how can we hold them accountable? How can we make sure big data isn’t being used to cause harm?
O’Neil told about 300 people at the Carnegie Library Lecture Hall on Monday evening how she’s witnessed algorithms causing harm: She worked as an analyst on Wall Street during the financial crisis of 2008 and saw firsthand how corruption and greed masked their true intent behind ‘math.’ In response, she quit and joined the Occupy movement. Thinking that selling ads people wanted was less harmful than her Wall Street job, she then worked for a firm that targeted specific customers with ads, and saw again how big data and algorithms can be used to perpetuate inequality (she pointed to an example of ads for for-profit colleges being shown only to low-income people). She also highlighted more specific examples, like what happened to Kyle Behm.
Behm was attending Vanderbilt University when symptoms of bipolar disorder caused him to take some time off. A year and a half later, he was healthy enough to return to school, though at a different university. He also started looking for a part-time job, and a friend who worked at a Kroger grocery store put in a good word for him with his manager. It seemed simple: All Behm would have to do was fill out the online paperwork, including a personality test, and he’d be a shoo-in.
Behm’s results on the personality test, one many chain retailers use, “red-lighted” him, as his friend would later explain, meaning his answers had knocked him into the “do not hire” category automatically. This happened over and over, at company after company. Behm said the questions on the personality tests were similar to the questions he was asked at the hospital for an assessment of his mental health.
In essence, O’Neil said, Kyle was being blackballed from finding a minimum-wage job because of his mental health status. Eventually, Kyle’s father would file a class-action lawsuit against the companies his son applied to, alleging that their hiring algorithms were biased. O’Neil said that’s a reality in many sectors of modern life: Math and algorithms are used unfairly before governments and watchdogs are able to catch up and stop it.
“This one test was expanded…at that massive [scale],” she said. “And the point of the [Americans with Disabilities Act] was to prevent this very thing from happening, this systemic filtering out of people with mental health status. And this is exactly what was happening, we think.”
O’Neil outlines some of the problems algorithms and big data can pose for everyday people and calls for more transparency from tech companies that use those tools.
As O’Neil defines them, ‘weapons of math destruction’ (or WMDs) are the ways in which data and mathematical algorithms are used to target low-income people (with predatory loans, for example), reinforce racism (with unfair sentencing practices in the criminal justice system) and, generally, amplify inequality. WMDs, O’Neil says, share three characteristics: They’re widespread, meaning they touch a lot of people’s lives; they’re mysterious, meaning that the data being collected and the algorithm’s formulas aren’t open to the public; and they’re destructive, meaning that when used on a large scale, they can perpetuate systemic inequality.
“I’ve noticed something. ‘Weapons of math destruction’ aren’t just destructive for the individual, they’re destructive for society,” O’Neil said.
In Behm’s case, that meant not only was he kept out of a job, but so were others who suffered from mental health disorders. O’Neil also considered the impact WMDs have had on the education system.
In an attempt to close the achievement gap, in which rich students consistently score higher on standardized tests than poorer students, some states adopted a system of teacher evaluations. The idea was that if schools could get rid of the bad teachers, their students, regardless of socioeconomic status, would do better. But that’s not how it worked.
The algorithms used in the evaluations unfairly targeted teachers whose students didn’t perform well on standardized tests, but didn’t take into account the many reasons that could be happening. Those scores included a number of uncertain factors that could cost teachers their jobs, or block worthy teachers from achieving tenure. A lack of understanding about math and how it works allows faulty systems to be built in the first place, and to escape accountability once they are, O’Neil said.
“That’s not what math is for. Math is meant to clarify, not to intimidate,” she said.
In many of the examples O’Neil pointed to, she said the algorithms are built with good intentions: They want to make the hiring process for companies more cost-effective, improve the education system or make the criminal justice system fairer. But oftentimes, algorithms can cause more harm than good because of people’s implicit biases, uncertainty about factors being used in the algorithm or incomplete data.
“Algorithms don’t make things fair, they actually propagate whatever practices we had that the data reflected back to us,” she said. “They just automate the status quo.”
O’Neil is clear: she doesn’t hate algorithms. She actually loves them, she said, pointing to her favorites, the algorithms behind fantasy sports and Google Maps. Both of those systems provide a good service and their potential for a “worst-case scenario” is low. But to solve the problems with other algorithms, she said, the field of data science needs a heavier focus on ethics.
With schools around the country developing close ties with big tech companies, as Carnegie Mellon University has with Uber, it seems more convenient for schools and their students to focus on building more algorithms rather than thinking about how to make them more ethical. Those who build algorithms should be asking how their models will be used and whether they’re fair, O’Neil said. In addition, she says government algorithms should be fully transparent, from the data that goes into them to the expected outcomes. Breaking up the massive tech companies with antitrust laws is also important, she said, as are class-action lawsuits against companies that misuse algorithms. One concerning scenario, she said, was Facebook founder Mark Zuckerberg’s potential to legally use his site’s algorithm to convince people to vote for him for president, an office he is speculated to be considering.
To combat all these dangers, O’Neil also envisions a data rights bill that would outline how a person’s individual data could be used by companies and governments. In her work now, O’Neil said, she wants to audit algorithms and provide more transparency to those models that control more and more of our lives. There’s a lot she doesn’t know, she said, but she has an idea of where to start.
“I don’t know how we organize, but I really want there to be an answer to the question I get often which is, ‘What can I do?’” she said. “And the very first thing we can do, of course, is stop trusting algorithms. When someone says, ‘Trust this, this is math,’ say, ‘No, show me the evidence.’”