Book Review of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil

This post may contain affiliate links. If you click and buy, we may make a commission at no additional charge to you. Please see our disclosure policy for more details.

This Book Review of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil is brought to you by Jay Thompson of the Titans of Investing.

Genre: Privacy & Surveillance in Society
Author: Cathy O’Neil
Title: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Buy the Book)

Executive Summary:

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil seeks to reveal the dangers and damaging impact of decision making by algorithmic models. O’Neil has dedicated her life to data science and mathematics and has been on the cutting edge of algorithmic modeling since its early beginnings.

She is a data scientist by trade, and her accomplishments and accolades include a Ph.D. in mathematics from Harvard University, a professorship at Barnard College, and work as a quant at the hedge fund D.E. Shaw. O’Neil also worked as a data scientist for a number of start-ups specializing in algorithmic models built to increase profits and productivity. Thus, she is well qualified to write this book.


O’Neil’s many years of experience as a data scientist opened her eyes to the misuse of big data, algorithmic modeling, and the dangers they bring to our world. She refers to these flawed algorithmic models as ‘weapons of math destruction’ or WMDs for short. These WMDs have three common characteristics: opacity, damage, and scale.

With respect to opacity, the parameters, and especially the inputs, of WMDs are intentionally clouded and disguised behind intimidating mathematics in order to deter ordinary people from understanding the true inputs driving the models and their conclusions.

As a result, the subjects of these destructive models remain oblivious to how the models function and unaware of the information that could help them overcome the models’ harmful results.

The next characteristic all WMDs possess is damage. The models embed bias and prejudice that lead to increased inequality. The models tend to favor the rich and punish the poor. Moreover, they contain vicious feedback loops that compound and multiply this phenomenon, creating an ever-widening wealth gap as a byproduct.

The final common characteristic is scale. These WMDs operate at massive scale, which is the very nature of big-data-driven algorithms. People can be quickly evaluated and grouped into different pools based on millions and millions of readily available data points.

O’Neil walks us through a multitude of instances in which algorithms are being used unfairly and where inequality is compounding rapidly. Furthermore, she alludes to the future dangers of big data algorithms if left unregulated. She ultimately challenges each of us to make this problem known as well as to make conscious efforts to resolve and mitigate its catastrophic effects.

The world of WMDs initially seeks fairness and efficiency, yet their use ultimately drives up debt, stimulates mass incarceration, preys on the vulnerable, dismisses the qualified, labels the masses, and oppresses the poor in almost every way possible. In short, the realm of big data and algorithmic modeling is rampant with malicious intent and flawed mathematical models that increase inequality and threaten democracy. This book is a call for change.

My Views

O’Neil has certainly opened my eyes to a problem of which I was largely unaware. This book sheds light on a vast and diverse set of examples in which algorithmic modeling is misused and has damaging effects.

With that said, O’Neil does not focus on any of the positives that algorithmic modeling and big data have produced. Because of this, I found much of her work to be largely emotional and overly biased.

In my opinion, it is beyond serious debate that the uses of big data and algorithmic decision making have created benefits that far outweigh the negatives identified in this book. They have led to the production of more meaningful and accurate outputs achieved at previously impossible rates.

Overall, O’Neil’s work does not make me doubt the future benefits or positive impacts of big data and algorithms in the slightest. However, the examples she provides that expose the misuse of these powerful tools have made me realize the need for regulations in this realm.

One recurring issue I have seen with algorithmic decision-making models that turn problematic is that those using the models do not fully understand the inner workings of the algorithmic inputs or what those inputs actually evaluate.

This disconnect is present because those with the practical experience and decision making power are not the ones who construct the models. To keep up with the increasing complexity of our world, companies and organizations are turning to complex mathematics to decipher risk and increase productivity.

But because the mathematicians who design these risk-deciphering algorithms understand neither the practical context of their outputs nor the true benefits of their use, the resulting data is flawed and misinterpreted. Likewise, the decision makers for whom the algorithms were created do not fully understand the risk metrics used or the sensitivity of the algorithms’ assumptions.

Ultimately, when this disconnect is combined with improper incentives and government policies, what ensues is collapse. We saw this in the financial crisis, and there are several other areas in which this problem exists and will continue to be revealed.

As the world continues to increase in complexity, this disconnect will only be exacerbated. However, it is a tall order to narrow the gap of understanding between the creators of algorithms and those who use them. Thus, a good place to start is quality oversight. Algorithms need regulations and oversight just like humans do.

Ultimately, there is no perfect system – humans make mistakes and are subject to practices that create unfortunate byproducts like injustice and inequality, just as algorithms will inevitably do the same. I would argue that algorithms give us the ability to minimize mistakes and limit threatening byproducts because their inputs can be easily governed and refined.

O’Neil advocates for the construction of a universal code of ethics and moral principles by which all algorithms must abide. While I understand her reasoning, such a scheme is hardly feasible.

It is far too difficult to identify, with any certainty, an entity qualified to create such a universal set of ethics and morals, or the basis on which such principles would be established. Thus, it is essential that people’s constitutional rights be reflected and enforced in algorithmic models, because that is something that can be controlled and protected.

The use of algorithmic models to fuel injustice, inequality, and discrimination is a tragedy and must be corrected. Complex mathematics used to cloud and taint the rights enshrined in our Constitution can no longer be tolerated. Just as our forefathers created a set of reasoned and balanced laws by which all citizens are governed, the use of big data and algorithms must be regulated to ensure fair and equal treatment for all persons.

In summation, we have already witnessed ways in which flawed algorithms have contributed to collapse and failure. Further, this book reveals many instances in which algorithms are currently creating injustice and inequality. Ultimately, people will continue to practice, pursue, and compound processes that work for them. It is critical that people and institutions using algorithmic models with malicious intent be brought to justice and that their operations be halted and corrected.

Many of the future applications of big data and algorithmic modeling remain unknown, and it is for this reason I believe measures to regulate this field are of the utmost importance.

Introduction

In this brief, we will examine the common characteristics of different weapons of math destruction (WMDs), and the threats they bring to our world. The primary commonalities of these threatening algorithms are opacity, damage, and scale.

Specifically, these algorithms’ parameters tend to be hidden from their subjects, they inflict severe damage on unfortunate groups of people, and they are used at massive scale. Furthermore, these algorithms create and embody destructive feedback loops that punish the oppressed and widen the gap of inequality.


The mission of this brief is to further explore Cathy O’Neil’s thoughts and to point out the future implications of big data and algorithmic modeling. To this end, I have included what I believe to be O’Neil’s best examples of the dangers and damaging effects algorithmic models have had within five different realms.

Teachers

In 2007, the mayor of Washington, D.C., Adrian Fenty, dedicated himself to solving the pressing problem of underperforming schools throughout the city. On average, barely one in two students was successfully graduating from high school. The prevailing belief was that the students’ underperformance was the result of poor and unproductive teaching, so Fenty aimed to eliminate all bad teachers within the system.

As you might suspect, this led to the evaluation of all teachers in the Washington, D.C. school district using a WMD called IMPACT. The IMPACT algorithm was based on what is called a value-added model, which functions just as it sounds: it aims to measure how much learning value students gained in each subject annually.

The algorithm primarily used annual standardized test scores as the means of measure.

For example, if the change in a class’s standardized test scores ranked in the bottom 7% compared to all other classes in the district, that class’s teacher was fired. Seems fair enough, right?
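To make the mechanics concrete, here is a minimal sketch of how a ranking of this kind might work. IMPACT’s actual formula is proprietary and not spelled out in the book, so the gain calculation and cutoff logic below are simplifying assumptions for illustration only.

```python
# Hypothetical value-added ranking; IMPACT's real formula is proprietary,
# so the scoring below is an illustrative assumption, not the actual model.
from statistics import mean

def value_added(prior_scores, current_scores):
    """Average year-over-year test score gain for one class."""
    return mean(c - p for p, c in zip(prior_scores, current_scores))

def flag_bottom(teachers, cutoff=0.07):
    """Rank teachers by score and flag the bottom 7% for termination."""
    ranked = sorted(teachers, key=lambda t: t["score"])
    n_flagged = max(1, int(len(ranked) * cutoff))
    return [t["name"] for t in ranked[:n_flagged]]

teachers = [
    {"name": "Teacher A", "score": value_added([62, 70, 55], [60, 68, 50])},
    {"name": "Teacher B", "score": value_added([62, 70, 55], [70, 78, 65])},
    {"name": "Teacher C", "score": value_added([62, 70, 55], [64, 72, 57])},
]
print(flag_bottom(teachers))  # ['Teacher A'] -- the lowest average gain
```

Note that the cutoff is purely relative: someone always lands in the bottom 7%, no matter how well the district as a whole is doing.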

Well, according to O’Neil there were many other factors involved for which the algorithm was unable to account. She uses the story of fifth grade teacher Sarah Wysocki to make her case.

Wysocki had been teaching at MacFarland Middle School for two years and was receiving superb feedback from her principal as well as the parents of her students. One evaluation even referred to her as “one of the best teachers I’ve ever come into contact with.”

However, at the end of the 2011 school year, Wysocki’s IMPACT score for value-added modeling in math and language skills was in the bottom 7%. Consequently, she, along with 205 other teachers below the threshold, was terminated.

How could this be? Did the principal and parents give Wysocki such great reviews simply because she was likeable? That is possible. However, O’Neil presents a series of factors that could be contributors for which the algorithms did not account.

To make her point, O’Neil imagines a hypothetical student who performs very well one year on her standardized test but then, over the course of the next year, runs into family issues or money problems, has to move, or becomes depressed.

Any combination of these factors leads her to underperform on the following year’s standardized test.

O’Neil’s point is that algorithms are unable to take such subjective inputs into consideration. Further, O’Neil stresses the notion that algorithms need massive amounts of data in order to draw conclusive statistical trends.

A classroom of thirty students or fewer is nowhere near enough data to produce statistically reliable correlations. One data point (here, one student) can completely skew a teacher’s overall score under the value-added model.
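A quick simulation shows why the sample is too small. Assuming, purely for illustration, that every student’s year-over-year gain is an independent draw from the same noisy distribution, class averages computed from thirty students swing wildly compared with averages computed from thousands:

```python
import random

random.seed(0)

def class_average_gain(n_students):
    # Assumed for illustration: every student's true gain is 5 points,
    # with random noise of roughly +/- 15 points around it.
    return sum(random.gauss(5, 15) for _ in range(n_students)) / n_students

# Compare the spread of averages across 1,000 simulated "classes".
for n in (30, 3000):
    averages = [class_average_gain(n) for _ in range(1000)]
    print(f"n={n}: averages range from {min(averages):.1f} "
          f"to {max(averages):.1f}")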

Another critical drawback of WMD evaluation and ranking systems is that they encourage the system, and the people under review, to care only about the algorithmic inputs. Incentivizing this type of behavior inevitably leads to cutting corners and dishonesty. This is a theme we will see regularly in the following sections of this brief.

In the case of teachers in the Washington D.C. school district, IMPACT encouraged cheating.

In the year prior to Sarah Wysocki’s termination, an extremely high number of erasures was discovered on standardized tests. Teachers worried they would lose their jobs or miss out on year-end bonuses due to underperformance, so they corrected their students’ exams.

These actions artificially inflated the incoming standardized test scores of students assigned to teachers such as Sarah Wysocki. Teachers like Wysocki who were unwilling to compromise their integrity were all but doomed when evaluated by the value-added model.

It is highly unlikely that her students would outperform their corrected and inflated scores from the previous year, and as a result the value-added model would no longer produce meaningful results.

Overall, O’Neil exposes several valid areas of concern in regard to algorithms evaluating teachers. Moreover, we will see how many of these themes and inconsistencies repeat in the sections to follow.

Financial Crisis

Cathy O’Neil worked for D.E. Shaw, one of the world’s most prestigious hedge funds, at the epicenter of the 2008 financial crash. If I had to guess, one of the primary reasons she wrote this book was her experience as a data scientist during the crisis.

Ultimately, O’Neil concludes from her experience at D.E. Shaw that algorithms like the ones she worked to build played a significant role in the market collapse.

To begin, O’Neil argues that the subprime mortgages that flourished during the housing boom were not themselves what led to the financial crisis. In her view, the problem was banks loading these mortgages into classes of securities and selling them using faulty mathematical models that overestimated their value. Consequently, the risk models attached to mortgage-backed securities were, as O’Neil would put it, WMDs.

Even though the banks recognized that a portion of the mortgages would default, they remained confident in the system based on two assumptions. The first was that the algorithms effectively distributed the risk of the bundled securities, rendering them bulletproof.

After all, that is how the products were marketed. In hindsight, this was hardly the case. Ultimately, the risk ratings were intentionally disguised under the premise that foolproof algorithms ensured balanced risk, yielding substantial profits in the short-term while obscuring the actual level of risk from the buyers of the securities.

The second assumption was the belief that it was unlikely that many borrowers would default simultaneously.

This belief existed because the WMD risk models assumed that the future would be the same as the past. At the time, defaults were believed to be uncorrelated, and statistically the solid mortgages would counterbalance the few random defaults in each package.
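That independence assumption is exactly where a Monte Carlo sketch is revealing. In the toy model below, a single shared “housing market” shock is added to every mortgage’s default probability; all of the numbers are invented for illustration and come from neither the book nor any real risk model.

```python
import random

random.seed(1)

def worst_loss(n_loans=1000, base_default=0.05, market_weight=0.0,
               trials=2000):
    """Worst observed fraction of defaults across simulated scenarios.

    market_weight=0.0 reproduces the 'defaults are uncorrelated'
    assumption; a positive weight adds a shared market-wide shock.
    """
    worst = 0.0
    for _ in range(trials):
        shock = random.gauss(0, 1)  # one shock hitting every loan at once
        p = min(1.0, max(0.0, base_default + market_weight * shock))
        defaults = sum(random.random() < p for _ in range(n_loans))
        worst = max(worst, defaults / n_loans)
    return worst

print(f"independent defaults: worst loss {worst_loss():.1%}")
print(f"correlated defaults:  worst loss {worst_loss(market_weight=0.1):.1%}")
```

Under independence the worst scenario barely budges from the 5% baseline; once defaults share a common driver, the tail losses multiply, which is roughly what the mortgage-backed securities market discovered in 2008.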

O’Neil points out that this was by no means the first time flawed algorithms were used in the finance realm.

However, the difference that gave these defective risk models the power to crash the global economy was scale. To make matters worse, other markets latched on to mortgage-backed securities, primarily credit default swaps and synthetic collateralized debt obligations. The scale at which these risk-assessing algorithms operated was enormous.

From here, you know how the story ends. The entire market collapsed along with the algorithms that created it.

Of course, many more people and entities played a role in the housing market crash, and these algorithms were only one part. Still, O’Neil highlights the severity and the catastrophic implications that faulty algorithms can produce if left unquestioned and unrefined. This is certainly a topic of vital importance in the financial realm going forward.

Policing and Prison Sentences

O’Neil speculates that one of the largest areas in which algorithms and big data are being used to feed injustice is policing and prison sentencing. She begins her argument by stating a series of troubling facts.

The first is that African Americans are three times more likely, and Hispanics four times more likely, to receive the death penalty when compared to Caucasians convicted of the same charges. Further, black men on average serve nearly 20% longer sentences than white men charged with similar crimes.

Algorithms and models play an important role in these grim statistics.

Twenty-four states have started using what are called recidivism models in an effort to eliminate racial bias and calculate the associated risk of each convict fairly. Based on the calculated level of risk, the sentence of each person is determined.

In the abstract, such an approach seems fair. However, the question remains whether these models truly have eliminated human bias or if instead they have disguised bias behind mathematical inputs.

One of the most popular models for determining a convict’s level of risk is known as the LSI-R (Level of Service Inventory-Revised) model.

Essentially, the LSI-R model asks each convict a series of questions that allows the algorithm to determine their level of risk to society. Some of these questions are very relevant such as “How many convictions have you had?” or “What part did others play in the offense? What part did drugs or alcohol play?” These are legitimate questions that do not embody prejudice or bias.

However, many of the questions in the model are not as straightforward.

O’Neil reveals that many of the questions are directed at the socioeconomic status of the individual and overweight the circumstances of a person’s upbringing, which taints the statistics. For example, one question incorporated into the LSI-R model is “When was the first time you were ever involved with the police?” To me, this seems like a simple and unbiased question, but O’Neil points out factors and considerations of which I was previously unaware.

A 2013 study in New York City showed that Black and Latino males between the ages of fourteen and twenty-four made up 4.7% of the city’s population, yet they were subject to over 40% of the ‘stop and frisk’ checks by police. Furthermore, over 90% of those stopped proved to be innocent.

The point is that Black and Hispanic men are statistically subject to significantly more run-ins with the police than white men, resulting in increased charges among their respective races. The models do not take these factors into consideration.

Moreover, the LSI-R model inquires as to whether the guilty subject has friends or family with criminal records. It is far less likely that any middle or upper class citizen will have friends or family with criminal records in comparison to someone born into poverty.

Similarly, it is less likely that a wealthy white male would be subject to a stop and frisk check by police than a black male with low economic standing. Thus, the recidivism model’s questions inevitably overexpose racial minorities and groups of low economic status.

The most devastating side effects of algorithmic models like LSI-R are the feedback loops they create. The results they produce compound and feed the cycles of injustice by the very way in which they are structured.

Based on the answers to the questionnaires, we have seen that the poor and some racial minorities are more likely to be deemed “higher risk” than a wealthy person who commits the same crime. Thus, the “higher risk” criminals serve longer sentences.

It has been shown that prisoners serving longer sentences, surrounded by other criminals, are more likely to return to prison in the future. Upon release, this hypothetical high-risk criminal returns to his impoverished neighborhood, where crime is more prevalent and where a criminal record makes it harder to find a job.

Assuming he commits another crime after release, the LSI-R model will be seen as successful in identifying a high risk criminal.

But, aren’t the models partially to blame for this vicious cycle? After all, the model is the reason the person served a longer sentence and thus increased the likelihood of his return to prison.
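The mechanics of that loop fit in a few lines. In the toy simulation below, the single assumed relationship (mine, not a figure from the book) is that each additional year served raises the probability of re-offending, while each re-offense earns a longer next sentence:

```python
import random

random.seed(2)

def career_offenses(first_sentence, rounds=5):
    """Toy feedback loop: longer sentences raise re-offense risk,
    and each re-offense triggers a longer sentence next time."""
    sentence, offenses = first_sentence, 1
    for _ in range(rounds):
        # Assumed: 30% baseline re-offense risk, +5% per year served.
        p_reoffend = min(0.9, 0.30 + 0.05 * sentence)
        if random.random() < p_reoffend:
            offenses += 1
            sentence += 2  # the "high risk" label lengthens the sentence
        else:
            break
    return offenses

short = [career_offenses(first_sentence=2) for _ in range(10_000)]
long_ = [career_offenses(first_sentence=6) for _ in range(10_000)]
print("avg offenses after a 2-year first sentence:", sum(short) / len(short))
print("avg offenses after a 6-year first sentence:", sum(long_) / len(long_))
```

The two groups start with an identical offense; the only difference is the sentence the model assigned. Yet the longer-sentenced group racks up more offenses, which the model then counts as proof that it was right.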

These questions are difficult to answer. The LSI-R model serves as an example of a feedback loop embedded in WMDs. Feedback loops are another common area of concern in algorithmic modeling and it is a subject we will continue to explore in the following sections.

Predatory Advertising

Online advertising is without question one of the largest platforms on which WMDs do their work, and it is a platform to which everyone is subject. However, the consequences of falling victim to predatory advertising differ vastly across economic classes. The example O’Neil uses to illustrate this is for-profit colleges.

First, what is targeted advertising? As you might guess, targeted advertising is when platforms like Google use algorithms to infer your interests from your searches and clicks. Once the platform knows your interests, and even more importantly your anxieties, it will start advertising to you selectively.

Are you short on money? – expect ads for payday loans… Oh, and expect high interest too. Anxious about your weight or physique? – here come the diet pills and gym memberships. You get the point.
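Mechanically, the targeting step can be as crude as matching inferred anxieties against an ad inventory. A minimal sketch follows; the trigger phrases and ad categories are invented for illustration and do not describe any real platform’s system:

```python
# Toy ad targeter: map a user's searches to ads aimed at their anxieties.
# All trigger phrases and ad categories below are invented examples.
AD_INVENTORY = {
    "payday_loan": {"overdraft help", "rent due tomorrow", "borrow money fast"},
    "diet_pills": {"lose weight fast", "crash diet", "cheap gym"},
    "online_degree": {"dead-end job", "no future", "career change at 40"},
}

def pick_ads(search_history):
    """Return ad categories triggered by any of the user's searches."""
    searches = {s.lower() for s in search_history}
    return [ad for ad, triggers in AD_INVENTORY.items() if triggers & searches]

print(pick_ads(["rent due tomorrow", "career change at 40"]))
# -> ['payday_loan', 'online_degree']
```

Real systems infer interests statistically rather than by exact phrase matching, but the principle is the same: the more desperate the signal, the more precisely the ad can be aimed.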

Some targeted advertising is beneficial. After all, it can make life easier. It brings our desires and solutions to our problems right to our fingertips. But, much of this targeted advertising is predatory and malicious. It seeks out the most desperate victims at massive scale. This is certainly the case with for-profit colleges.

To say for-profit colleges like the University of Phoenix, Vatterott College, and Corinthian Colleges target troubled victims would be an understatement.

Examples of some of the criteria that the predatory advertising algorithms of for-profit colleges look for are: “Welfare moms with kids, pregnant women, recently divorced, low self-esteem, low income jobs, experienced a recent death, physically or mentally abused, recent incarceration, drug rehabilitation, dead-end jobs, no future”.

What do all of these people have in common? They are extremely vulnerable, hence the name predatory advertising. Victims who meet these criteria are just the people to whom for-profit colleges appeal. They are overwhelmed and looking to turn their lives around.

Not coincidentally, the victims of predatory ads also happen to be tremendously ill-informed.

In our example, for-profit colleges charge exorbitantly high tuition fees in comparison to public universities. A for-profit college by the name of Everest University charged $68,000 for an online degree in paralegal studies. This same degree can be achieved for less than $10,000 at several public universities across the United States.

Even worse, the degrees earned at for-profit colleges are often of no more value than a high school diploma. Unfortunately, because of fraudulent ads and the façade of prestige surrounding these schools, individuals who attend for-profit colleges do not become aware of this until it is far too late.

To make matters worse, the great majority of the victims of for-profit colleges come from the lower class.

So, how do they pay for their tuition? With more predatory advertising in the form of student loans – many times at criminally high interest rates.

In the end, the WMDs behind predatory advertising, as used by for-profit colleges, leave vulnerable and desperate victims of low economic status up to their chins in debt, compounding at high interest, for a piece of paper that proves to be worthless.

Meanwhile, the masterminds behind these WMDs like the CEO of Apollo Education Group (parent company of the University of Phoenix), Gregory Cappelli, are writing themselves checks for $25,000,000 a year in compensation…Yikes.

Landing Credit

A study by the Society for Human Resource Management found that almost half of the employers in the United States screen hires based on a candidate’s credit history. The theory is that people who avoid debt and pay bills in a timely manner are more dependable and more likely to be responsible and effective workers. O’Neil believes this practice has become all too common and has noxious repercussions.

Credit reports, and derivatives of credit reports like e-scores, have become widely used in employment screening, in determining loan or insurance eligibility, and in many other applications.

Algorithms can ingest the data from a person’s credit history with ease, allowing them to place each individual into buckets of people with similar ratings. In the case of e-scores, the inputs include a person’s zip code, internet surfing patterns, online purchases, and other proxies of dubious scientific validity, all used to estimate creditworthiness.

E-scores matter because it is illegal to use a candidate’s credit score in hiring decisions without the person’s permission. As you might guess, companies have a few ways around this legal issue.

To start, there are readily available e-scores that many believe predict creditworthiness as well as credit scores themselves. Companies will often use e-scores if the potential hire denies them permission to look through their credit history.

Consequently, it is important to note that if a candidate denies the right of the employer to evaluate their credit history it is very likely that they will not be considered for the job.

So, why is this important? Consider e-scores in the case of loan or credit card eligibility. It is a near certainty that someone from a rough, low-income neighborhood will be given a low e-score, since the algorithms are designed to place high priority on zip code.

Statistically, people from low income neighborhoods are much more likely to default.

As such, anyone in this bucket will be deemed risky meaning less available credit and higher interest rates. A brutal result considering this person is likely already struggling to make ends meet.
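A stripped-down e-score makes the zip-code problem concrete. Every number below (the weights, the neighborhood default rates, the rate tiers, and the zip codes themselves) is invented to illustrate the mechanism and is not drawn from any real scoring product:

```python
# Hypothetical e-score: all weights, rates, and zip codes are invented.
ZIP_DEFAULT_RATE = {"11111": 0.02, "22222": 0.12}  # assumed neighborhood rates

def e_score(zip_code, on_time_rate, income):
    """Toy creditworthiness proxy, deliberately dominated by zip code."""
    neighborhood = 1.0 - 5 * ZIP_DEFAULT_RATE[zip_code]
    individual = 0.5 * on_time_rate + 0.5 * min(income / 100_000, 1.0)
    return round(100 * (0.7 * neighborhood + 0.3 * individual))

def interest_rate(score):
    return 0.06 if score >= 80 else 0.14 if score >= 60 else 0.24

# The same borrower, moved between two zip codes:
for z in ("11111", "22222"):
    s = e_score(z, on_time_rate=0.95, income=40_000)
    print(f"zip {z}: e-score {s}, offered rate {interest_rate(s):.0%}")
```

Identical payment history and income, yet the borrower from the higher-default zip code drops from a score of 83 to 48 and sees the offered rate quadruple. This is the bucketing O’Neil objects to: the individual is priced by the history of the neighborhood.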

Many would point out that most people from areas of low income and high crime are inherently riskier borrowers. That may be a valid conclusion. But does it justify assuming that the history of human behavior in a certain zip code should determine the type of loan to which a specific individual living there is entitled?

There are certainly a number of people who would be responsible borrowers. Opportunities for such loans, employment, and so forth would give them the ability to work their way out of poverty. However, by the nature of these WMDs, people are grouped by zip code and other biased evaluation metrics into buckets of people the models deem to be like them, and in milliseconds their opportunities are limited and squandered.

Moreover, evaluating the credit reports of people being considered for employment has proven to create a vicious feedback loop.

As previously stated, companies must legally have permission to evaluate the credit history of a potential hire and those who deny this permission are likely not considered.

By the same token, those who do grant employers the right to inspect their credit reports are just as likely to be passed over if their ratings are poor. A 2012 study showed that, among low- and middle-income families carrying credit card debt, one in ten people had been denied employment because of their credit history.

It is safe to assume that even more were denied jobs on the same basis but were told other candidates were more qualified or any number of other false reasons.

O’Neil emphasizes how dangerous and damaging the practice of using credit scores in hiring and promotions is and how it creates and contributes to a serious poverty cycle.

Being denied a job because of poor credit will inevitably cause the credit scores of this population to worsen further, in turn limiting their opportunities for work and advancement in society. Feedback loops resulting from WMDs evaluating credit reports ultimately contribute to the wealth gap in our country and increase inequality.

The last area I will address with respect to the danger of algorithms using credit history to determine qualification for loans, jobs, and insurance is the wealth gap this process generates.

As we know, the global economy is constantly cycling. Furthermore, reliable and hardworking individuals lose their jobs every day as companies fail, cut budgets, and outsource labor overseas. In periods of economic downturn, the number of layoffs escalates.

Those left unemployed by such circumstances often no longer have access to health insurance.

This is critical because medical expenses are the primary cause of bankruptcy among individuals and families in the United States. Hardworking people previously living paycheck to paycheck who become ill, or whose families become ill, during such times find themselves in debt, and their credit plummets.

By contrast, wealthy people frequently have substantial savings, so an event like this would not damage their credit ratings the way it would someone equally dependable and hardworking who lives paycheck to paycheck.

Ultimately, this means that credit scores are more than the indicators of dependability and responsibility employers take them for; they are also directly correlated with wealth. Because the wealthy can keep their credit ratings healthy and attractive in all seasons, they are granted far more opportunity in employment as well as in a range of other sectors of life.

The opposite is true for those less fortunate and a widening wealth gap is what results.

Overall, the damaging implications of WMDs evaluating items like credit scores impose serious side effects on our society and create feedback loops that will compound and multiply the injustice and inequality they produce.

Conclusion

The five topics covered in this brief are just a few of the many arenas in which big data and algorithms have been misused and exploited in damaging and counterproductive ways. I have little doubt that the majority of these so-called weapons of math destruction were designed with good intentions, aiming to increase efficiency and bring a heightened sense of fairness to the beneficiaries of their products.

Unfortunately, the opposite is true in many cases. The key revolves around inspecting and optimizing the inputs of these models in a way that warrants fairness and equality.

Big data and computers will remain a prevalent part of each of our futures and the use of algorithmic models to solve problems, increase efficiency among systems, and allocate resources will only continue to increase with time.

O’Neil reveals the dangers and pitfalls of many current algorithmic models and the need for established ethics and moral fortitude behind the inputs has become very clear. A fundamental set of ethics and morals is essential to protecting the democracy and equality of our future.

It is my belief that algorithms and models like the ones O’Neil describes can be constructed in ways that are ultimately used for the good and betterment of mankind.

Arguably the most important finding O’Neil exposes in her work is the sheer power these algorithms have in our lives. They are used at a massive scale and they affect each of us. Because of this, one of the areas in which the most improvement is needed is transparency in the models that control so many facets of our lives.

People have a right to know the exact parameters that go into algorithmic models that impact their rights, democracy, and livelihoods. It is unacceptable for these inputs and outputs to be masked behind intentionally intimidating mathematical functions designed specifically to keep their subjects in the dark.

In order to establish integrity and fairness among algorithmic models these processes must be regulated and policed in ways that are fundamentally moral. The age of the algorithm is still relatively new, and many of its future applications remain unknown.

Thus, it is absolutely critical that measures are taken to ensure and regulate the integrity of current and future models. After all, we are the ones who ultimately control what data to pay attention to, and which data to dismiss. As O’Neil has made so evident this is the very essence of whether algorithms are used for the good or if they become weapons of math destruction.

Will we allow the algorithms used in this age of big data to increase inequality and threaten democracy, or will we carefully inspect and design the inputs of algorithms to bring fairness, equality, and productivity to our world? The decision remains entirely up to us.

HookedtoBooks.com would like to thank the Titans of Investing for allowing us to publish this content. Titans is a student organization founded by Britt Harris. Learn more about the organization and the man behind it by clicking either of these links.

Britt always taught us Titans that wisdom is cheap, and that one can find treasure troves of the good stuff in books. We hope you will also express your thanks to the Titans if this book review brought wisdom into your life.
