Question
We will undertake a limited investigation of some of the moral issues associated with the growth of Big Data analytics and its rapidly increasing deployment in numerous human decision-making applications and situations, a deployment that affects everyone's lives more and more. We will focus mainly on Cathy O'Neil's text Weapons of Math Destruction, several reviews of that text, and the article "Big Data Ethics," which was not assigned reading but is important for approaching certain dimensions of this topic. Links to the texts are listed below.
What I think you should take away from this investigation is an increased awareness of the scope and ubiquity of Big Data and of the distinctive, in some cases entirely new, moral issues that arise with this emerging technology. In some ways ethics precedes technology, while in other ways it is always playing catch-up with the new.
One dimension of the moral issues with Big Data arises because Big Data uses proxy measurements to target or evaluate members of correlated groups. "The police analyze zip codes to deploy officers, employers use credit scores to gauge responsibility, payday lenders assess grammar to determine credit worthiness. But zip codes are also a stand-in for race, credit scores for wealth, and poor grammar for immigrants." (source)
Thus, mindless algorithms created by an unknown number of invisible technicians, who inevitably and perhaps blindly introduce their own prejudices and biases, can result in racism, sexism, ethnic profiling, predatory marketing, prejudicial policies, and other kinds of injustice. This potential for harm is multiplied by the sheer scope of Big Data's reach. And because Big Data insinuates itself opaquely into the most intimate and personal corners of our everyday lives, it will almost certainly transform our sense of self and our moral value orientation without our being aware of it, guided by an invisible human hand targeting the data-driven payload toward often unsuspecting groups.
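To make the proxy problem concrete, here is a purely illustrative sketch in Python. The feature names, weights, and ZIP codes are invented for this example; they are not taken from O'Neil's book or from any real system. The point is only that a score can encode bias through proxies without ever referencing a protected attribute directly.

```python
# Hypothetical illustration: a "neutral" score built entirely from proxy variables.
# None of the inputs mention race, wealth, or immigration status, yet each proxy
# is correlated with one of them, so the bias rides along invisibly.

def eligibility_score(zip_code: str, credit_score: int, grammar_errors: int) -> float:
    """Toy scoring rule built only from proxy variables (all weights invented)."""
    high_risk_zips = {"60624", "48205"}        # arbitrary ZIP codes standing in for neighborhood/race
    zip_penalty = 30 if zip_code in high_risk_zips else 0
    credit_bonus = (credit_score - 600) * 0.1  # credit score standing in for wealth
    grammar_penalty = grammar_errors * 5       # grammar errors standing in for non-native speakers
    return 50 + credit_bonus - zip_penalty - grammar_penalty

# Two applicants with identical behavior but different proxy profiles
# end up with very different scores.
print(eligibility_score("60624", 580, 4))  # penalized on every proxy: 50 - 2 - 30 - 20 = -2.0
print(eligibility_score("94301", 760, 0))  # rewarded on every proxy:  50 + 16 - 0 - 0 = 66.0
```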
Here it might be instructive to recall what we learned from the Stanford prison experiment, from Milgram's obedience study, and from social psychologists like Sam Sommers, whose book Situations Matter shows how background dimensions of situations that we are mostly unaware of can have a big impact on our perception, including our self-perception. Big Data is the perfect tool for taking advantage of this tacit source of influence. Most folks will barely realize it is happening.
The article listed below, "Big Data: Weapons of Math Destruction" by Derek Beres, is a review of Cathy O'Neil's book Weapons of Math Destruction. Beres focuses on the potential for Big Data to cause "dehumanization by numbers" by making decisions that affect people's lives through algorithms that "create self-perpetuating feedback loops where your phone bill can have more impact on auto insurance than getting hammered and sitting behind the wheel." This occurred in Florida, according to O'Neil, where some residents with clean driving records were charged more for insurance than others who had DUIs but good credit scores. Such unfairness results from using abstract, impersonal algorithms that rely on "proxies" (quantifiable data sources) to make decisions in other areas of human interaction. O'Neil details numerous cases.
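As a minimal sketch of how such a weighting could produce the Florida outcome: the formula and all numbers below are invented for illustration and are not drawn from any actual insurer or from O'Neil's data. The only point is that when a proxy (credit score) carries far more weight than actual behavior (a DUI), the clean driver with bad credit ends up paying more.

```python
# Hypothetical auto-insurance premium: the credit-score term is weighted far more
# heavily than the driving record, reproducing the kind of outcome O'Neil reports.
# All numbers are invented.

def annual_premium(base: float, credit_score: int, has_dui: bool) -> float:
    credit_surcharge = max(0, 700 - credit_score) * 2.0  # credit proxy dominates the price
    dui_surcharge = 150.0 if has_dui else 0.0             # actual driving behavior barely matters
    return base + credit_surcharge + dui_surcharge

clean_driver_bad_credit = annual_premium(800.0, 550, has_dui=False)  # 800 + 300 + 0   = 1100.0
dui_driver_good_credit = annual_premium(800.0, 780, has_dui=True)    # 800 + 0   + 150 = 950.0
print(clean_driver_bad_credit > dui_driver_good_credit)  # True: the clean driver pays more
```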
For example, in "A Math Nerd Wants to Stop the Big Data Monster," Katherine Burton points out that O'Neil "describes companies using ZIP codes as a proxy for creditworthiness, which leads to predatory lending and hiring discrimination." Again, in "Math is racist: How data is driving inequality," Aimee Rawlins points out that one of the most compelling sections of O'Neil's book concerns the algorithm-driven recidivism models used in criminal sentencing:
For years, criminal sentencing was inconsistent and biased against minorities. So some states started using recidivism models to guide sentencing. These take into account things like prior convictions, where you live, drug and alcohol use, previous police encounters, and criminal records of friends and family.
These scores are then used to determine sentencing.
"This is unjust," O'Neil writes. "Indeed, if a prosecutor attempted to target a defendant by mentioning his brother's criminal record or the high crime rate in his neighborhood, a decent defense attorney would roar, 'Objection, Your Honor!'"
But in this case, the person is unlikely to know the mix of factors that influenced his or her sentencing -- and has absolutely no recourse to contest them.
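To make the structure of such a model concrete, here is a deliberately simplified, entirely hypothetical sketch using the factors listed in the quotation above. Real recidivism instruments are proprietary; the factor names, weights, and outputs here are invented for illustration only.

```python
# Entirely hypothetical recidivism-risk score built from the factors listed above.
# Note that several inputs (neighborhood, family records) describe the defendant's
# circumstances, not the defendant's own conduct.

def risk_score(prior_convictions: int,
               neighborhood_crime_rate: float,   # crimes per 1,000 residents
               substance_use: bool,
               police_encounters: int,
               family_with_records: int) -> float:
    score = 2.0 * prior_convictions
    score += 0.5 * neighborhood_crime_rate       # proxy for where you live
    score += 3.0 if substance_use else 0.0
    score += 1.0 * police_encounters
    score += 1.5 * family_with_records           # someone else's record counts against you
    return score

# Identical personal history, different neighborhood and family:
print(risk_score(1, 4.0, False, 1, 0))    # 5.0
print(risk_score(1, 40.0, False, 1, 2))   # 26.0, a much "riskier" defendant on paper
```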
Once we begin to see the potential for Big Data to impact and influence our lives and our consciousness and to drive social policy, we can begin to understand the kinds of moral concerns that Andrej Zwitter raises in his article "Big Data Ethics" about the potential erosion of personal moral responsibility that results from the nature, growth, and impact of Big Data.
According to Zwitter, the invisibility of Big Data's influence is due in part to the speed of Big Data's development, which is too fast for us to fully comprehend its nature and possible effects; it is a case of technological development dangerously outpacing the development of moral consciousness. Part of the reason is that the mining and deployment of Big Data are largely invisible and we are all tacitly complicit in their formation. And the Big Data industry is still in its infancy, as Daniel Hulme made clear in the video we watched, so things are just getting started. Thanks in large part to the growth of the internet, we collect far more data than we actually put to use, and the amount of interpreted and deployed Big Data will undoubtedly continue to increase as applications multiply.
Here is why Zwitter thinks that Big Data will cause an erosion of individual moral responsibility. He points to the "hyper-connectivity" of current society, witnessed, for example, in the explosion of social networking and driven by Big Data. As a result, virtually everyone becomes a data collection point, contributing anonymously, to some degree, to the targeting of groups for commercial, policing, evaluative, and other purposes, targeting that can involve unjust and prejudicial outcomes. For this reason, Zwitter thinks, Big Data will erode individual moral agency and individual moral responsibility.
Let's look at this a little more closely. As we learned earlier from Kant, being a moral agent means that you are responsible for the actions that you knowingly and willingly cause. But with Big Data you are part of the aggregated cause of some targeting that happens far down the data road, without clear knowledge of the extent of your participation, of its targeted use, or of the outcome of what you partially initiated. Thus Zwitter asks: to what extent do data contributors (you and me) bear moral responsibility for those targeted, and possibly unjust, outcomes?
Data is de-individualized in its aggregation, which distances the individual data contributor from any moral connection to his or her input and to the consequent outcome affecting the targeted group. Yet impersonal data still retains group characteristics (otherwise it would be useless), so the fact that the data has been "de-individualized" matters little: anonymization still leaves group privacy vulnerable. Individual data contributors inevitably contribute to this group vulnerability and to the use to which it is put, yet they no longer have any control over the outcome of that use and thus cannot bear moral responsibility for it. In this way, Zwitter thinks, Big Data undermines or erodes personal moral responsibility.
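A small sketch with invented records may help make Zwitter's point concrete: stripping the names removes the individual, but the group-level signal, and therefore the target, remains. The names, ZIP codes, and default flags below are fabricated for illustration.

```python
# Invented records: anonymization drops the names, yet aggregation by ZIP code
# still yields a group profile that can be used to target everyone in the group.
from collections import defaultdict

records = [
    {"name": "A. Jones", "zip": "11111", "defaulted": True},
    {"name": "B. Smith", "zip": "11111", "defaulted": True},
    {"name": "C. Lee",   "zip": "11111", "defaulted": False},
    {"name": "D. Kim",   "zip": "22222", "defaulted": False},
    {"name": "E. Diaz",  "zip": "22222", "defaulted": False},
]

# "De-individualize": keep only the group key and the attribute of interest.
anonymized = [(r["zip"], r["defaulted"]) for r in records]

group_totals = defaultdict(lambda: [0, 0])      # zip -> [defaults, count]
for zip_code, defaulted in anonymized:
    group_totals[zip_code][0] += int(defaulted)
    group_totals[zip_code][1] += 1

for zip_code, (defaults, count) in group_totals.items():
    print(zip_code, f"{defaults / count:.0%} default rate")
# Prints "11111 67% default rate" and "22222 0% default rate": no individual is
# named, but everyone in ZIP 11111 can now be treated as a higher credit risk.
```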
In response to Zwitter, however, let me offer an alternative interpretation of his worry about the erosion of individual moral agency, a perspective born of our earlier reflections in this course on the nature of human subjectivity. It may be that the potential undermining of individual moral agency by the hyper-networked structure of a datafied, data-driven social order is not an assault on moral agency at all, but rather a necessary correction to the overreach of the very idea of moral agency, an idea we encountered earlier in this course.
To be a moral agent involves the presumption that we are somehow free of situational influences (free and autonomous) and thus able to make moral judgments for which we are entirely morally responsible. But this seemingly realistic position may rest on an erroneous assumption at the heart of the very idea of moral agency.
As we saw with the workers at Wells Fargo, who were influenced by the toxic cross-selling culture at the bank, and recalling again what social psychology teaches about invisible situational influences on our perception and judgments, it seems reasonable to conclude that the toxic culture influenced the workers' moral judgments and is thus to some degree responsible for their immoral behavior. Just as human subjectivity is essentially inter-subjectivity, according to Emmanuel Levinas, so morality is perhaps always inter-relational, such that we, as individuals, are never wholly and entirely responsible for any of our actions. In short, we are all in this together, inescapably. The idea of the separate individual is a myth, not a reality.
So, from this inter-relational moral perspective, Big Data is not a threat to moral agency, as Zwitter thinks. Rather, Big Data (along with social psychology and Levinas) supports a critique of the individualist understanding of moral agency as an outmoded ideal configuration of the person, one based on a liberal notion of rational subjectivity that makes little sense in a hyper-connected social order. In other words, if it is true that we are all connected, as Levinas argues, and if it is true that our judgments are influenced by situational factors we are unaware of, then we cannot be held individually accountable for our inter-related and contextualized actions.
Could it be that there really is no such thing as an action of which I am wholly the cause? Are all actions necessarily inter-relational to some extent? What do YOU think?
Big Data: Weapons of Math Destruction? REQUIRED
Explain why you think the topic is important or interesting to you and how you think and feel about it. Explain and evaluate the moral issue or issues associated with the topic, looking at both the pro and con sides of the issue where applicable, describing different perspectives on the issue, pointing out who could be harmed and who could benefit, looking at the motives of the groups involved, and covering whatever information you think is essential to a good summary. Finally, make a judgment about the issue and explain what you think would be the best way to resolve the conflict at the heart of each of the two moral issues you choose, taking the claims of all sides into account.
Explanation / Answer
The data sitting on your company's servers was just data until yesterday: sorted and filed. Then the term Big Data became popular, and now the data in your company is Big Data. The term covers every piece of data your organization has stored to date. It includes data stored in the cloud and even the URLs you have bookmarked. Your company may not have digitized all of its data, and you may not have structured all of it, but all of it, digital or paper, structured or unstructured, is now Big Data. In short, all the data present on your servers, whether or not it has been categorized, is collectively called Big Data.

All this data can be analyzed in different ways to produce different results. Not every analysis uses all of the data; different analyses use different parts of the Big Data to produce the results and predictions needed. Big Data is essentially the data that you analyze for results you can use for predictions and other purposes. Once the term Big Data is invoked, your company or organization is suddenly working with top-level information technology to derive different kinds of results from the same data that it stored, intentionally or unintentionally, over the years.