
Question

Online Communities vs. Traditional Communities

Cybertechnology has made it possible to extend, or perhaps even ignore, the

geographical boundaries of traditional community life. This, in turn, causes us to

reexamine the concept of community; individuals physically separated by continents

and oceans can now interact regularly in SNSs and other online forums to discuss topics

that bind them together as a community. Not surprisingly then, more recent definitions of

“community” focus on the common interests of groups rather than on geographical and

physical criteria.

Rheingold points out that because of the social contracts and collaborative negotiations

that happened when members met online, the WELL became a community in that

setting. He notes, for example, that in the WELL, norms were “established, challenged,

changed, reestablished, rechallenged, in a kind of speeded up social evolution.” When the

members decided to get together occasionally at physical locations in the greater San

Francisco Bay area, the WELL became a “hybrid community,” spanning both physical

and virtual space. But some “pure” online communities also continue to thrive alongside

the hybrid communities. As Michelle White (2002) notes, these electronic-only forums

also seem like “real communities” because they offer their members “social exchange,

emotional support, and learning environments.”

A very popular mode of online communication for both young and older Internet users is a forum called the blog (or Web log). According to the (online) Merriam-Webster

Dictionary, a blog is “a Web site that contains an online personal journal with reflections,

comments, and often hyperlinks provided by the writer.” How do blogs facilitate

interactions in, and function as, online communities? While some blogs function as

online diaries, others provide commentary on a particular topic or news story. Based on

their topics, blogs are often organized into categories such as personal blogs, political

blogs, corporate blogs, health blogs, literary blogs, travel blogs, etc.

Blogs can be maintained by either individuals or organizations. The community of

blogs is often referred to as the “blogosphere.” Online communities such as myBlogLog

and Blog Catalog connect bloggers, whereas search engines such as Bloglines, BlogScope, and Technorati assist users in finding blogs. Blogging has become popular because

it is an easy way to reach many people, but it has also generated some social and ethical

controversies.4 For example, we saw in our analysis of the Washingtonienne scenario (in

Chapter 1) that a number of privacy-related concerns arose, which affected not only

Jessica Cutler but also the six men implicated in her personal online diary. Other

controversies arise in response to political blogs—for instance, some bloggers have

been responsible for breaking news stories about political scandals and thus influencing

public opinion. However, some of these bloggers also had political agendas to advance

and were eager to spread negative stories about politicians whose views they opposed,

and in some cases these stories have not been accurate.

Assessing Pros and Cons of Online Communities

Those who see these communities in a favorable light could point to the fact that on SNSs

such as Facebook, users can make new “friends” and meet prospective college roommates

before setting foot on campus; they can also possibly find future romantic partners

in online dating services such as eHarmony. Additionally, users can join and form online

medical support groups, as well as various blogs designed to disseminate material to like-minded colleagues. Through these online services and forums, users can communicate

with people they might not otherwise communicate with by physical mail or telephone.

Gordon Graham (1999) believes that online communities also promote individual

freedom because members can more easily disregard personal attributes, such as gender

and ethnicity, which are more obvious in traditional communities.

However, online communities have also had some negative effects. In addition to

threatening traditional community life, they have

A. facilitated social polarization (because of the very narrow focus of some groups),

B. minimized the kind of face-to-face communications (that have defined traditional

friendships),

C. facilitated anonymity and deception (thereby enabling some forms of socially

and morally objectionable behavior that would not be tolerated in traditional

communities).

Online Communities and Social Polarization

We have noted some ways cybertechnology provides us with choices about which kinds of

online communities we wish to join; this would seem to contribute positively to human

interaction by enabling us to come together with like-minded individuals we otherwise

might not meet. However, some online communities, especially those whose focus tends

to be on topics and issues that are divisive and narrow, can also contribute to social

polarization. Mitch Parsell (2008) argues that “extremely narrowly focused” online

communities can be dangerous because they “can polarize attitudes and prejudices,”

which can lead to increased division and “social cleavage.” He worries that the narrow

focus of many online communities presents us with cause for concern. Parsell expresses

this concern in the form of the following argument:

1. People tend to be attracted to others with like opinions.

2. Being exposed to like opinions tends to increase our own prejudices.

3. This polarizing of attitudes can occur on socially significant issues. . . .

4. Thus, where the possibility of narrowing focus on socially significant issues is

available, increased community fracture is likely.5

So, even though online communities can empower individuals by providing them

with greater freedom and choice in terms of their social interactions, they can also foster

increased polarization in society.

Friendships in Online Communities

A related, and very important, question that also arises has to do with the implications that

online-only communication between individuals may have for our traditional understanding

of friendship. In other words, is it possible for people who interact only in virtual

(or purely online) contexts to be “real friends”?

To what extent, if any, is physical interaction between individuals necessary for true

friendships to develop and flourish? At one time, the notion of “disembodied friends”

might have seemed strange. But today, we hear about so-called “friends” who communicate

regularly online but have never met in physical space.

Some philosophers argue

that it is not possible to realize close friendships in a “virtual world” because purely

computer-mediated contexts (a) facilitate voluntary self-disclosure and (b) enable people

to choose and construct a highly controlled “self-presentation” or identity. Because of

these factors, essential elements of a person’s character, as well as the “relational self

ordinarily developed through those interactions in friendship” are distorted and lost. For

example, they point out that in off-line contexts, we involuntarily disclose aspects of

ourselves through indicators or “cues” in our interactions with others. And because

interactions in these contexts are acts of “nonvoluntary self-disclosure,” one has less

control over the way one presents oneself to others. As a result, important aspects of our

true personalities are involuntarily revealed, which makes close friendships possible in

off-line contexts but not in virtual ones.

Deception in Online Communities

Some critics believe that online communities reveal a “darker side” of the Internet

because people can, under the shield of anonymity, engage in behavior that would not be

tolerated in most physical communities. For instance, individuals can use aliases and

screen names when they interact in online forums, which makes it easier to deceive others

about who actually is communicating with them. We briefly examine a scenario that is

now a classic case for illustrating how online anonymity, pseudonymity, and deception

can contribute to the darker side of online communities.

What is Virtual Reality (VR)?

Philip Brey (1999, 2008) defines virtual reality, or VR, as “a three dimensional interactive

computer generated environment that incorporates a first person perspective.” Notice

three important features in Brey’s definition of VR:

- interactivity,
- a three-dimensional environment,
- a first-person perspective.

First, interactivity requires that users be able to navigate and manipulate the represented environment. Because a three-dimensional environment is required in VR, neither text-based computer-generated environments nor two-dimensional graphic environments will

qualify. Brey also points out that a first-person perspective requires a single locus from

which the environment is perceived and interacted with; the first-person perspective also

requires an immersion in the virtual world rather than simply an “experience” of that

world as an “object that can be (partially) controlled by the outside.”

Ethical Controversies Involving Behavior in VR Applications and Games

Are ethical issues involving behavior in VR applications, including online games,

different from those associated with morally controversial acts displayed on television

or played out in board games? Consider that television programs sometimes display

violent acts and some board games allow participants to act out morally controversial

roles—how are VR applications different? Brey (1999) points out that in VR applications,

users are actively engaged, whereas television viewers are passive. VR users are not

spectators; rather, they are more like actors, as are board game players, who also act out

roles in certain board games. This common feature suggests that there might not be much

difference between the two kinds of games; however, Brey notes that VR applications,

unlike board games, simulate the world in a way that gives it a much greater appearance

of reality. And in VR, the player has a first-person perspective of what it is like to perform

certain acts and roles, including some that are criminal or immoral, or both.

Violent and Sexually Offensive Acts in MMORPGs and MMOGs

In addition to concerns about sexually offensive behavior in online games, many

worry about the kinds of violent acts that are also carried out in these environments.

Monique Wonderly (2008) suggests that some forms of violence permitted in online

games may be “more morally problematic” than pornography and other kinds of sexually

offensive behavior in virtual environments. She points out, for example, that relatively

few video games “permit sexual interaction between characters,” and even fewer allow

“deviant sexual conduct.”

Morgan Luck (2009) notes that while most people agree that murder is

wrong, they do not seem to be bothered by virtual murder in MMORPGs. He points out,

for example, that some might see the virtual murder of a character in a video game as no

different from the “taking of a pawn in a chess game.” But Luck also notes that people

have different intuitions about acts in virtual environments that involve morally objectionable

sexual behavior, such as child pornography and pedophilia. And he worries that

the kind of reasoning used to defend virtual murder in games could, unwittingly, be

extended to defend virtual pedophilia. For example, he notes that the following line of

reasoning, which for our purposes can be expressed in standard argument form, may

unintentionally succeed in doing this.

1. Allowing acts of virtual murder will not likely increase the number of actual

murders.

2. Allowing acts of virtual pedophilia may significantly increase the amount of

actual pedophilia.

3. Therefore, virtual pedophilia is immoral, but virtual murder is not.

A different kind of rationale for why virtual child pornography should be prohibited

has been offered by Per Sandin (2004), who argues that it can cause significant harm to

many people who find it revolting or offensive. But Brey (2008) points out that a problem

with Sandin’s argument is that it “gives too much weight to harm caused by offense.” As

Brey puts it, “If actions should be outlawed whenever they offend a large group of

people, then individual rights would be drastically curtailed, and many things, ranging

from homosexual behavior to interracial marriage, would still be illegal.”15 Hence, none

of the arguments considered so far can show why acts that are morally objectionable in

physical space either should or should not be allowed in virtual environments.

Assessing the Nature of “Harm” in Virtual Environments

Can a plausible argument be constructed to show why it is wrong to perform acts in virtual

environments that would be considered immoral in real life? We have seen some

difficulties with arguments that tried to show that allowing morally objectionable actions

in virtual environments will likely lead to an increase (or decrease) in those actions in the

real world. Other arguments have tried to link, or in some cases delink, the kind of harm

caused in virtual environments with the sense of harm one might experience in the real

world. For example, some arguments have tried to show that sexually offensive acts in

virtual environments can cause harm to vulnerable groups (such as children and women)

in the real world.16 However, the individual premises used to support the conclusions of these arguments typically lack sufficient empirical evidence to establish the various claims being made. Conversely, some arguments claim that no one is physically harmed in

virtual murder or, for that matter, in any act performed only in a virtual environment. But

these arguments have also been criticized for lacking sufficient evidence to establish their

conclusions.

Brey (1999) believes that we can use two different kinds of arguments to

show why it is wrong to engage in immoral acts in virtual environments:

a. The argument “from moral development.”

b. The argument from “psychological harm.”

The argument from psychological harm suggests that the way we refer to characters

that represent a particular group can cause harm to actual members of the group.

Consider a cartoon depicting a woman being raped: Actual (flesh-and-blood) women

may suffer psychological harm from seeing, or possibly even knowing about, this cartoon

image, even though none of them, as flesh-and-blood individuals, is being raped, either

physically or as represented by the cartoon. Extending this analogy to virtual space, it

would follow that the “rape” of a virtual woman in a virtual environment, such as a MOO,

MMOG, MMORPG, etc., can also cause psychological harm to flesh-and-blood women.

Virtual Economies and “Gold Farming”

Kai Kimppa and Andrew Bisset (2008) define gold farming as “playing an online

computer game for the purpose of gaining items of value within the internal economy of

the game and selling these to other players for real money.”17 These items can include

“desirable items” as well as in-game money (where the rules defining the game’s internal

economy permit this); they can also include “highly developed” game characters. All of

these items can also be sold via online auctions or designated Web sites. Kimppa and

Bisset point out that the 2009 “in-game gold market” globally was estimated at 7 billion

dollars; they also note that the practice of gold farming is most popular in countries such

as China and Mexico that have both low-average income levels and “relatively good

access to the Internet.”

Misrepresentation, Bias, and Indecent Representations in VR Applications

So far, we have examined some behavioral, or what Brey (1999) also refers to as

“interactive,” controversies regarding ethical dimensions of VR applications. The other

ethical aspect that needs to be considered, in Brey’s VR model, has to do with the ways in

which virtual characters and virtual objects are represented in these applications. Note

that this set of ethical concerns includes not only virtual characters in games but also

features of VR applications used to simulate and model objects in the physical world.

Brey (2008) argues that representations can become morally problematic when

they are

1. misrepresentations (that can cause harm by failing to uphold standards of

accuracy),

2. biased representations (that fail to uphold standards of fairness),

3. indecent representations (that violate standards of decency and public morality).

Misrepresenting entities with respect to descriptive features, however, can be

distinguished from (otherwise accurate) representations that favor certain values or

interests over others. Brey calls the latter biased representation; it can result from the

choice of model. For example, “softbots” (or “bots”) in the form of avatars on computer

screens, which often display human-like features and qualities, could be used in a VR

application to represent members of a racial or minority group; even though the

representation may be structurally accurate, if the avatar is used in a way that suggests

a racial stereotype, it can fail to accurately portray a member of the racial group.

However, Brey also points out that the context in which a representation takes place

can also be a factor in determining whether it is considered decent. He uses the example

of a representation of open heart surgery to illustrate this point, noting that a representation

of this procedure in the context of a medical simulator may not be offensive to

someone considering whether to undergo the surgery. However, it could be deemed

offensive in other contexts, such as using the representation as a background in a

music video.

CYBER IDENTITIES AND CYBER SELVES: PERSONAL IDENTITY

AND OUR SENSE OF SELF IN THE CYBER ERA

Social scientists have described various ways that the use of cybertechnology can impact

personal identity. One (now classic) incident, in the 1980s, that quickly caught their

attention involved a male psychologist who joined an online forum for disabled persons,

where he identified himself as a woman who had become crippled as a result of an

automobile accident. Under this alias, “she” soon engaged in romantic exchanges with a

few of the forum’s members. When “her” true identity was later discovered, however,

many of the participants in this electronic forum were outraged. Some felt manipulated

by the psychologist’s use of a fraudulent identity, and others complained that they were

victims of “gender fraud.” Lindsy Van Gelder (1991) describes this incident as “the

strange case of the electronic lover.”

Cybertechnology as a “Medium of Self-Expression”

Sherry Turkle’s early studies focused mainly on the role that stand-alone, or non-networked,

computers played in the relationship between personal identity and computers. Her

subsequent research in this area has centered on interactions involving networked

computers; in particular, her studies examine how behavior in networked environments

significantly impacts our relationships with our “selves” as we conceive them. To illustrate

several of her key points, we turn next to her discussion of “MUD selves.”

“MUD Selves” and Distributed Personal Identities

In MUDs, users can be (i.e., can represent themselves textually or graphically as)

characters that are very different from their actual selves; Turkle notes, for example,

that the obese can be slender, and the old can be young. She also points out that MUD

users can express multiple, and often unexplored, aspects of the self, and that they can

“play with their identity” by trying out new roles.

Acts in which individuals assume different identities or different gender roles are

hardly unique to the world of MUDs and virtual environments; for example, a male

transgendered person in physical space can selectively represent himself as a member of

the opposite sex in contexts of his choosing.

Turkle notes that some of her research subjects in MUDs experience their world

through interactions in “multiple windows”; real life (or “RL”) is considered by some

MUD participants as simply “one more window.” One of Turkle’s research subjects,

whom she refers to as Doug, remarked that RL is not typically his “best window.” Turkle

points out that in MUDs, we can “project ourselves into our own dramas in which we are

producer, director, and star”; in this sense, she believes that MUDs provide a “new

location for acting out our fantasies.”

The Impact of Cybertechnology on Our Sense of Self

We have examined some effects that one’s interactions in virtual or computer-mediated

environments, including MOOs and MUDs, can have for one’s personal identity. In this

section, we focus on the impact that cybertechnology has for our sense of self (as humans)

vis-à-vis two factors:

a. our relation to nature,

b. our relation to (and sense of place in) the universe.

With regard to (a), social scientists often describe this relation in terms of three major

epochs in human civilization: the agricultural age, the industrial age, and the information

age. Each has been characterized by revolutionary technological breakthroughs in

gaining control over nature. At the dawn of the agricultural age, people who had

previously led nomadic lives developed technology that enabled them to control the

production of crops by controlling elements of nature rather than conforming to nature’s

seasonal rotations, which often required migrating to different locations. In the industrial

age, humans harnessed steam power. With steam power, people were no longer

compelled to set up communities close to large bodies of water that provided much

of their energy. We recently entered a phase (i.e., the third great epoch) that social

scientists call “the information age,” which, as we will see, has also significantly influenced

the way we now conceive of ourselves in relation to nature.

How has this relatively recent technology already

begun to define us as human beings? J. David Bolter (1984) believes that, historically,

people in Western cultures have seen themselves through the prism of a defining

technology, which “develops links, metaphorical or otherwise, with a culture’s science,

philosophy, or literature.”27 Philosophers and humanists have used metaphors associated

with a particular “defining technology” to describe both human beings and the universe

they inhabit in a given age or time period. Bolter identifies three eras in Western culture

where a defining technology has played a key role: the ancient Greek world, the

Renaissance, and the contemporary computer age. Our interest, of course, is with

the third era.

To support Bolter’s thesis that we have come to see ourselves more and more in

computer-like ways, we have only to reflect for a moment on some of the expressions that

we now use to describe ourselves. For example, Bolter points out that when psychologists

speak of “input and output states of the brain,” or of the brain’s hardware and software,

they exemplify Turing’s men. And when cognitive psychologists study the “mind’s

algorithm for searching long-term memory,” or when linguists treat human language

as if it were a programming code, they, too, are “Turing’s men.” Psychologists and

cognitive scientists who suggest that the human mind is like a computer in that

it “encodes, stores, retrieves, and processes information” are also, in Bolter’s view,

Turing’s men.

What is AI? A Brief Overview

John Sullins (2005) defines AI as “the science and technology that seeks to create

intelligent computational systems.” Sullins notes that AI researchers have aimed at

building computer systems that can duplicate, or at least simulate, the kind of intelligent

behavior found in humans. The official birth of AI as an academic field is often traced to a

conference at Dartmouth College in 1956, which was organized by AI pioneers John

McCarthy and Marvin Minsky. Since then, the field has advanced considerably and has

also spawned several subfields.





The classical AI approach was eventually criticized by researchers in the field who

argued that human intelligence cannot be reduced merely to symbolic manipulation

(captured in software programs) and that something additional was needed. For example,

one school argued that an artificial brain with neural networking (that could

“perceive” and “learn” its environment)—was also required for a machine to learn

and understand the world and thus potentially duplicate the way that humans think.

Whereas the latter scheme in AI is often described as a “bottom-up” (or inductive)

approach to machine learning, the classical/symbolic AI model is typically viewed as a

“top-down” (or deductive) approach.
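To make the top-down/bottom-up contrast concrete, consider the following small sketch (our own illustration in Python, not an example drawn from the AI literature discussed here; all names in it are hypothetical). It pairs a hand-coded rule, in the symbolic style, with a single artificial neuron that learns the same rule from examples by adjusting its connection strengths:

```python
# Illustrative sketch only (not from the chapter): contrasting top-down,
# rule-based "symbolic" AI with bottom-up, connectionist learning.

# Top-down (deductive): the programmer encodes the rule explicitly.
def symbolic_or(a, b):
    return 1 if (a == 1 or b == 1) else 0

# Bottom-up (inductive): a single artificial neuron learns the same rule
# by adjusting its "connection strengths" (weights) from examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    w0 = w1 = bias = 0.0
    for _ in range(epochs):
        for (a, b), target in examples:
            out = 1 if (w0 * a + w1 * b + bias) > 0 else 0
            err = target - out
            # Strengthen or weaken each connection in proportion to the error.
            w0 += lr * err * a
            w1 += lr * err * b
            bias += lr * err
    return w0, w1, bias

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, bias = train_perceptron(examples)
for (a, b), target in examples:
    learned = 1 if (w0 * a + w1 * b + bias) > 0 else 0
    assert learned == symbolic_or(a, b) == target  # both routes agree
print("learned connection strengths:", (w0, w1), "bias:", bias)
```

The point is purely conceptual: in the first function the "intelligence" resides in a rule the programmer wrote down in advance, whereas in the second it emerges, bottom-up, from weights tuned by experience.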

Another division in the field arose when a group of AI researchers argued that it was

not critical to build machines that were as intelligent as humans (or that thought in the

same way humans did); rather, they believed that a legitimate goal for AI research would

be to develop systems that were “expert” in performing specific tasks that required a high

level of intelligence in humans. For example, a system such as an “expert doctor” could

be highly competent in diagnosing medical diseases, although it would be unable to

perform any tasks outside that very narrow domain. (Recall our brief discussion of expert

systems in Chapter 10, in connection with cybertechnology and work.) However, many

other AI researchers believed that it was still possible to achieve the original goal of

emulating (general) human intelligence in machines. Some of these researchers, including

those working on the CYC project, use an approach that builds on classical/symbolic

AI by designing software programs that manipulate large databases of factual information.

Others, such as “Connectionists,” have designed neural networks that aim at

modeling the human brain, with its vast number of neurons and arrays of neural

pathways, which exhibit varying degrees of “connection strengths.” And some AI

researchers focus on building full-fledged robots that can include artificial emotions

as well.2

One concern that arose early in AI research, which was more sociological than technological in nature, had to do with how we might react to a world where machines would be as intelligent, or possibly even more intelligent, than humans.

The Turing Test and John Searle’s “Chinese Room” Argument

In 1950, Alan Turing confidently predicted that by the year 2000 a computing machine

would be able to pass a test, which has come to be called “The Turing Test,” demonstrating

machine intelligence. Turing envisioned a scenario in which a person engaged in

a conversation with a computer (located in a room that was not visible to the human) was

unable to tell—via a series of exchanges on a computer screen—whether he or she

was conversing with another human or with a machine. He believed that if the computer

was able to answer questions and communicate with the person at the other end in a way

that the person there could not be sure whether this entity was a human or a computer,

then we would have to attribute some degree of human-like intelligence to the computer.
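Turing’s setup can be summarized in a short sketch (again our own illustration, with hypothetical names; it is a schematic of the test’s structure, not a serious conversational program). The judge sees only a typed transcript and must guess whether the hidden respondent is a machine:

```python
import random

def machine_respondent(question):
    # Stand-in for a conversational program; a serious candidate
    # would need to handle open-ended dialogue far more richly.
    canned = {
        "are you human?": "Of course. Why do you ask?",
        "what is 2 + 2?": "4, though I had to think for a second.",
    }
    return canned.get(question.lower(), "That's an interesting question.")

def human_respondent(question):
    # Stand-in for the person typing replies in the hidden room.
    return input(question + "\n> ")

def run_turing_test(questions, judge):
    """One session of the imitation game. The respondent's identity is
    hidden; the judge sees only the typed exchanges."""
    is_machine = random.choice([True, False])
    respond = machine_respondent if is_machine else human_respondent
    transcript = [(q, respond(q)) for q in questions]
    judged_machine = judge(transcript)  # judge's guess from transcript alone
    return judged_machine == is_machine  # True if the judge guessed correctly

# The machine "passes" if, over many sessions, judges guess no better
# than chance -- i.e., run_turing_test(...) averages near 0.5.
```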

While most AI researchers would concede that Turing’s prophecy has not yet been

fully realized, they also point to the significant progress and achievements that have been

made in the field so far. For example, in 1997 an IBM computer program called Deep Blue

defeated Garry Kasparov, then reigning champion, in the competition for the world chess

title. And in 2011, another IBM computer program, called Watson, defeated two human

opponents in the TV game show Jeopardy in a championship match. (This human-computer competition was viewed by millions of people around the world.)

Watson, like Deep Blue, is a disembodied AI, i.e., a highly sophisticated set of

computer programs. Unlike Deep Blue, which could be viewed as an “expert system”

that is highly skilled at playing chess (but not necessarily competent in other areas),

Watson was capable of answering a wide range of questions posed in natural language.

Some believe that Watson’s skills at least simulate human intelligence in the broad or

general sense. But did Watson, in defeating its human challengers, also exhibit the skills

necessary to pass the Turing test? And even if Watson could pass the Turing test, would

that necessarily show that Watson possessed (human-like) intelligence?

Unfortunately, an extended discussion of key questions involving both Watson and the Turing test, as well as an in-depth discussion of the history of AI itself, is beyond the scope of this chapter. AI’s history, though relatively brief, is fascinating, and several excellent resources are available; so, fortunately, there is no need to replicate that discussion here.31 We limit our further analysis of AI and AI-related ethical issues to two broad questions: (1) What is the nature of the human-machine relationship (in the development of cyborgs and other AI entities)? (2) Do at least some (i.e., highly sophisticated) AI entities warrant moral consideration?

Cyborgs and Human-Machine Relationships

So far, we have considered whether machines could, in principle at least, possess human-like

intelligence. We have also considered how our answer to this question can affect our

sense of what it means to be human. Next, we see how the development of cyborgs and

the concerns it raises about human-machine relationships may also have a similar effect

on us.

The Challenge in Distinguishing AI Entities from Humans: Are Computers Becoming More Human-Like?

Consider, for example, the avatars and AI bots we already encounter online: even though they are merely virtual entities, some exhibit human-like features when viewed on screens or when heard on electronic devices. Also consider that some avatars (and AI bots), which now act on our

behalf, exhibit characteristics and stereotypic traits associated with humans in certain

professions. For example, an avatar in the form of an AI “agent” designed to interact with

other AI agents as well as with humans, such as a “negotiation agent,” may look like and

have the persona of a (human) broker.

This confusion in interacting with artificial entities will likely be exacerbated

as we move from our interactions with virtual entities on screens (of computers and

devices) to interacting more regularly with physical AI entities—viz., robots. Consider

that sophisticated robots of the near future will not only look more human-like but may

also exhibit sentient characteristics; that is, these robots, like humans and animals, would

(arguably, at least) be capable of simulating the experiences of sensation, feeling, and

emotion. Robots and other kinds of AI entities of the not-too-distant future may also

exhibit, or appear to exhibit, consciousness. Many AI researchers have questioned the

nature of consciousness; for example, cognitive scientists and philosophers ask whether

consciousness is a uniquely human attribute. Some also question whether it might be

an emergent property—that is, a property capable of “emerging” (under the right

conditions) in nonhuman entities, such as advanced AI systems.

Do (At Least Some) AI Entities Warrant Moral Consideration?

If some AI entities are capable of exhibiting (or simulating) rationality and intelligence

(and possibly even consciousness)—characteristics that traditionally have been reserved

to describe only humans—it would not seem unreasonable to ask whether these entities

might also warrant moral status. And if some of these entities can exhibit (or simulate)

human-like emotion and needs, as in the case of the artificial boy in the movie AI, would

that also be a relevant factor to consider in understanding and addressing concerns about

moral consideration for AI entities? An important question, then, is whether we will need

to expand the conventional realm of moral consideration to include these entities. In

answering this question, however, two additional, and perhaps more basic, questions

need to be examined:

i. Which kinds of beings, or entities, deserve moral consideration?

ii. Why do those beings/entities warrant it?

Prior to the twentieth century, ethicists and lay persons in the Western world

generally assumed that only human beings deserved moral consideration; all other

entities—animals, trees, natural objects, etc.—were viewed merely as resources for

humans to use (and misuse/abuse) as they saw fit. In other words, humans saw these

“resources” simply as something to be used and disposed of as they wished, because they

believed that they had no moral obligations toward them.

On a second front, some environmentalists made an even bolder claim, arguing that

we should extend ethical consideration to include new “objects,” or entities. Hans Jonas

(1984) argued that because modern technologies involving atomic and nuclear power

have presented us with tools of destruction that could devastate our planet on a scale

never before imaginable, we needed to expand our sphere of moral obligation to include

“new objects of moral consideration.” These “objects” included natural objects such as

trees, land, and the environment itself, as well as abstract objects such as “future

generations of humans” that will inherit the planet.

Luciano Floridi (2002) has suggested that we need to grant some level of moral

consideration to at least certain kinds of informational objects or entities. Initially, one

might find Floridi’s assertion strange, perhaps even preposterous, but we have seen that

some sophisticated AI entities already exhibit a form of rationality that parallels that of

humans. The question that concerns us here is whether these artificial entities merit moral

consideration because they, like humans, have rational abilities. If our primary justification

for granting moral consideration to humans is based on the premise that humans are

rational entities, and if certain artificial entities also qualify as “rational entities,” then we

can make a compelling case for granting at least some moral consideration to them. For

example, even if they do not qualify as full-blown moral agents (as typical adult humans

do), they may nevertheless meet the threshold of what Floridi calls “moral patients.”

Please write a summary of this article in your own words.
