Besides its surprisingly good action cinematography, ‘Minority Report’ owes its huge success to the deep discomfort it created in viewers. The movie constructs a future world where law enforcement makes use of ‘Pre-Cogs’ — humans who have been given the gift of foresight through genetic modification, so that they can see crimes before they happen. When a crime is predicted, the would-be criminal is promptly apprehended and the crime prevented. The movie forces the viewer to confront a host of questions that have troubled philosophers for millennia.
If the future is predetermined, in what sense can we be said to be free? Central to our commonsense conception of freedom is the inherent possibility of doing otherwise. If the future is closed to alternate possibilities, then there is no sense in which a murderer could have acted differently, and so it seems that the act of murder is not a free act. Relatedly, if a person cannot do otherwise, is there a sense in which the person is morally responsible for the action? Kant, most famously, articulated the seemingly essential relationship between moral responsibility and the possibility of freely choosing one’s actions: ‘ought implies can.’ One is morally obligated to act in a certain way only if one can in fact act in such a way. If the future is predetermined, then in a clear sense the murderer could not have failed to murder. But then what sense is there to the claim that the murderer ought not to murder? And if there is no sense to be given in response to this question, there is little reason to hold the murderer morally responsible. The murderer is no different from a person who happens to slip on a banana peel, land on an innocent bystander, and accidentally snap his neck. The person is causally responsible for the unfortunate killing, but, since the person could not have done otherwise, is not morally responsible for it.
What if you could know when and how you were going to die? Would you choose to remain ignorant, or would you prefer to confront the facticity of your own mortality directly? This question has engaged philosophers for millennia. Until recently, it was merely a matter for personal speculation, eliciting intuitions about mortality, self-determination, and free will. This has all changed. At least, so it seems.
A new industry has emerged as a result of the last decade’s exponential technological advances in the field of bioinformatics. Now, a glimpse of our most likely personal Reaper is less than 100 dollars away (just two years ago the glimpse was ten times as distant and ten times blurrier). Gene-sequencing companies have sprung up everywhere, like mushrooms after a rain. For a modest price, each of us can have our DNA analyzed and receive a report of our personal predisposition to a variety of potentially debilitating or terminal diseases. Alzheimer’s disease, diabetes, lung cancer, breast cancer, obesity, and multiple sclerosis are but a few of the many worrisome conditions targeted by such DNA analysis.
Yesterday, Peter Ludlow opened the second week of the 2009 Compass Interdisciplinary Virtual Conference with a riveting presentation on virtual communities, cultures, and governance. This year’s conference is titled ‘Breaking Down Barriers.’ Accordingly, Ludlow takes us into the virtual world of Second Life and provides a glimpse of how individuals, from a standpoint of anonymity, nonetheless construct communities, cultures, and even forms of governance that resolve inevitable conflicts.
Second Life is the height of embedded social networking. It is a platform where people can assume any identity they wish by constructing a highly customizable avatar. The content of the virtual world is also entirely user-designed. Players construct objects, buildings, business establishments, and much more. Each player travels through the virtual world as his avatar and can engage with, modify, and construct various objects, and, most importantly, can interact with the avatars of other players.
These interactions create various communities. Ludlow defines a virtual community as a group of individuals, spatially separated but engaged in a broad range of shared social activities through non-face-to-face forms of communication. A community might form around a virtual night-club, its members regularly meeting at the same spot and interacting intensively. Or a community might form around a business venture, for example, constructing a new virtual night-club. The opportunities for interaction within Second Life are plentiful. And, as in the real world, these interactions provide the basis for enduring relationships, friendships, and alliances, but also enmities.
The use of harsh interrogation techniques by the U.S. government has been a hotly debated topic in the global media in recent months. The debate is especially intense with respect to the moral significance of such techniques. Equally significant is the controversy over the veracity of the information acquired through the application of these techniques.
These two issues are often considered to be related. The weight of our moral considerations is likely to be inversely related to the utility of the practice (though followers of Kant would reject this claim). In other words, if we find that reliable and crucial information can only be obtained by inflicting significant harm on a single purportedly depraved individual, our moral responsibility towards that individual seems diminished. If, on the other hand, milder techniques are just as effective, our reasons for employing harsh interrogation seem morally suspect.
New research reported on the BBC website indicates that the harsh interrogation techniques in question are not only ineffective at eliciting reliable and crucial information, but also have a negative long-term effect on the possibility of obtaining it. The research shows that, under conditions of extremely high stress, detainees …
The notion of a mental disorder, or illness, is an essentially normative notion. It depends on the availability of some metric of normalcy, or orderliness. Whether a given mental tendency is a disorder depends on whether, and in what ways, it deviates from what is considered normal, or orderly. But what are the norms that determine this metric?
This question is highly controversial, and its importance extends far beyond the walls of academia. Few such seemingly terminological issues have so tremendous an impact on the day-to-day lives of so many millions of individuals across the world. For example, until quite recently (1973!), homosexuality was considered a mental disorder by the American Psychiatric Association. Its status as a disorder legitimized subjecting individuals ‘afflicted’ with this ‘disorder’ to psychiatric treatment, often with detrimental effects (not to mention the pervasive social and legal discrimination they faced). Characterizing a given tendency as a disorder has the potential to bring about terrible harms and injustices. However, there are also cases in which pursuing various corrective measures seems crucial. Certain tendencies, such as schizophrenia, can be so disruptive to an individual’s life that treatment seems necessary. Labeling such a tendency a ‘disorder’ potentially brings with it various societal and legal commitments to provide support that can substantially alter the lives of suffering individuals for the better. It is clear, then, that much hangs on how we come to characterize a mental tendency as a disorder.
When is a thing or process a part of a person’s body or bodily process? Though human beings are not normally born with large titanium deposits, a titanium knee implant installed at a certain point in one’s life is part of one’s knee, part of one’s body just as much as one’s hand, spine, or brain. Similarly, when a person undergoes a heart transplant, it seems clear that the ‘new’ heart is now a genuine part of her body. Having undergone the transplant successfully, it would be tremendously odd to say that her body was heartless, but connected to some other person’s heart. It is her blood that the heart is in the business of circulating.
A highly influential experiment, conducted over 30 years ago, presented an array of indistinguishable stockings to subjects, who were then asked to pick the one they found most appealing. Overwhelmingly, the subjects preferred the stockings on their right. When asked about the reasons for their choice, none of the subjects mentioned the relative location of the item. Rather, they explained their choices by pointing out superior features of the chosen item. Of course, since the items were in fact indistinguishable in all relevant respects, no such superior features were present. The subjects were confabulating.
The results of this experiment, and others that followed, are quite surprising. They suggest that we are tremendously bad at introspecting on the reasons for our choices, and all too naturally come up with irrelevant explanations for them. We are often completely unconscious of the actual reasons for our choices. If this is the case, it puts our conception of ourselves as self-determining agents in jeopardy.
In this week’s Newsweek, Sharon Begley reports on a fascinating new study by Daniel Casasanto that reveals a pervasive spatial bias that depends on handedness. According to the study, subjects associate positive ideas with the region of space that corresponds to their ‘strong’ hand. For example, right-handed subjects judge stimuli presented on their right as more positive (e.g., good, intelligent, happy, attractive) than those presented on their left. This pattern is reversed in the case of left-handed subjects.