Author Archives: Dena T. Smith

Authors in Focus: Sophia Nathenson discusses her article, “Critical Theory and Medical Care in America: Changing Doctor–Patient Dynamics”

 

In this edition of Authors in Focus, Sociology Compass author Sophia Nathenson discusses the utility of critical theory for understanding the doctor–patient relationship, as well as some of the broader issues in health care today.

Listen to the informative interview by clicking HERE…

And then read the article by clicking here.

 

IBM’s Watson on Jeopardy! Blurring the Line between Humans and Technology

To the left is a 1917 portrait of Thomas J. Watson, the man who built IBM into a business-machines giant. A few weeks ago, IBM debuted its latest supercomputer, named after this giant of innovation, on the TV game show Jeopardy! Watson appeared to stand between the two other competitors: Jeopardy! gave the computer the same electronically equipped podium as the human contestants and even wrote “his” name on it. In fact, the brains behind this supercomputer, capable of answering complicated trivia questions, look a great deal like the computer to the bottom left, one of the original IBM machines from the 1960s. Like that primitive, room-sized machine, which was far too cumbersome for anyone to own as a personal item, Watson is monstrous in scale, though certainly more “brilliant.” Watson roundly defeated two former Jeopardy! champions, Ken Jennings and Brad Rutter. I won’t pretend to be schooled enough in technology to delve into the details of Watson’s inner workings, but for the purposes of this piece it is enough to say that the machine works by recognizing key words and concepts in each clue, much as a human being does when forced to process information at top speed.
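
Since that one-sentence summary is doing a lot of work, here is a deliberately naive sketch of what “recognizing key words” can mean computationally. To be clear, this is not IBM’s actual question-answering pipeline, which layers together a great many language-analysis and machine-learning components; the tiny knowledge base, stopword list, and scoring rule below are all invented purely for illustration (Python):

# A toy illustration of keyword-based answer ranking. This is NOT how
# Watson actually works; it only sketches the basic idea of scoring
# candidate answers by their overlap with the content words of a clue.
# All of the data below is invented for the example.

KNOWLEDGE = {
    "Thomas J. Watson": "led the IBM business machines company for four decades",
    "Ken Jennings": "Jeopardy champion famous for a record winning streak",
    "Spirit": "robotic rover that explored the surface of Mars",
}

STOPWORDS = {"the", "a", "an", "of", "for", "this", "that", "and", "to"}

def keywords(text):
    """Lowercase the text and keep only its content-bearing words."""
    return {word for word in text.lower().split() if word not in STOPWORDS}

def best_answer(clue):
    """Return the candidate whose description shares the most key words with the clue."""
    clue_words = keywords(clue)
    return max(KNOWLEDGE, key=lambda name: len(clue_words & keywords(KNOWLEDGE[name])))

print(best_answer("This man led IBM for four decades"))  # Thomas J. Watson

Even this crude matching gets the toy clue right, which hints at why the approach, scaled up with enormous sophistication and speed, can look startlingly human.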

While there are myriad issues of interest here, I’d like to propose that we think about the presentation of machines as entities with human qualities – specifically, the use of human names for machines – and what that does to our perception of them. Consider the Mars rovers, Spirit and Opportunity, whose names were not chosen to seem particularly human. Nonetheless, the rovers were often described as adorable or cute, and people truly feared for those little rovers, mourning them when they ceased to respond to signals from Earth and their human friends back in the control room had to assume they were no longer functioning, or had died. Naming these machines, and giving them human-body-like features, makes it easier to think of them as people (for more sociological work of this nature, see Janet Vertesi’s research). Just as many people have named their vehicles for decades, people now name their computers, cell phones, iPods, and so on. But a computerized Jeopardy! contestant that (or who) bears the name of the man who built IBM, buzzes in to answer questions, and uses word recognition to do so (however imperfectly) is a new step in blurring the line between human and machine.

Though Watson has a human name, it doesn’t have a face, a body, or any “real” human characteristics. The Time magazine article linked below suggests that computers are rapidly becoming more capable of human-like functioning. Millions of people tuned in to watch Watson live and online after the show, but what if this were a lifelike “being,” something that, for all intents and purposes, appeared human? What if Watson were a human-looking machine connected to a room full of technology, rather than an empty space behind a podium with a mechanical-sounding voice? Lev Grossman of Time explains:

“…if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.”

If the line between human and computer is likely to blur rapidly as we hit a new threshold with this kind of advancement – the ability to create machines that seem very human – might the feeling of threat increase? Watson was exciting and intriguing to most people, but not scary. What if we could not tell the computer apart from the human quite so easily? What if, as Grossman suggests, computers could actually be smarter and just as capable as humans, if not more so? How might this change the nature of humanity? The nature of technology? The nature of social interaction?

IBM – Watson

Watson’s Jeopardy Win…What Did We Discover?

2045: The Year Man Becomes Immortal

Technology, Philosophy of – in The Blackwell Encyclopedia

Authors in Focus: An Interview with Wayne Brekhus on Cognitive Sociology and the Study of Race

Wayne Brekhus discusses his co-authored article,

On the Contributions of Cognitive Sociology to the Sociological Study of Race

In the interview, Dr. Brekhus answers questions such as:

  • What is cognitive sociology?
  • How did he become interested in the cognitive perspective?
  • Why is it so critical that we study race using the cognitive model?

To listen to the interview, CLICK HERE!

AND…

After you listen to the interview, read the article by clicking here!

Patient autonomy and the biomedical model

Recently, there have been many suggestions that a backlash against the unilateralism of the biological approach in medicine is imminent. Perhaps, some suggest, patients have garnered some say in their treatment, even though many researchers contend that modern medical practice strips patients of the right to make their own decisions. But where ought the boundary lie between patient autonomy and physician paternalism? On the one hand, a purely diagnostic, biomedical medicine that does not allow for patients’ own insight into their conditions makes patients feel objectified, as if they are nothing more than a disease. On the other hand, doctors have a certain expertise, and patients may not always know what is best for them. After all, medical training is difficult and arduous, and it produces professionals with an important and valuable set of skills. Certainly, options beyond the biomedical model would give patients more autonomy and say in their care. The availability of acupuncture and herbal supplements, for instance, has allowed many patients, with conditions ranging from depression to back pain, to find relief in treatments that at one time would never have been (and still often are not) considered acceptable from a Western medical perspective.

What would it mean for the effectiveness of treatment if patients began to have more of a say in it? When patients come into doctors’ offices asking for particular procedures, tests, and even medicines, they arrive as informed consumers, but also as patients who may be less receptive to their doctors’ advice. The question is: how do we strike a balance between patients being able to choose the kind of treatment they want and being truly listened to by their doctors (rather than simply diagnosed as medical objects)?

Nathenson (2010, see below) suggests that the biomedical model may not be the dominant lens in the future. While this still seems a distant possibility, Nathenson describes an increasing medical pluralism driven by greater patient autonomy. This raises a crucial question: how should we study these various ways of treating illness, and how is the dominance of particular models maintained? I would add that we also need to examine how much autonomy is actually useful in medical treatment. I am not convinced that patients can be fully autonomous in any kind of treatment, though certainly some models preclude far more autonomy than others.

MRIs, like the one you see to the left, medications, and other medical technologies are powerful tools that reinforce the legitimacy of the medical profession, which is currently dominated by the biomedical model, for better or worse. And there are major benefits to this diagnostic model, even if it tends to ignore the patient’s voice. In the article below, Pauline Chen writes about how useful diagnoses can be in relieving patients’ fears about their symptoms, and in doing so she illustrates the utility of doctors’ expertise. While the biomedical model is oft critiqued for its cold, objectifying, and even dehumanizing tendencies (and this is a serious problem), it also trains doctors whose highly specialized knowledge can solve problems that might otherwise remain elusive. However, there must be a way to marry some sense of autonomy and humanity with the best that science and medicine have to offer.

Critical Theory and Medical Care in America: Changing Doctor–Patient Dynamics

The Comfort of a Diagnosis

Blinded by narcissism?

In the Freudian era, narcissism was a central psychiatric concept and diagnosis. In the last several months, the likelihood that the American Psychiatric Association will drop this diagnosis from its new, fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM) has been the subject of a string of articles in prominent newspapers and other news outlets, including the New York Times and NPR. Though the debate is one of professional discourse and diagnosis, it extends well beyond this realm. It raises the question of whether this change represents a larger trend in the US, wherein Americans no longer see putting themselves before others, and thinking of themselves as better and more capable than others (even with little evidence to back it up), as a problem.

In her book Generation Me: Why Today’s Young Americans Are More Confident, Assertive, Entitled – and More Miserable Than Ever Before, Jean Twenge somewhat satirically describes an increasing focus on the importance of self-esteem in American society. From birth, she argues, children are steeped in the notion that they are important just for being themselves and that they must make themselves feel good at all costs. Ironically, Twenge argues, this ultimately leads to more unhappiness and even mental illness, as the current generation of young adults never learns how to live in the real world. Their educational experience is captured by several of Twenge’s examples: children receive trophies just for doing their best, rather than for being the best player or the hardest worker on the team; they are content with C grades because teachers tell them they are good no matter what their grades are; they earn pretty stickers for effort rather than genuine achievement. There are some wonderful outcomes of the self-esteem movement that began with the Baby Boomer generation – kids do feel more liberated, and in moderation self-esteem is certainly beneficial (for a description of the functions and theories of self-esteem, see the linked article below). But Twenge argues that the level of self-esteem present in today’s kids is harmful both to them and to society more broadly. Ultimately, over-inflated self-esteem can result in narcissistic tendencies that lead to much more than feeling overly good about oneself; narcissism can ruin relationships, cost people their jobs, and even lead to increases in violence.

(more…)

Genes cannot be bought, but their testing certainly can be…

The recent uptick in genetic testing for a range of illnesses has prompted great debate in the medical community about how reliable and useful the testing is, as well as discussion among social scientists about its social and ethical consequences. One longer-standing line of inquiry concerns biological thinking, specifically as it relates to stigma and inequality. In particular, there is a fascinating and timely discussion of the geneticization of mental illness by Jo Phelan (2005) that, even before the current debate about the technology emerged, delved into the promise and perils of genetic thinking – though not genetic testing specifically. Phelan addresses issues of stigma and labeling associated with seeing mental illness as a genetic problem, and finds that stigma is simultaneously enhanced and alleviated by geneticization. On the one hand, if an illness is genetic, responsibility is lifted from the sufferer and it becomes harder for others to blame him or her for the illness; the condition and the person who embodies it are no longer seen as one and the same. On the other hand, the same genetic thinking opens the door to a range of new judgments that can be detrimental both to self-concept and to the assumptions others make about those who experience, in this case, mental illness. Overall, there has been some, but not much, work in the social sciences on the social problems associated with genetic testing (for a lovely summary, see the article linked below). In the last few weeks, genetic testing has been thrust to the forefront once again, after fervent debate in which the Department of Justice, under Attorney General Eric Holder, argued in a court filing that human genes should not be patentable – that genes belong in the public domain – even though companies like Myriad, a testing company, already hold patents on two human genes (and it is unclear what will happen to those patents).

In his recent Sociology Compass article (linked below), Richard Tutton reviews the existing literature on genetic testing and calls for sociologists to pay more attention to these issues. Though Tutton does not address issues of inequality directly, the recent debate over access to genetic testing led me to wonder: who can afford the testing? To whom will it be offered? Will insurance cover it? How might testing be used to “blame” ethnic or racial groups for illness? And, returning to the Phelan article mentioned above, would knowing one is predisposed to develop depression, for instance, change the way we see someone struggling with that condition? And on and on. Tutton also surveys the literature on the use of genetic testing in forensics, where there is clearly an open door to over-reliance on an imperfect technology when someone’s freedom or life hangs in the balance. One of the great fears about genetic testing is that it will become a central factor in determining whether we see people as “criminal” or not — a frightening idea.

(more…)

The influence of science on morality

Recently, I’ve come across several mentions of the role of science in influencing morality. Most of these discussions allude to the following question: to what extent do scientific findings influence people’s concepts of right and wrong, or even good and evil? The discussion is generally about the role the natural sciences play in these determinations, but I often wonder what role sociologists play in shaping concepts of morality. I do not have an answer to the above question, and I will likely pose more questions than answers in the following paragraphs, but I believe this is both an interesting and important question for social scientists to consider. For all the debates about the importance of objectivity in scientific work, and all the measures put in place to ensure that scientists’ opinions remain on the periphery of (if not entirely absent from) their work, there seems to be far less debate about the role of morality.

In a recent NPR broadcast on science and morality (see below), one of the participants summarized this issue quite succinctly: “Science plays an important and vital role in our lives…when it comes to morality, people say science is neutral…but that’s not quite true.” This is, in part, because we often confound objectivity and moral neutrality, which are really not one and the same. Even if researchers take great care to be objective in their work, all science is political. Studies of evolution, for instance, are increasingly under attack from the religious right, who claim that such research undermines belief in creationism and accuse scientists of destroying a religious moral fiber by investigating these topics. Thus, no matter how objective a study may be, its ability to influence public morality, or at least to spark debate about where morality does and should come from (science or religion, in this case), is independent of its objectivity.

Another participant in the NPR broadcast reminds us that “a better understanding of human nature and the human brain can affect moral judgments.” In other words, simply by explaining the world and furthering knowledge, we may inadvertently affect people’s morals and the way they see the world. This can be important, and it is sometimes the goal: by explaining inequality (a kind of social evil), we hope that it might dissipate, thus creating a better society. We may change the way people see right and wrong in the process, but, we hope, for the betterment of society. Still, we advance a particular moral stance, despite whatever measures we take to ensure the objectivity of the project. Even the most self-reflexive social scientist still has a moral position, an ethical orientation toward the world, and our work almost always aims to explain a phenomenon and perhaps fix a social ill. Ultimately, it is important to note that moral positions influence our work, and that our work in turn changes the morality of those who come into contact with it. I obviously cannot know to what extent social scientists change the morality of the general public, but it is certainly an interesting question for investigation.

Can Science Shape Human Values? And Should It? Audio from NPR

Morality, in The Blackwell Dictionary of Modern Social Thought

From mourning to reflection: considerations in the aftermath of a tragedy

Rutgers University has been my intellectual home for the last eight years. Recently, one of our freshmen, Tyler Clementi, committed suicide by jumping off the George Washington Bridge. He took his own life after his roommate and another student posted a video on the internet of him engaging in sexual activity with another man. This horribly sad and disturbing event sparked an emotional reaction on our campus, as well as discussion of how to protect students from this kind of infringement on their privacy and from bullying, and how to protect them from themselves when they feel overwhelmed by mistreatment.

In the wake of Tyler Clementi’s death, I asked my freshmen what they think the important lessons to learn from this are. Most of the answers were along the lines of: “we have not come as far as we think we have.” My students referred many times to the need to increase awareness of prejudice. Clementi’s suicide was clearly the act of a troubled young man, but students certainly recognize that perhaps the greatest factor here was that the video depicted him having sex with another man. Though Rutgers tries to be inclusive and strives toward creating an egalitarian environment, this event reminds us that being gay is still considered wrong, or at the very least “weird” or “fascinating,” by more people than many of us care to imagine. Those of us who are sociologically minded are well aware of this, but what is striking about my students’ comments is that many of them seemed surprised to discover that homophobia is real and has real consequences. Of course, there were certainly students in my classroom who have felt the horrible emotional toll of being subjected to this kind of hate. That the reality of homophobia, and the bullying tied to it, struck my students as a new realization does reflect, in some ways, just how far we have come. On the other hand, it suggests that living in a place where exclusionary attitudes and practices do not abound, where we are not surrounded by prejudice and conservative sentiment about sexuality, can be dangerous: it makes people susceptible to forgetting that our tiny corner of the world is not the norm. While progress has been made, one lesson for these students, who live in one of the most liberal parts of the country, is that their little island is not at all representative of the attitudes of society at large.

To me, the starkest realization came from the complete absence of technology from these discussions. It seemed clear to me that one of the central issues here was that the internet was used as a tool to taunt a student, embarrass him, and violate his privacy, and that it ultimately pushed him to feel the consequences so acutely that he took his own life. That my students overlooked this suggests that the use of technology has become so ubiquitous that it seemed almost irrelevant to the conversation. In fact, when I brought up the role of technology in Clementi’s suicide and asked for feedback on this issue, I found myself staring out at a room of blank faces. The quotidian nature of internet use, of posting things online, is something new but already taken for granted. In particular, Clementi’s death has me wondering about the effect of technology on the way we treat one another. In her Sociology Compass article, “The Sociology of the Future: Tracing Stories of Technology and Time,” Cynthia Selin suggests an important role for sociologists in studying the relationship between technology and the future. Selin focuses mostly on nanotechnology and its role in shaping the future, but her general message pushed me to reflect on what happened at Rutgers. Selin begins her piece with the following:

“Scholarly attention to the development of new technologies and to exploring the sociological tools and methods we have for grasping their emergence is exceedingly important not only for the dual nature of technology as blessing and curse, but also because our technological reach into the future is growing. Our ability to produce technologies that have a lasting impact on social systems seems to be growing given the biological, chemical, and material technologies of late. Nanotechnology is one such novel technology area that is regularly promised to radically alter what it means to be human, our systems of production, and our environmental landscapes” (1878).

Hasn’t the ability to post videos and personal profiles, and more generally to interact online, already altered what it means to be human? And if those technologies are so bound up with our humanity, might they not also have the ability to take that humanity away? I don’t mean to suggest that the internet is evil or that we should all stop using technology, but rather that, if technology is part of what constitutes our humanity today, we must be extremely careful about what it can also take away (and about how it can be used maliciously to strip away other people’s humanity). The internet breaks down barriers and allows people to be more open, to spread important messages, and to wield political and personal power to create change for the betterment of society. However, the internet also allows people to say things they wouldn’t normally say, to do things they would not otherwise do, and to be more removed from the consequences of their words and actions. These students would not have been able to have the same impact on Tyler Clementi’s life had they not had internet access – and would they even have tried? The fact that not a single one of my forty-four students mentioned technology as a factor shows just how important it is to point this out, to them and to adults alike. The consequences of technology are powerful, and some are not reversible: the video of Tyler Clementi’s sexual encounter has been removed from the web, but he took his own life, and nothing can reverse that.

Bullying, Suicide, Punishment: In the New York Times

The Sociology of the Future: In Sociology Compass

“Hope” and “change” don’t pay the bills – and for that, the Democrats will pay

In the last presidential election, “hope” that Washington could become a less partisan, less corrupt, and more transparent place, coupled with a longing for “change,” propelled Obama into office. That, and intense disappointment with the previous administration. However, the economic meltdown and the generally painful economic situation of a large number of Americans have led even many Obama supporters to question whether anything is actually different and whether our president can be pragmatic and effective in difficult times. This led, for instance, one woman at the recent CNBC question-and-answer session with the President to say: “Quite frankly, I’m exhausted…defending the mantle of change I voted for.” She went on to rattle off a litany of frustrations, including how unpleasant life is for many “middle class” Americans today. Most are in historically unheard-of debt even if they don’t have college loans (and if they do, it’s even worse), many are losing their houses, and some are reportedly resorting to food banks because they simply can’t make ends meet. And this is in the middle class.

This CNBC press conference, precisely because the audience consisted of Obama supporters, raises the question of how those who voted for Obama feel about the economic situation. Whom do you blame when things go badly and you voted for the person in charge? If I am unemployed, hope and change may not only feel irrelevant now; it may anger me that such intangible values were the basis of my vote in the last campaign. Not wanting to direct my frustration at myself, I will surely let the party I voted for and the politicians I had a hand in electing bear the brunt of my anger. Social psychological studies tell us that we are more likely to attribute bad outcomes to others or to external forces, while seeing ourselves as having played a role in the outcomes that prove useful or positive. So I might feel good about the Democrats’ role in health care reform and feel that my vote had something to do with it, but when it comes to the economy, or the deaths of soldiers in two wars, I might instead blame the administration, and likely the President as its figurehead. What does this mean for the 2012 political season? Frustrations are high and people don’t want to blame themselves, so they blame the administration (and that’s among the Democratic supporters!). Looking forward to the next election, I have to wonder whether there is any way for the Democrats and the President to escape this blame game unscathed.

Disappointed Supporters Question Obama

Attribution Theory

The widespread use of antipsychotic medications may indicate social rather than biological etiology

There are many lessons to take away from the New York Times article linked below, which describes a rambunctious little boy whose life was nearly ruined by antipsychotic medications. Increasing numbers of children have lately been prescribed this class of drugs for conditions ranging from Tourette syndrome to bipolar disorder, which psychiatrists have begun to diagnose in children at younger and younger ages. There is controversy surrounding the very ability to diagnose these conditions in young children, and certainly over the utility and safety of prescribing the most potent psychiatric medications to this population. The issues associated with medicating children, especially with this class of medications, and the dangers to them, their families, and society more broadly, are innumerable (even if there are benefits in some cases, as most biological psychiatrists argue). But for now, let’s take this as an example of the increasing diagnosis of disorders such as bipolar disorder, an intriguing phenomenon that needs exploration at the aggregate level.

In the debate over whether any disorders are purely biological entities, sociologists generally accept the argument that bipolar disorder and schizophrenia are heavily rooted in biology, since the rates of these disorders are relatively stable across time and place. Unlike depression, anxiety, and substance abuse, these disorders do not seem to be affected by culture (though the course of the illnesses is) or by the social environment more broadly. In other words, there is fairly wide agreement that bipolar disorder and schizophrenia are in fact organic conditions, not likely to be born of social or environmental factors alone (or perhaps at all). However, as these diagnoses become increasingly common (this does not yet seem to be the trend with schizophrenia, but it certainly is with bipolar disorder), what does this say about the assumption of biological etiology?

(more…)