Archive for the ‘Research Methods’ Category

how to think science…

September 6, 2013

One of the first quotes I share with my students is one that (as far as I know) is attributed to Carl Sagan:

“Science is a way of thinking much more than it is a body of knowledge.”

I think at first, the students shrug and say, “Ok, neat quote, C-P.” But I really want them to internalize this idea. Mull it over. Reorganize their understanding of what they (think they) know about science and the world they live in. Really appreciating what this means can be transformative for a student. Often, they have developed a sense that “knowing science” is simply having at their disposal a set of facts and statements about the relationships between those facts. It is difficult to convince them that “knowing science” is more about understanding how those facts came to be, to what degree we can accept or work with those facts, and how it is that science moves forward. It is not about simply gathering “more facts”. It is about how we can best maintain a balance between being skeptical and accepting ideas about the world that are put before us.

With that thought in mind, I thought I would share a recent rumination by Dan Simons on a study that was recently published in Nature (article here and a more accessible description of the study here). Simons provides a textbook example of how a scientist thinks… The study in question examined how playing a video game might affect the visual attention and multitasking capabilities of an older adult. In short, the study found evidence that the game play increased certain cognitive functions of the older adults (aged 60 to 85). The response that Simons has is what he calls a “HIBAR” (“Had I Been A Reviewer” – referring to the peer review process that scientific publications undergo before being published). Some of his concerns are methodological (e.g. tasks completed by the control conditions), some are statistical (e.g. lack of measures of variability for difference scores), some are theoretical (e.g. why the video game affected some functioning but not others). Altogether, they illustrate what productive skepticism looks like. He is not simply dismissing the study or blindly embracing it. He is trying to better understand what the results show (and don’t show) about the topic, and he is trying to work through how we should think about those results. It would be silly to think that a single study answers all the potential questions about something as complex as attention, training, and aging. However, if we take a single study, like the one in question, and then carefully assess it, we see a way ahead. That is science.

scarily funny science stuff…

August 24, 2012

I hope that the students in my Intro to Psychology sections learn enough about science that they can appreciate the frightening humor in this clip from last year…

Click here to pop over to the Daily Show site to watch… Science – What Is It Up To?

another one bites the dust…

July 31, 2012

It seems that the past year has been a rough one for psychological science – several researchers have had their work called into question based on evidence suggesting that they had falsified data. This is a tricky issue. For science to work, the community of scientists has to trust that everyone is on the up-and-up. Honesty in reporting data is paramount. So, it is critical that individuals who are not living up to this standard are removed from the community. At the same time that we are looking out for these nefarious researchers, we have to trust that everyone else is being honest. So, there is obviously tension – I must trust, but I also must be skeptical…

The tricky part is: how do we know? Well, we can feel a bit more confident because individuals like Uri Simonsohn are on the case. Here is an interview with him about how he detected fraudulent data in one high-profile case this past year. This morning I saw that there was a retraction notice in the recent issue of Psychological Science, and following the story a bit, I found out that Simonsohn had identified another batch of suspect data. I found this situation particularly interesting because of the simplicity of his approach. Lawrence Sanna, a professor of psychology at the University of Michigan, had published a study purporting that the level of altruism an individual felt was related to his or her actual level of elevation – higher elevation = higher level of altruism. The method was interesting – it involved escalators and hot sauce – but that is a story for another post…

Anyhow, Simonsohn noticed that although the means for the different conditions were very different, the standard deviations across the conditions were nearly identical. Simply put, different groups of individuals had been tested at different levels of elevation, and the participants at higher elevations had, on average, much higher levels of altruism. The problem in the data was that although these means differed significantly, the range and distribution of responses within each group of participants was essentially identical. Simonsohn correctly noted that this is highly unlikely, so he located some other studies using similar methods and found that not only were the differences between groups far smaller, the variability within each group also differed from condition to condition, as you would expect. Several other papers written by Sanna showed the same suspiciously uniform standard deviations across conditions, so Simonsohn contacted Sanna and some of his co-authors with his concerns during the fall of 2011. As of June 2012, Sanna has resigned from the University of Michigan, retracted at least four papers, and now maintains his silence (under legal counsel).

All of this occurred because of some lowly, little standard deviations. By the way – you should use this as a prompt to review just what a standard deviation is and why your friendly research methods instructor would spend so much time making sure you understood that statistic…
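If you want to see for yourself why identical standard deviations are such a red flag, here is a minimal sketch in Python – with made-up numbers, not Sanna's actual data – showing how sampling variability naturally jiggles the SDs of independently tested groups:

```python
import random
import statistics

random.seed(1)

def simulate_condition(mean, sd, n=50):
    """Simulate one condition's altruism ratings (invented scale, illustrative only)."""
    return [random.gauss(mean, sd) for _ in range(n)]

# Honest data: even when two groups have very different means,
# the SDs of independently sampled groups almost never come out identical.
low = simulate_condition(mean=4.0, sd=1.5)
high = simulate_condition(mean=6.5, sd=1.5)

for name, grp in [("low elevation", low), ("high elevation", high)]:
    print(f"{name}: mean = {statistics.mean(grp):.2f}, sd = {statistics.stdev(grp):.2f}")

# The red flag Simonsohn spotted: condition means that differ a lot, paired with
# standard deviations that match almost exactly, study after study.
# Sampling variability alone makes that pattern very unlikely.
```

Run it a few times with different seeds – the group SDs bounce around from sample to sample, which is exactly what the suspect papers failed to show.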

required reading for research methods students…

March 13, 2012

Subtitle: The Challenge of Thinking Scientifically

A study is completed. It is published. It gets cited a whole lot. Obviously, it is a done deal – science has once again illuminated some dark corner of our world… Or maybe not. I came across a post by Dan Simons that was intended to focus on the relatively simple issue of why the replication of a particular experiment might or might not succeed. Click on the link – read the post. It is a well-articulated explanation of how one might think about results from a replication – reasons it might confirm the original experiment, and reasons it might not (btw – the post responds to a recent squabble about a published replication failure – you can read about that particular brouhaha here). Understanding how you can interpret a replication is important.
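One of the points Simons makes – that a replication can miss a perfectly real effect simply because it is underpowered – is easy to demonstrate with a quick simulation. This is a sketch with illustrative effect sizes and sample sizes, not the numbers from the disputed study:

```python
import random
import statistics

random.seed(2)

def one_study(true_effect, n_per_group):
    """Simulate one two-group study; return True if it reaches p < .05 (two-tailed z test)."""
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(true_effect, 1) for _ in range(n_per_group)]
    se = (statistics.variance(a) / n_per_group + statistics.variance(b) / n_per_group) ** 0.5
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return abs(z) > 1.96

def power(true_effect, n_per_group, reps=2000):
    """Proportion of simulated studies that detect the effect."""
    return sum(one_study(true_effect, n_per_group) for _ in range(reps)) / reps

# The effect here is genuinely real (a medium effect, d = 0.5),
# yet a small replication misses it much of the time.
print("n = 20 per group, power:", power(0.5, 20))
print("n = 64 per group, power:", power(0.5, 64))
```

So a "failed" replication with a small sample is weak evidence against the original finding – which is one reason interpreting replications takes more care than a simple yes/no verdict.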

Now, read the comments. The post is good, but the comments for this particular post provide a wonderful glimpse of how psychological scientists think about a range of issues.

I think I’ll just use the comment section from the post as the outline for my Research Methods class next semester. Or maybe have the students critically respond to it as a part of their final exam…

autism in context…

January 23, 2012

One of the first topics we cover in Research Methods is that science does not occur within a vacuum – and this is especially true for psychological science. Recently, there has been a flurry of news stories about how certain mental health disorders are diagnosed, with a lot of debate focused on the diagnostic criteria for autism. This is occurring now because a revision of the Diagnostic and Statistical Manual (DSM), THE reference for psychiatrists and mental health professionals to diagnose mental health disorders, has been underway for years. The DSM is currently in its fourth (although revised) edition, and the fifth edition promises to make some important changes. (An overview of the revision is available here.) The changes are intended to increase the reliability and validity of the diagnoses that are made – essentially, the goal is to avoid false positives (diagnosing someone with a disorder who is actually healthy) while not excluding individuals from a diagnosis who do have a legitimate need for clinical intervention. It seems as though this should be a fairly straightforward, empirically grounded process (the ideal), but there are a lot of other pressures playing into it – you can read a good summary of some of those issues here: Redefining Autism For DSM-V. The diagnostic criteria for autism have been arguably “loose” in past editions of the DSM, reflecting a very healthy debate about exactly what autism is.

In recent years, there has been an increase in the diagnosis of autism (MSNBC has a series of stories related to the increase – always worthwhile to see how the popular press approaches a subject, and they don’t disappoint with the title “Autism: The Hidden Epidemic?”). This increase has been accompanied by a debate – is the increase due to an increased prevalence of the disorder, or does it reflect that the diagnostic criteria are too broad, resulting in lots of false positives? Depending on how you answer that question, you will see the proposed revision to the DSM in a very different light. For those that see an increase in prevalence – the revision may exclude many people who would benefit from clinical intervention. For those that see an increase in false positives – the revision represents a better diagnostic tool that will increase the reliability and validity of the identification of individuals with autism.

I am not a clinical psychologist. I am not an expert in autism. I am, however, interested in this debate because (a) it is important that we get diagnoses of mental health disorders right and (b) it nicely illustrates how the science of autism diagnosis is contextualized. This revision will have a very real impact on families – possibly how they view the behaviors of a child that does not play the way other children on the playground do, or what access they have to clinical or educational services to assist them in working with that child. This revision will have a very real impact on healthcare groups – facilities and organizations may see funding disappear if the number of individuals with autism shrinks because of tighter diagnostic guidelines. The revision will impact researchers, teachers, and many others. I point this out to my students because sometimes when we are working our way through topics like “construct validity” or “false positives” in class, it may all seem rather academic and they might not see the importance of what they are learning.
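For students who want the arithmetic behind the false-positive worry: when a disorder is relatively rare, even a diagnostic criterion with decent specificity can produce a surprising number of false positives. Here is a quick sketch – the sensitivity, specificity, and prevalence values are invented for illustration, not real DSM or autism figures:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(actually has the disorder | diagnosed with it), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: a rare disorder (1% prevalence) assessed with
# "loose" criteria (catch nearly everyone, at the cost of specificity)
# versus "tight" criteria (miss a few true cases, but far fewer false alarms).
loose = positive_predictive_value(sensitivity=0.95, specificity=0.90, prevalence=0.01)
tight = positive_predictive_value(sensitivity=0.85, specificity=0.99, prevalence=0.01)

print(f"loose criteria: {loose:.0%} of diagnoses are true cases")
print(f"tight criteria: {tight:.0%} of diagnoses are true cases")
```

With numbers in this ballpark, most diagnoses under the loose criteria are false positives – which is exactly the trade-off between catching everyone who needs help and not over-diagnosing that the DSM revision has to navigate.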

open source science…

January 17, 2012

This one is for all my Research Methods students (past and present). We spend quite a bit of time considering what the scientific process is and how it works, but I have to admit, in an attempt to “keep it simple”, I don’t always engage with some of the controversies that surround how science really works – as in what happens when you take that idealized notion of the scientific process and implement it in the “real world” where economic, social, personal, and professional pressures all work to distort the ideal. A new article in the New York Times does some of the heavy lifting for us, so we can take a look at what keeps scientists up at night…

steppin’ out to learn about human behavior…

September 5, 2011

Fields for Psychology – Association for Psychological Science.

This is one of those easy posts – all I essentially have to say is “go read the article linked above”. The article is written by Doug Medin, so you know it will be a quality read.

Just a bit of background though. I want to direct my Research Methods students to this article to emphasize some of the points I’ve made in class about the relative strengths and weaknesses of various research methodologies used in psychological research. I also wanted to get my Cognitive students thinking about the impact of relying nearly exclusively on laboratory experimentation in the study of cognition. Dr. Medin does a great job articulating the issues and presenting the challenges for psychological researchers. Enjoy.