
Social contagion? Maybe not…

July 28, 2010

Recently there was a lot of racket about results presented in papers by the pair Nicholas Christakis & James Fowler. Their papers gained a lot of attention both in the academic world and in the media. In these papers they claim to provide evidence for social contagion (transmission through social networks) of several types of individual characteristics. They wrote articles addressing, among other things, the social transmission of: obesity (friends of obese people are likely to be obese too), smoking (smokers tend to be friends with smokers), happiness (friends of happy people tend to be happy too), and loneliness (people are more likely to feel lonely if their friends feel lonely too, my personal favourite…). All of these are based on data from the Framingham Heart Study, about which I wrote some time ago. For reference, the relevant papers are listed at the bottom.

The authors claim to find very strong effects, which they attribute to social transmission of the studied characteristic (obesity, smoking, etc.), dismissing other potential explanations with various arguments. I am sure that for anybody interested in these things the results were jaw-dropping. For example, in the case of obesity, the reported result was that if your friend becomes obese then your risk of becoming obese increases by 57% (!) as compared to the situation in which your friend did not become obese.

For an overly critical person like me the results were not so much jaw-dropping as eyebrow-raising. Among other things, the results were based on somewhat strange analytical techniques, while the social networks literature suggests different approaches. Christakis & Fowler did not refer to the existing methodology at all, most notably to SIENA models for network and behavior dynamics or the models for social selection and social influence developed by people at MelNet.
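To make the contrast concrete, here is a minimal sketch (not the authors’ analysis) of how a model that separates social selection from social influence could be specified with the RSiena package; the objects friend_waves and obese_waves are hypothetical stand-ins for wave-by-wave friendship and obesity data:

    library(RSiena)

    ## Hypothetical inputs (not the actual FHS data):
    ##   friend_waves: n x n x 3 array of friendship adjacency matrices, one per wave
    ##   obese_waves : n x 3 matrix with each person's obesity status (0/1) per wave
    friendship <- sienaDependent(friend_waves)                    # evolving network
    obesity    <- sienaDependent(obese_waves, type = "behavior")  # co-evolving behaviour
    fhs_data   <- sienaDataCreate(friendship, obesity)

    eff <- getEffects(fhs_data)
    ## Selection: do people befriend others with a similar obesity status?
    eff <- includeEffects(eff, simX, interaction1 = "obesity")
    ## Influence ("contagion"): does friends' average obesity pull ego's status towards it?
    eff <- includeEffects(eff, name = "obesity", avAlt, interaction1 = "friendship")

    alg <- sienaAlgorithmCreate(projname = "fhs_contagion")
    fit <- siena07(alg, data = fhs_data, effects = eff)
    summary(fit)   # separate parameter estimates for the selection and influence effects

The point is that such a model estimates selection and influence simultaneously, instead of attributing the whole friend–friend correlation to contagion.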

I was not the only one who was not fully convinced; see for example here or here.

Anyway, I’m not going to review all the results here, as somebody did that pretty well recently. The paper The Spread of Evidence-Poor Medicine via Flawed Social-Network Analysis was uploaded a couple of days ago to the arXiv. The author, Russell Lyons, a mathematician from Indiana University, takes a closer look at the papers I mentioned. In general, he finds flaws that range from problems in the arcane details of the estimation strategy to undergraduate-level mistakes in interpreting confidence intervals. All the flaws Lyons finds fall into two categories:

  1. Certain aspects of statistical techniques used by Christakis & Fowler are not justified.
  2. The numerical results obtained are misinterpreted.

The bottom line is: the substantive claims Christakis & Fowler make are not supported by the results they show. Again, I’m not going to copy-paste from Lyons’ paper. Have a look yourself here.

A couple of end-thoughts:

  • It’s great that somebody took a close and detailed look at this research. It is a contribution to the public good.
  • How come the mentioned papers by Christakis and Fowler passed review in what seem to be quite respected journals? Especially since some of the mistakes seem to be very basic.
  • Perhaps we need to move forward from the present journal reviewing system to something like Open-Source Science 2.0? All scientific publications should be transparent when it comes to the data and methods used. I’m thinking about systems in the flavor of “literate statistical analysis”, like Sweave in R (a toy sketch is shown below). Moreover, all scientific publications could be reviewed and commented upon publicly on the Web, much like Talk pages on Wikipedia…
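As an illustration of what I mean by “literate statistical analysis”, here is a toy Sweave (.Rnw) sketch; the data frame fhs and the variables in it are made up for the example. The R code, its output and the surrounding text travel together in one document, so a reader can rerun the analysis:

    \documentclass{article}
    \begin{document}
    \section*{Does a friend's obesity predict ego's obesity?}

    <<model, echo=TRUE>>=
    ## 'fhs' is a hypothetical data frame with one row per person
    fit <- glm(obese ~ friend_obese + age + sex,
               family = binomial, data = fhs)
    summary(fit)
    @

    The estimated odds ratio for \texttt{friend\_obese} is
    \Sexpr{round(exp(coef(fit)["friend_obese"]), 2)}.
    \end{document}

Running Sweave("analysis.Rnw") in R executes the chunk, weaves the results into the LaTeX source, and the resulting file can be compiled to PDF as usual (the file name here is arbitrary).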

Some of the papers by Christakis & Fowler I’m referring to:

  • Christakis & Fowler (2007) “The Spread of Obesity in a Large Social Network over 32 Years”, N. Engl. J. Med., 357:370-379
  • Christakis & Fowler (2008) “The Collective Dynamics of Smoking in a Large Social Network”, N. Engl. J. Med., 358:2249-2258
  • Fowler & Christakis (2008) “Dynamic Spread of Happiness in a Large Social Network: Longitudinal Analysis over 20 Years in the Framingham Heart Study”, Brit. Med. J., 337:a2338. doi:10.1136/bmj.a2338
  • Cacioppo, Fowler & Christakis (2009) “Alone in the Crowd: The Structure and Spread of Loneliness in a Large Social Network”, J. Personality Soc. Psych., 97(6):977–991

Links:

Edits:

(2010-07-30) Seems like Lyons’ paper is a bit of old news. The first version was available on his website at least since April, as featured in this piece at Slate. Also, see this article in the NYT from September 2009.

(2011-06-08) Lyons’ paper is officially published. I posted an update to the story here.

4 Comments
  1. July 29, 2010 13:54

    Very nice post! However, I am not convinced that “Open Source science” would be the solution. The problem is that most science is so specialized that it becomes very difficult to assess its quality if you’re not an expert in that specific field. Moreover, if I’m not an expert in that field, it is also hard to judge whether criticism is valid, and in such a “public discussion”, even whether some critic is an expert or not.
    As an example, take the climate “debate”: as a layperson, knowing hardly anything about climate science, it is easy to become confused by the cacophony of opposing opinions. Here, I would rather trust the opinions of those who are actually certified as experts, that is, have a PhD in climate science from a reputable university and have experience in the field.
    Of course this does not guarantee that mistakes cannot happen, and I’m not saying that the current peer-review system cannot be improved, but if all research were subject to public discussion instead of peer review, I fear that good research would be pearls before swine.

    • July 29, 2010 14:30

      Well, my idea about “open source science 2.0” was a bit informal… Pushing it forward, I guess there are several issues involved. Here are just two that come to mind at the moment:

      The first one is the public visibility of the reviewing process: reviews could be publicly accessible to anybody and have the status of a publication themselves, with the identities of the reviewers revealed. Then the role of the trust you mention (e.g. trusting publishers that the quality of published papers is high) would decrease, because everything would be transparent: no hidden facts, no methodological abracadabra, etc. Think of departmental seminars at which you take the discussed papers apart; the Web makes this possible on a much larger scale.

      The second point is whether writing reviews should be completely open to anybody. That indeed might be a problem, given for example the troubles Wikipedia has/had with vandalism etc. Perhaps not that serious, though. One could think of, for example, assigning an “editor” who ensures that the “reviews” pass some sanity checks; that’s just an idea. Moreover, if the reviews were publicly accessible and signed by an identified person, it would be quite risky to write bull*****.

  2. July 30, 2010 12:05

    Just saw that Lyons’ paper was actually available on his website as early as April, and was commented upon on slate.com. You can find the links at the bottom of my original post.

Trackbacks

  1. Social contagion story update « Brokering the Closure
