The Singular Barrier to Scientific Collaboration (and why Sean Parker is worried)

Posted on Apr 25, 2016


Publish or Perish

Amid the recent burst of media attention when Sean Parker’s $250M immunotherapy cancer center was announced, Forbes’ coverage by Matthew Herper nailed it: misaligned incentives stifle scientific discovery. He writes,

Parker told me that he worries that the incentives in science lead researchers to work not on their best, bravest idea, but on what will get them published or will get their next grant funded. (emphasis added)

Herper also reported that this was of concern to Susan Desmond-Hellmann, a former oncology researcher and now chief executive officer of the Gates Foundation. She says,

Whenever you want to do something with pace and purpose, money and collaboration are really helpful. And it’s difficult to collaborate in academia. You care about credit, you care about your institution, you care about your lab.

The incentive to get published in elite academic journals, no matter the cost to the integrity of the work, has long plagued scientific research. There is tremendous competition for jobs and grant funding, and evaluations are based on a single important measure: publication.

This fact was summarized well by David Nicholas (CIBER Research) in his talk, “New Ways of Building, Showcasing and Measuring Scholarly Reputation in the Digital Age,” at the Academic Publishing in Europe conference in Berlin this year:

[The scholar’s reputation] is built mainly around one scholarly activity (research), one output of that activity (publication in high-impact factor, peer reviewed papers) and on one measurement of that output (citations). A once bibliographic tool defines scholarly reputation, world-wide. Warts and all. (emphasis added)

Distortions of the Scientific Process

This “publish or perish” system leads to several behaviors that stifle scientific discovery. First and foremost is the pressure to publish cohesive, important-sounding papers in elite journals. This can result in the omission of data that doesn’t fit the narrative, severely distorting the scientific process. This pressure has contributed to today’s reproducibility crisis; one shocking study demonstrated that only 10% of the experiments presented in landmark oncology papers could be reproduced. It can also keep large amounts of experimental data hidden, for fear of competition.

Then there’s the “middle author problem.” In the biomedical sciences, credit for a published paper is stacked to reward the first author (who conducted the research) or the last author (who conceived and directed the research). In the era of big data and increasingly specialized expertise, more and more publications are based on multi-institutional and multi-disciplinary projects. Thus, the number of middle authors on papers has greatly increased. Scientists working on these mega-projects have confided that this problem reduces their interest in collaborating: to keep a lab well funded, top priority is often given to projects that can produce publications with the names of that lab’s researchers well positioned.

There is also a reluctance to share insights with collaborators for fear of being scooped by them. Without a means of tracking these contributions, assigning provenance, and rewarding them, researchers have little incentive to offer either new ideas or incremental findings that could further project goals.

There are those who claim this problem does not exist. Oncology researcher Robert Weinberg, a founding member of the Whitehead Institute at MIT and of the biotech company Verastem, stated that “Self-respecting scientists [collaborate] all the time when collaborations offer actual synergies.” However, this statement holds largely for well-established scientists: those whose reputations have moved past concerns about tenure and the h-index (the metric that quantifies a researcher’s publications and citations).

Quantifying Collaboration

Though the scientific establishment has long bemoaned the dearth of other measures, it has not taken the leap of generating alternative measures of research effectiveness beyond the current reward system. This is perhaps because, until now, nothing else has been so easily quantifiable.

Currently, the primary means of collaborating among multiple labs is the webinar or conference call. A typical such meeting might consist of a couple dozen researchers viewing a slideshow presentation by one or more of the team. The slides may be archived for later viewing, but if one misses the call or if questions are cut short due to time constraints, insights evaporate.

We can and must do better. Today’s online communication tools, many of which are open source, coupled with innovations such as ORCID and CRediT that showcase researchers’ activities, enable the quantification of contributions to team science that were heretofore invisible. For example, Rapid Science’s Collaboration Score (being piloted on our Rapid Learning Platform with early funding from the NIH Common Fund) is a reward metric that will operate on any online collaborative workspace.
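To make the idea of quantifying contributions concrete, here is a minimal, hypothetical sketch (not Rapid Science’s actual Collaboration Score) of how activity logged against CRediT roles and ORCID iDs might be tallied into a simple per-contributor score. The role weights, the score_contributions function, and the log format are invented for illustration; the ORCID iDs are the sample identifiers used in ORCID’s documentation.

```python
# Hypothetical illustration only: a toy tally of contributions by CRediT role,
# keyed by ORCID iD. Not the actual Collaboration Score algorithm.
from collections import defaultdict

# Assumed weights per CRediT role, chosen arbitrarily for illustration.
ROLE_WEIGHTS = {
    "Conceptualization": 3.0,
    "Data curation": 2.0,
    "Formal analysis": 2.5,
    "Writing - original draft": 2.0,
    "Writing - review & editing": 1.0,
}

def score_contributions(activity_log):
    """Sum weighted CRediT-role contributions per ORCID iD.

    activity_log: iterable of (orcid, credit_role) tuples, e.g. exported
    from an online collaborative workspace.
    """
    scores = defaultdict(float)
    for orcid, role in activity_log:
        scores[orcid] += ROLE_WEIGHTS.get(role, 0.5)  # small default for unlisted roles
    return dict(scores)

# Example: two contributors logged against a shared project.
log = [
    ("0000-0002-1825-0097", "Conceptualization"),
    ("0000-0002-1825-0097", "Writing - original draft"),
    ("0000-0001-5109-3700", "Data curation"),
    ("0000-0001-5109-3700", "Formal analysis"),
]
print(score_contributions(log))
# {'0000-0002-1825-0097': 5.0, '0000-0001-5109-3700': 4.5}
```

The point of the sketch is simply that once contributions are logged in a structured way, middle authors and other currently invisible contributors can be credited on a continuous scale rather than by byline position alone.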

Sean Parker has smartly addressed the IP barriers that can hinder progress at academic institutions, as widely reported. But it remains to be seen whether the Parker Institute’s 40 labs and 300 investigators can overcome the entrenched publication-based incentive system and form a true community of the revolutionary kind seen in, say, Napster and Facebook.