Rapid Science: Incentivizing Collaboration

A Reputation System That Can Generate More Powerful Collaborative and Translational Research

Given the increasing connectivity between fields and specialties of science, there is an increasing need for collaboration, yet a system of winner-takes-all is inherently unfair to collaborators. A different reward system could promote team science and thus promote the overall progress of the scientific enterprise.1

— Arturo Casadevall, Editor in Chief, mBio and Ferric C. Fang, Editor in Chief, Infection & Immunity

A Crippling Problem

The primary reputation currency in the sciences, and the chief determinant of whether a researcher is hired, promoted, and funded, is the published paper and the journal in which it appears. Thomson Reuters’ Impact Factor (IF), which reflects citation rates of a journal’s articles, is a key reputation metric, even though it has been challenged as an indicator of research quality. An author’s h-index is similarly based on article citations, but without regard to journal status. “Altmetrics” have emerged to measure the impact of papers in social media outlets such as Twitter, blogs, and social bookmarking systems.2
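
The h-index, for example, is the largest number h such that an author has h papers each cited at least h times. The minimal Python sketch below, using made-up citation counts, is offered only to make that calculation concrete; it is not part of any Rapid Science tooling.

```python
# Illustrative only: computing an h-index from hypothetical per-paper citation counts.
# An author has an h-index of h if h of their papers each have at least h citations.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4:
# four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```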

However, despite these emerging alternative reward systems, the traditional published paper remains the mainstay of scientists’ careers. Moreover, researchers’ fears of being scooped by competitors constrain them from sharing important insights in collaborative projects in which a systems approach is required to find solutions. Dr. Ewan Birney, project lead on the massive NIH-funded “Encyclopedia of DNA Elements” (ENCODE), noted in his editorial “The Making of ENCODE: Lessons for Big-Data Projects” (Nature, Sep 5, 2012) that in consortium science, “researchers must focus on creating the best data set they can. Maybe they will use the data, maybe they won’t. What is important is the community resource, not individual success… In turn, the success of participants must be measured at least as much by how their data have enabled science as by the insights they have produced.”

A Common-Sense Solution

In science, as in so many parts of life, what gets measured is what gets rewarded, and what gets rewarded is what gets done.3

Rapid Science is a spinoff from the nonprofit Cancer Commons, whose mission is to ensure that patients receive the best available treatment for their conditions, and to learn as much as possible from their outcomes. While at Cancer Commons, we participated in two multi-institutional projects – Stand Up to Cancer’s West Coast prostate cancer “Dream Team” and the Adelson Medical Research Foundation’s Melanoma Research initiative. These projects are intended to closely track late-stage cancer patients and devise individualized treatment recommendations, with the goal of improving outcomes for these and future patients. Our role was to develop online tools to stimulate discussion of results based on aggregated data and analysis, and to organize and publish continually updated “Evidence Reviews” based on insights from the research community.

To stimulate collaboration in these communities and others being organized, we propose development of a reward metric that scores the quality and quantity of collaborators’ involvement in the project. The “C-Score” will provide a meaningful measure of participants’ contributions to discovery processes that require robust group involvement. Quantifying individual contributions can also provide a means to rank multiple author listings in collaborative publications such as the Evidence Review.

Individuals’ contributions will be scored on the basis of activities that take place on the Rapid Science collaboration platform (a rough scoring sketch follows this list):

  • how early and widely they share research findings and insights
  • how many case reports and other records of patient treatments and outcomes they submit to a computable database
  • how many annotations, comments, and open questions they post on the platform, and the quality of those postings
  • whether hypotheses they generate are incorporated into the Evidence Review
  • how actively they rate and annotate the latest published literature and clinical trials
  • whether they moderate discussions
  • whether they peer review or author the Evidence Review and supporting results
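
To make the idea concrete, the sketch below shows one way such activities could roll up into a single number: a weighted sum of activity counts, scaled by a quality rating, with the resulting scores used to order an author list. The activity names, weights, and the rank_authors helper are hypothetical placeholders for illustration only; the actual C-Score will be designed with the experts described below.

```python
# Illustrative sketch only: one way a C-Score could aggregate platform activity.
# Activity names, weights, and quality multipliers here are hypothetical.

ACTIVITY_WEIGHTS = {
    "early_data_shared":     5.0,  # findings/insights shared ahead of publication
    "case_report_submitted": 4.0,  # structured case reports added to the database
    "annotation_posted":     1.0,  # annotations, comments, open questions
    "hypothesis_adopted":    8.0,  # hypotheses incorporated into the Evidence Review
    "literature_rated":      0.5,  # ratings/annotations of papers and trials
    "discussion_moderated":  2.0,
    "review_authored":       6.0,  # peer reviewing or authoring the Evidence Review
}

def c_score(activities, quality=1.0):
    """Weighted sum of a participant's activity counts, scaled by a 0-1 quality rating."""
    raw = sum(ACTIVITY_WEIGHTS.get(name, 0.0) * count
              for name, count in activities.items())
    return raw * quality

def rank_authors(participants):
    """Order participants by C-Score, e.g. to rank an Evidence Review author list."""
    return sorted(participants,
                  key=lambda p: c_score(p["activities"], p.get("quality", 1.0)),
                  reverse=True)

team = [
    {"name": "A", "activities": {"early_data_shared": 2, "annotation_posted": 12}, "quality": 0.9},
    {"name": "B", "activities": {"hypothesis_adopted": 1, "review_authored": 1}},
]
for person in rank_authors(team):
    print(person["name"], round(c_score(person["activities"], person.get("quality", 1.0)), 1))
```

In practice, choosing the weights, defining the quality signal, and guarding against gaming are exactly the design questions the reputation experts described below would need to settle.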

Development of the metric and rankings will be led by reputation experts, drawing on the insights and expertise of mathematicians, computer scientists, economists, and behavioral scientists, as well as researchers themselves, funding officials, academic administrators, and other stakeholders.

See Further Reading for references on this and related topics. Our consultant, Luca de Alfaro, is a professor of computer science at UC Santa Cruz and an expert in reputation systems; he has worked on WikiTrust, a reputation system for Wikipedia authors, and Crowdcensus, a reputation system for user edits to Google Maps, among many other projects. Luca has published over 80 academic papers on reputation systems, crowdsourcing, ranking, and game theory.

[Image: Raphael's The School of Athens]

Footnotes

1. Arturo Casadevall and Ferric C. Fang, “Reforming Science: Methodological and Cultural Reforms,” Infection and Immunity, March 2012, vol. 80, pp. 891-896.

2. Jason Priem, Heather A. Piwowar, and Bradley M. Hemminger, “Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact,” 20 Mar 2012.

3. Michael Nielsen, “Reinventing Discovery: The New Era of Networked Science”, Princeton University Press, 2011.