Rapid Science: Incentivizing Collaboration

A Reputation System That Can Generate More Powerful Collaborative and Translational Research

Given the increasing connectivity between fields and specialties of science, there is an increasing need for collaboration, yet a system of winner-takes-all is inherently unfair to collaborators. A different reward system could promote team science and thus promote the overall progress of the scientific enterprise.1

— Arturo Casadevall, Editor in Chief, mBio and Ferric C. Fang, Editor in Chief, Infection & Immunity

A Crippling Problem

The primary reputation currency in the sciences, and the chief determinant of whether a researcher is hired, promoted, and funded, is the published paper and the journal in which it appears. Thomson Reuters’ Impact Factor (IF), which reflects the citation rates of a journal’s articles, is a key reputation metric, even though it has been challenged as an indicator of research quality. An author’s h-index is likewise derived from article citations, but without regard to journal status: it is the largest number h such that the author has h papers cited at least h times each. “Altmetrics” have emerged to measure the impact of papers in social media outlets such as Twitter, blogs, and social bookmarking systems.2
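For concreteness, here is a minimal Python sketch of the two citation-based metrics just described. The figures in the usage examples are hypothetical; real calculations draw on curated citation databases (e.g., Web of Science for the official IF).

```python
# A minimal sketch of the two citation-based metrics described above.
# The inputs below are hypothetical illustrations, not real journal data.

def impact_factor(citations_in_year: int, citable_items: int) -> float:
    """Journal Impact Factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items the
    journal published in those two years."""
    return citations_in_year / citable_items

def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with at least h
    citations each; the journal of publication plays no role."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(impact_factor(450, 150))    # -> 3.0
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```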

However, despite these emerging alternative reward systems, the traditional published paper remains the bedrock of scientists’ careers. Moreover, researchers’ fear of being scooped by competitors deters them from sharing important insights in collaborative projects, where a systems approach is required to find solutions. Dr. Ewan Birney, project lead on the massive NIH-funded “Encyclopedia of DNA Elements,” noted in his editorial “The Making of ENCODE: Lessons for Big-Data Projects” (Nature, 5 September 2012) that in consortium science, “researchers must focus on creating the best data set they can. Maybe they will use the data, maybe they won’t. What is important is the community resource, not individual success… In turn, the success of participants must be measured at least as much by how their data have enabled science as by the insights they have produced.”

A Common-Sense Solution

In science, as in so many parts of life, what gets measured is what gets rewarded, and what gets rewarded is what gets done.3

Today’s research often requires massive datasets and specialized expertise for discovery and problem-solving. A growing number of grants now require collaborative effort, as exemplified by the National Institute of General Medical Sciences’ 2017 announcement of “a new program to support collaborative, team-based science… as a result of evaluations of our previous programs, recent research on the science of team science, and community input.” We believe that these efforts will fail in the absence of an alternative to publishing as the sole reward system for research contributions.

The Collaboration Score, or C-Score, will reward participants’ contributions to a project, as well as their rapid, open dissemination of findings. The algorithm being devised will measure an array of activities, which will also be viewable by funders and administrators in participants’ profiles on the project platform, diminishing the possibilities for gaming the score. These activities include (a sketch of one possible scoring scheme follows the list):

  • sharing research findings, patient data, and other information – high scores for early and wide dissemination
  • generating hypotheses that are incorporated in the group’s communications
  • moderating/initiating/participating in discussions
  • posting/rating/commenting on the latest published literature and clinical trials
  • rating/reviewing co-investigators’ findings
  • contributing statistical analyses, data curation, software development and new methodologies
  • demonstrating reproducibility
  • leading/participating in cross-group committees (e.g., protocols, data standardization)
  • iterating/updating one’s findings or case reports based on feedback or new evidence
  • coauthoring/peer-reviewing content to be disseminated – including the team’s Evidence Review
  • publishing incremental and null results, editorials, reviews
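Since the algorithm is still being devised, the following Python sketch is purely illustrative. It shows one plausible shape for such a metric, a weighted sum over logged activities scaled by a timeliness factor to reward early sharing; the activity names, weights, and timeliness multiplier are all hypothetical placeholders, not the actual C-Score formula.

```python
# Illustrative only: the real C-Score algorithm is still being devised.
# Activity names and weights below are hypothetical placeholders.
WEIGHTS = {
    "dataset_shared": 5.0,       # early, wide dissemination scores highest
    "hypothesis_adopted": 4.0,   # hypotheses taken up in group communications
    "discussion_post": 1.0,
    "literature_review": 1.5,
    "peer_review": 3.0,
    "replication": 4.0,          # demonstrating reproducibility
    "null_result_published": 3.0,
}

def c_score(activity_counts: dict[str, int], timeliness: float = 1.0) -> float:
    """One plausible shape for the metric: a weighted sum of logged
    activities, scaled by a timeliness factor (> 1.0 for early sharing)."""
    base = sum(WEIGHTS.get(activity, 0.0) * count
               for activity, count in activity_counts.items())
    return base * timeliness

# A participant who shared two datasets early (timeliness 1.2), posted five
# discussion comments, and reviewed one co-investigator's findings:
print(c_score({"dataset_shared": 2, "discussion_post": 5, "peer_review": 1},
              timeliness=1.2))  # ~ 21.6
```

Because all of these activities are logged in participants’ profiles, a score computed this way can be audited against the underlying activity record, which is what diminishes the possibilities for gaming described above.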

Development of the metric and rankings is being led by reputation experts, drawing on the insights and expertise of mathematicians, computer scientists, economists, and behavioral scientists, as well as researchers themselves, funding officials, academic administrators, and other stakeholders.

See Further Reading for references on this and related topics. If you would like to participate in formulating the C-Score, please contact us: edit at rapidscience dot org.

[Image: Raphael's The School of Athens]

Footnotes

1. Arturo Casadevall and Ferric C. Fang, “Reforming Science: Methodological and Cultural Reforms,” Infection and Immunity, March 2012, vol. 80, no. 3, pp. 891–896.

2. Jason Priem, Heather A. Piwowar, and Bradley M. Hemminger, “Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact,” arXiv:1203.4745, 20 March 2012.

3. Michael Nielsen, Reinventing Discovery: The New Era of Networked Science, Princeton University Press, 2011.