Open Science rankings: yes, no, or not this way? A debate on developing and implementing transparency metrics.

The Journal of Trial and Error is proud to present an exciting and timely event: a three-way debate on Open Science metrics, and specifically transparency metrics. Should we develop these metrics? What purposes do they fulfil? How should Open Science practices be encouraged? Are (transparency) rankings the best solution? These questions and more will be addressed in a dynamic and interactive debate with three researchers from different backgrounds: Etienne LeBel (Independent Meta-Scientist and founder of the ERC-funded project ‘Curate Science’), Sarah de Rijcke (Professor of Science and Evaluation Studies and director of the Centre for Science and Technology Studies at Leiden University), and Juliëtte Schaafsma (Professor of Cultural Psychology at Tilburg University and fierce critic of rankings and audits). The event is organized by the Journal of Trial and Error and supported by the Open Science Community Tilburg, the Centre for Science and Technology Studies (CWTS, Leiden University), and the Open Science Community Utrecht.


Date and Time: 15 June 2021 (15h30 CEST, 9h30 EDT)
Where: Zoom, with a livestream on YouTube.
Host: Max Bautista Perpinyà
Co-organizer: Martijn van der Meer
Registration (free, limited spots): https://zoom.us/webinar/register/8516189472243/WN_nMWglOJqQqiM2aHs0W29tA
YouTube stream: https://youtu.be/TMGaNvo-SgM


Abstract

Transparency is a, if not the, central concept behind the Open Science movement. It is often lauded as an inherently ‘good’ characteristic that science must have, so it is no surprise that researchers, meta-scientists, and funders are asking: What is transparency? Who is doing it right? How can we make it intelligible? And, finally, how can we measure it?

In recent years, discussions have focused on the use and misuse of metrics and on the development of alternatives. Fragmented debates have taken place in the academic literature, and arguments on social media have been fierce but often shallow. The time is ripe for an open, lively, and constructive discussion of these issues, one that gathers experts from different corners to address questions that affect the scholarly community at large.

Transparency has been framed as a methodological, epistemic, social, and ethical good. Clear reporting of methods increases the replicability of studies; open data-sharing facilitates the cross-evaluation of results; open-access publications make scientific results more globally accessible. In short, transparency is widely seen as an ethos that every scientist should embrace: the research process should be made crystal-clear for the benefit of science’s ultimate patron, the taxpayer. In this way, transparency serves as the key, multifaceted response to emerging problems of governance and accountability.

Transparency thus has many dimensions, and while most researchers would accept that it is a virtue, several thorny questions arise when we try to measure it and rank researchers by their level of transparency. Even if one grants that it is ethically good to share data, publish in open access, and open one’s lab book, it is another question whether metrics and leaderboards are the right way to encourage these practices. Should individuals be audited, or should the pressure for transparency be placed on collective institutions such as universities and publishers? Could transparency metrics increase managerial control over research activities? Do leaderboards further imply, and solidify, the idea that science is a competition rather than a collaboration?

Viewpoints on all these topics vary wildly, as the Twitter debate over the recently launched, European Research Council-funded ‘Curate Science’ project shows. We want to facilitate a moderated and structured discussion of these issues with experts from different but relevant backgrounds: Etienne LeBel, Independent Meta-Scientist and founder of Curate Science; Juliëtte Schaafsma, Professor of Cultural Psychology at Tilburg University and fierce critic of rankings and audits; and Sarah de Rijcke, Professor of Science and Evaluation Studies and director of the Centre for Science and Technology Studies (CWTS, Leiden University). The speakers’ interventions will be followed by a generous Q&A in which participants can interact with and put questions to the speakers.

This event aims to provide a platform for a much-needed dialogue between those building metric tools (generally from the social sciences, psychology, meta-science, and bibliometrics) and those who study the larger social structures and cultural contexts in which such tools and practices are built (generally humanities and social science scholars). Many other scientists also have a stake in these discussions but are not experts themselves, and could benefit from an interactive, representative exchange between the two camps. This is an event for anyone interested in metrics, transparency, open science, and research management; in other words, it should prove engaging for all academics.


Programme

  1. Introduction to the theme of Open Science and metrics, and brief introduction of the speakers by the host, Max Bautista Perpinyà (5-10’).
  2. Opening statements on the question ‘Open Science rankings: yes, no, or not this way?’ (7’ each speaker).
  3. Question rounds (45’).
    • BLOCK 1: METRICS
    • BLOCK 2: OPEN SCIENCE AND TRANSPARENCY
    • BLOCK 3: ACADEMIA
  4. Closing statements by each of the 3 speakers (3-5’ each).
  5. Closing remarks by moderator (5’).
  6. Break (15’).
  7. Audience Q&A (45’).

The speakers

Etienne LeBel

Independent Meta-Scientist and founder of ERC-funded project ‘Curate Science’.

Sarah de Rijcke

Professor of Science and Evaluation Studies and director of the Centre for Science and Technology Studies (CWTS, Leiden University).

Juliëtte Schaafsma

Professor of Cultural Psychology at Tilburg University and fierce critic of rankings and audits.


Institutions involved

Organizer: The Journal of Trial and Error

Supporters: Open Science Community Tilburg, Centre for Science and Technology Studies (CWTS, Leiden University), and Open Science Community Utrecht