Minisymposium Presentation
Studying Artifact Evaluation to Improve Reproducibility and Usability
Presenter
Description
Researchers support reproducibility and rigorous science by sharing and reviewing research artifacts—the documentation and code necessary to replicate a computational study. Creating quality research artifacts and conducting reviews for conferences and journals are both widely regarded as time-consuming and poorly rewarded activities. To simplify these scholarly tasks, we studied the work of artifact evaluation (i.e., artifact reviewing) for a recent ACM conference. Through analysis of reviewers’ comments and their responses to three surveys distributed throughout the evaluation process, we identified common issues reviewers faced and the features of high-quality artifacts. To lessen the time and difficulty of artifact creation and evaluation, we identified design implications for infrastructures such as the testbeds used to execute research artifacts. By applying the knowledge gleaned through our study, we hope to improve the usability of research infrastructure and, consequently, the reproducibility of research artifacts.