Artifact Evaluation for RTAS 2018 

Submission of computation artifacts for evaluation is now open! Artifacts can be submitted through the RTAS-AE submission page.
You will have to upload the preliminary version of your accepted paper, extended with a maximum of 2 pages of instructions on how to replicate the results in the paper.

Important Dates:

  • Submissions open: Now!
  • Submission deadline: rolling submissions until January 21, 2018.
  • First round of reviews: completed by February 4, 2018.
  • Notifications: February 18, 2018.

The AE process 

Have you ever complained that you cannot reproduce results from a paper you read?
Have you ever wished you could have someone else validate your work?
Have you ever tried or wished to validate someone else’s work?

To address these questions, RTAS 2018 will include Artifact Evaluation (AE).
AE is intended to help check the experimental results of accepted papers. It does not attempt to formally prove that an artifact is correct, nor to prove that the artifact does not work (quite the opposite!).

The process is relatively straightforward. Participation in AE is optional: authors submit all related research material (a guide, simulators, tools, benchmarks, data sets, configuration files, …) necessary to check the claims and results of their accepted paper. At least two qualified expert reviewers will follow your user/review guide, install your software, rerun your experiments, recreate your graphs and tables, and send you a report with an overall assessment of your artifact.

If the experiments require specialized embedded hardware, authors should provide the raw data or logs captured from the experiments. If that is not possible, authors should contact the chairs as soon as possible so that we can work with them toward finding a way to review their artifacts.

When the evaluation is successful, authors will be able to amend their final papers, before the camera-ready deadline, to include the “stamp of approval”, and we will publish the guides on the conference webpage. The AE webpage will only recognize artifacts that passed evaluation.
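
Although the call does not mandate any particular structure, artifacts that expose a single entry point for rerunning the experiments and regenerating the numbers tend to be the easiest to review. Below is a minimal Python sketch of such an entry point; the experiment itself, the file name results.csv, and all function names are hypothetical placeholders, not part of the RTAS AE requirements.

    import csv
    import random

    # Hypothetical stand-in for the paper's experiment: a real artifact
    # would invoke the actual simulator or tool with the published
    # configuration instead of this toy computation.
    def run_experiment(seed: int) -> float:
        random.seed(seed)
        return sum(random.random() for _ in range(1000)) / 1000

    def main() -> None:
        rows = [(seed, run_experiment(seed)) for seed in range(10)]

        # Keep the raw numbers so reviewers can compare them against
        # the data reported in the paper.
        with open("results.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["seed", "mean"])
            writer.writerows(rows)

        # Print the summary table the guide would ask reviewers to check.
        for seed, mean in rows:
            print(f"seed {seed:2d}: mean = {mean:.4f}")

    if __name__ == "__main__":
        main()

A reviewer would then only need to run one command (here, python run_all.py, assuming the script is named that way) and compare the printed table against the paper.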

Your software and datasets must be packaged using a freely available tool, such as VirtualBox virtual machines, Docker containers, git/mercurial/svn repositories, or zip/tar files. After a first round of reviews, there will be a feedback period to enable authors to improve their artifacts for eventual sharing, which is the goal of this exercise. The AE chairs will mediate all communication to keep the reviewers’ identities confidential.
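
For example, the zip/tar option can be as simple as the following Python sketch; the directory and archive names below are placeholders for whatever layout your artifact actually uses.

    import tarfile
    from pathlib import Path

    # Placeholder names: point these at your own artifact tree.
    ARTIFACT_DIR = Path("rtas18-artifact")
    ARCHIVE_NAME = "rtas18-artifact.tar.gz"

    def package_artifact() -> None:
        """Bundle the artifact directory into one compressed tarball."""
        with tarfile.open(ARCHIVE_NAME, "w:gz") as tar:
            # Store everything under one top-level directory so
            # reviewers get a clean tree when they extract the archive.
            tar.add(ARTIFACT_DIR, arcname=ARTIFACT_DIR.name)
        size = Path(ARCHIVE_NAME).stat().st_size
        print(f"Wrote {ARCHIVE_NAME} ({size} bytes)")

    if __name__ == "__main__":
        package_artifact()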

The Artifact Evaluation chairs will offer substantial help in packaging the artifact to the first 5-10 authors who request help, using the OCCAM platform. OCCAM is an open curation platform that allows its users to contribute and deploy their artifacts and experiments. Once authors package their artifacts, OCCAM provides tools to define and configure experiments, and to create and visualize interactive plots, helping both reviewers and readers replicate the work and analyze the data. By participating in AE, authors gain the additional benefit of easily reusable software that can attract new users.

Submission instructions 

The submission process is very straightforward:

  1. Package your software and datasets using a freely available tool, such as VirtualBox virtual machines, Docker containers, git/mercurial/svn repositories, or zip/tar files.
  2. Write an abstract for the artifact. This abstract will be used to select appropriate reviewers; it should therefore describe your artifact (including minimal software and hardware requirements), how the artifact supports your paper, and what results are expected.
  3. Use the accepted preliminary version of your paper, and extend it with a result replication guide. The guide can be up to 2 pages long, and should be appended to the preliminary version of your accepted paper (all in one PDF file). The guide must describe the software/hardware requirements of the artifact, how to obtain and deploy the artifact, and the steps required to replicate the published experimental results, as they appear in the paper. Note: guides for artifacts that passed evaluation will be published on the conference website. Hence, the guide should not be included in the camera-ready version of the paper.
  4. Submit the artifact on the RTAS-AE page.

Authors of conditionally accepted papers (i.e., papers currently under shepherding) are still welcome to submit their artifacts for AE. However, the paper will need to be accepted at the main conference for the artifact to pass evaluation.

The reviewing process 

The review process occurs in three phases:

  1. Reviewers deploy and run the artifacts.
  2. Author feedback period.
  3. Final evaluation.

The first phase starts at the submission deadline. During a short period, reviewers will follow the instructions in the guide; the objective of this initial phase is simply to make sure that reviewers can deploy and run the artifacts.
During the second phase, authors will be able to correct any bugs in their artifacts/guides, so that reviewers can unpack and run the artifact before the deeper evaluation that is the ultimate objective of AE.
In the third and last phase, reviewers perform their final evaluation.

Note that the first two phases can be interleaved over several rounds if authors and reviewers respond quickly and stay in sync.

AE Chairs

Luís Oliveira (loliveira@pitt.edu), University of Pittsburgh, USA

Daniel Mosse (mosse@cs.pitt.edu), University of Pittsburgh, USA