Artifact Evaluation

FM 2024 will include artifact evaluation (AE) for accepted papers.

An artifact is any additional material, such as software, data sets, log files, machine-checkable proofs, etc., that substantiates the claims made in the paper. Ideally, the artifact allows for the full reproduction of all results in the corresponding paper by providing details on all relevant steps, inputs, configurations and parameters used. For tools, an artifact typically includes the source code or a binary of the tool and corresponding documentation on how to use, reuse, and possibly extend it.

For all accepted FM papers and preliminarily accepted tutorials, authors can submit an artifact substantiating the paper’s or tutorial’s claims. Participation in the AE is optional, but we strongly encourage participation, particularly for tool papers and tutorials.

Important Dates (Artifacts Only)

  • Artifact Abstract Submission: June 17th, 2024 (Mon), 23:59 AoE
  • Artifact Submission: June 24th, 2024 (Mon), 23:59 AoE
  • Artifact Notification: July 15th, 2024 (Mon)

Evaluation Criteria

The FM 2024 Artifact Evaluation Committee (AEC) will read the corresponding paper and evaluate the artifact according to the following criteria:

  • consistency with and reproducibility of results presented in the paper;
  • completeness;
  • documentation and ease of (re-)use;
  • availability in an online repository with a DOI.

The evaluation will be based on the EAPLS guidelines. The AEC will decide which of the badges “functional”, “reusable”, and “available” will be assigned to a given artifact and added to the title page of the paper in the proceedings. Availability in an online repository with a DOI is a requirement for the “reusable” badge.

Submission Guidelines

The artifact submission is handled via EasyChair. Select the FM 2024 Artifact Evaluation track and provide the following information:

  • Use the same title and authors as the accepted paper.
  • Upload a PDF of the accepted paper.
  • Provide a (short) abstract that summarizes the content of the artifact and explains its relation to the paper. If there are any special requirements for running the artifact (e.g., specific hardware or software, number of cores, etc.), state them clearly in the abstract.
  • Provide a URL (preferably a DOI) to a publicly available zip file containing the artifact and all relevant files. We recommend Zenodo for hosting the artifact.
  • Provide the SHA256 checksum of the zip file (to ensure consistency); a verification sketch follows this list. The checksum can be generated with:
    • Linux: sha256sum <file>
    • Windows: CertUtil -hashfile <file> SHA256
    • MacOS: shasum -a 256 <file>
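
For example, on Linux, the checksum can be generated and a downloaded copy later re-checked against it as follows (a minimal sketch; artifact.zip and <checksum> are placeholders for your file and its hash):

    # generate the checksum to paste into the submission form
    sha256sum artifact.zip
    # verify a downloaded copy against the submitted checksum
    # (note: two spaces between the hash and the file name)
    echo "<checksum>  artifact.zip" | sha256sum --check -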

Artifact Guidelines

The artifact on permanent storage (e.g., Zenodo) should be based on a virtual machine (VM) image or a Docker image.
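
For a Docker-based artifact, one possible way to package the image as a single self-contained file for upload is the following (a minimal sketch; the image name fm2024-artifact is a placeholder):

    # build the image from the artifact's Dockerfile (placeholder name)
    docker build -t fm2024-artifact .
    # export it as a compressed archive suitable for upload to Zenodo
    docker save fm2024-artifact | gzip > fm2024-artifact.tar.gz
    # reviewers can restore and run the image with:
    docker load < fm2024-artifact.tar.gz
    docker run -it fm2024-artifact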

The artifact should contain:

  • A file License.txt containing the license for the artifact. The license must at least allow the AEC to evaluate the artifact w.r.t. the criteria mentioned above.
  • A README file containing step-by-step instructions on how to use the artifact and, in particular, how to reproduce the results of the paper. If part of the results cannot be reproduced, briefly explain why this is the case.
  • All code, binaries, example files, documentation, scripts, etc. required to reproduce the results in the paper.

To obtain the “available” badge, make the artifact publicly and permanently available with a DOI, e.g. on Zenodo.

To obtain the “functional” badge, make sure the artifact is documented, consistent, complete, and exercisable as per the EAPLS guidelines.

The “reusable” badge is awarded instead of the “functional” badge to functional and available artifacts of particularly high quality that are suitable for reuse and repurposing beyond the associated paper, as per the EAPLS guidelines.

Suggestions for preparing the artifact

  • In case of a VM image, please use VirtualBox and save the VM image as an Open Virtual Appliance (OVA) file; see the export sketch after this list.
  • Make it simple for AEC members to exercise the artifact and reproduce the results of the paper via easy-to-use scripts and detailed instructions.
  • When writing step-by-step instructions, assume minimum expertise of users.
  • The artifact should run out of the box and not require the user to install any additional software. All required packages should already be provided in the VM or Docker image.
  • For experiments that require a large amount of resources (time, memory, number of cores, etc.), we recommend indicating a subset of the paper's results that can be reproduced with reasonably modest resources (w.r.t. RAM, number of cores, etc.) and in a reasonable amount of time. Please also include the full set of experiments (for reviewers with sufficient hardware or time), but make running it optional; see the script sketch after this list.
  • In case the artifact cannot comply with some of these guidelines, please contact the AE chairs before the AE submission deadline. A common example is artifacts requiring restrictively licensed software such as MATLAB.
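
As a concrete example for the VirtualBox suggestion above, a VM can be exported to, and re-imported from, an OVA file on the command line (a sketch; FM2024-Artifact is a placeholder VM name):

    # export the VM to a single OVA file
    VBoxManage export FM2024-Artifact -o fm2024-artifact.ova
    # reviewers can re-import it with
    VBoxManage import fm2024-artifact.ova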
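
One way to offer both a modest subset and the full set of experiments, as suggested above, is a single entry-point script with two modes (a sketch; the script name and all paths are hypothetical):

    #!/bin/sh
    # run.sh -- reproduce the paper's experiments (hypothetical layout)
    case "$1" in
      smoke)  # small subset of the results; finishes quickly on a laptop
        ./scripts/run_benchmarks.sh benchmarks/small ;;
      full)   # all experiments from the paper; may need many hours/cores
        ./scripts/run_benchmarks.sh benchmarks/all ;;
      *)
        echo "usage: $0 {smoke|full}" >&2 ;;
    esac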

Artifact Evaluation Committee

Chairs:
  • Carlos E. Budde (University of Trento, Italy)
  • Arnd Hartmanns (University of Twente, The Netherlands)

Members:
  • Jie An (Chinese Academy of Sciences, China)
  • Alberto Bombardelli (Fondazione Bruno Kessler, Italy)
  • Konstantin Britikov (University of Lugano, Switzerland)
  • Laura Bussi (CNR-ISTI Pisa, Italy)
  • Julie Cailler (University of Regensburg, Germany)
  • Emily Clement (IRIF/Université Paris Cité, France)
  • Cesar Cornejo (UNRC/CONICET, Argentina)
  • Daniel Drodt (TU Darmstadt, Germany)
  • Federico Formica (McMaster University, Canada)
  • Laura P. Gamboa Guzmán (Iowa State University, USA)
  • Michael A. Jacks (Iowa State University, USA)
  • Mehrdad Karrabi (Institute of Science and Technology Austria, Austria)
  • Marian Lingsch-Rosenfeld (LMU Munich, Germany)
  • Pham Hong Long (Singapore Management University, Singapore)
  • Antoine Martin (LRE/EPITA, France)
  • Lucas Martinelli Tabajara (Runtime Verification, Inc., USA)
  • Tommaso Oss (University of Trento, Italy)
  • Quentin Peyras (LRE/EPITA, France)
  • Edoardo Putti (University of Twente, The Netherlands)
  • Florian Renkin (IRIF/Université Paris Cité, France)
  • Guillermo Román-Díez (Universidad Politécnica de Madrid, Spain)
  • Alec E. Rosentrater (Iowa State University, USA)
  • Philipp Schlehuber-Caissier (LRE/EPITA, France)
  • Alexander Stekelenburg (University of Twente, The Netherlands)
  • Francesco Pontiggia (TU Wien, Austria)
  • Yanni Dong (University of Twente, The Netherlands)
  • Fabrizio Fornari (University of Camerino, Italy)
  • Rong Gu (Mälardalen University, Sweden)
  • Tobias John (University of Oslo, Norway)
  • Aditi Kabra (Carnegie Mellon University, USA)
  • Paul Kobialka (University of Oslo, Norway)
  • Alexander Mackay (Australian National University, Australia)
  • Andrea Manini (Politecnico di Milano, Italy)
  • Tobias Niessen (TU Wien, Austria)
  • Andrea Pferscher (University of Oslo, Norway)
  • Roberto Pizziol (IMT Lucca, Italy)
  • Lorenzo Rossi (University of Camerino, Italy)
  • Ömer Sayilir (University of Twente, The Netherlands)
  • Riccardo Sieve (University of Oslo, Norway)
  • Reza Soltani (University of Twente, The Netherlands)
  • Jack Stodart (Australian National University, Australia)
  • Emily Yu (Institute of Science and Technology Austria, Austria)