Abstract
System competitions evaluate solvers and compare state-of-the-art implementations on benchmark sets in a dedicated, controlled computing environment, usually comprising multiple machines. Recent initiatives such as [6] aim at establishing best practices for computer science evaluations, in particular identifying measures that ensure repeatability, excluding common pitfalls, and introducing appropriate tools. For instance, Asparagus [1] focuses on maintaining benchmarks and instances thereof. Other well-known tools such as Runlim (http://fmv.jku.at/runlim/) and Runsolver [12] help to limit resources and to measure CPU time and memory usage of solver runs. Further systems are tailored to the specific needs of particular communities: the evaluation platform of the 3rd ASP Competition 2011 [4], which is not publicly accessible, implements a framework for running an ASP competition. A more general platform is StarExec [13], which aims at providing a generic framework for competition maintainers. The last two systems are similar in spirit, but each has restrictions that limit its general usability: the StarExec platform provides neither support for generic solver input nor scripting support, while the ASP Competition evaluation platform has no support for fault-tolerant execution of instance runs. Moreover, benchmark statistics and rankings can only be computed after all solver runs on all benchmark instances have been completed.
This research is supported by the Austrian Science Fund (FWF) projects P20841 and P24090.
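To illustrate the kind of service that resource-limiting tools such as Runlim and Runsolver provide, the following is a minimal, generic sketch in Python of running a solver under CPU-time and memory limits while recording its resource consumption. The limit values, the reporting format, and the script name are illustrative assumptions; this is not the actual interface or implementation of either tool, and it relies on POSIX resource limits, so it is Unix-only.

```python
import resource
import subprocess
import sys

CPU_LIMIT_S = 900        # illustrative 15-minute CPU limit per run
MEM_LIMIT_B = 4 << 30    # illustrative 4 GiB address-space limit

def _apply_limits():
    # Executed in the child process just before the solver is exec'd.
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT_S, CPU_LIMIT_S))
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_B, MEM_LIMIT_B))

def run_limited(cmd):
    # Launch the solver with the limits installed, wait for it, and
    # report the CPU time and peak memory the kernel accounted to it.
    proc = subprocess.Popen(cmd, preexec_fn=_apply_limits)
    proc.wait()
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = usage.ru_utime + usage.ru_stime
    # ru_maxrss is reported in KiB on Linux (bytes on macOS).
    print(f"exit={proc.returncode} cpu={cpu:.2f}s maxrss={usage.ru_maxrss}",
          file=sys.stderr)
    return proc.returncode

if __name__ == "__main__":
    # e.g.: python limit_run.py ./solver instance.lp
    sys.exit(run_limited(sys.argv[1:]))
```

In a real competition setting, the dedicated tools additionally sample the whole process tree of the solver, distinguish wall-clock from CPU timeouts, and write a machine-readable watcher log per run; the sketch above only shows the basic limit-and-measure pattern.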
References
Asparagus Web-based Benchmarking Environment, http://asparagus.cs.uni-potsdam.de/
Barrett, C., Deters, M., de Moura, L., Oliveras, A., Stump, A.: 6 years of SMT-COMP. J. Autom. Reasoning 50(3), 243–277 (2013)
Calimeri, F., Ianni, G., Krennwallner, T., Ricca, F.: The Answer Set Programming Competition. AI Mag. 33(4), 114–118 (2012)
Calimeri, F., Ianni, G., Ricca, F.: The third open answer set programming competition. Theory Pract. Log. Program., FirstView, 1–19 (2012), doi:10.1017/S1471068412000105
Couvares, P., Kosar, T., Roy, A., Weber, J., Wenger, K.: Workflow Management in Condor. In: Workflows for e-Science, pp. 357–375. Springer (2007)
Collaboratory on Experimental Evaluation of Software and Systems in Computer Science (2012), http://evaluate.inf.usi.ch/
The software of the seventh international planning competition (IPC) (2011), http://www.plg.inf.uc3m.es/ipc2011-deterministic/FrontPage/Software
Järvisalo, M., Le Berre, D., Roussel, O., Simon, L.: The International SAT Solver Competitions. AI Mag. 33(1), 89–92 (2012)
Klebanov, V., Beckert, B., Biere, A., Sutcliffe, G. (eds.): Proceedings 1st Int’l Workshop on Comparative Empirical Evaluation of Reasoning Systems, vol. 873. CEUR-WS.org (2012)
Papadimitriou, C.H.: Computational complexity. Addison-Wesley (1994)
Peschiera, C., Pulina, L., Tacchella, A.: Designing a solver competition: the QBFEVAL’10 case study. In: Workshop on Evaluation Methods for Solvers, and Quality Metrics for Solutions (EMS+QMS) 2010. EPiC, vol. 6, pp. 19–32. EasyChair (2012)
Roussel, O.: Controlling a solver execution with the runsolver tool. JSAT 7, 139–144 (2011)
Stump, A., Sutcliffe, G., Tinelli, C.: Introducing StarExec: a cross-community infrastructure for logic solving. In: Klebanov, et al. (eds.) [9], p. 2
Sutcliffe, G.: The TPTP problem library and associated infrastructure. J. Autom. Reasoning 43(4), 337–362 (2009)
Thain, D., Tannenbaum, T., Livny, M.: Distributed Computing in Practice: The Condor Experience. Concurrency Computat. Pract. Exper. 17(2-4), 323–356 (2005)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Charwat, G. et al. (2013). VCWC: A Versioning Competition Workflow Compiler. In: Cabalar, P., Son, T.C. (eds.) Logic Programming and Nonmonotonic Reasoning. LPNMR 2013. Lecture Notes in Computer Science, vol. 8148. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40564-8_23
DOI: https://doi.org/10.1007/978-3-642-40564-8_23
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-40563-1
Online ISBN: 978-3-642-40564-8
eBook Packages: Computer Science (R0)