Submission and Evaluation
Cup participants shall submit their programs as Java source code that builds into a single executable, which is called with the filename(s) of the input dataset and a filename to which the program writes its results. Your program must extend the base classes defined in COMSET; no modifications to the base classes are allowed. You may use only the libraries provided by COMSET and Java SE. If you believe a specific library is essential for your submission, please email the mailing list for approval; libraries should be open source. Using libraries or tools other than those provided by COMSET and Java SE, or not previously approved, will lead to immediate disqualification.
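As an illustration of the described calling convention, the sketch below shows a hypothetical entry point that treats all arguments but the last as input dataset files and the last as the output file. The class name, argument order, and line-counting "processing" are placeholders, not COMSET's actual API:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

// Hypothetical entry point: args[0..n-2] are input dataset files,
// args[n-1] is the file the results are written to.
public class Main {
    public static void main(String[] args) throws IOException {
        String[] inputs = Arrays.copyOfRange(args, 0, args.length - 1);
        String outputFile = args[args.length - 1];

        // Stand-in for real dataset processing: count lines across all inputs.
        long totalLines = 0;
        for (String in : inputs) {
            totalLines += Files.readAllLines(Paths.get(in)).size();
        }

        // Write the program results into the designated output file.
        try (PrintWriter out = new PrintWriter(outputFile)) {
            out.println("processed lines: " + totalLines);
        }
    }
}
```

The real COMSET harness may pass arguments differently; consult the provided base classes and documentation for the actual invocation.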
To ensure that agent objects do not know about each other, there shall be no direct or indirect communication between them. The following mechanisms are disallowed in the definition of the agent class:
- Static variables
- Socket communication
- File operations
- Shared memory
- Other mechanisms that enable an agent to know the existence or status of other agents.
External data (i.e., data other than those designated or provided by the organizers) may be used to improve algorithm performance, but it must be shared with the organizers. Any submission not abiding by this disclosure rule will be disqualified.
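The no-communication rule above can be illustrated with a minimal sketch. The `BaseAgent` class and its method names here are placeholder stand-ins, not COMSET's actual API; the point is the contrast between a disallowed static field (shared by all agent objects) and per-instance state:

```java
import java.util.HashSet;
import java.util.Set;

// Placeholder stand-in for a COMSET base class; the real API differs.
abstract class BaseAgent {
    abstract long nextDestination(long currentLocation);
}

class SearchAgent extends BaseAgent {
    // Disallowed: a static field is shared by every agent object,
    // letting agents coordinate indirectly.
    // static Set<Long> visitedByAnyAgent = new HashSet<>();

    // Allowed: per-instance state, visible only to this agent.
    private final Set<Long> visitedByThisAgent = new HashSet<>();

    @Override
    long nextDestination(long currentLocation) {
        visitedByThisAgent.add(currentLocation);
        return currentLocation + 1; // trivial placeholder search policy
    }
}
```

The same reasoning rules out sockets, files, and shared memory: each is just another channel through which one agent object could learn of another's existence or status.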
Participants shall submit their solution as a single .zip file via EasyChair.
The submission must contain:
- The original source code and all dependencies (submission of the source code is mandatory to ensure originality of the submitted work).
- A readme.txt file containing information on how to compile and run the submitted code. You may include a brief description of the main idea behind your submission.
- A contact.txt file containing the full names, email addresses, and affiliations of the authors.
The submissions will be evaluated on an up-to-date server-grade machine. You can expect a multicore CPU and enough RAM to hold the dataset in memory.
Allowed libraries / tools
- COMSET (mandatory)
- Java SE
Additional libraries can be requested by emailing the contest chairs.
We will randomly choose a few days from the period covered by the training dataset. The COMSET simulator will be run independently for each chosen day, from 8 am until 9 pm. The average of the outcomes of all runs will be used for evaluation, with the average search time as the primary metric. To break ties, we will look first at the average wait time and then at the expiration percentage (see section 2 for the definitions of these metrics). Finally, we break ties on code stability, quality, and readability.
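The ranking rule above (average search time first, then average wait time, then expiration percentage, all lower-is-better) can be sketched as a comparator over a hypothetical per-submission result record; the `Result` type and field names are illustrative assumptions, not part of COMSET:

```java
import java.util.Comparator;

// Hypothetical averaged metrics for one submission; lower is better for all three.
record Result(double avgSearchTime, double avgWaitTime, double expirationPct) {}

class Ranking {
    // Primary metric first; each tie is broken by the next metric in order.
    static final Comparator<Result> ORDER =
        Comparator.comparingDouble(Result::avgSearchTime)
                  .thenComparingDouble(Result::avgWaitTime)
                  .thenComparingDouble(Result::expirationPct);
}
```

Sorting submissions with this comparator places the winner first; the final tie-breakers (code stability, quality, readability) are judged manually and are not captured here.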
Challenges similar to this one often raise questions about details of the challenge and evaluation rules. We will use a public Google Group to share any answerable questions, along with their answers, with all interested people. Therefore, please try to phrase your questions so that they do not reveal too much of your ideas.
We will also post notifications for important updates to the challenge to this group.
COMSET is being developed by a group of students from Eindhoven University of Technology (TU/e) via a collaboration with HERE Technologies: Jeroen Schols, Robert van Barlingen, Wouter de Vries, João Soares Ferreira, Tijana Klimovic. The collaboration is coordinated by Dr. Matei Stroila (HERE), Dr. Bo Xu (HERE), and Dr. Kevin A.B. Verbeek (TU/e).
Yuanyuan Pao (Lyft) is coordinating the provision of evaluation data from Lyft.
Thanks to Dr. Ouri Wolfson (University of Illinois at Chicago) for constructive discussions during the development of this year’s CUP.