CVRPLib Best Known Solution Challenge

The CVRPLib team announces the CVRPLib BKS Challenge, a 30-day competition aimed at finding extremely high-quality solutions for the new XL benchmark set, 100 challenging CVRP instances ranging from 1,000 to 10,000 customers.

The X, XML, and XL Sets

The X set comprises 100 instances with between 100 and 1,000 customers [7], carefully devised to cover a wide spectrum of characteristics found in real-world applications. Four key attributes characterize each instance:

  • depot positioning (random, centered, or cornered),
  • customer distribution (random, clustered, or random-clustered),
  • demand patterns (seven different distributions), and
  • average route size (from very short to very long routes).

The X set has become very popular and has been used for both exact and heuristic method evaluation over the past decade. It also served as the base for the CVRP track of the 12th DIMACS Implementation Challenge. Today, 61 X instances have proven optimal solutions, with most of the remaining 39 likely being optimal as well. The last new BKS for an X instance was found in June 2021.

The XML set, with 10,000 instances of 100 customers, was generated using a similar scheme [8]. All XML instances have known optimal solutions.

In recent years, the routing community has started to consider larger instances involving thousands of customers. Thus, we created the XL benchmark, consisting of instances with 1,000 to 10,000 customers, generated using the same successful scheme as in the X and XML sets. These large instances are beyond the reach of current exact methods but are very useful for evaluating heuristics.

The competition is a community-wide effort to obtain extremely high-quality solutions for the XL set, hopefully just a few units away from the optima.

The Competition

The official XL instances have already been generated, but they will remain confidential until the competition begins. However, we now provide a Python script (available here as a zip file, including examples of instances and their generator script) for generating statistically similar instances. Participating teams may use these for training. We are currently running several of the best published CVRP codes on the official XL instances to obtain initial BKSs. As a result, finding improvements during the competition will be challenging!

The competition will run for 30 days. During this period, teams will be able to submit improved solutions through the competition webpage, which will automatically verify them. Each of the 100 instances will have a real-time Leaderboard, displaying the best-known value progression and the corresponding teams. Note that the solutions themselves will not be published until after the competition, to prevent “local-searching” other teams’ results. A Global Leaderboard will also display each team’s total score.

Scoring

Each time a team submits an improved BKS for an instance, it begins accumulating lead time score (in days), which continues until another team improves the solution or the competition ends. Final BKSs will receive a 5-day bonus. Thus, the maximum score a team can receive from a single instance is 35 days. The global score is the sum of lead time scores across all 100 instances.

For example, the Leaderboard for a hypothetical instance XL-n5067-k201 at the end of the competition could look like this:

Current time: 2592000 s (30 days; competition finished)

BKS Value  Submission Date (GMT)  Time Stamp (secs)  Lead Time (days)  Team
64540      -                      0                  -                 Initial solution obtained by method XYZ
64537      15 Jan 2026, 08:29:42  235782             4.19              Team 1
64521      19 Jan 2026, 13:07:50  598070             0.93              Team 2
64515      20 Jan 2026, 11:30:15  678215             6.95              Team 1
64503      27 Jan 2026, 10:33:11  1278391            5.95              Team 3
64501      1 Feb 2026, 08:33:27   1792807            3.77              Team 3
64499      5 Feb 2026, 03:25:31   2118331            5.48              Team 2

Contributions to the Global Leaderboard for instance XL-n5067-k201:

Team 2: 0.93 + 5.48 + 5 = 11.41 days

Team 1: 4.19 + 6.95 = 11.14 days

Team 3: 5.95 + 3.77 = 9.72 days
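The scoring rule can be replayed with a short script. The sketch below is illustrative, not the official scorer; the function name lead_time_scores and the tuple-based input format are our own. A team's score on an instance is the time (in days) its submissions held the BKS, plus a 5-day bonus for holding the Final BKS:

```python
SECONDS_PER_DAY = 86400
HORIZON = 30 * SECONDS_PER_DAY      # competition length: 2592000 s
FINAL_BONUS = 5.0                   # bonus days for holding the Final BKS

def lead_time_scores(submissions, horizon=HORIZON, bonus=FINAL_BONUS):
    """Compute each team's lead-time score (in days) for one instance.

    `submissions` is a chronological list of (timestamp_secs, team) tuples,
    one per improving BKS submission. The initial (organizer-provided) BKS
    earns no score.
    """
    scores = {}
    for i, (t, team) in enumerate(submissions):
        # A submission leads until the next improvement, or until the end.
        t_next = submissions[i + 1][0] if i + 1 < len(submissions) else horizon
        scores[team] = scores.get(team, 0.0) + (t_next - t) / SECONDS_PER_DAY
    if submissions:
        # The last improver holds the Final BKS and receives the bonus.
        scores[submissions[-1][1]] += bonus
    return scores
```

Feeding in the timestamps from the example table reproduces the contributions above: 11.41 days for Team 2, 11.14 for Team 1, and 9.72 for Team 3.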

It is in the teams’ interest to submit improved solutions promptly. Even current leaders benefit from consolidating their lead with further improvements.

Note: Instances for which no team improves upon the initial BKS will not contribute to the Global Leaderboard.

The team at the top of the Global Leaderboard at the end of the competition will be declared the Overall Winner and receive a certificate from CVRPLib. Any team that finds at least one Final BKS, even for a single instance, will also receive a certificate.

Deeper motivations for the CVRPLib BKS Challenge

For decades, the dominant heuristics for problems like the CVRP have been based on classical optimization: powerful local search combined with metaheuristics. All finalists in the 2022 VRP DIMACS Challenge used such techniques. In the past 10 years, machine-learning-based techniques have emerged for vehicle routing. Some are the current best performers in non-deterministic variants (e.g., winners of the dynamic VRPTW track in the EURO Meets NeurIPS 2022 Vehicle Routing Competition). However, only recently have some ML methods reached performance comparable to classic methods on deterministic variants like the CVRP.

The CVRPLib BKS Challenge provides a great opportunity for ML-based or hybrid approaches to prove their effectiveness by finding solutions beyond the reach of classical techniques.

But there is an even deeper motivation. One may argue that in optimization practice it is much more important to have methods that quickly find good solutions (say, less than 0.5% away from the optimum) than methods that take a lot of time to find almost-optimal solutions. True. But there are cases where only exceptionally good solutions are useful. For example, when using optimization as an aid to mathematical discovery.

Just a few months ago, DeepMind’s AlphaEvolve made global headlines for discovering an improved algorithm for complex matrix multiplication. It also made progress on 13 other mathematical problems by “discovering constructions (objects) with better properties than all previously known constructions, according to given mathematical definitions”. In other words, by finding new BKSs to some well-defined optimization problems! Yet, only weeks later, researchers used classic optimization (the package FICO XPRESS) to beat AlphaEvolve’s results in four cases. We believe that the interest of the CVRPLib BKS Challenge goes beyond routing: it is an arena where classic and emerging optimization paradigms can be compared.

Questions & Answers

How will the initial BKSs be obtained?

They will be obtained using the following published methods:

  • KGLS^XXL (Arnold et al. 2019 [1])
  • SISRs (Christiaens & Vanden Berghe 2020 [2])
  • FILO/FILO2 (Accorsi & Vigo 2021 [3], Accorsi & Vigo 2024 [9])
  • HGS-CVRP with decomposition (Vidal 2022 [4]; Santini et al. 2023 [5])
  • AILS-II (Máximo et al. 2024 [6])

Each method will be run for about 5 CPU-days per instance, split across multiple independent runs with different seeds and parameters. The competition page will indicate which method(s) obtained the initial BKS for each instance.

The already high-quality initial BKSs will make it unlikely that teams find many improving BKSs during the first days of the competition by only running existing methods.

Who can compete?

Everyone except the organizers (the only people currently with access to the official XL instances). Authors of the methods used for obtaining the initial BKSs are welcome to participate.

Are there limits on the computing resources?

No. Indeed, ambitious competitors may employ many processors running in parallel, perhaps one for each XL instance. Using several processors per instance (trying different seeds and/or parameters) certainly helps a method find better solutions in the same wall-clock time. However, this is only effective up to a point. Even with massive parallelism, a fundamentally inferior method is likely to hit an asymptotic limit and will not obtain many BKSs for such truly challenging instances.

What are the instance and solution file formats?

The instances and solutions will be in CVRPLib format. Both formats are described in the CVRP competition rules. As with the X instances, the number of routes is not fixed. If a hypothetical instance is named XL-n5067-k201, the value 5067 is the number of points (1 depot + 5066 customers), and 201 is the minimum number of routes, though solutions with more than 201 routes are allowed.
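To make the cost convention concrete, here is a minimal solution-cost checker. It assumes the XL instances follow the same rule as the X set: Euclidean distances rounded to the nearest integer (TSPLIB-style nint, half rounded up). The names route_cost and solution_cost and the coordinate dictionary are hypothetical; the authoritative conventions are in the competition rules, and participants should verify their rounding against the submission checker.

```python
import math

def route_cost(coords, route, depot=1):
    """Cost of one route: depot -> customers -> depot.

    Each Euclidean edge length is rounded to the nearest integer via
    int(d + 0.5) (TSPLIB nint), the convention used for the X instances;
    we assume XL follows the same rule.
    """
    def dist(a, b):
        (xa, ya), (xb, yb) = coords[a], coords[b]
        return int(math.hypot(xa - xb, ya - yb) + 0.5)

    tour = [depot] + list(route) + [depot]
    return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

def solution_cost(coords, routes, depot=1):
    """Total cost of a solution given as a list of routes (customer IDs)."""
    return sum(route_cost(coords, r, depot) for r in routes)
```

Note that Python's built-in round() uses banker's rounding (round(2.5) == 2), which is why the sketch rounds half up explicitly.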

Calendar

  • [24 Aug 2025] – Official announcement. Competition rules and Python script for training instances released.
  • [12 Nov 2025] – Competition webpage opens for tests. A form for submitting solutions (at that moment only for the existing X instances) will be available. A submitted solution will be automatically checked for feasibility, and its cost will be calculated (even if it is not a new BKS). Participants are encouraged to test several solutions to ensure that they are using the correct rounding conventions and file format.
  • [12 Dec 2025] – Competition webpage opens for team registration. Registration must list the team members and their affiliations, and include documentation (at least two pages in PDF format) describing the method the team will use in the competition. It is OK for methods to use third-party software and even borrow from existing open-source CVRP codes, but this should be fully acknowledged.

It is possible to opt for a “hidden registration”. This means that the team data will not be shown on the webpage unless the team submits some new BKS during the competition.

  • [12 Jan 2026, 15:00:00 GMT] - Competition Starts. Official XL instances released. The webpage opens for submitting new BKSs for XL instances.
  • [11 Feb 2026, 15:00:00 GMT] - Competition Finishes: Submissions close. The overall winner and the Final BKS winners are declared.

Post-Competition – Final BKSs (not only their values) will be published in CVRPLib. Participants who obtained at least one new BKS (not necessarily a Final BKS) are invited to upload additional material describing their methods and how their runs were performed; it is essential to report the amount of computing time used. Uploading this material is mandatory for winners to receive their certificates.

Organizers

  • Rafael Martinelli (PUC-Rio)
  • Eduardo Queiroga (UFPB)
  • Anand Subramanian (UFPB)
  • Eduardo Uchoa (UFF)
  • Thibaut Vidal (Polytechnique Montréal)

References

  1. Arnold, F., Gendreau, M., & Sörensen, K. (2019). Efficiently Solving Very Large-Scale Routing Problems. Computers & Operations Research, 107, 32–42. https://doi.org/10.1016/j.cor.2019.03.006. [Code: https://github.com/ArnoldF/LocalSearchVRPXXL]
  2. Christiaens, J., & Vanden Berghe, G. (2020). Slack Induction by String Removals for Vehicle Routing Problems. Transportation Science, 54(2), 417–433. https://doi.org/10.1287/trsc.2019.0914
  3. Accorsi, L., & Vigo, D. (2021). A Fast and Scalable Heuristic for the Solution of Large-Scale Capacitated Vehicle Routing Problems. Transportation Science, 55(4), 832–856. https://doi.org/10.1287/trsc.2021.1059 [Code: https://acco93.github.io/filo/]
  4. Vidal, T. (2022). Hybrid Genetic Search for the CVRP: Open-Source Implementation and SWAP* Neighborhood. Computers & Operations Research, 140, 105643. https://doi.org/10.1016/j.cor.2021.105643 [Code: https://github.com/vidalt/HGS-CVRP]
  5. Santini, A., Schneider, M., Vidal, T., & Vigo, D. (2023). Decomposition Strategies for Vehicle Routing Heuristics. INFORMS Journal on Computing, 35(3), 543–559. https://doi.org/10.1287/ijoc.2023.1288 [Code: https://github.com/INFORMSJoC/2022.0048]
  6. Máximo, V. R., Cordeau, J.-F., & Nascimento, M. C. V. (2024). AILS-II: An Adaptive Iterated Local Search Heuristic for the Large-Scale Capacitated Vehicle Routing Problem. INFORMS Journal on Computing, 36(4), 974–986. https://doi.org/10.1287/ijoc.2023.0106 [Code: https://github.com/INFORMSJoC/2023.0106]
  7. Uchoa, E., Pecin, D., Pessoa, A., Poggi, M., Vidal, T., & Subramanian, A. (2017). New benchmark instances for the capacitated vehicle routing problem. European Journal of Operational Research, 257(3), 845–858. https://doi.org/10.1016/j.ejor.2016.08.012
  8. Queiroga, E., Sadykov, R., Uchoa, E., & Vidal, T. (2021). 10,000 optimal CVRP solutions for testing machine learning based heuristics. In AAAI-22 workshop on machine learning for operations research (ML4OR). https://openreview.net/pdf?id=yHiMXKN6nTl
  9. Accorsi, L., & Vigo, D. (2024). Routing one million customers in a handful of minutes. Computers & Operations Research, 164, 106562. https://doi.org/10.1016/j.cor.2024.106562 [Code: https://acco93.github.io/filo2/]