1 Introduction

The conference VOCAL was first announced as the “Veszprém Optimization Conference: Advanced Algorithms” in 2004 by its founders, professors Ferenc Friedler and Tamás Terlaky. Since then, VOCAL has become a biennial conference series, and its eighth event was held on December 10–12, 2018, in Esztergom, Hungary. The VOCAL conferences present the latest results in optimization algorithms, regardless of whether the mathematical programming model is continuous or discrete, linear or nonlinear. Presentations by the broad research community review the complexity and convergence properties of the algorithms, high-performance optimization software, and the latest applications as well. The aim has been to bring together researchers from the theoretical and applied communities in a medium-sized event.

In Esztergom in 2018, 70 papers were presented in 25 sessions by authors from 10 countries. Topics included decision support for the analysis and synthesis of complex industrial, logistics, and healthcare systems, as well as the further development of related techniques.

2 Objective functions, allocation strategies, and matching algorithms for kidney exchange programs

Two sessions were dedicated to healthcare applications, in particular to modeling and optimization support for kidney exchange programs. The papers discussed not only matching and optimization algorithms but also the expected long-term practical consequences of their application.

A questionnaire survey was conducted in 17 European countries on the operation and challenges of kidney exchange programs (KEPs). The survey showed that living donor kidney exchange programs contribute significantly to the increase in the number of living donor transplants. Biró and coauthors reported that exchanging best practices and jointly addressing existing challenges can significantly improve access to the most (cost-)effective treatment for an increasing number of patients with kidney disease (Biró et al. 2019).

The theoretical question is to what extent international cooperation can be improved in such a way that it leads to a more favorable outcome for all parties. One of the tools of such a study is the search for Pareto optimal solutions in a multi-agent environment. Recently, fast new algorithms have been developed to verify and support the achievement of Pareto optimality (Aziz et al. 2019).

Several European countries with national kidney exchange programs have already conducted international exchanges along regulated lines, in which patients with end-stage kidney disease can exchange their willing but incompatible living donors. Exchanges are selected by regular matching runs according to well-defined but country-specific constraints and optimization criteria.

The goal of optimizing a KEP is typically to maximize the number of possible transplants and to reduce the maximum waiting time. Matching can be run at fixed intervals or immediately whenever the patient pool expands. For both approaches, an appropriate algorithm is proposed in Monteiro et al. (2021) that accounts for the waiting times of patients in the pool. The proposed algorithms have been tested in computer experiments with two types of settings: one in which no pair leaves the pool before being matched, and one in which early (unmatched) departure is allowed. According to the simulations, waiting time can be used as a selection criterion. However, the maximum waiting time can only be reduced at the cost of a slight decrease in the total number of transplants compared to the maximum achieved by single-objective algorithms.
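As a toy illustration of waiting-time-aware matching, the sketch below restricts exchanges to pairwise swaps and computes a maximum-weight matching whose edge weights favor long-waiting pairs. It is a minimal sketch with invented compatibility data and waiting times, not the algorithm of Monteiro et al. (2021).

```python
# Toy illustration of waiting-time-aware pairwise kidney exchange.
import networkx as nx

# Nodes are incompatible patient-donor pairs; an edge means the two
# pairs could swap donors. All data below are invented.
waiting_time = {"P1": 14, "P2": 3, "P3": 9, "P4": 22, "P5": 6}
compatible_swaps = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
                    ("P4", "P5"), ("P1", "P4")]

G = nx.Graph()
for u, v in compatible_swaps:
    G.add_edge(u, v, weight=waiting_time[u] + waiting_time[v])

# maxcardinality=True keeps the number of transplants primary, while
# the weights break ties in favor of long-waiting pairs.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in matching))
```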

In the study of Biró et al. (2021), not only are integer programming formulations given to determine the optimal exchange chains, but long-term effects for different pools are also predicted through simulations. A further step forward may be to consider not only the number of transplants but also their expected quality.
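On the integer programming side, the following sketch implements the classical cycle formulation of kidney exchange with cycles of length at most three, using the PuLP modeling library. The pairs and compatibility arcs are invented, and this is the textbook formulation rather than the specific models of Biró et al. (2021).

```python
# Classic cycle formulation: one binary variable per feasible short cycle.
from itertools import permutations
import pulp

pairs = ["P1", "P2", "P3", "P4"]
arcs = {("P1", "P2"), ("P2", "P1"), ("P2", "P3"), ("P3", "P4"), ("P4", "P2")}

# Enumerate directed cycles of length 2 or 3 over the arc set,
# keeping one representative per cyclic rotation.
cycles = []
for k in (2, 3):
    for combo in permutations(pairs, k):
        closed = list(zip(combo, combo[1:] + combo[:1]))
        if all(a in arcs for a in closed) and combo[0] == min(combo):
            cycles.append(combo)

model = pulp.LpProblem("kidney_exchange", pulp.LpMaximize)
x = {c: pulp.LpVariable(f"cycle_{i}", cat="Binary") for i, c in enumerate(cycles)}
# Objective: maximize the number of transplants (pairs covered).
model += pulp.lpSum(len(c) * x[c] for c in cycles)
# Each pair may take part in at most one selected cycle.
for p in pairs:
    model += pulp.lpSum(x[c] for c in cycles if p in c) <= 1
model.solve(pulp.PULP_CBC_CMD(msg=False))
print([c for c in cycles if x[c].value() == 1])
```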

3 Modeling and preventing risks

What happens to a railway network if some stations or lines are destroyed by a sequence of attacks? What measures can be applied to determine the most important elements of a railway network? Are there any differences between attack strategies with respect to their effects on the most important parameters of a railway network? How many line sections or stations have to be destroyed at random to make the network inoperable? These and similar questions are raised by B.G. Tóth in Tóth (2021).

In this paper, first a graph model of the Hungarian national railway network is created, with edges weighted either by the distance or by the travel time between each pair of stations. The measures are chosen from the literature: efficiency measures are taken from Latora and Marchiori (2004) and centrality measures from Freeman (1977). The effects of randomly deleted lines or stations are measured by analyzing the giant component and the value of the critical probability introduced in Albert and Barabási (2002). The effects of different randomly generated destructive operations are also analyzed in depth, emphasizing the results that are important for defending the most critical elements of the railway network, and the developments necessary for reducing the vulnerability of the network are pointed out as well.
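The sketch below illustrates the kind of analysis performed, tracking the global efficiency measure of Latora and Marchiori (2004) and the relative size of the giant component while stations are removed at random. A toy grid graph stands in for the actual railway network.

```python
# Random-failure vulnerability analysis on a stand-in network.
import random
import networkx as nx

G = nx.grid_2d_graph(5, 5)  # toy stand-in for the railway graph

def giant_component_fraction(H):
    if H.number_of_nodes() == 0:
        return 0.0
    # Fraction is taken relative to the original network size.
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

random.seed(1)
H = G.copy()
stations = list(H.nodes)
random.shuffle(stations)
for removed, station in enumerate(stations, start=1):
    H.remove_node(station)
    print(f"removed {removed:2d}: efficiency={nx.global_efficiency(H):.3f}, "
          f"giant component={giant_component_fraction(H):.2f}")
    if giant_component_fraction(H) < 0.1:  # crude 'inoperable' threshold
        break
```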

Guzmics and Pflug (2019) have dealt with systems in which the lifetimes of the entities involved are interdependent. An entity can be anything from a component of a technical system to a bank in a market; a system is, however, assumed to consist of entities of the same type. The basic idea of the model is that the lifetime distributions of individual entities also affect the lifetimes of the other entities, where each entity is first assumed to have an individual exponential lifetime, as in the well-known Marshall–Olkin type models. The model has been developed for financial institutions in an environment where the collapse of some entities also threatens the survival of others. The improved lifetime-based cascade model is able to describe the dynamics of the dependency structures of financial systems without containing an explicit time dependence.
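A minimal simulation can make the Marshall–Olkin flavor of such models concrete: each entity has an individual exponential lifetime, a common shock can eliminate all entities at once, and every default raises the hazard rate of the survivors. The rates and the cascade rule below are invented for illustration and are not the calibrated model of Guzmics and Pflug (2019).

```python
# Toy Marshall-Olkin style lifetimes with a cascade effect.
import random

random.seed(7)
N = 5                 # number of entities (e.g., banks)
IDIO_RATE = 0.10      # hazard rate of the individual exponential lifetimes
SHOCK_RATE = 0.02     # hazard rate of a common shock hitting all entities
CASCADE_FACTOR = 1.5  # each default multiplies the survivors' hazard rate

lifetimes = {i: random.expovariate(IDIO_RATE) for i in range(N)}
common_shock = random.expovariate(SHOCK_RATE)

alive, rate, defaults = set(range(N)), IDIO_RATE, []
while alive:
    first = min(alive, key=lambda i: lifetimes[i])
    if common_shock < lifetimes[first]:   # the shock hits everyone at once
        defaults += [(i, common_shock) for i in sorted(alive)]
        break
    t = lifetimes[first]
    defaults.append((first, t))
    alive.discard(first)
    rate *= CASCADE_FACTOR
    # Memorylessness: survivors' remaining lifetimes are resampled from
    # time t at the increased hazard rate.
    lifetimes = {i: t + random.expovariate(rate) for i in alive}

for entity, time in defaults:
    print(f"entity {entity} fails at t={time:.2f}")
```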

Kovács and co-authors (Kovács et al. 2019) have developed a synthesis procedure that is able to take the expected operational risks of a process network into account already in the design phase. The purpose of the method is to algorithmically construct complex systems with a guaranteed minimum reliability. Solving the problem requires the techniques of probability theory to be utilized during the combinatorially hard process synthesis. It has been shown that the previously developed combinatorial tools of process network synthesis, i.e., the so-called P-graph framework, can be extended to integrate the two domains, and the proposed reliability analysis method can be embedded in process design software. All statements and algorithms are general and proven, while also providing solutions to challenging real-world problems.

In work motivated by the design of safety-critical power generation networks, Süle and Baumgartner (2019) also started from a P-graph description of the system structure, but first expanded it with logical conditions and then transformed it into a reliability block diagram. Based on the cut and path sets of the graph, a polynomial risk model was derived, which also supports the incorporation of redundancies to increase reliability. A multi-purpose optimization method has been developed to assess the criticality of subsystems, which can help to improve the reliability of existing systems during retrofit design, as an extension of the P-graph methodology.
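To make the path-set view concrete, the sketch below estimates system reliability by Monte Carlo simulation from minimal path sets, as in a reliability block diagram. The components, path sets, and reliability values are invented and do not come from Süle and Baumgartner (2019).

```python
# Monte Carlo estimate of system reliability from minimal path sets.
import random

component_reliability = {"gen1": 0.95, "gen2": 0.90, "line": 0.98, "backup": 0.85}
# The system works if every component of at least one minimal path set works.
minimal_path_sets = [{"gen1", "line"}, {"gen2", "line"}, {"backup"}]

def system_works(state):
    return any(all(state[c] for c in ps) for ps in minimal_path_sets)

random.seed(42)
TRIALS = 100_000
successes = sum(
    system_works({c: random.random() < r for c, r in component_reliability.items()})
    for _ in range(TRIALS)
)
print(f"estimated system reliability: {successes / TRIALS:.4f}")
```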

4 Modeling techniques and solution methods for scheduling problems

Practical scheduling problems often raise questions whose answers require the further development of existing solution methods. The session on scheduling methods was motivated by examples from the steel industry, the printing industry, and transportation. Ősz et al. (2020) examined the scheduling of a steel forge with the aim of minimizing setup and storage costs under strict deadlines and rapid resource deterioration. Beyond the dependence of aging on lifetime or usage, which several scheduling methods already take into consideration, as a novelty the start-up of equipment units, i.e., the wear caused by changeovers, is also taken into account, which is a significant practical aspect of the problem under study. Note that while it is important that the lifespan of devices not be used up by setups and product switching, tight deadlines may force exactly that. The authors showed that an optimal schedule obtained by the proposed NLP model can significantly reduce not only the operational but also the setup costs.

Frits and Bertók presented a model transformation and solution method for scheduling custom-printed napkin manufacturing (Frits and Bertók 2020). The original problem is transformed into a Time Constrained Process Network Synthesis (TCPNS) formulation and treated within the P-graph framework. Problem-specific constraints include a complex calculation of changeover times. Exact detailed schedules are provided for production shifts, considering deadlines and consumer priorities. Since tasks can be shared between machines in any proportion, and the amounts of available raw materials are limited by continuous values, the presented method is capable of the integrated implementation of process planning and scheduling.

Dávid and Krész (2020) propose a solution to the problem of public transport vehicle scheduling. In the presented approach, the assignment of buses to certain periods considers not only which task can be served by which type of bus, but also long-term plans, taking parking and periodic maintenance capacities into account. For this problem, a state-extended multi-commodity flow network model is provided. The software implementation of the proposed method was also tested on real-life and randomly generated problems.
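The following sketch reduces the idea to a single-commodity minimum-cost flow: each trip node is split so that exactly one bus must traverse it, and unused buses may stay in the depot. The trips, costs, and fleet size are invented, and the model is far simpler than the state-extended multi-commodity network of Dávid and Krész (2020).

```python
# Bus scheduling as a minimum-cost flow over split trip nodes.
import networkx as nx

trips = ["t1", "t2", "t3"]   # timetabled trips, each must be served once
FLEET = 2
G = nx.DiGraph()
G.add_node("depot_out", demand=-FLEET)
G.add_node("depot_in", demand=FLEET)
for t in trips:
    # Splitting each trip forces exactly one bus through it: a bus is
    # absorbed at the trip start and re-emitted at the trip end.
    G.add_node(f"{t}_start", demand=1)
    G.add_node(f"{t}_end", demand=-1)
    G.add_edge("depot_out", f"{t}_start", weight=5, capacity=1)  # pull-out
    G.add_edge(f"{t}_end", "depot_in", weight=5, capacity=1)     # pull-in
# Feasible deadhead connections between consecutive trips.
G.add_edge("t1_end", "t2_start", weight=1, capacity=1)
G.add_edge("t2_end", "t3_start", weight=2, capacity=1)
# Unused buses may stay in the depot at zero cost.
G.add_edge("depot_out", "depot_in", weight=0, capacity=FLEET)

flow = nx.min_cost_flow(G)
print("total cost:", nx.cost_of_flow(G, flow))
for u in flow:
    for v, units in flow[u].items():
        if units:
            print(f"{u} -> {v}")
```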

The paper of I. Borgulya (Borgulya 2021) deals with the classical bin packing problem (BPP). Given are \(n\) items with sizes \(s_i\), \(i=1,2,\ldots,n\), and an unlimited supply of bins with capacity \(c \ge \max_i s_i\). The aim is to pack the items into a minimal number of bins so that the sum of the sizes in any single bin does not exceed the bin capacity. Bin packing was among the first deeply analyzed problems; it belongs to the class of NP-hard problems, as proved by D.S. Johnson in the early seventies (Johnson 1973). Therefore, in the last fifty years a large number of approximation algorithms have been presented and analyzed from different points of view. In the last two decades, genetic algorithms have become more and more popular, marking out a new direction among these algorithms. The paper (Borgulya 2021) is a new attempt to give an evolutionary algorithm (EA) for the offline version of the BPP.
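To make the problem statement concrete, here is first-fit decreasing, a classical bin packing heuristic; it is not the algorithm of Borgulya (2021), and the item sizes are invented.

```python
# First-fit decreasing: sort items by size, place each into the first
# bin that still has room, opening a new bin when none fits.
def first_fit_decreasing(sizes, capacity):
    bins = []  # each bin is the list of item sizes packed into it
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # open a new bin
    return bins

items = [4, 8, 1, 4, 2, 1, 7, 3]
for i, b in enumerate(first_fit_decreasing(items, capacity=10), start=1):
    print(f"bin {i}: {b} (load {sum(b)})")
```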

The presented Hybrid Evolutionary Algorithm (HEA) applies the following steps. First, a relative pair frequency matrix (RPFM) is defined to select items into subsets that help to construct a feasible solution to the problem. The bins of the feasible solution are then divided into two subsets: according to a parameter, full bins (FB) and not-full bins (NFB) are distinguished. Then, using the RPFM, the NFB bins are improved with the help of mutation operators to obtain a better result, and local searches are applied to pack further items into the FB bins. As usual, a running-time limit is given to terminate the algorithm.
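The sketch below mirrors the structure of these steps (construction, FB/NFB split, repacking of the NFB items, a running-time limit) with simplified stand-in operators; it is a structural skeleton only, not the implementation of Borgulya (2021).

```python
# Structural skeleton of an HEA-style improvement loop for bin packing.
import random
import time

def pack_first_fit(items, capacity, bins=None):
    """Greedy stand-in for the RPFM-guided construction step."""
    bins = bins if bins is not None else []
    for s in items:
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

def hea_skeleton(sizes, capacity, fill_threshold=0.9, time_limit=0.5):
    best = pack_first_fit(sorted(sizes, reverse=True), capacity)
    start = time.monotonic()
    while time.monotonic() - start < time_limit:
        candidate = pack_first_fit(random.sample(sizes, len(sizes)), capacity)
        # Split by fill level: full bins (FB) are kept; not-full bins (NFB)
        # are dissolved and their items repacked (a crude 'mutation').
        fb = [b for b in candidate if sum(b) >= fill_threshold * capacity]
        loose = [s for b in candidate
                 if sum(b) < fill_threshold * capacity for s in b]
        candidate = pack_first_fit(sorted(loose, reverse=True), capacity, bins=fb)
        if len(candidate) < len(best):
            best = candidate
    return best

random.seed(3)
print(len(hea_skeleton([4, 8, 1, 4, 2, 1, 7, 3] * 3, capacity=10)), "bins")
```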

The paper contains exhaustive experimental results. On the one hand, the author collected the most important benchmark instances to check the effectiveness of the presented algorithm, and these computer experiments justified its efficiency. On the other hand, as declared in the first part of the paper, it was also demonstrated that the developed evolutionary algorithm can be applied successfully to the most difficult test problems: on the hard28 test set, known from the literature (see Buljubašić and Vasquez 2016), this hybrid EA always found at least one of the optimal solutions, which justifies the theoretical efforts of the author.

5 Extensions and applications of the P-graph framework

As the preceding sections show, the P-graph framework (Friedler et al. 1992) serves as a proper basis both for reliability analysis and for integrated process planning and scheduling. In a dedicated session, the latest extensions and applications of the framework were presented.

Process Network Synthesis (PNS) determines the optimal process, i.e., network structure, that can be composed from a predefined set of building blocks, while the optimal volumes of its components, called operating units, are calculated as well. In the traditional mathematical formulation, the utilization of resources and the amounts of the resulting outputs are proportional to the volumes of the operating units. Éles and coauthors present a modeling technique in which variable input-output ratios can be handled without modifying the original algorithms of the related mathematical programming models (Éles et al. 2020).
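A toy PNS-style MILP illustrates the fixed-ratio baseline that Éles et al. (2020) generalize: binary variables decide which operating units are included in the network, continuous variables set their volumes, and a material balance links production to demand. The units, yields, and costs below are invented; PuLP is used for modeling.

```python
# Toy process network synthesis: choose units and volumes at minimal cost.
import pulp

# unit: (product yield per unit volume, fixed cost, proportional cost)
units = {"U1": (1.0, 100, 8), "U2": (1.0, 250, 5)}
DEMAND, MAX_VOL = 40, 60

m = pulp.LpProblem("toy_pns", pulp.LpMinimize)
vol = {u: pulp.LpVariable(f"vol_{u}", lowBound=0) for u in units}
on = {u: pulp.LpVariable(f"on_{u}", cat="Binary") for u in units}
m += pulp.lpSum(units[u][1] * on[u] + units[u][2] * vol[u] for u in units)
# Material balance: total production must cover the product demand.
m += pulp.lpSum(units[u][0] * vol[u] for u in units) >= DEMAND
for u in units:
    m += vol[u] <= MAX_VOL * on[u]   # a unit produces only if included
m.solve(pulp.PULP_CBC_CMD(msg=False))
for u in units:
    print(u, "selected" if on[u].value() else "omitted",
          f"volume = {vol[u].value():.1f}")
```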

Van Fan et al. (2020) show that the P-graph is an effective tool for designing and redesigning municipal solid waste (MSW) treatment systems, which are indispensable in a desirable circular economy. The case studies show that the composition of MSW varies with the income levels of the countries examined. The P-graph software identifies the most suitable optimal and alternative suboptimal treatment approaches, considering the balance between operating costs, product qualities, and greenhouse gas emissions.

6 Linear programming

Linear programming is the fundamental building block of more complex, i.e., mixed-integer or nonlinear, programming solvers, and must therefore be robust. Consequently, any increase in its efficiency, or any extension of the range of practically solvable problems, may lead to further advances in the methods established on its basis. Darvay et al. (2020) proposed a corrector-predictor interior-point algorithm (CP IPA) with a new search direction obtained by an algebraically equivalent transformation, and proved its global convergence. Note that the derived iteration bound matches the best known iteration bounds for this class of methods. Computational results are presented as well, to illustrate the practical efficiency of the method.
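For readers unfamiliar with the machinery, the sketch below implements a bare-bones infeasible primal-dual interior-point method for linear programming in standard form. It is a textbook damped Newton scheme with a fixed centering parameter, not the corrector-predictor algorithm of Darvay et al. (2020), and the example LP is invented.

```python
# Bare-bones primal-dual interior-point method for min c'x, Ax=b, x>=0.
import numpy as np

def ipm(A, b, c, sigma=0.1, tol=1e-8, max_iter=50):
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)  # infeasible warm start
    for _ in range(max_iter):
        r_p, r_d = b - A @ x, c - A.T @ y - s
        mu = x @ s / n
        if mu < tol and np.linalg.norm(r_p) < tol and np.linalg.norm(r_d) < tol:
            break
        # Newton step on the KKT system, targets x_i * s_i = sigma * mu.
        D = x / s
        M = A @ (D[:, None] * A.T)
        rhs = r_p - A @ (sigma * mu / s - x - D * r_d)
        dy = np.linalg.solve(M, rhs)
        ds = r_d - A.T @ dy
        dx = sigma * mu / s - x - D * ds
        # Damped step keeping (x, s) strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x

# min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 4,  x1 + x4 = 2,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0], [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
print(np.round(ipm(A, b, c), 6))   # optimum near x = (0, 4, 0, 2)
```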

Molnár-Szipai and Varga developed a software module for the XPRESS optimizer capable of utilizing the objects and functions available in the LEMON C++ library. As a result, they could significantly decrease the computational time needed to solve a quadratic assignment problem while generating lower bounds by a reformulation-linearization technique called the Dual Ascent Procedure (Molnár-Szipai and Varga 2019).

7 Modeling and applications

The final group of topics brings together new modeling approaches and application-oriented research. Abdellali and Kató presented interesting new results on three-dimensional image reconstruction produced by a graph-cut based algorithm (Abdellali and Kató 2021). The results show that using depth prior information from different sources produces better 3D reconstructions.

Imre Dobos and Gyöngyi Vörösmarty presented a supplier evaluation technique based on data envelopment analysis (DEA) in Dobos (2021). The paper compares self- and peer-appraisal indicators for reciprocal and additive DEA models.
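As background, the sketch below computes input-oriented CCR efficiency scores, the standard self-appraisal model in multiplier form, with SciPy's LP solver; it is not the reciprocal or additive model of the paper, and the supplier data are invented.

```python
# Input-oriented CCR DEA in multiplier form: for each supplier, find the
# most favorable input/output weights subject to the common normalization.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 1.0]])  # inputs:  suppliers x 2
Y = np.array([[1.0], [1.2], [0.9]])                 # outputs: suppliers x 1
n, m = X.shape
s = Y.shape[1]

for o in range(n):
    # Variables: output weights u (s of them), then input weights v (m).
    c = np.concatenate([-Y[o], np.zeros(m)])        # maximize u'y_o
    A_ub = np.hstack([Y, -X])                       # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v'x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    print(f"supplier {o + 1}: efficiency = {-res.fun:.3f}")
```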

The convergence of an inner approximation scheme for probability maximization was studied by Fábián (2021). The main message of the paper is that the procedure gains traction as an optimal solution is approached.

Electricity consumer models are identified by an inverse optimization approach in the paper of Kovács (2021). The method is demonstrated on a common consumer model with multiple types of deferrable loads behind a single smart meter. Experimental results are presented, and some directions for future research are proposed.

Finally, a paper by Bożena and Bogdan Staruch was in the production phase, just after acceptance, at the time of writing this summary (Staruch and Staruch, accepted for publication). The authors discuss the competence-based assignment of tasks to employees in factories with a demand-driven manufacturing setting. The potential use of the presented methodology for solving real-life problems related to production management is also discussed.
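As a minimal illustration of the problem class, the sketch below solves a competence-based assignment with the classical Hungarian algorithm; the competence scores are invented, and this is not the method proposed by the authors.

```python
# Competence-based task assignment via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# competence[i, j]: how well employee i performs task j (higher is better)
competence = np.array([
    [7, 2, 5],
    [3, 8, 6],
    [4, 4, 9],
])
rows, cols = linear_sum_assignment(competence, maximize=True)
for emp, task in zip(rows, cols):
    print(f"employee {emp} -> task {task} (score {competence[emp, task]})")
print("total competence:", competence[rows, cols].sum())
```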