Stochastic dominance (SD) is based on an axiomatic model of risk-averse preferences, and SD-efficiency is therefore an important property of selected portfolios. As defined by a continuum of criteria representing measures of failure to achieve several targets, SD does not provide a simple computational recipe. By limiting attention to a few selected target values, one obtains a typical multiple criteria optimization model approximating the corresponding SD approach. Although it is rather difficult to justify the selection of a few target values, this difficulty can be overcome with the effective use of fuzzy target values. Focusing on first degree SD and extending the target membership functions to monotonic utility functions, we obtain a multiple criteria model that preserves consistency with both first degree and second degree SD. Further, applying the reference point methodology to the multiple criteria model and taking advantage of fuzzy chance specifications, we obtain a method that allows the preferences to be modeled interactively through fuzzy specification of the desired distribution. The model itself guarantees that every generated solution is efficient according to the SD rules.
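For reference, the dominance relations invoked above can be stated in their standard textbook form (our notation, not necessarily the paper's): for random returns X and Y with distribution functions F_X, F_Y,

$$X \succeq_{\mathrm{FSD}} Y \iff F_X(\eta) \le F_Y(\eta) \quad \text{for all } \eta,$$
$$X \succeq_{\mathrm{SSD}} Y \iff F_X^{(2)}(\eta) \le F_Y^{(2)}(\eta) \quad \text{for all } \eta, \qquad F_X^{(2)}(\eta) = \int_{-\infty}^{\eta} F_X(\xi)\, d\xi.$$

The "continuum of criteria" mentioned above corresponds to these pointwise inequalities indexed by the target value η.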
Many risk measures have recently been introduced which (for discrete random variables) result in Linear Programming (LP) models. While some LP computable risk measures may be viewed as approximations to the variance (e.g., the mean absolute deviation or Gini's mean absolute difference), shortfall and quantile risk measures have recently been gaining popularity in various financial applications. In this paper we study LP solvable portfolio optimization models based on extensions of the Conditional Value at Risk (CVaR) measure. The models use multiple CVaR measures, thus allowing for more detailed risk aversion modeling. We study both the theoretical properties of the models and their performance on real-life data.
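To make the LP computability claim concrete: for a discrete return distribution given by scenarios s = 1, ..., S with probabilities p_s and portfolio return y_s, the CVaR at tail level β is itself the optimal value of a small LP (the standard shortfall formulation, sketched in our notation):

$$\mathrm{CVaR}_{\beta}(y) = \max_{\eta,\, d}\ \Big\{\, \eta - \frac{1}{\beta} \sum_{s=1}^{S} p_s d_s \ :\ d_s \ge \eta - y_s,\ d_s \ge 0 \,\Big\}.$$

A weighted combination of several such measures at different levels β_m stays linear, which is what makes the multiple-CVaR models studied in the paper LP solvable.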
The Markowitz model of portfolio optimization quantifies the problem in a lucid form of only two criteria: the mean, representing the expected outcome, and the risk, a scalar measure of the variability of outcomes. The classical Markowitz model uses the variance as the risk measure, thus resulting in a quadratic optimization problem. Following Sharpe's work on linear approximation to the mean-variance model, many attempts have been made to linearize the portfolio optimization problem. Several alternative risk measures have been introduced which are computationally attractive as (for discrete random variables) they result in solving linear programming (LP) problems. LP solvability is very important for applications to real-life financial decisions, where the constructed portfolios have to meet numerous side constraints and take into account transaction costs. The variety of LP solvable portfolio optimization models presented in the literature generates a need for their classification and comparison, which is the main goal of our work. The paper presents a systematic overview of the LP solvable models with a wide discussion of their theoretical properties. This allows us to classify the models with respect to the types of risk or safety measures they use. The paper also provides the first complete computational comparison of the discussed models on real-life data.
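As an illustration of the kind of linearization surveyed here, the mean absolute deviation (MAD) of a portfolio x under scenario returns r_{js} with probabilities p_s reduces to an LP (the classical Konno–Yamazaki construction; the notation is ours):

$$\delta(x) = \min_{u^{\pm} \ge 0}\ \Big\{ \sum_{s=1}^{S} p_s \big( u_s^{+} + u_s^{-} \big)\ :\ u_s^{+} - u_s^{-} = \sum_{j} \big( r_{js} - \mu_j \big) x_j \ \ \forall s \Big\},$$

where μ_j denotes the mean return of instrument j.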
The rapid growth of traffic induced by Internet services makes simple over-provisioning of resources uneconomical and hence imposes new requirements on dimensioning methods. The problem of network design with the objective of minimizing cost while resolving the tradeoff between maximizing service data flows and providing fair treatment of all demands therefore becomes more and more important. In this context, the so-called Max-Min Fair (MMF) principle is widely considered to help find reasonable bandwidth allocation schemes for competing demands. Roughly speaking, MMF assumes that the worst service performance is maximized, then the second worst, the third, and so on, leading to a lexicographically maximized vector of sorted demand bandwidth allocations. It turns out that the MMF optimal solution cannot be approached in a standard way (i.e., as a mathematical programming problem) due to the necessity of lexicographic maximization of ordered quantities (bandwidth allocated to demands). Still, for convex models, it is possible to formulate effective sequential procedures for such lexicographic optimization. The purpose of this paper is threefold. First, it discusses resolution algorithms for a generic MMF problem related to telecommunications network design. Second, it gives a survey of network design instances of the generic formulation and illustrates the efficiency of the general algorithms in these particular cases. Finally, the paper discusses extensions of the formulated problems to more practical (unfortunately non-convex) cases, where the general approach for convex MMF problems fails.
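The lexicographic order on sorted outcome vectors that defines MMF can be illustrated with a minimal sketch (illustrative code, not from the paper; all names are ours):

```python
import numpy as np

def leximin_key(alloc):
    """MMF compares allocations by their outcomes sorted non-decreasingly;
    the lexicographically larger sorted vector is preferred."""
    return np.sort(np.asarray(alloc, dtype=float))

def mmf_prefers(a, b):
    """True if allocation a is strictly preferred to b in the MMF order
    (assumes equal-length allocation vectors)."""
    for x, y in zip(leximin_key(a), leximin_key(b)):
        if x != y:
            return x > y
    return False  # identical sorted outcomes: indifferent

# (2, 2, 6) is MMF-preferred to (1, 4, 5): its worst outcome, 2, beats 1,
# even though the total bandwidth allocated (10 vs 10) is the same.
print(mmf_prefers([2, 2, 6], [1, 4, 5]))  # True
```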
Badania operacyjne i systemowe 2004: Podejmowanie decyzji --- podstawy matematyczne i zastosowania [Operations and Systems Research 2004: Decision Making, Mathematical Foundations and Applications], R. Kulikowski, J. Kacprzyk, R. Słowiński (eds.)
Using the multicommodity network flow problem as an example, we consider an approach that makes it possible to apply column generation techniques in multiple criteria problems of fair allocation of resources (link capacities), where fairness is realized by applying the MMF (Max-Min Fairness) principle. The solution is based on lexicographic maximin optimization and is determined by sequential maximin optimization with elimination of blocking criteria. At each stage, the corresponding maximin problem is solved using column generation. The special structure of the problem makes it possible to use a shortest path algorithm on a graph to generate new columns. Results of preliminary numerical tests of the presented approach are reported.
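The pricing step mentioned above, which generates a new column as a cheapest path under the current dual prices on arcs, could look as follows (a hypothetical sketch under our own naming, assuming nonnegative dual prices):

```python
import heapq

def cheapest_path_cost(n_nodes, arcs, dual_price, s, t):
    """Pricing subproblem sketch: Dijkstra shortest path from s to t
    under dual arc prices; if the resulting reduced cost is attractive,
    the corresponding path-flow variable enters as a new column.
    arcs: list of (u, v) pairs; dual_price: one price per arc (>= 0)."""
    adj = [[] for _ in range(n_nodes)]
    for (u, v), w in zip(arcs, dual_price):
        adj[u].append((v, w))
    dist = [float("inf")] * n_nodes
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return float("inf")  # t unreachable: no improving column on this pair
```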
Modern performance measures differ from the classical ones since they assess performance against a benchmark and usually account for asymmetry in return distributions. The Omega ratio is one of these measures. Until recently, limited research has addressed the optimization of the Omega ratio, since it had been thought to be computationally intractable. The Enhanced Index Tracking Problem (EITP) is the problem of selecting a portfolio of securities able to outperform a market index while bearing limited additional risk. In this paper, we propose two novel mathematical formulations for the EITP based on the Omega ratio. The first formulation applies the standard definition of the Omega ratio, where it is computed with respect to a given value, whereas the second formulation considers the Omega ratio with respect to a random target. We show how each formulation, nonlinear in nature, can be transformed into a Linear Programming model. We further extend the models to include real features, such as a cardinality constraint and buy-in thresholds on the investments, obtaining Mixed Integer Linear Programming problems. Computational experiments conducted on a large set of benchmark instances show that the portfolios selected by the model assuming the standard definition of the Omega ratio are consistently outperformed, in terms of out-of-sample performance, by those obtained solving the model that considers a random target. Furthermore, in most of the instances the portfolios optimized with the latter model mimic very closely the behavior of the benchmark over the out-of-sample period, while sometimes yielding significantly larger returns.
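For concreteness, the Omega ratio of a portfolio return R_x with respect to a target τ is (standard definition):

$$\Omega_{\tau}(x) = \frac{\mathbb{E}\big[(R_x - \tau)_{+}\big]}{\mathbb{E}\big[(\tau - R_x)_{+}\big]} = 1 + \frac{\mathbb{E}[R_x] - \tau}{\mathbb{E}\big[(\tau - R_x)_{+}\big]},$$

so maximizing Ω amounts to maximizing a ratio of functions that are linear over discrete scenarios; a Charnes–Cooper style change of variables is the standard route to an LP. The paper's two formulations differ in whether τ is a fixed value or a random target.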
Resource allocation problems are concerned with the allocation of limited resources among competing activities so as to achieve the best performance. In systems which serve many users, however, there is a need to respect fairness rules while looking for overall efficiency. The so-called Max-Min Fairness is widely used to meet these goals. However, allocating the resource to optimize the worst performance may cause a dramatic worsening of overall system efficiency. Therefore, several other fair allocation schemes are being considered and analyzed. In this paper we show how the concepts of multiple criteria equitable optimization can effectively be used to generate various fair and efficient allocation schemes. First, we demonstrate how scalar inequality measures can be consistently used in bicriteria models to search for fair and efficient allocations. Further, two alternative multiple criteria models equivalent to equitable optimization are introduced, allowing a larger variety of fair and efficient resource allocation schemes to be generated.
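The equitable optimization concepts referred to here are commonly built on generalized Lorenz (equitable) dominance: with outcomes ordered non-decreasingly, $y_{[1]} \le \dots \le y_{[m]}$, a vector y equitably dominates y' when (standard formulation, not necessarily the paper's exact notation)

$$\sum_{i=1}^{k} y_{[i]} \ \ge\ \sum_{i=1}^{k} y'_{[i]} \quad \text{for all } k = 1, \dots, m,$$

with at least one inequality strict; fair and efficient allocations are then the nondominated ones under this relation.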
We consider the problem of constructing mean-risk models which are consistent with the second degree stochastic dominance relation. By exploiting duality relations of convex analysis we develop the quantile model of stochastic dominance for general distributions. This allows us to show that several models using quantiles and tail characteristics of the distribution are in harmony with the stochastic dominance relation. We also provide stochastic linear programming formulations of these models.
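The quantile model referred to here can be summarized as follows (consistent with the authors' published results; the notation is ours). Defining the second quantile function (absolute Lorenz curve)

$$F_X^{(-2)}(p) = \int_0^p F_X^{-1}(\alpha)\, d\alpha, \qquad p \in (0, 1],$$

second degree stochastic dominance is equivalent to the pointwise ordering $F_X^{(-2)}(p) \ge F_Y^{(-2)}(p)$ for all p, and tail measures such as $\mathrm{CVaR}_p(X) = F_X^{(-2)}(p)/p$ are therefore consistent with SSD.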
Resource allocation problems are concerned with the allocation of limited resources among competing activities so as to achieve the best performance. In systems which serve many users there is a need to respect fairness rules while looking for overall efficiency. The so-called Max-Min Fairness is widely used to meet these goals. However, allocating the resource to optimize the worst performance may cause a dramatic worsening of overall system efficiency. Therefore, several other fair allocation schemes have been proposed and analyzed. In this paper we focus on mean-equity approaches, which quantify the problem in a lucid form of two criteria: the mean outcome, representing overall efficiency, and a scalar measure of inequality of outcomes, representing the equity (fairness) aspects. The mean-equity model is appealing to decision makers and allows a simple trade-off analysis. On the other hand, for typical dispersion indices used as inequality measures, the mean-equity approach may lead to conclusions that are inferior with respect to outcome maximization (system efficiency). Some inequality measures, however, can be combined with the mean itself into optimization criteria that remain in harmony with both inequality minimization and maximization of outcomes. In this paper we introduce general conditions on inequality measures sufficient to provide such equitable consistency. We verify the conditions for the basic inequality measures, thus showing how they can be used without leading to inferior distributions of system outcomes.
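The type of condition involved can be sketched as follows (our paraphrase of the equitable consistency idea, to be read against the paper's precise definitions): an inequality measure ϱ is α-equitably consistent with the mean μ when

$$y \ \text{equitably dominates}\ y' \ \Longrightarrow\ \mu(y) - \alpha\, \varrho(y) \ \ge\ \mu(y') - \alpha\, \varrho(y'),$$

so that optimizing the combined criterion μ − αϱ can never select an equitably dominated (inferior) distribution of outcomes.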
Mathematical and Statistical Methods for Actuarial Sciences and Finance, 2010
The portfolio optimisation problem is modelled as a mean-risk bicriteria optimisation problem where the expected return is maximised and some (scalar) risk measure is minimised. In the original Markowitz model the risk is measured by the variance, while several polyhedral risk measures have since been introduced, leading to Linear Programming (LP) computable portfolio optimisation models in the case of discrete random variables represented by their realisations under specified scenarios. Recently, second order quantile risk measures have been introduced and have become popular in finance and banking. The simplest such measure, now commonly called the Conditional Value at Risk (CVaR) or Tail VaR, represents the mean shortfall at a specified confidence level. The corresponding portfolio optimisation models can be solved with general purpose LP solvers. However, when more advanced simulation models are employed for scenario generation, one may obtain several thousands of scenarios. This may lead to an LP model with a huge number of variables and constraints, decreasing its computational efficiency. We show that the computational efficiency can then be dramatically improved with an alternative model taking advantage of LP duality. Moreover, a similar reformulation can be applied to more complex quantile risk measures such as Gini's mean difference, as well as to the mean absolute deviation.
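A minimal sketch of the scenario-based CVaR portfolio LP discussed here, built with SciPy on synthetic data (illustrative only; the data, dimensions, and variable layout are our assumptions, not the paper's experiments):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, n, beta = 1000, 10, 0.05          # scenarios, instruments, tail level
R = rng.normal(0.001, 0.02, (S, n))  # hypothetical scenario returns
p = np.full(S, 1.0 / S)              # scenario probabilities

# Variables: x (n weights), eta (1), d (S shortfalls).
# Maximize eta - (1/beta) * sum(p_s * d_s)  ->  linprog minimizes, so negate.
c = np.concatenate([np.zeros(n), [-1.0], p / beta])
# Constraints d_s >= eta - R[s] @ x  <=>  -R[s] @ x + eta - d_s <= 0
A_ub = np.hstack([-R, np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)
A_eq = np.zeros((1, n + 1 + S)); A_eq[0, :n] = 1.0    # sum(x) = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_opt, cvar_opt = res.x[:n], -res.fun
print("rows:", S + 1, "cols:", n + 1 + S, "optimal CVaR:", cvar_opt)
```

Note how the constraint matrix has one row per scenario; the LP-duality reformulation the abstract refers to moves the scenario dependence into columns instead.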
The problem of aggregating multiple numerical criteria to form overall objective functions is of considerable importance in many disciplines. The ordered weighted averaging (OWA) aggregation, introduced by Yager, uses weights assigned to the ordered values rather than to the specific criteria. This allows one to model various aggregation preferences while preserving impartiality (neutrality) with respect to the individual criteria. However, importance weighted averaging is a central task in multicriteria decision problems of many kinds. It can be achieved with the Weighted OWA (WOWA) aggregation, though the importance weights make the WOWA concept much more complicated than the original OWA. We show that the WOWA aggregation with monotonic preferential weights can be reformulated in a way that allows us to introduce linear programming optimization models, similar to the optimization models we developed earlier for the OWA aggregation. Computational efficiency of the proposed models is demonstrated.
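The LP reformulations rest on the fact that the sum of the k largest of m outcomes y_i is itself LP-computable (a classical building block used in the authors' earlier OWA models; sketched in our notation):

$$\theta_k(y) = \min_{t,\, d}\ \Big\{\, k\,t + \sum_{i=1}^{m} d_i \ :\ d_i \ge y_i - t,\ d_i \ge 0 \,\Big\}.$$

An OWA objective with monotonic (non-increasing) preferential weights is a nonnegative combination of the θ_k and hence optimizable by LP; WOWA additionally folds the importance weights into this scheme.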
2014 IEEE 38th International Computer Software and Applications Conference Workshops, 2014
Network resource allocation problems are concerned with the allocation of limited resources among competing entities so as to respect fairness rules while looking for overall efficiency. This paper presents the methodology of fair optimization, representing inequality-averse optimization rather than strict inequality minimization as the foundation of fairness in resource allocation. Max-Min Fairness and lexicographic maximin optimization, commonly applied in network resource allocation, are the most widely known concepts of fair optimization. Alternative models of fair optimization are discussed, showing that they generate all the classical fair solution concepts as special cases. Moreover, the fair optimization concepts can effectively generate various other fair and efficient resource allocation schemes.
Proceedings of the International Multiconference on Computer Science and Information Technology, 2010
The portfolio optimization problem is modeled as a mean-risk bicriteria optimization problem where the expected return is maximized and some (scalar) risk measure is minimized. In the original Markowitz model the risk is measured by the variance, while several polyhedral risk measures have since been introduced, leading to Linear Programming (LP) computable portfolio optimization models in the case of discrete random variables represented by their realizations under specified scenarios. Among them, the second order quantile risk measures have recently become popular in finance and banking. The simplest such measure, now commonly called the Conditional Value at Risk (CVaR) or Tail VaR, represents the mean shortfall at a specified confidence level. The corresponding portfolio optimization models can be solved with general purpose LP solvers. However, when more advanced simulation models are employed for scenario generation, one may obtain several thousands of scenarios. This may lead to an LP model with a huge number of variables and constraints, decreasing its computational efficiency, since the number of constraints (matrix rows) is usually proportional to the number of scenarios, while the number of variables (matrix columns) is proportional to the total of the number of scenarios and the number of instruments. We show that the computational efficiency can then be dramatically improved with an alternative model taking advantage of LP duality. In the introduced models the number of structural constraints (matrix rows) is proportional to the number of instruments, so the number of scenarios does not seriously affect the efficiency of the simplex method.
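The dual model alluded to can be sketched as follows (a standard LP-duality derivation consistent with the abstract's row/column counts; the notation is ours). For instrument returns r_{js} and the scenario-based CVaR maximization primal, duality yields

$$\min_{q,\,u}\ \Big\{\, q \ :\ \sum_{s=1}^{S} u_s\, r_{js} \le q \ \ \forall j, \qquad \sum_{s=1}^{S} u_s = 1, \qquad 0 \le u_s \le \frac{p_s}{\beta} \,\Big\},$$

where the only structural rows are the n instrument constraints plus a normalization, while the S scenario variables u_s enter through simple bounds; this is why thousands of scenarios no longer inflate the row count.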
Lecture Notes in Economics and Mathematical Systems, 1997
There are several decision problems with multiple homogeneous and anonymous criteria where the preference model needs to satisfy the principle of anonymity (symmetry with respect to permutations of criteria). The standard reference point method cannot be directly applied to such problems. In this paper we develop, as an analogue of the reference point method, the reference distribution method, taking into account both the efficiency principle and the principle of anonymity. All the solutions generated during the interactive process belong to the symmetrically efficient set, which is a subset of the standard efficient set. This means that the achievement vector of the generated solution is dominated neither by another achievement vector nor by any permutation of an achievement vector.
The approach called Lexicographic Min-Max (LMM) optimization searches for solutions that are minimal according to the lex-max order on a multidimensional outcome space. LMM is a refinement of standard Min-Max optimization: in addition to the largest outcome, we also minimize the second largest outcome (provided the largest one remains as small as possible), then the third largest (provided the two largest remain as small as possible), and so on. The necessity of point-wise ordering of outcomes within the lexicographic optimization scheme makes the LMM problem hard to implement. For convex problems it is possible to use iterative algorithms solving a sequence of properly defined Min-Max problems by eliminating some blocked outcomes. In general, however, no blocked outcome may exist, which precludes iterative Min-Max processing. In this paper we analyze two alternative optimization models that allow lexicographic sequential procedures to be formed for various nonconvex (possibly discrete) LMM problems. Both approaches are based on sequential optimization of directly defined artificial criteria. The criteria can be introduced into the original model with some auxiliary variables and linear inequalities, so the methods are easily implementable.
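One family of directly defined artificial criteria used in this line of work is cumulated ordered outcomes: writing $\bar\theta_k(y)$ for the sum of the k largest outcomes, the LMM problem is equivalent to lexicographic minimization of $(\bar\theta_1(y), \dots, \bar\theta_m(y))$, and each $\bar\theta_k$ admits the linear representation (sketched here under our notation)

$$\bar\theta_k(y) = \min_{t,\, d}\ \Big\{\, k\,t + \sum_{i=1}^{m} d_i \ :\ d_i \ge y_i - t,\ d_i \ge 0 \,\Big\},$$

which introduces only auxiliary variables and linear inequalities, exactly as the abstract describes, and remains valid over nonconvex and discrete feasible sets.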
2012 15th International Telecommunications Network Strategy and Planning Symposium (NETWORKS), 2012
Allocating bandwidth to maximize service flows with fair treatment of all the services is a key issue in network dimensioning. In such applications, the so-called Max-Min Fairness (MMF) solution concept is widely used. It is based on maximization of the worst service performance, with additional regularization by lexicographic maximization of the second worst performance, the third worst, and so on. The basic sequential procedure is applicable only to convex models; thus it can deal with basic design problems but fails if the practical discrete restrictions commonly arising in telecommunications network design are to be taken into account. We analyze alternative sequential approaches that allow non-convex MMF network dimensioning problems to be solved. The directly defined sequential criteria can be introduced into the original model with some auxiliary variables and linear inequalities. The approaches guarantee the exact MMF solution for a complete set of criteria. However, they can be simplified by reducing the number of criteria, thus efficiently generating approximate MMF solutions.
The problem of aggregating multiple numerical attributes to form an overall measure is of considerable importance in many disciplines. The ordered weighted averaging (OWA) aggregation, introduced by Yager, uses weights assigned to the ordered values rather than to the specific attributes. This allows one to model various aggregation preferences while preserving impartiality (neutrality) with respect to the individual attributes. However, importance weighted averaging is a central task in multiattribute decision problems of many kinds. It can be achieved with the Weighted OWA (WOWA) aggregation, though the importance weights make the WOWA concept much more complicated than the original OWA. In this paper we analyze solution procedures for optimization problems with ordered average objective functions or constraints. We show that the WOWA aggregation with monotonic preferential weights can be reformulated in a way that allows us to introduce linear programming optimization models, similar to the optimization models we developed earlier for the OWA aggregation. Computational efficiency of the proposed models is demonstrated.
The Enhanced Index Tracking Problem (EITP) calls for the determination of an optimal portfolio of assets with the bi-objective of maximizing the excess return of the portfolio above a benchmark and, simultaneously, minimizing the tracking error. The EITP has been capturing growing attention among academics, both for its practical relevance and for the scientific challenges that its study, as a multi-objective problem, poses. Several optimization models have been proposed in the literature, where the tracking error is measured in terms of standard deviation or in linear form using, for instance, the mean absolute deviation. More recently, reward-risk optimization measures, like the Omega ratio, have been adopted for the EITP. On the other hand, shortfall or quantile risk measures have nowadays gained an established popularity in a variety of financial applications. In this paper, we propose a class of bi-criteria optimization models for the EITP, where risk is measured using the Weighted multiple Conditional Value-at-Risk (WCVaR). The WCVaR is defined as a weighted combination of multiple CVaR measures, and thus allows more detailed risk aversion modeling compared to the use of a single CVaR measure. The application of the WCVaR to the EITP is analyzed, both theoretically and empirically. Through extensive computational experiments, the performance of the optimal portfolios selected by the proposed optimization models is compared, both in-sample and, more importantly, out-of-sample, to that of the portfolios obtained using another recent optimization model taken from the literature.
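For reference, the WCVaR is simply (following the definition recalled in the abstract):

$$\mathrm{WCVaR}(x) = \sum_{m=1}^{M} w_m\, \mathrm{CVaR}_{\beta_m}(x), \qquad w_m > 0, \quad \sum_{m=1}^{M} w_m = 1,$$

so it inherits LP computability from the individual CVaR terms while shaping risk aversion across several tail levels β_m.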
The Conditional Value-at-Risk (CVaR) has become a very popular concept to measure the risk of an investment. In fact, though originally proposed for adoption in a financial context, the concept has potential for a broad range of applications. In this paper, we consider problems that are formulated as mixed integer linear programming (MILP) models and show that a discrete version of the CVaR, which we call Discrete CVaR (DCVaR), can be adopted. The DCVaR mediates between a conservative Minimax/Maximin criterion and an aggressive minimum cost/maximum profit criterion, in all cases where uncertainty or variability matters. We show that the Discrete CVaR satisfies properties that make it an attractive measure. In particular, the models resulting from the adoption of the Discrete CVaR remain MILP models. To illustrate the relevance of the proposed model, we apply the DCVaR measure to several instances of the multidimensional knapsack problem and the p-median problem.
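The mediating behavior described above is easy to see in a toy computation (an illustrative sketch of tail averaging over equally likely scenarios; the paper's formal DCVaR definition may handle fractional tail sizes differently):

```python
import numpy as np

def dcvar(costs, k):
    """Average of the k worst (largest) costs among equally likely
    scenarios: k = 1 gives the conservative worst case (Minimax),
    k = len(costs) the aggressive expected cost."""
    worst_first = np.sort(np.asarray(costs, dtype=float))[::-1]
    return worst_first[:k].mean()

scenario_costs = [12.0, 7.5, 9.0, 15.5, 8.0]
print(dcvar(scenario_costs, 1))  # 15.5  (worst case)
print(dcvar(scenario_costs, 2))  # 13.75 (two worst averaged)
print(dcvar(scenario_costs, 5))  # 10.4  (plain mean)
```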