Vladikavkaz Mathematical Journal
2008, Volume 10, Issue 4, P. 39–48
UDC 513.588
NONSTANDARD MODELS AND OPTIMIZATION¹
S. S. Kutateladze
On the Centenary of the Birth of S. L. Sobolev
This is an overview of a few possibilities that are opened up by model theory in optimization. Most attention
is paid to the impact of infinitesimal analysis and Boolean valued models on convexity, Pareto optimality,
and hyperapproximation.
Keywords: Boolean valued analysis, nonstandard analysis, approximate efficiency, hyperapproximation,
lattice normed space.
1. Agenda. The union of functional analysis and applied mathematics celebrates its
sixtieth anniversary this year [1, 2]. This talk focuses on the trends of interaction between
model theory and the methods of domination, discretization, and scalarization.
2. The Art of Calculus. Provable counting is the art of calculus, which is mathematics
in modern parlance. Mathematics has existed as a science for more than two and a half millennia,
and we can never confuse it with history or chemistry. In this respect our views of what
mathematics is are independent of time.
The objects of mathematics are the quantitative forms of human reasoning. Mathematics
functions as the science of convincing calculations. Once demonstrated, the facts of
mathematics will never vanish. Of course, mathematics renews itself constantly, while the
stock of mathematical notions and constructions increases and the understanding of
the rigor and technologies of proof and demonstration changes. The frontier we draw between pure
and applied mathematics is also time-dependent.
3. Francis Bacon. The Mathematics are either pure or mixed. To the Pure Mathematics
are those sciences belonging which handle quantity determinate, merely severed from any
axioms of natural philosophy; and these are two, Geometry and Arithmetic; the one handling
quantity continued, and the other dissevered. Mixed hath for subject some axioms or parts of
natural philosophy, and considereth quantity determined, as it is auxiliary and incident unto
them. . .
In the Mathematics. . . that use which is collateral and intervenient is no less worthy than
that which is principal and intended. . . And as for the Mixed Mathematics, I may only make
this prediction, that there cannot fail to be more kinds of them, as nature grows further
disclosed. (The Advancement of Learning, 1605)
© 2008 Kutateladze S. S.
¹ This article is partly based on a talk prepared for the International Conference «Methods of Logic in
Mathematics V», June 1–6, 2008, St. Petersburg, but not delivered owing to the author's illness.
4. Mixed Turns into Applied. After the lapse of 150 years Leonhard Euler used the
words «pure mathematics» in the title of one of his papers, Specimen de usu observationum in
mathesi pura, in 1761. It was practically at the same time that the term «pure mathematics»
appeared in the earliest Encyclopaedia Britannica. In the nineteenth century «mixed»
mathematics came to be referred to as «applied».
The famous Journal de Mathématiques Pures et Appliquées was founded by Joseph
Liouville in 1836 and The Quarterly Journal of Pure and Applied Mathematics started
publication in 1857.
5. Pure and Applied Mathematics. The intellectual challenge, beauty, and intrinsic
logic of the topics under study are the impetus of many comprehensive and deep studies
in mathematics which are customarily qualified as pure. Any application of mathematics is
impossible without creating some metaphors, models of the phenomena and processes under
examination. Modeling is a special independent sphere of intellectual activity which lies outside
mathematics.
Application of mathematics resides beyond mathematics in much the same way as maladies
exist in nature rather than inside medicine. Applied mathematics acts as an apothecary mixing
drugs for battling illnesses.
The art and craft of mathematical techniques for the problems of other sciences are the
content of applied mathematics.
6. New Challenges. Classical mechanics in the broadest sense of the words was the
traditional sphere of applications of mathematics in the nineteenth century. The beginning of
the twentieth century was marked with a sharp enlargement of the sphere of applications
of mathematics. Quantum mechanics appeared, requiring new mathematical tools. The
theory of operators in Hilbert spaces and distribution theory were oriented to adapting the
heuristic methods of the new physics. At the same time social phenomena became the
object of nonverbal research, requiring the invention of special mathematical methods. The
demand for the statistical treatment of various data grew rapidly. The founding of new industries,
as well as the introduction of promising technologies and new materials, made it necessary
to elaborate techniques of calculation. The rapid progress of applied mathematics was
facilitated by the automation and mechanization of accounting and standard calculations.
7. Cofathers of New Mentality. In the 1930s applied mathematics rapidly approached
functional analysis. Of profound importance in this trend was the research of John von
Neumann in the mathematical foundations of quantum mechanics and game theory as a
tool for economic studies. Leonid Kantorovich was a pioneer and generator of new synthetic
ideas in Russia [1, 2, 7, 8].
8. Enigmas of Economics. The main particularity of the extremal problems of economics
consists in the presence of numerous conflicting ends and interests to be harmonized. We
encounter the instances of multicriteria optimization. Seeking for an optimal solution in these
circumstances, we must take into account various contradictory preferences which combine
into a sole compound aim. It is impossible as a rule to distinguish some particular scalar
target and ignore the rest of the targets. This circumstance involves the specific difficulties
that are untypical in the scalar case: we must specify what we should call a solution of a
vector program and we must agree upon the method of conforming versatile ends provided
that some agreement is possible in principle. Therefore, it is topical to seek reasonable
concepts of optimality in multiobjective decision making. Among these we distinguish the
concepts of ideal and generalized optimum alongside Pareto-optimum as well as approximate
and infinitesimal optimum.
9. Enter the Reals. Optimization is the science of choosing the best. To choose, we use
preferences. To optimize, we use infima and suprema (for bounded subsets) which is practically
the least upper bound property. So optimization needs ordered sets and primarily (boundedly)
complete lattices. To operate with preferences, we use group structure. To aggregate and scale,
we use linear structure. All these are happily provided by the reals R, a one-dimensional
Dedekind complete vector lattice. A Dedekind complete vector lattice is a Kantorovich space.
10. Scalarization. Scalarization in the most general sense means reduction to numbers.
Since each number is a measure of quantity, the idea of scalarization is clearly of a universal
importance to mathematics. The deep roots of scalarization are revealed by the Boolean
valued validation of the Kantorovich heuristic principle. We will dwell upon the aspects of
scalarization most important in applications and connected with the problems of multicriteria
optimization.
11. Legendre in Disguise. Assume that X is a vector space, E is an ordered vector
space, f : X → E• := E ∪ +∞ is a convex operator, and C := dom(f) ⊂ X is a convex set.
A vector program (C, f ) is written as follows:
x ∈ C,
f (x) → inf.
The standard sociological trick includes (C, f) into a parametric family yielding the
Legendre transform or Young–Fenchel transform of f:

f∗(l) := sup_{x∈X} (l(x) − f(x)),

with l ∈ X# a linear functional over X. The epigraph of f∗ is a convex subset of X# × R and so
f∗ is convex. Observe that −f∗(0) is the value of (C, f) [15, 17, 18].
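By way of illustration, here is a minimal numeric sketch in Python; the quadratic sample function, the grid, and all names are assumptions made for this example rather than part of the theory. It approximates f∗(l) on a grid and recovers the value of the program as −f∗(0).

import numpy as np

# A numeric sketch of the Young-Fenchel transform for a scalar convex
# function; the sample f and the grid are illustrative assumptions.
xs = np.linspace(-5.0, 5.0, 2001)      # discretized domain C = dom(f)

def f(x):
    return 0.5 * x**2 + 1.0            # sample convex function

def conjugate(l):
    """Approximate f*(l) = sup_x (l(x) - f(x)) over the grid."""
    return np.max(l * xs - f(xs))

print(-conjugate(0.0))   # the value of the program f(x) -> inf, here min f = 1.0
print(conjugate(1.0))    # for this f the exact value is 1/2 - 1 = -0.5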
12. Order Omnipresent. A convex function is locally a positively homogeneous convex
function, a sublinear functional. Recall that p : X → R is sublinear whenever

epi p := {(x, t) ∈ X × R : p(x) ≤ t}

is a cone. Recall that a numeric function is uniquely determined from its epigraph.
Given C ⊂ X, put
H(C) := {(x, t) ∈ X × R+ : x ∈ tC},
the Hörmander transform of C. Now, C is convex if and only if H(C) is a cone. A space with
a cone is a (pre)ordered vector space.
«The order, the symmetry, the harmony enchant us. . . » (Leibniz)
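A small numeric spot-check of the last claim may be helpful; it assumes, purely for illustration, that C is the closed unit disc in R^2 and verifies on random samples that H(C) is closed under addition and under multiplication by positive scalars, as the convexity of C predicts. All names below are assumptions of the example.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative convex set C: the closed unit disc in R^2 (an assumed example).
def in_C(x, tol=1e-9):
    return float(np.dot(x, x)) <= 1.0 + tol

# Membership in the Hormander transform H(C) = {(x, t): t >= 0, x in t*C}.
def in_H(x, t, tol=1e-9):
    if t <= tol:
        return t >= -tol and bool(np.allclose(x, 0.0, atol=1e-7))
    return in_C(x / t, tol)

def random_point_of_C():
    u = rng.uniform(-1.0, 1.0, size=2)
    return u / max(1.0, float(np.linalg.norm(u)))   # squeeze into the unit disc

# Since C is convex, H(C) should be a cone: closed under addition and
# under multiplication by positive scalars.  A random spot-check:
for _ in range(1000):
    t1, t2, lam = rng.uniform(0.1, 5.0, size=3)
    x1, x2 = t1 * random_point_of_C(), t2 * random_point_of_C()
    assert in_H(x1 + x2, t1 + t2)
    assert in_H(lam * x1, lam * t1)
print("H(C) behaves as a cone on the sampled points")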
13. Fermat’s Criterion. ∂f(x̄), the subdifferential of f at x̄, is

{l ∈ X# : (∀ x ∈ X) l(x) − l(x̄) ≤ f(x) − f(x̄)}.
A point x̄ is a solution to the minimization problem (X, f ) if and only if
0 ∈ ∂f (x̄).
This Fermat criterion turns into the Rolle Theorem in a smooth case and is of little avail
without effective tools for calculating ∂f (x̄). A convex analog of the «chain rule» is in order.
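For a scalar convex function the criterion is easy to test numerically: the subdifferential at a point is the interval between the one-sided derivatives. The sketch below uses an assumed test function f(x) = |x| + (x − 1)², whose minimizer is 1/2; all names are illustrative.

# A scalar sketch of the Fermat criterion 0 in the subdifferential of f at x.
# The test function is an assumed example, convex with a kink at 0.
def f(x):
    return abs(x) + (x - 1.0) ** 2

def subdifferential(x, h=1e-6):
    """For convex f on R, the subdifferential at x is the interval between
    the left and the right derivative (approximated by difference quotients)."""
    left = (f(x) - f(x - h)) / h
    right = (f(x + h) - f(x)) / h
    return left, right

def is_minimizer(x, tol=1e-4):
    left, right = subdifferential(x)
    return left <= tol and right >= -tol      # i.e. 0 lies in [left, right]

print(is_minimizer(0.0))   # False: the subdifferential at 0 is about [-3, -1]
print(is_minimizer(0.5))   # True:  0 is a subgradient at the minimizer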
14. Enter Hahn–Banach. The Dominated Extension takes the form
∂(p ◦ ι)(0) = (∂p)(0) ◦ ι,
with p a sublinear functional over X and ι the identical embedding of some subspace of X
into X.
If the target R may be replaced with an ordered vector space E, then E admits dominated
extension.
15. Enter Kantorovich. The matching of convexity and order was established in two
steps.
Theorem (Hahn–Banach–Kantorovich). Every Kantorovich space admits dominated
extension of linear operators.
This theorem, proven by Kantorovich in 1935, was the first attractive result of the theory of
ordered vector spaces [3].
Theorem (Bonnice–Silverman–To). Each ordered vector space admitting dominated
extension of linear operators is a Kantorovich space.
16. New Heuristics. Kantorovich demonstrated the role of K-spaces by the example
of the Hahn–Banach theorem. He proved that this central principle of functional analysis
admits the replacement of reals with elements of an arbitrary K-space while substituting
linear and sublinear operators with range in this space for linear and sublinear functionals.
These observations laid grounds for the universal heuristics based on his intuitive belief that
the members of an abstract Kantorovich space are a sort of generalized numbers.
17. Canonical Operator. Consider a Kantorovich space E and an arbitrary nonempty
set A. Denote by l∞ (A, E) the set of all order bounded mappings from A into E; i. e.,
f ∈ l∞ (A, E) if and only if f : A → E and {f (α) : α ∈ A} is order bounded in E. It is
easy to verify that l∞ (A, E) becomes a Kantorovich space if endowed with the coordinatewise
algebraic operations and order. The operator εA,E acting from l∞(A, E) into E by the rule

εA,E : f ↦ sup{f(α) : α ∈ A}   (f ∈ l∞(A, E))

is called the canonical sublinear operator given A and E. We often write εA instead of εA,E
when it is clear from the context what Kantorovich space is meant. The notation εn is used
when the cardinality of A equals n, and we call the operator εn finitely generated.
18. Support Hull. Consider a set A of linear operators acting from a vector space X into
a Kantorovich space E. The set A is weakly order bounded if {αx : α ∈ A} is order bounded
for every x ∈ X. We denote by ⟨A⟩x the mapping that assigns the element αx ∈ E to each
α ∈ A, i. e. ⟨A⟩x : α ↦ αx. If A is weakly order bounded then ⟨A⟩x ∈ l∞(A, E) for every
fixed x ∈ X. Consequently, we obtain the linear operator ⟨A⟩ : X → l∞(A, E) that acts as
⟨A⟩ : x ↦ ⟨A⟩x. Associate with A one more operator

pA : x ↦ sup{αx : α ∈ A}   (x ∈ X).

The operator pA is sublinear. The support set ∂pA is denoted by cop(A) and referred to as
the support hull of A.
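In finite dimensions these constructions reduce to coordinatewise suprema. The sketch below (Python; E = R^2 with the coordinatewise order, a two-member family A of matrices, and all names chosen for the example) evaluates ⟨A⟩, εA, and pA = εA ◦ ⟨A⟩ and spot-checks the sublinearity of pA.

import numpy as np

# E = R^2 with coordinatewise order (a simple Kantorovich space); A is an
# assumed finite family of linear operators R^3 -> R^2 given by matrices.
A = [np.array([[1.0, 0.0, 2.0],
               [0.0, 1.0, -1.0]]),
     np.array([[-1.0, 3.0, 0.0],
               [2.0, 0.0, 1.0]])]

def bracket_A(x):
    """<A>x : alpha |-> alpha x, stored as an array indexed by the members of A."""
    return np.stack([alpha @ x for alpha in A])

def eps_A(g):
    """Canonical sublinear operator: coordinatewise supremum over A."""
    return g.max(axis=0)

def p_A(x):
    """p_A(x) = sup{alpha x : alpha in A} = (eps_A o <A>)(x)."""
    return eps_A(bracket_A(x))

x, y = np.array([1.0, -2.0, 0.5]), np.array([0.0, 1.0, 1.0])
print(p_A(x))
# Sublinearity spot-check: p_A(x + y) <= p_A(x) + p_A(y) coordinatewise.
print(bool(np.all(p_A(x + y) <= p_A(x) + p_A(y) + 1e-12)))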
19. Hahn–Banach in Disguise. The modern statement of the dominated extension is
as follows [15]:
Theorem. If p is a sublinear operator with ∂p = cop(A) then p = εA ◦ ⟨A⟩. Assume
further that p1 : X → E is a sublinear operator and p2 : E → F is an increasing sublinear
operator. Then

∂(p2 ◦ p1) = {T ◦ ⟨∂p1⟩ : T ∈ L+(l∞(∂p1, E), F) & T ◦ Δ∂p1 ∈ ∂p2}.

Moreover, if ∂p1 = cop(A1) and ∂p2 = cop(A2) then

∂(p2 ◦ p1) = {T ◦ ⟨A1⟩ : T ∈ L+(l∞(A1, E), F) & (∃ α ∈ ∂εA2) T ◦ ΔA1 = α ◦ ⟨A2⟩}.
20. Enter Boole. Cohen’s final solution of the problem of the cardinality of the continuum
within ZFC gave rise to the Boolean-valued models by Vopěnka, Scott, and Solovay. Takeuti
coined the term «Boolean-valued analysis» for applications of the new models to functional
analysis [10].
Let B be a complete Boolean algebra. Given an ordinal α, put

Vα(B) := {x : (∃ β ∈ α) x : dom(x) → B & dom(x) ⊂ Vβ(B)}.

The Boolean-valued universe V(B) is

V(B) := ⋃α∈On Vα(B),
with On the class of all ordinals. The truth value [[ϕ]] ∈ B is assigned to each formula ϕ of
ZFC relativized to V(B) .
21. Enter Descent. Given ϕ, a formula of ZFC, and y, a subset of V(B); put
Aϕ := Aϕ(·, y) := {x : ϕ(x, y)}. The descent Aϕ↓ of a class Aϕ is

Aϕ↓ := {t : t ∈ V(B) & [[ϕ(t, y)]] = 1}.

If t ∈ Aϕ↓, then it is said that t satisfies ϕ(·, y) inside V(B).
The descent x↓ of an element x ∈ V(B) is defined by the rule

x↓ := {t : t ∈ V(B) & [[t ∈ x]] = 1},

i. e. x↓ = A·∈x↓. The class x↓ is a set. Moreover, x↓ ⊂ scyc(dom(x)), where scyc is the symbol
of the taking of the strong cyclic hull. If x is a nonempty set inside V(B) then

(∃ z ∈ x↓) [[(∃ z ∈ x) ϕ(z)]] = [[ϕ(z)]].
22. The Reals in Disguise. There is an object R inside V(B) modeling R, i. e.,

[[R is the reals]] = 1.

Let R↓ stand for the descent of the carrier |R| of the algebraic system R := (|R|, +, · , 0, 1, ≤)
inside V(B). Implement the descent of the structures on |R| to R↓ as follows:

x + y = z ↔ [[x + y = z]] = 1;
xy = z ↔ [[xy = z]] = 1;
x ≤ y ↔ [[x ≤ y]] = 1;
λx = y ↔ [[λ∧x = y]] = 1
(x, y, z ∈ R↓, λ ∈ R).

Theorem (Gordon). R↓ with the descended structures is a universally complete
Kantorovich space with base B(R↓) isomorphic to B.
23. Norming Sequences [1, 4].
(ξ1, ξ2, . . . ) = (|ξ1|, |ξ2|, . . . , |ξN−1|, sup_{k≥N} |ξk|) ∈ R^N.
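A small sketch of this norming vector in Python; the sample sequence ξk = (−1)^k/k and the finite tail used to estimate the supremum are assumptions for the example (the estimate is exact here because |ξk| decreases).

import numpy as np

# The norming vector (|xi_1|, ..., |xi_{N-1}|, sup_{k>=N} |xi_k|) in R^N
# of a bounded sequence; the sample sequence is an assumed example.
def xi(k):
    return (-1.0) ** k / k

def norming_vector(N, tail=10_000):
    head = [abs(xi(k)) for k in range(1, N)]
    tail_sup = max(abs(xi(k)) for k in range(N, N + tail))
    return np.array(head + [tail_sup])

print(norming_vector(5))   # [1, 1/2, 1/3, 1/4, 1/5]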
[Figure: the graph of a function x(t) together with its norming vector x = (|ξ1|, |ξ2|, |ξ3|).]
«I believe that the use of members of semi-ordered linear spaces instead of reals in various
estimations can lead to essential improvement of the latter.» (Kantorovich [1])
24. Domination. Let X and Y be real vector spaces lattice-normed with K-spaces E
and F. In other words, given are some lattice norms |·|_X : X → E and |·|_Y : Y → F. Assume further that T is
a linear operator from X to Y and S is a positive operator from E into F:

          T
    X ---------> Y
    |            |
    | |·|_X      | |·|_Y
    v            v
    E ---------> F
          S

Moreover, in case

|Tx|_Y ≤ S |x|_X   (x ∈ X),

we call S the dominant or majorant of T.
25. Enter Abstract Norm. If the set of all dominants of T has the least element, then
the latter is called the abstract norm or least dominant of T and denoted by |T|. Hence, the
least dominant |T| is the least positive operator from E to F such that

|Tx|_Y ≤ |T|(|x|_X)   (x ∈ X).
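In the simplest finite-dimensional setting these notions are easy to compute. The sketch below takes X = R^3 and Y = R^2 lattice-normed over E = R^3 and F = R^2 by the coordinatewise absolute value; the matrix T is an assumed example. In this coordinatewise setting the entrywise absolute value of T is a dominant, and testing on the coordinate vectors shows it is the least one.

import numpy as np

rng = np.random.default_rng(1)

# X = R^3, Y = R^2 lattice-normed by the coordinatewise absolute value over
# E = R^3 and F = R^2; T is an assumed matrix operator.
T = np.array([[1.0, -2.0, 0.5],
              [-0.3, 0.0, 4.0]])
S = np.abs(T)                       # candidate least dominant

# Spot-check the domination inequality |T x| <= S |x| on random vectors.
for _ in range(1000):
    x = rng.normal(size=3)
    assert np.all(np.abs(T @ x) <= S @ np.abs(x) + 1e-12)

# Minimality: on each coordinate vector e_j the inequality becomes an equality,
# so every dominant must majorize abs(T) columnwise.
for j in range(3):
    e = np.zeros(3)
    e[j] = 1.0
    assert np.allclose(np.abs(T @ e), S @ np.abs(e))
print("abs(T) dominates T and is attained on the coordinate vectors")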
26. Domination and Model Theory. These days the development of domination
theory proceeds within the framework of Boolean valued analysis. All principal properties of lattice
normed spaces represent the Boolean valued interpretations of the relevant properties of
classical normed spaces. The most important interrelations here are as follows: Each Banach
space inside a Boolean valued model becomes a universally complete Banach–Kantorovich
space as a result of the external deciphering of its constituents. Moreover, each lattice normed space
may be realized as a dense subspace of some Banach space in an appropriate Boolean valued
model. Finally, a Banach space X results from some Banach space inside a Boolean valued
model by a special machinery of bounded descent if and only if X admits a complete Boolean
algebra of norm-one projections which enjoys the cyclicity property. The latter amounts to
the fact that X is a Banach–Kantorovich space and X is furnished with a mixed norm [9].
27. Approximation. Convexity is an abstraction of finitely many stakes encircled with
a surrounding rope, and so no variation of stakes can ever spoil the convexity of the tract to
be surveyed.
Study of stability in optimization is sometimes accomplished by introducing various
epsilons in appropriate places. One of the earliest excursions in this direction is connected with
the classical Hyers–Ulam stability theorem for ε-convex functions. Exact calculations with
epsilons and sharp estimates are sometimes bulky and slightly mysterious. Some alternatives
are suggested by actual infinities, as illustrated by the conception of infinitesimal
optimality.
28. Enter Epsilon and Monad. Assume given a convex operator f : X → E ∪ +∞ and
a point x in the effective domain dom(f ) := {x ∈ X : f (x) < +∞} of f . Given ε > 0 in the
positive cone E+ of E, by the ε-subdifferential of f at x we mean the set

∂ε f(x) := {T ∈ L(X, E) : (∀ y ∈ X)(Ty − Tx ≤ f(y) − f(x) + ε)},

with L(X, E) standing as usual for the space of linear operators from X to E [11].
Distinguish some downward-filtered subset ℰ of E that is composed of positive elements.
Assuming E and ℰ standard, define the monad μ(ℰ) of ℰ as μ(ℰ) := ⋂{[0, ε] : ε ∈ °ℰ}.
The members of μ(ℰ) are positive infinitesimals with respect to ℰ. As usual, °ℰ denotes the
external set of all standard members of ℰ, the standard part of ℰ [16].
29. Approximate Optimality. Fix a positive element ε ∈ E. A feasible point x0 is an
ε-solution or ε-optimum of a program (C, f) provided that f(x0) ≤ e + ε with e the value
of (C, f). In other words, x0 is an ε-solution of (C, f) if and only if x0 ∈ C and f(x0) − ε is
a lower bound of f(C) or, equivalently, f(C) + ε ⊂ f(x0) + E+. Clearly, x0 is an
ε-solution of the unconditional problem f(x) → inf if and only if zero belongs to ∂ε f(x0);
i. e.,

f(x0) ≤ inf_{x∈X} f(x) + ε ↔ 0 ∈ ∂ε f(x0).
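A scalar sketch of this equivalence in Python; the sample f, the grid, and the chosen x0 and ε are assumptions for the example.

import numpy as np

# eps-optimality for a scalar convex function: x0 is an eps-solution of
# f(x) -> inf iff f(x0) <= inf f + eps, i.e. iff 0 lies in the
# eps-subdifferential of f at x0 (both checked here on a grid).
xs = np.linspace(-5.0, 5.0, 100_001)

def f(x):
    return (x - 1.0) ** 2 + 0.3 * np.abs(x)

def zero_in_eps_subdifferential(x0, eps):
    """0 in d_eps f(x0)  <=>  f(y) - f(x0) + eps >= 0 for all y (on the grid)."""
    return bool(np.all(f(xs) - f(x0) + eps >= 0.0))

def is_eps_solution(x0, eps):
    return bool(f(x0) <= f(xs).min() + eps)

x0, eps = 0.9, 0.05
print(is_eps_solution(x0, eps), zero_in_eps_subdifferential(x0, eps))  # both True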
30. Approximate Efficiency. A feasible point x0 is ε-Pareto optimal for (C, f) whenever
f(x0) is a minimal element of U + ε, with U := f(C); i. e., (f(x0) − E+) ∩ (f(C) + ε) = [f(x0)]. In
more detail, x0 is ε-Pareto optimal means that x0 ∈ C and, for all x ∈ C, from f(x0) ≥ f(x) + ε
it follows that f(x0) ∼ f(x) + ε [12, 19, 20].
[Figure: the image U = f(C), its shift U + ε in the (x1, x2)-plane, and an ε-Pareto optimal value xε.]
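A finite sketch of ε-Pareto optimality in Python; the set U = f(C) of objective values, the vector ε, and the tested points are assumptions for the example, with E = R^2 carrying the coordinatewise order.

import numpy as np

# U = f(C): a small assumed set of objective values in R^2.
U = np.array([[1.0, 4.0],
              [2.0, 2.0],
              [4.0, 1.0],
              [3.0, 3.0]])
eps = np.array([0.5, 0.5])

def is_eps_pareto(u0):
    """u0 is eps-Pareto optimal iff no u in U satisfies u + eps <= u0
    coordinatewise with u + eps != u0, i.e. u0 is minimal in U + eps."""
    shifted = U + eps
    dominated = np.all(shifted <= u0, axis=1) & np.any(shifted < u0, axis=1)
    return not bool(np.any(dominated))

print(is_eps_pareto(U[1]))                  # True:  (2, 2) is eps-Pareto optimal
print(is_eps_pareto(np.array([3.0, 3.0])))  # False: (2, 2) + eps <= (3, 3)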
31. Subdifferential Halo. Assume that the monad μ(ℰ) is an external cone over °R
and, moreover, μ(ℰ) ∩ °E = 0. In applications, ℰ is usually the filter of order-units of E. The
relation of infinite proximity or infinite closeness between the members of E is introduced as
follows:

e1 ≈ e2 ↔ e1 − e2 ∈ μ(ℰ) & e2 − e1 ∈ μ(ℰ).

Now

Df(x) := ⋂_{ε ∈ °ℰ} ∂ε f(x) = ⋃_{ε ∈ μ(ℰ)} ∂ε f(x);
the infinitesimal subdifferential of f at x. The elements of Df (x) are infinitesimal subgradients
of f at x [13, 16].
32. Exeunt Epsilon. Theorem. Let f1 : X × Y → E ∪ +∞ and f2 : Y × Z → E ∪ +∞ be
convex operators. Suppose that the convolution f2 △ f1 is infinitesimally exact at some point
(x, y, z); i. e., (f2 △ f1)(x, z) ≈ f1(x, y) + f2(y, z). If, moreover, the convex sets epi(f1, Z) and
epi(X, f2) are in general position then

D(f2 △ f1)(x, z) = Df2(y, z) ◦ Df1(x, y).
33. Discretization. «It seems to me that the main idea of this theory is of a general
character and reflects the general gnoseological principle for studying complex systems. It
was, of course, used earlier, and it is also used in systems analysis, but it does not have a
rigorous mathematical apparatus.
The principle consists simply in the fact that to a given large complex system in some space
a simpler, smaller dimensional model in this or a simpler space is associated by means of one-to-one
or one-to-many correspondence. The study of this simplified model turns out, naturally,
to be simpler and more practicable.
to be simpler and more practicable. This method, of course, presents definite requirements on
the quality of the approximating system.» (Kantorovich [1])
34. Hypodiscretization. The analysis of the equation Tx = y, with T : X → Y
a bounded linear operator between some Banach spaces X and Y, consists in choosing
finite-dimensional vector spaces XN and YN and the corresponding embeddings ıN and ȷN:

          T
    X ---------> Y
    ^            ^
    | ıN         | ȷN
    |            |
    XN --------> YN
          TN

In this event, the equation

TN xN = yN
is viewed as a finite-dimensional approximation to the original problem.
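The following sketch carries this scheme out in Python for an assumed Fredholm equation of the second kind, (Tx)(s) = x(s) + ∫_0^1 K(s, t) x(t) dt with kernel K(s, t) = st/2; XN = YN = R^N via evaluation at grid nodes, and TN is the collocation matrix built with the trapezoidal rule. All parameters are illustrative.

import numpy as np

# Hypodiscretization of T x = y for an assumed integral operator of the
# second kind with kernel K(s, t) = 0.5 * s * t on [0, 1].
N = 201
s = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1))            # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / (N - 1)

K = 0.5 * np.outer(s, s)                 # kernel values K(s_i, t_j)
T_N = np.eye(N) + K * w                  # discretized operator on R^N

x_exact = s                              # choose the exact solution x(t) = t
y = x_exact + 0.5 * s / 3.0              # then y(s) = s + 0.5*s*int_0^1 t^2 dt

x_N = np.linalg.solve(T_N, y)            # solve the finite-dimensional equation
print(float(np.max(np.abs(x_N - x_exact))))   # small discretization error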
35. Hyperdiscretization. Nonstandard models yield the method of hyperapproximation:

          T
    E ---------> F
    |            |
    | ϕE         | ϕF
    v            v
    E# --------> F#
          T#
Here E and F are normed spaces over the same scalars, while T is a bounded linear
operator from E to F , and # symbolizes a nonstandard hull.
36. The Hull of a Space. Let ∗ be the symbol of the Robinsonian standardization. Let
(E, ‖·‖) be an internal normed space over ∗F, with F := R, C. As usual, x ∈ E is a limited
element provided that ‖x‖ is a limited real (whose modulus has a standard upper bound by
definition). If ‖x‖ is an infinitesimal then x is also referred to as an infinitesimal. Denote
by ltd(E) and μ(E) the external sets of limited elements and infinitesimals of E. The set
μ(E) is the monad of the origin in E. Clearly, ltd(E) is an external vector space over F, and
μ(E) is a subspace of ltd(E). Put E# = ltd(E)/μ(E) and endow E# with the natural norm
‖ϕx‖ := ‖x#‖ := st(‖x‖) ∈ F for all x ∈ ltd(E). Here ϕ := ϕE := (·)# : ltd(E) → E# is the
canonical homomorphism, and st takes the standard part of a limited real. This (E#, ‖·‖) is
an external normed space called the nonstandard hull of E.
37. The Hull of an Operator. Suppose now that E and F are internal normed spaces
and T : E → F is an internal bounded linear operator. The set of reals c(T) := {C ∈ ∗R :
(∀ x ∈ E) ‖Tx‖ ≤ C‖x‖} is internal and bounded below. Recall that ‖T‖ := inf c(T). If the norm ‖T‖
of T is limited then the classical normative inequality ‖Tx‖ ≤ ‖T‖ ‖x‖, valid for all x ∈ E,
implies that T(ltd(E)) ⊂ ltd(F) and T(μ(E)) ⊂ μ(F). Hence, we may soundly define the
descent of T to the factor space E# as the external operator T# : E# → F#, acting by the
rule

T# ϕE x := ϕF Tx   (x ∈ E).

The operator T# is linear (with respect to the members of F) and bounded; moreover, ‖T#‖ =
st(‖T‖). The operator T# is called the nonstandard hull of T.
38. One Puzzling Definition. Approximation of arbitrary function spaces and operators
by their analogs in finite dimensions, which is discretization, matches the marvelous universal
understanding of computational mathematics as the science of finite approximations to general
(not necessarily metrizable) compacta. This revolutionary and challenging definition was given
in the joint talk submitted by S. L. Sobolev, L. A. Lyusternik, and L. V. Kantorovich at the
Third All-Union Mathematical Congress in 1956 [5].
Infinitesimal methods suggest a background, providing new schemes for hyperapproximation of general compact spaces. As an approximation to a compact space we may take an
arbitrary internal subset containing all standard elements of the space under approximation.
39. State of the Art. Adaptation of the ideas of model theory to optimization ranks
among the most important directions of developing the synthetic methods of pure and applied
mathematics. This approach yields new models of numbers, spaces, and types of equations.
The content of all available theorems and algorithms expands. The whole methodology of
mathematical research is enriched and renewed, opening up absolutely fantastic opportunities.
We can now use actual infinities and infinitesimals, transform matrices into numbers, spaces
into straight lines, and noncompact spaces into compact spaces, while vast territories of new
knowledge still remain uncharted.
40. Vistas of the Future. Quite a long time passed before classical functional
analysis occupied its present position as the language of continuous mathematics. Now the
time has come for the new powerful technologies of model theory in mathematical analysis.
Not all theoretical and applied mathematicians have yet grasped the importance of the modern
tools or learned how to use them. However, there is no backward traffic in science, and the
new methods are destined to reside in the realm of mathematics forever; in a short time
they will become as elementary and omnipresent in calculi and calculations as Banach
spaces and linear operators.
References
1. Kantorovich L. V. Functional Analysis and Applied Mathematics // Vestnik Leningrad Univ. Math.—
1948.—Vol. 6.—P. 3–18.
2. Kantorovich L. V. Functional Analysis and Applied Mathematics // Uspekhi Mat. Nauk.—1948.—Vol. 3,
№ 6.—P. 3–50.
3. Kantorovich L. V. On Semiordered Linear Spaces and Their Applications in the Theory of Linear
Operators // Dokl. Akad. Nauk SSSR.—1935.—Vol. 4, № 1/2.—P. 11–14.
4. Kantorovich L. V. The Principle of Majorants and the Newton Method // Dokl. Akad. Nauk SSSR.—
1951.—Vol. 76, № 1.—P. 17–20.
5. Sobolev S. L., Lyusternik L. A., Kantorovich L. V. Functional Analysis and Computational
Mathematics // Proceedings of the Third All-Union Mathematical Congress, Moscow, June–July.—
Moscow, 1956.—Vol. 2.—P. 43.
6. Kantorovich L. V. Functional Analysis (the Main Ideas) // Sibirsk. Mat. Zh.—1987.—Vol. 28, № 1.—
P. 7–16.
7. Kantorovich L. V. My Way in Science // Uspekhi Mat. Nauk.—1987.—Vol. 48, № 2.—P. 183–213.
8. Kantorovich L. V. Selected Works. Part I. Descriptive Theory of Sets and Functions. Functional Analysis
in Semi-ordered Spaces.—Amsterdam: Gordon and Breach Publishers, 1996.
9. Kantorovich L. V. Selected Works. Part II. Applied Functional Analysis. Approximation Methods and
Computers.—Amsterdam: Gordon and Breach Publishers, 1996.
10. Kusraev A. G. Dominated Operators.—Dordrecht: Kluwer Academic Publishers, 2001; Moscow: Nauka,
2003.
11. Kusraev A. G., Kutateladze S. S. Introduction to Boolean Valued Analysis.—Moscow: Nauka, 2005.
12. Kutateladze S. S. Convex Operators // Uspekhi Mat. Nauk.—1979.—Vol. 34, № 1.—P. 167–196.
13. Kutateladze S. S. Convex ε-Programming // Dokl. Akad. Nauk SSSR.—1979.—Vol. 245, № 5.—P. 1048–
1050.
14. Kutateladze S. S. A Version of Nonstandard Convex Programming // Sibirsk. Mat. Zh.—1986.—Vol. 27,
№ 4.—P. 84–92.
15. Kusraev A. G., Kutateladze S. S. Subdifferential Calculus: Theory and Applications.—Moscow: Nauka,
2007.
16. Gordon E. I., Kusraev A. G., Kutateladze S. S. Infinitesimal Analysis: Selected Topics.—Moscow: Nauka,
2008.
17. Zalinescu C. Convex Analysis in General Vector Spaces.—London: World Scientific Publishers, 2002.
18. Singer I. Abstract Convex Analysis.—New York: John Wiley & Sons, 1997.
19. Gutiérrez C., Jiménez B., Novo V. On Approximate Solutions in Vector Optimization Problems via
Scalarization // Comput. Optim. Appl.—2006.—Vol. 35, № 3.—P. 305–324.
20. Gutiérrez C., Jiménez B., Novo V. Optimality Conditions for Metrically Consistent Approximate
Solutions in Vector Optimization // J. Optim. Theory Appl.—2007.—Vol. 133, № 1.—P. 49–64.
Received August 8, 2008.
Kutateladze Semen Samsonovich
Sobolev Institute of Mathematics, Principal Researcher
RUSSIA, 630090, Novosibirsk, 4 Acad. Koptyug avenue
E-mail: sskut@math.nsc.ru