
    Hassan Riahi

    Let $f: \mathcal{H} \rightarrow \mathbb{R}$ be a convex differentiable function whose solution set $\operatorname{argmin} f$ is nonempty. To attain a solution of the problem $\min_{\mathcal{H}} f$, we consider the second-order dynamic system $\ddot{x}(t) + \alpha\,\dot{x}(t) + \beta(t)\,\nabla f(x(t)) + c\,x(t) = 0$, where $\beta$ is a positive function such that $\lim_{t\rightarrow +\infty}\beta(t) = +\infty$. By imposing adequate hypotheses on the first and second order derivatives of $\beta$, we simultaneously prove that the value of the objective function along a generated trajectory converges at the rate $\mathcal{O}\big(\frac{1}{\beta(t)}\big)$ to the global minimum of the objective function, and that the trajectory strongly converges to ...
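    As an illustration (not taken from the paper), a minimal numerical sketch of this dynamic on a simple quadratic objective, integrated with a semi-implicit Euler scheme; the choices of f, β, α, c and the step size below are assumptions made for the example:
```python
import numpy as np

# Minimal sketch (not the authors' code): integrate
#   x''(t) + alpha x'(t) + beta(t) grad f(x(t)) + c x(t) = 0
# with a semi-implicit Euler scheme, on an illustrative quadratic f.

def grad_f(x):                      # f(x) = 0.5 * ||x||^2  (illustrative assumption)
    return x

def beta(t):                        # time scaling with beta(t) -> +infinity (assumption)
    return 1.0 + t**2

alpha, c, h = 3.0, 0.1, 1e-3        # damping, coefficient of the x(t) term, step size
x = np.array([2.0, -1.0])           # initial position
v = np.zeros_like(x)                # initial velocity

for k in range(20000):
    t = k * h
    a = -(alpha * v + beta(t) * grad_f(x) + c * x)   # acceleration from the ODE
    v = v + h * a                                     # update velocity first
    x = x + h * v                                     # then position (semi-implicit Euler)

print(x, 0.5 * np.dot(x, x))        # x should approach the minimizer 0
```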
    We consider a generalized vector equilibrium problem, which is the following set-valued vector version of Ky Fan's minimax inequality: find $\overline{x}\in C$ such that $\varphi(\overline{x},y)\not\subset K(\overline{x})$ for all $y\in C$, (GVEP) where $X$ and $E$ are topological vector spaces, $C$ is a nonempty closed convex subset of $X$, $\varphi: C\times C \rightarrow 2^{E}$ is a set-valued map, and $K$ is a set-valued map from $C$ to $E$.
    In a Hilbert space setting, in order to develop fast first-order methods for convex optimization, we study the asymptotic convergence properties (t→ +∞) of the trajectories of the inertial dynamics ẍ(t) + γ(t)ẋ(t) + β(t)∇Φ(x(t)) = 0. The function Φ to minimize is supposed to be convex, continuously differentiable, γ(t) is a positive damping coefficient, and β(t) is a time scale coefficient. Convergence rates for the values Φ(x(t)) − min Φ and the velocities are obtained under conditions involving only β(t) and γ(t). In this general framework (Φ is only assumed to be convex with a non-empty solution set), the fast convergence property is closely related to the asymptotic vanishing property γ(t) → 0, and to the temporal scaling β(t) → +∞. We show the optimality of the convergence rates thus obtained, and study their stability under external perturbation of the system. The discrete time versions of the results provide convergence rates for a large class of inertial proximal algorithms....
    In a Hilbert setting, for convex differentiable optimization, we consider accelerated gradient dynamics combining Tikhonov regularization with Hessian-driven damping. The Tikhonov regularization parameter is assumed to tend to zero as time tends to infinity, which preserves equilibria. The presence of the Tikhonov regularization term induces a strong convexity property which vanishes asymptotically. To take advantage of the exponential convergence rates attached to the heavy ball method in the strongly convex case, we consider the inertial dynamic where the viscous damping coefficient is taken proportional to the square root of the Tikhonov regularization parameter, and therefore also converges towards zero. Moreover, the dynamic involves a geometric damping which is driven by the Hessian of the function to be minimized, which induces a significant attenuation of the oscillations. Under an appropriate tuning of the parameters, based on Lyapunov's analysis, we show that the trajectories have at the same time several remarkable properties: they provide fast convergence of values, fast convergence of gradients towards zero, and strong convergence to the minimum norm minimizer. This study extends a previous paper by the authors where similar issues were examined but without the presence of Hessian-driven damping.
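    A minimal numerical sketch of a dynamic of this type, under an assumed generic form (the exact system and parameter tuning of the paper may differ); the point illustrated is that the Hessian-driven term ∇²f(x(t))ẋ(t) equals (d/dt)∇f(x(t)), so it can be discretized by differencing gradients instead of forming the Hessian:
```python
import numpy as np

# Assumed generic form (illustrative, not necessarily the exact system of the paper):
#   x'' + delta*sqrt(eps(t)) x' + b * d/dt[grad f(x(t))] + grad f(x(t)) + eps(t) x = 0

def grad_f(x):                       # illustrative ill-conditioned quadratic (assumption)
    return np.array([1.0, 100.0]) * x

def eps(t):                          # Tikhonov parameter, vanishing as t -> infinity (assumption)
    return 1.0 / (1.0 + t)

delta, b, h = 2.0, 0.5, 1e-3
x = np.array([1.0, 1.0]); v = np.zeros(2)
g_prev = grad_f(x)

for k in range(100000):
    t = k * h
    g = grad_f(x)
    hess_term = (g - g_prev) / h     # finite difference approximating Hess f(x) * x'
    a = -(delta * np.sqrt(eps(t)) * v + b * hess_term + g + eps(t) * x)
    v += h * a
    x += h * v
    g_prev = g

print(x)   # expected to approach the (minimum norm) minimizer, here 0
```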
    We focus our attention on generalized vector equilibrium problems. In particular, we formulate a general and unified existence theorem, present an analysis for the assumptions used in this result, and give some applications to vector variational inequalities, vector complementarity problems and vector optimization.
    ABSTRACT In this paper, we give an existence result for the following dynamical equilibrium problem: ⟨du/dt(t), v − u(t)⟩ + F(u(t), v) ≥ 0 for all v ∈ K and for a.e. t ≥ 0, where K is a closed convex set in a Hilbert space and F : K × K → ℝ is a monotone bifunction. We introduce a class of demipositive bifunctions and use it to study the asymptotic behaviour of the solution u(t) as t → ∞. We obtain weak convergence of u(t) to some solution x ∈ K of the equilibrium problem F(x, y) ≥ 0 for every y ∈ K. Our applications deal with the asymptotic behaviour of dynamical convex minimization and of dynamical systems associated to convex-concave saddle bifunctions. We then present a new neural model for solving a convex programming problem.
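    For orientation, a standard special case (an illustrative assumption, not taken from the paper): with the bifunction F(u, v) = f(v) − f(u) attached to a convex differentiable function f and K equal to the whole space, the dynamical equilibrium problem reduces to the classical steepest descent flow:
```latex
% Assumption for this remark: F(u, v) = f(v) - f(u), f convex differentiable, K = H.
\[
\langle \dot{u}(t), v - u(t)\rangle + f(v) - f(u(t)) \ \ge\ 0 \quad \forall v
\;\Longleftrightarrow\;
-\dot{u}(t) \in \partial f(u(t))
\;\Longleftrightarrow\;
\dot{u}(t) = -\nabla f(u(t)),
\]
% so the dynamical equilibrium problem contains the steepest descent flow as a special case.
```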
    ... HEDY ATTOUCH AND HASSAN RIAHI Given X a Banach space and f: X → ℝ ∪ {+∞} a proper lower semicontinuous function which is bounded from below, Ekeland's ε-variational principle asserts the existence of a point x in X, which we call ε-extremal with respect to f ...
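    For context, a standard statement of Ekeland's ε-variational principle (recalled from the classical literature, not quoted from the paper; λ > 0 is a free scaling parameter):
```latex
% Ekeland's variational principle (standard form):
% Let (X, d) be a complete metric space, f: X -> R \cup {+infty} proper, lsc, bounded below,
% and let x_0 satisfy f(x_0) <= inf_X f + epsilon.  Then for every lambda > 0 there exists
% x_eps in X with
\[
f(x_\varepsilon) \le f(x_0), \qquad
d(x_\varepsilon, x_0) \le \lambda, \qquad
f(y) > f(x_\varepsilon) - \tfrac{\varepsilon}{\lambda}\, d(y, x_\varepsilon)
\quad \text{for all } y \ne x_\varepsilon .
\]
```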
    In the present paper, slightly modifying the topological KKM Theorem of Park and Kim (1996), we obtain a new existence theorem for generalized vector equilibrium problems related to an admissible multifunction. We work here under the general framework of G-convex spaces, which do not have any linear structure. Also, we give applications to greatest element, fixed point and vector saddle point problems. The results presented in this paper extend and unify many results in the literature by relaxing the compactness, the closedness and the convexity conditions.
    In a Hilbert space setting H, given Φ: H → ℝ a convex continuously differentiable function, and α a positive parameter, we consider the inertial system with Asymptotic Vanishing Damping (AVD)_α: ẍ(t) + (α/t)ẋ(t) + ∇Φ(x(t)) = 0. Depending on the value of α with respect to 3, we give a complete picture of the convergence properties as t → +∞ of the trajectories generated by (AVD)_α, as well as iterations of the corresponding algorithms. Our main result concerns the subcritical case α ≤ 3, where we show that Φ(x(t)) − min Φ = O(t^{−2α/3}). Then we examine the convergence of trajectories to optimal solutions. As a new result, in the one-dimensional framework, for the critical value α = 3, we prove the convergence of the trajectories without any restrictive hypothesis on the convex function Φ. In the second part of this paper, we study the convergence properties of the associated forward-backward inertial algorithms. They aim to solve structured convex minimization problems of the form Θ := Φ + ...
    In a Hilbert space, we provide a fast dynamic approach to the hierarchical minimization problem which consists in finding the minimum norm solution of a convex minimization problem. For this, we study the convergence properties of the trajectories generated by a damped inertial dynamic with Tikhonov regularization. When the time goes to infinity, the Tikhonov regularization parameter is supposed to tend towards zero, not too fast, which is a key property to make the trajectories strongly converge towards the minimizer of f of minimum norm. According to the structure of the heavy ball method for strongly convex functions, the viscous damping coefficient is proportional to the square root of the Tikhonov regularization parameter. Therefore, it also converges to zero, which will ensure rapid convergence of values. Precisely, under a proper tuning of these parameters, based on Lyapunov's analysis, we show that the trajectories strongly converge towards the minimizer of minimum norm,...
    In a Hilbert space setting H, for convex optimization, we analyze the fast convergence properties as t tends to infinity of the trajectories generated by a third-order in time evolution system. The function f to minimize is supposed to be convex, continuously differentiable, with a nonempty set of minimizers. It enters into the dynamic through its gradient. Based on this new dynamical system, we improve the results obtained by [Attouch, Chbani, Riahi: Fast convex optimization via a third-order in time evolution equation, Optimization 2020]. As a main result, when the damping parameter α satisfies α > 3, we show the convergence of the values at the rate 1/t³ as t goes to infinity, as well as the convergence of the trajectories. We complement these results by introducing into the dynamic a Hessian-driven damping term, which reduces the oscillations. In the case of a strongly convex function f, we exhibit an autonomous evolution system of the third order in time with an exponent...
    We study directional strict efficiency in vector optimization and equilibrium problems with set-valued map objectives. We devise several possibilities to define a meaningful concept of strict efficiency in a directional sense for these kinds of problems and then we present necessary optimality conditions from several perspectives by means of generalized differentiation calculus. A concept of generalized convexity for multimappings is employed as well and its role in getting equivalence between some classes of solutions is emphasized.
    The generalized topological degree theory is based on the Brouwer and Leray-Schauder degrees. It can be defined for general classes of mappings. The purpose of this article is two-fold. One goal is to define the topological degree for maximal monotone operators. Particular attention is paid to the continuation methods for this kind of operators and real functions of convex type. This allows us to extend some recent results (see [5], [6]) by withdrawing the compactness assumptions.
    This problem can be considered as a nonconvex generalization of the classical variational inequalities of J. L. Lions and G. Stampacchia. For typical examples in connection with mechanics and engineering we refer to the books of Panagiotopoulos [20, 22] and [18]. The techniques used for the resolution of hemivariational inequalities are essentially based on fixed point theorems, Galerkin methods and the convolution product regularization, see [15]-[17], [21], [22] and the bibliography therein. In the last few years, much attention has been focused on the existence theory for such inequalities by means of the generalized Ky Fan minimax theorem [5, 4]. It is the aim of the present paper to investigate the variational-hemivariational inequality (VHI): find u ∈ D and λ ∈ ℝ such that for all v ∈ D, λ⟨H(u), v − u⟩ ≤ α(u, v − u) + ⟨C(u), v − u⟩.
    The purpose of this paper is to generalize the Brézis-Haraux theorem on the range of the sum of monotone operators from a Hilbert space to general Banach spaces. The result obtained provides that the range $\mathcal{R}(\overline{A+B}^{\,\tau})$ is topologically almost equal to the sum $\mathcal{R}(A)+\mathcal{R}(B)$, where $\tau$ is a compatible topology in $X^{**}\times X^{*}$ as proposed by Gossez. To illustrate the main result we consider some basic properties of densely maximal monotone operators.
    Abstract We study the existence and asymptotic behavior of a bounded solution of the following Tikhonov regularized second-order difference equation (…): where A is a multivalued maximal monotone operator defined on a Hilbert space and … are positive real parameters. We first prove, under condition …, the existence of a unique bounded solution of (…). For the asymptotic behavior, we use a suitable assumption on c_n and … to prove strong convergence of u_n to the element of minimal norm of …. Some applications are thereafter discussed with respect to minimization and saddle-point problems. In particular, we study the rate of convergence of optimal values in convex minimization and convex-concave problems. We end the paper with concluding remarks and some research perspectives.
    In a Hilbert space setting $\mathcal{H}$, in order to minimize by fast methods a general convex, lower semicontinuous and proper function $\Phi: \mathcal{H} \rightarrow \mathbb{R} \cup \{+\infty\}$, we analyze the convergence rate of the inertial proximal algorithms. These algorithms involve both extrapolation coefficients (including the Nesterov acceleration method) and proximal coefficients in a general form. They can be interpreted as the discrete time version of inertial continuous gradient systems with general damping and time scale coefficients. Based on the proper setting of these parameters, we show the fast convergence of values and the convergence of iterates. In doing so, we provide an overview of this class of algorithms. Our study complements the previous Attouch–Cabot paper (SIOPT, 2018) by introducing into the algorithm time scaling aspects, and sheds new light on Güler's seminal papers on the convergence rate of the accelerated proximal methods for convex optimization.
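    A minimal sketch of a generic inertial proximal iteration of the kind discussed here (the specific objective, extrapolation rule and proximal coefficients below are illustrative assumptions, not the paper's scheme):
```python
import numpy as np

# Generic inertial proximal iteration (sketch):
#   y_k     = x_k + alpha_k * (x_k - x_{k-1})          # inertial extrapolation
#   x_{k+1} = prox_{lambda_k * Phi}(y_k)               # proximal step, lambda_k = time scale
# Illustrative objective: Phi(x) = 0.5 * ||x - b||^2, whose prox has the closed form
#   prox_{lam Phi}(y) = (y + lam * b) / (1 + lam).

b = np.array([1.0, -2.0, 3.0])          # minimizer of Phi (assumption)

def prox_phi(y, lam):
    return (y + lam * b) / (1.0 + lam)

x_prev = np.zeros(3)
x = np.zeros(3)
for k in range(1, 200):
    alpha_k = (k - 1) / (k + 2)         # Nesterov-type extrapolation coefficient
    lam_k = 0.1 * k                     # growing proximal coefficient (time scaling)
    y = x + alpha_k * (x - x_prev)
    x_prev, x = x, prox_phi(y, lam_k)

print(x)                                 # approaches b = argmin Phi
```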
    In a Hilbert space, we analyze the convergence properties of a general class of inertial forward–backward algorithms in the presence of perturbations, approximations, errors. These splitting algorithms aim to solve, by rapid methods, structured convex minimization problems. The function to be minimized is the sum of a continuously differentiable convex function whose gradient is Lipschitz continuous and a proper lower semicontinuous convex function. The algorithms involve a general sequence of positive extrapolation coefficients that reflect the inertial effect and a sequence in the Hilbert space that takes into account the presence of perturbations. We obtain convergence rates for values and convergence of the iterates under conditions involving the extrapolation and perturbation sequences jointly. This extends the recent work of Attouch–Cabot which was devoted to the unperturbed case. Next, we consider the introduction into the algorithms of a Tikhonov regularization term with vanishing coefficient. In this case, when the regularization coefficient does not tend too rapidly to zero, we obtain strong ergodic convergence of the iterates to the minimum norm solution. Taking a general sequence of extrapolation coefficients makes it possible to cover a wide range of accelerated methods. In this way, we show in a unifying way the robustness of these algorithms.
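    A minimal sketch of an inertial forward–backward step with a summable perturbation sequence and a vanishing Tikhonov term, under illustrative assumptions (the data f, g, the coefficient sequences and the perturbation model are not taken from the paper):
```python
import numpy as np

# Sketch of an inertial forward-backward step with Tikhonov term and errors:
#   y_k     = x_k + alpha_k (x_k - x_{k-1})
#   x_{k+1} = prox_{s g}( y_k - s (grad f(y_k) + eps_k y_k) ) + e_k
# f smooth with Lipschitz gradient, g nonsmooth with easy prox, eps_k -> 0 slowly,
# and e_k a summable perturbation.

A = np.array([[3.0, 1.0], [1.0, 2.0]])      # illustrative data: f(x) = 0.5 x^T A x
def grad_f(x): return A @ x

def prox_g(y, s):                            # g = ||.||_1, prox = soft-thresholding
    return np.sign(y) * np.maximum(np.abs(y) - s, 0.0)

s = 0.2                                      # step size below 1/L, L = largest eigenvalue of A
x_prev = x = np.array([5.0, -4.0])
rng = np.random.default_rng(0)

for k in range(1, 2000):
    alpha_k = (k - 1) / (k + 3)              # inertial coefficient
    eps_k = 1.0 / np.sqrt(k)                 # Tikhonov parameter, vanishing "not too fast"
    e_k = rng.normal(scale=1e-3 / k**2, size=2)   # summable perturbation
    y = x + alpha_k * (x - x_prev)
    x_prev, x = x, prox_g(y - s * (grad_f(y) + eps_k * y), s) + e_k

print(x)    # expected to approach the minimum norm solution of min f + g (here 0)
```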
    In a Hilbert space setting 𝓗, given Φ : 𝓗 → ℝ a convex continuously differentiable function, and α a positive parameter, we consider the inertial dynamic system with Asymptotic Vanishing Damping (AVD)_α: ẍ(t) + (α/t)ẋ(t) + ∇Φ(x(t)) = 0. Depending on the value of α with respect to 3, we give a complete picture of the convergence properties as t → +∞ of the trajectories generated by (AVD)_α, as well as iterations of the corresponding algorithms. Indeed, as shown by Su-Boyd-Candès, the case α = 3 corresponds to a continuous version of the accelerated gradient method of Nesterov, with the rate of convergence Φ(x(t)) − min Φ = 𝒪(t^{−2}) for α ≥ 3. Our main result concerns the subcritical case α ≤ 3, where we show that Φ(x(t)) − min Φ = 𝒪(t^{−2α/3}). This overall picture shows a continuous variation of the rate of convergence of the values Φ(x(t)) − min_𝓗 Φ = 𝒪(t^{−p(α)}) with respect to α > 0: the coefficient p(α) increases linearly up to 2 when α goes from 0 to 3, then displays a plate...
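    A minimal sketch of the inertial gradient algorithm commonly associated with (AVD)_α (the extrapolation rule (k−1)/(k+α−1), the quadratic test function and the step size are illustrative assumptions, not taken from the paper):
```python
import numpy as np

# Inertial gradient algorithm associated with (AVD)_alpha (sketch):
#   y_k     = x_k + (k-1)/(k+alpha-1) * (x_k - x_{k-1})
#   x_{k+1} = y_k - s * grad Phi(y_k)
# For alpha = 3 the coefficient (k-1)/(k+2) recovers a Nesterov-type accelerated method.

A = np.diag([1.0, 10.0, 100.0])          # illustrative convex quadratic Phi(x) = 0.5 x^T A x
def grad_phi(x): return A @ x

alpha, s = 3.0, 1.0 / 100.0              # s <= 1/L with L = 100
x_prev = x = np.array([1.0, 1.0, 1.0])

for k in range(1, 500):
    y = x + (k - 1) / (k + alpha - 1) * (x - x_prev)
    x_prev, x = x, y - s * grad_phi(y)

print(0.5 * x @ A @ x)                    # Phi(x_k) - min Phi, expected to decay like O(1/k^2)
```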
    We consider the regularized Tikhonov-like dynamical equilibrium problem: find $u: [0, +∞[ \to \mathcal{H}$ such that for a.e. $t \ge 0$ and every $y ∈ K$, $\langle \dot{u}(t), y−u(t)\rangle + F(u(t), y) + \varepsilon(t)\langle u(t), y−u(t)\rangle \ge 0$, where $F: K × K \to \mathbb{R}$ is a monotone bifunction, $K$ is a closed convex set in a Hilbert space $\mathcal{H}$ and the control function $\varepsilon(t)$ is assumed to tend to 0 as $t \to +∞$. We first establish that the corresponding Cauchy problem admits a unique absolutely continuous solution. Under the hypothesis that $\int_{0}^{+∞} \varepsilon(t)\, dt < +∞$, we obtain weak ergodic convergence of $u(t)$ to $x ∈ K$, a solution of the following equilibrium problem: $F(x, y) \ge 0$ for all $y ∈ K$. If in addition the bifunction is assumed demipositive, we show weak convergence of $u(t)$ to the same solution. By using a slow control $\int_{0}^{+∞} \varepsilon(t)\, dt = ∞$ and assuming that the bifunction $F$ is 3-monotone, we show that the term $\varepsilon(t) u(t)$ asymptotically acts as a Tikhonov regularization, which forces all the trajectories to converge strongly towards the element of minimal norm of the closed convex set of equilibrium points of $F$. Also, in the case where $\varepsilon$ has a slow control property and $\int_{0}^{+∞}\vert \dot{\varepsilon}(t) \vert\, dt < +∞$, we show that the strong convergence property of $u(t)$ is satisfied. As applications, we propose a dynamical system to solve saddle-point problems and a neural dynamical model to handle a convex programming problem. In the last section, we propose two Tikhonov regularization methods for the proximal algorithm. We first use the prox-penalization algorithm (ProxPA) with iteration $x_{n+1} = J^{F_n}_{λ_n}(x_n)$, where $F_n(x, y) = F(x, y)+\varepsilon_n \langle x, y−x\rangle$ and $\varepsilon_n$ is the Lyapunov parameter; afterwards, we propose the descent-proximal (forward-backward) algorithm (DProxA): $x_{n+1} = J^F_{λ_n}((1 − λ_n\varepsilon_n)x_n)$. We provide mild conditions that guarantee strong convergence of these algorithms to the least norm element of the set of equilibrium points.
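    A small numerical sketch of (DProxA) in the special case F(x, y) = f(y) − f(x) with f convex, where the resolvent $J^F_{λ}$ reduces to the proximal operator of f; the concrete f, the step size and the control sequence ε_n below are illustrative assumptions:
```python
import numpy as np

# (DProxA) sketch in the special case F(x, y) = f(y) - f(x), where J^F_lambda = prox_{lambda f}:
#   x_{n+1} = prox_{lambda_n f}((1 - lambda_n * eps_n) x_n)
# Illustrative f(x) = |x_1 - 1| on R^2: its minimizers are the line {(1, t)},
# and the least norm equilibrium point is (1, 0).

def prox_f(y, lam):
    # prox of lam * |y_1 - 1| acts on the first coordinate only (shifted soft-threshold)
    z = y.copy()
    z[0] = 1.0 + np.sign(y[0] - 1.0) * max(abs(y[0] - 1.0) - lam, 0.0)
    return z

x = np.array([3.0, 5.0])
lam = 0.5
for n in range(1, 5000):
    eps_n = 1.0 / np.sqrt(n + 1.0)     # slow control: the eps_n are not summable
    x = prox_f((1.0 - lam * eps_n) * x, lam)

print(x)                               # expected to approach (1, 0), the least norm solution
```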
    ABSTRACT In this paper, we present two penalty-splitting inspired iteration schemes, a Forward–Forward algorithm (PFFSA) and a Forward–Backward one, for hierarchical equilibrium problems in Hilbert spaces. Based on the Opial-Passty lemma, we obtain weak ergodic convergence and weak convergence of the iterative sequences generated by these two algorithms, proved under quite mild conditions: the bifunctions of the two-level equilibrium problem are assumed to be pseudomonotone. For strong convergence, we first add a strong monotonicity condition on the objective bifunction. We then present a strong convergence result for the Forward–Backward algorithm by adding a topological assumption, i.e. the objective bifunction is of class …. Some examples are given to illustrate our results. The first example deals with pseudomonotone variational inequalities and a convex minimization problem in the upper level problem. In the second one, we propose a convex minimization in the lower-level problem, where strong convergence to a minimum point is ensured under an inf-compactness condition on the objective function. These convergence results are new and generalize some recent results in this field.
    The object of this work is to contribute to the development of epigraphical analysis within a group led by H. Attouch and R. J. B. Wets. We are particularly interested in the study of stability, in its qualitative and quantitative (topological and metric) aspects, for the solutions of some variational or optimization problems: 1) stability under epi-convergence of the sum (respectively the epigraphical sum, or inf-convolution); 2) stability of the argmin by the continuation method, with epi-continuous deformations for functions and graph-continuous deformations for maximal monotone operators; 3) metric stability of subdifferentials and of argmin in optimization; 4) topological and metric stability for a geometric version of Ekeland's variational principle.
    The continuation method, initiated by H. Poincaré, then systematically developed in the context of degree theory by Kronecker and Brouwer, Leray and Schauder, consists of embedding the problem in a parametrized family of problems and considering its solvability as the parameter varies. The homotopy invariance is a decisive property of the topological degree of mappings.
    The primary goal of this paper is to shed some light on the maximality of the pointwise sum of two maximal monotone operators. Our aim is to extend some recent results of Attouch, Moudafi and Riahi on the graph convergence of maximal monotone operators to the more general setting of reflexive Banach spaces. In addition, we present some conditions which imply the uniform Brézis-Crandall-Pazy condition. Afterwards, we present, as a consequence, some recent conditions which ensure the Mosco-epiconvergence of the sum of convex proper lower semicontinuous functions.
    Ekeland's variational principle is a tool that has proven to be of great importance in nonlinear analysis, where it has enjoyed a wide variety of applications, ranging from the geometry of Banach spaces (cf. Brezis & Browder [5], Bishop & Phelps [4]) to optimization theory (cf. Ekeland [7,8]), generalized differential calculus (cf. Aubin [2,3], Penot [10], ...), the calculus of variations (cf. Clarke [6], Ekeland [7]) and the theory of nonlinear semigroups (Brezis & Browder [5], Ekeland [7]). In this article, we address another application of Ekeland's variational principle: relying on this principle, we prove an existence result for Pareto optima in multiobjective optimization.
