  • PhD in geomechanics from Université Grenoble Alpes (UGA), France. MSc in geomechanics, civil engineering and risks fr...
This work presents a new particle swarm optimizer (PSO)-based metaheuristic algorithm designed to reduce the overall computational cost of optimization without compromising precision in functions with variable evaluation time. The algorithm exploits the evaluation-time gradient in addition to the convergence gradient, attempting to reach the same convergence precision along a more economical path. The particle's newly incorporated time information usually contradicts its memories of past best function evaluations, which degrades convergence. A modulation technique is therefore proposed that progressively reduces the weight of the new cognitive input, giving the algorithm an appropriate balance between time and convergence. Results show that the proposed algorithm not only provides computational savings but also, unexpectedly, improves convergence per se thanks to better exploration in the initial stages of optimization. Its application to the asymptotic homogenization of a cracked poroelastic medium confirms its superior performance compared to a series of alternative optimization algorithms. The proposed improvement extends the applicability of PSO and PSO-based algorithms to problems previously thought too computationally expensive for population-based approaches.
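The progressive weight-reduction idea can be sketched as a small PSO variant in which each particle also remembers the cheapest point it has evaluated, and the pull toward it decays over the iterations. This is only an illustrative sketch; the coefficient names and values (`w`, `c1`, `c2`, `c3_0`), the linear decay schedule, and the toy demo problem are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def cd_pso(f, t_cost, bounds, n_particles=20, n_iter=100, seed=0):
    """PSO with an extra, progressively weakened pull toward each
    particle's cheapest-to-evaluate point (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    tbest, tbest_t = x.copy(), np.array([t_cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    w, c1, c2, c3_0 = 0.7, 1.5, 1.5, 1.0      # assumed coefficients
    for it in range(n_iter):
        c3 = c3_0 * (1.0 - it / n_iter)        # progressive weight reduction
        r1, r2, r3 = rng.random((3, n_particles, dim))
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
             + c3 * r3 * (tbest - x))          # time-cognitive term
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        tx = np.array([t_cost(p) for p in x])
        better, cheaper = fx < pbest_f, tx < tbest_t
        pbest[better], pbest_f[better] = x[better], fx[better]
        tbest[cheaper], tbest_t[cheaper] = x[cheaper], tx[cheaper]
        g = pbest[np.argmin(pbest_f)]
    return g, float(pbest_f.min())

# toy demo: sphere objective with an |x|-proportional virtual time cost
best, best_f = cd_pso(lambda p: float(np.sum(p**2)),
                      lambda p: float(np.sum(np.abs(p))),
                      [(-5.0, 5.0), (-5.0, 5.0)])
```

Decaying `c3` lets the early iterations favour cheap evaluations while the later ones revert to a standard PSO update.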
X-ray μCT imaging is a common technique used to obtain full-field characterization of materials. Nevertheless, the process can be expensive and time-consuming, which limits image availability. A number of existing generative models can help mitigate this limitation, but they often lack a sound physical basis. This work presents a physics-supervised generative adversarial network (GAN) model and applies it to the generation of X-ray μCT images. FEM simulations provide physical information in the form of elastic coefficients. Negative X-ray μCT images of a Hostun sand were used as the target material. During training, image batches were evaluated with nonparametric statistics to provide posterior metrics. A variety of loss functions and FEM evaluation frequencies were tested in a parametric study. The results show that, in several test scenarios, FEM-GANs-generated images proved to be better than the reference images for most of the elasticity coefficient...
Besides long-standing work on the experimental characterisation of geomaterials in laboratory tests, computational geomechanics has been the subject of theoretical and numerical work for a long while. Recently, multi-scale analysis, using a numerical homogenisation of the microstructural behaviour of materials to derive the constitutive response at the macro scale, has become a new trend in numerical modelling. DEM takes part in this multi-scale analysis as the microscale configuration for each of the integration points of a macroscale FEM scheme. Compared with analytical constitutive law models, this approach naturally takes into account the inherent anisotropy and rheology of the material while at the same time allowing real-size macroscale problems that would be impossible with a pure DEM code. The counterpart of this approach is its computational cost; nevertheless, while in an analytical law model the critical part of the computation is t...
We develop a numerical model that simulates the evolution of a virtual population with an incentive- and ability-based wage, capital yield from savings, a social welfare system, and total income subject to taxation and political turnovers. Meta-heuristics, particle swarm optimization (PSO) in particular, are used to find optimal taxation given the constraints of a plurality democracy with yardstick voting. Results show that the policymaker tends towards a taxation system that is highly punitive for a minority in order to win the election by benefiting the others. Such decision-making leads to a cyclic taxation policy with high taxation targeting sequential portions of the population.
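A minimal toy version of such a population model fits in a few lines: an ability-based wage, a capital yield on accumulated savings, and a flat tax redistributed equally as welfare. All parameter values and functional forms here are illustrative assumptions, not the paper's model.

```python
import numpy as np

def simulate_population(n=1000, years=50, tax_rate=0.3, seed=0):
    """Toy economy: each agent earns an ability-based wage plus capital
    yield on savings; a flat tax on gross income is redistributed
    equally as welfare, and a fixed fraction of net income is saved.
    (Illustrative stand-in; parameters are invented.)"""
    rng = np.random.default_rng(seed)
    ability = rng.lognormal(0.0, 0.5, n)    # fixed individual abilities
    savings = np.zeros(n)
    for _ in range(years):
        wage = ability                      # incentive/ability-based wage
        capital = 0.03 * savings            # capital yield from savings
        gross = wage + capital
        tax = tax_rate * gross
        welfare = tax.sum() / n             # social welfare, shared equally
        net = gross - tax + welfare
        savings += 0.2 * net                # fixed saving propensity
    return savings
```

Sweeping `tax_rate` with an optimizer on top of a loop like this is the kind of setup the meta-heuristic search operates on.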
In a standard GANs topology, both the Generator (Gen) and Discriminator (Dis) multilayer perceptrons have been modified by reducing their depth, resulting in four layers of neurons in both networks. This reduction is done to avoid the "Helvetica scenario", or mode collapse, a known problem in GANs that is aggravated by larger depths. The training data consist of 12,000 samples obtained from micro-scale 6 in the paper in a 2D strain space. These data are partitioned into 24 segments of 500 samples each, and one raster image is obtained from each segment. The GIF image shows the progression of the training with an array of 4x4 generations at each epoch.
Results from the paper: Predicting the Non-Deterministic Response of Poroelastic Media with Damageable Cracks using Generative Adversarial Networks
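The reduced four-layer topology (input, two hidden layers, output) can be sketched with plain NumPy forward passes. The layer widths, the 16-dimensional latent size, and the activations below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def mlp(sizes, seed=0):
    """He-initialised weight/bias pairs for a fully connected net."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x, out_act):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)          # ReLU on hidden layers
    return out_act(x)

# four layers of neurons in each network: input, two hidden, output
gen = mlp([16, 64, 64, 2], seed=0)          # latent -> 2D strain sample
dis = mlp([2, 64, 64, 1], seed=1)           # sample -> real/fake score
z = np.random.default_rng(2).standard_normal((5, 16))
fake = forward(gen, z, np.tanh)
score = forward(dis, fake, lambda y: 1.0 / (1.0 + np.exp(-y)))
```

Keeping both networks this shallow is one common mitigation for mode collapse, as the abstract notes.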
In the calculation of the failure envelopes of a material using the von Mises distortion energy criterion, some microstructures appear to be unbreakable under certain conditions: for stiff crack networks and specific loading paths, the resulting damage pattern turns the microscale into a matrix-based, spring-like geometry with much softer homogenized properties than the matrix itself. This kind of configuration is able to undergo large strain inputs without further loading of the crack network. To avoid this, the crack stiffness range is clipped at the upper end.
Videos show several unbreakable configurations. The file names indicate: applied xx strain, yy strain, xy strain, crack stiffness denominator (1 if absent), (G), and exponent, iterative method tolerance (tol), maximum number of loading steps (n), maximum number of iterations (maxit), final loading step (coinciding with n if no failure occurs), and total simulation time (t). Loading steps are compiled as single frames...
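For reference, the distortion-energy (von Mises) criterion named above compares an equivalent stress against a yield value. A minimal plane-stress version is sketched below; the yield threshold in the demo is invented for illustration.

```python
import numpy as np

def von_mises_plane(sxx, syy, sxy):
    """Von Mises equivalent stress for a plane-stress state."""
    return np.sqrt(sxx**2 - sxx * syy + syy**2 + 3.0 * sxy**2)

def fails(sxx, syy, sxy, sigma_y):
    """True when the distortion-energy criterion predicts failure."""
    return von_mises_plane(sxx, syy, sxy) >= sigma_y
```

Sweeping stress states through a check like this is what traces out a failure envelope.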
In the development of the Cognitive Dissonance Particle Swarm Optimizer (CDPSO) algorithm, a series of functions is proposed both as optimization targets F and as evaluation-time functions T. Twelve well-known benchmark functions for optimization problems are considered: paraboloid, Griewank, Rastrigin, Rosenbrock, Bukin, Log-sumcan, Ackley, Drop-wave, Holder-table, Levy, Michalewicz and Styblinski-Tang. Some of the functions have been negated or shifted so that they all represent minimization problems with a zero objective limit. The combination of the 12 target functions with their homologous evaluation-time functions yields 144 optimization problem cases. Each problem is optimized using the F function as the target and T as the virtual evaluation time. Results present outputs for each of the 144 F-T pairs and the reference PSO case for each of the proposed Cognitive Dissonance functions. PDF reports, individual PNG images, and LaTeX sources are provided.
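Several of the listed benchmarks have compact closed forms with a known minimum of zero at the origin, which makes the zero objective limit natural. Three standard definitions are sketched below for reference (standard textbook forms, not taken from the repository itself).

```python
import numpy as np

def rastrigin(x):
    """Highly multimodal; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    """Nearly flat outer region with a deep central well; minimum 0."""
    x = np.asarray(x, float)
    return (-20 * np.exp(-0.2 * np.sqrt(np.mean(x**2)))
            - np.exp(np.mean(np.cos(2 * np.pi * x))) + 20 + np.e)

def griewank(x):
    """Many regularly spaced local minima; global minimum 0."""
    x = np.asarray(x, float)
    i = np.arange(1, x.size + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))
```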
The proposed numerical model simulates the evolution of a virtual population with an incentive- and ability-based wage, capital yield from savings, a social welfare system, and total income subject to taxation. The model provides an objective criterion to define the middle class based on the marginal percentile income. Meta-heuristics are used to find an optimal taxation given the constraints of a democracy with yardstick voting. This repository contains the results of 30 parametric optimizations for one mandate and 30 for two mandates, complementing the data in the paper.
The multi-scale FEMxDEM approach is an innovative numerical method for geotechnical problems, using at the same time the Finite Element Method (FEM) at the engineering macro-scale and the Discrete Element Method (DEM) at the scale of the microstructure of the material. The link between scales is made via computational homogenization. In this way, the continuum numerical constitutive law and the corresponding tangent matrix are obtained directly from the discrete response of the microstructure [1,2,3]. In the proposed paper, a variety of operators, rather than the consistent tangent for the Newton-Raphson method, is tested in a challenging attempt to improve the poor convergence performance. The independence of the DEM computations is exploited to develop a parallelized code using an OpenMP paradigm. At the macro level, a second gradient constitutive relation is implemented in order to enrich the first gradient Cauchy relation, bringing mesh-independency to the model. The second gr...
Recently, multi-scale analysis, using a numerical homogenisation of the microstructural behaviour of materials to derive the constitutive response at the macro scale, has become a new trend in numerical modelling in geomechanics. Considering rocks as granular media with cohesion between grains, a two-scale fully coupled approach can be defined using FEM at the macroscale together with DEM at the microscale [1,2,3]. In this approach, the micro-scale DEM boundary value problem attached to every Gauss point in the FEM mesh can be seen as a constitutive model, the response of which is used by the FEM in the usual way. A first major advantage of the two-scale FEM-DEM approach is that it allows one to perform real-grain-size microstructure modelling on real-structure-size macroscopic problems, without facing the intractable problem of dealing with trillions of grains in a fully DEM-mapped full-field problem. A second one is that, using this approach, microscale-related features ...
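Structurally, the Gauss-point-to-microscale delegation can be sketched as a constitutive callback: the macro code hands each strain increment to a callable and receives a stress and a tangent operator back. The linear-elastic stand-in below replaces the actual DEM boundary value problem purely for illustration; all parameter values are invented.

```python
import numpy as np

def macro_step(strain_increments, micro_response):
    """Each FEM Gauss point delegates its constitutive update to a
    micro-scale callable (a DEM BVP in the paper; any callable here)."""
    stresses, tangents = [], []
    for de in strain_increments:
        sigma, C = micro_response(de)
        stresses.append(sigma)
        tangents.append(C)
    return np.array(stresses), np.array(tangents)

def elastic_micro(de, E=10e3, nu=0.25):
    """Plane-strain isotropic elastic stand-in for the DEM microscale,
    Voigt notation [exx, eyy, gxy] (illustrative only)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    C = np.array([[lam + 2 * mu, lam, 0.0],
                  [lam, lam + 2 * mu, 0.0],
                  [0.0, 0.0, mu]])
    return C @ de, C

# two Gauss points receiving proportional strain increments
de = np.array([1e-3, 0.0, 0.0])
stresses, tangents = macro_step([de, 2 * de], elastic_micro)
```

Because each call is independent, the loop over Gauss points is what the papers parallelize.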
The disposal of radioactive waste in deep underground repositories has been studied for a long while (OECD/NEA, 1995). Argillaceous rocks have been found to be good candidates to host the repositories because of their low permeability and their ability to absorb radionuclides. In France, the Callovo-Oxfordian argillite (COX) has been chosen by the national agency ANDRA as the host rock for storing radioactive waste. In previous years, the problem of gallery excavation in the COX host rock has been a case of study at Liège University. The problem, which involves strain localization, is not well posed when modelled using classical theories; a microstructured model is therefore used: the local Second Gradient model (Collin et al. 2006), which avoids the pathological mesh dependency by introducing an internal length that regularizes the problem. At present, the model takes into account several transverse anisotropies, e.g. of cohesion (Pietruszczak et al. 2002); a visco-plastic model is retained to capture the long-term convergence; finally, a permeability evolution model correlates strain localization with permeability increase (Pardoen et al. 2016). The purpose of this work is to model the long-term behaviour of the MAVL galleries (100 years), with a special focus on the localisation extent, the effect of ventilation, displacements, and the concrete stress state. Results (Figure 1) show an important effect of the supporting structure on the problem: the compressible wedges determine the localization mode by triggering the shear bands. Results also show a high sensitivity to the viscosity parameters.
The multi-scale FEM-DEM approach is an innovative numerical method for geotechnical problems, using at the same time the Finite Element Method (FEM) at the engineering macro-scale and the Discrete Element Method (DEM) at the scale of the microstructure of the material. The link between scales is made via computational homogenization. In this way, the continuum numerical constitutive law and the corresponding tangent matrix are obtained directly from the discrete response of the microstructure [1,2,3]. In the proposed paper, a variety of operators, rather than the consistent tangent for the Newton-Raphson method, is tested in a challenging attempt to improve the poor convergence performance observed. The independence of the DEM computations is exploited to develop a parallelized code. The non-uniqueness of the solution is an already well-known phenomenon in softening materials, but in this case the non-uniqueness results in a loss of objectivity due to the parallelization; this p...
The paper presents a multi-scale modeling approach for Boundary Value Problems (BVP) involving cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool that combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, strain localization is known to occur at the macroscopic level, and since FEM suffers from severe mesh dependency as soon as a shear band starts to develop, the second gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.
Double-scale numerical methods constitute an effective tool for simultaneously representing the complex nature of geomaterials and treating real-scale engineering problems, such as a tunnel excavation or a pressuremeter, at a reasonable numerical cost. This paper presents an approach coupling Discrete Elements (DEM) at the micro-scale with Finite Elements (FEM) at the macro-scale. In this approach, a DEM-based numerical constitutive law is embedded into a standard FEM formulation. In this regard, an exhaustive discussion is presented on how a 2D/3D granular assembly can be used to generate, step by step along the overall computation process, a consistent Numerically Homogenised Law. The paper also focuses on some recent developments, including a comprehensive discussion of the efficiency of Newton-like operators, the introduction of a regularisation technique at the macro-scale by means of a second gradient framework, and the development of parallelisation techniques to alleviate the computational cost of the proposed approach. Some real-scale problems taking into account the material spatial variability are illustrated, proving the numerical efficiency of the proposed approach and the benefit of a particle-based strategy.
The paper presents a multiscale model based on a FEMxDEM approach, a method that couples Discrete Elements at the micro-scale and Finite Elements at the macro-scale. FEMxDEM has proven to be an effective way to treat real-scale engineering problems by embedding constitutive laws numerically obtained using Discrete Elements into a standard Finite Element framework. The proposed paper focuses on some numerical open issues of the method. Given the problem's nonlinearity, Newton's method is required. The standard full Newton method is modified by adopting operators different from the consistent tangent matrix and by developing ad-hoc solution strategies. The efficiency of several existing operators is compared, and a new, original strategy is proposed, which is shown to be numerically more efficient than the existing propositions. Furthermore, a shared-memory parallelization framework using OpenMP directives is introduced. The combination of these enhancements overcomes the FEMxDEM computational limitations, making the approach competitive with classical FEM in terms of stability and computational cost.
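The operator question can be illustrated on a tiny nonlinear system: a Newton-like loop in which the linearized operator is reassembled only every few iterations, a crude stand-in for replacing the consistent tangent with a cheaper, reused operator. The example problem and the refresh policy are illustrative assumptions, not the strategies compared in the paper.

```python
import numpy as np

def newton_like(residual, jacobian, x0, refresh_every=1,
                tol=1e-10, maxit=100):
    """Newton-like iteration: the operator K is rebuilt only every
    `refresh_every` iterations instead of at every step."""
    x = np.array(x0, float)
    K = jacobian(x)
    for it in range(maxit):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, it
        if it > 0 and it % refresh_every == 0:
            K = jacobian(x)                 # periodic operator refresh
        x = x - np.linalg.solve(K, r)
    return x, maxit

# demo: decoupled cubic/quadratic system with known root (2, 2)
res = lambda x: np.array([x[0]**3 - 8.0, x[1]**2 - 4.0])
jac = lambda x: np.diag([3 * x[0]**2, 2 * x[1]])
root_full, it_full = newton_like(res, jac, [3.0, 3.0], refresh_every=1)
root_mod, it_mod = newton_like(res, jac, [3.0, 3.0], refresh_every=50)
```

Reusing the operator trades iteration count for a cheaper cost per iteration, which is the balance the paper explores when each tangent assembly requires DEM computations.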
Storage sludge has high water content and low shear strength, which limits the capacity expansion of overlying municipal landfilling. Few studies have addressed the field treatment of large amounts of storage sludge because of the depth variability of its geotechnical properties. This paper proposes a stratified treatment method for storage sludge, based on the in situ characterization of layered sedimentary patterns of the storage sludge acquired from the Qizishan landfill in China. Additionally, the stability of the landfilling above the sludge pond is analyzed using the Morgenstern–Price and limit equilibrium slice method, which considers the layered strength properties of solidified sludge. The treated sludge shows a significant decrease in average water content, from 1398% to 88%, and an increase in average cohesion to 23.52 kPa. The high content of clay particles, low amount of solidification products, and high water content together make the strength of deep solidified sludge highly sensitive to water content. For a 40-m-high waste body, stability analysis suggests a sliding surface across the raw sludge pond, while the critical surface remains outside the treated sludge pond and the safety factor increases from 0.934 to 1.464. The validated stratified treatment provides valuable references for the treatment of deep sludge.
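As a back-of-the-envelope companion to the stability analysis, the ordinary (Fellenius) method of slices, a simpler relative of the Morgenstern–Price scheme used in the paper, reduces to one sum per term. The slice geometry and the "raw" strength values below are invented for illustration; only the treated cohesion of 23.52 kPa is taken from the abstract.

```python
import numpy as np

def fellenius_fs(weights, alphas, base_lengths, c, phi):
    """Ordinary (Fellenius) method of slices: ratio of resisting to
    driving contributions along a slip surface. weights in kN/m, base
    inclinations alphas in rad, base lengths in m, cohesion c in kPa,
    friction angle phi in rad."""
    resisting = np.sum(c * base_lengths
                       + weights * np.cos(alphas) * np.tan(phi))
    driving = np.sum(weights * np.sin(alphas))
    return resisting / driving

# invented 5-slice geometry; treated cohesion from the abstract
w = np.array([100.0, 200.0, 250.0, 200.0, 100.0])   # slice weights, kN/m
a = np.radians([-10.0, 5.0, 20.0, 35.0, 50.0])      # base inclinations
l = np.full(5, 2.0)                                 # base lengths, m
fs_raw = fellenius_fs(w, a, l, c=2.0, phi=np.radians(5.0))
fs_treated = fellenius_fs(w, a, l, c=23.52, phi=np.radians(15.0))
```

Even this crude scheme reproduces the qualitative point: raising cohesion and friction pushes the safety factor from below 1 to above 1.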
Recent improvements in micro-scale material descriptions make it possible to build increasingly refined multiscale models in geomechanics. This often comes at the expense of a computational cost that can eventually become prohibitive. Among other characteristics, the non-determinism of a micro-scale response makes its replacement by a surrogate particularly challenging. Machine Learning (ML) is a promising technique for substituting physics-based models; nevertheless, existing ML algorithms for the prediction of material response do not integrate non-determinism in the learning process. Is it possible to use the numerical output of the latest micro-scale descriptions to train a ML algorithm that will then provide a response at a much lower computational cost? A series of ML algorithms with different levels of depth and supervision are trained using a data-driven approach. Gaussian Process Regression (GPR), Self-Organizing Maps (SOM) and Generative Adversarial Networks (GANs) are tested, and the latt...
It is very common for natural or synthetic materials to be characterized by a periodic or quasi-periodic micro-structure. Under different loading conditions, this micro-structure may play an important role in the apparent, macroscopic behaviour of the material. Although fine, detailed information can be implemented at the micro-structure level, obtaining experimental metrics at this scale remains a challenging task. In this work, a constitutive law obtained by the asymptotic homogenization of a cracked, damageable, poroelastic medium is first evaluated for multi-scale use. For a given range of micro-scale parameters, due to the complex mechanical behaviour at the micro-scale, such multi-scale approaches are needed to describe the (macro) material's behaviour. To overcome possible limitations regarding input data, meta-heuristics are used to calibrate the micro-scale parameters against a synthetic failure envelope. Results show the validity of the approach to model micr...
The multi-scale FEMxDEM approach is an innovative numerical method for geotechnical problems involving granular materials. The Finite Element Method (FEM) and the Discrete Element Method (DEM) are simultaneously applied to solve, respectively, the structural problem at the macro-scale and the material microstructure at the micro-scale. The advantage of such a double-scale configuration is that it allows an engineering problem to be studied without the need for standard constitutive laws, thus capturing the essence of the material properties. The link between scales is obtained via numerical homogenization, so that the continuum numerical constitutive law and the corresponding tangent matrix are obtained directly from the discrete response of the microstructure. Typically, the FEMxDEM approach presents some drawbacks: the convergence velocity and robustness of the method are not as efficient as in classical FEM models. Furthermore, the computational cost of the microscale integration...
Sludge treatment wetlands (STW) have been used as a dewatering technology in some European countries since the 1980s. Although the efficiency of this technology in terms of sludge dewatering and mineralisation is well known, design and operation parameters are yet to be standardised. The aim of this study is to develop a mathematical model capable of predicting water loss over time, in order to optimise the feeding frequency, enhancing sludge dewatering and extending the lifespan of the system. The proposed model is validated with experimental data from one pilot and two full-scale STW. The scenarios considered indicate that the optimum feeding frequency decreases with the sludge layer height. Accordingly, systems with a sludge layer of 20 cm, 40 cm and 80 cm (corresponding to 2, 4 and 8 years of operation) should be fed every 2.5, 10 and 30–40 days, respectively. On the other hand, evapotranspiration (ET) has no effect on the feeding frequency, although it does increase sludge dryness from 25% to 45% (for ET of 2.5 and 14.5 mm/d in the case of a 20 cm sludge layer). According to the model output, the sludge loading rate is determined as a function of evapotranspiration, feeding frequency and sludge height.
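The feeding-frequency trade-off can be mimicked with a toy daily water balance: a stored water column loses a fixed drainage fraction per day plus a constant evapotranspiration depth, and each feeding event adds a fixed depth. The rate constant and depths below are invented; only the two ET values echo those quoted above, and this is not the paper's model.

```python
def stw_water_balance(et, k=0.05, days=120, feed_every=10, feed_depth=50.0):
    """Toy daily water balance for one STW sludge layer: each day the
    stored depth w [mm] loses a drainage fraction k plus a constant
    evapotranspiration depth et [mm/d]; every `feed_every` days a
    feeding event adds feed_depth [mm]. (Illustrative stand-in.)"""
    w, history = 0.0, []
    for d in range(days):
        if d % feed_every == 0:
            w += feed_depth
        w = max(w - k * w - et, 0.0)   # stored water cannot go negative
        history.append(w)
    return history

dry = stw_water_balance(et=14.5)       # high evapotranspiration, mm/d
wet = stw_water_balance(et=2.5)        # low evapotranspiration, mm/d
```

Even this caricature shows the mechanism the model exploits: higher ET drains the layer between feedings, so more water is removed for the same feeding schedule.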