Advanced Strategies for Optimal Design and Operation of Pressure Swing Adsorption Processes
Submitted in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
in
Chemical Engineering
Anshul Agarwal
May, 2010
Acknowledgments
First and foremost, I would like to thank Professor Lorenz T. Biegler for his guidance, encouragement, and support, which made this research work meaningful and fruitful. His patience, tenacity, enthusiasm, and depth of knowledge have left a deep impression on me. His guidance has made me understand and appreciate the importance and potential of mathematical programming.
My thanks go to the National Energy Technology Laboratory (NETL) for providing financial support over the duration of this project. I would like to thank Dr. Stephen E. Zitney and
Dr. David C. Miller from NETL for their comments on my research and for ensuring that my
work proceeded towards the vision of the project. Besides the committee chair Prof. Biegler,
the other committee members, Prof. Ignacio E. Grossmann, Prof. B. Erik Ydstie, and Prof. Bruce, provided valuable comments and suggestions on this work.
I would also like to thank Yi-Dong Lang and Sree for constructive discussions on reduced-order modeling.
Finally, I would like to express my deepest gratitude to my parents for their incessant support,
encouragement, belief, and love. Without that I wouldn’t have come this far.
Abstract
With expanding areas of applications, increasing needs for efficient cycles, and growing de-
mands for efficient modeling, it has become essential to develop new systematic strategies for
optimal design and operation of PSA systems. Although industrial usage of PSA is widespread, the PSA literature lacks a systematic methodology for designing PSA cycles, owing to the inherent complexity of cyclic PSA processes. We present a generic PSA superstructure to synthesize optimal PSA configurations. The superstructure is rich enough to predict
a number of different PSA operating steps, and their optimal sequence by solving an optimal
control problem.
Because of low operating costs and high performance, PSA is considered a promising option for both post-combustion and pre-combustion CO2 capture. Since commercial PSA cycles treat CO2 as a waste stream, cycle development specifically targeted towards high-purity CO2 separation is essential. We utilize the superstructure approach for this purpose and succeed in synthesizing optimal cycles which can separate CO2 at a purity as high as 95%, or with a low power consumption of 465 kWh/tonne CO2 captured, for post-combustion capture. When applied to pre-combustion capture, the superstructure approach yields cycles with extremely high CO2 purity and recovery.
The large number of spatial nodes required to capture steep adsorption fronts leads to a large set of DAEs, and thus to a challenging PSA optimization problem. We generate reduced-order models (ROMs) which are not only orders of magnitude smaller, but also reasonably accurate. Consequently, replacing the DAEs with these ROMs yields a computationally inexpensive optimization problem. However, a trust-region envelope is essential for optimization, as a ROM is accurate only around the point at which it is constructed. We demonstrate ROM-based optimization for a H2-CH4 PSA case study. Promising preliminary results encourage us to formally devise a systematic adaptive trust-region strategy. We first develop an exact penalty trust-region algorithm and devise correction schemes to ensure convergence to the true local optimum. When demonstrated on the Skarstrom cycle for CO2 capture, the penalty algorithm converges in 92 iterations and 1.88 CPU hours. To circumvent the difficulty of determining a penalty parameter, we also devise a hybrid filter-based trust-region framework. When applied to the PSA case study, the filter algorithm converges within 51 iterations, consuming 1.36 CPU hours. Thus, initial results are quite promising and reveal the potential of using ROMs for optimization.
Contents

Acknowledgments
Abstract
Contents
List of Tables
1 Introduction
3 PSA Superstructure
3.1 Motivation
3.2 Methodology
3.2.1 Superstructure
4.1 Introduction
6.1 Motivation
7.4.3.1 Comparison of Rigorous Model and ROM at the Starting Guess
8 Conclusions
Bibliography
A Nomenclature
List of Tables

4.1 PSA cycles suggested in the literature for post-combustion CO2 separation
6.3 Isotherm parameters for H2 and CH4 on activated carbon [108]
6.7 Comparison of rigorous model and ROM based on the performance variables
6.8 Decision variables and the root-point at which ROM is built
7.6 Comparison of rigorous model and ROM based on the performance variables
B.1 Iteration sequence for Algorithm I with ZOC for Problem (7.31)
B.2 Iteration sequence for Algorithm I with FOC for Problem (7.31)
B.3 Iteration sequence for tangential subproblems of Algorithm II for Problem (7.31)
B.4 Iteration sequence for normal subproblems of Algorithm II for Problem (7.31)
List of Figures

2.1 A composite adsorbent pellet with different mass transfer resistances [156]
2.2 Adsorption isotherms and change in equilibrium solid loading with pressure and temperature
2.5 Boundary matchings required for the two-bed four-step PSA process
2.6 Approaches to match boundary values while simulating multi-bed PSA operations
6.1 Comparison of original profile and ROM profiles for Burgers equation for varying
6.2 First six POD basis functions of methane mole fraction for adsorption
6.3 Singular values of gas-phase mole fraction of methane and temperature
6.4 Comparison of methane mole fraction for all the steps after CSS
6.5 Comparison of temperature profiles for all the steps after CSS
6.6 Comparison of methane mole fraction profile for adsorption step obtained after
6.8 Gas-phase methane mole fraction profiles for ROM for relaxed bounds in Case I
6.10 Methane mole fraction profiles for ROM for relaxed bounds in Case II
7.1 First six POD basis functions of CO2 mole fraction for adsorption
7.2 Singular values of mole fraction of CO2 and superficial gas velocity
7.3 Comparison of CO2 mole fraction for all the steps at the starting guess
7.4 Comparison of CO2 mole fraction for penalty TR algorithm with FOC
7.6 Comparison of CO2 mole fraction for hybrid filter TR algorithm
Chapter 1
Introduction
Synopsis
Over the last few decades, pressure swing adsorption (PSA) processes have emerged successfully as cost-effective alternatives to traditional gas separation processes, and thus have gained widespread commercial acceptance. However, they still present stiff research challenges in terms of process development, accurate modeling of mass transfer and adsorption phenomena, and adsorbent design, especially for emerging new applications. In this chapter, we highlight such challenges in brief, describe our approach to address a few of these, and define the scope of our research. An outline of this thesis is also presented.
1.1 PSA Overview
Separation of gases accounts for a major fraction of the production cost in chemical, petro-
chemical, and related industries. There has been a growing demand for economical and energy
efficient gas separation processes. The new generation of more selective adsorbents developed
in recent years has enabled adsorption-based technologies to compete successfully with tra-
ditional gas separation techniques, such as cryogenic distillation and absorption. The last
few decades have seen a considerable increase in the applications of adsorptive gas separation
technologies, such as pressure swing adsorption (PSA). Pressure swing adsorption is a versa-
tile technology for separation and purification of gas mixtures. While initial applications of
PSA included gas drying and purification of dilute mixtures, current industrial applications
include solvent vapor recovery, air fractionation, production of hydrogen from steam-methane reformer (SMR) and petroleum refinery off-gases, separation of hydrocarbons, carbon dioxide removal, and gas dehydration. The advent of commercial PSA operations came with the early patents on this subject
granted to Skarstrom [176] and Guerin de Montgareuil and Domine [60]. Since then, PSA has
become the state-of-the-art separation technology for applications like air fractionation and
hydrogen production. Many of these processes are described in published books and review
articles on this subject [31, 92, 106, 154, 156, 166, 169, 181, 206]. Moreover, Sircar [167] has
given an extensive list of publications on PSA which highlights growth in the research and commercial activities in this area.
A PSA process selectively adsorbs the more strongly adsorbed components of a feed gas mixture on a porous adsorbent in a packed column, in order to produce a gas stream enriched in the less strongly adsorbed
components of the feed gas. The adsorbed components are then desorbed from the solid by
lowering their gas-phase partial pressures inside the column to enable adsorbent re-usability.
Desorbed gases, as a result, are enriched in the more strongly adsorbed components of the feed
gas. No external heat is generally used for desorption. The selectivity in a PSA process comes
from differences in either adsorption equilibrium or adsorption kinetics between the components
to be separated. While a PSA process carries out adsorption at superambient pressure and
desorption at near-ambient pressure level, a vacuum swing adsorption (VSA) process undergoes desorption under vacuum. Industrial PSA/VSA processes are substantially sophisticated, with multiple adsorber columns executing a non-trivial sequence of operating steps. Besides adsorption and desorption, such a sequence also involves a multitude of complementary operating steps essential to control product gas purity and recovery, and to optimize overall separation efficiency.
Each bed undergoes this sequence of steps repeatedly, and thus the entire PSA system operates
in a cyclic manner.
Some of the advantages of PSA systems, and key reasons for the recent growth of this technology, include the following:
• PSA and VSA processes operate at ambient temperatures and do not require any solvent for regeneration; hence, their operating costs are quite low compared to cryogenic technologies. The primary operating cost for these processes comes from the energy requirements for compression and vacuum generation. Hence, PSA processes are cost-effective compared to traditional technologies, and are especially desirable when lower production rates or lower product purities are required.
• Adsorbents can be tailored and engineered for a particular application, thus exhibiting high selectivity and adsorption capacity, which leads to separations with extremely high purity and recovery.
• Optimum marriage between a material and a process while synthesizing the separation
scheme drives innovation and leads to highly efficient designs for PSA processes.
1.2 Research Challenges with PSA
Theoretical modeling of PSA systems has also been extensively studied to gain a clear
understanding of this rather complex process. A summary of the published dynamic models
has been compiled by Ruthven [156] and Nikolić et al. [136]. In general, a PSA bed model
is a set of fairly complex partial differential and algebraic equations (PDAEs) which reflect
the transient nature of the process and capture the underlying physics in detail. With such
models, it is now possible to accurately predict the dynamic behavior of a PSA process, and
to adequately account for all the factors that affect the performance of any given PSA system.
Although commercial applications of PSA processes have grown rapidly, existing as well as emerging applications still pose several research challenges. These include developing a systematic formulation for synthesizing multibed PSA cycles, obtaining optimal operation strategies with detailed PSA bed models, and obtaining exhaustive experimental data for the kinetic and equilibrium behavior of novel adsorbents to accurately model multicomponent adsorption isotherms and mass transfer phenomena. Ruthven [155] and Sircar [169, 170] have discussed several of these challenges in detail.
Although high-performance cycles have been developed for individual commercial applications,
design and synthesis of a PSA system for given commercial specifications has largely remained
an experimental effort. It is not clear why a particular cycle is chosen or why one performs better than other configurations. More importantly, so far no systematic algorithm or method
has been developed in the literature to design and evaluate a PSA cycle configuration due to
the inherent complexity of the process which involves a number of sequential but interacting
unsteady-state cycle steps. A priori design of a practical cycle that can guarantee commercial specifications without the use of supporting data from a bench- or pilot-scale process is considered prohibitive because of the expense and computational time involved in solving the rigorous bed models.
Very few studies in the literature have tried to address this issue. All of these studies suggest simplistic formulations to determine the minimum number of beds required in a PSA process for given kinds and fixed sequences of operating steps. Chiang [46] proposed simple arithmetic-based heuristics, while Smith et al. [177] extended Chiang's work to propose a mixed-integer nonlinear programming (MINLP) based approach to obtain the optimal number of beds required
to execute a fixed sequence of operating steps. Smith et al. [178, 179] also suggested a 3-step
scenario to design an industrial PSA system, but again with a known cycle of operating steps.
Recently, Nikolić et al. [136, 137] proposed a state-task network (STN) based framework to determine the optimal number of beds. However, the STN developed was not exhaustive and missed several potential operating steps and cycle configurations.
It is clear that novel PSA cycle sequences are anticipated for upcoming applications as well
as for high-efficiency separation for current applications. Although process designers commonly
resort to simplified and specific models for the PSA process of interest, and utilize simplistic descriptions in order to achieve satisfactory designs, accurate and reliable industrial design still demands detailed and rigorous models.
Another research challenge, as briefly mentioned in the previous section, relates to developing computationally efficient strategies for the simulation and optimization of these rigorous mathematical models. The behavior in each bed is described by partial differential and algebraic equations (PDAEs) in space and time, constructed from conservation of heat, mass, and momentum augmented by transport and equilibrium equations. Such hyperbolic equations involve high nonlinearities arising from non-isothermal effects and nonlinear adsorption isotherms, with solution profiles represented by steep adsorption fronts. As a result, optimization of PSA systems for either design or operation presents a significant computational challenge.
Sophisticated optimization strategies have been developed and applied to PSA systems
with significant improvement in the performance of the process. For optimization of a bench-
scale and a rapid PSA process, Nilchan et al. [138, 139] proposed a complete discretization
approach for the PDAEs. Smith et al. [177, 178, 179] suggested a mixed-integer nonlinear
programming based approach to minimize the number of beds. Ko et al. [110, 111] used an SQP-
based approach for optimization of PSA and fractionated vacuum PSA (FVPSA) processes.
Ko et al. [109] also formulated a multiobjective optimization problem for rapid PSA and
temperature swing processes. Rajasree et al. [145] developed a simulation based approach for
synthesis, design and optimization of PSA processes. Kapoor et al. [104] developed a simple
optimization scheme for PSA systems based on black-box models and an interior penalty
approach, and demonstrated it for three different PSA case studies. Kvamsdal et al. [118, 119,
120] optimized a PSA process for trace separation, and analyzed the effect of mass transfer and
cyclic steady state convergence. Jiang et al. [99] used an SQP-based approach to solve PSA
optimization problems and computed direct sensitivities to obtain derivatives. However, even
the most efficient of these approaches can still be time consuming for large systems, which
gives us a strong motivation to develop cost-efficient and robust optimization strategies for
PSA processes.
Moreover, there is a strong need to incorporate spatially and temporally distributed models within flowsheet simulators, such as ASPEN and HYSYS, as they currently deal with lumped-parameter models which suffer from accuracy limitations. Inclusion of a dynamic PDAE-based PSA model with other steady-state flowsheet models for overall flowsheet optimization is challenging as well.
It is critical to develop reliable, analytical models for accurate prediction of the core properties (multicomponent gas adsorption equilibria, kinetics, and heat of adsorption) using a limited data source, since these properties govern the performance of an adsorptive process and are vital for process design. Predicting them is non-trivial, especially for the heterogeneous adsorbents used in practice [169, 170]. An accurate knowledge of these interactions under all conditions of pressure, temperature, and gas composition, which can vary widely in a practical PSA process, is needed for a reliable solution of the process design problem. There is also a need for an exhaustive multicomponent adsorption database which can validate adsorption models used for PSA process design, and can compute the aforementioned core properties. Although such databases exist [188, 105], they are quite old, are not exhaustive, and do not account for newly developed adsorbents for existing and emerging applications.
Synthesizing novel high-performance adsorbents presents research challenges of its own. Ad-
sorbents are desired to provide large specific surface area for a large adsorption capacity, and
to be selective enough to retain one or more components from the gaseous feed mixture. Be-
ginning with amorphous adsorbents like silica gel and activated carbon, the range of industrial adsorbents has grown to include synthetic molecular sieves and next-generation zeolites with symmetric and crystalline pore structures [155, 156]. The challenge is to synthesize significantly heterogeneous adsorbents which can provide substantially high selectivity for high-purity products.
1.3 Problem Statement and Scope of Work
In this work, we address the challenges related to systematic PSA process synthesis, and to developing computationally cheap strategies for simulation and optimization of PSA bed models, as listed in sections 1.2.1 and 1.2.2, respectively. Development of new adsorbent materials, models that compute adsorbent properties, and a multicomponent adsorption database is beyond the scope of this work.
To address the issue of developing a systematic framework to develop novel PSA cycles, we
present an optimization-based framework to generate optimal PSA cycles from a 2-bed PSA
superstructure. The interconnections between the two beds are governed by time-dependent
control variables. Different PSA operating steps are realized by varying these control variables.
We achieve an optimal sequence of operating steps by solving an optimal control problem with
the PDAEs of the PSA system. To demonstrate this framework, we limit the scope of this
work to binary feed mixtures. Extending the formulation to multicomponent feed streams and multibed configurations is left for future work.
In order to address the challenge associated with efficient computation of PSA bed models and optimization problems, we develop a model reduction based framework that systematically generates accurate low-order approximations of the PDAE system. In particular, we obtain these reduced-order models (ROMs) using proper orthogonal decomposition (POD). POD basis functions are used within a Galerkin projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDAE system. Further, these ROMs are used as surrogate models to define much smaller and computationally cheaper optimization problems. We formulate a convergent and robust optimization algorithm which utilizes these ROM-based smaller optimization problems. In this work, we illustrate this framework with manageable two-bed PSA systems. However, the algorithm developed is generic and can be extended to multibed PSA configurations.
1.4 Thesis Outline
In this thesis, our main focus is to introduce and develop these two novel ideas mentioned above
(the 2-bed PSA superstructure to systematically generate optimal PSA cycles, and the novel
trust-region framework for optimization using reduced-order models), and present a proof of
concept for them using practical and computationally manageable PSA problems. We organize
our work in eight chapters, and present an outline of each chapter in this section.
We begin with the concepts of Pressure Swing Adsorption (PSA) in Chapter 2. PSA is an adsorption-based separation process in which a solid adsorbent preferentially adsorbs one or more components from a feed mixture. We provide a brief background on the principles of adsorption and describe different kinds of cyclic adsorption processes. Then we discuss mathematical modeling for a fixed-bed PSA process, which involves characterizing mass, energy, and momentum balances together with mass transfer phenomena and adsorption equilibrium. In general, PSA processes are governed by coupled hyperbolic PDAEs. Numerical solution strategies for these equations are also discussed.
Chapter 3 introduces the novel two-bed PSA superstructure to determine optimal PSA
configurations. The superstructure consists of two beds, one of which acts as an adsorbing bed
and the other as a desorbing bed. The interconnections between the two beds are governed by
time-dependent control variables, such as fractions of the light and the heavy product recycle.
The superstructure predicts different PSA operating steps by varying these control variables. We formulate an optimal control problem to determine the optimal sequence of operating steps, and realize that it is a singular control problem as the controls appear linearly. The solution strategy adopted for this problem is also discussed.
In Chapter 4, we demonstrate the superstructure approach for case studies related to post-
combustion CO2 capture. PSA is a promising option to effectively capture CO2 from flue gas
streams. However, most commercial PSA cycles do not focus on enriching the strongly adsorbed species, and novel cycle configurations are required to recover a pure strongly adsorbed component. We present a fairly comprehensive review of the previous
studies on PSA cycles for CO2 production. This review highlights the difficulties associated
with choosing one PSA cycle over another, and motivates development of a structured approach
for PSA cycle design. Hence, we use the superstructure to synthesize optimal PSA cycles which
maximize CO2 recovery and minimize overall power consumption. Results obtained are quite promising.
Chapter 5 illustrates the superstructure approach for case studies related to pre-combustion
CO2 capture. PSA processes offer significant advantages for pre-combustion CO2 capture in
terms of performance, energy requirements and operating costs since the shifted synthesis gas
(syngas) is available for separation at a high pressure with a high CO2 concentration. Since
the industrial PSA cycles focus on recovering H2 at a very high purity but consider CO2 as a
waste stream, novel PSA cycle designs are anticipated which recover both H2 and CO2 at a
high purity. With the help of the superstructure approach, we successfully synthesize optimal
PSA configurations which maximize CO2 recovery and minimize overall power consumption.
After this, the thesis focuses on addressing the challenge associated with efficient compu-
tation of PSA bed models and optimization problem. Chapter 6 describes a reduced-order
modeling technique that can circumvent such computational challenges by generating cost-
efficient low-order models which can be used as surrogate models in optimization problems. In this work, we describe how proper orthogonal decomposition (POD) can be successfully used to
construct reduced-order models which can be orders of magnitude smaller than the original
model without losing accuracy. Further, we discuss ROM-based optimization, and describe how
ROMs can be utilized to optimize in a trust-region around the point where it is constructed.
First we develop an exact penalty-based trust-region algorithm, and develop correction schemes
for objective and constraints to ensure global convergence with POD-based approximate mod-
els. Then we illustrate this algorithm with correction schemes for a two-bed four-step PSA
case study for post-combustion capture. Next, highlighting the drawbacks of the penalty approach and the benefits of a filter, we develop a hybrid filter trust-region algorithm for constrained ROM-based optimization. The filter algorithm is then demonstrated for the PSA case study.
Finally, we summarize the contributions of this thesis and discuss directions for future work in Chapter 8.
Chapter 2
Pressure Swing Adsorption
Synopsis
Pressure swing adsorption is a cyclic gas separation process in which a solid adsorbent preferentially adsorbs one component (or a family of related components) from a feed mixture, thus achieving separation. To understand the design and operation of PSA processes, we first provide a brief background on the adsorption phenomenon and its fundamentals. Then we discuss adsorption modeling in a fixed bed which, besides comprising mass, momentum, and energy conservation, also involves characterizing adsorption equilibrium behavior and mass transfer resistances affecting adsorption kinetics. Further, we describe the transient cyclic operation of two-bed as well as multi-bed multi-step PSA processes with examples. Numerical methodologies to simulate PSA processes and obtain solutions of the bed model are also discussed; these involve a finite volume spatial discretization scheme which conserves mass, avoids unphysical oscillations, and is thus essential for simulating PSA bed models.
As discussed by Coulson et al. [55], Yang [206], and Vermeulen et al. [190], adsorption involves
contacting a free fluid phase (gas or liquid) with a rigid particulate phase which has the property
of selectively taking up and storing one or more solute species originally contained in the fluid.
Besides adsorption, the conditions for desorption must also exist as it is usually necessary to
reuse the adsorbent. The fluid-solid interaction leads to a reduction in potential energy of the
fluid molecules in the vicinity of adsorbent surface. As a result, the fluid molecules concentrate
such that their molecular density in this region is substantially greater than the free fluid phase.
The strength of such a surface interaction depends on the nature of both the solid adsorbent
and the fluid adsorbate. Consequently, different substances adsorb with different affinities.
Such selectivity provides the basis to achieve separation in adsorption separation processes.
If the fluid-surface interaction involves weak forces, such as van der Waals forces, we observe physical adsorption or physisorption. In contrast, if the forces are strong and involve electron transfer or sharing, chemisorption occurs. Practical adsorption separation processes rely on physisorption, which allows easy desorption and regeneration of the spent adsorbent by manipulating external operating conditions. An adsorbent has a finite solute uptake capacity from the free fluid phase and must be cleaned for re-utilization. Thus, the adsorption phenomenon should be reversible. Such reversibility is a key feature of physisorption. Consequently, adsorption separation processes are designed to operate in a cyclic manner. Often two fixed-bed adsorbers are
provided, such that one is used for adsorption while the other is being regenerated. For
separating components from gaseous mixtures, the following two kinds of adsorption separation processes are commonly employed:
• Temperature Swing Adsorption (TSA): In this process, desorption is achieved by raising the bed temperature. The cyclic operation in this case typically takes a rather long time because of a relatively large time constant of heat transfer due to poor thermal conduction in the packed bed.
• Pressure Swing Adsorption (PSA): In this process, bulk separation of a mixed gas is achieved by swinging the bed pressure between high and low levels. In this case, the step time for desorption is of the same order of magnitude as that of the adsorption (sometimes even smaller). Hence, this process enjoys a shorter cycle time than TSA.
An alternative to manipulating pressure and temperature is to alter the composition of the fluid
phase to control the direction of adsorption. This operation, called Concentration Swing
Adsorption (CSA), is utilized when the free fluid phase is a liquid. Such aforementioned swing processes form the basis of cyclic adsorption-based separations.
As mentioned before, physisorption is caused mainly by weak forces between fluid molecules
and adsorbent surface. Thus, adsorbents are characterized by surface properties such as surface
area. Moreover, the role of an adsorbent is to provide surface for selective adsorption of certain
components from the fluid phase. Hence, the desirable properties which an adsorbent should possess include the following:
• Capacity: An adsorbent is desired to provide a large specific surface area for a large adsorption capacity. A low-capacity adsorbent leads to longer and more expensive adsorbent beds. The creation of a large internal surface area in a limited volume is commercially achieved through highly microporous materials. Besides micropores, some adsorbents have larger pores, called macropores, which result from aggregation of fine powders into pellets. A typical porous adsorbent particle is illustrated in Figure 2.1.
Figure 2.1: A composite adsorbent pellet with different mass transfer resistances [156]
Macropores function as diffusion paths of adsorbate molecules from outside the pellet to the micropores in its interior.
• Selectivity: An adsorbent must selectively retain one or more adsorbates from the fluid phase. This can be achieved either by equilibrium selectivity, in which species adsorb with different strengths on the surface, or by kinetic selectivity, in which species diffuse into the adsorbent at different rates.
At the microscopic level (see Figure 2.1), the process of adsorption involves the following steps in sequence:
1. The adsorbate diffuses from the bulk fluid phase to the external surface of the adsorbent
pellet
2. From the external surface, adsorbate diffuses into and through the macropores.
3. If micropores exist, adsorbate diffuses further in the micropores before getting adsorbed
onto the surface of the micropores, otherwise it adsorbs on the surface of macropores.
Consequently, the adsorbate encounters three different kinds of mass transfer resistances at different stages:
• External film resistance: This exists in the external fluid film surrounding the adsorbent pellet. It can be characterized by using the system's Sherwood, Reynolds, and Schmidt numbers. Typically this resistance is negligibly small in PSA systems [181, 156].
• Macropore diffusive resistance: This mass transfer resistance exists in the macrop-
ores of the adsorbent particle, and usually is the rate-controlling step. It depends on the
relative magnitude of the pore diameter and the mean free path of the adsorbate under
the operating conditions in the pores. When the pore diameter is much greater than the mean free path, bulk diffusion ($D_{m,i}$) dominates the transport, and is estimated by the Chapman-Enskog equation [181]. When the mean free path is much larger than the pore diameter, Knudsen diffusion dominates, with the Knudsen diffusivity estimated as
$$D_K = 48.5\, d_p \sqrt{\frac{T}{M_w}} \qquad\qquad (2.1)$$
Knudsen diffusion is usually dominant when the total pressure is quite low. In the intermediate regime, both mechanisms contribute, and the combined diffusivity is given by
$$D_{eff,i} = \frac{D_{m,i}\, D_K}{D_{m,i} + D_K} \qquad i = 1, \ldots, N_c \qquad\qquad (2.2)$$
The macropore diffusivity is then obtained by accounting for the particle porosity $\epsilon_p$ and tortuosity $\tau$ through the following equation
$$D_{p,i} = \frac{\epsilon_p\, D_{eff,i}}{\tau} \qquad i = 1, \ldots, N_c \qquad\qquad (2.3)$$
• Micropore diffusive resistance: Also known as Surface diffusion, this resistance exists
in the micropores of the adsorbent pellet. For the adsorbents considered in this work,
either micropores don’t exist or this resistance is negligible, i.e., adsorption occurs in the
micropores instantaneously.
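To make the macropore diffusivity calculation concrete, the short sketch below evaluates Equations (2.1)-(2.3) for a single component. The numerical values used for the pore diameter, particle porosity, tortuosity, and molecular diffusivity are illustrative placeholders, not parameters taken from the case studies in this thesis.

```python
import math

def macropore_diffusivity(D_m, T, M_w, d_pore, eps_p, tau):
    """Macropore diffusivity D_p [m^2/s] from Eqs. (2.1)-(2.3).

    D_m    : molecular (bulk) diffusivity [m^2/s]
    T      : temperature [K]
    M_w    : molecular weight
    d_pore : macropore diameter [m]
    eps_p  : particle porosity [-]
    tau    : tortuosity factor [-]
    """
    D_K = 48.5 * d_pore * math.sqrt(T / M_w)   # Knudsen diffusivity, Eq. (2.1)
    D_eff = D_m * D_K / (D_m + D_K)            # combined diffusivity, Eq. (2.2)
    return eps_p * D_eff / tau                 # macropore diffusivity, Eq. (2.3)

# Illustrative values only (not taken from the thesis case studies)
print(macropore_diffusivity(D_m=1.6e-5, T=298.0, M_w=44.0,
                            d_pore=5e-7, eps_p=0.35, tau=3.0))
```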
Several models are used to describe the resulting adsorption kinetics, i.e., the local rate of change of the solid loading. The simplest, the instantaneous equilibrium model, is applicable when all the mass transfer resistances between the gas and the adsorbed phase are negligible, so that the solid loading always equals its equilibrium value:
$$\frac{\partial q_i}{\partial t} = \frac{\partial q_i^*}{\partial t} \qquad i = 1, \ldots, N_c \qquad\qquad (2.4)$$
In the pore diffusion model, the following detailed diffusion equation within the spherical adsorbent particle is solved to obtain the local rate of change of solid loading [207]:
$$\frac{\partial q_i}{\partial t} = \frac{1}{r^2} \frac{\partial}{\partial r}\left( D_{p,i}\, r^2 \frac{\partial q_i}{\partial r} \right), \quad \forall r \in (0, d_p/2), \quad i = 1, \ldots, N_c \qquad\qquad (2.5)$$
The boundary condition is obtained by equating the internal flux with the external flux at the particle surface.
The linear driving force (LDF) model [84] is obtained by assuming a parabolic solution profile for the pore diffusion model and then using an average solid loading for the entire adsorbent particle. It is expressed as
$$\frac{\partial q_i}{\partial t} = k_i \left( q_i^* - q_i \right) \qquad i = 1, \ldots, N_c \qquad\qquad (2.6)$$
$$k_i = \frac{60\, D_{p,i}}{d_p^2} \qquad i = 1, \ldots, N_c \qquad\qquad (2.7)$$
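As a small illustration of the LDF rate expression, the sketch below advances the solid loading of one component with explicit Euler steps of Equation (2.6); the equilibrium loading, diffusivity, and particle diameter used here are arbitrary placeholder values rather than data from the thesis.

```python
def ldf_step(q, q_star, D_p, d_p, dt):
    """One explicit Euler step of the LDF model, Eqs. (2.6)-(2.7)."""
    k = 60.0 * D_p / d_p**2          # LDF mass transfer coefficient, Eq. (2.7)
    dq_dt = k * (q_star - q)         # linear driving force rate, Eq. (2.6)
    return q + dt * dq_dt

# Illustrative values only: loading in mol/kg, D_p in m^2/s, d_p in m, dt in s
q = 0.0
for _ in range(100):
    q = ldf_step(q, q_star=2.0, D_p=1.0e-8, d_p=2.0e-3, dt=0.1)
print(q)   # approaches the equilibrium loading q* = 2.0
```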
Figure 2.2: Adsorption isotherms and change in equilibrium solid loading with pressure and temperature
An adsorbent in contact with the surrounding gaseous mixture for a sufficiently long time
eventually attains equilibrium. In this state the amount of a component adsorbed on the surface is determined by the pressure and temperature of the system, as shown in Figure 2.2 [154]. The relation between the equilibrium amount adsorbed and the total pressure of the fluid phase at a particular temperature is called the adsorption isotherm.
Figure 2.2 also shows how adsorption/desorption is facilitated by changing the total pressure or
temperature of the system. We note that an adsorption process is always exothermic while
desorption is always endothermic. Since the overall change in system’s entropy is negative
during adsorption, enthalpy change must be negative to ensure a net negative change in the
Gibbs free energy (vice-versa for desorption). Consequently, adsorption is favored at a lower
temperature, while desorption at a higher one. Similarly, at a high pressure, more adsorbate
molecules interact with the molecules at the adsorbent surface leading to a higher adsorbent
surface coverage and higher equilibrium solid loading. Hence, adsorption is favored at a high pressure, while desorption is favored at a low pressure.
At sufficiently low adsorbate concentration (or partial pressure), the adsorption isotherm attains a linear form known as Henry's law:
$$q_i^* = k_i^H P_i \qquad i = 1, \ldots, N_c \qquad\qquad (2.8)$$
Here $k_i^H$ is Henry's constant and is inversely related to temperature ($k_i^H = k_0\, e^{\Delta H_i^{ads}/RT}$).
At higher concentrations, adsorption isotherms take a variety of mathematical forms, some of which are based on a simplified physical picture of adsorption/desorption phenomena while others are purely empirical and intended to correlate the experimental data.
Commonly used mathematical models include the single- and dual-site Langmuir models, the Freundlich model, and the Langmuir-Freundlich model.
The Langmuir isotherm model is derived from mass action considerations and by balancing the occupied and unoccupied sites on the surface. It shows the correct asymptotic behavior, reducing to Henry's law at low partial pressures. The single-site form is
$$q_i^* = \frac{q_i^s b_i P_i}{1 + \sum_j b_j P_j} \quad \text{where} \quad q_i^s = k_i^1 + k_i^2 T, \quad b_i = k_i^3 \exp(k_i^4/T), \qquad i = 1, \ldots, N_c \qquad (2.9)$$
The dual-site Langmuir model superposes two such contributions:
$$q_i^* = \frac{q_{1i}^s b_{1i} P_i}{1 + \sum_j b_{1j} P_j} + \frac{q_{2i}^s b_{2i} P_i}{1 + \sum_j b_{2j} P_j} \qquad i = 1, \ldots, N_c \qquad\qquad (2.10)$$
$$\text{where} \quad q_{mi}^s = k_{mi}^1 + k_{mi}^2 T, \quad b_{mi} = k_{mi}^3 \exp(k_{mi}^4/T), \quad m = 1, 2$$
The Freundlich model,
$$q_i^* = q_i^s b_i P_i^{1/n} \qquad n > 1, \quad i = 1, \ldots, N_c \qquad\qquad (2.11)$$
and the Langmuir-Freundlich model,
$$q_i^* = \frac{q_i^s b_i P_i^{1/n}}{1 + \sum_j b_j P_j^{1/n}} \qquad i = 1, \ldots, N_c \qquad\qquad (2.12)$$
are occasionally used as adsorption isotherms. Both these models are empirical in nature with no sound theoretical basis, and neither reduces to Henry's law in the low-concentration region. However, they can cogently represent the behavior of several adsorption systems over limited ranges of operating conditions.
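As an illustration of how an isotherm model enters the bed equations, the sketch below evaluates the single-site Langmuir expression of Equation (2.9) for a binary mixture. The parameter values are placeholders chosen only for illustration; they are not the isotherm parameters reported in the case-study tables of this thesis.

```python
import numpy as np

def langmuir_single_site(P, T, k1, k2, k3, k4):
    """Equilibrium loadings q* from the single-site Langmuir model, Eq. (2.9).

    P      : partial pressures of all components [bar], shape (Nc,)
    T      : temperature [K]
    k1..k4 : isotherm parameter arrays, shape (Nc,)
    """
    P = np.asarray(P, dtype=float)
    qs = k1 + k2 * T                      # temperature-dependent saturation capacity
    b = k3 * np.exp(k4 / T)               # temperature-dependent affinity coefficient
    denom = 1.0 + np.sum(b * P)           # shared denominator couples the components
    return qs * b * P / denom

# Illustrative binary example; all parameter values are hypothetical
k1 = np.array([5.0, 3.0]);    k2 = np.array([-0.005, -0.003])
k3 = np.array([1.0e-4, 5.0e-5]);  k4 = np.array([2000.0, 1200.0])
print(langmuir_single_site(P=[0.15, 0.85], T=313.0, k1=k1, k2=k2, k3=k3, k4=k4))
```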
PSA processes are generally carried out with packed adsorption columns. The dynamic behavior of such a column is governed by the interplay of adsorption kinetics, adsorption equilibrium, and fluid dynamics, and its understanding is vital for process modeling and analysis.
Assuming an axially dispersed plug flow pattern in a fixed bed adsorption column, the transient component material balance for the bulk gas phase is given by
$$\epsilon_b \frac{\partial C_i}{\partial t} + (1-\epsilon_b)\rho_s \frac{\partial q_i}{\partial t} + \frac{\partial (v C_i)}{\partial x} = \epsilon_b D_{ax} \frac{\partial^2 C_i}{\partial x^2} \qquad i = 1, \ldots, N_c \qquad\qquad (2.13)$$
We often omit the axial dispersion term from this equation while numerically integrating it, because the numerical dispersion resulting from spatial discretization of the flux term always exists for any discretization scheme. Thus, considering physical dispersion, together with the inevitable numerical dispersion, causes additional smearing of the solution profiles. Hence, instead of a parabolic formulation, the following hyperbolic form is used for the component material balance:
$$\epsilon_b \frac{\partial C_i}{\partial t} + (1-\epsilon_b)\rho_s \frac{\partial q_i}{\partial t} + \frac{\partial (v C_i)}{\partial x} = 0 \qquad i = 1, \ldots, N_c \qquad\qquad (2.14)$$
Here we assume no radial dependence for concentration and solid loading. Thus, in the equations above, $C_i$ and $q_i$ represent cross-sectional average values. Numerical dispersion in such a hyperbolic formulation is controlled by the spatial discretization scheme discussed later in this chapter.
Adsorption is an exothermic process, and the accompanying temperature changes influence the adsorption equilibrium behavior. Thus, accounting for heat
generation and transfer in adsorbent beds is essential for accurate modeling of PSA processes.
The heat generated on the adsorbent surface is transported through conduction between ad-
sorbent particles and through convection in the bulk gas phase. The extent of temperature rise depends on the heat of adsorption, the transport properties, and heat transfer characteristics of the packed bed such as the thermal conductivity and the heat transfer coefficient between the bulk gas phase and adsorbent particles. Moreover, heat transfer in the axial direction by thermal conduction is often negligible unless the operation is adiabatic at a very high flow rate. Based on these assumptions, the energy balance for the system is given by
$$\left( \epsilon_b \sum_i C_i (C_{pg}^i - R) + \rho_b C_{ps} \right) \frac{\partial T}{\partial t} - \rho_b \sum_i \Delta H_i^{ads} \frac{\partial q_i}{\partial t} + \frac{\partial (vh)}{\partial x} + U_A (T - T_w) = 0 \qquad (2.15a)$$
$$C_{pg}^i = a_c^i + b_c^i T + c_c^i T^2 + d_c^i T^3 \qquad i = 1, \ldots, N_c \qquad\qquad (2.15b)$$
$$h = \sum_i C_i \int C_{pg}^i \, dT \qquad\qquad (2.15c)$$
Here we also consider temperature dependence of heat capacities and heat transfer through the
wall of the column. As in the case of material balance, here T represents average temperature
across the cross-section. The effective heat transfer coefficient $U_A$ comprises contributions from fluid-to-particle and fluid-to-wall heat transfer. The fluid-to-particle heat transfer coefficient can be obtained from the Carberry equation, and it depends on the system's Nusselt number, Prandtl number, and Reynolds number [181]. The fluid-to-wall heat transfer coefficient is usually obtained from empirical correlations. If only fluid-to-wall heat transfer is assumed, $U_A$ reduces to
$$U_A = \frac{4 h_w}{D} \qquad\qquad (2.16)$$
As the bulk fluid flows through the void spaces between adsorbent particles, it experiences
a pressure drop due to viscous energy losses and a drop in kinetic energy. The Ergun equation is commonly used to describe such a pressure drop along the bed length:
$$-\frac{\partial P}{\partial x} = \frac{150\,\mu\,(1-\epsilon_b)^2}{d_p^2\, \epsilon_b^3}\, v \;+\; \frac{1.75\,(1-\epsilon_b)}{d_p\, \epsilon_b^3} \left( \sum_i M_w^i C_i \right) v\,|v| \qquad\qquad (2.17)$$
The first term on the right-hand side represents losses due to viscous flow (laminar part), while
the second term accounts for the drop in kinetic energy (turbulent part).
Often pressure drop across the bed is assumed negligible and is not considered in the
analysis of the dynamic behavior of a PSA process [48, 47, 144]. Cruz et al. [58] suggest
that such an assumption is valid for bench-scale PSA processes. They suggest that an overall
material balance to obtain velocity profile along the bed length can be avoided and a constant
or linear velocity profile is acceptable for PSA processes with low Reynolds number.
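The following sketch evaluates the Ergun pressure gradient of Equation (2.17) at a single axial location; the bed and gas properties used are generic placeholders chosen only to exercise the formula.

```python
import numpy as np

def ergun_pressure_gradient(v, mu, eps_b, d_p, C, M_w):
    """Negative axial pressure gradient -dP/dx [Pa/m] from the Ergun equation, Eq. (2.17).

    v     : superficial gas velocity [m/s] (sign carries the flow direction)
    mu    : gas viscosity [Pa s]
    eps_b : bed voidage [-]
    d_p   : particle diameter [m]
    C     : molar concentrations of the components [mol/m^3]
    M_w   : molecular weights [kg/mol]
    """
    viscous = 150.0 * mu * (1.0 - eps_b) ** 2 / (d_p**2 * eps_b**3) * v
    rho_gas = float(np.dot(M_w, C))                    # gas mass density [kg/m^3]
    kinetic = 1.75 * (1.0 - eps_b) / (d_p * eps_b**3) * rho_gas * v * abs(v)
    return viscous + kinetic

# Illustrative values only
C = np.array([6.0, 34.0])            # mol/m^3
M_w = np.array([0.044, 0.028])       # kg/mol
print(ergun_pressure_gradient(v=0.2, mu=1.8e-5, eps_b=0.4, d_p=2e-3, C=C, M_w=M_w))
```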
Figure 2.3: Two-bed four-step PSA cycle — feed pressurization (Step 1), feed/adsorption (Step 2), counter-current depressurization (Step 3), and light reflux/desorption (Step 4)
As described in section 2.1, fixed-bed pressure swing adsorption processes typically operate
in a cyclic manner undergoing adsorption and desorption steps periodically in one or more
packed beds. Desorption step renders clean beds re-usable for adsorption step. In PSA,
bulk separation is achieved by “swinging” between high and low pressure levels for adsorption
and desorption, respectively. Adsorption is carried out at a superambient pressure while the
desorption is carried out under vacuum. A larger pressure difference allows more efficient separation.
A typical two-bed operation mode of PSA cycle is shown in Figure 2.3. The early PSA cycles
developed by Skarstrom [176] and Air Liquide [60] utilized such an operating strategy. The cycle comprises four steps: feed pressurization, feed (or adsorption), depressurization (counter-current), and light reflux (or desorption). In the first step (Step 1), bed 1 is pressurized
by high-pressure feed gas from feed end, while bed 2 is depressurized in a counter-current
fashion (Step 3) and strongly-adsorbed component (heavy product) is removed. Next, high-
pressure feed gas is continued through bed 1 where adsorption of heavy product (Step 2) takes
place while product gas enriched in weakly-adsorbed component (light product) leaves the top.
During this period, a fraction of the light product gas is drawn out to bed 2 at low pressure to
purge and further desorb the accumulated heavy adsorbate counter-currently, called the light
reflux step (Step 4). Further, the beds interchange roles and execute previous steps of the
other bed. Eventually, both beds repeat all the four steps in a cyclic manner.
The idea behind the light reflux step is to flush the void spaces within the bed and to
ensure that at least the upper end of the bed, where light product is withdrawn, is completely
free of the heavy product. Moreover, counter-current operation during depressurization and
light reflux steps prevents retention of the heavy product at least at the upper end of the bed,
thereby reducing the amount of purge used in Step 4. Increasing the purge amount increases
the light product purity but inevitably reduces its recovery. Also, the operating pressure during
Step 2 substantially influences light product losses during Step 3 and 4. Higher adsorption
pressure increases losses during Step 3, while low pressure increases losses during Step 4 [156].
The Skarstrom cycle represents the most basic operation of a PSA process. To improve
the purity and recovery of light or heavy products or both, as well as to design PSA processes
for multi-component feed mixtures, several modified configurations have been proposed in the
literature with a multitude of distinct operating steps such as light product or heavy product
pressurization, desorption with heavy product as purge gas, co-current depressurization, pres-
sure equalization etc. [43, 62, 129, 196, 213, 202]. The processes differ from one another with
respect to the kinds of operating steps as well as the sequence in which these steps are carried
out.
Figure 2.4: Time chart for a 5-bed 11-step H2 PSA process [100]
Industrial PSA operations adopt more sophisticated modes to increase product purity and recovery or minimize overall power consumption and operating costs. A practical PSA/VPSA
process can be fairly complex with a multicolumn design executing a wide variety of nonisother-
mal, nonisobaric, and non-steady-state operating steps in a non-trivial sequence. For instance,
Figure 2.4 shows the time chart for a 5-bed 11-step PSA process which separates hydrogen
from a multicomponent feed mixture at a purity of more than 99.9999% [100]. Besides the
conventional feed (adsorption) and purge (light reflux) steps, the cycle comprises multiple pres-
sure equalization steps (EQ) and unconventional blowdown (depressurization) with pressure
equalization step.
PSA processes are no more complex than most of the conventional separation processes, but
they are different in one essential feature: the process always operates under transient condi-
tions. Since the time intervals for operating steps are usually short and the boundary conditions around the PSA beds change as we switch from one operating step to another, the process
never reaches a steady state. Consequently, behavior of a transient PSA process is always de-
scribed by a set of partial differential equations (PDEs), which requires a more complex solution
procedure.
PSA processes differ from the conventional separation processes in one more feature: they
operate under cyclic steady state (CSS). At CSS, conditions in each bed at the end of each
cycle are exactly the same as those at the beginning of the cycle. In other words, although
the process remains dynamic within a cycle, the transient behavior of the entire cycle remains
constant and repeats itself invariably from cycle to cycle.
Figure 2.5: Boundary matchings required for the two-bed four-step PSA process
Mathematically, CSS is represented
by matching the initial conditions of the PDEs with the solution obtained at the end of the
cycle. Thus, we note that the initial conditions required to solve the set of PDEs of a PSA
process are themselves parametric and should be computed simultaneously with the solution
of PDEs.
The number of cycles required by an actual PSA process to go from start-up to CSS is system dependent, but typically quite large. For instance, the number of cycles required to reach CSS is around 500 for a H2 PSA process and around 2000 for an O2 VPSA process [114]. Normally, CSS is determined by solving the PDE system repeatedly for each step of the cyclic process in sequence, using the final concentration profile for each step as the initial condition for the next step in the cycle. Such computations are bulky, since the procedure must be repeated for sufficiently many cycles until CSS is reached.
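The successive-substitution procedure just described can be summarized in a few lines. The sketch below assumes a hypothetical function simulate_one_cycle(state, params) that integrates the discretized bed model over one full cycle and returns the bed state at the cycle's end; it is a placeholder for whatever PSA simulator is available, not a routine defined in this thesis.

```python
import numpy as np

def run_to_css(state0, params, simulate_one_cycle, tol=1e-6, max_cycles=5000):
    """Successive substitution to cyclic steady state (CSS).

    The bed state at the end of each cycle is fed back as the initial
    condition of the next cycle until the change between cycles is small.
    """
    state = np.asarray(state0, dtype=float)
    for cycle in range(1, max_cycles + 1):
        new_state = simulate_one_cycle(state, params)
        css_error = np.linalg.norm(new_state - state) / max(1.0, np.linalg.norm(state))
        state = new_state
        if css_error < tol:              # CSS condition: state repeats cycle to cycle
            return state, cycle
    raise RuntimeError("CSS not reached within max_cycles")
```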
In a multi-bed PSA operation, beds interact with each other over the cycle as material flows
from one bed to another, such as during the reflux and pressure equalization steps. We need to
capture such an interaction at the boundary of the beds while simulating the multi-bed system.
Figure 2.6: Approaches to match boundary values while simulating multi-bed PSA operations — (a) Unibed approach, (b) Multibed approach
Moreover, due to CSS, conditions for the same bed should be matched at the beginning and
the end of the cycle. For instance, Figure 2.5 illustrates such boundary matchings with the
spatial domain and time chart of the two-bed four-step process shown in Figure 2.3. First,
since a small amount of the light product exiting during the adsorption step is recycled as
a reflux to the second bed for desorption, the conditions at the ends of the beds need to be
matched. Next, since the final bed conditions at the end of an operating step serve as the initial condition for the next step, bed profiles should be matched across successive steps. Finally, for CSS, bed profiles at the beginning and the end of the cycle should be matched. The boundaries where conditions need to be matched are indicated in Figure 2.5.
Unibed and Multibed formulations are two different strategies to implement the aforementioned
boundary matchings. As shown in Figure 2.6(a), Unibed approach involves simulating a single
bed for all the operating steps in a cycle. Since all the PSA beds follow same sequence of steps
and demonstrate identical dynamic behavior, one bed is sufficient to simulate the entire PSA
process. Hence, only one set of bed equations are solved with varying boundary conditions for
all the steps. To simulate bed interactions and match boundary information for different beds,
storage buffers are used in the model implementation. In contrast, in Multibed approach we
simulate all the beds in the PSA flowsheet but only for a portion of the cycle. As illustrated
in Figure 2.6(b) for a 2-bed 4-step process, this portion of the cycle is selected such that it
covers all the operating steps of the cycle among all the beds. Thus, we solve bed equations
for all the beds but only for one set of operating steps. Such an approach accurately simulates the interactions between the beds. Since, at CSS, the bed profiles match at the beginning of one bed and the end of another bed, the solution of the entire PSA cycle is eventually obtained. Ling Jiang [99] provides a detailed description of the Unibed and Multibed
approaches.
As discussed in section 2.3, PSA processes are mathematically modeled by coupled hyperbolic
partial differential and algebraic equations (PDAEs) distributed in both space and time. Ob-
taining an analytical solution without making any approximation is close to impossible for such
a highly coupled set of PDAEs. Low dimensional PDAEs of this type (with simplifications)
can be solved analytically or directly by the PDE package CLAWPACK [123]. However, for
large-scale systems numerical methods employing discretization for the spatial or the temporal
or both domains are essential. There are two distinct approaches to numerically simulate the
set of PDAEs:
• Method of Lines (MOL): This is a two-step approach [160]. First, the PDAEs are discretized in the spatial
domain thus converting them to a set of differential algebraic equations (DAEs). Next,
DAEs are integrated using standard time integration routines. One of the advantages
with MOL is that with a high-resolution spatial discretization as well as with the er-
ror checking mechanisms present in the commercial time integration routines, we can
achieve high order accuracy in both dimensions. A solution of the hyperbolic PDAEs
defining PSA systems is usually characterized by steep adsorption fronts in the spa-
tial domain, which is aggravated with higher adsorption pressure. In order to capture
such steep fronts, it is essential to use a spatial discretization scheme which not only
avoids physically unrealistic oscillations, but also ensures minimal numerical dispersion
and negligible smearing (damping) with fewer discretization nodes. Moreover, it should
also ensure extreme accuracy with mass, momentum, and energy conservation, which is
vital for PSA-based separations. Because first- and second-order finite difference and
finite element methods do not often mitigate such numerical noise with hyperbolic sys-
tems, high-resolution schemes such as upwind-based finite volume methods are utilized in this work; a minimal sketch of the MOL idea appears after this list.
• Complete Discretization (CD): In this approach, the PDAEs are discretized in both the spatial and the temporal domains, which converts them into a large set of algebraic equations. Such an approach allows seamless addition of the CSS condition to the discretized bed model, thus avoiding the repeated simulation of successive cycles to reach CSS. Hence, CD is attractive and efficient for simpler PSA models. Nilchan and Pantelides [138, 139] successfully demonstrated this approach for small-scale PSA problems and solved the resulting algebraic system using a nonlinear equation solver inside gPROMS [2]. However, one of the drawbacks of CD is that, in the absence of any error checking in the temporal discretization, the accuracy of the solution is not guaranteed. While a large number of nodes can mitigate this drawback, it can consequently make the problem extremely large; solving for all the state variables simultaneously also presents a significant challenge. Hence, after solving with the CD approach with fewer nodes, we verify the accuracy of the results with a more rigorous method-of-lines simulation.
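To make the MOL idea concrete, here is a minimal sketch, assuming a simple single-component advection problem rather than the full PSA bed model: the spatial derivative is discretized with a first-order upwind finite volume stencil and the resulting ODEs are handed to a standard time integrator (scipy's solve_ivp). It only illustrates the two-step structure of MOL, not the discretization actually used for the case studies.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model problem: dc/dt + v * dc/dx = 0 on x in [0, 1], inlet concentration c_in
N, L, v, c_in = 100, 1.0, 0.1, 1.0
dx = L / N

def rhs(t, c):
    """Spatially discretized equations (step 1 of MOL): first-order upwind fluxes."""
    c_wall = np.concatenate(([c_in], c))        # upwind wall values, inlet as ghost value
    return -v * (c_wall[1:] - c_wall[:-1]) / dx

# Step 2 of MOL: integrate the resulting ODE system in time
sol = solve_ivp(rhs, t_span=(0.0, 5.0), y0=np.zeros(N),
                method="BDF", rtol=1e-6, atol=1e-8)
print(sol.y[:, -1][:5])   # cells near the inlet approach c_in as the front moves in
```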
We present a brief description of the spatial and the temporal discretization schemes used in this work below.
PSA processes are convection dominated, and finding an accurate discretization scheme that resolves the resulting steep fronts is challenging. Although higher-order methods accurately predict behaviour for smooth regions, they introduce unrealistic oscillations around
steep zones in the spatial profile. First-order methods can be inherently bounded and can
ensure no oscillatory behavior for steep zones. However, due to low order accuracy, fronts lose
their sharpness (numerical smearing) unless hundreds of spatial nodes are used. Therefore,
high-resolution methods are utilized for convection dominated processes which ensure second
or higher order accuracy for smooth regions together with low order accuracy without oscil-
lations for steep regions and discontinuities in the spatial profile [124, 70]. In particular, we
use the finite volume method with flux limiters in this work since it not only avoids these
aforementioned issues but is also well suited to model hyperbolic conservation laws, given its inherently conservative formulation.
Finite volume methods with flux limiters utilize the theory of flux-corrected transport
(FCT) which incorporates anti-diffusion to negate the effects of excessive smearing. FCT
removes numerical dispersion from the discretized equations to keep the fronts sharp ensuring
no oscillations [32]. On the other hand, finite volume methods with a flux limiter are conceptually the exact opposite of FCT. In other words, they introduce additional numerical dispersion around steep zones to avoid oscillations. However, the theory of flux limiters has been derived from principles similar to those of FCT.
Figure 2.7: Division of the spatial domain into finite volumes with cell averages f_1, f_2, ..., f_{j-1}, f_j, f_{j+1}, ..., f_N
In a finite volume method, the spatial domain is divided into discrete volume elements (or
cells) and we define average values for state variables over each element. For instance, for
one-dimensional finite volume method, spatial division is done as shown in Figure 2.7, and
$$\int_{x_{j-1/2}}^{x_{j+1/2}} f(x)\, dx = \Delta_j \bar{f}_j \qquad\qquad (2.18)$$
Here j ± 1/2 are walls of volume j, ∆j is the length of the volume j, and f¯j is the volume
average of f (x) which for example can represent bulk phase mass concentration or enthalpy.
PDAEs are then integrated in the spatial domain and the state variables are replaced by their
cell average values. For instance, Equation (2.14), after applying finite volume discretization, becomes
$$\epsilon_b \frac{d\bar{C}_{i,j}}{dt} + (1-\epsilon_b)\rho_s \frac{d\bar{q}_{i,j}}{dt} + \frac{1}{\Delta_j}\left( v_{j+1/2}\, C_{i,j+1/2} - v_{j-1/2}\, C_{i,j-1/2} \right) = 0 \qquad\qquad (2.19)$$
Here vj+1/2 Ci,j+1/2 and vj−1/2 Ci,j−1/2 are mass fluxes across the walls of volume j, resulting
from the approximation of the integral in Equation (2.18). Since Equation (2.17) (or any other
equation) always evaluates velocity at cell walls, only Ci,j+1/2 , Ci,j−1/2 (or in general, wall
values of the densities) need to be approximated in terms of cell average values by interpolation.
For upwind finite volume methods, such an interpolation depends on the direction of the flux.
In this work, we use the following flux direction-based formulation to approximate wall values
of densities, such as bulk phase mass concentration or enthalpy [59, 91, 125]
$$\text{For } v_{j+1/2} \ge 0: \quad f_{j+1/2} = \bar{f}_j + \frac{1}{2}\, \theta(r_{j+1/2}) \left( \bar{f}_{j+1} - \bar{f}_j \right) \qquad\qquad (2.20a)$$
$$r_{j+1/2} = \frac{\bar{f}_j - \bar{f}_{j-1}}{\bar{f}_{j+1} - \bar{f}_j} \qquad\qquad (2.20b)$$
$$\text{For } v_{j+1/2} < 0: \quad f_{j+1/2} = \bar{f}_{j+1} + \frac{1}{2}\, \theta(r_{j+1/2}) \left( \bar{f}_j - \bar{f}_{j+1} \right) \qquad\qquad (2.20c)$$
$$r_{j+1/2} = \frac{\bar{f}_{j+1} - \bar{f}_{j+2}}{\bar{f}_j - \bar{f}_{j+1}} \qquad\qquad (2.20d)$$
Here f¯j is the cell average value for state variable such as gas-phase concentration C, solid
loading q, or temperature T , θ(r) is the flux limiter and rj+1/2 is a ratio which measures the
smoothness of the profile. If rj+1/2 is close to 1, the profile is presumably smooth. If rj+1/2 is
far from 1, there must be a sharp discontinuity at xj . Depending on the value of rj+1/2 , θ(r)
applies proper correction. If the profile is smooth, θ(r) preserves second or higher-order nature
of the discretization; otherwise, near steep regions, θ(r) reduces the order of the discretization
to eliminate oscillations.
Flux limiters take various forms to perform aforementioned functions. Darwish et al. [59]
and Hirsch [91] give a detailed description of several flux limiters such as Minmod, Superbee,
and Van Leer limiters. While the Minmod limiter is too diffusive and Superbee does not perform adequately for smooth regions, Van Leer has properties between the two and thus is more desirable. Hence, we use the Van Leer flux limiter for our case studies, which has the following form:
$$\theta(r) = \frac{r + |r|}{1 + |r|} \qquad\qquad (2.21)$$
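A compact implementation of the limited wall-value reconstruction of Equations (2.20)-(2.21) is sketched below for the case v_{j+1/2} >= 0; the array of cell averages and the small epsilon used to guard the division are illustrative choices, not values from the thesis.

```python
import numpy as np

def van_leer(r):
    """Van Leer flux limiter, Eq. (2.21)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def wall_values_positive_flow(f_bar, eps=1e-12):
    """Limited wall values f_{j+1/2} for v_{j+1/2} >= 0, Eqs. (2.20a)-(2.20b).

    f_bar : cell averages including one ghost cell on each side,
            so the interior cells are f_bar[1:-1].
    """
    fm, f0, fp = f_bar[:-2], f_bar[1:-1], f_bar[2:]      # f_{j-1}, f_j, f_{j+1}
    denom = np.where(np.abs(fp - f0) > eps, fp - f0, eps)
    r = (f0 - fm) / denom                                 # smoothness ratio r_{j+1/2}
    return f0 + 0.5 * van_leer(r) * (fp - f0)             # second-order where smooth

# Illustrative profile with a sharp front
f_bar = np.array([1.0, 1.0, 1.0, 0.9, 0.1, 0.0, 0.0, 0.0])
print(wall_values_positive_flow(f_bar))
```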
Boundary conditions are incorporated in the finite volume scheme with the help of “ghost
cells”, as illustrated in Figure 2.7. Ghost cells are required because Dirichlet or Neumann
boundary conditions specified for the problem apply to the walls of the first or N th finite
volume and they need to be translated to corresponding cell average value to get accounted
for in the discretization scheme. Thus, boundary conditions at the walls are usually translated
to the average values of fictitious ghost cells using some form of extrapolation. For instance, if
$f_{inlet}$ and $f_{outlet}$ are given as boundary conditions and a linear extrapolation scheme is chosen, the ghost-cell values are obtained by extrapolating from the boundary value and the adjacent interior cell averages, where $\bar{f}_0$, $\bar{f}_{-1}$, $\bar{f}_{N+1}$, and $\bar{f}_{N+2}$ are the average values for the ghost cells.
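The exact extrapolation formulas are not reproduced here; as an assumption-laden illustration, a common linear choice treats the boundary value as the wall value and continues the resulting linear trend into the ghost cells, which gives the sketch below.

```python
import numpy as np

def add_ghost_cells(f_interior, f_inlet, f_outlet):
    """Pad the interior cell averages with two ghost cells on each side.

    Hypothetical linear extrapolation (an assumption for illustration, not
    necessarily the exact scheme of this thesis): the boundary value is taken
    as the wall value, so the first ghost cell mirrors the first interior cell
    about it, and the second ghost cell continues the same linear trend.
    """
    f1, fN = f_interior[0], f_interior[-1]
    g0 = 2.0 * f_inlet - f1          # f_bar_0
    gm1 = 2.0 * g0 - f1              # f_bar_{-1}, same slope continued
    gN1 = 2.0 * f_outlet - fN        # f_bar_{N+1}
    gN2 = 2.0 * gN1 - fN             # f_bar_{N+2}
    return np.concatenate(([gm1, g0], f_interior, [gN1, gN2]))

print(add_ghost_cells(np.array([0.8, 0.6, 0.4]), f_inlet=1.0, f_outlet=0.3))
```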
For temporal discretization of the DAE system obtained after spatial discretization, we employ
orthogonal collocation on finite elements (OCFE) technique for our work [40, 69]. OCFE is a
discretization scheme which combines the method of weighted residuals with the finite element
methods. The state temporal profiles are approximated at a finite number of points, the collocation points.
To illustrate the concept, we consider the following ordinary differential equation (ODE):
$$\frac{dy}{dt} = f(y(t), t), \qquad y(t_0) = y_0 \qquad\qquad (2.23)$$
For discretization, the time domain is partitioned into $n_E$ finite elements of length $h_i$, $i \in [1, \ldots, n_E]$, such that $\sum_{i=1}^{n_E} h_i = t_f - t_0$, where $t_0$ and $t_f$ are the initial and final times, respectively. Thus, the time at the end of each element $i$ is defined as $t_i = t_0 + \sum_{m=1}^{i} h_m$. Next, we represent the time derivative of the state as a Lagrange polynomial of order $n_C$, where $n_C$ is the number of collocation points within each element, so that the state profile in element $i$ is written as
$$y(t) = y_{i,0} + h_i \sum_{j=1}^{n_C} \Omega_j(\tau) \left.\frac{dy}{dt}\right|_{i,j}, \qquad \tau = \frac{t - t_{i-1}}{h_i} \qquad\qquad (2.24)$$
Here $\tau_j$, $j = 1, \ldots, n_C$, are the collocation points, which are usually the roots of an orthogonal polynomial of degree $n_C$; $y_{i,0}$ is the value of the state at the beginning of element $i$; $\left.\frac{dy}{dt}\right|_{i,j} = \frac{dy}{dt}(t_{i,j})$ with $t_{i,j} = t_{i-1} + h_i \tau_j$; and $\Omega_j(\tau)$ is a polynomial of order $n_C$ satisfying
$$\Omega_j(\tau) = \int_0^{\tau} \ell_j(\tau')\, d\tau', \qquad \tau \in [0, 1], \quad t \in [t_{i-1}, t_i], \quad i = 1, \ldots, n_E$$
$$\text{where} \quad \ell_j(\tau) = \prod_{k=1,\, k \ne j}^{n_C} \frac{\tau - \tau_k}{\tau_j - \tau_k}, \qquad j = 1, \ldots, n_C \qquad\qquad (2.25)$$
$$\Omega_j(0) = 0, \qquad \left.\frac{d\Omega_j}{d\tau}\right|_{\tau_k} = \delta_{j,k}, \qquad j, k = 1, \ldots, n_C$$
Here we use Radau collocation points with τj < τj+1 , j = 1, . . . , nC − 1, and τnC = 1 for every
element. Since the last collocation point lies at the end of the finite element i, continuity of
X
nC
dy
yi,0 = yi−1,0 + hi−1 Ωj (1) (2.26)
dt i−1,j
j=1
While state variables are continuous across the finite elements, control variables can present
discontinuities at the boundaries of the elements. We prefer Radau collocation points because
they allow us to set constraints easily at the end of each element in an optimization problem,
and to stabilize the system more efficiently if high index DAEs are present [33, 28]. To determine the polynomial coefficients $\left.\frac{dy}{dt}\right|_{i,j}$, we substitute Equation (2.24) into Equation (2.23) and enforce the resulting algebraic equations at the collocation points $\tau_j$, which leads to
$$\left.\frac{dy}{dt}\right|_{i,j} = \frac{dy}{dt}(t_{i,j}) = f(y(t_{i,j}), t_{i,j}), \qquad j = 1, \ldots, n_C, \quad i = 1, \ldots, n_E \tag{2.27}$$
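The following Python sketch assembles the OCFE equations (2.24)-(2.27) with 3-point Radau collocation for a scalar test ODE; the test problem dy/dt = −y, the number of elements, and the equal element lengths are assumptions chosen only to illustrate the structure, not part of the PSA model.

```python
# Minimal sketch of orthogonal collocation on finite elements (OCFE) with
# 3-point Radau points, applied to the scalar test ODE dy/dt = -y, y(0) = 1.
import numpy as np
from scipy.optimize import fsolve

tau = np.array([(4 - np.sqrt(6)) / 10.0, (4 + np.sqrt(6)) / 10.0, 1.0])  # Radau points
nC = len(tau)

def Omega_j(j, x):
    """Omega_j(x) = integral_0^x l_j(s) ds for the Lagrange basis l_j on tau."""
    others = np.delete(tau, j)
    lj = np.poly(others) / np.prod(tau[j] - others)   # coefficients of l_j
    return np.polyval(np.polyint(lj), x)              # antiderivative, zero at x = 0

# Omega[k, j] = Omega_j(tau_k); the last row gives Omega_j(1) used in Eq. (2.26).
Omega = np.array([[Omega_j(j, tau[k]) for j in range(nC)] for k in range(nC)])

def f(y, t):
    return -y                                          # test ODE right-hand side

t0, tf, nE = 0.0, 2.0, 4
h = (tf - t0) / nE                                     # equal element lengths for simplicity
y_elem = 1.0                                           # y_{i,0}, state entering element i
for i in range(nE):
    t_elem = t0 + i * h

    def resid(K):                                      # K[j] = (dy/dt)_{i,j}
        y_col = y_elem + h * Omega @ K                 # state at collocation points, Eq. (2.24)
        return K - f(y_col, t_elem + h * tau)          # collocation equations, Eq. (2.27)

    K = fsolve(resid, np.zeros(nC))
    y_elem = y_elem + h * (Omega[-1] @ K)              # continuity across elements, Eq. (2.26)
    print(f"y({t_elem + h:.2f}) = {y_elem:.6f}   (exact {np.exp(-(t_elem + h)):.6f})")
```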
In summary, the physics of pressure swing adsorption processes can be modeled mathematically in the form of PDAEs. It is clear from the description that the fundamentals at the molecular level are relatively well understood and characterized, and it is possible to construct fairly accurate models to predict the dynamic behavior of pressure swing adsorption processes. Currently, research efforts are aimed at improvements in multicomponent mixture isotherms, and at a better understanding of mass transfer phenomena, axial dispersion and fluid transport within packed beds.
We also discussed operation strategies for two-bed as well as multi-bed PSA processes, and
mentioned that industrial PSA systems carry out high purity separations with the help of so-
phisticated cycles involving complex sequences of operating steps. Designing such complicated
sequences and PSA processes is generally non-intuitive, and thus a systematic methodology
is desired which can ameliorate the arduous task of synthesizing PSA cycles. In the next
few chapters, we present a novel optimization-based framework to design optimal PSA cycles,
and illustrate it with the help of examples motivated from the application of PSA for carbon
capture.
Finally, we showed that the bed model for PSA processes is defined by hyperbolic partial differential and algebraic equations (PDAEs), with strong nonlinearities arising from non-isothermal effects and nonlinear adsorption isotherms, and with solution profiles characterized by steep adsorption fronts.
PSA Superstructure
Synopsis
This chapter addresses the systematic design of novel PSA cycles for a given application. In particular, we present a novel PSA superstructure to simultaneously determine new configurations and design parameters. The superstructure consists of two beds, one of which acts as an adsorbing bed and the other as a desorbing bed. The interconnections between the two beds are governed by time-dependent control variables, such as fractions of the light and the heavy product recycle. The superstructure is rich enough to predict a number of different PSA operating steps, which are accomplished by varying these control variables over time. We formulate an optimal control problem for the superstructure with the partial differential and algebraic equations of the PSA system and the cyclic steady state condition. We also present the PDAEs for the bed model with the connectivity equations of the superstructure. Large-scale optimization capabilities have enabled us to adopt a complete discretization methodology to solve the optimal control problem.
3.1 Motivation
Industrial PSA/VSA systems are quite intricate, involving multiple adsorber columns which undergo a complex sequence of operating steps. Synthesizing such configurations for given commercial specifications has so far relied on rules of thumb, past experience in adsorption design, or immense experimental
effort with bench- or pilot-scale processes. A systematic methodology to design, evaluate and
optimize novel PSA cycle configurations hasn’t been reported in the literature to date due to
the inherent complexity of the process. Cycle design with accurate, reliable, and rigorous PSA
bed models is considered prohibitive because of the expense and computational time involved.
Very few studies in the literature have tried to address this issue. All of these studies optimize the process for given kinds and fixed sequences of operating steps, but do not discuss how these
steps should be chosen to form a cycle. Zhang and Webley [211] outlined an approach for
cycle development by understanding the roles of individual operating steps and adsorption
fronts. However, they identified optimal configurations with the help of a pre-decided set of
operating steps and a simplified mathematical model. Chiang [46] proposed simple arithmetic-
based heuristics, while Smith et al. [177] extended Chiang’s work to propose a mixed-integer
nonlinear programming (MINLP) based approach to obtain optimal number of beds required
to execute a fixed sequence of operating steps. Smith et al. [178, 179] also suggested a 3-step
scenario to design an industrial PSA system, but again with a known cycle of operating steps.
Recently, Nikolić et al. [136, 137] proposed a state-task network (STN) based framework to determine the optimal number of beds, with operating steps forming the states of the network. The kinds and sequences of operating steps chosen were fixed in their case as well. Moreover, the STN developed was not exhaustive and missed many basic steps such as product repressurization,
co-current depressurization, and desorption with purge stream coming from another bed.
In this work, we present a systematic optimization-based framework which determines an optimal sequence of operating steps in a PSA cycle without any assumption on the kinds of steps that should be included in the cycle.

[Figure 3.1: The proposed 2-bed PSA superstructure, showing the co-current bed (CoB), the counter-current bed (CnB), the light product (LP) and top reflux (TR) streams, the pressure-reducing valve, and the stream variables Ca,i(t), Ta(t), va(t), Pads(t), and Pd(t).]

With the development of optimization strategies for process synthesis, it has become possible to address this design problem systematically. As described in the next section, this approach relies on the formulation of an optimal control problem.
3.2 Methodology
3.2.1 Superstructure
Figure 3.1 illustrates the proposed 2-bed PSA superstructure. It has a co-current bed (CoB)
and a counter-current bed (CnB) that determine co-current and counter-current operating
steps in the cycle, respectively. We consider only two beds to ensure that the direction of
the flow, and thus the superficial velocity, remains co-current for CoB and counter-current
for CnB. This strategy avoids flow reversals in the bed, and does not require additional bed
connections with embedded logical conditions in order to realize different operating steps. This
superstructure is consistent with the concept of unibed models [114], where no more than two
beds interact at the same time, and the steps can be grouped into adsorbing steps and desorbing
steps. Consequently, it can accomplish a wide variety of operating steps with just a single bed
connection, as shown in Figure 3.1. Furthermore, this helps to avoid discrete variables and embedded logical conditions in the problem formulation.
The superstructure is designed to get the light product from the upper end (light end) of
CoB and heavy product from the lower end (heavy end) of CnB. The time dependent variables
β(t) and α(t) determine the fraction of the light product and the heavy product streams that
go in the top and the bottom reflux, respectively. If the feed gas (or inlet gas) comes at a
low pressure which is close to atmospheric, it is first compressed from its pressure Pinlet to
Pf eed through the optional inlet compressor, before being compressed from Pf eed to Pa using
the feed compressor. The time dependent feed fraction φ(t) determines the feeding strategy.
For CoB, pressure is specified at the light end by Pads , while the pressure at the other end
Pa is determined from the pressure drop in this bed. The velocity va , concentration for ith
component Ca,i , and temperature Ta at the light end are determined from the outlet flux.
Similarly for CnB, pressure is specified at the heavy end by Pdes , while Cd,i , Td and vd are
obtained from the output flux, and Pd is obtained from the pressure drop. The superstructure
also incorporates compressors and valves to account for different pressure levels in the beds. Different operating steps are realized by varying the control variables α(t), β(t), φ(t), Pads(t) and Pdes(t). For instance, as shown in Figure
3.2, if we set α = 0.5, β = 0 and φ = 0 then the feed and the top reflux are turned off, and
a part of the heavy-product is recycled back to the co-current bed as a bottom reflux. Thus,
this results in a counter-current depressurization (CnD) step for the counter-current bed and
a heavy reflux (HR) step for the co-current bed. Similarly, if we set α = 0, β = 0.5 and φ = 0
then we realize a co-current depressurization (CoD) step and a light reflux (LR) step, or if we set α = 0, β = 0 and φ = 1, we realize a feed (F) step.

[Figure 3.2: Examples of operating steps realized by different settings of α, β, and φ: heavy reflux (HR), co-current depressurization (CoD), and feed (F).]
[Figure 3.3: Temporal profiles of α(t), β(t), and φ(t) (taking values 0, fβ, fφ, or 1 and switching at tswitch) that translate into the 2-bed 4-step Skarstrom cycle: feed pressurization (FP) and feed (F) for CoB, with counter-current depressurization (CnD) and light reflux (LR) for CnB.]
As a consequence, temporal profiles of α(t), β(t), φ(t), Pads (t) and Pdes (t) result in a se-
quence of operating steps. By translating these temporal profiles into a sequence of meaningful
operating steps, we eventually obtain a complete PSA cycle. For instance, the profiles of α(t),
β(t), and φ(t) shown in Figure 3.3 translate into the classical 2-bed 4-step Skarstrom cycle
(FP,F,CnD,LR) [176]. CoB generates pressurization (FP) and feed (F) steps, while CnB si-
multaneously generates depressurization (CnD) and light reflux (LR) steps. Thus, the overall
cycle includes these four steps (FP,F,CnD,LR). In an actual 2-bed PSA unit, after performing
its steps, CoB will follow the steps of CnB and vice-versa. However, in the mathematical
framework, this is realized by giving final conditions of CoB as the initial conditions for CnB
and vice-versa, thus modeling the true 2-bed behavior. In other words, we utilize the two-bed superstructure to emulate multibed operation. The superstructure can realize a rich variety of cycles because of the endless number of shapes that the profiles of α(t), β(t), φ(t), Pads(t) and Pdes(t) can take. As a consequence, we obtain an optimal sequence of operating steps, along with other decision variables such as cycle time, step times and bed dimensions, by solving the following optimal control problem:
$$\begin{aligned}
\min \quad & \Phi\big(z(x,t_f),\, y(x,t_f),\, \alpha(t_f),\, \beta(t_f),\, \phi(t_f),\, P_{ads}(t_f),\, P_{des}(t_f),\, z_0,\, p\big) \\
\text{s.t.} \quad & f\!\left(\frac{\partial z}{\partial t},\, \frac{\partial z}{\partial x},\, z(x,t),\, y(x,t),\, \alpha(t),\, \beta(t),\, \phi(t),\, P_{ads}(t),\, P_{des}(t),\, z_0,\, p\right) = 0 \\
& z_{CoB}(x, 0) = z_{CnB}(x, t_f), \quad z_{CnB}(x, 0) = z_{CoB}(x, t_f) \\
& s\big(z(x,t),\, y(x,t),\, \alpha(t),\, \beta(t),\, \phi(t),\, P_{ads}(t),\, P_{des}(t),\, p\big) = 0 \\
& g\big(z(x,t),\, y(x,t),\, \alpha(t),\, \beta(t),\, \phi(t),\, P_{ads}(t),\, P_{des}(t),\, p\big) \le 0
\end{aligned} \tag{3.1}$$
Here Φ is the objective function related to overall power consumption, component purity or
recovery. It can depend upon differential variables z(x, t), algebraic variables y(x, t), control
variables α(t), β(t), φ(t), Pads (t), and Pdes (t), initial conditions z0 and other decision variables
p. The first equation represents the PDAE-based model for the PSA system, while the second
equation is the cyclic steady state (CSS) condition (see Table 3.1). As mentioned before, the
CSS condition is implemented by giving the final conditions of CoB as the initial condition
for CnB and vice versa. Additional constraints for the optimization problem are given by the
algebraic equations s and the inequalities g. The control variables α(t), β(t) and φ(t) are
fractions bounded between 0 and 1. Other control variables, Pads(t) and Pdes(t), and the remaining decision variables p are bounded by their physical and operational limits.
It is important to note that although optimal 2-bed PSA configurations are constructed from the optimal profiles of α(t), β(t), φ(t), Pads(t) and Pdes(t), multibed cycles (with more than two beds) follow immediately from these solutions. These are generated by staggering the steps over multiple beds and ensuring that, at every point in time, at least one bed is undergoing a product-delivering step.
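To make the translation from control profiles to named operating steps concrete, the sketch below classifies a sampled instant (α, β, φ, and whether CoB is pressurizing) into step names for the two beds; the thresholds and the reduced step vocabulary are simplifying assumptions, since in this work the steps are inferred by inspecting the full optimal profiles rather than by a lookup rule.

```python
# Illustrative sketch: map sampled control values to named operating steps for
# the two beds. The rules and vocabulary are simplifying assumptions.
def classify_step(alpha, beta, phi, cob_pressurizing):
    """Return (CoB step, CnB step) for one sampling instant."""
    # Co-current bed: what enters it at the heavy end?
    if phi > 0:
        cob = "feed pressurization (FP)" if cob_pressurizing else "feed (F)"
    elif alpha > 0:
        cob = "heavy reflux (HR)"
    else:
        cob = "co-current depressurization (CoD)"
    # Counter-current bed: is it purged with light product or simply evacuated?
    cnb = "light reflux (LR)" if beta > 0 else "counter-current depressurization (CnD)"
    # Both refluxes on with no feed: the beds only exchange material.
    if alpha > 0 and beta > 0 and phi == 0:
        cob = cnb = "total reflux"
    return cob, cnb

# Skarstrom-like samples from Figure 3.3: (alpha, beta, phi, CoB pressurizing?)
for sample in [(0.0, 0.0, 0.35, True), (0.0, 0.3, 1.0, False)]:
    print(classify_step(*sample))
```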
We consider a detailed PDAE-based mathematical model for the optimal control problem. The
model is fairly general and can also be extended beyond the following assumptions:
2. There are no radial variations in temperature, pressure and concentrations of the gases
3. The gas and the solid phases are in thermal equilibrium, and the bulk density of the solid phase remains constant.
6. The adsorption rate is approximated by the linear driving force (LDF) expression. Sircar and Hufton [172] demonstrated that the LDF model is sufficient to capture the overall kinetics of adsorption, since estimating process performance requires several sets of averaging of kinetic properties and the effect of local characteristics is lumped.

Table 3.1 (excerpt): Cyclic steady state (CSS) condition for beds CoB and CnB
$$z_{CoB}(x, 0) = z_{CnB}(x, t_f), \qquad z_{CnB}(x, 0) = z_{CoB}(x, t_f) \tag{3.13}$$
$$z(x, t) = \left[C_L(x,t),\ C_H(x,t),\ q_L(x,t),\ q_H(x,t),\ T(x,t)\right]^T$$
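As a small illustration of condition (3.13), the following sketch measures how far a pair of bed profiles is from cyclic steady state; the state arrays are placeholders for the discretized profiles of CL, CH, qL, qH, and T along the bed.

```python
# Sketch of checking the cyclic steady state condition (3.13): after one cycle,
# the initial CoB profiles must match the final CnB profiles and vice-versa.
import numpy as np

def css_residual(z_CoB_0, z_CoB_tf, z_CnB_0, z_CnB_tf):
    """Infinity-norm violation of z_CoB(x,0) = z_CnB(x,tf) and z_CnB(x,0) = z_CoB(x,tf)."""
    r1 = np.max(np.abs(np.asarray(z_CoB_0) - np.asarray(z_CnB_tf)))
    r2 = np.max(np.abs(np.asarray(z_CnB_0) - np.asarray(z_CoB_tf)))
    return max(r1, r2)
```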
Based on the above assumptions, the mathematical model for the PSA superstructure is listed
in Table 3.1. The equations are written for light product L and heavy product H. Here we
consider a lumped mass transfer coefficient for the LDF equation. Since a smaller magnitude of
UA makes the energy balance a weak function of the ambient temperature, Tw is assumed constant.
As a convention, flow in the counter-current bed is considered negative and a minus sign is
used for vd . Since the bed model is based on fluxes, the feed throughput and the bed diameter
can be adjusted as long as the specified feed flux is achieved and the model assumptions are
not violated. The purities and recoveries of the light (L) and heavy (H) products are calculated as
$$\text{purity}_L = \frac{\displaystyle\int (1-\beta(t))\, v_a(t)\, C_{a,L}(t)\, dt}{\displaystyle\int (1-\beta(t))\, v_a(t) \sum_i C_{a,i}(t)\, dt} \tag{3.14a}$$
$$\text{purity}_H = \frac{\displaystyle\int (1-\alpha(t))\, (-v_d(t))\, C_{d,H}(t)\, dt}{\displaystyle\int (1-\alpha(t))\, (-v_d(t)) \sum_i C_{d,i}(t)\, dt} \tag{3.14b}$$
$$\text{recovery}_L = \frac{\displaystyle\int (1-\beta(t))\, v_a(t)\, C_{a,L}(t)\, dt}{Q_{feed,L}} \tag{3.14c}$$
$$\text{recovery}_H = \frac{\displaystyle\int (1-\alpha(t))\, (-v_d(t))\, C_{d,H}(t)\, dt}{Q_{feed,H}} \tag{3.14d}$$
$$Q_{feed,i} = \int \phi(t)\, v_{feed}\, C_{feed,i}\, dt, \qquad i \in \{L, H\} \tag{3.14e}$$
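For illustration, the sketch below evaluates the heavy-product purity and recovery of Equations (3.14b), (3.14d), and (3.14e) by trapezoidal quadrature of sampled profiles; the profile arrays and component keys are placeholders, and in the actual NLP these integrals are expressed through the collocation discretization rather than evaluated a posteriori.

```python
# Sketch of evaluating the heavy-product performance integrals of Equations
# (3.14b), (3.14d), and (3.14e) from sampled profiles. All inputs are placeholders.
import numpy as np
from scipy.integrate import trapezoid

def heavy_purity_recovery(t, alpha, v_d, C_d, heavy, phi, v_feed, C_feed):
    """Return (purity_H, recovery_H).

    t      : temporal grid over one cycle
    alpha  : heavy reflux fraction alpha(t) sampled on t
    v_d    : (negative) velocity at the heavy end of CnB, sampled on t
    C_d    : dict component -> concentration profile at the heavy end of CnB
    heavy  : key of the heavy component (e.g. "CO2")
    phi, v_feed, C_feed : feed fraction profile, feed velocity, feed composition
    """
    w = (1.0 - alpha) * (-v_d)                                # flux leaving as heavy product
    heavy_out = trapezoid(w * C_d[heavy], t)
    total_out = trapezoid(w * sum(C_d.values()), t)
    Q_feed_H = trapezoid(phi * v_feed * C_feed[heavy], t)     # Eq. (3.14e)
    return heavy_out / total_out, heavy_out / Q_feed_H
```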
In these expressions, Qfeed,i is the feed flux of component i. The total power consumption, given by the following equations, is the sum of the work done by the compressors and the vacuum generator. We note that in the following equations we do not consider the compression work for the light or the heavy product.
$$\begin{aligned}
W_{total} = \int \Bigg[ \frac{\gamma R T_d}{\gamma - 1} \Bigg( & \frac{\sum_i \big(\phi(t)\, v_{feed}\, C_{feed,i} + \alpha(t)(-v_d)\, C_{d,i}\big)}{\eta_c} \left( \left(\frac{P_a}{P_{feed}}\right)^{\frac{\gamma-1}{\gamma}} - 1 \right) \\
& + \frac{\sum_i \alpha(t)(-v_d)\, C_{d,i}}{\eta_h} \left( \min\left\{ \left(\frac{P_{feed}}{P_{atm}}\right)^{\frac{\gamma-1}{\gamma}},\ \left(\frac{P_{feed}}{P_{des}}\right)^{\frac{\gamma-1}{\gamma}} \right\} - 1 \right) \\
& + \max\left\{ 0,\ \frac{\sum_i (-v_d)\, C_{d,i}}{\eta_v} \left( \left(\frac{P_{atm}}{P_{des}}\right)^{\frac{\gamma-1}{\gamma}} - 1 \right) \right\} \Bigg) \\
& + \frac{\gamma}{\gamma - 1}\, \frac{\phi(t)\, v_{feed}\, P_{feed}}{\eta_{fg}} \left( \left(\frac{P_{feed}}{P_{inlet}}\right)^{\frac{\gamma-1}{\gamma}} - 1 \right) \Bigg]\, dt
\end{aligned} \tag{3.15a}$$
$$\text{Power} = \frac{W_{total}}{\displaystyle\int (1-\alpha(t))\, (-v_d(t))\, C_{d,H}(t)\, dt} \tag{3.15b}$$
Here, the max function ensures that the work done by the vacuum generator is zero when Pdes
is more than the atmospheric pressure Patm . Similarly, since the vacuum generator discharges
heavy reflux at Patm , the min function ensures a proper upstream pressure for the heavy
product compressor. Since the min and max functions introduce non-differentiability, smoothing approximations are adopted [20], with a value of 0.01 used for the smoothing parameter ε.
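Since the exact smoothing expressions from [20] are not reproduced above, the following sketch shows one common choice of differentiable approximations to max(0, ·) and min(·, ·) with a small parameter ε; it is meant only to convey the idea, not the precise form used here.

```python
# One common smoothing of the max and min operators; a sketch only, since the
# exact expressions from reference [20] are not reproduced in the text.
import numpy as np

EPS = 0.01   # smoothing parameter, as quoted in the text

def smooth_max0(x, eps=EPS):
    """Smooth, everywhere-differentiable approximation of max(0, x)."""
    return 0.5 * (x + np.sqrt(x * x + eps * eps))

def smooth_min(a, b, eps=EPS):
    """Smooth approximation of min(a, b)."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + eps * eps))

# Away from the kink the error is O(eps**2); at the kink it is eps/2.
print(smooth_max0(-2.0), smooth_max0(0.0), smooth_max0(2.0))
```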
We adopt a complete discretization approach to solve the system of PDAEs in Table 3.1. The
PDAEs are converted into a set of algebraic equations by discretizing the state and the control
variables both in space and time. As a result, the PDAE-constrained optimal control problem
(3.1) gets converted into a large-scale nonlinear programming (NLP) problem. One of the
advantages of this approach is that it directly couples the solution of the PDAE system with
the optimization problem. The model equations are solved only once at the optimum and the
excessive computational effort of getting intermediate solutions is avoided [28]. However, the
performance of this approach substantially depends upon the optimization solver, and therefore
it is crucial to choose an efficient NLP solver. Hence, we use the state-of-the-art NLP solver
IPOPT 3.4 for our case studies. This interior point solver uses a barrier method to handle
inequalities and exact second derivative information for faster convergence to the optimum
[195].
To capture steep adsorption fronts, avoid oscillations in the solution, and model conser-
vative properties of the system, we apply a first-order finite volume method for spatial dis-
cretization. For the temporal domain, we apply orthogonal collocation on finite elements with
a Radau collocation scheme. Radau collocation allows us to set constraints at the ends of
the finite elements [103]. A 3-point collocation scheme is used for state variables while con-
trol variables are considered to be piecewise constant. While control variables are allowed to change discontinuously at the element boundaries, those boundaries would otherwise be fixed a priori. We therefore consider a moving finite element strategy in which the size of each temporal finite element is
considered a decision variable. With moving finite elements, it is possible to locate optimal
breakpoints of the control variables with variable element lengths. Appropriate bounds are
imposed on the variable element lengths of each finite element to guarantee accuracy of the
discretization.
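In compact form, the moving finite element strategy described above adds the element lengths as decision variables subject to
$$\sum_{i=1}^{n_E} h_i = t_f - t_0, \qquad h_i^{L} \le h_i \le h_i^{U}, \quad i = 1, \ldots, n_E,$$
so that the control breakpoints $t_i = t_0 + \sum_{m=1}^{i} h_m$ can be located optimally; the bound values $h_i^L$ and $h_i^U$ are left symbolic here since they are problem-specific.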
Since a complete discretization approach without any error-checking mechanism for temporal integration can cause inaccuracies to creep into the NLP solution obtained from IPOPT, verification of the solution with an accurate dynamic simulation becomes essential. This verification is performed at the
optimal values of the decision variables obtained from IPOPT. The DAE system obtained af-
ter applying method of lines is integrated in MATLAB at the optimal values of the decision
variables. The profiles and performance variables obtained from MATLAB are then compared
with those obtained from IPOPT. For the method of lines approach in MATLAB, we use a
first-order finite volume method for spatial discretization and ode15s for temporal integration.
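An analogous verification can be sketched in Python: scipy's stiff BDF integrator plays the role of MATLAB's ode15s, and a toy upwind-discretized advection equation stands in for the semi-discretized bed model; the model, grid, and parameter values are assumptions chosen only for illustration.

```python
# Sketch of a method-of-lines verification with a stiff integrator, analogous
# to the MATLAB/ode15s check described above. The toy advection model stands
# in for the semi-discretized PSA bed equations.
import numpy as np
from scipy.integrate import solve_ivp

N, L, v = 20, 1.0, 0.05                 # finite volumes, domain length (m), velocity (m/s)
dx = L / N

def rhs(t, c, c_inlet=1.0):
    """First-order upwind semi-discretization of dc/dt + v dc/dx = 0."""
    c_up = np.concatenate(([c_inlet], c[:-1]))   # inlet value acts as the upstream neighbour
    return -v * (c - c_up) / dx

sol = solve_ivp(rhs, (0.0, 10.0), y0=np.zeros(N), method="BDF", rtol=1e-6, atol=1e-8)
print("outlet concentration at t = 10 s:", sol.y[-1, -1])
```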
Finally, because all of the control variables appear linearly, problem (3.1) is a singular
optimal control problem. In singular control problems, the optimal control profiles cannot be
determined directly from the stationarity condition of the Hamiltonian. The Euler-Lagrange
equations obtained after applying the maximum principle to (3.1) are high index in nature and
ill-conditioned. This does not affect optimal controls that lie at their bounds, where the solution
is “bang-bang”. However, a bang-bang solution may not always be guaranteed with complex
nonlinear state equations of PSA system as the Hamiltonian derivative w.r.t. controls can be
zero for some time interval, leading to a singular control profile. Repeated time differentiations
of the Hamiltonian derivative can recover the control, but identifying the beginning and the
end of a singular arc is often difficult. Applying orthogonal collocation to singular problems can
also reflect this behavior with an ill-conditioned reduced Hessian and solutions characterized
by oscillations that do not abate with increasing mesh refinement [102]. To address singular
problems, several approaches have been suggested which propose regularizations to improve
eigenvalues of the reduced Hessian, and to guarantee a unique solution [97, 180, 101]. In
particular, regularizations have been performed through coarse discretization of the control
profile [161]. Applying a coarse control discretization hampers rapid decay of the eigenvalues
of the reduced Hessian, thus allowing a reasonable solution to the singular problem. Hence, we
adopt this relatively simple regularization heuristic to ameliorate the singular nature of the
control problem (3.1). Such an approach, coupled with the moving finite element strategy, improves the bang-bang nature of the optimal solution, locates singular arcs, and minimizes their lengths.
Synopsis
This chapter applies the superstructure approach to the capture of CO2 from flue gas streams. In most commercial PSA cycles, the weakly adsorbed component
in the mixture is the desired product, and enriching the strongly adsorbed CO2 is not a
concern. On the other hand, it is necessary to concentrate CO2 to high purity to reduce
CO2 sequestration costs and minimize safety and environmental risks. Thus, it is necessary
to develop PSA processes specifically targeted to obtain pure strongly adsorbed component.
We demonstrate the superstructure approach for case studies related to post-combustion CO2
capture. In particular, optimal PSA cycles are synthesized which maximize CO2 recovery and
minimize overall power consumption. The results show the potential of the superstructure to
predict PSA cycles with up to 98% purity and recovery of CO2 . Moreover, for purity of over
90%, these cycles can recover CO2 from atmospheric flue gas with a low power consumption of
465 kWh/tonne CO2 captured. The superstructure approach is, therefore, quite useful for assessing the suitability of PSA for post-combustion carbon capture.
4.1 Introduction
Today, fossil fuels provide about 85% of the global energy demand and the outlook is that they
will remain the dominant source of energy for decades to come. Consequently, global energy-
related CO2 emissions, especially from power plants that burn fossil fuels, have increased,
thereby increasing CO2 concentration levels in the atmosphere [93]. One option to mitigate
the emission of CO2 is to capture it from emission sources, store it in the ocean or underground,
or use it for enhanced oil or coal bed methane recovery. Before CO2 can be sequestered it must
be separated and concentrated from flue gas with a low CO2 concentration.
There are a variety of approaches to CO2 separation from other flue gas components, such
as gas absorption, membranes, cryogenic distillation, gas adsorption and others, each with their
own pros and cons [3]. Absorption is a well-established method for separating CO2 . Currently,
absorption based technologies are commercially utilized for CO2 capture, in which different
kinds of amines are used as solvents for absorbing CO2 from flue gas. The greatest advantage
of absorption processes is that these amine-based solvents can be easily regenerated. Moreover,
these processes can capture CO2 with purity higher than 95% which is enough for sequestration.
However, solvent regeneration energy is quite high for absorption processes. Typical values of
energy requirement for the leading absorption technologies range from 765 to 950 kWh/tonne
CO2 captured (excluding the energy requirement for CO2 compression) [94]. Moreover, the solvent
can form corrosive solutions with the flue gas. Another technology is cryogenic distillation
in which CO2 is captured by liquefaction. One of its biggest advantages is that its product is
liquid CO2 which is ready for transport for sequestration. Moreover, CO2 recovery can exceed
99.9%. However, these processes are extremely energy intensive and cannot tolerate H2 O, O2 ,
SOx , and NOx in the feed stream [3]. Membrane separation processes, on the other hand,
though simple, suffer from the lack of membranes that are both sufficiently selective and sufficiently permeable to CO2. This results in a low-purity CO2 product. Recent developments
have shown pressure/vacuum swing adsorption to be a promising option for separating CO2 .
PSA/VSA processes operate at ambient temperatures and do not require any solvent or thermal energy
for CO2 recovery or sorbent regeneration. Only feed compression and vacuum generation
constitute the key energy requirements which can be quite low. Moreover, sorbents can be
designed which can withstand H2 O, O2 , SOx , and NOx in the flue gas stream.
PSA processes have been widely applied for the removal of CO2 from various feed mixtures,
such as CO2 in the steam reformer off-gas, natural gas and flue gas mixtures [173]. They are
also commercially used to remove trace amounts of CO2 from air [113]. In these commercial
PSA cycles, the weakly adsorbed (or light) component in the mixture is the desired product
and enriching the strongly adsorbed (or heavy) component (in this case, CO2 ) is not a concern.
On the other hand, for CO2 sequestration, it is necessary to concentrate CO2 to a high purity
to reduce the compression and transportation costs, and to minimize safety and environmental risks.
Typically adsorbents preferentially adsorb CO2 from a flue gas mixture, consequently mak-
ing it a heavy product. The conventional PSA cycles are inappropriate for concentrating heavy
product because the light product purge step (or the light reflux step) in these cycles uses a
portion of the light product gas, which necessarily dilutes the heavy component in the heavy
product stream. As a result, a pure light component is easy to attain from such cycles, but not
a pure heavy component. Thus, it is necessary to develop PSA processes specifically targeted towards obtaining a pure, strongly adsorbed heavy component.
Because the product purity of the heavy component is limited by the gas mixture occupying
the void spaces in the bed, its purity can be increased by displacing the gas mixture in the void
spaces with a pure heavy product gas. For instance, for the separation of N2 -CO2 mixture,
the displacement can be accomplished by purging the bed with CO2 after the adsorption step
in the PSA cycle. Hence, to obtain a pure heavy product gas, a heavy product pressurization
step or a heavy reflux step is necessary in the cycle, similar to a light product pressurization
step or a light reflux step in the conventional PSA cycles. This idea was first suggested in a
patent by Tamura [183], and has been incorporated in most of the PSA cycles that have been
suggested in the literature for high purity CO2 separation from flue gas.
A fairly comprehensive review of the previous studies on PSA cycles for concentrating
CO2 from flue gas is presented in the next section. This review highlights the difficulties
associated with choosing one PSA cycle over another for a given application. From these
studies it is not clear why a particular cycle was chosen or why one cycle performed better than another.
Since 1992, when the Japanese power industry started investigating flue gas CO2 removal using
gas adsorption [90, 95, 96, 158, 209], a multitude of PSA/VSA cycles have been developed in
the literature to produce pure CO2 from a flue gas mixture. We provide a summary of these
studies in Table 4.1. In this table, yf is the CO2 % in feed, while pCO2 and rCO2 are CO2 purity
and recovery in the heavy product stream, respectively, and Pl is the vacuum/low pressure
used to extract CO2 at high purity. The terminology for various operating steps in a cycle
is adopted from Reynolds et al. [151]. Most of these studies are bench-scale and deal with relatively low feed throughputs.
Ritter and co-workers have studied numerous PSA cycles for CO2 capture from a feed
at high temperature using K-promoted Hydrotalcite as the adsorbent [151, 149, 150]. They
have emphasized the importance of including heavy reflux step to obtain heavy product at a
high purity. They compared seven different 4-bed 4-step, 4-bed 5-step and 5-bed 5-step cycle
configurations with and without heavy reflux step. In another work [148], they analyzed nine
different PSA configurations and achieved better purities and recoveries for CO2 , although, at
an extremely small feed throughput. Kikkinides et al. [107] studied a 4-bed 4-step vacuum
swing process and improved CO2 purity and recovery by allowing significant breakthrough of
CO2 from the light end of the column undergoing heavy reflux, and then recycling the effluent
from this light end back to the column with the feed. Chue et al. [52] compared activated
carbon and zeolite 13X using a 3-bed 9-step VSA process. They suggested that despite a
Table 4.1: PSA cycles suggested in the literature for post-combustion CO2 separation
(Columns: configuration; operating step sequence^a; adsorbent^b; yf, CO2 % in feed; pCO2, CO2 purity (%); rCO2, CO2 recovery (%); Pl, vacuum/low pressure (kPa); feed throughput (kgmol/hr); reference)
5-bed 5-step F,HR,CnD,LR,LPP HTlc 15 72 82 11.49 0.001 [151]d
5-bed 5-step F,HR,CnD,LR,LPP HTlc 15 76 49 11.49 0.003 [151]d
4-bed 4-step F,HR,CnD,LPP HTlc 15 83 17 11.49 0.001 [151]d
5-bed 5-step F,HR,CnD,LR,LPP HTlc 15 98.7 98.7 11.64 0.00052 [148]d
5-bed 5-step F+R,HR,CnD,LR,LPP HTlc 15 98.6 91.8 11.64 0.00052 [148]d
4-bed 4-step F,HR,CnD,LPP HTlc 15 99.2 15.2 11.64 0.006 [148]d
4-bed 4-step F+R,HR,CnD,LPP HTlc 15 99.2 15.2 11.64 0.006 [148]d
4-bed 4-step LPP,F+R,HR,CnD AC 17 99.9 68 10.13 16.19 [107]d
3-bed 8-step FP,F,CoD,R,N,HR,CnD,N 13X 16 99 45 6.67 0.049 [52]
4-bed 8-step FP,F,HR,LEE,CnD,LR,LEE,N NaX 13 95 50 10 1.116 [182]c,d
2-bed 4-step FP,F,CnD,LR 13X 10 70 68 4 0.331 [142]d
2-bed 6-step LEE,FP,F,LEE,CnD,LR 13X 10 82 57 6.67 0.331 [142]d
3-bed 5-step FP,F,HR,CnD,LR 13X 10 83 54 6.67 0.331 [142]d
2-bed 4-step FP,F,CnD,LR 13X 8.3 78 50 101.3 0.004 [85]c
3-bed 8-step FP,F,CoD,LEE,HPP,HR,CnD,LEE AC 17 99.8 34 10.13 0.027 [132]c,d
3-bed 7-step FP,F,LEE,HR,N,CnD,LEE AC 13 99 55 10.13 0.204 [133]c,d
3-bed 8-step FP,F,CoD,LEE,HPP,HR,CnD,LEE 13X 13 99.5 69 5.07 0.025 [50]c,d
2-bed 4-step HPP,FP,CoD,CnD 13X 20 48 94 5.07 — [51]
2-bed 5-step HPP,FP,F,CoD,CnD 13X 20 43 88 5.07 — [51]
3-bed 4-step LPP,F,CnD,LR 13X 20 58 75 5.07 — [51]
3-bed 6-step LPP,FP,F,HR,CoD,CnD 13X 20 63 70 5.07 — [51]
2-bed 4-step FP,F,CnD,LR 13X 15 72 94 90 30.35 [111]
1-bed 4-step FP,F,CoD,CnD 13X 15 90 94 70 1.741 [111]
2-bed 4-step LPP,F,CnD,LR 13X 15 52 66 10 0.007 [86]c
3-bed 5-step LPP,F,HR,CnD,LR 13X 15 83 66 10 48.57 [86]c
3-bed 6-step F,LEE,CnD,LEE 13X 12 83 60 4 0.193 [44]c,d
3-bed 9-step F,LEE,HR,CnD,LEE 13X 12 95 60 5 0.193 [212]c,d
3-bed 9-step F,LEE,I,LEE,CnD,LEE,FP 13X 12 92.5 75 3 0.327 [203]d
^a Cycle-step legend: CnD, counter-current depressurization; CoD, co-current depressurization; FP, feed pressurization; F, feed or adsorption; HPP, heavy product pressurization; HR, heavy reflux; LEE, light end equalization; LPP, light product pressurization; LR, light reflux; N, null or idle; R, recycle. ^b Adsorbent legend: HTlc, K-promoted hydrotalcite; NaX, 13X, molecular sieve zeolites; AC, activated carbon. ^c Studies with experimental results. ^d Multicomponent study.
high heat of adsorption of CO2 , zeolite 13X is better because of its higher working capacity,
lower purge requirement, and higher equilibrium selectivity. PSA cycle sequences that took
advantage of both light and heavy reflux steps were explored by Takamura et al. [182] and Park
et al. [142]. Takamura et al. studied a 4-bed 8-step VSA process while Park et al. analyzed
three different cycle configurations for VSA processes. While the pure CO2 rinse step and the
equalization step in the 3-bed 5-step cycle improved the CO2 purity and recovery, they did not
decrease their power requirements, which were 106.91 kWh/tonne CO2 for the 2-bed 6-step
cycle, and 147.64 kWh/tonne for the 3-bed 5-step cycle. Though the power consumption was
quite low, the feed throughput of 0.331 kgmol/hr was also on the lower side. The conventional
2-bed 4-step Skarstrom cycle was also studied by Gomes et al. [85], in which they didn’t apply
vacuum to recover CO2. Their work also showed that the light reflux step by itself is not sufficient to obtain a high-purity CO2 product.
Na et al. [132, 133] and Choi et al. [50] studied 3-bed 8-step and 3-bed 7-step VSA
configurations experimentally as well as numerically. Light reflux step was not used for any of
these configurations, while heavy reflux was used in all of them. The 2-bed cycles of Chou and
Chen [51] did not use any kind of reflux steps while the 3-bed cycles used both light and heavy
reflux steps. The 2-bed cycles were unconventional as flow reversal was implemented in between
the pressurization and depressurization steps. Similarly, the 3-bed 6-step cycle incorporated
an unusual co-current light product pressurization step. They couldn’t go beyond 63% CO2
purity, which was achieved using the 3-bed 6-step cycle. Ko et al. [111] optimized a 2-bed
4-step PSA process to minimize power consumption, and a 1-bed 4-step fractionated VPSA
process to increase CO2 purity to 90% and recovery to 94%. Grande et al. [86] studied a
classical Skarstrom cycle with light product pressurization and a 3-bed 5-step process which
included a pure CO2 rinse step after the adsorption step. Their scale-up study showed that a
purity of 83% and a recovery of 66% are possible with the 3-bed 5-step process at a much higher feed throughput.
Webley and co-workers [44, 212, 203, 211] have done extensive research in the field
of CO2 separation by adsorption. Chaffee et al. [44] and Zhang et al. [212] studied two
different VSA processes. For a low feed throughput of 0.193 kgmol/hr for both the cycles,
they achieved a low power consumption of 192 kWh/tonne CO2 for the 3-bed 6-step and 240
kWh/tonne CO2 for the 3-bed 9-step cycle. Xiao et al. [203] studied a similar 3-bed 9-step
cycle and were able to increase CO2 recovery to 75%. In another study, Zhang and Webley [211]
compared numerous VSA cycle configurations, and showed that CO2 purity can be increased through a careful choice and sequencing of operating steps.
While this review offers some trends and guidelines, a fully systematic methodology is
still required to design PSA cycle configurations. In the subsequent sections, we demonstrate
application of the superstructure approach to obtain optimal PSA cycles for post-combustion
carbon capture.
Table 4.2: Adsorbent properties and model parameters [111]

Parameter | Value
Bulk porosity (εb) | 0.34
Particle diameter (dp) | 0.002 m
Adsorbent density (ρs) | 1870 kg m−3
Bulk density (ρb) | 1234.2 kg m−3
Heat capacity of solid (Cps) | 450.54 J kg−1 K−1
Heat transfer coefficient (UA) | 926.7 J m−3 sec−1 K−1
Gas viscosity (µ) | 1.7857×10−5 kg m−1 sec−1
Gas constant (R) | 8.314 J mol−1 K−1
Mass transfer coefficient (k) | CO2: 0.1631 sec−1; N2: 0.2044 sec−1
Heat of adsorption (∆Hads) | CO2: 23011.14 J mol−1; N2: 14452.72 J mol−1
Ambient temperature (Tw) | 298 K

Isotherm parameters | CO2 | N2
k11 | 2.817269 | 1.889581
k21 | −3.51×10−4 | −2.25×10−4
k31 | 2.83×10−9 | 1.16×10−9
k41 | 2598.203 | 1944.606
k12 | 3.970888 | 1.889581
k22 | −4.95×10−3 | −2.25×10−4
k32 | 4.41×10−9 | 1.16×10−9
k42 | 3594.071 | 1944.606
The feed considered in these case studies is representative of a post-combustion flue gas stream. As an initial study, the focus is on a binary CO2–N2 feed mixture. A
multicomponent feed mixture also having water, oxygen and other trace components will be
considered in the future extensions of this work. We assume that the flue gas enters at at-
mospheric pressure at a temperature of 310 K, and a maximum velocity (vf eed ) of 50 cm/sec.
Since the inlet pressure Pinlet is atmospheric, we assume that the optional inlet compressor is present
in the superstructure which compresses inlet gas to pressure Pf eed . Feed pressure Pf eed varies
with the case studies. Zeolite 13X is chosen as the adsorbent to separate CO2 ; Chue et al. [52]
suggested it to be a preferable adsorbent over others for this separation system. The adsorbent
properties for 13X and other model parameters are listed in Table 4.2 [111].
Usually a large number of spatially discretized nodes are required to capture steep adsorp-
tion fronts. Such fine spatial discretization, together with temporal discretization, leads to a
very large set of algebraic equations which becomes extremely expensive to solve. Although a
large number of elements improves accuracy, it also makes the problem computationally challenging
to solve. Hence, to get the solution in a reasonable amount of time, we consider 20 spatial
finite volumes and around 24-26 temporal finite elements for the optimization problem. The NLP solution from IPOPT is verified with more accurate dynamic simulations in MATLAB at the optimal values of the decision variables.
We consider three different cases to explore different facets of the superstructure approach.
The first case study optimizes the 2-bed 4-step Skarstrom configuration, obtained after fixing
the control variables in the superstructure, and shows the ineffectiveness of such traditional cy-
cles for high-purity CO2 separation. The second case then finds an optimal PSA configuration
which separates CO2 at high purity and recovery. Finally, in the third case, we find an optimal configuration that minimizes power consumption while maintaining a high-purity separation.
First, we explore the potential of the conventional 2-bed 4-step Skarstrom cycle (cf. section
2.4.1) for post-combustion CO2 capture. For this, we fix the profiles of α(t), β(t) and φ(t)
over time, as shown in the Figure 3.3. While fβ is chosen as 0.3, fφ before tswitch is fixed to
0.35. This ensures that the superficial velocity is close to zero towards the light end of CoB
during the FP step. In this case, Pads and Pdes remain constant for the entire cycle. The inlet
pressure Pf eed is fixed to 300 kPa, as considered by Gomes et al. [85] as well.
With this configuration we maximize CO2 recovery. Since the lack of any heavy reflux step
in the configuration may not enable a high purity separation, a relatively low value of 40%
is chosen for the lower bound on CO2 purity. Besides Pads and Pdes , we also consider bed
length BLen, and cycle time Tc as decision variables. Since a moving finite element strategy
is adopted, the length of each finite element is also considered as an optimization variable.
Because none of the decision variables are functions of time, the optimal control problem (3.1)
becomes a dynamic optimization problem, which becomes the NLP (4.1) after discretizing the state variables in space and time.
Here w and c(w) = 0 represent the set of completely discretized variables and model equations,
respectively. Constraint (4.1c) ensures that the pressure always drops around the pressure
reducing valve in the superstructure. Similarly, constraints (4.1d) and (4.1e) ensure that the
gas is never expanded by the heavy gas and the feed compressors, respectively. The rest of the NLP consists of the discretized model equations and bounds on the decision variables.
With 24 temporal finite elements and 20 spatial finite volumes, we solved the NLP in
AMPL [78] using IPOPT. Table 4.3 includes a summary of the optimization results. With
35,022 variables and 29 degrees of freedom, we were able to solve it to optimality in around
3 CPU hours on an Intel Quad core 2.4 GHz machine with 8 GB RAM. Optimal moving
finite element lengths and cycle time of 2140 sec. yield an optimal step time of 760 sec. for
the pressurization (and depressurization) step, and 410 sec. for the feed (and light reflux)
step. Such a long pressurization step is due to a small amount of feed during that step, which
requires a longer time for the bed to pressurize and for CO2 to adsorb.

Table 4.3 (excerpt): Accuracy check for Case 1

Quantity | Full discretization | MATLAB verification
N2 purity | 91.25% | 90.74%
N2 recovery | 85.88% | 85.94%
CO2 purity | 40% | 38.65%
CO2 recovery | 53.36% | 50.22%

At the optimum, the cycle
handles a feed flux of 96.4 kgmol m−2 hr−1 , which is higher than the corresponding 2-bed
4-step case studies in Table 4.1. At a purity of 40%, a maximum CO2 recovery of only 53.4%
was achieved. Such poor performance proves the point made in the introduction; classical
cycles without heavy reflux cannot produce heavy product at high purity since a light reflux
step dilutes the heavy product and decreases its purity. Table 4.3 also lists the MATLAB
verification of the AMPL results. We considered 20 spatial finite volumes for MATLAB as
well. A comparison of the purities and the recoveries indicates reasonable accuracy for the discretization scheme.
Since a high-purity CO2 separation wasn’t achieved by the Skarstrom cycle, in this case we
solve the optimal control problem (3.1) to obtain an optimal configuration which yields better
performance. For this, a few modifications are essential to the optimization problem presented
in the previous case. The control variables α(t), β(t) and φ(t) are freed to let them achieve
an optimal sequence of operating steps. The pressures Pads and Pdes are converted back to
time dependent control variables. To keep this case comparable to the previous one, we fix
the bed length to 5 meters. A desired CO2 purity of at least 95% is chosen. Besides this, we
impose a lower bound on feed flux Qf eed , in the absence of which the optimizer may force the
feed fraction φ(t) to zero in order to maximize CO2 recovery. Finally, we add the equation for
power consumption (3.15b) to the optimal control problem. A 72% efficiency is assumed for
all the compressors and the vacuum generator unit [30]. As in the previous case, we fix the flue
gas inlet pressure Pf eed to 300 kPa to achieve a reasonable Qf eed . The following large-scale
NLP results after complete discretization of state and control variables in the optimal control
problem.
As in the previous case study, c(w) is the set of completely discretized PDAEs with CSS
condition. We choose a lower bound of 50 kPa for the vacuum generated, which is not a
substantially high vacuum. Similarly, the chosen upper bound of 600 kPa for Pads is also not
substantially high. No bounds are specified for the purity and the recovery of nitrogen. We
impose a lower bound of 80 kgmol m−2 hr−1 on the total feed flux. Because the bound is not
on the feed throughput, a bigger diameter PSA bed will be able to handle much higher feed
throughput and the optimal configuration need not change. For instance, for a 3 meter bed
diameter, one PSA column will be able to handle a significantly high feed throughput of 565
kgmol/hr for the same optimal configuration. Also note that the value of 80 kgmol m−2 hr−1
is significantly higher than the feed fluxes chosen in the literature studies in Table 4.1, as the focus of this work is on handling high feed throughputs.
The NLP was solved in AMPL with 26 temporal finite elements and 20 spatial finite
volumes. The optimal control profiles are shown in Figure 4.1. The profiles are drawn against the cycle time normalized between 0 and 1. These profiles suggest an optimal 2-bed 6-step VSA cycle, illustrated schematically in Figure 4.2.

[Figure 4.1: Optimal control profiles for Case 2: bottom reflux α(t), top reflux β(t), and feed fraction φ(t), together with Pads(t) (kPa) and Pdes(t) (kPa), plotted against normalized cycle time.]
[Figure 4.2: The optimal 2-bed 6-step VSA cycle for Case 2: low-pressure adsorption with heavy reflux (Step 1), pressurization with high-pressure adsorption (Step 2), and total reflux (Step 3) for CoB, paired with low-vacuum desorption (Step 4), high-vacuum desorption (Step 5), and total reflux (Step 6) for CnB.]
The cycle starts with α(t)=1, β(t)=0, and φ(t)=1. This suggests a bottom reflux from CnB
to CoB and feed being fed to CoB. From the profiles of Pads (t) and Pdes (t), CoB operates at
around 450 kPa while CnB operates at around 85 kPa during this step. Thus, we have a low-
pressure adsorption step with a heavy reflux for CoB (step 1) and a low-vacuum desorption
step for CnB (step 4) with desorbed CO2 being sent as a heavy reflux from CnB to CoB.
After this step, both top and bottom reflux disappear, while φ(t) indicates continuation of the
feed to CoB. CoB gets pressurized to the upper bound of 600 kPa and N2 is withdrawn at
a high pressure, while CO2 is extracted from CnB at a vacuum of 50 kPa. This suggests a
pressurization and high pressure adsorption step for CoB (step 2) and a high vacuum desorption
step for CnB (step 5). The feed fraction φ(t) drops at the beginning of this step to facilitate
CoB pressurization. We observe a drop in the CO2 concentration in CoB (see step 2 in Figure
4.3) because of its low concentration in feed. Also, because of the application of vacuum, the
gas-phase CO2 concentration decreases sharply for both step 4 and 5, as evident in Figure 4.3.
Further, the pressures in the beds are held at their same respective levels, while α(t)
becomes 1, β(t) approaches 1, and the feed is stopped completely. Because there is no feed
nor product at this time, we have a total reflux step (step 3 and 6), in which both the beds
are connected to each other and a recirculation of the components occurs within the system.
A small amount of N2 is withdrawn at the beginning of this step, and is shown as a dotted
line in Figure 4.2. A decrease in Pads (t) and an increase in Pdes (t) towards the end of this step
halts this recirculation. After the total reflux step, the co-current bed follows the steps of the counter-current bed and vice-versa.
From Figure 4.2 together with Figure 4.3, we observe a couple of key aspects of this cycle.
First, we observe an extensive use of heavy reflux in the cycle (step 1 and 3) to enrich gas-
phase CO2 concentration towards heavy end of CoB to ensure high-purity CO2 production.
During both steps 1 and 3, desorbed CO2 from CnB is sent as a heavy reflux to CoB which
enriches the adsorbed CO2 concentration towards the heavy end of CoB. This is evident from
the gas-phase CO2 concentration profile for steps 1 and 3 in Figure 4.3. Second, we observe a
completely novel total reflux step in the cycle. Moreover, this is the longest step and runs for
almost 60% of the total cycle duration. This is a vital step to improve CO2 purity as it eschews
external influence and re-arranges component distribution within the system. During steps 3
and 6, nitrogen from CoB to CnB purges CO2 out of CnB from its heavy end and enriches
itself towards the light end of CnB while pushing its front. Similarly CO2 from CnB to CoB
purges nitrogen out of CoB and enriches itself towards the heavy end of CoB and pushes its
front towards the light end of CoB. Step 3 in Figure 4.3 confirms such a movement of CO2
front.
The optimization results for this case are summarized in Table 4.4. With 50,162 variables
and 206 degrees of freedom, it was solved to optimality in approximately 12.5 CPU hours on
the Intel Quad core 2.4 GHz machine with 8 GB RAM. At the optimum, the feed flux attained
its lower bound of 80 kgmol m−2 hr−1 . For such a high feed flux, we obtained a reasonable
power consumption of 637.25 kWh/tonne CO2 captured. Also, an optimum CO2 recovery of
80% at a purity of 95%, for such a high feed flux, is substantially better than the literature
studies for post-combustion capture that deal with high feed throughput. These results confirm
our assertion that steps like heavy reflux are essential for high-purity CO2 separation.
In Table 4.5 we provide a validation of the optimal results obtained from AMPL with
method of lines simulations in MATLAB for varying number of spatial finite volumes. The
results from AMPL are in good agreement with those from MATLAB, and the accuracy doesn’t
suffer as we consider a larger number of finite volumes in MATLAB. This indicates that the chosen spatial discretization captures the system dynamics adequately.
[Figure 4.4: Optimal trade-off curve between CO2 purity (%) and CO2 recovery (%) for Case 2.]
Figure 4.4 shows a trade-off curve between CO2 purity and recovery. We construct this
curve by varying the lower bound on CO2 purity in the NLP (4.2), and then maximizing
CO2 recovery for each lower bound. As a result, it is possible that a different optimal cycle
configuration is achieved at each point plotted on the curve. However, each configuration
is the best possible cycle for a particular CO2 purity. Consequently, this yields an optimal
purity-recovery trade-off curve. The curve shows that if a very high purity CO2 separation is
desired then the recovery falls drastically. A similar trend is observed with the purity when a
very high CO2 recovery is sought. The intermediate section of the curve is a preferable region
to operate.
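The construction of such a curve amounts to an ε-constraint sweep, sketched below; solve_superstructure_nlp is a hypothetical stand-in for the AMPL/IPOPT solve of NLP (4.2), and the warm-starting detail is an assumption about good practice rather than the procedure used here.

```python
# Sketch of generating the purity-recovery trade-off curve: sweep the CO2
# purity lower bound and re-solve the recovery-maximization NLP at each value.
# solve_superstructure_nlp is a hypothetical stand-in for the AMPL/IPOPT solve.
def trade_off_curve(purity_bounds, solve_superstructure_nlp):
    curve = []
    warm_start = None
    for p_min in purity_bounds:
        # Each solve may converge to a different optimal cycle configuration;
        # warm-starting from the previous point can help the NLP converge.
        result = solve_superstructure_nlp(purity_lb=p_min, initial_guess=warm_start)
        warm_start = result["solution"]
        curve.append((p_min, result["co2_recovery"]))
    return curve
```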
Although we achieved a high purity separation in the previous case, the power consumption
was also quite high. Therefore, the objective of this case is to obtain an optimal configuration
which yields a high-purity separation at minimal power requirements. To achieve this, a few minor modifications are made to the NLP (4.2). A lower bound of 85% is imposed on CO2 recovery, while the lower bounds on CO2 purity and feed flux are relaxed to 90% and 65
kgmol m−2 hr−1 , respectively. To minimize the work done in compressing flue gas from Pinlet
to Pf eed (in Equation (3.15a)), we consider Pf eed a decision variable instead of fixing it to 300
kPa. Appropriate bounds are imposed on Pfeed. The efficiency is kept the same, at 72%, for all compressors and the vacuum generator. The rest of the optimization problem remains the same and is formulated as NLP (4.3). The NLP was solved in AMPL with 24 temporal finite elements and 20 spatial finite volumes. The optimal control profiles are shown in Figure 4.5. Since the optimal feeding strategy and pressure profiles differ from the previous case, we obtain an entirely different cycle configuration.
The cycle begins with α(t)=1, β(t)=0 and φ(t) close to one. This suggests a heavy reflux
from CnB to CoB and feed being fed to CoB. From the profiles of Pads(t) and Pdes(t), the pressure rises in CoB and falls in CnB during this step. Thus, we have a pressurization step for CoB (step 1) and a depressurization step for CnB (step 5), with the heavy reflux enriching CO2 towards the heavy end of CoB.

[Figure 4.5: Optimal control profiles for Case 3: bottom reflux α(t), top reflux β(t), and feed fraction φ(t), together with Pads(t) (kPa) and Pdes(t) (kPa), plotted against normalized cycle time.]
[Figure 4.6: The optimal 2-bed 8-step VSA cycle for Case 3, showing the step sequence for CoB and CnB and the associated N2 and CO2 streams.]
Next, both α(t) and β(t) go to zero, while Pads (t) and Pdes (t) attain their maximum and
minimum allowed values, respectively. This suggests an adsorption step with the removal of
light product for CoB (step 2), and a high vacuum desorption step for CnB (step 6), during
which high purity CO2 is collected. After this step we observe that both α(t) and β(t) go to 1.
However, unlike the total reflux step in previous case study, β(t) doesn’t go to 1 at once and
nitrogen is still constantly removed from the system. Feed is also fed to CoB for a considerable
amount of time at the beginning of this step. Therefore, this translates into a heavy reflux
step for CoB (step 3) and a light reflux step for CnB (step 7). Nevertheless, the intent of this
step is similar to that of the total reflux step: enrich the N2 front towards the light end of
CnB, and CO2 front towards the heavy end of CoB. The gas phase CO2 concentration profiles
for both these steps in Figure 4.7 validate this behavior. After this, α(t) and β(t) remain at
1, while Pads (t) starts to drop, Pdes (t) starts to jump sharply, and the two pressures come
very close to each other. In fact, Pads (t) and Pd (t) are approximately equal during this step.
This translates into a short pressure equalization step (step 4 and 8). Since the heavy reflux
from CnB is negligible, we show it as a dotted line for this step in Figure 4.6. Clearly, from
Figure 4.7, CO2 concentration drops substantially in CoB and rises steadily in CnB during the
equalization step. After this, CoB follows the steps of CnB and vice-versa.
Figure 4.6 together with the gas-phase CO2 concentration profiles in Figure 4.7 illustrate
several key aspects of the cycle. First, as in the previous case, heavy reflux step is used as
the only step to enrich adsorbed-phase CO2 concentration towards the heavy end of the bed.
For more than 60% of the cycle time, CnB provides heavy reflux to CoB for gas-phase CO2
enrichment, thus ensuring high-purity CO2 production. Such enrichment and movement of
CO2 adsorption front is evident from the gas-phase CO2 concentration profile for step 1 and
step 3 in Figure 4.7. Second, we observe that adsorption pressure Pads operates at a lower level
for most of the duration of the cycle and attains its upper bound only for a short duration.
This leads to savings in power consumption. The third key aspect of this VSA cycle is the
pressure equalization step (steps 4 and 8), which leads to additional power savings. Gas-phase
CO2 concentration profiles for steps 4 and 8 in Figure 4.7 illustrate sharp CO2 desorption in CoB during this step.
Table 4.6 summarizes the optimization results. With 46,313 variables and 191 degrees of
freedom in the NLP, the optimal solution was obtained in approximately 4.5 CPU hours. At
the optimum, the feed flux, CO2 purity, and CO2 recovery were at their respective lower bounds
of 65 kgmol m−2 hr−1 , 90% and 85%. Under these conditions, and at an optimum Pf eed of
182.3 kPa, we achieved a power consumption of 464.76 kWh/tonne CO2 captured, which is
over 27% lower than Case 2. Table 4.7 lists a validation of the optimal results obtained from
AMPL with accurate simulations in MATLAB for varying number of spatial finite volumes.
As observed in the previous case, the purities and recoveries are in reasonable agreement, even for a larger number of finite volumes.
Figure 4.8 shows the trade-off curve between power consumption and CO2 recovery. As
in the previous case, we construct the curve by varying the lower bound on CO2 recovery,
while keeping the purity at 90%, and optimizing NLP (4.3) multiple times.

[Figure 4.8: Optimal trade-off curve between power consumption (kWh/tonne CO2 captured) and CO2 recovery (%) for Case 3.]

Thus, we obtain an
optimal trade-off curve, although it is possible to obtain a different optimal cycle configuration
at each point plotted on the curve. As expected, the curve shows that the power requirements increase only gradually up to a recovery level of 84%. The power requirements then start growing steeply if more than 84% recovery is sought.
Case studies discussed in the previous section clearly demonstrate that we can obtain substan-
tially different PSA configurations after performing superstructure optimization with different
objectives. In all the case studies above, the final optimal cycle is governed by the required
specifications, constraints and objective function. However, optimal PSA configurations have
some similarities as well, which convey that the superstructure approach finds some common
features as a necessary requirement for optimal performance. For instance, the optimal cycles
obtained in both case II and case III depend heavily on the heavy reflux step to enrich gas-
phase CO2 concentration towards the heavy end of the adsorber bed. Both case studies employ
this step to enhance CO2 purity in the final product, and run it for a substantial 60-65% of
the total cycle duration. This not only asserts that such a step is vital for producing heavy
product at a high purity, but also proves that the superstructure approach yields intuitive and
meaningful configurations since all of the literature studies have included heavy reflux step to
boost CO2 purity in the final product. Besides heavy reflux step, both cycles in case II and
III are similar in terms of the optimal cycle duration, which is close to 40 minutes for both
cases. Moreover, both cycles do not collect CO2 product when a light reflux stream is present
as it necessarily dilutes the CO2 product, and both employ vacuum for almost the entire duration of the cycle.
Barring heavy reflux step and other minor similarities, optimal cycles obtained in case II
and III are quite different from each other, especially in the power consumption aspect. For
case II, Pads is at its upper bound while Pdes is at its lower limit for almost the entire cycle. Hence,
the total power consumption is quite high for case II. In contrast, for case III the optimal profile for Pads takes a lower value for most of the cycle and attains its upper bound only for a short duration.
This leads to substantial power savings. One of the key differences between cycles of case II
and III is the pressure equalization step. This leads to additional power savings as it avoids an uneconomical pressure drop when the cycle transitions from step 4 to step 5 in case III. However, the lack of a pressure equalization step causes such a pressure drop in case II between steps 3 and 4, when the pressure drops significantly from 450 kPa to 90 kPa. The only reason for such a contrast in the optimal solutions is the absence of any constraint on power consumption in the problem formulation for case II. To avoid this, an upper bound on the power consumption can be imposed for case II in the future. Besides the power aspect, the cycles of case II and III differ in steps 3 and
6. Case III doesn’t incorporate a total reflux step unlike case II. Instead, that step is still a
combination of light and heavy reflux steps in case III with the presence of external feed and
nitrogen removal.
To deduce multibed cycles for a continuous cycle operation from the optimal two-bed
solutions, a coordination of step times will be required, which will depend upon whether we wish to maintain a continuous feed to, or a continuous product withdrawal from, the system. In any case, continuous flow can be maintained either through feed or product buffer tanks, or by adding parallel beds and ensuring that step times are integral multiples of each other.
A fairly extensive review of the previous work on post-combustion CO2 capture reveals that a
systematic methodology is still required for the design of PSA cycles. To address this, we assess
the applicability of the superstructure approach in this context. It is illustrated for three case
studies of post-combustion CO2 capture. The first case study optimizes the standard 2-bed
4-step Skarstrom cycle, and shows that such conventional cycles, which focus on separating
light product at a high purity, fail to produce heavy product at a high purity because of
the absence of a heavy reflux step. To obtain high-purity separation, the superstructure is
optimized in the second case study. A 2-bed 6-step VSA cycle is derived from the solution of
the optimal control problem. With this configuration, we are able to recover about 80% of CO2
at a substantially high purity of 95%, and at a significantly high feed flux of 80 kgmol m−2
hr−1 , but with a power consumption of 637 kWh/tonne CO2 captured. Thus, in the third case
study, we focus on developing an optimal configuration which yields high-purity separation with minimal power requirements. We construct a 2-bed 8-step VSA configuration from the optimal
profiles, with which, at 90% purity and 85% recovery, CO2 is extracted with a substantially low
power consumption of 465 kWh/tonne CO2 captured. Hence, with the proposed superstructure
approach, we are able to design optimal configurations that make pressure swing adsorption a
promising option for high purity CO2 capture from flue gas streams.
A complete discretization approach is used to solve the optimal control problem as a large-
scale nonlinear program, using the nonlinear optimization solver IPOPT. Verifications of the
accuracy of the discretization scheme show this approach is reasonably accurate in capturing
the dynamics of PSA systems governed by hyperbolic PDAEs and steep adsorption fronts,
and can be used for PSA systems with efficient NLP solvers like IPOPT. To improve upon
the accuracy of the results and eliminate the verification step, a sensitivity-based sequential
approach, similar to Jiang et al. [100], will be developed in future to solve the optimal control
problem.
The superstructure approach presented in this work is quite generic and can be extended to many other PSA applications. In the next chapter, we apply it to pre-combustion CO2 capture.
Synopsis
PSA/VSA technology has been widely applied for H2 production from the effluent streams of a
shift converter. It also offers significant advantages for pre-combustion CO2 capture in terms of
performance, energy requirements and operating costs since the shifted synthesis gas (syngas)
is available for separation at a high pressure with a high CO2 concentration. Most commercial
PSA cycles recover H2 at very high purity, but do not focus on enriching the strongly adsorbed
CO2 . Thus, a major limitation exists with the use of these conventional PSA cycles for high
purity CO2 capture. Novel PSA cycle designs are needed which recover both H2 and CO2
at a high purity. We demonstrate the superstructure approach for case studies related to pre-
combustion CO2 capture. In particular, optimal PSA cycles are synthesized which maximize
CO2 recovery or minimize overall power consumption. The results show the potential of the
superstructure to predict PSA cycles with purities as high as 99% for H2 and 96% for CO2 .
Moreover, these cycles can recover more than 92% of CO2 with a power consumption as low
as 46.8 kWh/tonne CO2 captured. Hence, this chapter demonstrates the versatility of the
superstructure approach.
Global energy-related carbon dioxide emissions are increasing by 1.7% every year and have
been estimated to reach 41 gigatonnes by 2030 [93]. Power generation accounts for about
one-third of CO2 emissions from fossil fuel use. Carbon dioxide capture and storage is a
critical technology to significantly reduce CO2 emissions, and is most applicable to large,
centralized emission sources such as power plants. The purpose of CO2 capture is to produce a
concentrated stream that can be readily transported to a CO2 storage site. One of the potential
capture systems that has gained recent popularity is the pre-combustion capture system. Pre-
combustion capture involves partial oxidation (gasification) of coal to produce syngas (or fuel
gas) composed mainly of carbon monoxide and hydrogen. The carbon monoxide is reacted
in a shift converter to increase carbon dioxide and hydrogen yield. CO2 is then concentrated
from this H2/CO2 mixture, resulting in a hydrogen-rich fuel and a CO2-rich stream available for storage. Pre-combustion systems are well suited for CO2 capture because the fuel gas from the shift converter has a higher CO2 concentration, in the range of 30-60%, and is also typically at a higher pressure, thus offering a cost-effective means of separation.
PSA offers significant advantages for pre-combustion CO2 capture in terms of performance,
energy requirements and operating costs. Voss [191] provides an overview of how the PSA units
can be integrated in complex flowsheets of power plants and steam reformers for pre-combustion
CO2 capture. Industrial PSA technology to remove CO2 and other trace components from
steam reformer off-gas and fuel gas primarily focuses on producing hydrogen at a high purity,
and considers CO2 as a waste stream [23, 79, 80]. The most frequently used PSA processes in
this area, the Polybed process and the Lofin process [81, 130, 171, 205], produce H2 with more
than 99.9999% purity, but consider CO2 as a by-product and reject it in the tail gas (i.e., the
desorbed gas containing H2 O, N2 , CO2 , CO, and H2 ) at a much lower purity. The hydrogen
recovery in these processes ranges between 60-80%, with the tail gas generally being used as
a fuel for the reformer. Over the past few decades, researchers have focused on development,
improvement and optimization of novel PSA cycle configurations for H2 purification and CO2
removal. Cen et al. [42] studied a bench-scale 1-bed 4-step PSA process, with activated carbon
as the adsorbent, to remove CO2 from a feed mixture comprising 24.75% CO2 , 24.75% H2 , and
0.0001% H2 S. Whysall and Wagemans [199] increased the H2 production capacity by extending
the purge step in their 16-bed 13-step PSA cycle. Baksh et al. [18, 19] developed a simple 2-bed
12-step process which used layered beds packed with alumina, activated carbon and zeolite.
With this configuration they were able to recover 76% H2 at a very high purity level of 99.996%.
Xu et al. [204] developed a 6-bed 16-step PSA process in which only four pressure equalization
steps were incorporated. Zhou et al. [213] proposed a 4-bed 13-step PSA cycle and explored
the idea of using buffer tanks to carry out pressure equalization during the cycle. Jiang et
al. [100] optimized a 5-bed 11-step PSA process, using layered beds of activated carbon and
zeolite 5A, and were able to achieve a hydrogen recovery of around 89% with CO impurity
as low as 10 ppm in the hydrogen product stream. Jee et al. [98] studied the adsorption dynamics of a layered bed packed with activated carbon and zeolite 5A, and concluded activated carbon to
be a suitable adsorbent for CO2 extraction. Warmuzinski and coworkers [196] designed a 5-
bed 8-step PSA process through rigorous mathematical simulation, for which they obtained a
recovery of 74% for H2 , as well as 92% for methane in the tail gas stream. They also verified
their results using bench-scale experimentation [184]. Yang et al. [208] studied a 4-bed 9-
step cycle experimentally and theoretically using layered beds of activated carbon and zeolite
5A, and recovered 66% of H2 from syngas at 99.999% purity. Ritter and Ebner [152] provide
a comprehensive review on the use of adsorption technologies for H2 production and CO2
removal.
In all the PSA cycles developed so far, the weakly adsorbed hydrogen (or the light-product)
in the mixture is the desired product, and enriching the strongly adsorbed CO2 (or the heavy-
product) is not a concern. On the other hand, for CO2 sequestration, it is necessary to
concentrate CO2 to a high purity. The adsorbents designed to date preferentially adsorb
Parameter Value
Bed porosity (b ) 0.37
Particle diameter (dp ) 0.00149 m
Adsorbent density (ρs ) 544.64 kg m−3
Bulk density (ρb ) 343.12 kg m−3
Heat capacity of solid (Cps ) 711.75 J kg−1 K−1
Heat transfer coefficient (UA ) 0.2839 J m−3 sec−1 K−1
Gas viscosity (µ) 1.2021×10−5 kg m−1 sec−1
Gas constant (R) 8.314 J mol−1 K−1
Mass transfer coefficient (k) CO2 =0.45 sec−1
H2 =1.45 sec−1
Heat of adsorption (∆H ads ) CO2 =24801 J mole−1
H2 =8420 J mole−1
Ambient temperature (Tw ) 298 K
Isotherm parameters
CO2 H2
k11 1.16 1.16
k21 0 0
k31 6.96×10−10 1.06×10−9
k41 3259.683 1012.75
k12 8.33 8.33
k22 0 0
k32 1.88×10−10 1.06×10−9
k42 2706.279 1012.75
CO2 from a flue gas or reformer off-gas mixture, consequently making it a heavy-product.
The conventional PSA cycles are inappropriate for concentrating heavy-product because the
light-product purge step (or the light reflux step) in these cycles uses a portion of the light-
product for purge. This necessarily dilutes the heavy component in the heavy-product stream.
Therefore, a pure light component is easy to attain from such cycles, but not a pure heavy
component. Thus, it is necessary to develop PSA processes specifically targeted to obtain pure
strongly adsorbed CO2 . Very few examples of CO2 purification from a reformer off-gas mixture
using a PSA process can be seen in the literature. Sircar et al. developed a 5-bed 5-step PSA
process to extract methane and carbon dioxide both at a high purity from a feed mixture
having 40-60% CO2 and CH4 [174, 165]. A pure CO2 rinse step was used in the process to
obtain a CO2 product containing 99-99.8% CO2. Schell et al. [159] suggested a dual-reflux
PSA process with a stripping and a rectifying section to obtain both light and heavy product at
high purities. Xiao et al. [202] studied single-stage and dual-stage 2-bed 8-step VSA processes
which could recover more than 90% of CO2 , at 95% purity, from a feed mixture having 21.5%
CO2 and 76.8% H2 . Air Products and Chemicals, Inc. have developed the Gemini process
to simultaneously produce H2 and CO2 at high purities and recoveries [164]. It consists of
6 adsorbers (A beds) to selectively adsorb CO2 , which is then obtained by applying vacuum
depressurization, and 3 adsorbers (B beds) to purify hydrogen. Both beds undergo two entirely
different sequences of operating steps. However, one A bed and one B bed are connected in
series during the adsorption step. Sircar [173] provides more detailed information about the
process.
It is clear that novel PSA cycle sequences are needed which not only recover H2 at
a high purity, but simultaneously also produce a highly pure CO2 stream with a reasonably
high recovery. In this chapter, we demonstrate the versatility of the superstructure approach
by applying it to develop cycles for pre-combustion capture that produce both H2 and CO2 at a high purity.
Here the feed is considered to be a syngas mixture having 55% H2 and 45% CO2, arriving at a
temperature of 310 K after a single shift conversion in an IGCC [134]. The feed mixture also
consists of negligible amounts of CO, CH4 , Ar and N2 , besides H2 and CO2 . However, hydrogen
and carbon dioxide together constitute around 97-99% of the mixture [134]. Therefore, we
consider a binary feed mixture for the case studies. We assume that the fuel gas enters at
a pressure of 700 kPa and a maximum velocity (vfeed) of 50 cm/sec. Since the feed pressure is high, the optional inlet compressor is not included in the superstructure for this case. Consequently, the work done by the inlet compressor is omitted from Equation (3.15a). Since the PSA model doesn't require the bed diameter to be specified, we specify a superficial feed velocity for the model instead
of a volumetric flow rate. The bed length is fixed and is assumed to be 12 metres. For all the
case studies, we also assume an efficiency of 72% for all compressors and the vacuum generator
in the superstructure [30]. Activated carbon is chosen as the adsorbent, especially to extract
CO2 . Based on the breakthrough tests, Jee et al. [98] recommended activated carbon for high
recovery CO2 separation. The scope of this study is to explore the limits of the performance
of the PSA processes for this sorbent. We also note that other sorbents, such as alumina,
molecular sieves, zeolite (also in layers), are also applicable and these form the basis for future
study with this synthesis technique. The properties and other model parameters for activated carbon are summarized in the parameter table given earlier in this chapter.
Although a large number of spatial and temporal discretization nodes are essential to
accurately capture the dynamic movement of the steep adsorption fronts, we consider only 10
spatial finite volumes and 10 temporal finite elements for the NLP to obtain the solution in a
reasonable amount of time. Because of such a small number of nodes, accuracy validation of
the optimal solution obtained from IPOPT by performing more accurate dynamic simulations
in MATLAB at the optimal values is essential. Here we consider two different
approaches for accuracy verification. In the first approach, called the step-by-step approach,
each operating step of the cycle is simulated in MATLAB for only one cycle, and the purities
and recoveries are then compared with AMPL results. The initial condition for each step and
the time-dependent fluxes between the beds are taken from the AMPL solution. The number
of spatial finite volumes is kept the same for both AMPL and MATLAB. Note that, depending on
the accuracy, the MATLAB solution may or may not be at CSS after simulating each step in
this approach. In the second approach, called the full-cycle approach, the entire cycle is simulated
in MATLAB multiple times until CSS is achieved. In this approach, we consider more spatial
finite volumes for MATLAB simulation. While the step-by-step approach only verifies temporal
accuracy, the full-cycle approach validates both spatial and temporal accuracy. Although the
full-cycle approach yields more accurate comparison, the step-by-step approach is useful in
getting a quick assessment of the validity and physical correctness of the AMPL solution.
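As a rough illustration of the full-cycle approach, the following MATLAB sketch shows the structure of such a cyclic steady state (CSS) loop. The cycle map, the state vector, and the tolerance below are hypothetical placeholders standing in for the detailed method-of-lines bed model, so this is only a schematic of the procedure, not the actual implementation.

```matlab
% Full-cycle verification: simulate the entire cycle repeatedly until CSS.
% 'cycleMap' is only a stand-in for one complete PSA cycle integrated with
% the detailed bed model; here it is a simple placeholder so the loop runs.
cycleMap = @(s) 0.9*s + 0.1;        % hypothetical cycle dynamics (placeholder)
state    = zeros(10, 1);            % placeholder for the discretized bed profiles
tol      = 1e-8;                    % assumed CSS tolerance

for cycle = 1:500
    stateOld = state;
    state    = cycleMap(state);     % simulate one complete cycle
    % CSS: the bed profiles at the end of a cycle repeat from cycle to cycle
    if norm(state - stateOld) <= tol*max(1, norm(stateOld))
        fprintf('CSS reached after %d cycles\n', cycle);
        break
    end
end
% At CSS the purities and recoveries are computed from the product streams
% and compared with the corresponding values from the AMPL solution.
```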
To illustrate the generality of the superstructure approach, we consider two different cases.
The first case involves superstructure optimization to obtain an optimal PSA configuration
which maximizes CO2 recovery for a given lower bound on both CO2 and H2 purity, while the
second case involves generating an optimal cycle that minimizes overall power consumption for a given lower bound on CO2 purity and recovery.
We solve the optimal control problem (3.1) to obtain an optimal cycle which maximizes CO2
recovery for a lower bound of 90% on both H2 and CO2 purity. Besides this, a lower bound on
feed flux Qf eed is also imposed. In the absence of this bound, the optimizer may force the feed
fraction φ(t) to zero in order to maximize CO2 recovery. The large-scale NLP that results after
complete discretization of state and control variables in the optimal control problem is shown
below. In the following problem, cycle time Tc is also a decision variable. Optimal values of
the moving temporal finite elements together with optimal Tc give the optimal step times.
Equation (5.1a) is the fully discretized PDAE system with the cyclic steady state condition.
[Figure 5.1: Optimal control profiles for case I, showing bottom reflux α(t), top reflux β(t), feed φ(t), Pads (kPa), and Pdes (kPa) versus normalized cycle time.]
Constraint (5.1e) ensures that the pressure always decreases through the valve in the superstructure. Similarly, constraints (5.1f) and (5.1g) ensure that the gas is never expanded by the
heavy gas and the feed compressors, respectively. It should be noted that a lower bound of 50
kPa is chosen for the vacuum generated, which is not a substantially high vacuum. Similarly,
the chosen upper bound of 1000 kPa for Pads is also reasonably low. Also, it is important
to note that a lower bound of 35 kgmol m−2 hr−1 is imposed on the total feed flux which is
independent of the bed diameter. Thus, with a bigger bed diameter, it will be possible to
handle a much higher feed throughput for the same cycle configuration. Hence, as in the case
of post-combustion capture case studies, the focus here is to synthesize industrial-scale PSA
systems.
With 10 temporal finite elements and 10 spatial finite volumes, the optimization problem
was solved in AMPL using IPOPT. The optimal profiles for the control variables α(t), β(t),
φ(t), Pads (t) and Pdes (t) are shown in Figure 5.1. They are drawn against the cycle time nor-
malized between 0 and 1. These profiles suggest an optimal 2-bed 8-step VSA cycle, illustrated
in Figure 5.2, which can be deciphered in the following manner. The cycle starts with α(t)=1,
β(t) close to 0.67, and φ(t)=1. This suggests bottom reflux from CnB to CoB, a 67% top
reflux from CoB to CnB and feed being fed to CoB. From the values of Pads (t) and Pdes (t) for
this step, it can be observed that CoB is operating at the feed pressure (700 kPa) while CnB
is operating at a vacuum of 50 kPa. Hence, this is an adsorption with a heavy reflux step for CoB (step 1) and a light reflux step at vacuum for CnB (step 5).
[Figure 5.2: Schematic of the optimal 2-bed 8-step VSA cycle, comprising adsorption at feed pressure with heavy reflux (step 1), heavy reflux at high pressure (steps 2 and 3), total reflux (steps 4 and 8), light reflux at vacuum (step 5), vacuum desorption (step 6), and atmospheric desorption (step 7).]
After this, α(t) drops to
around 0.63 while both β(t) and φ(t) drop to zero. CnB continues to operate at vacuum while
the pressure in CoB rises to around 820 kPa. Thus, with no feed and around 63% bottom
reflux, we have a high pressure heavy reflux step for CoB (step 2) with H2 collection at the
light end, and, with no top reflux, we have a vacuum desorption step for CnB (step 6) in which
a part of the desorbed CO2 is collected, while the rest is sent as a heavy reflux to further
enrich its solid-phase concentration towards the heavy end of CoB. Next, α(t) goes to 1 while
β(t) and φ(t) remain at zero. The pressure further rises to 900 kPa in CoB while the vacuum
is stopped in CnB and it starts operating in the atmospheric range. Therefore, we have an
atmospheric desorption step for CnB (step 7) in which desorption occurs at around 120 kPa.
The desorbed gas is sent to CoB which undergoes a heavy reflux step at further elevated pres-
sures (step 3). In the final operating step, the values of α(t) and β(t) both go to 1, indicating
no light or heavy product extraction from the system. The feed enters midway through the
step for a short duration, and is otherwise at zero. To reflect this, a dotted line is shown for
feed during this step in Figure 5.2. The profiles of Pads (t) and Pdes (t) show that the pressure
rises in both beds. Since the PSA system gets isolated during this step and a recirculation of
the components occurs within the system, we call it a total reflux step (step 4 and step 8).
After the total reflux step, the co-current bed follows the steps of the counter-current bed and vice-versa; this completes the cycle.
Figure 5.2 together with the gas-phase CO2 concentration profiles in Figure 5.3 illustrates
several unconventional, but key, aspects of the cycle. First, the light reflux step at vacuum
(step 5) follows the total reflux step at around 950 kPa (step 4). Such a transition in the bed
pressure, although not economical, is essential to improve the purity and recovery of CO2 in
the final product. During the light reflux step, a large amount of CO2 desorbs in CnB which is
then sent to CoB. This is necessary to enrich the adsorbed-phase CO2 concentration towards
the heavy end of CoB. From step 1 in Figure 5.3, it can be observed that the CO2 front rises
significantly towards the heavy end due to this recycle. Such a significant rise is important
to achieve the desired CO2 purity and recovery. Since a large amount of CO2 is desired for
this enrichment, and since the duration of the light reflux step is short, the step operates at a vacuum of 50 kPa.
Second, the light reflux step at vacuum (step 5) precedes the vacuum desorption step (step
6), whereas conventionally it is vice-versa. Since the step duration for step 5 is small, the
hydrogen recycle helps in getting more CO2 desorbed in that interval. This hydrogen reflux
is obtained from the feed stream going in CoB. Also, since the hydrogen reflux dilutes the
product CO2 , it is collected during the next vacuum desorption step and not during step 5.
Therefore, vacuum desorption succeeds the light reflux step. The third key aspect of the cycle
is the presence of heavy reflux from CnB to CoB during the entire cycle. From the CO2
concentration profiles of first four steps in Figure 5.3, it is clear that this CO2 reflux helps
push the CO2 front towards the light end of the adsorbing bed before we start desorbing and
collecting CO2 . Thus, we infer that the heavy reflux step is essential for high purity CO2
production.
Another aspect of the cycle is the atmospheric desorption step (step 7) after vacuum des-
orption (step 6). Since CO2 is not collected as a product during step 7, we observe that the
purpose of this step is only to send CO2 reflux to CoB. The step is carried out at the atmo-
spheric conditions to ensure a controlled CO2 reflux to CoB such that the CO2 front doesn’t
break through CoB’s light end. The final aspect of the cycle is the total reflux step (steps 4 and 8).
It is a mutual reflux step in which the CO2 reflux from CnB to CoB helps push hydrogen out
of the light end of CoB to the light end of CnB while enhancing adsorbed CO2 concentration
in CoB, while the H2 reflux enriches its concentration in CnB and helps CO2 desorb out of the
heavy end of CnB. Such a step is important to ensure that both H2 and CO2 are collected at
a high purity in subsequent steps, and thus is the longest step in the cycle. The feed stream
in the middle of the step provides more hydrogen for the light reflux from CoB to CnB.
The optimization results for this case are summarized in Table 5.2. With 10,512 variables
and 78 degrees of freedom in the NLP, the optimal solution was obtained in approximately
52 CPU minutes on an Intel Quad core 2.4 GHz machine with 8 GB RAM. At the optimum,
the feed flux attained its lower bound of 35 kgmol m−2 hr−1 . For this feed flux, and 72%
efficiency for compressors and vacuum generator, a power consumption of 536.16 kWh/tonne
CO2 captured was obtained after optimization. An optimum CO2 recovery of 98% at a purity
of 90% was obtained. Also, a reasonably high hydrogen purity of 98% and a recovery of 91% were obtained.
In Table 5.2 we also provide a validation of the optimal results obtained using full discretiza-
tion approach in AMPL with the method of lines simulations in MATLAB.

Table 5.2: Optimization results and accuracy check for case I

                           Full discretization    MATLAB verification
                                                  step-by-step    full-cycle
Spatial finite volumes     10                     10              40
H2 purity                  98.20%                 99.10%          95.92%
H2 recovery                91.09%                 91.32%          91.73%
CO2 purity                 90%                    90.32%          90.99%
CO2 recovery               97.95%                 98.99%          96.03%

As discussed in section 5.2, AMPL results were validated using both step-by-step and full-cycle approaches in
MATLAB. The step-by-step validation was done with the same number of spatial finite vol-
umes as used in AMPL, i.e., 10, while full-cycle validation was done with 40 finite volumes. We
observe that the results from AMPL are in reasonable agreement with those from MATLAB
for both the approaches. The step-by-step verification is closer to the AMPL solution because
the initial conditions for each step and the time-dependent fluxes between the beds are taken
from AMPL, and MATLAB only verifies the temporal accuracy of the AMPL solution. On
the other hand, the full-cycle approach reflects more accurate comparison as it simulates the
entire cycle and verifies both spatial and temporal accuracy. We observe a reasonably good
comparison with the full-cycle approach as well. Moreover, we note that as we switch from
CoB to CnB or vice-versa during the cycle, it takes a short while for the flow to reverse entirely
in the bed. As a result, a flow of components from the heavy end of CoB or the light end of
CnB is observed during this short duration.
[Figure 5.4: Optimal CO2 purity-recovery trade-off and the corresponding power consumption for case I; x-axis: CO2 purity (%).]
Such a flow is accounted for in the purity and recovery calculations in AMPL and the full-cycle approach, since they simulate the entire cycle,
but not in the step-by-step approach. Hence, we observe higher recoveries for H2 and CO2
in the step-by-step approach. We register this flow because in our formulation we control the
pressures Pads and Pdes and not the flow rates at the heavy end and the light end of CoB and
CnB, respectively. To avoid this, a valve-based superstructure formulation, which can control
the flows instead of pressures, will be considered in future extensions of this work.
Figure 5.4 shows a trade-off curve between CO2 purity and recovery. The curve is con-
structed by varying the lower bound on CO2 purity and solving the superstructure NLP re-
peatedly. As a result, each point plotted on the curve represents an optimal cycle which yields
the corresponding optimal CO2 recovery for the corresponding purity. In other words, it is an
optimum purity-recovery trade-off curve for the activated carbon adsorbent and the process
conditions assumed in this case study. The feed flux and the cycle time were fixed to their
respective optimal values of 35 kgmol m−2 hr−1 and 198.8 sec for the entire curve. Figure 5.4
also shows the power consumption for the corresponding optimal CO2 purity-recovery com-
bination. With activated carbon as the sorbent, we are able to obtain a maximum purity of
around 96% with a recovery of 90%, but with a power consumption of around 700 kWh/tonne
CO2 captured. For this system, high CO2 purity (> 99%) is not possible with activated carbon
as the sorbent. The curve shows that if a very high purity CO2 separation is desired then the
recovery falls drastically. A similar trend is observed with the purity when a very high CO2
recovery is sought. The intermediate section of the curve is a preferable region to operate.
Although we achieved a high purity separation in the previous case, the power consumption was
also quite high. Therefore, in this case, we modify the objective function of the optimization
problem from maximizing CO2 recovery to minimizing overall power consumption. A lower
bound of 92% is specified for CO2 recovery, while no lower bounds are specified for hydrogen
purity and recovery. The efficiency is kept the same, at 72%, for all compressors and the vacuum generator. The rest of the optimization problem remains the same as in the previous case, and is given below.
As in the previous case, 10 temporal finite elements and 10 spatial finite volumes were
chosen for complete discretization in AMPL. The optimal control profiles obtained for α(t),
β(t), φ(t), Pads(t) and Pdes(t) are shown in Figure 5.5. These profiles translate into a 2-bed
10-step VSA cycle, illustrated in Figure 5.6, which can be deduced in the following manner.
The cycle starts with the first step similar to the first step of the cycle obtained in the previous
case. However, the duration of this step is extremely short in this case.
[Figure 5.5: Optimal control profiles for case II, showing bottom reflux α(t), top reflux β(t), feed φ(t), Pads (kPa), and Pdes (kPa) versus normalized cycle time.]
With α(t)=1, φ(t)=1, and β(t) close to 0.72, CoB undergoes an adsorption with a heavy reflux step (step 1) at the
feed pressure, while CnB undergoes a light reflux step (step 6) at around 380 kPa. After this,
α(t) stays at 1 while β(t) and φ(t) drop to zero. Also, Pads (t) rises slightly while Pdes (t) drops
considerably. Therefore, we observe a high-pressure heavy reflux step for CoB (step 2) and
a counter-current depressurization step for CnB (step 7). Then we observe the longest step
of the cycle in which α(t) drops down to zero, β(t) rises slightly from zero towards the end
of the step, and the profile of φ(t) indicates near constant feed to CoB. From the pressure
profiles it can be inferred that the pressure rises steadily in CoB during this step, while CnB
first operates at atmospheric pressure and then at a vacuum of 50 kPa. Thus, we have a feed
pressurization with adsorption step for CoB (step 3) and a desorption step for CnB (step 8).
Both hydrogen and carbon dioxide are collected at high purity during this step. A small value
of β(t) towards the end of the step suggests a small amount of light reflux from CoB to CnB
to increase CO2 recovery. This small light reflux is shown as a dotted connection between CoB
and CnB in Figure 5.6. Next, a short step is observed in which the values of α(t) and β(t)
both go to 1 and feed rises to a value close to 1. Pads (t) hits the upper bound of 1000 kPa
while Pdes stays in the vacuum range. This suggests a feed pressurization with heavy reflux
step for CoB (step 4) and a light reflux step for CnB (step 9). In the final step, both α(t) and
β(t) stay at 1 while feed goes to zero.
[Figure 5.6: Schematic of the optimal 2-bed 10-step VSA cycle for case II, comprising adsorption with heavy reflux (step 1), heavy reflux at high pressure (step 2), feed pressurization with adsorption (step 3), feed pressurization with heavy reflux (step 4), pressure equalization (steps 5 and 10), light reflux (steps 6 and 9), counter-current depressurization (step 7), and desorption (step 8).]
In addition, the CoB pressure drops and the CnB pressure rises so that Pads = Pdes. This leads to an energy-saving pressure equalization step for both beds (step
5 and step 10). Although α(t) is 1, not much flow is observed from CnB to CoB during this
step. Thus, this reflux is shown as a dotted connection in Figure 5.6. Further, the co-current
bed follows the steps of the counter-current bed and vice-versa. This completes the cycle.
The optimal VSA cycle obtained in this case incorporates conventional operating steps,
and in a conventional order. Figure 5.6 together with the gas-phase CO2 concentration profiles
in Figure 5.7 illustrates several key aspects of the cycle. First, we observe that in this cycle
heavy reflux is not used as a major step to enrich adsorbed-phase CO2 concentration towards
the heavy end of the bed. The light reflux step (step 5) for CnB, since carried out at a high
pressure, doesn’t contribute much CO2 for enrichment for CoB during step 1, as observed
from the CO2 concentration profile for step 1 in Figure 5.7. Similarly, we observe from the
concentration profiles for step 2 in Figure 5.7 that the CO2 reflux from CnB to CoB during
step 2 and step 7 marginally pushes the CO2 adsorption front in CoB. This suggests that the
heavy reflux from CnB to CoB during steps 1 and 2 is specifically used to push hydrogen
out of the light end of CoB. However, though for a short duration, we do observe the use of
heavy reflux to concentrate CO2 towards the heavy end of CoB during the light reflux step at
vacuum (steps 4 and 9). This vindicates the use of vacuum conditions in CnB during this step
to provide a large amount of CO2 for heavy reflux. Moreover, since the duration of these steps
is short, the feed stream jumps in CoB to provide enough hydrogen for CnB as a light reflux.
The second aspect of the cycle is that the CO2 enrichment towards the heavy end of the bed is mostly done with the feed stream. Unlike the previous case, this optimal VSA cycle utilizes
the fact that the feed stream has a high concentration of CO2 at a high pressure. From the
concentration profile of step 3 in Figure 5.7, it is clear that the feed stream is primarily used to
push the CO2 adsorption front. This not only allows a higher feed throughput and enhanced
CO2 recovery for the process, but also reduces the specific power consumption. Such a step is
a conventional way of elevating CO2 concentration in the bed, and thus makes this VSA cycle
more conventional.
The final key aspect of the cycle is the pressure equalization step (steps 5 and 10) which
leads to savings in the power consumption. Although this step saves energy, it can be observed
from the concentration profiles of step 5 that as the pressure drops in CoB, CO2 starts diffusing
towards the light end of CoB. As a result, a small amount of CO2 breaks through the light end
of CoB and enters the light end of CnB, which is clear from the concentration profiles of step
10 in Figure 5.7. However, the amount is minimal and doesn’t lead to a loss in CO2 recovery.
The optimization results for this case are summarized in Table 5.3. With the same number
of variables and degrees of freedom as in the previous case, we were able to obtain the optimal solution in around 1 CPU hour.

Table 5.3: Optimization results and accuracy check for case II

                           Full discretization    MATLAB verification
                                                  step-by-step    full-cycle
Spatial finite volumes     10                     10              40
H2 purity                  93.33%                 94.14%          94.22%
H2 recovery                91.64%                 93.02%          91.05%
CO2 purity                 90%                    91.59%          89.42%
CO2 recovery               92%                    92.92%          93.67%

An optimal power consumption of 46.82 kWh/tonne CO2
captured was obtained, which is an order of magnitude less than the one obtained in the previous case. The low power consumption stems from an optimal feed flux of 96.61 kgmol m−2 hr−1, which is three times the feed flux of case I, and an optimal cycle time which is more than
twice as long as in case I. Since the cycle is handling three times the feed over longer time,
the amount of CO2 recovered increases which leads to a lower work done per tonne of CO2
captured. Another reason for the savings in power consumption is the pressure equalization step.
At the optimum, CO2 purity and recovery were at their respective lower bounds of 90%
and 92%. With this, a reasonable hydrogen purity of 93% and recovery of 91.6% was obtained.
Table 5.3 also lists the accuracy verification of the results obtained from the full discretization
approach in AMPL.
[Figure 5.8: Power-recovery trade-off curve at 90% CO2 purity for case II; x-axis: CO2 recovery (%).]
The purities and recoveries obtained from MATLAB using both the step-by-step approach with 10 spatial finite volumes and the full-cycle approach with 40 finite
volumes are reasonably close to the ones obtained from AMPL. As observed in the previous
case, the step-by-step approach verification is closer to the AMPL solution. However, the
full-cycle verification provides a more accurate comparison since it verifies both spatial and temporal accuracy.
Figure 5.8 shows the trade-off curve between power consumption and CO2 recovery. As in
the previous case, the curve is constructed by varying the lower bound on CO2 recovery, while
keeping the CO2 purity, feed flux and cycle time fixed to their respective optimal values of
90%, 96.61 kgmol m−2 hr−1 and 424.74 sec, and solving the superstructure NLP repeatedly.
As a result, each point on the curve represents the minimum power consumption that can be
obtained for the corresponding CO2 recovery. As expected, the curve shows that the power consumption increases as a higher CO2 recovery is demanded.
From the case studies above, we observe that the superstructure optimization can yield entirely
different configurations with different objectives. The final configurations obtained match the
respective objectives sought in both case studies. The major difference between the optimal
cycles is the way they enrich the CO2 concentration towards the heavy end of the bed. Since the
objective of case I is to maximize CO2 recovery, the optimizer achieves it by minimizing the feed
input through the system, thus attaining the specified lower bound for feed flux. As a result,
minimal feed is used and the optimal configuration doesn't use the high CO2 concentration of the feed stream for enrichment. Instead, we observe the utilization of the heavy reflux step through the entire cycle to achieve the desired
CO2 purity. In contrast, the optimal VSA cycle in case II utilizes the feed stream for CO2
enrichment. Thus, we infer that a heavy reflux step is not an absolute necessity to obtain
heavy component at a high purity when the feed to the PSA system is sufficiently rich in the
heavy component.
As a result of the CO2 enrichment through feed, although the lower bound on feed flux
is 35 kgmol m−2 hr−1 for both cases, the optimal feed flux for case II is almost three times
this value. Consequently, it also decreases the specific power consumption for the cycle. In
contrast, the optimal cycle in case I doesn’t incorporate any power saving step due to the lack
of any constraint on the power consumption in the problem formulation. Thus, unlike case II, we observe an uneconomical pressure drop from 950 kPa to 50 kPa when the cycle transitions from step 4 to step 5. To avoid this, an upper bound on the power consumption can be imposed for case I in the future.
To deduce multibed cycles for a continuous cycle operation from the optimal two-bed solutions, a coordination of step times will be required, which will depend upon how feed is supplied and products are withdrawn from the system. In both case I and case II, H2 is collected for a longer period in the cycle and
continuous flow can be maintained through product buffer tanks. Thus, the coordination can
be achieved with a small number of beds. However, in case I, feed is given or CO2 is removed
for a short duration in the cycle. Such small step times, without feed and product buffer tanks,
can lead to a large number of parallel beds in the continuous operation. On the contrary, the
optimal cycle in case II handles a large amount of feed and removes CO2 for a long duration.
Consequently, a continuous cycle operation will require a small number of parallel beds. Thus,
the optimal cycle obtained in case II is more practical and implementable. To avoid the kinds
of steps obtained in case I, the step times can be constrained to avoid an overly complicated
cycle. One way to handle this is to set the step times as integer multiples of each other; this will be considered in future extensions of this work.
A major limitation exists with the use of conventional PSA cycles for high purity CO2 capture
because they have been designed to recover H2 at an extremely high purity, and consider CO2 as a waste stream. Novel cycles are therefore required which produce H2 and CO2 at a high purity. The complex dynamic behavior of PSA processes together
with the numerical difficulties of the model governed by PDAEs makes the evaluation of differ-
ent cycle configurations challenging and computationally expensive. In this work, we propose a superstructure-based approach to synthesize such cycles. The approach is illustrated for two different case studies of pre-combustion CO2 capture using only activated
carbon as the sorbent. The first case study deals with obtaining an optimal PSA cycle which
maximizes CO2 recovery for at least a desired amount of CO2 and H2 purity. Superstructure
optimization for this case results in a 2-bed 8-step VSA cycle which can produce both H2 and
CO2 at a substantially high purity of 98% and 90%, respectively. A significantly high CO2
recovery of 98% is achieved at a high feed flux of 35 kgmol m−2 hr−1 . Changing the objective
to minimizing power consumption, in the second case study, yields an entirely different 2-bed
10-step VSA cycle. The cycle can produce CO2 at a purity of 90% and a recovery of 92%
with a significantly low power consumption of 46.82 kWh/tonne CO2 captured. With these
results it can be inferred that PSA/VSA is a promising technology for pre-combustion capture
systems. It can produce highly concentrated CO2 streams with minimal energy requirements.
Both case studies were solved to optimality within 1 CPU hour in AMPL using IPOPT
with a reasonable accuracy. Thus, the proposed superstructure approach, with a complete dis-
cretization framework and efficient NLP solvers like IPOPT, is a computationally inexpensive
way to obtain optimal cycles. However, as briefly mentioned in section 4.6, to improve upon
the accuracy of the approach a sensitivity-based sequential approach, similar to [100], will also
be developed to solve the optimal control problem for the superstructure without a separate
verification step. Instead, the PDAEs for the PSA system will be decoupled from the opti-
mization problem, and the partially discretized PDAEs, together with the sensitivities of the
state variables with respect to decision variables, will be integrated outside the optimization
problem using a sophisticated dynamic simulator which is able to capture the state variable
profiles with high accuracy. The optimization problem will then be solved in the space of the decision variables only.
Finally, as mentioned in section 4.6, our superstructure-based methodology is quite generic
and can be extended to many other PSA applications; no assumptions are made on the ad-
sorbent or feedstock, the operating steps that can be predicted, or details of the bed models.
This makes the approach fairly general. Moreover, the superstructure can also be used to
evaluate different kinds of adsorbents for the same feedstock and process conditions. While
the current superstructure involves only two beds, in future we plan to extend the formulation
to incorporate more beds with multiple layers of adsorbents, more complex flow patterns, and other practical operating features.
Optimization
Synopsis
This chapter addresses the computational difficulties that arise due to large-scale state equations related to PDE-constrained optimization
problems. Model reduction is one approach to generate cost-efficient low-order models which
can be used as surrogate models in the optimization problems. This chapter develops a re-
duced-order modeling framework based on proper orthogonal decomposition (POD), which leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial
discretization and making the optimization problem computationally efficient. We explain the
concept of POD, the methodology to construct reduced-order models (ROMs), and motivate their use for optimization. We also describe how ROMs can be utilized to optimize in a trust region around the point where the ROM is constructed.
6.1 Motivation
It is clear that the mathematical model of pressure swing adsorption processes is described by
coupled nonlinear partial differential and algebraic equations distributed in space and time with
periodic boundary conditions that connect the processing steps together, and high nonlineari-
ties arising from non-isothermal effects and nonlinear adsorption isotherms. Also, the solution
of such convection dominated hyperbolic PDAEs is governed by steep adsorption fronts. Con-
sequently, a large number of spatial finite volumes are generally required to capture dynamic
behavior with steep fronts. As a result, optimization of such systems for either design or operation becomes computationally very expensive.
Although sophisticated optimization strategies have been developed and applied to PSA
systems with a significant improvement in the performance of the process (such as the complete
discretization based approach by Nilchan [138] for optimization of a bench-scale and a rapid
PSA process, a mixed-integer nonlinear programming based approach by Smith et al. [177,
178, 179] to minimize number of beds, an SQP-based approach by Ko et al. [110, 111] to
optimize PSA and fractionated vacuum PSA processes, an SQP-based approach by Jiang et
al. [99] with direct sensitivities to obtain derivatives for the optimization problem, and the
complete discretization approach with the interior-point nonlinear solver IPOPT applied in
chapter 4 and 5 for the case studies related to the superstructure optimization), even the most
efficient of these approaches can usually be quite expensive and prohibitively time-consuming.
For instance, we report a CPU time of 12.6 hrs. for case II, and 4.5 hrs. for case III of the post-
combustion capture case study in sections 4.4.2 and 4.4.3, respectively. Even for just 10,500
variables in the optimization problem, CPU time was as high as 52 min. for case I, and 66 min.
for case II of the pre-combustion capture case study in section 5.3.1 and 5.3.2, respectively.
Jiang et al. [100] reported a CPU time of 50-200 hrs. on a 2.4 GHz Linux machine for a 5-bed 11-step PSA process, while the optimization of a simple single-bed air drying PSA process by Sankararao et al. [157] took 720 hrs. on a 2.99
GHz Pentium IV machine. This gives a strong motivation to develop cost-efficient and robust
optimization strategies for PSA processes. Moreover, for flowsheet optimization, incorporation
of dynamic PSA models with other steady-state models in the flowsheet requires much faster
representations of large-scale systems that, in particular, result from the discretization of the
PDEs. Antoulas et al. [11] provide an overview of numerous model reduction techniques
which can be applied to large-scale PDE based systems. In particular, over the past decade,
proper orthogonal decomposition (POD) has been developed as a powerful model reduction
approach that provides an accurate reduction of the large spatially distributed models to
much smaller models, and has resulted in an extensive list of articles. Berkooz et al. [27]
provide a detailed list of such articles. Chatterjee [45] explains the concept and applications
of POD in a clear and concise manner. POD (also known as Karhunen-Loève approximation)
based reduced-order modeling technique is ubiquitous and has been applied for a multitude
of applications. It has been extensively used in obtaining low-dimensional models for efficient
simulation [116, 117, 128, 131, 153] and control [12, 26, 115, 147] in fluid dynamics. Yuan
et al. [210] developed reduced-order models for bubbling fluidized beds. Armaou et al. [13]
utilized the concept of POD for a diffusion reaction process, while Theodoropoulou et al. [185]
extended the applications of ROM-based modeling to chemical vapor deposition. Park et al. applied POD-based model reduction on an irregular domain. Shvartsman et al. [162] utilized POD for generating ROMs for an MOVPE
reactor. Couplet et al. [56], Favier et al. [68], and Galletti et al. [82] developed calibrated
reduced-order modeling techniques using POD for the laminar and turbulent flow problems.
Cao et al. [39] developed a POD-based reduced order model for analysis of a detailed model
of the upper tropical Pacific Ocean. Gunzburger et al. [89] developed a generic framework to incorporate boundary conditions into POD-based ROMs for time-dependent PDEs. The framework addresses both homogeneous and nonhomogeneous boundary
conditions. On similar lines, Rambo et al. [146] developed ROMs with parametric conditions
specifically for turbulent forced convection problems. Willcox has done extensive research in
the field of POD-based reduced-order models (see [200, 201, 15, 34, 35, 36]). In particular,
Bui-Thanh et al. [37] provided a goal-oriented framework for generating POD-based ROMs.
ROMs are derived from solutions of detailed distributed models through the representation of the state variables in terms of a set of empirical eigenfunctions. ROMs are then formulated through the substitution of the eigenfunction expansion into the
PDE model using Galerkin projection. Truncation of those modes that have no significant
contribution to the solution profile then leads to a significant reduction in the number of
states which eventually leads to a much smaller optimization problem. Numerous studies
report use of reduced-order modeling for the purpose of optimization and optimal control.
Kunisch et al. [115] used it to control Burgers equation. Armaou et al. [14] applied the
concept of model reduction to time-dependent parabolic PDEs and generated ROMs for optimization of the transport-reaction processes, while Theodoropoulou [185] optimized the chemical vapor
deposition process using ROMs. Luna-Ortiz et al. [127] developed an input-output based
optimization scheme for large-scale systems. Fahl [65] applied a trust region proper orthogonal
decomposition algorithm for the optimal boundary control of a cavity flow. Bergmann et al.
[25] also applied the same algorithm for optimal control of the circular cylinder wake flow
considered in a laminar regime. Balsa-Canto et al. [21] solved a problem related to design and optimal control using POD-based ROMs. LeGresley et al. [122] performed analysis and design optimization
of inviscid airfoils using ROMs for both subsonic and transonic flows. Weickum et al. [198]
developed extended ROMs for optimal design problems, in which ROMs are developed for
the whole design space before optimization and then a trust region based strategy is used for
globalization. Recently, Varshney et al. [189] utilized ROMs to optimize multiscale systems.
However, until now ROM-based optimization has been used only for small-scale optimal control
or dynamic optimization problems, and its use for large-scale dynamic optimization problems,
especially PSA, has not been explored. For PSA, very few studies involve model reduction. Approaches based on design of experiments, short-cut models suggested by Chung et al. [53], and a model simplification
strategy chalked out by Zhang et al. [211], which relies on understanding detailed physics
behind each operating step, are the only known articles. Our focus is to develop systematic reduced-order models and ROM-based optimization strategies for PSA systems. In the following sections, we demonstrate construction of reduced-order models using POD, and illustrate ROM construction with the example of the Burgers equation.
6.2.1 Concept
Proper orthogonal decomposition (POD), first introduced by Lumley [126], is now used in a wide
variety of disciplines such as turbulence, image processing, signal analysis, data compression,
oceanography, and process identification and control. The key idea of POD is to compute a set
of orthonormal functions, called POD basis functions, such that they can describe the dynamic
system under consideration with as few basis functions as possible. The set of basis functions
is optimal in the sense that it captures and describes the dynamic behavior of the system with the fewest possible basis functions. Given a set of observations in a high-dimensional vector space, we seek to find a subspace, much smaller than the original vector space, such that the projection of all the observations onto it is maximal. The attractiveness of the POD
technique lies in the fact that the basis functions are derived from the numerical solutions or
experimental measurements of the system, thus exhibiting a local characteristic and ensuring
that such a basis set inherently describes the dynamics in the best possible manner by being derived from the system itself.
In particular, we use the method of snapshots in this work to generate POD basis functions
[175]. POD-based model reduction begins with the collection of snapshot sets which consist of
solutions of the PDEs at several time instants during the evolution of the system. These snap-
shot sets are obtained by solving a rigorous, large-dimensional system obtained after spatial
discretization (and temporal also in some cases) of the PDEs. The determination of these sets
is crucial to the effectiveness of POD-based reduced-order modeling. Hence, they must contain
sufficient information to accurately represent the dynamics of the system. One then uses the
set of snapshots to determine a POD basis set which can accurately capture the information contained in them. The snapshot set is written as
\[
Y = \{\, y^1, \ldots, y^{N_t} \,\} \tag{6.1}
\]
with the fields y^j = y(x, t_j), where Nt is the number of snapshots and Nx is the number of spatial discretization nodes; the snapshots may also be taken as fluctuations around the time-averaged mean of the trajectory at each location xi. The POD procedure computes an orthonormal set of basis functions {φ1, . . . , φNx} which maximizes the projection of each snapshot onto the first M basis functions:
\[
\max_{\phi_1,\ldots,\phi_M} \;\; \sum_{i=1}^{M} \sum_{j=1}^{N_t} \big| (y^j, \phi_i) \big|^2
\quad \text{s.t.} \quad \|\phi_i\| = 1, \;\; (\phi_i, \phi_j)_{i \neq j} = 0, \quad i, j = 1, \ldots, M \tag{6.2}
\]
where (v, w) = (v, w)L2 denotes the L2 inner product with the corresponding norm ||v|| = (v, v)^(1/2). Since Problem (6.2) involves maximizing a convex function, it is reformulated such that the sum of the errors between each snapshot and its projection onto a truncated set of the first M ≤ Nx basis functions is
minimized.
\[
\min_{\phi_1,\ldots,\phi_M} \; \varepsilon^{POD}(M) = \sum_{j=1}^{N_t} \Big\| \, y^j - \sum_{i=1}^{M} (y^j, \phi_i)\,\phi_i \, \Big\|^2
= \sum_{j=1}^{N_t} \Big\| \sum_{i=M+1}^{N_x} (y^j, \phi_i)\,\phi_i \Big\|^2 \tag{6.3a}
\]
\[
\text{s.t.} \quad \|\phi_i\| = 1, \;\; (\phi_i, \phi_j)_{i \neq j} = 0, \quad i, j = 1, \ldots, M \tag{6.3b}
\]
where each snapshot y j can be represented in terms of the new basis set using its projection
(y j , φi ) in the direction of φi
\[
y^j = \sum_{i=1}^{N_x} (y^j, \phi_i)\,\phi_i, \qquad j \in \{1, \ldots, N_t\} \tag{6.4}
\]
If each snapshot is instead represented with only the first M basis functions, a significant model reduction is achieved since the value of M is much smaller compared to the value of Nx. Moreover, even with such a small M, the POD error in projection εPOD(M) is generally very small. Thus, eventually we obtain a much smaller subspace spanned by a very few basis functions.
It can be shown that POD basis functions provide an optimal basis set for the representation
of the dynamics under consideration [65, 115]. Here, from a physical point of view, these
first M basis functions capture more “energy” of the snapshot field than any other set of M
orthonormal spatial basis functions. In other words, if we desire to represent each snapshot
with exactly M orthonormal basis functions, then M POD basis functions provide the best representation; that is, for any other orthonormal set {ψ1, . . . , ψM},
\[
\varepsilon^{POD}(M) = \sum_{j=1}^{N_t} \Big\| \, y^j - \sum_{i=1}^{M} (y^j, \phi_i)\,\phi_i \, \Big\|^2
\;\le\; \sum_{j=1}^{N_t} \Big\| \, y^j - \sum_{i=1}^{M} (y^j, \psi_i)\,\psi_i \, \Big\|^2 \tag{6.5}
\]
Computation of POD basis functions is closely linked with calculating the singular value decom-
position (SVD) of the snapshot matrix Y ∈ R^{Nx×Nt}. Singular value decomposition guarantees the existence of orthogonal matrices U ∈ R^{Nx×Nx} and V ∈ R^{Nt×Nt} such that U^T Y V = D, where D ∈ R^{Nx×Nt} is a diagonal matrix with diagonal entries being the singular values σi of Y; the σi² are the eigenvalues of YY^T (or of Y^T Y, as both have the same eigenvalues). As a consequence, we obtain the singular value decomposition
\[
Y = U D V^T \tag{6.6}
\]
SVD can be used to obtain a complete set of POD basis functions. To obtain the POD basis set, we solve Problem (6.2) for a given value of M. A necessary optimality condition for Problem (6.2) is the eigenvalue problem
\[
Y Y^T u = \sigma^2 u \tag{6.7}
\]
The first M columns of U (i.e., the left singular vectors of Y) solve (6.7), and thus solve Problem (6.2) as well. Hence {ui}_{i=1}^{M} represent the desired set of
POD basis functions (or basis vectors in our case of finite spatial dimension). The amount of projection captured decreases with the index of the basis vector: the first POD basis function captures the maximum projection of Y on the new reduced basis set, the second POD basis function is second best in capturing the projection of Y, and so on. We note that Equation (6.6) can also be written as Y = Û B, with Û ∈ R^{Nx×M} being the reduced set of eigenvectors and B ∈ R^{M×Nt}, so that each snapshot can be expressed in terms of the M linearly independent columns of Û:
\[
y^j = \sum_{i=1}^{M} b_{i,j}\, u_i, \qquad j = 1, \ldots, N_t \tag{6.8}
\]
where b_{i,j} are the elements of B. Hence, it is clear that the columns of U indeed represent a set of basis functions, and eventually turn out to be the POD basis functions based on the definition. We also note that Equation (6.6) can be written as Û D̂ = Y V, with D̂ ∈ R^{M×Nt} and Û as defined above. It follows that the POD basis vectors are linear combinations of the snapshots, thus ensuring that such a basis set inherently describes the dynamics of the system under consideration.
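To make the SVD route concrete, the following self-contained MATLAB fragment computes a POD basis from a snapshot matrix. The snapshot data here are synthetic and merely stand in for solutions of the spatially discretized model; the variable names are illustrative only.

```matlab
% Synthetic snapshot matrix Y (Nx spatial nodes x Nt time snapshots),
% standing in for solutions of the discretized PDAE model.
Nx = 50; Nt = 100;
x  = linspace(0, 1, Nx)';  t = linspace(0, 1, Nt);
Y  = sin(pi*x)*exp(-t) + 0.1*sin(3*pi*x)*cos(2*pi*t);

% Singular value decomposition of the snapshot matrix, Y = U*D*V'
[U, D, V] = svd(Y, 'econ');

% The first M left singular vectors are the POD basis vectors {u_i}
M   = 5;
Phi = U(:, 1:M);

% Orthonormality check: Phi'*Phi should equal the M-by-M identity
fprintf('Orthonormality error: %g\n', norm(Phi'*Phi - eye(M)));
```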
Choice of the subspace dimension M controls the overall error in projection εP OD (M ) in Prob-
lem (6.3). Nevertheless, the choice of M is an important and critical task, since it determines
the interrelation between accuracy and dimension of the POD based reduced order models.
Since the singular values of the snapshot matrix Y convey the amount of projection captured,
we utilize a criterion based on these singular values to choose M. Using SVD, the error in projection can be written as
\[
\varepsilon^{POD}(M) = \sum_{i=M+1}^{N_x} \sigma_i^2 \tag{6.9}
\]
and the fraction of the projection that is not captured by the first M basis functions is
\[
\lambda = 1 - \frac{\sum_{i=1}^{M} \sigma_i^2}{\sum_{i=1}^{N_x} \sigma_i^2}
= \frac{\sum_{i=M+1}^{N_x} \sigma_i^2}{\sum_{i=1}^{N_x} \sigma_i^2}
= \frac{\varepsilon^{POD}(M)}{\sum_{i=1}^{N_x} \sigma_i^2}
= \varepsilon^{POD}_{norm}(M) \tag{6.10}
\]
where εPOD_norm(M) is the normalized error in projection. A value of M is chosen such that the normalized projection error is less than a tolerance level λ∗ [175, 65, 27]. This is also known as an M-rank approximation since the rank of the matrix U after truncation, and that of the solution subspace, is M.
It is commonly observed that the first few singular values are significantly larger than the
subsequent ones, thus representing most of the captured projection of the system. Therefore,
based on the aforementioned criterion, basis functions corresponding to those smaller singular
values are dropped, which eventually leads to a much smaller subspace spanned by a very few
basis functions. Hence, a significant model reduction is achieved since generally the value of
M is much smaller compared to the value of Nx. For instance, M can be less than 10, whereas Nx can be orders of magnitude larger.
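Continuing the sketch above, the truncation criterion (6.10) can be applied directly to the singular values of the snapshot matrix; the tolerance λ∗ used here is an arbitrary illustrative value.

```matlab
% Squared singular values of the snapshot matrix (from the previous sketch)
sigma2 = diag(D).^2;

% Normalized error in projection, Equation (6.10), for every candidate M
lambda = 1 - cumsum(sigma2)/sum(sigma2);

% Smallest M whose normalized projection error is below the tolerance
lambdaStar = 1e-6;                      % illustrative tolerance
M = find(lambda < lambdaStar, 1);
fprintf('Chosen subspace dimension M = %d\n', M);
```

For the synthetic snapshots used above, only a handful of basis vectors are needed, which mirrors the kind of reduction described in this section.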
6.3.1 Methodology
After computing POD basis functions, a reduced-order model (ROM) is derived by projecting
the underlying PDEs of the system onto the corresponding POD subspace. We use a Galerkin-
∂y ∂y ∂ 2 y
=f y, , (6.11)
∂t ∂x ∂x2
In terms of the new set of POD basis functions, the state variable y(x, t) is written as
\[
y(x, t) = \sum_{i=1}^{M} a_i(t)\,\phi_i(x) \tag{6.12}
\]
where {ai}_{i=1}^{M} are the unknown temporal coefficients in the expansion. We solve this system of
PDEs using the method of weighted residuals, in which the inner product of the residual of the PDEs with an orthonormal set of basis functions {ωi}_{i=1}^{P} is set to zero, i.e.,
\[
\int \left( \frac{\partial y}{\partial t} - f\!\left( y, \frac{\partial y}{\partial x}, \frac{\partial^2 y}{\partial x^2} \right) \right) \omega_i \, dx = 0, \qquad i = 1, \ldots, P \tag{6.13}
\]
In particular, for Galerkin projection, we choose the basis set {ωi}_{i=1}^{P} to be the same as the set of basis functions in terms of which the state variable is defined, i.e., the POD basis functions in this case:
\[
\int \left( \frac{\partial y}{\partial t} - f\!\left( y, \frac{\partial y}{\partial x}, \frac{\partial^2 y}{\partial x^2} \right) \right) \phi_i \, dx = 0, \qquad i = 1, \ldots, M \tag{6.14}
\]
Substituting the expansion (6.12) into (6.14) gives
\[
\int \left( \sum_{j=1}^{M} \frac{da_j}{dt}\,\phi_j(x)
 - f\!\left( \sum_{j=1}^{M} a_j(t)\phi_j(x), \; \sum_{j=1}^{M} a_j(t)\frac{d\phi_j}{dx}, \; \sum_{j=1}^{M} a_j(t)\frac{d^2\phi_j}{dx^2} \right) \right) \phi_i \, dx = 0,
 \qquad i = 1, \ldots, M \tag{6.15}
\]
Since the POD basis functions are orthonormal, we finally obtain our reduced-order model
\[
\frac{da_i}{dt} = \int f\!\left( \sum_{j=1}^{M} a_j(t)\phi_j(x), \; \sum_{j=1}^{M} a_j(t)\frac{d\phi_j}{dx}, \; \sum_{j=1}^{M} a_j(t)\frac{d^2\phi_j}{dx^2} \right) \phi_i \, dx,
 \qquad i = 1, \ldots, M \tag{6.16}
\]
where the integrals are evaluated with the L2 inner product. It should be noted that in the final reduced-order model we obtain only M ordinary
differential equations (ODEs) compared to Nx ODEs that we usually obtain after applying spa-
tial discretization techniques such as finite difference, finite element, or finite volume. Since M is much smaller than Nx, in an optimization problem, replacing the set of ODEs obtained after spatial discretization with the smaller set of ODEs of the reduced-order model yields a much smaller and computationally cheaper problem.
One of the key issues in reduced-order modeling is the incorporation of boundary conditions.
If the boundary conditions are homogeneous, no changes are required in the aforementioned
methodologies of obtaining POD basis functions and the final reduced-order model. However, nonhomogeneous boundary conditions require special treatment in general. Gunzburger et al. [89] have developed a generic framework to incorporate boundary conditions.
For non-homogeneous Dirichlet boundary conditions, we utilize the idea of computing POD
basis elements for fluctuations around the mean value of the snapshots. Given Nt snapshots, first we compute the mean value of the snapshots ȳ = (1/Nt) Σ_{j=1}^{Nt} y^j. Next, the snapshot matrix is formed from the fluctuations y^j − ȳ, and the POD basis functions are computed using this modified input ensemble. This helps in projecting out the boundary condition to the mean value of the snapshots and allows the POD basis functions to follow a homogeneous boundary condition. The state variable is then expanded as
\[
y(x, t) = \bar{y}(x) + \sum_{i=1}^{M} a_i(t)\,\phi_i(x) \tag{6.17}
\]
Let the boundary conditions be y(0, t) = A, y(L, t) = B. Since all the snapshots satisfy
this, ȳ(x) will also satisfy this, i.e., ȳ(0) = A and ȳ(L) = B. Hence, φi(0) = φi(L) = 0,
i = 1, . . . , M , which helps in ensuring that the boundary conditions are always satisfied by the
solution obtained after integrating the ROM. In some cases, especially when a boundary condition needs to be imposed exactly, it is desirable to account for that boundary condition explicitly in the expansion (6.17). Let such a boundary condition be y_b. In this case, first we subtract y_b from all the snapshots at all spatial points {xi}_{i=1}^{Nx}. To ensure consistency for the other boundary condition, the mean value of the snapshots is then computed for these modified snapshots, i.e., ȳ_b = (1/Nt) Σ_{j=1}^{Nt} (y^j − y_b). Consequently, the snapshot matrix used to compute the POD basis functions consists of fluctuations around y_b + ȳ_b, and the state variable is expanded as
\[
y(x, t) = y_b + \bar{y}_b + \sum_{i=1}^{M} a_i(t)\,\phi_i(x) \tag{6.18}
\]
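The mean-subtraction ideas behind (6.17) can be sketched in a few lines of MATLAB. The snapshots below are synthetic and the boundary values A and B are arbitrary; the fragment only illustrates that expanding around the snapshot mean reproduces nonhomogeneous Dirichlet boundary values.

```matlab
% Synthetic snapshots whose columns all satisfy y(0,t) = A and y(L,t) = B
Nx = 50; Nt = 80;
x  = linspace(0, 1, Nx)';  t = linspace(0, 0.5, Nt);
A  = 1;  B = 2;
Y  = A + (B - A)*x + 0.2*sin(pi*x)*sin(2*pi*t);

% Mean of the snapshots; it also satisfies the boundary conditions
ybar = mean(Y, 2);

% POD basis of the fluctuations about the mean; these vanish at the boundaries
[U, ~, ~] = svd(Y - ybar, 'econ');
M   = 3;
Phi = U(:, 1:M);

% Expansion (6.17): y(x,t) ~ ybar(x) + sum_i a_i(t) phi_i(x)
a    = Phi' * (Y - ybar);        % temporal coefficients by projection
Yrom = ybar + Phi*a;             % reconstruction; boundary values reproduced (up to round-off)
fprintf('Maximum reconstruction error: %g\n', max(abs(Y(:) - Yrom(:))));
```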
In this work, we construct a ROM only for the interior spatial domain, and the boundaries are excluded. In other words, ROMs do not directly determine the solution at the boundaries. We approximate the derivative at the boundaries with a finite difference scheme. Based on this approximation, together with the boundary condition and the interior solution obtained after integrating the ROM, the boundary values of the state variables are recovered.
We illustrate the construction of a reduced-order model with the example of the Burgers equation. The Burgers equation represents a wave moving in time with a
constant velocity. The wave doesn’t retain its shape and diffuses out because of the diffusion
present in the system. Thus, it closely represents the dynamic behavior observed in a PSA
system. Mathematically, Burgers equation is similar to the component mass balance equation
$\frac{\partial y}{\partial t} + y\,\frac{\partial y}{\partial x} = \mu\,\frac{\partial^2 y}{\partial x^2}, \quad \mu = 0.01$
$y(0,t) = 0, \quad y(1,t) = 0, \quad y(x,0) = \begin{cases} 0.5 & \text{for } 0 < x \le 0.5 \\ 0 & \text{for } 0.5 < x < 1 \end{cases}$   (6.19)
While the boundary conditions are homogeneous, this peculiar initial condition represents a square wave. To construct a POD-based reduced-order model, we express $y = \sum_{i=1}^{M} a_i(t)\phi_i(x)$, where $\{\phi_i\}_{i=1}^{M}$ are the POD basis functions, and apply Galerkin projection to obtain
$\left( \sum_{j=1}^{M} \frac{da_j}{dt}\phi_j + \sum_{j=1}^{M} a_j\phi_j \sum_{j=1}^{M} a_j\frac{d\phi_j}{dx} - \mu \sum_{j=1}^{M} a_j\frac{d^2\phi_j}{dx^2},\; \phi_i \right) = 0, \quad i = 1, \ldots, M$   (6.20)
Figure 6.1: Comparison of original profile and ROM profiles for Burgers equation for varying
subspace dimension
Here we use an L2 inner product since the snapshots are obtained after spatial discretization
of the PDE which lead to POD basis vectors (not basis functions). After simplification and
applying orthonormality of the basis functions, we obtain our reduced-order model which is
given as
$\frac{da_i}{dt} + \left( \sum_{j=1}^{M} a_j\phi_j \sum_{j=1}^{M} a_j\frac{d\phi_j}{dx},\; \phi_i \right) - \mu \left( \sum_{j=1}^{M} a_j\frac{d^2\phi_j}{dx^2},\; \phi_i \right) = 0, \quad i = 1, \ldots, M$   (6.21)
Equation (6.19) was first discretized in space using a simple finite difference scheme with
50 spatial nodes (Nx = 50). The resulting set of ODEs was then integrated in MATLAB using
ode15s to obtain snapshots. With 100 time snapshots in the snapshot matrix, POD basis set
was computed using SVD. Finally, ROM was constructed as in Equation (6.21) and analyzed
for different values of subspace dimension M . Figure 6.1 compares the original solution profile
of the Burgers equation with that obtained from the ROM for different values of M. It also reports the error in projection for each M; for just 7 POD basis functions, the error in projection is as low as 0.7%, and we also obtain a
substantial model reduction (almost (1/7)th of the model obtained after spatial discretization
with 50 nodes). Moreover, the solution of ROM with 7-rank approximation is almost identical
to the original solution. Also, the solutions obtained with 3-rank and 5-rank approximation
are fairly accurate, with a somewhat higher error in projection. Hence, this illustrates the power of reduced-order modeling to predict the dynamic behavior of the system with significant accuracy.
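The snapshot-generation and projection steps just described can be summarized in a short MATLAB sketch (not the thesis code); the time horizon, solver settings, and the simple upwind discretization below are illustrative assumptions.

    % Sketch: snapshots, POD basis, and a discrete Galerkin ROM for the Burgers problem
    % (6.19) with Nx = 50 interior nodes.
    Nx = 50; mu = 0.01; dx = 1/(Nx+1); x = dx*(1:Nx)';
    y0 = 0.5*(x <= 0.5);                              % square-wave initial condition
    tsnap = linspace(0, 1, 100);                      % 100 snapshot times (assumed horizon)

    [~, Y] = ode15s(@(t,y) burgers_rhs(y, dx, mu), tsnap, y0);
    S = Y.';                                          % snapshot matrix, Nx x Nt
    [U, ~, ~] = svd(S, 'econ');                       % left singular vectors = POD basis
    M = 7;  Phi = U(:, 1:M);                          % truncated orthonormal basis

    % Reduced-order model: project the discretized residual onto the POD subspace
    rom_rhs = @(t, a) Phi.' * burgers_rhs(Phi*a, dx, mu);
    [~, A]  = ode15s(rom_rhs, tsnap, Phi.'*y0);       % integrate M ODEs instead of Nx
    Yrom    = (Phi*A.').';                            % reconstructed full-order profiles

    function dydt = burgers_rhs(y, dx, mu)
    % Upwind convection and central diffusion with homogeneous Dirichlet BCs.
    ye   = [0; y; 0];
    dydt = -ye(2:end-1).*(ye(2:end-1) - ye(1:end-2))/dx ...
           + mu*(ye(3:end) - 2*ye(2:end-1) + ye(1:end-2))/dx^2;
    end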
Such reduced-order models are particularly attractive for optimization problems. Since the number of DAEs in a ROM is much smaller compared to the number of DAEs obtained after spatial discretization of the PDEs, replacing the latter with the former yields a much smaller and computationally-efficient optimization problem. For a DAE-constrained optimization problem (6.22) with differential variables y(t), algebraic variables z(t), decision variables p with lower and upper bounds bL and bU, respectively, and Nx ODEs obtained after spatial discretization, the corresponding ROM-based problem (6.23) replaces these ODEs with the M projected ODEs of the reduced-order model. Here ai(t) are the unknown temporal coefficients from Equation (6.12), and Equation (6.23b) corresponds to the ODEs of the ROM.
To solve Problem (6.23), we discretize DAEs in time and convert it into a standard nonlinear
programming problem (NLP) which can be solved using state-of-the-art NLP solvers such as
IPOPT. With the superstructure optimization case studies in chapters 4 and 5, we observed that full discretization of the PDAEs leads to a very large set of algebraic equations and a prohibitively expensive optimization problem
due to a large number of spatially discretized nodes required to capture steep adsorption fronts.
Thus, we considered fewer spatial finite volumes to solve the NLP in a reasonable amount of
time, and compromised on the accuracy. However, Problem (6.23) doesn’t present such an
issue since the DAE set is obtained after projecting PDEs onto the POD subspace, and thus is
quite small in size. Moreover, even after considering many temporal finite elements to ensure
satisfactory temporal accuracy, the size of the resulting NLP remains manageable. Although
a large number of spatial finite volumes is required to obtain snapshots using the method of lines to compute the POD subspace, this computation is done just once and remains outside the optimization problem.
A major issue with ROM-based optimization and using a ROM for Problem (6.22) is that
although a ROM is substantially accurate for the values of the decision variables at which it
is constructed (we call it “root-point”), it loses its accuracy at a different point in the decision
variable space since the snapshots at the root-point do not capture the spatial behavior and
dynamics of the system at any other point in the decision variable space. Moreover, the error
in the solution given by the ROM increases as we go further away from the root-point. There-
fore, it is invalid to use a particular ROM for the optimization problem (6.22), i.e., Problem
(6.23) cannot be defined for the entire decision variable space. However, we assume a reason-
able accuracy for the ROM in a confidence region (or “trust-region”) around root-point, and
write Problem (6.23) only for that trust-region to benefit from the computational advantage
offered by ROMs. Hence, we define tighter bounds on the decision variables in Problem (6.23) around the root-point. Optimization
is then performed using ROM and the optimal solution obtained becomes the new root-point
where ROM can be updated with new snapshots. Problem (6.23) is then solved again with a
new trust-region around this new root-point. A systematic adaptive scheme based on such a
repetitive strategy will be developed in the next chapter. In the subsequent sections, we il-
lustrate ROM-based optimization within a neighborhood of the root-point with the help of an
example of the hydrogen PSA process. Since this forms a key step in the adaptive optimization strategy developed in the next chapter, we describe it in detail here.
We consider a 2-bed 4-step hydrogen PSA process which extracts hydrogen from a feed mixture
comprising 30% hydrogen and 70% methane. In particular, the process is a Skarstrom cycle
shown in Figure 2.3, and described in detail in section 2.4.1. The target process for this case
study is bench-scale as described in Ko et al. [110, 111]. Design specifications and simulation
conditions are listed in Table 6.1. We make the following assumptions to develop a mathematical model:
2. There are no radial variations in temperature, pressure, and concentrations of the gases.
Table 6.1: Design specifications and simulation conditions
Parameter Value
Bed Length (L) 1m
Bed porosity (ε_b) 0.404
Bed radius (Rb ) 0.25 m
Particle radius (Rp ) 5.41 × 10−3 m
Particle porosity (ε_p) 0.546
Diffusivity (Dx ) 1.3 × 10−5 m2 /sec
Particle density (ρp ) 716.3 kg/m3
Bed density (ρb ) 426.7 kg/m3
Thermal diffusivity (KL ) 1.2 × 10−6 J/m/sec/K
Heat capacity of solid (Cps ) 1046.7 J/kg/K
Heat transfer coefficient (h) 60 J/m2 /sec/K
Lumped mass transfer coefficient (k) (0.136,0.259)(CH4 ,H2 ) 1/s
Heat of adsorption (∆H) (24124,8420)(CH4 ,H2 ) J/mole
Gas viscosity (µ) 3.73 × 10−8 kg/m/sec
R 8.314 J/mol/K
Ambient temperature (Tw ) 300 K
Feed temperature (Tf eed ) 310 K
Feed composition (0.7,0.3)(CH4 ,H2 )
Feed pressure (Pf eed ) 600 kPa
Purge pressure (Ppurge ) 150 kPa
Pressurization time (tp ) 5s
Adsorption time (ta ) 50 s
3. The gas and the solid phases are in thermal equilibrium, and the bulk density of the solid remains constant.
6. The adsorption rate is approximated by the linear driving force (LDF) expression.
7. A linear profile is assumed for the superficial gas velocity for all the steps. Cruz et al. [58] suggested that this kind of assumption is valid for bench-scale PSA processes and introduces only a small error.
Based on the above assumptions, the mathematical model for the PSA process is listed in Table 6.2. Here we use a lumped mass transfer coefficient for the LDF model.
Table 6.2: Mathematical model for the hydrogen PSA process (rigorous model)

Ergun equation
$-\frac{\partial P}{\partial x} = \frac{150\mu(1-\epsilon_b)^2}{4R_p^2\,\epsilon_b^3}\,u + \frac{1.75\rho_g(1-\epsilon_b)}{2R_p\,\epsilon_b^3}\,u^2$   (6.26)

LDF equation
$\frac{\partial q_i}{\partial t} = k_i\,(q_i^{*} - q_i), \quad i = 1,2$   (6.27)

Energy balance
$(\epsilon_b\rho_g C_{pg} + \rho_b C_{ps})\frac{\partial T}{\partial t} + \rho_g C_{pg}\epsilon_b u\frac{\partial T}{\partial x} - K_L\frac{\partial^2 T}{\partial x^2} - \rho_b\sum_{i=1}^{2}\Delta H_i^{ads}\frac{\partial q_i}{\partial t} + \frac{4h_w}{D}(T - T_w) = 0$   (6.28)

$\rho_g = \frac{P}{RT}\sum_{i=1}^{2} y_i M_{wi}, \qquad C_{pg} = \sum_{i=1}^{2} y_i C_{pg}^{i}, \qquad C_{pg}^{i} = a_c^i + b_c^i T + c_c^i T^2 + d_c^i T^3, \quad i = 1,2$

Langmuir isotherm
$q_i^{*} = \frac{a_i y_i P}{1 + \sum_{i=1}^{2} b_i y_i P}, \qquad a_i = \alpha_{1i}e^{\alpha_{2i}T}, \quad b_i = \beta_{1i}e^{\beta_{2i}T}, \quad i = 1,2$   (6.29)
The temperature-dependent adsorption isotherm parameters for hydrogen and methane on activated carbon
(α1 , α2 and β1 , β2 ) are listed in Table 6.3 [108]. As suggested by Equations (6.24) and (6.25),
PDE for the component mass balance is solved only for methane and mole fraction of hydrogen
is evaluated by ensuring that the mole fractions sum up to one. We enforce this summation
because the overall mass balance, which implicitly ensures such a summation, is not taken into
account for the superficial velocity calculation. We denote this model in Table 6.2 as the rigorous model.
Tables 6.4 and 6.5 show the equations for molar flux variables, to calculate purities and
recoveries, and the boundary conditions for each operating step, respectively. Based on the
molar flux variables, purities and recoveries of hydrogen and methane are given by
$\text{purity}_{H_2} = \frac{\int O_{H_2}(t)\,dt}{\int \left(O_{H_2}(t) + O_{CH_4}(t)\right) dt}$   (6.32)
$\text{purity}_{CH_4} = \frac{\int Hp_{CH_4}(t)\,dt}{\int \left(Hp_{H_2}(t) + Hp_{CH_4}(t)\right) dt}$   (6.33)
$\text{recovery}_{H_2} = \frac{\int O_{H_2}(t)\,dt - \int Pg_{H_2}(t)\,dt}{\int F_{H_2}(t)\,dt}$   (6.34)
$\text{recovery}_{CH_4} = \frac{\int Hp_{CH_4}(t)\,dt}{\int F_{CH_4}(t)\,dt}$   (6.35)
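For reference, these integrals can be evaluated numerically from the molar-flux histories over one cycle at CSS. The MATLAB sketch below assumes hypothetical sampled vectors t, OH2, OCH4, HpH2, HpCH4, PgH2, FH2, and FCH4 for the corresponding fluxes.

    % Sketch: purity and recovery integrals (6.32)-(6.35) from sampled flux histories.
    purityH2    = trapz(t, OH2) / trapz(t, OH2 + OCH4);
    purityCH4   = trapz(t, HpCH4) / trapz(t, HpH2 + HpCH4);
    recoveryH2  = (trapz(t, OH2) - trapz(t, PgH2)) / trapz(t, FH2);
    recoveryCH4 = trapz(t, HpCH4) / trapz(t, FCH4);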
We use the method of lines approach to convert PDAEs in Table 6.2 into a set of DAEs after
spatial discretization, which is then simulated to generate snapshots for POD basis functions.
For spatial discretization, we use an upwind-based finite volume scheme with Van Leer flux
limiter for mole fraction and temperature in order to introduce additional numerical dispersion
around steep adsorption fronts (cf. section 2.5.1). We utilize the Multibed approach and
simulate both beds simultaneously for half the cycle (cf. section 2.4.3).
Table 6.3: Isotherm parameters for H2 and CH4 on activated carbon [108]
α1 α2 β1 β2
Methane 0.0086 -0.2155 0.0004066 -0.010604
Hydrogen -0.0000379 -0.01815 2.2998 -0.05993
The snapshots are generated only after the 2-bed system attains cyclic steady state. To
achieve CSS, we use a successive substitution method in which first the PSA cycle is simulated
with random initial conditions, and then a series of simulations are performed with initial
conditions of each new cycle taken from the final condition of the previous cycle. This is
executed successively until the bed conditions do not change from cycle to cycle. Since a
bench-scale PSA model is considered in this case, we achieve cyclic steady state after 100-120
cycles.
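A minimal MATLAB sketch of this successive substitution loop is given below; simulate_cycle, state0, and the tolerance are hypothetical placeholders standing in for the full-cycle DAE integration and the initial bed condition.

    % Sketch: successive substitution to cyclic steady state (CSS).
    tol = 1e-6; maxCycles = 200;                % assumed tolerance and cycle limit
    state = state0;                             % arbitrary initial bed condition
    for n = 1:maxCycles
        newState = simulate_cycle(state);       % one full cycle from the current state
        if norm(newState - state) <= tol*norm(newState)
            break;                              % bed profiles no longer change cycle to cycle
        end
        state = newState;                       % next cycle starts from the previous final state
    end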
After obtaining snapshots, separate POD basis functions are generated for pressurization,
adsorption, depressurization and desorption steps. Moreover, we derive separate POD basis
functions for gas phase mole fractions, solid phase loadings, temperature, and pressure. Similar
to Equation (6.17), these state variables are then expressed in terms of the corresponding POD basis as
$y_i^{R}(x,t) = y_{0i}(x) + \sum_{j=1}^{M} a_{yij}(t)\phi_{yij}(x), \qquad q_i^{R}(x,t) = q_{0i}(x) + \sum_{j=1}^{M} a_{qij}(t)\phi_{qij}(x)$
$T^{R}(x,t) = T_{0}(x) + \sum_{j=1}^{M} a_{Tj}(t)\phi_{Tj}(x), \qquad P^{R}(x,t) = P_{0}(x) + \sum_{j=1}^{M} a_{Pj}(t)\phi_{Pj}(x)$
Here y0i (x), q0i (x), T0 (x), and P0 (x) are snapshot averages for mole fraction, solid loading,
temperature, and pressure, respectively. Gas density, specific heat and equilibrium solid con-
centrations are calculated explicitly in terms of yiR , T R , and P R . Table 6.6 shows DAEs of the
reduced-order model obtained after Galerkin projection of the model in Table 6.2 onto these POD basis functions.
Since the snapshots are obtained after the rigorous model achieves cyclic steady state,
snapshots for both beds for the corresponding steps are identical. For instance, snapshots
collected during pressurization step for bed 1 are identical to the snapshots of the pressurization
step for bed 2. Therefore, we construct reduced-order model for only one bed and the other
bed is ignored. Coupling between the two beds is ensured by the boundary conditions connecting the adsorption and desorption steps of the same bed. Hence, a greater model reduction is achieved, since the ROM comprises the DAEs of only one bed while the rigorous model is simulated for both beds.
Table 6.6: DAEs of the reduced-order model for the hydrogen PSA process

Ergun equation
$\frac{150\mu(1-\epsilon_b)^2}{4R_p^2\,\epsilon_b^3}\,(u,\phi_{Pk}) + \frac{1.75\rho_g(1-\epsilon_b)}{2R_p\,\epsilon_b^3}\,(u^2,\phi_{Pk}) + \left(\frac{dP_0}{dx} + \sum_{j=1}^{M} a_{Pj}\frac{d\phi_{Pj}}{dx},\; \phi_{Pk}\right) = 0 \quad \forall k \in [1,M]$   (6.37)

LDF equation
$\frac{da_{qik}}{dt} = k_i\left[\left(q_i^{R*},\phi_{qik}\right) - a_{qik} - \left(q_{0i},\phi_{qik}\right)\right] \quad \forall k \in [1,M], \; i = 1,2$   (6.38)

Energy balance
$\epsilon_b\rho_g^R C_{pg}^R \sum_{j=1}^{M}\frac{da_{Tj}}{dt}\left(\phi_{Tk},\phi_{Tj}\right) + \epsilon_b u\rho_g^R C_{pg}^R\left(\frac{dT_0}{dx} + \sum_{j=1}^{M} a_{Tj}\frac{d\phi_{Tj}}{dx},\; \phi_{Tk}\right) + \rho_b C_{ps}\frac{da_{Tk}}{dt} - \rho_b\sum_{r=1}^{2}\Delta H_r^{ads}\sum_{j=1}^{M}\frac{da_{qrj}}{dt}\left(\phi_{qrj},\phi_{Tk}\right) - K_L\left(\frac{d^2T_0}{dx^2} + \sum_{j=1}^{M} a_{Tj}\frac{d^2\phi_{Tj}}{dx^2},\; \phi_{Tk}\right) + \frac{4h_w}{D}\left(T^R - T_w,\; \phi_{Tk}\right) = 0 \quad \forall k \in [1,M]$   (6.39)

Langmuir isotherm
$q_i^{R*} = \frac{a_i^R y_i^R P^R}{1 + \sum_{i=1}^{2} b_i^R y_i^R P^R}, \qquad a_i^R = \alpha_{1i}e^{\alpha_{2i}T^R}, \quad b_i^R = \beta_{1i}e^{\beta_{2i}T^R}$   (6.40)
Figure 6.2: First six POD basis functions of methane mole fraction for adsorption
With 35 spatial finite volumes, we convert PDAEs in Table 6.2 to DAEs and integrate using
ode15s in MATLAB. Cyclic steady state is achieved up to the desired tolerance after simulating
the model repeatedly for 120 cycles, and snapshots are collected to generate empirical POD
basis. Figure 6.2 shows POD basis functions of the gas-phase methane mole fraction for the
adsorption step. Since these functions are empirical, their shapes are different for all four
operating steps. Figure 6.3 shows first 10 singular values for the gas-phase methane mole
fraction and temperature for all operating steps. Slopes of the curves show that the singular
values decay sharply and the first 5-6 values can accurately capture the dynamic behavior with
$\epsilon_{POD}^{norm}$ as low as 0.1%.
(a) Singular values for gas-phase mole fraction of methane (on log scale)
(b) Singular values for temperature (on log scale)
Figure 6.3: Singular values of gas-phase mole fraction of methane and temperature
Thus, we infer that we can represent all the state variables for all the operating steps with only 5-6 spatial POD basis functions, instead of 35 spatial volumes. With
35 spatial volumes, the rigorous model comprises a total of 2800 DAEs (including both beds
and all four operating steps), while with 5 basis functions the ROM obtained after Galerkin
projection comprises a mere 200 DAEs, which is 1/14th of the rigorous system. Hence, we obtain a substantial model reduction.
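A simple way to automate the choice of M from the singular values is an energy-type criterion; the sketch below is one common variant and only stands in for the $\epsilon_{POD}^{norm}$ measure defined earlier in the thesis, with an assumed threshold value.

    % Sketch: choosing the subspace dimension M from the singular values of the snapshot
    % matrix S (energy criterion; threshold is assumed, not the thesis's exact definition).
    sig     = svd(S);                          % singular values
    energy  = cumsum(sig.^2)/sum(sig.^2);      % fraction of "energy" captured by leading modes
    lamStar = 0.001;                           % assumed threshold on the neglected energy
    M       = find(1 - energy <= lamStar, 1);  % smallest M meeting the tolerance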
We discretize DAEs of the reduced-order model for all the operating steps in time with
30 finite elements, and the resulting algebraic equations are solved simultaneously in AMPL
using IPOPT. Instead of solving the discretized ROM repeatedly for CSS, we consider initial
conditions as decision variables and reduced CSS conditions (shown in Table 6.6) are solved
simultaneously with the model equations in AMPL. Figure 6.4 compares profiles of gas-phase
methane mole fraction obtained for the rigorous model and ROM with 5 POD basis functions.
We observe that 5-rank approximation captures the dynamics of the problem very well and
profiles are nearly identical. Temperature profiles in Figure 6.5 also depict such similarity. We
note that all the profiles plotted are at CSS. We also observe that the mole fraction profiles are quite steep in the spatial dimension, especially for the adsorption and depressurization steps, and that the ROM effectively captures this steep behavior, besides adequately capturing the system's dynamics.
Figure 6.4: Comparison of methane mole fraction for all the steps after CSS
Figure 6.5: Comparison of temperature profiles for all the steps after CSS
Figure 6.6: Comparison of methane mole fraction profile for adsorption step obtained after
integrating in MATLAB and solving simultaneously in AMPL
Table 6.7: Comparison of rigorous model and ROM based on the performance variables
Performance variables Rigorous model ROM
H2 purity 0.9987 0.9987
H2 recovery 0.1095 0.1094
CH4 purity 0.9421 0.9425
CH4 recovery 0.2091 0.2094
To check for the accuracy of the temporal discretization in AMPL, we integrate the DAE
system of the ROM in MATLAB as well as solve it in AMPL without the CSS conditions.
Figure 6.6 compares the gas-phase methane mole fraction profile for the adsorption step for
both approaches. Clearly, the profiles compare very well and show that the additional errors
introduced by a pre-determined temporal discretization in AMPL are negligible for this system
of equations.
Table 6.7 compares the purity and recovery of hydrogen and methane obtained from the
rigorous model as well as the reduced-order model at CSS. This can be seen as another basis
to compare the accuracy of ROM. Values obtained from ROM developed with the 5-rank
approximation are accurate up to 3 decimal places and can be considered identical to the true values.
In this section, we demonstrate how a ROM can be used to perform computationally efficient
optimization within a trust-region around the root-point where ROM is constructed. In par-
ticular, we utilize the reduced-order model in Table 6.6 to maximize hydrogen recovery subject
to a purity constraint and tight bounds on the decision variables which form the trust-region.
As discussed before, such tight bounds are essential as the ROM loses its accuracy as we go further away from the root-point.
Table 6.8 shows decision variables considered for the optimization problem and their values
at which the ROM is constructed. We consider two separate cases for optimization. In the
first case, we do not consider feed and regeneration velocities as decision variables, while they
are included as decision variables in the second case. Optimization results for both cases are
discussed further. We also perform optimization without a trust-region and demonstrate why such a restriction is necessary.

With feed and regeneration velocities held fixed at their root-point values, we solve the ROM-based optimization problem (6.42), which maximizes hydrogen recovery subject to a hydrogen purity constraint and tight bounds on the remaining decision variables. The ROM is discretized in the temporal dimension and the resulting large-scale NLP is solved in AMPL using IPOPT.
Table 6.8: Decision variables and the root-point at which ROM is built
Variable Value
High operating bed pressure (PH ) 500 kPa
Low operating bed pressure (PL ) 150 kPa
Pressurization step time (tp ) 5 sec
Adsorption step time (ta ) 50 sec
Feed velocity (uf eed ) 0.1 m/sec
Regeneration velocity (ureg ) -0.05 m/sec
Optimal values of the decision variables together with the CPU time are listed in Table 6.9. Within the bounded
region, we observe an increase in the recovery of hydrogen up to 16.3%. The optimum point
is achieved in only 195 CPU seconds as ROM based on 5-rank approximation (200 DAEs) is
used for optimization. Moreover, optimization is performed cheaply since only a few iterations are required.
To validate accuracy of optimization results, we simulate the rigorous model using the
method of lines approach in MATLAB at the optimal values of decision variables. Purities and
recoveries obtained from the rigorous model simulation are also listed in Table 6.9. We observe
that these values are reasonably close to the ones obtained after the ROM-based optimization
in AMPL. Thus, we infer that the ROM is behaving reasonably accurately at the optimum
too. Since the increase in hydrogen recovery is the same for both models, the ROM does not exhibit much error at the optimum. Moreover, the optimum is feasible since the hydrogen purity constraint is also satisfied by the rigorous model.
To further consolidate our inference, we plot gas-phase mole fraction profiles of methane in
Figure 6.7 for all steps. Mole fraction profiles are generated from the rigorous model simulation
at the optimum and compared with profiles obtained from the ROM after optimization in
AMPL. The profiles match reasonably accurately and show that the behavior of the ROM is close to that of the rigorous model at the optimum as well.
It is worth noting that all decision variables are at their bounds after optimization. There-
fore, to verify that (locally) optimal values for the ROM are also optimal for the rigorous
model, we evaluate the KKT conditions of the rigorous model at this point, x∗. A general rigorous-model-based optimization problem with objective f(x), constraints c(x) = 0, and bounds a ≤ x ≤ b has KKT conditions that include the complementarity conditions
$\mu_l\,(x^* - a) = 0, \qquad \mu_u\,(x^* - b) = 0$
for the bound multipliers $\mu_l, \mu_u \ge 0$. By shifting the decision variables to the beginning of the vector x and defining a null-space basis matrix Z that satisfies $Z^T[I \;\; 0]^T = I$ and $Z^T\nabla c(x^*) = 0$, the KKT conditions can be written in reduced form: at a solution, $(Z^T\nabla f(x^*))_i \le 0$ if a decision variable i is at its lower bound, and $(Z^T\nabla f(x^*))_i \ge 0$ if a decision variable i is at its upper bound. We indeed
verify these properties by perturbing decision variables from their optimal values. We provide
a positive perturbation to variables at upper bound and negative to the ones at lower bound,
and record the change in the objective function. The results are shown in Table 6.10. We
observe an increase in the hydrogen recovery for all perturbations which proves optimality.
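The perturbation test itself can be scripted as follows; simulate_rigorous, atUpper, and the perturbation size are hypothetical stand-ins for the rigorous CSS simulation and the bound-activity information.

    % Sketch: perturbation check of optimality at xstar (cf. Table 6.10).
    delta = 0.01*abs(xstar);                        % small perturbations (assumed magnitude)
    base  = simulate_rigorous(xstar);               % hydrogen recovery at the candidate optimum
    dRecovery = zeros(size(xstar));
    for i = 1:numel(xstar)
        xp = xstar;
        if atUpper(i), xp(i) = xp(i) + delta(i);    % outward perturbation at an upper bound
        else,          xp(i) = xp(i) - delta(i);    % outward perturbation at a lower bound
        end
        dRecovery(i) = simulate_rigorous(xp) - base;
    end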
Although successful results are obtained by imposing tight bounds on decision variables in
the ROM-based optimization problem, we desire to verify if such a strategy is indeed necessary.
Thus, we solve Problem (6.42) with relaxed bounds on the decision variables.
At the optimum, we obtain a hydrogen recovery of 34.8%. However, solution profiles ob-
tained from the ROM are oscillatory and physically unrealistic. Figure 6.8 illustrates methane
gas-phase mole fraction profiles obtained after ROM optimization with relaxed bounds. For
adsorption, the oscillations are quite big and they tend to increase as step time increases.
In case of depressurization, there is a jump in the profile before steep decrease along spatial
dimension. Large oscillations in the profiles thus show large error in the reduced-order model
at the optimum. Moreover, Table 6.11 shows that when the rigorous model is simulated to
CSS at the optimal values, hydrogen purity dips to 80%, compared to 99.8% given by ROM
optimization. This vindicates the use of a trust-region and the claim that tighter restrictions on the decision variables are essential for reliable ROM-based optimization.
Figure 6.8: Gas-phase methane mole fraction profiles for ROM for relaxed bounds in Case I
In this case, besides the operating pressures and step times, we also consider the feed and regeneration velocities as decision variables and solve the corresponding ROM-based optimization problem (6.44). The regeneration gas flows in the direction opposite to the feed, and thus a minus sign is used for ureg. We solve the optimization problem in AMPL using IPOPT
after discretizing the ROM in the temporal dimension. Optimization results along with the CPU time are listed in Table 6.12. As in the previous case, optimization is computationally efficient and the problem is solved in only
184 CPU seconds. The rigorous model is also simulated at the optimal values of the decision
variables to validate optimization results from AMPL. Purities and recoveries obtained from
the rigorous model simulation are also listed in Table 6.12. We observe that these values
are quite close to the ones obtained after ROM-based optimization, indicating that the ROM
is predicting the dynamic behavior reasonably accurately at the optimum as well. As in the
previous case, we also compare gas-phase methane mole fraction profiles for the rigorous model
simulation and ROM optimization in Figure 6.9. Barring small oscillations in the adsorption
step, the profiles match reasonably accurately, thus showing that the ROM's behavior is fairly close to that of the rigorous model at the optimum.
Figure 6.10: Methane mole fraction profiles for ROM for relaxed bounds in Case II
Unlike previous case, we observe that in this case the hydrogen purity constraint is violated
slightly by the rigorous model at the optimum. Rigorous model gives a hydrogen purity of
99.74%, which is slightly less than the desired lower bound of 99.8%. Hence, we infer that attaining feasibility in a ROM-based optimization problem cannot guarantee feasibility for the original optimization problem, even with a small trust-region. A more systematic algorithm to ensure feasibility is therefore needed and is developed in the next chapter.
We also solve Problem (6.44) with slightly relaxed bounds on the decision variables.
Within this new trust-region, hydrogen recovery could be increased to 21.6%. However, ob-
taining this extra increment in recovery is marred by oscillatory solution profiles in the ROM.
Figure 6.10 shows the profiles of the gas-phase mole fraction of methane obtained after ROM
optimization in AMPL. It can be observed that the profiles are oscillatory for the adsorption and depressurization steps, indicating inaccurate prediction of the dynamics by the ROM at the optimum. Hence, these results also suggest that tight restrictions on decision variables and an adaptive trust-region based strategy with appropriate ROM updation are essential for ROM-based optimization.
6.6 Conclusions
Beginning with a review of the previous work on model reduction, we describe how proper orthogonal decomposition can be used to build reduced-order models. In particular, with the method of snapshots, singular-value decomposition, and Galerkin's
framework, POD can be successfully used to construct reduced-order models which can be
orders of magnitude smaller than the original model without losing accuracy. Methodology to
construct ROMs is illustrated for a Skarstrom PSA process to separate hydrogen and methane.
We not only show that the reduced-order modeling technique can be successfully used for large-
scale models as well, but on the basis of the comparison made between the rigorous model and
the ROM, we also conclude that such ROMs can provide significant model reduction and can accurately capture the dynamic behavior of the process.
We also discuss that ROMs can be used to optimize in a confidence-region in the vicinity
of the root-point assuming it is reasonably accurate in that region. Such ROM-based opti-
mization in a trust-region is successfully demonstrated for two separate case studies of the
hydrogen PSA process. ROMs accurately predict the descent direction in the trust-region and
an improvement in the objective is obtained for both cases. Moreover, based on the CPU times
observed, we conclude that ROMs enable fast and cheap optimization. Current results indicate that the proposed ROM-based methodology is a promising surrogate modeling technique for PSA processes. We also observe the limitations of ROM-based optimization without a trust-region, and conclude that an adaptive strategy with appropriate ROM updation is essential.
ROM-based Optimization
Synopsis
Trust-region methods are well suited for ROM-based optimization since they not only restrict the step around the root-point, but also synchronize ROM upda-
tion with the information obtained during the course of optimization, thus providing a robust
and globally convergent framework. In this chapter, we first develop an exact penalty-based
trust-region algorithm, and develop correction schemes for the objective and the constraints
to ensure global convergence with ROM-based approximate models. The algorithm with correc-
tions is demonstrated for a two-bed four-step PSA case study for post-combustion capture.
Next, highlighting drawbacks of the penalty approach and benefits of a filter, we develop a
hybrid filter trust-region algorithm for constrained ROM-based optimization. Finally, the filter
algorithm is illustrated with the PSA case study. In particular, we observe that both algo-
rithms converge to a local optimum of the original optimization problem within a reasonable number of trust-region iterations.
7.1 Introduction
From the previous chapter it is clear that a single POD-based ROM is generally reliable in a
restricted zone around the point it is constructed (root-point), and it needs to be updated as the
optimization proceeds from the root-point to other points in the decision space. To converge
to the optimum we can a) solve the ROM-based optimization problem within tight bounds, b)
take a step and construct a new ROM by generating new snapshots at this new point, and c)
repeat the process until the optimum is achieved. Construction and updation of ROM should
be coupled with the progress in the optimization process. Here, the computational advantage
comes from the fact that the ROM is used for optimization instead of the detailed rigorous model. Such a repetitive strategy can be cast systematically in a trust-region framework.
Trust-region methods [54, 61] are suitable and quite appropriate for ROM-based optimiza-
tion since they not only ensure that the step computed by the optimizer using ROM stays
reasonably close to the root-point (as demonstrated in section 6.5.4.1), but also allow ROM
updation decisions based on the information obtained during the optimization procedure, and
provide a robust and globally convergent framework with ROMs. With such an adaptive
framework we can regulate the amount of optimization done with a ROM before we return to
the detailed model to update it. By comparing the improvement predicted by the ROM to the
improvement realized for the true system being optimized, we not only deduce how well ROM
is predicting the behavior of the system, but also decide if it should be updated or re-used for subsequent iterations. Such a trust-region framework for optimization with approximate models
was first introduced by Alexandrov et al. [8]. However, they developed their generic framework
only for unconstrained optimization problems. They use a basic trust-region algorithm [54]
which involves solving the ROM-based problem in a trust-region, taking a step if the reduction
in the original objective function is reasonable compared to the one predicted by ROM-based
problem, and repeating this until convergence. In order to ensure convergence to the correct
optimum, they assume first-order consistency conditions to be true. One of their major contri-
butions is the introduction of scaled (or corrected) objective functions and constraints in the
trust-region subproblem to enforce these consistency conditions. Later, Fahl [65] developed the
TRPOD algorithm based on the Alexandrov’s approach with few modifications. In TRPOD,
they relax the consistency conditions and use an inexact gradient based formulation suggested
by Carter [41]. Moreover, they solve the trust-region subproblem approximately and utilize
Toint’s algorithm [186] for step computation. However, Alexandrov’s correction for objective
functions and constraints is essentially a part of the TRPOD. Bergmann et al. [25] also ap-
plied the TRPOD algorithm for optimal control of the circular cylinder wake flow considered
in the laminar regime. Kragel [112] developed a streamline diffusion POD methodology to
construct ROMs which are tuned to a high-order Navier-Stokes solver, and applied a recursive
multilevel trust-region algorithm [87, 88] for an optimal flow control problem. For optimal
design problems, Weickum et al. [198] developed extended ROMs for the whole design space
The focus of all the aforementioned studies is only unconstrained optimization, and the
strategies developed do not involve any discussion of the techniques to manage equality and inequality constraints. Moreover, as observed in the previous chapter, feasibility of
the ROM-based optimization problem cannot guarantee feasibility for the original optimiza-
tion problem, and thus a systematic formulation to ensure feasibilty is desired. Eldred et al.
[63] briefly discuss few ways to handle infeasibility in a trust-region based methodology for
optimization problems involving ROMs, but do not provide any systematic rigorous frame-
work. Alexandrov et al. extended their previous work on unconstrained optimization with
approximate models, and included equality and inequality constraints as well in the original
optimization problem [4, 5, 6, 7, 9, 10]. In particular, they develop three distinct algorithms
for constrained optimization problems involving ROMs: Augmented Lagrangian AMMO, which is based on an augmented Lagrangian merit function; MAESTRO-AMMO; and SQP-AMMO, which utilizes an exact ℓ1 penalty function as
a merit function and a trust-region SQP formulation. However, they use squared slack vari-
ables for inequalities in the optimization problem which can be easily shown to fail the linear
independence constraint qualification (LICQ), thus making the corresponding KKT system
inconsistent.
In this work, we develop trust-region algorithms for constrained optimization problems using reduced-order models. In particular, we explore penalty and filter based ap-
proaches to manage infeasibility and utilize a few concepts from Fahl’s TRPOD algorithm,
Alexandrov’s scaling (correction) scheme for the objective function and the constraints, and
MAESTRO-AMMO algorithm. The algorithms developed are demonstrated for a case study
of a two-bed four step isothermal PSA process for post-combustion CO2 capture.
In this work, the original optimization problem is represented by a nonlinear program of the
following form
$\min_{x}\; f(x)$
$\text{s.t.}\;\; c_E(x) = 0, \quad c_I(x) \le 0, \quad x_L \le x \le x_U$   (7.1)
where x ∈ Rn are the decision variables bounded between lower and upper bounds xL and xU ,
respectively, and the objective function f (x) : Rn → R, equality constraints cE (x) : Rn → RNE ,
and inequality constraints cI (x) : Rn → RNI are assumed to be sufficiently smooth and at least
twice differentiable functions. We note that Problem (7.1) is written in the reduced-space of the
original DAE-constrained optimization problem. DAEs are integrated outside Problem (7.1),
and the solution profiles are then used to compute the objective function and the constraints.
At k th iteration during the course of optimization, the ROM constructed for a particular xk
is used to build the model functions for the trust-region subproblem. We define a ROM-based trust-region subproblem as
$\min_{s}\; f_k^R(x_k + s)$
$\text{s.t.}\;\; c_{E,k}^R(x_k + s) = 0, \quad c_{I,k}^R(x_k + s) \le 0, \quad x_L \le x_k + s \le x_U, \quad \|s\|_\infty \le \Delta_k$   (7.2)
where $f_k^R$, $c_{E,k}^R$, and $c_{I,k}^R$ are the ROM-based objective function, equality constraints, and inequality constraints, respectively, computed from the reduced set of state variables of
the reduced-order model. For this subproblem also, DAEs of the ROM are solved outside
Problem (7.2), and the solution for the unknown temporal coefficients of the POD expansion is used to evaluate the objective and the constraints. The last constraint in Problem (7.2) is the trust-region constraint which limits the step size within the current trust-region
radius ∆k . In this work, we prefer to use an infinity norm for the trust-region constraint, i.e.,
we use a box-type (`∞ ) trust-region to restrict the step size of the decision variables.
It should be noted that the dimension of x, f(x), cE(x), and cI(x) remains the same for both Problems (7.1) and (7.2), i.e., the number of decision variables and constraints remains the same for both problems. Computa-
tional advantage is achieved in terms of the smaller number of DAEs of the reduced-order model
which leads to cheap calculation of the gradients of the objective function and the constraints for the trust-region subproblem. The following assumptions are made for the analysis of the algorithm:
(AF3) The second derivatives of f (x), cE (x), and cI (x) are uniformly bounded for all x ∈ Rn .
$B_k = \{x \in \mathbb{R}^n \mid \|x - x_k\|_\infty \le \Delta_k\}, \quad \Delta_k > 0$
(A2) The values of the objective and the constraints for the original optimization problem and the trust-region subproblem coincide at the current iterate xk for all k.
(A3) The gradient of the objective and the Jacobian of the constraints for both problems coincide at the current iterate xk for all k.
In this work, it is assumed that (AF1)–(AF3) are true, and assumptions (A1) and (A4)
hold by construction of the ROM. However, it cannot be guaranteed if assumptions (A2) and
(A3) (also called first-order consistency conditions) will be true in general. Depending on the
accuracy of the reduced-order model, values of the objective and the constraints for the original
problem and the ROM-based trust-region subproblem may reasonably match; however, their
gradients will differ since the POD basis set is obtained from the snapshots containing state
information but no gradient information. One way to ensure reasonable gradient matching is to
develop a separate ROM for the sensitivity equations of the original system, and solve it with
the ROM for the state variables. Fahl et al. [66] applied such an approach with the adjoint
sensitivity equations of the system and generated a separate ROM for the adjoint variables,
different from the ROM for the state variables. However, they reported that such an approach
can lead to inconsistent gradients in Problem (7.2), and thus to an algorithm which is not
globally convergent.
To guarantee convergent behavior of the trust-region algorithm, Alexandrov et al. [8, 10], Eldred et al. [64], and Giunta et al. [83] propose enforcing assumptions (A2) and (A3) by using scaled/corrected objective and constraints in the trust-region subproblem, which can be derived by using local corrections that match the gradient of the original objective function and the Jacobian of the original constraints at the current iterate. In
particular, they propose two types of local corrections, additive and multiplicative, which can be applied to the ROM-based objective and constraints.
Multiplicative:
$\tilde{\Phi}_k^R(x) = \Phi_k^R(x)\left[\frac{\Phi(x_k)}{\Phi_k^R(x_k)} + \left(\frac{\Phi_k^R(x_k)\nabla\Phi(x_k) - \Phi(x_k)\nabla\Phi_k^R(x_k)}{\left(\Phi_k^R(x_k)\right)^2}\right)^{T}(x - x_k)\right]$   (7.6)
The multiplicative correction can become ill-conditioned and may require additional modification when $\Phi_k^R(x_k)$ gets close to zero, especially for the equality constraints and active inequality constraints. Hence, we prefer to use the additive correction in our work. For both corrections, the corrected model matches the original objective and constraints at the current iterate $x_k$.
Therefore, we re-define the trust-region subproblem (7.2) in terms of the corrected objective and constraints:
$\min_{s}\; \tilde{f}_k^R(x_k + s)$
$\text{s.t.}\;\; \tilde{c}_{E,k}^R(x_k + s) = 0, \quad \tilde{c}_{I,k}^R(x_k + s) \le 0, \quad x_L \le x_k + s \le x_U, \quad \|s\|_\infty \le \Delta_k$   (7.7)
It is worth noting that in these correction schemes, the gradient of the original objective
function and the Jacobian of the constraints is computed only once at the center of the trust-
region for a single trust-region iteration. Optimization within a trust-region is performed using
the cheap gradient of the objective and Jacobian of the constraints of the reduced-order model,
thus offering computational advantage. However, $\nabla f(x_k)$, $\nabla c_E(x_k)$, and $\nabla c_I(x_k)$ will have to be evaluated from the rigorous model whenever gradient-based corrections are used. In this work, we define two kinds of additive correction schemes: a Zero-order Correction (ZOC) (7.8), which adds the value mismatch $\Phi(x_k) - \Phi_k^R(x_k)$ to the ROM-based function, and a First-order Correction (FOC) (7.9), which additionally adds the gradient-mismatch term $[\nabla\Phi(x_k) - \nabla\Phi_k^R(x_k)]^T(x - x_k)$.
We can obtain Zero-order Correction cheaply as it doesn’t require gradient evaluation for the
original objective and the constraints. However, with ZOC, only assumption (A2) is satisfied
while assumption (A3) is not guaranteed. On the other hand, First-order Correction ensures
that both assumptions (A2) and (A3) are satisfied, although it is expensive to construct.
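In code, both additive corrections amount to adding constant and linear mismatch terms to the ROM-based quantity. The MATLAB sketch below is illustrative only; phiR, gradPhiR, phi0, gradPhi0, and xk are assumed handles and values, not thesis identifiers.

    % Sketch: additive corrections for a generic quantity Phi (objective or a constraint).
    % phiR/gradPhiR come from the ROM; phi0/gradPhi0 are the rigorous value and gradient
    % evaluated once at the trust-region center xk.
    zoc = @(x) phiR(x) + (phi0 - phiR(xk));                                        % matches values at xk
    foc = @(x) phiR(x) + (phi0 - phiR(xk)) + (gradPhi0 - gradPhiR(xk)).'*(x - xk); % values and gradients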
Note that if Problem (7.7) is constructed at an infeasible point, the trust-region box may be too
small to satisfy the constraints in (7.7). Thus, handling feasibility needs special treatment in a
trust-region framework. Few trust-region algorithms are designed to deal with general equality and inequality constraints directly. In this work, we use the exact $\ell_1$ penalty function
$\tilde{\psi}_k^R(x_k + s) = \tilde{f}_k^R(x_k + s) + \mu\sum_{i\in E}\left|\tilde{c}_{i,k}^R(x_k + s)\right| + \mu\sum_{i\in I}\max\left(0,\tilde{c}_{i,k}^R(x_k + s)\right)$   (7.10)
to reformulate the trust-region subproblem (7.7) into the following bound-constrained problem
$\min_{s}\; \tilde{\psi}_k^R(x_k + s)$
$\text{s.t.}\;\; x_L \le x_k + s \le x_U, \quad \|s\|_\infty \le \Delta_k$   (7.11)
Here µ is the penalty parameter, which must be chosen sufficiently large. Note that the bound
constraints xL ≤ xk + s ≤ xU are either ignored if the box trust-region lies completely within
the polytope defined by them, or help to redefine the box trust-region if it intersects with the
polytope.
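In practice, the bound constraints and the ℓ∞ trust-region constraint collapse into a single box on the trial point, e.g. (illustrative MATLAB, with xL, xU, xk, and Dk as assumed inputs):

    % Sketch: effective box bounds implied by (7.11).
    lbEff = max(xL, xk - Dk);    % effective lower bounds on xk + s
    ubEff = min(xU, xk + Dk);    % effective upper bounds on xk + s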
A penalty based formulation enables us to minimize the objective function while controlling
constraint violations by penalizing them. The penalty function is exact in the sense that for
a sufficiently high µ, the local solution of (7.7) is equivalent to the local minimizer of (7.11).
To evaluate the actual reduction obtained in the original objective function of (7.1), we define the corresponding exact penalty function
$\psi(x_k + s) = f(x_k + s) + \mu\sum_{i\in E}\left|c_i(x_k + s)\right| + \mu\sum_{i\in I}\max\left(0, c_i(x_k + s)\right)$   (7.12)
Since the penalty functions are non-differentiable, we adopt the following smoothing approximations:
$\max(0, f(x)) = 0.5\left(f(x) + \sqrt{f(x)^2 + \epsilon^2}\right)$   (7.13a)
$|f(x)| = \max(0, f(x)) + \max(0, -f(x)) = \sqrt{f(x)^2 + \epsilon^2}$   (7.13b)
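A MATLAB sketch of the smoothed penalty evaluation is given below; f, cE, and cI are hypothetical function handles for the rigorous objective and constraints, and epsSm stands in for the smoothing parameter.

    % Sketch: smoothed l1 exact penalty (7.12)-(7.13).
    smoothMax = @(v, epsSm) 0.5*(v + sqrt(v.^2 + epsSm^2));   % ~ max(0, v)
    smoothAbs = @(v, epsSm) sqrt(v.^2 + epsSm^2);             % ~ |v|
    psi = @(x, mu, epsSm) f(x) + mu*sum(smoothAbs(cE(x), epsSm)) ...
                               + mu*sum(smoothMax(cI(x), epsSm));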
One of the main issues with penalty functions is to find a reasonable value for the penalty
parameter µ. Since the Lagrange multipliers, and thus the lower bound on µ are not known a
priori, choice of a value for µ is not intuitive. Too high a value for µ can cause the algorithm
to suffer from poor performance and ill-conditioning. Usually µ is adjusted by some update
criterion as the algorithm proceeds, and an acceptable step is decided thereafter. Such an
approach can provide considerable flexibility in choosing larger steps. However, in this work,
we do not propose any update mechanism for µ and work with a constant value which is chosen to be sufficiently large.

Algorithm I (exact penalty trust-region algorithm)

Choose 0 < η1 ≤ η2 < 1 ≤ η3, 0 < γ1 ≤ γ2 < 1 < γ3, penalty µ > 0, an initial trust-region radius ∆0, a minimum radius ∆min, and an initial iterate x0. Compute ψ(x0) and set k = 0.

1. Compute POD basis functions and construct a reduced-order model using the snapshots generated at the current iterate xk.

2. Compute a step sk from (7.11). Problem (7.11) can also be solved "approximately" such that the sufficient decrease condition (7.14) is satisfied.

3. Compute the ratio of actual to predicted reduction,
$\rho_k = \frac{ared_k}{pred_k} = \frac{\psi(x_k) - \psi(x_k + s_k)}{\tilde{\psi}_k^R(x_k) - \tilde{\psi}_k^R(x_k + s_k)}$
If ρk < η1, set xk+1 = xk and ∆k+1 = γ1∆k. If ∆k+1 ≤ ∆min, STOP; otherwise increment k by 1 and go to Step 2.

4. Set xk+1 = xk + sk and
$\Delta_{k+1} = \begin{cases} \gamma_2\Delta_k & \text{if } \rho_k \in [\eta_1, \eta_2) \\ \Delta_k & \text{if } \rho_k \in [\eta_2, \eta_3) \\ \gamma_3\Delta_k & \text{if } \rho_k \ge \eta_3 \end{cases}$
Increment k by 1 and go to Step 1.
The algorithm repeatedly solves the ROM-based trust region subproblem (7.11) with the Zero-
order Correction (ZOC) or the First-order Correction (FOC) for the objective and the con-
straints until the trust-region radius shrinks to a value less than ∆min , and no further improve-
ment is obtained. In order to estimate the quality of the next iterate, we compare the actual
reduction in the true objective aredk , to the predicted reduction predk . This requires compu-
tation of a new solution for the DAEs of the original problem in order to evaluate ψ(xk + sk ).
If the trial step sk yields a satisfactory decrease, and if ρk ≥ η1 , it is accepted and we update
the ROM for next iteration with the help of these new snapshots just calculated. Otherwise,
the size of the trust-region is reduced and Step 2 is repeated with a smaller trust-region. In
Step 2, “approximately” means that a solution sk can be obtained in any manner suitable to
the application as long as it satisfies the following sufficient decrease condition, also known as
$\tilde{\psi}_k^R(x_k) - \tilde{\psi}_k^R(x_k + s_k) \ge \kappa_p\,\|\nabla\psi(x_k)\|\,\min\!\left(\Delta_k,\; \frac{\|\nabla\psi(x_k)\|}{\beta_k}\right)$   (7.14)
where 0 < κp < 1, while 1 < βk < ∞ is any bounded sequence of numbers (note that ∇ψ(xk )
can be computed because of the smoothing approximation (7.13)). However, at the beginning
of the algorithm, when not close to the optimum, equation (7.14) can be replaced by a weaker decrease condition involving constants 0 < κp < 1, 0 < ν2 ≤ 1, and ν1 > 0. Although a step sk can be computed approximately,
in this work we find an exact local optimum for Problem (7.11) using IPOPT for each iteration.
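The overall flow of Algorithm I can be sketched in MATLAB as follows; build_rom, solve_tr_subproblem, and true_penalty are hypothetical helpers standing in for ROM construction, the AMPL/IPOPT solution of (7.11), and the rigorous evaluation of ψ, and the numerical constants shown are placeholders rather than the values used in the thesis.

    % Sketch of the basic trust-region loop of Algorithm I.
    eta1 = 1e-4; eta2 = 0.5; eta3 = 1;          % acceptance thresholds (assumed values)
    gam1 = 0.25; gam2 = 0.75; gam3 = 2;         % radius update factors (assumed values)
    dk = 1; dmin = 1e-3; xk = x0;               % initial radius, minimum radius, start point
    psik = true_penalty(xk);
    while dk > dmin
        rom = build_rom(xk);                              % snapshots + POD + Galerkin projection
        sk  = solve_tr_subproblem(rom, xk, dk);           % corrected penalty subproblem (7.11)
        pred = rom.psi(xk) - rom.psi(xk + sk);            % reduction predicted by the ROM
        psinew = true_penalty(xk + sk);                   % requires new rigorous snapshots
        rho  = (psik - psinew)/pred;                      % actual vs. predicted reduction
        if rho < eta1
            dk = gam1*dk;                                 % reject the step, shrink the trust region
        else
            xk = xk + sk;  psik = psinew;                 % accept the step, reuse the new snapshots
            if rho < eta2,      dk = gam2*dk;
            elseif rho >= eta3, dk = gam3*dk;  end        % keep dk unchanged otherwise
        end
    end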
One of the key features of this algorithm is that once we achieve feasibility during the course
of the algorithm, we stop using the penalty formulation, constraint relaxation is removed and
they are transferred back to the trust-region subproblem. In other words, Problem (7.11) is
converted back to Problem (7.7). This is especially important when FOC is used for objective
and constraints as the penalty parameter can put a lot of weight on constraint violation and
its gradient, thus allowing tiny reduction in the objective with each trust-region iteration.
Moreover, with feasible equalities and active inequalities, smoothing parameter (see Equation
(7.13)) in the penalty function can substantially skew the results and can make the algorithm
terminate prematurely.
In this work we use fixed values for the constants in the algorithm.
Note that the value for η1 is very close to zero. This implies that we take a step even if the
reduction in ψ(x) is quite small. This is essential for POD ROM-based optimization because
computation of ψ(xk + sk ) in ρk involves evaluation of new snapshots from the original DAEs.
Hence, it is always beneficial to take a step, even if ρk is small, and use these new snapshots to
update our ROM which we expect to perform better in the next iteration. Also, the choice of
η2 = 0.5 and η3 = 1 shows that most of the time during the algorithm we wish to keep the trust-
region radius constant instead of increasing it frequently. With POD ROM-based optimization
we prefer not to be greedy and limit the growth of the trust-region for longer duration because
of the oscillatory behavior of the ROM for large trust-regions, as observed in the previous
chapter. With oscillations that result from large extrapolation, ROM can take the algorithm
in a direction which may not be a descent direction, which can cause ρk to become negative
and lead to drastic reductions in ∆k for subsequent iterations. The oscillatory behavior can
be monitored by checking whether the state variables are within the defined bounds or not.
Since Algorithm I becomes a basic trust-region algorithm because of the smoothing approximation (7.13), its global convergence follows from standard trust-region theory.

Theorem 7.3.1. (See Theorem 6.4.6 in [54]) Suppose that (AF1)–(AF3), (A1)–(A4), and the sufficient decrease condition (7.14) hold; then every limit point of the iterates generated by Algorithm I is a first-order critical point of Problem (7.1).
In other words, all limit points for Algorithm I converge to x∗ , a first-order optimal point
for Problem (7.1), when all the assumptions (AF1)–(AF3), and (A1)–(A4) hold, and a
sufficient decrease (7.14) can be ensured. As mentioned before, (AF1)–(AF3), (A1) and
(A4) are assumed to be true in this work. With Zero-order Correction, we can satisfy (A2),
while First-order Correction can ensure both (A2) and (A3) are satisfied. With assumptions
(A2) and (A3) being true, it can be shown that as the trust-region gets small enough, the
linear part of ψekR (xk + s) dominates and we can always compute a step sk which lies in the
steepest descent direction of ψekR (xk + s) (Cauchy step) and satisfies the fraction of Cauchy
decrease condition (7.14). Moreover, with FOC, the Cauchy step obtained for Problem (7.11)
coincides with the exact Cauchy step of Problem (7.11) with ψekR (xk +s) replaced with ψ(xk +s).
Assumptions (A2), and (A3) should necessarily be satisfied at the optimum to ensure
that the Algorithm I converges to a solution that corresponds to the optimum of the original
optimization problem (7.1) [29, 77]. Since we satisfy (A2) and (A3) with FOC for all the
trust-region iterations, it is ensured that the algorithm will converge to the true optimum.
We follow the approach presented in section 6.5.4.1 for our case studies, in order to verify if the final iterate is indeed a local optimum of the original problem.
We consider a 2-bed 4-step isothermal PSA process with an 85%-15% N2 -CO2 feed mixture
which is a typical composition of a post-combustion flue gas stream. In particular, the process
is a Skarstrom cycle shown in Figure 2.3, and described in detail in section 2.4.1. Zeolite 13X is used as the adsorbent.
LDF equation
$\frac{\partial q_i}{\partial t} = k_i\,(q_i^{*} - q_i), \quad i = 1,2$   (7.19)
Dual-site Langmuir isotherm
$q_i^{*} = \frac{q_{1i}^{s}\,b_{1i}\,y_i P}{1 + \sum_{j} b_{1j}\,y_j P} + \frac{q_{2i}^{s}\,b_{2i}\,y_i P}{1 + \sum_{j} b_{2j}\,y_j P}, \quad i = 1,2$   (7.20)
2. There are no radial variations for concentrations in the solid and the gas phase.
3. The process is isothermal with a fixed temperature for the entire cycle.
6. The adsorption rate is approximated by the linear driving force (LDF) expression.
Based on the above assumptions, the mathematical model for the PSA process is listed in
Table 7.1. Here we assume no axial dispersion and use a lumped mass transfer coefficient for
the LDF model. The adsorbent properties for 13X and other model parameters are listed in
Table 7.2 [111].
Table 7.2: Adsorbent properties and model parameters [111]
Parameter Value
Bed Length (L) 1m
Bulk porosity (ε_b) 0.34
Adsorbent density (ρs ) 1870 kg m−3
Mass transfer coefficient (k): CO2 = 0.1631 sec−1, N2 = 0.2044 sec−1
Process temperature (T ) 310 K
Isotherm parameters
CO2 N2
qs1 2.708769 1.819949
qs2 2.436388 1.819949
b1 1.23×10−5 6.17×10−7
b2 4.78×10−4 6.17×10−7
Since we also have an overall mass balance in the model to solve for the velocity along the bed length, we solve the component mass balance for only one component. Moreover, we do not have to ensure that the mole fractions sum up to one, as this implicitly happens because of the overall mass balance. We denote this model in Table 7.1 as the rigorous model, for which snapshots are generated to construct the reduced-order model.
Boundary conditions for each step are shown in Table 7.3. We note that for the depressur-
ization step a boundary condition for mole fraction is not needed since u|x=L = 0 automatically
sets the inlet flux to be zero for the component mass balance. However, boundary conditions
are needed for both velocity and mole fraction for the pressurization step due to the nature of
the upwind-based spatial discretization scheme. Also, we note that the purge fraction of the
outlet from x = L which goes from one bed to the other during the adsorption step is chosen
to be 0.4, which appears in the boundary condition of velocity for the desorption step.
Table 6.4 in the previous chapter shows the equations for the molar flux variables of the 2-bed
4-step process which is used to calculate purities and recoveries of nitrogen and CO2 in the
following manner
$\text{purity}_{N_2} = \frac{\int O_{N_2}(t)\,dt}{\int\left(O_{N_2}(t) + O_{CO_2}(t)\right)dt}$   (7.22)
$\text{purity}_{CO_2} = \frac{\int Hp_{CO_2}(t)\,dt}{\int\left(Hp_{N_2}(t) + Hp_{CO_2}(t)\right)dt}$   (7.23)
$\text{recovery}_{N_2} = \frac{\int O_{N_2}(t)\,dt - \int Pg_{N_2}(t)\,dt}{\int F_{N_2}(t)\,dt}$   (7.24)
$\text{recovery}_{CO_2} = \frac{\int Hp_{CO_2}(t)\,dt}{\int F_{CO_2}(t)\,dt}$   (7.25)
We use the method of lines approach to convert PDAEs in Table 7.1 to a set of DAEs after
spatial discretization, which is then simulated to generate snapshots for POD basis functions.
For spatial discretization, we use an upwind-based finite volume scheme with the Van Leer flux
limiter for mole fraction to introduce additional numerical dispersion around steep adsorption
fronts (cf. section 2.5.1). Moreover, we utilize Unibed approach and simulate only one bed for
the entire cycle with the help of storage buffers to handle boundary matching for the two beds
It is important to note that the snapshots are generated only after the 2-bed system attains
cyclic steady state. To achieve CSS, we use a successive substitution method in which a series
of simulations are performed with initial conditions of each new cycle taken from the final
condition of the previous cycle until the bed conditions do not change from cycle to cycle. In this case, cyclic steady state is reached after about 50 cycles.
After obtaining snapshots, separate POD basis functions are generated for the pressurization, adsorption, depressurization, and desorption steps.
Table 7.4: DAEs of the reduced-order model for the CO2 capture PSA process

Component mass balance
$\frac{da_{yik}}{dt} + \left(\frac{\partial (u^R y_i^R)}{\partial x},\; \phi_{yik}\right) + \frac{RT}{P}(1-\epsilon_b)\rho_s\sum_{j=1}^{M}\frac{da_{qij}}{dt}\left(\phi_{yik},\phi_{qij}\right) = 0 \quad \forall k \in [1,M], \; i = 1$   (7.26)
Overall mass balance
$\left(\frac{du_0}{dx} + \sum_{j=1}^{M} a_{uj}\frac{d\phi_{uj}}{dx},\; \phi_{uk}\right) + \frac{RT}{P}(1-\epsilon_b)\rho_s\sum_{i=1}^{2}\sum_{j=1}^{M}\frac{da_{qij}}{dt}\left(\phi_{uk},\phi_{qij}\right) = 0 \quad \forall k \in [1,M]$   (7.27)
LDF equation
$\frac{da_{qik}}{dt} = k_i\left[\left(q_i^{R*},\phi_{qik}\right) - a_{qik} - \left(q_{0i},\phi_{qik}\right)\right] \quad \forall k \in [1,M], \; i = 1,2$   (7.28)
Langmuir isotherm
$q_i^{R*} = \frac{q_{1i}^{s}\,b_{1i}\,y_i^R P}{1 + \sum_{j=1}^{2} b_{1j}\,y_j^R P} + \frac{q_{2i}^{s}\,b_{2i}\,y_i^R P}{1 + \sum_{j=1}^{2} b_{2j}\,y_j^R P}, \quad i = 1,2$   (7.29)
Moreover, we derive separate POD basis functions for the gas-phase mole fractions, solid-phase loadings, and velocity. The state variables are then expressed as
$y_i^{R}(x,t) = y_{0i}(x) + \sum_{j=1}^{M} a_{yij}(t)\phi_{yij}(x) \quad \text{for } i = 1$
$q_i^{R}(x,t) = q_{0i}(x) + \sum_{j=1}^{M} a_{qij}(t)\phi_{qij}(x), \qquad u^{R}(x,t) = u_{0}(x) + \sum_{j=1}^{M} a_{uj}(t)\phi_{uj}(x)$
Here y0i (x), q0i (x), and u0 (x) are snapshot averages for mole fraction, solid loading, and
velocity respectively. It is noteworthy to mention that for the adsorption step, we use the
following POD basis representation for velocity to make the adsorption feed velocity ua visible
$u^{R}(x,t) = u_a + \sum_{j=1}^{M} a_{uj}(t)\phi_{uj}(x) \quad \text{for the adsorption step}$
Table 7.4 shows the DAEs of the reduced-order model obtained after Galerkin projection of the rigorous model onto these POD basis functions.
In this section we apply the exact penalty trust-region algorithm on a ROM-based optimization
problem which maximizes CO2 recovery subject to a constraint on CO2 purity. For optimiza-
tion, we consider five decision variables, high pressure Ph up to which the bed is pressurized
and at which the adsorption step takes place, low pressure Pl for the depressurization and des-
orption steps, step time for pressurization and depressurization tp , and that for adsorption and
desorption ta, and finally, the feed velocity during the adsorption step ua. The resulting DAE-constrained optimization problem is denoted (7.31). We note that this Skarstrom cycle lacks any kind of a step that enriches the CO2 concentration in the bed. Thus,
we cannot achieve high CO2 purity with this 2-bed 4-step cycle. However, our focus here is to
illustrate the exact penalty trust-region algorithm for optimization using ROMs with the help
of this case study. We also note that in order to improve CO2 purity and recovery, we allow
vacuum depressurization and desorption steps as the bounds for Pl lie in the vacuum range.
To solve each trust-region subproblem, we discretize the DAEs of the ROM in the temporal dimension and convert it into a standard
NLP which is solved in AMPL using IPOPT. We note that to construct a ROM, snapshots
are obtained only after CSS is achieved by the rigorous model. Therefore, it is ensured that
CSS is satisfied by the ROM as well for every trust-region iteration. In other words, the reduced CSS conditions are solved simultaneously with the model equations in every trust-region subproblem.
In the subsequent sections, we first compare the accuracy of the reduced-order model at the
starting guess x0 for the optimization problem. Next, we solve the optimization problem (7.31)
with ZOC for the objective and the purity constraint and monitor if Algorithm I converges to
an optimum even when assumption (A3) is not true. Finally, we solve problem (7.31) with
FOC. The quality of the dynamic behavior predicted by ROM is compared at the optimum as
well.
In this section, we validate the accuracy of the ROM and verify how accurately ROM is
predicting the dynamic behavior of the original isothermal PSA process at the initial guess x0 .
Table 7.5 lists the starting guess for our optimization problem (7.31).
With 50 spatial finite volumes, we convert the PDAEs in Table 7.1 into DAEs and integrate
using ode15s in MATLAB. Cyclic steady state is achieved up to the desired tolerance after
simulating the model repeatedly for 50 cycles, and snapshots are collected to generate empirical
POD basis. Figure 7.1 shows first six POD basis functions of gas-phase CO2 mole fraction for
the adsorption step. The shapes of these functions are quite different since they are empirical
in nature. Figure 7.2 shows first 10 singular values for gas-phase CO2 mole fraction and
Figure 7.1: First six POD basis functions of CO2 mole fraction for adsorption
superficial gas velocity for all the operating steps. From the slopes of the curves, it is clear
that the singular values decay fairly sharply. For this case study, we choose a threshold error of λ∗ = 0.05 to select the number of POD basis functions for each state variable and operating step. In other words, with such few basis functions the error in projection can be at most
5%. We purposely choose a slightly higher value of 0.05 for the threshold tolerance since it is
observed that with a low value of λ∗ (say 0.01 or 0.001), ROM incorporates those POD basis
functions which do not contribute much towards predicting the dynamics, thus causing the
DAE system of the ROM to become ill-conditioned. Moreover, with these values of M , ROM
comprises a mere 70 DAEs, while the rigorous model contains a total of 1400 DAEs. Hence, we
obtain a significant model reduction, as the ROM is 1/20th of the rigorous model with this choice of λ∗.
(a) Singular values for gas-phase mole fraction of CO2 (on log scale)
(b) Singular values for superficial velocity (on log scale)
Figure 7.2: Singular values of mole fraction of CO2 and superficial gas velocity
We discretize the DAEs of the reduced-order model in time with 20 finite elements and 3
collocation points for all four operating steps, and the resulting algebraic equations are solved
with IPOPT. The initial conditions for the process are taken as decision variables and reduced
CSS conditions (shown in Table 7.4) are solved simultaneously with the model equations in
AMPL. Figure 7.3 compares the profiles of gas-phase CO2 mole fraction obtained after inte-
grating the rigorous model till CSS is achieved, and after solving the algebraic equations of
ROM with reduced CSS conditions. We observe a significant match between the profiles, thus confirming the accuracy of the ROM at this starting guess.
As another basis to verify ROM’s accuracy, we compare purities and recoveries of nitrogen
and CO2 for both rigorous model and ROM at CSS for this starting guess. Table 7.6 lists
such a comparison. It can be observed that the values obtained from ROM are fairly close to
the ones obtained after integrating the rigorous model even with relatively large λ∗ = 0.05.
Figure 7.3: Comparison of CO2 mole fraction for all the steps at the starting guess
Table 7.6: Comparison of rigorous model and ROM based on the performance variables
Since the comparison is reasonable, we solve (7.31) with the POD-based ROM in Table 7.4 using Algorithm I.
We first solve (7.31) with the Zero-order Correction (7.8) applied for the objective function
and the purity constraint. As mentioned before, ZOC can be computed cheaply as it doesn’t
involve evaluation of the original objective and the constraint gradients. However, it doesn’t
satisfy assumption (A3), although it ensures (A2) holds. Therefore, we cannot ensure that
ZOC satisfies the fraction of Cauchy decrease (7.14). Our focus in this section is to observe
if the POD-based ROM, besides accurately predicting the dynamics, can also predict the de-
scent direction accurately without involving the actual gradients from the original optimization
problem. In other words, since POD-based ROMs are reasonably accurate at any xk , we are
inquisitive about whether they can also implicitly satisfy (A3) by their very construction, or
whether a first-order correction (7.9) with accurate gradients is vital for ROM-based optimization.

As mentioned before, we use a box trust-region in this work. For Problem (7.31), Table B.1 in Appendix B lists all the trust-region iterations together with the values of the decision
variables after the iteration ends (i.e., xk + sk ), CO2 purity pCO2 , and recovery rCO2 at xk + sk ,
ROM-based objective function at the center of the trust-region ψekR (0) and at the end of the
iteration ψekR (sk ) (see Equation (7.10)), the true objective function at the end of the iteration
ψ(sk ) (see Equation (7.12)), ratio ρk defined in Algorithm I, and the trust-region radius ∆k .
We observe that because of the penalty, the focus of the first few iterations of the algorithm is
to gain feasibility, although the CO2 recovery also improves. Feasibility is attained in the
7th iteration (k=6), after which the penalty formulation is dropped and the purity constraint
is moved from the objective function to the constraint section of the trust-region subproblem.
Eventually, the algorithm terminates after 13th iteration (k=12) when ∆12 is shrunk further
Optimal values of the decision variables together with the optimization CPU time are listed
in Table 7.7. CPU time for optimization only accounts for the time taken for all the trust-
region iterations, and doesn’t include the time required to calculate snapshots and construct
ROM. With 52,247 algebraic variables, the algorithm terminated after 13 trust-region iterations
and within a reasonable CPU time of 35.7 min. We notice that after algorithm termination,
only Pl is at its lower bound, while other decision variables are well within their limits. We
also report the purities and recoveries of nitrogen and CO2 as obtained from AMPL after final
optimization iteration, and from the rigorous model MATLAB simulation at the optimum. We
recall that the ROM is constructed with a threshold tolerance λ∗ = 0.05. With this λ∗, the ROM-based values remain fairly close to those obtained from the rigorous model.
In order to verify if the algorithm terminated at an optimal point, we follow the analysis
presented in section 6.5.4.1 and perturb decision variables from their optimal values. In partic-
ular, we provide a positive perturbation to variables at their upper bounds and negative to the
ones at their lower bounds, and record the change in CO2 purity and recovery. For decision
variables not at their bounds, we provide both positive and negative perturbations. Since the
CO2 purity constraint is active at the termination, the termination point can be proven optimal if, for every decision variable, each such perturbation either improves CO2 recovery while decreasing its purity, or vice-versa. Moreover, if both CO2 purity and recovery improve for a parameter at its bound, the termination point is still optimal, since only relaxing that bound would allow further improvement.
However, if CO2 purity and recovery both improve for a decision variable not at a bound, or if
they both decrease for a variable at either of its bounds, the termination point is not optimal.
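The bookkeeping behind this test can be summarized in a few lines of Python; the function simulate below is a hypothetical stand-in for a rigorous-model evaluation returning (CO2 purity, CO2 recovery), and the relative perturbation size is illustrative.

import numpy as np

def perturbed(x, i, d):
    y = np.array(x, dtype=float)
    y[i] += d
    return y

def optimality_check(x_opt, at_lower, at_upper, simulate, h=0.01):
    p0, r0 = simulate(x_opt)
    violations = []
    for i, xi in enumerate(x_opt):
        step = h * max(abs(xi), 1.0)
        if i in at_upper or i in at_lower:
            d = step if i in at_upper else -step      # perturb outward past the bound
            p, r = simulate(perturbed(x_opt, i, d))
            if p < p0 and r < r0:                     # both worsen: an interior move would help
                violations.append(i)
        else:
            for d in (step, -step):
                p, r = simulate(perturbed(x_opt, i, d))
                if p > p0 and r > r0:                 # both improve: a descent direction exists
                    violations.append(i)
    return violations                                 # empty list is consistent with optimality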
Perturbation results for this case are shown in Table 7.8. It can be observed that the CO2
purity and recovery both improve when Ph is perturbed from 203.28 kPa to 206.28 kPa, and
tp is perturbed from 55.43 sec to 53.43 sec. This shows that the point at which the algorithm
terminated is not optimal, which implies that the ROM is not predicting the correct descent
direction at the termination point. The reason for this is that assumption (A3) does not hold
at the termination point, i.e., the objective function gradient and the constraint Jacobian
obtained from the ROM do not match the actual ones. Hence, this clearly shows that although
we can construct substantially accurate ROMs based on the snapshot information of the
state variables, such ROMs in general cannot always ensure that the gradient information is
also reasonably accurate. Moreover, we infer that to ensure convergence to an optimal point,
assumption (A3) is essential and accurate gradients should be incorporated in the ROM-
based optimization problem. This can be accomplished with a First-order Correction (FOC), which we consider next.
We solve (7.31) with the First-order Correction (7.9) applied for the objective function and the
purity constraint. For FOC, we need to evaluate gradients of the objective and the constraints
of the original optimization problem with the rigorous model before each trust-region iteration
starts. However, this is computed just once and optimization within a trust-region is carried
out using ROM. For this case, we evaluate gradients using perturbation.
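Evaluating these gradients by perturbation amounts to one additional rigorous simulation per decision variable, as in the following forward-difference sketch; rigorous_responses is a hypothetical wrapper around the full model returning, for example, the objective and constraint values.

import numpy as np

def fd_jacobian(rigorous_responses, xk, h=1e-3):
    f0 = np.asarray(rigorous_responses(xk), dtype=float)
    jac = np.zeros((f0.size, len(xk)))
    for i in range(len(xk)):
        x = np.array(xk, dtype=float)
        dx = h * max(abs(x[i]), 1.0)
        x[i] += dx
        jac[:, i] = (np.asarray(rigorous_responses(x), dtype=float) - f0) / dx
    return f0, jac                     # values and Jacobian used to build the FOC

This cost, incurred once per trust-region iteration, is the main computational overhead of FOC compared to ZOC.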
Table B.2 in Appendix B lists all the trust-region iterations with the penalty parameter
µ=1000. As in the previous case, we observe that because of the penalty parameter, the al-
gorithm focuses on satisfying the CO2 purity constraint for the first few iterations. In fact,
within 5 iterations (k=4) feasibility is attained after which we drop the penalty parameter and
move the purity constraint into the trust-region subproblem. Because of the exact gradient
information, algorithm goes beyond the optimal CO2 recovery of 81.74% obtained in the previ-
ous case, up to a recovery of 97.19%. However, the key observation is that the algorithm takes
tiny steps to improve CO2 recovery after achieving feasibility, and thus eventually takes 92
iterations to get to the optimum, which is considerably large. After 92nd iteration (k=91), ∆91
gets reduced from 0.021 to 0.005, thus going below ∆min = 0.02, and the algorithm terminates.
Final optimization statistics are listed in Table 7.9. Because of 92 trust-region iterations,
the algorithm takes 1.88 hrs of optimization CPU time to converge. Unlike the previous case,
three decision variables (Ph , Pl , and tp ) are at their bounds at the optimum, while the bounds
of ta , and ua are not active. Purities and recoveries of the components obtained after final
trust-region iteration in AMPL are quite close to the ones obtained from the MATLAB simulation at the optimum. A perturbation analysis analogous to the previous case indicates optimality: a positive perturbation to Ph at its upper bound and a negative perturbation to tp at its lower bound improves both CO2 purity and recovery, while a negative perturbation for Pl at its lower bound improves recovery but deteriorates purity, and perturbing the remaining decision variables yields the behavior of either improvement in CO2 recovery with loss in its purity, or vice-versa. Therefore, we conclude that the algorithm terminated at a local optimum.
To complete the analysis and to ensure the ROM is predicting physically correct dynamic behavior of the system, we also compare the gas-phase CO2 mole fraction profiles at the optimum, obtained from the ROM-based optimization in AMPL, and from the rigorous model simulation in MATLAB. Such a comparison in Figure 7.4 reveals convincing behavior of the reduced-order model, as the profiles are nearly identical.

Figure 7.4: Comparison of CO2 mole fraction for penalty TR algorithm with FOC
7.5.1 Motivation
In this section we develop a hybrid filter-based trust-region algorithm for optimization involving
ROMs. The algorithm is hybrid in the sense that it utilizes both ZOC and FOC at different stages of the optimization.
The motivation to develop this algorithm comes from two key observations in the aforemen-
tioned PSA case study. First, we notice that although ROM-based trust-region subproblems
with ZOC cannot ensure convergence to an optimal point, they are at least capable of generat-
ing descent directions and even attaining feasibility without the actual gradient information.
Moreover, constructing FOC for every trust-region iteration is expensive as it involves compu-
tation of gradients from the original problem. However, FOC is necessary to converge to an
optimal point. Therefore, we propose a hybrid strategy in which the trust-region algorithm be-
gins with subproblems constructed with ZOC. This is continued until no further improvement
is observed, after which the algorithm switches to subproblems with FOC until convergence.
Second, we observe in the optimization case study with FOC that the penalty-based trust-
region algorithm takes tiny steps and marches considerably slowly once feasibility is attained.
The penalty parameter does not allow infeasible moves, since aredk, and thus ρk, becomes negative for such moves, which entails a sharp reduction in the trust-region radius and, therefore, short steps.
Hence, instead of developing a hybrid algorithm with the exact penalty algorithm, we develop
a filter-based approach. Such an approach is desirable since it can allow taking a step which
reduces objective while increasing infeasibility, and therefore can march faster towards the
optimum. Moreover, the motivation for developing a filter-based algorithm comes from the
difficulty of determining a suitable penalty parameter µ, or its update strategy, in the exact
penalty function.
Filter methods have been extensively studied and applied for nonlinear programming prob-
lems. Fletcher et al. [143] provide a brief survey of the literature on filter methods. Filter
approach was first proposed by Fletcher in 1996; later described in [74]. The first global
convergence proof for these methods was given for an SLP method [75], which was later gen-
eralized to SQP methods [76]. Fletcher et al. [72] analyze a trust-region SQP filter method
that decomposes the SQP step into a normal step to attain feasibility, and a tangential step
which reduces the objective function. Nie et al. [135], on the other hand, combine the normal
and tangential problem with a penalty parameter and solve them simultaneously in Fletcher’s
trust-region SQP filter method. Other studies with the filter method include a bundle method for nonsmooth nonlinear optimization [73] and a derivative-free pattern-search filter method [16]. Benson et al. [24] and Ulbrich et al. [187] have studied filter methods in the context
of interior-point methods for solving NLPs. Wächter and Biegler [193, 194] have successfully
incorporated filter mechanism in the NLP solver IPOPT [195]. They develop a line-search filter
method that avoids convergence to arbitrary stagnation points, as illustrated by the example
in [192].
In this work, we use Fletcher’s trust-region filter method [72] with additional modifica-
tions for POD-based ROMs. The proposed modifications still enjoy the global convergence properties of the original method.
7.5.2 Filter
There are two objectives in a general nonlinear programming problem, minimization of the
objective function f(x), and minimization of the constraint violation θ(x), where

\theta(x) = \max\Big\{\, 0,\ \max_{i \in E} |c_i(x)|,\ \max_{i \in I} c_i(x) \Big\} \qquad (7.33)
A penalty function combines both these goals in one single measure and minimizes ψ(x) =
f (x) + µθ(x). In contrast, a filter method considers both of these as separate goals, and
interprets the NLP as a bi-objective optimization problem. There is a special emphasis on the
second goal since a point has to be feasible in order to be an optimal solution, and thus θ(x∗ )
should be zero at the optimum x∗. Filter methods borrow the concept of domination from multi-objective optimization: a pair (θi, fi) is said to dominate (θj, fj) if θi ≤ θj and fi ≤ fj. A filter method involves storing iterates xk that are not dominated by any other iterates; more precisely, an iterate xi is kept in the filter only if

either θi ≤ θj or fi ≤ fj, for all j ≠ i.
During optimization, we aim to accept a new iterate xi only if it is not dominated by any
other iterate in the filter. Figure 7.5 illustrates the concept by showing (θk , fk ) at xk as black
dots in the (θ, f )-space. The lines emanating from each (θ, f )-pair (forming filter envelope)
indicate that any iterate whose associated (θ, f )-pair occurs in the shaded region in Figure 7.5
is dominated by at least one of these black dots. Iterates which do not lie in the shaded region
are acceptable. The contours of the `1 exact penalty function will be straight lines with slope
−µ in this plot, indicating that the filter is generally less restrictive than penalty methods.
Figure 7.5: Illustration of the filter in the (θ(x), f(x))-space

We do not accept a new iterate xk + sk if its (θ, f)-pair lies too close to the filter envelope, and thus set a small "margin" around this envelope. Formally, we say that a point x is acceptable to the filter if it improves upon every filter pair by at least this margin, i.e., if condition (7.34) holds for some γf, γθ ∈ (0, 1). In Figure 7.5, this margin corresponds to the thin dotted line. In principle, we
keep adding (θk , fk )-pairs to the filter for the acceptable iterates xk . However, it is important
to note that (θ, f )-pairs are not added to a filter for all the acceptable iterates. We observe
that θ(x) in (7.33) dominates to a certain point, especially when infeasibility is large. However,
as θ(x) → 0, the method must focus on descent of f (x). For this case, no filter point is added
(see further details in section 7.5.4). Since xk may not be in the filter, we move from xk to
xk + sk only if xk + sk is “acceptable for the filter and xk ”, i.e., if the following condition holds
Maintaining a list of (θ, f)-pairs in a filter avoids what is known as cycling. Cycling results when the algorithm alternates between two points that each improve one of the measures, θ or f, but worsen the other
one at the same time. The filter avoids cycling because for a movement from xk to xk+1 that
improves θ but not f , the filter is augmented with xk , and thus xk becomes unreachable during
optimization. Hence, cycling cannot occur with filter methods [74, 193].
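A minimal data structure capturing this bookkeeping is sketched below in Python. The margin test used here is one standard form and may differ in detail from condition (7.34), and the γ values are illustrative.

class Filter:
    def __init__(self, gamma_theta=1e-5, gamma_f=1e-5):
        self.entries = []                     # list of stored (theta_j, f_j) pairs
        self.gth, self.gf = gamma_theta, gamma_f

    def acceptable(self, theta, f):
        # acceptable if the point improves on every stored pair by a small margin
        return all(theta <= (1.0 - self.gth) * tj or f <= fj - self.gf * tj
                   for tj, fj in self.entries)

    def add(self, theta, f):
        # augment the filter and drop pairs dominated by the new entry
        self.entries = [(tj, fj) for tj, fj in self.entries
                        if not (theta <= tj and f <= fj)]
        self.entries.append((theta, f))

flt = Filter()
flt.add(theta=0.8, f=5.0)
print(flt.acceptable(theta=0.4, f=6.0))   # True: much less infeasible
print(flt.acceptable(theta=0.9, f=5.5))   # False: dominated within the margin

In the algorithm, acceptability of a trial point is checked against the stored pairs together with the current pair (θk, fk), which corresponds to being "acceptable for the filter and xk".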
Following the strategy proposed by Fletcher et al. [72], we decompose the trust-region sub-
problem (7.7) into a normal subproblem and a tangential subproblem. In [72], the normal
subproblem computes a step vk which reduces infeasibility of (7.7), while the tangential sub-
problem evaluates a step pk which improves the objective and lies in the null space of equality
and inequality constraints, thus maintaining feasibility achieved by the normal problem. The normal subproblem determines the minimum level δ to which the infeasibility can be reduced in the given trust-region. The tangential subproblem, then, computes the overall step sk which reduces the objective while maintaining this infeasibility level. The normal subproblem is formulated as

\begin{aligned}
\min_{v,\,\delta} \quad & \delta \\
\text{s.t.} \quad & -\delta \le \tilde{c}^{R}_{E,k}(x_k + v) \le \delta \\
& \tilde{c}^{R}_{I,k}(x_k + v) \le \delta \\
& \delta \ge 0 \\
& x_L \le x_k + v \le x_U \\
& \|v\|_{\infty} \le \Delta^{c}
\end{aligned} \qquad (7.36)
In order to ensure a non-zero tangential step, we choose ∆c = 0.6∆k . Once the optimum
infeasibility level δ is obtained, it is fixed to δ̄ and we solve the tangential subproblem (7.37), which minimizes the corrected objective subject to this infeasibility level and the trust-region bound.
These subproblems are similar to the ones proposed by Alexandrov et al. [10] in their
MAESTRO-AMMO algorithm. However, unlike our case, MAESTRO-AMMO takes the normal step vk and solves the tangential subproblem at xk + vk to obtain a tangential step pk
which reduces the objective in the null space of the constraints. Hence, their overall step is a
composition of the normal and the tangential steps.
Unlike MAESTRO-AMMO, we do not take the normal step vk before computing the overall
step sk, since doing so would require constructing corrections (ZOC or FOC) for the objective and the constraints at xk + vk. If the
normal step is taken, corrections will have to be constructed once for the normal subproblem at
xk and again for the tangential subproblem at xk + vk. This involves evaluating the values and
derivatives of the objective and the constraints of the original problem twice. To circumvent
this, we construct ZOC or FOC only once at xk and solve both subproblems with xk as the
base point.
We note that although Fletcher et al. [72] allow obtaining approximate solutions for the normal and the tangential problems, we solve both problems using IPOPT and compute exact local solutions.
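The structure of the two subproblems can be illustrated on a small toy problem. The sketch below uses SciPy's SLSQP on illustrative stand-ins for the corrected objective and a single equality constraint, whereas in this work both subproblems are solved with IPOPT on the full ROM.

import numpy as np
from scipy.optimize import minimize

f_rom = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2      # corrected ROM objective (toy)
c_rom = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0                # corrected equality constraint (toy)

xk = np.array([0.0, 0.0])
Delta = 1.0
Delta_c = 0.6 * Delta

# normal subproblem (7.36): minimize the infeasibility level delta, with z = [v1, v2, delta]
normal = minimize(lambda z: z[2], np.zeros(3), method="SLSQP",
                  constraints=[{"type": "ineq", "fun": lambda z: z[2] - c_rom(xk + z[:2])},
                               {"type": "ineq", "fun": lambda z: z[2] + c_rom(xk + z[:2])}],
                  bounds=[(-Delta_c, Delta_c)] * 2 + [(0.0, None)])
delta_bar = normal.x[2]

# tangential subproblem: reduce the objective while keeping -delta_bar <= c <= delta_bar
tang = minimize(lambda s: f_rom(xk + s), np.zeros(2), method="SLSQP",
                constraints=[{"type": "ineq", "fun": lambda s: delta_bar - c_rom(xk + s)},
                             {"type": "ineq", "fun": lambda s: delta_bar + c_rom(xk + s)}],
                bounds=[(-Delta, Delta)] * 2)
print("delta_bar =", delta_bar, " step s_k =", tang.x)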
Relying solely on the condition (7.34) can produce a situation when the sequence of iterates
{xk } always provide sufficient reduction of the constraint violation alone, and not the objective
function. This could result in convergence to a feasible but non-optimal point. In order to promote descent in the objective as well, a switching condition is imposed:

\tilde{f}^{R}_{k}(x_k) - \tilde{f}^{R}_{k}(x_k + s_k) \;\ge\; \kappa_{\theta}\, \theta_k^{\psi} \qquad (7.38)

where f̃kR(x) is the ROM-based objective function of the tangential subproblem, and θk = θ(xk) is the current actual constraint violation, different from θ̃kR in (7.35), and is defined as

\theta(x) = \max\Big\{\, 0,\ \max_{i \in E} |c_i(x)|,\ \max_{i \in I} c_i(x) \Big\} \qquad (7.39)
Note that θk is used for the filter margin (7.34) instead of θekR defined in (7.35). Current iterate
xk is added to the filter if condition (7.38) fails. The role of this switching condition can be
interpreted as follows. If it fails, then the current constraint violation θk is significant and we
aim to improve on this in the future by inserting xk into the filter. On the other hand, if it
holds, then the reduction in the objective function predicted by ROM is more significant than
the current θk , and the algorithm should promote descent in the objective. In this case, it is
important that a sufficient decrease is also realized in the objective function of the original problem, i.e., the ratio test ρk ≥ η1 should hold together with condition (7.38). In the parlance of filter methods, a step generated
in such a case is called an “f-type step”. With an f-type step, xk is not added to the filter.
If an iterate xk is feasible (θk = 0), equation (7.38) becomes f̃kR(xk) − f̃kR(xk + sk) ≥ 0. Consequently, the filter mechanism is irrelevant if all iterates are feasible, and the algorithm simply enforces descent in the objective. This switching rule also ensures that no feasible iterate is ever included in the filter. This is vital to avoid convergence to a feasible but suboptimal point, and crucial in allowing finite termination of the feasibility restoration phase.
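In code, the switching test and the resulting step classification reduce to a single comparison; the constants below are illustrative placeholders, subject to κθ ∈ (0, 1) and ψ > 1/(1 + β) as required by the algorithm.

def classify_step(f_rom_at_xk, f_rom_at_trial, theta_k, kappa_theta=1e-4, psi=2.0):
    # f-type step: the ROM-predicted objective decrease outweighs the current
    # infeasibility, cf. (7.38); x_k is then not added to the filter.
    # Otherwise the step is a theta-type step and x_k is added to the filter.
    if f_rom_at_xk - f_rom_at_trial >= kappa_theta * theta_k ** psi:
        return "f-type"
    return "theta-type"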
As mentioned before, our interest here is to develop a hybrid filter algorithm which utilizes
benefits of both ZOC and FOC. Since ZOC is cheap to construct and can predict descent
in objective or infeasibility even without accurate gradients, the algorithm begins in Section
I with normal subproblem (7.36) and tangential subproblem (7.37) defined with ZOC, and
proceeds until no further improvement in the objective or the infeasibility measure is obtained.
After this, the algorithm moves to Section II where subproblems are constructed with FOC, and it continues there until convergence.
In Section I, before solving the subproblems, we first check if the ROM-based objective function and constraints constructed with ZOC can indeed provide
a descent in either objective or infeasibility or both. Normal and tangential subproblems are
solved only if we can guarantee descent. Since we have no information about accurate gradients
in this section, we rely on the gradients of the objective and the constraints obtained from the
reduced-order model to promote such descent. Formally, we evaluate Cauchy (steepest-descent) steps for the reduced objective and the reduced infeasibility, together with the corresponding decrease measures:

s^{C}_{f} = -\alpha_f \, \frac{\nabla \tilde{f}^{R}_{k}}{\|\nabla \tilde{f}^{R}_{k}\|}, \qquad
\tau_f = \begin{cases} 0 & \text{if } s^{C}_{f} = 0 \\[4pt]
\dfrac{\tilde{f}^{R}_{k}(x_k) - \tilde{f}^{R}_{k}(x_k + s^{C}_{f})}{\alpha_f \|\nabla \tilde{f}^{R}_{k}\|} & \text{otherwise} \end{cases} \qquad (7.41)

s^{C}_{\theta} = -\alpha_\theta \, \frac{\nabla \tilde{\theta}^{R}_{k}}{\|\nabla \tilde{\theta}^{R}_{k}\|}, \qquad
\tau_\theta = \begin{cases} 0 & \text{if } s^{C}_{\theta} = 0 \\[4pt]
\dfrac{\tilde{\theta}^{R}_{k}(x_k) - \tilde{\theta}^{R}_{k}(x_k + s^{C}_{\theta})}{\alpha_\theta \|\nabla \tilde{\theta}^{R}_{k}\|} & \text{otherwise} \end{cases} \qquad (7.42)
for some αf, αθ ∈ (0, 1). Here f̃kR is constructed with ZOC in equation (7.8), and θ̃kR is given by equation (7.35) with c̃E^R(x) and c̃I^R(x) given by equation (7.8). We note that since f̃kR and θ̃kR are based on the ROM for the k-th iteration, their gradients, and thus s^C_f and s^C_θ, can be cheaply evaluated for each trust-region iteration. Hence, with a little computational expense, we can determine if the reduced-order model can predict descent for f̃kR or θ̃kR or both. If τf > 0 or τθ > 0, descent is guaranteed for the reduced objective function f̃kR or the reduced infeasibility θ̃kR, respectively. Therefore, the normal subproblem with ZOC is solved only if τθ > 0, and similarly, the tangential subproblem with ZOC is solved only when τf > 0. If neither can be ensured, the algorithm moves to Section II where FOC with exact gradients is used.
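The descent check itself is inexpensive because it only touches the ROM. The following sketch evaluates a scaled steepest-descent (Cauchy) step for a generic ROM-based quantity phi (the reduced objective or the reduced infeasibility), estimating its gradient by finite differences on the ROM; the sign of the returned decrease plays the role of τf or τθ (up to the positive scaling in (7.41)–(7.42)).

import numpy as np

def cauchy_decrease(phi, xk, alpha=0.5, h=1e-6):
    xk = np.asarray(xk, dtype=float)
    g = np.array([(phi(xk + h * e) - phi(xk)) / h for e in np.eye(len(xk))])
    gnorm = np.linalg.norm(g)
    if gnorm < 1e-12:
        return np.zeros_like(xk), 0.0
    s = -alpha * g / gnorm                  # scaled steepest-descent (Cauchy) step
    return s, phi(xk) - phi(xk + s)         # positive decrease signals a usable descent direction

# Section I logic: solve the tangential subproblem only if the decrease for the
# reduced objective is positive, and the normal subproblem only if the decrease
# for the reduced infeasibility is positive.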
In Section I, we also incorporate POD subspace augmentation which involves adding more
POD basis functions to improve accuracy of the existing ROM for the k th iteration, and thus
enhancing its ability to accurately predict the descent direction for f̃kR or θ̃kR or both. To recall, the POD subspace dimension M is determined by an error tolerance level λ∗. During the course of the algorithm, if we encounter a situation when both τf ≤ 0 and τθ ≤ 0, the tolerance level λ∗ is reduced to increase the POD subspace dimension M, and more basis functions are added to the ROM. This is repeated until either τf or τθ (or both) becomes positive, or we hit the maximum allowable limit M max for the POD subspace dimension. Once we reach M max, the algorithm switches to Section II.
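The augmentation logic can be written compactly as a loop over the tolerance λ∗; build_rom and descent_taus below are hypothetical callbacks standing in for the ROM construction and the Cauchy-step check described above.

def augment_pod_subspace(lam_star, build_rom, descent_taus, m_max, shrink=0.5):
    rom, M = build_rom(lam_star)                 # returns a ROM and its subspace dimension
    tau_f, tau_theta = descent_taus(rom)
    while tau_f <= 0.0 and tau_theta <= 0.0 and M < m_max:
        lam_star *= shrink                       # tighter tolerance -> more basis functions
        rom, M = build_rom(lam_star)
        tau_f, tau_theta = descent_taus(rom)
    switch_to_section_II = tau_f <= 0.0 and tau_theta <= 0.0
    return rom, lam_star, switch_to_section_II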
Since Section II involves expensive gradient computation, we desire to achieve larger re-
duction in the objective and infeasibility in Section I itself and delay switching to Section II.
POD subspace augmentation not only allows such a delay, but also improves the performance
of normal and tangential subproblems with ZOC by producing more accurate ROM. Note that
if M max is same as the number of spatial discretization nodes Nx , ROM is essentially as accu-
rate as the original DAE system and Section I itself can be used to attain convergence to an
optimum. However, we avoid choosing M max as high as Nx as we lose all the computational
advantage offered by ROMs. Usually an M max is chosen which is reasonably high compared to the initial subspace dimension but still well below Nx, so that beyond M max the algorithm relies on Section II with FOC and accurate gradients instead of utilizing more expensive ROMs. Moreover, since subproblems with ZOC alone cannot guarantee convergence to an optimal point within Section I itself, our filter algorithm never terminates in Section I and always proceeds to Section II.
In Section II of our hybrid filter algorithm, normal subproblem (7.36) and tangential subprob-
lem (7.37) are constructed with First-order Corrections for the objective and the constraints.
This involves computing exact gradients for each trust-region iteration. Because FOC and
the exact gradients can ensure proper descent direction, we do not calculate Cauchy steps for
the objective and infeasibility as done in Section I. Moreover, we do not utilize POD basis
augmentation strategy for this section as the ROMs, even with few basis functions M , can
generate accurate steps with accurate original gradients. As a result, we can work with smaller POD subspaces in this section. We note that once the algorithm proceeds from Section I to Section II, it never reverts back to Section I.
Section II is essentially the SQP-filter algorithm proposed by Fletcher et al. [72]. The
difference lies in the fact that Fletcher et al. use a quadratic model approximation for their subproblems, whereas we use corrected reduced-order models. However, we ensure all the assumptions made by Fletcher et al. are satisfied, which renders our algorithm similar to the SQP-filter algorithm from a convergence theory point of view (see the convergence discussion below).
The algorithm switches to a feasibility restoration phase when it is not able to obtain an acceptable step. This happens in two scenarios:
• when the next iterate xk + sk obtained after solving the normal and the tangential subproblems is not acceptable for the filter and xk;
• when xk + sk is acceptable for the filter and xk, and satisfies the switching condition (7.38) as well, but fails to provide sufficient reduction in the original objective function (ρk < η1).
In both scenarios, the step is rejected and the trust-region radius is reduced.
Restoration phase is invoked when either the current trust-region radius ∆k goes below ∆min ,
when the current infeasibility level θekR goes beyond a maximum limit θmax , or when in Section
I, τf ≤ 0 together with τθ > 0, i.e., when ROM-based subproblems with ZOC can only decrease infeasibility and not the objective.
The purpose of the restoration phase is to decrease the current constraint violation and
generate a new iterate xk +sk which is acceptable for the current filter and xk . In our algorithm,
restoration phase involves solving the normal subproblem using the basic trust-region algorithm
[54] until such an iterate is obtained. Consequently, restoration phase can follow its own trust-
region update rules separate from the ones used in the filter trust-region algorithm. In the
terminology of filter methods, such an infeasibility reducing step is known as “θ-type step” or
“h-type step”. We define distinct restoration phases for Section I and Section II. Restoration
phase in Section I solves the normal subproblem with ZOC, while that in Section II solves it
with FOC. Moreover, restoration phase in Section I also involves POD subspace augmentation
and evaluation of the Cauchy step τθ to determine if a descent direction exists for infeasibility
with current ROM incorporating Mk basis functions for the current iterate k.
We note that the iterates generated during the restoration phase are never added to the
filter as it can also lead to the addition of a feasible point to the filter which is detrimental for
the algorithm. But we also note that whenever restoration phase is invoked at an iterate xk , it is
augmented to the filter to avoid a visit to this point again in the future. Since a feasible iterate
is never included in the filter, feasibility restoration phase always either generates a successful
iterate, or converges to a local minimizer with some measure of infeasibility, indicating that the problem may be locally infeasible. The overall hybrid filter trust-region method (Algorithm II) proceeds as follows.
Choose 0 < η1 ≤ η2 < 1 ≤ η3 , 0 < γ1 ≤ γ2 < 1 < γ3 , γf , γθ ∈ (0, 1), κθ ∈ (0, 1), β ∈ (0, 1),
ψ > 1/(1 + β), αf , αθ ∈ (0, 1), an initial trust-region radius ∆0 , a minimum radius ∆min , and a maximum infeasibility limit θmax .
1. ROM construction: Compute POD basis functions using the snapshots obtained at xk .
2. Descent check:
(a) Compute s^C_f , s^C_θ , τf , and τθ .
(c) If τθ > 0, add xk to the filter and go to the restoration phase in Step 4.
(d) If Mk < M max , decrease λ∗ to update Mk and ROM, and repeat Step 2, else, go to
Step R.
3. Step computation
(a) Solve normal subproblem (7.36) and tangential subproblem (7.37) with ZOC.
(b) If θk ≥ θmax , or if ∆k ≤ ∆min , add xk to the filter and go to the restoration phase
in Step 4.
(c) If xk + sk is not acceptable for the filter and xk , i.e., (7.34) fails, set xk+1 = xk and ∆k+1 = γ1 ∆k ; increment k by 1 and repeat Step 3.
(f) If (7.38) holds and ρk < η1 , set xk+1 = xk and ∆k+1 = γ1 ∆k . Increment k by 1 and
repeat Step 3.
\Delta_{k+1} = \begin{cases} \gamma_2 \Delta_k & \text{if } \rho_k \in [\eta_1, \eta_2), \\ \Delta_k & \text{if } \rho_k \in [\eta_2, \eta_3), \\ \gamma_3 \Delta_k & \text{if } \rho_k \ge \eta_3 \end{cases}
4. Restoration with ZOC:
(a) With Mk , solve normal subproblem (7.36) with ZOC using a basic trust-region algorithm until a point acceptable for the filter and xk is found. If found, increment k by 1 and return to Step 1; otherwise, go to Step 6.
5. ROM construction: Compute POD basis functions using the snapshots obtained at xk .
6. Step computation
(a) Solve normal subproblem (7.36) and tangential subproblem (7.37) with FOC.
(c) If θk ≥ θmax , or if ∆k ≤ ∆min , add xk to the filter and go to the restoration phase
in Step 7.
(d) If xk + sk is not acceptable for the filter and xk , i.e., (7.34) fails, set xk+1 = xk and ∆k+1 = γ1 ∆k ; increment k by 1 and repeat Step 6.
(f) If (7.38) holds and ρk < η1 , set xk+1 = xk and ∆k+1 = γ1 ∆k . Increment k by 1 and
repeat Step 6.
(h) Set xk+1 = xk + sk . If (7.38) fails, ∆k+1 = ∆k , else update ∆k+1 as in 3(h).
7. Restoration with FOC: Solve normal subproblem (7.36) with FOC using a basic trust-
region algorithm until a point acceptable for the filter and xk is found. If found, increment k by 1 and return to Step 5.
Clearly, the algorithm begins in Section I and always terminates in Section II. The choice of the
constants in the algorithm depends on the optimization problem and the scaling mechanism
used for the decision variables. In this work, we choose the following values:
As in the case of exact penalty trust-region algorithm, we choose a small value for η1 to allow
taking a step even if the reduction in f (x) is quite small. Since computation of f (xk + sk ) in
ρk involves evaluation of new snapshots from the original DAEs, which can be used to update
ROM at xk +sk , it is always beneficial to move to xk +sk and expect the new ROM to predict a
better descent step. Also, as in the penalty trust-region case, we choose η2 = 0.5 and η3 = 1 to
maintain the trust-region for a longer duration because of the oscillatory behavior of the ROM.
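The update rule in step 3(h) is straightforward to implement. In the sketch below, η2 = 0.5 and η3 = 1 follow the choice stated above, while η1 and the γ factors are illustrative placeholders satisfying 0 < γ1 ≤ γ2 < 1 < γ3.

def update_radius(delta_k, rho_k, eta1=1e-4, eta2=0.5, eta3=1.0,
                  gamma1=0.5, gamma2=0.5, gamma3=2.0):
    if rho_k < eta1:
        return gamma1 * delta_k        # poor agreement (handled in step 3(f)): shrink
    if rho_k < eta2:
        return gamma2 * delta_k        # marginal agreement: shrink
    if rho_k < eta3:
        return delta_k                 # acceptable agreement: keep the radius
    return gamma3 * delta_k            # very good agreement: expand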
One peculiar feature of the algorithm is step 3(e) in Section I. Even though predk < 0, this
step allows us to move from xk to xk + sk because aredk is positive. Such a sce-
nario is possible especially with ROM-based trust-region subproblems without exact gradient
information. In particular, we encounter this situation when the normal and the tangential sub-
problems focus more on reducing infeasibility, leading to an increase in the objective function
fekR which causes predk to become negative. However, such an iterate can actually decrease
both infeasibility and objective for the original optimization problem, leading to a positive
aredk . Inaccurate gradients in the ROM-based tangential subproblem with ZOC can cause predk
to become negative. Consequently, ρk becomes negative (< η1 ). If we move from 3(e) to 3(f),
the step will be denied, which is undesirable as aredk > 0. Hence, we jump from 3(e) to 3(g).
Note that a counterpart of 3(e) is missing in Section II, since such a scenario cannot occur when exact gradients are used with FOC.
Another important feature of the algorithm is that in both sections, trust-region radius
is updated only when (7.38) holds. If (7.38) fails, the main effect of the current iteration is
not to reduce objective (which makes ρk essentially irrelevant), but rather to reduce constraint
violation (which is taken care of by inserting xk to the filter in steps 3(g) and 6(g)). In this case,
we impose no further restriction on ∆k+1 and keep it same as ∆k because reducing ∆k+1 might
cause steps towards infeasibility that are too small, or an unnecessary call for the restoration
phase. If, on the other hand, (7.38) holds, the iteration's emphasis is on reducing the objective function, and the trust-region radius is then updated based on ρk as described above.
Finally, the performance of the algorithm depends significantly on the quality of the
reduced-order model constructed. Highly accurate ROMs with sufficiently accurate gradients
can quickly approach close to the optimum within Section I itself. On the other hand, ROMs
which poorly predict the actual dynamic behavior and portray inaccurate gradients can end up
landing in Section II quite early during optimization, making the whole process computation-
ally demanding. Moreover, the size of the trust-region and, as a result, the total number of iterations also rely significantly on ROM accuracy. POD-based ROMs can be made more accurate by adding more basis functions. However, this can also lead to ill-conditioned ROMs due to the addition of basis functions which do not affect the dynamics much. Hence, ROM construction involves a careful trade-off between accuracy and conditioning.
(AD) The sequence of iterates {xk } produced by Algorithm II lies within a closed, bounded
domain Ω.
(AR) If {xki } is any subsequence of iterates for which limi→∞ θki = 0, then a normal step vki
exists for i sufficiently large, and kvki k ≤ κv θki for some κv > 0.
Theorem 7.5.1. (See Theorem 15.5.13 in [54]) Suppose that (AF1)–(AF3), (A1)–(A4), (AD), and (AR) hold and the fraction of Cauchy decrease (FCD) condition

\tilde{f}^{R}_{k}(x_k + v_k) - \tilde{f}^{R}_{k}(x_k + s_k) \;\ge\; \kappa_f\, \chi_k \min\left\{ \frac{\chi_k}{\beta_k},\, \Delta_k \right\} \qquad (7.43)

is satisfied for some κf > 0 and a bounded sequence of βk > 1. Then either the restoration procedure terminates unsuccessfully, or there exists a subsequence of iterates that converges to a first-order critical point.
As discussed before in the case of Algorithm I, assumptions (AF1)–(AF3), (A1), and (A4)
are assumed to be true in this work. Moreover, with the First-order Correction (FOC), we
ensure that assumptions (A2) and (A3) are satisfied. Also, since xL ≤ x ≤ xU , (AD) is
also guaranteed. Assumption (AR) requires existence of a normal step especially when the
current constraint violation θki = θ(xki ), defined by (7.39), is sufficiently small. For Algorithm
II, (AR) is satisfied by the construction of the normal subproblem (7.36) and by assuming
that the gradients of the constraints are linearly independent. Since it is solved using a basic
trust-region algorithm with exact gradients due to FOC, existence of a nonnegative step vk
together with a fraction of Cauchy decrease can always be ensured unless θki = 0.
For the FCD condition, χk is a first-order criticality measure. Based on the tangential problem (7.37), we define χk = χ(xk) in the following manner:

\chi(x_k) = \left| \min_{d} \;\nabla \tilde{f}^{R}_{k}(x_k)^{T} d \right|
\quad \text{s.t.} \quad
\begin{aligned}
& -\bar{\delta} \le \tilde{c}^{R}_{i,k}(x_k) + \nabla \tilde{c}^{R}_{i,k}(x_k)^{T} d \le \bar{\delta}, && i \in E \\
& \tilde{c}^{R}_{i,k}(x_k) + \nabla \tilde{c}^{R}_{i,k}(x_k)^{T} d \le \bar{\delta}, && i \in I \\
& \|d\| \le 1
\end{aligned} \qquad (7.45)
where δ̄ is the optimum infeasibility level obtained from the following normal subproblem:

\begin{aligned}
\min_{q,\,\delta} \quad & \delta \\
\text{s.t.} \quad & -\delta \le \tilde{c}^{R}_{i,k}(x_k) + \nabla \tilde{c}^{R}_{i,k}(x_k)^{T} q \le \delta, && i \in E \\
& \tilde{c}^{R}_{i,k}(x_k) + \nabla \tilde{c}^{R}_{i,k}(x_k)^{T} q \le \delta, && i \in I \\
& \|q\| \le 1 - \vartheta, \quad \delta \ge 0
\end{aligned} \qquad (7.46)
Here ϑ > 0 ensures (7.45) remains feasible. We note that χk can be defined in terms of f̃kR(xk), c̃RE,k(xk), and c̃RI,k(xk), and their gradients, since these match the original objective function and constraints, and their gradients, at xk because of the FOC. In order to use χk in (7.45) for the
constraints, and their gradients at xk because of the FOC. In order to use χk in (7.45) for the
FCD condition, we need to show that it is a first-order criticality measure. Since the constraint
set of (7.45) is linear, and thus convex, the following theorem ensures that χk is a first-order
criticality measure.
Theorem 7.5.2. (See Theorem 12.1.6 in [54]) Suppose that (AF1), (A2), and (A3) hold
and xk belongs to a nonempty, closed and convex feasible region. Then χ(xk ) defined by (7.45) is a first-order criticality measure.
In other words, we can always compute a Cauchy descent direction if χ(xk ) > 0, and χ(xk )
vanishes only when xk is a first-order critical point. Therefore, as the trust-region gets smaller,
the linear part of the objective and the constraints dominate and thus, a Cauchy step can
always be taken to ensure FCD condition (7.43) is satisfied. Also, because of FOC in the
Section II of Algorithm II, the Cauchy step of the tangential problem (7.37) coincides with
that of problem (7.37) with the original objective and constraints. Hence, Section II satisfies the requirements of Theorem 7.5.1 and thus converges to a first-order critical point, provided the restoration procedure does not terminate unsuccessfully, once the algorithm reaches Section II. Since the algorithm always reaches Section II, Algorithm II is globally convergent and always converges to an exact local optimum of the original optimization problem. In order to verify optimality of the termination point of Algorithm II, we conduct a perturbation analysis as done with the exact penalty trust-region algorithm.
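As a concrete illustration, χk of (7.45) can be evaluated as a small linear program once the ROM constraint values and gradients at xk are available. The sketch below assumes the infinity norm for ‖d‖ ≤ 1 (the norm is not fixed above) and uses illustrative data.

import numpy as np
from scipy.optimize import linprog

def criticality_measure(grad_f, c_E, J_E, c_I, J_I, delta_bar):
    # encode -delta_bar <= c_E + J_E d <= delta_bar and c_I + J_I d <= delta_bar
    A_ub = np.vstack([J_E, -J_E, J_I])
    b_ub = np.concatenate([delta_bar - c_E, delta_bar + c_E, delta_bar - c_I])
    res = linprog(grad_f, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1.0, 1.0)] * grad_f.size, method="highs")
    return abs(res.fun) if res.success else 0.0

# toy usage with one equality and one inequality constraint in R^2
chi = criticality_measure(np.array([1.0, -2.0]),
                          np.array([0.1]), np.array([[1.0, 1.0]]),
                          np.array([-0.3]), np.array([[0.5, -1.0]]),
                          delta_bar=0.2)
print("chi_k =", chi)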
We demonstrate Algorithm II for the 2-bed 4-step PSA case study for post combustion CO2
capture, and utilize it to solve the optimization problem (7.31) with same five decision variables.
The algorithm begins at the same initial guess as shown in Table 7.5. At this initial guess, ROM
is constructed with a threshold error tolerance λ∗ of 0.05, similar to the exact penalty function case; as before, separate POD subspaces are retained for each state variable and operating step, including the adsorption and desorption steps. For Section II, gradients are evaluated using perturbation.
Table B.3 in Appendix B lists the trust-region iterations for the tangential subproblem,
while Table B.4 lists those for the normal subproblem. Table B.3 shows decision variables at xk + sk, while Table B.4 shows them at xk + vk. However, CO2 purity and recovery are listed at xk + sk for both tables, since it is observed that for all k, CO2 purity at xk + sk remains the same as that at xk + vk. The algorithm begins with
the restoration phase of Section I since the inaccurate gradients of ROM yield negative τf .
Restoration phase is invoked for every iteration until k = 3, after which feasibility is attained
and CO2 purity goes beyond 50%. After k = 3, since τθ = 0 and the ROM yields τf < 0, POD basis augmentation is used. When λ∗ is reduced to 0.1%, we obtain τf > 0 and thus proceed to the step computation in Section I for the 5th iterate (k = 4). The algorithm continues
in Section I after this and improves objective until 35th iterate (k = 34). At k = 34, ROM
is not able to predict a descent in the objective function even after increasing POD subspace
dimension. Hence, algorithm switches to Section II. Eventually, the algorithm terminates after
51st iteration when ∆50 shrinks from 0.031 to 0.008 and goes below ∆min of 0.02. One of
the key observations in Table B.3 is the value of tp , which increases steadily in Section I
but starts decreasing and hits its lower bound after k = 34, when the algorithm switches to
Section II. This implies that ∂f̃kR/∂tp has the opposite sign in Section I during optimization, which gets corrected in Section II. Moreover, since Ph hits its upper bound and Section I terminates during the same iteration (k = 34), we conclude that CO2 recovery improves in Section I, even with an incorrect ∂f̃kR/∂tp, due to the increment obtained in Ph. Another key observation is the 31st
iteration (k = 30) when we take a step despite ρk being negative. This is a consequence of the
step 3(e) in the algorithm. In this iteration, although predk < 0, we observe aredk > 0.
As highlighted before, once feasibility is attained in the exact penalty trust-region algo-
rithm, we are not allowed a move which increases infeasibility. This eventually causes Algorithm
I to pursue tiny steps towards the optimum. In contrast, Algorithm II allows such a step which can increase infeasibility when it tries to improve the objective function value. For instance, in Table B.4, we notice iterations k=28 and 30, and iterations after k = 33, when the algorithm sacrifices feasibility in order to achieve greater improvement in CO2 recovery. Hence, Algorithm II takes larger steps and requires far fewer iterations than Algorithm I.
Table 7.11 lists the optimal values of the decision variables together with the optimization
CPU time. With 52,247 algebraic variables, Algorithm II terminated within a reasonable CPU
time of 1.36 hrs. As observed in the case of exact penalty trust-region algorithm with FOC,
Ph , Pl , and tp are at their bounds at the optimum. However, the local optimum obtained in
this case is slightly different from the one obtained in section 7.4.3.3 in the sense that the values
of ta and ua are marginally different. As a consequence, optimal CO2 recovery obtained in this
case is marginally better than the one obtained in section 7.4.3.3. We also report the purities
and recoveries of nitrogen and CO2 obtained from AMPL after final optimization iteration,
and from the rigorous model MATLAB simulation at the optimum. The values are fairly close to each other.
Table 7.12 lists the results for the perturbation analysis performed in order to validate if the
algorithm terminated at an optimal point. A positive perturbation for Ph at its upper bound
and a negative perturbation for tp at its lower bound improves both CO2 purity and recovery,
while a negative perturbation for Pl at its lower bound improves recovery but deteriorates CO2
purity. Moreover, perturbing ta and ua in both directions either improves CO2 recovery while
diminishing its purity, or vice-versa. Therefore, we can safely conclude that the algorithm terminated at a local optimum.
Finally, in Figure 7.6, we also present a comparison between the gas-phase CO2 mole frac-
tion profiles at the optimum, obtained from the final valid tangential subproblem iteration, and
from the rigorous model simulation in MATLAB. Profiles are nearly identical which confirms
that ROM is predicting physically correct dynamic behavior of the system during optimization.
7.7 Conclusions
In this chapter, we establish that a trust-region framework is an effective way to utilize reduced-order models for optimization, since it not only restricts the validity zone of the
reduced-order model, but also provides a robust and globally convergent algorithm. Therefore,
we develop trust-region based algorithms and explore both exact penalty-based and filter-based
approaches to handle general equality and inequality constraints in the original optimization
problem.
First, the exact penalty trust-region algorithm is demonstrated for a 2-bed 4-step isothermal
PSA process. We illustrate that executing the algorithm with only Zero-order Correction
cannot ensure convergence to the local optimum of the original optimization problem, and
thus conclude that FOC with exact gradient information is necessary for convergence. With FOC, the exact penalty algorithm converges to a local optimum after 92 trust-region iterations and 1.88 hrs of optimization CPU time.

Figure 7.6: Comparison of CO2 mole fraction for hybrid filter TR algorithm

Although not so encouraging, these results and the success with this case study enable us to conclude that we can indeed perform optimization using
reduced-order models with the help of a systematic trust-region based adaptive strategy.
We find that the reason for this high iteration count is early attainment of feasibility which
further doesn’t allow infeasible moves and thus, tightens the step size. One reason for this
might be the choice of high penalty for constraint violation (µ = 1000). This reflects one of the
main issues with penalty functions, i.e., to find a reasonable value for µ. An updating scheme
can be developed which penalizes constraint violation based on its magnitude; however, it may require considerable tuning. A filter-based trust-region framework, in contrast, not only avoids such difficult decisions of choosing µ, but also allows steps which can achieve greater reduction in the objective at the expense of a temporary increase in infeasibility. When applied to the PSA case study, the filter trust-region algorithm converges to a local optimum within 51 trust-region iterations, consuming 1.36 hrs of CPU time, which is significantly less compared to the penalty algorithm.
Since trust-region subproblems with ZOC can also generate descent due to accurate ROMs,
we follow a hybrid strategy for filter-based algorithm. Moreover, we also incorporate POD basis
augmentation in Section I to improve ROM’s accuracy. For the PSA case study, we observe
that 35 iterations out of the total 51 are indeed carried out in Section I of the algorithm, which
is quite encouraging as it delays expensive gradient evaluations for FOC. Thus, we infer that a
hybrid strategy and POD subspace augmentation are potentially useful tools for optimization
with ROMs.
Finally, success of the idea of using ROMs for computationally efficient optimization ulti-
mately depends on the quality of the ROM and its ability to accurately predict the descent
direction. Although we obtain promising results in this chapter, iteration count for both
penalty and filter approaches can further be reduced by improving the quality of ROMs. In
future, alternative methodologies can be explored to construct better and more efficient ROMs.
Conclusions
Synopsis
With growing demands for efficient PSA cycles, and increasing needs for computationally
cheap modeling techniques, especially for flowsheet simulation and optimization, it has become
essential to develop novel systematic optimization-based strategies for design and operation of
PSA systems. In this dissertation, we not only introduce a novel idea of synthesizing PSA
cycles using a superstructure, but also successfully demonstrate it for practical applications.
In addition, we develop a new optimization framework using reduced-order modeling, which, when applied to PSA optimization problems, yields promising results. All these developments and our contributions
are summarized in the next section, followed by directions for future work.
This dissertation primarily focuses on introducing and developing two new ideas to address
research challenges presented by PSA processes in terms of cycle synthesis and computational
complexity of the PDAEs governing its dynamics, and presents a successful proof of principle
analysis for both ideas. Beginning with an overview of the PSA processes and adsorption
fundamentals in the first two chapters, we describe that a practical PSA/VPSA process can
be fairly complex with a multicolumn design executing a wide variety of non-steady-state op-
erating steps in a non-trivial sequence, and motivate the need for a systematic methodology
to synthesize PSA cycles. Therefore, we first explore the idea of development of a unique PSA
superstructure to design optimal PSA processes. Secondly, we show that PSA processes are
governed by highly nonlinear PDAEs with solution profiles characterized by steep adsorption fronts, which pose a significant computational challenge to current optimization techniques. Consequently, we explore the idea of using POD to gener-
ate computationally-efficient ROMs and actualize novel trust-region algorithms to solve PSA
optimization problems using these ROMs. We provide a summary of the work done and discuss our main contributions below.
PSA Superstructure
We develop a two-bed PSA superstructure capable of predicting a wide variety of new cycle configurations and design parameters. Interconnections between the two beds of
the superstructure are governed by time-dependent control variables, which are manipulated
to accomplish a wide variety of different PSA operating steps. An optimal cycle is eventually
obtained by solving an optimal control problem for the superstructure. To solve it, we adopt a
complete discretization approach, and alleviate its singular nature by using coarse discretization
for controls.
The superstructure approach is illustrated for a post-combustion CO2 capture case study.
Superstructure is optimized to maximize CO2 recovery. With the optimal 2-bed 6-step VSA
cycle, we are able to recover about 80% of CO2 at a substantially high purity of 95%, and at a
significantly high feed flux of 80 kgmol m−2 hr−1 . Next, we develop an optimal configuration
which yields high-purity separation with minimal power requirements. Optimal profiles trans-
late into a 2-bed 8-step VSA configuration which, at 90% purity and 85% recovery, extracts CO2
with a substantially low power consumption of 465 kWh/tonne CO2 captured. We also apply
the superstructure methodology for pre-combustion CO2 capture in Chapter 5. When CO2
recovery is maximized, superstructure optimization results in a 2-bed 8-step VSA cycle which
can produce both H2 and CO2 at a substantially high purity of 98% and 90%, respectively.
Changing the objective to minimizing power consumption yields an entirely different 2-bed
10-step VSA cycle which can produce CO2 at a purity of 90% and a recovery of 92% with a
significantly low power consumption of 46.82 kWh/tonne CO2 captured. Our contributions for this part of the thesis are summarized below:
• All the studies in the literature so far only suggest simplistic formulations to determine the minimum number of beds required in a PSA process with a given fixed sequence of operating steps. To the best of our knowledge, this is the first instance when a systematic methodology is proposed to design, evaluate and optimize PSA processes, and the sequence of operating steps itself is an outcome of the optimization rather than being fixed a priori.
• By developing cycles that can extract CO2 at a purity of over 95% for post-combustion capture, and with a power consumption as low as 46.82 kWh/tonne CO2 captured for pre-combustion capture, we demonstrate the potential of PSA for both post-combustion and pre-combustion carbon capture. We not only synthesize
cycles which are practically feasible, but also suggest operating steps which should be in-
corporated in a PSA process for high-purity CO2 capture. More importantly, we discover
novel operating steps such as the total reflux step which have never been seen before in the PSA literature.
• Generic framework
The key accomplishment is that the proposed superstructure framework is quite generic
and can be extended to many other PSA applications. We do not make any assumption
on the adsorbent or the feedstock, the kinds of operating steps that can be predicted, or
details of the bed model. Moreover, we do not impose any upper bound on the number of
operating steps eventually included in the optimal PSA cycle. This makes the approach
fairly general. Also, besides developing optimal cycles, the framework can be used to
evaluate different kinds of adsorbents for the same feedstock and process conditions.
ROM-based Optimization
In Chapter 6, with the help of the method of snapshots and Galerkin projection, we utilize
proper orthogonal decomposition (POD) to successfully construct ROMs which are orders of
magnitude smaller than the original problem and also, significantly accurate. Methodology to
construct ROMs is illustrated for a Skarstrom PSA process to separate H2 and CH4 . With
a model reduction of 93% in size, the resulting ROM accurately mimics the actual dynamic
behavior. ROM is also used to maximize hydrogen recovery within a trust-region around the
point where it is constructed. ROM-based optimization is not only fast and cheap, but an
accurate prediction of the descent direction together with an improvement in the objective is also obtained.
With such encouraging results, we devise a systematic adaptive trust-region based frame-
work for optimization with ROMs. First, an exact penalty-based trust-region algorithm is
developed and illustrated for a two-bed four-step PSA process for post-combustion capture.
We conclude that a First-order Correction with exact gradient information is necessary for
convergence to an optimum. For a CO2 recovery maximization problem, the exact penalty
algorithm with FOC converges to a local optimum after 92 TR iterations and 1.88 hrs of optimization CPU time. To reduce the number of iterations and the computational effort, we also devise a filter-based trust-region framework. This hybrid framework utilizes both ZOC
and FOC to save on the computational effort of computing exact gradients. When applied to
the PSA case study, filter TR algorithm converges to a local optimum within 51 trust-region
iterations consuming 1.36 hrs of CPU time, which is significantly smaller than the penalty
algorithm. Our major contributions for this part of the thesis are given below:
• Although the POD-based reduced order modeling technique has been used in a variety of disciplines, its utilization has remained limited to small-scale optimal control or dynamic optimization problems. This is the first instance when the use of POD-based ROMs is demonstrated for a large-scale cyclic process with multiple sets of PDAEs, state variables, and boundary conditions. Moreover, this is the first instance when POD-based ROMs are used for adsorption systems.
• Although studies in the literature have attempted model simplification for PSA processes,
this is the first successful study which reports the use of a POD-based technique to de-
velop low-order approximations for PSA models. Also, we present a unique construction
technique for ROMs for PSA which is different from other dynamic processes in the sense
that we develop separate ROMs for each state variable and each operating step.
• Trust-region algorithms have been developed to handle approximate models for unconstrained optimization. In this work, we extend the use of approximate models to constrained problems through trust-region frameworks based on the exact penalty and filter approaches, and their successful demonstration for PSA processes. The proposed frameworks are quite generic, do not make any assumption on the optimization problem, and can be applied to other large-scale applications as well.
Development of the proposed ideas in this dissertation together with the fruitful analysis of the
case studies have also helped us identify many potential areas for improvement and a number
of outstanding issues that need to be investigated. Some recommendations for future work in both areas are given below.
PSA Superstructure
• The superstructure can be updated by incorporating flow valves for the inlet and exit
streams of CoB and CnB. To realize operating steps, valve constants can be manipulated
instead of pressures at the ends of CoB and CnB, which is more practical, and can
lead to more stable and nonoscillatory solutions compared to the current optimal control
framework. Moreover, such valves can ensure a proper flow control during steps like co-
current pressurization and pressure equalization, which cannot be ensured neatly with the current framework of directly manipulated pressures.
• Product tanks can also be incorporated in the superstructure with additional mass bal-
ance equations for them. This can help obtain operating steps that involve a pure product stream, such as product purge or product pressurization.
• The complete discretization approach used in this work to solve the optimal control
problem requires an additional accuracy verification step. Such a step can be completely
avoided, and accuracy of the results can be enhanced by using a sensitivity-based sequen-
tial approach, similar to [100]. Partially discretized PDAEs together with the sensitivity
equations can be integrated outside the NLP problem using a sophisticated dynamic integrator.
• In this work, computational limitations allow us to do the analysis with only binary
feed mixtures. In future, the approach can be extended to applications that involve multicomponent feed mixtures. Moreover, multiple adsorbent layers can be incorporated in CoB and CnB for higher selectivity, efficient separation, and enhanced overall performance.
• Although analyzed for CO2 capture in this work, the superstructure framework is fairly
general and can be applied for many other PSA applications in future.
ROM-based Optimization
• Although the trust-region algorithms developed are globally convergent, a more detailed convergence analysis can be pursued in future work.
• In this work, we do not consider CSS conditions as a part of the original optimization
problem, and achieve CSS for each trust-region iteration in our case studies because we
assume that the number of decision variables, and equality and inequality constraints re-
main same for both original problem and ROM-based trust-region subproblem. However,
CSS conditions for the original problem get reduced in dimension for the ROM-based
trust-region subproblem after applying Galerkin projection onto the POD subspace, thus
violating our assumption. In future, a different trust-region framework, such as the re-
cursive multilevel algorithm proposed by Gratton et al. [87, 88] can be devised to handle
CSS constraints.
• Although we demonstrate the trust-region algorithms for a two-bed four-step PSA pro-
cess, in future, the proposed framework can easily be extended to optimize large-scale
PSA applications involving multiple adsorbent layers, complex flow patterns, and more complex cycle configurations.
• The trust-region framework architected in this work is fairly generic and can be utilized for ROM-based optimization in other application areas as well.
• Finally, success of the trust-region framework depends heavily on the quality of the
reduced-order models and their ability to correctly predict the descent directions for the
objective and infeasibility measure. Although POD-based ROMs produce promising results, the quality of the ROMs can be further enhanced with alternative ROM-construction techniques. For instance, ROMs can be constructed using neural networks, or with the Kriging approximation based methodology proposed by Caballero et al. [38], in which ROMs are essentially metamodels which map the decision variables directly to the outputs of interest.
[3] D. Aaron and C. Tsouris, Separation of CO2 from Flue Gas: A Review, Separ. Sci.
Technol. 40 (2005), no. 1, 321–348.
[7] N. M. Alexandrov and J. E. Dennis Jr., Multilevel Algorithms for Nonlinear Optimization,
In Computational Methods for Optimal Design and Control ; J. Borggaard, J. Burns, E.
Cliff, and S. Schreck, Eds., Birkhäuser, 1998.
[12] E. Arian, M. Fahl, and E. Sachs, Trust-region Proper Orthogonal Decomposition for
Optimal Flow Control, Tech. report, Institute for Computer Applications in Science and
Engineering, ICASE 2000-25, NASA Langley Research Center, Hampton, 2000.
[14] , Dynamic Optimization of Dissipative PDE Systems using Nonlinear Order Re-
duction, Chem. Eng. Sci. 7 (2002), 5083–5114.
[15] P. Astrid, S. Weiland, K. Willcox, and T. Backx, Missing Point Estimation in Models
Described by Proper Orthogonal Decomposition, IEEE T. Automat. Contr. 53 (2008),
no. 10, 2237–2251.
[16] A. Audet and J. E. Dennis Jr., A Pattern-Search Filter Method for Nonlinear Program-
ming without Derivatives, SIAM J. Optim. 14 (2004), no. 4, 980–1010.
[17] G. Bader and U. M. Ascher, A New Basis Implementation for a Mixed Order Boundary
Value ODE Solver, SIAM J. Sci. and Stat. Comput. 8 (1987), no. 4, 483–500.
[18] M. S. A. Baksh and M. W. Ackley, Pressure Swing Adsorption Process for the Production
of Hydrogen, US Patent 6340382, 2002.
[19] M. S. A. Baksh and C. E. Terbot, Pressure Swing Adsorption Process for the Production
of Hydrogen, US Patent 6503299, 2003.
[20] S. Balakrishna and L. T. Biegler, Targeting Strategies for the Synthesis and Energy
Integration of Nonisothermal Reactor Networks, Ind. Eng. Chem. Res. 31 (1992), no. 9,
2152–2164.
[21] E. Balsa-Canto, J. R. Banga, and A. A. Alonso, A Novel, Efficient and Reliable Method
for Thermal Process Design and Optimization. Part II: Applications, J. Food Eng. 52
(2002), no. 3, 235–247.
[23] C. Benkmann, System for Treatment of Plural Crude Gases in Single Adsorption Plant,
US Patent 4402712, 1983.
[25] M. Bergmann, L. Cordier, and J.-P. Brancher, Control of the Cylinder Wake in the Lami-
nar Regime by Trust-region methods and POD Reduced Order Models, Proceedings of the
44th IEEE Conference on Decision and Control, and the European Control Conference
2005, Seville, Spain, Dec 12-15, 2005.
[26] , Optimal Rotary Control of the Cylinder Wake using Proper Orthogonal Decom-
position Reduced-order Model, Phys. Fluids 17 (2005), 97–101.
[27] G. Berkooz, P. Holmes, and J. L. Lumley, The Proper Orthogonal Decomposition in the
Analysis of Turbulent Flows, Annu. Rev. Fluid Mech. 25 (1993), 539–575.
[30] , Systematic Methods for Chemical Process Design, Prentice Hall PTR: Upper
Saddle River, NJ, 1997.
[31] L. T. Biegler, L. Jiang, and V. G. Fox, Recent Advances in Simulation and Optimal
Design of Pressure Swing Adsorption Systems, Sep. Purif. Rev. 33 (2005), no. 1, 1–39.
[35] T. Bui-Thanh, K. Willcox, and O. Ghattas, Model Reduction for Large-Scale Systems
with High-Dimensional Parametric Input Space, SIAM J. Sci. Comput. 30 (2008), no. 6,
3270–3288.
[38] J. A. Caballero and I. E. Grossmann, An Algorithm for the Use of Surrogate Models in
Modular Flowsheet Optimization, AIChE J. 54 (2008), no. 10, 2633–2650.
[39] Y. Cao, J. Zhu, Z. Luo, and I. M. Navon, Reduced-order Modeling of the Upper Tropical
Pacific Ocean Model using Proper Orthogonal Decomposition, Comput. Math. Appl. 52
(2006), 1373–1386.
[41] R. G. Carter, On the Global Convergence of Trust Region Algorithms using Inexact
Gradient Information, SIAM J. Numer. Anal. 28 (1991), no. 1, 251–265.
[42] P. Cen and R. T. Yang, Separation of a Five-Component Gas Mixture by Pressure Swing
Adsorption, Separ. Sci. Technol. 20 (1985), no. 9, 725–747.
[43] P. L. Cen, W. N. Chen, and R. T. Yang, Ternary Gas Mixture Separation by Pressure
Swing Adsorption: a Combined Hydrogen-Methane Separation and Acid Gas Removal
Process, Ind. Eng. Chem. Process Des. Dev. 24 (1985), no. 4, 1201–1208.
[46] A. S. T. Chiang, Arithmetic of PSA Process Scheduling, AIChE J. 34 (1988), no. 11,
1910–1912.
[47] K. Chihara and M. Suzuki, Air Drying by Pressure Swing Adsorption, J. Chem. Eng.
Jpn. 16 (1983), no. 4, 293–299.
[50] W.-K. Choi, T.-I. Kwon, Y.-K. Yeo, H. Lee, H. Song, and B.-K. Na, Optimal Operation
of the Pressure Swing Adsorption (PSA) Process for CO2 Recovery, Korean J. Chem.
Eng. 20 (2003), no. 4, 617–623.
[51] C.-T. Chou and C.-Y. Chen, Carbon Dioxide Recovery by Vacuum Swing Adsorption,
Sep. Purif. Technol. 39 (2004), no. 1-2, 51–65.
[53] Y. Chung, B.-K. Na, and H. K. Song, Short-cut Evaluation of Pressure Swing Adsorption
Systems, Comput. Chem. Eng. 22, Suppl. (1998), S637–S640.
[59] M. S. Darwish and F. Moukalled, TVD Schemes for Unstructured Grids, Int. J. Heat
Mass Tran. 46 (2003), no. 4, 599–611.
[60] P. G. de Montgareuil and D. Domine, Process for Separating a Binary Gaseous Mixture
by Adsorption, US Patent 3155468, 1964.
[61] J. E. Dennis Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization
and Nonlinear Equations, Prentice Hall: Engelwood Cliffs, NJ, 1983.
[62] S. J. Doong and R. T. Yang, Bidisperse Pore Diffusion Model for Zeolite Pressure Swing
Adsorption, AIChE J. 33 (1987), no. 6, 1045–1049.
[65] M. Fahl, Trust-region Methods for Flow Control based on Reduced Order Modelling, Ph.D.
thesis, Trier University, 2000.
[67] S. Farooq, M. M. Hassan, and D. M. Ruthven, Heat Effects in Pressure Swing Adsorption
Systems, Chem. Eng. Sci. 43 (1988), no. 5, 1017–1031.
[68] J. Favier, L. Cordier, A. Kourta, and A. Iollo, Calibrated POD Reduced-Order Mod-
els of Massively Separated Flows in the Perspective of their Control, Proceedings of
FEDSM2006, 2006 ASME Joint U.S. - European Fluids Engineering Summer Meeting,
July 17-20, Miami, FL, 2006.
[70] , Numerical Methods for Problems with Moving Fronts, Ravenna Park Publishing,
Inc.: Seattle, WA, 1992.
[73] R. Fletcher and S. Leyffer, A Bundle Filter Method for Nonsmooth Nonlinear Optimiza-
tion, Tech. report, NA/195, Department of Mathematics, University of Dundee, Scotland,
UK, 1999.
[79] A. Fuderer, Pressure Swing Adsorption Process and System, US Patent 4381189, 1983.
[81] A. Fuderer and E. Rudelstorfer, Selective Adsorption Process, US Patent 3986849, 1976.
[84] E. Glueckauf and J. I. Coates, Theory of Chromatography IV: The Influence of Incom-
plete Equilibrium on the Front Boundary of Chromatograms and the Effectiveness of
Separation, J. Chem. Soc. (1947), 1315–1321.
[85] V. G. Gomes and K. W. K. Yee, Pressure Swing Adsorption for Carbon Dioxide Seques-
tration from Exhaust Gases, Sep. Purif. Technol. 28 (2002), no. 2, 161–171.
[86] C. A. Grande, S. Cavenati, and A. E. Rodrigues, Pressure Swing Adsorption for Carbon
Dioxide Sequestration, 2nd Mercosur Congress on Chemical Engineering and 4th Mercosur
Congress on Process Systems Engineering, 2005.
[87] S. Gratton, A. Sartenaer, and P. L. Toint, Recursive Trust-region Methods for Multiscale Nonlinear Optimization, Tech. report, Department of Mathematics, University of Namur, Namur, Belgium, 2004.
[88] ———, Numerical Experience with a Recursive Trust-region Method for Multilevel Nonlinear Optimization, Tech. report, Department of Mathematics, University of Namur, Namur, Belgium, 2006.
[90] M. Hirose, I. Omori, M. Oba, and T. Kawai, Carbon Dioxide Separation and Recovery
System, Japanese Patent 2005262001, 2005.
[91] C. Hirsch, Numerical Computation of Internal and External Flows, Volume 1, Funda-
mentals of Numerical Discretization, John Wiley-Interscience: New York, NY, 1988.
[93] IEA/WEO, World Energy Outlook 2006, Tech. report, International Energy Agency,
Paris, France, 2006.
[94] IPCC, Carbon Dioxide Capture and Storage, Tech. report, Intergovernmental Panel on
Climate Change, Geneva, Switzerland, 2005.
[96] K. Ito, K. Otake, and M. Itoi, Carbon Dioxide Desorption Method, Japanese Patent
2004202393, 2004.
[98] J.-G. Jee, M.-B. Kim, and C.-H. Lee, Adsorption Characteristics of Hydrogen Mixtures
in a Layered Bed: Binary, Ternary, and Five-Component Mixtures, Ind. Eng. Chem.
Res. 40 (2001), no. 3, 868–878.
[100] L. Jiang, V. G. Fox, and L. T. Biegler, Simulation and Optimal Design of Multiple-Bed
Pressure Swing Adsorption Systems, AIChE J. 50 (2004), no. 11, 2904–2917.
[103] S. Kameswaran and L. T. Biegler, Convergence Rates for Direct Transcription of Optimal Control Problems using Collocation at Radau Points, Comput. Optim. Appl. 41 (2008), no. 1, 81–126.
[104] A. Kapoor and R. T. Yang, Optimization of a Pressure Swing Adsorption Cycle, Ind.
Eng. Chem. Res. 27 (1988), no. 1, 204–206.
[105] J. Kärger and D. M. Ruthven, Diffusion in Zeolites and Other Microporous Solids, John Wiley-Interscience: New York, NY, 1992.
[106] G. E. Keller, Gas Adsorption Processes: State of the Art, in Industrial Gas Separations; T. E. Whyte, Ed., ACS Symposium Series, Vol. 223, p. 145, American Chemical Society: Washington, DC, 1983.
[112] B. Kragel, Streamline Diffusion POD Models in Optimization, Ph.D. thesis, Trier Uni-
versity, 2005.
[113] R. Kumar, Removal of Water and Carbon Dioxide from Atmospheric Air, US Patent
4711645, 1987.
[114] R. Kumar, V. G. Fox, D. Hartzog, R. E. Larson, Y. C. Chen, P. A. Houghton, and T. Naheiri, A Versatile Process Simulator for Adsorptive Separations, Chem. Eng. Sci. 49 (1994), 3115–3125.
[115] K. Kunisch and S. Volkwein, Control of the Burgers Equation by a Reduced-Order Ap-
proach Using Proper Orthogonal Decomposition, J. Opt. Theory Appl. 102 (1999), no. 2,
345–371.
[122] P. A. LeGresley and J. J. Alonso, Airfoil Design Optimization using Reduced Order
Models based on Proper Orthogonal Decomposition, Fluids 2000 Conference and Exhibit,
June 19-22, Denver, CO. AIAA Paper 2000-2545, 2000.
[125] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge University Press: Cambridge, UK, 2002.
[126] J. L. Lumley, Coherent Structures in Turbulence, pp. 215–242, in: R.E. Meyer (ed.),
Transition and Turbulence, Academic Press: New York, NY, 1981.
[128] H. V. Ly and H. T. Tran, Modeling and Control of Physical Processes Using Proper
Orthogonal Decomposition, Math. Comput. Model. 33 (2001), no. 1-3, 223–236.
[129] A. Malek and S. Farooq, Study of a Six-Bed Pressure Swing Adsorption Process, AIChE
J. 43 (1997), no. 10, 2509–2523.
[130] G. Q. Miller and J. Stöcker, Selection of a Hydrogen Separation Process, NPRA Annual
Meeting: San Francisco, California, 1989.
[131] J. Moehlis, T. R. Smith, P. Holmes, and H. Faisst, Models for Turbulent Plane Couette
Flow using the Proper Orthogonal Decomposition, Phys. Fluids 14 (2002), no. 7, 2493–
2507.
[132] B.-K. Na, K.-K. Koo, H.-M. Eum, H. Lee, and H. Song, CO2 Recovery from Flue Gas by
PSA Process using Activated Carbon, Korean J. Chem. Eng. 18 (2001), no. 2, 220–227.
[133] B.-K. Na, H. Lee, K.-K. Koo, and H. K. Song, Effect of Rinse and Recycle Methods
on the Pressure Swing Adsorption Process To Recover CO2 from Power Plant Flue Gas
Using Activated Carbon, Ind. Eng. Chem. Res. 41 (2002), no. 22, 5498–5503.
[134] NETL/DOE, The Cost and Performance Baseline for Fossil Energy Power Plants, Volume 1: Bituminous Coal and Natural Gas to Electricity, Tech. report, National Energy Technology Laboratory, Department of Energy, USA, May 2007.
[135] P.-y. Nie and C.-f. Ma, A Trust-region Filter Method for General Non-linear Program-
ming, Appl. Math. Comput. 172 (2006), 1000–1017.
[138] S. Nilchan, The Optimisation of Periodic Adsorption Processes, Ph.D. thesis, Imperial
College of Science, Technology, and Medicine, London, UK, 1997.
[140] J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag: New York, NY,
1999.
[141] H. M. Park and D. H. Cho, The Use of the Karhunen–Loève Decomposition for the Modeling of Distributed Parameter Systems, Chem. Eng. Sci. 51 (1996), no. 1, 81–98.
[142] J.-H. Park, H.-T. Beum, J.-N. Kim, and S.-H. Cho, Numerical Analysis on the Power
Consumption of the PSA Process for Recovering CO2 from Flue Gas, Ind. Eng. Chem.
Res. 41 (2002), no. 16, 4122–4131.
[143] R. Fletcher, S. Leyffer, and P. L. Toint, A Brief History of Filter Methods, Tech. report, ANL/MCS-P1372-0906, Argonne National Laboratory, Mathematics and Computer Science Division, 2006.
[145] R. Rajasree and A. S. Moharir, Simulation based Synthesis, Design and Optimization of
Pressure Swing Adsorption (PSA) Processes, Comput. Chem. Eng. 24 (2000), no. 11,
2493–2505.
[146] J. Rambo and Y. Joshi, Reduced-order Modeling of Turbulent Forced Convection with
Parametric Conditions, Int. J. Heat Mass Tran. 50 (2007), 539–551.
[148] S. Reynolds, A. Mehrotra, A. Ebner, and J. Ritter, Heavy Reflux PSA Cycles for CO2
Recovery from Flue Gas: Part I. Performance Evaluation, Adsorption 14 (2008), no. 2,
399–413.
[149] S. P. Reynolds, A. D. Ebner, and J. A. Ritter, New Pressure Swing Adsorption Cycles
for Carbon Dioxide Sequestration, Adsorption 11 (2005), no. 0, 531–536.
[150] ———, Carbon Dioxide Capture from Flue Gas by Pressure Swing Adsorption at High Temperature using a K-promoted HTlc: Effects of Mass Transfer on the Process Performance, Environ. Prog. 25 (2006), no. 4, 334–342.
[151] ———, Stripping PSA Cycles for CO2 Recovery from Flue Gas at High Temperature Using a Hydrotalcite-Like Adsorbent, Ind. Eng. Chem. Res. 45 (2006), no. 12, 4278–4294.
[153] C. W. Rowley, T. Colonius, and R. M. Murray, Model Reduction for Compressible Flows
using POD and Galerkin Projection, Physica D 189 (2004), no. 1–2, 115–129.
[156] D. M. Ruthven, S. Farooq, and K. S. Knaebel, Pressure Swing Adsorption, VCH Pub-
lishers: New York, NY, 1994.
[159] J. Schell, N. Casas, and M. Mazzotti, Pre-combustion CO2 Capture for IGCC Plants by
an Adsorption Process, Energ. Procedia 1 (2009), 655–660.
[160] W. E. Schiesser, The Numerical Method of Lines: Integration of Partial Differential Equations, Academic Press: San Diego, CA, 1991.
[163] R. E. H. Sims, H.-H. Rogner, and K. Gregory, Carbon emission and mitigation cost
comparisons between fossil fuel, nuclear and renewable energy resources for electricity
generation, Energ. Policy 31 (2003), no. 13, 1315–1326.
[165] S. Sircar, Separation of Methane and Carbon Dioxide Gas Mixtures by Pressure Swing Adsorption, Separ. Sci. Technol. 23 (1988), no. 6, 519–529.
[168] ———, Applications of Gas Separation by Adsorption for the Future, Adsorpt. Sci. Technol. 19 (2001), 347–366.
[169] ———, Pressure Swing Adsorption: Commentaries, Ind. Eng. Chem. Res. 41 (2002), no. 6, 1389–1392.
[170] ———, Basic Research Needs for Design of Adsorptive Gas Separation Processes, Ind. Eng. Chem. Res. 45 (2006), no. 16, 5435–5448.
[172] S. Sircar and J.R. Hufton, Why Does the Linear Driving Force Model for Adsorption
Kinetics Work?, Adsorption 6 (2000), no. 2, 137–147.
[173] S. Sircar and W. C. Kratz, Simultaneous Production of Hydrogen and Carbon Dioxide
from Steam Reformer Off-Gas by Pressure Swing Adsorption, Separ. Sci. Technol. 23
(1988), no. 14, 2397–2415.
[175] L. Sirovich, Turbulence and the Dynamics of Coherent Structures, Q. Appl. Math. 45
(1987), no. 3, 561–571.
[176] C. W. Skarstrom, Method and Apparatus for Fractionating Gaseous Mixtures by Adsorp-
tion, US Patent 2944627, 1960.
[178] O. J. Smith IV and A. W. Westerberg, The Optimal Design of Pressure Swing Adsorption Systems, Chem. Eng. Sci. 46 (1991), no. 12, 2967–2976.
[179] ———, The Optimal Design of Pressure Swing Adsorption Systems–II, Chem. Eng. Sci. 47 (1992), no. 15-16, 4213–4217.
[181] M. Suzuki, Adsorption Engineering, Kodansha Ltd.: Tokyo, and Elsevier Science Publishers: Amsterdam, 1990.
[183] T. Tamura, Absorption Process for Gas Separation, US Patent 3797201, 1974.
[184] M. Tańczyk and K. Warmuziński, Multicomponent Pressure Swing Adsorption. Part II.
Experimental Verification of the Model, Chem. Eng. Process. 37 (1998), no. 4, 301–315.
[186] P. L. Toint, Global Convergence of a Class of Trust-region Methods for Nonconvex Min-
imization in Hilbert Space, IMA J. Numer. Anal. 8 (1988), 231–252.
[188] D. P. Valenzuela and A. L. Myers, Adsorption Equilibrium Data Handbook, Prentice Hall:
Englewood Cliffs, NJ, 1989.
[189] A. Varshney and A. Armaou, Reduced Order Modeling and Dynamic Optimization of
Multiscale PDE/kMC Process Systems, Comput. Chem. Eng. 32 (2008), no. 9, 2136–
2143.
[190] T. Vermeulen, G. Klein, and N. K. Hiester, in Chemical Engineers' Handbook, ch. 16, R. H. Perry and C. H. Chilton, Eds., 5th ed., McGraw-Hill: New York, NY, 1974.
[192] A. Wächter and L. T. Biegler, Failure of Global Convergence for a Class of Interior-Point
Methods for Nonlinear Programming, Math. Program. 88 (2000), no. 3, 565–574.
[193] ———, Line Search Filter Methods for Nonlinear Programming: Local Convergence, SIAM J. Optim. 16 (2005), no. 1, 32–48.
[194] ———, Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence, SIAM J. Optim. 16 (2005), no. 1, 1–31.
[197] P. A. Webley and J. He, Fast Solution-adaptive Finite Volume Method for PSA/VSA
Cycle Simulation; 1 Single Step Simulation, Comput. Chem. Eng. 23 (2000), no. 11-12,
1701 – 1712.
[198] G. Weickum, M. S. Eldred, and K. Maute, Multi-point Extended Reduced Order Modeling
For Design Optimization and Uncertainty Analysis, 2nd AIAA Multidisciplinary Design
Optimization Specialist Conference, May 1 - 4, Newport, RI. AIAA Paper 2006-2145.,
2006.
[199] M. Whysall and L. J. M. Wagemans, Very Large-scale Pressure Swing Adsorption Pro-
cesses, US Patent 6210466, 2001.
[201] K. Willcox and J. Peraire, Balanced Model Reduction via the Proper Orthogonal Decomposition, AIAA J. 40 (2002), no. 11, 2323–2330.
[202] P. Xiao, S. Wilson, G. Xiao, R. Singh, and P. Webley, Novel Adsorption Processes for
Carbon Dioxide Capture within an IGCC Process, Energy Procedia 1 (2009), 631–638.
[203] P. Xiao, J. Zhang, P. A. Webley, G. Li, R. Singh, and R. Todd, Capture of CO2 from Flue Gas Streams with Zeolite 13X by Vacuum-Pressure Swing Adsorption, Adsorption 14 (2008), no. 4, 575–582.
[204] J. Xu and E. L. Weist, Six Bed Pressure Swing Adsorption Process with Four Steps of
Pressure Equalization, US Patent 6454838, 2002.
[205] T. Yamaguchi and K. Yasushi, Gas Separation Process, US Patent 5250088, 1993.
[206] R. T. Yang, Gas Separation by Adsorption Processes, Butterworths: Boston, MA, 1997.
[207] R. T. Yang and S. J. Doong, Gas Separation by Pressure Swing Adsorption: a Pore-
Diffusion Model for Bulk Separation, AIChE J. 31 (1985), 1829–1841.
[208] S.-Il Yang, D.-Y. Choi, S.-C. Jang, S.-H. Kim, and D.-K. Choi, Hydrogen Separation
by Multi-bed Pressure Swing Adsorption of Synthesis Gas, Adsorption 14 (2008), no. 4,
583–590.
[209] T. Yokoyama, Japanese R&D on Large-Scale CO2 Capture, ECI Symposium Series on
Separations Technology VI: New Perspectives on Very Large-Scale Operations, vol. RP3,
2004.
[210] T. Yuan, P. G. Cizmas, and T. O'Brien, A Reduced-order Model for a Bubbling Fluidized Bed based on Proper Orthogonal Decomposition, Comput. Chem. Eng. 30 (2005), 243–259.
[211] J. Zhang and P. A. Webley, Cycle Development and Design for CO2 Capture from Flue
Gas by Vacuum Swing Adsorption, Environ. Sci. Technol. 42 (2008), no. 2, 563–569.
[212] J. Zhang, P. A. Webley, and P. Xiao, Effect of Process Parameters on Power Require-
ments of Vacuum Swing Adsorption Technology for CO2 Capture from Flue Gas, Energ.
Convers. Manage. 49 (2008), no. 2, 346–356.
[213] L. Zhou, C.-Z. L, S.-J. Bian, and Y.-P. Zhou, Pure Hydrogen from the Dry Gas of
Refineries via a Novel Pressure Swing Adsorption Process, Ind. Eng. Chem. Res. 41
(2002), no. 21, 5290–5297.
[214] S. E. Zitney and M. Syamlal, Integrated Process Simulation and CFD for Improved Process Engineering, Proceedings of the European Symposium on Computer Aided Process Engineering–12, ESCAPE–12, May 26-29: The Hague, The Netherlands; Grievink, J., and van Schijndel, J., Eds.; pp. 397–402, 2002.
Appendix A
Nomenclature
BRi flux of ith component in the bottom reflux stream (gmol m−2 sec−1 )
Ci gas-phase concentration of ith component (gmol m−3 )
Cpg,i heat capacity of ith component (J gmol−1 K−1 )
Cps heat capacity of the adsorbent (J kg−1 K−1 )
D Adsorbent bed diameter (m)
DK Knudsen diffusivity of ith component (m2 /s)
DL Axial dispersion (m2 /s)
Dm,i Bulk diffusivity of ith component (m2 /s)
Dp,i Macropore diffusivity of ith component (m2 /s)
dp particle diameter (m)
Fi input flux of ith component to the co-current bed (gmol m−2 sec−1 )
h total gas-phase enthalpy (J m−3 )
hw fluid-to-wall heat transfer coefficient (J m−2 sec−1 K−1 )
∆Hiads isosteric heat of adsorption (J gmol−1 )
HPi flux of ith component in the heavy product stream (gmol m−2 sec−1 )
KL effective axial thermal conductivity (J m−1 sec−1 K−1 )
ki lumped mass transfer coefficient for ith component (sec−1 )
kiH Henry’s constant (gmol kg−1 kPa−1 )
L bed length (m)
LPi flux of ith component in the light product stream (gmol m−2 sec−1 )
Subscripts
des desorption step
pres pressurization step
Appendix B
Optimization Iterations
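The tables in this appendix list the iteration history of the trust-region algorithms applied to Problem (7.31). Each row records one iteration; the later columns give the objective values obtained from the reduced-order and detailed models, the ratio used in the acceptance test, and whether the step was accepted. As a reading aid only, the Python sketch below shows the standard trust-region acceptance test and radius update that this bookkeeping reflects. It is a minimal, generic illustration, not the Algorithm I/II implementation; the function names, tolerances, and update factors are assumptions.

    def trust_region_update(x, step, true_obj, model_obj, delta,
                            eta=0.1, shrink=0.25, grow=1.5, delta_max=32.0):
        # Compare the reduction achieved on the detailed model with the
        # reduction predicted by the surrogate (reduced-order) model.
        actual = true_obj(x) - true_obj(x + step)
        predicted = model_obj(x) - model_obj(x + step)
        rho = actual / predicted if predicted != 0.0 else 0.0
        if predicted > 0.0 and rho >= eta:
            # Good agreement: accept the step and allow a larger radius.
            return x + step, min(grow * delta, delta_max), rho
        # Poor agreement: reject the step and shrink the trust region.
        return x, shrink * delta, rho

    # Toy usage with scalar test functions (hypothetical, not the PSA models):
    f = lambda x: (x - 3.0) ** 2              # "detailed" objective
    m = lambda x: (x - 3.0) ** 2 + 0.1 * x    # slightly inaccurate surrogate
    x, delta = 0.0, 1.0
    x, delta, rho = trust_region_update(x, min(delta, 3.0 - x), f, m, delta)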
Table B.1: Iteration sequence for Algorithm I with ZOC for Problem (7.31)
0 8 158 48 54 154 0.22 40.29 67.55 12170 9990 9642.45 1.16 yes
1 12 148.67 45.02 57.05 159.76 0.198 40.34 69.93 9641 5311.5 9590.07 0.01 no
2 3 161 47.25 55.5 155.5 0.228 41.25 68.04 9641 8923.7 8681.96 1.34 yes
3 4.5 165.5 46.13 57.75 157.75 0.239 42.68 68.7 8684 7517.8 7251.3 1.23 yes
4 6.75 172.25 44.44 61.13 161.13 0.256 44.83 69.73 7252 5248.7 5100.27 1.07 yes
5 10.13 182.38 41.91 66.19 166.19 0.281 48.05 71.25 5100 1598.6 1878.76 0.92 yes
6 10.13 192.5 40 61.13 170.57 0.256 50.3 75.48 1878 486.41 -75.3967 1.4 yes
7 15.19 207.08 40 53.53 162.98 0.218 50.79 80.98 -75.4 -84 -80.9484 0.65 yes
8 15.19 210.02 40 61.13 155.39 0.18 48.76 86.15 -80.95 -93.12 1153.87 -101 no
9 3.797 203.28 40 55.43 161.08 0.208 50.01 81.74 -80.95 -84.29 -79.6689 0.23 yes
10 0.949 202.33 40 55.9 160.61 0.206 49.81 81.94 -78.99 -79.86 108.191 -217 no
11 0.237 203.04 40 55.55 160.96 0.208 49.96 81.79 -78.99 -79.21 -41.1745 -178 no
12 0.059 203.22 40 55.5 161.05 0.208 49.99 81.75 -78.99 -79.05 -69.6789 -185 no
Table B.2: Iteration sequence for Algorithm I with FOC for Problem (7.31)
0 8 158 48 46 154 0.22 40.39 67.69 12170 8823.4 9542.31 0.79 yes
1 8 166 46 42 158 0.24 43.05 69.09 9544 6745.1 6880.91 0.95 yes
2 8 174 44 38 162 0.26 45.695 70.41 6885 3369.2 4234.59 0.75 yes
3 8 182 42 35 166 0.28 48.37 71.78 4234 1726.9 1558.22 1.07 yes
4 12 194 40.1 35 172 0.25 50.77 76.48 1560 -2058 -76.48 0.45 yes
5 6 200 40 35 169 0.235 51.101 78.58 -76.49 -80.35 -78.58 0.54 yes
6 6 206 40 35 166 0.22 51.21 80.86 -78.58 -80.54 -80.86 1.16 yes
7 9 215 40 35 161.5 0.1975 51.06 84.47 -80.85 -88.76 -84.465 0.46 yes
8 4.5 219.5 40 35 159.25 0.1863 50.79 86.3 -84.45 -86.13 -86.3 1.1 yes
9 6.75 226.25 40 35 155.88 0.1694 49.03 91.51 -86.27 -89.53 878.49 -295 no
10 1.688 221.19 40 35 158.41 0.182 50.64 86.99 -86.29 -87.04 -86.99 0.93 yes
11 1.688 222.88 40 35 157.56 0.1778 50.36 87.71 -86.98 -87.72 -87.71 0.97 yes
12 1.688 224.56 40 35 156.72 0.1741 49.76 89.72 -87.71 -88.64 150.28 -255 no
13 0.422 223.3 40 35 157.35 0.1768 50.19 88.38 -87.71 -87.95 -88.38 2.75 yes
14 0.64 223.94 40 35 157.03 0.1758 50.02 88.95 -88.38 -89.19 -88.95 0.7 yes
15 0.64 224.58 40 35 157.35 0.1749 50 89.18 -88.95 -89.17 -89.059 0.5 yes
16 0.32 224.9 40 35 157.51 0.1746 50 89.27 -89.18 -89.29 -89.149 0.83 yes
17 0.32 225.22 40 35 157.35 0.1747 50.002 89.31 -89.27 -89.37 -89.309 1.75 yes
18 0.48 225.7 40 35 157.59 0.1742 50.002 89.4 -89.31 -89.41 -89.401 1.01 yes
19 0.72 226.42 40 35 157.23 0.1742 50.007 89.57 -89.4 -89.56 -89.57 1.08 yes
20 1.08 227.5 40 35 157.77 0.1732 50 89.76 -89.57 -89.81 -89.637 0.27 yes
21 0.54 228.04 40 35 157.5 0.1731 50 89.98 -89.76 -90.02 -89.859 0.84 yes
22 0.54 228.58 40 35 157.23 0.1733 50.009 90.01 -89.98 -90.05 -90.007 1.99 yes
23 0.81 229.39 40 35 156.83 0.1734 50.012 90.16 -90.01 -90.19 -90.155 0.79 yes
24 0.81 230.2 40 35 157.23 0.1725 50 90.35 -90.16 -90.41 -90.229 0.3 yes
25 0.405 230.61 40 35 157.03 0.1729 50.05 90.32 -90.35 -90.46 -90.32 0.81 yes
26 0.405 231.01 40 35 157.23 0.1721 50 90.51 -90.32 -90.58 -90.455 0.52 yes
27 0.405 231.42 40 35 157.43 0.1718 50.001 90.48 -90.51 -90.6 -90.479 0.26 yes
28 0.2 231.62 40 35 157.53 0.1716 50 90.54 -90.48 -90.5 -90.489 0.55 yes
29 0.2 231.82 40 35 157.63 0.1714 50.001 90.55 -90.54 -90.57 -90.551 2.48 yes
30 0.3 232.12 40 35 157.78 0.1711 50 90.72 -90.55 -90.6 -90.599 1.06 yes
31 0.45 232.57 40 35 158.01 0.1707 50.004 90.78 -90.72 -90.78 -90.779 3.05 yes
32 0.8 233.37 40 35 158.41 0.1697 50 90.92 -90.78 -91.11 -90.864 0.26 yes
33 0.4 233.77 40 35 158.61 0.1694 50 90.96 -90.92 -91 -90.909 0.51 yes
34 0.4 234.17 40 35 158.81 0.1692 50.01 90.99 -90.96 -91.01 -90.986 1.48 yes
35 0.6 234.77 40 35 159.11 0.1687 50.009 91.08 -90.99 -91.14 -91.083 0.63 yes
36 0.6 235.37 40 35 159.41 0.1682 50.02 91.24 -91.08 -91.23 -91.24 1.07 yes
37 0.9 236.27 40 35 158.96 0.1683 50.012 91.34 -91.24 -91.43 -91.336 0.5 yes
38 0.9 237.17 40 35 158.51 0.1684 50.01 91.56 -91.34 -91.51 -91.56 1.29 yes
39 1.35 238.52 40 35 157.83 0.1685 50 91.72 -91.56 -91.88 -91.667 0.34 yes
40 0.675 239.19 40 35 158.17 0.1678 50 91.92 -91.72 -91.89 -91.865 1.15 yes
42 0.5 240.69 40 35 157.42 0.1679 50 92.17 -92.17 -92.32 -92.12 0.51 yes
43 0.5 241.19 40 35 157.17 0.1681 50 92.2 -92.17 -92.22 -92.15 0.56 yes
44 0.5 241.69 40 35 156.92 0.1682 50 92.22 -92.21 -92.27 -92.168 0.3 yes
45 0.25 241.94 40 35 157.05 0.1681 50.007 92.27 -92.22 -92.24 -92.266 4.83 yes
46 0.375 242.32 40 35 156.86 0.168 50 92.37 -92.27 -92.42 -92.315 0.32 yes
47 0.2 242.52 40 35 156.76 0.1681 50 92.39 -92.37 -92.38 -92.319 0.53 yes
48 0.2 242.72 40 35 156.66 0.1682 50.003 92.34 -92.39 -92.41 -92.335 0.78 yes
49 0.2 242.92 40 35 156.76 0.168 50 92.36 -92.33 -92.37 -92.304 0.57 yes
50 0.2 243.12 40 35 156.86 0.1679 50.008 92.38 -92.36 -92.39 -92.378 2.84 yes
51 0.4 243.52 40 35 157.06 0.1675 50.004 92.44 -92.38 -92.44 -92.439 1.01 yes
52 0.8 244.32 40 35 157.46 0.1668 50.007 92.69 -92.44 -92.54 -92.692 2.44 yes
53 1.2 245.52 40 35 158.06 0.1658 50.008 92.7 -92.69 -92.71 -92.703 0.58 yes
54 1.2 246.72 40 35 158.66 0.1648 50.009 92.87 -92.7 -92.87 -92.868 0.99 yes
55 1.8 248.52 40 35 159.56 0.1634 50.041 93.05 -92.87 -93.02 -93.053 1.22 yes
56 2.7 251.22 40 35 160.91 0.1609 50 93.52 -93.05 -93.88 -93.399 0.42 yes
58 2.025 254.59 40 35 160.57 0.1608 50.088 93.85 -93.79 -94.01 -93.848 0.52 yes
59 2 256.59 40 35 159.57 0.1611 50.075 94.07 -93.85 -94.26 -94.067 0.53 yes
60 2 258.59 40 35 160.57 0.1592 50.018 94.27 -94.07 -94.45 -94.269 0.52 yes
61 2 260.59 40 35 161.57 0.1578 50.043 94.39 -94.27 -94.49 -94.386 0.52 yes
64 1 264.59 40 35 163.57 0.1548 50.045 94.89 -94.84 -94.91 -94.889 2.65 yes
65 1.5 266.09 40 35 164.32 0.1538 50.067 94.98 -94.89 -95.06 -94.976 0.52 yes
66 1.5 267.59 40 35 165.07 0.1521 50 95.26 -94.98 -95.3 -95.139 0.5 yes
67 1.5 269.09 40 35 165.82 0.1516 50.055 95.25 -95.26 -95.31 -95.25 2.46 yes
68 2.25 271.34 40 35 166.95 0.1501 50.073 95.33 -95.25 -95.55 -95.332 0.27 yes
69 1.2 272.54 40 35 167.55 0.1491 50.038 95.55 -95.33 -95.56 -95.55 0.96 yes
70 1.2 273.74 40 35 168.15 0.1483 50.037 95.63 -95.55 -95.72 -95.634 0.5 yes
71 1.2 274.94 40 35 168.75 0.1475 50.043 95.63 -95.63 -95.63 -95.634 0.52 yes
72 1.2 276.14 40 35 169.35 0.1467 50.035 95.82 -95.63 -95.8 -95.821 1.1 yes
73 1.8 277.94 40 35 170.25 0.1454 50.011 95.97 -95.82 -96.11 -95.969 0.5 yes
74 1.8 279.74 40 35 171.15 0.1447 50.098 96 -95.97 -96.04 -96.002 0.51 yes
75 1.8 281.54 40 35 170.25 0.1444 50 96.16 -96 -96.21 -96.11 0.5 yes
76 1.8 283.34 40 35 169.35 0.1448 50 96.24 -96.16 -96.3 -96.19 0.57 yes
77 1.8 285.14 40 35 168.45 0.1452 50 96.38 -96.24 -96.39 -96.325 0.93 yes
78 1.8 286.94 40 35 167.55 0.1455 50 96.51 -96.38 -96.52 -96.453 0.9 yes
79 1.8 288.74 40 35 166.65 0.146 50.002 96.51 -96.51 -96.61 -96.509 0.52 yes
80 1.8 290.54 40 35 165.75 0.1463 50 96.79 -96.51 -96.64 -96.578 0.52 yes
81 1.8 292.34 40 35 164.85 0.1467 50 96.7 -96.79 -96.92 -96.645 0.5 yes
82 1.8 294.14 40 35 163.95 0.1471 50 96.78 -96.7 -96.84 -96.694 0.35 yes
83 0.9 295.04 40 35 163.5 0.1474 50 96.81 -96.77 -96.83 -96.784 1.63 yes
84 1.35 296.39 40 35 162.82 0.1478 50.014 96.93 -96.81 -96.97 -96.93 0.95 yes
85 1.35 297.74 40 35 162.15 0.1481 50.023 96.95 -96.93 -96.97 -96.951 0.51 yes
86 1.35 299.09 40 35 161.47 0.1483 50.004 96.99 -96.95 -97.08 -96.984 0.25 yes
87 0.675 299.77 40 35 161.13 0.1485 50.007 97.01 -96.98 -97.04 -97.009 0.43 yes
88 0.338 300 40 35 160.97 0.1487 50.009 97.2 -97.01 -97.2 -97.197 0.99 yes
89 0.338 300 40 35 160.8 0.1487 50.001 97.03 -97.2 -97.98 -97.027 -0.2 no
90 0.084 300 40 35 160.92 0.1487 50.007 97.03 -97.2 -97.29 -97.028 -1.9 no
91 0.021 300 40 35 160.96 0.1486 50.006 97.16 -97.2 -97.28 -97.16 -0.4 no
Table B.3: Iteration sequence for tangential subproblems of Algorithm II for Problem (7.31)
4 4 197.33 40 66.3 170.68 0.279 74.27 -73.12 -75.4 -74.265 0.5 holds yes
5 4 193.33 40 68.3 168.68 0.269 74.69 -74.27 -77.36 -74.694 0.14 holds yes
6 2 191.33 40 69.3 167.68 0.264 74.89 -74.69 -76.25 -74.889 0.13 holds yes
7 1 191.79 40 68.8 167.18 0.2615 75.14 -74.89 -75.39 -75.143 0.51 holds yes
8 1 192.28 40 68.3 166.68 0.259 75.48 -75.14 -75.71 -75.476 0.59 holds yes
9 1 192.69 40 67.8 166.18 0.2565 75.75 -75.48 -75.99 -75.746 0.52 holds yes
10 1 193.09 40 67.3 165.68 0.254 75.99 -75.75 -76.23 -75.989 0.5 holds yes
11 1 193.91 40 66.8 165.18 0.2515 76.27 -75.99 -76.52 -76.271 0.54 holds yes
12 1 193.94 40 66.3 164.68 0.249 76.59 -76.27 -76.9 -76.586 0.5 holds yes
13 1 194.94 40 65.8 164.45 0.2465 76.86 -76.59 -77.07 -76.855 0.56 holds yes
14 1 194.83 40 65.3 163.95 0.244 77.1 -76.86 -77.53 -77.103 0.37 holds yes
15 0.5 195.2 40 65.04 163.69 0.2427 77.27 -77.1 -77.38 -77.267 0.59 holds yes
16 0.5 195.35 40 64.79 163.44 0.2415 77.41 -77.27 -77.49 -77.408 0.64 holds yes
17 0.5 195.66 40 64.54 163.19 0.2402 77.55 -77.41 -77.6 -77.552 0.74 holds yes
18 0.5 196.14 40 64.29 162.94 0.239 77.75 -77.55 -77.73 -77.752 1.1 holds yes
19 1 197.14 40 63.79 163.26 0.2365 78.03 -77.75 -78.02 -78.028 1.03 holds yes
20 2 199.14 40 62.79 163.56 0.2315 78.63 -78.03 -78.96 -78.626 0.64 holds yes
21 2 199.74 40 61.79 162.56 0.2264 79.35 -78.63 -79.63 -79.351 0.72 holds yes
22 2 201.74 40 60.79 162.77 0.2215 79.98 -79.35 -80.35 -79.979 0.63 holds yes
23 2 203.1 40 59.79 161.77 0.2165 80.71 -79.98 -80.99 -80.706 0.72 holds yes
24 2 205.1 40 58.79 161.47 0.2115 81.42 -80.71 -81.42 -81.423 1.01 holds yes
25 4 209.1 40 56.79 159.47 0.2054 82.67 -81.42 -82.66 -82.671 1.01 holds yes
26 8 217.1 40 52.79 155.47 0.1971 84.8 -82.67 -84.58 -84.802 1.12 holds yes
27 16 233.1 40 60.79 157.85 0.1571 93.26 -84.8 -96.69 -93.26 0.71 holds yes
28 16 249.1 40 68.79 165.85 0.1639 89.66 -96.23 -89.01 -89.662 0.5 fails yes
29 8 257.1 40 64.79 169.85 0.1516 92.85 -89.66 -92.92 -92.848 0.98 holds yes
30 8 265.1 40 68.79 173.85 0.1484 92.89 -92.85 -92.62 -92.887 -0.2 fails yes
31 8 273.1 40 72.79 177.85 0.1425 93.66 -92.89 -93.24 -93.662 2.22 holds yes
32 16 289.1 40 80.79 185.85 0.1328 94.52 -93.66 -93.93 -94.519 3.16 holds yes
33 32 299.44 40.06 86.08 200.34 0.1208 94.98 -94.52 -97.03 -94.978 0.18 holds yes
34 16 300 40 94.08 208.34 0.1201 93.83 -94.98 -94.31 -93.832 1.72 fails yes
35 16 300 40 86.08 200.34 0.1235 94.47 -93.85 -95.06 -94.466 0.52 holds yes
36 16 300 40 78.08 192.34 0.1273 94.87 -94.44 -96.21 -94.869 0.23 holds yes
37 8 300 40 74.08 188.34 0.1295 95.28 -94.86 -95.62 -95.28 0.54 holds yes
38 8 300 40 70.08 184.34 0.1326 95.18 -95.28 -95.2 -95.182 1.18 fails yes
39 8 300 40 66.08 180.34 0.1346 95.59 -95.19 -95.46 -95.594 1.53 holds yes
40 16 300 40 58.08 188.34 0.129 96.05 -95.59 -95.92 -96.047 1.38 holds yes
41 32 300 40 42.08 172.34 0.139 96.95 -96.07 -96.98 -96.946 0.99 holds yes
42 32 300 40 35 188.34 0.1277 97.24 -96.94 -99.06 -97.237 0.14 holds yes
45 1 300 40 35 187.84 0.1278 97.25 -97.24 -97.42 -97.253 0.08 holds yes
46 0.5 300 40 35 188.09 0.1276 97.24 -97.25 -97.28 -97.244 -0.3 holds no
47 0.125 300 40 35 187.9 0.1277 97.25 -97.25 -97.27 -97.254 0.05 holds yes
48 0.063 300 40 35 187.93 0.1276 97.25 -97.25 -97.26 -97.248 -2.6 holds no
49 0.016 300 40 35 187.91 0.1277 97.26 -97.25 -97.25 -97.256 6.28 holds yes
Table B.4: Iteration sequence for normal subproblems of Algorithm II for Problem (7.31)