Linear Algebra for Computational Sciences and Engineering
Ferrante Neri
Ferrante Neri
School of Computer Science
University of Nottingham
Nottingham, UK
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Счастье – это когда тебя понимают.
Happiness is when you are understood.
– Georgi Polonsky –
(We'll Live Till Monday)
Foreword
The history of linear algebra can be viewed within the context of two important
traditions.
The first tradition (within the history of mathematics) consists of the progressive broadening of the concept of number so as to include not only positive integers but also negative numbers, fractions, and algebraic and transcendental irrationals. Moreover, the symbols in the equations became matrices, polynomials, sets and permutations. Complex numbers and vector analysis belong to this tradition. Within the development of mathematics, the concern was not so much with solving specific equations as with addressing general and fundamental questions. The latter were approached by extending the operations and the properties of sum and multiplication from the integers to other linear algebraic structures. Different algebraic structures (lattices and Boolean algebra) generalized other kinds of operations, thus making it possible to address some non-linear mathematical problems. As a first example, lattices were generalizations of order relations on algebraic spaces, such as set inclusion in set theory and inequality in the familiar number systems (N, Z, Q, and R). As a second example, Boolean algebra generalized the operations of intersection and union and the principle of duality (De Morgan's laws), already valid in set theory, to formalize logic and the propositional calculus. This approach to logic as an algebraic structure was much like Descartes' algebraic approach to geometry. Set theory and logic have been further advanced in the past centuries. In particular, Hilbert attempted to build up mathematics by using symbolic logic in a way that could prove its consistency. On the other hand, Gödel proved that in any consistent formal system rich enough to express arithmetic, there will always be statements that can be neither proved nor disproved.
The second tradition (within the history of physical science) consists of the search for mathematical entities and operations that represent aspects of physical reality. This tradition played a role in the birth of Greek geometry and its subsequent application to physical problems. When observing the space around us, we always suppose the existence of a reference frame, identified with an ideal 'rigid body', in the part of the universe in which the system we want to study evolves (e.g. a three-axis system with its origin at the sun and axes directed towards three fixed stars). This is modelled in the so-called Euclidean affine space. At a purely kinematic level, the choice of a reference frame is purely descriptive. Two reference frames must intuitively be considered distinct if the corresponding 'rigid bodies' are in relative motion. Therefore, it is important to fix the links (linear transformations) between the kinematic entities associated with the same motion but relative to two different reference frames (Galileo's relativity).
In the seventeenth and eighteenth centuries, some physical entities needed a new representation. This necessity made the above-mentioned two traditions converge by adding quantities such as velocity, force, momentum and acceleration (vectors) to the traditional quantities such as mass and time (scalars). Important ideas led to the major systems of vectors: Galileo's concept of the parallelogram of forces, the concepts of the geometry of situation and of the calculus by Leibniz and Newton, and the geometrical representation of complex numbers. Kinematics studies the motion of bodies in space and in time independently of the causes that provoke it. In classical physics, the role of time is reduced to that of a parametric independent variable. One also needs to choose a model for the body (or bodies) whose motion is to be studied. The fundamental and simplest model is that of a point (useful only if the body's extension is small compared with the extension of its motion and with the other important physical quantities considered in a particular problem). The motion of a point is represented by a curve in the three-dimensional Euclidean affine space. A second fundamental model is that of the 'rigid body', adopted for those extended bodies whose component particles do not change their mutual distances during the motion.
Later developments in electricity, magnetism and optics further promoted the use of vectors in mathematical physics. The nineteenth century marked the development of vector space methods, whose prototypes were the three-dimensional geometric extensive algebra of Grassmann and the algebra of quaternions of Hamilton, devised to represent, respectively, the orientation and the rotation of a body in three dimensions. Thus, it was already clear how a simple algebra should meet the needs of the physicists in order to efficiently describe objects in space and in time (in particular, their dynamical symmetries and the corresponding conservation laws) and the properties of space-time itself. Furthermore, the principal characteristic of such an algebra had to be its linearity (or, at most, its multi-linearity). During the latter part of the nineteenth century, Gibbs based his three-dimensional vector algebra on some ideas of Grassmann and Hamilton, while Clifford united these systems into a single geometric algebra (a direct product of quaternion algebras). Afterwards, Einstein's description of the four-dimensional space-time continuum (the special and general theories of relativity) required a tensor algebra. In the 1930s, Pauli and Dirac introduced matrix representations of Clifford algebras for physical reasons: Pauli to describe the electron spin, and Dirac to accommodate both the electron spin and special relativity.
Each algebraic system is widely used in contemporary physics and is a fundamental part of representing, interpreting and understanding nature. Linearity often arises in physics when a quantity is expanded as a Taylor series, the leading (linear) term being dominant for small oscillations.
A detailed treatment of coupled small oscillations is possible by diagonalizing the coefficient matrix of N coupled differential equations, that is, by finding the eigenvalues and the eigenvectors of Lagrange's equations for coupled oscillators. In classical mechanics, another example of linearization consists of looking for the principal moments and principal axes of a solid body by solving the eigenvalue problem of a real symmetric matrix (the inertia tensor). In the theory of continua (e.g. hydrodynamics, diffusion and thermal conduction, acoustics, electromagnetism), it is sometimes possible to convert a partial differential equation into a system of linear equations by employing the finite difference formalism. This results in a diagonally dominant coefficient matrix. In particular, Maxwell's equations of electromagnetism have an infinite number of degrees of freedom (i.e. the value of the field at each point), but the Superposition Principle and the Decoupling Principle still apply. The response to an arbitrary input is obtained as the convolution of a continuous basis of Dirac δ functions and the relevant Green's function.
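The diagonalization described above can be illustrated with a small numerical sketch. The example below, using NumPy, finds the principal moments and principal axes of a purely illustrative, made-up inertia tensor by solving the eigenvalue problem of a real symmetric matrix:

```python
import numpy as np

# A hypothetical inertia tensor: any real symmetric matrix serves the purpose.
I = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 5.0]])

# eigh is specialised for symmetric (Hermitian) matrices: it returns real
# eigenvalues (the principal moments) and orthonormal eigenvectors
# (the principal axes, stored as the columns of R).
moments, R = np.linalg.eigh(I)

# In the basis of the principal axes the tensor becomes diagonal: R^T I R = D.
D = R.T @ I @ R
print(np.allclose(D, np.diag(moments)))  # the off-diagonal terms vanish
```

In the rotated frame, the off-diagonal products of inertia disappear and the equations of motion decouple, which is precisely the simplification exploited in the treatment of coupled oscillations.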
Even without the more advanced applications of differential geometry, the basic concepts of multilinear mapping and tensor are used not only in classical physics (e.g. the inertia and electromagnetic field tensors) but also in engineering (e.g. dyadics).
In particle physics, it was important to analyse the problem of neutrino oscillations, formally related to both the Decoupling and the Superposition Principles. In this case, the three neutrino mass matrices are neither diagonal nor normal in the basis of the so-called gauge states. However, through a bi-unitary transformation (one unitary transformation for each "parity" of the gauge states), it is possible to obtain the eigenvalues and the corresponding eigenvectors (mass states), which render the matrices diagonal. After this transformation, it is possible to express the gauge states as a superposition (linear combination) of mass states.
Schrödinger's linear equation governs nonrelativistic quantum mechanics, and many problems reduce to obtaining a diagonal Hamiltonian operator. Besides, when studying the addition of quantum angular momenta, one considers the Clebsch-Gordan coefficients, related to a unitary matrix that changes a basis in a finite-dimensional space.
In experimental physics and statistical mechanics (in the framework of stochastic methods), researchers encounter real symmetric, positive definite and thus diagonalizable matrices (the so-called covariance or dispersion matrices). The element of a covariance matrix in position i, j is the covariance between the ith and jth elements of a random vector (i.e. a vector of random variables, each with finite variance). Intuitively, the notion of variance is thus generalized to multiple dimensions.
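As a minimal sketch with made-up data, the covariance matrix of a random vector can be estimated from samples; it is symmetric and positive (semi-)definite, hence diagonalizable, and its diagonal entries are the variances of the individual components:

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 samples of a 3-dimensional random vector (illustrative data:
# correlations are introduced by mixing independent normal components).
X = rng.normal(size=(1000, 3)) @ np.array([[1.0, 0.3, 0.0],
                                           [0.0, 1.0, 0.5],
                                           [0.0, 0.0, 1.0]])

# Element (i, j) of C is the sample covariance between components i and j;
# the diagonal holds the variances of the individual components.
C = np.cov(X, rowvar=False)

# C is symmetric, so it is diagonalizable with real eigenvalues,
# and positive definiteness means all eigenvalues are positive.
eigenvalues = np.linalg.eigvalsh(C)
print(np.allclose(C, C.T), (eigenvalues > 0).all())
```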
The notion of geometrical symmetry played an essential part in constructing simplified theories governing the motion of galaxies and the microstructure of matter (the motion of quarks confined inside hadrons, and the motion of leptons). It was not until Einstein's era that the discovery of the space-time symmetries of the fundamental laws and the meaning of their relations to the conservation laws were fully appreciated, for example, Lorentz transformations, Noether's theorem and Weyl's covariance. An object with a definite shape, size, location and orientation constitutes a state whose symmetry properties are to be studied. The higher its "degree of symmetry" (and the smaller the number of conditions defining the state), the greater the number of transformations that leave the state of the object unchanged.
While developing some ideas of Lagrange, Ruffini and Abel (among others), Galois introduced important concepts in group theory. His study showed that an equation of degree n ≥ 5 cannot, in general, be solved by algebraic methods. He did this by showing that the functional relations among the roots of an equation have symmetries under permutations of the roots. In the 1850s, Cayley showed that every finite group is isomorphic to a certain permutation group (e.g. the geometrical symmetries of crystals are described in terms of finite groups). Fifty years after Galois, Lie unified many disconnected methods of solving differential equations (evolved over about two centuries) by introducing the concept of a continuous transformation group into the theory of differential equations. In the 1920s, Weyl and Wigner recognized that certain methods of group theory could be used as a powerful analytical tool in quantum physics. In particular, the essential role played by Lie groups, e.g. the rotation-related groups SO(3) and SU(2), was first emphasized by Wigner. Their ideas have been used in many branches of contemporary physics, ranging from the theory of solids to nuclear and particle physics. In classical dynamics, the invariance of the equations of motion of a particle under the Galilean transformation is fundamental in Galileo's relativity. The search for a linear transformation leaving Maxwell's equations of electromagnetism "formally invariant" led to the discovery of a group of rotations in space-time (the Lorentz transformations).
Frequently, it is important to understand why a symmetry of a system is observed to be broken. In physics, two different kinds of symmetry breakdown are considered. If two states of an object are different (e.g. by an angle or a simple phase rotation) but have the same energy, one refers to 'spontaneous symmetry breaking'. In this sense, the underlying laws of a system maintain their form (Lagrange's equations are invariant) under a symmetry transformation, but the system as a whole changes under such a transformation, distinguishing between two or more fundamental states. This kind of symmetry breaking characterizes, for example, the ferromagnetic and the superconductive phases, where the Lagrange function (or the Hamiltonian function, representing the energy of the system) is invariant under rotations (in the ferromagnetic phase) and under a complex scalar transformation (in the superconductive phase). By contrast, if the Lagrange function is not invariant under particular transformations, the so-called 'explicit symmetry breaking' occurs. For example, this happens when an external magnetic field is applied to a paramagnet (the Zeeman effect).
Finally, by developing determinants through the theory of permutations and the related Levi-Civita symbol, one gains an important and easy calculation tool for modern differential geometry, with applications in engineering as well as in modern physics. This is the case in general relativity, quantum gravity, and string theory.
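The link between determinants and permutations can be sketched directly from the Leibniz formula det A = Σ_σ sgn(σ) Π_i a_{i,σ(i)}, in which the sign of each permutation plays the role of the Levi-Civita symbol. A small, purely illustrative Python implementation (impractical beyond tiny matrices, since it sums over all n! permutations):

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation, counted via its inversions
    (the discrete analogue of the Levi-Civita symbol)."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Determinant by the Leibniz (permutation) expansion."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# 2x2 check against the familiar formula ad - bc:
print(det([[1, 2], [3, 4]]))  # → -2
```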
We can therefore observe that the concepts of linearity and symmetry have helped to solve many physical problems. Unfortunately, not all of physics can be straightforwardly modelled by linear algebra. Moreover, knowledge of the laws governing the elementary constituents of a system does not imply an understanding of its global behaviour. For example, it is not at all easy to deduce, from the forces acting between water molecules, why ice is lighter than water. Statistical mechanics, which was introduced between the end of the nineteenth century and the beginning of the twentieth century (the work of Boltzmann and Gibbs), deals with the problem of studying the behaviour of systems composed of many particles without determining each particle's trajectory, but by probabilistic methods. Perhaps the most interesting result of statistical mechanics consists of the emergence of collective behaviours: while one cannot say whether water is in the solid or liquid state, or what the transition temperature is, by observing a small number of atoms, clear conclusions can easily be reached if a large number of atoms is observed (more precisely, when the number of atoms tends to infinity). Phase transitions are therefore created as a result of the collective behaviour of many components.
The latter is an example of a physical phenomenon which requires a mathematical instrument different from linear algebra. Nonetheless, as mentioned above, an understanding of linear algebra is one of the basic foundations for the study of physics. A physicist needs algebra either to model a phenomenon (e.g. classical mechanics), to model a portion of a phenomenon (e.g. ferromagnetic phenomena), or to use it as a basic tool to develop complex modern theories (e.g. quantum field theory).
This book provides readers with the basics of modern linear algebra. It is organized with the aim of communicating with a wide and diverse range of backgrounds and aims. The book can be of great use to students of mathematics, physics, computer science, and engineering, as well as to researchers in applied sciences who want to enhance their theoretical knowledge of algebra. Since prior rigorous knowledge of the subject is not assumed, the reader may easily understand how linear algebra aids in numerical calculations and problems in different and diverse topics of mathematics, physics, engineering and computer science.
I found this book a pleasant guide through linear algebra and an essential vademecum for the modern researcher who needs to understand the theory but also has to translate theoretical concepts into computational implementations. The plethora of examples makes the topics, even the most complex, easily accessible to the most practical minds. My suggestion is to read this book, consult it when needed and enjoy it.
Preface to the Second Edition
The first edition of this book has been tested in the classroom over three academic years. As a result, I had the opportunity to reflect on my communication skills and teaching clarity.
Besides correcting the odd typos and minor mistakes, I decided to rewrite many proofs which could be explained in a clearer and friendlier way. Every change to the book has been made by taking into great consideration the reactions of the students as well as their response in terms of learning. Many sections throughout the book have been rephrased, some sections have been reformulated, and a better notation has been used. The second edition contains over 150 pages of new material, including theory, illustrations, pseudocodes and examples throughout the book, summing up to more than 500.
New topics have been added in the chapters about matrices, vector spaces and linear mappings. Moreover, numerous additions have been included throughout the text. The section about Euclidean spaces has now been removed from the chapter about vector spaces and placed in a separate introductory chapter about inner product spaces. Finally, a section at the end of the book showing the solutions to the exercises placed at the end of each chapter has been included.
In its new structure, this book is still divided into two parts: Part I illustrates basic topics in algebra, while Part II presents more advanced topics.
Part I is composed of six chapters. Chapter 1 introduces the basic notation, concepts and definitions in algebra and set theory. Chapter 2 describes theory and applications of matrices. Chapter 3 analyses systems of linear equations, focussing on analytic theory as well as numerical methods. Chapter 4 introduces vectors with a reference to the three-dimensional space. Chapter 5 discusses complex numbers and polynomials as well as the fundamental theorem of algebra. Chapter 6 introduces the conics from the perspective of algebra and matrix theory.
Part II is composed of seven chapters. Chapter 7 introduces algebraic structures and offers an introduction to group and ring theories. Chapter 8 analyses vector spaces. Chapter 9 introduces inner product spaces with an emphasis on Euclidean spaces. Chapter 10 discusses linear mappings. Chapter 11 offers a gentle introduction to complexity and algorithm theory. Chapter 12 introduces graph theory and presents it from the perspective of linear algebra. Finally, Chap. 13 shows how all the linear algebra studied in the previous chapters can be used in practice, in an example about electrical engineering.
In Appendix A, Boolean algebra is presented as an example of non-linear algebra. Appendix B outlines proofs of theorems stated in the book chapters whose proofs were omitted there, since they require a knowledge of calculus and mathematical analysis beyond the scope of this book.
I feel that the book, in its current form, is a substantially improved version of the first edition. Although the overall book structure and style have remained broadly the same, the new way of presenting and illustrating the concepts makes the book accessible to a broad audience, guiding them towards a higher education level of linear algebra.
As a final note, the second edition of this book has been prepared with the aim of making algebra accessible and easily understandable to anybody who has an interest in mathematics and wants to devote some effort to it.
Preface to the First Edition
Theory and practice are often seen as entities in opposition, characterizing two different aspects of the knowledge of the world. In reality, applied science is based on theoretical progress. On the other hand, theoretical research often looks at the world and at practical necessities in order to develop. This book is based on the idea that theory and practice are not two disjoint worlds and that knowledge is an interconnected matter. In particular, this book presents, without compromising on mathematical rigour, the main concepts of linear algebra from the viewpoint of the computer scientist, the engineer, and anybody who needs an in-depth understanding of the subject to let applied sciences progress. This book is oriented to researchers and graduate students in applied sciences, but is also organized as a textbook suitable for courses in mathematics.
Books on algebra are either extremely formal, and thus not intuitive enough for a computer scientist or engineer, or trivial undergraduate textbooks without adequate mathematical rigour in proofs and definitions. "Linear Algebra for Computational Sciences and Engineering" aims at maintaining a balance between rigour and intuition, attempting to provide the reader with examples, explanations, and practical implications of every concept introduced and every statement proved. Where appropriate, topics are also presented as algorithms with associated pseudocode. On the other hand, the book does not contain logical skips or intuitive explanations that replace proofs.
The "narration" of this book is thought to flow as a story of (a part of) mathematical thinking. This story affected, century after century, our brains and brought us to the modern technological discoveries. The origin of this evolution of knowledge is imagined to lie in the Stone Age, when some caveman or cavewoman felt the necessity to assess how many objects he or she was observing. This conceptualization, which happened at some point in our ancient history, was the beginning of mathematics, but also of logic, rational thinking, and, somehow, technology.
The story narrated in this book is organized into two parts composed of six chapters each, thus twelve in total. Part I illustrates basic topics in algebra, which could be suitable for an introductory university module in algebra, while Part II presents more advanced topics that could be suitable for a more advanced module. Furthermore, this book can be read as a handbook for researchers in applied sciences, as the division into topics allows an easy selection of a specific topic of interest.
Part I opens with Chap. 1, which introduces the basic concepts and definitions in algebra and set theory. Definitions and notation from Chap. 1 are used in all the subsequent chapters. Chapter 2 deals with matrix algebra, introducing definitions and theorems. Chapter 3 continues the discussion of matrix algebra by explaining the theoretical principles of systems of linear equations as well as illustrating some exact and approximate methods to solve them. Chapter 4, after having introduced vectors in an intuitive way as geometrical entities, progressively abstracts and generalizes this concept, leading to algebraic vectors, which essentially require the solution of systems of linear equations and are founded on matrix theory. The narration about vectors leads to Chap. 5, where complex numbers and polynomials are discussed. Chapter 5 gently introduces algebraic structures by providing a statement and interpretation of the fundamental theorem of algebra. Most of the knowledge achieved during the first five chapters is revisited in Chap. 6, where conics are introduced and explained. It is shown how a conic has, besides its geometrical meaning, an algebraic interpretation and is thus equivalent to a matrix.
In a symmetric way, Part II opens with an advanced introduction to algebra by illustrating basic algebraic structures in Chap. 7. Group and ring theories are introduced, as well as the concept of field, which constitutes the basis for Chap. 8, where vector spaces are presented. The theory of vector spaces is described from a theoretical viewpoint as well as with reference to its physical/geometrical meaning. These notions are then used within Chap. 10, which deals with linear mappings, endomorphisms, and eigenvalues. In Chaps. 8 and 10 the connections with matrix and vector algebra are self-evident. The narration takes a break in Chap. 11, where some logical instruments for understanding the final chapters are introduced. These concepts are the basics of complexity theory. While introducing these concepts, it is shown that algebra is not only an abstract subject. On the contrary, the implementation of algebraic techniques has major practical implications which must be taken into account. Some simple algebraic operations are revisited as instructions to be executed within a machine. Memory and operator representations are also discussed. Chapter 12 discusses graph theory and emphasizes the equivalence between a graph and a matrix/vector space. Finally, Chap. 13 introduces electrical networks as algebraic entities and shows how an engineering problem is the combination of multiple mathematical (in this case algebraic) problems. It is emphasized that the solution of an electric network incorporates graph theory, vector space theory, matrix theory, complex numbers, and systems of linear equations, thus covering the topics presented within all the other chapters.
I would like to express my gratitude to my long-standing friend Alberto Grasso, who inspired me with precious comments and useful discussions. As a theoretical physicist, he offered me a different perspective on algebra, which is more thoroughly explained in the Foreword, written directly by himself.
Furthermore, I would like to thank my colleagues in the Mathematics Team at De Montfort University, especially Joanne Bacon, Michéle Wrightham, and Fabio Caraffini, for support and feedback.
Last but not least, I wish to thank my parents, Vincenzo and Anna Maria, for their continued patience and encouragement during the writing of this book.
As a final note, I hope this book will be a source of inspiration for young minds. To the youngest readers, who are approaching mathematics for the first time with the present book, I would like to devote a thought. The study of mathematics is similar to running a marathon: it requires intelligence, hard work, patience, and determination, where the latter three are as important as the first. Understanding mathematics is a lifetime journey which does not contain short-cuts but can be completed only mile by mile, if not step by step. Unlike a marathon, the study of mathematics does not have a clear and natural finish line. However, it has the artificial finish lines that society imposes on us, such as an exam, the publication of an article, a funding bid, a national research exercise, etc. Like a marathon, the study of mathematics contains easier and harder stretches, comfortable downhill and nasty uphill bends. In a marathon, as in the study of mathematics, the most important things are focus on the personal path, passion towards the goal, and perseverance despite the difficulties.
This book is meant to be a training guide towards an initial understanding of linear and abstract algebra, and possibly a first or complementary step towards better research in Computational Sciences and Engineering.
I wish readers a fruitful and enjoyable time reading this book.
Contents (excerpt)
2 Matrices
  2.1 Numeric Vectors
  2.2 Basic Definitions About Matrices
  2.3 Matrix Operations
  2.4 Determinant of a Matrix
    2.4.1 Linear Dependence of Row and Column Vectors of a Matrix
    2.4.2 Properties of the Determinant
    2.4.3 Submatrices, Cofactors and Adjugate Matrices
    2.4.4 Laplace Theorems on Determinants
  2.5 Invertible Matrices
  2.6 Orthogonal Matrices
  2.7 Rank of a Matrix
    3.3.3 LU Factorization
    3.3.4 Equivalence of Gaussian Elimination and LU Factorization
  3.4 Iterative Methods
    3.4.1 Jacobi's Method
    3.4.2 Gauss-Seidel's Method
    3.4.3 The Method of Successive Over-Relaxation
    3.4.4 Numerical Comparison Among the Methods and Convergence Conditions
References
Index
About the Author
Ferrante Neri received his master's degree and a PhD in Electrical Engineering from the Technical University of Bari, Italy, in 2002 and 2007, respectively. In 2007, he also received a PhD in Scientific Computing and Optimization from the University of Jyväskylä, Finland. From the latter institution, he received the DSc degree in Computational Intelligence in 2010. He was appointed Assistant Professor in the Department of Mathematical Information Technology at the University of Jyväskylä, Finland, in 2007, and in 2009 he became a Research Fellow with the Academy of Finland. Dr. Neri moved to De Montfort University, United Kingdom, in 2012, where he was appointed Reader in Computational Intelligence and, in 2013, promoted to Full Professor of Computational Intelligence Optimization. In 2019, Ferrante Neri moved to the School of Computer Science, University of Nottingham, United Kingdom. His teaching expertise lies in mathematics for computer science. He has specific, long-lasting experience in teaching linear and abstract algebra. His research interests include algorithmics, hybrid heuristic-exact optimization, scalability in optimization and large-scale problems.