

International Journal of Artificial Intelligence,
Autumn 2008, Volume 1, Number A08
ISSN 0974-0635; Copyright © 2008 by IJAI, ISDER

Adaptive Neuro Fuzzy Inference System (ANFIS) with Error Backpropagation Algorithm using Mapping Function

Endra Joelianto (1) and Basuki Rahmat (2)

(1) Instrumentation and Control Research Group, Engineering Physics Study Program, Institut Teknologi Bandung, Bandung 40132, Indonesia. E-mail: ejoel@tf.itb.ac.id
(2) Department of Engineering Informatics, Universitas Pembangunan Nasional Veteran, Surabaya 60294, Indonesia

ABSTRACT
Adaptive Neuro Fuzzy Inference System (ANFIS) is a class of adaptive networks which
enjoys many of the advantages claimed by neural networks (NNs) and the linguistic
interpretability of Fuzzy Inference Systems (FIS). The fixed membership functions used in
the backward pass lead to a problem in minimizing discrepancy between the actual
outputs and the desired outputs. To overcome this problem, we propose a method to
modify ANFIS algorithm in the backward pass by using a mapping function. The function
maps the inputs to all corrected values obtained via error correction rules in the first layer
by means of an interpolation of the inputs and the corrected values in the first layer.
Simulation results demonstrate the effectiveness of the proposed modified ANFIS in
significantly reducing the discrepancy between the actual outputs and the desired outputs.

KEY WORDS: Adaptive neuro fuzzy inference system (ANFIS); fuzzy systems; neural networks;
backpropagation algorithm; mapping function
Mathematics Subject Classification: 93C42, 94D05

1. INTRODUCTION

Adaptive Neuro Fuzzy Inference System (ANFIS) as developed by Jang et al. (1997) is a class of
adaptive networks that are functionally equivalent to fuzzy inference systems, where the parameters
of fuzzy inference systems are updated by neural networks from a set of training data. ANFIS enjoys
many of the advantages claimed by neural networks (NNs) and the linguistic interpretability of Fuzzy
Inference Systems (FIS), wherein both NNs and FIS play active roles in an effort to reach specific
goals. The adaptive capability of ANFIS makes it almost directly applicable to adaptive control and
learning control. In fact, ANFIS can replace almost any neural network in a control system and can
perform the same function. The active role of neural networks in signal processing, as shown in
Widrow and Winter (1988) and Kosko (1991), also suggests similar application for ANFIS. The
nonlinearity and structural knowledge representation of ANFIS are the primary advantages over
classical linear approaches in adaptive filtering and adaptive signal processing, such as identification,
inverse modelling, predictive coding, adaptive channel equalization, adaptive interference canceling
and so on. Recent applications of ANFIS can be found, for example, in Xu et al. (2007) and Giripunje
and Bawane (2007). Moreover, the architecture of ANFIS has been used to develop a new fuzzy logic
controller chip by Aminifar and Yosefi (2007).


ANFIS features a hybrid learning algorithm which consists of automatic tuning of a Sugeno-type inference system and generation of the output as a weighted linear combination of the consequents.
The hybrid learning algorithm consists of two stages, i.e., feedforward pass to identify the consequent
parameters by using the learning mechanism of FIS based on the neural architecture and Least-
squares Estimator, and backward pass to update the premise parameters by the error
backpropagation algorithm. However, this hybrid learning has a problem in minimizing the discrepancy between the actual output and the desired output due to the fixed membership functions in the backward pass. Several methods to improve the performance of ANFIS were proposed by Jovanovic et al. (2004), Panella and Gallo (2005), and Su and Zhao (2007).

In ANFIS, the membership functions (bell functions) are expected to map all inputs by changing the parameters of the bell functions. Ideally, all inputs can be mapped to produce the desired outputs. Unfortunately, when the inputs vary, the desired outputs are poorly approximated by the actual outputs because of the limitations of fitting the parameters of a fixed, finite number of fuzzy membership functions. As discussed by Bilgic and Turksen (1999), the fuzzy membership function is the building block of fuzzy logic systems and has many possible interpretations. From the viewpoint of fuzzy modelling (Frantti, 2001), the fuzzy membership function determines how much information is extracted from the given data in highly nonlinear systems. In fact, the membership function in fuzzy logic is an extension of classical interpolation (Castro, 1999). Therefore, there is an opportunity to extend the form of the membership functions from predefined functions with well-known mathematical formulas to more complex functions, in order to incorporate the richness of the data in the fuzzy logic system.

In this paper, we propose a method to overcome the limitation in extracting highly nonlinear data that arises from using fixed membership functions. The proposed method modifies the ANFIS algorithm in the backward pass and introduces a different kind of membership function called a mapping function. The membership functions are still used in the feedforward pass. However, in the backward pass, a mapping function is applied to map the inputs to all corrected errors obtained via the error correction rules in the first layer. The mapping function is then found by means of an interpolation technique from the inputs and the corrected errors in the first layer.

2. LEARNING ALGORITHM OF MODIFIED ANFIS

The standard ANFIS structure represents a Sugeno fuzzy model. For simplicity, it is assumed that the considered fuzzy inference system has two inputs x and y and one output f. A common rule set for a first-order Sugeno fuzzy model with two fuzzy if-then rules is as follows:

Rule 1: If x is A1 and y is B1, then f1 = p1 x + q1 y + r1

Rule 2: If x is A2 and y is B2, then f2 = p2 x + q2 y + r2

Figure 1.a shows the reasoning mechanism for this Sugeno model, and Figure 1.b shows the corresponding standard ANFIS architecture, where nodes of the same layer have similar functions. A detailed explanation of the ANFIS architecture and its mechanism can be found in Jang et al. (1997).

The important part of the modified ANFIS is the modification of the error correction rules of error backpropagation (EBP) by using a mapping function to replace the membership function of the standard ANFIS. The learning system of the modified ANFIS is similar to that of the standard ANFIS. It uses a hybrid learning algorithm which consists of two steps, i.e. the forward pass, which uses the fuzzy inference mechanism based on the neural network architecture and the Least-Squares Estimator (LSE), and the backward pass, which updates the premise parameters by applying the modified error backpropagation algorithm.

Figure 1 (a) A two-input first-order Sugeno fuzzy model; (b) the equivalent ANFIS architecture, Jang et al. (1997)
2.1 Forward Pass

The forward pass uses the same architecture as in Jang et al. (1997), see Figure 1, with two inputs and one output. For convenience, we introduce different notation as shown in Figure 2.

Figure 2 Correction rule of the modified EBP (inputs x and y feed the Layer 1 nodes n1–n4; the network output n11 is corrected by the terms d7–d11 derived from the error ε11)

The mechanism in the forward pass can then be explained as follows:

Layer 1:

In this layer, the well-known bell function is used as the membership function, which is given by the following equation

$\mu_A(x) = \dfrac{1}{1 + \left| \dfrac{x - c_i}{a_i} \right|^{2 b_i}}$    (1)

The membership function has parameters {ai, bi, ci}, i = 1, 2, 3, 4, which are predetermined by selecting parameter values. Each output of this node is labeled by a. Accordingly, the outputs are denoted by n1a, n2a, n3a, and n4a. The symbol a is used in order to differentiate it from the new symbol b (after the correction) that will be used later in the backward pass.
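To make the Layer 1 computation concrete, the following is a minimal sketch (Python/NumPy, not from the original paper) of the bell membership function in equation (1); the parameter values below are illustrative placeholders only.

```python
import numpy as np

def bell_mf(x, a, b, c):
    """Bell membership function of equation (1): 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Example: evaluate the four Layer-1 node outputs for one input pair (x, y).
# The parameter triples {a_i, b_i, c_i} are arbitrary illustrative values.
params = [(1.0, 2.0, -0.5), (1.0, 2.0, 0.5),   # membership functions A1, A2 on x
          (1.0, 2.0, -0.5), (1.0, 2.0, 0.5)]   # membership functions B1, B2 on y
x, y = 0.3, -0.2
n1a, n2a = bell_mf(x, *params[0]), bell_mf(x, *params[1])
n3a, n4a = bell_mf(y, *params[2]), bell_mf(y, *params[3])
```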

Layer 2:

In this layer, fuzzy logic AND is used in the node function. The outputs of this layer follow the
equations

n5a = min(n1a, n3a)

n6a = min(n2a, n4a)    (2)

Layer 3:

The input signals of this layer are normalized. Let ntot_a = n5a + n6a, then the normalization is given by

n7a = n5a / ntot_a

n8a = n6a / ntot_a    (3)

Layer 4:

Arranging the incoming signals, we obtain the matrix A which is of the form

A = [(n7a x) (n7a y) n7a (n8a x) (n8a y) n8a]    (4)

By means of the LSE method, we obtain the consequent parameters P = [p1, q1, r1, p2, q2, r2] by using the following equation

$P = [A^T A]^{-1} A^T U$    (5)

where U is the desired output of the controller. The consequent parameters P = [p1, q1, r1, p2, q2, r2] are then used to compute f1 and f2 by using the following equations

f1 = p1 x + q1 y + r1

f2 = p2 x + q2 y + r2    (6)

After that, the outputs of the nodes n9 and n10 are calculated by using the equations

n9a = n7a f1

n10a = n8a f2    (7)

Layer 5:
The output of this layer is the summation of the input signals given by

n11a = n9a + n10a    (8)
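As a concrete illustration of equations (2)–(8), the sketch below (Python/NumPy, an interpretation of the description above rather than the authors' code) computes the forward pass over a batch of training samples and identifies the consequent parameters; numpy.linalg.lstsq is used as a numerically robust equivalent of the normal-equation form (5).

```python
import numpy as np

bell_mf = lambda x, a, b, c: 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))  # eq. (1)

def forward_pass(X, Y, U, mf_params):
    """One forward pass over a batch. X, Y: input vectors; U: desired outputs;
    mf_params: (a, b, c) triples for A1, A2, B1, B2 (illustrative layout)."""
    A1, A2, B1, B2 = mf_params
    n1a, n2a = bell_mf(X, *A1), bell_mf(X, *A2)          # Layer 1
    n3a, n4a = bell_mf(Y, *B1), bell_mf(Y, *B2)
    n5a = np.minimum(n1a, n3a)                           # Layer 2, eq. (2)
    n6a = np.minimum(n2a, n4a)
    ntot_a = n5a + n6a                                   # Layer 3, eq. (3)
    n7a, n8a = n5a / ntot_a, n6a / ntot_a
    # Layer 4: regressor matrix of eq. (4), one row per training sample.
    A = np.column_stack([n7a * X, n7a * Y, n7a, n8a * X, n8a * Y, n8a])
    P, *_ = np.linalg.lstsq(A, U, rcond=None)            # eq. (5)
    p1, q1, r1, p2, q2, r2 = P
    f1 = p1 * X + q1 * Y + r1                            # eq. (6)
    f2 = p2 * X + q2 * Y + r2
    n9a, n10a = n7a * f1, n8a * f2                       # eq. (7)
    n11a = n9a + n10a                                    # Layer 5, eq. (8)
    return P, n11a, (n7a, n8a, f1, f2, ntot_a)
```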

2.2 Backward Pass


The resulting error of the forward pass is then propagated back by using the error correction rule of the modified EBP. The learning process of the modified ANFIS is shown in Figure 2. The symbol ε11

defines the error between the output of the network and the desired output dk. The sum of the squared errors is given by

$E_p = \sum_{k=1}^{N(l)} \left( d_k^{\,p} - x_{l,k}^{\,p} \right)^2$    (9)

In this case, we define Ep = ε11. The value xl in this layer is given by n11 and dk = U. Hence, we have

ε11 = 2(U − n11a)    (10)

Next, d11 is defined as follows

d11 = ε11 / 2 = U − n11a    (11)

The output of the node n11 then becomes

n11b = n11a + d11    (12)

In fact, we have

n11b = n9b + n10b since n11a = n9a + n10a

If we define

n9b = n9a + d9 and n10b = n10a + d10,

we then obtain

d11 = d9 + d10    (13)

Multiplying the left-hand side of (13) by (f1 + f2)/(f1 + f2), this leads to

$\dfrac{d_{11} f_1}{f_1 + f_2} + \dfrac{d_{11} f_2}{f_1 + f_2} = d_9 + d_{10}$    (14)

Since n9a = n7a f1 and n10a = n8a f2, after correction we have n9b = n7b f1 and n10b = n8b f2. As a result, we obtain

n9a + d9 = (n7a + d7) f1

n10a + d10 = (n8a + d8) f2

Next, from the ntot_a of the forward pass, we write the new ntot_b as follows

ntot_b = ntot_a + d_tot    (15)

where d_tot is arbitrary and obtained from the experimental data. Suppose d_tot = 0; this implies ntot_b = ntot_a. From (3), we write the other output nodes of Layer 2 as follows

n5b1 = (n7a + d7) ntot_b

n6b1 = (n8a + d8) ntot_b    (16)

In this layer, logic AND is applied to process the outputs of Layer 1; more specifically, the minimum value of the input signals is selected. Since in Layer 2 we already have n5b1 and n6b1, it is important that the outputs of these nodes satisfy n5b = n5b1 and n6b = n6b1. A simple way is to split n5b1 and n6b1 into two parts each. We then add an arbitrary value to one part, so that it has a higher value than the other part. As a result, this part will not be chosen in Layer 2. The result of this manipulation is the original values of n1b to n4b, after the addition of the arbitrary value, which belong to the output nodes of Layer 1. Next, all inputs are mapped to the corrected outputs of Layer 1. The mapping function then becomes the membership function of the learning mechanism of the modified ANFIS.
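The sketch below (Python/NumPy) is one interpretation of this backward pass, not the authors' code: it computes the corrected Layer 2 values of equations (10)–(16) and then builds the mapping function by interpolating between the training inputs and their corrected values; linear interpolation via numpy.interp stands in for whatever interpolation technique is chosen, and the termwise split of d11 in equation (14) is an assumption.

```python
import numpy as np

def corrected_layer2(U, n11a, f1, f2, n7a, n8a, ntot_a, d_tot=0.0):
    """Corrected values n5b1, n6b1 following eqs. (10)-(16).
    The split of d11 into d9 and d10 reads eq. (14) termwise (assumption)."""
    d11 = U - n11a                       # eq. (11)
    d9 = d11 * f1 / (f1 + f2)            # eq. (14), termwise split
    d10 = d11 * f2 / (f1 + f2)
    d7, d8 = d9 / f1, d10 / f2           # from n9a + d9 = (n7a + d7) f1, etc.
    ntot_b = ntot_a + d_tot              # eq. (15), d_tot = 0 as in the paper
    n5b1 = (n7a + d7) * ntot_b           # eq. (16)
    n6b1 = (n8a + d8) * ntot_b
    return n5b1, n6b1

def build_mapping_function(x_train, corrected_values):
    """Mapping function interpolating corrected Layer-1/2 values over the inputs."""
    order = np.argsort(x_train)
    xs = np.asarray(x_train)[order]
    ys = np.asarray(corrected_values)[order]
    return lambda x_new: np.interp(x_new, xs, ys)
```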

3. MODIFIED ANFIS CONTROLLER FOR INVERSE CONTROL

3.1 Inverse Control Problem

The control design is aimed at producing a controller output that leads to the satisfaction of the design requirements by the plant output. The block diagram of the inverse control problem is shown in Figure 3. Inverse dynamic modelling is used to obtain an appropriate controller output by fixing the output and then finding the input.

Figure 3 Inverse control problem with the modified ANFIS controller (the controller maps xd(k+1) and the delayed state x(k) to the plant input U(k); z⁻¹ denotes a one-step delay)

Consider the plant in the following form

x(k + 1) = f(x(k), u(k))    (17)

where x(k + 1) is the state at time k + 1, x(k) is the state at time k, and u(k) is the control signal at time k. In general, the state at time k + n can be written as

x(k + n) = F(x(k), U)    (18)

where n denotes the order of the plant, F is a multiple composite function of f, and U is the control action from k to k + n. Equation (18) shows that if the plant in equation (17) is driven by the control input u from time k to k + n − 1, the state will move from x(k) to x(k + n).

Suppose that the inverse dynamics of the plant do exist; then U can be expressed as a function of x(k) and x(k + n)

U = G(x(k), x(k + n))    (19)

The design of the ANFIS controller requires that the inverse dynamics G be obtained by using neuro-fuzzy systems, where the data pairs [x(k) x(k + 1) U] are applied in the training.

Figure 4 Learning process (the ANFIS identifier is trained from the plant states x(k) and x(k+1) so that its output matches the plant input; the difference is the identification error eu)

Figure 4 shows the learning process of the ANFIS identifier. At the end of the learning process, the neuro-fuzzy system has a rule base as follows:

If $x_1(k)$ is $A_1^l$ and ... and $x_n(k)$ is $A_n^l$ and $x_1(k+1)$ is $B_1^l$ and ... and $x_n(k+1)$ is $B_n^l$,

then $u^l = a_1^l x_1(k) + \ldots + a_n^l x_n(k) + b_1^l x_1(k+1) + \ldots + b_n^l x_n(k+1) + c^l$

where $l = 1, 2, \ldots, M$; $A_i^l$ and $B_i^l$ are fuzzy membership functions, $a_i^l$, $b_i^l$ and $c^l$ are parameters obtained from the learning process, and M is the number of rules. The learning of the neuro-fuzzy system is carried out by minimizing the network error using the following objective

$\| e(k) \|^2 = \| U(k+1) - \hat{U}(k+1) \|^2 = \| U - G(x(k), x(k+1)) \|^2$    (20)

However, this minimum value does not guarantee that the system error

$\| \hat{e}(k) \|^2 = \| x_d(k) - x(k) \|^2$    (21)

will be minimized.

3.2 Simulation

Given the nonlinear plant model by Jang et al. (1997) as follows

$y(k+1) = \dfrac{y(k)\, u(k)}{1 + y^2(k)} + \tan(u(k))$    (22)
The data pairs are obtained by applying random input data to the mathematical model of the plant in (22). Figure 5 shows the training data. Next, the weights resulting from the learning process are applied to the standard ANFIS controller (Jang et al., 1997) and to the modified ANFIS. Figure 6 shows the initial and final membership functions, Figure 7 shows the error between the desired and actual outputs, and Figure 8 shows the control input.
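A minimal sketch of how such training pairs could be generated from the plant (22) with a random input sequence (Python/NumPy; the horizon and input range are illustrative assumptions, not values taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(y, u):
    """Nonlinear plant of equation (22)."""
    return y * u / (1.0 + y ** 2) + np.tan(u)

# Drive the plant with a random input sequence and record (y(k), y(k+1), u(k))
# pairs for inverse-model training: the network learns u(k) from (y(k), y(k+1)).
N = 100                                   # illustrative horizon
u = rng.uniform(-0.5, 0.5, size=N)        # illustrative input range
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = plant(y[k], u[k])
training_pairs = np.column_stack([y[:-1], y[1:], u])
```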

The simulation results of the modified ANFIS are shown in Figures 9, 10 and 11, respectively. It can be seen in Figure 9 that the mapping functions are very different from the initial bell membership functions. The mapping functions show more rapid changes in order to incorporate more active dynamics. Compared to the errors of the standard ANFIS, the errors obtained by using the modified ANFIS, shown in Figure 10, are very small, with almost the same magnitude of control input.

Figure 5 Pair of learning data: the input u(k) (top) and the output y(k) (bottom) over 100 time steps

Figure 6 Initial (a) and final (b) membership functions of the standard ANFIS for the inputs y(k) and y(k+1)
Figure 7 Error between the desired and actual output of the system using the standard ANFIS controller

Figure 8 The control input using the standard ANFIS

Figure 9 Initial (a) and final (b) mapping functions of the modified ANFIS for the inputs y(k) and y(k+1)
Figure 10 Error between the desired and actual output of the system using the modified ANFIS controller (errors on the order of 10⁻¹⁶)

Figure 11 The control input using the modified ANFIS

4. APPLICATION IN FED-BATCH FERMENTATION PROCESS CONTROL

4.1 Process Model

Fermentation is a slow process in which microorganisms utilize available nutrients (substrates) for
growth, biomass maintenance and product formation. One typical fermentation process is fermentation using S. cerevisiae, known as baker's yeast. The mathematical model of the fermentation
process is described by a nonlinear state space equation

$\dot{x} = f(x, u, p) + \xi(x), \qquad x(0) = x_0$    (23)

where $x(t) \in \mathbb{R}^5$ denotes the state at time t, $u(t) \in \mathbb{R}^2$ denotes the control at time t, and the parameter vector is denoted by $p \in \mathbb{R}^4$. The unmodelled dynamics of the process in (23), whose exact form is not known, are denoted by ξ. The elements of the state vector (i.e. the state variables) xj(·), j = 1, 2, ..., 5, are defined as follows: x1 is the concentration of microorganisms, x2 is the concentration of substrate, x3 is the concentration of inhibitory substance, x4 is the concentration of ethanol, and x5 is the volume of the fermentation broth. The elements of the control vector, u1(·) and u2(·), are the substrate feed and the influent substrate concentration, respectively. Let the process start at time t = 0 and terminate at time T, where T is generally of the order of a few hours to a few days. In this problem, T is assumed to be 15 hours.
The model of a baker’s yeast fed-batch fermentation process used in this paper is taken from
Boskovic and Narendra (1995) as follows:

$\dot{x}_1 = \mu_m \frac{x_1 x_2}{k_s + x_2} \frac{1}{1 + x_3^2} - \frac{x_1}{x_5} u_1 + 0.48\, x_1 \xi_1(x),$

$\dot{x}_2 = -\frac{\mu_m}{k_y} \frac{x_1 x_2}{k_s + x_2} \frac{1}{1 + x_3^2} + \frac{u_2 - x_2}{x_5} u_1 - k_m x_1 - \frac{x_1}{0.51} \xi_2(x),$

$\dot{x}_3 = 0.0023\, x_1 + 0.007\, \mu_m \frac{x_1 x_2}{k_s + x_2} \frac{1}{1 + x_3^2} - \frac{x_3}{x_5} u_1,$    (24)

$\dot{x}_4 = x_1 [\xi_2(x) - \xi_1(x)] - \frac{x_4}{x_5} u_1,$

$\dot{x}_5 = u_1,$
In equations (24), the unknown process parameters μm, ks, ky and km are assumed to vary with time as described by the following equations

μm(t) = 0.41 + 0.01 cos(2πt/5),

ks(t) = 0.03 − 0.005 cos(2πt/15),    (25)

ky(t) = 0.4 + 0.1 cos(2πt/10),

km(t) = 0.04 − 0.01 cos(2πt/20).

The nonlinearities ξ1(·) and ξ2(·) correspond to the ethanol consumption and formation rates, respectively. These nonlinearities are generally not known precisely, and represent the unmodelled dynamics of the system. Following Boskovic and Narendra (1995), these nonlinearities are assumed to have the form
$\xi_1(x) = \begin{cases} S_e(x_2) & (x_2 > 0.28 \text{ and } x_4 > 0), \text{ or } S_e(x_2) > 0, \\ 0 & x_2 \le 0.28 \text{ and } x_4 = 0, \\ 0 & x_2 \le 0.28 \text{ or } S_e(x_2) \le 0, \end{cases}$    (26)

$\xi_2(x) = \begin{cases} v_e(x_2, x_3) & x_2 > 0.28 \text{ or } v_e(x_2, x_3) > 0, \\ 0 & x_2 \le 0.28 \text{ or } v_e(x_2, x_3) \le 0, \text{ or } x_4 = 0, \end{cases}$    (27)

where

$v_e(x_2, x_3) = 0.138 - 0.062\, x_3 - \dfrac{0.0028}{x_2 - 0.28},$

$S_e(x_2) = 0.155 + 0.123 \ln x_2.$    (28)

The model (24) has attracted a lot of attention because it shows a high yield of biomass. In Boskovic and Narendra (1995), it has been shown that the high yield can be obtained by the so-called "nominal regime", which is characterized by a low concentration of ethanol. In this paper, this result will be used as the optimal feed rate profile.
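The time-varying parameters (25) and the switching nonlinearities (26)–(28) can be transcribed directly, for example as in the sketch below (Python/NumPy). It follows the reconstruction of the equations given above, and the overlapping case conditions in (26)–(27) are resolved in one plausible way, so it should be read as an illustrative sketch rather than a faithful reproduction of Boskovic and Narendra's model.

```python
import numpy as np

def process_parameters(t):
    """Time-varying process parameters of equation (25)."""
    mu_m = 0.41 + 0.01 * np.cos(2 * np.pi * t / 5)
    k_s  = 0.03 - 0.005 * np.cos(2 * np.pi * t / 15)
    k_y  = 0.4  + 0.1   * np.cos(2 * np.pi * t / 10)
    k_m  = 0.04 - 0.01  * np.cos(2 * np.pi * t / 20)
    return mu_m, k_s, k_y, k_m

def xi_1(x2, x4):
    """Ethanol consumption rate, equations (26) and (28); requires x2 > 0."""
    S_e = 0.155 + 0.123 * np.log(x2)
    return S_e if ((x2 > 0.28 and x4 > 0) or S_e > 0) else 0.0

def xi_2(x2, x3, x4):
    """Ethanol formation rate, equations (27) and (28)."""
    if x2 <= 0.28 or x4 == 0:
        return 0.0
    v_e = 0.138 - 0.062 * x3 - 0.0028 / (x2 - 0.28)
    return v_e if v_e > 0 else 0.0
```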

4.2 Control Scheme

The objective of the control is to determine u1(t) and u2(t) that maximize the production of microorganisms in the interval [0, T] while maintaining a minimum concentration of ethanol x4(t), as it has a deleterious effect on the final product. This is a difficult control problem since both the nonlinear nature of the plant and the uncertainty caused by the unmodelled dynamics of the system (ξ1(·) and ξ2(·)) are encountered in the process. Specifically, the control design is aimed at producing a controller output that results in the satisfaction of the design requirements by the plant output. In this paper, we follow the method developed by Okada et al. (1981) and Takamatsu et al. (1985), which has also been used in Boskovic and Narendra (1995). In this approach, the feed rate does not need to be obtained via the solution of an optimal control problem. Instead, it is found by minimizing the effects of the unmodelled dynamics.

From (24), as is shown in Boskovic and Narendra (1995), the problem comes from the presence of uncertainty, which arises when x2(t) exceeds its critical value x2* = 0.28. When x2(0) is selected to be 0.28 and the control input u1(t) = u*(t) is determined such that dx2/dt = 0 over the time interval [0, T], we obtain ξ(x(t)) = 0. It has also been found that choosing u2* = 200, the parameter vector p = p* = [0.42 0.025 0.5 0.03] and the initial vector x(0) = x*(0) = [10 0.28 0 0 10]^T yields a state trajectory x(t) that satisfies the optimal performance of the process, denoted by x*(t). The process described by (24) with p = p*, x(t) = x*(t), u2 = 200 and u1 = u1*, where

$u_1^*(t) = \dfrac{x_5^*(t)}{200 - x_2^*(t)} \left[ 0.03\, x_1^*(t) + \dfrac{0.42\, x_1^*(t)\, x_2^*(t)}{0.5\,\big(0.025 + x_2^*(t)\big)}\, \dfrac{1}{1 + x_3^{*2}(t)} \right],$

is referred to as the "nominal model". Figure 12 shows the simulation result of the system (24) operated in the so-called nominal regime. Notice that the obtained biomass x1(T) at the final time T is 8 times the initial biomass x1(0).
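For illustration, the nominal feed rate can be computed directly from the nominal state trajectory. A minimal sketch (Python), using the nominal parameter values p* = [0.42, 0.025, 0.5, 0.03] and u2 = 200 from the text:

```python
def nominal_feed_rate(x1, x2, x3, x5):
    """Nominal feed rate u1*(t), obtained by setting dx2/dt = 0 in (24)
    with p = p* = [0.42, 0.025, 0.5, 0.03] and u2 = 200."""
    mu_m, k_s, k_y, k_m = 0.42, 0.025, 0.5, 0.03
    growth_term = (mu_m / k_y) * x1 * x2 / (k_s + x2) / (1.0 + x3 ** 2)
    return x5 / (200.0 - x2) * (k_m * x1 + growth_term)

# Example at the nominal initial state x*(0) = [10, 0.28, 0, 0, 10]:
u1_initial = nominal_feed_rate(x1=10.0, x2=0.28, x3=0.0, x5=10.0)
```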

Figure 12 Nominal model: trajectories of the states x1(t)–x5(t) and the nominal feed rate u1(t) over the 15-hour horizon

The applied control system uses the inverse dynamics method shown in Figure 3. The modified ANFIS controller is aimed at obtaining the appropriate controller output by fixing the output and then finding the input. In this paper, the fixed output is the output produced by the nominal model operated in the nominal regime. The arranged training data pairs are obtained from the simulation results in Figure 12. For the training purposes, the state variables are arranged as follows:

[x1(k) x1(k+1) . . . x5(k) x5(k+1) u(k)].
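A short sketch of this arrangement (Python/NumPy), assuming the simulated trajectory is available as an array X of shape (N+1, 5) and the input sequence as an array U of length N (these names are assumptions for illustration):

```python
import numpy as np

def arrange_training_data(X, U):
    """Build rows [x1(k), x1(k+1), ..., x5(k), x5(k+1), u(k)] from a state
    trajectory X of shape (N+1, 5) and an input sequence U of length N."""
    cols = []
    for j in range(5):            # interleave x_j(k) and x_j(k+1) column-wise
        cols.append(X[:-1, j])
        cols.append(X[1:, j])
    cols.append(np.asarray(U))
    return np.column_stack(cols)  # resulting shape: (N, 11)
```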

4.3 Architecture of Modified ANFIS

The architecture of the ANFIS controller with the modified learning algorithm for the fermentation control problem is shown in the following figure.
Figure 13 Modified ANFIS structure (ten inputs in1 = x1(k), in2 = x1(k+1), ..., in9 = x5(k), in10 = x5(k+1); Layer 1 nodes n1–n20, Layer 2 nodes n21–n30, Layer 3 nodes n31–n40, Layer 4 nodes n41–n50, and the single Layer 5 output node n51 producing u1(k))

The learning algorithm is given in the following.

4.3.1 Forward Pass

The mechanism in the forward pass can be explained as follows:

Layer 1:

The membership function used in this layer (the bell function) is given by

$\mu_A(x) = \dfrac{1}{1 + \left| \dfrac{x - c_i}{a_i} \right|^{2 b_i}}$    (29)

The parameters of the membership functions {ai, bi, ci}, i = 1, 2, 3, 4, are predetermined. Every output of this node is labeled by a, so the outputs are given by n1a to n20a.

Layer 2:

Fuzzy logic AND is used as the node function in this layer. The outputs of this layer are given by

n21a = min(n1a, n3a)

⋮

n30a = min(n18a, n20a)    (30)

Layer 3:

The input signals of this layer are normalized; we then have

n31a = n21a / (n21a + ... + n30a)

⋮

n40a = n30a / (n21a + ... + n30a)    (31)

Layer 4:

From the incoming signals, we obtain the components of matrix A as follow


A1 = [(n31a in1) (n31a in2) (n32a in1) (n32a in2) (n33a in3) (n33a in4)];

A2 = [(n34a in3) (n34a in4) (n35a in5) (n35a in6) (n36a in5) (n36a in6)];

A3 = [(n37a in7) (n37a in8) (n38a in7) (n38a in8) (n39a in9) (n39a in10)];

A4 = [(n40a in9) (n40a in10)]

The matrix A is then given by

A = [A1 A2 A3 A4] (32)

By using the Least Squares Estimator (LSE) method, we then obtain the consequent parameters P by using the following equation

$P = [A^T A]^{-1} A^T U_1$    (33)

where U1 is the desired output of the controller. Next, we obtain the parameters P = [p1 ... p20] and

f1 = p1 in1 + p2 in2;

f2 = p3 in1 + p4 in2;

f3 = p5 in3 + p6 in4;

f4 = p7 in3 + p8 in4;

f5 = p9 in5 + p10 in6;

f6 = p11 in5 + p12 in6; (34)

f7 = p13 in7 + p14 in8;

f8 = p15 in7 + p16 in8;

f9 = p17 in9 + p18 in10;

f10 = p19 in9 + p20 in10;

After that, the outputs of the nodes n41a ... n50a are given by

n41a = n31a f1; n42a = n32a f2;

n43a = n33a f3; n44a = n34a f4;

n45a = n35a f5; n46a = n36a f6; (35)

n47a = n37a f7; n48a = n38a f8;

n49a = n39a f9; n50a = n40a f10.

Layer 5:

The output of this layer is the summation of all incoming signals as follows

n51a = n41a+n42a+n43a+n44a+n45a+n46a+n47a+n48a+n49a +n50a (36)

4.3.2 Backward Pass

We define the error between the output of the network and the desired output dk. The sum of the squared errors is given by the following quadratic error

$E_p = \sum_{k=1}^{N(l)} \left( d_k^{\,p} - x_{l,k}^{\,p} \right)^2$    (37)

where we have defined Ep = ε51; xl in this layer is n51 and dk = U1. Hence, we have

ε51 = 2(U1 − n51a)    (38)

Next, d51 is defined as follows

d51 = ε51 / 2 = U1 − n51a    (39)

The output of the node n51 becomes

n51b = n51a + d51 (40)

In fact, n51b = n41b + ... + n50b since n51a = n41a + ... + n50a. All mechanisms of the forward pass from Layer 1 to Layer 5 are used in the same manner in the backward pass to produce corrected values (labeled b after the addition of the correction factors di, i = 21, ..., 51).

Further, by extending the correction rule of the backward pass of the modified ANFIS learning algorithm (cf. Jang et al. (1997)) to ten inputs, the corrected value of each node is obtained. Finally, once the corrected values in Layer 1 are obtained, all inputs are mapped to these values by using the interpolation technique. The mapping function then becomes the membership function of the learning mechanism of the modified ANFIS.

4.5 Simulation

To show the effectiveness of the modified ANFIS controller, a simulation is carried out using the experimental data of Joelianto et al. (2002). The data were obtained from an experiment performed in a fermentor with the following physical conditions: temperature 30°C, pH 4.5, air flow speed 1 vvm, initial volume 1.5 L, feed concentration (u2) 10 g/L, feed speed (u1) 0.104 g/L, sampling time 2 hours, and sampling volume 20 mL. From the 1st hour until the 14th hour, the baker's yeast was grown in batch condition, and the fed-batch condition started at the 14th hour and lasted until the 36th hour. In this simulation, the modified ANFIS controller is trained using the state variables from the experimental data. Next, the modified ANFIS finds the controller output that yields an output response following the response of the nominal model in Figure 12.

Figure 14 shows the initial membership functions and the final mapping functions of the modified ANFIS learning process. It is shown that, in the fermentation control problem, the resulting ten membership functions of the modified ANFIS are not very different from the membership functions of the standard ANFIS. The upper values of the membership functions are slightly higher than those of the standard ANFIS. The responses of the state variables and the control signal are shown in Figure 15. The simulation results show that the output responses follow the given output profiles, but these responses are achieved by an oscillatory control input. The average percentage error is given in Table 1.

Figure 14 (a) Initial membership functions and (b) final mapping functions of the modified ANFIS learning process, for the inputs x1(k), x1(k+1), ..., x5(k), x5(k+1)
Figure 15 Simulation results: actual versus desired trajectories of x1(t)–x4(t), the volume x5(t), and the control input u1(t)

Table 1 The average percentage error (APE)

Plant output    APE (%)
x1(t)           1.5670
x2(t)           7.0361 × 10⁻⁷
x3(t)           3.3392
x4(t)           0
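The paper does not state the exact formula used for the average percentage error; one common definition, used here purely as an illustrative assumption, averages the absolute error relative to the desired trajectory:

```python
import numpy as np

def average_percentage_error(actual, desired, eps=1e-12):
    """One common APE definition (an assumption, not necessarily the paper's):
    mean of |desired - actual| / |desired|, expressed in percent."""
    actual, desired = np.asarray(actual), np.asarray(desired)
    return 100.0 * np.mean(np.abs(desired - actual) / (np.abs(desired) + eps))
```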

5. CONCLUSION

A mapping function has been proposed as an alternative to the well-known predefined membership functions in the fuzzy logic systems of ANFIS. The effectiveness of the proposed modified ANFIS compared to the standard ANFIS was shown by the simulation results in solving two different control problems. Unlike the standard ANFIS, the modified ANFIS gives a final mapping function as the result of the learning process in the backward pass. The simulations showed that the form of the mapping functions is determined by the characteristics of the given data.

REFERENCES

Aminifar, S., and Yosefi, Gh., 2007, Application of Adaptive Neuro Fuzzy Inference System (ANFIS) in
Implementing of New CMOS Fuzzy Logic Controller (FLC) Chip, Numerical Analysis and Applied
Mathematics: International Conference of Numerical Analysis and Applied Mathematics. AIP
Conference Proceedings, 936, 49-53.
Bastin, G., and van Impe, J.F., 1995, Nonlinear and Adaptive Control in Biotechnology, European Journal of Control, 1, 37-53.
Bilgic, T., and Turksen, I.B., 1999, Measurement of Membership Functions: Theoretical and Empirical Work, in Dubois, D., and Prade, H. (Eds.), Handbook of Fuzzy Sets and Systems, Vol. 1, Fundamentals of Fuzzy Sets, Kluwer Academic Publishers, 195-202.
Boskovic, J.D., and Narendra, K.S., 1995, Comparison of Linear, Nonlinear and Neural Network-
based Adaptive Controllers for A Class of Fed-batch Fermentation Processes, Automatica, 31, 817-
840.
Castro, J.L.,1999, The Limits of Fuzzy Logic, Mathware and Soft Computing, 6, 155-161.
Frantti, T., 2001, Timing of Fuzzy Membership Functions from Data, Unpublished Ph.D. Dissertation,
University of Oulu, Linnanmaa, Faculty of Technology.
Giripunje, S., and Bawane, N., 2007, ANFIS Based Emotions Recognition in Speech, in Knowledge-Based Intelligent Information and Engineering Systems, Berlin/Heidelberg: Springer, 77-84.
Jang, J.S.R., Sun, C.T., and Mizutani, E., 1997, Neuro-Fuzzy and Soft Computing, Prentice Hall Int.
Edition, USA.
Jovanovic, B.B., Reljin, I.S., and Reljin, B.D., 2004, Modified ANFIS Architecture - Improving
Efficiency of ANFIS Technique, 7th Seminar on Neural Network Applications in Electrical Engineering
(NEUREL), 215 – 220.
Joelianto, E., Dwihandayani, R., and Ling, L., 2002, Parameter Estimation using Genetic Algorithm for
Fed-Batch Growth of Saccharomyces cerevisiae, in Intelligent Control for Agricultural Applications
2001. in Purwadaria, H.K., Seminar, K.B., Suroso, Tjokronegoro, H.A., and Widodo, R.J. (Eds.),
Elsevier Science Technology, 107-112.
Kosko, B., 1991, Neural Networks for Signal Processing, Prentice Hall, Upper Saddle River, NJ.
Okada, W., Fukuda, H., and Morikawa, H., 1981, Kinetic Expression of Ethanol Production Rate and
Ethanol Consumption Rate in Baker's Yeast Cultivation, J. Fermentation Technology, 59, 103-109.
Panella, M., and Gallo, A.S., 2005, An Input-Output Clustering Approach to the Synthesis of ANFIS
Networks, IEEE Transactions on Fuzzy Systems, 13, 69-81.
Su, H., and Zhao, F., 2007, A Novel Learning Method for ANFIS Using EM Algorithm and Emotional
Learning, International Conference on Computational Intelligence and Security, 23-27.
Takamatsu, T., Shioya, S., Okada, S., and Kanda, M.,1985, Profile Control Scheme in a Baker's
Yeast Fed-batch Culture, Biotechnology and Bioengineering, 27, 1675-1686.
Widrow, B., and Winter, R., 1988, Neural Nets for Adaptive Filtering and Adaptive Pattern Recognition, IEEE Computer, 25-39.
Xu, W., Li, L., and Xu, P., 2007, A New ANN-based Detection Algorithm of the Masses in Digital
Mammograms, IEEE International Conference on Integrated Technology, 26-30.
