Adaptive Neuro Fuzzy Inference System (ANFIS) With Error Backpropagation Algorithm Using Mapping Function
ABSTRACT
Adaptive Neuro Fuzzy Inference System (ANFIS) is a class of adaptive networks which
enjoys many of the advantages claimed by neural networks (NNs) and the linguistic
interpretability of Fuzzy Inference Systems (FIS). The fixed membership functions used in the backward pass lead to a problem in minimizing the discrepancy between the actual outputs and the desired outputs. To overcome this problem, we propose a method that modifies the ANFIS algorithm in the backward pass by using a mapping function. The function maps the inputs to the corrected values obtained via the error correction rules in the first layer, by means of interpolation between the inputs and these corrected values.
Simulation results demonstrate the effectiveness of the proposed modified ANFIS in
significantly reducing the discrepancy between the actual outputs and the desired outputs.
KEY WORDS: Adaptive neuro fuzzy inference system (ANFIS); fuzzy systems; neural networks;
backpropagation algorithm; mapping function
Mathematics Subject Classification: 93C42, 94D05
1. INTRODUCTION
Adaptive Neuro Fuzzy Inference System (ANFIS) as developed by Jang et al. (1997) is a class of
adaptive networks that are functionally equivalent to fuzzy inference systems, where the parameters
of fuzzy inference systems are updated by neural networks from a set of training data. ANFIS enjoys
many of the advantages claimed by neural networks (NNs) and the linguistic interpretability of Fuzzy
Inference Systems (FIS), wherein both NNs and FIS play active roles in an effort to reach specific
goals. The adaptive capability of ANFIS makes it almost directly applicable to adaptive control and learning control. In fact, ANFIS can replace almost any neural network in a control system and perform the same function. The active role of neural networks in signal processing, as shown in Widrow and Winter (1988) and Kosko (1991), also suggests similar applications for ANFIS. The nonlinearity and structural knowledge representation of ANFIS are the primary advantages over classical linear approaches in adaptive filtering and adaptive signal processing, such as identification, inverse modelling, predictive coding, adaptive channel equalization, adaptive interference canceling and so on. Recent applications of ANFIS can be found, for example, in Xu et al. (2007) and Giripunje and Bawane (2007). Moreover, the architecture of ANFIS has been used to develop a new fuzzy logic controller chip by Aminifar and Yosefi (2007).
ANFIS features a hybrid learning algorithm that combines automatic tuning of a Sugeno-type inference system with generation of the output as a weighted linear combination of the consequents. The hybrid learning algorithm consists of two stages: a forward pass, which identifies the consequent parameters by using the learning mechanism of the FIS based on the neural architecture and a Least-Squares Estimator, and a backward pass, which updates the premise parameters by the error backpropagation algorithm. However, this hybrid learning has a problem in minimizing the discrepancy between the actual output and the desired output because of the fixed membership functions in the backward pass. Several methods to improve the performance of ANFIS were proposed by Jovanovic et al. (2004), Panella and Gallo (2005), and Su and Zhao (2007).
In ANFIS, the membership functions (here, bell functions) are expected to map all inputs by adjusting the parameters of the bell functions, so that all inputs can be mapped to produce the desired outputs. Unfortunately, when there are variations in the inputs, the desired outputs will be poorly approximated by the actual outputs because of limitations in finding the parameters of the fixed, finite number of fuzzy membership functions. As discussed by Bilgic and Turksen (1999), the fuzzy membership function is the building block of fuzzy logic systems and has many possible interpretations. From the viewpoint of fuzzy modelling (Frantti, 2001), the fuzzy membership function determines how much of the information in the given data is extracted in highly nonlinear systems. In fact, the membership function in fuzzy logic is an extension of classical interpolation (Castro, 1999). Therefore, there is an opportunity to extend the membership functions from predefined functions with well-known mathematical formulae to more complex functions, in order to incorporate the richness of the data in the fuzzy logic system.
In this paper, we propose a method to overcome the limitation in extracting highly nonlinear data that is introduced by fixed membership functions. The proposed method modifies the ANFIS algorithm in the backward pass and introduces a different kind of membership function called a mapping function. The membership functions are still used in the forward pass. However, in the backward pass, a mapping function is applied to map the inputs to all corrected errors obtained via the error correction rules in the first layer. The mapping function is then found by means of an interpolation technique from the inputs and the corrected errors in the first layer.
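To make the idea concrete, the following minimal Python sketch (not from the paper; the array names, the sample values and the choice of piecewise-linear interpolation via numpy.interp are our assumptions) builds such a mapping function from a set of inputs and their corrected first-layer values.

```python
import numpy as np

def build_mapping_function(inputs, corrected_values):
    """Return a callable mapping an input to its corrected first-layer value
    by piecewise-linear interpolation between the training points."""
    order = np.argsort(inputs)                  # interpolation needs sorted abscissae
    xs = np.asarray(inputs, dtype=float)[order]
    ys = np.asarray(corrected_values, dtype=float)[order]
    return lambda x: np.interp(x, xs, ys)

# Hypothetical example: inputs seen during training and the corrected
# layer-1 values produced by the error correction rules (illustrative numbers).
x_seen = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
n1_corrected = np.array([0.05, 0.30, 0.95, 0.40, 0.10])

mapping_fn = build_mapping_function(x_seen, n1_corrected)
print(mapping_fn(0.25))                         # membership-like value for a new input
```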
2. MODIFIED ANFIS
The standard ANFIS structure represents the Sugeno fuzzy model. For simplicity, it is assumed that the considered fuzzy inference system has two inputs $x$ and $y$ and one output $f$. A common rule set for a first-order Sugeno fuzzy model with two fuzzy if-then rules is as follows:
Rule 1: If $x$ is $A_1$ and $y$ is $B_1$, then $f_1 = p_1 x + q_1 y + r_1$,
Rule 2: If $x$ is $A_2$ and $y$ is $B_2$, then $f_2 = p_2 x + q_2 y + r_2$.
Figure 1.a shows the reasoning mechanism for this Sugeno model, and the corresponding standard ANFIS architecture is shown in Figure 1.b, where nodes of the same layer have similar functions. A detailed explanation of the ANFIS architecture and its mechanism can be found in Jang et al. (1997).
The important part of the modified ANFIS is the modification of the error correction rules of error backpropagation (EBP) by using a mapping function to replace the membership function of the standard ANFIS. The learning system of the modified ANFIS is similar to that of the standard ANFIS. It uses a hybrid learning algorithm consisting of two steps: the forward pass, which uses the fuzzy inference mechanism based on the neural network architecture and the Least-Squares Estimator (LSE), and the backward pass, which updates the premise parameters by applying the modified error backpropagation algorithm.
Figure 1 (a) A two-input first-order Sugeno fuzzy model; (b) equivalent ANFIS architecture (Jang et al., 1997)
2.1 Forward Pass
The forward pass uses the same architecture as in Jang et al. (1997) (see Figure 1), with two inputs and one output. For convenience, we introduce different notation, as shown in Figure 2.
Figure 2 Correction rule of modified EBP
Layer 1:
In this layer, the well-known bell function is used as the membership function, given by
$$\mu_{A_i}(x) = \frac{1}{1 + \left| \dfrac{x - c_i}{a_i} \right|^{2 b_i}} \qquad (1)$$
The membership function has parameters $\{a_i, b_i, c_i\}$, $i = 1, 2, 3, 4$, which are predetermined by selecting parameter values. Each output of these nodes is labeled by $a$; accordingly, the outputs are denoted by $n_{1a}$, $n_{2a}$, $n_{3a}$ and $n_{4a}$. The symbol $a$ is used to distinguish these values from the new symbol $b$ (after correction) that will be used later in the backward pass.
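For reference, a minimal Python sketch of the generalized bell membership function in (1); the four parameter sets below are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def bell_mf(x, a, b, c):
    """Generalized bell membership function: 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Four predetermined parameter sets {a_i, b_i, c_i}, one per layer-1 node (placeholders).
params = [(1.0, 2.0, -1.0), (1.0, 2.0, 0.0), (1.0, 2.0, 1.0), (1.0, 2.0, 2.0)]
x, y = 0.3, -0.4
n1a, n2a = (bell_mf(x, *p) for p in params[:2])   # membership grades of input x
n3a, n4a = (bell_mf(y, *p) for p in params[2:])   # membership grades of input y
```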
Layer 2:
In this layer, the fuzzy logic AND is used as the node function. The outputs of this layer follow the equations
$$n_{5a} = \min(n_{1a}, n_{3a}) \qquad (2)$$
$$n_{6a} = \min(n_{2a}, n_{4a}) \qquad (3)$$
Layer 3:
The input signals of this layer are normalized. Let $n_{tot\_a} = n_{5a} + n_{6a}$; then the normalization is given by
$$n_{7a} = n_{5a} / n_{tot\_a}, \qquad n_{8a} = n_{6a} / n_{tot\_a}$$
Layer 4:
$$f_1 = p_1 x + q_1 y + r_1$$
$$f_2 = p_2 x + q_2 y + r_2 \qquad (6)$$
After that, the outputs of the nodes $n_9$ and $n_{10}$ are calculated by using the equations
$$n_{9a} = n_{7a} f_1, \qquad n_{10a} = n_{8a} f_2$$
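A minimal Python sketch of Layers 2-4 for the two-rule case above; the pairing of the layer-1 outputs in the min operation and the placeholder consequent parameters are our assumptions.

```python
def forward_layers_2_to_4(n1a, n2a, n3a, n4a, x, y, p, q, r):
    """Layers 2-4 of the two-rule forward pass described above."""
    # Layer 2: fuzzy AND (minimum) firing strengths
    n5a = min(n1a, n3a)
    n6a = min(n2a, n4a)
    # Layer 3: normalization
    ntot_a = n5a + n6a
    n7a, n8a = n5a / ntot_a, n6a / ntot_a
    # Layer 4: first-order consequents weighted by the normalized strengths
    f1 = p[0] * x + q[0] * y + r[0]
    f2 = p[1] * x + q[1] * y + r[1]
    return n7a * f1, n8a * f2                      # n9a, n10a

# Example with placeholder consequent parameters (the paper identifies them with LSE).
n9a, n10a = forward_layers_2_to_4(0.7, 0.2, 0.5, 0.4, x=0.3, y=-0.4,
                                  p=(1.0, 0.5), q=(0.2, -0.3), r=(0.1, 0.0))
```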
Layer 5:
The output of this layer is the summation of the input signals:
$$n_{11a} = n_{9a} + n_{10a}$$
2.2 Backward Pass
Define the error between the output of the network and the desired output $d_k$. The sum of the squared errors is given by
$$E_p = \sum_{k=1}^{N(l)} \left( d_k - x^l_{p,k} \right)^2 \qquad (9)$$
In this case, we define $E_p = \varepsilon_{11}$. The value $x^l$ in this layer is given by $n_{11}$ and $d_k = U$. Hence, we have
In fact, we have
$$d_9 = \frac{f_1}{f_1 + f_2}\, d_{11}, \qquad d_{10} = \frac{f_2}{f_1 + f_2}\, d_{11} \qquad (14)$$
Since $n_{9a} = n_{7a} f_1$ and $n_{10a} = n_{8a} f_2$, after correction we have $n_{9b} = n_{7b} f_1$ and $n_{10b} = n_{8b} f_2$. As a result, we obtain
$$n_{9a} + d_9 = (n_{7a} + d_7) f_1$$
and similarly $n_{10a} + d_{10} = (n_{8a} + d_8) f_2$.
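A short Python sketch of the correction rules above, following the reconstructed equation (14) and the relation $n_{9a} + d_9 = (n_{7a} + d_7) f_1$; the function names are ours.

```python
def distribute_layer5_correction(d11, f1, f2):
    """Split the layer-5 correction d11 into the layer-4 corrections d9, d10
    in proportion to the rule consequents, as in (14)."""
    d9 = d11 * f1 / (f1 + f2)
    d10 = d11 * f2 / (f1 + f2)
    return d9, d10

def layer3_correction(n9a, d9, n7a, f1):
    """Recover the layer-3 correction d7 from n9a + d9 = (n7a + d7) * f1."""
    return (n9a + d9) / f1 - n7a
```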
Next, from the $n_{tot\_a}$ of the forward pass, we write the new $n_{tot\_b}$ as
$$n_{tot\_b} = n_{tot\_a} + d_{tot}$$
where $d_{tot}$ is arbitrary and obtained from the experimental data. Supposing $d_{tot} = 0$ implies $n_{tot\_b} = n_{tot\_a}$. From (3), the other output nodes in Layer 2 are corrected analogously.
In this layer, the logic AND is applied to process the outputs of Layer 1; more specifically, the minimum value of the input signals is selected. Since, as in Layer 2, we already have $n_{5b1}$ and $n_{6b1}$, the outputs of these nodes must satisfy $n_{5b} = n_{5b1}$ and $n_{6b} = n_{6b1}$. A simple way is to split $n_{5b1}$ and $n_{6b1}$ into two parts and then add an arbitrary value to one part, so that it has a higher value than the other part. As a result, that part will not be chosen in Layer 2. The result of this manipulation is the original value of $n_{1b}$ to $n_{4b}$ after the addition of the arbitrary value, which belongs to the output nodes in Layer 1. Next, all inputs are mapped to the corrected outputs of Layer 1. The mapping function then becomes the membership function of the learning mechanism of the modified ANFIS.
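One possible reading of this Layer 2 manipulation, as a hedged Python sketch: the corrected layer-2 value is carried by one layer-1 part, while the other part is raised by an arbitrary offset so that the min operation never selects it.

```python
def split_layer2_target(n5b1, offset=10.0):
    """Split a corrected layer-2 target into two layer-1 candidates so that the
    min in Layer 2 returns the intended value; the offset is arbitrary."""
    kept = n5b1                # the part that should win the min
    discarded = n5b1 + offset  # raised so it is never selected
    return kept, discarded

n1b, n3b = split_layer2_target(0.42)
assert min(n1b, n3b) == 0.42   # Layer 2 reproduces the corrected value
```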
3. INVERSE CONTROL PROBLEM
The control design aims to produce a controller output that leads the plant output to satisfy the design requirements. The block diagram of the inverse control problem is shown in Figure 3. Inverse dynamic modelling is used to obtain an appropriate controller output by fixing the output and then finding the input.
Figure 3 Block diagram of the inverse control problem: the modified ANFIS controller generates $U(k)$ from $x_d(k+1)$ and $x(k)$ (fed back through a unit delay $z^{-1}$), and drives the plant to $x(k+1)$
Consider a plant described by
$$x(k+1) = f\big(x(k), u(k)\big) \qquad (17)$$
where $x(k+1)$ is the state at time $k+1$, $x(k)$ is the state at time $k$, and $u(k)$ is the control signal at time $k$. In general, the state at time $k+n$ can be written as
$$x(k+n) = F\big(x(k), U\big) \qquad (18)$$
where $n$ denotes the order of the plant, $F$ is a multiple composite function of $f$, and $U$ is the control action from $k$ to $k+n$. Equation (18) shows that if the plant in equation (17) is driven by the control input $u$ from time $k$ to $k+n-1$, the state will move from $x(k)$ to $x(k+n)$.
Suppose that the inverse dynamics of the plant exist; then $U$ can be expressed as a function of $x(k)$ and $x(k+n)$:
$$U = G\big(x(k), x(k+n)\big) \qquad (19)$$
The design of the ANFIS controller requires that the inverse dynamics $G$ be obtained by using neuro-fuzzy systems, where the data pairs $[x(k)\ x(k+1)\ U]$ are applied in the training.
Figure 4 Learning scheme of the ANFIS identifier: the plant input $u$ and the identifier output are compared to form the error $e_u$, with the plant states $x(k)$ and $x(k+1)$ (via a unit delay $z^{-1}$) as identifier inputs
Figure 4 shows the learning process of the ANFIS identifier. At the end of the learning process, the neuro-fuzzy system has a rule base of the form:
If $x_1(k)$ is $A_1^l$ and ... and $x_n(k)$ is $A_n^l$ and $x_1(k+1)$ is $B_1^l$ and ... and $x_n(k+1)$ is $B_n^l$, then
$$u^l = a_1^l x_1(k) + \dots + a_n^l x_n(k) + b_1^l x_1(k+1) + \dots + b_n^l x_n(k+1) + c^l$$
where $l = 1, 2, \dots, M$; $A_i^l$, $B_i^l$ are fuzzy membership functions, $a_i^l$, $b_i^l$, $c^l$ are parameters obtained from the learning process, and $M$ is the number of rules. The learning process of the neuro-fuzzy system is carried out by minimizing the network error $e_u$ between the plant input and the identifier output. However, this minimum value does not guarantee that the system error between the desired output and the actual plant output will be minimized.
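A hedged Python sketch of how the identified rule base can be evaluated: each rule contributes a linear consequent in $x(k)$ and $x(k+1)$, and the rules are combined by normalized firing strengths. The weighted-average combination and the array shapes are our assumptions, not stated in the excerpt.

```python
import numpy as np

def evaluate_inverse_model(x_k, x_k1, a, b, c, w):
    """u = sum_l w_l * (a_l . x(k) + b_l . x(k+1) + c_l) / sum_l w_l,
    with a, b of shape (M, n), c and w of shape (M,)."""
    u_l = a @ x_k + b @ x_k1 + c        # per-rule consequents
    w_norm = w / np.sum(w)              # normalized firing strengths
    return float(w_norm @ u_l)
```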
3.2 Simulation
The plant used in the simulation is described by
$$y(k+1) = \frac{y(k)\, u(k)}{1 + y^2(k)} + \tan\big(u(k)\big) \qquad (22)$$
The data pairs are obtained by applying random input data to the mathematical model of the plant in (22). Figure 5 shows the training data. Next, the resulting weights from the learning process are applied to the standard ANFIS controller (Jang et al. (1997)) and to the modified ANFIS. Figure 6 shows the initial and final membership functions, Figure 7 shows the error between the desired and actual outputs, and Figure 8 shows the control input.
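As an illustration, training pairs for the inverse model of (22) can be generated along the following lines; this is a sketch in which the random input range, the number of samples and the zero initial condition are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
u = rng.uniform(-1.0, 1.0, N)            # random excitation of the plant
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = y[k] * u[k] / (1.0 + y[k] ** 2) + np.tan(u[k])

# Pairs [y(k), y(k+1), u(k)] used to train the inverse model u = G(y(k), y(k+1)).
training_pairs = np.column_stack([y[:-1], y[1:], u])
```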
The simulation results of the modified ANFIS are shown in Figures 9, 10 and 11, respectively. It can be seen in Figure 9 that the mapping functions are very different from the initial bell membership functions. The mapping functions show more rapid changes in order to incorporate more active dynamics. Compared with the standard ANFIS, the errors obtained by the modified ANFIS, shown in Figure 10, are very small, while the control input has almost the same magnitude.
Figure 5 Training data: input u(k) and output y(k) versus time
Figure 6 Initial (a) and final (b) membership functions of standard ANFIS
ISSN 0974-0635, Volume 1, Number A08, Autumn 2008 11
Figure 7 Error between the desired and actual outputs using the standard ANFIS controller
Figure 8 Control signal u(k) using the standard ANFIS controller
Figure 9 Initial (a) and final (b) mapping functions of modified ANFIS
Figure 10 Error between the desired and actual outputs using the modified ANFIS controller
Figure 11 Control signal u(k) using the modified ANFIS controller
4. FERMENTATION PROCESS CONTROL
Fermentation is a slow process in which microorganisms utilize available nutrients (substrates) for growth, biomass maintenance and product formation. One typical fermentation process is the fermentation of S. cerevisiae, known as baker's yeast. The mathematical model of the fermentation process is described by a nonlinear state space equation
$$\dot{x}(t) = f\big(x(t), u(t), p, \xi\big) \qquad (23)$$
where $x(t) \in \mathbb{R}^5$ denotes the state at time $t$, $u(t) \in \mathbb{R}^2$ denotes the control at time $t$, and the parameter vector is denoted by $p \in \mathcal{A}$. The unmodelled dynamics of the process in (23), whose exact form is not known, are denoted by $\xi$. The elements of the state vector (i.e. the state variables) $x_j(\cdot)$, $j = 1, 2, \dots, 5$, are defined following Boskovic and Narendra (1995); in particular, $x_1$ is the concentration of microorganisms and $x_2$ is the concentration of the substrate. The state equations are
$$\begin{aligned}
\dot{x}_1 &= \mu_m \frac{x_1 x_2}{k_s + x_2}\,\frac{1}{1 + x_3^2} - \frac{x_1}{x_5}\,u_1 + 0.48\, x_1 \xi_1(x),\\
\dot{x}_2 &= -\frac{\mu_m}{k_y}\,\frac{x_1 x_2}{k_s + x_2}\,\frac{1}{1 + x_3^2} + \frac{u_2 - x_2}{x_5}\,u_1 - k_m x_1 - \frac{1}{0.51}\, x_1 \xi_2(x), \qquad (24)\\
\dot{x}_3 &= 0.0023\, x_1 + 0.007\,\mu_m \frac{x_1 x_2}{k_s + x_2}\,\frac{1}{1 + x_3^2} - \frac{x_3}{x_5}\,u_1,\\
\dot{x}_4 &= x_1\big[\xi_2(x) - \xi_1(x)\big] - \frac{x_4}{x_5}\,u_1,\\
\dot{x}_5 &= u_1,
\end{aligned}$$
In equations (24), the unknown process parameters $\mu_m$, $k_s$, $k_y$ and $k_m$ are assumed to vary with time as described by the following equations:
The nonlinearities $\xi_1(\cdot)$ and $\xi_2(\cdot)$ correspond to the ethanol consumption and formation rates, respectively. These nonlinearities are generally not known precisely and represent the unmodelled dynamics of the system. Following Boskovic and Narendra (1995), these nonlinearities are assumed to be of the form
where
$$v_e(x_2, x_3) = 0.138 - 0.062\, x_3 - \frac{0.0028}{x_2 - 0.28}, \qquad S_e(x_2) = 0.155 + 0.123 \ln x_2 \qquad (28)$$
The model (24) has attracted a lot of attention because it gives a high yield of biomass. In Boskovic and Narendra (1995), it has been shown that the high yield can be obtained by the so-called "nominal regime", which is characterized by a low concentration of ethanol. In this paper, this result is used as the optimal feed rate profile. The objective of the control is to determine $u_1(t)$ and $u_2(t)$ that maximize the production of biomass at the final time $T$.
From (24), as shown in Boskovic and Narendra (1995), the problem comes from the presence of uncertainty, which arises when $x_2(t)$ exceeds its critical value $x_2^* = 0.28$. When $x_2(0)$ is selected to be 0.28 and the control input $u_1(t) = u^*(t)$ is determined such that $dx_2/dt = 0$ over the time interval $[0, T]$, we obtain $\xi(x(t)) = 0$. It has also been found that choosing $u_2^* = 200$, the parameter vector $p = p^* = [0.42\ 0.025\ 0.5\ 0.03]$ and the initial vector $x(0) = x^*(0) = [10\ 0.28\ 0\ 0\ 10]^T$ yields a state trajectory $x(t)$ that satisfies the optimal performance of the process, denoted by $x^*(t)$. The process described by (24) with $p = p^*$, $x(t) = x^*(t)$, $u_2 = 200$ and $u_1 = u_1^*$ given by
$$u_1^*(t) = \frac{x_5^*(t)}{200 - x_2^*(t)} \left[ 0.03\, x_1^*(t) + \frac{0.42\, x_1^*(t)\, x_2^*(t)}{0.5\,\big(0.025 + x_2^*(t)\big)\big(1 + x_3^{*2}(t)\big)} \right]$$
ISSN 0974-0635, Volume 1, Number A08, Autumn 2008 15
is referred to as the "nominal model". Figure 12 shows the simulation result of the system (24) operated in the so-called nominal regime. Notice that the obtained biomass $x_1(T)$ at the final time $T$ is 8 times the initial biomass $x_1(0)$.
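Under the reconstruction of the feed-rate expression above (the grouping of terms and the assignment $p^* = [\mu_m\ k_s\ k_y\ k_m]$ are our reading of the garbled original), the nominal feed rate can be evaluated as in the following Python sketch.

```python
def u1_star(x1, x2, x3, x5, u2=200.0, p=(0.42, 0.025, 0.5, 0.03)):
    """Nominal feed rate that keeps dx2/dt = 0 along the nominal trajectory."""
    mu_m, k_s, k_y, k_m = p
    growth = mu_m * x1 * x2 / (k_y * (k_s + x2) * (1.0 + x3 ** 2))
    return x5 / (u2 - x2) * (k_m * x1 + growth)

# Example at the assumed initial condition x*(0) = [10, 0.28, 0, 0, 10].
print(u1_star(x1=10.0, x2=0.28, x3=0.0, x5=10.0))
```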
Figure 12 Nominal model
The control system applies the inverse dynamics method shown in Figure 3. The modified ANFIS controller aims to obtain the appropriate controller output by fixing the output and then finding the input. In this paper, the fixed output is the output produced by the nominal model operated in the nominal regime. The training data pairs are obtained from the simulation results in Figure 12. For the training purposes, the state variables are arranged as follows:
$[x_1(k)\ x_1(k+1)\ \dots\ x_5(k)\ x_5(k+1)\ u(k)]$.
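A small Python sketch of this arrangement; the trajectory array X of shape (time, 5) and the input vector u are placeholder names.

```python
import numpy as np

def arrange_training_data(X, u):
    """Build rows [x1(k) x1(k+1) ... x5(k) x5(k+1) u(k)] from a state
    trajectory X (time x 5) and the corresponding input sequence u."""
    cols = []
    for j in range(X.shape[1]):
        cols.append(X[:-1, j])      # x_j(k)
        cols.append(X[1:, j])       # x_j(k+1)
    cols.append(np.asarray(u))      # controller output as the training target
    return np.column_stack(cols)
```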
The architecture of the ANFIS controller with the modified learning algorithm for the fermentation control problem is shown in Figure 13.
Figure 13 Architecture of the modified ANFIS controller for the fermentation control problem, with inputs $x_1(k), x_1(k+1), \dots, x_5(k), x_5(k+1)$ and output $u_1(k)$
Layer 1:
The membership function used in this layer (usually the bell function) is given by
$$\mu_{A_i}(x) = \frac{1}{1 + \left| \dfrac{x - c_i}{a_i} \right|^{2 b_i}} \qquad (29)$$
The parameters of the membership functions $\{a_i, b_i, c_i\}$, $i = 1, 2, \dots, 20$, are predetermined. Every output of these nodes is labeled by $a$, so the outputs are denoted by $n_{1a}$ to $n_{20a}$.
Layer 2:
Fuzzy logic AND is used as the node function in this layer. The outputs of this layer are given by
Layer 3:
Layer 4:
A2 = [(n34a·in3) (n34a·in4) (n35a·in5) (n35a·in6) (n36a·in5) (n36a·in6)];
A3 = [(n37a·in7) (n37a·in8) (n38a·in7) (n38a·in8) (n39a·in9) (n39a·in10)];
By using the Least-Squares Estimator (LSE) method, we then obtain the consequent parameters $P$ from the following equation
$$P = [A^T A]^{-1} A^T U_1 \qquad (33)$$
where $U_1$ is the desired output of the controller. Next, we obtain the parameters $P = [p_1\ \dots\ p_{20}]$ and
f1 = p1 in1 + p2 in2;
f2 = p3 in1 + p4 in2;
f3 = p5 in3 + p6 in4;
f4 = p7 in3 + p8 in4;
After that, the outputs of the nodes $n_{41a}, \dots, n_{50a}$ are computed in the same manner as in Section 2.1, i.e. by weighting $f_1, \dots, f_{10}$ with the corresponding Layer 3 outputs.
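A Python sketch of the consequent identification in (33); np.linalg.lstsq is used instead of forming $[A^T A]^{-1} A^T$ explicitly for numerical robustness, and A and U1 are placeholder names for the stacked regressor matrix and the desired controller outputs.

```python
import numpy as np

def identify_consequents(A, U1):
    """Least-squares solution of A P = U1 for the consequent parameters P."""
    P, *_ = np.linalg.lstsq(A, U1, rcond=None)
    return P
```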
Layer 5:
The output of this layer is the summation of all incoming signals:
$$n_{51a} = n_{41a} + \dots + n_{50a}$$
Define the error between the output of the network and the desired output $d_k$. The sum of the squared errors is given by the following quadratic error
$$E_p = \sum_{k=1}^{N(l)} \left( d_k - x^l_{p,k} \right)^2 \qquad (37)$$
$$d_{51} = \varepsilon_{51} / 2 = U_1 - n_{51a} \qquad (39)$$
In fact, $n_{51b} = n_{41b} + \dots + n_{50b}$ since $n_{51a} = n_{41a} + \dots + n_{50a}$. All mechanisms of the forward pass from Layer 1 to Layer 5 are used in the same manner in the backward pass to produce the corrected values (labeled $b$ after adding the correction factors $d_i$, $i = 21, \dots, 51$).
Further, by extending the correction rules of the backward pass of the modified ANFIS learning algorithm in Jang et al. (1997) to ten inputs, the corrected value of each node is obtained. Finally, once the corrected values in Layer 1 are obtained, all inputs are mapped to these values by using the interpolation technique. The mapping function then becomes the membership function of the learning mechanism in the modified ANFIS.
4.5 Simulation
To show the effectiveness of the modified ANFIS controller, a simulation is carried out by using the experimental data of Joelianto et al. (2002). The data were obtained from an experiment in a fermentor with the following conditions: temperature 30°C, pH 4.5, air flow speed 1 vvm, initial volume 1.5 L, feed concentration ($u_2$) 10 g/L, feed speed ($u_1$) 0.104 g/L, sampling time 2 hours, and sampling volume 20 mL. From the 1st hour until the 14th hour, baker's yeast was grown in batch condition, and the fed-batch condition lasted from the 14th hour until the 36th hour. In this simulation, the modified ANFIS controller is trained using the state variables from the experimental data. Next, the modified ANFIS finds the controller output that yields an output response following the response of the nominal model in Figure 12.
Figure 14 shows the initial membership functions and the final mapping functions of the modified ANFIS learning process. It can be seen that, in the fermentation control problem, the resulting ten membership functions of the modified ANFIS are not very different from the membership functions of the standard ANFIS. The upper values of the membership functions are slightly higher than those of the standard ANFIS. The responses of the state variables and the control signal are shown in Figure 15. The simulation results show that the output responses follow the given output profiles, but these responses are achieved by an oscillatory control input. The average percentage error is given in Table 1.
Figure 14 (a) Initial membership functions, (b) final mapping functions of the modified ANFIS learning process
Figure 15 Actual and desired responses of the state variables $x_1(t)$–$x_5(t)$ and the control signal $u_1(t)$ using the modified ANFIS controller
5. CONCLUSION
A mapping function has been proposed as an alternative to the well-known predefined membership functions in the fuzzy logic systems of ANFIS. The effectiveness of the proposed modified ANFIS compared with the standard ANFIS was shown by simulation results for two different control problems. Unlike the standard ANFIS, the modified ANFIS gives the final mapping function as the result of the learning process in the backward pass. The simulations showed that the form of the mapping functions is determined by the characteristics of the given data.
REFERENCES
Aminifar, S., and Yosefi, Gh., 2007, Application of Adaptive Neuro Fuzzy Inference System (ANFIS) in
Implementing of New CMOS Fuzzy Logic Controller (FLC) Chip, Numerical Analysis and Applied
Mathematics: International Conference of Numerical Analysis and Applied Mathematics. AIP
Conference Proceedings, 936, 49-53.
Bastin, G., and van Impe, J.F., 1995, Nonlinear and Adaptive Control in Biotechnology, European
Journal Control, 1, 37-53.
Bilgic, T., and Turksen, I.B., 1999, Measurement of Membership Functions: Theoretical and Empirical
Work. in Dubois, D., and Prade, H. (Eds.), Handbook of Fuzzy Sets and Systems, Vol. 1.
Fundamentals of Fuzzy Sets, Kluwer Academic Publishers, 195-202.
Boskovic, J.D., and Narendra, K.S., 1995, Comparison of Linear, Nonlinear and Neural Network-
based Adaptive Controllers for A Class of Fed-batch Fermentation Processes, Automatica, 31, 817-
840.
Castro, J.L.,1999, The Limits of Fuzzy Logic, Mathware and Soft Computing, 6, 155-161.
Frantti, T., 2001, Timing of Fuzzy Membership Functions from Data, Unpublished Ph.D. Dissertation,
University of Oulu, Linnanmaa, Faculty of Technology.
Giripunje, S., and Bawane, N., 2007, ANFIS Based Emotion Recognition in Speech. in Knowledge-
Based Intelligent Information and Engineering Systems, Berlin/Heidelberg: Springer, 77-84.
Jang, J.S.R., Sun, C.T., and Mizutani, E., 1997, Neuro-Fuzzy and Soft Computing, Prentice Hall Int.
Edition, USA.
Jovanovic, B.B., Reljin, I.S., and Reljin, B.D., 2004, Modified ANFIS Architecture - Improving
Efficiency of ANFIS Technique, 7th Seminar on Neural Network Applications in Electrical Engineering
(NEUREL), 215 – 220.
Joelianto, E., Dwihandayani, R., and Ling, L., 2002, Parameter Estimation using Genetic Algorithm for
Fed-Batch Growth of Saccharomyces cerevisiae, in Intelligent Control for Agricultural Applications
2001. in Purwadaria, H.K., Seminar, K.B., Suroso, Tjokronegoro, H.A., and Widodo, R.J. (Eds.),
Elsevier Science Technology, 107-112.
Kosko, B., 1991, Neural Networks for Signal Processing, Prentice Hall, Upper Saddle River, NJ.
Okada, W., Fukuda, H., and Morikawa, H., 1981, Kinetic Expression of Ethanol Production Rate and
Ethanol Consumption Rate in Baker's Yeast Cultivation, J. Fermentation Technology, 59, 103-109.
Panella, M., and Gallo, A.S., 2005, An Input-Output Clustering Approach to the Synthesis of ANFIS
Networks, IEEE Transactions on Fuzzy Systems, 13, 69-81.
Su, H., and Zhao, F., 2007, A Novel Learning Method for ANFIS Using EM Algorithm and Emotional
Learning, International Conference on Computational Intelligence and Security, 23-27.
Takamatsu, T., Shioya, S., Okada, S., and Kanda, M.,1985, Profile Control Scheme in a Baker's
Yeast Fed-batch Culture, Biotechnology and Bioengineering, 27, 1675-1686.
Widrow, B. and Winter, R., 1988, Neural Nets for Adaptive Filtering and Adaptive Pattern Recognition,
IEEE Transactions on Computer, 25-39.
Xu, W., Li, L., and Xu, P., 2007, A New ANN-based Detection Algorithm of the Masses in Digital
Mammograms, IEEE International Conference on Integrated Technology, 26-30.