! "# $
% & '& (
*
(
+
.
)
"
'
%
, ,
-
+
! "12
/
3# / 4
+
#
&
0
Dr. J. Narayana Das
Chairman, Board of Governors
SVNIT, Surat
MESSAGE
It gives me great pleasure to learn that SVNIT, Surat is organizing an international conference on “Advanced Engineering Optimization Through Intelligent Techniques” during July 1-3, 2013.
In the field of Science & Technology, India is advancing fast and making great strides in systems engineering and manufacturing. We have also demonstrated our expertise in cutting-edge areas such as the design of large systems and systems of systems, like space vehicles, battle tanks, ships and submarines. Apart from functional and structural considerations, present-day designs must also address several auxiliary aspects such as energy efficiency, life-cycle management, environmental impact, and hardware and software compatibility with international standards. It is essential that all practicing engineers be aware of the latest techniques and approaches for optimizing engineering designs under such multiple constraints.
I see that the conference includes in its scope presentations, discussions and lectures on several contemporary techniques by internationally reputed academicians, scientists and engineers. I am sure that the conference will serve as a good platform for exchange of views among the expert delegates, and that it will also expose students to these emerging areas.
I wish the conference all success.
1st July 2013
(Dr. J. Narayana Das)
DIRECTOR’S MESSAGE
I am happy to note that the Department of Mechanical Engineering of Sardar
Vallabhbhai National Institute of Technology (SVNIT), Surat is organizing an
International Conference on “Advanced Engineering Optimization Through Intelligent
Techniques (AEOTIT)” during July 01-03, 2013. The conference is concerned with the
latest in theoretical, mathematical and scientific developments in the field of engineering
optimization as well as applications of the advanced optimization techniques to various
domains. In the modern world, this type of conference is very much relevant to meet the
demands of resource optimization in different fields. I believe that this conference will
provide an international technical forum for experts and researchers from both academia and industry to meet, exchange new ideas and present the findings of their ongoing research in various disciplines. I am confident that the technical papers of the conference will provide an opportunity to rapidly disseminate research results that are of timely interest to the engineering fraternity. I warmly welcome all the participants to the
fruitful technical sessions.
Hearty congratulations to the organizers and best wishes for the success of the
conference.
Surat
1st July 2013
(Dr. P.D. Porey)
Director
REGISTRAR’S MESSAGE
I am very happy to know that the Department of Mechanical Engineering of this Institute is organizing a three-day international conference on “Advanced Engineering Optimization Through Intelligent Techniques (AEOTIT)” during July 01-03, 2013. I understand that this international conference will promote study, teaching and research activities in the field of engineering optimization among the participants and budding technocrats. The topics of this conference are important both for troubleshooting problems in our day-to-day work and for preparing future academic and administrative action plans.
SVNIT produces technocrats of very good quality, and it will soon be known as one of the premier global institutes because we are on the path to academic excellence. Research and development plays a crucial role in institutional development. We have the capability to achieve all-round growth of the Institution by focusing our attention on state-of-the-art teaching and research.
I understand that more than 150 papers were received, out of which 100 high-quality papers were finally selected, and that researchers from all over the world are participating in the conference. I am confident that the topics of the conference will be useful for academic enhancement and excellence. I warmly welcome all the participants to the rejuvenating technical sessions.
I wish the Conference every success.
Surat
1st July 2013
(Sh. H.A.Parmar)
Registrar
CONVENER - ABOUT THE CONFERENCE
The international conference on “Advanced Engineering Optimization Through
Intelligent Techniques (AEOTIT)” is scheduled to be held during July 01-03, 2013. The
objective of this conference is to bring together experts from academic institutions,
industries and research organizations and professional engineers for sharing of
knowledge, expertise and experience in the emerging trends related to advanced
engineering optimization techniques and their applications. The conference is structured
as keynote speeches followed by technical sessions and the special lectures on various
advanced optimization techniques. Potential topics to be addressed in this conference include applications of advanced optimization techniques such as the genetic algorithm (GA), differential evolution (DE), simulated annealing (SA), particle swarm optimization
(PSO), ant colony optimization (ACO), artificial bee colony (ABC) algorithm, harmony
search (HS) algorithm, teaching-learning-based optimization (TLBO) algorithm, cuckoo
search (CS) algorithm, firefly algorithm, hybrid optimization techniques and artificial
neural networks. The fuzzy multiple attribute decision making methods such as SAW,
WPM, AHP, TOPSIS, Graph theory and matrix approach, ELECTRE, PROMETHEE,
ORESTE, etc. are also going to be addressed during the conference. The special lectures to be delivered on the advanced optimization techniques, including the MADM and MODM methods, are expected to be very useful to young researchers.
There has been an overwhelming response to the call for papers. More than 150
full papers have been received from the researchers and academicians of the leading
institutes and organizations including those from Australia, China, Slovenia and
Ukraine. However, only 100 full papers have been finally selected based on the
recommendations of the reviewers for presentation and inclusion in the proceedings. By
and large these technical papers give a true account of current research and
development trends in the research field of optimization. The extended versions of
some of the selected high quality papers are going to be published in the special issues
of the reputed international journals.
The Co-convener, Mr. V.D.Kalyankar, and I are extremely grateful to the authors
of the papers, reviewers, international advisory committee members, faculty and staff
members of the department of Mechanical Engineering and student volunteers for their
cooperation and sincerity. We are deeply indebted to Dr. J. Narayana Das (Chairman),
Dr. P.D.Porey (Director) and Shri H.A. Parmar (Registrar) of our Institute for their
constant support and encouragement in making this international conference a success.
Surat
1st July 2013
(Dr. R. Venkata Rao)
Convener – AEOTIT
Dean (Academic)
INTERNATIONAL ADVISORY COMMITTEE
Dr. Leandro S. Coelho, Pontifícia Universidade Católica do Paraná, Brazil
Dr. Viviana C. Mariani, Pontifícia Universidade Católica do Paraná, Brazil
Dr. Kazem Abhary, University of South Australia, Australia
Dr. S. H. Masood, Swinburne University of Technology, Australia
Dr. Kishore Pochampally, Southern New Hampshire University, USA
Dr. Manukid Parnichkun, Asian Institute of Technology, Thailand
Dr. Nitin Afzulpurkar, Asian Institute of Technology, Thailand
Dr. Joze Balic, University of Maribor, Slovenia
Dr. F. Bleicher, Technical University of Vienna, Austria
Dr. Felix T.S. Chan, Hong Kong Polytechnic University, Hong Kong
Dr. A. Joneja, Hong Kong University of Science and Technology, Hong Kong
Dr. David K. H. Chua, National University of Singapore, Singapore
Dr. Husam I. Shaheen, Tishreen University, Syria
Dr. V. S. Kovalenko, National Technical University of Ukraine, Ukraine
Dr. Syed J. Sadjadi, Iran University of Science and Technology, Iran
Dr. Ali Kaveh, Iran University of Science and Technology, Iran
Dr. Wenyin Gong, China University of Geosciences, China
Dr. Liang Gao, Huazhong University of Science and Technology, China
Dr. Samuelson W. Hong, Oriental Institute of Technology, Taiwan
Dr. S. Sundarrajan, National Institute of Technology, Trichy
Dr. V.K. Jain, Indian Institute of Technology, Kanpur
Dr. P.K. Jain, Indian Institute of Technology, Roorkee
Dr. B.K. Panigrahi, Indian Institute of Technology, Delhi
Dr. S. K. Mohapatra, Thapar University, Patiala
Dr. S. S. Mahapatra, National Institute of Technology, Rourkela
Dr. B. Bhattacharya, Jadavpur University, Kolkata
Dr. S. Chakraborty, Jadavpur University, Kolkata
Dr. B. E. Narkhede, Veermata Jijabai Technological Institute, Mumbai
Dr. Manjaree Pandit, Madhav Institute of Technology and Sciences, Gwalior
INDEX

1. Analysing, Modelling and Optimising Disassembly Sequence Plan using Genetic Algorithms
    B. Motevallian, K. Abhary, L. Luong, R. Marian (p. 1)
2. Hybrid Chaotic Harmony Search and Differential Evolution Algorithm for Constrained Engineering Problems
    Jin Yi, Xinyu Li, Xiao Mi (p. 6)
3. Multi-Objective Optimization for Milling Operation by Using Genetic Algorithm
    A. Gjelaj, J. Balic (p. 11)
4. Comparative Analysis of MCDM Methods and Implementation of the Scheduling Rule Selection Problem: A Case Study in Robotic Flexible Assembly Cells
    K. Abd, K. Abhary, R. Marian (p. 16)
5. On Simulation and Optimization of the Process of Laser Selective Sintering
    Gladchenko Olekzander (p. 21)
6. On the Problem of Optimization of Laser Material Cladding
    Baybakova Olena (p. 21)
7. Modified ABC Algorithm for the Optimal Design of Multiplier-less Cosine Modulated Filter Banks
    Shaeen K, Elizabeth Elias (p. 22)
8. Multi-Response Optimization of WEDM Process Parameters using TOPSIS and Differential Evolution
    B. B. Nayak, S. S. Mahapatra (p. 27)
9. Application of Shannon Entropy to Optimize the Data Analysis
    D. Datta, P. S. Sharma, Subrata Bera, A. J. Gaikwad (p. 32)
10. Optimal Process Parameter Selection in Laser Transmission Welding by Cuckoo Search Algorithm
    Debkalpa Goswami, Shankar Chakraborty (p. 40)
11. Artificial Neural Networks Based Indoor Air Quality Model for a Mechanically Ventilated Building Near an Urban Roadway
    Rohit J, Shiva Nagendra S. M. (p. 45)
12. Parametric Optimization of Die Casting Process using Cuckoo Search Algorithm
    C. V. Chavan, P. J. Pawar (p. 51)
13. Cuckoo Optimization Algorithm for the Design of a Multiplier-less Sharp Transition Width Modified DFT Filter Bank
    Kaka Radhakrishna, Nisha Haridas, Bindiya T. S., Elizabeth Elias (p. 56)
14. Developing an Optimistic Model on Food Security
    A. A. Thakre, Atul Kumar (p. 61)
15. Optimizing Surface Finish in Turning Operation by Factorial Design of Experiments
    A. J. Makadia, J. I. Nanavati (p. 67)
16. Parallelization of Teaching-Learning-Based Optimization Over Multi-Core System
    A. J. Umbarkar, N. M. Rothe (p. 72)
17. Optimization of Compression-Absorption Refrigeration System using Differential Evolution Technique
    A. K. Pratihar, S. C. Kaushik, R. S. Agarwal (p. 77)
18. Performance Evaluation of Ground Granulated Blast Furnace Slag (GGBFS) based Cement Concrete in Aggressive Environment
    Amish R. Bangade, Hariharan Subramanyan (p. 82)
19. Solving Constrained Optimization Problem by Genetic Algorithm
    Aditya Parashar, Kuldeep Kumar Swankar (p. 87)
20. Multi-Criteria Decision-Making for Materials Selection using Fuzzy Axiomatic Design
    Anant V. Khandekar, Shankar Chakraborty (p. 91)
21. Modeling and Optimization of MRR in Powder Mixed EDM
    Ashvarya Agrawal, Avanish Kumar Dubey (p. 96)
22. Thermal Design of a Shell and Tube Heat Exchanger
    Hemant Upadhyay, Avdhesh Kr. Sharma (p. 101)
23. Multi-Objective Optimization of Turning Operations using Non-Dominated Sorting Genetic Algorithm Enhanced with Neural Network
    Aditya Balu, Sharath Chandra Guntuku, Amit Kumar Gupta (p. 106)
24. Optimization of Effect of Degree of Saturation on Strength and Consolidation Properties of an Unsaturated Soil
    Bhavita S. Dave, Lalit Thakur, D. L. Shah (p. 111)
25. Application of Artificial Neural Network for Weld Bead Measurement
    Devangi Desai, Bindu Pillai (p. 116)
26. Optimization of Error Gradient Functions by NAGNM through ESTLF
    Chandragiri RadhaCharan, M. Shailaja, B. V. Ram Naresh Yadav (p. 120)
27. Design of Least Cost Water Distribution Systems
    Dibakar Chakrabarty, Mesanlibor Syiem Tiewsoh (p. 125)
28. Optimization of Supply Chain Network: A Case Study
    D. Srinivasa Rao, K. Surya Prakasa Rao (p. 130)
29. Supplier Selection using ELECTRE I & II Methods
    S. R. Gangurde, G. H. Sonawane (p. 136)
30. Optimized Design and Manufacturing of Tooling by Concurrent Engineering
    Gunjan Bhatt (p. 141)
31. Design of an Oversaturated Traffic Signal using Simulated Annealing
    A. Parameswaran, Bhagath Singh K., Sandeep K., Harikrishna M. (p. 146)
32. Shape Control of Cantilever Beam with Smart Material using Genetic Optimization Technique
    Hitesh Patel, J. R. Mevada (p. 151)
33. Neuro-Fuzzy Applications in Urban Air Quality Management
    Hrishikesh C. G, S. M. Shiva Nagendra (p. 157)
34. Review of Phylogenetic Tree Construction Based on Some Metaheuristic Approaches
    J. Agrawal, S. Agrawal, B. K. Anuragi, S. Sharma (p. 162)
35. Optimization Model for Inventory Distribution
    A. A. Thakre, Krishna K. Gupta (p. 169)
36. Limited View Tomographic Image Reconstruction using Genetic Algorithm
    Saran S, Prashanth K R, Atul Srivastava, Ajay Kumar, M. K. Gupta (p. 176)
37. A Quality Function Deployment-based Model for Machining Center Selection
    K. Prasad, S. Chakraborty (p. 181)
38. An Automatic Unsupervised Data Classification using TLBO
    K. Karteeka Pavan, A. V. DattatreyaRao, R. Meenakshi (p. 186)
39. Population Based Advanced Engineering Optimization Techniques: A Literature Survey
    R. V. Rao, K. C. More (p. 192)
40. A Novel Approach for Fuel Properties Optimization for the Production of Blended Biofuel by using Genetic Algorithm
    L. K. Behera, Payodhar Padhi, P. Gorai, Vivek Kumar, S. K. Behera (p. 200)
41. Selection of Lubricant in Machining using Multiple Attribute Decision Making Technique
    M. A. Makhesana (p. 205)
42. Simulated Annealing based Optimization of Inventory Costing Problem
    Mukul Shukla, Alok Kumar Mishra, Ravi Kumar Gupta (p. 211)
43. Modified Differential Evolution for Optimization using Alien Population Member
    P. Kapoor, P. Goulla, N. Padhiyar (p. 216)
44. Hybrid Differential Evolution for Optimization: Using Modified Newton's Method
    P. Goulla, P. Kapoor, N. Padhiyar (p. 222)
45. Optimization of Machining Parameters in Face Milling of Al6065 using Fuzzy Logic
    P. VenkataRamaiah, N. Rajesh (p. 228)
46. Strength Optimization of Orthotropic Plate Containing Triangular Hole Subjected to In-plane Loading
    N. P. Patel, D. S. Sharma, R. R. Trivedi (p. 233)
47. Selection of Pattern Material for Casting Operation using Fuzzy PROMETHEE
    P. A. Date, A. K. Digalwar (p. 238)
48. Parametric Optimisation of Cold Backward Extrusion Process using Teaching-Learning-Based Optimization Algorithm
    P. J. Pawar, R. V. Rao (p. 243)
49. Optimization of Hole-Making Operations: A Genetic Algorithm Approach
    P. J. Pawar, M. L. Naik (p. 248)
50. Neural Network Prediction of Erosion Wear in Pipeline Transporting Multi-size Particulate Slurry
    K. V. Pagalthivarthi, P. K. Gupta, J. S. Ravichandra, S. Sanghi (p. 253)
51. Optimization of Plasma Transferred Arc Welding Process Parameters for Hardfacing of Stellite 6B on Duplex Stainless Steel using Taguchi Method
    P. S. Kalos, D. D. Deshmukh (p. 258)
52. Ant Colony Optimization for Reservoir Operation: Case Study of Panam Project
    Pooja C. Singh, T. M. V. Suryanarayana (p. 263)
53. Combining AHP and TOPSIS Approaches to Support Rubble Filling Method Selection for a Construction Firm
    Prerana Jakhotia, N. R. Rajhans (p. 269)
54. Identification of Parameters Affecting Liquefaction of Fine Grained Soils using AHP
    Rajhans N. R., Purandare A. S., Pathak S. R. (p. 274)
55. Identifying Key Risk Factors for PPP Projects in the Indian Construction Industry: A Factor Analysis Approach
    Rakesh P. Joshi, Hariharan Subramanyan (p. 279)
56. Multi-Objective Optimization of Rotary Regenerator using Multi-Objective Teaching-Learning-Based Optimization Algorithm
    Vivek Patel, R. Venkata Rao (p. 284)
57. Optimization of Machining Parameters in Electrical Discharge Machining (EDM) of Stainless Steel
    Rajeev Kumar, Gyanendra Kumar Singh (p. 290)
58. Suppliers Delivery Performance Evaluation & Improvement using AHP
    Rajesh Dhake, N. R. Rajhans (p. 295)
59. Decisions in High Volume Low Variety Manufacturing System
    R. R. Lekurwale, M. M. Akarte, D. N. Raut (p. 300)
60. Application of RSM Based Simulated Annealing Algorithm Approach for Minimization of Surface Roughness in Cylindrical Grinding using Factorial Design
    Ramesh Rudrapati, Asish Bandyopadhyay, Pradip Kumar Pal (p. 305)
61. Application of TOPSIS Analysis for Selection of Nozzle in Mechanical Deterioration Test Rig
    N. R. Rajhans, R. S. Garodi, Jyoti Kirve (p. 310)
62. Optimum Design of Cylindrical Roller Bearings by Optimization Techniques and Analysis using ANSYS
    R. D. Dandagwhal, V. D. Kalyankar (p. 315)
63. Optimization of Performance and Analysis of Internal Finned Tube Heat Exchanger under Mixed Convection Flow
    S. B. Mishra, S. S. Mahapatra (p. 321)
64. Aerodynamic Shape Optimisation of a Two Dimensional Body for Minimum Drag using Simulated Annealing Method
    S. Brahmachary, G. Natarajan, N. Sahoo (p. 326)
65. Parametric Optimization of Compression Molding Process using Principal Component Analysis
    S. P. Deshpande, P. J. Pawar (p. 332)
66. Selection of Material for Press Tool using Graph Theory and Matrix Approach (GTMA)
    S. R. Gangurde, Sudish Ray (p. 337)
67. Optimum Design of PID Controller using Teaching-Learning-Based Optimization Algorithm
    R. V. Rao, G. G. Waghmare (p. 343)
68. An Overview of Applications of Intelligent Optimization Approaches to Power Systems
    S. S. Gokhale, V. S. Kale (p. 348)
69. Intelligent Modelling and Optimization of Laser Trepan Drilling of Titanium Alloy Sheet
    Md. Sarfaraz Alam, Avanish Kumar Dubey (p. 353)
70. Selection of Magnetorheological (MR) Fluid for MR Brake using Analytical Hierarchy Process
    Kanhaiya P. Powar, Satyajit R. Patil, Suresh M. Sawant (p. 358)
71. Selection of Media using R3I (Entropy, Standard Deviation) and TOPSIS
    Savita Choundhe, Purva Khandeshe, N. R. Rajhans (p. 363)
72. Advance Technique for Energy Charges Optimization
    S. G. Shirsikar, Shubhangi Patil (p. 368)
73. An Uplink OFDM/SDMA Multiuser Detection using Firefly Algorithm
    K. V. Shahnaz, Palash Kulhare, C. K. Ali (p. 373)
74. Multi-Criteria Material Selection for Heat Exchanger using an Integrated Decision Support Framework
    P. B. Lanjewar, R. V. Rao, A. V. Kale (p. 377)
75. Application of AHP-TOPSIS for Comparison of Layouts
    S. M. Samak, N. R. Rajhans (p. 383)
76. Optimization of Treatability Study of UASB Unit of Atladara Old and New Sewage Treatment Plant, Vadodara
    Shweta M. Engineer, L. I. Chauhan, A. R. Shah (p. 388)
77. Role of Fuzzy Set Theory in Air Pollution Estimation
    Subrata Bera, D. Datta, A. J. Gaikwad (p. 392)
78. Selection of Material for Press Tool using Graph Theory and Matrix Approach (GTMA)
    S. R. Gangurde, Sudish Ray (p. 398)
79. Antenna Size Optimization using Metamaterial
    Surabhi Dwivedi, Vivekanand Mishra (p. 403)
80. Material Selection for Cleaner Production using AHP, PROMETHEE and ORESTE Methods
    R. V. Rao, F. Bleicher, S. Goud (p. 408)
81. Application of Differential Evolution Algorithms for Optimal Relay Coordination
    Syed Mohammad Zaffar, Vijay S. Kale (p. 413)
82. Optimization of Hygienic Conditions on Indian Railway Stations
    T. L. Popat, Jay Brahmakhatri, Dinesh Popat (p. 418)
83. Process Parametric Optimization in Wire Electric Discharge Machining for P-20 Material
    Ugrasen G, H. V. Ravindra, G. V. Naveen Prakash, B. Chakradhar (p. 422)
84. Speed Parameters Optimization for Wind Turbine Generators using the Teaching-Learning-Based Optimization Algorithm
    R. V. Rao, Y. B. Kanchuva (p. 427)
85. Automatic Monitoring System for Geared Shaft Assembly
    R. B. Nirmal, R. D. Kokate (p. 432)
86. Surface Texture Improvement on Inconel-718 by Roller Burnishing Process
    P. S. Kamble, C. Y. Seemikeri (p. 438)
87. Some Investigations into Surface Texture Modifications of Titanium Alloy by Conventional Burnishing Method
    C. Y. Seemikeri, P. S. Kamble (p. 443)
88. Experimental Investigation on Submerged Arc Welding of Cr-Mo-V Steel and Parameters Optimization
    R. V. Rao, V. D. Kalyankar (p. 448)
89. Multi Objective Optimization of Weld Bead Geometry in Pulsed Gas Metal Arc Welding using Genetic Algorithm
    K. Manikya Kanti, P. Srinivasa Rao, G. Ranga Janardhana (p. 454)
90. A New Teaching-Learning-Based Optimization Method for Multi-Objective Optimization of Master Production Scheduling Problems
    S. Radhika, Ch. Srinivasa Rao, K. Karteeka Pavan (p. 459)
91. PSO_TVAC Based Optimal Location and Sizing of TCPST for Real Power Loss Minimization
    Ankit Singh Tomar, Laxmi Srivastava (p. 464)
92. Optimal Placement and Sizing of TCSC for Minimization of Line Overloading and System Power Loss using Multi-Objective Genetic Algorithm
    Gautam Singh Dohare, Laxmi Srivastava (p. 470)
93. ACODE Algorithm Based Optimized Artificial Neural Network for AGC in Deregulated Environment
    K. Chandra Sekhar, K. Vaisakh (p. 477)
94. Analysis of Transverse Vibration of a Simply Supported Beam Through Finite Element Method
    S. K. Jha, R. K. Deb, V. C. Jha, I. A. Khan (p. 484)
95. Optimization through Residual Stress Analysis of High Pressure Cylindrical Component at Post-Autofrettage Stage
    Shrinivas Kiran Patil, Santosh J. Madki (p. 491)
96. Analytic Hierarchy Process (AHP) for Green Supplier Selection in Indian Industries
    Samadhan P. Deshmukh, Vivek K. Sunnapwar (p. 495)
97. Integration of Process Planning and Scheduling Activities using a Hybrid Model
    A. Sreenivasulu Reddy, Abdul Shafi. M, K. Ravindranath (p. 501)
98. A Brief Review on Algorithm Adaptation Methods for Multi-Label Classification
    J. Agrawal, S. Agrawal, S. Kaur, S. Sharma (p. 506)
99. Optimization of Influential Parameters of Solar Parabolic Collector using RSM
    P. Venkataramaiah, P. M. Reddy, D. Vishnuvardhan Reddy (p. 512)
100. Evaluating Coupling Loss Factors of Corrugated Plates for Minimising Sound Radiated by Plates
    S. S. Pathan, D. N. Manik (p. 517)
101. Performance of Elitist Teaching-Learning-Based Optimization (TLBO) Algorithm on Problems from GLOBAL Library
    Anikesh Kumar, Nikunj Agarwalla, Prakash Kotecha (p. 522)
102. Optimal Placement of Multiple TCSC: A New Self Adaptive Firefly Algorithm
    R. Selvarasu, C. Christober Asir Rajan (p. 528)
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
Analysing, Modelling and Optimising Disassembly Sequence
Plan using Genetic Algorithms
B. Motevallian*, K. Abhary, L. Luong, R. Marian
School of Engineering, University of South Australia, Mawson Lakes, 5095, South Australia
*Corresponding author (e-mail: Behzad.Motevallian@unisa.edu.au)
The focus of this research is to solve and optimise the disassembly sequence planning (DSP) problem, a large-scale, highly constrained combinatorial problem. Solving and optimising DSP is vital for the efficiency of disassembly operations, because the order of disassembly determines the number of operations needed, and the degree of difficulty of each of these operations affects the cost and the quality of the recovered materials. To optimise the DSP, the problem has to be analysed, structured and modelled, and then solved prior to being optimised. A full discussion of all these steps is beyond the scope of this paper, but a brief overview of some of the issues is provided. In this research, genetic algorithms (GA) are used as the optimisation tool, because GA are a very robust technique capable of dealing with a very large class of optimisation problems, especially when the search space is large and the functions are irregular, as is typical of disassembly. However, due to the exceptionally constrained character of disassembly, GA as they exist today cannot be used directly to optimise it; a different GA approach is therefore needed, and one has been developed in this research.
1. Introduction
The disassembly sequence is the most important part of a disassembly plan, and affects several aspects of the disassembly process as well as various details of the product design. In fact, the second part of disassembly planning comes strictly after, and is influenced by, the first part of the planning, namely the DSP. Since it may be costly to overlook a potential candidate disassembly sequence, it is desirable to select a satisfactory sequence from the set of all feasible sequences. The DSP influences a number of vital design factors, including part fixturing, the degrees of freedom and the accessibility required for disassembly. It also affects the productivity of disassembly operations, since it determines the number of delays necessary to change tooling, for example the end-of-arm effectors used by a robot. The operation sequence is therefore a primary factor in determining the cost of disassembly. Optimisation of a disassembly plan is very complex in nature and difficult or even impossible to achieve with conventional optimisation approaches and techniques. In this research, genetic algorithms (GA) are used as the optimisation tool, because GA are a very robust technique capable of dealing with a very large class of optimisation problems, especially when the search space is large and the functions are irregular, as is typical of disassembly. But in order to optimise the DSP, the problem has to be analysed, structured and modelled, and then solved prior to being optimised.
2. Literature review
A critical review of the literature showed that the existing approaches and techniques used to optimise the disassembly problem are limited to a reduced search space; their applications are therefore very limited in practice. The characteristics of DSP preclude the use of any classic optimisation technique and impose the use of a special approach for solving and optimising the DSP problem. The review also showed that there is a limited number of works optimising disassembly using GA, and their common disadvantage is that, by not searching the complete space, valuable solutions are lost in the optimisation process, so that such work has little application in the real world.
3. The scale of the disassembly planning problem
The disassembly planning problem is a large-scale, highly constrained combinatorial problem of exponential computational complexity, with various uncertainties and a variety of variables to be considered. The solution space of the DSP problem is the set of potential disassembly sequences, i.e. all the possibilities in which an n-part product can be disassembled into components. The size of the solution space can be used to estimate the relative difficulty of searching for an optimal solution in that space. If the problem is solved by an exhaustive search, the complexity of finding an optimal solution increases linearly with the size of the solution space. If a tree search that recursively divides the solution space is used, the complexity increases as the logarithm of the size of the solution space. Consider the DSP problem for which m = n (see the definition of DSP in the following section); the size of the solution space is then given by the number of permutations of n elements, i.e. n!. This number of potential disassembly sequences includes both feasible and non-feasible sequences. In a permutation each element appears once, i.e. each disassembly operation removes one part (m = n), so the disassembly sequence is linear. The solution space grows if the problem is less constrained: the number of sequential disassembly plans is (2n - 2)!/[n!(n - 1)!], i.e. the Catalan number (1/n)C(2n - 2, n - 1), which grows exponentially (roughly as 2^n); the maximum number of potential sequential plans is therefore unbounded (Wolter 1988). That is, for an n-part product, the number of possible disassembly sequences obtained by enumeration is [2(n - 1)]!/(n - 1)!, which grows exponentially as n increases (Lee et al. 1993); for a 10-part product it already exceeds 10^10. Successfully solving and optimising DSP, with such a vast number of choices to choose from, requires a problem-oriented approach, considering its generality and diversity. The solution-oriented methods used so far try to truncate a real-life problem, by means of a certain (existing) technique or algorithm, via a number of artificially limiting hypotheses. The solution of a DSP problem of such diversity cannot be generalised, since a slight change in the problem requires a totally different algorithm and approach. Unlike other problems encountered in Artificial Intelligence, the DSP problem is NP-complete, highly constrained and combinatorial, with many variables to consider; it requires a completely different approach, as the available approaches in their current forms cannot be used to solve and optimise it. Disassembly plans that have not been optimised can result in inefficient disassembly operations. It is therefore necessary to endeavour to automate and systematise the generation and optimisation of disassembly sequences. A disassembly plan, however, should be generated by considering a number of factors that affect its quality, including the structure of the product, the size of the batch and the layout of the disassembly facility. If any of those factors changes, the quality of the disassembly plan changes, and the plan might have to be revised (Marian et al. 2003). Automation of planning enables quick and efficient planning when all elements contributing to the disassembly sequence are known, and re-planning when those elements change.
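To make this combinatorial growth concrete, the short Python sketch below (the helper names are the author's illustration, not from the paper) evaluates the two counts discussed above, n! for linear sequences and the enumeration bound [2(n - 1)]!/(n - 1)!, for a few product sizes:

```python
from math import factorial

def linear_sequences(n):
    # Number of linear disassembly sequences (one part removed per
    # operation, m = n): the permutations of n parts, i.e. n!.
    return factorial(n)

def enumeration_bound(n):
    # Bound on disassembly sequences obtained by enumeration,
    # [2(n - 1)]! / (n - 1)!  (as cited from Lee et al. 1993).
    return factorial(2 * (n - 1)) // factorial(n - 1)

for n in (4, 6, 8, 10):
    print(f"n = {n:2d}: n! = {linear_sequences(n):>10,}  "
          f"enumeration bound = {enumeration_bound(n):>16,}")
```

For a 10-part product the enumeration bound already exceeds 10^10, which is why exhaustive search is impractical and a robust stochastic technique such as GA becomes attractive.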
4. Definition of DSP
To automat the generation of disassembly sequences implies the need for more
structured and rigorous methods for planning the sequence and the order in which parts can
be disassembled. In general terms, the DSP problem can be formulated as: having an n-part
product suitably described the disassembly constraints and optimisation measures, generate
(1) a feasible, (2) optimal or near optimal disassembly sequence to disassemble the given
product optimally. The first part of the problem, (1), implies finding a feasible solution for the
problem, whereas part (2) tries to find an optimal/near optimal one. The aim here, however is
to generate optimal or near optimal disassembly plans that minimises the cost of disassembly
operation(s) and obtains the best cost/benefit ratio. Solving the disassembly problem is an
essential step prior to its optimisation, implies generating feasible sequences, in order to
disassemble all the components of a product, given its geometric description and constraints.
Valid disassembly plans therefore must satisfy all geometrical constraints of components in a
2
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
given product so that parts can be disconnected and removed without collision during the
disassembly operation(s). In addition, disassembly plan should be optimised for achieving
objectives of the disassembly, as un-optimised disassembly plans may result in difficulties in
disassembly operation(s). DSP as defined here is general and this generality is required by
the generality of the disassembly characteristics and the disassembly related problems. The
DSP problem can be also individualised for different types and levels of details for the input
(description of the product) and the output (disassembly plan). Thus the formal definition of
DSP therefore defined as follows;
Given: a product (A) composed of (n) components, A = {c1, c2, …, cn}, that is to be
disassembled in a finite number (m) of operations, O = {o1, o2, …, om}, m ≤ n, and described
in enough detail to permit the extraction and definition of the following information:
1. I1. The connection information: on O a set of relations C is defined, such that if (oi, oj) ∈ C, it
is said that oi and oj have a connection; also, (oi, oj) ∈ C → (oj, oi) ∈ C;
2. I2. The precedence information: on O a set of relations P is defined as a binary relation; (oi, oj) ∈ P
means oi has to be performed after oj. If (oi, oj) ∈ P, then (oj, oi) ∉ P;
3. I3. The quality measures: on A a set of quality measures QMk, k ∈ N, is defined;
4. I4. The optimisation function F, a measure of the facility to perform a disassembly
sequence: F = Σ Fi, i = 1, …, m, defined as the sum of the facilities to perform each individual
disassembly operation, Fi, with the characteristic that the value of Fi depends, besides
the quality measures QMk, on the disassembly operations already carried out: Fi = f(QMk,
s1, …, si-1), i = 1, …, m, k ∈ N.
Determine: a disassembly sequence S = {s1, …, si, …, sn | si ∈ A} such that:
C1: ∀ si ∈ S, i = 1, …, n, ∃ sj ∈ S, j ∈ N and j < i, such that (si, sj) ∈ C;
C2: ∀ si, sj ∈ S, i, j = 1, …, n, with i < j: (si, sj) ∉ P (precedence condition);
C3: S maximises the value of F (optimisation).
i. Finding a disassembly sequence S that respects conditions C1 and C2 means solving
the DSP problem (finding a feasible disassembly sequence);
ii. Finding a disassembly sequence that respects conditions C1, C2 and C3 means
optimising the DSP problem (finding an optimum disassembly sequence).
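As an illustration, the precedence condition C2 can be checked mechanically. The sketch below assumes the precedence relation P is stored as a set of (after, before) pairs with integer part identifiers; all names and values are illustrative, not the authors' implementation.

```python
# Feasibility check for condition C2 (precedence): (a, b) in P means
# part a can only be removed after part b. All values are illustrative.

def is_feasible(sequence, precedence):
    """Return True if `sequence` violates no precedence pair."""
    position = {part: i for i, part in enumerate(sequence)}
    for after, before in precedence:
        # `after` must appear later in the sequence than `before`
        if position[after] < position[before]:
            return False
    return True

# Hypothetical 4-part product: part 3 only after part 1, part 4 only after part 2
P = {(3, 1), (4, 2)}
print(is_feasible([1, 2, 3, 4], P))  # True
print(is_feasible([3, 1, 2, 4], P))  # False: 3 removed before 1
```

Condition C1 (connectivity) could be checked the same way against the relation C.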
5. GA approach to solve and optimise the DSP problem
One of the most universal problem-solving methods for complex problems is searching
for solutions, and there are two important issues in search strategies: exploiting the best
solution and exploring the search space. GAs are a class of general-purpose search methods
combining directed and stochastic search, achieving a good balance between exploration and
exploitation of the search space. Optimising the DSP involves selecting an optimum or near-optimum
feasible disassembly sequence according to a quality function based on optimisation
criteria. GA was selected as the optimisation tool for its ability to handle large-scale
combinatorial problems and for the flexibility it offers in defining the optimisation function. The
optimisation of DSP is therefore performed using a GA whose structure is
based on a classic GA (Gen and Cheng 1997) and incorporates guided
search (Marian et al. 2003) to handle the combinatorial explosion and the highly constrained
character of the DSP problem, which requires modified genetic operators. The
initial population undergoes crossover, a modified genetic operator that relies heavily
on the guided search and is designed to produce only feasible chromosomes. After
crossover, the chromosomes are translated back to the solution space and are evaluated
using a fitness function based on the optimisation criteria defined on the solution space. The
fitness value of a chromosome is the sum of partial fitness values corresponding to each
disassembly operation. Once the disassembly sequences have been evaluated, the
sequence with the highest fitness function (FF) value is selected. The selection is a classical
operation, performed through a weighted roulette algorithm, and operates on an extended population of
parent and child chromosomes (Gen and Cheng 1997). After selection, the population
undergoes a pseudo-mutation, an operation functionally identical to mutation but
implemented in a different manner. The optimisation is an iterative process, and its
result is a population of disassembly sequences with high fitness values,
from which the one with the highest fitness can be selected. However, optimising the DSP
problem (finding a solution that satisfies conditions C1, C2 and C3, as defined in the DSP
problem definition, Section 3) implies first solving the DSP problem (generating a
solution that satisfies conditions C1 and C2). Solving the DSP is therefore a
precondition for optimising it. The optimisation of disassembly, however, has to be
extremely flexible, to facilitate the optimisation of the extraction of one single part, several parts,
or all of the parts of a given product, depending on the objectives of the disassembly. The
reason for such extreme flexibility is that no single disassembly strategy can work for all
products; an optimum disassembly sequence should therefore be obtained from the
feasible sequences based on the defined disassembly strategy and objectives. The objective
could be service, maintenance or repair requiring the removal of a malfunctioning
component (selective disassembly), removal of two or more parts (partial disassembly),
or removal of all the components of the product (complete disassembly). The approach used in this
research to solve and optimise the DSP problem is a development of the Three Step
Approach in which a population-based search is used; it is presented with reference to
Fig. 1 below, and all the elements and operations that appear in
the figure are briefly explained as well.
[Fig. 1 shows a flowchart: disassembly sequences and absolute constraints, defined in the
solution space together with the optimisation criteria, are represented in the model space as
chromosomes and precedence relations; an initial population of feasible chromosomes is
generated through guided search, then repeatedly subjected to crossover (a modified genetic
operator based on guided search), evaluation with the fitness function, selection by weighted
roulette and the pseudo-mutation operator, until the optimised chromosome is obtained.]
Figure 1. The approach used to solve and optimise the DSP problem with genetic algorithms
A disassembly sequence indicates the succession of operations, and is defined
according to the characteristics of the product (geometry of components, relations between
components, materials of components, location of components, the direction in which a
component can be disassembled, etc.), which include constraints such as disassembly line
and layout constraints. The disassembly sequences, absolute constraints and optimisation
criteria are defined in the solution space, where evaluation takes place. Disassembly
sequences are modelled and represented as chromosomes. The genetic operators in this
approach work in the model space with chromosomes, with a bi-unique (one-to-one) mapping between a
disassembly sequence and a chromosome. However, not all disassembly sequences are
feasible: a sequence is feasible if and only if it satisfies a class of constraints, the absolute
constraints, which are modelled as precedence constraints and represented as precedence relations
in the model space. One of the basic features of the GA used in this research for the
optimisation of disassembly sequences is that it works in an iterative process, alternately
in the coding or model space and in the solution space. For this reason, modelling and
representation are important issues in optimisation, as they are the interface between the
real-life problem, defined in the solution space, and the abstract replica of the problem
defined in the model space. The modelling of the problem also dictates the way relevant
information about the problem is encoded, the size of the associated database needed to
store the information, and the complexity of the algorithms required to access and retrieve it.
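The guided-search idea of producing only feasible chromosomes can be sketched as follows: at every step a part is drawn at random from the set of parts whose precedence predecessors have already been removed. This is an illustrative reconstruction of the idea, not the authors' exact operator.

```python
import random

def guided_random_sequence(parts, precedence, rng):
    """Build a feasible disassembly sequence by always choosing among parts
    whose prerequisites are already removed. `precedence` is a set of
    (after, before) pairs: `after` may only be removed once `before` is out."""
    removed, sequence = set(), []
    while len(sequence) < len(parts):
        candidates = [p for p in parts
                      if p not in removed
                      and all(b in removed for a, b in precedence if a == p)]
        pick = rng.choice(candidates)   # stochastic, but always feasible
        removed.add(pick)
        sequence.append(pick)
    return sequence

P = {(3, 1), (4, 2)}                    # hypothetical constraints
seq = guided_random_sequence([1, 2, 3, 4], P, random.Random(42))
assert all(seq.index(a) > seq.index(b) for a, b in P)
```

Because every chromosome produced this way is feasible by construction, no repair step is needed after crossover.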
6. Discussion and conclusions
The focus of this research was to solve and optimise the DSP problem, a large-scale,
highly constrained combinatorial problem. In order to solve it, the problem needed
to be analysed, structured, modelled, and then solved prior to being optimised. Genetic
algorithms were used as the optimisation tool because GAs are a class of general-purpose
search methods combining directed and stochastic search, achieving a good balance
between exploration and exploitation of the search space. GA was selected as the
optimisation tool not only for its ability to handle large-scale combinatorial problems, but
also for the flexibility it offers in defining the optimisation function; in addition, GA is a very robust
technique, capable of dealing with a very large class of optimisation problems, especially
when the search space is large and the functions are quite irregular, as disassembly problems
tend to be.
References
Gen, M. and Cheng, R. Genetic Algorithms and Engineering Design. John Wiley & Sons, Inc., 1997.
Gungor, A. and Gupta, S. Issues in Environmentally Conscious Manufacturing and Product
Recovery: A Survey. Computers and Industrial Engineering, Vol. 36, No. 4, pp. 811-853, 1999.
Marian, R., Luong, L. and Abhary, K. Assembly Sequence Planning and Optimisation Using GA.
Applied Soft Computing, 2003, (2/3F): 223-253.
Marian, R.M. Optimisation of Assembly Sequences Using Genetic Algorithms. Ph.D. thesis,
School of Advanced Manufacturing and Mechanical Engineering, University of
South Australia, 2003.
Motevallian, B. Products Disassembly: Design, Modelling and Optimisation. Ph.D. thesis,
School of Advanced Manufacturing and Mechanical Engineering, University of
South Australia, 2010.
Motevallian, B., Abhary, K., Luong, L. and Marian, R. Optimisation of Product Design for Ease
of Disassembly. In: Engineering the Future, edited by Aleksandar Lazinica, ISBN 978-9537619-X-X,
pp. 01-17. Sciyo, Vienna, Austria, 2010.
Motevallian, B., Abhary, K., Luong, L. and Marian, R. A Heuristic Method to Generate an
Optimum Disassembly Sequence. In: Advanced Technologies: Research, Development,
Application, edited by Bojan Lalic, pIV pro literature Verlag Robert Mayer-Scholz,
ISBN 3-86611-197-5, pp. 675-690, Mammendorf, Germany, 2006.
Motevallian, B., Abhary, K. and Luong, L. Representation of Precedence Constraints in
Disassembly. Proceedings of the 34th International Conference on Computers & Industrial
Engineering, November 2004, San Francisco, CA, USA.
Hybrid Chaotic Harmony Search and Differential Evolution
Algorithm for Constrained Engineering Problems
Jin Yi, Xinyu Li*, Xiao Mi
State Key Laboratory of Digital Manufacturing Equipment & Technology, Huazhong University
of Science and Technology, Wuhan 430074, China
*Corresponding author (e-mail: lixinyu@mail.hust.edu.cn)
Constrained optimization is a major real-world problem, which consists of an objective
function subject to both linear and nonlinear constraints. In this paper, a hybrid chaotic
harmony search and differential evolution algorithm (CDEHS) is proposed to solve
constrained engineering problems. Several well-known constrained engineering problems
are tested with the new approach. The numerical results obtained reflect the superiority
of the proposed CDEHS algorithm in terms of efficiency, accuracy and robustness when
compared with other state-of-the-art algorithms reported in the literature.
Keywords: Harmony Search; Constrained Optimization; Chaotic Sequence; Differential
Evolution
1. Introduction
A general constrained optimization problem with m variables and a number of constraints
can be stated as follows:

Minimize f(x)
s.t. gj(x) ≤ 0, j = 1, 2, …, ng
     hk(x) = 0, k = 1, 2, …, nh    (1)
     Li ≤ xi ≤ Ui, i = 1, 2, …, m

where x = (x1, x2, …, xm)^T is the solution vector and f(x) is the objective function. gj(x)
and hk(x), which define the feasible region, are the inequality and equality constraints,
respectively. The values of Li and Ui are the lower and upper bounds of xi, respectively.
When algorithms are applied to constrained optimization problems, some difficulties can
be encountered because the algorithms are blind to constraints: many randomly generated solution
vectors will fall in the region forbidden by the constraints. It is necessary to deal with the
constraints before the algorithms are applied. The penalty function method is one of the most
common constraint-handling techniques (Smith et al. 1997). The idea is to transform the
constrained optimization problem into an unconstrained one by adding a certain value to the
objective function value based on the constraint violation present in a solution.
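A minimal sketch of this static-penalty idea, assuming a single penalty weight mu (illustrative only; many penalty schemes exist):

```python
# Static-penalty transformation: minimize F(x) = f(x) + mu * (total violation).
# `mu` is an assumed tuning parameter; equality constraints h(x) = 0 are
# relaxed to |h(x)| <= eps, as is common practice.

def penalized(f, inequalities, equalities, mu=1e6, eps=1e-4):
    def F(x):
        viol = sum(max(0.0, g(x)) for g in inequalities)            # g(x) <= 0
        viol += sum(max(0.0, abs(h(x)) - eps) for h in equalities)  # h(x) = 0
        return f(x) + mu * viol
    return F

# Toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
F = penalized(lambda x: x * x, [lambda x: 1.0 - x], [])
print(F(2.0))             # 4.0 (feasible: no penalty)
print(F(0.5) > F(1.0))    # True (infeasible point is heavily penalized)
```

Any unconstrained optimizer, including the HS variant below, can then minimize F directly.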
In this paper, we present a new variant of the harmony search algorithm, called CDEHS, for
constrained engineering problems. In the proposed algorithm, a novel differential operator
is embedded into the HS structure to strengthen the global search ability. At the same time,
sequences generated from a chaotic system are adopted to help improve the diversity of the
solution vectors. Numerical examples are then tested to demonstrate the efficiency, accuracy
and robustness of the new approach. The rest of the paper is organized as follows. In Section
2, the new approach is proposed. In Section 3, several constrained engineering problems are
tested and the results are compared with those in the previous literature. Finally, Section 4
gives the concluding remarks.
2. Proposed CDEHS algorithm
2.1 HS algorithm
The harmony search algorithm is inspired by the musical improvisation process. It is simple
in concept, has few parameters, and is easy to implement (Zarei et al. 2009). In the basic HS
algorithm, each solution is called a "harmony" and is represented by an n-dimensional real vector.
The steps in the harmony search procedure are as follows: 1) Initialize the problem and
algorithm parameters; 2) Initialize the harmony memory; 3) Improvise a new harmony; 4)
Update the harmony memory; 5) Check the stopping criterion.
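One improvisation step (step 3) can be sketched as follows; the HMCR and PAR values follow Table 1 later in the paper, while the bandwidth bw is an assumed parameter not listed there.

```python
import random

def improvise(harmony_memory, bounds, rng, hmcr=0.98, par=0.75, bw=0.01):
    """One improvisation step of basic HS (illustrative sketch)."""
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if rng.random() < hmcr:                     # memory consideration
            x = rng.choice(harmony_memory)[i]
            if rng.random() < par:                  # pitch adjustment
                x += rng.uniform(-bw, bw)
        else:                                       # random selection
            x = rng.uniform(lo, hi)
        new.append(min(max(x, lo), hi))             # keep within bounds
    return new

hm = [[0.2, 0.5], [0.3, 0.4]]                       # toy harmony memory
v = improvise(hm, [(0.0, 1.0), (0.0, 1.0)], random.Random(0))
assert all(0.0 <= x <= 1.0 for x in v)
```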
2.2 DE algorithm with intersect mutation operator
Zhou et al. (2012)proposed a novel variant of DE named IMDE to improve the global
search ability.The new algorithm has a few modifications based on the traditional DE,
especially in the mutation operation. First of all, the individuals are divided into two parts: the
better part and the worse part, according to their fitness values. Then, novel mutation,
crossover and selection operations are used to generate the next generation.Since the
individuals have been divided into two parts, novel mutation and crossover operations for the
two parts are different.
For the better part, vectors are mutated with one individual (wr1) chosen from the worse
part and the left two individuals (br1 and br2) chosen from the better part,
Mi^(j+1) = Xwr1^i + F (Xbr1^i − Xbr2^i),   br1 ≠ br2 ≠ wr1 ≠ i    (2)
For the worse part, vectors are mutated with one individual (br1) chosen from the better
part and the left two individuals (wr1 and wr2) chosen from the worse part,
Mi^(j+1) = Xbr1^i + F (Xwr1^i − Xwr2^i),   br1 ≠ wr1 ≠ wr2 ≠ i    (3)
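Equations (2) and (3) can be sketched for real-valued vectors as follows; this is illustrative, and the index-distinctness bookkeeping of the original is simplified by sampling without replacement.

```python
import random

def intersect_mutation(better, worse, rng, F=0.5):
    """Sketch of the intersect mutation of Eqs. (2)-(3); `better` and
    `worse` are the two fitness-sorted sub-populations."""
    def mutate(base_pool, diff_pool):
        base = rng.choice(base_pool)          # X_wr1 in Eq. (2) / X_br1 in Eq. (3)
        r1, r2 = rng.sample(diff_pool, 2)     # two distinct difference vectors
        return [b + F * (x - y) for b, x, y in zip(base, r1, r2)]

    mutant_for_better = mutate(worse, better)   # Eq. (2)
    mutant_for_worse = mutate(better, worse)    # Eq. (3)
    return mutant_for_better, mutant_for_worse

better = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]   # toy sub-populations
worse = [[3.0, 4.0], [3.2, 4.2]]
mb, mw = intersect_mutation(better, worse, random.Random(1))
assert len(mb) == len(mw) == 2
```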
2.3 Procedure of proposed method
Inspired by the work of Zhou et al., a new variant of HS is proposed. The intersect
mutation operator of the DE algorithm is embedded into the HS structure to improve the
exploitation ability. Meanwhile, a chaotic sequence (Alatas 2010) is adopted to increase
the diversity of the solution vectors. The steps of the proposed method are as follows.
Step 1: Initialize the optimization problem and HS parameters.
The optimization problem is defined as minimizing a function f(x) such that li ≤ xi ≤ ui,
where x is a solution vector and li and ui are the lower and upper bounds for each
variable, respectively. Moreover, the parameters of HS are specified in this step too; they
are the harmony memory size (HMS), the harmony memory consideration rate (HMCR), the
pitch adjusting rate (PAR) and the maximum iteration number (NI).
Step 2: Initialize the harmony memory.
The initial HM is randomly generated from the chaotic sequence within the ranges [li, ui].
Step 3: Divide the harmony memory into two groups.
In this step, the harmony memory matrix is divided into two groups according to their
fitness values, the better group (Group B) and the worse group (Group W).
Step 4: Improvise a new harmony.
A new harmony, Xnew = (x1, x2, …, xm), is improvised in this step.
Step 5: Update the harmony memory.
Calculate the fitness of the improvised harmony. If the new fitness is better than the fitness of
the worst harmony, the new harmony is included in the harmony memory and the worst
harmony is excluded from it. The individuals in the HM are then sorted again in
ascending order of their fitness values.
Step 6: Check the stopping criterion.
The CDEHS algorithm is terminated when the computation reaches the maximum iteration
number NI. Otherwise, it returns to Step 4.
The detailed pseudo code explaining the steps of the CDEHS algorithm is shown in Fig. 1
for easier understanding and implementation of the proposed algorithm.
Figure 1. Pseudo code of the CDEHS algorithm
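The steps above can also be compressed into a small Python sketch; the logistic map and the simplified improvisation step are illustrative assumptions (the full algorithm improvises with the intersect mutation of Section 2.2 rather than the simple perturbation shown).

```python
import random

def logistic_map(c):                 # assumed chaotic generator
    return 4.0 * c * (1.0 - c)

def cdehs(f, bounds, hms=10, ni=1000, c=0.7):
    """Compressed sketch of Steps 1-6 of CDEHS (illustrative only)."""
    rng, hm = random.Random(0), []
    for _ in range(hms):                         # Step 2: chaotic initialization
        x = []
        for lo, hi in bounds:
            c = logistic_map(c)
            x.append(lo + c * (hi - lo))
        hm.append(x)
    hm.sort(key=f)
    for _ in range(ni):
        better, worse = hm[:hms // 2], hm[hms // 2:]        # Step 3: split HM
        base = rng.choice(better if rng.random() < 0.5 else worse)
        new = [min(max(v + rng.uniform(-0.01, 0.01), lo), hi)
               for v, (lo, hi) in zip(base, bounds)]        # Step 4 (simplified)
        if f(new) < f(hm[-1]):                   # Step 5: replace worst, re-sort
            hm[-1] = new
            hm.sort(key=f)
    return hm[0]                                 # Step 6: stop after NI rounds

best = cdehs(lambda x: (x[0] - 0.3) ** 2, [(0.0, 1.0)], ni=2000)
assert abs(best[0] - 0.3) < 0.05
```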
3. Experimental results and analysis
The proposed CDEHS algorithm is applied to solve several constrained engineering
problems. These well-known examples have previously been solved by a variety of
optimization techniques, so their results can be compared with the new approach to help test
its effectiveness and efficiency. The parameter settings used in the experiments
are shown in Table 1.
Table 1. CDEHS parameters used in experiments
Parameters | Values
HMS        | 10
HMCR       | 0.98
PAR        | 0.75
NI         | 50,000
3.1. The tension/compression spring design problem
The tension/compression spring design problem was first introduced by Arora (1989). In this
problem, the weight of a spring is minimized subject to constraints on minimum deflection, shear
stress, surge frequency, and limits on the outside diameter. Table 2 shows the comparison of
results; once again, CDEHS obtained the best solution.
Table 2. Experimental results for the tension/compression spring problem
Design variables | CDEHS     | Belegundu | Arora (1989) | Coello (2000) | Majid (2010) | He et al. (2007)
x1 (d) | 0.0516806 | 0.050000  | 0.053396  | 0.051480  | 0.0518606    | 0.051728
x2 (D) | 0.3565153 | 0.315900  | 0.399180  | 0.351661  | 0.3608578    | 0.357644
x3 (P) | 11.3009   | 14.250000 | 9.185400  | 11.63220  | 11.0503394   | 11.244543
g1(x)  | -5.55e-16 | -0.000014 | -0.000019 | -0.002080 | -2.19627e-06 | -0.000845
g2(x)  | -1.25e-13 | -0.003782 | -0.000018 | -0.000110 | -2.84083e-07 | -1.260e-05
g3(x)  | -5.84801  | -3.938302 | -4.123832 | -4.026318 | -4.06187279  | -4.051300
g4(x)  | -1.918    | -0.756067 | -0.698283 | -4.026318 | -0.7248544   | -0.727090
f(x)   | 0.0126652 | 0.0128334 | 0.012730  | 0.012705  | 0.0126658    | 0.0126747
3.2. The pressure vessel design problem
The pressure vessel design problem is taken from Mahdavi et al. (2007); the objective is to minimize
the total cost, which consists of the material, forming and welding costs. A cylindrical vessel
is capped at both ends by hemispherical heads.
Table 3. Experimental results for the pressure vessel design problem (four inequalities)
Design variables | CDEHS       | Zahara (2009) | Mahdavi (2007) | Mun (2012)  | He et al. (2007)
x1 (Ts) | 0.727687    | 0.8036     | 0.75       | 0.744060    | 0.812500
x2 (Th) | 0.360102    | 0.3972     | 0.3750     | 0.367789    | 0.437500
x3 (R)  | 37.704      | 41.6392    | 38.86010   | 38.552312   | 42.0984
x4 (L)  | 239.917     | 182.4120   | 221.36553  | 226.155260  | 176.6366
g1(x)   | -2.0e-07    | -3.656e-05 | -7.0e-08   | -3.784e-07  | -0.000139
g2(x)   | -4.0584e-04 | -3.797e-05 | -0.004275  | -5.648e-08  | -0.035949
g3(x)   | -2.0011     | -1.5912    | -0.0131098 | -0.037896   | -116.382700
g4(x)   | -0.830      | -57.588    | -18.6345   | -13.8447    | -63.253500
f(x)    | 5805.55     | 5930.3137  | 5849.76169 | 5829.54746  | 6059.7143
Table 4. Experimental results for the pressure vessel design problem (six inequalities)
Design variables | CDEHS       | Sandgren | Lee et al. (2005) | Mahdavi (2007) | Majid (2010) | Mun (2012)
x1 (Ts) | 1.10224     | 1.125000 | 1.125000  | 1.125     | 1.125       | 1.11157
x2 (Th) | 0.60000     | 0.625000 | 0.625000  | 0.625000  | 0.625       | 0.60000
x3 (R)  | 57.1108     | 48.97    | 58.27890  | 58.29015  | 58.290138   | 57.5944
x4 (L)  | 50.3312     | 106.72   | 43.7549   | 43.69268  | 43.6927585  | 47.5712
g1(x)   | -0.0214212  | -0.1799  | -0.000022 | 0.00000   | -3.3596e-07 | -1.92e-06
g2(x)   | -0.0551628  | -0.1578  | -0.06902  | -0.06891  | -0.068912   | -0.0505
g3(x)   | -2.3283e-10 | -97.760  | -3.71629  | -2.01500  | -0.070531   | -2.57076
g4(x)   | -189.669    | -133.28  | -196.245  | -196.307  | -196.307    | -192.429
g5(x)   | -0.00224    | -0.025   | -0.025    | -0.025    | -0.025      | -0.01157
g6(x)   | 0           | -0.250   | -0.250    | -0.25     | -0.25       | 0
f(x)    | 7021.91     | 7980.894 | 7198.433  | 7197.730  | 7179.730    | 7032.419
The comparison of results is shown in Table 3. The results obtained by the proposed
CDEHS algorithm are better than the earlier solutions reported in the literature. Another
variant of this problem has two additional inequality constraints. Table 4 shows the comparison
of the best solutions; it is clear that CDEHS finds the best solution.
4. Conclusions
A novel variant of the harmony search algorithm, called CDEHS, is proposed for constrained
engineering problems in this paper. The new algorithm has the basic structure of harmony
search, and the intersect mutation operator of the differential evolution algorithm is embedded into this
structure to improve the exploitation ability; meanwhile, a chaotic sequence is adopted to
increase the diversity of the solution vectors, which helps to enhance the exploration ability of
the algorithm. Several well-known examples are tested; the results show that the proposed
CDEHS algorithm obtained better solutions than previous methods reported in the literature,
which demonstrates that CDEHS is a powerful method for constrained optimization.
Acknowledgments
This research work is supported by the National Basic Research Program of China (973
Program) under grant no. 2011CB706804.
References
A.E. Smith, D.W. Coit, Constraint handling techniques—penalty functions, in: T. Bäck,
D.B.Fogel, Z. Michalewicz (Eds.), Handbook of Evolutionary Computation, Oxford University
Press, Institute of Physics Publishing, 1997. pp. C 5.2:1–C 5.2:6.
Alatas B. Chaotic harmony search algorithms[J]. Applied Mathematics and Computation, 2010,
216(9): 2687-2699.
Arora J S. Introduction to optimum design[J].1989.
Coello C A. Use of a self-adaptive penalty approach for engineering optimization problems [J].
Computers in Industry, 2000, 41(2): 113-127.
He Q, Wang L. A hybrid particle swarm optimization with a feasibility-based rule for constrained
optimization[J]. Applied Mathematics and Computation, 2007, 186(2): 1407-1422.
Jaberipour M, Khorram E. Two improved harmony search algorithms for solving engineering
optimization problems[J]. Communications in Nonlinear Science and Numerical Simulation,
2010, 15(11): 3316-3331.
Lee K S, Geem Z W. A new meta-heuristic algorithm for continuous engineering optimization:
harmony search theory and practice[J]. Computer methods in applied mechanics and
engineering, 2005, 194(36): 3902-3933.
Mahdavi M, Fesanghary M, Damangir E. An improved harmony search algorithm for solving
optimization problems[J].Applied mathematics and computation, 2007, 188(2):1567-1579.
Mun S, Cho Y H. Modified harmony search optimization for constrained design problems[J].
Expert Systems with Applications, 2012, 39(1): 419-423.
Zahara E, Kao Y T. Hybrid Nelder–Mead simplex search and particle swarm optimization for
constrained engineering design problems[J]. Expert Systems with Applications, 2009, 36(2):
3880-3886.
Zarei O, Fesanghary M, Farshi B, et al. Optimization of multi-pass face-milling via harmony
search algorithm[J]. Journal of materials processing technology, 2009, 209(5): 2386-2392.
Zhou Y, Li X, Gao L. A differential evolution algorithm with intersect mutation operator[J].
Applied Soft Computing, 2012.
Multi-objective Optimization for Milling Operation by Using
Genetic Algorithm
A. Gjelaj 1*, J. Balic 2
1 Faculty of Technical Applied Sciences of Mitrovica – University of Prishtina, Kosova
2 Faculty of Mechanical Engineering of Maribor – University of Maribor, Slovenia
*Corresponding author (e-mail: afrim.gjelaj@uni-pr.edu)
This paper presents an analysis of the cutting power and cutting force in the end
milling process. Results for cutting power and cutting force are obtained both
theoretically and experimentally. To achieve the best values of cutting power
and cutting force, a multi-objective genetic algorithm is utilized as the optimization
method. The main machining parameters are also optimized for the cutting power Pc,
cutting force Fc and material removal rate MRR in the end milling operation.
1. Introduction
Nowadays, artificial intelligence has wide applications in manufacturing. Its application
to metal cutting has a major influence, for example in optimizing the cutting
parameters and, at the same time, minimizing errors. In this paper, the cutting
power, cutting force and material removal rate in the end milling operation are investigated, and
artificial intelligence, in the form of a multi-objective genetic algorithm, is applied to find the best
solution of optimal machining parameters.
Yao Yunping et al. (2010) proposed the online optimization of machining parameters for
OPTIMILL-controlled machine tools. The proposed system could shorten the working time as well
as protect the milling tools and, at the same time, reduce the cost of production and raise productivity.
H. Ganesan et al. (2011) discussed the optimization of machining parameters for
continuous profile machining with respect to reducing the time, cutting force and cutting power
in the turning operation. Due to the complexity of machining, the authors also
compared results obtained with a genetic algorithm and with particle swarm
optimization. M. Janardhan and A. Gopala Krishna (2012) presented the multi-objective
optimization of cutting parameters for the responses surface roughness and metal removal rate in
the grinding operation. In the present work, the cutting force, cutting power and material removal rate in
end milling of the workpiece material 42CrMo4 are investigated. A tool
with four teeth and a diameter of d = 8 mm is used to achieve the optimal results for
cutting force, cutting power and material removal rate. Three conflicting objectives, cutting
force (Fc), cutting power (Pc) and material removal rate (MRR), are simultaneously optimized.
2. Analytical model for cutting power and cutting force in end milling
Many authors have focused on optimal surface roughness, tool wear, cutting force,
machining parameters, tool geometry, etc. This paper analyses the cutting force (Fc),
cutting power (Pc) and material removal rate (MRR); owing to their importance for the machining
process, the main machining parameters (v, f and a) are investigated. The machining
parameters play an important role in the optimization and determination of the cutting power,
cutting force and material removal rate in the end milling operation. Therefore, in order to
optimize the cutting power Pc and material removal rate MRR in the end milling process,
the workpiece material and tool geometry are taken as constant in this case. The main
machining parameters, namely depth of cut, feed rate and cutting speed, are optimized
by employing a genetic algorithm. The cutting power Pc for the milling operation can be expressed as a
function of the cutting force Fc as follows:
Pc = (Fc · v · Z · β) / 3600    (1)
where Z is the number of teeth (Z = 4) and β is the helix angle (β = 30°).
The cutting force Fc is given by the following expression:
Fc = b · v^β1 · f^β2 · a^β3    (2)
The exponents β1, β2, β3 and the constant b of the cutting force expression are held constant
in our investigation of the cutting power Pc and cutting force Fc. The material removal rate (MRR)
can be expressed analytically as a function of the cutting speed (v), depth of cut (a) and
feed rate (f) as follows:
MRR = 1000 · v · f · a    (3)
Several factors constrain the cutting parameters: feed rate, cutting speed,
depth of cut, workpiece material, tool geometry, coolant system, etc. In this work only
the main parameters are taken as constraints, expressed as follows:
vmin ≤ v ≤ vmax
fmin ≤ f ≤ fmax    (4)
amin ≤ a ≤ amax
The constraints in this case are given in order to analyse the cutting force Fc, cutting power
Pc and material removal rate MRR.
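Equations (1)-(3) can be transcribed directly; note that the constant b and the exponents β1-β3 of Eq. (2) are not specified here, so the defaults below are placeholders used purely for illustration.

```python
# Direct transcription of Eqs. (1)-(3); b, beta1..beta3 are placeholder values.

def cutting_force(v, f, a, b=1.0, beta1=0.35, beta2=0.5, beta3=0.4):
    return b * v**beta1 * f**beta2 * a**beta3      # Eq. (2)

def cutting_power(fc, v, z=4, beta=30.0):
    return fc * v * z * beta / 3600.0              # Eq. (1)

def material_removal_rate(v, f, a):
    return 1000.0 * v * f * a                      # Eq. (3), mm^3/min

# Experiment 1 of Table 1: v = 10 m/min, f = 0.65 mm/rev, a = 0.8 mm
print(material_removal_rate(10, 0.65, 0.8))        # 5200.0
```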
Table 1. The results of the main cutting force Fc without optimization
No. of experiment | v [m/min] | f [mm/rev] | a [mm] | Fc measured | Fc theoretical
1 | 10 | 0.65 | 0.8 | 3.31 | 3.55
2 | 15 | 0.65 | 0.8 | 2.83 | 4.099
3 | 12 | 0.4  | 0.5 | 3.79 | 3.482
4 | 12 | 0.4  | 0.5 | 3.04 | 3.482
The main cutting force Fc, for both theoretical and measured values, is plotted against the
feed rate (f) in Figure 1.
Figure 1. Main cutting force (Fc) as a function of feed rate (f)
From the cutting power (Pc) and the depth of cut (a) it is possible to determine the
volumetric material removal rate (MRR). In Table 2, the cutting power Pc, based on the
obtained results for the cutting force (Fc) with respect to the cutting speed (v) and helix angle (β),
is calculated with the mathematical expression for cutting power as Pc max, Pc mean and Pc min.
Table 2. Obtained results for cutting power Pc
Cutting speed v [m/min] | Number of teeth Z | Helix angle β [°] | Measured cutting force Fc [N] | Pc max [kW] | Pc mean [kW] | Pc min [kW]
12 | 4 | 30 | 3.31 | 16.55 | 13.24 | 11.033
15 | 4 | 30 | 2.83 | 14.15 | 11.32 | 9.433
10 | 4 | 30 | 3.79 | 18.95 | 15.16 | 11.607
12 | 4 | 30 | 3.04 | 15.20 | 12.16 | 10.133
Figure 2 presents the cutting power Pc with respect to the cutting force and cutting
speed. With a minimal cutting speed and maximal feed rate and depth of cut, the cutting power
reaches its maximum, as shown in Figure 2; as the cutting speed increases, the cutting power
decreases.
Figure 2. Cutting power Pc in function of cutting speed v and cutting force Fc
3. Multi-objective optimization problem in machining using a genetic algorithm
Being a population-based approach, genetic algorithms are well suited to solving
multi-objective optimization problems. To find the optimal solution for the treated problem,
multi-objective optimization using genetic algorithms is employed. This is a very
common problem in practice and can be written in the following form:
min f(x) = [f1(x), f2(x), …, fnobj(x)]^T,  nobj ≥ 2    (5)
where x = [x1, x2, …, xn]^T. The number of objective functions must be at least two
(nobj ≥ 2), never less. The problem is subject to the constraints:
g(x) = [g1(x), g2(x), …, gnc(x)]^T ≤ 0    (6)
where the functions f(x) and g(x) are termed the multi-objective and constraint functions, respectively. The vector x denotes the design variables; n denotes the number of variables and nc the number of constraints. In many real problems, multi-objective optimization involves multiple and conflicting objectives. Hence, optimizing the machining variables with respect to a single objective function often results in
unacceptable results with respect to the other objective functions. Therefore, a perfect multi-objective solution that simultaneously optimizes each objective function is almost impossible (Konak et al., 2006). A reasonable approach to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. The definition of optimal cutting parameters plays an important role in the metal cutting process. This work presents a multi-objective optimization technique, based on genetic algorithms, for optimizing the cutting parameters in the milling process. Since the cutting force is already included in the cutting power, two objective functions are suggested for optimization in our case: the cutting power Pc and the material removal rate MRR.
Figure 3. Diagram of Genetic Algorithm process
The first multi-objective GA, the vector evaluated GA, was proposed by Schaffer (1985). Afterwards, several multi-objective evolutionary algorithms were developed, including Multi-Objective Genetic Algorithms (MOGAs). Clearly, the above optimization process is a nonlinear programming problem. The optimization procedure in GAs starts by reading the bounds of the variables and initializing the machining parameters f, a and v. The machining parameters are then passed into the milling model to find near-optimal solutions for the cutting power Pc and the material removal rate MRR. These values are substituted into the GA process to calculate the fitness (objective) function.
The optimization results for the two objective functions, cutting power and material removal rate, are shown in Figure 4. The optimal machining parameters, cutting speed (v), feed rate (f) and depth of cut (a), are obtained from the Pareto front graph after optimization of the objective functions.
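The Pareto-dominance idea behind such a front can be sketched generically (an illustrative snippet, not the authors' MATLAB code); since Pc is minimised while MRR is maximised, the maximised objective is negated to fit a pure minimisation form:

```python
def dominates(p, q):
    # p dominates q if p is no worse in every objective and strictly
    # better in at least one (all objectives minimised; a maximised
    # objective such as MRR is negated first)
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    # keep only the points that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# example with objective pairs (Pc, -MRR)
candidates = [(1.0, -5.0), (2.0, -3.0), (1.5, -4.0), (3.0, -6.0)]
print(pareto_front(candidates))  # -> [(1.0, -5.0), (3.0, -6.0)]
```

In a full optimiser each candidate (v, f, a) would be evaluated to obtain its (Pc, -MRR) pair before the non-dominated set is extracted.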
Figure 4. Pareto front solutions for cutting power Pc (horizontal axis, 400 to 2000) and material removal rate MRR (vertical axis, 1610 to 1690). GA settings: population size 100; number of generations 103; selection strategy: tournament; crossover probability 0.7.
4. Conclusion

This paper presents Multi-Objective Genetic Algorithm (MOGA) optimization for solving the problem of machining operations in the end milling process. The cutting force results obtained experimentally are used in the expression for the cutting power Pc. The material removal rate MRR, the cutting power Pc and the main machining parameters are optimized with the Genetic Algorithm environment in MATLAB. The optimization results are plotted as a Pareto optimal front. The paper also remarks on the advantages of the multi-objective optimization approach over the single-objective one.
Acknowledgement
The author is profoundly grateful to Professor Joze Balic.
References
Konak, A., Coit, D.W. and Smith, A.E. Multi-objective optimization using genetic algorithms: A tutorial. Reliability Engineering and System Safety, 2006, 91(9), 992-1007.
Ganesan, H. Optimization of machining parameters in turning process using genetic algorithm and particle swarm optimization with experimental verification. International Journal of Engineering Science and Technology, 2011, 3(2), 1092-1102.
Janardhan, M. and Krishna, A.G. Multi-objective optimization of cutting parameters for surface roughness and metal removal rate in surface grinding using response surface methodology. International Journal of Advances in Engineering & Technology, 2012, 3(1), 270-283.
Schaffer, J.D. Multi-objective optimization with vector evaluated genetic algorithms. In Genetic Algorithms and Their Applications: Proceedings of the First International Conference on Genetic Algorithms, 1985, 93-100.
Yunping, Y. Optimize CNC milling parameters on-line. International Conference on Measuring Technology and Mechatronics Automation, 2010, 856-859.
Comparative Analysis of MCDM Methods and Implementation of the Scheduling Rule Selection Problem: A Case Study in Robotic Flexible Assembly Cells

K. Abd 1,2,*, K. Abhary 1, R. Marian 1
1 School of Engineering, University of South Australia, Mawson Lakes, SA 5095, Australia
2 School of Industrial Engineering, University of Technology, Baghdad, Iraq
*Corresponding author (e-mail: abdkk001@mymail.unisa.edu.au)
Multi-criteria decision-making (MCDM) is a method for ranking solutions and finding the optimal one when the decision maker has two or more criteria. AHP, ELECTRE and TOPSIS are the most popular and accepted MCDM methods. However, they are not suitable when the information is uncertain and vague. Therefore, the purpose of this study is to implement a new strategy for the scheduling rule selection problem of robotic flexible assembly cells, using the Fuzzy decision-making method. After determining the criteria that affect the scheduling rule selection decisions, Fuzzy decision-making is applied using the MATLAB fuzzy toolbox. The effectiveness of the proposed strategy is demonstrated through a case study. The results of the proposed strategy are analysed and compared with another study by the authors, which combined AHP with TOPSIS for the same problem. The similarities and differences between the AHP-TOPSIS and Fuzzy decision-making methods are also briefly discussed.
1. Introduction

Multi-criteria decision-making (MCDM) is the process of evaluating a number of feasible solutions and determining the optimum one according to a number of criteria that have different effects. Many methods have been proposed to solve MCDM problems; the most well-known are TOPSIS, AHP, ELECTRE, PROMETHEE and SMART. There is no universally accepted MCDM method, but some methods are more suitable than others for particular decision problems (Dagdeviren 2008).
The AHP (Analytic Hierarchy Process) method was originally developed by Saaty (1980). AHP is one of the most popular methods in the area of MCDM; it assists the decision maker in setting criteria priorities. AHP consists of three main stages: constructing a pair-wise comparison matrix, estimating the relative weights of the decision elements and testing the consistency of the judgment matrix (Vaidya and Kumar 2006). The advantage of AHP is that it is easy to use and understand, because the weight assigned to each criterion is given by experts to evaluate the alternatives and determine the best solution.
The TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method was initially presented by Hwang and Yoon (1981). TOPSIS is one of the common methods used to solve MCDM problems. Its basic idea is that the selected alternative should be closest to the ideal solution and farthest from the worst one (Lin et al. 2008, Shyjith et al. 2008). The procedural steps of TOPSIS are as follows: (1) construct the decision matrix; (2) calculate the normalized decision matrix; (3) construct the weighted normalized decision matrix; (4) identify the ideal and non-ideal solutions; (5) calculate the separation measures; (6) calculate the relative closeness to the ideal solution. Similarly to AHP, TOPSIS is designed to identify the best alternative simply and quickly.
Several studies have combined AHP with TOPSIS to solve different decision-making problems. For example, Lin et al. (2008) used AHP and TOPSIS to improve the customer-driven product design process. Rao (2007) presented the TOPSIS method in which AHP was used to find the weights of relative importance of attributes. Shyjith et al. (2008) combined AHP-TOPSIS for the selection of a maintenance strategy for a textile industry. Chakladar and Chakraborty (2008) developed a methodology based on TOPSIS and AHP to select the most appropriate non-traditional machining processes. Athawale and Chakraborty (2010) applied AHP and TOPSIS to solve the conveyor belt material selection
problem. Accordingly, AHP combined with TOPSIS has attracted significant attention for many industrial problems; for instance, the use of the AHP-TOPSIS method to rank scheduling rules in Robotic Flexible Assembly Cells (RFACs) has been presented by Abd et al. (2011b). The drawback of this method is that it still cannot reflect the human thinking style.
In order to deal with the vagueness of human thought, fuzzy set theory was first introduced by Zadeh (1965). Its main feature is the ability to represent vague knowledge. One branch of fuzzy set theory is fuzzy decision-making (Wang 1997). The aim of this study is to first apply the Fuzzy decision-making method to the scheduling rule selection problem in RFACs, and then compare its outcomes with AHP-TOPSIS.
2. The proposed strategy

The proposed strategy consists of five main stages. The first two stages determine the alternative scheduling rules and the criteria to be used in evaluation. Third, the hierarchical tree, known as a decision hierarchy, is constructed. Fourth, Fuzzy decision-making is applied to rank the scheduling rules and select the optimal one. For a single criterion, the optimum scheduling rule is the one having the highest normalised value; multi-criteria optimisation is not as straightforward as single-criterion optimisation. To overcome this problem, a multiple performance characteristics index (MPCI) based on Fuzzy decision-making is developed to derive the optimal solution. The last stage is to rank and compare the alternative scheduling rules using both the Fuzzy decision-making and the AHP-TOPSIS results. A schematic representation of the optimisation strategy is shown in Figure 1. The application of the Fuzzy decision-making method can be achieved via four steps:
A. Fuzzification: in this interface, the real-world variables (crisp input data) are converted into linguistic variables (fuzzy values). This step is done using the membership functions of the input variables.
B. Knowledge base: in this component, the membership functions are determined. These membership functions reflect a human reasoning mechanism. In this study, three elements - linguistic variables, membership functions and fuzzy rules - are prepared to establish the knowledge base.
C. Decision rules: in this component, fuzzy input values are mapped to fuzzy outputs using IF-THEN type fuzzy rules. These rules reflect human experts' knowledge of the system. The number of decision rules depends on the number of input variables and their linguistic values.
D. Defuzzification: in this interface, the fuzzy outputs are translated into a crisp value. The defuzzification process is achieved using the membership functions of the output variable. Several methods have been proposed for defuzzification; the well-known Centre of Gravity method is used here to transform the fuzzy inference output.
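Steps A-D above can be illustrated with a small self-contained sketch (an illustrative snippet, not the MATLAB fuzzy toolbox implementation): a triangular membership function handles fuzzification, rule firing strengths clip the output sets, and centre-of-gravity defuzzification returns the crisp value.

```python
def tri(x, a, b, c):
    # triangular membership function: rises from a to a peak at b and
    # falls back to zero at c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid(output_sets, xs):
    # Mamdani-style aggregation: clip each output set at its rule firing
    # strength, combine by max, then defuzzify by centre of gravity
    agg = [max(min(strength, tri(x, a, b, c)) for (a, b, c), strength in output_sets)
           for x in xs]
    total = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / total if total else 0.0

# one rule fully fired on a symmetric output set centred at 0.5
xs = [i / 100 for i in range(101)]
print(round(centroid([((0.0, 0.5, 1.0), 1.0)], xs), 3))  # -> 0.5
```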
Figure 1. Schematic representation of the optimization procedure. (Stages: determine alternative scheduling rules; determine the criteria to be used in evaluation; structure the decision hierarchy; apply the Fuzzy decision-making method via the fuzzification interface, knowledge base, decision rules and defuzzification interface; normalise the objective function values; calculate the MPCI; compare the Fuzzy decision-making results with AHP-TOPSIS and make the decision.)
3. Application in robotic flexible assembly cells
Robotic flexible assembly cells (RFACs) are highly modern systems, composed of industrial robot(s), assembly stations and an automated material handling system, all monitored by computer numerical control [1-3]. In this study, the application addresses the scheduling rule selection problem for RFACs. Four alternative scheduling rules are considered, namely shortest processing time (SPT), longest processing time (LPT), earliest due date (EDD) and random (RAND). In addition, two main sets of criteria are used, namely time-based measures and utilisation-based measures. The first set of criteria is divided into two sub-criteria: scheduling length (Tmax) and total transportation time (Ttran). The second set is also divided into two sub-criteria: utilisation rate (UR) and workload rate (WR). The performance values of these four scheduling rules according to the defined criteria are given in Table 1, which is taken from our previous study (Abd et al. 2011a). After determining the alternative scheduling rules and the evaluation criteria, a decision hierarchy tree can be described simply by four levels, Figure 2. The first level represents the goal of the decision problem, described as "Selection of the best Scheduling Rule". The criteria are on the second level, the sub-criteria on the third level, and the alternatives, or decision options, on the bottom level of the hierarchy.
Table 1. Performance values of scheduling rules

Criteria      Max/Min   SPT    LPT    EDD    RAND
Tmax (Sec)    Min       296    266    300    257
Ttran (Sec)   Min       67     73     69     68.5
UR (%)        Max       62%    71%    62%    72.5%
WR (%)        Max       79%    85%    82%    74%
Figure 2. The decision hierarchy of scheduling rule selection. (Goal: "Selection of the best Scheduling Rule"; criteria: time-based measures and utilisation-based measures; sub-criteria: min Tmax, min Ttran, max UR, max WR; alternatives: SPT, LPT, EDD, RAND.)

3.1 Application using AHP-TOPSIS method
In this study, the methodology is divided into three stages: (1) problem definition, (2) AHP computation and (3) TOPSIS computation. In the first stage, the alternative scheduling rules and the selected criteria were presented. Second, the weights of the selected criteria were obtained in the AHP computation stage: a pair-wise comparison matrix was created based on the judgement of the decision maker, and the relative weight of each criterion was computed from this matrix. In the last stage, the alternative scheduling rules were analysed and ranked using the TOPSIS method, and the most suitable scheduling rule was selected according to the ranking of the alternatives. The results of applying AHP-TOPSIS are shown in Table 2.
Table 2. The results of applying AHP-TOPSIS

Decision matrix
Alternative   Tmax    Ttran   UR      WR
SPT           0.003   0.015   62%     79%
LPT           0.004   0.014   71%     85%
EDD           0.003   0.014   62%     82%
RAND          0.004   0.015   72.5%   74%

Normalised decision matrix
Alternative   Tmax    Ttran   UR      WR
SPT           0.469   0.517   0.462   0.493
LPT           0.522   0.474   0.529   0.531
EDD           0.463   0.502   0.462   0.512
RAND          0.541   0.506   0.541   0.462

Weighted normalised decision matrix
Alternative   Tmax    Ttran   UR      WR
SPT           0.171   0.121   0.057   0.137
LPT           0.191   0.111   0.066   0.147
EDD           0.169   0.117   0.057   0.142
RAND          0.197   0.119   0.067   0.128

Separation measures and relative closeness coefficient
Alternative   S1      S2      C
SPT           0.030   0.013   0.311197
LPT           0.012   0.030   0.713385
EDD           0.031   0.015   0.333676
RAND          0.019   0.031   0.614983
3.2 Application using Fuzzy decision-making method

In order to rank the alternative scheduling rules, Fuzzy decision-making is implemented using the MATLAB fuzzy toolbox. The fuzzy system contains four input parameters and one output parameter. The input parameters are Tmax, Ttran, UR and WR; MPCI is the output parameter. Three steps are used to generate the MPCI. First, fuzzy sets for the inputs and the output are determined. Tmax, Ttran, UR and WR are each broken down into a set of linguistic values: low (L), medium (M) and high (H); seven linguistic values are used for the MPCI: tiny (T), very small (VS), small (S), medium (M), large (L), very large (VL) and huge (H). Table 3 shows the different linguistic values of the inputs/output and their numerical ranges. Second, the membership plots for the input/output parameters, using triangular and trapezoidal shapes, are shown in Figure 3. Third, the fuzzy rules are structured and written in the fuzzy toolbox to control the output parameter (MPCI). The fuzzy rules are derived directly from the formula n^m, where n and m denote the number of input parameters and linguistic values respectively; thus, the number of fuzzy rules is 4^3 = 64. The MATLAB fuzzy toolbox displays a graphical representation of the input values and the fuzzy output through all the fuzzy rules. The output (MPCI) in the rule viewer can be interpreted easily, as follows: IF Tmax is (0.09), Ttran is (1.00), UR is (0.01) and WR is (0.45) THEN MPCI will be (0.36). The final results obtained from AHP-TOPSIS and the Fuzzy decision method are presented in Figure 4.
Table 3. Input and output variables with their fuzzy values

Input (Tmax, Ttran, UR, WR)          Output (MPCI)
Linguistic value   Range             Linguistic value   Range
Low                [0 – 0.25]        Tiny               [0 – 0.167]
Medium             [0.25 – 0.75]     Very Small         [0 – 0.334]
High               [0.75 – 1]        Small              [0.167 – 0.5]
                                     Medium             [0.334 – 0.66]
                                     Large              [0.5 – 0.834]
                                     Very Large         [0.667 – 1]
                                     Huge               [0.834 – 1]

Figure 3. Membership functions for fuzzy input/output variables
Figure 4. Preference ranking of scheduling rules depending on the decision method (bar chart of scores, 0 to 0.75, for SPT, LPT, EDD and RAND under AHP-TOPSIS and Fuzzy decision-making).
4. Discussion and conclusions
In this paper, the Fuzzy decision-making method is utilised for the scheduling rule selection problem in robotic flexible assembly cells. The decision criteria were scheduling length (Tmax), total transportation time (Ttran), utilisation rate (UR) and workload rate (WR). These criteria were evaluated to rank the scheduling rules and select the best one. The results of Fuzzy decision-making are compared with AHP-TOPSIS. According to the AHP-TOPSIS method, the final ranking of the scheduling rules is LPT → RAND → EDD → SPT in descending order of preference: LPT is the superior scheduling rule for RFACs, while SPT is the worst one in this numerical example. When the Fuzzy decision-making method was applied, LPT was again the best of the four alternatives, EDD was the worst, and the ranking order was LPT → RAND → SPT → EDD. The ranking results of the two methods differ slightly because, in AHP-TOPSIS, the weight assigned to each criterion is subjective, while Fuzzy decision-making eliminates this weakness. Therefore, the Fuzzy decision-making method may have more potential for real-world applications.
References
Abd, K., Abhary, K. and Marian, R. Scheduling and performance evaluation of robotic flexible assembly cells under different dispatching rules. Advances in Mechanical Engineering, 2011a, 1(1), 31-40.
Abd, K., Abhary, K. and Marian, R. An MCDM approach to selection of scheduling rule in robotic flexible assembly cells. Presented at the International Conference on Mechanical, Aeronautical and Manufacturing Engineering, Venice, Italy, 2011b.
Athawale, V.M. and Chakraborty, S. A combined TOPSIS-AHP method for conveyor belt material selection. Journal of The Institution of Engineers, 2010, 90, 8-13.
Chakladar, N.D. and Chakraborty, S. A combined TOPSIS-AHP-method-based approach for non-traditional machining processes selection. Journal of Engineering Manufacture, 2008, 222, 1613-1623.
Dagdeviren, M. Decision making in equipment selection: an integrated approach with AHP and PROMETHEE. Journal of Intelligent Manufacturing, 2008, 19, 397-406.
Hwang, C. and Yoon, K. Multiple Attribute Decision Making: Methods and Applications. New York: Springer, 1981.
Lin, M.C., Wang, C.C., Chen, M.S. and Chang, C.A. Using AHP and TOPSIS approaches in customer-driven product design process. Computers in Industry, 2008, 59, 17-31.
Rao, R.V. Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods. London: Springer-Verlag, 2007.
Saaty, T.L. The Analytic Hierarchy Process. New York: McGraw-Hill, 1980.
Shyjith, K., Ilangkumaran, M. and Kumanan, S. Multi-criteria decision-making approach to evaluate optimum maintenance strategy in textile industry. Journal of Quality in Maintenance Engineering, 2008, 14, 375-386.
Vaidya, O.S. and Kumar, S. Analytic hierarchy process: An overview of applications. European Journal of Operational Research, 2006, 169, 1-29.
Wang, L.X. A Course in Fuzzy Systems and Control. Prentice-Hall, USA, 1997.
On Simulation and Optimization of the Process of Laser
Selective Sintering
Gladchenko Olekzander*
Department of Laser Technology
National Technical University of Ukraine “Kiev Polytechnic Institute”
*Corresponding author (e-mail: glad23@i.ua)
The method of selective laser sintering (SLS), based on laser sintering of powder materials, is now under intense development: a mixture of materials with different melting points is irradiated by a laser beam. As a result of several stages of transformation, a material with a complex synthetic structure is formed, in which metal and ceramic particles are bound by an organic-based matrix; in this way it is possible to perform rapid prototyping of components from almost any material. An empirical search for suitable working conditions in laser processing that yield a given structure-phase state is extremely difficult and time-consuming. In fact, layers of satisfactory quality are generated only in a very narrow range of irradiation conditions. In this regard, the urgent task is to develop a method for optimizing the parameters of laser processing based on mathematical modeling of the physical processes in the area of laser exposure. Creating a computer package that predicts characteristics of the structure and the mechanical properties may be promising from the point of view of developing automatic control systems for component manufacturing using SLS technology.
On the Problem of Optimization of Laser Material Cladding
Baybakova Olena*
Department of laser Technology
National Technical University of Ukraine (KPI)
*Corresponding author (e-mail: baybakova09@rambler.ru)
Laser cladding is a process in which filler material is supplied locally and melted onto the base material for a short time. The high degree of automation of process control allows adjustment not only of the size of the molten zone but also of the thermal cycle of the process.
The need to design and build coatings with enhanced properties arises in various fields of modern engineering ever more often. In order to save metal used in the manufacturing of parts and to reduce the weight of structures, special alloys and coatings have been developed to perform certain functions.
Functional coatings, in this case, are an attempt to help the design engineer optimize the material and its processing at a higher level, taking into account the external factors acting on each part of the mechanism separately.
Modified ABC Algorithm for the Optimal Design of Multiplier-less Cosine Modulated Filter Banks

Shaeen K.*, Elizabeth Elias
National Institute of Technology Calicut – 673601, Kerala, India
*Corresponding author (e-mail: shaeen_k@yahoo.com, elizabeth@nitc.ac.in)
This paper presents the design of totally multiplier-less near-perfect-reconstruction cosine modulated filter banks. The prototype filter is designed using the Kaiser Window Approach (KWA) and also using the Parks-McClellan Algorithm (PMA), and the coefficients are represented using the Canonic Signed Digit (CSD) representation. Both filters are optimized in the discrete space using a modified Artificial Bee Colony (ABC) algorithm. The two filters are compared in terms of performance and design complexity.
1. Introduction
M-channel maximally decimated filter banks are widely used in different multirate applications [Vaidyanathan, 1998]. Cosine Modulated Filter Banks (CMFB) are one class among them. CMFB has an easy and efficient design method: only the prototype filter needs to be designed, and all the analysis and synthesis filters are obtained from it by cosine modulation [Vaidyanathan, 1998]. In near-perfect-reconstruction CMFB, the prototype filter is optimized to obtain a flat overall magnitude response and good stopband attenuation. Different approaches exist for the design of the prototype filter. A linear-phase FIR low-pass filter design using the Parks-McClellan algorithm was proposed by Creusere and Mitra in 1995, where the filter order is initially fixed and the passband edge is moved in small steps towards zero to obtain the desired filter bank. In another approach [Lin and Vaidyanathan, 1998], a Kaiser Window based prototype filter is proposed, in which the cut-off frequency is optimized to obtain the desired filters. Inspired by these two approaches, different prototype filter design methods have been reported, using different types of windows and different optimization algorithms [Cruz-Roldan et al., 2009].
This paper proposes an approximate perfect reconstruction multiplier-less CMFB that permits small amplitude and aliasing errors, causing only small distortions to the signal. The coefficients of the CMFB are real, and these coefficients are quantized into signed power-of-two (SPT) terms [Lim et al., 1999]. Canonic signed digit (CSD) representation is a minimal signed digit representation among the various SPT forms. For any 2's complement number, the corresponding CSD representation is unique and adjacent digits are never both nonzero; for an N-bit CSD representation, the maximum number of nonzero digits is N/2. The multipliers in digital filters can be implemented using adders and shifters, and CSD coefficients lead to a minimum number of adders and shifters, thereby reducing the implementation complexity. However, direct rounding of the coefficients to finite-length CSD degrades the filter performance. To improve the frequency response characteristics of the filters, optimization in the discrete domain is required. Conventional gradient-based approaches cannot be deployed here, as the search space is discrete. Hence metaheuristic algorithms, with properly tuned parameters, are used to obtain global solutions for this problem [Xin-She Yang, 2011].
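The CSD recoding itself can be sketched for non-negative integers (an illustrative helper, not the authors' LUT-based conversion; fractional filter coefficients would be scaled to integers first):

```python
def to_csd(n):
    # canonic signed digit recoding of a non-negative integer,
    # least-significant digit first; each digit is -1, 0 or +1 and
    # no two adjacent digits are both nonzero
    digits = []
    while n:
        if n % 2:
            d = 2 - (n % 4)  # +1 when n % 4 == 1, -1 when n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# 7 = 8 - 1: two nonzero digits in CSD versus three in plain binary
print(to_csd(7))  # -> [-1, 0, 0, 1]
```

Each nonzero digit costs one shift-and-add, which is why minimising the nonzero count reduces hardware complexity.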
2. Cosine modulated filter banks
A linear-phase FIR filter with good stopband attenuation, providing a flat amplitude distortion function, is initially designed. All the other analysis and synthesis filters are generated from this prototype filter by modulation. The coefficients of the analysis and synthesis filters are given by equations (1) and (2) respectively [Vaidyanathan, 1998].
hk(n) = 2 p(n) cos( (π/M)(k + 0.5)(n − N/2) + (−1)^k (π/4) )        (1)

fk(n) = 2 p(n) cos( (π/M)(k + 0.5)(n − N/2) − (−1)^k (π/4) )        (2)

The amplitude distortion error is given by e_am = max_ω | |T0(e^jω)| − 1 |, where T0(e^jω) = (1/M) Σ_{k=0}^{M−1} Hk(e^jω) Fk(e^jω) is the overall transfer function. The worst-case aliasing distortion is given by e_a = max_ω |Tl(e^jω)|, where Tl(e^jω) = (1/M) Σ_{k=0}^{M−1} Hk(e^{j(ω − 2πl/M)}) Fk(e^jω), l = 1, ..., M−1.
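A minimal sketch of applying equation (1) to generate the analysis filters by cosine modulation of a prototype impulse response (an illustrative snippet; the prototype list p and channel count M are assumed inputs):

```python
import math

def cmfb_analysis(p, M):
    # h_k(n) = 2 p(n) cos((pi/M)(k + 0.5)(n - N/2) + (-1)**k * pi/4),
    # following equation (1); p is the prototype impulse response of
    # length N + 1
    N = len(p) - 1
    return [[2 * p[n] * math.cos(math.pi / M * (k + 0.5) * (n - N / 2)
                                 + (-1) ** k * math.pi / 4)
             for n in range(len(p))]
            for k in range(M)]

filters = cmfb_analysis([0.1, 0.4, 0.4, 0.1], M=4)
print(len(filters), len(filters[0]))  # -> 4 4
```

The synthesis filters of equation (2) differ only in the sign of the (−1)^k π/4 phase term.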
3. Design of cosine modulated filter bank
3.1 Design of continuous coefficient filter bank
Since the prototype filter is cosine modulated to obtain the analysis and synthesis filters, the filter bank design problem reduces to the optimal design of the prototype filter. If the prototype filter has a linear phase response, then the overall filter bank has a linear phase response. Also, adjacent-channel aliasing is eliminated; what remains is the aliasing between non-adjacent channels. The 3 dB cut-off frequency of the prototype filter should be at ωc = π/(2M); this condition reduces the amplitude distortion around the transition frequencies ωk, where k = 0, 1, ..., M−1 [Vaidyanathan, 1998]. In this paper, the prototype filter is designed using the Kaiser Window approach (KWA) and also using the Parks-McClellan algorithm (PMA). A closed-form expression exists for the design of CMFB using a Kaiser Window for a given stopband attenuation, cosine roll-off factor and number of channels [Berger and Antoniou, 2007]. For the same specifications, the prototype filter is also designed using the Parks-McClellan algorithm. The performances of both CMFBs are given in Table 1.
Design Specifications
Number of Channels: 8
Cosine Roll-off Factor: 0.5
Stopband Attenuation: 70 dB
Table 1. Performance comparison of the prototype filters using continuous coefficients

Method                   Filter order   Max. stopband attenuation (dB)   Error in amplitude distortion   Worst aliasing distortion
Kaiser Window Approach   168            69.46                            0.0043                          1.11 x 10^-5
Parks-McClellan Method   154            68.51                            0.005                           3.2 x 10^-4
From Table 1, the filter order of the PMA prototype filter is lower than that of the KWA design, but the performance in terms of stopband attenuation, aliasing and amplitude distortion is better for the KWA-based prototype filter.
3.2 Design of CSD coefficient filter bank
The coefficients of both prototype filters are converted to a finite-word-length CSD representation with a restricted number of signed power-of-two terms. For fast conversion to the CSD representation, a look-up table (LUT) is used. The LUT consists of four fields: an index, the CSD equivalent, the corresponding decimal value and the number of nonzero digits in the CSD equivalent. A typical CSD look-up table entry is shown in Table 2.
Table 2. A typical CSD look-up table entry

Index   CSD equivalent       Decimal equivalent   No. of nonzero digits
8814    00100010100-10-101   0.5379               6
The performances of both filters with 16-bit finite-word-length CSD coefficients are given in Table 3. As a result of the finite rounding of the coefficients, the stopband attenuation reduces by about 10 dB, which leads to more aliasing between non-adjacent channels.
Table 3. Performance comparison of prototype filters using CSD rounded coefficients

Method                   Filter order   Max. stopband attenuation (dB)   Error in amplitude distortion   Worst aliasing distortion
Kaiser Window Approach   168            59.58                            0.004                           2.11 x 10^-4
Parks-McClellan Method   154            58.32                            0.005                           2.2 x 10^-4
3.3. Objective function formulation
The performance of the CMFB is degraded in the CSD space. This calls for discrete optimization using metaheuristic algorithms. The optimization goal for the multiplier-less CMFB is to minimize the following objective functions.
F = max
min ∅ =
)
(
F = max
(
)
F + α F
+
− 1
0<
>
<
(4)
(5)
(6)
The objective function given in equation (4) minimizes overall amplitude distortion and
equation (5) is to minimize the maximum deviation in the stopband of the filter. Equation (6)
combines the two objective functions, where
and
are the trade off parameters, which
define the relative importance given to each objective function. Since the CSD representation
results in poor stopband attenuation,
is more weighted compared to .
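To make a combined objective of this form concrete, the sketch below evaluates it for a linear phase prototype on a dense frequency grid. It is an illustration only: the grid density, the stopband edge ωs = π/2M assumed for a 2M-channel CMFB, the weights, and the use of the power-complementary condition as the distortion measure are our assumptions, not the paper's exact formulation.

```python
import cmath
import math

def H(h, w):
    # frequency response of an FIR filter with coefficients h at frequency w
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def combined_objective(h, M, alpha=1.0, beta=10.0, grid=200):
    ws = math.pi / (2 * M)                  # assumed stopband edge
    # F1 (eq. 4): peak deviation of the overall amplitude response from 1
    f1 = max(abs(abs(H(h, w)) ** 2 + abs(H(h, math.pi / M - w)) ** 2 - 1.0)
             for w in (i * ws / grid for i in range(grid + 1)))
    # F2 (eq. 5): peak stopband magnitude of the prototype
    f2 = max(abs(H(h, w))
             for w in (ws + i * (math.pi - ws) / grid for i in range(grid + 1)))
    # F (eq. 6): beta > alpha because CSD rounding hurts the stopband most
    return alpha * f1 + beta * f2
```

A metaheuristic can then minimize `combined_objective` over candidate coefficient sets without any gradient information.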
4. Optimization of CMFB using modified ABC algorithm
The ABC algorithm can be used for optimizing multimodal, multivariable functions and converges towards the global minimum [Xin-She Yang, 2011]. The artificial colony of honey bees consists of employee bees, onlooker bees and scout bees. Food sources are the candidate solutions, i.e. the filter coefficients in CSD form, and the fitness function given in equation (6) represents the amount of nectar of a food source. The phases involved in the optimization using the ABC algorithm are given below [Manuel and Elias, 2012].
4.1 Initialization
The parameters of the ABC algorithm, such as the population size, the maximum number of iterations and the control parameter 'limit' (the number of allowed unsuccessful trials), are initialized. Since the prototype filter is a linear phase filter, only half of the coefficients need to be optimized. The initial food source is formed by the CSD rounded filter coefficients, and the initial population of food sources is generated by applying random perturbations to it. The nectar amount of each food source is determined and the population is sorted by the amount of nectar.
4.2 Employee bee phase
An employee bee moves to a food source adjacent to the one in its memory and evaluates the corresponding fitness function. The new food source at the ith position is given by

  v_ij = x_ij + φ_ij (x_ij − x_kj)

where φ_ij is a random value in the interval [−1, 1], x_ij is the jth parameter of the ith food source and x_kj is the jth parameter of a randomly selected neighbouring food source. The new food sources are also prevented from crossing the boundaries of the look-up table. If the fitness value of the new food source is better, it replaces the older food source. This is called the greedy selection mechanism.
4.3 Onlooker bee phase
The employee bees pass the information about the fitness of their food sources to the onlooker bees. An onlooker bee chooses a food source with a probability proportional to its fitness, so food sources with good fitness attract more onlookers. In the onlooker bee phase, the same greedy selection mechanism is used to select the new food source.
4.4 Scout bee phase
If the fitness of a food source shows no improvement once the limit is reached, the food source is abandoned and the associated employee bee becomes a scout bee. The scout bee finds a new food source at random, which replaces the abandoned one.
4.5 Termination
Termination is reached either when the stopband attenuation and the error in the amplitude distortion function reach the specified limits, or when a predetermined number of iterations is completed. Steps 4.2 to 4.4 are repeated if neither condition is satisfied. After termination, the food sources are decoded and the best filter is taken as the optimal filter.
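The phases of Sections 4.1 to 4.5 can be sketched end-to-end as below. This is a generic continuous-domain ABC for illustration, shown minimizing a toy sphere function rather than the paper's CSD-indexed filter objective; the population size, limit and iteration counts are arbitrary choices.

```python
import random

def abc_minimize(f, dim, lo, hi, n_food=20, limit=30, iters=200, seed=1):
    rng = random.Random(seed)
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    cost = [f(x) for x in foods]
    trials = [0] * n_food

    def try_neighbour(i):
        # v_ij = x_ij + phi * (x_ij - x_kj): perturb one parameter relative
        # to a randomly chosen partner food source k != i
        k = rng.randrange(n_food - 1)
        if k >= i:
            k += 1
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)        # stay inside the search box
        cv = f(v)
        if cv < cost[i]:                     # greedy selection
            foods[i], cost[i], trials[i] = v, cv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):              # employee bee phase
            try_neighbour(i)
        fit = [1.0 / (1.0 + c) for c in cost]   # nectar (assumes cost >= 0)
        total = sum(fit)
        for _ in range(n_food):              # onlooker bee phase
            r, acc = rng.uniform(0.0, total), 0.0
            for i in range(n_food):          # roulette-wheel choice
                acc += fit[i]
                if acc >= r:
                    break
            try_neighbour(i)
        for i in range(n_food):              # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                cost[i], trials[i] = f(foods[i]), 0

    b = min(range(n_food), key=cost.__getitem__)
    return foods[b], cost[b]

# toy usage: minimize the sphere function in 3 dimensions
best_x, best_c = abc_minimize(lambda x: sum(v * v for v in x), 3, -5.0, 5.0)
```

For the filter bank problem the continuous perturbation would be replaced by a move through the CSD look-up table indices, as described above.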
5. Result analysis
Table 4. Performance comparison

                               KWA        KWA (CSD   KWA (ABC    PMA        PMA (CSD   PMA (ABC
                                          rounded)   optimized)             rounded)   optimized)
  Error in amp. distortion fn. 0.0043     0.0048     0.0045      0.007      0.005      0.004
  Worst aliasing distortion    1.1x10^-5  2.1x10^-4  2.4x10^-4   3.2x10^-4  2.2x10^-4  2.08x10^-4
  Max. stopband atten. (dB)    69.46      59.58      65.09       68.51      58.32      65.5
  No. of multipliers           85         0          0           78         0          0
  No. of coefficient adders    84         84         84          77         77         77
  Adders due to SPT terms      -          121        125         -          116        119
  Total no. of adders          -          205        209         -          193        196
All simulations are done on a dual-core AMD Opteron processor operating at 2.17 GHz using MATLAB 7.12.0. The performance comparison is given in Table 4. The performances of both filters after optimization in the CSD space are compared in terms of the worst aliasing distortion, error in amplitude distortion and stopband attenuation; in this respect both filters give similar results. When the implementation complexity is considered in terms of multipliers, adders and shifters, the filter designed using the Parks McClellan algorithm gives better results. So it can be concluded that the finite word length, or multiplierless, performance of PMA is better than that of KWA. Figure 1 shows the magnitude response of the prototype filter designed using PMA, together with its CSD rounded and ABC optimized responses; Figure 2 shows those of the filter designed using KWA. Figures 3 and 4 are the amplitude distortion function plots for PMA and KWA respectively.
Figure 1. Magnitude response (PMA)
Figure 2. Magnitude response (KWA)
Figure 3. Amplitude distortion (PMA)
Figure 4. Amplitude distortion (KWA)
6. Conclusion
Fully multiplierless cosine modulated filter banks have been designed and optimized in the discrete space using a modified ABC algorithm. The prototype filters are designed using both KWA and PMA. The performance and implementation complexity of the filters and the filter banks, in both the continuous and the discrete domain, are studied and compared.
References
Berger, S.W.A and Antoniou, A. An efficient closed form design method for cosine modulated
filter banks using window functions, Signal Processing, 2007, 87, 811-823.
Creusere, C. D. and Mitra, S. K. A simple method for designing high quality prototype filters
for M-band pseudo-QMF banks. IEEE Trans. Signal Processing, 1995, vol.43, 1005–
1007.
Cruz-Roldan, F., Martin-Martin, P., Saez-Landete, J., Blanco-Velasco, M. and Saramaki, T. A Fast Windowing-Based Technique Exploiting Spline Functions for Designing Modulated Filter Banks. IEEE Trans. Circuits Syst. I, Reg. Papers, 2009, 56(1), 168-178.
Lin, Y. P. and Vaidyanathan, P.P. A Kaiser Window Approach for the Design of Prototype
Filters of Cosine Modulated Filter Banks. IEEE Signal Processing Letters, 1998, 5(6).
Lim Y.C. Rui Yang, Dongning Li, and Jianjian Song. Signed Power-of-two term Allocation
Scheme for the Design of Digital Filters. IEEE Transactions on Circuits and Systems II:
Analog and Digital Signal Processing, 1999, 46(5), 577–584.
Manuel, M. and Elias, E. Design of Frequency-Response Masking FIR filter in the Canonic
Signed Digit Space using modified Artificial Bee Colony Algorithm. Engineering
Applications of Artificial Intelligence, 2012.
Vaidyanathan, P.P. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, New Jersey: 1993.
Xin-She Yang. Nature-Inspired Metaheuristic Algorithms. Luniver Press, 2011.
Multi-Response Optimization of WEDM Process Parameters
using TOPSIS and Differential Evolution
B.B. Nayak*, S.S. Mahapatra
Department of Mechanical Engineering, National Institute of Technology, Rourkela, India
*Corresponding author (e-mail: bijeta_bijaya@yahoo.co.in)
This work presents a multi-response optimization approach to determine the optimal process parameters in the wire electrical discharge machining (WEDM) process. A Taguchi L27 orthogonal array is used to gather information about the process with a small number of experimental runs. The traditional Taguchi approach is insufficient for solving a multi-response optimization problem. To overcome this limitation, a multi-criteria decision making method, the technique for order preference by similarity to ideal solution (TOPSIS), is applied in the present study. To account for experimental uncertainty, the responses are expressed in linguistic terms rather than as crisp values. The weight for each criterion (response) is obtained by the analytic hierarchy process instead of relying on the intuition and judgement of the decision maker. The relationship between the input factors and the response is established by means of nonlinear regression analysis, resulting in a valid mathematical model. Finally, the differential evolution (DE) algorithm is employed to optimize the machining parameters of the WEDM process with multiple objectives.
Key words— WEDM, Multi-response optimization, Analytic hierarchy process (AHP), TOPSIS, Differential evolution
1. Introduction
Wire electrical discharge machining (WEDM) is basically a thermo-electrical process in
which material is eroded from the work piece by a series of discrete sparks between the work
piece and the wire electrode (tool) separated by a thin film of dielectric fluid (deionized water)
continuously fed to the machining zone to flush away the eroded particles. The movement of
wire is controlled numerically to achieve the desired three-dimensional shape and accuracy of
the work piece. In today's manufacturing scenario, WEDM contributes a prime share in the manufacture of complex-shaped dies, molds and critical parts used in automobile, aerospace and other industrial applications. The process is best suited for parts with complex configurations, close tolerances, high-repeatability requirements and hard-to-work materials. However, the selection of optimum machining parameters in WEDM is an important issue; improperly selected parameters may result in serious problems such as short circuiting of the wire, wire breakage and work surface damage, disrupting the production schedule and reducing productivity.
2. Literature review
Tosun et al. (2004) presented an investigation of the effect of machining parameters on kerf and the material removal rate (MRR) in WEDM operations, followed by optimization of the parameters. Simulated annealing is applied to select optimal values of the machining parameters for the multi-objective problem of minimizing kerf and maximizing MRR. Mahapatra and Patnaik (2007) established the relationship between control factors and responses such as MRR, SF and kerf by means of non-linear regression analysis, resulting in a valid mathematical model; finally, a genetic algorithm is employed to optimize the WEDM
process with multiple objectives. Kuriakose and Shunmugam (2005) presented a multiple regression model to represent the relationship between input and output variables, and a multi-objective optimization method based on a Non-Dominated Sorting Genetic Algorithm (NSGA) is used to optimize the wire-EDM process. Rao and Pawar (2010) developed a
mathematical model using response surface methodology (RSM) and obtained optimal
parameter setting using particle swarm optimization (PSO) algorithm. Gadakh (2012) applied
the technique for order preference by similarity to ideal solution (TOPSIS) for solving a
multiple criteria optimization problem in WEDM process. A good amount of research has
been done in this area for optimal process parameter selection and most of the works used
experimental data for the optimization. In this paper, a new approach is considered in which the experimental results are represented as linguistic variables, since experimental results involve some degree of uncertainty. Expressing the responses in linguistic terms enables the decision maker to account for the uncertainty and fuzziness embedded in the experimental data. An attempt is made to apply TOPSIS (Yoon and Hwang, 1995) in association with the analytic hierarchy process (AHP) (Saaty, 1980) for converting multiple responses into an equivalent single response. The method is simple and easy to implement and does not require mathematical rigour, and AHP is capable of estimating the weights for each response. A mathematical model is developed to establish a relation between the control factors and the response by means of nonlinear regression analysis. Differential evolution is then used to optimize the WEDM process parameters for multiple responses.
3. Proposed methodology
The procedural steps for the present research work are listed below.
Step I: Experiments are conducted using Taguchi's L27 orthogonal array. The experimental responses are first normalized because the responses are expressed in different units. Data is normalized using the following criteria.
1. Lower-the-Better (LB)

   Xi(k) = (max yi(k) − yi(k)) / (max yi(k) − min yi(k))        (1)

2. Higher-the-Better (HB)

   Xi(k) = (yi(k) − min yi(k)) / (max yi(k) − min yi(k))        (2)

where Xi(k) is the normalized value of the ith experiment for the kth response, yi(k) is the ith value of the kth response, and Xi(k) lies between 0 and 1.
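A direct transcription of equations (1) and (2) is shown below (illustrative; it assumes the values are not all identical, so the denominator is nonzero):

```python
def normalize(values, higher_is_better):
    """Normalize a list of raw responses to [0, 1] per eqs. (1)-(2)."""
    y_min, y_max = min(values), max(values)
    span = y_max - y_min              # assumes at least two distinct values
    if higher_is_better:              # eq. (2): Higher-the-Better (e.g. MRR)
        return [(y - y_min) / span for y in values]
    return [(y_max - y) / span for y in values]   # eq. (1): Lower-the-Better
```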
Step II: The normalized responses are expressed as linguistic variables on a 7-point scale (extremely low, very low, low, medium, high, very high, extremely high) to account for the uncertainties involved.
Step III: The linguistic variables are converted into crisp scores using triangular fuzzy numbers. Chen and Hwang's fuzzy ranking method (using left and right scores) (Chen and Hwang, 1992) is used for the conversion.
Step IV: The TOPSIS method is applied to convert the multiple responses into a single equivalent response.
Step V: The TOPSIS method requires priority weights to calculate the normalized weighted matrix. The weight for each response is obtained with the help of the analytic hierarchy process (AHP) developed by T.L. Saaty (Saaty, 2008).
Step VI: Following the procedure of the TOPSIS method (Gadakh, 2012), the closeness coefficient value is calculated.
Step VII: A mathematical model is developed to establish a relation between the control factors and the closeness coefficient value using nonlinear regression analysis.
Step VIII: Finally, the differential evolution approach is used to optimize the process parameters of WEDM.
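Steps IV to VI can be sketched generically as below. The vector normalization, ideal and negative-ideal constructions and the closeness coefficient follow the standard TOPSIS procedure (Yoon and Hwang, 1995); the toy decision matrix in the usage line is invented for illustration and is not the paper's data.

```python
import math

def topsis(matrix, weights, benefit):
    """Return the closeness coefficient CCi for each alternative (row).

    matrix : rows = alternatives, columns = criteria (crisp scores)
    weights: one weight per criterion (e.g. from AHP)
    benefit: True if the criterion is to be maximized, else False
    """
    m = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    v = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if b else min(c) for c, b in zip(cols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(cols, benefit)]
    cc = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, worst)))
        cc.append(d_neg / (d_pos + d_neg))
    return cc

# toy example: alternative 0 dominates alternative 1 on both benefit criteria
scores = topsis([[2.0, 2.0], [1.0, 1.0]], [0.73, 0.27], [True, True])
```

A dominating alternative gets CCi = 1 and a dominated one CCi = 0, so ranking by CCi converts the multiple responses into a single preference order.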
3.1 Differential evolution
Differential evolution, one of the evolutionary computation techniques, was proposed by Storn and Price for solving continuous optimization problems (Storn and Price, 1997). It is a simple, stochastic, population based technique that was initially formulated to deal with unconstrained
problems. DE has three main advantages: (i) it is able to locate the accurate global optimum irrespective of the initial parameter values; (ii) it has rapid convergence; and (iii) it uses few control parameters, making it easy and simple to use. DE belongs to the class of genetic algorithms (GAs), which use biology-inspired operations of crossover, mutation and selection on a population in order to optimize an objective function over the course of successive generations.
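A minimal DE/rand/1/bin loop consistent with this description is sketched below. It is a generic illustration: the mutation factor 0.8 and crossover rate 0.6 reported later in the paper are used as defaults, while the population size, generation count and toy objective are our assumptions.

```python
import random

def de_minimize(f, bounds, n_pop=20, f_mut=0.8, cr=0.6, gens=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_pop)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(n_pop):
            # mutation: base vector a perturbed by the difference of b and c
            a, b, c = rng.sample([k for k in range(n_pop) if k != i], 3)
            j_rand = rng.randrange(dim)      # force at least one mutated gene
            trial = []
            for j in range(dim):             # binomial crossover
                if rng.random() < cr or j == j_rand:
                    x = pop[a][j] + f_mut * (pop[b][j] - pop[c][j])
                    x = min(max(x, bounds[j][0]), bounds[j][1])
                else:
                    x = pop[i][j]
                trial.append(x)
            c_trial = f(trial)
            if c_trial <= cost[i]:           # greedy replacement
                pop[i], cost[i] = trial, c_trial
    b = min(range(n_pop), key=cost.__getitem__)
    return pop[b], cost[b]

# toy usage: 2-variable sphere function
best_x, best_c = de_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

For the WEDM problem, `f` would be the negated closeness-coefficient model so that maximizing CCi becomes a minimization.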
4. Experimental details
In the present research work, a ROBOFIL 100 high-precision 5-axis CNC WEDM machine, manufactured by Charmilles Technologies Corporation, was used for the study. The machine consists of a wire, a work table, a servo control system, a power supply and a dielectric supply system. The ROBOFIL 100 allows the operator to choose input parameters according to the material and height of the work piece and the tool material from a manual provided by the WEDM manufacturer. The machine has a transistor-controlled RC circuit. A block of D2 tool steel (1.5% C, 12% Cr, 0.6% V, 1% Mo, 0.6% Si, 0.6% Mn, balance Fe) of size 200 mm × 25 mm × 10 mm was cut over a 100 mm length with a 10 mm depth along the longer side. The parameters kept constant during machining are the wire/electrode (zinc-coated copper wire, 0.25 mm diameter), the work piece (rectangular block of D2 tool steel) and the location of the work piece on the working table (at the center of the table). The six most important input variables were selected after an extensive literature review and subsequent preliminary investigations. Their limits were set on the basis of the capacity and limiting conditions of the WEDM machine, ensuring continuous cutting by avoiding breakage of the wire, as listed in Table 1.
Table 1. Process parameters with their levels

  Input variable         Unit    Symbol   Level I    Level II   Level III
  Discharge current      Amp     A        16         24         32
  Pulse duration         µs      B        3.2        6.4        12.8
  Pulse frequency        kHz     C        40         50         60
  Wire speed             m/min   D        7.6        8.6        9.2
  Wire tension           g       E        1,000      1,100      1,200
  Dielectric flow rate   bar     F        1.2        1.3        1.4
The most important performance measures in WEDM are the metal removal rate (MRR), surface roughness and kerf. The kerf, which can be expressed as the sum of the wire diameter and twice the wire–work piece gap, was measured using a Mitutoyo toolmaker's microscope (×100). The kerf value is the average of five measurements made on the work piece at 20 mm increments along the cut length.
MRR is calculated as

  MRR = k · t · v_c · ρ

where k is the kerf, t is the thickness of the work piece (10 mm), v_c is the cutting speed and ρ is the density of the work piece material (7.8 g/cm³).
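In consistent CGS-style units the formula is a one-liner (illustrative; unit conversions are the caller's responsibility):

```python
def material_removal_rate(kerf_cm, thickness_cm, cutting_speed_cm_per_min,
                          density_g_per_cm3=7.8):
    """MRR = k * t * v_c * rho, in g/min for the units given above."""
    return (kerf_cm * thickness_cm * cutting_speed_cm_per_min
            * density_g_per_cm3)
```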
The surface roughness value Ra (in µm) was obtained by measuring the mean absolute deviation from the average surface level using a type C3A Mahr Perthen perthometer (stylus radius 5 µm).
To evaluate the effect of machining parameters on the performance characteristics MRR, SR and kerf, a specially designed experimental procedure is required. In this study, Taguchi's L27 orthogonal array is used to gather maximum information about the process with a small number of experimental runs. The experiments were conducted for each combination of factors (rows) of the selected orthogonal array. The experimental results were then normalized using equation (2) for MRR (higher-the-better) and equation (1) for surface roughness and kerf (lower-the-better). The normalized responses are converted into linguistic variables as listed in Table 2.
Table 2. Experimental data in terms of linguistic variables

  Expt.  A  B  C  D  E  F   MRR      SR     Kerf   CCi
  No.                       (g/min)  (µm)   (mm)
  1      1  1  1  1  1  1   VL       H      VH     0.15438
  2      1  1  1  1  2  2   L        EH     EH     0.13122
  3      1  1  1  1  3  3   EL       VH     EH     0.03450
  4      1  2  2  2  1  1   L        M      M      0.33235
  5      1  2  2  2  2  2   L        M      H      0.33009
  6      1  2  2  2  3  3   L        H      H      0.31648
  7      1  3  3  3  1  1   H        VL     L      0.69346
  8      1  3  3  3  2  2   H        VL     L      0.69346
  9      1  3  3  3  3  3   M        L      L      0.51393
  10     2  1  2  3  1  2   L        H      H      0.31648
  11     2  1  2  3  2  3   VL       VH     VH     0.13745
  12     2  1  2  3  3  1   VL       H      VH     0.15438
  13     2  2  3  1  1  2   VL       M      M      0.18539
  14     2  2  3  1  2  3   M        H      M      0.48808
  15     2  2  3  1  3  1   M        M      M      0.50000
  16     2  3  1  2  1  2   VH       VL     VL     0.86162
  17     2  3  1  2  2  3   VH       VL     VL     0.86162
  18     2  3  1  2  3  1   H        EL     VL     0.70010
  19     3  1  3  2  1  3   L        VH     H      0.30659
  20     3  1  3  2  2  1   L        H      H      0.31648
  21     3  1  3  2  3  2   L        H      H      0.31648
  22     3  2  1  3  1  3   H        M      L      0.68352
  23     3  2  1  3  2  1   H        L      L      0.68352
  24     3  2  1  3  3  2   H        L      L      0.68352
  25     3  3  2  1  1  3   EH       VL     EL     0.96576
  26     3  3  2  1  2  1   VH       EL     EL     0.86774
  27     3  3  2  1  3  2   VH       VL     VL     0.86162

5. Results and discussion
The linguistic variables shown in Table 2 were described using triangular fuzzy numbers. Using Chen and Hwang's fuzzy ranking method (left and right scores), the crisp scores of the fuzzy numbers were computed and then normalized. To determine the relative normalized weight of each criterion, a pair-wise comparison matrix was developed using the AHP method. The criteria weights are obtained as W_MRR = 0.7305, W_SR = 0.1883 and W_KERF = 0.0809. The consistency ratio (CR) is calculated as 0.0608, which is less than the allowed value of 0.1, indicating good consistency in the judgements made by the decision maker while assigning values in the pair-wise comparison matrix. Following the procedure of the TOPSIS method, the relative closeness coefficient (CCi) value for each combination of factors has been calculated, as shown in Table 2.
The mathematical model suggested here has the following form:

  Y = K0 + K1·A + K2·B + K3·C + K4·D + K5·E + K6·F                          (3)

Here Y is the performance output term and Ki (i = 0, 1, ..., 6) are the model constants. The constants were calculated by nonlinear regression analysis using SYSTAT 7.0 software. Substituting the coefficients obtained from SYSTAT into equation (3) gives the following relation:

  CCi = 0.271 + 0.552A + 0.674B − 0.049C − 0.199D − 0.283E − 0.092F         (4)

  r² = 0.967
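Equation (4) can be evaluated directly. Note that the factor values must be supplied in the same (presumably coded) units used in the regression, which the paper does not spell out, so treat this as illustrative:

```python
def cci_model(a, b, c, d, e, f):
    """Fitted closeness-coefficient model of eq. (4)."""
    return (0.271 + 0.552 * a + 0.674 * b
            - 0.049 * c - 0.199 * d - 0.283 * e - 0.092 * f)
```

The signs show that discharge current (A) and pulse duration (B) raise the closeness coefficient, while the remaining factors lower it.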
The high correlation coefficient (r²) confirms the suitability of the model and the correctness of the calculated constants.
Differential evolution (DE) was used to obtain the optimum machining parameters of WEDM. Following the steps of DE (Coelho, 2009), the algorithm was coded in MATLAB 7.6.0 with mutation factor 0.8, crossover rate 0.6, number of control variables 6, population size (NP) 50 and a maximum of 1500 generations. Table 3 shows the optimum machining conditions of the WEDM process.
Table 3. Optimum machining conditions

  Control factor / performance measure   Optimum machining condition
  Discharge current (A)                  31.6768
  Pulse duration (B)                     11.1168
  Pulse frequency (C)                    32.67
  Wire speed (D)                         4.98
  Wire tension (E)                       837.48
  Dielectric flow rate (F)               1.09
  Closeness coefficient value            0.99900
6. Conclusions
In this work, an attempt was made to determine the optimum machining parameters by expressing the experimental results in terms of linguistic variables, in order to account for the uncertainty involved, while simultaneously maximizing the material removal rate and minimizing the surface roughness and kerf in the WEDM process. The TOPSIS and AHP methods are quite capable of converting multiple responses into a single equivalent response. The mathematical model suggested here also establishes a relationship between the control factors and the response. The DE algorithm is relatively simple and easy to implement, requires few parameters, and is a fast technique for solving multi-response optimization problems.
References
Chen, S.J. and Hwang, C.L. Fuzzy Multiple Attribute Decision Making. Springer-Verlag,Berlin
Heildelberg, New York, 1992.
Gadakh, V.S. Parametric optimization of wire electric discharge machining using TOPSIS method. Advances in Production Engineering and Management, 2012, 7(3), 157-164.
Kuriakose, S. and Shunmugam, M.S. Multi-objective optimization of wire-electro discharge machining process by Non-Dominated Sorting Genetic Algorithm. Journal of Materials Processing Technology, 2005, 170(1-2), 133-141.
Yoon, K.P. and Hwang, C.L. Multiple Attribute Decision Making. SAGE Publications, Beverly Hills, CA, 1995.
Mahapatra, S.S. and Patnaik, A. Optimization of wire electrical discharge machining process parameters using Taguchi method. International Journal of Advanced Manufacturing Technology, 2007, 34(9-10), 911-925.
Coelho, L.S. Reliability-redundancy optimization by means of a chaotic differential evolution approach. Chaos, Solitons and Fractals, 2009, 41(2), 594-602.
Rao, R.V. and Pawar, P.J. Process parameters modeling and optimization of wire electric discharge machining. Advances in Production Engineering and Management, 2010, 5(3), 139-150.
Rao, R.V. Decision Making in the Manufacturing Environment using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods. Springer-Verlag, London, 2007.
Saaty, T.L. The Analytic Hierarchy Process. McGraw Hill, New York, 1980.
Saaty, T.L. Decision making with the analytic hierarchy process. International Journal of Services Sciences, 2008, 1(1), 83-98.
Storn, R. and Price, K.V. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 1997, 11(4), 341-359.
Tosun, N., Cogun, C. and Tosun, G. A study on kerf and material removal rate in wire electrical discharge machining based on Taguchi method. Journal of Materials Processing Technology, 2004, 152(3), 316-322.
Application of Shannon Entropy to Optimize the Data Analysis
D. Datta¹*, P.S. Sharma¹, Subrata Bera² and A.J. Gaikwad²
¹Computational Radiation Physics Section, HPD, BARC, Mumbai-400085, Maharashtra, India
²Nuclear Safety Analysis Division, AERB, Mumbai-400094, Maharashtra, India
*Corresponding author (e-mail: ddatta@barc.gov.in)
Data analysis, or rather data mining, is an important task in the computation of contamination of any environmental matrix such as soil. Soil contamination arising from the migration of toxic chemicals (contaminants) or radioactive materials through soil is investigated by measuring the concentration of the relevant chemical elements or radionuclides. The recorded measured value of the concentration of any contaminant is a statistical measure and hence depends strongly on the sample size. However, collection of a large number of such samples is not possible; accordingly, only a few samples (say, < 10) are collected to quote the average value of the concentration of the contaminant. It is mandatory to have an optimum sample size to reduce the associated uncertainty that may arise due to insufficiency of data. This paper presents an optimization methodology for the analysis of experimental observations using Shannon entropy. The representative mathematical modeling is carried out using Shannon entropy, and addresses the computation of the uncertainty (probability) distribution of the experimental observations using the principle of maximum entropy. Maximization of the entropy is carried out under constraints, chosen here as the recommended average value of the input data set and the normalization of the total probability. Soil contamination is presented as a case study to demonstrate the present mathematical modeling.
1. Introduction
Contamination of soil may occur due to migration of toxic chemicals or radionuclides during an accident. For example, the nuclear power plants at the Fukushima Daiichi, Fukushima Daini, Higashidori, Onagawa, and Tokai Daini nuclear power stations (NPSs) were affected by the Tohoku earthquake, which occurred at 2:46 p.m. (Japan time) on Friday, March 11, 2011, on the east coast of northern Japan [Report-I, 2011]. The earthquake caused a tsunami, which hit the east coast of Japan and caused a loss of all on-site and off-site power at the Fukushima Daiichi NPS, leaving it without any emergency power [Report-I, 2011]. The resultant damage to fuel, reactor, and containment caused a release of radioactive materials to the region surrounding the NPS [Report-I, Report-II and Report-III, 2011]. Data analysis pertaining to the accident of the Fukushima Daiichi reactors (boiling water reactors, BWRs) was categorized with respect to fuel damage (especially melting) due to overheating and contamination of the environment (soil, water and air). The American Nuclear Society Special Committee on Fukushima (the Committee) collected information on the radiation exposure of workers, the release and deposition of radioactive materials over a wide area surrounding the Fukushima Daiichi NPS, and the contamination of water and food sources [Umeki, 2010]. It is important to note that for drawing any firm conclusion on the contamination of soil based on the collected data, an optimized and reliable data analysis methodology is required. The literature indicates that non-parametric methods such as the bootstrap [Datta, 2012; Efron, 1993] and kernel density estimation [Botev, 2010] have been applied by many researchers, but the acceptability of such methods for the best estimation of the average value of sparse, discrete experimental data is very low due to their inherent biases. With a view to this, a new technique based on the principle of maximum entropy using Shannon information entropy [Kapur,
1990] has been proposed for the estimation of probabilities of experimental data. The present paper focuses on the application of Shannon entropy as an optimization approach to data analysis. The small sample size (say, < 30) of the collected data and the possible presence of non-detects mean that the estimation of the average value of the collected data should be probabilistic, because a deterministic method in this case results in an overestimation. Since determination of the density function of the data distribution is difficult with small and incomplete data, Shannon entropy based construction of the probability density function is an optimized version of the statistical technique of fitting a probability distribution to a small data set.
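Maximizing the Shannon entropy −Σ p_i ln p_i subject to Σ p_i = 1 and the mean constraint Σ p_i x_i = m yields the exponential family p_i ∝ exp(−λ x_i), with λ fixed by the mean constraint. The sketch below solves for λ by bisection; the data values and target mean in the test are invented for illustration, and values should be rescaled to a moderate range before use.

```python
import math

def maxent_probs(values, target_mean, tol=1e-12):
    """Maximum-entropy probabilities with a prescribed mean.

    Solves p_i = exp(-lam*(x_i - x_min)) / Z, choosing lam so that
    sum(p_i * x_i) equals target_mean. Bisection works because the
    constrained mean is a decreasing function of lam.
    """
    x0 = min(values)

    def mean(lam):
        w = [math.exp(-lam * (x - x0)) for x in values]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, values)) / z

    lo, hi = -50.0, 50.0               # assumes the root lies in this bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * (x - x0)) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

When the target mean equals the arithmetic mean of the data, λ = 0 and the distribution is uniform; skewing the target mean tilts the probabilities exponentially towards the corresponding end of the data.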
In the Fukushima accident it was known that air, water and soil at the accident site in Japan were contaminated by iodine-131, caesium-134 and caesium-137 [Umeki, 2010]. Data analysis for that event is addressed here as the task of assessing soil contamination by the radionuclides caesium-134, caesium-137 and cobalt-60. The mean value of the data set is estimated as the sum of the products of each data value and its corresponding probability. Fitting a probability distribution to data collected during an accident is error prone because the data set in this domain is sparsely populated [IAEA, 2002]. Shannon entropy being a measure of uncertainty, the ignorance (partial or complete) is thereby taken care of.
2. Boiling water reactor – Brief description
In a BWR NPP (Figure 1), the nuclear reactions take place in the nuclear reactor core, which
mainly consists of nuclear fuel and control elements. The nuclear fuel rods (each ~10 mm in
diameter and 3.7 m in length) are grouped by the hundred into bundles called fuel assemblies
(Figure 2). Inside each fuel rod, pellets of uranium, or more commonly uranium oxide, are
stacked end to end. The control elements (shown as red in cross section), called control rods, are
filled with substances like boron carbide that readily capture neutrons. When the control rods are
fully inserted into the core, they absorb neutrons, precluding a nuclear chain reaction. When the
control rods are moved out of the core, enough neutrons are produced by fission and are
absorbed by fissile Uranium-235 or Plutonium-239 nuclei in the fuel rods, causing further fissions,
and more neutrons are produced. This chain reaction process becomes self-sustaining, and the
reactor becomes critical, producing thermal energy (heat). The fuel and the control rods and the
surrounding structures that make up the core are enclosed in a steel pressure vessel called the
reactor pressure vessel (RPV). When uranium (or any fissile fuel) is fissioned and energy is
produced, fission products (atomic fragments left after a large atomic nuclear fission) remain
radioactive even when the fission process halts, and heat is produced from their radioactive
decay, i.e., decay heat. Although decay heat decreases quickly from a few percent to <1% of the
rated NPP thermal power after a few hours, water must be circulated within the reactor pressure
vessel (RPV) to maintain adequate cooling. The cooling is provided by numerous systems. Some
systems operate during normal conditions, and some systems, such as the emergency core
cooling systems (ECCSs), respond to off-normal events. Normal reactor cooling systems
maintain the RPV temperature and a proper cooling water level, or if that is not possible,
ECCSs directly flood the core with more water. Because of the large amount of radioactivity that
resides in the nuclear reactor core, regardless of the specific design, the defense-in-depth
philosophy is used. This approach provides multiple, independent barriers to contain radioactive
materials. In the BWR, the fuel rod itself and the RPV with its primary system act as the first two
barriers. The containment system is designed around the RPV and its primary system to be the
final barrier to prevent accidental release of radioactive materials to the environment. As far as
safety systems are concerned, all BWRs have control rod drive systems so that the control rods
can be inserted to shut the reactor down. As a backup, a standby liquid control system exists,
consisting of a neutron-absorbing borated water solution that can be injected to shut down the
fission chain reaction. After shutdown, the reactor continues to produce low-level decay heat (a
few percent of rated power at shutdown, reducing to a fraction of 1% after 1 day) that must be
removed in order to prevent overheating of the nuclear fuel. In the event that the normal
heat-removal pathway to
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
the main turbine/condenser is lost, BWRs have, as the first backup, systems to provide core
safety by either adding water to the RPV or by an alternate heat removal path, or by both.
Figure 1 Simplified diagram of BWR NPP
Figure 2 Nuclear fuel assembly of BWR NPP
2.1 Overview of Fukushima accident
The Tohoku earthquake, which occurred at 2:46 p.m. (Japan time) on Friday, March 11, 2011, on
the east coast of northern Japan, is believed to be one of the largest earthquakes in recorded
history. Following the earthquake on Friday afternoon, the nuclear power plants at the Fukushima
Daiichi, Fukushima Daini, Higashidori, Onagawa, and Tokai Daini nuclear power stations (NPSs)
were affected, and emergency systems were activated. The earthquake caused a tsunami, which
hit the east coast of Japan and caused a loss of all on-site and off-site power at the Fukushima
Daiichi NPS, leaving it without any emergency power [INPO, 2011]. The resultant damage to fuel,
reactor, and containment caused a release of radioactive materials to the region surrounding the
NPS.
3. Entropy based fitting of experimental data
3.1 Shannon or information entropy
By using an appropriate statistical average, the macroscopic quantities are derived from the
microscopic laws. The central concept behind this paradigm shift is again entropy, S. According
to Boltzmann, it is related to the number W of the different microscopic states which give rise to
the same macroscopic state of the system, by means of the law
S = k ln W   (1)
where k is the Boltzmann constant. W in eq. (1) is the thermodynamic probability or statistical
weight. According to [9, 10], W is not a probability but an integer. However, by the simple
mathematical definition of probability, one can write
P(An) = W(An) / Σ_n W(An)   (2)
where P(An) denotes the probability of occurrence of a macroscopic event An. In this formal
definition, entropy comes first and probability comes later. If we denote the macroscopic event An
as an ensemble of observations [N1, N2, …, Nn], then W in eq. (1) can be explicitly written as

W = N! / (∏_{i=1}^{n} Ni!)   (3)

Eq. (1) now becomes

S = k ln [N! / (∏_{i=1}^{n} Ni!)] ≈ −kN Σ_{i=1}^{n} pi ln pi   (4)
where pi = Ni / N is the relative frequency and, for large N, it is the probability that an observation
or any experimental value lies within the ith sample. The expression −Σ_{i=1}^{n} pi ln pi appearing
on the right-hand side of eq. (4) is called the Shannon entropy. The physical significance of
Shannon entropy is that it measures the uncertainty associated with the probability distribution
(p1, p2, …, pn). For a large classical system, Boltzmann entropy is thus proportional to Shannon
entropy. Shannon entropy provides a measure of the disorder of a system. As Shannon entropy is
also alternately labeled information entropy, the question that comes to the researcher's mind at
this stage is: how much information is required to know the probability distribution of the
experimental observations (an ordered state)? The mathematical formulation of the answer to this
question is framed in the next section.
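The role of Shannon entropy as an uncertainty measure is easy to illustrate numerically. The short Python sketch below (illustrative only; the two distributions are hypothetical) shows that a uniform distribution attains the maximum entropy ln n, while a sharply peaked (nearly certain) distribution has much lower entropy:

```python
import math

def shannon_entropy(p):
    """Shannon entropy -sum(p_i ln p_i) of a discrete distribution."""
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # maximum uncertainty over 4 outcomes
peaked = [0.97, 0.01, 0.01, 0.01]   # nearly certain outcome

print(shannon_entropy(uniform))  # ln 4 = 1.3862...
print(shannon_entropy(peaked))   # much smaller, about 0.17
```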
3.2 Principle of maximum entropy
Within classical information theory, the principle of maximum uncertainty is called the principle
of maximum entropy. The principle of maximum uncertainty [Klir and Folger, 1988] is essential for
any problem that involves ampliative reasoning, i.e. reasoning in which conclusions are not
entailed in the given premises. The general formulation of the principle of maximum entropy is to
determine a probability distribution {p(x) | x ∈ X} that maximizes the Shannon entropy [Klir and
Folger, 1988] subject to given constraints C1, C2, …, Cn, which express partial information about
the unknown probability distribution as well as the general constraints (axioms) of probability
theory. The most typical constraints employed in practical applications of the maximum entropy
principle [Klir and Folger, 1988] are the average values of one or more random variables.
3.3 Algorithm and computational issues
Let x be a random variable with possible non-negative real values x1, x2, …, xn. Assume that the
probabilities pi of the values xi (i = 1, 2, …, n) are not known. Employing the maximum entropy
principle, we can estimate the unknown probabilities pi by solving the following optimization
problem:

Maximize S(p1, p2, …, pn) = −Σ_{i=1}^{n} pi ln pi   (5)

subject to the constraints

E(x) = Σ_{i=1}^{n} xi pi   (6),   pi ≥ 0 (i = 1, 2, …, n)   and   Σ_{i=1}^{n} pi = 1

We form the Lagrange function

L = −Σ_{i=1}^{n} pi ln pi − λ (Σ_{i=1}^{n} pi − 1) − μ (Σ_{i=1}^{n} pi xi − E(x))

where λ and μ are the Lagrange multipliers that correspond to the two constraints. The basic goal
is now to maximize L with respect to the p's, λ and μ. Taking the derivatives of L with respect to
pi, λ and μ, we have

∂L/∂pi = −ln pi − 1 − λ − μ xi = 0   (7)

∂L/∂λ = 1 − Σ_{i=1}^{n} pi = 0   (8)

∂L/∂μ = E(x) − Σ_{i=1}^{n} pi xi = 0   (9)
The first n equations can be written as ln pi = −(1 + λ) − μ xi.
Therefore, pi = exp[−(1 + λ) − μ xi].
Thus, p1 = e^{−(1+λ)} e^{−μ x1}, p2 = e^{−(1+λ)} e^{−μ x2}, …, pn = e^{−(1+λ)} e^{−μ xn}.
Since the probabilities must sum to unity, we can easily write e^{−(1+λ)} = 1 / Σ_{i=1}^{n} e^{−μ xi}.
Thus, in general, the probability distribution of the random value xi can be written as

pi = exp(−μ xi) / Σ_{i=1}^{n} exp(−μ xi)   (10)
Now, to compute pi (eq. (10)) completely, we have to compute μ. In order to compute the
Lagrangian parameter μ, the following algorithm is adopted. Multiplying eq. (10) by xi and
summing over the index i,

E(x) = Σ_{i=1}^{n} pi xi = Σ_{i=1}^{n} xi e^{−μ xi} / Σ_{i=1}^{n} e^{−μ xi},   or,   Σ_{i=1}^{n} (xi − E(x)) e^{−μ xi} = 0   (11)
Equation (11) is finally solved using the Newton-Raphson method to compute μ. Substituting the
computed value of μ in eq. (10), the probability of the corresponding random variable is
computed.
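The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the data values and the mean constraint are hypothetical, and the derivative of the left-hand side of eq. (11) with respect to μ is supplied explicitly for the Newton-Raphson update:

```python
import math

def maxent_probabilities(x, e_mean, tol=1e-12, max_iter=100):
    """Solve eq. (11) for mu by Newton-Raphson, then return the
    maximum-entropy probabilities of eq. (10)."""
    mu = 0.0  # mu = 0 corresponds to the uniform distribution (starting guess)
    for _ in range(max_iter):
        w = [math.exp(-mu * xi) for xi in x]
        g = sum((xi - e_mean) * wi for xi, wi in zip(x, w))          # eq. (11)
        dg = -sum(xi * (xi - e_mean) * wi for xi, wi in zip(x, w))   # dg/dmu
        step = g / dg
        mu -= step
        if abs(step) < tol:
            break
    w = [math.exp(-mu * xi) for xi in x]
    total = sum(w)
    return [wi / total for wi in w], mu

# hypothetical activity readings (pCi/g) with a constraint above their mean
data = [0.2, 0.6, 1.1, 1.9, 2.7, 3.8]
p, mu = maxent_probabilities(data, 2.33)
print(abs(sum(p) - 1.0) < 1e-9)                                     # True
print(abs(sum(pi * xi for pi, xi in zip(p, data)) - 2.33) < 1e-9)   # True
```

Because the constraint (2.33) exceeds the plain arithmetic mean of the data, the solver returns a negative μ, i.e. larger readings receive larger probabilities.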
4. Case study: Results and discussions
The activity data of the soil samples collected from the survey are tabulated in table 1. The
important radionuclides required to investigate the level of contamination of the soil samples are
Co-60, Cs-134, and Cs-137. The aim of this study is to estimate the probability of the activity
levels of each radionuclide using the proposed method of maximization of Shannon entropy. The
study also aims to identify the radionuclide that contributes most to the measurement uncertainty
and is responsible for contamination of the specified environmental sample (here, the
environmental sample is a soil sample). The probabilities of each of the experimental data for a
specific radionuclide are computed using eq. (10). The constraints, i.e. the average activity level
of each radionuclide, are shown in table 2. The constraint value for each radionuclide on the soil
sample is based on the limiting value of that radionuclide stipulated by a regulatory authority. The
estimated probability distribution of each radionuclide is tabulated in table 3. The actual averages
of the columns of the data presented in table 1 are 1.09, 0.57 and 1.66 respectively. Taking these
values as the constraints, if one proceeds to estimate the probability distribution, a uniform
probability distribution will obviously result. Accordingly, the analyst would conclude that each of
the radionuclides (Co-60, Cs-134, and Cs-137) is uniformly distributed in the soil around the
nuclear power plant. However, this is not true: the distribution of each radionuclide on soil
samples of the same site is totally different. On the other hand, practice says that the activity
level of an environmental sample
follows a lognormal distribution, and that is the reason why, without looking into the quality of
the data, most analysts fit the data to a lognormal distribution. With a view to this fact, the
activity level of the soil samples due to Co-60, Cs-134 and Cs-137 is fitted by lognormal,
non-parametric and normal distributions, and the results are shown in figures 3, 4 and 5
respectively.
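The remark that constraining on the actual sample mean yields a uniform distribution can be checked directly: when E(x) equals the arithmetic mean, μ = 0 satisfies eq. (11) exactly and eq. (10) reduces to pi = 1/n. A small sketch, using the first five Co-60 readings of table 1 for illustration:

```python
import math

# First five Co-60 readings (pCi/g); the constraint is set to their own mean.
x = [0.7, 1.4, 0.3, 0.2, 0.6]
E = sum(x) / len(x)

# With mu = 0, eq. (11), sum_i (x_i - E) exp(-mu*x_i) = 0, holds exactly,
# since sum(x) - n*E = 0.
residual = sum(xi - E for xi in x)

# Eq. (10) with mu = 0 gives equal weight to every sample: p_i = 1/n.
p = [math.exp(0.0) / len(x) for _ in x]

print(abs(residual) < 1e-12)  # True
print(p)                      # [0.2, 0.2, 0.2, 0.2, 0.2]
```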
Table 1 Activity level of radionuclides in soil samples (pCi/g)

Sample  Co-60  Cs-134  Cs-137      Sample  Co-60  Cs-134  Cs-137
1       0.7    0.3     1.5         14      0.7    0.4     1.7
2       1.4    0.4     0.4         15      0.7    0.2     1.8
3       0.3    1.3     1           16      0.1    0.2     3
4       0.2    0.3     2.7         17      0.5    0.6     2.5
5       0.6    0.4     2.2         18      2.8    0.5     0.9
6       0.2    0.4     1.9         19      2.2    1.4     1.3
7       2.6    0.1     2           20      0.7    0.6     1.7
8       2      0.1     2           21      0.2    0.5     1.9
9       0.9    0.1     2           22      0.9    0.3     1.5
10      1.1    0.9     1.6         23      1.1    0.5     1.8
11      1.8    0.7     1.4         24      2      0.8     1.9
12      0.3    1.1     0.7         25      0.1    1       2
13      3.8    0.7     0.3         26      0.4    1       1.4
Table 2 Constraint value (average) of radionuclide on soil sample (pCi/g)

Co-60   Cs-134   Cs-137
2.33    1.3      1.8
Table 3 Probability distribution of radionuclides on soil samples

Sample  Co-60   Cs-134  Cs-137      Sample  Co-60   Cs-134  Cs-137
1       0.0154  0.0002  0.0353      14      0.0154  0.0005  0.038
2       0.03    0.0005  0.0235      15      0.0154  0.0001  0.0395
3       0.0105  0.273   0.0293      16      0.0087  0.0001  0.0617
4       0.0095  0.0002  0.0552      17      0.0127  0.0019  0.0512
5       0.014   0.0005  0.0458      18      0.1143  0.0009  0.0283
6       0.0095  0.0005  0.041       19      0.0645  0.5546  0.0328
7       0.0944  0.0001  0.0425      20      0.0154  0.0019  0.038
8       0.0533  0.0001  0.0425      21      0.0095  0.0009  0.041
9       0.0186  0.0001  0.0425      22      0.0186  0.0002  0.0353
10      0.0225  0.016   0.0367      23      0.0225  0.0009  0.0395
11      0.044   0.0039  0.034       24      0.0533  0.0079  0.041
12      0.0105  0.0661  0.0262      25      0.0087  0.0325  0.0425
13      0.2971  0.0039  0.0226      26      0.0116  0.0325  0.034
Figure 3 Co-60 activity level of soil samples (data fitted with lognormal, non-parametric and normal distributions)
Figure 4 Cs-134 activity level of soil samples (data fitted with lognormal, normal and non-parametric distributions)
Figure 5 Cs-137 activity level of soil samples (data fitted with lognormal, non-parametric and normal distributions)
It can be clearly interpreted from these figures that none of the fitted distributions is appropriate,
because in each case a large amount of information is added (in the case of the lognormal fit) or
lost (in the case of the normal and non-parametric fits). So, estimation of the probabilities using
maximization of Shannon entropy is justifiable and worthwhile. Having these probabilities, we
have computed the entropy of the activity of each radionuclide; the entropies of the activity levels
of Co-60, Cs-134, and Cs-137 are 85.32, 72.11 and 137.77 respectively. As entropy is the
measure of uncertainty and the entropy of the activity level of the Cs-137 radionuclide is the
maximum compared to all other radionuclides, Cs-137 is the major contributor to the uncertainty
of the activity level of the soil sample.
5. Conclusion
Entropy based investigation of the radiological consequences of a nuclear reactor accident is an
innovative concept. In the present article an attempt has been made to show the applicability of
Shannon entropy to assess the probability distribution associated with the small-sample
experimental data set collected in the event of an accident. The principle of maximum entropy is
the basis of the method of estimation of the probabilities of the experimental data set. It has been
experienced that this method of estimating the probability distribution is always better than fitting
a probability distribution to the data set, because the latter is biased and error prone. Further, the
present method of determining the probability distribution of environmental samples is more
direct and transparent than the old method of inverting the Boltzmann principle.
References
Datta D., “Application of Bootstrap in Dose Apportionment of Nuclear Plants via Uncertainty
Modeling of the Effluent Released from Plants,” World Journal of Nuclear Science and
Technology, vol. 2, 2012, pp. 41-47.
Efron, B and R. J. Tibshirani, “An Introduction to the Bootstrap,” Chapman and Hall, New York,
1993.
Botev, Z.I.; Grotowski, J.F.; Kroese, D.P., "Kernel density estimation via diffusion". Annals of
Statistics, 38 (5): 2916–2957, 2010, DOI:10.1214/10-AOS799.
IAEA, Procedures for Conducting Probabilistic Safety Assessment of Nuclear Power Plants
(Level 1), Safety Series No. 50 – P-4, Vienna, 1992, ISBN 92-0-102392-8.
IAEA, Dispersion of Radioactive Material in Air and Water and Consideration of Population
Distribution in Site Evaluation for Nuclear Power Plants, Safety Series No. NS-G-3.2,
Vienna, 2002.
INPO 11-005, Rev. 0, “Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear
Power Station,” Institute of Nuclear Power Operations (November 2011).
Kapur, J.N., Maximum-Entropy Models in Science and Engineering (Wiley Eastern, New-Delhi),
1990.
Klir, G.J., and Folger, T.A., "Fuzzy Sets, Uncertainty and Information", Prentice-Hall, New Jersey,
pp. 1-32, 1988.
Report-I,“Report of Japanese Government to the IAEA Ministerial Conference on Nuclear
Safety—The Accident at TEPCO’s Fukushima Nuclear Power Stations,” Government of
Japan (June 2011).
Report-II,“Fukushima Daiichi Nuclear Power Station, Response After the Earthquake,” Summary
Report of Interviews of Plant Operators, International Atomic Energy Agency (June 2011).
Report-III,“IAEA International Fact Finding Expert Mission of the Fukushima Dai-ichi NPP
Accident Following the Great East Japan Earthquake and Tsunami,” International Atomic
Energy Agency (June 16, 2011).
UMEKI, H., “Radioactive Waste Management in Japan—An Overview,” Proc. IAEA Technical
Meeting on the Establishment of a Radioactive Waste Management Organization,
International Atomic Energy Agency / Japan Atomic Energy Agency (June 2010).
Optimal Process Parameter Selection in Laser Transmission
Welding by Cuckoo Search Algorithm
Debkalpa Goswami, Shankar Chakraborty*
Department of Production Engineering, Jadavpur University, Kolkata – 700 032, WB, India
*
Corresponding author (e-mail: s_chakraborty00@yahoo.co.in)
Laser transmission welding (LTW) is a novel and promising technology for many
industries involved in the joining of plastics. LTW is advantageous because it is a non-contact, non-contaminating, precise and flexible process which is easy to control and
automate. To obtain improved welding performance, it is essential to determine the
optimal settings of parameters influencing the process. The previous researchers
concentrated on predicting trends of responses in LTW with respect to the control
parameters, rather than optimization of these responses. In this paper, a modern
metaheuristic algorithm, cuckoo search (CS), is used as an optimization tool for LTW
process. Both single and multi-objective optimizations are considered. It is found that
CS is not only an excellent algorithm with respect to fast convergence and locating global
optima, but is also very efficient in predicting trends of multi-dependent responses.
1. Introduction
Laser welding was first demonstrated on thermoplastics in the 1970s. However, the
process overcame the industrial threshold only during the last decade. Laser transmission
welding (LTW) is the latest addition in the field of plastic welding, patented by Nakamata in
1987 (US Patent 4636609). LTW processes are characterized by short welding cycle times
while providing optically and qualitatively high-grade joints. The process is free of any vibration,
imposes minimal thermal stress, creates a very small heat-affected zone (HAZ) and avoids
particle generation. Additionally, the method shows excellent integration capabilities and
potential for automation. LTW process is thus preferred over its conventional counterparts,
and has found several state-of-the-art applications in microfluidics, micro electro-mechanical
systems (MEMS) and biomedicine (Wang et al., 2012).
In LTW, a laser beam is aimed at two overlapping thermoplastic parts: one of which
(top) is transparent to the radiation at laser wavelength, and the other (bottom) is absorbent to
that radiation. Depending on the thickness and absorption coefficient of the bottom part,
surface layers of both the parts are melted at the joining interface. After cooling and re-solidification, a bond is formed at the weld seam (Van de Ven and Erdman, 2007). The
efficiency of LTW strongly depends on the optical properties of the components to be joined
and the process parameters.
The key control parameters for LTW are laser power, welding speed, beam spot area
and clamping pressure, which together control the temperature field inside the weld seam,
and hence, the weld quality. Only a limited number of studies have been carried out to
investigate the effects of different process parameters on weld quality, with different
thermoplastic materials and application strategies. The attempt to optimize the parametric
settings was feeble in most of those cases.
Acherjee et al. (2009) implemented response surface methodology (RSM) to develop
regression equations based on experimental results to predict the effects of process
parameters on weld strength and seam width. LTW of polymethyl methacrylate (PMMA) was
carried out using a diode laser system. Van de Ven and Erdman (2007), starting from first
principles of heat transfer, put forward an analytical tool to allow LTW process parameters to
be designed through an iterative process. The model also enabled optimization of operating
parameters while providing monetary and time-saving benefits. Acherjee et al. (2011)
established a correlation between the LTW parameters and output variables through a
nonlinear model developed by artificial neural network (ANN). A sensitivity analysis was also
performed to determine the parametric influence on the model outputs.
Wang et al. (2012) undertook a maiden venture for multi-objective optimization of
LTW of polymer sheets. RSM was employed, and the responses under consideration were
joint strength, joint width and joint cost. A sequentially integrated optimization approach based
on Taguchi method, RSM and desirability function analysis was proposed by Acherjee et al.
(2013) for evaluating the optimal set of LTW parameters. The authors found that the
performance of the proposed approach was better than that of grey-Taguchi method.
To the best of our knowledge, no work has so far been reported in the literature
where a dedicated parametric optimization of LTW has been carried out using any stochastic
algorithm. This fact has motivated the present study. The cuckoo search (CS) algorithm has
been chosen as the tool for optimization. It is a new, almost unexplored swarm intelligence
algorithm for global optimization. Preliminary studies (Yildiz, 2012; Gandomi et al., 2013) have
shown that it is very promising, and outperforms the existing algorithms, such as genetic
algorithm (GA) and particle swarm optimization (PSO).
2. The cuckoo search algorithm
The cuckoo search (CS) is a new meta-heuristic optimization algorithm, proposed by
Yang and Deb (2009, 2010). This bio-based algorithm draws inspiration from the curious
breeding behavior of cuckoo birds. Before going into a technical description of the algorithm, it
is first essential to briefly review this interesting breeding habit of cuckoos.
Cuckoo birds have many characteristics which differentiate them from other birds, but
their main distinguishing feature is aggressive reproduction strategy. Some species, such as
the Ani and Guira cuckoos lay their eggs in communal nests, though they may remove others’
eggs to increase the hatching probability of their own eggs. Cuckoos engage in brood parasitism,
a type of parasitism in which a bird (the brood parasite) lays and abandons its eggs in the nest of
another species. Some host birds do not behave in a friendly way toward intruders and engage in
direct conflict with them. In such a situation, the host bird will throw the alien eggs away. In other
situations, more friendly hosts will simply abandon their nests and build new nests elsewhere.
Some cuckoo species, such as the Tapera, have evolved in
such a way that female parasitic cuckoos are often very specialized in the mimicry in color
and pattern of the eggs of a few chosen host species. This reduces the probability of their
eggs being abandoned and thus increases their chance of hatching (Payne et al., 2005).
The CS algorithm employs a type of random search called Lévy flights, in which the
step-lengths are distributed according to a heavy-tailed probability distribution. Specifically,
the distribution used is a power law of the form y = x^(-α), where 1 < α ≤ 3, and therefore has an
infinite variance. According to conducted studies, the foraging behavior of many flying animals
and insects shows the typical characteristics of these flights (Pavlyukevich, 2007). The following
idealized rules are used to simplify the description of CS:
a) Only one egg at a time is laid by a cuckoo. The cuckoo dumps its egg in a randomly
chosen nest.
b) Only the best nests with high quality eggs will be passed into the next generation.
c) The number of available host nests is fixed. The egg laid by a cuckoo is discovered by the
host bird with a probability pd.
A detailed description of the aforementioned Lévy flights implementation can be found
in Yang and Deb (2009). Here, the final pseudo-code for CS algorithm is presented as below
(Yang and Deb, 2010, Gandomi et al., 2013):
1. Begin
2. Objective function f(x), x = (x1,…, xd)T;
3. Initialize a population of n host nests xi (i = 1,2, …, n);
4. while (t < Maximum Generation) or (not stop criterion)
5. Get a cuckoo (say i) randomly and generate a new solution by Lévy flights;
6. Evaluate its quality/fitness, Fi;
7. Choose a nest among n (say j) randomly;
8. if (Fi > Fj)
9. Replace j by the new solution;
10. end if
11. Abandon a fraction (pd) of worse nests, and build new ones at new locations via Lévy
flights;
12. Keep the best solutions (or nests with quality solutions);
13. Sort the solutions and find the current best;
14. end while
15. Post-processing and visual representation.
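A compact Python sketch of this pseudo-code follows. It is illustrative only, not the implementation used in this study (which was written in MATLAB): Lévy steps are generated by Mantegna's algorithm, the step-size factor alpha is an assumed value, and abandoned nests are rebuilt uniformly at random rather than by Lévy flights. It is demonstrated on a simple sphere function:

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, lb, ub, n=25, pd=0.25, max_gen=200, alpha=0.1, seed=42):
    """Minimise f over the box [lb, ub], following the pseudo-code above."""
    random.seed(seed)
    clip = lambda x: [min(max(xi, l), u) for xi, l, u in zip(x, lb, ub)]
    nests = [[random.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(max_gen):
        # Steps 5-6: get a cuckoo randomly, move it by a Levy flight, evaluate
        i = random.randrange(n)
        new = clip([xi + alpha * levy_step() for xi in nests[i]])
        f_new = f(new)
        # Steps 7-10: replace a random nest j if the new solution is better
        j = random.randrange(n)
        if f_new < fit[j]:  # minimisation, so lower fitness wins
            nests[j], fit[j] = new, f_new
        # Step 11: abandon a fraction pd of the worst nests (rebuilt at random here)
        worst = sorted(range(n), key=fit.__getitem__, reverse=True)[:int(pd * n)]
        for k in worst:
            nests[k] = [random.uniform(l, u) for l, u in zip(lb, ub)]
            fit[k] = f(nests[k])
    # Steps 12-13: keep and report the current best nest
    best = min(range(n), key=fit.__getitem__)
    return nests[best], fit[best]

sphere = lambda x: sum(xi * xi for xi in x)  # toy test function, minimum 0 at origin
x_best, f_best = cuckoo_search(sphere, lb=[-5.0, -5.0], ub=[5.0, 5.0])
print(f_best)  # a small value close to 0
```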
3. Optimization of LTW process: results and discussions
To demonstrate the application and effectiveness of CS algorithm for parametric
optimization of LTW process, the objective functions derived from the models given by
Acherjee et al. (2009) are used here as an illustrative example. Based on the pseudo-code
given in section 2, a computer program is developed in MATLAB 7.10.0 (R2010a) on an
Intel® CoreTM i5-2450M CPU @ 2.50 GHz, 4.00GB RAM operating platform. The following
algorithm-specific parameters are used: maximum iterations = 200, function evaluations =
10000, number of nests (n) = 25 and pd = 0.25.
As stated earlier, Acherjee et al. (2009) employed RSM to develop regression
equations from experimental data. No optimization was attempted. The different parameters
considered and their RSM-coded levels are given in Table 1. Thus, each parameter is
constrained within the values corresponding to -2 and +2 coded levels. The two responses
recorded were lap-shear strength (LSS) and weld-seam width (WSW). The regression
equations for LSS and WSW, in terms of coded values of parameters, are given in Eqs. (1)
and (2) respectively.
Table 1. Control parameters and their coded levels.

Parameter           Unit     Notation   Level
                                        -2     -1     0      +1     +2
Power               Watt     P          16     19     22     25     28
Welding speed       mm/min   S          240    300    360    420    480
Stand-off distance  mm       F          24     28     32     36     40
Clamp pressure      MPa      C          3.30   6.30   9.30   12.30  15.30
Lap-shear strength, LSS (N/mm) = 47.25 + 2.15P – 6.54S – 0.90F + 0.80C + 2.22PF –
1.31PC – 7.07SF – 0.84FC – 3.23S2 – 7.77F2
(1)
Weld-seam width, WSW (mm) = 2.50 + 0.13P – 0.26S + 0.26F + 0.022C – 0.057PS –
0.034PF – 0.065SF + 0.038F2 – 0.030C2
(2)
At first, single objective optimization is considered, and the two responses, LSS and
WSW, are optimized independently. For superior joint quality, it is desired that LSS would be
maximum and WSW minimum. The maximum value of LSS obtained by CS is 67.41 N/mm;
and the minimum value of WSW is achieved as 1.54 mm. The results, along with the optimal
parameter settings for the responses, are summarized in Table 2.
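The reported optima can be cross-checked by evaluating Eqs. (1) and (2) directly. The sketch below (illustrative, not the authors' code) encodes both regression equations together with the linear real-to-coded conversions implied by the levels of Table 1; plugging in the optimal settings reproduces the reported 67.41 N/mm and 1.54 mm:

```python
def lss(P, S, F, C):
    """Lap-shear strength, Eq. (1), with real values converted to coded units."""
    p, s, f, c = (P - 22) / 3, (S - 360) / 60, (F - 32) / 4, (C - 9.30) / 3
    return (47.25 + 2.15 * p - 6.54 * s - 0.90 * f + 0.80 * c + 2.22 * p * f
            - 1.31 * p * c - 7.07 * s * f - 0.84 * f * c
            - 3.23 * s ** 2 - 7.77 * f ** 2)

def wsw(P, S, F, C):
    """Weld-seam width, Eq. (2), with real values converted to coded units."""
    p, s, f, c = (P - 22) / 3, (S - 360) / 60, (F - 32) / 4, (C - 9.30) / 3
    return (2.50 + 0.13 * p - 0.26 * s + 0.26 * f + 0.022 * c - 0.057 * p * s
            - 0.034 * p * f - 0.065 * s * f + 0.038 * f ** 2 - 0.030 * c ** 2)

# optimal settings reported for each single-objective run
print(round(lss(28.0, 240.0, 36.98, 3.30), 2))  # 67.41
print(round(wsw(16.0, 480.0, 24.00, 3.30), 2))  # 1.54
```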
Table 2. Results of single objective optimization.

Response            Nature of      Optimal   Optimal parameter setting
                    optimization   value     P      S      F      C
Lap-shear strength  Maximize       67.41     28.00  240.0  36.98  3.30
Weld-seam width     Minimize       1.54      16.00  480.0  24.00  3.30
Figure 1 shows the convergence diagrams for CS algorithm with respect to LSS (1a)
and WSW (1b). It is evident from the graphs that CS algorithm exhibits a property of very
rapid convergence towards the global optimum. It is worth mentioning here that the statistical
variability of the results over repeated runs is also insignificant (St. Dev. of the order of 1E-9),
and thus CS is far superior in this regard compared to other stochastic algorithms involving
random number generations. The average computational time observed for single-response
optimization is 0.56 s. The variations of responses with process parameters are displayed in
Figure 2. These scatter plots represent the trend as obtained using CS algorithm. For better
visualization, the results are compared with the trends as predicted by Acherjee et al. (2009).
The parameter hold-values considered by them were at coded level 0. It is clear from the plots
that the trends predicted by CS, and those given in the published work are in complete
agreement with each other. Moreover, CS is more advantageous in this regard because no
parameter is held constant during analysis, and hence, the true overall trend can be
predicted. Similarly agreeing graphs may also be obtained for other parameter-response
combinations, which are not presented here due to lack of space.
Figure 1. Algorithm convergence plots: (a) for lap-shear strength, (b) for weld-seam width.
Figure 2. Comparison of scatter diagrams with published work: (a) LSS vs. S, (b) WSW vs. F.
Now, for multi-objective optimization, the following objective function is developed:
Z
w1 weld-seam width
weld-seam widthmin
w 2 lap-shear strength
lap-shear strengthmax
(3)
where w1 and w2 are the weights (relative importance) assigned to WSW and LSS
respectively (such that w1 + w2 = 1), and the min and max values in the denominator of the
expression (3) are those obtained from single-response optimization results of CS. The
choice of weights depends entirely on the preference of the process engineer, or can be
determined by analytic hierarchy process (AHP). Table 3 shows the results of multi-response
optimization for three considered situations. The function Z is minimized in all cases. The
average computational time observed is 0.87 s. It is clear from the results that a compromise
must be struck between LSS and WSW during any LTW process.
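The Zmin values reported in Table 3 below can be reproduced from Eq. (3) together with the single-objective optima (67.41 N/mm and 1.54 mm). The following sketch (illustrative only) agrees with the tabulated Zmin to within the rounding of the tabulated LSS and WSW values:

```python
def z(w1, w2, lss_val, wsw_val, lss_max=67.41, wsw_min=1.54):
    """Combined objective of Eq. (3): normalised WSW minus normalised LSS."""
    return w1 * wsw_val / wsw_min - w2 * lss_val / lss_max

# the three weight cases of Table 3, using the tabulated LSS and WSW values
print(round(z(0.5, 0.5, 45.57, 1.74), 3))  # 0.227 (tabulated: 0.229)
print(round(z(0.1, 0.9, 66.81, 3.64), 3))  # -0.656 (tabulated: -0.655)
print(round(z(0.9, 0.1, 14.63, 1.54), 3))  # 0.878 (tabulated: 0.878)
```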
Table 3. Results of multi-objective optimization.

Condition                   Zmin     LSS     WSW    P      S      F      C
Case 1: w1 = w2 = 0.5       0.229    45.57   1.74   16.00  410.5  26.71  15.30
Case 2: w1 = 0.1, w2 = 0.9  -0.655   66.81   3.64   28.00  250.8  36.17  3.30
Case 3: w1 = 0.9, w2 = 0.1  0.878    14.63   1.54   16.00  480.0  24.00  3.30
4. Conclusions
In this paper, a maiden venture is undertaken to optimize LTW responses and predict
the optimal control parameters using a new and almost unexplored stochastic algorithm, i.e.
cuckoo search (CS). It is found that CS can successfully handle both single and multi-objective
optimization problems, and has a very fast convergence rate and very low computational
time (less than 1 s). The algorithm also exhibits exceptionally low statistical variability (St.
Dev. < 1E-9), and can be used efficiently for overall response trend prediction in the case of
responses dependent on several factors.
The optimal parametric combinations for LTW derived here will help the process
engineers in exploiting the full potential of this process, without depending solely on
experimental data or empirical laws. CS can now be applied as a global optimization tool for
multi-response optimization of other processes too.
References
Acherjee, B., Misra, D., Bose, D. and Venkadeshwaran, K. Prediction of weld strength and
seam width for laser transmission welding of thermoplastic using response surface
methodology. Optics & Laser Technology, 2009, 41(8), 956-967.
Acherjee, B., Mondal, S., Tudu, B. and Misra, D. Application of artificial neural network for
predicting weld quality in laser transmission welding of thermoplastics. Applied Soft
Computing, 2011, 11(2), 2548-2555.
Acherjee, B., Kuar, A.S., Mitra, S. and Misra, D. A sequentially integrated multi-criteria
optimization approach applied to laser transmission weld quality enhancement - a case
study. International Journal of Advanced Manufacturing Technology, 2013, 65(5-8), 641-650.
Gandomi, A.H., Alavi, A.H. and Yang, X.-S. Cuckoo search algorithm: a metaheuristic
approach to solve structural optimization problems. Engineering with Computers, 2013,
29(1), 17-35.
Pavlyukevich, I. Cooling down Lévy flights. Journal of Physics A: Mathematical and
Theoretical, 2007, 40(41), 12299-12313.
Payne, R.B., Sorenson, M.D. and Klitz, K. The Cuckoos. Oxford University Press, 2005.
Van de Ven, J.D. and Erdman, A.G. Laser Transmission Welding of Thermoplastics - Part I:
Temperature and Pressure Modeling. Journal of Manufacturing Science and Engineering,
2007, 129(5), 849-858.
Wang, X., Zhang, C., Wang, K., Li, P., Hu, Y., Wang, K. and Liu, H. Multi-objective
optimization of laser transmission joining of thermoplastics. Optics & Laser Technology,
2012, 44(8), 2393-2402.
Yang, X.-S. and Deb, S. Cuckoo search via Lévy flights. Proceedings of World Congress on
Nature & Biologically Inspired Computing (NaBIC), 9-11 December 2009, Coimbatore,
India.
Yang, X.-S. and Deb, S. Engineering Optimisation by Cuckoo Search. International Journal of
Mathematical Modelling and Numerical Optimisation, 2010, 1(4), 330-343.
Yildiz, A.R. Cuckoo search algorithm for the selection of optimal machining parameters in
milling operations. International Journal of Advanced Manufacturing Technology, 2012,
64(1-4), 55-61.
Artificial Neural Networks Based Indoor Air Quality Model for A
Mechanically Ventilated Building Near an Urban Roadway
Rohit J, Shiva Nagendra S M.
Indian Institute of Technology, Chennai – 600036, Tamil Nadu, India
This paper describes the development of an Artificial Neural Network (ANN) based indoor air quality (IAQ) model for predicting particulate matter (PM) concentrations in an office environment using vehicular and meteorological parameters as input variables. The hourly average temperature, relative humidity, dew point, wind speed, wind direction, total vehicular count (TVC) and PM10, PM2.5 and PM1.0 concentration data measured from 4th June to 19th August 2012 have been used as input and target vectors for model development. A feed-forward neural network employing the back-propagation algorithm with a momentum term and variable learning rate has been used to train the ANN-based IAQ model. An architecture of 6 input neurons, 4 hidden neurons and 3 output neurons has been found to be the best model for predicting indoor PM concentrations. The performance of the ANN-based IAQ model on the data set is found to be reasonably accurate, with a coefficient of regression (R) of 0.6.
1. Introduction
Study of indoor air quality (IAQ) in an office microenvironment is important in understanding
the occupant’s exposure levels to outdoor pollution, considering that people spend most of their
daily time in the workplace (Klepeis et al., 2001). Motor vehicles are believed to be the major cause of ambient air pollution in urban environments (Baek, 1991). Emissions from these sources have
immediate implications for the indoor environment, since the air in both naturally and
mechanically ventilated buildings is replenished to varying degrees with ambient air, which may
or may not be filtered or otherwise conditioned before being brought indoors. Several studies
have demonstrated that ambient air can have a significant impact on the indoor environment
(Chitra and Nagendra 2011).
One of the general methods of studying IAQ in any microenvironment is to use the mass
balance models. These models are generic in nature and are derived based on the mass balance
for contaminant flow into and out of the indoor volume considered. However, such models aim to
resolve the underlying physical and chemical equations controlling contaminant concentrations
and therefore require detailed data on indoor and/or outdoor air contaminant concentrations,
meteorological conditions, and components of the indoor space (like obstructions and ventilation
openings) and their respective dimensions in the microenvironment considered.
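The mass-balance approach described above can be illustrated with a single-zone sketch. The air-exchange rate, penetration factor and deposition rate used here are hypothetical values chosen only for illustration, not parameters from this study.

```python
def indoor_concentration(c_out, c0=0.0, air_exchange=2.0, penetration=0.8,
                         deposition=0.2, dt=1.0 / 60.0):
    """Single-zone mass balance: dC_in/dt = P*a*C_out - (a + k)*C_in,
    with air-exchange rate a (1/h), penetration factor P and particle
    deposition rate k (1/h). Integrates an hourly outdoor PM series
    with explicit Euler sub-steps of dt hours."""
    c_in, hourly = c0, []
    for c in c_out:                           # one outdoor value per hour
        for _ in range(int(round(1.0 / dt))):
            c_in += (penetration * air_exchange * c
                     - (air_exchange + deposition) * c_in) * dt
        hourly.append(c_in)
    return hourly

# constant 100 µg/m³ outdoors: the indoor level approaches the steady
# state P*a/(a + k) * C_out = 0.8*2.0/2.2 * 100 ≈ 72.7 µg/m³
levels = indoor_concentration([100.0] * 12)
```

The point of the sketch is the data burden the paper mentions: even this minimal model needs three building-specific parameters that an ANN trained on measured data does not require explicitly.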
Alternatively, air quality inside a microenvironment can also be modeled with the use of
artificial neural networks (ANNs). ANNs’ ability to solve complex systems resulted in their
popularity and application in a number of areas (Maier and Dandy, 2001). Several studies
observed ANNs to perform much better than the conventional statistical methods (Gardner and
Dorling, 1998, 1999; Perez et al., 2000; Kolehmainen et al., 2001; Kukkonen et al., 2003;
Papanastasiou et al., 2007). The role of ANNs in the field of air pollution has, however, largely been limited to modeling vehicular exhaust emissions (Brunelli et al., 2007; Khare and Nagendra, 2006) and atmospheric contaminants.
ANNs are easier to work with for predicting indoor contaminants, where complex rules and mathematical calculations are involved. Considering the practical applicability and the
validity of ANN models in assessing the indoor air quality, it is hoped that the proposed
methodology encourages the development of more efficient environmental monitoring systems
(with limited independent input variables), thereby reducing operational and maintenance costs.
Studies to develop ANN models for predicting indoor concentrations from outdoor pollution data are scanty. In this paper, an effort is made to fill this knowledge gap by using ANNs to predict indoor air quality from outdoor traffic and meteorological parameters.
2. Methodology
2.1 Study region
Chennai city (13.04N 80.17E) is the fourth largest city in India lying on the Coromandel Coast of
the Bay of Bengal towards the southern part of the country. It is one of the most highly
industrialized and densely populated cities in India, with an estimated urban population of over 8
million. Chennai has a tropical climate, specifically a tropical wet and dry climate. The city lies on
the thermal equator and is also on the coast, which prevents extreme variation in seasonal
temperature. The weather is hot and humid for most of the time. For the present study a
mechanically ventilated office building of Ramco Systems, Chennai, has been selected. Fig. 1 shows the study area. The study room is adjacent to an urban road (Sardar Patel Road) with
heavy traffic flow (about 7000 vehicles/h). The monitored site was near the reception of the
mechanically ventilated office building. The site is characterized by frequent human movement.
Figure 1. Study Area
2.2 Data collection and data analysis
Measurements of PM10, PM2.5 and PM1.0 concentrations were carried out during the summer of 2012.
There are no significant pollution sources near the site except vehicular traffic during the peak
hours. The monitoring period was from June 4th 2012 to August 19th 2012. The monitoring was done with GRIMM dust monitors, which measure PM10, PM2.5 and PM1.0 concentrations at a frequency of once every 5 minutes. Vehicular volume was measured by recording the traffic flow with a video camera and manual counting. Meteorological data were obtained from the Weather Underground website. The monitored meteorological parameters were temperature (°C), dew point (°C), relative humidity (%), wind speed (km/h) and wind direction.
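Since the dust monitors log every 5 minutes while the meteorological data are hourly, the PM series must be reduced to hourly averages before pairing. A pandas sketch (the column name and values are hypothetical):

```python
import pandas as pd

# 5-minute PM readings -> hourly averages, matching the hourly
# resolution of the meteorological and traffic-count data
idx = pd.date_range("2012-06-04 00:00", periods=24, freq="5min")
pm = pd.DataFrame({"PM10": range(24)}, index=idx)
hourly = pm.resample("h").mean()    # 12 five-minute samples per hour
print(hourly["PM10"].tolist())      # [5.5, 17.5]
```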
The indoor–outdoor diurnal variations of the measured pollutants’ concentrations were
analyzed qualitatively and statistically to investigate the indoor air quality status of the building.
2.3 ANN Model
This study used multilayer perceptron networks which are successfully employed in
environmental studies. In this network, the input quantities are fed into input neurons, processed,
and then passed on to the next level, the hidden layer neurons. In the process, each input signal is
multiplied by a weight that determines the intensity of the input. The weighted input received from
each input neuron is added up by the hidden layer neurons, and associated with a bias before
passing the result onto the next level using a transfer function. All bias neurons are connected to
all neurons in the next hidden and output layers. Training is of fundamental importance to the ANN: observed values of the output variable are compared to the network output, and the error is minimized by adjusting the weights and biases.
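The weighted-sum, bias and transfer-function flow described above can be sketched as a tiny forward pass. The 6-4-3 shape mirrors the architecture reported later in the paper, but the random weights below merely stand in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# 6 inputs -> 4 hidden (tanh) -> 3 outputs, as in the 6-4-3 IAQ model
W1, b1 = rng.normal(size=(6, 4)), np.zeros(4)   # input-to-hidden weights, biases
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)   # hidden-to-output weights, biases

def forward(x):
    """Each hidden neuron sums its weighted inputs, adds a bias, and
    passes the result through the tanh transfer function."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2              # linear output layer

x = rng.normal(size=6)              # e.g. temperature, RH, dew point, WS, WD, TVC
y = forward(x)                      # three outputs: PM10, PM2.5, PM1.0
```

Back-propagation then adjusts W1, b1, W2, b2 to reduce the error between y and the measured concentrations.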
2.4 Model performance evaluation
The statistical performance measures, namely, coefficient of regression (R), Root mean square
error (RMSE), Mean Bias Error (MBE), index of agreement (IA), time series plots, were used for
IAQ model validation.
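The four numerical measures can be computed as below. The index of agreement here uses Willmott's common formulation, which is an assumption since the paper does not give the formula.

```python
import numpy as np

def validation_metrics(obs, pred):
    """Coefficient of regression R, RMSE, MBE and index of agreement (IA)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    mbe = np.mean(pred - obs)                 # negative = under-prediction
    ia = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return r, rmse, mbe, ia

r, rmse, mbe, ia = validation_metrics([10, 12, 14, 16], [9, 13, 13, 17])
```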
3. Results and discussion
3.1 Descriptive Statistics
The experimental days were sorted into "working" and "non-working" days. The first category refers to weekdays with the office building fully or partially occupied, and the second includes days with little or no activity recorded indoors. The variations of indoor and outdoor PM10, PM2.5 and PM1.0 concentrations followed the same trend, with a time lag of close to an hour (Fig. 2), during weekdays. The maximum indoor and outdoor PM concentrations of 261 µg/m³ and 110 µg/m³ were observed during peak-hour traffic flow.
Figure 2. Diurnal Variation of a) PM10, b)PM2.5, c) PM1.0 on “Weekdays”
Figure 3. Diurnal Variation of a) PM10, b) PM2.5, c) PM1.0 on “Weekends”
Similar variations of indoor and outdoor PM concentrations were observed during weekends. There is a slight increase in the PM2.5 and PM1.0 concentrations, 15.1±6.1 and 8.1±3.2 µg/m³ respectively, during weekends compared with 8.11±3.9 and 4.42±2.1 µg/m³ during weekdays, which may be attributed to the operation of the mechanical ventilation system during working hours.
The indoor/outdoor ratio for PM1.0 of 0.35 appeared to be higher than those of PM10 and PM2.5 (0.08 and 0.22 respectively), showing that there is high infiltration of finer particles from outside traffic.
3.2 ANN model
Model development
In this study, feed-forward networks were employed, with the hyperbolic tangent function as the transfer function. The database was divided into three sections for early stopping: 60% of the data were used in training the networks, 20% were designated as the validation set, and the remaining 20% were employed in testing. Output variables
are PM10, PM2.5, and PM1.0, with inputs of total vehicular count (TVC) and different outdoor meteorological parameters. CO was not selected as an input variable because a large portion of the data was below the detection limit. The structure of the best performing network is presented in Fig. 4. The training and testing performance of the network is shown in Table 1.
Figure 4. Structure of the Developed ANN model
Table 1. Training and testing performance of the developed ANN model

                                     Training   Testing
Sum of squares error                 254.738    91.310
Average overall relative error       0.613      0.685
Relative error for scale dependents:
  PM10                               0.613      0.704
  PM2.5                              0.601      0.692
  PM1.0                              0.625      0.646
Model Validation
Table 2 summarizes the performance of the networks in terms of observed and predicted mean and standard deviation, coefficient of regression (R), root mean square error (RMSE), mean bias error (MBE) and index of agreement (IA). The observed and predicted mean values for PM10, 11.3±8.7 and 9.32±7.5 respectively, were found to be close to each other. The IA was also found
to be 0.85, which indicates that the overall performance of the model is satisfactory. Similar observations were made for PM2.5 and PM1.0.
Table 2. Model performance evaluation parameters

            Observed         Predicted
Variable    Mean    STDEV    Mean    STDEV    R      RMSE    MBE      IA
PM10        11.3    8.7      9.32    7.5      0.67   8.228   -2.119   0.85
PM2.5       9.7     7.65     7.85    6.0      0.70   7.232   -1.95    0.71
PM1.0       7.7     4.1      5.0     3.1      0.74   3.6     -0.39    0.74
Model performance is also illustrated using time-series graphs of observed vs. predicted concentrations during various days of the week (Fig. 5a-c). The model worked very well during weekdays, with a slight under-prediction during weekends. The R value during weekdays (0.65) was high compared with that of the weekends (0.42). Also, the difference between observed and predicted average PM10 concentrations, 21.705±5.76 and 12.5803±4.46 respectively during weekends, was high compared with 7.17±5.68 and 8.0±3.8 respectively during weekdays. Similarly, there was an under-prediction for PM2.5 and PM1.0 during weekends.
Figure 5.a Model Performance Observed vs. Predicted PM 2.5 Concentrations
Figure 5.b Model Performance Observed vs. Predicted PM 10 Concentrations
Figure 5.c Model Performance Observed vs. Predicted PM 1.0 Concentration
4. Conclusion
In urban areas, vehicles contribute a significant amount of particulate pollution and affect the indoor air quality of buildings located close to urban roadways. In the present study, indoor and outdoor air quality measured in an urban area clearly demonstrates the contribution of vehicular pollution to the air quality of a mechanically ventilated office building. Further, an ANN-based indoor air quality model was developed for predicting the indoor PM concentrations in a mechanically ventilated office building near an urban roadway. Results indicated that ANNs are capable of modeling indoor air quality with a prediction accuracy of IA = 0.85 for PM10, IA = 0.71 for PM2.5 and IA = 0.74 for PM1.0.
References
Brunelli, U., V. Piazza, L. Pignato, F. Sorbello, and S. Vitabile. 2007. Two-Days Ahead Prediction of Daily Maximum Concentrations of SO2, O3, PM10, NO2, CO in the Urban Area of Palermo, Italy. Atmos. Environ. 41:2967–2995. doi:10.1016/j.atmosenv.2006.12.013
Chithra, V.S., and S.M. Shiva Nagendra. 2012. Indoor Air Quality Investigations in a Naturally Ventilated School Building Located Close to an Urban Roadway in Chennai, India. Building and Environment 54:159–167.
Gardner, M.W., and S.R. Dorling. 1998. Artificial Neural Networks (the Multilayer Perceptron): A Review of Applications in the Atmospheric Sciences. Atmos. Environ. 32:2627–2636.
Gardner, M.W., and S.R. Dorling. 1999. Neural Network Modelling and Prediction of Hourly NOx and NO2 Concentrations in Urban Air in London. Atmos. Environ. 33:709–719. doi:10.1016/S1352-2310(98)00230-1
Khare, M., and S.M.S. Nagendra. 2006. Artificial Neural Networks in Vehicular Pollution Modeling. New York, NY: Springer-Verlag.
Klepeis, N.E., W.C. Nelson, W.R. Ott, J.P. Robinson, A.M. Tsang, P. Switzer, J.V. Behar, S.C. Hern, and W.H. Engelmann. 2001. The National Human Activity Pattern Survey (NHAPS): A Resource for Assessing Exposure to Environmental Pollutants. J. Expos. Anal. Environ. Epidemiol. 11:231–252. doi:10.1038/sj.jea.7500165
Kolehmainen, M., H. Martikainen, and J. Ruuskanen. 2001. Neural Networks and Periodic Components Used in Air Quality Forecasting. Atmos. Environ. 35:815–825. doi:10.1016/S1352-2310(00)00385-X
Kukkonen, J., L. Partanen, A. Karppinen, J. Ruuskanen, H. Junninen, M. Kolehmainen, and M. Gwynne. 2003. Extensive Evaluation of Neural Network Models for the Prediction of NO2 and PM10 Concentrations, Compared with a Deterministic Modelling System and Measurements in Central Helsinki. Atmos. Environ. 37:4539–4550. doi:10.1016/S1352-2310(03)00583-1
Maier, H.R., and G.C. Dandy. 2001. Neural Network Based Modeling of Environmental Variables: A Systematic Approach. Math. Comput. Model. 33:669–682. doi:10.1016/S0895-7177(00)00271-5
Papanastasiou, D.K., D. Melas, and I. Kioutsioukis. 2007. Development and Assessment of Neural Network and Multiple Regression Models in Order to Predict PM10 Levels in a Medium-Sized Mediterranean City. Water Air Soil Pollut. 182:325–334. doi:10.1007/s11270-007-9341-0
Perez, P., A. Trier, and J. Reyes. 2000. Prediction of PM2.5 Concentrations Several Hours in Advance Using Neural Networks in Santiago, Chile. Atmos. Environ. 34:1189–1196. doi:10.1016/S1352-2310(99)00316-7
Parametric Optimization of Die Casting Process using Cuckoo
Search Algorithm
C. V. Chavan1, P. J. Pawar2
1Sandip Foundation's S.I.T.R.C., Nasik, Maharashtra, India
2K.K.W.I.E.E.R., Nasik, Maharashtra, India
*Corresponding author (e-mail: charudatt.chavan@sitrc.org)
Die castings are amongst the highest volume, mass-produced items manufactured by the metalworking industry, and they can be found in thousands of consumer, commercial and industrial products. During processing, there are a number of parameters which govern the quality of a die casting, and hardly anything can happen in a die casting plant without affecting casting quality. In this paper an attempt is made to find the response of selected process parameters on the cycle time and density of the cast component. Work is done to evaluate the main effects, i.e. thermal characteristics (temperature of the molten metal, temperature of the fixed and moving die halves) and injection pressure of the molten metal, on the mentioned responses. In order to reduce experimental runs, Response Surface Methodology (RSM) based on Central Composite Design (CCD) is adopted. Regression is used to explore the space of the variables and develop the relationship between yield and the process variables. An experimental model is prepared using second-degree regression equations for these responses. For the obtained equations, the optimum levels of the parameters are found using the cuckoo search algorithm.
1. Introduction
Cold chamber high pressure die casting (HPDC) is a process ideally suited to
manufacture mass produced parts of complex shapes requiring precise dimensions. In this
process, molten metal is forced into a cold empty cavity of a desired shape and is then
allowed to solidify under a high holding pressure (Syrcos, 2003; Tsoukalas, 2008). The
parameters which exert a great deal of influence on the die casting process can be adjusted to levels of intensity such that suitable settings result in a sound manufacturing process. Die casting parameters are divided as follows (Syrcos, 2003): (1) die casting
machine-related parameters. (2) Shot-sleeve related parameters. (3) Die-related parameters.
(4) Cast-metal related parameters. The following paragraph presents fundamental concepts
of experimental design applied to the die casting process (Verran, 2008):
Response variables are the dependent variables, which change as they are subjected to different process parameters.
Control factors are the selected independent variables of the experiment, which have different
effects on the response variables when adjusted to different levels. They are subdivided into:
• Quantitative control factors (injection pressure, piston speed and temperature) and
• Qualitative control factors (die casting machine, operator and aluminum alloy).
Factor levels are the intensity to which the control factors are adjusted in a particular
experiment. They can be identified as low level, intermediate level and high level.
Treatments: Experimental run is a treatment. It is a combination of factor levels (parameters).
Though the main controlled variables are mold temperature, dosage volume, slow and fast shots, commutation spots, injection pressure, upset pressure, as well as chemical composition and liquid metal temperature (Syrcos, 2003; Verran et al., 2008; Zhao et al., 2009), the set of variables differs for different responses. Density in the die casting process has always been a problem and, in spite of considerable research, design and development, the ever-increasing complexity of die castings demanded by industry has made it virtually impossible to eliminate density-related porosity altogether, though die casting parameter optimization techniques can limit it to non-critical areas (Verran et al., 2006). Low density itself leads to many internal defects (porosity, shrinkage porosity, micro-voids, etc.)
(Syrcos, 2003). However, optimizing the conditions to obtain aluminium die castings of maximum density is critical as well as costly and time consuming (Krimpenis et al., 2006). The times required for solidification and cooling have been optimized individually (Roy, 2006), but the overall time from injection of the metal to ejection of the component has not yet been optimized precisely. The set of parameters required to reduce the casting time is yet to be defined. Also, from a production point of view, time saved in producing castings can improve both qualitative and quantitative performance (Xirouchakis et al., 2008). The attempt of the present paper is to investigate the role of the parameter set on cycle time and density and to obtain an optimized set using the cuckoo search algorithm.
2. Experimental procedure
2.1 Material and equipment
The experimental procedure requires high-performance, high-accuracy equipment able to secure correct measurement of the die casting parameter values. Hence, a semi-automated die casting cell, fully equipped with appropriate instrumentation and a control system, is employed. The die casting cell comprises an 800 t locking force TOSHIBA cold chamber die casting machine, a holding furnace, an automatic lubrication system for the inner die surfaces and an operator-controlled metal loader. The test sample is a cylinder head cover of BS II LM9 aluminium alloy. Its composition is given in Table 1. The test cell and casting sample, including the gating system, are shown in Figure 1.
Table 1. Chemical composition of LM9.

Element         Si         Cu       Fe       Zn       Mn        Ni       Mg       Ti       Sn
Percentage (%)  10.5-12.5  1.5-2.5  0.7-1.0  1.4 max  0.55 max  0.4 max  0.3 max  0.2 max  0.25 max
The die casting machine parameters are monitored online and recorded. Casting density is measured using a precision weighing scale and the volume of the component (2890 cm³); all weighings are conducted on a balance accurate to 0.001 g. A great deal of attention is paid to the determination of a theoretically sound casting. The cycle time for each run is measured online.
Figure 1. Actual cast components and die casting cell.
2.2 Design of experiment (D.O.E.) and data collection
Die casting quality is the result of a great number of parameters. Among these, holding
furnace temperature (X1), die temperature of fixed half (X2) and moving half (X3) and injection
pressure (X4) are selected as the most critical. Other parameters are kept constant during the
runs. Their levels and ranges are given in Table 2. In order to reduce the number of
runs the Central Composite Design (CCD) is used. The readings along with corresponding
responses are given in Table 3.
Table 2. Variables and coded levels.

                                        Coded level
Variable                       Symbol   -2    -1    0     +1    +2
Pouring temp. (°C)             X1       630   640   650   660   670
Die temp. of fixed half (°C)   X2       164   172   180   188   196
Die temp. of moving half (°C)  X3       156   166   176   186   196
Injection pressure (kg/cm²)    X4       180   185   190   195   200
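The coded levels in Table 2 follow the usual CCD transformation, coded = (actual - centre) / step, with the centre at level 0. A sketch using the centres and steps read off Table 2:

```python
# Coded level = (actual - centre) / step, with centres (level 0) and
# steps taken from Table 2
FACTORS = {
    "X1": (650, 10),   # pouring temperature (°C)
    "X2": (180, 8),    # fixed-half die temperature (°C)
    "X3": (176, 10),   # moving-half die temperature (°C)
    "X4": (190, 5),    # injection pressure (kg/cm²)
}

def to_coded(name, actual):
    centre, step = FACTORS[name]
    return (actual - centre) / step

print(to_coded("X1", 670), to_coded("X4", 180))   # prints 2.0 -2.0
```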
Table 3. Experimental data obtained from CCD runs.

       Coded levels           Cycle time   Density
Run    X1   X2   X3   X4      Y1 (s)       Y2 (g/cm³)
1      -1   -1   -1   -1      22.6         2.223
2       1   -1   -1   -1      22.73        2.213
3      -1    1   -1   -1      22.93        2.206
4       1    1   -1   -1      22.68        2.22
5      -1   -1    1   -1      22.61        2.209
6       1   -1    1   -1      23.9         2.178
7      -1    1    1   -1      23.03        2.165
8       1    1    1   -1      24.10        2.175
9      -1   -1   -1    1      22.68        2.154
10      1   -1   -1    1      21.85        2.458
11     -1    1   -1    1      20.83        2.138
12      1    1   -1    1      22.81        2.43
13     -1   -1    1    1      22.03        2.14
14      1   -1    1    1      22.73        2.54
15     -1    1    1    1      20.03        2.14
16      1    1    1    1      22.65        2.278
17      0    0    0    0      22.09        2.196
18      0    0    0    0      23.05        2.237
19      0    0    0    0      22.185       2.216
20      0    0    0    0      22.77        2.23
21      2    0    0    0      22.37        2.416
22     -2    0    0    0      20.20        2.014
23      0    2    0    0      22.37        2.231
24      0   -2    0    0      22.54        2.237
25      0    0    2    0      24.40        2.234
26      0    0   -2    0      22.9         2.251
27      0    0    0    2      20.50        2.57
28      0    0    0   -2      22.72        2.241
2.3 Process parameter relationship formulation using multivariable regression
A multivariable regression analysis model is used to model and forecast process variables. In regression, the relationship (called the regression function) between one variable, the dependent variable, and several others, the independent variables, is studied. The regression function involves a set of unknown parameters. If a regression function is linear in the parameters, it is termed a linear regression model; otherwise, the model is called non-linear. A second-order model is useful in approximating a portion of the true response surface with parabolic curvature (Aggarwal et al., 2008). The second-order model is expressed as

y = β0 + Σi βi Xi + Σi βii Xi² + Σi Σj>i βij Xi Xj + ε

The second-order model is flexible, because it can take a variety of functional forms and approximates the response surface locally (Tsoukalas, 2008). The method of least squares can be applied to estimate the coefficients in the second-order model. After solving the runs
using a regression tool, the second-degree models for cycle time and density are as follows:

1) Cycle time (Y1) = -1003.4576 + 3.0655X1 - 0.960018X2 - 2.1039X3 + 3.105008X4 - 0.002577X1² + 0.0009098X2² + 0.00357X3² - 0.00612X4² + 0.001125X1X2 + 0.001225X1X3 - 0.00058X1X4 + 0.001438X2X3 - 0.00184X2X4 - 0.00092X3X4    (1)

2) Density (Y2) = 161 - 0.14084X1 + 0.22365X2 + 0.05028X3 - 1.4889X4 - 0.000066X1² - 0.000029X2² + 0.0000026X3² + 0.00164X4² - 0.00016X1X2 - 0.000052X1X3 + 0.001439X1X4 - 0.0002X2X3 - 0.00039X2X4 + 0.0000662X3X4    (2)
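The least-squares estimation of such a full quadratic model can be sketched as below. The data here are illustrative, generated from a known two-factor surface, not the actual CCD runs of Table 3.

```python
import numpy as np

def design_matrix(X):
    """Columns: intercept, linear, squared and pairwise interaction
    terms of a full second-order (quadratic) response-surface model."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                  # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]             # quadratic terms
    cols += [X[:, i] * X[:, j]                           # interaction terms
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# illustrative noiseless data: y = 1 + 2*x1 - x2 + 0.5*x1*x2
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [2, 0], [0, 2]], float)
y = 1 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
```

With noiseless data the fit recovers the generating coefficients exactly; with the 28 CCD runs it would yield the rounded coefficients of equations (1) and (2).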
3. Optimization using cuckoo search algorithm
3.1 Cuckoo search algorithm
The cuckoo search algorithm is one of the most recent algorithms, proposed by Xin-She Yang and Suash Deb in 2009. It is a population-based, bio-inspired algorithm developed from the brood-parasitic hatching behaviour of the cuckoo bird. Female parasitic cuckoos lay eggs that closely resemble the eggs of their chosen host (i.e., a nest built by a bird of another species). The survival of the cuckoo chick depends upon the probability of detection by the host bird. If it survives, the cuckoo builds a territory and selects the best nest for survival (Rajabioun, 2011).
Objective function f(x), x = (x1, ..., xd)
Generate initial population of n host nests
while (t < MaxGeneration) or (stop criterion)
    Get a cuckoo (say i) by Lévy flights
    Evaluate its quality/fitness Fi
    Choose a nest among n (say j) randomly
    if (Fi > Fj)
        replace j by the new solution
    end
    Abandon a fraction (pa) of the worse nests
    Keep the best solutions (nests with quality solutions)
    Rank the solutions and find the current best
end while
Post-process results and visualization
Figure 2. Pseudo-code of the cuckoo search algorithm (Kaveh et al.,2013).
The above pseudo-code is used to solve equations (1) and (2) with the cuckoo search algorithm. The number of cuckoos is set to 25 and the number of iterations to 1000.
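A minimal sketch of the pseudo-code above, using Mantegna's algorithm for the Lévy-flight step. The step-size constant 0.01, the Lévy exponent 1.5 and the demo sphere function are illustrative choices, not settings from this paper; the version below minimizes f, so a nest is replaced when the new solution has a lower objective value.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(size, beta=1.5, rng=None):
    """Lévy-flight step via Mantegna's algorithm."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n_nests=25, n_iter=1000, pa=0.25, seed=0):
    """Minimize f over box bounds, following the pseudo-code of Fig. 2."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    nests = rng.uniform(lo, hi, (n_nests, len(lo)))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(n_iter):
        best = nests[fit.argmin()]
        # new cuckoo by a Lévy flight scaled toward the current best
        i = rng.integers(n_nests)
        new = np.clip(nests[i] + 0.01 * levy_step(len(lo), rng=rng)
                      * (nests[i] - best), lo, hi)
        j = rng.integers(n_nests)           # random nest to compare against
        if f(new) < fit[j]:
            nests[j], fit[j] = new, f(new)
        # abandon a fraction pa of the worst nests (best nests are kept)
        worst = fit.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lo, hi, (len(worst), len(lo)))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

# demo: 2-D sphere function, minimum 0 at the origin
x_best, f_best = cuckoo_search(lambda x: np.sum(x ** 2), [(-5, 5)] * 2)
```

For the die-casting problem, f would be equation (1) (minimization) or the negative of equation (2) (maximization), with the bounds taken from the -2/+2 levels of Table 2.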
(1) For cycle time (Y1): minimization code is used for this response. The optimum value obtained and the corresponding parameter values are as follows:
Y1 (cycle time) = 18.3050 s (threshold value)
X1 (metal temperature) = 630 °C
X2 (F/H die temperature) = 196 °C
X3 (M/H die temperature) = 167.81 °C
X4 (injection pressure) = 200 kg/cm²

(2) For density (Y2): maximization code is used for this response. The optimum value obtained and the corresponding parameter values are as follows:
Y2 (density) = 2.6941 g/cm³ (threshold value)
X1 (metal temperature) = 670 °C
X2 (F/H die temperature) = 164 °C
X3 (M/H die temperature) = 156 °C
X4 (injection pressure) = 200 kg/cm²
3.2 Comparison of predicted values with actual online production values
The actual process parameters recorded in online production are as follows:
X1 (metal temperature) = 630 °C
X2 (F/H die temperature) = 196 °C
X3 (M/H die temperature) = 167.81 °C
X4 (injection pressure) = 200 kg/cm²
The values of the responses obtained by substituting these parameter values into equations (1) and (2) are:
Y1 (cycle time) = 21.76 s and Y2 (density) = 2.4712 g/cm³.
The cuckoo search algorithm thus shows improved results over the actual readings when equations (1) and (2) are applied.
3.3 Optimum values for the given weightages
For optimizing both responses simultaneously, a combined objective function is formulated. It gives a combined optimum solution for the given weightages of the responses, as shown in Table 4:

Z = W1 · (Y1 / Y1*) − W2 · (Y2 / Y2*)

where W1 and W2 are the weights for the cycle time and density respectively, and Y1* and Y2*
are the threshold values of cycle time and density calculated individually.
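As an illustration, the weighted-sum idea can be sketched as follows. The normalization by the individually obtained threshold values is our reading of the combined objective's form (the exact formulation in the paper is partly garbled), so the computed numbers differ slightly from the tabulated Zmin values.

```python
def combined_objective(w1, w2, y1, y2, y1_star=18.3050, y2_star=2.6941):
    """Weighted-sum objective: minimize normalized cycle time and
    maximize normalized density (hence the minus sign).
    y1_star and y2_star are the threshold values from the single-objective runs."""
    return w1 * (y1 / y1_star) - w2 * (y2 / y2_star)

# First row of Table 4: W1 = 0.1, W2 = 0.9
z_low_w1 = combined_objective(0.1, 0.9, y1=23.8185, y2=2.615336)
# Last row of Table 4: W1 = 0.9, W2 = 0.1
z_high_w1 = combined_objective(0.9, 0.1, y1=18.54157, y2=2.225925)
```

As in Table 4, Z is negative when the density weight dominates and positive when the cycle-time weight dominates.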
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
Table 4. Variation of responses with given weightages (a priori approach).

W1    W2    Zmin      X1    X2        X3    X4    Y1        Y2
0.1   0.9   -0.7495   670   164       196   180   23.8185   2.615336
0.2   0.8   -0.536    670   164       196   180   23.8185   2.615336
0.3   0.7   -0.1534   670   164       196   200   22.06719  2.50524
0.4   0.6   -0.0856   670   164       196   200   22.06719  2.50524
0.5   0.5   0.1232    630   196       196   180   22.3178   2.42524
0.6   0.4   0.5091    630   196       196   180   22.3178   2.42524
0.7   0.3   0.6016    630   168.4919  196   180   22.71967  2.354771
0.8   0.2   0.7471    630   192.0179  196   180   22.45327  2.291277
0.9   0.1   0.8419    630   191.7858  196   180   18.54157  2.225925
Conclusion
In this paper the optimization aspects of high pressure die casting (HPDC) are considered.
The two objectives considered for optimization are maximization of density and minimization of
cycle time. The results predicted by the cuckoo search algorithm are compared with the actual
online production parameter settings and show a considerable improvement in the responses.
A higher metal temperature creates turbulence, so gases are entrapped, which increases the
casting time. For the die temperatures, a lower value can cause cold shuts, as it affects the
fluidity of the metal and can increase the filling time, which in turn increases the overall cycle
time. To prevent this, intermediate values of mould temperature are proposed. The hypothesis
that a high injection pressure yields a higher density is also confirmed here. An excessive
mould temperature will cause die sticking and thus reduce the density. However, experimental
validation of the results is yet to be carried out.
References
Aggrawal, A. Optimizing power consumption for CNC turned parts using response surface
methodology and Taguchi's technique - A comparative analysis. Journal of Materials
Processing Technology, 2008, 200, 373-384.
Kaveh, A. and Bakhshpoori, T. Optimum design of space trusses using cuckoo search
algorithm with Lévy flights. IJST, Transactions of Civil Engineering, 2013, 37(C1), 1-15.
Krimpennis, A. Simulation-based selection of optimum pressure die-casting process
parameters using neural nets and genetic algorithm. Int. J. Adv. Manufacturing
Technology, 2006, 27, 509-517.
Rajabioun, R. Cuckoo Optimization Algorithm. Applied Soft Computing, 2011, 11, 5508-5518.
Syrcos, G.P. Die casting process optimization using Taguchi methods. Journal of Materials
Processing Technology, 2003, 135, 68-74.
Tsoukalas, V. D. Optimization of porosity formation in AlSi9Cu3 pressure die castings using
genetic algorithm analysis. Materials and Design, 2008, 29, 2027-2033.
Verran, G.O., Mendes, R.P.K. and Valentina, L.V.O.D. DOE applied to optimization of
aluminum alloy die castings. Journal of Materials Processing Technology, 2008, 200,
120-125.
Yang, X. and Deb, S. Engineering optimization by cuckoo search. Int. J. Mathematical
Modelling and Numerical Optimisation, 2010, 1(4), 330-343.
Yi, Y., Weixiong, Y. and Bin, Z. Novel methodology for casting process optimization using
Gaussian process regression and GA. China Foundry, 2009, 06, 231-240.
Zhao, H. D. Experimental and numerical analysis of gas entrapment defects in plate ADC12
die castings. Journal of Materials Processing Technology, 2009, 209, 4537-4542.
Cuckoo Optimization Algorithm for the Design of a
Multiplier-less Sharp Transition Width
Modified DFT Filter Bank
Kaka Radhakrishna, Nisha Haridas, Bindiya T. S.*, Elizabeth Elias
Department of Electronics and Communication Engineering, National Institute of Technology,
Calicut, Kerala-673601
*Corresponding author (e-mail: bindiyajayakumar@nitc.ac.in)
In this paper, a multiplier-less sharp transition width modified discrete Fourier transform
(MDFT) filter bank based on frequency response masking (FRM) is proposed. The
significant advantage of MDFT filter banks is the structure inherent alias cancellation.
The amplitude distortion in the filter bank is mainly due to the overlap of the frequency
responses of the adjacent channel filters of the filter bank. Designing the filters in the
filter bank with sharp transition width can reduce the amplitude distortion. FRM
approach is known to result in low complex sharp transition width filters. This paper
proposes to design a totally multiplier-less sharp transition width MDFT filter bank by
converting the continuous filter coefficients to the Canonic Signed Digit (CSD)
representation and optimizing the filter bank performance using metaheuristic
algorithms such as modified integer coded genetic algorithm (GA) and modified integer
coded Cuckoo optimization algorithm.
1. Introduction
The conventional discrete Fourier transform (DFT) filter banks are very simple to
design, but they do not possess any alias cancellation structure (Vaidyanathan, 1987). The
modified DFT (MDFT) filter banks, in which the analysis and synthesis filters are the same,
are reported to have structure inherent alias cancellation (Karp and Fliege, 1999). Also,
designing narrow transition width filters can reduce the amplitude distortion caused by the
non-ideal filters in the filter bank. Conventional finite impulse response (FIR) filters are known
to result in high order filters when the transition width is reduced. Frequency response
masking (FRM) approach can be employed to design sharp transition width filters with less
complexity (Lim, 1986). When MDFT filter bank is designed using FRM prototype filter, it
leads to a reduction in the number of filter coefficients as compared to the corresponding
conventional MDFT filter banks as reported in (Li and Nowrouzian, 2006). To further reduce
the complexity in hardware realization, we propose a totally multiplier-less MDFT filter bank
with FRM filter as the prototype filter. This is done by converting the continuous filter
coefficients to the canonic signed digit (CSD) representation with a restricted number of non-zero bits. But this may degrade the performance of the filter bank.
To improve the performance of the CSD rounded filter banks, this paper uses the
Cuckoo optimization algorithm (COA). It is a metaheuristic algorithm introduced by Rajabioun,
based on the life style of cuckoo birds (Rajabioun, 2011). In this work, we propose to modify
this algorithm to be used in the discrete space, which is not reported in the literature so far.
This modified COA algorithm is used for the optimization of the CSD represented FRM
prototype filter and hence the CSD represented MDFT filter bank. Integer coded genetic
algorithm (GA) for the optimization of multiplier-less transmultiplexer had been reported
(Manoj and Elias, 2009). The performances of the COA optimized FRM filter and MDFT filter
bank are compared with those of the GA optimized FRM MDFT filter bank.
The paper is organized as follows. Section 2 gives a brief review of the MDFT filter
bank. Section 3 gives an overview of the FRM approach. The CSD representation is
discussed in Section 4. Section 5 explains the statement of the problem. This section also
describes the various steps of the Cuckoo optimization algorithm. A design example and
MATLAB simulation results are presented in section 6. This section also compares the filter
bank performance using GA and COA. Section 7 concludes the paper.
2. MDFT filter bank
In the M-channel DFT filter banks, the synthesis and analysis filters are derived from
the prototype filter by complex modulation. Since the prototype filter is band limited to 2π/M,
all the non-adjacent alias components can be ignored. If Hk(z) and Fk(z) denote the kth
analysis and synthesis filters respectively, then, considering only the adjacent alias
components, the reconstructed signal of an M-channel DFT filter bank can be written as
(Fliege, 1994a, 1994b)

X^(z) = (1/M) Σ_{k=0}^{M-1} Fk(z) Σ_{l=-1}^{1} Hk(zW^l) X(zW^l)    (1)

where Hk(z) = H(zW^k), Fk(z) = H(zW^k) and W = e^(-j2π/M). From Equation (1), we can see that
the output contains alias components. In the DFT filter banks, there is no inherent mechanism
to cancel out the alias components. So they do not give perfect reconstruction. This is
overcome by introducing some modifications to the DFT filter banks, which result in the MDFT
filter banks.
The MDFT filter bank can be derived from a complex modulated filter bank by
decimating the sampling rate with and without a delay of M/2 samples and using the real or
the imaginary part, alternately, in the sub-bands (Fliege, 1994a, 1994b). This will eliminate the
adjacent aliasing spectra. Designing the prototype filter with high stop band attenuation can
reduce the non-adjacent aliasing terms. This leads to a near perfect reconstruction filter bank.
The structure of M- channel MDFT filter bank is given in Fig. (1).
Figure 1. The structure of Modified DFT Filter Bank
The transfer function from input to output, Tdist(z), can be obtained as (Karp and Fliege, 1999)

Tdist(z) = (1/M) Σ_{k=0}^{M-1} Fk(z) Hk(z)    (2)
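A small sketch of the complex modulation step (deriving the channel filters from a prototype) may make this concrete. The length-8 averaging prototype here is only a stand-in, not the FRM prototype used later in the paper.

```python
import cmath
import math

def modulated_filters(h, M):
    """Channel filters h_k[n] = h[n] * exp(j*2*pi*k*n/M), k = 0..M-1."""
    return [[c * cmath.exp(2j * math.pi * k * n / M) for n, c in enumerate(h)]
            for k in range(M)]

def dtft(h, w):
    """Frequency response of filter h at angular frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

M = 8
prototype = [1.0 / M] * M              # crude low-pass stand-in
banks = modulated_filters(prototype, M)
# Channel k is the prototype's pass-band shifted to 2*pi*k/M
peak = abs(dtft(banks[3], 2 * math.pi * 3 / M))
```

Evaluating channel 3 at its centre frequency 2πk/M recovers the prototype's DC gain, confirming that modulation simply shifts the prototype's response along the frequency axis.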
3. Review of FRM approach
The structure of the FRM filter is given in Figure (2) (Lim, 1986).
Figure 2. Basic structure of FRM
The overall transfer function of the FRM FIR filter, H(z), can be written as follows (Lim, 1986):

H(z) = Ha(z^M) HMa(z) + Hc(z^M) HMc(z)    (3)
where Ha(z) is the band edge shaping filter, Hc(z) is the complementary filter of Ha(z), HMa(z)
and HMc(z) are the masking and complementary masking filters respectively, and M is the
interpolation factor. The transition width of H(z) is 1/M times that of Ha(z).
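The transition-sharpening mechanism — replacing z by z^M, i.e. inserting M−1 zeros between the taps of the band-edge shaping filter — can be sketched as follows (a toy 3-tap filter, not an actual FRM design):

```python
import cmath

def interpolate_taps(h, M):
    """Realize Ha(z^M): insert M-1 zeros between consecutive taps of h."""
    out = [0.0] * ((len(h) - 1) * M + 1)
    for n, c in enumerate(h):
        out[n * M] = c
    return out

def dtft(h, w):
    """Frequency response of filter h at angular frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

h = [0.25, 0.5, 0.25]
up = interpolate_taps(h, 3)            # -> [0.25, 0, 0, 0.5, 0, 0, 0.25]
# The upsampled filter's response at w equals the original's at M*w,
# so every band edge (and hence the transition width) shrinks by a factor M.
```

This is why the masked filter inherits a transition width 1/M times that of Ha(z) at essentially no multiplier cost: the inserted zeros require no arithmetic.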
4. Canonic signed digit representation
If the filter coefficients are represented in the signed power of two (SPT) space, the
multipliers in the filter realization can be replaced with adders and shifters. The canonic
signed digit (CSD) representation is a unique representation of the decimal number with
minimum number of non-zero bits. When the filter coefficients are represented in the CSD
space, the number of partial product additions reduces. Any number ‘d’ can be represented
using the CSD format as follows:
d = Σ_{i=1}^{W} si 2^(R-i),  si ∈ {-1, 0, 1}    (4)

where W is the word length of the CSD number and the integer R represents the radix point, in the
range 0 < R < W.
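A minimal sketch of converting an integer to CSD digits (the signed-digit form with no two adjacent non-zero digits, which has the minimum number of non-zero digits); the radix-point handling for fractional coefficients is omitted for brevity.

```python
def to_csd(n):
    """Return CSD digits of non-negative integer n, least significant first,
    each digit in {-1, 0, 1} with no two adjacent non-zeros."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)     # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

digits = to_csd(7)              # 7 = 8 - 1, i.e. one shift-add instead of three
```

Here 7 needs three non-zero bits in plain binary (111) but only two CSD digits (8 − 1), which is exactly why CSD coefficients reduce the number of partial-product additions.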
5. Statement of the problem
When the continuous FRM prototype filter coefficients are converted to CSD
representation with restricted number of non-zero bits, the performances of the FRM filter and
MDFT filter bank degrade. To improve the performance, suitable objective function and
optimization techniques are to be used, such that we get a totally multiplier-less MDFT filter
bank with near perfect reconstruction. The objective of the optimization is to minimize the
amplitude distortion and errors in the pass-band and stop-band with reduced number of nonzero bits.
Since the search space contains integers, metaheuristic algorithms are deployed in this
work as they are reported to give global solutions (X. S. Yang, 2009). The metaheuristic
algorithms such as genetic algorithm (GA) and cuckoo optimization algorithm (COA) are
employed in this paper for the optimization.
5.1. Cuckoo optimization algorithm (COA)
Figure 3. Flowchart of Cuckoo Optimization Algorithm (Rajabioun, 2011)
Cuckoo birds lay their eggs in the nests of other birds. COA is initialized with an initial
population of cuckoos, and each cuckoo is assumed to have a certain number of eggs. Each
cuckoo then searches the space to find the best region in which to lay each egg, in order to
maximize the egg survival rate. Some of these eggs, which are more similar to the host birds'
eggs, have the opportunity to grow up and become mature cuckoos. p% of all eggs, usually 10%,
with lower profit values, are killed, and the rest of the eggs grow in the host nests, hatch,
and are fed by the host birds. The grown eggs reveal the suitability of the nests in that area. The eggs, which
survive, grow and turn into mature cuckoos and form societies. Each society has
its habitat region to live in. When young cuckoos grow and become mature, they live in their
own area and society for some time. After the cuckoo groups are formed in different areas, the
society with the best profit value is selected as the goal point for the other cuckoos. Figure (3)
gives the flow chart of COA (Rajabioun, 2011).
A habitat is a 1×N vector representing the current living position of a cuckoo in the
N-dimensional search space. Cuckoos lay eggs within a maximum distance from their habitat,
called the egg laying radius (ELR), defined as

ELR = α × (number of eggs of the current cuckoo / total number of eggs) × (Var_hi − Var_low)    (5)

where α is an integer that controls the maximum value of ELR, and Var_hi and Var_low are the
upper and lower limits of the optimization variables.
COA for the continuous space was proposed by Rajabioun (2011). In this work, it is
modified for the discrete space.
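A sketch of the ELR computation follows; the rounding used to place eggs on an integer grid is our illustrative reading of the discrete-space modification, not the paper's exact procedure.

```python
import random

def egg_laying_radius(alpha, eggs_i, eggs_total, var_hi, var_lo):
    """ELR = alpha * (current cuckoo's eggs / total eggs) * (var_hi - var_lo)."""
    return alpha * (eggs_i / eggs_total) * (var_hi - var_lo)

def lay_egg_discrete(habitat, elr, lo, hi):
    """Place an egg at an integer point within ELR of the habitat (clipped)."""
    pos = habitat + random.uniform(-elr, elr)
    return min(hi, max(lo, round(pos)))

random.seed(0)
elr = egg_laying_radius(alpha=5, eggs_i=10, eggs_total=100, var_hi=31, var_lo=0)
egg = lay_egg_discrete(habitat=12, elr=elr, lo=0, hi=31)
```

Cuckoos with more eggs thus explore a wider radius, while the rounding keeps every candidate inside the discrete (e.g. CSD-indexed) search space.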
6. Design of the continuous FRM prototype filter for the MDFT filter bank
The FRM FIR low-pass prototype filter is designed with the following set of specifications:
Max. pass-band ripple: 0.004 dB
Min. stop-band attenuation: 60 dB
Pass-band edge frequency: 0.124π
Stop-band edge frequency: 0.127π
Figure 4a. Magnitude response of the FRM FIR filter (continuous coefficients, CSD rounded, and COA optimized).
Figure 4b. Magnitude responses of the COA optimized analysis filters of the MDFT filter bank.
Table 1. Comparison of filter bank performance

                                     Infinite    CSD (3 non-  GA            COA
                                     precision   zero bits)   optimization  optimization
Max. pass-band ripple (dB)           0.0083      0.07557      0.02131       0.02215
Min. stop-band attenuation (dB)      62.91       40.98        46.862        46.92
Amplitude distortion of
filter bank (dB)                     0.023       0.1506       0.04273       0.04449
Cost of objective function           -           0.0261       0.009486      0.009645
No. of multipliers                   199         0            0             0
No. of SPT terms                     0           391          396           398
No. of adders due to SPT terms       0           200          205           207
No. of adders                        197         197          197           197
Total no. of adders                  197         397          402           404
The continuous coefficients obtained for the FRM filter are converted to CSD
coefficients. To convert to the CSD space, a 14-bit look up table is created with 4 fields, CSD
equivalent, decimal equivalent, number of non-zero terms and index. Figure (4a) shows the
magnitude responses of the FRM FIR filter with continuous coefficients and with CSD
coefficients. The performance of the filter is degraded when continuous filter coefficients are
converted to the CSD space. In this paper, COA is used to optimize the performance and the
results are compared with those using genetic algorithm (GA). The magnitude responses of
FRM filter with COA and GA optimization is shown in Figure (4a). Figure (4b) shows the
magnitude response of the analysis filters of 8-channel MDFT after COA optimization. The
performance comparison is given in Table 1. Both GA and COA optimization result in a
multiplier-less MDFT filter bank. We can see that the stop band attenuation is better for the COA
optimized FRM filter. However, the COA optimized FRM filter requires more adders than the
GA optimized FRM filter.
7. Conclusion
When the MDFT filter bank is designed using an FRM prototype filter, the number of
multipliers in the filter realization is reduced as compared to the corresponding
conventional MDFT filter bank. The design of FRM based MDFT filter banks in the canonic
signed digit space using the modified COA is proposed in this work, which leads to a totally
multiplier-less filter bank. For the given specifications, it is observed that COA results in an
FRM filter with better performance in terms of pass band ripple and stop band attenuation, and
an MDFT filter bank with lower amplitude distortion, with respect to those using GA. The
number of adders, however, is found to be slightly higher when COA is deployed.
References
Bindiya T. S., Satish Kumar V. and Elias E., Design of low power and low complexity
multiplier-less reconfigurable non-uniform channel filters using genetic algorithm,
Global Journal of Researches in Engineering: Electrical and Electronics Engineering,
Volume 12, Issue 6, Version 1.0, May 2012.
Fliege N. J., Multirate Digital Signal Processing. Chichester, U.K., Wiley, 1994a.
Fliege N. J., Modified DFT polyphase SBC filter banks with almost perfect reconstruction,
IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3,
19-22 Apr 1994b, pp. 149-152.
Karp T. and Fliege N. J., Modified DFT filter banks with perfect reconstruction, IEEE
Transactions on Circuits and Systems-II: Analog and Digital Signal Processing, Vol. 46,
No. 11, November 1999.
Lim Y. C., Frequency-response masking approach for the synthesis of sharp linear phase
digital filters, IEEE Transactions on Circuits and Systems, vol. CAS-33, pp. 357-364, Apr
1986.
Manoj V. J. and Elias E., Design of multiplier-less nonuniform filter bank transmultiplexer
using genetic algorithm. Signal Processing, 2009, 89, 2274-2285.
Li N. and Nowrouzian B., Application of frequency-response masking technique to the design
of a novel modified-DFT filter bank. In Circuits and Systems, 2006. ISCAS 2006.
Proceedings. 2006 IEEE International Symposium on (pp. 4-pp). IEEE.
Rajabioun R., Cuckoo Optimization Algorithm, Applied Soft Computing, vol. 11, Issue 8,
2011, pp. 5508-5518.
Sengar S. and Bhattacharya P.P., Design and performance evaluation of a quadrature mirror
filter (QMF) bank, International Journal of Electronics & Communication Technology,
Vol. 3, Issue 1, Jan.-March 2012, ISSN: 2230-7109 (Online), ISSN: 2230-9543.
Vaidyanathan P. P., Theory and design of M-channel maximally decimated quadrature mirror
filters with arbitrary M, having the perfect-reconstruction property, IEEE Transactions on
Acoustics, Speech and Signal Processing, vol. 35, no. 4, pp. 476-492, Apr 1987.
Yang X. S., Harmony search as a metaheuristic algorithm, in Music-Inspired Harmony Search
Algorithm, Springer Berlin Heidelberg, 2009, 1-14.
Yu Y. J. and Lim Y. C., Genetic algorithm approach for the optimization of multiplierless
sub-filters generated by the frequency-response masking technique. In Electronics,
Circuits and Systems, 2002. 9th International Conference on, Vol. 3, pages 1163-1166.
IEEE.
Zhang Z. and Yang Y., Efficient design method for modified DFT modulated filter banks with
perfect reconstruction. Chinese Journal of Electronics, 2009, 18(3).
Developing an Optimistic Model on Food Security
A. A. Thakre, Atul Kumar*
Visvesvaraya National Institute of Technology, Nagpur-440010 (India)
*Corresponding author (email:aathakre@mec.vnit.ac.in, akumar047@yahoo.com)
In the 21st century, food security is a major concern all over the world; every nation is
now trying to identify its impact, which directly or indirectly influences the economy and
the social system. India, a leading producer of food grains in the world, has now been
touching its peak production, and this production will remain almost constant for the coming
decade. In this paper, an attempt is made to build a model that forecasts the crisis of food
shortage in the future. For building the crisis model we have considered several factors which
directly and indirectly influence the demand and supply of food. To make the result more
reliable, we perform regression analysis so that we can check the errors between actual and
simulated values.
Key Words: Food security, crisis model, regression analysis
1. Introduction
In recent years, prices of food grains have spiked despite record production of rice and
wheat in India, which raises the question of whether the production of food grains in India is
sufficient for the needs of the common people. In the last 5 years we have seen a continuous
rise in prices, which indirectly indicates a shortage of stock, even though in the same period
the production of grains has touched a high. Rising grain prices alongside rising production
run against the law of economics, which states that if any product is available in huge
quantity then its price will be lower compared to products whose availability is less. If we
look at the last 10 years of wheat production data [2], we see that the production of wheat
has varied: sometimes it shows an increment and sometimes a decrement, which reflects the
character of Indian agriculture, which is volatile and highly dependent on the monsoon.
In India the maximum amount of wheat is produced during April-June every year; the share
of this 3-month production is more than 90% of the wheat production for the whole year. June
is the beginning of the monsoon rains, and the storage capacity of India is not sufficient for
the whole stock to be stored in closed stores, so approximately 30% of grains are stored in
open space, from where a great amount, approximately 30-40% of the food, goes to waste. For
minimizing these storage losses, several steps have been initiated, like public-private
partnerships and invitations to private players to build storage godowns, but this process is
very slow and also not sufficient for the future needs of India. Food security exists when all
people, at all times, have physical and economic access to sufficient, safe and nutritious
food to meet their dietary needs and food preferences for an active and healthy life. Food
security has 4 dimensions:
Physical availability of food, in which the main concern is to maintain the determined level of
food stock; tools such as production and trade are used to maintain the stock.
Economic and physical access to food, in which the government uses several schemes to ensure
that economically backward people can get a minimum amount of food.
Food utilization, which is the way the body makes the most of the nutrients in food. It involves
care and feeding practices, food preparation, diversity of diet and intra-household
distribution of food.
Stability of the other 3 dimensions over time.
2. Literature review
ZHOU Qiang et al. developed the Chinese Public Food Safety Pre-warning System of Crisis
Management. A theoretical crisis model was analyzed, and the various reasons that led to the malfunction
and inefficient status of food security crisis management in China were discussed. It mainly
focuses on the problems of overlapping functions, overstaffing, and stagnant information
exchange. In India, the conditions of farming, storage of food grains and consumption of food
are quite different from China, so for the Indian context the concept is modified according to
the needs of our purpose, and a modified chart is made which is smaller and has fewer parameters.
ZHANG Run-hao et al. discussed the legal system of food safety and issues in political
science theory, economics, and management. In this study, food safety and the right to food
safety were reviewed.
Construction of the pre-warning system: the food security early warning system consists of an
information processing subsystem (information collection, information processing and
information analysis) and a crisis decision-making support subsystem (information on the early
warning system, collection of all information and storage in a library, scoping the food
safety early crisis warning, and determining the crisis early warning).
Fig 1. Diagram of the early warning system
3. Information processing subsystem
In the information processing subsystem, we classify 3 processes: first we collect
variable information (the variables considered are inflation, total wheat supply, total
consumption, rainfall, government policy, import and export); then, using this information, we
process the variables and forecast their values; finally, using these forecasted values, we
use regression analysis to check the accuracy of our model.
3.1 Information collection
In our study, we have considered those factors which directly impact the food security of any
nation; those factors are net export, production and stock.
Net export: we consider net export, in which we subtract import from export. We subtract
because import and export are not independent variables; they depend on each other, and in
regression analysis this becomes a cause of error. We collected the last 13 years of data for
both export and import and then subtracted import from export.
Production: production is the 2nd variable used in our study. We collected the last
13 years of production data, since 2000.
Stock: food storage is the key criterion for any nation to secure the food requirements of its
people; food is not produced throughout the year, so it is necessary to store it carefully.
We have considered the ending stock of the year. In India storage is done by the state
governments, the Food Corporation of India, and private players. Here too we collected the
last 13 years of data.
3.2 Information processing
In information processing, we plot all the data in an MS Excel sheet, fit 6th-order
polynomial equations using MS Excel, and plot trend lines to check the accuracy of each
equation; we obtain more than 90% accuracy in almost all the plots. Using these 6th-degree
polynomial equations, we forecast the values for the coming 5 years.
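Once a trend-line polynomial is in hand, forecasting is just polynomial evaluation. A small sketch using Horner's rule with the total-supply coefficients quoted in Figure 2[b] (the year-index mapping is an assumption, so the output is illustrative only):

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest degree down."""
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

# 6th-order trend line fitted to total wheat supply (Figure 2[b])
supply = [-0.097, 6.865, -188.9, 2485.0, -15159.0, 35728.0, 66418.0]
forecast = horner(supply, 14)   # extrapolate beyond the fitted range
```

Horner's rule needs only 6 multiplications per evaluation here, and a high-degree fit like this extrapolates reliably only a short distance past the data, which is why the paper limits the forecast to 5 years.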
Figure 2. Fitted 6th-order trend lines (quantity vs. years):
[a] wheat domestic consumption: y = 0.071x^6 - 3.558x^5 + 63.33x^4 - 477.8x^3 + 1317x^2 + 999.8x + 64125
[b] total supply: y = -0.097x^6 + 6.865x^5 - 188.9x^4 + 2485x^3 - 15159x^2 + 35728x + 66418, R² = 0.971
[c] wheat stock: y = -0.165x^6 + 10.39x^5 - 251.6x^4 + 2879x^3 - 15176x^2 + 28988x + 5438, R² = 0.957
[d] wheat production: y = 0.053x^6 - 3.009x^5 + 64.52x^4 - 681.8x^3 + 4092x^2 - 13021x + 85589, R² = 0.963
[e] wheat export: y = 0.024x^6 - 1.143x^5 + 16.00x^4 - 16.82x^3 - 1065x^2 + 5637x - 3240, R² = 0.876
3.3 Information analysis
Regression analysis is performed to check the accuracy of the simulated results, on the basis
of which we develop a theoretical model. Since there is a lot of variation in the data, we
perform the regression analysis using equations rather than any ready-made regression
analysis tool. For the 13-year regression analysis, net export, production and stock are taken
as the 3 variables, while the difference between total supply and total distribution is the
objective function. Here A0, A1, A2, ..., A9 are constants and ф represents the difference
between demand and supply:

фs = A0 + A1·X1 + A2·X2 + A3·X3 + A4·X1² + A5·X2² + A6·X3² + A7·X1·X2 + A8·X1·X3 + A9·X2·X3    [1]
To solve this equation we form a 13×10 matrix so that the values of the constants A0...A9 can
be found. The constants are obtained by the following procedure:

X = [ ]M×N
X^T = [ ]N×M
[X^T·X]N×N
[X^T·X]^-1 N×N
[X^T·X]^-1 N×N · [X^T]N×M
([X^T·X]^-1 · [X^T]N×M) · [ф]M×1
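The matrix procedure above (least squares via the normal equations, A = (XᵀX)⁻¹ Xᵀф) can be sketched in pure Python. The data here are synthetic (a 3-level grid with known coefficients), not the paper's 13-year series.

```python
def features(x1, x2, x3):
    """One row of the design matrix for the quadratic model of Equation [1]."""
    return [1.0, x1, x2, x3, x1 * x1, x2 * x2, x3 * x3,
            x1 * x2, x1 * x3, x2 * x3]

def solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Synthetic data: 3-level grid with phi generated from known constants
true_A = [2.0, 0.5, -1.0, 3.0, 0.25, -0.5, 1.5, 0.75, -0.25, 1.0]
X = [features(x1, x2, x3)
     for x1 in (1, 2, 3) for x2 in (1, 2, 4) for x3 in (1, 3, 5)]
phi = [sum(f * a for f, a in zip(row, true_A)) for row in X]

# Normal equations: (X^T X) A = X^T phi
m, n = len(X), 10
XtX = [[sum(X[r][i] * X[r][j] for r in range(m)) for j in range(n)] for i in range(n)]
Xtphi = [sum(X[r][i] * phi[r] for r in range(m)) for i in range(n)]
A_hat = solve(XtX, Xtphi)
```

Because the synthetic ф values are exact, the solver recovers the true constants, which is a useful sanity check before applying the same procedure to noisy data.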
Using the above procedure, the matrix is calculated, and the final values of the unknowns are:

Table 1. Constant values
A0 = 11267.73737
A1 = 0.388016228
A2 = 0.147452961
A3 = 1.370346493
A4 = 0.000101559
A5 = 4.09252E-07
A6 = 1.14884E-05
A7 = 1.51771E-05
A8 = 6.76954E-06
A9 = 9.2906E-06
After obtaining the constant values, we put them into Equation [1], [X·A] = фs, where фs is
the simulated difference. The simulated results are shown in Table 2. Comparing the simulated
difference with the actual difference, we find that the simulated difference is approximately
right and shows only very minute errors.

Table 2. Simulated values

Year  Wheat total  Domestic          Difference:       Simulated difference    Error %
      supply (MT)  consumption (MT)  total supply -    b/w total supply and
                                     domestic          domestic
                                     consumption, ф    consumption, фs
2000  89890        66821             23069             22733.16                -1.47731
2001  91212        65125             26087             26217.28                0.496917
2002  95804        75254             20550             20655.4                 0.510267
2003  81468        68918             12550             12396.44                -1.23878
2004  79058        72838             6220              6452.247                3.59947
2005  72781        69980             2801              2838.05                 1.305461
2006  78071        73477             4594              4684.55                 1.93296
2007  82272        76423             5849              5500.125                -6.34304
2008  84377        70924             13453             13527.96                0.554144
2009  94327        78150             16177             16171.23                -0.0357
2010  97192        81760             15432             15404.05                -0.18144
2011  102255       81406             20849             21164.88                1.49248
2012  113850       84540             29310             29195.63                -0.39173
2013  113436.3     84502             28934.27          28350.69                -2.05842
2014  114369.9     84853.31          29516.64          28078.53                -4.8722
2015  113902.7     84915.12          28987.56          26870.23                -7.87984
2016  112762.7     84688.73          28074             26267.65                -6.87672
2017  111378.8     84299.92          27078.86          25028.05                -8.19404

Scoping the food crisis early warning: as we see from the regression analysis, the next step
is to scope the food crisis early warning. For this we again consider the values of wheat
total distribution and wheat total domestic consumption, and we forecast both. As we see from
the table, at the end of year 2024-25 the total distribution of wheat and the total domestic
consumption will approximately coincide. This will be a very critical condition because the
difference between distribution and consumption will be very small.
Figure 3. Fitted 6th-order trend lines (quantity in MT vs. years):
[a] wheat total distribution: y = -0.054x^6 + 4.319x^5 - 131.1x^4 + 1849x^3 - 11718x^2 + 27588x + 72340, R² = 0.973
[b] wheat domestic consumption: y = -0.017x^6 + 1.353x^5 - 40.23x^4 + 566.0x^3 - 3798x^2 + 11941x + 56820, R² = 0.943
Determining the early warning crisis: determining the crisis only on the basis of forecasts of
total distribution and total consumption is not right, because several other factors also play
a crucial role in deciding the food crisis problem. We have made a general model for checking
the crisis under critical conditions, but for this model we follow several assumptions:
Population at the end of 2025 would be 1.4 billion.
No climatic or natural/unnatural events will occur which cause a sharp decrement in the
production of crops.
Export of grains will not see a drastic increment or decrement; it will increase as usual.
The model is for a continuous 2-year drought/flood situation, because in the last 200 years of
history there is no example of 3 consecutive years of drought [6].
We will examine this model using the following conditions. Suppose that at the end of 2024 the food stock is normal. In the year 2024 two situations may arise:
1. If the monsoon is normal, then no food crisis problem will occur this year [fig. 4].
2. If the monsoon is not normal and a drought or flood situation happens, then the condition becomes critical, and we analyse the situation as follows. If the monsoon is abnormal, we check the status of the food stock, and then we check in which months the food shortage occurs. If it occurs in January-March, then a small import and rising inflation may be the solution, because the next wheat stock is due to arrive within the next 3 months. But if it occurs during October-December, then the condition becomes critical, because the next lot of wheat stock is 6 months away from arriving; in this condition the government has to release the reserve/buffer stock of wheat [8] (as the Government of India did in 2003) and rely on the import option, which may not be available in proper quantity in 2025, because the rest of the world will also be facing the same kind of situation in the same year.
[Figure 4. Food crisis model: a flowchart checking monsoon status (normal/abnormal), food stock status, the shortage window (Oct-Dec or Jan-March), and the resulting action (central pool, import, or minimum food stock).]
If next year's monsoon remains normal and wheat production is normal, then we will not face a bigger food crisis problem. But if next year's monsoon does not remain normal, then the situation will become worse, because the government would have released its buffer stock and reserve stock, and due to drought the country's wheat production will be less; so the government will have to increase the price of food grains and may have to depend upon imports.
4. Conclusion
Today inflation is increasing very fast while the production of wheat grains is more than the requirement, which raises an alarm about the problem of food grains policy: every year approximately 15-20 percent of food grains goes to waste due to the carelessness of the government. Today the Food Corporation of India has a total capacity of 34 million tonnes, of which 4 million tonnes is open-space storage. In 2013, India needs 50 million tonnes more storage space to prevent grains from rotting in the rains.
India purchases a large amount of petroleum products from Gulf countries, and India supplies its food grains to those countries; so for smooth trading and for strategic purposes, India must invest in enhancing storage capacity.
References
Akshaya Deepa, L.R. and Praveen, N. Impact of Climate Change and Adaptation to Green Technology in India. Bharathidasan University, Trichy, Tamilnadu, India.
Food Reserves in India - A report for the Canadian Foodgrains Bank, May 2012.
http://www.indexmundi.com/agriculture/?country=in&commodity=wheat&graph=domesticconsumption-growth-rate
http://www.indexmundi.com/agriculture/?country=in&commodity=wheat&graph=imports-growthrate
http://www.indexmundi.com/agriculture/?country=in&commodity=wheat&graph=ending-stocksgrowth-rate
Zhou, Q., Gong, C. and Zhou, Y. Public Food Safety Pre-warning System of Crisis Management. School of Economics and Management, Harbin Engineering University.
Kaur, S. Report of Drought (India). India Meteorological Department.
www.fciweb.nic.in
Zhang, R.-h. and Zhong, R.-y. The Right Perspective on Food Safety Issues. Central South University of Forestry and Technology, Changsha, Hunan 410004, China.
Optimizing Surface Finish in Turning Operation by Factorial
Design of Experiments
A.J. Makadia 1*, J.I. Nanavati 2
1 Darshan Institute of Engineering and Technology, Rajkot-360005, Gujarat, India
2 Faculty of Engineering and Technology, Baroda-390001, Gujarat, India
*Corresponding author (e-mail: ajmakadia@yahoo.com)
Design of experiments has been used to study the effects of machining parameters
such as cutting speed, feed and tool nose radius on the surface roughness of
Aluminium. A mathematical prediction model of the surface roughness has been
developed in terms of the above parameters. The effect of these parameters on the
surface roughness has been investigated by using response surface methodology
(RSM). The developed prediction equation shows that the feed is the most important
factor that influences the surface roughness. The surface roughness was found to
increase with increase in the feed and it decreased with increase in the nose radius
and cutting speed. Response surface contours were constructed for determining the
optimum conditions for a required surface roughness. Response surface optimization
shows that the optimal combination of machining parameters is (280 m/min, 0.1
mm/rev, 0.93 mm) for cutting velocity, feed rate and nose radius respectively. In
addition, there is good agreement between the predicted and measured surface
roughness within the 95% confidence interval.
1. Introduction
Machinability of a material provides an indication of its adaptability to be
manufactured by a machining process. In general, machinability can be defined as an optimal
combination of factors such as low cutting force, high material removal rate, good surface
integrity, accurate and consistent workpiece geometrical characteristics, low tool wear rate
and good chip curl or chip breakability.
In machinability studies, statistical design of experiments is used quite extensively. Statistical design of experiments refers to the process of planning the experiment so that the appropriate data can be analysed by statistical methods, resulting in valid and objective conclusions (Montgomery, 1997). Designs and methods such as factorial design, response surface methodology (RSM) and Taguchi methods are now widely used in place of the one-factor-at-a-time experimental approach, which is time consuming and exorbitant in cost. Feng and Wang (2002) developed an empirical model for surface roughness using a two-level fractional factorial design (2^(5-1)) with three replicates, considering work piece hardness, feed rate, cutting tool point angle, cutting speed and cutting time as independent parameters, using non-linear analysis. Galanis et al. (2010) used a 2^3 full factorial design for AISI 316L steel with three variables, namely feed, speed and depth of cut, for a femoral head application. The Taguchi method was used by Kirby and Zhang (2006) and Yang and Tarng (1998) to find the optimal cutting parameters for turning operations. Kini and Chincholkar (2010) used a two-level full factorial design to study the effect of machining parameters on surface roughness and material removal rate in finish turning of glass fibre reinforced polymers. Neseli and Yaldız (2011) and Makadia and Nanavati (2013) optimized machining parameters for turning operations based on response surface methodology for AISI 1040 and AISI 410 steel. Noordin et al. (2004) studied the application of response surface methodology in describing the performance of coated carbide tools when turning AISI 1045 steel. Ramesh et al. (2012) used the Taguchi method to study the effect of cutting parameters on the surface roughness in turning of titanium alloy using response surface methodology. Sahin and Motorcu (2005) used a 2^3 factorial design for the development of a surface roughness model for turning of mild steel with coated carbide tools.
The aim of the present study was, therefore, to develop a surface roughness prediction model and to optimize the machining parameters for Aluminium with the aid of statistical methods. By using RSM and a 3^3 full factorial design of experiments, a second-order model has been developed at the 95% confidence level.
2. Response surface methodology
In Response Surface Methodology, the factors that are considered most important are used to build a polynomial model in which the dependent variable is the experiment's response. RSM is a collection of mathematical and statistical techniques that are useful for the modelling and analysis of problems in which a response of interest is influenced by several variables and the objective is to optimize the response.
In many engineering fields, there is a relationship between an output variable y of interest and a set of controllable input variables {x1, x2, …, xk}. In some systems, the nature of the relationship between y and the x values may be known. Then, a model can be written in the form
y = f(x1, x2, …, xk) + ε,    (1)
where ε represents the error observed in the response y. If we denote the expected response as E(y) = f(x1, x2, …, xk) = η, then the surface represented by
η = f(x1, x2, …, xk)    (2)
is called the response surface. In most RSM problems, the form of the relationship between the response and the independent variables is unknown. Thus the first step in RSM is to find a suitable approximation for the true functional relationship between y and the set of independent variables employed. Usually a second-order model is utilized in RSM [5, 10]. The coefficients used in the model below can be calculated by means of the least squares method:
y = β0 + Σi βi xi + Σi βii xi² + Σi Σj βij xi xj + ε    (3)
The second-order model is normally used when the response function is not known or nonlinear.
3. Experimental details
In this study, the experiments were planned using a 3^3 full factorial design with 27 experiments. The three cutting parameters selected for the present investigation are cutting speed (v), feed (f) and nose radius (r). The machining parameters used and their levels chosen are given in Table 1.
Table 1. Parameters and levels
Parameters                 Level 1   Level 2   Level 3
Cutting speed (v), m/min   220       250       280
Feed (f), mm/rev           0.1       0.15      0.2
Nose radius (r), mm        0.4       0.8       1.2
All the turning experiments were conducted on a Jobber XL model CNC lathe made by Ace Designers, with a variable spindle speed of 50-3500 RPM and a 7.5 kW motor drive. In this study, ceramic inserts (supplied by Ceratizit) were used, ISO codes TNMG 160404 EN-TMF, TNMG 160408 EN-TM and TNMG 160412 EN-TM, with different nose radii (60° triangular-shaped inserts). The inserts were mounted on a commercial tool holder. In the present investigation, a bar of Aluminium is used as the work material. The surface finish of the work piece material was measured by a Surftest model No. SJ-400 (Mitutoyo make). The surface roughness was measured at three equally spaced locations around the circumference of the work pieces to obtain statistically significant data for the
test. The results from the machining tests performed as per the 3^3 full factorial design are not shown here. These results were fed into Minitab-16 for analysis.
4. Results and discussion
Table 2 shows the ANOVA for the response surface quadratic model for surface roughness. The p-value for the model in Table 2 is less than 0.05, which indicates that the model is significant at the 95% confidence level; this is desirable as it indicates that the terms in the model have a significant effect on the response. From the response surface Eq. (4), the most significant factor for surface roughness is the feed rate, followed by the nose radius and the cutting speed. In the present work, the R² value is 0.9728 and the adjusted R² is 0.9584. The predicted R² value of 0.9316 is in reasonable agreement with the adjusted R² value. The R² value in this case is high and close to 1, which is desirable.

Ra = 0.9377 − 0.00755v + 38.672f − 4.623r + 0.0000098v² − 40.44f² + 2.545r² − 0.02611vf + 0.00395vr − 12.208fr    (4)
Table 2. Analysis of variance for the second-order model
Source             DF   Seq SS    Adj SS    Adj MS    F value   p value
Regression          9   12.3133   12.3133   1.36815    67.63    0.000
Linear              3   10.4957   10.4957   3.49855   172.95    0.000
Cutting speed (v)   1    0.1840    0.1840   0.18402     9.10    0.008
Feed (f)            1    4.7227    4.7227   4.72269   233.46    0.000
Nose radius (r)     1    5.5889    5.5889   5.58894   276.28    0.000
Square              3    1.0568    1.0568   0.35226    17.41    0.000
(v)*(v)             1    0.0005    0.0005   0.00047     0.02    0.880
(f)*(f)             1    0.0613    0.0613   0.06134     3.03    0.100
(r)*(r)             1    0.9950    0.9950   0.99498    49.19    0.000
Interaction         3    0.7609    0.7609   0.25363    12.54    0.000
(v)*(f)             1    0.0184    0.0184   0.01841     0.91    0.353
(v)*(r)             1    0.0271    0.0271   0.02708     1.34    0.263
(f)*(r)             1    0.7154    0.7154   0.71541    35.37    0.000
Residual Error     17    0.3439    0.3439   0.02023
Total              26   12.6572
The diagnostic checking of the model has been carried out using residual analysis and the results are presented in Figs. 1 and 2. These plots imply that the model is adequate and there is no reason to suspect any violation of the independence or constant variance assumptions.
The 3D surface plots and 2D contour plots for surface roughness in terms of the
process variable are shown in Figs. 3 - 6. These response contours can help in the prediction
of surface roughness at any zone of the experimental domain. It is clear from these figures
that the surface roughness reduces with the increase of cutting speed. However, it increases
with the increase of feed and decreases with increasing tool nose radius.
[Figure 1. Normal probability plot of residuals (response is roughness Ra). Figure 2. Residuals versus fitted values for roughness (response is roughness Ra).]
[Figure 3. Contour plot of roughness Ra vs feed (f) and cutting speed (v), nose radius held constant. Figure 4. Contour plot of roughness Ra vs cutting speed (v) and nose radius (r), feed held constant. Figure 5. Surface plot of roughness Ra vs cutting speed (v) and feed (f), nose radius held at 1.2 mm. Figure 6. Surface plot of roughness Ra vs nose radius (r) and feed (f), cutting speed held at 280 m/min.]
Response surface optimization is an ideal technique for the determination of the best cutting parameters in the turning operation (Table 3). Here, the goal is to minimize surface roughness. The RSM optimization results for the surface parameters are: cutting velocity of 280 m/min, feed of 0.1 mm/rev and tool nose radius of 0.93 mm. The optimized surface roughness parameter is Ra = 0.1240 µm.
Table 3. Response optimization for surface roughness parameters
Parameter   Goal   Optimum conditions (v, f, r)   Lower   Target   Upper   Predicted response   Desirability
Ra          Min.   280, 0.1, 0.93                 0.35    0.35     0.6     0.1240               1
5. Conclusions
In this paper, the application of RSM to the turning of Aluminium is carried out. The results are as follows:
1. A quadratic model for surface roughness has been developed using RSM.
2. The established equations clearly show that the feed is the main factor influencing surface roughness, followed by tool nose radius and cutting speed.
3. 3D surface and 2D contour plots are useful in determining the optimum conditions for Ra.
4. Response surface optimization shows that the optimal combination is (280 m/min, 0.1 mm/rev, 0.93 mm) for cutting velocity, feed and tool nose radius respectively.
5. The predicted and the measured values are satisfactorily close to each other, which indicates that the developed surface roughness prediction model can be effectively used for predicting the surface roughness of Aluminium at the 95% confidence level.
References
Feng, C.X. and Wang, X. Development of Empirical Models for Surface Roughness Prediction
in Finish Turning. Int. J. Adv. Manuf. Technol., 2002, 20, 348–356.
Galanis, N.I. and Manolakos, D.E. Surface roughness prediction in turning of femoral head.
Int. J. Adv. Manuf. Technol., 2010, 51, 79–86.
Kini, M.V. and Chincholkar, A.M. Effect of machining parameters on surface roughness and material removal rate in finish turning of ±30° glass fibre reinforced polymer pipes. Mater. Des., 2010, 31, 3590–3598.
Kirby, E.D and Zhang, Z. Optimizing surface finish in a turning operation using the Taguchi
parameter design method. Int. J. Adv. Manuf. Technol., 2006, 30, 1021–1029.
Makadia, A.J. and Nanavati, J.I. Optimization of machining parameters for turning operations
based on response surface methodology. Measurement, 2013, 46, 1521-1529.
Makadia, A.J. and Nanavati, J.I. Optimization of tool geometry and cutting parameters for
turning operations based on response surface methodology, Proceeding of the
international conference on advances in materials processing and characterization
(AMPC 2013), 6-8 February 2013, pp. 885-890, Anna University, Chennai, India.
Montgomery, D.C. Design and Analysis of Experiments. 4th ed., John Wiley, New York, 1997.
Neseli, S. and Yaldız, S. Optimization of tool geometry parameters for turning operations
based on the response surface methodology. Measurement, 2011, 44, 580–587.
Noordin, M.Y. and Venkatesh, V.C. Application of response surface methodology in
describing the performance of coated carbide tools when turning AISI 1045 steel. J.
Mater. Process. Technol., 2004, 145, 46–58.
Palanikumar, K. Application of Taguchi and response surface methodologies for surface
roughness in machining glass fiber reinforced plastics by PCD tooling. Int. J. Adv. Manuf.
Technol., 2008, 36, 19–27.
Ramesh, S. Karunamoorthy, L. and Palanikumar, K. Measurement and analysis of surface
roughness in turning of aerospace titanium alloy (gr5). Measurement, 2012, 45, 1266–
1276.
Sahin, Y. and Motorcu, A.R. Surface roughness model for machining mild steel with coated
carbide tool. Mater. Des., 2005, 26, 321–326.
Yang, W.H. and Tarng, Y.S. Design optimization of cutting parameters for turning operations
based on Taguchi method. J. Mater. Process. Technol., 1998, 84, 112–129.
Parallelization of Teaching-Learning-Based Optimization over
Multi-Core System
A.J. Umbarkar*, N.M. Rothe
Walchand College of Engineering, Sangli – 416415, Maharashtra, India
*Corresponding author (e-mail: ajumbarkar@rediffmail.com)
Teaching-Learning-Based Optimization (TLBO) is a recently developed, efficient metaheuristic technique for solving optimization problems with less computational effort and high consistency. Its advantage over other Evolutionary Algorithms (EAs) is that it has no algorithm-specific parameters. A problem with EAs, including TLBO, is that an increase in the number of dimensions (D) - needed to get a more optimal solution - leads to an increase in the search space, hence they take more time to find the optimal solution. Nowadays, multi-core systems are getting cheaper and more common, yet often not all the cores of a multi-core system are utilized optimally. To address this large-dimensionality problem, we can exploit the functionality of multi-core systems using a parallel programming API such as OpenMP and thus maximize CPU utilization, which has not been considered till now. In this paper, we propose a parallelization strategy for TLBO on multi-core systems using OpenMP APIs. In a parallel implementation of TLBO, if the bottlenecks are identified and modified suitably, significant speed-ups can be achieved.
1. Introduction
Optimization, in simple terms, means minimizing the cost incurred and maximizing the profit, such as resource utilization. EAs are population-based metaheuristic optimization algorithms (i.e. they optimize a problem by iteratively trying to improve the solution with regard to a given measure of quality). To use any EA, we must build a model of our decision problem that specifies: 1) the decisions to be made, called decision variables, 2) the measure to be optimized, called the objective, and 3) any logical restrictions on potential solutions, called constraints. These 3 parameters are necessary while building any optimization model. The solver will find values for the decision variables that satisfy the constraints while optimizing (maximizing or minimizing) the objective. The TLBO algorithm requires only these 3 necessary parameters to be adjusted to become operational, unlike other EAs, which require various algorithm-specific parameters to be adjusted to provide a solution to the problem (Rao et al. 2012).
TLBO, which has no algorithm-specific parameters, is based on the philosophy of teaching and learning. In a class room, the teacher is considered a highly learned person who tries to improve the outcome of the learners by sharing his/her knowledge with them; this is called the Teacher phase of TLBO. Learners also share and learn from the interaction among themselves, which also helps to improve their outcome; this is called the Learner phase of TLBO. The detailed diagram of TLBO is shown in figure 1 (Rao et al. 2012). Crepinsek et al. (2012) commented on the efficiency of TLBO, which is very well addressed by Waghmare (2013). Rao and Patel (2012) and Rao and Kalyankar (2013) also addressed the issues raised in Crepinsek et al. (2012) and provide the rationale behind the efficiency of TLBO. In the entire process, TLBO tries to shift the mean towards the best solution.
There are various problems associated with Evolutionary Algorithms (EAs) (Umbarkar et al., 2013). With the recent advancement of multi-core systems, researchers have been modifying EAs for parallel implementation on multi-core systems and trying to solve the problems associated with EAs. This paper contributes towards this direction. In the remainder of this paper, we give a brief introduction to OpenMP and multi-core systems. Thereafter, we discuss the possibilities of tweaking TLBO to make it suitable for parallel implementation on a multi-core system to address the various problems associated with it.
1.1 OpenMP
Recent advancements in High Performance Computing (HPC) have seen the usage of multi-core CPUs (viz. i3, i5, i7, etc.) for solving computationally intensive tasks in parallel. OpenMP is an API for writing shared-memory parallel applications in C, C++, and FORTRAN. With the release of the API specifications (in 1997) by the OpenMP Architecture Review Board (ARB), the use of the CPU's computational power (in particular, the multiple cores of a multi-core CPU) for parallel computing has become easy [a]. OpenMP is ideally suited for multi-core architectures since it allows a program to be parallelized incrementally with little programming effort.
The main tasks while using OpenMP are 1) to determine which loops should be threaded and 2) to restructure the algorithms for maximum performance. OpenMP provides maximum performance if it is used to thread the most time-consuming loops in the application. Thus, if the bottlenecks in the algorithm are identified and modified suitably using OpenMP, one can easily exploit the functionality of a multi-core CPU and maximize the utilization of all the cores of the multi-core system, which is necessary from the optimization point of view (which says, maximize resource utilization).
1.2 Multi-core system
Nowadays, multi-core CPUs are getting common and cheaper. One cannot ignore
their importance anymore. A multi-core system/processor is a single computing component
with two or more independent central processing units (called "cores"). These multiple cores
can run multiple instructions simultaneously, thereby increasing overall speed for programs
amenable to parallel computing.
Originally, processors were developed with only one core. Gradually, a dual-core (two
cores) processor (e.g. AMD Phenom II X2, Intel Core Duo), a quad-core (four cores)
processor (e.g. AMD Phenom II X4, Intel's quad-core processors- i3, i5, and i7), a hexa-core
(six cores) processor (e.g. AMD Phenom II X6, Intel Core i7 Extreme Edition 980X), an
octo/octa-core (eight cores) processor (e.g. Intel Xeon E7-2820, AMD FX-8350), a deca-core
(ten cores) processor (e.g. Intel Xeon E7-2850) came into existence.
While using a multi-core system, the improvement in performance depends upon the algorithms used and their implementation on the multi-core system. In particular, the possible benefits depend upon (or are limited by) the fraction of the algorithm that can be run in parallel on multiple cores simultaneously; this issue is explained by Amdahl's law. Problems which are embarrassingly parallel may realize speedup factors near the number of cores. Even greater speedup factors can be realized if the problem is split up enough to fit within the cache of each core, thereby avoiding use of the much slower main memory of the system. In order to accelerate the performance of an application, programmers may have to invest a prohibitive amount of effort in re-factoring the whole problem [b]. The parallelization of algorithms on a multi-core system is a significant ongoing topic of research.
2. Parallelization strategy
Most of the time, applications use only a single core of a multi-core system, thereby leaving the other cores idle. As one of the goals of optimization is to maximize profit, such as resource utilization, here we discuss a parallel formulation of TLBO through which it can exploit all the cores of a multi-core system and thus maximize its utilization. This may provide a more optimal solution in less time, thereby providing speed-up compared to a single-core CPU, and may address the large-dimensionality problem in which the performance of the algorithm deteriorates as the dimension size (D) increases. Here, we propose TLBO with Multiple Teacher Phase and Multiple Learner Phase (MTML TLBO) on a multi-core system.
[a] http://openmp.org/wp/about-openmp
[b] http://www.futurechips.org/tips-for-power-coders/parallel-programming.html
2.1 TLBO with multiple teacher phase and multiple learner phase
In the original TLBO, we assume that two different teachers, say T1 and T2, teach a subject with the same content to learners of the same merit in two different classes. Depending upon the quality of the teachers, the learners learn the subject: if T1 is better than T2, then obviously the learners of T1 learn the subject more effectively and will produce better results than the learners of T2. Also, in the original TLBO, it is assumed that apart from the teacher, learners also learn from the interaction among themselves, which also helps their results.
In this approach, we assume that Teacher T1 and Teacher T2 interact with each other and share their knowledge. As T1 is better than T2, their interaction will surely help T2, and so T2 can teach the subject in a better way than before. Also, T1 may learn something new from the interaction with T2, which may help T1 to improve his/her teaching. Thus, the learners of T1 and T2 also benefit from the interaction of their teachers, and this may provide better results.
Also, we assume that the learners of T1 and the learners of T2 interact with each other and share the knowledge grasped by them, which was shared with them by their respective teachers T1 and T2. This will also help both (i.e. the students of T1 and the students of T2) in their results. As T1 is better than T2, the students of T2 will definitely benefit from the interaction with the students of T1, and the students of T2 may also learn something new which T1 may have forgotten while teaching. Thus, this MTML TLBO may produce better results than the simple TLBO. The detailed flowchart of MTML TLBO is shown in figure 2.
To execute multiple Learner/Teacher phases simultaneously, one can use the SECTIONS pragma of OpenMP. The programmer can decide the number of Learner/Teacher phases s/he wants to create based on the CPU cores available and can deploy one Learner/Teacher phase on each core. Apart from this, the fitness of each individual can be evaluated in parallel on each core using the PARALLEL pragma of OpenMP, thereby exploiting all the cores. Besides this, various loop-level parallelism pragmas are available (e.g. the PARALLEL FOR pragma) using which one can exploit all the cores of a multi-core system and realize effective speed-up.
3. Discussion
If the bottlenecks in TLBO are identified and modified suitably for parallel implementation over a multi-core system, one can:
Optimize resource utilization - as the parallel implementation exploits all the cores of the system, the resource (the CPU in this case) is optimally utilized, which is necessary from the optimization point of view.
Achieve significant speed-ups - by running the Teacher and Learner phases in parallel, a significant amount of time is saved.
Search a large population (search space) - as multiple cores are utilized simultaneously, searching a large population (search space) will not take much time, thereby improving the speed-up.
Achieve early convergence - as multiple cores execute multiple Teacher and Learner phases simultaneously, there is a chance of getting the optimal solution in less time, thereby leading to early convergence.
Initialize the population size, the number of design variables, and the termination criterion (number of generations, desired output, etc.), and generate the solutions (population) stochastically (P_old).
TEACHER PHASE: Calculate the mean for each design variable (P_mean) and find the best solution (P_best). Calculate P_new based on:
P_new = P_old + [r * (P_best - (Tf * P_mean))]
Compare P_new with P_old: if P_new is better than P_old, replace P_old with P_new; otherwise do not update P_old.
LEARNER PHASE: Select any two different solutions (from P_old), say Pi and Pj. If Pi is better than Pj, calculate P_new = P_old + [r * (Pi - Pj)]; otherwise calculate P_new = P_old + [r * (Pj - Pi)]. Compare P_new with P_old: if P_new is better than P_old, replace P_old with P_new; otherwise do not update P_old.
If the termination criteria are met, the optimal solution obtained till now is reported; otherwise the process repeats from the Teacher phase.
Figure 1. Flow-chart of simple TLBO
where r is a random number in the range [0, 1], and Tf is the teaching factor, either 1 or 2, decided randomly using Tf = round[1 + rand(0, 1){2 - 1}].
[Figure 2 (flow chart): Initialize the population size, number of design variables and termination criterion; generate solutions (population) stochastically (P_old). Then n Teacher phases (Teacher Phase_1, Teacher Phase_2, ..., Teacher Phase_n) run in parallel on P_old_1, P_old_2, ..., P_old_n; P_old_1 is compared with P_old_2, ..., P_old_n and is updated only if another copy is better. The n Learner phases then run in parallel in the same way, followed by the same comparison and conditional update of P_old_1. Repeat until the termination criteria are met and report the optimal solution obtained till then.]
Figure 2. TLBO with Multiple Teacher Phase and Multiple Learner Phase (MTML TLBO)
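The MTML scheme of Figure 2 can be sketched as follows. This is an illustrative Python sketch only: a simplified move-toward-best pass stands in for a full teacher-plus-learner pass, and threads stand in for cores (a real speed-up on CPU-bound work would need separate processes). The comparison-and-keep-best step mirrors the "Is P_old_1 better than others?" decision in the flow chart.

```python
import copy
import random
from concurrent.futures import ThreadPoolExecutor

def phase_pass(population, objective):
    """Stand-in for one teacher+learner pass: nudge each solution toward
    the current best and keep the move only if it improves the solution."""
    best = min(population, key=objective)
    out = []
    for sol in population:
        r = random.random()
        cand = [s + r * (b - s) for s, b in zip(sol, best)]
        out.append(cand if objective(cand) < objective(sol) else sol)
    return out

def mtml_step(population, objective, n_workers=4):
    """Run n_workers independent passes on copies of the population in
    parallel and keep the copy containing the best solution (Figure 2)."""
    copies = [copy.deepcopy(population) for _ in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(lambda p: phase_pass(p, objective), copies))
    # Compare P_old_1 with P_old_2, ..., P_old_n; retain the best copy
    return min(results, key=lambda p: objective(min(p, key=objective)))

def sphere(x):
    return sum(v * v for v in x)

random.seed(3)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(15)]
init_val = sphere(min(pop, key=sphere))
for _ in range(40):
    pop = mtml_step(pop, sphere)
best_val = sphere(min(pop, key=sphere))
```

Since every pass keeps a solution unless the candidate improves it, the best value of the retained copy can never be worse than before the step.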
Optimization of Compression-Absorption Refrigeration
System using Differential Evolution Technique
A. K. Pratihar1*, S. C. Kaushik2 and R. S. Agarwal3
1 Department of Mechanical Engineering, College of Technology, G. B. Pant University of Ag.
& Technology, Pantnagar-263145, India
2 Centre of Energy Studies, Indian Institute of Technology, N. Delhi, India
3 Department of Mechanical Engineering, Indian Institute of Technology, N. Delhi, India
* Corresponding author (e-mail: akpratihar@gmail.com)
Optimization of a 100 kW capacity compression-absorption refrigeration system has
been carried out using Differential Evolution, a robust non-traditional optimization
technique. Optimization resulted in a significant reduction in the life cycle cost of the
system: a 12.3 % reduction in absorber area and a 6.5 % reduction in total area have
been achieved. The results showed a small reduction in the fixed cost but a good
saving on the electricity bills. The optimization procedure converged after only 13000
function evaluations, whereas about 5 million function evaluations would be needed by
an exhaustive search method. This results in a great saving of both computational
time and effort.
1. Introduction
Compression-absorption systems are characterized by higher capital costs in
comparison to conventional vapor compression and vapor absorption systems. Therefore, it is
imperative to carry out cost optimization to avoid unnecessary cost burdens. Cost optimization
of a liquid desiccant cooling system has been carried out by Jain (1994). Optimization of a
falling film type shell and tube absorber has been performed by Pratihar et al (2005) using the
Differential Evolution technique. Hardly any economic optimization study on compression-absorption systems has been reported so far.
In the refrigeration and air-conditioning industry, various types of heat exchangers
are used as evaporators, condensers, absorbers, desorbers, etc. Optimization of the
heat exchangers, or of the complete system, may yield significant cost savings. The
reason industries have not adopted a thorough optimization procedure may be
attributed to the complexity of the optimization process. However, with the
introduction of simple and easy-to-use optimization techniques, it is possible to
design an optimum system with the lowest possible cost.
A non-traditional optimization technique known as Differential Evolution (DE),
developed by Price and Storn (1997), is a novel approach for solving mixed-integer, discrete-continuous, non-linear engineering design optimization problems. It can be categorized as an
evolutionary optimization algorithm, belonging to the class of stochastic population-based
optimization algorithms (Onwubolu and Babu, 2004).
The work presented in this paper is an endeavor to carry out economic optimization of
an ammonia-water compression-absorption refrigeration system using ‘Differential Evolution’
technique with an objective to minimize life cycle cost of the system.
2. The system and its modeling
The compression-absorption system, designed for chilling water, is shown in
Fig. 1. An ideal mixer has been added before the absorber to bring the hot vapors and the weak
solution into thermal equilibrium. A separator has been added after the desorber to facilitate
separation of the weak solution and the vapors after desorption. The heating effect is produced in the
absorber, as the heat of absorption is carried away by the water (source), and the cooling effect is
produced in the desorber, as the heat for generation of refrigerant is derived from the water
(sink). The absorber and desorber have been modeled as vertical co-current shell and tube
heat exchangers, with the solution as a falling film on the inside tube surface, the vapor in the core co-current with the solution, and the water on the baffled shell side counter-current to the solution.
The solution heat exchanger has been modeled as a multi-tube hairpin heat exchanger. The
process in the expansion device has been modeled as throttling, in which the enthalpy before and
after the process is assumed to be the same. The pump has been modeled by assuming its
isentropic efficiency equal to 90 %. The compressor has been assumed to be of screw type,
with its isentropic efficiency taken equal to 0.70.
[Figure 1 (schematic): absorber (heat rejection Qabs) with a mixer at its inlet, solution heat exchanger between the rich- and weak-solution streams, pump, expansion valve, desorber (heat input Qo) with a separator at its outlet, and compressor returning the refrigerant vapor to the mixer; state points 1-8 are marked along the circuit.]
Figure 1. Compression-absorption refrigeration system
3. Simulation procedure
Warner's technique (Dhar and Saraf, 1987) has been used for system simulation for
faster and steady convergence of the iteration procedure. The process of simulation has been
explained in detail, and simulation results of a compression-absorption system designed for
supplying chilled water for a summer air-conditioning application have been presented by the
authors (Pratihar et al, 2010).
4. Optimization procedure
Optimization with DE needs a system simulation subroutine coupled with the DE
procedure for computation of the objective function. DE generates a population of sets of
variables by random number generation. These sets of variables are passed one by one to
the system simulation subroutine, which evaluates the performance of the system, and to another
subroutine, which calculates the objective function value. An important step in
optimization using DE is the scheme for handling a mixed population of integer, discrete and
continuous variables. In the present work, the scheme used for handling discrete variables
has been extended to handle all types of variables.
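A scheme of this kind can be illustrated as follows: DE operates on continuous values internally, and each value is decoded to the nearest member of an allowed discrete set (or to a bounded integer) before the simulation subroutine is called. The decode functions below are an illustrative Python sketch, not the authors' FORTRAN code; the diameter set is taken from Section 6.1.

```python
def decode_discrete(x, allowed):
    """Map a continuous DE value x to the nearest allowed discrete value."""
    return min(allowed, key=lambda v: abs(v - x))

def decode_integer(x, lo, hi):
    """Map a continuous DE value to a bounded integer (e.g. a tube count)."""
    return max(lo, min(hi, round(x)))

# Allowed inside tube diameters (mm) from Section 6.1
TUBE_DIAMETERS = [16.7, 19.9, 26.2, 31.3]
```

With this decoding, the DE mutation and crossover operators never need to know which variables are discrete; only the objective-function evaluation sees the decoded values.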
5. Validation of code
The validity of the source code, written in FORTRAN, has been checked by solving the
problem of minimizing the 2-dimensional function f(x1, x2) = 100 (x2 - x1^2)^2 + (1 - x1)^2. This
function has been used extensively as a test example in optimization and is known as
Rosenbrock's saddle function. The correct solution is (1, 1) and the minimum function value is
zero. The computer program developed for the present work gives the correct solution. The
program was later modified to handle discrete, continuous and integer variables
simultaneously, as the design variables in the present problem are of mixed type.
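The validation run can be reproduced with a compact DE/rand/1/bin loop in Python (a sketch, not the authors' FORTRAN program), minimizing f(x1, x2) = 100(x2 - x1^2)^2 + (1 - x1)^2. The population size, F and CR below are conventional DE defaults, not values reported in the paper.

```python
import random

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def differential_evolution(obj, bounds, np_=20, f=0.7, cr=0.9, gens=300):
    """Minimal DE/rand/1/bin for box-constrained minimization."""
    d = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jrand = random.randrange(d)            # guarantees one mutated gene
            trial = []
            for j in range(d):
                if random.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(hi, max(lo, v)))  # clip to the bounds
            ft = obj(trial)
            if ft <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, ft
    k = min(range(np_), key=lambda i: fit[i])
    return pop[k], fit[k]

random.seed(7)
best_x, best_f = differential_evolution(rosenbrock, [(-2.0, 2.0), (-2.0, 2.0)])
```

With these settings the run converges near (1, 1), the known optimum of the Rosenbrock function.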
6. Optimization of compression-absorption system
The optimization problem may be defined as minimization of the total cost (CT) of the
system, subject to feasibility constraints on the operating conditions along with boundary
constraints on the optimization variables. The objective function is given by Equation (1).

Minimize CT                                  (1)

In Equation (1), CT is the sum of the fixed cost (CF) and the operating cost (CP), i.e. the cost of
power for running the system over its lifespan.

CT = CF + CP                                 (2)

Only the cost of tubes has been considered for the fixed cost of the absorber,
desorber and solution heat exchanger. The operating cost has been calculated as follows:

CP = PWF x CE                                (3)

In Equation (3), CE is the cost of electricity for one year at the present rate and PWF is the
present worth factor, which can be calculated from formulae given in the literature (Duffie and
Beckman, 1980). The cost of electricity (CE) per year has been calculated from the following formula:

CE = CkW x kW x LF x 8760                    (4)

where CkW is the cost of one kWh of electricity, taken as Rs. 4.35, 'kW' is the
total power consumption of the system in kW, LF is the load factor, taken as 0.60, and 8760 is
the number of hours of system operation in a year. The system life has been assumed as 20 years,
the inflation rate as 4 % and the interest rate on borrowed capital as 7 %.
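Equations (2)-(4) can be evaluated directly. The sketch below uses the rates stated in the text together with a standard present-worth-factor formula of the kind given by Duffie and Beckman (1980); the 30 kW power draw is a hypothetical value, since the system's actual power consumption is not stated here.

```python
def present_worth_factor(n_years, inflation, discount):
    """Present worth factor for an annual cost inflating at `inflation`
    and discounted at `discount` (after Duffie and Beckman, 1980)."""
    if inflation == discount:
        return n_years / (1.0 + inflation)
    ratio = (1.0 + inflation) / (1.0 + discount)
    return (1.0 - ratio ** n_years) / (discount - inflation)

def annual_electricity_cost(c_kwh, kw, load_factor):
    """Equation (4): CE = CkW x kW x LF x 8760 (Rs per year)."""
    return c_kwh * kw * load_factor * 8760.0

# Rates from the text; 30 kW is a hypothetical total power draw.
ce = annual_electricity_cost(c_kwh=4.35, kw=30.0, load_factor=0.60)
pwf = present_worth_factor(n_years=20, inflation=0.04, discount=0.07)
cp = pwf * ce    # Equation (3): operating cost over the system life
```

The present worth factor discounts the inflating stream of electricity bills back to today, so CP in Equation (2) is directly comparable with the fixed cost CF.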
6.1 Design variables and constraints
A total of 14 variables identified for cost optimization of the system are listed
component-wise. In the absorber: inside tube diameter, tube length, baffle spacing, tube
pitch, number of tubes, mass flow rate of weak solution and mass flow rate of cooling water
(sink); in the desorber: inside tube diameter, tube length, baffle spacing, tube pitch, number of
tubes and mass flow rate of water (source); and in the solution heat exchanger: length of tubes.
Constraints have been imposed on the optimization algorithm so that the solution
is obtained in the required and feasible domain. Boundary constraints have been put on the
geometric parameters as permitted by design practices and/or due to dimensional limitations.
Inequality constraints have been put on performance parameters, e.g. on the capacity, to design a
system with the rated cooling capacity, and on the fluid velocities, to ensure that they do not
exceed prescribed limits.
The internal diameter (Di) of the tubes is a variable of discrete type. Four values of the
internal diameter, 16.7 mm, 19.9 mm, 26.2 mm and 31.3 mm, have been considered to be
sufficient. The length of tubes has been varied between 4.5 and 5.1 m. The design variable,
baffle spacing, is given as (0.4 to 0.6) x Dshell (Kakac and Liu, 2002); therefore, 5 values of
baffle spacing, viz. (0.40, 0.45, 0.50, 0.55, 0.60) x Dshell, have been taken. The tube
pitch ranges from (1.25 to 1.50) x Do (Kakac and Liu, 2002); three values of tube pitch, 1.25
Do, 1.30 Do and 1.35 Do, have been taken. In the simulation study (Pratihar et al, 2010), the number of
tubes was found to be 223; therefore, the number of tubes has been varied between 170 and
250, which may be considered a sufficiently large range. The length of the solution heat
exchanger has also been varied to keep the SHX area within suitable limits.
7. Results and discussion
The results of optimization of a 100 kW compression-absorption system are given in
Table 1. The decrease in total cost of the system achieved due to improvement in the design,
as the search for the optimum progresses through the DE technique, is shown in Fig. 2 as a
function of the number of objective function evaluations. Optimization converged after only 13000
function evaluations (26 generations), whereas about 5 million function evaluations would be needed by
an exhaustive search method; nevertheless, the search for the optimum was continued up to
50 generations to check for any further reduction in the LCC. Early convergence results in a
great saving of both computational time and effort. This is the uniqueness of population-based
optimization techniques such as DE.
Table 1 Details of the original and the optimum compression-absorption systems

            Absorber                        Desorber                        SHX
Design      Nabs   Labs   Area    BFSR     Ndes   Ldes   Area    BFSR     Lshx    Area
Original    64     4.40   18.43   0.50     64     4.40   18.43   0.50     137.1   13.70
Optimum     60     4.31   16.17   0.45     64     4.35   17.40   0.40     137.0   13.69
Results show a 12.3 % reduction in absorber area and a 6.5 % reduction in the total area
as a result of optimization. It is evident from the results of optimization that the life cycle cost
of the optimum system decreases by Rs. 1,78,980. The decrease in the fixed cost is small,
but there is a net saving of Rs. 12,269 per annum on the electricity bills.
An appreciable saving in costs can be achieved when the original system has not been carefully
designed. The original system in the present case, however, was designed after carrying
out a large number of system simulations. In spite of this, a fairly good amount of savings
has been obtained as a result of optimization. This clearly demonstrates the need for optimization
in engineering design.
8. Conclusions
Optimization of a 100 kW capacity compression-absorption system resulted in a 12.3 %
reduction in the absorber area and a 6.5 % reduction in the total area. The life cycle cost of the
optimum system decreases by Rs. 1,78,980. The decrease in the fixed cost is small, but there
is a net saving of Rs. 12,269 per annum on the electricity bills. It is also to be emphasized
here that convergence was obtained after only 13000 function evaluations, whereas about 5 million
function evaluations would be needed by an exhaustive search method. This results in a
great saving of both computational time and effort.
Figure 2. Convergence of DE in the optimization of 100 kW system
(Reduction in life cycle cost with number of function evaluations)
References
Dhar, P. L. and Saraf, G. R. Computer simulation and design of refrigeration systems, 1st
edition, Khanna Publishers, Delhi, India, 1987.
Duffie, J. A. and Beckman, W. A. Solar Engineering of Thermal Processes, 1st edition, Wiley-Interscience, USA, 1980.
Jain, S. Studies on desiccant augmented evaporative cooling systems, Ph.D. Thesis, Mech.
Engg. Deptt., I. I. T. Delhi, India, 1994.
Kakac, S. and Liu, H. Heat Exchangers: Selection, Rating and Thermal Design, 2nd edition,
CRC Press, Washington D.C., USA, 2002.
Onwubolu, G. C. and Babu, B. V. New Optimization Techniques in Engineering, 1st edition,
Springer, Germany, 2004.
Pratihar, A. K., Kaushik, S. C. and Agarwal, R. S. Optimization of absorber using Differential
Evolution: an evolutionary optimization technique, Proceedings of 14th ISME Int. Conf.
on Mech. Engg. in Knowledge Age (eds. Sharma, P.B., Garg, S.K., Pathak, B.D. and
Shamsher), Dec. 12-14, 2005, Delhi, 656-661, ISBN 81-88901-21-0, Elite Publishing
House Private Limited, New Delhi.
Pratihar, A. K., Kaushik, S. C. and Agarwal, R. S. Simulation of an ammonia-water
compression-absorption refrigeration system for water chilling application, Int. J.
Refrig., 2010, 33, 1386-1394.
Price, K. and Storn, R. Differential evolution, Dr. Dobb's Journal, 1997, April, 8-24.
Performance Evaluation of Ground Granulated Blast Furnace
Slag (GGBFS) based Cement Concrete in Aggressive
Environment
Amish R. Bangade*, Hariharan Subramanyan
Department of Civil Engineering, Sardar Patel College of Engg., University of Mumbai
*Corresponding author (e-mail: bangade_amish@rediffmail.com)
An experimental investigation of the effect of ground granulated blast furnace slag
(GGBS) on the durability of concrete, including compressive strength and the effects of
chloride and sulphate attack (loss of mass and strength), is reported in this paper.
Strength is a major concern in present-day construction; the word strength here refers
to the compressive, flexural and tensile strength of hardened concrete made with GGBS
added to OPC while preparing the concrete mix. Extreme climatic conditions and salty
geology can make the effect of the environment on a concrete structure highly
aggressive. Extreme and harsh conditions arise from fluctuating high temperatures and
humidity at the coast. This type of aggressive and harsh environment leads to
premature deterioration of concrete buildings and structures through depassivation of
the reinforcement, sulphate attack, or expansive cracking caused by the concomitant
actions of chloride ions and carbonation. Therefore, in order to obtain higher durability
and reliability from concrete buildings in such a harsh environment, the concrete
should be dense and impervious.
1. Introduction
Concrete is a mixture of cement, fine aggregate, coarse aggregate and water.
Concrete plays a vital role in the development of infrastructure, viz. buildings, industrial
structures, bridges and highways, leading to the utilization of large quantities of concrete.
There is always scope for improving the qualities of construction materials so as to
improve their performance, reduce their cost and make them suitable for working in any hostile
environment. It is also advantageous if such materials help optimize resources for
long-term use and help maintain an eco-friendly atmosphere. Understanding the concept of
concrete durability and the right specification of materials for an infrastructure project is of utmost
importance to protect the structure from the possible adverse effects of the exposure environment.
The present work involves the use of one such material, called Ground Granulated Blast-Furnace Slag, popularly known as 'GGBFS'.
Large placements of ordinary Portland cement concrete produce high temperatures as they
cure, which in turn can result in cracks. For years the construction industry has used ground
granulated blast furnace slag (GGBFS) as a partial replacement for ordinary Portland
cement (OPC) to lower curing temperatures. However, MoDOT specifications only allowed
low levels of blast furnace slag in concrete mixes; higher concentrations warranted further
investigation for strength and durability. GGBFS is a by-product of the iron production
process and consists mostly of calcium silicates and aluminosilicates. This cementitious
material has been touted for both its strength- and durability-enhancing characteristics when
used in concrete. GGBFS also has a lower heat of hydration and hence generates less heat
during concrete production and curing. As a result, GGBFS is a desirable material for mass
concrete placements where control of temperature is an issue. Replacement of cement by
GGBFS, by weight, has ranged from 10 to 90 %; most ready-mix concrete producers use
50 % replacement with highly reactive slag during warm weather.
Ground granulated blast furnace slag (GGBFS) also improves the sulphate
and chloride resistance of concrete. The use of GGBFS in producing sulfate-resistant concrete
has been recognized by both ACI (1991) and ASTM (1997), who reported blended cements
with 60% to 65% slag as widely used in sulfate- and sea-water-resistant concretes.
GGBS has been widely used in Europe, and increasingly in the United States and in
Asia (particularly in Japan and Singapore) for its superiority in concrete durability, extending
the lifespan of buildings from fifty years to a hundred years.
2. Significance of study
This study reports an experimental investigation of the effect of GGBS on the durability
of concrete, including compressive strength and the effects of chloride and sulphate attack
(loss of mass and strength), for mixes in which GGBS partially replaces OPC. As discussed
above, aggressive coastal environments lead to premature deterioration of concrete
structures through depassivation, sulphate attack and expansive cracking driven by chloride
ions and carbonation, so the concrete must be dense and impervious to achieve the required
durability and reliability.
A common problem found in buildings in Mumbai these days is leakage. This
problem was traditionally found in old buildings as they began to weaken; finding it in new
houses is an example of shoddy or overlooked construction, and repairs often make it worse.
GGBFS has been used for a very long time in various types of work, and by now it is well
established that structures made with GGBFS can sustain the degrading effects of even the
most hostile environments. One important aspect of the use of GGBFS is that it is more
eco-friendly than other available cements, as its production emits less pollution. GGBFS is
used for construction in marine environments, where sulphates and chlorides are present.
GGBS reduces the alkalinity of the concrete and the mobility of alkalis within it, and it
reduces the free lime in the concrete (regarded as an important factor in the alkali-silica
reaction).
3. Literature review
GGBS is used to make durable concrete structures in combination with ordinary
Portland cement and/or other pozzolanic materials. As noted above, it has been widely used
in Europe, and increasingly in the United States and in Asia (particularly in Japan and
Singapore), for its superiority in concrete durability.
The literature reports that GGBS improves the general performance of PC concrete
exposed to silage effluent. GGBS also decreases chloride diffusion and chloride ion
permeability, reduces creep and drying shrinkage, increases sulfate resistance, enhances
ultimate compressive strength, and reduces the heat of hydration and bleeding.
Shaikh Faiz Uddin Ahmed and Yoshihiko Ohama, in a paper published in the NICMAR
journal, carried out an experimental investigation of the effect of ground granulated blast
furnace slag (GGBFS) on the mechanical properties and durability of mortar, including
compressive and flexural strength, water absorption, carbonation and chloride ion
penetration (as per JIS A 1172, Method of test for strength of polymer-modified mortar).
Venu Malagavelli and P. N. Rao found that partial replacement of cement with
GGBS, and of sand with ROBO sand, helped in improving the strength of the concrete
substantially compared to a normal concrete mix. The compressive strength of concrete
increased as the percentages of ROBO sand and GGBS increased. The
maximum compressive strength of concrete was achieved at the combination of 25 % ROBO
sand and 50 % GGBS.
4. Research methodology
The use of Ground Granulated Blast Furnace Slag (GGBFS) in cement mortar and
concrete has become widespread all over the world in the production of high-strength,
durable mortar. The purpose of this experimental investigation is to evaluate the effect of
partial replacement of cement by GGBFS (i.e. 50 % cement and 50 % GGBFS) on the
strength and durability of mortar, its chloride and sulphate resistance, and the water
absorption and capillary action of concrete. Mix designs of M-20 and M-25 grade
concrete were prepared as per IS 10262-1982 with OPC mortar, OPC + fly ash mortar and
OPC + GGBFS mortar, for investigating the strength and durability of concrete.
The following mix designs were prepared in the RMC plant. A total of 3 mixes of M-20
and 3 mixes of M-25 grade concrete were prepared with three different binder contents, and
12 cubes of each binder content were cast for checking the compressive strength of concrete
(as per IS 456-2000) at ages of 7, 28, 56 and 90 days.
From these cubes the following test results were obtained. Tables 1.1 and 1.2 and
the accompanying graphs show the average compressive strength of concrete for OPC,
OPC + FLY ASH and OPC + GGBFS at the different ages.
Table 1.1 Average compressive strength (N/mm2) of M-20 grade concrete

Sr. no.   Mix M-20       7 days   28 days   56 days   90 days
1         OPC            19.22    30.00     39.09     42.63
2         OPC+FLYASH     17.94    28.67     35.49     40.74
3         OPC+GGBFS      17.50    29.44     51.63     56.18
[Graph omitted: comparative strength of M-20 grade concrete, plotting the Table 1.1 values for OPC, OPC+FLYASH and OPC+GGBS against age.]
Figure 1.1. Graph showing the increase of compressive strength of concrete with age
Table 1.2 Average compressive strength (N/mm2) of M-25 grade concrete

Sr. no.   Mix M-25       7 days   28 days   56 days   90 days
1         OPC            23.32    39.23     45.48     51.04
2         OPC+FLYASH     22.76    34.10     41.09     47.45
3         OPC+GGBFS      21.33    37.16     55.30     64.97
[Graph omitted: comparative strength of M-25 grade concrete, plotting the Table 1.2 values for OPC, OPC+FLYASH and OPC+GGBS against age.]
Figure 1.2. Graph showing the increase of compressive strength of concrete with age
5. Experimental study of silage effluent (sulphate and chloride) attack on concrete
I) Silage effluent test on concrete
1) M-25 grade OPC, OPC+FLYASH and OPC+GGBS cubes were immersed in silage
effluent (lactic acid, acetic acid, formalin, KOH, Ca(OH)2, Mg(OH)2, Na2SO4 and NaCl)
for a period of 3 months, and the degradation of the concrete cubes was observed
visually. It was observed that the OPC cube degraded more than the GGBFS cube.
[Photographs omitted: OPC, OPC+Fly Ash and OPC+GGBFS cubes before and after immersion in the solution.]
Figure 1.3. Effect of silage effluent on concrete mortar
Table 1.3 Compressive strength test report of cubes after silage effluent immersion

Mix           Compressive         Mass of cubes          Mass of cubes           % loss
              strength (N/mm2)    before immersion (g)   after removal (g)       of mass
OPC           32.33               8.611                  7.349                   17.17 %
OPC+FLYASH    38.10               8.609                  7.890                   9.111 %
OPC+GGBS      51.00               8.606                  8.100                   6.24 %
[Bar Chart 1.1 omitted: compressive strength (N/mm2) of the three mixes, as in Table 1.3. Bar Chart 1.2 omitted: loss of mass in %, as in Table 1.3.]
Capillary test and water absorption test (OPC, OPC+FLYASH, OPC+GGBS)
The capillary and water absorption tests were performed in accordance with BS (1999)
and the Spanish standard UNE (1984), using the formulae S = ((Ws - Wd)/A) x 100 and
Wa = ((Ws - Wd)/Ws) x 100 respectively, by immersing the cubes in water for a period of one
month. The purpose of these tests is to determine how much fluid will enter the mortar
through the suction forces created by the water molecules and their micro-connections with
the pore walls. The more fluid able to enter the mortar through capillary action, the more
susceptible it will be to attack by silage effluent. From the tests it was observed that the
GGBFS mortar shows less capillary action and water absorption than the OPC mortar. The
following bar charts show the capillary action and water absorption.
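The two formulae can be applied as in the sketch below; the specimen masses and suction-face area are hypothetical illustrative values, not measurements from this study.

```python
def capillary_suction(ws, wd, area):
    """S = ((Ws - Wd) / A) x 100: water taken up per unit suction face area.
    ws: saturated mass (g), wd: dry mass (g), area: suction face area (cm^2)."""
    return (ws - wd) / area * 100.0

def water_absorption(ws, wd):
    """Wa = ((Ws - Wd) / Ws) x 100: absorbed water as % of saturated mass."""
    return (ws - wd) / ws * 100.0

# Hypothetical 150 mm cube: dry mass 8100 g, saturated mass 8358 g,
# suction face area 15 cm x 15 cm = 225 cm^2
s = capillary_suction(ws=8358.0, wd=8100.0, area=225.0)
wa = water_absorption(ws=8358.0, wd=8100.0)
```

A denser, less permeable mortar gives lower values of both S and Wa, which is the behaviour reported for the GGBFS mix.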
[Bar Chart 1.3 omitted: capillary suction (g/cm2) - OPC 124.44, OPC+FLYASH 31.11, OPC+GGBS 7.55. Bar Chart 1.4 omitted: water absorption - OPC 3.18 %, OPC+FLYASH 0.81 %, OPC+GGBS 0.19 %.]
6. Conclusion
Based on the results of the experimental work on various grades of concrete using
GGBFS as a partial replacement for OPC, the following observations are made.
1) The work has shown that OPC composites incorporating GGBS (i.e. 50 % + 50 %)
are more durable than those made with OPC alone in aggressive environments under the
action of acids and salts such as those produced by silage, and that durability increases with
increasing amounts of GGBS.
2) GGBS mixes showed the smallest progressive rise in water absorption and capillary
suction as a result of silage immersion and salt cycling, as well as smaller mass and
compressive strength losses than the OPC mixes. The addition of GGBS will increase the
lifespan and decrease the maintenance of OPC concrete.
3) As the natural resources needed for making OPC are depleting fast, it is
strongly recommended to use GGBFS to the maximum possible extent to conserve our natural
resources. The manufacture of cement clinker also emits plenty of carbon dioxide, which is
responsible for the greenhouse effect and global warming; partial replacement of OPC with
GGBFS obviates the need to manufacture that clinker and helps maintain an eco-friendly
atmosphere.
4) GGBFS is manufactured from a by-product of the steel industry and is much cheaper
than OPC. A saving of 10 to 20 % or even more may be realized in the cost of concrete
manufacturing, depending upon the price of OPC, bringing down the cost of construction in
general.
5) The by-product of the steel industry is a waste that may pose a disposal problem.
Cement industries can utilize this by-product for manufacturing GGBFS and thus avoid the
disposal problem.
References
Rajkumar, C. "Portland slag cement in civil engineering construction", article in the book
published by Orissa Cement Ltd.
IS 10262-1982, "Concrete mix design - Guidelines", BIS.
IS 12089-1987, "Specification for granulated slag for the manufacture of Portland slag
cement", BIS.
IS 383-1970, "Specification for coarse and fine aggregates from natural sources for
concrete", BIS.
IS 456-2000, "Plain and reinforced concrete - Code of practice", BIS.
IS 516-1959, "Methods of tests for strength of concrete", BIS.
Jeevan, N., Meera Joe and Ravindra, P. M. "Prediction of compressive strength for GGBS
incorporated concrete for different curing conditions", International Journal of Advanced
Scientific and Technical Research, Vol. 5, October 2012, ISSN 2249-9954.
Oner, A. and Akyuz, S. "An experimental study on optimum usage of GGBS for the
compressive strength of concrete", Cement & Concrete Composites, 29 (2007), 505-514.
Shetty, M. S. "Concrete Technology", S. Chand and Co. Ltd.
Pavin, S. and Corden, E. "Study of the durability of OPC versus GGBS concrete on exposure
to silage effluent", Journal of Materials in Civil Engineering,
10.1061/(ASCE)0899-1561(2008)20:4(313-320).
Shaikh Faiz Uddin Ahmed and Yoshihiko Ohama, "Mechanical properties and durability of
ground granulated blast furnace slag modified mortar", NICMAR Journal of
Construction Management, Vol. 2, April-June 2000.
Venu Malagavelli and Rao, P. N. "High performance concrete with GGBS and ROBO sand",
International Journal of Engineering Science and Technology, 2(10), 2010, 5107-5113.
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
Solving Constrained Optimization Problem by Genetic Algorithm
Aditya Parashar*, Kuldeep Kumar Swankar
Madhav Institute of Technology & Science, Gwalior-474012, M.P. India
*Corresponding author (e-mail: adi07.parashar@gmail.com)
This paper presents a method for solving constrained optimization problems using a genetic algorithm. The genetic algorithm provides an effective search and therefore greater economy, and the presented method leads to better results for the optimization problem.
Keywords: constrained optimization, genetic algorithm
1. Introduction
Optimization is the process of finding the best solution under given circumstances. An optimization problem consists of an objective function and information about the variables; when restrictions on the variables are specified, it becomes a constrained optimization problem. Constraints are classified as equality relations and inequality relations. The objective function is evaluated and the constraints are checked to see if there is any violation. If there is no violation, the variable set is assigned a fitness value according to the objective function evaluation; when the constraints are violated, the solution is infeasible and has no fitness. A genetic algorithm generates a sequence of parameters to be tested using the system under consideration, the objective function (to be maximized or minimized) and the constraints (Petridis et al., 1998).
Genetic algorithms are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. They provide a guided random search for solving optimization problems, making use of historical information to direct the search into regions of better performance within the search space. The basic principles of genetic algorithms are designed to simulate the processes in natural systems necessary for evolution, especially the principle first laid down by Charles Darwin, the "survival of the fittest". Genetic algorithms maintain a population of encoded solutions to the problem (genotypes) that is developed over time. They are based on the reproduction of solutions, the evaluation of solutions and the selection of the best genotypes, with genetic reproduction performed by crossover and mutation (Holland, 1975). In 1975, Holland developed this idea in Adaptation in Natural and Artificial Systems; by depicting how the principles of natural evolution can be applied to optimization problems, he introduced the first genetic algorithm. Holland's theory has since been further developed, and genetic algorithms now stand as powerful adaptive methods for solving search and optimization problems. These days, GAs are applied to resolve complicated optimization problems such as timetabling and job-shop scheduling (Petridis et al., 1998; Michalewicz and Janikow).
2. Problem formulation
The general constrained optimization problem is to optimize a function f(x1, x2, …, xn) subject to constraints zi(x) ≥ 0, i = 1, 2, …, n, together with the following sets of linear constraints.
2.1 Domain
li ≤ xi ≤ ui for i = 1, 2, …, n, written compactly as l ≤ x ≤ u, where x = (x1, x2, …, xn), l = (l1, l2, …, ln) is the lower bound and u = (u1, u2, …, un) is the upper bound.
2.2 Equality
Ax = b, where x = (x1, x2, …, xn), A = [aij], b = (b1, b2, …, bp), 1 ≤ j ≤ n and 1 ≤ i ≤ p (p is the number of equations).
2.3 Inequality
Cx ≤ d, where x = (x1, x2, …, xn), C = [cij], d = (d1, d2, …, dm), 1 ≤ j ≤ n and 1 ≤ i ≤ m (m is the number of inequalities).
The constrained optimization problem considered as an example here is to maximize the function f(x) = x³, where x varies from 0 to 31.
3. Genetic algorithm for the constrained optimization problem
Genetic algorithms are computerized search and optimization algorithms based on the mechanics of natural genetics and natural selection. Holland, at the University of Michigan, Ann Arbor, envisaged the concept of these algorithms in the mid-sixties and published his seminal work. A genetic algorithm uses several operators for problem solution; they are explained below (Sivanandam and Deepa).
3.1 Encoding of solutions
Encoding is the way of representing individual genes. The process can be performed using n-bit binary strings, numbers, trees, arrays, lists, or any other objects. The encoding depends on the problem being solved. Genetic algorithms need the design space to be converted into a genetic space, so they work with a coding of the variables.
3.2 Fitness function
Only feasible solutions are generated in the proposed genetic algorithm. The fitness function of the given problem is
f(x) = x³
where the value of x varies from 0 to 31.
3.3 Population
A population is a collection of individuals. It consists of a number of chromosomes being tested, the phenotype parameters defining the individuals and some information about the search space. The population size depends on the complexity of the problem. The initial population is chosen randomly.
3.4 Selection
Selection is a method that randomly chooses chromosomes out of the population according to their evaluation function. The higher the fitness, the greater the chance an individual will be selected. The selection pressure is defined as the degree to which the better individuals are favoured: the higher the selection pressure, the more the better individuals are favoured. The convergence rate of a GA is determined by the magnitude of the selection pressure, with higher selection pressure resulting in higher convergence rates.
3.5 Crossover
Crossover is a random technique for mating strings. Based on the probability of crossover, a partial exchange of characters between two strings is performed. The crossover technique involves selecting two mating parents, selecting a crossover point and swapping the chromosome segments between the two strings.
3.6 Mutation
Mutation is a technique for the occasional random alteration of bits in a string; with a binary representation, it flips the state of a bit from 1 to 0 or from 0 to 1 (Abookazemi et al., 2009).
4. GA example solution
Given f(x) = x³, where the value of x varies from 0 to 31.
Step 1: Encoding - encode the value of x:
x(0) = 00000
x(31) = 11111
Step 2: Set the initial population (randomly selected):
x = [01100; 11001; 00101; 10011]
Step 3: Fitness function
f(x) = x³
Step 4: Selection - selection is the process of choosing the population set for crossing. The actual count, used to select the individuals that participate in the crossing, is obtained using roulette wheel selection.
(i) String 1 occupies 7.1%, so there is a chance for it to occur at least once; hence its actual count is 1.
(ii) String 2 occupies 64.20%, so it has a fair chance of being selected twice; thus its actual count can be 2.
(iii) String 3 occupies 0.51%, so its chance of occurrence is very poor; its actual count is 0.
(iv) String 4, with 28.18%, has at least one chance of occurring; thus its actual count is 1.
String No. | Initial population (random) | x value | f = x³ | Prob. | % of prob. | Expected count | Actual count
1 | 01100 | 12 | 1728 | 0.0710 | 7.10 | 0.2840 | 1
2 | 11001 | 25 | 15625 | 0.6420 | 64.20 | 2.568 | 2
3 | 00101 | 5 | 125 | 0.0051 | 0.51 | 0.0205 | 0
4 | 10011 | 19 | 6859 | 0.2818 | 28.18 | 1.127 | 1
Sum | | | 24337 | 1.0000 | 100 | 4.000 | 4
Average | | | 6084.25 | 0.2500 | 25 | 1.000 | 1
Maximum | | | 15625 | 0.6420 | 64.20 | 2.568 | 2

Step 5: Crossover

String No. | Mating pool | Crossover point | Offspring after crossover | x value | f(x) = x³
1 | 01100 | 4 | 01101 | 13 | 2197
2 | 11001 | 4 | 11000 | 24 | 13824
3 | 11001 | 2 | 11011 | 27 | 19683
4 | 10011 | 2 | 10001 | 17 | 4913
Sum | | | | | 40617
Average | | | | | 10154.25
Maximum | | | | | 19683
Step 6: Mutation

String No. | Offspring after crossover | Mutation chromosome for flipping | Offspring after mutation | x value | f(x) = x³
1 | 01101 | 10000 | 11101 | 29 | 24389
2 | 11000 | 00000 | 11000 | 24 | 13824
3 | 11011 | 00000 | 11011 | 27 | 19683
4 | 10001 | 00100 | 10101 | 21 | 9261
Sum | | | | | 67157
Average | | | | | 16789.25
Maximum | | | | | 24389
Step 7
The maximum fitness value f(x) = x³ in the population is 24389, obtained for the resulting string 11101 (x = 29), which proves to be the best solution; the total fitness of the population has risen to 67157.
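The whole loop of the worked example can be sketched in Python. This is an illustrative implementation, not the paper's code: the population size, mutation rate, number of generations and the helper names (`roulette`, `run_ga`) are assumptions.

```python
import random

def fitness(bits):
    # Decode the 5-bit string and evaluate f(x) = x**3.
    return int(bits, 2) ** 3

def roulette(pop):
    # Roulette-wheel selection: pick one parent with probability
    # proportional to fitness (+1 guards against an all-zero population).
    return random.choices(pop, weights=[fitness(p) + 1 for p in pop], k=1)[0]

def crossover(a, b):
    # Single-point crossover: swap the tails after a random point.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, rate=0.05):
    # Flip each bit from 1 to 0 or 0 to 1 with probability `rate`.
    return "".join("10"[int(c)] if random.random() < rate else c for c in bits)

def run_ga(pop_size=4, generations=20, seed=1):
    random.seed(seed)
    pop = ["{:05b}".format(random.randint(0, 31)) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            c1, c2 = crossover(roulette(pop), roulette(pop))
            new_pop += [mutate(c1), mutate(c2)]
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)   # best string of the final generation

best = run_ga()
print(best, int(best, 2), fitness(best))
```

Since the maximum lies at x = 31 (string 11111, fitness 29791), the selection pressure tends to drive the population toward strings with leading 1s.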
5. Conclusions
For handling constraints in a genetic algorithm for optimization problems, we proposed a method that generates only feasible solutions. This method should enable constrained problems with difficult objective functions to be solved without incurring the high computational overhead associated with frequent constraint checking and without requiring the design of a problem-specific system.
References
Abookazemi, K., Wazir, M., Ahmad, M.H. Structured genetic algorithm technique for unit commitment problem. International Journal of Recent Trends in Engineering, 2009, 1(3).
Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.
Holland, J.H. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: Univ. Michigan Press, 1975.
Michalewicz, Z., Janikow, C.Z. Handling constraints in genetic algorithms.
Petridis, V., Kazarlis, S., Bakirtzis, A. Varying fitness functions in genetic algorithm constrained optimization: the cutting stock and unit commitment problems. IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 1998, 28(5).
Sivanandam, S.N., Deepa, S.N. Principles of Soft Computing, second edition, Wiley India, 418.
Multi Criteria Decision Making for Materials Selection using
Fuzzy Axiomatic Design
Anant V. Khandekar1, Shankar Chakraborty2*
1Department of Mechanical Engineering, Government Polytechnic, Bandra (East), Mumbai - 400 051, Maharashtra, India
2Department of Production Engineering, Jadavpur University, Kolkata - 700 032, West Bengal, India
*Corresponding author (e-mail: s_chakraborty00@yahoo.co.in)
With the advancement of technology, varieties of materials are now available for a
particular engineering application. The designer has to choose the most appropriate
material so that there is optimal fulfilment of the final requirements for the product. Most
of the time, the material characteristics are more comfortably expressed in qualitative
terms as opposed to quantitative terms. Fuzzy set theory was developed to process
such imprecise data in an efficient manner. Decision-making methods using fuzzy set
theory have gained wide acceptance due to their ability to handle the impreciseness in
the data. Material selection problems possess such characteristics of impreciseness.
This paper presents a fuzzy multi-criteria decision-making method using axiomatic
design principle for material selection and two material selection problems are
subsequently solved.
1. Introduction
In today’s technologically advanced world, ever increasing varieties of materials are
available for designing a particular product. While selecting materials for engineering
components, a clear understanding of the functional requirements for each individual
component is required and various pertinent criteria need to be considered. Such a decision-making situation involving multiple candidate materials having numerous attributes is known as a multi-criteria decision-making (MCDM) problem.
Fuzzy set theory is used to convert qualitative data into numerical values by means of
triangular or trapezoidal fuzzy numbers. Qualitative material properties are also of varying
degrees of importance for different design requirements. For example, the corrosion
resistance of a material should be ‘high’ for a given service condition. It is not possible to
exactly quantify such a rating of the material properties. This impreciseness in material
selection problem has compelled the designers to adopt fuzzy MCDM techniques for its
solution.
The past researchers have already applied several fuzzy MCDM techniques to solve
varieties of material selection problems. Liao (1996) determined the fuzzy suitability index for
each alternative material and finally ranked those materials in order of priority. Rathod and
Kanzaria (2011) applied fuzzy technique for order preference by similarity to ideal solution
(TOPSIS) to select the proper phase change material used in solar domestic hot water
system for latent heat thermal energy storage. Sharif Ullah and Harib (2008) used the
material property charts to select the optimal materials for robotic components at an early
stage of design. Wang and Chang (1995) applied the aggregation and ranking of fuzzy
numbers to select tool steel materials. Giachetti (1998) presented a prototype material and
manufacturing process selection system called MAMPS that would integrate a formal multi-attribute decision model with a relational database. Rao and Patel (2010) considered the
objective weights of importance of the attributes as well as subjective preferences of the
decision makers to decide the integrated weights of importance of the attributes in their novel
MCDM method for material selection.
In this paper, an attempt is made to explore the applicability and capability of the fuzzy axiomatic design (FAD) principle for selection of the most appropriate material for a given engineering application. FAD has been used for decision-making since 2005,
but its use for material selection in the engineering application domain is almost untapped. In this paper, two material selection problems are solved by the FAD method and satisfactory ranking results are obtained.
2. Fuzzy axiomatic design
Axiomatic design (AD) was put forth as a scientific approach for design of products. It
takes into consideration the customer needs for a product to be designed in terms of
functional requirements (FRs) and establishes the relation with the final design parameters
(DPs) of the product. Recently, it has also been applied in the area of engineering decision-making.
AD principles are supported by two axioms. The first one is the independence axiom
and second one is the information axiom. According to independence axiom, each FR should
be independently satisfied by the corresponding DP (Suh, 1990). The information axiom
states that among those solutions that satisfy the independence axiom, the solution having
the smallest information content is the best one (Suh, 2001).
Design range is decided by the decision maker and is the ideal range of values sought in the decision-making process. System range denotes the capability
of the available alternatives. The overlap between design range (DR) and system range (SR)
is the common range (CR), where the acceptable solution exists.
Kulak and Kahraman (2005) proposed FAD approach to solve MCDM problems
under uncertainty. The system and design ranges are expressed using fuzzy numbers, i.e.
triangular or trapezoidal fuzzy number. The information content for FAD is expressed as
Ii = log2(TFN of SR / TFN of CR)    (1)
The considered criteria have different weights in MCDM problems according to their
importance and they have a great influence on the final result of the problem. Eqn. (2) was
proposed for the weighted multi-criteria AD approach (Kulak et al., 2005).
Iij = [log2(1/pij)]^(1/wj),  if 0 ≤ Iij ≤ 1
    = [log2(1/pij)]^(wj),    if Iij ≥ 1
    = wj,                    if Iij = 1    (2)
where Iij is the weighted information content of alternative i for criterion j, wj is the weight of the jth criterion and pij is the probability of alternative i achieving the functional requirement FRj (criterion j), i.e. the ratio of the common area to the system area.
A fuzzy number Ã is a trapezoidal fuzzy number (Tr.F.N.) if its membership function is
fÃ(x) = (x – α)/(β – α),  α ≤ x ≤ β
      = 1,                β ≤ x ≤ τ
      = (x – δ)/(τ – δ),  τ ≤ x ≤ δ
      = 0,                otherwise
with α ≤ β ≤ τ ≤ δ. The trapezoidal fuzzy number, as given above, can be denoted by (α,β,τ,δ).
A fuzzy number Ã is a triangular fuzzy number (T.F.N.) if its membership function is
fÃ(x) = (x – α)/(β – α),  α ≤ x ≤ β
      = (x – δ)/(β – δ),  β ≤ x ≤ δ
with α ≤ β ≤ δ. A triangular fuzzy number is a special case of a Tr.F.N.; thus, a T.F.N. can also be represented as a Tr.F.N. and is usually denoted by (α,β,β,δ).
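The membership function above translates directly into code. This sketch (the function name is ours) covers both the trapezoidal case and, with β = τ, the triangular special case:

```python
def trapezoidal(x, alpha, beta, tau, delta):
    # Membership function of the trapezoidal fuzzy number (alpha, beta, tau, delta).
    if alpha <= x < beta:
        return (x - alpha) / (beta - alpha)   # rising edge
    if beta <= x <= tau:
        return 1.0                            # plateau
    if tau < x <= delta:
        return (x - delta) / (tau - delta)    # falling edge
    return 0.0

# 'approximately equal to 300' with 10% fuzziness -> (270, 300, 300, 330)
print(trapezoidal(285, 270, 300, 300, 330))   # 0.5
print(trapezoidal(300, 270, 300, 300, 330))   # 1.0
```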
A material property with a value 'approximately equal to 300' can be represented by (270,300,300,330), taking into consideration that there is 10% fuzziness involved in any property value. On the other hand, a material property with a value 'approximately between 360 and 400' can be represented by (324,360,400,440). A desired material cost 'smaller or equal to about Rs. 20 per kg' can be represented by (0,0,20,22), as the minimum cost is zero.
Corrosion resistance is a linguistic variable and its values are 'recommended' (RE), 'acceptable' (AC) and 'not recommended' (NR). Using fuzzy set theory, these values can be expressed as Tr.F.Ns. as follows:
RE: (18,18,18,22);  AC: (18,20,50,55);  NR: (45,55,55,55)
The 'importance weight of material' is a linguistic variable with upper and lower limits set as 1 and 0 respectively. For this purpose, a weighing set of {VL, L, M, H, VH} is used. The meanings of the fuzzy values are 'very low', 'low', 'medium', 'high' and 'very high' respectively. They can be represented as Tr.F.Ns. as follows:
VL = (0,0,0,0.3), L = (0,0.3,0.3,0.5), M = (0.2,0.5,0.5,0.8), H = (0.5,0.7,0.7,1.0), VH = (0.7,1.0,1.0,1.0)
3. Numerical illustrations
3.1. Example 1
The design of a nozzle of a jet fuel system to be operated in a high-temperature
oxygen-rich environment was considered by Liao (1996). The relevant material properties,
their desirable values and the corresponding importance weights for that material selection
problem are shown in Table 1. Table 2 shows the database of ten different materials along
with the values of relevant properties.
Table 1. Design range of material properties

Sl. No. | Material property | Desirable value | Importance weight
1. | Brinell hardness | Approximately equal to 300 | High
2. | Machinability rating | Approximately greater than or equal to 30 | Very high
3. | Cost | Approximately smaller than or equal to $3.5/lb | Medium
4. | Corrosion resistance | Recommended | Medium

Table 2. System range of materials

Sl. No. | Material | Hardness (HB) | Machinability rating* (%) | Cost ($/lb) | Corrosion resistance#
1. | Stainless steel 17-4PH | 270-420 | 25 | 4.5 | Recommended
2. | Stainless steel 410 | 155-350 | 40 | 3 | Recommended
3. | Stainless steel 440A | 215-390 | 30 | 2.5-3.0 | Recommended
4. | Stainless steel 304 | 150-330 | 45 | 2 | Not Recommended
5. | Ni-resist cast iron | 130-250 | 35 | 0.8-1.3 | Recommended
6. | High-chromium cast iron | 250-700 | 25 | 2-2.5 | Recommended
7. | Ni-hard cast iron | 525-600 | 30 | 1.8-2.2 | Recommended
8. | Nickel 200 | 75-230 | 55 | 4 | Acceptable
9. | Monel 400 | 110-240 | 35 | 8 | Recommended
10. | Inconel 600 | 170-290 | 45 | 8.5-9.0 | Recommended

*Cold-drawn AISI 1112 steel was taken as having a machinability rating of 100%.
#The guidelines were: 'recommended' if corrosion rate < 20 mpy (mil per year), 'acceptable' if 20 ≤ corrosion rate ≤ 50 mpy, and 'not recommended' if corrosion rate > 50 mpy.
The system range and design range for functional requirement of hardness for
Inconel 600 are shown in Figure 1.
Figure 1. Common area of system range and design range
Now, the crisp values of the importance weights are calculated as the means of the corresponding Tr.F.Ns.:
For hardness: w1 = ¼(0.5+0.7+0.7+1.0) = 0.725 ≈ 0.7
For machinability rating: w2 = ¼(0.7+1.0+1.0+1.0) = 0.925 ≈ 0.9
For cost: w3 = ¼(0.2+0.5+0.5+0.8) = 0.5
For corrosion resistance: w4 = ¼(0.2+0.5+0.5+0.8) = 0.5
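The defuzzified weights, and the weighted information content of Eqn. (2), can be sketched as follows (the function names are illustrative; the paper rounds the weights to one decimal):

```python
from math import log2

def crisp_weight(tfn):
    # Defuzzify a trapezoidal fuzzy weight by taking its mean.
    return sum(tfn) / 4.0

def weighted_info(p, w):
    # Weighted information content of Eqn. (2); p is the probability of
    # meeting the requirement (common area / system area), w the crisp weight.
    base = log2(1.0 / p)          # unweighted information content
    if base == 1.0:
        return w
    return base ** (1.0 / w) if base < 1.0 else base ** w

print(crisp_weight((0.5, 0.7, 0.7, 1.0)))   # hardness: 0.725, rounded to 0.7
print(crisp_weight((0.7, 1.0, 1.0, 1.0)))   # machinability: 0.925, rounded to 0.9
print(weighted_info(0.25, 0.5))
```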
The design range, system range and importance weights of the materials are first expressed in Tr.F.N. Then, by applying Eqns. (1) and (2), and crisp values of the material weights, the weighted information contents of all the material properties for different alternatives are calculated, as shown in Table 3. This table also exhibits the total of information contents for all alternatives and their corresponding rankings.
Table 3. Ranking of materials

Material | Hardness | Machinability rating | Cost | Corrosion resistance | Total information content | Rank
Stainless steel 17-4PH | 2.02 | 4.36 | 2.67 | 0.0 | 9.06 | 5
Stainless steel 410 | 2.16 | 0.0 | 0.0 | 0.0 | 2.16 | 2
Stainless steel 440A | 2.11 | 0.0 | 0.0 | 0.0 | 2.11 | 1
Stainless steel 304 | 2.10 | 0.0 | 0.0 | Infinite | Infinite | Rejected
Ni-resist cast iron | 5.07 | 0.0 | 0.0 | 0.0 | 5.07 | 3
High-chromium cast iron | 2.78 | 4.36 | 0.0 | 0.0 | 7.14 | 4
Ni-hard cast iron | Infinite | 0.0 | 0.0 | 0.0 | Infinite | Rejected
Nickel 200 | Infinite | 0.0 | 1.81 | 2.15 | Infinite | Rejected
Monel 400 | Infinite | 0.0 | Infinite | 0.0 | Infinite | Rejected
Inconel 600 | 2.13 | 0.0 | Infinite | 0.0 | Infinite | Rejected
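The ranking logic of Table 3 (sum the per-criterion information contents, reject any alternative containing an infinite term, rank the rest in ascending order) can be sketched as follows, using a subset of the table's values; the totals differ from the table only by rounding:

```python
import math

# Per-criterion weighted information contents (hardness, machinability,
# cost, corrosion resistance) for a subset of the Table 3 alternatives.
materials = {
    "Stainless steel 17-4PH": [2.02, 4.36, 2.67, 0.0],
    "Stainless steel 410": [2.16, 0.0, 0.0, 0.0],
    "Stainless steel 440A": [2.11, 0.0, 0.0, 0.0],
    "Stainless steel 304": [2.10, 0.0, 0.0, math.inf],   # infinite -> rejected
    "Ni-resist cast iron": [5.07, 0.0, 0.0, 0.0],
    "High-chromium cast iron": [2.78, 4.36, 0.0, 0.0],
}

totals = {name: sum(values) for name, values in materials.items()}
feasible = sorted((total, name) for name, total in totals.items() if math.isfinite(total))
rejected = [name for name, total in totals.items() if math.isinf(total)]

for rank, (total, name) in enumerate(feasible, start=1):
    print(rank, name, round(total, 2))
print("Rejected:", rejected)
```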
3.2 Example 2
For selecting the most suitable coating material (Athanasopoulos et al., 2009), the design requirements and the corresponding importance weights of various material properties are shown in Table 4. Three relevant coating materials and the system range values of their
properties are given in Table 5. Coating material properties, like corrosion resistance, wear
resistance and appearance are expressed using linguistic terms from the set consisting of
values as ‘very poor’, ‘poor’, ‘fair’, ‘good’ and ‘very good’. Their T.F.Ns. are (0,0,0.3),
(0,0.3,0.5), (0.2,0.5,0.8), (0.5,0.7,1) and (0.7,1,1) respectively.
By applying FAD principle, material Y2 is selected as the best choice, while the other
two are rejected from further consideration.
Table 4. Design range of material properties

Coating property | Desirable value | Property weight
Corrosion resistance (C1) | Very good | Very high
Wear resistance (C2) | Fair | Low
Appearance (C3) | Good | High
Hardness (C4) | (180,200,220) | Medium
Thermal conductivity (C5) | (180,200,220) | Low
Table 5. System range of materials
Material
Y1
Y2
Y3
4.
C1
Poor
Very good
Good
C2
Good
Fair
Very poor
C3
Good
Good
Poor
C4
(247,275,303)
(166,185,204)
(75,83,91)
C5
(284,305,336)
(213,237,261)
(119,132,145)
Conclusions
In this paper, axiomatic design principles under a fuzzy environment are used for solving two material selection problems. For the first example of nozzle material selection, the derived results are observed to be more logical than those obtained by the fuzzy suitability index method. In the second example of coating material selection, the top-ranked material exactly matches the observation of the past researchers. This multi-criteria decision-making method is capable of dealing with quantitative and qualitative data simultaneously. In this method, unworthy alternatives get rejected at an early stage using simple mathematical calculations, resulting in a more accurate ranking of only the feasible alternatives.
References
Athanasopoulos, G., Riba, C.R., Athanasopoulos, C. A decision support system for coating
selection based on fuzzy logic and multi-criteria decision making. Expert Systems with
Applications, 2009, 36(8), 10848-10853.
Giachetti, R.E. A decision support system for material and manufacturing process selection.
Journal of Intelligent Manufacturing, 1998, 9, 265-276.
Kulak, O., Kahraman, C. Fuzzy multi-attribute selection among transportation companies
using axiomatic design and analytic hierarchy process. Information Sciences, 2005, 170,
191-210.
Kulak, O., Durmusoglu, M.B., Kahraman, C. Fuzzy multi-attribute equipment selection based
on information axiom. Journal of Materials Processing Technology, 2005, 169, 337-345.
Liao, T.W. A fuzzy multi-criteria decision-making method for material selection.
Journal of Manufacturing Systems, 1996, 15(1), 1-12.
Rao, R.V., Patel, B.K. A subjective and objective integrated multiple attribute decision making
method for material selection. Materials & Design, 2010, 31, 4738-4747.
Rathod, M.K., Kanzaria, H.V. A methodological concept for phase change material selection
based on multiple criteria decision analysis with and without fuzzy
environment. Materials & Design, 2011, 32, 3578-3585.
Sharif Ullah, A.M.M., Harib, K.H. An intelligent method for selecting optimal materials and its
application. Advanced Engineering Informatics, 2008, 22, 473-483.
Wang, M.J., Chang, T.C. Tool steel materials selection under fuzzy environment. Fuzzy Sets and Systems, 1995, 72, 263-270.
Suh, N.P. The Principles of Design, Oxford University Press, New York, 1990.
Suh, N.P. Axiomatic Design: Advances and Applications, Oxford University Press, New York, 2001.
Modeling and Optimization of MRR in Powder Mixed EDM
Ashvarya Agrawal*, Avanish Kumar Dubey
Motilal Nehru National Institute of Technology, Allahabad-211004, Uttar Pradesh, India
*Corresponding author (e-mail: ash21agrawal@gmail.com)
Electric discharge machining (EDM) is the earliest known and the most widely used non-conventional machining process, used to machine electrically conductive materials by removing material through electrical discharges in a dielectric fluid. It can be used to machine complex geometries even in difficult-to-machine materials. Powder mixed electric discharge machining (PMEDM) is one of the most useful advancements in the capabilities of the EDM process. In PMEDM, an electrically conductive powder is mixed in the dielectric fluid of EDM, which reduces the insulating strength of the dielectric fluid and increases the spark gap between the electrodes; this in turn makes the process more stable and improves the material removal rate (MRR). This paper presents experimental research on the MRR of a metal matrix composite (MMC) machined using PMEDM. The results show that MRR improves in PMEDM when proper machining parameters are selected.
Keywords: PMEDM, MMC, CCRD, RSM, GA, MRR
1. Introduction
Electric discharge machining (EDM) is one of the most extensively used non-conventional material removal processes. It can be successfully employed to machine electrically conductive parts regardless of their hardness (Bulent et al., 2012). Even highly delicate sections and weak materials can be machined without any fear of distortion, because there is no direct contact between the tool and the workpiece (Kansal et al., 2007). In this process, material is removed by controlled erosion through a series of electric sparks between the tool and the workpiece. The thermal energy of the sparks leads to intense heat conditions on the workpiece, causing melting and vaporization of the electrodes (Padhee et al., 2012).
Although EDM technology is widely used in mechanical manufacturing, its low efficiency and poor surface quality have been the key problems restricting its development (Zhao et al., 2002). To address these problems, powder mixed EDM (PMEDM) has recently emerged as one of the advanced techniques for enhancing the capabilities of EDM (Kansal et al., 2006). In PMEDM, an electrically conductive powder of suitable grain size is added to the dielectric fluid. This powder reduces the insulating strength of the dielectric fluid and increases the spark gap between the tool and the workpiece. The enlarged spark gap between the electrodes makes the flushing of debris uniform and, as a result, the process becomes more stable (Gurule et al., 2012). However, excessive contamination may increase spark concentration, i.e. arcing, leading to an unstable, inefficient process (Kumar et al., 2010). Arcing happens when a series of discharges strikes repeatedly at the same spot (Tzeng et al., 2001). PMEDMed surfaces also have high resistance to corrosion and abrasion (Gurule et al., 2012).
Jeswani (1981) investigated the effects of the addition of graphite powder into kerosene on the machining of tool steels and observed an increase in spark gap; an increase in MRR of 60% was also reported. Padhee et al. (2012) performed experiments on EN 31 steel with silicon powder suspended in the kerosene oil and reported that the powder mixed dielectric promotes the reduction of surface roughness and enhances MRR. Tzeng et al. (2001) investigated the effect of different powders and their properties in powder mixed electric discharge machining of mould steel SKD-11 and reported that the spark gap increases with increasing particle size, and that Cr produced the greatest MRR, followed by Al, SiC and Cu; for TWR the reverse trend was
observed. Singh et al. (2006) studied the effect of PMEDM on cast aluminium MMCs using silicon carbide powder in the dielectric and reported better MRR. They also optimized the process parameters using the Taguchi methodology.
Figure 1. (a) Experimental setup of PMEDM, (b) Stirrer, (c) Machined surface
The literature survey revealed that most of the research work has been done on metals, alloys and Al/SiC MMCs. No work has been reported on copper-iron-graphite MMC, which is being widely used in the automobile industries. In the present research, an experimental study has been done on a copper based MMC with varying peak current, pulse-on time, pulse-off time and powder concentration to study the process performance in terms of MRR. Further, a hybrid approach of RSM and GA has been used to maximize the MRR.
2. Experimental planning methods
2.1 Response surface method
Response surface method is a collection of statistical and mathematical methods that are
useful in modeling and optimization of engineering science problems. The main objective of this
technique is to optimize the responses that are affected by various input process parameters.
RSM also establishes the relationship between the controllable input parameters and the
obtained responses, Dubey et al (2008). When RSM is used for modeling and optimization of
manufacturing science problems, sufficient data is collected through designed experimentation.
Usually the response surface is represented graphically, where one of the responses (Φ) is
plotted versus two of the input parameters (x1 and x2). Generally a second order regression
model, as given below, is utilized in RSM:
y = b0 + Σ(i=1 to k) bi xi + Σ(i=1 to k) bii xi² + ΣΣ(i<j) bij xi xj    (1)
where all b's are regression coefficients, which can be determined by the least square method. It is
important for a second-order model to provide optimum prediction of the process behavior
within the specified range of all input process parameters. So the model should have a
reasonably consistent and stable prediction of responses at the points of interest xi. This can be
achieved by central composite rotatable design (CCRD). According to CCRD, the standard error is
kept the same for all points that are at the same distance from the centre of the region.
Mathematically,
x1² + x2² + … + xk² = constant    (2)
In CCRD the total number of runs required is 2^k + 2k + more than one run at the centre, Cochran
and Cox (1959).
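As an illustrative sketch (not part of the original study), the regression coefficients of a second-order model such as eqn. (1) can be estimated by least squares; the two-variable example below uses hypothetical, noise-free data:

```python
import numpy as np

def quadratic_design_matrix(X):
    """Build the second-order design matrix [1, xi, xi^2, xi*xj] for k = 2 factors."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Hypothetical "true" model: y = 1 + 2*x1 - x2 + 0.5*x1^2 + 0.1*x2^2 - 0.3*x1*x2
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(31, 2))     # 31 runs, the same count as a k = 4 CCRD
true_b = np.array([1.0, 2.0, -1.0, 0.5, 0.1, -0.3])
y = quadratic_design_matrix(X) @ true_b  # noise-free responses for clarity

# Least-squares estimate of the regression coefficients b
b_hat, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(b_hat)  # recovers true_b on noise-free data
```

With experimental (noisy) responses the same call gives the least-squares fit instead of an exact recovery.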
2.2 Genetic algorithm (GA) for optimization
Genetic algorithms (GA) are computerized search and optimization algorithms based on the
mechanics of natural genetics and selection (Deb, 1995). GA is based on Darwin's principle of
"survival of the fittest". The algorithm starts with the creation of a random population. The individuals
with the best fitness are selected to form the mating pairs, and the new population is created
through the processes of cross-over and mutation. The new individuals are again tested for their
fitness and this cycle is repeated until some termination criteria are satisfied.
3. Experimental setup and design of experiments
The experiments have been performed on an Electronica EMS 5030 EDM machine. The
work material used is iron-copper-graphite, an MMC widely used in the automobile and aviation
industries. The tool material used is mild steel of 20 mm diameter and the dielectric fluid used is
SEO 450. Each run has been carried out for half an hour. Fine graphite powder of 200 mesh size
was used for mixing into the dielectric. A stirrer has been employed in the machining tank to avoid
particle settling and ensure proper mixing of the powder into the dielectric fluid. Magnetic forces
were used to separate the debris from the dielectric fluid; for this purpose, a few permanent
magnets were placed in the machining tank. The variable input process parameters taken are
pulse-on time, pulse-off time, peak current and additive powder concentration. The
response analyzed is MRR, which was calculated using the following formula:
MRR = (work piece weight loss) / (machining time)  (g/min)    (3)
The weight loss was obtained by weighing the work piece before and after machining.
For this purpose an electronic balance (Denver make) was used. The control factors and their
values are shown in Table 1. In the present case, with k = 4 and seven central point runs, the total
number of runs = 2^4 + 2×4 + 7 = 31. The experimental matrix used in CCRD and the observed
values of MRR are shown in Table 2.
Table 1. Control factors and their levels

Symbol   Factors                Unit   Coded levels
                                       -2    -1    0     1     2
X1       Pulse-on time          µs     5     10    15    20    25
X2       Pulse-off time         µs     15    20    25    30    35
X3       Peak current           A      1     2     3     4     5
X4       Powder concentration   g/l    0     1     2     3     4
4. Modeling and optimization
4.1 Response surface modeling
The second order response surface model for MRR has been developed from the
experimental response values obtained using CCRD (Table 2). The model developed using
MINITAB software is
MRR (g/min) = −0.0406 + 0.00093 x1 + 0.00123 x2 + 0.00388 x3 + 0.0145 x4 − 0.000019 x1² − 0.000013 x2² + 0.00124 x3² + 0.000537 x4² − 0.000018 x1x2 + 0.000133 x1x3 + 0.000371 x1x4 + 0.000094 x2x3 − 0.000362 x2x4 − 0.00332 x3x4.    (4)
Table 2. Experimental observations using CCRD

Experiment   Factor levels         MRR
No.          A    B    C    D      (g/min)
1            1   -1   -1    1      0.03351
2            0    2    0    0      0.026
3            1    1   -1   -1      0.01432
4           -1   -1   -1    1      0.02135
5           -1   -1    1    1      0.02912
6            1    1    1    1      0.03903
7           -1   -1   -1   -1      0.00913
8            0    0    0   -2      0.018
9           -1    1    1   -1      0.03432
10           0    0    0    0      0.02419
11          -1    1    1    1      0.02315
12           0    0    0    0      0.02831
13           0    0    0    2      0.037
14           2    0    0    0      0.036
15           0    0    0    0      0.02219
16           0    0    0    0      0.02562
17           1   -1    1   -1      0.03891
18           0    0    0    0      0.02427
19           1    1   -1    1      0.026
20          -1    1   -1   -1      0.01134
21           0    0   -2    0      0.009323
22           0   -2    0    0      0.022193
23          -2    0    0    0      0.011
24           1   -1   -1   -1      0.01592
25           1   -1    1    1      0.04412
26          -1    1   -1    1      0.00831
27           0    0    0    0      0.02621
28           0    0    0    0      0.05128
29          -1   -1    1   -1      0.0301
30           1    1    1   -1      0.03912
31           0    0    2    0      0.02722

4.2 Genetic algorithm based optimization
The objective function of the optimization problem can be stated as below:
Maximize MRR = −0.0406 + 0.00093 x1 + 0.00123 x2 + 0.00388 x3 + 0.0145 x4 − 0.000019 x1² − 0.000013 x2² + 0.00124 x3² + 0.000537 x4² − 0.000018 x1x2 + 0.000133 x1x3 + 0.000371 x1x4 + 0.000094 x2x3 − 0.000362 x2x4 − 0.00332 x3x4    (5)
Find x1, x2, x3 and x4 within the following ranges of the process input parameters:
5 ≤ x1 ≤ 25, 15 ≤ x2 ≤ 35, 1 ≤ x3 ≤ 5, 0 ≤ x4 ≤ 4
The critical parameters of GA are the size of the population, cross-over rate, mutation rate,
and number of generations. After trying different combinations of GA parameters, a population
size of 20, cross-over rate of 0.8, mutation rate of 0.01 and 51 generations have been taken in
the present study. The objective function in Eq. (5) has been solved without any constraint in the
specified intervals. The optimum value of the objective function has been found as 0.0645
g/min and the corresponding values of pulse-on time (x1), pulse-off time (x2), peak current (x3)
and powder concentration (x4) have been found as 20.128 µs, 33.88 µs, 4.9 A and 0.016 g/l.
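The paper does not give the GA implementation, so the following is only a minimal real-coded GA sketch for maximizing the fitted model of eqn. (5) within the stated bounds; the population size, generation count and mutation settings here are hypothetical choices made for robustness, not the authors' settings:

```python
import random

LB = [5.0, 15.0, 1.0, 0.0]   # lower bounds on x1..x4
UB = [25.0, 35.0, 5.0, 4.0]  # upper bounds on x1..x4

def mrr(x):
    """Fitted second-order MRR model of eqn. (5), in g/min."""
    x1, x2, x3, x4 = x
    return (-0.0406 + 0.00093*x1 + 0.00123*x2 + 0.00388*x3 + 0.0145*x4
            - 0.000019*x1**2 - 0.000013*x2**2 + 0.00124*x3**2 + 0.000537*x4**2
            - 0.000018*x1*x2 + 0.000133*x1*x3 + 0.000371*x1*x4
            + 0.000094*x2*x3 - 0.000362*x2*x4 - 0.00332*x3*x4)

def clip(x):
    """Keep a candidate inside the box constraints."""
    return [min(max(v, lo), hi) for v, lo, hi in zip(x, LB, UB)]

def ga_maximize(pop_size=60, generations=200, pc=0.8, pm=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in zip(LB, UB)] for _ in range(pop_size)]
    pop[0] = [15.0, 25.0, 3.0, 2.0]        # seed with the design centre point
    for _ in range(generations):
        pop.sort(key=mrr, reverse=True)
        new_pop = pop[:2]                  # elitism: carry over the two best
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=mrr)
            p2 = max(rng.sample(pop, 3), key=mrr)
            # blend (arithmetic) cross-over
            if rng.random() < pc:
                a = rng.random()
                child = [a*u + (1 - a)*v for u, v in zip(p1, p2)]
            else:
                child = list(p1)
            # Gaussian mutation, scaled to each variable's range
            child = [v + rng.gauss(0, 0.1*(hi - lo)) if rng.random() < pm else v
                     for v, lo, hi in zip(child, LB, UB)]
            new_pop.append(clip(child))
        pop = new_pop
    best = max(pop, key=mrr)
    return best, mrr(best)

best_x, best_mrr = ga_maximize()
print(best_x, best_mrr)
```

The sketch drives the solution toward the high-current, low-concentration region of the box, giving a model value comparable to the 0.0645 g/min reported above; the exact point depends on the GA parameters and random seed.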
5. Conclusions
The optimization of powder mixed EDM of iron-copper-graphite using response surface
modeling and the genetic algorithm technique has been carried out. The following conclusions
have been drawn on the basis of the results obtained:
(1) The developed model for MRR, with a mean square error of 0.001742 %, is in good
agreement with the experimental results.
(2) The optimum levels of the control factors pulse-on time, pulse-off time, peak current and
powder concentration have been found as 20.128 µs, 33.88 µs, 4.9 A and 0.016 g/l, respectively.
(3) Validation has been performed in order to verify the result.
References
Cochran W. G., Cox G. M. Experimental designs. Bombay: Asia Publishing House, 1959
Deb Kalyanmoy. Optimization for engineering design: algorithms and examples. PHI Learning Private Limited, New Delhi, 1995 (reprinted August 2010)
Dubey Avanish Kumar and Yadava Vinod. Multi-objective optimization of laser beam cutting process. Optics and Laser Technology 40 (2008) 562-570
Ekmekci Bulent and Ersoz Yusuf. How suspended particles affect surface morphology in powder mixed electrical discharge machining (PMEDM). The Minerals, Metals and Materials Society and ASM International, 2012
Gurule N. B. and Nandurkar K. N. Effect of tool rotation on material removal rate during powder mixed electric discharge machining of die steel. International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, Vol. 2, Issue 8, August 2012
Jeswani M. L. Effect of the addition of graphite powder to kerosene used as the dielectric fluid in electrical discharge machining. Wear 1981, 70 (2), 133-139
Kansal H. K., Singh Sehijpal and Kumar Pradeep. Performance parameters optimization (multi-characteristics) of powder mixed electric discharge machining (PMEDM) through Taguchi's method and utility concept. Indian Journal of Engineering and Materials Sciences, Vol. 13, June 2006, pp. 209-216
Kansal H. K., Singh Sehijpal and Kumar Pradeep. Technology and research developments in powder mixed electric discharge machining (PMEDM). Journal of Materials Processing Technology 184 (2007) 32-41
Kumar Anil, Maheshwari Sachin, Sharma Chitra and Beri Naveen. Research developments in additive mixed electrical discharge machining (AEDM): a state of art review. Materials and Manufacturing Processes, 25:10, 1166-1180
Padhee Soumyakant, Nayak Niharranjan, Panda S. K., Dhal P. R. and Mahapatra S. S. Sadhana, Vol. 37, Part 2, April 2012, pp. 223-240. © Indian Academy of Sciences
Singh S., Maheshwari S. and Dey A. Electrical discharge machining of aluminium metal matrix composites using powder suspended dielectric fluid. Journal of Mechanical Engineering 2006, 57(5), 271-290
Tzeng Y. F. and Lee C. Y. Effects of powder characteristics on electrodischarge machining efficiency. Int J of Adv Manuf Technol (2001) 17:586-592
Zhao W. S., Meng Q. G. and Wang Z. L. The application of research on powder mixed EDM in rough machining. Journal of Materials Processing Technology 129 (2002) 30-33
Thermal Design of a Shell and Tube Heat Exchanger
Hemant Upadhyay1, Avdhesh Kr. Sharma2*
1 IITM, Murthal (Sonepat), Haryana, India
2 Deenbandhu Chhotu Ram University of Science and Technology, Murthal (Sonipat), Haryana, India
*Corresponding author (e-mail: avdhesh_sharma35@yahoo.co.in)
A generalized approach for designing a shell and tube heat exchanger (STHEx) has
been developed for a given heat transfer duty. The objective function, which includes the total
(fixed and operational) cost, has been minimized using a non-traditional approach,
namely the Simulated Annealing (SA) technique. After validation, the routine was used to
study various materials for the construction of the STHEx. In the case of an expensive material,
the optimization model minimizes the total cost by decreasing the surface area and
forcing high velocities (increasing the overall heat transfer coefficient and friction factor)
on the tube and shell sides.
1. Introduction
The shell and tube heat exchanger (STHEx) is a versatile piece of equipment and has vast
industrial applications. Different approaches (for abating thermal stress, preventing
leakage, controlling corrosion and facilitating cleaning) have been used to optimize STHEx
design. Thermal and hydraulic performance, economic considerations, space and weight
considerations, operational and maintenance considerations and other important parameters
are also a function of the heat exchanger design. Thus, obtaining a good STHEx design is a
complex activity. Commercially, a variety of optimization tools are available, but many of them
have limited generality. Investigators have employed different optimization techniques (Selbas
et al. 2006, Caputo et al. 2008, Fesanghary et al. 2009). Selbas et al. (2006) employed the
genetic algorithm (GA) technique to minimize the total cost of a STHEx. Caputo et al. (2008)
also used the GA technique to arrive at an optimum heat exchanger architecture. On the other hand,
Fesanghary et al. (2009) highlighted global sensitivity analysis in order to identify the
critical design parameters using the Harmony Search algorithm. Hajabdollahi et al. (2011)
combined the GA technique and the particle swarm optimization (PSO) method to perform
thermoeconomic optimization of a shell and tube condenser. Rao & Patel (2013) modified the
teaching-learning based optimization (TLBO) algorithm to customize it for large thermal systems;
they used the modified TLBO algorithm for multi-objective optimization of STHEx and plate-fin
heat exchangers (PFHEx); while Hadidi et al. (2013) applied the imperialist competitive algorithm
(ICA) for cost minimization with higher accuracy and computation speed.
Traditional approaches are highly iterative and time consuming. Thus, in this work, a
generalized approach for STHEx design has been developed. Herein, the total cost of
equipment/material and the operating cost were combined to define the objective function. The
non-traditional Simulated Annealing (SA) technique has been employed. Studies on cost
optimization of STHEx have been carried out by selecting various construction materials.
2. Formulation
Heat transfer duty of the STHEx can be written in terms of the LMTD and the flow correction factor as
Q = (UA)o F [(Th,i − Tc,o) − (Th,o − Tc,i)] / ln[(Th,i − Tc,o)/(Th,o − Tc,i)]    (1)
In eqn. (1), Ao is the surface area (= π Nt do L) and Nt is the number of tubes. U and T correspond to the
overall heat transfer coefficient and stream temperature. Subscripts i, o, h and c designate
inlet and outlet conditions, and the hot and cold flow streams. The flow correction factor, F, is
defined to match the flow distribution in the STHEx (Vengateson, 2010) as
F = [√(R² + 1)/(R − 1)] · ln[(1 − P)/(1 − PR)] / ln{[2 − P(R + 1 − √(R² + 1))]/[2 − P(R + 1 + √(R² + 1))]}   (for R ≠ 1)    (2a)

F = [P√2/(1 − P)] / ln{[2 − P(2 − √2)]/[2 − P(2 + √2)]}   (for R = 1)    (2b)

where R = (Th,i − Th,o)/(Tc,o − Tc,i) and P = (Tc,o − Tc,i)/(Th,i − Tc,i).
The overall heat transfer coefficient can be written as
1/(UAo) = 1/(hiAi) + Rf,i/Ai + ln(do/di)/(2πkwL) + 1/(hoAo) + Rf,o/Ao    (3)
(3)
The inner tube diameter (di) is assumed to be 80% of the outer tube diameter (do).
kw and h are the thermal conductivity of the wall and the convective heat transfer coefficient, respectively.
The Nusselt number is used to define h on both the tube and shell sides, as given in Table 1.
Table 1. Nusselt number and friction factor for STHEx (Caputo et al, 2008)

Side         Nusselt number                                                        Friction factor
Tube-side    Nut = [(ft/8)(Ret − 1000)Prt] / [1 + 12.7(ft/8)^0.5 (Prt^0.67 − 1)]   ft = (1.82 log10 Ret − 1.64)^−2,
             × [1 + (di/L)^0.67],  2300 < Ret < 10000                              2300 < Ret < 10000
Shell-side   Nus = 0.36 Res^0.55 Prs^(1/3) (µ/µw)^0.14,  2000 < Res < 10^6         fs = 2 × 0.72 Res^−0.15,  Res < 40,000
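As a quick numerical sketch (illustrative only, not from the paper), the flow correction factor of eqns. (2a)-(2b) can be coded directly; the sample temperatures below are the stream data of Table 4:

```python
import math

def correction_factor(th_i, th_o, tc_i, tc_o):
    """Flow correction factor F for a 1-2 STHEx, eqns. (2a)-(2b)."""
    R = (th_i - th_o) / (tc_o - tc_i)
    P = (tc_o - tc_i) / (th_i - tc_i)
    if abs(R - 1.0) > 1e-9:                                   # eqn. (2a)
        s = math.sqrt(R**2 + 1)
        num = (s / (R - 1)) * math.log((1 - P) / (1 - P * R))
        den = math.log((2 - P * (R + 1 - s)) / (2 - P * (R + 1 + s)))
    else:                                                     # eqn. (2b)
        num = P * math.sqrt(2) / (1 - P)
        den = math.log((2 - P * (2 - math.sqrt(2))) / (2 - P * (2 + math.sqrt(2))))
    return num / den

# Methanol cooled 95 -> 40 °C on the shell side, sea water heated 25 -> 40 °C (Table 4)
F = correction_factor(95, 40, 25, 40)
print(round(F, 3))  # about 0.81
```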
The ratio of tube pitch to tube outside diameter (do) varies from 1.25 to 2. Herein, the
most common value of 1.25 is considered. For square and triangular pitch, the relation
reported in Patel & Rao (2010) in terms of Nt, shell diameter (Ds) and do can be written as
Nt = K (Ds/do)^n    (4)
In eqn. (4) the parameters K and n are curve fitted to the data (see Fig. 1) for square and
triangular pitch layouts from Hadidi et al. (2013) in terms of the number of passes (Np) as
K (or n) = ai + bi Np + ci Np² + di Np³ + ei Np⁴    (5)
Table 2. Empirical constants in eqn. (5)

Constants   K (Square)   K (Triangle)   n (Square)   n (Triangle)
ai          0.4509       0.466403       1.71729      1.95854
bi          -0.372       -0.205365      0.81295      0.27798
ci          0.1633       0.068946       -0.39225     -0.11467
di          -0.0289      -0.011660      0.07342      0.02140
ei          0.0017       0.000676       -0.00441     -0.00125
Figure 1. Curve fits constants (a) K vs. Np , (b) n vs. Np for triangular and square pitch layout
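To illustrate eqns. (4)-(5) (a sketch, not the authors' code), K and n can be evaluated from the Table 2 constants and used to estimate the tube count; the shell and tube diameters below are sample inputs of the same order as the optimized designs in Table 5:

```python
# Table 2 constants for a square pitch layout: [ai, bi, ci, di, ei]
K_SQUARE = [0.4509, -0.372, 0.1633, -0.0289, 0.0017]
N_SQUARE = [1.71729, 0.81295, -0.39225, 0.07342, -0.00441]

def poly(coeffs, np_passes):
    """Evaluate eqn. (5): a + b*Np + c*Np^2 + d*Np^3 + e*Np^4."""
    return sum(c * np_passes**k for k, c in enumerate(coeffs))

def tube_count(ds, do, np_passes=2):
    """Estimate Nt = K*(Ds/do)^n, eqn. (4), for a square pitch layout."""
    K = poly(K_SQUARE, np_passes)
    n = poly(N_SQUARE, np_passes)
    return K * (ds / do) ** n

# Sample inputs: Ds = 0.83 m, do = 0.016 m, two tube-side passes
print(tube_count(0.83, 0.016))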
Pumping power (Pp) can be defined in terms of the pressure drops on the tube and shell sides as
Pp = (1/ηp) [(mt/ρt) ΔPt + (ms/ρs) ΔPs]    (6)
where ηp is the pumping efficiency, which is fixed at 0.85 in the present calculations.
The tube and shell side pressure drops are computed (Sinnot et al., 1996) as
ΔPt = (ρt vt²/2) [(L/di) ft + 2.5] Np    (7)
ΔPs = fs (ρs vs²/2) (L/B) (Ds/Dhs)    (8)
The hydraulic diameter of the shell can be expressed as
Dhs = 4 (Pt² − π do²/4) / (π do)  for square pitch
Dhs = 4 (0.43 Pt² − 0.5 π do²/4) / (0.5 π do)  for triangular pitch    (9)
3. Simulated annealing (SA) technique and objective function
3.1 Simulated annealing (SA) technique
The non-traditional "Simulated Annealing" (SA) optimization technique has been employed
(Deb, 1995). It resembles the cooling of molten metal through the annealing process. When the
temperature is high, atoms in the molten metal move freely, but when the temperature
subsides, their movement gets restricted. If the temperature is reduced drastically, the crystalline
state may not be achieved and the system may end up in a polycrystalline state (a higher energy
state than the crystalline state). Thus, for achieving the absolute minimum energy state, the
temperature needs to be reduced slowly. According to the Boltzmann probability distribution, a
system in thermal equilibrium has a probabilistic energy distribution as
P(E) = exp(−E/kT)    (10)
Here k is the Boltzmann constant. From eqn. (10), a system at a high temperature has an almost
uniform probability of being at any energy state, but at a low temperature it has only a small
probability of being at a high energy state. Therefore, assuming the search process follows the
Boltzmann probability distribution, the convergence can be controlled via the temperature.
SA is a point-by-point method, where iterations start with an initial point and a
high temperature. A second point is created at random in the vicinity of the initial point and the
difference in the function values (ΔE) at these two points is calculated. If the second point has
a smaller function value, it is accepted; otherwise it is accepted with a
probability exp(−ΔE/T). This completes the first iteration of the SA procedure. In the next generation,
another point is created at random in the neighborhood of the current point and the Metropolis
algorithm is used to accept or reject it. In order to simulate thermal equilibrium at
every temperature, a number of points are usually tested at a particular temperature before
reducing the temperature. Termination occurs at a sufficiently small temperature.
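The loop described above can be sketched in a few lines. This is only an illustrative implementation (the paper's own code is not given), shown minimizing Rosenbrock's function, which the authors also used for validation; the cooling schedule and step size are hypothetical:

```python
import math
import random

def rosenbrock(p):
    x, y = p
    return 100 * (y - x**2)**2 + (1 - x)**2

def simulated_annealing(f, x0, t0=10.0, t_min=1e-4, cooling=0.95,
                        moves_per_temp=100, seed=0):
    rng = random.Random(seed)
    current, fc = list(x0), f(x0)
    best, fb = list(x0), fc
    t = t0
    while t > t_min:
        for _ in range(moves_per_temp):        # simulate equilibrium at this temperature
            cand = [v + rng.uniform(-0.1, 0.1) for v in current]
            dE = f(cand) - fc
            # Metropolis criterion: always accept improvements,
            # accept worse points with probability exp(-dE/T)
            if dE < 0 or rng.random() < math.exp(-dE / t):
                current, fc = cand, f(cand)
                if fc < fb:
                    best, fb = list(current), fc
        t *= cooling                           # slow (geometric) cooling
    return best, fb

best, fbest = simulated_annealing(rosenbrock, [-1.2, 1.0])
print(best, fbest)
```

Reducing `cooling` toward 1.0 reduces the temperature more slowly, trading run time for a better chance of reaching the global minimum, exactly as the annealing analogy above suggests.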
3.2 Optimization: Objective function
The life cycle cost of the STHEx (in $) can be written in terms of the fixed cost (CF) and operating cost (COD) as
CT = CF + COD    (11)
The fixed cost involves the cost due to material, which depends on the surface area A. Hall's correlations
(Taal, 2003) have been used to calculate the fixed cost (in $) of the STHEx as
CF = κ (ai + bi A^ci)    (12)
Here ai, bi and ci are constants (see Table 3) and κ is the currency exchange factor from $ to €.
Table 3. Constants in eqn. (12) for different shell and tube materials (Taal, 2003)

Material: shell-tube     ai      bi    ci
Carbon steel (CS)-CS     7000    360   0.8
SS-SS                    10000   324   0.91
Ti-Ti                    17500   699   0.93
The operating cost is considered to be paid annually over the lifespan of the STHEx and,
therefore, it is a function of the inflation rate (f) in the price of electricity. Thus, the operating cost can
be computed in terms of the present worth of all future annual payments on electricity bills as
COD = Pp CE hr PWF    (13)
where Pp, CE and hr are the pumping power, energy cost (€/kWh) and
operational hours per year, respectively. PWF is the present worth factor, which is defined in
terms of the expected life (k), interest rate (i) and inflation rate (f) as (Duffie & Beckman 1980)
PWF = [1 − ((1 + f)/(1 + i))^k] / (i − f)   (for i ≠ f)
PWF = k / (1 + i)   (for i = f)    (14)
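A small illustrative sketch of eqns. (13)-(14), using the economic settings quoted later in Section 4 (10-year life, 10% interest, 0% inflation) and a hypothetical 2 kW pumping power:

```python
def pwf(k, i, f):
    """Present worth factor, eqn. (14)."""
    if abs(i - f) > 1e-12:
        return (1 - ((1 + f) / (1 + i)) ** k) / (i - f)
    return k / (1 + i)

def operating_cost(pp_kw, ce_per_kwh, hours, k, i, f):
    """Discounted operating cost COD, eqn. (13)."""
    return pp_kw * ce_per_kwh * hours * pwf(k, i, f)

factor = pwf(k=10, i=0.10, f=0.0)
print(round(factor, 3))  # about 6.145

# Hypothetical 2 kW pumping power at € 0.12/kWh for 7000 hrs/year
cod = operating_cost(pp_kw=2.0, ce_per_kwh=0.12, hours=7000, k=10, i=0.10, f=0.0)
print(round(cod))
```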
4. Results and discussion
The source code for the SA optimization was tested against Rosenbrock's saddle
function for minimization before implementation. Thereafter, the total cost of the STHEx was used
as the objective function (eqn. 11). Two tube side passes and one shell side pass have been
considered. The resistance due to the tube wall is ignored. The working fluid allocation on both
sides, operating conditions, physical properties of the working fluids, fouling resistances and other
parameters are assigned as in Table 4. For a heat duty of 4.34 MW with square pitch, the
system life, electricity cost, working hours, and interest and inflation rates are fixed at 10 years,
€ 0.12/kWh, 7000 hrs, 10% and 0%, respectively. Further imposing the constraints
(1) 0.015 m ≤ tube outer diameter ≤ 0.051 m, (2) 0.05 m ≤ baffle spacing ≤ 0.5 m and
(3) 0.1 m ≤ shell diameter ≤ 1.5 m, the STHEx optimization code was run. Optimization results
are compared with the data of Caputo et al (2008) and Hadidi et al (2013) in Table 5.
Table 4. Operating conditions and other properties for 4.34 MW heat duty (Caputo et al, 2008)

Stream                 Flow rate   Tinlet   Toutlet   ρ         Cp         µ         k         Rf
                       (kg/s)      (°C)     (°C)      (kg/m³)   (J/kg.K)   (Pa.s)    (W/m.K)   (m²K/W)
Shell side: methanol   27.8        95       40        750       2850       0.00034   0.19      0.00033
Tube side: sea water   68.9        25       40        995       4200       0.00080   0.59      0.00020
Table 5. Case study results for square pitch

Parameter     Caputo et al   Hadidi et al   Present work
do (m)        0.016          0.015          0.015
B (m)         0.5            0.5            0.5
Ds (m)        0.830          0.879          0.886
L (m)         3.379          3.107          3.072
Nt            1567           1752           1783
Ret           10939          10429          10285
ht (W/m²K)    3762           3864           3842
Pdt (Pa)      4298           5122           4955
Dhs (m)       0.011          0.011          0.0107
Res           11075          9917           9833
hs (W/m²K)    1740           1740           1735
Pds (Pa)      13267          12367          12161
U (W/m²K)     660            677            675
A (m²)        262.8          256.6          258
CF (€)        49259          48370          48600
COD (€)       5818           5995           6072
CT (€)        55077          54366          54673
It is revealed that any increase in the overall heat transfer coefficient due to the selection of
a smaller geometry may reduce the heat exchange surface area and thus save cost on
material/equipment. However, the smaller geometry warrants more pumping cost. This trade-off
between material/equipment cost (i.e., transfer surface area) and pumping cost decides the
optimum condition (i.e., minimum total cost). Table 5 shows that the results are in good
agreement with the data of Hadidi et al (2013) and Caputo et al (2008). The comparison with
Caputo et al shows a 13.8 % increase in the number of tubes (which reduces the flow velocity in
the tubes) and consequently decreases the tube side heat transfer coefficient by 2%, while the
corresponding decrease in shell diameter (which increases the flow velocity on the shell side)
improves the heat transfer coefficient on the shell side by 0.30%. Thus, the net increase of Uo is
2.3%, which in turn leads to a 1.8 % reduction in surface area and a 9.1% reduction in exchanger
length. The corresponding decrease in initial cost (due to the reduction in heat exchanger area)
has been worked out to be 1.3%. The decrease in flow velocity in the tubes results in a lower tube
side pressure drop. On the other hand, the high shell side velocity increases the shell side
pressure drop by 8.3%. Thus, the net decrease in total cost is found to be 0.7% as compared to
Caputo et al.
After validation, the STHEx optimization model was used to study the feasibility of
selecting expensive/different materials for STHEx construction (Table 6). In these
calculations, the costs are converted into Rupees. The results show that the equipment cost is
very sensitive to the construction material selected for the shell and tubes. If expensive
materials are employed, Uo and the pressure drop need to be increased in order to arrive at the
minimum total cost. Thus the shell diameter needs to be decreased, while the exchanger length
should be increased, keeping the tube diameter at its minimum value.
Table 6. Effect of various materials on STHEx design

Parameter    CS-CS     SS-SS     Ti-Ti
L (m)        2.858     3.370     4.052
B (m)        0.500     0.482     0.411
Ds (m)       0.924     0.833     0.735
Nt           1963      1550      1163
Ret          9310      11791     15715
Pdt (Pa)     3927      6794      13119
Res          9430      10853     14413
Pds (Pa)     10918     15641     32847
U (W/m²K)    657       705       782
A (m²)       264       246       222
CF (Rs)      1833660   2813400   5946600
COD (Rs)     209580    325260    658500
CT (Rs)      2043240   3138660   6605100
5. Conclusion
A generalized thermal design methodology for STHEx has been developed for a given
heat duty. The objective function has been developed in terms of the total (material and
operational) cost. The Simulated Annealing (SA) technique is employed for optimization by
considering a workable range of constraints, including the outside tube diameter, shell diameter
and baffle spacing. The optimization code was validated with data from the literature. The routine
was used to study the role of various construction materials for the shell and tubes on the
optimum total, material and operating costs. The results show that the total and equipment costs
are very sensitive to the construction material selected for the shell and tubes.
This work can be extended to exergy and thermo-economic analyses of STHEx. The
generalized optimization approach for STHEx needs to account for the effect of baffle cut on the
shell side heat transfer coefficient and friction factor in order to make more realistic predictions.
References
Caputo A.C., Pelagagge P.M., Salini P., 2008, 'Heat exchanger design based on economic optimization'. Applied Thermal Engineering; 28: 1151-1159.
Deb, K., 1995, 'Optimization for Engineering design: Algorithms and Examples', PHI, New Delhi.
Duffie J.A., Beckman W.A., 1980, 'Solar engineering of thermal processes', John Wiley & Sons.
Fesanghary M., Damangir E., Soleimani I., 2009, 'Design optimization of shell & tube heat exchangers using global sensitivity analysis & harmony search algorithm'. Applied Thermal Engineering; 29: 1026-1031.
Hadidi A., Hadidi M., Nazari A., 2013, 'A new design approach for shell-and-tube heat exchangers using imperialist competitive algorithm (ICA) from economic point of view'.
Hajabdollahi H., Ahmadi P., Dincer I., 2011, 'Thermoeconomic optimization of a shell and tube condenser using both genetic algorithm and particle swarm algorithm'. International Journal of Refrigeration; 34: 1066-1076.
Patel V., Rao R.V., 2010, 'Design optimization of shell-and-tube heat exchanger using particle swarm optimization technique'. Applied Thermal Engineering; 30: 1417-1425.
Rao R.V., Patel V., 2013, 'Multi-objective optimization of heat exchangers using modified teaching-learning-based optimization algorithm'. Applied Mathematical Modelling; 37: 1147-1162.
Selbas R., Kizilkan O., Reppich M., 2006, 'A new design approach for shell and tube heat exchangers using genetic algorithm from economic point of view'. Chemical Engineering & Processing; 45: 2068-2075.
Sinnot R.K., Coulson J.M., Richardson J.F., 1996, 'Chemical engineering design', Vol. 6, Butterworth-Heinemann, Boston.
Taal M., Bulatov I., Klemes J., Stehilik P., 2003, 'Cost estimation and energy price forecast for economic evaluation of retrofit projects'. Applied Thermal Engineering; 23: 1819-1835.
Vengateson U., 2010, 'Design of multiple shell and tube heat exchangers in series: E shell and F shell'. Chemical Engineering Research and Design; 88: 725-736.
Multi-Objective Optimization of Turning Operations using
Non-dominated Sorting Genetic Algorithm Enhanced with
Neural Network
Aditya Balu¹, Sharath Chandra Guntuku², Amit Kumar Gupta¹
1 Department of Mechanical Engineering, BITS-Pilani, Hyderabad Campus - 500078, AP, India
2 Department of Computer Science, BITS-Pilani, Hyderabad Campus - 500078, AP, India
Corresponding author: (email: f2010155@hyderabad.bits-pilani.ac.in)
This paper focuses on methods of optimising turning operations using models built with
data mining techniques such as artificial neural networks (ANN) and response
surface methodology (RSM). As the output parameters, tool wear and surface
roughness, are conflicting in nature, there is no single combination of parameters for
the best turning operation. A multi-objective optimization method, the non-dominated sorting
genetic algorithm-II (NSGA-II), is used to optimize the parameters. A data set of 27 points
from the literature is used, with the input parameters being the cutting speed, feed
rate and cutting time. The results demonstrate that the artificial neural network (ANN)
model is suitable for predicting the response parameters.
1. Introduction
The selection of efficient machining parameters is of great concern in manufacturing
industries, where the economy of machining operations plays a key role in the competitive
market. Many researchers have dealt with the optimization of machining parameters. Surface
roughness, tool life and cutting force are considered to be the manufacturing goals in
turning operations as per Davim and Conceicao Antonio (2001). It is also recognised that
if the cutting force is lower, the surface finish and tool wear are better.
Multi-objective optimization of two or more given objectives is important for the design of tools
and machinery. In this case, the tool wear and the surface roughness are conflicting in nature.
Since genetic algorithms are a good tool for solving multi-objective optimization problems, the
optimization of the parameters can be done using evolutionary algorithms such as NSGA-II,
PAES, SPEA or NSGA. The non-dominated sorting genetic algorithm (NSGA-II) is used here for
obtaining the Pareto optimal front.
The objective functions are classically obtained by predictive modelling techniques such as
response surface methodology (RSM). Often the Pareto front obtained from them deviates from
the true Pareto front of the problem. In this paper, an integration of Artificial Neural Networks
(ANN), a non-linear curve fitting tool for obtaining accurate outputs, with NSGA-II is carried out to
achieve better convergence to the true Pareto front. A comparison of the Pareto fronts obtained
from the ANN and RSM models is given to see which of the techniques is more precise.
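For reference (an illustrative sketch, not the paper's code), the non-dominated sorting step at the heart of NSGA-II can be written as follows for two minimization objectives:

```python
def dominates(a, b):
    """True if solution a dominates b (all objectives <=, at least one strictly <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split objective vectors into successive non-dominated fronts."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # a point joins the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (tool wear, surface roughness) pairs, both to be minimized
pts = [(1, 4), (2, 3), (3, 2), (4, 1), (5, 5)]
print(non_dominated_sort(pts))  # first front: the four trade-off points; (5, 5) is dominated
```

NSGA-II then fills the next population front by front, breaking ties within a front by crowding distance to preserve spread along the Pareto front.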
2. Design of experiment
Davim (2003) investigated the influence of cutting speed (V, m/min), feed rate (f, mm/rev)
and cutting time (T, min) on the responses of surface roughness (Ra, mm), tool wear (Vb, mm)
and power required (Pm, kW). He conducted 27 experiments in turning metal matrix
composites of type A356/20/SiCp-T6 with a billet size of 95 mm diameter. The tool
geometry used was as follows: rake angle 0°, clearance angle 7°, major tool cutting edge angle
60° and cutting edge inclination angle 0°. A lathe of 6 kW spindle power was used to perform the
experiments. The tool wear was measured with a Mitutoyo optical microscope with 30x
magnification and 1 mm resolution. The surface roughness was evaluated with a Homeltester
T500 profilometer. The data collected from the various experiments are presented in Table 1.
Table 1. Data for model construction

Test no.   V (m/min)   f (mm/rev)   T (min)   Ra (mm)   Vb (mm)
1          500         0.05         1         0.33      0.18
2          500         0.05         5         0.46      0.45
3          500         0.05         10        0.59      0.55
4          500         0.10         1         0.56      0.11
5          500         0.10         5         0.45      0.32
6          500         0.10         10        1.00      0.43
7          500         0.20         1         1.50      0.11
8          500         0.20         5         1.60      0.24
9          500         0.20         10        3.63      0.30
10         350         0.05         1         0.70      0.09
11         350         0.05         5         0.97      0.23
12         350         0.05         10        1.56      0.28
13         350         0.10         1         0.50      0.06
14         350         0.10         5         0.35      0.13
15         350         0.10         10        0.90      0.22
16         350         0.20         1         1.34      0.05
17         350         0.20         5         1.24      0.15
18         350         0.20         10        3.24      0.18
19         250         0.05         1         0.95      0.05
20         250         0.05         5         1.32      0.13
21         250         0.05         10        2.20      0.15
22         250         0.10         1         1.64      0.04
23         250         0.10         5         2.28      0.08
24         250         0.10         10        3.80      0.10
25         250         0.20         1         2.83      0.03
26         250         0.20         5         3.02      0.06
27         250         0.20         10        3.53      0.09
3. Modelling of the data
3.1. Response surface methodology
Gupta (2010) evaluated the relationship between the parameters using response
surface methodology (RSM) and artificial neural networks. First, response surface
methodology is applied to develop the surface roughness and tool wear models using Minitab
15 (Minitab Inc. 2006). The coefficients of the terms of both models are presented in
Table 2. The correlation coefficients were 0.95 and 0.97, respectively.
Table 2. Coefficients of RSM

Term     | Coef. for Ra  | Coef. for Vb
Constant | 8.93288       | 0.00534230
V        | -0.0398659    | 5.76369E-05
f        | -8.42692      | -0.874176
T        | -0.0754512    | 0.0144947
V²       | 4.58519E-05   | 9.18519E-07
f²       | 41.8519       | 7.11111
T²       | 0.0174938     | -0.00204321
V*f      | 0.0113860     | -0.00332080
V*T      | -1.76301E-04  | 9.60023E-05
f*T      | 0.620180      | -0.0661202
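As a check, the quadratic Ra model can be evaluated directly from the Table 2 coefficients. The sketch below assumes the usual full second-order response surface in V, f and T; the function name is illustrative, not from the original paper.

```python
def predict_ra(V, f, T):
    """Second-order response surface for surface roughness Ra,
    using the Ra coefficients listed in Table 2."""
    return (8.93288
            - 0.0398659 * V - 8.42692 * f - 0.0754512 * T
            + 4.58519e-05 * V**2 + 41.8519 * f**2 + 0.0174938 * T**2
            + 0.0113860 * V * f - 1.76301e-04 * V * T + 0.620180 * f * T)

# Test 27 conditions (V = 250 m/min, f = 0.20 mm/rev, T = 10 min),
# for which the measured Ra was 3.53
print(predict_ra(250, 0.20, 10))
```

The prediction lands in the right range but not exactly on the measured value, which is consistent with the reported correlation coefficient of 0.95 for the Ra model.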
3.2. Artificial neural networks
Artificial neural networks are among the newest pattern recognition technologies in the
engineer's toolbox. Each neural network is composed of an input layer, an output layer and
one or more hidden layers. Each neuron works as an independent processing element, and
has an associated transfer function, which describes how the weighted sum of its inputs is
converted to the results into an output value. Currently, there are diverse training algorithms
available. The back propagation (BP) learning algorithm has become the most popular in
engineering applications. Back propagation algorithm is based on minimization of the
quadratic cost function by tuning the network parameters. The mean square error (MSE) is
considered as a measurement criterion for a training set. Specially, BP neural network is the
most suitable tool for treating non-linear systems.
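The MSE criterion mentioned above is simply the average squared residual over the training set; a minimal sketch (the function name is illustrative):

```python
def mse(targets, outputs):
    """Mean square error over a training set: the average of the
    squared differences between target and network output."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets)

# A perfect fit gives zero error
print(mse([0.33, 0.46, 0.59], [0.33, 0.46, 0.59]))  # → 0.0
```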
All 27 data sets were taken as the training set. The continuous inputs represent the
speed, the feed rate and the cutting time; the outputs pertain to the surface roughness and tool
wear. The data was normalized between 0.05 and 0.95. The data was then trained with two
hidden layers using the transfer functions tansig (tan-sigmoid) and purelin (linear), and the
training function trainlm (Levenberg-Marquardt), which minimizes the mean square error (MSE).
The training inputs should be a large enough sample to train the network for execution
on the testing data.
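The 0.05–0.95 normalization described above is a standard min-max scaling; a sketch, assuming each input or output column is scaled against its own observed range:

```python
def scale_column(values, lo=0.05, hi=0.95):
    """Min-max scale a column of raw values into the interval [lo, hi]."""
    vmin, vmax = min(values), max(values)
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

speeds = [500, 350, 250]       # cutting speed levels from Table 1
print(scale_column(speeds))    # extremes map to 0.95 and 0.05
```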
The usual problem with ANN training is that the trained network becomes either
over-trained or under-trained. In both of these cases, the output values of the network shoot
up to very high positive or negative values. Since the data set is very small and no data sets
could be spared, it was decided that the criterion for checking the ANN would be whether
the output overshoots or not. To check this, 100 randomly generated data points are taken
inside the interval of the inputs. If the values of the output do not shoot up and the correlation
is acceptable, then it is considered to be a good neural network architecture.
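The overshoot check described above can be sketched as follows. Here `model` stands in for the trained network's prediction function and the bounds are the input intervals from Table 1; this is an illustrative reconstruction, not the authors' code.

```python
import random

def passes_overshoot_check(model, bounds, lo=0.0, hi=1.0, n=100, seed=1):
    """Probe the model at n random points inside the input bounds and
    report whether every (normalized) prediction stays within [lo, hi]."""
    rng = random.Random(seed)
    for _ in range(n):
        x = [rng.uniform(a, b) for a, b in bounds]
        if not lo <= model(x) <= hi:
            return False   # the output "shoots up": reject this architecture
    return True

bounds = [(250, 500), (0.05, 0.20), (1, 10)]           # V, f, T intervals
print(passes_overshoot_check(lambda x: 0.5, bounds))   # well-behaved stub model
```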
Thus a novel method of training the two objectives separately was proposed. The
final result was a good training and a good correlation with the experimental values of the
objectives.
This case encompasses some of the non-classical methods of training the ANN which
are required in cases like this where the number of data sets is very small.
The data was separately trained for each of the objectives, with the architectures
[3-5-3-1] and [3-2-3-1] for Ra and Vb respectively, as shown in Figures 1(a) and 1(b). The
correlation coefficient is around 0.99 for both output functions. Thus the training of the ANN
was completed.
Figure 1(a) Architecture of ANN Model of Ra
Figure 1(b) Architecture of ANN Model of Vb
4. Non-dominated sorting genetic algorithm
Many real world problems are characterized by the presence of several
conflicting objectives. Therefore, it is necessary to treat such a problem as a multi-objective
optimization problem. Pareto-optimal (non-dominated or non-inferior) solutions can be
obtained by using multi-objective optimization techniques.
One of the multi-objective evolutionary algorithms (MOEA) that has been effective in finding
the Pareto optimal solutions is the elitist non-dominated sorting genetic algorithm (NSGA-II)
developed by Deb et al. (2002); see also Seshadri (2010). It has been applied to the objective
functions obtained from RSM.
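The core of NSGA-II is non-dominated sorting. For the two minimization objectives here (Ra and Vb), the domination test and the first front can be sketched as:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """First Pareto front: points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (Ra, Vb) pairs taken from Table 1: tests 1, 16, 19 and 2
candidates = [(0.33, 0.18), (1.34, 0.05), (0.95, 0.05), (0.46, 0.45)]
print(nondominated(candidates))  # → [(0.33, 0.18), (0.95, 0.05)]
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance ranking and elitism on top of it.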
5. Integration of NSGA-II with ANN
The integration of RSM was done by feeding the equations into the objective evaluation
function of the NSGA-II algorithm. After the training is done, the trained network is globalized.
Figure 2 shows the flowchart of the NSGA-II proposed for the integration with the artificial
neural networks.
After the integration of all the models with the NSGA-II algorithm, the Pareto optimal fronts
were generated. Both Pareto fronts were merged together for comparison in Figure 3.
Figure 2. The flow chart of NSGA-II integrated with ANN
6. Results and conclusion
In this paper, machine learning models were explored and implemented on a data set.
A model based on artificial neural networks (ANN) was built. The tool wear and surface
roughness in the data were found to be of conflicting nature, and hence the Pareto optimal
front was obtained using the NSGA-II algorithm. The aim of the paper is to integrate ANN and
NSGA-II so that they work more accurately and predictably to give optimum parameters for
turning operations.
Figure 3 shows the comparison of the Pareto fronts obtained from RSM (o) and ANN (*). It is
clearly visible that the limits of Ra and Vb match perfectly in the case of ANN, whereas they
go beyond the limits in the case of RSM.
Figure 3. The final Pareto front obtained for RSM (o) and ANN (*). The x-axis is the tool wear
and the y-axis is the surface roughness.
Thus, designers get a more precise setting for working in an optimal environment, as shown
in Figure 3. This shows that ANN is more precise and accurate in giving the Pareto
optimal front.
References
Seshadri, A., 2010, NSGA-II: a fast and elitist algorithm.
Chien, W. and Chou, C., 2001, The predictive model for machinability of 304 stainless steel,
Journal of Materials Processing Technology, 118 (1–3), 441–447.
Davim, J.P. and Conceicao Antonio, C.A., 2001, Optimization of cutting conditions in
machining of aluminium matrix composites using a numerical and experimental model,
Journal of Materials Processing Technology, 112, 78–82.
Davim, J.P., 2003, Design of optimisation of cutting parameters for turning metal matrix
composites based on the orthogonal arrays, Journal of Materials Processing
Technology, 132 (1), 340–344.
Gupta, A.K., 2010, Predictive modelling of turning operations using response surface
methodology, artificial neural networks and support vector regression, International
Journal of Production Research, 48 (3), 763–778.
Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T., 2002, A fast and elitist multiobjective
genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation, 6 (2), 182–197.
Minitab Inc., 2010, Meet MINITAB Release 16, PA, USA: Minitab Inc.
Optimization of Effect of Degree of Saturation on Strength and
Consolidation Properties of an Unsaturated Soil
Bhavita S. Dave1*, Lalit Thakur2, D. L. Shah3
1 Parul Institute of Technology, Baroda – 391 760, Gujarat, India
2 Babaria Institute of Technology, Baroda, Gujarat, India
3 M.S. University, Faculty of Technology and Engineering, Baroda – 390 001, Gujarat, India
*Corresponding author (e-mail: davebhavita@hotmail.com)
Determination of soil properties is the most important first phase of work for every type
of civil engineering facility. The present study attempts to study the effect of
saturation on the strength and consolidation properties of two soils, namely CH and CL type,
at various percentages of maximum dry density. Results showed that the cohesion
value of the yellow soil increased with increase in the saturation value as well as the density
value. The black soil followed a reverse trend: both cohesion and the angle of internal
friction decreased with increase in degree of saturation, but comparatively increased for a
given degree of saturation with increase in density. No definite trend was observed in the
value of the coefficient of consolidation.
1. Introduction
Civil engineers build on or in the earth's crust. Most of the earth's land surface comprises
notoriously hazardous geo-materials called "unsaturated soils". An unsaturated soil is
commonly defined as having three phases, namely solids, water and air. However, it may be
more correct to recognize the existence of a fourth phase, namely the air-water
interface. The problems of interest in unsaturated soil situations involve negative
pressures in the pore-water. The type of problem involving negative pore-water pressures that
has received the most attention is that of swelling or expansive clays. Research into the
volume change and shear strength of unsaturated soils commenced with new impetus in the
late 1950s.
The stress state variables generally used for an unsaturated soil are the net normal
stress and the matric suction (the difference between the pore-air pressure and the pore-water
pressure). The shear strength equation for an unsaturated soil exhibits a smooth transition to
the shear strength equation for a saturated soil: as the soil approaches saturation, the pore-water
pressure approaches the pore-air pressure and the matric suction goes to zero, so the
matric suction component vanishes and the equation reverts to that for a saturated soil. The
shear strength testing of unsaturated soils can be viewed in two stages. The first stage is prior
to shearing, where the soils can be consolidated to a specific set of stresses or left
unconsolidated. The second stage involves the control of drainage during the shearing
process.
Soil consolidation has important practical implications with regard to changes in soil
behavior. The application of a load to an unsaturated soil specimen will result in the
generation of excess pore-air and pore-water pressures. The pore-air and pore-water
pressures increase as an unsaturated soil is compressed. In an unsaturated soil, the pore
fluid consists of water, free air, and air dissolved in water. The compressibility of pore fluid in
an unsaturated soil takes into account the matric suction of the soil. The theory of
consolidation does not play an important role for unsaturated soils as it does for saturated
soils.
2. Scheme of investigation
For the purpose of the experiments, two different soil compositions were selected.
The first soil sample was procured from Valiya-Netrang Road, near Bharuch, and the second
soil sample was collected from the village Sevasi near Vadodara. It was sieved through a 425 µ
sieve and then tested for basic geotechnical properties as per the IS classification system.
Geotechnical properties of the soils were determined using standard methods as prescribed
in I.S. 2720, and are summarized in Table 1.
Table 1. Properties of Different Soils

Property          | CL    | CH
M.D.D (gm/cc)     | 1.720 | 1.65
O.M.C (%)         | 17.30 | 19.20
Liquid limit (%)  | 32.00 | 54.00
Plastic limit (%) | 20.00 | 25.00
U.C.S (kg/cm²)    | 2.436 | 3.509
Sp. gravity       | 2.66  | 2.64
Free swell (%)    | 15.00 | 17.00

3. Experimental study
For the triaxial and consolidation tests, the samples were remolded at the required density
and moisture content for 80%, 85%, 90% and 95% of M.D.D. Before testing, the soils were
air-dried, sieved through an I.S. 425 µ sieve and then oven dried at 105°C for 24 h.
4. Results and discussion
In this study, triaxial and consolidation tests were performed on two types of soils
(CH, CL) compacted at 80%, 85%, 90% and 95% of Proctor density and at various degrees of
saturation: 20%, 40%, 60%, 80% and 100%. From the readings of the triaxial test, cu and φu
were obtained from the plot of Mohr's circles. Graphs for peak stress and failure strain were
also drawn.
4.1 Shear strength parameters
Tables 2 and 3 show the maximum and minimum values of cohesion and angle of
friction obtained at a specific degree of saturation and maximum dry density for CH soil
and CL soil respectively.
Table 2. Cohesion Values for CH and CL Soils

Soil Type | Degree of Saturation (%) | MDD (%) | Max Cohesion (kg/cm²) | Min Cohesion (kg/cm²)
CH | 20  | 95 | 0.444 | -
CH | 100 | 80 | -     | 0.114
CL | 100 | 95 | 0.442 | -
CL | 20  | 80 | -     | 0.122
Table 3. Values of Angle of Friction for CH and CL Soils

Soil Type | Degree of Saturation (%) | MDD (%) | Max Angle of Friction (°) | Min Angle of Friction (°)
CH | 20  | 80 | 26.9 | -
CH | 100 | 95 | -    | 13
CL | 20  | 80 | 28.6 | -
CL | 100 | 95 | -    | 6.9
Figure 1 shows the plot of cohesion against volumetric water content at different %
M.D.D for CL and CH soils. It is seen that cohesion increases with volumetric moisture
content for CL soil and decreases for CH soil. The best fit is obtained at 80% M.D.D with an
R² value of 0.985 for CL soil, and at 95% M.D.D with an R² value of 0.947 for CH soil.
Equations for all fits with their respective R² values for CL and CH soils are shown in
Table 4.
[Figure 1 plots cohesion against volumetric moisture content for each % M.D.D with logarithmic trend lines; the fitted equations and R² values are listed in Table 4.]
Figure 1. Volumetric moisture content versus cohesion characteristics for CL soil and
CH soil compacted at different density
Table 4. Relationship between volumetric water content and cohesion for CL soil and CH soil

MDD (%) | CL Soil relationship   | R²    | CH Soil relationship    | R²
80      | y = 0.127ln(x) - 0.220 | 0.985 | y = -0.04ln(x) + 0.313 | 0.836
85      | y = 0.126ln(x) - 0.150 | 0.976 | y = -0.05ln(x) + 0.409 | 0.913
90      | y = 0.150ln(x) - 0.186 | 0.956 | y = -0.06ln(x) + 0.527 | 0.882
95      | y = 0.185ln(x) - 0.262 | 0.953 | y = -0.03ln(x) + 0.536 | 0.947
x = volumetric water content and y = cohesion
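The logarithmic relationships in Table 4 are ordinary least-squares fits of y = a·ln(x) + b. A sketch of how such a fit and its R² are computed, using synthetic points for illustration:

```python
import math

def fit_log(xs, ys):
    """Least-squares fit of y = a*ln(x) + b; returns (a, b, r_squared)."""
    lx = [math.log(x) for x in xs]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ys) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ys))
         / sum((u - mx) ** 2 for u in lx))
    b = my - a * mx
    ss_res = sum((v - (a * u + b)) ** 2 for u, v in zip(lx, ys))
    ss_tot = sum((v - my) ** 2 for v in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Points generated from the 80% M.D.D CL-soil relationship, so the fit
# should recover a ≈ 0.127 and b ≈ -0.220 with R² ≈ 1
xs = [5.0, 10.0, 20.0, 40.0]
ys = [0.127 * math.log(x) - 0.220 for x in xs]
a, b, r2 = fit_log(xs, ys)
```

The same routine with x raised to a power instead of ln(x) (i.e. fitting ln(y) against ln(x)) yields the power-law relationships reported for the angle of friction in Table 5.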
Figure 2 shows the plot of angle of friction against volumetric water content at different %
M.D.D for CL and CH soils. It is seen that as the % M.D.D increases, the angle of friction
decreases for both CL and CH soils. The best fit is obtained at 95% M.D.D with an R² value
of 0.959 for CL soil, and at 90% and 95% M.D.D with an R² value of 0.998 for CH soil.
Equations for all fits with their respective R² values for CL and CH soils are shown in Table 5.
[Figure 2 plots angle of friction against volumetric moisture content for each % M.D.D with power-law trend lines; the fitted equations and R² values are listed in Table 5.]
Figure 2. Volumetric moisture content versus angle of friction characteristics for CL
soil and CH soil compacted at different density
Table 5. Relationship between volumetric water content and angle of friction for CL soil and
CH soil

MDD (%) | CL Soil relationship | R²    | CH Soil relationship | R²
80      | y = 350.4x^-0.91     | 0.950 | y = 95.28x^-0.49     | 0.989
85      | y = 269.3x^-0.89     | 0.955 | y = 77.29x^-0.45     | 0.985
90      | y = 168.5x^-0.78     | 0.917 | y = 81.25x^-0.48     | 0.998
95      | y = 166.3x^-0.83     | 0.959 | y = 70.32x^-0.45     | 0.998
x = volumetric water content and y = angle of friction
4.2 Consolidation properties
Table 6. CV values for yellow soil (CL soil)

MDD (%) | Degree of Saturation (%) | Max CV (cm²/min) | Min CV (cm²/min) | Consolidation Pressure (kg/cm²)
80 | 20  | 0.149 | -     | 3.2
80 | 80  | -     | 0.007 | 0.2
85 | 100 | 0.146 | -     | 0.8
85 | 60  | -     | 0.01  | 0.2
90 | 20  | 0.187 | -     | 3.2
90 | 60  | -     | 0.002 | 0.8
95 | 60  | 0.139 | -     | 0.4
95 | 100 | -     | 0.006 | 0.2
Table 7. CV values for black cotton soil (CH soil)

MDD (%) | Degree of Saturation (%) | Max CV (cm²/min) | Min CV (cm²/min) | Consolidation Pressure (kg/cm²)
80 | 20  | 0.157 | -     | 1.6
80 | 20  | -     | 0.009 | 0.2
85 | 100 | 0.259 | -     | 6.4
85 | 100 | -     | 0.008 | 3.2
90 | 100 | 0.111 | -     | 1.6
90 | 100 | -     | 0.002 | 0.2
95 | 20  | 0.140 | -     | 3.2
95 | 20  | -     | 0.005 | 0.4
Tables 6 and 7 show the maximum and minimum values of coefficient of
consolidation obtained at the specified consolidation pressure for various degrees of density
and saturation for CL soil and CH soil respectively.
5. Conclusion
The results of the research presented in this paper confirm that degree of saturation
plays an important role in the shear strength parameters of the soil. Specifically, the following
conclusions can be reached:
- The cohesion value of the yellow soil (CL) increased and the angle of internal friction value
decreased with increase in the saturation value as well as the density value.
- The cohesion as well as the angle of internal friction value of the black cotton soil (CH)
decreased with increase in degree of saturation, but comparatively increased for a given
degree of saturation with increase in density.
- No definite trend was observed in the value of the coefficient of consolidation.
Application of Artificial Neural Network for Weld Bead
Measurement
Devangi Desai1, Bindu Pillai2*
1 Dharmsinh Desai Institute of Technology, DDU – 387001, Gujarat, India
2 Faculty of Technology and Engineering, Charotar University of Science and Technology,
Changa – 388421, Gujarat, India
*Corresponding author (e-mail: bindupillai.me@charusat.ac.in)
In the area of welding technology there is a wide field for studying the effect of process
parameters as well as the welding conditions for better quality of welded parts. Metal
Inert Gas (MIG) welding, the most popular arc welding process with a variety of
applications, involves high complexity because of the use of a consumable electrode.
This results in variation in bead geometry, and therefore predicting the dimensions of
the weld bead becomes an important measure of quality control. To assist this process, an
Artificial Neural Network (ANN) based model for predicting the weld bead dimensions
particularly the weld Penetration, Reinforcement and Heat Affected Zone (HAZ) for MIG
welded components is presented in this study. A Multilayer Perceptron (MLP) Neural
Network is used to prepare the model. Parameters like groove angle, arc voltage, and
current are used as input data sets for welded parts for training and testing ANN model.
The results show that ANN predicted values are in good agreement with the actual
values. Thus an ANN model of this kind can be used for prediction of the output
parameters for a particular set of inputs for MIG welded components.
1. Introduction
MIG welding has been successfully used in industries like aircraft, automobile,
shipbuilding and pressure vessels, where better weld quality is the most desired
property. This process is slightly more complex compared to other welding methods,
as a number of variables have to be controlled effectively to achieve good results. Studying,
analyzing and evaluating the process parameters of MIG welding is therefore very important
in order to produce good quality welded parts (Pal et al., 2008).
Figure-1. Geometry of the weld bead cross-section (Dey et al., 2008)
The parameters for weld bead measurement are shown in Figure 1. Mild steel plates are MIG
welded with different values of the welding parameters: groove angle, arc voltage and current.
The samples are cut and the weld bead geometry (penetration, reinforcement and heat
affected zone) is measured across the cross-section.
An artificial neural network, as used in this study, is a system that works on the principle of
biological neural networks; in other words, it is an artificially prepared system that imitates
the way a biological neural system learns. A neural network model has to be configured
such that the application of a set of inputs produces the desired set of outputs. Various
methods to set the strengths of the connections exist. One way is to set the weights explicitly,
using a priori knowledge. Another way is to train the neural network by feeding it teaching
patterns and letting it change its weights according to some learning rule. The learning
situations in neural networks may be classified into three distinct sorts. These are supervised
learning, unsupervised learning, and reinforcement learning (Ben and Patrick, 1996).
Artificial neural net models or simply “neural nets” are also named as connectionist models,
parallel distributed processing models, and neuromorphic systems. All these models attempt
to achieve good performance via dense interaction of simple computational elements
(Lippmann, 1987).
A very important feature of these networks is their adaptive nature, where “learning by
example” replaces “programming” in solving problems having more complexity. This feature
makes such computational models very appealing in application domains where one has little
or incomplete understanding of the problem to be solved but where training data is readily
available. A group of samples are generated and their results are used to train the neural
network by providing it with a number of input-output value combinations (Hassoun, 1995).
2. Modeling with ANN
Neural Network software Easy NN is used to design the neural network. The basic
steps involved in designing the network are: Generation of data; Pre-processing of data;
Design of the neural network elements; Training and testing of the neural network; Simulation
and prediction with the neural networks; and Analysis and post-processing of predicted result
(Yao, 1999).
In order to generate the input/output dataset for training and testing of the network, a number
of experimental trials were performed and results were obtained for the three output
parameters of the weld bead. The dataset is partitioned randomly into two subsets: a training
dataset (75%) and a testing dataset (25%). The target error is fixed at 0.01, the learning
rate is set to 0.6 and the momentum is fixed at 0.8.
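The random 75/25 partition can be sketched as a generic shuffle-and-cut (this is an illustrative stand-in, not the Easy NN internal routine):

```python
import random

def split_dataset(records, train_frac=0.75, seed=42):
    """Shuffle the records and cut them into training and testing subsets."""
    rng = random.Random(seed)
    order = list(range(len(records)))
    rng.shuffle(order)
    cut = round(len(records) * train_frac)
    return ([records[i] for i in order[:cut]],
            [records[i] for i in order[cut:]])

train, test = split_dataset(list(range(29)))   # 29 welded samples
print(len(train), len(test))
```

Fixing the seed makes the partition reproducible across runs, which matters when comparing network configurations on the same split.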
The architecture of the ANN for the weld bead measurement with the input and output
parameters is schematically illustrated in Figure 2. A Feed-forward Multilayer Perceptron
(MLP) neural network model is used. The input layer has 3 neurons corresponding to the 3
input parameters: groove angle, arc voltage and current and the output layer has 3 neurons
corresponding to the output parameters: penetration, reinforcement and heat affected zone.
The performance of an ANN model is noticeably affected by the number of hidden layers and
the number of nodes in each hidden layer. By trial and error with different ANN configurations,
the optimal number of hidden layers was selected as 3, with 2, 6 and 3 neurons in the hidden
layers respectively. The input/output dataset was normalized to the range 0 to 1 using the
'threading' function built into the Easy NN software.
Figure-2. Architecture of the proposed ANN model
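The 3-2-6-3-3 feed-forward structure can be sketched as a plain forward pass. The weights below are random stand-ins for what Easy NN learns during training, and sigmoid hidden units with a linear output layer are assumed:

```python
import math
import random

def mlp_forward(x, sizes=(3, 2, 6, 3, 3), seed=0):
    """One forward pass through a fully connected MLP with the given
    layer sizes: sigmoid hidden layers, linear output layer."""
    rng = random.Random(seed)
    a = list(x)
    layers = list(zip(sizes, sizes[1:]))
    for i, (fan_in, fan_out) in enumerate(layers):
        # random stand-in weights and biases for each layer
        w = [[rng.uniform(-1.0, 1.0) for _ in range(fan_in)] for _ in range(fan_out)]
        b = [rng.uniform(-1.0, 1.0) for _ in range(fan_out)]
        z = [sum(wi * ai for wi, ai in zip(row, a)) + bi for row, bi in zip(w, b)]
        a = z if i == len(layers) - 1 else [1.0 / (1.0 + math.exp(-v)) for v in z]
    return a

out = mlp_forward([0.2, 0.5, 0.8])   # normalized groove angle, voltage, current
print(len(out))                      # three outputs: P, R, HAZ
```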
The training set of each group consisted of 23 input-output pairs, corresponding to 75% of the
data set. The training of the network was carried out by the inbuilt training algorithm in Easy
NN software which is used because of its fast convergence and accuracy for training the
network. Finally, the input vectors from the test data set were presented to the trained
network and the network predictions were compared with the experimental outputs for the
performance measurement. The test data set contained 6 input-output pairs for each group,
corresponding to the 25% of the data set containing the results of 29 tests in each group.
Figure-3 Training and validating error from proposed ANN model
The average validating error for the proposed model is 0.01892485, which is less than the
average training error, so the ANN model (3-2-6-3-3) is finalized.
3. Results and discussion
The comparison between the ANN-predicted values and the actual values of the output
parameters, penetration (P), reinforcement (R) and heat affected zone (HAZ), is shown in
Figure-4. The results show that the ANN model is able to predict the values of the output
parameters with good accuracy, close to the target.
Figure-4 Comparison of Actual v/s ANN predicted values
4. Conclusion
An ANN model for weld bead measurement is proposed here. The comparison of ANN-predicted
values with the actual values shows good agreement with the target. This work
demonstrates that an ANN can be trained with a set of welding inputs, and that the same ANN
model can also simulate data sets outside the MIG welding parameters given during
training, thereby estimating the weld bead measurement parameters with good accuracy.
Acknowledgment
The authors wish to thank Dr. Piyush Gohil, Head, department of Mechanical
Engineering Department
and Dr. N.D. Shah, Principal-Faculty of Technology and
Engineering (Chandubhai S. Patel Institute of Technology- Changa) Charotar University of
Science and Technology, Changa for their guidance, encouragement and support in
undertaking the research work. Special thanks to the Management for their moral support and
continuous encouragement.
References
Ben K. and Patrick. An Introduction to Computing with Neural Nets. University of Amsterdam,
1996.
Dey V., Pratihar D.K. and Datta G.L. Prediction of weld bead profile using neural networks,
Proceedings of the First International Conference on Emerging Trends in Engineering and
Technology (ICETET), 16-18 July 2008, Nagpur, Maharashtra. DOI 10.1109/ICETET.2008.237581.
Hassoun M.H. Fundamentals of Artificial Neural Networks. Cambridge, MA: MIT Press, 1995,
544 pp., ISBN 0-262-08239-X.
Lippmann R. An introduction to computing with neural nets, IEEE ASSP Magazine, 1987.
Nagesh D.S. and Datta G.L. Modeling of fillet welded joint of GMAW process: integrated
approach using DOE, ANN and GA, Springer-Verlag France, 2008.
Pal S., Pal S.K. and Samantaray A.K. Artificial neural network modeling of weld joint
strength prediction of a pulsed metal inert gas welding process using arc signals,
Journal of Materials Processing Technology (2008) 464–474, Elsevier B.V.
Yao X. Evolving artificial neural networks, Proceedings of the IEEE, Vol. 87, No. 9, 1999.
Optimization of Error Gradient Functions by NAGNM through
ESTLF
Chandragiri Radha Charan1*, M. Shailaja2, B V Ram Naresh Yadav3
1 EEE Department, JNTUH College of Engineering, Nachupally, Karimnagar, Andhra Pradesh
2 Mech Department, JNTUH College of Engineering, Nachupally, Karimnagar, Andhra Pradesh
3 IT Department, JNTUH College of Engineering, Nachupally, Karimnagar, Andhra Pradesh
*Corresponding author (e-mail: crcharan@gmail.com)
The Non Adaptive Generalized Neuron Model (NAGNM) has greater advantages over the
Back Propagation Neural Network, such as no hidden nodes, more flexibility and less
training time. Electric Short Term Load Forecasting (ESTLF) with NAGNM is applied for the
mean median error gradient function. The data points consist of electrical load and climatic
parameters (maximum temperature, minimum temperature, humidity). MATLAB 7.0®
simulation results are evaluated in terms of root mean square testing error, maximum testing
error, minimum testing error and elapsed time in seconds.

Keywords: Non Adaptive Generalized Neuron Model, Mean Median Error Gradient
Function, Electric Short Term Load Forecasting.
1. Introduction
Short term load forecasting is required for control, unit commitment, security
assessment, optimum planning of power generation, and planning of both spinning reserve
and energy exchange, and also serves as input to load flow studies and contingency analysis.
The IEEE load forecasting working group (1980-81) published a general philosophy of load
forecasting centred on the economic issues. Some of the techniques are general exponential
smoothing by Christiaanse, W. R. (1971), state space and Kalman filtering, and multiple
regression. Hagan (1987) proposed a stochastic time series model for short term load
forecasting. Rahaman (1990) and Ho (1990) proposed the application of KBES.
Park and Peng (1991-92) used ANN for STLF, which did not consider the dependency of
load on weather. Kalra (1995) incorporated the feature of weather dependency for STLF.
Khincha (1996) developed an online ANN model for STLF.
The drawbacks of artificial neural networks are limited accuracy, large training time,
a huge data requirement, and the relatively large hidden layer needed to train for nonlinear,
complex load forecasting problems. A fuzzified neural network approach for load forecasting
was therefore developed by D. K. Chaturvedi et al. (2001); however, training the total number
of neurons requires a large amount of time. Man Mohan et al. (2002) proposed a generalized
neuron model (GNM) for training and testing of short-term load forecasting.
In order to reduce local minima and other deficiencies, the training and testing
performances of the models were compared by Chaturvedi, D. K. et al. (2003). In an ANN,
the training time required to train the neurons is comparatively large, and the size of the
hidden layer, the size of the training data and the learning algorithm can cause training
difficulties. To overcome these difficulties with ANN, a new neuron model using a neuro-fuzzy
approach was developed for short term load forecasting by Man Mohan et al. (2003). C.
Radha Charan (2010) developed a generalized neuron model for short term load forecasting
with different error functions.
2. Generalized neuron model
The Generalized Neuron Model overcomes the above drawbacks. The GNM has a
smaller number of unknown weights: the number of weights in the case of GNM is equal to twice
the number of inputs plus one, which is very low in comparison to a multi-layered feed forward
ANN. By reducing the number of unknown weights, the training time can be reduced.
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
The number of training patterns required for GNM training depends on the number of unknown
weights: the number of training patterns must be greater than or equal to the number of GNM
weights. Since the number of GNM weights is smaller than that of a multilayered ANN, the
number of training patterns required is also smaller. In GNM, the use of a flexible neuron model
reduces the total number of neurons, requires less training time and no hidden layer, and a
single neuron is capable of solving most problems. The complexity of GNM is thus less than that
of a multi-layered ANN.
Figure 1. Generalized Neuron Model
The flexibility of GNM has been improved by using a larger number of activation
functions and aggregation functions. In the model of Figure 1, the GNM contains sigmoid,
Gaussian and straight-line activation functions, with two aggregation functions: summation (Σ)
and product (Π). Both the summation and the product aggregation functions are incorporated,
and each aggregated output passes through a non-linear activation function.
Figure 2. Architecture of Generalized Neuron Model
In Figure 2, the output opk of the GNM is

opk = f1out1×ws1 + f2out1×ws2 + ... + fnout1×wsn + f1out2×wp1 + f2out2×wp2 + ... + fnout2×wpn    (1)

Here f1out1, f2out1, ..., fnout1 are the outputs of activation functions f1, f2, ..., fn related to the
aggregation function Σ, and f1out2, f2out2, ..., fnout2 are the outputs of activation functions
f1, f2, ..., fn related to Π. The output of activation function f1 for the Σ aggregation function is
f1out1 = f1(ws1×sumsigma), and the output of activation function f1 for the Π aggregation
function is f1out2 = f1(wp1×product).
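As an illustrative sketch (not the authors' code), the single-neuron forward pass of Eq. (1) can be written as below for one activation function per aggregation; the function and variable names are hypothetical, and only the sigmoid activation is shown for brevity:

```python
import math

def gnm_output(x, w_sigma, w_pi, w_s, w_p):
    """Simplified GNM forward pass: one sigmoid activation per
    aggregation function (the full model uses several activations)."""
    # Sigma aggregation: weighted sum of the inputs
    s = sum(wi * xi for wi, xi in zip(w_sigma, x))
    # Pi aggregation: weighted product of the inputs
    p = 1.0
    for wi, xi in zip(w_pi, x):
        p *= wi * xi
    # Non-linear (sigmoid) activation of each aggregated value
    f_s = 1.0 / (1.0 + math.exp(-s))
    f_p = 1.0 / (1.0 + math.exp(-p))
    # Weighted combination of the activation outputs, as in Eq. (1)
    return f_s * w_s + f_p * w_p
```

With n inputs this sketch uses of the order of 2n + 2 weights, in line with the "twice the number of inputs plus one" count mentioned above for the GNM.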
3. Data points for ESTLF
Data for the ESTLF has been taken from the Department of Electricity and Water Supply,
Dayalbagh, and the Dayalbagh Science Museum, Agra, India. Different conditions, described
below as different types, have been considered. The data consist of the load of different weeks;
weather conditions (maximum temperature in °C, minimum temperature in °C and humidity in
percent) have been considered for the month of January 2003.
Normalized value = (Ymax − Ymin) × (L − Lmin)/(Lmax − Lmin) + Ymin    (2)

where Ymax = 0.9, Ymin = 0.1, L = value of the variable, Lmin = minimum value in that set,
Lmax = maximum value in that set.
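Equation (2) is ordinary min-max scaling to the range [0.1, 0.9]; a one-function sketch (illustrative, not from the paper):

```python
def normalize(values, y_min=0.1, y_max=0.9):
    """Min-max scale a list of raw values into [y_min, y_max], per Eq. (2)."""
    lo, hi = min(values), max(values)
    return [(y_max - y_min) * (v - lo) / (hi - lo) + y_min for v in values]
```

For example, third-week minimum temperatures of 5 and 6 map to 0.10 and 0.90, matching the normalized columns in the tables below.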
Data are tabulated in two types, in which the inputs are six and the output is one. Type I consists
of the I, II and III week loads, III week maximum temperature, III week minimum temperature and
III week humidity as inputs, and the IV week load as output. Type II consists of the I, II and III
week loads, average maximum temperature, average minimum temperature and average
humidity as inputs, and the IV week load as output.
Table 1: TYPE I: Inputs: three weeks of load, third week maximum temperature,
minimum temperature and humidity; output: fourth week of load data

First Week Load | Second Week Load | Third Week Load | Third Week Max Temp | Third Week Min Temp | Third Week Humidity | Output (Fourth Week Load)
2263.2 | 2479.2 | 2166   | 9.5  | 5 | 95 | 2461.2
2238   | 3007.2 | 2227.2 | 11   | 6 | 99 | 2383.2
2482.2 | 3016.8 | 2802   | 10.5 | 6 | 98 | 2025.6
2384.4 | 3285.6 | 2022   | 10   | 5 | 88 | 2557.2
2196   | 2295.6 | 2014.8 | 8.5  | 5 | 92 | 2548.8
2678.4 | 2286   | 3087.6 | 10.5 | 5 | 90 | 2560.8
2887.6 | 2458.8 | 2618.4 | 13.5 | 5 | 81 | 2800.8

Normalized Data
First Week Load | Second Week Load | Third Week Load | Third Week Max Temp | Third Week Min Temp | Third Week Humidity | Output (Fourth Week Load)
0.17 | 0.25 | 0.20 | 0.26 | 0.10 | 0.72 | 0.54
0.14 | 0.67 | 0.25 | 0.50 | 0.90 | 0.90 | 0.46
0.43 | 0.68 | 0.68 | 0.42 | 0.90 | 0.85 | 0.10
0.31 | 0.90 | 0.10 | 0.34 | 0.10 | 0.41 | 0.64
0.10 | 0.10 | 0.09 | 0.10 | 0.10 | 0.58 | 0.63
0.65 | 0.10 | 0.90 | 0.42 | 0.10 | 0.50 | 0.65
0.90 | 0.23 | 0.54 | 0.90 | 0.10 | 0.10 | 0.90

The mean median error gradient function is

∂E/∂Wsi = −sum( ((1 + error)/2)^(−0.5) × ∂opk/∂Wsi )    (3)

where ∂E = change in error, ∂Wsi = change in weight, opk = actual output, ∂opk = change in
output, and D = desired output (error = D − opk).
Table 2: TYPE II: Inputs: three weeks of load, average maximum temperature, average
minimum temperature and average humidity; output: fourth week of load data

First Week Load | Second Week Load | Third Week Load | Average Max. Temp. | Average Min. Temp. | Average Humidity | Output (Fourth Week Load)
2263.2 | 2479.2 | 2166   | 11.5  | 5.83 | 87   | 2461.2
2238   | 3007.2 | 2227.2 | 12    | 6.66 | 95   | 2383.2
2482.2 | 3016.8 | 2802   | 11.5  | 6.83 | 88.6 | 2025.6
2384.4 | 3285.6 | 2022   | 10.83 | 5.16 | 95   | 2557.2
2196   | 2295.6 | 2014.8 | 10.16 | 5.66 | 90   | 2548.8
2678.4 | 2286   | 3087.6 | 10.5  | 6.33 | 90   | 2560.8
2887.6 | 2458.8 | 2618.4 | 12.5  | 5.83 | 85.6 | 2800.8

Normalized Data
I Week Load | II Week Load | III Week Load | Avg. Max. Temp | Avg. Min. Temp | Avg. Humidity | Output (IV Week Load)
0.17 | 0.25 | 0.20 | 0.55 | 0.42 | 0.21 | 0.54
0.14 | 0.67 | 0.25 | 0.72 | 0.81 | 0.90 | 0.46
0.43 | 0.68 | 0.68 | 0.55 | 0.90 | 0.35 | 0.10
0.31 | 0.90 | 0.10 | 0.32 | 0.10 | 0.90 | 0.64
0.10 | 0.10 | 0.09 | 0.10 | 0.33 | 0.64 | 0.63
0.65 | 0.10 | 0.90 | 0.21 | 0.66 | 0.47 | 0.65
0.90 | 0.23 | 0.54 | 0.90 | 0.42 | 0.10 | 0.90
4. Simulation results
The NAGNM has been applied to ESTLF with the help of the mean median error gradient
function using Equation (3). The simulation results were obtained in MATLAB 7.0® with the
following assumed constants: momentum factor α = 0.95, learning rate = 0.0002, all initial
weights = 0.95, gain scale factor = 1.0, tolerance level = 0.002, and training epochs = 30,000.
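The exact weight-update rule is not spelled out here; assuming a standard gradient-descent-with-momentum step (a sketch with hypothetical names, not the authors' implementation), the stated constants would be used as:

```python
def train_step(weights, grads, velocity, lr=0.0002, alpha=0.95):
    """One momentum update: v <- alpha*v - lr*grad; w <- w + v.
    lr and alpha correspond to the learning rate and momentum
    factor listed above; grads would come from Eq. (3)."""
    new_v = [alpha * v - lr * g for v, g in zip(velocity, grads)]
    new_w = [w + v for w, v in zip(weights, new_v)]
    return new_w, new_v
```

Repeating such a step for the stated 30,000 epochs, stopping early once the error falls below the 0.002 tolerance, reproduces the training loop described.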
Table 3: Simulation Results of Mean Median Error Gradient Function

S.No. | Type | RMS testing error | Maximum testing error | Minimum testing error | Time elapsed in seconds
1 | Type I  | 6.9922 − 8.5904i | −7.4618 + 8.5999i | −6.6575 + 8.5995i | 8.453
2 | Type II | 7.0530 − 8.6334i | −7.5210 + 8.6422i | −6.7187 + 8.6418i | 4.766
5. Conclusions
Using NAGNM, ESTLF has been simulated in MATLAB 7.0® for different types of data
sets, and the mean median error gradient was calculated. The mean median error gradient
function with three weeks of load, average maximum temperature, average minimum
temperature and average humidity as inputs and the IV week load as output (Type II) requires
the least execution time; the optimization of time elapsed in seconds is therefore achieved with
Type II.
Acknowledgement
The authors would like to thank the Department of Electricity and Water Supply,
Dayalbagh, and the Dayalbagh Science Museum, Agra, for providing the data sets.
References
Chandragiri Radha Charan, "Application of Generalized Neuron Model in Short Term Load
Forecasting under Error Functions", Second Int. Conf. on Computing, Communication
and Network Technologies, Chettinad College of Engineering and Technology, Karur,
Tamil Nadu, 29-31 July 2010, 1-4.
Chaturvedi, D. K., Satsangi, P. S. and Kalra, P. K., "Fuzzified neural network approach for
load forecasting", Engineering Intelligent Systems, 8 (1), March 2001, 3-9.
Chaturvedi, D. K., Mohan, M., Singh, R. K. and Kalra, P. K., "Improved generalized neuron
model for short-term load forecasting", Soft Computing, Springer-Verlag, Heidelberg,
8 (1), 2003, 10-18.
Chaturvedi, D. K., Soft Computing Techniques and Its Applications in Electrical Engineering,
Springer-Verlag, Heidelberg, 2008, 93-94.
Christiaanse, W. R., "Short term load forecasting using general exponential smoothing",
IEEE Trans. on Power Apparatus and Systems, PAS-90 (2), March-April 1971.
Galiana, F. D., "Identification of stochastic electric load models from physical data", IEEE
Trans. on Automatic Control, AC-19 (6), December 1974.
Hagan, M. T., "The time series approach to short term load forecasting", IEEE Trans. on
Power Systems, 2 (3), August 1987, 785.
Ho, K. L., "Short term load forecasting of Taiwan power system using knowledge based
expert system", IEEE Trans. on Power Systems, 5 (4), November 1990, 1214.
IEEE Committee Report, "Load forecasting bibliography, Phase 1", IEEE Trans. on Power
Apparatus and Systems, PAS-99 (1), 1980, 53.
IEEE Committee Report, "Load forecasting bibliography, Phase 2", IEEE Trans. on Power
Apparatus and Systems, PAS-100 (7), 1981, 3217.
Kalra, P. K., "Neural network - a simulation tool", National Conf. on Paradigm of ANN for
Optimization, Process Modeling and Control, IOC, Faridabad, 7-9 September 1995.
Khincha, H. P. and Krishnan, N., "Short term load forecasting using neural network for a
distribution project", National Conf. on Power Systems (NPSC'96), Indian Institute of
Technology, Kanpur, December 1996, 17.
Man Mohan, Chaturvedi, D. K., Saxena, A. K. and Kalra, P. K., "Short term load forecasting
by generalized neuron model", Inst. of Engineers (India), 83, September 2002, 87-91.
Man Mohan, Chaturvedi, D. K., Satsangi, P. S. and Kalra, P. K., "Neuro-fuzzy approach for
developing a new neuron model", Soft Computing, Springer-Verlag, Heidelberg, 8 (1),
October 2003, 19-27.
Man Mohan, Chaturvedi, D. K. and Kalra, P. K., "Development of new neuron structure for
short term load forecasting", Int. J. of Modeling and Simulation, 2003, 46 (5), 31-52.
Mathewmann, P. D. and Nicholson, H., "Techniques for load prediction in the electric supply
industry", IEE Proc., 115 (10), October 1968.
Park, D., "Electric load forecasting using an artificial neural network", IEEE Trans. on Power
Systems, 6, 1991, 442.
Peng, T. M., "Advancement in application of neural network for short term load forecasting",
IEEE Trans. on Power Systems, 7 (1), 1992, 250.
Rahaman, S. and Bhatnagar, R., "Expert systems based algorithm for short term load
forecasting", IEEE Trans. on Power Systems, 3 (2), May 1988, 392.
Sharma, K. L. S. and Mahalanabis, A. K., "Recursive short term load forecasting algorithm",
IEE Proc., 121 (1), January 1974, 59.
Design of Least Cost Water Distribution Systems
Dibakar Chakrabarty*, Mesanlibor Syiem Tiewsoh
National Institute of Technology Silchar, Assam, India
*Corresponding author (e-mail:dibachakra@gmail.com; dibakar@nits.ac.in)
Design of a least cost water distribution system mainly deals with selection of optimal sets
of various network components, like pipe diameters, pumps, tanks, valves etc., while
satisfying the nodal flow and pressure head requirements. Water distribution systems are
generally costly and for a large pipe network system, use of suboptimal sets of network
components can have a huge cost implication. In the least cost design, the pipe diameters
are generally optimized as the cost of pipes constitutes a major component of the overall
network cost. However, the cost of a pipe network can be further reduced by use of optimal
sets of other network components in addition to optimal pipe diameters set. In the present
study, a model is proposed for least cost design of water distribution systems using a
simulation-optimization technique. The technique involves linking a pipe flow simulator to
an optimization method. In the proposed model, locations of water reservoirs (or water
tanks) are optimized along with pipe diameters for the design of least cost water distribution
systems. Under such circumstances, the optimization model becomes a mixed integer
nonlinear programming problem. The performance of the developed model is studied by
linking EPANET, a pipe flow simulator, with the genetic algorithm toolbox of
MATLAB. Results show that the developed model has the potential to reduce the cost of
water distribution systems significantly as compared to cases where only pipe diameters
are optimized.
Key words: Water distribution system, simulation-optimization, mixed integer nonlinear
programming problem, EPANET, genetic algorithm toolbox (MATLAB).
1. Introduction
A common objective in the design of a water distribution system (WDS) is to minimize the
overall cost of the WDS. In the optimal design of a WDS, it is necessary that consumer
requirements (i.e. flow at requisite pressure) at specified nodes are met satisfactorily throughout
the expected life of the project. A WDS is a hydraulic system through which water is conveyed.
The main components of a WDS include pipes, valves, pumps, tanks (or reservoirs), and other
accessories. It is obvious, therefore, that use of suboptimal network components can have cost
implications in the design, while use of optimal set of any network component may reduce the
overall cost of the system significantly. Over the last few decades, many researchers and
scientists have investigated the problem from different perspectives with the primary objective of
developing methodologies for least cost water distribution system design. Optimizing all possible
network components of a WDS is not always easy even for a simple WDS. The problem becomes
complicated with increase in size of a WDS. In the design of a least cost WDS, the designer
generally requires to select the optimal sets of different network components from a large number
of potential options. For each possible option, ideally the designer needs to check whether the
proposed design is meeting the requirements or not. The simulation-optimization approach is
generally used for tackling such problems. However, simulation of a WDS involves a complex
procedure and linking it with an appropriate optimization algorithm is not always easy. Moreover,
the least cost WDS model generally becomes a mixed integer nonlinear programming problem.
As such, solving such a problem requires proper modeling and use of proper optimization routine.
Over the years, researchers and scientists all over the world have achieved reasonable success
in the design of optimal WDSs. Pipe network computer modeling based on a numerical
method was developed by Hardy Cross in the 1930s for analyzing a looped pipe network (Cross,
1936). Based on this method, more efficient computer programs were later developed by various
researchers (Dillingham, 1967; Martin and Peters, 1963; Shamir and Howard, 1968). Rossman,
L.A. (1993) developed EPANET, a collection of functions that helps simplify
computer programming of water distribution network analysis. Simulation-optimization methods
have been used by many researchers with considerable success for optimal design of pipe
networks both under steady and unsteady flow conditions. Jowitt, P. and Xu, C. (1990) presented
a methodology for optimal control of valves in a WDS for minimizing leakage. Saud A. Taher and
Labadie J.W. (1996) developed a decision support system for analysis and design of WDSs using
linear programming and GIS. Genetic algorithm (GA) has been used for several decades in
solving pipe network problems by many researchers (Savic and Walters, 1997). Vasan, A. and
Simonovic, S. P. (2010) presented a model involving the application of differential evolution for
optimal design of water distribution networks using EPANET as the simulator. Most of the studies
available in literature mainly concentrated on design of a WDS by optimizing the pipe diameters,
control valves etc. under different design scenarios. Tiewsoh, M. S. (2013) developed a mixed
integer nonlinear optimization approach for optimal design of WDSs using EPANET and GA.
In the present study, a simulation-optimization approach for optimal design of a water
distribution system is proposed using mixed integer nonlinear programming. Both pipe diameters
and locations of water reservoirs are assumed to be the decision variables. The developed
methodology is initially validated for a two looped pipe network system. Subsequently,
performance analysis of the developed model is carried out for a larger WDS.
2. Model formulation
A generalized simulation-optimization based least cost WDS model can be expressed as:

Minimize  C = Σ(i=1 to N) c(Di) × Li    (1)

subject to:

G(H, D, q) = 0    (2)

Hj,min ≤ Hj ≤ Hj,max,  j = 1, 2, ..., M;  Di ∈ {D},  i = 1, 2, ..., N    (3)

q ∈ Ω    (4)

where N = total number of links in the WDS; M = total number of demand nodes; Hj = simulated
pressure head at node j; Hj,min and Hj,max are the minimum and the maximum allowable
pressure heads at node j; Li = length of the i-th pipe; c(Di) = cost per unit length of a pipe of
diameter Di; Di = diameter of the i-th pipe; {D} = set of commercially available pipe diameters;
q = set of other design parameters (excluding pipe diameters) of the network; Ω = set of all
feasible values of q; and C = design cost of the network. Constraint (2) in the model is the
hydraulic flow simulator, which ensures that both flow as well as energy equations are satisfied.
In the present study, EPANET is used as the network flow simulator while the GA toolbox
under MATLAB is used as the optimization routine. In the performance analysis of the developed
model, pipe diameters are optimized along with optimizing locations of a fixed number of
reservoirs for minimizing the overall cost of a WDS.
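A minimal sketch of how such a simulation-optimization fitness function is typically assembled (illustrative only; `simulated_heads` stands in for a call to the flow simulator, and all names here are hypothetical rather than the EPANET or MATLAB API):

```python
def network_cost(diams, lengths, unit_cost):
    """Pipe cost: sum of (cost per metre of chosen diameter) x length, as in Eq. (1)."""
    return sum(unit_cost[d] * length for d, length in zip(diams, lengths))

def fitness(diams, lengths, unit_cost, simulated_heads, h_min, penalty=1e7):
    """GA objective: pipe cost plus a large penalty for any node whose
    simulated pressure head falls below the minimum-head requirement."""
    cost = network_cost(diams, lengths, unit_cost)
    shortfall = sum(max(0.0, h_min - h) for h in simulated_heads)
    return cost + penalty * shortfall
```

A GA minimizing this value drives the search toward low-cost yet hydraulically feasible designs; in the actual model, `simulated_heads` would be produced by the pipe flow simulator at each candidate evaluation.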
3. Model validation
The developed model is initially validated by solving a two-loop benchmark problem taken from
Savic and Walters (1997). The two-loop network is given in Figure 1.
Figure 1: Two loop WDS used for model validation
In Figure 1, the pipe lengths are in meters and so are the node elevations. Flow requirements
(demand) at each node are given in cubic meters per day (CMD). Pressure head requirements at
all demand nodes are assumed as 30.0 m. Using the 14 available pipe diameters (in inches)
1, 2, 3, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, and 24, Savic and Walters (1997) reported the least
cost of the network as USD 419,000. Other details are available in the referred paper. In this
study, using exactly the same parameters, the least cost of the network comes out to be
USD 401,000, supporting the validity of the proposed model. While solving the problem, the set
of other design parameters (excluding pipe diameters) of the network (in Eq. 4) is assumed to be
known explicitly, and only the set of commercially available pipe diameters is chosen as the set
of decision variables.
4. Performance evaluation
For performance analysis, a relatively large WDS used in Jowitt and Xu (1990) with minor
modifications is chosen in this study. The objective here, different from the one used in Jowitt
and Xu (1990), is to minimize the total cost of the WDS by selection of optimal pipe diameters set
along with identifying optimal locations of a fixed number of reservoirs in the WDS. The modified
network with 10 potential reservoir locations is shown in Figure 2.
Figure 2: WDS with potential reservoir locations
The network has 22 nodes, 44 links, and 10 potential reservoir locations at nodes 3, 7, 9, 12, 13,
16, 20, 23, 24, and 25, as shown. In this study it is assumed that 4 reservoirs are to be optimally
selected out of the 10 potential locations so that the overall cost of the WDS becomes minimum
when an optimal pipe diameters set is used. To ensure this, the constraint (Eq. 4) in the
generalized formulation is replaced by Σ(k=1 to P) zk = R, where zk is a binary variable
(zk ∈ {0, 1}); P is the number of potential reservoir locations; and R is the fixed number of
reservoirs to be selected for the WDS. For a specific potential reservoir location, a reservoir is to
be constructed if the corresponding zk is 1, and if zk = 0, a reservoir is not to be constructed at
that location. In this study, elevations of all reservoirs at each potential location are assumed to
be fixed. In Figure 2, the 10 potential reservoirs are shown as 23T, 24T, 25T, ..., 32T. Elevations
of the reservoirs are 56.0 m, 56.0 m, 56.0 m, 29.0 m, 29.0 m, 29.0 m, 30.0 m, 38.0 m, 25.0 m,
and 22.0 m respectively. It is assumed that the minimum pressure head requirement at each
node is 10 m. Other data in respect of the proposed WDS, including available pipe diameters
along with their costs (in Indian rupees) per meter length, are given in Table 1.
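One common way to keep GA chromosomes feasible with respect to such a cardinality constraint is a repair step that flips bits until exactly the required number of locations is selected (a generic sketch with hypothetical names; the paper does not state how feasibility was enforced):

```python
import random

def repair_reservoirs(z, n_required):
    """Flip binary reservoir-location variables until exactly
    n_required of them are 1, so that sum(z) == n_required."""
    z = list(z)
    ones = [k for k, bit in enumerate(z) if bit == 1]
    zeros = [k for k, bit in enumerate(z) if bit == 0]
    while len(ones) > n_required:    # too many reservoirs selected: drop some
        k = ones.pop(random.randrange(len(ones)))
        z[k] = 0
        zeros.append(k)
    while len(ones) < n_required:    # too few: add some
        k = zeros.pop(random.randrange(len(zeros)))
        z[k] = 1
        ones.append(k)
    return z
```

An alternative to repair is a penalty term on the deviation of sum(z) from the required count; either approach keeps the search inside the 4-of-10 selection described above.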
Table 1. Relevant data for the WDS (Figure 2)
Attributes
(Link ID)/Link length
(m)
Node ID/Elevation
(m)/Node Demand in
cubic meters per day
Available pipe dia. in
mm (Cost in Indian
rupees per m length)
Values
(1)/606, (2)/454, (3)/2782, (4)/304, (5)/3383, (6)/1767, (7)/1014,
(8)/1097, (9)/1930, (10)/5150, (11)/762, (12)/914, (13)/822, (14)/411,
(15)/701, (16)/1072, (17)/864, (18)/711, (19)/832, (20)/2334
(21)/1996, (22)/777, (23)/542, (24)/1600, (25)/249, (26)/443,
(27)/743, (28)/931, (29)/2689, (30)/326, (31)/844, (32)/1274,
(33)/1115, (34)/615, (35)/1408, (36)/500, (37)/300, (38)/10, (39)/10,
(40)/10, (41)/10, (42)/10, (43)/10, (44)/10.
1/18/432, 2/18/864, 3/14/0, 4/12/432, 5/14/2592, 6/15/864, 7/14/0,
8/14/1728, 9/14/0, 10/15/432, 11/12/864, 12/15/0, 13/23/0,
14/20/432, 15/8/1728, 16/10/0, 17/7/0, 18/8/432, 19/10/432, 20/7/0,
21/10/432, 22/15/1728.
40(150), 60(250), 65(269), 80(349), 100(505), 125(661), 150(785),
175(850), 200(900), 225(1002), 250(1100), 275(1250), 300(1500).
128
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
As mentioned earlier, EPANET is used for simulation in this study while the MATLAB GA
toolbox is used as the optimization algorithm. The problem has 54 decision variables altogether,
out of which 44 variables are discrete while the remaining 10 variables are binary. The optimal
solutions are given in Table 2. The minimum cost of the WDS with 4 reservoirs comes out to be
Indian rupees 28,993,875.00.
Table 2. Optimal solution set for WDS (at Figure 2)
Optimal Attributes
(Link ID)/Pipe
diameters (mm)
Reservoir ID/Elevations
(m)
Values
(1)/200, (2)/200, (3)/60, (4)/100, (5)/250, (6)/275, (7)/100, (8)/275,
(9)/300, (10)/60, (11)/275, (12)/250, (13)/275, (14)/225, (15)/100,
(16)/200, (17)/125, (18)/60, (19)/150, (20)/125, (21)/80, (22)/150,
(23)/60, (24)/80, (25)/60, (26)/275, (27)/150, (28)/275, (29)/65,
(30)/125, (31)/125, (32)/175, (33)/60, (34)/80, (35)/275, (36)/225,
(37)/60, (38)/0, (39)/0, (40)/200, (41)/0, (42)/0, (43)/0, (44)/0.
23T/56.0, 24T/56.0, 25T/56.0, 26T/0.0, 27T/0.0, 28T/29.0, 29T/0.0,
30T/0.0, 31T/0.0, 32T/0.0
5. Conclusion
Results show that the developed methodology has the potential to be applied for least cost
design of very large scale water distribution systems. Although several GA runs were made to
obtain the optimal parameter values reported above, there is no guarantee that the reported
results are indeed globally optimal. There is considerable scope for extension of the work
reported in this paper; the first author is working on it, and some possible extensions are already
discussed in Tiewsoh, M. S. (2013).
References
Cross, H. Analysis of flow in networks of conduits or conductors, Bulletin No. 286, Univ. of Illinois
Engineering, Experimental Station, Urbana,1936.
Dillingham, J. H., Computer Analysis of Water Distribution Systems, Parts 1, 2, 4," Water and
Sewage Works, 1967, 114(1):1, 114(2):43, 114(4):141.
Jowitt, P. and Xu, C. Optimal Valve Control in Water‐Distribution Networks. J. of Water
Resources Planning and Management, ASCE, 1990, 116(4), 455–472.
Martin, D.W. and Peters, G. The application of newton’s method of network analysis by digital
computers. J. of the Institute of Water Engineers, 1963, 17(2), 115.
Rossman, L. A., The EPANET Water Quality Model in B. Coulbeck, ed., Integrated Computer
Applications in Water Supply, Vol. 2, Research Studies Press Ltd., Somerset, England,
1993.
Shamir, U. and Howard, C.D.D. Water distribution systems analysis. J. of Hyd. Div., ASCE,
1968, 94 (HY1), 219-234.
Savic, D.A. and Walters, G.A. Genetic algorithms for least-cost design of water distribution
networks. J. of Water Resources Planning and Management, ASCE, 1997, 123 (2), 67-77.
Taher, S. A. and Labadie, J. W. Optimal Design of Water-Distribution Networks with GIS. J. of
Water Resources Planning and Management, ASCE, 1996, 122(4), 301–311.
Tiewsoh, M. S. Optimal water distribution systems design using genetic algorithm. M.Tech.
dissertation, NIT Silchar, 2013.
Vasan, A. and Simonovic, S. P. Optimization of water distribution network analysis using
differential evolution. J. of Water Resources Planning and Management, ASCE, 2010,136(2),
279-287.
Optimization of Supply Chain Network - A Case Study
D. Srinivasa Rao*, K. Surya Prakasa Rao
DMS SVH College of Engineering, Machilipatnam-521002, Andhra Pradesh, India
*Corresponding author (e-mail:sameerasrinivas@gmail.com)
This case study deals with the design of a forward supply chain network for an
automobile battery maker. The majority of research in this field has focused on
optimization and allocation of new facilities to an existing forward supply chain network.
The case study reported in this paper seeks to contribute by experimenting with two
different regions for the distribution and collection of goods. This facility location problem
was solved using mixed-integer linear programming modeling tools. It is established
that providing certain new facilities in the existing forward supply chain network and
optimizing it would reduce the cost of the network and also the amount of time spent by
goods in the supply chain network. This study also brings out concerns such as a
de-centralized network, the manufacturer's dilemma in managerial control over collection,
and disturbance to the existing network.
1. Introduction
A supply chain is referred to as a logistics network that includes suppliers, manufacturers,
warehouses, distribution centers and retail outlets, called facilities. The flow between the
facilities may be in the form of product, information or cash. Supply chain management is a
set of approaches utilized to efficiently integrate suppliers, manufacturers, warehouses and
stores, so that merchandise is produced and distributed in the right quantities, to the right
locations and at the right time, in order to minimize system-wide costs while satisfying service
level requirements. Williams (1983) developed a dynamic programming algorithm for
simultaneously determining the production and distribution lot sizes at every node within a
supply chain network. Ishii et al. (1988) established a deterministic model for determining the
base stock levels and lead times associated with the lowest cost solution for an integrated
supply chain on a finite horizon. Based on economic order quantity (EOQ) systems, Cohen and
Lee (1989) presented a deterministic, mixed integer, non-linear mathematical programming
model. Arntzen et al. (1995) developed a mixed integer programming model, called GSCM
(Global Supply Chain Model), that can accommodate numerous products, facilities, stages
(echelons), time periods, and transportation types. An integer programming model for the
Procter and Gamble Company was developed by Camm et al. (1997) based on an
uncapacitated facility location formulation. The above models are categorized as deterministic
analytical models. The following review falls under the category of stochastic analytical models.
Cohen and Lee (1988) developed a model for determining a materials requirement policy for
every stage in the supply chain production system. Svoronos and Zipkin (1991) considered
multi-echelon, distribution-type supply chain systems (i.e., each facility has at most one direct
predecessor, but any number of direct successors). Here, the authors assumed a base stock,
one-for-one (S-1, S) replenishment policy for each facility. Lee et al. (1993) developed a
stochastic, periodic-review, order-up-to inventory model to support process localization in the
supply chain. Tzafestas and Kapsiotis (1994) used a deterministic mathematical programming
approach to optimize a supply chain, and then used simulation techniques to analyze a
numerical example of their optimization model. Finally, Lee et al. (1997) developed stochastic
mathematical models describing 'The Bullwhip Effect', the phenomenon in which the variance
of consumer demand becomes progressively more amplified and distorted at every echelon
upwards in the supply chain. Christy and Grout (1994) developed a cost-effective,
game-theoretic structure for representing the
consumer-seller relationship in a supply chain. The basis of this work is a 2 x 2 supply chain
'relationship matrix', which may be employed to identify conditions under which each type of
relationship is preferred. This is categorized as an economic model. Under the simulation model
category, Towill (1991) and Towill et al. (1992) employed simulation methods to assess the
consequences of assorted supply chain strategies on demand amplification. In the present study,
optimization is carried out for facility location of an existing forward supply chain network, for
which two regions were selected and data were collected to implement the mixed integer linear
programming model.
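For a small instance, the facility location decision at the heart of such a model can be illustrated by brute-force enumeration (a toy sketch with made-up names and data; an MILP solver performs the equivalent search implicitly for realistic network sizes):

```python
from itertools import combinations

def best_facilities(candidates, demand_nodes, fixed_cost, ship_cost, n_open):
    """Pick n_open facilities minimising fixed cost plus the cost of
    serving every demand node from its cheapest open facility."""
    best = (float("inf"), None)
    for subset in combinations(candidates, n_open):
        cost = sum(fixed_cost[f] for f in subset)
        # Each demand node is served from the cheapest open facility
        cost += sum(min(ship_cost[f][d] for f in subset) for d in demand_nodes)
        best = min(best, (cost, subset))
    return best
```

An MILP formulation replaces the enumeration with binary open/close variables and assignment variables, which is how the case study's facility location problem is actually solved.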
2. Case study model
In the present case study, a leading manufacturer of lead–acid automotive batteries in
South India is considered. The company is engaged in the manufacture and marketing of a wide
range of automotive (lead–acid) batteries. These products are used in cars, trucks
and other light and heavy vehicles; the company also produces batteries for inverters and other
power supplies. It delivers batteries to major automobile companies as an OEM supplier and
serves the replacement market through a network of warehouses, franchisees and retailers. The
company produces batteries using lead imported from countries such as New Zealand and
Australia. It does not use recycled lead and hence does not run any recycling facilities. The
batteries are distributed via a network of local warehouses. From these warehouses the batteries
are sold to selected franchisees, which also run outlets. The franchisees
distribute batteries to retailers and customers. Old batteries are collected from consumers,
generally at the retail level when they purchase a new battery. An incentive scheme encourages
consumers to return the old battery, and the collected batteries are then returned to the
franchisees. After accumulating a considerable quantity, the franchisee ships all old
batteries to the warehouse, and the warehouse in turn sells the batteries to recycling plants
as scrap. The lead recovered from the old batteries is used by unorganized battery makers and
for other industrial purposes.
In the present work an attempt has been made at optimal location of facilities in supply
chain networks, using the existing forward network for both distribution of new batteries and collection
of used batteries. Two regions of Tamil Nadu have been selected and
existing data have been collected for the purpose of optimization. Fig. 1 shows the
positions of the plant, warehouses, franchisees and dealers for the two regions in the supply chain
network. Fig. 2 and Fig. 3 represent the facilities of the individual regions, Chennai and
Coimbatore. The decision to be made here is to select optimally, from a finite number of possible
locations, the warehouses and franchisees. For this strategic-level problem, all the inputs related to
demand, cost, capacity, etc., are known, and the problem is developed as a deterministic mixed
integer linear programme. This model is solved using the LINDO optimizer tool.
2.1 Assumptions
The following assumptions are made in the mathematical model.
1. Factory location and customer zones are fixed.
2. Warehouses and franchisees have limited capacities.
3. The quantity of production from the present plant is adequate to meet the demand.
2.2 Notations used
Region 1: Chennai
A1i – Number of units shipped from the existing plant ‘P’ to warehouse ‘i’
Bij – Number of units shipped from warehouse ‘i’ to franchisee ‘j’
W1 and W2 – Warehouses; F1 to F7 – Franchisees
Region 2: Coimbatore
X1i –Number of units shipped from the existing plant ‘P’ to warehouse ‘i’
Yij– Number of units shipped from warehouse ‘i’ to franchisee ‘j’
W3 and W4 – Warehouses; F8 to F12 – Franchisees
D1 to D5 are market demand quantities for region 1
L1 to L3 are market demand quantities for region 2
Figure 1. Forward Supply Chain Network of Total System
Figure 2. Forward Supply Chain Network for Region 1 (Chennai)
Figure 3. Forward Supply Chain Network for Region 2 (Coimbatore)
2.3 Objective function
The mixed integer linear programming (MILP) model for both the regions is presented in
the following lines.
Minimize [Transportation cost + Franchisee operating cost + Warehouse operating cost]

Minimize [7.6A11 + 4.25A12 + B11 + B12 + 6.1B13 + 3.25B14 + 15.8B15 + 17.1B16 + 8.25B17 + 6.1B21 + 6.1B22 + B23 + 3.05B24 + 12B25 + 12B26 + 6.6B27 + 20.8X13 + 26.85X14 + 0.8Y38 + 0.8Y39 + 5.8Y310 + 9.8Y311 + 2.3Y312 + 4.5Y48 + 9Y49 + 0Y410 + 7.4Y411 + 10Y412] + [2100F1 + 2100F2 + 1600F3 + 1600F4 + 1600F5 + 1600F6 + 1600F7 + 1600F8 + 1600F9 + 2100F10 + 1600F11 + 1600F12] + [5148W1 + 3811W2 + 3811W3 + 5148W4]   (1)
2.4 Constraints for region 1 (Chennai region)
Warehouse capacity constraints are:
A11 - 4250W1 ≤ 0,   (1.1)
A12 - 4250W2 ≤ 0.   (1.2)
Flow constraints between warehouse and franchisees are:
A11 - B11 - B12 - B13 - B14 - B15 - B16 - B17 = 0,   (1.3)
A12 - B21 - B22 - B23 - B24 - B25 - B26 - B27 = 0.   (1.4)
Franchisees handling capacity constraints are:
B11 + B21 - 1200F1 ≤ 0,   (1.5)
B12 + B22 - 1200F2 ≤ 0,   (1.6)
B13 + B23 - 700F3 ≤ 0,   (1.7)
B14 + B24 - 700F4 ≤ 0,   (1.8)
B15 + B25 - 1000F5 ≤ 0,   (1.9)
B16 + B26 - 1000F6 ≤ 0,   (1.10)
B17 + B27 - 700F7 ≤ 0.   (1.11)
Demand constraints are:
B11 + B21 - D1 ≥ 0,   (1.12)
B12 + B22 - D2 ≥ 0,   (1.13)
B13 + B23 + B14 + B24 - D3 ≥ 0,   (1.14)
B15 + B25 + B16 + B26 - D4 ≥ 0,   (1.15)
B17 + B27 - D5 ≥ 0,   (1.16)
D1 = 625,   (1.17)
D2 = 625,   (1.18)
D3 = 372,   (1.19)
D4 = 496,   (1.20)
D5 = 372.   (1.21)
Only one warehouse is to be operated:
W1 + W2 = 1.   (1.22)
Only five franchisees are allowed for the region:
F1 + F2 + F3 + F4 + F5 + F6 + F7 = 5.   (1.23)
Flow constraints between warehouses and franchisees are:
X13 - Y38 - Y39 - Y310 - Y311 - Y312 = 0,   (1.24)
X14 - Y48 - Y49 - Y410 - Y411 - Y412 = 0.   (1.25)
2.5 Constraints for region 2 (Coimbatore region)
Warehouse capacity constraints are:
X13 - 2700W3 ≤ 0,   (2.1)
X14 - 2700W4 ≤ 0.   (2.2)
Franchisee handling capacity constraints are:
Y38 + Y48 - 850F8 ≤ 0,   (2.3)
Y310 + Y410 - 900F10 ≤ 0,   (2.4)
Y39 + Y49 - 850F9 ≤ 0,   (2.5)
Y311 + Y411 - 970F11 ≤ 0,   (2.6)
Y312 + Y412 - 970F12 ≤ 0.   (2.7)
Demand constraints are:
Y38 + Y48 + Y39 + Y49 - L1 ≥ 0,   (2.8)
Y310 + Y410 - L2 ≥ 0,   (2.9)
Y311 + Y411 + Y312 + Y412 - L3 ≥ 0,   (2.10)
L1 = 500,   (2.11)
L2 = 514,   (2.12)
L3 = 570.   (2.13)
Only one warehouse is to be operated:
W3 + W4 = 1.   (2.14)
Only three franchisees are allowed in this region:
F8 + F9 + F10 + F11 + F12 = 3.   (2.15)
Binary variables in the above formulation are:
W1, W2, W3, W4, F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, F11, F12 ∈ {0, 1}.   (2.16)
All other variables take non-negative values:
A1i, Bij, X1i, Yij ≥ 0.   (2.17)
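With the full formulation written out, it can also be cross-checked programmatically. The sketch below (standard-library Python, not the authors' LINDO code) transcribes the region-1 cost coefficients of objective (1) and constraints (1.1)–(1.23), and evaluates a candidate plan; the plan at the end is a hypothetical feasible illustration, not the optimal solution reported later in Table 1.

```python
# Region-1 data transcribed from objective (1) and constraints (1.1)-(1.23).
cost_A = {1: 7.6, 2: 4.25}                       # plant -> warehouse W_i
cost_B = {(1, 1): 1, (1, 2): 1, (1, 3): 6.1, (1, 4): 3.25,
          (1, 5): 15.8, (1, 6): 17.1, (1, 7): 8.25,
          (2, 1): 6.1, (2, 2): 6.1, (2, 3): 1, (2, 4): 3.05,
          (2, 5): 12, (2, 6): 12, (2, 7): 6.6}   # warehouse i -> franchisee j
franchisee_cost = {1: 2100, 2: 2100, 3: 1600, 4: 1600, 5: 1600, 6: 1600, 7: 1600}
warehouse_cost = {1: 5148, 2: 3811}
franchisee_cap = {1: 1200, 2: 1200, 3: 700, 4: 700, 5: 1000, 6: 1000, 7: 700}
demand = {1: 625, 2: 625, 3: 372, 4: 496, 5: 372}                  # (1.17)-(1.21)
zone_franchisees = {1: [1], 2: [2], 3: [3, 4], 4: [5, 6], 5: [7]}  # (1.12)-(1.16)

def plan_cost(A, B, W, F):
    """Objective (1), restricted to the region-1 decision variables."""
    transport = (sum(cost_A[i] * v for i, v in A.items())
                 + sum(cost_B[ij] * v for ij, v in B.items()))
    operating = (sum(franchisee_cost[j] for j, o in F.items() if o)
                 + sum(warehouse_cost[i] for i, o in W.items() if o))
    return transport + operating

def feasible(A, B, W, F):
    """Check constraints (1.1)-(1.16) and (1.22)-(1.23)."""
    shipped = {i: sum(v for (wi, _), v in B.items() if wi == i) for i in W}
    ok = all(A.get(i, 0) <= 4250 * W[i] for i in W)                # (1.1)-(1.2)
    ok &= all(A.get(i, 0) == shipped[i] for i in W)                # (1.3)-(1.4)
    ok &= all(sum(B.get((i, j), 0) for i in W) <= franchisee_cap[j] * F[j]
              for j in F)                                          # (1.5)-(1.11)
    ok &= all(sum(B.get((i, j), 0) for i in W for j in zone_franchisees[z])
              >= demand[z] for z in demand)                        # (1.12)-(1.16)
    ok &= sum(W.values()) == 1 and sum(F.values()) == 5            # (1.22)-(1.23)
    return ok

# Hypothetical plan: open warehouse W2 and franchisees F1, F2, F3, F5, F7.
W = {1: 0, 2: 1}
F = {1: 1, 2: 1, 3: 1, 4: 0, 5: 1, 6: 0, 7: 1}
A = {2: 2490}                                    # total region-1 demand
B = {(2, 1): 625, (2, 2): 625, (2, 3): 372, (2, 5): 496, (2, 7): 372}
total = plan_cost(A, B, W, F)
```

Such a checker is useful for validating a solver's output against the written constraints before the resulting costs are acted upon.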
3. Results and discussion
Data were gathered from the concerned industry for the purpose of experimentation and
analysis. The data collected cannot be presented here due to space constraints. With these basic
data, the results of the optimized model are given in Table 1. The optimal solution selects
potential new warehouse facilities at Vellore for region 1 (Chennai) and at Erode-Madurai for
region 2 (Coimbatore). Similarly, franchisees are also selected based on the optimal solution. The cost
structure for the present network is shown in Table 2. The results show a major difference in cost
structure between the existing network and the optimal network. Thus, optimization of the existing
forward supply chain network is achieved.
Table 1. Results for the optimal forward distribution network

S.No   Region     Warehouse selected     Franchisees selected   Total cost in Indian Rs
1      Region 1   W2* (Vellore)          F1 F2 F3 F6 F7         Rs 50,330
2      Region 2   W3* (Erode-Madurai)    F8 F10 F12             Rs 36,218
3      Total                                                    Rs 86,548
Table 2. Cost structure for the present network

S.No   Type of flow              Transportation cost (Rs)   Facility operating cost   Total cost (Rs)
1      Forward flow (Region 2)   54,156                     15,748                    69,904
2      Forward flow (Region 1)   32,838                     14,148                    46,986
3      Total                                                                          Rs 1,16,890

4. Conclusions
In this case study, optimization of the forward distribution network was achieved for an
existing network of a battery manufacturing industry. The network consists of a plant, a group of
warehouses, a group of franchisees and a large number of retailers. The locations of warehouses and
franchisees are discussed for two market regions. The total cost distribution and market allocation
are considered. Independent networks for the collection of used batteries can be designed in a
similar way. The results can be considered by management for strategic decision
making. Further, by applying queuing theory, simulation can be carried out to evaluate the
forward flow time of the network.
References
Beamon, B.M., Supply chain design and analysis: models and methods. International Journal of Production Economics, 1998, 55(3), 281-294.
Cohen, M.A. and Moon, S., Impact of production scale economies, manufacturing complexity, and transportation costs on supply chain facility networks. Journal of Manufacturing and Operations Management, 1990, 3, 269-292.
Lee, H.L. and Feitzinger, E., Product configuration and postponement for supply chain efficiency. Institute of Industrial Engineers, Fourth Industrial Engineering Research Conference Proceedings, 1995, 43-48.
Pyke, D.F. and Cohen, M.A., Performance characteristics of stochastic integrated production-distribution systems. European Journal of Operational Research, 1993, 68(1), 23-48.
Tzafestas, S. and Kapsiotis, G., Coordinated control of manufacturing supply chains using multi-level techniques. Computer Integrated Manufacturing Systems, 1994, 7(3), 206-212.
Supplier Selection using ELECTRE – I & II Methods
S.R. Gangurde¹, G.H. Sonawane²*
¹K.K.Wagh Institute of Engg. Education & Research, Nashik - 422003, Maharashtra, India.
²Shri Sant Gadge Baba College of Engineering, Bhusawal – 425203, Maharashtra, India.
*Corresponding author (email: girish.sonawane123@gmail.com)
This study presents an approach for solving the supplier selection problem from the
perspective of strategic management of the supply chain. The suppliers are selected
based on various criteria such as release cost, quality, discount rate, on-time
delivery, payment terms, technical superiority, financial and credit strength, etc. A case
study of suppliers for a pneumatic cylinder is selected for evaluation. The suppliers
are evaluated using the multi-criteria decision making (MCDM) approach 'ELimination
Et Choice Translating REality' (ELECTRE). The overall objective of the supplier
evaluation process is to reduce risk and maximize overall value to the purchaser.
1. Introduction
A good supplier selection makes a significant difference to an organization's future,
reducing operational costs and improving the quality of its end products. Many
factors in today's global market push companies to search for a competitive
advantage by focusing on purchasing, since raw materials and component parts represent the
largest percentage of the total product cost. Therefore, selecting the right suppliers is key to
the procurement process and represents a major opportunity for companies to reduce costs
(Zeydan et al. 2011). For many years the traditional approach to supplier selection was to select
suppliers solely on the basis of price. However, as companies have learned
that price as a single criterion for supplier selection is insufficient, they have turned to more
comprehensive multi-criteria decision making (MCDM) techniques. Several criteria for supplier
selection have been identified, such as net price, quality, delivery,
discount rate, historical supplier performance, technical superiority, and service (Benyoucef et
al. 2003). In addition, the importance of each criterion varies from one purchase to the next
and is complicated further by the fact that some criteria are quantitative (price, quality,
discount rate, payment terms, etc.), while others are qualitative (service, flexibility, credit
strength, etc.). Thus, a technique is needed that can adjust for the decision maker's attitude
toward the importance of each criterion and incorporate both qualitative and quantitative
factors. The objective of this study is to help decision makers reduce a base of
potential suppliers to a manageable number and make the supplier selection by means of
multi-criteria techniques.
2. Literature review
Roy et al. (2005) described the ELECTRE method, which has had a strong impact on the
Operational Research community; quantitative and qualitative aspects of the problem are
considered in a structured multi-objective framework. Aiello et al. (2006) applied the
ELECTRE method to the layout design problem, aiming to maximize the efficiency of the layout
as measured by the handling cost related to the inter-departmental flow and the distance
among the departments. Shanian and Savadogo (2006) suggested the use of an ELECTRE
model in material selection; noting that in some cases there is more than a single
definite criterion for selecting the right kind of material, they developed a new approach to the
use of ELECTRE by producing a material selection decision matrix
and a criteria sensitivity analysis. Gürler (2007) applied the ELECTRE method to supplier
selection in the Turkish automotive industry, underlining the importance of effective supplier
selection in that sector. Tahriri et al. (2008) studied supplier selection by integrating
a collaborative purchasing program and came up with a new approach based on the use of the
AHP method. Chatterjee et al. (2010) applied MCDM methods to solve the robot selection
problem using two of the most appropriate multi-criteria decision-making (MCDM) methods,
VIKOR and ELECTRE, and compared their relative performance for a given industrial
application; two real-life examples were cited in order to demonstrate and validate the
applicability and potential of both these MCDM methods. Ozcan et al. (2011) reviewed
methodologies such as AHP, TOPSIS, ELECTRE and Grey Theory for
supplier selection and evaluation; a comparative analysis of the three most widely used basic
methodologies, namely AHP, TOPSIS and ELECTRE, was conducted and the basic
characteristics of these methods were displayed. Greco et al. (2011) suggested an
ELECTRE method with robust ordinal regression to construct a set of outranking models
compatible with preference information. Zeydan et al. (2011) introduced a methodology
combining fuzzy AHP and TOPSIS for increasing supplier selection and evaluation quality; the
approach considers both qualitative and quantitative variables. Benyoucef et al. (2003)
addressed supplier selection as one of the most important functions performed by the purchasing
department; this work summarized the different criteria, the various problems of supplier selection
and the ELECTRE method to solve the problem. Yoon and Hwang (1995) presented the
algorithmic steps of the ELECTRE method for solving many decision making problems, such as
supplier selection and material selection.
3. ELECTRE: An outranking method
The ELimination Et Choice Translating REality (ELECTRE) method, developed by
Roy, is based on multi-attribute utility theory (MAUT). The concept of an outranking
relation (S) is introduced as a binary relation defined on the set of alternatives A. Given the
alternatives Ap and Aq, Ap outranks Aq (Ap S Aq) if, given all that is known about the two
alternatives, there are enough arguments to decide that Ap is at least as good as Aq. The
objective of this outranking method is to find all the alternatives that dominate other
alternatives while they cannot be dominated by any other alternative.
The algorithm and decision process of the ELECTRE method are described as follows (Yoon &
Hwang, 1995):
Step 3.1: Construct the decision matrix with the relevant criteria, potential alternatives and a
set of criteria weights; the original decision matrix is shown in Table 1.
Step 3.2: Convert all non-beneficial attributes to beneficial attributes by inverting them; the
converted decision matrix is shown in Table 2.
Step 3.3: Raw values (xij) are transformed to normalized values (rij) using equation (1); the
normalized values are given in Table 3.
rij = Xij / √( Σ_{i=1}^{m} Xij² ),  (j = 1, 2, …, n)   (1)
Step 3.4: Obtain the weighted normalized values (Vij) for each criterion by multiplying by the
criteria weights (Wj); using equation (2), the weighted normalized values are calculated in Table 4.
Vij = Wj · rij,  (i = 1, 2, …, m; j = 1, 2, …, n)   (2)
Step 3.5: The concordance matrix 'C' is formed by using the concordance set. For each pair
of alternatives p and q (p, q = 1, …, m and p ≠ q), the concordance set is calculated using
equation (3):
Cpq = { j | Vpj ≥ Vqj },  (j = 1, 2, …, n).   (3)
The relative value of the concordance set is measured by means of the concordance index
Cpq, defined by equation (4) and shown in Table 5:
Cpq = Σ_{j ∈ Cpq} Wj.   (4)
The discordance matrix 'D', which is the complementary matrix of the concordance matrix,
is formed by equation (5) and shown in Table 5:
Dpq = max_{j ∈ Dpq} |Vqj − Vpj| / max_{j} |Vqj − Vpj|.   (5)
Step 3.6: In the last step, according to the condition, if C (p,q) ≥ Cavr and D (p,q) ≤ Davr,
outranking relations between alternatives are determined.
Step 3.7: For the ELECTRE-II method the procedure up to step 3.5 is the same. Calculate the
pure-concordance index (Cj) and pure-discordance index (Dj) using equations (6) and (7)
respectively, as shown in Table 6:
Cj = Σ_{i=1}^{n} C(j, i) − Σ_{i=1}^{n} C(i, j)   (6)
Dj = Σ_{i=1}^{n} d(j, i) − Σ_{i=1}^{n} d(i, j)   (7)
Two separate rankings of the alternatives are obtained on the basis of these indices. The
ranking orders of all alternatives are then generated and the best alternative is selected, as
shown in Table 6.
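Steps 3.3 and 3.4 amount to a column-wise vector normalization followed by weighting. A minimal sketch, using a hypothetical 2-alternative, 2-criterion matrix rather than the case-study data:

```python
import math

def normalize(X):
    """Eq. (1): divide each entry by the root-sum-of-squares of its column."""
    m, n = len(X), len(X[0])
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    return [[X[i][j] / norms[j] for j in range(n)] for i in range(m)]

def weight(R, w):
    """Eq. (2): scale each normalized column by its criterion weight."""
    return [[w[j] * r for j, r in enumerate(row)] for row in R]

# Hypothetical data: two alternatives scored on two (beneficial) criteria.
X = [[3.0, 4.0],
     [4.0, 3.0]]
w = [0.6, 0.4]
V = weight(normalize(X), w)   # weighted normalized matrix
```

After normalization each column has unit Euclidean norm, so criteria measured on different scales become comparable before the weights are applied.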
4. Example
In order to demonstrate the application of the above-mentioned MCDM (ELECTRE)
method, the problem of supplier selection for a pneumatic cylinder is considered; the
following real-life example is cited.
4.1 For the pneumatic cylinder, different suppliers S1, S2, S3 and S4 are evaluated
on the basis of nine attributes: release cost (X1) in Rs., quality defects (X2) in %,
on-time delivery (X3) in days, discount rate (X4) in %, payment terms (X5) in days,
technical superiority (X6), product & service quality (X7), financial stability & credit
strength (X8), and miscellaneous (X9). Miscellaneous includes past performance, reputation of
the industry, spare parts availability and flexibility. The decision matrix is given in Table 1.
Table 1. Decision matrix

Alter.   X1     X2   X3   X4   X5   X6   X7   X8   X9
S1       1422   14   25   5    13   5    4    3    4
S2       2075   11   24   10   5    3    4    4    3
S3       2250   08   30   10   9    3    4    4    5
S4       1935   10   28   10   10   5    5    5    3
4.2 X1, X2 and X3 are non-beneficial attributes, while X4, X5, X6, X7, X8 and X9 are
beneficial attributes. Hence, the values of X1, X2 and X3 are inverted in order
to transform these attributes into beneficial ones, as shown in Table 2.
Table 2. Converted decision matrix

Alter.   X1         X2        X3      X4   X5   X6   X7   X8   X9
S1       0.000703   0.07142   0.040   5    13   5    4    3    4
S2       0.000481   0.08333   0.045   10   5    3    4    4    3
S3       0.000444   0.125     0.033   10   9    3    4    4    5
S4       0.000516   0.1       0.035   10   10   5    5    5    3
4.3 Performance data of the m alternatives on the n criteria are collected. Normalized
values (rij) are calculated using equation (1), as shown in Table 3.
Table 3. Normalized matrix

Alter.   X1      X2       X3       X4       X5      X6       X7       X8       X9
S1       0.643   0.3681   0.5142   0.2773   0.647   0.6154   0.4417   0.4242   0.520
S2       0.441   0.4293   0.5842   0.5547   0.323   0.4923   0.4417   0.5656   0.390
S3       0.407   0.6443   0.4284   0.5547   0.431   0.3692   0.5521   0.4242   0.650
S4       0.473   0.5154   0.4590   0.5547   0.539   0.6154   0.5521   0.5656   0.390
4.4 Calculating the weighted normalized values requires the weights of each criterion
from the decision maker. The weights are W1 = 0.25, W2 = 0.18, W3 = 0.10, W4 = 0.12,
W5 = 0.11, W6 = 0.05, W7 = 0.06, W8 = 0.08, W9 = 0.05. The weighted normalized values are
shown in Table 4.
Table 4. Weighted normalized matrix

Alter.   X1       X2       X3       X4       X5       X6       X7       X8       X9
S1       0.1610   0.0662   0.0514   0.0332   0.0711   0.0307   0.0265   0.0339   0.026
S2       0.1103   0.0772   0.0584   0.0665   0.0355   0.0246   0.0265   0.0452   0.019
S3       0.1017   0.1159   0.0428   0.0665   0.0474   0.0184   0.0331   0.0339   0.032
S4       0.1183   0.0927   0.0459   0.0665   0.0593   0.0307   0.0331   0.0452   0.019
4.5 The concordance index is equal to the sum of the weights associated with the criteria
contained in the concordance set. The discordance matrix is the complementary matrix of the
concordance matrix. The concordance and discordance indices Cpq and Dpq, defined by
equations (4) and (5) respectively, are shown in Table 5.
Table 5. Concordance matrix & discordance matrix

Concordance matrix (Cpq):
Alter.   S1      S2      S3     S4
S1       ----    0.49    0.55   0.535
S2       0.51    ----    0.54   0.225
S3       0.45    0.46    ----   0.32
S4       0.465   0.775   0.68   ----

Discordance matrix (Dpq):
Alter.   S1      S2       S3       S4
S1       ----    0.6565   0.8386   0.7791
S2       1       ----     0.8386   1
S3       1       0.4026   ----     0.7139
S4       1       0.5277   1        ----
4.6 Ap is at least as good as Aq (i.e. Ap S Aq) if and only if C(p,q) ≥ Cavr and D(p,q) ≤
Davr, where Cavr and Davr are the threshold values set by the decision maker. With Cavr = 0.50
and Davr = 0.813, the outranking pairs are (1, 4), (1, 3), (3, 2) and (4, 2). An
outranking graph, shown in Fig. 1, is developed.
Figure 1. Resulting graph
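The entries of Table 5 can be reproduced from the weighted normalized values of Table 4. One caveat: the published concordance figures are matched only if a tied criterion (Vpj = Vqj) contributes half its weight to each of Cpq and Cqp; that tie-handling convention is inferred here from the numbers, and is not stated in the text.

```python
# Weighted normalized matrix transcribed from Table 4 (rows S1..S4).
V = [[0.1610, 0.0662, 0.0514, 0.0332, 0.0711, 0.0307, 0.0265, 0.0339, 0.026],
     [0.1103, 0.0772, 0.0584, 0.0665, 0.0355, 0.0246, 0.0265, 0.0452, 0.019],
     [0.1017, 0.1159, 0.0428, 0.0665, 0.0474, 0.0184, 0.0331, 0.0339, 0.032],
     [0.1183, 0.0927, 0.0459, 0.0665, 0.0593, 0.0307, 0.0331, 0.0452, 0.019]]
W = [0.25, 0.18, 0.10, 0.12, 0.11, 0.05, 0.06, 0.08, 0.05]
m, n = len(V), len(W)

def concordance(p, q):
    """Eq. (4): sum of weights of criteria where p is at least as good as q.
    Ties contribute half the criterion weight to each side (inferred)."""
    return sum(W[j] if V[p][j] > V[q][j] else W[j] / 2
               for j in range(n) if V[p][j] >= V[q][j])

def discordance(p, q):
    """Eq. (5): worst loss of p against q, scaled by the largest gap.
    Assumes no alternative strictly dominates another (true here)."""
    scale = max(abs(V[q][j] - V[p][j]) for j in range(n))
    return max(V[q][j] - V[p][j] for j in range(n)) / scale

C = [[concordance(p, q) if p != q else None for q in range(m)] for p in range(m)]
D = [[discordance(p, q) if p != q else None for q in range(m)] for p in range(m)]
```

For example `C[0][1]` evaluates to 0.49 and `D[1][0]` to 1, as in Table 5; other discordance entries agree to within rounding, since the Table 4 inputs are themselves rounded.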
4.7 These two indices are estimated using equations (6) and (7) respectively, giving two
separate rankings of the alternatives. An average ranking is then obtained from the two
rankings, as shown in Table 6.
Table 6. Final rankings of suppliers

Alter.   Pure-concordance index   Initial rank   Pure-discordance index   Initial rank   Average rank   Final rank
S1       0.15                     2              -0.7258                  1              1.5            1
S2       -0.45                    3              1.2518                   4              3.5            4
S3       -0.54                    4              -0.5667                  2              3              3
S4       0.84                     1              0.0347                   3              2              2
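Starting from the Table 5 matrices, equations (6) and (7) and the final ordering can be reproduced in a few lines (the matrices below are transcribed from Table 5; the recomputed pure-discordance index of S3 comes out near -0.56 rather than the printed -0.5667, a small rounding difference):

```python
# Concordance and discordance matrices transcribed from Table 5.
C = [[None, 0.49, 0.55, 0.535],
     [0.51, None, 0.54, 0.225],
     [0.45, 0.46, None, 0.32],
     [0.465, 0.775, 0.68, None]]
D = [[None, 0.6565, 0.8386, 0.7791],
     [1.0, None, 0.8386, 1.0],
     [1.0, 0.4026, None, 0.7139],
     [1.0, 0.5277, 1.0, None]]
m = len(C)

def pure(M, k):
    """Eqs. (6)/(7): row sum minus column sum for alternative k."""
    return (sum(M[k][i] for i in range(m) if i != k)
            - sum(M[i][k] for i in range(m) if i != k))

Cj = [pure(C, k) for k in range(m)]               # pure-concordance indices
Dj = [pure(D, k) for k in range(m)]               # pure-discordance indices
rank_C = sorted(range(m), key=lambda k: -Cj[k])   # higher Cj is better
rank_D = sorted(range(m), key=lambda k: Dj[k])    # lower Dj is better
avg = {k: (rank_C.index(k) + rank_D.index(k)) / 2 + 1 for k in range(m)}
final = sorted(range(m), key=lambda k: avg[k])    # indices 0..3 = S1..S4
```

`final` comes out as [0, 3, 2, 1], i.e. S1 – S4 – S3 – S2, matching the final ranking of Table 6.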
5. Results and conclusion
Using the outranking graph of Fig. 1, the decision maker can select the best
pneumatic cylinder supplier by eliminating the other nodes. From Fig. 1, it is seen
that supplier 1 outperforms all other suppliers; hence, by the ELECTRE-I method,
supplier no. 1 is the best choice among the four considered alternatives. The decision matrix
and weight coefficients are taken as the input for the ELECTRE-II method, which ranks the
alternative suppliers from best to worst: based on these values, the initial, average and
final rankings of the alternatives are obtained using equations (6) and (7), as
shown in Table 6. The final ranking is S1 – S4 – S3 – S2.
References
Aiello, G., Enea, M. and Galante, G. A multi-objective approach to facility layout problem by
genetic search algorithm and ELECTRE method. Robotics and Computer-Integrated
Manufacturing, 2006, 22, 447–455.
Benyoucef, L., Ding, H. and Xie, X. Supplier selection problem: selection criteria & methods.
Expert Systems with Applications, 2003, 4, 4726-4744.
Chatterjee, P., Athawale, V. M. and Chakraborty, S. Selection of industrial robots using
compromise ranking and outranking methods. Robotics and Computer-Integrated
Manufacturing, 2010, 26, 483–489.
Greco, S., Kadzinski, M., Mousseau, V. and Słowinski, R. ELECTRE: Robust ordinal
regression for outranking methods. European Journal of Operational Research, 2011, 214,
118–135.
Gürler, I. Supplier selection criteria of Turkish automotive industry. Journal of Yasar University,
2007, 2(6), 555-569.
Ozcan, T., Celebi, N. and Esnaf, S. Comparative analysis of multi-criteria decision making
methodologies and implementation of a warehouse location selection problem. Expert Systems
with Applications, 2011, 38, 9773–9779.
Roy, B., Figueira, J. and Mousseau, V. Multiple criteria decision analysis: state of the art
surveys. International Series in Operations Research & Management Science, 2005, 78,
133-153.
Shanian, A. and Savadogo, O. A material selection model based on the concept of multiple
attributes decision making. Materials and Design, 2006, 27, 329–337.
Tahriri, F., Osman, M. R., Ali, A. and Yusuf, R. M. A review of supplier selection methods in
manufacturing industries. Suranaree J. Science, 2008, 15(3), 201-208.
Yoon, K. P. and Hwang, C. L. Multiple Attribute Decision Making an Introduction. Sage,
London, 1995.
Zeydan, M., Colpan, C. and Cobanoglu, C. A combined methodology for supplier selection
and performance evaluation. Expert Systems with Applications, 2011, 38, 2741–2751.
Optimized Design and Manufacturing of Tooling by
Concurrent Engineering
Gunjan Bhatt
Institute of Diploma Studies, Nirma University, Ahmedabad, Gujarat, India
*Corresponding author (e-mail: gunjan.bhatt@nirmauni.ac.in)
In today's competitive age, industries are required to be more active and efficient,
and they need to respond fast: growing competition calls for a fast response to the buyer
while maintaining quality. To meet this, effective resource utilisation and optimised
design and manufacturing of the product are required. Compared with
the traditional sequential design method, concurrent engineering is a systematic
approach to integrate concurrent design of products and their related processes. One
of the key factors to successfully implement concurrent engineering is information
technology. In order to design a product and its manufacturing process
simultaneously, information on product features, manufacturing requirements, and
customer demands must be processed while the design is concurrently going on.
There is an increased understanding of the importance of the correct decisions being
made at the conceptual design and development stages that involve many complex
evaluation and decision-making tasks. CAD engineers, CAE engineers and CAM
engineers can work concurrently and the integration of CAD/CAM/CAE can be
realized. This approach has been successfully applied to the design and manufacture of an
investment casting die, with results obtained in terms of optimization and lead
time reduction.
Keywords: Concurrent Engineering, Sequential Engineering, Investment casting
1. Introduction
The manufacturing environment has dramatically changed in the last few years.
Worldwide competition among manufacturers and the development of new manufacturing
technologies have contributed to today’s competitive situations in manufacturing industries.
Such competition has stimulated rapid changes in manufacturing industries, causing a
significant shift in how products are designed, manufactured, and delivered. Customers
demand products of higher quality, lower price, and better performance in an ever-shorter
delivery time. As an example, in the mid-1960s the Chevrolet Impala was the best selling
car in the USA, and the platform on which it was based was selling 1.5 million units a year;
in 1991 the best selling car was the Honda Accord, and the platform on which it was based
was selling 400,000 units a year. Despite the increase in the market size the number of
units per model has decreased by a factor of 4. Companies are required to produce more
and more new products, and at the same time reduce the time to market these products.
The first attempt made by Western companies to respond to this faster-changing
environment was to shorten their response time, pushing their development processes to
move faster while keeping on doing the same things. Product design was asked to reduce the
time to deliver the blueprints, and so were process engineering to design the process and
manufacturing to produce.
Strong efforts were made to help each function meet the goal of shortening its lead time,
particularly where Western companies felt themselves to be stronger on new technologies, and
particularly on computer technologies: CAD, CAE, CAM and CIM. Sophisticated automation
was introduced, but in most cases the results were disappointing. The main reason is that
these technologies were utilized just to speed up the process, not to change it. The
need for a new development process then became clear, and concurrent engineering (CE)
has emerged as an effective answer to this need.
2. Concurrent engineering
Competition is forcing changes in the way product designers and manufacturing
engineers develop products. In conventional product development, conceptual design,
detailed design, process planning, prototype manufacturing, and testing are considered
sequential processes. Compared with the traditional sequential method, concurrent
engineering is a systematic approach to integrate the concurrent design of products and
their related processes. Concurrent engineering is intended to stimulate product
designers/developers to consider all elements of the product life cycle in the early stages of
product development.
2.1 Concurrent engineering vs. sequential engineering
Figure 1. (a) Flow diagram of the serial engineering organization and (b) flow diagram of CE
organisation
A flow diagram of the serial engineering organization is shown in Figure 1(a). In serial
engineering, the various functions such as design, manufacturing, and customer service are
separated, and information flows in succession from phase to phase. In
sequential engineering a department starts working only when the preceding one has
finished its part of the project.
On the contrary, in CE all functional areas are integrated within the design process.
A flow diagram of CE is shown in Figure 1(b). The decision making process in a CE
environment differs from sequential engineering in that, at every stage, decisions are taken
considering the constraints and objectives of all stages of the product life cycle. Issues that
are usually addressed much later are thus taken up at the product design level, giving the
possibility of achieving a better overall solution. The integration of other functional areas
within the design process helps to discover hard-to-solve problems at the design stage.
Thus, when the final design is verified, it is already manufacturable, testable, serviceable,
and of high quality. The most distinguishing feature of CE is the multidisciplinary, cross-functional
team approach. Product development costs range between 5% and 15% of total
costs, but decisions taken at this stage affect 60–95% of total costs. Therefore it is at the
product development stage that the most relevant savings can be achieved.
3. Establishment of concurrent engineering for investment casting dies
Figure 2. Integrated CAD/CAE/CAM systems for investment casting die
The general scheme of the CAD/CAE/CAM integrated system for investment casting
dies is shown in Figure 2. In this paper, the UG NX CAD/CAM software, the
AutoCAST simulation software, and a primary expert system for the design of the investment
casting process are used to establish the CAD/CAE/CAM integrated system for investment
casting dies.
3.1 Solid modelling
First the 3D solid model of the part is created; then the 3D model of the investment casting, including the machining allowance, shrinkage and taper information, is formed using the UG NX CAD/CAM software (shown in figure 3). The data for machining allowance, shrinkage and taper are chosen from Database I in accordance with the accuracy and surface rating of the parts, the structure of the parts and the type of alloy.
Figure 3. Solid Model of Automobile (Tractor) Part
3.2 Design for investment casting die
Standard parts and raw materials to be used for manufacturing the die are shown in Table 1. The die is designed with two cavities, so production is increased by 50%.
Table 1. Die Components

Part Name                    Material          Quantity
Std. Dowel                   Hardened Steel    2
Std. Bush                    Hardened Steel    2
Std. Dowel                   Hardened Steel    2
Block                        Aluminium         2
Injection Plate              Mild Steel        1
Core                         Aluminium         2
Copper Block for Electrode   Copper            1
Std. Dowel                   Hardened Steel    2
The parting line is the intersection of the parting surface (the surface separating the die halves) with the casting or die cavity. The main objective in choosing the parting line is to split the component in such a way that cores and slides are reduced to the absolute minimum. Here the parting line is generated at the centre line of the product, so the die consists mainly of two parts. Figure 4(a) shows the method design details. Finally the 3D solid modelling of the whole set of investment casting dies, including cores, gating system, feeding system, injection plate, etc., is completed as shown in figure 4(b).
Figure 4. Multicavity Investment Casting Die
3.3 CAE – Casting simulation and analysis
The metal flow and solidification in the dies are simulated using the AutoCAST software. The simulation results of several technological schemes are analysed; the problems occurring in the metal flow and solidification can be observed directly through the simulation, and defects such as shrinkage cavities and porosity are largely eliminated. The design of the investment casting process is optimized with the help of CAE simulation and analysis as shown in figure 5.
Figure 5. Optimisation of casting method design
3.4 CAM for die
On the basis of the data of the 3D solid modelling of the cavities of investment casting dies,
with the use of the MANUFACTURING module of the UG NX CAD/CAM software, the
operation table of machining including the machining parameters, cutters, cutter path, etc. is
listed, and the NC (numerical control) cutting procedures and the CL (cutter locate) data files
are also created.
Figure 6. (a) Toolpath verification for core and (b) Toolpath verification for electrode for EDM
Furthermore, through the use of the NC-check module, the cutter path in machining is checked and the instantaneous machining process can be visualized. The CL data file of each NC cutting procedure is revised until a satisfactory result is reached. The CAM is realized as soon as these data files are post-processed and translated into NC machine code. The cutter paths for machining the core and the electrode for the Electric Discharge Machine (used to machine the die cavity) are shown in figures 6(a) and (b) respectively.
4. Benefits
With the help of the UG NX CAD/CAM software and a primary expert system package, the 3D solid model of the investment casting and the design of the technological scheme of the investment casting process are created. Next, the simulation and analysis of metal flow and solidification in the dies are performed using the AutoCAST software; the technological scheme and process parameters of investment casting are revised and optimized accordingly. Then the 3D solid modelling of the whole set of dies is completed. Finally the CAM machining data for the complex surfaces of the dies and cores are created, the CAM of the whole set of dies is performed simultaneously, and the design and manufacturing cycle of the dies is noticeably shortened. The method design and process parameters of the investment casting process are revised and further optimized using CAE simulation, and the quality of the investment castings is greatly improved in a shorter time.
5. Conclusion
Optimisation plays a vital role in business: it is an effort towards making things run smoothly with efficient utilisation of the available resources. Optimisation is, in that sense, a philosophy of life. When applied to the engineering sector, and to the foundry in particular, it avoids unnecessary wastage of resources, which leads to noticeable savings in cost. Concurrent engineering, implemented as an integrated CAD/CAE/CAM system for investment casting dies, can be used successfully in the design and manufacturing of investment casting dies for parts such as automobile components, machine tool components, etc. The design and manufacturing cycle of the dies is shortened considerably. The process parameters and technological scheme of investment castings can be optimized with the help of CAE simulation. This results in the production of investment castings of consistently high quality in a shorter time, and the lead time is greatly reduced.
References
Kermanpur, A., Mahmoudi, Sh. and Hajipour, A. Numerical simulation of metal flow and solidification in the multi-cavity casting moulds of automotive components. Journal of Materials Processing Technology, 2008, 206, 62–68.
Ravi, B., Creese, R.C. and Ramesh, D. Design for casting - a new paradigm for preventing potential problems. Transactions of the American Foundry Society, 1999, 107.
Reddy, A.P., Pande, S.S. and Ravi, B. Computer aided design of die casting dies. 42nd Indian Foundry Congress, Institute of Indian Foundrymen, Ahmedabad, 1994.
Starkbek, M. and Grum, J. Concurrent engineering in small companies. Machine Tools & Manufacture, 2002, 42, 417–426.
Staudacher, P., Landeghem, H.V., Mappelli, M. and Redaelli, C.E. Implementation of concurrent engineering: a survey in Italy and Belgium. Robotics and Computer Integrated Manufacturing, 2003, 19, 225–238.
Vijayaram, T.R., Sulaiman, S., Hamouda, A.M.S. and Ahmad, M.H.M. Numerical simulation of casting solidification in permanent metallic molds. Journal of Materials Processing Technology, 2006, 178, 29–33.
Design of an Oversaturated Traffic Signal using Simulated
Annealing
Adithya Parameswaran, Bhagath Singh K., Sandeep K., Harikrishna M.*
Department of Civil Engineering, National Institute of Technology Calicut, Kozhikode, 673601
*Corresponding author (e-mail: harikrishna@nitc.ac.in)
Traffic signals play a major role in easing delay and congestion at intersections. Design
of a traffic signal involves the computation of the cycle time and its apportionment to the green times of the various phases. Traditional methods of cycle time computation
like Webster’s method cannot be used in oversaturated conditions. Oversaturated
signals are common in urban road networks having heterogeneous traffic conditions. In
this work, the design of the cycle time of an oversaturated signal is taken up using the
optimisation technique, Simulated Annealing (SA). Traffic volume data at a busy traffic
intersection in Kozhikode city, Kerala, was collected using manual counts for morning
and evening peak periods, which were used for design. Dynamic PCU values were
used for the signal design. The optimum cycle timing was computed using SA.
Significant differences were observed between the observed signal timings and those
computed using Simulated Annealing. The average delay per cycle was found to be
less for the cycle time computed using SA compared to the actual delay observed at
the intersection.
1. Introduction
Indian cities have faced the challenge of a burgeoning vehicle population over the past few decades. The total number of registered motor vehicles increased from
about 0.3 million as on 31st March, 1951 to about 142 million as on 31st March, 2011 and the
total registered vehicles in the country grew at a Compound Annual Growth Rate (CAGR) of
9.9% between 2001 and 2011 (Ministry of Road Transport and Highways, 2012). The rapid
urbanization has caused the concentration of vehicles in cities and it is alarming to note that
32 percent of the total vehicles are plying in metropolitan cities (Sood, 2012). Improvements
in the existing road network therefore become mandatory so as to minimize the delay, cost of
travel as well as time between an origin and destination, in addition to reducing air and noise
pollution. Intersections in the road network contribute to the delay experienced by vehicles, which necessitates the introduction of improvement measures. The control of traffic at an intersection can be established over time and space. Separation over space of conflicting traffic includes the use of channels, rotaries and grade-separated intersections; these methods control the intersection by allocating a definite region in space to the vehicles moving in different directions. In time control, the intersection is given over to a part of the traffic for particular time intervals, which are allocated either by authorized personnel or by means of traffic signals.
The traditional design of traffic signals is based on the procedure recommended by
Webster. The design procedure involves the determination of the number of phases required
(signal display timing for an individual vehicle or pedestrian movement), computation of the
total cycle time (time for one complete set of signal indications) and apportioning the same to
the various phases based on the ratio of the traffic flow to the saturation flow associated with
a phase. The phase design involves the separation of conflicting movements into different
phases with minimum or less severe conflicts. It is largely governed by the geometry of the
intersection, flow pattern, i.e., turning movements and the relative magnitudes. Webster’s
equation, one of the foremost delay equations developed in 1958 is based on assumptions
such as Poisson vehicle arrivals and uniform discharge headways. The average delay per
vehicle is computed as a function of cycle length, the proportion of the cycle length which is
green, the approach volumes and the flow ratios. The equation developed is as follows:
𝑑 = c(1 − λ)² / [2(1 − λx)] + x² / [2q(1 − x)] − 0.65 (c/q²)^(1/3) x^(2+5λ)    (1)

Where
d = average delay per vehicle
c = cycle time
λ = proportion of the cycle that is effectively green for the phase under consideration
q = flow
x = degree of saturation, which is the ratio of the actual flow to the maximum flow that can be passed through the approach
Equation (1) was used by Webster to develop the optimum cycle time for an intersection as

C0 = (1.5 L + 5) / (1 − (y1 + y2 + …… yn))    (2)

Where
C0 = optimum cycle length in seconds
L = total lost time per cycle, generally taken as the sum of total amber and all-red clearance per cycle, in seconds
y = ratio of volume to saturation flow for the critical approach in each phase
A number of other methods, such as the Average Loaded Phase Expanded (ALE) method, Belli's method, the failure rate method and the Australian Road Capacity Guide method, have also been used for signal design. Considering the multi-attribute influence on the delay at a signal, optimisation techniques have also been used to arrive at optimum signal timings. Ceylan and Bell (2004) used a Genetic Algorithm (GA) to tackle the optimisation of signal timings with stochastic user equilibrium flows. Ceylan (2006) suggested that a good optimisation might be obtained by combining GA and Hill Climbing (HC) techniques, in order to avoid being trapped in bad local optima and to locate good global optima. In most of the works reported, the design pertained to undersaturated traffic conditions, wherein the sum of the ratios of flow to saturation flow over the critical approaches in each phase was less than 1. However, at many urban intersections with heterogeneous traffic conditions, this sum is found to be greater than 1, whereby computing the signal timings using equation (2) becomes infeasible.
In the present work, the signal timing for an oversaturated signal was arrived at using the
optimisation technique, Simulated Annealing (SA). The subsequent sections describe the
methodology adopted for the work and the theoretical foundation, results from the study and
conclusions are presented.
2. Methodology adopted and theoretical foundations
The existing traffic signal at Malaparambu intersection, in Kozhikode city, Kerala was
chosen for the study. The intersection is a four armed intersection and is signalised. Undue
delays were observed at the intersection for the existing signal timings. In order to design the
signal for the existing traffic flows, traffic volume count during morning and evening peak
periods was taken at the intersection using videographic survey. The vehicle volumes were
converted to Passenger Car Equivalent (PCU) values, using the dynamic PCU values
suggested by Turner and Harahap (1993). The intersection geometrics were collected, based on which the modified saturation flow values were computed (Raval and Gundaliya, 2012). As per their recommendations, the saturation flow values were calculated based on the width of the road and the proportions of two wheelers, buses, auto rickshaws and cars in the traffic stream. The formulas used for computation of saturation flow are as follows:
Saturation Flow Model Width (SFMW) Approach:
S = 626 w + 268    (3)

Saturation Flow Model Traffic Composition (SFMC) Approach:
S = 647 w + 709 tw + 270 b + 702 au − 1568 car    (4)
Where,
S   = saturation flow in PCUs/hr
w   = width of road in m
tw  = proportion of two wheelers in percentage
b   = proportion of buses in percentage
au  = proportion of auto rickshaws in percentage
car = proportion of cars in percentage
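As a quick illustration (a sketch with invented sample values, not from the paper), equations (3) and (4) can be written as plain functions:

```python
# Sketch of the saturation flow models of eqs. (3) and (4).
# The composition inputs follow the percentage definitions above;
# the sample width value is invented.

def sfmw(w):
    """Saturation Flow Model Width approach, eq. (3), PCUs/hr."""
    return 626 * w + 268

def sfmc(w, tw, b, au, car):
    """Saturation Flow Model Traffic Composition approach, eq. (4), PCUs/hr."""
    return 647 * w + 709 * tw + 270 * b + 702 * au - 1568 * car

print(sfmw(7.0))  # width-only estimate for a 7 m approach -> 4650.0
```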
The objective function chosen for the problem is minimisation of delay at the intersection as
given in equation (1). The constraints for the minimisation problem are that the optimum cycle
time should be between 25 and 120 seconds. Moreover, the effective green ratio is
constrained between 0.35 and 0.4. The maximum cycle time for an isolated signal is
prescribed as 120 seconds, as very long cycle lengths result in excessive delay (Garber and
Hoel, 2010).
Simulated Annealing, which models the physical process of heating a material and then
slowly lowering the temperature to decrease the defects, thus minimizing the system energy,
is used as the optimisation technique. In each iteration of the simulated annealing algorithm, a
new point is randomly generated. The distance of the new point from the current point, or the
extent of the search, is based on a probability distribution with a scale proportional to the
temperature. The algorithm accepts all new points that lower the objective function, but also,
with a certain probability, points that raise the objective function. By accepting points that
raise the objective function, the algorithm avoids being trapped in local minima in early
iterations and is able to explore globally for better solutions. A program developed in MATLAB
is used to solve the optimisation problem. The algorithm developed for the program is
described in the following sequential steps.
1. Initialise the flow q, the saturation flow s, the cycle time c, and the proportion of the cycle which is effectively green, l, for the phase under consideration.
2. Set the degree of saturation x = q/(l*s).
3. Set the delay d = c*(1-l)^2*0.5/(1-l*x) + x^2*0.5/(q*(1-x)) - 0.65*((c/q^2)^(1/3))*x^(2+5*l)
4. Set the initial temperature T = 1000.
5. Set the iteration counter t = 0.
6. Set the tolerance e = 10^-3.
7. While T >= e, go to step 8; otherwise go to step 29.
8. Set the counter n = 0.
9. If n <= 100, go to step 10; otherwise go to step 27.
10. c1 = c + Δc, where Δc is a random perturbation of the cycle time
11. l1 = l + Δl, where Δl is a random perturbation of the green ratio
12. x = q/(l1*s)
13. If |c1-c| >= e and |l1-l| >= e, go to step 14; otherwise go to step 26.
14. If c1 < 25 or c1 > 120 or l1 < 0.35 or l1 > 0.4, go to step 26.
15. Set d1 = c1*(1-l1)^2*0.5/(1-l1*x) + x^2*0.5/(q*(1-x)) - 0.65*((c1/q^2)^(1/3))*x^(2+5*l1)
16. If d1 < d, go to step 17; else go to step 20.
17. c = c1
18. l = l1
19. d = d1; go to step 26.
20. p = exp(-(d1-d)/T)
21. r = a random number generated from the Gaussian distribution
22. If r < p, go to step 23; otherwise go to step 26.
23. c = c1
24. l = l1
25. d = d1; go to step 26.
26. n = n+1; go to step 9.
27. t = t+1
28. T = 0.9*T; go to step 7.
29. Display c, l, d.
30. Stop.
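The step list above can be rendered compactly in Python (a sketch; the paper's program is in MATLAB). The perturbations in steps 10 and 11 are assumed here to be small uniform random steps whose magnitudes are invented, the sample q and s values are invented, and the step-13 minimum-change check is omitted for brevity:

```python
# Sketch of the simulated annealing loop described in steps 1-30.
# Assumed/invented: perturbation magnitudes, q and s values, seed.
import math
import random

def webster_delay(c, l, q, x):
    # Delay formula of eq. (1), as used in steps 3 and 15
    return (c * (1 - l) ** 2 / (2 * (1 - l * x))
            + x ** 2 / (2 * q * (1 - x))
            - 0.65 * (c / q ** 2) ** (1 / 3) * x ** (2 + 5 * l))

def anneal_signal(q, s, c=60.0, l=0.37, T=1000.0, e=1e-3, seed=1):
    random.seed(seed)
    x = q / (l * s)
    d = webster_delay(c, l, q, x)
    while T >= e:                                 # steps 7, 28
        for _ in range(101):                      # steps 8-9, 26
            c1 = c + random.uniform(-5, 5)        # step 10 (assumed step size)
            l1 = l + random.uniform(-0.01, 0.01)  # step 11 (assumed step size)
            if not (25 <= c1 <= 120 and 0.35 <= l1 <= 0.4):
                continue                          # step 14: reject infeasible point
            x1 = q / (l1 * s)                     # step 12
            if x1 >= 1:
                continue                          # delay formula requires x < 1
            d1 = webster_delay(c1, l1, q, x1)     # step 15
            # steps 16-25: accept improvements, and worse points
            # with probability exp(-(d1 - d)/T)
            if d1 < d or random.random() < math.exp(-(d1 - d) / T):
                c, l, d = c1, l1, d1
        T *= 0.9                                  # step 28: cooling schedule
    return c, l, d

c, l, d = anneal_signal(q=0.2, s=0.9)
print(round(c), round(l, 2), round(d, 1))
```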
A comparison is made between the delay observed at the intersection and the delay
computed using SA.
3. Results and discussion
The traffic volume data was used to calculate the flow values in the approaches for
morning and evening peak periods. The traffic flows for the morning and evening peak
periods are given in Table 1 and Table 2 respectively.
Table 1. Traffic flows (PCUs/hour) in all approaches during evening peak hour

From    To      Flow
North   East    152
North   South   527
North   West    208
East    South   292
East    West    720
East    North   124
South   West    421
South   North   500
South   East    265
West    North   174
West    East    627
West    South   255
Table 2. Traffic flows (PCUs/ hour) in all approaches during morning peak hour
Fro
m
North
East
To
Eas
t
Sout
h
Wes
t
Sout
h
Flow
250
596
256
339
Wes
t
121
6
South
West
Nort
h
Wes
t
Nort
h
Eas
t
Nort
h
Eas
t
Sout
h
192
487
653
216
158
508
297
The traffic flow values indicate that the patterns of flow during the morning and evening peak periods differ, and hence separate signal timings could be adopted for the two periods. The SA program was run 20 times for each approach, so as to minimise the delay per phase as far as possible. A scatter plot of the values was made and values falling far from the mean were ignored. The mean of the values from the approaches was adopted as the optimum cycle time for the intersection. The mean and standard deviation of the values of cycle length and delay for the four approaches are given in Table 3.
Table 3. Optimal cycle length and delay in approaches

Morning peak hour
Approach                                 North    East     South    West
Cycle Length (seconds)   Mean            102.87   100.17   97.34    104.71
                         Std. Deviation  10.94    10.80    12.70    11.10
Delay (seconds)          Mean            26.27    66.31    26.36    32.74
                         Std. Deviation  2.69     8.34     2.91     3.26

Evening peak hour
Approach                                 North    East     South    West
Cycle Length (seconds)   Mean            101.42   104.64   104.90   104.54
                         Std. Deviation  12.21    8.28     10.07    10.09
Delay (seconds)          Mean            23.96    37.12    27.17    34.44
                         Std. Deviation  2.75     3.20     2.65     3.84
The results of the optimised cycle length indicate that the values are quite close to one another and the standard deviations are comparable. The averages of the cycle length values for the morning and evening peak hours, 101.28 seconds and 103.88 seconds respectively, are chosen as the cycle times for the intersection, which could be apportioned to the various phases on the basis of the traffic flows.
At present, a cycle time of 150 seconds is adopted by the traffic police at the intersection, resulting in delays at the intersection. Table 4 gives the delays observed at the approaches and those computed for the cycle length obtained from SA.
Table 4. Delay (in seconds) observed at the approaches and computed values from SA

Approach                  North   East   South   West
Morning peak   Existing   78      151    84      61
               From SA    60      63     63      65
Evening peak   Existing   74      82     80      63
               From SA    57      50     62      53
It was observed that the delay per cycle is lower for the optimised cycle length computed using SA.
4. Conclusions
The study demonstrates the use of SA for computing the signal timings of oversaturated intersections, for which the traditional methods cannot be employed. The results indicate that the cycle length calculated by simulated annealing is better than that adopted at present and can be successfully applied at the intersection for better traffic flow during rush hours. The delay caused to vehicles would be less if the cycle length obtained from SA were adopted; the traffic authorities may use these signal timings for a smoother flow of traffic and as an immediate remedy for the present scenario witnessed at the signal. However, the use of SA also has disadvantages: the initial temperature chosen and the initial value assigned to the design variable influence the result, and there is a probability that the obtained result is a local minimum.
In the present study, only the cycle time for the intersection has been optimized; an optimization of the effective green times for the different phases can also be undertaken. Here the cycle time was optimized by minimizing the delay at the intersection, but an optimum signal design could also be obtained by maximizing capacity at the intersection, minimizing pollution, etc. Multiobjective optimisation of the cycle lengths can also be attempted, and other optimisation techniques like Hill Climbing and Genetic Algorithms, and their combinations, could be tried for traffic signal optimisation.
References
Ceylan, H. (2006) Developing combined genetic algorithm-hill climbing optimization method for area traffic control, Journal of Transportation Engineering, American Society of Civil Engineers, 132 (8), 663 - 671.
Ceylan, H. and Bell, M.G.H. (2004) Traffic signal timing optimization based on genetic algorithm approach, including driver's routing, Transportation Research Part B, 38 (4), 329 - 342.
Garber, N.L. and Hoel, L.A. (2010) Principles of Highway and Traffic Engineering, Cengage Learning India Private Limited, New Delhi.
Ministry of Road Transport and Highways (2012) Road Transport Year Book (2009-10 & 2010-11).
Raval, N.G. and Gundaliya, P.J. (2012) Modification of Webster's delay formula using modified saturation flow model for non-lane based heterogeneous traffic conditions, Highway Research Journal, Vol. 5, No. 1, Journal of the Indian Roads Congress, 41 - 48.
Sood, P.R. (2012) Air pollution through vehicular emissions in urban India and preventive measures, Proceedings of the 2012 International Conference on Environment, Energy and Biotechnology, 5-6 May 2012, Kuala Lumpur, Malaysia.
Turner, J. and Harahap, G. (1993) Simplified saturation flow data collection methods, Proceedings of the CODATU VI Conference on the Development and Planning of Urban Transport, February 1993, Tunis, Tunisia.
Shape Control of Cantilever Beam with Smart Material using
Genetic Optimization Technique
Hitesh Patel*, J. R. Mevada
Department of Mechanical & Mechatronics Engineering, U.V. Patel College of engineering,
Ganpat University, Mehsana-384012, Gujarat, India
*Corresponding author (e-mail: hitesh_davada@yahoo.com)
In this study, analytical work is carried out for static shape control of a cantilever beam structure using laminated piezoelectric actuators (LPA). The mathematical modelling of a beam element covered with LPA, based on Timoshenko beam element theory and the linear theory of piezoelectricity, has been used. The work shows how the control cost is decreased by varying the number of actuators, the actuator size, the actuator locations on the beam and the control voltage. Initially the beam is deflected by an external point load, and the desired condition is to bring the beam to, and keep it in, the horizontal position. The error between the desired shape and the achieved shape is taken as the objective to be minimized, with the size, location and control voltage of the actuators taken as variables. A Genetic Algorithm for calculating the optimum values of all variables is implemented using a Matlab tool.
1. Introduction
Nowadays there is a demand for highly precise structures in the aeronautical and astronautical industries, so considerable attention has been devoted to shape control of structures. Recently, researchers have given considerable attention to developing advanced structures with integrated control and self-monitoring capability; such structures are called smart structures. Using the direct and converse effects of piezoelectric materials for sensing and control of structures, considerable analytical and computational modelling work on smart structures has been reported by H. Irschik (2002) and Sunar (1999). B.N. Agrawal and Treanor (1999) presented analytical and experimental results for the optimal location of piezoceramic actuators on a beam; the size of the piezoceramic actuators was not varied, and optimization was carried out only for the actuator locations. Rui Ribeiro (2000) presented work on the optimal design and control of adaptive structures using a genetic optimization algorithm. S. da Mota Silva (2004) showed the application of a genetic algorithm for shape control with piezoelectric patches, along with a comparison with experimental data. Osama J. Aldraihem (2000) gave analytical results for the optimal size and location of a piezoelectric actuator on a beam for various boundary conditions; there the desired shape of the beam was taken as horizontal and a single pair of actuators was used. Work was carried out by E.P. Hadjigeorgiou (2006) on shape control of a beam using piezoelectric actuators based on Timoshenko beam theory. To minimize damage to the patches due to high voltage, multiple-layered LPAs are used; this concept was employed by Y Yu, X N Zhang and S L Xie (2009), who carried out shape control of a cantilever beam with optimization of the control voltage. In the present work, the control cost of the structure is reduced by minimizing the number of actuators and by varying the size and location of the actuators; the coverage of the beam by actuators is also minimized.
2. Mathematical modeling of beam
Consider a laminated composite beam with elastic and piezoelectric layers, in which several piezoelectric patches are glued one over another to make laminated piezoelectric actuators (LPA).
2.1 Governing equation of beam
The mathematical modelling is carried out based on Timoshenko beam element theory and the linear theory of piezoelectricity.
Figure 1. Beam with surface bonded LPA
Considering the Cartesian coordinate system shown in figure 1, the analysis is restricted to the x-z plane only, so the displacements in the three directions (using Timoshenko beam theory) are as given in E.P. Hadjigeorgiou (2006), where ω is the transverse displacement of a point on the centroidal axis and ψ is the rotation of the beam cross section about the positive y axis:

u1(x, y, z, t) ≈ z ψ(x, t),    (1)
u2(x, y, z, t) ≈ 0,    (2)
u3(x, y, z, t) ≈ ω(x, t).    (3)
The nonzero strain components of the beam, using the above equations, are

εx = z ∂ψ/∂x,   γxz = ψ + ∂ω/∂x.    (4)

The linear piezoelectric coupling between the elastic field and the electric field (no thermal effect is considered) is taken from H.F. Tiersten (1969):

{σ} = [Q]{ε} − [e]^T {E},    (5)
{D} = [e]{ε} + [ξ]{E}.    (6)

The equation of the laminated beam is derived with the use of Hamilton's principle:

δ ∫_{t1}^{t2} (T − H − We) dt = 0    (7)
Where δ(·) denotes the first variation operator, T the kinetic energy of the beam, H the electric enthalpy of the beam with laminated piezoelectric actuators, and We the external work done.
The electric enthalpy of the beam (H.F. Tiersten (1969) and H.S. Tzou (1993)), using equations (4) to (6), is

H = (1/2) ∫_Vb {ε}^T {σ} dV + (1/2) ∫_Vp ({ε}^T {σ} − {E}^T {D}) dV − (1/2) ∫_Vp {E}^T [ξ] {E} dV
  = ∫_0^Le [ (1/2) EI (∂ψ/∂x)² + (1/2) GA (ψ + ∂ω/∂x)² − M_el (∂ψ/∂x) − Q_el (ψ + ∂ω/∂x) ] dx    (8)

Where

EI = ∫_sb z² Q11 ds + ∫_sp z² Q11p ds,
GA = k ( ∫_sb Q55 ds + ∫_sp Q55p ds ),
M_el = ∫_sp z e31 Ez ds,
Q_el = ∫_sp e15 Ex ds.

The electric field intensity in the x direction is neglected, so Q_el = 0, and the value of the shear correction factor k is taken as 5/6.
Finally the work of external force is given by
We = ∫_0^Le (q ω + m ψ) dx    (9)
2.2 Finite element formulation of beam
Consider a beam element of length Le having two degrees of freedom per node: a translation (ω1 or ω2) and a rotation (ψ1 or ψ2), as shown in figure 2. The array of nodal displacements is defined as

{X^e} = [ω1  ψ1  ω2  ψ2]^T    (10)
Figure 2. Beam element
Figure 2 shows a Timoshenko beam element with two nodes and two degrees of freedom per node. The stiffness and mass matrices for the finite element formulation are taken as in E.P. Hadjigeorgiou (2006) and Y Yu (2009), so the final shape control equation is obtained as follows.
[K^e]{X^e} = {F^e} + {F_el^e}    (11)

Where

[K^e] = [K_uu^e] + [K_uV^e][K_VV^e]^{-1}[K_uV^e]^T    (12)

{F^e} = ∫_0^Le ( [Nω]^T q + [Nψ]^T m ) dx    (13)

{F_el^e} = ∫_0^Le ( (∂[Nψ]^T/∂x) M_el + ([Nψ]^T + ∂[Nω]^T/∂x) Q_el ) dx    (14)

3. Shape control and optimization using genetic algorithm
Here the study is carried out for static shape control; since there is no time-dependent vector in equation (13), it can be used directly in the static shape control problem. The layout of the beam with laminated piezoelectric actuators (LPA) is shown in figure 3. The beam is divided into 30 finite elements, and one LPA covers several elements of the beam; voltages of equal amplitude and opposite sign are applied to the upper and lower LPAs.
Figure 3. Layout of cantilever beam with surface bonded LPA
The material selected for the cantilever beam is graphite epoxy composite because of its lower density and lower coefficient of thermal expansion, which minimizes deflection due to temperature rise and self weight; the material selected for the actuator is PZT G1195N. The properties of both materials, taken from E.P. Hadjigeorgiou (2006), are specified in table 1. The beam is 300 mm long, 40 mm wide and 9.6 mm thick; the width and thickness of each LPA layer are 40 mm and 0.2 mm respectively. The length of the LPA is varied, and a 4 N force is applied at the free end of the beam.
Table 1. Material properties of base beam and LPA

Properties                      Symbol   LPA material    Graphite epoxy composite
                                         PZT G1195N      material T300/976
Young modulus (GPa)             E11      63              150
Poisson ratio                   ν12      0.3             0.3
Shear modulus (GPa)             G12      24.2            7.1
Density (kg/m³)                 ρ        7600            1600
Piezoelectric constant (C/m²)   e13      17.584          -
Electric permittivity (F/m)     ξ13      15.3 × 10⁻⁹     -
                                ξ15      15.0 × 10⁻⁹     -
The error function used in this study, following Ribeiro and Da Mota Silva (2000), is

Error = Σ_{i=1}^{n} (γ_i − q_i)²                                            (15)

where γ_i is the pre-defined displacement at the ith node and q_i the achieved displacement.
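In code, equation (15) is a plain sum of squared nodal differences. The sketch below is illustrative only; the displacement values are invented for the example.

```python
def shape_error(desired, achieved):
    # Equation (15): sum over the n nodes of the squared difference
    # between the pre-defined and the achieved displacement.
    return sum((g - q) ** 2 for g, q in zip(desired, achieved))

# Illustrative nodal displacements (units arbitrary):
e = shape_error([1.0, 2.0, 3.0], [0.9, 2.1, 3.0])
```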
Using this mathematical model, the beam is configured as per the results shown in table 4 of Yu (2009), and the results for the first and third actuator groups are compared to validate the mathematical model; the effect of gravity is neglected in the shape control calculation.
Table 2. Comparison of results with Yu (2009) for validation of the mathematical model

                      Voltage of actuator groups (V)       Error (10⁻⁶ m)
Work                  1      2     3      4     5
Present               203    --    209    --    --         3.6
Ref. Yu (2009)        203    --    209    --    --         3.3
3.1 Optimization using genetic algorithm
For the genetic algorithm, the population size is set to 100 for all three cases and the maximum number of generations is 1000. The algorithm is stochastic in nature, and a heuristic crossover function is selected.
The upper limit on voltage is specified as 400 V and the lower limit as 400 V in reverse polarity (reverse-polarity voltages are shown with a minus (-) sign). The minimum LPA length is taken as 30 mm, because the control (actuation) force an LPA exerts on the beam decreases as its length decreases; the maximum length is the full length of the beam. For the empty (uncovered) lengths, the upper limit is the full beam length and the lower limit is zero. The constraint equation requires that the sum of all actuator lengths and empty lengths be less than or equal to 300 mm.
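The settings above can be put together in a small sketch. The following is an illustrative skeleton only, not the authors' implementation: the finite-element shape-control model is replaced by a stand-in objective (`toy`), the length-sum constraint is enforced with a simple penalty term, and the operator choices (truncation selection, blend crossover, Gaussian mutation) and all names are assumptions.

```python
import random

random.seed(1)

N_ACT = 2                    # number of LPAs (illustrative)
V_LIM = 400.0                # voltage limits: -400 V ... +400 V
L_MIN, L_MAX = 30.0, 300.0   # LPA length limits (mm)
BEAM_LEN = 300.0             # total beam length (mm)

def random_individual():
    # Genes: one voltage and one length per actuator.
    return ([random.uniform(-V_LIM, V_LIM) for _ in range(N_ACT)] +
            [random.uniform(L_MIN, L_MAX) for _ in range(N_ACT)])

def fitness(ind, shape_error):
    # shape_error plays the role of equation (15); the length-sum
    # constraint is enforced with a penalty term.
    volts, lens = ind[:N_ACT], ind[N_ACT:]
    penalty = max(0.0, sum(lens) - BEAM_LEN) * 1e3
    return shape_error(volts, lens) + penalty

def clip(ind):
    # Keep every gene inside its stated bounds.
    for k in range(N_ACT):
        ind[k] = max(-V_LIM, min(V_LIM, ind[k]))
        ind[N_ACT + k] = max(L_MIN, min(L_MAX, ind[N_ACT + k]))
    return ind

def ga_minimize(shape_error, pop_size=100, generations=200):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, shape_error))
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()                # blend ("heuristic"-style) crossover
            child = [ai + w * (bi - ai) for ai, bi in zip(a, b)]
            child[random.randrange(len(child))] += random.gauss(0.0, 10.0)
            children.append(clip(child))
        pop = parents + children
    return min(pop, key=lambda ind: fitness(ind, shape_error))

# Stand-in objective: minimized at voltages (100, -50) and minimum LPA lengths.
toy = lambda v, l: (v[0] - 100.0) ** 2 + (v[1] + 50.0) ** 2 + 0.01 * sum(l)
best = ga_minimize(toy)
```

With a population of 100 and a few hundred generations, the skeleton drives the stand-in objective close to its optimum while keeping every gene inside the stated voltage and length bounds.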
Table 3. Comparison of optimization results with Yu (2009)

No. of LPA used    Error (10⁻⁶ m)
                   Ref. Yu (2009)    Present
1                  --                3.46
2                  3.3               2.08
3                  2.9               1.25
4                  1.9               0.64
The results obtained after optimization are shown in figure 4 and compared with table 4 of the paper by Yu (2009). By varying the size and location of the patches, the error between the desired and actual shapes is reduced to a considerable extent; the comparison with the reference results is given in table 3.
Figure 4. Optimization results obtained by varying the number of actuators
4. Conclusion
To minimize the control cost of a structure, the voltage, location and size of the piezoelectric actuators on the structure must be optimized. Here, optimization was carried out to minimize the error between the desired and actual shapes using a genetic algorithm. The size and location of the actuators depend on the desired shape to be obtained, and better shape control is obtained as the number of actuators is increased. For a cantilever beam, a higher actuation force is needed near the fixed end of the beam.
References
Brij N Agrawal and Kirk E Treanor, Shape control of a beam using piezoelectric actuators, Smart
Mater.Struct. 8 (1999) 729–740. Printed in the UK,
Crawley E F and Anderson E H 1985 Detailed models of piezoelectric actuation of beams J.
Intell.Mater. Syst. 1 4–25
E.P. Hadjigeorgioua*, G.E. Stavroulakisb, C.V. Massalasa, Shape control and damage
identification of beams using piezoelectric actuation and genetic optimization, International
Journal of Engineering Science 44 (2006) 409–421,
H. Irschik, A review on static and dynamic shape control of structures using piezoelectric
actuation, Comput. Mech. 26 (2002) 115–128.
H.F. Tiersten, Linear Piezoelectric Plate Vibration, Plenum Press, New York, 1969.
H.S. Tzou, Piezoelectric Shells, Kluwer Academic Publishers, The Netherlands, 1993.
Aldraihem, O.J., Optimal size and location of piezoelectric actuator/sensors: practical considerations, Journal of Guidance, Control, and Dynamics 23(3), May–June 2000.
Ribeiro, R. and Da Mota Silva, S., Genetic algorithms for optimal design and control of adaptive structures, CMS Conference Report, 11 May 2000.
Da Mota Silva, S., Ribeiro, R. and Monteiro, J.M., The application of genetic algorithms for shape control with piezoelectric patches — an experimental comparison, Smart Mater. Struct. 13 (2004) 220–226.
Sunar, M. and Rao, S., Recent advances in sensing and control of flexible structures via piezoelectric materials technology, Appl. Mech. Rev. 52 (1999) 1–16.
Yu, Y., Zhang, X.N. and Xie, S.L., Optimal shape control of a beam using piezoelectric actuators with low control voltage, Smart Mater. Struct. 18 (2009) 095006.
Neuro-fuzzy Applications in Urban Air Quality Management
Hrishikesh C.G, S.M. Shiva Nagendra
Department of Civil Engineering, I.I.T Madras
Urban air pollution is a serious public concern in many developed and developing cities of the world. In India, several urban areas have been declared non-attainment areas because of frequent violations of the national ambient air quality standards (NAAQS) set by the Central Pollution Control Board (CPCB). Several mathematical modelling approaches, such as deterministic, statistical and physical approaches, are widely used in addressing urban air quality issues. Among them, soft computing techniques have gained popularity in the last two decades, owing to the shortcomings of deterministic and statistical approaches and to their own ability to model complex non-linear systems. In recent years the Neuro-fuzzy approach has become popular for urban air quality management problems. The Neuro-fuzzy model is a hybrid computing technique that combines the learning ability of neural networks with the linguistic interpretation of fuzzy logic systems. This attractive feature makes the Neuro-fuzzy model a useful tool for decision making by local authorities. In this paper the applications of Neuro-fuzzy models in urban air quality management are discussed, and the merits and demerits of the Neuro-fuzzy approach are presented.
Keywords: Air pollution, Neuro-fuzzy, model, standard, non-linear
1. Introduction
Urban air pollution is one of the major environmental problems facing humanity today. The problem of anthropogenic air pollution began with urbanization and rapid industrialization, and it has been a cause of a multitude of health problems and other nuisances. In recent years, better fuels with lower emissions and higher efficiency have decreased the overall rate of pollution, but at the same time the number of vehicles and industries has increased with the rapid growth in population, making the overall situation worse.
Prediction and forecasting of air quality, and informing the public about health effects and precautions to be taken, are the major concerns of air quality management. In order to forecast air quality, the interrelationships between pollutant concentrations, meteorological parameters and vehicular parameters such as traffic flow, traffic density and travel time have to be determined, so that the concentrations can be corrected for changes in meteorological conditions and vehicle usage patterns. Because the relationship of pollutant concentration with meteorological and vehicular parameters is complex and nonlinear, a deterministic approach cannot model these parameters (Hao et al., 2009).
New soft computing techniques developed in recent decades make it possible to model data with non-linear relationships, datasets lacking any known relationship, and datasets with missing or incomplete data. Among these techniques, neural networks and fuzzy logic have been the most commonly used because of their unique properties. Neural networks have the ability to learn and to predict results and relationships well, while fuzzy systems can process and analyze data in the form of imprecise linguistic variables, which humans can interpret and analyze more easily than crisp datasets.
2. Neural networks
Neural networks are massively parallel, distributed networks of processing elements, or computational units, known as neurons. These networks derive their high computational power from the vast number of interconnections between nodes. They are inspired by the structure of the
brain, in which synaptic connections between neurons play a major role in its functioning. The network basically consists of an input layer, hidden layers and an output layer of interconnected neurons. The nodes of the input layer represent the different input parameters, while the structure and the number of neurons in the intermediate layers are not known in advance, which is why they are called hidden layers. The data are processed by applying activation functions, which are mostly logistic (sigmoid) functions or, in some cases, hyperbolic tangent functions, and by constantly updating the weights with the help of learning rules and algorithms such as least mean squares (the Widrow–Hoff rule) or the backpropagation algorithm.
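As a concrete illustration of the forward pass just described, the sketch below evaluates a one-hidden-layer perceptron with sigmoid activations. The weights, biases and the three normalized "meteorological" inputs are invented for the example; in a real model they would be learned by backpropagation.

```python
import math

def sigmoid(x):
    # Logistic activation function.
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: each neuron applies the sigmoid to a weighted sum
    # of the inputs plus its bias.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    # Output neuron: sigmoid of a weighted sum of the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# 3 normalized inputs (say temperature, wind speed, humidity),
# 2 hidden neurons, 1 output (a scaled predicted concentration).
y = mlp_forward([0.5, 0.2, 0.8],
                w_hidden=[[0.4, -0.6, 0.1], [0.3, 0.8, -0.5]],
                b_hidden=[0.0, 0.1],
                w_out=[1.2, -0.7],
                b_out=0.05)
```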
The main benefits of neural networks are that they can be used for non-linear data, they can learn complex and even unknown relationships from the data, and no prior assumption about the distribution of the data is required. Nowadays, most neural network models are multilayer perceptrons trained with some variant of the backpropagation algorithm. A large amount of training data is required for a neural network to give optimum results, and the internal behavior of the network is hidden. The technique also has difficulty converging to the global minimum, even with backpropagation, since the number of hidden layers and nodes must be determined by trial and error (Jain and Khare, 2010).
3. Fuzzy systems
Fuzzy systems were developed by Zadeh in 1965 to introduce linguistic vagueness into complex systems through the application of linguistic variables, membership functions and fuzzy IF-THEN rules. Linguistic variables can be defined as input or output variables of a system whose values are words or sentences from a natural language instead of numerical values. Fuzzy systems analyze data by applying partial set membership, as opposed to the crisp set membership normally used, and can be applied to imprecise, ambiguous and noisy input data. A fuzzy system consists of a fuzzification interface, a rule base, a knowledge base (the combination of the rule base and the database), a decision-making unit and, finally, a defuzzification interface (Figure 1).
Figure 1. Structure of a fuzzy system (Chen, 1989)
Fuzzy systems are based on a set of IF-THEN rules that transform strict numerical values into approximate linguistic terms, making the results easy to interpret and understand. These rules help describe the model and act as a tool for decision making by the authorities. Fuzzy systems can represent the uncertainties of human knowledge with linguistic variables; they allow simple interaction between the expert and the designer of the system; the results are easy to interpret because of the natural representation of the rules; and new rules can be added easily.
Membership functions are used to quantify linguistic terms, mapping non-fuzzy (crisp) input values to fuzzy linguistic terms and vice versa. Different types of membership function, like
triangular, trapezoidal, Gaussian, singleton and piecewise linear (Figure 2), are widely used in various applications. Among them, the triangular, trapezoidal and Gaussian forms are the most common in air quality management applications.
Figure 2. Types of membership functions
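The common membership-function shapes named above can be written down directly. A minimal sketch follows; the PM10 class limits in the example are illustrative, not CPCB breakpoints.

```python
import math

def triangular(x, a, b, c):
    # Membership rises linearly from 0 at a to 1 at the peak b,
    # then falls back to 0 at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    # Like the triangle but with a flat top (membership 1) between b and c.
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def gaussian(x, mean, sigma):
    # Smooth bell curve centred at `mean` with spread `sigma`.
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Example: membership of a PM10 reading of 80 in a "moderate" class whose
# illustrative limits are 50 (onset), 100 (peak) and 150 (offset).
m = triangular(80.0, 50.0, 100.0, 150.0)
```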
The shortcomings of the fuzzy model are its inability to learn from examples and its inability to generalize: it can only answer what is written in its rule base, and changes in the input parameters or in the structure of the model require alterations to the rule base.
4. Neuro-fuzzy approach
The Neuro-fuzzy model uses the neural network's ability to learn in combination with the fuzzy model's ability to interpret (Figure 3). It can be described as a neural network that is used as the equivalent of a fuzzy inference system. It can interpret numerical, linguistic and logical information, and it has the ability to self-learn, self-organize and self-tune (Jain and Khare, 2010). It can be trained to develop fuzzy IF-THEN rules or membership functions for the inputs as well as the output. It closely resembles a multilayer neural network, consisting of an input layer, an output layer and hidden layers that hold the membership functions and the fuzzy rule base.
Figure 3. Basic representation of Neuro-fuzzy model (Kumar and Garg, 2005)
4.1 Neuro-fuzzy applications in air quality management
Negnevitsky and Kelareva (2001) developed a Neuro-fuzzy model for the prediction of PM10
values in Launceston, Tasmania. Meteorological parameters such as temperature, wind speed,
cloud cover, pressure at mean sea level, date of the year as well as wind speed and mean sea
level pressure of the previous day were used as inputs to the model. A five-layered Neuro-fuzzy model, having input and output layers and three hidden layers representing fuzzy membership functions and rules, showed reasonable predictive performance. Morabito and Versaci
(2003) developed a model of multivariate relationship between local traffic data and topographic
information using Neuro-fuzzy approach to reproduce local interactions and to predict threshold
level of pollutant concentrations fixed by EU recommendations and local regulations. In their
research work neural network approach was used to predict the hydrocarbon (HC) concentration
and fuzzy approach was used to evaluate the pollution concentration levels. Yildirim and
Bayramoglu (2006) used an adaptive Neuro-fuzzy method to estimate the impact of meteorological factors on sulphur dioxide (SO2) and particulate matter (PM) levels over an urban area in Zonguldak,
Turkey. The input parameters used in the model were temperature, SO2 or Total suspended
particulates (TSP) concentrations of the previous day, wind speed, relative humidity, pressure,
solar radiation and precipitation. Their model forecasted trends in SO2 concentration levels with
accuracy of 75-90% and in the case of Particulate matter an accuracy of 69-80% was achieved
using 5-year datasets of the input parameters. Jain and Khare (2010) developed a Neuro-fuzzy model for the prediction of 1-h average CO concentration at 2 traffic intersections in New Delhi, India. In total, 8 models were developed for the 4 seasons at the 2 locations. The results were satisfactory, with accuracy varying between 89% and 93%. Chaudhuri and Middey (2011) used
ANFIS to predict peak gust speeds during pre-monsoon thunderstorms in Kolkata, India, using 4
prominent stability classes as inputs. Later, the model was compared with radial basis function
(RBF), multiple layer perceptron (MLP) and multiple linear regression (MLR) and found that
ANFIS gave better prediction results. Tomić et al. (2012) predicted the concentration of CO2 along the roads and intersections of the city of Niš in Serbia with the help of a Neuro-fuzzy model known as ANFIS (Adaptive Neuro-Fuzzy Inference System), having a first-order Takagi–Sugeno–
Kang fuzzy inference system. Temperature, wind speed, wind direction, traffic intensity,
atmospheric stability and time were used as input data. The model gave satisfactory results both for high concentrations of CO2 and for rapid rises in its concentration. Soni and Shukla (2012)
developed a 3 layer Neuro-fuzzy model trained with back propagation algorithm to predict the
urban concentration of NO2 and SO2 from their industrial concentration and determined the
concentration of ozone from the concentration of SO2 and NO2 in the city of Jabalpur, Madhya
Pradesh, India. Kişi (2012) developed generalized Neuro-fuzzy models for the determination of pan evaporation, using temperature as the sole input
parameter to evaluate the application and performance of single-input based models for
estimating the evaporation values for three weather stations Tucson, Phoenix and Flagstaff
located in Arizona, USA. Five-parameter Neuro-fuzzy models were generally found to be better
than the Penman, Stephens–Stewart and Griffiths models (the models normally used for
determination of pan scale evaporation). The Neuro-fuzzy models were used for estimating
evaporations at the Tucson station by using the data from the Phoenix and Flagstaff stations in
the second part of the study. It was found that Neuro-fuzzy models can be successfully used in
cross-station applications. In the third part of the study, the generalized Neuro-fuzzy models were
obtained by calibrating and using the pooled data from the Phoenix, Flagstaff, and Tucson
stations located in Arizona and were tested using the data from weather stations in Albuquerque,
New Mexico; Tucumcari, New Mexico; Cedar City, Utah; and Ahwaz, Iran.
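Several of the models surveyed above are ANFIS variants built on a first-order Takagi–Sugeno–Kang inference system: each rule fires with the product of its antecedent memberships, and the output is the firing-strength-weighted average of linear consequents. A minimal sketch of that inference step follows; the two rules and all parameter values are invented for illustration (ANFIS would tune them from data), and the inputs stand in for temperature and wind speed.

```python
import math

def gauss_mf(x, mean, sigma):
    # Gaussian membership function for the rule antecedents.
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def tsk_predict(temp, wind, rules):
    # First-order Sugeno inference: each rule's firing strength is the
    # product of its antecedent memberships, and the output is the
    # firing-strength-weighted average of the linear consequents.
    num = den = 0.0
    for (tm, ts), (wm, ws), (p, q, r) in rules:
        w = gauss_mf(temp, tm, ts) * gauss_mf(wind, wm, ws)
        num += w * (p * temp + q * wind + r)
        den += w
    return num / den if den else 0.0

rules = [
    # IF temp is LOW  and wind is LOW  THEN conc = 2.0*temp - 1.0*wind + 30
    ((10.0, 5.0), (1.0, 1.0), (2.0, -1.0, 30.0)),
    # IF temp is HIGH and wind is HIGH THEN conc = 0.5*temp - 3.0*wind + 20
    ((30.0, 5.0), (5.0, 1.0), (0.5, -3.0, 20.0)),
]
conc = tsk_predict(20.0, 3.0, rules)
```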
4.2 Benefits of Neuro-fuzzy approach
In the case of air pollution monitoring and modeling, most of the data have non-linear or very complex relationships, or relationships with no physical basis, and cannot
be modeled by regular deterministic or empirical models. The ability of neural networks to analyze non-linear data, together with their ability to learn, gives them an extra edge in the prediction of air quality. As air quality is expressed in linguistic parameters, it can easily be communicated to administrators, local bodies and the general public. The main motive of the Neuro-fuzzy approach is to forecast pollutant concentrations as accurately as possible at short time intervals, so that warning systems can be developed from these results and urban activities can be pre-planned or rescheduled during air pollution episodes.
4.3 Demerits of Neuro-fuzzy approach
As the model is a black box, it is difficult to understand the physical basis or the governing laws behind changes in its inputs. It cannot be used directly for different input conditions without prior learning, and the learning process requires additional data that must be available at the time of model development. For concentration determination, the model is accurate only within the range of values used during training.
5. Conclusion
The Neuro-fuzzy technique gives better results in air quality models than deterministic and statistical techniques because of its ability to capture non-linearity. Inputs with missing and incomplete data can also be modeled with the help of learning algorithms. Linguistic approximation helps in the modeling of imprecise and ambiguous data, and interpretations in the form of linguistic variables are easily understood by local authorities and the public. Neuro-fuzzy models have been applied in many cities around the world for the prediction of criteria pollutants, and early warning systems can be developed using their short-term predictions. However, the Neuro-fuzzy model is a black box, and it is difficult to understand the physical basis of the atmospheric conditions from it.
References
Chaudhuri, S. and Middey, A. (2011) Nowcasting thunderstorms with graph spectral distance and entropy estimation. Meteorol. Appl. 18:238–249.
Chen, Y.-Y. (1989) The Global Analysis of Fuzzy Dynamical Systems. University of California, Berkeley.
Morabito, F.C. and Versaci, M. (2003) Fuzzy neural identification and forecasting techniques to process experimental urban air pollution data. Neural Networks 16:493–506.
Hao, B., Xie, H. and Ma, F. (2009) Airborne dispersion modelling based on artificial neural networks. IEEE Global Congress on Intelligent Systems.
Jain, S. and Khare, M. (2010) Adaptive neuro-fuzzy modeling for prediction of ambient CO concentration at urban intersections and roadways. Air Qual. Atmos. Health 3:203–212.
Kumar, M. and Garg, D.P. (2005) Neuro-fuzzy control applied to multiple cooperating robots. Industrial Robot: An International Journal 32(3):234–239.
Tomić, M.A. et al. (2012) Neuro-fuzzy estimation of traffic induced air quality. Proceedings of ECOS 2012, Perugia, Italy.
Negnevitsky, M. and Kelareva, G. (2001) Air quality prediction using a Neuro-fuzzy system. IEEE International Fuzzy Systems Conference.
Kişi, O. (2012) Generalized Neurofuzzy models for estimating daily pan evaporation values from weather data. J. Irrig. Drain Eng. 138:349–362.
Soni, A. and Shukla, S. (2012) Application of Neuro-fuzzy in prediction of air pollution in urban areas. IOSR Journal of Engineering 2(5):1182–1187.
Yildirim, Y. and Bayramoglu, M. (2006) Adaptive Neuro-Fuzzy based modelling for prediction of air pollution daily levels in city of Zonguldak. Chemosphere 63(9):1575–1582.
Review of Phylogenetic Tree Construction Based on Some
Metaheuristic Approaches
Jitendra Agrawal1*, Shikha Agrawal2, Bhupendra Kumar Anuragi1, Sanjeev Sharma1
1SOIT, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, INDIA
2UIT, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, INDIA
*Corresponding author (e-mail: jitendra.agrawal@rgtu.net)
Phylogenetic tree construction is a challenging and widely studied problem in bioinformatics. Due to its NP-complete character, it is still an open problem for researchers. The computational complexity increases with the number of species, which cannot be handled by traditional methods such as the unweighted pair group method with arithmetic mean (UPGMA), maximum likelihood and maximum parsimony. To address this, several researchers have investigated metaheuristic methods for constructing phylogenetic trees and have reported promising results. This paper gives a brief survey of some metaheuristic approaches, namely Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and Genetic Algorithms (GA), which have been used to optimize phylogenetic tree reconstruction.
1. Introduction
Phylogenetics is the study of the evolutionary history of various living organisms (Peng et al., 2008), in which the divergences between species are represented by directed graphs or trees, known as phylogenies. The tree is constructed from the molecular sequences of the different species. A phylogeny derived from gene or protein sequences is known as a gene phylogeny, whereas a species phylogeny represents the evolutionary path of the different species. A gene phylogeny can be thought of as a local descriptor that describes gene evolution and the encoded gene sequences, helping to reveal how different genes are interrelated. There are mainly two types of tree: (a) rooted trees, in which all nodes derive from a single node, and (b) unrooted trees, which do not derive from one particular node.
Figure 1.1. (a) Unrooted tree; (b) rooted tree.
The constructed tree must follow standard graph-theory notation, in which nodes represent species and branches (edges) represent the relationships between species. The remainder of the paper is organized as follows: Section 2 introduces the Genetic Algorithm and the work done on optimizing phylogenetic trees using GAs; Section 3 covers Ant Colony Optimization; Section 4 describes methods other than GAs and ACO applied to the construction of phylogenetic trees; and Section 5 concludes the paper.
2. Genetic algorithm
2.1 Introduction
The Genetic Algorithm is a heuristic search algorithm developed by Goldberg et al. (1989) and is based on natural evolution, involving inheritance, mutation, selection and crossover. A GA begins with a population of encoded random solutions, usually termed chromosomes, whose ability to solve the problem is described by a fitness function. These individuals are subjected to natural selection based on their fitness values. In each generation, individuals undergo mutation and recombination events, with the mutation and recombination operators defined according to the nature of the problem, and the new population obtained is used in the next iteration. The algorithm usually terminates when a solution close enough (or equal) to the desired answer is produced, or when an appropriate fitness level has been reached in the population. Section 2.2 discusses how genetic algorithms are applied to phylogenetic tree reconstruction, and Section 2.3 describes the work done by several researchers on constructing phylogenetic trees with genetic algorithms.
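The selection step described above is often implemented as roulette-wheel selection, in which each chromosome is picked with probability proportional to its fitness. A minimal sketch, with illustrative tree names and fitness values:

```python
import random

def roulette_select(population, fitnesses, k):
    # Each individual is picked with probability proportional to its
    # (non-negative, higher-is-better) fitness.
    total = sum(fitnesses)
    chosen = []
    for _ in range(k):
        r = random.uniform(0.0, total)
        acc = 0.0
        for ind, f in zip(population, fitnesses):
            acc += f
            if acc >= r:
                chosen.append(ind)
                break
        else:
            # Guard against floating-point round-off on the last slot.
            chosen.append(population[-1])
    return chosen

# Four candidate trees with illustrative fitness scores: tree_d holds
# 7 of the 10 fitness units, so it should win roughly 70% of the draws.
random.seed(0)
population = ["tree_a", "tree_b", "tree_c", "tree_d"]
fitnesses = [1.0, 1.0, 1.0, 7.0]
picks = roulette_select(population, fitnesses, 1000)
```

Over many draws the fittest candidate is selected most often, which is exactly the selection pressure the generational loop relies on.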
2.2 How genetic algorithm is applied in phylogenetic tree reconstruction
GAs have been applied to a variety of complex engineering problems for many years, although their use on problems involving biological data has been explored only recently. The ability of GAs to find near-optimal solutions quickly for complex data makes them ideal candidates for the problem of phylogenetic inference, especially when many taxa are included or complicated evolutionary models (necessitating computation-intensive inference methods such as maximum likelihood) are applied. In phylogeny reconstruction, the single chromosome of each individual can be designed to encode a single phylogenetic tree, along with its branch lengths and the values of the other parameters of the substitution model used. Mutation and recombination operators can be defined for phylogenetic trees, and the fitness of an individual may be equated to its natural log-likelihood (lnL) score. Trees with higher lnL values thus tend to leave more offspring in the next generation, and natural selection increases the average lnL of the individuals in the simulated population. The tree with the highest lnL after the population fitness ceases to improve is taken as the best estimate of the maximum-likelihood tree.
2.3 Genetic algorithm based phylogenetic tree reconstruction
Matsuda (1996) proposed the construction of phylogenetic trees from amino acid sequences using a genetic algorithm that differs from a simple GA in its encoding scheme and in its crossover and mutation operators. At the initial stage, a fixed number of trees is selected from the available alternative trees by roulette selection based on their fitness values. Crossover and mutation are then applied from generation to generation to improve the quality of the trees. As the number of trees in each generation is fixed, trees with the best scores might be removed by these operators; the algorithm therefore ensures that the tree with the best score survives each generation. The main advantage of the algorithm is its
capability to construct a more likely tree from randomly generated trees with the help of crossover and mutation operators. The experimental results show that the performance of the proposed algorithm is comparable to that of other tree construction methods, such as maximum parsimony, maximum likelihood and UPGMA with different search algorithms.
Phylogeny reconstruction is a computationally difficult problem: as more taxa (objects) are included, the number of possible solutions increases, which further increases the amount of time spent evaluating non-optimal trees. To overcome this problem, Paul et al. (1998) proposed a genetic algorithm for maximum-likelihood phylogeny inference using nucleotide sequence data: a GA-based heuristic search that reduces the time required for maximum-likelihood phylogenetic inference on datasets involving large numbers of taxa. The algorithm works as follows. First, each individual is initialized with a random tree topology in which every branch is assigned a random value. The fitness of each individual is then calculated from its lnL score, and the individuals with the highest lnL scores are used to generate the offspring for the next generation. Finally, a recombination operation is performed; this recombination separates the GA from other traditional methods and allows a solution to be obtained in less time. The experimental results show that only 6% of the computational effort of a conventional heuristic search using tree bisection-reconnection (TBR) branch swapping is required to obtain the same maximum-likelihood topology.
In 2002, Clare et al. proposed Gaphyl, an evolutionary-algorithms approach to investigating the evolutionary relationships among organisms. Existing phylogenetic software packages use heuristic search methods to find the optimum phylogenetic tree, while Gaphyl uses evolutionary mechanisms and thus finds a more complete solution in less time. The GA search process as implemented in Gaphyl represents a gain for phylogenetics, finding more equally plausible trees than Phylip (Felsenstein et al., 1995) in the same runtime. Furthermore, as datasets grow with the number of species and attributes, the effectiveness of Gaphyl over Phylip appears to increase, because the Gaphyl search process is independent of the number of attributes (and attribute values), and the complexity of the search varies with the number of species, which determines the number of leaf nodes in the tree.
Clare et al. (2003) proposed a new version of Gaphyl, extended to work with genetic data. In the proposed algorithm, a DNA version of Gaphyl is constructed and the search processes of Gaphyl and Phylip are compared on DNA data. The experimental results reveal that Gaphyl's performance is better than Phylip's in some cases.
3. Ant Colony Optimization
3.1 Introduction
Ant Colony Optimization (ACO) is an evolutionary algorithm developed by Dorigo et al.
(1996), inspired by the foraging behavior of real ants. When an ant searches for food, it initially
explores the area around the nest at random. When it finds a food source, it evaluates the
quality and quantity of the food and carries some of it back to the nest. On the return trip, the
ant deposits a chemical pheromone trail on the ground, which helps other ants reach the food
source. This indirect communication between the ants via pheromone trails enables the colony
to find the shortest path between the nest and the food source. This property of ant colonies is
exploited by artificial ant colonies to solve combinatorial optimization (CO) problems. In
general, ACO repeats two steps to solve an optimization problem:
1) candidate solutions are constructed using a pheromone model, i.e., a parameterized
probability distribution over the solution space;
2) the candidate solutions are used to update the pheromone values so as to bias future
sampling toward good-quality solutions.
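These two steps can be illustrated with a minimal, generic ACO sketch in Python (an illustrative construct/update loop on a TSP-style distance matrix, not the implementation of any of the surveyed papers; alpha, beta, rho and q are the usual ACO control parameters):

```python
import random

def construct_tour(dist, tau, alpha=1.0, beta=2.0, rng=random):
    """Step 1: build one candidate tour by sampling each next city from the
    pheromone-weighted probability distribution."""
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        i = tour[-1]
        # attractiveness of city j: pheromone^alpha * (1/distance)^beta
        weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                   for j in unvisited]
        r = rng.random() * sum(w for _, w in weights)
        for j, w in weights:          # roulette-wheel selection
            r -= w
            if r <= 0:
                break
        tour.append(j)
        unvisited.remove(j)
    return tour

def tour_length(dist, tour):
    """Length of the closed tour."""
    return (sum(dist[tour[k]][tour[k + 1]] for k in range(len(tour) - 1))
            + dist[tour[-1]][tour[0]])

def update_pheromone(tau, tours, dist, rho=0.5, q=1.0):
    """Step 2: evaporate, then reinforce the edges used by the candidate
    tours in proportion to their quality."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour in tours:
        deposit = q / tour_length(dist, tour)
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i][j] += deposit
            tau[j][i] += deposit
```

Iterating construct_tour over a colony of ants and passing the resulting tours to update_pheromone concentrates pheromone on short edges, which is the filtering effect described in step 2.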
164
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
In Section 3.2 we discuss how Ant Colony Optimization (ACO) is applied to phylogenetic tree
reconstruction, and in Section 3.3 we describe the work done by several researchers on the
construction of phylogenetic trees through ACO.
3.2 How ant colony optimization is applied in phylogenetic tree reconstruction
The phylogenetic tree construction problem bears a close resemblance to the standard
Traveling Salesman Problem (TSP). One can associate an imaginary city with each taxon and
define the distance between two cities as the entry of the data matrix for the corresponding
pair of taxa. This formulation paves the way for the application of heuristic algorithms such
as ACO. The ant system selects an intermediary node between the two previously selected
ones; based on the intermediary node, the distances to the remaining nodes (species) are
recalculated. This procedure is repeated recursively until every node has been visited, at
which point the path is complete. The sum of the transition probabilities of the adjacent nodes
of the path is termed the score of the path and is used to update the pheromone trail. During
an execution cycle, every node belonging to at least one path contributes to incrementing the
pheromone trail; this helps the search avoid being trapped in a local maximum. In this way,
following an algorithm very close in spirit to the ant colony algorithm for the TSP, phylogenetic
trees may be reconstructed efficiently (Das et al. (2008)).
3.3 Ant colony optimization based phylogenetic tree reconstruction
Ando and Iba (2002) proposed an ant algorithm for the construction of evolutionary
trees that hybridizes the ant colony algorithm with stack count. The algorithm applies ACO
as a metaheuristic search for this NP-hard problem. The authors introduce two new
mechanisms, a suffix representation and a vertex-choosing mechanism, which enhance the
exploration capability of the ant colony by applying the stack-count strategy (Keith and
Martin (1997)). The algorithm chooses, from the set of possible trees, the tree that minimizes
the score for a given set of DNA sequences. The proposed algorithm showed satisfactory
results in a simulated experiment and on an alignment of protein sequences from 15 species.
Kumnorkaew et al. (2004) proposed a new ACO-based algorithm in which the
evolutionary tree is constructed with minimum total branch length, incorporating tree
construction, branch-length calculation, branch-point selection, a new ACO parameter and
a distance-weighting parameter. The algorithm starts by placing the ants at different branch
points and setting the initial pheromone value on every edge. Once the branch point of each
ant in the branch-point selection vector is sorted, each ant selects the next city to move to
based on the pheromone trail and the distance. The algorithm repeats city selection and
movement until all ants have completed their tours and their branch-point selection vectors
are filled. The total branch length of each ant is then computed and the pheromone values are
updated. Before the branch-point selection vector is emptied, the shortest total branch length
is stored. This process continues until the terminating condition is reached. To further
enhance the algorithm's ability, acceptance of small negative branch lengths is allowed,
because for large values of n (the number of species in the evolutionary tree problem)
restricting branch lengths to positive values limits convergence. The output is the shortest
pathway of the ant, including the branch labels and lengths, which is sufficient for the
construction of the evolutionary tree. The experimental results show that the algorithm
reduces the exponential time complexity of the evolutionary tree problem to polynomial time.
Perretto and Lopes (2005) proposed a reconstruction of phylogenetic trees using
the ant colony optimization paradigm. In the proposed algorithm, a fully connected graph is
constructed among the species using the distance matrix; the edges of this graph represent
the distances between species and the nodes represent the species. Initially each ant selects
a random node; then, at each node, its direction is determined by the transition function. The
main objective of each ant is to find a path that maximizes the transition probabilities; as a
result, the sequence of species producing the smallest evolutionary distance is obtained. The
proposed algorithm was compared with the well-known PHYLIP package using the programs
NEIGHBOR and FITCH, the comparison being based on an analysis of tree structure and
the total distance between nodes. Overall, the experimental results reported in the paper
were very promising.
Qin et al. (2006) proposed a novel approach to phylogenetic tree construction using
stochastic optimization and clustering, in which the ant colony algorithm is combined with both
a clustering method and a global optimization technique so that an optimal tree can be found
even from a bad initial tree topology. The proposed method has three parts: initialization,
construction of phylogenetic trees through clustering, and optimization of the tree. In the
initialization phase a weighted digraph is built, in which vertices represent the data to be
clustered and edges represent the acceptance rate between two objects. While traveling in
the digraph the ants update the pheromone on the path, and finally the ant colony and its
pheromone feedback system act as a global optimization technique for deriving the optimal
topology of the tree. The proposed algorithm was compared with a genetic algorithm, and the
results show that it converges much faster and achieves higher quality.
Guo et al. (2006) proposed a self-adaptive ant colony algorithm for phylogenetic
tree construction, in which the phylogenetic tree is constructed based on the equilibrium of the
path-selection distribution. The proposed method involves three steps: initialization,
construction of phylogenetic trees from the optimal path found by the ants, and optimization.
First, a fully connected graph is constructed from the distance matrix among species. To begin
the reconstruction, the ants start by selecting a random node. They travel across the
constructed graph, and at each node an ant finds its direction based on the probability
function. The algorithm adjusts the trail information, and the probability of each path is
determined from the equilibrium of the path-selection distribution. Every ant repeats the
procedure until all nodes have been traversed once, i.e., all nodes are in the list of
already-visited nodes. The sum of the probability function over the adjacent nodes in the path
gives the score of the path. To accelerate convergence and to avoid premature local
convergence, the algorithm adjusts the selection probability and the trail-information strategy
on each path based on the quality and distribution of the solutions obtained. The proposed
algorithm was compared with the Neighbor Joining (NJ) programs in the PHYLIP software
package and with the TSP approach. The experimental results show that the proposed
algorithm is easier to implement and obtains higher-quality results than the other algorithms.
Chen et al. (2009) proposed a new algorithm for phylogenetic tree construction
based on ant colony partitioning. Initially the root of the tree is defined, corresponding to the
full set of gene sequences. The algorithm then bisects the set of gene sequences so that the
sequences within each subset have similar properties while sequences in different subsets
have different properties. This process is repeated recursively until every subset contains a
single gene sequence. From these subsets, a phylogenetic tree whose leaves are the gene
sequences is progressively constructed. Each level of bisection is based on an extension of
ant colony optimization for the traveling salesman problem. The experimental results
demonstrate that the proposed algorithm is easy to implement and efficient, converges
faster, and obtains higher-quality results than other methods.
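The partitioning idea can be sketched as a recursive bisection. The fragment below is a simplified illustration only: a plain seed-based split (the two most distant members seed the two subsets) stands in for the ACO-driven bisection used in the actual algorithm:

```python
def bisect_tree(seqs, dist):
    """Recursively bisect a set of sequences into a nested binary tree.
    `dist` is a dict-of-dicts distance matrix; each bisection assigns every
    sequence to the nearer of two maximally distant seed sequences."""
    if len(seqs) == 1:
        return seqs[0]
    # pick the two most distant members as seeds of the two subsets
    a, b = max(((x, y) for x in seqs for y in seqs if x != y),
               key=lambda p: dist[p[0]][p[1]])
    left = [s for s in seqs if dist[s][a] <= dist[s][b]]
    right = [s for s in seqs if s not in left]
    return (bisect_tree(left, dist), bisect_tree(right, dist))
```

On a toy matrix where A and B are close to each other and far from the close pair C and D, this yields the nested grouping ((A, B), (C, D)), i.e., a tree whose leaves are the individual sequences.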
4. Phylogenetic tree reconstruction with other techniques
Li et al. (2000) developed a Bayesian phylogenetic reconstruction method based
on Markov chains. Using the Markov chain Monte Carlo (MCMC) technique, the method
generates a sequence of phylogenetic trees. The Markov chain is based on the Metropolis
algorithm, whose stationary distribution is the conditional distribution of the phylogenetic tree
given the observed sequences. The algorithm maintains a balance between the desire to move
globally around the space of phylogenies and the need to make feasible moves within the
high-probability region. The proposed algorithm is fast per iteration, because the likelihood
calculation is kept local to the target node, and since large subtrees can be swapped,
substantial changes to the tree are possible with few moves.
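The core of such a sampler is the Metropolis accept/reject step. A generic sketch follows (it assumes a symmetric proposal that rearranges the current tree, which is a simplification of the paper's actual move set; `log_target` stands for the log posterior of a tree given the sequences):

```python
import math
import random

def metropolis_step(state, log_target, propose, rng=random):
    """One Metropolis update: propose a neighbouring state and accept it
    with probability min(1, target(candidate)/target(state)). Run for many
    steps, the chain samples states in proportion to the target density."""
    candidate = propose(state, rng)
    log_ratio = log_target(candidate) - log_target(state)
    if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
        return candidate          # accepted move
    return state                  # rejected: keep the current tree
```

An uphill proposal is always accepted, while a strongly downhill one is almost always rejected; averaging over many iterations gives posterior estimates for tree features.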
Most of the existing approaches to phylogenetic inference use multiple sequence
alignment. But multiple sequence alignment is inefficient in the presence of gene
rearrangements (inversion, transposition and translocation at the substring level), sequences
of unequal length, etc., and it does not work for whole-genome phylogeny. Complete-genome
phylogenetic analysis is appealing because single gene sequences do not contain enough
information to construct an evolutionary history of organisms. To overcome these problems,
Otu and Sayood (2003) proposed a new sequence distance measure for phylogenetic tree
construction, in which a phylogenetic tree is built from distances between finite sequences
measured using LZ complexity (Lempel and Ziv (1976)). The LZ complexity of a finite
sequence S is defined as the number of steps required by a production process that builds S.
The resulting distance matrix is used to construct phylogenetic trees. The main advantage of
the proposed approach is that it does not require any sequence alignment strategy and is fully
automatic. The experimental results reveal that the proposed algorithm successfully
constructs efficient and consistent phylogenies for real and simulated data sets.
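A minimal illustration in Python follows. The parser counts components of a Lempel-Ziv (1976) style production process; the distance shown is one of several variants that can be built from such complexities, so treat the exact formula as illustrative rather than as the paper's definitive measure:

```python
def lz_complexity(s):
    """Number of steps in a production process that builds s: scan left to
    right, closing a component at the first extension that has not already
    occurred in the preceding text."""
    i, c = 0, 0
    n = len(s)
    while i < n:
        length = 1
        # grow the component while s[i:i+length] already occurs earlier
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

def lz_distance(s, q):
    """Alignment-free distance between sequences s and q, built from the
    extra parsing effort each sequence adds when appended to the other."""
    cs, cq = lz_complexity(s), lz_complexity(q)
    return max(lz_complexity(s + q) - cs, lz_complexity(q + s) - cq) / max(cs, cq)
```

Pairwise lz_distance values over a set of sequences yield the alignment-free distance matrix from which the tree is then built; the distance of a sequence to itself is 0.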
Lv et al. (2004) proposed a novel algorithm for phylogenetic tree reconstruction
in which Discrete Particle Swarm Optimization (DPSO) is used to select the best tree from the
population. In the proposed algorithm, the fitness value of each particle in the population is
first calculated, and the individual with the maximum fitness value is used for the phylogenetic
tree construction. Once the tree is constructed, the population update and branch adjustment
are performed. In the population update, the positions and velocities are updated using the
position and velocity update equations of the PSO developed by Kennedy and Eberhart
(1995). In the next step, the branches of the tree are adjusted by comparison: if the distance
between two nodes is greater than or equal to 2D (where D refers to the distance between two
sequences), the branch is separated; otherwise the branches are combined. This update
continues until the phylogenetic tree is optimized. The DPSO algorithm gives optimized
results even when the initial population is changed. The DPSO algorithm was applied to a
25-sequence problem involving sequences of the chloroplast gene rbcL from a diversity of
green plants, and the experimental results are satisfactory when compared with other
traditional algorithms.
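The Kennedy-Eberhart update at the heart of such a method can be sketched as follows. This is a generic binary-PSO sketch with sigmoid-squashed velocities, not the authors' exact DPSO operators; w, c1 and c2 are the usual inertia and acceleration parameters:

```python
import math
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One velocity/position update in the style of Kennedy & Eberhart (1995).
    For the discrete (binary) variant, each velocity component is squashed
    through a sigmoid and the corresponding bit is resampled with that
    probability."""
    new_v, new_x = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        # inertia + cognitive pull toward pbest + social pull toward gbest
        vel = w * vi + c1 * rng.random() * (pi - xi) + c2 * rng.random() * (gi - xi)
        new_v.append(vel)
        prob = 1.0 / (1.0 + math.exp(-vel))      # sigmoid squashing
        new_x.append(1 if rng.random() < prob else 0)
    return new_x, new_v
```

Each particle encodes a candidate tree as a bit vector; iterating this update drives the swarm toward the personal and global best trees found so far.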
5. Conclusion
In this paper we have reviewed recent efforts by several researchers on the
construction of phylogenetic trees. After a brief introduction to the problem of phylogenetic
tree reconstruction, we discussed the applicability of GAs, PSO and ACO and the work done
with each. Several traditional methods have been challenged and reviewed closely for their
relevance and acceptance, but they fall short of a near-optimum solution and suffer from
extensive computational overhead. To overcome these problems, GA has been combined with
several other methods, with satisfactory results compared to the traditional methods. The SI
tools also seem promising, because several tasks in bioinformatics involve the optimization of
different criteria, making the application of SI tools (such as ACO and PSO) natural and
appropriate for the phylogenetic tree reconstruction problem. Compared with GA, the
SI-based algorithms proposed by different researchers give better results. The papers
published in this context may be small in volume, but they are of immense significance to the
researchers of tomorrow, because the field is broad and a great deal of research work remains
to be done.
References
Ando Shin and Iba Hitoshi, "Ant Algorithm for Construction of Evolutionary Tree", IEEE, 2002.
Chen Ling, Qin Ling, Liu Wei and Chen Bolun, "A Phylogenetic Tree Constructing Method Based on
Ant Colony Partitioning", IEEE, 2009.
Congdon Clare Bates and Septor Kevin J., "Phylogenetic Trees Using Evolutionary Search:
Initial Progress in Extending Gaphyl to Work with Genetic Data", IEEE, 2003.
Congdon Clare Bates, "Gaphyl: An Evolutionary Algorithm Approach for the Study of
Natural Evolution", 2002.
Das Swagatam, Abraham Ajith and Konar Amit, "Swarm Intelligence Algorithms in
Bioinformatics", Springer-Verlag Berlin Heidelberg, 2008, 113-147.
Dorigo M., Maniezzo V. and Colorni A., "Ant System: Optimization by a Colony of
Cooperating Agents", IEEE Transactions on Systems, Man, and Cybernetics - Part B, Vol.
26, 1996, 29-41.
Felsenstein J., Phylip, source code and documentation, 1995.
Goldberg D. E., Genetic Algorithms in Search, Optimization, and Machine Learning,
Addison-Wesley Publishing Company, 1989.
Guo Jing, Chen Ling, Qin Ling and Wang Chao, "A Self-adaptive Ant Colony Algorithm for
Phylogenetic Tree Construction", IEEE, 2006.
Otu Hasan H. and Sayood Khalid, "A New Sequence Distance Measure for Phylogenetic Tree
Construction", Bioinformatics, 2003.
Keith J. M. and Martin M. C., "Genetic Programming in C++: Implementation Issues",
Advances in Genetic Programming, 1997.
Kennedy J. and Eberhart R., "Particle Swarm Optimization", in Proceedings of the IEEE
International Conference on Neural Networks, 1995, 1942-1948.
Kumnorkaew Pisist, Ming Ku-Hong and Ruenglertpanyakul Wiwat, "Application of Ant Colony
Optimization to Evolutionary Tree Construction", 2004.
Lempel A. and Ziv J., "On the Complexity of Finite Sequences", IEEE Transactions on
Information Theory, 22, 1976, 75-81.
Matsuda Hideo, "Construction of Phylogenetic Trees from Amino Acid Sequences Using a
Genetic Algorithm", 1996.
Lewis Paul O., "A Genetic Algorithm for Maximum-Likelihood Phylogeny Inference Using
Nucleotide Sequence Data", Molecular Biology and Evolution, 1998, 277-283.
Peng Chuang, "Distance Based Methods in Phylogenetic Tree Construction", Department of
Mathematics, Morehouse College, Atlanta, GA 30314, 2008.
Perretto Mauricio and Lopes Heitor Silverio, "Reconstruction of Phylogenetic Tree Using the
Ant Colony Optimization Paradigm", Genetic and Molecular Research, 2005.
Qin Ling, Chen Yixin, Pan Yi and Chen Ling, "A Novel Approach to Phylogenetic Tree
Construction Using Stochastic Optimization and Clustering", BMC Bioinformatics, 2006.
Li Shuying, Pearl Dennis K. and Doss Hani, "Phylogenetic Tree Construction Using Markov
Chain Monte Carlo", Journal of the American Statistical Association, 2000, 493-508.
Lv Hui-Ying, Zhou Wen-Gang and Zhou Chun-Guang, "A Discrete Particle Swarm Optimization
Algorithm for Phylogenetic Tree Reconstruction", Proceedings of the Third International
Conference on Machine Learning and Cybernetics, Shanghai, 2004, 2650-2654.
Optimization Model for Inventory Distribution
A. A. Thakre, Krishna K. Gupta
Visvesvaraya National Institute of Technology, Nagpur-440010 (India)
*Corresponding author (e-mail: aathakre@mec.vnit.ac.in, krishna591990@gmail.com)
A supply chain generally involves the movement of products from supplier to
manufacturer and from distributors to retailers and finally to customers. By using
lateral transshipment, products can also be sent from one location to another within the
same stage, i.e., from retailer to retailer under emergency conditions. This can optimize
the inventory carried at the warehouse and the units transferred from warehouse to
retailers, and minimize back orders and inventory levels at the retailers, so that the total
supply chain cost is minimized. In this paper a single-echelon, two-stage optimization
model is employed: in the first stage, inventory is distributed from a warehouse to three
retailers; in the second stage, the optimal transshipment of product among the three
retailers due to interactive lateral transshipment is identified. The model is validated by a
case study of a bread industry.
Key words: Supply chain, Inventory Distribution, Interactive-Lateral Transshipment, Linear
programming.
1. Introduction
The purpose of supply chain management is to improve trust and collaboration among supply
chain partners, thus improving inventory visibility and the velocity of inventory movement. It
includes not only the manufacturer and suppliers, but also transporters, warehouses, retailers,
and even customers themselves. Supply chain management is the active management of supply
chain activities to maximize customer value and achieve a sustainable competitive advantage.
Inventory systems often account for a large proportion of a business's costs. This makes it crucial
to manage them efficiently. The traditional design of an inventory system is hierarchical, with
transportation flows from one echelon to the next, i.e. from manufacturers to wholesalers and
from wholesalers to retailers. More flexible systems also allow lateral transshipment within an
echelon, i.e. between wholesalers or retailers. In this way, members of the same echelon pool
their inventories, which can allow them to lower inventory levels and costs whilst still achieving
the required service levels. Lateral transshipments can either be restricted to take place at
predetermined times before all demand is realized, or they can take place at any time in
response to stock-outs or potential stock-outs.
This work develops a two-stage inventory distribution model from a warehouse to three
retailers. The first stage allocates optimal stock to the retailers based on the forecasted
demand at each retailer; this allocation is then used in the second stage, which performs
lateral transshipment among the retailers. The aim of the first stage is to minimize inventory
at every level, i.e., at the warehouse and at the retailers. The aim of the second stage is to
minimize the total back order of products at the retailers that was not satisfied by the
warehouse; this is achieved by lateral transshipment, i.e., by transferring products from
retailer to retailer. The total supply chain cost can be minimized by transferring the excess
products among retailers. Without such transfers, the inventory at the warehouse and at the
retailers increases; for products with a short shelf life this causes greater losses for the
warehouse and the retailers, because the inventory carrying cost increases and storage
space at the warehouse and retailers is consumed. Lateral transshipment is not possible
through the attention of one party alone; it requires a good supply chain model and control by
an authority over all parties involved in the model. Lateral transshipment serves not only to
move stock among locations to minimize inventory and back orders but also to fulfill the
demand of the customers. Its use increases customer satisfaction with the product and the
services, and it also increases product sales and acceptance among customers. Fig. 1 shows
the SCM model for the current work.
Figure 1. Two-stage inventory distribution model (warehouse supplying Retailer 1, Retailer 2 and Retailer 3).
2. Literature review
Mangal and Chandna (2010) developed a model for a single supply source and multiple
retail locations. It was observed that lateral transshipment is profitable in terms of increasing
the service level and mitigating the problems of random demand and lead time. Fahimnia et al.
(2008) developed a mixed-integer formulation for a two-echelon supply network and studied
the integration of the aggregate production plan and the distribution plan; the model was
analyzed for a realistic scenario-based production-distribution problem. Reddy et al. (2011)
developed a two-stage supply chain distribution optimization model in which the first stage
covers the interaction between warehouses and retailers and the second stage covers lateral
transshipment among retailers, optimizing the total supply chain cost by minimizing inventory
at the warehouse and back orders at the retailers. The results of the model were compared
with the existing situation using confectionery industry data.
3. Proposed Model Description
In this model we consider transshipment between one warehouse and three retailers in the
first stage and lateral transshipment among the three retailers in the second stage. Once
stocks are allocated from the warehouse to the three retailers, the retailers can, if necessary,
transfer stock among themselves in the second stage to fulfill the demand of customers.
3.1 Assumptions used
The following assumptions have been used in the formulation of the model:
1. A single product is being distributed from the warehouse to the retailers in the system.
2. The demand for the product is forecasted before the beginning of every period and is used
as the reference for the warehouse to transfer stocks to the retailers in a particular period p.
3. All retailers and customers in the downstream are identical, i.e., they have identical cost
structures and demand distributions.
4. Any demand that is not met is considered a back order.
5. The warehouse assumes instantaneous supply from an outside source.
6. The warehouse and retailers are managed by a single organization.
7. Random demands occur at the retailers and are transferred to the warehouse for
replenishing inventories.
8. Excess demands at the retailers are completely backlogged.
9. The lead times of the retailers from the warehouse differ, depending on their geographical
positions.
10. The model is suitable for any industry that follows the two-stage supply chain model with a
finite number of warehouses and retailers.
11. For modeling simplicity, the lead times between retailers at the end of every period are
assumed negligible.
12. The lateral transshipment of stocks among retailers takes place before the start of the
next period.
4. Mathematical Model Formulation
4.1 First stage problem
Objective function = total inventory carrying cost at the warehouse in period p + total
transportation cost from the warehouse to all retailers in period p:
Minimize TCp1 = a×Z + a1×X1 + a2×X2 + a3×X3
In the first stage, we have the following constraints:
1. The inventory at the warehouse should be less than or equal to the warehouse capacity in period p:
Z ≤ Cw
2. The warehouse inventory in a particular period p should be at least the total forecasted
demand of the retailers for that period:
Z ≥ (Df1 + Df2 + Df3)
3. The number of units transferred from the warehouse to a retailer should be greater than or
equal to the forecasted demand of that retailer in period p:
X1 ≥ Df1, X2 ≥ Df2, X3 ≥ Df3
4. The number of units transferred from the warehouse to a particular retailer should be less
than or equal to that retailer's capacity in period p:
X1 ≤ Cr1, X2 ≤ Cr2, X3 ≤ Cr3
5. The sum of the units transferred from the warehouse to all retailers should be less than or
equal to the warehouse inventory for that period:
X1 + X2 + X3 ≤ Z
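Because every cost coefficient in TCp1 is positive, the first-stage optimum places each variable at its lower bound, so the model can be solved without a general LP solver. A sketch in Python, using the period-1 case-study data of Tables 1-7 as illustration:

```python
def first_stage(a, trans, Df, Cw, Cr):
    """First-stage model: minimize a*Z + sum(ai*Xi) subject to
    Xi >= Dfi, Xi <= Cri, sum(Df) <= Z <= Cw and sum(Xi) <= Z.
    With positive costs the optimum sits at the lower bounds."""
    assert sum(Df) <= Cw and all(d <= c for d, c in zip(Df, Cr)), "infeasible"
    X = list(Df)          # transfer exactly the forecasted demand
    Z = sum(Df)           # stock the warehouse just enough to cover it
    cost = a * Z + sum(ai * xi for ai, xi in zip(trans, X))
    return Z, X, cost
```

With a = 0.24625, the transport costs of Table 4 and the period-1 forecasts (316, 263, 166), this returns Z = 745 units and a first-stage cost of about Rs. 244.68, matching the proposed-model values in Tables 8 and 13.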
4.2 Second stage problem
Objective function = total inventory carrying cost of all three retailers in period p + total back
order cost of the retailers in period p:
Minimize TCp2 = (y1×I1 + y2×I2 + y3×I3) + (c1×Ib1 + c2×Ib2 + c3×Ib3)
where I = inventory at a retailer at the end of period p = stage-1 inventory at that retailer
minus the units transferred from that retailer to the other retailers, and Ib = back order at a
retailer at the end of period p = stage-1 back order at that retailer minus the units received by
that retailer from the other retailers.
In the second stage, we have the following constraints:
1. The inventory at a particular retailer at the end of stage one equals the number of units
received from the warehouse by that retailer minus the actual demand of that retailer in
period p:
Is1 = X1 - Da1, Is2 = X2 - Da2, Is3 = X3 - Da3 (if this value is negative then Is = 0)
2. The back order of a retailer at the end of stage one equals the actual demand of that
retailer minus the number of units received from the warehouse by that retailer in period p:
Ibs1 = Da1 - X1, Ibs2 = Da2 - X2, Ibs3 = Da3 - X3 (if this value is negative then Ibs = 0)
3. The sum of the units transferred from a retailer to the other two retailers should be less
than or equal to its excess inventory at the end of stage one:
T12 + T13 ≤ Is1, T21 + T23 ≤ Is2, T31 + T32 ≤ Is3, with T11, T22, T33 = 0
4. The sum of the units received by a retailer from the other two retailers should be less than
or equal to the first-stage back order of that retailer for period p:
R12 + R13 ≤ Ibs1, R23 + R21 ≤ Ibs2, R31 + R32 ≤ Ibs3
5. The units transferred by the other retailers to a given retailer minus the units received by
that retailer equal zero:
(T13 + T23) - (R31 + R32) = 0
(T21 + T31) - (R12 + R13) = 0
6. (Total transportation cost) T13×M + T23×N ≤ Ibs3×O + Ibs3×c3
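For intuition, the second stage can be viewed as matching surpluses to shortages over the cheapest links. The greedy pass below is only a sketch, not the LP the paper actually solves, although on the case-study data it reproduces the transfers reported in Table 10:

```python
def lateral_transshipment(surplus, shortage, cost, backorder_cost):
    """Greedy sketch of the second stage: move surplus stock to retailers
    with unmet demand along the cheapest links, as long as shipping a unit
    is cheaper than back-ordering it."""
    surplus, shortage = dict(surplus), dict(shortage)
    moves = {}
    links = sorted((cost[i][j], i, j) for i in surplus for j in shortage if i != j)
    for c, i, j in links:
        if c >= backorder_cost[j]:
            continue              # cheaper to leave the demand back-ordered
        qty = min(surplus[i], shortage[j])
        if qty > 0:
            moves[(i, j)] = moves.get((i, j), 0) + qty
            surplus[i] -= qty
            shortage[j] -= qty
    return moves, shortage        # shipments and remaining back orders
```

For period 1 (surpluses of 13 at R1 and 30 at R2, a shortage of 32 at R3, with the Table 5 transport costs and a Rs. 2.50 back order cost) this yields R2 to R3: 30 units and R1 to R3: 2 units, with no back orders remaining, exactly the proposed-model values in Tables 10 and 12.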
5. Case Study
The suggested model is implemented in a case study of a bread industry. The input data are
given in Tables 1-7.
Table 1. Input data for warehouse.
Inventory carrying cost per unit (Rs.): 0.24625
Warehouse capacity (units): 1200
Table 2. Retailers' demand in units.
Period   R1 Forecasted   R1 Actual   R2 Forecasted   R2 Actual   R3 Forecasted   R3 Actual
P1       316             303         263             233         166             198
P2       288             330         225             219         138             122
Table 3. Input data for retailers.
Retailer                                  R1      R2     R3
Inventory carrying cost (Rs. per unit)    0.185   0.37   0.4625
Table 4. Transportation cost from warehouse to retailers (Rs. per unit).
Retailer               R1        R2         R3
Transportation cost    0.07456   0.082016   0.096928
Table 5. Transportation cost from retailer to retailer (Rs. per unit).
        R1       R2       R3
R1      0        0.7456   2.2368
R2      0.7456   0        1.4912
R3      2.2368   1.4912   0
Table 6. Back order cost at each retailer (Rs. per unit).
Retailer           R1     R2     R3
Back order cost    2.50   2.50   2.50

Table 7. Retailers' capacity in units.
Retailer    R1    R2    R3
Capacity    400   350   300
6. Results
The two-stage, one-warehouse, three-retailer problem was solved using the MS Excel Solver,
and optimal solutions were obtained. The problem was solved for two periods; the results are
tabulated below and compared with the real-time existing solutions of the industry.
Table 8. Optimal warehouse stock in units.
Period    Proposed model    Existing (industry)
P1        745               800
P2        651               750
Table 9. Optimal number of units transferred from warehouse to retailers.
Period    Retailer    Proposed model    Existing (industry)
P1        R1          316               330
P1        R2          263               270
P1        R3          166               180
P2        R1          288               300
P2        R2          225               210
P2        R3          138               150
Table 10. Optimal number of units transferred from retailer to retailer (row retailer to column retailer).
                  Proposed model         Existing (industry)
Period    From    R1    R2    R3         R1    R2    R3
P1        R1      0     0     2          0     0     0
P1        R2      0     0     30         0     0     0
P1        R3      0     0     0          0     0     0
P2        R1      0     0     0          0     0     0
P2        R2      6     0     0          0     0     0
P2        R3      16    0     0          0     0     0
Table 11. Optimal inventory at retailers in units.
          Proposed model         Existing (industry)
Period    R1    R2    R3         R1    R2    R3
P1        11    0     0          27    37    0
P2        0     0     0          0     0     28
Table 12. Number of back-ordered units at retailers.
          Proposed model         Existing (industry)
Period    R1    R2    R3         R1    R2    R3
P1        0     0     0          0     0     18
P2        20    0     0          30    9     0
Table 13. Optimal total cost.
                                                      Proposed model (Rs.)   Existing (industry) (Rs.)
Cost component                                        P1        P2           P1       P2
Inventory carrying cost at warehouse                  183.46    160.31       197      184.69
Transportation cost from warehouse to all retailers   61.22     53.30        64.20    54.13
Inventory carrying cost at all retailers              2.035     0            18.69    12.95
Transportation cost from retailers to retailers       49.21     40.26        0        0
Back order cost at all retailers                      0         50.00        45       97.50
Total cost                                            295.92    303.87       324.88   349.27
6.1 Result comparison
Using this model, the excess inventory at both the warehouse and the retailer level of this
industry is minimized, and the total cost saving is approximately 9 to 13% (Table 13).
7. Conclusion
This paper presented a two-stage optimization model for inventory distribution. In the first
stage, transshipment between a warehouse and three retailers is studied, and in the second
stage, lateral transshipment among the retailers over two periods. The bread industry data
make it clear that the obtained results reduce the total supply chain cost.
References
Fahimnia B., Luong L. and Marian R., An integrated model for the optimization of a two-echelon
supply network, Journal of Achievements in Materials and Manufacturing Engineering,
Volume 31, Issue 2, December 2008.
Mangal Dharamvir and Chandna Pankaj, Lateral transshipment - a technique for inventory
control in multi-retailer supply chain systems, International Journal of Information Technology
and Knowledge Management, July-December 2010, Volume 2, No. 2, pp. 311-315.
Reddy K. Balaji, Narayanan S. and Pandian P., Single-echelon supply chain two stage
distribution inventory optimization models for the confectionery industry, Applied Mathematical
Sciences, Vol. 5, 2011, No. 50, 2491-2504.
Appendix-A
Demand of product (bread) at retailers R1, R2 and R3 over 10 intervals, used to forecast the demand for the 11th interval (period 1).

Interval | R1 (units) | R2 (units) | R3 (units)
1        | 262        | 207        | 111
2        | 303        | 224        | 154
3        | 272        | 198        | 123
4        | 265        | 213        | 116
5        | 261        | 206        | 112
6        | 276        | 221        | 122
7        | 294        | 239        | 145
8        | 312        | 257        | 156
9        | 305        | 270        | 163
10       | 325        | 245        | 172
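The forecasting method behind these tables is not spelled out in this excerpt, so the following is only a plausible sketch: a simple moving-average forecast of the 11th interval from the 10 recorded intervals, shown here for the Appendix-A R1 series.

```python
# Illustrative sketch only: the paper does not state its forecasting method in
# this excerpt; a moving average is one common choice for such demand series.

def moving_average_forecast(history, window=None):
    """Forecast the next interval as the mean of the last `window` observations
    (all observations if no window is given)."""
    window = window or len(history)
    recent = history[-window:]
    return sum(recent) / len(recent)

# R1 demand over the 10 intervals of Appendix-A
r1_history = [262, 303, 272, 265, 261, 276, 294, 312, 305, 325]
forecast_r1 = moving_average_forecast(r1_history)  # forecast for the 11th interval
```

A shorter window (e.g. `window=3`) would weight recent intervals more heavily, which matters here since the series trends upward.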
Appendix-B
Demand of product (bread) at retailers R1, R2 and R3 over 10 intervals, used to forecast the demand for the 11th interval (period 2).

Interval | R1 (units) | R2 (units) | R3 (units)
1        | 318        | 253        | 152
2        | 296        | 221        | 131
3        | 314        | 259        | 143
4        | 291        | 211        | 112
5        | 330        | 240        | 133
6        | 311        | 256        | 157
7        | 265        | 264        | 134
8        | 271        | 212        | 146
9        | 280        | 234        | 126
10       | 326        | 213        | 147
Appendix-C
Parameters:
TCp1=First stage cost in period p.
TCp2=Second stage cost in period p.
Z=Total inventory at warehouse.
X1=Unit transferred from warehouse to retailer 1.
X2= Unit transferred from warehouse to retailer 2.
X3= Unit transferred from warehouse to retailer 3.
C w =Warehouse capacity in period p.
Df1=Forecasted demand to the retailer 1 at period p.
Df2=Forecasted demand to the retailer 2 at period p.
Df3=Forecasted demand to the retailer 3 at period p.
Cr1=Retailer 1 capacity to making stock in period p.
Cr2=Retailer 2 capacity to making stock in period p.
Cr3=Retailer 3 capacity to making stock in period p.
a=Unit inventory carrying cost at warehouse.
a1=Unit transportation cost from warehouse to retailer1.
a2=Unit transportation cost from warehouse to retailer2.
a3=Unit transportation cost from warehouse to retailer3.
I1=Inventory at retailer 1 at the end of period p.
I2=Inventory at retailer2 at the end of period p.
I3=Inventory at retailer 3 at the end of period p.
Is1=Stage inventory at retailer 1 at the end of stage 1.
Is2=Stage inventory at retailer 2 at the end of stage 1.
Is3=Stage inventory at retailer 3 at the end of stage 1.
T12 = Units transferred from retailer 1 to retailer 2.
T13 = Units transferred from retailer 1 to retailer 3.
T23 = Units transferred from retailer 2 to retailer 3.
T21 = Units transferred from retailer 2 to retailer 1.
T31 = Units transferred from retailer 3 to retailer 1.
T32 = Units transferred from retailer 3 to retailer 2.
y1= Unit inventory carrying cost at retailer1.
y2= Unit inventory carrying cost at retailer2.
y3= Unit inventory carrying cost at retailer3.
Ib1= Back order at retailer 1 at the end of period p.
Ib2= Back order at retailer 2 at the end of period p.
Ib3= Back order at retailer 3 at the end of period p.
Ibs1= Stage back order at retailer 1 at the end of stage 1.
Ibs2= Stage back order at retailer 2 at the end of stage 1.
Ibs3= Stage back order at retailer 3 at the end of stage 1.
R12 = Units received by retailer 1 from retailer 2.
R13 = Units received by retailer 1 from retailer 3.
R23 = Units received by retailer 2 from retailer 3.
R21 = Units received by retailer 2 from retailer 1.
R31 = Units received by retailer 3 from retailer 1.
R32 = Units received by retailer 3 from retailer 2.
c1= Unit cost of back order at retailer 1.
c2= Unit cost of back order at retailer 2.
c3= Unit cost of back order at retailer 3.
Da1= Actual demand at retailer1 in period p.
Da2= Actual demand at retailer2 in period p.
Da3= Actual demand at retailer3 in period p.
M = Unit transportation cost from retailer 1 to retailer 3 in the second stage in period p.
N = Unit transportation cost from retailer 2 to retailer 3 in the second stage in period p.
O = Unit transportation cost from warehouse to retailer 3 in the second stage in period p.
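As an illustration of how these parameters combine, the sketch below evaluates a plausible first-stage cost for one period. The paper's exact objective function is not reproduced in this excerpt, so the cost expression (warehouse carrying on retained stock, transportation, retailer carrying on surplus, back-order penalty on shortfall) is an assumption built only from the definitions above.

```python
# Hedged sketch: assumed first-stage cost built from the Appendix-C parameters
# (a, a1..a3, y1..y3, c1..c3, Z, X1..X3, Df1..Df3); not the paper's exact model.

def first_stage_cost(Z, X, Df, a, t_costs, y, c):
    """Evaluate an assumed first-stage cost for one period.
    Z       : total inventory at the warehouse
    X       : units shipped to retailers 1..3, e.g. [X1, X2, X3]
    Df      : forecasted demands [Df1, Df2, Df3]
    a       : unit carrying cost at the warehouse (applied to retained stock)
    t_costs : unit transportation costs [a1, a2, a3]
    y       : unit carrying costs at the retailers [y1, y2, y3]
    c       : unit back-order costs at the retailers [c1, c2, c3]
    """
    carry_wh = a * (Z - sum(X))  # stock kept back at the warehouse
    transport = sum(ai * xi for ai, xi in zip(t_costs, X))
    carry_rt = sum(yi * max(0, xi - d) for yi, xi, d in zip(y, X, Df))   # Is_i
    backorder = sum(ci * max(0, d - xi) for ci, xi, d in zip(c, X, Df))  # Ibs_i
    return carry_wh + transport + carry_rt + backorder
```

Such an evaluator could be wrapped in any search over the shipment quantities X1..X3 subject to the capacity limits Cw and Cr1..Cr3.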
Limited View Tomographic Image Reconstruction using Genetic
Algorithm
Saran S.1, Prashanth K. R.2, Atul Srivastava1, Ajay Kumar3, M.K. Gupta3
1 Indian Institute of Technology Bombay, Mumbai – 400076, Maharashtra, India
2 Vellore Institute of Technology University, Vellore – 632014, Tamil Nadu, India
3 Institute for Plasma Research, Gandhinagar, Gujarat – 382428, India
Corresponding author (email: atulsr@iitb.ac.in)
In its most widely used forms, such as in medical imaging, tomography attempts to reconstruct a function describing the target object from a large number of projection views. However, there are
many engineering applications wherein the number of views is severely limited. Most
conventional approaches fail to create useful reconstructions in such cases. This paper
explores a genetic algorithm based approach to carry out limited view tomographic
reconstruction and provides a comparative evaluation of its performance and
reconstruction quality with conventional iterative approaches. Finally, the GA-based
approach has been applied for the reconstruction of spatial characteristics of plasma ring
generated at Aditya Tokamak wherein the projection data is available only from two
orthogonal directions due to the experimental constraints.
1. Introduction
In general terms, tomography is a process of making cross-sectional images of an object
from its projection data recorded from different view angles. In contrast to areas like medical imaging, there are certain applications of practical interest where one encounters only a limited number of projections due to inherent experimental and structural constraints. This class of problems falls into the category of limited-data tomography; being mathematically highly ill-posed, it has been a subject of intense research in recent times.
Tomographic methods fall under two main categories, namely back-projection-based approaches and variants of the Algebraic Reconstruction Technique (ART). The first class of algorithms is computationally efficient and is widely used in the field of medical diagnosis. However, the quality of the reconstructions degrades severely as the number of projection views becomes small. The second class of methods works on the principles of regressive optimization. Although they require fewer projection views, ART-based methods are susceptible to getting trapped in local minima. In view of these inherent limitations of conventionally employed
reconstruction approaches, the present work poses the problem of tomographic reconstruction
as an optimization problem which is solved by a highly robust global search heuristic technique
called Genetic Algorithm.
The performance of GA-based reconstruction scheme developed in the study has been
compared with that of conventionally employed iterative methods (MART) in terms of quality of
reconstruction, accuracy and convergence time. The comparative studies have been performed
on numerically simulated phantoms. Thereafter, the GA-based reconstruction scheme is
implemented on experimental projection data of Aditya Tokamak recorded from two orthogonal directions (0° and 90°). Aditya Tokamak is a device at IPR Gandhinagar focused on magnetic
confinement of extremely hot plasma and employs a range of diagnostic techniques for real time
monitoring of the plasma characteristics inside the Tokamak and imaging of plasma radiation
emission pattern. In particular, emission characteristics in the visible range of electromagnetic
radiation are of special interest. Hence arrangements have been made to record the visible light
radiation from two orthogonal directions along their line-of-sights. However, in the present
configuration, the detectors provide only the path integrated information and local characteristics
of plasma distribution cannot be retrieved without the principles of tomography. Moreover, in view
of the fact that projection data is available from only two directions, the tomographic inversion
becomes a challenging task. It is expected that the tomographic visualization of plasma would aid
physicists and engineers at IPR to understand high energy plasma before incorporating related
technologies into applications such as thermonuclear fusion.
2. Genetic algorithm-based tomographic reconstruction
Genetic Algorithms are extremely versatile tools for global optimization of any problem that can be posed so that the solution lies in finding the optimal decision vector for a given objective function. The technique mimics natural evolution through bio-inspired operations such as inheritance, selection, crossover and mutation. The outline of the routine employed is as follows:
1. Generate a random population of images, termed parents.
2. Evaluate the fitness of each parent based on how low the projection deviation is. Alternative fitness functions were also tested, for example: Fitness = |Rorg – R(img)|, where img is the set of decision variables and Rorg stores the projection data from the original image or experiment.
3. Run the standard operations of the chosen algorithm to update the population on the basis of fitness.
4. Return to step 2 and repeat until the value of Fitness comes sufficiently close to zero.
5. The values of the set ‘img’ after termination represent the reconstructed image.
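The routine above can be sketched in a few lines. The version below is illustrative only: the authors worked in MATLAB at 40x40 resolution, whereas this sketch uses a small binary image, a simplified row/column-sum projection model, and assumed GA hyper-parameters (population size, mutation rate, truncation selection).

```python
# Minimal GA sketch of the reconstruction routine (illustrative assumptions only).
import random

N = 8  # image side length; the paper used 40x40

def project(img):
    """Simplified 'projection': row sums and column sums of a flat NxN image."""
    rows = [sum(img[r * N:(r + 1) * N]) for r in range(N)]
    cols = [sum(img[c::N]) for c in range(N)]
    return rows + cols

def fitness(img, Rorg):
    """Fitness = |Rorg - R(img)| summed over all ray sums (lower is better)."""
    return sum(abs(a - b) for a, b in zip(Rorg, project(img)))

def ga_reconstruct(Rorg, pop_size=60, generations=300, pmut=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N * N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda im: fitness(im, Rorg))
        if fitness(pop[0], Rorg) == 0:
            break
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, N * N)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(N * N):               # bit-flip mutation
                if rng.random() < pmut:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda im: fitness(im, Rorg))

# A small ring-like target (4x4 block with a hollow 2x2 centre) and its projections
target = [1 if (2 <= r <= 5 and 2 <= c <= 5 and not (3 <= r <= 4 and 3 <= c <= 4))
          else 0 for r in range(N) for c in range(N)]
best = ga_reconstruct(project(target))
```

A real implementation would replace `project` with the actual Radon-transform ray sums of the experimental geometry.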
3. Results and discussion
3.1. Simulated images
Reconstructions were obtained with both the traditional MART-based and the GA-based approaches for numerically simulated phantoms. Identical projection data based on the parallel Radon transform was used as input for both. The test phantom is a ring with a uniform intensity distribution, a simplified representation of the plasma boundary in the poloidal cross-section of the Tokamak. Benchmarks provide some indication of the expected deviation between the actual image and its reconstructions, and this deviation is observed at different levels of paucity in the projection data. Since the reconstruction is computationally costly, the resolution was limited to 40x40 pixels. As shown in Figure 1, reconstructions (a) and (b) were performed using only two projection views, while (c) and (d) used six projection views.
Both approaches show similar behavior with regard to the quality and nature of the reconstructions as the amount of projection data increases. However, GA does a better
job at reconstructing the regions with uniform density even with limited data. In the highly limited
view reconstruction (a) and (b) from Figure 1, it can be seen that the character of dominant
peripheral distribution is more pronounced in the GA based reconstruction. Higher artifacts can
be observed in the centre of the image in the MART based reconstruction (a) which goes against
the expected distribution.
Comparing the reconstructions (c) and (d), the contour plots almost exactly match that of
the original image. MART based reconstructions show discontinuity in the images, which are not
present in the original phantoms. MART also tends to overshoot the peak intensity of the phantom in certain regions of the image, even when it successfully reconstructs the shape boundaries correctly. This characteristic is seen in almost all images reconstructed using MART.
Even for phantoms with uniform intensity distribution, results from MART showed high amounts of
heterogeneity as seen in (c).
Original Phantom
MART
Genetic Algorithm
(a)
(b)
(c)
(d)
Figure 1. Reconstructions from simulated images
Similar tests were performed on asymmetric phantoms with regions of varying intensities to evaluate the robustness of the two approaches. The results showed how effectively each reconstruction captured fairly localized regions of high and low intensity, shedding some light on the reliability of the reconstructions obtained from limited data. GA seems to reconstruct small but notable features of the image better than MART.
3.2. Performance benchmarks
Numerical experiments on these phantoms and the experimental data were performed on the same quad-core workstation featuring a Core i7-2600 CPU running MATLAB R2012a. MART used a relaxation factor of 0.5, and GA used a population of 2000 individuals.
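For contrast with the GA routine, a single MART correction pass can be sketched as follows. The uniform ray weights and list-based ray geometry are simplifying assumptions for illustration, not the authors' implementation; only the multiplicative update with a relaxation exponent reflects standard MART.

```python
# Sketch of one multiplicative ART (MART) pass, assuming each ray is just a list
# of pixel indices with unit weights; relaxation factor 0.5 as in the benchmarks.

def mart_step(img, rays, measured, relaxation=0.5):
    """For each ray i with measured sum p_i and computed sum q_i, update every
    covered pixel j multiplicatively: img[j] *= (p_i / q_i) ** relaxation."""
    for ray, p in zip(rays, measured):
        q = sum(img[j] for j in ray)      # computed ray sum for current estimate
        if q > 0 and p > 0:
            factor = (p / q) ** relaxation
            for j in ray:
                img[j] *= factor
    return img
```

Repeated passes drive the computed ray sums toward the measured ones; for example, a single two-pixel ray with measured sum 4 converges to pixel values of 2 each.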
Table 1. Performance of MART for the reconstruction of the ring phantom

Projection views | Angles (degrees)         | Iterations | Time (s) | Projection error | Image error
2                | 0, 90                    | 145        | 1.05     | 1.0226           | 0.3179
3                | 0, 60, 120               | 924        | 11.19    | 0.2786           | 0.2268
4                | 0, 45, 90, 135           | 2658       | 45.22    | 0.1475           | 0.0669
6                | 0, 30, 60, 90, 120, 150  | 9756       | 301.73   | 0.0724           | 0.0576
Table 2. Performance of GA for the reconstruction of the ring phantom

Projection views | Angles (degrees)         | Generations | Time (s) | Projection error | Image error
2                | 0, 90                    | 5000        | 3172.68  | 0.0555           | 0.002
3                | 0, 60, 120               | 5000        | 3577.62  | 0.1625           | 0.0051
4                | 0, 45, 90, 135           | 5000        | 3835.89  | 0.1504           | 0.0051
6                | 0, 30, 60, 90, 120, 150  | 5000        | 4236.02  | 0.1503           | 0.0051
3.3. Reconstruction of plasma characteristics using GA-based approach
Tokamak is one of the most researched technological candidates for effective
confinement of plasma for the purpose of nuclear fusion. Some of the major issues associated
with smooth operation of Tokamak include plasma-wall interaction and challenges associated
with the spatial as well as temporal fluctuations in the magnetic fields. In view of this, tomographic
techniques have been employed to investigate the local properties of plasma near the edges of
Tokamak plasma. Experimental constraints prohibit the installation of a large number of detectors for obtaining projection views. At the time of writing, only two Visible Light Tomography (VLT) detector setups were in operation, as schematically shown in Figure 2(a). The
objective is to reliably reconstruct the plasma boundaries for obtaining a better understanding of
high energy matter behavior. It is to be mentioned here that only the low energy peripheral
regions of the plasma cross-section emit visible radiation. The high energy regions concentrated
in the core region emit shorter wavelengths like X-Rays. Therefore, boundaries of the plasma are
established with the help of VLT. A schematic representation of plasma ring is shown in Figure
2(b) and the interest is to reconstruct the spatial and temporal characteristics of these plasma
rings based on the line-of-sight projection data recorded from two orthogonal projection angles
making the input data highly limited.
(a)
(b)
(c)
Figure 2. Experimental Setup Details and Reconstruction Result
(a) Schematic representation of collection of projection data from two orthogonal view
angles at Aditya Tokamak; (b) Visualization of a typical plasma ring inside the Tokamak
(courtesy: http://www.rijnhuizen.nl); (c) Reconstruction of plasma ring using actual
experimental data indicating higher visible light emission in the periphery
Figure 2(c) shows the local distribution of the plasma ring as reconstructed using the GA-based approach from the two orthogonal projections. In agreement with the theoretical understanding,
the distribution is denser in the peripheral regions and much less concentrated in the central
portion. Even with two projection views, the approach successfully captured the overall expected
character of the distribution. Further tests with more experimental data are required for a detailed study of wall-plasma interaction based on reconstruction results from the GA-based tomography approach.
4. Concluding remarks
The potential of a genetic-evolution-based tomography approach to reconstruct the field of interest from limited projection data has been demonstrated. The developed GA-based tomography algorithm has been compared with conventionally employed reconstruction schemes, e.g. MART, for numerically simulated phantoms. The comparative study reveals that the GA-based approach works better in terms of quality and accuracy of reconstruction than the conventional tomography algorithms. Minute features of the object under study are clearly brought out in the reconstructions performed using the evolutionary approach, even for fewer projection views. Finally, the reconstruction of the spatial characteristics of the plasma ring at Aditya Tokamak has
been carried out based on experimental projection data recorded from two orthogonal directions.
The reconstruction reveals the presence of higher visible light emission concentrated near the
periphery of the Tokamak in the form of a ring which is in agreement with the theoretical and
experimental predictions.
On the other hand, the GA-based reconstruction is computationally more expensive, and a higher computational time is needed for a meaningful reconstruction. However, convergence can be sped up and accuracy enhanced substantially by incorporating prior information during the generation of the initial population. This information can be obtained through faster techniques, such as the traditional iterative methods, or from experimental information. Furthermore, GA can be made faster by coding objective functions more efficiently. GA also
offers tremendous amounts of customizability compared to other traditional approaches. For
instance, the chromosome need not simply represent the pixel values of the image. One can also
represent an image using a set of Elementary Distribution Functions (EDFs) scattered across the
domain. Parameters of these EDFs can then be tuned using the GA to obtain the reconstruction.
Appropriately changing the decision variables can help reduce the number of unknowns in the
optimization problem thereby speeding up calculations. Modifications to objective functions and
incorporation of prior information can make the problem better posed.
A Quality Function Deployment-based Model for Machining
Center Selection
K. Prasad, S. Chakraborty*
Production Engineering Department, Jadavpur University, Kolkata – 700032, West Bengal, India
*Corresponding author (e-mail: s_chakraborty00@yahoo.co.in)
Machining center selection is an important decision to be made by the manufacturing
industries, as it plays a key role in enhancing their productivity. Availability of a wide range
of alternatives and similarities among the selection criteria of the machining centers make
the selection procedure more complex and time consuming. Past research has already reported on the selection of machining centers using different expert systems,
mathematical models and multi-criteria decision-making (MCDM) methods. Most of the
proposed models have not considered qualitative or subjective data in the decision-making
process. They also have not given due importance to the voice of the customers. Quality
function deployment (QFD) is a systematic approach of determining customers’ needs and
designing the product or service so that it meets the customers’ needs first time and every
time. The adopted QFD-based methodology helps in the selection procedure of machining
centers in manufacturing industries, giving requisite emphasis on the voice of the
customers to meet their requirements. A user friendly software prototype in Visual BASIC 6
is also developed to ease out the selection procedure. Two illustrative examples are also
cited.
1. Introduction
Machining centers are now widely used in manufacturing industries all over the world. Use of these advanced machining centers increases the productivity of the plant by automating the production processes and providing effective utilization of resources. Their use also improves the flexibility, repeatability and reliability of the plant. Improper selection of a machining
center/machine tool may cause serious problems affecting the overall performance and
profitability of a manufacturing organization. Since the investments made in machining centers
are long term and expensive, it is a very important decision for the production planners to select
the most appropriate machining center among the various alternatives available. Selection of a
machining center involves consideration of a large number of qualitative and quantitative factors,
such as table area, cost, three axes movement, power, spindle speed range, feed rate etc.
Determination of the best alternative for a production system from a wide range of available
alternatives having conflicting criteria is a complex and difficult task, and it requires advanced
knowledge and experience in this particular field.
Researchers have in the past applied different mathematical and MCDM approaches to the selection of machining centers. Wang et al. (2000) used a fuzzy MCDM model to deal with the
machine selection problem for flexible manufacturing cells. Sun (2002) applied data envelopment
analysis to evaluate computer numerical control machines, in the context of advanced
manufacturing technology investment. Ayag and Ozdemir (2006) applied fuzzy logic in analytic
hierarchy process (AHP) to capture the right judgment of the decision makers in machine tool
selection. Cimren et al. (2007) designed an AHP-based machine tool selection system. Duran
and Aguilo (2008) developed a fuzzy-AHP-based software for evaluation and justification of an
advanced manufacturing system. Onut et al. (2008) combined fuzzy AHP and fuzzy TOPSIS
approaches for machine tool selection. Dagdeviren (2008) integrated AHP together with
PROMETHEE for equipment selection. Taha and Rostam (2012) developed a decision support
system to select the best alternative machine using a hybrid approach of fuzzy AHP and
PROMETHEE. Samvedi et al. (2012) integrated fuzzy AHP and grey relational analysis
approaches for selection of a machine tool.
Economic globalization, together with heightened, customer-driven market competition, is motivating organizations to use an efficient, accurate and practical decision-making tool. Although various models have been developed in this regard, they do not take customers’ views into consideration. Therefore, there is a need for a sound model which incorporates the demands of the customers during each phase of the decision-making procedure. QFD is such a customer-focused decision-making tool which integrates the needs of customers into the product. Along with the voice of the customers, QFD also takes the organizational process into consideration. Due to these advantages associated with the QFD technique, it is used here for solving two machining center selection problems.
2. QFD-based model for machining center selection
Decision-making involving a large number of variables requires advanced and in-depth knowledge of the applied field. A lot of time is consumed in every selection process due to the tedious calculations involved in evaluating each alternative with respect to the considered selection criteria. To eliminate these time-consuming calculations and ease the decision-making process, a software prototype in Visual BASIC 6 is developed. The developed QFD-based selection model integrates the customers’ requirements with the technical requirements and can be used to select the most appropriate machining center for a given application based on the selected technical requirements.
2.1 Development of a QFD-based software prototype
Figure 1 shows the opening window of the developed QFD-based selection model,
providing guidelines to the decision makers while selecting the best alternative for a particular
application. The name of the product for which decision is being taken can be entered in the box
provided in the opening window. Then the functional key ‘Go’ is pressed to go to the next stage of
the decision-making process. Two main steps, i.e. development of the house of quality (HoQ)
matrix and construction of the related score matrix for the available alternatives, need to be
followed here for the selection procedure using this QFD-based selection model. The HoQ matrix
used here for this purpose is a simplified one. On the left hand side of the HoQ matrix, various
product features as required by the customers, such as table size, power, spindle speed range,
allocated fund etc. are placed and on the top of the HoQ matrix, various technical requirements,
like table area, cost, three axes movement, power, spindle speed range, number of tools,
machining diameter and length, feed rate, positional accuracy etc. are incorporated in different
columns. The detailed data for various technical requirements can be obtained from different
manufacturers’ handbooks and Internet sites.
Development of the HoQ matrix can further be divided into four sub-steps. The first step
is to determine the customers’ requirements (Whats) and technical requirements (Hows), which
may be either beneficial (higher the better) or non-beneficial (lower the better) in nature. In the
second step, customers’ requirements are quantified by the value of the corresponding
improvement driver (+1 for beneficial criteria and -1 for non-beneficial criteria), after identifying
them as beneficial or non-beneficial. Then, for assigning the priority values to the customers’
requirements, a scale of 1-5 is set where 1 - not important, 2 - important, 3 - much more
important, 4 - very important and 5 - most important. In the final step, the interrelationship matrix
is filled up with the correlation index. An appropriate scale based on a convention is set for
assigning the relative importance value between the customers’ requirements and technical
requirements (correlation index), i.e. 1 - very weak, 3 - weak, 5 - moderate, 7 - strong and 9 - very
strong.
Once the HoQ matrix is developed, the ‘Compute’ functional key is pressed to derive the
weights for all the technical requirements. The weight for each technical requirement can be
obtained using the following expression:
wj = Σi=1..n Pri × IDi × (correlation index)ij        (1)

where wj is the weight for the jth technical requirement, n is the number of customers’ requirements, IDi is the value of the improvement driver for the ith customer requirement, Pri is the priority assigned to the ith customer requirement, and (correlation index)ij is the relative importance of the jth technical requirement with respect to the ith customer requirement, as obtained from the HoQ matrix.
Figure 1. Opening window of QFD-based selection model
Pressing the ‘Input data’ functional key generates empty cells equal in number to the criteria considered in the decision-making process. In these cells, the user provides the names of the criteria on which the selection is to be based. Pressing the ‘Weight’ functional key automatically stores and displays the criteria weights. On pressing the ‘Next’ key, a score matrix with the required number of empty cells is generated in a new window. The short-listed candidates for the considered application and the relevant selection criteria then need to be filled into the score matrix. This matrix calculates the performance score for each alternative with respect to the considered criteria. The score matrix is normalized using the linear normalization technique to make it dimensionless, so that
the performances of all the alternatives can be compared. Then, the score for each alternative
can be obtained using the following expression.
Scorei = Σj=1..n wj × (normalized value)ij        (2)
Pressing the ‘Score’ key automatically normalizes the data, computes the performance scores for the alternatives and displays those values in the score column. The ranking of the alternatives is obtained by pressing the ‘Rank’ key. Pressing the ‘Graph’ key displays the performance scores of the alternatives as vertical bars and identifies the best alternative in the box placed at the bottom of the score matrix.
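The computations triggered by the ‘Weight’ and ‘Score’ keys, i.e. equations (1) and (2) with linear normalization, can be sketched as follows. The normalization convention (value/max for beneficial criteria, min/value for non-beneficial ones) is one common linear scheme and an assumption here; the numeric values in the test case are made up for illustration and are not taken from the paper’s examples.

```python
# Hedged sketch of the QFD weight and score computations (eqs. (1) and (2)).

def qfd_weights(priorities, drivers, correlation):
    """wj = sum_i Pr_i * ID_i * correlation_ij  -- eq. (1).
    priorities : Pr_i on the 1-5 scale, one per customer requirement
    drivers    : ID_i, +1 (beneficial) or -1 (non-beneficial)
    correlation: rows = customer requirements, columns = technical requirements
    """
    n_tech = len(correlation[0])
    return [sum(pr * d * row[j] for pr, d, row in zip(priorities, drivers, correlation))
            for j in range(n_tech)]

def normalize(column, beneficial=True):
    """Linear normalization of one criterion column (assumed convention)."""
    hi, lo = max(column), min(column)
    return [v / hi if beneficial else lo / v for v in column]

def scores(matrix, weights, beneficial):
    """Score_i = sum_j w_j * normalized_ij  -- eq. (2).
    matrix: rows = alternatives, columns = criteria."""
    cols = list(zip(*matrix))
    norm_cols = [normalize(c, b) for c, b in zip(cols, beneficial)]
    norm_rows = list(zip(*norm_cols))
    return [sum(w * v for w, v in zip(weights, row)) for row in norm_rows]
```

Ranking the alternatives then amounts to sorting them by their scores in descending order, mirroring the ‘Rank’ key of the prototype.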
2.2 Illustrative example 1
Wang et al. (2000) adopted a fuzzy MCDM model to solve a machine selection problem
for a given flexible manufacturing cell. In that problem, ten alternatives were evaluated based on
four criteria, i.e. purchasing cost, floor area, number of machines and productivity, which would
correspond to the slowest feed rate. The number of machines and productivity are beneficial
criteria (higher values are desired), whereas, purchasing cost and floor area are non-beneficial
criteria (lower values are desired). Table 1 shows the data for this machining center selection
problem. Wang et al. (2000) obtained the ranking of alternatives as 4-5-3-1-2-10-9-6-7-8. While
solving this problem using the developed QFD-based approach, the HoQ matrix is first
developed. After deriving the weights for the considered criteria from the HoQ matrix, the score
matrix is obtained, as shown in Figure 2.
Table 1. Data for example 1

Alternative | Total purchasing cost ($) | Total floor area (m2) | Total machine number | Productivity (mm/min)
A1          | 581 818                   | 54.49                 | 3                    | 5500
A2          | 595 454                   | 49.73                 | 3                    | 4500
A3          | 586 060                   | 51.24                 | 3                    | 5000
A4          | 522 727                   | 45.71                 | 3                    | 5800
A5          | 561 818                   | 52.66                 | 3                    | 5200
A6          | 543 030                   | 74.46                 | 4                    | 5600
A7          | 522 727                   | 75.42                 | 4                    | 5800
A8          | 486 970                   | 62.62                 | 4                    | 5600
A9          | 509 394                   | 65.87                 | 4                    | 6400
A10         | 513 333                   | 70.67                 | 4                    | 6000

Table 2. Data for example 2

Machine | Price ($) | Weight (kg) | Power (kW) | Spindle speed (x1000 rpm) | Diameter (mm) | Stroke (mm)
MM1     | 4.8       | 1.30        | 24         | 12.7                      | 58            | 1265
MM2     | 6.0       | 2.00        | 21         | 12.7                      | 65            | 680
MM3     | 3.5       | 0.90        | 24         | 8.0                       | 50            | 650
MM4     | 5.2       | 1.60        | 22         | 12.0                      | 62            | 580
MM5     | 3.5       | 1.05        | 25         | 12.0                      | 62            | 936
The score matrix depicts that the ranking obtained by the QFD-based approach is 4-5-3-1-2-10-9-6-7-8, which exactly matches with that obtained by Wang et al. (2000).
Figure 2. House of quality matrix and score matrix for example 1
2.3 Illustrative example 2
Dagdeviren (2008) adopted the PROMETHEE method along with AHP to solve an equipment selection problem. Five milling machines were evaluated based on six criteria, i.e. price, weight, power, spindle speed, diameter and stroke. Among these, spindle speed, diameter and stroke are
beneficial criteria, and price, weight and power are non-beneficial criteria. The data for this
machining center selection problem is given in Table 2. Dagdeviren (2008) obtained the ranking
of the milling machines as 4-3-5-2-1 and MM-5 was the ideal choice.
While solving this problem using the QFD-based selection approach, the HoQ matrix and
the score matrix are developed, as shown in Figure 3, suggesting that MM-5 is the most suitable
option, whereas MM-3 is the least preferred choice.
Figure 3. House of quality matrix and score matrix for example 2
3. Conclusions
Two machining center selection problems are solved using the developed QFD-based
approach. It is observed that the obtained rankings of the alternatives match excellently with
those derived by the earlier researchers. The main advantage of this approach is that the
decision makers do not need in-depth technological knowledge of the capabilities and
characteristics of the various alternatives. This QFD-based decision-making tool can be adopted
very effectively in the manufacturing domain.
References
Ayag, Z. and Ozdemir, R.G. A fuzzy AHP approach to evaluating machine tool alternatives,
Journal of Intelligent Manufacturing, 2006, 17, 179-190.
Cimren, E., Catay, B. and Budak, E. Development of a machine tool selection system using AHP.
International Journal of Advanced Manufacturing Technology, 2007, 35, 363-376.
Dagdeviren, M. Decision making in equipment selection: an integrated approach with AHP and
PROMETHEE, Journal of Intelligent Manufacturing, 2008, 19, 397-406.
Duran, O. and Aguilo, J. Computer-aided machine-tool selection based on a Fuzzy-AHP
approach. Expert Systems with Applications, 2008, 34, 1787-1794.
Onut, S., Kara, S.S. and Efendigil, T. A hybrid fuzzy MCDM approach to machine tool selection.
Journal of Intelligent Manufacturing, 2008, 19, 443-453.
Samvedi, A., Jain, V. and Chan, F.T.S. An integrated approach for machine tool selection using
fuzzy analytical hierarchy process and grey relational analysis. International Journal of
Production Research, 2012, 50, 3211-3221.
Sun, S. Assessing computer numerical control machines using data envelopment analysis.
International Journal of Production Research, 2002, 40(9), 2011-2039.
Taha, Z. and Rostam, S. A hybrid fuzzy AHP-PROMETHEE decision support system for
machine tool selection in flexible manufacturing cell. Journal of Intelligent Manufacturing,
2012, 23, 2137-2149.
Wang, T-Y., Shaw, C-F. and Chen, Y-L. Machine selection in flexible manufacturing cell: A fuzzy
multiple attribute decision-making approach. International Journal of Production Research,
2000, 38(9), 2079-2097.
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
An Automatic Unsupervised Data Classification using TLBO
K. Karteeka Pavan 1*, A.V. Dattatreya Rao 2, R. Meenakshi 2
1 R.V.R. & J.C. College of Engineering, Guntur-19, A.P., India
2 Acharya Nagarjuna University, Guntur, A.P., India
* Corresponding author (e-mail: karteeka@yahoo.com)
Clustering is an important preprocessing phase for a wide variety of applications.
Almost all clustering algorithms require some parameters in advance, which is difficult
in practice. Teaching-Learning-Based Optimization (TLBO) is a recent parameter
optimization technique that does not require any algorithm-specific parameters. This paper
proposes a new algorithm to find optimal clusters automatically using TLBO and also
demonstrates the relative performance of the proposed algorithm against other traditional
and evolutionary algorithms using various public and synthetic data sets.
1. Introduction
Data clustering is an important and successful data mining technique to extract
meaningful information in a wide variety of applications. Clustering is an unsupervised
classification that aims at grouping a set of unlabeled objects into meaningful clusters or
groups (Jain, 2010) such that the groups are homogeneous and neatly separated. There
are many clustering algorithms in the literature to find homogeneous groups in a data set,
but almost all require some parameters in advance, and there is little work on automatic
clustering. Thus, finding the optimal number of clusters and the clustering structure
automatically remains a challenging task. Fogel et al. (1966) and Sarkar et al. (1997) proposed
an approach to dynamically cluster a data set using evolutionary programming, with two fitness
functions: one for the optimal number of clusters and the other for optimal centroids. Lee and
Antonsson (2000) used an evolutionary method to dynamically cluster a data set. Guo et al.
(2002) proposed a unified algorithm for both unsupervised and supervised learning.
Cheung (2005) studied a rival penalized competitive learning algorithm that has
demonstrated very good results in finding the cluster number; the algorithm is formulated by
learning the parameters of a mixture model through the maximization of a weighted likelihood
function. Das and Abraham (2008) proposed the Automatic Clustering using
Differential Evolution (ACDE) algorithm. Differential evolution (DE) is one of the most
powerful stochastic real-parameter optimization algorithms in current use (Price et al., 2005).
DE follows similar computational steps as any standard evolutionary algorithm, with
specialized crossover and mutation operations (Das and Suganthan, 2011). Recently, Rao and
Kalyankar (2012a, 2012b, 2012c) and Rao and Patel (2012a) introduced the
Teaching-Learning-Based Optimization (TLBO) algorithm, which does not require any
algorithm-specific parameters. TLBO is developed based on the natural phenomenon of the
teaching and learning process in a classroom. TLBO contains two phases, a teacher phase
and a learner phase (Rao and Patel, 2012b). As in any population-based algorithm, TLBO
operates on a population: solution vectors are the learners, the dimensions of each vector are
termed subjects, and the best learner in the population is the teacher (Rao and Savsani, 2012).
This paper proposes an automatic clustering algorithm using TLBO that determines
homogeneous groups automatically. The results of the algorithm are compared with k-means,
fuzzy-k, DE, ACDE and TLBO. Experiments on various public and synthetic data sets
demonstrate that TLBO performs equally well for some data sets; in some cases TLBO is
better than DE, while in other cases DE is better than TLBO.
2. Methodology
The paper mainly focuses on the applicability of TLBO in finding optimal clusters
automatically. The following subsections describe the procedure of TLBO and the proposed
automatic clustering using TLBO (AUTOTLBO). Each chromosome contains Maxk threshold
values for the active centroids and Maxk centroids, as in ACDE. In this work, Maxk is the
square root of the total number of elements in the data set.
2.1 TLBO
TLBO is a recent evolutionary algorithm which provides competitive solutions for various
applications and, compared to other existing evolutionary algorithms, does not require any
algorithm-specific parameters. The process of TLBO is as follows.
2.1.1 Initialization
The population X is randomly initialized for a given data set of n rows and d columns using
the following equation:

X_i,j(0) = X_j^min + rand(1) * (X_j^max - X_j^min)                (1)

Equation (1) creates a population of learners or individuals. The i-th learner of the
population X at current generation t, with d subjects, is

X_i(t) = [X_i,1(t), X_i,2(t), ..., X_i,d(t)]                      (2)
2.1.2 Teacher phase
The mean value of each subject j of the population in generation t is given as

M(t) = [M_1(t), M_2(t), ..., M_d(t)]                              (3)

The teacher is the best learner, i.e. the one with the minimum objective function value in the
current population. The teacher phase tries to increase the mean result of the learners and
always tries to shift the learners towards the teacher. A new set of improved learners is
generated by adding the difference between the teacher and the mean vector to each learner
in the current population:

X_i(t+1) = X_i(t) + r * (X_best(t) - T_F * M(t))                  (4)

T_F is the teaching factor, whose value is either 1 or 2, and r is a random number in the
range [0, 1]. The value of T_F is found using the following equation:

T_F = round(1 + rand(1))                                          (5)
2.1.3 Learner phase
The knowledge of the learners is increased by the interaction of one learner with another in
the class. For a learner i, another learner j is selected randomly from the class:

X_i(t+1) = X_i(t) + r * (X_i(t) - X_j(t)), if f(X_i(t)) < f(X_j(t))
X_i(t+1) = X_i(t) + r * (X_j(t) - X_i(t)), if f(X_j(t)) < f(X_i(t))      (6)

The two phases are repeated until a stopping criterion is met. The best learner is the best
solution of the run.

2.1.4 Stopping criteria
The stopping criterion in the present work is the repetition of the two phases for a selected
number of iterations.
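The two phases above can be sketched in a short program. The following is a minimal illustration of Eqs. (1)-(6) only, with greedy acceptance of improved learners assumed (a standard TLBO implementation detail); the function name `tlbo` and the test setup are illustrative, not part of the paper:

```python
import random

def tlbo(f, bounds, pop_size=20, iters=100, seed=0):
    """Minimal TLBO sketch following Eqs. (1)-(6); only the common
    controlling parameters (population size, iterations) are needed."""
    rng = random.Random(seed)
    d = len(bounds)
    # Eq. (1): random initialization within [X_min, X_max]
    pop = [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        # teacher phase: the best learner acts as the teacher
        best = min(range(pop_size), key=lambda i: cost[i])
        mean = [sum(x[j] for x in pop) / pop_size for j in range(d)]  # Eq. (3)
        tf = round(1 + rng.random())       # Eq. (5): teaching factor, 1 or 2
        for i in range(pop_size):
            r = rng.random()
            # Eq. (4): shift the learner using the teacher/mean difference
            cand = [pop[i][j] + r * (pop[best][j] - tf * mean[j])
                    for j in range(d)]
            c = f(cand)
            if c < cost[i]:                # greedy acceptance
                pop[i], cost[i] = cand, c
        # learner phase: Eq. (6), pairwise interaction between learners
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            r = rng.random()
            if cost[i] < cost[j]:
                cand = [pop[i][k] + r * (pop[i][k] - pop[j][k]) for k in range(d)]
            else:
                cand = [pop[i][k] + r * (pop[j][k] - pop[i][k]) for k in range(d)]
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

On a simple test function such as the 2-D sphere, the best learner converges toward the minimum within a few thousand evaluations.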
2.2 Automatic Clustering using TLBO (AUTOTLBO)
The new AUTOTLBO algorithm finds optimal clusters automatically. Any cluster validity
measure can be selected as the fitness function; here the Rand index is selected. Let X be a
given data set with n elements. The algorithm is as follows.
Step 1) Initialize each learner to contain Maxk randomly selected cluster centers
and Maxk (randomly chosen) activation thresholds in [0, 1].
Step 2) Find the active cluster centers, i.e. those with a threshold value greater than 0.5, in each learner.
Step 3) For t = 1 to tmax do
a) For each data vector Xp, calculate its distance from all active cluster centers using
the Euclidean distance metric.
b) Assign Xp to the closest cluster.
c) Evaluate the quality of each learner and find the teacher, the best learner, using the Rand index.
d) Update the learners according to the TLBO algorithm described in Section 2.1.
Step 4) Report the final solution obtained by the globally best learner (one yielding the highest
value of the fitness function) at time t = tmax.
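Steps 1-3(b) can be illustrated with a small decoding routine. The function name and exact list layout below are assumptions for illustration; the layout follows the description above (Maxk activation thresholds followed by Maxk centroids, a centroid being active when its threshold exceeds 0.5):

```python
def decode_and_assign(learner, data, max_k, dim):
    """Decode a learner into active cluster centers and assign data points.

    Assumed layout: the first max_k entries are activation thresholds,
    followed by max_k centroids of `dim` values each.
    """
    thresholds = learner[:max_k]
    centroids = [learner[max_k + i * dim : max_k + (i + 1) * dim]
                 for i in range(max_k)]
    active = [c for t, c in zip(thresholds, centroids) if t > 0.5]
    if not active:                 # guard: keep at least one centroid active
        active = [centroids[0]]
    labels = []
    for p in data:
        # squared Euclidean distance to every active center
        d2 = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in active]
        labels.append(d2.index(min(d2)))
    return len(active), labels
```

For a hypothetical learner [0.9, 0.2, 0.8, 0.0, 5.0, 10.0] with max_k = 3 on 1-D data, two centroids (0.0 and 10.0) are active and points are assigned to the nearer of the two.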
3. Experimental results and discussion
Table 1. Mean values of 50 independent runs of the non-automatic algorithms (k-means, fuzzy-k, DE and TLBO) on the synthetic1 (k=2), synthetic2 (k=4), synthetic3 (k=3), synthetic4 (k=6), iris (k=3), wine (k=3) and glass (k=6) data sets, in terms of the cluster validity measures ARI, RI, HI, SIL, CS and DB and the error rate in %.
The performance of AUTOTLBO is tested using both simulated and real data. The
real data sets are taken from the UCI databases and the synthetic data sets are from Karteeka
et al. (2011). The performance of the proposed AUTOTLBO is compared with k-means,
fuzzy-k, DE, TLBO and ACDE in terms of the Rand (Rand, 1971), Adjusted Rand (Hubert and
Arabie, 1985), DB (Davies and Bouldin, 1979), CS (Chou et al., 2004) and Silhouette
(Rousseeuw, 1987) validation measures and the error rate (Karteeka et al., 2011). Among
these, k-means, fuzzy-k, DE and TLBO require the number of clusters in advance and are
hence non-automatic; ACDE is the automatic evolutionary clustering algorithm based on DE.
Since all the selected algorithms produce different solutions in different independent runs,
each algorithm is run 50 times. The indices of the various validation measures are calculated
over the 50 independent runs of each algorithm and the average values are reported in
Table 1 and Table 2. The minimum and maximum values of the clustering indices and error
rate found in the 50 independent runs of ACDE and AUTOTLBO are reported in Table 3.
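The Rand index used as the fitness function and reported in the tables can be computed directly from two labelings. A minimal sketch (the function name is an illustrative choice):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index (Rand, 1971): the fraction of point pairs on which two
    clusterings agree (placed together in both, or separated in both)."""
    n = len(labels_a)
    pairs = list(combinations(range(n), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)
```

Because only pairwise co-membership matters, the index is invariant to the cluster label names.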
Table 2. Mean values of 50 independent runs of the automatic clustering methods

Data set | Algorithm | k | ARI | RI | HI | Sil | DB | CS | Error rate in %
iris | ACDE | 3.26 | 0.87 | 0.95 | 0.89 | 0.76 | 0.75 | 0.48 | 13.63
iris | AUTOTLBO | 3.7 | 0.83 | 0.93 | 0.852 | 0.7 | 0.82 | 0.6 | 19.67
wine | ACDE | 4.54 | 0.35 | 0.72 | 0.44 | 0.38 | 1.77 | 0.8 | 52.53
wine | AUTOTLBO | 4.32 | 0.32 | 0.7 | 0.4 | 0.39 | 1.72 | 0.78 | 52.57
glass | ACDE | 5.5 | 0.29 | 0.7 | 0.404 | 0.42 | 3 | 1.19 | 58.54
glass | AUTOTLBO | 6.44 | 0.26 | 0.72 | 0.434 | 0.26 | 3.32 | 1.24 | 68.07
synth1 | ACDE | 2 | 1 | 1 | 0.998 | 0.83 | 0.98 | 0.46 | 0.046
synth1 | AUTOTLBO | 3.2 | 0.63 | 0.82 | 0.634 | 0.56 | 1.76 | 0.85 | 33.89
synth2 | ACDE | 4.04 | 0.95 | 0.98 | 0.961 | 0.78 | 0.99 | 0.51 | 4.168
synth2 | AUTOTLBO | 4.1 | 0.93 | 0.97 | 0.949 | 0.78 | 1.04 | 0.52 | 7.584
synth3 | ACDE | 3.04 | 0.97 | 0.99 | 0.977 | 0.81 | 0.94 | 0.48 | 4.133
synth3 | AUTOTLBO | 3.8 | 0.88 | 0.95 | 0.9 | 0.74 | 1.17 | 0.59 | 20.67
synth4 | ACDE | 6.9 | 1 | 1 | 0.998 | 0.96 | 0.3 | 0.18 | 11.04
synth4 | AUTOTLBO | 6.84 | 1 | 1 | 0.998 | 0.97 | 0.3 | 0.17 | 6.585
Table 3. Maximum and minimum error rates (%) found in 50 independent runs of the automatic techniques (ACDE and AUTOTLBO) on the public (iris, wine, glass) and synthetic data sets.
3.1 The observations from the experiments
- For the synthetic1 and synthetic4 data sets, AUTOTLBO outputs the true clusters with a
minimum error rate of zero, whereas the mean error rates are 7.584 and 6.85.
- AUTOTLBO yields mean error rates of 19.67, 52.57, 7.584 and 6.585 for the iris, wine,
synthetic2 and synthetic4 data sets, whereas ACDE yields 13.63, 52.75, 4.16 and 11.06,
respectively.
- The quality of AUTOTLBO is 87% and 69.31% in terms of the Rand and Adjusted Rand
measures.
- From Table 3, in most of the cases the maximum error rates found in 50 independent runs
for AUTOTLBO are smaller than those for ACDE.
4. Conclusions
Recent studies show that TLBO is a simple, parameter-free evolutionary algorithm. The
proposed AUTOTLBO aims to find clusters automatically from a data set. Experimental
results on various real and synthetic data sets show that AUTOTLBO performs on par with
the existing ACDE. The results demonstrate the potential of applying TLBO to a variety of
applications. We intend to extend automatic TLBO to applications such as microarray
clustering and medical imaging. All evolutionary algorithms produce different solutions in
different independent runs; therefore, finding a novel evolutionary algorithm that produces a
single solution is our future endeavor.
References
Cheung, Y. Maximum weighted likelihood via rival penalized EM for density mixture clustering
with automatic model selection. IEEE Transactions on Knowledge and Data Engineering,
2005, 17(6), 750-761.
Chou, C.H., Su, M.C. and Lai, E. A new cluster validity measure and its application to image
compression. Pattern Analysis and Applications, 2004, 7(2), 205-220.
Davies, D.L. and Bouldin, D.W. A cluster separation measure. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 1979, 1(2), 224-227.
Fogel, L.J., Owens, A.J. and Walsh, M.J. Artificial Intelligence Through Simulated Evolution.
1966, New York: Wiley.
Guo, P., Chen, C.L. and Lyu, M.R. Cluster number selection for a small set of samples using
the Bayesian Ying-Yang model. IEEE Transactions on Neural Networks, 2002, 13(3),
757-763.
Hubert, L. and Arabie, P. Comparing partitions. Journal of Classification, 1985, 2(1), 193-218.
Jain, A.K. Data clustering: 50 years beyond K-means. Pattern Recognition Letters, 2010, 31,
651-666.
Karteeka Pavan, K., Appa Rao, A., Dattatreya Rao, A.V. and Sridhar, G.R. A robust seed
selection algorithm. IJCSIT, 2011, 3(5), 147-163. DOI: 10.5121/ijcsit.2011.3513.
Lee, C.Y. and Antonsson, E.K. Self-adapting vertices for mask-layout synthesis. In Proc.
Modeling and Simulation of Microsystems Conference, M. Laudon and B. Romanowicz,
Eds., San Diego, CA, 2000, 83-86.
Mirkin, B. Mathematical Classification and Clustering. 1996, Kluwer Academic Publishers.
Price, K., Storn, R. and Lampinen, J. Differential Evolution: A Practical Approach to Global
Optimization. 2005, Springer Natural Computing Series.
Rand, W.M. Objective criteria for the evaluation of clustering methods. Journal of the
American Statistical Association, 1971, 66, 846-850.
Rao, R.V. and Patel, V. Multi-objective optimization of combined Brayton and inverse Brayton
cycle using advanced optimization algorithms. Engineering Optimization, 2012b, doi:
10.1080/0305215X.2011.624183.
Rao, R.V. and Patel, V. An elitist teaching-learning-based optimization algorithm for solving
complex constrained optimization problems. International Journal of Industrial
Engineering Computations, 2012a, 3(4), 535-560.
Rao, R.V. and Kalyankar, V.D. Multi-objective multi-parameter optimization of the industrial
LBW process using a new optimization algorithm. Journal of Engineering Manufacture,
2012b, DOI: 10.1177/0954405411435865.
Rao, R.V. and Kalyankar, V.D. Parameter optimization of machining processes using a new
optimization algorithm. Materials and Manufacturing Processes, 2012c, DOI:
10.1080/10426914.2011.602792.
Rao, R.V. and Kalyankar, V.D. Parameter optimization of modern machining processes using
teaching–learning-based optimization algorithm. Engineering Applications of Artificial
Intelligence, 2012a, http://dx.doi.org/10.1016/j.engappai.2012.06.007.
Rao, R.V., Savsani, V.J. and Vakharia, D.P. Teaching-learning-based optimization: A novel
method for constrained mechanical design optimization problems. Computer-Aided
Design, 2011, 43(3), 303-315.
Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster
analysis. Journal of Computational and Applied Mathematics, 1987, 20, 53-65.
Sarkar, M., Yegnanarayana, B. and Khemani, D. A clustering algorithm using an evolutionary
programming-based approach. Pattern Recognition Letters, 1997, 18(10), 975-986.
Das, S. and Abraham, A. Automatic clustering using an improved differential evolution
algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and
Humans, 2008, 38(1), 218-237.
Das, S. and Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE
Transactions on Evolutionary Computation, 2011, 15(1), 4-32.
Xu, L. Rival penalized competitive learning, finite mixture, and multisets clustering. Pattern
Recognition Letters, 1997, 18(11), 1167-1178.
Population-Based Advanced Engineering Optimization Techniques: A Literature Survey
R.V. Rao, K.C. More*
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
*Corresponding author (e-mail: kiran.imagine67@gmail.com)
This paper presents a survey on a few advanced optimization techniques. The
techniques are: Gravitational Search Algorithm, Firefly algorithm, Cuckoo Search
Algorithm and Teaching-Learning-Based Optimization Algorithm.
1. Introduction
Most of the traditional techniques require gradient information and therefore cannot
solve non-differentiable functions (Rao and Savsani, 2012). Metaheuristic algorithms are
very diverse, including Genetic Algorithms, Simulated Annealing, Differential Evolution,
Ant and Bee Algorithms, Particle Swarm Optimization, Harmony Search, the Firefly Algorithm,
Cuckoo Search and others. Some of these algorithms are introduced in this paper, namely the
Gravitational Search Algorithm, the Firefly Algorithm, the Cuckoo Search Algorithm and the
Teaching-Learning-Based Optimization Algorithm.
2. Gravitational search algorithm (GSA)
The gravitational search algorithm (GSA) was proposed in 2009 by Rashedi et al. It is
based on the law of gravity and mass interactions (Rashedi et al., 2009).
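The mass-interaction idea can be illustrated with a short sketch. This is a simplified, assumption-laden illustration (no shrinking Kbest set, simple bound clamping, best-so-far tracking; `gsa`, `g0` and `alpha` are illustrative parameter choices), not Rashedi et al.'s exact formulation:

```python
import math
import random

def gsa(f, bounds, pop=15, iters=60, g0=100.0, alpha=20.0, seed=0):
    """Simplified gravitational search sketch: agents attract one another
    with forces proportional to fitness-derived masses (minimization)."""
    rng = random.Random(seed)
    d = len(bounds)
    x = [[lo + rng.random() * (hi - lo) for lo, hi in bounds] for _ in range(pop)]
    v = [[0.0] * d for _ in range(pop)]
    best_x, best_c = None, float("inf")
    for t in range(iters):
        fit = [f(xi) for xi in x]
        for xi, fi in zip(x, fit):
            if fi < best_c:
                best_x, best_c = list(xi), fi
        b, w = min(fit), max(fit)
        # normalized masses: better (lower-cost) agents are heavier
        m = [(w - fi) / (w - b) if w > b else 1.0 for fi in fit]
        s = sum(m)
        m = [mi / s for mi in m]
        g = g0 * math.exp(-alpha * t / iters)   # decaying gravitational constant
        for i in range(pop):
            acc = [0.0] * d
            for j in range(pop):
                if i == j:
                    continue
                r = math.dist(x[i], x[j]) + 1e-9
                for k in range(d):
                    # randomized acceleration contribution of agent j on agent i
                    acc[k] += rng.random() * g * m[j] * (x[j][k] - x[i][k]) / r
            for k in range(d):
                v[i][k] = rng.random() * v[i][k] + acc[k]
                lo, hi = bounds[k]
                x[i][k] = min(max(x[i][k] + v[i][k], lo), hi)
    return best_x, best_c
```

As the gravitational constant decays, the swarm settles around the heavier (better) agents.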
Table 1. The literature survey on the gravitational search algorithm:

Author/Year | Method | Application
Rashedi et al. (2009) | GSA | Various standard benchmark functions.
Zibanezhad et al. (2009) | GSA | Solving Web service composition.
Seyedali and Siti (2010) | H GSA-PSO | Twenty-three benchmark functions.
Rashedi et al. (2010) | BGSA | Various standard benchmark functions.
Chatterjee et al. (2011) | GSA | Optimum radial amplitude distribution and radial phase distribution.
Altinoz and Yilmaz (2011) | GSA | The length and width of a rectangular patch antenna.
Xiao and Cheng (2011) | GSA | Producing good DNA sequences for reliable DNA computing.
Seljanko (2011) | Hybrid GSA-GA | Hexapod walking robot gait generation.
Purwoharjono et al. (2011) | GSA | Minimizing active power losses in a transmission line.
Askari and Zahiri (2011) | Intelligent GSA | Five well-known benchmarks and a practical radar target recognition problem.
Sheikhan and Rad (2012) | IGSA | Unmanned aerial vehicle (UAV) path planning.
Sonmez et al. (2012) | GSA | Minimizing the fuel cost function of flexible AC transmission systems.
Duman et al. (2012) | GSA | Determination of the optimal PI and PID parameters in load frequency control of a single-area power system.
Kazak and Duysak (2012) | MGSA | Four benchmark functions.
Moghadam et al. (2012) | QGSA | Improved visual quality of stego images.
Moghadam and Pour (2012) | IQGSA | Tested on some benchmark functions.
Shamsudin et al. (2012) | FDGSA | Tested on some benchmark functions.
Liu et al. (2012) | Disruption GSA | 23 nonlinear benchmark functions.
Ganesan et al. (2012) | Hybrid DE & GSA | The green sand mould system problem.
Niknam et al. (2013) | OSAMGSA | Reactive power and voltage control.
3. Firefly algorithm (FA)
The firefly algorithm is a biologically inspired algorithm proposed by Yang in 2009. The
flashing light of fireflies is an incredible sight in the summer sky in tropical and temperate
regions (Yang, 2009).
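The attraction mechanism can be sketched briefly. This is a simplified illustration, assuming the standard attractiveness term beta0 * exp(-gamma * r^2) with a damped random walk; `firefly` and its parameter values are illustrative choices, not a definitive implementation:

```python
import math
import random

def firefly(f, bounds, pop=15, iters=80, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Simplified firefly sketch: each firefly moves toward every brighter
    (lower-cost) one; brightness decays with distance."""
    rng = random.Random(seed)
    d = len(bounds)
    x = [[lo + rng.random() * (hi - lo) for lo, hi in bounds] for _ in range(pop)]
    cost = [f(xi) for xi in x]
    for _ in range(iters):
        for i in range(pop):
            for j in range(pop):
                if cost[j] < cost[i]:        # firefly j is brighter (minimization)
                    r2 = sum((a - b) ** 2 for a, b in zip(x[i], x[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    for k in range(d):
                        lo, hi = bounds[k]
                        step = alpha * (rng.random() - 0.5) * (hi - lo)
                        x[i][k] += beta * (x[j][k] - x[i][k]) + step
                        x[i][k] = min(max(x[i][k], lo), hi)
                    cost[i] = f(x[i])
        alpha *= 0.97                         # gradually damp the random walk
    b = cost.index(min(cost))
    return x[b], cost[b]
```

Damping the random-walk term over the iterations lets the swarm first explore and then contract around the brightest fireflies.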
Table 2. The literature survey on the firefly algorithm:

Author/Year | Method | Application
Yang (2009) | FA | Benchmark functions.
Horng and Jiang (2010) | Maximum entropy (MEFFT) algorithm | Complex document analysis and biomedical image application.
Bojic et al. (2010) | FA | Achieving energy savings in access networks.
Palit et al. (2011) | Binary FA | Determining a plaintext from the cipher text.
Gandomi et al. (2011) | FA | Mixed continuous/discrete structural optimization problems.
Nandy et al. (2012) | Hybrid FA with the back propagation method | Some standard benchmark functions.
Hassanzadeh and Meybodi (2012a) | Hybrid FA with CLA | Five benchmark functions.
Bojic et al. (2012) | FA | Reduction of the telecommunications operator's costs without reducing the quality of the provided services for mobile users.
Sulaiman et al. (2012) | FA | Minimizing the fuel cost and transmission losses of the economic dispatch (ED) problem.
Ismail et al. (2012) | FA | Optimized path in the PCB hole-drilling process.
Marichelvam et al. (2012) | FA | Flow shop scheduling problems.
Amiri et al. (2013) | Multi-objective enhanced FA | Community detection as a multi-objective optimization problem (MOP) for investigating community structures in complex networks.
Poursalehi et al. (2013) | FA | The nuclear reactor loading pattern optimization problem.
Mohammadi et al. (2013) | AMFA | Minimizing the total operating cost of micro grids.
Coelho and Mariani (2013) | Improved firefly algorithm (IFA) | Minimizing the energy consumption of multi-chiller systems.
4. Cuckoo search algorithm (CSA)
The cuckoo search (CS) algorithm was proposed by Yang and Deb in 2009. It is based on
the breeding behavior of cuckoos (Yang and Deb, 2009).
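The core mechanics (Lévy-flight moves plus abandonment of the worst nests) can be sketched as follows. This is a simplified illustration assuming Mantegna's algorithm for the Lévy steps; `cuckoo_search`, the step scale 0.01 and `pa = 0.25` are illustrative choices, not Yang and Deb's exact formulation:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Levy-distributed step length via Mantegna's algorithm (assumed here)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n_nests=15, iters=100, pa=0.25, seed=0):
    """Simplified cuckoo search sketch: new eggs via Levy flights, with a
    fraction pa of the worst nests abandoned each generation."""
    rng = random.Random(seed)
    d = len(bounds)
    clamp = lambda val, lo, hi: min(max(val, lo), hi)
    nests = [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
             for _ in range(n_nests)]
    cost = [f(n) for n in nests]
    for _ in range(iters):
        # generate a new egg by a Levy flight from a random nest
        i = rng.randrange(n_nests)
        new = [clamp(nests[i][k] + 0.01 * levy_step(rng) *
                     (bounds[k][1] - bounds[k][0]), *bounds[k])
               for k in range(d)]
        c_new = f(new)
        j = rng.randrange(n_nests)
        if c_new < cost[j]:                    # replace a worse random nest
            nests[j], cost[j] = new, c_new
        # abandon a fraction pa of the worst nests (the best nest survives)
        order = sorted(range(n_nests), key=lambda n: cost[n], reverse=True)
        for w in order[:int(pa * n_nests)]:
            nests[w] = [lo + rng.random() * (hi - lo) for lo, hi in bounds]
            cost[w] = f(nests[w])
    b = cost.index(min(cost))
    return nests[b], cost[b]
```

Because the best nest is never abandoned and replacements are greedy, the best cost found never worsens across generations.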
Table 4. The literature survey on the cuckoo search algorithm:

Author/Year | Method | Application
Yang and Deb (2009) | CSA | Benchmark functions.
Yang and Deb (2010) | CSA | Engineering design optimization problems, including the design of springs and welded beam structures.
Kaveh et al. (2011) | CSA | Minimizing the self-weight of diagonal square grids.
Rani and Malek (2011) | CSA | Optimizing the locations of array elements to exhibit an array pattern with suppressed sidelobes and/or null placement in specified directions.
Kumar and Chakraverty (2011a) | CSA | Design optimization for reliable embedded systems.
Pop et al. (2011) | Hybrid CSA & tabu search techniques | Optimal Web service composition.
Kartikayen and Venkatalakshmi (2012) | PSO-incorporated CSA | Maximizing the use of advanced nodes and minimizing communication distance.
Zhao and Li (2012) | Improved CSA | Some benchmark functions.
Rani et al. (2012) | MCS algorithm | Linear antenna array element excitation locations, amplitudes and phases.
Radovan et al. (2012) | CSA | Dimensional synthesis of a Stephenson III six-bar linkage.
Burnwal and Deb (2012) | CSA | Solving an FMS scheduling problem.
Yildiz (2013) | CSA | Optimization of machining parameters in milling operations.
5. Teaching–Learning-Based Optimization (TLBO)
TLBO is a teaching-learning process inspired algorithm proposed by Rao et al. in 2011,
based on the effect of the influence of a teacher on the output of learners in a class
(Rao et al., 2011).
Table 5. The literature survey on the teaching–learning-based optimization algorithm:

Author/Year | Method | Application
Rao et al. (2011) | TLBO | Five benchmark functions and six mechanical design optimization problems.
Rao and Kalyankar (2011) | TLBO | Process parameter optimization of the advanced machining processes ECM and ECDM.
Togan (2012) | TLBO | Minimizing the overall weight of a frame structure.
Niknam et al. (2012a) | MTLBO | Optimal location of automatic voltage regulators (AVRs) in distribution systems in the presence of distributed generators (DGs).
Jadhav et al. (2012) | MTLBO algorithm | Optimization of the economic load dispatch problem of a wind-thermal system.
Niknam et al. (2012b) | IMTLBO | Optimizing energy management with the goals of cost, total power loss and emission minimization of PEM-FCPPs.
Zou et al. (2012) | MOTLBO | Several benchmark functions.
Nayak et al. (2012) | MOTLBO algorithm | Optimal power flow (OPF) problem.
Rao et al. (2012) | TLBO | 25 different unconstrained benchmark functions and 35 constrained benchmark functions.
Rasoul et al. (2012) | MTLBO algorithm | Multiobjective wind-thermal economic emission dispatch problem.
Rao and Patel (2012a) | TLBO | Multi-objective optimization of combined Brayton and inverse Brayton cycle.
Rao and Patel (2012b) | ETLBO | 35 constrained optimization problems having different characteristics.
Rao and Patel (2012c) | Improved TLBO algorithm | Unconstrained optimization problems.
Yildiz (2012) | Hybrid TLBO and Taguchi's techniques | Machining parameters considering minimum production cost.
Amiri (2012) | TLBO | Clustering problem.
Satapathy et al. (2012) | TLBO | High-dimensional real-parameter optimization benchmark functions.
Hoseini et al. (2012) | MTLBO | Multi-objective optimal location of AVRs in distribution systems.
Pawar and Rao (2012) | TLBO | Process parameter optimization of the abrasive water jet machining process and two conventional machining processes, namely grinding and milling.
Rao and Kalyankar (2012a) | TLBO | Parameter optimization of a multi-pass turning operation.
Rao and Kalyankar (2012b) | TLBO | Optimization aspects of the process parameters of a multi-pass turning operation.
Rao and Kalyankar (2012c) | TLBO | Multi-objective optimization of industrial LBW.
Degertekin and Hayalioglu (2013) | TLBO | Optimization of truss problems.
Waghmare, G. (2013) | TLBO | Commented on the TLBO technique and showed the correct understanding of the TLBO algorithm.
Rao and Kalyankar (2013) | TLBO | Process parameter optimization of three modern machining processes: USM, AJM and WEDM.
Rao and Patel (2013a) | MTLBO | Multiobjective optimization of a two-stage thermo-electric cooler with cooling capacity and COP as objectives.
Rao and Patel (2013b) | ETLBO | 76 unconstrained optimization problems to check its performance.
Rao and Kalyankar (2013) | TLBO | Parameter optimization of the continuous casting process.
Rao and Patel (2013c) | MTLBO | Multi-objective optimization of heat exchangers.
Satapathy et al. (2013) | Weighted TLBO | Several benchmark optimization problems.
Satapathy and Naik (2013) | Modified TLBO (mTLBO) | Enhancing the convergence rate, using constrained benchmark problems.
Wang et al. (2013) | TLBO | Benchmark evaluation functions.
Satapathy et al. (2013) | TLBO | Different benchmark functions.
Sahu et al. (2013) | PSO, DE, ABC, TLBO and MTLBO | Eight different benchmark functions.
Rao and Waghmare (2013) | TLBO | Solving composite test functions.
6. Conclusion
Advanced optimization techniques are powerful tools for ascertaining solutions to real-world problems and are intended for global optimization. Four population-based metaheuristic algorithms have been reviewed. The GSA has thus far been used mainly for Electrical and Electronics Engineering applications. The FA has so far been used for applications in Civil, Structural, Computer and Electrical Engineering. The CS algorithm has so far been used for solving constrained and unconstrained benchmark functions and for Civil, Structural, Computer, Electrical Engineering and Manufacturing applications. The TLBO algorithm has so far been applied to many constrained and unconstrained benchmark functions and to many engineering applications in Civil, Structural, Computer, Electrical, Manufacturing, Thermal and Design Engineering.
Of all these advanced optimization techniques, the TLBO algorithm is found to be a parameter-less algorithm: its working does not depend on any algorithm-specific parameters. All evolutionary and swarm-intelligence-based algorithms are probabilistic and require common controlling parameters such as population size and number of generations. Besides the common control parameters, different algorithms require their own algorithm-specific control parameters. The Teaching-Learning-Based Optimization (TLBO) algorithm does not require any algorithm-specific parameters; it requires only the common controlling parameters, population size and number of generations, for its working. Thus, TLBO can be regarded as an algorithm-specific parameter-less algorithm.
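The parameter-less character of TLBO can be illustrated with a minimal sketch. The code below is an illustrative greedy-acceptance TLBO on the sphere benchmark function, not an implementation from any of the cited papers; all names and settings are assumptions. Note that it takes only the common controlling parameters, population size and number of generations:

```python
import numpy as np

def tlbo(fitness, bounds, pop_size=20, generations=100, seed=0):
    """Minimal TLBO sketch: only the common parameters are needed."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        # Teacher phase: move learners toward the best solution (the teacher)
        teacher = pop[cost.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)  # teaching factor, randomly 1 or 2
        for i in range(pop_size):
            new = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
            c = fitness(new)
            if c < cost[i]:  # greedy acceptance
                pop[i], cost[i] = new, c
        # Learner phase: learn from a randomly chosen peer
        for i in range(pop_size):
            j = rng.integers(pop_size)
            while j == i:
                j = rng.integers(pop_size)
            step = pop[i] - pop[j] if cost[i] < cost[j] else pop[j] - pop[i]
            new = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            c = fitness(new)
            if c < cost[i]:
                pop[i], cost[i] = new, c
    best = cost.argmin()
    return pop[best], cost[best]

# Sphere benchmark function: f(x) = sum(x_i^2), minimum 0 at the origin
sphere = lambda x: float(np.sum(x ** 2))
bounds = (np.full(5, -10.0), np.full(5, 10.0))
x_best, f_best = tlbo(sphere, bounds)
```

Contrast this signature with, e.g., a GA, which additionally needs crossover and mutation rates, or GSA, which needs a gravitational constant schedule.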
References
Altinoz, O. T., Yilmaz, A. E., 2011, Calculation of Optimized Parameters of Rectangular Patch
Antenna Using Gravitational Search Algorithm. IEEE International Symposium on
Innovations in Intelligent Systems and Applications, Istanbul-Kadıköy, Turkey. 349 – 353.
Amiri, B., Hossain, L., Crawford, J. W., Wigand, R. T., 2013, Multi-objective enhanced
firefly algorithm for community detection in complex networks. Knowledge-based
Systems, http://dx.doi.org/10.1016/j.knosys.2013.01.004.
Askari, H., Zahiri, S. H., 2011, Data Classification Using Fuzzy-GSA. IEEE 1st International
Conference on Computer and Knowledge Engineering, Ferdowsi University of Mashhad,
Iran, 6 – 11.
Bojic, I., Podobnik, V., Ljubi, I., Jezic, G., Kusek, M., 2010, A self-optimizing mobile network:
Auto-tuning the network with firefly-synchronized agents. Information Sciences, 182,
77–92.
Chatterjee A., Mahanti, G. K., Mahapatra, P. R. S., 2011, Generation of Phase-only Pencil-beam Pair from Concentric Ring Array Antenna Using Gravitational Search Algorithm.
IEEE International Conference on Communications and Signal Processing, Calicut, India,
384 – 388.
Coelho, L. D. S., and Mariani, V. C., 2013, Improved firefly algorithm approach applied to
chiller loading for energy conservation. Energy and Buildings, 59, 273–278.
Duman, S., Sonmez, Y., Guvenc, U., Yorukeren, N., 2012a. Optimal reactive power dispatch
using a gravitational search algorithm. IET Generation, Transmission & Distribution, 6(6),
563-576.
Ganesan, T., Vasant, P., Elamvazuthi, I., Shaari, K. Z. K., 2012. Multiobjective Optimization of
Green Sand Mould System using DE and GSA. IEEE International Conference on
Intelligent Systems Design and Applications, Córdoba, Spain, 1012 – 1016.
Hassanzadeh, T., Meybodi, M. R., 2012a. A New Hybrid Algorithm Based on Firefly Algorithm
and Cellular Learning Automata, Twentieth Iranian Conference on Electrical Engineering,
Tehran, Iran, 628 – 633.
Horng, M. H., 2012, Vector quantization using the firefly algorithm for image compression.
Expert Systems with Applications, 39, 1078–1091.
Ismail, M. M., Othman, M. A., Sulaiman, H.A., Misran, M. H., Ramlee, R. H., Abidin, A. F. Z.,
Nordin, N. A., Zakaria, M. I., Ayob, M. N., Yakop, F., 2012, Firefly Algorithm for Path
Optimization in PCB Holes Drilling Process. International Conference in Green and
Ubiquitous Technology, Bandung, Indonesia, 110 – 113.
Kaveh, A., Bakhshpoori, T., and Afshari, E., 2011, An Optimization Based Comparative Study
of Double Layer Grids With Two Different Configurations Using Cuckoo Search Algorithm.
International Journal of Optimization in Civil Engineering, 4, 507-520.
Kazak, N., Duysak, A., 2012, Modified Gravitational Search Algorithm. IEEE International
Symposium on Innovations in Intelligent Systems and Applications, Trabzon, Turkey, 1 –
4.
Liu C., Gao, Z., Zhao, W., 2012, A New Path Planning Method Based on Firefly Algorithm.
Fifth IEEE International Joint Conference on Computational Sciences and Optimization,
Harbin, Heilongjiang, China, 775 – 778.
Marichelvam, M. K., Prabaharan, T., Yang, X. S., 2012, A Discrete Firefly Algorithm for the
Multi-Objective Hybrid Flow shop Scheduling Problems. IEEE Transactions on
Evolutionary Computation, Portland, USA.
Moghadam, M. S., Nezamabadi-pour, H., Farsangi, M. M., Mahyabadi, M., 2012, A more
secure steganography method based on pair-wise LSB matching via a quantum
gravitational search algorithm. The 16th IEEE CSI International Symposium on Artificial
Intelligence and Signal Processing, Shiraz, Iran, 034 – 038.
Moghadam, M. S., Nezamabadi-pour, H., 2012, An improved quantum behaved gravitational
search algorithm. IEEE 20th Iranian Conference on Electrical Engineering, Tehran, Iran,
711 – 715.
Mohammadi, S., Mozafari, B., Solimani, S., Niknam, T., 2013. An Adaptive Modified Firefly
Optimisation Algorithm based on Hong’s Point Estimate Method to optimal operation
management in a microgrid with consideration of uncertainties. Energy, 51, 339-348.
Niknam, T., Narimani, M. R., Rasoul, A. A., Bahman, B. F., 2013. Multi objective Optimal
Reactive Power Dispatch and Voltage Control: A New Opposition-Based Self-Adaptive
Modified Gravitational Search Algorithm. IEEE Systems Journal, 99, 1-12.
Palit, S., Sinha, S.N., Molla, M.A., Khanra, A., Kule, A., 2011. A Cryptanalytic Attack on the
Knapsack Cryptosystem using Binary Firefly Algorithm. Second International Conference
on Computer & Communication Technology, Allahabad, India, 428 – 432.
Pawar, P. J., Rao, R. V., 2012. Parameter optimization of machining processes using
teaching learning-based optimization algorithm. The International Journal of Advanced
Manufacturing Technology. DOI 10.1007/s00170-012-4524-2.
Pop, C. B., Chifu, V.R., Salomie, I., Vlad, M., 2011, Cuckoo-inspired Hybrid Algorithm for
Selecting the Optimal Web Service Composition, IEEE International Conference on
Intelligent Computer Communication and Processing, Cluj-Napoca, Romania, 33 – 40.
Poursalehi, N., Zolfaghari , A. , Minuchehr, A. , Moghaddam , H.K. 2013, Continuous firefly
algorithm applied to PWR core pattern enhancement. Nuclear Engineering and Design.
258, 107-115.
Purwoharjono, P., Penangsang, O., Muhammad, A., Soeprijanto, A., 2011. Voltage Control on
500kV Java-Bali Electrical Power System for Power Losses Minimization Using
Gravitational Search Algorithm. IEEE 1st International Conference on Informatics and
Computational Intelligence, Bandung, Indonesia, 11 – 17.
Rani, A., Malek, 2011, Symmetric Linear Antenna Array Geometry Synthesis using Cuckoo
Search Metaheuristic Algorithm. Seventeenth Asia-Pacific Conference on
Communications, Sabah, Malaysia, 374 – 379.
Rani, K. N. A., Malek, M. F. A., Siew C. N., Jamlos, F., Affendi, N. A. M., Mohamed, L.,
Saudin, N., Rahim, H. A. 2012, Hybrid Multiobjective Optimization using Modified Cuckoo
Search Algorithm in Linear Array Synthesis. Loughborough Antennas & Propagation
Conference, Loughborough, UK, 210 – 215
Rani, K. N. A., Malek, M. F.A., Siew C. N., Jamlos, F., Affendi, N. A. M., Mohamed, L.,
Saudin, N., Rahim, H. A., 2012, Modified Cuckoo Search Algorithm in Weighted Sum
Optimization for Linear Antenna Array Synthesis. IEEE Symposium on Wireless
Technology and Applications, Bandung, Indonesia, 1 – 4.
Rao, R. V., Kalyankar V. D., 2012, Parameters Optimization of Continuous Casting Process
Using Teaching-Learning-Based Optimization Algorithm, Swarm, Evolutionary, and
Memetic Computing Lecture Notes in Computer Science, 7677, 540-547.
Rao, R. V., Kalyankar V. D., 2012a, Multi-pass turning process parameter optimization using
teaching–learning-based optimization algorithm. Scientia Iranica D,
http://dx.doi.org/10.1016/j.scient.2013.01.002.
Rao, R. V., Kalyankar, V. D., 2011, Parameter optimization of advanced machining processes
using TLBO algorithm. EPPM, Singapore. 20-21.
Rao, R. V., Kalyankar, V. D., 2012b, Parameter Optimization of Machining Processes Using a
New Optimization Algorithm. Materials and Manufacturing Processes. 27, 978–985.
Rao, R. V., Kalyankar, V. D., 2012c, Multi-objective multi-parameter optimization of the
industrial LBW process using a new optimization algorithm. I Mech Part B: Journal of
Engineering Manufacture, 226 (6), 1018-1025.
Rao, R. V., Kalyankar, V. D., 2013, Parameter optimization of modern machining processes
using teaching–learning-based optimization algorithm. Engineering Applications of
Artificial Intelligence, 26, 524–531.
Rao, R. V., Patel, V. 2012a, Multi-objective optimization of combined Brayton and inverse
Brayton cycles using advanced optimization algorithms. Engineering Optimization, 44 (8),
965–983.
Rao, R. V., Patel, V. 2013a, Multi-objective optimization of two stage thermo electric cooler
using a modified teaching–learning-based optimization algorithm. Engineering
Applications of Artificial Intelligence, 26, 430–445.
Rao, R. V., Patel, V., 2012b, An elitist teaching-learning-based optimization algorithm for
solving complex constrained optimization problems. International Journal of Industrial
Engineering Computations, 3, doi: 10.5267/j.ijiec.2012.03.007
Rao, R. V., Patel, V., 2012c, An improved teaching-learning-based optimization algorithm for
solving unconstrained optimization problems. Scientia Iranica D,
http://dx.doi.org/10.1016/j.scient.2012.12.
Rao, R. V., Patel, V., 2013, Comparative performance of an elitist teaching-learning-based
optimization algorithm for solving unconstrained optimization problems. International
Journal of Industrial Engineering Computations, 4, doi:10.5267/j.ijiec.2012.09.001.
Rao, R. V., Patel, V., 2013c Multi-objective optimization of heat exchangers using a modified
teaching-learning-based optimization algorithm. Applied Mathematical Modelling, 37,
1147–1162.
Rao, R. V., Savsani, V. J., Balic, J., 2012, Teaching–learning-based optimization algorithm for
unconstrained and constrained real-parameter optimization problems. Engineering
Optimization, 44, 1447–1462.
Rao, R. V., Waghmare, G. G., 2013, Solving Composite Test Functions Using Teaching-Learning-Based Optimization Algorithm. International Conference on Frontiers of
Intelligent Computing: Theory and Applications (FICTA) Advances in Intelligent Systems
and Computing, 199, 395-403.
Rao, R. V., Savsani, V. J., 2012, Mechanical Design Optimization Using Advanced
Optimization Techniques. Springer-Verlag, London.
Rao, R.V., Savsani, V.J., Vakharia, D.P., 2011, Teaching–learning-based optimization: A
novel method for constrained mechanical design optimization problems. Computer-Aided
Design, 43, 303–315.
Rao, R.V., Savsani, V.J., Vakharia, D.P., 2012, Teaching–learning-based optimization: an
optimization method for continuous non-linear large scale problem. Information Science,
183, 1–15.
Rashedi E., Nezamabadi-pour, H., Saryazdi, S., 2009, GSA: A Gravitational Search
Algorithm. Information Sciences, 179, 2232–2248.
Rashedi, E., Nezamabadi-pour , H ., Saryazdi, S., 2010, BGSA: binary gravitational search
algorithm. Natural Computing, 9,727–745.
Satapathy, S. C., Naik, A., Parvathi, K., 2012, Teaching Learning Based Optimization for
Neural Networks Learning Enhancement. Swarm, Evolutionary, and Memetic Computing,
Lecture Notes in Computer Science, 7677, 761-769.
Satapathy, S. C., Naik, A., Parvathi, K., 2013, Rough set and teaching learning based
optimization technique for optimal feature selection. Central European Journal of
Computer Science, 3 (1), 27-42.
Satapathy, S. C., Naik, A., Parvathi, K., 2013, 0-1 Integer Programming for Generation
Maintenance Scheduling in Power Systems Based on Teaching Learning Based
Optimization (TLBO). Contemporary Computing Communications in Computer and
Information Science, 306, 53-63.
Seljanko, F., 2011, Hexapod Walking Robot Gait Generation Using Genetic-Gravitational
Hybrid Algorithm. IEEE the 15th International Conference on Advanced Robotics Tallinn
University of Technology, Tallinn, Estonia, 253 – 258.
Seyedali, M., Siti, Z. M. H., 2010. A New Hybrid PSOGSA Algorithm for Function
Optimization. International Conference on Computer and Information Application, Tianjin,
China, 374 – 377.
Shamsudin, H. C., Irawan, A., Ibrahim, Z., Abidin, A. F. Z., Wahyudi, S., Rahim, M. A .A.,
Khalil, K.,
2012, A Fast Discrete Gravitational Search Algorithm. IEEE Fourth
International Conference on Computational Intelligence, Modelling and Simulation,
Kuantan, Malaysia, 24 – 28.
Sheikhan, M., Rad, M., S., 2012, Gravitational search algorithm–optimized neural misuse
detector with selected features by fuzzy grids–based association rules mining. Neural
Computing & Applications, doi 10.1007/s00521-012-1204-y.
Sonmez, Y., Duman, S., Guvenc, U., Yorukeren, N., 2012. Optimal Power Flow Incorporating
FACTS Devices using Gravitational Search Algorithm. IEEE International Symposium on
Innovations in Intelligent Systems and Applications, Albena, Bulgaria, 1 – 5.
Sulaiman, M. H., Mustafa, M. W., Azmi, A., Aliman, O., Abdul, R. S. R., 2012a, Optimal
Allocation and Sizing of Distributed Generation in Distribution System via Firefly
Algorithm. IEEE International Power Engineering and Optimization Conference, Melaka,
Malaysia, 84 – 89.
Togan, V., 2012, Design of planar steel frames using Teaching–Learning-Based Optimization.
Engineering Structures, 34, 225–232.
Vazquez, R. A., 2011. Training Spiking Neural Models using Cuckoo Search Algorithm. IEEE
Congress on Evolutionary Computation, New Orleans, USA, 679 – 686.
Waghmare, G., 2013, Comments on “A note on teaching–learning-based optimization
algorithm”. Information Sciences, 229, 159-169.
Xiao, J., Cheng, Z., 2011, DNA Sequences Optimization Based on Gravitational Search
Algorithm for Reliable DNA Computing. IEEE 6th International Conference on Bio-Inspired
Computing: Theories and Applications, Penang, Malaysia, 103 – 107.
Yang, X. S., Deb, S. 2010. Engineering Optimisation by Cuckoo Search, International Journal
of Mathematical Modelling and Numerical Optimisation, 1 (4), 330–343.
Yang, X. S., Deb, S. 2011. Multiobjective cuckoo search for design optimization. Computers
and Operations Research, doi:10.1016/j.cor.2011.09.026.
Yang, X., Hosseini, S. S. S., Gandomi, A. H., 2012. Firefly Algorithm for solving non-convex
economic dispatch problems with the valve loading effect. Applied Soft Computing, 12,
1180–1186.
Yildiz, A. R., 2013, Cuckoo search algorithm for the selection of optimal machining
parameters in milling operations. International Journal of Advanced Manufacturing
Technology, 64, 55–61.
Zibanezhad, B., Zamanifar, K., Sadjady, R. S., Rastegari, Y., 2012. Applying the gravitational
search algorithm in the QoS-based Web service selection problem. Journal of Zhejiang
University SCIENCE C – Computer & Electronics, 12 (9), 730-742.
Zou, F., Wang, L., Hei, X., Chen, D., Wang, B., 2013, Multi-objective optimization using
teaching-learning-based optimization algorithm. Engineering Applications of Artificial
Intelligence, 26 (4), 1291-1300.
A Novel Approach for Fuel Properties Optimization for the
Production of Blended Biofuel by using Genetic Algorithm
Lalit Kumar Behera1*, Payodhar Padhi2, Priya Gorai2, Vivek Kumar2, Sujit Kumar Behera3
1 Utkal University, Vanivihar, Bhubaneswar, Odisha, India
2 Konark Institute of Science & Technology, Bhubaneswar, Odisha, India
3 Aricent, Bengaluru, India
*Corresponding author (email: lalitkubehera@yahoo.com)
Recently, increased focus has been observed on biofuels and blended fuels, which are
mixtures of traditional and alternative fuels in varying percentages. Renewable fuels are
bound to gradually replace fossil fuels. The development of biorefineries will mark the
historic transition into a sustainable society in which biological feedstocks, processes
and products constitute the main pillars of the economy. The efficiency of the fuel
mixture depends on various fuel properties. The present study proposes optimization of
various fuel properties to increase the fuel efficiency and help in the production of
good-quality blended biofuel. For this purpose the evolutionary genetic algorithm has
been adopted. The concept will provide a framework for quality enhancement of biofuel.
Keywords: Biofuel, Blending, Genetic Algorithm
1. Introduction
The world is presently confronted with two crises of fossil fuel depletion and
environmental degradation (Searchinger et al. 2008). To overcome these problems,
renewable energy has recently been receiving increased attention and production is gaining
momentum due to its environmental benefits and the fact that it is derived from renewable
sources (Wahid, 2007). The world’s excessive demand for energy, the oil crisis, and the
continuous increase in oil prices have led countries to investigate new and renewable fuel
alternatives. Throughout the early to late part of the 20th century, petroleum-based fuels were
cheap and abundant. New oil fields were discovered throughout the world, and it seemed we
would be able to rely forever on crude oil as a cheap, readily available source of energy.
Throughout the 20th century, motorized transportation flourished after the invention of the
automobile. Now, there is almost one automobile for every home and in many cases two or
more. But due to fossil fuel depletion and environmental degradation, some alternative
resource is unavoidable. Quality is a prerequisite for the success of a biofuel. Biodiesel
quality depends on several factors that reflect its chemical and physical characteristics. The
quality of biodiesel can be influenced by a number of factors: the quality of the feedstock; the
fatty acid composition of the parent vegetable oil or animal fat; the production process and the
other materials used in this process; the postproduction parameters; and the handling and
storage. The main criterion of biodiesel quality is the conformance of its physical and
chemical properties to the requirements of the relevant standard. Quality standards for
biodiesel are continuously updated, due to the evolution of compression ignition engines,
ever-stricter emission standards, reevaluation of the eligibility of feedstocks used for the
production of biodiesel, etc. An evolutionary genetic algorithm can be useful in the
production of good-quality blended fuel. By using this method the optimized quantities of
the mixture components of the blended fuel can be determined.
2. Material and method
2.1 Biodiesel and blending
Biodiesel is the name given to a clean burning alternative fuel, produced from domestic,
renewable resources. Biodiesel contains no petroleum, but it can be blended at any level with
200
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
petroleum diesel to create a biodiesel blend. It can be used in compression-ignition (diesel)
engines with little or no modifications. Biodiesel can be produced from a great variety of
feedstocks. These feedstocks include most common vegetable oils (e.g., soybean, Jatropha
plant seeds, cotton seed, palm, peanut, sunflower, coconut etc.) and animal fats as well as
waste oils (e.g., used frying oils). The choice of feedstock depends largely on
geography (Gressel, 2008; Herskowitz, 2006; Şensoz and Kaynar, 2006).
The major components of vegetable oils and animal fats are triacylglycerols (also
called triglycerides). Chemically, TAG are esters of fatty acids (FA) with glycerol. The different
FA that are contained in the TAG comprise the FA profile (or FA composition) of the
vegetable oil or animal fat. Because different FA have different physical and chemical
properties, the FA profile is probably the most important property of a vegetable oil or
animal fat. Depending on the origin and quality of the feedstock, different production
processes are used.
To obtain biodiesel, the vegetable oil or animal fat is subjected to a chemical reaction
termed transesterification, as the kinematic viscosity of the resulting biodiesel is much closer to that of
petrodiesel. In that reaction, the vegetable oil or animal fat is reacted in the presence of a
catalyst (usually a base) with an alcohol (usually methanol) to give the corresponding alkyl
esters (or for methanol, the methyl esters) of the FA mixture that is found in the parent
vegetable oil or animal fat. Four methods to reduce the high viscosity of vegetable oils to
enable their use in common diesel engines without operational problems such as engine
deposits have been investigated: blending with petrodiesel, paralysis, micro emulsification
(co-solvent blending), and transesterification. Hydrodynamic cavitation process has been
developed for production of Thumba biodiesel. The results show that maximum biodiesel yield
upto 80% can be achieved within 30 min.
The fact that vegetable oils, animal fats, and their derivatives such as alkyl esters are
suitable as diesel fuel demonstrates that there must be some similarity to petrodiesel fuel or
at least to some of its components. The fuel property that best shows this suitability is called
the cetane number. In addition to ignition quality as expressed by the cetane scale, several
other properties are important for determining the suitability of biodiesel as a fuel. Heat of
combustion, pour point, cloud point, kinematic viscosity, oxidative stability, and lubricity are
among the most important of these properties. Biodiesel has several distinct advantages
compared with petrodiesel, in addition to being fully competitive with petrodiesel in most
technical aspects, which are as follows:
• Derivation from a renewable domestic resource, thus reducing dependency on external
resources.
• Reduction of most exhaust emissions (with the exception of nitrogen oxides).
• Higher flash point, leading to safer handling and storage.
• Excellent lubricity, a fact that is steadily gaining importance with the advent of low-sulfur
petrodiesel fuels.
2.2 Genetic algorithm
The genetic algorithm (GA) is an optimization approach inspired by Darwin's theory of
evolution. The idea of the genetic algorithm is based on the so-called 'principle of survival
of the fittest'. The GA simulates the evolutionary process numerically (Tsai et al., 2013;
Zhigang et al., 2011). It represents the parameters of a given problem by encoding them
into a string. Just as, in genetics, chromosomes are constituted by genes, in a simple GA
the encoded strings are composed of bits. A simple genetic algorithm consists of three
basic operations: reproduction, crossover and mutation. The algorithm begins with a
population of individuals, each representing a possible solution of the problem. The
individuals, as in nature, undergo the three basic operations and evolve over generations,
from which a better-adapted population emerges by natural selection. A new population is
formed from a previous population according to the 'fitness' of each member of the
population. During the optimization process, only the 'fittest' solutions are selected to
reproduce and create new solutions. Once the chosen convergence criterion is satisfied,
the optimal solution is obtained. A detailed description of GA techniques can be found in
the works of Goldberg and Davis, among other excellent works. The basic process of the
genetic algorithm is illustrated in Fig. 1.
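The three basic operations described above can be sketched as a minimal bit-string GA. The problem (a toy "one-max" fitness that counts 1-bits), the tournament selection scheme and all parameter values are illustrative assumptions, not the configuration used in this study:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal bit-string GA with the three basic operations:
    reproduction (tournament selection), crossover and mutation."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Reproduction: binary tournament selection
        def select():
            a, b = random.sample(pop, 2)
            return list(a if fitness(a) >= fitness(b) else b)
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Single-point crossover with probability p_cross
            if random.random() < p_cross:
                cut = random.randint(1, n_bits - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # Bit-flip mutation with probability p_mut per bit
            for child in (p1, p2):
                for i in range(n_bits):
                    if random.random() < p_mut:
                        child[i] ^= 1
                children.append(child)
        pop = children[:pop_size]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = list(gen_best)  # keep the best-so-far solution
    return best

# 'Survival of the fittest' on a toy problem: maximise the number of 1-bits
onemax = lambda bits: sum(bits)
solution = genetic_algorithm(onemax)
```

In the study itself, the string would instead encode candidate property values and the fitness would be the regression objective of Equation (1).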
Figure 1. Steps of Genetic Algorithm
3. Proposed method
3.1 Proposed algorithm
The basic idea of the current approach is to evaluate a group of optimized property
values and use these sample points for the preparation of blended biofuel by mixing oils in
appropriate proportion. The procedure is proposed as follows.
Step 1: Construct the basic data set by identifying the critical variables or properties that are
helpful in estimating the efficiency of the fuel.
Step 2: Construct a matrix for the design variables, the objective function and the
constraints.
Step 3: Feed the objective function as input to the genetic algorithm.
Step 4: Verify the optimum design by exact analysis. If the predicted constraint values
are identical with the results from the GA, or the estimated optimum design is satisfactory
enough, exit.
3.2 Process
In this study seven different oils have been considered: diesel, kerosene,
karanja oil, castor oil, turpentine oil, neem oil and mahula oil. For the proposed work a few
real property values have been used along with some simulated data values. The database
comprises seven records representing six fuel properties, as presented in Table 1. In
the preliminary stage the dataset is converted to a matrix. Multivariate regression has
been used to produce the objective function. The objective function is then used in the
genetic algorithm to produce the optimized values for the different properties.
Table 1. Components of blended biofuel and their properties

Sl no  Fuel            P1 (flash point)  P2 (fire point)  P3 (viscosity)  P4 (density)  P5 (heat value)  P6 (Sp. gravity)
                       in deg. C         in deg. C        in cP           in kg/m3      in kJ/kg
1      Diesel          52                68               76.2            890           44800            0.89
2      Kerosene        37                56               2.2             820           46200            0.82
3      Karanja oil     241               253              29.65           912           35000            0.912
4      Castor oil      305               320              650             961           39500            0.961
5      Turpentine oil  35                46               1.375           875           44000            0.875
6      Neem oil        100               109              35.83           920           44650            0.92
7      Mahula oil      238               250              37.18           904           38963            0.904
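The regression step of Section 3.2 can be sketched with an ordinary least-squares fit. The property matrix is taken from Table 1; the efficiency scores y are hypothetical placeholders, since the paper's simulated efficiency values are not reported:

```python
import numpy as np

# Property matrix from Table 1: rows = oils, columns = P1..P6
# (flash point, fire point, viscosity, density, heat value, sp. gravity)
P = np.array([
    [ 52,  68, 76.20, 890, 44800, 0.890],   # diesel
    [ 37,  56,  2.20, 820, 46200, 0.820],   # kerosene
    [241, 253, 29.65, 912, 35000, 0.912],   # karanja oil
    [305, 320, 650.0, 961, 39500, 0.961],   # castor oil
    [ 35,  46, 1.375, 875, 44000, 0.875],   # turpentine oil
    [100, 109, 35.83, 920, 44650, 0.920],   # neem oil
    [238, 250, 37.18, 904, 38963, 0.904],   # mahula oil
])

# Hypothetical efficiency scores, one per oil; any 7-vector illustrates the fit.
y = np.array([0.95, 0.90, 0.60, 0.40, 0.85, 0.70, 0.65])

# Multivariate least-squares fit: find coeffs such that P @ coeffs ~ y
coeffs, residuals, rank, _ = np.linalg.lstsq(P, y, rcond=None)

def objective(p):
    """Fitted linear objective to be fed to the GA (Step 3)."""
    return float(np.asarray(p, dtype=float) @ coeffs)
```

The paper's Equation (1) is exactly such a fitted linear combination of property values, with coefficients obtained from its own (unreported) efficiency data.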
4. Result and discussion
In the present study six real-valued properties have been selected for the prediction of
fuel efficiency. In Table 1, Pi represents the ith property of the oil. The genetic algorithm was
implemented in MATLAB 7.10 (MathWorks Inc.). All calculations were processed on a
computer with a Samsung 1.50 GHz processor and 1 GB RAM. Regression analysis was
employed to investigate the relationship between the properties and the efficiency. The
objective function obtained is represented in Equation (1).

Y = (-0.0430*p(1)) + (0.0407*p(3)) + (0.0002*p(4)) + (0.0022*p(6))        (1)
Table 2. Fitness values in each generation and best and mean values

Generation  f-count  Best f(x)  Mean f(x)  Stall Generations
1           100      2.113      2.167      0
2           150      2.086      2.152      0
3           200      2.055      2.138      0
......
68          3450     0.9833     1.034      0
69          3500     0.9831     1.013      0
70          3550     0.9401     0.9978     0
Optimization terminated: maximum number of generations exceeded.
x = 63.6609  46.0137  2.1443  820.1427
fval = 0.9401
Figure 2. Best and Mean values obtained from Genetic Algorithm
In the function above, Y is the constrained variable indicating the fuel efficiency and
the p(i) represent the fractional values of the various properties. This objective function
has been processed by the genetic algorithm to give the optimized values of the respective
fuel properties. The fitness values in each generation and the best and mean values obtained
by this approach are given in Table 2 and Fig. 2. The result will facilitate good-quality
blended biofuel production by satisfying these optimized properties through mixing oils in
appropriate proportion.
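Equation (1) can be checked directly against the optimum reported in Table 2. Reading the four entries of x as the values of p(1), p(3), p(4) and p(6), in that order (an assumption, since the components of x are not labelled in the text), reproduces the reported fval:

```python
# Equation (1): the regression objective for fuel efficiency.
# Mapping the four reported x-entries onto p(1), p(3), p(4), p(6) in order
# is an assumption; the paper does not label the components of x.
def Y(p):
    return -0.0430 * p[0] + 0.0407 * p[1] + 0.0002 * p[2] + 0.0022 * p[3]

x_opt = (63.6609, 46.0137, 2.1443, 820.1427)  # GA solution reported above
print(round(Y(x_opt), 4))  # 0.9401, matching the reported fval
```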
5. Conclusion
Developing renewable sources of energy has become necessary due to the limited
supply of fossil fuels. Due to rapidly increasing energy requirements along with technological
development around the world, research and development activities have perforce focused on
new and renewable energy. Biodiesel is a promising alternative solution, but for the
production of a good-quality fuel, property optimization is required, which will help in
blending the different components in proper percentages. The present study provides a framework
for this purpose. Based on the results obtained in an array of optimization studies, a
comprehensive methodology has been established which will facilitate the production of good-quality biofuel.
References
Calabotta, B.J. and Burger, K., Application of molecular and genetic technologies to improve
feedstock supplies for biodiesel production, Proc. Intl. Congress on Biodiesel, Vienna, 2007.
Chih-Fong Tsai, William Eberle, Chi-Yuan Chu, Genetic algorithms in feature and instance
selection, Knowledge-Based Systems, 2013, 39, 240–247.
Chunni Dai, Meng Yao, Zhujie Xie, Chunhong Chen, Jingao Liu, Parameter optimization
for growth model of greenhouse crop using genetic algorithms, Applied Soft Computing,
2009, 9, 13–19.
Efren Mezura-Montes, Carlos A. Coello Coello, Constraint-handling in nature-inspired
numerical optimization: Past, present and future, Evolutionary Computation, 1, 2011,
173–194.
Gressel, J., Transgenics are imperative for biofuel crops, Plant Sci., 2008, 174, 246–263.
Herskowitz, M., Landau, M., Reizner, I. and Kaliya, M. (to Ben Gurion University of the Negev
Research and Development Authority), Production of diesel fuel from vegetable and
animal oils, US Pat. Appl. 2006/0207166 A1, 2006.
Koskinen, M., Sourander, M. and Nurminen, M., Apply a comprehensive approach to biofuels,
Hydrocarb. Proc., 2006, 85, 81–86.
Searchinger, T., Heimlich, R., Houghton, R.A., Dong, F., Elobeid, A., Fabiosa, J., Tokgoz, S.,
Hayes, D. and Yu, T.-H., Use of U.S. croplands for biofuels increases greenhouse gases
through emissions from land-use change, Science, 2008, 319, 1238–1240.
Şensoz, S. and Kaynar, I., Bio-oil production from soybean (Glycine max L.): fuel properties of
bio-oil, Ind. Crops Prod., 2006, 23, 99–105.
Wahid, M.B., Asian perspective: Overview of the biodiesel industry in Asia, Proc. Intl.
Congress on Biodiesel, Vienna, 2007.
Zhigang Ji, Zhenyu Li, Zhiqiang Ji, Research on Genetic Algorithm and Data Information
based on Combined Framework for Nonlinear Functions Optimization, Procedia
Engineering, 2011, 23, 155–160.
Selection of Lubricant in Machining using Multiple Attribute
Decision Making Technique
M. A. Makhesana*
Institute of Technology, Nirma University, Ahmedabad-382481 Gujarat, India
*Corresponding author (e-mail: mayur.makhesana@nirmauni.ac.in)
The main objective of this work is to select the most suitable lubricant from among a number
of lubricants available for machining of alloy steel with a tungsten carbide insert tool by using
a multiple attribute decision making technique. The selection procedure for the right lubricant is
based on the PROMETHEE method. A manufacturing organization has to select the right
manufacturing methods, product and process designs, manufacturing technologies,
materials, lubricants, machinery and equipment. The selection decisions become more
complicated as the decision makers in the manufacturing environment have to choose the
best possible option by considering a large number of alternatives based on a set of
conflicting criteria. To aid these selection processes, various multiple attribute decision
making methods are now available. The preference ranking organization method for
enrichment evaluation is one of the techniques which helps to find the best solution
among the available options. The factors affecting the lubricant selection are first identified;
these are the cutting force during machining, the surface roughness, the rate of tool wear and
the temperature at the work-tool interface. The aim of the multiple attribute decision making
technique is to combine the different measures into a single lubricant matrix, which helps to
select the most suitable lubricant and to rank the lubricants for the alloy steel machining operation.
1. Introduction
To meet current needs and challenges, manufacturing industries have to select the most suitable manufacturing processes, product designs, manufacturing tools, work piece and tool materials, machinery and equipment, etc. Because many factors affect each process, these selection decisions are complicated and even more challenging today (Rao 2007).
In most machining processes, the heat and friction generated can damage the cutting tool and the surface of the work piece. To overcome the effects of friction and generated heat, and to carry fine metal particles away from the cutting zone, lubricants or cutting fluids are normally used in industry. It is very important to provide a proper lubricant to reduce friction and to remove the generated heat as quickly as possible. It is also important to maintain proper cutting conditions while machining, because they affect the surface quality of the parts. Machining with high cutting velocity, high feed rate and high depth of cut also increases the amount of heat generated and the cutting temperature. This heat results in poor dimensional accuracy and affects the surface homogeneity by inducing residual stresses and sub-surface cracks.
The use of a lubricant or cutting fluid during machining serves many useful purposes, such as increasing tool life, improving surface finish, cooling the cutting tool at higher speeds, and easing chip handling and removal. On the other hand, the chemical ingredients of the cutting fluid create environmental problems and health problems for the operator (Byrne and Schlote 1993), and the cost of providing lubrication forms a major part of the total manufacturing cost (Klocke and Eisenblatter 1997). It is therefore necessary to find alternative solutions, such as lubrication using solid lubricants (Reddy and Rao 2006, Deshmukh and Basu 2006) or minimum quantity lubrication (Varadrajan et al. 2002).
In a manufacturing environment, decision makers need to select the most suitable alternative while assessing a wide range of candidate options against a set of conflicting attributes/criteria. To help and guide the decision makers, there is a need for simple, systematic and logical approaches or mathematical tools that can consider a large number of selection attributes and candidate alternatives. The objective of any selection procedure is to identify the appropriate selection attributes and to obtain the best decision in conjunction with the real-time requirements.
This paper presents one such simple, systematic and logical method, called PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations). Many applications of PROMETHEE in various fields of science and technology can be found in the literature (Behzadian et al. 2009). However, only a few applications are reported in the field of manufacturing, such as scheduling (Duvivier et al. 2007, Roux et al. 2008) and manufacturing system selection (Anand and Kodali 2008). Rao and Patel (2009) applied AHP and PROMETHEE for cutting fluid selection, manufacturing programme selection, end-of-life scenario selection and rapid prototyping process selection.
2. The PROMETHEE method
The PROMETHEE method was introduced by Brans et al. (1984) and belongs to the category of outranking methods. PROMETHEE performs a pairwise comparison of alternatives on each single criterion in order to determine partial binary relations denoting the strength of preference of an alternative a1 over an alternative a2. In the evaluation table, the alternatives are evaluated on the different criteria. The implementation of PROMETHEE requires two additional types of information, namely:
- information on the relative importance, i.e. the weights, of the criteria considered, and
- information on the decision maker's preference function, which he/she uses when comparing the contribution of the alternatives in terms of each separate criterion.
Step I: Identify the selection criteria for the considered decision making problem and short-list the
alternatives on the basis of the identified criteria satisfying the requirements.
Step II:
(1) After short-listing the alternatives, prepare a decision table including the measures or values
of all criteria for the short-listed alternatives.
(2) The weights of relative importance of the criteria may be assigned using the analytic hierarchy process (AHP) method (Rao 2007), or the decision maker can assign his/her own preference weights to the attributes.
Step III: After calculating the weights of the criteria using the AHP method, the next step is to obtain the information on the decision maker's preference function, which he/she uses when comparing the contribution of the alternatives in terms of each separate criterion. The preference function (Pi) translates the difference between the evaluations obtained by two alternatives (a1 and a2) in terms of a particular criterion into a preference degree ranging from 0 to 1. Let Pi,a1a2 be the preference function associated with the criterion ci:

Pi,a1a2 = Gi[ci(a1) - ci(a2)],  0 ≤ Pi,a1a2 ≤ 1    (1)
where Gi is a non-decreasing function of the observed deviation (d) between the two alternatives a1 and a2 on the criterion ci. In order to facilitate the selection of a specific preference function for a criterion, six basic types were proposed: the 'usual function', 'U-shape function', 'V-shape function', 'level function', 'linear function' and 'Gaussian function'.
With the preference ‘usual function’, the preference depends only on the simple difference between the values of the criterion ci for alternatives a1 and a2: any positive difference gives full preference to the better alternative.
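The six basic shapes can be sketched as a single helper function; this is an illustrative reconstruction, and the thresholds q, p and s are hypothetical parameters, not values from this paper:

```python
import math

def preference(d, kind="usual", q=0.0, p=1.0, s=0.5):
    """Preference degree G(d) for a deviation d = ci(a1) - ci(a2) on one criterion.
    q: indifference threshold, p: strict preference threshold, s: Gaussian width."""
    if d <= 0:
        return 0.0                      # a1 is not better: no preference
    if kind == "usual":
        return 1.0                      # any positive deviation gives full preference
    if kind == "u-shape":
        return 1.0 if d > q else 0.0
    if kind == "v-shape":
        return min(d / p, 1.0)
    if kind == "level":
        return 0.0 if d <= q else (0.5 if d <= p else 1.0)
    if kind == "linear":
        return min(max((d - q) / (p - q), 0.0), 1.0)
    if kind == "gaussian":
        return 1.0 - math.exp(-d * d / (2 * s * s))
    raise ValueError("unknown preference function: " + kind)
```

For example, `preference(0.5, "v-shape", p=1.0)` gives 0.5, while the 'usual' function gives 1.0 for the same deviation.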
Let the decision maker have specified a preference function Pi and a weight wi for each criterion ci (i = 1, 2, ..., M) of the problem. The multiple criteria preference index Πa1a2 is then defined as the weighted average of the preference functions Pi:

Πa1a2 = Σ(i=1 to M) wi Pi,a1a2    (2)
Πa1a2 represents the intensity of preference of the decision maker of alternative a1 over
alternative a2, when considering simultaneously all the criteria. Its value ranges from 0 to 1. This
preference index determines a valued outranking relation on the set of actions.
For PROMETHEE outranking relations, the leaving flow, entering flow and the net flow for an
alternative a belonging to a set of alternatives A are defined by the following equations:
φ+(a) = Σ(x∈A) Πax    (3)

φ-(a) = Σ(x∈A) Πxa    (4)

φ(a) = φ+(a) - φ-(a)    (5)
φ+(a) is called the leaving flow, φ-(a) is called the entering flow and φ(a) is called the net flow. φ+(a) is a measure of the outranking character of a (i.e. the dominance of alternative a over all the other alternatives) and φ-(a) gives the outranked character of a (i.e. the degree to which alternative a is dominated by all the other alternatives). The net flow, φ(a), represents a value function, whereby a higher value reflects a higher attractiveness of alternative a. The net flow values are used to indicate the outranking relationship between the alternatives (Rao and Patel 2009). As an example, the schematic calculation of the preference indices for a problem consisting of three alternatives and four criteria is given in Figure 1.
Figure 1. Preference indices for a problem consisting of three alternatives and four criteria. (Rao
and Patel 2009)
3. Details of the experimental work
In order to select the most suitable lubricant from among a number of lubricants available for machining, an En-31 alloy steel rod was turned on a lathe with a tungsten carbide insert tool. The details of the tool, cutting velocity, depth of cut and the various lubricants selected for the turning operation are given in Table 1 below. To measure the chip-tool interface temperature (Tc), tool wear rate (Tw), cutting force (Fc) and surface roughness (Ra), the experiments were carried out on the ferrous material at cutting speed V m/min, depth of cut d mm, feed f mm/rev and tool nose radius r mm. In this operation dry, wet and minimum quantity lubrication conditions (graphite or boric acid powder mixed with SAE-40 base oil by weight) were used.
Table 1. Experimental conditions

Machine tool: 10 HP lathe machine
Work specimen material: En-31 steel alloy
Process parameters: cutting speed V = 112 m/min; feed f = 0.10 mm/rev; depth of cut d = 0.4 mm; tool nose radius r = 0.8 mm
Lubricants:
(i) Dry (no lubricant)
(ii) Wet (soluble oil mixed with water in the ratio of 1:20)
(iii) Minimum quantity lubrication:
  (a) 10% graphite + SAE-40 base oil
  (b) 10% boric acid + SAE-40 base oil
  (c) 15% graphite + SAE-40 base oil
  (d) 15% boric acid + SAE-40 base oil
  (e) Pure SAE-40 base oil
In this experiment a commercial En-31 alloy steel work piece of length 500 mm and diameter 50 mm was machined on a heavy duty lathe. Applications of this material include roller bearings, ball bearings, mandrels, spindles, knurling tools, moulding dies, etc. The chemical composition of the material is shown below.
Table 2. Chemical composition of the work piece

Composition | C        | Si        | Mn        | Cr      | S     | P
Wt %        | 0.95-1.2 | 0.10-0.35 | 0.30-0.75 | 1.0-1.6 | 0.040 | 0.040
The cutting temperature was measured using a tool-work thermocouple. The cutting force was measured with a strain gauge type lathe dynamometer. A surface roughness measuring instrument was used to obtain the surface roughness, and the tool wear was measured with a single pan balance. To obtain an accurate value of tool wear, each experiment was repeated three times. The experimental results are shown in Table 3.
Table 3. Lubricants and parameter values

Sr No | Lubricant                        | Tc (°C) | Fc (N) | Tw (mg/min) | Ra (μm)
1     | Dry                              | 410     | 240    | 0.330       | 12.60
2     | Wet                              | 385     | 219    | 0.316       | 10.85
3     | 10% graphite + SAE-40 base oil   | 358     | 224    | 0.268       | 10.70
4     | 10% boric acid + SAE-40 base oil | 318     | 150    | 0.242       | 10.38
5     | 15% graphite + SAE-40 base oil   | 374     | 216    | 0.278       | 10.66
6     | 15% boric acid + SAE-40 base oil | 322     | 148    | 0.238       | 10.36
7     | Pure SAE-40 base oil             | 369     | 225    | 0.329       | 10.74
The table shows the average of three experiments for each parameter. The various steps of the PROMETHEE method can be applied as follows.

Step I
The objective is to select the right lubricant from among all the available lubricants and the measured parameter values. For all the considered parameters (chip-tool interface temperature, cutting force, tool wear and surface roughness) a lower value is desirable, since all are non-beneficial attributes: as the temperature increases, the surface roughness of the work piece tends to increase, and a higher cutting force results in higher power consumption of the machine.
Step II
The weights of the different parameters can be calculated using various methods, e.g. AHP. The weights considered here are W1 = 0.6938, W2 = 0.1392, W3 = 0.1225 and W4 = 0.0444.
Step III
Let the decision maker use the preference ‘usual function’ for all the criteria. If two alternatives have a difference d ≠ 0 in criterion ci, then a preference value of 1 is assigned to the ‘better’ alternative lubricant, whereas the ‘worse’ alternative lubricant receives a value of 0. All the attributes are non-beneficial criteria, so lower values are desired; the lubricant having the comparatively lower value of an attribute is said to be ‘better’ than the other.
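Under these assumptions (the ‘usual’ preference function, the weights above, and the Table 3 data), the PROMETHEE computation can be sketched in Python. This is an illustrative reconstruction, not the authors' code, and the computed flow values may differ slightly from the tabulated ones due to rounding:

```python
# PROMETHEE sketch for the lubricant data of Table 3.
# All four attributes are non-beneficial (lower is better).
W = [0.6938, 0.1392, 0.1225, 0.0444]   # weights for Tc, Fc, Tw, Ra
D = [  # (Tc, Fc, Tw, Ra)
    (410, 240, 0.330, 12.60),  # 1 Dry
    (385, 219, 0.316, 10.85),  # 2 Wet
    (358, 224, 0.268, 10.70),  # 3 10% graphite + SAE-40
    (318, 150, 0.242, 10.38),  # 4 10% boric acid + SAE-40
    (374, 216, 0.278, 10.66),  # 5 15% graphite + SAE-40
    (322, 148, 0.238, 10.36),  # 6 15% boric acid + SAE-40
    (369, 225, 0.329, 10.74),  # 7 pure SAE-40
]
n = len(D)

def pref(a, b):
    """Preference index Pi_ab with the 'usual' function: weight wi is added
    for every criterion on which alternative a is strictly better (lower)."""
    return sum(w for w, va, vb in zip(W, D[a], D[b]) if va < vb)

phi_plus  = [sum(pref(a, x) for x in range(n) if x != a) for a in range(n)]  # leaving flow
phi_minus = [sum(pref(x, a) for x in range(n) if x != a) for a in range(n)]  # entering flow
net = [p - m for p, m in zip(phi_plus, phi_minus)]                           # net flow
ranking = sorted(range(n), key=lambda a: -net[a])                            # best first
```

With these inputs the highest net flow belongs to alternative 4 (10% boric acid + SAE-40 base oil) and the lowest to dry machining, in agreement with the ranking reported below.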
Table 4. Resulting preference indices as well as leaving, entering and net flow values

Π(a1,a2) | 1      | 2      | 3      | 4      | 5      | 6      | 7      | φ+(a)
1        | -      | 0      | 0      | 0      | 0      | 0      | 0      | 0
2        | 0.9999 | -      | 0.1392 | 0      | 0      | 0      | 0.1225 | 1.2616
3        | 0.9999 | 0.8607 | -      | 0      | 0.8163 | 0      | 0.9999 | 3.6768
4        | 0.9999 | 0.9999 | 0.9999 | -      | 0.9999 | 0.6938 | 0.9999 | 5.6933
5        | 0.9999 | 0.9999 | 0.1836 | 0      | -      | 0      | 0.3061 | 2.4895
6        | 0.9999 | 0.9999 | 0.9999 | 0.3061 | 0.9999 | -      | 0.9999 | 5.3056
7        | 0.9999 | 0.7382 | 0      | 0      | 0.6938 | 0      | -      | 2.4319
φ-(a)    | 5.9994 | 4.5986 | 2.3226 | 0.3061 | 3.5099 | 0.6938 | 3.4283 |

Alternative | 1       | 2       | 3      | 4      | 5       | 6      | 7
Net flow φ  | -5.9994 | -3.3370 | 1.3482 | 5.3812 | -1.0204 | 4.6118 | -0.9961
Rank        | 7       | 6       | 3      | 1      | 5       | 2      | 4
Results
From the results it is clear that 10% boric acid mixed with SAE-40 base oil by weight is the best lubricant for the turning of En-31 alloy steel for the parameters measured. The above ranking can change if the user assigns different importance values to the attributes considered. Here the 10% and 15% boric acid mixtures with SAE-40 base oil have close ranking values. Turning with 10% boric acid offers the minimum chip-tool interface temperature and cutting force.
4. Conclusion
A methodology based on the PROMETHEE method is suggested for the selection of the best lubricant for turning of En-31 alloy steel, considering the chip-tool interface temperature, cutting force, tool wear and surface roughness. It is found that 10% boric acid mixed with SAE-40 base oil by weight is the best lubricant as compared to the other lubricant combinations considered.
The method is a general decision making method that can consider any number of quantitative selection attributes simultaneously and offers a more objective and simple selection approach. This technique can be used for any type of selection problem involving any number of selection attributes.
References

Anand, G. and Kodali, R. 2008. Selection of lean manufacturing systems using the PROMETHEE. Journal of Modelling in Management, 3 (1), 40-70.
Behzadian, M. et al. 2009. PROMETHEE: A comprehensive literature review on methodologies and applications. European Journal of Operational Research, doi: 10.1016/j.ejor.2009.01.021.
Brans, J.P., Mareschal, B. and Vincke, P. 1984. PROMETHEE: a new family of outranking methods in multicriteria analysis. Proceedings of Operational Research '84, North Holland, Amsterdam, 477-490.
Byrne, G. and Schlote, E. 1993. Environmentally clean machining processes: a strategic approach. Annals of the CIRP, 42, 471-474.
Deshmukh, S.D. and Basu, S.K. 2006. Significance of solid lubricants in metal cutting. Proceedings of the All India Manufacturing Technology, Design and Research Conference, Chennai, 156-162.
Duvivier, D. et al. 2007. Multicriteria optimisation and simulation: an industrial application. Annals of Operations Research, 156 (1), 45-60.
Klocke, F. and Eisenblatter, G. 1997. Dry machining. Annals of the CIRP, 46, 520-526.
Rao, R.V. 2007. Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods. London: Springer-Verlag.
Rao, R.V. and Patel, B.K. 2009. Decision making in the manufacturing environment using an improved PROMETHEE method. International Journal of Production Research, 1-18, iFirst.
Reddy, N.S.K. and Rao, P.V. 2006. Performance improvement of end milling using graphite as a solid lubricant. Materials and Manufacturing Processes, 20, 1-14.
Roux, O. et al. 2008. Multicriteria approach to rank scheduling strategies. International Journal of Production Economics, 112 (1), 192-201.
Varadrajan, A.S., Philip, P.K. and Ramamoorthy, B. 2002. Formulation of a cutting fluid for hard turning with minimal cutting fluid. Proceedings of the 10th AIMTDR Conference, 89-96.
Simulated Annealing based Optimization of Inventory Costing
Problem
Mukul Shukla1,2*, Alok Kumar Mishra1, Ravi Kumar Gupta1
1
M. N. National Institute of Technology, Allahabad- 211004, UP, India
University of Johannesburg, Johannesburg, Republic of South Africa
2
*Corresponding author (e-mail: mshukla@uj.ac.za; mukulshukla@mnnit.ac.in)
This paper reports the application of simulated annealing (SA), a probabilistic, multivariable, non-linear optimization method, in MATLAB to an inventory costing problem with shortage (or back order). The primary focus is on highlighting the importance of the different input variables which control the output of the SA optimization. A critical and comparative overview of SA with respect to other optimization techniques is also presented.
1. Introduction
Simulated annealing (SA) is a nontraditional optimization method which resembles the cooling of molten metal through annealing. Annealing is a process of slow cooling. At high temperatures the atoms in molten metal can move freely, but as the temperature is reduced the atoms arrange themselves into a crystalline structure having the minimum possible energy configuration. The formation of crystals, however, depends on the cooling rate: at a very fast cooling rate crystals may not form at all and the material remains amorphous. The SA algorithm simulates this slow cooling of molten metal to achieve the minimum function value in a minimization problem. The cooling phenomenon is simulated by controlling a temperature-like parameter introduced via the concept of the Boltzmann probability distribution (Laarhoven 1987). To simulate a quasi-thermal equilibrium state at the current temperature, a large number of points are usually tested.
2. Simulated annealing algorithm
1: Choose an initial point x(0), a sufficiently high value of the temperature T, the number of iterations n to be performed at a particular temperature, and the termination criterion ε.
2: Create a random neighbouring point x(t+1). The Metropolis algorithm is applied to accept or reject this point.
3: Calculate ΔE = E(x(t+1)) - E(x(t)).
   If ΔE ≤ 0, set t = t+1;
   else create a random number r in the range (0, 1):
      if r ≤ exp(-ΔE/T), set t = t+1;
      else go to step 2.
4: If |x(t+1) - x(t)| < ε and T is small, terminate;
   else if (t mod n) = 0, lower T according to the cooling schedule and go to step 2;
   else go to step 2.
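The loop above can be sketched in Python as follows; the neighbourhood step size, geometric cooling factor and iteration counts are illustrative choices, not values from the paper:

```python
import math
import random

def simulated_annealing(f, x0, T0=100.0, cooling=0.95, n_per_T=50,
                        T_min=1e-6, step=1.0, seed=0):
    """Minimize a 1-D function f by SA with Metropolis acceptance (sketch)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    while T > T_min:
        for _ in range(n_per_T):              # n iterations at the current T
            x_new = x + rng.uniform(-step, step)  # random neighbouring point
            f_new = f(x_new)
            dE = f_new - fx
            # accept downhill moves always, uphill moves with prob exp(-dE/T)
            if dE <= 0 or rng.random() <= math.exp(-dE / T):
                x, fx = x_new, f_new
                if fx < fbest:
                    best, fbest = x, fx
        T *= cooling                          # geometric cooling schedule
    return best, fbest
```

For example, minimizing f(x) = (x - 3)² from a start point far from the optimum still recovers x ≈ 3.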
This algorithm is based on the Boltzmann probability distribution, according to which the probability of an energy state E is

P(E) = exp(-E/kT)    (1)

where k is the Boltzmann constant (Deb 2010). The probability of the next point being at x(t+1) depends on the difference in the function values at the two points and is calculated using the Boltzmann probability distribution: P(E(t+1)) = min[1, exp(-ΔE/kT)]. In one such study on SA-based multiobjective optimization, to determine the
acceptance probability of a new solution vis-a-vis the current solution as well as those in the
archive, the domination status of the new solution with the old solutions was found and taken
into account by Bandyopadhyay et al. (2008).
3. SA optimization MATLAB toolbox
MATLAB provides various ready-made toolboxes, including the Optimization Toolbox, which is based on various optimization methods. The SA solver is opened within a window named 'Optimization Tool'. In it, select the solver "simulannealbnd - Simulated annealing algorithm". This window is divided into two parts, left and right (Fig. 1). On the left side there are options for inputting the problem's objective function, the start point and the constraints in the form of bounds, and for starting the solution; space for the output of the solution, in the form of the optimum function value and the final points, is also available. On the right side various options such as stopping criteria, annealing parameters, acceptance criteria, problem type and plot functions are present. Some of these options are described briefly below (Matlab 2009):
Stopping criteria
(1) Maximum iterations - the maximum number of iterations can be set here, or left at its default (infinite).
(2) Maximum function evaluations - the maximum number of function evaluations can be set here, or left at its default (3000*numberOfVariables).
(3) Time limit - a time limit can be set for the algorithm run.
(4) Function tolerance - the algorithm stops if the change in function value over a specified number of iterations falls below this tolerance.
(5) Objective limit - a target value of the objective function can be set here; the algorithm stops once the objective falls below it.
(6) Stall iterations - the number of iterations after which the algorithm terminates if the change in function value remains below the function tolerance.
Annealing parameters
(1) Annealing function - the annealing function can be chosen as fast annealing, Boltzmann annealing or a self-specified one. Fast annealing takes a random step with length proportional to the temperature T, whereas Boltzmann annealing's random step length is proportional to √T (and hence it is comparatively slower).
(2) Reannealing interval - the number of points accepted before the temperature is reset.
(3) Temperature update function - the temperature variation trend can be specified here, whether exponential, logarithmic or linear.
(4) Initial temperature - a high value is usually input for a wider search and a better result, as any point (even one with lower probability) can be chosen in the intermediate steps; however, the number of iterations needed for convergence then increases. A rough estimate of T can be obtained by taking the mean of the function values at a number of random points within the search space.
Plot functions
These show the variation of the best function value, the current function value, etc. with iterations, as well as the best point, the current temperature plot, etc. (Fig. 3).

The initial temperature and the number of iterations performed at a particular T (the cooling schedule) are the two most critical parameters for the successful working of the SA method. Besides these, various other toolbox options can affect the output of the SA algorithm, and some trial-and-error runs are needed to finalize them. For details, interested readers may refer to Deb (2010) and Laarhoven (1987).
4. Case study
In inventory control the objective is often to minimize the total inventory cost by placing orders at a particular time (or inventory level) such that a small shortage in inventory is allowed but excess inventory is not.
Various deterministic and stochastic inventory models are available in the literature (Porteus 2002). One of the most often used standard deterministic inventory cost models (objective functions) with shortage (or back order) is given by Eq. (2):

f = DCo/Q + Ch(Q - S)²/(2Q) + Cb S²/(2Q)    (2)
where f = total inventory cost; D = annual demand; Co = ordering cost; Ch = holding cost/unit/year; Cb = back order (shortage) cost; S = number of units back ordered; and Q = inventory order quantity. Here S and Q are the two decision variables, while the remaining inputs are fixed.
Input data: D = 6000 units/yr; C = 40; Co = 500/order; Ch = 8/unit/yr; Cb = 20/unit/yr.
For the above inventory cost problem the exact solution for minimum cost is Q = 1024.69 and S = 292.77. The same problem has also been solved in this paper using the SA optimization solver in MATLAB, as presented below. A similar SA-based order quantity optimization problem with limited budget and free returns was presented by Zhang (2012), and Tang (2004) applied SA to lot sizing problems.
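As a sanity check, the quoted exact solution follows from the standard closed-form optimum of the backorder EOQ model, Q* = sqrt((2DCo/Ch)·((Ch + Cb)/Cb)) and S* = Q*·Ch/(Ch + Cb); a few lines of Python reproduce it:

```python
import math

# Input data from the case study
D, Co, Ch, Cb = 6000.0, 500.0, 8.0, 20.0

# Closed-form optimum of the EOQ model with planned backorders (standard result)
Q = math.sqrt(2.0 * D * Co / Ch * (Ch + Cb) / Cb)  # optimal order quantity
S = Q * Ch / (Ch + Cb)                             # optimal backorder level

# Total inventory cost, Eq. (2)
f = D * Co / Q + Ch * (Q - S) ** 2 / (2.0 * Q) + Cb * S ** 2 / (2.0 * Q)
```

This gives Q ≈ 1024.69, S ≈ 292.77 and a minimum cost f ≈ 5855.4, the benchmark against which the SA solutions of Section 4.1 can be compared.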
4.1 Algorithm implementation and results
MATLAB 2009 has an Optimization Toolbox with an SA solver for both single and multivariable minimization problems (Matlab 2009). The toolbox is opened first, the objective function (already defined in a .m file) is called, and the bounds of the two variables are specified. After the solution, the optimized function value and the optimal parameter values are obtained. The various toolbox options affect the result according to the problem at hand. For the present problem of minimizing the total inventory cost, the different cases of SA functions and the corresponding solutions obtained using the SA solver are presented next:
(1) Fast annealing function and exponential temperature update function
SA solution obtained: Q* = 1024.644; S* = 291.92; fmin = 5855.4099 (Fig. 1).

Figure 1. SA problem setup and results for the fast annealing function and exponential temperature update function

(2) Boltzmann annealing function and linear temperature update function
SA solution obtained: Q* = 1023.522; S* = 293.838; fmin = 5855.4312 (Fig. 2).

As seen from the solutions obtained for the two cases, they match closely with each other and with the standard solution. The plots of the variation of the best function value and the current function value over iterations, and of the best point and the current temperature (for the Boltzmann annealing function and linear temperature update function), are shown in Fig. 3.
Figure 2. SA problem setup and results for the Boltzmann annealing function and linear temperature update function

Figure 3. Plots of various outputs obtained after the SA solver run
5. Advantages and disadvantages of SA
Advantages:
(1) Its convergence to the optimum is good even if the initial guess is far away from the optimum.
(2) It can deal with arbitrary systems and cost functions, highly non-linear and complex models, and chaotic and noisy data.
(3) It statistically guarantees finding an optimal solution.
Disadvantages:
(1) It is a comparatively slow process for high quality results, although this weakness can be overcome by proper hybridization with other methods.
(2) If the parameter T is assigned a low value, the result is poor and far away from the optimum.
(3) For a constrained optimization problem a penalty has to be introduced into the objective function, which sometimes affects the result.
6. Conclusion
Simulated annealing is a nontraditional optimization technique based on a random search process; it gives a good solution even if the start point is far away from the optimum. It can be used to solve real-life multi-variable problems, including stochastic inventory control (work in progress), operations research (Koulamas et al. 1994) and distribution network design and management (Jayaraman and Ross 2003). However, a careful choice of the critical parameters is absolutely key to the successful working of the simulated annealing method.
References

Bandyopadhyay, S., Saha, S., Maulik, U. and Deb, K. A simulated annealing-based multiobjective optimization algorithm: AMOSA, IEEE Transactions on Evolutionary Computation, 2008, 12(3), 269-283.
Deb, K. Optimization for Engineering Design: Algorithms and Examples, 2nd ed., Prentice Hall India Pvt. Ltd., 2010.
Jayaraman, V. and Ross, A. A simulated annealing methodology to distribution network design and management, European Journal of Operational Research, 2003, 144(3), 629-645.
Koulamas, C., Antony, S.R. and Jaen, R. A survey of simulated annealing applications to operations research problems, Omega, 1994, 22(1), 41-56.
Laarhoven, P.J.M. and Aarts, E. Simulated Annealing: Theory and Applications, Reidel, Dordrecht, 1987.
Matlab 2009: Optimization Toolbox - Product Documentation.
Porteus, E. Foundations of Stochastic Inventory Theory, Stanford University Press, Stanford, CA, 2002.
Tang, O. Simulated annealing in lot sizing problems, International Journal of Production Economics, 2004, 88(2), 173-181.
Zhang, X. Order quantity optimization problem with limited budget and free return, Proceedings of the 2012 International Conference on System Modeling and Optimization (ICSMO 2012), 2012.
Modified Differential Evolution for Optimization using
Alien Population Member
P. Kapoor, P. Goulla, N. Padhiyar*
Indian Institute of Technology, Gandhinagar – 382424, Gujarat, India
*Corresponding author (e-mail: nitin@iitgn.ac.in)
Differential evolution (DE) is a stochastic, multi-start, direct search optimization technique, usually applied to obtain a potentially global minimum solution. In this work we have tested a modified DE algorithm on two test problems, namely the constrained Himmelblau and Ackley's Path functions. The modified DE involves the addition of an alien member to the population, which increases the diversity of the population and thereby enhances the probability of convergence to the global minimum while maintaining a high convergence rate. The proposed approach, with the alien member added from the pth generation onwards, is compared against the case without alien member addition. The optimum value of p for both test applications is computed by a sensitivity analysis. Further, the optimization problems for both the conventional and the alien-based DE are run 20 times to average out the trend of the optimization results.
1. Introduction
In the process industry, operating near the physical limits of the operating variables usually leads to maximum utilization of the resources. Optimization is the tool to ensure the maximum possible utilization of the resources for achieving a specific objective. The objective can be maximizing the production rate, quality or safety, or satisfying the environmental constraints. Most Chemical Engineering applications form nonlinear and non-convex optimization problems. Hence there can be more than one local minimum, and obtaining the global minimum can be of high importance. Deterministic global optimization techniques such as interval analysis (Hansen, 1979) are usually computationally very expensive. On the other hand, stochastic optimization techniques such as genetic algorithms (Holland, 1975; Goldberg, 1989), ant colony optimization (Dorigo and Di Caro, 1999), particle swarm optimization (Kennedy and Eberhart, 1995) and differential evolution (Storn and Price, 1997) provide the global minimum with high probability, but with no guarantee. Among these, differential evolution (DE) is one of the most promising: a multi-start, stochastic, direct search optimization method.
DE has been successfully applied to several Chemical Engineering applications (Babu and Angira, 2002; Thomsen, 2003). Vesterstrom and Thomsen (2004) have shown DE to be one of the most efficient and robust optimization techniques. There have been several attempts to decrease the computation time of DE (Babu and Angira, 2006; Angira and Santosh, 2008; Ali and Bagdadi, 2009), to name a few. Recently, Patel and Padhiyar (2010) proposed a modification of the genetic algorithm by introducing an alien member into the population at every generation. This approach has been shown to obtain the global minimum for benchmark test applications with a higher convergence rate. A similar approach is studied for differential evolution in this work. The proposed algorithm is tested with two benchmark test functions, namely the Himmelblau function and Ackley's Path function (Ackley, 1987) with bound constraints, and with a desired-product maximization in a series of batch reactors at different temperatures (Luss, 1994). In this paper the DE algorithm is first described in Section 2, followed by the proposed modification of DE in Section 3. Next, the various test problems are described in Section 4. Finally, the results are discussed in Section 5, followed by the conclusions in Section 6.
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
2. Differential evolution algorithm
The Differential Evolution (DE) algorithm starts with a randomly generated initial population
of NP potential solutions X_i^G, i = 1, 2, ..., NP. Here G denotes the generation number to which
the member belongs. Fig. 1 shows a schematic diagram of the conventional differential evolution
algorithm. There are three main operations involved in DE: mutation, crossover and selection.
Mutation
Mutation is responsible for perturbations in the population of potential solutions for
increasing the diversity in the population. The process involves random selection of two
population members, say X_r1 and X_r2, from the current population. The difference vector of
X_r1 and X_r2 is weighted and added to the best member as follows:

U_i^{G+1} = X_best^G + F * (X_r1^G - X_r2^G),   i = 1 to NP        (1)

X_best^G is the fittest member of generation G, NP is the population size, and U denotes the
mutated member. F is the mutation factor, with F in [0, 1].
Crossover
After mutation, the next step is the crossover operation. To add more diversity in the
population, crossover is performed for every variable of every population member. Depending
upon the crossover probability CR, a variable is either retained from the parent generation or
replaced by the corresponding variable of the mutated member:

If r < CR:  V_{i,j}^{G+1} = U_{i,j}^{G+1};  else  V_{i,j}^{G+1} = X_{i,j}^G        (2)

Here j = 1 to n; i = 1 to NP; n is the number of variables; r is a random number in [0, 1]. V is
called the child population and X the parent population. To save computational effort, the
mutation operation is performed only when the generated random number r < CR (Mezura-Montez
and Lopez, 2009).
Selection
Here the objective function value for every member of the parent population is compared
with its corresponding member of the child population and the better of the two survives in the
next generation.
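The three operations above can be sketched in a few lines (an illustrative NumPy sketch, not the authors' code; the function name `de_generation` and the bound-clipping step are our additions):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_generation(pop, f, F=0.5, CR=0.6, bounds=(-6.0, 6.0)):
    """One generation of the DE variant described above: best-member
    mutation (Eq. 1), binomial crossover (Eq. 2), greedy selection.
    `pop` is an (NP, n) array of candidate solutions."""
    NP, n = pop.shape
    fitness = np.array([f(x) for x in pop])
    x_best = pop[np.argmin(fitness)]          # fittest member of generation G
    new_pop = pop.copy()
    for i in range(NP):
        # Mutation: perturb the best member with a weighted difference vector
        r1, r2 = rng.choice(NP, size=2, replace=False)
        mutant = np.clip(x_best + F * (pop[r1] - pop[r2]), *bounds)
        # Crossover: each variable taken from the mutant with probability CR
        child = np.where(rng.random(n) < CR, mutant, pop[i])
        # Selection: the better of parent and child survives
        if f(child) < fitness[i]:
            new_pop[i] = child
    return new_pop
```

With the greedy selection step, the best objective value in the population can never worsen from one generation to the next.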
3. Proposed DE modification
Conventionally DE provides high convergence rate during the initial few generations. As
the generations proceed, the search space narrows down and hence diversity of the population
deteriorates. Mutation and crossover are performed to increase the diversity in the population,
but the diversity they provide may not always be sufficient, and a larger diversity may be
desirable. While a very large diversity can deteriorate the convergence rate, an appropriate level
of diversity improves exploration while maintaining a good convergence rate. Considering this,
we propose to add an alien member by replacing the worst member. The addition of the alien
member increases the diversity in the population. Note that crossover and mutation increase
diversity, but their perturbations stay within the population; the alien member, on the other hand,
is added from the original search space, increasing the diversity in the population. The proposed
alien member is generated as follows:
X_a = X_b + r * (X_w - X_b)        (3)
where X_b is the fittest member of the population, X_w is the most inferior member, and r is a
random number between 0 and 1. This alien ensures diversity within bounds. During the initial
few generations there is generally more diversity and a high convergence rate, so there may be
no need to add an alien member. As the generations progress, however, population diversity and
convergence rate generally decrease. To verify this hypothesis, we perform a sensitivity analysis
of the generation at which alien inclusion starts on the DE solution. Fig. 1 shows a schematic
diagram of the proposed modified differential evolution algorithm.
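Eq. (3) can be sketched as follows (an illustrative sketch; `add_alien` is a hypothetical helper name, and replacing the worst member in place is our reading of the scheme). Since r is in [0, 1], the alien always lies on the segment joining the best and worst members, which keeps it within the variable bounds:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_alien(pop, f):
    """Replace the worst member with an alien X_a = X_b + r*(X_w - X_b),
    where X_b is the best and X_w the worst member (Eq. 3)."""
    fitness = np.array([f(x) for x in pop])
    b, w = np.argmin(fitness), np.argmax(fitness)
    r = rng.random()                          # scalar r in [0, 1]
    pop[w] = pop[b] + r * (pop[w] - pop[b])   # alien between X_b and X_w
    return pop
```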
4. Test problems
The following three test functions were used for testing the proposed modification. While the first
two are simple mathematical functions with known solutions, the third is an optimal control
problem of grade transition in a polymerization reactor.
Figure 1: Flowchart of the proposed Modified DE with inclusion of an alien member
4.1 Himmelblau function
The Himmelblau function is a quadratic multimodal function defined as:

min_{x1,x2} f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2        (4)

subject to: -6 <= x1, x2 <= 6

It has four global minima (and no other local minima), with f(x1, x2) = 0, at the points (x1, x2) =
(3, 2), (-2.805118, 3.131312), (-3.77931, -3.28318) and (3.58442, -1.84812), as shown in Fig. 2.
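The function and its quoted minima can be checked numerically (a minimal sketch; the minimum is exact only at (3, 2), the other coordinates being rounded values):

```python
def himmelblau(x1, x2):
    """Himmelblau function, Eq. (4): sum of two squared quadratics."""
    return (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2
```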
4.2 Ackley’s Path Function
Ackley's Path function (Ackley, 1987) is a widely used multimodal function defined as:

min_{x1,x2} f(x1, x2) = -20 * exp(-0.2 * sqrt((x1^2 + x2^2)/2)) - exp((cos(2*pi*x1) + cos(2*pi*x2))/2) + 20 + e        (5)

subject to: -2 <= x1, x2 <= 2
It has a global minimum f(x1, x2) = 0 at (x1, x2) = (0, 0) and several local minima, as shown in
Fig. 3. Because of these local minima, the optimization often converges to a local minimum.
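A direct transcription of Eq. (5) makes the global minimum easy to verify (a minimal sketch using only the standard library):

```python
import math

def ackley(x1, x2):
    """Ackley's Path function in two dimensions, Eq. (5)."""
    term1 = -20.0 * math.exp(-0.2 * math.sqrt((x1**2 + x2**2) / 2.0))
    term2 = -math.exp((math.cos(2 * math.pi * x1)
                       + math.cos(2 * math.pi * x2)) / 2.0)
    # The constants 20 and e cancel the two exponentials at the origin
    return term1 + term2 + 20.0 + math.e
```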
5. Results and discussion
The proposed DE algorithm is tested on the above three test applications in this work.
Since there may not be significant convergence during the initial few generations, alien inclusion
may not be needed at first. To find the optimum generation for starting the addition of the alien
member, we study the effect of alien addition at various stages for all the applications, namely
from (1) the 1st generation onwards, (2) the 3rd generation onwards, (3) the 5th generation
onwards, and (4) the 20th generation onwards.
Twenty runs are performed with different initial populations for every case. The same set of
20 initial populations is used for all five cases, including the case of no alien addition. The DE
parameters used for all the case studies are: mutation factor F = 0.5, crossover probability
CR = 0.6 and population size NP = 10.
5.1 Himmelblau function
Fig. 2 shows the DE results obtained for the Himmelblau function. The first plot shows the
convergence of the best of the 20 runs. As can be seen, the convergence that is achieved after
33 generations for conventional DE is achieved in 23 generations when alien inclusion starts
after 3 generations. The case where alien inclusion starts after 20 generations coincides with the
conventional DE results up to 20 generations, and the convergence is superior afterwards. The
second plot shows the average and standard deviation of the 20 different runs; this plot is
significant because of the stochastic nature of the DE approach: the best run might have
succeeded by chance, while the average of 20 runs shows the representative trend with high
probability. As can be observed from the third plot, the trend with alien inclusion starting from the
first generation is inferior to the others. It was observed (results not shown) that one of the runs
converged very slowly, and hence the average of the 20 runs was very high; the average of the
remaining nineteen runs is better than the case with alien inclusion after the 5th generation at the
end of the 40th generation.
The effect of the generation at which the alien member is included can also be evaluated using
the third plot. Here, the average of the 20 runs at the end of 40 generations is plotted against the
generation at which alien inclusion starts. As can be observed, the optimum starting generation
for this function is 5; starting earlier or later results in inferior convergence.
Figure 2: Results for the constrained Himmelblau function (panels: best solution of 20 runs;
average solution of 20 runs; average at the end of the 40th generation)
5.2 Ackley's Path function
Fig. 3 shows similar results for Ackley's Path function. As can be seen from the first plot,
alien inclusion increases the convergence rate compared to no alien inclusion in the population.
As can be seen in the second plot, the convergence rate for alien inclusion from the 5th
generation is inferior to the case without alien inclusion. Further, except for the case with
immediate inclusion of the alien (dash-dotted), all other cases have resulted in slower
convergence, and in fact in convergence to another local minimum. This can also be observed in
the third plot, where the optimum starting generation for alien inclusion is found to be the 1st
generation.
6. Conclusions
A modified DE algorithm with inclusion of an alien member in the population is tested for its
efficacy in this work. Two test problems, namely the Himmelblau function and Ackley's Path
function, and a temperature-profile optimization in a series of batch reactors are selected, and a
comparison is carried out against DE without alien addition. The proposed DE approach
introduces one user-defined parameter, namely the generation at which alien inclusion starts. For
the Ackley function, the best starting generation based on the average of 20 runs was found to
be the first generation; for the Himmelblau function it is the fifth generation. It should be noted
that this choice is application dependent, and the sensitivity analysis discussed in this work can
be useful for other applications.
Figure 3: Results for the constrained Ackley's Path function (panels: best solution of 20 runs;
average solution of 20 runs; average at the end of the 40th generation)
References
Ackley, D.H. A connectionist machine for genetic hillclimbing. Kluwer Academic Publishers,
    Boston, 1987.
Ali, M.M. and Z.K. Bagdadi. A local exploration-based differential evolution algorithm for
    constrained global optimization. Applied Mathematics and Computation, 2009, Vol. 208,
    31-48.
Angira, R. and A. Santosh. A modified trigonometric differential evolution algorithm for
    optimization of dynamic systems. IEEE Congress on Evolutionary Computation (CEC 2008,
    IEEE World Congress on Computational Intelligence), 2008, 1463-1468.
Babu, B.V. and R. Angira. A differential evolution approach for global optimization of MINLP
    problems. Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and
    Learning (SEAL'02), 2002, Vol. 2, 866-870.
Babu, B.V. and R. Angira. Modified differential evolution (MDE) for optimization of non-linear
    chemical processes. Computers & Chemical Engineering, 2006, Vol. 30, 989-1002.
Dorigo, M. and G. Di Caro. Ant colony optimization: a new meta-heuristic. Proceedings of the
    1999 Congress on Evolutionary Computation (CEC 99), Vol. 2, 1470-1477.
Goldberg, D.E. Genetic algorithms in search, optimization, and machine learning. Addison-Wesley
    Publishing Co. Inc., Reading, Massachusetts, 1989.
Hansen, R. Global optimization using interval analysis: the one-dimensional case. Journal of
    Optimization Theory and Applications, 1979, Vol. 29, 331-344.
Holland, J.H. Adaptation in natural and artificial systems. The University of Michigan Press, Ann
    Arbor, Michigan, 1975.
Kennedy, J. and R. Eberhart. Particle swarm optimization. Proceedings of the IEEE International
    Conference on Neural Networks, 1995, Vol. 4, 1942-1948.
Mezura-Montez, E. and C.A. Monterrosa-Lopez. Global and local selection in differential
    evolution for constrained numerical optimization. Journal of Computer Science & Technology,
    2009, Vol. 9, 43-52.
Padhiyar, N., S. Bhartiya and R.D. Gudi. Optimal grade transition in polymerization reactors: a
    comparative case study. Ind. Eng. Chem. Res., 2006, Vol. 45, 3583-3592.
Patel, N. and N. Padhiyar. Alien genetic algorithm for exploration of search space. AIP
    Conference Proceedings, 2010, Vol. 1298, 325-330.
Storn, R. and K. Price. Differential evolution - a simple and efficient heuristic for global
    optimization over continuous spaces. Journal of Global Optimization, 1997, Vol. 11, Issue 4,
    341-359.
Thomsen, R. Flexible ligand docking using differential evolution. Proceedings of the 2003
    Congress on Evolutionary Computation (CEC '03), Vol. 4, 2354-2361.
Vesterstrom, J. and R. Thomsen. A comparative study of differential evolution, particle swarm
    optimization, and evolutionary algorithms on numerical benchmark problems. Proceedings of
    the 2004 Congress on Evolutionary Computation (CEC 2004), Vol. 2, 1980-1987.
Hybrid Differential Evolution for Optimization: Using Modified
Newton’s Method
P. Goulla, P. Kapoor, N. Padhiyar*
Indian Institute of Technology, Gandhinagar – 382424, Gujarat, India
*Corresponding author (e-mail: nitin@iitgn.ac.in)
Differential Evolution (DE) is a stochastic, multi-start, direct search optimization technique,
usually applied to obtain a potentially global minimum. In this work we test a modified DE
algorithm on two test problems, namely the constrained Himmelblau and Ackley's Path
functions. In the proposed DE algorithm, the inferior population member is replaced by a
member obtained by a modified Newton's method, without extra computational burden. The
proposed approach, in which the inferior population member is replaced from the pth
generation onwards, is compared with the case without replacement. The optimum value of
p for both test applications is determined by trial and error. Further, the optimization
problems for both the conventional and the modified Newton's method based DE are run
20 times to average out the trend of the optimization results.
1. Introduction
Usually in the process industry, operating near the physical limits leads to maximum utilization
of the resources. Optimization is the tool to ensure the maximum possible utilization of the
resources for achieving a given objective. The objective can be maximizing production, quality or
safety, or satisfying environmental constraints. Most Chemical Engineering applications form
nonlinear and non-convex optimization problems. Hence, there can be more than one local
minimum, and obtaining the global minimum can be of high importance. Usually, global
optimization techniques such as the interval method (Hansen, 1979) are computationally very
expensive. On the other hand, stochastic techniques such as genetic algorithms (Holland, 1975;
Goldberg, 1989), ant colony optimization (Dorigo and Di Caro, 1999), particle swarm optimization
(Kennedy and Eberhart, 1995) and differential evolution (Storn and Price, 1997) provide the
global minimum with high probability, but with no guarantee. Among these techniques, DE is one
of the most promising: a multi-start, stochastic, direct search optimization method.
DE has been successfully applied to several real-world applications (Babu and Angira,
2002; Thomsen, 2003). Vesterstrom and Thomsen (2004) have shown DE to be one of the most
efficient and robust optimization techniques. There have been several attempts to decrease the
computation time of DE (Babu and Angira, 2006; Angira and Santosh, 2008; Ali and Bagdadi,
2009), to name a few. Babu and Angira (2004) proposed a quasi-Newton based hybrid DE,
where the fittest member of the population is modified with a quasi-Newton step. In this work we
propose a modification of the DE algorithm in which the inferior population member is replaced
using a modified Newton's method; notably, the method does not require carrying out a line
search while performing the modified Newton step. The proposed algorithm is tested on two
benchmark test functions, namely the Himmelblau function and Ackley's Path function (Ackley,
1987), with bound constraints. The remainder of the paper is structured as follows: Section 2
presents the basic differential evolution algorithm, Section 3 describes the proposed modification,
Sections 4 and 5 present the test problems and the results, respectively, and Section 6
concludes the paper.
2. Differential evolution algorithm
The Differential Evolution (DE) algorithm starts with a randomly generated initial population of NP
potential solutions X_i^G, i = 1, 2, ..., NP. Here G denotes the generation number to which the
member belongs. Fig. 1 shows a schematic diagram of the conventional differential evolution
algorithm. There are three main operations involved in DE: mutation, crossover and selection.
Mutation
Mutation is responsible for perturbations in the population of potential solutions for
increasing the diversity in the population. The process involves random selection of two
population members, say X_r1 and X_r2, from the current population. The difference vector of
X_r1 and X_r2 is weighted and added to the best member as follows:

U_i^{G+1} = X_best^G + F * (X_r1^G - X_r2^G),   i = 1 to NP        (1)

X_best^G is the fittest member of generation G, NP is the population size, and U denotes the
mutated member. F is the mutation factor, with F in [0, 1].
Crossover
After mutation, the next step is the crossover operation. To add more diversity in the
population, crossover is performed for every variable of every population member. Depending
upon the crossover probability CR, a variable is either retained from the parent generation or
replaced by the corresponding variable of the mutated member:

If r < CR:  V_{i,j}^{G+1} = U_{i,j}^{G+1};  else  V_{i,j}^{G+1} = X_{i,j}^G        (2)

Here j = 1 to n; i = 1 to NP; n is the number of variables; r is a random number in [0, 1]. V is
called the child population and X the parent population. To save computational effort, the
mutation operation is performed only when the generated random number r < CR (Mezura-Montez
and Lopez, 2009).
Selection
Here the objective function value for every member of the parent population is compared
with its corresponding member of the child population and the better of the two survives in the
next generation.
3. Proposed DE modification
Newton's method for solving nonlinear algebraic equations is modified in the current work.
Newton's method for solving a set of nonlinear equations, f(X) = 0, can be represented as:

X^{G+1} = X^G - [f'(X^G)]^{-1} f(X^G)        (3)

Here the superscript G or G+1 denotes the iteration number, X ≡ X^G, and f'(X^G) is the
derivative matrix of the algebraic equations at X^G. For a sufficiently large value of G, the above
iteration leads to f(X) = 0. The method is adapted to solving an optimization problem minimizing
f(X) as follows:

X^{G+1} = X^G - [f''(X^G)]^{-1} f'(X^G)        (4)
Here f'(X) is the derivative vector and f''(X) is the second derivative of the objective function, the
Hessian matrix. This method requires the computation of the Hessian matrix, which is
computationally very expensive; at a sufficiently large value of G it converges to f'(X) = 0.
Although there are quasi-Newton methods that use an approximated Hessian, for example the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, an even simpler extension of Newton's
method is proposed in this work as follows:
X^{G+1} = X^G - [ (f_low - f(X_1^G))/f'(X_1^G), (f_low - f(X_2^G))/f'(X_2^G), ..., (f_low - f(X_n^G))/f'(X_n^G) ]^T        (5)
Here f_low is a lower limit on the function, which has to be specified by the user a priori. After
each generation the inferior member of the population is replaced by a new member calculated
by the above modified Newton's method; this replacement is carried out before mutation.
Replacement starting after different numbers of generations has been experimented with. Fig. 1
shows a schematic diagram of the modified differential evolution algorithm. Note that the n
numerical derivatives can be calculated with n additional function evaluations; thus, the modified
DE requires n additional function evaluations per iteration compared to the conventional DE.
Hence, the comparison of the two DE algorithms is carried out in terms of the number of function
evaluations and not the iterations.
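The replacement step can be sketched as follows (an illustrative sketch, not the authors' code: we read Eq. (5) as a per-coordinate Newton-type step that moves the inferior member toward the target level f_low, the n derivatives are taken by forward differences, and the flat-coordinate guard and bound clipping are our additions):

```python
import numpy as np

def modified_newton_replace(pop, f, f_low, h=1e-6, bounds=(-6.0, 6.0)):
    """Replace the most inferior member of `pop` with a per-coordinate
    Newton-type step toward the user-specified level f_low.
    Costs n extra function evaluations for the numerical gradient."""
    fitness = np.array([f(x) for x in pop])
    w = int(np.argmax(fitness))               # most inferior member
    x = pop[w].copy()
    fx = fitness[w]
    step = np.zeros_like(x)
    for j in range(len(x)):
        xh = x.copy()
        xh[j] += h
        dj = (f(xh) - fx) / h                 # forward-difference df/dx_j
        if abs(dj) > 1e-12:                   # skip near-flat coordinates
            step[j] = (f_low - fx) / dj       # move toward f_low level
    pop[w] = np.clip(x + step, *bounds)
    return pop
```

For a convex quadratic such as f(x) = x1^2 + x2^2 with f_low = 0, one such step strictly decreases the objective of the replaced member.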
Figure 1: Flowchart of the proposed modified DE with replacement of the inferior member by the modified Newton step
4. Test problems
The following two test functions were used for testing the proposed modification. Both are
simple mathematical functions with known solutions.
4.1 Himmelblau function
The Himmelblau function is a quadratic multimodal function defined as:

min_{x1,x2} f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2        (4)

subject to: -6 <= x1, x2 <= 6

It has four global minima (and no other local minima), with f(x1, x2) = 0, at the points (x1, x2) =
(3, 2), (-2.805118, 3.131312), (-3.77931, -3.28318) and (3.58442, -1.84812).
4.2 Ackley’s path function
Ackley's Path function (Ackley, 1987) is a widely used multimodal function defined as:

min_{x1,x2} f(x1, x2) = -20 * exp(-0.2 * sqrt((x1^2 + x2^2)/2)) - exp((cos(2*pi*x1) + cos(2*pi*x2))/2) + 20 + e        (5)

subject to: -2 <= x1, x2 <= 2
It has a global minimum f(x1, x2) = 0 at (x1, x2) = (0, 0) and several local minima. Because of
these local minima, the optimization often converges to a local minimum.
5. Results and discussion
The proposed DE algorithm is tested on the above two test applications in this work. Since
there may not be significant convergence during the initial few generations, replacement may not
be needed at first. To find the optimum generation for starting the replacement of the inferior
member, we study the effect of replacement at various stages for both applications, namely from
(1) the 5th generation onwards, (2) the 10th generation onwards, (3) the 15th generation
onwards, and (4) the 20th generation onwards.
Twenty runs are performed with different initial populations for every case. The same set of
20 initial populations is used for all five cases, including the case of no replacement. The DE
parameters used for all the cases are: mutation factor F = 0.5, crossover probability CR = 0.6
and population size NP = 10.
5.1 Himmelblau function
Fig. 2 shows the DE results obtained for the Himmelblau function. The first plot shows the
convergence of the best of the 20 runs. As can be seen, the convergence achieved after 820
function evaluations for the case of no replacement is achieved in only around 410 function
evaluations when the replacement starts after 20 generations. The second plot shows the
average of the 20 different runs; this plot is significant because of the stochastic nature of the DE
approach: the best run might have succeeded by chance, while the average of 20 runs shows
the representative trend with high probability. From this plot it is clear that replacement after any
generation is far better than the case without replacement. In the third plot, the average of the 20
runs at the end of 40 generations is plotted against the generation at which replacement of the
inferior member starts. As can be observed, the optimum starting generation for this function is
shared almost equally between 5, 10 and 20, and all are superior to the case without
replacement.
Figure 2: Results for the constrained Himmelblau function (panels: best solution of 20 runs;
average solution of 20 runs; average at the end of the 40th generation)
5.2 Ackley's path function

Fig. 3 shows similar results for Ackley's Path function. As can be seen from the first plot,
the replacement of the inferior member increases the convergence rate compared to the no-replacement
case. As can be seen in the second plot, the convergence rate for replacement from
the 20th generation is high compared to the others. Except for this case, all other cases have
resulted in slower convergence, and in fact in convergence to another local minimum. This can
also be observed from the third plot, where the optimum starting generation for replacement is
found to be the 20th.
6. Conclusions
A modified DE algorithm, in which the inferior member of the population is replaced by a
member obtained by a modified Newton's method, is tested for its efficacy in this work. This
modification has enhanced the convergence rate without compromising the final solution. Two
test problems, namely the Himmelblau and Ackley functions, are selected, and the comparison
has been carried out against the conventional DE. The proposed DE approach introduces one
more user-defined parameter, namely the generation after which replacement of the inferior
member starts, and the performance may vary with this choice. For the Ackley function, the best
starting point based on the average of 20 runs was found to be after the 20th generation; for the
Himmelblau function it is after 10 or 20 generations. It should be noted that this choice is
application dependent, and the sensitivity analysis discussed in this work can be useful. In future
work, the sensitivity to the number of replaced members will be studied. The motivation behind
using 20 different initial populations is that DE follows a stochastic approach, and the converged
solution may differ between runs even for the same initial population; hence an analysis featuring
both the best and the averaged solutions has been carried out. To maintain consistency, 40
generations were used for all the cases.
Figure 3: Results for the constrained Ackley's Path function (panels: best solution of 20 runs;
average solution of 20 runs; average at the end of the 40th generation)
References
Ackley, D.H. A connectionist machine for genetic hillclimbing. Kluwer Academic Publishers,
    Boston, 1987.
Ali, M.M. and Z.K. Bagdadi. A local exploration-based differential evolution algorithm for
    constrained global optimization. Applied Mathematics and Computation, 2009, Vol. 208,
    31-48.
Angira, R. and A. Santosh. A modified trigonometric differential evolution algorithm for
    optimization of dynamic systems. IEEE Congress on Evolutionary Computation (CEC 2008,
    IEEE World Congress on Computational Intelligence), 2008, 1463-1468.
Babu, B.V. and R. Angira. A differential evolution approach for global optimization of MINLP
    problems. Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and
    Learning (SEAL'02), 2002, Vol. 2, 866-870.
Babu, B.V. and R. Angira. Optimization using hybrid differential evolution algorithms.
    Proceedings of the International Symposium & 57th Annual Session of IIChE in association
    with AIChE (CHEMCON-2004), 2004.
Babu, B.V. and R. Angira. Modified differential evolution (MDE) for optimization of non-linear
    chemical processes. Computers & Chemical Engineering, 2006, Vol. 30, 989-1002.
Dorigo, M. and G. Di Caro. Ant colony optimization: a new meta-heuristic. Proceedings of the
    1999 Congress on Evolutionary Computation (CEC 99), Vol. 2, 1470-1477.
Goldberg, D.E. Genetic algorithms in search, optimization, and machine learning. Addison-Wesley
    Publishing Co. Inc., Reading, Massachusetts, 1989.
Hansen, R. Global optimization using interval analysis: the one-dimensional case. Journal of
    Optimization Theory and Applications, 1979, Vol. 29, 331-344.
Holland, J.H. Adaptation in natural and artificial systems. The University of Michigan Press, Ann
    Arbor, Michigan, 1975.
Kennedy, J. and R. Eberhart. Particle swarm optimization. Proceedings of the IEEE International
    Conference on Neural Networks, 1995, Vol. 4, 1942-1948.
Mezura-Montez, E. and C.A. Monterrosa-Lopez. Global and local selection in differential
    evolution for constrained numerical optimization. Journal of Computer Science & Technology,
    2009, Vol. 9, 43-52.
Storn, R. and K. Price. Differential evolution - a simple and efficient heuristic for global
    optimization over continuous spaces. Journal of Global Optimization, 1997, Vol. 11, Issue 4,
    341-359.
Thomsen, R. Flexible ligand docking using differential evolution. Proceedings of the 2003
    Congress on Evolutionary Computation (CEC '03), Vol. 4, 2354-2361.
Ursem, R.K. and P. Vadstrup. Parameter identification of induction motors using differential
    evolution. Proceedings of the 2003 Congress on Evolutionary Computation (CEC '03), Vol. 2,
    790-796.
Vesterstrom, J. and R. Thomsen. A comparative study of differential evolution, particle swarm
    optimization, and evolutionary algorithms on numerical benchmark problems. Proceedings of
    the 2004 Congress on Evolutionary Computation (CEC 2004), Vol. 2, 1980-1987.
Optimization of Machining Parameters in Face Milling of
Al6065 using Fuzzy Logic
P. Venkata Ramaiah^1, N. Rajesh^2*
^1 S. V. University College of Engineering, Tirupati-517502, A.P, India
^2 S. V. College of Engineering, Tirupati-517507, A.P, India
*Corresponding author (e-mail: raajesh06@gmail.com)
In this paper, fuzzy logic is used to identify the optimal combination of influential
factors in the milling process. Using fuzzy logic, overall fuzzy grade values are
determined for multiple response characteristics. A milling experiment has been
performed on Al6065 material according to a Taguchi orthogonal array (OA16) for
various combinations of the controllable parameters speed, feed and depth of cut,
and the experimental responses, surface roughness and material removal rate, are
recorded. These responses are analyzed using fuzzy logic, and the optimum
combination of controllable parameters is identified. Finally, a confirmation
experiment is conducted for this combination, and the result is satisfactory.
Keywords: Face Milling, Fuzzy logic, Taguchi, Influencing Parameters
1. Introduction
Roughness is often a good predictor of the performance of a mechanical component
since irregularities in the surface may form nucleation sites for cracks or corrosion. Although
roughness is usually undesirable, it is difficult and expensive to control during manufacturing.
Decreasing roughness of a surface will usually exponentially increase its manufacturing costs.
This often results in a trade-off between the manufacturing cost of a component and its
performance in application.
Increasing the productivity and the quality of machined parts are the main
challenges of the metal-based industry, and there has been increased interest in monitoring
all aspects of the machining process. The quality of machining can be judged by surface
roughness: the higher the surface finish, the higher the quality. Surface finish mainly depends
on cutting speed, depth of cut and feed. Most operators use a trial-and-error method to
find an appropriate cutting condition, which is not an effective way to find optimal cutting
parameters. So the main objective of this study is to find the optimum parameters
(speed, feed, depth of cut) so that surface roughness is optimized. Aluminum has
many applications in industry; automotive, aircraft and train companies also need to
replace steel and cast iron with lighter metals like aluminum, so it is important to know
the machining behavior of aluminum. There are various optimization techniques, such as
genetic algorithms, artificial neural networks, grey analysis, the utility concept, response
surface methods, the Taguchi technique and fuzzy logic, to find the optimum cutting condition.
2. Literature review
Bajic, Lela and Zivkovic conducted a series of experiments based on design of
experiments to investigate the influence of cutting parameters such as cutting speed, feed
rate and depth of cut on surface roughness in the face milling operation. John L. Yang and
Joseph C. Chen worked on Taguchi parameter design and provided a systematic
procedure that can effectively identify the optimum surface roughness in the process control
of individual end milling machines. Gopala Krishna investigated machining
parameters such as the number of passes, depth of cut in each pass, spindle speed and feed
rate to obtain better surface finish, dimensional accuracy and tool wear. Dalgobind Mahto and
Anjani Kumar conducted a series of experiments on Taguchi's parametric design in
which the signal-to-noise ratio and Pareto analysis of variance are employed to analyze the effect
of milling parameters such as cutting speed, feed rate and depth of cut on surface roughness.
Tsao proposed a Grey-Taguchi method to optimize the milling parameters of an
aluminum alloy to get better surface finish. Vijaya Kumar and Venkataramaiah
developed a hybrid approach combining Taguchi, the grey relational analysis method and
fuzzy logic to reap their advantages in the drilling process. N. Rajesh, P. V. Ramaiah and A.
Nagarjuna used the Taguchi method to identify the optimal combination of influential factors
in the milling process.
Fuzzy logic: Fuzzy logic has a great capability to capture human commonsense reasoning,
decision-making and other aspects of human cognition. It overcomes the limitations of classical
logical systems, which impose inherent restrictions on the representation of imprecise concepts.
Vagueness in the coefficients and constraints may be naturally modeled by fuzzy logic.
Modeling by fuzzy logic opens up a new way to optimize cutting conditions and tool
selection; the literature also stresses the importance of integrating fuzzy and ANN-based
techniques for effective process control in manufacturing. Several applications of fuzzy set
theory-based modeling of metal cutting processes are reported in the literature.
Lee, Yang and Moon used a fuzzy set theory-based non-linear model for a turning
process as a more effective tool than conventional mathematical modeling techniques when
there is 'fuzziness' in the process control variables. Al-Wedyan, Demirli and Bhat used a fuzzy
modeling technique for a down-milling cutting operation. A fuzzy rule-based feed rate
control strategy has been used in surface milling of mild steel bars to improve cutting efficiency
and prolong tool life. Kamatala, Baumgartner and Moon developed a fuzzy set theory-based
system for predicting surface roughness in a finish turning operation. Fuzzy set
theory-based techniques suffer from a few shortcomings: the rules are developed from a
process expert's knowledge, and the expert's prior experiences and opinions are not easily
adjustable to dynamic changes of the underlying cutting process. Such techniques also do not
provide any means of utilizing analytical models of metal cutting processes.
3. Experimental procedure
Figure 1. Milling Machine, Talysurf – Surface Meter and Milling Cutter
Face milling tests have been performed on Al6065 work material using a milling
machine with an HSS cutter, considering different process parameter combinations.
Surface roughness and material removal rate are selected as the indices to evaluate cutting
performance in milling, and are therefore considered the response characteristics in this
study. Basically, surface roughness should be minimized and MRR should be maximized in
any metal cutting process for better performance.
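The MRR values reported later in Table 2 can be sanity-checked against the standard face-milling relation MRR = feed rate x depth of cut x width of cut. The cut width is not stated in the paper; a width of about 101.6 mm (4 in) is inferred here from the tabulated values, so treat it as an assumption:

```python
# MRR consistency check. WIDTH_MM is an inference, not a value from the paper.
WIDTH_MM = 101.6

trials = [  # (feed mm/min, depth of cut mm, reported MRR mm^3/min), Table 2
    (125, 0.10, 1270), (160, 0.15, 2438), (200, 0.20, 4064), (250, 0.25, 6350),
    (125, 0.15, 1905), (160, 0.10, 1626), (200, 0.25, 5080), (250, 0.20, 5080),
    (125, 0.20, 2540), (160, 0.25, 4064), (200, 0.10, 2032), (250, 0.15, 3810),
    (125, 0.25, 3175), (160, 0.20, 3251), (200, 0.15, 3048), (250, 0.10, 2540),
]

for feed, doc, reported in trials:
    predicted = feed * doc * WIDTH_MM  # mm^3/min
    # reported values appear rounded to the nearest mm^3/min
    assert abs(predicted - reported) < 1.0, (feed, doc, predicted, reported)
print("all 16 MRR values match feed x DOC x width within rounding")
```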
In this experiment, three controllable parameters are considered and each
parameter is set at four levels. The parameters and their levels are shown in
Table 1. A full factorial design would require (levels)^(factors) = 4^3 = 64
experimental runs. To minimize the experimental cost, a fractional factorial design with
4^(3-1) = 16 runs is chosen; therefore, the Taguchi L16 experimental design is used for
conducting the experiments (Table 2). Experiments are performed according to this
design and the values of surface roughness are recorded (Table 2) for each
experimental run.
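The 16-run fractional design can be checked programmatically for the balance and orthogonality properties that make it a valid L16(4^3) array; a sketch using the runs of Table 2:

```python
from collections import Counter

# The 16 parameter combinations used in the experiment (Table 2):
# (speed rpm, feed mm/min, depth of cut mm)
runs = [
    (900, 125, 0.10), (900, 160, 0.15), (900, 200, 0.20), (900, 250, 0.25),
    (1120, 125, 0.15), (1120, 160, 0.10), (1120, 200, 0.25), (1120, 250, 0.20),
    (1400, 125, 0.20), (1400, 160, 0.25), (1400, 200, 0.10), (1400, 250, 0.15),
    (1800, 125, 0.25), (1800, 160, 0.20), (1800, 200, 0.15), (1800, 250, 0.10),
]

# A full factorial would need levels**factors = 4**3 = 64 runs;
# the fractional design keeps only 4**(3-1) = 16 of them.
assert len(runs) == 4 ** (3 - 1)

# Balance: every level of every factor appears the same number of times (16/4 = 4).
for factor in range(3):
    counts = Counter(run[factor] for run in runs)
    assert all(c == 4 for c in counts.values())

# Pairwise orthogonality: every pair of factors sees each of the
# 4 x 4 = 16 level combinations exactly once.
for f1, f2 in [(0, 1), (0, 2), (1, 2)]:
    pairs = Counter((run[f1], run[f2]) for run in runs)
    assert len(pairs) == 16 and all(c == 1 for c in pairs.values())
print("L16 subset is balanced and pairwise orthogonal")
```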
Table 1. Process parameters and their levels

Process parameter    Level-1   Level-2   Level-3   Level-4
Speed (rpm)          900       1120      1400      1800
Feed (mm/min)        125       160       200       250
Depth of cut (mm)    0.1       0.15      0.2       0.25
Table 2. Experimental design and data

Exp. Run No.  Speed (rpm)  Feed (mm/min)  DOC (mm)  Avg Ra (µm)  MRR (mm3/min)
1             900          125            0.10      0.69         1270
2             900          160            0.15      1.08         2438
3             900          200            0.20      1.04         4064
4             900          250            0.25      1.13         6350
5             1120         125            0.15      1.2          1905
6             1120         160            0.10      0.89         1626
7             1120         200            0.25      0.33         5080
8             1120         250            0.20      0.53         5080
9             1400         125            0.20      0.74         2540
10            1400         160            0.25      0.62         4064
11            1400         200            0.10      0.53         2032
12            1400         250            0.15      0.48         3810
13            1800         125            0.25      0.59         3175
14            1800         160            0.20      0.6          3251
15            1800         200            0.15      0.35         3048
16            1800         250            0.10      0.11         2540

4. Optimization of machining parameters
The experimental responses surface roughness (Ra) and material removal rate (MRR)
are analyzed using the fuzzy toolbox of MATLAB, and overall fuzzy grade values are
determined. The optimum levels of the influential parameters are then determined based on
the overall fuzzy grade, as follows.
Implementation of fuzzy logic: Fuzzy logic involves a fuzzy inference engine and a
fuzzification-defuzzification module. Fuzzification expresses the input variables in the form of
fuzzy membership values based on various membership functions. Governing rules in
linguistic form (for example, "if cutting force is high and machining time is high, then tool wear
is high") are formulated on the basis of experimental observations. Based on each rule, an
inference can be drawn on the output grade and membership value. Inferences obtained from
the various rules are combined to arrive at a final decision. The membership values thus
obtained are defuzzified using various techniques to obtain a crisp value.
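The fuzzification, max-min Mamdani inference and defuzzification steps can be sketched as follows. The paper uses the MATLAB fuzzy toolbox; the membership breakpoints and the output-set spacing below are illustrative assumptions, with the nine rules keyed as in Table 4:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Input subsets small/medium/large on a normalized [0, 1] axis
# (called low/medium/high in Table 4). Breakpoints are assumptions.
IN_MF = {"S": (-0.5, 0.0, 0.5), "M": (0.0, 0.5, 1.0), "L": (0.5, 1.0, 1.5)}

# Nine output subsets VVL ... VVH with peaks spread uniformly on [0, 1].
OUT_LABELS = ["VVL", "VL", "L", "ML", "M", "MH", "H", "VH", "VVH"]
OUT_MF = {lab: (p - 0.125, p, p + 0.125)
          for lab, p in zip(OUT_LABELS, np.linspace(0.0, 1.0, 9))}

# One Mamdani rule per (Ra grade, MRR grade) pair, as in Table 4.
RULES = {("S", "S"): "VVL", ("S", "M"): "VL",  ("S", "L"): "L",
         ("M", "S"): "ML",  ("M", "M"): "M",   ("M", "L"): "MH",
         ("L", "S"): "H",   ("L", "M"): "VH",  ("L", "L"): "VVH"}

def fuzzy_grade(ra_norm, mrr_norm):
    """Max-min Mamdani inference followed by centroid defuzzification."""
    y = np.linspace(0.0, 1.0, 501)
    aggregate = np.zeros_like(y)
    for (ra_lab, mrr_lab), out_lab in RULES.items():
        # min composition gives the firing strength of the rule
        w = min(tri(ra_norm, *IN_MF[ra_lab]), tri(mrr_norm, *IN_MF[mrr_lab]))
        # clip the rule's output set at w, then max-aggregate across rules
        aggregate = np.maximum(aggregate, np.minimum(w, tri(y, *OUT_MF[out_lab])))
    if aggregate.sum() == 0.0:
        return 0.0
    return float((y * aggregate).sum() / aggregate.sum())  # centroid
```

With these assumed membership functions, a run with small Ra grade and large MRR grade defuzzifies to a mid-low grade, and grades increase monotonically as both inputs grow.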
4.1 Determination of overall fuzzy grade
A fuzzy logic unit comprises a fuzzifier, membership functions, a fuzzy rule base, an
inference engine and a defuzzifier. In the fuzzy logic analysis, the fuzzifier first uses the
membership functions to fuzzify the input responses. Next, the inference engine performs
fuzzy reasoning on the fuzzy rules to generate a fuzzy value. Finally, the defuzzifier converts
the fuzzy value into a fuzzy grade. The structure built for this study is a two-input, one-output
fuzzy logic unit, as shown in Fig. 2. The function of the fuzzifier is to convert crisp sets
of input data into proper linguistic fuzzy sets of information.
The input variables of the fuzzy logic system in this study are surface roughness and
material removal rate. They are converted into linguistic fuzzy subsets using membership
functions of triangular form, as shown in Fig. 3, and are uniformly assigned into three fuzzy
subsets: small (S), medium (M) and large (L). The fuzzy rule base consists of a group
of if-then control rules to express the inference relationship between input and output. A
typical linguistic fuzzy rule (Mamdani type) is described as:
Rule 1: if x1 is A1 and x2 is B1 then y is E1, else
Rule 2: if x1 is A2 and x2 is B2 then y is E2, else
......................................................
Rule n: if x1 is An and x2 is Bn then y is En.
In the above, Ai and Bi are fuzzy subsets defined by the corresponding membership
functions, i.e., μAi and μBi. The output variable is the fuzzy grade y0, which is also converted
into linguistic fuzzy subsets using membership functions of triangular form, as shown in Fig. 4.
Figure 2. Two-input, one-output fuzzy logic unit
Figure 3. Membership functions for input parameters
Figure 4. Membership functions for output parameter
Unlike the input variables, the output variable is assigned into nine subsets: very
very low (VVL), very low (VL), low (L), medium low (ML), medium (M), medium high (MH),
high (H), very high (VH) and very very high (VVH). Then, considering the combinations of the
input variables, 9 fuzzy rules are defined and listed in Table 4. The fuzzy inference
engine is the kernel of a fuzzy system. It can solve a problem by
simulating the thinking and decision pattern of human beings using approximate or fuzzy
reasoning. In this paper, the max-min compositional operation of Mamdani is adopted to
perform the calculation of fuzzy reasoning.
4.2 Optimal levels of machining parameters
After determining the overall fuzzy grade values (Table 3), the effect of each machining
parameter is separated based on the overall fuzzy grade at its different levels. The mean
values of the fuzzy grade for each level of the controllable parameters, and the rank of each
parameter's effect on the multiple responses, are summarized in Table 5. A larger fuzzy grade
means the result is closer to the desired product quality; thus, a higher value of the fuzzy
grade is desirable. From Table 5, the best levels of the cutting parameters are spindle speed
at level-4, feed at level-4 and DOC at level-1. The optimal levels of the controllable
parameters obtained from this methodology are verified by the confirmation test shown in
Table 6.
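The separation of factor effects can be sketched by averaging the overall fuzzy grade (Table 3) over the runs at each factor level; the means computed here illustrate the procedure, while Table 5 reports the paper's own aggregates:

```python
# Overall fuzzy grades of runs 1-16 (Table 3), in run order.
grades = [0.3164, 0.6808, 0.5000, 0.5000, 0.6445, 0.4729, 0.5000, 0.5000,
          0.4406, 0.5000, 0.2998, 0.4374, 0.4253, 0.4748, 0.3335, 0.1244]

# Level index (0-3) taken by each factor in each run of the L16 design (Table 2).
levels = {
    "speed": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "feed":  [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
    "doc":   [0, 1, 2, 3, 1, 0, 3, 2, 2, 3, 0, 1, 3, 2, 1, 0],
}

def level_means(factor):
    """Mean overall fuzzy grade at each of the four levels of one factor."""
    means = []
    for lev in range(4):
        vals = [g for g, l in zip(grades, levels[factor]) if l == lev]
        means.append(sum(vals) / len(vals))
    return means

for factor in levels:
    print(factor, [round(m, 4) for m in level_means(factor)])
```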
Table 3. Overall fuzzy grade

Exp. Run No.  Overall fuzzy grade    Exp. Run No.  Overall fuzzy grade
1             0.3164                 9             0.4406
2             0.6808                 10            0.5000
3             0.5000                 11            0.2998
4             0.5000                 12            0.4374
5             0.6445                 13            0.4253
6             0.4729                 14            0.4748
7             0.5000                 15            0.3335
8             0.5000                 16            0.1244
Table 4. Fuzzy rules

Rule no.  Ra (input)  MRR (input)  Output
1         Low         Low          VVL
2         Low         Medium       VL
3         Low         High         L
...       ...         ...          ...
9         High        High         VVH

*Here: VVL = very very low, VL = very low, L = low, ML = medium low, M = medium,
MH = medium high, H = high, VH = very high, VVH = very very high
Table 5. Fuzzy grade for each level of the influential parameters

Cutting parameter  Level-1  Level-2  Level-3  Level-4  Rank
Speed              6.344    5.590    7.698    10.384   2
Feed               7.089    9.332    5.584    8.011    3
DOC                11.267   5.969    6.408    6.372    1
Table 6. Confirmation test results

Speed (rpm)  Feed (mm/min)  Depth of cut (mm)  Ra (µm)  MRR (mm3/min)
1800         250            0.1                0.14     2540

5. Conclusions
The experiment has been performed on Al6065 and the obtained data have been analyzed
using fuzzy logic. The influence of spindle speed, feed and depth of cut on surface
roughness and material removal rate in the face milling operation is studied. The optimum
machining parameter combination found using the fuzzy logic technique yields good results
in milling of Al6065. This method can also be used for other processes and for machining
different materials.
References
Al-Wedyan, H., Demirli, K. and Bhat, R. A technique for fuzzy logic modeling of machining
process. 20th NAFIPS International Conference, 2001, 5, 3021-3026.
Bajic, D., Lela, B. and Zivkovic, D. Modeling of machined surface roughness and optimization
of cutting parameters in face milling. Journal of Industrial Technology, 2008.
Dalgobind Mahto and Anjani Kumar. Optimization of process parameters in vertical CNC mill
machines using Taguchi's design of experiments. Journal of Industrial Technology,
2008.
Gopala Krishna. A global optimization approach to select optimal machining parameters of
multipass face milling, 2007.
Yang, J. L. and Chen, J. C. A systematic approach for identifying optimum surface
roughness performance in end milling. Journal of Industrial Technology, 2001.
Kamatala, M.K., Baumgartner, E.T. and Moon, K.S. Turned surface finish prediction based on
fuzzy logic theory. Proceedings of the 20th International Conference on Computer and
Industrial Engineering, Korea, 1996, 1, 101-104.
Lee, Y.H., Yang, B.H. and Moon, K.S. An economic machining process model using fuzzy
non-linear programming and neural network. International Journal of Production Research,
1999, 37(4), 835-847.
Rajesh, N., Ramaiah, P. V. and Nagarjuna, A. Identification of optimal influencing factors in
milling of Al-6065 using Taguchi method. Proceedings of the National Conference for
Researchers on Latest Innovations in Mechanical Engineering (LIME-2013), 2013, Andhra
University, Vishakhapatnam, India, 65-68.
Tsao, C. C. Grey-Taguchi method to optimize the milling parameters of aluminum alloy.
Journal of Advanced Manufacturing Technology, 2007.
Vijaya Kumar, G. and Venkataramaiah, P. Optimization study on drilling of Al-6061 with
coated tools under MQL condition using hybrid approach. Elixir Mechanical Engineering,
2012, 45, 7831-7839.
Strength Optimization of Orthotropic Plate Containing
Triangular Hole Subjected to In-plane Loading
N. P. Patel*, D. S. Sharma, R. R. Trivedi
Mechanical Engineering Department, Institute of Technology, Nirma University, Ahmedabad
*Corresponding author (e-mail: nirav_npp@yahoo.com)
In the present work, the best fiber orientation is obtained by using a hybrid genetic
algorithm for graphite/epoxy and glass/epoxy plates containing a triangular hole subjected
to in-plane loading. To analyze the stress distribution around the hole, Muskhelishvili's
complex variable approach is used. In the genetic algorithm, the Tsai-Hill failure
criterion is taken as the fitness function and the ply orientation angle is the design
variable. Tournament selection and heuristic crossover are used for reproduction and
recombination, respectively. For effective reproduction in the population, elitism is used.
The genetic algorithm is hybridized with the pattern search method, which is applied at the
end of the genetic algorithm process to find the best global solution.
1. Introduction
Holes and cut-outs are bound to be present in many engineering structures, where they
cause serious stress concentration problems. These holes/openings act as stress raisers
and may lead to failure of the structure/machine component. The stress analysis around
different shaped holes has been carried out by Muskhelishvili (1954), Lekhnitskii (1963),
Ukadgaonker (2000) and Sharma (2012), among others, considering different types of
loading conditions.
In a composite plate, the fiber orientation can be managed to achieve the required strength
and stiffness for a specific purpose. The genetic algorithm (GA), a subset of
evolutionary algorithms, is a very useful tool for optimization because of its advantages such
as direct use of a coding, search from a population and direct use of the objective function
value. Ball, Sargent and Ige (1993), Sivakumar, Iyengar and Deb (2000), Venkataraman and
Haftka (1999) and Ziad et al. (2012) have observed that the GA is the best tool to optimize
composite laminates. Callahan and Weeks (1992), Le Riche and Haftka (1993), Nagendra et
al. (1993) and Ball, Sargent and Ige (1993) were the first researchers to adopt the
genetic algorithm for the optimization of the stacking sequence in laminated composite
plates. J. H. Park et al. (2001) obtained the best stacking sequence of composite
laminates for maximum laminate strength. Very few research articles are available
on the optimization of the stacking sequence of composites using advanced optimization
techniques. The optimization of a composite plate with a hole has not been addressed yet,
nor has stress analysis using the complex variable approach in conjunction with
optimization. Here an attempt is made to obtain the best fiber angle for a single lamina of a
composite plate with a hole when the plate is subjected to in-plane loading. The genetic
algorithm is used to optimize the fiber angle based on strength. The plate stresses are
calculated using Muskhelishvili's complex variable approach (1954). The Tsai-Hill criterion is
used as the objective function and the fiber orientation is the design variable. A single lamina
of graphite/epoxy and glass/epoxy containing a triangular hole is considered for the study.
2. Complex Variable Formulation
The plate is assumed to be loaded in such a way that the resultants lie in the XOY plane
(Figure 1). The stresses on the top and bottom surfaces of the plate, as well as σz, τxz and
τyz, are zero everywhere within the plate. Using the generalized Hooke's law, Airy's stress
function and the strain-displacement compatibility condition, the following characteristic
equation is obtained, the roots of which represent the constants of anisotropy:
a11 s^4 - 2 a16 s^3 + (2 a12 + a66) s^2 - 2 a26 s + a22 = 0    (1)

where aij are the compliance coefficients. φ(z1) and ψ(z2) are
Muskhelishvili's complex stress functions. The stress components for
plane stress conditions can be written in terms of these stress
functions as follows:

σx = 2 Re [ s1^2 φ'(z1) + s2^2 ψ'(z2) ]
σy = 2 Re [ φ'(z1) + ψ'(z2) ]    (2)
τxy = -2 Re [ s1 φ'(z1) + s2 ψ'(z2) ]

Figure 1. Plate with hole

The area external to the given triangular hole in the Z-plane is
mapped conformally onto the area outside the unit circle in the ζ-plane using the following
mapping function:

z_j = ω_j(ζ) = (R/2) [ a_j ( ζ + Σ_{k=1}^{N} m_k ζ^{-k} ) + b_j ( 1/ζ + Σ_{k=1}^{N} m_k ζ^{k} ) ],
a_j = 1 + i s_j,  b_j = 1 - i s_j;  j = 1, 2    (3)

Here k = 1, 3, 5, 8, 11, 14, 17. Gao's (1996) arbitrary biaxial loading condition is adopted to
facilitate the solution of a plate subjected to biaxial loading.
3. Stress Functions
The problem of stress around a hole in an orthotropic plate is divided into two stages. In
the first stage, the stress functions φ1(z1) and ψ1(z2) for the hole-free plate under the
remotely applied loads σx∞, σy∞ are taken as

φ1(z1) = B* z1;  ψ1(z2) = (B'* + i C'*) z2    (4)

where the constants can be obtained by substituting the first-stage stress functions
and the applied stress at infinity into Equation (2). The boundary conditions f1, f2 on
the fictitious hole are determined from these stress functions as follows:

f1 = 2 Re [ φ1(z1) + ψ1(z2) ];  f2 = 2 Re [ s1 φ1(z1) + s2 ψ1(z2) ]    (5)

The second-stage stress functions φ0(z1) and ψ0(z2) are determined by applying the
negative of the boundary conditions, f1^0 = -f1 and f2^0 = -f2, in the Schwarz formula on the
hole boundary in the absence of the remote loading:

φo(ζ) = ( i / (4π (s1 - s2)) ) ∮ (s2 f1^0 - f2^0) [(t + ζ)/(t - ζ)] (dt/t)
ψo(ζ) = -( i / (4π (s1 - s2)) ) ∮ (s1 f1^0 - f2^0) [(t + ζ)/(t - ζ)] (dt/t)
The stress functions φ(z1) and ψ(z2) for the single-hole problem can be obtained by adding
the stress functions of the first and second stages.
4. Optimization through Genetic Algorithm
The various selection methods were compared by Byoung et al. (2000). Their
experimental results show that ranking and tournament selection are, in general, more
effective in both solution quality and convergence time than the other methods.
The aim is to find the best fiber angle for the graphite/epoxy plate with a triangular hole
subjected to in-plane loading, which will give maximum strength. The ply orientation angle
is the design variable. The Tsai-Hill criterion is selected as the objective function for the
analysis of a single lamina. The mathematical model of this problem is as follows:
of single plate. The mathematical modeling of this problem is as follows:
1
2
2
2
1
1
1
1
6
2
1 2
max imize min 2f 1
(6)
2 2 2 2
2
X
Y
S
X
o
o
Subjected to -90 ≤θ≤ 90
where θ is the fiber orientation, X and Y are the tensile strengths in the longitudinal and
transverse directions, and S is the shear strength. Tournament selection is adopted as the
selection method. Heuristic crossover is used for recombination. Adaptive feasible mutation
is used as the mutation operator. The stresses σ1/σ, τ6/σ and σ2/σ are the normalized
transformed stresses in the principal material directions, obtained from the normalized values
of σx, τxy and σy, which are calculated using Muskhelishvili's approach (1954).
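The GA loop described above (tournament selection, heuristic crossover, elitism, bounded mutation) can be sketched as below. The true fitness evaluates the Tsai-Hill index on the Muskhelishvili boundary stresses around the hole; as a stand-in assumption, the plain off-axis stress transformation of a hole-free lamina under uniaxial-X load is used here, with the graphite/epoxy strengths of Table 1:

```python
import math
import random

# Graphite/epoxy strengths from Table 1 (MPa).
X, Y, S = 1500.0, 40.0, 68.0

def tsai_hill_strength(theta_deg, sigma=1.0):
    """Failure-strength measure of an off-axis lamina under uniaxial-X load.

    Stand-in objective: the paper evaluates boundary stresses around the
    triangular hole; here the plain off-axis transformation is assumed.
    """
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    s1, s2, t6 = sigma * c * c, sigma * s * s, -sigma * s * c
    q = (s1 / X) ** 2 - s1 * s2 / X ** 2 + (s2 / Y) ** 2 + (t6 / S) ** 2
    return q ** -0.5  # multiplier on sigma at Tsai-Hill failure

def ga_best_angle(pop_size=100, generations=60, elite=6, tour=4, seed=1):
    random.seed(seed)
    pop = [random.uniform(-90.0, 90.0) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=tsai_hill_strength, reverse=True)
        nxt = ranked[:elite]                       # elitism
        while len(nxt) < pop_size:
            # tournament selection of two parents
            p1 = max(random.sample(pop, tour), key=tsai_hill_strength)
            p2 = max(random.sample(pop, tour), key=tsai_hill_strength)
            hi, lo = sorted((p1, p2), key=tsai_hill_strength)[::-1]
            # heuristic crossover: step from the worse parent past the better
            child = hi + random.random() * (hi - lo)
            child += random.gauss(0.0, 2.0)        # small mutation
            nxt.append(max(-90.0, min(90.0, child)))
        pop = nxt
    return max(pop, key=tsai_hill_strength)

best = ga_best_angle()
```

A pattern-search refinement of `best`, as in the paper's hybrid, could be appended after the GA loop; for this placeholder objective the search settles near 0°, the uniaxial-X optimum reported in Table 2.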
5. Results and discussion
The objective of the present work is to obtain the best fiber angle in a composite plate
containing a triangular hole. The materials considered are graphite/epoxy and glass/epoxy,
whose properties are given in Table 1.
Table 1. Material properties (X, X' = longitudinal strength in tension and in compression;
Y, Y' = transverse strength in tension and in compression; S = shear strength)

Material        E1 (GPa)  E2 (GPa)  G12 (GPa)  ν12   X (MPa)  X' (MPa)  Y (MPa)  Y' (MPa)  S (MPa)
Graphite/epoxy  181       10.3      7.17       0.28  1500     1500      40       246       68
Glass/epoxy     38.6      8.27      4.14       0.26  1062     610       31       118       72
Table 2. Optimum fiber angle and failure strength for graphite/epoxy and glass/epoxy plates
containing a triangular hole with different corner radii, subjected to different loading
(Bi = biaxial, Uni-X = uniaxial X, Uni-Y = uniaxial Y; entries: optimum failure strength,
optimum fiber angle)

Corner radius 0.0030 mm
  Graphite/epoxy: Bi 5.0957 MPa (60°); Uni-X 8.187642 MPa (0°); Uni-Y 6.766579 MPa (-90°); Shear 7.0427 MPa (-53.3838°)
  Glass/epoxy:    Bi 4.2452 MPa (30°); Uni-X 7.369366 MPa (0°); Uni-Y 6.520892 MPa (-90°); Shear 5.059675 MPa (-90°)

Corner radius 0.0091 mm
  Graphite/epoxy: Bi 8.255397 MPa (60°); Uni-X 13.112821 MPa (0°); Uni-Y 10.861766 MPa (-90°); Shear 11.018353 MPa (-53.028°)
  Glass/epoxy:    Bi 5.583224 MPa (30°); Uni-X 9.712968 MPa (0°); Uni-Y 8.572409 MPa (90°); Shear 6.533566 MPa (90°)

Corner radius 0.0040 mm
  Graphite/epoxy: Bi 5.793776 MPa (60°); Uni-X 9.268255 MPa (0°); Uni-Y 7.701586 MPa (-90°); Shear 7.932203 MPa (-53.3838°)
  Glass/epoxy:    Bi 4.801513 MPa (30°); Uni-X 8.348804 MPa (0°); Uni-Y 7.363836 MPa (-90°); Shear 5.679091 MPa (-90°)

Corner radius 0.0171 mm
  Graphite/epoxy: Bi 10.95103 MPa (60°); Uni-X 17.250589 MPa (0°); Uni-Y 14.239926 MPa (-90°); Shear 14.216399 MPa (-52.7226°)
  Glass/epoxy:    Bi 10.95103 MPa (30°); Uni-X 17.250589 MPa (0°); Uni-Y 14.239926 MPa (-90°); Shear 14.216399 MPa (-52.7226°)

Corner radius 0.0058 mm
  Graphite/epoxy: Bi 6.753935 MPa (60°); Uni-X 10 MPa (0°); Uni-Y 8.911766 MPa (-90°); Shear 9.162163 MPa (-53.2031°)
  Glass/epoxy:    Bi 5.583224 MPa (30°); Uni-X 9.712968 MPa (0°); Uni-Y 8.572409 MPa (90°); Shear 6.533566 MPa (90°)

Corner radius 0.0476 mm
  Graphite/epoxy: Bi 17.711204 MPa (60°); Uni-X 27.214128 MPa (0°); Uni-Y 22.537592 MPa (-90°); Shear 21.505194 MPa (-51.9456°)
  Glass/epoxy:    Bi 14.009115 MPa (-30°); Uni-X 24.297344 MPa (0°); Uni-Y 21.475894 MPa (-90°); Shear 14.559762 MPa (90°)
The design variable is varied from -90° to 90°. The parameters used in the genetic
algorithm are: population size = 100, lower bound = -90°, upper bound = 90°, selection
method = tournament selection with size = 4, elite count = 6, mutation function = adaptive
feasible, crossover function = heuristic with 0.70 crossover fraction.
Different loading conditions, namely uniaxial-X, uniaxial-Y, biaxial and shear, are
considered for the present study. The optimum fiber angle for graphite/epoxy and glass/epoxy
plates containing a triangular hole with different corner radii, subjected to the different loading
conditions, is obtained and shown in Table 2. The convergence of the best fitness to the
mean fitness and the average distance between individuals at each generation are plotted for
all results; one of the results is presented in Figure 2. The failure strength in each generation
is obtained, and its convergence to the best value is presented in Figure 3 for a
graphite/epoxy plate containing a triangular hole (corner radius = 0.0040 mm) subjected to
different types of loading. The tangential stress distribution around a triangular hole (corner
radius = 0.0171 mm) for the optimum fiber angle under the respective loading is also obtained
in this work (refer to Figure 4).
Figure 2. Convergence and average distance between individual for graphite/epoxy plate with
triangular hole (corner radius=0.0476 mm) subjected to uniaxial-X loading
Figure 3. Evaluations of failure strengths for each generation for graphite/epoxy plate containing
triangular hole (corner radius=0.0040 mm) subjected to (a) Biaxial (b) Uniaxial-X (c) Uniaxial-Y (d)
Shear
6. Conclusion
The complex variable approach in conjunction with the GA gives an appreciable solution
for the optimum fiber orientation. In the genetic algorithm, the tournament selection method
with heuristic crossover gives good results for this problem. For any corner radius of the
triangular hole, the optimum fiber angles obtained for biaxial, uniaxial-X, uniaxial-Y and shear
loading are 60°, 0°, -90° and about -53° for graphite/epoxy, and 30°, 0°, 90° and 90° for
glass/epoxy, respectively. The bluntness of the triangle has a significant effect on the
strength of the lamina.
Figure 4. Tangential stress distribution around triangular hole with radius 0.0171 mm in
graphite/epoxy plate with optimum fiber angle subjected to different types of loading: (a)
Biaxial, fiber angle = 60°; (b) Uniaxial-X, fiber angle = 0°; (c) Uniaxial-Y, fiber angle = -90°;
(d) Shear, fiber angle = -52.22587°
References
Awad, Z. K., Aravinthan, T., Zhuge, Y. and Gonzalez, F. A review of optimization techniques
used in the design of fibre composite structures for civil engineering applications.
Materials and Design, 2012, 33, 534-544.
Ball, N.R., Sargent, P.M. and Ige, D.O. Genetic algorithm representations for laminate layups.
Artificial Intelligence in Engineering, 1993, 8, 99-108.
Byoung, T. Z. and Jung, J. K. Comparison of selection methods for evolutionary optimization.
Evolutionary Optimization, 2000, 2, 55-70.
Callahan, J. K. and Weeks, G. E. Optimum design of composite laminates using genetic
algorithm. Composites Engineering, 1992, 2, 149-160.
Gao, X.L. A general solution of an infinite elastic plate with an elliptic hole under biaxial
loading. International Journal of Pressure Vessels and Piping, 1996, 67, 95-104.
Le Riche, R. and Haftka, R. T. Optimization of laminate stacking sequence for buckling load
maximization by genetic algorithm. AIAA Journal, 1993, 31, 951-956.
Lekhnitskii, S.G. Theory of Elasticity of an Anisotropic Body. Holden-Day, Inc., San
Francisco, 1963.
Muskhelishvili, N.I. Some Basic Problems of the Mathematical Theory of Elasticity. 2nd
English ed., P. Noordhoff Ltd., The Netherlands, 1962.
Nagendra, S., Haftka, R.T. and Gurdal, Z. Design of blade stiffened composite panels by
genetic algorithm. Structural Dynamics and Materials Conference, 1993, 4, 2418-2436.
Park, J. H., Hwang, J. H., Hwang, C. S. and Hwang, W. Stacking sequence design of
composite laminates for maximum strength using genetic algorithm. Composite Structures,
2001, 52, 217-231.
Sharma, D.S. Stress distribution around polygonal holes. International Journal of Mechanical
Sciences, 2012, 65, 115-124.
Sivakumar, K., Iyengar, N.G.R. and Deb, K. Optimization of composite laminates with cutouts
using genetic algorithm. Engineering Optimization, 2000, 32(5).
Ukadgaonker, V.G. and Rao, D.K.N. A general solution for stresses around holes in symmetric
laminates under in-plane loading. Composite Structures, 2000, 49, 339-354.
Venkataraman, S. and Haftka, R. T. Optimization of composite panels - a review. In Proceedings
of the 14th Annual Conference of the American Society for Composites, 1999, 27-29.
Selection of Pattern Material for Casting Operation using Fuzzy
PROMETHEE
P.A. Date, A.K. Digalwar*
Mechanical Engineering Dept., Birla Institute of Technology and Science, Pilani, Rajasthan, India
*Corresponding author (e-mail: akd@pilani.bits-pilani.ac.in)
This paper presents a methodology for selecting the best pattern making material for a
casting operation. The alternative materials are evaluated based on a set of
criteria/parameters decided by the user. The Fuzzy PROMETHEE algorithm has been
used for making the choice. The methodology has been explained step by step with the
help of an example.
1. Introduction
The manufacturing process of casting is one of the oldest known processes. It is usually
the first step in manufacturing and is a very versatile process as there is almost no limitation with
respect to size, shape and intricacy of the job that can be manufactured (Ghosh and Mallik,
2008). Over the years, the process has been modified and improved so as to keep up with the
rising demands and expectations of the customers. However, the basic steps remain unaltered.
The molten metal is poured into a mold which contains a hollow cavity of the desired shape. The
hollow cavity is made in the mold with the help of a pattern (Rao, 2008). A pattern is a model of
the actual casting with some allowances, constructed so that it can be used to form impressions
on the mold (Heine et al., 2004). It can be made from many materials like wood, steel, aluminum,
plastic, cast iron etc. Selection of the pattern material often becomes a crucial factor for the
productivity and quality of the operation. There are several factors to be considered, for
example machinability, wear resistance, strength, weight and cost, for the selection of the
pattern material for a casting operation.
In this paper, the Fuzzy PROMETHEE algorithm has been used for the selection of the
best possible pattern material for a casting operation. The Preference Ranking Organization
Method for Enrichment Evaluation (PROMETHEE) is an MCDM method originally given by
Brans and Vincke (1985). The PROMETHEE algorithm has been chosen because of its ease
of application, its efficiency, and the fact that its evaluation methodology is based on the
importance of a performance difference between two solutions, describing whether one
solution should be preferred to another (Anand and Kodali, 2008; Gupta et al., 2012). In
this study, fuzzy set theory, fuzzy arithmetic and fuzzy logic have been used to handle
qualitative data. The PROMETHEE algorithm has been suitably modified to incorporate
fuzzy data; hence, the algorithm is called Fuzzy PROMETHEE.
2. Methodology
The Fuzzy PROMETHEE algorithm has been used to select the pattern material from among
several alternatives. These alternatives are evaluated on the basis of a set of criteria. The
following sub-sections describe the process in detail.
2.1. Generation of criteria and alternatives
From the literature, it is found that the most common pattern making materials for casting
operation are wood, steel, aluminum, plastic and cast iron. These five materials are henceforth
referred to as alternatives. To evaluate these alternatives, certain parameters/criteria are defined
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
from the user point of view; these are machinability (MC), wear resistance (WR), strength (ST),
weight (WT), reparability (RP), resistance to corrosion (RC), resistance to swelling (RS) and cost
(CO).
2.2. Assigning weights to criteria
The first step is to assign weights to the criteria so that a measure of relative importance
of a criterion over all the other criteria is established. This is done by using a linguistic variable
called ‘degree of importance’. It has five linguistic values as shown in Table 1; the associated
fuzzy numbers are shown in Figure 1. Using this linguistic variable, pairwise comparison of the
criteria is done to obtain a Pairwise Comparison Matrix as shown in Table 2. A series of steps is
then followed to obtain the Fuzzy Weights of the criteria. Thereafter, Degree of Possibility is
calculated which gives the Normalized Crisp Weights for each criterion as shown in Table 3. The
above mentioned computations are done through the following equations:
Row Sum, Sri = Σj=1..n bij = (Σj eij, Σj fij, Σj gij)
Total Sum, St = Σi=1..n Sri
Fuzzy Weight, Wi = Sri × [St]^(-1)
where bij is the fuzzy number in the i-th row and j-th column of the pairwise comparison matrix,
(eij, fij, gij) is the triangular fuzzy number represented by bij, and n is the number of criteria used
(n = 8 in this study).
Degree of Possibility, V(Si ≥ Sj) = 1, if fi ≥ fj; 0, if ej ≥ gi; (ej − gi) / [(fi − gi) − (fj − ej)], otherwise;
for i, j = 1, 2, …, n and i ≠ j.
Minimum degree of possibility for the i-th criterion, D′(Ai) = minj {V(Si ≥ Sj)}, j = 1, 2, …, n
The weight vector, W′ = {D′(A1), D′(A2), …, D′(An)}T
Normalizing the weight vector, W = {D(A1), D(A2), …, D(An)}T
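The weighting steps above can be sketched in a few lines (illustrative only, not the authors' code; the triangular fuzzy weights are taken from Table 2, and the degree-of-possibility formula for triangular fuzzy numbers is the standard extent-analysis form assumed here):

```python
# Sketch: normalized crisp weights from triangular fuzzy weights via
# the degree of possibility (fuzzy weights as in Tables 2 and 3).
fuzzy_w = {
    "MC": (0.09, 0.17, 0.31), "WR": (0.07, 0.14, 0.27),
    "ST": (0.08, 0.15, 0.29), "WT": (0.03, 0.08, 0.19),
    "RP": (0.07, 0.14, 0.27), "RC": (0.05, 0.11, 0.23),
    "RS": (0.05, 0.11, 0.23), "CO": (0.04, 0.09, 0.20),
}

def possibility(si, sj):
    """Degree of possibility V(Si >= Sj) for triangular fuzzy numbers."""
    ei, fi, gi = si
    ej, fj, gj = sj
    if fi >= fj:
        return 1.0
    if ej >= gi:
        return 0.0
    return (ej - gi) / ((fi - gi) - (fj - ej))

# D'(Ai): minimum degree of possibility of criterion i over all others.
mins = {c: min(possibility(w, other)
               for o, other in fuzzy_w.items() if o != c)
        for c, w in fuzzy_w.items()}

total = sum(mins.values())
norm = {c: v / total for c, v in mins.items()}   # normalized crisp weights
print(round(mins["WT"], 2), round(norm["MC"], 2))   # 0.53 0.16
```

The printed values agree with the 'Minimum' and 'Normalized Weights' columns of Table 3.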
Table 1. Linguistic Values and Fuzzy Numbers of the Linguistic Variable 'Degree of Importance' for
Pairwise Comparison Matrix
_____________________________________________________________________________
Linguistic Values        Fuzzy No.
_____________________________________________________________________________
Much More Important      (4,5,6)
More Important           (3,4,5)
Equally Important        (2,3,4)
Less Important           (1,2,3)
Very Less Important      (0,1,2)
_____________________________________________________________________________
Table 2. Pairwise Comparison Matrix and Calculation of Fuzzy Weights of the Criteria
_____________________________________________________________________________
     MC      WR      ST      WT      RP      RC      RS      CO      Row Sum (Sri)  Fuzzy Weights
_____________________________________________________________________________
MC  (2,3,4) (3,4,5) (3,4,5) (4,5,6) (2,3,4) (3,4,5) (3,4,5) (4,5,6)  (24,32,40)  (0.09, 0.17, 0.31)
WR  (1,2,3) (2,3,4) (2,3,4) (3,4,5) (2,3,4) (3,4,5) (3,4,5) (3,4,5)  (19,27,35)  (0.07, 0.14, 0.27)
ST  (1,2,3) (2,3,4) (2,3,4) (3,4,5) (3,4,5) (3,4,5) (3,4,5) (4,5,6)  (21,29,37)  (0.08, 0.15, 0.29)
WT  (0,1,2) (1,2,3) (1,2,3) (2,3,4) (1,2,3) (1,2,3) (1,2,3) (1,2,3)  (8,16,24)   (0.03, 0.08, 0.19)
RP  (2,3,4) (2,3,4) (1,2,3) (3,4,5) (2,3,4) (3,4,5) (3,4,5) (3,4,5)  (19,27,35)  (0.07, 0.14, 0.27)
RC  (1,2,3) (1,2,3) (1,2,3) (3,4,5) (1,2,3) (2,3,4) (2,3,4) (3,4,5)  (14,22,30)  (0.05, 0.11, 0.23)
RS  (1,2,3) (1,2,3) (1,2,3) (3,4,5) (1,2,3) (2,3,4) (2,3,4) (2,3,4)  (13,21,29)  (0.05, 0.11, 0.23)
CO  (0,1,2) (1,2,3) (0,1,2) (3,4,5) (1,2,3) (1,2,3) (2,3,4) (2,3,4)  (10,18,26)  (0.04, 0.09, 0.20)
_____________________________________________________________________________
Total Sum (St): (128,192,256)
Figure 1. Fuzzy Numbers for Degree of Importance
Table 3. Computations for degree of possibility and normalized weights
_____________________________________________________________________________
Criteria  Fuzzy Weights          Degree of Possibility V(Si ≥ Sj)                 Minimum  Normalized
                              MC    WR    ST    WT    RP    RC    RS    CO                 Weights
_____________________________________________________________________________
MC   (0.09, 0.17, 0.31)    -     1     1     1     1     1     1     1     1       0.16
WR   (0.07, 0.14, 0.27)    0.86  -     0.95  1     1     1     1     1     0.86    0.14
ST   (0.08, 0.15, 0.29)    0.91  1     -     1     1     1     1     1     0.91    0.15
WT   (0.03, 0.08, 0.19)    0.53  0.67  0.61  -     0.67  0.82  0.82  0.94  0.53    0.09
RP   (0.07, 0.14, 0.27)    0.86  1     0.95  1     -     1     1     1     0.86    0.14
RC   (0.05, 0.11, 0.23)    0.7   0.84  0.79  1     0.84  -     1     1     0.7     0.11
RS   (0.05, 0.11, 0.23)    0.7   0.84  0.79  1     0.84  1     -     1     0.7     0.11
CO   (0.04, 0.09, 0.20)    0.58  0.72  0.67  1     0.72  0.88  0.88  -     0.58    0.10
_____________________________________________________________________________
Sum                                                                        6.14    1.00
2.3. Evaluation of alternatives for each criterion
The next step in the Fuzzy PROMETHEE algorithm is to evaluate the alternatives with
respect to each criterion. This is done by forming the Difference Matrix first. Then, the Preference
Function is defined. The preference function is evaluated for each element in the difference
matrix to get the Preference Matrix. Thereafter, the Global Preference Index, Positive Outranking
Flow, Negative Outranking Flow and Net Outranking Flow are calculated for each alternative. The
net outranking flow ranks the alternatives in the order of desirability. The following sub-sections
explain the process in depth.
Table 4. Evaluation of alternatives over all the criteria
_____________________________________________________________________________
Criteria   Wood        Mild Steel  Aluminum    Plastic     Cast Iron
_____________________________________________________________________________
MC       (30,40,50)  (10,20,30)  (20,30,40)  (20,30,40)  (20,30,40)
WR       (0,10,20)   (30,40,50)  (20,30,40)  (10,20,30)  (30,40,50)
ST       (10,20,30)  (30,40,50)  (20,30,40)  (20,30,40)  (20,30,40)
WT       (30,40,50)  (0,10,20)   (20,30,40)  (20,30,40)  (0,10,20)
RP       (30,40,50)  (20,30,40)  (0,10,20)   (10,20,30)  (20,30,40)
RC       (30,40,50)  (0,10,20)   (30,40,50)  (30,40,50)  (0,10,20)
RS       (0,10,20)   (30,40,50)  (30,40,50)  (30,40,50)  (30,40,50)
CO       (20,30,40)  (10,20,30)  (0,10,20)   (30,40,50)  (20,30,40)
_____________________________________________________________________________
Table 5. Linguistic Values and Fuzzy Numbers of the Linguistic Variable 'Degree of Comparison' for
Difference Matrix
_____________________________________________________________________________
Linguistic Values   Associated Fuzzy No.
_____________________________________________________________________________
Excellent           (30,40,50)
Good                (20,30,40)
Fair                (10,20,30)
Poor                (0,10,20)
_____________________________________________________________________________
2.3.1. Difference matrix
First of all, each alternative is evaluated for each criterion. For example the alternative
‘wood’ is evaluated for its machinability (MC), wear resistance (WR) etc. For the evaluation, the
linguistic variable ‘Degree of Comparison’ is used so that a relative estimation of comparison is
established. Table 5 shows the linguistic values and fuzzy numbers for the same. Figure 2 shows
the corresponding fuzzy numbers for the degree of comparison. For example, the strength (ST) of
mild steel is considerably higher than that of wood. So, on a comparative scale, it can be said that
while mild steel has 'excellent' strength, wood has only 'fair' strength, as shown in Table 4. In this
way, qualitative relations can be dealt with very easily and there is no particular need for exact
quantitative data, which often varies from source to source. For instance, just by knowing that, in
general, cost of aluminum > mild steel > cast iron > wood > plastic, appropriate linguistic values,
and hence appropriate fuzzy numbers, can be assigned to these alternatives for the criterion
'cost' without knowing their actual costs. This is helpful because the cost of a material depends on
many factors, such as geographical location, while the general relation more or less holds true.
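The ordinal reasoning above can be sketched as a simple mapping (illustrative only; the linguistic levels follow Table 5 and the assignments for the 'cost' criterion follow Table 4):

```python
# Sketch: assigning fuzzy numbers for the non-beneficial criterion 'cost'
# from an ordinal price relation alone (levels as in Tables 4 and 5).
fuzzy = {"Excellent": (30, 40, 50), "Good": (20, 30, 40),
         "Fair": (10, 20, 30), "Poor": (0, 10, 20)}

# Known relation: cost of aluminum > mild steel > cast iron > wood > plastic,
# so the cheapest material gets the best linguistic value.
materials = ["Aluminum", "Mild Steel", "Cast Iron", "Wood", "Plastic"]
levels = ["Poor", "Fair", "Good", "Good", "Excellent"]
cost_eval = {m: fuzzy[lv] for m, lv in zip(materials, levels)}
print(cost_eval["Plastic"])   # (30, 40, 50)
```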
Once the fuzzy numbers are assigned to each alternative for each criterion, the difference matrix
is constructed by pairwise comparison of all the alternatives over all the criteria. The difference of
the fuzzy numbers is calculated for each pair of alternatives for each criterion, using standard
fuzzy operations:
Fuzzy Difference, dk(bi, bj) = (e, f, g) = bi − bj = (ei, fi, gi) − (ej, fj, gj) = (ei − gj, fi − fj, gi − ej)
Figure 2. Fuzzy Numbers for Degree of Comparison
Figure 3. Indifference Threshold (q) and Preference Threshold (p)
2.3.2. Preference function
The preference of alternative 'bi' over alternative 'bj' with respect to a criterion 'k' is
measured with the help of a preference function Pk(bi, bj). In order to define the preference
function, the following thresholds have to be fixed. The Indifference Threshold 'q' is the value of
dk(bi, bj) below which there is indifference between selecting 'bi' or 'bj'; in this study,
q = (−20, 0, 20). The Preference Threshold 'p' is the lowest value of dk(bi, bj) above which there is
strict preference of 'bi' over 'bj'; in this study, p = (0, 20, 40). The preference function is the same
for all the criteria and is defined as follows:
Pk(bi, bj) = 0, if dk(bi, bj) ≤ q; [dk(bi, bj) − q] / (p − q), if q < dk(bi, bj) < p; 1, if dk(bi, bj) ≥ p
Comparison of two fuzzy numbers is done according to the degree of possibility.
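A simplified sketch of the difference and preference computation follows. It is not the paper's exact procedure: here the fuzzy difference is defuzzified to its centroid and compared against the centroids of q and p (0 and 20), whereas the paper compares fuzzy numbers via the degree of possibility.

```python
# Sketch: fuzzy difference and a linear preference function with
# indifference threshold q and preference threshold p (as centroids).
def fuzzy_diff(bi, bj):
    """dk(bi, bj) = bi - bj for triangular fuzzy numbers (e, f, g)."""
    return (bi[0] - bj[2], bi[1] - bj[1], bi[2] - bj[0])

def preference(d, q=0.0, p=20.0):
    """0 below q, 1 above p, linear in between (on the centroid of d)."""
    m = sum(d) / 3.0                 # centroid of the triangular number
    if m <= q:
        return 0.0
    if m >= p:
        return 1.0
    return (m - q) / (p - q)

d = fuzzy_diff((30, 40, 50), (10, 20, 30))   # wood vs mild steel on MC
print(d, preference(d))   # (0, 20, 40) 1.0
```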
2.3.3. Preference matrix and global preference index
Table 6. Preference Matrix
[The individual entries of Table 6 are not recoverable from the source. Its columns are the 20
ordered pairs of alternatives (W-S, W-A, W-P, W-C, S-W, S-A, S-P, S-C, A-W, A-S, A-P, A-C,
P-W, P-S, P-A, P-C, C-W, C-S, C-A, C-P, where W = Wood, S = Mild Steel, A = Aluminum,
P = Plastic, C = Cast Iron); its rows are the eight criteria with their normalized weights (MC 0.16,
WR 0.14, ST 0.15, WT 0.09, RP 0.14, RC 0.11, RS 0.11, CO 0.10); the entries are the preference
function values; and the final row gives the Global Preference Index for each pair.]
The Preference Matrix is obtained when the preference function is applied to the
difference matrix. Table 6 shows the Preference Matrix. The Global Preference Index for each
pair of alternatives is computed as the weighted sum of the corresponding column, using the
normalized criteria weights.
2.4. Outranking flows
The positive outranking flow gives a measure of the strength of an alternative. The negative
outranking flow gives a measure of the weakness of an alternative. The net outranking flow is
used to obtain a complete ranking of the alternatives. Table 7 shows the outranking flows.
Positive Outranking Flow of alternative 'a', φ+(a) = Σb≠a π(a, b)
Negative Outranking Flow of alternative 'a', φ−(a) = Σb≠a π(b, a)
Net Outranking Flow of alternative 'a', φ(a) = φ+(a) − φ−(a)
where π(a, b) is the global preference index of alternative 'a' over alternative 'b'.
Table 7. Outranking Flows
_____________________________________________________________________________
Alternative   Positive Outranking Flow   Negative Outranking Flow   Net Outranking Flow
_____________________________________________________________________________
Wood          1.721                      1.529                       0.192
Mild Steel    1.269                      1.475                      -0.206
Aluminum      0.988                      1.376                      -0.388
Plastic       1.303                      1.036                       0.267
Cast Iron     1.148                      1.013                       0.135
_____________________________________________________________________________
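The net flows and the resulting ranking can be checked with a few lines (flow values copied from Table 7):

```python
# Check of Table 7: net outranking flow = positive flow - negative flow,
# then rank the alternatives by net flow in decreasing order.
flows = {                  # alternative: (positive flow, negative flow)
    "Wood":       (1.721, 1.529),
    "Mild Steel": (1.269, 1.475),
    "Aluminum":   (0.988, 1.376),
    "Plastic":    (1.303, 1.036),
    "Cast Iron":  (1.148, 1.013),
}
net = {a: round(pos - neg, 3) for a, (pos, neg) in flows.items()}
ranking = sorted(net, key=net.get, reverse=True)
print(ranking)   # ['Plastic', 'Wood', 'Cast Iron', 'Mild Steel', 'Aluminum']
```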
3. Results
The net outranking flows show that the pattern materials are preferred in the following
decreasing order: Plastic > Wood > Cast Iron > Mild Steel > Aluminum. Hence, plastic is to be
selected for making patterns for the casting operation.
4. Conclusion
In this study, a methodology was proposed to select a pattern material for casting
operation from five possible materials called alternatives. The alternatives were evaluated based
on a set of criteria using the Fuzzy PROMETHEE algorithm. The methodology was explained with
the help of an example. The alternatives and criteria may vary in different circumstances, but the
algorithm can still be applied and its results used to identify the best alternative. For the example
used in this study, plastic is found to be the most apt material for pattern making.
References
Anand G., Kodali R., Selection of lean manufacturing systems using PROMETHEE. Journal of
Modelling in Management, 2008, 3(1), 40-70.
Brans J.P. and Vincke Ph., A Preference Ranking Organization Method: (The PROMETHEE
Method for MCDM). Management Science, 1985, 31(6), 647-656.
Ghosh A., Mallik A.K., Manufacturing Science, East-West Press Private Limited, New Delhi, 2008.
Gupta R., Sachdeva A., Bharadwaj A., Selection of logistic service provider using fuzzy
PROMETHEE for cement industry. Journal of Manufacturing Technology Management, 2012,
23(7), 899-921.
Heine R. W., Loper C. R., Rosenthal C., Principles of Metal Casting, Tata McGraw-Hill
Publications, II Edition, 2004.
Rao P.N., Manufacturing Technology Vol-1, Tata McGraw-Hill Publications, III Edition, New Delhi,
2008.
Parametric Optimisation of Cold Backward Extrusion Process
using Teaching-Learning-Based Optimization Algorithm
P. J. Pawar1*, R. V. Rao2
1K.K. Wagh Institute of Engineering Education and Research, Nasik, Maharashtra, India
2S.V. National Institute of Technology, Surat, Gujarat, India
*Corresponding author (e-mail: pjpawar1@rediffmail.com)
The cold backward extrusion process is a widely accepted metal forming process due to its
high production rate and its capability to produce complex parts with good surface quality.
The present work develops mathematical relations correlating various parameters of the
cold backward extrusion process, such as extrusion force, stroke length, slug diameter, and
back pressure, with the extrusion power and dimensional stability of the product. A recently
developed advanced optimization technique, known as teaching-learning-based optimization
(TLBO), is then applied to find the optimal combination of process parameters with the
objectives of minimizing extrusion power as well as dimensional instability, subject to
constraints on diaphragm thickness and nozzle thickness.
1. Introduction
The cold backward extrusion process is capable of producing complex part designs for special-
purpose use and serves many industries, such as pharmaceutical, aerospace, automobile and
consumer goods. Despite these enormous advantages, a big hindrance to the full-scale
application of cold backward extrusion technology is the huge amount of power required to form
the product in the cold state, which substantially increases the manufacturing cost of the product.
The distribution of comparative strain in cold backward extrusion is clearly non-homogeneous,
which means the cold backward extrusion process exhibits non-stationary process behavior,
resulting in dimensional instability.
Although various researchers (Elkholy, 1996; Kuzman et al., 1996; Onuh et al., 2003;
Danckert, 2004; Tiernan et al., 2005; Abrinia and Gharibi, 2008; Plančak et al., 2009) have
considered the effect of different process variables on various performance measures, these
efforts need to be further extended by considering more performance measures and more input
variables. Extrusion power and dimensional stability of the product are crucial performance
measures for the cold backward extrusion process, and hence they are considered in the present
work. A mathematical model relating these performance measures to four important process
parameters, namely extrusion force (Fe), stroke length (Sl), diameter of slug (ds), and back
pressure (Pb), is developed using a second-order response surface modeling technique.
Furthermore, the literature reveals that very few efforts have been made so far for the
optimization of process parameters of the cold backward extrusion process. Cold backward
extrusion processes in engineering industries must be cost- and quality-efficient to sustain and
excel in the current competitive scenario. A significant improvement in process efficiency can be
obtained by identifying the effect of the process parameters on the power requirement and the
dimensional variation, and then selecting the optimal combination of the critical process control
factors. This work is therefore intended to address the optimization aspects of the cold backward
extrusion process.
For most of the non-traditional optimization algorithms such as genetic algorithm (GA),
particle swarm optimization (PSO), artificial bee colony (ABC), etc., the selection of suitable
values of algorithm-specific parameters for a particular application is itself a complex optimization
problem. To overcome this drawback of the existing advanced optimization algorithms, an
optimization algorithm known as teaching-learning-based optimization (TLBO) was developed by
Rao et al. (2011a, 2011b) and Rao and Patel (2012). TLBO requires only common controlling
parameters, such as population size and number of generations, for its working. In this sense,
TLBO can be called an algorithm-specific parameter-less algorithm.
The next section describes the development of a mathematical model for cold backward
extrusion process using response surface modeling (RSM).
2. Response surface modeling (RSM)
Response surface modeling quantifies the relationship between the controllable input
parameters and the obtained responses. In modeling of cold backward extrusion using RSM,
sufficient data is collected through designed experimentation. A 2^k factorial experiment (where
k = number of variables; in this study k = 4) with a central composite second-order rotatable
design is used. This consists of 16 corner points, 8 axial points, and 4 centre points at the zero
level. The axial points are located in the coded test condition space through the parameter 'α'.
For the design to remain rotatable, 'α' is determined as (2^k)^(1/4) = 2. Coded levels for the
factors are given in Table 1.
Table 1. Coded values of process variables
_____________________________________________________________________________
Factors                       Coded levels
                        -2       -1       0        +1       +2
_____________________________________________________________________________
Extrusion force (kN)    380      477.5    575      672.5    770
Stroke length (mm)      341      341.75   342.5    343.25   344
Diameter of slug (mm)   19       19.75    20.25    21.25    22
Back pressure (MPa)     50       68.225   87.5     106.78   125
_____________________________________________________________________________
The experimental set-up used for data collection is as follows:
Machine type/make: Horizontal type, manufactured by Maharashtra Machine Tools.
Capacity of machine: 100 tons.
Work piece material: Benthovate-C, made of 96% pure aluminum.
Work piece specifications: The work piece is an aluminum collapsible tube, as shown in Fig. 1.
Figure 1. Specification of test component
To study the effect of the process parameters, i.e. Fe, Sl, ds, and Pb, on the performance
measures, i.e. power (Pw), deviation in extruded length (Le), thickness of diaphragm (Td), and
nozzle thickness (Tn), a second-order polynomial response of the following form is fitted:

y = b0 + Σi=1..k bi xi + Σi=1..k bii xi^2 + Σi<j bij xi xj      (1)
where 'y' is the response and the xi (i = 1, 2, …, k) are the coded levels of the k quantitative
variables. The coefficient b0 is the free term, the coefficients bi are the linear terms, the
coefficients bii are the quadratic terms, and the coefficients bij are the interaction terms.
Equations (2) to (5) are then derived by determining the values of the coefficients using the least
squares technique for the observations collected for power (Pw), extruded length (Le), thickness
of diaphragm (Td), and nozzle thickness (Tn) respectively.
[In equations (2)-(5) the +/− operators between terms did not survive extraction; the coefficient
magnitudes and term order are reproduced as printed.]

Pw = 5166.443  279.723x1  212.949x2  175.32x3  254.782x4  56.597x1^2  64.815x2^2  28.354x3^2  44.014x4^2  11.029x1x2  31.295x1x3  16.021x1x4  13.097x2x3  12.23x2x4  36.667x3x4      (2)

Le = 151.375  3.648x1  4.126x2  2.2958x3  2.6083x4  0.4927x1^2  0.5302x2^2  0.4985x3^2  1.2797x4^2  0.3175x1x2  0.0025x1x3  0.805x1x4  1.2475x2x3  1.135x2x4  0.3325x3x4      (3)

Td = 0.13605  0.013737x1  0.009462x2  0.0035958x3  0.020004x4  0.006703x1^2  0.008028x2^2  0.0072406x3^2  0.0079031x4^2  0.0018937x1x2  0.000943x1x3  0.00263125x1x4  0.0000687x2x3  0.0006437x2x4  0.00094375x3x4      (4)

Tn = 1.28395  0.0637666x1  0.0216x2  0.000775x3  0.0742333x4  0.0074541x1^2  0.0069958x2^2  0.00041666x3^2  0.01538333x4^2  0.016825x1x2  0.015x1x3  0.0483375x1x4  0.004875x2x3  0.0088625x2x4  0.0249625x3x4      (5)
To test whether the data are well fitted by the models, the S values of the regression
analysis for power, extruded length, thickness of diaphragm, and nozzle thickness are obtained
as 118.30, 2.411, 0.011, and 0.054 respectively, which are small, and the R values for these
responses are 0.94, 0.88, 0.83, and 0.78 respectively. The R values are moderately high for all
responses. Hence, the models developed in this work fit the data well.
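The fitting step can be sketched as follows, on synthetic data for k = 2 factors (purely illustrative; the factor levels, response surface, and coefficient values are assumed, not the paper's measurements):

```python
# Sketch: least-squares fit of a second-order response surface
# y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj), for k = 2 factors.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))           # coded factor levels

def true_response(X):                          # assumed 'true' surface
    return (5.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1]
            + 0.5 * X[:, 0] ** 2 + 0.3 * X[:, 0] * X[:, 1])

y = true_response(X)

def design_matrix(X):
    k = X.shape[1]
    cols = [np.ones(len(X))]                               # b0
    cols += [X[:, i] for i in range(k)]                    # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]               # quadratic terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

b, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(b, 3))   # recovers approximately [5, 2, -1, 0.5, 0, 0.3]
```

With noise-free data and 30 runs for 6 coefficients, the least-squares solution recovers the assumed coefficients essentially exactly; with real experimental data, the fit quality would be judged by the S and R statistics as in the text.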
The next section describes the working of teaching-learning-based algorithm used in this
work for optimization of cold backward extrusion process.
3. Teaching-learning-based optimization algorithm
The teaching-learning-based optimization algorithm (TLBO) is a teaching-learning-process-
inspired algorithm proposed by Rao et al. (2011a, 2011b) and Rao and Patel (2012), which is
based on the influence of a teacher on the output of learners in a class. The algorithm mimics the
teaching-learning ability of the teacher and learners in a classroom.
The working of TLBO is divided into two parts, ‘Teacher phase’ and ‘Learner phase’.
Working of both the phases is explained below.
Teacher phase: During this phase a teacher tries to increase the mean result of the class in the
subject taught by him or her, depending on his or her capability. The existing solution is updated
in the teacher phase according to the following expression:
X'j,k,i = Xj,k,i + Difference_Meanj,k,i      (6)
where Difference_Meanj,k,i = ri (Xj,kbest,i − TF Mj,i), ri is a random number in the range [0, 1],
Mj,i is the mean result of the learners in subject j, Xj,kbest,i is the result of the best learner (the
teacher) in subject j, and the teaching factor TF can be taken as 1 or 2.
Learner phase: Learners increase their knowledge by interaction among themselves. A learner
interacts randomly with other learners to enhance his or her knowledge. In this phase the solution
is updated by randomly selecting two learners P and Q such that X'total-P,i ≠ X'total-Q,i (where
X'total-P,i and X'total-Q,i are the updated values of Xtotal-P,i and Xtotal-Q,i respectively at the end
of the teacher phase):
X''j,P,i = X'j,P,i + ri (X'j,P,i − X'j,Q,i), if X'total-P,i < X'total-Q,i      (7)
X''j,P,i = X'j,P,i + ri (X'j,Q,i − X'j,P,i), if X'total-Q,i < X'total-P,i      (8)
Accept X''j,P,i if it gives a better function value.
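The two phases can be sketched as a minimal TLBO for an unconstrained minimization problem (illustrative only; the test function, population size, and number of generations are assumptions, and the sketch is simplified relative to the elitist variants cited above):

```python
# Minimal TLBO sketch: teacher phase moves learners toward the best
# solution relative to the population mean; learner phase lets each
# learner interact with a random partner. Greedy acceptance in both.
import random

def tlbo(f, bounds, pop_size=20, generations=100, seed=1):
    random.seed(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Teacher phase (eq. 6): shift toward the teacher (best learner).
        best = min(pop, key=f)
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        for i, x in enumerate(pop):
            tf = random.choice((1, 2))              # teaching factor TF
            cand = clip([x[j] + random.random() * (best[j] - tf * mean[j])
                         for j in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
        # Learner phase (eqs. 7-8): learn from a random partner.
        for i, x in enumerate(pop):
            q = pop[random.randrange(pop_size)]
            step = 1 if f(x) < f(q) else -1          # away from worse, toward better
            cand = clip([x[j] + step * random.random() * (x[j] - q[j])
                         for j in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = tlbo(sphere, [(-5, 5)] * 2)
print(sphere(best))   # close to 0
```

Note that only the common controlling parameters (population size, number of generations) appear; no algorithm-specific parameters are needed.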
4. Example
Now, to demonstrate and validate the teaching-learning-based optimization algorithm for
parameter optimization of cold backward extrusion process, an example is considered. In this
example the following two objectives are considered:
Objective 1: Minimization of extrusion power given by equation (2)
Objective 2: Minimization of extrusion length as given by equation (3)
The combined objective function is then formulated as:

Min Z = w1 (Pw / Pwmin) + w2 (Le / Lemin)      (9)

where w1 and w2 are the weightages assigned to the two objectives. In the present example,
equal weightages are considered for both objectives. Pwmin is the minimum value of extrusion
power obtained when the single-objective optimization problem considering only extrusion power
was solved for the given constraints. Lemin is the minimum value of extrusion length obtained
when the single-objective optimization problem considering only extrusion length was solved for
the given constraints. The constraints and variable bounds for the example are as follows.
Constraints:
a) Thickness of diaphragm: Td − 0.10 ≥ 0; 0.15 − Td ≥ 0
b) Thickness of nozzle: Tn − 1.15 ≥ 0; 1.525 − Tn ≥ 0
c) Length of the extruded tube: Le ≥ 130
The variable bounds for the four variables considered in this work are: 380 ≤ Fe ≤ 770 (kN);
341 ≤ Sl ≤ 344 (mm); 19 ≤ ds ≤ 22 (mm); 50 ≤ Pb ≤ 125 (N/mm2)
The results of optimization using TLBO and GA are presented in Table 2.
Table 2. Results of multi-objective optimization of cold backward extrusion process using GA
and TLBO
_____________________________________________________________________________
Method  Fe   Sl   ds     Pb     Pw       Le     Td   Tn     Z
_____________________________________________________________________________
GA      380  341  19.66  56.80  3239.44  132.9  0.1  1.421  1.0154
TLBO    380  341  19.71  54.97  3252.33  132.4  0.1  1.402  1.0154
_____________________________________________________________________________
Table 3 shows the mean values of the optimum solutions obtained by using TLBO and GA for 30
trial runs.
Table 3. Mean and standard deviation of the optimum solutions obtained by using TLBO and GA
for minimization of the combined objective function (number of function evaluations = 200)
_____________________________________________________________________________
Method   Mean value   Best value   Standard deviation
_____________________________________________________________________________
GA       1.0164       1.0154       0.0013
TLBO     1.0158       1.0154       0.0006
_____________________________________________________________________________
It is observed from Table 3 that, although the best value of the optimum solution for the
combined objective function obtained by both algorithms is the same, TLBO outperformed GA
with respect to the mean solution.
5. Conclusions
This paper deals with the optimization aspects of cold backward extrusion process. The
objectives considered are minimization of extrusion power and improving the dimensional stability
thereby minimizing the deviation in the extruded length. The effect of important extrusion process
variables such as extrusion force, stroke length, diameter of slug, and back pressure on the
extrusion power and extrusion length is obtained by response surface modeling approach. The
optimization is then carried out using the recently developed optimization algorithm known as
teaching-learning-based optimization (TLBO) algorithm. The performance of the TLBO algorithm
is studied in terms of convergence rate and accuracy of the solution. Compared to other
advanced optimization methods, the TLBO algorithm does not require the selection of any
algorithm-specific parameters, which makes it easy and effective to apply to real-life optimization
problems.
References
Abrinia, K., Gharibi, K. An investigation into the backward extrusion of thin walled cans.
International Journal of Material Forming, 2008, 1, 411-414.
Danckert, J. The influence of the punch land in backward can extrusion. Material Technology,
2004, 53, 227-230.
Elkholy, A. H. Parametric optimization of power in hydrostatics extrusion. Journal of Materials
Processing Technology, 1996, 70, 111-115.
Kuzman, K., Pfeifer, E., Bay, N., Hunding, J. Control of material flow in a combined backward
can-forward rod extrusion. Journal of Materials Processing Technology, 1996, 60, 141-147.
Onuh, S. O., Ekoja, M., Adeyemi, M. B. Effects of die geometry and extrusion speed on the cold
extrusion of aluminium and lead alloys. Journal of Materials Processing Technology. 2003,
132, 274-285.
Plančak, M., Kuzman, K., Vilotić, D., Movrin, D. FE analysis and experimental investigation of
cold extrusion by shaped punch. International Journal of Material Forming, 2009, 2, 117-120.
Rao, R. V., Patel, V. An elitist teaching-learning-based optimization algorithm for solving complex
constrained optimization problems. International Journal of Industrial Engineering
Computations, 2012, 3, 535-560.
Rao, R.V., Savsani, V.J., Vakharia, D.P. Teaching-learning-based optimization: An optimization
method for continuous non-linear large scale problems. Information Sciences, 2011b, 183, 1-15.
Rao, R.V., Savsani, V.J., Vakharia, D.P. Teaching-learning-based optimization: A novel method
for constrained mechanical design optimization problems. Computer Aided Design, 2011a,
43, 303-315.
Tiernan, P., Draganescu, B., Hillery, M.T. Modelling of extrusion force using the surface response
method. International Journal of Advanced Manufacturing Technology, 2005, 27, 48-52.
Optimization of Hole-Making Operations: A Genetic Algorithm
Approach
P. J. Pawar1*, M. L. Naik2
1K. K. Wagh Institute of Engineering Education and Research, Nasik, Maharashtra, India
2Sandip Foundation's SITRC, Nasik
*Corresponding author (e-mail: pjpawar1@rediffmail.com)
This paper deals with the optimization of hole-making operations in applications where
several holes are required to be machined with a number of tools of different sizes and
types while satisfying the precedence of operations. The objective of interest in the
considered problem is to minimize the total cost of the hole-making operation. A real-coded
genetic algorithm is presented to solve this problem. An application example is
considered to show the effectiveness of the proposed approach.
1. Introduction
Hole-making operations such as drilling, reaming, and tapping compose a large
portion of machining processes for many industrial parts. For a part with many holes, a
particular tool may be required by several holes and also tools of different diameters may be
used to drill a single hole to its final size. To reduce tool traverse, it may be suggested that the
spindle should not be moved until a hole is completely drilled using several tools of different
diameters. This however will lead to excessive tool switches. By the same token, though tool
switches can be reduced by completing all operations on all the holes that require the current
tool, the travel time will be increased. Furthermore, the amount of tool movement and the
number of tool switches will depend on which set of tools are to be used to drill each hole to
its final size. The machining cost and tool cost are affected by the selection of tool
combination for each hole. Various researchers had employed different optimization methods
such as Tabu search (Kolahan and Liang, 2000), traveling salesman problem (Kenneth et al,
2002; Khan et al, 2010), particle swarm optimization (Onwubolu and Clerc, 2004), ant colony
algorithm (Ghaiebi and Solimanpur, 2007), hybrid GASA (Oysu and Bingul, 2009) to solve this
problem. The present work is focused on the formulation of the model and proposes real
coded genetic algorithm approach to solve the optimization of hole-making problem. The
objective is to minimize production cost which consists of tool travel cost, and tool switch cost
subjected to constraint on precedence of operation.
2. Problem statement
To make a complete hole, tools of different sizes may be needed. This is especially
necessary when the diameter of the hole to be made is large. In this case, the hole is initially
made using small-sized tools and then enlarged to the size of interest using large-sized tools.
The selection of the set of tools and their sequence can directly affect the machining time and
cost. It is common in practice that several holes need a particular tool and that a hole may need
several different tools. The time needed to move from one hole to another is called airtime. To
minimize tool airtime, it may initially be thought that a hole should be completed with its different
tools before moving to another hole. However, this may result in excessive tool switches and thus
an increase in tool switch time. On the other hand, one may decide to process all the holes which
need the tool currently in use. Although this decision will decrease the tool switch time, it can
result in a huge increase in tool airtime. For each hole in Fig. 1, the largest tool, shown by solid
lines, has to be used to drill the hole to its final size. Some pilot or intermediate tools, shown by
dashed lines, may also be used. For instance, for hole A, there could be four different sets of
tools: {1,2,3}, {2,3}, {1,3}, and {3}. The selection of the tool set for each hole directly affects the
required number of tool switches and the tool travel distance. The problem is now to select a set
of operations along with the
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
optimum sequence of those operations in such a way that the total processing cost is
minimized.
Figure 1. A schematic representation of alternative sets of tools for hole making
The cost components considered in this paper include:
a) Tool travel cost: This is the cost of moving the tool from its previous location to the current
drilling position. Tool travel cost is proportional to the distance required for the spindle to
move between two consecutive drilling locations (Kolahan et al, 2000).
b) Tool switch cost: This cost occurs whenever a different tool is used for the next operation.
If for any operation tool type is not available on the spindle, then the required tool must be
loaded on the spindle prior to performing operation. This causes a longer tool switch time and
hence a higher tool switch cost (Kolahan et al, 2000).
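These two cost components can be sketched in a few lines of Python (an illustrative sketch, not the authors' C implementation; the coordinates, tool ids, and the 0.3 min switch time are hypothetical, while the rates mirror the application example later in the paper):

```python
import math

# Hypothetical cost rates mirroring the application example:
# a in Rs per mm of spindle travel, b in Rs per min of tool switch time.
A_RATE, B_RATE = 0.04, 50.0

def travel_cost(prev_xy, cur_xy, a=A_RATE):
    # Cost of moving the spindle between two consecutive drilling
    # locations, proportional to the Euclidean distance travelled.
    return a * math.dist(prev_xy, cur_xy)

def switch_cost(prev_tool, cur_tool, switch_time_min, b=B_RATE):
    # Incurred only when the next operation needs a different tool
    # than the one currently on the spindle.
    return b * switch_time_min if cur_tool != prev_tool else 0.0

c_travel = travel_cost((0.0, 0.0), (30.0, 0.0))    # a 30 mm move
c_switch = switch_cost(1, 2, switch_time_min=0.3)  # a tool change
```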
3. Problem formulation
The objective of interest in this paper is to minimize the summation of tool airtime and
tool switching time and thereby to minimize the production cost. The following mathematical
model (Kolahan et al, 1996) is used in this work.
Min Z = Min y = Σ_{i=1}^{k} Σ_{j=1, j≠i}^{k} (a·p_ij + b·q_ij) + (penalty value) × (No. of constraint violations)
The following notation is used in the proposed mathematical model.
i = tool type index, i = 1, …
j = hole index, j = 1, …
k = number of possible operations in the sequence
a = cost per unit non-productive travelling distance (Rs/mm)
b = cost per unit tool switch time (Rs/min)
p_ij = non-productive travelling distance between the current hole and the previous hole (mm)
q_ij = tool switch time between the current tool and the tool required by the previous hole (min)
y = sum of travelling and switching costs (Rs)
Z = total cost (Rs)
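The model above can be evaluated for a candidate operation sequence as in the following Python sketch (illustrative only; the two-hole instance, distance matrix, and switch times are hypothetical):

```python
def total_cost(seq, dist, switch_time, prec, a, b, penalty):
    """seq: list of (hole, tool) operations in processing order.
    dist[h1][h2]: travel distance in mm; switch_time[t1][t2]: tool
    change time in min; prec: (earlier_op, later_op) pairs that must hold."""
    y = 0.0
    for (h1, t1), (h2, t2) in zip(seq, seq[1:]):
        y += a * dist[h1][h2]              # non-productive travel cost
        if t1 != t2:
            y += b * switch_time[t1][t2]   # tool switch cost
    # Count precedence violations (e.g. drilling must precede reaming).
    pos = {op: i for i, op in enumerate(seq)}
    violations = sum(1 for u, v in prec if pos[u] > pos[v])
    return y + penalty * violations

# Hypothetical tiny instance: two holes, tool 1 (drill) and tool 2 (ream).
dist = {0: {0: 0.0, 1: 10.0}, 1: {0: 10.0, 1: 0.0}}
switch = {1: {1: 0.0, 2: 0.3}, 2: {1: 0.3, 2: 0.0}}
ops = [(0, 1), (1, 2)]   # drill hole 0, then ream hole 1
z = total_cost(ops, dist, switch, prec=[((0, 1), (1, 2))],
               a=0.04, b=50.0, penalty=750.0)
```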
4. Application example
This example presents the optimization of a hole-making (drilling) operation in which
precedence of operations is required. The proposed algorithm was coded in C to
determine the optimum sequence of operations for the part shown in Fig. 3, which requires
drilling. Moulds, moulded parts, and plastic moulded goods are typical
applications of this industrial problem. The selection of the tool set for each hole directly affects
the required number of tool switches and the tool travel distance. The problem is
to select a set of operations, along with the optimum sequence of those operations, in such a way
that the total processing cost is minimized. The process parameters are a = 0.04 Rs/mm and
b = 50 Rs/min. The tool switch times are considered to be in the range of 0.2 to 0.5 minutes,
depending on operator skill. The computation is carried out for 10 different starting
sequences. To investigate the effect of the genetic algorithm on search performance, the search
was repeated from each starting sequence until the optimum result was obtained.
Figure 3 Top view of example part
The steps of optimization using RCGA algorithm are discussed below.
4.1 Parameter setting
The RCGA parameters shown in Table 1 were set to the values that gave good results for the
hole-making optimization problem.
Table 1. Parameters for Genetic Algorithm.
S.N | Parameter | Value
1 | Population Size | 10
2 | Crossover Fraction | 0.8
3 | Mutation Fraction | 0.2

4.2 Initialize the population
In the present work a population size of 10 is considered. This example involves a
precedence constraint: a drilling operation on a hole must be followed by reaming, and if this
order is not maintained the constraint is violated.
The objective function value for the initial population is calculated using the formula
Cost (Z) = Cost (y) + Penalty value × Number of constraint violations.
The penalty value is selected such that any solution violating the constraint (i.e., one in which the
precedence of operations is not maintained) should never appear in the optimum sequence. For this
particular problem the penalty value is taken as 750. Table 2 shows the objective function value and
the number of constraint violations for the initial 10 sequences.
Table 2. Initial population

S.N | Initial random sequence (d: drilling, r: reaming) | Z (Rs.) | No. of constraint violations
1 | 2r 5d 22d 3d 11d 30d 7d 23d 13d 19d 8r 27d 24d 4d 10d 1d 32d 28d 20d 2d 17d 12d 29d 27r 6d 16d 8d 25d 9d 32r 11r 15d 21d 14d 18d 26d 23r 31d | 2453.5 | 2
2 | 23r 17d 22d 2r 14d 30d 26d 13d 6d 27r 8r 21d 10d 31d 18d 28d 15d 5d 12d 9d 19d 16d 24d 25d 29d 8d 11d 20d 23d 4d 3d 32d 7d 2d 11r 1d 27d 32r | 3963.0 | 4
3 | 11r 18d 32r 4d 23d 1d 5d 8d 31d 19d 27d 7d 2d 20d 24d 13d 17d 28d 15d 26d 12d 29d 6d 32d 16d 10d 25d 11d 9d 23r 21d 14d 3d 22d 8r 2r 30d 27r | 2515.7 | 2
4 | 27r 17d 29d 8d 2r 15d 22d 9d 30d 14d 32r 13d 20d 3d 7d 11r 28d 1d 26d 12d 18d 6d 21d 23r 25d 8r 31d 10d 16d 23d 27d 24d 5d 32d 4d 19d 2d 11d | 4653.9 | 5
5 | 5d 28d 12d 23d 2d 24d 10d 7d 16d 32d 6d 11d 1d 22d 4d 21d 30d 25d 14d 31d 8d 27d 18d 13d 3d 19d 2r 29d 32r 26d 17d 8r 15d 11r 27r 20d 23r 9d | 923.78 | 0
6 | 24d 30d 8r 9d 10d 19d 27d 20d 2d 21d 22d 15d 4d 32d 29d 16d 27r 18d 12d 17d 11d 7d 26d 31d 3d 32r 25d 14d 13d 11r 23d 28d 5d 1d 6d 8d 2r 23r | 1710.9 | 1
7 | 2d 6d 13d 26d 17d 14d 4d 5d 7d 11r 1d 18d 29d 19d 25d 15d 31d 22d 23r 9d 2r 20d 8d 16d 28d 32d 10d 30d 12d 3d 27d 21d 24d 8r 11d 23d 32r 27r | 2414.1 | 2
8 | 23r 32r 30d 25d 24d 22d 2d 10d 20d 3d 31d 9d 23d 27r 21d 12d 13d 32d 7d 17d 1d 11d 8d 4d 2r 28d 16d 15d 27d 14d 6d 19d 5d 8r 18d 29d 26d 11r | 3142.8 | 3
9 | 19d 14d 28d 6d 4d 17d 8r 5d 32r 12d 29d 16d 23r 26d 9d 1d 20d 7d 15d 3d 10d 8d 18d 32d 24d 21d 11r 23d 25d 22d 30d 31d 2r 27r 13d 11d 27d 2d | 5474.84 | 6
10 | 2d 18d 23d 15d 25d 21d 11d 22d 8d 3d 16d 27r 26d 24d 32d 5d 31d 12d 30d 7d 10d 1d 28d 8r 20d 14d 29d 4d 13d 2r 6d 27d 17d 9d 23r 19d 32r 11r | 1705.09 | 1

4.3 Reproduction
In RCGA, the reproduction of the solution is based on the shared fitness value.
4.4 Updating the solutions
The new solutions (offspring) are obtained using crossover and mutation. In the present
work, single-point crossover is used along with mutation to generate the new population.
Steps 2 and 3 are repeated until the optimum sequence, i.e., the one with minimum cost (Z), is obtained.
The existing method used in actual industrial practice for hole-making operations, which follows
no optimized sequence, serves as the baseline for the case study.
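The crossover and mutation steps can be sketched as follows. Because a chromosome here is a permutation of operations, plain single-point crossover would duplicate genes; the repair shown (skipping operations already copied) is one common choice and is an assumption, not necessarily the authors' exact operator:

```python
import random

def crossover(p1, p2, point):
    # Single-point crossover with permutation repair: copy p1 up to
    # the cut point, then append p2's genes in order, skipping
    # operations already present in the head.
    head = p1[:point]
    tail = [g for g in p2 if g not in head]
    return head + tail

def mutate(seq, rng):
    # Swap mutation: exchange two randomly chosen operations,
    # which always preserves the permutation property.
    s = seq[:]
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

rng = random.Random(1)
child = crossover(['1d', '2d', '3d', '2r'], ['3d', '2r', '1d', '2d'], point=2)
# child keeps '1d', '2d' then takes '3d', '2r' in the second parent's order
```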
The results of optimization using the proposed approach are compared with those obtained with
current industrial practice in Table 3.
Table 3. Result of optimization

Method | Sequence | Z (Rs.)
Existing practice | 5d 28d 12d 23d 2d 24d 10d 7d 16d 32d 6d 11d 1d 22d 4d 21d 30d 25d 14d 31d 8d 27d 18d 13d 3d 19d 2r 29d 32r 26d 17d 8r 15d 11r 27r 20d 23r 9d | 923.78
RCGA | 2d 10d 16d 12d 5d 24d 7d 32d 1d 28d 11d 6d 4d 23d 22d 27d 14d 30d 31d 8d 25d 13d 19d 3d 18d 26d 32r 17d 29d 11r 15d 2r 9d 23r 20d 21d 8r 27r | 820
Fig. 4(a) shows the optimal sequence for machining the holes obtained using RCGA, whereas Fig. 4(b)
shows the existing sequence of machining the holes.
Figure 4 Sequence of machining holes using RCGA and existing practice.
5. Conclusion
The present work formulated the hole-making optimization model and applied a real-coded
genetic algorithm (RCGA) to reduce the overall production cost in hole-making operations.
The computational results show that a cost reduction of about 12% can be achieved for the
problem under consideration. It is observed that the performance of the real-coded genetic
algorithm depends on algorithm-specific parameters such as population size, string length,
crossover and mutation probabilities, and the number of iterations performed.
References
Ghaiebi and Solimanpur. An ant algorithm for optimization of hole-making operations.
    Computers & Industrial Engineering, 2007, 52, 308-319.
Khan et al. Sequential and non-sequential procedure for drilling on a switch board using TSP.
    Canadian Journal on Computing in Mathematics, Natural Sciences, Engineering &
    Medicine, 2010, 1(2), 37-48.
Kolahan and Liang. A tabu search approach to optimization of drilling operations. Computers
    & Industrial Engineering, 1996, 31, 371-374.
Kolahan and Liang. Optimization of hole-making operations: a tabu-search approach.
    International Journal of Machine Tools & Manufacture, 2000, 40, 1735-1753.
Onwubolu, G.C. and Clerc, M. Optimal path for automated drilling operations by a new heuristic
    approach using particle swarm optimization. International Journal of Production
    Research, 2004, 42(3), 473-491.
Oysu and Bingul. Application of heuristic and hybrid-GASA algorithms to tool-path
    optimization problem for minimizing airtime during machining. Engineering Applications
    of Artificial Intelligence, 2009, 22, 389-396.
Neural Network Prediction of Erosion Wear in Pipeline
Transporting Multi-size Particulate Slurry
K.V. Pagalthivarthi(1), P.K. Gupta(2)*, J.S. Ravichandra(3), S. Sanghi(4)
(1) CFD Research Leader, GIW Industries Inc., Augusta, GA – 30813, USA
(2) Chhatrapati Shivaji Institute of Technology, Durg – 491001, India
(3) John F. Welch Research Centre, General Electric, Bangalore, India
(4) Dept. of Applied Mechanics, IIT Delhi, New Delhi – 110016, India
*Corresponding author (e-mail: pankajkgupta@gmail.com)
The paper presents the neural network predictions of maximum erosion wear rate and
the location of maximum wear in a pipeline transporting multisize particulate slurries.
The feed forward type neural network model is trained and tested on Galerkin finite
element generated data of wear rates. The neural network model could capture the
non-linear relationship of wear rate variables with the governing parameters. The
trained network is shown to accurately predict maximum wear rate and its lateral
location on the pipe wall.
1. Introduction
Erosion wear is a major problem in slurry transportation systems, and pipelines are an
important integrated component of such systems. In these pipelines, the flow
quickly attains a fully developed state after a relatively short distance. The superficial mixture
velocity in these pipelines often exceeds 10 m/s, and industrial slurries have a wide distribution of
particle sizes with a fair percentage of coarse particles. Together, these effects can lead to
severe erosion of slurry pipelines.
Erosion wear is a complex boundary problem governed by a large number of
independent parameters [Meng and Ludema (1995); Wood et al. (2004)]. Some of these
parameters are the local solids concentration, mixture velocity, and solids velocity, as well as the
material properties of the carrier, the particulate material, and the wearing surface [Wood et al. (2004)].
The local flow conditions need to be correlated to the wear rate via suitable wear models
[Pagalthivarthi and Helmly (1992); Pagalthivarthi and Addie (2001)].
The local flow conditions near the wear surface can be obtained either through
extensive experimental study covering a wide range of parameters or through numerical
techniques. From the database of such solutions (obtained either experimentally or
numerically), correlations relating wear rates with other operating parameters may be
developed. Because of the complex nature of the relations that exist between the various
operating parameters, a large number of solutions covering a wide range of parameters are
essential for reasonably accurate prediction via correlations.
Whether derived from experiments or from numerical solutions, the correlations have
some pre-selected (chosen a priori) functional form of input-output that may be inadequate to
describe the complex nonlinear relationships that exist in a real slurry flow. Unlike
correlations, neural networks are theoretically established as universal approximators
[Hornik (1991)], that need not have a pre-selected form of functional dependence between
input and output. Pagalthivarthi et al. (2010) have shown that neural network predictions are
significantly more capable than correlations of accurately predicting pressure drop in multisize
particulate slurry flow through pipes. The study also established that neural networks can
accurately account for the effect of a broad PSD through the weighted mean diameter and
standard deviation of the PSD.
In the present study, therefore, erosion wear rate is predicted using artificial neural
networks. Erosion wear is a complex non-linear function of the governing flow parameters.
Both maximum erosion wear rate and its location vary with the flow parameters. Average
wear rate and erosion wear rate at the pipe bottom could also be of significance in designing
pipelines for multi-size particulate flows. Therefore, unlike pressure drop, which is a single
variable, erosion wear needs to be characterized by four variables: maximum erosion wear
rate, its lateral location on the pipe wall, wear rate at the pipe bottom and average wear rate.
Neural network models have been proven to fit data very accurately [Pagalthivarthi et
al. (2010)] due to their ability to capture the non-linear relationship between the inputs and
outputs. Further, a single neural network model can predict any number of dependent
variables for a specific number of input variables. In the case of wear rate prediction, four
variables (as noted previously) need to be predicted for a set of input variables. Therefore, a
single neural network model would serve the purpose of predicting all the four variables.
Details and results of this neural network model are discussed in the sequel.
2. Input and output variables to the neural network model
The chosen inputs for the neural network model are pipe diameter, average mixture
velocity, overall average concentration, solids density, weighted mean diameter and
standard deviation of the PSD. The outputs of the neural network model are four in number.
They are the maximum wear rate, its lateral location on the pipe wall, wear rate at pipe
bottom and average wear rate. The neural network model simultaneously predicts all the
four variables for a given set of input variables.
Erosion wear rate data is generated for all the runs performed for pressure drop as in
Pagalthivarthi et al. (2010). These runs were numerically generated from Galerkin finite
element methodology [Ravichandra et al. (2005)]. Details of runs performed for generating
pressure drop are given in Table 1. The runs (1600 in number) cover a significant range of
governing parameters.
Table 1. Operating parameters used in finite element solutions.
Density of water = 1000 kg/m³
Viscosity of water = 0.001 N·s/m²
Overall concentration = 5% to 25% (in steps of 5%)
Pipe diameters = 100, 200, 300, 400 and 500 mm
Particulate densities = 1520, 2650, 3600, 4200 kg/m³

Percentage contributions of various species to the overall concentration:

Slurry | 738 μm | 235 μm | 180 μm | 128 μm | 91 μm | 38 μm | d_w* (μm) | σ*
A | 3.5 | 10.0 | 5.7 | 19.3 | 13.9 | 47.6 | 116.9 | 6.58
B | 16.67 | 16.67 | 16.67 | 16.67 | 16.67 | 16.67 | 238.4 | 42.63
C | 19.3 | 47.6 | 13.9 | 10.0 | 5.7 | 3.5 | 308.5 | 61.24
D | 47.6 | 13.9 | 19.3 | 5.7 | 10.0 | 3.5 | 439.1 | 137.16

* d_w = weighted mean diameter; σ = standard deviation
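The weighted mean diameter d_w and the standard deviation of the PSD can be computed from the species sizes and their percentage contributions; the sketch below assumes a simple fraction-weighted mean, which may differ in detail from the definition used in the source runs:

```python
import math

def psd_stats(sizes_um, fractions_pct):
    # Weighted mean diameter d_w and standard deviation of the PSD,
    # with species fractions given in percent (summing to ~100).
    w = [f / 100.0 for f in fractions_pct]
    dw = sum(wi * d for wi, d in zip(w, sizes_um))
    var = sum(wi * (d - dw) ** 2 for wi, d in zip(w, sizes_um))
    return dw, math.sqrt(var)

sizes = [738, 235, 180, 128, 91, 38]        # species sizes in microns
dw, sigma = psd_stats(sizes, [16.67] * 6)   # uniform case (like slurry B)
```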
3. Configuration of the neural network model
Seventy-five percent of the total number of data sets is used for training the
network and the remaining 25% is used for testing the trained network. Each data set
contains six input variables and four output variables; therefore, six neurons are used in the input
layer and four neurons in the output layer. Twenty-five neurons are used in the
hidden layer. Following previous study [Pagalthivarthi et al. (2010)], log-sigmoid function is
chosen as the activation function and back propagation algorithm [Rumelhart et al. (1986)] is
used to train the employed feed-forward type of neural network model. MATLAB function
named “newff” is employed for building a trainable feed-forward network and a specific
variation of back propagation algorithm named TRAINLM is used for training the network.
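The paper trains the network in MATLAB with newff and the Levenberg–Marquardt variant trainlm; the NumPy sketch below substitutes plain gradient-descent back-propagation for brevity, keeping the 6-25-4 log-sigmoid architecture. The training data here are random stand-ins, not the FE database:

```python
import numpy as np

rng = np.random.default_rng(0)

def logsig(x):
    # Log-sigmoid activation, as chosen in the paper
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, b1, W2, b2):
    H = logsig(X @ W1 + b1)        # hidden layer (25 neurons)
    return H, logsig(H @ W2 + b2)  # output layer (4 neurons)

# 6 inputs -> 25 hidden -> 4 outputs, matching the paper's configuration
W1 = rng.normal(0.0, 0.5, (6, 25)); b1 = np.zeros(25)
W2 = rng.normal(0.0, 0.5, (25, 4)); b2 = np.zeros(4)

X = rng.random((200, 6))   # stand-in inputs (not the FE database)
Y = rng.random((200, 4))   # stand-in targets

_, O = forward(X, W1, b1, W2, b2)
mse0 = float(((O - Y) ** 2).mean())   # error before training

lr = 0.3
for epoch in range(300):
    H, O = forward(X, W1, b1, W2, b2)
    err = O - Y
    dO = err * O * (1.0 - O)           # gradient through output sigmoid
    dH = (dO @ W2.T) * H * (1.0 - H)   # back-propagated to hidden layer
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

_, O = forward(X, W1, b1, W2, b2)
mse = float(((O - Y) ** 2).mean())    # error after training
```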
4. Results of the neural network model
The details of the neural network model are presented in Table 3. The neural network
model with 25 neurons in the hidden layer and the log-sigmoid activation function is run for 300
epochs. The percentage mean square error between the finite element generated wear outputs
and the trained neural network model outputs is also tabulated in Table 3. The neural network
model is found to fit the training and testing data of all four variables very accurately.
Table 3. Performance of the neural network model

Number of hidden layers = 1
Number of neurons in the input layer = 6
Number of neurons in the hidden layer = 25
Number of neurons in the output layer = 4
Time for 300 epochs = 2382 s

100 × mean square error | With training set | With testing set
Maximum wear rate | 0.0084 | 0.012
Location of maximum wear rate | 0.0092 | 0.013
Figure 1 shows the neural network (NN) output of maximum wear rate in comparison
with the finite element (FE) computed maximum wear rate. Similarly, Fig. 2 shows the location of
maximum wear rate computed with NN and FE. Figures 1 and 2 include both training and
testing samples. The excellent agreement indicates that the neural network model can
capture the non-linear relationship of the wear rate variables with the governing parameters.
[Plots: maximum wear rate W (m/hr) vs. sample number Np, FEM vs. NN, for training and test samples]
Figure 1. Training and testing samples of maximum wear rate obtained using finite element
and neural network computations
To further check the accuracy of the neural network model, linear regression analysis
is performed on the testing samples of the output variables. A linear regression line is fitted
between the FE data points and the corresponding NN data points. The results of the linear
regression analysis are presented in Table 4 and Fig. 3.
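The regression quantities reported here (slope m, intercept b, and r²) can be computed by ordinary least squares, as in this self-contained Python sketch (the data points are hypothetical):

```python
def linreg(x, y):
    # Ordinary least squares fit y ≈ m*x + b, plus the squared
    # correlation coefficient r².
    n = len(x)
    mx = sum(x) / n; my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    m = sxy / sxx
    b = my - m * mx
    r2 = sxy * sxy / (sxx * syy)
    return m, b, r2

# Perfect NN/FE agreement would give m = 1, b = 0, r² = 1.
m, b, r2 = linreg([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```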
[Plots: angle of maximum wear rate (degrees) vs. sample number, FEM vs. NN, for training and test samples]
Figure 2. Training and testing samples of location of maximum wear rate obtained using
finite element and neural network computations
Table 4. Results of regression analysis on test samples of the neural network outputs

Variable | Slope (m) | Intercept (b) | Regression coefficient r²
Max. wear rate | 0.985 | 1.16E-6 | 0.9953
Location of max. wear rate | 0.971 | 6.78E-5 | 0.9882
Figure 3. Linear regression analyses of neural network model results.
The regression coefficients are all within 2% of perfect regression (i.e., r² ≈ 1).
Similarly, the slopes and intercepts are nearly equal to the ideal values of unity and
zero, respectively. Table 4 thus indicates the accuracy of the neural network predictions. Neural
network models built on the FE-generated database are therefore promising for wear rate
prediction.
5. Conclusion
A feed-forward neural network model was used to predict erosion wear in
multisize particulate flow through pipes. The neural network model was trained on the
database generated from finite element runs, using the back-propagation algorithm with the
log-sigmoid activation function. The trained network was then tested and found to reliably and
accurately predict the maximum erosion wear rate and its location along the pipe circumference.
References
Hornik, K. Approximation capability of multilayer feedforward networks as universal
    approximators. Neural Networks, 1991, 4, 251-257.
Meng, H. C. and Ludema, K. C. Wear models and predictive equations: their form and
    content. Wear, 1995, 181, 443-457.
Pagalthivarthi, K. V. and Addie, G. R. Prediction methodology for two-phase flow and erosion
    wear in slurry impellers. 4th International Conference on Multiphase Flow, 2001.
Pagalthivarthi, K. V. and Helmly, F. W. Applications of materials wear testing to solids
    transport via centrifugal slurry pumps. Wear Testing of Advanced Materials, ASTM STP
    1167, 1992, 114-126.
Pagalthivarthi, K. V., Mittal, A., Ravichandra, J. S. and Sanghi, S. Prediction of pressure drop in
    multi-size particulate pipe flow using correlation and neural network techniques.
    Progress in Computational Fluid Dynamics, 2007, 7(7), 414-426.
Ravichandra, J. S., Pagalthivarthi, K. V. and Sanghi, S. Multi-size particulate flow in horizontal
    ducts – modeling and validation. Progress in Computational Fluid Dynamics, 2005, 5(8),
    466-481.
Rumelhart, D. E., Hinton, G. E. and Williams, R. J. Learning internal representations by error
    propagation. In Parallel Distributed Processing, MIT Press, Cambridge, 1986, pp. 318-362.
Wood, R. J. K., Jones, T. F., Ganeshalingam, J. and Miles, N. J. Comparison of predicted and
    experimental erosion estimates in slurry ducts. Wear, 2004, 256(9), 937-947.
Optimization of Plasma Transferred Arc Welding Process
Parameter for Hardfacing of Stellite 6B on Duplex Stainless Steel
using Taguchi Method
P. S. Kalos(1), D. D. Deshmukh(2)*
(1) K.K.W.I.E.E.R., Nasik, Maharashtra, India
(2) MET's B.K.C. I.O.E., Nasik
*Corresponding author (e-mail: dhirgajanan@gmail.com)
Hardfacing involves the application of a deposit on the surface of a metallic workpiece
by employing a welding method such as plasma transferred arc (PTA) welding, and has found
widespread application in the steel, power, mining, and petroleum industries. Stellite 6
has outstanding resistance to seizing or galling as well as to cavitation erosion, and is
extensively used to combat galling in valve trims, pump sleeves, and liners. The base
material used in this investigation was casting plates of duplex stainless steel of 30 mm
thickness of grade S32205 (UNS no. S31803), which is widely used for the fabrication of
valves, valve cones, spindles, and pressure vessel parts. Following a Taguchi L9
orthogonal array design of experiments, deposits were prepared on each plate as described in
the design-of-experiments matrix and according to WPS EN ISO 11970:2007. After
hardfacing, the welding specimens were ground and the hardness was measured in the weld
cross-section. A multiple linear regression model was fitted to describe the relationship
between hardness and four independent variables (hardfacing speed, arc current, oscillation
speed, and powder feed rate). The Taguchi method was used to evaluate the main effects of
the various parameters on hardness and to optimize the PTA welding process parameters for
maximum hardness and a defect-free process.
Keywords: PTA, hardfacing, Taguchi method, Regression
1. Introduction
Hardfacing involves the application of a deposition on the surface of a metallic workpiece
by employing a welding method. The process of hardfacing should be aimed at achieving a
strong bond between the deposit and the base metal with a high deposition rate. Hardfacing is
applied in numerous industries, including chemical and fertilizer plants, nuclear and steam power
plants, and pressure vessels, as well as valves and valve seats in the automotive industry (Jong-Ning,
et al., 2001) Plasma transferred arc welding (PTA) is a commonly used technology and efficient
method to coat a surface with such wear-resistant hardfacings. Single or multilayer depositions
provide strong metallurgical bonding between the deposit and the base metal, as well as porosity-free coating and low dilution with the substrate. In the PTA process, the heat of the plasma (arc of
ionized gas) is used to melt the surface of the substrate and the welding powder, where the
molten weld pool is protected from the atmosphere by the shielding gas. (Tarng Y.S., et al., 2002)
Surface treatments of metals are commonly based on the use of high energy density sources, as
they offer a means of rapid heating and subsequent quenching from the melt, leading to fine
microstructures and consequently to possible improvement of mechanical, corrosion or
tribological properties. Superficial layers of the appropriate thickness, free of cracks and with high
hardness may be obtained by suitable control of the process variables. In this respect there has
been considerable interest in the use of laser and electron beam sources for surface treating and
melting of low carbon steels and stainless and tool steels. The impact of the process variables on
temperature profiles, microstructure, and properties has been examined, and the results already
obtained may serve as a reference for further investigations. However, only a limited number of
investigations concern the use of the plasma transferred arc (PTA), although there are serious
indications that its use may be quite attractive in industrial applications. It is, therefore, interesting
to investigate the PTA process, which—despite its lower energy density—has the main
advantages of requiring rather inexpensive equipment and of allowing a higher heat
input (E. Bourithis et al., 2002). Plasma transferred arc welding (PTAW) is an extension of the
GTAW process where both utilize a gas shielded arc produced by a non-consumable tungsten
cathode. In PTA hardfacing, transferred arc melts the powder and the local surface of the treated
component so that the whole amount of powder and only a thin film of component surface under
the arc will be melted. As a result, a solidified metallurgical bond between the deposit and
substrate is obtained with minimum dilution (less than 10%), whereas the amount of dilution is 10
%–30% in the case of other hardfacing techniques (Balasubramanian V. et al., 2009). Stellite 6
is a well-known Co–Cr–W–C alloy where chromium provides mechanical strength by formation of
solid solution and corrosion resistance through the formation of chromium oxide protective layer.
In addition, this element also acts as the chief carbide former during alloy solidification. Tungsten
increases the strength of Co–Cr by solid solution strengthening. (Madadi F., et al., 2012)
2. Experimental procedure
2.1 Determining the effective process parameters and their working limits
Using different combinations of PTA hardfacing process parameters, a large number of
trial runs was carried out on Inconel 825 and duplex stainless steel UNS 32205, with Stellite grade
6B powder used for hardfacing, in order to set the ranges of the operating parameters. The following
inferences were drawn during operation:
(1) If the transferred arc current was less than 100 A, the incomplete melting of powders and lack
of penetration were observed. For the transferred arc current greater than 140 A, the undercut
and spatter were noticed on the weld bead surface.
(2) If the travel (hardfacing) speed was less than 80 mm/min, there was an over deposition of
weld metal and higher reinforcement height was observed. Travel speed greater than 140
mm/min resulted in incomplete penetration and very thin weld bead.
(3) If the powder feed rate was lower than 10 g/min, over-melting of the base metal and overheating
of the tungsten electrode were noticed. When the powder feed rate was greater than 20 g/min, weld
bead formation was not smooth owing to incomplete melting of the powders.
(4) For torch oscillation speed less than 475 mm/min, the bead appearance and contours were
not so smooth and very narrow bead was obtained. When the oscillation speed was greater than
525mm/min, wider bead width and smaller reinforcement height were observed.
The hardness, surface defects, bead contour, bead appearance, and weld quality were inspected
to identify the working limits of the welding parameters; the parameters and their working limits are
shown in Table 1.
Table 1. Parameters and their working limits
Sr. No | Parameter | Low level (1) | Moderate level (2) | High level (3)
1 | Transferred arc current (A) | 100 | 120 | 140
2 | Travel (hardfacing) speed (mm/min) | 80 | 100 | 120
3 | Powder feed rate (g/min) | 10 | 14 | 18
4 | Oscillation speed (mm/min) | 475 | 500 | 525
2.2 Material
The base material used in this investigation is casting plates of duplex stainless steel, 30 mm
in thickness, of grade S32205 (UNS no. S31803), which is widely used for the fabrication of
valves, valve cones, spindles, and pressure vessel parts. The compositions of Stellite grade 6B and
the duplex stainless steel base material are shown in Table 2.
Table 2. Composition of stellite 6B and duplex stainless steel
Stellite 6B: C 1.08, Si 1.09, Cr 28.75, W 4.37, Co balance
Duplex stainless steel: C 0.026, Si 0.45, Mn 1.28, Cr 22.41, Ni 6.12, Mo 3.15, N 0.16, P 0.007, S 0.007
2.3 Experimental set up and data collection
For conducting the experiments, an automatic PTA hardfacing machine, designed and
fabricated by M/s Primo Automation Systems, Chennai, India, with the support of KOSO India Pvt.
Ltd., Ambad, Nashik, is employed, as shown in Fig. 1.
Figure 1. PTA machine
2.4 Preparation of specimen for test
As per the Taguchi design of experiments, an L9 orthogonal array was created in MINITAB 15
software. Plates of 30 mm thickness were taken, as the deposition is above 2 mm; according to
WPS EN ISO 11970:2007, the welding joint is G1, as applicable for cast plate. Deposits
were prepared on each plate as described in the design-of-experiments matrix; the length of each
run was taken up to 15 cm and the width of deposition was 4.5 cm, according to EN ISO 11970:2007. The
experiments were conducted by forming layers of Stellite grade 6 powder (size 45-125 micron) on
the substrate plate with the electrode negative (DCEN), according to the welding procedure
specification (WPS) ASME21 with groove position 1G. The tungsten electrode was 4 mm in diameter
(2% thoriated tungsten) and the torch orifice diameter was 25 mm. Industrially pure argon (99.99%) was used
at a constant flow rate of 15 L/min for shielding, 2.5 L/min for the centre, and 3 L/min for powder
feeding, with a constant standoff distance of 4 mm. The welding coupons (specimens) were
ground using a portable grinding machine and then finished using superfine emery paper.
Figure 2. Specimen after hardfacing and finishing (grinding)
2.5 Recording the response (measuring the hardness)
Hardness was measured at a load of HV10 on a digital Vickers tester available in the QC lab,
on the weld cross-section, as the main aim is to optimize (increase) the hardness of the weld
cross-section. For the measurement of hardness, the weld section of each specimen was divided
into three sections transverse to the weld travel, as shown in Fig. 2; hardness was then measured
at each section and the average value recorded, as shown in Table 4.
2.6 Calculation of signal to noise (S/N) ratio
The S/N ratio is calculated for the hardness. Since the objective function is to be maximized,
the larger-the-better S/N ratio is used: S/N = -10 log10(1/Hardness²) = 20 log10(Hardness).
Table 4. Design of experiment (D.O.E.) and data collection

Sr. No. | I (A) | TS (mm/min) | PF (g/min) | OS (mm/min) | Hardness (HRC) | HRC² | S/N ratio
1  | 100 | 80  | 10 | 475 | 37   | 1369    | 31.364
2  | 100 | 80  | 10 | 475 | 35   | 1225    | 30.881
3  | 100 | 80  | 10 | 475 | 36   | 1296    | 31.126
4  | 100 | 100 | 14 | 500 | 38   | 1444    | 31.595
5  | 100 | 100 | 14 | 500 | 39.3 | 1544.49 | 31.887
6  | 100 | 100 | 14 | 500 | 39.2 | 1536.64 | 31.865
7  | 100 | 120 | 18 | 525 | 44.8 | 2007.04 | 33.025
8  | 100 | 120 | 18 | 525 | 44.9 | 2016.01 | 33.044
9  | 100 | 120 | 18 | 525 | 44.8 | 2007.04 | 33.025
10 | 120 | 80  | 14 | 525 | 38.1 | 1451.61 | 31.610
11 | 120 | 80  | 14 | 525 | 38   | 1444    | 31.595
12 | 120 | 80  | 14 | 525 | 37.9 | 1436.41 | 31.572
13 | 120 | 100 | 18 | 475 | 39.6 | 1568.16 | 31.951
14 | 120 | 100 | 18 | 475 | 40.3 | 1624.09 | 32.106
15 | 120 | 100 | 18 | 475 | 40.6 | 1648.36 | 32.170
16 | 120 | 120 | 10 | 500 | 39.6 | 1568.16 | 31.953
17 | 120 | 120 | 10 | 500 | 36   | 1296    | 31.126
18 | 120 | 120 | 10 | 500 | 36.2 | 1310.44 | 31.174
19 | 140 | 80  | 18 | 500 | 41   | 1681    | 32.255
20 | 140 | 80  | 18 | 500 | 41.4 | 1713.96 | 32.340
21 | 140 | 80  | 18 | 500 | 40.3 | 1624.09 | 32.106
22 | 140 | 100 | 10 | 525 | 37   | 1369    | 31.364
23 | 140 | 100 | 10 | 525 | 36   | 1296    | 31.126
24 | 140 | 100 | 10 | 525 | 37   | 1369    | 31.364
25 | 140 | 120 | 14 | 475 | 38.2 | 1459.24 | 31.641
26 | 140 | 120 | 14 | 475 | 40   | 1600    | 32.041
27 | 140 | 120 | 14 | 475 | 41   | 1681    | 32.255
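The larger-the-better S/N values in Table 4 follow directly from the formula above; a minimal Python sketch (hardness values taken from the first rows of Table 4):

```python
import math

# Larger-the-better signal-to-noise ratio for a single response value:
# S/N = -10*log10(1/y^2) = 20*log10(y)
def sn_larger_is_better(y):
    return -10 * math.log10(1.0 / y ** 2)

# Hardness values from the first three runs of Table 4 (HRC)
for hrc in (37, 35, 36):
    print(round(sn_larger_is_better(hrc), 3))
```

For replicated observations the same criterion averages 1/y² over the replicates before taking the logarithm; here each table row carries a single hardness value.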
Table 5. Average S/N ratio and regression analysis

Variable/level | I (A)  | TS (mm/min) | PF (g/min) | OS (mm/min)
1              | 31.979 | 31.651      | 31.275     | 31.726
2              | 31.696 | 31.714      | 31.786     | 31.811
3              | 31.832 | 32.143      | 32.447     | 31.970
Maximum        | 31.979 | 32.143      | 32.447     | 31.970
Minimum        | 31.696 | 31.651      | 31.275     | 31.726

Multiple R = 0.9187 | R Square = 0.8440 | Adjusted R² = 0.8157 | Standard Error = 1.1619 | Observations = 27
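The level averages in Table 5 can be reproduced from Table 4: for each factor, the S/N values of the nine runs at a given level are averaged. A sketch (run data transcribed from Table 4; `level_mean` is a helper introduced here for illustration):

```python
import math

# (I, TS, PF, OS, HRC) for the 27 runs of Table 4
runs = [
    (100, 80, 10, 475, 37), (100, 80, 10, 475, 35), (100, 80, 10, 475, 36),
    (100, 100, 14, 500, 38), (100, 100, 14, 500, 39.3), (100, 100, 14, 500, 39.2),
    (100, 120, 18, 525, 44.8), (100, 120, 18, 525, 44.9), (100, 120, 18, 525, 44.8),
    (120, 80, 14, 525, 38.1), (120, 80, 14, 525, 38), (120, 80, 14, 525, 37.9),
    (120, 100, 18, 475, 39.6), (120, 100, 18, 475, 40.3), (120, 100, 18, 475, 40.6),
    (120, 120, 10, 500, 39.6), (120, 120, 10, 500, 36), (120, 120, 10, 500, 36.2),
    (140, 80, 18, 500, 41), (140, 80, 18, 500, 41.4), (140, 80, 18, 500, 40.3),
    (140, 100, 10, 525, 37), (140, 100, 10, 525, 36), (140, 100, 10, 525, 37),
    (140, 120, 14, 475, 38.2), (140, 120, 14, 475, 40), (140, 120, 14, 475, 41),
]

def sn(y):  # larger-the-better S/N ratio
    return 20 * math.log10(y)

def level_mean(factor_index, level_value):
    """Average S/N over the nine runs where the factor is at the given level."""
    vals = [sn(r[4]) for r in runs if r[factor_index] == level_value]
    return sum(vals) / len(vals)

# Mean S/N of current I at its low level (100 A); Table 5 gives 31.979
print(round(level_mean(0, 100), 3))
```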
3. Result
The output shows the results of fitting a multiple linear regression model to describe the
relationship between hardness and the four independent variables. The equation of the fitted model is:

Hardness = 14.4306 - 0.0197222×I + 0.0577778×TS + 0.665278×PF + 0.024×OS    (1)

The R-squared statistic indicates that the model as fitted explains 84.4066% of the variability in
the result (hardness). The adjusted R-squared statistic, which is more suitable for comparing models
with different numbers of independent variables, is 81.5714%. The standard deviation of the residuals
is 1.16196. The effect of the various parameters on hardness is shown in Fig. 4.
Figure 4. Effect of various parameters on hardness
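The fitted model of Eq. (1) can be evaluated numerically; a sketch using the coefficients as printed (the setting shown is the optimum combination discussed in the conclusion):

```python
# Fitted regression model of Eq. (1); coefficients as printed in the text
def predicted_hardness(i_amp, ts, pf, os_speed):
    return (14.4306
            - 0.0197222 * i_amp   # transferred arc current, A
            + 0.0577778 * ts      # travel speed, mm/min
            + 0.665278 * pf       # powder feed rate, g/min
            + 0.024 * os_speed)   # oscillation speed, mm/min

# Prediction at I = 100 A, TS = 120 mm/min, PF = 18 g/min, OS = 525 mm/min
print(round(predicted_hardness(100, 120, 18, 525), 2))
```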
4. Conclusion
Using the Taguchi method with the larger-the-better criterion, the optimum levels of the
parameters are: current at low level (100 A); powder feed rate, which shows the maximum effect,
at high level (18 g/min); oscillation speed at high level (525 mm/min); and travel speed at high
level (120 mm/min), yielding a hardness of 42.81 HRC.
References
Aoh, J.-N. and Chen, J.-C., The wear characteristics of cobalt-based hardfacing layer after
thermal fatigue and oxidation, Wear, 2001, 250, 611-620.
Balasubramanian, V., Lakshminarayanan, A.K., Varahamoorthy, R. and Babu, S., Application of
response surface methodology to prediction of dilution in plasma transferred arc
hardfacing of stainless steel on carbon steel, International Journal of Iron and Steel
Research, 2009, 16(1), 44-53.
Bharath, R.R., Ramanathan, R., Sundararajan, B. and Bala Srinivasan, P., Optimization of process
parameters for deposition of Stellite on X45CrSi93 steel by plasma transferred arc
technique, Materials and Design, 2008, 29, 1725-1731.
Bourithis, E., Tazedakis, A. and Papadimitriou, G., A study on the surface treatment of "Calmax"
tool steel by a plasma transferred arc process, Journal of Materials Processing Technology,
2002, 128, 169-177.
Hamad, A.R., Abboud, J.H., Shuaeib, F.M. and Benyounis, K.Y., Surface hardening of commercially
pure titanium by laser nitriding: response surface analysis, Advances in Engineering
Software, 2010, 41, 674-679.
Madadi, F., Ashrafizadeh, F. and Shamanian, M., Optimization of pulsed TIG cladding process of
Stellite alloy on carbon steel using RSM, Journal of Alloys and Compounds, 2012, 510,
71-77.
Tarng, Y.S., Juang, S.C. and Chang, C.H., The use of grey-based Taguchi methods to determine
submerged arc welding process parameters in hardfacing, Journal of Materials
Processing Technology, 2002, 128, 1-6.
Ant Colony Optimization for Reservoir Operation: Case Study of Panam Project

Pooja C. Singh 1*, T. M. V. Suryanarayana 2
1 Parul Institute of Technology, Baroda, Gujarat, India
2 Water Resources Engineering and Management Institute, Faculty of Technology and
Engineering, M. S. University of Baroda, Gujarat, India
*Corresponding author (e-mail: pooja_singh_c@yahoo.co.in)
Most real-world problems involve non-linear optimization in their solution, with a
large number of equality and inequality constraints. In the field of water resources,
reservoir operation is one such problem that involves complexities in its operation.
The ACO technique is applied in an attempt to minimize the difference between
demand and releases and hence to determine the best and worst paths. It is found that
the best preferred and recommended path for the Panam Project is Scenario 1C8 from all
three classes considered, having a probability of 99.85% and a deviation of release from
demand of 0.316 MCM. It is also found that Scenarios 9A6, 8A6, 7A6, 9B6, 8B6 and
7B6 are the worst paths.
1. Introduction
Ant Colony Optimization (ACO) is one of the most recent techniques for approximate
optimization. The first algorithm that can be classified within this framework was presented
in 1991. ACO belongs to the class of metaheuristics. The ant colony optimization algorithm is a
probabilistic technique for searching for an optimal path in a graph, based on the behaviour of
ants seeking a path between their colony and a source of food.
2. ACO Concept
The development of these algorithms was inspired by the observation of ant colonies.
Ants are social insects which live in colonies and their behavior is governed by the goal of
colony survival rather than being focused on the survival of individuals. The behavior that
provided the inspiration for ACO is the ants' foraging behavior and, in particular, how ants can
find shortest paths between food sources and their nest. When searching for food, ants
initially explore the area surrounding their nest in a random manner. While moving, ants leave
a chemical pheromone trail on the ground. Ants can smell pheromone. When choosing their
way, they tend to choose, in probability, paths marked by strong pheromone concentrations.
As soon as an ant finds a food source, it evaluates the quantity and the quality of the food and
carries some of it back to the nest. During the return trip, the quantity of pheromone that an
ant leaves on the ground may depend on the quantity and quality of the food. The pheromone
trails will guide other ants to the food source. It has been shown that the indirect
communication between the ants via pheromone trails—known as stigmergy—enables them
to find shortest paths between their nest and food sources.
Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive
strength. The more time it takes for an ant to travel down the path and back again, the more
time the pheromones have to evaporate. A short path, by comparison, gets marched over
faster, and thus the pheromone density remains high as it is laid on the path as fast as it can
evaporate. Pheromone evaporation also has the advantage of avoiding convergence to a
locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants
would tend to be excessively attractive to the following ones. In that case, the exploration of
the solution space would be constrained.
3. Methodology
3.1 ACO algorithm
Ant System (Dorigo et al. 1996) is the original and simplest ACO algorithm. As
such, it has been the most influential in the development of more advanced ACO algorithms.
Tour construction
Let Tij(t) be the total pheromone deposited on path ij at time t, and ηij(t) be the heuristic
value of path ij at time t according to the measure of the objective function. The transition
probability from node i to node j at time period t is given as:

Pij(t) = [Tij(t)]^α [ηij(t)]^β / Σl [Til(t)]^α [ηil(t)]^β    (1)

where the sum runs over all nodes l reachable from node i.
Where
α and β = parameters that control the relative importance of the pheromone trail versus a
heuristic value.
Let q be a random variable uniformly distributed over [0, 1], and q0 ∈ [0, 1] be a tunable
parameter. The state transition rule is as follows: the next node j that ant k chooses to go to is given as:

j = arg maxl { Til(t) [ηil(t)]^β }  if q ≤ q0;  otherwise j = J    (2)

where J = a random variable selected according to the probability Pij(t).
Pheromone update
The pheromone trail is changed both locally and globally.
Local updating:
Local updating is intended to avoid a very strong path being chosen by all the ants and helps
in exploring the new search space. Every time a path is chosen by an ant, the amount of
pheromone is changed by applying the local trail updating formula:

Tij(t) ← ρ·Tij(t) + (1 − ρ)·T0    (3)

where
T0 = initial value of pheromone
ρ = tuning parameter (0 ≤ ρ ≤ 1)
At the end of an iteration of the algorithm, the best and the worst paths are found out from all
the possible paths to reach the finish node, based on the value of the probability.
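The transition rule of Eq. (1), a roulette-wheel selection over its probabilities, and the local update of Eq. (3) can be sketched as follows; the pheromone and heuristic values, and the parameter settings, are made up for illustration:

```python
import random

ALPHA, BETA = 1.0, 2.0   # relative importance of pheromone vs heuristic value
RHO, T0 = 0.9, 0.1       # local-update tuning parameter and initial pheromone

def transition_probabilities(tau, eta):
    """Eq. (1): transition probability to each candidate node."""
    weights = [(t ** ALPHA) * (h ** BETA) for t, h in zip(tau, eta)]
    total = sum(weights)
    return [wt / total for wt in weights]

def choose_next(tau, eta, rng=random):
    """Roulette-wheel selection according to the probabilities of Eq. (1)."""
    r, acc = rng.random(), 0.0
    for j, p in enumerate(transition_probabilities(tau, eta)):
        acc += p
        if r <= acc:
            return j
    return len(tau) - 1

def local_update(tau, j):
    """Eq. (3): T_ij <- rho*T_ij + (1 - rho)*T0 on the path just taken."""
    tau[j] = RHO * tau[j] + (1 - RHO) * T0

tau = [0.5, 0.2, 0.3]   # pheromone on three candidate paths (made up)
eta = [1.0, 2.0, 0.5]   # heuristic values (made up)
j = choose_next(tau, eta)
local_update(tau, j)
print(transition_probabilities(tau, eta))
```

Repeated local updates pull the pheromone on heavily used paths back towards T0, which is what keeps a single strong path from being chosen by every ant.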
3.2 Optimal reservoir operation using ant colony optimization
To apply ACO algorithms to a problem, the following steps have to be taken: (1) an
appropriate representation of the problem as a graph, which facilitates the construction of
possible solutions, using a probabilistic transition rule to move from one state i to a
neighboring state j; (2) calculating the value of the objective function to be optimized for the
problem; (3) finding the various paths from any starting node i to the finishing node j, calculating
the probability of all the paths found, and updating the pheromone values locally; (4) finding the
best and worst paths based on probability.
(1) Representation of problem
To apply ACO algorithms to the optimum operation problem, it is represented as a graph. The
reservoir volume is divided into several classes for each time period. Graphical
representation of the reservoir operation problem is shown in Fig. 1, where NT = total number
of time periods and NSC = total number of storage classes.
Figure 1. Graphical Representation of Reservoir Operation Problem
(2) Calculation of objective function
Objective function
Optimal operation of a reservoir may be stated as minimization of total squared deviation of
releases from required demands and given mathematically as follows:
Min Z = ∑ (Release − Demand)² = ∑ (R − D)²
Subject to the following constraints
1. Release Constraint
2. Storage Bound Constraint
3. Mass Balance Equation Constraint
4. Non- Negativity Constraints
1. Release Constraint
Release ≤ Demand
i.e. Rt ≤ Dt
Where
Rt = Release from the reservoir during period t, MCM
Dt = Demand during period t, MCM
2. Storage Bound Constraint
Minimum and maximum allowable values for the storage volumes at each period is given by
S min ≤ St ≤ S max
Where
S min and S max = Minimum and Maximum water storage allowed, MCM
St = Reservoir storage at the period t, MCM
3. Mass Balance Equation Constraint
S t+1 = S t + I t – R t – E t – Q t – O t
Where
S t+1 = Final storage volume at period t, MCM, S t = Initial reservoir storage at period t, MCM
I t = Inflow into reservoir during period t, MCM, R t = Release from the reservoir in period t,
MCM, E t = Evaporation losses during period t, MCM, Q t = Outflow at period t, MCM, O t =
Overflow (Spillover) at period t, MCM
4. Non-Negativity Constraints
S t, S t+1, R t, E t, Q t, O t, Dt ≥ 0
The reservoir volume is divided into 3 classes. The values of the objective function are obtained
for each class and for each month, i.e. June, July, August, September and October, for the
years 1997 to 2006 using Excel Solver 2010.
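The mass-balance constraint and the squared-deviation objective above can be checked with a short sketch; all volumes are in MCM and the monthly figures are hypothetical, not Panam data:

```python
def next_storage(s, inflow, release, evap, outflow, spill):
    """Mass balance: S_{t+1} = S_t + I_t - R_t - E_t - Q_t - O_t."""
    return s + inflow - release - evap - outflow - spill

def objective(releases, demands):
    """Min Z = sum over t of (R_t - D_t)^2."""
    return sum((r - d) ** 2 for r, d in zip(releases, demands))

# Hypothetical monthly figures for June-October
releases = [30.0, 45.0, 50.0, 40.0, 25.0]
demands  = [32.0, 45.0, 52.0, 41.0, 30.0]
inflows  = [60.0, 80.0, 70.0, 40.0, 20.0]

s = 300.0
for i, r in zip(inflows, releases):
    s = next_storage(s, i, r, evap=2.0, outflow=0.0, spill=0.0)
    assert 38.0 <= s <= 740.0   # storage bound constraint, S_min <= S_t <= S_max

print(objective(releases, demands))   # total squared deviation of release from demand
```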
4. Result and analysis
Three classes are made with an interval of 234 MCM, viz. (1) Class A: 38-272 MCM; (2)
Class B: 273-506 MCM; and (3) Class C: 507-740 MCM.
Table 1. Initial Values of Objective Function for 3 Classes

Classes/Month | June    | July   | August  | September | October
38-272        | 1207.19 | 497.01 | 750.56  | 1281.39   | 2846.12
273-506       | 3217.27 | 719.93 | 750.56  | 1281.39   | 2965.89
507-740       | 77.70   | 591.57 | 1435.65 | —         | 4022.47
Table 2. Final Values of Objective Function and Expected Trial Number for 3 Classes
(each cell: expected trial number / final value)

Classes/Month | June      | July       | August     | September  | October
38-272        | 12 / 9.39 | 41 / 11.79 | 33 / 11.77 | 14 / 9.41  | 9 / 8.12
273-506       | 21 / 11.09 | 10 / 10.44 | 11 / 10.53 | 1 / 0.05  | 1 / 0.10
507-740       | 1 / 0.08  | 1 / 7.47   | 42 / 11.21 | 53 / 12.23 | —
The values of the objective function and the reservoir operation problem considering 3
classes are represented graphically in Fig. 2.
Figure 2. Graphical representation of reservoir operation problem considering 3 classes
4.1 Feasible Paths
81 feasible paths are found from each of nodes A and B to reach the finish
node. The 81 paths from Node A are shown in Fig. 3 and the 81 paths from Node B in Fig. 4. 27
paths are found from node C, as node C1 does not have a storage volume between 507 and
740 MCM; the 27 paths from Node C are shown in Fig. 5.
Fig 3. Eighty one paths from Class A
Fig 4. Eighty one paths from Class B
Fig 5. Twenty seven paths from Class C
Probability Calculation
Probability is calculated for all the feasible paths of Classes A, B and C, i.e. the 81 paths of Class
A, the 81 paths of Class B and the 27 paths of Class C. The probability values for Class C are
shown in Table 3, from which the best and worst paths are selected. Similarly, the probabilities
for Classes A and B are also determined.
Table 3. Probability Values for Class C

Sr. No. | Scenario | Path        | P(x), %
1       | 1C1      | C2-A3-A4-A5 | 44.47
2       | 1C2      | C2-A3-A4-B5 | 99.82
3       | 1C3      | C2-A3-A4-C5 | 26.82
4       | 1C4      | C2-A3-B4-A5 | 0.07
5       | 1C5      | C2-A3-B4-B5 | 32.3
6       | 1C6      | C2-A3-B4-C5 | 0.03
7       | 1C7      | C2-A3-C4-A5 | 49.17
8       | 1C8      | C2-A3-C4-B5 | 99.85
9       | 1C9      | C2-A3-C4-C5 | 30.69
10      | 2C1      | C2-B3-A4-A5 | 42.27
11      | 2C2      | C2-B3-A4-B5 | 99.8
12      | 2C3      | C2-B3-A4-C5 | 25.1
13      | 2C4      | C2-B3-B4-A5 | 0.07
14      | 2C5      | C2-B3-B4-B5 | 32.3
15      | 2C6      | C2-B3-B4-C5 | 0.03
16      | 2C7      | C2-B3-C4-A5 | 46.49
17      | 2C8      | C2-B3-C4-B5 | 99.83
18      | 2C9      | C2-B3-C4-C5 | 28.45
19      | 3C1      | C2-C3-A4-A5 | 34.17
20      | 3C2      | C2-C3-A4-B5 | 99.72
21      | 3C3      | C2-C3-A4-C5 | 19.2
22      | 3C4      | C2-C3-B4-A5 | 0.07
23      | 3C5      | C2-C3-B4-B5 | 32.29
24      | 3C6      | C2-C3-B4-C5 | 0.03
25      | 3C7      | C2-C3-C4-A5 | 36.88
26      | 3C8      | C2-C3-C4-B5 | 99.75
27      | 3C9      | C2-C3-C4-C5 | 21.1

Fig. 6 and Fig. 7 show the best path and worst path of Class C, with probabilities of 99.847%
and 0.032% respectively. Fig. 8 and Fig. 9 show the best path and worst path from Classes
A, B and C, with probabilities of 99.847% and 0.020% respectively. Similarly, the best and worst
paths are obtained from Classes A and B.
Figure 6. Best path of Class C
Figure 7. Worst path of Class C
Figure 8. Best path from Classes A, B and C
Figure 9. Worst path from Classes A, B and C

5. Conclusions
The study presents an application of ACO for Panam reservoir operation.
For the storage class of 38-272 MCM (Class A), A1-A2-A3-C4-B5 (Scenario 1A8) is the best
path, having a probability of 99.774%. By following this path, one may get a deviation of
release from demand of 0.316 MCM.
For the storage class of 38-272 MCM (Class A), A1-C2-C3-B4-C5 (Scenario 9A6), A1-C2-B3-B4-C5
(Scenario 8A6) and A1-C2-A3-B4-C5 (Scenario 7A6) are the worst paths, having a probability of
0.020%. By following these paths, one may get a deviation of 3.5 MCM.
For the storage class of 273-506 MCM (Class B), B1-A2-A3-C4-B5 (Scenario 1B8) is the best
path, having a probability of 99.774%. By following this path, one may get a deviation of
release from demand of 0.316 MCM.
For the storage class of 273-506 MCM (Class B), B1-C2-C3-B4-C5 (Scenario 9B6), B1-C2-B3-B4-C5
(Scenario 8B6) and B1-C2-A3-B4-C5 (Scenario 7B6) are the worst paths, having a
probability of 0.020%. By following these paths, one may get a deviation of 3.5 MCM.
For the storage class of 507-740 MCM (Class C), C2-A3-C4-B5 (Scenario 1C8) is the best path,
having a probability of 99.847%. By following this path, one may get a deviation of release
from demand of 0.316 MCM.
For the storage class of 507-740 MCM (Class C), C2-C3-B4-C5 (Scenario 3C6), C2-B3-B4-C5
(Scenario 2C6) and C2-A3-B4-C5 (Scenario 1C6) are the worst paths, having a probability of 0.032%.
By following these paths, one may get a deviation of release from demand of 3.5 MCM.
C2-A3-C4-B5 (Scenario 1C8) is the best path considering all the classes A, B and C, having a
probability of 99.847%. By following this path, one may get a deviation of 0.316 MCM.
A1-C2-C3-B4-C5 (Scenario 9A6), A1-C2-B3-B4-C5 (Scenario 8A6), A1-C2-A3-B4-C5 (Scenario
7A6), B1-C2-C3-B4-C5 (Scenario 9B6), B1-C2-B3-B4-C5 (Scenario 8B6) and B1-C2-A3-B4-C5
(Scenario 7B6) are the worst paths considering all the classes A, B and C, having a probability of
0.020%. By following these paths, one will get a deviation of release from demand of 3.5 MCM.
Thus, if one wants to know which storage levels have to be maintained in the various
time periods to obtain the optimum deviation of release from demand, then, based on the
probability, one can select the best path.
By integrating the classes into fewer intervals, the same methodology can be applied to
achieve the best and the worst paths.
References
Jalali, M.R., Afshar, A. and Marino, M.A., Reservoir Operation by Ant Colony Optimization
Algorithms, http://www.optimization-online.org/DB_FILE/2003/07/696.pdf, 2003.
Kumar, D.N. and Reddy, M.J., Ant Colony Optimization for Multi-Purpose Reservoir
Operation, Water Resources Management, 2006 Volume: 20, pp 879–898, DOI:
10.1007/s11269-005-9012-0, Springer.
Singh, P.C., Ant Colony Optimization for Reservoir Operation: Case Study of Panam Project,
M.E. Dissertation, M.S. University of Baroda, 2011.
Combining AHP and TOPSIS Approaches to Support Rubble
Filling Method Selection for a Construction Firm
Prerana Jakhotia*, N.R.Rajhans
College of Engineering Pune, Pune, Maharashtra, India
*Corresponding author (e-mail: jakhotiaprerana21@gmail.com)
This paper presents a case study involving the evaluation of three methods of rubble filling
at a construction site. Method selection was based on the application of a Multi-Criteria
Decision Making (MCDM) algorithm. The study shows how a suitable method can be
identified and ranked using a combination of AHP and TOPSIS. The proposed
technique allows an often qualitative assessment of method selection to be
replaced by a more scientific, informed and unbiased one.
Keywords: Rubble filling, MCDM, TOPSIS, AHP
1. Introduction
This paper proposes a method for selection of rubble filling method on a construction
site. Initially the selection criteria were determined and all feasible alternatives were listed. In
this paper, a Multi- Criteria Decision Making (MCDM) approach was used to find the
optimum method. MCDM is based on analytical ranking of all alternatives which meet all the
required criteria fully and is widely used in economic, social, political and environmental
studies. TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) and
AHP (Analytic Hierarchy Process) are two methods which can be used in MCDM to solve
different decision making problems and in this case study were combined to support method
selection. Overview of MCDM process is given by Hwang and Yoon (1981). Application of
MCDM in evaluating optimum maintenance strategy is explained by Shyjith (2008). Abd
(2011) explained how MCDM approach can be used for selection of scheduling rule in
robotic flexible assembly cells. The AHP process is explained by Saaty (1980). Chen (2006)
applied Analytical Hierarchy Process (AHP) approach for site selection. Wide applications of
AHP are explained by Vaidya and Kumar (2006). Opricovic and Tzeng (2004) showed a
comparative analysis between VIKOR and TOPSIS. Combination of AHP and TOPSIS in
customer-driven product design process is explained by Lin (2008).
2. Methodology
The overall rubble filling method selection process involved a three-step procedure: 1)
establishing the selection criteria; 2) evaluating the weights of the criteria by AHP; and 3)
method selection using TOPSIS.
2.1 Establishing site selection criteria and data collection
In the first stage, the various feasible methods for rubble filling were collated. The first
alternative was four men loading the rubble into a tractor and the tractor unloading it in the pit.
The second alternative was a JCB loading it into a tractor and the tractor unloading it. The last
alternative was a JCB loading it into a dumper and the dumper unloading it.
Simultaneously, without reference to any particular method, a list of general method
suitability criteria was developed. These criteria included:
1) Cost (INR): Total amount required for filling the rubble in the pit. This includes the
wages of the workers and the costs of the JCB, tractor and dumper.
2) Supervision (days): Number of days for which manual supervision is required.
3) Breakdown (percentage): The probability of breakdown and hence of work stoppage.
4) Work specification: Ease with which the work to be done can be specified.
5) Time (hours): Total time required to complete the job.
6) Release of Payment (INR/day): The total cost divided by the total days required to do the
job. The lower the ROP, the better it is for the construction site.
The final method was selected based on suitability as determined by matching of these
selection criteria. The alternative methods and data pertaining to the relevant selection
criteria are summarized in Table1.
Table 1. Score matrix

Alternative      | Cost (C1) | S (C2) | Breakdown (C3) | WS (C4)        | Time (C5) | ROP (C6)
A1 Man + Tractor | 11800     | 3      | 2 (less)       | 10 (excellent) | 25        | 3933.33
A2 JCB + Tractor | 10000     | 1.5    | 8 (very high)  | 8 (very good)  | 10        | 5000
A3 JCB + Dumper  | 7000      | 0.5    | 6 (high)       | 6 (good)       | 5         | 7000
2.2. Evaluating weight of criteria by AHP approach
In this study the AHP method used paired comparisons to weight the importance of the
criteria based on a hierarchical structure. Notably, the AHP method has the advantages of
yielding more precise results and of verifying the consistency of judgments. The computation of the
weights using the AHP approach involves two main steps: 1) development of the pair-wise
comparison matrix and 2) synthesis of judgments.
2.2.1. Development of a pair-wise comparison matrix
The matrix of pair-wise comparisons was constructed from i × j elements, where i
and j were the number of criteria (n), so that in matrix A, aij represents the comparative value of
criterion i with respect to criterion j, such that aij = 1/aji and aij = 1 when i = j.
The comparisons between the criteria were made using the measurement scale of Saaty
(1980), which gives numerical values between 1 and 9 depending on the relative importance of
the criterion.
2.2.2. Synthesis of judgments
After all pair-wise comparison matrices were formed, the vector of weights, w = [w1,
w2, ..., wn], of each criterion was computed on the basis of Saaty's eigenvector procedure.
This procedure is known as synthesis of judgments and involves the following steps:
1) Sum the values of the elements in each column of the pair-wise comparison matrix A.
2) Divide each element of the pair-wise comparison matrix by the square root of the sum of the
squares of the elements of its column, to obtain the normalized pair-wise comparison matrix.
3) Calculate the product of the elements in each row of the normalized matrix A and its nth root;
normalize the nth roots, which represent the weights of the criteria.
4) Calculate the maximum eigenvalue:

λmax = (1/n) Σ(i=1..n) [ (Σ(j=1..n) aij·wj) / wi ]    (1)
5) In the AHP method the maximum eigenvalue (λmax) is a significant parameter for validating
consistency and is used as a reference index to screen information by calculating
the consistency ratio (CR) of the estimated vector. CR is calculated by applying the following
equations, in which C.I. represents the consistency index for a matrix of order n:

C.I. = (λmax − n) / (n − 1)    (2)
C.R. = C.I. / R.I.    (3)

6) The value of the random index (RI) depends on n. The RI values corresponding to n varying
from 1 to 10 are listed in Table 2.
Table 2. RI values
The consistency ratio (CR) represents the key check of inconsistency of the subjective
values of the A matrix: if CR ≤ 0.1, the values of the subjective judgments are
considered acceptable.
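Steps 1-6 can be sketched end to end; the 3×3 comparison matrix below is hypothetical (not the paper's Table 3), and the row geometric mean is used as the usual approximation to Saaty's eigenvector:

```python
import math

A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]   # hypothetical reciprocal pair-wise comparison matrix
n = len(A)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}   # Saaty's random index

# Geometric mean of each row, normalized -> weight vector w
gm = [math.prod(row) ** (1.0 / n) for row in A]
w = [g / sum(gm) for g in gm]

# Eq. (1): lambda_max = (1/n) * sum_i ((A w)_i / w_i)
aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(aw[i] / w[i] for i in range(n)) / n

CI = (lam_max - n) / (n - 1)   # Eq. (2)
CR = CI / RI[n]                # Eq. (3); judgments acceptable if CR <= 0.1
print(round(CR, 3))
```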
2.3. Method selection using TOPSIS
The basic concept of this method is that the selected alternative should have the
shortest distance from the ideal solution and the farthest distance from the negative-ideal
solution in a geometrical sense. TOPSIS assumes that each attribute has a tendency of
monotonically increasing or decreasing utility. Therefore, it is easy to locate the ideal and
negative-ideal solutions. The Euclidean distance approach is used to evaluate the relative
closeness of alternatives to the ideal solution. Thus, the preference order of alternatives is
yielded through comparing these relative distances.
The procedural steps of the TOPSIS method are listed below:
Figure 1. Procedure of TOPSIS
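The TOPSIS steps can be sketched on the score matrix of Table 1; equal criterion weights are assumed here purely for illustration, since the paper's actual weights come from the AHP stage (Fig. 2):

```python
import math

# Score matrix of Table 1: rows A1..A3, columns C1..C6
X = [[11800.0, 3.0, 2.0, 10.0, 25.0, 3933.33],
     [10000.0, 1.5, 8.0, 8.0, 10.0, 5000.0],
     [7000.0, 0.5, 6.0, 6.0, 5.0, 7000.0]]
benefit = [False, False, False, True, False, False]  # only C4 (work spec) is larger-is-better
w = [1.0 / 6] * 6   # assumed equal weights, for illustration only

m, n = len(X), len(X[0])
# Vector normalization, then weighting
norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
# Ideal (A+) and negative-ideal (A-) solutions, per criterion direction
best = [max(col) if b else min(col) for col, b in zip(zip(*V), benefit)]
worst = [min(col) if b else max(col) for col, b in zip(zip(*V), benefit)]

def dist(v, ref):
    """Euclidean separation of an alternative from a reference solution."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, ref)))

# Relative closeness to the ideal solution; larger is preferred
closeness = [dist(v, worst) / (dist(v, worst) + dist(v, best)) for v in V]
print([round(c, 3) for c in closeness])
```

With vector normalization the first column reproduces the C1 values of Table 4; the ranking itself depends on the weight vector substituted for `w`.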
3. Results
The weighted values of the criteria were assigned using the scale developed by
Saaty, and consequently a pair-wise comparison matrix was established to determine the
decision criteria in a hierarchical tree structure at different levels using AHP (Table 3),
resulting in the final weight of each criterion (Fig. 2).
Table 3. Initial matrix for AHP

Criteria | C1  | C2  | C3  | C4  | C5  | C6
C1       | 1   | 7   | 8   | 5   | 6   | 1/5
C2       | 1/7 | 1   | 5   | 1/6 | 1/5 | 1/8
C3       | 1/8 | 1/5 | 1   | 1/7 | 1/6 | 1/9
C4       | 1/5 | 6   | 7   | 1   | 5   | 1/6
C5       | 1/6 | 5   | 6   | 1/5 | 1   | 1/7
C6       | 5   | 8   | 9   | 6   | 7   | 1
Figure 2. Weight of criteria
The initial matrix for AHP was checked for consistency. The consistency ratio obtained was
0.046, which is less than 0.1 and thus confirms that the weights given are acceptable.
After assigning weights using AHP, the methods were ranked using the TOPSIS method:
1) Construction of a decision matrix (DM) (Table 1).
2) Calculation of the normalized decision matrix (Table 4).
Table 4. Normalized decision matrix for TOPSIS

Alternative | C1       | C2       | C3       | C4       | C5       | C6
A1          | 0.695032 | 0.884652 | 0.196116 | 0.707107 | 0.912871 | 0.415833
A2          | 0.589010 | 0.442326 | 0.784465 | 0.565685 | 0.365148 | 0.528602
A3          | 0.412307 | 0.147442 | 0.588348 | 0.424264 | 0.182574 | 0.740042
3) Construction of a weighted normalized decision matrix. 4) Identification of the ideal (A+)
and negative-ideal (A−) solutions. 5) Calculation of the separation of each alternative from the
ideal and negative-ideal solutions, and then calculation of the relative closeness to the ideal
solution (Table 5).
Table 5. Relative closeness

Method | S²       | S      | Relative closeness
A1     | 0.094997 | 0.3082 | 0.627456
A2     | 0.075751 | 0.2752 | 0.603728
A3     | 0.159746 | 0.3997 | 0.37312
6) Finally, the preference order was ranked from the final performance scores using the
TOPSIS method to select the preferred method for rubble filling. The preferred method has the
maximum performance score. The final ranking of the methods in descending order of
preference was Man and Tractor (A1) > JCB and Tractor (A2) > JCB and Dumper (A3). Man
and Tractor (A1) was found to be the superior method, while JCB and Dumper (A3) was the
worst method.
4. Conclusion
This paper proposed a new procedure, combining two decision-making methods, for the
selection of a method for rubble filling in the pit: finding the most suitable method among three
alternatives based on predefined selection criteria. AHP was used to determine the weights of the
six criteria by pair-wise comparisons. Subsequently, TOPSIS was applied to obtain the final
ranking preferences in descending order, thus allowing relative performances to be compared.
Using this multiple-criteria decision-making approach, using a man and tractor was identified
as the best method for rubble filling in the pit.
References
Abd, K. et al. An MCDM approach to selection of scheduling rule in robotic flexible assembly cells. World Academy of Science, Engineering and Technology, 2011, 76.
Chen, C. Applying the analytical hierarchy process (AHP) approach to convention site selection. Journal of Travel Research, 2006, 45.
Hwang, C.L. and Yoon, K. Multiple Attribute Decision Making: Methods and Applications. Springer-Verlag, New York, NY, 1981.
Lin, M. et al. Using AHP and TOPSIS approaches in customer-driven product design process. Computers in Industry, 2008, 59, 17-31.
Opricovic, S. and Tzeng, G.H. Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. European Journal of Operational Research, 2004, 156(2), 445-455.
Saaty, T.L. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. McGraw-Hill, New York, 1980.
Shyjith, K. et al. Multi-criteria decision-making approach to evaluate optimum maintenance strategy. Journal of Quality in Maintenance Engineering, 2008, 14, 375-386.
Vaidya, O.S. and Kumar, S. Analytic hierarchy process: an overview of applications. European Journal of Operational Research, 2006, 169, 1-29.
Identification of Parameters Affecting Liquefaction of Fine
Grained Soils using AHP
Rajhans N.R.1*, Purandare A. S.2, Pathak S.R.1
1 College of Engineering, Pune, Maharashtra, India
2 M.E.S. College of Engineering, Pune, Maharashtra, India
*Corresponding Author (nrr.prod@coep.ac.in)
Liquefaction is one of the devastating effects of an earthquake, in which soil suffers a significant loss of shear strength and stiffness as excess pore pressure increases. The major failures include tilting of buildings, excessive settlements and lateral displacements, and these have caused several thousand casualties and tremendous financial losses. It was long believed that only clean sands liquefy and that cohesive soils resist cyclic loading owing to the cohesional component of their shear strength. However, the Haicheng (1975) and Tangshan (1976) earthquakes showed that even cohesive soils can liquefy. To predict the liquefaction susceptibility of fine-grained soils, different criteria have been developed by incorporating various parameters, but these criteria are data-specific. The aim of this paper is to identify the significant soil parameters that affect susceptibility to liquefaction, which is achieved using AHP, one of the MCDM tools.
Key words: Soil liquefaction, AHP, Parameters, Hierarchy
1. Introduction
Only clean sandy soils with a small amount of fines were considered liquefiable until the Haicheng (1975) and Tangshan (1976) earthquakes. These earthquakes showed that even cohesive soils can liquefy; thus all soil types, i.e. clean sand and sand with non-plastic or plastic fines, are prone to liquefaction. Various properties of these soils are responsible for the occurrence of liquefaction at a particular site. Earthquakes of magnitude 5.5 and above on the Richter scale are found to cause liquefaction of loose saturated soil. Thus, to assess the liquefaction potential at a particular site, seismic parameters such as magnitude (M), peak ground acceleration (amax), frequency content, duration (t) and epicentral distance (R), along with specific soil properties such as plasticity index (PI), relative density (Dr), ratio of natural water content to liquid limit (Wc/LL), effective overburden pressure (σ′v) and coefficient of permeability (k), need to be considered.
Multi-criteria decision making (MCDM) methods are used to take decisions when multiple criteria are present in a decision-making scenario [Zionts, 1988]. The Analytic Hierarchy Process (AHP) is a method that allows consideration of both objective and subjective factors in ranking alternatives. AHP organizes the criteria and alternative solutions of a decision problem in a hierarchical decision model represented in the form of a matrix called the 'priority matrix'. AHP has the added advantage of accommodating the wide range of parameters governing any physical process and is thus found to be effective for the liquefaction phenomenon; this MCDM tool has been used effectively in such situations. The soil parameters can be selected using AHP based on their inherent relationships with the mechanism of liquefaction.
1.1 Literature review
Various approaches to assess the possibility of liquefaction at a particular site are laboratory testing, field data studies and analytical models. Earlier, it was considered that only clean sand is prone to liquefaction. Later, Kishida (1969) and Ishihara (1989), based on damage caused during the Mino-Owari (1891), Tonankai and Fukui earthquakes, showed that soils with fines content also liquefy. A number of criteria have been developed by researchers (Wang 1979, Seed et al. 1983, Polito
1999, Andrews and Martin 2000, Seed et al. 2003, Bray and Sancio 2006, and Boulanger and Idriss 2006) for evaluating the liquefaction potential of sandy soil with fines content. Based on observed
damages due to major earthquakes in China, Wang (1979) put forth threshold values for liquefaction of fine-grained soil. Using the same database of Wang, Seed and Idriss (1982) developed a criterion called the "Chinese Criteria" because of its origin. Koester (1992) correlated the Chinese and ASTM procedures for determination of liquid limit. This criterion was globally accepted as an indicator until the 1990s.
Figure 1. Bridge at Niigata, 1964 (Mw = 7.5)
Further, Youd (1998) and Andrews and Martin (2000) recommended the Chinese criteria with some modifications. After the 1994 Northridge, 1999 Kocaeli and 1999 Chi-Chi earthquakes, from field observations, researchers such as Bray et al. (2004, 2006), Boulanger et al. (1997, 1998, 2006) and Polito and Martin (2001) concluded that indiscriminate use of the Chinese Criteria as a substitute for laboratory and in situ testing should be avoided. Worthen (2009) showed that the critical state soil mechanics framework could be used effectively. Bol et al. (2010) showed that liquefaction potential can be assessed using parameters such as the coefficient of permeability (k) and the time required for 90% dissipation of pore pressure (U90). Pehlivan (2010) developed probabilistically based boundary curves for cyclic shear strain and excess pore water pressure generation response, mapped onto the cone penetration test domain.
However, all these charts and models consider different sets of soil properties and have been developed using specific data sets. The present work deals with extracting key soil properties using AHP to assess the liquefaction potential of sandy soil with fines content.
1.2 Analytic Hierarchy Process and its use in various fields
Nowadays, MCDM procedures such as the Analytic Hierarchy Process (AHP) are widely used in the fields of industrial engineering and operational research. AHP, as developed by Saaty [1980], has been applied in a variety of practical applications including economics, planning, energy policy, health, conflict resolution, site selection, project selection, budget allocation and so on. These procedures have also been used in the field of emergency management to simulate earthquake hazard maps by integrating AHP and GIS. However, the evaluation of liquefaction possibility involves not only a large number of parameters but also wide ranges of those parameters contributing to the phenomenon. Thus a new application of MCDM techniques, using AHP in the field of earthquake-induced liquefaction, is presented.
2. Liquefaction mechanism
Liquefaction is a phenomenon caused by monotonic, transient, or repeated disturbance of saturated cohesionless soils under undrained conditions. Liquefaction is defined as the transformation of a granular material from a solid to a liquefied state as a consequence of increased pore-water pressure and reduced effective stress. The increased pore-water pressure is due to the tendency of granular materials to compact when subjected to cyclic shear loading. Thus, when an earthquake occurs, the shear strength of loose saturated sandy soils decreases as a result of the increase in excess pore pressure. Such soils are therefore found to liquefy, becoming unable to support the structures founded on them and leading to failures.
It has been established that clean sands are more prone to liquefaction. However, damage during some recent earthquakes and the research carried out [Haicheng (1975) and Tangshan (1976)] reveal that sands with fines content are also susceptible to liquefaction. The liquefaction mechanism of sand with fines is quite similar to that of clean sand, except that the rate of rise and dissipation of excess pore pressure is small compared to that of clean sand. The presence of fines increases the threshold shear strain for pore-water pressure generation [Vucetic and Dobry (1991), Hazirbaba and Rathje (2008)]. Beyond a certain amount of fines, such sand is found to resist liquefaction.
3. Liquefaction parameters
From several studies to assess liquefaction potential, it is well understood that a large number of seismic as well as soil parameters affect the occurrence of liquefaction during an earthquake, making it necessary to identify the significant parameters, based on their contribution to the liquefaction phenomenon, to expedite the assessment of liquefaction.
The parameters responsible for the liquefaction phenomenon are basically classified into two groups: a) seismic parameters: peak ground acceleration (amax), magnitude (M), frequency, duration (t) and epicentral distance (R) are the salient seismic parameters; and b) soil parameters: plasticity index (PI), relative density (Dr), ratio of natural water content to liquid limit (Wc/LL), effective overburden pressure (σ′v) and coefficient of permeability (k) are the salient soil parameters.
4. AHP
AHP consists of four steps, as listed below:
(1) Structuring the hierarchy of criteria and alternatives for each criterion.
(2) Assessing the decision-makers' evaluations by pair-wise comparisons. Pair-wise comparison derives accurate ratio-scale priorities by comparing the relative importance, preference or likelihood of two elements with respect to another element.
(3) Using the eigenvector method to yield priorities for the criteria and for the alternatives under each criterion. After the comparison stage is completed, a mathematical process evaluates the priority weights for each matrix. The consistency ratio (CR) is then determined; if its value is larger than 0.10, the decision making is inconsistent and the comparisons must be reviewed (Saaty, 1982).
(4) Synthesizing the priorities, keeping the value of CR under the limit, to arrive at a set of ratings for the alternatives.
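Steps (2)-(3) can be illustrated with a short sketch: priority weights are taken from the principal eigenvector of a pair-wise comparison matrix, and the consistency ratio is computed from the principal eigenvalue. The 3x3 judgment matrix below is hypothetical, not the paper's actual comparisons; the random-index table is Saaty's standard one, truncated to n ≤ 5 for brevity.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights and consistency ratio for a pair-wise comparison matrix A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                          # normalized priority weights
    lam_max = vals[k].real
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)             # consistency index
    # Saaty's random index, truncated to n <= 5 for this sketch
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    cr = ci / ri if ri else 0.0              # consistency ratio; accept if < 0.10
    return w, cr

# Hypothetical judgments among three criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
```

For this near-consistent matrix, the consistency ratio falls well under the 0.10 limit named in step (3), so the weights would be accepted.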
Accordingly, in the present study a hierarchical structure for liquefaction susceptibility is generated, as portrayed in Fig. 2.
[Figure 2 shows the AHP hierarchy: the goal 'Liquefaction Parameters' branches into 'Seismic Parameters' (amax, magnitude, frequency, epicentral distance, duration) and 'Soil Parameters' (plasticity index, Dr, Wc/LL, σ′v, k).]
Figure 2. Hierarchical Structure for Liquefaction Susceptibility
5. Results
After applying the AHP method, the local weightages for the seismic and soil parameters are presented in tabular form (Table 1).
It is a well-established fact that for liquefaction to occur there must be ground shaking, and the character of the ground motion, such as PGA (amax), determines the shear strains that cause contraction of soil particles, leading to the development of excess pore pressure and finally causing liquefaction. It is also known that the potential for liquefaction increases with the intensity of earthquake shaking, which is usually represented by the earthquake magnitude (M). The weightages obtained in the present work for PGA (amax) and magnitude (M) rightly justify their significance. The weightages of all the salient seismic parameters indicate that, for the criterion 'seismic parameters', the attributes magnitude (27%) and peak ground acceleration (31%) are significant.
Table 1. Criteria, local weights and total weights

Criteria (local weight)      Sub-criteria   Local weight   Total weight
Seismic parameters (50%)     amax           31%            0.165
                             M              27%            0.135
                             Frequency      15%            0.075
                             R              16%            0.080
                             t              11%            0.055
Soil parameters (50%)        PI             45%            0.225
                             Dr             22%            0.110
                             Wc/LL          14%            0.070
                             σ′v            13%            0.065
                             k              10%            0.050
Further, the susceptibility to liquefaction is known to decrease with increasing plasticity index. The plasticity index, the difference between the liquid limit and the plastic limit, is a measure of the plasticity of fine-grained soil and has a close relationship with soil properties such as strength, compressibility and permeability. Soils with a high PI tend to be clays, soils with a lower PI tend to be silts, and soils with a PI of zero tend to have little or no silt or clay. Similarly, loose soils are known to be more prone to liquefaction; accordingly, the relative density (Dr), which indicates the loose or dense state of the soil, is another important attribute. So, for the criterion 'soil parameters', the present analysis identifies PI (45%) and Dr (22%) as high-weightage attributes, relatively higher than Wc/LL, effective overburden pressure (σ′v) and permeability (k). It is thus anticipated that such an analysis will help in extracting the significant parameters to be included for a reliable and accurate assessment of liquefaction vulnerability.
6. Conclusion
The AHP method from multi-criteria decision making is applied to liquefaction potential studies. As a number of parameters affect the possibility of liquefaction, which is one of the devastating earthquake-induced hazards, it becomes necessary to identify the significant parameters so as to perform reliable and accurate liquefaction forecasts. Prior knowledge of the contribution of the significant parameters will certainly enhance the predictability of liquefaction evaluation procedures. A hierarchy structure representing these parameters is thus generated.
Using the AHP method, the weights of the parameters under consideration are obtained. It is thus concluded that a few of the seismic as well as soil parameters are extremely important in deciding liquefaction possibility. Peak ground acceleration (amax) and magnitude (M) are observed to be the significant seismic parameters, whereas among the soil properties, PI
and Dr primarily govern the liquefaction phenomenon. So, AHP can be effectively used to select the salient seismic and soil properties responsible for causing liquefaction of loose saturated sandy soils with a certain amount of plastic fines.
References
Andrews, D.C.A. and Martin, G.R. (2000). "Criteria for liquefaction of silty soils." Proc., 12th World Conf. on Earthquake Engineering, Auckland, New Zealand.
Bol, E., Önalp, A. and Arel, E. (2010). "Diagnosing liquefaction potential by the dissipation test in fine grained soil."
Boulanger, R.W., Meyers, M.W., Mejia, L.H. and Idriss, I.M. (1998). "Behavior of a fine-grained soil during the Loma Prieta earthquake." Can. Geotech. J., 35, 146-158.
Boulanger, R.W. and Idriss, I.M. (2006). "Liquefaction susceptibility criteria for silts and clays." Journal of Geotechnical and Geoenvironmental Engineering, 132(11), 1413-1426.
Bray, J.D., Sancio, R.B., Riemer, M.F. and Durgunoglu, T. (2004b). "Liquefaction susceptibility of fine grained soils." Proc., 11th Int. Conf. on Soil Dynamics and Earthquake Engineering, Singapore.
Bray, J.D. and Sancio, R.B. (2006). "Assessment of the liquefaction susceptibility of fine grained soils." Journal of Geotechnical and Geoenvironmental Engineering, 132(9), 1165-1177.
Worthen, D. (2009). "Critical state framework and liquefaction of fine-grained soils." Thesis for Master of Science in Civil Engineering, Washington State University.
Pehlivan, M. (2010). "Assessment of liquefaction susceptibility of fine grained soils." Thesis, Middle East Technical University.
Polito, C.P. and Martin, J.R. (2000). "The effects of non-plastic fines on the liquefaction resistance of sands."
Saaty, T.L. (1990). "How to make a decision: The Analytic Hierarchy Process." European Journal of Operational Research, 48, 9-26.
Saaty, T.L. (2008). "Decision making with the analytic hierarchy process." International Journal of Services Sciences, 1(1), 83-98.
Seed, H.B. and Idriss, I.M. (1982). "Ground motions and soil liquefaction during earthquakes." Monograph, Earthquake Engineering Research Institute, Oakland, CA.
Seed, R.B. et al. (2003). "Recent advances in soil liquefaction engineering: A unified and consistent framework." EERC-2003-06, Earthquake Engineering Research Center, Berkeley, California.
Wang, W. (1979). "Some findings in soil liquefaction." Water Conservancy and Hydroelectric Power Scientific Research Institute, Beijing, China.
Youd, T.L., Idriss, I.M., Andrus, R.D., Arango, I., Castro, G., Christian, J.T., et al. (2001). "Liquefaction resistance of soils: summary report from the 1996 NCEER and 1998 NCEER/NSF workshops on evaluation of liquefaction resistance of soils." J. Geotech. Geoenviron. Eng., ASCE, 127(10), 817-833.
Identifying Key Risk Factors for PPP Projects in the Indian
Construction Industry: A Factor Analysis Approach
Rakesh P. Joshi1*, Hariharan Subramanyan2
1 Sardar Patel College of Engineering, Mumbai-400 058, Maharashtra, India
2 Thadomal Sahani Engineering College, Mumbai-400 050, Maharashtra, India
*Corresponding author (e-mail: rpj_7784@rediffmail.com)
Public Private Partnerships (PPPs) are increasingly being used by the Indian Government. Past research has presented different risk management models for successful implementation of Public Private Partnership projects; however, actual empirical research studies in the Indian context are limited. This research presents the current views of stakeholders of Public Private Partnership projects in India, through a questionnaire survey, on the identification of critical risk attributes, their allocation, and their impact on Indian Public Private Partnership projects. A total of 50 risk attributes were identified through the literature. Based on these risk attributes, a questionnaire was designed to collect the data. Factors were analyzed by the principal component analysis method of the factor analysis technique using the SPSS software package. After factor rotation by the varimax method, eight component factor groupings evolved, named Social Acceptability Risk; Experience in PPP Risk; Market/Competition Risk; Procedural Delay Risk; Economic Risk; Macroeconomic Risk; Construction Delay Risk; and Subletting Risk. The findings from this study provide a comprehensive picture of critical risks for the stakeholders of PPP projects in India that intend to participate in contracts.
Keywords: Public Private Partnership Projects (PPP), Critical Risk Attributes, Factor
Analysis.
1. Introduction
Public-private partnerships (PPPs) are increasingly being used by governments and public sector authorities throughout the world as a way of increasing access to infrastructure services for their citizens and economies at a reduced cost. PPP infrastructure projects involve many different stakeholders. The Government of India has been promoting the involvement of private entrepreneurs in the development of road projects, with a focus on overcoming the limitations of the traditional public procurement system. Participation of private entrepreneurs through the Public-Private Partnership (PPP) route brings in additional capital and imparts techno-managerial efficiency in project development and operation. The success of projects procured through the PPP route greatly depends on the transfer of the risks associated with the project to the parties best able to manage them (L. Boeing Singh et al. 2006). Thus the prime objective of this research paper is to identify the risk attributes and determine the key risk attributes in the Indian context by a factor analysis approach using the SPSS software package.
2. Identifying risk attributes & application of factor analysis approach
Risks in PPP projects can be clustered according to the conventional risk management process: identification of risk areas, risk analysis, and risk strategies. To improve the use of risk strategies, risk areas need to be identified and analyzed properly. Much of the risk of a PPP project comes from the complexity of the arrangement itself in terms of the documentation, financing, taxation, technical details, sub-agreements, etc. involved in a major infrastructure venture, while the nature of the risk alters over the duration of the project (D. Grimsey et al. 2002).
The identification of risk attributes is a core element of any PPP project procurement. Risk identification includes the recognition of potential risk event conditions in a project. A total of 50 risk attributes for PPPs in the context of India were identified after conducting an extensive literature review. The basic principle of risk identification is to recognize the significance of risk factors, which starts with listing the risk attributes.
The 50 risk attributes, i.e. possible risk sub-factors, were adapted from research by Sid Ghosh et al. (2004), Nabil A. Kartam et al. (2001), Yelin Xu et al. (2010), Ashwin Mahalingam et al. (2007), Shou Qing Wang et al. (1999), Li Bing et al. (2005), Li-Yin Shen et al. (2006), Grimsey D. et al. (2002), Makarand Hastak et al. (2000), Martinus P. Abednego et al. (2006), P.K. Dey (2000) and L. Boeing Singh et al. (2006).
The factor analysis technique has been widely used worldwide in risk management; e.g., Sid Ghosh et al. (2004) used principal component analysis to assess the critical risk factors for a mass rapid-transit underground rail project (Chaloem Ratchamongkhon Line), Thailand; Hemanta Doloi et al. (2011) used principal component analysis with regression modeling to analyze the factors affecting delay in Indian construction projects; Yelin Xu et al. (2010) used a factor analysis approach to develop a risk assessment model for PPP projects in China; and K.C. Iyer et al. (2005), through factor analysis of 55 success and failure attributes, extracted 7 critical success factors affecting the cost performance of Indian construction projects.
3. Research survey design & administration
To identify the significant risks of public-private partnership projects in India, a questionnaire survey was administered. The fifty risk attributes were included in the questionnaire. The survey was administered in 2012 among Indian organizations involved in PPP projects. In all, 90 completed questionnaires were returned out of 139 distributed, an effective response rate of 64.75%.
Table 1. Response from field experts

Group            Sent   Received
Academic         45     19
Public Sector    60     43
Contractor       30     24
Sector           4      4
Total            139    90

Effective response rate = (90/139) × 100 = 64.75%
4. Survey data preliminary analysis and results
The relative importance of the fifty risk attributes identified through the literature review was explored for both the likelihood of occurrence of each risk and its severity by means of a Likert rating scale. The statistical analysis undertaken included descriptive analysis (applying the mean score ranking technique; calculating the overall impact of risk attributes; normalization of overall impact values), suitability and reliability tests (Kaiser-Meyer-Olkin and Bartlett's test of sphericity) and factor analysis (factor extraction and factor rotation using the varimax method). The KMO test gave a measure of sampling adequacy of 0.5503, which is greater than 0.5. Bartlett's test of sphericity gave an approximate chi-square of 164.8140 with 136 degrees of freedom and a small associated significance level (p ≈ 0.0).
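The two suitability checks named above can be sketched in code. The following is an illustrative computation of the KMO measure and Bartlett's chi-square statistic using standard formulas; the data are synthetic stand-ins (90 respondents, 10 items), not the survey responses, so the numbers will not match the SPSS output quoted above.

```python
import numpy as np

def kmo_bartlett(X):
    """KMO measure of sampling adequacy and Bartlett's sphericity statistic."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # Bartlett's test: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p - 1)/2
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    # KMO: squared correlations vs. squared partial correlations,
    # with partial correlations derived from the inverse of R
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / d
    off = ~np.eye(p, dtype=bool)
    kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())
    return kmo, chi2, df

rng = np.random.default_rng(1)
X = rng.standard_normal((90, 10))      # hypothetical 90 responses on 10 items
kmo, chi2, df = kmo_bartlett(X)
```

As in the text, a KMO above 0.5 and a significant Bartlett chi-square would indicate the data are suitable for factor analysis.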
5. Factor analysis of risk attributes for PPP projects in India
For this research work, the SPSS package for principal component analysis (PCA) was used; the main idea of this method is to form, from a set of existing variables, a new variable (or new variables, but as few as possible) that contains as much of the variability of the original data as possible. The method of eigenvalue decomposition was used to determine the number of factors to be extracted, and factor rotation was performed using the varimax method with 21
iterations. It was found that the rotated factor loading matrix established 8 component groups (Table 2), which can be interpreted as follows:
Table 2. Rotated factor matrix (loadings) of critical risk attributes

Component   Critical risk attribute                                   Factor loading
Factor 1    a01 Protest Against Environmental System by Public        0.7540
            a05 Pollution and Safety Rules                            0.6199
            a10 Protest Against Human Displacement and Compensation   0.5451
Factor 2    a24 Inadequate Experience in PPP                          0.7244
Factor 3    a31 Tariff Adjustment                                     0.6069
            a23 Revenue Risk                                          0.5886
Factor 4    a45 Delay in Land Acquisition                             0.8180
            a29 Delay in Solving Disputes                             0.5778
            a11 Delay in Approval                                     0.5139
Factor 5    a14 Unavailability of Funds                               0.7544
            a15 Economic Disaster                                     0.5058
Factor 6    a22 Investment Risk                                       0.7469
            a19 Inflation Rate Volatility                             0.6345
            a17 Exchange Rate Fluctuations                            0.1552
Factor 7    a46 Construction Delay                                    0.8142
Factor 8    a34 Subcontractors Failure                                0.8612
            a35 Co-ordination of Subcontractors                       0.3802
5.1 Factor grouping 1 represents – Social acceptability risk
Factor one explains 14.2911% of the total variance of the linear component (factor) and comprises (i) a01 Protest Against Environmental System by Public, (ii) a05 Pollution and Safety Rules, and (iii) a10 Protest Against Human Displacement and Compensation. Infrastructure projects typically have significant social and environmental impacts arising from their construction and operation. Social impacts fall on the communities affected by the project, and environmental impacts arise on account of the project locations. During the inception stage of a project in India, only the benefit-cost ratio is considered in the overall analysis; the decision to invest in private infrastructure has largely been made on purely financial or economic grounds. But the study revealed that projects need to deal with protest against the environmental system by the public, pollution and safety rules, and protest against human displacement and compensation.
5.2 Factor grouping 2 represents – Experience in PPP projects risk
The second factor explains 8.9058% of the total variance of the linear component (factor) and comprises a single risk attribute, a24 Inadequate Experience in PPP. Inadequate or insufficient experience leads to major issues in public-private partnership projects, such as lack of expertise in local environments, lack of understanding of the legal and regulatory framework, cultural norms, and so on.
5.3 Factor grouping 3 represents – Market/competition risk
The third factor explains 8.4500% of the total variance of the linear component (factor) and comprises (i) a31 Tariff Adjustment and (ii) a23 Revenue Risk. Realistic traffic/market assessment studies are an important step in the project preparation stage of a PPP project. Such assessments ensure that bids submitted by interested private entities are well informed and realistic and that the overall capacity proposed for a project is optimal. They also ease the pressure during the operation phase, since the operator is not exposed to very divergent demand and the corresponding revenue risk.
5.4 Factor grouping 4 represents – Procedural delay risk
The fourth factor explains 8.0606% of the total variance of the linear component (factor) and comprises (i) a45 Delay in Land Acquisition, (ii) a11 Delay in Approval and (iii) a29 Delay in Solving Disputes. The land acquisition process for PPP projects is no doubt the most challenging pre-development activity. Also, obtaining numerous clearances and approvals has been the bane of Indian PPP projects.
5.5 Factor grouping 5 represents – Economic risk
The fifth factor explains 7.1900% of the total variance of the linear component (factor) and comprises (i) a14 Unavailability of Funds and (ii) a15 Economic Disaster. While the Government of India has provided financial support initiatives such as the viability gap funding (VGF) scheme and the Jawaharlal Nehru National Urban Renewal Mission, it is important for PPP projects to be financially independent to the extent possible and to minimize reliance on such grants or schemes.
5.6 Factor grouping 6 represents – Macroeconomic risk
The sixth factor explains 7.1900% of the total variance of the linear component (factor) and comprises (i) a19 Inflation Rate Volatility, (ii) a22 Investment Risk and (iii) a17 Exchange Rate Fluctuations. Frequently, when international private infrastructure providers are involved, a large amount of capital is invested up front, with the intention that this investment is recovered down the line, either through operational revenues or through a transfer back to the host government. However, falling currency exchange rates in the interim, a feature that may well occur in many developing economies, may lead to rapid devaluation of the infrastructure provided by foreign private providers, thus resulting in a loss on their investment.
5.7 Factor grouping 7 represents – Construction delay risk
The seventh factor explains 6.1410% of the total variance of the linear component (factor) and comprises only one risk attribute, a46 Construction Delay. Despite a clear understanding of the key construction delay factors among research communities, a sincere attempt to address this chronic issue of time overrun is yet to materialize among practitioners in the Indian construction industry.
5.8 Factor grouping 8 represents – Subletting risk
The eighth factor explains 5.9607% of the total variance of the linear component (factor) and comprises (i) a34 Subcontractors Failure and (ii) a35 Co-ordination of Subcontractors. While support from subcontractors and specialized agencies is no doubt a must for PPP projects, strong co-ordination among subcontractors to make the project happen is equally important. Large PPP projects require active hand-holding from all the stakeholders of the project throughout the planning and execution stages.
6. Conclusion
Project risks form an integral part of every project. The term project risk refers to events and
circumstances that cause an uncertainty of the costs and benefits involved. As a result, there is
the possibility of a project outcome or return that is below expectations.
A total of 50 risk attributes were identified through the literature. Based on these risk variables,
a questionnaire was designed to collect data from field experts. The data were analyzed by
principal component analysis using SPSS software. The results show that 17 risk attributes
could be grouped into 8 component factor groupings. The 8 critical component groupings have
been named according to the characteristics of the attributes they contain, as F1 Social
Acceptability Risk; F2 Experience in PPP Risk; F3 Market / Competition Risk; F4 Procedural
Delay Risk; F5 Economical Risk; F6 Macro-economical Risk; F7 Construction Delay Risk; and
F8 Subletting Risk. Recently the government has taken steps to promote the systematic
development of infrastructure in general and private participation in particular. The path
forward requires framing a risk-responsive framework for PPP projects in India.
Multi-Objective Optimization of Rotary Regenerator using
Multi-Objective Teaching-Learning-Based Optimization
Algorithm
Vivek Patel1, R. Venkata Rao2*
1 L.E. College, Morbi, Gujarat, India
2 S.V. National Institute of Technology, Ichchanath, Surat, Gujarat, India
*Corresponding author (e-mail: ravipudirao@gmail.com)
The present work proposes a multi-objective teaching-learning-based optimization
(MO-TLBO) algorithm for multi-objective rotary regenerator optimization. Teaching-
Learning-Based Optimization (TLBO) is a recently proposed population-based
algorithm which simulates the teaching-learning process of the classroom and does
not require any algorithm-specific control parameters. The MO-TLBO algorithm uses
a grid-based approach to adaptively assess the non-dominated solutions (i.e. the
Pareto front) maintained in an external archive. Maximization of regenerator
effectiveness and minimization of regenerator pressure drop are considered as
objective functions and treated simultaneously in the multi-objective optimization. Six
design variables, namely regenerator frontal area, matrix rotational speed, matrix rod
diameter, matrix thickness, porosity and split, are considered for optimization. A case
study is presented to demonstrate the effectiveness and accuracy of the proposed
algorithm.
1. Introduction
The difficulties associated with mathematical optimization of large-scale engineering
problems have contributed to the development of alternative solutions. Several modern
heuristic algorithms have been developed for the solution of such engineering problems.
These algorithms can be classified into two important groups depending on the nature of the
phenomenon they simulate: evolutionary algorithms (EA) and swarm intelligence based
algorithms.
All evolutionary and swarm intelligence based algorithms are probabilistic algorithms
and require common control parameters such as population size, number of generations, elite
size, etc. Besides the common control parameters, each algorithm requires its own algorithm-
specific control parameters. For example, GA (Holland, 1975) uses mutation rate and crossover
rate; similarly, PSO (Kennedy and Eberhart, 1995) uses inertia weight and social and cognitive
parameters. Proper tuning of the algorithm-specific parameters is a crucial factor affecting the
performance of these algorithms: improper tuning either increases the computational effort or
yields a local optimal solution. Considering this fact, Rao et al. (2011, 2012) and Rao and Patel
(2012) recently introduced the Teaching-Learning-Based Optimization (TLBO) algorithm, which
does not require any algorithm-specific parameters; TLBO requires only the common control
parameters, population size and number of generations, for its working. In this sense TLBO
can be regarded as an algorithm-specific parameter-less algorithm.
An optimization problem involving more than one objective function of conflicting nature
is known as a multi-objective optimization (MOO) problem. Multi-objective optimization has
been defined as finding a vector of decision variables while optimizing several objectives
simultaneously under a given set of constraints. Unlike single-objective optimization, MOO
solutions are such that the performance of one objective cannot be improved without
sacrificing the performance of another. Hence, the solution of a MOO problem is always a
trade-off between the objectives involved in the problem.
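The trade-off property just described is captured by the standard Pareto-dominance test. The sketch below is illustrative only (the function name and example values are ours, not from the paper), assuming all objectives are cast as minimization:

```python
def dominates(a, b):
    """Return True if solution a Pareto-dominates b.

    a and b are sequences of objective values, all to be minimized:
    a must be no worse than b in every objective and strictly
    better in at least one.
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# A maximization objective (e.g. effectiveness) is handled by negating it,
# so (effectiveness, pressure drop) becomes (-effectiveness, pressure drop)
# with both minimized. Two genuine trade-off points then do not dominate
# each other:
print(dominates((-0.60, 1.57), (-0.36, 0.658)))  # -> False
```

Points on a Pareto front are exactly those for which this test fails in both directions against every other point.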
In the present work, a multi-objective teaching-learning based optimization (MO-TLBO)
algorithm is proposed for the multi-objective rotary regenerator optimization problem. The
MO-TLBO algorithm uses a fixed-size archive to maintain the good solutions obtained in every
iteration. The ε-dominance method is used to maintain the archive; in this method the size of
the final external archive depends on the ε value, which is usually a user-defined parameter.
The solutions kept in the external archive are used by the learners to update their knowledge.
The proposed algorithm uses a grid to control the diversity over the external archive.
The remainder of this paper is organized as follows. Section 2 describes the basic TLBO
algorithm and its multi-objective version. Section 3 presents previous work on rotary
regenerator optimization. Section 4 describes the objective functions and decision variables.
Section 5 presents the results obtained using the proposed algorithm. Finally, the conclusions
of the present work are presented in Section 6.
2. Multi-objective teaching-learning-based optimization (MO-TLBO) algorithm
Teaching-learning is an important process in which every individual tries to learn
something from other individuals to improve himself/herself. Rao et al. (2011, 2012) proposed
an algorithm known as teaching-learning based optimization (TLBO) which simulates the
traditional teaching-learning phenomenon of the classroom. The algorithm simulates two
fundamental modes of learning: (i) through the teacher (known as the teacher phase) and (ii)
by interacting with the other learners (known as the learner phase). TLBO is a population-based
algorithm in which a group of students (i.e. learners) is considered the population and the
different subjects offered to the learners are analogous to the different design variables of the
optimization problem. The grades of a learner in each subject represent a possible solution to
the optimization problem (the values of the design variables), and the mean result of a learner
considering all subjects corresponds to the quality of the associated solution (fitness value).
The best solution in the entire population is considered the teacher. A detailed description of
the basic TLBO algorithm is available in the previous work of Rao et al. (2011, 2012). The
external archive employed in the present work is described below.
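For readers unfamiliar with the basic algorithm, the two phases can be sketched for a single-objective minimization as follows (the multi-objective archive handling is described next). This is a simplified sketch of the method of Rao et al.; the function names and the sphere test function are ours:

```python
import random

def tlbo_step(pop, f, lo, hi):
    """One TLBO iteration (teacher phase + learner phase) for minimizing
    f over box bounds [lo, hi]; pop is a list of real-valued vectors.
    Simplified sketch of the algorithm of Rao et al. (2011)."""
    n = len(pop[0])
    clip = lambda x: [max(lo[i], min(hi[i], x[i])) for i in range(n)]

    # Teacher phase: move every learner toward the best solution
    # (the teacher) and away from the scaled population mean.
    teacher = min(pop, key=f)
    mean = [sum(p[i] for p in pop) / len(pop) for i in range(n)]
    new_pop = []
    for p in pop:
        tf = random.choice((1, 2))  # teaching factor, 1 or 2
        cand = clip([p[i] + random.random() * (teacher[i] - tf * mean[i])
                     for i in range(n)])
        new_pop.append(cand if f(cand) < f(p) else p)  # greedy acceptance

    # Learner phase: each learner interacts with a randomly chosen peer,
    # moving toward a better peer and away from a worse one.
    pop = new_pop
    out = []
    for p in pop:
        q = pop[random.randrange(len(pop))]
        sign = 1 if f(p) < f(q) else -1
        cand = clip([p[i] + sign * random.random() * (p[i] - q[i])
                     for i in range(n)])
        out.append(cand if f(cand) < f(p) else p)
    return out

# Usage: minimize the sphere function on [-5, 5]^2 with the same
# population size (10) and generation count (30) used later in the paper.
random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
f = lambda x: sum(v * v for v in x)
for _ in range(30):
    pop = tlbo_step(pop, f, [-5, -5], [5, 5])
```

The greedy acceptance in both phases means no learner ever gets worse, which is what lets the method dispense with algorithm-specific tuning parameters.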
2.1 External Archive
The main objective of the external archive is to keep a historical record of the non-dominated
vectors found along the search process. The algorithm uses a fixed-size external archive to
keep the best non-dominated solutions found so far. In the proposed algorithm an ε-dominance
method is used to maintain the archive; this method has been widely used in multi-objective
optimization algorithms to manage the archive.
Figure 1: Flowchart of MO-TLBO algorithm
The archive is a space with dimension equal to the number of the problem's objectives,
and it is empty at the beginning of the search. In the ε-dominance method each dimension of
this space is segmented into intervals of size ε, which partitions the space into squares, cubes
or hyper-cubes for two, three and more than three objectives, respectively. If a box holding a
solution dominates other boxes, those boxes (along with their solutions) are removed. Each
box is then examined so that it contains only one solution, the dominated solutions in each box
being eliminated. Finally, if a box still has more than one solution, the solution closest to the
lower-left corner of the box is retained and the others are removed. It is observed from the
literature that the use of ε-dominance guarantees that the retained solutions are non-dominated
with respect to all solutions generated during the execution of the algorithm. The proposed
MO-TLBO algorithm uses this grid-based approach for the archiving process.
The flow chart of the proposed algorithm is shown in Figure 1. Both the teacher phase and
the learner phase iterate cycle by cycle according to Figure 1 until the termination criterion is
satisfied. At termination, the external archive found by the algorithm is returned as the output.
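The boxing rules described above can be sketched as follows, assuming both objectives are minimized. The function name and update details are our illustrative reading of the ε-dominance method, not code from the paper:

```python
import math

def eps_archive_update(archive, sol, eps):
    """Insert sol into an ε-dominance archive (all objectives minimized).

    archive maps box-index tuples -> objective vectors. Each box keeps a
    single solution (the one closest to its lower-left corner), and any
    box dominated by another occupied box is discarded.
    """
    box = tuple(math.floor(f / eps) for f in sol)
    dom = lambda a, b: (all(x <= y for x, y in zip(a, b))
                        and any(x < y for x, y in zip(a, b)))

    # Reject sol if some occupied box dominates its box.
    if any(dom(b, box) for b in archive):
        return archive
    # Remove boxes (and their solutions) that sol's box dominates.
    archive = {b: s for b, s in archive.items() if not dom(box, b)}

    if box in archive:
        # Same box: keep the solution nearer the box's lower-left corner.
        corner = [b * eps for b in box]
        dist = lambda s: sum((f - c) ** 2 for f, c in zip(s, corner))
        if dist(sol) < dist(archive[box]):
            archive[box] = sol
    else:
        archive[box] = sol
    return archive

# Usage: feed candidate objective vectors through the archive.
arch = {}
for s in [(0.9, 0.1), (0.1, 0.9), (0.85, 0.15), (0.5, 0.5), (0.95, 0.95)]:
    arch = eps_archive_update(arch, s, eps=0.2)
# (0.95, 0.95) is box-dominated and rejected; (0.85, 0.15) falls in the
# same box as (0.9, 0.1) and loses the corner-distance tie-break.
```

Because at most one solution survives per box, the archive size is bounded by the ε grid, which is why ε is the user-defined parameter controlling the final archive size.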
3. Previous work on rotary regenerator optimization
A regenerator is a sizeable porous disk fabricated from materials having a fairly high
heat capacity. Because of its high heat capacity and compactness, the regenerator is widely
used for energy recovery. The optimum design of regenerators always requires an optimal
trade-off between increased effectiveness (i.e. heat transfer rate) and pressure drop within the
given set of constraints, which leads to the multi-objective design-optimization of regenerators.
Previously, different authors have applied different non-traditional random search algorithms
to the design-optimization of exchangers. Foli et al. (2006) used GA for multi-objective
optimization of micro channels in micro heat exchangers. Hilbert et al. (2006) carried out
multi-objective optimization of a tube bank heat exchanger using GA. Sanaye et al. (2008) used
GA for determining the optimum operating condition of a rotary regenerator. Wu et al. (2006)
carried out model-based analysis of a rotary regenerator used for energy recovery. Sanaye and
Hajabdollahi (2009) carried out multi-objective optimization of a rotary regenerator considering
regenerator effectiveness and pressure drop simultaneously, applying GA for the
multi-objective optimization. In the present work, an attempt is made to apply the MO-TLBO
algorithm to rotary regenerator optimization.
4. Objective function and decision variables
The effectiveness of the present approach using the MO-TLBO algorithm is assessed by
analyzing a case study that was earlier analyzed using GA by Sanaye and Hajabdollahi
(2009). A radial-flow rotary regenerator (as shown in Figure 2) with a randomly stacked
woven-screen matrix, used to preheat compressed air, is to be designed and optimized for
maximum effectiveness and minimum pressure drop. Fresh air with a mass flow rate of 12
kg/s leaves the compressor at a temperature of 400 K. This air is preheated in the
regenerator by the hot gases coming out of a furnace with a 100 ton/h melting load; in the
furnace the fuel-air ratio is about 0.08 for combustion. The property values of the air and hot
gases are considered to be temperature dependent. The objectives are thus to find the
regenerator dimensions, i.e. frontal area (Afr), matrix thickness (t), matrix rod diameter (d),
split (s), porosity (p) and rotational speed (Nm), for maximum effectiveness (ε) and minimum
pressure drop (ΔP).
Figure 2. Radial flow rotary regenerator and randomly stacked woven screen matrix
In the present work, the multi-objective problem is formulated considering two objectives. The
first objective is to maximize the effectiveness of the exchanger:

Z1 = Maximize ε(X),  X = (x1, x2, ..., xn),  xi,min ≤ xi ≤ xi,max,  i = 1, 2, ..., npar    (3)

The second objective is to minimize the total pressure drop of the exchanger:

Z2 = Minimize ΔP(Y),  Y = (y1, y2, ..., yn),  yi,min ≤ yi ≤ yi,max,  i = 1, 2, ..., npar    (4)
The multi-objective function formulated from the above single-objective functions is subject to
six inequality constraints given by the lower and upper limits of the design variables:

0.003 ≤ d ≤ 0.01 (m)
1 ≤ Nm ≤ 10 (rpm)
3 ≤ Afr ≤ 4 (m2)
0.1 ≤ t ≤ 0.5 (m)
0.33 ≤ s ≤ 3
0.602 ≤ p ≤ 0.832
A number of trials were conducted to decide the control parameters of the MO-TLBO
algorithm for obtaining the optimum solution. After various trials, a population size of 10 and
30 generations were set for the considered algorithm.
5. Results and discussion
Figure 3(a) presents the Pareto optimal curve obtained using MO-TLBO for the multi-
objective optimization. As seen from Figure 3(a), maximum effectiveness occurs at design point
G, corresponding to 0.94876, where the total pressure drop is 30.99 kPa. Similarly, the pressure
drop is minimum at design point A, corresponding to 0.1817 kPa, where the effectiveness is
0.0902. In the Pareto optimal curve, the design points between E and G show the maximum
effectiveness increment, corresponding to 6.1% at the cost of a 274.38% higher pressure drop.
The region E-G is therefore eliminated from the Pareto curve, and the remaining Pareto curve
obtained using the present approach is shown in Figure 3(b) along with the Pareto curve
obtained by Sanaye and Hajabdollahi (2009) using the GA approach.
Figure 3. (a) Pareto front of regenerator effectiveness and pressure drop obtained using
MO-TLBO (b) Pareto front for design points A-E and its comparison with GA
It is observed from Figure 3(b) that the maximum regenerator effectiveness obtained using the
present approach is 2.79% higher compared with the GA approach, at the cost of a 2.61%
higher pressure drop. The higher regenerator effectiveness of the present approach increases
the total heat transfer rate by 2.98% compared with the GA approach. Similarly, the minimum
pressure drop obtained using the MO-TLBO algorithm is 5.76% lower, with a 4.41% higher
heat transfer rate, compared with the GA approach. The optimal regenerator geometry for the
design points (A - G) on the Pareto optimal curve is listed in Table 1.
Table 1: Optimal regenerator geometry for design points (A - G) on the Pareto optimal front.
_____________________________________________________________________________
Output variable                      A        B       C       D       E       F       G
_____________________________________________________________________________
Frontal area (m2)                    4        4       4       4       4       4       4
Matrix thickness (m)                 0.1      0.1     0.2329  0.3292  0.5     0.3095  0.5
Matrix rotational speed (rpm)        4.2      10      10      10      10      10      10
Matrix rod diameter (mm)             10       3       3       3       3       3       3
Split                                1.875    1.603   1.607   1.452   1.269   1.528   1.325
Porosity                             0.832    0.832   0.832   0.766   0.725   0.602   0.602
Matrix volume (m3)                   0.4      0.4     0.924   1.4584  2       1.238   2
Cold fluid heat transfer             202.17   317.5   338.1   378.1   400.23  466.88  473.2
Hot fluid heat transfer              219.32   358.5   363.2   379.6   405.8   426.2   434.6
Effectiveness                        0.0902   0.3643  0.6015  0.7922  0.8935  0.8954  0.94876
Total pressure drop ΔPt (kPa)        0.1817   0.658   1.57    3.45    7.96    18.9    30.99
Heat transfer rate, Q (MW)           0.72244  2.96    4.96    6.63    7.533   7.5499  8.0311
_____________________________________________________________________________
To validate the effectiveness of the present approach for multi-objective regenerator
optimization, the results obtained using the present approach are compared with the results
obtained by Sanaye and Hajabdollahi (2009) using GA for identical effectiveness and heat
transfer rates. Table 2 shows the comparison of the results obtained using both approaches.
As seen from the results, for the same effectiveness and heat transfer rate the present
approach yields a lower pressure drop than the GA approach: for the considered design
points, a 5.08% to 65.1% reduction in pressure drop is observed using MO-TLBO compared
with the GA approach.
Table 2: Comparison of regenerator geometry for identical effectiveness and heat transfer
_____________________________________________________________________________
Effectiveness   Heat transfer   Total pressure drop (kPa)   Matrix volume (m3)
                rate (MW)       GA         MO-TLBO          GA        MO-TLBO
_____________________________________________________________________________
0.08687         0.6919          0.1928     0.181            0.4       0.4
0.6             4.7659          1.721      1.55             0.943     0.9196
0.7572          6.3151          3.638      2.933            1.9711    1.1167
0.8617          7.2469          7.621      5.12             1.9633    1.940
0.8636          7.2636          18.28      5.78             1.2078    1.962
0.9229          7.7978          30.17      23.6             1.9929    1.5494
_____________________________________________________________________________

6. Conclusions
In this work a multi-objective version of the TLBO algorithm, namely MO-TLBO, has been
adapted to handle the multi-objective rotary regenerator optimization problem. The MO-TLBO
algorithm uses a fixed-size archive to maintain the good solutions obtained in every iteration
and a grid-based approach to control the diversity over the external archive. Six design
variables are optimized in the present work. A set of Pareto optimal points is obtained using
the MO-TLBO algorithm. The algorithm's ability is demonstrated using a case study and its
performance is compared with the GA approach presented by previous researchers. The
improvements in the results obtained using the MO-TLBO algorithm compared with the GA
approach show the potential of the MO-TLBO algorithm for such thermal system design
optimization. The presented MO-TLBO algorithm can easily be modified to suit the
optimization of other types of thermal systems involving a number of design variables.
References
Foli, K., Okabe, T., Olhofer, M., Jin, Y. and Sendhoff, B. Optimization of micro heat exchanger:
  CFD, analytical approach and multi-objective evolutionary algorithms. International Journal
  of Heat and Mass Transfer, 2006, 49, 1090-1099.
Hilbert, R., Janiga, G., Baron, R. and Thevenin, D. Multi-objective shape optimization of a heat
  exchanger using parallel genetic algorithms. International Journal of Heat and Mass
  Transfer, 2006, 49, 2567-2577.
Holland, J. Adaptation in natural and artificial systems. Michigan Press, Ann Arbor, 1975.
Kennedy, J. and Eberhart, R.C. Particle swarm optimization. Proceedings of IEEE
  International Conference on Neural Networks, 1995, IEEE Press, Piscataway.
Rao, R.V. and Patel, V. An elitist teaching-learning-based optimization algorithm for solving
  complex constrained optimization problems. International Journal of Industrial
  Engineering Computations, 2012, 3(4), 535-560.
Rao, R.V. and Patel, V. Comparative performance of an elitist teaching-learning-based
  optimization algorithm for solving unconstrained optimization problems. International
  Journal of Industrial Engineering Computations, 2012, 4, 241-249.
Rao, R.V., Savsani, V.J. and Vakharia, D.P. Teaching-learning-based optimization: A novel
  method for constrained mechanical design optimization problems. Computer Aided
  Design, 2011, 43(3), 303-315.
Rao, R.V., Savsani, V.J. and Vakharia, D.P. Teaching-learning-based optimization: An
  optimization method for continuous non-linear large scale problems. Information
  Sciences, 2011, 183(1), 1-15.
Sanaye, S. and Hajabdollahi, H. Multi-objective optimization of rotary regenerator using
  genetic algorithm. International Journal of Thermal Sciences, 2009, 30(14-15), 1937-1945.
Sanaye, S., Jafari, S. and Ghaebi, H. Optimum operational conditions of a rotary regenerator
  using genetic algorithm. Energy and Buildings, 2008, 40(9), 1637-1642.
Wu, Z., Roderick, V.N. and Finn, B. Model-based analysis and simulation of regenerative heat
  wheel. Energy and Buildings, 2006, 38, 502-514.
Optimization of Machining Parameters in Electrical Discharge
Machining (EDM) of Stainless Steel
Rajeev Kumar1*, Gyanendra Kumar Singh2
1 IIMT College of Engineering, Greater Noida, U.P. 201308
2 Galgotia University, Greater Noida, U.P. 201308
*Corresponding author (e-mail: lovelyrajeev@gmail.com)
In the present research work, the effect of electric discharge machining (EDM)
parameters such as peak current (I), pulse-on time (Ton), pulse-off time (Toff) and
electrode diameter on the material removal rate of stainless steel was studied. The
experiment was carried out as per the design of experiments (DOE) approach using an
L9 orthogonal array (OA). Analysis of variance (ANOVA) and response graphs were
used to analyze the results obtained in the experiment. The experimental results show
that different combinations of EDM process parameters are required to achieve a
higher material removal rate (MRR) for stainless steel. The signal-to-noise (S/N) ratio
and ANOVA are used to analyze the effect of the parameters and to identify the optimal
cutting parameters. The contribution of each process parameter towards the MRR is
also identified. The results will be useful to manufacturing engineers in selecting
appropriate EDM process parameters for machining stainless steel.
1. Introduction
Manufacturing industries are currently facing challenges in the machining of advanced
difficult-to-machine materials (tough super alloys, ceramics, and composites) with stringent
design requirements (high precision, complex shapes, and high surface quality) (Benedict, 1987).
Conventional machining processes are not suitable for meeting these challenges effectively:
they require expensive equipment and a large labour force, making them economically
unviable. To meet these challenges, new types of processes need to be developed (Jain, 2002;
Aoyama and Inasaki, 1986). At present, many unconventional machining processes have gained
acceptance and are widely prevalent in industry (Kozak and Kazimierz, 2001), each with its own
potential and limitations. Electrical discharge machining (EDM) is a non-conventional,
thermo-electrical process which erodes material from the workpiece by a series of discrete
sparks between the tool and the workpiece immersed in a dielectric medium.
In EDM, the most important task is to select appropriate machining parameters for
achieving high machining performance; usually the machining parameters are determined
based on pilot experimentation. The most important performance measures in EDM are MRR,
tool wear and dimensional accuracy. The Taguchi method has been widely used in engineering
analysis and is a powerful tool for designing a high-quality system. Moreover, the Taguchi
method employs a special design of orthogonal arrays to investigate the effects of all the
machining parameters through a limited number of experiments. Recently, the Taguchi method
has been widely employed in several industrial fields and research works. Yan et al. (2000)
optimized the machining parameters using Taguchi methodology during electro-discharge
machining of an Al2O3/6061Al composite using a rotary disk electrode made of copper. Their
analysis revealed that, in general, electrical parameters affect the machining characteristics
more significantly than the non-electrical parameters.
Wang and Yan (2000) found optimum machining parameters using Taguchi methodology during
blind-hole rotary electro-discharge drilling of an Al2O3/6061Al composite. Lin et al. (2009)
obtained optimal input parameter settings for maximum MRR and minimum ASR using the
Taguchi method
during the magnetic force-assisted EDM process. Ramakrishna and Karunamoorthy (2006)
used Taguchi-methodology-based experiments to determine optimum wire-EDM parameters
such as pulse-on time, wire tension, delay time, wire feed speed and intensity of ignition
current for multi-objective characteristics such as MRR, WWR and ASR. Taguchi methodology
has also been applied by Ramakrishna and Karunamoorthy (2008) for finding optimum
parameters during wire-EDM of Inconel 718 with a brass wire tool electrode.
From the review of the literature, it is observed that the Taguchi method has been used in a
wide range of machining processes for the determination of optimal machining parameter
settings; however, optimization of EDM parameters using Taguchi methodology is rather
lacking. In the present paper Taguchi methodology has been applied for the determination of
optimum MRR during electro-discharge machining of a stainless steel workpiece material. The
present work is based on the analysis of the effects of the process parameters on MRR.
Finally, all selected parameters are optimized to obtain the maximum material removal rate
(MRR).
2. Taguchi methodology
The Taguchi method is a widely accepted method of design of experiments (DOE) and has
proved to be effective for producing high-quality components at relatively low cost. The
objective of the Taguchi approach is to find the optimum setting of the process parameters or
control factors, in turn making the process insensitive to sources of variation due to
uncontrollable or noise factors.
In the present study, the main control factors that influence the performance are taken as
input parameters and the experiment is performed using a specially designed orthogonal
array (OA). The selection of an appropriate OA is based on the total degrees of freedom (d.f.),
calculated as (Phadke, 1989; Ross, 1988)
d.f. = Σ (number of levels - 1) for each factor + Σ (number of levels - 1) × (number of levels - 1)
for each interaction + 1
The S/N ratio (in dB) represents the quality characteristic of the observed data in the Taguchi
DOE and is calculated as explained by Phadke (1989) and Ross (1988):

S/N = -10 log10 (MSD)                                                                    (1)

where MSD is the mean square deviation, commonly known as the quality loss function.
Depending on the experimental objective, the quality loss function is classified into three types:
lower the better (LB), higher the better (HB) and nominal the best (NB). For cutting speed,
higher values are desirable, while for surface roughness, lower values are desirable; these
quality loss types are called HB and LB respectively and are computed as follows (Phadke,
1989; Ross, 1988).

For larger the better: MSD = (1/n) Σi (1/yi2)                                            (2)

where yi is the observed value in the i-th repetition and n is the number of repetitions.
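Equations (1) and (2) combine into a single computation. The sketch below (function name ours) reproduces the higher-the-better S/N ratio; with a single observation per run it reduces to 20·log10(y):

```python
import math

def sn_ratio_hb(ys):
    """Higher-the-better S/N ratio in dB: S/N = -10*log10(MSD),
    where MSD = (1/n) * sum(1/y_i^2) per equations (1) and (2)."""
    msd = sum(1.0 / y ** 2 for y in ys) / len(ys)
    return -10.0 * math.log10(msd)

# With one observation per run this is 20*log10(y); e.g. the first MRR
# reading reported later in the paper (Table 4):
print(round(sn_ratio_hb([0.008274]), 4))  # -> -41.6457
```

Because MRR is an HB characteristic, the parameter levels that maximize this ratio are the ones selected from the response table.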
Normally, a full factorial design would require 3^4 = 81 experimental runs; however, the effort
and experimental cost of such a design may be prohibitive and unrealistic. Using the Taguchi
quality design, an L9 orthogonal array with 9 rows (corresponding to the number of
experiments) is used in the present work. The Taguchi method provides a simple, efficient and
systematic approach to design for performance, quality and cost.
Table 1 shows the four process parameters used as control factors and their levels. MINITAB
14 software is used for the graphical analysis of the experimental data.
Table 1. Control factors for EDM
S.No.  Factor  Symbol  Cutting parameter    Units  Level 1  Level 2  Level 3
1      A       A       Current              A      8        10       15
2      B       Ton     Pulse-on time        µs     3        4        5
3      C       Toff    Pulse-off time       µs     4        5        6
4      D       ED      Electrode diameter   mm     6.3      8.6      9.9
3. Experimental detail
The experimental studies were performed on a Sparkonix EDM machine. During
experimentation the effect of various input parameters such as current, pulse-on time,
pulse-off time and electrode diameter on the output parameter MRR was studied. Experiments
were performed on a flat workpiece (33 × 36 × 6) made of 304 stainless steel. Spark erosion
oil 450 was used as the dielectric liquid. The MRR is calculated using the following formula:

MRR (gms/min) = (Wi - Wf)/t                                                              (3)

where Wi is the initial weight of the workpiece in grams (before machining), Wf is the final
weight of the workpiece in grams (after machining), and t is the machining time in minutes.
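Equation (3) is a simple mass-loss rate. A minimal sketch follows; the function name and the example weights are ours (hypothetical readings), not measurements from the paper:

```python
def mrr_g_per_min(w_initial_g, w_final_g, time_min):
    """Material removal rate per equation (3): mass lost by the
    workpiece (g) divided by machining time (min)."""
    if time_min <= 0:
        raise ValueError("machining time must be positive")
    return (w_initial_g - w_final_g) / time_min

# Hypothetical reading: a workpiece loses 0.255 g over 10 minutes.
print(round(mrr_g_per_min(152.480, 152.225, 10.0), 4))  # -> 0.0255
```

In practice Wi and Wf come from weighing the workpiece on a precision balance before and after each run of the orthogonal array.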
The numerical values of the machining parameters at the different levels are shown in Table 1.
A pilot experimentation was done to decide the ranges of the input parameters. In the present
case, four parameters, each at three levels with no interaction effects, have been considered.
The total degrees of freedom (d.f.) are calculated as (Phadke, 1989): d.f. = (3 - 1) × 4 + 1 = 9.
Hence, a standard L9 orthogonal array (OA) is selected for the experimental design matrix.
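The degrees-of-freedom bookkeeping above can be sketched as follows (function name ours):

```python
def taguchi_dof(levels_per_factor, interactions=()):
    """Total degrees of freedom for a Taguchi design: sum of
    (levels - 1) per factor, plus (levels_a - 1)*(levels_b - 1) per
    studied interaction, plus 1 for the overall mean."""
    dof = sum(lv - 1 for lv in levels_per_factor)
    dof += sum((a - 1) * (b - 1) for a, b in interactions)
    return dof + 1

# Four factors at three levels, no interactions, as in the present work:
print(taguchi_dof([3, 3, 3, 3]))  # -> 9, so the 9-run L9 OA suffices
```

The chosen OA must have at least as many runs as the total d.f., which is why the 9-run L9 array is the smallest admissible design here.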
Table 3. Experimental results (factor levels coded 1-3 as per Table 1)

Exp. No. | I (A) | Ton (µs) | Toff (µs) | Electrode Diameter (ED) | MRR (gms/min)
1 | 1 | 1 | 1 | 1 | 0.00827
2 | 1 | 2 | 2 | 2 | 0.02550
3 | 1 | 3 | 3 | 3 | 0.01897
4 | 2 | 1 | 2 | 3 | 0.00706
5 | 2 | 2 | 3 | 1 | 0.03400
6 | 2 | 3 | 1 | 2 | 0.02581
7 | 3 | 1 | 3 | 2 | 0.03970
8 | 3 | 2 | 1 | 3 | 0.07500
9 | 3 | 3 | 2 | 1 | 0.02580

Figure 1. EDM set-up (a) and workpieces with tool electrodes (b): (a) photograph of the EDM set-up; (b) machined workpieces with the electrodes used.
4. Results and discussion
4.1 Effect of input factors on MRR
The selected parameters have different effects on the machining performance. Analysis
of variance (ANOVA) is used to identify the significant process parameters, and the
optimal machining parameters are obtained using the main effects plot.
A characteristic for which a higher value corresponds to better machining performance, such as
material removal rate, is called "higher is better (HB)" in quality engineering. The signal-to-noise
(S/N) ratio is an effective measure for identifying the significant parameters by
evaluating the minimum variance. The "higher is better" (HB) S/N ratio is calculated as

S/N = −10 log10 [ (1/n) Σ (1/yi²) ]      (4)

where yi is the observed response in the i-th repetition and n is the number of observations per trial. The
S/N ratio of the machining performance for each experimental run of the L9 OA is
calculated for MRR using equation (4) and is given in Table 4. To obtain the effect of each
machining parameter at each level, the S/N ratio values for each fixed parameter and level are
averaged.
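The HB S/N ratios of Table 4 can be reproduced from the MRR values with a short Python sketch (each run here has a single observation, so n = 1 in equation (4)):

```python
import math

# MRR values (gms/min) from Table 3, one observation per run.
mrr = [0.008274, 0.0255, 0.01897, 0.00706, 0.034,
       0.02581, 0.0397, 0.0750, 0.0258]

def sn_higher_is_better(ys):
    """Equation (4): S/N = -10*log10((1/n) * sum(1/y_i^2))."""
    n = len(ys)
    return -10 * math.log10(sum(1.0 / y ** 2 for y in ys) / n)

sn = [sn_higher_is_better([y]) for y in mrr]
print(round(sn[0], 3))  # -41.646, matching the first entry of Table 4
```

With a single observation per run the expression reduces to 20·log10(y), which is why larger MRR values map to larger (less negative) S/N ratios.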
Table 4. Experimental results with S/N ratio for material removal rate

S.No. | MRR      | S/N ratio
1     | 0.008274 | -41.6457
2     | 0.0255   | -31.8692
3     | 0.01897  | -34.4387
4     | 0.00706  | -43.0239
5     | 0.034    | -29.3704
6     | 0.02581  | -31.7642
7     | 0.0397   | -28.0242
8     | 0.0750   | -22.4988
9     | 0.0258   | -31.7676

Table 5. Response table for S/N ratios

Level | I      | Ton    | Toff   | ED
1     | -35.98 | -37.56 | -31.97 | -34.26
2     | -34.72 | -27.91 | -35.55 | -30.55
3     | -27.43 | -32.66 | -30.61 | -33.32
Delta |  8.55  |  9.65  |  4.94  |  3.71
Rank  |  2     |  1     |  3     |  4
The main effects calculated for each level of the factors are shown in Table 5. The main effects are plotted in Figure (c) for peak current (I), pulse-on
time (Ton), pulse-off time (Toff) and electrode diameter (ED), respectively. The influence of each
factor at each level on the machining performance is shown in the main effects plot.
The levels having the major contribution, selected from the plot, are the optimized levels for the
particular factor.
[Figure (c): Main effects plot (data means) for S/N ratios, with panels for Ton, Toff, I and ED; signal-to-noise: larger is better.]
The analysis of variance was used to find the relative importance of the cutting parameters with
respect to MRR. Table 6 gives the ANOVA results for MRR. From the analysis of Table 6, it was
found that pulse-on time (42.4665%) and peak current (38.8692%) have a statistically
significant effect on MRR.
Table 6. ANOVA for material removal rate

Source         | DF | SS      | MS     | P (% Contribution)
I              | 2  | 127.911 | 63.956 | 38.8692
Ton            | 2  | 139.749 | 69.875 | 42.4665
Toff           | 2  | 39.119  | 19.559 | 11.8873
ED             | 2  | 22.301  | 11.150 | 6.7767
Residual error | 0  |         |        |
Total          | 8  | 329.080 |        |

[Figure (d): Normal probability plot of MRR (normal distribution, 95% CI); Mean = 0.02890, StDev = 0.02031, N = 9, AD = 0.553, P-Value = 0.110.]
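The factor sums of squares and percentage contributions reported in Table 6 can be recomputed from the S/N ratios of Table 4. The sketch below assumes the standard L9 column order (I, Ton, Toff, ED) used in the design matrix:

```python
# S/N ratios from Table 4 and the L9 design matrix (levels 1-3).
sn = [-41.6457, -31.8692, -34.4387, -43.0239, -29.3704,
      -31.7642, -28.0242, -22.4988, -31.7676]
L9 = [[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
      [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
      [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]]

grand = sum(sn) / len(sn)
ss = {}
for j, name in enumerate(["I", "Ton", "Toff", "ED"]):
    # Mean S/N at each level of this factor (3 runs per level).
    means = [sum(sn[i] for i in range(9) if L9[i][j] == lv) / 3
             for lv in (1, 2, 3)]
    # Factor sum of squares about the grand mean.
    ss[name] = 3 * sum((m - grand) ** 2 for m in means)

total = sum(ss.values())
contrib = {k: round(100 * v / total, 2) for k, v in ss.items()}
print(contrib)  # Ton and I dominate (~42% and ~39%), as in Table 6
```

Because the design is saturated (zero residual degrees of freedom), the percentage contribution is simply each factor's share of the total sum of squares.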
Thus, utilizing the experimental results and the computed S/N ratios, the average
response values and average S/N ratios are calculated for MRR and presented in Table 4. A higher S/N ratio corresponds to
better performance characteristics regardless of category, so the optimal level of a
machining parameter is the level with the greatest S/N ratio. Based on the results of the S/N ratio analysis, the optimal machining performance for MRR
is obtained at a current of 15 A, Ton of 5 µs, Toff of 5 µs and ED of 8.6 mm.
4.2 Normal probability plot for MRR
The normal probability plot is a graphical technique for assessing whether or not a data
set is approximately normally distributed.
The points on this plot form a nearly linear pattern, which indicates that the normal distribution is
a good model for this data set. The plot shows that the response values lie close to the median of the
set and do not deviate from the mid value. The normal probability plot
for MRR is shown in Figure (d).
5. Conclusion
Based on the experimental results, the calculated S/N ratios and the ANOVA, the
following conclusions are drawn for electrical discharge machining of stainless steel. (i) Peak
current and pulse-on time are the most significant parameters for obtaining maximum MRR in
electrical discharge machining of stainless steel. (ii) For higher MRR, the recommended
parametric combination is a current of 15 A, pulse-on time of 5 µs, pulse-off time of 5 µs and
ED of 8.6 mm. (iii) The Taguchi method proves to be an efficient methodology for finding the
optimum cutting parameters, as the experiment required only a minimum number of trials.
References
Benedict, G.F. Nontraditional Manufacturing Processes. New York: Marcel Dekker, 1987.
Jain, V.K. Advanced Machining Processes. New Delhi: Allied, 2002.
Kozak, J. and Kazimierz, E.O. Selected problems of abrasive hybrid machining, Journal of
Materials Processing Technology, 2001,109, 360-366.
Lin, Y.C., Chen, Y.F., Wang, D.A. and Lee, H.S. Optimization of machining parameters in
magnetic force assisted EDM based on Taguchi method, Journal of Materials Processing
Technology, 2009, 209, 3374-3383.
Phadke, M.S. Quality Engineering Using Robust Design. Englewood Cliffs, NJ: Prentice Hall, 1989.
Ross, P.J. Taguchi Techniques for Quality Engineering. New York: McGraw-Hill, 1988.
Ramakrishnan, R. and Karunamoorthy, L. Modeling and multi-response optimization of Inconel
718 on machining of CNC WEDM process, Journal of Materials Processing Technology,
2008, 207, 343-349.
Ramakrishnan, R. and Karunamoorthy, L. Multi response optimization of wire EDM operations
using robust design of experiments, The International Journal of Advanced Manufacturing
Technology, 2006, 29,105–112.
Yan, B.H., Lin, J.L., Wang, K.S. and Tarng, Y.S. Optimization of the electrical discharge
machining process based on the Taguchi method with fuzzy logics, Journal of Materials
Processing Technology, 2000,102, 48-55.
Yan, B.H., Wang, C.C., Liu, W.D. and Huang, F.Y. Machining characteristics of Al2O3/ 6061Al
composite using rotary EDM with a disk like electrode, International Journal of Advanced
Manufacturing Technology, 2000,16, No.5, 322-333.
Suppliers Delivery Performance Evaluation and Improvement
Using AHP
Rajesh Dhake1*, N.R. Rajhans2
1 Vishwakarma Institute of Technology, Pune - 411037, Maharashtra, India
2 College of Engineering, Pune – 411005, Maharashtra, India
*Corresponding author (e-mail: rj_dhake@yahoo.com)
Supplier relationship management deals with all processes that focus on the interface
between the firm and its suppliers and aims to arrange for and manage supply sources
for various goods and services. It includes the evaluation and selection of suppliers,
negotiation of supply terms, and communication regarding new products and orders
with suppliers. Although supplier selection is logically the first step involved, supplier
evaluation and improvement is a perpetual process and is necessary for ensuring long-term
relationships with suppliers. This paper focuses on the application of the Analytical
Hierarchy Process (AHP), a multi-criteria decision-making tool, for establishing a
software-based supplier delivery performance evaluation system for all suppliers of an
automobile manufacturer, providing a mechanism to assess their delivery
performance, identify scope for improvements and thereby lead to continuous
improvement. The supplier delivery performance criteria are first identified, followed by
application of the AHP process. The proposed system is implemented with the help of a software
program facilitating a user-friendly online system for vendor evaluation and
improvement.
1. Introduction
One of the most vital functions of a purchase (sourcing) department in the manufacturing
industry is the selection, development and rating of its suppliers. Apart from the conventional
criteria of price, quality and delivery, several other factors are taken into consideration for
vendor performance rating. Diagnosing and improving a supplier's performance through
strategic scores ensures long-term relationships. A majority of past studies are limited
to functional scopes. Hence, a proper system to assess vendors is important. This system
must consider all performance evaluation factors, must be flexible, and must provide a mechanism
to assess all suppliers on a common scale while communicating areas of improvement to the
supplier. The paper proposes a comprehensive Vendor Delivery Performance Rating &
Monitoring System for an automobile manufacturer using the Analytical Hierarchy Process. The
company had a continuous monitoring system to evaluate and monitor supplier
performance at the end of each month. A careful study of the existing system enabled us to
determine the flaws and difficulties faced in it. The existing problems were
eliminated in the proposed system based on AHP.
2. Analytical Hierarchy Process
The Analytic Hierarchy Process (AHP), since its invention, has been a tool in the hands of
decision makers and researchers, and it is one of the most widely used multiple-criteria
decision-making tools.
It is based on the well-defined mathematical structure of consistent matrices and the ability of their
associated eigenvectors to generate true or approximate weights. The AHP
methodology compares criteria, or alternatives with respect to a criterion, in a natural, pairwise
mode. To do so, the AHP uses a fundamental scale of absolute numbers (capturing individual preferences
with respect to quantitative and qualitative attributes) that has been
proven in practice and validated by physical and decision-problem experiments. It converts
individual preferences into ratio-scale weights that can be combined into a linear additive
weight for each alternative. The resultant can be used to compare and rank the alternatives
and, hence, assist the decision maker in making a choice. The speciality of AHP is its flexibility
to be integrated with different techniques like LP, QFD, ANP, etc., which enables the user to
extract benefits from all the combined methods and, hence, achieve the desired goal in a
better way.
3. Methodology
Our focus was an in-depth analysis of the existing system, to determine its
flaws and to develop a new comprehensive system
that overcomes all the difficulties faced in the existing system.
3.1 Existing method
In the existing method, the monthly delivery schedule (divided into weekly buckets) was
communicated to the suppliers. Suppliers were informed of their performance at the end of
each month based on the following compliance formula:

Weekly Compliance = (Delivered Quantity / Scheduled Quantity) × 100      (1)

The suppliers were required to fill in a Corrective Action/Preventive Action report (stating the
corrective and preventive actions taken at their end to improve their
performance) and submit it to the automobile manufacturer.
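Equation (1) amounts to a one-line calculation; the sketch below uses hypothetical quantities for illustration:

```python
def weekly_compliance(delivered_qty, scheduled_qty):
    """Equation (1): percent of the scheduled quantity delivered in the week."""
    return 100.0 * delivered_qty / scheduled_qty

# Hypothetical example: 450 parts delivered against a schedule of 500.
print(weekly_compliance(450, 500))  # 90.0
```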
3.2 Scope of improvement
The existing system faced the following problems:
- Ignorance of other vital factors of supplier performance, like delivery performance, transparency and responsiveness, packaging and logistics, management, etc.
- Absence of a mechanism communicating the overall performance of suppliers and identifying the areas of improvement.
- A time-consuming process, due to an ineffective IT interface for generating online reports and an analysis mechanism.
3.3 Proposed method and its implementation
3.3.1 Determination of delivery performance criteria
The table below summarizes the main and sub-level performance criteria identified
for supplier evaluation.
Table 1. Supplier Delivery Performance Assessment Criteria

1. Delivery Compliance (DC): 1.1 Timely delivery; 1.2 Delivery schedule adherence; 1.3 Minimum safety stock compliance; 1.4 Flexibility to production changes
2. Transparency and Responsiveness (TR): 2.1 Supply failure communication; 2.2 Additional premium freight charges required; 2.3 Responsiveness; 2.4 Documentation
3. Packaging and Logistics (PL): 3.1 Receipt discrepancies; 3.2 Adherence to standard packaging and design; 3.3 Availability and maintenance of logistics infrastructure; 3.4 Material tracking
4. Management (M): 4.1 Response to newly developed/localized parts; 4.2 Response to new improvements/systems
3.3.2 Application of the AHP technique to main criteria and sub-criteria
The next step was to apply the analytical hierarchy process technique to each of the main
criteria to determine the global weight of each factor. The AHP process flow is explained
below:
i. Pairwise comparison of performance criteria: each criterion is compared against
every other. The comparison is done using Saaty's intensity table by 3 experts to eliminate
bias.
ii. Calculation of the geometric mean of all comparisons and formation of the matrix to which the
eigenvalue method is applied to find the principal eigenvalue.
iii. Normalization: a number of iterations are carried out to get more accurate values.
iv. The principal eigenvalue is calculated by addition of all the eigenvalues.
v. The consistency index is calculated by the formula

Consistency Index (C.I.) = (Principal Eigenvalue − Size of Matrix) / (Size of Matrix − 1) = (λmax − n) / (n − 1)      (2)

vi. The consistency ratio is calculated as the ratio of the Consistency Index (C.I.) to the Random Index (R.I.):

Consistency Ratio (C.R.) = Consistency Index / Random Index = C.I. / R.I.      (3)

The consistency is checked according to the acceptable C.R. range, which varies
with the size of the matrix: 0.05 for a 3 × 3 matrix, 0.08 for a 4 × 4 matrix and 0.1 for
all larger matrices (n ≥ 5).
AHP calculations for the main criteria are given in the tables below; similar
calculations were made for the sub-criteria.
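Steps (i)-(vi) can be sketched in Python for the main criteria, using the three expert matrices of Table 3. The column-normalization/row-average approximation of the priority eigenvector is assumed here, as is R.I. = 0.89 for n = 4:

```python
import math

# Expert pairwise comparison matrices (rows/columns: DC, TR, PL, M) from Table 3.
experts = [
    [[1, 2, 6, 6], [1/2, 1, 1, 2], [1/6, 1, 1, 2], [1/6, 1/2, 1/2, 1]],
    [[1, 2, 4, 5], [1/2, 1, 1, 3], [1/4, 1, 1, 3], [1/5, 1/3, 1/3, 1]],
    [[1, 1, 6, 5], [1, 1, 2, 5], [1/6, 1/2, 1, 4], [1/5, 1/5, 1/4, 1]],
]
n = 4

# (ii) element-wise geometric mean of the expert judgments.
A = [[math.prod(e[i][j] for e in experts) ** (1 / len(experts))
      for j in range(n)] for i in range(n)]

# (iii) normalize each column and average across rows -> priority vector w.
col_sum = [sum(A[i][j] for i in range(n)) for j in range(n)]
w = [sum(A[i][j] / col_sum[j] for j in range(n)) / n for i in range(n)]

# (iv)-(vi) principal eigenvalue estimate, consistency index and ratio.
lam = sum(col_sum[j] * w[j] for j in range(n))
ci = (lam - n) / (n - 1)
cr = ci / 0.89  # R.I. = 0.89 for n = 4 (Saaty)

print(round(A[0][1], 4))  # 1.5874 -- the DC-TR entry of the paper's matrix A
```

The geometric means reproduce the paper's normalized relative-weight matrix (e.g. the DC-TR entry (2·2·1)^(1/3) = 1.5874011), and the resulting C.R. falls below the 0.1 threshold, so the aggregated judgments are consistent.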
Table 2. Main Criteria-wise Weightage Matrix

Judges   | DC | TR | PL | M
Expert 1 | 40 | 15 | 20 | 10
Expert 2 | 45 | 20 | 15 | 20
Expert 3 | 50 | 25 | 15 | 10

Table 3. Weighted Point Matrix for Main Criteria

Expert 1:            Expert 2:            Expert 3:
    DC  TR  PL  M        DC  TR  PL  M        DC  TR  PL  M
DC  1   2   6   6    DC  1   2   4   5    DC  1   1   6   5
TR  1/2 1   1   2    TR  1/2 1   1   3    TR  1   1   2   5
PL  1/6 1   1   2    PL  1/4 1   1   3    PL  1/6 1/2 1   4
M   1/6 1/2 1/2 1    M   1/5 1/3 1/3 1    M   1/5 1/5 1/4 1

Normalized Relative Weights (element-wise geometric mean of the three expert matrices):

A =  | DC        | TR        | PL        | M
DC   | 1         | 1.5874011 | 5.2414828 | 5.3132928
TR   | 0.6299605 | 1         | 1.2599210 | 3.1072325
PL   | 0.1907857 | 0.7937005 | 1         | 2.8844991
M    | 0.1882072 | 0.3218298 | 0.3466806 | 1
Sum  | 2.0089534 | 3.7029314 | 7.8480845 | 12.305024
λ    | 1.00      | 1.016     | 1.131     | 1.029
Calculation of the principal eigenvalue:
λmax = 1.00 + 1.016 + 1.131 + 1.029 = 4.18

Calculation of the consistency index:
Consistency Index (C.I.) = (λmax − n) / (n − 1) = (4.18 − 4) / (4 − 1) = 0.059

Calculation of the consistency ratio:
Consistency Ratio (C.R.) = C.I. / R.I. = 0.059 / 0.89 = 0.066
[Note: R.I. = 0.89 for n = 4 (Saaty table, 1990)]

Since C.R. = 0.066 is below the 0.08 limit for a 4 × 4 matrix, the judgments are acceptably consistent.
Weights calculated for the main criteria and sub-criteria are illustrated in the tables below:

Table 4. Calculated Weights for Main Criteria

       | DC        | TR        | PL        | M         | Corresponding Weights
DC     | 1         | 1.5874011 | 5.2414828 | 5.3132928 | 50
TR     | 0.6299605 | 1         | 1.2599211 | 3.1072325 | 21
PL     | 0.1907857 | 0.7937005 | 1         | 2.8844991 | 19
M      | 0.1882072 | 0.3218298 | 0.3466806 | 1         | 9

Table 5. Calculated Weights for Delivery Compliance

       | T.D.      | D.S.A.    | S.S.      | F.P.C.    | Corresponding Weights
T.D.   | 1         | 1         | 3.9148676 | 3.5568933 | 40
D.S.A. | 1         | 1         | 2         | 3.9148676 | 36
S.S.   | 0.2554365 | 0.5       | 1         | 2.2894285 | 15
F.P.C. | 0.2811442 | 0.2554365 | 0.4367902 | 1         | 10

Table 6. Calculated Weights for Transparency & Responsiveness

        | S.F.C.    | FREIGHT | RESP.     | DOC. | Corresponding Weights
S.F.C.  | 1         | 1       | 1.5874011 | 3    | 34
FREIGHT | 1         | 1       | 1         | 2    | 29
RESP.   | 0.6299605 | 1       | 1         | 1    | 22
DOC.    | 0.3333333 | 0.5     | 1         | 1    | 15

Table 7. Calculated Weights for Packaging & Logistics

        | R.D.      | A.D.H.    | A.M.      | M.T.      | Corresponding Weights
R.D.    | 1         | 1         | 2.8844991 | 2         | 36
A.D.H.  | 1         | 1         | 1.2599210 | 2.2894285 | 31
A.M.    | 0.3466806 | 0.7937005 | 1         | 1.8171206 | 19
M.T.    | 0.5       | 0.4367902 | 0.5503212 | 1         | 14
3.3.3 Determination of final weights
Table 8 summarizes the main and sub-level performance criteria identified for supplier
evaluation, together with their weights.
3.3.4 Development of a software program for effective implementation
The last step was to develop a software program for continuous monitoring and
improvement of supplier performance. The program was developed using MS Visual Basic
in macro-enabled format in MS Excel. The DSS developed has been implemented and has
overcome the flaws in the old system.
Table 8. Supplier Delivery Performance Assessment Criteria

Main Criteria | Sub-Criteria | Local Weight | Global Weight | Final Weight
1. Delivery Compliance (DC) (50) | 1.1 Timely delivery | 40 | 20 | 20
 | 1.2 Delivery schedule adherence | 35 | 17.5 | 17
 | 1.3 Minimum safety stock compliance | 15 | 7.5 | 8
 | 1.4 Flexibility to production changes | 10 | 5 | 5
2. Transparency and Responsiveness (TR) (20) | 2.1 Supply failure communication | 35 | 7 | 7
 | 2.2 Additional premium freight charges required | 29 | 5.8 | 6
 | 2.3 Responsiveness | 21 | 4.2 | 4
 | 2.4 Documentation | 15 | 3 | 3
3. Packaging and Logistics (PL) (20) | 3.1 Receipt discrepancies | 36 | 7.2 | 7
 | 3.2 Adherence to standard packaging and design | 31 | 6.2 | 6
 | 3.3 Availability and maintenance of logistics infrastructure | 19 | 3.8 | 4
 | 3.4 Material tracking | 14 | 2.8 | 3
4. Management (M) (10) | 4.1 Response to newly developed/localized parts | 60 | 6 | 6
 | 4.2 Response to new improvements/systems | 40 | 4 | 4
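The global weights in Table 8 follow from multiplying each main-criterion weight by the local sub-criterion weight; the sketch below uses the Delivery Compliance branch as an example:

```python
# Main-criterion weights and DC sub-criterion local weights from Table 8.
main_weight = {"DC": 50, "TR": 20, "PL": 20, "M": 10}
dc_local = {
    "Timely delivery": 40,
    "Delivery schedule adherence": 35,
    "Minimum safety stock compliance": 15,
    "Flexibility to production changes": 10,
}

# Global weight = main weight x local weight / 100.
dc_global = {k: main_weight["DC"] * v / 100 for k, v in dc_local.items()}
print(dc_global["Timely delivery"])  # 20.0, as in Table 8
```

The Final Weight column is then obtained by rounding the global weights to whole numbers.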
4. Results and conclusion
AHP has traditionally been used for supplier selection, which is a less frequent activity
in the automobile industry (once a factory has been commissioned and new vendor development
is through). The cost of switching from an old supplier to a new one is very high, and
hence supplier evaluation and performance improvement is an important, ongoing
process to ensure long-term relationships between supplier and
manufacturer. The paper emphasises the need to use AHP not only in the
vendor selection process but also as a system for the evaluation and performance improvement of
suppliers.
References
C. Elanchezhian, B. Vijaya Ramnath, R. Kesavan, An Application of Supplier Selection Using
ANP & AHP in SCM, Industrial Engineering Journal, Volume V, Issue 5, May 2012.
Enyinda, An analysis of strategic supplier selection and evaluation in a generic
pharmaceutical firm supply chain, International Journal of Production Research, Volume
17, Number 1
Farzad Tahriri, AHP Approach for supplier evaluation and selection in a steel manufacturing
company, Journal of Industrial Engineering & Management, 2008, 01(02), 54-76.
M. Balaji, Dr.G.Karuppusami, G. Hari Ramesh Babu, Integrating Supply Chain to Improve
Agility, Industrial Engineering Journal, Volume V & Issue No.8, August 2012
Omkarprasad S. Vaidya, Sushil Kumar, Analytic hierarchy process: An overview of
applications, European Journal of Operational Research 169 (2006) 1–29
Russell and Taylor, Operations Management, 4th Edition
Sunil Chopra, Peter Meindl, Supply Chain Management – Strategy, Planning & Operation,
Pearson, 4th Edition, 2010.
T.L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, 1980.
Decisions in High Volume Low Variety Manufacturing System

R. R. Lekurwale1,3*, M. M. Akarte2, D. N. Raut3
1 K. J. Somaiya College of Engineering, Vidyavihar (E) - 400077, Mumbai, India
2 National Institute of Industrial Engineering, Vihar Lake, Powai – 400087, Mumbai, India
3 Veermata Jijabai Technological Institute, Matunga – 400019, Mumbai, India
*Corresponding author (email: rlekurwale@yahoo.co.in)
This research work mainly focuses on identifying the various decision areas, their
relevant criteria, and the respective decision attributes for a line shop production system.
Based on the available literature, a conceptual model for assessment of the manufacturing
capability of a high volume low variety (line shop) production system has been developed. An
analytical hierarchy process approach can be used to evaluate the importance of each
decision area, criterion and attribute in order to evaluate the manufacturing capability of
a production system. Once the manufacturing capability of the firm under study is computed,
it can be compared with the ideal manufacturing system in order to find the
weak decision areas for improvement. The findings of this work will be helpful to
practitioners and future researchers.
Keywords: Manufacturing strategy, Line flow (shop) production system, Competitive
advantage, Multi-criteria decision methods (MCDM)
1. Introduction
Most of the assets of a manufacturing organization are invested in its manufacturing-related
activities, commencing from the procurement of raw materials till the shipment of finished goods
(Hayes et al. 1988). Therefore, manufacturing plays a very crucial role in obtaining competitive
advantages (Skinner 1969, Hayes et al. 1988). Skinner (1969) emphasized the importance of
linking the corporate strategy to the manufacturing strategy with a view to achieving the competitive
priorities. During the early eighties, manufacturing industries started giving attention to
manufacturing-related functions for corporate success (Wheelwright 1984). Compromising on
each element of the production system does not build competitive strength; from this observation
the concept of the focused factory was born (Skinner 1974). Skinner further
argues that a focused factory can perform well by using repetitive experience and
concentrating the resources of the company on one area of manufacturing. To find
where we are and where we want to be, a manufacturer needs a measurement system that can
provide perspectives on its direction and rate of improvement (Hayes et al. 1988).
The complete manufacturing process falls into three basic groups: i) process
management, ii) business management, iii) external reporting (Hayes et al. 1988). There should be
a perfect fit between these three groups, which strives for consistency between capabilities and
policies to gain competitive advantages (Hayes and Wheelwright 1984, Hayes et al. 1988). The
evaluation of the effectiveness (capability) of a manufacturing system is essential for two reasons: i) to
know the present status of the company in the market (strategic orientation), ii) to raise the level of
manufacturing capability by improving weak decision areas (Wheelwright and Hayes 1985,
Miltenburg 2005). This effectiveness of the manufacturing organization is named stage 1,
stage 2, stage 3 and stage 4 (Wheelwright and Hayes 1985). Miltenburg (2005) renamed these
level 1 (infant), level 2 (average), level 3 (adult) and level 4 (world class). The level of
manufacturing capability is based on the level of each decision area (manufacturing levers or
subsystems). The sum of the levels of all decision areas defines the overall capability of the
production system, which forms the primary basis for competition between firms (Miltenburg
2005, Morgan and Harvey 1998). These manufacturing capabilities are hard to imitate, which
distinguishes a company from its competitors, and are based on the two dimensions of the
manufacturing structure (Morgan and Harvey 1998, Choe et al. 1997). These two dimensions
are process innovation and product differentiation, which are well explained by the product-process
matrix given by Hayes and Wheelwright in 1979 (Choe et al. 1997, Hayes and Wheelwright
1979a).
The manufacturing decision areas which are required to evaluate the manufacturing
capabilities are grouped into two basic types, i.e. structural and infrastructural (Hayes and
Wheelwright 1984). The literature gives various classifications of these decision areas under the
headings of structural and infrastructural (Skinner 1974, Buffa 1984, Hayes and Wheelwright
1988, Fine and Hax 1986, Leong et al. 1990), whereas this research follows the classification given by
Miltenburg (2005). These are human resources, organization structure and control, production
planning and control, sourcing, process technology and facility (Miltenburg 2008). These six
decision areas have certain decision criteria, and each criterion has certain attributes (decision
choices) (Miltenburg 2008, Choudhary et al. 2010). It is essential to evaluate each
decision area in order to find its capability, which decides the capability of the entire manufacturing
system. This capability decides the strategic orientation of the company in the market
(Wheelwright and Hayes 1985, Miltenburg 2005).
The objective of this work is to identify and classify the decisions involved in a line flow
manufacturing/production system so as to analyze its capability by using a systematic MCDM
approach. Recently, Choudhary et al. (2012c) presented only an exploratory study of the
decisions used in line flow manufacturing systems using a case study approach.
This work proposes a conceptual model to evaluate the manufacturing capability of a line flow
production system and compare it with the ideal system using MCDM. The work can also
be extended to find the strategic orientation of the company, which is the novelty of this
research. The rest of the paper is organized as follows. Section 2 reviews the relevant literature and
identifies the literature gap for this research, with a focus on identifying and classifying decisions
influencing the manufacturing capability of an organization. Section 3 describes the conceptual
model for the assessment of the manufacturing capability of a line shop. The conclusion is given in
Section 4.
2. Literature review
A high volume low variety system (equipment-paced line flow production system) is
designed to produce fewer products in higher, more regular volumes. It provides a much higher level
of cost, quality and delivery performance. The line speed or production rate depends upon the
speed or production rate of each machine (Miltenburg 2005). The decision areas and the
corresponding decision choices (attributes) for a line shop manufacturing system are explained next.
2.1 Decision areas
Decision areas in the manufacturing strategy literature refer to the constituents or
subsystems of a production system (Choudhary et al. 2010). Miltenburg (2005) redefined these
decision areas as manufacturing levers: the settings or choices made in the
corresponding decision areas define the manufacturing capability of a production system.
These levers can be adjusted slightly to make minor changes in the existing system, whereas for
large changes in the existing manufacturing system all the levers have to be adjusted
simultaneously, or one needs to change the existing manufacturing system to another in order to
deliver a certain level of competitive priorities (Miltenburg 2005, Choudhary et al. 2010). Beginning
from the work of Skinner (1969), numerous authors (Skinner 1974, Buffa 1984, Hayes and
Wheelwright 1988, Fine and Hax 1986, Leong et al. 1990, Miltenburg 2005, Slack and Lewis
2011) identified various decision areas. Hayes and Wheelwright (1984) grouped these
decision areas into two types, i.e. structural and infrastructural. Miltenburg (2005) grouped the
structural decision areas as Process Technology, Facility and Sourcing, and the infrastructural
decision areas as Human Resources, Organization Structure and Control, and Production
Planning and Control. This work follows the decision areas given by Miltenburg.
2.2 Decision choices
Decision choices are the attributes of the decision criteria of the related decision area.
For example, with human resources as the decision area, level of skill will be a decision criterion;
highly skilled will be the decision choice for a job shop, and unskilled or semi-skilled will be the decision
choice for a line shop.
Choudhary et al. (2010), after a rigorous review of the manufacturing strategy literature,
identified 54 decision criteria and their corresponding decision choices for all seven types of
production systems. After reviewing the available literature on capability evaluation and
manufacturing strategy, we have identified 32 decision criteria and their corresponding attributes to
evaluate the manufacturing capability of a line shop and hence its strategic orientation. Dangayach
and Deshmukh (2001) reviewed 269 papers published in various publications. They argued
that very little research had been done to formulate the process of manufacturing strategy, and more
research has been done on the content of manufacturing strategy, where the evaluation of capability
occurs. Choudhary et al. (2012a, 2012b, 2012c) presented exploratory studies of these
decision areas for the batch shop, job shop and line shop. After reviewing all the available literature,
we identified the gap that no systematic approach is reported to evaluate the manufacturing capability
of a line shop and its strategic orientation using MCDM; hence this research addresses that gap. With this focus in
mind, we have developed a conceptual model to evaluate the manufacturing capability of a line
shop, which is given in the next section.
3. Development of the conceptual model
The capability measurement system has been designed by referring to the literature. The
measurement system is shown in Table 1. We have identified 32 decision criteria for all six
decision areas. Because of page limitations, we give the details for two decision areas only.

Table 1. Decision areas, decision criteria, and decision choices for a line shop (adapted from
Hayes and Wheelwright 1984, Hayes et al. 1988, Miltenburg 2005, Choudhary et al. 2010, 2012a,
2012b, 2012c)
Decision Choices
Group Criteria
Decision Choices
(Attributes) for
Decision Criteria
(Decision
(Attributes)
Ideal Case
Areas)
Human
Resources
(HR)
Organization
Structure and
Control
(OSC)
Level of Skill.
Highly Skilled,
Skilled, Semi-Skilled,
Highly Skilled, Mixed
Skilled, Un Skilled
Semi Skilled
Nature of Job
Performance Appraisal
---------------------------------------------
Two or Three
Team based
Training Need
Wage Rate/Hour
Work Content
Employee participation
Decision Making
Organization Structure
Importance of Line Staff
Quality Responsibility
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Low
Low
Small
Low
Centralize
Hierarchical
Low
Quality
Control
Specialist
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
The model consists of three levels. The first level consists of the six decision areas; the
second level consists of the 32 decision criteria; and the third level consists of the related
attributes. The capability assessment method is comprehensive, as it reviews manufacturing
capability from various perspectives. As a sample, the human resources decision area is
explained here. Its decision criteria are Level of Skill, Nature of Job, Performance Appraisal,
Training Need, Wage Rate, Work Content and Employee Participation. The Level of Skill
criterion consists of five attributes: Highly Skilled, Skilled, Semi-Skilled, Mixed and Unskilled.
In this manner all the decision criteria for HR are identified and given in the model.
Similarly, for the entire model, all decision areas, decision criteria and their relevant attributes
have been identified, which together form the complete model (Table 1).
3.1. Research methodology
The detailed research methodology to be followed is given in Figure 1:
1. Literature review on identification of decision areas, decision criteria and their respective attributes
2. Development of a conceptual model for capability assessment
3. Identification of a suitable organization for conducting a case study
4. Application of the AHP approach for capability assessment
5. Computation of the compatibility score and comparison with the ideal manufacturing system
6. Finding the deviation and drawing practical inferences
7. Identification of areas for improvement
Figure 1. Research methodology
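The AHP step of the methodology (step 4 in Figure 1) can be sketched as follows. This is a minimal illustration, not the study's data: the 3x3 pairwise comparison matrix is a hypothetical judgment over three unnamed criteria, and the geometric-mean approximation stands in for the full eigenvector computation.

```python
# Sketch of the AHP step (hypothetical pairwise judgments, not the study's
# data): derive criteria weights from a pairwise comparison matrix and
# check Saaty's consistency ratio.
import math

def ahp_weights(m):
    """Approximate the principal eigenvector by the geometric-mean method."""
    n = len(m)
    g = [math.prod(row) ** (1.0 / n) for row in m]   # row geometric means
    total = sum(g)
    return [x / total for x in g]                    # normalize to sum to 1

def consistency_ratio(m, w, random_index):
    """CR = CI / RI; judgments are usually accepted when CR < 0.10."""
    n = len(m)
    # lambda_max estimated as the mean of (M w)_i / w_i
    lam = sum(sum(m[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / random_index

# Hypothetical pairwise comparisons for three decision criteria
# (1 = equal importance, 3 = moderate, 5 = strong, per Saaty's scale).
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(M)
cr = consistency_ratio(M, w, random_index=0.58)  # RI = 0.58 for n = 3
```

With these judgments the first criterion receives the largest weight (about 0.64) and the consistency ratio stays well below the usual 0.10 acceptance threshold.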
4. Conclusion
This work presents six decision areas, 32 decision criteria and their relevant attributes, which
define the complete model. Because of the page limitation we present only two decision areas,
HR and OSC, as well as the detailed decision attributes for the Level of Skill criterion of the human
resources group. In a similar manner we have identified decision criteria for all six decision
areas and decision attributes for all 32 criteria, which we can provide at any time on request.
On the basis of this model, researchers and practitioners can formulate a multi-criteria decision
problem using MCDM with a view to computing the manufacturing compatibility index. This
compatibility index can further be used for benchmarking (Felix et al. 2006) the case company to
identify the weak areas of its production system in comparison with its competitors. Various
manufacturing practices/programmes given in the manufacturing strategy literature will help in
improving the weak decision areas.
References
Buffa, E. S. Meeting the Competitive Challenge, Dow Jones-Irwin, New York, 1984.
Choudhari S.C., Adil G.K. and Ananthakumar U. Congruence of manufacturing decision areas in
    a production system: a research framework. International Journal of Production Research,
    2010, 48(20), 5963–5989.
Choudhari S.C., Adil G.K. and Ananthakumar U. Choices in manufacturing strategy decision
    areas in batch production system – six case studies. International Journal of Production
    Research, 2012a, 50(14), 3698–3717.
Choudhari S.C., Adil G.K. and Ananthakumar U. Exploratory case studies on manufacturing
    decision areas in the job production system. International Journal of Operations and
    Production Management, 2012b, 32(11), 1337–1361.
Choudhari S.C., Adil G.K. and Ananthakumar U. Configuration of manufacturing strategy
    decision areas in line production system: five case studies. International Journal of Advanced
    Manufacturing Technology, 2012c, DOI 10.1007/s00170-012-3991-9.
Choe, K., Booth, D. and Hu, M. Production competence and its impact on business performance,
Journal of Manufacturing Systems, 1997, 16(6), 409--421.
Dangayach, G.S. and Deshmukh, S.G. Manufacturing strategy: Literature review and some
issues. International Journal of Operations and Production Management, 2001, 21 (7), 884–
932.
Felix, T.S. Chan., Chan, H.K., Henry, C.W. Lau., and Ralph, W.L. Ip. An AHP approach in
benchmarking logistics performance of the postal industry. Benchmarking: An International
Journal, 2006, 13 (6), 636-661.
Fine C.H, Hax A.C. Manufacturing strategy: a methodology and an illustration. Interfaces, 1986,
15(6):28–46.
Hayes, R.H. and Wheelwright, S.C. Link manufacturing process and product life cycles, Harvard
Business Review, 1979a, 57(1), 133--140.
Hayes, R.H. and Wheelwright, S.C., Restoring Our Competitive Edge: Competing through
manufacturing. John Wiley and Sons, New York,1984.
Hayes, R.H. and Upton David. Operations-based strategy, California Management review, 1998,
40 (4), 8-25.
Hayes R.H., Wheelwright S.C. and Clark K.B. Dynamic Manufacturing. Free Press, New York, 1988.
Leong K, Snyder D, and Ward P. Research in the process and content of manufacturing strategy.
Omega, 1990, 18(2),109–122
Miltenburg, J. Manufacturing Strategy – How to Formulate and Implement a Winning Plan.
    Productivity Press, Portland, OR, 2005.
Miltenburg, J. Setting manufacturing strategy for factory within factory. International Journal of
Production Economics, 2008, 113, 307–323.
Swink, M. and Hegarty, W.H. Core manufacturing capabilities and their links to product
    differentiation. International Journal of Operations & Production Management, 1998, 18(4),
    374-396.
Skinner, W. Manufacturing – missing link in corporate strategy. Harvard Business Review, 1969,
47 (3), 136–145.
Skinner, W. The focused factory. Harvard Business Review, 1974, 54 (3), 113–119.
Slack, N. and Lewis, M. Operations Strategy, Prentice Hall, UK, 2011.
Wheelwright, S.C. and Hayes, R.H. Competing through manufacturing. Harvard Business
Review, 1985, 99-109.
Application of RSM Based Simulated Annealing Algorithm
Approach for Minimization of Surface Roughness in Cylindrical
Grinding using Factorial Design
Ramesh Rudrapati*, Asish Bandyopadhyay, Pradip Kumar Pal
Mechanical Engineering Department, Jadavpur University, Kolkata, West Bengal, India
*Corresponding author (e-mail: rameshrudrapati@gmail.com)
In the present investigation, a new hybrid technique, a response surface methodology (RSM)
based simulated annealing (SA) algorithm approach, is proposed to predict surface
roughness for stainless steel in the traverse-cut cylindrical grinding process.
Experiments are designed as per a full factorial design, wherein infeed, longitudinal feed and
work speed have been considered as the important input parameters. Analysis of variance
and graphical main and interaction effect plots have been used to analyze the experimental
data and identify the relationships between the grinding parameters and surface roughness.
The variation of the performance parameter (surface roughness) with the grinding parameters
has been mathematically modeled by RSM. The SA algorithm has been employed for solving
the obtained mathematical model. Finally, a validation exercise is performed with the
optimum levels of the grinding parameters. The results confirm the efficiency of the approach
employed for prediction of surface roughness in this study.
1. Introduction
Nowadays, a great deal of research work is carried out in the metal cutting industries on optimizing
machining parameters to improve the accuracy and surface finish of the machined surface.
Cylindrical grinding is one of the efficient and effective machining/finishing operations used in the
metal cutting industries to make cylindrical jobs with high profile accuracy and good surface
finish. The complex structure of the cylindrical grinding process, i.e. the simultaneous rotation of
the grinding wheel and the workpiece and the traverse movement of the work table, along with other
interactive parameters such as grinding wheel properties, workpiece and machine parameters,
limits the ability of the grinding process to produce accurate and finely finished jobs (Kwak, 2010).
However, the literature reveals that a systematic optimization methodology based on design of
experiments (DOE) can optimize metal cutting operations (such as grinding, turning, milling,
etc.) predictably, and much research work has been done in this respect. Factorial design is an
important method of DOE; it was used by Baek et al. (2007) for optimizing grinding conditions,
and Thomas et al. (1997) applied the same technique to analyze the turning operation. Response
surface methodology (RSM) is another popular DOE method. Kwak et al. (2006)
reported a study on RSM to predict the surface roughness and power spent in the
external cylindrical grinding process. Lakshmi and Subbaiah (2012) and Sahin and Riza (2005) used
an integrated full factorial design cum RSM approach to optimize milling and turning operations
respectively. Design methods are effective in finding local optimum conditions, but these techniques
cannot guarantee global optimum settings. A new hybrid technique, an RSM-based SA algorithm using
factorial design methodology, is proposed and used in the present study to determine the globally
optimal conditions for the traverse-cut cylindrical grinding process.
2. Experimental planning and optimization techniques
Design of experiments (DOE) is a methodology for achieving predictive knowledge of a
complex, multi-variable process with the fewest acceptable trials. Factorial design and RSM, which
are two major approaches in DOE, are used in the present study for analyzing and
modeling the process parameters in the traverse-cut cylindrical grinding process.
A design in which every factor appears with every setting of every other factor is called a
full factorial design. In this design, output responses are measured at all combinations of the input
parameter levels. A three-factor, three-level full factorial design (Table 1) has been used to plan
the experiments. Factorial design allows studying the effect of each factor on the response
variable, as well as the interaction effects of factors on the response variable, through graphical
main and interaction effect plots. It can be analyzed by using analysis of variance (ANOVA), and it
is relatively easy to estimate the main effects of a factor.
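As a brief sketch, the runs of such a design can be enumerated directly; the levels shown here are the ones used later in Section 3, while the variable names are illustrative:

```python
# Enumerate all runs of a three-factor, three-level full factorial design.
from itertools import product

infeed = [0.04, 0.05, 0.06]     # factor A, mm/cycle
long_feed = [70, 80, 90]        # factor B, mm/s
work_speed = [80, 112, 160]     # factor C, rpm

runs = list(product(infeed, long_feed, work_speed))
print(len(runs))  # 3^3 = 27 treatment combinations
```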
RSM is a collection of mathematical and statistical techniques, used in the present study to examine the
relationship between the response variable and a set of quantitative experimental variables. This
method is often employed to build a mathematical relation between controllable factors and the
output response. Furthermore, this mathematical model can be used to determine the operating
condition that produces the best response, satisfies the process specifications or identifies a new
parametric condition that produces improved product quality over that already achieved. RSM
creates a second-order mathematical model of the form:
Y = β0 + β1(A) + β2(B) + β3(C) + β11(A²) + β22(B²) + β33(C²) + β12(A·B) + β13(A·C) + β23(B·C)    (1)
where all β's are regression coefficients determined by the least squares method; A, B and C are the
input parameters; and Y is the output response which is required to be optimized.
2.1 Simulated annealing (SA) algorithm
In the present study, simulated annealing is proposed to solve the mathematical model to
predict surface roughness. It was introduced in 1982 by Kirkpatrick, Gelatt and Vecchi
(Kolahan et al. 2007). It is a stochastic optimization technique that is able to find the global
optimum by using a probability function. It uses a single-point search method and it resembles
the cooling process of molten metals through annealing. The atoms in a molten metal can move
freely with respect to each other at high temperature. As the temperature is slowly reduced, the
movement of the atoms gets restricted and they start to get ordered. Finally, crystals are formed
with the minimum possible energy. However, the formation of crystals depends on the cooling
rate: if the temperature is reduced at a very fast rate, a polycrystalline state is formed, which may
have a higher energy state than the crystalline state. Therefore, in order to achieve the absolute
minimum energy state, cooling must be done at a slow rate. This process of slow cooling is known
as annealing. The simulated annealing procedure simulates this process of slow cooling of molten
metal to achieve the minimum function value in minimization problems (Yang et al. 2009 and
Mohd et al. 2011). While solving the second-order response mathematical model by using SA in
the present work, a similar procedure is followed to optimize the grinding conditions. MATLAB
version 7.1 is used to predict surface roughness; SA's fundamental operations are taken care of
by the SA optimization toolbox of MATLAB, which finally provides the optimum conditions for the
desired value of Rq.
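The annealing loop described above can be sketched in a few lines. This is a generic illustration on a simple test function, not the study's MATLAB toolbox run; the cooling rate, step size and iteration count are arbitrary choices.

```python
# Minimal simulated annealing: Metropolis acceptance with geometric cooling.
import math
import random

def sa_minimize(f, x0, bounds, t0=1.0, cooling=0.999, steps=20000, seed=1):
    random.seed(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        # propose a neighbour and clamp it to the bounds
        cand = [min(max(xi + random.uniform(-0.5, 0.5), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        # always accept improvements; accept worse moves with prob e^(-dE/T)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # slow cooling, mimicking annealing of molten metal
    return best, fbest

# Test function with known minimum 0 at (2, -1).
f = lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2
best, fbest = sa_minimize(f, [0.0, 0.0], [(-5, 5), (-5, 5)])
```

Tracking the best-so-far point alongside the current point is a common safeguard, since the Metropolis rule occasionally accepts worse moves by design.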
3. Experimental details
Experimental runs have been conducted on stainless steel on a cylindrical grinding
machine. A photographic view of the experimental setup is shown in Figure 1. Three levels
of each of the process variables infeed, longitudinal feed and work speed are selected in the
present work. The selected input parameters and their levels are: infeed (A) = 0.04, 0.05 and
0.06 mm/cycle; longitudinal feed (B) = 70, 80 and 90 mm/s; and work speed (C) = 80, 112 and
160 rpm. After completing the experiments, surface roughness has been measured using a
stylus-type profilometer: Talysurf (Taylor Hobson, Surtronic 3+). Surface roughness has been
measured at three different places on each workpiece and the average value has been
considered. The surface roughness parameter Rq has been selected for the present investigation
because it is one of the important roughness parameters used to describe the quality of the
machined surface. The observed data are discussed and analyzed in the next section.
4. Results and discussion
As mentioned earlier, the full factorial design experiments have been conducted and the
corresponding output response observed by measuring surface roughness. The output results,
along with the full factorial design matrix, are shown in Table 1. The data shown in Table 1 have
been used to analyze and optimize the cylindrical grinding process to minimize the surface
roughness by using ANOVA and the RSM-based SA algorithm.
Table 1. Full factorial design matrix and output response
4.1 Parametric influence on surface roughness
The ANOVA technique is applied to the experimental data to determine the relative magnitude of
the effect of each factor on Rq and to estimate the error variance. How large a factor effect is
relative to the error variance can be judged from the F column: the larger the F-value, the larger
the factor effect compared to the error variance (Nixon and Ravindra (2011)). From Table 2, it is
concluded that work speed is the variable which has the largest effect on surface roughness (Rq).
The main (Figure 2) and interaction (Figure 3) effect plots for the input parameters vs.
surface roughness (Rq) are drawn by taking the mean values of Rq. These plots are very useful for
identifying the factor effects, individually as well as in combination. In a main/interaction plot, when
the lines are parallel, the main/interaction effects are zero; the steeper the lines, the more influence
the main/interaction effect has on the response. From Figure 2, it is noted that longitudinal feed (B)
has the larger effect on Rq, followed by work speed (C). Infeed is insignificant, as its line is almost
parallel to the axis, as found from Figure 2. From Figure 3, it is concluded that non-parallelism of
the plots indicates that some amount of interaction exists between the two
factors, whereas intersecting lines are a strong indication of an interaction effect on Rq.
Table 2. Analysis of variance for Rq
Figure 1. Experimental setup
Figure 2. Main effect plots for surface roughness (Rq): data means vs. infeed A (0.04, 0.05, 0.06
mm/cycle), longitudinal feed B (70, 80, 90 mm/s) and work speed C (80, 112, 160 rpm)
Figure 3. Interaction plots for surface roughness (Rq): data means for the AB, AC and BC interactions
4.2 Mathematical modeling and optimization
As already mentioned, RSM consists of a collection of mathematical and statistical
techniques; it is applied to the experimental data (Table 1) and the mathematical model
developed is shown in Equation 2.
YRq = −5.30350 − 37.0266·A + 0.172311·B + 0.0118120·C + 63.8889·A² − 0.00120778·B²
− 0.000114091·C² + 0.130000·A·B + 0.180482·A·C + 0.0000895833·B·C    (2)
where YRq = output response (surface roughness Rq in microns), A = infeed in mm/cycle,
B = longitudinal feed in mm/s and C = work speed in rpm. The above mathematical model
(Equation 2) can be used to select the optimum conditions for obtaining the desired Rq value
within the limits of the input parameters. The simulated annealing algorithm is used in the present
case to solve the obtained mathematical model, i.e. Equation 2. The following steps are involved in
using the optimization toolbox in MATLAB 7.1: 1. Select the fitness function (i.e. the
objective function that is to be optimized). 2. Select the starting points of the input parameters. 3.
Fix the lower and upper bounds of the input parameters. 4. Run the solver.
At each run of the SA toolbox, a new set of optimum parametric conditions, as well as the
output response variable, is generated. The optimum grinding condition found from
the SA toolbox is: infeed (A) = 0.06 mm/cycle, longitudinal feed (B) = 89.95 ≈ 90 mm/s and work
speed (C) = 80.02 rpm, with surface roughness (Rq) = 0.8599 μm. This condition lies within the
range of the input parameters used in the study. A confirmatory test reveals the validity of the
proposed methodology for prediction of surface roughness in the traverse-cut cylindrical grinding
process.
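As a quick sanity check of the regression model (not a re-run of the SA search), Equation 2 can be evaluated directly at the reported optimum; the value agrees with the Rq of 0.8599 μm quoted above.

```python
# Evaluate the RSM model of Equation 2 at the optimum found by SA.
def rq_model(a, b, c):
    """Second-order RSM model of Equation 2: a = infeed (mm/cycle),
    b = longitudinal feed (mm/s), c = work speed (rpm); returns Rq in microns."""
    return (-5.30350 - 37.0266 * a + 0.172311 * b + 0.0118120 * c
            + 63.8889 * a ** 2 - 0.00120778 * b ** 2 - 0.000114091 * c ** 2
            + 0.130000 * a * b + 0.180482 * a * c + 0.0000895833 * b * c)

rq = rq_model(0.06, 89.95, 80.02)  # optimum reported by the SA toolbox
```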
5. Conclusion
On the basis of the experimental results and the optimal parametric setting for surface
roughness obtained using the RSM-based SA algorithm with factorial design, the following
points can be concluded:
From the ANOVA results, it is found that work speed is the factor that has more influence
on surface roughness (Rq) than the other factors and their interactions.
From the main effect plots, it is found that longitudinal feed and work speed are the
significant factors for surface roughness.
Interaction effect plots revealed that all the interaction effects of the input parameters have
a significant effect on the output response (Rq).
The second-order mathematical model developed by RSM builds the relationship
between the input factors and the output variable; this equation can be used for predicting
the Rq value for a given set of grinding parameters.
The optimum grinding setting found by solving the mathematical model using SA is:
infeed = 0.06 mm/cycle, longitudinal feed = 90 mm/s and work speed = 80 rpm. The
optimum condition obtained from SA has been validated by a confirmatory test.
Acknowledgement
Research support provided by the Council of Scientific and Industrial Research (CSIR), India:
File No. 9/96 (0723)2k12-EMR-I dated 27/02/2012, and the University Grants Commission (UGC),
India: File No. F1-17.1/2011-12/RGNF-SC-AND-2939/(SA-III/Website) dated 06/06/2012, to
Ramesh Rudrapati (one of the authors) is gratefully acknowledged.
References
Baek, S. K., Lee, J. K., Lee, E. S. and Lee, H. D. An experimental investigation of optimal
    grinding condition for aspheric surface lens using full factorial design. Key Engineering
    Materials, 2007, 329, 27-32.
Kwak, J. S. Application of Taguchi and RSM for geometric error in surface grinding process.
    International Journal of Machine Tools & Manufacture, 2005, 45, 327-334.
Kolahan, F., Abolbashari, N. H. and Mohitzadeh, S. Simulated annealing application for structural
    optimization. World Academy of Science, Engineering and Technology, 2007, 35.
Lakshmi, V.V.K. and Subbaiah, K.V. Modeling and optimization of process parameters during end
    milling of hardened steel. Int. J. of Engg. Research and Applications, 2012, 2, 674-679.
Mohd, Z. A., Haron, H. and Sharif, S. Optimization of process parameters in the abrasive water
    jet machining using integrated SA-GA. Applied Soft Computing, 2011, 11, 5350-5359.
Nixon, K. and Ravindra, H. V. Parametric influence and optimization of wire EDM of hot die steel.
    Machining Science and Technology, 2011, 15(1), 47-75.
Sahin, Y. and Riza, A. M. Surface roughness model for machining mild steel with coated carbide
tool. Materials & Design, 2005, 26, 321-326.
Thomas, M., Beauchamp, Y., Youssef, Y. A. and Masounave, J. An experimental design for
    surface roughness and built-up edge formation in lathe dry turning. International Journal of
    Quality Science, 1997, 2(3), 167-180.
Yang, S. H., Srinivas, J., Mohan, S., Lee, D.M. and Balaji, S. Optimization of EDM using
    simulated annealing. Journal of Materials Processing Technology, 2009, 209, 4471-4475.
Application of TOPSIS Analysis for Selection of Nozzle in
Mechanical Deterioration Test Rig
N. R. Rajhans1, R. S. Garodi1*, Jyoti Kirve2
1College of Engineering, Pune 411005
2Assistant Director, SHL, ARAI, Pune 411038
*Corresponding author (e-mail: rohit.garodi44@gmail.com)
Decision making is the process of finding the best option from all of the feasible
alternatives. In this paper, from among multi-criteria models for making complex decisions
and multiple attribute models for the most preferable choice, the technique for order preference
by similarity to ideal solution (TOPSIS) approach has been dealt with. This paper proposes
a solution for selection of the nozzle for the mechanical deterioration test for headlight
lenses used at the Automotive Research Association of India (ARAI).
Keywords: TOPSIS, Nozzle Selection, Multi Criteria Decision
1. Introduction
Mechanical deterioration testing of headlight lenses carried out at the Automotive Research
Association of India (ARAI) makes use of a special-purpose nozzle. The test is carried out under
standard parameters, and various manufacturers provide a wide range of nozzles with some variation
in the parameters, material and cost. The basic purpose of this paper is to propose a solution for
selection of a nozzle for the test. The multi-criteria decision making TOPSIS method is used in this
paper for selection of the nozzle. The basic principle of the method is that the chosen alternative
should have the shortest distance from the positive ideal solution and the greatest distance from
the negative one. As nozzle selection is a very crucial problem for the spray test, a systematic and
scientific approach is necessary for its solution. Such a methodology, based on TOPSIS, is
proposed in the present study.
A multi-attribute decision making problem with 'm' alternatives that are evaluated by 'n'
attributes may be viewed as a geometric system with 'm' points in the n-dimensional space. The
alternative which is nearest to the positive ideal solution is preferred in this method.
Therefore, the logic and basic principle behind the TOPSIS concept are that the most preferred
alternative should simultaneously have the shortest distance from the positive ideal solution and
the farthest distance from the negative ideal solution, which also reflects the rationale of
human choice. The TOPSIS method has been widely used for solving practical decision making
problems due to its simplicity and comprehensibility. The method is able to measure the relative
performance of the decision alternatives with high computational efficiency, owing to a minimum
of numerical calculation. TOPSIS is a technique for establishing an order preference by
similarity to the ideal solution, and was primarily developed for dealing with real-valued data. This
technique is currently one of the most popular methods for multiple criteria decision making
(MCDM).
2. Steps in TOPSIS Analysis
2.1 Step 1: Construct the standardized decision matrix A. For a comprehensive assessment
problem with n evaluation units and m evaluation indexes, the decision matrix is A = [xij]
(i = 1, …, n; j = 1, …, m). A normalized matrix R = [rij] is then constructed by vector
normalization:

rij = xij / √(Σi x²ij),  i = 1, …, n; j = 1, …, m    (1)

2.2 Step 2: Construct the weighted standardized decision matrix V = [vij] from R and the
weight vector W = (w1, w2, …, wm):

vij = wj · rij,  i = 1, …, n; j = 1, …, m    (2)

2.3 Step 3: Determine the ideal solution A+ and the minus (negative) ideal solution A−,
where J denotes the set of benefit attributes and J′ the set of cost attributes:

A+ = {(maxi vij | j ∈ J), (mini vij | j ∈ J′)} = {v1+, v2+, …, vj+, …, vm+}    (3)

A− = {(mini vij | j ∈ J), (maxi vij | j ∈ J′)} = {v1−, v2−, …, vj−, …, vm−}    (4)

2.4 Step 4: Calculate the separation si+ of each alternative from the ideal solution:

si+ = √( Σj (vij − vj+)² ),  i = 1, …, n    (5)

and the separation si− from the minus ideal solution:

si− = √( Σj (vij − vj−)² ),  i = 1, …, n    (6)

2.5 Step 5: Calculate the relative proximity index ci of each alternative to the ideal solution:

ci = si− / (si+ + si−),  0 ≤ ci ≤ 1,  i = 1, …, n    (7)

with ci = 1 if Ai = A+ and ci = 0 if Ai = A−.

2.6 Step 6: Rank the priority of the alternatives in descending order of ci.
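Steps 1-6 can be sketched in pure Python. The score matrix below is an assumption on our part, reconstructed from the normalized values in Table 4 (rows are options 1-5; columns are pressure, flow rate, material, spray angle and cost); all five criteria are treated as benefit-type because the raw parameters are converted to desirability scores before normalization.

```python
# TOPSIS (steps 1-6) for the nozzle-selection data; all criteria benefit-type.
import math

def topsis(x, w):
    n, m = len(x), len(x[0])
    # Step 1: vector normalization, column by column
    norm = [math.sqrt(sum(x[i][j] ** 2 for i in range(n))) for j in range(m)]
    # Step 2: weighted standardized matrix
    v = [[w[j] * x[i][j] / norm[j] for j in range(m)] for i in range(n)]
    # Step 3: ideal and minus-ideal solutions (benefit criteria only here)
    v_pos = [max(v[i][j] for i in range(n)) for j in range(m)]
    v_neg = [min(v[i][j] for i in range(n)) for j in range(m)]
    # Steps 4-5: separations and relative proximity to the ideal solution
    c = []
    for i in range(n):
        s_pos = math.sqrt(sum((v[i][j] - v_pos[j]) ** 2 for j in range(m)))
        s_neg = math.sqrt(sum((v[i][j] - v_neg[j]) ** 2 for j in range(m)))
        c.append(s_neg / (s_pos + s_neg))
    return c

# Desirability scores (assumed; reconstructed from Table 4's normalized values)
scores = [[8, 8, 8, 6, 8],   # option 1
          [8, 8, 4, 6, 9],   # option 2
          [6, 9, 8, 9, 5],   # option 3
          [7, 8, 8, 8, 7],   # option 4
          [9, 8, 8, 8, 6]]   # option 5
weights = [0.25, 0.20, 0.10, 0.05, 0.40]
c = topsis(scores, weights)
# Step 6: rank option indices in descending order of closeness
ranking = sorted(range(5), key=lambda i: -c[i])
```

Under these assumed scores the closeness values come out as roughly 0.72, 0.77, 0.20, 0.49 and 0.43 for options 1-5, matching Table 5's ranking M2 > M1 > M4 > M5 > M3.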
3. The application of TOPSIS method for selection of nozzle
Table 1: Standard parameters

Option      Pressure (bar)  Flow rate (LPM)  Material  Spray angle   Cost (INR)
Option 1    5.6             0.24             SS304     15°–20°       6500
Option 2    5.6             0.24             Brass     15°–20°       5500
Option 3    6               controllable     SS304     controllable  17500
Option 4    4               0.24             SS304     20° ± 0.5°    8500
Option 5    6               0.24             SS304     20° ± 0.5°    9875

Multiple options are available for manufacturing the nozzle; the above table shows the various
options together with the parameters provided by each manufacturer.
3.1 Assigning weights to parameters
The assigned weight depends on the importance of the parameter required in the test. If a
parameter is very important, the highest weight is assigned to it; if a parameter is common or
less important, the lowest weight is assigned to it. The assigned weights drive the overall ranking
obtained from the matrix.
Here the basic problem of selection of a nozzle deals with five parameters: pressure,
flow rate, material of nozzle, spray angle and cost of nozzle. Different manufacturers
provide a range of products having variations in these parameters. The purpose of the TOPSIS
method is to provide an adequate solution: TOPSIS ranks the candidate products in descending
order, so that even if the proposed product is not desirable for some reason, the next
proposed solution can be chosen, and so on.
Table 2: Weights assigned to different parameters

Weight        9             8           7         6     5
Pressure      6 bar         5.6 bar     4 bar     --    --
Flow rate     controllable  0.24 LPM    less      --    --
Material      --            SS304       Brass     --    --
Spray angle   controllable  20° ± 0.5°  15°–20°   --    --
Cost          5500          6500        8500      9875  17500
Table 3: Standardized matrix

Weight          0.25      0.2        0.1       0.05         0.4
                N1        N2         N3        N4           N5
                Pressure  Flow rate  Material  Spray angle  Cost
Option 1 (M1)   8         4          8         6            8
Option 2 (M2)   8         6          8         6            9
Option 3 (M3)   8         9          9         9            5
Option 4 (M4)   8         8          8         8            7
Option 5 (M5)   8         7          8         8            6
3.2 Construction of the normalized matrix
The method of construction of the normalized matrix is explained in Equation (1); using that
equation the following matrix is evaluated.
Table 4: Normalized matrix

Weight          0.25      0.2        0.1       0.05         0.4
                N1        N2         N3        N4           N5
                Pressure  Flow rate  Material  Spray angle  Cost
Option 1 (M1)   0.4666    0.4358     0.4851    0.3579       0.5010
Option 2 (M2)   0.4666    0.4358     0.2425    0.3579       0.5636
Option 3 (M3)   0.3499    0.4903     0.4851    0.5369       0.3131
Option 4 (M4)   0.4082    0.4358     0.4851    0.4772       0.4384
Option 5 (M5)   0.5249    0.4358     0.4851    0.4772       0.3757
All the parameters in the first matrix are in the form of test standards; assigning weights to them
converts the matrix into a comparable form.
3.3 Ideal solution based on best and worst conditions
Considering the best ideal values and the worst possible values, an ideal solution is derived. The
following table shows the alternatives ranked in descending order of closeness to the ideal
solution. If for any reason the best solution is not selected, the next best solution is ready. Thus
this method provides a comparative study, in a scientific manner, for selecting the best product
in multiple attribute complex decision making problems.
This study will also be useful for future selection of nozzles: even if one nozzle is selected from
any one option defined in the table, a comparative study to select the best ideal solution will be
readily available in future.
The results obtained by TOPSIS are shown in Table 5.
Table 5: Optimized solution

Option number   TOPSIS score
Option 2        0.7675
Option 1        0.7231
Option 4        0.4948
Option 5        0.4254
Option 3        0.2042

M2 > M1 > M4 > M5 > M3
So it is observed that Manufacturer 2 provides the ideal nozzle, as the cost of the nozzle is very
less & all the parameters are within acceptance limit. It also ensures the nozzle will sustain to
given test standards. If in any case due to non- availability of manufacture of any other problem,
the next best solution out of the available solutions can be selected. TOPSIS method thus
identifies the priority for selection using scientific methodology.
The results obtained by TOPSIS helps in identifying the best option out of available options
scientifically. The TOPSIS scores gives a fair idea about by how much value each option differ
from the best option available. It not only helps in selecting the best option but also gives the
sequence in which different alternatives can be selected if one of the alternatives is not feasible.
In addition to this, it helps in deciding the priorities for different options
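The closeness coefficient behind such scores can be sketched in a few lines. The decision matrix, weights, and criterion types below are illustrative assumptions, not the paper's nozzle data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix : (alternatives x criteria) raw decision matrix
    weights: criterion weights summing to 1
    benefit: True for higher-is-better criteria, False for cost criteria
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each column, then apply the criterion weights
    v = np.asarray(weights) * m / np.linalg.norm(m, axis=0)
    # The ideal best/worst depend on whether a criterion is benefit or cost
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)   # closeness coefficient in [0, 1]

# Illustrative data: one benefit attribute and one cost attribute
scores = topsis([[7, 9], [9, 6], [6, 8]],
                weights=[0.5, 0.5],
                benefit=[True, False])
ranking = np.argsort(scores)[::-1]   # indices of the options, best first
```

An option that is best on every criterion has zero distance to the ideal and scores exactly 1; sorting the scores in descending order yields an ordering like the M2 > M1 > M4 > M5 > M3 sequence above.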
4. Conclusion
The TOPSIS method is a multiple-criteria decision-making method suited to handling decision problems with multiple attributes and several alternatives. This paper applies the TOPSIS method to the selection of a spray nozzle for a test rig, with good results.
References
Baykasoglu, A., Kaplanoglu, V., Durmusoglu, Z.D.U. and Sahin, C. Integrating fuzzy DEMATEL and fuzzy hierarchical TOPSIS methods for truck selection.
Dymova, L., Sevastjanov, P. and Tikhonenko, A. An approach to generalization of fuzzy TOPSIS method. Institute of Computer & Information Sciences, Technical University of Czestochowa, Dabrowskiego 73, 42-201 Czestochowa, Poland.
Jahanshahloo, G.R., Hosseinzadeh Lotfi, F. and Izadikhah, M. Extension of the TOPSIS method for decision-making problems with fuzzy data.
Rao, R.V. Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods, Springer-Verlag, London, 2007.
Study on the Application of TOPSIS Method to the Introduction of Foreign Players in CBA Games. Department of Public PE, Xuchang College, Xuchang, Henan, 461000.
Optimum Design of Cylindrical Roller Bearings by
Optimization Techniques and Analysis using ANSYS
R.D. Dandagwhal*, V.D. Kalyankar
S.V. National Institute of Technology, Surat – 395007, Gujarat, India
*Corresponding author (e-mail: dandgwhalpes@gmail.com)
Cylindrical roller bearings are among the most commonly used machine components in high-speed operations. Owing to their nonlinear, statically indeterminate behavior, cylindrical roller bearings are of great importance in industries worldwide. Among the many design criteria, longer fatigue life, i.e. higher dynamic capacity, is the common criterion for the design and selection of bearings, and maximizing this objective function requires an advanced optimization technique. In the present work, the fatigue life of cylindrical roller bearings is optimized using a new optimization approach known as teaching-learning-based optimization (TLBO). The dynamic capacity of a cylindrical roller bearing is a function of the internal geometry of the bearing: the pitch diameter (Dm), the mean roller diameter (Dr), the effective length of the roller (le), and the number of rolling elements (Z). To ensure the acceptance of the results, the paper validates them against the standard manufacturing data available. In addition, a model representing the nonlinear mechanical behavior of cylindrical roller bearings has been developed for static finite element analysis and analyzed using ANSYS; the Hertz contact stress and the deformation at the contact between the inner race and the roller have been validated.
Keywords: Rolling contact bearings, cylindrical roller bearing, fatigue life, Hertz contact
stress, TLBO algorithm, ANSYS, optimization.
1. Introduction
Increased competition among bearing manufacturers in worldwide markets drives them to provide consumers with low-cost, standard-design bearings of higher endurance. A complete bearing unit comprises an inner ring, an outer ring, the rolling elements, and the cage that separates the rolling elements from each other. In mechanical terms, the system components, namely the bearing, shaft, and housing, represent spring elements that form a statically indeterminate spring system (Harris, 2000). Rolling elements behave in a unidirectional fashion, i.e. they transmit only compressive and not tensile forces, and the compressive load-deflection relationship within the rolling contact is nonlinear. Cylindrical roller bearings (CRB) are usually used for their large load-supporting capability and for high-speed operation. Under a simple applied radial load, the inner and outer races carry the same radial load (Gupta, 2011). Four rolling-element bearing life theories are normally used for the fatigue life of bearings: the Weibull distribution theory, the Lundberg-Palmgren theory, the Ioannides-Harris theory, and the Zaretsky theory. In the present work, the specific dynamic capacities are based on the Lundberg-Palmgren theory (1947).
Many deterministic optimization techniques can be applied to this nonlinear optimization problem, but difficulties arise when the number of design variables is large, and these methods also have slow convergence rates (Changsen, 1991). With continuing research in the field of optimization, nature-inspired heuristic optimization methods now provide better solutions than classical optimization methods. Many well-known nature-inspired algorithms, such as the genetic algorithm (GA), artificial neural networks (ANN), particle swarm optimization (PSO), artificial bee colony (ABC), teaching-learning-based optimization (TLBO), the mine blast algorithm (MBA), and the water cycle algorithm (WCA), have been applied to engineering optimization problems.
Choi and Yoon (2001) optimized the design variables of an automotive wheel-bearing unit of double-row angular contact ball bearings using GA, considering maximization of the life of the unit as the objective function. Chakraborty et al. (2003) optimized deep groove ball bearings using GA, though certain conditions were not considered properly; for example, the assembly angle was kept constant for all
pairs of solutions. Rao and Tiwari (2007) carried out a parametric study on the design variables specified by Chakraborty et al. (2003) and applied bounds to the five constant parameters involved in the constraints. Gupta et al. (2007) applied a non-dominated sorting genetic algorithm (NSGA-II) to a mathematical model comprising a set of objective functions, design parameters, and constraints. Kumar et al. (2008) applied GA to the constrained nonlinear problem of designing cylindrical roller bearings. In another study, Kumar et al. (2009) applied GA to the design optimization of cylindrical roller bearings with logarithmic-profile (LP) crowning. Wei and Chengzu (2010) applied NSGA-II to optimize the design of a high-speed angular contact ball bearing (ACBB) with two objectives, rating life and spin frictional power loss, for a 7007AC bearing. Tiwari et al. (2012) carried out the same kind of study as Kumar et al. (2008) for the design optimization of tapered roller bearings.
Poplawski et al. (2001a, 2001b) used four rolling-element life theories to predict and compare the dynamic capacity and life of bearings, and validated the results with FE analysis for stress and life. Cavallaro et al. (2005) presented an analytical method, based on Roark's formulas and compared with FEM results, to account for the structural deformation of the ring and housing in rolling-element analysis. Demirhan and Kanber (2008) used FEM to investigate the stress and displacement distributions on the inner and outer rings of cylindrical roller bearings. Zhaoping and Jianping (2011) verified the theoretical contact results for a deep groove ball bearing using the APDL language embedded in the finite element software ANSYS.
It is observed from the above literature that relatively little work has been done on the design optimization of rolling contact bearings together with finite element analysis (FEA) of the standard values available. The objective of the present work is to apply the teaching-learning-based optimization (TLBO) technique to the design optimization of cylindrical roller bearings and to validate the results with ANSYS.
2. Cylindrical roller bearing geometry
Roller bearings are usually used in high-speed applications and under exceptionally large loads, conditions that are difficult to meet with ball bearings of the same specifications. The fatigue life of a CRB is strongly affected by its internal geometry. The standard boundary dimensions include the bore diameter (d), the outer diameter of the bearing (D), and the width of the bearing (B). Sharp edges on the roller and on the outer and inner races adversely affect the fatigue life of the bearing, so the inner and outer raceways are chamfered to reduce edge loading. The chamfering dimensions are the outer ring chamfer height, r1, the outer ring chamfer width, r2, the inner ring chamfer height, r3, and the inner ring chamfer width, r4. The standard boundary dimensions and the minimum values of the chamfer heights and widths can be obtained from the SKF general catalogue.
3. Problem formulation for design of CRB
Roller bearings are preferred for their stiffer structure and greater fatigue endurance compared with ball bearings. Under normal operating conditions, contact fatigue plays an important role in defining the life of the bearing (Poplawski, 2001a, 2001b). The objective of the present work is to optimize the fatigue life of the CRB, expressed as

L = (Cd / F)^a × 10^6   (1)

where, as per the Lundberg-Palmgren theory, the value of the exponent a is 3 for point contact and 10/3 for line contact. In a CRB the contact between the roller and both raceways is line contact (Harris, 2000).
3.1 Objective function
As explained in the previous section, fatigue life is a function of the dynamic capacity of the bearing, which is defined as the constant stationary radial load that a rolling bearing could theoretically endure for a basic rating life of one million revolutions without a sign of a fatigue crack in any of its elements (Shigley, 2011). The expressions for the dynamic capacity and the constraints for the design optimization of the cylindrical roller bearing are similar to those of Kumar et al. (2008). The dynamic capacity of a cylindrical roller bearing is given by (Hamrock, 1983)

Cd = bm fc (i le)^(7/9) Z^(3/4) Dr^(29/27)   (2)
where

fc = 207.9 λ ν [γ^(2/9) (1 - γ)^(29/27) / (1 + γ)^(1/4)] × [1 + 1.04 ((1 - γ)/(1 + γ))^(143/108)]^(-9/2)   (3)

and

γ = Dr / Dm   (4)

Here i is the number of rows of rolling elements, le is the effective length of the roller, Z is the number of rolling elements, Dr is the mean diameter of the roller, and Dm is the pitch diameter of the bearing. λ is the reduction factor accounting for manufacturing and mounting deviations, and ν is the factor accounting for edge loading. The factor bm in Eq. (2) accommodates the improvements in bearing geometrical accuracy afforded by modern manufacturing methods, as well as improvements in bearing materials. The problem considered is a single-objective optimization problem, with maximization of the fatigue life of the bearing as the main objective.
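Equations (1) to (4) can be combined into a short numerical sketch. The factor values (bm, λ, ν) and the applied load below are illustrative assumptions, not the calibrated values used in the paper:

```python
def dynamic_capacity(Dm, Dr, le, Z, i=1, bm=1.1, lam=0.83, nu=1.0):
    """Dynamic capacity Cd of a cylindrical roller bearing, Eqs. (2)-(4).

    Dm, Dr, le in mm; bm, lam (λ) and nu (ν) are illustrative factor values.
    """
    g = Dr / Dm                                        # Eq. (4)
    fc = (207.9 * lam * nu                             # Eq. (3)
          * g**(2/9) * (1 - g)**(29/27) / (1 + g)**0.25
          * (1 + 1.04 * ((1 - g) / (1 + g))**(143/108))**(-9/2))
    return bm * fc * (i * le)**(7/9) * Z**0.75 * Dr**(29/27)   # Eq. (2)

def fatigue_life(Cd, F, a=10/3):
    """Basic rating life in revolutions, Eq. (1); a = 10/3 for line contact."""
    return (Cd / F)**a * 1e6

Cd = dynamic_capacity(Dm=25.0, Dr=6.25, le=8.0, Z=11)   # NU 202 geometry
life = fatigue_life(Cd, F=0.6 * Cd)                     # load taken as 60% of Cd
```

Because the load is expressed as a fraction of Cd, the rating life here depends only on that fraction: with F = 0.6 Cd and a = 10/3, Eq. (1) gives about 5.5 million revolutions.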
3.2 Design parameters for optimization
The fatigue life of a roller bearing is a function of the internal geometry of the bearing, expressed as

Cd = f(Dm, Dr, le, Z, KDmin, KDmax, ε, e, β)   (5)

Of these parameters, the first four are the independent parameters needed to describe the internal geometry of the bearing, and these four (Dm, Dr, le, Z) are therefore selected as the basic design variables. In addition, the five constraint constants (KDmin, KDmax, ε, e, β) are also taken as design variables (Gupta, 2007), since they set the constraints on the basic design variables.
3.3 Constraints
Constraints reduce the search domain to the feasible search domain. This section formulates the constraints on the basis of geometry, bearing standards, and strength considerations. The first step in optimizing the variables is to establish their bounds.
Table 1: Constraints defining the bounds of the basic design variables

Design variable         Constraint range                                    Constraints
Pitch diameter (Dm)     (d + 2r3min) ≤ Dm ≤ (D - 2r1min)                    C1(X) = Dm - (d + 2r3min) ≥ 0
                                                                            C2(X) = (D - 2r1min) - Dm ≥ 0
Roller diameter (Dr)    DrLB ≤ Dr ≤ DrUB, where                             C3(X) = Dr - 268.71 (Qmax/σc,max) ≥ 0
                        DrLB = 268.71 (Qmax/σc,max) and                     C4(X) = (1/2)[(D - 2r1min) - (d + 2r3min)] - Dr ≥ 0
                        DrUB = (1/2)[(D - 2r1min) - (d + 2r3min)]
Number of rollers (Z)   π(d + 2r3min)/DrUB ≤ Z ≤ π(D - 2r1min)/DrLB         C5(X) = Z - π(d + 2r3min)/DrUB ≥ 0
                                                                            C6(X) = π(D - 2r1min)/DrLB - Z ≥ 0
Constraints 1 to 6: It should be noted that providing bounds for the variables narrows the search space and leads to faster convergence. Constraints 1 to 6 are used to
obtain the bounds for the basic design variables, using standard dimensions taken from the SKF general catalogue.
Constraints 7 to 19: All the basic design parameters as well as the constraint constants must lie within certain limits, given by constraints 7 to 19 in Table 2. KDmin, KDmax, ε, e, and β are constants whose values were derived from the parametric study of Gupta et al. (2007).
Table 2: Design constraints for the design of cylindrical roller bearings

Constraint no.        Constraint conditions for the design variables
Constraints 7 & 8     KDmin (D - d)/2 ≤ Dr ≤ KDmax (D - d)/2, with 0.4 ≤ KDmin ≤ 0.5 and 0.6 ≤ KDmax ≤ 0.7:
                      C7(X) = 2Dr - KDmin (D - d) ≥ 0;  C8(X) = KDmax (D - d) - 2Dr ≥ 0
Constraints 9 & 10    with 0.03 ≤ e ≤ 0.08:
                      C9(X) = Dm - (0.5 - e)(D + d) ≥ 0;  C10(X) = (0.5 + e)(D + d) - Dm ≥ 0
Constraint 11         with 0.3 ≤ ε ≤ 0.4:  C11(X) = 0.5(D - Dm - Dr) - εDr ≥ 0
Constraint 12         C12(X) = (1/2)(D - Do) - 2r1min ≥ 0
Constraint 13         C13(X) = (1/2)(Di - d) - (1/2)(D - Do) ≥ 0
Constraints 14 & 15   C14(X) = σc,safe - σc,max,i ≥ 0;  C15(X) = σc,safe - σc,max,o ≥ 0
Constraint 16         C16(X) = 2π - 2Z sin⁻¹(Dr/Dm) - Z(π/180) ≥ 0
Constraints 17 & 18   with 0.7 ≤ β ≤ 0.85:  C17(X) = βB - le ≥ 0;  C18(X) = B - le - 2r2 - 2r1 ≥ 0
Constraint 19         C19(X) = (1/2)(D - Do) - 3Zstatic ≥ 0, with Zstatic = 0.626 b0
4. Computational procedure for the teaching-learning-based optimization (TLBO) technique
Like other advanced optimization techniques, TLBO is a nature-inspired algorithm; it is modelled on the process of a teacher teaching students. It is a population-based method and uses a population of solutions to proceed toward the global solution. A stepwise explanation of TLBO is given by Rao et al. (2011). The algorithm has two phases, the 'teacher phase' and the 'learner phase'. It is a parameter-less optimization technique, requiring only the common algorithm control parameters, namely the population size and the number of generations.
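The two phases can be sketched for a generic unconstrained minimization problem. This is an illustrative implementation, not the authors' code; the bearing problem would additionally need the constraint handling of Section 3.3:

```python
import numpy as np

def tlbo(f, bounds, pop_size=50, generations=100, seed=0):
    """Minimal TLBO sketch (unconstrained minimization).

    f      : objective to minimize (use the negative of fatigue life to maximize it)
    bounds : (low, high) arrays giving the range of each design variable
    """
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    pop = rng.uniform(low, high, size=(pop_size, low.size))
    cost = np.apply_along_axis(f, 1, pop)
    for _ in range(generations):
        # Teacher phase: move learners toward the best solution and away from
        # the population mean, with a random teaching factor TF in {1, 2}
        teacher = pop[np.argmin(cost)]
        TF = rng.integers(1, 3)
        new = pop + rng.random(pop.shape) * (teacher - TF * pop.mean(axis=0))
        new = np.clip(new, low, high)
        new_cost = np.apply_along_axis(f, 1, new)
        improved = new_cost < cost
        pop[improved], cost[improved] = new[improved], new_cost[improved]
        # Learner phase: each learner moves toward a better random partner,
        # or away from a worse one (greedy acceptance again)
        partners = rng.permutation(pop_size)
        better = cost[partners] < cost
        step = np.where(better[:, None], pop[partners] - pop, pop - pop[partners])
        new = np.clip(pop + rng.random(pop.shape) * step, low, high)
        new_cost = np.apply_along_axis(f, 1, new)
        improved = new_cost < cost
        pop[improved], cost[improved] = new[improved], new_cost[improved]
    return pop[np.argmin(cost)], cost.min()

# Usage: minimize the 4-variable sphere function
best_x, best_f = tlbo(lambda x: np.sum(x**2),
                      (np.full(4, -5.0), np.full(4, 5.0)))
```

Only the population size and the number of generations have to be chosen, which is what makes TLBO parameter-less in the sense used above.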
5. Results and discussion
With developments in the field of computing, many advanced soft-computing optimization techniques have proved their effectiveness over traditional optimization techniques. The aim of the present work is to optimize the design variables involved in the design of cylindrical roller bearings. Nine variables (Dm, Dr, le, Z, KDmin, KDmax, ε, e, β) have been considered for optimization, following Kumar et al. (2008). For the calculations, the equivalent radial load is assumed to be 60% of the total standard dynamic capacity, Cd.
The results obtained using the TLBO algorithm are tabulated in Table 3. They are obtained after 100 generations with a population size of 50, and each bearing case under consideration is run 50 times to check the consistency of the results. It is observed that as the bearing size increases, the pitch diameter increases, and the other independent parameters increase in proportion.
It is observed from the literature that the main failure mode is contact fatigue between the rolling elements and both rings of the cylindrical roller bearing. Contact is a complex nonlinear phenomenon, which raises two main difficulties. First, it is difficult to identify the contact
area; second, the contact area changes with the load, material, and boundary conditions. The contact stress, also known as the Hertz contact stress, can be determined using the input conditions of constraints 14 and 15 on the inner and outer rings, respectively. The total deformation at the contact zone can be determined from the expression

δ = 3.84 × 10^-5 × Q^0.9 / l^0.8   (6)

where δ is the total deformation, Q is the total load, and l is the effective length of the roller.
The results obtained from the TLBO algorithm, listed in Table 3, are validated using the ANSYS tool. The cylindrical roller bearing NU 202 is taken for validation: the bearing contact and total deformation are discussed, a parameterized 3-D finite element model is built in ANSYS, and the results are tabulated in Table 4.
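Equation (6) is straightforward to evaluate; the load below is an illustrative value, not the actual roller load of the NU 202 case (units as in Palmgren's line-contact formula: Q in N, l in mm, δ in mm):

```python
def contact_deflection(Q, l):
    """Total roller-raceway approach per Eq. (6): delta = 3.84e-5 * Q**0.9 / l**0.8."""
    return 3.84e-5 * Q**0.9 / l**0.8

# Illustrative: a 4.5 kN roller load on an 8 mm effective roller length
delta = contact_deflection(Q=4500.0, l=8.0)   # about 0.014 mm
```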
Table 3: Optimized design parameters for cylindrical roller bearings obtained by the TLBO algorithm
(Dm, Dr, le in mm; dynamic capacities Cd in kN)

Bearing   Dm      Dr     Z   le   kDmin    kDmax    e        ε        β        CdSKF   CdGA    CdTLBO
NU 202    25.00   6.25   11  8    0.4780   0.6347   0.0609   0.3652   0.8361   12.50   18.48   22.64
NU 203    28.50   6.90   12  9    0.4868   0.6991   0.0694   0.3299   0.7789   17.20   23.58   27.16
NU 303    32.00   7.80   12  11   0.4187   0.6106   0.0429   0.3891   0.8083   24.60   31.87   36.27
NU 204    33.50   7.95   13  11   0.4015   0.6796   0.0333   0.3339   0.8123   25.10   32.23   39.09
NU 304    36.00   8.36   14  12   0.4587   0.6211   0.0319   0.3187   0.8442   35.50   45.48   46.42
NU 2205   38.50   7.62   17  14   0.4786   0.6477   0.0576   0.3096   0.7852   34.10   39.75   44.22
NU 305    44.45   8.45   18  12   0.4374   0.6321   0.0781   0.3230   0.7951   46.50   50.39   54.44
NU 2206   46.00   8.25   18  12   0.4017   0.6226   0.0418   0.3980   0.8483   44.00   51.64   52.46
NU 207    53.50   7.81   22  12   0.4007   0.6783   0.0569   0.3314   0.7842   56.00   60.97   64.60
Based on this, the nonlinear model is developed and the contact state is analyzed. ANSYS supports rigid-flexible and flexible-flexible surface contact elements, which form contact pairs of target and contact surfaces. In the given problem, the contact is of the flexible-flexible, line-contact type. With a fine mesh, the whole model has 318048 nodes and 194755 elements. The results obtained from ANSYS are shown in Figure 1.
Table 4: Validation of the TLBO results with ANSYS

Sr. no.   Quantity                                      Theoretical value (TLBO)   Value obtained by ANSYS
1.        Total maximum contact stress on inner ring    3399 MPa                   3531 MPa
2.        Total maximum contact stress on outer ring    2633 MPa                   2688 MPa
3.        Total deformation at roller-ring contact      0.02268 mm                 0.023 mm
Figure 1: Results obtained from ANSYS in terms of contact stress and total deformation
The results of the TLBO algorithm obtained in the present work are compared with the results obtained by previous researchers using GA and with the SKF cylindrical roller bearing catalogue values. The TLBO results show considerable improvement over the previous results. The obtained design variables are checked by simulation for contact stress, deformation, and fatigue life of the inner and outer rings of the bearing. In the simulation, the calculated maximum contact stress on the inner ring is 3531 MPa
while the Hertzian theory value is 3399 MPa. Similarly, on the outer race, the maximum value obtained in ANSYS is 2688 MPa, while theoretically it is 2633 MPa. The total deformation at the roller-ring contact is 0.023 mm, again close to the theoretical value of 0.02268 mm.
6. Conclusion
In the present work, an optimum methodology for the design of cylindrical roller bearings has been proposed. The design optimization is carried out using a new approach known as teaching-learning-based optimization. Maximization of the dynamic capacity is taken as the main objective, and the constraints are formulated on the basis of internal geometry and strength considerations. Simulation and analysis of the results obtained from the TLBO algorithm are carried out using ANSYS. The finite element solutions for the contact stress and deformation of the inner and outer rings for the most heavily loaded roller show good consistency with the theoretical solutions. A convergence study ensured that the design reached the global optimum. The proposed TLBO-based methodology for this constrained, nonlinear, statically indeterminate system can be considered a milestone in the design optimization of cylindrical roller bearings, providing optimal or near-optimal design parameters.
References
Cavallaro, G., Nelias, D. and Bon, F., Analysis of high-speed intershaft cylindrical roller bearing with flexible rings, Tribology Transactions, 2005, 48(2), 154-164.
Changsen, W., Analysis of Rolling Element Bearings, Mechanical Engineering Publications Ltd, 1991.
Choi, D.H. and Yoon, K.C., A design method of an automotive wheel-bearing unit with discrete design variables using genetic algorithms, Journal of Tribology, ASME, 2001.
Chakraborty, I., Kumar, V., Nair, S.B. and Tiwari, R., Rolling element bearing design through genetic algorithms, Engineering Optimization, 2003, 35(6), 649-659.
Demirhan, N. and Kanber, B., Stress and displacement distributions on cylindrical roller bearing rings using FEM, Mechanics Based Design of Structures and Machines: An International Journal, 2008.
Gupta, P.K., Current status of and future innovations in rolling bearing modeling, Tribology Transactions, 2011, 54(3), 394-403.
Gupta, S., Tiwari, R. and Nair, S.B., Multi-objective design optimisation of rolling bearings using genetic algorithms, Mechanism and Machine Theory, 2007, 42, 1418-1443.
Hamrock, B.J. and Anderson, W.J., Rolling-element bearings, NASA Reference Publication 1105, 1983.
Harris, T.A., Rolling Bearing Analysis, John Wiley, New York, 2000.
Kumar, K.S., Tiwari, R. and Reddy, R.S., Development of an optimum design methodology of cylindrical roller bearings using genetic algorithms, International Journal for Computational Methods in Engineering Science and Mechanics, 2008, 9(6), 321-341.
Kumar, K.S., Tiwari, R. and Prasad, P.V.V.N., An optimum design of crowned cylindrical roller bearings using genetic algorithms, Journal of Mechanical Design, ASME, 2009, 131, 051011-1.
Poplawski, J.V., Peters, S.M. and Zaretsky, E.V., Effect of roller profile on cylindrical roller bearing life prediction, Part I: Comparison of bearing life theories, Tribology Transactions, 2001.
Poplawski, J.V., Peters, S.M. and Zaretsky, E.V., Effect of roller profile on cylindrical roller bearing life prediction, Part II: Comparison of roller profiles, Tribology Transactions, 2001, 44(3), 417-427.
Rao, B.R. and Tiwari, R., Optimum design of rolling element bearings using genetic algorithms, Mechanism and Machine Theory, 2007, 42, 233-250.
Rao, R.V., Savsani, V.J. and Vakharia, D.P., Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems, Computer-Aided Design, 2011.
Shigley, J.E., Mechanical Engineering Design, McGraw-Hill, New York, 2011.
SKF, General Catalogue, Germany, 2005.
Tiwari, R., Kumar, K.S. and Reddy, R.S., An optimal design methodology of tapered roller bearings using genetic algorithms, International Journal for Computational Methods in Engineering Science and Mechanics, 2012, 13(2), 108-127.
Wei, Y. and Chengzu, R., Optimal design of high speed angular contact ball bearing using a multiobjective evolution algorithm, International Conference on Computing, Control and Industrial Engineering, IEEE, 2010.
Zhaoping, T. and Jianping, S., The contact analysis for deep groove ball bearing based on ANSYS, Procedia Engineering, International Conference on Power Electronics and Engineering Application, 2011, 23, 423-428.
Optimization of Performance and Analysis of Internal Finned
Tube Heat Exchanger under Mixed Convection Flow
S. B. Mishra*, S. S. Mahapatra
National Institute of Technology, Rourkela, Odisha, 769008, India
*Corresponding author (e-mail: swayambikash86@gmail.com)
Internally finned tubes have received considerable attention because they are widely used in many industrial applications, particularly in heat exchangers. A heat exchanger is a device built for efficient heat transfer from one medium to another, in which one medium is cooled while the other is heated. An internal fin arrangement increases the heat transfer rate by increasing the flow turbulence and the surface area exposed to the fluid flow. The fin size, fin geometry, and fin length must be designed carefully to obtain a compact and reliable heat exchanger. Enhanced heat transfer processes are mainly used in the process industries, air-conditioning equipment, refrigerators, and radiators. Much research has been performed to obtain an efficient, reliable, and optimum heat transfer rate. To obtain optimum results, ranges of data such as the airflow velocity, air density, and heat transfer coefficient are considered. In the current study, experiments were conducted and validated against CFD results, and the responses were optimized using the non-dominated sorting genetic algorithm II (NSGA-II) to obtain the optimum results.
Keywords: Internal finned tube, heat exchangers, heat transfer rate, NSGA-II, optimization
1. Introduction
Heat exchangers have always been an important part of the lifecycle and operation of many industrial systems. A heat exchanger is a device built for efficient heat transfer from one medium to another; typically, when the energy is transferred, one medium is cooled while the other is heated. They are widely used in petroleum refineries, chemical plants, petrochemical plants, natural gas processing, air conditioning, refrigeration, and automotive applications. The use of integrally finned tubes in shell-and-tube heat exchanger design contributes greatly to compactness. Haldar et al. (2007) showed that finned tubular heat exchangers perform better than smooth-tube units. In order to develop a compact shell-and-tube heat exchanger, the basic principles of heat transfer must be considered: the amount of heat transferred is a function of the heat exchanger geometry and the fluid parameters. Fraas and Ozisik (1965) compared shell-side heat transfer coefficients with those for fluid flow through round pipes. Finned tubes provide a greater surface area per unit volume than plain tubes and further reduce the size of the unit, and the use of low-finned tubes results in more cost-effective heat exchanger designs. Jafari Nasr and Polley (2000) showed that fin height, fin width, and fin shape play a significant role in the design of shell-and-tube heat exchangers; it is normal practice in exchanger design to restrict the length of the tubes. Patankar et al. (1979) investigated fully developed turbulent flow and heat transfer characteristics for tubes and annuli with longitudinal straight fins, and found that the local heat transfer coefficient exhibits a substantial variation over the fin height; in general, the fins were found to be as effective a heat transfer surface as the tube wall. El-Sayed (2012) undertook an experimental study to determine the pressure drop and heat transfer coefficient of turbulent flow inside a circular internally finned tube, and concluded that the fin efficiency depends strongly on the Reynolds number. Dhanawade and Dhanawade (2010) studied the enhancement of forced-convection heat transfer with a fin array, conducting experiments with variable heat sources, variable airflow rates, and variable geometrical conditions; the arrangement was found to be more effective than a normal fin array. Edwards and Jensen (1994) performed an experimental investigation of fully developed, steady-state turbulent flow in a longitudinally finned tube. That investigation used a two-
channel, four-beam laser Doppler velocimeter to measure velocity profiles and turbulence statistics of the airflow, and compared friction factors at different Reynolds numbers for flow through a smooth tube and an internally finned tube. Islam and Mozumder (2009) carried out an experimental study on the performance of heat transfer through an internally finned tube, and concluded that, compared with a smooth tube under similar flow conditions, the friction factor and heat transfer rate of an internally finned tube increase by factors of about 5 and 2, respectively.
Optimization techniques are applied to obtain the optimum result for problems with multiple objectives. There are two broad types of approach: combine all the individual objective functions into a single function, or move all but one objective function into the constraint set. A better approach for such problems is to optimize two or more conflicting objectives simultaneously up to certain limits. Srinivas and Deb (1994) suggested the non-dominated sorting genetic algorithm, which extends the multi-objective GA with two additional techniques, Pareto-optimality ranking and fitness sharing; it has been applied, for example, to simultaneously maximize a pumping rate and minimize the pumping cost. Ritzel et al. (1994) applied two variations of the genetic algorithm (GA), a Pareto GA and a vector-evaluated genetic algorithm (VEGA), to multi-objective optimization; their problem was formulated to minimize the design cost and maximize the reliability. Park and Aral (2003) presented a multi-objective optimization approach to determine pumping rates and well locations that prevent saltwater intrusion while satisfying the desired extraction rates in coastal belts.
In the present work, experiments are conducted at variable Reynolds numbers under constant heat flux through an internally finned tube. The results are compared with CFD (ANSYS) results and the errors are calculated. An evolutionary approach, NSGA-II, is then used to obtain the optimum parameters for heat transfer through an internally finned tube.
2. Experimental procedure
An aluminum pipe of length 1000 mm, diameter 50 mm, and thickness 3 mm is considered for the experiment. First, thermocouples are mounted on the aluminum pipe at equally spaced nodes to record the temperature. Heating coils are mounted on the outer surface of the pipe to supply a constant heat flux, which is controlled through the electric supply current by a voltage regulator (Variac). A manometer measures the airflow pressure inside the pipe, which is kept constant by a regulator valve. A thermal indicator displays the temperatures from the 11 thermocouples mounted at the different nodes. To resist heat flow from the outer surface of the pipe to the environment, three layers of insulation are provided: a 7 mm layer of asbestos rope, 25 mm of glass wool, and a 3 mm thick PVC pipe.
Figure 1: Snap view of the experimental setup
Figure 2: Attachment of four internal fins
In the experimental setup, a tube with four equal, straight internal fins is considered. The fins are placed at equal distances from each other, and the fin material is the same as the tube material. Conductive paste is applied between the fins and the inside tube surface to minimize contact losses. The process continues with the supply of a continuous heat flux and the flow of turbulent air through the tube. The setup is investigated at constant heat flux and variable Reynolds number. A steady state is achieved after four hours of running the test rig; accordingly, five readings are taken at regular intervals of time.
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
3. Governing equations
The governing equations for heat transfer through an internally finned tube are as
follows. A Cartesian coordinate system is adopted for the fin, and the conduction equation is
solved:

∂²T/∂x² + ∂²T/∂y² = 0  (1)

Continuity equation:

∂(ρUᵢ)/∂xᵢ = 0  (2)

Momentum equation:

D(ρUᵢ)/Dt = −∂p/∂xᵢ + ∂/∂xⱼ [ μ (∂Uᵢ/∂xⱼ + ∂Uⱼ/∂xᵢ) − ρ uᵢ′uⱼ′ ]  (3)

p = pₛ + ρ∞ g Z  (3a)

p/ρ = RT  (3b)
The variable R is the characteristic gas constant (R = 0.287 kJ/kg-K). Here, pₛ is the static
pressure and p is the modified pressure as per the definition of Eqn. (3a). The density is
taken to be a function of temperature according to the ideal gas law, Eqn. (3b); the thermal
conductivity is kept constant. An empirical correlation to determine the heat transfer coefficient
for forced convection through a smooth or rough duct has been provided by Ozisik. The
relation between the Nusselt number, Prandtl number and Reynolds number is recommended
as follows:
Nu = 0.036 Re^0.8 Pr^(1/3)  (4)

where Nu represents the Nusselt number, Re the Reynolds number and Pr the Prandtl
number. The following relation calculates the friction factor at any axial location; it gives the
total friction factor f when x equals L:

f_x = (ΔP × D) / (2 ρ v² x)  (5)

where f_x stands for the local friction factor, x for the axial distance and ΔP for the pressure
drop.
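The correlations of Eqs. (4) and (5) are straightforward to evaluate. The sketch below, in Python, uses purely illustrative values (the Reynolds number, Prandtl number and geometry here are assumptions, not data from the experiment):

```python
def nusselt(re, pr):
    # Eq. (4): Nu = 0.036 * Re^0.8 * Pr^(1/3)
    return 0.036 * re ** 0.8 * pr ** (1.0 / 3.0)

def local_friction_factor(dp, d, rho, v, x):
    # Eq. (5): f_x = (dP * D) / (2 * rho * v^2 * x); f_x -> f when x = L
    return (dp * d) / (2.0 * rho * v ** 2 * x)

# Illustrative values only: turbulent air flow in a 50 mm duct
nu = nusselt(re=10_000.0, pr=0.7)
f = local_friction_factor(dp=100.0, d=0.05, rho=1.1, v=3.0, x=1.0)
```

With these numbers, Nu works out to roughly 51 and f to roughly 0.25; the point is only the shape of the formulas, not the magnitudes.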
3.1 NSGA-II Technique applied for optimization
NSGA-II is an elitist non-dominated sorting genetic algorithm for solving multi-objective
optimization problems. It has been found that NSGA-II can converge to the global Pareto-optimal
front and can maintain the diversity of the population on the Pareto-optimal front. The
non-dominated sorting algorithm has a great influence on solving multi-objective optimization
problems. The main concept of this technique lies in the non-dominance of solutions. The
steps involved in NSGA-II are listed below:
i. Initialize the population.
ii. Separately calculate all the objective function values.
iii. Rank all the population members using the constrained non-dominating criterion.
iv. Compute the crowding distance.
v. Perform selection according to the crowded-comparison operator.
vi. Generate the child solutions using crossover and mutation.
vii. Combine the child and parent populations to implement elitism and non-dominated
sorting.
viii. Replace the old parent population with the better members of the combined
population.
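The non-dominated sorting and crowding-distance steps above (iii and iv) can be sketched as follows. This is a minimal Python illustration of the general NSGA-II machinery (minimization of all objectives), not the authors' MATLAB implementation:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    # Rank solutions into Pareto fronts (step iii)
    S = [[] for _ in objs]      # indices dominated by each solution
    n = [0] * len(objs)         # how many solutions dominate each solution
    fronts = [[]]
    for i, a in enumerate(objs):
        for j, b in enumerate(objs):
            if dominates(a, b):
                S[i].append(j)
            elif dominates(b, a):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

def crowding_distance(objs, front):
    # Crowding distance within one front (step iv); boundary points get infinity
    dist = {i: 0.0 for i in front}
    for k in range(len(objs[0])):
        order = sorted(front, key=lambda i: objs[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = objs[order[-1]][k] - objs[order[0]][k] or 1.0
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (objs[c][k] - objs[a][k]) / span
    return dist
```

For example, with `objs = [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1), (3, 4)]`, the first five points form the first front and the dominated point `(3, 4)` falls into the second.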
The NSGA-II algorithm is based on both non-dominated sorting and crowding sorting
to obtain the required non-dominated set. The governing equations used for finding out the
optimum results are

f(1) = −9.84936454×10⁻⁶ × ((x(1) × x(2)) ÷ x(3))^0.8 × (x(4) × x(5))^(1/3)  (6)

f(2) = 0.824 × ((x(1) × x(2)) ÷ x(3))^(−0.2)  (7)
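On one reading of Eqs. (6) and (7) — the division by x(3) in Eq. (7) is an assumption, since the operator is illegible in the original — the two objectives can be evaluated as:

```python
def f1(x):
    # Eq. (6): Nusselt-number objective, negated so that it can be minimized
    return -9.84936454e-6 * ((x[0] * x[1]) / x[2]) ** 0.8 * (x[3] * x[4]) ** (1.0 / 3.0)

def f2(x):
    # Eq. (7): friction-factor objective; the "/ x[2]" is an assumed reading
    return 0.824 * ((x[0] * x[1]) / x[2]) ** -0.2

# One row of Table 2: density, velocity, viscosity, heat transfer coefficient, length
row = [1.072, 2.5846, 2.029e-5, 21.39358476, 0.882144462]
nu_obj, fric_obj = f1(row), f2(row)
```

Note that the magnitudes and signs in Table 1 suggest additional scaling inside the solver code, so this sketch reproduces only the functional form, not the tabulated values.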
Table 1. Maximum values of the friction factor and Nusselt number

Nusselt number   Friction factor    Nusselt number   Friction factor
-336.4553969     -0.007742229       -535.8061745     -0.007429827
-695.3589737     -0.006988554       -580.9493913     -0.007240111
-519.3625683     -0.007478575       -650.2452142     -0.007088263
-633.1715459     -0.007121449       -573.6276988     -0.007325279
-628.2767745     -0.007158673       -380.1413818     -0.007742044
-448.4105649     -0.007723697       -506.4502012     -0.007535595
-548.312399      -0.007332202       -422.6650484     -0.007730905
-413.1484202     -0.007734555       -478.5184711     -0.007623093
-678.9348935     -0.007026785       -603.8776249     -0.007233633
-456.7160863     -0.007669427       -609.9671573     -0.00721564
Table 2. Variable parameters to get the optimum results

Air density   Air velocity   Dynamic viscosity   Heat transfer coefficient   Length
1.072         2.5846         0.00002029          21.39358476                 0.882144462
1.145999955   3.947966318    1.98545E-05         48.74529098                 0.999979241
1.089796962   2.992832172    2.00857E-05         46.37743836                 0.987598755
1.108945307   3.734530054    1.99688E-05         47.87180298                 0.963726891
1.109896372   3.651763562    2.00591E-05         48.20614867                 0.995381525
1.073417023   2.603308727    2.02201E-05         44.53590623                 0.974694506
1.104950561   3.286865113    2.0261E-05          44.15165295                 0.962940125
1.072516362   2.593380813    2.0268E-05          36.16483879                 0.954782703
1.139592655   3.90351528     2.0061E-05          48.67864631                 0.995117555
1.082866619   2.675124085    2.02347E-05         45.25006597                 0.93137338
1.072006602   2.584603043    2.029E-05           27.85977986                 0.879412556
1.108080166   3.021442225    1.99546E-05         47.52745184                 0.978299657
1.137113757   3.346816329    1.99309E-05         44.73391608                 0.97133629
1.1204426     3.789752046    2.00015E-05         47.99352254                 0.984409181
1.105728434   3.240091748    1.98926E-05         48.53308113                 0.991725057
1.072034921   2.584760042    2.02895E-05         30.85815311                 0.881823732
1.12149587    2.793589648    2.00407E-05         47.68922936                 0.975549879
1.073059542   2.594365502    2.02381E-05         39.93305561                 0.920597179
1.099507475   2.703011283    2.01403E-05         47.01954265                 0.958608974
1.119365031   3.449728545    2.01327E-05         48.68933506                 0.991607201
1.119853312   3.450085458    1.98943E-05         48.68933506                 0.991819871
The optimum values of the Nusselt number and friction factor have been found using the
NSGA-II technique. The empirical equations for the Nusselt number and friction factor
responses and their input parameters are used for multi-objective optimization in the MATLAB
environment. Here, an initial population size of 80 is taken and optimization is carried out by
setting simple crossover and bitwise mutation with a crossover probability Pc = 0.8, a migration
interval of 20, a migration factor of 0.2 and a Pareto fraction of 0.35. According to the algorithm,
ranking and sorting of solutions are done, and the final Pareto-optimal set is shown in
Fig. 3.
Figure 3. Pareto front
4. Conclusion
In the present paper, the friction factor and Nusselt number are estimated experimentally
for an internally finned tube. The friction factor decreases with increasing Reynolds number,
which may be due to the thinner hydrodynamic boundary layer at higher Reynolds numbers.
The Nusselt number decreases rapidly due to the increase in fluid temperature. The maximum
Nusselt number is found with a low value of the friction factor. The general equations are
tested for statistical validity. Finally, NSGA-II is used to obtain the Pareto-optimal solutions for
minimization of the friction factor and maximization of the Nusselt number. The optimum
results can be validated by performing the experiment with the corresponding input
parameters.
References
Dhanawade, K. H. and Dhanawade, H. S., “Enhancement of Forced Convection Heat
Transfer from Fin Arrays with Circular Perforation”, Frontiers in Automobile and
Mechanical Engineering (FAME), Vol. 01, pp.192-196, 25-27 Nov. 2010.
Edwards, D. P. and Jensen, M. K., “Pressure Drop and Heat Transfer Predictions of Turbulent
Flow in Longitudinal Finned Tubes”, Advances in Enhanced Heat Transfer, ASME HTD,
Vol. 287, pp. 17-23. 1994.
El-Sayed, S. A., “Experimental Study of Heat Transfer to Flowing Air inside a Circular Tube with
Longitudinal Continuous and Interrupted Fins”, Journal of Electronics Cooling and
Thermal Control, Vol. 02, no. 01, pp. 1–16, 2012.
Haldar, S. C., Kochhar, G. S., Manohar, K., and Sahoo, R. K., “Numerical study of laminar free
convection about a horizontal cylinder with longitudinal fins of finite thickness”,
International Journal of Thermal Sciences, Vol. 46, no. 7, pp. 692–698, Jul. 2007.
Islam, A. and Mozumder, A. K., Forced convection heat transfer performance of an internally
finned tube. Journal of Mechanical Engineering, North America, Vol. 40, Sep. 2009.
Jafari Nasr, M. R. and Polley, G. T., “Derivation of charts for the Approximate Determination of
the Area Requirements of Heat Exchangers using Plain and Low Finned Tube Bundles”,
Chemical engineering & technology Vol. 23(1) pp. 46-54, 2000.
Park, C. H. and Aral, M. M., "Multi-objective optimization of pumping rates and well placement
in coastal aquifers", J. Hydrology, Vol. 290, pp. 80–99, (2003).
Patankar S. V, Ivanovic M, and Sparrow E. M, “Analysis of Turbulent Flow and Heat Transfer
in Internally Finned Tubes and Annuli”, J. Heat Transfer Vol. 101(1), pp.29-37 (1979).
Ritzel B. J, Eheart J. W. and Ranjithan S., "Using genetic algorithms to solve a multiple
objective groundwater pollution containment problem", Water Resources Research, vol.
30(5) pp. 1589–1603. (1994).
Srinivas, N. and Deb, K., “Multi-Objective Function Optimization using Non-dominated Sorting
Genetic Algorithms”, Evolutionary Computation, Vol. 2(3), pp. 221–248, (1994).
ESDU (Engineering Science Data Unit), “Low-Finned Staggered Banks, Heat Transfer and
Pressure loss for Turbulent Single Phase Cross Flow”, ESDU Number 84016, (1984).
Fraas, A. P. and Ozisik, M. N., “Heat Exchanger Design”, John Wiley and Sons, Inc., New York,
London, Sydney, (1965).
HEI (Heat Exchanger Institute Incorporated), “Standards for Power Plant Heat Exchangers”,
Fourth Edition, Thermal Engineering International (USA) Inc., 5701 South Eastern
Avenue, Suite #300, Los Angeles, CA 90040, pp. 1–29.
Aerodynamic Shape Optimisation of a Two-Dimensional Body
for Minimum Drag using Simulated Annealing Method
Shuvayan Brahmachary*, Ganesh Natarajan, Niranjan Sahoo
Department of Mechanical Engg., Indian Institute of Technology, Guwahati, Assam, India
*Corresponding author (e-mail: b.shuvayan@iitg.ernet.in)
Optimization was carried out for an airfoil whose shape is defined by a power law. The
flow domain was restricted to hypersonic flow, and the results obtained from
optimization were matched with analytical results and later verified with FLUENT
simulations. The paper also presents a performance analysis of the optimization
method used for this particular problem, namely simulated annealing, through various
case studies aimed at improving the results of the method in terms of computational
time and closeness to the global optimum.
1. Introduction
An important aspect of supersonic/hypersonic compressible flows is energy. When the
flow velocity is decreased, some of its kinetic energy is lost and reappears as an increase in
internal energy, hence increasing the temperature of the gas. When the flow breaks the sound
barrier and enters the supersonic/hypersonic flow regime, a new phenomenon is introduced:
shock waves. When an object is placed in a flow, it experiences drag. The drag force
on a body due to viscous effects of the fluid is the sum of the skin friction drag due to the
shear stress on the wall and the pressure drag due to flow separation. Over the years,
aerodynamicists have developed design techniques to minimize the drag force on the fuselage
at supersonic/hypersonic speeds. The present problem of shape optimization derives its
constraints in the form of length and diameter. The minimum-drag shape is a flat plate in
two-dimensional flow and a needle-like profile in axisymmetric flow. This is a meaningless
result, as an airfoil with the shape of a flat plate is not practical.
The literature survey has revealed that a significant amount of research has been carried
out in the field of aerodynamic shape optimization using various techniques. Foster and
Dulikravich (1997) gave a comparative study of the use of hybrid genetic and gradient search
algorithms for three-dimensional aerodynamic shape optimization. Jameson (2003) presented
the use of adjoint methods, through the techniques of control theory, to improve the
convergence rate of gradient-based aerodynamic shape optimization. Zingg et al. (2008)
presented the use of genetic and gradient-based algorithms applied to aerodynamic
optimization problems. In another study, Deepak et al. (2008) applied an evolutionary
algorithm to the shape optimization of a hypersonic flight experiment nose cone, with the aid
of the ANSYS CFX computational fluid dynamics solver. More recently, Yadav et al. (2012)
presented a numerical study of aerodynamic shape optimization in a supersonic flow.
2. Problem formulation
The length of the airfoil is fixed at 300 mm and the height of the airfoil is 100 mm.
The operating conditions of the flow field are Mach 5 and a free-stream total pressure of
101.325 kPa. The airfoil profile shape is given by the power law

y = a xⁿ; 0.1 ≤ n ≤ 1

The analytical solution for the pressure and drag coefficient calculation is based on a local
inclination method called the tangent wedge method, which assumes that the pressure at a
point is the same as the surface pressure on the equivalent wedge at the free-stream Mach
number M∞ (Anderson, 1989). The power law was used to generate the surface of the airfoil
body, with the index 'n' varied from 0.1 to 1. If the value of 'n' is set to 0, the shape obtained
is a straight horizontal line, which yields a meaningless airfoil shape, as noted by
Palaniappan and Jameson (2010). For values of 'n' greater than 1, the shape obtained is
concave, which again is not suitable for our problem statement. Thus the value of 'n' is varied
from 0.1 to 1.
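The power-law surface is easy to generate. The sketch below (Python) assumes the profile is scaled so that y equals the specified height at x = L, which is one plausible reading of the constraint:

```python
def power_law_profile(n, length=0.3, height=0.1, npts=51):
    # y = a * x**n, with a chosen so that y(length) = height (an assumption)
    a = height / length ** n
    xs = [length * i / (npts - 1) for i in range(npts)]
    return [(x, a * x ** n) for x in xs]

wedge = power_law_profile(1.0)    # n = 1: straight wedge surface
blunt = power_law_profile(0.5)    # n = 0.5: blunt, convex nose
```

For n = 1 the surface is a straight line from the leading edge to (0.3, 0.1); lower values of n push the surface outward near the nose.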
The objective function for optimization is the drag coefficient. The drag coefficient is a
direct measure of the drag force, which is to be reduced, and thus serves as the appropriate
objective function. The optimization code should give that particular optimized value of 'n' for
which the drag coefficient is minimum, subject to the constraints. The optimization method
used is simulated annealing (SA), because it can deal with arbitrary systems and statistically
guarantees an optimal solution. SA has many operands associated with it, which gives an
opportunity to fine-tune the algorithm with respect to the problem under consideration for a
better and faster solution closer to the global optimum.
The final part of the project is to verify the results obtained with computational
simulations. ANSYS FLUENT v14 has been used to verify the results within the given
framework of the design specifications, operating conditions and reference values.
3. Algorithm
The source code was entirely written in MATLAB R2011b and corresponding results
were obtained. The optimization method used was simulated annealing.
Initial guess (n₀): 0.1 to 1
Initial temperature (T): 0.1017
Standard deviation: 3σ = (difference of upper and lower limits of 'n') / 2
Sampling method: Gaussian distribution
Cooling scheme: Tn+1 = α Tn; 0 < α < 1
A very low value of the cooling temperature was set as the stopping criterion. Again, the
values of the initial guess, cooling scheme, stopping criterion, etc., can be fine-tuned for better
performance of the algorithm, keeping in mind the need for more diversity in the result, a
faster search and a solution closer to the global optimal value (a part of the case study
undertaken as performance analysis).
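The loop implied by the settings above can be sketched as follows; the quadratic objective here is only a stand-in, since the actual drag-coefficient evaluation via the tangent wedge method is not reproduced:

```python
import math
import random

def simulated_annealing(f, lo, hi, t0, alpha=0.99, t_min=1e-7, seed=0):
    # Minimal SA loop: Gaussian sampling, exponential cooling T_{n+1} = alpha * T_n,
    # Metropolis acceptance of worse points with probability exp(-dE / T).
    rng = random.Random(seed)
    sigma = (hi - lo) / 6.0          # 3*sigma = (hi - lo) / 2, as in the text
    x = rng.uniform(lo, hi)          # initial guess
    fx = f(x)
    t = t0
    while t > t_min:
        cand = min(hi, max(lo, rng.gauss(x, sigma)))   # clamp to bounds
        fc = f(cand)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= alpha
    return x, fx

# Hypothetical smooth objective used only to exercise the loop
best_x, best_f = simulated_annealing(lambda x: (x - 0.7) ** 2, 0.0, 1.0, t0=0.1, seed=1)
```

On this toy objective the returned point lands near the minimizer at 0.7; for the airfoil problem, f would instead evaluate the drag coefficient for a given index n.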
4. Preliminary result and performance analysis
Result from optimization: n ≈ 1
The results obtained were close to optimal, i.e., n = 1, but the optimization code had scope for
improvement. This was explored in the case studies described below.
Parameter 1: Cooling scheme
One of the most important aspects of SA is the cooling scheme, which defines the rate at
which restrictions are imposed on accepting bad solutions. In the studies conducted by
Arenas et al. (2010), four cooling schemes were compared using the ANOVA method; of the
four, the exponential cooling scheme performed reasonably well under certain conditions.
This is the reason the exponential cooling scheme was implemented in this optimization
algorithm.
Cooling scheme: Tn+1 = α Tn; 0 < α < 1
This exponential cooling scheme was proposed by Kirkpatrick et al. (1983). A lower value of
'α', close to 0, resulted in a faster result but premature convergence. A value of 'α' close to 1
resulted in slow convergence but solutions close to optimal. The following is a comparative
study for three different values of 'α'.
From the studies done, a value of 'α' close to 0, figure 1(a), gave premature convergence.
A value of 'α' close to 1, figure 1(b), resulted in a slower convergence rate, and a value of 'α' equal
to 0.5, figure 1(c), gave a better result as far as computational time and the global minimum
were concerned.
Figure 1(a)-(c). Iteration (y-axis) vs. index of power law as mean of the current iteration
(x-axis) for the three values of 'α'.
Parameter 2: Initial cooling temperature To
The initial cooling temperature To has been taken as the average of four objective function
values, based on four random values of 'n'. It is obvious that for four values of 'n' of 1, 0.99,
0.98, 0.97, the To value will be smaller than the To value obtained from 'n' values of 0.1, 0.11,
0.12, 0.13. The algorithm was tested for both sets of initial temperature, and it was found that,
for the same termination condition, the algorithm terminated earlier for the lower value of To,
while the higher value of To yielded a solution closer to optimal. Thus a higher value of the
initial cooling temperature is preferred.
Parameter 3: Rate of cooling temperature reduction
At the beginning of the algorithm, the cooling temperature was reduced only when the point at
the current iteration 'tk' was accepted. For the sake of curiosity, the cooling temperature was
then reduced after every generation, i.e., even if the point in the current iteration was rejected.
Without changing the stopping criterion, the algorithm gave a solution not very close to
optimal, figure 2(a). This is undesirable, and hence the stopping criterion was reduced. Even
after reducing the stopping criterion by a very large margin, the algorithm led to premature
convergence. The next modification was made to the 'α' value in the cooling scheme: it was
made very close to, but less than, 1, and a value of 'α' equal to 0.99 was chosen. With a minor
reduction in the stopping criterion, the algorithm gave solutions close to optimal and with large
diversity in the result, figure 2(b).
Figure 2(a). Less diversity in the search from the initial to the final solution (0.2 refers to the
initial guess); (b) more diversity in the search from the initial to the final solution (0.2 refers to
the initial guess). Axes: iteration vs. index of power law, n.
Parameter 4: Acceptance of bad solutions
An integral part of the SA algorithm is the acceptance of bad solutions; towards the later
stages of the iterations, the restrictions imposed on accepting bad solutions are increased, as
defined by the cooling scheme. For the sake of exploration, the algorithm was again modified,
only this time the bad solutions were rejected entirely. The result was still a solution close to
optimal in spite of completely rejecting bad solutions. This does not come as a surprise,
because the problem under consideration is unimodal, i.e., it has only one optimum, as
described below.
A multimodal function, given below, was taken (source: ScienceDirect) and solved using
SA. Two cases were targeted, namely SA with bad points rejected completely and SA with
bad points accepted with a certain probability. Kvasnička and Pospíchal (1997) found the
following minima:

g(x) = 0.940249612 + e^(−0.1x²) · sin(10x) · cos(8x)  (1)

x⁽¹⁾ = −0.7844416, g(x⁽¹⁾) = 1.75×10⁻¹⁰ (global optimum)
x⁽²⁾ = −0.4397995, g(x⁽²⁾) = 0.072921 (local optimum)
x⁽³⁾ = −1.1293051, g(x⁽³⁾) = 0.161959 (local optimum)

The above function was taken as the objective function and two cases were studied.
Case I: With bad points rejected completely, the search got stuck in a local optimum
(x = −0.4749).
Case II: With bad points accepted with a certain probability, the search gave a solution near
the global optimum (x = −0.7821).
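The reconstructed test function (the sign convention in Eq. (1) above is an assumption, chosen so that the reported global minimum evaluates to approximately zero) can be checked directly:

```python
import math

def g(x):
    # Multimodal test function; minima reported by Kvasnicka and Pospichal (1997)
    return 0.940249612 + math.exp(-0.1 * x * x) * math.sin(10.0 * x) * math.cos(8.0 * x)

values = [g(-0.7844416),   # ~0       (global optimum)
          g(-0.4397995),   # ~0.0729  (local optimum)
          g(-1.1293051)]   # ~0.1620  (local optimum)
```

Evaluating g at the three reported points reproduces the listed minima, which supports this reading of the garbled equation.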
Parameter 5: Initial guess
The initial guess is of little relevance in this particular framework of the algorithm, as even the
best solution taken as the initial guess requires a significant number of iterations to converge
to the optimal solution.
5. Final results and statistical analysis
The algorithm was run after it was fine-tuned with the modified values from the above case
studies. The following changes were made:
Initial cooling temperature: To = 0.23543
Cooling scheme: Tk+1 = 0.99 Tk
Stopping criterion: while Tk+1 >= 0.0000001
Initial guess (no): 0.55
Best solution: 0.9991
Data were recorded for 20 runs and are presented below.
Average of readings: 0.994
Standard deviation: σ = 0.0037
Variance: 0.0001
No. of runs: 20
Figure 3. Accepted points, i.e., index of power law as mean of the current iteration vs.
iteration (0.55 refers to the initial guess).
The results obtained after making the final modification to the algorithm give solutions which
are very close to the optimal, within reasonable computational time as well. The values
obtained from the 20 runs also show little deviation from the mean, making the algorithm a
good fit for the problem solved here. Figure 3 shows the diversity of the search process.
6. Verification using FLUENT (v14) simulation
The given optimization problem was verified by simulation carried out in ANSYS FLUENT.
The airfoil profile tested was 'n' equal to 1 (i.e., the optimal solution). The mesh file, figure
4(a), was made in FLUENT v14 and the following settings were specified.
Reference values:
M∞: 3
P∞: 101.325 kPa
γ: 1.4
Area (m²): 1
Density (kg/m³): 1.1766
Depth (m): 1
Ratio of specific heats (γ): 1.4
Velocity (m/s): 1041.263
Temperature (K): 300
Pressure (Pa): 101325
Length (m): 1
Enthalpy (J/kg): 844043.4
Coefficient of drag CD from FLUENT simulation: 0.038
Coefficient of drag CD from optimization code: 0.065
Figure 4(a). Mesh generated in FLUENT (v14); the airfoil profile for this mesh is a straight
wedge (optimal). (b) Scaled residual values vs. iteration as plotted in FLUENT (v14).
(c) Contours of pressure coefficient along the surface of the wedge.
7. Conclusion
The results obtained from the optimization algorithm were fine-tuned, and solutions
closer to the global optimum were obtained. The results were then verified via ANSYS
FLUENT simulations. Figure 4(b) represents the variation of residual values with iterations,
and figure 4(c) represents the contours of the pressure coefficient variation along the surface
of the airfoil. The results of the simulations were in agreement with the optimization results.
The same problem could be solved with a different set of constraints, and even for a
three-dimensional body. The methods used are approximate and yield better results in the
hypersonic domain.
It was found (within the time constraints) that no method gives a clear indication of the
optimal values of the operands, as these vary from problem to problem. As Harik (1999)
claims, nobody knows the “optimal” parameter setting for an arbitrary real-world problem. The
concept of “optimization” within an “optimization” thus remains a field to be explored further.
References
Anderson, J. D., “Local Surface Inclination Methods”, in Hypersonic and High Temperature
Gas Dynamics, McGraw-Hill Publishing Company, Singapore, 1989.
Arenas, M. G., et al., "Statistical analysis of the parameters of the simulated annealing
algorithm", IEEE, 2010.
Deepak, N. R., et al., “Evolutionary Algorithm Shape Optimization of a Hypersonic Flight
Experiment Nose Cone”, Journal of Spacecraft and Rockets, 2008.
Foster, N. F. and Dulikravich, G. S., “Three-Dimensional Aerodynamic Shape Optimization
using Genetic and Gradient Search Algorithms”, Journal of Spacecraft and Rockets,
34(1), 1997.
Harik, G. R. and Lobo, F. G., “A parameter-less genetic algorithm”, in Proceedings of the
Genetic and Evolutionary Computation Conference, W. Banzhaf, J. Daida, A. E. Eiben,
M. H. Garzon, V. Honavar, M. Jakiela, R. E. Smith, Eds., Vol. 1, Orlando, Florida, USA:
Morgan Kaufmann, 1999.
Jameson, A., “Aerodynamic shape optimization using the adjoint method”, Lectures at the Von
Karman Institute, Brussels, 2003.
Kirkpatrick, S., et al., “Optimization by Simulated Annealing”, Science, 1983.
Kvasnička, V. and Pospíchal, J., “A hybrid of simplex method and simulated annealing”, 1997.
Palaniappan, K. and Jameson, A., “Bodies having Minimum Pressure Drag in Supersonic
Flow: Investigating Nonlinear Effects”, Journal of Aircraft, 2010.
Yadav, A., Natarajan, G., Kulkarni, V., Sahoo, N. and Dutta, S., “Numerical study of
aerodynamic shape optimization in a supersonic flow”, 2012.
Zingg, D. W., et al., “A comparative evaluation of genetic and gradient-based algorithms
applied to aerodynamic optimization”, 2008.
Parametric Optimization of Compression Molding Process using
Principal Component Analysis
S. P. Deshpande1*, P. J. Pawar2
1 Sanghavi College of Engineering, Nasik
2 K. K. Wagh Institute of Engineering Education and Research, Nasik, Maharashtra, India
*Corresponding author (email: prajodip1@gmail.com)
Compression molding process is a widely accepted plastic molding process due to its
capability to produce complex and large size parts with good surface quality. In the present
work, principal component analysis is applied for multi-objective optimization of
compression molding of transformer housing to achieve the desired dimensional accuracy.
The simultaneous optimization of two quality characteristics namely center distance
variation and thickness variation is considered. The input parameters selected are melting
temperature, holding pressure, holding time, and weight of raw material. The results show
optimum input variable levels and their relative significance on multiple quality
characteristics. The results of optimization obtained using principal component analysis are
verified experimentally.
1. Introduction
Compression molding is one of the original processing methods for manufacturing plastic
parts, developed at the very beginning of the plastics industry. This process is commonly used
for manufacturing electrical parts, flatware, gears, buttons, buckles, knobs, handles, electronic
device cases, appliance housings, and large containers. The main advantages of the
compression molding process include low initial setup costs, fast setup time, the capability to
mold large and intricate parts, good surface finish, less material waste, fewer knit lines and
less fiber-length degradation. Despite these enormous advantages, a big hindrance to the
application of the compression molding process is its lack of dimensional consistency. In
today's competitive scenario, compression molding manufacturers are under constant
pressure to reduce costs and improve quality. The success of the compression molding
process in terms of cost and quality depends on the proper selection of various operating
conditions, such as weight of raw material, preheating temperature, melting temperature,
breathing time, holding pressure, holding time, curing time, post-curing time, post-curing
temperature, etc.
Many researchers (Kim and Im, 1997; Kim et al., 1997; Wakeman et al., 2000; Li et al.,
2004; Onal and Adanur, 2005; Kim et al., 2006; Dumont et al., 2007; Kim et al., 2009; Kim et
al., 2011; Fonseca et al., 2011) have contributed to predicting the effects of various process
parameters on the dimensional accuracy and quality of compression molded products. It is
observed from the literature that researchers have applied mainly the Taguchi method,
various traditional and non-traditional optimization methods, and simulation tools for the
optimization of the process parameters of the compression molding process. However, no
effort has been made so far towards multi-objective optimization of the compression molding
process. In this work, the simultaneous optimization of two quality characteristics, namely
center distance variation and thickness variation in a transformer housing, is considered
using a principal component analysis (PCA) approach.
2. Principal component analysis (PCA)
PCA is a multivariate statistical method that selects a small number of components
to account for the variance of the original multi-response (Antony, 2000). The central idea
behind PCA is to reduce the dimensionality of a data set consisting of a large number of
interrelated
variables, while retaining as much of the variation present in the original data set as possible.
The procedure of PCA is discussed below:
Step 1 Determine the S/N ratios
S/N ratio is the ratio of signal power to the noise power. S/N ratio () can be computed
(1)
mathematically as: 10 log( MSD)
Where, MSD is the Mean square deviation of data of quality characteristics.
Step 2 Determine the normalized S/N ratios
*
(2)
The normalized S/N ratios can be obtained as: X i* ( j ) X i ( j ) X ( j )
X ( j) X ( j)
Where X i* ( j ) is the normalized S/N ratio for jth quality characteristic in ith experimental run,
th
th
X i ( j ) is the S/N ratio for j quality characteristic in j experimental run, X ( j ) is the minimum
and X ( j ) + is the maximum of S/N ratios for jth quality characteristic in all experimental runs.
Step 3 Evaluate the correlation coefficients of responses
The correlation coefficients are calculated as: Rij
*
*
cov[ X i* ( j ), X i* (l )]
X i* ( j ) X i* (l )
*
*
(4)
*
*
Where, cov[ X i ( j ), X i (l )] is the covariance of sequences [ X i ( j ), X i (l )] , X i ( j ) and X i (l )
represents the normalized S/N ratios for the responses j and l respectively. σ is the standard
deviation. The correlation matrix is [Rij] where, i, j =1…m, m being number of responses.
Step 4 Determine the eigen values and the eigenvectors of the correlation matrix
For the covariance or correlation matrix, the eigenvectors correspond to principal components
and the Eigen values to the variance explained by the principal components.
Step 5 Compute the principal components of each response
The eigenvector with the highest eigenvalue corresponds to the first principal component of the data set. The principal components can be computed as:

$p(i)_k = \sum_{j=1}^{m} X_i^*(j)\, v_k(j)$   (5)

where $p(i)_k$ is the $k$th principal component corresponding to the $i$th experimental run and $v_k(j)$ is the $j$th element of the $k$th eigenvector. The total principal component index ($Tp_i$) corresponding to the $i$th experimental run is computed as follows:

$Tp_i = \sum_{k=1}^{m} p(i)_k\, e(k)$   (6)

$e(k) = \dfrac{\mathrm{eig}(k)}{\sum_{k=1}^{m} \mathrm{eig}(k)}$   (7)

where $\mathrm{eig}(k)$ is the $k$th eigenvalue.
Step 6 Determine the optimum level for each parameter
The total principal component index for each experimental run is used to find the average factor
effect at each level. The level that corresponds to the maximum average factor effect is the
optimum level for that parameter.
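The six steps above can be sketched end to end in a few lines. The response data below are illustrative placeholders (two smaller-the-better responses, one observation per run), not the paper's measurements; NumPy is assumed:

```python
import numpy as np

# Hypothetical response data: rows = experimental runs, columns = responses.
y = np.array([[0.04, 0.09],
              [0.03, 0.10],
              [0.08, 0.15],
              [0.12, 0.09]])

# Step 1: smaller-the-better S/N ratio, eta = -10*log10(MSD).
# With one observation per run, MSD = y^2.
sn = -10.0 * np.log10(y ** 2)

# Step 2: normalize each response's S/N ratios to [0, 1] (Eq. 2).
sn_norm = (sn - sn.min(axis=0)) / (sn.max(axis=0) - sn.min(axis=0))

# Steps 3-4: correlation matrix of the normalized S/N ratios and its
# eigen-decomposition (eigh returns eigenvalues in ascending order).
R = np.corrcoef(sn_norm, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)

# Step 5: principal component scores per run (Eq. 5) and the total principal
# component index (Eq. 6), weighting each PC by its variance fraction e(k) (Eq. 7).
pcs = sn_norm @ eigvecs
e = eigvals / eigvals.sum()
tpi = pcs @ e

# Step 6: the level averages of tpi per factor pick the optimum levels.
print(tpi)
```

The run (and hence the combination of factor levels) with the largest total principal component index is preferred.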
3. Application example
To demonstrate the proposed approach, an application example of a transformer housing,
shown in Fig. 1, is considered. The objective is to minimize the deviation in thickness value (Dt)
of 4.00 mm and deviation in centre distance between clamping holes (Dc) of 145.00 mm. Four
important process variables selected are melting temperature, holding pressure, holding time,
and weight of raw material as these process parameters significantly affect the dimensional
stability and quality of the product.
Figure 1. Transformer housing
Three levels each of the four factors were selected, as shown in Table 1. The methodology is
discussed below.
Table 1. Process parameters and their levels

Parameter                    Units   Level 1   Level 2   Level 3
Melting temperature (Tm)     °C      140       150       160
Holding pressure (Ph)        psi     2000      2500      3000
Holding time (Th)            sec     3.5       4.0       4.5
Weight of raw material (w)   grams   227       228       229
3.1 Data collection
In the present study an L27 orthogonal array is used. The experimental set-up used for data
collection is: Machine type/make: BEMCO NM150/2 compression molding machine with a
capacity of 150 tons; Workpiece material: SMC 405; Measuring devices: (a) electronic
scale to measure the weight of raw material, (b) Mitutoyo digital micrometer to measure centre
distance and thickness. The variation in thickness and the variation in centre distance from their
respective basic dimensions are recorded for each experiment and presented in Table 2.
3.2 Selection of optimum parameter levels using PCA
The S/N ratios for the two quality characteristics, i.e. centre distance variation and thickness variation,
are calculated using Eq. (1). The normalized S/N ratios are computed using Eq. (2). The
correlation coefficient between the two responses is then calculated using Eq. (4), and the
correlation matrix is obtained as:

  [ 1      0.322 ]
  [ 0.322  1     ]
Now, the eigenvalues and eigenvectors computed from correlation coefficient matrix are: 1.322,
0.678, and [0.7071, 0.7071], [-0.7071, 0.7071] respectively. From the eigenvectors, the principal
components (p(i)k) are computed using Eq. (5) and are shown in Table 2. The total principal
component index (Tpi) for each experiment is then calculated using Eq. (6) and is also
presented in Table 2. The average factor effect at a particular level is
calculated by taking the average of all total principal component index values for that level. The
average factor effect values are shown in Table 3. The optimum level (indicated by * mark) for a
particular parameter corresponds to the maximum average factor effect value for that parameter.
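The eigen-structure quoted above can be checked directly: for a 2x2 correlation matrix [[1, r], [r, 1]] the eigenvalues are exactly 1 - r and 1 + r. A quick sketch, assuming NumPy:

```python
import numpy as np

# Correlation matrix reported for the two responses (r = 0.322).
R = np.array([[1.0, 0.322],
              [0.322, 1.0]])

# eigh returns eigenvalues in ascending order: 1 - r, then 1 + r,
# with eigenvectors proportional to [1, -1] and [1, 1].
eigvals, eigvecs = np.linalg.eigh(R)
print(eigvals)  # approximately [0.678, 1.322]
```

This matches the eigenvalues 1.322 and 0.678 and the +/-0.7071 eigenvector entries reported in the text.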
Table 2. Data collected through practical experimentation and principal components

Expt. No.  Tm (°C)  Ph (psi)  Th (Sec)  W (gm)  Dt       Dc      PC1     PC2      TP
1          140      2000      3.5       227     0.0413   0.0890  0.5320  0.2040   0.4210
2          140      2500      4         228     0.0342   0.0952  0.5250  0.1410   0.3950
3          140      3000      4.5       229     -0.0833  0.1500  0.1580  0.0410   0.1180
4          140      2000      4         229     0.1233   0.0870  0.3790  0.3790   0.3790
5          140      2500      4.5       228     0.0521   0.0880  0.5030  0.2450   0.4150
6          140      3000      3.5       227     -0.1012  0.0780  0.4650  0.4060   0.4450
7          140      2000      4.5       228     0.0789   0.0890  0.4350  0.3010   0.3890
8          140      2500      3.5       227     0.0499   0.0840  0.5330  0.2620   0.4410
9          140      3000      4         229     0.0321   0.1000  0.5100  0.1060   0.3730
10         150      2000      3.5       227     0.0956   0.0620  0.5920  0.5160   0.5660
11         150      2500      4         228     0.0499   0.0750  0.5910  0.3200   0.4990
12         150      3000      4.5       229     -0.0111  0.1020  0.6580  -0.0630  0.4140
13         150      2000      4         229     0.0657   0.0900  0.4560  0.2680   0.3920
14         150      2500      4.5       228     0.0011   0.0860  1.0930  -0.3220  0.6130
15         150      3000      3.5       227     -0.0965  0.0990  0.3500  0.2760   0.3250
16         150      2000      4.5       228     0.0822   0.1820  0.0610  -0.0610  0.0200
17         150      2500      3.5       227     0.0368   0.1210  0.3910  0.0290   0.2680
18         150      3000      4         229     -0.0178  0.0810  0.7060  0.1260   0.5100
19         160      2000      3.5       227     0.0947   0.1310  0.2090  0.1300   0.1820
20         160      2500      4         228     0.0387   0.1220  0.3790  0.0320   0.2620
21         160      3000      4.5       229     0.0027   0.0480  1.2580  0.1130   0.8700
22         160      2000      4         229     0.1210   0.0630  0.5480  0.5430   0.5460
23         160      2500      4.5       228     0.0068   0.0510  1.0880  0.2200   0.7940
24         160      3000      3.5       227     -0.0011  0.0490  1.3820  -0.0320  0.9020
25         160      2000      4.5       228     0.0510   0.0460  0.8390  0.5750   0.7500
26         160      2500      3.5       227     0.0011   0.0970  1.0310  -0.3840  0.5510
27         160      3000      4         229     -0.0501  0.074   0.5980  0.3280   0.5060
Table 3. Average factor effect at each level for all parameters using PCA

Factors   Level 1   Level 2   Level 3
Tm        0.375     0.401     0.596*
Ph        0.405     0.471     0.496*
Th        0.456     0.429     0.487*
W         0.456     0.460*    0.456

4. Conclusions
In this work an attempt is made to optimize the multiple quality characteristics of the
compression molding process using principal component analysis. The effect of the important
process parameters, namely melting temperature, holding pressure, holding time, and weight
of raw material, on two responses, thickness variation and centre distance variation, is
considered. The optimum parameter levels obtained using the Taguchi method could
improve only one response while seriously hampering the other. Principal component
analysis is therefore applied to determine the optimum values for both responses
simultaneously. The optimum values of the control factors for overall improvement in the multiple
quality characteristics are: melting temperature = 160 °C, holding pressure = 3000 psi, holding
time = 4.5 seconds, and weight of raw material = 228 grams. The variation in thickness and the
variation in centre distance are both observed to be 0.03 mm. The contributions of the
control factors to the dimensional accuracy are observed as: melting temperature
23.19%, holding pressure 3.51%, holding time 1.34%, and weight of raw material 0.01%.
References
Antony, J. (2000) ‘Multi-response optimization in industrial experiments using Taguchi’s quality
loss function and principal component analysis’, Quality and Reliability Engineering
International, Vol. 16, pp. 3–8.
Dumont, P., Orgeas, L., Favier, D., Pizette, P., Venet, C. (2007) ‘Compression moulding of SMC:
In situ experiments, modelling and simulation’, Composites Part A, Vol. 38, pp. 353–368.
Fonseca, A., Inacio, N., Kanagaraj, S., Oliveira, M. S., Simoes, J. A. (2011) ‘The use of Taguchi
technique to optimize the compression moulding cycle to process acetabular cup
components’, Journal of Nanoscience and Nanotechnology, Vol. 11, No.6, pp. 5334-5339.
Kim K. T., Jeong J. H., Im Y. T. (1997) ‘Effect of molding parameters on compression molded
sheet molding compound parts’, Journal of Materials Processing Technology, Vol. 67, No.1-3,
pp.105-111.
Kim, M. S., Lee, W., Han, W. S., Voutrin, A. (2011) ‘Optimization of location and dimension of
SMC precharge in compression molding machine process’, Computers and Structures, Vol.
89, No. 15-16, pp.1523-1534.
Kim, M. S., Lee, W., Woo, S.H., Voutrin, A., Chung H. P. (2009) ‘Thickness optimization of
composite plates by Box’s complex method considering the process and material parameters
in compression molding of SMC,’ Composites Part A: Applied Science and Manufacturing,
Vol. 40, No. 8, pp.1192-1198.
Kim, M. S., Woo, S.H., Voutrin, A., Lee, W. (2006) ‘Numerical modeling of compression moulding
for improvement of mechanical properties’, Information Control Problems in Manufacturing,
Vol. 12, No.1, pp. 819-824.
Kim, S., Im Y. (1997) ‘Three-dimensional finite-element analysis of the compression molding of
sheet molding compound’, Journal of Materials Processing Technology, Vol. 67, No.1–3, pp.
207-213
Li, W., Jin Y.F., Lv, X., Kua, C.H. (2004) ‘A Study on Computer Simulation of Plastic Lens
Molding Process,’ Materials Science Forum, Vol. 471-472, pp. 490-493.
Onal, L. and Adanur, S. (2005) ‘Optimization of compression molding process in laminated
woven composites’, Journal of Reinforced Plastics and Composites, Vol. 24, No. 7, pp. 775-780.
Wakeman, M. D., Rudd, C. D., Cain, T. A., Brooks, R., Long, A. C. (2000) ‘Compression molding
of glass and polypropylene composites macro- and micro-mechanical properties, 4:
Technology demonstrator – a door cassette structure’, Composites Science and Technology,
Vol. 60, No. 10, pp. 1901-1918.
Selection of Material for Press Tool using Graph Theory and
Matrix Approach (GTMA)
S.R.Gangurde*, Sudish Ray
K.K. Wagh Institute of Engineering Education & Research, Nasik
*Corresponding author (e-mail: gangurdesanjay@rediffmail.com)
Materials selection is a difficult and subtle task due to the immense number of different
available materials. Materials play a crucial role throughout the design and
manufacturing process. In this paper, the Graph Theory and Matrix Approach (GTMA) is
applied for decision making in the presence of multiple attributes for press tool material selection. A
‘press tool material selection index’ is proposed to evaluate and rank the press tool
materials. The index is obtained from a press tool material selection attributes function,
which in turn is obtained from the press tool material selection attributes digraph. The digraph is
developed considering the press tool material selection attributes and their relative importance.
The approach guides the selection process and helps a decision maker solve the selection problem.
1. Introduction
An ever-increasing variety of materials is available today, each having its own
characteristics, applications, advantages, and limitations. When selecting materials for
engineering designs, a clear understanding of the functional requirements of each individual
component is required, and various important criteria or attributes need to be considered. A material
selection attribute is defined as an attribute that influences the selection of a material for a given
application. The selection decisions are complex, as material selection is more challenging today
than ever. There is a need for simple, systematic, and logical methods or mathematical tools to guide
decision makers in considering a number of selection attributes and their interrelations. Thus,
efforts need to be extended to identify the attributes that influence material selection for a given
engineering design, to eliminate unsuitable alternatives, and to select the most appropriate
alternative using simple and logical methods. Materials are sometimes chosen by trial and error
or simply on the basis of what has been used before; while this approach frequently works, the
selection of a material for a specific purpose is in general a lengthy and expensive process. Various
multi-attribute decision-making (MADM) methods and different optimization tools have been proposed
by past researchers to aid the material selection process. Decision analysis is concerned with
situations where a decision maker has to choose the best alternative among several
candidates while considering a set of conflicting criteria. In order to evaluate the overall
effectiveness of the candidate alternatives and select the best material, the primary objective of
an MADM-based material selection approach is to identify the relevant material selection
criteria for a particular application, assess the information relating to those criteria, and develop
methodologies for evaluating those criteria in order to meet the designer's requirements. A decision-making
problem is the process of finding the best option from all of the feasible alternatives.
2. Literature review
The objective of any material selection procedure is to identify the appropriate selection
attributes and obtain the most appropriate combination of attributes in conjunction with the real
requirements. Various approaches have been proposed in the past to help address the issue of
material selection. Shanian and Savadogo (2006) introduced a new approach for the use of the
ELECTRE model in material selection. By producing a material selection
decision matrix and a criteria sensitivity analysis, ELECTRE was applied to obtain a more
precise material selection for a particular application, including a logical ranking of the considered
materials. Rao and Padmanabhan (2007) presented a methodology for selection of a rapid
prototyping (RP) process that best suits the end use of a given product or part using graph theory
and matrix approach. The index is obtained from an RP process selection attributes function,
obtained from the RP process selection attributes digraph. The digraph is developed considering
the RP process selection attributes and their relative importance for the considered application. Rao
and Padmanabhan (2008) introduced a methodology for the selection of the best product end-of-life
(EOL) scenario using digraph and matrix methods. An ‘EOL scenario selection index’ is proposed
to evaluate and rank the alternative product EOL scenarios. Chatterjee et al. (2009)
introduced a methodology to solve the materials selection problem using two of the most promising
multi-criteria decision-making (MCDM) approaches and compared their relative performance for a
given material selection application. The first MCDM approach is VIKOR, a compromise ranking
method, and the other is ELECTRE, an outranking method. Maniya and Bhatt (2010)
implemented a novel tool to help the decision maker select a proper material that will meet
all the requirements of the design engineers. The preference selection index (PSI) method is a novel
tool to select the best alternative from given alternatives without deciding the relative importance
between attributes. This method can be applied successfully to any number of alternatives. Rao
and Patel (2010) proposed a novel MADM method for material selection for a considered design
problem. The method considers the objective weights of importance of the attributes as well as
the subjective preferences of the decision maker to decide the integrated weights of importance
of the attributes. Jahan et al. (2011) proposed a new version of the VIKOR method, which covers
all types of criteria with an emphasis on compromise solutions. The proposed comprehensive
version of VIKOR also overcomes the main error of traditional VIKOR by a simpler approach. The
suggested method can enhance the exactness of material selection results in different applications
and can help designers and decision makers reach more robust decisions, especially in
biomedical material selection applications. Chatterjee et al. (2011) proposed new
MCDM methods, i.e. the complex proportional assessment (COPRAS) and evaluation of mixed data
(EVAMIX) methods, for materials selection. These two methods are used to rank the alternative
materials, for which several requirements are considered simultaneously. Singh and Rao (2011)
proposed a hybrid decision-making method of graph theory and matrix approach (GTMA) and
analytical hierarchy process (AHP) for selection of an appropriate alternative in the industrial
environment. Chatterjee and Chakraborty (2012) noted that a systematic and efficient approach
towards material selection is necessary in order to select the best alternative for a given engineering
application, and focused on the application of four preference ranking-based multi-criteria
decision-making (MCDM) methods for solving a gear material selection problem. These are the extended
PROMETHEE II (EXPROM2), complex proportional assessment of alternatives with gray
relations (COPRAS-G), ORESTE, and operational competitiveness rating analysis (OCRA)
methods.
3. Methodology
The main steps of the methodology are as follows:
Step-I
Identify the press tool material selection attributes for the given product or part and short-list the
press tool materials on the basis of the identified attributes satisfying the requirements. A
quantitative or qualitative value, or its range, may be assigned to each identified attribute as a
limiting or threshold value for its acceptance for the considered application. A press tool
material with each of its attributes meeting the criterion may be short-listed.
Step-II
1. After short-listing the press tool materials, find the relative importance (rij) relations between
the attributes and normalize the values of the attributes (Ai) for the different alternatives.
2. Develop the press tool material selection attributes digraph considering the identified selection
attributes and their relative importance. The number of nodes must be equal to the number of
attributes considered in Step-I above. The magnitudes of the edges and their directions are
determined from the relative importance between the attributes.
3. Develop the press tool material selection attributes matrix for the press tool material selection
attributes digraph. This is an N x N matrix with diagonal elements Ai and off-diagonal
elements rij.
4. Obtain the press tool material selection attributes function for the press tool material selection
attributes matrix.
5. Substitute the values of rij and the normalized values of Ai, obtained in step 1, into the
selection attributes function to evaluate the selection index for the considered press tool material.
6. Arrange the press tool materials in descending order of the press tool material selection index. The
press tool material having the highest value of the selection index is the best
choice for the given engineering product or part.
Step-III
Take a final decision keeping in view the practical considerations. All possible constraints likely to
be experienced by the user are looked into during this stage. These include constraints such as
availability, management constraints, political constraints, economic constraints, etc. However,
a compromise may be made in favor of a press tool material with a higher press tool material
selection index.
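In GTMA formulations, the selection attributes function of the N x N matrix is commonly evaluated as the matrix permanent (a determinant-like expansion with all signs positive). A minimal sketch of that computation via Ryser's formula is below; the 3x3 matrix is a made-up illustration, not the case-study data:

```python
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

# Toy attributes matrix: diagonal = normalized attribute values Ai,
# off-diagonal = relative importance rij (illustrative numbers only).
D = [[0.9,   0.455, 0.66],
     [0.545, 0.8,   0.32],
     [0.34,  0.68,  0.7]]
print(permanent(D))
```

Ranking the alternatives then amounts to computing this index once per candidate material and sorting in descending order.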
4. Case study
A case study is now presented to demonstrate and validate the methodology of press tool
material selection using the graph theory and matrix approach.
Step-I
In the present work, the attributes considered are: non-deforming properties (1), safety in
hardening (2), toughness (3), resistance to the softening effect of heat (4), wear resistance (5),
decarburization risk during heat treatment (6), brittleness (7), and hardness (Rc) (8), as shown in
Table 4.1.
Table 4.1 Data of press tool material selection attributes

Press tool material   1   2   3   4   5   6   7   8
W1                    L   F   G   L   G   L   M   63
O1                    G   G   G   L   F   L   L   63
A2                    B   B   G   F   G   L   L   62
D2                    B   B   F   F   G   L   L   62
S1                    F   G   B   F   G   M   L   57
T1                    G   G   G   B   B   L   L   64
M1                    G   G   G   B   G   L   L   66
H12                   G   G   G   B   G   M   L   52

W1 = water hardening tool steels; O1 = oil hardening tool steels; A2 = air hardening die steels;
D2 = high-carbon high-chromium die steels; S1 = shock-resisting tool steels; T1 = tungsten high
speed steels; M1 = molybdenum high speed steels; H12 = hot working steels.
L = Low (0.335); F = Fair (0.410); M = Medium (0.500); G = Good (0.745); B = Best (0.865)

The objective data of all attributes are given in Table 4.2, obtained from the 11-point fuzzy
conversion scale for the press tool material selection attributes.
Table 4.2 Objective data of the press tool material selection attributes

Press tool material   1      2      3      4      5      6      7      8
W1                    0.335  0.410  0.745  0.335  0.745  0.335  0.500  63
O1                    0.745  0.745  0.745  0.335  0.410  0.335  0.335  63
A2                    0.865  0.865  0.745  0.410  0.745  0.335  0.335  62
D2                    0.865  0.865  0.410  0.410  0.745  0.335  0.335  62
S1                    0.410  0.745  0.865  0.410  0.745  0.500  0.335  57
T1                    0.745  0.745  0.745  0.865  0.865  0.335  0.335  64
M1                    0.745  0.745  0.745  0.865  0.745  0.335  0.335  66
H12                   0.745  0.745  0.745  0.865  0.745  0.500  0.335  52
Step-II
1. The quantitative values of the press tool material selection attributes given in Table 4.2 are to
be normalized. Safety in hardening, toughness, resistance to the softening effect of heat, wear
resistance, and hardness are beneficial attributes, for which higher values are desirable; their
normalized values are given in Table 4.3. Non-deforming properties, decarburization risk during
heat treatment, and brittleness are considered non-beneficial attributes, for which lower values
are desirable.
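The normalization rule just described can be read off the numbers in Table 4.3: beneficial attributes are divided by the column maximum, and for non-beneficial attributes the column minimum is divided by each value. A small sketch (the hardness column is from Table 4.1; the function name is ours):

```python
# Normalize one attribute column:
# beneficial -> v / max(column), non-beneficial -> min(column) / v.
def normalize(column, beneficial):
    if beneficial:
        top = max(column)
        return [v / top for v in column]
    low = min(column)
    return [low / v for v in column]

hardness = [63, 63, 62, 62, 57, 64, 66, 52]    # beneficial attribute 8, Table 4.1
print(round(normalize(hardness, True)[0], 6))  # 63/66 = 0.954545, as in Table 4.3
```

Either way the normalized values lie in (0, 1], with 1 marking the best alternative on that attribute.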
Table 4.3 Normalized data (Ai) of the press tool material selection attributes

Press tool material   1         2         3         4         5         6     7     8
W1                    1         0.473988  0.861272  0.387283  0.861272  1     0.67  0.954545
O1                    0.449664  0.861272  0.861272  0.387283  0.473988  1     1     0.954545
A2                    0.387283  1         0.861272  0.473988  0.861272  1     1     0.939394
D2                    0.387283  1         0.473988  0.473988  0.861272  1     1     0.939394
S1                    0.817073  0.861272  1         0.473988  0.861272  0.67  1     0.863636
T1                    0.449664  0.861272  0.861272  1         1         1     1     0.969697
M1                    0.449664  0.861272  0.861272  1         0.861272  1     1     1
H12                   0.449664  0.861272  0.861272  1         0.861272  0.67  1     0.787879
The relative importance of attributes (rij) is assigned the values given in Table 4.4, which are
obtained from the 11-point scale.
Table 4.4 Relative Importance Matrix (rij)

Attributes  1        2        3        4        5        6        7        8
1           -        0.455    0.66125  0.5775   0.5      0.4175   0.37625  0.8275
2           0.545    -        0.31625  0.55875  0.335    0.54     0.6225   0.8575
3           0.33875  0.68375  -        0.53875  0.55875  0.35375  0.66375  0.8275
4           0.4225   0.44125  0.46125  -        0.56125  0.58125  0.49625  0.76625
5           0.5      0.665    0.44125  0.43875  -        0.64125  0.4175   0.71375
6           0.5825   0.46     0.64625  0.41875  0.35875  -        0.455    0.73625
7           0.62375  0.3775   0.33625  0.50375  0.5825   0.545    -        0.71375
8           0.1725   0.1425   0.1725   0.23375  0.28625  0.26375  0.28625  -

2. The press tool material selection attributes digraph gives a graphical representation of the
attributes and their relative importance for quick visual appraisal. As the number of nodes and
their interrelations increases, the digraph becomes complex. In such a case, the visual analysis of
the digraph is expected to be difficult and complex. To overcome this constraint, the digraph is
represented in a matrix form.
Figure 4.1 Press tool material selection attributes graph

The matrix D for the press tool material selection attributes digraph:

          1    2    3    4    5    6    7    8
      1   A1   r12  r13  r14  r15  r16  r17  r18
      2   r21  A2   r23  r24  r25  r26  r27  r28
      3   r31  r32  A3   r34  r35  r36  r37  r38
D =   4   r41  r42  r43  A4   r45  r46  r47  r48
      5   r51  r52  r53  r54  A5   r56  r57  r58
      6   r61  r62  r63  r64  r65  A6   r67  r68
      7   r71  r72  r73  r74  r75  r76  A7   r78
      8   r81  r82  r83  r84  r85  r86  r87  A8
3. The matrix D for the press tool material selection attributes digraph shown in Fig. 4.1 is
represented as above, where Ai is the value of the ith attribute represented by node ni, and rij is
the relative importance of the ith attribute over the jth, represented by the edge eij.
4. The press tool material selection index (PTMSI) is calculated using the values of Ai and rij for
each press tool material alternative. The PTMSI values for the different press tool materials are
shown in Table 4.5 in descending order.
Table 4.5 Press tool material selection index

Press tool material   PTMSI value
T1                    260.612
M1                    254.59
S1                    224.05
A2                    221.58
H12                   218.904
W1                    205.013
D2                    199.934
O1                    194.08
5. Conclusion
From the results of applying the GTMA method, it is concluded that, among the eight material
alternatives evaluated over the considered attributes, press tool material T1 is the most preferred
choice and O1 is the last choice. The GTMA method suggests the ranking
T1-M1-S1-A2-H12-W1-D2-O1. T1 is considered to be the best general-purpose high-speed tool
steel because of the comparative ease of its machining and heat treatment. It combines a high
degree of cutting ability with relative toughness, and it has higher toughness and better wear
resistance than the other tool steels considered.
References
Chatterjee Prasenjit and Chakraborty Shankar, Material selection using preferential ranking
methods. Materials and Design, 2012, 35,384–393
Chatterjee Prasenjit, Athawale Manikrao Vijay and Chakraborty Shankar Selection of materials
using compromise ranking and outranking methods, 2009, 30, 4043–4053
Chatterjee Prasenjit, Athawale Manikrao Vijay, Chakraborty Shankar, Materials selection using
complex proportional assessment and evaluation of mixed data methods. Materials and
Design, 2011, 32,851–860
Hoffman, G. Edward. Fundamentals of Tool Design. II edition
Jahan Ali, Mustapha Faizal, Md Yusof Ismail, Sapuan S.M., Bahraminasab Marjan, A
comprehensive VIKOR method for material selection. Materials and Design, 2011, 32, 1215–
1221
Maniya Kalpesh and Bhatt, M.G. A selection of material using a novel type decision-making
method: Preference selection index method. Materials and Design, 2010, 31, 1785–1789
Rao, R.V. and Padmanabhan, K.K. Rapid prototyping process selection using graph theory and
matrix approach, 2007, 194, 81–88
Rao, R.V. and Padmanabhan, K.K. Selection of best product end-of-life scenario using digraph
and matrix methods, 2008, 1–18
Rao, R.V. and Patel, B.K. A subjective and objective integrated multiple attribute decision making
method for material selection Materials and Design, 2010, 31, 4738–4747
Rao, R.V. Decision Making in the Manufacturing Environment Using Graph Theory & Fuzzy
Multiple Attribute Decision Making Method. Springer- Verlag, London 2007
Shanian, A. and Savadogo, O. A material selection model based on the concept of
multiple attribute decision making. Materials and Design, 2006, 27, 329–337
Singh, Dinesh and Rao, R. V. A hybrid multiple attribute decision making method for solving
problems of industrial environment.2011, 2,631–644
Smith, D. Die Design Handbook. The Society of Manufacturing Engineers.Michigan, 1990
Optimum Design of PID Controller using Teaching-Learning-Based Optimization Algorithm
R. V. Rao1*, G. G. Waghmare2
1S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
2K.K. Wagh Institute of Engineering Education and Research, Nashik, Maharashtra, India
*Corresponding author (e-mail: ravipudirao@gmail.com)
This paper presents the performance of Teaching-Learning-Based Optimization (TLBO)
on the design of a Proportional Integral Derivative (PID) controller for obtaining optimal
control. The tuning performance of TLBO is investigated and compared with other
population-based optimization algorithms. TLBO is a recently proposed population-based
algorithm which simulates the teaching-learning process of the classroom. The
algorithm requires only the common control parameters and does not require any
algorithm-specific control parameters. Experimental results show that the TLBO algorithm
is successfully applied to PID tuning, improving the performance of the controller, and
shows a better tuning capability than other population-based optimization algorithms for
this control application.
Keywords: PID controller; Teaching-learning-based optimization; Optimal control
1. Introduction
Proportional integral derivative (PID) control is one of the earliest control strategies. Its
early implementation was in pneumatic devices, followed by vacuum and solid-state analog
electronics, before arriving at today's digital implementation in microprocessors. It has a
simple control structure which is understood by plant operators and which they find
relatively easy to tune. Since many control systems using PID control have proved
satisfactory, it still has a wide range of applications in industrial control. Since many process
plants controlled by PID controllers have similar dynamics, it has been found possible to set
satisfactory controller parameters from less plant information than a complete mathematical
model. These techniques came about because of the desire to adjust controller parameters in
situ with a minimum of effort, and also because of the possible difficulty and poor cost-benefit
of obtaining mathematical models. Herreros et al. (2002) addressed a multi-objective
PID controller design problem, in which the designer has to adjust the parameters of the PID
controller such that the feedback interconnection of the plant and the controller satisfies the
specifications. Gaing (2004) presented a novel design method for determining the PID
controller parameters using the PSO method; the method integrates the PSO
algorithm with a new time-domain performance criterion into a PSO-PID controller. Luo et
al. (2007) proposed the optimization of PID controller parameters based on the Artificial
Fish Swarm Algorithm (AFSA). Wang et al. (2008) proposed an improved PSO method that
divides the searching process into two steps and applied it to a Proportional Integral Derivative (PID)
controller model. Dey and Mudi (2009) proposed an improved auto-tuning scheme for Ziegler-
Nichols (ZN) tuned PID controllers (ZN-PIDs), which usually produce excessively large
overshoots, not tolerable in most situations, for high-order and nonlinear processes.
Luo and Chen (2010) applied a global search optimization method, called the random virus
algorithm, to the optimization of the three parameters (KP, KI, KD) of the PID controller of an
electro-hydraulic servo system. Rahimian and Raahemifar (2011) presented a method for
determining the PID controller parameters using the PSO algorithm. The method integrated
the PSO algorithm with a new proposed time-domain cost function into a PSO-PID
controller.
There are many nature-inspired optimization algorithms, such as the Genetic
Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Ant Colony
Optimization (ACO), Harmony Search (HS), the Grenade- Explosion Method (GEM), etc.
working on the principle of different natural phenomena. These algorithms have been applied
to many engineering optimization problems and proven effective in solving these problems.
Meta-heuristic algorithms have evolved considerably and have been used in many
domains. However, their parameter setting still remains a serious problem that influences
their efficiency and behaviour: many optimization methods require algorithm-specific parameters
that affect the performance of the algorithm. Rao et al. (2011, 2012) introduced the
Teaching-Learning-Based Optimization (TLBO) algorithm, an innovative optimization algorithm
inspired by a natural phenomenon: it mimics the teaching-learning process in a class between
the teacher and the students (learners). Unlike other optimization techniques, TLBO does not
require any algorithm parameters to be tuned, thus making the implementation of TLBO
simpler. This algorithm requires only the common control parameters and does not require
any algorithm-specific control parameters. Hence in this paper the application of TLBO
algorithm to design the PID controller for obtaining optimal control is considered.
The next section gives a brief description of the Teaching-Learning-Based Optimization
(TLBO) algorithm.
2. Teaching-learning-based optimization (TLBO)
TLBO is a teaching-learning-process-inspired algorithm proposed by Rao et al. (2011,
2012), based on the influence of a teacher on the output of the learners in a class. The
algorithm describes two basic modes of the learning: (i) through teacher (known as teacher
phase) and (ii) interacting with the other learners (known as learner phase). In this
optimization algorithm a group of learners is considered as population and different subjects
offered to the learners are considered as different design variables of the optimization
problem and a learner’s result is analogous to the ‘fitness’ value of the optimization problem.
The best solution in the entire population is considered as the teacher. The design variables
are actually the parameters involved in the objective function of the given optimization
problem and the best solution is the best value of the objective function. The working of TLBO
is divided into two parts, ‘Teacher phase’ and ‘Learner phase’. The readers may refer to Rao
et al. (2011, 2012) for more details on the working of the TLBO algorithm. For the convenience
of the readers, the actual TLBO code is available at https://sites.google.com/site/tlborao/.
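The teacher and learner phases described above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not the authors' released code; the sphere objective and all numeric settings below are chosen only for demonstration.

```python
import random

def tlbo(objective, bounds, pop_size=20, iterations=100, seed=1):
    """Minimise `objective` over box `bounds` with a basic TLBO loop."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(iterations):
        # Teacher phase: move each learner towards the best solution (the teacher),
        # away from the class mean, scaled by a random teaching factor TF in {1, 2}
        teacher = pop[cost.index(min(cost))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice([1, 2])
            cand = clip([pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            c = objective(cand)
            if c < cost[i]:                 # greedy acceptance
                pop[i], cost[i] = cand, c
        # Learner phase: each learner interacts with a random classmate and
        # moves towards the better of the two
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            while j == i:
                j = rng.randrange(pop_size)
            sign = 1.0 if cost[i] < cost[j] else -1.0
            cand = clip([pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d])
                         for d in range(dim)])
            c = objective(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    best = cost.index(min(cost))
    return pop[best], cost[best]
```

Note that only the common control parameters (population size and number of iterations) appear; no algorithm-specific parameter needs tuning, which is the property the paper emphasizes.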
3. Experiments on design optimization of PID controller
A simple Automatic Voltage Regulator (AVR) system contains five basic components:
amplifier, exciter, generator, sensor and comparator, as depicted in Fig. 1. The
response of the AVR system without control is shown in Fig. 2. The aim of this study is to apply
the TLBO algorithm to the AVR system in order to optimize the control parameters
of the PID controller, and to evaluate its tuning performance for the optimal AVR system.
Figure 1. Real model of AVR system (Gozde and Taplamacioglu, 2011).
As the cost function to be minimized for determining the optimum values of the gains of the
controller, the integral of time-weighted squared error (ITSE) is preferred, defined as

ITSE = ∫₀^∞ t · e(t)² dt        (1)

where e(t) is the error between the reference voltage and the terminal voltage.
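Numerically, the ITSE criterion can be evaluated from a sampled error signal. The sketch below is an illustration (not the paper's code) using the trapezoidal rule:

```python
def itse(times, errors):
    """Integral of time-weighted squared error, ITSE = integral of t*e(t)^2 dt,
    approximated from sampled data by the trapezoidal rule."""
    total = 0.0
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        f0 = times[k - 1] * errors[k - 1] ** 2   # integrand at left sample
        f1 = times[k] * errors[k] ** 2           # integrand at right sample
        total += 0.5 * (f0 + f1) * dt
    return total
```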
Figure 2. Response of AVR system without control
The transfer function model of the entire AVR system is shown in Fig. 3. This transfer function
gives the ratio of the incremental changes of the output (terminal voltage) to those of the input
(reference voltage) of the system. The transfer functions of the individual components are
given in Table 1.
Figure 3. Transfer function model of AVR system (Gozde and Taplamacioglu, 2011).
4. Experimental results and discussion
The transfer function model of the AVR system built in Simulink is shown in Fig. 4. The transfer
functions and parameter limits of the AVR system are listed in Table 1, as given by Gozde and
Taplamacioglu (2011).
Figure 4. Transfer function model of AVR system using Simulink
Table 1. Transfer function and parameter limits of AVR system

Component      | Transfer function | Parameter limits                             | Used parameter values
PID controller | Kp + Ki/s + Kd·s  | 0.2 ≤ Kp, Ki, Kd ≤ 2.0                       | Kp, Ki, Kd = optimum values
Amplifier      | Ka/(1+sTa)        | 10 ≤ Ka ≤ 40, 0.02 ≤ Ta ≤ 0.1                | Ka = 10, Ta = 0.1
Exciter        | Ke/(1+sTe)        | 1 ≤ Ke ≤ 10, 0.4 ≤ Te ≤ 1                    | Ke = 10, Te = 0.1
Generator      | Kg/(1+sTg)        | Kg depends on load (0.7-1.0), 1.0 ≤ Tg ≤ 2.0 | Kg = 10, Tg = 0.1
Sensor         | Ks/(1+sTs)        | 0.001 ≤ Ts ≤ 0.06                            | Ks = 10, Ts = 0.1
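The closed loop of Fig. 3 can be reproduced with a simple forward-Euler simulation. The sketch below is illustrative only: the gains and time constants are assumed here as a common choice in the AVR literature (Ka = 10, Ta = 0.1, Ke = 1, Te = 0.4, Kg = 1, Tg = 1, Ks = 1, Ts = 0.01); substitute the values of Table 1 as appropriate.

```python
def simulate_avr(kp, ki, kd, t_end=8.0, dt=0.0005):
    """Unit-step response of a PID-controlled AVR loop, forward-Euler integration.
    Returns the final terminal voltage and the accumulated ITSE cost."""
    # Assumed benchmark component values (not taken verbatim from Table 1)
    ka, ta, ke, te, kg, tg, ks, ts = 10.0, 0.1, 1.0, 0.4, 1.0, 1.0, 1.0, 0.01
    va = ve = vt = vs = 0.0          # amplifier, exciter, generator, sensor outputs
    integ, prev_e = 0.0, 1.0         # PID integrator; e(0) = Vref - 0 = 1
    vref, t, cost = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = vref - vs                            # comparator
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt   # PID law
        prev_e = e
        va += dt * (ka * u - va) / ta            # amplifier  Ka/(1+sTa)
        ve += dt * (ke * va - ve) / te           # exciter    Ke/(1+sTe)
        vt += dt * (kg * ve - vt) / tg           # generator  Kg/(1+sTg)
        vs += dt * (ks * vt - vs) / ts           # sensor     Ks/(1+sTs)
        t += dt
        cost += t * (vref - vt) ** 2 * dt        # ITSE of Eq. (1)
    return vt, cost

# TLBO gains from Table 2
vt_final, cost = simulate_avr(1.9502, 0.498, 0.403)
```

With the PID integrator in the loop, the terminal voltage settles at the reference value for a step input, which the simulation reproduces.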
Table 2. Optimum parameters of the PID controller

Algorithm                           | Kp     | Ki     | Kd     | ITSE
ABC (Gozde and Taplamacioglu, 2011) | 1.6524 | 0.4083 | 0.3654 | 0.0294
PSO (Gozde and Taplamacioglu, 2011) | 1.7774 | 0.3827 | 0.3184 | 0.0276
DE (Gozde and Taplamacioglu, 2011)  | 1.9499 | 0.4430 | 0.3427 | 0.0260
TLBO                                | 1.9502 | 0.498  | 0.403  | 0.0258
The optimum parameters of the PID controller for the different algorithms are given in Table
2. A common platform is provided by maintaining identical common control parameters for the
different algorithms considered for the comparison. The number of iterations and the
population size are taken as 50 and 150, respectively, for the TLBO algorithm, and the number of runs is 3.
The TLBO code is implemented on a laptop having an Intel Core i3 2.53 GHz processor with 1.85
GB RAM. It can be seen from Table 2 that the ITSE value for the TLBO algorithm is better
than those for the ABC, PSO and DE algorithms. The voltage changing curves Vt(s) for the given
transfer function are shown in Fig. 5. It can be observed from Fig. 5 that the response of the
voltage changing curve using the TLBO algorithm is better than the responses obtained using the
other algorithms considered.
Figure 5. Voltage changing curves of the AVR system
5. Conclusion
In this study, the TLBO algorithm is applied to PID controller design for obtaining optimal control.
The tuning performance of the PID controller using the TLBO algorithm is investigated and
compared with the ABC, PSO and DE algorithms. The ITSE value for the TLBO algorithm is better
than those for the ABC, PSO and DE algorithms. Therefore, it can be stated that the TLBO
algorithm has been successfully applied to the AVR system for improving the performance of the
controller and shows a better tuning capability than the other similar population-based
optimization algorithms for this control application. TLBO will be tried on more complex
problems in the near future.
References
Dey, C., Mudi, R.K., 2009. An improved auto-tuning scheme for PID controllers. ISA
Transactions, 48, 396-409.
Gaing, Z., 2004. A particle swarm optimization approach for optimum design of PID
controller in AVR system. IEEE Transactions on Energy Conversion, 19(2).
Gozde, H., Taplamacioglu, M.C., 2011. Comparative performance analysis of artificial bee
colony algorithm for automatic voltage regulator (AVR) system. Journal of the Franklin
Institute, 348, 1927-1946.
Herreros, A., Baeyens E., Peran J. R., 2002. Design of PID-type controllers using
multiobjective genetic algorithms. ISA Transactions, 41, 457–472.
Luo, Y., Chen, Z., 2010. Optimization for PID control parameters on hydraulic servo control
system based on random virus algorithm. Advanced Computer Control, 3, 432-435.
Luo, Y., Zhang J., Li X., 2007. The Optimization of PID Controller Parameters Based on
Artificial Fish Swarm Algorithm, International Conference on Automation and Logistics,
IEEE, 1058 – 1062.
Rahimian, M.S., Raahemifar, K., 2011. Optimal PID controller design for AVR system using
particle swarm optimization algorithm, Electrical and Computer Engineering (CCECE),
IEEE, 337-340.
Rao, R.V., Savsani, V.J., Vakharia, D.P., 2011. Teaching-learning-based optimization: A
novel method for constrained mechanical design optimization problems. Computer Aided
Design 43 (3), 303-315.
Rao, R.V., Savsani, V.J., Vakharia, D.P., 2012. Teaching-learning-based optimization: An
optimization method for continuous non-linear large scale problems. Information
Sciences, 183(1), 1-15.
Rao, R.V., Savsani, V.J., 2012. Mechanical design optimization using advanced optimization
techniques. Springer-Verlag London.
Wang, Y.B., Peng, X., Wei, B.Z., 2008. A new particle swarm optimization based auto-tuning
of PID controller. International Conference on Machine Learning and Cybernetics, IEEE,
1818-1823.
An Overview of Applications of Intelligent Optimization
Approaches to Power Systems
S.S. Gokhale1*, V.S. Kale2
1 Yeshwantrao Chavan College of Engineering, Nagpur-441110, Maharashtra, India
2 Visvesvaraya National Institute of Technology, Nagpur-440010, Maharashtra, India
*Corresponding author (e-mail: sanjeev_gokhale@rediffmail.com)
Optimization techniques have been used in power systems since their inception. They
have been applied across a broad spectrum, ranging from power system design and
planning to economic power dispatch and protection. This paper
presents an overview of intelligent optimization techniques and their applications to
power systems.
Index Terms: Power systems, optimization techniques, metaheuristic methods
1. Introduction
Electrical power systems have become highly intricate structures. Their planning,
generation scheduling and subsequent expansion have made designing and operating them
optimally a formidable problem. Maintaining the quality of power is equally important.
Moreover, the multiple objectives desired imply that simple analytical methods of
optimization cannot be used beyond a certain point. Advances in intelligent control have
changed the scenario. This paper presents an overview of intelligent approaches to
optimization and their applications to power systems.
The paper is organized as follows: Section 2 describes the classical optimization
techniques, Section 3 discusses the power system applications of the metaheuristic
approaches, and Section 4 deals with fuzzy logic and neural networks, their relation with
optimization and some of their applications. Conclusions are drawn in Section 5.
2. Classical optimization
Optimization is the act of obtaining the best results under given circumstances.
Optimization can be traced back to the days of Newton, Lagrange and Cauchy, when the
calculus of variations was developed. In so far as electrical engineering is concerned, the
theory of optimal control was developed. The Kuhn-Tucker algorithm is still used for optimal
control of combined heat and power systems [Aihua Wang (2011)].
An optimization problem can be categorized into two classes: Linear
Programming (LP) and Non-Linear Programming (NLP). Linear programming is applicable for
solving problems in which the objective functions and the constraints appear as linear
functions of the decision variables. Linear programming has been used for the optimization of
electrical generation schemes. It has also been used for online relay coordination
for adaptive protection [B. Chattopadhyay et.al. (1996)].
If the objective function and the constraints are not linear functions of the decision
variables, the problem is termed a non-linear programming problem. The same problem can
change from a non-linear one to a linear one if some constraints are relaxed or some
assumptions are made. In many situations, decisions have to be made sequentially at different
points in space and at different levels. Such problems are called sequential decision
problems. Dynamic programming, a method first developed by Bellman, can be applied for
solving this class of problems. In power systems, the operation of hydroelectric-thermal
power systems is such a sequential process, and dynamic programming has been applied to it
[Dahlin, E.B. et.al. (1965)].
There are cases where all variables in an optimization problem are constrained to take only
integer values. For example, the number of electrical generators cannot be taken as,
say, 1.7. Such a constraint makes it an Integer Programming problem (IP). When restrictions
are placed only on some variables to have integer values, it is called a Mixed Integer
Programming problem (MIP). Stochastic optimization deals with optimization involving random
variables.
However, the classical methods are analytical and use techniques of differential
calculus to find optimal solutions. As practical problems involve objective functions that are
not continuous and/or differentiable, classical optimization techniques have limited scope in
practical applications.
3. Metaheuristic methods
Heuristic search methods are mostly intuitive and do not have much theoretical
support. They can get stuck in local optima. Metaheuristic methods can overcome this
limitation. Most metaheuristic methods are founded on some biological behaviour of animals.
Metaheuristic methods make few or no assumptions about the problem being optimized and
can search very large spaces of candidate solutions. However, they do not guarantee that an
optimal solution will always be found. They usually begin from a set of points and move
towards a better solution, guided by heuristic and empirical rules.
Evolutionary algorithms are stochastic search methods. The initial solutions undergo
probabilistic operations like mutation, selection, etc., and evolve into better solutions. Genetic
algorithms, which belong to the evolutionary class, have been used to estimate faulted sections
accurately in the power system distribution network and restore power quickly [P.P. Bedekar
et.al. (2011)]. A protection system contains many relays. GAs have also been used for
coordination of overcurrent relays and directional overcurrent relays. The time coordination of
these relays is important. Relay coordination avoids maloperation and outages. In addition,
if the relays are coordinated optimally, it results in an increased speed of operation. A hybrid
GA-NLP approach has been used for determining optimum values of the Time Multiplier
Setting (TMS) and Plug Setting (PS) of overcurrent relays [P.P. Bedekar et.al. (2011)].
An evolutionary algorithm which uses a stochastic parallel method for the variables has
also been used to this end [C.W. So, K.K. Li (2000)]. It can find the optimum relay
settings with maximum satisfaction of the coordination constraints. The authors claim that the
algorithm optimizes the relay grading margins and minimizes the coordination constraint
violations. The Strength Pareto Evolutionary Algorithm (SPEA) and the Non-dominated
Sorting Genetic Algorithm II (NSGA-II) have been applied to solve the optimal multiobjective
dispatch of hydroelectric generating units. The objectives are maximization of the
system efficiency and minimization of the startup/shutdown cost of generating units.
Experimental results show that these algorithms are able to provide several alternative
schedules to the decision maker [Villasanti, C.M. et.al. (2004)].
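As a concrete illustration of the evolutionary machinery referred to above, a minimal real-coded genetic algorithm can be sketched as follows. This is a generic sketch on a toy objective, not any of the cited implementations; all parameter values are illustrative.

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=30, generations=80,
                      pc=0.9, pm=0.1, seed=4):
    """Real-coded GA: tournament selection, blend crossover, gaussian mutation,
    with one elite individual carried over each generation. Minimises `fitness`."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [fitness(x) for x in pop]
    for _ in range(generations):
        def select():                       # binary tournament
            a, b = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[a] if cost[a] < cost[b] else pop[b]
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < pc:           # blend (interpolating) crossover
                a = rng.random()
                child = [a * u + (1 - a) * v for u, v in zip(p1, p2)]
            else:
                child = p1[:]
            if rng.random() < pm:           # gaussian mutation of one gene
                d = rng.randrange(dim)
                lo, hi = bounds[d]
                child[d] += rng.gauss(0.0, 0.1 * (hi - lo))
            children.append(clip(child))
        children[0] = pop[cost.index(min(cost))][:]   # elitism
        pop = children
        cost = [fitness(x) for x in pop]
    best = cost.index(min(cost))
    return pop[best], cost[best]
```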
Power System stabilizers control the excitation system of generators and improve the
dynamic performance by damping system oscillations. The Population Based Incremental
Learning (PBIL) and the Breeder Genetic Algorithm (BGA) have been used for Design and
Implementation of Power System Stabilizers [Severus Sheetekela et.al. (2009)]. The authors
found that their performance was better than that of the conventional power system stabilizers.
A constrained genetic algorithm based load flow has been developed for the load flow of
systems containing Unified Power Flow Controllers (UPFC) [K.P. Wong et.al. (2003)]. The
performance of the program on the standard IEEE 30-node system with UPFCs was studied.
The performance of the algorithm was found to be superior to that of the Newton-Raphson
method, and convergence was fast.
Genetic algorithms have been used for the optimum placement of phasor measurement
units, such that they give complete observability of the system [D. Dua et.al. (2008)].
That the estimation of frequency, phase and amplitude of voltage can be done using a
continuous Genetic Algorithm (CGA) has been shown by El-Naggar et al. (2000). The
authors compared the continuous genetic algorithm with a binary genetic algorithm
(BGA) and found that the CGA requires less storage and is faster than the binary GA.
The Particle Swarm Optimization (PSO) algorithm is computationally inexpensive in
terms of memory and speed, simple, easy to implement and robust. It has found wide
application. PSO has been applied for tuning the controller gains (Kp and Ki) of the
automatic generation control system [Abdel Magid et.al. (2003)].
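The basic global-best PSO update that underlies such gain-tuning applications can be sketched as below. The shifted quadratic objective merely stands in for a controller-performance cost such as the AGC criterion; that substitution, and all numeric settings, are assumptions for illustration.

```python
import random

def pso(objective, bounds, n_particles=15, iterations=60, seed=3):
    """Global-best PSO with inertia weight; minimises `objective` over `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pcost = [objective(p) for p in pos]
    g = pcost.index(min(pcost))
    gbest, gcost = pbest[g][:], pcost[g]         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = objective(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

For a real gain-tuning task, `objective` would simulate the closed loop for the candidate (Kp, Ki) pair and return the chosen performance index.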
The ant colony optimization algorithm solves problems which reduce to finding good paths
through graphs. It mimics the behaviour of ants. A hybrid ant colony technique has found
application in reactive power tracing in deregulated power systems [Hamid, Z. et.al. (2012)].
The authors propose optimization-assisted reactive power tracing via a hybrid ant colony
technique. The Blended Crossover Continuous Ant Colony Optimization (BX-CACO) technique
was used. It was tested on the IEEE 14-bus system, and the results have shown its
performance to be better than that of the proportional sharing principle and circuit theory.
The cuckoo search algorithm is inspired by the obligate brood parasitism of the cuckoo
bird. In the cuckoo search optimization method, each egg in a nest represents a solution and
a cuckoo egg represents a new solution. The aim is to use the new and potentially better
solution (the cuckoo's egg) to replace the not-so-good solution (the other bird's egg).
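The egg/nest analogy can be turned into code as follows. This is a deliberately simplified sketch: Gaussian steps are used in place of the Lévy flights of the standard algorithm, and every parameter value is illustrative.

```python
import random

def cuckoo_search(objective, bounds, n_nests=15, iterations=400, pa=0.25, seed=5):
    """Simplified cuckoo search (Gaussian steps stand in for Levy flights).
    Minimises `objective` over box `bounds`."""
    rng = random.Random(seed)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
    cost = [objective(x) for x in nests]
    for _ in range(iterations):
        # A cuckoo lays a new egg: a random step from a randomly chosen nest
        i = rng.randrange(n_nests)
        egg = clip([v + rng.gauss(0.0, 0.05) * (hi - lo)
                    for v, (lo, hi) in zip(nests[i], bounds)])
        ce = objective(egg)
        j = rng.randrange(n_nests)          # host nest where the egg is dropped
        if ce < cost[j]:                    # a better egg replaces the host's egg
            nests[j], cost[j] = egg, ce
        # A fraction pa of the worst nests is abandoned and rebuilt at random
        order = sorted(range(n_nests), key=cost.__getitem__, reverse=True)
        for k in order[:int(pa * n_nests)]:
            nests[k] = [rng.uniform(lo, hi) for lo, hi in bounds]
            cost[k] = objective(nests[k])
    best = cost.index(min(cost))
    return nests[best], cost[best]
```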
For exploiting the potential of distributed generation, optimal allocation and sizing
of distributed generation are necessary. The cuckoo search algorithm has been implemented
for optimal location and sizing of distributed generation in a radial system [Tan, W.S.
et.al. (2012)]. The objective was to minimize the total real power losses, improve voltage
stability within the system and at the same time improve the voltage profile within the voltage
constraints. Two case studies were carried out on the 69-bus radial system and the results
were compared to the standard genetic algorithm and particle swarm optimization. The
authors claim that cuckoo search outperformed them in terms of solution quality and standard
deviation. Cuckoo search has also been applied for optimal capacitor placement in distribution
systems [Arcanjo, D.N. et.al. (2012)]. The problem is a mixed non-linear integer optimization
problem where the aim is to minimize losses in distribution systems. The algorithm was
evaluated on the IEEE 16-bus, IEEE 33-bus and IEEE 69-bus distribution systems and found
to be effective.
Simulated annealing is a stochastic relaxation technique derived from the annealing
process used in metallurgy, where the process starts at a high temperature and the
temperature is then lowered slowly while maintaining thermal equilibrium. It is a powerful tool
for solving non-convex optimization problems. The simulated annealing technique has been
applied to transmission system expansion planning [R. Romero et.al. (1996)].
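The acceptance rule that implements "slow cooling while maintaining thermal equilibrium" is the Metropolis criterion: a worse move of size delta is accepted with probability exp(-delta/T). A minimal sketch, with illustrative parameters:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995,
                        steps=4000, seed=7):
    """Minimise `cost` starting from x0; `neighbour(x, rng)` proposes a move."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    temp = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        cy = cost(y)
        d = cy - c
        # Always accept improvements; accept worse moves with prob exp(-d/T)
        if d <= 0 or rng.random() < math.exp(-d / temp):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        temp *= cooling     # slow geometric cooling schedule
    return best, best_c
```

For a combinatorial task such as expansion planning, `neighbour` would instead toggle one candidate line in the plan.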
The Tabu search algorithm is similar to simulated annealing and is applied to
combinatorial planning problems. However, Tabu search generates many mutated solutions
and moves to the solution with the lowest energy. Meter placement plays an important role in
transmission security control. To enhance the topological observability of the network, the
meters should be placed at optimal locations for power system state estimation.
Mathematically, the problem results in a complicated combinatorial optimization problem that
is difficult to solve for large systems. Tabu search has been used effectively for solving this
problem [Mori, H. et.al. (1999)]. The authors demonstrated the effectiveness of this method on
the IEEE 57- and 118-node systems.
Long-term transmission network expansion planning has also been approached using
the Tabu search algorithm. The authors tested it on two cases and found that Tabu
search is a robust technique [E.L. Da Silva et al. (2001)].
Among other algorithms, the Artificial Bee Colony (ABC) algorithm is based on the
intelligent foraging behaviour of honey bee swarms. A modified honey bee optimization
algorithm has been used to solve dynamic optimal power flow considering generator
constraints [T. Niknam et.al. (2011)].
4. Fuzzy logic and neural networks
Fuzzy logic and neural networks, viewed as branches of artificial intelligence, have
now become disciplines of their own. They are used to design optimal controllers. Fuzzy logic
controllers have been designed for power system stabilizers and the excitation control of
generators [Hiyama, T. et.al. (1996)]. Genetically optimized neuro-fuzzy power system
stabilizers for damping modal oscillations of power systems have been devised [Shamsollahi,
P. et.al. (1997)]. Fuzzy logic has also found applications in the optimal sizing of hybrid
solar-wind power systems. Fuzzy controllers for flexible AC transmission devices have been
developed. Fuzzy theory has also been applied for the detection and classification of partial
discharges [Contin, A. et al. (2002)].
Supervised training of a multilayered perceptron can be viewed as an optimization
problem (Haykin, 1999): adjusting the synaptic weights amounts to optimizing the cost
function. To calculate the connection weights, i.e., to train the network, evolutionary
algorithms like genetic algorithms, Differential Evolution (DE) and particle swarm optimization
have been used. Cuckoo search has been applied to train neural networks with improved
performance. It has also been successfully applied to train spiking neural models.
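The point that weight adjustment is cost-function optimization can be seen on the smallest possible example: a single sigmoid neuron trained by gradient descent on squared error. This is an illustrative sketch, not any of the cited implementations; the OR-gate data set is chosen only because it is linearly separable.

```python
import math
import random

def predict(w, x):
    """Sigmoid neuron output for 2 inputs plus bias."""
    z = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, epochs=2000, lr=0.5, seed=2):
    """Fit the weights by stochastic gradient descent on E = 0.5*(y - t)^2."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(3)]   # two weights + bias
    for _ in range(epochs):
        for x, target in samples:
            y = predict(w, x)
            g = (y - target) * y * (1.0 - y)          # dE/dz through the sigmoid
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            w[2] -= lr * g                            # bias input is 1
    return w

# OR gate: linearly separable, learnable by one neuron
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_neuron(data)
```

Replacing the gradient step with a population-based update (GA, DE, PSO or cuckoo search evaluating candidate weight vectors on the same cost) gives exactly the evolutionary training schemes mentioned above.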
Neural networks have found application for online neural training of power system
stabilizers. Numerous applications of fuzzy logic and neural networks have been developed.
5. Conclusion
The above references show that intelligent optimization has been used for relay
coordination, power system stabilizers, optimal power flow, meter placement, etc. Apart from
the above-mentioned applications, metaheuristic algorithms have found a wide range of
applications in maintenance planning, network reconfiguration for loss minimization,
pollution dispatch of thermal power plants and power system planning, to name a few [M.O.
Grond et.al. (2012)]. This leads one to conclude that evolutionary algorithms have been
applied successfully in many areas and will undoubtedly be applied to many more problems in
power systems in the future.
References
Abdel Magid, Abido, M.A, AGC Tuning of Interconnected reheat thermal system with Particle
Swarm Optimization, International Conference on Electronic Circuits and Systems, Vol.1,
Dec.2003, pp 376-379.
Aihua Wang, Optimal control of combined heat and power systems using Kuhn Tucker
algorithm, Intelligent Control and Information Processing Conference, Harbin China,
2011 Vol.2, pp 72-77.
Arcanjo, D.N., Pereira, J.L.R., Oliveira, D.J., Peres, E., Cuckoo search optimization technique
applied to the capacitor placement on distribution system problem, IEEE/IAS Conference on
Industry Applications, Fortaleza, 5-7 Nov. 2012, pp. 1-6.
B. Chattopadhyaya, M.S. Sachdev and T.S. Sidhu, An Online relay coordination algorithm for
adaptive protection using Linear Programming technique, IEEE Transactions on Power
Delivery, Jan.1996. Vol.11, No.1, pp165-173
C.W. So, K. K. Li, Time co-ordination Method for power system protection by evolutionary
algorithm, IEEE Transactions on Industry applications, September /October 2000. Vol.
36, No.5, pp 1235-1240.
Contin.A, Cavallini. A; Montanari, G,C, Pasini.G, Puletti, Digital Detection and fuzzy
classification of Partial Discharge signals, IEEE Transactions on Dielectrics and Electric
Insulation 2002, Vol.92002, pp 335-348.
D.Dua, S.Dambhare, R.K.Gajbhiye and S.A.Soman, Optimal Multistage scheduling of PMU
placement -An ILP approach, IEEE Transactions on Power Delivery, 2008, Vol23.pp
1812-1820.
Dahlin E.B, Shen D, W.C, Applications of dynamic programming to optimization of
hydroelectric/steam power system operation, Proceedings of the Institution of Electrical
Engineers, Vol.112, 1965, pp. 2255-2260.
E.L .Da Silva, J,M,A, Ortiz, G.C.de Oliveira and S. Binato, Transmission Network Expansion
Planning under a Tabu search approach, IEEE Transactions on Power Systems,
Feb.2001, vol16(1), pp.62-68
Hamid. Z, Musirin I, Rahim M.N.A, Kamari N.A.M, Hybrid Ant Colony Technique for reactive
Power tracking in Deregulated Power System, Innovation Management and Technology
Research International Conference 2012, pp. 700-705.
Haykin Simon, Neural Networks, Second edition, Prentice Hall, 1999.
Hiyama.T, Oniki S, Nagashima.H, Evaluation of advanced fuzzy logic Power System
Stabilizers on analog network simulator and actual installation on hydro-generators, IEEE
Transactions on Energy Conversion Vol.2 1996, pp 125-131.
K.M.EL-Naggar and H.K.M.Youssef, A Genetic based algorithm for frequency relaying
applications, Electric Power Systems research, 2000, Vol.55, pp 173-178.
K.P.Wong, J Yuryevich and A. Li, Evolutionary programming based load flow algorithm for
Systems containing Unified power flow controllers, IEE Proceedings, Generation,
Transmission, Distribution, July 2003, Vol.150,No.4, pp 441-446.
M.O. Grond, N.H. Luong, J. Morren, J.G. Slootweg, Multi-objective optimization techniques
and applications in electric power systems, Universities Power Engineering
Conference (UPEC), London, U.K., 4th-7th September 2012.
Mori H, Sone Y, Tabu search based meter placement for Topological Observability in Power
System State Estimation, IEEE Transmission and Distribution Conference,New
Orleans,11-16 April 1999,pp 172-177.
P.P. Bedekar, S.R, Bhide, Optimum coordination of directional overcurrent relays using the
hybrid GA-NLP approach, IEEE Transactions on Power Delivery, Jan.2011, Vol.26
No.1,pp 109-119.
P.P. Bedekar, S.R.Bhide, V.S.Kale. Fault section estimation in power system using Hebb's
rule and continuous Genetic Algorithms, Electric Power and Energy Systems,(2011), pp
457-465.
R. Romero, R. A. Gallego and A. Monticelli, Transmission system Expansion Planning by
Simulated Annealing, IEEE Transactions on Power Systems, Feb.1996, Vol.11(1),pp
364-369.
Rao S.S, Engineering Optimization theory and applications, Third Edition, New Age
International, 1998.
Severus Sheetekela et.al, Design Implementation of Power System Stabilizers based on
Evolutionary Algorithms, IEEE AFRICON , Nairobi,Kenya,23-25 Sept.2009, pp 1-6
Shamsollahi.P, Malik O.P, An adaptive PSS using On-line trained neural networks, IEEE
Transactions on Energy Conversion, Vol.12,1997, pp382-387.
T.Niknam, M.R. Narimani, J. Aghaei, Modified Honey Bee Optimization to solve dynamical
power flow considering generator constraints, IET Generation, Transmission ,Distribution
Journal, Vol.5, 2010, pp 989-1002.
Tan,W.S, Hassan M.Y, Majid M.S, Rehman H.A, Allocation and Sizing of DG using Cuckoo
Search Algorithm. IEEE International Conference on Power and Energy Kinabalu
2012,pp 133-138.
Urdaneta A.J,H Restrepo, S.Marquez and J. Sanchez, Optimal Coordination of Directional
relays in Interconnected Power Systems, IEEE Transactions on Power Delivery, July
1988, Vol.3 No.3, pp903-911.
Villasanti C.M, von Lucen, Baran B, Dispatch of Hydroelectric generating Units Using
Multiobjective Evolutionary algorithms, Transmission and Distribution Conference.
IEEE/PES 2004, pp 929-934.
Intelligent Modelling and Optimization of Laser Trepan Drilling of
Titanium Alloy Sheet
Md Sarfaraz Alam*, Avanish Kumar Dubey
Motilal Nehru National Institute of Technology, Allahabad-211004, Uttar Pradesh, India
*Corresponding author (e-mail: sarfaraz45@gmail.com)
Nowadays laser machining has become an attractive machining process for difficult-to-cut
materials like ceramics, composites and superalloys. Titanium alloys, especially Ti-6Al-4V
(grade 5), are most widely used in different technologically advanced industries due to their
superior performance characteristics such as high strength and stiffness at elevated
temperatures, high strength-to-weight ratio and high corrosion resistance. Laser trepan drilling
(LTD), being thermal and non-contact in nature, has the ability to produce micro
dimensions with the required level of accuracy. However, laser-drilled holes are inherently
associated with a number of defects such as non-circularity of the hole, spatter thickness and
hole taper. The present paper investigates the laser trepan drilling (LTD) process
performance during trepanning of titanium alloy (Ti-6Al-4V) by modeling and optimization
of the quality characteristic hole taper (HT). A hybrid approach of artificial neural network
(ANN) and genetic algorithm (GA) has been proposed for modeling and optimization. The
verification results are in close agreement with the optimization results.
Keywords: ANN, GA, HT and LTD.
1. Introduction
Nowadays laser drilling is finding increasingly widespread application in industry.
Laser beam machining is based on the conversion of electrical energy into light energy and then
into thermal energy to remove material from the workpiece. Material is removed by
focusing the laser beam onto the work material, melting and vaporizing the unwanted material to
create a hole. There are two types of laser drilling: trepan drilling and percussion drilling. Trepan
drilling involves cutting around the circumference of the hole to be generated, whereas
percussion drilling is carried out by utilizing a focused laser spot to heat, melt and vaporize the
target material such that a desired hole is formed through the workpiece with no relative
movement of the laser or work material (Dubey and Yadava, 2007; Rajurkar et al., 2002).
A number of researchers have performed experimental studies to investigate the
process of laser percussion drilling. Tongyu and Guoquan (2008) performed a study to
investigate the relationship of the laser beam parameters (energy, power, pulse width, pulse
frequency) with the geometrical quality characteristics of the hole and to investigate the
heat-affected zone in laser drilling of high carbon steel. Ghoreishi et al. (2002) employed a
statistical model to analyze and compare hole taper and circularity in laser percussion drilling of
stainless steel and mild steel. Benyounis and Olabi (2008) did a comprehensive literature review
of the applications of design of experiments, evolutionary algorithms and computational networks
to the optimization of different welding processes through mathematical models. According to
their review, there was considerable interest
among researchers in the adoption of response surface methodology (RSM) and artificial
neural network (ANN) to predict responses in the welding process. For a smaller number of
experimental runs, they noted that RSM was better than ANN and genetic algorithm (GA) in the
case of low-order non-linear behavior of the response data. In the case of highly non-linear
behavior of the response data, ANN was better than the other techniques. In several references,
the applicability and superiority of the ANN method of analysis have been reported (Sarkar et al.,
2006; Fausett, 1994; Haykin, 2002).
353
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
Titanium and its alloys (mainly Ti-6Al-4V) are widely used in technologically
advanced industries such as aerospace, marine, chemical, food processing and medical, due to
their superior performance characteristics such as high strength and stiffness at elevated
temperatures, high strength-to-weight ratio, high corrosion resistance, fatigue resistance, and
the ability to withstand moderately high temperatures without creeping (Biffi et al., 2011). More
recently, owing to its biocompatibility, high durability, and good mechanical integrity, in particular
for enhancing implant stability and the strength of the bone/implant interface, titanium
micromachining has also been investigated (Nayak et al., 2008, 2010; Oliveira et al., 2009;
Bereznai et al., 2003; Yang et al., 2009). Rao et al. (2005) used nitrogen (N2), argon (Ar) and
helium (He) for the pulsed laser cutting of 1 mm pure titanium sheet. They found straight and
parallel cuts with Ar and N2 assist gases, while the use of He gave a wavy cut surface. Most of
the previous work related to hole drilling used the percussion drilling process, in which, with an
intense laser burst, the hole size equals the size of the beam and is varied by focusing. The
present study focuses on the alternative, trepan drilling. This paper reports the GA-based
optimization of the hole geometrical quality characteristic, hole taper, in the pulsed Nd:YAG
laser trepanning of titanium alloy sheet. The motivation for the investigation is the fact that
titanium alloys are being increasingly used in different industries, and engineers in these
industries are trying to obtain the best hole qualities in these materials in laser drilling. In this
investigation, a Ti-6Al-4V (titanium alloy grade 5) sheet has been selected because it is known
for its exceptional performance characteristics and is one of the most used titanium alloys. The
reported research shows that poor qualities are obtained with air or nitrogen assist gases due to
their low thermal conductivity and high chemical reactivity at elevated temperatures, while the
use of costlier inert gases may further increase the cutting cost. Therefore, the aim of the
present research is to obtain good quality trepanned holes using N2 as the assist gas. ANN has
been applied for the modeling of hole taper with the help of data obtained by the L27
orthogonal array experimentation. The hybrid ANN-GA approach has been applied for the
modeling and optimization of hole taper. The predicted optimum results have been verified by
confirmation tests.
2. Experimental setup and design of experiments
The experiments have been performed on a 200 W pulsed Nd:YAG laser cutting system with
a CNC work table. The assist gas used is nitrogen, passed through a nozzle of 1 mm
diameter, which remains constant throughout the experiments. The focal length of the lens is 50
mm and the stand-off distance is 1 mm. Titanium alloy (Ti-6Al-4V) sheet of thickness 1.4 mm
is used as the work material. Pulse width (or pulse duration), pulse frequency, assist gas
pressure and cutting speed have been selected as the input process parameters (control
factors). An exhaustive pilot experimentation has been performed in order to decide the range of
each control factor for complete trepanning. The different control factors and their levels are
shown in Table 1. Holes of 1 mm diameter are made with two repetitions for each experimental
run. The hole diameters
at the entrance and exit were measured at six orientations at an interval of 30°. The diameters
are measured using an optical microscope at 10X magnification. The quality characteristic or
response selected for the analysis is the hole taper, calculated by the following formula:

Hole taper α = [(d_f)_entrance − (d_f)_exit] / (2t)

(since tan α ≈ α for small values of α), where (d_f)_entrance and (d_f)_exit are the mean Feret's
diameters at the hole entrance and exit, and t is the drilled hole depth.
The total number of experiments can be substantially reduced with the help of a well-designed
experimental plan without affecting the accuracy of the experimental study of a manufacturing
process. Taguchi suggested that it is better to make the process robust rather than the
equipment and machinery, by nullifying the effects of variations through selection of appropriate
parameter levels. Taguchi proposed properly designed experimental matrices known as
orthogonal arrays (OAs) to conduct the experiments. In the present research work
four control factors with three levels each have been considered. An L27 OA has been used for
higher resolution (Ross, 1996).
Table 1. Control factors and their levels

Symbol   Factor                    Level 1   Level 2   Level 3
X1       Pulse width (ms)          0.8       1.2       1.6
X2       Pulse frequency (Hz)      13        17        21
X3       Gas pressure (kg/cm2)     6         8         10
X4       Trepanning speed (mm/s)   0.1       0.2       0.3

3. Methodology
3.1 Artificial neural network (ANN)
ANN is an information processing paradigm inspired by biological nervous systems such as the
brain. In a neural network, a large number of highly interconnected processing elements
(neurons) work together and, like people, learn from experience. In a biological system, learning
involves adjustments to the synaptic connections between neurons; the same is true for ANNs
(Lamba, 2008). In the network, each neuron receives the total input from all of the neurons in
the preceding layer as

net_q = Σ_{r=1}^{N} w_qr X_r + b_q    (1)

where net_q is the total or net input and N is the number of inputs to the qth neuron in the
forward layer; w_qr is the weight of the connection to the qth neuron in the forward layer from
the rth neuron in the preceding layer; X_r is the input from the rth neuron in the preceding layer;
and b_q is the bias of the qth neuron. A neuron produces its output (out_q) by processing the
net input through an activation function Ғ. The log-sigmoid and pure linear functions chosen in
this study are, respectively,
out_q = Ғ(net_q) = 1 / (1 + e^(−net_q))    (2)

and

out_q = net_q = Σ_{r=1}^{N} w_qr X_r + b_q    (3)
In the calculation of the connection weights, often known as network training, the weights are
given quasi-random initial values. They are then iteratively updated until they converge, using
the gradient descent method, which updates the weights so as to minimize the mean square
error between the network output and the training data set. For simultaneous optimization of
more than one quality characteristic, it is sometimes desirable to normalize the quality
characteristics. So the training data set, i.e. the experimental values of the quality
characteristics, has been normalized using the following formula:

x_ni(k) = y_i(k) / max y_i(k)    (4)

where x_ni(k) is the normalized value of the kth response during the ith observation and
max y_i(k) is the maximum value of y_i(k) for the kth response.
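Equations (1)-(4) describe a standard feed-forward pass with normalized training targets. A minimal sketch in Python (with illustrative weights and inputs, not the trained values from this work) could look like:

```python
import math

def forward(X, W_hidden, b_hidden, w_out, b_out):
    """Forward pass of a small network: log-sigmoid hidden layer (Eqs. 1-2)
    followed by a pure-linear output neuron (Eq. 3)."""
    hidden = []
    for w_q, b_q in zip(W_hidden, b_hidden):
        net_q = sum(w_qr * x_r for w_qr, x_r in zip(w_q, X)) + b_q  # Eq. (1)
        hidden.append(1.0 / (1.0 + math.exp(-net_q)))               # Eq. (2)
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out        # Eq. (3)

def normalize(y):
    """Scale each response by its maximum value (Eq. 4)."""
    y_max = max(y)
    return [yi / y_max for yi in y]
```

For instance, with a single hidden neuron whose weights are all zero, the sigmoid outputs 0.5 regardless of the input, so `forward([1, 2, 3, 4], [[0, 0, 0, 0]], [0], [2.0], 1.0)` returns 2.0.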
3.2 Genetic algorithm (GA) for optimization
Genetic algorithms (GAs) are global optimization techniques that are quite suitable for
non-linear optimization problems. GA is based on Darwin's principle of "survival of the fittest".
The algorithm starts with the creation of a random population. The individuals with the best
fitness are selected to form mating pairs, and a new population is then created through the
processes of cross-over and mutation. The new individuals are again tested for their fitness,
and this cycle is repeated until some termination criteria are satisfied (Lamba, 2008).
4. Modelling
An optimal neural network architecture has been used for modeling the normalized hole taper
(NHT). The network for NHT consists of one input, one hidden and one output layer. The input
and output layers have four neurons and one neuron respectively. The neurons in the input
layer correspond to pulse width, pulse frequency, gas pressure and trepanning speed; the
output layer corresponds to NHT. The hidden layer has five neurons. The activation functions
used for the hidden layer and output layer were log-sigmoid and pure linear respectively. In this
work, the commercially available software package MATLAB was used for training the ANN.
The values of the weights and biases after the network was fully trained are shown in Table 2.
Table 2. Final values of weights and biases for NHT

Weights to hidden layer from input layer:
  122.703    -28.307     16.969    -33.677
   -4.8487   -22.6453   -32.7136   -10.3398
    2.8113    -2.9929     5.8978     0.53677
    3.9119     3.3126     0.68229    0.29932
   -2.7571     2.4243    -5.7577    -0.61036

Bias to hidden layer: -81.901, 58.7437, -1.1352, -4.0345, 0.56399

Weights to output layer: 0.3354, -0.44801, 43.5783, -2.3246, -105.1382

Bias to output layer: 46.4105
So, in mathematical form, the ANN model for NHT can be represented as follows:

NHT = Σ_{q=1}^{5} w_q y_q + b_o    (5)

where w_q and b_o are the weights and bias to the output layer and y_q is the input to the
output layer from the qth hidden neuron, given by

y_q = 1 / (1 + exp(−(Σ_{r=1}^{4} w_qr X_r + b_q))),   q = 1, 2, …, 5

The values of w_q, b_o, w_qr and b_q are shown in Table 2. It is evident that the ANN
prediction is in good agreement with the experimental results. It is found that the ANN, with a
mean square error of 0.000037%, appears to constitute a workable model for predicting the
characteristics under a given set of input parameters for LTD.
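Using the trained weights and biases listed in Table 2, Eq. (5) can be evaluated directly. Note that the paper does not state how the inputs were scaled before training, so feeding raw parameter values, as in this sketch, may not reproduce the reported NHT values:

```python
import math

# Weights and biases from Table 2 (rows: hidden neurons q = 1..5;
# columns: inputs X1..X4).
W_QR = [
    [122.703,  -28.307,   16.969,  -33.677],
    [ -4.8487, -22.6453, -32.7136, -10.3398],
    [  2.8113,  -2.9929,   5.8978,   0.53677],
    [  3.9119,   3.3126,   0.68229,  0.29932],
    [ -2.7571,   2.4243,  -5.7577,  -0.61036],
]
B_Q = [-81.901, 58.7437, -1.1352, -4.0345, 0.56399]
W_Q = [0.3354, -0.44801, 43.5783, -2.3246, -105.1382]
B_O = 46.4105

def nht(X):
    """Eq. (5): NHT = sum_q w_q * y_q + b_o, with y_q the log-sigmoid
    hidden-neuron output of Eq. (1)."""
    y = [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, X)) + b)))
         for row, b in zip(W_QR, B_Q)]
    return sum(w * yq for w, yq in zip(W_Q, y)) + B_O

# Reported optimum setting: 0.8 ms, 21 Hz, 6 kg/cm2, 0.1 mm/s.
prediction = nht([0.8, 21.0, 6.0, 0.1])
```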
5. Genetic algorithm based optimization
The objective function of the optimization problem can be stated as below:

Find: X1, X2, X3 and X4

Minimize: F = Σ_{q=1}^{5} w_q y_q + b_o    (6)

subject to the ranges of the process input parameters:

0.8 ≤ X1 ≤ 1.6;   13 ≤ X2 ≤ 21;   6 ≤ X3 ≤ 10;   0.1 ≤ X4 ≤ 0.3
The critical parameters of the GA are the size of the population, the cross-over rate, the
mutation rate, and the number of generations. After trying different combinations of GA
parameters, a population size of 20, cross-over rate of 0.8, mutation rate of 0.01 and 51
generations have been used in the present study. The objective function in Eq. (6) has been
solved without any constraint. The generation-fitness graph is shown in Fig. 5.1. The fitness
function is optimized when the mean curve converges to the best curve after 11 generations.
The corresponding values of pulse width, pulse frequency, gas pressure and trepanning speed
have been found to be 0.8 ms, 21 Hz, 6 kg/cm2 and 0.1 mm/s.
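A minimal sketch of this GA loop, using the population size, cross-over rate, mutation rate, generation count and parameter bounds stated above, but with a simple hypothetical placeholder objective standing in for the trained ANN model (so it should not be expected to reproduce the reported optimum):

```python
import random

BOUNDS = [(0.8, 1.6), (13.0, 21.0), (6.0, 10.0), (0.1, 0.3)]  # X1..X4 ranges

def objective(x):
    # Placeholder for the ANN objective of Eq. (6): a convex surrogate whose
    # minimum lies at the lower corner of each range (hypothetical choice).
    return sum((xi - lo) ** 2 for xi, (lo, _) in zip(x, BOUNDS))

def ga(pop_size=20, pc=0.8, pm=0.01, generations=51, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                      # fitness evaluation
        parents = pop[: pop_size // 2]               # select the fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)            # mating pair
            if rng.random() < pc:                    # arithmetic cross-over
                w = rng.random()
                child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            else:
                child = a[:]
            for i, (lo, hi) in enumerate(BOUNDS):    # mutation within bounds
                if rng.random() < pm:
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

best = ga()
```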
6. Conclusions
The optimization of laser trepan drilling of Ti-6Al-4V using a hybrid approach of artificial
neural network and genetic algorithm has been carried out. The following conclusions have
been drawn on the basis of the results obtained:
(1) The developed model for HT, with a mean square error of 0.00037%, agrees well with the
experimental results.
(2) The optimum levels of the control factors pulse width, pulse frequency, gas pressure and
trepanning speed have been found to be 0.8 ms, 21 Hz, 6 kg/cm2 and 0.1 mm/s respectively.
(3) Validation has been performed in order to verify the result.
References
Benyounis, K.Y., Olabi, A.G. Optimization of different welding processes using statistical and
numerical approaches – a reference guide. Advances in Engineering Software 2008, 39, 483–96.
Bereznai, M., Pelsoczi, I., Toth, Z., Turzo, K., Radnai, M., Bor, Z., Fazekas, A. Surface
modifications induced by ns and sub-ps excimer laser pulses on titanium implant material.
Biomaterials 2003, 24, 4197–203.
Biffi, C.A., Lecis, N., Previtali, B., Vedani, M., Vimercati, G.M. Fiber laser microdrilling of titanium
and its effect on material microstructure. International Journal of Advanced Manufacturing
Technology 2011, 54, 149–60.
Dubey, A.K., Yadava, V. Laser beam machining – a review. International Journal of Machine
Tools and Manufacture 2007, 48(6), 609–628.
Fausett, L. Fundamentals of neural networks: architectures, algorithms and applications, 1994
(Prentice Hall, New York).
Ghoreishi, M., Low, D.K.Y., Li, L. Comparative statistical analysis of hole taper and circularity in
laser percussion drilling. International Journal of Machine Tools and Manufacture 2002,
42(9), 985–95.
Haykin, S. Neural networks: a comprehensive foundation, 2002 (Pearson Publication, Harlow, UK).
Lamba, V.K. Neuro fuzzy systems, 2008 (University Science Press, New Delhi).
Nayak, B.K., Gupta, M.C., Kolasinski, K.W. Formation of nano-textured conical microstructures
in titanium metal surface by femtosecond laser irradiation. Applied Physics A: Materials
Science & Processing 2008, 90, 399–402.
Nayak, B.K., Gupta, M.C. Self-organized micro/nano structures in metal surfaces by ultrafast
laser irradiation. Optics and Lasers in Engineering 2010, 48, 940–9.
Oliveira, V., Ausset, S., Vilar, R. Surface micro/nanostructuring of titanium under stationary and
non-stationary femtosecond laser irradiation. Applied Surface Science 2009, 255, 7556–60.
Rajurkar, K.P., Levy, G., Malshe, A., Sundaram, M.M., McGeough, J., Hua, X., Resnick, R., De
Silva, A. Micro and nano machining by electro-physical and chemical processes. CIRP
Annals – Manufacturing Technology 2006, 55(2), 643–66.
Rao, B.T., Kaul, R., Tiwari, P., Nath, A.K. Inert gas cutting of titanium sheet with pulsed mode
CO2 laser. Optics and Lasers in Engineering 2005, 43, 1330–48.
Ross, P.J. Taguchi techniques for quality engineering, 2nd edition, 1996 (Tata McGraw Hill
Publishing Company Ltd, New Delhi).
Sarkar, S., Mitra, S., Bhattacharyya, B. Parametric optimization of wire electrical discharge
machining of gamma titanium aluminide alloy through an artificial neural network model. The
International Journal of Advanced Manufacturing Technology 2006, 27, 501–508.
Tongyu, W., Guoquan, S. Geometric quality aspects of Nd:YAG laser drilling holes. Proceedings
of the 2008 IEEE International Conference on Mechatronics and Automation.
Yang, Y., Yang, J., Liang, C., Wang, H., Zhu, X., Zhang, N. Surface microstructuring of Ti plates
by femtosecond lasers in liquid ambiences: a new approach to improving biocompatibility.
Optics Express 2009, 17, 24–33.
Selection of Magnetorheological (MR) Fluid for MR Brake
using Analytical Hierarchy Process
Kanhaiya P. Powar, Satyajit R. Patil*, Suresh M. Sawant
Rajarambapu Institute of Technology, Sakharale, (MS) 415 414
*Corresponding author (email: satyajit.patil@ritindia.edu)
Magnetorheological fluid technology and devices are evolving, and research is being
carried out for their effective application in various areas. Magnetorheological brakes
based on MR technology have recently been investigated for their potential use for
automotive purposes. Among the design decisions, selection of an appropriate MR fluid
for the brake application remains a key issue. The Analytical Hierarchy Process (AHP) is
one of the powerful tools available for engineering decision making. This article presents
an effort to apply AHP to the MR fluid selection problem in the context of the brake
application. Six criteria and three alternatives have been considered in this problem to
demonstrate the AHP application.
1. Introduction and literature review
Magnetorheological (MR) fluids are a class of new intelligent materials whose rheological
characteristics change rapidly and can be controlled easily in the presence of an applied
magnetic field (Wang et al., 2001). MR fluids consist of stable suspensions of micron-sized,
magnetizable particles dispersed in a carrier medium such as silicone oil or water. When an
external magnetic field is applied, the polarization induced in the suspended particles results
in the MR effect, which directly influences the mechanical properties of the fluid. The
suspended particles become magnetized and align themselves, like chains, with the direction
of the magnetic field. The formation of these particle chains restricts the movement of the MR
fluid, thereby increasing its yield stress. This change is rapid, reversible and controllable with
the magnetic field strength (Huang et al., 2002).
Devices based on this MR effect include dampers, clutches and brakes. The response
time of MR brakes has been claimed to be 15-20 ms, as against 200-300 ms for conventional
automotive hydraulic brakes. This translates into a reduction of stopping time and stopping
distance, the main brake performance parameters. Being electromechanical brakes (EMB),
they also offer the possibility of an electronic control interface. Hence, MR brakes have raised
considerable interest for automotive applications.
An attempt to evaluate the performance of MR brakes for automotive application has been
made by Park et al. (2006), who presented a design approach and finite element studies of a
typical MR brake. They have also presented optimization studies of MR brakes (Park et al.,
2008). Karakoc et al. (2008) evaluated a typical MR brake experimentally and found the
braking torque to be much lower than the value required for a typical mid-size car. Recently,
Assadsangabi et al. (2011) have attempted to design and optimize a disk-type MR brake, and
Younis et al. (2011) have applied the SEUMRE optimization algorithm to their MR brake.
Choice of a suitable MR fluid remains a major challenge in the development of an MR brake,
as the brake's behavior is influenced by the properties of the MR fluid. However, the past
literature does not present any scientific approach towards the selection of an MR fluid for the
brake or any other application. This article presents an application of the Analytical Hierarchy
Process (AHP) for the selection of an MR fluid for an MR brake.
2. Problem definition
As mentioned in the earlier section, the MR fluid properties influence the MR brake behavior
in terms of braking torque. The properties of interest are temperature range, off-state
viscosity, density and yield stress level. The magnetic saturation limit of the MR fluid is also
important because, beyond the
saturation level, an increment in current will not yield a rise in the magnetic field intensity that
generates the braking torque. Table 1 below gives the properties of a typical MR fluid.
Table 1. Properties of Typical MR Fluid

Property                              Typical Value
Initial viscosity                     0.2–0.3 Pa·s @ 25 °C
Density                               3–4 g/cm3
Mgn. field strength                   150–250 kA/m
Typical supply voltage and current    2–25 V, 1–2 A
Yield point                           50–100 kPa
Reaction time                         few milliseconds
Work temperature                      −50 to 150 °C
During brake application, the temperature of the MR brake and fluid will rise; the brake is also
expected to perform at subzero as well as high temperatures. The transient thermal analysis
of Karakoc et al. (2008) shows that the temperature of the fluid may rise up to around 120 °C.
Hence, it is advantageous for the selected MR fluid to have a broad working temperature
range.

The yield stress of an MR fluid determines the amount of braking torque it generates: the
higher the yield stress, the higher the braking torque. The off-state viscosity of the MR fluid
should be low, as otherwise resistance will be offered to the vehicle motion even when the
brakes have not been applied. A higher density of the MR fluid allows the volume to be low
and makes the brake unit compact. The magnetic field strength of the MR fluid determines the
magnetic saturation limit, and thus a higher value is preferred.
MR fluids made available by manufacturers possess these properties in varying degrees
depending on the carrier fluid, the size and shape of the MR fluid particles, the amount of solid
content and the additives (Olabi et al., 2007). One needs to take these properties into account
simultaneously to be able to choose the appropriate MR fluid for one's brake application. In
this work, the three candidate MR fluids considered are MRF1, MRF2 and MRF3; their
properties are presented in Table 2.
Table 2. Properties of MRF1, MRF2 and MRF3

Fluid   Density      Temperature    Yield Stress   Viscosity         Saturation Limit   Solid
        (g/cm3)      Range (°C)     (kPa)          (Pa·s @ 40 °C)    (kA/m)             Content (%)
MRF1    2.28–2.48    0 to 140       32             0.042 ± 0.020     320                72
MRF2    2.95–3.15    −20 to 160     44             0.112 ± 0.02      280                80.98
MRF3    3.54–3.74    −40 to 125     58             0.280 ± 0.07      180                85.44
3. Selection of Analytical Hierarchy Process (AHP) as decision making tool
Among various decision-making tools, SMART, Generalized Means and AHP are among the
more suitable methods for multi-criteria decision making, and AHP is one of the most widely
applied multi-attribute decision-making methods. AHP takes into account the individual
opinions of a number of decision makers and also checks the consistency of each pairwise
comparison, which is not the case with the other tools mentioned. Hence, the AHP approach
has been selected for this problem.
4. Application of AHP for MR fluid selection problem
This section develops the AHP for the above-stated problem.
4.1 Hierarchy of the problem:
The hierarchy for the problem is constructed as shown in Fig.1.
Figure 1. Hierarchical structure for MRF selection
4.2 Construction of judgment matrix/pairwise comparison matrix:
Next, the pairwise comparison matrix is formulated by assigning relative intensities among the
criteria, using the standard 1-9 scale. As an illustration, the weight in cell C12 can be read as
'the importance of criterion C1 (density) is 1/7 times that of criterion C2' for our problem. The
AHP verbal scale is reproduced from Pogarcic et al. (2008). The pairwise comparison matrix
formulated is shown in Table 3.
Table 3. Pairwise Comparison Matrix [A]

       C1     C2     C3     C4     C5     C6
C1     1      1/7    0.125  0.125  0.2    0.333
C2     7      1      0.667  0.667  4      4
C3     8      1.5    1      1      5      4.5
C4     8      1.5    1      1      4.5    5
C5     5      0.25   0.2    0.222  1      3
C6     3      0.25   0.222  0.2    0.333  1
4.3 Calculation of weight vector:
The normalized judgment matrix can be calculated by using equation (1):

a'_ij = a_ij / Σ_{k=1}^{n} a_kj,   i, j = 1, 2, …, n    (1)

e.g. a'_11 = a_11 / (a_11 + a_21 + a_31 + a_41 + a_51 + a_61) = 1 / (1 + 7 + 8 + 8 + 5 + 3) = 0.03125
Similarly, calculating for the other cells, we get the normalized judgment matrix shown in Table 4.

Table 4. Normalized Judgment Matrix

0.03125    0.030769   0.038894   0.038894   0.013304   0.018692
0.21875    0.215385   0.207433   0.207433   0.266075   0.224299
0.25       0.323077   0.31115    0.31115    0.332594   0.252336
0.25       0.323077   0.31115    0.31115    0.299335   0.280374
0.15625    0.053846   0.06223    0.069144   0.066519   0.168224
0.09375    0.053846   0.069144   0.06223    0.022173   0.056075
To get the W vector, which provides the row-wise summation, the following equation (2) is used:

W_i = Σ_{j=1}^{n} a'_ij,   i = 1, 2, …, n    (2)

e.g. W_1 = 0.031 + 0.030 + 0.038 + 0.038 + 0.013 + 0.018 = 0.17

Similarly, W = [0.17  1.33  1.78  1.77  0.57  0.35]^T

The weight vector, which signifies the priority or importance of the criteria, is calculated by
using equation (3):

W_i = W_i / Σ_{j=1}^{n} W_j,   i = 1, 2, …, n    (3)

e.g. W_1 = W_1 / (W_1 + W_2 + W_3 + W_4 + W_5 + W_6) = 0.028

Hence, W = [0.028  0.22  0.29  0.29  0.09  0.05]^T

This shows that criteria C3 and C4 carry the highest weight.
4.4 Check for Consistency:
As a part of the process, consistency needs to be checked. Computing [A]·{W},

AW = [0.17  1.44  1.90  1.88  0.59  0.35]^T

λ_max = Σ_{i=1}^{n} (AW)_i / Σ_{i=1}^{n} W_i
      = (0.17 + 1.44 + 1.90 + 1.88 + 0.59 + 0.35) / (0.028 + 0.22 + 0.29 + 0.29 + 0.09 + 0.05)
      = 6.36    (4)

Consistency Index CI = (λ_max − n) / (n − 1) = (6.36 − 6) / (6 − 1) = 0.072    (5)

where n is the size of the [A] matrix. The Random Index (RI) is selected from Zoran et al. (2011):

Consistency Ratio CR = CI / RI = 0.072 / 1.25 = 0.058    (6)

Since CR < 0.1, the given judgment maintains consistency.
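The weight-vector and consistency computations of sections 4.3-4.4 can be reproduced from Table 3. This sketch follows the column-normalization procedure used above; the small helper names are mine, not the authors':

```python
def ahp_weights(A):
    """Normalize each column (Eq. 1), sum the rows (Eq. 2), then normalize
    the row sums (Eq. 3) to obtain the priority (weight) vector."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    row_sums = [sum(A[i][j] / col_sums[j] for j in range(n)) for i in range(n)]
    total = sum(row_sums)          # equals n, since each column sums to 1
    return [s / total for s in row_sums]

def consistency_ratio(A, W, RI=1.25):
    """Eqs. (4)-(6): lambda_max from A*W, then CI and CR (RI = 1.25 for n = 6)."""
    n = len(A)
    AW = [sum(A[i][j] * W[j] for j in range(n)) for i in range(n)]
    lmax = sum(AW) / sum(W)        # Eq. (4); sum(W) is 1 here
    CI = (lmax - n) / (n - 1)      # Eq. (5)
    return CI / RI                 # Eq. (6)

# Pairwise comparison matrix from Table 3
A = [
    [1,   1/7,  0.125, 0.125, 0.2,   0.333],
    [7,   1,    0.667, 0.667, 4,     4],
    [8,   1.5,  1,     1,     5,     4.5],
    [8,   1.5,  1,     1,     4.5,   5],
    [5,   0.25, 0.2,   0.222, 1,     3],
    [3,   0.25, 0.222, 0.2,   0.333, 1],
]
W = ahp_weights(A)          # close to the paper's [0.028, 0.22, 0.29, 0.29, 0.09, 0.05]
CR = consistency_ratio(A, W)   # close to the paper's 0.058, i.e. below 0.1
```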
4.5 Alternative Comparison for Each Criterion, Calculation of Weight Vectors and Consistency
Checks:
The corresponding third-level alternative comparison matrices for each criterion and their
respective priorities are presented in Tables 5 to 10. The same procedure is used as explained
in sections 4.2 to 4.4.
Table 5. Matrix of Alternative Relative Importance Compared to Criterion C1

       A1     A2     A3     Weight
A1     1      0.33   0.20   0.10
A2     3      1      0.33   0.26
A3     5      3      1      0.63

λmax = 3.05; CI = 0.02; CR = 0.05 < 0.1
Table 6. Matrix of Alternative Relative Importance Compared to Criterion C2

       A1     A2     A3     Weight
A1     1      0.25   0.33   0.12
A2     4      1      2      0.55
A3     3      0.5    1      0.32

λmax = 3.02; CI = 0.011; CR = 0.02 < 0.1
Table 7. Matrix of Alternative Relative Importance Compared to Criterion C3

       A1     A2     A3     Weight
A1     1      0.33   0.2    0.10
A2     3      1      0.33   0.26
A3     5      3      1      0.63

λmax = 3.05; CI = 0.02; CR = 0.05 < 0.1
Table 8. Matrix of Alternative Relative Importance Compared to Criterion C4

       A1     A2     A3     Weight
A1     1      4      7      0.70
A2     0.25   1      3      0.21
A3     0.14   0.33   1      0.08

λmax = 3.05; CI = 0.026; CR = 0.05 < 0.1
Table 9. Matrix of Alternative Relative Importance Compared to Criterion C5

       A1     A2     A3     Weight
A1     1      2      5      0.56
A2     0.5    1      4      0.33
A3     0.2    0.25   1      0.09

λmax = 3.03; CI = 0.01; CR = 0.03 < 0.1
Table 10. Matrix of Alternative Relative Importance Compared to Criterion C6

       A1     A2     A3     Weight
A1     1      0.25   0.16   0.08
A2     4      1      0.33   0.27
A3     6      3      1      0.63

λmax = 3.07; CI = 0.03; CR = 0.075 < 0.1
4.6 Optimal alternative MRF selection
At the end of the procedure, the priority of each alternative under each criterion is multiplied by
the weight of that decision criterion, and the results are summarized in Table 11. The
alternative with the highest value is the most acceptable, or optimal, alternative.
Table 11. Synthesized Table on the Optimal Alternative MRF Selection

Criterion  Weight   A1     A1 * Weight   A2     A2 * Weight   A3     A3 * Weight
C1         0.028    0.10   0.0028        0.26   0.0072        0.63   0.0176
C2         0.22     0.12   0.0264        0.55   0.1210        0.32   0.0704
C3         0.29     0.10   0.0290        0.26   0.0754        0.63   0.1827
C4         0.29     0.70   0.2030        0.21   0.0609        0.08   0.0232
C5         0.09     0.56   0.0504        0.33   0.0297        0.09   0.0081
C6         0.05     0.08   0.0040        0.27   0.0135        0.63   0.0315
SUM                        0.3156               0.3077               0.3335
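The synthesis behind Table 11 is a weighted sum of the alternative priorities from Tables 5-10. A sketch using the rounded criterion weights (so the last digits may differ slightly from the printed table):

```python
# Criterion weights from section 4.3 and alternative priorities from Tables 5-10
criterion_w = [0.028, 0.22, 0.29, 0.29, 0.09, 0.05]
alt_w = [                 # per criterion: priorities of A1, A2, A3
    [0.10, 0.26, 0.63],   # C1
    [0.12, 0.55, 0.32],   # C2
    [0.10, 0.26, 0.63],   # C3
    [0.70, 0.21, 0.08],   # C4
    [0.56, 0.33, 0.09],   # C5
    [0.08, 0.27, 0.63],   # C6
]
scores = [sum(w * row[a] for w, row in zip(criterion_w, alt_w))
          for a in range(3)]
# scores ≈ [0.316, 0.308, 0.334] → A3 (MRF3) ranks highest
```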
5. Concluding remarks

Table 11 presents the priority values for the MR fluids considered. It is evident that MRF3
emerges as the fluid that should be selected, as it possesses the maximum value compared
with the other two. MRF2, which reports 0.3077, is the least preferred choice in this case. The
cost of all the MR fluids is assumed to be the same; if this is not the case, the process needs
to be repeated with cost added as one more criterion. This article demonstrates a successful
application of AHP to the problem of MR fluid selection for the automotive brake application.
References
Assadsangabi, B., Daneshmand, F., Vahdati, N., Eghtesad, M. and Bazargan-lari, Y.,
Optimization and Design of Disk-Type MR Brakes. International Journal of Automotive
Technology (KSAE), 2011, 12(6), 921−932.
Huang, J., Zhang, J.Q., Yang, Y. and Wei, Y.Q., Analysis and Design of a Cylindrical
Magneto-Rheological Fluid Brake. Journal of Materials Processing Technology, 129
(2002), 559–562.
Karakoc, K., Park, E.J. and Suleman, A., Design Considerations for an Automotive
Magnetorheological Brake. Mechatronics, Elsevier, (2008).
Olabi, A.G. and Grunwald, A., Design and Application of Magneto-rheological Fluid. Materials
and Design, Elsevier, 28 (2007), 2658–2664.
Park, E.J., Falcao da Luz, L. and Suleman, A., Multidisciplinary Design Optimization of an
Automotive Magnetorheological Brake Design. Computers and Structures, 86, 2008, 207–216.
Park, E.J., Stoikov, D., Falcao da Luz, L. and Suleman, A., Performance Evaluation of an
Automotive Magnetorheological Brake Design With a Sliding Mode Controller.
Mechatronics, Elsevier, 16 (2006), 405–416.
Pogarcic, I., Francic, M. and Davidovic, V., Application of the AHP Method in Traffic Planning.
ISEP, 2008.
Wang, J. and Meng, G., Magnetorheological Fluid Devices: Principles, Characteristics and
Applications in Mechanical Engineering. Proceedings of the Institution of Mechanical
Engineers, (2001), 215, Part L.
Younis, A., Karakoc, K., Dong, Z., Park, E. and Suleman, A., Application of the SEUMRE
Global Optimization Algorithm in Automotive Magnetorheological Brake Design. Structural
and Multidisciplinary Optimization, (2011), 44, 761–772.
Zoran, D., Sasa, M. and Dragi, P., Application of the AHP Method for Selection of a
Transportation System in Mine Planning. YU ISSN 03542904, 2011, pp. 93-99.
Selection of Media using R3I (Entropy, Standard Deviation)
and TOPSIS
Savita Choundhe*, Purva Khandeshe, N.R. Rajhans
College of Engineering Pune, Pune-411 005, Maharashtra, India
*Corresponding author (e-mail: dchoundhe214@gmail.com)
Multi-Criteria Decision Making (MCDM) has an important role to play in the service
sectors, yet many service sectors are not aware of these techniques. Applying them
helps in selecting among alternatives scientifically. This paper addresses the media
selection problem with the help of three MCDM methods: R3I (Relative Reliability Risk
Index), TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) and
the Standard Deviation method. The attributes are defined to express the performance
of the particular alternatives (media) relevant to the decision maker.

Keywords: Media Selection, Entropy, TOPSIS, R3I, Standard Deviation
1. Introduction
Selecting the media is one of the major problems in marketing. Several methods, such as
those proposed by Pekka Korhonen (1989), have been developed to solve the media
selection problem. The problem is to aid management in allocating an advertising budget
across the media. The objective is to maximize audience exposure while spending minimum
cost (Liang, 2010). Although many multi-criteria decision making methods are available to
select the best media from a set of available alternatives, the data needed to calculate the
reliability of a medium may not be available in the case of new media. Therefore it is proposed
to assess the reliability of new alternatives relatively and to screen out those that seem to
have unacceptable ranks. Performance is a measure of reliability; as concluded by Weber et
al., it refers to the performance of the media. Therefore a relative approach is followed by
calculating R3I using Entropy, as proposed by G. Mamtani (2006), and the Standard Deviation
method. In this paper a comprehensive model of media selection is established by using
entropy and standard deviation weights along with R3I and TOPSIS, and is then applied to
evaluate the performance of six different media (newspapers) in order to improve the level of
ranking and ensure better selection of media.
2. Methodology

Korhonen et al. (1989) studied the selection of media comprising six different newspapers and magazines as relevant advertising media: Talouseldmd (TE), weekly; Kauppalehti (KL), daily; InsinBBriuutber (IU), weekly; Tieror&iikka (TT), weekly; Tiekwiikko (TV), weekly; and Mikro (MI), weekly. Different methods are performed to find the best alternative. The relevant audience consists of the following target groups, which are the attributes of the decision matrix shown in Table 1: Marketing Management (MM); Finance and Personnel (F&P); Automatic Data Processing (ADP); Production Management (PM); Research & Development (R&D); and General Management (GM).
Table 1. Readership numbers (thousands)

Media / Target group (Gr)      MM   F&P  ADP  PM   R&D  GM
Talouseldmd (TE), weekly       29   61   6    55   13   21
Kauppalehti (KL), daily        58   85   11   74   13   30
InsinBBriuutber (IU), weekly   12   20   3    46   10   8
Tieror&iikka (TT), weekly      2    5    7    4    1    1
Tiekwiikko (TV), weekly        3    11   8    7    2    5
Mikro (MI), weekly             4    6    5    6    1    2
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
2.1 Selection of media using R3I method

The Relative Reliability Risk Index method helps to calculate the reliability of the media in the marketing field.

2.2 Entropy method to calculate relative weights
The entropy method is an MCDM weighting method, as stated by Huang (2008). The weights for the attributes considered have been calculated using the information from the decision matrix shown in Table 1 and the entropy method. This method has been adopted as a part of calculating R3I because it may be inappropriate for a decision maker to compare the attributes relatively from the attribute structure alone.
The entropy ej of the set of normalized data Rij of attribute j is given by

ej = -K Σ(i=1 to n) Rij ln(Rij)                                       (1)

where Rij is the normalized datum of alternative i (i = 1 to n) for attribute j (j = 1 to k), and K is a constant,

K = 1 / ln(N),  so that 0 < ej < 1 (K remains the same for all ej)    (2)

The degree of divergence is

dj = 1 - ej                                                           (3)

If no preferences are available, the weights are calculated using

wj = dj / Σj dj                                                       (4)

Using these weights, R3Ii is calculated as

R3Ii = Σ(j=1 to k) Rij * wj                                           (5)

2.3 Solution by Entropy method
The quantitative data of the media selection attributes given in Table 1 are normalized by using equation (8) and the values are shown in Table 2. The entropy of the normalized data is calculated by using equation (1). The degree of divergence is obtained from equation (3) and the relative weights are calculated by using equation (4); the values are shown in Table 3. The relative reliability risk indices of the alternatives are calculated using equation (5) and the values are arranged in descending order to obtain the best alternative, as shown in Table 4.
Table 2. Normalized data of the attributes of Table 1

Target Gr  TE      KL      IU      TT      TV      MI
MM         0.4383  0.8766  0.1814  0.0302  0.0450  0.0600
F&P        0.5681  0.7917  0.1863  0.0466  0.1025  0.0559
ADP        0.3441  0.6309  0.1721  0.4015  0.4588  0.2868
PM         0.5313  0.7148  0.4443  0.0386  0.0676  0.0580
R&D        0.6170  0.6170  0.4746  0.0475  0.0949  0.0475
GM         0.5544  0.7919  0.2112  0.0264  0.1320  0.0528
Table 3. Values of ej and Wj

Target Gr  ej      Wj
MM         0.2004  0.1766
F&P        0.2261  0.1709
ADP        0.3404  0.1457
PM         0.2349  0.1690
R&D        0.2437  0.1670
GM         0.2264  0.1708
Table 4. Values of R3Ii

Media  MM      F&P     ADP     PM      R&D     GM      Wj*Rij  Rank
KL     0.1548  0.1353  0.0918  0.1207  0.1030  0.1353  0.7411  1
TE     0.0774  0.0971  0.0501  0.0897  0.1030  0.0947  0.5121  2
IU     0.0320  0.0318  0.0250  0.0750  0.0793  0.0361  0.2793  3
TV     0.0080  0.0175  0.0668  0.0114  0.0159  0.0225  0.1422  4
TT     0.0053  0.0080  0.0584  0.0065  0.0079  0.0045  0.0907  5
MI     0.0107  0.0096  0.0417  0.0097  0.0079  0.0090  0.0887  6
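As a check on Section 2.3, the entropy-weight R3I computation can be sketched in Python. This is an assumed implementation, not the authors' code; note that the ej values of Table 3 are reproduced with K = 1/n rather than the K = 1/ln N written in equation (2).

```python
import math

media = ["TE", "KL", "IU", "TT", "TV", "MI"]
# Readership data of Table 1 (rows = media; columns = MM, F&P, ADP, PM, R&D, GM)
data = [
    [29, 61,  6, 55, 13, 21],
    [58, 85, 11, 74, 13, 30],
    [12, 20,  3, 46, 10,  8],
    [ 2,  5,  7,  4,  1,  1],
    [ 3, 11,  8,  7,  2,  5],
    [ 4,  6,  5,  6,  1,  2],
]
n, k = len(data), len(data[0])

# Vector (Euclidean) normalization of each attribute column, as in Table 2.
norm = [math.sqrt(sum(row[j] ** 2 for row in data)) for j in range(k)]
R = [[row[j] / norm[j] for j in range(k)] for row in data]

# Entropy of each column, eq. (1).  K = 1/n reproduces the tabulated e_j values.
K = 1.0 / n
e = [-K * sum(R[i][j] * math.log(R[i][j]) for i in range(n)) for j in range(k)]

d = [1.0 - ej for ej in e]            # degree of divergence, eq. (3)
w = [dj / sum(d) for dj in d]         # relative weights, eq. (4)

# R3I_i = sum_j R_ij * w_j, eq. (5); a higher index means a better medium.
r3i = [sum(R[i][j] * w[j] for j in range(k)) for i in range(n)]
ranked = sorted(zip(media, r3i), key=lambda p: -p[1])
for name, score in ranked:
    print(name, round(score, 4))
```

Running this reproduces the ordering of Table 4, with KL first at roughly 0.741.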
2.4 Standard deviation method

The standard deviation (SD) method calculates the objective weights of the attributes as

wj = σj / Σ(j=1 to M) σj                                              (6)

where σj is the standard deviation of the normalized data Rij of attribute j. The relative reliability risk index of each alternative is then calculated as

R3Ii = Σ(j=1 to k) Rij * wj                                           (7)
2.5 Solution by standard deviation

The standard deviations of the normalized data of Table 2 and the relative weights Wj of each attribute are calculated by using equation (6); the values are shown in Table 5. The relative reliability risk index is obtained by using equation (7) and the alternatives are arranged in descending order to obtain the best alternative; the values are shown in Table 6.
Table 5. Values of standard deviation

Target Gr  SD      Wj
MM         0.3334  0.1976
F&P        0.3127  0.1853
ADP        0.1567  0.0928
PM         0.2920  0.1731
R&D        0.2826  0.1675
GM         0.3093  0.1833
Total      1.6870  1
Table 6. Values of R3Ii

Media  MM      F&P     ADP     PM      R&D     GM      Total   Rank
KL     0.1732  0.1467  0.0586  0.1237  0.1033  0.1452  0.7509  1
TE     0.0866  0.1053  0.0319  0.0919  0.1033  0.1016  0.5209  2
IU     0.0358  0.0345  0.0159  0.0769  0.0795  0.0387  0.2815  3
TV     0.0089  0.0189  0.0426  0.0117  0.0159  0.0242  0.1223  4
MI     0.0119  0.0103  0.0266  0.0100  0.0070  0.0096  0.0766  5
TT     0.0059  0.0086  0.0372  0.0066  0.0079  0.0048  0.0713  6
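The standard-deviation weighting of Section 2.4 can be sketched similarly (an assumed implementation; the SD values of Table 5 correspond to the sample standard deviation, i.e. divisor n - 1).

```python
import math

# Normalized readership data of Table 2: target group -> values for TE..MI.
R = {
    "MM":  [0.4383, 0.8766, 0.1814, 0.0302, 0.0450, 0.0600],
    "F&P": [0.5681, 0.7917, 0.1863, 0.0466, 0.1025, 0.0559],
    "ADP": [0.3441, 0.6309, 0.1721, 0.4015, 0.4588, 0.2868],
    "PM":  [0.5313, 0.7148, 0.4443, 0.0386, 0.0676, 0.0580],
    "R&D": [0.6170, 0.6170, 0.4746, 0.0475, 0.0949, 0.0475],
    "GM":  [0.5544, 0.7919, 0.2112, 0.0264, 0.1320, 0.0528],
}

def sample_sd(xs):
    # Sample standard deviation (divisor n - 1), matching Table 5.
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

sigma = {g: sample_sd(v) for g, v in R.items()}
total = sum(sigma.values())
w = {g: s / total for g, s in sigma.items()}   # objective weights, eq. (6)
print({g: round(x, 4) for g, x in w.items()})
```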
2.6 TOPSIS method

The TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method, at the first stage, consists of the composition of the decision matrix A with the attribute values and the construction of the normalized decision matrix R based upon A. The elements of R are computed as

Rij = xij / sqrt( Σ(i=1 to m) xij² )                                  (8)

where xij is the value of the jth criterion for the ith alternative, i.e. an element of the decision matrix A. The weighted normalized decision matrix is obtained from the normalized decision matrix R and the weights assigned to the criteria as

Vij = Wj * Rij                                                        (9)
At the second stage, the ideal (fictitious best) solution S+ and the negative-ideal (fictitious worst) solution S- are determined, respectively, as follows:

Si+ = sqrt( Σ(j=1 to M) (Vj+ - Vij)² )                                (10)

Si- = sqrt( Σ(j=1 to M) (Vj- - Vij)² )                                (11)

The relative closeness of each alternative to the ideal solution is computed as

Ci = Si- / (Si+ + Si-)                                                (12)
Finally, the alternative with the highest value of Ci is selected as the preferable (best) one (Hwang and Yoon, 1981; Zanakis et al., 1998).

2.7 Solution by TOPSIS method

The quantitative values of the media selection attributes given in Table 1 are normalized by using equation (8). The weighted normalized value of each attribute is calculated by using equation (9) and the values are shown in Table 7.
Table 7. Weighted normalized data

Target Gr  TE     KL     IU     TT     TV     MI
MM         0.077  0.155  0.032  0.005  0.008  0.011
F&P        0.097  0.135  0.032  0.008  0.018  0.010
ADP        0.050  0.092  0.025  0.058  0.067  0.042
PM         0.090  0.121  0.075  0.007  0.011  0.010
R&D        0.103  0.103  0.079  0.008  0.016  0.008
GM         0.095  0.135  0.036  0.005  0.023  0.009
The ideal (best) and negative-ideal (worst) solutions are obtained; their values are shown in Table 8. The separation measures are obtained by using equations (10) and (11). The relative closeness of a particular alternative to the ideal solution is calculated by using equation (12). The alternatives are arranged in descending order of closeness and the values are presented in Table 9.
Table 8. Values of the ideal best and worst solutions

Target Gr  S+     S-
MM         0.155  0.005
F&P        0.135  0.008
ADP        0.092  0.025
PM         0.121  0.007
R&D        0.103  0.008
GM         0.135  0.005
Table 9. Values of separation measures and relative closeness (C)

Media  S+       S-       C       Rank
KL     0.00053  0.2865   0.9982  1
TE     0.10868  0.1943   0.6413  2
IU     0.20665  0.11481  0.3572  3
TV     0.26132  0.0473   0.1533  4
TT     0.28077  0.0335   0.1066  5
MI     0.27635  0.0184   0.0625  6
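The TOPSIS steps (8) to (12) can be sketched as follows (an assumed implementation, not the authors' code, using the Table 1 data and the entropy weights of Table 3).

```python
import math

media = ["TE", "KL", "IU", "TT", "TV", "MI"]
data = [[29, 61, 6, 55, 13, 21], [58, 85, 11, 74, 13, 30],
        [12, 20, 3, 46, 10, 8],  [2, 5, 7, 4, 1, 1],
        [3, 11, 8, 7, 2, 5],     [4, 6, 5, 6, 1, 2]]
w = [0.1766, 0.1709, 0.1457, 0.1690, 0.1670, 0.1708]  # entropy weights, Table 3

n, k = len(data), len(w)
norm = [math.sqrt(sum(r[j] ** 2 for r in data)) for j in range(k)]
# Weighted normalized decision matrix, eqs. (8) and (9) -> Table 7.
V = [[w[j] * data[i][j] / norm[j] for j in range(k)] for i in range(n)]

best  = [max(V[i][j] for i in range(n)) for j in range(k)]  # ideal solution
worst = [min(V[i][j] for i in range(n)) for j in range(k)]  # negative-ideal

def dist(v, ref):
    return math.sqrt(sum((ref[j] - v[j]) ** 2 for j in range(k)))

# Separation measures (10), (11) and relative closeness (12).
C = {m: dist(V[i], worst) / (dist(V[i], best) + dist(V[i], worst))
     for i, m in enumerate(media)}
ranked = sorted(C.items(), key=lambda p: -p[1])
for m, c in ranked:
    print(m, round(c, 4))
```

KL dominates on every criterion, so its closeness comes out at essentially 1 (the 0.9982 of Table 9 reflects the authors' rounding).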
4. Conclusion
Table 10. Comparative results of all methods

Method / Target Gr        TE  KL  IU  TT  TV  MI
R3I (Entropy method)      2   1   3   5   4   6
R3I (Standard deviation)  2   1   3   6   4   5
TOPSIS                    2   1   3   5   4   6
Three different optimization methods were applied to select the best medium among six different media (newspapers and magazines). Table 10 shows the comparative results of the three methods. The first four ranks obtained were the same for all three methods, with only one swap between ranks 5 and 6. Thus, for this type of problem, these methods can be very useful.
References

Liang, D., The application of entropy-based fuzzy comprehensive evaluation method in the siting of thermal power plants, experimental investigation, 2010.

Mamtani, G., Green, G., Reliability risk evaluation during the conceptual design phase, Acta Polytechnica, Vol. 46, No. 1, 2006.

Deng, H., Yeh, C.H., Willis, R.J., Inter-company comparison using modified TOPSIS with objective weights, Computers & Operations Research, 27 (2000), 963-973.

Huang, J.W., Combining entropy weight and TOPSIS method for information system selection, in: Proceedings of the IEEE International Conference on Automation and Logistics, 2008, pp. 1965-1968.

Korhonen, P., Narula, S.C., Wallenius, J., An evolutionary approach to decision-making, with an application to media selection, Mathematical and Computer Modelling, Vol. 12, No. 10/11, 1989, pp. 1239-1244.

Bindu, R.S., Ahuja, B.B., Vendor selection in supply chain using relative reliability risk evaluation, Journal of Theoretical and Applied Information Technology, 2005-2010, JATIT.

Li, X., Wang, K., Liu, L., Xin, J., Yang, H., Gao, C., Application of the entropy weight and TOPSIS method in safety evaluation of coal mines, Procedia Engineering, 26 (2011), 2085-2091.
Advance Technique for Energy Charges Optimization

S.G. Shirsikar1*, Shubhangi Patil2

1 Dept of Elect Engg, Brahmdevdada Mane Institute of Technology, Solapur-413007, India
2 Dept of ENTC Engg, Brahmdevdada Mane Institute of Technology, Solapur-413007, India

*Corresponding author (e-mail: sgshirsikar@gmail.com)
Energy is crucial to human sustenance and development. Due to the increase in demand for energy and the deficiency in power generation, the gap between demand and supply of electric energy is widening. Bridging this gap from the supply side is a very difficult and expensive proposition. The only viable way to handle this crisis is the efficient use of available energy. This paper helps to understand and solve this important problem using a multiple criteria decision making method known as the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE). The method discussed in this paper is very effective for decision making in energy optimization. One example is included to illustrate the method.
1. Introduction
An energy management program is a systematic and scientific process to identify the potential for improvements in energy efficiency, with or without financial investment, to achieve estimated savings in energy and energy cost. This requires collection and analysis of existing energy usage data and careful study of existing equipment and processes, and then suggesting practical and economical ways of saving energy and energy cost. There is a need for simple, systematic and logical methods or mathematical tools to guide decision makers in considering a number of selection criteria and their interrelations.

This paper presents one such simple, systematic and logical method, called PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations).
2. Improved PROMETHEE method

The improved Promethee method is described below.

Step I: The improved Promethee method proceeds to a pair-wise comparison of the alternatives on each single criterion in order to determine partial binary relations denoting the strength of preference of one alternative over the other. Values of the selection criteria are given in Table 1.
Table 1. Qualitative measures of selection criterion

S.N.  Qualitative measure of selection criterion  Assigned value
1     Exceptionally low                           0.045
2     Very low                                    0.255
3     Low                                         0.335
4     Above average                               0.590
5     High                                        0.665
6     Very high                                   0.745
7     Exceptionally high                          0.955
Step II:
1) After short-listing the alternatives, prepare a decision table including the measures of all criteria for the short-listed alternatives.
2) The weights of relative importance of the criteria may be assigned using the Analytic Hierarchy Process (AHP) method. The steps are explained below.

Find out the relative importance of the different criteria with respect to the objective. To do so, one constructs a pair-wise comparison matrix using a scale of relative importance. In the matrix, rij = 1 when i = j and rji = 1/rij.
Find the relative normalized weight (Wi) of each criterion by:
1. calculating the geometric mean of the ith row, and
2. normalizing the geometric means of the rows in the comparison matrix.

This can be represented as

GMi = ( Π(j=1 to M) rij )^(1/M)    and    Wi = GMi / Σ(i=1 to M) GMi
The geometric mean method of AHP is used in the present work to find out the
relative normalized weights of the criteria.
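The geometric-mean weighting just described can be sketched as follows. This is an assumed implementation, and the comparison matrix below is purely illustrative, not the one used in the paper's example.

```python
import math

def ahp_weights(r):
    """Relative normalized weights from a pair-wise comparison matrix r,
    where r[i][j] is the importance of criterion i over j (r[j][i] = 1/r[i][j])."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in r]  # geometric mean of each row
    total = sum(gm)
    return [g / total for g in gm]                          # normalize to sum 1

# Illustrative (hypothetical) 3-criterion comparison matrix.
r = [[1.0, 3.0, 2.0],
     [1/3, 1.0, 1/2],
     [1/2, 2.0, 1.0]]
w = ahp_weights(r)
print([round(x, 4) for x in w])
```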
Step III: The next step is to obtain the information on the decision maker's preference function.

The preference function Pi translates the difference between the evaluations obtained by two alternatives (a1 and a2) on a particular criterion ci into a preference degree ranging from 0 to 1:

Pi(a1, a2) = Gi[ci(a1) - ci(a2)],    0 ≤ Pi(a1, a2) ≤ 1

where Gi is a non-decreasing function of the observed deviation (d) between the two alternatives a1 and a2 over the criterion ci. The multiple criteria preference index Π(a1, a2) is then defined as the weighted average of the preference functions Pi:

Π(a1, a2) = Σ(i=1 to M) Wi * Pi(a1, a2)
Π(a1, a2) represents the intensity of preference of the decision maker for alternative a1 over alternative a2, considering all the criteria simultaneously. Its value ranges from 0 to 1. This preference index determines a valued outranking relation on the set of actions. For the PROMETHEE outranking relations, the leaving flow, entering flow and net flow of an alternative a belonging to a set of alternatives A are defined by the following equations:

Ø+(a) = Σ(x∈A) Π(a, x);    Ø-(a) = Σ(x∈A) Π(x, a);    Ø(a) = Ø+(a) - Ø-(a)

Ø+(a) is called the leaving flow, Ø-(a) the entering flow and Ø(a) the net flow. Ø+(a) is the measure of the outranking character of a (i.e. the dominance of alternative a over all other alternatives) and Ø-(a) gives the outranked character of a (i.e. the degree to which alternative a is dominated by all other alternatives). The net flow Ø(a) represents a value function, whereby a higher value reflects a higher attractiveness of alternative a. The net flow values are used to indicate the outranking relationship between the alternatives. As an example, the schematic calculation of the preference indices for a problem consisting of three alternatives and four criteria is given in Figure 1.
Mathematical model of the Promethee method:

Figure 1. Mathematical model of the Promethee method (schematic of the pair-wise preference indices Πij, computed per criterion c1 to c4 for alternatives a1 to a3)
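Given the aggregated preference indices, the flow computation described above can be sketched as follows (an assumed implementation; the preference-index matrix is hypothetical, not taken from the paper).

```python
def promethee_flows(pi):
    """Leaving, entering and net flows from the aggregated preference
    indices pi[a][b] (with pi[a][a] = 0)."""
    n = len(pi)
    leaving  = [sum(pi[a][b] for b in range(n)) for a in range(n)]  # phi+
    entering = [sum(pi[b][a] for b in range(n)) for a in range(n)]  # phi-
    net = [p - m for p, m in zip(leaving, entering)]                # phi
    return leaving, entering, net

# Hypothetical preference-index matrix for three alternatives.
pi = [[0.0, 0.6, 0.7],
      [0.4, 0.0, 0.5],
      [0.3, 0.5, 0.0]]
leaving, entering, net = promethee_flows(pi)
ranking = sorted(range(len(pi)), key=lambda a: -net[a])  # best alternative first
print(net, ranking)
```

The net flows always sum to zero, since every Π(a, x) appears once positively and once negatively.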
The improved PROMETHEE method is applied to the given industry for the optimization of energy cost.
3. Example: Optimization of the energy cost of a textile mill using the Promethee method

An H.T. consumer has different TOD (time of day) charges. Hence, with the help of energy management techniques (i.e. by shifting flexible load), it is possible to benefit from the TOD charges. Simultaneously, this will increase the M.D. (maximum demand), which will increase the M.D. charges. In contrast, if the load curve is made flat with the help of load management techniques, the M.D. will reduce and hence the M.D. charges will reduce, but in that case the benefit of the TOD charges will not be achieved. Hence, to achieve the optimization of energy charges, the Promethee method of optimization is implemented. For the given industry, the M.D. is measured for 24 hours. The data collected on 25 March 2010 are given in Table 2.
Table 2. Instantaneous M.D. recorded in different zones

Time        8 am  10 am  12 noon  2 pm  4 pm  6 pm  8 pm  10 pm  12 night  2 am  4 am  6 am
Zone        B     C      B        B     B     D     D     A      A         A     A     B
M.D. (kVA)  1275  1224   1350     1228  1323  1360  1325  1290   1275      1380  1350  1290
Case-I: As per the data collected on 25 March 2010, the actual load curve of the mill is as shown in Figure 2. The values of maximum demand and TOD energy consumption are as mentioned below:
M.D. - 1380 kVA; A zone - 10590 kWh; B zone - 11596 kWh; C zone - 3924 kWh; D zone - 5230 kWh.
Figure 2. Actual load curve of the mill (25 March 2010)
Case-II: In this case the load curve is flattened to get the benefit of M.D. charges (Figure-3).
The values of M.D. & TOD energy consumption in this modified load curve are as mentioned
below.
M. D. - 1325 kVA, A zone - 10480 kWh, B zone - 11766 kWh, C zone - 4014 kWh, D zone
- 5280 kWh
Figure 3. Modified load curve of the mill for M.D. benefit.
Case-III: In this case the flexible load is transferred with the help of load management
technique, to get the benefit of TOD charges (Figure-4). Here the load is shifted from zones
having higher TOD charges to the zones having lower TOD charges by strategic load growth
technique.
Figure 4. Modified load curve of the mill for TOD benefit.
The values of M.D. and TOD energy consumption in this modified curve are as mentioned below:
M.D. - 1400 kVA; A zone - 10790 kWh; B zone - 11596 kWh; C zone - 3824 kWh; D zone - 5130 kWh.
Table 3. Data of the optimum-energy selection criteria of the given example

Total Energy  M.D. (kVA)  A-zone (kWh)  B-zone (kWh)  C-zone (kWh)  D-zone (kWh)
Case-I        1380        10590         11596         3924          5230
Case-II       1325        10480         11766         4014          5280
Case-III      1400        10790         11596         3824          5130
Step I: The problem, considering five criteria and three alternative cases of energy charges, is as shown in Table 3. The five criteria used to evaluate the three alternatives are M.D., A-zone kWh, B-zone kWh, C-zone kWh and D-zone kWh.

Step II: A decision table including the measures or values of all criteria for the short-listed alternatives is prepared as shown in Table 3. The weights of the criteria may be assigned using the Analytic Hierarchy Process (AHP) as explained in Step II above.
If the demand increases by 1 kVA, the M.D. charges increase by 150 Rs/month. If the energy consumption increases by 1 kWh in each zone for one month, the TOD charges in the A-zone, B-zone, C-zone and D-zone increase by 103.5 Rs, 129 Rs, 153 Rs and 162 Rs respectively. Therefore the decision maker prepares the corresponding pair-wise comparison matrix.

The normalized weight of each criterion is calculated following the procedure in Step II; the values are:
M.D. - 0.2139; A - 0.1467; B - 0.1900; C - 0.2182; D - 0.2306
Step III: After calculating the weights of the criteria using the AHP method, the next step is to obtain the information on the decision maker's preference.

The pair-wise comparison of the criterion M.D. gives the matrix shown above. Figure 5 gives the preference values P resulting from the pair-wise comparisons of the three alternatives of total energy with respect to the criteria M.D. and the A, B, C and D zones respectively. The mathematical model of the given problem is as below.
Figure 5. Mathematical model of the given problem
Based on the net flow values, it is clear that case-III (managing the load curve for getting the benefit of TOD charges) is the best choice among all three cases for the optimization of energy charges.
Verification of the result:

Table 5. Energy charges of the three alternatives

Case  M.D. charges (Rs)          A-zone charges (Rs)            B-zone charges (Rs)          C-zone charges (Rs)        D-zone charges (Rs)
I     150 x 1380 kVA = 2,07,000  102.9 x 10590 kWh = 10,89,710  129 x 11596 kWh = 14,95,884  153 x 3924 kWh = 6,00,372  162 x 5230 kWh = 8,47,260
II    150 x 1325 kVA = 1,98,750  102.9 x 10480 kWh = 10,78,392  129 x 11766 kWh = 15,17,814  153 x 4014 kWh = 6,14,142  162 x 5280 kWh = 8,55,360
III   150 x 1400 kVA = 2,10,000  102.9 x 10790 kWh = 11,10,291  129 x 11596 kWh = 14,95,884  153 x 3824 kWh = 5,85,072  162 x 5130 kWh = 8,31,060
Table 6. Ranking of the objectives by actual calculations

Case  Total energy charges (Rs)  Rank
I     42,40,226                  2
II    42,64,458                  3
III   42,32,307                  1
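The ranking of Table 6 can be verified directly from the tariff rates and the consumption figures of Table 3. This is a sketch using the 102.9 Rs/kWh A-zone rate applied in Table 5 (the running text quotes 103.5 Rs).

```python
# Tariff rates: 150 Rs/kVA for M.D.; per-kWh TOD rates per zone as in Table 5.
rates = {"MD": 150.0, "A": 102.9, "B": 129.0, "C": 153.0, "D": 162.0}
cases = {
    "I":   {"MD": 1380, "A": 10590, "B": 11596, "C": 3924, "D": 5230},
    "II":  {"MD": 1325, "A": 10480, "B": 11766, "C": 4014, "D": 5280},
    "III": {"MD": 1400, "A": 10790, "B": 11596, "C": 3824, "D": 5130},
}
# Total charge per case = sum over criteria of rate * quantity.
totals = {c: sum(rates[z] * v for z, v in q.items()) for c, q in cases.items()}
for c, t in sorted(totals.items(), key=lambda p: p[1]):  # cheapest first
    print(c, round(t))
```

Case-III comes out cheapest, matching the PROMETHEE ranking.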
From Table 6 it can be concluded that the ranking of the three cases remains the same as obtained by the improved PROMETHEE method, i.e. case-III is the best choice among all three cases considered for the optimization of energy charges.

4. Result

The ranking of the alternatives from best to worst using the net flows remains the same by actual calculation and by the Promethee method.
5. Conclusion

The Promethee method helps to understand and solve problems using multiple criteria decision making. The method is very effective for decision making in energy optimization.
References

Bureau of Energy Efficiency, New Delhi. www.beeindia.nic.in

Ministry of Power, http://powermin.nic.in/distribution/energy_audit

Rao, R.V., Patel, B.K., Decision making in the manufacturing environment using an improved PROMETHEE method, International Journal of Production Research, first published 14 July 2009.
An Up Link OFDM/SDMA Multiuser Detection using Firefly
Algorithm
K.V. Shahnaz*, Palash Kulhare, C.K. Ali
National Institute of Technology, Calicut – 673601, Kerala, India
*Corresponding author (email: shahnaznitc@gmail.com)
Multiuser detection in OFDM/SDMA systems faces serious issues as the number of users and the data rate increase. ML detection is optimum, but its complexity increases exponentially with the number of users and the constellation size. Conventional detection techniques like ZF and MMSE are less complex but give much poorer performance. The possibility of using nature-inspired meta-heuristic algorithms to detect the user data has been of great research interest recently. With these optimization algorithms we can obtain almost optimum results with much reduced computational complexity. In this paper, the recently developed Firefly Algorithm is utilized to detect the users' transmitted data and the result is compared with a guided random search algorithm.
1. Introduction
Smart antenna designs have emerged in recent years. They are applied with the main
objective of combating the effects of multipath fading on the desired signals, thereby increasing
both the performance and capacity of wireless systems. An application of smart antennas is
Space Division Multiple Access (SDMA). Here the users are identified with the help of their spatial
signature. In OFDM/SDMA systems, advantages of both OFDM and SDMA are combined. As a
result, substantially improved uplink capacity is achieved. Channel distortion due to multipath
propagation is easily mitigated with OFDM, while bandwidth efficiency can be increased with use
of SDMA. However, the performance of these systems depends critically on the precision of the channel knowledge [Vandenameele, P. et al., 2001; Hanzo, L. et al., 2003].
To solve the problem of multiuser detection in OFDM/SDMA systems, various classical solutions of varying complexity are available. The main problem with all of them is that they are either far too computationally complex to implement or do not achieve adequate performance. Therefore, suboptimal solutions with lower complexity are important. The research work in [Zhang, J. et al., 2011] proposes a guided random search algorithm for joint channel estimation and multiuser detection, referred to as dual repeated weighted boosting search (DRWBS). It depends on iteratively exchanging information between the channel estimator and the symbol detector. The detector gives soft outputs which can be used directly by a forward error correction (FEC) decoder to reduce the BER further. The scheme is capable of attaining near-optimum performance at a lower computational complexity than the optimum ML-MUD, particularly in higher-order M-QAM scenarios. Different meta-heuristic algorithms are widely used in various scenarios of MIMO-OFDM systems [Haris, P.A. et al., 2010]. The search for a new robust algorithm with less complexity and yet high performance led to the Firefly algorithm (FA).
FA was developed by Xin-She Yang at Cambridge University in 2008 and is inspired by light attenuation over distance and fireflies' mutual attraction. This paper explores the possibility of using this robust algorithm to detect the transmitted data and compares the result with the RWBS algorithm [Xin-She Yang, 2011].
2. System model

The multiuser MIMO OFDM/SDMA system considered supports U mobile stations (MSs) simultaneously transmitting in the Up Link (UL) to the Base Station (BS). Each user is equipped with a single transmit antenna, whereas the BS employs an array of P antennas. It is assumed that a time-division multiple-access protocol organizes the division of the available time-domain (TD) resources into OFDM/SDMA time slots. Instead of one, U MSs are assigned to each slot and are allowed to simultaneously transmit their streams of OFDM-modulated symbols to the SDMA BS [Zhang, J. et al., 2011].
All of the U MSs transmit independent data streams. The modulated data X(u)[k], k = 1, 2, ..., K, are then serial-to-parallel converted, and the frequency-domain training symbols are concatenated at the beginning of each frame. The parallel modulated data are further processed by the inverse fast Fourier transform (IFFT) to form a set of OFDM symbols. After concatenating the cyclic prefix (CP) of Kcp samples, the TD signal is transmitted through a multipath fading channel and contaminated by the receiver's additive white Gaussian noise (AWGN).
At the BS, the CP is discarded from every OFDM symbol, and the resultant signal is fed into the corresponding FFT-based receiver. Let Yp[s,k] denote the signal received by the pth receiver antenna element in the kth subcarrier of the sth OFDM symbol, given as the superposition of the different users' channel-impaired received signal contributions plus the AWGN, which is expressed as

Yp[s,k] = Σ(u=1 to U) Hp,u[s,k] X(u)[s,k] + Wp[s,k]                   (1)

where Hp,u[s,k] denotes the frequency-domain channel transfer function (FD-CHTF) of the link between the uth user and the pth receiver antenna in the kth subcarrier of the sth OFDM symbol.

The DRWBS-JCEMUD scheme alternately estimates the channel as well as the users' data and mutually exchanges these estimates between both populations to find the joint optimum. The respective Cost Functions (CFs) for estimating both the channel impulse responses (CIRs) and the users' transmitted data are formulated as in [Zhang, J. et al., 2011].
Jh( hp[s] | X[s] ) = || Yp[s] - X^T[s] F hp[s] ||²                    (2)

JX( X[s,k] | H[s,k] ) = || Y[s,k] - H[s,k] X[s,k] ||²                 (3)

3. Firefly algorithm
This algorithm is based on the natural behaviour of fireflies, which in turn is based on the bioluminescence phenomenon. The flashes of light produced by this phenomenon help fireflies to communicate with each other and to attract prey or other fireflies. This peculiar swarm intelligence pattern is applied in FA. Various studies suggest that it is better than GA and PSO because fireflies aggregate more closely around each optimum (without jumping around as in the case of genetic algorithms). FA can find the global optimum as well as the local optima simultaneously in a very effective manner, and it also converges faster [Xin-She Yang, 2009; Lukasik and Zak, 2009].
Two important aspects of FA are:

1) Attractiveness, which is a function of the light absorption coefficient γ and the distance rij between two fireflies:

β(rij) = β0 e^(-γ rij²)                                               (4)

where β0 is the attractiveness at rij = 0, and

rij = ||xi - xj|| = sqrt( Σ(k=1 to d) (xi,k - xj,k)² )

is the Cartesian distance between the two fireflies i and j at positions xi and xj.
2) The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by

xi = xi + β0 e^(-γ rij²) (xj - xi) + α (rand - 1/2)                   (5)
The second term is due to the attraction and the third is for randomization, with α being the randomization parameter. A detailed pseudo-code of FA is available in [Xin-She Yang, 2011].

The FA invoked in the OFDM/SDMA system commences its search for the optimum U-symbol solution from a randomly generated initial population; these are the first-generation fireflies. In subsequent iterations these fireflies move across the search dimensions and, using equation (5), the best firefly is retained. The same equations (2) and (3) can be used as CFs in FA as well. By proper tuning of the parameters β0, γ and α, the performance can be improved. The population size and the number of generations affect the convergence rate and performance, as in other nature-inspired algorithms.
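A minimal continuous-domain sketch of the algorithm, following equations (4) and (5), is given below. This is an assumed illustration on a toy sphere cost function, not the authors' MUD code; the MUD application would instead use the CF of equation (3) and map firefly positions to candidate symbol vectors. The parameter values here (including the γ value, which is scaled to the search domain) are illustrative.

```python
import math, random

def firefly(cost, dim, n_fireflies=25, generations=100,
            alpha=0.2, beta0=1.0, gamma=0.01, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireflies)]
    for _ in range(generations):
        costs = [cost(x) for x in xs]          # brightness ~ inverse cost
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if costs[j] < costs[i]:        # firefly j is brighter than i
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)           # eq. (4)
                    xs[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]        # eq. (5)
        alpha *= 0.97                          # gradually reduce randomization
    return min(xs, key=cost)

random.seed(1)                                 # reproducible run
best = firefly(lambda x: sum(v * v for v in x), dim=2)
print([round(v, 3) for v in best])
```

With the sphere function, the returned position should lie close to the origin.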
4. Simulation results

Since both RWBS and FA give almost perfect channel estimation, more focus has been given to detection. MUD has been implemented using FA and the result has been compared with that of RWBS. The simulation parameters used are given in Table 1. It is clear from Figure 1 that FA gives almost 2 dB improvement at a BER of 10^-3 over RWBS at almost the same complexity.
4.1 Complexity comparison of FA with RWBS

The computational complexity of the RWBS is predominantly determined by the population size (Ps), the number of generations (G) required to approach convergence, and the number of boosting search steps, Tbs, at each generation. Given Ps and a fixed number of Tbs boosting search steps at each generation of the RWBS algorithm, the number of CF evaluations required at each generation is equal to [(Ps-1) + 2Tbs], where (Ps-1) is the number of CF evaluations outside the boosting search and 2Tbs is the number of CF evaluations in the boosting search. Hence, the total number of CF evaluations required to detect the users' transmitted signals at each subcarrier of each OFDM symbol is [(Ps-1) + 2Tbs] × G. We denote the computational complexity of the RWBS-based MUD as O([(Ps-1) + 2Tbs] × G). For FA it is just O(Ps × G).
Table 1. Simulation Parameters for RWBS and FA MUD

Modulation scheme: 16-QAM
Convolution code: ½ rate polynomial [133 171]
Channel: [0; -5; -10; -15] (dB)
No. of paths, L: 4
No. of MSs, U: 4
Subcarriers: 64
Cyclic prefix: 16
No. of BS antennas: 4

Algorithm parameters (RWBS): Population size Ps = 60; Mutation parameter µ = 0.01; Initial weights = 1/Ps; No. of generations = 80; Weighted boosting update Tbs = 40
Algorithm parameters (FA): Population size Ps = 40; No. of generations = 250; Alpha = 0.2; Beta-min = 0.2; Gamma = 1
We can see that the computational complexity of the RWBS scheme is independent of the order M of M-QAM and the number of users U. To explicitly quantify the complexity of the RWBS-based optimization algorithm, we compare it with that of the MMSE-MUD and the optimum ML-MUD. The computational complexity of the MMSE-MUD is dominated by the inversion of a (U×U)-element matrix; hence, it can be approximated as O(U³). The computational complexity of the ML-MUD using exhaustive search for a U-user SDMA/OFDM system employing M-QAM is O(M^U).
So, in this work, the complexity of RWBS is [(Ps−1) + 2Tbs]×G = 11120 CF evaluations and that of FA is Ps×G = 10000.
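The two totals can be checked directly from the parameters in Table 1 (a sketch; the variable names are ours):

```python
# RWBS: population size, generations, boosting steps per generation (Table 1)
Ps_rwbs, G_rwbs, Tbs = 60, 80, 40
# FA: population size and generations (Table 1)
Ps_fa, G_fa = 40, 250

# Total cost-function evaluations for each detector
rwbs_cf_evals = ((Ps_rwbs - 1) + 2 * Tbs) * G_rwbs
fa_cf_evals = Ps_fa * G_fa
print(rwbs_cf_evals, fa_cf_evals)  # 11120 10000
```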
Figure 1. BER performance using ML, RWBS and FA
5. Conclusion
This paper uses FA to detect the data of a four-user OFDM/SDMA system with four antennas at the BS and compares the result with the existing RWBS algorithm. From the simulations conducted it is apparent that FA outperforms RWBS.
References
Hanzo, L. Munster, M. Choi, B.J. and Keller, T. OFDM and MC-CDMA for broadband multiuser
communications, WLAN’s and Broadcasting, IEEE press 2003.
Haris, P.A., Gopinathan, E. and Ali, C.K. Performance of Some Metaheuristic Algorithms for Multiuser Detection in TTCM-Assisted Rank-Deficient SDMA-OFDM System. EURASIP Journal on Wireless Communications and Networking, vol. 2010, Article ID 473435, 11 pages, 2010. doi:10.1155/2010/473435
Jiang, M. Akhtman, J. and Hanzo, L. Iterative joint channel estimation and multiuser detection for
multiple antenna aided OFDM systems. IEEE Transactions on Wireless Communication, vol.
6, no. 8, pp. 2904–2914, Aug. 2007.
Lukasik, S. and Zak, S. Computational collective intelligence. Semantic web, social networks and multiagent systems, pp. 97-1108, Springer, 2009.
Vandenameele, P. et al. A combined OFDM/SDMA approach. IEEE Journal on Selected Areas in Communication, vol. 18, no. 11, pp. 2312–2321, Nov. 2000.
Xin-She Yang. Firefly algorithms for multimodal optimization, pp. 169-178, Springer, 2009.
Xin-She Yang. Nature-Inspired Metaheuristic Algorithms. Luniver Press, 2011.
Zhang, J. Chen, S., Mu, X. and Hanzo, L. Joint channel estimation and multiuser detection for
SDMA/OFDM based on dual repeated weighted boosting search. IEEE Transactions on
Vehicular Technology. vol. 60, no. 7, pp. 3265-3275, September 2011.
Multi-criteria Material Selection for Heat Exchanger using an
Integrated Decision Support Framework
P. B. Lanjewar¹*, R. V. Rao², A. V. Kale³
¹St. Vincent Pallotti College of Engineering and Technology, Nagpur-440 001, India
²Sardar Vallabhbhai National Institute of Technology, Surat-395 007, India
³Yashwantrao Chavhan College of Engineering, Nagpur-440 001, India
*Corresponding author (E-mail: lanjewarpb@gmail.com)
In the present work, a multi-criteria decision methodology based on the digraph and matrix method and the analytic hierarchy process is proposed for selection of an appropriate material for a heat exchanger. The material selection attributes digraph model presents a visual representation of the selection attributes considered and their interrelations. The preference indicator obtained from the attributes' permanent function evaluates the heat exchanger material alternatives with respect to several attributes expressed in linguistic terms. The relative importance of the attributes is expressed using the analytic hierarchy process. The method offers a more objective, simple and consistent way of selection. A detailed procedure for determination of the preference indicator is suggested.
1. Introduction
A heat exchanger is a device built for efficient heat transfer from one medium to
another. The media may be separated by a solid wall to prevent mixing or they may be in
direct contact. They are widely used in space heating, refrigeration, air conditioning, power
plants, chemical plants, petrochemical plants, petroleum refineries, natural gas processing,
and sewage treatment. Heat exchangers can be classified into different types according to the
fluid transfer process, geometry of construction, heat transfer mechanism and flow
arrangements. Any component or the entire unit can be made of materials such as
copper, aluminum, carbon steel, stainless steel, nickel, nickel alloys, titanium or other special
alloys.
When selecting materials for engineering designs, a clear understanding of the
functional requirements for each individual component is essential and various important
criteria such as physical properties, electrical properties, magnetic properties, mechanical
properties, chemical properties, performance characteristics, material cost, availability, etc.
need to be considered (Rao R. V., Davim J.P., 2008)
The selection of an optimal material for heat exchanger from among two or more
alternative materials on the basis of two or more criteria is a multiple criteria decision making
problem. Various studies have been performed to address the issue of material selection.
Sirisalee P. et al. (2004) proposed a novel design support tool, the exchange constant chart, to assist designers in selecting materials in multi-criteria situations. Liao T.W. (1996) presented a fuzzy multi-criteria decision-making method for selecting materials quantitatively. However, fuzzy methods are complex and increase the computational work. Ashby M.F. et al. (2004) provided a comprehensive review of the strategies or methods for materials selection, from which three types of materials selection methodology were identified. Besides these studies, a few other methodological approaches towards solution of the problem are as follows: knowledge-based systems (Zhu F. et al., 2008; Trethewey K.R., 1998; Sapuan S.M., 2001), an integrated information technology approach (Jalham I.S., 2006), a case-based reasoning method (Amen R. & Vomacka P., 2001) and computer-based material selection systems (Waterman N.A. et al., 1992).
In the present work, a methodology integrating digraph and matrix method and
analytic hierarchy process (AHP) is proposed for selection of an appropriate material for heat
exchanger. The proposed approach is relatively new and has not been employed earlier for
selection of heat exchanger material. AHP is employed to determine the relative weights of
the material selection attributes and the materials are ranked using digraph and matrix
method.
2. Digraph and matrix method
Graph theory serves as a mathematical model of any system that involves multiple relations among its constituent elements, owing to its diagrammatic representation and visual appeal. Digraph models have been successfully used for modeling and analyzing various kinds of systems and problems in numerous fields of science and technology (Rao, 2004, 2006a, 2006b, 2006c; Rao and Padmanabhan, 2006). Digraph modeling offers a better visual representation of the attributes and their interrelations.
2.1 Material selection attributes digraph model
The material selection attributes digraph model (MSADM) is a graphical
representation of the system and is a significant tool for its visual analysis. The various
attributes are expressed in terms of nodes ni, with i = 1, 2, …, N, and their interrelations as edges eij. A node ni represents the i-th selection attribute and an edge eij represents the relative importance of attribute i over attribute j.
Fig. 1 illustrates the MSADM developed for six attributes represented by nodes 1, 2,
3, 4, 5 and 6.
Figure1. Material selection attributes digraph model
2.2 Material selection attribute matrix
The digraph becomes complex with the increase in the number of nodes and their
interrelation. In such cases, the visual analysis of the digraph becomes difficult. In view of
this, it is necessary to develop a representation of the digraph that can be understood, stored,
retrieved and processed by the computer in an efficient manner. The digraph model is
converted to matrix form, which makes it suitable for computer processing.
The MSADM is represented by an equivalent matrix named as material selection attribute
matrix (MSAM) which stores the deterministic values of all the identified attributes and their
relative importance. The size of this matrix is n×n, corresponding to n attributes, and it considers all the attributes (Xi) and their relative importance (rij). The matrix for the MSADM shown in fig. 1 is represented as
       | X1  r12 r13 r14 r15 r16 |
       | r21 X2  r23 r24 r25 r26 |
MSAM = | r31 r32 X3  r34 r35 r36 |   (1)
       | r41 r42 r43 X4  r45 r46 |
       | r51 r52 r53 r54 X5  r56 |
       | r61 r62 r63 r64 r65 X6  |
where the diagonal element Xi is the value of the i-th attribute represented by node ni and rij is the relative importance of the i-th attribute over the j-th attribute, represented by the edge eij.
3. Analytic Hierarchy Process
AHP is a powerful and flexible decision-making process developed by Saaty (1980)
that is used to solve complex decision making problems involving interactions of different
criteria across different levels. AHP decomposes a decision making problem into a system of
hierarchies of objectives (or goals), attributes (or criteria) and alternatives. The ability to
effectively deal with objective and subjective attributes and to provide measures of
consistency of preference make AHP one of the most popular multi-criteria decision making techniques (Saaty, 1980). In our proposed methodology, AHP is used to determine the relative
weights of attributes and to check the consistency of the judgments in the relative importance
matrix.
4. Proposed methodology
The proposed methodology is demonstrated for selection of appropriate porous
material for plate type heat exchanger. Cicek K. and Celik M. (2009) developed a decision aid
mechanism based on fuzzy axiomatic design (FAD) to select optimal form of porous materials
for plate type heat exchanger used in marine systems.
The major steps of the proposed methodology are elaborated below.
Step 1: Identify alternatives and attributes
The porous material alternatives identified are Porous Titanium Steel Plate (PMA1), Porous Brazed Plate (PMA2), Porous Stainless Steel Plate (PMA3) and Porous Aluminium
Steel Plate (PMA4). These alternatives are assessed with respect to seven attributes namely
Availability(PMSA1), Maintainability (PMSA2), Mechanical stability (PMSA3), Thermal
conductivity (PMSA4), Corrosion resistance (PMSA5), Fouling resistance (PMSA6) and Cost effectiveness (PMSA7). The data of the seven selection attributes for the four materials is presented in Table 1.
Table 1: Subjective data of selection attributes
__________________________________________________________________________
Alternatives   PMSA1   PMSA2   PMSA3   PMSA4   PMSA5   PMSA6   PMSA7
__________________________________________________________________________
PMA1           L       M       VH      H       H       H       M
PMA2           H       H       M       VH      H       M       M
PMA3           M       M       H       H       M       M       H
PMA4           VH      H       M       H       VL      L       VH
__________________________________________________________________________
VL: Very Low, L: Low, M: Medium, H: High, VH: Very High. Source: Cicek K. and Celik M. (2009)
Step 2:
The attributes may be expressed in different units and consequently cannot be used in their initial form for further calculation. Hence, the values of the attributes are normalised. Attributes whose higher measures are desirable for the given application are called beneficial attributes (e.g. thermal conductivity); a non-beneficial attribute (e.g. cost) is one whose lower measures are desirable. The normalized values for a beneficial attribute are obtained as ui/uj, where ui is the measure of the attribute for the i-th alternative and uj is the measure of the attribute for the alternative having the highest measure among those considered. For a non-beneficial attribute the normalized values are calculated as uj/ui, with uj now the lowest measure among the alternatives. A scale with subjective descriptions is suggested for the selection attributes and presented in Table 2. The subjective measures of the attributes are converted to objective values and then normalized as explained above.
Table 2: Values of selection attributes
_________________________________________
Subjective measure      Assigned value
_________________________________________
Exceptionally low       0.045
Extremely low           0.135
Very low                0.255
Low                     0.335
Below average           0.410
Average                 0.500
Above average           0.590
High                    0.665
Very high               0.745
Extremely high          0.865
Exceptionally high      0.955
_________________________________________
Table 3: Normalised data of selection attributes
__________________________________________________________________________
Alternatives   PMSA1    PMSA2    PMSA3    PMSA4    PMSA5    PMSA6    PMSA7
__________________________________________________________________________
PMA1           0.4497   0.7519   1.0000   0.8926   1.0000   1.0000   0.6711
PMA2           0.8926   1.0000   0.6711   1.0000   1.0000   0.7519   0.6711
PMA3           0.6711   0.7519   0.8926   0.8926   0.7519   0.7519   0.8926
PMA4           1.0000   1.0000   0.5000   0.8926   0.3834   0.5037   1.0000
__________________________________________________________________________
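As a sketch (assuming, as the numbers in Table 3 imply, that all seven attributes are treated as beneficial here), the Step 2 conversion and normalization can be reproduced as follows; the results match Table 3 to rounding:

```python
# Linguistic terms -> assigned values, from Table 2
scale = {"VL": 0.255, "L": 0.335, "M": 0.500, "H": 0.665, "VH": 0.745}

# Subjective data from Table 1, one row per alternative (PMA1..PMA4)
data = [
    ["L",  "M", "VH", "H",  "H",  "H", "M"],
    ["H",  "H", "M",  "VH", "H",  "M", "M"],
    ["M",  "M", "H",  "H",  "M",  "M", "H"],
    ["VH", "H", "M",  "H",  "VL", "L", "VH"],
]

values = [[scale[t] for t in row] for row in data]
n_attr = len(values[0])
# Beneficial attribute: divide each entry by the column maximum
col_max = [max(row[j] for row in values) for j in range(n_attr)]
normalized = [[round(row[j] / col_max[j], 4) for j in range(n_attr)]
              for row in values]
```

For example, PMA1's availability (L = 0.335) divided by the highest availability (VH = 0.745) gives 0.4497, the first entry of Table 3.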
Step 3: Compose a relative importance matrix
It is difficult to determine with sufficient accuracy the individual importance of each
attribute. Therefore, the relative importance of one attribute over another (i.e. rij) is expressed
using pairwise comparisons. A relative importance matrix (M) is constructed using a pairwise
comparison scale proposed by Saaty (1980). The values of 1, 3, 5, 7, and 9 represent equal
importance, moderate importance, strong importance, very strong importance and absolute
importance respectively, while the values of 2, 4, 6, and 8 are used for compromise between
the above values.
Assuming n attributes, the pairwise comparison of attribute i with attribute j yields a square matrix M of order n×n, where rij denotes the relative importance of attribute i with respect to attribute j. In the matrix, rij = 1 when i = j and rji = 1/rij.
            | r11 r12 ⋯ r1n |
M = [rij] = | r21 r22 ⋯ r2n |   (2)
            |  ⋮   ⋮  ⋱  ⋮  |
            | rn1 rn2 ⋯ rnn |
The relative importance of attributes is assigned values as explained above and the following
assignment is selected. However, the assigned values are for demonstration purpose only.
    | 1   1   1/3  1/7  1/5  1/5  1  |
    | 1   1   1/3  1/7  1/5  1/5  1  |
    | 3   3   1    1/5  1/3  1/3  3  |
M = | 7   7   5    1    3    3    7  |
    | 5   5   3    1/3  1    1    5  |
    | 5   5   3    1/3  1    1    5  |
    | 1   1   1/3  1/7  1/5  1/5  1  |
Step 4: Calculate the criteria weights
The weights of the attributes (Wi) are determined by normalizing the geometric means of the rows of M: WPMSA1 = 0.04032, WPMSA2 = 0.04032, WPMSA3 = 0.091967, WPMSA4 = 0.392358, WPMSA5 = 0.197189, WPMSA6 = 0.197189 and WPMSA7 = 0.04032.
Step 5: Obtain the principal eigenvalue
The principal eigenvalue (λmax) is obtained using expressions (3) and (4):

| r11 r12 ⋯ r1n |   | W1 |   | W1' |
| r21 r22 ⋯ r2n | * | W2 | = | W2' |   (3)
|  ⋮   ⋮  ⋱  ⋮  |   |  ⋮ |   |  ⋮  |
| rn1 rn2 ⋯ rnn |   | Wn |   | Wn' |

λmax = (1/n) * (W1'/W1 + W2'/W2 + ⋯ + Wn'/Wn)   (4)

The principal eigenvalue is found to be 7.164705672.
Step 6: Perform consistency check
The consistency index (CI) and consistency ratio (CR) are obtained from expressions (5) and (6):
CI = (λmax − n)/(n − 1)   (5)
CR = CI/RI   (6)
A smaller value of CI indicates a smaller deviation from consistency. The random index RI for the number of attributes used in the decision making is obtained from Table 4. Usually, a CR of 0.1 or less is accepted, as it reflects sufficient consistency in the judgments. The value of the consistency ratio is obtained as 0.02033 (< 0.1) and hence sufficient consistency exists in the judgments of relative importance values.

Table 4: Random index (RI) values
_________________________________________________________________
Attributes   3      4      5      6      7      8      9      10
RI           0.52   0.89   1.11   1.25   1.35   1.4    1.45   1.49
_________________________________________________________________
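The weight, eigenvalue and consistency computations of Steps 4-6 can be sketched as follows (using the pairwise matrix of Step 3 and RI = 1.35 for seven attributes); it reproduces the reported λmax ≈ 7.1647 and CR ≈ 0.0203:

```python
import numpy as np

# Pairwise comparison matrix M from Step 3 (7 attributes)
M = np.array([
    [1, 1, 1/3, 1/7, 1/5, 1/5, 1],
    [1, 1, 1/3, 1/7, 1/5, 1/5, 1],
    [3, 3, 1,   1/5, 1/3, 1/3, 3],
    [7, 7, 5,   1,   3,   3,   7],
    [5, 5, 3,   1/3, 1,   1,   5],
    [5, 5, 3,   1/3, 1,   1,   5],
    [1, 1, 1/3, 1/7, 1/5, 1/5, 1],
], dtype=float)

n = M.shape[0]
gm = M.prod(axis=1) ** (1.0 / n)      # geometric mean of each row
W = gm / gm.sum()                     # normalized weights (Step 4)
lam_max = float(np.mean(M @ W / W))   # principal eigenvalue (Eqs. 3-4)
CI = (lam_max - n) / (n - 1)          # consistency index (Eq. 5)
CR = CI / 1.35                        # consistency ratio (Eq. 6), RI = 1.35 for n = 7
```

The largest weight, W[3] ≈ 0.392, belongs to thermal conductivity (PMSA4), matching the values listed in Step 4.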
Step 7:
The MSADM is developed with seven nodes representing the material selection attributes. The magnitudes of the edges and their directions represent the relative importance between the attributes. The MSADM is similar to fig. 1 but with seven nodes.
Step 8:
The material selection attribute matrix (MSAM) for the MSADM is prepared. This will
be a square matrix of order 7 with diagonal elements representing the normalized values of
attributes (Xi) and off diagonal elements representing the attribute’s relative importance (rij).
Step 9:
Develop the attributes permanent function (APF) for the MSAM. An APF is a complete representation of the material selection attributes: it retains all possible information about the attributes and their interrelations. The permanent is similar to the determinant of a matrix but with all the determinant terms taken as positive. The attribute permanent function for an n-attribute matrix, when expanded, has n! terms, which may be arranged in (n+1) groups.
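For reference, a brute-force permanent (the expansion described above, with all n! terms positive) can be sketched as follows; at n = 7 this is only 5040 terms, so the naive approach is adequate here:

```python
from itertools import permutations

def permanent(A):
    """Permanent of a square matrix: determinant-like expansion, all terms positive."""
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        term = 1.0
        for i, j in enumerate(perm):
            term *= A[i][j]
        total += term
    return total
```

Substituting the normalized Xi values on the diagonal and the rij values off the diagonal of a 7×7 MSAM and evaluating permanent() then yields the preference indicator of each alternative (Step 10).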
Step 10:
The preference indicator (PI) represents the measure of performance of an alternative with respect to the attributes. The numerical value of the APF is called the preference indicator. The alternative with the highest PI represents the most preferred option for the given application and is ranked first on the preference list. The PI for all the alternatives is evaluated by substituting the normalised values of the Xi's and the rij's into the APF. The PI values of the different material alternatives are presented below in descending order.
PMA2: Porous Brazed Plate            5280.339481
PMA1: Porous Titanium Steel Plate    5095.251826
PMA3: Porous Stainless Steel Plate   5007.429498
PMA4: Porous Aluminium Steel Plate   4785.663410
From the above values of PI, it is understood that Porous Brazed Plate (PMA2) is the most preferred material for the plate type heat exchanger, followed by Porous Titanium Steel Plate (PMA1). These results are similar to those suggested by Cicek K. and Celik M. (2009). However, it may be mentioned that the ranking depends upon the judgements of relative importance made by the decision maker.
5. Conclusions
A methodology based on integrated matrix method and analytic hierarchy process is
proposed for selection of an appropriate material for heat exchanger.
The material selection attributes digraph model enables a graphical visualization of the various attributes present and their interrelations. The measures of the attributes and their relative importance are used together to rank the alternatives, and hence the method provides a better assessment of the alternatives under consideration.
The consistency check made on the judgments of relative importance adds to the reliability of the proposed selection procedure.
The preference indicator evaluates and ranks the alternatives for a given selection problem. A slight variation in the measure of an attribute leads to a significant difference in the preference indicator, and hence it is easy to rank the options in descending order of their preference indicators.
The proposed methodology allows experts to be flexible and to use a large set of evaluation attributes, including crisp values and linguistic terms, and offers a more objective, simple and consistent selection approach.
References
Amen R, Vomacka P. Case-based reasoning as a tool for materials selection. Mater Design,
2001;22(5):353–8.
Ashby MF, Brechet YJM, Cebon D, Salvo L. Selection strategies for materials and processes.
Mater Des, 2004;25(1):51–67.
Cicek K., Celik M. Selection of porous materials in marine system design: The case of heat
exchanger aboard ships. Materials and Design, 2009, 30, 4260-4266.
Jalham IS. Decision-making integrated information technology (IIT) approach for material
selection. Int J Comput Appl Technol, 2006;25(1):65–71.
Liao TW. A fuzzy multicriteria decision-making method for material selection. J Manuf Syst,
1996;15(1):1–12.
Rao, R.V. Digraph and matrix methods for evaluating environmentally conscious
manufacturing programs. International Journal of environmentally Conscious Design and
manufacturing , 2004, 12, 23-33
Rao, R.V. A decision making framework model for evaluating flexible manufacturing systems
using digraph and matrix methods. International journal of Advance Manufacturing
technology, 2006a, 30, 1101-1110
Rao, R.V. A material selection model using graph theory and matrix approach. Material
Science and Engineering , 2006b, 431, 248-255
Rao, R.V. Plant location selection using fuzzy digraph and matrix methods. International
Journal of Industrial Engineering, 2006c, 13, 357-362
Rao, R.V., Padmanabhan, K.K. Selection, identification and comparison of industrial robots
using digraph and matrix methods. Robotics and Computer Integrated Manufacturing,
2006, 22, 373-383
Rao, Venkata R. Decision making in the manufacturing environment: using graph theory and fuzzy multiple attribute decision making methods, Springer-Verlag London, 2007.
Rao RV, Davim JP. A decision-making framework model for material selection using a
combined multiple attribute decision-making method. Int J Adv Manuf Technol,
2008;35:751–60.
Saaty, T.L. The analytic hierarchy process. McGraw Hill, New York, 1980.
Sapuan SM. A knowledge-based system for materials selection in mechanical engineering
design. Mater Des, 2001;22(8):687–95.
Sirisalee P, Ashby MF, Parks GT, Clarkson PJ. Multi-criteria material selection in engineering
design. Adv Eng Mater, 2004;6(1–2):84–92.
Trethewey KR, Wood RJK, Puget Y, Roberge PR. Development of a knowledge-based system for materials management. Mater Des, 1998;19(1):39–56.
Waterman NA, Waterman M, Poole ME. Computer based materials selection system. Met
Mater, 1992;8:19–24.
Zhu F, Lu G, Zou R. On the development of a knowledge-based design support system for
energy absorbers. Mater Des, 2008;29:484–91.
Application of AHP-TOPSIS for Comparison of Layouts
S. M. Samak¹*, N. R. Rajhans²
¹Satara College of Engineering & Management, Limb, Satara - 415416, Maharashtra, India
²College of Engineering, Pune - 411005, Maharashtra, India
*Corresponding author (e-mail: shilpa_samak@rediffmail.com)
Facilities layout is always a crucial issue for any manufacturing plant, as it directly affects the productivity of the plant. Hence selection of the best layout requires due attention. This paper compares four different layouts of the same plant using multi-criteria decision making techniques, namely AHP and TOPSIS. Various criteria for measuring the performance of a layout, including qualitative and quantitative criteria, are first listed. These criteria are compared using the Analytic Hierarchy Process and their weightages are found. The values for the quantitative measures are determined and the qualitative measures are converted into numerical values. Then the four layouts are compared using the Technique for Order Preference by Similarity to Ideal Solution. Thus this paper helps to find the best layout based on various predefined measures.
1. Introduction
A facility layout is an arrangement of everything needed for production of goods or
delivery of services. A facility is an entity that facilitates the performance of any job. It may be
a machine tool, a work centre, a manufacturing cell, a machine shop, a department, a
warehouse, etc. The placement of the facilities in the plant area, often referred to as ‘‘facility
layout problem’’, is known to have a significant impact upon manufacturing costs, work in
process, lead times and productivity. A good placement of facilities contributes to the overall efficiency of operations and can reduce total operating expenses by up to 50%. A good plant layout can easily achieve the following objectives:
1. Minimize material handling cost.
2. Minimize overall production time.
3. Utilize existing space most effectively.
4. Provide for employee convenience, safety and comfort.
5. Maintain flexibility of arrangement and operation.
6. Minimize variation in types of material-handling equipment.
7. Facilitate the manufacturing process.
8. Facilitate the organizational structure.
Thus a good plant layout is very important for any manufacturing firm. Hence the
objective of this paper is to find the best layout from the available options.
2. Systematic layout planning
An organized approach to layout planning, referred to as Systematic Layout Planning (SLP), has been developed by Muther. In SLP, once the appropriate information is gathered, a flow analysis can be combined with an activity analysis to develop the relationship diagram. Space considerations, when combined with the relationship diagram, lead to the construction of the space relationship diagram. Based on the space relationship diagram, modifying considerations and practical limitations, a number of alternative layouts are designed and evaluated.
As this paper is mainly devoted to evaluation of alternatives, it starts with generated
layouts. The existing layout and three generated layouts are as shown in figure 1. Now
selecting one of these layouts is often a critical issue. This can be done by measuring the
performance of each of these layouts. For this measurement, Analytical Hierarchy Process
(AHP) and Technique for Ordered Performance by Similarity to Ideal Solution i.e. TOPSIS are
used.
Figure 1. Existing and proposed layouts
3. Evaluation criteria
Before the actual measurement of performance, some performance measures or criteria are defined. D. Raman et al. developed a measurement model considering a set of three layout effectiveness factors: facilities layout flexibility (FLF), productive area utilization (PAU) and closeness gap (CG). Their model enables the decision-maker of a manufacturing enterprise to analyze a layout in three different aspects, based on which decisions towards productivity improvement can be made. Some minor changes are made in the effectiveness factors considering the current situation. The criteria used in this case are as follows:
1. Travel: This criterion concentrates on some quantitative measures of movement
between departments. The plant layout is mainly designed to facilitate the flow of the
product, from raw material to the finished product. But there are other types of flows
that need equal consideration. They are loaded as well as empty travel of material
handling equipment, personnel travel, information travel and other material or
equipment travel.
2. Area utilization: Any layout can be evaluated on the basis of how effectively it uses the available area. The available area can be distributed as productive area, non-productive area, inspection area and storage area. Space is required in any manufacturing unit for productive activities such as machining, cutting, etc. Non-productive activities include packing, dispatch, etc.
3. Flexibility: The layout designed should be flexible enough to accommodate change in
product mix. The flexibility can have sub-criteria viz. ease of expansion, volume
variation, free space available and alternate routes available.
These criteria together can measure the effectiveness of the layout. Hence these criteria
are weighted using Analytical Hierarchy Process.
4. Analytic Hierarchy Process
The Analytic Hierarchy Process (AHP) is a multi-criteria decision making method originally developed by Prof. Thomas L. Saaty. It is a method to derive ratio scales from paired comparisons. The input can be obtained from actual measurements such as price, weight, distance, etc., or from subjective opinion such as satisfaction and preference. AHP allows some small inconsistency in judgment because humans are not always consistent. The ratio scales are derived from the principal eigenvector and the consistency index is derived from the principal eigenvalue. Broad areas where AHP has been successfully employed include selection of one alternative from many, resource allocation, forecasting, total quality management, business process re-engineering and quality function deployment.
By following the AHP procedure, the criteria for performance evaluation of layouts are weighted against each other and their relative weightages are found. They are as follows:

Table 1. Criteria weightage after AHP
Sr. No.  Criteria                 Sub-criteria
01       Travel (65%)             Loaded travel of material handling equipment (53%)
                                  Empty travel of material handling equipment (20%)
                                  Personnel travel (10%)
                                  Information travel (8%)
                                  Other equipment travel (9%)
02       Area Utilization (23%)   Productive area (54%)
                                  Non-productive area (9%)
                                  Inspection area (19%)
                                  Storage area (19%)
03       Flexibility (12%)        Ease of expansion (22%)
                                  Volume variation (57%)
                                  Free space (12%)
                                  Alternate routes (9%)
Here the process of comparison of the criteria is complete. Any complex situation that requires structuring, measurement and/or synthesis is a good candidate for AHP. However, AHP is rarely used in isolation; rather, it is used along with, or in support of, other methodologies. Here it is used in conjunction with the Technique for Order Preference by Similarity to Ideal Solution, i.e. TOPSIS. The criteria for performance evaluation of the plant layout are compared and prioritized using AHP, and then the alternatives, i.e. the existing layout and the proposed layouts, are compared using TOPSIS.
5. TOPSIS
The TOPSIS method was developed by Hwang and Yoon (1981). TOPSIS is based
on the concept that the chosen alternative should have the shortest geometric distance from
the positive ideal solution and the longest geometric distance from the negative ideal solution.
It is a method of compensatory aggregation that compares a set of alternatives by identifying
weights for each criterion, normalizing scores for each criterion and calculating the geometric
distance between each alternative and the ideal alternative, which is the best score in each
criterion. An assumption of TOPSIS is that the criteria are monotonically increasing or
decreasing. Normalization is usually required as the parameters or criteria are often of
incongruous dimensions in multi-criteria problems. Compensatory methods such as TOPSIS
allow trade-offs between criteria, where a poor result in one criterion can be negated by a
good result in another criterion. This provides a more realistic form of modeling than
non-compensatory methods, which include or exclude alternative solutions based on hard cut-offs.
The TOPSIS process is carried out as follows:
1. Create an evaluation matrix consisting of m alternatives and n criteria, with the
intersection of each alternative and criterion given as x_ij; we therefore have a matrix
(x_ij) of size m × n.
2. The matrix (x_ij) is then normalized to form the matrix (r_ij), using the
normalization r_ij = x_ij / √(Σ_{k=1}^{m} x_kj²), for i = 1, …, m; j = 1, …, n.
3. Construct the weighted normalized decision matrix by multiplying each column of the
normalized decision matrix by its associated weight. An element of the new matrix is
v_ij = w_j r_ij.
4. Determine the ideal and negative-ideal solutions.
Ideal solution:
A* = { v1*, …, vn* }, where vj* = { max_i (v_ij) if j ∈ J; min_i (v_ij) if j ∈ J′ },
J being the set of benefit criteria and J′ the set of cost criteria.
Negative ideal solution:
A′ = { v1′, …, vn′ }, where vj′ = { min_i (v_ij) if j ∈ J; max_i (v_ij) if j ∈ J′ }
5. Calculate the separation measures for each alternative.
The separation from the ideal alternative is:
Si* = [ Σ_j (vj* − v_ij)² ]^½, i = 1, …, m
Similarly, the separation from the negative-ideal alternative is:
Si′ = [ Σ_j (vj′ − v_ij)² ]^½, i = 1, …, m
6. Calculate the relative closeness to the ideal solution:
Ci* = Si′ / (Si* + Si′), with 0 ≤ Ci* ≤ 1
7. Select the option with Ci* closest to 1.
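Steps 1 to 7 can be sketched directly in Python. For brevity, the illustration below uses only two of the paper's sub-criteria, loaded travel LT (a cost criterion) and productive area PA (a benefit criterion), with the Table 2 values for the existing layout and proposed layouts 1 and 2; the full study uses all 13 weighted sub-criteria.

```python
import math

def topsis(X, weights, benefit):
    """TOPSIS (Hwang and Yoon, 1981): X is an m x n evaluation matrix,
    `weights` the n criterion weights, `benefit[j]` True when a larger
    value of criterion j is better. Returns the closeness C_i* per row."""
    m, n = len(X), len(X[0])
    # Step 2: vector normalisation r_ij = x_ij / sqrt(sum_i x_ij^2)
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Step 4: ideal (A*) and negative-ideal (A') solutions
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    # Steps 5-6: separation measures and relative closeness
    C = []
    for row in V:
        s_best = math.sqrt(sum((b - v) ** 2 for b, v in zip(best, row)))
        s_worst = math.sqrt(sum((w - v) ** 2 for w, v in zip(worst, row)))
        C.append(s_worst / (s_best + s_worst))
    return C

# LT (cost) and PA (benefit) for existing, proposed 1 and proposed 2
X = [[1584.0, 9828.0], [964.0, 12218.0], [1350.0, 11823.0]]
C = topsis(X, weights=[0.65, 0.23], benefit=[False, True])
assert all(0.0 <= c <= 1.0 for c in C)
assert C.index(max(C)) == 1  # proposed 1 dominates on both criteria here
```

On this reduced two-criterion problem proposed layout 1 coincides with the ideal point, so its closeness is exactly 1; with all 13 sub-criteria the closeness values are those reported later in Table 5.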
The above method is applied to the present case.
Step 1: The criteria discussed above are measured for each alternative and summarized in
Table 2. All sub-criteria for travel and area can be measured quantitatively. Of the
flexibility sub-criteria, volume variation, free space and the alternate routes available can
be measured quantitatively, but ease of expansion is a qualitative term and is therefore
converted into a quantitative measure.
Table 2. Evaluation matrix

Criteria      Travel                                   Area utilization            Flexibility
Weight        0.65                                     0.23                        0.12
Sub-criteria  LT     ET    IT      PT    OT    PA      NPA    IA     SA      EE   VV    FS    AR
Sub-weight    0.53   0.2   0.08    0.1   0.09  0.54    0.09   0.19   0.19    0.22 0.57  0.12  0.09
Existing      1584   402   1640.5  1986  308   9828    7100   8060   13424   5    9.2   5916  3
Proposed 1    964    132   1071.5  1096  423   12218   5838   6960   13405   7    35.8  4678  5
Proposed 2    1350   163   1602.3  1513  442   11823   5683   5916   14521   7    31.4  5438  6
Proposed 3    1237   143   1506    1379  445   10401   6123   7170   14330   7    15.6  4576  4

Step 2: Normalization: each term of the matrix is squared, the columns are summed, the
square root of each sum is determined, and each term of the original matrix is divided by
the corresponding square root.
Step 3: The weighted normalized matrix is obtained by multiplying each term by the
weight of its column, including both the criteria and sub-criteria weights.
Table 3. Weighted normalized matrix

Criteria      Travel                             Area utilization          Flexibility
Weight        0.65                               0.23                      0.12
Sub-criteria  LT     ET    IT    PT    OT    PA    NPA   IA    SA    EE    VV    FS    AR
Sub-weight    0.53   0.2   0.08  0.1   0.09  0.54  0.09  0.19  0.19  0.22  0.57  0.12  0.09
Existing      0.209  0.11  0.03  0.04  0.02  0.05  0.01  0.02  0.02  0.01  0.01  0.01  0.003
Proposed 1    0.127  0.04  0.02  0.02  0.03  0.07  0.01  0.02  0.02  0.01  0.05  0.01  0.006
Proposed 2    0.178  0.04  0.03  0.03  0.03  0.07  0.01  0.02  0.02  0.01  0.04  0.01  0.007
Proposed 3    0.164  0.04  0.03  0.03  0.03  0.06  0.01  0.02  0.02  0.01  0.02  0.01  0.005
Step 4: Determination of the best and negative solutions: for each sub-criterion these are
the maximum or minimum values in its column, as appropriate.
Table 4. Best and negative solutions

Sub-criteria  LT     ET     IT     PT     OT     PA    NPA   IA     SA     EE     VV     FS     AR
Best          0.127  0.036  0.019  0.023  0.022  0.07  0.01  0.018  0.021  0.014  0.048  0.008  0.007
Negative      0.209  0.11   0.029  0.042  0.032  0.05  0.01  0.025  0.023  0.01   0.012  0.006  0.004
Step 5: The separation of each value from the best value and from the worst value is
calculated independently and each difference is squared, giving two separation matrices.
In each matrix the rows are summed and the square root of each sum is taken, giving two
separation values for each of the four options. From these two values, the relative
closeness of each option is determined and the options are ranked: the option with the
greatest relative closeness is the best option.
Table 5. Separation values

Options     Separation from best  Separation from worst  Relative closeness  Rank
Existing    0.11899226            0.0101342              0.078482749         4
Proposed 1  0.00909042            0.11884621             0.928945949         1
Proposed 2  0.05456743            0.080132751            0.594897127         3
Proposed 3  0.04866278            0.086045486            0.638754314         2

6. Result and conclusion
Table 5 shows that proposed layout 1 has the greatest relative closeness value and is
therefore the best option in this case. On the other hand, the existing layout has the
least relative closeness value and is thus farthest from the ideal condition.
There are many issues in any manufacturing firm that require selection of the best
option from the available options. Each option may appear best if any single criterion is
considered in isolation; such situations call for multi-criteria decision making. The
combination of AHP and TOPSIS works well in these situations: the criteria are weighted
using AHP and the alternatives are then compared using TOPSIS.
References
Bhutia, P. and Phipon, R., Application of AHP and TOPSIS method for supplier selection
problem. IOSR Journal of Engineering (IOSRJEN), 2012, 2(10), 43-50.
Jahanshahloo, G. R., Hosseinzadeh Lotfi, F. and Izadikhah, M., Extension of the TOPSIS
method for decision-making problems with fuzzy data. Journal of Applied Mathematics
and Computation, 2006, 181, 1544-1551.
Krishnan, P. V., Vijay Ramnath, B. and Pillai, M. K., Mathematical model using AHP to
optimize the organizational financial performance. Ninth AIMS International Conference
on Management, January 1-4, 2012, pp. 1061-1067.
Opricovic, S. and Tzeng, G. H., Compromise solution by MCDM methods: A comparative
analysis of VIKOR and TOPSIS. European Journal of Operational Research, 2004, 156, 445-455.
Panneerselvam, R., Production and Operations Management, Prentice Hall India, 2012.
Raman, D., Nagalingam, S. V. and Lin, G. C., Towards measuring the effectiveness of a
facilities layout. Journal of Robotics and Computer-Integrated Manufacturing, 2009, 25, 191-203.
Shahroodi, K. and Amini, S., Application of Analytical Hierarchy Process (AHP) technique
to evaluate and select suppliers in an effective supply chain. Kuwait Chapter of Arabian
Journal of Business and Management Review, 2012, 1(8).
Tompkins, J. A. and White, J. A., Facilities Planning, John Wiley and Sons, 1996.
Viswanadham, N. and Narahari, Y., Performance Modelling of Automated Manufacturing
Systems, Prentice Hall India, 1994.
Yang, J. and Shi, P., Applying Analytic Hierarchy Process in firm's overall performance
evaluation: A case study in China. International Journal of Business, 2002, 7(1), 29-44.
Optimization of Treatability Study of UASB Unit of Atladara Old
and New Sewage Treatment Plant, Vadodara

Shweta M. Engineer¹*, L. I. Chauhan², A. R. Shah²

¹Parul Institute of Technology, Waghodia, Vadodara – 391 760, Gujarat, India
²M.S. University, Faculty of Technology & Engineering, Vadodara – 390 001, Gujarat, India

*Corresponding author (e-mail: civil_env2010@yahoo.co.in)
Recent research has indicated the advantages of combining anaerobic and aerobic
processes for the treatment of municipal wastewater, especially for warm-climate
countries. Although this configuration is seen as an economical alternative, it has not
been investigated in sufficient detail on a worldwide basis. This work presents the
results of the monitoring of a pilot-scale plant comprising a UASB reactor followed
by an activated sludge system, treating actual municipal wastewater from the large city
of Baroda. The plant was intensively monitored and operated regularly, in nine
different phases, working with constant and variable inflows. The plant showed good
COD removal for the UASB reactor, and the final effluent suspended solids
concentration was good. Based on the very good overall performance of the system, it
is believed to be a better alternative for warm-climate countries than the conventional
activated sludge system. The gas produced by the UASB reactors of the old and new
plants is also useful for the generation of electricity; Japanese technology is available
for generating electricity from the UASB units of both plants, so there is no need to use
GEB power for the operation of the plant.
1. Introduction
Rapid urbanization and industrialization in the developing countries pose severe
problems in the collection, treatment and disposal of wastewaters. This situation leads to
serious public health problems. Increased discharge of domestic and industrial wastewater
into receiving bodies, together with increased withdrawal of fresh water, makes it
impossible to rely on the self-purifying capacity of receiving water bodies. The decreasing
assimilative capacity of water bodies, the need for water conservation and growing public
awareness of the maintenance of a clean environment bring the need for development of
appropriate, cost-effective and resource-recovery-based wastewater treatment systems.
Vadodara city has had an underground drainage system since 1896. The city presently has
three sewage treatment plants, one for each of its three drainage zones. To the extent
possible, it is planned to construct new sewage treatment plants at the existing STP sites,
so that the network and drainage pattern of the area need not be totally changed.
2. Experimental work
For the experimental study, samples were collected from suitable places and analyzed
for different parameters. The performance of the UASB was also worked out by collecting
grab samples of influent/effluent from the UASB, sludge samples from various heights in
the UASB reactor, and gas samples for analysis of various parameters.
2.1 General details of new sewage treatment plant
Average flow: 43 MLD; maximum flow: 86 MLD; treatment process: UASB followed by a
complete-mix activated sludge process; commissioned in June 2009; contractor: Rajkamal
Builders India Pvt. Ltd.; consultants: MWH (I) Pvt. Ltd. & Envirocare Engineers.
Figure 1. Plant flow diagram: the treatment of domestic wastewater at the plant
2.2 Treatment of domestic wastewater
The discharges of partially treated/untreated domestic sewage and industrial
wastewater have assumed a great significance in light of increasing awareness for prevention
of environmental pollution and water conservation.
Table 1. Summary table showing performance of UASB of old & new plant

             % removal of B.O.D    % removal of C.O.D    % removal of S.S      % removal of sulphate
Date         Old plant  New plant  Old plant  New plant  Old plant  New plant  Old plant  New plant
9/12/2009    62.85      50.41      50         46.38      52.07      6.19       38         45.38
20/12/09     45.45      47.1       51.99      59.83      47.8       36.03      40.43      54.35
29/12/09     53         53.3       44         58.51      14.3       51.39      52.3       48.24
31/12/09     57.69      50         47.43      59.29      14.29      14.38      57.14      56.12
12/1/2010    62.37      54.54      42.22      59         19.15      12.92      21.34      57.56
18/1/10      50         44.18      49.66      45.13      34.97      5.46       78.57      16.12
27/1/10      43.47      30.9       41.55      51.72      8.33       25.56      69.02      37.42
29/1/10      40.74      28.33      39         53.1       21.56      42.85      49.4       47.15
Max. removal 62.85      54.54      50         59.83      52.07      51.39      78.57      57.56
Min. removal 40.74      28.33      39         45.13      8.33       5.46       21.34      16.12
Average      51.94625   44.845     45.73125   54.12      26.55875   24.3475    50.775     45.2925
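The summary rows of Table 1 are simple column statistics over the eight sampling dates; as a check, the short script below recomputes the maximum, minimum and average for the two B.O.D columns (values taken from Table 1) and reproduces the tabulated figures.

```python
# Date-wise % B.O.D removal from Table 1
bod_old = [62.85, 45.45, 53, 57.69, 62.37, 50, 43.47, 40.74]
bod_new = [50.41, 47.1, 53.3, 50, 54.54, 44.18, 30.9, 28.33]

for name, col in [("old plant", bod_old), ("new plant", bod_new)]:
    # max, min and mean over the eight sampling dates
    print(name, max(col), min(col), round(sum(col) / len(col), 5))
# old plant: max 62.85, min 40.74, average 51.94625
# new plant: max 54.54, min 28.33, average 44.845
```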
The contaminants of wastewater must be removed or reduced to an extent that makes safe
discharge to receiving water bodies or land possible, since these effluents ultimately find
their way to receiving water bodies or onto the land. This has necessitated the planning
and implementation of pollution control strategies that would maintain a proper ecological
balance in the environment. The treatment of domestic sewage is simple compared to that
of industrial wastewaters. The treatment method selected should be appropriate, so that it
is economically viable, scientifically sound, technically feasible and socially acceptable.
The sludge generated from the UASB reactor is also analyzed at different heights from
the bottom of the reactor, for parameters such as pH, T.S., T.S.S., T.D.S., V.S.S., VFA
and COD.
Figure 2. Graph showing performance of UASB analysis at different levels, old plant
Figure 3. Graph showing performance of UASB analysis at different levels, new plant
(all levels measured from ground level, G.L.)
Figures 2 and 3 show the performance of the UASB of each plant at different levels. From
the graphs it is concluded that, moving from the bottom to the top of the reactor, the VFA
decreases and the pH increases; at the top of the reactor the pH is therefore neutral, with
a low value of VFA.
3. Report of biogas analysis
The maximum gas generated from the UASB unit is 2811.4 m³/day and the maximum power
generated is 304.66 kWh; the inflow (in m³/day) is variable. The gas generated from the
old plant's UASB unit is diverted to the new plant, where the process is carried out.
The generated gas is analyzed for parameters such as CH4, O2, CO2, CO, H2S and
calorific value (CV), and the maximum calorific value is obtained. The quality of the gas is
therefore good, and it is also favourable from an environmental point of view.
4. Conclusion
For the Atladara sewage treatment plant, the results obtained are as under:
According to the design parameters for sewage, the B.O.D removal efficiency for a
UASB is 75 to 85% and the C.O.D removal efficiency is 74 to 78%.
In our case, the B.O.D and C.O.D removal efficiencies of the old and new plants are
lower than these theoretical values, which is not satisfactory. This is due to the
variation in load, so proper maintenance of the UASB unit is required.
The gas generated from the UASB unit is high, and the generation of electricity from the
J 208 GS Leanox Jenbacher gas engine is correspondingly high. GEB power is therefore
not required for the operation of the plant, which is a significant saving.
The wastewater generated is to be treated and disposed of into the receiving stream,
so that the environment remains safe.
References
A report on sewage treatment plants of Vadodara Mahanagar Seva Sadan, Gujarat, India.
Bal, A. S. and Dhagat, N. N., Upflow anaerobic sludge blanket (UASB) reactor: a review,
Indian Journal of Environmental Health, 2001, 43.
Punmia, B. C., Waste Water Engineering, Laxmi Publications Ltd.
Information about best practice: green energy generation from sewage gas, Surat
Municipal Corporation.
Metcalf & Eddy, Wastewater Engineering: Treatment and Reuse, Tata McGraw-Hill
Publishing Company Limited, fourth edition, 2003.
Arceivala, S. J. and Asolekar, S. R., Wastewater Treatment for Pollution Control and
Reuse, Tata McGraw-Hill Publishing Company Limited, third edition.
www.cpcb.nic.in
www.gpcb.nic.in
Role of Fuzzy Set Theory in Air Pollution Estimation

Subrata Bera¹*, D. Datta², A. J. Gaikwad¹

¹Nuclear Safety Analysis Division, AERB, Mumbai – 400 094, Maharashtra, India
²Computational Radiation Physics Section, HPD, BARC, Mumbai – 400 085, Maharashtra, India

*Corresponding author (e-mail: sbera@aerb.gov.in)
The measurement of air pollution from various industries is carried out using the standard
Gaussian plume model. The conventional method of estimating air pollution concentration
assumes crisp/precise input meteorological parameters. In general, identification of the
weather class (class A to class F) as per the Pasquill–Gifford (PG) definition is based on
linguistic propositions for the combined variation of wind speed and solar radiation. A more
detailed description of the weather class on a scale of 6 has been developed using a set of
fuzzy rules known as a fuzzy rule base. Fuzzy set theory is essential to address the
impreciseness of the input variables. Interval arithmetic with the fuzzy alpha-cut technique
has been developed to carry out classical arithmetic operations on fuzzy variables with
triangular membership functions, and is used here to estimate the concentration from the
Gaussian plume function. All interim parameters arising during evaluation of the Gaussian
plume function have been tracked, to obtain information on the variation of the membership
function due to the mathematical operations. Finally, the plume concentration at 1.6 km
from the source has been estimated with various degrees of fuzzy uncertainty, and the most
sensitive input parameter is identified through evaluation of the Hartley-like measure.
1. Introduction
Air pollution is a major environmental and public concern worldwide due to rapid industrial
growth. [Dhanesh et al., 2012] have discussed various issues involved in the quantification of
pollutant concentration through various models, such as the Gaussian plume model, puff model,
Lagrangian particle model, box model, computational fluid dynamics codes, etc. These
conventional models and tools assume precise values of the input parameters. There are,
however, many issues in quantifying the source term, meteorological data and demographic
data: quantification of these parameters is based on linguistic propositions, which may not be
defined precisely. Fuzzy Set Theory (FST) was first introduced by Zadeh (1965). FST deals
with imprecise variables through suitable Membership Functions (MFs), which are the key
element of FST. Variables may have various types of MFs, such as triangular, trapezoidal,
single-sided trapezoidal, Gaussian, sigmoid, etc. [Timothy J. Ross, 2010] has discussed various
aspects of FST. [M. Saeedi et al., 2008] have attempted to quantify the Weather Stability
Class (WSC) from the combined variation of wind speed and cloud cover. [Rituparna Chutia et
al., 2013] have studied non-probabilistic sensitivity and uncertainty analysis of atmospheric
dispersion using the Hartley-like measure and a fuzziness measure. The AERB safety guide
[AERB, 2008] prescribes various formulae for atmospheric dispersion modelling, and also gives
a look-up table for WSC for the combined variation of wind speed and solar radiation. WSC is
defined in six classes (class A to class F). These definitions are imprecise in the sense that a
boundary region may carry the combined effect of more than one class. In this paper, an
attempt is made to quantify WSC on a scale of 6 for the combined variation of wind speed and
solar radiation using a set of fuzzy rules, commonly known as a Fuzzy Rule Base (FRB).
Estimation of air pollution concentration requires many classical mathematical operations,
which are not directly valid in FST. Equivalent operations in FST have been developed by
[Palash Dutta et al., 2011] using Fuzzy Interval Techniques (FIT) with α-cut. These techniques
have been adopted here to derive the equivalent classical operations to be performed for
evaluation of the Gaussian Plume Function (GPF). All interim parameters generated during the
various equivalent mathematical operations are tracked in this analysis, and triangular
membership functions are considered for the input parameters. The final estimate of pollution
concentration equivalent to classical theory is the core set of its membership function, i.e. the
set where the membership function equals unity; for a triangular membership function the core
reduces to the prototype value, which corresponds to the equivalent classical estimate.
Deviation from the prototype value is estimated through the upper quartile, middle quartile,
lower quartile and support sets obtained by the α-cut technique on the membership function.
Sensitivity of the input parameters is evaluated using the Hartley-like measure; the lowest
Hartley-like measure corresponds to the most sensitive parameter.
2. Atmospheric dispersion modeling with the Gaussian plume model
[AERB, 2008] prescribes formulae for atmospheric dispersion estimation covering various
conditions, such as ground-level release, ground-level concentration from an elevated
release, centre-line ground-level concentration, etc. For simplicity, the formula for the
centre-line ground-level concentration has been used in this analysis; it is given in equation (1).
c(x, 0, 0) = [Q / (π u Sy Sz)] exp(−H² / (2 Sz²))        (1)

where
c: steady-state concentration of the effluent at (x, 0, 0) (Bq/m³)
Q: source strength (Bq/s)
u: mean wind speed (m/s)
Sy: cross-wind dispersion parameter (m)
Sz: vertical dispersion parameter (m)
H: effective stack height (stack height + plume rise)
Under neutral conditions, plume rise = 3 Di (w/u), where
Di: stack internal diameter (m)
w: plume exit velocity (m/s)
The dispersion coefficients Sy and Sz depend on the atmospheric stability class and
increase with downwind distance. The formulae suggested by [AERB, 2008] are:

Sy = Ay x^0.9031   and   Sz = Az x^q + R        (2)
The values for Ay, Az, q and R are given in the Table 1 for different weather class. The definition
of weather stability class for combined effect of wind speed and solar radiation is given in Table 2.
Table 1. Parameters to obtain Sy(x) and Sz(x) [P-G model]

Pasquill   Ay       x < 0.1 km             0.1 km < x < 1.0 km     x > 1.0 km
type                Az      q      R       Az       q      R       Az       q      R
A          0.3658   0.192   0.936  0       0.00066  1.941  9.27    0.00024  2.094  -9.6
B          0.2751   0.156   0.922  0       0.038    1.149  3.3     0.055    1.098  2.0
C          0.2089   0.116   0.905  0       0.113    0.911  0       0.113    0.911  0.0
D          0.1471   0.079   0.881  0       0.222    0.725  -1.7    1.26     0.516  13.0
E          0.1046   0.063   0.871  0       0.211    0.678  -1.3    6.73     0.305  34.0
F          0.0722   0.053   0.814  0       0.086    0.74   0.35    18.05    0.18   48.6

A: extremely unstable; B: moderately unstable; C: slightly unstable;
D: neutral; E: slightly stable; F: moderately stable
Table 2. Modified stability classification table

Wind speed      Stability class for solar insolation Rd (langley/h) during day
u (m/s)         Rd ≥ 50   25 ≤ Rd < 50   12.5 ≤ Rd < 25   Rd < 12.5
u < 2.0         A         A-B            B                D
2 ≤ u < 3       A-B       B              C                D
3 ≤ u < 4       B         B-C            C                D
4 ≤ u < 6       C         C-D            D                D
u ≥ 6           C         D              D                D
3. Results and discussion
3.1 Quantification of weather class in the scale of 6
Wind speed and solar radiation during the day have been considered as imprecise input
parameters. Triangular and one-sided trapezoidal membership functions are used, as
shown in Figures 1 and 2; their mathematical representations are given in equations (3) to (5).
Triangular:               μ(x) = max{0, min[(x − l)/(m − l), (n − x)/(n − m)]}     (3)
Right-sided trapezoidal:  μ(x) = max{0, min[(n − x)/(n − m), 1]}                   (4)
Left-sided trapezoidal:   μ(x) = max{0, min[(x − l)/(m − l), 1]}                   (5)

where l, m and n represent the left, middle and right points of the MFs.
Figure 1. Wind speed universe (fuzzy sets VeryLow, Low, Moderate, High and
ExtremelyHigh over 1–8 m/s)
Figure 2. Universe of solar radiation during day (fuzzy sets Low, Moderate, High and
VeryHigh over 0–100 langley/h)
The universe of stability class is represented by assigning discrete numerical values from 1
to 6 at equal intervals of unity, as shown in Figure 3. All combinations of wind speed and
solar radiation have been formed using the Cartesian product technique [Timothy J. Ross,
2010]. Based on the linguistic propositions, the universes of wind speed and solar radiation
have 5 and 4 members respectively, so defining the stability class on the scale from 0 to 6
requires 5 × 4 = 20 fuzzy rules. The centroid method has been used to obtain the stability
class for each combination of wind speed and solar radiation. The final stability class for
wind speeds in the range 0 to 10 m/s and solar radiation in the range 0 to 100 cal/m² is
mapped in the three-dimensional representation shown in Figure 4.
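The rule-base-and-centroid scheme can be sketched in miniature as follows. The membership parameters and the two sample rules below are illustrative assumptions, not the paper's full 20-rule base; min is used for the AND of the antecedents and a weighted centroid over the rule consequents for defuzzification.

```python
def tri(x, l, m, n):
    """Triangular MF of equation (3)."""
    if m == l or n == m:
        return 1.0 if x == m else 0.0
    return max(0.0, min((x - l) / (m - l), (n - x) / (n - m)))

# Assumed fuzzy sets (parameters chosen for illustration only)
wind = {"Low": (0, 2, 3), "Moderate": (2, 3.5, 5)}          # m/s
solar = {"High": (25, 50, 75), "Moderate": (12.5, 25, 50)}  # langley/h
# Rule consequents: stability-class value on the 1-6 universe
rules = [("Low", "High", 1.5),           # unstable, toward class A/B
         ("Moderate", "Moderate", 3.0)]  # toward class C

def stability(u, rd):
    """Mamdani-style inference: min for AND of the antecedents, then a
    weighted-centroid defuzzification over the rule consequents."""
    num = den = 0.0
    for wset, sset, out in rules:
        strength = min(tri(u, *wind[wset]), tri(rd, *solar[sset]))
        num += strength * out
        den += strength
    return num / den if den else None

s = stability(2.5, 40)   # both rules partially fire
assert 1.5 <= s <= 3.0   # defuzzified class lies between the consequents
```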
Figure 3. Universe of stability class (classes A–F on the discrete values 1–6)
Figure 4. FRB stability class in the scale of 6 (surface over wind speed 0–10 m/s and
solar radiation 0–100)
3.2 Air pollution concentration estimation through fuzzy interval arithmetic
Two input parameters, the wind speed and the plume exit velocity, are considered
imprecise in this analysis. Their degree of impreciseness is represented by triangular MFs,
shown in Figure 5. The concentration has been estimated for unit source strength at a
downwind distance of 1.6 km from the source with weather class D, using the fuzzy α-cut
technique of interval arithmetic. The fuzzy equivalents of the arithmetic operations
addition, subtraction, multiplication, division, square and exponential have been carried out
with triangular MFs. The interim parameters produced during GPF evaluation, namely w/u,
plume rise, effective height, the exponential part of the GPF and (exponential part)/u, have
been tracked during the analysis. The resulting MFs for w/u, plume rise and effective
height are shown in Figure 5; those for the exponent, exponent/u and the concentration
are shown in Figure 6.
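The α-cut interval arithmetic behind the w/u step can be illustrated as follows. At each level α, a triangular fuzzy number (l, m, n) yields the closed interval [l + α(m − l), n − α(n − m)], and ordinary interval arithmetic is applied level by level. The triangular numbers for u and w below are illustrative assumptions, not the paper's exact inputs.

```python
def alpha_cut(tfn, a):
    """alpha-cut of a triangular fuzzy number (l, m, n): a closed interval."""
    l, m, n = tfn
    return (l + a * (m - l), n - a * (n - m))

def interval_div(x, y):
    """Interval division [x]/[y], assuming 0 is not contained in [y]."""
    candidates = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(candidates), max(candidates))

u = (2.0, 3.0, 4.0)    # assumed wind speed TFN (m/s)
w = (8.0, 10.0, 12.0)  # assumed plume exit velocity TFN (m/s)

for a in (0.0, 0.5, 1.0):
    lo, hi = interval_div(alpha_cut(w, a), alpha_cut(u, a))
    print(a, lo, hi)
# at alpha = 1 the interval collapses to the prototype ratio 10/3
```

Sweeping α from 0 to 1 and stacking the resulting intervals rebuilds the membership function of w/u, which is how the interim MFs of Figure 5 are obtained.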
Figure 5. MFs for u, w, w/u, plume rise and effective height
Figure 6. MFs for exponent, exponent/u and concentration
From the membership function of concentration, the prototype value equivalent to the
classical-theory estimate is found to be 3.31E-7 for unit source strength.
Figure 7. Ranges of the upper, middle and lower quartile sets of the concentration MF for
various degrees of uncertainty (fuzzy percentiles of 95%, 90%, 80%, 75%, 50% and 25%)
The ranges of the upper quartile, middle quartile and lower quartile sets are [2.24E-7,
9.23E-7], [1.43E-7, 1.64E-6] and [6.24E-8, 2.56E-6] respectively. These ranges represent
various degrees of uncertainty; the concentration results at the various uncertainty levels
are presented in Figure 7.
3.3 Hartley-like measure for input fuzzy variables
[Rituparna Chutia et al., 2013] suggested an expression for the Hartley-like measure of a
triangular fuzzy number A defined on the interval [aL, aM, aR]:

HL(A) = [ (1 + (aR − aL)) ln(1 + (aR − aL)) − (aR − aL) ] / [ (aR − aL) ln 2 ]      (6)
The estimated Hartley-like measures for the input fuzzy variables are given in Table 3.

Table 3. Hartley-like measure for input variables

Variable        Hartley-like measure
Wind speed      0.9347
Exit velocity   2.2720

The minimum value of the Hartley-like measure is found for the wind speed. Hence, the
wind speed is the most sensitive parameter of the Gaussian plume function.
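Equation (6) depends only on the width of the support, d = aR − aL, and is easy to check numerically. The reported measures of Table 3 are reproduced if the supports have widths of 2 m/s (wind speed) and 9.2 m/s (exit velocity); these widths are inferred here from the reported values, not stated explicitly in the paper.

```python
import math

def hartley_like(a_left, a_right):
    """Hartley-like measure, equation (6), of a triangular fuzzy number
    supported on [a_left, a_right]."""
    d = a_right - a_left
    return ((1 + d) * math.log(1 + d) - d) / (d * math.log(2))

print(round(hartley_like(0, 2.0), 4))   # 0.9347  (wind speed, Table 3)
print(round(hartley_like(0, 9.2), 4))   # 2.272   (exit velocity, Table 3)
```

Since HL grows monotonically with the support width, the smallest measure corresponds to the least spread-out input, which is why the wind speed is identified as the most sensitive parameter.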
4. Conclusion
The application of fuzzy set theory to atmospheric dispersion analysis has been
demonstrated. Fuzzification of the Gaussian plume model has been carried out using
triangular membership functions, to take into account imprecise parameters such as the
wind speed and the exit velocity from the stack. The weather stability class has been
redefined on a scale of 6 using a fuzzy rule base, to give a more precise expression for the
dispersion coefficients. The concentration has been estimated using fuzzy equivalent
arithmetic operations on the Gaussian plume function. Non-probabilistic uncertainty has
also been evaluated using fuzzy α-cut techniques. Based on the lowest value of the
Hartley-like measure, the most sensitive parameter is found to be the wind speed.
References
AERB, Atmospheric Dispersion and Modelling, AERB Safety Guide AERB/NF/SG/S-1, 2008.
Nagrale, D. B., Bera, S., Deo, A. K., Rao, R. S. and Gaikwad, A. J., A report on decision
support system: important consideration for acceptance, AERB Report
AERB/NSAD/TR/2012/03, 2012.
Saeedi, M., Fakhraee, H. and Sadrabadi, M. R., A fuzzy modified Gaussian air pollution
dispersion model, Res. J. Environ. Sci., 2008, 2(3), 156-169.
Dutta, P., Boruah, H. and Ali, T., Fuzzy arithmetic with and without using α-cut method: a
comparative study, International Journal of Latest Trends in Computing, 2011, 2(1).
Chutia, R., Mahanta, S. and Datta, D., Non-probabilistic sensitivity and uncertainty
analysis of atmospheric dispersion, Annals of Fuzzy Mathematics and Informatics, 2013,
5(1), 213-228.
Ross, T. J., Fuzzy Logic with Engineering Applications, 3rd Edition, John Wiley & Sons, 2010.
Selection of Material for Press Tool using Graph Theory and
Matrix Approach (GTMA)
S. R. Gangurde, Sudish Ray*
K.K. Wagh Institute of Engineering Education & Research, Nasik, Maharashtra, India.
*Corresponding author (e-mail: sudish600@rediffmail.com)
Materials selection is a difficult task due to the immense number of different available
materials. Materials play a crucial role throughout the design and manufacturing
process. In this paper, the Graph Theory and Matrix Approach (GTMA) is applied to
decision making for press tool material selection. A material selection index (MSI) is
used to evaluate and rank the press tool materials. The MSI is obtained from a press
tool material selection attributes function, which in turn is derived from the press tool
material selection attributes digraph. The digraph is developed considering the
important attributes required for selection of press tool material. The approach will help
a decision maker solve the press tool material selection problem.
1. Introduction
An ever increasing variety of materials is available today, with each having its own
characteristics, applications, advantages, and limitations. When selecting materials for
engineering designs, a clear understanding of the functional requirements for each individual
component is required and various important criteria or attributes need to be considered. Material
selection attribute is defined as an attribute that influences the selection of a material for a given application. With the ever-growing number of candidate materials, these selection decisions have become increasingly complex. There is a need for simple, systematic, and logical methods or mathematical tools to guide decision makers in considering a number of selection attributes and their interrelations. Thus, efforts need to be made to identify the attributes that influence material selection for a given engineering design, to eliminate unsuitable alternatives, and to select the most appropriate alternative using simple and logical methods. Materials are sometimes chosen by trial and error or simply on the basis of what has been used before; although this approach frequently works, it gives no assurance that the best material has been found, and the selection of a material for a specific purpose remains a lengthy and expensive process. Various multi-attribute decision-making (MADM) methods and different optimization tools have been proposed
by the past researchers to aid the material selection process. Decision analysis is concerned with
those situations where a decision maker has to choose the best alternative among several
candidates while considering a set of conflicting criteria. In order to evaluate the overall
effectiveness of the candidate alternatives and select the best material, the primary objective of
an MADM method-based material selection approach is to identify the relevant material selection
criteria for a particular application, assess the information relating to those criteria and develop
methodologies for evaluating those criteria in order to meet the designer's requirements. Decision
making problem is the process of finding the best option from all of the feasible alternatives.
2. Literature review
The objective of any material selection procedure is to identify appropriate selection
attributes, and obtain the most appropriate combination of attributes in conjunction with the real
requirement. Various approaches have been proposed in the past to help address the issue of material selection. Shanian and Savadogo (2006) introduced an approach that applies the ELECTRE method to material selection by producing a material selection decision matrix and performing a criteria sensitivity analysis. Rao and Padmanabhan (2007) presented a methodology for selection of a rapid prototyping (RP) process that best suits the end use of a given product or
part using graph theory and matrix approach. The index is obtained from an RP process selection attributes function, obtained from the RP process selection attributes digraph. Rao and Padmanabhan (2008) introduced a methodology for the selection of the best product end-of-life (EOL) scenario using digraph and matrix methods; an 'EOL scenario selection index' is proposed to evaluate and rank the alternative product EOL scenarios. Prasenjit Chatterjee et al. (2009) introduced a methodology to solve the materials selection problem using two multi-criteria decision-making (MCDM) approaches and compared their relative performance for a given material selection application; the first approach is VIKOR, a compromise ranking method, and the other is ELECTRE, an outranking method. Maniya and Bhatt (2010) implemented a novel tool, the preference selection index (PSI), to select the best alternative from the given alternatives without deciding the relative importance of the attributes. Rao and Patel (2010) proposed a novel MADM method which considers the objective weights of importance of the attributes as well as the subjective preferences of the decision maker to decide the integrated weights of importance of the attributes. Ali Jahan et al. (2011) proposed a new version of the VIKOR method, which covers all types of criteria with emphasis on the compromise solution; it can help designers and decision makers reach more robust decisions, especially in biomedical material selection applications. Prasenjit Chatterjee et al. (2011) proposed the complex proportional assessment (COPRAS) and evaluation of mixed data (EVAMIX) methods for materials selection. Singh and Rao (2011) proposed a hybrid decision making method combining the graph theory and matrix approach (GTMA) and the analytic hierarchy process (AHP) for selection of the appropriate alternative in the industrial environment. Chatterjee and Chakraborty (2012) focused on the application of four preference-ranking-based MCDM methods for solving a gear material selection problem: extended PROMETHEE II (EXPROM2), complex proportional assessment of alternatives with gray relations (COPRAS-G), ORESTE and operational competitiveness rating analysis (OCRA).
3. Methodology
The main steps of the methodology are as follows:
3.1 Identify the material selection attributes for the given application and short-list the materials on the basis of the identified attributes satisfying the requirements. A quantitative or qualitative value, or its range, may be assigned to each identified attribute.
3.2 After short-listing the materials, find the relative importance (rij) relations between the attributes.
3.3 Normalize the values of the attributes (Ai) for the different alternatives.
3.4 Develop the material selection attributes digraph considering the identified selection attributes and their relative importance. The number of nodes must be equal to the number of attributes considered in step 3.1. The magnitudes of the edges and their directions are determined from the relative importance between the attributes.
3.5 Develop the material selection attributes matrix for the attributes digraph. This is an N x N matrix with diagonal elements Ai and off-diagonal elements rij.
3.6 Obtain the permanent function of the attributes matrix. The value of the permanent function is the material selection index (MSI). Arrange the materials in descending order of MSI.
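The permanent in step 3.6 is the determinant-like expansion of the attributes matrix with all product terms taken positive. The steps above can be sketched as follows; the 3-attribute matrix used for illustration is hypothetical, not the paper's data, and the brute-force expansion over permutations is adequate for the small matrices (up to 8 x 8) used in this kind of problem.

```python
from itertools import permutations

def permanent(matrix):
    """Permanent of a square matrix: the determinant expansion
    with every term taken positive (no sign alternation)."""
    n = len(matrix)
    total = 0.0
    for perm in permutations(range(n)):
        term = 1.0
        for row, col in enumerate(perm):
            term *= matrix[row][col]
        total += term
    return total

def msi(a, r):
    """Material selection index: permanent of the N x N matrix with
    normalized attribute values a_i on the diagonal and relative
    importance values r_ij off the diagonal (r_ji = 1 - r_ij)."""
    n = len(a)
    d = [[a[i] if i == j else r[i][j] for j in range(n)] for i in range(n)]
    return permanent(d)

# Hypothetical 3-attribute example (illustrative values only)
a = [1.0, 0.8, 0.9]                 # normalized attribute values A_i
r = [[0.0, 0.6, 0.7],
     [0.4, 0.0, 0.5],
     [0.3, 0.5, 0.0]]               # r[j][i] = 1 - r[i][j]; diagonal unused
print(round(msi(a, r), 4))
```

Materials are then ranked in descending order of this index, exactly as in step 3.6.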
4. Example
The methodology of press tool material selection using graph theory and matrix approach is now demonstrated.
4.1 In the present work, the attributes considered are non-deforming properties (ND), safety in hardening (SH), toughness (T), resistance to the softening effect of heat (RS), wear resistance (WR), decarburization risk during heat treatment (DR), brittleness (B) and hardness Rc (H), as shown in Table 4.1.
399
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
Table 4.1 Data of press tool material selection attributes [Smith, 1990; Hoffman, 1984]

Material   ND   SH   T    RS   WR   DR   B    H
W1         L    F    G    L    G    L    M    63
O1         G    G    G    L    F    L    L    63
A2         B    B    G    F    G    L    L    62
D2         B    B    F    F    G    L    L    62
S1         F    G    B    F    G    M    L    57
T1         G    G    G    B    B    L    L    64
M1         G    G    G    B    G    L    L    66
H12        G    G    G    B    G    M    L    52

W1 = water hardening tool steels; O1 = oil hardening tool steels; A2 = air hardening die steels; D2 = high-carbon high-chromium die steels; S1 = shock-resisting tool steels; T1 = tungsten high speed steels; M1 = molybdenum high speed steels; H12 = hot working steels.
The objective data of all the attributes are given in Table 4.2, obtained from an 11-point fuzzy scale [Rao, 2007]. The fuzzy scale values are taken as: Low, L = 0.335; Fair, F = 0.410; Medium, M = 0.500; Good, G = 0.745; and Best, B = 0.865.
Table 4.2 Objective data of press tool material selection attributes

Material   ND      SH      T       RS      WR      DR      B       H
W1         0.335   0.410   0.745   0.335   0.745   0.335   0.500   63
O1         0.745   0.745   0.745   0.335   0.410   0.335   0.335   63
A2         0.865   0.865   0.745   0.410   0.745   0.335   0.335   62
D2         0.865   0.865   0.410   0.410   0.745   0.335   0.335   62
S1         0.410   0.745   0.865   0.410   0.745   0.500   0.335   57
T1         0.745   0.745   0.745   0.865   0.865   0.335   0.335   64
M1         0.745   0.745   0.745   0.865   0.745   0.335   0.335   66
H12        0.745   0.745   0.745   0.865   0.745   0.500   0.335   52
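The mapping from the linguistic ratings of Table 4.1 to the crisp values of Table 4.2 is a direct lookup on the fuzzy scale stated above; a minimal sketch:

```python
# Crisp values of the 11-point fuzzy scale terms used in this paper
FUZZY_SCALE = {"L": 0.335, "F": 0.410, "M": 0.500, "G": 0.745, "B": 0.865}

# Qualitative ratings of material W1 from Table 4.1 (ND, SH, T, RS, WR, DR, B)
w1_row = ["L", "F", "G", "L", "G", "L", "M"]
print([FUZZY_SCALE[t] for t in w1_row])   # reproduces the W1 row of Table 4.2
```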
4.2 The relative importance of attributes (rij) is assigned the values given in Table 4.3, obtained from an 11-point scale [Rao, 2007].
Table 4.3 Relative importance matrix (rij) of press tool material selection attributes

Attribute   ND        SH        T         RS        WR        DR        B         H
ND          -         0.545     0.33875   0.4225    0.5       0.5825    0.62375   0.1725
SH          0.455     -         0.31625   0.44125   0.665     0.46      0.3775    0.1425
T           0.66125   0.68375   -         0.46125   0.44125   0.64625   0.33625   0.1725
RS          0.5775    0.55875   0.53875   -         0.43875   0.41875   0.50375   0.23375
WR          0.5       0.335     0.55875   0.56125   -         0.35875   0.5825    0.28625
DR          0.4175    0.54      0.35375   0.58125   0.64125   -         0.545     0.26375
B           0.37625   0.6225    0.66375   0.49625   0.4175    0.455     -         0.28625
H           0.8275    0.8575    0.8275    0.76625   0.71375   0.73625   0.71375   -
4.3 The quantitative values of the press tool material selection attributes given in Table 4.2 are to be normalized. Safety in hardening, toughness, resistance to the softening effect of heat, wear resistance and hardness are beneficial attributes, for which higher values are desirable. The normalized values of these attributes are given in Table 4.4. Non-deforming properties, decarburization risk during heat treatment and brittleness are considered non-beneficial attributes, for which lower values are desirable.
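This normalization can be sketched as follows: a beneficial attribute column is divided by its maximum value, while a non-beneficial column is normalized as the column minimum divided by each value. The two example columns below are taken from Table 4.2, and the printed values reproduce the corresponding columns of Table 4.4.

```python
def normalize(values, beneficial=True):
    """Normalize one attribute column: v/max for beneficial
    attributes, min/v for non-beneficial ones."""
    if beneficial:
        vmax = max(values)
        return [v / vmax for v in values]
    vmin = min(values)
    return [vmin / v for v in values]

# Hardness H (beneficial) for W1, O1, A2, D2, S1, T1, M1, H12
hardness = [63, 63, 62, 62, 57, 64, 66, 52]
print([round(x, 6) for x in normalize(hardness)])               # W1 -> 63/66

# Non-deforming properties ND (non-beneficial)
nd = [0.335, 0.745, 0.865, 0.865, 0.410, 0.745, 0.745, 0.745]
print([round(x, 6) for x in normalize(nd, beneficial=False)])   # W1 -> 0.335/0.335
```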
Table 4.4 Normalized data (Ai) of the press tool material selection attributes

Material   ND         SH         T          RS         WR         DR      B       H
W1         1          0.473988   0.861272   0.387283   0.861272   1       0.67    0.954545
O1         0.449664   0.861272   0.861272   0.387283   0.473988   1       1       0.954545
A2         0.387283   1          0.861272   0.473988   0.861272   1       1       0.939394
D2         0.387283   1          0.473988   0.473988   0.861272   1       1       0.939394
S1         0.817073   0.861272   1          0.473988   0.861272   0.67    1       0.863636
T1         0.449664   0.861272   0.861272   1          1          1       1       0.969697
M1         0.449664   0.861272   0.861272   1          0.861272   1       1       1
H12        0.449664   0.861272   0.861272   1          0.861272   0.67    1       0.787879
4.4 The press tool material selection attributes digraph gives a graphical representation of the attributes and their relative importance for quick visual appraisal.
Figure 4.1 Press tool material selection attributes digraph
4.5 Visual analysis of the digraph becomes difficult and complex as the number of attributes grows. To overcome this constraint, the digraph is represented in matrix form. The matrix D for the press tool material selection attributes digraph shown in Figure 4.1 is represented below.
D =
        ND    SH    T     RS    WR    DR    B     H
  ND    A1    r12   r13   r14   r15   r16   r17   r18
  SH    r21   A2    r23   r24   r25   r26   r27   r28
  T     r31   r32   A3    r34   r35   r36   r37   r38
  RS    r41   r42   r43   A4    r45   r46   r47   r48
  WR    r51   r52   r53   r54   A5    r56   r57   r58
  DR    r61   r62   r63   r64   r65   A6    r67   r68
  B     r71   r72   r73   r74   r75   r76   A7    r78
  H     r81   r82   r83   r84   r85   r86   r87   A8
4.6 The material selection index (MSI) is calculated using the values of Ai and rij for each alternative press tool material. The MSI values are shown in Table 4.5 in descending order.
Table 4.5 Press tool material selection index

Material   MSI
T1         260.612
M1         254.59
S1         224.05
A2         221.58
H12        218.904
W1         205.013
D2         199.934
O1         194.08

5. Results and conclusion
After applying the GTMA method, the result of press tool material selection is shown in Table 4.5. From Table 4.5 it is understood that T1 is the most preferred (best) choice and O1 is the last (worst) choice among the eight material alternatives. The GTMA method suggests the ranking T1 - M1 - S1 - A2 - H12 - W1 - D2 - O1. T1 is considered the best general-purpose high-speed tool steel because of its higher toughness and better wear resistance compared with the other tool steels.
References
Chatterjee Prasenjit, Athawale Manikrao Vijay and Chakraborty Shankar. Materials selection using complex proportional assessment and evaluation of mixed data methods. Materials and Design, 2011, 32, 851–860.
Chatterjee Prasenjit and Chakraborty Shankar. Material selection using preferential ranking methods. Materials and Design, 2012, 35, 384–393.
Chatterjee Prasenjit, Athawale Manikrao Vijay and Chakraborty Shankar. Selection of materials using compromise ranking and outranking methods. 2009, 30, 4043–4053.
Hoffman, G. Edward. Fundamentals of Tool Design. 2nd edition, The Society of Manufacturing Engineers, Michigan, 1984.
Jahan Ali, Mustapha Faizal, Md Yusof Ismail, Sapuan S.M. and Bahraminasab Marjan. A comprehensive VIKOR method for material selection. Materials and Design, 2011, 32, 1215–1221.
Maniya Kalpesh and Bhatt, M.G. A selection of material using a novel type decision-making method: preference selection index method. Materials and Design, 2010, 31, 1785–1789.
Rao, R.V. and Patel, B.K. A subjective and objective integrated multiple attribute decision making method for material selection. Materials and Design, 2010, 31, 4738–4747.
Rao, R.V. and Padmanabhan, K.K. Rapid prototyping process selection using graph theory and matrix approach. 2007, 194, 81–88.
Rao, R.V. and Padmanabhan, K.K. Selection of best product end-of-life scenario using digraph and matrix methods. 2008, 1–18.
Rao, R.V. Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods. Springer-Verlag, London, 2007.
Shanian, A. and Savadogo, O. A material selection model based on the concept of multiple attribute decision making. Materials and Design, 2006, 27, 329–337.
Singh, Dinesh and Rao, R.V. A hybrid multiple attribute decision making method for solving problems of industrial environment. 2011, 2, 631–644.
Smith, D. Die Design Handbook. The Society of Manufacturing Engineers, Michigan, 1990.
Antenna Size Optimization using Metamaterial
Surabhi Dwivedi*, Vivekanand Mishra
ECED, S. V. National Institute of Technology, Surat-395007, Gujarat, India
*Corresponding author (e-mail: sur_2310@yahoo.co.in)
The objective is to develop a methodology to analyze and design a metamaterial substrate for a slotted microstrip antenna. Numerical simulation and theoretical studies are first used to design a metamaterial structure suitable for the antenna substrate; experiments are then used to verify the prediction. Simulating the real-size structure requires full-wave analysis, which needs more memory and takes a longer time; with the effective permittivity and permeability, one can instead use analytic formulas to obtain far-field results. A symmetrical slotted patch antenna using four slots is introduced to design an antenna resonating at three resonant frequencies. Moreover, the design of a patch antenna with slots is presented for different wireless communication applications, e.g., triband circularly polarized antennas for the UHF band. This study also shows the effect of different parameters on the antenna structure.
The deviation in solution time and in the results are the important factors considered. Miniaturization of the patch antenna is at the core of our effort, and the enhancement of bandwidth is obtained by slotting the patch.
1. Introduction
Metamaterials are a broad class of materials that enable us to manipulate the permittivity and permeability, optimizing the physical properties of the radiating patch primarily for improved radiation from the antenna. Recently, there has been growing interest in both the theoretical and experimental study of metamaterials. Many properties and potential applications of left-handed metamaterials have been explored and analyzed theoretically. Emission in metamaterials using an antenna was presented in 2002 by Enoch et al. An MSA in its simplest form consists of a radiating patch on one side of a dielectric substrate and a ground plane on the other side. Radiation from the MSA occurs from the fringing fields between the periphery of the patch and the ground plane. In 1953, Deschamps first proposed the concept of the MSA; practical antennas were developed by Munson and Howell in the 1970s. The numerous advantages of the MSA, which include low weight, small volume, and ease of fabrication, led to the design of several configurations for various applications. With increasing requirements for personal and mobile communications, the demand for smaller and low-profile antennas has brought the MSA to prominence.
Another objective of this paper is to develop a methodology to analyze, design and compare a metamaterial substrate for a microstrip antenna with a patch cover. We first use numerical simulation and theoretical studies to design a metamaterial structure suitable for the antenna substrate, and then use experiments to verify our prediction for comparison purposes. MSAs are manufactured using printed-circuit technology, so mass production can be achieved at low cost.
2. Metamaterial
A metamaterial is a material which gains its properties from its structure rather than directly from its composition. Metamaterials are commonly classified by the signs of permittivity ε and permeability µ: DPS, double positive material; ENG, ε (electrically) negative material; MNG, µ (magnetically) negative material; DNG, double negative material.
3. Slotted patch with metamaterial
After simulating various models with different patch dimensions, the best results are displayed below:
3.1 Standard Substrate dimensions: 52.4mm x 36.2mm x 1.56mm
Figure 1. SRR-TW in FR4 dielectric substrate of antenna
dy x wx = 7mm x 0.5mm, dx x wy = 8mm x 0.5mm, ap= 45.9 mm, bp= 30 mm;
[HFSS plot: return loss dB(S11) vs. frequency, 0.5–4.5 GHz, Setup1: Sweep1; markers m1 (1.4450 GHz, -17.9261 dB), m2 (2.1000 GHz, -16.3489 dB), m3 (2.4100 GHz, -35.1532 dB)]
Figure 2. Return Loss of slotted patch with MTM
Figure 2 shows the simulated return loss (S-parameter S11) with three bands in the 1–4 GHz range. The analysis shows that the slotted patch with metamaterial responds between 1.4450 and 2.4100 GHz for patch dimensions of 30 mm x 45.9 mm. The tribands are shifted further towards the left of the frequency axis with good return-loss values.
3.2 Size Reduction with Split Ring Resonator and Rod inclusions
Substrate dimensions: 40 mm x 30 mm x 1.56 mm
Reduction: 36.7%
[HFSS plot: return loss S11 vs. frequency, 0.5–4.5 GHz, for the MTM-inserted 40 mm x 30 mm antenna; markers m1 (2.1500 GHz, -20.1161 dB), m2 (2.6350 GHz, -19.9743 dB), m3 (3.7200 GHz, -29.5770 dB)]
Figure 3. Return loss graph for the antenna size reduced by up to 36.7%
4. Comparative Analysis
4.1 Comparative analysis for the antenna dimensions, return loss and resonant
frequency shifts
Table 1. Comparative analysis without reduction
4.2 Comparative analysis of Metamaterial included Patch antenna dimensions, return
loss and resonant frequency shifts and percentage reduction
Achieved miniaturization: a size reduction of 36.7% relative to the conventional patch has been achieved. After reduction, the desired return loss and frequency bands can be obtained by varying the parameters of the metamaterial.
Table 2. Comparative Analysis with Reduction
Figure 4. Reduced Metamaterial included patch compared with standard patch
Table 3 Comparative analysis of metamaterial-included patch antenna dimensions and size reduction with conventional patch antenna

Patch antenna                                     W (mm)   L (mm)   H (mm)   Percentage reduction
Conventional design (slotted and chopped patch)   36.2     52.4     1.56     -
Metamaterial included patch                       30       40       1.56     36.7%
5. Conclusion
Triband operation of the slotted patch antenna in the 1–4 GHz frequency band is obtained with optimized results in terms of return loss. It can be concluded that after appropriately slotting the patch, the triband is shifted further towards the left of the frequency axis, i.e. up to 1.4910 GHz. Moreover, all three bands are obtained between 1.4450 and 2.4100 GHz. Circular polarization is obtained by chopping the patch antenna. The important factors considered are the difference in solution time and the deviation in the results. With an increase in patch width from 25 mm to 30 mm, an 8.71% increase in bandwidth is observed. Miniaturization of the patch antenna is at the core of our effort, and the enhancement of bandwidth is obtained by slotting the patch. After inclusion of the metamaterial, the
bandwidth increment is 12.43% as compared with the substrate using teflon as the dielectric material. A size reduction of up to 36.7% is obtained by making parametric changes in the model. It is also concluded that the model with an infinite substrate required fewer triangles to mesh and simulated faster than the model with a specified substrate size. The difference in the resonance frequency is approximately 2.5% and would become greater if the substrate dimensions were decreased further. The difference between the two models will decrease as the specified size of the substrate increases, so a larger substrate can more accurately be represented by an infinite one.
Metamaterials hold great promise for new applications in the megahertz to terahertz bands, as well as at optical frequencies, including super-resolution imaging, cloaking, hyperlensing, and optical transformation.
References
Balanis, C. A. (1982). Antenna Theory, Analysis and Design. John Wiley & Sons, Inc.
Dwivedi, V. V. (2012). Microstrip Patch Antenna using Metamaterial. Lambert Academic Publishing.
Dwivedi S., Bhalani J., Dwivedi V.V., Kosta Y.P. (2011). "Design and Development of a Miniaturized Triband Slotted Patch Antenna using Metamaterials." Proceedings of Second International Conference on Signals, Systems & Automation (ICSSA-11): 189-194.
Dwivedi S., Patel S. (2011). "Theoretical and Numerical Analysis of Slotted Patch Antenna Loaded with Metamaterial." International Journal on Science and Technology, IJSAT-2011-00-000 2(4).
Dwivedi S., Balani J. "Theoretical Analysis of Metamaterials." IETE Proceedings, State Level Paper Contest, Control, Microcomputer, Electronics and Communication (CMEC-2011): 5-9.
Dwivedi V.V., Kosta Y. P., Jyoti R. (2008). "An Investigation on Design and Application Issues of Miniaturized Compact Microstrip Patch Antennas for RF Wireless Communication Systems using Metamaterials: A Study." IEEE International RF and Microwave Conference Proceedings: 2-4.
F. Bilotti, M. M., A. Alu, L. Vegni (2006). "Polygonal patch antennas with reactive impedance surfaces." J. Electromagnetic Waves Application 20(2): 169-182.
HU Jun, YAN Chun-sheng, LIN Qing-chun (2006). "A new patch antenna with metamaterial cover." Journal of Zhejiang University SCIENCE A 7(1): 89-94.
Howell, J. Q. (1975). "Microstrip Antennas." IEEE Trans. Antennas Propagation AP-23: 90-93.
H. Mosallaei, K. S. (2004). "Antenna miniaturization and bandwidth enhancement using reactive impedance substrate." IEEE Trans. Antennas Propagation AP-52(9): 2403-2414.
H. Mosallaei, K. S. (2004). "Magneto-dielectrics in electromagnetics: Concept and application." IEEE Trans. Antennas Propagation AP-52(6): 1558-1567.
Kosta Y.P., Dwivedi V. V. (2009). "Design and modeling of a novel double negative metamaterial for broadband application." IEEE Trans. Microw. Theory Tech.
Majid H.A., R. M. K. A., Masri T. (2009). "Microstrip antenna's gain enhancement using left-handed metamaterial structure." Progress in Electromagnetics Research 8: 235-247.
N. Engheta (2002). "An idea for thin, subwavelength cavity resonators using metamaterials with negative permittivity and permeability." IEEE Antennas Wireless Propagation Letters 1: 10-13.
ZHU Fang-ming, H. J. (2007). "Improved patch antenna performance by using a metamaterial cover." Journal of Zhejiang University SCIENCE A 8(2): 192-196.
Material Selection for Cleaner Production using AHP,
PROMETHEE and ORESTE Methods
Rao R.V.1*, F. Bleicher2, S. Goud1
1 S.V. National Institute of Technology, Surat-395 007, Gujarat, India
2 Technical University of Vienna, Vienna, Austria
*Corresponding author (e-mail: rvr@med.svnit.ac.in)
Decision making is a typical task in the design and manufacturing environment. Decision makers face many difficulties in selecting the best choice out of many alternatives. The aim of the present work is to highlight the role of various decision making methods in the design and manufacturing environment. Three methods are considered: AHP, PROMETHEE and ORESTE. For demonstration purposes, a case study on a material selection problem is considered.
1. Introduction
Due to vast improvements in manufacturing processes and machinery, production is much faster and easier nowadays. This widens the scope for the proper selection of machinery, materials, tools, etc. in any kind of manufacturing industry. In a rapidly changing market, any organization must withstand competition, and decision making tools have helped organizations cope with this competition to some extent. In the design and manufacturing environment of the engineering field, decision makers face problems while selecting proper alternatives, and multiple attribute decision making (MADM) methods can be used for this purpose. MADM methods are used to find the best alternative when a predefined number of alternatives is available. Many MADM methods are available, such as the simple additive weighting (SAW) method, the weighted product method (WPM), the Preference Ranking Organisation Method for Enrichment Evaluations (PROMETHEE), the technique for order preference by similarity to ideal solution (TOPSIS), evaluation of mixed data (EVAMIX), the analytic hierarchy process (AHP), Organisation, Rangement Et Synthese De Donnes Relationnelles (ORESTE), operational competitiveness rating analysis (OCRA), the additive ratio assessment (ARAS) method, the grey relational analysis (GRA) method, etc. In this paper, the AHP, PROMETHEE and ORESTE methods are considered to demonstrate their effectiveness in material selection for a given product.
A number of approaches have been proposed for material selection. Rao (2007, 2013) explained MADM methods for decision making in the manufacturing environment using graph theory and fuzzy multiple attribute decision making methods. Zhao et al. (2012) considered grey relational analysis to aid material selection taking environmental evaluation into account. Chakraborty and Chatterjee (2012) described the application of preferential ranking methods for material selection. Rao and Patel (2010) explained the application of PROMETHEE combined with AHP for material selection. Aditya and Rahul (2012) used the VIKOR and TOPSIS methods for selection of soft and hard magnetic materials. Rathod and Kanzaria (2011) considered the AHP and TOPSIS methods for the phase change material selection problem.
It is observed from the literature that researchers have applied many decision making methods to various industrial problems. In the present work, three methods, viz. AHP, PROMETHEE and ORESTE, are considered for material selection for a cleaner production problem.
2. Decision making methods
2.1 Analytical hierarchy process (AHP) method
The AHP method was proposed by Saaty (1980) to solve decision making problems using a systematic procedure. The AHP procedure suggested by Rao (2007) is used in this paper.
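As an illustration only, the derivation of AHP criteria weights can be sketched with the row geometric-mean method, a common approximation to Saaty's principal-eigenvector computation; the 3 x 3 pairwise comparison matrix below is hypothetical, not the one used in this paper.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights by the row geometric-mean
    method: w_i proportional to (prod_j a_ij)^(1/n), normalized to sum to 1."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparison of three criteria on Saaty's 1-9 scale
m = [[1.0,   3.0,   5.0],
     [1 / 3, 1.0,   3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(m)
print([round(x, 4) for x in w])
```

The composite performance score of an alternative is then the weighted sum of its normalized attribute values with these weights.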
2.2 Preference Ranking Organisation Method for Enrichment Evaluations (PROMETHEE)
The PROMETHEE method was proposed by Brans and Mareschal (1982). PROMETHEE is an outranking method for ranking a finite number of alternative actions. It carries out a pairwise comparison of alternatives on each attribute in order to determine partial binary relations denoting the strength of preference of one alternative over another. The PROMETHEE method used in this paper is similar to that used by Rao and Patel (2010).
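As an illustration only, the net-flow computation can be sketched with the simplest ("usual") preference function, where an alternative is fully preferred as soon as it is strictly better on a criterion; the small decision matrix and weights below are hypothetical, not this paper's data.

```python
def promethee_net_flows(decision, weights, beneficial):
    """Net outranking flows phi = phi_plus - phi_minus using the 'usual'
    preference function P(a, b) = 1 if a strictly beats b, else 0."""
    m = len(decision)

    def pref(a, b):
        # weighted aggregated preference of alternative a over alternative b
        total = 0.0
        for j, w in enumerate(weights):
            da, db = decision[a][j], decision[b][j]
            if (da > db) if beneficial[j] else (da < db):
                total += w
        return total

    flows = []
    for a in range(m):
        plus = sum(pref(a, b) for b in range(m) if b != a) / (m - 1)
        minus = sum(pref(b, a) for b in range(m) if b != a) / (m - 1)
        flows.append(plus - minus)
    return flows

# Hypothetical 3 alternatives x 2 criteria (both beneficial)
decision = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]
flows = promethee_net_flows(decision, [0.5, 0.5], [True, True])
print([round(f, 4) for f in flows])   # the best alternative has the highest net flow
```

Alternatives are then ranked in descending order of net flow; the net flows always sum to zero by construction.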
2.3 Organisation, Rangement Et Synthese De Donnes Relationnelles (ORESTE) method
The ORESTE method was proposed by Roubens (1982) and was subsequently developed and improved by Pastijn and Leysen (1989). ORESTE can be used in the absence of numerical attribute weights.
3. Material selection problem for cleaner production
The problem considered in the present work is based on material selection for cleaner production. The details of this problem are available in Zhao et al. (2012).
Table 1. Objective data of the attributes of the problem

Material  EC↓    HHR↓     MRECY↑  MREUS↑  URM↑   MD↓   DR↑    BY-P↓   BY-PRECY↑  WP↓   MC↓   SP↑
A         97     0.0015   94.7    0.13    90.8   8.1   90     0.02    0.12       4.9   42    40
B         93.6   0.0013   95.3    0.16    91     7.3   93     0.017   0.14       4.5   38    39.8
C         89.5   0.0005   94      0.22    91.9   5.8   93.6   0.013   0.16       3.6   32    38.6
D         83     0.0007   96.3    0.11    92.7   6     89     0.013   0.1        2.9   34    35.7
E         80     0.0002   98      0.23    92     5     94     0.011   0.2        2.5   28    31

↑: beneficial attribute; ↓: non-beneficial attribute.
The weights of the attributes are: EC=0.0886, HHR=0.1392, MRECY=0.1012, MREUS=0.1012,
URM=0.0253, MD=0.0569, DR=0.0253, BY-P=0.0569, BY-PRECY=0.0253, WP=0.1012,
MC=0.1518, SP=0.1265.
4. Results of application of the considered MADM methods
The results of the problem are provided in the following sections.
4.1 AHP method
Results obtained after applying the AHP method are summarized in Table 2. The alternative with the maximum composite performance score is chosen as the best material for cleaner production.
Table 2. Ranks of the alternative materials
___________________________________
Material   Score    Rank
A          0.6566   5
B          0.7033   4
C          0.8216   2
D          0.7556   3
E          0.9707   1
___________________________________
It is observed from the results that Material E is the best choice and Material A the last choice among the five alternatives considered.
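The composite performance scores of Table 2 can be reproduced from the Table 1 data and the stated weights. The sketch below assumes best-value normalization (value divided by the column best for beneficial attributes, column best divided by value for non-beneficial ones) followed by a weighted sum; the normalization rule is an assumption, but it matches the published scores closely:

```python
# Decision matrix from Table 1 (rows: materials A-E; the 12 attributes in order)
data = {
    "A": [97, 0.0015, 94.7, 0.13, 90.8, 8.1, 90, 0.02, 0.12, 4.9, 42, 40],
    "B": [93.6, 0.0013, 95.3, 0.16, 91, 7.3, 93, 0.017, 0.14, 4.5, 38, 39.8],
    "C": [89.5, 0.0005, 94, 0.22, 91.9, 5.8, 93.6, 0.013, 0.16, 3.6, 32, 38.6],
    "D": [83, 0.0007, 96.3, 0.11, 92.7, 6, 89, 0.013, 0.1, 2.9, 34, 35.7],
    "E": [80, 0.0002, 98, 0.23, 92, 5, 94, 0.011, 0.2, 2.5, 28, 31],
}
weights = [0.0886, 0.1392, 0.1012, 0.1012, 0.0253, 0.0569,
           0.0253, 0.0569, 0.0253, 0.1012, 0.1518, 0.1265]
beneficial = [False, False, True, True, True, False,
              True, False, True, False, False, True]   # the ↑/↓ flags of Table 1

cols = list(zip(*data.values()))
scores = {}
for m, row in data.items():
    s = 0.0
    for j, v in enumerate(row):
        # beneficial: v / column max; non-beneficial: column min / v
        r = v / max(cols[j]) if beneficial[j] else min(cols[j]) / v
        s += weights[j] * r
    scores[m] = round(s, 4)
print(scores)  # material E scores highest, material A lowest
```

With these assumptions the computed scores for A, B and E agree with Table 2 to four decimal places, supporting the ranking E > C > D > B > A.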
4.2 PROMETHEE method
Results obtained after applying the PROMETHEE method are summarized in Table 3.
Table 3. Net flow values of the alternatives and their ranks
_______________________________
Material   Net flow   Rank
A          -2.4922    5
B          -0.9046    4
C          -0.525     2
D          -0.2782    3
E          3.15       1
_______________________________
It is observed from the results that Material E is the best choice and Material A the last choice among the five alternatives.
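The pairwise outranking idea can be sketched as follows. A simple "usual" preference function is assumed here (full preference as soon as one alternative is strictly better on an attribute; the exact preference function of Rao and Patel (2010) may differ), and the net flows are left unscaled, so they land close to, but not exactly on, the Table 3 values:

```python
# Table 1 decision matrix, stated weights, and beneficial (larger-is-better) flags
data = {
    "A": [97, 0.0015, 94.7, 0.13, 90.8, 8.1, 90, 0.02, 0.12, 4.9, 42, 40],
    "B": [93.6, 0.0013, 95.3, 0.16, 91, 7.3, 93, 0.017, 0.14, 4.5, 38, 39.8],
    "C": [89.5, 0.0005, 94, 0.22, 91.9, 5.8, 93.6, 0.013, 0.16, 3.6, 32, 38.6],
    "D": [83, 0.0007, 96.3, 0.11, 92.7, 6, 89, 0.013, 0.1, 2.9, 34, 35.7],
    "E": [80, 0.0002, 98, 0.23, 92, 5, 94, 0.011, 0.2, 2.5, 28, 31],
}
weights = [0.0886, 0.1392, 0.1012, 0.1012, 0.0253, 0.0569,
           0.0253, 0.0569, 0.0253, 0.1012, 0.1518, 0.1265]
beneficial = [False, False, True, True, True, False,
              True, False, True, False, False, True]

def pref(a, b, j):
    # usual criterion: preference 1 if a is strictly better than b on attribute j
    better = a[j] > b[j] if beneficial[j] else a[j] < b[j]
    return 1.0 if better else 0.0

net = {m: 0.0 for m in data}  # unscaled net outranking flow
for m in data:
    for n in data:
        if m != n:
            net[m] += sum(w * (pref(data[m], data[n], j) - pref(data[n], data[m], j))
                          for j, w in enumerate(weights))
print({m: round(v, 3) for m, v in net.items()})  # E has the highest flow, A the lowest
```

The resulting net flow for material A comes out near -2.48, close to the -2.4922 of Table 3, and the extremes (E best, A worst) are reproduced.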
4.3 ORESTE method
Results obtained after applying the ORESTE method are summarized in Table 4.
Table 4. Mean ranks of the alternatives
____________________________________
Material   Score   Rank
A          438.5   5
B          393     4
C          351     2
D          366.5   3
E          281     1
____________________________________
It is observed from the results that Material E is the best choice and Material A the last choice among the five alternatives considered.
4.4 Comparison of the AHP, PROMETHEE and ORESTE methods
The comparison is based on the final ranking orders given by the AHP, PROMETHEE and ORESTE methods. Table 5 shows the comparison of the three MADM methods considered.
Table 5. Ranks obtained by the different MADM methods for materials selection

Materials   AHP   PROMETHEE   ORESTE   Average rank   Final rank
A           5     5           5        5              5
B           4     4           4        4              4
C           2     2           2        2              2
D           3     3           3        3              3
E           1     1           1        1              1
From the average ranks given in Table 5, it is observed that the alternative material E is the best choice and the alternative material A the last choice among the five materials for cleaner production. The same ranking is obtained by the AHP, PROMETHEE and ORESTE methods. However, the AHP method also yields the weights of the attributes, and its calculations for ranking the alternatives are simpler than those of the other methods. Hence, the AHP method is considered the most suitable for this problem.
5. Conclusion
Three multiple attribute decision making methods, namely the analytic hierarchy process (AHP), the preference ranking organization method for enrichment evaluations (PROMETHEE) and the Organisation, Rangement Et Synthèse De Données Relationnelles (ORESTE) method, are presented in this work. These methods consider the measures of the attributes together with their weights and offer logical decision making approaches. They are general decision making methods that can consider any number of quantitative and qualitative attributes simultaneously and offer objective and simple selection approaches. These techniques can be used for any type of selection problem involving any number of selection attributes.
Acknowledgement
The authors are thankful to the Department of Science and Technology (DST) of India and BMWF of Austria for sanctioning a research project with the help of which the present work has been carried out.
References
Aditya, C. and Rahul, V. Magnetic material selection using multiple attribute decision making approach. Materials & Design, 2012, 36, 1-5.
Brans, J.P. and Mareschal, B. The PROMCALC and GAIA decision support system for MCDA. Decision Support Systems, 1994, 12(4-5), 297-310.
Chakraborty, S. and Chatterjee, P. Material selection using preferential ranking methods. Materials & Design, 2012, 35, 384-393.
Ji-Hyun, L. and Li, T-C. Supporting user participation design using a fuzzy analytic hierarchy process approach. Engineering Applications of Artificial Intelligence, 2011, 24, 850-865.
Pastijn, H. and Leysen, J. Constructing an outranking relation with ORESTE. Mathematical and Computer Modelling, 1989, 12(10/11), 1255-1268.
Rao, R.V. Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods. Springer-Verlag, London, 2007.
Rao, R.V. Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods, Volume 2. Springer-Verlag, London, 2013.
Rao, R.V. and Patel, B.K. Decision making in the manufacturing environment using an improved PROMETHEE method. International Journal of Production Research, 2010, 48(16), 4665-4682.
Rathod, M.K. and Kanzaria, H.V. A methodological concept for phase change material selection based on multiple criteria decision analysis with and without fuzzy environment. Materials & Design, 2011, 32(6), 3578-3585.
Roubens, M. Preference relations on actions and criteria in multicriteria decision making. European Journal of Operational Research, 1982, 10(1), 51-55.
Saaty, T.L. The Analytic Hierarchy Process. McGraw-Hill, New York, 1980.
Zhao, R., Gareth, N., Pauline, D. and Michael, M. Materials selection for cleaner production: an environmental evaluation approach. Materials & Design, 2012, 37, 429-434.
Application of Differential Evolution Algorithms for Optimal
Relay Coordination
Syed Mohammad Zaffar*, Vijay S. Kale
Visvesvaraya National Institute of Technology, Nagpur-440010, Maharashtra, India
*Corresponding author (e-mail: zaffar.gec@gmail.com)
Optimization of overcurrent relay settings is an important problem in electrical engineering, which is generally formulated as a Linear Programming Problem. This can be done by keeping the value of the Plug Setting fixed and determining the optimum value of the Time Multiplier Setting. This paper presents a Differential Evolution algorithm for optimal time coordination of overcurrent relays. The Differential Evolution method, a population-based stochastic function minimizer, has been implemented in MATLAB and successfully tested on two systems: a radial system and a single-end-fed system with parallel feeders. The results obtained by the Differential Evolution technique for both systems are compared with those obtained by a Genetic Algorithm and by the linprog function of the MATLAB Optimization Toolbox.
1. Introduction
Over-current relays (OCRs) have been commonly used as primary protection in distribution systems. They are also used as backup protection in transmission systems. In fact, overcurrent protection is the most widely used form of protection. Mal-operation of backup relays needs to be avoided to reduce power outages. Hence, OCR coordination in distribution networks is an important consideration for protection engineers.
Paithankar and Bhide (2013) stated the relay coordination problem as follows: given the magnitudes of all the loads and the fault currents at all the buses, how should the relays at the various buses be set so that the entire system gets overcurrent protection arranged as primary and backup protection? Proper coordination of relays ensures that there will not be any mal-operation. Optimum coordination of relays is required not only to avoid mal-operation but also to achieve the lowest fault clearing time. Thus, optimal relay coordination deals with minimization of the operating times of the relays under the selectivity constraint.
Several optimization methods for relay coordination have been proposed over the years. In the paper by Turaj (2012), a seeker optimization technique was applied to solve the relay coordination problem. Uthitsunthorn et al. (2011) used the Artificial Bee Colony algorithm for optimal coordination of overcurrent relays and compared its results with Quasi-Newton and particle swarm optimization. Optimal coordination of OC relays based on a genetic algorithm was discussed by Koochaki et al. (2008). Chunlin Xu et al. (2007) proposed a hybrid evolutionary algorithm based on tabu search for optimal coordination of overcurrent relays. The paper by Liu An et al. (2012) deals with a hybrid Nelder-Mead simplex and Particle Swarm Optimization technique for optimal coordination of directional overcurrent relays.
In an optimization problem, if the objective function and all the constraints are linear functions of the variables, the problem is called a Linear Programming Problem (LPP). In this paper the optimal relay coordination problem is considered as an LPP with fixed Plug Setting (PS) of the relays, and the operating time of each relay is considered a linear function of its Time Multiplier Setting (TMS). The solution to this problem has been obtained using the Differential Evolution (DE) algorithm. A MATLAB program has been developed to implement the DE method. The program was successfully tested on various systems, of which two are presented in this paper. The detailed procedure for formulation of the relay coordination problem is explained. The problem is also solved using a Genetic Algorithm (GA) and the linear programming function of MATLAB, and the results are compared.
2. Coordination of overcurrent relays in a radial system
Figure 1: A radial feeder (both relays are non-directional)
A simple radial feeder with two sections is shown in figure 1. For a fault at point F, relay RB is the first to operate. Let the operating time of RB be set to 0.1 s. Relay RA should wait for 0.1 s, plus a time equal to the operating time of the circuit breaker (CB) at bus B, plus the overshoot time of relay RA.
3. Optimal relay coordination problem
The coordination problem of directional overcurrent relays in interconnected power systems can be stated as an optimization problem, where the sum of the operating times of the relays of the system, for near-end faults, is to be minimized:

min z = Σ_{p=1}^{m} t_{p,k}    (1)

where m is the number of relays and t_{p,k} is the operating time of the primary relay p for a near-end fault in zone k (Bedekar et al., 2009).
3.1 Coordination criteria

t_{q,k} − t_{p,k} ≥ STI    (2)

where t_{p,k} is the operating time of the primary relay at k for a near-end fault, t_{q,k} is the operating time of the backup relay for the same near-end fault, and STI is the selective time interval.
3.2 Bound on the relay operating time

t_{p,k,min} ≤ t_{p,k} ≤ t_{p,k,max}    (3)

where t_{p,k,min} and t_{p,k,max} are the minimum and maximum operating times of the relay at k for a near-end fault.
3.3 Relay characteristics
All relays are assumed to be identical and to have the normal IDMT characteristic

t_op = 0.14 (TMS) / (PSM^0.02 − 1)    (4)

where t_op is the relay operating time, TMS is the time multiplier setting and PSM is the plug setting multiplier.
As the pickup currents of the relays are pre-determined from the system requirements, equation (4) becomes

t_op = a (TMS)    (5)

where

a = 0.14 / (PSM^0.02 − 1)    (6)
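Equation (6) is easy to sanity-check numerically. A minimal sketch (the PSM values 13.33 and 10 are example values taken from the Case I data that follows):

```python
def a_const(psm):
    # a = 0.14 / (PSM**0.02 - 1), the constant of equations (5)-(6)
    return 0.14 / (psm ** 0.02 - 1)

# PSM = 13.33 (4000 A fault, 300:1 CT, PS = 1) and PSM = 10 (3000 A, 300:1 CT)
print(round(a_const(13.33), 2), round(a_const(10), 2))  # 2.63 2.97
```

These are the same constants that appear in Table 1 of the Case I calculation.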
Substituting equation (5) into equation (1), the objective function becomes
min z = Σ_{p=1}^{m} a_p (TMS)_p    (7)
4. Differential evolution (DE) algorithm
Price and Storn developed DE to be a reliable and versatile function optimizer that is also easy to use. The various steps of the algorithm are discussed briefly below (Storn and Price, 1997).
4.1 Initialization
Before the population can be initialized, both upper and lower bounds for each parameter must be specified:

x_{i,j} = x_{j,min} + rand_j (x_{j,max} − x_{j,min})    (8)

where rand_j is a random number in (0, 1); the subscript j indicates that a new random value is generated for each parameter, and x_{j,min} and x_{j,max} are the lower and upper bounds of the j-th element of the vector.
4.2 Mutation
Each of the N parameter vectors undergoes mutation, crossover and selection. Once initialized, DE mutates and recombines the population to produce a population of mutant vectors:

v_{i,j} = x_{base,j} + F (x_{p,j} − x_{q,j})    (9)

where v_{i,j} is the mutant vector, F is a weighting factor in (0, 2) and x_{base,j} is the best vector. The indices p and q are randomly chosen, distinct, and different from both the best-vector index and the parent-vector index.
4.3 Crossover

u_{i,j} = v_{i,j} if rand_j ≤ Cr, u_{i,j} = x_{i,j} otherwise    (10)
i = 1, 2, ..., N; j = 1, 2, ..., D

where u_{i,j} is the trial vector. If the random number is less than or equal to Cr (the crossover rate), the parameter is inherited from the mutant v_{i,j}; otherwise the parameter is copied from the parent vector x_{i,j}.
4.4 Selection
If the trial vector U_i has an objective function value equal to or lower than that of its parent vector X_i, it replaces the parent vector in the next generation; otherwise, the parent retains its place in the population for at least one more generation:

X_{i,g+1} = U_{i,g} if F(U_{i,g}) ≤ F(X_{i,g}), X_{i,g+1} = X_{i,g} otherwise    (11)

where the subscript g indicates the generation.
5. Results
The differential evolution (DE) algorithm is applied for optimum coordination of overcurrent relays. Two cases are presented here for demonstration. Detailed calculations are given in Case I; similar calculations are performed in Case II for the formation of the objective function and constraints and for finding the optimum solution.
5.1 Case I
To test the algorithm, the simple radial system shown in figure 1 is considered first. The maximum fault currents just beyond bus A and bus B are 4000 A and 3000 A
respectively, the plug setting of both relays is 1, and the CT ratio for RA is 300:1 and that for RB is 100:1. The minimum operating time of each relay is taken as 0.2 s and the CTI is taken as 0.57 s. The calculation of the constant ai (equation 6) for the relays is shown in table 1.
Table 1: Calculation of the ai constant for the relays

S No.   Fault position      Relay RA                            Relay RB
1.      Just beyond bus A   0.14/((13.33)^0.02 − 1) = 2.63      -----
2.      Just beyond bus B   0.14/((10)^0.02 − 1) = 2.97         0.14/((30)^0.02 − 1) = 2.00

Considering x1 and x2 as the TMS of relays RA and RB respectively, the problem can be stated as

Min z = 2.63 x1 + 2 x2    (12)

subject to

2.97 x1 − 2 x2 ≥ 0.57    (13)
2.63 x1 ≥ 0.2    (14)
2 x2 ≥ 0.2    (15)

This automatically satisfies the lower bound on the value of TMS (x1, x2 ≥ 0.025). The upper limit of TMS for both relays is taken as 1.2.
Table 2: TMS obtained using the DE, GA and linprog methods

Relay   TMS   DE       GA      linprog
RA      x1    0.2599   0.259   0.209
RB      x2    0.101    0.1     0.025
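The DE steps of sections 4.1 to 4.4, applied to the Case I problem (12)-(15) with a simple penalty for constraint violations, can be sketched as follows. The population size, F, Cr, penalty weight and generation count are illustrative choices, not the settings of the MATLAB program described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def penalized(x):
    # Case I objective (12) plus a penalty for violating constraints (13)-(15)
    z = 2.63 * x[0] + 2.0 * x[1]
    viol = (max(0.0, 0.57 - (2.97 * x[0] - 2.0 * x[1]))   # coordination (13)
            + max(0.0, 0.2 - 2.63 * x[0])                 # min operating time (14)
            + max(0.0, 0.2 - 2.0 * x[1]))                 # min operating time (15)
    return z + 1000.0 * viol

lo, hi, NP, D, F, Cr = 0.025, 1.2, 30, 2, 0.8, 0.9
pop = lo + rng.random((NP, D)) * (hi - lo)                # initialization, eq. (8)
fit = np.array([penalized(x) for x in pop])
for _ in range(300):
    best = pop[fit.argmin()]                              # best-so-far base vector
    for i in range(NP):
        p, q = rng.choice(np.delete(np.arange(NP), i), size=2, replace=False)
        v = np.clip(best + F * (pop[p] - pop[q]), lo, hi) # mutation, eq. (9)
        mask = rng.random(D) <= Cr
        mask[rng.integers(D)] = True                      # force one mutant gene
        u = np.where(mask, v, pop[i])                     # crossover, eq. (10)
        fu = penalized(u)
        if fu <= fit[i]:                                  # selection, eq. (11)
            pop[i], fit[i] = u, fu
x1, x2 = pop[fit.argmin()]
print(round(x1, 3), round(x2, 3))  # converges near the DE column of Table 2
```

The sketch converges to roughly x1 = 0.259 and x2 = 0.100, consistent with the DE and GA columns of Table 2 (the active constraints are (13) and (15)).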
5.2 Case II: Parallel feeder
Figure 3. Parallel feeders, single-end-fed system
All relays are assumed to have a plug setting of 1 and a CT ratio of 300:1. Relay 4 will back up relay 2 for a fault at A, and relay 1 will back up relay 3 for a fault at B. The total fault current in each case is taken as 4000 A. The currents seen by the relays and the ai constants for the relays are shown in table 3 (Bedekar et al., 2009).
Table 3: Currents seen by the relays and the ai constants (Case II)

        Fault at A                    Fault at B
Relay   Relay current   ai constant   Relay current   ai constant
1       10              2.97          3.33            5.749
2       3.33            5.749         --              --
3       --              --            3.33            5.749
4       3.33            5.749         10              2.97

-- indicates that the fault is not seen by the relay.
Table 4: TMS obtained using DE, GA and linprog (parallel feeder)

Relay   TMS    DE      GA      linprog
1       TMS1   0.140   0.134   0.129
2       TMS2   0.035   0.035   0.025
3       TMS3   0.035   0.035   0.025
4       TMS4   0.141   0.134   0.129
6. Conclusion
In this paper, the DE algorithm has been applied to the problem of optimum coordination of overcurrent relays in distribution systems, which is basically a highly constrained optimization problem. A program has been developed in MATLAB for finding the optimum time coordination of relays using the DE method. The program can be used for optimum time coordination of relays in a system with any number of relays and any number of primary-backup relationships. Two of the cases are presented in this paper. The results of the DE method are compared with those obtained by GA, using the GA toolbox of MATLAB, and with the linprog function of the MATLAB Optimization Toolbox. It is shown that the DE method is an efficient tool for solving the relay coordination problem.
References
Bedekar, P.P., Bhide, S.R. and Kale, V.S. Optimum coordination of overcurrent relays in distribution systems using dual simplex method. Second International Conference on Emerging Trends in Engineering and Technology (ICETET-09), 2009.
Chunlin, Xu., Xiufen, Zou. and Chuansheng, Wu. Optimal coordination of protection relays using a new hybrid evolutionary algorithm. IEEE CEC 2008, 823-828.
Koochaki, A., Asadi, M.R. and Naghizadeh, R.R. Optimal overcurrent relay coordination using genetic algorithm. 11th International Conference (OPTIM 2008), 197-202.
Liu, An. and Yang, Ming-Ta. Optimal coordination of directional overcurrent relays using NM-PSO technique. IEEE International Symposium on Computer & Control (2012), 678-681.
Paithankar, Y.G. and Bhide, S.R. Fundamentals of Power System Protection. Prentice Hall of India Private Limited, New Delhi, 2013.
Storn, R. and Price, K. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 1997, 11, 341-359.
Turaj, A. Coordination of directional overcurrent relays using seeker algorithm. IEEE Transactions on Power Delivery, 2012, 27(3), 1415-1422.
Uthitsunthorn, D., Pao-La-Or, P. and Kulworawanichpong, T. Optimal overcurrent relay coordination using artificial bees colony algorithm. The 8th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI 2011), 901-904.
Optimization of Hygienic Conditions on Indian Railway
Stations
T.L. Popat1, Jay Brahmakhatri1*, Dinesh Popat2
1 Parul Institute of Technology, Waghodia, Baroda – 391 760, Gujarat, India
2 Manager, Axis Bank, Vadodara – 390 760, Gujarat, India
*Corresponding author (e-mail: jay_civil11@yahoo.com)
Most Indians have experienced a very bad smell while waiting on major railway stations, caused by the use of lavatories during halts. This creates a horrible foul smell on platforms. Besides, it makes the ecosystem unhygienic and hazardous to railway passengers. If the pipe which transfers waste onto the railway track is closed with a round lid, operated by a lever, vacuum or any other appropriate arrangement from the driver's or guard's cabin, the problem can be solved to a great extent. The driver can close the lid when the train is approaching a railway station where it will stop, and open the lid by operating the lever when the train is about 2-12 km away from the station. This innovation will not only reduce the workload of sweepers on railway stations but will also give busy stations a hygienic environment free of foul smell. Although the railways are planning to introduce a controlled discharge model or biodegradable toilets, both these methods may not be economical, and there is always a question mark over how far the proposed systems would be effective in achieving the hygienic goals. Therefore, the authors have tried to devise a methodology which will certainly optimize the hygienic conditions on railway stations.
1. Introduction
Indian Railways caters to the highest number of people compared to any other country in the world. This is because no other mode of transportation in India is economical enough for people to afford to travel by. Besides, given the huge travel demand, no transportation mode other than rail is capable of satisfying it. Looking at the ever-growing demand for rail transportation, it is essential that the system be further expanded and strengthened.
The Indian Railway Board is trying its level best to meet the ever-growing demand by improving travel conditions and providing more facilities to the passengers. However, very little attention has been paid by the railway authorities to the hygienic and cleanliness conditions which nowadays prevail in passenger bogies as well as on railway stations, especially in and around the lavatory blocks and on the track along the railway platforms. The railway board, in collaboration with DRDO, has developed bio-toilets. A controlled discharge model has also been developed, under which the excreta from the toilet is discharged onto the track only when the train gains a speed of more than 30 kmph. Recently, the railway authorities have also considered vacuum-retention toilets similar to the ones in aircraft.
All the above methods, either developed or proposed to tackle the problem of hygienic disposal of toilet waste in railways, are at the experimental stage, and how far they will be successful remains to be seen. The authors of this research paper propose an optimization technique with a very simple, maintenance-free, easy-to-operate and more efficient mechanism to tackle the railway toilet waste disposal problem, which will certainly ensure a more hygienic environment on major railway stations.
2. Literature review
Indian Railways is the main mode of transportation in India. It transports passengers in huge masses from one corner of the country to the other. As it is comparatively economical, safe, convenient and fast, people prefer to travel by train.
Because of the movement of large masses of people, the hygienic conditions on medium and major railway stations are deteriorating day by day. This is because of the human waste which is discharged directly onto the railway track along the platforms, where trains halt for various durations. The longer the halt, the more unhygienic the environment becomes, and corrosion of track fittings also results, as a large number of people use the toilets. This leads to the generation of different types of pests on railway platforms, chiefly flies, rats and a variety of insects. The flies, after sitting on the excreta, fly all over the platform and settle on food articles sold at the various stalls. In this way, a variety of diseases spread easily amongst the passengers and the people living around. Waste mixed with water also breeds mosquitoes, which are likewise dangerous to health.
So far, the railway authorities have not thought seriously about this problem or paid much attention to how to solve it.
In the recent past, however, the railway authorities have signed a Memorandum of Understanding (MoU) with DRDO (Defence Research and Development Organisation), which has developed certain methodologies to tackle this ever-growing problem. The methods are briefly discussed below.
2.1 Controlled discharge toilet system (CDTS)
This system discharges the waste on the run, only after the train speed reaches 30 kmph.
Disadvantages
Almost all the waste would be discharged at one spot, and due to this accumulation it produces a foul and unbearable smell in the surrounding area and may also breed insects, making life miserable for the surrounding colonies. The system requires maintenance of the sensors at regular intervals, which may be costly. A failure in the sensor system may also cause horrible conditions for the passengers travelling in the coaches.
2.2 Vacuum-retention toilets
Similar to the ones in aircraft, these retain the waste in a storage tank.
Disadvantages
These technologies are more expensive due to their inherent complexities and the requirement of extra infrastructure at the terminals.
2.3 Bio-degradable toilets
In this system, the waste is directed to a tank containing bacterial sludge. The sludge then breaks down all solid waste, in an anaerobic process, into gas and water, which are then released onto the railway tracks.
Disadvantages
The functioning of such toilets could be hampered if plastic bottles are thrown into the commode, and it may be difficult to keep a check on this. On the maintenance side, the potency of the bacterial sludge reduces with the passage of time, so a regular cycle of maintenance and refilling of the bacterial sludge will be required.
Figure 1. Bio-Degradable Toilets
3. Proposed method for optimizing hygienic conditions on passenger platforms
It is proposed to provide a round lid at the bottom of every waste disposal pipe in all lavatories. This lid is fastened with hinges to the waste discharging pipe on one side and has a locking system at the other end. When the lock is opened, the lid falls down and remains hanging on the hinges, opening the waste pipe. When the lid is closed, it goes up and gets locked to the opposite side of the discharging pipe.
When the train completes its trip and goes to the yard for cleaning and maintenance, the lids of all lavatories should be left open after the lavatory blocks are cleaned. When the train is ready for its next trip and is brought from the yard to the railway station, the lids should be tightly closed.
In the case of passenger trains which halt at almost every station at gaps of about 10 to 15 minutes, the lids may be kept open; the lids need be closed only when such trains halt at major stations. Similar guidelines may be worked out for different types of passenger trains, such as superfast, express or ordinary passenger trains.
Opening and closing of the hanging lids can be achieved by various alternative arrangements: manual, hydraulic pressure, electric power, some mechanical operation, etc. The Indian Railway authorities may adopt whichever method they find safest, most efficient and most reliable. But it is certain that this innovation will improve the hygienic conditions of all major railway stations in India.
The lids may be operated either from the driver's cabin or from the guard's office. Normally the lids may be opened when the train has travelled about 3 km from the railway platform. This distance should be varied every week so that the accumulation of a large quantity of waste at one location can be avoided.
For example, the distance may be 3 km during the 1st week, 4 km during the 2nd week, 5 km during the 3rd week, and so on.
The above guidelines may be followed for all the railway stations along a route. For highly important stations like New Delhi, Bombay, Ahmedabad, etc., the criteria for fixing the discharge distance can be worked out easily; it is proposed to increase the distances to 3 to 5 times those given above.
Figure 2. Proposed System
This proposed scheme will not only optimize the hygienic conditions on railway platforms, but the goals can also be achieved with minimum initial cost and very little maintenance cost. The system may prove most economical, as no additional infrastructure, materials or manpower are required.
4. Conclusion
The authors of this research paper believe that the system proposed here would be well suited to Indian conditions, where huge masses of people travel by thousands of trains daily. If the job of designing the lid, hinges, locking system and its operation is assigned to a team of mechanical engineering experts, there is no doubt that the proposed system would work satisfactorily for a long time, improving the hygienic environment on railway stations. The local trains operating in metropolitan cities of India like Bombay, Delhi, Madras, etc. have not been considered in this analysis, as normally no lavatory blocks are provided in such trains. But it would not be a problem to include such trains in future if lavatory blocks are provided; the approach would be the same, though the guidelines may differ.
Process Parametric Optimization in Wire Electric Discharge
Machining for P-20 Material
Ugrasen G.1*, H.V. Ravindra1, G.V. Naveen Prakash2, B. Chakradhar3
1 Dept. of Mechanical Engg., PESCE, Mandya-571 401, Karnataka, India
2 Dept. of Mechanical Engg., VVCE, Mysore-1, Karnataka, India
3 S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
*Corresponding author (e-mail: ugrasen.g@gmail.com)
Wire-cut Electric Discharge Machining (WEDM) is a widely accepted non-traditional material removal process used to manufacture components with intricate shapes and profiles, irrespective of hardness. The present study outlines the process parametric optimization in wire EDM for P-20 material. Experimentation was carried out as per Taguchi's L16 orthogonal array. Each experiment was performed under different cutting conditions of pulse-on, current, bed speed, pulse-off and flush rate. Three responses, namely accuracy, surface roughness and volumetric material removal rate, were considered for each experiment. Based on this analysis, the process parameters are optimized. ANOVA is performed to determine the relative magnitude of the effect of each factor on the objective function. Finally, experimental confirmation was carried out to verify the effectiveness of the proposed method.
1. Introduction
Since the discovery of the EDM process by the Soviet scientists B.R. Lazarenko and N.I. Lazarenko nearly five decades ago, research and improvement of the process have continued, aimed at understanding the basic physical phenomena involved. Wire electrical discharge machining (WEDM), an electro-thermal process, has been widely used in the aerospace, nuclear and automotive industries to machine precise, complex and irregular shapes in various difficult-to-machine electrically conductive materials. In this process, the material removal mechanism uses electrical energy and turns it into thermal energy through a series of discrete electrical discharges occurring between the electrode and the workpiece immersed in an insulating dielectric fluid. As the process depends on many parameters, it is a very tedious task to analyze the effectiveness of each parameter, so different techniques are used to analyze the parameters for better utilization of the process. Several experiments were conducted to study the effects of pulse-on, current, bed speed and pulse-off on the accuracy, surface roughness and VMRR. A Taguchi standard orthogonal array was chosen for the design of experiments, and analysis of variance (ANOVA) was performed on the experimental data.
In the past, many researchers have investigated the effects of process parameters on
the workpiece in WEDM. One study examined cutting variables encompassing cutting
speed, peak current and offset distance. Box–Behnken design was employed as the
experimental strategy, and multiple response optimization on dimensional accuracy and
surface roughness was performed using the desirability function for K460 tool steel. Results
showed that both peak current and offset distance have a significant effect on the dimension
of the specimen while peak current alone affects the surface roughness [1]. The influence of
zinc-coated brass wire on WEDM performance was compared with that of high-speed brass
wire, and the effects of seven process parameters, including pulse width, servo reference
voltage, pulse current and wire tension, on process performance parameters (such as
cutting speed, wire rupture and surface integrity) were also investigated. A Taguchi L18
design of experiments (DOE) was applied, with all experiments conducted on a Charmilles
WEDM machine. It
was also found that the peak current and pulse width have significant effect on cutting speed
and surface roughness. The Analysis of Variance (ANOVA) also indicated that voltage,
injection pressure, wire feed rate and wire tension have non-significant effect on the cutting
speed. Scanning Electron Microscopic (SEM) examination of machined surfaces was
performed. Compared with high-speed brass wire, zinc-coated brass wire results in higher
cutting speed and smoother surface finish [2]. A newly developed advanced algorithm named
'teaching–learning-based optimization (TLBO) algorithm' is applied for the process parameter
optimization of selected modern machining processes. The important modern machining
processes identified for the process parameters optimization in this work are ultrasonic
machining (USM), abrasive jet machining (AJM), and wire electrical discharge machining
(WEDM) process. The examples considered for these processes were attempted previously
by various researchers using different optimization techniques such as genetic algorithm
(GA), simulated annealing (SA), artificial bee colony algorithm (ABC), particle swarm
optimization (PSO), harmony search (HS), shuffled frog leaping (SFL) etc. In case of WEDM
process, the TLBO algorithm has given considerable improvement over that of ABC results.
Thus the TLBO algorithm is proved superior over the other advanced optimization algorithms
in terms of results and convergence [3]. A Neuro-Genetic technique was used to optimize the
multi-response of wire electro-discharge machining (WEDM) process. This technique was
developed through hybridization of a radial basis function network (RBFN) and non-dominated sorting genetic algorithm (NSGA-II). The machining was done on 5 vol% titanium
carbide (TiC) reinforced austenitic manganese steel metal matrix composite (MMC). The
process parameters namely pulse on-time and average gap voltage have great influence on
the cutting speed and the kerf width. From the experimental results, an increase in the
average gap voltage leads to the decrease of the cutting speed but increase in the kerf width,
within the parametric range under consideration. The proposed Neuro-Genetic technique was
also compared with the weighted sum method based on single-objective GA. It was found that
the proposed technique is superior to the weighted sum method [4]. Six popular
population-based non-traditional optimization algorithms, i.e. genetic algorithm, particle
swarm optimization, sheep flock algorithm, ant colony optimization, artificial bee colony and
biogeography-based optimization, have been applied to two WEDM processes for single-
and multi-objective optimization. The major performance measures of the WEDM process
generally include material removal rate, cutting width (kerf), surface roughness and
dimensional shift. It is found that although all six algorithms have high potential for achieving
the optimal parameter settings, the biogeography-based algorithm outperforms the others
with respect to optimization performance, quick convergence and dispersion of the optimal
solutions from
their mean [5].
The optimization of process parameters during machining of SiCp/6061 Al metal
matrix composite (MMC) by wire electrical discharge machining (WEDM) using response
surface methodology (RSM) was carried out. Four input process parameters of WEDM
(namely servo voltage (V), pulse-on time (TON), pulse-off time (TOFF) and wire feed rate
(WF)) were chosen as variables to study the process performance in terms of cutting width
(kerf). The analysis of variance (ANOVA) was carried out. ANOVA results show that voltage
and wire feed rate are highly significant parameters and pulse-off time is less significant.
Pulse-on time has an insignificant effect on kerf [6]. A review compared the optimization
techniques used in research of the five years from 2007 to 2011 that applied evolutionary
optimization techniques to machining process parameters of both traditional and modern
machining. Five techniques were considered, namely genetic algorithm (GA), simulated
annealing (SA), particle swarm optimization (PSO), ant colony optimization (ACO) and
artificial bee colony (ABC) algorithm. The literature shows that GA was the most widely
applied by researchers to optimize machining process parameters, with multi-pass turning
the machining operation most often addressed with GA. In terms of machining performance,
surface roughness was mostly studied with GA, SA, PSO, ACO and ABC evolutionary
techniques [7]. The parametric study on EDM process using ultrasonic assisted cryogenically
cooled copper electrode (UACEDM) during machining of M2 grade high speed steel has been
performed. Electrode wear ratio (EWR), material removal rate (MRR) and surface roughness
(SR) were the three responses observed. EWR and SR were found to be lower in the UACEDM
process as compared to conventional EDM for the same set of process parameters, while
MRR was at par with conventional EDM process. Thus in the present work UACEDM process
has been established to be better than conventional EDM process due to better tool life, tool
shape retention ability and better surface integrity [8]. The variation of the fraction of input
discharge energy transferred to the workpiece was studied with the help of
thermo-mathematical models during EDM of tungsten carbide, by varying the machining
parameters current and pulse duration. The data calculated in this study can be further used
in existing thermo-physical models, bringing the models closer to real conditions. The
results obtained showed that the
energy effectively transferred to the workpiece varies with the discharge current and pulse
duration from 6.5% to 17.7%, which proves that the fixed value assumed in the models is not
in line with real EDM process. This study will help in prediction of optimum parameters using
existing thermo-physical models by using the values of current and pulse duration where
maximum fraction of energy is transferred to the workpiece [9]. Multi-parameter optimization
of tungsten carbide–cobalt metal matrix composites was carried out using the desirability
approach, with percentage of cobalt in the composite, pulse-on time, delay time, wire feed,
wire tension, ignition current and dielectric pressure as the input variables. Optimization of
the multiple process variables was carried out using desirability function analysis, and
confirmation experiments were conducted to check the accuracy of the optimized
results. For optimal machining conditions the percentage of cobalt binder phase needs to be
20% within tungsten carbide cobalt metal matrix composites. Using the composite desirability
analysis, surface roughness is decreased and material removal rate is increased [10].
2. Experimental work
The experiments were performed on a CONCORD DK7720C four-axis CNC WED
machine, which allows the operator to choose input parameters according to the material
and height of the workpiece. The WED machine has several special features. Unlike other
WED machines, it uses reusable-wire technology: the wire is not discarded once used, but
is reused through a re-looping wire mechanism. The experimental set-up for data acquisition
is illustrated in Fig. 1. The WEDM process generally consists of several stages: a rough-cut
phase, a rough cut with finishing stage, and a finishing stage; in this WED machine, however,
only one pass is used.
The gap between the wire and the workpiece is 0.02 mm and is constantly maintained
by a computer-controlled positioning system. Molybdenum wire with a diameter of 0.18 mm
was used as the electrode. The control factors and fixed parameters selected are listed in
Table 1. In order to minimize their effects, the fixed factors were held constant as far as
practicable. The control factors were chosen based on a review of the literature, experience
and some preliminary investigations. Experiments were carried out based on Taguchi's L16
orthogonal array. After conducting the experiments, the response values were noted down
and analyzed. Taguchi analysis was conducted to determine the optimal parameters, and
ANOVA was performed to estimate the magnitude of the factors' effects on the responses.
All runs were conducted under the same environmental conditions so that environmental
noise factors were minimized. The response values for the P-20 tool steel material are
shown in Table 2.
Table 1: Machining settings used in experiments

Control factor      Level I   Level II   Level III   Level IV
A  Pulse-on         16        20         24          28
B  Pulse-off        4         6          8           10
C  Current          3         4          5           6
D  Bed speed        20        25         30          35
E  Flush rate       Constant

Figure 1: Experimental set-up
Table 2: Experimental design using L16 orthogonal array

Run  P-on (μs)  P-off (μs)  Current (A)  Bed speed (μm/s)  Accuracy (μm)  Ra (μm)  VMRR (mm³/min)
1    16         4           3            20                2              2.32     4.07833
2    16         6           4            25                6              2.61     5.35219
3    16         8           5            30                4              2.92     6.51672
4    16         10          6            35                6              3.22     7.87251
5    20         4           4            30                3              2.84     5.72501
6    20         6           3            35                8              2.54     4.64865
7    20         8           6            20                3              2.72     6.36009
8    20         10          5            25                7              2.95     5.93262
9    24         4           5            35                5              3.17     6.58706
10   24         6           6            30                4              3.32     7.14313
11   24         8           3            25                6              2.94     6.58855
12   24         10          4            20                8              2.45     4.27493
13   28         4           6            25                3              3.48     7.82627
14   28         6           5            20                6              2.66     5.36738
15   28         8           4            35                8              3.15     7.76476
16   28         10          3            30                9              2.81     6.26666
3. Results and discussions
Figure 2: Main effects plot for P-20 material for accuracy (S/N ratios; smaller is better)
Figure 3: Main effects plot for P-20 material for surface roughness (S/N ratios; smaller is better)
Table 3: Optimized process parameters

Response variable   P-on   P-off   Current   Bed speed
Accuracy            16     4       6         20
SR                  16     4       3         20
VMRR                28     8       6         35

Figure 4: Main effects plot for P-20 material for VMRR (S/N ratios; larger is better)
Figure 2 shows the main effects plot, based on S/N ratio, for P-20 material of thickness
40 mm with accuracy as the response variable; it shows that the factor pulse-off has the
greatest effect on accuracy. Figures 3 and 4 show the corresponding main effects plots for
surface roughness and VMRR; it is clear from these plots that the factor current has the
greatest effect on surface roughness and VMRR. The optimum levels of the control factors
that give good accuracy, surface roughness and VMRR for the P-20 material are
summarized in Table 3. The ANOVA likewise indicates that pulse-off has the greatest effect
on accuracy, while current has the greatest effect on surface roughness and VMRR.
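As an illustration of the Taguchi analysis described above, the following sketch (not the authors' code; written for this paper's single-replicate data) computes the smaller-is-better S/N ratio, S/N = −10·log10(y²), for the accuracy column of Table 2 and averages it by factor level; the level with the highest mean S/N is the preferred setting:

```python
import math

# Factor levels for runs 1..16 of the L16 array and the measured
# accuracy values (in micrometres), transcribed from Table 2.
pon = [16] * 4 + [20] * 4 + [24] * 4 + [28] * 4
poff = [4, 6, 8, 10] * 4
accuracy = [2, 6, 4, 6, 3, 8, 3, 7, 5, 4, 6, 8, 3, 6, 8, 9]

# Smaller-is-better S/N ratio for a single observation per run.
sn = [-10.0 * math.log10(y * y) for y in accuracy]

def mean_sn_by_level(levels, ratios):
    """Average S/N ratio of the runs at each level of one factor."""
    groups = {}
    for lv, s in zip(levels, ratios):
        groups.setdefault(lv, []).append(s)
    return {lv: sum(v) / len(v) for lv, v in groups.items()}

# The preferred level of each factor is the one with the highest mean S/N.
best_pon = max(mean_sn_by_level(pon, sn).items(), key=lambda kv: kv[1])
best_poff = max(mean_sn_by_level(poff, sn).items(), key=lambda kv: kv[1])
print(best_pon[0], best_poff[0])  # → 16 4
```

Running this reproduces the pulse-on and pulse-off entries of Table 3 for accuracy (16 and 4); the same procedure with the larger-is-better ratio S/N = −10·log10(1/y²) applies to VMRR.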
4. Conclusion
This paper has presented an investigation of the optimization and the effect of machining
parameters on accuracy, surface roughness and VMRR in the WEDM process. The level of
importance of the machining parameters was determined using Taguchi's technique, and
optimized process parameters were obtained for accuracy, surface roughness and VMRR.
Based on the ANOVA, pulse-off was found to be the most effective parameter for accuracy,
while current has the greatest effect on surface roughness and VMRR. A verification
experiment was carried out; the results show an accuracy of 1 μm, a surface roughness of
2.24 μm and a VMRR of 8.283 mm³/min.
References

Farnaz Nourbakhsh, K. P. Rajurkar, A. P. Malshe and Jian Cao, "Wire electro-discharge
machining of titanium alloy", The First CIRP Conference on Biomanufacturing, Procedia
CIRP, 2013, 5, 13–18.
Harminder Singh and D. K. Shukla, "Optimizing electric discharge machining parameters for
tungsten-carbide utilizing thermo-mathematical modeling", International Journal of
Thermal Sciences, 2012, 59, 161–175.
Kannachai Kanlayasiri and Prajak Jattakul, "Simultaneous optimization of dimensional
accuracy and surface roughness for finishing cut of wire-EDMed K460 tool steel",
Precision Engineering, 2013, Article in Press.
Norfadzlan Yusup, Azlan Mohd Zain and Siti Zaiton Mohd Hashim, "Evolutionary techniques
in optimizing machining parameters: Review and recent applications (2007–2011)",
Expert Systems with Applications, 2012, 39, 9909–9927.
Pragya Shandilya, P. K. Jain and N. K. Jain, "Parametric optimization during wire electrical
discharge machining using response surface methodology", Procedia Engineering,
2012, 38, 2371–2377.
Probir Saha, Debashis Tarafdar, Surjya K. Pal, Partha Saha, Ashok K. Srivastava and Karabi
Das, "Multi-objective optimization in wire-electro-discharge machining of TiC reinforced
composite through Neuro-Genetic technique", Applied Soft Computing, 2013, 13,
2065–2074.
R. Venkata Rao and V. D. Kalyankar, "Parameter optimization of modern machining processes
using teaching–learning-based optimization algorithm", Engineering Applications of
Artificial Intelligence, 2013, 26, 524–531.
Rajarshi Mukherjee, Shankar Chakraborty and Suman Samanta, "Selection of wire electrical
discharge machining process parameters using non-traditional optimization algorithms",
Applied Soft Computing, 2012, 12, 2506–2516.
V. Muthuraman and R. Ramakrishanan, "Multi parametric optimization of WC-Co composites
using desirability approach", Procedia Engineering, 2012, 38, 3381–3390.
Vineet Srivastava and Pulak M. Pandey, "Effect of process parameters on the performance
of EDM process with ultrasonic assisted cryogenically cooled electrode", Journal of
Manufacturing Processes, 2012, 14, 393–402.
Speed Parameters Optimization for Wind Turbine Generators
using the Teaching-Learning-Based Optimization Algorithm
R.V. Rao*, Y.B. Kanchuva
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
*Corresponding author (e-mail: raoravipudi@gmail.com)
Teaching-learning-based optimization (TLBO) is a recently developed population-based
algorithm inspired by the teaching–learning process in a classroom. In the present
work, the TLBO algorithm is used to select the optimal operating speed parameters for
wind power units. Three speed parameters need to be optimized, namely the rated
speed, cut-in speed and cut-off (furling) speed of the turbine. The aim of the
optimization is to maximize the yearly power yield and turbine usage time, which
depends on optimizing the generated energy and the capacity factor together and thus
implies a lower capital cost for the units. The study showed that the TLBO algorithm
outperformed the other optimization methods used by the previous researcher.
1. Introduction
Wind energy, as a substitute for fossil fuels, is renewable, plentiful, clean and widely
distributed, and produces no greenhouse gas emissions during operation. Its effects on the
surroundings are altogether less hazardous than those of other energy sources.
According to Fahmy A. A. (2012), the production of wind-generated electric power at a
given site depends on variables such as the mean wind speed at the site and the speed
characteristics of the wind turbine. The speed characteristics of a turbine are defined by
three parameters, namely the cut-in (Vc), rated (Vr) and furling (Vf) wind speeds at the hub
height. These speed parameters largely determine the cost, maximum yearly generated
power and capacity factor of the unit. The capacity factor measures the usage time of a
generator and is defined as the ratio between the average power generated over one year
and the nominal (rated) power (Huang and Wan, 2009, 2008; Raju et al., 2004). Maximizing
the power yield requires turbines of high rated power (i.e. a high rated speed) and hence
implies a high capital cost; maximizing the capacity factor requires low rated power units,
which have limited capability to use the wind resource. Since it is not possible to maximize
the power output and capacity factor of a turbine simultaneously, a trade-off between the
two opposing objectives is required.
This paper investigates the use of the TLBO algorithm (Rao et al., 2011) for selecting
the optimal turbine speed parameters. The performance of the TLBO algorithm is compared
with the Bees algorithm, the PSO algorithm (Fahmy A. A., 2012) and the classical Turbine
Selection Index method (Abdel-Hamid et al., 2009) based on the approach of Jansamshetti
and Rau (2001a,b, 1999a,b).
2. Mathematical model
At any given site, the yearly variation of wind speed is expressed by its hourly
frequency distribution curve, which can be described by various statistical probability density
functions. The most frequently applied is the Weibull distribution (Gary, 2006), as it fits the
observed meteorological patterns satisfactorily. The Weibull probability distribution function
is specified by the shape parameter k (dimensionless) and the scale parameter c (m/s). The
model used by Fahmy A. A. (2012) is the most common analytical model for describing the
generation of electrical power Pe from wind energy (Gary, 2006; Powell, 1981). The objective
of this study is to find the optimum operating speed parameters that maximize the PN × CF
product, where PN is the normalized power and CF is the capacity factor (Albadi and
El-Saadany, 2009).
Maximize  PN × CF = (Vr/c)^3 × CF^2

                  = (Vr/c)^3 × [ ( e^{−(Vc/c)^k} − e^{−(Vr/c)^k} ) / ( (Vr/c)^k − (Vc/c)^k ) − e^{−(Vf/c)^k} ]^2 ,   (1)

where Vc < Vr < Vf and the three variables are defined within the intervals:
0.2c ≤ Vc < c,  0.8c ≤ Vr ≤ 3.0c,  2.5c < Vf < 5.0c.
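Eq. (1) can be evaluated directly for any candidate speed set. The sketch below is illustrative only: the helper function names and the example speeds are ours, with c and k taken from Table 1 for the Zafarana site.

```python
import math

def capacity_factor(vc, vr, vf, c, k):
    """Capacity factor CF, the bracketed term of Eq. (1), for a turbine
    with cut-in vc, rated vr, furling vf on a Weibull regime (c, k)."""
    num = math.exp(-(vc / c) ** k) - math.exp(-(vr / c) ** k)
    den = (vr / c) ** k - (vc / c) ** k
    return num / den - math.exp(-(vf / c) ** k)

def objective(vc, vr, vf, c, k):
    """PN × CF = (vr/c)^3 × CF^2, the quantity maximized in Eq. (1)."""
    return (vr / c) ** 3 * capacity_factor(vc, vr, vf, c, k) ** 2

# Zafarana from Table 1: c = 8.23 m/s, k = 2.70. The speed values below
# are arbitrary feasible examples, not the paper's optimized results.
vc, vr, vf = 2.0, 9.0, 25.0
print(round(objective(vc, vr, vf, c=8.23, k=2.70), 4))
```

A maximizer such as TLBO would search (Vc, Vr, Vf) within the stated intervals for the values that make this quantity largest.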
3. Teaching-Learning-Based Optimization Algorithm
Rao et al. (2011) proposed a new efficient optimization method called 'Teaching-Learning-Based
Optimization (TLBO)', based on the influence of a teacher on the output of students in a
classroom. The process of TLBO is divided into two sections, the 'Teacher Phase' and the
'Learner Phase'.
3.1. Teacher phase
A good teacher is one who brings his or her learners up to his or her own level of
knowledge. In practice this is not fully possible: a teacher can only move the mean of a class
up to some extent, depending on the capability of the class, and this follows a random
process depending on many factors. At any iteration i, assume that there are m subjects
(i.e. design variables) and n learners (i.e. population size, k = 1, 2, …, n). Let Mi be the mean
result of the learners in a particular subject j (j = 1, 2, …, m) and Ti be the teacher at iteration
i. Ti will try to move the mean Mi towards its own level, so the new mean will be Mnew. The
solution is updated according to the difference between the existing and the new mean,
given by

Difference_Mean_i = r_i (M_new − T_F M_i),   (2)

where T_F is a teaching factor that decides the value of the mean to be changed, and r_i is a
random number in the range [0, 1]. The value of T_F can be either 1 or 2; it is a heuristic
step, decided randomly with equal probability as

T_F = round[1 + rand(0, 1){2 − 1}].   (3)

This difference updates the existing solution according to the expression

X_new,i = X_old,i + Difference_Mean_i.   (4)

3.2. Learner phase
A learner acquires new knowledge when another learner has more knowledge than him
or her. At any iteration i, two learners Xi and Xj are randomly selected, where i ≠ j, and the
update is

X_new,i = X_old,i + r_i (X_i − X_j)   if f(X_i) < f(X_j),   (5)

X_new,i = X_old,i + r_i (X_j − X_i)   if f(X_j) < f(X_i).   (6)

X_new is accepted if it gives a better function value.
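The two phases above can be sketched in a minimal single-objective form, assuming a minimization setting (the Eq. (1) objective can be handled by minimizing its negative). This is an illustrative implementation on the sphere test function, not the code used in the study:

```python
import random

def tlbo(f, bounds, pop_size=20, iters=100, seed=1):
    """Minimize f over box bounds [(lo, hi), ...] with basic TLBO."""
    random.seed(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: move each learner toward the teacher (best solution),
        # away from the scaled class mean, per Eqs. (2)-(4).
        teacher = min(pop, key=f)
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        tf = random.choice([1, 2])  # teaching factor, Eq. (3)
        for i, x in enumerate(pop):
            new = clip([x[j] + random.random() * (teacher[j] - tf * mean[j])
                        for j in range(dim)])
            if f(new) < f(x):  # greedy acceptance
                pop[i] = new
        # Learner phase: interact with a random peer, per Eqs. (5)-(6).
        for i, x in enumerate(pop):
            other = pop[random.choice([k for k in range(pop_size) if k != i])]
            sign = 1 if f(x) < f(other) else -1
            new = clip([x[d] + random.random() * sign * (x[d] - other[d])
                        for d in range(dim)])
            if f(new) < f(x):
                pop[i] = new
    return min(pop, key=f)

# Usage: minimize the sphere function; the optimum lies at the origin.
best = tlbo(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

Note that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count, which is part of its appeal in the paper.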
4. Case study of wind energy in Egypt
Fahmy A. A. (2012) considered 11 locations as the most promising sites for power
generation along the Mediterranean and Red Sea coasts. Table 1 lists the Weibull
parameters calculated for the 11 sites examined in this study (Shata and Hanitsch, 2006a,b).
Table 1: Weibull parameters for selected sites (Fahmy A. A., 2012)

Site           Parameter c   Parameter k      Site        Parameter c   Parameter k
Sallum         4.88          1.46             Port Said   4.77          1.71
Sidi Barrani   4.27          1.47             El Arish    4.56          2.39
Dekhaila       4.53          1.40             Zafarana    8.23          2.70
Alexandria     4.36          1.49             Abu Darag   8.23          3.00
Balteam        3.62          2.46             Hurghada    6.60          2.03
Damiett        3.12          2.49
Figs. 1–3 compare the optimum Vc, Vr and Vf values found by the TLBO algorithm
with the results obtained by Fahmy A. A. (2012) for the eleven sites listed in Table 1.
Figs. 4–7 show the corresponding values of the capacity factor (CF), normalized power
yield (PN), yearly generated electrical energy (E) and the objective function.
Figure 1. Results for the cut-in speed (Vc): theoretical, PSO, Bees and TLBO values.
Figure 2. Results for the rated speed (Vr): theoretical, PSO, Bees and TLBO values.
Figure 3. Results for the cut-off speed (Vf): theoretical, PSO, Bees and TLBO values.
Figure 4. Results for the capacity factor.
Figure 5. Results for the normalized power.
Figure 6. Results for the total energy generated in one year (kWh).
Figure 7. Results for the objective function (fmax): theoretical, PSO, Bees and TLBO values.
5. Conclusion
It can be seen from the results that the overall energy generated by a wind turbine
generator can be enhanced by proper adjustment of the cut-in speed, rated speed and
cut-off speed. The TLBO algorithm obtained good results in terms of power yield at
comparable capacity factors and speed parameters, and outperformed the Bees and PSO
algorithms at all 11 sites of the case study.
References
Abdel-Hamid, R.H., Adma, M.A.A., Fahmy, A.A., Samed, S.F.A. Optimization of wind farm
power generation using new unit matching technique. Proceedings of the 7th IEEE
International Conference on Industrial Informatics, 24-26 June 2009, Cardiff, UK,
INDIN.
Albadi, M.H., El-Saadany, E.F. Wind turbines capacity factor modeling – a novel approach.
IEEE Transactions on Power Systems, 2009, 24 (3).
Fahmy, A.A. Using the Bees Algorithm to select the optimal speed parameters for wind
turbine generators. Journal of King Saud University – Computer and Information
Sciences, 2012, 24, 17-26.
Gary, L.J. Wind Energy Systems, Electronic Edition. Manhattan, KS, 2006.
Huang, S.J., Wan, H.H. A study on generator capacity for wind turbines under various tower
heights and rated wind speeds using Weibull distribution. IEEE Transactions on Energy
Conversion, 2008, 23 (2).
Huang, S.J., Wan, H.H. Enhancement of matching turbine generators with regime using
capacity factor curves strategy. IEEE Transactions on Energy Conversion, 2009, 24
(2).
Jansamshetti, S.H., Rau, V.G. Height extrapolation of capacity factors for wind turbine
generators. IEEE Power Engineering Review, 1999b, 19 (6).
Jansamshetti, S.H., Rau, V.G. Normalized power curves as a tool for identification of
optimum wind turbine generator parameters. IEEE Transactions on Energy
Conversion, 2001a, 16 (3).
Jansamshetti, S.H., Rau, V.G. Optimum siting of wind turbine generators. IEEE Transactions
on Energy Conversion, 2001b, 16 (1).
Jansamshetti, S.H., Rau, V.G. Site matching of wind turbine generators: a case study. IEEE
Transactions on Energy Conversion, 1999a, 14 (4).
Powell, W.R. An analytical expression for the average output power of a wind machine. Solar
Energy, 1981, 26.
Rao, R.V., Savsani, V.J. and Vakharia, D.P. Teaching-learning-based optimization: A novel
method for constrained mechanical design optimization problems. Computer-Aided
Design, 2011, 43, 303-315.
Shata, A.S.A., Hanitsch, R. Evaluation of wind energy potential and electricity generation on
the coast of Mediterranean Sea in Egypt. Renewable Energy, 2006a, 31.
Shata, A.S.A., Hanitsch, R. The potential of electricity generation on the east coast of Red
Sea in Egypt. Renewable Energy, 2006b, 31 (13).
Automatic Monitoring System for Geared Shaft Assembly
R. B. Nirmal1*, R. D. Kokate2
1 Dept. of Electronics, Jawaharlal Nehru Engineering College, Aurangabad, Maharashtra
2 Dept. of Instrumentation, Jawaharlal Nehru Engineering College, Aurangabad, Maharashtra
*Corresponding author (e-mail: rupa6.124@rediffmail.com)
This paper presents a monitoring system for geared shaft assembly based on
programmable logic controller (PLC) technology. The PLC-based assembly system
verifies that the worker performs all operations in the pre-defined sequence. All
sensor outputs are fed to the PLC, which checks the sequence; if the sequence is
correct, the operation-complete indicator lights up, and if the sequence is not
followed, the next operation is not processed. This system is advantageous in
preventing defects from being passed to the next process, which reduces rework,
scrap and wrong shipments.
Keywords: geared shaft assembly, programmable logic controller, monitoring system,
zero quality control.
1. Introduction
In this highly competitive world, the desire and expectation for high-quality, reliable
goods grow on a daily basis. Consumers now have access to products of better design,
quality and functionality, at lower prices, than were previously possible. Quality has become
the dominant issue in the marketplace: customers make their buying decisions based on
product quality, and will sometimes even pay more for what they consider a high-quality
product. Continuous improvement (CI) is one of the core strategies towards manufacturing
excellence and is necessary to achieve good financial and operational performance. It
enhances customer satisfaction and reduces the time and cost to develop, produce and
deliver products and services. Quality has a positive and significant relationship to
performance measures such as process utilization, process output, product costs,
work-in-process inventory levels and on-time delivery. Quality is defined in terms of an
excellent product or service that fulfils or exceeds the customer's expectations
[Besterfield, 2001].
Improvement can take the form of elimination, correction of ineffective processing,
simplifying the process, optimizing the system, reducing variation, maximizing throughput,
reducing cost, improving quality and reducing set-up time [Straker, 1995]. Commonly used
tools for solving problems in Industrial Engineering (IE) include work study, quality control,
line balancing, Poka-Yoke and others [Turner, 1993]. The basic principle of such systems is
to design or develop tools, techniques and processes such that it is impossible, or very
difficult, for people to make mistakes [Richard Chase and Douglas M. Stewart, 1995]. It is a
simple principle that can lead to massive savings: the cost of failure drops dramatically
because no defective part is passed to the next process, so at the end of the process one
can trust that the parts in hand are of good quality [Shigeo Shingo, 1986]. In this project, the
system recognizes whether the worker picks up each gear in the defined sequence, and can
even recognize the position of the gear; when every step is completed successfully, a signal
is sent to the press to insert the circlip. All sensor outputs are fed to the PLC, which checks
the sequence; if the sequence is correct, the operation-complete indicator lights up, and if it
is not followed, the next operation is not processed. It is not possible to eliminate all the
mistakes people make; the basic concept of this project is to avoid problems by correcting
the process. The aim of the system is to eliminate defects in a product by preventing or
correcting mistakes as early as possible [M. Dudek-Burlikowska, D. Szewieczek, 2009].
2. System development
The basis of this project is the Zero Quality Control (ZQC) approach, a technique for
avoiding and eliminating mistakes. The system can recognize that the worker has to pick up
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
washers, gears, guide sleeves, circlips, etc., in the pre-defined sequence. It can also
verify the position of the gear; when the whole process has been completed successfully, a
signal is sent to the press to insert the circlip. All sensor outputs are fed to the PLC, which
checks the sequence; if the sequence is correct, the "operation complete" indicator glows. If
the sequence is not followed, the next operation is not processed.
Following are the steps to be performed to assemble a shaft:
1. Insert shaft in fixture.
2. Insert washer on shaft.
3. Insert gear on shaft.
4. Oil dispenser operates to lubricate gear.
5. Air jet on gear 1 to rotate it.
6. Guide sleeve lifter 1 on.
7. Place guide sleeve on to shaft.
8. Detect position of guide sleeve on shaft.
9. Selection of circlip with help of sensor.
10. Press will be ready for operation.
11. Push button for press operation.
12. Give signal to lifter 1 cylinder to go down.
End of sequence of Operation.
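The sequence interlock described by these steps can be sketched in software as a simple state machine. The sensor event names below are illustrative assumptions; the actual system implements this check as ladder logic on the PLC:

```python
# Minimal sketch of the assembly-sequence interlock described above.
# Event names are hypothetical; the real check runs as ladder logic on the PLC.

EXPECTED_SEQUENCE = [
    "shaft_in_fixture",     # step 1
    "washer_on_shaft",      # step 2
    "gear_on_shaft",        # step 3
    "guide_sleeve_placed",  # step 7
]

class SequenceMonitor:
    """Accepts sensor events only in the pre-defined order."""

    def __init__(self, expected):
        self.expected = expected
        self.step = 0
        self.complete = False  # drives the "operation complete" indicator

    def on_sensor(self, event):
        """Feed one sensor event; return True if it matches the expected step."""
        if self.complete or event != self.expected[self.step]:
            return False  # out-of-order part: block the next operation
        self.step += 1
        if self.step == len(self.expected):
            self.complete = True  # signal the press to insert the circlip
        return True

monitor = SequenceMonitor(EXPECTED_SEQUENCE)
monitor.on_sensor("shaft_in_fixture")    # accepted
ok = monitor.on_sensor("gear_on_shaft")  # washer skipped: rejected
print(ok)  # False
```

A skipped washer is rejected immediately, so the press is never enabled for an incomplete assembly, which is the mistake-proofing behaviour the steps above describe.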
The worker has to insert the various elements, such as washers, bearings, gears and
guide sleeves, in the pre-defined sequence listed above. The system can also verify the
position of the gear; when the whole process has been completed successfully, a signal is
sent to the press to insert the circlip. The heart of the system is the PLC, which controls
the operation of the system. Various sensors, such as IR proximity sensors and inductive
proximity sensors, are connected to the PLC to sense the parameters. The purpose of this
device is to prevent operators from omitting parts during assembly. Workers tend to forget
small parts such as connectors in a wire harness; because the part is small, the tendency
to forget exists.
[Figure 1 depicts the system layout: a Messung micro PLC with I/O channels connected to a
panel, indicators, a pneumatic press and an air jet; inductive sensors 1 and 5 and IR sensors
2-4 monitor the shaft, washer, gear, guide sleeve and circlip on the working table at the
worker's position.]
Figure 1. Block Diagram of PLC Based geared shaft assembly sequence detection &
monitoring system
The following sections give detailed information about the key elements used in the
system:
1. PLC
2. Programming of PLC
3. Sensors
4. Electro-pneumatics installation
3. Messung's micro PLC
A programmable logic controller (PLC) is solid-state equipment designed to perform
logical decision making for industrial control applications. The PLC acts as a total
replacement for hard-wired relay logic, with an effective reduction in wiring and panel size
and, of course, an increase in flexibility and reliability. Experience shows that the majority
of faults in systems using PLCs are due to external causes, such as malfunctioning of field or
external devices like sensors, limit switches and push buttons. Keeping in mind the lower
I/O requirements of smaller machines and plants that still use conventional electrical
systems, Messung has come up with its Micro PLC. The Micro PLC is a unique product,
offering the most advanced features in a compact size. It integrates the benefits of small-size
controllers with the modularity of larger PLCs, and has a wide range of basic configurations
and expansion modules that allow more flexibility and easy selection. Efficiency increases when
the Micro PLC is combined with a man-machine interface, which provides an effective dialogue
between the machine and the operator. Access to parameters and diagnostic information in
the PLC ensures optimal utilization of the machine. The benefits include shorter start-up
time, simpler operation and improved productivity. The small and complete Micro is
complemented by a variety of expansion units, both discrete and analog, that tailor a control
system to exact requirements in a cost- and space-effective way. The versatility of the Micro
saves training and engineering costs across a range of applications.
3.1 Features of micro PLC
1. Compact and economical.
2. Easily mountable: DIN rail or back-panel mounting.
3. Uses the latest SMD technology.
4. Four-digit alphanumeric display (optional) for showing timer values, counter values,
process parameters, fault messages, etc.
5. Choice of either AC (220 V) or DC (24 V) powered Micro PLC.
6. High-speed counter input up to 4 kHz.
7. Pulse catch inputs of 500 ms or more.
4. Programming of PLC
A PLC can be reprogrammed through a computer (the usual way), but also through
hand-held programmers (consoles). In practice this means that any PLC can be
programmed through a computer if you have the required programming software. Once the
system is corrected, it is important to load the right program into the PLC again. It is also
good to check from time to time whether the program in the PLC has changed; this helps to
avoid hazardous situations on the factory floor. You can choose which programming language
you wish to use when writing your application:
1. Ladder Diagram
2. Instruction List
3. Function Block Diagram
4. Sequential Function Chart
5. Structured Text
The Micro PLC can be programmed with either the 'PG-308' programming terminal or
the 'DOX-Mini' software. In this work, the DOX-Mini software with ladder diagrams is used
for programming the PLC.
5. Sensors
A sensor is a technical converter, which converts a physical value such as temperature,
pressure, flow, or distance, into a different value which is easier to evaluate. This is usually an
electrical signal such as voltage, current, resistance or frequency of oscillation.
5.1 Inductive sensor:
Figure 2. Inductive Sensor
An inductive sensor is an electronic proximity sensor that detects metallic objects
without touching them. The sensor consists of an induction loop. An electric current
generates a magnetic field which, when the input electricity ceases, collapses and induces
a current that falls asymptotically toward zero from its initial level.
5.2 IR sensor
Optical sensors require both a light source (emitter) and a detector. Emitters produce
light beams in the visible and invisible spectrums using LEDs and laser diodes; detectors are
typically built with photodiodes or phototransistors. The emitter and detector are positioned
so that an object, when present, will block or reflect the beam. A diffuse sensor is a single
unit that uses focused light rather than a reflector.
6. Pneumatics
A fluid power system transmits and controls energy through the use of a pressurized
liquid or gas. A system using a pressurized liquid is called hydraulics, whereas a system
using pressurized air is called pneumatics.
6.1 What can pneumatics do?
The applications of compressed air are almost limitless. A few of them are:
1. Low-pressure air to test fluid pressure in the eyeball
2. Linear or rotary motion in robotic process machines
3. Pneumatic press/vice
6.2 Properties of compressed air
1. Availability
2. Storage
3. Simplicity
4. Choice of movement
5. Economy
7. Result and discussion
The main selection criteria to be considered are the efficiency of the proposed
alternative in eliminating the defect and, more importantly, the cost of implementing the
proposed alternative, as well as the overall quality and productivity performance. The
estimated output of parts produced with the assistance of the device is as follows:
The total output in an eight-hour shift is 1200 complete shaft assemblies.
The results of the comparison between this system and the traditional system are
presented in Table 1.
Table 1. Comparison of traditional & PLC-based systems

Feature          Traditional system   PLC system        Remark
Productivity     925                  1200              About 30% increase in productivity
performance
Quality          5% rejection         0% rejection      With this system errors are detected
awareness                                               immediately during processing
Cost             150000 per year      75000 per year    Cost of implementing the device is lower
                                                        than recruiting a new QA checker
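The productivity and cost figures in Table 1 can be checked with simple arithmetic:

```python
# Productivity and cost figures from Table 1.
traditional_output, plc_output = 925, 1200  # shafts per 8-hour shift
traditional_cost, plc_cost = 150000, 75000  # per year

increase_pct = (plc_output - traditional_output) / traditional_output * 100
annual_saving = traditional_cost - plc_cost

print(round(increase_pct, 1))  # 29.7 -> "about 30% increase in productivity"
print(annual_saving)           # 75000 saved per year
```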
The following graph shows the output comparison between the traditional system and the
PLC-based system.
[Figure 3 plots the cumulative output (0 to 1400 shafts) of the traditional system and the
PLC-based system over an eight-hour shift, sampled hourly.]
Figure 3. Performance analysis
8. Merits of system
Improvements due to the developed system:
1. Enhanced productivity.
2. The highest level of quality can be achieved.
3. Lower quality cost.
4. Enhanced customer satisfaction.
5. Errors are found and mistakes corrected where they occur.
9. Conclusion
This paper has presented the findings of quality improvement using an automatic
monitoring system for the geared shaft assembly process. The problem can be overcome by
using a sensing device; a mistake-proofing device has been proposed and tested.
Successful results were obtained from the previously described scheme, indicating a 30%
436
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
increase in production. With this monitoring system, errors are detected during processing,
which results in zero rejection. Thus, the system proved to be a versatile and efficient control
tool in an automobile industry application.
References
Besterfield, D.H., Quality Control, 7th edition, New Jersey: Prentice Hall International, 2001.
Hugh Jack, "Automatic Manufacturing Systems with PLCs", 5th edition, 2007.
L.A. Bryan, "Programmable Controllers: Theory & Implementation", 2nd edition, 2003.
M. B. Younis, G. Frey, Formalization of PLC programs to sustain reliability, Robotics,
Automation and Mechatronics, 2004 IEEE Conference on Volume 2, 1-3 Dec. 2004
Page(s): 613 – 618
M. Chmiel, E. Hrynkiewicz, The Way of Ladder Diagram Analysis for Small Compact
Programmable Controller, Information System and Technologies, IEEE, 2002, pages 169-173.
M. Dudek-Burlikowska, D. Szewieczek, “The Poka-Yoke method as an improving quality tool
of operations in the process”; Journal of Achievements in Materials and Manufacturing
Engineering volume 36, issue 1 September 2009
Richard Chase and Douglas M. Stewart, “Mistake-Proofing: Designing Errors Out”,
Productivity Press, 1995.
Shigeo Shingo, “Zero Quality Control: Source Inspection and the Poka-yoke System”,
Productivity Press, 1986
Straker, D., A Toolbook for Quality Improvement and problem Solving, New York: Prentice
Hall,1995.
Turner, W.C., Mize, J.H., Case, K.E. and Nazametz, J.W., Introduction to Industrial and
Systems Engineering, New Jersey: Prentice Hall International, 1993.
Surface Texture Improvement on Inconel-718 by Roller
Burnishing Process
P. S. Kamble1*, C. Y. Seemikeri2
1 Mechanical Engg. Dept., Dr. Daulatrao Aher College of Engineering, Karad, Maharashtra
2 Mechanical Engg. Dept., Govt. Polytechnic Karad, 415124, Maharashtra, India
* Corresponding author (email: prasad21389@rediffmail.com)
Burnishing is a cold working process in which a hard roller is pressed against an
irregular surface so that the surface finish improves, along with many other surface
integrity benefits. In this study a single roller burnishing tool is used with an L9
Taguchi orthogonal array. Investigations are made to understand the improvement in
the surface finish of burnished surfaces of Inconel-718, a material that finds
applications in aircraft and land-based gas turbine engines, cryogenic tanks, and
automotive and biomedical components. Speed, feed and number of passes were varied
to examine the surface finish, and roughness data were compared before and after
burnishing. The surface roughness was reduced from 3.41 micron to 0.36 micron by this
process. From the current study, it was found that speed is the dominant factor; the
order of significance, in decreasing order of importance, is speed, feed and number of
passes as per ANOVA.
1. Introduction
A good surface finish is required to minimise friction losses and to provide good corrosion
resistance and high fatigue life. Conventional machining processes leave surface
irregularities, which incur the additional cost of finishing operations. Burnishing is a plastic
deformation process: the pressure generated by the roller(s) exceeds the yield point of the
softer workpiece surface at the point of contact, resulting in a small plastic deformation of
the surface structure of the workpiece. All machined surfaces consist of a series of peaks
and valleys of irregular height and spacing. P. S. Kamble et al. (2012) showed that the
plastic deformation created by roller burnishing displaces material from the peaks into the
valleys by means of cold work under pressure. The result is a mirror-like finish with a tough,
work-hardened, wear- and corrosion-resistant surface. S. Thamizhmnaii et al. (2008)
investigated surface roughness and surface hardness after burnishing of a titanium alloy;
their test results show an improvement in surface finish. P. Ravindra Babu et al. (2009)
used two internal roller burnishing tools to perform roller burnishing on mild steel at
different speeds, observing the variation of surface finish and surface hardness with speed.
Different materials have been investigated by several researchers using different roller
and/or ball burnishing methods to evaluate the behaviour of surface texture. However,
limited studies have been made on novel materials like titanium alloy and Inconel-718, and
these have not been investigated by conventional burnishing methods as per the above
literature review. Hence an attempt has been made in the current research work to conduct
preliminary investigations into the burnishing of these materials.
2. Materials and method
In the current research paper, investigations are made to understand the improvement
in the surface finish of burnished surfaces, along with the influence of the process
parameters, on Inconel-718, a high-strength, corrosion-resistant nickel-chromium alloy.
This novel material finds applications in casings and various formed sheet metal parts for
aircraft and land-based gas turbine engines, and in cryogenic tanks.
Table 1. Composition of INCONEL-718

Element   Ni+Co   Cr      Nb+Ta      Mo        Ti          Al
%         50-55   17-21   4.75-5.5   2.8-3.3   0.65-1.15   0.2-0.8
Figure.1 Specimen of INCONEL 718
Figure 1 shows the specimen prepared for this study. The workpiece is of Inconel-718
material, turned to a diameter of 18 mm on a lathe. The specimen is divided into a number
of patches in order to perform a number of trials. Burnishing is carried out using a single
roller burnishing tool (shown in Figure 2) on the same general-purpose lathe used for
turning. All the experiments were planned according to a Taguchi L9 orthogonal array, with
three significant process parameters at three levels each, as shown in Table 2. The
burnishing force was held constant at an optimum value.
Table 2. Process parameters and their ranges

Parameter       Level 1   Level 2   Level 3
Speed (rpm)     215       543       874
Feed (mm/rev)   0.83      1.81      4.64
No. of passes   1         2         3
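The nine trials of the standard L9 array map onto the three levels of Table 2 as follows (a sketch reproducing the factor combinations used in the experiments):

```python
# Standard L9(3^3) orthogonal array, written out explicitly.
# Each triple gives the level (1-3) of speed, feed and number of passes.
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]

speed = {1: 215, 2: 543, 3: 874}    # rpm
feed = {1: 0.83, 2: 1.81, 3: 4.64}  # mm/rev

trials = [(speed[s], feed[f], n) for s, f, n in L9]
print(trials[0])   # (215, 0.83, 1) -> experiment E1
print(trials[-1])  # (874, 4.64, 2) -> experiment E9
```

Only nine of the 27 possible combinations are run; the array's balance ensures each level of each factor appears three times, which is what makes the response-table averages comparable.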
Figure.2 Roller burnishing tool
The current work is external roller burnishing using the single roller burnishing tool shown
in Figure 2. Surface roughness values were measured before and after burnishing using a
roughness measurement device (Hommelwerke T1000). A comparison has been made with
the help of the profiles plotted by the roughness tester, shown in Figure 3 and Figures 4-12.
There is significant plastic deformation of the higher peaks after burnishing. The roughness
values after burnishing in all trials are tabulated in Table 3.
Table 3. L9 Taguchi orthogonal array & observations

Expt   Speed (rpm)   Feed (mm/rev)   N.O.P.   Ra (microns)   S/N
E1     215           0.83            1        0.39           8.178708
E2     215           1.81            2        0.46           6.744843
E3     215           4.64            3        0.55           5.192746
E4     543           0.83            2        0.38           8.404328
E5     543           1.81            3        0.36           8.87395
E6     543           4.64            1        0.46           6.744843
E7     874           0.83            3        1.05           -0.42379
E8     874           1.81            1        1.11           -0.90646
E9     874           4.64            2        0.92           0.724243
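The S/N column of Table 3 follows the Taguchi smaller-is-better formula; with a single observation per trial it reduces to S/N = -20·log10(Ra), which reproduces the tabulated values:

```python
import math

def sn_smaller_is_better(ra):
    # Taguchi smaller-is-better: S/N = -10*log10(mean(y^2));
    # for a single observation y this is -20*log10(y).
    return -20 * math.log10(ra)

print(round(sn_smaller_is_better(0.39), 6))  # 8.178708 (E1)
print(round(sn_smaller_is_better(0.36), 5))  # 8.87395  (E5, best trial)
```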
Table 4. Response table of S/N ratios (smaller is better)

Level   Speed    Feed     No. of passes
1       6.7054   5.3864   4.6724
2       8.0077   4.9041   5.2911
3       -0.202   4.2206   4.5476
Delta   8.2097   1.1658   0.7435
Rank    1        2        3
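Each entry of the response table is the mean S/N of the three trials run at that factor level; the speed column and its delta, for example, can be reproduced from Table 3:

```python
# S/N values from Table 3 grouped by speed level.
sn_by_speed = {
    215: [8.178708, 6.744843, 5.192746],  # E1-E3
    543: [8.404328, 8.87395, 6.744843],   # E4-E6
    874: [-0.42379, -0.90646, 0.724243],  # E7-E9
}

means = {s: sum(v) / len(v) for s, v in sn_by_speed.items()}
delta = max(means.values()) - min(means.values())

for s in (215, 543, 874):
    print(s, round(means[s], 4))  # 6.7054, 8.0077, -0.202
print(round(delta, 4))            # 8.2097 -> largest delta, hence rank 1
```

Speed's delta (8.2097) dwarfs those of feed and number of passes, which is why speed ranks first in Table 4.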
Table 5. Analysis of variance

Source           DF   SS        MS        F       P
Speed            2    116.808   58.4039   21.81   0.044
Feed             2    2.059     1.0294    0.38    0.722
No. of passes    2    0.951     0.4756    0.18    0.849
Residual error   2    5.356     2.678
Total            8    125.174
SS = Sum of Squares, D.O.F. = Degree of Freedom, MS = Mean of squares.
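The F values in Table 5 are the ratio of each factor's mean square to the residual-error mean square, which can be verified directly:

```python
# Mean squares from Table 5.
ms = {"speed": 58.4039, "feed": 1.0294, "nop": 0.4756}
ms_error = 2.678

for factor, value in ms.items():
    print(factor, round(value / ms_error, 2))
# speed 21.81, feed 0.38, nop 0.18 -- only speed has a large F,
# consistent with its p-value of 0.044
```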
3. Results and discussion
Experiments were carried out as per the design of experiments and the results were
analysed using Minitab version 15. It was found that speed is the significant parameter;
the order of significance, in decreasing order, is speed, feed and number of passes as per
the ANOVA shown in Table 5. The changes in surface roughness due to the variation in
spindle speed, feed and number of passes are shown graphically in Figure 13, obtained
with Minitab. Figures 4-12 show the roughness profile of each experiment.
Figure 3. Roughness before burnishing
Figure 4. Roughness of trial 1
Figure 8. Roughness of trial 2
Figure 9. Roughness of trial 3
Figure 5. Roughness of trial 4
Figure 6. Roughness of trial 5
Figure 7. Roughness of trial 6
Figure 10. Roughness of trial 7
Figure 11. Roughness of trial 8
Figure 12. Roughness of trial 9
[Figure 13 presents main-effects plots of roughness (micron, 0.4-1.0) against speed (215,
543, 874 rpm), feed (0.83, 1.81, 4.64 mm/rev) and number of passes (1, 2, 3).]
Figure.13 Effects of process parameters on roughness
3.1 Effect of feed
As feed increases, roughness starts increasing; the best surface finish is achieved at the
lowest feed, as shown in Figure 13. At 0.83 mm/rev, roughness reduces to 0.6 micron from
the original 3.41 micron. However, feed does not have a significant influence on the surface
finish.
441
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
3.2 Effect of speed
Speed has a significant influence on the response variable. As speed increases from
215 rpm to 543 rpm, roughness reduces, reaching an optimum value of about 0.4 micron.
A further increase in speed causes the roughness to increase, as shown in Figure 13.
3.3 Effect of number of passes
The number of passes is also an important parameter. It was found that with two passes
the surface finish achieved is 0.56 micron. As the number of passes increases, roughness
increases, as shown in Figure 13.
4. Conclusion
It was found that speed is the significant parameter; the order of significance, in
decreasing order of importance, is speed, feed and number of passes as per ANOVA. Roller
burnishing reduces the surface roughness from 3.41 micron to 0.36 micron. From this study
it was found that speed is the dominant factor because it causes the most variation in
roughness. There is considerable research potential for IN-718 material in various
applications.
Acknowledgement
The authors gratefully acknowledge TMC measurement lab, Pune for providing roughness
testing facilities.
References
A. Stoić, et al., "An Investigation of Machining Efficiency of Internal Roller Burnishing",
Journal of Achievements in Materials and Manufacturing Engineering, 2010, Volume 40, Issue 2.
Andrezej Pacana, et al., "Comparison of the Methods for Parameter Significance Evaluation
on the Basis of the Roller Burnishing Process", MPER, Vol. 1, No. 1, May 2010, 17-22.
B. B. Ahuja & U. M. Shirsat, "Parametric Analysis of Combined Turning and Ball Burnishing
Process", Indian Journal of Engineering & Materials Science, Vol. 11, 2004, 391-396.
Dr. Safwan M. A., et al., "Investigation of Roller Burnishing of Zamac5 Alloyed by Copper",
Journal of Applied Science Research, 5(10), 2009, 1796-1801.
K. Palka, et al., "Mechanical Properties & Corrosion Resistance of Burnished X5CrNi 18-9
Stainless Steel", JAMME, 2006, Vol. 16, Issue 1-2.
Khalid S. Rababa, et al., "Effect of Roller Burnishing on the Mechanical Behavior and
Surface Quality of O1 Alloy Steel", Research Journal of Applied Science, Engineering &
Technology, 2011, 3(3), 227-233.
P. Ravindra Babu, T. Siva Prasad, A. V. S. Raju, A. Jawahar Babu, "Effect of Internal Roller
Burnishing on Surface Roughness and Surface Hardness of Mild Steel", Journal of
Scientific & Industrial Research, Volume 68, January 2009, 29-31.
P. S. Kamble, et al., "Experimental Study of Roller Burnishing Process on Plain Carrier of
Planetary Type Gear Box", International Journal of Modern Engineering Research
(IJMER), Vol. 2, Issue 5, 2012, 3379-3383.
S. Thamizhmnaii, et al., "Surface Roughness Investigation and Hardness by Burnishing on
Titanium Alloy", Journal of Achievements in Materials and Manufacturing Engineering,
2008, Volume 28, Issue 2.
S. Hassan, et al., "A Study of Multi-Roller Burnishing on Non-Ferrous Metals", Journal of
Achievements in Materials and Manufacturing Engineering, 2007, Volume 22, Issue 2.
Some Investigations into Surface Texture Modifications of
Titanium Alloy by Conventional Burnishing Method
C. Y. Seemikeri1*, P. S. Kamble2
1 Mechanical Engg. Dept., Govt. Polytechnic Karad, Maharashtra, India
2 Mech. Engg. Dept., Dr. Daulatrao Aher College of Engineering, Karad, Maharashtra, India
* Corresponding author (e-mail: cyseemikeri@gmail.com)
A new field, the 'engineered surface', would be a more effective and economic route to
successful manufacture. Burnishing is a cold working process in which a hard roller is
pressed against an irregular surface so that the surface finish improves, along with
many other surface integrity benefits. In this study a single roller burnishing tool is
used with an L9 Taguchi orthogonal array. Investigations are made to understand the
improvement in the surface finish of burnished surfaces of a titanium alloy, a material
that finds applications in steam turbine blades, structural parts, tanks, the aviation
industry, and automotive and biomedical components. Speed, feed and number of
passes were varied to examine the surface finish, and roughness data were compared
before and after burnishing. The surface roughness was reduced from 5.61 micron to
0.65 micron by this process. From the current study, it was found that feed is the
dominant factor; the order of significance, in decreasing order of importance, is feed,
number of passes and speed as per ANOVA. The results encourage the suitability of
burnishing for high-speed burnishing applications in future.
1. Introduction
A new field, the 'engineered surface', would be a more effective and economic route to
successful manufacture (Stout, 1998). Engineers who want to improve the life of a
component will eventually have to take into consideration its surface integrity. Surface
alterations may include mechanical, metallurgical, chemical, electrical, biological and other
changes which, although confined to a thin surface layer, may limit component quality or,
in some cases, render the surface unacceptable. Surface integrity is a multi-disciplinary
activity which can have a great impact on part function (Poulo Devim, 2010). There are two
aspects to surface integrity: topography characteristics and surface-layer characteristics.
The topography is made up of surface roughness, waviness, errors of form and flaws. The
surface-layer characteristics that can change through processing are plastic deformation,
residual stresses, cracks, hardness, over-aging, phase changes, recrystallization,
intergranular attack and hydrogen embrittlement (Dieter, 1988). Virtually all machining and
surface modification processes affect the surface integrity of the work material by producing
altered surface and subsurface conditions. The alterations differ depending on the nature of
the process; in the case of burnishing, both topography characteristics and surface-layer
characteristics are significantly changed (Brahmankar et al., 2006).
A good surface finish is required to minimise frictional losses and to provide good corrosion
resistance, enhanced surface hardness and high fatigue life. Conventional machining
processes leave surface irregularities, which incur the additional cost of finishing operations.
Burnishing is a plastic deformation process in which the pressure generated by the roller(s)
exceeds the yield point of the softer workpiece surface at the point of contact, resulting in a
small plastic deformation of the surface. All machined surfaces consist of a series of peaks
and valleys of irregular height and spacing. The plastic deformation created by roller
burnishing displaces material from the peaks into the valleys by means of cold work under
pressure. The result is a mirror-like finish with a tough, work-hardened, wear- and
corrosion-resistant surface (Kamble et al., 2012). Hassan carried out ball and roller
burnishing experiments on non-ferrous metals (Hassan and Bsharat, 1996; Hassan, 1997).
Different materials have been investigated by several researchers using different roller
and/or ball burnishing methods to evaluate the behaviour of surface texture (EL-Axir, 2000;
Hamadache et al., 2006; Karzynski, 2007; Jawalkar and
Walia, 2009; Seemikeri and Mahagaonkar, 2010; Sagbas, 2011). However, limited studies
have been made on novel materials like titanium alloy and Inconel-718, and these have not
been investigated by conventional burnishing methods as per the above literature review.
Hence an attempt has been made in the current research work to conduct preliminary
investigations into the burnishing of these materials.
2. Materials and method
In the current research paper, investigations are made to understand the improvement in
the surface finish of burnished surfaces, along with the influence of the process parameters,
on a titanium alloy, which finds applications in steam turbine blades, structural parts, tanks,
the aviation industry, and automotive and biomedical components. Table 1 shows the
composition of the Grade 5 material.
Table 1. Composition of Ti-6Al-4V (Grade 5)

Element   Al    Sn    Ti     V
%         6%    2%    86%    6%

Figure 1. Specimen of titanium alloy material
Figure 1 shows the specimen prepared for this study. The workpiece is of titanium alloy,
turned to a diameter of 18 mm on a lathe. The specimen is divided into a number of patches
in order to perform a number of trials. Burnishing is carried out using a single roller
burnishing tool (shown in Figure 2) on the same general-purpose lathe used for turning.
All the experiments were planned according to a Taguchi L9 orthogonal array, with three
significant process parameters at three levels each, as shown in Table 2. The burnishing
force was held constant at an optimum value.
Table 2. Process parameters

Parameter       Level 1   Level 2   Level 3
Speed (rpm)     215       543       874
Feed (mm/rev)   0.83      1.81      4.64
No. of passes   1         2         3
Figure 2 Roller burnishing tool
The current work is external roller burnishing using the single roller burnishing tool shown
in Figure 2. Surface roughness values were measured before and after burnishing using a
roughness measurement device (Hommelwerke T1000). A comparison has been made with
the help of the profiles plotted by the roughness tester, shown in Figure 3 (a and b). There
is significant plastic deformation of the higher peaks after burnishing. The roughness values
after burnishing in all trials are tabulated in Table 3.
(a)
(b)
Figure 3. Roughness profile (a) before burnishing (b) after burnishing
Table 3. L9 Taguchi orthogonal array & observations

Feed (mm/rev)   Speed (rpm)   No. of passes   Ra (micron)   S/N
0.83            215           1               3.4           -10.6296
0.83            543           2               2.3           -7.23456
0.83            874           3               1.43          -3.10672
1.81            215           2               2.43          -7.71213
1.81            543           3               1.12          -0.98436
1.81            874           1               0.65          3.741733
4.64            215           3               2.4           -7.60422
4.64            543           1               1.01          -0.08643
4.64            874           2               0.95          0.445528
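As in the Inconel-718 study, the S/N column follows the Taguchi smaller-is-better formula, which for a single observation reduces to S/N = -20·log10(Ra) and reproduces the tabulated values:

```python
import math

def sn_smaller_is_better(ra):
    # Taguchi smaller-is-better S/N with one observation: -20*log10(y)
    return -20 * math.log10(ra)

print(round(sn_smaller_is_better(3.4), 4))   # -10.6296 (first trial)
print(round(sn_smaller_is_better(0.65), 6))  # 3.741733 (best trial, Ra = 0.65)
```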
Table 4. Response table of S/N ratios (smaller is better)

Level   Feed     Speed   No. of passes
1       -15.82   -8.64   -5.80
2       -5.12    -7.47   -8.44
3       -1.31    -6.14   -8.02
Delta   14.51    2.50    2.64
Rank    1        3       2
Table 5. Analysis of variance

Source           D.O.F.   SS       MS       F      P
Feed             2        339.59   169.79   2.63   0.276
Speed            2        9.4      4.7      0.07   0.932
No. of passes    2        12.1     6.05     0.09   0.914
Residual error   2        129.18   64.59
Total            8        490.29
SS = Sum of Squares, D.O.F. = Degree of Freedom, MS = Mean squares.
3. Results and discussion
Experiments were carried out as per the design of experiments and the results were
analysed using Minitab version 15. It was found that feed is the significant parameter;
the order of significance, in decreasing order, is feed, number of passes and speed as per
the ANOVA shown in Table 5. The changes in surface roughness due to the variation in
spindle speed, feed and number of passes are shown graphically in Figure 4, obtained with
Minitab.
[Figure 4 presents main-effects plots of roughness (micron, 1.0-3.0) against feed (0.83,
1.81, 4.64 mm/rev), speed (215, 543, 874 rpm) and number of passes (1, 2, 3).]
Figure 4. Effects of process parameters on roughness
3.1 Effect of feed
A better surface finish can be achieved at lower feed, as shown in Figure 4. At 1.81 mm/rev
the roughness drops from 5.61 micron to 1.40 micron. As feed increases, roughness slightly
increases; there is an optimum value of feed within the given combination.
3.2 Effect of speed
As speed increases, surface roughness decreases, which is the usual trend and holds good
in this analysis too. To achieve a better surface finish, the spindle speed should be
maximum. It was found from the results that a surface roughness of 0.65 micron was
achieved at 874 rpm, as shown in Figure 4. The results encourage the suitability of
burnishing for high-speed burnishing applications in future.
3.3 Effect of number of passes
The number of passes is also an important parameter. Surface roughness was found to be maximum at two passes and minimum at one and three passes, as shown in Figure 4. With a higher number of passes the surface finish deteriorates due to excessive contact between the tool and the material; however, a single pass at a lesser force is not enough to deform the material plastically.
4. Conclusions
Feed was found to be the significant parameter; as per the ANOVA, the order of significance, in decreasing order, is feed, number of passes and speed. Roller burnishing reduces surface roughness from 5.61 micron to 0.65 micron. Feed is the dominant factor because it causes the most variation in roughness. The results encourage the extension of burnishing to high-speed burnishing applications in the future.
Acknowledgement
The authors gratefully acknowledge the TMC measurement lab, Pune, for providing roughness testing facilities.
Experimental Investigation on Submerged Arc Welding of Cr-Mo-V Steel and Parameters Optimization
R. V. Rao*, V. D. Kalyankar
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
*Corresponding author (e-mail: ravipudirao@gmail.com)
The welding aspects of 2.25Cr-Mo-V grade steel are investigated in the present work. The material considered has a plate thickness of 35 mm, and the complete experimentation is carried out using an automatic submerged arc welding machine. Owing to the high plate thickness, multi-pass welding is chosen, with a total of six passes in each case in order to maintain uniformity in the weld experimentation. The effect of various input parameters on output responses related to weld bead geometry and weld hardness is studied. Taguchi's orthogonal array is used for the design of experiments, and mathematical models are developed for the process output responses. The models are validated by conducting further experiments and hence can be used for predicting the desired output responses. Optimized sets of solutions are also presented for the chosen ranges of process parameters using an advanced optimization technique named teaching-learning-based optimization.
Keywords: Submerged arc welding, 2.25Cr-Mo-V steel, Multi-pass welding, Weld bead
geometry, Parameters optimization, TLBO algorithm
1. Introduction
Cr-Mo-V grade steel is mainly useful for high-temperature applications, including steam generators, pressure vessels and reactors used in various chemical and power plants. This steel possesses high thermal conductivity and a low thermal expansion coefficient, along with good corrosion resistance and thermal fatigue properties, which make it suitable for these applications. Its elevated-temperature properties also make this steel economically attractive, up to about 600°C, compared to the presently used austenitic stainless steels and high-nickel alloys. However, welding of this steel, particularly for thick plates, has faced various difficulties related to the proper setting of welding parameters. Furthermore, the multi-pass welding of thick sections is subjected to multiple thermal cycles, thereby creating a complex residual stress distribution throughout the thickness (Bae and Kim, 2004). The problem becomes more severe if an appropriate parameter setting is not chosen for the welding.
Welding of Cr-Mo-V grades of steel also demands high heat input. Submerged arc welding (SAW) is one of the important welding processes and is widely preferred in fabricating industries due to its high heat input and high deposition rate. With proper selection of the process control parameters, SAW is also a cost-effective means of obtaining a strong weld joint, and hence it may be considered for welding Cr-Mo-V steel. However, the welding of heavy steel sections is subject to various difficulties such as distortion, residual stress, and softening and hardening of the heat affected zone (HAZ). Hence, there is a strong need to develop the relation between the input and output parameters of the process, which can help to reduce defects and optimise the target goal.
Some research related to the welding aspects of Cr-Mo-V steels has been reported in the literature. McDonald et al. (2002) studied the residual stresses during multi-pass welding of Cr-Mo-V low alloy steel. Various issues related to the replacement of Cr-Mo steels and stainless steels with Cr-Mo-V steel were discussed by Swindeman et al. (2004). The effects on microstructure and wear properties of cladding on Cr-Mo-V steel using the SAW process were discussed by Lu et al. (2004). Storesund et al. (2006) carried out inspections of Cr-Mo-V steel welds and simulated the creep behaviour of the welded parts. Zielinski et al. (2007) studied the structural changes in low alloy cast Cr-Mo-V steel to investigate the creep behaviour of the material. Hilkes and Gross (2009) discussed the past,
present and future of Cr-Mo steel welding and its application in the power generation and petrochemical industries. Naz et al. (2009) carried out failure analysis of low carbon Cr-Mo-V steel weldments by inspecting crack formation in the HAZ. Arivazhagan et al. (2009) focused their research on the weld fusion zone to study the microstructure and mechanical properties of welded parts of Cr-Mo-V steel.
The aim of this work is to understand the influence of SAW process parameters on the welding of 2.25Cr-Mo-V steel through experimentation. The welding experiments are carried out using the SAW process as per the design of experiments, and the effect of various important input parameters on the weld bead geometry and weld hardness is studied. The SAW process involves various input parameters, and the determination of optimum process parameters is very important. Hence, generalised mathematical models are also developed for the various responses, which may be useful for determining the optimum parameter setting. An optimum parameter setting is also suggested in this work using an advanced optimization technique named teaching-learning-based optimization (TLBO).
2. Experimental investigation on 2.25Cr-Mo-V steel
The 2.25Cr-Mo-V grade steel used in the present work has a plate thickness of 35 mm and was obtained for research purposes from a leading industry dealing with the fabrication of various types of pressure vessels using this material. Welding of such a thick material has to be carried out in a number of passes, which is referred to as multi-pass welding. Very little research has been carried out on the welding of such material, as outlined in the first section of this paper. In the present work, efforts are made to study the effects of various parameters on the weld bead geometry and weld hardness of this material, and to develop relations between the input and output parameters of the process which may be useful to end users. The following subsections present all the experimental details of the work, including the result analysis.
2.1 Experimental details
As the work material under consideration has a large thickness of 35 mm, welding is completed in a number of passes. A total of six passes are made in each experiment, i.e. the number of passes is kept constant in order to avoid variation in process performance. The important input parameters considered in this work are welding current (I), voltage (V) and welding speed (S), which have a very significant effect on the output of the process. Initially, trial experiments were conducted on the material to decide the parameter ranges, considering the three parameters in a two-level combination. Keeping in view the feasible ranges of the various parameters reported in the literature, the ranges considered in the present work are: welding current = 400 – 500 A, voltage = 29 – 33 V and welding speed = 4 – 14 cm/min.
In order to limit the total number of experiments without deleting any important combination, Taguchi's L8 orthogonal array is used for the 3 parameters with 2 levels. A similar orthogonal array was also reported by many researchers in their experimental work on the SAW process for certain types of steels (Tarng and Yang, 1998; Tarng et al., 2000). The detailed design of experiments is given in Table 1.
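Since the three factors each have only two levels here, the eight runs reduce to the full 2^3 factorial, so the design matrix of Table 1 can be generated directly. A sketch (row ordering may differ from the table):

```python
from itertools import product

# Two-level settings for the three SAW input parameters
levels = {"I (Amp)": (400, 500), "V (Volts)": (29, 33), "S (cm/min)": (4, 14)}

# Full 2^3 factorial: all 8 combinations of the factor levels
design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for run, row in enumerate(design, start=1):
    print(run, row)
```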
2.2 Material composition
The Cr-Mo-V steel considered for the investigation contains about 2.25% chromium, 1% molybdenum and 0.25% vanadium. This type of steel is found to be highly suitable for the fabrication of heavy-duty products under high-temperature operating conditions. The material is in plate form with 35 mm thickness, and a number of specimens of size 200 x 100 x 35 mm are prepared.
Selection of the appropriate SAW wire and flux is a very important stage of the SAW process for obtaining a good weld joint with the desired properties. The composition of the wire and flux should match the overall chemistry of the weld joint. Matching consumables are required for the welding of Cr-Mo-V steel plates in order to achieve a homogeneous and strong weld joint (Hilkes and Gross, 2009). Keeping in view the various
problems involved with the welding of 2.25Cr-Mo-V steel, a 3.00 mm low-alloyed, copper-coated ESAB wire is selected. The wire is designed for the submerged arc welding of creep-resistant Cr-Mo-V steels and is coded by the manufacturer as OK AUTROD 13.20 (EB-3-21/4Cr1Mo), with electrode classification AWS A5.23: EB3R. An ESAB agglomerated fluoride basic flux, suitable for SAW of Cr-Mo-V steels and best suited to the OK AUTROD 13.20 wire, is selected. The flux is coded by the manufacturer as OK FLUX 10.62 and its grain size varies from 0.2 – 1.6 mm. The flux is preheated before every experiment to reduce the chance of defects.
2.3 Specimen preparation
A number of specimens are prepared in plate form of size 200 x 100 x 35 mm. Edge preparation is done by providing a 30° bevel on one side of each plate so that when two plates with bevelled edges are brought into contact with each other, a 'V'-shaped butt joint is formed. The edges to be welded are cleaned properly in order to ensure a dirt-free surface. The butt joint assembly is then preheated for approximately 30 min and the flux is also preheated. This minimises atmospheric effects and reduces the chance of weld defects.
2.4 SAW machine specification
A 600 A thyristor-controlled automatic SAW machine with a tractor-mounted welding head is used to conduct all the welding experiments. The recommended input supply for the machine is 415 V, 3-phase, 50 Hz. The welding speed can be varied from 0-150 cm/min and the wire feed rate from 50-450 cm/min.
2.5 Welding procedure
Welding of all the test samples of 2.25Cr-Mo-V steel is carried out by adjusting the input parameters on the SAW machine as per the design of experiments. As the specimens are subjected to multi-pass welding, extra care is taken to remove the slag formed on the weld joint at the end of each pass. A common and sufficient time interval is kept between passes in order to allow the removal of excess heat from the specimen.
After completion of welding, all the samples are subjected to post-weld heat treatment and the weld joints are then prepared for post-welding analysis. For this purpose, welded portions are removed from the specimens to measure the required process outputs. In the present work, the outputs considered for analysis are weld bead width, weld reinforcement and weld hardness. As the multi-pass welding in the present work starts with the first pass at the root of the joint in each case, complete weld penetration is obtained in all cases.
3. Results and discussion
After conducting all the experiments, the various responses are measured accurately and the effect of each input parameter on each response is analyzed to identify the critical parameters. Sophisticated equipment with a high degree of accuracy is used for measuring the responses: a toolmaker's microscope for measuring weld bead width and reinforcement, and a micro hardness tester with a load of 500 g and magnification of 30 µm for the Vickers hardness test used to measure weld hardness. A large number of readings are taken on all the samples, and the average readings obtained for weld bead width (BW), weld reinforcement (RF) and weld hardness (H) are presented in Table 1. The results obtained for each response are explained below with their analysis.
It is observed that the weld bead width varies from 45 mm to 62 mm. Even though multi-pass welding with a total of six runs is carried out, appropriate care is taken that each run is located at almost the same region in all the specimens. This ensures that the molten metal is evenly spread on the face of the weld joint for all the specimens and gives the corresponding bead width. The range of hardness obtained is also in good agreement with that reported in the literature. The analysis of all the measured responses is carried out using MINITAB 15 software.
Table 1. Input parameters as per DOE and corresponding average output parameters
Expt No. | I (Amp) | V (Volts) | S (cm/min) | BW (mm) | RF (mm) | H (HV)
1 | 400 | 29 | 4 | 45.228 | 12.379 | 264
2 | 500 | 29 | 4 | 45.446 | 17.312 | 266
3 | 400 | 33 | 4 | 62.281 | 8.488 | 339
4 | 500 | 33 | 4 | 60.023 | 11.855 | 303
5 | 400 | 29 | 14 | 55.033 | 7.546 | 296
6 | 500 | 29 | 14 | 55.588 | 15.158 | 329
7 | 400 | 33 | 14 | 59.866 | 7.339 | 314
8 | 500 | 33 | 14 | 60.346 | 10.588 | 324
(I, V, S are the input parameters; BW, RF, H are the output parameters.)
In the case of weld bead width, it is observed that the bead width is not affected much by welding current, whereas it is significantly affected by voltage. As the voltage is increased from 29 V to 33 V, the main effect on weld bead width increases from about 50 mm to 60 mm. This happens due to the increase in arc length as the voltage increases: the longer arc generates more heat in the weld pool and allows the molten metal to spread over the joint, thereby increasing the weld bead width. The weld bead width also increases with increasing welding speed.
In the case of weld reinforcement, welding current is ranked top in affecting the reinforcement significantly, followed by voltage and welding speed. Weld reinforcement shows an increasing trend with welding current and a decreasing trend with voltage and welding speed. As the welding current increases, the total heat input and hence the deposition rate increase, which in turn increases the weld reinforcement. As mentioned above, increased voltage increases the weld bead width, and this is compensated by the reinforcement; thus the weld reinforcement decreases as the voltage increases.
The larger-the-better approach is considered in the analysis of hardness, and it is observed that voltage and welding speed increase the hardness significantly, whereas welding current shows a negligible effect. The relationship between the input and output parameters is also established by carrying out regression analysis of the obtained results. The estimated coefficients for all the responses are obtained using response surface modelling. Various models are attempted, and all the necessary tests to find the coefficient of determination for each model are carried out. The final models developed, in actual form, for the weld bead width, reinforcement and weld hardness are given by Eqs 1 - 3 respectively.
Weld bead width, BW (mm) = -217.755 + 0.249971(I) + 8.92165(V) + 16.6681(S) - 0.008591(I*V) - 0.0170702(I*S) - 0.5456(V*S) + 0.000600025(I*V*S)   (1)

Weld reinforcement, RF (mm) = 24.7912 + 0.071036(I) - 0.89395(V) - 12.3371(S) - 0.001118(I*V) + 0.0229572(I*S) + 0.3718(V*S) - 6.9925*10^-4(I*V*S)   (2)

Weld hardness, H (HV) = -1692.25 + 3.086(I) + 68.45(V) + 75.625(S) - 0.11(I*V) - 0.07775(I*S) - 2.925(V*S) + 0.00375(I*V*S)   (3)
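As a quick check, Eqs 1 - 3 can be evaluated directly in code; at the parameter settings later reported as optimal they reproduce the quoted responses (a sketch; the function names are ours, not the paper's):

```python
def bead_width(i, v, s):
    # Eq. (1): weld bead width (mm) for current i (A), voltage v (V), speed s (cm/min)
    return (-217.755 + 0.249971 * i + 8.92165 * v + 16.6681 * s
            - 0.008591 * i * v - 0.0170702 * i * s - 0.5456 * v * s
            + 0.000600025 * i * v * s)

def reinforcement(i, v, s):
    # Eq. (2): weld reinforcement (mm)
    return (24.7912 + 0.071036 * i - 0.89395 * v - 12.3371 * s
            - 0.001118 * i * v + 0.0229572 * i * s + 0.3718 * v * s
            - 6.9925e-4 * i * v * s)

def hardness(i, v, s):
    # Eq. (3): weld hardness (HV)
    return (-1692.25 + 3.086 * i + 68.45 * v + 75.625 * s
            - 0.11 * i * v - 0.07775 * i * s - 2.925 * v * s
            + 0.00375 * i * v * s)

print(bead_width(400, 29, 4))     # ~45.217 mm, the reported minimum bead width
print(reinforcement(400, 29, 14)) # ~7.546 mm, the reported minimum reinforcement
print(hardness(400, 33, 4))       # ~339 HV, matching experiment 3 in Table 1
```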
All these models perform very well on the various standard tests, and the R2 value obtained for each model is more than 90%. Hence, these input-output relation models can be used to determine the respective responses for different combinations of input variables, and vice-versa. This gives the user the flexibility to select appropriate values of the process parameters to achieve a desired response. An optimum setting of the input parameters is, however, also suggested in order to achieve the optimised responses.
4. Optimum parameters setting for various responses
The mathematical models given by Eqs 1 - 3 can be used to select particular values of the input parameters and determine the process outputs. However, in order to achieve the optimum parameter setting, an attempt is made to obtain, for each response, an individual set of input process parameters that gives the optimum value of that response. An attempt is also made to satisfy all the output responses simultaneously with a single set of input process parameters. To achieve this, a recently developed advanced optimization algorithm by Rao et al. (2011, 2012), named the teaching-learning-based optimization (TLBO) algorithm, is used. The TLBO algorithm does not require any algorithm-specific parameters; it requires only the common control parameters, such as population size and number of iterations. It has already been tested on various standard benchmark functions and its results have proved better than those of other advanced optimization techniques (Rao and Patel, 2012). Hence an attempt is made here to use the TLBO algorithm for the parameter optimization of the SAW process under consideration, following the working of the algorithm as given by Rao et al. (2011, 2012).
The various responses considered in the present work are conflicting in nature, i.e. weld bead width and weld reinforcement need to be minimised whereas weld hardness is to be maximised. Hence, in this section, each response is considered separately and optimised using the TLBO algorithm. A common population size of 20 is used, and the number of iterations considered is 30, which gives very consistent results in all cases.
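For reference, the teacher and learner phases of TLBO (after Rao et al., 2011) can be sketched in a few lines. This is a generic minimiser, not the authors' code, and the toy objective at the end is ours:

```python
import random

def tlbo_minimize(f, bounds, pop_size=20, iters=30, seed=1):
    # Minimal TLBO sketch: greedy teacher and learner phases; the only
    # controls are the common parameters (population size, iterations).
    random.seed(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        teacher = pop[fit.index(min(fit))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            # Teacher phase: shift towards the best learner, away from the class mean
            tf = random.choice((1, 2))  # teaching factor
            cand = clip([pop[i][d] + random.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            if f(cand) < fit[i]:
                pop[i], fit[i] = cand, f(cand)
            # Learner phase: move towards a better peer (or away from a worse one)
            j = random.randrange(pop_size)
            if j != i:
                sign = 1 if fit[j] < fit[i] else -1
                cand = clip([pop[i][d] + sign * random.random() * (pop[j][d] - pop[i][d])
                             for d in range(dim)])
                if f(cand) < fit[i]:
                    pop[i], fit[i] = cand, f(cand)
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy check on a known minimum at x = 3 (not one of the weld models)
x, fx = tlbo_minimize(lambda p: (p[0] - 3.0) ** 2, bounds=[(0.0, 10.0)])
```

Note the greedy acceptance in both phases: a candidate replaces a learner only if it improves the objective, so the best solution never degrades across iterations.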
The minimum weld bead width obtained by the TLBO algorithm is 45.217 mm, with welding current = 400 A, voltage = 29 V and welding speed = 4 cm/min. Similarly, the minimum reinforcement obtained is 7.545 mm, with the optimum parameter setting: welding current = 400 A, voltage = 29 V and welding speed = 14 cm/min. The maximum hardness reported by the algorithm is about 337 HV, for which the optimum parameters are: welding current = 403 A, voltage = 33 V and welding speed = 4 cm/min.
It is observed that different parameter settings are obtained when optimising each of the three objectives individually; hence all the objectives cannot be satisfied at the same time. Thus, in order to obtain a common parameter setting which satisfies all the objectives simultaneously, a combined objective function is developed by normalising the three objectives. The combined objective function used in the present work is given by Eq 4.

Min Z = W1*(BW/BWmin) + W2*(RF/RFmin) - W3*(H/Hmax)   (4)

where W1, W2 and W3 are the weightages assigned to the individual objectives of weld bead width (BW), reinforcement (RF) and hardness (H) respectively. In this work an equal weightage of 1/3 is assigned to each objective; however, the decision maker can assign any weightages keeping in view the relative importance of the objectives. BWmin, RFmin and Hmax are the optimum results obtained in the individual cases.
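Eq 4 is straightforward to express in code; with the individual optima from above as normalisers and equal weightages, the value at the point (BWmin, RFmin, Hmax) is exactly 1/3 (a sketch; the function name and defaults are ours):

```python
def combined_objective(bw, rf, h, w=(1/3, 1/3, 1/3),
                       bw_min=45.217, rf_min=7.545, h_max=337.0):
    # Eq. (4): weighted sum of normalised responses; bead width and
    # reinforcement are minimised (+ terms), hardness is maximised (- term).
    return w[0] * bw / bw_min + w[1] * rf / rf_min - w[2] * h / h_max

# At the individual optima each normalised term is 1, so Z = 1/3 + 1/3 - 1/3
print(combined_objective(45.217, 7.545, 337.0))
```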
The TLBO algorithm is applied to this combined objective function and a common parameter setting is obtained: welding current = 400 A, voltage = 32 V and welding speed = 14 cm/min. This common setting produces a weld bead width of 58.617 mm, a reinforcement of 7.390 mm and a hardness of 309 HV. The result produced by a combined objective function is always a compromise and may differ from the individual optima; however, the common parameter setting achieved always tries to optimise all the objectives simultaneously. The result can be changed by changing the weightages assigned to the objectives so that a particular objective is optimised to a greater extent.
5. Conclusions
Investigations on the multi-pass welding of 2.25Cr-Mo-V grade steel with a plate thickness of 35 mm are carried out in this work using the submerged arc welding process. Efforts are made to develop relations between the input and output process parameters, and full quadratic models are developed in terms of the input parameters and process performance. The weld bead geometry of all the welded samples along the cross section is
observed to be considerably good, and the hardness of the weldments is also on the higher side, which gives a very strong weld joint. The coefficient of determination obtained in most cases is more than 90%. The mathematical models developed have given very good results under the necessary tests, and hence these models can be useful for deciding and optimising the submerged arc welding process for Cr-Mo-V steels. The effects of the various input parameters on the process performance are also investigated and presented in this work. An optimised parameter setting is obtained for each response using the TLBO algorithm, and a combined objective function is also developed which can be used to obtain a common parameter setting satisfying all the objectives simultaneously. The output of the present work may be useful for industries dealing with Cr-Mo-V grades of steel.
Acknowledgement
The authors are thankful to the Gujarat Council on Science and Technology
(GUJCOST), Gandhinagar, Gujarat, India, for financial support to carry out the research work
as a part of a research project.
References
Arivazhagan, B., Prabhu, R., Albert, S.K., Kamaraj, M. and Sundaresan, S. Microstructure and mechanical properties of 9Cr-1Mo steel weld fusion zones as a function of weld metal composition. Journal of Materials Engineering and Performance, 2009, 18, 999–1004.
Bae, D. and Kim, C.H. Corrosion fatigue characteristics in the weld of multi-pass welded A106 Gr B steel pipe. KSME International Journal, 2004, 18, 114-121.
Hilkes, J. and Gross, V. Welding Cr-Mo steel for power generation and petrochemical application - past, present and future. IIW Conference, Singapore, 2009, 1-11.
Lu, S.P., Kwon, O.Y., Kim, T.B. and Kim, K.H. Microstructure and wear property of Fe–Mn–Cr–Mo–V alloy cladding by submerged arc welding. Journal of Materials Processing Technology, 2004, 147, 191–196.
McDonald, E.J., Hallam, K.R., Bell, W. and Flewitt, P.E.J. Residual stresses in a multi-pass Cr-Mo-V low alloy ferritic steel repair weld. Materials Science and Engineering: A, 2002, 325(1-2), 454-464.
Naz, N., Tariq, F. and Baloch, R.A. Failure analysis of HAZ cracking in low C-CrMoV steel weldments. Journal of Failure Analysis and Prevention, 2009, 9, 370-379.
Rao, R.V. and Patel, V. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. International Journal of Industrial Engineering Computations, 2012, 3(4), 535-560.
Rao, R.V., Savsani, V.J. and Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Computer-Aided Design, 2011, 43, 303–315.
Rao, R.V., Savsani, V.J. and Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Information Sciences, 2012, 183, 1-15.
Storesund, J., Borggreen, K. and Zang, W. Creep behaviour and lifetime of large welds in X 20 Cr-Mo-V 12 1 - results based on simulation and inspection. International Journal of Pressure Vessels and Piping, 2006, 83, 875–883.
Swindeman, R.W., Santella, M.L., Maziasz, P.J., Roberts, B.W. and Coleman, K. Issues in replacing Cr–Mo steels and stainless steels with 9Cr–1Mo–V steel. International Journal of Pressure Vessels and Piping, 2004, 81, 507–512.
Tarng, Y.S. and Yang, W.H. Application of the Taguchi method to the optimization of the submerged arc welding process. Materials and Manufacturing Processes, 1998, 13, 455-467.
Tarng, Y.S., Yang, W.H. and Juang, S.C. The use of fuzzy logic in the Taguchi method for the optimisation of the submerged arc welding process. International Journal of Advanced Manufacturing Technology, 2000, 16, 688–694.
Zielinski, A., Dobrzanski, J. and Krzton, H. Structural changes in low alloy cast steel Cr-Mo-V after long time creep service. Journal of Achievements in Materials and Manufacturing Engineering, 2007, 25, 33-36.
Multi Objective Optimization of Weld Bead Geometry in Pulsed
Gas Metal Arc Welding using Genetic Algorithm
K. Manikya Kanti1*, P. Srinivasa Rao2, G. Ranga Janardhana3
1 Gayatri Vidya Parishad College of Engineering, Visakhapatnam-530048, Andhra Pradesh, India
2 Chaitanya Engineering College, Visakhapatnam, Andhra Pradesh, India
3 University College of Engineering, J.N.T.U., Kakinada, Andhra Pradesh, India
*Corresponding author (e-mail: kantikmanikya@rediffmail.com)
The objective of this paper is to present a GA-based optimization procedure to optimize the processing parameters, viz. plate thickness, pulse frequency, wire feed rate, WFR/TS ratio and peak current, using a multi-objective function model for predicting the depth of penetration and convexity index of the bead geometry in P-GMA welding. An integer-coded genetic algorithm, based on the elitist non-dominated sorting genetic algorithm, is implemented to obtain Pareto-optimal designs for the multiple objectives. The developed model is validated against experimental results, and the results obtained from the genetic algorithm model are found to be accurate. The optimal process parameters gave a value of 5.314 for depth of penetration and 0.2108 for convexity index, with error percentages of 1.41% and 0.86% respectively, demonstrating the effectiveness of the model. The obtained results help in quickly selecting process parameters to achieve the desired quality.
1. Introduction
Gas metal arc welding (GMAW) has a wide range of applications, from thin sheet welding to heavy-section narrow-gap welding. The salient features of the GMAW process are high metal deposition rates and adaptability for automatic welding. For good-quality welding, the proper mode of metal transfer is required. In P-GMAW, a current that would normally produce globular transfer is modulated with a waveform alternating between a low-level background current and a high-level pulse current. The background current serves primarily to sustain the arc, whereas the pulse current is adjusted to exceed the threshold value and thus transfer a small droplet from the wire tip. The pulsed waveform therefore produces a series of small droplets, giving a spray mode of metal transfer at a low mean current which would otherwise produce globular transfer. However, to have effective and satisfactory metal transfer in pulsed-current welding, the right choice of parameter values or ranges must be made.
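The low mean current mentioned above is simply the duty-weighted average of the peak and background levels of the rectangular pulse waveform; a small illustrative sketch (the numbers are hypothetical, not from this paper):

```python
def mean_current(i_peak, t_peak, i_base, t_base):
    # Duty-weighted average of a rectangular pulse waveform
    # (currents in A, durations in ms)
    return (i_peak * t_peak + i_base * t_base) / (t_peak + t_base)

# e.g. a 300 A pulse for 2 ms over a 100 A background for 6 ms
print(mean_current(300, 2, 100, 6))  # 150.0 A mean current
```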
Weld bead geometric parameters have a large influence on the quality of the product. Hence, studies on the effects of various welding process parameters on the bead depth of penetration and bead geometry have attracted the attention of many researchers, e.g. Murugan and Parmar (1994), Kim et al. (2003) and Palani and Murugan (2006), who carried out further investigations. Mathematical modeling of processes using the principles of design of experiments has proved to be an efficient procedure for understanding the behavior of a process while conducting a minimum number of experiments [Srinivasa Rao P. (2004), Ganjigatti J.P. et al. (2008), Sukhomay Pal et al. (2009)]. Recent developments in artificial intelligence techniques have been found useful in solving many engineering problems, and several such techniques have been explored [Correia D.S. et al. (2005), Nagesha and Datta (2010), Rao R.V. and Kalyankar (2013)] to determine welding parameters for various arc-welding processes. These evolutionary algorithms accommodate the uncertain features of welding processes, which cannot be expressed by mathematical equations; thus, they compare favorably with conventional mathematical and statistical techniques. Genetic algorithms are attracting the attention of many researchers for the optimization of process parameters
454
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
as they discover the benefits of its adaptive search. Researchers have explored the possibility of
using a GA as a method to decide near-optimal settings of a process. These algorithms encode a
potential solution to a specific problem on a simple chromosome-like data structure and apply
recombination operators to these structures so as to preserve critical information. Classical
optimization methods, like gradient-based methods, have a tendency to get stuck in local
optima. Genetic algorithms, which mimic evolution through natural selection of "genetic"
information, are better at finding global solutions and are easy to parallelize [Poirier et al. (2013)].
2. Multi-objective optimization using GA
With single objective problems, the genetic algorithm stores a single fitness value for
every solution in the current population of solutions. By allocating the fitter members of the
population a higher chance of producing more offspring than the less fit members, the GA can
create the next generation of hopefully better solutions. With multi objective problems, every
solution has a number of fitness values, one for each objective. The concept of Pareto-optimality
helps to overcome this problem of comparing solutions with multiple fitness values. A solution is
Pareto-optimal if it is not dominated by any other solution. However, it is quite common for a
large number of solutions to a problem to be Pareto-optimal and thus be given equal fitness
scores. For many problems, the set of solutions deemed acceptable by a user will be a small
subset of the set of Pareto-optimal solutions. Manually choosing an acceptable solution can be a
laborious task, which could be avoided if the GA could be directed by a ranking method to
converge only on acceptable solutions. For this work, an acceptable solution is defined following
Goldberg (1989): "A solution is an acceptable solution if it is Pareto-optimal and it is considered
to be acceptable by a human." The principles of true multi-objective optimization give rise to a
set of equally optimal solutions, known as Pareto-optimal or non-inferior solutions, instead of a
single optimal solution. Upon completion of the optimization procedure, the designer can view
the manner in which the Pareto-optimal solutions are distributed in the performance space and
choose the most suitable solution based on higher-level considerations. The Pareto-based methods,
first proposed by Goldberg (1989), have been analyzed by researchers such as Bentley and Wakefield (1996).
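The dominance test that underpins Pareto ranking is compact enough to state in code. The following Python fragment is an illustrative sketch only (it is not part of the original paper, and the objective pairs are invented); both objectives are treated as minimized, with the depth of penetration negated as is done later in this paper.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (Pareto-optimal) objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Invented objective pairs: (-depth_of_penetration, convexity_index).
candidates = [(-5.1, 0.22), (-5.3, 0.25), (-4.8, 0.20), (-5.0, 0.24)]
front = pareto_front(candidates)  # (-5.0, 0.24) is dominated by (-5.1, 0.22)
```

Note that all members of the returned front are mutually non-dominated, which is exactly why a ranking method is needed to pick among them.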
3. Model development
In the present study, the data to build a multi-objective GA model are taken from the
thesis by Rao (2004). Five input process parameters, namely plate thickness (A), pulse frequency
(B), wire feed rate (C), wire feed rate to travel speed (WFR/TS) ratio (D) and peak current (E),
are used to predict two output parameters: the bead penetration depth (P.D) and the convexity
index (C.I), which is the ratio of bead reinforcement height to bead width. Eighteen experiments
were carried out for different combinations of inputs, and the bead penetration depth and
convexity index were measured. The factors and levels used in the conventional P-GMAW
experiments are given in the following table.
Factor | Process parameter | Units  | Level 1 | Level 2 | Level 3
A      | Plate thickness   | mm     | 6       | 8       | 10
B      | Frequency         | Hz     | 50      | 101     | 152
C      | Wire feed rate    | m/min  | 3.0     | 5.5     | 8.0
D      | WFR/TS ratio      | --     | 15      | 20      | 25
E      | Peak current      | A      | 440     | 480     | 520

4. Mathematical (regression) model developed
Multiple regression analysis was performed, and regression models were developed by
Rao (2004) to find the bead penetration depth and the convexity index.
The regression models developed for determining the bead penetration depth (P.D) and the
convexity index (C.I) are given as follows:
P.D = (D^0.4516 × C^1.0401 × E^0.4731) / (10^1.6625 × A^0.3545 × B^0.1126)    (1)

C.I = (A^0.7498 × E^1.9137) / (10^4.4695 × B^0.1554 × C^0.8509 × D^0.6826)    (2)
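Equations (1) and (2) are simple to evaluate numerically. The sketch below codes them in Python (the paper's own implementation was in MATLAB); the parameter values plugged in are the GA-optimal settings reported later in Section 6.

```python
def penetration_depth(A, B, C, D, E):
    """Eq. (1): bead penetration depth P.D (mm)."""
    return (D**0.4516 * C**1.0401 * E**0.4731) / (10**1.6625 * A**0.3545 * B**0.1126)

def convexity_index(A, B, C, D, E):
    """Eq. (2): convexity index C.I (dimensionless)."""
    return (A**0.7498 * E**1.9137) / (10**4.4695 * B**0.1554 * C**0.8509 * D**0.6826)

# GA-optimal settings reported in Section 6 of this paper:
pd = penetration_depth(A=6, B=50.3226, C=8, D=25, E=519.699)  # close to 5.314 mm
ci = convexity_index(A=6, B=50.3226, C=8, D=25, E=519.699)    # close to 0.2108
```

Plugging the GA optimum into the models reproduces the reported responses, confirming the two equations are internally consistent with the results quoted later.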
5. Development of model for optimization
In the present study, a multi-objective GA-based optimization procedure is used to
optimize the processing parameters, viz. plate thickness, pulse frequency, wire feed rate, WFR/TS
ratio and peak current. The optimization of depth of penetration and convexity index was carried
out with the help of the mathematical equations developed in the regression model. A fitness
function was developed using equations (1) and (2) to maximize the depth of penetration while
simultaneously minimizing the convexity index. The maximization of the depth of penetration
was achieved by taking the negative of equation (1), as the source code was developed for
minimization of functions in MATLAB. The genetic algorithm was executed using the graphical
user interface in MATLAB R2012. The gamultiobj solver in MATLAB R2012 attempts to create a
set of Pareto optima for a multi-objective minimization; it uses a controlled elitist genetic
algorithm (a variant of NSGA-II) to find local Pareto optima.
5.1 Implementation of GA
The implementation of the GA technique for the multi-objective optimization problem is
discussed below:
A vectorized fitness function is chosen for this problem, as this approach can take less time than
computing the objective functions of the vectors serially. Hence, a population type of double
vector is used to solve this problem.
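Vectorized evaluation means the whole population is scored in one call instead of one individual at a time. The NumPy sketch below is illustrative only (the paper used MATLAB's vectorized fitness option; the formulas are the regression models of Eqs. (1) and (2), and the sample rows are invented):

```python
import numpy as np

def fitness_vectorized(pop):
    """Score a whole population at once. `pop` has one row per individual
    with columns [A, B, C, D, E]; returns [-P.D, C.I] per row, both minimized."""
    A, B, C, D, E = pop.T
    pd = (D**0.4516 * C**1.0401 * E**0.4731) / (10**1.6625 * A**0.3545 * B**0.1126)
    ci = (A**0.7498 * E**1.9137) / (10**4.4695 * B**0.1554 * C**0.8509 * D**0.6826)
    return np.column_stack([-pd, ci])  # negate P.D so both objectives are minimized

pop = np.array([[6.0, 50.0, 3.0, 15.0, 440.0],
                [8.0, 101.0, 5.5, 20.0, 480.0]])
obj = fitness_vectorized(pop)  # shape (2, 2): one objective pair per individual
```

The array operations evaluate every individual in a single pass, which is the speed benefit the text refers to.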
The GA operates with three main operators, namely (1) reproduction, (2) crossover and (3) mutation.
Reproduction selects copies of chromosomes in proportion to their fitness values. Here, a
roulette wheel was used as the reproduction operator to select chromosomes. After reproduction,
the population is enriched with good strings from the previous generation but does not contain
any new strings; a crossover operator is therefore applied to the population to create better strings.
Mutation is the random modification of chromosomes, i.e., changing 0 to 1 or vice versa on a
bit-by-bit basis. Mutation is needed to maintain diversity in the population.
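Roulette-wheel reproduction can be sketched as follows. This illustrative Python fragment assumes positive fitness scores (for the minimization objectives used here, raw objective values would first be mapped to positive fitness); the strings and fitness values are invented.

```python
import random

def roulette_select(population, fitnesses):
    """Pick one chromosome with probability proportional to its (positive) fitness."""
    spin = random.uniform(0, sum(fitnesses))
    cumulative = 0.0
    for chromosome, fitness in zip(population, fitnesses):
        cumulative += fitness
        if spin <= cumulative:
            return chromosome
    return population[-1]  # guard against floating-point round-off

random.seed(1)
pop = ["s1", "s2", "s3"]
picks = [roulette_select(pop, [1.0, 3.0, 6.0]) for _ in range(1000)]
# "s3" (6/10 of the wheel) is selected most often, "s1" (1/10) least often.
```

Fitter chromosomes occupy a larger slice of the wheel and so contribute more copies to the next generation, exactly as described above.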
The following parameters are used in the execution of the GA for the present work:
Sample size: 75; Selection function: tournament; Type of crossover: heuristic; Crossover
probability: 0.8; Mutation function: adaptive feasible; Pareto front population fraction: 0.35;
Function tolerance: 1e-6; Number of generations: 145
6. Results and discussion
Pulsed MIG welding process parameters were optimized for bead penetration depth and
convexity index simultaneously using multi-objective GA-based optimization. Optimal process
parameters were searched by the genetic algorithm to arrive at the desired bead penetration
depth and convexity index. The algorithm was run in MATLAB; Figure 1 shows the execution
of the program in MATLAB, and Figure 2 shows the elite members plotted on the Pareto front
while the program is being executed. This plot shows the trade-off between the two components
of the objective function. The solution converged after 145 iterations, as the weighted average
change in the fitness function value became less than the function tolerance, and 27 non-inferior
Pareto-optimal values were returned after execution of the program. As the number of optimal
values is large, the solution was exported to the MATLAB workspace for visualization.
Figure 1: Optimization tool in MATLAB
Figure 2: Set of Pareto front results (non-inferior solutions)
Figure 3 shows the workspace results exported to MATLAB. Figures 4 and 5 show the Pareto-optimal
results obtained by running the program. The results listed in Figures 4 and 5 indicate
that all five independent controllable process variables optimized by the GA take values between
the vectors of minimum and maximum values of the controllable process variables. The depth of
penetration and convexity index obtained using these process parameter values are 5.314 mm
and 0.2108 respectively. From these results, it is found that the GA can be a powerful tool for the
optimization of pulsed MIG welding parameters.
The optimum values of the process variables obtained from the GA are given below:
(A) Plate thickness = 6 mm, (B) Pulse frequency = 50.3226 Hz, (C) Wire feed rate = 8 m/min,
(D) Wire feed rate to travel speed ratio = 25, (E) Peak current = 519.699 A
Figure 3: Results exported to workspace
Figure 4: Pareto front - decision variables
Figure 5: Pareto front - function values
The optimum values of the process variables obtained from the experimental results are as
follows:
(A) Plate thickness = 6 mm, (B) Pulse frequency = 50 Hz, (C) Wire feed rate = 8 m/min,
(D) Wire feed rate to travel speed ratio = 25, (E) Peak current = 480 A
The maximum depth of penetration and the minimum convexity index obtained for the above
experimental data are 5.24 mm and 0.209 at the given process parameters. It is observed from
the results that the process variable values generated using the multi-objective GA can be
considered optimal. The percentage errors between the experimental values and the values
generated by the GA are 1.41% for depth of penetration and 0.86% for convexity index, which
are well within the acceptable range.
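The error figures quoted above follow directly from the two pairs of optima; a quick arithmetic check (values copied from the text):

```python
pd_ga, pd_exp = 5.314, 5.24    # depth of penetration (mm): GA vs experiment
ci_ga, ci_exp = 0.2108, 0.209  # convexity index: GA vs experiment

pd_err = abs(pd_ga - pd_exp) / pd_exp * 100  # about 1.41 %
ci_err = abs(ci_ga - ci_exp) / ci_exp * 100  # about 0.86 %
```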
7. Conclusions
The effect of process parameters on bead penetration depth in pulsed GMA welding was
studied using genetic algorithms. An effective method for finding the optimal bead penetration
depth in the pulsed GMA welding process using a multi-objective genetic algorithm is proposed.
The developed genetic algorithm model can be used to find the optimal bead penetration depth
with relatively few experiments.
References
Correia D.S., Gonçalves C.V., Cunhada S.S. and Ferraresi Jr., V.A. Comparison between genetic
algorithms and response surface methodology in GMAW welding optimization. Journal of
materials processing technology, 2005,160, 70–76
Nagesha D.S. and Datta G.L. Genetic algorithm for optimization of welding variables for height to
width ratio and application of ANN for prediction of bead geometry for TIG welding process.
Applied Soft Computing, 2010, 10, 897–907
Goldberg, D. E. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, 1989
Jeffrey D. Poirier, Senthil S. Vel, Vincent Caccese. Multi-objective optimization of laser-welded
steel sandwich panels for static loads using a genetic algorithm. Engineering Structures,
2013, 49, 508–524
J.P.Ganjigatti, D.K. Pratihar and A. Roy Choudhury. Modeling of the MIG welding process using
statistical approaches. Int J Adv Manuf Technol, 2008, 35, 1166–1190
Kim I.S., Son J.S., Kim I.G., Kim J.Y. and Kim O.S. A study on relationship between process
variables and bead penetration for robotic CO2 arc welding. Journal of materials processing
technology, 2003, 136, 139-145
MATLAB R 2012, Help Navigator.
Murugan N. and Parmar R.S. Effect of MIG process parameters on the geometry of the bead in
the automatic surfacing of stainless steel. Journal of materials processing technology, 1994,
41, 381-398
Peter J. Bentley, Jonathan P. Wakefield. An Analysis of Multiobjective Optimization within
Genetic Algorithms, Technical Report 96, 1996, 1-14
P.K. Palani , N. Murugan. Selection of parameters of pulsed current gas metal arc welding.
Journal of materials processing technology, 2006,172, 1–10
Rao R.V. and Kalyankar V.D. Parameter optimization of modern machining processes using
teaching–learning-based optimization algorithm. Engineering Applications of Artificial
Intelligence, 2013, 26(1), 524-531
Srinivasa Rao P. Development of arc rotation mechanism and experimental studies on pulsed
GMA welding with and without arc rotation. PhD thesis, IIT Kharagpur, India, 2004
Sukhomay Pal, Santosh K. Malviya, Surjya K.Pal and Arun K.Samantaray. Optimization of quality
characteristics parameters in a pulsed metal inert gas welding process using grey-based
Taguchi method. Int J Adv Manuf Technol, 2009, 44, 1250–1260
A New Teaching-Learning-Based Optimization Method for
Multi-Objective Optimization of Master Production Scheduling
Problems
S. Radhika1*, Ch. Srinivasa Rao2, K. Karteeka Pavan1
1 R.V.R & J.C College of Engineering, Guntur–522019, Andhra Pradesh
2 Andhra University College of Engineering, Visakhapatnam
*Corresponding author (e-mail: sajjar99@gmail.com)
For an effective and efficient synchronization of operations in any organization, a
Master Production Schedule (MPS) is necessary. The overall objective of MPS is to
allocate all the manufacturing resources in an efficient manner while satisfying the
forecasted demands. Hence, MPS is a plan that determines optimal values of products
to be produced. More competitive and optimal solutions can be obtained by nature-inspired
population-based algorithms. Teaching-Learning-Based Optimization (TLBO) is one such
recently proposed population-based algorithm, which does not require any algorithm-specific
control parameters. This work presents the development and application of TLBO to MPS
problems, which has not been reported in the literature so far. The developed TLBO algorithm is
applied to a benchmark problem, and the study demonstrates that TLBO yields better solutions
for MPS problems.
1. Introduction
The difficulty of optimizing engineering problems has given rise to an important group of
heuristic search algorithms, namely evolutionary algorithms. The most commonly used
evolutionary optimization technique is the Genetic Algorithm (GA) [Swagatam Das (2009)].
Though the GA provides a near-optimal solution for complex problems, it requires a number of
control parameters to be set in advance, which affect the effectiveness of the solution.
Determining the optimum values of these control parameters is very difficult in practice.
Considering this fact, Rao et al. and Rao and Patel have recently introduced the
Teaching-Learning-Based Optimization (TLBO) algorithm, which does not require any
algorithm-specific parameters [Rao, R.V. & Kalyankar, V.D. (2012a, b, c)]. TLBO is based on the
natural phenomenon of teaching and learning in a classroom. TLBO contains two phases, namely
the teacher phase and the learner phase [Rao, R.V. & Patel, V. (2012a, b)]. Like any
population-based algorithm, according to Rao et al. (2011), TLBO operates on a population: the
solution vectors are the learners, each dimension of a vector is termed a subject, and the best
learner in the population is the teacher. This work presents the development and application of
Teaching-Learning-Based Optimization (TLBO) to Master Production Scheduling (MPS)
problems, something that does not seem to have been done so far.
Master production scheduling has been extensively investigated over the last three
decades, and it continues to attract the interest of both the academic and industrial sectors.
One must ensure that a proposed MPS is valid and realistic for implementation before it is
released to the real manufacturing system [Ilham Supriyanto (2011)]. In this connection, several
studies have suggested an authentication process to check the validity of a tentative MPS, a few
of which include Higgins et al. (1992), Kochhar et al. (1998) and Heizer et al. (2006). Besides the
verification process, researchers have also employed various advanced optimization techniques
to solve and enhance MPS quality: Vieira et al. (2004) applied simulated annealing, Soares et al.
(2009) introduced a new genetic algorithm structure, and Vieira et al. (2003) compared genetic
algorithms and simulated annealing for master production scheduling problems. The objectives
considered are minimizing the inventory level, maximizing the service level, minimizing the
inventory below safety stock and minimizing overtime. The subsequent section explains the
TLBO procedure.
2. TLBO procedure
TLBO is a simple evolutionary algorithm which, unlike other existing evolutionary algorithms,
does not require any algorithm-specific parameters. The process of TLBO is as follows.
2.1 Initialization
The population X is randomly initialized as a data set of n rows and d columns using the
following equation:

X_{i,j}(0) = X_j^min + rand(1) × (X_j^max − X_j^min)    (1)

This creates a population of learners (individuals). The i-th learner of the population X at
current generation t, with d subjects, is

X_i(t) = [X_{i,1}(t), X_{i,2}(t), ..., X_{i,d}(t)]    (2)
2.2 Teacher phase
The mean value of each subject, j, of the population in generation t is given as

M(t) = [M_1(t), M_2(t), ..., M_d(t)]    (3)
The teacher is the best learner with minimum objective function value in the current
population. The Teacher phase tries to increase the mean result of the learners and always
tries to shift the learners towards the teacher. A new set of improved learners can be
generated by adding the difference between the teacher and the mean vector to each learner in
the current population, as follows.
X_i(t+1) = X_i(t) + r × (X_best(t) − T_F M(t))    (4)

where T_F is the teaching factor, with a value between 1 and 2, and r is a random number in
the range [0, 1]. The value of T_F is found using the following equation:

T_F = round(1 + rand(1))    (5)
2.3 Learner phase
The knowledge of the learners can be increased through interaction with one another in
the class. For a learner i, another learner j is selected randomly from the class.
X_i(t+1) = X_i(t) + r × (X_i(t) − X_j(t)),  if f(X_i(t)) < f(X_j(t))
X_i(t+1) = X_i(t) + r × (X_j(t) − X_i(t)),  if f(X_j(t)) < f(X_i(t))    (6)
The two phases are repeated until a stopping criterion is met. The best learner at the end of
the run is the best solution.
2.4 Stopping criteria
The stopping criterion in the present work is "stop by convergence or stagnation".
Convergence of the algorithm is judged by the fitness value of the fittest individual: the
algorithm stops when the difference between the fitness values of the fittest individuals in two
successive generations is less than 0.0001.
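The four steps of Sections 2.1-2.4 can be sketched end to end. The Python fragment below is an illustrative re-implementation, not the authors' code: it assumes greedy acceptance of improved learners and a random teaching factor T_F in {1, 2} as in Eq. (5), and it is demonstrated on a simple sphere function rather than the MPS fitness.

```python
import random

def tlbo(f, bounds, pop_size=20, max_gen=200, tol=1e-4):
    """Minimize f over box bounds [(lo, hi), ...] with basic TLBO."""
    d = len(bounds)
    # Initialization, Eq. (1): X[i][j] = lo_j + rand * (hi_j - lo_j)
    X = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
         for _ in range(pop_size)]
    prev_best = min(f(x) for x in X)
    for _ in range(max_gen):
        teacher = min(X, key=f)
        mean = [sum(x[j] for x in X) / pop_size for j in range(d)]
        for i in range(pop_size):
            # Teacher phase, Eqs. (4)-(5): move toward (teacher - TF * mean).
            TF, r = random.randint(1, 2), random.random()
            cand = [X[i][j] + r * (teacher[j] - TF * mean[j]) for j in range(d)]
            if f(cand) < f(X[i]):
                X[i] = cand
            # Learner phase, Eq. (6): interact with a random classmate k.
            k = random.choice([m for m in range(pop_size) if m != i])
            r = random.random()
            sign = 1 if f(X[i]) < f(X[k]) else -1
            cand = [X[i][j] + sign * r * (X[i][j] - X[k][j]) for j in range(d)]
            if f(cand) < f(X[i]):
                X[i] = cand
        cur_best = f(min(X, key=f))
        if abs(prev_best - cur_best) < tol:  # stop by convergence/stagnation
            break
        prev_best = cur_best
    return min(X, key=f)

random.seed(0)
sphere = lambda x: sum(v * v for v in x)
sol = tlbo(sphere, [(-5.0, 5.0)] * 3)  # best learner found
```

The stagnation test mirrors the 0.0001 criterion above; on the sphere function the population collapses toward the origin within a few generations.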
3. MPS problem considered
A manufacturing scenario is selected from Soares et al. (2009) to study the applicability of
the TLBO algorithm to the MPS problem, as follows.
The MPS problem is posed as a multi-objective optimization problem. For the optimization
of the selected parameters, the following multi-objective criterion is used as the fitness
function [Soares et al. (2009)]:

fitness = 1 / (1 + Z_n)    (7)
where

Z_n = c_1 (AIL / AIL_max) + c_2 (RNM / RNM_max) + c_3 (BSS / BSS_max) + c_4 OC    (8)

AIL_max, RNM_max and BSS_max are the largest values found in the initial population
created. Unit values are used for the fitness coefficients c_1, c_2, c_3 and c_4, which indicates
equal importance among the objectives to be minimized. The master production schedule problem
can be mathematically modeled as a mixed integer problem as follows [Soares et al. (2009)]:

Minimize: Z = c_1 AIL + c_2 RNM + c_3 BSS + c_4 OC    (9)
The constraints and the nomenclature used are taken from Soares et al. (2009). The
subsequent section demonstrates the applicability of TLBO in finding a master schedule plan
for a production scenario.
The scenario has a planning horizon of 13 periods, four productive resources and
20 different products. It also considers (a) different period lengths, (b) a different
initial inventory quantity for each product, and (c) different safety inventory levels and different
standard production lot sizes.
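Equations (7) and (8) combine the four indicators into a single scalar fitness. The sketch below is illustrative only: the indicator values and normalization maxima are invented, and OC is left unnormalized exactly as it appears in Eq. (8).

```python
def mps_fitness(AIL, RNM, BSS, OC, AIL_max, RNM_max, BSS_max,
                c=(1.0, 1.0, 1.0, 1.0)):
    """Eqs. (7)-(8): normalized weighted sum Z_n mapped into (0, 1]."""
    c1, c2, c3, c4 = c
    Zn = c1 * AIL / AIL_max + c2 * RNM / RNM_max + c3 * BSS / BSS_max + c4 * OC
    return 1.0 / (1.0 + Zn)

# Invented indicator values for one candidate schedule:
fit = mps_fitness(AIL=2744.4, RNM=317.1, BSS=9.1, OC=0.0,
                  AIL_max=10000.0, RNM_max=1000.0, BSS_max=100.0)
# A schedule with smaller Z_n (better indicators) gets fitness closer to 1.
```

Because Z_n is non-negative, the fitness is bounded in (0, 1], with 1 attained only by a schedule that is perfect on every indicator.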
4. Results and discussion
The applicability of the proposed algorithm was tested on the manufacturing scenario
considered. The plot in Figure 1 shows the variation of fitness over all 65 independent runs.
The best fitness value, 0.926713, was obtained in the 2nd run and the worst fitness value,
0.830392, in the 16th run. The fitness is nearly 22% higher than that obtained with the GA, and
the average number of iterations taken for convergence is 4.
Figure 1. Evolution of fitness values
This work showed that the master plan created with TLBO presented low levels of ending
inventory and requirements not met, and efficiently met safety inventory levels. The results also
show that the TLBO approach gives better results than the existing GA-based work. Table 1
compares the performance indicators obtained in the present work with those of the MPS GA
[Soares et al. (2009)].
The best master production schedule found, for the 4 resources and 13 periods for all
20 products, along with the total MPS for each product, is shown in Table 2.
Table 1. Comparison between the values of performance indicators

Performance indicator                                   | GA      | TLBO
Average Ending Inventory - EI (units/hour)              | 4555.08 | 2744.356
Average Requirements Not Met - RNM (units/hour)         | 321.42  | 317.1006
Average Quantity below safety stock - BSS (units/hour)  | 37.03   | 9.1168
Table 2. Best MPS obtained (columns are periods 1 to 13)

Product 1
Res1:      0    0    0    0    0   0     0     0     0  11460    0   7000     0
Res2:    150    0   70   90    0   0 11400     0  2000  14000    0   7000  2000
Res3:     90    0   30    0   70   0  6030   700     0    260    0    100     0
Res4:      0   70   20    0   70   0  2210   700  7000      0  700   4600 14000
TT.MPS:  240   70  120   90  140   0 19640  1400  9000  25720  700  18700 16000

Product 2
Res1:      0   40    0   10    0   0   700     0     0   5030    0   1400  1000
Res2:     70   10    0    0   40   0   700     0  7000   7000    0   3900     0
Res3:     50    0   40    0    0   0    10     0  7000      0    0   7000     0
Res4:      0    0    0   60    0   0     0   400  2000   1720  400      0  7000
TT.MPS:  120   50   40   70   40   0  1410   400 16000  13750  400  12300  8000

(For conciseness, the MPS for products 3 through 18 are not shown.)

Product 19
Res1:    100    0    0    0    0   0  1400     0     0   7510    0    900     0
Res2:      0   10    0    0   70   0   310   700  7000  14000  700   5500 14000
Res3:      0    0   60    0    0   0  1400     0  7000      0    0      0     0
Res4:    100   50   50  140   60   0     0     0     0   7550    0    400  1000
TT.MPS:  200   60  110  140  130   0  3110   700 14000  29060  700   6800 15000

Product 20
Res1:     60    0   50    0   40   0   700     0     0   6000    0      0     0
Res2:      0    0    0   20    0   0   700     0  1000   6000    0      0     0
Res3:      0    0    0   60    0   0     0     0     0      0  400   7000  6000
Res4:      0   40    0   60    0   0     0   300  7000      0    0      0  2000
TT.MPS:   60   40   50  140   40   0  1400   300  8000  12000  400   7000  8000

5. Conclusions and future scope
The complexity of parameter optimization problems increases with the number of
parameters. The present TLBO model is useful for future research aimed at a more advanced
model with improved reliability when more MPS parameters are involved.
The results demonstrate that the TLBO method produces better master production
schedule values than the GA.
Defining a more suitable fitness function by assigning different weights to the
coefficients, and analyzing their influence, may also be investigated.
Applying the proposed TLBO to a larger production scenario and testing its validity in an
industry will be our upcoming work.
References
Heizer, J. H., Render, B. Operations Management, Upper Saddle River, New York: Pearson
Prentice Hall. 2006.
Higgins P. and Browne J. Master production scheduling: a concurrent planning approach. Prod
Plan Control, 1992, 3(1), 2–18
Ilham Supriyanto. Fuzzy Multi-Objective Linear Programming and Simulation Approach to the
Development of Valid and Realistic Master Production Schedule. LJ_proc_supriyanto_de
201108_01, 2011
Kochhar, A. K., Ma, X., Khan, M. N. Knowledge-based systems approach to the development
of accurate and realistic master production schedules, Journal of Engineering
Manufacture, 1998, Vol. 212 pp.453-60
Rao, R.V. & Kalyankar, V.D. Parameter optimization of modern machining processes using
teaching–learning-based optimization algorithm. Engineering Applications of Artificial
Intelligence, 2012a, http://dx.doi.org/10.1016/j.engappai.2012.06.007.
Rao, R.V. & Kalyankar, V.D. Multi-objective multi-parameter optimization of the industrial
LBW process using a new optimization algorithm. Journal of Engineering
Manufacture,2012b, DOI: 10.1177/0954405411435865
Rao, R.V. & Kalyankar, V.D. Parameter optimization of machining processes using a new
optimization algorithm. Materials and Manufacturing Processes, 2012c,
DOI: 10.1080/10426914.2011.602792
Rao, R.V. & Patel, V. An elitist teaching-learning-based optimization algorithm for solving
complex constrained optimization problems. International Journal of Industrial
Engineering Computations, 2012a, 3(4), 535-560.
Rao, R.V. & Patel, V. Multi-objective optimization of combined Brayton and inverse Brayton
cycle using advanced optimization algorithms, Engineering Optimization, 2012b, doi:
10.1080/0305215X.2011.624183.
Rao, R.V., Savsani, V.J. & Vakharia, D.P. Teaching-learning-based optimization: A novel
method for constrained mechanical design optimization problems. Computer-Aided
Design, 2011, 43(3), 303-315.
Swagatam Das, Ajith Abraham and Amit Konar. Metaheuristic Clustering. Springer-Verlag,
Berlin Heidelberg, 2009, ISBN 978-3-540-92172-1, ISSN 1860-949X
Soares, M. M., Vieira, G. E. A New multi-objective optimization method for master production
scheduling problems based on genetic algorithm, International Journal of Advanced
Manufacturing Technology; 2009,41:549-567.
Vieira, G. E., Ribas, C. P. A new multi-objective optimization method for master production
scheduling problems using simulated annealing, International Journal of Production
Research, 2004, Vol.42.
Vieira, G. E., Favaretto, F., Ribas, P. C. Comparing genetic algorithms and simulated
annealing in master production scheduling problems; Proceeding of 17th International
Conference on Production Research,2003, Blacksburg, Virginia, USA.
PSO_TVAC Based Optimal Location and Sizing of TCPST for
Real Power Loss Minimization
Ankit Singh Tomar*, Laxmi Srivastava
Dept. of Electrical Engineering, Madhav Institute of Technology & Science, Gwalior, India
*Corresponding author (e-mail: ankitsingh.tomar51@gmail.com)
Minimization of losses is important for improving and thereby enhancing the available
transfer capability of a power system, and this can be achieved by the application of
Flexible A.C. Transmission System (FACTS) devices. Out of the various types of FACTS
devices, the thyristor controlled phase shifting transformer (TCPST) is a
types of FACTS devices, Thyristor controlled phase shifting transformer (TCPST) is a
series FACTS device that can be used to reduce power losses and to control the power
flows in various lines. However, due to the huge cost of the FACTS device, it is
important to find the optimal location and sizing of the device in a power system to
obtain maximum benefits of the device. In this paper, first, loss sensitivity index (LSI) of
various lines of a power system has been determined. Thereafter, for a set of the lines
having highest values of LSI, Particle Swarm Optimization-Time Varying Acceleration
Coefficients (PSO-TVAC) algorithm has been applied to find the optimal location and
size of the TCPST for real power loss minimization. To establish the effectiveness of the
proposed approach, it has been implemented on the standard IEEE 30-bus system, and the
results are found to be highly satisfactory.
Keywords — FACTS, Loss Sensitivity Index, Newton Raphson load flow, Particle
Swarm Optimization-Time Varying Acceleration Coefficients, Thyristor controlled phase
shifting transformer.
1. Introduction
The ongoing deregulation trends in the electric supply industry throughout the world act as a
vital force in favor of new technologies that reconcile system security with social welfare
maximization (Hingorani and Gyugyi, 1999; Taranto et al., 1992), and this requires efficient
control of the power system network. Under this scenario, the flexible A.C. transmission
system (FACTS) appears to be the most appropriate solution for controlling power flow, bus
voltage, etc. in a power system because of its excellent flexibility and versatility (Xiao et al.,
2002; Galiana et al., 1996; Fuerte-Esquivel and Acha, 1997). With the application of FACTS
technology, the capacity of existing power system networks can be extended up to their thermal
limits without adding new transmission lines. FACTS devices are solid-state converters
capable of improving power transmission capacity, improving bus voltage profiles,
enhancing power system stability, minimizing transmission losses, etc. (Ge and Chung, 1999;
Song et al., 2004). In steady-state operation of a power system, unwanted loop flows and parallel
power flows between utilities are problems in heavily loaded interconnected power systems.
However, with FACTS devices, the unwanted power flow can be easily regulated
(Chandrasekharan et al., 2009).
In order to obtain the maximum benefits from their use, the most important issues to be
considered are the type of FACTS device, the settings of the FACTS device and the optimal
location of the FACTS device (Ge and Chung, 1999). FACTS devices may be categorized into
four types: series controllers, shunt controllers, combined series-series controllers and combined
series-shunt controllers. Commonly used FACTS devices include the static var compensator
(SVC), thyristor controlled series compensator (TCSC), thyristor controlled phase shifting
transformer (TCPST) and unified power flow controller (UPFC). The SVC and STATCOM are
connected in shunt with the system to improve the voltage profile by injecting or absorbing
reactive power, while the TCSC and TCPST are connected in series with the system (Ge and
Chung, 1999; Song et al., 2004). Like other FACTS devices, the TCPST is an expensive device;
it is therefore important to find its optimal location and size in a power system in order to
minimize real power losses. Various methods for finding the optimal location and
size of TCPST have been proposed, including sensitivity analysis and evolutionary computational techniques (Preedavichit and Srivastava, 1998; Wirmond et al., 2011).
This paper deals with the application of the Particle Swarm Optimization-Time Varying Acceleration Coefficients (PSO-TVAC) algorithm to find the optimal location and the optimal parameter settings of a TCPST for minimization of real power losses in a power system network. The effectiveness of the proposed method has been tested on the IEEE 30-bus system (Sadat).
2. Modeling of TCPST
The model of the transmission line with a Thyristor-Controlled Phase Shifting Transformer (TCPST) is shown in Figure 1. This device can control the voltage phase shift angle. By varying the voltage phase shift angle, the active power flow is controlled. The active power flow of an overloaded line can be decreased with a negative phase shift, and that of an under-loaded line can be increased up to almost the rated capacity.
Figure 1. Equivalent Circuit of TCPST
The real and reactive power flows from bus i to bus j can be derived as:

$P_{ij} = \dfrac{V_i^2 G_k}{t_s^2} - \dfrac{V_i}{t_s} V_j G_k \cos(\delta_i - \delta_j - \Phi) - \dfrac{V_i}{t_s} V_j B_k \sin(\delta_i - \delta_j - \Phi)$  (1)

$Q_{ij} = -\dfrac{V_i^2 B_k}{t_s^2} - \dfrac{V_i}{t_s} V_j G_k \sin(\delta_i - \delta_j - \Phi) + \dfrac{V_i}{t_s} V_j B_k \cos(\delta_i - \delta_j - \Phi)$  (2)

$P_{ji} = V_j^2 G_k - \dfrac{V_i}{t_s} V_j G_k \cos(\delta_i - \delta_j - \Phi) + \dfrac{V_i}{t_s} V_j B_k \sin(\delta_i - \delta_j - \Phi)$  (3)

The real power loss $P_{Lk}$ in the line having the TCPST can be expressed as:

$P_{Lk} = P_{ij} + P_{ji} = \dfrac{V_i^2 G_k}{t_s^2} + V_j^2 G_k - 2\,\dfrac{V_i}{t_s} V_j G_k \cos(\delta_i - \delta_j - \Phi)$  (4)

The range of the phase shift angle considered in this paper is

$-10^{\circ} \le \Phi_{TCPST} \le 10^{\circ}$

Insertion of a TCPST having a complex tapping ratio $(a + jb):1$ will modify the bus admittance matrix $Y$ as

$[Y_{mod}] = \begin{bmatrix} \dfrac{Y_{ii}}{t_s^2} & -\dfrac{Y_{ij}}{T_s^*} \\[6pt] -\dfrac{Y_{ji}}{T_s} & Y_{jj} \end{bmatrix}$

where $T_s = a + jb = t_s \angle \Phi$.
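As a quick numerical check, Eqs. (1), (3) and (4) can be evaluated directly. The sketch below is a minimal Python illustration (the function and argument names are ours, and any per-unit values used with it are hypothetical, not from the paper); it also verifies that the sine terms cancel when the two directed flows are summed, so the branch loss reduces to Eq. (4).

```python
import math

def tcpst_line_flows(Vi, Vj, di, dj, Gk, Bk, ts, phi):
    """Directed real power flows across a line with a TCPST, Eqs. (1) and (3).

    Vi, Vj : bus voltage magnitudes (pu); di, dj : bus angles (rad)
    Gk, Bk : series conductance and susceptance of line k (pu)
    ts     : tap magnitude; phi : phase shift angle (rad)
    """
    a = di - dj - phi
    Pij = Vi ** 2 * Gk / ts ** 2 \
        - (Vi / ts) * Vj * (Gk * math.cos(a) + Bk * math.sin(a))
    Pji = Vj ** 2 * Gk \
        - (Vi / ts) * Vj * (Gk * math.cos(a) - Bk * math.sin(a))
    # The sine terms cancel in the sum, reproducing the loss of Eq. (4)
    Plk = Pij + Pji
    return Pij, Pji, Plk
```

For identical bus voltages and angles, a unity tap and zero phase shift, the loss vanishes, as Eq. (4) requires.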
3. Method for optimal placement of TCPST
In the present work, the TCPST device has been considered from a static point of view to minimize the total system real power transmission loss. First, the loss sensitivity index (LSI) of the various lines of the power system is determined. Thereafter, for a set of the lines having the highest values of LSI, the Particle Swarm Optimization-Time Varying Acceleration Coefficients (PSO-TVAC) algorithm is applied to find the optimal location and size of the TCPST for real power loss minimization.
3.1 Loss sensitivity index

The loss sensitivity with respect to a TCPST placed in line k (k = 1, …, NL) may be given as (Preedavichit and Srivastava, 1998):

$b_k = \dfrac{\partial P_L}{\partial \phi_k}$

The loss sensitivity index of the phase shift $\phi_k$ with respect to the total power loss will be

$\dfrac{\partial P_L}{\partial \phi_k} = 2\,\dfrac{V_i}{t_s} V_j G_k \sin(\phi_k), \quad k = 1, 2, \ldots, NL$  (5)

3.2 Particle swarm optimization
Particle swarm optimization is a population based evolutionary computing technique that traces its origin to the emergent motion of a flock of birds searching for food. It scatters random particles, i.e. candidate solutions, into the problem space. These particles, collectively called a swarm, exchange information through their respective positions (Kennedy and Eberhart, 2000; Eberhart and Shi, 2001). The particles update their positions using their own experience and the experience of their neighbors; the update rule is expressed through the velocity of the particles. The position and velocity vectors of the i-th particle of a d-dimensional search space can be represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{id})$ and $V_i = (v_{i1}, v_{i2}, \ldots, v_{id})$, respectively.

On the basis of the value of the evaluation function, the best previous position of a particle is recorded and represented as $pbest_i = (p_{i1}, p_{i2}, \ldots, p_{id})$. The position of the best particle among all particles in the group so far is recorded as $gbest = (g_1, g_2, \ldots, g_d)$. Then, the new velocities and positions of the particles for the next fitness evaluation are calculated using the following two equations:

$v_{id}^{k+1} = C\left[ w\, v_{id}^{k} + c_1\, rand_1\, (pbest_{id} - x_{id}^{k}) + c_2\, rand_2\, (gbest_{d} - x_{id}^{k}) \right]$  (6)

$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}$  (7)

Here w is the inertia weight parameter, C is the constriction factor, $c_1$ and $c_2$ are the cognitive and social coefficients, and $rand_1$ and $rand_2$ are two separately generated uniformly distributed random numbers in the range [0, 1]. The first part of (6) is known as the "inertia" or "momentum" term and represents the previous velocity. The second part of (6) is termed the "cognitive" or "memory" component and represents the personal experience of each particle. The third part is known as the "social knowledge" component and reflects the collaborative effect of the particles in finding the global optimal solution; it always pulls the particles toward the global best position found so far.

Initially, a population of particles is generated with random positions, and then random velocities are assigned to each particle. The fitness of each particle is then evaluated according to a user defined objective function. At each iteration, the velocity of each particle is calculated according to (6) and the position for the next function evaluation is updated according to (7). Each time a particle finds a better position than its previously found best position, that location is stored in memory.
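The update rule of Eqs. (6)-(7) for a single particle can be sketched as follows (a minimal Python illustration; the parameter values w, C, c1 and c2 shown as defaults are common textbook choices, not values stated by the paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, C=1.0, c1=2.0, c2=2.0):
    """One velocity and position update per Eqs. (6)-(7).

    x, v, pbest : current position, velocity and personal best (lists);
    gbest       : global best position found by the swarm so far.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # rand1, rand2 in [0, 1]
        vd = C * (w * v[d]
                  + c1 * r1 * (pbest[d] - x[d])
                  + c2 * r2 * (gbest[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

When a particle already sits at both its personal best and the global best with zero velocity, the update leaves it in place, as Eqs. (6)-(7) imply.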
3.3 PSO-Time varying acceleration coefficients (PSO-TVAC)
The idea behind time varying acceleration coefficients development is to enhance the
global search in the early part of the optimization and to encourage the particles to converge
towards the global optima at the end of the search. This can be achieved by varying the
acceleration coefficients c1 and c2 with time such that the cognitive component is reduced
while the social component is increased as the search proceeds. With a large cognitive
component and small social component at the beginning, particles are allowed to move
around the search space instead of moving toward the population best during early stages.
On the other hand, a small cognitive component and a large social component allow the
particles to converge to the global optima in the latter part of the optimization process. The acceleration coefficients are expressed as:

$c_1 = (c_{1f} - c_{1i})\,\dfrac{iter}{iter_{max}} + c_{1i}$  (8)

$c_2 = (c_{2f} - c_{2i})\,\dfrac{iter}{iter_{max}} + c_{2i}$  (9)
where $c_{1i}$, $c_{1f}$, $c_{2i}$ and $c_{2f}$ are the initial and final values of the cognitive and social acceleration coefficients, respectively. The PSO-TVAC algorithm has been applied in this paper because it avoids premature convergence in the early stages of the search and enhances convergence to the global optimum solution during the latter stages of the search (Varshney et al., 2011).
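Eqs. (8)-(9) amount to a linear interpolation between the initial and final coefficient values over the run. A common choice in the TVAC literature is c1: 2.5 → 0.5 and c2: 0.5 → 2.5, used below only as illustrative defaults (the paper does not state its values):

```python
def tvac_coefficients(it, it_max, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time-varying acceleration coefficients of Eqs. (8)-(9).

    The cognitive coefficient c1 decays from c1i to c1f while the social
    coefficient c2 grows from c2i to c2f as iteration `it` approaches it_max.
    """
    c1 = (c1f - c1i) * it / it_max + c1i
    c2 = (c2f - c2i) * it / it_max + c2i
    return c1, c2
```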
4. Implementation of PSO-TVAC for optimal location and sizing of TCPST
To implement the PSO-TVAC approach for optimal placement and sizing of TCPST, first, the loss sensitivity index for the various lines of the power system is computed using the NR load flow method, and the lines are then ranked in decreasing order. For placement of TCPST, the lines connected between two generation buses are ignored irrespective of their LSI values (Preedavichit and Srivastava, 1998). Around 20%-25% of the lines having the highest values of LSI are selected as possible locations for the TCPST. For these possible locations, the PSO-TVAC algorithm has been applied for determining the optimal location and size of TCPST using the following steps:
(i.) Initialization: Each particle is defined as a vector which contains a randomly selected TCPST location (the line in which the TCPST is placed) and its size, as shown below:

Particle: [nl, Φ]

where nl is the TCPST line location number and Φ is the TCPST size in degrees.

(ii.) Calculation of fitness function: The constrained optimization problem of optimal location of the TCPST device is converted into an unconstrained optimization problem using a penalty factor PF corresponding to the constraint violations:

Fitness function = objective function J + penalty factor PF, i.e. f(x) = J + PF

Thus, the fitness function used in the PSO algorithm consists of two terms: J, the original objective function, and the penalty factor PF corresponding to the constraint violations.

(iii.) For each individual particle, compare the particle's fitness value with its pbest. If the current value is better than the pbest value, then set this value as the pbest and the current particle's position, x_i, as p_i.

(iv.) Identify the particle that has the best fitness value. The value of its fitness function is defined as gbest and its position as p_g.

(v.) Update the velocity and position of all particles using (6) and (7).

(vi.) Repeat steps (ii)-(v) until a stopping condition is met (e.g., maximum number of iterations or a sufficiently good fitness value).
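Step (ii) above can be sketched as a penalized fitness evaluation. This is a minimal Python illustration; the quadratic penalty form and the weight are our assumptions, since the paper does not specify how PF is computed:

```python
def penalized_fitness(real_power_loss, violations, penalty_weight=1000.0):
    """Unconstrained fitness f(x) = J + PF of step (ii).

    real_power_loss : objective J from the NR load flow (pu)
    violations      : non-negative constraint-violation magnitudes
                      (e.g. bus-voltage or line-flow limit excesses)
    """
    PF = penalty_weight * sum(v * v for v in violations)  # quadratic penalty
    return real_power_loss + PF
```

A feasible particle (no violations) is thus judged purely on its real power loss.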
5. Results and discussion
The effectiveness of the proposed method is demonstrated by applying it to the IEEE 30-bus system (Sadat). The IEEE 30-bus system consists of 1 slack bus, 5 generation buses, 24 load buses, and 41 transmission lines. In this paper, total system real power loss reduction is considered as the objective to determine the optimal location of TCPST. For optimal placement of TCPST, the LSI for the various lines of the power system is computed using the NR load flow method. The LSI values for all the 41 lines are shown in Table 1. As can be observed from Table 1, the loss sensitivity index of line no. 1 is the highest, but it is not considered as one of the possible
locations for TCPST placement because it is connected between the two generation buses 1 and 2. Similarly, line no. 5 is also not considered for TCPST placement due to its connection between generation buses 2 and 5.
Table 1. Loss Sensitivity Index for IEEE 30-bus system

Line No.  LSI        Line No.  LSI       Line No.  LSI       Line No.  LSI
 1         1.1066    11        0.0       21         0.0220   31         0.0351
 2         0.3750    12        0.0       22         0.0408   32         0.0090
 3         0.2617    13        0.0       23         0.0191   33        -0.0193
 4         0.4905    14        0.0       24        -0.0433   34         0.0181
 5         0.3696    15        0.0       25         0.0555   35        -0.0367
 6         0.3660    16        0.0       26         0.0246   36         0.0
 7         0.3984    17        0.0522    27         0.0853   37         0.0439
 8        -0.1287    18        0.1157    28         0.0424   38         0.0514
 9         0.2229    19        0.0435    29        -0.0087   39         0.0280
10         0.1610    20        0.0082    30         0.0281   40        -0.0030
                                                             41         0.1027
When the lines are ranked in decreasing order of their absolute LSI values, the order is 4, 7, 2, 6, 3, 9, 10, 8, 18, 41, 27 and so on. For optimal placement of TCPST, the first 10 lines are selected as possible locations. For these possible locations, the PSO-TVAC algorithm has been applied for determining the optimal location and size of TCPST. With a population size of 10, the PSO-TVAC algorithm converged in 23 iterations, giving the optimal location for the TCPST as line no. 8 and the setting of the TCPST as 0.9879°. The total system real power loss reduced from 0.1760 pu to 0.1751 pu. To validate the proposed approach, the PSO-TVAC algorithm has also been applied considering all the 41 lines as possible locations for TCPST placement. In this case also, the same results were obtained, but the CPU time requirement was 0.15157 s. When the PSO-TVAC algorithm was applied to the 10 LSI-based possible locations, it required 0.05159 s. Thus, the proposed approach is found to be faster as well as accurate.
6. Conclusion

This paper presents a PSO-TVAC based approach for determining the optimal location of TCPST for real power loss reduction. On the basis of the loss sensitivity index, a subset of lines for possible location of TCPST was obtained, and then the PSO-TVAC algorithm was applied to this subset of lines only. The effectiveness of the proposed approach has been tested on the standard IEEE 30-bus system, and it is found to be faster as well as accurate.
Acknowledgment
The authors sincerely acknowledge the financial assistance received from the Department of Science and Technology, New Delhi, India vide letter no. SR/S3/EECE/0064/2009, dated 22-01-2010, and the Director, Madhav Institute of Technology & Science, Gwalior, India to carry out this research work.
References
Chandrasekharan K., Jeyaraj K.A., Sahayasenthamil L. and Saravan M., 2009, "A new method to incorporate FACTS Devices in Optimal Power Flow Using Particle Swarm Optimization", Journal of Theoretical and Applied Information Technology.
Eberhart R.C. and Shi Y., 2001, "Particle swarm Optimization: Developments, applications and resources", Proc. IEEE Int. Conf. Evolutionary Computation, vol. 1, pp. 81-86.
Fuerpe-Esquivel C.R. and Ache E., 1997, "A Newton Type Algorithm for the Control of Power Flow in Electrical Power Networks", IEEE Trans. on Power Systems, vol. 12, no. 4.
Galiana F.D. et al., 1996, "Assessment and Control of the Impact of the FACTS Devices on Power System Performance", IEEE Trans. on Power Systems, vol. 11, no. 4.
Ge S.Y. and Chung T.S., 1999, "Optimal Active Power Flow incorporating Power Flow Control Needs in Flexible A.C. Transmission Systems", IEEE Trans. on Power Systems, vol. 14, no. 2.
Hingorani N.G. and Gyugyi L., 1999, Understanding FACTS: Concepts and Technology of Flexible A.C. Transmission Systems, Piscataway: IEEE Press.
Kennedy J. and Eberhart R., 2000, "Particle swarm optimization", Proc. IEEE Conf. Neural Networks, vol. 4, pp. 1942-1948.
Preedavichit P. and Srivastava S.C., 1998, "Optimal reactive power dispatch considering FACTS devices", Electric Power Systems Research, vol. 46, pp. 251-257.
Sadat H., Power System Analysis (Power Flow Analysis, pp. 225-232).
Song S.H., Lim J.U. and Seung-II M., 2004, "Installation and Operation of FACTS Devices for Enhancing Steady-State Security", Electric Power Systems Research, vol. 70, pp. 7-15.
Taranto G.N., Pinto L.M.V.G. and Pereira M.V.F., 1992, "Representation of FACTS Devices in Power System Economic Dispatch", IEEE Trans. Power Syst., vol. 7, pp. 572-576.
Varshney S., Laxmi S. and Manjaree P., 2011, "Optimal location and sizing of STATCOM for Voltage Security Enhancement using PSO-TVAC", International Conference on Power System (ICPS-2011), IIT Madras, Chennai, India, December 22-24, 2011.
Wirmond V.E., Thelma S.P. and Odilon L.T., 2011, "TCPST allocation using optimal power flow and genetic algorithm", Electrical Power and Energy Systems, vol. 33, pp. 880-886.
Xiao Y., Song Y.H. and Sun Y.Z., 2002, "Power Flow Control Approach to Power Systems with Embedded FACTS Devices", IEEE Trans. on Power Systems, vol. 17, no. 4.
Optimal Placement and Sizing of TCSC for Minimization of Line Overloading and System Power Loss using Multi-Objective Genetic Algorithm
Gautam Singh Dohare*, Laxmi Srivastava
Department of Electrical Engg., Madhav Institute of Technology & Science, Gwalior, India
*Corresponding author (e-mail: gsdohare86@gmail.com)
In this paper, a multi-objective genetic algorithm (MOGA) based approach has been proposed for optimal placement and sizing of the thyristor controlled series compensator (TCSC) for minimization of line overloading and real power loss in a power system. TCSC devices are modern power electronics based devices by which power flow may be controlled and transmission losses can be reduced, and hence transfer capability can be enhanced. To implement MOGA, the most severe single line outage contingencies have been selected by calculating an overloading index (OLI) for various contingencies and then ranking them in decreasing order of their severity. Thereafter, considering some of the most critical contingencies one-by-one, MOGA has been implemented for finding the optimal location and size of TCSC for line overloading and real power loss reduction simultaneously. The multi-objective GA provides non-dominated solutions and simultaneously maintains diversity in the non-dominated solutions. A fuzzy set theory-based approach is used to obtain the best compromise solution over the trade-off curve. To examine the effectiveness of the MOGA based approach, it has been implemented on the IEEE 30-bus test system and the results obtained are found to be quite satisfactory.
Keywords: Overloading index (OLI), Power loss (PL), Thyristor controlled series compensator (TCSC), Flexible AC transmission system (FACTS), Multi-objective genetic algorithm (MOGA).
1. Introduction
In the last few years, power system operation has faced new challenges due to restructuring of the electricity industry. Over the last few decades, load demand has been continuously increasing. Due to this, the magnitude of the power flow in some of the transmission lines sometimes reaches very near its maximum limit, while in some other lines it is very low in comparison to their maximum rating. In case of contingencies, some of the lines get overloaded and the power system becomes insecure, which is the most undesirable state of a power system. With ever increasing load demand, for operating a power system efficiently and maintaining power system security during contingencies, either existing transmission or generation facilities must be utilized more efficiently or new facilities should be added to the existing power system. Due to constraints such as lack of investment and difficulties in getting new transmission lines, the latter is not a feasible option, but maximum efficiency with the existing system can be attained by using Flexible Alternating Current Transmission System (FACTS) devices. FACTS devices are solid state converters having the capability of improving power transmission capacity, improving the bus voltage profile, enhancing power system stability, minimizing transmission losses, etc. In order to obtain the maximum benefits from their use, the main issues to be considered are the type of FACTS device, its optimal location and its rating. Commonly used FACTS devices are the static var compensator (SVC), thyristor controlled series compensator (TCSC), unified power flow controller (UPFC), etc. SVC and STATCOM are connected in shunt with the system to improve the voltage profile by injecting or absorbing reactive power, while TCSC is connected in series with the system. Like other FACTS devices, TCSC is also a costly device; therefore it is important to find its optimal location and size (Benabid et al., 2009), in order to minimize line overloading and system losses in a power system. The complicated problem of optimal location and sizing of FACTS devices has been handled by researchers in various ways by considering different
objective functions. To improve the voltage security index of a power system, an alternative solution is to locate an appropriate FACTS device (Besharat and Taher, 2008). Conventionally, sensitivity analysis is used to decide the optimal placement of TCSC for static security enhancement and for reactive power dispatch. A real power performance index has been used for determining the optimal location of TCSC for congestion management and reduction of total system reactive power losses in deregulated power systems, while an OPF formulation has been proposed for investment recovery of FACTS devices in the deregulated electricity market (Verma, 2001). Various methods like sequential quadratic programming, mixed integer programming and the line stability index have been proposed for optimal location and sizing of TCSC for voltage stability enhancement. With the advent of evolutionary computational techniques like particle swarm optimization (PSO), differential evolution (DE), simulated annealing (SA), etc., these methods have been applied to the problem of optimal location of FACTS devices (Shanmukha Sundar and Ravikumar, 2012).
This paper proposes a multi-objective genetic algorithm (MOGA) based approach to find out the optimal location and size of TCSC for minimizing line overloading and power losses for severe single line outage contingencies. The severity of a single line outage contingency has been determined on the basis of the overloading index (OLI) of the power system. The contingency giving the highest value of OLI is considered as the most severe contingency. The optimal location and sizing of TCSC have been determined by applying MOGA for the most critical contingencies one-by-one. The effectiveness of the proposed multi-objective GA based approach has been tested on the IEEE 30-bus system (Sadat). The Genetic Algorithm toolbox of Matlab has been used for finding the optimal location and size of TCSC.
2. Static modeling of TCSC
The model of a transmission line with a TCSC connected between bus-i and bus-j is shown in Fig. 1. During steady state, the TCSC can be considered as a static reactance $-jx_c$. The real and reactive power flows from bus-i to bus-j, and from bus-j to bus-i, of a line having series impedance and a series capacitive reactance are (Verma, 2001):

$P_{ij}^c = V_i^2 G'_{ij} - V_i V_j (G'_{ij}\cos\delta_{ij} + B'_{ij}\sin\delta_{ij})$  (1)

$Q_{ij}^c = -V_i^2 (B'_{ij} + B_s) - V_i V_j (G'_{ij}\sin\delta_{ij} - B'_{ij}\cos\delta_{ij})$  (2)

$P_{ji}^c = V_j^2 G'_{ij} - V_i V_j (G'_{ij}\cos\delta_{ij} - B'_{ij}\sin\delta_{ij})$  (3)

$Q_{ji}^c = -V_j^2 (B'_{ij} + B_s) + V_i V_j (G'_{ij}\sin\delta_{ij} + B'_{ij}\cos\delta_{ij})$  (4)

The active and reactive power losses in the line having the TCSC can be written as

$P_L = P_{ij} + P_{ji} = G'_{ij}(V_i^2 + V_j^2) - 2 V_i V_j G'_{ij} \cos\delta_{ij}$  (5)

$Q_L = Q_{ij} + Q_{ji} = -(V_i^2 + V_j^2)(B'_{ij} + B_s) + 2 V_i V_j B'_{ij} \cos\delta_{ij}$  (6)
where

$G'_{ij} = \dfrac{r_{ij}}{r_{ij}^2 + (x_{ij} - x_c)^2}$ and $B'_{ij} = \dfrac{-(x_{ij} - x_c)}{r_{ij}^2 + (x_{ij} - x_c)^2}$

Figure 1. Model of transmission line with TCSC.
Figure 2. Injection model of TCSC.
The change in the line flow due to series capacitance can be expressed as a line without
series capacitance with power injected at the receiving and sending ends of the line as shown
in Fig. 2.
The real and reactive power injections at bus-i and bus-j can be expressed as,
$P_i^c = V_i^2 \Delta G_{ij} - V_i V_j [\Delta G_{ij}\cos\delta_{ij} + \Delta B_{ij}\sin\delta_{ij}]$  (7)

$P_j^c = V_j^2 \Delta G_{ij} - V_i V_j [\Delta G_{ij}\cos\delta_{ij} - \Delta B_{ij}\sin\delta_{ij}]$  (8)

$Q_i^c = -V_i^2 \Delta B_{ij} - V_i V_j [\Delta G_{ij}\sin\delta_{ij} - \Delta B_{ij}\cos\delta_{ij}]$  (9)

$Q_j^c = -V_j^2 \Delta B_{ij} + V_i V_j [\Delta G_{ij}\sin\delta_{ij} + \Delta B_{ij}\cos\delta_{ij}]$  (10)

where

$\Delta G_{ij} = \dfrac{x_c\, r_{ij}\,(x_c - 2x_{ij})}{(r_{ij}^2 + x_{ij}^2)\,\left(r_{ij}^2 + (x_{ij} - x_c)^2\right)}$ and $\Delta B_{ij} = \dfrac{-x_c\,(r_{ij}^2 - x_{ij}^2 + x_c x_{ij})}{(r_{ij}^2 + x_{ij}^2)\,\left(r_{ij}^2 + (x_{ij} - x_c)^2\right)}$
The rating $x_c$ of the TCSC depends on the reactance $x_{ij}$ of the line i-j. To prevent overcompensation, the TCSC reactance is considered in the range $-0.7\,x_{ij}$ to $0.3\,x_{ij}$.
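The modified series parameters G'_ij, B'_ij and the injection-model increments ΔG_ij, ΔB_ij can be computed together (a minimal Python sketch; the function and argument names are ours). A useful consistency check, encoded in the comment, is that ΔG_ij and ΔB_ij equal the differences between the uncompensated and compensated series parameters.

```python
def tcsc_line_params(rij, xij, xc):
    """Series parameters of a TCSC-compensated line and the injection-model
    increments (G'_ij, B'_ij and Delta G_ij, Delta B_ij of Section 2)."""
    x = xij - xc
    den = rij ** 2 + x ** 2
    Gp = rij / den                   # G'_ij
    Bp = -x / den                    # B'_ij
    den2 = (rij ** 2 + xij ** 2) * den
    dG = xc * rij * (xc - 2 * xij) / den2               # Delta G_ij
    dB = -xc * (rij ** 2 - xij ** 2 + xc * xij) / den2  # Delta B_ij
    # Identity: dG = G'(xc=0) - G'(xc) and dB = B'(xc=0) - B'(xc)
    return Gp, Bp, dG, dB
```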
3. Objective functions for multi-objective GA

3.1 Overloading index (OLI)

The severity of a contingency can be evaluated by an overloading index:

$OLI = \sum_{l \in nl} \dfrac{W}{2n} \left( \dfrac{S_l^{avg}}{S_l^{max}} \right)^{2n}$  (11)

where nl is the set of overloaded lines, $S_l^{avg}$ is the power flow of line l, $S_l^{max}$ is the rated capacity of line l, n is the exponent, and W is a real non-negative weighting coefficient which may be used to reflect the importance of lines. OLI will be zero when all the lines are within their maximum power flow limits and will reach a high value when there are overloads. Thus, it provides a good measure of the severity of line overloads for a given state of the power system. Most works on contingency selection algorithms utilize second order overloading indices which, in general, suffer from masking effects. The lack of discrimination, in which the overloading index for a case with many small violations may be comparable in value to the index for a case with one huge violation, is known as the masking effect. By most operational standards, a system with one huge violation is much more severe than one with many small violations. The masking effect can be avoided to some extent by using higher order overloading indices, i.e. n > 1. In this study, the value of the exponent has been taken as n = 2 and W = 1.
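Eq. (11) with n = 2 and W = 1 can be sketched as follows (a minimal Python illustration; names are ours). It also makes the masking effect discussed above easy to demonstrate: one 100% overload yields a larger index than four 10% overloads.

```python
def overloading_index(flows, ratings, n=2, W=1.0):
    """Overloading index of Eq. (11) for one contingency state.

    flows   : post-contingency flows of the overloaded lines
    ratings : corresponding rated capacities S_l^max
    """
    return sum(W / (2 * n) * (S / Smax) ** (2 * n)
               for S, Smax in zip(flows, ratings))
```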
3.2 Power loss (PL)

The second objective of this work is to determine the optimal location and sizing of TCSC in the power system to minimize the power loss. The power loss is described as

$P_L = \sum_{k=1}^{ntl} g_k \left[ V_i^2 + V_j^2 - 2 V_i V_j \cos(\delta_i - \delta_j) \right]$  (12)

subject to the following equality constraints:

$P_{gi} - P_{di} - \sum_{j=1}^{N} V_i V_j Y_{ij} \cos(\delta_{ij} - \theta_{ij}) = 0, \quad Q_{gi} - Q_{di} - \sum_{j=1}^{N} V_i V_j Y_{ij} \sin(\delta_{ij} - \theta_{ij}) = 0$  (13)

and the following inequality constraints:

$P_{gi}^{min} \le P_{gi} \le P_{gi}^{max}, \quad Q_{gi}^{min} \le Q_{gi} \le Q_{gi}^{max}, \quad \forall i \in NG$

$V_i^{min} \le V_i \le V_i^{max}, \quad \delta_{ij}^{min} \le \delta_{ij} \le \delta_{ij}^{max}$

$X_{TCSC}^{min} \le X_i \le X_{TCSC}^{max}$
where F is the objective function, $P_L$ is the power loss, ntl is the number of lines in the system, N is the set of buses, NG is the set of generation buses, $Y_{ij}$ is the magnitude of the ij-th element of the admittance matrix, $\theta_{ij}$ is the phase angle of the ij-th element of the admittance matrix, $P_{gi}$ and $Q_{gi}$ are the active and reactive power generation at bus i, $P_{di}$ and $Q_{di}$ are the active and reactive power load at bus i, $V_i$ is the voltage magnitude at bus i, $\delta_{ij}$ is the power angle, and $X_{TCSC}$ is the reactance of the TCSC.
4. Multi-objective genetic algorithm for optimal location of TCSC
Genetic algorithms (GA) (Goldberg, 1989) are generalized search algorithms based on
the mechanics of natural genetics. GA maintains a population of individuals that represent the
candidate solutions to the given problem. Each individual in the population is evaluated to
give some measure of its fitness to the problem from the objective function. GAs combine
solution evaluation with stochastic genetic operators namely, selection, crossover and
mutation to obtain near optimality. Being a population-based approach, GA is well suited to
solve multi-objective optimization problems. Multi-objective genetic algorithm (Deb, 2005;
Goldberg, 1989; Fonseca and Fleming, 1995) is an extension of classical GA. The main
difference between a conventional GA and a MOGA lies in the assignment of fitness to an
individual. The rest of the algorithm is the same as that in a classical GA (Narmatha Banu and
Devaraj, 2012).
In the multi-objective GA, first, each solution is checked for its domination in the population. Two solutions $x^{(1)}$ and $x^{(2)}$ are compared on the basis of whether one dominates the other or not. In a minimization problem, a solution $x^{(1)}$ dominates $x^{(2)}$ if the following two conditions are satisfied:

(a) The solution $x^{(1)}$ is no worse than $x^{(2)}$ in all objectives, i.e. $f_i(x^{(1)}) \le f_i(x^{(2)})$ for all i = 1, 2, …, M, where M is the number of objective functions.  (14)

(b) The solution $x^{(1)}$ is strictly better than $x^{(2)}$ in at least one objective, i.e. $f_j(x^{(1)}) < f_j(x^{(2)})$ for at least one j ∈ {1, 2, …, M}.  (15)

If either of the above conditions is violated, the solution $x^{(1)}$ does not dominate the solution $x^{(2)}$. Each solution i is assigned a rank $r_i$ equal to one plus the number of solutions $n_i$ that dominate solution i:

$r_i = 1 + n_i$
In this way, non-dominated solutions are assigned a rank equal to 1, since no solution dominates a non-dominated solution in the population. Once the ranking is done, a raw fitness is assigned to each solution on the basis of its rank. To perform this, the ranks are sorted in ascending order of magnitude and then a raw fitness is assigned to each solution using a mapping function. Generally, the mapping function is selected so as to assign fitness between N (for the best rank solution) and 1 (for the worst rank solution). Subsequently, the solutions of each rank are considered one rank at a time and their raw fitnesses are averaged. This average fitness is called the assigned fitness of each solution of that rank. This process emphasizes non-dominated solutions in the population. In order to maintain diversity among non-dominated solutions, niching among the solutions of each rank is introduced. The niche count is calculated
with the following equation:

$nc_i = \sum_{j=1}^{\mu(r_i)} S(d_{ij})$  (16)
where $\mu(r_i)$ is the number of solutions in rank $r_i$ and $S(d_{ij})$ is the sharing function of two solutions i and j. The sharing function S(d) is calculated using the objective function values as the distance metric as
$S(d_{ij}) = \begin{cases} 1 - \left(\dfrac{d_{ij}}{\sigma_{share}}\right)^{\alpha}, & \text{if } d_{ij} \le \sigma_{share} \\[4pt] 0, & \text{otherwise} \end{cases}$  (17)

where $\sigma_{share}$ is the sharing parameter, which signifies the maximum distance between any two solutions before they can be considered to be in the same niche, and $d_{ij}$ is the normalized distance between any two solutions i and j in a rank. The normalized distance $d_{ij}$ is calculated using

$d_{ij} = \sqrt{\sum_{k=1}^{M} \left( \dfrac{f_k^i - f_k^j}{f_k^{max} - f_k^{min}} \right)^2}$  (18)

where $f_k^{max}$ and $f_k^{min}$ are the maximum and minimum objective function values of the k-th objective. The sharing function takes a value in [0, 1], depending on the values of $d_{ij}$
and $\sigma_{share}$. The shared fitness value is calculated by dividing the assigned fitness of a solution by its niche count. Although all solutions of any particular rank have identical assigned fitness, a solution residing in a less crowded region has a better shared fitness. This produces a larger selection pressure for poorly represented solutions in any rank. Dividing the assigned fitness value by the niche count reduces the fitness of each solution. In order to keep the average fitness of the solutions in a rank the same as that before sharing, these fitness values are scaled using Eq. (19) so that their average scaled fitness value is the same as the average assigned fitness value.
$f_j^{SC} = \dfrac{f_j'\,\mu(r_i)}{\sum_{k=1}^{\mu(r_i)} f_k'}$  (19)

where $f_j^{SC}$ is the scaled fitness, $f_j'$ is the shared fitness calculated as $f_j' = f_j / nc_j$, and $\mu(r_i)$ is the number of solutions in rank $r_i$.
This procedure is continued until all ranks are processed. Then, selection, crossover and mutation operators are applied to create a new population. With each individual represented as a string of integers and floating point numbers, the selection process remains the same as in the classical GA, but the crossover and mutation operators are applied variable by variable. In this paper, tournament selection, BLX-α crossover and non-uniform mutation operators (Goldberg, 1989; Fonseca and Fleming, 1995; Narmatha Banu and Devaraj, 2012) are used.
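The domination test of Eqs. (14)-(15) and the rank assignment $r_i = 1 + n_i$ can be sketched as follows (a minimal Python illustration for the minimization case; function names are ours, not from the paper):

```python
def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (minimization, Eqs. (14)-(15)):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def rank_population(objs):
    """Rank r_i = 1 + n_i, where n_i counts the solutions dominating i;
    non-dominated solutions therefore receive rank 1."""
    return [1 + sum(1 for j in range(len(objs))
                    if j != i and dominates(objs[j], objs[i]))
            for i in range(len(objs))]
```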
5. Best compromise solution
After obtaining the Pareto-optimal set of non-dominated solutions, the proposed approach presents one solution to the decision maker as the best compromise solution. Due to the imprecise nature of the decision maker's judgment, the i-th objective function $f_i$ is represented by a membership function $\mu_i$ defined as [9]
$\mu_i = \begin{cases} 1, & f_i \le f_i^{min} \\[4pt] \dfrac{f_i^{max} - f_i}{f_i^{max} - f_i^{min}}, & f_i^{min} < f_i < f_i^{max} \\[4pt] 0, & f_i \ge f_i^{max} \end{cases}$  (20)

where $f_i^{min}$ and $f_i^{max}$ are the minimum and maximum values of the i-th objective function among all non-dominated solutions, respectively. For each non-dominated solution k, the normalized membership function $\mu^k$ is calculated as

$\mu^k = \dfrac{\sum_{i=1}^{N_{obj}} \mu_i^k}{\sum_{k=1}^{M} \sum_{i=1}^{N_{obj}} \mu_i^k}$  (21)
where M is the number of non-dominated solutions and Nobj is the number of objective functions, which is 2 in this paper. The best compromise solution is the one having the maximum value of µ^k.
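The fuzzy selection of Eqs. (20) and (21) can be sketched as follows; this is an illustrative implementation with an invented function name, and the Pareto front in the example is hypothetical, not data from this paper.

```python
import numpy as np

def best_compromise(F):
    """Fuzzy best-compromise selection over a Pareto set (Eqs. 20-21 sketch).

    F : (M, Nobj) array of objective values of the M non-dominated
        solutions (all objectives to be minimized).
    Returns the row index of the best compromise solution.
    """
    F = np.asarray(F, dtype=float)
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    # Eq. (20): linear membership, 1 at each objective's minimum, 0 at its maximum
    mu = np.clip((fmax - F) / (fmax - fmin), 0.0, 1.0)
    # Eq. (21): normalize each solution's summed membership over all solutions
    mu_k = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(mu_k))

# Hypothetical 2-objective Pareto front (e.g. OLI, power loss): the
# middle solution trades off both objectives and wins the comparison.
front = [[4.0, 0.10], [3.0, 0.20], [2.0, 0.40]]
idx = best_compromise(front)
```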
6. Results and discussion
The effectiveness of the proposed MOGA-based method is illustrated by applying the approach to the IEEE 30-bus system [10]. This test system consists of 6 generation buses, 24 load buses, and 41 transmission lines. In this paper, the optimal placement and size of the TCSC have been determined for minimization of the overloading index and the system real power losses. The largest value of the overloading index indicates the most critical line among all lines in the power system. The ranking of the five most critical lines is given in Table 1: lines 10, 36, 27, 15 and 5, respectively. As can be observed from Table 1, line no. 10 (connected between buses 6-8) has the highest value of the overloading index (OLI) and is therefore ranked as the weakest line. For these selected critical contingencies, the proposed multi-objective genetic algorithm (MOGA) was applied to find the optimal placement and sizing of the TCSC; its results are presented in this paper. The optimization parameters are given in Table 2.
Table 1. OLI Ranking for IEEE 30-Bus System

S. No. | Line outage | Overloading index | Power loss | Rank
1.     | 10          | 4.2460            | 0.1855     | I
2.     | 36          | 3.6751            | 0.1984     | II
3.     | 27          | 2.8341            | 0.1799     | III
4.     | 15          | 1.5015            | 0.2035     | IV
5.     | 5           | 0.9249            | 0.3291     | V
Table 2. Parameters of MOGA

Parameter             | Value
Number of variables   | 1
Population size       | 16
Number of generations | 200
Pareto fraction       | 0.5
StallGenLimit         | 15
TolFun                | 1e-3
Table 3. Results of TCSC Placement in IEEE 30-Bus System

S.No. | Line outage | TCSC location | TCSC value | OLI without TCSC | OLI with TCSC | PL without TCSC | PL with TCSC
1     | 10          | 13            | -0.1456    | 4.2460           | 4.1088        | 0.1855          | 0.1849
2     | 36          | 33            | -0.2304    | 3.6751           | 3.5321        | 0.1984          | 0.1972
3     | 27          | 29            | -0.0165    | 2.8341           | 2.8266        | 0.1799          | 0.1799
4     | 15          | 13            | -0.1408    | 1.5015           | 1.0018        | 0.2035          | 0.2016
5     | 5           | 8             | 0.0348     | 0.9249           | 0.8433        | 0.3291          | 0.3258
After installation of the TCSC at its optimal location for the outage of lines 10, 36, 27, 15 and 5, taken one at a time, both the overloading index (OLI) and the power loss (PL) were reduced. Table 3 shows the optimal location and sizing of the TCSC for each of these line-outage cases. The optimum TCSC locations in the IEEE 30-bus system for overloading and power-loss minimization were found to be lines 13, 33, 29, 13 and 8, respectively. It can thus be clearly observed that the optimum TCSC location for one contingency may not be optimum for other contingencies, and that more than one TCSC is required to minimize overloading and power losses under various contingencies.
7. Conclusion
In this paper, a multi-objective genetic algorithm has been proposed for the optimal placement and sizing of a TCSC for overloading index and power loss minimization under single line outage contingencies of a power system. The effectiveness of the method has been demonstrated on the IEEE 30-bus system. It has been observed that the optimum TCSC location for one contingency may not be optimum for other contingencies, and that more than one TCSC is required to minimize overloading and power losses simultaneously under various contingencies.
Acknowledgment
The authors sincerely acknowledge the financial assistance received from the Department of Science and Technology, New Delhi, India, vide letter no. SR/S3/EECE/0064/2009, dated 22-01-2010, and from the Director, Madhav Institute of Technology & Science, Gwalior, India, to carry out this research work.
References
Benabid R., Boudour M., Abido M.A., 2009, ‘Optimal location and setting of SVC and TCSC devices using non-dominated sorting particle swarm optimization’, Electric Power Systems Research, vol. 79, pp. 1668-1677.
Besharat H., Taher S. A, 2008, ‘Congestion management by determining optimal location of
TCSC in deregulated power system’, Electrical Power and Energy System, vol. 30, pp.
563-568.
Deb K., 2005, ‘Multi-objective Optimization Using Evolutionary Algorithms’. John Wiley &
Sons, Ltd., New York, pp. 209–213.
Fonseca C.M., Fleming P.J., 1995, An overview of evolutionary algorithms in multiobjective
optimization, Evolutionary Computation, vol. 3 (1), pp. 1–16.
Gerbex S., Cherkaoui R. and Alain J. G., 2001, ‘Optimal location of multi-type FACTS devices
in a power system by means of genetic algorithms’, IEEE Transactions on power system,
Vol.16, no.3.
Goldberg D.E., 1989, Genetic Algorithms for Search, Optimization, and Machine Learning,
Addison–Wesley, Reading, MA.
Narmatha Banu R., Devaraj D., 2012, ‘Multi-objective GA with fuzzy decision making for
security enhancement in power system’. Applied Soft Computing 12, 2756–2764.
Saadat H., ‘Power System Analysis’, Tata McGraw-Hill Publishing Company Limited.
Shanmukha Sundar K., Ravikumar H.M., 2012, ‘Selection of TCSC location for secured
optimal power flow under normal and network contingencies’, Electrical Power Energy
Systems, vol. 34, pp. 29-37.
Verma K. S., Singh S. N., Gupta H.O., 2001, ‘FACTS devices location for enhancement of
total transfer capability’, Power Engineering Society winter meeting, IEEE, vol. 2, pp.
522–527.
ACODE Algorithm Based Optimized Artificial Neural Network
for AGC in Deregulated Environment
K. Chandra Sekhar, K. Vaisakh*
A.U. College of Engineering, Andhra University, Visakhapatnam-530003, A.P, India
Corresponding author (e-mail: vaisakh_k@yahoo.co.in)
This paper presents an artificial neural network (ANN) based frequency controller
design for two-area Automatic Generation Control (AGC) scheme in a deregulated
environment. A three layer feed forward neural network (NN) is proposed for controller
design and trained with back propagation algorithm (BPA). Aside from choosing a
training algorithm to train ANNs, the ANN structure can also be optimized by applying
certain pruning techniques to reduce network complexity. The Adaptive Composite
Differential Evolution (ACODE) based optimization algorithm is used as the training
algorithm and the Optimal Brain Damage (OBD) method as the pruning algorithm. This
study suggests an approach to ANN training through the simultaneous optimization of
the connection weights and ANN structure. The functioning of the proposed ANN based
controller has been demonstrated on a two-area system and the results have been
compared with those obtained by other methods reported in the literature.
1. Introduction
Parallel operation and frequency control of interconnected power systems have become a challenge for control engineers. Deviations of the frequencies and tie-line power arise because of sudden load variations, which cause a mismatch between the generated and the demanded power. The main objective of Automatic Generation Control (AGC) is to maintain the system frequency at its nominal value and the power interchanges between different areas at their scheduled values. The concepts of conventional AGC are well discussed in (Elgerd, 1982; Elgerd and Fosha, 1970).
In the deregulated environment of the electricity sector, there will be many market
players, such as generating companies (Gencos), distribution companies (Discos),
transmission company (Transco), and system operator (SO). For stable and secure operation
of a power system under deregulated environment, the SO has to provide a number of
ancillary services. One of the ancillary services is the “frequency regulation” based on the
concept of load frequency control. A detailed discussion of load frequency control issues in power system operation under a deregulated environment is reported in (Christie and Bose, 1996).
In this work, a general model for two-area AGC suitable for a competitive electricity environment has been proposed. A feed-forward neural network has been designed to eliminate the frequency error in the developed model. The Area Control Error (ACE) and the load disturbance are taken as the inputs to the neural network, and the output of the ANN provides the changes required in the governor inputs to eliminate the frequency error. A back-propagation algorithm has been used to train the artificial neural network offline. The performance studies have been carried out using MATLAB SIMULINK for transactions within and across the control area boundaries.
2. Restructured power system
A. Traditional vs. Restructured Scenario: The traditional power system industry has a
“vertically integrated utility” (VIU) structure. In the restructured or deregulated environment,
vertically integrated utilities no longer exist. The utilities no longer own generation,
transmission, and distribution; instead, there are three different entities, viz., GENCOs
(generation companies), TRANSCOs (transmission companies) and DISCOs (distribution
companies).
As there are several GENCOs and DISCOs in the deregulated structure, a DISCO
has the freedom to have a contract with any GENCO for transaction of power. A DISCO may
have a contract with a GENCO in another control area. Such transactions are called “bilateral
transactions.” All the transactions have to be cleared through an impartial entity called an
independent system operator (ISO). For an in-depth discussion of implications of restructuring
the power industry, refer to (Nobile et al., 2000).
2.1.1 DISCO participation matrix
In the restructured environment, GENCOs sell power to various DISCOs at
competitive prices. Thus, DISCOs have the liberty to choose the GENCOs for contracts. They
may or may not have contracts with the GENCOs in their own area. This makes various
combinations of GENCO-DISCO contracts possible in practice. To represent these contracts, a “DISCO participation matrix” (DPM) is defined: a matrix with the number of rows equal to the number of GENCOs and the number of columns equal to the number of DISCOs in the system. Each entry in this matrix can be thought of as a fraction of the total load contracted by a DISCO (column) toward a GENCO (row); thus, the (i, j)th entry corresponds to the fraction of the total load power contracted by DISCO j from GENCO i. The sum of all the entries in a column of this matrix is unity. The DPM shows the participation of a DISCO in a contract with a GENCO; hence the name “DISCO participation matrix” (Kumar et al., 1997).
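The DPM properties above can be illustrated with a small sketch. The 2-GENCO × 2-DISCO matrix and the load values here are hypothetical, chosen only to show the column-sum constraint and how contracted GENCO demands follow from DISCO loads.

```python
import numpy as np

# Hypothetical DPM for 2 GENCOs x 2 DISCOs: entry DPM[i, j] is the
# fraction of DISCO j's contracted load supplied by GENCO i, so every
# column must sum to unity.
DPM = np.array([[0.5, 0.25],
                [0.5, 0.75]])

assert np.allclose(DPM.sum(axis=0), 1.0)  # each DISCO's contracts total 100%

# Contracted demand on each GENCO for given DISCO loads (pu MW):
disco_loads = np.array([0.1, 0.1])
genco_demand = DPM @ disco_loads          # GENCO1: 0.075, GENCO2: 0.125 pu MW
```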
2.1.2 Block diagram formulation
In this section, the block diagram for a two-area AGC system in the deregulated
scenario is formulated. Whenever a load demanded by a DISCO changes, it is reflected as a
local load in the area to which this DISCO belongs. This corresponds to the local loads
∆𝑃𝐿1 , ∆𝑃𝐿2 and should be reflected in the deregulated AGC system block diagram at the point
of input to the power system block. As there are many GENCOs in each area, ACE signal has
to be distributed among them in proportion to their participation in the AGC. Coefficients that
distribute ACE to several GENCOs are termed “ACE participation factors” (apf's). Note that Σ_{j=1}^{m} apf_j = 1, where m is the number of GENCOs. Unlike in the traditional AGC system,
a DISCO asks/demands a particular GENCO or GENCOs for load power. These demands
must be reflected in the dynamics of the system. Turbine and governor units must respond to
this power demand. Thus, as a particular set of GENCOs are supposed to follow the load
demanded by a DISCO, information signals must flow from a DISCO to a particular GENCO
specifying corresponding demands. Here, we introduce the information signals which were
absent in the traditional scenario. The demands are specified by 𝑐𝑝𝑓′𝑠 (elements of DPM) and
the pu MW load of a DISCO. These signals carry information as to which GENCO has to
follow a load demanded by which DISCO. The block diagram for AGC in a deregulated system used in this paper is structurally based upon the idea of (Kumar et al., 1997).
2.1.3 State space characterization of the two-area system in deregulated environment
The closed loop system of AGC is characterized in state space form as
𝑥̇ = 𝐴_cl 𝑥 + 𝐵_cl 𝑢        (1)
where 𝑥 is the state vector and 𝑢 is the vector of power demands of the DISCOs; 𝐴_cl and 𝐵_cl are the closed-loop system matrices, taken from (Kumar et al., 1997).
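A model of the form of Eq. (1) can be stepped forward in time with a simple integrator. The sketch below is illustrative only: the forward-Euler scheme and the single-state matrices are placeholders, not the actual two-area AGC matrices of (Kumar et al., 1997).

```python
import numpy as np

def simulate(Acl, Bcl, u, x0, dt=0.01, steps=1000):
    """Forward-Euler integration of x_dot = Acl x + Bcl u for a
    constant demand vector u. Returns the state trajectory."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (Acl @ x + Bcl @ u)   # one Euler step of Eq. (1)
        traj.append(x.copy())
    return np.array(traj)

# Toy stable system x_dot = -x + 0.5: the state settles toward 0.5,
# analogous to a deviation settling to its forced steady-state value.
A = np.array([[-1.0]])
B = np.array([[1.0]])
traj = simulate(A, B, u=np.array([0.5]), x0=[0.0])
```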
Optimized artificial neural network controller
An artificial neural network (ANN), also known as neural network (NN), is an abstract
representation of the biological nervous system. It is composed of a collection of neurons that communicate with each other through axons. An artificial neural network is an adaptive
system that has interesting attributes like the ability to adapt, learn and generalize. An ANN is
also highly accurate in classification and prediction of output because of its massively parallel
processing, fault tolerance, self-organization and adaptive capability which enables it to solve
many complex problems. Its ability to solve different problems is achieved by changing its
network structure during the learning (training) process (Huanping, 2011; Shi, 2009). However, the determination of various ANN parameters, such as the number of hidden layers, the number of neurons in each hidden layer and the initialization of connection weights, is not a straightforward process, and finding the optimal configuration of an ANN is very time consuming. Thus, designing an optimal ANN structure and choosing an effective ANN training algorithm for a given problem is an interesting research area.
Moreover, since the determination of various ANN parameters is not a straightforward process, many studies have been conducted with the purpose of finding the optimal configuration of ANNs. As a result, several algorithms have been proposed for training ANNs, including Genetic Algorithms (GA), Ant Colony Optimization (ACO) (Shi and Li, 2009), the Artificial Bee Colony (ABC) (Garro et al., 2011) and Particle Swarm Optimization (PSO). These algorithms vary in how effectively they can optimize the artificial neural network with respect to the problem being solved. Adaptive Composite Differential Evolution (ACODE) is a more recent evolutionary optimization algorithm. It was developed to solve various engineering problems and has been shown to perform better in finding globally optimal solutions than several existing optimization algorithms (Storn and Price, 2005).
In addition to choosing a training algorithm to train ANNs to carry out a certain task, the ANN structure can also be optimized by applying pruning techniques that reduce network complexity without drastically affecting its classification and prediction capabilities. The pruning technique considered in this paper is the Optimal Brain Damage (OBD) algorithm, which has been found to be computationally simple and to produce relatively good ANNs. Consequently, ANN research can be classified into two categories: (1) training ANNs using a training algorithm and a non-OBD pruning technique to further improve the ANNs (Augasta and Kathirvalavakumar, 2011; Li and Niu, 2008; Tu et al., 2010); and (2) training ANNs using a training algorithm and the OBD pruning technique to further improve the ANNs (Orlowska-Kowalska and Kaminski, 2009; Sansa et al., 2011; Orlowska-Kowalska and Kaminski, 2008).
In this paper, the ACODE algorithm is used as the training algorithm for the ANNs, with OBD as the pruning method. The objective is to develop an ACODE-based ANN optimizer that trains artificial neural networks to learn the input-output relationships of a given problem and then uses the OBD pruning method to generate an optimal network structure. That is, the ACODE-based ANN optimizer generates an optimal set of connection weights and an optimal structure for a given problem.
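The core of OBD pruning is the saliency of each weight, s_i = h_ii w_i² / 2, which estimates how much the training error changes when that weight is removed. The sketch below is illustrative, not the authors' implementation: the diagonal Hessian values are supplied directly (in practice they are estimated by an extra backward pass), and all names and example values are invented.

```python
import numpy as np

def obd_saliencies(weights, hessian_diag):
    """OBD saliency s_i = 0.5 * h_ii * w_i^2.

    Weights with the smallest saliency change the training error least
    when removed, so they are the first candidates for pruning.
    """
    w = np.asarray(weights, dtype=float)
    h = np.asarray(hessian_diag, dtype=float)
    return 0.5 * h * w ** 2

def prune_smallest(weights, hessian_diag, n_prune):
    """Zero out the n_prune weights with the lowest saliency."""
    w = np.asarray(weights, dtype=float).copy()
    order = np.argsort(obd_saliencies(w, hessian_diag))
    w[order[:n_prune]] = 0.0
    return w

# With a uniform Hessian diagonal, the two near-zero weights have the
# lowest saliencies and are pruned; the large weights survive.
w = prune_smallest([0.9, 0.01, -0.5, 0.02], [1.0, 1.0, 1.0, 1.0], n_prune=2)
```

After pruning, the remaining weights would normally be retrained (here, by another round of ACODE) before the next pruning step.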
2.2 Simulation results of a two-area system in the deregulated environment
Here, the objective is to formulate the AGC problem in each control area and propose an effective supplementary control loop based on ANNs [6]. Since there is a strong relationship between the training of ANNs and adaptive/self-tuning control, increasing the flexibility of the structure induces a more efficient learning ability in the system, which in turn requires fewer iterations and gives better error minimization. To obtain this improved flexibility, the teaching signals and other parameters of the ANNs (such as connection weights) should be related to each other. In this work, a two-area interconnected deregulated power system is taken for the study of AGC, and an ANN controller is presented for load frequency control. The superiority of the controller is established by comparing its performance with that of a conventional integral controller. A two-area system is used to illustrate the behaviour of the proposed AGC scheme, with the same data as in (Paliwal and Kumar, 2009; Shi and Li, 2009) used for the simulations. Both areas are assumed to be identical, as are the governor-turbine units in each area. The simulation diagram of the two-area deregulated power system with the neural network controller under the base case (Case 1) is shown in Fig. 1 below.
Figure 1. Simulation diagram of two area deregulated system with optimized artificial neural
network controller for base case.
2.2.1 Case 1: Base Case
Consider a case where the GENCOs in each area participate equally in AGC. Assume that the load change occurs only in area I; thus, the load is demanded only by DISCO1 and DISCO2. Let the value of this load demand be 0.1 pu MW for each of them. Note that DISCO3 and DISCO4 do not demand power from any GENCO, and hence the corresponding participation factors are zero. DISCO1 and DISCO2 demand identically from their local GENCOs, viz., GENCO1 and GENCO2. Figs. 2 and 3 show the results of this load change: the frequency deviations in areas 1 and 2, respectively. The frequency deviation in each area goes to zero in the steady state.
2.2.2 Case 2: All DISCOs contract with GENCOs
Consider a case where all the DISCOs contract with the GENCOs for power. It is assumed that each DISCO demands 0.1 pu MW power from the GENCOs as defined by the DPM, and that each GENCO participates in AGC. The system in Fig. 1 is simulated using this data and the results are depicted in Figs. 4 and 5, which show the frequency deviations in areas 1 and 2, respectively. The frequency deviation in each area goes to zero in the steady state.
2.2.3 Case 3: Contract Violation
It may happen that a DISCO violates a contract by demanding more power than that specified in the contract. This excess power is not contracted out to any GENCO, so it must be supplied by the GENCOs in the same area as the DISCO; it is reflected as a local load of the area, not as contract demand. The system in Fig. 1 is simulated using this data and the results are depicted in Figs. 6 and 7. The frequency deviations vanish in the steady state.
Figure 2. Frequency deviation Δf1 vs. time in sec. (Case 1)
Figure 3. Frequency deviation Δf2 vs. time in sec. (Case 1)
Figure 4. Frequency deviation Δf1 vs. time in sec. (Case 2)
Figure 5. Frequency deviation Δf2 vs. time in sec. (Case 2)
Figure 6. Frequency deviation Δf1 vs. time in sec. (Case 3)
Figure 7. Frequency deviation Δf2 vs. time in sec. (Case 3)
3. Conclusion
A general-purpose ANN controller for two-area AGC, suitable for a deregulated electricity market, has been studied. The ACODE-based ANN optimizer with the OBD pruning algorithm was able to generate artificial neural networks with low training error, high classification accuracy and a low misclassification rate. Thus, the ACODE optimization algorithm is an effective training algorithm for artificial neural networks: it was able to produce networks that perform well on different datasets.
References
Garro B. A., Sossa H. and Vázquez R. A., 2011, Artificial Neural Network Synthesis by
means of Artificial Bee Colony (ABC) Algorithm. Proceedings of the IEEE Congress on
Evolutionary Computation, CEC 2011, pp. 331-338.
Nobile E., Bose A., and Tomsovic K., 2000,“Bilateral market for load following ancillary
services,” in Proc. PES Summer Power Meeting, Seattle, WA, July 15–21, 2000.
Shi H. and Li W., 2009, Artificial Neural Networks with Ant Colony Optimization for Assessing
Performance of Residential Buildings. Proceedings of the International Conference on
Future BioMedical Information Engineering, FBIE 2009, (2009), pp. 379-382.
Shi H., 2009, Evolving Artificial Neural Networks Using GA and Momentum. Proceedings of
the 2009 Second International Symposium on Electronic Commerce and Security,
ISECS '09, (1): pp. 475-478.
Sansa I., Mrabet N. B., and Bouzid Ben Khader M., 2011, Effectiveness of the Saliency-Based Methods in Optimization of NN Structure for Induction Motor Fault Diagnosis. Proceedings of the 8th International Multi-Conference on Systems, Signals & Devices, pp. 1-7.
Kumar J., Ng K., and Sheble G., 1997, “AGC simulator for price-based operation: Part I,”
IEEE Trans. Power Systems, vol. 12, no. 2.
Tu J., Zhan Y. and Han F., 2010, A Neural Network Pruning Method Optimized with PSO
Algorithm. In Proceedings of the 2010 Second International Conference on Computer
Modeling and Simulation, ICCMS '10, (3), pp. 257-259.
Li L. and Niu B., 2008, Designing Artificial Neural Networks Using MCPSO and BPSO,
Proceedings of the 2008 International Conference on Computational Intelligence and
Security, CIS 2008, pp. 176-179.
Gethsiyal Augasta M. and Kathirvalavakumar T.,2011, A Novel Pruning Algorithm for
Optimizing Feedforward Neural Network of Classification Problems. Neural Processing
Letters, 34(3), pp. 241-258.
Paliwal M. and Kumar U., 2009, A. Neural Networks and Statistical Techniques: A Review of
Applications. Expert Systems with Applications, 36(1), pp. 2-17.
Elgerd O. I., and Fosha C., 1970 “Optimum Megawatt-Frequency Control of Multi-Area
Electric Energy Systems,” IEEE Transactions on Power Apparatus and Systems, vol.
PAS-89, No. 4, April 1970, pp. 556-563.
Elgerd O. I., and Fosha C., 1970, “The Megawatt-Frequency Control Problem: A New Approach via Optimal Control Theory,” IEEE Trans. Power Apparatus and Systems, vol. PAS-89, Apr. 1970, pp. 563–577.
Elgerd O. I., 1982, Electric Energy Systems Theory: An Introduction, McGraw Hill.
Christie R.D. and Bose A., 1996, “Load Frequency Control Issues In Power System
Operation after Deregulation,” IEEE Transactions on Power Systems, vol. 11, No.3,
August 1996, pp. 1191-1200.
Storn R. and Price K., 2005, "Differential Evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces", Technical Report TR-95-012, http://www.icsi.berkeley.edu/ftp/pub/techreports/1995/tr-95-012.pdf
Orlowska-Kowalska T. and Kaminski M., 2009, Effectiveness of Saliency-Based Methods in Optimization of Neural State Estimators of the Drive System with Elastic Couplings. IEEE Transactions on Industrial Electronics, 56(10), pp. 4043-4051.
Orlowska-Kowalska T. and Kaminski M., 2008, Optimization of Neural State Estimators of the Two-mass System using OBD method. Proceedings of the IEEE International Symposium on Industrial Electronics, ISIE 2008, pp. 461-466.
Huanping Z., Congying L., Xinfeng Y., 2011, Optimization research on Artificial Neural
Network Model. Proceedings of the 2011 International Conference on Computer
Science and Network Technology, pp. 1724-1727.
Analysis of Transverse Vibration of a Simply Supported Beam
through the Finite Element Method
Sanjay Kumar Jha1*, Rupak Kumar Deb2, Vinay Chandra Jha2, Iqbal Ahmed Khan3
1 Mechanical Engineering Department, Gold Field Inst. of Tech. & Mgt., Faridabad (HR), INDIA
2 Mechanical Engineering Department, Lingaya's University, Faridabad (HR), INDIA
3 Mechanical Engineering Department, Galgotias University, Greater Noida (UP), INDIA
*Corresponding author (e-mail: sanjay.cp@gmail.com)
The shear force and bending moment phenomena are the basic design considerations in simply supported beams subjected to flexural loading. In this paper, the free vibration of the beam is investigated using the finite element method, and a basic understanding of the influence of applied force on the natural frequencies of the beam is presented. Hamilton's principle applied to the Lagrangian function is used to derive the equations of motion. In addition, other factors affecting the vibration of beams are discussed. The variables of the beam are the slenderness ratio and the shearing consideration. The numerical results for free vibration of the beam are presented and compared with results obtained using MATLAB R2010a to plot the modal natural frequencies of the simply supported beam. The modal frequencies are highly useful for vibration analysis and for identifying resonance in a structure, so the beam is taken and its modal natural frequencies are computed.
1. Objective and scope of work
In this paper, the finite element method is used to formulate the equations of motion of a homogeneous hinged-hinged beam. The natural frequencies of the homogeneous beam are found for different beam variables using MATLAB R2010a, and these results are compared with those obtained by the finite element method. Using these results, the frequency and the beam variables are correlated.
2. Introduction
A beam is a horizontal or inclined structural member having one or more supports, carrying vertical loads across (transverse to) its longitudinal axis, such as a girder, purlin, or rafter. Three basic types of beams are: (1) simple span, supported at both ends; (2) continuous, supported at more than two points; and (3) cantilever, supported at one end with the other end overhanging and free (Seon et al., 1999; Leszek, 2009; Davis et al., 1972).
Generally, there are two types of beam models: the Euler-Bernoulli beam and the Timoshenko beam. The classical Euler-Bernoulli beam theory assumes that:
1. Cross-sectional planes perpendicular to the axis of the beam remain plane after deformation.
2. The deformed cross-sectional plane is still perpendicular to the axis after deformation.
3. Transverse shearing deformation is neglected; the transverse shear is determined from the equations of equilibrium.
In Euler-Bernoulli beam theory, shear deformations and rotation effects are neglected, and plane sections remain plane and normal to the longitudinal axis. In the Timoshenko beam theory, plane sections still remain plane but are no longer normal to the longitudinal axis (Sampaio and Cataldo, 2008; Henri et al., 2010).
An exact formulation of the beam problem was first investigated in terms of general elasticity
equations by Pochhammer (1876) and Chree (1889). They derived the equations that describe a
vibrating solid cylinder. However, it is not practical to solve the full problem because it yields more
information than usually needed in applications. The beam theories under consideration all yield
the transverse displacement as a solution (Sampaio and Cataldo, 2008; Falsone and Settineri,
2011; Sheikh and Madhujit, 2004).
3. Numerical modeling and formulation
Euler-Bernoulli beam:
For the stiffness matrix:
Figure 1. (a) Simply supported beam subjected to arbitrary (negative) distributed load.(b)
Deflected beam element. (c) Sign convention for shear force and bending moment (Davis et al.,
1972; Bazoune and Khulief, 2003; Hutton, 2004)
The bending strain is ε_x = −y (d²v/dx²), where v(x) is the transverse deflection and y is measured from the neutral axis. The curvature of the deflected curve is 1/ρ = (d²v/dx²) / [1 + (dv/dx)²]^{3/2}. For small deflections the slope term (dv/dx)² can be neglected, therefore 1/ρ ≈ d²v/dx². The total strain energy is U = (E I_z / 2) ∫ (d²v/dx²)² dx, where I_z = ∫_A y² dA is the area moment of inertia of the cross section.
Considering the given four boundary conditions and the one-dimensional nature of the problem in terms of the independent variable, we assume the displacement function in the cubic form v(x) = a₀ + a₁x + a₂x² + a₃x³ (Leszek, 2009; Henri, 2010; Falsone and Settineri, 2011).
Figure 2. Bending moment diagram for a flexure element. Sign convention per the MOS
theory.
Using the relation v(x) = N₁(x)v₁ + N₂(x)θ₁ + N₃(x)v₂ + N₄(x)θ₂, where N₁, N₂, N₃ and N₄ are the shape functions that describe the distribution of displacement in terms of the nodal values in the nodal displacement vector {δ}, we obtain the element displacement field.
Applying the first theorem of Castigliano to the strain energy function with respect to the nodal displacement v₁ gives the transverse force at node 1, F₁ = ∂U/∂v₁, while application of the theorem with respect to the rotational displacement θ₁ gives the moment M₁ = ∂U/∂θ₁. Similarly, F₂ = ∂U/∂v₂ and M₂ = ∂U/∂θ₂ are obtained. These four equations can be written in the matrix form [k]{δ} = {F}, and the stiffness coefficients follow by comparison of coefficients.
Including dimensionless variable
The above equation becomes:
The stiffness coefficients are(Bazoune and Khulief, 2003; William, 1969)
The complete stiffness value of flexure element is given as:
Element load vector:

{f} = ∫0L [N]T q(x) dx
(a) Nodal load positive convention. (b) Mechanics-of-solids positive convention.
For the mass matrix of the Euler-Bernoulli beam:

Figure: differential element of beam subjected to time-dependent loading (Weaver, 1969)

From Newton's second law applied to the differential element, we have:

∂Q/∂x + q(x, t) = ρA ∂²v/∂t²

On replacing the moment-curvature relation M = E Iz ∂²v/∂x² (with Q = ∂M/∂x) in Newton's
second law, and under the assumptions of constant elastic modulus E and moment of inertia Iz,
the governing equation becomes:

E Iz ∂⁴v/∂x⁴ + ρA ∂²v/∂t² = q(x, t)

On applying Galerkin's method to the above equation with v(x, t) = [N]{δ(t)}, the consistent
mass matrix for a two-dimensional beam element is given by:

mij = ρA ∫0L Ni Nj dx

Substitution for the interpolation functions and performing the required integrations gives the
mass matrix as

[m] = (ρA L / 420) |  156    22L    54   -13L  |
                   |  22L    4L²   13L   -3L²  |
                   |   54    13L   156   -22L  |
                   | -13L   -3L²  -22L    4L²  |

Combining the mass matrix with the previously obtained results for the stiffness matrix and
force vector, the finite element equations of motion for a beam element are:

[m]{δ̈} + [k]{δ} = {F}
Timoshenko beam:
The shearing effect in the Timoshenko beam element:
Consider an infinitesimal element of beam of length δx and flexural rigidity EI. The element is
in static equilibrium under the forces shown in Figure 3.
Figure 3. Forces and displacements on infinitesimal element of beam.
The shear angle, ψ, is measured as positive in an anticlockwise direction from the normal to the
midsurface to the outer face of the beam. The rotation of the cross section in an anticlockwise
direction is:

θ = dv/dx - ψ

The stress-strain relations in bending and shear are:

M = E I dθ/dx,  F = k A G ψ

where k is the shear coefficient of the cross section and G is the shear modulus. Static
equilibrium of the element gives:

F = α1,  M = α1 x + α2

Integrating the moment-rotation relation and the slope relation dv/dx = θ + ψ introduces two
further constants α3 and α4. The rotations at the ends of the beam, δ2 and δ4, can be expressed
as rotations of the cross section by using equation (4). The displacements δ1 to δ4 can be
related to the constants α1 to α4, and the nodal forces {Pi} follow as

{Pi} = [Y]{αi},  for i = 1, 2, 3, 4

where the elements of {Pi} are defined in Figure 2. Substituting for {αi} in terms of {δi} gives

{Pi} = [S]{δi}

where [S] is the stiffness matrix (Davis et al., 1972; Bazoune and Khulief, 2003):

[S] = E I / ((1 + Φ) L³) |  12     6L          -12     6L         |
                         |  6L    (4 + Φ)L²    -6L    (2 - Φ)L²   |
                         | -12    -6L           12    -6L         |
                         |  6L    (2 - Φ)L²    -6L    (4 + Φ)L²   |

with the shear parameter Φ = 12 E I / (k A G L²). The shape functions of the Timoshenko beam,
with ξ = x/L, are:

N1 = [1 - 3ξ² + 2ξ³ + Φ(1 - ξ)] / (1 + Φ)
N2 = L [ξ - 2ξ² + ξ³ + (Φ/2)(ξ - ξ²)] / (1 + Φ)
N3 = [3ξ² - 2ξ³ + Φξ] / (1 + Φ)
N4 = L [-ξ² + ξ³ + (Φ/2)(ξ² - ξ)] / (1 + Φ)
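As a numerical check on the element derivation, the stiffness matrix [S] can be formed directly. The sketch below (Python/NumPy; an illustrative sketch with an assumed shear coefficient k = 0.9 and assumed section properties, not the authors' code) builds the Timoshenko element stiffness with the shear parameter Φ and shows that it reduces to the Euler-Bernoulli flexure element as Φ → 0:

```python
import numpy as np

def timoshenko_stiffness(E, I, L, k=0.9, A=None, G=None):
    """Element stiffness matrix of a Timoshenko beam (dofs: v1, th1, v2, th2).
    Phi = 12EI/(kAGL^2) is the shear deformation parameter; Phi = 0 recovers
    the Euler-Bernoulli flexure element."""
    Phi = 0.0 if A is None or G is None else 12.0 * E * I / (k * A * G * L**2)
    c = E * I / ((1.0 + Phi) * L**3)
    return c * np.array([
        [ 12,    6*L,            -12,    6*L           ],
        [ 6*L,  (4 + Phi)*L**2,  -6*L,  (2 - Phi)*L**2 ],
        [-12,   -6*L,             12,   -6*L           ],
        [ 6*L,  (2 - Phi)*L**2,  -6*L,  (4 + Phi)*L**2 ]])

# Slender limit (no shear data given -> Phi = 0): Euler-Bernoulli matrix
S_eb = timoshenko_stiffness(E=2.0e11, I=4.91e-6, L=1.0)
# With shear flexibility included (Phi > 0) every stiffness term softens
S_ts = timoshenko_stiffness(E=2.0e11, I=4.91e-6, L=1.0,
                            A=7.85e-3, G=7.7e10)
```

The matrix is symmetric for any Φ, and the translational stiffness 12EI/((1+Φ)L³) is always smaller than its Euler-Bernoulli counterpart, which is the expected shear-softening effect.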
The mass matrix of the Timoshenko beam follows analogously from the consistent-mass integral
evaluated with these shape functions.
We have the boundary conditions for a hinged (simply supported) end:

v = 0 and M = 0 at x = 0 and x = L
4. Results
The mode shapes of a beam were calculated and the analysis was carried out using the finite
element method by computing the characteristic matrices (mass matrix and stiffness matrix) of
the given simply supported beam. The natural frequencies and mode shapes of the hinged-hinged
beam were calculated using MATLAB. The natural frequencies indicate the resonance pattern that
a beam will follow and its effect on structures. The mode shapes of a structural steel beam with
the given slenderness ratio and circular cross section were calculated and the following results
were obtained.
5. Conclusion
In this work, we examined two approximate models for a transversely vibrating beam: the
Euler-Bernoulli and Timoshenko models. The equations of motion and the boundary conditions
were obtained, and the frequency equations for the considered boundary conditions were derived.
A simply supported beam of circular cross section was analyzed and its mode shapes and natural
frequencies were calculated. A brief structural consideration shows that the amplitude of the
beam at resonance is maximum and the problem of failure arises. Therefore, in design, beams
should be selected such that no resonance occurs, for the stability of the structure.
References
Bazoune, A. and Khulief, Y.A., 2003, "Shape Function of Three-Dimensional Timoshenko Beam
Element", Journal of Sound and Vibration, 259(2), 473-480.
Davis, R., Henshell, R.D. and Warburton, G.B., 1972, "A Timoshenko Beam Element", Journal of
Sound and Vibration, 22(4), 475-487.
Falsone, G. and Settineri, D., 2011, "An Euler-Bernoulli-Like Finite Element Method for
Timoshenko Beams", Mechanics Research Communications, 38, 12-16.
Gavin, H.P., 2010, "Structural Element Stiffness Matrices and Mass Matrices", CE 283 Structural
Dynamics course notes, Department of Civil and Environmental Engineering, Duke University.
Hutton, D.V., 2004, Fundamentals of Finite Element Analysis, McGraw-Hill, New York.
Majkut, L., 2009, "Free and Forced Vibrations of Timoshenko Beams Described by Single
Difference Equation", Journal of Theoretical and Applied Mechanics, 47(1), 193-210.
Sampaio, R. and Cataldo, E., 2008, "Timoshenko Beam with Uncertainty on the Boundary
Conditions".
Han, S.M., Benaroya, H. and Wei, T., 1999, "Dynamics of Transversely Vibrating Beams Using
Four Engineering Theories", Journal of Sound and Vibration, 225(5), 935-988.
Mukhopadhyay, M. and Sheikh, A.H., 2004, Matrix and Finite Element Analyses of Structures.
Weaver, W. Jr., 1969, Analysis of Framed Structures.
Optimization through Residual Stress Analysis of High Pressure
Cylindrical Component at Post-Autofrettage Stage
Shrinivas Kiran Patil*, Santosh J. Madki
Brahmdevdada Mane Institute of Technology, Solapur -413002, Maharashtra, India
*Corresponding author (e-mail: kunal.pune@yahoo.com)
Pressure vessels are widely used in the nuclear, chemical & military industries, along with
fluid transmission and storage applications. In recent decades, various methods have been
proposed for strengthening the vessels. Autofrettage is a metal fabrication technique in
which a pressure vessel is subjected to enormous pressure, causing internal portions of
the part to yield, which results in internal compressive residual stresses in the vessel. This
concept of autofrettage for a high-pressure component, and the analysis of the resulting
residual stresses, is discussed in the following sections.
1. Introduction
Autofrettage is a metal fabrication technique in which a pressure vessel is subjected to
enormous pressure, causing internal portions of the part to yield, resulting in internal
compressive residual stresses in the vessel. The goal of autofrettage is to increase the durability
of the final product. The technique is commonly used in manufacturing high-pressure pump
cylinders, battleship and tank cannon barrels, and fuel injection systems for diesel engines.
While some work hardening will occur, it is not the primary mechanism of strengthening
(Gibson, 2008).
Autofrettage is a means of pre-stressing thick-walled tubes to better distribute the tensile
hoop stress throughout the tube wall, so reducing the magnitude of the hoop stresses found at
the ID when the tube is re-pressurized subsequent to pre-stressing. This is achieved by
overloading the ID of the tube to cause plastic expansion of some or the entire tube wall, such
that residual compressive hoop stresses are created in the near-bore region whilst residual
tensile hoop stresses are created in the outer portion. Initial tensile overload can induce beneficial
compressive stress in fatigue critical areas of structures and components (Thumser and
Bergmann, 2008).
The starting point is a single steel tube of internal diameter slightly less than the desired
caliber. The tube is subjected to internal pressure of sufficient magnitude to enlarge the bore and
in the process the inner layers of the metal are stretched beyond their elastic limit. This means
that the inner layers have been stretched to a point where the steel is no longer able to return to
its original shape once the internal pressure in the bore has been removed, as shown in figure-1.
Figure 1. Detailed process of Autofrettage: The tube (a) is subjected to internal pressure past
its elastic limit (b), leaving an inner layer of stressed metal (c).
2. Methods of autofrettage
There are two methods of autofrettage namely hydraulic autofrettage & swage
autofrettage. Hydraulic autofrettage involves the application of hydrostatic pressure to the ID of
the tube, such that equivalent stress at the ID exceeds the material yield stress and plastic
deformation begins. Pressure is further increased such that the deformation propagates to the
desired depth within the tube wall. Oil is used to pressurize the tube, as it is noncorrosive and
only slightly compressible, in contrast to a highly compressible gas (Hojjati and Hassani, 2007).
The ends of the tube must be sealed to contain the pressurized oil; this is achieved either
through the use of floating bungs or caps attached to the tube, which in turn carry the applied
axial force. The net
axial force applied to the tube in the Closed-Ends case will alter the ratio of component stresses
(compared to the Open-Ends case) found at peak pressure conditions, potentially altering the
residual stresses developed. These two states, or end conditions, are classified as Open- and
Closed-Ends respectively and the details of their modelling are discussed later. Hydraulic
autofrettage and the two end conditions described above, are depicted by Figure-2
Figure 2. Hydraulic autofrettage diagram, for open- and closed ends
Swage autofrettage achieves the required plastic expansion of the inner portions of the
tube via mechanical interference between an oversized mandrel and the inner surface of the
tube. Mandrels typically consist of two conical sections joined by a short length of constant
diameter. The forward conical section has a shallower slope than the rear section.
Machining is conducted on the autofrettaged tube (hydraulic or swage) to ensure the
correct final dimension and for confirming the pre-stressing along the required length. This
generally involves the removal of the tube’s ends and machining of the inner diameter to the
desired caliber. This also removes the most highly deformed material to alter the residual stress
field.
3. Analysis
3.1 Material and analysis parameters
A cylindrical tube of structural steel with internal diameter 101.6 mm, outer diameter
120.65 mm and length 57.15 mm is taken as the model for analysis using the Abaqus analysis
package. As we are considering bilinear kinematic hardening, material properties must be
provided for the elastic and plastic zones separately, as stated below in Table 1 and Table 2.
Table 1. Material property for elastic zone

Young's Modulus: 2e5 MPa
Poisson's Ratio: 0.3

Table 2. Material property for plastic zone (bilinear kinematic hardening)

Yield Stress | Plastic Strain
200 MPa      | 0
480 MPa      | 0.18
3.2 Boundary condition
For our analysis the model is considered in plane stress, so the stress varies only in two
directions, r and θ; the third direction is free to move. To apply the pressure on the cylinder for
the autofrettage process, we need the pressure at the beginning of yielding. A pressure of
29.08 MPa is required to initiate yielding in the material, as per the manufacturer's
specifications. For meshing, an element size of 2 mm is considered.
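The quoted yield-onset pressure can be cross-checked against the classical Lamé thick-cylinder solution. A minimal sketch (Python, Tresca yield criterion, assuming the 200 MPa initial yield stress of Table 2) gives the internal pressure at which yielding first occurs at the bore:

```python
# Pressure at onset of yielding at the bore of a thick-walled cylinder,
# from the Lame elastic solution with the Tresca yield criterion.
def yield_onset_pressure(sigma_y, a, b):
    """sigma_y: initial yield stress [MPa]; a, b: inner/outer radii [mm]."""
    return sigma_y * (b**2 - a**2) / (2.0 * b**2)

a = 101.60 / 2.0   # inner radius, mm
b = 120.65 / 2.0   # outer radius, mm
p_y = yield_onset_pressure(200.0, a, b)   # ~29.09 MPa
```

The result is within a few hundredths of a megapascal of the 29.08 MPa used in the analysis, so the quoted value is consistent with the thin wall ratio b/a ≈ 1.19 of this tube.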
3.3 Loading and unloading
A pressure of 45 MPa, well beyond the yielding pressure of about 29 MPa, is now applied at the
inner surface of the modeled tube. After unloading, the stress pattern of Figure 3 is obtained:
the maximum residual stress is 22.17 MPa in tension and 1.052 MPa in compression.

Figure 3. Stress pattern after unloading

The benefit of this residual stress is checked by comparing the maximum stress intensities at
the pre-autofrettage and post-autofrettage stages, keeping the loading pressure constant
(27 MPa). As shown in Figures 4(a) and (b), the maximum stress intensity is 169.4 MPa before
autofrettage, which is reduced to 156.6 MPa by the residual stress pattern of Figure 3.
(a) Before Autofrettage
(b) After Autofrettage
Figure 4. Loading stress pattern
4. Result and conclusion
Swage autofrettage is advantageous over hydraulic autofrettage in many respects. For process
optimization it is necessary to determine the optimum autofrettage pressure along with the
proper elastic-plastic boundary. Through autofrettage we can achieve material optimization, or
a factor-of-safety enhancement while using the same material.
From the above analysis, the results summarized in Table 3 are obtained.
Table 3. Summary of analysis

Sr. No. | Stress Parameter                   | Value (in MPa)
1       | Maximum stress before autofrettage | 169.4
2       | Residual stress                    | 22.17 (tensile) & 1.052 (compressive)
3       | Maximum stress after autofrettage  | 156.6
So, in view of the safety needs of high-pressure vessels, we propose to enhance the factor of
safety while using the same material. Process optimization and determination of the optimum
elastic-plastic boundary would be the next course of analysis, which is quite useful for overall
optimization.
References
Bihamta, R. and Movahhedy, M.R., 2007, "A numerical study of swage autofrettage of
thick-walled tubes", Materials and Design, 28, 804-815.
Hojjati, M.H. and Hassani, A., 2007, "Theoretical and finite-element modeling of autofrettage
process in strain-hardening thick-walled cylinders", International Journal of Pressure
Vessels and Piping, 310-319.
Gibson, M.C., 2008, "Determination of Residual Stress Distributions in Autofrettaged
Thick-Walled Cylinders", Defence College of Management & Technology, 79-122.
Parker, A.P. and Gibson, M.C., 2001, "Material Modeling for Autofrettage Stress Analysis
including the 'Single Effective Material'", Journal of Tribology, 123, 686-698.
Thumser, R. and Bergmann, J.W., 2002, "Residual stress field and fatigue analysis of
autofrettaged parts", International Journal of Pressure Vessels and Piping, 113-117.
Analytic Hierarchy Process (AHP) for Green Supplier
Selection in Indian Industries
Samadhan P. Deshmukh1*, Vivek K. Sunnapwar2
1Watumull Institute, Worli, Mumbai-400018, Maharashtra, India
2Lokmanya Tilak College of Engineering, Navi Mumbai-400709, Maharashtra, India
*Corresponding author (e-mail: samadhan_rit@yahoo.co.in)
The green supplier selection is a strategic problem and has a significant impact on the
manufacturing industries. Selecting the green supplier among many alternatives is a
multi-criteria decision making (MCDM) problem. This research aims to survey current
green activities in supplier selection in India and to evaluate the best green supplier.
Different environmental factors affecting the manufacturing sector are considered in this
study. The literature on green supply chain management (GSCM) is covered
exhaustively, taking 'green manufacturing' as the basic approach, from its
conceptualization. This study develops an evaluation model based on the analytic
hierarchy process (AHP). The AHP is used to analyze the structure of the green
supplier selection problem and to determine the weights of the criteria. A case study is
conducted to illustrate the utilization of the model for the green supplier selection
problem.
1. Introduction
The green supplier selection process is one of the key operational tasks for sustainable
supply chain partners. The powerful supplier should enhance the performance of the supply
chain with respect to environmental, social and economical aspects. Green supply chain
management (GSCM) is one of the fastest-growing concepts in developing countries and has a
presence in both the environmental management and the supply chain management literature.
Owing to the current awareness of environmental aspects, supplier selection has shifted its
focus toward green criteria rather than the habitual ones (Kannan et al., 2012).
With increasing government regulations and stronger public awareness in environmental
protection, firms today simply cannot ignore environmental issues if they want to survive in
the global market (Lee et al., 2009). Environmental management is becoming more and more
important for corporations as the emphasis on the environmental protection by organizational
stakeholders, including stockholders, governments, customers, employees, competitors and
communities, keeps increasing. This study explains the practices and implementation of
green supply chain management and environmental performance practices among various
manufacturing industries located in India. In total, seven main practices with 47 sub-factors
are considered. The study consists of five sections. After this introduction, section 2 reviews some
recent works on green supplier selection criteria and supplier selection models. Research
methodology on analytic hierarchy process (AHP) is presented in section 3. Formulation of
AHP to select a best supplier is done in section 4 with case study. Finally, concluding remarks
are presented in section 5.
2. Review on green manufacturing and various models used
Approach towards green supply chain management (GSCM) practice had been identified
by various researchers in recent years. While the works on the evaluation and selection of
suppliers are abundant, those that concern environmental issues are rather limited. There are
only a few studies related to green supply chain management (Bhateja et al., 2011). There are
various mathematical techniques for evaluation of suppliers, such as data envelopment
analysis (DEA), analytic hierarchy process (AHP), fuzzy goal programming, fuzzy analytic
network process (ANP) in literature (Buyukozkan and Cifci, 2007). For the purpose of
evaluating and selecting green suppliers, both qualitative and quantitative factors must be
considered. Thus, green supplier selection is a kind of multiple criteria decision making
(MCDM) problem and we need to employ MCDM methods to handle it appropriately.
A study based on six dimensions of green supply chain management including green
manufacturing and packaging, environmental participation, green marketing, green suppliers,
green stock and green eco design was conducted to check capability of green supply chain
management in electronics-related firms in Taiwan (Shang et al., 2010). Evaluation of the
suppliers was done within the Indian textile and clothing industry (both garment
manufacturers and ancillary suppliers) using sustainability criteria. Various activities of the
supply chain processes of Indian manufacturing industries were studied to measure the
performance of the manufacturing sectors (Bhateja et al., 2011). A green supplier selection
model was presented for high-tech industry (Lee et al. 2009). A system model for the new
green manufacturing paradigm was presented to develop an open mixed architecture for the
design, planning and control of green manufacturing activities (Deif, 2012). Closed-loop
supply chain management (CSCM) is totality of green purchasing, green manufacturing and
material management, green distribution and marketing, as well as reverse logistics. CSCM
was identified as an efficient, effective and economical strategy towards environmental
sustainable practices in manufacturing companies (Olugu and Wong, 2012). The relationships
between green supply chain management (GSCM) drivers (organizational support, social
capital and government involvement) and GSCM practices (green purchasing, cooperation
with customers, eco-design and investment recovery) in Taiwan's textile and apparel
manufacturers was investigated (Wu et al., 2012). A framework for integrating environmental
factors into the supplier selection process was presented to find different environmental
performance criteria (Humphreys et al., 2003). The traditional and green supply chain
management was compared to find several important opportunities in green supply chain
management in-depth, including those in manufacturing, bio-waste, construction, and
packaging (Ho et al., 2009). The green supply chain management practices likely to be
adopted by the manufacturing industry of electrical and electronics products in India were
investigated to find the relationship between green supply chain management practices and
environmental performance (Kumar et al., 2012). A study for green supplier development and
analytical evaluation using rough set theory was proposed (Bai and Sarkis, 2010). A decision
model to measure environmental practice of suppliers using a multi attribute utility theory
(MAUT) approach was developed (Handfield et al., 2002). Green supplier selection was done
by integrating artificial neural network (ANN) and two multi attribute decision analysis (MADA)
methods (DEA and ANP) (Kuo et al., 2010). A strategic model using structural equation
modeling and fuzzy logic was introduced in supplier selection (Punniyamoorthy et al., 2011).
For the assessment of a supplier's environmental performance, a green vendor rating system
was designed (Noci, 1997). AHP was used to evaluate various transport policies with an aim
to reduce climate change impact (Berrittella et al., 2007). The major four activities of the
green supply chain management namely green purchasing, green manufacturing, green
marketing and reverse logistics were covered for green supply chain management in India
(Nimawat and Namdev, 2012).
AHP is a multi-attribute decision-making method that is especially useful when dealing
with complex problems, using a nine-point scale (Saaty, 2008). AHP is considered an
ideal method for ranking alternatives when multiple criteria and sub-criteria are present in the
decision making process (Kannan et al., 2012). AHP offers a methodology to rank alternative
courses of action based on the decision maker’s judgments concerning the importance of the
criteria and the extent to which they are met by each alternative. For this reason, AHP is
ideally suited for the supplier selection problem (Hudymacova et al., 2003). The method
based on analytic hierarchy process was proposed to evaluate and rank any given set of
strip-layout alternatives (Rao, 2004). For evaluating the environmental performance of suppliers,
a multi-criteria approach was used (Awasthi et al., 2010). The proposed approach consisted of
12 criteria. Analytic hierarchy process (AHP) and Technique for Order Preference by
Similarity to Ideal Solution (TOPSIS) were also studied to evaluate faculty performance in
engineering education (Ghosh, 2011).
3. Research methodology
The analytic hierarchy process (AHP) was first proposed by Saaty in 1971, and it is one of
the most commonly used methods for solving multiple-criteria decision-making (MCDM)
problems in political, economic, social and management sciences (Saaty ,2008). Through
AHP, opinions and evaluations of the decision-makers are integrated, and a complex problem
is devised into a simple hierarchy system with higher levels to lower ones. The major steps for
considering decision problems by AHP are described as follows.
Step 1: Establishment of structural hierarchy
To establish the structural hierarchy, a complex decision is structured into a hierarchy
descending from an overall objective to various criteria, sub-criteria, and so on, until the
lowest level is reached. The objective or overall goal of the decision is represented at the top
level of the hierarchy. Figure 1 shows the hierarchy for the selection of the best green supplier.
The main criteria, such as quality and environment performance assessment, are at the top
level, and the sub-criteria contributing to the decision are represented at the intermediate levels.
[Hierarchy diagram: goal 'Best Green Supplier Selection'; main criteria Quality (Q1-Q9),
Environment Performance Assessment (EPA1-EPA4), Green Manufacturing (GM1-GM8),
Customer Co-operation (CC1-CC7), Green Cost (GC1-GC5), Green Design (GD1-GD9),
Green Logistic Design (GLD1-GLD5); alternatives Supplier 1, Supplier 2, Supplier 3.]
Figure 1. The hierarchy for the selection of the best green supplier
Step 2: Establishment of comparative judgments
Once a hierarchy is structured, the next step is to determine the priorities of elements at
each level. A set of comparison matrices of all elements in a level of the hierarchy with
respect to an element of the immediately higher level are constructed. This helps to prioritize
and convert individual comparative judgments into ratio scale measurements. The pairwise
comparisons generate a matrix of relative rankings for each level of the hierarchy. The
number of matrices depends on the number of elements at each level. The order of the matrix
at each level depends on the number of elements of the lower level that it links to. The
preferences are quantified by using a nine-point scale. Table 1 shows the fundamental scale
adopted by Saaty, and the meaning of each scale measurement is explained there.
Table 1. Saaty's fundamental scale

Saaty's Scale | Relative importance of the two sub-elements
1             | Equally important
3             | Moderately important
5             | Strongly important
7             | Very strongly important
9             | Extremely important
2, 4, 6, 8    | Intermediate values
Step 3: Synthesis of priorities and measurement of consistency
The priority weights of the elements are obtained from a decision-maker of the company
using a questionnaire. The maximum eigenvalue (λmax) is an important validating parameter
in AHP (Saaty, 2003). It is used as a reference index to screen information by calculating the
consistency ratio (CR) of the estimated vector, in order to validate whether the pairwise
comparison matrix provides a completely consistent evaluation. The consistency ratio (CR)
measures how far a matrix is from consistency. After all matrices are developed and all
pairwise comparisons are obtained, the eigenvectors (the relative weights), the global weights
and the maximum eigenvalue (λmax) for each matrix are calculated. The consistency ratio is
calculated according to the following steps:
1. Calculate the eigenvector (the relative weights) and λmax for each matrix of order n.
2. Compute the consistency index (CI) for each matrix of order n by the formula:

CI = (λmax - n) / (n - 1)    .... (1)

where n is the number of criteria.
3. The consistency ratio (CR) is then calculated using the formula:

CR = CI / RI    .... (2)

where RI is known as the random consistency index.
Table 2. Average random index (RI) based on matrix size

n  | 3    | 4    | 5    | 6    | 7    | 8    | 9
RI | 0.52 | 0.89 | 1.11 | 1.25 | 1.35 | 1.40 | 1.45
The RI value varies according to the size of the matrix, i.e., 0.52 for a 3 by 3 matrix, 0.89 for
a 4 by 4 matrix, and so on for larger matrices (n ≥ 5), as given in Table 2. The number 0.1 is
the accepted upper limit for CR. If CR exceeds this value, the judgments within that matrix are
inconsistent and the evaluation process should therefore be reviewed, reconsidered and
improved. If the consistency test is not passed, the expert is asked to redo that part of the
questionnaire (Lee et al., 2009).
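Steps 1-3 can be sketched programmatically. The fragment below (Python/NumPy; an illustrative sketch, not the authors' code) extracts the principal eigenvector of a pairwise comparison matrix, normalizes it into priority weights, and computes CI and CR from equations (1) and (2) using the RI values of Table 2:

```python
import numpy as np

# Random consistency index (Saaty), indexed by matrix order n (Table 2)
RI = {3: 0.52, 4: 0.89, 5: 1.11, 6: 1.25, 7: 1.35, 8: 1.40, 9: 1.45}

def ahp_priorities(A):
    """Priority weights, lambda_max and consistency ratio of a pairwise matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)          # principal eigenvalue lambda_max
    lam_max = eigvals[i].real
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                         # normalize weights to sum to 1
    CI = (lam_max - n) / (n - 1)         # equation (1)
    CR = CI / RI[n]                      # equation (2)
    return w, lam_max, CR
```

Applied to the 7x7 goal matrix of the case study below, this reproduces quality (C1) as the dominant criterion and a CR under the 0.1 acceptance limit; the exact λmax depends on how the fractional entries 1/3, 1/7, ... are rounded, so it may differ slightly from the reported 7.7605.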
4. Case study
A case study was carried out to illustrate the use of AHP for supplier selection taking the
green manufacturing approach into account. An Indian multinational corporation
headquartered in Mumbai, Maharashtra, India was considered for the case study. The study
focused on three local suppliers, Supplier 1, Supplier 2 and Supplier 3, pre-selected from the
organization's approved supplier list for the procurement of various components. The goal
was to find the best supplier among the three, taking into account the green manufacturing
approach.
Because the goal of the present research is to select the best green supplier, a qualitative
research method was employed. Seven main criteria, namely quality (C1), environment
performance assessment (C2), green manufacturing (C3), customer co-operation (C4), green
costs (C5), green design (C6) and green logistics design (C7), were considered for the case
study. The decision maker was asked to compare the factors and to judge the importance of
each criterion in pairwise comparisons. The pairwise comparison of the goal is shown in
Table 3, and the λmax, CI, corresponding RI and CR values are given below Table 3.
Table 3. Pairwise comparison of the goal

Goal | C1  | C2  | C3  | C4  | C5  | C6  | C7
C1   | 1   | 3   | 2   | 7   | 8   | 4   | 8
C2   | 1/3 | 1   | 4   | 2   | 7   | 3   | 7
C3   | 1/2 | 1/4 | 1   | 1   | 1   | 1/3 | 1
C4   | 1/7 | 1/2 | 1   | 1   | 3   | 1   | 3
C5   | 1/8 | 1/7 | 1   | 1/3 | 1   | 1/5 | 2
C6   | 1/4 | 1/3 | 3   | 1   | 5   | 1   | 2
C7   | 1/8 | 1/7 | 1   | 1/3 | 1/2 | 1/2 | 1
By computation, λmax = 7.7605, CI = 0.1265, RI = 1.35 (for n = 7, from Table 2) and
CR = 0.0939, which is less than 0.1; hence the decision maker's judgment is consistent. The
weight of each main criterion is shown in Table 4.
Table 4. Main criteria weights

Sr. No. | Criteria                           | Weight
1       | Quality                            | 0.3878
2       | Environment Performance Assessment | 0.2359
3       | Green Manufacturing                | 0.0741
4       | Customer Co-operation              | 0.0927
5       | Green Costs                        | 0.0461
6       | Green Design                       | 0.1217
7       | Green Logistics Design             | 0.0416
Similarly, the pairwise comparison of all the sub-criteria was carried out. The performance of
the suppliers with respect to each main criterion is shown in Table 5, from which it can be
seen that Supplier 1 has the maximum weight for all seven green performance measures.
Table 5. Performance of suppliers with respect to each main criterion

Criteria                                 | Supplier 1 | Supplier 2 | Supplier 3
Quality (QTY)                            | 0.4821     | 0.3049     | 0.2130
Environment Performance Assessment (EPA) | 0.5336     | 0.2611     | 0.2053
Green Manufacturing (GM)                 | 0.4839     | 0.2689     | 0.2472
Customer Co-operation (CC)               | 0.5479     | 0.2779     | 0.1741
Green Costs (GC)                         | 0.6207     | 0.2076     | 0.1717
Green Design (GD)                        | 0.4868     | 0.2545     | 0.2587
Green Logistics Design (GLD)             | 0.3794     | 0.3520     | 0.2686
4.1. Final overall weight and ranking calculation
The final overall weights and ranking of the suppliers were calculated from the supplier
ratings given for each sub-criterion. Weights were calculated by first multiplying each
sub-criterion weight by its respective main-criterion weight and by the supplier rating
associated with it. The values obtained for all sub-criteria were then added up to give the final
weight of a supplier. These AHP calculations gave criteria and sub-criteria weights and
supplier ratings in crisp form. Table 6 shows the final weight calculations and the
corresponding ranking of the suppliers.
Table 5. Final weight calculations and ranking

Supplier  Final Weight  Ranking
S1        0.5102        1
S2        0.2882        2
S3        0.2017        3
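The aggregation step can be illustrated with a short sketch (ours, not the paper's code). The paper aggregates at the sub-criterion level; for brevity this sketch combines only the main-criterion weights with the supplier ratings from the tables above, which reproduces the same ranking with final weights close to those reported.

```python
# Main-criterion weights (Table 3) and per-criterion supplier ratings
# (Table 4 rows), in the order QTY, EPA, GM, CC, GC, GD, GLD.
criteria_weights = [0.3878, 0.2359, 0.0741, 0.0927, 0.0461, 0.1217, 0.0416]
ratings = {
    "S1": [0.4821, 0.5336, 0.4839, 0.5479, 0.6207, 0.4868, 0.3794],
    "S2": [0.3049, 0.2611, 0.2689, 0.2779, 0.2076, 0.2545, 0.3520],
    "S3": [0.2130, 0.2053, 0.2472, 0.1741, 0.1717, 0.2587, 0.2686],
}

# Final weight of a supplier = sum over criteria of (weight x rating).
final = {
    s: sum(w * r for w, r in zip(criteria_weights, rs))
    for s, rs in ratings.items()
}
ranking = sorted(final, key=final.get, reverse=True)
```

Because the criterion weights and each per-criterion rating column sum to one, the final weights also sum to (approximately) one, and S1 > S2 > S3 emerges directly.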
5. Conclusion
Green supplier selection is by nature a complicated task. Profitability and customer
satisfaction depend directly on the effectiveness of a supplier selection process that takes
environmental measures into account, which makes green supplier selection a crucial
strategic decision for the long-term survival of the firm. In this study, a robust analytic
hierarchy process (AHP) model for green supplier selection is used, and three suppliers are
evaluated in a case study. In evaluating these suppliers, all 54 criteria are considered, so that
each criterion contributes its weight to the supplier selection process. From the case study,
supplier S1 (with final weight 0.5102) emerges as the best green supplier. The proposed
method is a useful decision-making tool for addressing environmental challenges in supplier
selection.
References
Awasthi A., Chauhan S. S., Goyal S. K., A fuzzy multicriteria approach for evaluating
environmental performance of suppliers, Int. J. Prod Economics, 2010, 126, 370–378.
Bai C. and Sarkis J., Green supplier development: analytical evaluation using rough set
theory, Journal of Cleaner Production, 2010, 18, 1200–1210.
Baskaran V., Nachiappan S. and Rahman S., Indian textile suppliers’ sustainability evaluation
using the grey approach, Int. J. Production Economics, 2012,135, 647–658.
Berrittella M., Certa A., Enea M. and Zito P., An Analytic Hierarchy Process for the Evaluation
of Transport Policies to Reduce Climate Change Impacts, 2007.
Bhateja A. K., Babbar R., Singh S. and Sachdeva A., Study of green supply chain
management in the Indian manufacturing industries: A literature review cum an analytical
approach for the measurement of performance, International Journal of Computational
Engineering & Management, 2011, 13, 84-99.
Buyukozkan G. and Cifci G., A novel hybrid MCDM approach based on fuzzy DEMATEL,
fuzzy ANP and fuzzy TOPSIS to evaluate green suppliers, Expert Systems with
Applications, 39, 3000–3011.
Deif A. M., A system model for green manufacturing, Journal of Cleaner Production, 2012, 19,
1553–1559.
Ghosh D. N., Analytic Hierarchy Process & TOPSIS Method to Evaluate Faculty Performance
in Engineering Education, UNIASCIT, 2011, 1(2), 63–70.
Handfield R., Walton S. V., Sroufe R. and Melnyk S. A., Applying environmental criteria to
supplier assessment: A study in the application of the Analytical Hierarchy Process,
European Journal of Operational Research, 2002, 141, 70–87.
Ho J. C., Shalishali M.K., Tseng T-L and Ang D. S., Opportunities in green supply chain
management, The Coastal Business Journal, 2009, 8 (1).
Hudymacova M., Benkova M., Pocsova J., Skovranek T., Supplier selection based on
multi-criterial AHP method, Acta Montanistica Slovaca Ročník, 2010, 15 (3), 249–255.
Humphreys P.K., Wong Y.K. and Chan F. T. S., Integrating environmental criteria into the
supplier selection process, Journal of Materials Processing Technology, 2003, 138, 349–356.
Kannan G., Sarkis J., Sivakumar R., and Palaniappan M., Multi Criteria Decision Making
approaches for Green supplier evaluation and selection: A literature review, Conference
on the Greening of Industry Network, GIN 2012, Linkoping – Sweden, Oct 22–24, 2012.
Kumar S., Chattopadhyaya S. and Sharma V., Green Supply Chain Management: A Case
Study from Indian Electrical and Electronics Industry. International Journal of Soft
Computing and Engineering, 2012, 1, 275–281.
Kuo R. J., Wang Y. C., and Tien F. C., Integration of artificial neural network and MADA
methods for green supplier selection, J of Cleaner Production, 2010, 18, 1161-1170.
Lee A. H. I., Kang H-Y, Hsu C-F, and Hung H-C., A green supplier selection model for
high-tech industry, Expert Systems with Applications, 2009, 36, 7917–7927.
Nimawat D. and Namdev V., An Overview of Green Supply Chain Management in India,
Research Journal of Recent Sciences, 2012, 16, 77–82.
Noci G., Designing "green" vendor rating systems for the assessment of a supplier's
environmental performance, European Journal of Purchasing and Supply Management,
1997, 3 (2), 103–114.
Olugu E. U., Wong K. Y., An expert fuzzy rule-based system for closed-loop supply chain
performance assessment in the automotive industry, Expert Systems with Applications,
2012, 39, 375–384.
Punniyamoorthy M., Mathiyalagan P., and Parthiban P., A strategic model using structural
equation modeling and fuzzy logic in supplier selection, Expert Systems with Applications,
2011, 38, 458–474.
Rao R. V., Evaluation of metal stamping layouts using an analytic hierarchy process method,
Journal of Materials Processing Technology, 2004, 152, 71–76.
Saaty T. L., Decision making with the analytic hierarchy process. Int. J. Services Sci, 2008.
Integration of Process Planning and Scheduling Activities Using a Hybrid Model
A. Sreenivasulu Reddy*, Abdul Shafi M., K. Ravindranath
Department of Mech. Engg., S V University College of Engineering, Tirupati – 517502, A.P, India.
* Corresponding author (e-mail: allampati_sr@rediffmail.com)
Process planning and scheduling activities are integrated using a hybrid model developed
by combining Genetic Algorithm (GA) and Simulated Annealing (SA) approaches. Using
this model, alternative process plans and schedules are generated; the best alternative
process plans and effective schedules are then produced on the basis of due dates. The
hybrid model has been applied to twenty different part drawings using MATLAB software.
The integration of these two functions can significantly improve the efficiency of
manufacturing facilities through a reduction in scheduling conflicts, flow time and
work-in-process, resulting in increased utilization of available production resources and
better adaptation to irregular shop-floor disturbances.
1. Introduction
Process planning and scheduling are the two most important activities in any
manufacturing organization, although they are two different activities with different objectives
(Kim, 2001).
In some cases it is difficult to decompose process planning and scheduling, and they
must be considered together as a single activity. When process plans for various parts are
finally carried out, process bottlenecks may arise, making the generated plans infeasible. To
overcome these problems, a planning system is needed that integrates the process planning
activities with scheduling (Kumar and Rajotia, 2003).
Different researchers have adopted different techniques and approaches, including
advanced computing methods such as expert systems and artificial intelligence (AI). The
decision logic in process planning may be based on decision trees, decision tables, heuristic
methods, rule-based decision trees, constraint-based methods, hard-coded algorithms, and
problem-oriented languages.
2. Integration of process planning and scheduling activities
The modern manufacturing industry has been facing various technical challenges in
effectively supporting integrated process-planning and production-scheduling decisions in a
complex and dynamic environment. From a pure process planning perspective, the numbers of
orders that require the generation of new process plans and production of new tools, and the
sheer variety of parts and machines (and their various characteristics) present a significant
challenge. As in other large machine shops, production scheduling in this environment is not an
easy task either. Major scheduling challenges include the presence of multiple sources of uncertainty,
both internal (e.g., machine breakdowns) and external (e.g., new order arrivals, delays in tool
production and raw-material delivery), the difficulty of accurately accounting for the finite capacity of
a large number of resources operating according to complex constraints, and the need to take
into account the multiple resource requirements of various operations (e.g., tools, NC programs,
raw materials, human operators). While considerable progress has been made with respect to
software technologies for process planning and finite-capacity production scheduling, very little
attention has been given to issues of integration. Except for a few attempts, often in the context of
small manufacturing environments, process-planning and production-scheduling activities are
typically handled independently, and are carried out in a rigid, sequential manner with very little
communication. Process alternatives are traded off strictly from the standpoint of engineering
considerations, and plans are developed without consideration of the current ability of the shop to
implement them in a cost-effective manner. Likewise, production scheduling is performed under
fixed process assumptions and without regard to the opportunities that process alternatives can
provide for acceleration of production flows. Only under extreme and ad hoc circumstances (e.g.,
under pressure from shop floor expediters of late orders) are process-planning alternatives
revisited. This lack of coordination leads to unnecessarily long order lead times and increased
production costs and inefficiencies, and severely restricts the ability to effectively coordinate local
operations with those at supplier/customer sites, whether internal (e.g., a tool shop) or external
(e.g., raw-material suppliers). Huang et al. identify three distinct approaches to integrating
process planning and production scheduling:
1. Non-Linear Process Planning, which generates all possible process plans ahead of time
(i.e., based on static considerations) and dynamically selects between these alternatives
at execution time. This is the approach taken in the FLEXPLAN system (Tonshoff et al.,
1989).
2. Closed-Loop Process Planning, also referred to as real-time or dynamic process planning
(Tonshoff et al., 1989), where process planning attempts to take into account dynamic
resource availability information.
3. Distributed Process Planning, which reduces the complexity of the closed-loop approach
by subdividing integrated process-planning and production-scheduling decisions into
multiple, more localized, decision phases.
A process planning system should interface with a scheduling system for generating
more realistic process plans and schedules. In doing so, the efficiency of the manufacturing
system as a whole is expected to improve. Without the integration of process planning and
scheduling, a true CIM system, which strives to integrate the various phases of manufacturing
into a single comprehensive system, will not effectively materialize. Many researchers have
made significant contributions to integrating process planning with scheduling. Process
plans with optimum and alternative solutions are generated using manufacturing time and
cost as criteria. These alternative process plans are input to the scheduling module, and the
best schedules are developed for the shop floor based on due dates.
2.1 Selection of optimum process plans based on total machining cost
Optimum process plans are selected from the generated alternatives based on the total
machining cost, which includes the machine costs, tool costs, machine change-over costs,
set-up costs and tool change-over costs. These costs can be computed using the equations
given by Sreenivasulu Reddy and Ravindranath (2012).
A hybrid model is developed by combining a Genetic Algorithm (Shao et al., 2009) and
Simulated Annealing (Li and McMahon, 2007). This algorithm is applied to generate optimal and
near-optimal process plans. From the alternative process plans, a few are selected based on
minimum machining cost, and optimal or near-optimal process plans are identified. The
developed optimal process plans are useful for producing effective schedules based on the due
dates.
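The GA + SA hybrid idea can be illustrated with a deliberately small sketch. The authors' MATLAB model is not published, so everything below (costs, parameters, representation) is invented for illustration: a "plan" assigns each operation to a machine, the objective is machining cost plus a change-over cost between consecutive operations on different machines, the GA supplies selection and crossover, and an SA-style acceptance rule with a cooling temperature decides whether a mutated child replaces its parent.

```python
import math
import random

# COST[op][machine] = machining cost of operation op on that machine
# (made-up numbers); CHANGEOVER is a fixed cost paid whenever two
# consecutive operations run on different machines.
COST = [[5, 7], [6, 4], [3, 8], [9, 5], [4, 6]]
CHANGEOVER = 2.0

def plan_cost(plan):
    cost = sum(COST[op][m] for op, m in enumerate(plan))
    cost += CHANGEOVER * sum(a != b for a, b in zip(plan, plan[1:]))
    return cost

def hybrid_ga_sa(pop_size=20, generations=60, t0=5.0, cooling=0.95, seed=1):
    rng = random.Random(seed)
    n_ops, n_machines = len(COST), len(COST[0])
    pop = [[rng.randrange(n_machines) for _ in range(n_ops)]
           for _ in range(pop_size)]
    best = min(pop, key=plan_cost)
    temp = t0
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # GA part: tournament selection + one-point crossover.
            p1 = min(rng.sample(pop, 3), key=plan_cost)
            p2 = min(rng.sample(pop, 3), key=plan_cost)
            cut = rng.randrange(1, n_ops)
            child = p1[:cut] + p2[cut:]
            # Mutation, with SA-style acceptance of the mutant.
            mutant = child[:]
            mutant[rng.randrange(n_ops)] = rng.randrange(n_machines)
            delta = plan_cost(mutant) - plan_cost(child)
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                child = mutant
            nxt.append(child)
            if plan_cost(child) < plan_cost(best):
                best = child
        pop = nxt
        temp *= cooling  # SA cooling schedule
    return best, plan_cost(best)
```

The SA acceptance step is what distinguishes the hybrid from a plain GA: early on (high temperature) worse mutants are sometimes kept, preserving diversity; as the temperature cools, the search settles into the low-cost region.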
2.2 Case studies
In this paper, process plans with optimum and alternative solutions are generated for 20 part
drawings based on manufacturing time and cost. These alternative process plans are sent to
the scheduling module to generate the best schedules for the shop floor based on due dates.
A sample part drawing and its feature details are shown in Fig. 1 and Table 1 respectively.
Fig. 1. Sample part drawing
2.3 Operations information

Table 1. Operations information for sample part drawing

F ID  Feature        Operations        Dimensions in mm
F1    Surface        Milling           (L-200, W-100, H-50) *4
F2    Surface        Milling           L-200, W-100
F3    Pocket         Vertical milling  L-90, H-10
F4    Pocket         Vertical milling  L-10, W-40, H-10, R-5
F5    Step           Shaping           L-50, W-100, H-30
F6    Pocket         Vertical milling  L-20, W-20, H-10, R-10
F7    Hole           Drilling          D-10, Depth-10
F8    Pocket         Vertical milling  L-20, W-20, H-10, R-10
F9    Hole           Drilling          D-10, Depth-10
F10   Pocket         Vertical milling  L-100, W-80, H-20, R-10
F11   Through hole   Drilling          D-10, Depth-50
F12   Through hole   Drilling          D-10, Depth-50
F13   Through hole   Drilling          D-10, Depth-50
F14   Through hole   Drilling          D-10, Depth-50
The precedence relations for the sample part drawing are shown in Fig. 2. These precedence
relations are generated according to standard rules; however, the user is allowed to choose
the precedence relations according to requirements and available resources. The relations
shown below are illustrative, based on standard principles.
The model was applied to twenty part drawings, and the corresponding optimized process
plans, alternative process plans and schedules are shown in Tables 2, 3 and 4 respectively.
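Precedence feasibility of a generated operation sequence can be checked with a few lines of code. This is our illustration, not the paper's implementation; the feature IDs and precedence pairs below are examples in the spirit of Fig. 2 (a hole is drilled only after its pocket is milled), not the paper's exact relations.

```python
# prereqs[op] = set of operations that must be completed before op
# (hypothetical pairs chosen for illustration).
PREREQS = {"F7": {"F6"}, "F9": {"F8"}, "F11": {"F10"}}

def respects_precedence(sequence, prereqs):
    done = set()
    for op in sequence:
        if not prereqs.get(op, set()) <= done:
            return False  # a predecessor has not been executed yet
        done.add(op)
    return True
```

Filtering candidate sequences through a check like this is how an algorithm can keep its search inside the feasible solution domain, as the conclusions of this paper emphasize.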
Figure 2. Precedence relation of the sample part drawing
Table 2. Best plans for the part drawings
Table 3. Five alternative plans for the sample part drawing
Table 4. Output for scheduling on the shop floor
3. Conclusions
1. The hybrid algorithm has been adopted for the combined optimization of process planning
and scheduling of a typical job shop. This approach can generate multiple optimal or
near-optimal process plans with good computational efficiency using a combined
machining cost criterion.
2. The precedence constraints are defined and manipulated in the hybrid algorithm approach,
and the operations are always kept within the feasible solution domain, so that the risk of
generating invalid process plans is avoided. Multiple process plans have been used for
scheduling, which reflects a practical dynamic workshop environment.
3. The hybrid algorithm is a heuristic local-search, non-deterministic method; it works based
on randomization and interpretation of the derived solutions.
4. Genetic Algorithm and Simulated Annealing (GA and SA) approaches are considered
non-deterministic techniques, used whenever exact solutions cannot be derived.
References
Kim, Y. K., 2001. "A set of data for the integration of process planning and job shop scheduling".
Available at http://syslab.chonnam.ac.kr/links/data-pp&s.doc.
Kumar, M., Rajotia, S., 2003. “Integration of scheduling with computer aided process planning”,
Journal of Materials Processing Technology Vol.138, pp 297–300.
Li, W.D., McMahon, C.A., 2007. “A simulated annealing-based optimization approach for
integrated process planning and scheduling”, International Journal of Computer Integrated
Manufacturing Vol.20 No.1, pp 80–95.
Shao, X. Y., Li, X. Y., Gao, L., Zhang, C. Y., 2009. “Integration of process planning and
scheduling a modified genetic algorithm-based approach”, Computers & Operations
Research36, pp 2082–2096.
Sreenivasulu Reddy, A., Ravindranath, K., 2012. "Integration of process planning and scheduling
activities using Petri nets", International Journal of Multidisciplinary Research & Advances in
Engineering (IJMRAE), ISSN 0975-7074, Vol. 4, No. III (July 2012), pp 387–402.
Tonshoff H. K., et al., 1989. FLEXPLAN: "Concept for Intelligent Process Planning and
Scheduling", The CIRP International Workshop, Hannover, Germany.
A Brief Review on Algorithm Adaptation Methods for Multi-Label Classification
Jitendra Agrawal1, Shikha Agrawal2*, Shilpi Kaur1, Sanjeev Sharma1
1School of Information Technology, RGPV, Bhopal, M.P., India
2University of Technology, RGPV, Bhopal, M.P., India
*Corresponding author (e-mail: shikha@rgtu.net)
Classification techniques are based on machine learning and are used to classify each
item in a dataset into one of a predefined set of classes or groups. In multi-label
classification, multiple classes are to be predicted for each problem instance; that is,
each instance is assigned more than one label. Binary classification, multi-class
classification and ordinal regression problems can be seen as special cases of
multi-label problems in which each instance is assigned only one label. Text
classification is the main application area of multi-label classification techniques;
however, relevant work is also found in areas such as bioinformatics, medical
diagnosis, scene classification and music categorization. There are two approaches to
multi-label classification: the first is the algorithm-independent or problem
transformation approach, in which the multi-label problem is dealt with by transforming
the original problem into a set of single-label problems; the second is algorithm
adaptation, in which specific algorithms are proposed to solve the multi-label
classification problem directly. In this work, we review various research efforts
conducted under algorithm adaptation for multi-label classification.
1. Introduction
Classification is a process in which each record in a dataset is assigned to a particular
class from a set of classes; an example is assigning a student to one of the two classes
'Sporty' or 'Studious'. Classification can be single-label or multi-label. In single-label
classification, learning is performed on a set of examples that are each associated with a
single label c from a set of disjoint labels L. For example, an instance will belong to exactly
one of class 'C1', 'C2' or 'C3' from the set of classes {C1, C2, C3}. However, several modern
applications such as text categorization, medical analysis, protein function categorization,
music classification and semantic scene categorization require examples to be associated
with a set of labels. For example, a text document containing information about the 26/11
attacks can be categorized as news, movie and terrorist attack simultaneously. In semantic
scene classification, a photograph can belong to more than one conceptual class, such as
beach, forest, city and people, at the same time. Similarly, a protein can perform many
functions simultaneously: enzymatic proteins aid digestion in the stomach, support the
functioning of the pancreas, assist blood clotting and convert glycogen into glucose. Thus, in
multi-label classification each example is associated with a subset of labels Yi drawn from
the given label set C, i.e. Yi ⊆ C. Such classification problems can be solved either by the
problem transformation or the algorithm adaptation approach, as described by Tsoumakas et
al. (2007). In the next section we discuss the work done by various researchers under the
algorithm adaptation method.
2. Algorithm adaptation
The algorithm adaptation approach uses various algorithms to directly handle the entire
multi-label dataset. The following subsections give a brief review of the research work done
under algorithm adaptation.
2.1. Boosting algorithms
The main idea behind boosting is to combine many weak classifiers to produce a strong
classifier. This is done by iteratively selecting a training set where each instance is assigned a
label. A set of weights is uniformly distributed as Dt over the instances and labels. These
weights are fed to the weak learner, which produces weak hypotheses. The error is computed
by summation over the distribution Dt. Finally, based on this error value, the weights of
incorrectly classified instances are increased, so that the examples that were classified
incorrectly are fed back to the algorithm and the weak learner is forced to focus on the hard
examples in the training set, whereas correctly classified examples receive lower weight. The
simplest version of AdaBoost (Adaptive Boosting) is based on this concept. However,
maintaining a set of weights over training examples alone does not address multi-class and
multi-label problems; to deal with them, a set of weights is maintained over pairs of training
examples and labels. During the boosting process, the example-label pairs that are difficult to
predict incrementally receive higher weights, while lower weights are maintained over the
examples and labels that are easy to classify. Schapire and Singer (2000) proposed two
extensions of AdaBoost for multi-class, multi-label classification problems. The first boosting
algorithm, named AdaBoost.MH, is derived by reducing the multi-label data to binary data;
binary AdaBoost is then applied to this binary data. The goal is to predict only the correct
labels. It uses Hamming loss and updated learning algorithms to increase the accuracy of the
classification task. AdaBoost.MR is the second algorithm; it performs label ranking such that
the correct labels receive the highest ranks, and classification accuracy is improved by
minimizing ranking loss.
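The reduction at the heart of AdaBoost.MH, as described above, can be sketched in a few lines: every (instance, label) pair becomes one binary example, positive if the label is relevant to the instance and negative otherwise. This is our illustration of the idea, not Schapire and Singer's implementation.

```python
def mh_reduction(instances, labelsets, all_labels):
    # Each (instance, label) pair becomes a binary example:
    # +1 if the label is in the instance's label set, -1 otherwise.
    binary = []
    for x, ys in zip(instances, labelsets):
        for lbl in all_labels:
            binary.append(((x, lbl), +1 if lbl in ys else -1))
    return binary

# Two documents over three labels become 2 * 3 = 6 binary examples.
examples = mh_reduction(
    ["doc1", "doc2"],
    [{"news", "sports"}, {"weather"}],
    ["news", "sports", "weather"],
)
```

A standard binary booster can then be run on `examples`, with the weight distribution Dt maintained over these example-label pairs exactly as the text describes.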
2.2. Probabilistic generative models
Probabilistic generative models are generally used to generate a sequence of
observable data using some probability distribution. The Naive Bayes model is a conditional
probability learning method based on Bayes' theorem, which relates the probability of the
occurrence of an event to the occurrence or non-occurrence of an associated event. Because
Naive Bayes is fast and highly scalable, it has been used for multi-label classification. Ueda
and Saito (2003) applied probabilistic generative models to the multi-label text categorization
problem. To detect multiple categories of text simultaneously, two probabilistic generative
models, PMM1 (Parametric Mixture Model 1) and PMM2 (Parametric Mixture Model 2), are
proposed in their work. These models use a word-based, Bag-of-Words (BOW)
representation, and are based on the assumption that multi-category text contains a mixture
of the characteristic words that appear in single-labeled text belonging to each of its
categories. In PMM1, a class-dependent probability is approximated; this is regarded as a
"first-order" approximation. According to the authors, PMM2 is a more flexible model,
because the parameter vectors of duplicate categories are also used to approximate the
class-dependent probability. When tested on a real-world dataset (yahoo.com web pages),
the PMMs proved to be much faster than Naive Bayes, k-nearest neighbor and three-layer
neural networks. Ghamrawi et al. (2005) proposed two models based on Conditional Random
Fields (CRF). The first, the Collective Multi-Label classifier (CML), captures label
co-occurrences but does not account for the presence of particular feature values in objects.
The second, the Collective Multi-Label with Features classifier (CMLF), maintains parameters
for observational features. Experimental results show that the proposed models outperform
their single-label counterparts on standard text corpora. The multi-label classification
algorithms discussed above have high computational cost. To reduce this cost, a novel
multi-label classifier was proposed by Z. Wei et al. (2011): the Naive Bayesian Multi-Label
classification (NBML) algorithm, which incorporates a two-step feature selection strategy. In
feature selection, a subset of discriminative features occurring in the training set is selected to
improve classification quality and reduce computational complexity. In the first step,
document frequency and the chi-square test are used for feature selection; in the second
step, FCBF (Fast Correlation-Based Filter selection) is applied to the results of the first step,
further reducing feature dimensionality. When tested on a real World Wide Web dataset,
NBML performs comparably to other multi-label algorithms. None of the methods mentioned
above considers the correlations among labels, which results in low classification
performance; many researchers have worked on label correlations in multi-label
classification, as discussed in the following paragraph. A generative probabilistic model, the
Correlated Labeling Model (Col Model), was proposed by Wang et al. (2008). The main aim of the Col Model is
to capture the information conveyed in the class membership, to exploit the in-depth relation
between classes and words via the latent topic factors, and to predict the potential classes of
an unseen document. It is a supervised model and employs a multivariate normal distribution
to capture the correlation between the classes. Experimental results show that the Col Model
achieves good precision and recall, and that classification performance increases significantly
when the correlation among classes is considered. Zhang et al. (2009) proposed a Multi-Label
Naive Bayes (MLNB) method for the multi-label classification problem. The authors
incorporate a two-stage filter-wrapper feature selection strategy to improve the performance
of MLNB. In the first stage, Principal Component Analysis (PCA) is used to eliminate irrelevant
and redundant features; in the second stage, a Genetic Algorithm (GA) is used to optimize the
classification by explicitly considering correlations among labels of different instances through
the fitness function. Experiments show that the proposed approach performs effectively on
synthetic as well as real-world datasets. Although incorporating PCA and GA improves
performance, it also increases the time complexity for high-dimensional datasets. A
second-order CRF (Conditional Random Field) model was proposed by Wang et al. (2010) for
multi-label image classification, to capture the semantic associations between labels. In the
proposed model the feature weights are initialized differently, and a voting technique is then
applied to improve performance: multiple CRFs are obtained iteratively and each CRF votes
for several labels; for each label, if the vote count exceeds a predefined threshold, the label is
regarded as a final label for the image. The results show the effectiveness of this method on
the MSRC dataset. To address the inherent correlations among multiple labels, Ma et al.
(2012) proposed a generative model named the Labeled Four-Level Pachinko Allocation
Model (L-F-L-PAM). The proposed algorithm is based on the labeled LDA model, with an
additional latent correlations level added to enhance performance. In addition, Pruned Gibbs
Sampling is used for inference on unlabeled test documents, which reduces inference time in
the test stage. The results of experiments conducted on text datasets (the Reuters-21578
corpus and web pages from yahoo.com) show that considering the relations between multiple
labels improves the overall performance and computational efficiency of the multi-label
classification task.
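The simplest Naive Bayes treatment of multi-label data, one smoothed two-class multinomial model per label (i.e. binary relevance), can be sketched as follows. This is a generic illustration of the idea, not the NBML or MLNB algorithms reviewed above; the toy documents and labels are invented.

```python
import math
from collections import Counter

def train(docs, labelsets, labels):
    # One Laplace-smoothed two-class multinomial NB model per label.
    vocab = sorted({w for d in docs for w in d})
    model = {}
    for lbl in labels:
        pos = [d for d, ys in zip(docs, labelsets) if lbl in ys]
        neg = [d for d, ys in zip(docs, labelsets) if lbl not in ys]
        model[lbl] = []
        for group in (pos, neg):
            counts = Counter(w for d in group for w in d)
            total = sum(counts.values())
            logp = {w: math.log((counts[w] + 1) / (total + len(vocab)))
                    for w in vocab}
            prior = math.log((len(group) + 1) / (len(docs) + 2))
            model[lbl].append((prior, logp))
    return model

def predict(model, doc):
    # A label is predicted when its "present" score beats its "absent" score.
    out = set()
    for lbl, (pos, neg) in model.items():
        def score(side):
            prior, logp = side
            return prior + sum(logp[w] for w in doc if w in logp)
        if score(pos) > score(neg):
            out.add(lbl)
    return out

docs = [["rain", "flood"], ["goal", "match"]]
labelsets = [{"news", "weather"}, {"news", "sports"}]
model = train(docs, labelsets, ["news", "sports", "weather"])
```

Because each label is scored independently, this baseline ignores label correlations entirely, which is exactly the weakness the correlation-aware models above are designed to address.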
2.3. Support vector machines
Elisseeff and Weston (2002) presented Rank-SVM for multi-label classification. This is a
linear model that uses ranking loss as its cost function, where ranking loss is defined as the
average fraction of label pairs that are ordered incorrectly; Rank-SVM aims to minimize this
cost function. In experiments, Rank-SVM yielded improvements over other multi-label
algorithms. SVMs and other discriminative classification methods are designed to assign an
instance to one of a set of disjoint classes, but a high degree of correlation and overlap exists
among classes in multi-labeled data. To deal with this problem, Godbole and Sarawagi (2004)
proposed two enhancements to support vector machines. The first is an algorithm that deals
with the correlation between classes; this is done by extending the original dataset with |C|
extra features, and the binary classifiers are then trained on this extended dataset. The
second is handling the overlapping classes in multi-label classification by modifying the
margins of the SVMs, achieved by one of two methods: 1) removing similar negative
instances that lie very close to the resulting hyperplane, or 2) removing all training instances
of confusable classes identified in the confusion matrix. Two SVM-based algorithms were
proposed by Qin et al. (2009) for the multi-label classification problem. The first is based on
the 'one-against-rest' strategy, in which the multi-label training set is decomposed into binary
problems; k binary SVM sub-classifiers are trained, and a membership vector is obtained from
the sub-classifiers, according to which the text is classified. The second is a hyper-sphere
multi-label classification algorithm (HSMC) for datasets with larger numbers of samples and
classes. Experimental results on Reuters-21578 show the effectiveness of both algorithms.
Hariharan et al. (2012) proposed a max-margin multi-label classification formulation referred
to as M3L. The authors incorporate prior knowledge about densely correlated labels to
improve the performance of M3L. Further, SMO (Sequential Minimal Optimization) is adapted
to optimize this formulation. SMO breaks a large quadratic programming (QP) problem into a set of
the smallest possible QP problems, each of which is solved analytically. Experiments show that
by incorporating prior knowledge, M3L (Max-Margin Multi-Label Classification) can improve
prediction accuracy over independent methods. In order to overcome the drawback of rank
SVM, Xu (2012) proposed the multi-label support vector machine (ML-SVM) algorithm. In
this algorithm a zero label is added to an SVM architecture derived from rank
SVM, and a new form of cost function is introduced that reduces the computational cost. The
multi-label problem is first decomposed into several sub-problems by applying a problem
transformation technique, and the Frank-Wolfe method is then used to train these sub-problems.
In experiments, ML-SVM proved stronger than ML-kNN, ML-RBF, ML-NB, BP-MLL and rank SVM.
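The 'one against rest' decomposition used by several of the SVM approaches above can be sketched as follows. This is a minimal illustration: the centroid-distance base classifier, the toy data and all names are assumptions for demonstration, not the SVM sub-classifiers of the cited work.

```python
# 'One against rest' decomposition: one binary problem per label, one
# classifier per binary problem, and the predicted membership vector is
# assembled from the per-label decisions. The nearest-centroid base
# classifier below stands in for the SVM sub-classifiers of Qin et al.

def train_binary(X, y):
    """Return per-class feature centroids for a binary problem."""
    pos = [x for x, t in zip(X, y) if t]
    neg = [x for x, t in zip(X, y) if not t]
    mean = lambda rows: [sum(col) / len(rows) for col in zip(*rows)]
    return mean(pos), mean(neg)

def predict_binary(model, x):
    pos_c, neg_c = model
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return dist(pos_c) < dist(neg_c)

def train_one_vs_rest(X, Y, n_labels):
    # One binary classifier per label: label k is 'positive', the rest 'negative'.
    return [train_binary(X, [k in labels for labels in Y]) for k in range(n_labels)]

def predict_one_vs_rest(models, x):
    # Membership vector: the set of labels whose binary classifier fires.
    return {k for k, m in enumerate(models) if predict_binary(m, x)}

# Toy data: 2-D points; label 1 correlates with the y-axis, label 0 with the x-axis.
X = [(0.0, 1.0), (0.1, 0.9), (1.0, 0.0), (0.9, 0.1), (1.0, 1.0)]
Y = [{1}, {1}, {0}, {0}, {0, 1}]
models = train_one_vs_rest(X, Y, n_labels=2)
print(predict_one_vs_rest(models, (0.95, 0.95)))  # → {0, 1}
```

A point near (1, 1) receives both labels, illustrating how the per-label decisions combine into a label set.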
2.4. k-Nearest neighbor
The k-nearest neighbor algorithm is based on instance-based learning. A test
tuple is classified by the majority vote of the k neighbors that are closest to it.
Due to its simplicity and high performance on large training sets, it has been applied to
multi-label classification. Multi-label k-nearest neighbor (ML-kNN) is introduced by Zhang and
Zhou (2007) and uses the basic concept of k-nearest neighbor. For each test tuple it first
identifies the k nearest neighbors and, according to the classes assigned to these neighbors, the
test tuple is classified using the maximum a posteriori (MAP) principle. Experimental results show that
the performance of ML-kNN is comparable to rank-SVM and better than that of boosting algorithms.
Spyromitros et al. (2008) proposed k-NN in conjunction with Binary Relevance (BR) problem
transformation method, known as BR-kNN. When BR is naively paired with k-NN, the same k-NN
search is performed L (the total number of labels) times. To overcome this, in the proposed
BR-kNN independent predictions are made for each label following a single k-nearest
neighbor search. The authors identify two extensions of BR-kNN to improve its
performance. The first extension known as BR-kNN-a, handles the empty set that may be
produced as an output of BR; in such a case, BR-kNN-a outputs the label with the highest
confidence. The second extension, BR-kNN-b, works in two steps: first it calculates
the average label-set size of the k nearest neighbors, and then it outputs that many labels
with the highest confidence. Results show that BR-kNN-a dominates on the scene and
emotion datasets, whereas BR-kNN-b dominates on the yeast dataset. Coelho et al. (2011)
proposed Multi-Label k-Nearest Michigan Particle Swarm Optimization (ML-KMPSO) which
hybridizes MPSO (Michigan Particle Swarm Optimization) and ML-kNN (Multi-Label k-Nearest
Neighbor). At first MPSO breaks the MLC into sub classification problems without considering
the label correlation. And then ML-kNN is used to establish the correlation among classes.
When experimented on two real world data sets: yeast and scene, the proposed algorithm
outperforms other multi-label classification algorithms.
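The lazy, neighbor-based idea behind these methods can be sketched as follows. This is a simplified majority-vote stand-in: the full ML-kNN replaces the vote with MAP estimation over neighbor counts, and the toy data here are invented for illustration.

```python
# Simplified sketch of lazy multi-label kNN: find the k nearest training
# instances and assign every label carried by a majority of them. ML-kNN
# (Zhang and Zhou 2007) replaces this majority vote with a maximum a
# posteriori (MAP) decision over the neighbour label counts.

def ml_knn_predict(X_train, Y_train, x, k=3):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    # indices of the k nearest neighbours of the test tuple
    neighbours = sorted(range(len(X_train)), key=lambda i: dist(X_train[i], x))[:k]
    labels = set().union(*(Y_train[i] for i in neighbours))
    # keep the labels present in a majority of the neighbours
    return {l for l in labels if sum(l in Y_train[i] for i in neighbours) > k / 2}

X_train = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y_train = [{'a'}, {'a', 'b'}, {'b'}, {'b'}]
print(ml_knn_predict(X_train, Y_train, (0.9, 0.9), k=3))  # → {'b'}
```

The test point near (1, 1) has neighbours carrying 'b' three times and 'a' once, so only 'b' survives the vote.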
2.5. Neural network
An ANN (Artificial Neural Network) is analogous to the human brain: it is constructed
from nodes, or neurons, connected to each other by connection links. Each
connection link is associated with a weight that stores information used to solve the
problem at hand. Due to their ability to solve complex problems for which no algorithmic
solutions exist, ANNs have become very popular for solving the multi-label classification
problem. Zhang and Zhou (2006) proposed the first neural network-based algorithm for multi-label
classification, named backpropagation for multi-label learning (BP-MLL). In this work
a single hidden layer feed-forward BP-MLL neural network is used with sigmoidal neurons
and bias parameters in the hidden and input layers. The number of output-layer neurons is
equal to the number of labels. Training is based on the traditional BP (BackPropagation)
algorithm, but to deal with the correlation between labels a global error function is proposed in
this paper. If the output value of a neuron is higher than a predefined threshold, then the
corresponding label is assigned to the input instance; otherwise it is not. Experiments on functional
genomics and text categorization datasets show that BP-MLL performs better than well-established
multi-label learning algorithms. Grodzicki et al. (2008) proposed some modifications in the
error function of BP-MLL (Zhang and Zhou 2006) by incorporating a threshold
value directly into it. The error function is generalized by
adding independent thresholds for different labels. The results show that the proposed
modification improves the performance of neural network-based multi-label classifiers.
Radial basis function neural networks for multi-label learning (ML-RBF) are proposed by Zhang (2009).
The training procedure of ML-RBF is a two-stage process. In the first stage, k-means clustering is
performed on the set of instances of each possible class. The centroids so obtained are then
used to determine the parameters of the basis functions. In the second stage, weights are
adjusted to minimize the sum-of-squares error function. Applied to three
real-world datasets, this algorithm proves both its efficiency and its effectiveness in comparison
with other algorithms. Sapozhnikova (2009) presents an extension of fuzzy ARTMAP for multi-label
classification, called multi-label-FAM. In the proposed methodology a set of best categories with
high activation values is produced, based on the rule that a category is included in the set if the
relative difference of its activation lies below a predefined threshold. After
normalizing these activation values, the resultant prediction is obtained by calculating a
weighted sum of the individual predictions. A post-processing filter is used to produce the labels
having scores greater than a predefined fraction of the highest score. When tested on the
yeast dataset, the performance of the proposed classifier is comparable with that
of other multi-label classifiers, except for ML-kNN. De Souza et al. (2009) proposed an
effective machine learning technique for automatic multi-label text categorization, known as
VG-RAM WNN (virtual generalizing random access memory weightless neural networks), which
provides fast training and testing along with a simple implementation. RAM-based
neural networks use RAM to store knowledge instead of connection weights: the network's
input values are used as the RAM address, and the value stored at that address is the neuron's
output. When tested on two real-world datasets, i.e. categorization of free-text descriptions of
economic activities and categorization of web pages, VG-RAM WNN outperforms ML-kNN.
Implementation simplicity and high computational speed during the training phase of
Probabilistic Neural Network (PNN) motivated Ciarelli et al. (2009) to propose a modified
version of PNN to solve the multi-label classification problem. PNN is essentially an
implementation of a statistical algorithm called kernel discriminant analysis, in which the
operations are organized into a four-layer feed-forward network. The
proposed version of PNN is composed of only three layers but, like the original PNN, requires
just one training step. Comparative experimental evaluation on the Yahoo and economic activities
databases proved that PNN is superior to other algorithms.
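The thresholded-output scheme shared by BP-MLL and its variants can be illustrated with a minimal forward pass. The fixed toy weights below are assumptions (BP-MLL learns its weights by backpropagation with a pairwise ranking error); only the per-label threshold decision mirrors the text.

```python
import math

# Minimal forward pass in the spirit of BP-MLL: one hidden layer of
# sigmoidal neurons, one output neuron per label, and per-label
# thresholds (as in the Grodzicki et al. generalisation) that turn
# real-valued outputs into a label set. Weights are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W_hidden, b_hidden, W_out, b_out):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W_hidden, b_hidden)]
    return [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
            for row, b in zip(W_out, b_out)]

def predict_labels(outputs, thresholds):
    # A label belongs to the instance when its output exceeds its threshold.
    return {k for k, (o, t) in enumerate(zip(outputs, thresholds)) if o > t}

W_hidden = [[2.0, -1.0], [-1.0, 2.0]]   # 2 inputs -> 2 hidden neurons
b_hidden = [0.0, 0.0]
W_out = [[3.0, -3.0], [-3.0, 3.0]]      # 2 hidden neurons -> 2 labels
b_out = [0.0, 0.0]

out = forward([1.0, 0.0], W_hidden, b_hidden, W_out, b_out)
print(predict_labels(out, thresholds=[0.5, 0.5]))  # → {0}
```

With these weights the first label's output (about 0.86) clears its threshold while the second (about 0.14) does not.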
3. Conclusion
Multi-label classification is a generalization of multi-class classification in which each instance
is assigned a subset of labels. Researchers have worked to solve the multi-label problem
using both approaches, i.e. problem transformation and algorithm adaptation.
This paper has focused on work under algorithm adaptation and
concludes that the algorithms described here handle various issues of multi-label
classification, especially the exploitation of correlations among labels, the prediction of multiple
classes for unseen instances, and the minimization of ranking loss as well as Hamming loss. Future
work in multi-label classification includes the application of population-based metaheuristics, such as
Ant Colony Optimization and Particle Swarm Optimization, to improve the
performance and efficiency of existing algorithms. Another promising direction is combining
ranking with multi-label classification so that the classifier predicts the order of the classes
associated with a given instance.
References
Ciarelli, P. M., Oliveira, E., Badue, C., and De Souza, A. F. Multi-Label Text Categorization
Using a Probabilistic Neural Network. International Journal of Computer Information
Systems and Industrial Management Applications (IJCISIM), 2009,1, 133-144.
Coelho, T.A., Esmin, A. A. A. and Junior, W. M. Particle Swarm Optimization for Multi-label
Classification. Proceedings of GECCO, 12-16 July 2011, Dublin, Ireland.
De Souza, A. F., Pedroni, F., Oliveira, E., Ciarelli, P.M., Henrique, W. F., Veronese, L., and
Badue, C. Automated multi-label text categorization with VG-RAM weightless neural
networks. Neurocomputing, 72, 2009, 2209–2217.
Elisseeff, A., and Weston, J. A kernel method for multi-labelled classification. Advances in
Neural Information Processing Systems, 2002,14.
Ghamrawi, N. and McCallum, A. Collective Multi-Label Classification. Proceedings of ACM
conference on Information and Knowledge Management (CIKM ’05), 2005, 195-200,
Bremen, Germany.
Godbole, S. and Sarawagi, S. Discriminative methods for multi-labeled classification.
Proceedings of the 8th Pacific-Asia Conference on Knowledge Discovery and Data Mining
(PAKDD ‘04) (eds. Dai, H., Srikant, R. and Zhang, C.), 2004, 22-30.
Grodzicki, R., Mandziuk, J. and Wang, L. Improved Multi-Label Classification with Neural
Networks. Proceedings of Advances in Knowledge Discovery and Data Mining (eds.
Rudolph, G. et al.), 2008, 409-416.
Hariharan, B., Vishwanathan, S. V. N., and Varma, M. Efficient Max-Margin Multi-Label
Classification with Applications to Zero-Shot Learning. Machine Learning Journal, 2012,
88(1),127-155.
Ma, H., Chen, E., Xu, L., and Xiong, H. Capturing correlations of multiple labels: A generative
probabilistic model for multi-label learning. Neurocomputing, 92, 2012,116-123.
Qin, Y.P., and Wang, X.K. Study on Multi-label Text Classification Based on SVM.
Proceedings of sixth International Conference on Fuzzy Systems and Knowledge
Discovery, 2009.
Sapozhnikova, E. P. Multi-label classification with Art-based neural networks.
Proceedings of Second International Workshop on Knowledge Discovery and Data
Mining, 2009.
Schapire, R. E. and Singer, Y. Boostexter: a boosting-based system for text categorization.
Machine Learning 39, 2000, 135-168.
Spyromitros, E., Tsoumakas, G. and Vlahavas, I. An Empirical Study of Lazy Multilabel
Classification Algorithms. Proceedings of 5th Hellenic Conference on Artificial
Intelligence (SETN 2008), 401-406.
Tsoumakas, G. and Katakis, I. Multi-Label Classification: An Overview. International Journal
of Data Warehousing & Mining, 2007, 3(1), 1-13.
Ueda, N. and Saito, K. Parametric mixture models for multi-label text. Advances in Neural
Information Processing Systems, 15, 2003, 721-728.
Wei, Z., Zhang, H., Zhang, Z., Li, W. and Miao, D. A Naive Bayesian Multi-Label Classification
Algorithm With Application to Visualize Text Search Results. International Journal of
Advanced Intelligence, 2011, 3(2), 173-188.
Wang, H., Huang, M., and Zhu, X. A Generative Probabilistic Model for Multi-Label
Classification. Eighth IEEE International Conference on Data Mining, 2008.
Wang, X., Liu, X., Shi, Z., Shi, Z., and Sui, H. Voting Conditional Random Fields for Multi-label
Image Classification. 3rd International Congress on Image and Signal Processing
(CISP 2010), 2010.
Xu, J. An Efficient Multi-Label Support Vector Machine with a Zero Label. Expert Systems
with Applications, 39, 2012, 4796–4804.
Zhang, M.L., Pena, J.M., and Robles, V. Feature selection for multi-label naive Bayes
classification. Information Sciences, 179, 2009, 3218–3229.
Zhang, M.L., and Zhou, Z.H. ML-KNN: A lazy learning approach to multi-label learning.
Pattern Recognition, 2007, 40(7), 2038-2048.
Zhang, M.L. and Zhou, Z.H. Multi-Label Neural Networks with Applications to Functional
Genomics and Text Categorization. IEEE transactions on knowledge and data
engineering, 2006, 18(10), 1338-1351.
Zhang, M.L. ML-RBF: RBF Neural Networks for Multi-Label Learning. Neural Processing
Letters, 2009, 29(2), 61-74.
Optimization of Influential Parameters of Solar Parabolic
Collector using RSM
P. Venkataramaiah1*, P. MohanaReddy2, D. Vishnuvardhan Reddy2
1 S. V. University College of Engineering, Tirupati-517502, A.P, India
2 S. V. College of Engineering, Tirupati-517507, A.P, India
*Corresponding author (e-mail: pvramaiah@gmail.com)
The present work focuses on the optimization of the influential parameters of a solar
parabolic collector. The experiments are designed using Response Surface Methodology
(RSM), considering the influential parameters: absorber tube material, reflector material
and period of sun incidence. The experiments are conducted according to the Design of
Experiments (DOE) and the outlet temperature of the working fluid (water) is recorded for
each experimental run. Parametric optimization is then performed using Design Expert on
the experimental data, based on the objective function obtained from RSM and the
constraints derived from the bounds of the influential parameters.
Key words: Design Expert, RSM, influential parameters, outlet temperature
1. Introduction
Energy is the key input that drives and improves the quality of life. Solar thermal systems play an
important role in providing non-polluting energy for domestic and industrial applications.
Concentrating solar technologies, such as parabolic trough collectors, are used to supply
industrial process heat, off-grid electricity and bulk electrical power. In a parabolic trough solar
collector, the reflective surface focuses sunlight on to a heat collecting element (absorber tube)
through which fluid flows. The fluid captures solar energy in the form of heat that can be used
in a variety of applications. Parabolic trough systems use mirrored surface of a linear parabolic
concentrator to focus direct solar radiation on an absorber pipe running along the focal line of
the parabola. The HTF (heat transfer fluid) inside the absorber pipe is heated and collected in
an outlet storage tank. The collectors rotate about a horizontal north-south axis, an arrangement
which results in slightly less energy incident on them over the year but favors summertime
operation, when peak power is needed.
Figure 1. Assembled view of the present setup
2. Literature review
RSM is important in designing, formulating, developing, and analyzing new scientific
studies and products. The most common applications of RSM are in the industrial, biological
and clinical, social, food, and physical and engineering sciences.
Since RSM is applied so extensively in the real world, it is also worth knowing how
and where Response Surface Methodology originated. The response surface method,
which combines experiment design theory and the quality loss concept, has been used in
developing robust designs of products and processes.
According to Hill and Hunter, the RSM method was introduced by G.E.P. Box and K.B.
Wilson in 1951 (Wikipedia 2006). Box and Wilson suggested using a first-degree polynomial
model to approximate the response variable. They acknowledged that this model is only an
approximation, not exact, but such a model is easy to estimate and apply, even when little
is known about the process (Wikipedia 2006). Moreover, Mead and Pike state that the origin of RSM
dates to the 1930s, with the use of response curves (Myers, Khuri, and Carter 1989).
According to the research of Myers et al., the orthogonal design was motivated by Box
and Wilson in the case of the first-order model. For second-order models, many subject-matter
scientists and engineers have a working knowledge of the central composite designs
(CCDs) and the three-level designs of Box and Behnken (1960). The same research notes
another important contribution by Hartley (1959), who made an effort to create a more
economical, or small, composite design. In the present work a CCD with three levels has been
adopted.
3. Experimental work
The experimental setup of the solar parabolic collector has been fabricated and installed on
the roof of the building, as shown in Fig. 2. The equipment is placed in the E-W direction, such
that the front view of the setup faces east. Whenever the ball valve is opened, water enters
the absorber tube through the flexible hose. The absorber, which is placed at the focal line
of the parabolic trough, is heated by direct radiation as well as by radiation reflected from the
reflective surface. After absorbing sufficient radiation, the water in the absorber tube gets
heated and its density decreases; the end of the absorber tube is connected to the storage
tank, and hot water is collected at the top of the tank. A solar auto-tracking system (operated
by 12 V DC batteries) is used to capture the maximum radiation by tilting the collector
according to the sun's direction.
3.1. Experimental design by RSM
The experimental design (Table 2) is developed by considering the three influential
factors and their levels (Table 1), using the RSM CCD.
Table 1. Influential parameters and their levels

Symbol  Parameter                  Level 1                               Level 2                               Level 3
A       Absorber Material          U-tube Aluminum coated with Black Si  U-tube Aluminum coated with Black Ni  U-tube Aluminum coated with Black Cr
B       Reflector Material         Polished Aluminum (AP)                Silvered Mirror (SM)                  Stainless Steel (SS)
C       Period of sun incidence    9.00AM-11.00AM                        11.00AM-01.00PM                       01.00PM-04.00PM
3.2. Conducting experiments and development of objective function
Experiments are conducted on the solar parabolic collector as per the DOE obtained from
RSM, and the outlet temperature of the water is recorded for each experimental run, as shown in
Table 2.
Table 2. DOE and Experimental data

Table 3. Parameter Constraints

NAME     GOAL           L.L    U.L    L.W   U.W    IMP
A:ABS    is in target   0.83   0.97   1     1      1
B:REF    is in target   0.80   0.97   1     0.49   2
C:TIME   minimize       1      3      1     1      3
temp     maximize       76     95     1     1      3

L.L = lower limit, U.L = upper limit, L.W = lower weight, U.W = upper weight, IMP = importance
A regression equation (objective function) is developed from the experimental data (Table 2),
and plots are drawn to investigate the effect of the process variables on the response
characteristic [7]. The regression equation (Eq. 1) for the temperature as a function of the three
input process variables is developed using the experimental data.
Y = 79.73 - 1.90*A + 3.40*B + 2.50*C - 2.0*A*B - 0.25*A*C + 2.50*B*C + 5.68*A^2 + 0.18*B^2 - 1.32*C^2    ... (Eq. 1)
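Eq. 1 can be evaluated directly, and a coarse grid search gives a quick cross-check of the optimisation. This sketch assumes the factors A, B and C are expressed in coded units on [-1, 1]; it is not the Design Expert desirability procedure used in the paper.

```python
# Direct evaluation of the regression model of Eq. 1 plus a coarse grid
# search over the coded factor range [-1, 1]. Illustrative cross-check
# only; the paper uses Design Expert's desirability optimisation.

def temperature(A, B, C):
    # Coefficients taken verbatim from Eq. 1
    return (79.73 - 1.90 * A + 3.40 * B + 2.50 * C
            - 2.0 * A * B - 0.25 * A * C + 2.50 * B * C
            + 5.68 * A ** 2 + 0.18 * B ** 2 - 1.32 * C ** 2)

grid = [i / 10 for i in range(-10, 11)]   # coded levels -1.0, -0.9, ..., 1.0
best = max((temperature(a, b, c), a, b, c)
           for a in grid for b in grid for c in grid)
print("max temperature %.2f at A=%.1f, B=%.1f, C=%.1f" % best)
```

On this grid the fitted model peaks at the corner A = -1, B = 1, C = 1, reflecting the signs of the linear and quadratic coefficients.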
The analysis of variance (ANOVA) is also performed to statistically analyse the results.
Table 4. ANOVA for Temperature

Source        Sum of Squares   df   Mean Square   F Value   p-value Prob > F
Model         426.79            9   47.42          7.75     0.0018
A-ABS          36.10            1   36.10          5.90     0.0355
B-REF         115.60            1  115.60         18.89     0.0015
C-POD          62.50            1   62.50         10.21     0.0096
AB             32.00            1   32.00          5.23     0.0453
AC              0.50            1    0.50          0.082    0.7809
BC             50.00            1   50.00          8.17     0.0170
A^2            88.78            1   88.78         14.50     0.0034
B^2             0.091           1    0.091         0.015    0.9054
C^2             4.78            1    4.78          0.78     0.3977
Residual       61.21           10    6.12
Lack of Fit    51.88            5   10.38          5.56     0.0415
Pure Error      9.33            5    1.87
Cor Total     488.00           19
Statistical inferences
1. The Model F-value of 7.75 implies the model is significant. There is only a 0.18% chance
that a "Model F-Value" this large could occur due to noise.
2. The "Lack of Fit F-value" of 5.56 implies the lack of fit is significant. There is only a
4.15% chance that a "Lack of Fit F-value" this large could occur due to noise.
3. The "Pred R-Squared" of 0.3009 is not as close to the "Adj R-Squared" of 0.7617 as one
might normally expect. "Adeq Precision" measures the signal-to-noise ratio; a ratio greater
than 4 is desirable. The ratio of 11.955 indicates an adequate signal, so this model can be
used to navigate the design space.
4. Values of "Prob > F" less than 0.0500 indicate that model terms are significant.
In this case A, B, C, AB, BC and A^2 are significant model terms. Values greater than 0.1000
indicate that model terms are not significant.
4. Optimization by RSM
To decide on the adequacy of the model, lack-of-fit tests and model
summary statistics are examined for the temperature characteristic of the solar parabolic trough.
The sum-of-squares test in each table shows how terms of increasing complexity
contribute to the model [5]. It can be observed that for this response the quadratic model is
appropriate; the results (Tables 5-6) confirm the adequacy of the quadratic model. The
optimized values of the influential parameters obtained from RSM are shown in Table 5.
Table 5. Optimum values obtained from Design Expert

S.No   Absorptivity   Reflectivity   Time   Temperature
1      0.97           0.80           1.24   81.725
2      0.97           0.80           1.00   81.227
3      0.97           0.80           1.03   81.292
4      0.97           0.87           1.30   81.373
5      0.97           0.81           1.00   81.007
6      0.97           0.84           1.00   80.605
Table 6. Selection of Adequate Model for temperature
a. Effect of Process Variables on Temperature
Response surfaces are plotted to study the effect of the process variables on the
temperature, as shown in Figures 4.1(a)-4.1(c). From Figure 4.1(a), the temperature is
found to have an increasing trend with increasing reflectivity and, at the same POI, it
increases with increasing absorptivity. It is observed from Figure 4.1(b) that the temperature
increases with increasing POI and also with increasing absorptivity. It is seen
from Figure 4.1(c) that the temperature decreases with increasing POI and with
decreasing reflectivity.
Fig. 1 Combined effect of ABS and REF on temperature
Fig. 2 Combined effect of POI and ABS on temperature
Fig. 3 Combined effect of POI and REF on temperature
5. Conclusions
In the present work an experimental setup of a solar parabolic trough has been used. The
controllable parameters, such as reflector material, absorber material and time, which
influence the responses were identified, and the experiments were conducted according to the
RSM design. With the use of efficient selective coatings, high solar absorptance is obtained.
ANOVA was performed for each response, and the most influential factors were identified:
the reflector and absorber materials have the greatest impact on the response. The RSM approach
formulates a multiple linear regression equation, which is simulated using Design Expert
software, and near-optimum values are obtained.
References
Fernandez-Garcia, A. and Zarza, E. (2010) "Parabolic trough solar collectors and their applications",
Elsevier journal of Renewable Energy, 14, 1695-1721.
Farahat, S. and Sarhaddi, F. (2009) "Exergetic optimization of flat solar collectors", Elsevier journal of
Renewable Energy, 34, 1169-1174.
MohanaReddy, P. and Venkataramaiah, P. (2012) "Optimization of process parameters of a
solar parabolic trough in winter using grey-Taguchi approach", IJERA, 2 (2248-9622).
NarayanaSaibaba, K.V. and King, P. (2012) "Development of models for dye removal process
using response surface methodology and artificial neural networks", IJGET, 1 (2278-9928).
Sahoo, P. (2011) "Optimization of turning parameters for surface roughness using RSM and GA",
APEM, 63 (1854-6250).
Sinha, U.K. and Sharma, S.P. (2008) "Modelling the parabolic collector for solar thermal electric
power", APEM, 63, 205-211.
Khoo, L.P. and Chen, C.H. (2011) "Integration of response surface methodology with genetic
algorithm", IJAMT, 18, 483-489.
Evaluating Coupling Loss Factors of Corrugated Plates for
Minimising Sound Radiated by Plates
S.S. Pathan*, D.N. Manik
Mechanical Engineering Department, Indian Institute of Technology Bombay, Mumbai, India
* Corresponding author (email: pathan_ss@yahoo.co.in)
Due to the forces acting on them, flat plates sometimes become very
noisy. Stiffeners are used to modify the dynamic characteristics of flat plates
with the objective of minimising noise. Noise being a high-frequency phenomenon, it is
predicted using the Statistical Energy Analysis (SEA) method. Coupling Loss
Factors (CLF) are significant SEA parameters and need to be evaluated for a
better understanding of the energy exchange between subsystems. In the present
work, a model is developed for evaluating the coupling loss factors of corrugated
plates with the help of power balance equations between the SEA subsystems,
and an optimum configuration is suggested to minimise the sound radiated by flat
plates.
Keywords: Statistical Energy Analysis, Coupling Loss Factors, Sound radiation.
1 Introduction
SEA is an approach commonly used by designers in the dynamic analysis of the
response of complex mechanical systems in vibration and acoustics at high frequencies. SEA
does not need exact details, such as the mode shapes and resonance frequencies, of a specific
system. It is concerned with the average behaviour of a population, or ensemble, of structures
that are nominally identical but in practice have small differences.
In SEA, the system is divided into a number of subsystems, normally acoustic and structural,
which are coupled together and broadband stationary random excitations are applied to one
or more of them. Each subsystem is represented using its gross physical properties, such as
geometric form, dimensions, material properties and loss factors.
Within a frequency band, the structural vibrational or fluid cavity acoustical energy in each
subsystem is assumed to reside in resonances, each of which has equal energy. This
assumption is called equipartition of modal energy, and it requires a high modal density for
each subsystem.
The coupling between parts and losses of energy from the parts of the system are described
by coupling loss and loss factors. In order to calculate energy levels among the various parts
of the coupled system, power balance equations containing coupling loss and loss factors are
written.These equations involve expressions for power flowing from one subsystem to
another. The energy of each subsystem can be obtained by solving these power balance
equation (1) given by[1],
n
j 1 1 j
12
1n
21
n
j 1 2 j
2n
E1
E 2
n2
E
n
n
j 1 nj
n1
Pin,1
1 P
in , 2
P
in,n
Where Ei represents the spatially and time averaged energy of subsystem i ,
loss factor of subsystem i and
ij
the band centre frequency and
Pin ,i
(1)
ii
is the internal
is the coupling loss factor between subsystems and ω is
is the power input into subsystem i .
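For a two-subsystem system, the power balance of equation (1) reduces to a 2x2 linear system that can be solved in closed form. A minimal sketch; the loss factors, frequency and input power below are illustrative assumptions, not data from the paper.

```python
# Two-subsystem illustration of the SEA power balance of equation (1):
# with internal loss factors eta11, eta22, coupling loss factors eta12,
# eta21 and power injected only into subsystem 1, the matrix is 2x2 and
# can be inverted in closed form. All numbers are assumed values.

omega = 2 * 3.141592653589793 * 1000.0   # band centre frequency, rad/s
eta11, eta22 = 0.01, 0.02                # internal loss factors
eta12, eta21 = 0.005, 0.003              # coupling loss factors
P_in = [1.0, 0.0]                        # power input into each subsystem, W

# omega * [[eta11 + eta12, -eta21], [-eta12, eta22 + eta21]] [E1, E2]^T = P_in
a, b = eta11 + eta12, -eta21
c, d = -eta12, eta22 + eta21
det = omega * (a * d - b * c)
E1 = (d * P_in[0] - b * P_in[1]) / det   # closed-form 2x2 inverse
E2 = (a * P_in[1] - c * P_in[0]) / det
print(E1, E2)   # subsystem energies, J
```

Substituting E1 and E2 back into the two balance equations recovers the injected powers, which is a useful sanity check for larger SEA models as well.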
Xie et al. [2] have analysed the coupling between the local modes by considering aluminium
extrusions used in railway vehicles, but they have used sandwich panels. Sometimes
sandwich panels cannot be used in cases where heat dissipation is important; for example, in
case of transformers. In the present work, the coupling loss factors \eta_{ij} are modelled for the
corrugated plates, which are obtained by modifying flat plates with stiffeners.
2 Coupling loss factors for corrugated plates
The base plate (Plate 1), of size 0.48 m x 0.3 m x 0.0035 m, is used to study the dynamic
interaction with three inverted 'V'-shaped stiffeners attached to the base plate, giving
various corrugated plates (Plates 2 to 10). The stiffener included angles are 70 deg for Plates 2-4,
90 deg for Plates 5-7 and 120 deg for Plates 8-10, as shown in figure 1. Corresponding to each
corrugation configuration shown in figure 1, three thickness values t = 1.5, 2 and 3 mm are used
for the present study.
The corrugations attached to the base plate in different configurations act as a
bridge connecting the local mode subsystems on the base of the corrugated plate. The
inverted 'V'-shaped corrugated structure can be considered an additional subsystem
comprising local modes. Local modes of the base plate are coupled with those of the
corrugations by bending waves, so vibration energy is exchanged between the
base plate and the corrugation. Because of the variation in corrugation geometry, there will be
differences in the energy exchange between the base and the corrugated structure.
Figure 1. Corrugation configurations: (a) 70 deg, (b) 90 deg and (c) 120 deg.
Wave interaction occurs at the junction formed between the corrugation and
the base plate. The structure can in general support bending, transverse shear and
longitudinal waves; here, bending waves are considered dominant and significant for
the present study of bending wave coupling. When wave transmission
occurs, the coupling loss factor between the local mode subsystems can be
found using the following formula [1]:
\eta_{12} = \frac{c_{g1}\,\tau_{12}\,L}{\pi\,\omega\,A_1}    (2)

where c_{g1} is the group velocity on the source panel 1, \tau_{12} is the transmission coefficient
between panel 1 and panel 2, and A_1 is the area of panel 1.
The group velocity c_g is twice the bending wave speed c_b, where

c_b = 1.35\sqrt{f\,c_L\,h}    (3)

(c_L is the longitudinal wave speed).
In the SEA model, as the corrugations are included as local subsystems, a bending
wave incident on a joint will generate waves in the other connected local subsystems. The
coupling loss factor from one panel to another can be worked out using equation
(2). Formulae given by Craik [6] are used for determining the transmission coefficients for
different types of joints; these transmission coefficients are used here to calculate the coupling
loss factors between the local mode subsystems.
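Equations (2) and (3) can be evaluated directly. A minimal sketch: the longitudinal wave speed and the transmission coefficient tau12 below are assumed illustrative values; only the plate thickness and panel dimensions come from Section 2.

```python
import math

# Direct evaluation of Eqs. (2)-(3): bending wave speed, group velocity
# and the line-junction coupling loss factor. c_L and tau12 are assumed
# illustrative values, not parameters reported in the paper.

def bending_wave_speed(f, c_L, h):
    # Eq. (3): c_b = 1.35 * sqrt(f * c_L * h)
    return 1.35 * math.sqrt(f * c_L * h)

def coupling_loss_factor(f, c_L, h, tau12, L, A1):
    # Eq. (2) as reconstructed here: eta_12 = c_g1 * tau_12 * L / (pi * omega * A1),
    # with the group velocity c_g taken as twice the bending wave speed.
    c_g = 2.0 * bending_wave_speed(f, c_L, h)
    omega = 2.0 * math.pi * f
    return c_g * tau12 * L / (math.pi * omega * A1)

f = 1000.0         # band centre frequency, Hz
c_L = 5200.0       # longitudinal wave speed, m/s (assumed, aluminium-like)
h = 0.0035         # base plate thickness, m (from the paper)
tau12 = 0.3        # transmission coefficient (assumed)
L = 0.48           # junction length, m (base plate length)
A1 = 0.48 * 0.3    # area of panel 1, m^2

print(coupling_loss_factor(f, c_L, h, tau12, L, A1))
```

With these assumed values the coupling loss factor comes out on the order of 10^-2, a typical magnitude for structural junctions in SEA models.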
A structure under consideration consisting of local subsystems is shown in figure 2.
Here, in the present discussion, a portion of plate is referred to as a panel to distinguish it
from the overall concept of the plate. Panels 1, 2 and 3 form part of the base plate (Plate 1),
whereas panel 4 forms part of the corrugation.
Figure 2: Local mode subsystems comprising panels 1, 2 and 3, with corrugations 4.
The power flow from local modes on the source room side to the corrugated local
modes consists of contributions of the following:
(i) Panel ‘1’ to two panels ‘4’, (ii) each panels ‘2’ to panel ‘4’ and (iii) each panels ‘3’ to two
panels ‘4’.
Equation (4) gives the power flow from the source (flat) side to the corrugation:

Ws→c = ω (3E1 + 2E2 + E3) ηsc    (4)

Ws→c = ω (6E1 η14 + 2E2 η24 + 2E3 η34)    (5)

where η14, η24 and η34 are the coupling loss factors from panel '1' to panel '4', panel '2' to panel '4' and panel '3' to panel '4' respectively, and ηsc is the coupling loss factor from the source side flat panel to the local modes of the corrugation. Using the assumption of equipartition of energy, which assumes the same response for all the corrugated plates, equation (5) can be written as:

Ws→c = ω ms ⟨vs²⟩ ηsc    (6)

ω ms ⟨vs²⟩ ηsc = ω ⟨vs²⟩ (6m1 η14 + 2m2 η24 + 2m3 η34)    (7)

where ms is the mass of the outer panel; m1, m2 and m3 are the masses of panels '1', '2' and '3' respectively; l1, l2, ..., li are the lengths of the panels; and ⟨vs²⟩ is the spatially averaged mean square velocity of the source side panel. Introducing the geometrical parameters corresponding to all the panels and rearranging the terms, the coupling loss factor from source to corrugation is obtained as:

ηsc = tp (6l1 η14 + 2l2 η24 + 2l3 η34) / [t4 (3l1 + 2l2 + 2l3)]    (8)
Similarly, based on the power flow from the corrugation local modes to the local modes on the source side, the coupling loss factor ηcs is obtained as:

ηcs = t4 (3η41 + η42 + 2η43) / (6 tp)    (9)
For the evaluation of the coupling loss factors ηsc and ηcs, the coupling loss factors ηi4 and η4i (i = 1, 2, 3) are evaluated first using equation (2), for which the transmission coefficients must be known. The transmission loss Rij for a cross joint [6] is used to calculate the transmission coefficients τij:

Rij = 20 log10[ (A^(1/2) + A^(-1/2)) / 2 ] + B + C [D + log10(ψ^(1/4))]    (10)
The transmission loss of the joint and the transmission coefficient are related by equation (11):

Rij = 10 log10 (1/τij)    (11)
As per Craik [6], for a cross joint, the values of B, C and D are taken as 4.0153, 0.2535 and 1.56 respectively, whereas the value of A is found from:

A = (Bj kj²) / (Bi ki²) = [cLj ρj hj / (cLi ρi hi)] (hj / hi)    (12)

and ψ of equation (10) is obtained from:

ψ = kj / ki,  with k = √(2√3 ω / (cL h))    (13)

where Bi and Bj are the bending stiffnesses, ki and kj the bending wavenumbers, and hi and hj the thicknesses of plates i and j.
3. Results and discussion
The equations developed here are simulated in MATLAB to evaluate the coupling loss factors for all the corrugated plates. It is observed that, for the same included angle of the stiffeners, the coupling loss factor is optimum for t = 2 mm (figures 3(a), 3(b) and 3(c)). By varying the angles of the stiffeners, the CLF is again optimized for t = 2 mm. The present work shows that even if flat plates are stiffened, their dynamic response may not be favourable in all cases. There exists an optimum configuration which minimizes the sound radiated by the flat plate; the remaining configurations do not provide the benefit of the stiffening effect.
Figure 3(a)-(f): Coupling loss factors for different corrugated plates.
4. Conclusion
Flat plates are modified using inverted-angle-like stiffeners with the aim of optimizing the structural response at high frequencies. The coupling loss factors, being very important
parameters of SEA, are modelled and predicted using MATLAB. The configuration of plate 9 (120°, t = 2 mm) is the best one out of the population studied here for minimising the sound radiated by the flat plate, as the coupling loss factor is optimum for this case. Further investigation is continuing to evaluate the modal densities of the plates under consideration. Here, the discussion is restricted to the CLF prediction only, but experimental work has also been carried out to capture the dynamic response of the plates.
References
Craik, R.J.M., 1996, Sound Transmission through Buildings using Statistical Energy Analysis, Aldershot, England: Gower Publishing Ltd.
Lyon, R.H., DeJong, R.G., 1995, Theory and Application of Statistical Energy Analysis, second edition, Boston: Butterworth-Heinemann.
Xie, G., Thompson, D.J., Jones, C.J.C., 2006, A modelling approach for the vibroacoustic behaviour of aluminium extrusions used in railway vehicles, Journal of Sound and Vibration, vol. 293, 921-932.
Performance of Elitist Teaching-Learning-Based Optimization
(TLBO) Algorithm on Problems from GLOBAL Library
Anikesh Kumar, Nikunj Agarwalla, Prakash Kotecha*
Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam, India
*Corresponding author (email: pkotecha@iitg.ernet.in)
Teaching-Learning-Based Optimization (TLBO) is a recently introduced heuristic algorithm which mimics the learning environment of a classroom. This work reports our findings on the performance of elitist TLBO on 36 non-linear constrained optimization problems, with a diverse number of variables and constraints, that are available in the GLOBAL Library and have been tested with a state-of-the-art mathematical programming solver (BARON). In addition, the effect of different types of constraints and of different evolutionary parameters such as population size, number of generations and elite population size has also been analyzed.
1. Introduction
The emergence of heuristic algorithms as a substitute for gradient-based techniques has proved instrumental in alleviating the difficulties associated with optimizing large-scale engineering problems, partially owing to their modelling flexibility and simplicity. Many of these heuristic algorithms mimic natural phenomena. One such recently proposed evolutionary algorithm is Teaching-Learning-Based Optimization (TLBO)
(Rao et al., 2011, 2012a, 2012b). All evolutionary algorithms share some common parameters such as population size, number of generations (iterations), elite size, etc., which considerably affect the performance of the algorithm. In addition to these common
evolutionary parameters, these algorithms have their own set of specific parameters. For
instance, Genetic Algorithm (GA) requires the setting of mutation and cross-over probability
(Back T., 1996), Particle Swarm Optimization (PSO) requires inertia weight in addition to
social and cognitive parameters (Clerc M., 2006), Ant Colony Optimization (ACO) requires
trail and visibility parameters along with evaporation rate (Dorigo et al.,1996), Differential
Evolution (DE) uses differential weight and crossover probability (Storn et al.,1997), whereas
Harmony Search (HS) requires the setting of harmony memory consideration rate and pitch
adjustment (Lee et al., 2005). However, TLBO has been reported to be an "algorithm-specific parameter-less algorithm" (Rao et al., 2012c; R.V. Rao, personal communication, June 3, 2013) as it does not have any tuning parameters of its own but relies only on the common evolutionary parameters, thereby potentially reducing the computational effort required for the solution of the problem. The performance of TLBO has been compared with other well-known
heuristic algorithms (Rao et al., 2011, 2012a, 2012b) such as GA, DE, HS (Rao et al., 2011,
2012c). However, to the best of our knowledge, TLBO has not been compared with any
gradient-based algorithm. Hence, we intend to determine the performance of elitist TLBO on a
set of non-linear constrained problems from the GLOBAL Library (GAMS, 2013) by comparing
its solutions with the solutions reported by state-of-the-art mathematical programming based
global solvers such as BARON.
TLBO is a population-based algorithm which mimics the learning in a classroom environment and essentially comprises two phases: the Teacher Phase, where the students learn from the teacher, and the Learner Phase, which accounts for the interaction between the students. At the beginning of the Teacher Phase, the best individual is assigned the role of teacher. At the end of this phase, the mean marks of the class are shifted towards the teacher and the initial marks of the students are replaced by new marks depending on their interaction with the teacher. In the Learner Phase, every student randomly interacts with another student in the class and enhances his knowledge. A student X1 can interact with any other student X2 and, depending upon who is better, one is moved towards the other: if X2 is better, then X1 is moved towards X2, and vice versa. The detailed algorithm has not been included in this article for the sake of brevity and is available
along with its different versions in the literature (Rao et al., 2011, 2012a, 2012b). TLBO has been applied to problems in different fields like mechanical design (Rao et al., 2011), heat exchanger design (Rao et al., 2013) and clustering (Satapathy et al., 2011). In addition to single-objective optimization problems, TLBO has also been applied to multi-objective optimization problems (Zou et al., 2013).
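The teacher and learner phases described above can be sketched as a minimal unconstrained TLBO. The sphere test function, bounds and evolutionary parameters below are illustrative assumptions, not settings from this study; TF is the teaching factor, chosen randomly as 1 or 2, and a move is kept only if it improves the fitness:

```python
import random

def tlbo(fitness, lb, ub, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    dim = len(lb)
    pop = [[rng.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    clip = lambda x: [min(max(x[d], lb[d]), ub[d]) for d in range(dim)]
    for _ in range(generations):
        # Teacher phase: move everyone towards the best individual (the teacher)
        teacher = pop[min(range(pop_size), key=lambda i: fit[i])]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice([1, 2])  # teaching factor
            cand = clip([pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            fc = fitness(cand)
            if fc < fit[i]:          # greedy selection
                pop[i], fit[i] = cand, fc
        # Learner phase: each student interacts with a random partner,
        # moving towards a better partner and away from a worse one
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if fit[j] < fit[i] else -1.0
            cand = clip([pop[i][d] + sign * rng.random() * (pop[j][d] - pop[i][d])
                         for d in range(dim)])
            fc = fitness(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)   # illustrative test function
x_best, f_best = tlbo(sphere, lb=[-5.0] * 3, ub=[5.0] * 3)
print(f_best)
```

Elitism, not shown here, would additionally copy the best few individuals over the worst at the end of each generation.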
This article aims at determining the performance of elitist TLBO to solve a set of nonlinear constrained problems by comparing its solutions with the ones determined by
mathematical programming based global solvers and available in the GLOBAL Library
(GAMS, 2013). It also aims at studying the change in performance of TLBO for different
problem characteristics such as the type of constraints as well as different evolutionary
parameters such as population size, number of generations and elite size.
In the next section, a brief description of the problems on which elitist TLBO has been tested is presented, following which the implementation of elitist TLBO on these problems is discussed. After the implementation section, the performance of elitist TLBO on the different problems is discussed by comparing the solutions available in the literature with those determined by elitist TLBO. This section also discusses the effect of the evolutionary parameters on elitist TLBO.
2. Case studies
Elitist TLBO has been implemented on 36 non-linear constrained problems available
in the GLOBAL Library and they have been chosen as this library has been reported (GAMS,
2013) to constitute a varied set of both theoretical and practical test models. In addition, these
problems are also being used to evaluate the performance of the “best known” gradient based
mathematical programming algorithm. This will enable us to compare the performance of
elitist TLBO with the “best known” mathematical programming solver. Details of the selected
problems are provided in Table 1. The names of the problems have been kept consistent
(except for ex5_2_2) with the GLOBAL Library, so that there is no ambiguity. For the sake of
brevity, the problems are not presented in this article and they can be freely accessed from
the GLOBAL Library (GAMS, 2013). These problems belong to a wide array of fields like
economic planning, structures, pollution control, the petroleum industry, risk management, etc. All the problems listed in Table 1 have been attempted in the GAMS Library using proprietary solvers such as BARON; 11 of these problems have been reported as unsolved in the library, which may be due either to the inability of the solver or to the infeasibility of the problem.
Table 1. Details of the case studies

Name      nv  nc  Obj      | Name      nv  nc  Obj   | Name     nv  nc  Obj
ex2_1_3   13  9   -15      | sambal    17  10  3.97  | ex8_4_3  52  25  *
ex2_1_5   10  11  -268.01  | ex7_3_5   13  15  1.21  | ex8_4_4  17  12  0.21
ex2_1_6   10  5   -39      | ex2_1_9   10  1   -0.38 | ex5_2_2  9   6   -400
ex2_1_10  20  10  49318.02 | ex9_2_2   10  11  99.99 | chakra   62  41  *
ex8_2_1a  57  33  *        | ex9_2_3   16  15  0     | chenery  43  38  *
ex8_2_4   55  81  *        | himmel16  18  21  0.87  | ex8_4_8  42  30  *
st_rv8    40  20  *        | ex9_1_1   13  12  -13   | pollut   42  8   *
st_rv9    50  20  *        | ex9_1_4   10  9   -37   | srcpm    39  28  *
ex2_1_7   20  10  -4150.41 | ex9_1_5   13  12  -1    | bearing  13  12  1.95
process   10  7   -5.67    | ex9_1_8   14  12  -3.25 | ex5_3_2  22  16  1.86
ex14_1_6  9   15  0        | ex9_2_6   16  12  -1    | ex5_4_3  16  13  4845.46
ex8_2_1b  57  33  *        | ex9_2_7   10  9   17    | immun    21  7   0
ex5_2_2_case1 in the library has been renamed as ex5_2_2; nv – number of variables, nc –
number of constraints; Obj – best known value as provided in GLOBAL Library; * – no
solution is available in the Library; values of the objective function have been rounded off to
the second decimal.
For the sake of convenience, the elitist TLBO code provided in the literature (Rao et al., 2012b) was modified for multiple runs and implemented on the problems listed in Table 1 using MATLAB 6.0.0.88 R12. Since TLBO has been primarily designed for unconstrained optimization problems, the penalty function method has been used for handling the constraints, with a penalty factor of 10^15. Due to the complex nature of the equality constraints, they have been converted into inequality constraints as |h| ≤ ε. The exact tolerance ε used for each problem is specified in Table 2. For problems with (-∞, ∞) as the limits for the decision variables, the upper and lower limits have been redefined and are reported in the Supplementary File (Data Link, 2013).
Since the performance of elitist TLBO varies with the number of generations, population size and elite size, each problem was run for three different numbers of generations (200, 500 and 1000), three population sizes (50, 75 and 100) and two elite sizes (4 and 6). Thus there are 18 combinations for each problem. Due to the stochastic nature of the algorithm, each of the 18 combinations was run with 10 different sets of random numbers, giving 180 instances for each of the 36 problems. The seeds for the "rand" function in MATLAB were determined using the "randint" function and are [152, 417, 832, 108, 225, 282, 279, 644, 325, 285].
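The constraint handling described above can be sketched as follows: equalities h(x) = 0 are relaxed to |h(x)| ≤ ε, and any violation is penalised with the 10^15 factor mentioned above. The example objective and constraints are illustrative, not problems from the Library:

```python
PENALTY = 1e15  # penalty factor used in this study

def penalized_fitness(obj, ineq_cons, eq_cons, eps, x):
    """Penalty-function value: objective plus PENALTY times total violation.
    Inequalities are in g(x) <= 0 form; each equality h(x) = 0 is treated
    as the relaxed inequality |h(x)| - eps <= 0."""
    violation = sum(max(0.0, g(x)) for g in ineq_cons)
    violation += sum(max(0.0, abs(h(x)) - eps) for h in eq_cons)
    return obj(x) + PENALTY * violation

# Illustrative problem: minimise x0^2 + x1^2 s.t. x0 + x1 >= 1 and x0 = x1
obj = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: 1.0 - (x[0] + x[1])   # rewritten in g(x) <= 0 form
h1 = lambda x: x[0] - x[1]
print(penalized_fitness(obj, [g1], [h1], 1e-5, [0.5, 0.5]))  # feasible point
```

A feasible point returns just the objective value, while any infeasible point is pushed far above every feasible one, so an unconstrained optimizer such as TLBO is steered into the feasible region.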
3. Results and discussions
This section discusses the results obtained by applying elitist TLBO to the problems listed in Table 1, including the effect of different evolutionary parameters and its performance relative to the solutions available in the literature. Since there are 180 runs for each problem, only the best solution among all 18 combinations, together with the corresponding mean, worst, standard deviation and evolutionary parameters, is reported in Table 2. In cases where multiple combinations led to the best solution, the one with the better mean value was selected. As each problem was run 180 times, for the sake of brevity the complete set of results is not presented in this article but is provided in the Supplementary File (Data Link, 2013). Similarly, the decision variables corresponding to the best solution are provided in the Supplementary File (Data Link, 2013).
Performance: In Table 2, the last column states whether elitist TLBO determined a better solution (B), the same solution (S) as provided in the Library, a solution inferior to the one available in the Library (W), or a new solution (N) to a problem whose solution is unavailable in the Library. Elitist TLBO was able to solve 27 of the 36 problems and was not able to find feasible solutions for 9 problems (U). Of the 27 solved problems, 6 were reported as unsolved in the library. Of the 9 problems unsolved by TLBO, 5 are also reported as unsolved in the library; thus TLBO was not able to discover a feasible solution for 4 problems that have known solutions.
In Figure 1, the first column of the first bar chart represents the number of problems that could
be solved by elitist TLBO whereas the second column shows the number of unsolved
problems by elitist TLBO. In both these columns, the bottom part represents the number of
problems whose solution is available in the Library whereas the top part corresponds to the
number of problems whose solutions are not available in the Library. From the first column, it can be seen that elitist TLBO could solve 6 problems whose solutions are not available in the Library (problems 5, 6, 7, 8, 12 and 25), whereas the second column shows that elitist TLBO could not determine solutions for 4 problems which have a solution available in the Library (problems 33 to 36).
The bar chart on the right has four columns: the first corresponds to the problems with no equality constraints, whereas the second, third and fourth correspond to equality-constraint tolerances of 10^-10, 10^-5 and 10^-2 respectively. Each column is divided into three parts. The first part (from the bottom) corresponds to problems with a better solution than available, or a solution to a problem unsolved in the Library. The second part corresponds to problems whose solutions match the ones available in the Library, and the last part to
the problems whose solution by elitist TLBO is inferior to the ones that are already available in
the Library.
Effect of Population Size: For a fixed number of generations and elite size, an increase in
population size does not guarantee a better solution. For instance, in ex5_2_2, for 200
generations and an elite population of 4, the best solution corresponding to a population size
of 75 is -0.3899 whereas it is -0.3 for a population size of 100. This phenomenon can be seen
in other instances as depicted in Table 2 and the Supplementary File (Data Link, 2013).
Table 2. Results obtained by TLBO

SI | Name     | ng   | Es | np  | Best     | Mean     | Worst    | σ           | ε      | P
1  | ex2_1_3  | 1000 | 6  | 100 | -15      | -12.3    | -9       | 2.6         | -      | S
2  | ex2_1_5  | 1000 | 6  | 50  | -268.01  | -268.01  | -268.01  | 0           | -      | S
3  | ex2_1_6  | 200  | 6  | 100 | -39      | -39      | -39      | 0           | -      | S
4  | ex2_1_10 | 1000 | 4  | 100 | 49318.02 | 57758.1  | 133718.8 | 26689.89    | -      | S
5  | ex8_2_1a | 1000 | 4  | 100 | -977.89  | -971.87  | -959.26  | 6.10        | -      | N
6  | ex8_2_4  | 1000 | 6  | 100 | -11268.3 | -11265   | -11255.9 | 3.96        | -      | N
7  | st_rv8   | 1000 | 4  | 75  | -132.64  | -127.10  | -106.54  | 7.59        | -      | N
8  | st_rv9   | 1000 | 6  | 100 | -120.14  | -113.67  | -108.83  | 3.29        | -      | N
9  | ex2_1_7  | 500  | 4  | 100 | -4119.52 | -3133.66 | -407.25  | 1469.36     | -      | W
10 | process  | 1000 | 6  | 100 | -6.07    | -0.31    | 0.85     | 2.5         | 10^-10 | B
11 | ex14_1_6 | 1000 | 6  | 100 | 0        | 0        | 0        | 0           | 10^-10 | S
12 | ex8_2_1b | 1000 | 6  | 50  | -852.06  | -734.81  | -719.56  | 41.65       | 10^-10 | N
13 | sambal   | 200  | 4  | 50  | 1028     | 1028     | 1028     | 8.44*10^-11 | 10^-10 | W
14 | ex7_3_5  | 200  | 4  | 75  |          |          |          |             |        |
15 | ex2_1_9  | 1000 | 4  | 100 |          |          |          |             |        |
16 | ex9_2_2  | 1000 | 6  | 50  |          |          |          |             |        |
17 | ex9_2_3  | 1000 | 6  | 100 |          |          |          |             |        |
18 | himmel16 | 1000 | 6  | 50  |          |          |          |             |        |
19 | ex9_1_1  | 500  | 4  | 50  |          |          |          |             |        |
20 | ex9_1_4  | 1000 | 4  | 50  |          |          |          |             |        |
21 | ex9_1_5  | 1000 | 4  | 100 |          |          |          |             |        |
22 | ex9_1_8  | 1000 | 6  | 50  |          |          |          |             |        |
23 | ex9_2_6  | 1000 | 6  | 100 |          |          |          |             |        |
24 | ex9_2_7  | 1000 | 6  | 100 |          |          |          |             |        |
25 | ex8_4_3  | 1000 | 4  | 100 |          |          |          |             |        | N
26 | ex8_4_4  | 1000 | 6  | 100 |          |          |          |             |        | W
27 | ex5_2_2  | 200  | 4  | 75  | -0.39    |          |          |             |        | W
28 | chakra   | 1000 | 6  | 100 |          |          |          |             |        | U
29 | chenery  | 1000 | 4  | 100 |          |          |          |             |        | U
30 | ex8_4_8  | 1000 | 6  | 100 |          |          |          |             |        | U
31 | pollut   | 1000 | 6  | 100 |          |          |          |             |        | U
32 | srcpm    | 1000 | 6  | 100 |          |          |          |             |        | U
33 | bearing  | 1000 | 6  | 100 |          |          |          |             |        | U
34 | ex5_3_2  | 1000 | 6  | 100 |          |          |          |             |        | U
35 | ex5_4_3  | 1000 | 4  | 50  |          |          |          |             |        | U
36 | immun    | 1000 | 6  | 100 |          |          |          |             |        | U

(Entries left blank for SI 14-36 are not legible in the source.)
ng – number of generations, Es – elite population size, np – population size, σ – standard deviation, ε – tolerance, P – performance, B – better, S – same, W – worse, U – unsolved, N – new solution.
Figure 1: Performance of elitist TLBO
Figure 2: Effect of elite size on ex2_1_3 and ex5_2_2
Effect of Elite Size: An increase in elite size sometimes speeds up convergence, but it is not always favourable for obtaining the best result. For instance, we obtain a better solution for ex2_1_3 with an increase in elite size: for 1000 generations and a population size of 100, the best solution for an elite size of 4 is -12.3 whereas it is -15 for an elite size of 6. On the contrary, in
ex5_2_2, for 200 generations and a population size of 50, the best solution for an elite size of
4 is -0.24 whereas it is -0.15 for an elite size of 6. Figure 2 shows the contrast of the two
instances described above. It should be noted that, since evolutionary algorithms need to be run for different sets of parameters and random numbers, the time required by TLBO would be much higher than that of a mathematical programming solver such as GAMS/BARON. As we have not tested other evolutionary algorithms such as GA, ACO, DE, HS, etc. on the problems in the Library, comparing the performance of TLBO with any of these algorithms on the basis of this article would be unfair.
4. Conclusions
The elitist TLBO was tested on 36 non-linear constrained problems and it was able to
determine the solutions for 27 problems. It was able to determine solutions for 6 problems,
under the circumstances listed in Table 2, which gradient-based solvers were not able to solve. Of the 9 problems for which elitist TLBO could not determine even a feasible solution, 4 have been solved by gradient-based algorithms as reported in the Library. The effect of
population size and elite size does play a role in the performance of elitist TLBO, but the
changes in performance owing to the changes in these parameters cannot be generalised.
Elitist TLBO has performed much better on problems without equality constraints than the
ones with equality constraints. Future work can include testing the performance of elitist
TLBO on various other optimization problems that are considered as “benchmark” problems
by the mathematical programming community to provide better clarity on the performance of
elitist TLBO.
References
Bäck, T., Evolutionary algorithms in theory and practice: evolution strategies, evolutionary
programming, genetic algorithms, Oxford University Press, Oxford, 1996
Clerc, M., Particle swarm optimization. ISTE Publishing Company, 2006
Dorigo, M., Maniezzo, V. and Colorni, A., The ant system: Optimization by a colony of
cooperating agents, IEEE Trans. Syst, Man, Cybern. B, vol. 26, no. 2, pp. 29–41, 1996
GLOBAL Library, http://www.gamsworld.org/global/globallib.htm, Last Accessed: May, 2013
Lee, K. S., and Geem, Z. W., A new meta-heuristic algorithm for continuous engineering
optimization: harmony search theory and practice. Computer Methods in Applied
Mechanics and Engineering. v194. 3902-3933, 2005
Satapathy, S. C., and Naik, A., Data clustering based on teaching-learning-based
optimization, Proceedings of the Second international conference on Swarm,
Evolutionary, and Memetic Computing, p.148-156, 2011
Storn, R. and Price, K. V., “Differential evolution - A simple and efficient heuristic for global
optimization over continuous Spaces,” J. Global Optim., vol. 11, pp. 341–359, 1997
Supplementary File, http://goo.gl/Gfwuo , Last Accessed: May, 2013
Rao, R. V., Savsani, V. J. and Vakharia, D. P., Teaching-learning-based optimization: A novel
method for constrained mechanical design optimization problems, Computer-Aided
Design, v.43 n.3, p.303-315, 2011
Rao, R. V., Savsani, V. J., and Vakharia, D.P., Teaching-Learning-Based Optimization: An
optimization method for continuous non-linear large scale problems, Information
Sciences: an International Journal, v.183 n.1, p.1-15, 2012a
Rao, R. V., and Patel, V., An elitist teaching-learning-based optimization algorithm for solving
complex constrained optimization problems. International Journal of Industrial
Engineering Computations, v.3, 535-560, 2012b
Rao, R. V., and Patel, V., Comparative performance of an elitist teaching-learning-based
optimization algorithm for solving unconstrained optimization problems. International
Journal of Industrial Engineering Computations, 4:29-50, 2012c
Rao, R. V., and Patel, V., Multi-objective optimization of heat exchangers using a modified
teaching-learning-based optimization algorithm. Applied Mathematical Modelling, Volume
37, Issue 3, Pages 1147–1162, 2013
Zou, F., Wang, L., Hei, X., Chen, D. and Wang, B., Multi-objective optimization using teaching-learning-based optimization algorithm. Engineering Applications of Artificial Intelligence, Volume 26, Issue 4, Pages 1291-1300, 2013
Optimal Placement of Multiple TCSC: A New Self Adaptive Firefly
Algorithm
R.Selvarasu1*, C.Christober Asir Rajan2
1 Department of EEE, JNTUH, Hyderabad, India
2 Pondicherry Engineering College, Puducherry, India
*Corresponding author (e-mail: selvarasunaveen@gmail.com)
This paper presents a new strategy for optimally placing multiple Thyristor Controlled Series Compensators (TCSCs) in power systems with a view to minimizing the transmission loss and improving the bus voltage profile. The proposed strategy uses a new Self Adaptive Firefly Algorithm (SAFA) to identify the optimal locations for TCSC placement and their parameters. The TCSC placement strategy is applied to the IEEE 14-bus system and the results are presented to demonstrate its effectiveness.
1. Introduction
In recent years, power systems have been forced to operate close to their thermal and stability limits due to exponentially increasing load demand. There is always a need for the construction of new generation facilities and transmission networks. However, these involve huge installation costs, environmental impact, political issues, large-scale displacement of population and land acquisition. One of the simplest alternatives is to minimize the transmission loss, which is estimated to be approximately 25% of the generated power, rather than constructing new generation systems.
The power-electronics-based FACTS devices developed by Hingorani (2000) have been effectively used for flexible operation and control of the power system through control of their parameters. They have the capability to control various electrical parameters of the transmission network in order to achieve better system performance. FACTS devices are divided into three categories: shunt connected, series connected and a combination of both. The Static Var Compensator (SVC) and the Static Synchronous Compensator (STATCOM) belong to the shunt-connected devices and have been in use for a long time. Essentially, they are variable shunt reactors which inject or absorb reactive power in order to control the voltage at a given bus. Both the Thyristor Controlled Series Compensator (TCSC) and the Static Synchronous Series Compensator (SSSC) belong to the series-connected devices. The TCSC and SSSC mainly control the active power in a line by varying the line reactance. They are in operation at a few places but are still under development (Larsen et al., 1994). The Unified Power Flow Controller (UPFC) belongs to the combined shunt-series category and is able to control active power, reactive power and voltage magnitude simultaneously or separately (Gyugyi, 1992).
Better utilization of existing power system capacity by installing FACTS devices has become an essential area of ongoing research. Recently, several strategies have been suggested for optimally placing FACTS devices in power systems with a view to enhancing their performance. Different meta-heuristic algorithms such as Genetic Algorithm (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO), Bees Algorithm (BA), Differential Evolution (DE) and Particle Swarm Optimization (PSO) (Yang, 2010) have been applied to the FACTS placement problem.
Optimal location of multi-type FACTS devices in a power system to improve loadability by means of a Genetic Algorithm has been successfully implemented by Gerbex, Cherkaoui and Germond (2001). PSO has been applied by Saravanan et al. (2007) to find the optimal location of FACTS devices considering the cost of installation and system loadability. A Bacterial Foraging algorithm has been used by Senthil Kumar and Renuga (2012) to find the
optimal location of UPFC devices with objectives of minimizing the losses and improving the
voltage profile.
The Firefly Algorithm (FA), developed by Yang (2010), has been found to be superior to other algorithms in the sense that it can handle multimodal combinatorial and numerical optimization problems more naturally and efficiently. It has since been applied by various researchers to a variety of problems, to name a few: economic dispatch by Apostolopoulos and Vlachos (2011) and Niknam et al. (2012), and unit commitment by Chandrasekaran and Simon (2012). However, an improper choice of the FA parameters affects the convergence and may lead to sub-optimal solutions. There is thus a need for better strategies for selecting the FA parameters with a view to obtaining the globally best solution besides achieving better convergence.
In this paper, a new self-adaptive firefly based strategy is proposed for TCSC placement with a view to minimizing the transmission loss and improving the bus voltage profile. The strategy identifies the optimal locations and the TCSC parameters. Simulations are performed on the IEEE 14-bus system using the MATLAB software package and the results are presented to demonstrate the effectiveness of the proposed approach.
2. Firefly algorithm
This algorithm mimics the flashing behavior of fireflies. It is similar to other optimization algorithms employing swarm intelligence, such as PSO. FA initially produces a swarm of fireflies located randomly in the search space; the initial distribution is usually produced from a uniform random distribution. The position of each firefly in the search space represents a potential solution of the optimization problem, and the dimension of the search space is equal to the number of optimizing parameters in the given problem. The fitness function takes the position of a firefly as input and produces a single numerical output value denoting how good the potential solution is; a fitness value is assigned to each firefly, and the brightness of each firefly depends on its fitness value. Each firefly is attracted by the brightness of other fireflies and tries to move towards them. The pull of a firefly towards another firefly depends on the attractiveness, which in turn depends on the relative distance between the fireflies and can also be a function of their brightness: a brighter firefly far away may not be as attractive as a less bright firefly that is closer. In each iterative step, FA computes the brightness and the relative attractiveness of each firefly and, depending on these values, updates the positions of the fireflies. After a sufficient number of iterations, all fireflies converge to the best possible position in the search space. The number of fireflies in the swarm is known as the population size, N. The selection of the population size depends on the specific optimization problem; however, a population size of 20 to 50 is typically used for PSO and FA for most
problem. However, typically a population size of 20 to 50 is used for PSO and FA for most
th
applications. Each m firefly is denoted by a vector
xm
as
xm x1m , xm2 , xmnd .
(1)
The search space is limited by the following inequality constraints:

$x_v^{(\min)} \le x_v \le x_v^{(\max)}, \quad v = 1, 2, \ldots, nd$.    (2)
Initially, the positions of the fireflies are generated from a uniform distribution using the following equation:

$x_m^v = x_v^{(\min)} + \left( x_v^{(\max)} - x_v^{(\min)} \right) \cdot rand$.    (3)

Here, $rand$ is a random number between 0 and 1, taken from a uniform distribution.
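As a minimal sketch of the uniform initialization in Equation (3), written in Python rather than the MATLAB used in the paper, with purely illustrative bounds and dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounds for nd = 3 decision variables (illustrative only).
x_min = np.array([0.0, 0.0, -0.8])
x_max = np.array([1.0, 1.0,  0.2])
N = 30  # population size

# Equation (3): x_m^v = x_v(min) + (x_v(max) - x_v(min)) * rand,
# drawn independently for every firefly and every dimension.
swarm = x_min + (x_max - x_min) * rng.random((N, x_min.size))
```

Every row of `swarm` is one candidate solution lying inside the box defined by Equation (2).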
The light intensity of the $m$-th firefly, $I_m$, is given by

$I_m = \mathrm{Fitness}(x_m)$.    (4)
Proceedings of the International Conference on Advanced Engineering Optimization Through Intelligent Techniques
(AEOTIT), July 01-03, 2013
S.V. National Institute of Technology, Surat – 395 007, Gujarat, India
The attractiveness between the $m$-th and $n$-th firefly, $\beta_{m,n}$, is given by

$\beta_{m,n} = \left( \beta_{\max,m,n} - \beta_{\min,m,n} \right) \exp\!\left( -\gamma_m r_{m,n}^2 \right) + \beta_{\min,m,n}$,    (5)

where $r_{m,n}$ is the Cartesian distance between the $m$-th and $n$-th firefly:

$r_{m,n} = \left\| x_m - x_n \right\| = \sqrt{ \sum_{v=1}^{nd} \left( x_m^v - x_n^v \right)^2 }$.    (6)
If the light intensity of the $n$-th firefly is larger than that of the $m$-th firefly, then the $m$-th firefly moves towards the $n$-th firefly, and its motion at the $k$-th iteration is given by the following equation:

$x_m(k) = x_m(k-1) + \beta_{m,n} \left( x_n(k-1) - x_m(k-1) \right) + \alpha \left( rand - 0.5 \right)$,    (7)

where $\alpha$ is the random movement factor.
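The attractiveness and movement rules of Equations (5)-(7) can be sketched as follows (a Python illustration, not the paper's MATLAB code; the default parameter values are the illustrative ranges discussed in Section 2.1):

```python
import numpy as np

def attractiveness(x_m, x_n, beta_max=1.0, beta_min=0.2, gamma=1.0):
    """Equations (5)-(6): attractiveness decays exponentially with the
    squared Cartesian distance between fireflies m and n."""
    r2 = float(np.sum((x_m - x_n) ** 2))
    return (beta_max - beta_min) * np.exp(-gamma * r2) + beta_min

def move(x_m, x_n, alpha=0.5, rng=None):
    """Equation (7): firefly m steps towards the brighter firefly n,
    plus a random walk scaled by the movement factor alpha."""
    rng = rng if rng is not None else np.random.default_rng()
    beta = attractiveness(x_m, x_n)
    return x_m + beta * (x_n - x_m) + alpha * (rng.random(x_m.size) - 0.5)
```

With `alpha = 0` the move is purely deterministic attraction, so each step strictly shrinks the distance to the brighter firefly.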
2.1 Self-adaptive firefly algorithm
In the firefly algorithm described above, each firefly of the swarm travels around the problem space taking into account the results obtained by others, while still applying its own randomized moves. The random movement factor $\alpha$ strongly affects the performance of the firefly algorithm: a large value of $\alpha$ drives the movement to explore distant regions of the search space, while a smaller value of $\alpha$ tends to facilitate local search. In this paper, $\alpha$ is dynamically tuned in each iteration. The influence of neighbouring solutions is controlled by the attractiveness of Equation (5), which can be adjusted by modifying the two parameters $\beta_{\max}$ and $\gamma$. In general, $\beta_{\max}$ is chosen from 0 to 1, and two limiting cases can be defined: when $\beta_{\max} = 1$ the algorithm performs a cooperative local search, with the brightest firefly strongly determining the positions of the other fireflies, especially in its neighbourhood, while with $\beta_{\max} = 0$ it reduces to a non-cooperative, distributed random search.
On the other hand, the value of $\gamma$ determines the variation of attractiveness with increasing distance from the communicating firefly. Setting $\gamma = 0$ corresponds to no variation, i.e. constant attractiveness; conversely, letting $\gamma \to \infty$ results in attractiveness close to zero, which is again equivalent to a completely random search. In general, $\gamma$ in the range 0 to 10 can be chosen for better performance. Indeed, the choice of these parameters affects the final solution and the convergence of the algorithm.
In the self-adaptive method, each firefly with $nd$ decision variables is defined to encompass $nd + 3$ FA variables, where the last three variables represent $\alpha_m$, $\beta_{\min,m}$ and $\gamma_m$. A firefly can thus be represented as

$x_m = \left( x_m^1, x_m^2, \ldots, x_m^{nd}, \alpha_m, \beta_{\min,m}, \gamma_m \right)$.    (8)

Each firefly, carrying the solution vector together with its own $\alpha_m$, $\beta_{\min,m}$ and $\gamma_m$, undergoes the whole search process. During each iteration, the FA produces better offspring through Equation (5) and Equation (7) using the parameters carried in the firefly of Equation (8), thereby enhancing the convergence of the algorithm.
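The self-adaptive encoding of Equation (8) can be sketched in Python as follows; `nd` and the sampling ranges for the three appended parameters are illustrative assumptions, taken from the ranges quoted in Section 2.1:

```python
import numpy as np

nd = 5  # number of decision variables (illustrative)

def make_firefly(rng):
    """Equation (8): append the self-adaptive parameters alpha_m,
    beta_min_m and gamma_m as three extra genes after the nd
    decision variables, so the search tunes them alongside the solution."""
    x = rng.random(nd)                  # decision variables in [0, 1)
    alpha_m = rng.uniform(0.0, 1.0)     # random movement factor
    beta_min_m = rng.uniform(0.0, 1.0)  # minimum attractiveness
    gamma_m = rng.uniform(0.0, 10.0)    # absorption coefficient
    return np.concatenate([x, [alpha_m, beta_min_m, gamma_m]])
```

Because the three parameters travel with each firefly, the movement rule of Equation (7) updates them exactly as it updates the decision variables.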
3. Proposed strategy
The TCSCs are to be installed at appropriate locations with optimal parameters that minimize the transmission loss for better utilization of the existing power system. This paper develops a methodology that performs TCSC placement with the objective of minimizing the transmission loss and improving the bus voltage profile.
3.1 Objective function
The objective is to minimize the transmission loss, which can be evaluated from the power flow solution and written as

$\min P_{loss} = \sum_{l=1}^{nl} G_l \left( V_i^2 + V_j^2 - 2 V_i V_j \cos \theta_{ij} \right)$,    (9)

where $P_{loss}$ is the real power loss, $G_l$ is the conductance of the $l$-th line, $V_i$ and $V_j$ are the voltage magnitudes at buses $i$ and $j$, $\theta_{ij}$ is the voltage angle difference between buses $i$ and $j$, and $nl$ is the total number of transmission lines.
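The loss evaluation of Equation (9) can be sketched in Python; this is a simplified stand-in that assumes the per-line quantities have already been obtained from a power flow solution:

```python
import math

def line_loss(G_l, V_i, V_j, theta_ij):
    """One term of Equation (9): real power loss of line l, given its
    conductance, the terminal voltage magnitudes, and the angle difference."""
    return G_l * (V_i**2 + V_j**2 - 2.0 * V_i * V_j * math.cos(theta_ij))

def total_loss(lines):
    """Equation (9): sum the losses over all nl lines.
    lines: iterable of (G_l, V_i, V_j, theta_ij) tuples from a power flow."""
    return sum(line_loss(*line) for line in lines)
```

Since $V_i^2 + V_j^2 - 2 V_i V_j \cos\theta_{ij} \ge (V_i - V_j)^2 \ge 0$, each term is non-negative for a line with positive conductance.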
3.2 Problem constraints
The equality constraints are the load flow equations, given by

$P_{Gi} - P_{Di} = P_i(V, \theta)$,    (10)

$Q_{Gi} - Q_{Di} = Q_i(V, \theta)$,    (11)

where $P_{Gi}$ and $Q_{Gi}$ represent the real and reactive power generation at the $i$-th generator, respectively, and $P_{Di}$ and $Q_{Di}$ represent the real and reactive power drawn by the load at bus $i$, respectively. The inequality constraints are

$V_i^{\min} \le V_i \le V_i^{\max}$ for PQ buses,    (12)

$Q_{Gi}^{\min} \le Q_{Gi} \le Q_{Gi}^{\max}$ for PV buses.    (13)
The TCSC constraint is

$-0.8\, X_{line} \le X_{TCSC} \le 0.2\, X_{line}$ p.u.,    (14)

where $X_{line}$ is the reactance of the transmission line and $X_{TCSC}$ is the compensation reactance of the TCSC. The firefly of the proposed TCSC placement problem is defined as

$x_m = \left\{ \left( L_1, X_{TCSC,1}, \alpha_m, \beta_{\min,m}, \gamma_m \right), \ldots, \left( L_M, X_{TCSC,M}, \alpha_m, \beta_{\min,m}, \gamma_m \right), \ldots, \left( L_N, X_{TCSC,N}, \alpha_N, \beta_{\min,N}, \gamma_N \right) \right\}$,    (15)

where $L_M$ is the line location of the $M$-th TCSC.
The self-adaptive firefly algorithm searches for the optimal solution by maximizing the light intensity $I_m$, which plays the role of the fitness function in other stochastic optimization techniques. The light intensity function can be obtained by transforming the power loss function into an $I_m$ function as

$\max I_m = \dfrac{1}{1 + P_{loss}}$.    (16)

A population of fireflies is randomly generated and their intensities are calculated using Equation (16). Based on the light intensity, each firefly moves towards the optimal solution through Equation (7), and the iterative process continues till the algorithm converges.
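The loss-to-intensity transform of Equation (16) is a one-liner; a minimal Python sketch:

```python
def light_intensity(p_loss):
    """Equation (16): map a non-negative loss (MW) to an intensity in
    (0, 1], so that minimizing the loss maximizes the intensity."""
    return 1.0 / (1.0 + p_loss)
```

The transform is strictly decreasing, so ranking fireflies by intensity is equivalent to ranking candidate placements by loss.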
4. Simulation results and discussions
The effectiveness of the proposed self-adaptive firefly algorithm (SAFA) for optimally placing TCSC devices to minimize the transmission loss and improve the bus voltage profile has been implemented and tested on the IEEE 14-bus test system using MATLAB 7.5. The line data and bus data for the test system are taken from the Power Systems Test Case archive (Online). Simulations are carried out considering two and three TCSCs, and the results in terms of the locations, the TCSC parameters and the resulting loss are presented in Table 1. It is observed from this table that the placement of TCSCs, irrespective of the number of devices, reduces the loss. When three TCSCs are installed, the real power loss is reduced from 13.3663 MW to 13.2903 MW. Bus voltages are given in Table 2. It is observed from the results that three TCSC devices are adequate to achieve the desired goal of minimizing the loss while keeping the bus voltages within the allowable range, which makes the approach suitable for practical implementation.
Table 1. Optimal locations, parameters and real power loss

  No. of TCSC   Real Power Loss (MW)   Locations (Line No.)   X_TCSC (p.u.)
  0             13.3663                -                      -
  2             13.2915                17, 15                 -0.799, -0.800
  3             13.2903                15, 16, 17             -0.800, -0.800, -0.798
Table 2. Bus voltages of the IEEE 14-bus system

  Bus No              1     2     3     4     5     6     7     8     9     10    11    12    13    14
  |V| without TCSC    1.06  1.045 1.01  1.014 1.018 1.07  1.057 1.09  1.05  1.046 1.05  1.055 1.05  1.037
  |V| with 2 TCSC     1.06  1.045 1.01  1.014 1.019 1.07  1.055 1.09  1.05  1.049 1.05  1.057 1.05  1.049
  |V| with 3 TCSC     1.06  1.045 1.01  1.014 1.017 1.07  1.055 1.09  1.05  1.053 1.05  1.057 1.05  1.048
4.1 Parameters of the self-adaptive firefly algorithm
The population size $N$ for the IEEE 14-bus system has been taken as 30, and the maximum number of iterations as 200. The random movement factor $\alpha$ is tuned during each iteration, with an initial value of 0.5. The attractiveness parameter is varied from $\beta_{\min}$, taken as 0.2, to $\beta_{\max}$, taken as 1. The absorption parameter $\gamma$ is initialized to 1 and tuned in every iteration. It should be pointed out that the performance of any meta-heuristic optimization algorithm depends strongly on the tuning of its parameters: a small change in a parameter may result in a large change in the solution. SAFA efficiently tunes all these parameters to obtain the global or near-global optimal solution.
It is clear from the above discussion that, compared to other optimization algorithms, the proposed SAFA is able to reduce the loss to the lowest possible value by optimally placing the TCSCs and determining their parameters. In addition, the self-adaptive nature of the algorithm avoids repeated runs for fixing the optimal FA parameters by a trial-and-error procedure and provides the best possible parameter values.
5. Conclusion
This paper attempted to identify the optimal placement of TCSCs and their parameters with a view to minimizing the transmission loss in the power system network using SAFA. Simulation results are presented for the IEEE 14-bus system. The results show that the identified TCSC locations minimize the transmission loss in the network and improve the bus voltage profile. With the proposed algorithm it is possible for the utility to place TCSC devices in the transmission network such that proper planning and operation can be achieved with minimum system losses.
References
Apostolopoulos, T. and Vlachos, A. Application of the firefly algorithm for solving the economic emissions load dispatch problem. International Journal of Combinatorics, 2011, 2011(523806), 23.
Chandrasekaran, K. and Simon, S.P. Network and reliability constrained unit commitment problem using binary real coded firefly algorithm. Electrical Power & Energy Systems, 2012, 43(1), 921-932.
Gerbex, S., Cherkaoui, R. and Germond, A.J. Optimal location of multi-type FACTS devices in a power system by means of genetic algorithms. IEEE Transactions on Power Systems, 2001, 16(3), 537-544.
Gyugyi, L. Unified power flow controller concept for flexible AC transmission systems. IEE Proceedings C, 1992, 139(4), 323-331.
Hingorani, N.G. and Gyugyi, L. Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems. IEEE Press, New York, 2000.
Larsen, E.V., Clark, K., Miske, S.A. and Urbanek, J. Characteristics and rating considerations of thyristor controlled series compensation. IEEE Transactions on Power Delivery, 1994, 9, 992-1000.
Mathur, R.M. and Varma, R.K. Thyristor-based FACTS Controllers for Electrical Transmission Systems. IEEE Press, Piscataway, 2002.
Saravanan, M., Slochanal, S.M.R., Venkatesh, P. and Abraham, J.P.S. Application of particle swarm optimization technique for optimal location of FACTS devices considering cost of installation and system loadability. Electric Power Systems Research, 2007, 77, 276-283.
Senthil Kumar, M. and Renuga, P. Application of UPFC for enhancement of voltage profile and minimization of losses using fast voltage stability index (FVSI). Archives of Electrical Engineering, 2012, 61(2), 239-250.
Niknam, T., Azizipanah-Abarghooee, R. and Roosta, A. Reserve constrained dynamic economic dispatch: a new fast self-adaptive modified firefly algorithm. IEEE Systems Journal, 2012, 6(4).
Yang, X.S. Firefly algorithms for multimodal optimization. Stochastic Algorithms: Foundations and Applications, 2009, 5792, 169-178.
Yang, X.S. Firefly algorithm, stochastic test functions and design optimisation. International Journal of Bio-inspired Computation, 2010, 2, 78-84.
Yang, X.S. Nature-Inspired Meta-Heuristic Algorithms. Luniver Press, Beckington, 2010.
Power Systems Test Case, The University of Washington Archive. [Online]. Available: http://www.ee.washington.edu/research/pstca/, 2000.