Smart Response Surface Models using Legacy Data for Multidisciplinary Optimization of Aircraft Systems
IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE)
e-ISSN: 2278-1684, p-ISSN: 2320-334X, Volume 14, Issue 1 Ver. VI (Jan.-Feb. 2017), PP 14-27
www.iosrjournals.org
DOI: 10.9790/1684-1401061427
Smart Response Surface Models using Legacy Data for
Multidisciplinary Optimization of Aircraft Systems
Ramesh Gabbur
Scientist, Aeronautical Development Agency, Bangalore, India; Doctoral Student at the International Institute for Aerospace Engineering & Management, Jain University, Bangalore.
Abstract: One of the key challenges in multidisciplinary design is the integration of design and analysis methods of various systems in a design framework. To achieve the Multidisciplinary Design Optimization (MDO) goals of aircraft systems, high fidelity analyses are required from multiple disciplines like aerodynamics, structures or performance. High fidelity analyses such as Computer-Aided Design and Engineering (CAD/CAE) techniques, complex computer models and computation-intensive analyses/simulations are often used to accurately study the system behaviour towards design optimization. Due to the high computational cost and numerical noise associated with these analyses, they cannot be used effectively. The use of surrogates or Response Surface Models (RSM) is one approach in multidisciplinary design optimization to avoid the computation barrier and to take care of artificial minima due to numerical noise. This paper brings out a method based on the use of “Smart Response Surface Models” to generate surrogate models, with their validated subspaces, in the design space around the point of interest with the use of legacy data for MDO. The method has been evaluated on three test cases, which are created based on the High Speed Civil Transport (HSCT) Multidisciplinary Design Optimization Test Suite.
Keywords: Optimization, DOE, Surrogate Modelling, Multidisciplinary Design, Aircraft
I. Introduction
Present generation multi-role combat aircraft, with fly-by-wire controls and state-of-the-art weapon systems, are complex in nature and need specialists. The complexity of combat aircraft mandates that design teams have multidisciplinary experience across the entire aircraft design, with core expertise in their respective domains. Today aerospace design and development is not only multidisciplinary but also global in nature, with design and engineering teams deployed around the world [1]. It requires a high level of technical and techno-managerial expertise across various engineering disciplines to cater for very stringent reliability, safety and performance requirements. This enables the design and development of an optimal multidisciplinary system in a collaborative and cohesive integrated environment of various engineering domains.
Multidisciplinary system design is a complex, computationally intensive process that combines discipline analysis with design-space search and decision making. The decision making is based on engineering judgment and is greatly assisted by computer automation. Towards this, systems engineering provides a holistic approach for the integrated design and development of aircraft and its associated systems [2]. One of the key challenges in collaborative design is the integration of design and analysis methods of various systems in a systems engineering framework. With the advances in Computer-Aided Design and Engineering (CAD/CAE) techniques, complex computer models and computation-intensive analyses/simulations (discipline analysis) are often used to accurately study the system behaviour towards design improvements. This design optimization process normally requires a large number of iterations before the optimal solution is identified. Design optimization with high fidelity design tools is computationally very expensive and time consuming. The use of approximation models or surrogates to replace the expensive high fidelity computer analysis in Multidisciplinary Design Optimization (MDO) is a natural approach to avoid the computation barrier and to take care of numerical noise [3]. Typically, approximation models or surrogates of high fidelity design tools are used to reduce this computational effort and time during the multidisciplinary design optimization process. This paper brings out a method based on the use of “Smart Response Surface Models” to generate surrogate models in the design space around the point of interest with the use of legacy data for MDO.
II. Response Surface Models (RSM)
Complex aircraft engineering design problems are solved using high fidelity analysis/simulation software tools. The high computational cost associated with these analyses and simulations prohibits them from being used as performance measurement tools in the optimization of combat aircraft design. Another major drawback in using high fidelity analysis is numerical noise, which occurs as a result of the incomplete convergence of iterative processes, the use of adaptive numerical algorithms, round-off errors, and the discrete representation of continuous physical objects (fluids or solids) [4]. The use of surrogates or Response Surface Models (RSM) to replace the expensive high fidelity computer analysis in MDO is a natural approach to avoid the computation barrier and to take care of artificial minima due to numerical noise. Renaud and Gabriele developed Response Surface Modelling (RSM) of multidisciplinary systems during concurrent subspace optimizations (CSSOs) [5][6]. Korngold and Gabriele addressed discrete multidisciplinary problems using the RSM [7].
An expensive high fidelity computer analysis can be represented as a blackbox function. In its simplest form, the high fidelity analysis tool takes a vector x as input and gives Y as the output, as shown in Figure 1. Representing it mathematically, with limits on the design space:

$Y = f(x)$ where $x \in R^n$  (1)
$x_{min} < x < x_{max}$ defines the design space

[Figure 1: blackbox representation of the high fidelity analysis tool]

This function would be replaced by a polynomial based surrogate model. A typical second order surrogate model is shown below:

$y = \beta_0 + \sum_{i=1}^{n} \beta_i x_i + \sum_{i=1}^{n} \beta_{ii} x_i^2 + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \beta_{ij} x_i x_j$  (2)
$x'_{min} < x' < x'_{max}$ defines the model subspace

where $\beta_i$, $\beta_{ii}$ and $\beta_{ij}$ are the regression coefficients, x is the input vector and y is the output. The subspace of the surrogate model is defined by the side constraints $x'_{min}$ and $x'_{max}$.
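As a concrete illustration of Eq. (2), the following is a minimal sketch, in Python, of building the quadratic feature matrix and fitting the regression coefficients by ordinary least squares. The code and toy data are ours for demonstration, not the paper's implementation.

```python
# Illustrative sketch: fitting the second order surrogate of Eq. (2)
# by ordinary least squares. Toy data only; not the paper's code.
import numpy as np

def quadratic_features(X):
    """Columns [1, x_i, x_i^2, x_i*x_j (i<j)] for each sample row of X."""
    n_pts, n = X.shape
    cols = [np.ones(n_pts)]
    cols += [X[:, i] for i in range(n)]                    # linear terms
    cols += [X[:, i] ** 2 for i in range(n)]               # pure quadratic terms
    cols += [X[:, i] * X[:, j]                             # cross terms, j > i
             for i in range(n - 1) for j in range(i + 1, n)]
    return np.column_stack(cols)

def fit_quadratic_rsm(X, y):
    """Regression coefficients beta minimising ||A beta - y||^2."""
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return beta

# Toy usage: recover a known quadratic from samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = 3.0 + 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]
beta = fit_quadratic_rsm(X, y)
print(quadratic_features(X[:3]) @ beta)   # predictions at the first 3 points
```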
III. Smart Response Surface Models
Smart response surface models is a methodology that develops a response surface model and identifies the subspace for which the model is valid. In the conventional methods of implementing response surface models (RSM) for multidisciplinary design, the model subspace is defined prior to generating the model and the accuracy of the model is not predefined [4]. The accuracy of the generated RSM is assumed to be acceptable a priori. An algorithm for developing surrogate models to a pre-defined accuracy was developed by Gabbur and Ramchand and is described in [3]. As the accuracy requirement becomes more stringent there is a reduction of the model subspace with a concomitant increase in the number of iterations. The algorithm creates a knowledge database of function calls and surrogate models. Legacy or historical data, if available, would also form a part of this knowledge database. This database reduces the number of times a high fidelity analysis/simulation software tool is run for model generation. The methodology has been tested on five different optimization test functions and the results have been brought out in [3].
IV. Algorithm
The flow chart for smart RSM is shown in Figure 2. The smart RSM comprises six processes repeated iteratively to generate the validated surrogate models with their design space; a compact end-to-end sketch follows the list. The iterative steps are as follows:
1. Identifying the design space of the model
2. Design of Experiments
3. Analysis of DOE points
4. Generation of Response surface models based on DOE
5. Model Validation
6. Design space reduction
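Before each step is detailed below, the following minimal end-to-end sketch shows this six-step loop on a one-dimensional toy problem. Everything here is an assumption for illustration: the toy analysis function, the plain random sampling standing in for the DOE, the single check point and the tolerances are ours, not the paper's implementation.

```python
# Minimal end-to-end sketch of the six-step smart RSM loop on a 1-D toy
# problem; all names, tolerances and the sampler are illustrative only.
import numpy as np

def toy_analysis(x):                      # stand-in for a high fidelity tool
    return np.sin(3.0 * x) + x ** 2

def smart_rsm_1d(lo, hi, x_ref=0.0, tol_pct=1.0, max_iter=20):
    rng = np.random.default_rng(1)
    for _ in range(max_iter):
        X = rng.uniform(lo, hi, 15)       # step 2: DOE (plain random here)
        Y = toy_analysis(X)               # step 3: analyse the DOE points
        beta = np.polyfit(X, Y, 2)        # step 4: quadratic response surface
        x_chk = min(max(x_ref + 0.5 * (hi - lo), lo), hi)   # step 5: check point
        err = np.polyval(beta, x_chk) - toy_analysis(x_chk)
        if 100.0 * abs(err / toy_analysis(x_chk)) < tol_pct:
            return beta, (lo, hi)         # validated model and its subspace
        w = (hi - lo) / 4.0               # step 6: halve the domain around x_ref
        lo, hi = max(lo, x_ref - w), min(hi, x_ref + w)
    raise RuntimeError("no validated model within the iteration budget")

beta, subspace = smart_rsm_1d(-2.0, 2.0)
print("validated subspace:", subspace)
```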
4.1 Identifying the design space of the model
The domain of the optimization problem is defined as the initial design space for the surrogate model.
Mathematically it can be represented as
Minimize $F(X)$ where $X \in R^N$  (3)
$X_{i,min} \le X_i \le X_{i,max}$ for $i = 1, 2, 3, \ldots, N$  (4)
Subject to
$G(x) \ge 0$ where $G \in R^m$
$H(x) = 0$ where $H \in R^p$

Then the initial design space of the model, for N dimensions, is defined by equation (4).

Figure 2: Smart RSM Algorithm
4.2 Design of Experiments
Experimental design techniques, which were initially developed for physical experiments, are finding considerable use for the design of computer experiments/analyses. In Design of Experiments (DOE) techniques developed for the analysis of physical experiments, random variation is accounted for by spreading the sample points out in the design space and by taking multiple data points (replicates). Among various classical experimental designs, Central Composite Design (CCD) and alphabetical optimal designs, especially D-optimal designs, are widely used [8, 9]. Sacks et al. state that the classical techniques of experimental blocking, replication, and randomization are irrelevant when it comes to deterministic computer experiments [10]. Therefore sample points should be chosen to fill the design space for computer experiments. Koch, Mavris and Mistree [11] investigate the use of a modified central composite design (CCD) that combines half-fractions of an inscribed CCD with a face-centered CCD to distribute points more evenly throughout the design space. Koehler and Owen [12] describe several Bayesian space-filling designs, including maximum entropy designs, mean squared-error designs, minimax and maximin designs, Latin Hypercube, randomized orthogonal arrays, and scrambled nets. Widely used space-filling sampling methods are Orthogonal Arrays (OA) and Latin Hypercube Design (LHD). An OA can generate a sample with a better space-filling property than an LHD; however, the generation of an OA sample is more complicated [13, 14]. In addition, OA demands strict level classification for each variable, which can bring difficulty in real design. In real design, not all combinations of variable levels lead to realistic design solutions, and some may cause the analysis or simulation to crash, which is not uncommon in finite element analysis. In that case, the engineers must manually adjust variables to appropriate values, deviating from the defined levels, and the properties of the OA might thus be undermined [15]. Therefore LHD is used as the DOE method for this algorithm.
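As a tooling aside, an LHD sample over box-bounded design variables can be generated, for example, with SciPy's quasi-Monte Carlo module; the dimensions and bounds below are arbitrary placeholders, not values from the paper.

```python
# Sketch: Latin Hypercube sampling over box-bounded design variables
# using SciPy's qmc module; bounds and sizes are placeholders.
import numpy as np
from scipy.stats import qmc

lower = np.array([0.0, 10.0, -1.0])           # per-variable lower bounds
upper = np.array([1.0, 20.0,  1.0])           # per-variable upper bounds

sampler = qmc.LatinHypercube(d=3, seed=0)     # one stratum per point and variable
unit_sample = sampler.random(n=50)            # 50 points in the unit hypercube
points = qmc.scale(unit_sample, lower, upper) # map onto the design space
print(points[:3])
```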
4.3 Analysis of DOE points
Design analysis is carried out on the points selected by the DOE, and the values of the objective function are evaluated through the computation-intensive analysis and simulation processes.
4.4 Generation of Response Surface Models based on DOE
Based on the design points analysed above, a quadratic response surface model is fitted to the data using the usual least squares method. As an initial test, R² and adjusted R² are the metrics used to estimate and understand the quality of the RSM.
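For reference, both quality metrics can be computed directly from the residuals, as in this short sketch (our formulation of the standard definitions); here p is the number of regression terms excluding the intercept.

```python
# Sketch: R^2 and adjusted R^2 from residuals (standard definitions).
import numpy as np

def r_squared(y, y_pred):
    ss_res = np.sum((y - y_pred) ** 2)             # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)         # total sum of squares
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(y, y_pred, p):
    n = len(y)                                     # number of data points
    return 1.0 - (1.0 - r_squared(y, y_pred)) * (n - 1) / (n - p - 1)

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.9])
print(r_squared(y, y_hat), adjusted_r_squared(y, y_hat, p=1))
```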
4.5 Model Validation
The surrogate model is validated for acceptable fit in two stages. The first stage is to check for low frequency errors (gross misfit of the model). This is carried out around a check point in the design space. The check point is generated in such a way that if the model validation fails then the point lies in the new reduced domain. This point is then perturbed to check for low frequency error. The direction of perturbation is such that the perturbed point also lies in the reduced design space. This is shown in Figure 3 for a two dimensional function.
Figure 3: Low frequency validation points
A negative perturbation is given in the x1 direction to get point 1 and a positive perturbation is given along x2 to get point 2. The perturbation value d is 50% of the reduced domain width for each input. The number of points needed for carrying out the low frequency check is (k+1), where k is the dimension of the input vector. The error (residual) between the actual value and the predicted value is calculated at each of these points. This value should be less than a predetermined value (typically around 1%) for the model to be acceptable. Once the model is validated for low frequency error it is checked for high frequency error. For carrying out the high frequency validation the above process is repeated with a change in perturbation value: d is changed to 5% of the reduced design space. The error residuals are calculated as

$\text{Error Residuals} = \frac{Y' - Y}{Y} \times 100$

where $Y$ = actual value and $Y'$ = predicted value.
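The check can be sketched as below (an illustration, not the paper's code): the same routine serves both stages, with the perturbation fraction set to 0.5 for the low frequency check and 0.05 for the high frequency check. For simplicity all perturbations here are positive; the paper chooses the sign so the perturbed points stay inside the reduced space.

```python
# Sketch of the two-stage validation: perturb a check point by a fraction
# of the domain width per input and compare model against the true analysis.
import numpy as np

def residual_pct(y_true, y_pred):
    return 100.0 * (y_pred - y_true) / y_true      # the Error Residuals formula

def validate(model, analysis, x_check, width, frac, tol_pct=1.0):
    """Evaluate (k+1) points: the check point plus one perturbation per
    input; accept the model if every residual is within tol_pct."""
    points = [x_check]
    for i in range(len(x_check)):
        p = x_check.copy()
        p[i] += frac * width[i]                    # d = frac * domain width
        points.append(p)
    return all(abs(residual_pct(analysis(p), model(p))) < tol_pct
               for p in points)

model = lambda x: x[0] ** 2 + x[1]                 # toy surrogate
analysis = lambda x: x[0] ** 2 + x[1] + 0.001      # toy "true" analysis
x0, w = np.array([0.4, 0.4]), np.array([0.2, 0.2])
print(validate(model, analysis, x0, w, frac=0.5),  # low frequency check
      validate(model, analysis, x0, w, frac=0.05)) # high frequency check
```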
4.6 Design Space Reduction
For a complex analysis tool/function, a single quadratic model may not satisfactorily represent the analysis tool over the full design space. When the surrogate model does not accurately represent the analysis tool, the design space needs to be reduced. A selective reduction of the design space is employed: the strategy is to halve the domain of each design variable for which the error residual is more than 1%. The design space is thus reduced (zoomed in) around the reference point. Mathematically, if $x_{ri}$ is the reference point for the i-th input and $x_{li}$ and $x_{ui}$ are its lower and upper limits respectively, then the new lower limit $x'_{li}$ and upper limit $x'_{ui}$ of the design space are

$x'_{li} = x_{ri} - \frac{x_{ui} - x_{li}}{4}$  (5)
$x'_{ui} = x_{ri} + \frac{x_{ui} - x_{li}}{4}$ for $i = 1, 2, \ldots, k$  (6)
if $x'_{li} < x_{li}$ then $x'_{li} = x_{li}$ and $x'_{ui} = x_{li} + \frac{x_{ui} - x_{li}}{2}$  (7)
if $x'_{ui} > x_{ui}$ then $x'_{ui} = x_{ui}$ and $x'_{li} = x_{ui} - \frac{x_{ui} - x_{li}}{2}$  (8)

so that when a reduced bound falls outside the original domain it is clipped to the original bound and the halved width is preserved.
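A sketch of this reduction step follows. It assumes the reading of Eqs. (7)-(8) given above, i.e. that a clipped bound is compensated so the halved width is preserved, and that only variables whose residual exceeds the threshold are zoomed.

```python
# Sketch of the selective design space reduction of Eqs. (5)-(8):
# halve the domain around the reference point for each variable whose
# error residual exceeds the threshold, clipping at the original bounds.
import numpy as np

def reduce_domain(x_ref, lo, hi, residual_pct, tol_pct=1.0):
    lo, hi = lo.astype(float), hi.astype(float)
    for i in np.where(np.abs(residual_pct) > tol_pct)[0]:
        w = (hi[i] - lo[i]) / 4.0
        new_lo, new_hi = x_ref[i] - w, x_ref[i] + w   # Eqs. (5) and (6)
        if new_lo < lo[i]:                            # clip low, Eq. (7)
            new_lo, new_hi = lo[i], lo[i] + 2.0 * w
        elif new_hi > hi[i]:                          # clip high, Eq. (8)
            new_lo, new_hi = hi[i] - 2.0 * w, hi[i]
        lo[i], hi[i] = new_lo, new_hi
    return lo, hi

lo, hi = np.array([0.0, 0.0]), np.array([4.0, 4.0])
x_ref = np.array([0.5, 2.0])
print(reduce_domain(x_ref, lo, hi, np.array([3.0, 0.2])))
# -> variable 0 is zoomed (and clipped to [0, 2]); variable 1 is kept.
```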
Figure 4 shows design space reduction for a two dimensional design space (k=2). In Figures 4(a) and 4(b) both domains are reduced, while in Figure 4(c) the domain of only one variable is reduced. It is proposed to test the algorithm on a higher dimensional (10-dimensional or higher) realistic design problem. High Speed Civil Transport (HSCT) data is chosen to create a synthetic problem of 25 design variables. The HSCT data used for the optimization test problem consists of one objective function and 66 inequality constraints. Noisy functions are created for the objective function and constraints to prove the effectiveness of the smart RSM algorithm in filtering out numerical noise for use in multidisciplinary design optimization.
V. High Speed Civil Transport (HSCT)
The High Speed Civil Transport (HSCT) is an example of an extremely challenging aircraft design, where the disciplines are highly coupled and results from high fidelity design analysis are critical to establishing the feasibility of the aircraft design. The design concept of the HSCT is to fly more than 300 passengers at speeds in excess of 1,500 miles per hour. The aircraft was developed by NASA and its industry partners as a next generation supersonic passenger jet of the future [16]. The HSCT aircraft configuration is shown in Figure 5.

Figure 4: Design Space Reduction
Figure 5: HSCT Configuration
The Multidisciplinary Analysis and Design (MAD) Center for Advanced Vehicles uses the HSCT configuration design as a test case for the evaluation of new design optimization methodologies and techniques developed in-house [17]. The test case is described as minimizing the takeoff gross weight (TOGW) of a High Speed Civil Transport (HSCT) aircraft with a range of 5500 nautical miles, designed to cruise at Mach 2.4 and ferry 250 passengers. TOGW was selected as the objective function for the optimization problem since it represents a composite measure of merit for the aircraft as a system. TOGW is expressed as the sum of the dry weight (i.e., the weight of the aircraft including payload, but without fuel) and the fuel weight. The dry weight of the aircraft is correlated to the initial acquisition cost of the aircraft, and the fuel weight represents the yearly recurring costs of aircraft operations [18]. From a multidisciplinary perspective, the choice of the gross weight as the objective function incorporates structural and aerodynamic considerations. The structural considerations are directly related to the aircraft empty weight, while the aerodynamic performance dictates the drag, hence the thrust required to overcome the drag, and in turn the fuel weight required for the mission. The HSCT design is described by twenty five design variables and sixty eight constraints. Twenty four of these design variables describe the geometry of the aircraft and can be divided into five categories: wing planform, airfoil shape, tail area, nacelle placement and fuselage shape. One variable, mission fuel, defines the cruise mission. Details of the twenty five design variables are given in Table 1.
Table 1 HSCT Design Variables
Design Variable No Description
1 Wing root chord
2 LE break point, x
3 LE break point, y
4 TE break point, x
5 TE break point, y
6 LE wing tip, x
7 Wing tip chord
8 Wing semi-span
9 Max t/c location
10 Airfoil t/c at root
11 Airfoil t/c at LE break
12 Airfoil t/c at tip
13 Fuselage restraint 1,x
14 Fuselage restraint 1,r
15 Fuselage restraint 2,x
16 Fuselage restraint 2,r
17 Fuselage restraint 3,x
18 Fuselage restraint 3,r
19 Fuselage restraint 4,x
20 Fuselage restraint 4,r
21 Nacelle 1, y
22 Nacelle 2, y
23 Mission fuel
24 Vertical tail area
25 Horizontal tail area
Sixty eight design constraints define geometry, system performance and aerodynamic performance and are
given in Table 2.
Table 2: HSCT Constraints
Index Constraint
1 Fuel volume ≤ 50% wing volume
2 Wing root TE ahead of tail LE
3-20 Wing chord ≥ 7.0 ft
21 LE break within wing semi-span
22 TE break within wing semi-span
23 Root chord t/c ratio ≥ 1.5%
24 LE break chord t/c ratio ≥ 1.5%
25 Tip chord t/c ratio ≥ 1.5%
26-30 Fuselage restraints
31 Wing spike prevention
32 Nacelle 1 inboard of nacelle 2
33 Nacelle 2 inboard of semi-span
34 Range ≥ 5500 nautical miles
35 CL at landing speed ≤ 1
36-53 Section CL at landing ≤ 2
54 Landing angle of attack ≤ 12°
55-58 Engine scrape at landing
59 Wing tip scrape at landing
60 TE break scrape at landing
61 Rudder deflection ≤ 22.5°
62 Bank angle at landing ≤ 5°
63 Tail deflection at approach ≤ 22.5°
64 Takeoff rotation to occur ≤ Vmin
65 Engine-out limit with vertical tail
66 Balanced field length ≤ 11000 ft
67-68 Mission segments: thrust available ≥ thrust required
Multiple configurations of the HSCT were analysed over a period of time at the NASA Multidisciplinary Analysis and Design (MAD) Center for Advanced Vehicles. The data from these analyses has been collated and is part of the NASA Multidisciplinary Design Optimization Test Suite [17]. It consists of analyses of 2,490 HSCT configurations. The data from each analysis is represented in a matrix of 19 rows and 5 columns, and each of the 2,490 matrices is separated by a blank line. The breakup of the 95 numbers taken row by row from each 19 x 5 matrix is as follows:
Numbers 1 to 25 are the x vector of 25 design variables which describes each HSCT aircraft configuration. The 25 design variables are scaled to the order of 1 to 10.
Number 26 is the wing bending material weight.
Number 27 is the takeoff gross weight (TOGW(x)). The objective function, TOGW(x), is not scaled.
Numbers 28 to 95 represent the sixty eight constraints. The constraints are unscaled and are of order 100-1000 (with negative numbers indicating design infeasibility).
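A small reader for this format might look as follows; the file name is a placeholder, and the slicing simply follows the layout described above.

```python
# Sketch: loading the HSCT legacy data set described above. The path is
# a placeholder; the layout follows the 19 x 5 matrix description in the text.
import numpy as np

def load_hsct(path="hsct_data.txt"):
    with open(path) as f:
        blocks = f.read().strip().split("\n\n")   # one block per configuration
    data = np.array([[float(v) for v in block.split()] for block in blocks])
    assert data.shape == (2490, 95), "2,490 matrices of 19 x 5 = 95 numbers"
    X = data[:, 0:25]      # numbers 1-25: the 25 scaled design variables
    w_bend = data[:, 25]   # number 26: wing bending material weight
    togw = data[:, 26]     # number 27: unscaled objective TOGW(x)
    G = data[:, 27:95]     # numbers 28-95: the 68 unscaled constraints
    return X, w_bend, togw, G
```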
VI. Response Surface Model
Using the legacy data of the HSCT, polynomial based surrogate models (cubic response surfaces) are created for the objective function and the 68 constraints. For generating the response surface models the 68 constraints are scaled. The scaling procedure used for the constraints is

$\text{Scaled Constraint Value} = \frac{\text{Maximum value} - \text{Actual value}}{\text{Limit}}$

where Limit is defined as the difference between the maximum value and the minimum value of the constraint. The surrogate model is of the form given below:

$y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i=1}^{k} \beta_{iii} x_i^3 + \sum_{i=1}^{k} \sum_{j=i}^{k} \beta_{ij} x_i x_j$  (9)
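The scaling and the cubic fit of Eq. (9) can be sketched as follows (our code, for illustration). Note that the $j = i$ cross term in Eq. (9) duplicates the pure square, so the sketch starts the cross terms at $j = i + 1$; also, a full cubic in 25 variables has several hundred terms, so a real fit needs correspondingly many legacy points.

```python
# Sketch: constraint scaling and the cubic response surface of Eq. (9),
# fitted by ordinary least squares. Illustrative, not the paper's code.
import numpy as np

def scale_constraint(g):
    """Scaled value = (max - actual) / (max - min), per the text."""
    return (g.max() - g) / (g.max() - g.min())

def cubic_features(X):
    n_pts, k = X.shape
    cols = [np.ones(n_pts)]
    cols += [X[:, i] for i in range(k)]            # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]       # square terms
    cols += [X[:, i] ** 3 for i in range(k)]       # cube terms
    cols += [X[:, i] * X[:, j]                     # cross terms, j > i (the
             for i in range(k) for j in range(i + 1, k)]   # j = i case is the square)
    return np.column_stack(cols)

def fit_cubic_rsm(X, y):
    beta, *_ = np.linalg.lstsq(cubic_features(X), y, rcond=None)
    return beta
```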
Latin Hypercube Design (LHD) was used as the DOE strategy to generate 2000 experimental design points for the 25 design variables. The data points nearest to the 2000 experimental design points were selected from the HSCT data and used for generating the cubic response surface model. The cubic response surface model is fitted to the data using the least squares method. For the objective function TOGW, the graph of predicted vs actual value is shown in Figure 6. The residual/error is calculated for the objective function for all 2,490 data points. The error is normalised with the standard deviation σ and estimated as follows:

$\text{Normalised Residuals} = \frac{\text{actual value} - \text{predicted value}}{\sigma}$
Figure 6: Plot of Objective Function
Figure 7: Normalized Residual Error Plot for Objective Function

The normalised error for the objective function with +3σ and -3σ limits is shown in Figure 7. The model statistics R² and adjusted R² for the objective function are 0.991093 and 0.982265 respectively. Figures 8 and 9 indicate the spread of R² and adjusted R² for the 66 constraints.

Figure 8: R² for 66 constraints
VII. HSCT Design Optimization - Test Case
A synthetic design optimization test problem is created based on the available HSCT data. Mathematically, the HSCT optimization test problem is stated below:

Minimize $F(x)$ where $x \in R^{25}$
$x_{i,min} \le x_i \le x_{i,max}$ for $i = 1, 2, 3, \ldots, 25$
Subject to
$G_j(x) \ge 0$ for $j = 1, 2, \ldots, 66$

where F(x) is the takeoff gross weight, x is the input vector, $G_j(x)$ are the nonlinear constraints, and $x_{min}$ and $x_{max}$ are the lower and upper bounds for the design variables.
Three optimization problems based on the HSCT are defined for validation of the Smart Response Surface Models algorithm. The first test case is based on the cubic model (called the smooth function) detailed in Section VI; Test Case 1 is represented in Figure 10. In the second test case noise is added to the smooth function. The amount of noise added is based on the mean and sigma of the respective model/function; this is represented in Figure 11. In the third test case the Smart Response Surface Models algorithm is used to generate surrogate models for optimization. The smart RSM interfaces between the optimizer and the noisy function: it generates quadratic models with their move limits, which are used by the optimizer. Figure 12 shows the interaction between the optimizer, the smart RSM, and the noisy function, along with the RSM and I/O databases. It is expected that the smart RSM would effectively filter numerical noise and the optimization would converge with fewer iterations.
The following nomenclature is used for defining the optimization problem statements:
$f_t(x)$: cubic fit of the weight function to the HSCT legacy data set
$g_t(x)$: cubic fit of the constraints to the HSCT legacy data set
$f_{nt}(x)$: noisy version of $f_t(x)$, $f_{nt}(x) = f_t(x) + \epsilon$ where $\epsilon = RN(\mu, \sigma)$
$g_{nt}(x)$: noisy version of $g_t(x)$, $g_{nt}(x) = g_t(x) + \epsilon$ where $\epsilon = RN(\mu, \sigma)$
$f_{qn}(x)$: quadratic approximation of $f_{nt}(x)$ within move limits
$g_{qn}(x)$: quadratic approximation of $g_{nt}(x)$ within move limits
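The noisy test functions can be sketched directly from these definitions. The stand-in smooth function and the μ, σ values below are placeholders, since the paper sets the noise level from the mean and sigma of the respective model.

```python
# Sketch of the noisy test functions: additive Gaussian noise RN(mu, sigma)
# on top of the smooth cubic fit. f_t, mu and sigma here are placeholders.
import numpy as np

rng = np.random.default_rng(42)

def f_t(x):                                # stand-in for the cubic TOGW fit
    return 3.3e5 + 1.0e3 * np.sum(x ** 2)

def f_nt(x, mu=0.0, sigma=500.0):
    """Noisy objective: f_nt(x) = f_t(x) + eps, eps ~ RN(mu, sigma)."""
    return f_t(x) + rng.normal(mu, sigma)

x = np.full(25, 0.5)
print(f_t(x), f_nt(x))
```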
The gradient based optimizer CSFQP is used to solve the three test cases. Gradients are calculated by CSFQP using its built-in finite difference function. The stopping criterion for the optimizer is a d0 norm of less than $10^{-6}$. The starting point for the optimization, i.e. the initial design point, is feasible for all the constraints and is identical for all three test cases.
Figure 10: Test Case 1
7.1 Test Case 1
Problem Statement

Minimize $f_t(x)$ where $x \in R^{25}$
$x_{i,min} \le x_i \le x_{i,max}$ for $i = 1, 2, 3, \ldots, 25$
Subject to
$g_t(x) \ge 0$ where $g_t \in R^{66}$

CSFQP was run for the above design problem. The starting point for the optimizer was an initial feasible design point. An optimal point was reached after 72 iterations; the d0 norm after 72 iterations was $7.817 \times 10^{-7}$. The number of function calls made by the optimizer for the objective function was 1931. The value of the weight function was 332601.83. This value of the objective function is used as the reference value for comparing the other two test cases.
7.2 Test Case 2
Problem Statement

Minimize $f_{nt}(x)$ where $x \in R^{25}$
$x_{i,min} \le x_i \le x_{i,max}$ for $i = 1, 2, 3, \ldots, 25$
Subject to
$g_{nt}(x) \ge 0$ where $g_{nt} \in R^{66}$

Figure 11: Test Case 2

CSFQP was run for the above design problem with the initial design point. The value of the objective function at the initial design point is $6.21725 \times 10^5$. The optimizer failed to converge after 10 iterations. The value of the objective function after 10 iterations is $6.21417 \times 10^5$; it had not reduced much over the 10 iterations. The objective function was called 844 times, and the number of function calls to each of the 66 constraints ranged between 840 and 849. The d0 norm and the step size at the 10th iteration were 3.6058990e+00 and 1.4210854e-14 respectively. The optimizer failed as the step size was too small. As seen here, with a noisy function a gradient based optimizer fails to converge to the optimum due to numerical noise.
7.3 Test Case 3 - Smart RSM
In Test Case 3 the smart RSM interfaces between the optimizer and the noisy function. It generates validated quadratic RSMs (with their subspaces) for the objective function and the 66 constraints to be used by the optimizer. The optimization parameters are similar to the earlier test cases. The process was repeated twice with two different acceptable modelling errors of 10% and 5%. The smart RSM also interacts with two databases, the input/output database and the RSM database. At the start of the optimization process there is no data in either database; during the optimization process the databases get populated, and their contents are checked to reuse existing RSMs and avoid redundant function calls.
Figure 12: Test Case 3

Acceptable modelling error 10%:
  Number of iterations: 36
  d0 norm: $2.67651 \times 10^{-7}$
  Number of function calls for the objective function: 1636
  Objective function $f_{nt}(x)$: 374942.05
  % error = $\frac{f_{nt}(x) - f_t(x)}{f_{nt}(x)} \times 100$ = 12.7%

Acceptable modelling error 5%:
  Number of iterations: 41
  d0 norm: $9.390395 \times 10^{-7}$
  Number of function calls for the objective function: 2426
  Objective function $f_{nt}(x)$: 366345.560329
  % error = $\frac{f_{nt}(x) - f_t(x)}{f_{nt}(x)} \times 100$ = 10.1%
VIII. Results
Objective functions/design analysis tools of the form $f_t(x)$ are smooth and simple functions; typically these are empirical methods or simple equations used during the pre-conceptual design stage. For conceptual/detail design, high fidelity design analysis tools are preferred; these are complex, invariably have numerical noise, and are similar to $f_{nt}(x)$. During a multidisciplinary optimization process with these design analysis tools, the optimization either fails due to non-convergence or requires a large number of iterations. Smart RSM presents a way to overcome these issues.
Table 4 shows the number of iterations, the calls to the objective function, and the value reached after optimization for the test cases. Due to the numerical noise introduced in Test Case 2 the optimization process failed after 10 iterations without convergence. It is also observed that the value of the objective function did not reduce appreciably, from the initial value of $6.21725 \times 10^5$ to $6.21417 \times 10^5$, during the optimization process. The number of function calls by the optimizer for the objective function was 844 until non-convergence.
Table 4: Test Case Results
Test case          Iterations    No. of objective function calls    Objective function optimal value (kg)
Test Case 1        72            1932                                332061.8
Test Case 2        Failed (10)   844                                 621417.3
Test Case 3 (10%)  36            1636                                374942.1
Test Case 3 (5%)   41            2426                                366345.5
With the implementation of the smart RSM in Test Case 3, quadratic approximations with move limits are created and used by the optimizer. The number of objective function calls by the smart RSM for acceptable modelling errors of 10% and 5% is 1636 and 2426 respectively. For an acceptable modelling error of 10%, Table 6 gives the details of the function calls for the objective function and the 66 constraints. A complex aircraft design has a large number of constraint criteria that need to be met; in the case of the HSCT there are 66 design constraints. These design constraints are estimated with the use of multidisciplinary analysis, which increases the computation effort and time. With the implementation of the smart RSM on the design tools, both for the objective function and the constraints, there would be a saving in computation time while circumventing the problems associated with numerical noise. Further, with appropriate database management for the RSMs, a design knowledge base could also be developed. Table 5 shows the number of models generated for the objective function and the constraints for the 10% modelling error.
Table 5: Number of surrogate models generated
Function RSM created Function RSM created
Objective 3 Constraint 34 9
Constraint 1 3 Constraint 35 14
Constraint 2 2 Constraint 36 11
Constraint 3 8 Constraint 37 11
Constraint 4 11 Constraint 38 9
Constraint 5 12 Constraint 39 11
Constraint 6 15 Constraint 40 13
Constraint 7 7 Constraint 41 12
Constraint 8 14 Constraint 42 7
Constraint 9 15 Constraint 43 8
Constraint 10 15 Constraint 44 3
Constraint 11 14 Constraint 45 3
Constraint 12 11 Constraint 46 15
Constraint 13 3 Constraint 47 1
Constraint 14 16 Constraint 48 3
Constraint 15 6 Constraint 49 18
Constraint 16 11 Constraint 50 0
Constraint 17 3 Constraint 51 3
Constraint 18 3 Constraint 52 3
Constraint 19 10 Constraint 53 15
Constraint 20 1 Constraint 54 20
Constraint 21 3 Constraint 55 11
Constraint 22 13 Constraint 56 1
Constraint 23 11 Constraint 57 1
Constraint 24 1 Constraint 58 1
Constraint 25 5 Constraint 59 1
Constraint 26 12 Constraint 60 1
Constraint 27 15 Constraint 61 0
Constraint 28 10 Constraint 62 21
Constraint 29 9 Constraint 63 1
Constraint 30 15 Constraint 64 1
Constraint 31 15 Constraint 65 13
Constraint 32 14 Constraint 66 0
Constraint 33 12
Table 6: Function calls details
Function No of calls Function No of calls
Objective 1636 Constraint 34 5210
Constraint 1 2118 Constraint 35 8067
Constraint 2 1117 Constraint 36 6817
Constraint 3 5177 Constraint 37 9099
Constraint 4 6468 Constraint 38 8972
Constraint 5 6549 Constraint 39 9106
Constraint 6 9030 Constraint 40 10601
Constraint 7 4030 Constraint 41 12949
Constraint 8 8795 Constraint 42 6437
Constraint 9 9147 Constraint 43 5886
Constraint 10 8764 Constraint 44 2227
Constraint 11 8672 Constraint 45 2725
Constraint 12 6234 Constraint 46 10535
Constraint 13 2374 Constraint 47 704
Constraint 14 9331 Constraint 48 1643
Constraint 15 5832 Constraint 49 13543
Constraint 16 8164 Constraint 50 947
Constraint 17 2362 Constraint 51 1620
Constraint 18 2227 Constraint 52 2227
Constraint 19 7206 Constraint 53 8963
Constraint 20 704 Constraint 54 19637
Constraint 21 2227 Constraint 55 8428
Constraint 22 10189 Constraint 56 704
Constraint 23 7195 Constraint 57 704
Constraint 24 704 Constraint 58 704
Constraint 25 3016 Constraint 59 704
Constraint 26 6610 Constraint 60 704
Constraint 27 9494 Constraint 61 946
Constraint 28 5670 Constraint 62 19843
Constraint 29 5105 Constraint 63 704
Constraint 30 8986 Constraint 64 704
Constraint 31 9786 Constraint 65 8599
Constraint 32 7846 Constraint 66 946
Constraint 33 7221
Acknowledgements
The author gratefully acknowledges the support and guidance provided by Dr S. Korthu of the Aeronautical Development Agency and Dr K. Ramchand of IIAEM, Jain University, for this research publication.
References
[1]. D. R. Towill. Man-machine interaction in aerospace control systems. The Radio and Electronic Engineer, 50(9):447-458, September 1980.
[2]. M. Price, S. Raghunathan, and R. Curran. An integrated systems engineering approach to aircraft design. Progress in Aerospace Sciences, 42:331-376, 2006.
[3]. Ramesh Gabbur and K. Ramchand. Expert systems based response surface models for multidisciplinary design optimization. In Progress in Systems Engineering, volume 1089 of Advances in Intelligent Systems and Computing, pages 527-535. Springer International Publishing, 2015.
[4]. A. Giunta, J. M. Dudley, R. Narducci, B. Grossman, R. T. Haftka, W. H. Mason, and L. T. Watson. Noisy aerodynamic response and smooth approximations in HSCT design. In Proceedings of the 5th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, AIAA 94-4376, pages 1117-1128, September 1994.
[5]. J. E. Renaud and G. A. Gabriele. Improved coordination in non-hierarchical system optimization. AIAA Journal, 31:2367-2373, 1993.
[6]. J. E. Renaud and G. A. Gabriele. Approximation in non-hierarchical system optimization. AIAA Journal, 32:198-205, 1994.
[7]. J. C. Korngold and G. A. Gabriele. Multidisciplinary analysis and optimization of discrete problems using response surface methods. Journal of Mechanical Design, 119:427-433, 1997.
[8]. T. J. Mitchell. An algorithm for the construction of D-optimal experimental designs. Technometrics, 16(2):203-210, 1974.
[9]. R. Unal, R. A. Lepsch, and M. L. McMillin. Response surface model building and multidisciplinary optimization using D-optimal designs. AIAA Paper 98-4759, pages 405-411, 1998.
[10]. J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn. Design and analysis of computer experiments. Statistical Science, 4:409-435, 1989.
[11]. P. N. Koch, D. Mavris, and F. Mistree. Multi-level, partitioned response surfaces for modeling complex systems. In 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, AIAA 98-4858, pages 1954-1968, 1998.
[12]. J. R. Koehler and A. B. Owen. Computer experiments. In Handbook of Statistics, volume 13, pages 261-308. Elsevier Science, New York, 1996.
[13]. G. Taguchi, Y. Yokoyama, and Y. Wu. Taguchi Methods: Design of Experiments. American Supplier Institute, Allen Park, Michigan.
[14]. A. Owen. Orthogonal arrays for computer experiments, integration, and visualization. Statistica Sinica, 2:439-452, 1992.
[15]. G. Gary Wang. Adaptive response surface method using inherited Latin hypercube design points. ASME Journal of Mechanical Design, 125:210-220, June 2003.
[16]. NASA's High-Speed Research Program. http://oea.larc.nasa.gov/PAIS/HSR-Overview2.html
[17]. Test suite problem 2.1: HSCT approximation challenge. http://mdob.larc.nasa.gov/mdo.test/class2prob1.html
[18]. Anthony A. Giunta. Aircraft Multidisciplinary Design Optimization using Design of Experiments Theory and Response Surface Modeling Methods. PhD thesis, Virginia Polytechnic Institute and State University, 1997.