ICMMT Proceedings (Final)
PROCEEDINGS
INTERNATIONAL CONFERENCE
ON MATERIAL SCIENCE, MECHANICS
AND TECHNOLOGY
(ICMMT2022)
DECEMBER 23-24, 2022
INDORE, INDIA
ORGANIZED BY
SPONSORS
Preface
One of the major objectives of the present International Conference is to provide a platform
for scientists, technocrats and researchers to share and exchange views on the opportunities
and challenges offered by the ever-increasing technological advancements taking place in the
world.
The conference has been successful in achieving these aims. There has been an excellent
response, with submissions of articles that will be of immense value to the world of technology.
We sincerely acknowledge and express our gratitude to the reviewers for their great
contribution in selecting the worthy articles and facilitating the process of publication.
(Dr. Chinmay Pandya)
(M.B.B.S., P.G. Dip., M.R.C. Psych – London)
Pro Vice Chancellor, Dev Sanskriti Vishwavidyalaya
Message of Good Wishes
It is a matter of great joy, pride and honour that, under the aegis of your renowned institution,
a laudable effort is being made to convene the first International Conference on Material
Science, Mechanics and Manufacturing (ICMMT).
It is a truth of experience that the intellect is neither omniscient nor omnipotent. Its usefulness
lies only in this: that we put our abilities to good use in making this world more beautiful and
prosperous. Human dignity rests in this alone. The glimpse of optimism you have shown through
this international conference, by way of solution-oriented, practical and scientific thinking, is
fruitful, inspiring, worthy of acceptance and admirable. The decision to publish a souvenir to
preserve the memories of this inspiring event from beginning to end is also most welcome.
Yugrishi Pandit Shriram Sharma Acharya used to say that knowledge is infinite and truth is
boundless; the limited human intellect can grasp only a fraction of it. As we proceed on the
step-by-step journey of progress, we draw ever closer to the truth. We should therefore remain
ready to welcome every new ray of truth. Clinging stubbornly to prejudices, and accepting as
true only what has always been said, is no mark of wisdom. Realistic thinking, reliance on
propriety and observance of proper limits: these will be, and are increasingly becoming, the
foundation of the scientific dharma, the dharma of the twenty-first century and of the future.
We hope this conference will delight and gladden all the students, faculty members and officers
of your university family, along with the general public, granting them new thought and new
consciousness, and will provide the sustenance to keep forging ever-new dimensions on the
touchstone of scientific reasoning and fact. We once again extend our heartfelt congratulations
for breathing new life into the fields of physical and mechanical science, and we wish that its
inspiration and tradition may remain everlasting.
Maj. Gen. J. S. Syali (VSM Retd)
Secretary & Director General
Institution of Engineers (India)
Message
Dr. Sunil Kumar Gupta
Vice Chancellor
RGPV Bhopal
Message
It gives me immense pleasure to hear that Sushila Devi Bansal College, Indore
is organizing the First International Conference on “Material Science,
Mechanics and Technology (ICMMT 2022)” on December 23-24, 2022.
The purpose and goal of the education we provide should be to narrow the
widening gap between the educational curriculum and the knowledge and skill
requirements of industry. I hope the conference will provide a meaningful
platform for all stakeholders, and for young and talented scientists to present
their research work and to interact with distinguished scientists of the country.
I am sure that the deliberations made by the scientists during the conference
will be beneficial for the advancement of science and technology.
It is a laudable effort of the organizers to choose a very relevant and active
field of research; conferences such as this are a vital part of quality-improvement
activities for all stakeholders, including scientists, faculty, academia and the
student community.
I extend my warm greetings once again to the students, teachers and management
of the institution.
Er. Anil Bansal
Chairperson
Bansal Group of Institutes
Message
I am sure that the conference will be elevating and will have great participation
and treasured outcomes for all the participants. The conference would result in
boundless innovative ideas and would pave the way for new inventions, leading to
an improved society.
My hearty congratulations to all the organizers and best wishes for great success
of the conference.
Er. Sunil Bansal
Secretary
Bansal Group of Institutes
Message
It gives me great joy to announce ICMMT 2022. Our sole objective is to
provide a platform to discuss and exchange high quality academic research ideas
amongst researchers, engineers, academicians, industrial professionals and
practitioners of Material Science, Mechanics and Technology.
Several experts from the thematic areas have also confirmed that they will share
their years of knowledge and wisdom with the participants during the plenary
lecture sessions.
We believe it is going to be an excellent opportunity for all the participants.
Dr. Sanjay Jain
Joint Secretary
Bansal Group of Institutes
Message
This prestigious conference is being hosted with the very purpose of knowledge
exchange and the planting of the seeds of innovation.
Er. Parth Bansal
Managing Director
Bansal Group of Institutes
Message
I wish the conference all success in the form of good research outcomes.
Prof. H.B. Khurasia, FIE
Group Director
Bansal Group of Institutes
Message
Looking at the growing needs of this stream, this conference will definitely
prove advantageous, as it ensures a common platform for all faculty members,
research scholars, PG scholars, industry personnel and practitioners from around
the globe.
I extend my best wishes to all entrants and pray for the success of ICMMT 2022.
Dr. Premanand S Chauhan
Director
SDBC, Indore, IN
Message
This conference bears all the hallmarks of success. This is due to the great team
work of the international and national advisory committee members and the
organizing team members, to whom I owe a deep debt of gratitude. I am grateful
to our associates, the Institution of Engineers (India) and the M.P. Council of
Science & Technology, for their generous support in organizing this conference.
I pay my sincere regards to IOP Publishing for bringing out the proceedings. My
thanks to the members of the technical committee and the reviewing committee for
helping in the review of papers. Thanks to the promotional committee, whose help
is deeply appreciated.
Thanks to the printing committee for bringing out the fantastic souvenir which
shares the highlights from ICMMT 2022, gives sketches of keynote speakers, and
photographs of all major volunteers.
Last but not least, I sincerely thank the authors for contributing good papers
and for attending this conference. I sincerely hope that all the
participants will actively deliberate in the conference and come out with
recommendations for emerging trends in Materials, Mechanics and Technology.
International Advisory Committee
Prof. Raghu Echempati Professor, Wayne State University, USA
Prof. Brian Norton Professor and DIT President, Dublin Energy Lab, Ireland
Prof. Anjali Awasthi Professor & Research Chair, Concordia University, Canada
Mr. P.B. Sajeev Vice President, Bharti Airtel International (Netherlands), Kenya
Dr. Rubi Chakraborty Scientist, HOYA Opt. Corp. & Entrepreneur at Biena Tec, USA
Dr. Noel Perara Associate Professor & Director, CMINC State University, USA
Dr. Madhavi Singh, Associate Professor, Penn State College of Medicine, USA
National Advisory Committee
Prof. V K Jain Retd. Professor, Indian Institute of Technology Kanpur, INDIA
Dr. Suwarna Torgal Associate Professor, IET Devi Ahilya University, Indore, INDIA
Dr. Jawar Singh Associate Professor, Indian Institute of Technology Patna, INDIA
Dr. Prasoon Kumar Singh Associate Professor, Indian Institute of Technology (ISM), Dhanbad, INDIA
Mr. Avinash Mishra General Manager, HR Godrej Consumer Products Ltd., INDIA
Organizing Committee
Chief Patron
Prof. Sunil Kumar Gupta Vice Chancellor, RGPV Bhopal, IN
Organizing Secretary
Dr. Atul Agarwal Associate Professor, Dept. of CSE, SDBC, Indore, IN
Technical Chair (Technology)
Dr. Keshav Rawat Associate Professor, Central University, Haryana, IN
Ms. Nidhi Bhandari Assistant Professor, Applied Sc. & Hum. SDBCT, Indore, IN
Ms. Jagrati Singh Thakur Assistant Professor, Dept. of CSE, SDBC Indore, IN
Mr. Deepshikha Dadhich Assistant Professor, Applied Sc. & Hum. SDBCT, Indore, IN
Publicity Chair
Mr. Vijay Mishra Head Counselling, SDBC Indore, IN
Keynote Speech
Abstract
The purpose of this talk is to introduce the audience to some of the well-known
lightweight materials and their applications in mobility and other industries. Some
of the research issues on the materials and their joining methods will be discussed.
It is believed that this talk will expand the knowledge of undergraduate and
graduate engineering students, as well as that of practitioners, through a unique
and ever-growing topic being presented at this meeting. This is a basic,
introductory talk meant to develop the thinking skills and hands-on experience
needed by college students, practicing engineers and senior technicians.
Keynote Speech
Abstract
Industry and academia are two parallel, very important aspects of the world, and
their interdependency is well known. Still, a big gap is visible and felt by
analysts. Advancements in technology, tools, materials, and methodology to
enhance efficiency are the trends in industry. These must be implemented in
academia to produce skilled professionals who can accommodate the ever-changing
scenario of the fast-growing industry. A collaborative approach in the working of
both is the utmost requirement of today’s era. Several initiatives have already
been taken globally; the only need is to make people aware of these.
Therefore, this talk will highlight some of the recent advances in the field of
materials science and its impact on several market sectors, outline key global
initiatives, and discuss collaboration opportunities between industry and academia.
Keynote Speech
Abstract
This keynote speech focuses on the causes of various forms of environmental pollution in terms of
natural resource use, urbanization, and technological development during the Industrial Revolutions
IR1.0, IR2.0, and IR3.0. Non-renewable energy sources, namely coal, oil, and gas, are the most
significant contributors to climate change. A survey says that they account for over 75 percent of
worldwide greenhouse gas emissions and approximately 90 percent of all carbon dioxide (CO2)
production. Greenhouse gas emissions blanket the Earth, trapping the sun's heat, and are at the root
of global warming and climate change. This causes rising temperatures, storms, drought, rising
oceans, loss of various species, inadequate food, many health risks, and poverty. It has become
clear that our pattern of life is not sustainable, as we extract and exploit natural resources and
systems more quickly than they can recover. Addressing this necessitates much knowledge and
cooperation among countries, companies, and people. This keynote comprises important elements of
sustainability, sustainable materials and manufacturing, and industrial ecology. It focuses on the
United Nations' Sustainable Development Goal 12 (UN SDG 12), which aims to develop sustainable
consumption and production patterns and ensure their attractiveness, viability, and practicability.
Sustainable consumption requires services and related products that respond to basic needs and
bring people a better quality of life. This type of consumption lessens the use of natural
resources and toxic materials.
Moreover, it reduces the pollution released by a product during its manufacture and service life
and conserves resources for future generations. Sustainable production is the process of
developing and creating a product and its service that are environmentally friendly. It conserves
and preserves energy resources, is economically viable, safe and healthy for employees and
consumers, and socially and creatively fulfilling. The most recent response to this challenge is
the idea of the circular economy (CE), defined by some as a decoupling of economic growth
from resource consumption by keeping materials at their highest quality in a closed loop. A
circular economy calls for an extension of the product life cycle and the subsequent reuse and
recycling of materials and products.
Furthermore, reducing greenhouse gas emissions and environmental pollution, as well as energy,
water, and raw material consumption, are indispensable for SDG 12. This talk also explains how
education is crucial to achieving SDG 12. Quality education contributes to reducing waste
generation by presenting and practicing the four 'Rs': Reduce, Reuse, Recycle and Recover. Keeping
the public informed and educated provides the necessary tools for living harmoniously with nature
and leading sustainable lifestyles. Developing countries' scientific and technological capacities
can be strengthened through education efforts, moving towards more sustainable consumption and
production patterns.
Keynote Speech
Abstract
Global advancement in technology, driven by increased competition in the world
economy, has shifted the focus from traditional product development methodology
to rapid manufacturing approach. Consequently, this transformation has rendered
a myriad of manufacturing opportunities to address the incumbent challenge of
reducing product development cycle time economically whilst delivering the highest
quality as well as satisfying the continuous flux of customer requirements. As a
result, manufacturing industries are adopting a new paradigm of technology known
as Additive Manufacturing (AM). AM relates to a rapidly growing number of
automated machines or processes in which physical objects are directly produced
from computer aided design (CAD) data by selectively adding material in the form
of thin cross sectional layers without the use of tooling and human intervention.
Furthermore, the American Society for Testing and Materials (ASTM)
International Committee F42 established a classification of AM processes into
seven categories: binder jetting (BJ), directed energy deposition (DED), material
extrusion (ME), material jetting (MJ), powder bed fusion (PBF), sheet lamination
(SL), and vat polymerization (VP). In particular, the ME category is attributed to
Fused Deposition Modelling (FDM) pioneered by Stratasys in 1989 and is the
focus of this talk. Further, it will cover a classification of AM processes while
providing technology examples as well as comparisons in terms of raw material
state, material, advantages and disadvantages. Additionally, this presentation
will focus on the contributions made to the AM field, with particular attention
given to studies on FDM part quality improvement, along with some suggestions
for future research.
Table of Contents

Sr. No. | Paper Id | Title (Authors) | Page No.
1 | 001 | Communication Challenges of Connected Vehicles with Integration of Air Space & Ground with IoT (Vishal Bairagi, Namrata Bhatt, Anuradha Deolase & Anjali Sharma Maltare) | 01-06
2 | 003 | Comparative Analysis of RCC Deck Slab and Steel Bridge Design with Load Analysis Using Staad Pro (Mohit Rathore & Kavita Golghate) | 07-12
3 | 4502 | High Performance Concrete from Fly Ash, GGBS, and Silica Fume to Extend Initial Setting Time for Long Transportation of Concrete (Ayush Joshi & Kishor Patil) | 13-21
4 | 7306 | Machine Learning to Predict Student Performance using Voting Classifier (S Shri Goud & S Agrawal) | 22-26
5 | 006 | A Taxonomy and Review on Machine Learning Based Approaches for Stock Market Forecasting (Ruchi Sharma & Pooja Hardiya) | 27-32
6 | 002 | R.C.C. Shell Structure Design of Selected Shape (Abdullah Faruque Pathan & Kavita Golghate) | 33-38
Communication Challenges of Connected Vehicles with Integration of Air Space & Ground with IoT
Vishal Bairagi, Namrata Bhatt, Anuradha Deolase & Anjali Sharma Maltare
Abstract
The automobile industry has entered a new age with the introduction of IoT, since it is changing how we interact
with our vehicles. The automotive sector adopts new technologies such as the Internet of Things (IoT), artificial
intelligence (AI) and machine learning (ML), embedded and cellular mobile networks, sensor technology, cloud
computing, and data analytics to provide better, faster and more secure machines and vehicles. The automotive
industry may now leverage a variety of innovative technologies for speedier expansion. The benefits of using IoT
technology in the automotive industry include predictive car maintenance, smart infrastructure for drivers,
improved engineering, connected vehicles, in-vehicle infotainment, and increased safety. Connected vehicles are
one of the most recent and advanced topics in the automotive industry. This industry also uses embedded systems
integrated with IoT concepts to provide better efficiency in connected vehicles. The automotive industry uses
artificial intelligence and machine learning concepts to provide fully automatic vehicles based on the features of
advanced driver assistance systems (ADAS). These concepts are used to develop a comfortable driving experience
and to minimize the incidence and severity of the automotive accidents that cannot otherwise be averted, preventing
deaths and injuries. Telematics systems, which combine telecommunications and information processing, are used
to provide better connectivity to our vehicles. We can monitor our vehicles comprehensively: engine management
systems, location, speed, TPMS, fleet management systems, and overall vehicle health. In this paper, we focus on
wireless technologies and the potential challenges in connected vehicles that can affect progress in the automotive
industry. We focus on the difficulties and contemporary wireless solutions for vehicle-to-sensor, vehicle-to-vehicle,
vehicle-to-Internet, and vehicle-to-road connectivity. We also point out potential areas for future research in
connected vehicles.
Keywords: Internet of Things, Connected Vehicles, Telematics, ADAS, Cloud Computing, Artificial Intelligence,
Machine Learning.
Copyright © ICMMT2022
Corresponding Author’s E-mail ID: vishalbairagi975@gmail.com
1. Introduction
The Internet of Things (IoT) encompasses billions of things, such as electronics, sensor technology, computational
technology, and cloud technology, that help to design simple and user-friendly applications. There are numerous
application areas in IoT where we can solve real-time problems, such as smart farming, smart grids, smart supply
chain management and many more [1]. IoT also plays a major role in the automotive industry, because many
advanced mechanisms are used there to develop and design products. These products are developed with the help
of advanced robots that work on artificial intelligence and machine learning-based concepts. These robots and
machines are designed to perform specific tasks effectively. Robots are the key mechanism that brought about so
many revolutions in the production industry, especially in production rate [2].
As we know, IoT is a system of devices that exchange data through a connection to the Internet. In the automotive
industry, it allows devices such as electronic control units, actuators and sensors to communicate with one another
and to share information with other devices (vehicles) connected to the Internet. Connected and modern vehicles
are based on IoT and embedded systems. With the help of IoT, we introduce autonomous vehicles based on the
ADAS (Advanced Driver Assistance System) feature. Autonomous vehicles are becoming more popular in the
automobile sector as automation and artificial intelligence technology develop [3].
The method of communication between electronic devices embedded in a vehicle, such as the engine management
system, active suspension, central locking, air conditioning, and airbags, is the Controller Area Network (CAN),
which is used to improve safety and security in the automobile industry.
For example, a spark must ignite the combustion chamber of a spark-ignition engine at precisely the right moment;
timing is crucial here. To make sure this happens, the ignition system "communicates" over the network with the
vehicle's engine management unit, which selects the best ignition timing for power and fuel economy [4].
The transformation of the automotive industry from mechanical engineering to mechatronic products is another
significant challenge. It is possible to cut down on connectors and cabling by fusing the ideas of networks with
mechatronic modules. Several network technologies are already widely used to connect the electronic control
units (ECUs). Fig. 1 explains how all the sensors and modules are interfaced with the ECU. In a vehicle, the many
ECUs communicate with the help of the CAN (Controller Area Network) protocol. The automotive industry widely
uses the CAN protocol because it has the simplest architecture and is based on a two-wire protocol [5].
Fully automatic and semi-automatic connected cars adopt IoT and its applications. More than 1.5 billion connected
cars are expected by the end of 2025, which is possible with the help of advanced technology. Many companies
manufacture autonomous cars by combining mechanical and electronic systems. Such a car is connected to an IoT
gateway with the help of C-V2X (cellular vehicle-to-everything). Connected cars also help to manage traffic and
speed up transportation: as the cars are connected with each other, they share information about traffic and road
profiles with drivers so that the right path to the destination can be selected [6].

2. Features of Connected Vehicles
The features of advanced and connected cars are as follows [7]:
2.1 ADAS
ADAS is an advanced feature of connected vehicles and plays a major role in autonomous and self-driving cars. It
works with lidar sensors, radar sensors, and camera-based technology to assist the driver with forward collision
warning, adaptive cruise control (ACC), parking assistance, self-parking, and many more applications that are
possible with the help of the ADAS system. ADAS provides a 360-degree view, so objects in blind spots within
the vehicle's radar range are found and detected easily. The ADAS sensors are integrated with the ECU of the
vehicle. Using ADAS features reduces the chance of accidents.
Fig. 2 shows how ADAS works. Cars 1 and 2 maintain a proper distance from each other, so no warning is issued
and the distance is kept automatically. Cars 2 and 3, however, do not maintain a proper distance, so a collision
warning is shown to the driver together with a suggestion to restore a proper distance to the other vehicle. A
collision warning is issued when the radar detects an object ahead at less than the prescribed distance. If the
object is very close, the emergency braking system (EBS) is applied, as shown for cars 3 and 4; this function
avoids the chance of an accident.
2.2 Driver Monitoring System
The driver monitoring system is both an alert system for the driver and a tracking system for driver alertness.
There are many cases where the vehicle is moving but the driver is not focused on driving: distracted, sleeping
at the wheel, or using a mobile phone while driving. To avoid this type of incident, a driver monitoring system
is used. It is camera-based: the camera detects the driver's face, and with artificial intelligence and machine
learning algorithms the driver's condition can be predicted. All of this is made possible by an AI/ML-based IoT
system.
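As an illustration of how such a camera-based check might work, the sketch below computes the widely used eye
aspect ratio (EAR) over eye landmarks and raises an alert after a sustained eye closure. The landmark input,
the 0.25 threshold, and the 48-frame window are our assumptions; a real system would obtain the landmarks from
a face-landmark model.

```python
# Sketch of a common camera-based drowsiness cue: the eye aspect ratio
# (EAR). Landmarks would come from a face-landmark model (not shown);
# the threshold and frame window are illustrative assumptions.
from math import dist

EAR_THRESHOLD = 0.25      # assumed "eyes nearly closed" threshold
CLOSED_FRAMES_ALERT = 48  # assumed ~2 s of closure at 24 fps

def eye_aspect_ratio(eye):
    # eye = six (x, y) landmarks around one eye, in the usual ordering:
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small EAR = closed eye.
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

class DrowsinessMonitor:
    def __init__(self):
        self.closed_frames = 0

    def update(self, left_eye, right_eye):
        # Returns True when the driver should be alerted.
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2
        self.closed_frames = self.closed_frames + 1 if ear < EAR_THRESHOLD else 0
        return self.closed_frames >= CLOSED_FRAMES_ALERT
```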
2.3 Fuel Monitoring System
Fuel tampering is a major problem for vehicle owners, but with the help of connected vehicles this issue can be
reduced by 70-80%. In this system a sensor is mounted in the fuel tank. The sensor continuously monitors the fuel
consumed by the vehicle and the amount of fuel left in the tank, and the system continuously calculates this ratio
using machine learning algorithms. The fuel monitoring system maintains the fuel record and delivers reports
directly to the vehicle owner with the help of the telematics system.
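A minimal sketch of this monitoring idea follows. The tampering threshold and the report format are illustrative
assumptions rather than details of a production telematics system.

```python
# Sketch of the fuel-monitoring idea above: track tank level against
# distance travelled, report consumption, and flag a sudden drop while
# parked as possible tampering. Thresholds are illustrative assumptions.
from dataclasses import dataclass

TAMPER_DROP_L = 2.0  # assumed litres lost while parked that trigger an alert

@dataclass
class FuelReading:
    level_l: float       # fuel remaining in litres (from the tank sensor)
    odometer_km: float   # odometer reading at the same instant

def analyze(prev, curr):
    used = prev.level_l - curr.level_l
    moved = curr.odometer_km - prev.odometer_km
    if moved == 0 and used > TAMPER_DROP_L:
        return f"ALERT: {used:.1f} L lost while parked (possible tampering)"
    if moved > 0 and used > 0:
        return f"consumption: {100 * used / moved:.1f} L/100 km"
    return "ok"

print(analyze(FuelReading(40.0, 12000.0), FuelReading(35.5, 12060.0)))  # driving
print(analyze(FuelReading(35.5, 12060.0), FuelReading(30.0, 12060.0)))  # parked drop
```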
3. Related work
S. Gupta et al. [3] discuss the impact of AI & trust on the R&M programs for the automotive industry, broken
down into four pillars: in-vehicle experience, connected vehicles, auto manufacturing, and autonomous
vehicles, with examples and use cases. E. K. Mindarta et al. [6] demonstrate how the use of IoT in this sector
can improve customer experience by enhancing human-machine interactions, particularly when using IoT-
enabled vehicles. Their work presents sound advice on how to use IoT in this industry most effectively while
minimizing potential hazards.
S. Hussain et al. [7] propose a fleet maintenance system called Car e-Talk that makes use of cloud computing
and the Internet of Things to track the health of vehicles, identify any irregularities, and provide information
on the nearby servicing facility. The vehicle is equipped with a variety of sensors to track its condition.
Through a microcontroller, data from sensors is received on the driver's smartphone, and after processing,
important information is shown on the driver's mobile screen. The same data is uploaded to a cloud server,
where it is used to maintain and evaluate a system history for preventative maintenance.
S. Zhu et al. [8] introduce intelligent edge computing, a cutting-edge technology used to provide energy-
efficient AI computing for IIoT applications. To offload the majority of AI workloads from servers, they
suggest an intelligent edge computing framework with a heterogeneous architecture. A unique technique to
improve the scheduling of diverse AI activities has been suggested to increase the energy efficiency of various
computing resources. R. Han et al. [9] created UAV-assisted Internet of Things (IoT) systems, where the
effectiveness of data collecting is assessed in terms of packet loss rate and data volume using a Markov
chain. Furthermore, the calculation frequency of UAVs is developed by the preference coefficients of the cost
of energy and time consumption to fulfill the different service requirements. Finally, the Age of Information
(AoI) system is taken into consideration to determine the freshness of data packets, where the models of
single-IoT devices and multi- IoT devices with the first-come-first-served (FCFS) principle and M/M/1
queuing are evaluated. The outcomes of the simulation demonstrate that the suggested system may offer
reliable data collecting and effective computing for IoT devices.
Muthumanickam et al. [10] focus on an intelligent accident avoidance system for bad weather and traffic
conditions, described using the Internet of Things. To assess the condition of the vehicle, several types of
sensors are employed. After being received, data from the sensors is processed by a microcontroller and
presented on the car's dashboard. The suggested concept combines an IoT system that tracks the weather and
road conditions with an intelligent system based on deep learning that learns the factors contributing to
accidents, in order to predict and advise the driver to drive at a safe speed. According to the experimental
findings, the proposed deep learning technique has a prediction accuracy of 94%, compared to the existing
LeNet model's 80%; the proposed ResNet is more efficient.
S. Mozaffari et al. [11] described a thorough overview of the state-of-the-art of deep learning-based methods
for predicting vehicle behavior. An overview of the general issue of predicting vehicle behavior is first
provided, along with a discussion of its difficulties. Y. Feng et al. [12] aim to close the research gap by
putting forth a thorough analysis methodology for the TSC's cyber-security issue in the CV environment.
A data spoofing attack is thought to be the most conceivable and practical attack strategy, after potential
threats to the system's key components and their effects on efficiency and safety were examined.
In M. Halakoo et al. [13], the impact of CAVs on the macroscopic fundamental diagram (MFD) is examined
with the help of microscopic traffic simulations. Additionally, a sensitivity analysis of the market
penetration rates of CAVs and network designs is carried out. T. Fedullo et al. [14] seek to explore the
potential applications of AI methods to the automobile industry, with a special emphasis on cutting-edge
metrology and measuring systems.
4. Challenges of Connected Vehicles
Emerging technologies usually introduce challenges that need to be resolved. The five main obstacles that vehicle
manufacturers must overcome as they try to establish themselves in the new ecosystem were identified during the
research for our report on connected vehicles. Connected vehicles face many challenges, since modern vehicles
must be connected to the Internet through wireless and wired network technology at all times for good
connectivity. Some of the major challenges are:
1. Security
2. Functional safety
3. Exchange of services and comfort
4. Reliable network coverage
5. Connectivity and subscription complexity
Having stable connectivity is a big hurdle for automakers. Reliable, high-bandwidth communication is essential
for operations like maintenance and for cutting-edge features like assisted and autonomous driving. The tiniest
service interruption could mean the difference between accident-free navigation and a mishap. As businesses
like Einride demonstrate the advantages of driverless transportation, the industry is already beginning to realise
the potential of assisted and autonomous driving. The only wireless technology that can deliver the dependable,
high-bandwidth coverage required for safe operations is cellular connectivity, namely 5G.
5. Proposed Methodology
Connected or modern vehicles, based on electronic hardware and software, provide smooth cloud-to-vehicle and
vehicle-to-owner connectivity. An extra requirement that comes with efficient connectivity is the need for updated
software and hardware firmware in the vehicle. The automotive sector has adopted a new service, FOTA (Firmware
Over-The-Air), which updates the system without any physical connection.
Fig. 3 explains the overall working procedure of FOTA. In the initial step, the FOTA process checks the ignition
(power) status of the vehicle. If the ignition is ON, the FOTA update can be initiated; otherwise the process
waits until the ignition is turned ON. The second, equally important step is to check the network status in the
vehicle. For this, a dual-profile telematics SIM should be available in the vehicle to establish a connection
between the vehicle and the cloud. When the connection is established successfully, the firmware is updated
automatically. This process depends entirely on the network and on cloud-to-vehicle connectivity.
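The FOTA sequence just described can be summarised as a short state sketch. The callables stand in for the
vehicle's real telematics APIs and are hypothetical placeholders; only the order of the checks follows the
procedure above.

```python
# Sketch of the FOTA sequence: wait for ignition, verify the network
# via the telematics SIM, then download and flash the firmware. The
# three callables are hypothetical stand-ins for real vehicle APIs.
import time

def run_fota_update(ignition_on, network_up, download_and_flash, poll_s=5.0):
    # Step 1: wait until the vehicle ignition (power) is ON.
    while not ignition_on():
        time.sleep(poll_s)
    # Step 2: the dual-profile telematics SIM must provide a vehicle-to-
    # cloud connection before the update can start.
    if not network_up():
        return False  # defer and retry later; connectivity is mandatory
    # Step 3: connection established, so fetch and apply the firmware.
    return download_and_flash()

# Example wiring with stubbed vehicle functions:
ok = run_fota_update(lambda: True, lambda: True, lambda: True)
print("firmware updated" if ok else "update deferred")
```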
6. Analysis of Result
According to the proposed methodology, the main focus is on one of the most important challenges of connected
vehicles: real-time, reliable, and secure communication among vehicles and between vehicles and the cloud.
Unlimited and seamless coverage as well as ultra-reliable, low-latency communication are vital for connected
vehicles, in particular for new use cases like cloud-to-vehicle and vehicle-to-vehicle communication.
In the proposed study, the FOTA (Firmware Over-The-Air) service connects to the vehicle with the help of the
CAN protocol and establishes a connection to update the software and firmware, with security measures applied
as needed. With this approach there is no need to connect to the vehicle physically, nor to visit a service
station for an OTA update.
In future work, we will present experimental results with measured values of connection timing, connection
error rate, network authenticity, and connection reliability, compared against other proposed methods.
7. Conclusion
Technological innovation plays an important role in the development of the automotive industry, and driving
standards can be improved by using new technology. As we know, the trend towards connectivity is one of the
most important in the automotive industry, with estimates that by 2025 nearly all new cars will be connected.
Automakers are increasingly producing high-performance vehicles due to customer benefits such as higher engine
power, more innovative braking and suspension systems, and other technical features that ensure high product
quality.
Therefore, we can conclude that the automotive industry is currently experiencing growth, driven by technologies
that will take it to new heights. By using wireless technology protocols, the automotive industry with IoT may
lead connectivity all over the world, avoiding human errors and developing quality products with high accuracy
and security. The CAN (Controller Area Network) protocol plays an important role in the automotive industry
together with IoT (embedded systems, telematics, modern vehicles, ADAS, cloud computing, AI/ML).
The transportation scenario can be enhanced by solving real-time problems and designing simple, user-friendly
applications. Traffic forecasts help drivers drive more economically and be safer on the road. We also collect
information about how drivers use certain features so that manufacturers can change, improve, or remove them
from future designs. IoT enables automakers to improve engineering and production to meet growing demand and
to innovate products with new technical support and further development.
References
[1] M. Jain and P. Kulkarni, "Application of AI, IOT and ML for Business Transformation of The Automotive
Sector," 2022 International Conference on Decision Aid Sciences and Applications (DASA), 2022, pp. 1270-
1275, doi: 10.1109/DASA54658.2022.9765294.
[2] M. Amejwal, A. El Jaouhari, J. Arif, S. Fellaki and F. Jawab, "Production Flow Management Based on
Industry 4.0 Technologies," 2022 14th International Colloquium of Logistics and Supply Chain Management
(LOGISTIQUA), 2022, pp. 1-7, doi: 10.1109/LOGISTIQUA55056.2022.9938064.
[3] S. Gupta, B. Amaba, M. McMahon and K. Gupta, "The Evolution of Artificial Intelligence in the
Automotive Industry," 2021 Annual Reliability and Maintainability Symposium (RAMS), 2021, pp. 1-7, doi:
10.1109/RAMS48097.2021.9605795.
[4] V. Pilloni, H. Ning and L. Atzori, "Task Allocation Among Connected Devices: Requirements, Approaches,
and Challenges," in IEEE Internet of Things Journal, vol. 9, no. 2, pp. 1009-1023, 15 Jan.15, 2022, doi:
10.1109/JIOT.2021.3127314.
[5] B. Poudel and A. Munir, "Design and Evaluation of a Reconfigurable ECU Architecture for Secure and
Dependable Automotive CPS," in IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 1,
pp. 235-252, 1 Jan.-Feb. 2021, doi: 10.1109/TDSC.2018.2883057.
[6] E. K. Mindarta, D. Kavitha, M. Poongundran, R. Priyadarshini, M. Sethu Ram and M. Moldabayeva, "IoT In
Increasing Human to Machine Interactions in the Automobile Sector," 2022 International Conference on
Applied Artificial Intelligence and Computing (ICAAIC), 2022, pp. 1510-1515, doi:
10.1109/ICAAIC53929.2022.9792682.
[7] S. Hussain, U. Mahmud and S. Yang, "Car e-Talk: An IoT-Enabled Cloud-Assisted Smart Fleet Maintenance
System," in IEEE Internet of Things Journal, vol. 8, no. 12, pp. 9484-9494, 15 June15, 2021, doi:
10.1109/JIOT.2020.2986342.
[8] S. Zhu, K. Ota and M. Dong, "Green AI for IIoT: Energy Efficient Intelligent Edge Computing for Industrial
Internet of Things," in IEEE Transactions on Green Communications and Networking, vol. 6, no. 1, pp. 79-
88, March 2022, doi: 10.1109/TGCN.2021.3100622.
[9] R. Han, J. Wang, L. Bai, J. Liu and J. Choi, "Age of Information and Performance Analysis for UAV-Aided
IoT Systems," in IEEE Internet of Things Journal, vol. 8, no. 19, pp. 14447-14457, 1 Oct.1, 2021, doi:
10.1109/JIOT.2021.3051361.
[10] A. Muthumanickam, G. Balasubramanian and V. Chakrapani, "Vehicle Health Monitoring and
Accident Avoidance System Based on IoT Model," 1 Jan. 2022, pp. 1-16.
[11] S. Mozaffari, O. Y. Al-Jarrah, M. Dianati, P. Jennings and A. Mouzakitis, "Deep Learning-Based
Vehicle Behavior Prediction for Autonomous Driving Applications: A Review," in IEEE Transactions on
Intelligent Transportation Systems, vol. 23, no. 1, pp. 33-47, Jan. 2022, doi: 10.1109/TITS.2020.3012034.
[12] Y. Feng, S. E. Huang, W. Wong, Q. A. Chen, Z. M. Mao and H. X. Liu, "On the Cybersecurity of Traffic
Signal Control System With Connected Vehicles," in IEEE Transactions on Intelligent Transportation
Systems, vol. 23, no. 9, pp. 16267-16279, Sept. 2022, doi: 10.1109/TITS.2022.3149449.
[13] M. Halakoo and H. Yang, "Evaluation of Macroscopic Fundamental Diagram Transition in the Era of
Connected and Autonomous Vehicles," 2021 IEEE Intelligent Vehicles Symposium (IV), 2021, pp. 1188-
1193, doi: 10.1109/IV48863.2021.9575687.
[14] T. Fedullo, A. Morato, F. Tramarin, S. Cattini and L. Rovati, "Artificial Intelligence - Based
Measurement Systems for Automotive: a Comprehensive Review," 2022 IEEE International Workshop on
Metrology for Automotive (MetroAutomotive), 2022, pp. 122-127, doi:
10.1109/MetroAutomotive54295.2022.9855154.
Comparative Analysis of RCC Deck Slab and Steel Bridge Design with Load Analysis Using Staad Pro
Mohit Rathore & Kavita Golghate
Abstract
Bridge construction is one of the cores of traffic infrastructure construction. To better develop the relevant
bridge science, this paper introduces the main research progress and aspects, including concrete bridges and
high-performance steel materials, the latest research on steel-concrete composite girders, advances in box girder
and cable-supported bridge analysis theories, advances in steel bridges, the theory of bridge evaluation and
reinforcement, bridge model tests and new testing techniques, steel bridge fatigue, wind resistance of bridges,
vehicle-bridge interactions, progress in seismic design of bridges, bridge hydrodynamics, bridge informatization
and intelligent bridges, and prefabricated concrete bridge structures.
Keywords: Bridge science, Annual progress in 2019, Review
Copyright © ICMMT2022
Corresponding Author’s E-mail ID: kgolghate12@gmail.com
1. Introduction
A deck slab bridge is a structure spanning the length between the inner faces of the dirt walls, carrying traffic
loads over natural obstructions (streams, rivers, etc.) or artificial obstructions. The superstructure of the
bridge comprises the deck slab and its supports. On a simple-span bridge, the deck slab rests directly on
bearings, through which forces and moments are transferred to the substructure. The deck slab bridge comprises
the deck slab as superstructure and abutments as supports. Fig. 1 shows the typical sections of a solid deck
slab bridge, which contains components such as the deck slab, wearing coat, abutment and footing. Casting a
solid deck slab is straightforward and simple, and the concrete moulds are extremely easy to build, although
the solid volumes may be larger. The deck slab bridge includes deck slab sections supported by longitudinal
girders, which in turn are supported by abutments. The girders give the stiffness and strength essential for the
length, and enable the section to be relatively thin and inexpensive to build. The principal detail required for
the design of the abutment and substructure is the span of the bridge.
2. General
Bridge construction has been very active in the world for a long time. Today, modern bridges tend to use
high-strength materials. Bridges are very sensitive to dynamic loadings and can be exposed to vibration caused
by dynamic effects such as wind, earthquakes and vehicle movement, as well as cyclic loading. Vibration can
influence the safety as well as the comfort of users and limit the serviceability of the bridge.
Construction of long span bridges has been very active in the world in the past few decades. Today, modern
bridges tend to use high-strength materials, and these structures are therefore made slender. It is noticed that
they are very sensitive to dynamic loadings such as wind, earthquakes and vehicle movement.
As we all know, whenever the bridge span is long, the structure becomes more flexible and susceptible to
vibration. Vibration effects can be very dangerous, and they occur at a number of levels, from immediate effects
(causing structural failure) to more prolonged effects (structural fatigue).
Further, vibration can impact safety in addition to the comfort of users, and restrict the serviceability of the
bridge. Consequently, substantial research has been completed to recognize the mechanisms behind bridge
vibration and to lessen this undesirable vibration impact.
3. Bridges
Structural steel has many advantages over other available construction materials in terms of strength and
ductility. Compared to concrete, its strength-to-cost ratio is higher in tension and lower in compression. We
also know that the stiffness-to-weight ratio of steel is far better than that of concrete. Consequently,
structural steel is an effective and cost-effective material for bridges. For the construction of long-span
bridges, steel is often the only solution due to its high load-bearing capacity and long life, but corrosion of
steel is also a major issue. Various authorities spend large sums of money to protect such steel structures from
corrosion. The Howrah Bridge is one example: it too is subject to rusting due to the salt present in sea water.
The Howrah Bridge, also known as Rabindra Setu, is a steel bridge constructed in 1943.
We present below a list of the benefits of steel bridges that make them popular in Europe and in various
developed countries:
- They are capable of bearing heavy loads over longer spans, offering minimum dead weight and allowing smaller
foundations.
- From the construction point of view, steel has the advantage of prefabrication, with later erection at the
site.
- In urban areas with heavy traffic, steel bridges can be erected in minimum time without affecting the
community.
- Steel offers greater efficiency than concrete structures with respect to seismic and blast loading.
- The life of a steel bridge is generally greater than that of a concrete bridge.
- Steel offers a slender appearance because of the shallow construction depth, which makes bridges artistically
attractive and leads to lower bridge costs.
- All of these factors normally lead to low life-cycle costs.
As the name implies, a steel bridge is a bridge in which the main material is steel. Steel bridge members are easy
to fabricate and are widely used in bridge construction due to the high tensile strength of steel materials. Steel has
tensile and compressive strength, and the ability to bend without cracking or breaking. Moreover, steel bridges do
not undergo dry shrinkage or creep as loads are applied over time.
Some important properties of steel bridges include:
1. Compared to concrete bridges, the self-weight is relatively light and long-span bridges can be constructed.
2. It is possible to manufacture durable and homogeneous quality materials in large quantities, and quality
assurance is possible because the elements are manufactured in controlled environments.
5. Literature Review
Johan Maljaars, Systematic derivation of safety factors for the fatigue design of steel bridges: This paper presents
a probabilistic framework to derive the safety factors for fatigue of steel and composite steel concrete road
bridges. Engineering models are used for the design and the safety factor is derived in such a way that the design
meets the target reliability set by international Eurocode and ISO standards, estimated using measured data and
advanced probabilistic models. Engineering model uncertainties and dynamic amplification factors are established
through comparison of measurements and models. The value of visual inspections is quantified based on
observations from practice and expert opinions. The safety factors are derived for Eurocode’s Fatigue Load
Model 4 and Eurocode’s tri-linear S-N curve. The study shows that the safety factors for fatigue as currently
recommended by the Eurocodes need to be raised.
Yanyan Sha, Design of steel bridge girders against ship forecastle collisions: A key aspect in the design of mega
bridge structures across navigable waterways is to ensure bridge safety with respect to accidental ship collisions.
Special attention has been paid to providing sufficient impact resistance for bridge sub-structures including piers
and pylons. However, the collision design of bridge super-structures such as bridge girders is commonly
neglected. In this paper, high-fidelity finite element models of a ship bow and a bridge girder are established.
Numerical simulations are conducted to study the structural response of the bridge girder subjected to impact
from the ship forecastle. Based on the simulation results, design considerations of bridge girders against ship
forecastle collision loads are discussed. The effects of the impact location and relative structural strength are also
investigated. A simple but effective strengthening method is proposed to increase the collision resistance of steel
bridge girders.
Amol Mankara, Probabilistic reliability framework for assessment of concrete fatigue of existing RC bridge deck
slabs using data from monitoring: Assessment of existing bridge structures for inherent safety level or for lifetime
extension purposes is often more challenging than designing new ones. With increasing magnitude and frequency
of axle loads, reinforced concrete bridge decks are susceptible to fatigue failure for which they have not been
initially designed. Fatigue verification and prediction of remaining service duration may turn out to be critical for
civil infrastructure satisfying the required reliability. These structures are exposed to stochastic loading (e.g.
vehicle loads, temperature loads); on the resistance side, reinforced concrete also behaves in a stochastic way.
This paper presents a probabilistic reliability framework for assessment of future service duration, which includes
probabilistic modeling of actions based on large monitoring data and probabilistic modeling of fatigue resistance
based on test data. A case study for the steel-reinforced concrete slab of the Crêt de l'Anneau Viaduct is
presented along with calibration of resistance partial safety factors for lifetime extension.
Javier Cañada Pérez-Sala, Numerical analysis of precast concrete segmental bridge decks: Precast concrete
segmental bridges are nowadays a well-established alternative for bridge construction that presents significant
advantages related to the construction process. Numerous bridges have been built using this technology in the past
decades and extensive research has been conducted, including the development of different numerical models to
study their behaviour. This paper proposes a new Finite Element model for Precast Concrete Segmental Bridge
decks capable of reproducing the main characteristics of their behaviour at a reduced computational cost. The
model proposed has shown very good agreement with experimental results existing in the literature. After
calibration, the influence of different modelling choices has been analysed. The results point to a high impact
of the modelling strategy adopted for the joints in the compression areas, requiring an adequate estimation of the
point of contact between the segments. Additionally, consideration of friction of external tendons at the deviators
showed limited relevance in the global behaviour of the model but was important for the correct estimation of
stress increments in the tendons. Finally, considering or not the presence of epoxy at the joints did not seem to
influence significantly the behaviour of the models. The use of shell elements combined with the modelling
strategy adopted for the joints offers better accuracy than existing models with a significantly lower
computational time.
Su-Shen Lim, Flexural strength test on new profiled composite slab system: This research presents an
experimental study on the flexural strength and failure behaviour of a newly developed composite metal decking
system. The newly developed metal decking system with a thickness of 0.75 mm and 1.0 mm produced by the
industry requires a detailed study in strength and performances before it is launched for commercialization. A
simply supported conventional reinforced concrete slab is used as the control specimen and two composite slabs
with different metal thickness of steel sheet profiles were constructed and tested under a four points flexural
strength test. The strength and behaviour of the slabs are recorded and comparisons with conventional slabs with
composite deck slabs are made to achieve the objectives. The recorded results of three different slabs were then
used to plot a load-displacement graph and deflection profiles are to be analysed and compared. The yield flexural
capacity and average yield displacement of specimens for the composite slabs were 28.0 kN and 0.80 mm
respectively. The 1.0 mm metal thickness composite slab has the highest ultimate flexural capacity among all
specimens which is 84 kN followed by a 0.75 mm metal thickness composite slab with 58 kN and ends with the
lowest 9.1 kN of conventional slab. Two cracking patterns were found during the experimental test which
includes shear cracking and flexural cracking. Besides that, two major failure modes under bending, which are
flexure failure at the centre point of the specimen and bond or longitudinal slip failure along the side of the
specimen were found in the experimental test. Bond failure results in slippage between the concrete and metal
deck, which can result in cancellation of the composite action at interface. In conclusion, the strength of slabs
improved, and ductility was remarkably increased when slabs acted as a composite structure.
Rajai Z. Al-Rousan, Impact of sulfate damage on the behavior of full-scale concrete bridge deck slabs reinforced
with FRP bars: This paper is aimed to analyze the performance of reinforced-with-FRP, sulfate-damaged concrete
bridge deck slabs under concentrated loads, using the nonlinear finite element analysis (NLFEA) method. For
experimentation purposes, twenty-seven full-scale models have been prepared to simulate concrete bridge deck
slabs, with a length of 3000 mm and a width of 2500 mm. The parameters of the study were: (i) type of
reinforcement, as three types were tested: glass FRP (GFRP), carbon FRP (CFRP), and steel; (ii) bottom
transverse reinforcement ratio (ρ = 0.38, 0.46, and 0.57); and (iii) sulfate damage level, which consisted of three
levels: Level 0 (undamaged), level 1 (73 days), and level 2 (123 days). All of the slabs models were equipped
with two parallel girders of steel as supporters. To be able to analyze the models’ performance up till failure, the
load exerted by sustained truck wheels (87.5 kN CL-625 truck) was simulated by subjecting each slab model to a
monotonic single concentrated load, with a contact area of 600 × 250 mm, at the center of the models. The
simulated models encountered a punching shear mode of failure. The analysis showed that the models that were
strengthened with CFRP and GFRP bars exhibited a remarkable improvement in the models’: ultimate load,
elastic stiffness, post-cracking stiffness, elastic energy absorption, and post-cracking energy; whereas, there was
less influence on the models’ ultimate deflection, compared to the ones strengthened with steel.
Balázs Kövesdi, Reliability analysis-based investigation of the historical Széchenyi Chain Bridge deck system:
Before the recent reconstruction, significant corrosion damages were observed on the deck system of the ~170-
year-old Széchenyi Chain Bridge. Therefore, an advanced reliability analysis-based study is executed to assess
the risk of failure of the structure in its situation until the refurbishment starts. The current paper has the aim to
introduce the applied assessment method for the risk analysis and damage grade determination of the historical
structure. The novel method combines the following techniques: (i) input data coming from on-site measurements
implemented into state-of-the-art corrosion models, (ii) advanced finite element model-based resistance
calculation (GMNI analysis) implemented into (iii) Monte Carlo simulation-based stochastic reliability
assessment method to determine the risk of the bridge deck system failure. It is concluded that the introduced
method is a powerful tool for risk assessment of existing aging structures.
Rajai Z. Alrousan, The behavior of alkali-silica reaction-damaged full-scale concrete bridge deck slabs reinforced
with CFRP bars: Steel material is susceptible to corrosion after being in an aggressive environment and chemical
attacks causing major deficiencies and failure; in some cases, maintenance or repair is necessary. Consequently,
carbon fiber-reinforced polymers (CFRP) materials have been innovated to replace the usage of conventional steel
reinforcement. In this study, the behavior of reinforced concrete bridge deck slab has been investigated under the
effect of ASR damage and internal reinforcement with CFRP bars. However, the effect of the reinforcement type
(CFRP and steel), reinforcement ratio (0.38, 0.46, and 0.57) %, and ASR damage stage (without, first, second, and
third) has been carried out using the NLFEA technique after a well-calibration process against available
experimental data. Generally, it has been noticed that the utilization of CFRP bars significantly improved the
slab’s strength in compression and tension, stiffness, and modulus of elasticity. Moreover, an innovative and cost-
effective reinforcing technique, using CFRP bars with a 0.38% ratio or 0.46% reinforcement ratio, represents an
ideal solution to improve the ultimate load-carrying capacity, serviceability, and durability. In addition, CFRP
exhibited more strain values in concrete and steel materials and decreased under exposure to ASR damage action.
Under exposure to ASR damage action, both the elastic and post-cracking stiffnesses are reduced regardless of the
reinforcement type. Generally, it has been observed that the resulting degradation could be approximated by a
parabolic shape having a slower decreasing rate at ASR levels of 2 and 3, where the material loses most of its
mechanical properties. Finally, a linearly decreasing relationship was found between the slab’s energy capability
and ASR level.
Xin Chen, Heating properties of bridge decks using hydronic heating systems with internal or external circulation
tubes: A comparison analysis of the heating properties of the hydronic heating system of bridge decks with
external (exchange tubes installed at the bottom of the existing bridge deck with voids inside) or internal
(exchange tubes embedded in pavement of the newly built bridge deck) tubes was carried out through field tests.
Two heating methods (constant heating power and constant inlet fluid temperature) were used to analyze the heat
exchange flux and the temperature increments as well as thermally induced stress of the slab. Numerical
simulation was conducted to model the bridge deck heating process to analyze the temperature distribution of the
bridge surface. The results show that the heat exchange fluxes are the same under the same constant heating
powers for the two embedded tube position heating systems; the maximum temperature increment of the bridge
deck surface obtained by the external heating system is 0.46 times that obtained by the internal heating system;
the maximum thermally induced stress caused by the external heating is 20.4% of the concrete strength (19.1
MPa), which is much higher than that caused by the internal heating under the same heating powers. The thermal
efficiencies of the external and internal heating systems are approximately 24.4% and 47.9%, respectively. Under
the same constant inlet temperatures, the temperature increment of the bridge deck caused by the external heating
is 20.4% of that of the internal heating.
Mark Hurley, Laboratory study of a hydronic concrete deck heated externally in a controlled sub-freezing
environment: Geothermal heating of bridge decks is a reliable and sustainable method for bridge de-icing that has
been increasing in demand since conventional de-icing methods were proved to be environmentally hazardous.
Previous research on geothermal heating of bridge decks relied on hydronic pipes embedded inside of bridge
decks, which are confined to newly constructed bridges. For existing bridges, a newly devised method for external
heating has been recently tested under limited laboratory conditions to determine its overall performance. This
study explores laboratory heating tests of a concrete slab with a thickness representing a typical concrete bridge
deck. This slab was equipped with a simulated geothermal bridge de-icing system and tested inside a freezer
subjected to sub-freezing controlled conditions. Various winter scenarios were applied to the system to determine
its heat- ing performance and how feasible it will be for the system to be transferred to the field. A prediction
equation was developed to estimate the total energy reserves required to permit de-icing, and statistical analysis
was performed and validated with test results. The slab surface heat flux was estimated to range from 27 W/m 2 K
to 73 W/m 2 K from the heating test. The externally-heated deck can be designed with the developed prediction
equation for snow melting.
Kamal B, DESIGN OF COMPOSITE DECK SLAB: The aim of this project is to design the composite slab
which is connected using shear connectors. The use of steel-concrete composite construction has been widely
applied in building construction (industrial buildings, apartments, etc.). In general, composite design provides
efficient use of material, ease of construction, time saving, and more space when compared with a non-composite
design. A composite slab to be placed over a conference hall has been selected for this project. In this project, the
design of the composite slab and its connection with the beam using shear connectors has been done using the
INSDAG design manual, based on the assumptions of Eurocode-4 and IS: 11384. The members are analyzed for
the loads acting on them, such as live load and dead load. The necessary details of the profile sheet and shear
connectors were taken from the manufacturer's tables recommended by INSDAG and IS: 11384. The detailed
calculations for the design of the composite slab, the steel beam, and their connections are presented in this project.
Y. Kamala Raju, Reinforced Cement Concrete Bridge Deck Design of a Flyover with Analysis for Dynamic
Response Due To Moving Loads for Urban Development in Transportation Systems - A Case Study: The present
study on practices in civil engineering for sustainable community development, to meet four of the eight
Millennium Development Goals of the United Nations, has been taken up to improve the quality of life of the
global community by creating awareness among all concerned. This study is also relevant during the United
Nations Decade of Sustainable Development. The four goals related to civil engineering are effective irrigation
water management, providing safe drinking water, ensuring environmental sustainability, and a sustainable
transportation system. Inspired by these goals, this paper studies Reinforced Cement Concrete bridge deck design
and its dynamic response for urban development in transport systems. A Reinforced Cement Concrete bridge deck
is designed using the Indian Roads Congress (IRC) Bridge Code: IRC 21-1987. The bridge deck is designed for
IRC Class AA loading (tracked vehicle). The design curves by M. Pigeaud are used to get moment coefficients in
two directions for the deck slab. The longitudinal girders are designed by Courbon's method. The dynamic
response of the bridge deck to moving loads is analyzed as per the British Standard Code of Practice BSCP-117
Part-II (1967), based on Lenzen's criteria relating the natural frequency and vibration amplitude. A computer
program in the C language is developed to design interior slab panels of the reinforced concrete bridge deck,
arriving at the reinforcements and depths for a specified slab panel width and wearing coat thickness with
concrete grade M-25 and steel grade Fe-415 High Yield Strength Deformed (HYSD) bars. The possible global
partnership for overall development with universities, consulting organizations, government organizations, and
non-governmental organizations is also discussed.
6. Conclusion
Structural development and efficiency in bridge engineering have received much attention in recent decades. As
part of this development, structural optimization based on mathematical analysis has emerged as one of the most
widely employed strategies for productive and sustainable design in bridge engineering. Despite the widespread
interest, there has not yet been a rigorous examination of recent developments in structural optimization research.
Thus, the primary objectives of this paper are to critically review previous structural optimization research,
provide a detailed examination of optimization goals, outline the limitations of recent research in the field, and
provide guidelines for future research proposals in bridge engineering structural optimization. This article begins
by outlining the relevance of efficiency and sustainability in bridge construction, as well as the work required for
this review. Suitable papers are gathered, followed by a statistical analysis of the selected publications. The
selected papers are then evaluated in terms of their optimization targets as well as their spatial patterns. The four
key steps of structural optimization, including modeling, optimization techniques, formulation of the optimization
problem, and computational tools, are also examined in depth. Finally, research gaps in contemporary works are
identified, and guidance for future work is suggested.
Abstract
Economic development is an essential parameter for any nation's growth, especially for developing nations like
India and China, where the consumption of energy in the form of electricity is very high. Industry is one such
consumer, and as manufacturing increases, industrial waste increases exponentially. Such waste materials include
fly ash (F.A.), a byproduct of thermal power plants from the burning of coal; ground granulated blast furnace slag
(GGBS or GGBFS), a slag from iron furnaces; and silica fume (S.F.), a byproduct of ferrosilicon alloy
manufacturing. Utilization of such industrial waste is important for the ecosystem, and sustainable growth is
possible only through its effective use, as these materials show cementitious properties when used with OPC
cement. For high performance concrete, all three cementitious materials can be used in concrete production at the
same time; many research papers report the use of an individual material with cement (cement + F.A., cement +
S.F.) or a combination of two materials with cement (GGBS + F.A. + cement). In this paper we present the use of
all three ingredients with OPC cement (F.A. + S.F. + GGBS + cement), with reference to the research papers cited
below. The optimum use of GGBS and fly ash is at a (15:15) ratio, whether as replacement of or addition to OPC
cement, which gave maximum results at 7 and 28 days; taking that result as a reference, silica fume is added at
(5-10) percent to improve the HPC's workability, initial setting time, and transit time without using any kind of
chemical plasticizer. If a retarder is required, naturally and locally available materials are used, so that concrete
can be transported far from the batching plant without affecting its strength.
Keywords: H.P.C., Transit time, Green material, Industrial waste, Durability, GGBS
1. Introduction
The earth is now home to a population of 8 billion, which continues to increase with time, so energy is a primary
need for survival. The consumption of resources increases in all domains, such as automobiles, thermal power
stations, and the iron and steel industries, whose wastes and byproducts are inorganic, not easily decomposable,
and mostly inert. These byproducts can now be used in concrete production without any adverse effect on the
strength and other engineering properties of the concrete. This waste is a burning issue, as it is abundant and
contributes to environmental degradation. To keep a hygienic and healthier environment, it is now a worldwide
concern to develop a social, technical, economic, and environmentally benign remedy. Such industrial wastes
have many demerits: they affect the organic properties of agricultural land around power plants and industries;
inorganic wastes are not easily treatable or disposable in landfills and adversely affect the groundwater; and if
humans come in contact with such waste it affects the respiratory system, because the particle size is less than a
micron. The management of environmentally damaging industrial waste has long been an abandoned challenge,
solved only by recycling it in the form of construction material, where it is consumed on a large scale without too
much treatment or processing. Once used in concrete or any other material, the waste loses its identifying
properties and becomes inert and non-reactive in nature.
In past years, the optimized use of solid waste has been one of the major problems for construction and civil
engineering researchers. Supplementary cementitious materials, which are byproducts, can be used with the
lowest possible environmental impact. Currently, there is a worldwide push to produce eco-friendly goods that are
less costly and have less of an effect on the environment, by replacing cement to some proportion with less
expensive supplementary cementitious materials.
The manufacturing of cement causes CO2 emissions, increasing the carbon footprint; replacing cement with
supplementary cementitious materials reduces cement consumption and also saves resources and energy. The
main purpose of the present study is to utilize byproduct materials for green concrete, as a replacement of cement
and fine aggregate, to make High Performance Concrete (HPC) with an extended initial setting time for long
transit/transportation of fresh concrete to distant construction sites without reducing its engineering properties. In
High Performance Concrete, ultra-fine cementitious materials such as fly ash, GGBS, and silica fume have very
fine particles, so they reduce the water content, increase consistency, and increase packing density and durability
at a water-cement ratio below 0.4. The workability of this concrete is measured by the compaction factor and
slump tests, and compressive strength tests are done at 3, 7, and 28 days. The results of the various papers and IS
codes mentioned in the references are reviewed, and the outcomes are analyzed, discussed, and concluded.
2. Literature Review
Jagriti Gupta, Nandeshwar Lata, Sagar Mittal, 2018, Effect of Addition and Replacement of GGBS and Flyash
with Cement in Concrete, International Journal of Engineering Research & Technology (IJERT), RTCEC-2018
(Volume 6, Issue 11) [1]: studied the use of GGBS and fly ash in concrete by addition to or replacement of
cement. The paper shows design mixes with different ratios of GGBS and fly ash with cement, reports the slump
of each design mix, and gives the compressive strength of M35 grade concrete with different compositions of
cementitious material at a 0.4 w/c ratio. Compressive strengths at 7 and 28 days were obtained at cementitious
material ratios in (GGBS : Fly ash : Cement) form; replacement with OPC cement at (15:15:70) and addition with
OPC cement at (15:15:100) are the optimum points for using GGBS and fly ash, and the compressive strength
reduces when used beyond those parameters. The slump value increases as the GGBS and FA percentages
increase in replacement with cement, but in addition with cement the slump value increases only up to (100:5:5),
after which it decreases. Using GGBS and fly ash in certain defined proportions improved the packing density and
workability of the concrete [1].
O.P.C.-G.G.B.F.S. based concrete and O.P.C.-F.A.-S.F. tri-blend concrete use less water and are less adhesive
than O.P.C.-S.F. concrete [7].
The setting time of concrete is influenced by the GGBS and fly ash percentages in the concrete mix. Most
cementitious materials extend the setting time of concrete; many factors, such as the type of cement and the
composition and fineness of the supplementary cementitious material (slag, silica fume, rice husk), affect the
water demand and hydration rate. These materials extend the setting time by reducing the rate of heat of
hydration. Pozzolanic materials like fly ash decrease the water requirement, which is useful during arid or hot
weather, where excess water evaporates, concrete dries fast, workability decreases, and the placing, transportation,
and finishing of concrete are adversely affected. Pozzolanic materials are therefore helpful for long
transportation/transit of concrete from the batching plant, since they extend the initial setting time. GGBS and fly
ash give a smooth, even surface finish, but silica fume generally makes concrete adhesive in nature, so it is
difficult to get a smooth and even concrete surface. To improve the pumpability of concrete, cementitious
materials like GGBS and fly ash are effective and advantageous. Fly ash and silica fume are used mainly to
reduce choking of the pipe in low-slurry concrete or concrete with a higher amount of coarse aggregate. The use
of such byproducts generally reduces rapid setting, and the use of fly ash with OPC cement makes it pozzolanic
cement, which generally does not develop plastic shrinkage cracks over the surface.
3. Objectives and Materials
3.1.1. O.P.C. Cement
Ordinary Portland cement (OPC) has a greenish gray color and is the major cementitious material in concrete. The
main constituents of OPC cement are limestone (generally 60%), silica (20%), and alumina (6%). As per the
reviewed reference paper, 53 Grade cement is used, with not more than 10% by weight of the sample retained on
a 90-micron sieve (sieve number 9), conforming to IS: 12269-1987. OPC cement grade 53 means it shows a
compressive strength of 53 MPa after 28 days with proper curing. Cement should be used within 3 months from
the packing date and be free from lumps; when cement is poured into a water-filled bucket, it should sink
completely in the water, not float over the surface.
3.1.2. G.G.B.S.
G.G.B.S. is a cementitious material obtained from the iron industry as a byproduct, where liquid iron slag is
quenched. To get iron from iron ore in the liquid state, a mixture of the ore with coal and limestone is charged into
a blast furnace; as the dry ingredients move down in the furnace, the temperature increases up to 1550°C. Once
the mixture reaches that temperature, molten slag and liquid iron separate, with the molten slag floating over the
liquid metal. Once the slag cools and dries, it looks like clinker, which is pulverized or 'granulated' and ground;
that is why it is called ground granulated blast furnace slag. It is used for preparing slag cement as per IS 455
(1976), at around 25-70% by weight of OPC cement. GGBS is off-white in color.
3.1.3. Fly Ash
Fly ash is a consequent material, in the form of ash, from thermal power stations that use coal as fuel, produced
through the combustion process. Fly ash is compatible with O.P.C. cement; the two mixed in a certain proportion
give pozzolanic and cementitious properties, making P.P.C. cement, and also reduce the water needed for good
workability. The fuel material, coal, in a thermal power plant consists of volatile and carbon matter that is burned
in the furnace. The impurities of coal, like quartz and clay, dissolve into suspension during the combustion
process and are carried out of the chamber in the form of flue gas. The residual matter cools and sets in the form
of spherical particles. The particulate matter is collected through various mechanical systems, fabric filters, bags,
and precipitators. Fly ash generally contains alumina and silica in major proportions. Fly ash is used in concrete
mix design in two ways: first by replacement, and second by addition to O.P.C. cement. Fly ash, also known as
pulverized fuel ash or coal ash, is a heterogeneous material. The utilization of fly ash in concrete is necessary
because it is toxic in nature, containing many chemicals like arsenic, nickel, and lead. It also causes respiratory
problems and inflammation in the lungs, and can contribute to heart disease such as stroke.
Fly ash provides superior workability at a lower water-cement ratio, extends the setting time without the use of a
retarder or plasticizer, benefits the environment by reducing the carbon footprint, and reduces the material cost of
concrete. F.A. particle size varies from 10 micron to 75 micron, with an average density of 1350 kg/m^3 and a
specific gravity of 2.5. Fly ash is classified into two grades. IS 1489-2015 mentions that the permissible use of fly
ash is 15-35% by weight of cement.
3.1.4. Silica Fume
Silica fume is a byproduct substance that is utilized as a pozzolana and is also known as microsilica or condensed
silica fume. The particle size of silica fume is generally between (0.10-0.30)x10^-6 m and its density lies between
(140-450) kg/m^3. In an electric arc furnace used to produce silicon or ferrosilicon alloy, raw quartz is reduced
with coal, yielding this byproduct. A higher percentage of silica fume makes concrete more brittle and hard. At
2000°C, the furnace smoke is released as an oxidized vapor; when collected it is called condensed silica, or
sometimes volatilized silica. Large fabric bags are used to catch the condensed material as it cools. Next,
contaminants are taken out, and the purified, size-controlled condensed silica fume is collected and handled with
safety precautions, since very fine particulate matter may cause respiratory problems. The ASTM C1240 code
shows that the permissible use of silica fume is up to 15%, but normally 7-8% by weight of cement is used.
3.1.6. Coarse Aggregate
Aggregate of size more than 4.75 mm is used as coarse aggregate. Basalt aggregate is taken, with a specific
gravity found to be 2.52 and a density of 2850 kg/m^3. The fineness modulus is more than 7, and the water
absorption should be less than 1%. The size of the coarse aggregate should be less than 10 mm.
3.1.7. Admixture
Generally no admixture is required; even if one is needed for retarding purposes, mineral oil or sugar syrup may
be used, but no chemical plasticizers are used, since the main aim is that this High Performance Concrete be made
from industrial waste or locally available materials.
3.1.8. Water
Water used for concrete mixing must be potable. About 23% water by weight is required for hydration, and 15%
is entrapped in the voids of the cement; the total water required for complete hydration and workability is 38% by
weight. The workability of concrete directly depends on the water content of the concrete mix, i.e., the water-
cement ratio. Many cementitious materials like fly ash, GGBS, and silica fume change the water requirement for
the same workability or consistency. Water free from oil, sugar, and acid is generally used for the concrete mix
and for curing; sea water is strictly prohibited as a replacement for potable water in the mixing of concrete. Fly
ash reduces the water content: replacing 25-50% of OPC cement with fly ash reduces the water requirement by
approximately 2-18% for normal work. The pH of water used in the concrete mix must lie between 6.5 and 8, as
for potable water.
The utilization of waste as a material for green concrete is studied so as to reduce the carbon footprint, save
natural resources like sand and natural aggregate, protect the ecosystem from pollution, and save the energy
consumed in the production of cement. It is thus a substitute for construction materials like cement and fine
aggregate. For high performance concrete (HPC), the following steps are followed:
1. Use OPC 53 grade cement having no more than 10% by weight retained on a 90-micron sieve
2. Use GGBS at 15% by weight of cement
3. Use fly ash at 10% by weight of cement
4. Use silica fume at (5-10)% by weight of cement
5. Use fine river sand of zone II, between 4.75 mm and 75 micron
6. Coarse aggregate should be between 10 mm and 12.5 mm
7. The w/c ratio should be less than or equal to 0.4
The total cementitious material is in the ratio (OPC : GGBS : Fly ash : Silica fume): (0.68 : 0.15 : 0.10 : 0.07)
when replacing cement, and (1.00 : 0.15 : 0.10 : 0.07) when adding to cement.
Batching of all the materials is done as per the design mix proportions of cementitious material, sand, and
aggregates at a 0.4 w/c ratio. The proportioned materials are mixed in the dry condition, then water is added in a
controlled manner and mixed properly until the fresh concrete is ready. This concrete is tested with the slump
cone and compaction factor tests to measure its workability. A number of standard cube samples are prepared for
compressive strength testing to obtain the 28-day strength after curing at room temperature. The results are to be
obtained, analyzed, discussed, and concluded; a small batching sketch follows below.
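A minimal batching sketch under these proportions is given below. The 400 kg/m^3 total binder content, and
applying the 0.4 water ratio to the total cementitious content, are illustrative assumptions rather than values from
the study.

# Minimal batching sketch for the tri-blend HPC mix described above.
# The ratios and the 0.4 water ratio come from the text; the 400 kg/m^3
# binder content and the water-on-total-binder convention are assumptions.

def batch_weights(reference_mass_kg, ratios, w_b_ratio=0.4):
    """Scale each cementitious ratio by a reference mass and derive the
    mixing water from the total binder. For the replacement mix the
    reference is the total binder; for the addition mix it is the OPC
    content, with the other materials dosed as fractions of it."""
    weights = {name: reference_mass_kg * r for name, r in ratios.items()}
    weights["water"] = sum(weights.values()) * w_b_ratio
    return weights

replacement = {"OPC": 0.68, "GGBS": 0.15, "fly ash": 0.10, "silica fume": 0.07}
addition = {"OPC": 1.00, "GGBS": 0.15, "fly ash": 0.10, "silica fume": 0.07}

for label, mix in (("replacement", replacement), ("addition", addition)):
    print(label, {k: round(v, 1) for k, v in batch_weights(400, mix).items()})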
Testing of concrete is done in two stages: first at the fresh stage and second at the hardened stage. Workability or
flowability is measured at the fresh stage. Workability indicates ease of handling, transportation, placing, and
compaction without vibration, which greatly helps to avoid segregation of the material; too much vibration may
cause bleeding in the concrete. A homogeneous, lump-free nature is a condition of effective fresh concrete.
A hardened specimen of concrete can be tested in terms of its strength, surface texture (finish), and unit weight,
but in general the compressive strength test is the best method for testing the essential engineering properties of
hardened concrete. Concrete cubes are tested at the ages of 3, 7, and 28 days for compressive strength. To get a
rough idea of the strength of the concrete, or as a relatively fast check of its quality, the 7-day compressive
strength test may be done; generally 68% of the strength is achieved in 7 days. If a sample achieves more than
that, it is expected to reach the design strength at 28 days.
Concrete cubes of standard size (15x15x15) cm are used as samples for the compressive strength test. As per
IS 1199, sampling of the fresh concrete is done, while testing, casting, and curing of the concrete cubes are done
as per IS 516; 28 days of curing are necessary for achieving the design strength. At least 3 specimens should be
selected from the sample, and the resultant strength is taken as the mean of the 3 specimens. The variation of each
test specimen should not be more than ±15% of the mean. Testing of the sample should be done in the dry
condition; the cube is taken out of the curing tank 1 day prior to testing its compressive strength. A sketch of this
acceptance check follows below.
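A small sketch of this acceptance logic, including the approximate 68% seven-day rule quoted above, is given
below; the specimen strengths are illustrative values, not results from the study.

# Sketch of the IS 516-style acceptance check described above: the result
# is the mean of at least 3 cube strengths, each within +/-15% of that mean.

def cube_test_result(strengths_mpa):
    if len(strengths_mpa) < 3:
        raise ValueError("at least 3 specimens are required")
    mean = sum(strengths_mpa) / len(strengths_mpa)
    within_band = all(abs(s - mean) <= 0.15 * mean for s in strengths_mpa)
    return mean, within_band

def estimate_28_day(strength_7_day_mpa, fraction=0.68):
    # rough extrapolation from the ~68%-at-7-days rule quoted in the text
    return strength_7_day_mpa / fraction

mean, ok = cube_test_result([34.2, 36.1, 35.0])   # illustrative readings
print(f"mean = {mean:.1f} MPa, all specimens within the 15% band: {ok}")
print(f"expected 28-day strength from a 24 MPa 7-day result: "
      f"{estimate_28_day(24.0):.1f} MPa")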
Table 3. Comparison of Compressive Strength at 28 Days for Addition & Replacement, M35 Grade
For high performance concrete (HPC), the mix design uses the data highlighted in Table 2 (Table 9) of the cited
research paper: the 15-15 percentages of fly ash and GGBS gave the maximum compressive strength, so silica
fume at (5-10)% is used alongside all three supplementary cementitious materials (GGBS, S.F., F.A.) to obtain
High Performance Concrete with OPC 53 grade cement. Table 2 of the above-mentioned paper also gives an idea
of the w/c ratio, because water is reduced by (2-18)% when using fly ash, using silica fume up to about 5% does
not affect workability, and GGBS with fly ash always increases workability. There is therefore no need of a
plasticizer for long transportation of concrete from the batching plant; even if one is needed, mineral oil or sugar
syrup is used as a retarding admixture without affecting the strength of the concrete.
4. Conclusion
i) Fine particle size affects the cement's consistency, hydration, and strength. The fineness of silica fume is
generally higher than that of cement, as is its specific surface; as the silica fume content increases,
consistency also increases.
ii) The workability of concrete increases with the percentage of fly ash. About (2-18)% of the water is
reduced for F.A. above 25%; the same applies for other supplementary cementitious materials at about
(25-30) percent by weight of cement.
iii) Using supplementary cementitious materials gives a smoother, more even surface and improves
engineering properties: packing density, thermal insulation, electrical resistivity, reduced shrinkage
cracking, increased durability, and higher flexural strength.
iv) The fresh concrete is free from bleeding and segregation and provides a homogeneous mixture.
v) Silica fume can be used up to 15%, but normally (7-10)% by weight of cement is used.
vi) Workability decreases when the replacement percentage of silica fume with O.P.C. cement exceeds 10%;
water demand will also be higher for larger substitutions.
References
[1] Jagriti Gupta, Nandeshwar Lata, Sagar Mittal, 2018, Effect of Addition and Replacement of GGBS and Flyash
with Cement in Concrete, International Journal of Engineering Research & Technology (IJERT), RTCEC-2018
(Volume 6, Issue 11). Information on https://www.ijert.org/research/effect-of-addition-and-replacment-of-ggbs-
and-flyash-with-cement-in-concrete-IJERTCONV6IS11023.pdf
[2] Effect of Fly Ash on Mixing Water Requirements for Air-Entrained Concrete, in Chapter 3: Fly Ash, Slag,
Silica Fume, and Natural Pozzolans, Design and Control of Concrete Mixtures, EB001. Information on
http://www.ce.memphis.edu/1101/notes/concrete/PCA_manual/Chap03.pdf
[3] Yehia, S., Farrag, S., Helal, K. et al., Effects of Fly Ash, Silica Fume, and Ground-Granulated Blast Slag on
Properties of Self-Compacting High Strength Lightweight Concrete, GSTF J. Eng. Technol. 3, 21 (2015).
https://doi.org/10.7603/s40707-014-0021-3
Abstract
This study's objective is to forecast a student's success or failure using machine learning algorithms. Voting
classifier methods and individual machine learning approaches are compared for how much they improve
prediction accuracy. Three machine learning techniques were used in this study: logistic regression, random
forest classification, and SVC. Voting classifiers are machine learning models that forecast an output (a class)
based on the class with the highest likelihood of being selected as the output; they are trained over a variety of
models. To anticipate the output class based on the majority of votes, the results of each classifier fed into the
voting classifier are simply aggregated.
1. Introduction
Machine learning techniques are utilized in this study to forecast a student's success or failure. The study's
specific focus is on how individual machine learning approaches and voting classifier techniques compare in
terms of how much they improve prediction performance. Three different machine learning techniques were
employed [5]: logistic regression, SVC, and random forest classification. Voting classifiers are artificial
intelligence (AI) models that forecast a result (a class) in accordance with the highest probability of that class
being selected as the result; they are trained over a variety of models. To predict the output class based on the
majority of votes, the voting classifier simply takes the results of each classifier that is fed into it and aggregates
them. An easy-to-use, centralized way of overseeing all of a school's or college's online activities is provided by a
learning management system. Any educational institution's success is based on how well its pupils perform. To
improve a number of student characteristics, including final grades, attendance, etc., it is crucial to predict and
analyze student performance. Modern educational institutions operate in a complex and very competitive
environment. Performance analysis, high-quality training, creating tools for evaluating students' performance, and
identifying unmet future needs are difficulties that the majority of institutions face today [3]. Universities
implement student intervention plans to assist students in resolving problems that come up while they are enrolled
in classes. Because of the increasing use of computers and the internet, there is now much more publicly available
data that can be reviewed. New information is generated every day, whether it is regarding user behavior, website
traffic, or online sales figures. Such a huge amount of data presents both a challenge and an opportunity. The
challenge is that analyzing such vast amounts of data is difficult for humans. The good news is that this content is
well formatted and digitally recorded, making it ideal for processing by computers, which process information far
more quickly than people. Artificial intelligence (AI) has always been a hot topic of discussion because it now
permeates almost every element of life in the twenty-first century [2]. Machine learning (ML) is a subfield of
artificial intelligence that uses algorithms to synthesize knowledge by combining the underlying relationships
between several data sets [3]. Predicting future situations or events that are unknown to computers is the
fundamental goal of machine learning. Thanks to data mining and machine learning, this mix of tools can process
data, patterns, and models for thought, understanding, planning, problem-solving, forecasting, and object
manipulation [2]. The concept of machine learning was created in this environment: computers can analyze
digital data in ways that humans are unable to, to find patterns and laws.
ML can observe the actions of students and evaluate their performance. Decision-makers now have better tools to
extract information from data for judgments and policies thanks to machine learning (ML) [7]. Instructors and
institutions can effectively investigate the educational database using machine learning and data mining. Using
attributes extrapolated from logged data, this analysis can be used to forecast a student's performance, such as
predicting a student's success in a course or a student's final grade. These systems enable all elements of a course
to be managed in a single location, from lessons and assignments through evaluations and grading. This implies
that instructors may offer feedback on any project or test at any time, and students need not wait until the end of
the semester to see their grades. It can be beneficial to mentors and teachers as well, by helping them choose the
subjects that should be covered in greater detail and organize projects that will help students overcome their
difficulties. It will benefit parents, who are constantly concerned about their children's academic achievements,
since they will be aware of their children's performance as well as the areas in which they are deficient. Educators
can now watch their students' online interactions more precisely than ever before thanks to technological
advancements, and such systems give educators and students access from anywhere, 24 hours a day, 7 days a
week. Algorithms can be used as a teaching tool to provide students with individualized feedback on their
homework, tests, and other assignments. These valuable insights can assist instructors and students in producing
graduates that meet industry requirements and are of the highest grade. An unexpected evolution in many
domains, particularly in educational teaching and learning processes, is being attributed to technologies like
machine learning and artificial intelligence.
2. Literature Review
The integration of AI and machine learning (ML) into various parts of education has been the subject of several
studies, and several methods and technologies have been used to achieve this. One of these elements is the
evaluation of student performance, and there are several ways to evaluate students' performances. A computerized
evaluation tool has been developed to assess students' overall performance and track their academic development
[2]. The author employs a tree-like set of rules to accurately predict student success; this type of tool uses
educational data mining (EDM) [1]. The huge collection of academic databases is studied using the clustering
data mining approach, which speeds up the search and results in more accurate findings [1].
Nguyen Thai-Nghe, Andre Busche, and Lars Schmidt-Thieme used machine learning approaches to enhance the
prediction of academic performance in two genuine case studies. The class imbalance was addressed through the
application of three different strategies, all of which produced outstanding outcomes. The datasets were first
rebalanced before employing cost-insensitive and cost-sensitive learning algorithms; SVM was used for the
smaller datasets and Decision Tree for the bigger datasets. The models were initially deployed on a local web
server [6].
An evaluation of an online math tutoring program covering 3747 high school students was conducted by San
Pedro et al. They predicted whether or not a student would attend college five years later. The research shows that
students who perform well on the tutoring system in middle school arithmetic are far more likely to enroll in
college five years later, while students who demonstrated confusion or carelessness within the system have a
lower likelihood of enrolling in college. They employed a logistic regression classifier to make the prediction.
The J48, Simple Cart, Rep-Tree, and NB Tree algorithms were examined by Mrinal Pandey and Vivek Kumar
Sharma to forecast engineering student performance in January 2013. They used datasets from 524 students for
the 10-fold cross-validation and datasets from 178 students for the percentage split approach. The 10-fold cross-
validation approach revealed that the J48 decision tree algorithm had a superior accuracy of 80.15 percent, and the
percentage split method increased the J48 algorithm's accuracy to 82.58 percent. The comparison's results
demonstrate that J48 outperforms the other algorithms in both situations. Teachers can help students achieve
better scores by using the J48 decision tree algorithm [5].
3. Methodology
In this study we first clean the data and then use a voting classifier, an ensemble learning method, over three
algorithms: logistic regression, support vector machine, and random forest classifier. We researched ways to
improve the precision of ML classification techniques for student performance. All attributes and particular
features have been examined independently to compare the classifiers' performance in terms of accuracy,
allowing us to pinpoint the key characteristics and increase the effectiveness of the classification process [4].
Ensemble learning is a broad meta-approach to machine learning that tries to enhance predictive performance by
combining the predictions of multiple models. Of the ensembles that can be built to solve a predictive modeling
problem, three strategies dominate the realm of ensemble learning; in fact, rather than just algorithms per se, this
area of study has given rise to many other, more specialized approaches.
A voting classifier is a machine learning model that learns from a collection of numerous models and predicts an
output (a class) based on the class that has the best chance of being chosen as the output. It simply compiles the
outcomes of each classifier that is fed into it, anticipates the output class based on the vote with the highest
majority, and outputs the result. The objective is to create a single model that learns from numerous models and
predicts output based on their cumulative majority of votes for each output class, as opposed to creating individual
specialized models and evaluating each one's correctness. The voting classifier supports two separate voting
processes.
In hard voting, the anticipated output class is the class that received the greatest number of votes, i.e., the class
that the most classifiers predicted as being most likely. For example, if three classifiers predicted the output
classes (A, A, B), the majority anticipated A, so the final forecast will be A.
In soft voting, the output class forecast is based on the average probability assigned to each class. Suppose that,
for some input, the prediction probabilities for class A from the three classifiers are (0.30, 0.47, 0.53) and for
class B are (0.20, 0.32, 0.40). The average for class A is 0.4333, whereas the average for class B is 0.3067; class A
wins as a result of having the highest average probability across all classifiers. A small sketch of both schemes
follows below.
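The voting arithmetic above can be reproduced in a few lines; the numbers below simply mirror the text's
illustrative example (note that the per-classifier probabilities are quoted unnormalized, exactly as in the text).

import numpy as np

# Hard voting: labels from three classifiers; the majority class wins.
hard_votes = ["A", "A", "B"]
hard_winner = max(set(hard_votes), key=hard_votes.count)   # -> "A"

# Soft voting: per-class probabilities from each classifier, averaged.
proba = np.array([[0.30, 0.20],    # classifier 1: P(A), P(B)
                  [0.47, 0.32],    # classifier 2
                  [0.53, 0.40]])   # classifier 3
avg = proba.mean(axis=0)           # -> [0.4333, 0.3067]
soft_winner = ["A", "B"][int(np.argmax(avg))]

print(hard_winner, soft_winner, avg.round(4))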
In the context of artificial intelligence, logistic regression belongs to the family of supervised machine learning
models. Because it tries to distinguish between various classes (or categories), it is also known as a discriminative
model. To calculate the model's beta coefficients, logistic regression maximizes the log-likelihood function [7].
Viewed from the perspective of machine learning, this changes slightly: the negative log-likelihood loss function
is employed, and gradient descent is utilized to locate its minimum (equivalently, the maximum of the log-
likelihood). Based on a number of independent factors, logistic regression calculates the likelihood that a given
event, such as voting or not voting, will take place. The dependent variable has a range of 0 to 1 because the result
is a probability. A minimal sketch of these mechanics is given below.
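The sketch below fits the beta coefficients by gradient descent on the negative log-likelihood, as described above;
the toy data and learning rate are illustrative assumptions, not the paper's implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # bias column
    beta = np.zeros(X.shape[1])                    # beta coefficients
    for _ in range(epochs):
        p = sigmoid(X @ beta)                      # probabilities in (0, 1)
        beta -= lr * X.T @ (p - y) / len(y)        # NLL gradient step
    return beta

X = np.array([[1.0], [2.0], [3.0], [4.0]])         # e.g. a study-effort score
y = np.array([0, 0, 1, 1])                         # fail / pass labels
print(fit_logistic(X, y))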
Random Forest is a classification technique that employs tree data structures to extract rules and patterns from the
input data; its construction involves several decision trees. Each tree is built using input randomization and
bagging, which involves randomly choosing data points from the dataset. This produces an uncorrelated forest of
trees whose collective prediction is more accurate than that of any one tree alone [4].
In this work, evaluation techniques are employed to assess the effectiveness of the algorithms and the quality of
the data. The metrics utilized to determine the optimum outcome include accuracy, precision, recall, and F1 score.
We also make use of the pandas profiling function, which provides a profile report of the dataset, primarily in
terms of its overall characteristics (number of records, number of variables, overall missing values, duplicates,
memory footprint), as well as its presentation.
4. Material
The information is gathered with the aid of the Experience API (xAPI) learner activity tracker tool. The dataset
has 16 characteristics and 480 student records. The features are grouped into three broad categories: demographic
characteristics, like nationality and gender; academic background features, including grade level, section, and
educational stage; and behavioral traits, like raising hands in class, using resources, responding to parent
questionnaires, and satisfaction with the college.
5. Existing System
The existing systems were built using only the Random Forest, support vector machine, or logistic regression
algorithms individually. They give less accurate results, since they use just one algorithm for the result and do not
compare it with the outcomes of the other algorithms, and their performance is consequently poor.
6. Proposed System
In the suggested system, a voting classifier is used to combine the results of more than two algorithms. This
allows the results to be compared to determine which algorithms perform better, and it also increases the overall
accuracy. In this system, support vector machines, logistic regression, and the Random Forest classifier are used.
To compare the accuracy, each method is first tested on its own and then tested with a voting classifier.
The data is first pre-processed to identify missing values, turn categorical values into numbers (feature encoding),
and determine the relationships between the attributes. We employ the profile report function from the pandas
profiling library on the dataset; it provides a static view of the entire report along with a graphical view, as
sketched below.
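A minimal sketch of that profiling step follows. The file name is a placeholder, and the import is an assumption
about the installed version (the library has since been renamed ydata-profiling in newer releases).

import pandas as pd
from pandas_profiling import ProfileReport  # newer releases: ydata_profiling

# "students.csv" is a placeholder name for the xAPI student-activity dataset.
df = pd.read_csv("students.csv")
profile = ProfileReport(df, title="Student dataset profile")
profile.to_file("student_profile.html")     # variables, missing values, duplicates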
The data is then split into training and test sets. The prediction model is developed using the training set, which
holds 80% of the data, and tested using the test set, which contains the remaining 20%. It is crucial to keep in
mind that a proportionate number of students from each class must be present in both the training and test sets.
After creating the training and test sets, the models are built using the logistic regression approach, SVM, and
Random Forest. Choosing the variables and the input data are the main phases in building a model in the Python
programming language. After a model has been created, it is applied to the test data collection. The outcome of
the technique is its accuracy, which is central to this study; both the actual and the anticipated values are covered
in the data. Finally, we improve on the accuracy of the individual algorithms by using the voting classifier model,
as sketched below.
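The following is a minimal end-to-end sketch of this pipeline using scikit-learn, under stated assumptions: the
dataset file name ("students.csv") and the target column name ("Class") are placeholders, and the soft-voting
configuration is one reasonable reading of the text, not necessarily the authors' exact setup.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

df = pd.read_csv("students.csv")
for col in df.select_dtypes(include="object"):
    df[col] = LabelEncoder().fit_transform(df[col])      # feature encoding

X, y = df.drop(columns="Class"), df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)   # 80/20, class-balanced

base = [("lr", LogisticRegression(max_iter=1000)),
        ("svc", SVC(probability=True)),                  # probabilities enable soft voting
        ("rf", RandomForestClassifier(n_estimators=100))]

for name, model in base:                                 # individual accuracies
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))

voting = VotingClassifier(estimators=base, voting="soft").fit(X_train, y_train)
pred = voting.predict(X_test)
print("voting", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))               # precision, recall, F1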
7. Results
Table 1. Before applying the voting classifier
Algorithms | Accuracy
Logistic Regression | 66.66

Table 2. After applying the voting classifier
Algorithms | Accuracy
8. Conclusion
Predicting a student's academic success can help instructors and beginners in conceptually designing their
learning and training methods.
Voting classifiers were compared in this study based on how effectively they could forecast results. Three
different machine learning algorithms were evaluated on the data sets, and three assessment measures were
employed to compare the results. The methods used were SVM classification, random forest, and logistic
regression; a voting classifier was then used to enhance performance. The voting classifier's accuracy is higher
than that of each of the three algorithms on its own.
Even though it was more successful here than the individual methods, a voting classifier is not required if the
model performs better with the algorithms alone. Combining methods can sometimes produce the best outcomes,
but this also depends on the dataset and the features that were picked.
References
[1] B. Albreiki, N. Zaki and H. Alashwal, 2021, A Systematic Literature Review of Student Performance
Prediction Using Machine Learning, Educ. Sci. 2021, 11, 552.
[2] L. Sandra, F. Lumbangoal and T. Matsuo, November 2021, Machine Learning Algorithms to Predict Student
Performance: A Systematic Literature Review, TEM Journal, Volume 10, Issue 4, 1919-1927.
[3] S. Sharma, S. Kumawat and S. Garg, August 2021, Predicting Student Potential Using Machine Learning
Techniques, AISC Volume 1387, 485-495.
[4] T. Al-Hafeez and A. Omar, March 2022, Student Performance Prediction Using Machine Learning
Techniques, rd-1455610/v1.
[5] M. Pandey and V. Sharma, 2012, A Decision Tree Algorithm Pertaining to the Student Performance Analysis
and Prediction, Volume 61, IJCA.
[6] N. Thai-Nghe, A. Busche and L. Schmidt-Thieme, 2009, Improving Academic Performance Prediction by
Dealing with Class Imbalance, Pisa, Italy, 878-883.
[7] B. Muthusenthil, V.S. Mugesh, D. Thansh and R. Subhash, 2020, Predictive Analysis Tool for Predicting
Student Performance and Placement Performance Using Machine Learning Algorithms, Volume 6, Issue 2.
[8] S. Rawat and H. Khosla, 2019, Student Performance Analysis Using Machine Learning Algorithms, Volume 7.
Abstract
Stock prices depend both on time and on associated variables, and finding patterns among the variables aids
forecasting of future stock prices, which is often termed stock market forecasting. Stock market prediction is
extremely challenging due to the dependence of stock prices on several financial, socio-economic, and political
parameters. For real-life applications utilizing stock market data, it is necessary to predict stock market data with
low errors and high accuracy. This needs the design of appropriate artificial intelligence (AI) and machine
learning (ML) based techniques which can analyze large and complex data sets pertaining to stock markets and
forecast future prices and trends in stock prices with relatively high accuracy. This paper presents a
comprehensive review of the techniques used in recent contemporary papers for stock market forecasting.
Keywords: Time Series Models, Stock Market Forecasting, Artificial Intelligence, Artificial Neural Networks,
Forecasting Accuracy
1. Introduction
Stock market movement is extremely volatile and dependent on a multitude of variables. The unpredictability in
the nature of the stock markets makes investing in them a risky proposition [3]. Stock market prediction is
fundamentally a regression problem in which patterns in previous data and its associated variables need to be
found. Stock prices can be mathematically modeled as a time series function:

Price = f(time, variables)

Stock prices depend both on time and on the associated variables, and finding patterns among the variables aids
forecasting of future stock prices, which is often termed stock market forecasting. Stock market prediction is
extremely challenging due to the dependence of stock prices on several financial, socio-economic, and political
parameters. For real-life applications utilizing stock market data, it is necessary to predict stock market data with
low errors and high accuracy. This needs the design of appropriate artificial intelligence (AI) and machine
learning (ML) based techniques which can analyze large and complex datasets pertaining to stock markets and
forecast future prices and trends in stock prices with relatively high accuracy. A minimal sketch of this
supervised-regression framing follows below.
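As a concrete illustration of the Price = f(time, variables) framing, the sketch below builds a lagged-feature matrix
from a price series so that any regressor can be trained on it. The synthetic random-walk series and the choice of
five lags are illustrative assumptions, not taken from the reviewed papers.

import numpy as np
import pandas as pd

def make_lagged(series: pd.Series, n_lags: int = 5) -> pd.DataFrame:
    """Feature matrix of the previous n_lags prices; target = current price."""
    df = pd.DataFrame({f"lag_{k}": series.shift(k) for k in range(1, n_lags + 1)})
    df["target"] = series
    return df.dropna()

rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(size=200)))  # synthetic price walk
data = make_lagged(prices)
X, y = data.drop(columns="target"), data["target"]
print(X.shape, y.shape)   # ready for any regressor (LR, SVR, LSTM, ...)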
Machine learning models are employed for data analysis where the data to be analyzed is extremely large,
complex, or both. AI and ML have been extensively used for financial and business applications where large data
has to be analyzed; one such major area is investment banking [1]-[2]. In such applications, it is necessary to
estimate the future movement in stock market prices. Several decisions pertaining to investments, shares, etc.
depend on the behavior of a company's stocks, and stock price values are often leveraged by financial and
investment firms for gaining profits and investing.
3. Literature Review
The contemporary work in the domain and its noteworthy contributions are cited in this section.
S. Kim et al. in [3] developed a technique termed effective transfer entropy (ETE) to be used in conjunction
with existing ML algorithms such as LR, MLP, and LSTM. The ETE metric served as an exogenous feature
which improved the training performance of the standard training models, based on the entropy measure of the
dataset, which is a stochastic variable of the training data set; the data set used for the study was a US stock
market dataset.
B. Bouktif et al. in [4] designed an N-grams based approach utilizing semantic analysis of data related to stock
movement for the prediction problem. Sentiment polarity was utilized to predict the impact of the users of
different social media platforms on stock prices. The polarities used were positive, negative, and neutral, which
served as tokenized impacts on the feature values of the dataset.
X. Li et al. in [5] devised a deep learning model employing sentiment analysis results to predict stock market
behavior. The individual and cumulative impacts of the sentiment features were used for designing the
sentiment vector for the forecasting model. Textual normalization and opinion mining techniques were
incorporated as features to gauge the sentiments of the general public regarding the reputations of firms, since
previous prices alone may not always reveal the moving trends in the market.
Gaurang Bansal et al. in [6] proposed a decentralized forecasting model incorporating blockchain, which acts
as a distributed ledger for stock market behaviors. Blockchain was used to relate the variables or features for
training the system model to find trends or visible patterns in the data blocks. The performance evaluation of
the system was done in terms of prediction accuracy.
Jithin Eapen et al. in [7] proposed a pipelined approach of CNNs along with a bidirectional LSTM model. The
authors gained significant improvement in the prediction accuracy of the system using the pipelined CNN
model as compared to baseline regression models on the same S&P dataset. The bidirectional LSTM model
was also tested for prediction accuracy on the same database, and it was shown that the pipelined CNN based
approach outperformed the conventional techniques.
Min Wen et al. in [8] proposed a stacked CNN based approach for the analysis of noisy time series data for
stock market behavioral patterns. The stacked CNN structure was able to extract different levels of features at
the different layers of the data set. The proposed technique was shown to perform better than existing
techniques on temporal stock market behavioral data patterns.
Y. Guo et al. in [9] proposed a modified version of the support vector regression (SVR) model with a weight
updating mechanism based on evolutionary algorithms such as PSO. The inclusion of PSO helped in finding
the local and global best feature values while optimizing the objective function simultaneously. It was shown
that the proposed approach could outperform the existing regression and backpropagation models.
M.S. Raimundo et al. in [10] proposed a technique that was an amalgamation of the wavelet transform and
support vector regression. The technique used the DWT as a data processing tool and retained the approximate
coefficient values of the multi-level DWT analysis of the raw data, thereby enabling more noise immunity for
the SVR algorithm. The DWT-SVR hybrid was shown to perform better in terms of accuracy than SVR alone.
S.No. | Authors | Approach | Research gap
2 | Bouktif et al. | Opinion mining and sentiment analysis used along with historical stock prices for market prediction. | No data filtering or optimization approach used.
5 | Eapen et al. | A pipelined approach of CNNs along with a bi-directional LSTM model. | Dimensional reduction and data optimization not used; opinion mining not employed.
6 | Wen et al. | Stacked CNN based approach for the analysis of noisy time series data for stock market behavioral patterns. | No estimation of overfitting for the CNN model.
7 | Guo et al. | Adaptive support vector regression (SVR) model with a weight updating mechanism based on Particle Swarm Optimization. | Opinion mining and filtering of data not employed; SVR performance generally saturates, after which adding more training data does not increase the accuracy of the system.
8 | Raimundo et al. | Combination of the discrete wavelet transform (DWT) and support vector regression; the DWT was used as a data filtering tool. | SVR's performance does not improve above a threshold; opinion mining data not considered.
9 | Baek et al. | An amalgamation of two LSTM layers; the first layer avoided the chances of overfitting while the second LSTM block was used for prediction. | No data optimization or sentiment analysis data used.
10 | Selvin et al. | Predictions based on the daily closing price; the models used were ARIMA, GARCH, LSTM, RNN and sliding-window CNN. | Only the daily closing price chosen as the time-dependent feature; very restricted feature set.
11 | Zhao et al. | Time-weighted feature vectors used to train an LSTM neural net. | No estimates of overfitting or vanishing gradient.
Y. Baek et al. in [11] designed a deep neural network named ModAugNet. The deep neural network was an
amalgamation of two LSTM layers: the first layer avoided the chances of overfitting, while the second LSTM
block was used purely for prediction. The approach was novel in the sense that a similar network with different
hyperparameters was used for the optimization and prediction purposes.
S. Selvin et al. in [12] utilized different data fitting algorithms for stock movement estimates. The data fitting
approaches utilized were both linear and non-linear, such as ARIMA and GARCH. The exogenous input
feature vector was the closing price of the day, which served as a separate feature value. The system's
performance on the same data set was tested with and without the closing price as an exogenous input.
Z. Zhao et al. in [13] developed an approach which utilized time-weighted feature vectors to train an LSTM
neural net. The essence of the proposed approach was the fact that a recent temporal sample carries a different
weight compared to the generalized weights of a normal feature vector. The performance of the system was
evaluated in terms of the accuracy achieved; the designed system achieved an accuracy score of 83.91% when
fed with refined feature values.
D.M.Q. Nelson et al. in [14] proposed an LSTM based model for stock market prediction along with technical
analysis indicators. The fundamental approach of the system was to find the correlation among different
variables of stock market movement. The price indicators of a particular company in a specific stock market
were linked to the same stock in other stock markets listed globally: the effect of the closing price of a stock in
one stock exchange was linked to the opening price of the same stock in another stock exchange. Thus, along
with the historical data, the correlation among other variables was also evaluated.
M. Billah et al. in [15] designed a backpropagation based neural network training algorithm with data
structuring. The Levenberg-Marquardt (LM) weight updating rule was used to forecast closing prices of stocks
on the Dhaka Stock Exchange. It was shown that the LM algorithm needed less memory as well as fewer
iterations compared to conventional neural networks and ANFIS systems. The performance evaluation metric
was the accuracy of the system.
H.J. Sadaei et al. in [16] proposed a fuzzy time series predictor based on the concept of fuzzy expert systems.
Fuzzy set creation based on temporal data was done, followed by the design of membership functions. Finally,
the fuzzy relationships were computed, and the defuzzification block was used to predict future trends in the
stock prices. Different membership functions were used for the purpose of designing the fuzzy sets.
G.R.M. Lincy et al. in [17] proposed a model based on multiple fuzzy inference systems and applied it to
NASDAQ stock exchange data. The proposed system was pitted against conventional ANFIS systems, and it
was shown that the proposed system outperformed the conventional expert-system based techniques.
4. Performance Metrics
The parameters which can be used to evaluate the performance of an ANN design for time series models are
given by:
1) Mean Absolute Error (MAE):

MAE = (1/N) Σ_{t=1}^{N} |V_t - V̂_t|

where N is the number of predicted samples, V_t is the predicted value, V̂_t is the actual value, and
e_t = V_t - V̂_t is the error value.
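For concreteness, the MAE defined above can be computed directly as below; the three value pairs are
illustrative, not data from any of the reviewed studies.

import numpy as np

V = np.array([100.9, 103.1, 103.0])        # predicted values V_t
V_hat = np.array([101.2, 102.8, 103.5])    # actual values V-hat_t
mae = np.abs(V - V_hat).mean()             # (1/N) * sum of |V_t - V-hat_t|
print(f"MAE = {mae:.3f}")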
5. Conclusion
This paper presents a comprehensive review of, and taxonomy for, machine learning based approaches to stock
market prediction or forecasting. As the data to be analyzed is extremely large and complex, it is practically
mandatory to employ machine learning for the regression analysis. The multiple machine learning and deep
learning approaches used in contemporary work have been cited, and the related research gaps have been
identified. It is expected that this paper puts future research directions in better stead, with the aim of enhancing
forecasting accuracy.
References
[1] Martin T. Hagan, Howard B. Demuth, Mark H. Beale, Orlando De Jesus, "Neural Network Design", 2nd edition, Cengage Publications.
[2] Shai Shalev-Shwartz, Shai Ben-David, "Understanding Machine Learning: From Theory to Algorithms", Cambridge University Press.
[3] S. Kim, S. Ku, W. Chang, J. W. Song, "Predicting the Direction of US Stock Prices Using Effective Transfer Entropy and Machine Learning Techniques", IEEE Access 2020, Vol. 8, pp. 111660–111682. DOI: 10.1109/ACCESS.2020.3002174
[4] S. Bouktif, A. Fiaz, M. Awad, Amir Mosavi, "Augmented Textual Features-Based Stock Market Prediction", IEEE Access 2020, Vol. 8, pp. 40269–40282. DOI: 10.1109/ACCESS.2020.2976725
[5] X. Li, P. Wu, W. Wang, "Incorporating stock prices and news sentiments for stock market prediction: A case of Hong Kong", Information Processing & Management, Elsevier 2020, Vol. 57, Issue 5, pp. 1-19. https://doi.org/10.1016/j.ipm.2020.102212
[6] Gaurang Bansal, Vikas Hasija, Vinay Chamola, Neeraj Kumar, Mohsen Guizani, "Smart Stock Exchange Market: A Secure Predictive Decentralized Model", 2019 IEEE Global Communications Conference (GLOBECOM), IEEE 2019, pp. 1-6. DOI: 10.1109/GLOBECOM38437.2019.9013787
[7] Jithin Eapen, Doina Bein, Abhishek Verma, "Novel Deep Learning Model with CNN and Bi-Directional LSTM for Improved Stock Market Index Prediction", 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), IEEE 2019, pp. 0264-0270. DOI: 10.1109/CCWC.2019.8666592
[8] Min Wen, Ping Li, Lingfei Zhang, Yan Chen, "Stock Market Trend Prediction Using High-Order Information of Time Series", IEEE Access 2019, Vol. 7, pp. 28299–28308. DOI: 10.1109/ACCESS.2019.2901842
[9] Y. Guo, S. Han, C. Shen, Y. Li, X. Yin, Y. Bai, "An adaptive SVR for high-frequency stock price forecasting", IEEE Access 2018, Vol. 6, pp. 11397–11404. DOI: 10.1109/ACCESS.2018.2806180
[10] M. S. Raimundo, J. Okamoto, "SVR-wavelet adaptive model for forecasting financial time series", 2018 International Conference on Information and Computer Technologies (ICICT), IEEE 2018, pp. 111-114. DOI: 10.1109/INFOCT.2018.8356851
[11] Y. Baek, H. Y. Kim, "ModAugNet: A new forecasting framework for stock market index value with an overfitting prevention LSTM module and a prediction LSTM module"
Abstract
A shell structure is a thin structure composed of curved sheets of material; shell structures are inspired by the
natural element called the "shell". It is a thin curved member or slab, usually of reinforced concrete, that
functions as both structure and covering. The waviness (curvature) plays an important role in the structural
behaviour, realizing a spatial form. Natural elements showing these properties include the eggshell, the seashell
and fruit shells such as the walnut. This paper presents the reinforced concrete shell as a very efficient structure:
wide spanning, architectonically beautiful, and a relevant and valuable structural solution. Shell structures are
very attractive lightweight structures, which are especially suited to building as well as industrial applications.
The actual design of shells involves theories of shells and the use of appropriate codes of practice. Thus, while
making use of some existing codes on shells, in order to provide a text that could be used in various countries,
we have attempted to present the designs apart from the existing codes.
1. Introduction
A concrete shell is also commonly called a thin-shell concrete structure. It is a structure composed of a relatively
thin shell of concrete, usually with no interior columns or exterior buttresses. The shells are most commonly flat
plates, curved panels and domes, but may also take the form of ellipsoids, cylindrical sections, or some
combination thereof. Concrete shell structures are created to span large distances with a minimal amount of
material. While maintaining this economy of material, these forms have a light, aesthetically pleasing, sculptural
appeal. Shells are spatially curved structures which support externally applied loads. As per IS 2210: 1988, a shell
structure is efficient because its stressed-skin behaviour allows it to carry loads largely through direct stresses
rather than bending. In general, shells may be broadly classified as 'singly-curved' and 'doubly-curved'; this
classification is based on Gauss curvature theory. The Gauss curvature of singly curved shells is zero because one
of their principal curvatures is zero.
Doubly-curved shells are non-developable and are classified as synclastic or anticlastic according to whether
their Gauss curvature is positive or negative. Thin-shell concrete structures are structurally efficient systems for
covering large spans in world architecture. However, their construction has seen a sharp decline since their
golden period between the 1920s and early 1960s, with the possible exception of air-inflated domes. Commonly
cited reasons for their disappearance are the cost of formwork and the rising cost of associated labour, together
with declining interest from architects. Thin-shell concrete structures are pure compression structures formed
from inverse alysoid (catenary) shapes. Alysoid shapes are those taken by a string or fabric when allowed to hang
freely under its own weight. The free-hanging form is in pure tension, as a string can bear no compression. Pure
compression is ideal for concrete because concrete has high compressive strength and very low tensile strength.
These shapes maximize the effectiveness of concrete, allowing it to form thin, light, long-span structures. The
effort in the design of shells is to make the shell as thin as the architecture allows, so that the dead weight is
reduced and the structure functions free from large bending stresses. By this means, a minimum of material is
used to the maximum structural advantage. Illustrations of natural shell structures include coconut shells, tortoise
shells, seashells and nutshells; man-made shell structures include tunnels, roofs, helmets, drink cans and boats.
2. Objective of Study
There are two types of shell structures: singly curved (developable) shells and doubly curved (non-developable)
shells. A singly curved surface has its radius in one plane; examples are a running track, the edge of a dinner
plate, the wheel of a bicycle, etc. A doubly curved surface has its radii in two planes; examples are a
hemispherical bowl, a fish bowl, a cooking vessel, the hull of a boat or a coracle, etc. Based on the above study of
shell structure, we choose a doubly curved, anticlastic, elliptic-paraboloid shell structure. This type of shell is
based on a surface of translation or a ruled surface.
IS 2210-1988, Section 7 gives some guidance in choosing the preliminary dimensions of the various types of
shells. We review it in the chapters dealing with the various types of shells, and then choose a "cowboy hat" type
shell structure having elliptic-paraboloid round rings with a central dome.
Shells are usually thickened for some distance from their junction with edge members and traverses. The
thickening is usually of the order of 30 percent of the shell thickness. It is, however, important to note that undue
thickening is undesirable. In the case of singly-curved shells, the distance over which the thickening at the
junction of the shell and traverse is made should be between 0.38√(Rd) and 0.76√(Rd), where R and d are the
radius and the thickness, respectively. The thickening of the shell at straight edges shall depend on the transverse
bending moment. For doubly curved shells, this distance will depend upon the geometry of the shell and the
boundary conditions, as the extent of bending penetration is governed by these factors. For the initial analysis,
we adopt this structure with a 50 m diameter.
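As a quick arithmetic illustration of the thickening-extent rule above, the following Python sketch evaluates the band for this paper's trial geometry; the 0.38√(Rd) to 0.76√(Rd) form and the input values are assumptions for illustration, not a substitute for the code provisions:

import math

R = 25.0    # shell radius in m (50 m diameter / 2)
d = 0.200   # trial shell thickness in m (200 mm)

lower = 0.38 * math.sqrt(R * d)   # start of thickened zone from the traverse
upper = 0.76 * math.sqrt(R * d)   # end of thickened zone from the traverse
print(f"Thickening extent: {lower:.2f} m to {upper:.2f} m")

For R = 25 m and d = 0.2 m this gives a band of roughly 0.85 m to 1.70 m, which would be rechecked against IS 2210 during detailed design.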
The height of the outer ring is 6 m, extending to a central dome at a height of 10 m; the dome itself is 7 m high
and 10 m in diameter, giving a total structure height of 17 m. As per IS 2210, the thickness of shells shall not
normally be less than 50 mm if singly curved and 40 mm if doubly curved. For the initial analysis we adopt a
shell thickness of 200 mm; the value will vary after detailed design.
3. Notable Shell Structures
On the French ocean side, alongside the excitement of the beaches of Grande Conche, Pigeonnier and Foncillon,
the city of Royan immerses us in the 1950s with its modernist shell structures: joyful and radiant seaside
architecture, characteristic of this bold style. The shell-shaped Central Market is one of the emblematic
monuments of the city, built from 1955 to the plans of the architects Louis Simon and Andre Morisseau, in
association with the engineer Rene Sarger, after the earlier structure was destroyed during the bombing of the
city in 1945. The structure has a thin R.C.C. shell cover only 7 cm thick. It was an architectural feat at the time
and served as a model for the construction of the CNIT at La Défense and an inspiration for the circus of
Bucharest. The dome has been classified as a historic monument since 2002.
The Lotus Temple, also known as the Bahai Temple, is one of the most visited monuments in India. Construction
was completed in 1986, and because the temple is designed like a lotus flower it is known as the Lotus Temple.
This remarkable structure was built by the Persian architect Fariborz Sahba, based in Canada. This shell-shaped
temple is one of the emblematic monuments of India.
Construction of the Sydney Opera House began in March 1959, after the demolition of the existing Fort
Macquarie Tram Depot, following a gestation beginning with Jorn Utzon's selection as winner of an international
design competition in 1957. The project was built in three phases: first the foundations and the podium
overlooking Sydney Harbour, second the construction of the outer shells, and last the construction of the interior.
The structure was designed by the Danish architect Jorn Utzon but completed by an Australian architectural team
headed by Peter Hall; the building was formally opened by Queen Elizabeth II on 20 October 1973.
L'Oceanografic in Valencia, Spain, is an oceanarium situated on the dry Turia River bed to the southeast of the
city centre, where different marine habitats are represented. It was opened on 14 February 2003. This steel-fibre
reinforced concrete thin-shell structure was designed in 1997 by the renowned architect Felix Candela, then aged
87, together with the structural engineers Alberto Domingo and Carlos Lazaro. The typical hyperbolic-paraboloid
shape of the roof is reminiscent of the Los Manantiales Restaurant in Mexico City, which Candela designed in 1958.
4. Design Consideration
4.1 Concrete
Controlled concrete shall be used for all shell and folded plate structures. The concrete shall be of minimum
grade M20. The quality of materials used in the concrete and the methods of proportioning and mixing shall be
in accordance with the relevant provisions of IS: 456-1978.
4.2 Steel
The steel for the reinforcement shall be as follows. Diameters of Reinforcement Bars: the following diameters
of bars may be provided in the structure of the shell; larger diameters may be provided in the thickened portions,
transverse reinforcement and beams:
Minimum diameter: 8 mm, and
Maximum diameter: 1/4 of the shell thickness or 16 mm, whichever is smaller (for the 200 mm trial thickness
adopted here, min(200/4, 16) = 16 mm).
4.4 Slope
Generally, if the slope of the shell exceeds 45°, it will be too steep for easy concreting.
In the design of the rectangular grid for cylindrical shells, the reinforcement shall usually be divided into the
following three groups:
a) longitudinal reinforcement to take up the longitudinal stress Nx or Ny, as the case may be;
b) shear reinforcement to take up the principal tension caused by the shear Nxy; and
c) transverse reinforcement to resist Ny and My. Longitudinal reinforcement shall also be provided at the
junction of the shell and the traverse to resist the longitudinal moment M.
The following stages in the design of concrete shell roofs can be identified.
1. Determination of the shell form, its supports and loads (limit states)
2. Analysis of internal stress resultants and displacements
3. Design/verification of shell reinforcement
4. Verification of the adequacy of concrete material and thickness
5. Conclusion
An overview of selected shell structures has been presented in this paper, together with information collected
about some shell structures built so far. The advantages and disadvantages of shell structures have been
discussed, and the materials and design considerations needed to realize a shell structure are included. The next
goal is to model the shell structure of the chosen shape in structural software and design it completely, removing
all the shortcomings. It will be examined minutely through the various shell theories and analyses. Along with
this, where this structure can be used will also be discussed in the design part.
References
[1] Stefan J. Medwadowski and Avelino Samartin, "Design of Reinforcement in Concrete Shells: A Unified Approach", Journal of the International Association for Shell and Spatial Structures: IASS, Vol. 45 (2004).
[2] V. Kushwaha, R. S. Mishra, S. Kumar (Civil Engineering Dept., SSITM Bhilai, Chhattisgarh, India), "A Comprehensive Study for Economic and Sustainable Design of Thin Shell Structure for Different Loading Conditions", IRJET, Volume 03, Issue 01, Jan-2016.
[3] Shraddha Malviya, Ketan Jain, "Study of Shell Structure and Analysis of Structure Failure", International Journal of Research in Engineering, Science and Management, Volume 2, Issue 11, November 2019.
[4] V. Sravana Jyothi, "Design and Analysis of Reinforced Concrete Shell", IJSRD - International Journal for Scientific Research & Development, Vol. 3, Issue 09, 2015, ISSN (online): 2321-0613.
[5] Y. Kamala Raju, N. Tejaswi and S. Anjali Reddy (Dept. of Civil Engg., GRIET, Hyderabad, India), "Reinforced Cement Concrete Cylindrical Shell for Parking Sheds", May-June 2020, ISSN: 0193-4120.
[6] Srinivasan Chandrasekaran, S. K. Gupta, Federico Carannante, "Design aids for fixed support reinforced concrete cylindrical shells under uniformly distributed loads", International Journal of Engineering, Science and Technology, Vol. 1, No. 1, 2009.
[7] Er. Mohammed Sahil, Er. Prafull Kothari, "Case Study on Architecture of Lotus Temple", IJERT, Vol. 9, Issue 05, May-2020, IJERTV9IS050907.
[8] Girish G. M., Shri Mahadevan Iyer, Dr. Neeraja D., "Parametric Study on Behavior of Concrete Shell under Uniform Loading", IJERT, ISSN: 2278-0181, Vol. 4, Issue 03, March-2015, IJERTV4IS030838.
[9] IS: 2210-1988, "Criteria for Design of Reinforced Concrete Shell Structures and Folded Plates", Bureau of Indian Standards, New Delhi.
Abstract
In today's era, high-rise structures are increasingly subjected to blast loading, and the dynamic design of RC
structures for explosive loads is a serious issue. Dynamic breakdown of buildings and bridges is one of the
most important failure modes to be considered during the design of RC structures. Very few techniques exist
to avoid this kind of failure in multistoried buildings, and these methods are not commonly used in the
modeling and analysis of buildings, bridges and other heavy structures. One of these methods is the GSA
criterion for designing high-rise buildings subject to the possibility of dynamic breakdown. Different
researchers have attempted to study the behaviour of structures under dynamic failure. In this paper a detailed
summary of these research studies is collected and some conclusions are drawn. A building located in Bhopal
(Seismic Zone II of India) is modeled in ETABS software, and explosive loads are assigned to the building as
per the GSA guidelines. Dynamic breakdown is a widely studied topic in the civil engineering domain, since
it may lead to injuries and to the loss of public and private property. Breakdown of structures is also common
during accidents: when one or more columns of a structure are severely damaged, an alternate load path
works for the transfer of load.
Keywords: Base shear, ETABS, Dynamic Breakdown, Demand capacity ratio, GSA, PMM ratio.
1. Introduction
High-rise structures are very common nowadays. Especially in populated countries like India there is a
shortage of space for building more individual houses; hence multiple flats are constructed in a single
apartment tower. High-rise tower buildings also have disadvantages: they are critical to design and are often
subjected to additional loads such as wind load and accidental loads. In general, a building is designed for
ordinary loads like dead load, live load and seismic load, which is suitable for structures with a low level of
risk such as mid-height apartments; but while designing any high-rise structure, consideration of dynamic
loads is a very important step to be followed by the designer. Dynamic load in a structure can develop due to
sudden loading such as accidental load, blast load, or the explosion of a gas cylinder or pipeline. Such loading
cannot be ignored while finalizing the sizes of beams, columns and other load-carrying members. Improper
or false design of any load-carrying member may lead to the failure of the whole structure, with or without
warning. If the whole structure collapses due to the sudden failure of a column or beam under dynamic load,
this kind of failure is known as the dynamic breakdown of the structure. Its best-known example is the failure
of the World Trade Center in the US, also known as the 9/11 attack. Many other conditions of sudden impact,
such as the collision of a heavy vehicle with a column of a building, a bomb blast, removal of a beam or
column due to the blast of a gas cylinder, or melting of steel members in a precast building due to a fire
breakout,
all of the above may lead to dynamic breakdown failure in any high-rise building, whether it is precast or
cast in situ.
2. Literature Review
To understand the phenomenon of dynamic breakdown and the behaviour of structures under dynamic loads,
a number of research papers published online in reputed journals were reviewed, and based on their study
some important points are noted.
Mansor et al. in 2019 discussed the advantages and disadvantages of using energy dissipation technologies
in a multistoried building under dynamic loads. The building was situated in seismic zone IV on medium
soil, under medium to high frequency content. A steel-framed structure with state-of-the-art technology was
considered. The analysis was done using the FEM methodology, and the results were reviewed in the form
of base shear, maximum storey drift, maximum storey displacement and maximum bending moment. The
authors concluded that a steel-framed structure with energy dissipation techniques is suitable for high-risk
areas where the possibility of dynamic loading is high compared to other loads.
Stanislav et al. in 2019 considered a G+10 precast building situated in Asia, with steel and concrete in its
main structure, medium soil, and a high-risk seismic zone. ETABS software was used for modelling; HYSD
and mild steel were used with concrete of grade M25. Two-way slabs were taken and minimum steel was
provided as per the recommendations of the IS code. The design was done using the limit state method
(LSM), a suitable factor of safety was applied to the possible load combinations during the analysis, and the
results were obtained.
Rahai et al. in 2019 considered an L-shaped building in India subjected to ordinary loads as per IS 456 and
IS 1893. Four cases of blast on columns, exterior and interior, were considered for the study. Dynamic
breakdown was studied based on the results for the PMM and DCR ratios in the different cases. It was
concluded that all cases are critical when a blast occurs, but the interior case is the most dangerous as far as
the alternate load path is concerned.
Tripathi et al. in 2019 worked on the performance of a rectangular, plan-symmetric high-rise building in
Seismic Zone II under accidental loading. The authors took different possible locations of explosion at the
ground and mid storeys of a 15-storey building. The equivalent static method was performed using ETABS
software under ordinary loading conditions including blast load, and the severity of the blast was tested as
per the General Services Administration (GSA) guidelines of the USA. According to their study, the interior
column of the ground floor is very critical to design, as it cannot create an alternate load path when subjected
to blast load. The authors also concluded that a corner column is not very critical when the building is
subjected to accidental or explosive load, meaning there is very little possibility of the occurrence of dynamic
breakdown in that case.
Pradeep et al. in 2018 reviewed literature related to dynamic failure in cement concrete buildings where
lateral loads are also considered. The study was based on the concept that when a load-carrying member
such as a beam or column is damaged by the sudden vibration of blast loading, the load is transferred to the
neighbouring beams or columns. If these beams and columns are not designed for the additional load, the
building will certainly fail, and not just one member: all the connected members are also subjected to severe
failure patterns. The waves travelling during any accident or gas cylinder explosion are also a form of
dynamic loading; hence structures where such activities take place can also be considered critical for design.
Abbas et al. in 2017 discussed the causes of dynamic breakdown in a high-rise precast steel structure.
SAP2000 software was used for the analysis and modelling. The design was done as per the recommendations
of IS 456:2000 and IS 800:2007. A grid-based plan of the structure was selected and then exported to the
SAP software.
Shaikh et al. in September 2016 studied the progressive collapse of an RC structure in accordance with the
guidelines of GSA: 2003, using the finite element based software ETABS. They conducted the analysis on
an RCC structure in which the columns at critical locations were removed to explore the importance of slab
depth in resisting progressive collapse, and concluded that:
a) the structure becomes more critical when an interior column at the ground floor is removed;
b) since the axial resistance capacity increases as slab thickness increases, thicker slabs offer more
resistance to progressive collapse;
c) corner column removal causes a fixed beam to behave as a cantilever beam, and due to the lack of
reinforcement at the top face the beam is liable to fail;
d) middle column removal causes a fixed beam to behave as a continuous beam, leading to a scarcity of
reinforcement at the bottom face, which could be the cause of failure; and
e) the sagging DCR decreases steadily, due to the constant sagging capacity of the square building.
Shefna et al. in 2014 studied the behaviour of a G+11 structure with plan dimensions of 20 m x 25 m.
Conventional columns, beams and slabs were modeled as per trials based on previous observations.
Earthquake load of Zone II was considered and the analysis was performed using STAAD Pro V8i software.
A limit state of collapse analysis was performed to study the dynamic failure pattern of the structure. It can
be concluded that the suitable use of a bracing system may strengthen an RCC structure under dynamic
breakdown conditions.
Tavakoli et al. in October 2012 used nonlinear static (pushover) analysis to investigate dynamic breakdown
under seismic loading, considering 3-D and 2-D models of SMRF in ETABS. The lateral loading patterns
were a triangular load pattern and a uniform load pattern, and the capacity curve for each pattern was
determined. Critical columns were made to lose 40%, 70% and 100% of their effective area, and the capacity
curves for each case were determined and compared. Their conclusion was that as the number of storeys and
bays increases, the capacity of the structure to resist dynamic breakdown under lateral loading also increases;
increasing the number of bays and storeys induces a higher robustness index.
Joshi et al. in 2013 studied analysis procedures for the assessment of progressive collapse, using SAP2000
for nonlinear dynamic analysis, and concluded that a heavy penalty in the form of an increased load factor
arises in the linear static and nonlinear static procedures, whereas nonlinear dynamic analysis makes it
possible to find the exact loading and the correct behaviour. The applied loading in the simplified procedures
is considerably less than that in the actual analysis and design. It is also very important to consider the
nonlinear effect of the floor slab in the analysis.
Alireza et al. in 2012 studied the effect of height irregularity of RC structures on progressive collapse
through three RC buildings of 6 storeys each, designed according to the Iranian concrete code (ABA) and
checked against ACI provisions, comparing the DCR values across different assessment procedures. It was
observed that the dynamic amplification factor of 2 used in the linear static condition is a good check for the
static assessment procedure, since the linear static and linear dynamic analysis procedures yield
approximately the same maximum moment. The static analysis gives lower DCR values compared to the
dynamic methodology, which may be a consequence of the dynamic amplification factor of 2 used in the
linear dynamic assessment; the linear dynamic assessment gives more conservative results than the static
analysis. They also concluded that the linear static and linear dynamic procedures yield approximately the
same maximum deflection. Case II of the linear dynamic analysis (LDA), i.e., the RC frame with a removed
column, had the highest DCR value in comparison with the other LDA and LSA cases. The results showed
that the DCR of the frame is 1.98, which is under 2, i.e., within the GSA criterion; hence the frame is less
vulnerable to dynamic breakdown.
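Since several of the studies above accept or reject members against the GSA limit of 2, a minimal illustrative sketch of that check is given below; the moment values are invented for illustration, not taken from any cited paper:

def dcr(demand_moment, capacity_moment):
    """Demand-capacity ratio of a member after column removal."""
    return demand_moment / capacity_moment

# Example reproducing the value reported above: 1.98 < 2.0 passes the GSA check
value = dcr(demand_moment=396.0, capacity_moment=200.0)   # kN-m, illustrative
print(f"DCR = {value:.2f} ->", "acceptable" if value < 2.0 else "vulnerable")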
3. Conclusion
From the above research studies one thing is very clear: precast and framed structures such as multistoried
buildings, bridges and complexes are soft targets for dynamic breakdown failure, because such structures are
fabricated, and any damage to a load-carrying member like a column or beam may lead to the failure of the
whole structure within a few seconds. This failure is sudden, meaning the structure fails without warning;
hence it is very necessary to avoid such failure in densely populated countries like India, where numerous
multistoried apartments house large numbers of people living and working together. Similarly, structures
designed with steel connections, such as railway bridges, factories and other tower buildings, may also be
subjected to dynamic load very often. Summarizing all the research papers mentioned earlier, one thing is
common: a high-rise framed or precast building should be designed with consideration of dynamic loads as
well. As for the solution to these issues, it is recommended to provide alternate load paths in the structure,
such as bracings, shear walls, diaphragm walls, etc. It is also observed that the beams are critical in flexure,
so we can also strengthen the beams by using fiber-reinforced concrete instead of conventional concrete
during casting.
References
[1] Meshal A. Abdulsalam and Muhammad Tariq A. Chaudhary, "Progressive collapse of reinforced concrete buildings considering flexure-axial-shear interaction in plastic hinges", Cogent Engineering (2021), 8: 1882115. https://doi.org/10.1080/23311916.2021.1882115
[2] Stanislav Pavlov and Olga Tusnina, "Progressive collapse evaluation in industrial building of existing production", E3S Web of Conferences 97, 04053 (2019).
[3] Shubham Tripathi, Dr. A. K. Jain, "Progressive Collapse Assessment of RCC Structure under Instantaneous Removal of Columns and its Modelling Using Etabs Software", IOSR Journal of Engineering (IOSRJEN), Vol. 09, No. 10, 2019, pp. 27-36.
[4] Nur Ezzaryn Asnawi Subki, Hazrina Mansor, "Progressive Collapse Assessment: A review of the current energy-based Alternate Load Path (ALP) method", MATEC Web of Conferences 258, 02012 (2019). https://doi.org/10.1051/matecconf/201925802012
[5] Y. A. Al-Salloum, H. Abbas, "Progressive collapse analysis of a typical RC high-rise tower", Journal of King Saud University – Engineering Sciences 29 (2017), pp. 313-320.
[6] Valerii Pershakov, Andrii Bieliatynskyi, "Progressive Collapse of High-Rise Buildings from Fire", MATEC Web of Conferences 73, 01001 (2016).
[7] A. Choubey and M. D. Goel, "Progressive Collapse Analysis of RCC Structures", International Journal of Optimization in Civil
Abstract
Beam-column and beam-shear wall connections are the most critical components of reinforced concrete
(RCC) structures. They serve as a load transfer path and take a significant portion of the overall shear.
Joints in RCC structures constructed with no seismic provisions have insufficient capacity and ductility
under lateral loading and can cause the progressive failure of the entire structure; the joint may fail in
shear prior to the connecting beam and column elements. A sound understanding of RCC joint shear
behaviour is therefore essential, because severe damage within a joint panel can cause deterioration of
the total performance of RC beam-column connections or frames.
Older concrete beam-column or shear wall joints that lack transverse reinforcement are vulnerable to
excessive damage, possibly contributing to building collapses during strong earthquakes. Tools to predict
the shear strength of joints with ductile details exist. Because of the complicated behaviour of the joint
core, the calculation of joint shear demand becomes crucial, so confinement is provided at such joints;
however, this confinement results in congestion at the joints. Experimental research has been carried out
on a newly proposed design of plate-reinforced composite (PRC) coupling beams, which also help beams
carry high shear stress.
In this paper an attempt is made to understand the behaviour of shear stress and ductile shear demand
capacity at beam-column or beam-shear wall junctions for different grades of concrete, with the use of
steel plates, using ETABS 2019 software.
Keywords: Seismic, deterioration, ductile, collapses, confinement, shear demand capacity.
1. Introduction
The beam-column or beam-shear wall joint is an essential zone in a reinforced concrete frame. It is
subjected to large forces during severe ground shaking, and its behaviour has a significant effect on the
response of the structure. The assumption that the joint is rigid fails to consider the effects of the high
shear forces developed within the joint. Shear failure is usually brittle in nature, which is not an acceptable
structural performance, especially in seismic conditions. Understanding the joint behaviour is important
in exercising proper judgment in the design of joints. Therefore, it is vital to discuss the seismic actions
on the various types of
joints and to focus on the critical parameters that affect joint performance, with particular reference to
bond and shear transfer. The anchorage length requirements for beam bars, the provision of transverse
reinforcement and the role of stirrups in shear transfer at the joint are the principal issues. A study of the
use of additional cross-inclined bars at the joint core suggests that the inclined bars introduce an additional
mechanism of shear transfer, and that diagonal cleavage fracture at the joint can be averted.
Demand for real estate in urban areas is growing every day; to meet this need in urban areas, the only
option is vertical growth. This kind of development places greater lateral-resistance challenges for wind
and earthquake loads. Supporting the slabs on beams and the beams on columns is standard practice in
architecture and construction; this may be called beam-slab construction. The contribution of the slab to
the beam-column joint was first considered in ACI 352-02. The beam-column or shear wall-slab
connection becomes a critical problem when we talk about lateral load, i.e., seismic load.
A beam-column-slab connection is the combination of the joint and the beam, column and slab adjoining
the joint; a joint is defined as that portion of the column within the depth of the deepest beam that frames
into the column. Beam-slab connections are of three kinds: interior beam-column-slab connections,
exterior beam-column-slab connections and corner beam-column-slab connections.
2. Literature Review
Research in the field of beam-column or beam-shear wall junctions has gained momentum since the
1970s, and many papers related to this area have been published in conferences and journals. The
literature on beam-column or beam-shear wall connections is discussed in detail in this chapter.
In this work, a detailed three-dimensional (3D) nonlinear finite element model was developed to study the
response and predict the behavior of a beam-column connection subjected to cyclic loads that was tested
at the Karunya Institute of Technology and Sciences (KITS) laboratory.
The beam column joint is modelled using 3D solid elements and surface-to-surface contact
elements between the beam/column faces and interface grout in the vicinity of the connection.
The model takes into account the pre-tension effect in the post-tensioning strand and the
nonlinear material behavior of concrete. Fracture of the mild-steel bars resulted in the failure
of the connection. In order to predict this failure mode, stress and strain fields in the mild-steel
bars at the beam–column interface were generated from the analyzed model. In addition, the
magnitude of the force developed in the post-tensioning steel tendon was also monitored and it
was observed that it did not yield during the entire loading. Steel mesh was developed in the
beam to increase the shear capacity. Finite element modelling will provide a practical and
economical tool to investigate the behavior of such connections.
The analysis was carried out using pushover analysis, and the variation of horizontal and vertical
irregularity was also reviewed.
3. Objective
As discussed above, the beam-shear wall junction plays a major role in an earthquake-resistant building,
which can resist loads through ductile detailing and an understanding of the behaviour of shear stress and
shear demand capacity at the junction.
Here we take a residential building plan with dimensions of 41 m x 17.1 m and G+30 floors. The analysis
is done with different grades of concrete, from M60 to M30, gradually decreasing with height. Seismic
Zone IV is considered with an importance factor of 1.2.
From the initial analysis and basic observation, the outer periphery beams induce major shear stress at the
shear wall junction. Link beams also play a major role in ductile detailing. Increasing the concrete grade
increases the permissible limit of shear stress and can satisfy the beam check without extra measures
(diagonal rebar and shear-plate effects in the beam).
Fig 2. Framing Plan Layout with Shear Wall and Beam Size.
4. Discussion
1. To compare the effect of shear stress on the beam-shear wall junction for those beams which act under
higher lateral loading, with the effect of gradually decreasing the grade of concrete.
2. To analyse and study the better design option in case of beam failure due to high seismic forces, to
make it economical and easy to execute at the construction site.
3. To analyse and study the effect of lateral loading on the slab at the beam-shear wall junction.
4. To introduce the plate-reinforced composite (PRC) coupling beam with all design parameters, in cases
of high shear stress exceeding the permissible concrete shear stress limit (Tc max).
5. To analyse the beam condition from its design behaviour (moment and shear force diagrams under
earthquake loading), to understand which beams are suitable for diagonal reinforcement (due to coupling
action) and for PRC beams.
References
[1] S. Ebenezer, E. Arunraj, G. Hemalatha, "Analytical Behaviour on External Beam Column Joint Using Steel Mesh".
[2] Murat Engindeniz, Lawrence F. Kahn, "Pre-1970 RCC corner beam-column-slab joints: seismic adequacy and upgradability with CFRP composites", WCEE, October 12-17, 2008.
[3] N. Mitra, "Continuum model for RC interior beam-column connection regions", WCEE, October 12-17, 2008.
[4] Syed Sohailuddin S. S., Rashmi G. Bade, Ashfaque A. Ansari, "Strengthening of reinforced concrete beam column joint under seismic loading using ANSYS", ICAET-2014.
[5] S. L. Patil, S. A. Rasal, "Behaviour of Beam-Column Joint on Different Shapes RC Framed Structures: A Review", IJSR, April 2017.
[6] Wael M. Hassan, Jack P. Moehle, "Experimental Assessment of Seismic Vulnerability of Corner Beam-Column Joints in Older Concrete Buildings", 15 WCEE, 2012.
[7] Rupali R. Bhoir, Prof. V. G. Sayagavi, Prof. N. G. Gore, Prof. P. J. Salunkhe, "Shear Demand of Exterior Beam Column Joint using STAAD-Pro", IRJET, Oct-2015.
[8] S. Ebenezer, E. Arunraj, G. Hemalatha, "Analytical Behaviour on External Beam Column Joint Using Steel Mesh", IJITEE, April-2019.
[9] Nandhigam Vijayaprasad, Aditya Kumar Tiwary, "Behavior of exterior RCC beam column joint with strengthened concrete and diagonal cross bracings", IJCIET, March-2019.
Abstract
Web users are continuously bombarded with spamming attacks. Sometimes, such attacks successfully redirect
the user to a malicious web link, which is generally termed redirection spam. However, genuine redirections are
also common when web servers are overloaded with more requests than they can process. It can be challenging
to distinguish redirection spam from genuine web redirections. Lately, artificial intelligence has been used for
redirection spam classification through the design of various artificial neural network models, with accuracy and
mean square error as the usual performance parameters. This paper presents a comprehensive survey of
redirection spam detection using artificial intelligence based approaches, so as to thwart spamming attacks in
time-critical applications. Various models are discussed with their pros and cons.
1. Introduction
With the increasing number of users using web services, the problem of spamming attacks has become
very serious for both web and mobile applications. The most common form encountered is the redirection
spam attack on cellular users, and the main challenge is to detect whether a redirection is actually spam or
not. Given the vast amount of data and its complexity, manual classification of redirection spam in time-critical
applications is infeasible. This paper presents the basics of redirection spam classification using
artificial intelligence based techniques. It introduces the necessity of spam classification along with the
various approaches used to classify spam using artificial neural networks.
It is generally difficult to classify based on an auto-redirect or auto-refresh tag, because when web servers
are heavily loaded they may introduce such measures to shed load and avoid crashes. Hence it becomes
mandatory to look for techniques which can classify with high accuracy in time-critical situations. The
following sections present the basics of artificial neural networks used for spam classification.
Some spamming attacks may be benign, while others are malignant, trying to redirect mobile users to
malicious websites where user security may be compromised. Since the amount of data is staggeringly
large and complex, machine learning based approaches have of late become common for filtering out
spam. One challenge such approaches face on mobile platforms is the limited computational and
processing capability of handheld devices. This makes it necessary to design and test algorithms which are
compatible with various versions of mobile operating systems and which run within limited memory and
processing hardware, as there exists a lot of diversity in the hardware of different mobile devices.
2. Related Work
Various approaches have been devised for mobile spam classification.
AK Jain et al. proposed an approach for the detection of spam messages. The authors identified an
effective feature set which classifies text messages into spam or ham with high accuracy. The feature
selection procedure is applied to normalized text messages to obtain a feature vector for each message,
which is then tested on a set of machine learning algorithms to observe their efficiency.
KS Adewole et al. proposed a unified framework for both spam message and spam account detection.
The authors utilized four datasets, two from the SMS spam message domain and the remaining two from
the Twitter microblog. To identify a minimal number of features for spam account detection on Twitter,
the paper studied a bio-inspired evolutionary search method. Using the evolutionary search algorithm, a
compact model for spam account detection was proposed and incorporated in the machine learning phase
of the unified framework. The results of the various experiments indicate that the proposed framework is
promising for detecting both spam messages and spam accounts with a minimal number of features.
Surendra Sedhai et al. proposed a semi-supervised approach for redirection spam classification. The
training rules were governed by supervised learning with an adaptive weight-changing mechanism;
however, the approach had the liberty of letting the weight adaptation fall within the purview of the
training algorithm used.
Chao Chen et al. proposed a technique for the classification of drifted Twitter spam based on statistical
features. Drifted spam is often the result of several attached web links leading to the drifting of tweets in
social media applications, with malicious URLs that can cause spamming attacks on web mails.
Nida Mirza et al. proposed a technique for spam classification based on hybrid feature selection. The
major advantage of this approach was the fact that the hybrid parameters can be an amalgamation of
both textual features and non-textual features. The evaluation of the performance of the proposed
system was done on the basis of mean square error, hit rate and the accuracy. The performance of
hybrid feature selection was shown to be better than the average features computation algorithms.
Hammad Afzal et al. proposed a mechanism for the classification of bi-lingual tweets using machine
learning algorithms. The methodology was the use of natural language processing followed by deep
neural networks with multiple hidden layers. The learning rates were dependent on the differential
changes in the architecture of the neural network used.
Hailu Xu et al. proposed a technique for efficient spam detection across social platforms. The main
problem with classification, as presented by the authors, is the lack of correlation between variables,
which often leads to low prediction accuracy and leaves no expert view on the apparent relation between
the feature values and the outputs. The authors addressed the problem of expert view exclusion in their
work to enhance the accuracy.
Nadir Omer et al. proposed a technique based on the use of a support vector machine (SVM) against
spamming attacks, exhibiting how the SVM uses a hyperplane for the classification of multi-dimensional
data exposed to spamming attacks.
Tarjani Vyas et al. used the techniques of supervised learning for the classification of spamming
attacks. The supervised learning mechanism was shown to have a different level of accuracy as
compared to unsupervised learning. The classification process was however characterized by the
computation of probabilities for classification as spam or ham.
Nishtha Jatana et al. proposed an efficient radix-encoded approach for the differentiation of spam and
ham based on the Bayesian classifier. The classifier was used to classify the test data set once the
probabilities of spam and non-spam had been computed using conditional probability.
Kamalanathan Kandasamy et al. used natural language processing (NLP) in conjunction with machine
learning on the Twitter database. The paper presents an extremely interesting approach based on
behavioural economics along with time series prediction: the authors considered Twitter data (tweets) to
assess the mood of society at large as an additional feature alongside the spamming data. The
performance evaluation parameter was the mean absolute percentage error.
Navneel Prasad et al. proposed a comparative technique for spam classification based on back
propagation and resilient networks. The authors cited that the problem addressed was the low accuracy
in prediction by feed forward networks. Such networks do not have an error feedback mechanism for
training to occur with the error as one of the inputs affecting the weights. Similar attributes of resilient
and reinforced learning were used.
Wojciech Indy et al. used the MapReduce technique for spam classification mechanism. The
MapReduce technique often finds similarities in data sets based on the spurious nature of spamming
attacks often resembling actual ham. The performance indices were the accuracy and sensitivity.
Ashwin Rajadesingan et al. proposed a technique based on Comment Analysis whose data is often
extracted from Comment-Blog Post Relationships. The approach is often useful when the blog part of
the webmail produces the necessary textual data which can be either spam or non-spam.
Alper Kursat Uysal et al. proposed a mechanism for SMS Spam filtering. The authors used a hybrid
of Artificial Neural Networks and particle swarm optimization (PSO) to reach desired values of the
objective function. The authors used the Radial Basis Function (RBF) which have advantages of easy
design, good generalization, strong tolerance to input noise, and online learning ability. The particle
swarm optimization used along with it helped the authors to attain high accuracy of prediction.
D. Karthika Renuka et al. proposed a supervised approach for redirection spam classification. The
training rules were governed by supervised learning with an adaptive weight-changing mechanism, and
the fully supervised learning mechanism improved the accuracy.
Safvan Vahora et al. used the naïve Bayesian classifier for spam classification. In straightforward terms,
a naive Bayesian classifier assumes that the value of a specific feature is unrelated to the presence or
absence of any other feature, given the category variable. Thus, training it with the data set automatically
fits the Bayesian classifier to classify the data.
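A minimal sketch of this naive Bayes idea, using scikit-learn with a toy corpus (the messages and labels are illustrative, not from the cited work), is:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win a free prize now", "meeting at noon tomorrow",
         "free offer click now", "lunch with the team"]
labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()           # bag-of-words features
X = vectorizer.fit_transform(texts)
clf = MultinomialNB().fit(X, labels)     # learns per-class word likelihoods

print(clf.predict(vectorizer.transform(["free prize offer"])))   # -> ['spam']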
Lourdes Araujo et al. presented a qualified link analysis for spam filtering. Conventional techniques
suffered from the lack of correlation between variables, which often leads to low prediction accuracy and
leaves no expert view on the apparent relation between the feature values and the outputs; here, the
qualified link analysis establishes the accuracy measure.
Sang Min Lee et al. tried spam classification based on feature selection and data optimization. The major
problem addressed in the paper was overfitting on the training datasets for neural networks: overfitting
introduces noise effects in the training and increases prediction errors. The authors proved that a function
(training data set) with finite discontinuities can be approximated with a simpler neural network. The
performance metrics were training time and mean square error.
Chih-Hung Wu et al. proposed a technique based on back propagation and feature selection. The
problems the paper addressed were that the number of iterations required by the neural network is often
high for large datasets with low correlation, and that the stability of the algorithm is often low; both were
improved to a certain extent in this approach.
Chi-Yao Tseng et al. proposed an incremental SVM technique for spam classification. The approach
mapped the selected features onto a hyperplane, where they were classified by the SVM. The performance
metrics were accuracy, precision and recall.
3. Challenges in Spam Classification
The need for probabilistic classifiers arises from the fact that the classification problem often encounters
data sets with overlapping vectors. The major challenges in spam classification are:
1) It is very difficult to detect malicious redirections, because redirections are also made intentionally
for non-harmful purposes like load balancing.
2) If redirection is not employed, the web server may crash when the requests received far exceed its
request-handling capacity.
3) It is very difficult to actually detect a malicious spam redirection and differentiate it from a
load-balancing redirection.
4) General machine learning techniques for spam classification are prone to poisoning attacks.
The feature selection mechanism is also important for the computation of parameters such as the mean
square error and accuracy; adding features may increase accuracy at times, but it also increases the
complexity of training.
Depending on the implementation, Bayesian spam filtering may be susceptible to Bayesian poisoning, a
technique used by spammers in an attempt to degrade the effectiveness of spam filters that rely on
Bayesian filtering. A spammer practicing Bayesian poisoning will send out emails with large amounts of
legitimate text (gathered from legitimate news or literary sources). Spammer tactics include the insertion
of random innocuous words that are not normally associated with spam, thereby decreasing the email's
spam score and making it more likely to slip past a Bayesian spam filter. However, with (for example)
Paul Graham's scheme only the most significant probabilities are used, so that padding the text with
non-spam-related words does not affect the detection probability significantly. Words that normally
appear in large quantities in spam may also be transformed by spammers: for example, "Viagra" would be
replaced with "Viaagra" or "V!agra" in the spam message. The recipient can still read the changed words,
but each of these words is encountered more rarely by the Bayesian filter, which hinders its learning
process. As a general rule, this spamming technique does not work very well, because the derived words
end up being recognized by the filter just like the normal ones.
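One common counter-measure to such word obfuscation is token canonicalization before the filter scores a message. The sketch below is a hypothetical illustration; the substitution map and test words are assumptions, not taken from any cited work:

import re

LEET = str.maketrans({"!": "i", "1": "i", "0": "o", "@": "a", "$": "s"})

def normalize(token: str) -> str:
    token = token.lower().translate(LEET)    # undo symbol substitutions
    return re.sub(r"(.)\1+", r"\1", token)   # collapse repeated letters

for word in ["V!agra", "Viaagra", "viagra"]:
    print(word, "->", normalize(word))       # all map to "viagra"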
The overlapping vectors make it challenging to find a clear boundary for the classification problem; often
there exists only a fuzzy boundary to demarcate the data classes. In such overlapping classes, the final
categorization of a new data vector X is based on the maximum mutual probability given by:

P(X) = \max\left\{ \frac{X_1}{U}, \frac{X_2}{U}, \ldots, \frac{X_n}{U} \right\}   (1)

Here,
X_1, X_2, ..., X_n are the multiple classes,
U is the universal set containing all the classes, and
P(X) is the maximum probability of a data sample belonging to a particular category.
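A minimal sketch of Eq. (1) in Python, with illustrative class counts (the numbers are assumptions for demonstration):

class_counts = {"spam": 320, "ham": 660, "promo": 20}   # sizes of X_1..X_n
U = sum(class_counts.values())                          # size of universal set

ratios = {c: n / U for c, n in class_counts.items()}    # X_i / U for each class
best = max(ratios, key=ratios.get)                      # arg max over classes
print(f"P(X) = {ratios[best]:.2f} for class '{best}'")  # -> P(X) = 0.66, 'ham'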
4. Conclusion
It can be concluded from the previous discussion that spam classification is a non-trivial task, given the
amount and complexity of data that mobile and web servers receive in real time. It can be inferred that AI
and ML based approaches are appropriate to cater to the needs of web services; however, the challenging
aspect of spam classification remains attaining the accuracy needed for real-life applications.
References
[1] AK Jain, D Goel, S Agarwal, Y Singh, G. Bajaj, "Predicting Spam Messages Using Back Propagation Neural Network", Journal of Wireless Personal Communications, Springer 2021, vol. 110, pp. 403-422.
[2] KS Adewole, NB Anuar, A Kamsin, "SMSAD: a framework for spam message and spam account detection", Journal of Multimedia Tools and Applications, Springer 2020, vol. 78, pp. 3925–3960.
[3] Aliaksandr Barushka, Petr Hajek, "Spam filtering using integrated distribution-based balancing approach and regularized deep neural networks", Springer 2018.
[4] Surendra Sedhai, Aixin Sun, "Semi-Supervised Spam Detection in Twitter Stream", IEEE 2018.
[5] Chao Chen, Yu Wang, Jun Zhang, Yang Xiang, Wanlei Zhou, Geyong Min, "Statistical Features-Based Real-Time Detection of Drifted Twitter Spam", IEEE 2017.
[6] Nida Mirza, Balkrishna Patil, Tabinda Mirza, Rajesh Auti, "Evaluating efficiency of classifier for email spam detector using hybrid feature selection approaches", IEEE 2017.
[7] Hammad Afzal, Kashif Mehmood, "Spam filtering of bi-lingual tweets using machine learning", IEEE 2016.
[8] Hailu Xu, Weiqing Sun, Ahmad Javaid, "Efficient spam detection across Online Social Networks", IEEE 2016.
[9] Nadir Omer Fadl Elssied, Othman Ibrahim, Ahmed Hamza Osman, "Enhancement of spam detection mechanism based on hybrid k-mean clustering and support vector machine", Springer 2015.
[10] Tarjani Vyas, Payal Prajapati, Somil Gadhwal, "A survey and evaluation of supervised machine learning techniques for spam e-mail filtering", IEEE 2015.
[11] Nishtha Jatana, Kapil Sharma, "Bayesian spam classification: Time efficient radix encoded fragmented database approach", IEEE 2014.
[13] Navneel Prasad, Rajeshni Singh, Sunil Pranit Lal, "Comparison of Back Propagation and Resilient Propagation Algorithm for Spam Classification", IEEE 2013.
[16] Alper Kursat Uysal, Serkan Gunal, Semih Ergin, Efnan Sora Gunal, "A novel framework for SMS spam filtering", IEEE 2012.
[17] D. Karthika Renuka, T. Hamsapriya, M. Raja Chakkaravarthi, P. Lakshmi Surya, "Spam Classification Based on Supervised Learning Using Machine Learning Techniques", IEEE 2011.
[18] Safvan Vahora, Mosin Hasan, Reshma Lakhani, "Novel approach: Naïve Bayes with Vector space model for spam classification", IEEE 2011.
[19] Lourdes Araujo, Juan Martinez-Romo, "Web Spam Detection: New Classification Features Based on Qualified Link Analysis and Language Models", IEEE 2010.
[20] Sang Min Lee, Dong Seong Kim, Ji Ho Kim, Jong Sou Park, "Spam Detection Using Feature Selection and Parameters Optimization", IEEE 2010.
[21] Chih-Hung Wu, Chiung-Hui Tsai, "Robust classification for spam filtering by back-propagation neural networks using behavior-based features", Springer 2009.
[22] Chi-Yao Tseng, Ming-Syan Chen, "Incremental SVM Model for Spam Detection on Dynamic Email Social Networks", IEEE 2009.
Abstract
Modern wireless networks are undergoing a paradigm shift in terms of management and optimization with the
emergence of pervasive networks such as IoT and fog networks. Future generation wireless networks will face
constraints in terms of copious data generation, increasing user counts and limited bandwidth. This has led to the
widespread development of software defined networks (SDNs). Software-defined networking technology is an
approach to network management that enables dynamic, programmatically efficient network configuration in
order to improve network performance and monitoring. One of the major challenges which SDNs face is the
Quality of Service (QoS) issue due to fading effects in real time. This leads to an inevitable trade-off between
network bandwidth utility and QoS metrics such as error rate and throughput. Thus, handover or switching
mechanisms for SDNs are necessary so as to leverage the available bandwidth as well as maintain satisfactory
quality of service for the network. This paper presents a review of the co-existence of NOMA and OFDM
systems as multiple access methods for software defined networks, along with contemporary handover
mechanisms.
1. Introduction
SDNs offer increased control with greater speed and flexibility: instead of manually programming multiple vendor-specific hardware devices, developers can control the flow of traffic over a network simply by programming an open standard software-based controller. Network administrators also have more flexibility in choosing networking equipment, since they can choose a single protocol to communicate with any number of hardware devices through a central controller. The major challenge looming large on SDNs is multipath propagation and varying media (channel conditions) in terms of fading. This results in the following problems [3]:
1) Reduced signal strength resulting in poor quality of service.
2) Increased bit and packet error rates resulting in SDN system outage.
3) Large latencies and relatively low throughput.
A Review on Automated Handover in Software Defined Networks (SDNs) based on QoS Metrics
A typical SDN generally has the capability of automatic fallback or handover. Handover may occur between two systems when the performance of one system starts to deteriorate compared to the other. The proposed approach aims at leveraging the handover mechanism for SDNs: a handover between a primary multiple access technique (MA1) and a secondary multiple access technique (MA2) is proposed, with automatic fallback enabled receivers. The condition for switching or handover is proposed to be the BER of the system under different channel fading conditions [4], as the simple decision sketch below illustrates.
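The sketch below is a hedged illustration of such a BER-driven handover rule, not the paper's own algorithm: stay on the primary technique MA1 while its measured BER is acceptable, and fall back to MA2 otherwise. The threshold and the BER estimates are illustrative assumptions.

def select_technique(ber_ma1: float, ber_ma2: float, threshold: float = 1e-3) -> str:
    """Return the multiple access technique to use for the next frame."""
    if ber_ma1 <= threshold:
        return "MA1"                          # primary technique still healthy
    return "MA2" if ber_ma2 < ber_ma1 else "MA1"   # fall back only if MA2 is better

# Example decisions under assumed BER measurements:
for b1, b2 in [(1e-4, 5e-4), (5e-3, 1e-3), (5e-3, 8e-3)]:
    print(f"BER(MA1)={b1:.0e}, BER(MA2)={b2:.0e} -> {select_technique(b1, b2)}")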
The SDN would incorporate both:
1) Near and Far User cases
2) Fading conditions
Moreover, for effective interference suppression, many MUD schemes also require an estimate of the
covariance matrix of the received signal, which is typically the sample covariance matrix. The sample
covariance matrix converges slowly, resulting in a poor estimate of the true covariance matrix when the
number of samples of the received signal is relatively low.
c) Speed of surrounding objects – If objects in the radio channel are in motion, they induce a time-varying Doppler shift on the multipath components. If the surrounding objects move at a greater rate than the mobile, this effect dominates the fading.
d) Transmission bandwidth of the signal – If the transmitted radio signal bandwidth is greater than the "bandwidth" of the multipath channel, the received signal will be distorted.
In figure 2, the variation of signal strength as a function of distance from the base station is depicted. It can be seen that as the distance from the base station increases, the received signal strength decreases, making the reception of signals more prone to errors and degrading the quality of service.
After considering the multipath effects, it is convenient to understand the concept of successive signal
detection and equalization.
The need for equalization lies in the fact that practical wireless channels do not fulfil the condition of distortionless transmission [6]. A mechanism that reverses or nullifies the detrimental effects of a distortion-introducing channel is called an equalizer. The rate of data transmission over a communication system is limited due to the effects of linear and nonlinear distortion. Linear distortions occur in the form of inter-symbol interference (ISI), co-channel interference (CCI) and adjacent channel interference (ACI) in the presence of additive white Gaussian noise. Nonlinear distortions are caused by subsystems such as amplifiers, modulators and demodulators, along with the nature of the medium. Occasionally, burst noise occurs in communication systems. Different equalization techniques are used to mitigate these effects, and different applications and channel models suit different equalization techniques. The main challenge of wireless communications is the random and frequency selective nature of wireless channels, which does not satisfy the conditions for distortionless transmission.
For the transmission to be distortionless, the channel should have a flat frequency response, as shown in the figure above. But practically, wireless channels are random and show non-ideal characteristics.
Such a channel introduces distortions in the received signal, thereby degrading the BER performance of the system. If we know the frequency response of the channel H(z), then we can design the frequency response of the equalizer as $E(z) = 1/H(z)$.
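As an illustration of this inverse-channel idea, the following minimal Python sketch designs $E(z) = 1/H(z)$ on a DFT grid and applies it to a received block. The three-tap channel, block length, BPSK modulation and cyclic (OFDM-style) channel model are all assumptions made for the example, not parameters from this paper.

import numpy as np

# Minimal zero-forcing equalization sketch: design E(z) = 1/H(z) on a
# DFT grid and apply it to a received block. A cyclic (OFDM-style)
# channel model is assumed so the frequency-domain inversion is exact.
h = np.array([1.0, 0.5, 0.2])                # assumed multipath channel h(n)
N = 64                                       # assumed block length

x = np.sign(np.random.randn(N))              # BPSK symbols
H = np.fft.fft(h, N)                         # H(z) evaluated on the unit circle
y = np.real(np.fft.ifft(np.fft.fft(x) * H))  # cyclic-channel output

E = 1.0 / H                                  # zero-forcing response E(z) = 1/H(z)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * E))

print("symbol errors:", int(np.sum(np.sign(x_hat) != x)))

In practice the inversion is regularized (as in MMSE equalization), because 1/H(z) amplifies noise wherever the channel response is weak.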
Since wireless channels are realized as filters, equalizer structures also need to be realized as filters. One of the biggest challenges in realizing an equalizer is designing an algorithm that implements the equalizer transfer function. The design objective of the equalizer is to undo the effects of the channel and to remove the interference. Conceptually, the equalizer attempts to build a system that is a "delayed inverse" of the channel, removing the inter-symbol interference while simultaneously rejecting additive interferers uncorrelated with the source. If the interference $n(kT_s)$ is unstructured (for instance white noise), then there is little that a linear equalizer can do to improve it. But when the interference is highly structured (such as narrowband interference from another user), the linear filter can often notch out the offending frequencies and thereby reduce the effects of inter-symbol interference.
The multi-user detection (MUD) mechanism can be understood using the conceptual block diagram of a MUD system shown in the following figure.
Signal Detection
The successive DFE equalization approach is an efficient technique of equalizing the received signal
power and is capable of detecting different multi-path components (MPCs) under varying signal
strengths or BER conditions. The approach requires the following information [7]:
a) The individual signal strength of each MPC, given by:
$S_i = g_i \sqrt{P_i}$ (1)
4. Literature Review
Mohseni et al. [8] showed that SDNs allow network operators to easily add new services to the network and quickly adapt the network to their own requirements. The main feature of this architecture is the separation of the control plane and data plane, and the logical concentration of network strategic intelligence in one network location. Real-time traffic generated by mobile users requires small end-to-end delay and small handover latency during user handover to keep communication smooth, and most IoT application traffic falls within this category. While the amount of transmitted data in IoT applications might be considerably smaller than that of real-time multimedia applications, many IoT applications have low tolerance to large delays. In their paper, an SDN-based scheme with a cross-layer approach is proposed to improve end-to-end delay and handover latency.
A. Tusha et al. [9] proposed a hybrid power domain non-orthogonal multiple access (NOMA) scheme, named IM-NOMA, formed by the superposition of orthogonal frequency division multiplexing (OFDM) and index modulated OFDM (OFDM-IM). It is shown via both computer-based simulations and mathematical analysis that IM-NOMA outperforms classical OFDM-NOMA in terms of bit error rate (BER) under a total power constraint and achievable sum rate. The system performance of IM-NOMA depends not only on the power difference between the overlapping users but also on features of the OFDM-IM signal. Hence, this scheme is robust against possible catastrophic error performance in case similar power is assigned to the users.
Y. Yapıcı et al. [10] proposed a downlink multiuser VLC network where users randomly change their location and vertical orientation. In order to increase the spectral efficiency, the authors consider non-orthogonal multiple access (NOMA) transmission to serve multiple users simultaneously. In particular, they propose individual and group-based user ordering techniques for NOMA with various user feedback schemes. In order to reduce the computational complexity and link overhead, feedback on the channel quality is computed using the mean value of the vertical angle (instead of the exact instantaneous value), as well as the distance information. In addition, a two-bit feedback scheme is proposed for group-based user scheduling, which relies on both the distance and the vertical angle, and differs from the conventional one-bit feedback of the distance only. The outage probability and sum-rate expressions are derived analytically and show a very good match with the simulation data.
Cai et al. [11] observed that fifth generation (5G) wireless networks face various challenges in supporting large-scale heterogeneous traffic and users, and that new modulation and multiple access (MA) schemes are therefore being developed to meet the changing demands. As this research space is ever increasing, it becomes more important to analyze the various approaches; the authors therefore present a comprehensive overview of the most promising modulation and MA schemes for 5G networks. Unlike other surveys of 5G networks, the paper focuses on multiplexing techniques, including modulation techniques in orthogonal MA (OMA) and various types of non-OMA (NOMA) techniques. Specifically, the authors first introduce different types of modulation schemes and their potential for OMA, and compare their performance in terms of spectral efficiency, out-of-band leakage, and bit error rate. They then stress the various types of NOMA candidates, including power-domain NOMA, code-domain NOMA, and NOMA multiplexing in multiple domains.
Alodeh et al. [12] proposed a technique for multi-user detection which used an energy efficient mechanism based on symbol level pre-coding of the data stream prior to transmission. The approach followed interleaved pre-coding so as to avoid burst errors in packet data transmission, and was well suited to the multi-user scenario within a Multiple Input Single Output (MISO) framework. The detection region was termed the relaxed detection region due to the fact that the proposed technique used large degrees of freedom for the actual signal detection.
The QoS metric often chosen is the error rate, specifically the bit error rate of the system, and hence automatic handover should be decided by the QoS metric chosen for the SDN. Such an SDN would have the distinct advantage of better reliability compared to SDNs without an automatic handover enabling protocol.
5. Conclusion
From the previous discussions, it can be said that there are three important challenges in the detection
of multiple signals corresponding to multi-user detection in Software Defined Networks (SDNs).The
major challenge looming large on SDNs is the multipath propagation and varying media (channel
conditions) in terms of fading which result in reduced strength resulting in poor quality of service,
increase bit and packed error rates resulting in SDN system outage, large latencies and relatively low
throughput. This paper presents a comprehensive review in SDNs, associated challenges and also
introduces the necessity of automated handover so as to enhance the quality of service (Qos) and
reliability.
References:
[1] B. A. A. Nunes, M. Mendonca, X. -N. Nguyen, K. Obraczka and T. Turletti, "A Survey of
Software-Defined Networking: Past, Present, and Future of Programmable Networks," in IEEE
Communications Surveys & Tutorials, 2014, vol. 16, no. 3, pp. 1617-1634.
[2] A. A. Shah, G. Piro, L. A. Grieco and G. Boggia, "A Review of Forwarding Strategies in Transport
Software-Defined Networks," 2020 22nd International Conference on Transparent Optical Networks
(ICTON), 2020, pp. 1-4.
[3] W. Rafique, L. Qi, I. Yaqoob, M. Imran, R. U. Rasool and W. Dou, "Complementing IoT Services
Through Software Defined Networking and Edge Computing: A Comprehensive Survey," in IEEE
Communications Surveys & Tutorials, 2020, vol. 22, no. 3, pp. 1761-1804.
[4] S. Wang, Y. Li and J. Wang, "Multiuser detection in massive spatial modulation MIMO with low-resolution ADCs," in IEEE Transactions on Wireless Communications, vol. 14, no. 4, IEEE 2015.
[5] S. Narayanan, M. J. Chaudhry and A. Stavridis, "Multi-user spatial modulation MIMO," Proceedings of the Wireless Communications and Networking Conference (WCNC), IEEE 2014.
[6] P. Botsinis, D. Alanis, S. X. Ng and L. Hanzo, "Quantum-Assisted Multi-User Detection for Direct-Sequence Spreading and Slow Subcarrier-Hopping Aided SDMA-OFDM Systems," IEEE 2014.
[7] A. Mukherjee, S. A. A. Fakoorian and J. Huang, "Principles of physical layer security in multiuser wireless networks: A survey," vol. 16, no. 3, IEEE.
[8] H. Mohseni and B. Eslamnour, "Handover Management for Delay-sensitive IoT Services on Wireless Software-defined Network Platforms," IEEE Systems Journal on Cyber Security, 2021, pp. 1-6.
[9] A. Tusha, S. Doğan and H. Arslan, "A Hybrid Downlink NOMA With OFDM and OFDM-IM for
Beyond 5G Wireless Networks," in IEEE Signal Processing Letters, vol. 27, pp. 491-495, 2020.
[10] Y. Yapıcı and İ. Güvenç, "NOMA for VLC Downlink Transmission With Random Receiver
Orientation," in IEEE Transactions on Communications, vol. 67, no. 8, pp. 5558-5573, Aug. 2019
[11] Yunlong Cai, Zhijin Qin, Fangyu Cui, Geoffrey Ye Li and Julie A. McCann, "Modulation and Multiple Access for 5G Networks," IEEE 2018.
[12] Maha Alodeh, Symeon Chatzinotas and Björn Ottersten, "Energy-Efficient Symbol-Level Precoding in Multiuser MISO Based on Relaxed Detection Region," IEEE Xplore 2016, vol. 15, no. 5.
Abstract
Wireless communication is undergoing a paradigm shift with the emergence of high performance machine learning (ML) computing and the internet of things (IoT). The demand for bandwidth has risen significantly due to multimedia applications and high speed data transfer. However, with an increasing number of cellular users, the challenge is to effectively manage the limited spectrum allotment for wireless communication while maintaining satisfactory quality of service. Hence, different multiplexing techniques have been used to effectively use the available bandwidth. Recently, the concept of automatic fallback in receivers has been gaining popularity due to high mobility in vehicular networks and IoT. Automatic fallback and handover mechanisms often utilize the channel state information (CSI) of the radio and can switch between technologies to provide the best available quality of service for particular spatial and temporal channel conditions. With the advent of machine learning and deep learning methods, estimating the channel state information has become computationally efficient and feasible, thereby improving the performance metrics of the system. This paper presents a comprehensive review of the need for cognitive systems with CSI availability, handover mechanisms in wireless networks, and the different strategies involved in estimating the channel state information for wireless networks.
Keywords: Wireless Networks, Handover, Channel State Information (CSI), Cognitive Networks, Machine Learning
(ML).
I. INTRODUCTION
Wireless communication beyond 5G has emerged as a new paradigm with enormous new possibilities such as the metaverse, digital clones, large scale automation and the internet of things, to name a few [1]. However, all these new age concepts depend critically on bandwidth availability and spectrum management in wireless networks. As bandwidth is limited, using it effectively is critically important to cater to the following needs [2]:
1) Increasing number of users.
2) Increased bandwidth requirement owing to multimedia data transfer.
3) Need for high data rates.
4) Limited available bandwidth.
The problem becomes even more critical with the necessity of internet of things (IoT) and fog computing networks, where multiple devices are connected over the internet and send data to a centralized server [3]. The IoT framework is depicted in figure 1.
There are several applications of IoT.
A Review on Contemporary Handover Mechanisms for Wireless Networks
The IoT framework has its own set of limitations, in the sense that there is a lot of device clutter in the 2.4 GHz Industrial, Scientific and Medical (ISM) band. IoT based networks can be further classified as [4]:
1) Cellular based IoT
2) Device to Device based IoT
Another variant of the IoT framework is the fog computing architecture for last mile connectivity. Fog infrastructure supports heterogeneous devices, such as end devices, edge devices, access points, and switches. Fog servers are considered to be micro data centres inheriting cloud services at the network edges [5]. The fog computing architecture is depicted in figure 3.
The frequency 'f' can be reused at a cell site 'd' km away for a cell with radius 'r', keeping in mind the reuse factor:
$q = \dfrac{d}{r}$ (1)
Here,
q is the re-use factor,
r is the cell radius, and
d is the re-use distance.
Typically, in wide area networks and metropolitan area networks, if multiple IoT clusters are connected to a single cloud server in a cell, then such a cell is called a macro cell [8]. Macro cells may have a large number of IoT devices (IoTDs) connected. The scenario of such IoT clusters is depicted in figure 5.
The major constraints of IoT and fog based networks are [9]:
1) Devices are resource constrained.
2) The number of devices is exceedingly large.
3) Networks can be used for extremely time critical applications, where latency causes serious repercussions.
Thus, selecting an appropriate multiplexing technique is necessary to address the following issues [10]:
1) Fewer bit errors.
2) Low or acceptable limits of outage.
3) Acceptable latencies.
4) Effective spectrum management.
In FDM, different users have different frequencies. In TDM, different users have different time slots. In OFDM, different users have different orthogonal frequencies. In NOMA, different users have different power levels: they may share the SAME time slot and frequency, but their power levels must differ (see the sketch below). Figure 6 depicts the spectrum of FDM, OFDM and NOMA.
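A minimal sketch of this power-domain principle follows, assuming two BPSK users sharing one slot, an arbitrary 80/20 power split and an AWGN channel; the near user performs successive interference cancellation (SIC), decoding and subtracting the far user's stronger signal before detecting its own.

import numpy as np

# Power-domain NOMA sketch: two users share the same slot and frequency
# but use different power levels. The power split, BPSK signaling and
# noise level are illustrative assumptions.
rng = np.random.default_rng(0)
n_sym = 10_000
p_far, p_near = 0.8, 0.2                     # far (weak-channel) user gets more power

b_far = rng.integers(0, 2, n_sym) * 2 - 1    # BPSK symbols per user
b_near = rng.integers(0, 2, n_sym) * 2 - 1

x = np.sqrt(p_far) * b_far + np.sqrt(p_near) * b_near  # superposed signal
y = x + 0.05 * rng.standard_normal(n_sym)              # AWGN channel

b_far_hat = np.sign(y)                        # far user: treat near user as noise
residual = y - np.sqrt(p_far) * b_far_hat     # near user: SIC, subtract far signal
b_near_hat = np.sign(residual)

print("far-user BER :", np.mean(b_far_hat != b_far))
print("near-user BER:", np.mean(b_near_hat != b_near))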
OFDM and NOMA often exhibit similar SNR-BER characteristics. A typical cellular system generally has the capability of adaptive or automatic fallback [17]. In such cases, there can be switching from one technology to another parallel or co-existing technology upon changes in system parameters such as the Bit Error Rate (BER). NOMA and OFDM can be shown to co-exist when they share similar bandwidth parameters and have comparable BER performance over the chosen SNR range, so that automatic fallback or handover is not a problem. Thus two major fallback or handover mechanisms are commonplace:
1) OFDM-NOMA
2) Cellular-Device to Device-Wi-Fi
The main objective of handover is to maintain a satisfactory quality of service metric. The outage of the system is a measure of the quality of service of the system: outage means the chance of unacceptable quality of service. The outage primarily depends on the signal to noise ratio and the bit error rate of the system, and is often represented in terms of the complementary cumulative distribution function (CCDF). A probabilistic model is needed to describe the outage because neither the BER nor the SNR of the system can by itself ascertain the outage, since both are subjective performance metrics [18]. In general, it is shown that the outage is a function of the signal to noise plus interference ratio, the distance and the channel fading effects. The outage in terms of absolute parameters q(λ) is given by [23]:
$q(\lambda) = \exp\left\{-\dfrac{2\pi^2}{\eta \sin(2\pi/\eta)}\, R_k^2\, \mathrm{SINR}_k^{2/\eta}\, \lambda\right\}$ (2)
Here,
$K_k = C_k R_k^2\, \mathrm{SNR}_k^{2/\eta}$ is a constant depending on system and channel parameters,
SINR represents the signal to noise plus interference ratio,
$R$ is the distance,
$\eta$ is the path-loss exponent,
$\lambda$ is the device density in the network,
$\sigma_{kj}$ is the shadowing factor, and
$q(\lambda)$ is the absolute outage.
Here,
ccdf denotes the complementary cumulative density function of the D2D network system,
cdf denotes the cumulative density function of the network, and
x denotes a random variable.
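As a numerical illustration of equation (2), the short sketch below evaluates q(λ) for a few device densities; the path-loss exponent η = 4, the link distance R, the SINR value and the density range are arbitrary choices made only for the example.

import numpy as np

# Numerical sketch of the outage expression in equation (2) under
# assumed parameter values.
def outage(lam, R, sinr, eta=4.0):
    c = 2 * np.pi**2 / (eta * np.sin(2 * np.pi / eta))
    return np.exp(-c * R**2 * sinr**(2 / eta) * lam)

for lam in (1e-4, 1e-3, 1e-2):               # assumed device densities
    print(f"lambda = {lam:.0e}: q(lambda) = {outage(lam, R=50.0, sinr=2.0):.4f}")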
Enhancements in chip fabrication and computational power have made it possible to analyze copious amounts of data in real time and on miniaturized systems on chip (SoCs) [19]. Machine learning (ML) models have the capability of analyzing large and complex data sets that are practically infeasible for conventional statistical models [20]. Machine learning models can be classified as [21]:
1) Unsupervised Learning: In this approach, the data set is not labelled or categorized prior to training a model. This is typically the crudest form of training, wherein the least amount of a priori information is available regarding the data sets [22].
2) Supervised Learning: In this approach, the data is labelled, categorized or clustered prior to the training process. This is typically possible when a priori information is available regarding the data set under consideration.
3) Semi-Supervised Learning: This approach is a combination of the above-mentioned supervised and unsupervised approaches. The data is demarcated into two categories: in one, a smaller chunk of the data is labelled or categorized, while in the other, the larger chunk of the data is unlabeled; hence the data is a mixture of labelled and unlabeled groups.
Often, a further sub-category is reinforcement learning, in which the aim is to adjust the training parameters so as to maximize rewards in given circumstances; it may also possess categorically classified targets prior to training. Typically, some paradigms separate machine learning from deep learning: in deep learning, there are multiple hidden layers and no separate feature extraction is done, the data being fed directly to the neural network [23].
Machine learning and deep learning based techniques can be used to estimate the channel state information through several training parameters such as:
1) Channel gain
2) Fading effects
3) Shadowing parameters
Thus, the correlation between the independent variables and the target variable can be estimated through the training process.
The neural network model is the most effective training model used for pattern recognition in deep learning and is depicted in figure 14. The mathematical relationship between the various parameters is given by:
$y = f\left(\sum_{i=1}^{n} X_i W_i + \theta\right)$ (4)
Here, X represents the inputs, y the output and W the weights; θ is the bias, and the activation function f represents the behavior of the neural network while making decisions.
The model can be trained with time-spaced input-target data corresponding to the channel to attain updated CSI.
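As a toy illustration of how a model of the form of equation (4) can be trained on such input-target data, the sketch below fits a single tanh neuron to a synthetic channel-gain target by gradient descent; the features, the target model and the learning rate are assumptions made for the example, not part of any cited scheme.

import numpy as np

# Single-neuron sketch of equation (4): y = f(sum(X_i * W_i) + theta),
# trained by batch gradient descent against a synthetic target.
rng = np.random.default_rng(1)
f = np.tanh                                   # activation function

X = rng.standard_normal((500, 3))             # assumed CSI predictor features
true_w = np.array([0.6, -0.3, 0.1])
y_target = np.tanh(X @ true_w + 0.05)         # synthetic channel-gain target

W = np.zeros(3)
theta = 0.0
lr = 0.1
for _ in range(2000):                         # gradient-descent training loop
    y = f(X @ W + theta)
    err = y - y_target
    grad = err * (1 - y**2)                   # tanh'(z) = 1 - tanh(z)^2
    W -= lr * (X.T @ grad) / len(X)
    theta -= lr * grad.mean()

print("learned weights:", W.round(3), "bias:", round(theta, 3))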
Moreover, by estimating the channel response, equalizers can also be designed [24]. The equalization mechanism can be used to mitigate the negative effects of noise and distortion in the channel. Such a mechanism is depicted in figure 1
Conclusion: This paper presents a comprehensive review of current trends in wireless networks pertaining to modulation techniques, handover mechanisms and automatic fallback, fading effects, and channel sensing through the latest machine learning and deep learning algorithms for cognitive networks. Moreover, the internet of things (IoT), fog computing, device to device networks and their co-existence in underlay cellular networks have also been discussed. Channel sensing mechanisms and the estimation of channel state information (CSI) for the design of equalization mechanisms have also been cited and discussed in detail. The significant and noteworthy contributions in the domain have been presented along with the approach used, novelty of perspective and findings. The findings of the paper indicate that stochastic and big data analytics methods can be explored to design optimal handover and equalization methods for future generation wireless networks aiming at high data rates, low error rates and low outage to maintain satisfactory quality of service (QoS).
References
[1] J. Thompson et al., "5G wireless communication systems: prospects and challenges [Guest Editorial]," in IEEE
Communications Magazine, vol. 52, no. 2, pp. 62-64, February 2014, doi: 10.1109/MCOM.2014.6736744.
[2] M. A. M. Albreem, "5G wireless communication systems: Vision and challenges," 2015 International Conference
on Computer, Communications, and Control Technology (I4CT), 2015, pp. 493-497, doi:
10.1109/I4CT.2015.7219627.
[3] J. M. Khurpade, D. Rao and P. D. Sanghavi, "A Survey on IOT and 5G Network," 2018 International Conference
on Smart City and Emerging Technology (ICSCET), 2018, pp. 1-3, doi: 10.1109/ICSCET.2018.8537340.
[4] J. H. Anajemba, Y. Tang, J. A. Ansere and C. Iwendi, "Performance Analysis of D2D Energy Efficient IoT
Networks with Relay-Assisted Underlaying Technique," IECON 2018 - 44th Annual Conference of the IEEE
Industrial Electronics Society, 2018, pp. 3864-3869, doi: 10.1109/IECON.2018.8591373.
[5] M. Aazam, S. Zeadally and K. A. Harras, "Fog Computing Architecture, Evaluation, and Future Research
Directions," in IEEE Communications Magazine, vol. 56, no. 5, pp. 46-52, May 2018, doi:
10.1109/MCOM.2018.1700707.
[6] Ş. Sönmez, I. Shayea, S. A. Khan and A. Alhammadi, "Handover Management for Next-Generation Wireless
Networks: A Brief Overview," 2020 IEEE Microwave Theory and Techniques in Wireless Communications
(MTTW), 2020, pp. 35-40, doi: 10.1109/MTTW51045.2020.9245065.
[7] T. D. Novlan, R. K. Ganti, A. Ghosh and J. G. Andrews, "Analytical Evaluation of Fractional Frequency Reuse
for Heterogeneous Cellular Networks," in IEEE Transactions on Communications, vol. 60, no. 7, pp. 2029-2039,
July 2012, doi: 10.1109/TCOMM.2012.061112.110477.
[8] H. Zhang, X. Wen, B. Wang, W. Zheng and Y. Sun, "A Novel Handover Mechanism Between Femtocell and
Macrocell for LTE Based Networks," 2010 Second International Conference on Communication Software and
Networks, 2010, pp. 228-231, doi: 10.1109/ICCSN.2010.91.
[9] M. Yannuzzi, R. Milito, R. Serral-Gracià, D. Montero and M. Nemirovsky, "Key ingredients in an IoT recipe:
Fog Computing, Cloud computing, and more Fog Computing," 2014 IEEE 19th International Workshop on
Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2014, pp. 325-329,
doi: 10.1109/CAMAD.2014.7033259.
[10] A. Alrawais, A. Alhothaily, C. Hu and X. Cheng, "Fog Computing for the Internet of Things: Security and
Privacy Issues," in IEEE Internet Computing, vol. 21, no. 2, pp. 34-42, Mar.-Apr. 2017, doi:
10.1109/MIC.2017.37.
[11] S. Han et al., "Artificial-Intelligence-Enabled Air Interface for 6G: Solutions, Challenges, and Standardization
Impacts," in IEEE Communications Magazine, vol. 58, no. 10, pp. 73-79, October 2020, doi:
10.1109/MCOM.001.2000218.
[12] Y. Cai, Z. Qin, F. Cui, G. Y. Li and J. A. McCann, "Modulation and Multiple Access for 5G Networks," in IEEE
Communications Surveys & Tutorials, vol. 20, no. 1, pp. 629-646, Firstquarter 2018, doi:
10.1109/COMST.2017.2766698.
[13] G. Nain, S. S. Das and A. Chatterjee, "Low Complexity User Selection With Optimal Power Allocation in
Downlink NOMA," in IEEE Wireless Communications Letters, vol. 7, no. 2, pp. 158-161, April 2018, doi:
10.1109/LWC.2017.2762303.
[14] D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, 2005 (book).
[15] J. Guerreiro, R. Dinis, P. Montezuma and M. Campos, "On the Receiver Design for Nonlinear NOMA-OFDM
Systems," 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), 2020, pp. 1-6, doi:
10.1109/VTC2020-Spring48590.2020.9129559.
[16] A. Al Khansa, X. Chen, Y. Yin, G. Gui and H. Sari, "Performance analysis of Power-Domain NOMA and NOMA-2000 on AWGN and Rayleigh fading channels," Journal of Physical Communication, Elsevier 2020, vol. 43, 101185.
[17] H. Yoo, M. Lee, T. H. Hong and Y. S. Cho, "A Preamble Design Technique for Efficient Handover in IEEE
802.16 OFDM-Based Mobile Mesh Networks," in IEEE Transactions on Vehicular Technology, vol. 62, no. 1,
pp. 460-465, Jan. 2013, doi: 10.1109/TVT.2012.2220990.
[18] D. Zhang, Y. Zhou, X. Lan, Y. Zhang and X. Fu, "AHT: Application-Based Handover Triggering for Saving
Energy in Cellular Networks," 2018 15th Annual IEEE International Conference on Sensing, Communication,
and Networking (SECON), 2018, pp. 1-9, doi: 10.1109/SAHCN.2018.8397106.
[19] M. Schmidt, D. Block and U. Meier, "Wireless interference identification with convolutional neural networks,"
2017 IEEE 15th International Conference on Industrial Informatics (INDIN), 2017, pp. 180-185, doi:
10.1109/INDIN.2017.8104767.
[20] S. Skaria, A. Al-Hourani, M. Lech and R. J. Evans, "Hand-Gesture Recognition Using Two-Antenna Doppler
Radar With Deep Convolutional Neural Networks," in IEEE Sensors Journal, vol. 19, no. 8, pp. 3041-3048, April 2019, doi: 10.1109/JSEN.2019.2892073.
[21] S Cohen, “The basics of machine learning: strategies and techniques”, Artificial Intelligence and Deep Learning,
Elsevier 2021, pp.13-40.
[22] V. K. Ayyadevara, "Basics of Machine Learning," Pro Machine Learning Algorithms, Springer 2018, pp. 1-15.
[23] Y. Sun, C. Wang, H. Cai, C. Zhao, Y. Wu and Y. Chen, "Deep Learning Based Equalizer for MIMO-OFDM
Systems with Insufficient Cyclic Prefix," 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall),
2020, pp. 1-5, doi: 10.1109/VTC2020-Fall49728.2020.9348509.
[24] H. Yazdani, A. Vosoughi and X. Gong, "Achievable Rates of Opportunistic Cognitive Radio Systems Using
Reconfigurable Antennas With Imperfect Sensing and Channel Estimation," in IEEE Transactions on Cognitive
Communications and Networking, vol. 7, no. 3, pp. 802-817, Sept. 2021, doi: 10.1109/TCCN.2021.3056691.
Abstract
Wireless channels rarely fulfill the conditions required for distortionless transmission. Practical wireless channels suffer from the impact of a non-flat magnitude response and a non-linear phase response, which leads to distortions in the received signals. Moreover, effects such as small-scale fading, large-scale fading and Doppler shifts cause dissimilarity between the transmitted and received signals. Since wireless channels do not exhibit an ideal impulse response, multiple copies of the transmitted signal arrive at the receiver, leading to inter-symbol interference (ISI). The additive and cascading impact of the above-mentioned effects results in degraded performance of communication systems. To avoid these detrimental effects, many equalizer designs have been proposed. This paper focuses on the design aspects of equalizers with an inclination towards decision feedback equalizers, due to their efficiency in nullifying the detrimental effects of practical wireless channels.
Keywords: Frequency Selective Channel, Equalizer, Decision Feedback Equalizer, Bit Error Rate (BER),
Probability of Error (Pe), Throughput.
1. Introduction
Since wireless channels introduce several degradation effects on the signal passing through them, it is important to reverse the effects of the channel. A mechanism that reverses or nullifies the detrimental effects of a distortion-introducing channel is termed an equalizer [1]. The rate of data transmission over a communication system is limited due to the effects of linear and non-linear distortion. Linear distortions occur in the form of inter-symbol interference (ISI), co-channel interference (CCI) and adjacent channel interference (ACI) in the presence of additive white Gaussian noise. Non-linear distortions are caused by subsystems such as amplifiers, modulators and demodulators, along with the nature of the medium. Occasionally, burst noise occurs in communication systems. Different equalization techniques are used to mitigate these effects, and different applications and channel models suit different equalization techniques.
Design of Adaptive Equalizers for Frequency Selective Fading Channels
Wireless channels behave differently for different frequencies. The channel state information describes the state of the channel, which is generally a function of time; practical wireless channels are generally functions of both frequency and time. Hence such errors are irreversible errors which cannot be mitigated using regular techniques [2].
Let the channel have an impulse response h(t). Since any practical system can sense the channel in the discrete time domain, the channel impulse response can be re-considered as h(n). Let the channel in the frequency domain be H(z). Then the output of the channel in the frequency domain is $Y(z) = H(z)X(z)$, where X(z) is the transmitted signal.
The aim of equalizer design is to realize a system with the transfer function
$E(z) = \dfrac{1}{H(z)}$ (3)
There are several ways in which a system with the transfer function E(z) can be practically implemented. The different techniques result in different equalizer structures, such as linear equalizers, MLSE equalizers, zero forcing equalizers, adaptive equalizers and decision feedback equalizers.
The main idea behind the design of a decision feedback equalizer is that if bit errors in the output can be fed back to the system to update the tap weights of the equalizer filter, then subsequent errors can be reduced. The following figure shows the design of a DFE, and a small adaptive sketch follows.
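The sketch below is a minimal adaptive illustration of this feedback idea, using LMS updates in training mode (the transmitted symbols are assumed known while the taps converge); the channel taps, tap counts and step size are illustrative assumptions, not a design from the cited works.

import numpy as np

# Minimal LMS-adapted decision feedback equalizer (DFE) sketch. The
# feedforward filter acts on received samples; the feedback filter
# subtracts ISI reconstructed from past symbol decisions.
rng = np.random.default_rng(2)
h = np.array([1.0, 0.4, 0.2])                 # assumed ISI channel
n_ff, n_fb, mu = 5, 2, 0.02                   # tap counts and LMS step size

s = rng.integers(0, 2, 5000) * 2 - 1          # BPSK source symbols
r = np.convolve(s, h)[:len(s)] + 0.05 * rng.standard_normal(len(s))

w_ff = np.zeros(n_ff); w_ff[0] = 1.0          # feedforward taps
w_fb = np.zeros(n_fb)                         # feedback taps
past = np.zeros(n_fb)                         # past decision history
errors = 0
for n in range(n_ff, len(s)):
    x = r[n - n_ff + 1:n + 1][::-1]           # feedforward input vector
    z = w_ff @ x - w_fb @ past                # equalizer output
    d = 1.0 if z >= 0 else -1.0               # symbol decision
    e = s[n] - z                              # training-mode error signal
    w_ff += mu * e * x                        # LMS tap updates
    w_fb -= mu * e * past
    past = np.concatenate(([d], past[:-1]))   # shift decision history
    errors += d != s[n]

print("decision errors:", errors, "of", len(s) - n_ff)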
4. Previous Work
Zhang et al. in [1] proposed an adaptive transmission scheme in which a frequency-domain precoding matrix composed of the eigenvectors of the channel matrix is used to improve system performance under MMSE equalization, and its optimized performance is derived with a simple expression. Finally, considering two extreme channel conditions, the lower and upper bounds for the diversity performance of the adaptive transmission scheme are derived. Simulation results show that the proposed adaptive transmission achieves significantly better performance for short signal frames and can work well with imperfect channel state information (CSI). The derived performance bounds can serve as benchmarks for OTFS and other precoded OFDM systems.
Caciularu et al. in [2] proposed a new approach for blind channel equalization and decoding based on variational inference, and on variational autoencoders (VAEs) in particular. The authors first consider the reconstruction of uncoded data symbols transmitted over a noisy linear intersymbol interference (ISI) channel with an unknown impulse response, without using pilot symbols. The approach then derives an approximate maximum likelihood estimate of the channel parameters and reconstructs the transmitted data. Results demonstrate significant and consistent improvements in the error rate of the reconstructed symbols compared to existing blind equalization methods such as constant modulus, thus enabling faster channel acquisition.
Suneel et al. in [3] proposed equalization of channel coefficients through evolutionary adaptive algorithms. Conventional differential evolution (DE) and particle swarm optimization (PSO) are used for equalizing a second-order channel. Later, a new optimization technique called teaching learning based optimization (TLBO) is used, which provides a better solution. The paper provides a comparative study of the discussed optimization techniques for different scenarios.
Duan et al. in [4] proposed the concept of turbo equalization for MIMO systems. This approach is pivotal in MIMO systems, since MIMO systems encounter different channel conditions for different entries of the channel matrix (H) and hence need an adaptive equalization technique that caters to the conditions prevailing for the different transmitter and receiver pairs. Turbo encoding was the approach used in this case.
Peng et al. in [5] proposed a 56 Gb/s PAM-4/NRZ transceiver in 40 nm CMOS technology, an equalizer design able to sample data at 56 Gb/s for high speed communications. The practical implementation of the transceiver was done using CMOS technology, and the reception mechanism used a non-return-to-zero (NRZ) approach for equalization. The performance parameters for the design were the compactness (size) and the power dissipation of the circuit.
Chen et al. in [6] proposed a complex-valued B-spline equalizer model for frequency selective channels. The approach was iterative in nature, applying polynomial models to iterative frequency-domain decision feedback equalization. The simulation of the channels was based on Hammerstein channel models, and the approach was shown to attain better results compared to existing approaches.
Magueta et al. in [7] proposed a hybrid iterative space-time equalization mechanism, again intended for MIMO systems. The challenge in this design was that the proposed system had to adjust iteratively to multi-user mmWave massive MIMO systems: the massive MIMO channel response matrix has to be updated continuously, and hence the equalizer too has to adapt accordingly in real-time applications.
Belazi et al. in [8] proposed a bidirectional soft-decision feedback equalizer. The approach also used the concept of turbo equalization, which changes the equalizer filter's coefficients according to the changes in the channel matrix in MIMO systems. The equalization mechanism was further designed as an iterative mechanism intended to reduce the errors as the number of iterations increases, synonymous with a Monte Carlo simulation for BER.
Prakash et al. in [9] proposed a distributed arithmetic-based realization of equalizers. The approach also incorporated the decision feedback mechanism, feeding back the errors corresponding to every sensing iteration of the channel response. The channel, however, has to be sensed continuously so as to update the error profile of the predicted output and to match the input-output mapping of the system.
Tao in [10] proposed a low complexity decision feedback mechanism for wireless channels. While the soft-output linear equalizer (Soft-LE) is well studied, the soft-input, soft-output decision feedback equalizer (Soft-DFE) is much less investigated. The performance metric was the bit error rate (BER) of the system. It was shown through scatter plots that the proposed system attains coherent results both in the complex signaling points and in the BER scenario.
5. Performance Metrics
The major performance metrics to decide the performance of equalizers are the Bit Error Rate (BER), or Probability of Error (Pe), and throughput [11]. These parameters can be defined as:
$\mathrm{BER} = \dfrac{\text{Bit Errors}}{\text{Total Number of Bits}}$ (4)
$\mathrm{Throughput} = \dfrac{\text{Bits Transmitted}}{\text{Time Consumed}}$ (5)
It should be noted that a low value of BER is envisaged, while a high value of throughput is aimed at, in the design of equalizers [12].
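Both metrics follow directly from equations (4) and (5), as the short sketch below shows for an assumed transmission; the bit count, error probability and transmission time are arbitrary example values.

import numpy as np

# BER and throughput per equations (4) and (5) under assumed values.
rng = np.random.default_rng(4)
tx_bits = rng.integers(0, 2, 100_000)
rx_bits = tx_bits ^ (rng.random(tx_bits.size) < 1e-3)   # assumed bit-flip process

ber = np.mean(rx_bits != tx_bits)           # equation (4)
time_consumed = 0.01                        # seconds, assumed
throughput = tx_bits.size / time_consumed   # equation (5), bits per second

print(f"BER = {ber:.2e}, throughput = {throughput / 1e6:.1f} Mbps")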
6. Conclusion
The previous discussions suggest that, although different equalizer techniques are available at the disposal of a design engineer, the decision feedback equalizer is one of the most effective ways of mitigating the ill effects of frequency selective channels and the inter-symbol interference caused by multipath propagation. This paper throws light on the design aspects of different equalization techniques, evaluating the pros and cons of each design, thereby arriving at the conclusion that the most effective equalization technique is the decision feedback mechanism.
REFERENCES
[1] H. Zhang, X. Huang and J. A. Zhang, "Adaptive Transmission With Frequency-Domain Precoding and Linear
Equalization Over Fast Fading Channels," in IEEE Transactions on Wireless Communications,2021, vol. 20, no.
11, pp. 7420-7430
[2] A. Caciularu and D. Burshtein, "Unsupervised Linear and Nonlinear Channel Equalization and Decoding
Using Variational Autoencoders," in IEEE Transactions on Cognitive Communications and Networking, 2020,
vol. 6, no. 3, pp. 1003-1018.
[3] D. Suneel Varma, P. Kanvitha, K. R. Subhashini, "Adaptive Channel Equalization using Teaching Learning based Optimization," IEEE 2019.
[4] Weimin Duan, Jun Tao and Y. Rosa Zheng, “Efficient Adaptive Turbo Equalization for Multiple-Input–
Multiple-Output Underwater Acoustic Communications”, IEEE 2018
[5] Pen-Jui Peng, Jeng-Feng Li, Li-Yang Chen, Jri Lee, “A 56Gb/s PAM-4/NRZ Transceiver in 40nm
CMOS”, IEEE 2017.
[6] Sheng Chen, Xia Hong, Emad Khalaf, Fuad E. Alsaadi and Chris J. Harris, “Comparative Performance
of Complex-Valued B-Spline and Polynomial Models Applied to Iterative Frequency-Domain Decision Feedback
Equalization of Hammerstein Channels”, IEEE 2017.
[7] Roberto Magueta, Daniel Castanheira, Adão Silva, Rui Dinis, and Atílio Gameiro, “Hybrid Iterative
Space-Time Equalization for Multi-User mmW Massive MIMO Systems”, IEEE 2017.
[8] A Belazi, AAA El-Latif, AV Diaconu, R Rhouma, “Bidirectional Soft-Decision Feedback Turbo
Equalization for MIMO Systems”, IEEE 2016.
[9] M. Surya Prakash, Rafi Ahamed Shaik, Sagar Koorapati, “An Efficient Distributed Arithmetic-Based
Abstract
Data encryption has been one of the pivotal domains for research. It has long been a field of enormous research work because of its promise of strong data protection. Initially data used to be handled in text formats only, but with time and technological advancement, data became available in various formats other than just text. With progress in technology, digital images have also become rampant on a rapid scale and are implemented in diverse communication systems. Encryption has accordingly gained prominence as a safeguarding method. A number of encryption algorithms have been put forth and tested with respect to their robustness and efficacy. Image degradation due to continuous capture and transmission has also been examined, in order to minimize the impact of noise. This paper outlines the basics of digital image processing and allied concepts, with its principal highlight on image encryption methods and the different kinds of noise impacting images.
Keywords: Image Processing, Image Encryption, Image Compression, Transform Domain, Chaotic Neural Network
(CNN), Discrete Cosine Transform (DCT), Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE).
1. Introduction
An image can be thought of as a two-dimensional function I = f(x, y), where x and y are the pixel coordinates. An image is essentially a huge matrix of picture elements carrying two major pieces of data [16]:
1) The gray scale value of the picture element, also called the intensity of the image.
2) The R, G, B values pertinent to points with fixed coordinates.
Digital image processing makes use of a digital computer to process digital images. Many changes and modifications can be brought about by this method, mainly variations in the gray scale value of the picture and in the R, G, B values of the image pixels.
Image encryption is a mechanism of encryption in which the pixel values of the image are transformed.
A Review on Image Encryption Techniques and Performance Metrics
Here g1 and g2 represent the transform functions, and M' and N' the pixel coordinates.
The necessary consideration is the overall robust design of the mathematical functions utilized for the encryption method. These functions must be robust enough that it becomes computationally infeasible to attack and breach them within a practical span of time. The infeasibility can be assessed with respect to the growth rate of the algorithm and its computational complexity.
It can be clearly observed from the above graph that the computational complexity grows much faster than the algorithm growth rate [15]. This is an indication of the infeasibility afforded to the encryption algorithm.
B. Image Encryption in Transform Domain
Here various transform domain approaches are utilized [16]. The types of transforms implemented in this category are the Fourier Transform, Fast Fourier Transform, Discrete Cosine Transform, Wavelet Transform and Contourlet Transform. The mathematical expression can be stated as:
$I(m,n) \leftrightarrow I_d(m_d, n_d)$ (1)
where $(m_d, n_d)$ denote the pixel coordinates in the transform domain. The image is changed back into the original domain using the inverse transform after introducing the required changes in the transform domain. A brief generic explanation of the concept is outlined as follows:
The Fast Fourier Transform (FFT) computes the Discrete Fourier Transform efficiently. It is defined as:
$X(k) = \sum_{j=1}^{N} x(j)\, \omega_N^{(j-1)(k-1)}$ (2)
and
$x(j) = \dfrac{1}{N} \sum_{k=1}^{N} X(k)\, \omega_N^{-(j-1)(k-1)}$ (3)
where $\omega_N = e^{-2\pi i/N}$ is an $N$-th root of unity.
The Discrete Cosine Transform (DCT) is given by:
$y(k) = w(k) \sum_{n=1}^{N} x(n) \cos\dfrac{\pi(2n-1)(k-1)}{2N}, \quad k = 1, 2, \ldots, N$ (4)
where
$w(k) = \dfrac{1}{\sqrt{N}}$ for $k = 1$, and $w(k) = \sqrt{\dfrac{2}{N}}$ for $2 \le k \le N$.
The aforementioned functions constitute the generic transform methods employed for image encryption in the transform domain; a minimal sketch follows.
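As one hedged illustration of the idea (a generic construction, not a scheme from the cited literature), the sketch below scrambles the phase of the 2-D FFT of an image with a key-seeded pseudorandom mask and inverts the process with the same key.

import numpy as np

# Transform-domain encryption sketch: scramble the 2-D FFT phase with a
# key-seeded pseudorandom mask. Key handling is an assumption made for
# illustration only.
def encrypt(img: np.ndarray, key: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    spectrum = np.fft.fft2(img.astype(float))
    mask = np.exp(2j * np.pi * rng.random(img.shape))   # random phase mask
    return np.fft.ifft2(spectrum * mask)                # complex cipher image

def decrypt(cipher: np.ndarray, key: int) -> np.ndarray:
    rng = np.random.default_rng(key)                    # regenerate same mask
    mask = np.exp(2j * np.pi * rng.random(cipher.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) / mask))

img = np.random.randint(0, 256, (8, 8))                 # stand-in gray image
restored = decrypt(encrypt(img, key=42), key=42)
print("max reconstruction error:", np.abs(restored - img).max())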
C. Encryption using Neural Network:
Neural networks are based on the fact that the human brain works in a vastly different manner and processes information in unique ways compared to high-end digital computers. The human brain possesses the below-mentioned characteristics:
1) A great measure of non-linearity.
2) An enormously parallel structure.
This is the reason why the human brain can process complex data in a fraction of the time that even the most advanced computer takes [11].
After analysis of the biological human brain, it is observed that the parallel model of the human brain is one where signals from various other parts of the body come together and aggregate in a simultaneous manner.
Here Xi signifies the signals obtained from the various paths, Wi denotes the weight corresponding to a particular path and θ is the bias of the network. This is illustrated by the following figures:
Encryption using Chaotic Neural Network
The chaotic neural network implementation of the encryption mechanism is also a wide scope for research and future advancement. The basics of this concept originate from the statement of chaos theory put forth by Robert May.
Chaos can be understood as a property whereby a fixed output is obtained for a fixed input to the system, but small changes or variations in the input give a totally different output, indicating that no fixed mapping exists between the input and output parameters of the stated system. Hence the system is adaptive to the changes in the inputs and gives output accordingly [10].
The above condition can be illustrated as below:
$Y(i) = f_n(X(i)) \;\; \forall\, X(i)$ (8)
but $Y(i)$ is random for $X(i + \Delta)$,
where $\Delta$ stands for a change in $X$.
The aforementioned mathematical conditions are employed to obtain a 'chaotic neural network', i.e. a neural network possessing the property of chaos. The existence of chaos in the neural network signifies that the network model can vary dynamically according to changes or variations in the respective input of the network, as the sketch below illustrates.
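A minimal sketch of a chaos-driven cipher in the spirit of May's logistic map $x_{n+1} = r\,x_n(1 - x_n)$ follows; the fully chaotic setting r = 4, the XOR keystream construction and the key (the initial condition $x_0$) are illustrative assumptions, not a published scheme.

import numpy as np

# Chaotic keystream sketch using the logistic map; the keystream bytes
# XOR the pixel values, and the key is the initial condition x0.
def logistic_keystream(x0: float, n: int, r: float = 4.0) -> np.ndarray:
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)                    # chaotic iteration
        out[i] = int(x * 255)                  # quantize to a byte
    return out

pixels = np.random.randint(0, 256, 16, dtype=np.uint8)
ks = logistic_keystream(x0=0.3141592, n=pixels.size)
cipher = pixels ^ ks                           # encrypt
plain = cipher ^ ks                            # decrypt with the same key
print("round trip ok:", np.array_equal(plain, pixels))

# Sensitivity to the key illustrates the chaos property: a tiny change
# in x0 yields a completely different keystream.
ks2 = logistic_keystream(x0=0.3141593, n=pixels.size)
print("fraction of keystream bytes that differ:", np.mean(ks != ks2))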
3.Types of noise
Noise is a kind of degradation of the digital image, which arises as digital images are subjected to various forms of processing, transformation, rampant usage and retrieval [12].
Image noise can be categorized as follows:
Gaussian Noise (Amplifier Noise)
Poisson Noise (Shot Noise)
Salt & pepper Noise (Impulse Noise)
Speckle Noise
C. Gaussian Noise (Amplifier Noise)
This form of electronic noise is also called amplifier noise because it originates from the amplifier in image capture, storage and retrieval devices. The noise does not depend on the gray scale of the image pixels and possesses a low power spectral density.
D. Salt & pepper Noise (Impulse Noise)
It is also called impulse noise or spike noise owing to its impulsive behaviour. This noise appears as black and white spots that resemble salt and pepper particles, and it takes only two discrete values, corresponding to salt (white) and pepper (black).
E. Speckle Noise (Multiplicative Noise)
This noise follows a multiplicative pattern: the effective pixel value is the original pixel value plus the original value multiplied by a noise coefficient,
$J = I + n \cdot I$ (10)
Here J stands for the speckle-corrupted image, I is the original image and n is the multiplicative noise term.
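The three noise models described above can be reproduced in a few lines, as sketched below; the standard deviations and impulse densities are arbitrary example values.

import numpy as np

# Sketch of the three noise models applied to a gray image in [0, 255];
# parameter values are illustrative assumptions.
rng = np.random.default_rng(3)
I = rng.integers(0, 256, (64, 64)).astype(float)

# Gaussian (amplifier) noise: additive, independent of pixel intensity.
gaussian = np.clip(I + rng.normal(0, 10, I.shape), 0, 255)

# Salt & pepper (impulse) noise: random pixels forced to 0 or 255.
sp = I.copy()
mask = rng.random(I.shape)
sp[mask < 0.02] = 0          # pepper
sp[mask > 0.98] = 255        # salt

# Speckle (multiplicative) noise, as in equation (10): J = I + n*I.
speckle = np.clip(I + rng.normal(0, 0.1, I.shape) * I, 0, 255)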
4. Previous Work
This section presents the previous work in the domain.
Hao-Tian Wu et al. [1] proposed homomorphic encryption for images. Images were converted to a homomorphic format in which the image is a function of two components, the reflectance and the illumination coefficients. The Paillier encryption mechanism is used in the approach to encrypt the data. The performance of the proposed system was evaluated in terms of the peak signal to noise ratio, whose variation was analyzed as a function of the embedding rate. The major challenge with the proposed work is that it does not have any separate or dedicated noise removal technique to enhance the noise immunity of the images. Moreover, the encryption mechanism does not exhibit the significantly high amount of chaos that makes cryptosystems more immune to brute force attacks.
K. H. Jung [2] proposed a technique based on data hiding in images using Pixel Value Difference (PVD) and block expansion. In this approach, interpolation is used to expand blocks of pixels in which the pixel difference is minimal, and the values of these blocks are then utilized for data embedding. The major challenge with this approach is that interpolation and block expansion may often lead to loss of data and resolution. This manifests in the low value of the peak signal to noise ratio of the system and the relatively high value of its mean square error.
Somendu Chakroborty et al. [3] proposed the LSB injection technique for performing image steganography. The approach converts the image into an LSB-MSB decomposition to find the coefficients carrying the least significant data and those carrying the most significant data. The major challenge with such approaches is discriminating between the coefficients holding MSBs and those holding LSBs. Embedding data in the transform domain is effective; however, it has the disadvantage of data loss during the approximations of the transform and inverse transform process. This again manifests in an increased error profile of the extracted data.
K. Mohammad et al. [4] proposed blind spectral de-convolution using the Split Bregman approach for the restoration of images. This technique also introduced the use of the Wavelet Transform for the reconstruction of hyperspectral images, in which the wavelet coefficients are used for image restoration. This is done by iterative decomposition of the image and discarding of its detailed coefficients. The main challenge of image restoration in the transform domain is that transform and inverse transform pairs often introduce irrecoverable changes which cause loss of resolution in the image. Such methods may be effective in removing noise and blurring effects, but are not effective enough to simultaneously retain the image characteristics needed to maintain quality.
H. Dadgostar et al. [5] developed an interval-valued intuitionistic fuzzy edge detection technique for conducting steganography and data embedding in images. The approach showed that the interval-valued approach for detecting image LSBs is effective for injecting or embedding secret data. Fuzzy logic was used as an expert-view based system to decide the places or blocks where data can be embedded so as to be least perceptible to attackers. The major disadvantage of fuzzy based approaches for data embedding is that framing the membership functions for the cryptosystem is often extremely complex and non-deterministic, as the fuzzy system needs to be trained with sufficiently large amounts of data to find accurate ranges for the membership functions.
Xinyi Zhou et al. [6] proposed LSB based color image steganography considering the effects of noise. In this approach it was shown that images are often degraded in their resolution, correlation coefficients and peak signal to noise ratio due to the effect of noise and disturbances. The typical noise effects which affect images are Gaussian noise, speckle noise, salt and pepper noise and Poisson noise. The noise removal mechanism has to be effective enough to remove the noise while degrading image quality as little as possible. The residual noise and disturbances can be evaluated in terms of the signal to noise ratio of the image.
Bin Li et al. [7] developed clustering modification for spatial image data hiding applications. The approach applies clustering to find the redundant information in images. It was shown that images in general contain a lot of redundant data in their pixels, which manifests clearly when spectral analysis is performed: many pixels carry information about common spectral bands and hence cause large redundancies. This can cause the image to take up large space in memory for storage and to require more bandwidth for transmission. Another drawback is the increased time and space complexity in image processing applications.
5. Performance Indices
The Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) are the two major parameters for deciding the extent of degradation of an image. While the MSE measures the error in the image with respect to the original image, the PSNR signifies the effect of residual noise.
The MSE represents the cumulative squared error between the original image and the image after transformation. A low value of MSE indicates lower degradation of the original image, while a higher value indicates higher degradation.
The PSNR indicates the amount of residual noise existing in the concerned image: the higher the signal power and the lower the residual noise power, the higher the PSNR. PSNR is usually expressed in decibels and is mathematically expressed as below:
$\mathrm{PSNR} = 10 \log_{10} \dfrac{\mathrm{size}^2}{\mathrm{MSE}}$ (11)
where 'size' denotes the peak pixel value (255 for an 8-bit image).
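Both indices follow directly from their definitions, as in the sketch below, which assumes 8-bit images (peak value 255) and synthetic test data.

import numpy as np

# MSE and PSNR per equation (11) for an assumed 8-bit image pair.
def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    return 10 * np.log10(peak ** 2 / mse(a, b))   # equation (11)

orig = np.random.randint(0, 256, (64, 64))
noisy = np.clip(orig + np.random.normal(0, 5, orig.shape), 0, 255)
print(f"MSE = {mse(orig, noisy):.2f}, PSNR = {psnr(orig, noisy):.2f} dB")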
REFERENCES
[1] H.T.Wu, Y.M.Cheung, Z.Yang, S.Tang, “A high-capacity reversible data hiding method for homomorphic encrypted images”, Journal of
Visual Communication and Image Representation, Vol-62, Elsevier 2021
[2] K.H. Jung, “High-capacity reversible data hiding method using block expansion in digital images”, Volume-14, Springer 2020
[3] Somendu Chokroborty, Anand Singh Jalal, Charul Bhatnagar, “LSB based non blind predictive edge adaptive image steganography”,
Volume-76, Issue-6, Springer 2019
[4] K.Mohammad, M.Sajid, I Mehmood “Image steganography using uncorrelated color space and its application for security of visual contents
in online social networks”, Elsevier 2018
[5] H.Dadgostar, F.Afsari, “Image steganography based on interval-valued intuitionistic fuzzy edge detection and modified LSB”, Volume-30,
Elsevier 2017
[6] Xinyi Zhou, Wei Gong, WenLong Fu, Liang Jin, “An Improved Method for LSB based color image steganography combined with
cryptography”, IEEE 2016
[7] Bin Li, Ming Wang, Xiaolong Li, Shunquan Tan, Jiwu Huang, "A strategy of clustering modification directions in Spatial Image Steganography", Vol-10, Issue-9, IEEE Transactions 2015
[8] Bin Li, M Wang, J Huang, X Li, "A New Cost Function for Spatial Image Steganography", IEEE 2014.
[9] Mansi S, Vijay H Mankar, “Current Status and Key Issues in Image Steganography: A Survey”, Volume-13, Elsevier 2014
[10] Zhenxing Qian, Xinpeng Zhang, Shuozhong Wang, “Reversible Data Hiding in Encrypted JPEG Bitstream”, IEEE 2014
[11] A Bakhshandeh, Z Eslami “An authenticated image encryption scheme based on chaotic maps and memory cellular automata”, Elsevier 2013.
[12] K Gu, G Zhai, X Yang, W Zhang, “A new reduced-reference image quality assessment using structural degradation model”, IEEE 2013
[13] YW Tai, S Lin, “Motion-aware noise filtering for de-blurring of noisy and blurry images”, IEEE 2012
[14] A. Kanso and M. Ghebleh, “A Novel Image Encryption Algorithm Based on a 3D Chaotic Map”, Elsevier 2012
[15] Xinpeng Zhang, “Lossy Compression and Iterative Reconstruction for Encrypted Image”, IEEE 2011
[16] W Hong, TS Chen, HY Wu, "An improved reversible data hiding in encrypted images using side match", IEEE 2011
[17] Seyed Mohammad Seyedzade, Reza Ebrahimi Atani, Sattar Mirzakuchaki, “A Novel Image Encryption Algorithm Based on Hash Function”,
IEEE 2010
[18] Ismail Amr Ismail, Mohammed Amin and Hossam Diab, “A Digital Image Encryption Algorithm Based A Composition of Two Chaotic
Logistic Maps”, International Journal of Network Security 2010
[19] CK Huang, HH Nien, “Multi chaotic systems based pixel shuffle for image encryption”, Elsevier 2009
[20] R Rhouma, S Meherzi, S Belghith, “OCML-based colour image encryption”, Elsevier 2009
[21] T Gao, Z Chen, “A new image encryption algorithm based on hyper-chaos”, Elsevier 2008
[22] KW Wong, BSH Kwok, WS Law, “A fast image encryption scheme based on chaotic standard map”, Elsevier 2008
[23] Weiming Zhang, Kede Ma, Nenghai Yu, "Reversibility improved data hiding in encrypted images", Elsevier 2013
[24] Zhengjun Liu, Yu Zhang, She Li, Wei Liu, Wanyu Liu, Yanhua Wang, Shutian Liu, "Double image encryption scheme by using random phase encoding and pixel exchanging in the gyrator transform domains", Elsevier 2012
[25] Hongjun Liu, Xingyuan Wang, "Color image encryption using spatial bit-level permutation and high-dimension chaotic system", Elsevier 2011
[26] Yue Wu, Joseph P. Noonan, Sos Agaian, "NPCR and UACI Randomness Tests for Image Encryption", IEEE 2011
[27] Xingyuan Wang, Lin Teng, Xue Qin, "A novel colour image encryption algorithm based on chaos", Elsevier 2011
[28] "A fast color image encryption algorithm based on coupled two-dimensional piecewise chaotic map", Elsevier 2011
[29] Zhengjun Liu, She Li, Wei Liu, Yanhua Wang, Shutian Liu, "Image encryption algorithm by using fractional Fourier transform and pixel scrambling operation based on double random phase encoding", Elsevier 2012
[30] Zhang Ying-Qian, Wang Xing-Yuan, "A symmetric image encryption algorithm based on mixed linear–nonlinear coupled map lattice", Elsevier 2014
[31] Benyamin Norouzi, Seyed Mohammad Seyedzadeh, Sattar Mirzakuchaki, Mohammad Reza Mosavi, "A novel image encryption based on hash function with only two-round diffusion process", Springer 2013
[32] Xingyuan Wang, Lintao Liu, Yingqian Zhang, "A novel chaotic block image encryption algorithm based on dynamic random growth technique", Elsevier 2014
[33] Shiping Wen, Zhigang Zeng, Tingwen Huang, Qinggang Meng, Wei Yao, "Lag Synchronization of Switched Neural Networks via Neural Activation Function and Applications in Image Encryption", IEEE Transactions 2014
[34] Rohit Verma, Jahid Ali, "A Comparative Study of Various Types of Image Noise and Efficient Noise Removal Techniques", International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), 2013
[35] Aziz Makandar, Daneshwari Mulimani, Mahantesh Jevoor, "Comparative Study of Different Noise Models and Effective Filtering Techniques", International Journal of Science and Research (IJSR)
[36] C. Mythili, V. Kavitha, "Efficient Technique for Colour Image Noise Reduction", The Research Bulletin of Jordan, ISWSA; ACM 201
[37] William Stallings, Cryptography and Network Security, Pearson India.
[38] Gonzalez and Woods, Digital Image Processing, Pearson India.
A Survey on Supervised Learning Models for Modelling Health Risk Evaluation
Corresponding Author's E-mail ID: arjun.parihar@sdbc.ac.in
Abstract
Chronic health risks have risen among young individuals due to several factors such as a sedentary lifestyle, poor eating habits, sleep irregularities, environmental pollution and workplace stress. The problem is likely to become more menacing in the near future. One possible solution is to design health risk prediction systems which can evaluate critical parameters of an individual and predict possible health risks. As the data shows large divergences in nature with non-correlated patterns, machine learning based methods become inevitable for designing systems which can analyze the critical factors or features of the data and predict possible risks. The choice of classifier is important here, as the data often shows an overlapping nature. An overview of these aspects and the different methodologies adopted in this regard is presented in this paper.
Keywords: Health Risk Assessment, Machine Learning, Error Performance, Accuracy Estimation.
1. Introduction
With the increase in sedentary lifestyles around the globe, different health risks are affecting people worldwide. While life expectancy has increased, increasing health risks can be seen throughout the world. The majority of the population is occupied in sedentary, non-active vocations and neglects health markers, which has led to an earlier onset of health risks. The major reasons are [2]:
1) Sedentary Lifestyle
2) Lack of Physical Exercise.
3) Poor Food Choices.
4) Environmental Pollution.
5) Climate Change
6) Stress in everyday life etc.
Hence, addressing these health risks has become imperative. However, the cost of healthcare and medication also continues to rise, and it is the government's job to provide an efficient, cost-effective medical system.
2. Literature Review
Li et al. [6] showed that the massive amount of medical data accumulated from patients and healthcare providers has become a vast reservoir of knowledge that may enable promising applications such as risk predictive modeling, clinical decision support, and disease or safety surveillance. However, discovering knowledge from big medical data can be very complex because of the nature of this type of data: it normally contains large amounts of unstructured data, may have many missing values, and can be highly complex and heterogeneous. To address these challenges, the authors proposed a Collaborative Filtering-Enhanced Deep Learning approach. In particular, they estimate missing values based on patient similarity, i.e., they predict one patient's missing features based on the values of similar patients. This is implemented with the Collaborative Topic Regression method, which tightly couples a topic model with probabilistic matrix factorization and is able to utilize the rich information hidden in the data. A deep neural network-based method is then applied for the prediction of health risks; this method helps handle complex and multi-modality data. Extensive experiments on a real-world dataset show improvements of the proposed algorithm over state-of-the-art methods.
Rajilwall et al. [7] proposed a machine learning based prognostic modelling framework which can run on static/low-velocity big data from electronic health records as well as on extreme-velocity streaming big data captured from wearables such as fitness bands and biosensor watches. The authors describe a scalable neural network algorithm used to achieve highly accurate results on fuzzy data. They present the outcomes of the framework implementation for static and low-velocity/volume settings from EHR and clinical databases, with experimental validation on two openly accessible CVD data sets (the NHANES dataset and the Framingham Heart Study dataset), showing promising outcomes in terms of the performance of different modelling algorithms for disease status prediction.
Dimopoulos et al. [8] noted that the use of Cardiovascular Disease (CVD) risk estimation scores in primary prevention has long been established; however, their performance remains a matter of concern. The aim of the study was to explore the potential of ML methodologies for CVD prediction, especially compared to an established risk tool. Depending on the classifier and the training dataset, the outcome varied in efficiency but was comparable between the two methodological approaches. In particular, the established score showed accuracy 85%, specificity 20%, sensitivity 97%, positive predictive value 87%, and negative predictive value 58%, whereas for the machine learning methodologies accuracy ranged from 65 to 84%, specificity from 46 to 56%, sensitivity from 67 to 89%, positive predictive value from 89 to 91%, and negative predictive value from 24 to 45%; random forest gave the best results, while k-NN gave the poorest.
Maxwell et al. [9] showed that multi-label classification remains a challenging problem for medical records; because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. In medical data, patients can have symptoms of multiple diseases at the same time, and it is important to develop tools that help identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. The results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produce accuracy comparable to that of common methods such as Support Vector Machines. The authors implemented DNNs to handle both problem transformation and algorithm adaptation type multi-label methods and compared the two to see which is preferable.
Chen et al. [10] showed that with the growth of big data in biomedical and healthcare communities, accurate analysis of medical data benefits early disease detection, patient care, and community services. However, analysis accuracy is reduced when the quality of medical data is incomplete. Moreover, different regions exhibit unique characteristics of certain regional diseases, which may weaken the prediction of disease outbreaks. The authors streamline machine learning algorithms for effective prediction of chronic disease outbreaks in disease-frequent communities. To overcome the difficulty of incomplete data, they use a latent factor model to reconstruct the missing data and experiment on a regional chronic disease, cerebral infarction. They propose a new convolutional neural network (CNN)-based multimodal disease risk prediction algorithm using structured and unstructured data from hospitals; to the best of their knowledge, none of the existing work focused on both data types in the area of medical big data analytics. Compared with several typical prediction algorithms, the prediction accuracy of the proposed algorithm reaches 94.8%, with a convergence speed faster than that of the CNN-based unimodal disease risk prediction algorithm.
Nithya et al. [11] observed that Machine Learning (ML) is the fastest rising arena in computer science and that health informatics poses an extreme challenge. The aim of machine learning is to develop algorithms which can learn and improve over time and can be used for predictions. Machine learning practices are widely used in various fields, and the healthcare industry in particular has benefitted greatly from machine learning prediction techniques. It offers a variety of alerting and risk management decision support tools targeted at improving patient safety and healthcare quality. With the need to reduce healthcare costs and the movement towards personalized healthcare, the healthcare industry faces challenges in essential areas like electronic record management, data integration, and computer-aided diagnosis and disease prediction. Machine learning offers a wide range of tools, techniques, and frameworks to address these challenges. The paper presents a study of various prediction techniques and tools for machine learning in practice, together with a glimpse of machine learning applications in various domains, highlighting its prominent role in the healthcare industry.
Ross et al. [12] noted that the effectiveness of precision medicine is beginning to be realized in some areas of medicine. In oncology, genetic profiling is now being used to identify patients for whom tailored chemotherapy regimens, directed against the individual's personal cancer mutation, can significantly improve outcomes relative to traditional therapy. Rather than the current empirical approach to treatment, there is hope that with a deeper understanding of biology and pharmacogenomics we may one day be able to guarantee that every patient receives the right dose of the right medicine at the right time. The proposed machine-learned models outperformed stepwise logistic regression models both for the identification of patients with PAD (area under the curve, 0.87 vs 0.76, respectively; P = .03) and for the prediction of future mortality (area under the curve, 0.76 vs 0.65, respectively; P = .10). Both machine-learned models were markedly better calibrated than the stepwise logistic regression models, thus providing more accurate disease and mortality risk estimates.
LaFreniere et al. [13] proposed that an artificial neural network is a powerful machine learning technique that allows prediction of the presence of disease in susceptible populations while removing the potential for human error. The authors identify the important risk factors based on patients' current health conditions, medical records, and demographics. These factors are then used to predict the presence of hypertension in an individual. The risk factors are also indicative of the probability of a person developing hypertension in the future and can therefore be used as an early warning system. The authors design a neural network model that predicts hypertension with about 82% accuracy, which is good performance given the chosen risk factors as inputs and the large integrated data used for the study. The proposed network model utilizes very large sample sizes (185,371 patients and 193,656 controls) from the Canadian Primary Care Sentinel Surveillance Network (CPCSSN) data set.
Tay et al. [14] proposed a novel learning algorithm, a key factor that influences the performance of machine learning-based prediction models, and utilized it to develop a CVD risk prediction tool. This neural-inspired algorithm, called the Artificial Neural Cell System for classification (ANCSc), is inspired by mechanisms that develop the brain and empower it with capabilities such as information processing/storage and recall, decision making and initiating actions on the external environment. Specifically, the authors exploit three natural neural mechanisms responsible for developing and enriching the brain, namely neurogenesis, neuroplasticity via nurturing, and apoptosis, when implementing the ANCSc algorithm. Benchmark testing was conducted using the Honolulu Heart Program (HHP) dataset and the results were juxtaposed with two other algorithms, the Support Vector Machine (SVM) and the Evolutionary Data-Conscious Artificial Immune Recognition System (EDC-AIRS). Empirical experiments indicate that the ANCSc algorithm statistically outperforms both SVM and EDC-AIRS. Key clinical markers identified by ANCSc include risk factors related to diet/lifestyle, pulmonary function, personal/family/medical history, blood data, blood pressure, and electrocardiography. These clinical markers are, in general, also found to be clinically significant, providing a promising avenue for identifying potential cardiovascular risk factors to be evaluated in clinical trials.
Sowjanya et al. [15] showed that lack of knowledge about diabetes causes untimely deaths among the population at large. Therefore, a means of spreading awareness about diabetes can significantly benefit people in India. In this work, a mobile/Android application based solution to overcome the lack of awareness about diabetes was presented. The application uses machine learning techniques to predict diabetes levels for its users, and at the same time provides knowledge about diabetes and some suggestions on the disease. A comparative analysis of four machine learning (ML) algorithms was performed, in which the Decision Tree (DT) classifier outperformed the others. Hence, the DT classifier is used in the mobile application for diabetes prediction, using a real-world dataset collected from a reputed hospital in the Chhattisgarh state of India.
3. Existing Models
The major challenges in the design of health risk prediction systems are:
1) The data is extremely complex and uncorrelated in nature.
2) The large number of variables makes it extremely challenging to carry out regression analysis.
3) The outcomes are often individual-dependent and do not align with fixed patterns.
Mostly, evolutionary algorithms are used in this domain to design models for health risk prediction. Evolutionary algorithms try to mimic the human attributes of thinking, which are:
1) Parallel data processing
2) Self-Organization
3) Learning from experiences
Some of the major techniques commonly employed in the domain of health risk prediction are discussed below:
1) Statistical Regression: These techniques treat prediction as a time-series fitting problem, seeking a model that accurately fits the data set at hand. The approach generally uses auto-regressive models and minimizes the mean error of prediction.
2) Neural Networks: A training-testing rule is associated with neural networks: the model is trained on one part of the data and evaluated on the rest, as sketched below. Deep Neural Networks are neural networks with multiple hidden layers and are generally used for training complex datasets.
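As a minimal sketch of such a training-testing workflow, the following Python example trains a multi-layer (deep) neural network classifier on synthetic health-marker data. The feature semantics, the synthetic dataset and the scikit-learn model choice are assumptions for illustration, not the method of any surveyed paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical feature matrix: rows are individuals, columns are risk markers
# (e.g. BMI, blood pressure, activity level); labels mark at-risk / not-at-risk.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Standard training-testing split, as described above for neural networks.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A deep neural network here is simply an MLP with multiple hidden layers.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))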
4. Conclusion
It can be concluded form the previous discussions that a vast amount of clinical data scattered across different
sites on the Internet hinders users from finding helpful information for their well-being improvement. Besides, the
overload of medical information (e.g., on drugs, medical tests, and treatment suggestions) have brought many
difficulties to medical professionals in making patient-oriented decisions. These issues raise the need to apply
recommender systems in the healthcare domain to help both, end-users and medical professionals, make more
efficient and accurate health-related decisions. In this article, we provide a systematic overview of existing
research on healthcare recommender systems.
References:
[15] K. Sowjanya, A. Singhal and C. Choudhary, "MobDBTest: A machine learning based system for
predicting diabetes risk using mobile devices," 2015 IEEE International Advance Computing Conference (IACC),
2015, pp. 397-402, doi: 10.1109/IADCC.2015.7154738.
[16] LM Hlaváč, D Krajcarz, IM Hlaváčová, S Spadło, "Precision comparison of analytical and statistical-regression models for AWJ cutting", Precision Engineering, Elsevier 2017, vol. 50, pp. 148-159
[17] C Bergmeir, RJ Hyndman, B Koo, "A note on the validity of cross-validation for evaluating autoregressive time series prediction", Computational Statistics & Data Analysis, Elsevier 2018, vol. 120, pp. 70-83.
[18] D Kumar, KN Rai, “Numerical simulation of time fractional dual-phase-lag model of heat transfer within
skin tissue during thermal therapy”, Journal of Thermal Biology, Elsevier 2017, vol. 67, pp. 49-58
[19] M. Chen, U. Challita, W. Saad, C. Yin and M. Debbah, "Artificial Neural Networks-Based Machine
Learning for Wireless Networks: A Tutorial," in IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp.
3039-3071, Fourthquarter 2019, doi: 10.1109/COMST.2019.2926625.
[20] I. H. Laradji, R. Pardinas, P. Rodriguez and D. Vazquez, "Looc: Localize Overlapping Objects with
Count Supervision," 2020 IEEE International Conference on Image Processing (ICIP), 2020, pp. 2316-2320, doi:
10.1109/ICIP40778.2020.9191122.
[21] S Bandaru, AHC Ng, K Deb, “Data mining methods for knowledge discovery in multi-objective
optimization: Part A-Survey”, Expert Systems with Applications, Elsevier 2017, vol. 70, no.15 pp.139-159
[22] A. Karpatne, I. Ebert-Uphoff, S. Ravela, H. A. Babaie and V. Kumar, "Machine Learning for the
Geosciences: Challenges and Opportunities," in IEEE Transactions on Knowledge and Data Engineering, vol. 31,
no. 8, pp. 1544-1554, 1 Aug. 2019, doi: 10.1109/TKDE.2018.2861006.
[23] V. Sze, Y. Chen, T. Yang and J. S. Emer, "Efficient Processing of Deep Neural Networks: A Tutorial
and Survey," in Proceedings of the IEEE, vol. 105, no. 12, pp. 2295-2329, Dec. 2017, doi:
10.1109/JPROC.2017.2761740.
[24] W. Zhou, J. Li, M. Zhang, Y. Wang and F. Shah, "Deep Learning Modeling for Top-N Recommendation
With Interests Exploring," in IEEE Access, vol. 6, pp. 51440-51455, 2018, doi: 10.1109/ACCESS.2018.2869924.
LoRa based Agriculture & Crop Monitoring Systems using Internet of Things (IoT)
Corresponding Author's E-mail ID: abhishek.garg835@gmail.com
Abstract:
Agriculture is one of the major factors of human sustainability. There are several factors to consider in agriculture, especially for crops, in order to yield maximum production. Agriculture is an important part of our daily life and of our country's growth: as per research, 70% of cities and villages in India still depend on agriculture for their livelihood, and for 82% of farmers agriculture is the first priority. With the internet and technology now widely available, we can apply technology to agriculture and improve our crop production rate, which is helpful to many farmers. The goal of this project is to make agriculture advanced using the latest technology and IoT (Internet of Things). The major feature of the project is the GSM motor controller, which performs tasks like automatic control of the water pump, spraying, bird and animal scaring, moisture sensing, etc. It also includes temperature and humidity detection and the plotting of real-time graph data. Warehouse management is an important part of agriculture, because the warehouse keeps crops for future use and requires management of temperature and humidity as well as fire and theft detection. All the sensors and modules (LoRa, GSM, temperature) are connected to the microcontroller.
Keywords: - Internet of things, Agriculture automation, GSM, Wireless System, LoRa, Wi-Fi.
1. Introduction
The agriculture industry will become increasingly important in the coming decades. Agriculture plays a major role in our day-to-day life because our day starts with agricultural products: we need food and fabrics (cotton, wool, etc.) for our daily life, and these are provided by farmers through agriculture. Agriculture is therefore important to us, yet we lag in it because we still use traditional methods. The traditional method is good, but with the population increasing day by day it cannot deliver higher crop production, and it does not benefit the farmer or the next generation. Technology is now advanced, so applying it to agriculture can bring many benefits to our farmers and to the country's growth. We therefore apply the Internet of Things to agriculture to increase production and reduce the effort required of farmers. With the help of IoT, we can push agriculture to the next generation. Smart agriculture drones and robots are already used by farmers, but only to a small extent, because most of our farmers are not aware of them. With smart farming, we can improve our crop production and the storage capacity for dry crops, which is highly beneficial to farmers. The agriculture industry is adopting the Internet of Things, now called the Agricultural Internet of Things. Challenges regarding climate and weather conditions can thus be detected easily with the help of an IoT system (which also monitors environmental factors and the growth of crop production). Soil parameters are taken with the help of sensors, a controller, modules, and the Internet of Things, and the collected data is sent for analysis to servers using a wireless protocol. These parameters help crop production by indicating what ingredients the crop actually needs; this is a financial help to many farmers, because they then avoid unwanted chemicals that harm crops and human health. The aim of this paper is to build an advanced, high-tech agriculture system using IoT (Internet of Things) and wireless technology. The advantages of this paper are the use of a GSM motor controller, a GPS-based monitoring
system, a thief alert system, a temperature & humidity monitoring system, and water level indication. All parameters are controlled using advanced sensors, a controller, and our web server or a mobile application.
2. Literature Review: -
The new world of decreasing water levels, the drying up of rivers and water sources, and an unpredictable environment presents an urgent need to develop water source infrastructure. To cope with this, the use of temperature and moisture sensors at suitable locations for monitoring crops is implemented. Each level is suited for different applications and has different component and deployment configurations. Designing an IoT system can be a complex and challenging task, as these systems involve interaction between various components, such as the temperature and humidity sensor, soil moisture sensor, GSM motor controller, water level indicator, MQ3 sensor, LoRa module, etc., with all sensors interfacing with the controller. This module uses a connection-free (wireless) sensor infrastructure to observe and control agriculture variables like real-time temperature and humidity for better management and maintenance of agricultural production.
Smart Irrigation: -
Smart irrigation systems can improve crop yield while saving water. An IoT-based smart irrigation system detects soil moisture and determines, according to the soil moisture, how much water the plants and crops require. This helps supply water properly and reduces water wastage. Smart irrigation systems also collect moisture level measurements on a server or in the cloud, where the collected data can be analysed to plan watering schedules. A device for smart irrigation uses water valves, soil sensors and a wireless-enabled programmable computer; a minimal control loop is sketched below.
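A minimal control loop of this kind could look as follows in MicroPython on an ESP32; the GPIO pins and the dryness threshold are assumptions that must be matched to the actual wiring and the particular soil sensor.

from machine import ADC, Pin
import time

moisture = ADC(Pin(34))        # soil moisture sensor analog output (hypothetical pin)
moisture.atten(ADC.ATTN_11DB)  # allow the full 0-3.3 V input range
pump = Pin(26, Pin.OUT)        # relay driving the water pump (hypothetical pin)

DRY_THRESHOLD = 3000           # raw ADC value; for many sensors a higher reading means drier soil

while True:
    reading = moisture.read()  # 0-4095 on the ESP32's 12-bit ADC
    pump.value(1 if reading > DRY_THRESHOLD else 0)  # water only while the soil is dry
    time.sleep(60)             # re-check once per minute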
Greenhouses are structures with glass or plastic roofs that provide a conducive environment for the growth of plants. The climatological conditions inside a greenhouse can be monitored and controlled to provide the best conditions for plant growth. Temperature, humidity, soil moisture, light, and carbon dioxide levels are monitored using sensors, and climatological conditions are controlled automatically using actuation devices (such as valves for releasing water and switches for controlling fans). IoT systems play an important role in the greenhouse and help in improving productivity. The design of a wireless sensing and control system for precision greenhouse management is described.
Fig1. Smart Agriculture System based on GSM Motor Controller and Cloud Service
In this figure, all sensors and modules send data to the controller. The controller processes the data and sends it to the cloud. The GSM-based motor controller is operated via SMS and calls through a mobile phone, so the motor can be switched on and off over long distances.
3. Hardware specification
LoRa: -
LoRa™ is a spread spectrum modulation technology used to achieve communication between nodes. The range of LoRa is approx. 100 meters, which can be increased by raising the antenna height (using an IPEX antenna) and supplying proper power. It operates in the 410–525 MHz frequency range, and the LoRa module is interfaced over SPI. The operating temperature of the LoRa module is -30°C to 85°C. The module is used in many applications, such as automatic meter reading, home and building automation, security, wireless notice boards, and many others.
SIM800L is a cellular module that allows GPRS transmission and the sending and receiving of voice calls and SMS. SIM800L is a low-cost module whose specifications and wideband frequency support make it suitable for all GPRS-based applications, including long-range ones. The module supports AT commands, so it can be used anywhere a network is in range. It has a micro-SIM slot, an antenna for the network signal, and microphone, speaker and ring pinouts. The module requires a 3.4 to 4.4 V DC power supply with a minimum of 2 A, and its working temperature range is -40°C to 85°C.
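As a sketch of how the GSM motor controller described earlier might receive SMS commands through this module, the MicroPython fragment below uses standard SIM800 AT commands; the UART pins, baud rate, relay pin and message keywords are assumptions.

from machine import UART, Pin
import time

uart = UART(2, baudrate=9600, tx=Pin(17), rx=Pin(16))  # SIM800L on UART2 (hypothetical pins)
motor = Pin(27, Pin.OUT)                               # relay driving the motor (hypothetical pin)

def send_at(cmd, wait=1):
    # Send one AT command and return the module's raw reply (may be None).
    uart.write((cmd + "\r\n").encode())
    time.sleep(wait)
    return uart.read()

send_at("AT")                  # check that the module responds
send_at("AT+CMGF=1")           # SMS text mode
send_at("AT+CNMI=2,2,0,0,0")   # push incoming SMS straight to the UART

while True:
    data = uart.read()
    if data:
        text = data.upper()
        if b"MOTOR ON" in text:    # switch the motor on via SMS keyword
            motor.value(1)
        elif b"MOTOR OFF" in text: # switch it off again
            motor.value(0)
    time.sleep(1)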
The soil moisture sensor is commonly used in garden monitoring and in agriculture or farming. The sensor basically detects soil moisture, i.e. how much water content is available in the soil. Its working principle is to use capacitance to measure the dielectric permittivity of the surrounding medium, which reflects the water content of the soil. The operating voltage of this module is 3.3 V to 5 V DC, and both analog and digital outputs are available.
Smoke Sensor:
The MQ2 sensor is a gas sensor, mostly used to detect gases. It is a metal oxide semiconductor (MOS) sensor, also known as a chemiresistor: it works through changes in temperature and resistance value. The sensor's operating voltage is +5 V DC. It detects LPG, alcohol, propane, hydrogen, CO, and methane. The sensor has a 20-second preheat duration, and its sensitivity can be varied using a potentiometer.
The ESP32 offers many features and is widely available in the market. It has a dual-core 32-bit LX6 processor with built-in Wi-Fi and BLE, making it capable of both long and short-range communication: long-range communication uses Wi-Fi (internet) and short-range communication uses BLE (Bluetooth). Both types of Bluetooth are available, BLE and Classic Bluetooth. The module runs at a 240 MHz clock frequency and provides 520 KB of SRAM, 448 KB of ROM and 16 KB of RTC SRAM, along with an 18-channel 12-bit ADC. SPI, I2C, and UART communication are supported, and digital and PWM pins are available. The operating voltage of this module is 3.3 V to 5 V DC.
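To illustrate how the ESP32 could push sensor readings to the web server mentioned earlier, here is a minimal MicroPython sketch. The Wi-Fi credentials, server URL and DHT11 pin are placeholders, and the urequests library is assumed to be installed on the board.

import network, time, dht, urequests
from machine import Pin

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("FARM_SSID", "password")   # hypothetical Wi-Fi credentials
while not wlan.isconnected():
    time.sleep(1)

sensor = dht.DHT11(Pin(4))              # temperature/humidity sensor (hypothetical pin)

while True:
    sensor.measure()
    payload = {"temp_c": sensor.temperature(), "humidity": sensor.humidity()}
    try:
        # Hypothetical endpoint; replace with the actual server URL.
        urequests.post("http://example.com/api/readings", json=payload).close()
    except OSError:
        pass                            # skip a failed upload and retry next cycle
    time.sleep(300)                     # one reading every five minutes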
4. Conclusion:
Just as the backbone is important to a human being, agriculture is the backbone of our country. It would not be wrong to say that agriculture is the backbone of every country, because agriculture provides the food for people's survival. The IoT-based smart agriculture system covers aspects like the soil monitoring system, temperature monitoring system, GSM motor controller, and other soil and warehouse parameters. It helps to increase crop production and serves crop safety purposes. This project can help many farmers and make our farmers' lives easier.
REFERENCES:
[1] Dr. V. Suma, “Internet of Things (IoT) based Smart Agriculture in India: An Overview”.
[2] Nikesh Gondchawar, Prof. Dr. R. S. Kawitkar, "IoT based smart agriculture".
[3] Zhang, L., Dabipi, I. K. and Brown, W. L, "Internet of Things Applications for Agriculture". In, Internet of
Things A to Z: Technologies and Applications, Q. Hassan (Ed.), 2018
[4] S. Navulur, A.S.C.S. Sastry, M. N. Giri Prasad, "Agricultural Management through Wireless Sensors and
Internet of Things" International Journal of Electrical and Computer Engineering (IJECE), 2017; 7(6) :3492-3499.
[5] E. Sisinni, A. Saifullah, S. Han, U. Jennehag and M. Gidlund, "Industrial Internet of Things: Challenges,
Opportunities, and Directions," in IEEE Transactions on Industrial Informatics, vol. 14, no. 11, pp. 4724-4734,
Nov. 2018.
[6] S. R. Nandurkar, V. R. Thool, R. C. Thool, “Design and Development of Precision Agriculture System Using
Wireless Sensor Network”, IEEE International Conference on Automation, Control, Energy and Systems (ACES),
2014
[7] Joaquín Gutiérrez, Juan Francisco Villa-Medina, Alejandra Nieto-Garibay, Miguel Ángel Porta-Gándara, "Automated Irrigation System Using a Wireless Sensor Network and GPRS Module", IEEE Transactions on Instrumentation and Measurement, 0018-9456, 2013
[8] Dr. V. Vidya Devi, G. Meena Kumari, "Real-Time Automation and Monitoring System for Modernized Agriculture", International Journal of Review and Research in Applied Sciences and Engineering (IJRRASE), Vol. 3, No. 1, pp. 7-12, 2013
Organized By