Detecting Natural Disasters Using Deep Learning
https://doi.org/10.22214/ijraset.2023.49175
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 11 Issue II Feb 2023- Available at www.ijraset.com
Abstract: Natural disasters cannot be stopped, but they can often be spotted early, giving people valuable time to reach safety. One strategy is to use computer vision to supplement existing sensors, which improves the accuracy of natural disaster detectors and, more importantly, enables people to prepare, stay safe, and prevent or reduce the fatalities and injuries these disasters cause. Today, responding to natural disasters such as earthquakes, floods, and wildfires requires extensive work by emergency responders and analysts on the ground. Social media has emerged as a low-latency data source for understanding crisis conditions. While most social media research uses only text, photos provide additional insight into accident and disaster situations.
Keywords: VGG16, CNN, CLR, Learning rate finder, Overfitting
I. INTRODUCTION
For response organizations, sudden-onset events such as earthquakes, flash floods, and car accidents must be identified quickly. However, gathering information during an emergency is time-consuming and expensive because it frequently requires manual data processing and professional evaluation.
There have been attempts to apply computer vision algorithms to synthetic aperture radar, satellite photography, and other remote sensing data to reduce this laborious effort. Unfortunately, these methods are still expensive to use and insufficiently reliable for gathering pertinent data in emergencies.
Additionally, satellite imagery offers only an overhead perspective of the disaster-affected area and is subject to noise such as clouds and smoke (both common during storms and wildfires).
Studies show that social media posts in the form of text messages, pictures, and videos can be accessible immediately when a disaster strikes and can provide crucial information for disaster response, including reports of infrastructure damage and the immediate needs of those affected. Nonetheless, social media imagery remains underutilized compared with other data sources (such as satellites), mostly due to two significant difficulties.
First, social media picture streams are notoriously noisy, and disaster imagery is no exception: sizable portions of social media photographs remain irrelevant to particular disaster categories even after applying a text-based filter. Second, although deep learning models, the industry standard for image classification, are data-hungry, no large-scale ground-level picture dataset is currently available for developing robust computational models.
In this work, we address these issues and investigate the detection of accidents, damage, and natural disasters in photos. We first present the Incidents Dataset, which comprises 4,428 scene-centric photos classified into four classes: cyclones, earthquakes, floods, and wildfires. Our model uses these pictures as its training and testing data.
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 1042
B. Objective
The primary goal of this project is to build a Convolutional Neural Network (CNN) model that classifies natural disaster images and videos into different disaster types. The model is trained and tested on the dataset. The system should accept images as input and output the probability that a natural disaster is occurring, with the goal of predicting disasters at an early stage.
Here is the link to the dataset; it is stored in Google Drive and accessed when required for execution:
https://drive.google.com/drive/folders/139H6Nmf9gBbP15BXSRCD6MLHTAwem1mt?usp=sharing
C. Theoretical Framework
This section includes the pre-processing of data.
1) Data Cleaning and Feature Extraction: Data pre-processing, often known as data cleaning, is the first step in data extraction. The goal of data cleaning is to simplify the dataset so that it is easier to work with. A clean/tidy dataset has one observation per row and one variable per column.
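The cleaning step above can be sketched in a few lines. This is an illustrative example, not the paper's actual pipeline: the class-folder names are assumed from the four dataset classes, and the pixel scaling shown is one common normalization choice.

```python
# Illustrative pre-processing sketch (not from the paper): map the four class
# names to integer labels and scale 8-bit pixel values into [0, 1].

CLASSES = ["cyclone", "earthquake", "flood", "wildfire"]  # assumed folder names

def label_for(class_name):
    """Return the integer label for a class-folder name."""
    return CLASSES.index(class_name)

def normalize_pixels(pixels):
    """Scale raw 0-255 pixel values to floats in [0, 1]."""
    return [p / 255.0 for p in pixels]

print(label_for("flood"))             # 2
print(normalize_pixels([0, 127, 255]))
```

With labels and pixel ranges standardized this way, each image becomes one tidy observation: one row of normalized features plus one label column.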
2) Model: The machine learning process uses a deep artificial neural network with a hierarchy of layers. The model is built on deep networks in which information flows from the first layer onward: the model learns something simple and passes its output to layer two, which combines its input into something slightly more complex and passes it on to layer three. This process continues, with each layer of the network building on the representation it received from the preceding layer.
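The layer-by-layer idea described above can be pictured as a composition of functions, each transforming the previous layer's output. The three toy "layers" below are invented purely for illustration; a real network learns its transformations from data.

```python
# Minimal sketch of hierarchical layers: the network is a composition of
# transformations, each building on the output of the one before it.

def layer1(x):
    """A simple transformation, e.g. thresholding (ReLU-style)."""
    return [max(0.0, v) for v in x]

def layer2(x):
    """Combines layer-1 outputs into a slightly richer feature."""
    return [v * 2.0 for v in x]

def layer3(x):
    """Summarizes the features into a single score."""
    return sum(x)

def forward(x):
    """Information flows layer by layer, as described in the text."""
    return layer3(layer2(layer1(x)))

print(forward([-1.0, 0.5, 2.0]))  # 5.0
```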
In this study, a convolutional neural network (CNN) converts an RGB image into a visual feature vector. The three most commonly used CNN layer types are convolution, pooling, and fully connected layers. Additionally, the nonlinear activation function ReLU, f(x) = max(0, x), is used; networks with ReLU train faster than those using the common f(x) = tanh(x). A dropout layer prevents overfitting: dropout sets the output of each hidden neuron to zero with a probability of 0.5, and the "dropped out" neurons take part in neither the forward pass nor backpropagation.
Because both the CNN and the RNN contain millions of parameters, there are specific convergence concerns when they are combined. For instance, Vinyals et al. found it optimal to fix the convolutional layers' parameters to those pre-trained on ImageNet; only the RNN parameters and the non-convolutional CNN parameters are then learned from caption examples.
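The freezing strategy above can be sketched as marking which parameter groups receive gradient updates. The layer names and kinds below are hypothetical, chosen only to illustrate the idea; in Keras this corresponds to setting `layer.trainable = False` on the pre-trained convolutional layers.

```python
# Illustrative sketch: convolutional layers keep their pre-trained (e.g.
# ImageNet) weights; only the remaining layers stay trainable.

layers = [
    {"name": "conv1", "kind": "conv",  "trainable": True},
    {"name": "conv2", "kind": "conv",  "trainable": True},
    {"name": "fc1",   "kind": "dense", "trainable": True},
    {"name": "fc2",   "kind": "dense", "trainable": True},
]

def freeze_conv_layers(layers):
    """Mark every convolutional layer as non-trainable."""
    for layer in layers:
        if layer["kind"] == "conv":
            layer["trainable"] = False
    return layers

trainable = [l["name"] for l in freeze_conv_layers(layers) if l["trainable"]]
print(trainable)  # ['fc1', 'fc2']
```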
2) VGG16: VGG16 is a convolutional neural network (CNN) architecture that achieved top results in the 2014 ILSVRC (ImageNet) competition and is regarded as one of the best vision model architectures of its time. Its distinctive feature is uniformity: it uses only 3x3 convolution filters with stride 1 and same padding, and 2x2 max-pool layers with stride 2, arranged in the same manner throughout the entire architecture. At the very end are fully connected (FC) layers followed by a softmax output. The 16 in VGG16 indicates that the network has 16 weighted layers. With over 138 million parameters, it is a sizable network.
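The "over 138 million parameters" figure can be verified with a back-of-the-envelope count over the standard VGG16 configuration (13 convolutional layers of 3x3 filters plus 3 fully connected layers over 1000 ImageNet classes):

```python
# Counting VGG16's parameters: weights + biases per layer.

def conv_params(in_ch, out_ch, k=3):
    """3x3 conv: (k*k*in_ch) weights + 1 bias per output channel."""
    return (k * k * in_ch + 1) * out_ch

def fc_params(in_units, out_units):
    """Fully connected layer: (in_units + 1 bias) per output unit."""
    return (in_units + 1) * out_units

conv_channels = [(3, 64), (64, 64),                    # block 1
                 (64, 128), (128, 128),                # block 2
                 (128, 256), (256, 256), (256, 256),   # block 3
                 (256, 512), (512, 512), (512, 512),   # block 4
                 (512, 512), (512, 512), (512, 512)]   # block 5

total = sum(conv_params(i, o) for i, o in conv_channels)
total += fc_params(7 * 7 * 512, 4096)   # flattened 7x7x512 feature map
total += fc_params(4096, 4096)
total += fc_params(4096, 1000)          # softmax over 1000 ImageNet classes

print(total)  # 138357544
```

Note that nearly 90% of the parameters sit in the fully connected layers; the 13 convolutional layers account for only about 14.7 million.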
Table 3 displays the accuracy obtained after running a set of epochs at each interval. It is observed that as the number of epochs increases from interval to interval, the accuracy increases. Finally, after running 48 epochs, the accuracy obtained is 97%.
B. Observation
1) Cyclical Learning Rate: We trained our neural network using Keras with Cyclical Learning Rates (CLR). Cyclical Learning Rates can significantly cut down the number of experiments needed to fine-tune and identify the best learning rate for a model. The learning rate plot demonstrates how the learning rate oscillates between the MIN_LR and MAX_LR values in our CLR callback.
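The oscillation between MIN_LR and MAX_LR can be sketched with the triangular CLR policy. This is an illustrative implementation under assumed values; the step size here is hypothetical, and the bounds match the 1e-6 to 1e-4 range discussed below.

```python
# Sketch of a triangular cyclical learning rate: the rate sweeps linearly
# from MIN_LR up to MAX_LR and back down, once per cycle.

MIN_LR, MAX_LR = 1e-6, 1e-4
STEP_SIZE = 8  # iterations per half-cycle (assumed value)

def triangular_clr(iteration):
    """Learning rate at a given training iteration."""
    cycle = iteration // (2 * STEP_SIZE)
    x = abs(iteration / STEP_SIZE - 2 * cycle - 1)
    return MIN_LR + (MAX_LR - MIN_LR) * max(0.0, 1.0 - x)

print(triangular_clr(0))              # start of cycle: MIN_LR
print(triangular_clr(STEP_SIZE))      # peak of cycle: ~MAX_LR
print(triangular_clr(2 * STEP_SIZE))  # back to MIN_LR
```

A callback evaluating this schedule once per batch reproduces the sawtooth oscillation seen in the learning rate plot.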
2) Learning Rate Finder: We used a Keras learning rate finder to identify the best learning rates for our network on the natural disaster dataset, then trained the classification model with the Keras deep learning framework.
Looking at the plot, the model starts learning quickly at around 1e-6.
The loss keeps decreasing until about 1e-4, at which point it starts to increase again, indicating that the learning rate has become too large.
Hence, our ideal learning rate range is 1e-6 to 1e-4.
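The sweep behind such a plot can be sketched as an exponential ramp: the learning rate grows from a small start value to a large end value over a fixed number of steps while the loss is recorded. The start/end values and step count below are illustrative, not the paper's settings.

```python
# Sketch of a learning-rate-finder schedule: exponential growth from
# START_LR to END_LR over NUM_STEPS mini-batches.

START_LR, END_LR, NUM_STEPS = 1e-8, 1e-1, 70  # assumed sweep range

def finder_lr(step):
    """Learning rate at a given sweep step (exponential ramp)."""
    return START_LR * (END_LR / START_LR) ** (step / (NUM_STEPS - 1))

schedule = [finder_lr(s) for s in range(NUM_STEPS)]
print(schedule[0], schedule[-1])
```

Plotting loss against this schedule on a log axis and reading off where loss first drops steeply (here ~1e-6) and where it turns upward (~1e-4) yields the usable learning rate range.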
C. Output
Fig. 12 shows the output displayed over a video. After training completed successfully, we uploaded a video as input and checked the system's behavior. As a result, we obtained good accuracy in detecting the video as Flood.
V. ACKNOWLEDGMENT
We would like to thank our guide, Prof. Dr. G. Murugan, for helping us throughout this project. In addition, special thanks to our college management (Vardhaman College of Engineering) for supporting us in the successful completion of the project.