Detection of Bone Fracture Using Image Processing Methods
INTRODUCTION:
Anomaly detection is an important problem that has been well studied across diverse
research areas and application domains. The aim of this survey is twofold: first, we
present a structured and comprehensive overview of research methods in deep
learning-based anomaly detection. Furthermore, we review the adoption of these
methods for anomaly detection across various application domains and assess their
effectiveness. We have grouped state-of-the-art research techniques into different
categories based on the underlying assumptions and the approach adopted. Within each
category we outline the basic anomaly detection technique, along with its variants, and
present the key assumptions used to differentiate between normal and anomalous behavior.
DATASET USED:
Last year, Stanford hosted a deep learning competition that asked participants to
detect bone abnormalities. The dataset is widely known as MURA. MURA is a dataset of
musculoskeletal radiographs consisting of 14,863 studies from 12,173 patients, with a
total of 40,561 multi-view radiographic images. Each study belongs to one of seven
standard upper-extremity radiographic study types: elbow, finger, forearm, hand,
humerus, shoulder, and wrist.
FEASIBILITY STUDY:
METHODOLOGY/PLANNING OF WORK:
1. Step decay: This function reduces the learning rate to one-tenth of its initial value
after every 10 epochs (a sketch of the step_decay function itself is given after the
snippet below).
It can be included in the callbacks as follows:
from keras.callbacks import LearningRateScheduler

lrate = LearningRateScheduler(step_decay)
callbacks = [lrate]
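The step_decay function itself is not shown above; a minimal sketch consistent with the
description (dividing the rate by 10 every 10 epochs) could look like the following. The
initial rate of 0.001 is an assumption taken from the value mentioned later in this section.

import math

def step_decay(epoch):
    initial_lr = 0.001   # assumed starting learning rate (mentioned later in this section)
    drop = 0.1           # multiply the rate by 0.1 ...
    epochs_drop = 10     # ... after every 10 epochs
    return initial_lr * math.pow(drop, math.floor(epoch / epochs_drop))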
2. Loss-based decay: This technique reduces the learning rate once the model stops
improving for a predefined number of epochs (the patience). Keras has a built-in
callback named ReduceLROnPlateau, which is shown in the code below.
from keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                              patience=10, min_delta=0.0001,
                              verbose=1, min_lr=0.0000001)
callbacks = [reduce_lr]
The above callback monitors the validation loss and, once it stops decreasing for a
patience of 10 epochs, reduces the learning rate by a factor of 0.1. Here, my model
had an initial learning rate of 0.001, and through this function I could take it down
to the min_lr value of 10⁻⁷.
For me, the latter technique proved to be more effective.
2. The other method uses scikit-learn, which provides a function that automatically
computes the class weights from the training data, as follows:

from sklearn.utils import class_weight
import numpy as np

weights = class_weight.compute_class_weight('balanced', classes=np.unique(y_train), y=y_train)
class_weights = dict(enumerate(weights))  # Keras expects a dict of class index -> weight
model.fit(X_train, y_train, class_weight=class_weights)
Data Augmentation
This technique is used when the dataset contains a small number of samples or the
images have varied orientations. Here, the task at hand asked us to build a binary
classifier to determine whether the bone had some abnormality or was normal. Although
the dataset had around 1,000–2,000 samples of each bone for the positive class, the
orientation still differed, as the sample images from the Hand data show.
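As an illustration, Keras' ImageDataGenerator can apply this kind of augmentation on the
fly; the rotation range, flips and shift ranges below are assumptions chosen to cope with
the varied orientations, not the exact values used in my experiments.

from keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters here are illustrative assumptions.
datagen = ImageDataGenerator(
    rotation_range=30,       # random rotations up to 30 degrees
    horizontal_flip=True,    # mirror images left/right
    width_shift_range=0.1,   # small random horizontal shifts
    height_shift_range=0.1)  # small random vertical shifts

# Train on augmented batches (model.fit accepts the generator in newer Keras versions).
model.fit_generator(datagen.flow(X_train, y_train, batch_size=32),
                    steps_per_epoch=len(X_train) // 32,
                    epochs=50,
                    callbacks=callbacks)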
Various optimizers
You can use different optimizers as part of hyperparameter tuning, for example switching
to SGD or Adam. I have always used 'rmsprop' during training, but the Stanford team used
'Adam' with some specific parameters changed. That said, I have not had much luck
improving the model's performance by changing the optimizer.
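Swapping the optimizer is a one-line change at compile time. The learning rates below are
common defaults used only for illustration; they are not the Stanford team's exact settings.

from keras.optimizers import Adam, SGD, RMSprop

# Pick one optimizer; the rates shown are illustrative defaults, not tuned values.
optimizer = RMSprop(lr=0.001)             # what I used during training
# optimizer = Adam(lr=0.001)              # alternative, as used by the Stanford team
# optimizer = SGD(lr=0.01, momentum=0.9)  # another common choice

model.compile(optimizer=optimizer,
              loss='binary_crossentropy',  # binary task: normal vs. abnormal
              metrics=['accuracy'])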
Ensembling
Ensembling is a technique that allows you to make decisions by aggregating the
predictions from your different models. The reason ensembling works better than
single models is that every model learns different features according to its architecture.
So, in this case, you could ensemble all your best-performing models to get better
results.
There are two different ways of ensembling: one is averaging out the predictions from
the models, and the other is taking the majority vote. The former depends solely on the
probabilities, or the confidence, with which each class is predicted, whereas the latter
depends on the class predicted by the greatest number of models in the ensemble.
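A minimal sketch of both approaches, assuming a list of already-trained Keras models
(models) that each output an abnormality probability for a shared test set X_test:

import numpy as np

# Stack each model's predicted probabilities into a (num_models, num_samples) array.
probs = np.stack([m.predict(X_test).ravel() for m in models], axis=0)

# 1. Averaging: mean of the predicted probabilities, thresholded at 0.5.
avg_pred = (probs.mean(axis=0) > 0.5).astype(int)

# 2. Majority vote: each model casts a hard vote with its own prediction.
votes = (probs > 0.5).astype(int)
majority_pred = (votes.sum(axis=0) > len(models) / 2).astype(int)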
Classification
Classification is a data analysis step that studies a set of data and categorizes it into
a number of categories. Each category has its own characteristics, and the data belonging
to a category share the properties of that category. In the proposed method, different
types of classifiers are used, such as a decision tree (DT), a neural network (NN), and a
meta-classifier. Based on the GLCM textural features, the classifiers classify a given
image as fractured or non-fractured.
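A hedged sketch of this pipeline, using scikit-image for the GLCM textural features and
scikit-learn for a decision tree, might look like the following; the chosen distances,
angles and properties are illustrative assumptions, not the exact configuration of the
proposed method.

import numpy as np
from skimage.feature import graycomatrix, graycoprops  # spelled greycomatrix in older releases
from sklearn.tree import DecisionTreeClassifier

def glcm_features(image):
    # image: 2-D uint8 grayscale radiograph; distance/angle choices are illustrative.
    glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ['contrast', 'correlation', 'energy', 'homogeneity']
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

# train_images: preprocessed grayscale images; labels: 1 = fractured, 0 = non-fractured.
X_features = np.array([glcm_features(img) for img in train_images])
clf = DecisionTreeClassifier()
clf.fit(X_features, labels)

# Classify a new radiograph from its GLCM features.
prediction = clf.predict([glcm_features(new_image)])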
INNOVATION:
BIBLIOGRAPHY: