An Implementation of K-Means Clustering for Efficient Image Segmentation of Natural
Background Images
1Aswin Kumer S V, 2Dr E Mohan
1Associate Professor, Department of Electronics and Communication Engineering,
Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India
2Professor, Department of Electronics and Communication Engineering, Lord
Venkateswara Engineering College
1svaswin@gmail.com, 2emohan1971@gmail.com
ABSTRACT
Image segmentation is the process of finding and grouping the correlated pixels in a
particular image. Different methods are available to find the correlated pixels in an
image. In this paper, K-Means clustering is used for segmentation to observe the different
objects in an image. First, the input samples are converted into gray-scale images; those
images are then processed by K-Means clustering to produce the segmented output image.
K-Means is based on the grouping of similar pixels and the allotment of the center
pixels. By repeating the same process several times, the output segmented image achieves
excellent object discrimination. Among the several algorithms available for image
segmentation, K-Means clustering provides good results. The object discrimination is based
purely on the correlation of the pixels available in the image. After processing, image
reshaping is also performed for better visualisation of the segmented image.
KEYWORDS: Image Processing, Image Segmentation, K-Means Clustering, Cluster
Centers, Label Function, Reshaping
1. INTRODUCTION
In the field of image processing [3], there is a similarity between object tracking [5],
object detection and object discrimination [1]. The concept of image segmentation is
closely related to the concept of object discrimination in an image [2]. The image detail
is preserved in every concept of image processing [4], which also applies to
segmentation [6]. Clustering is based on observing the correlated pixels in a
particular frame [8] and grouping those pixels together [7]. Each group of pixels has a
center pixel [10], which is decided by the algorithm [11] and used to measure the distance
between the node pixel [9] and the correlated pixels for the detection of different objects [12].
W. Zhao et al. (2020) implement the machine learning concept with the help of a Recurrent
Constraint Network for image segmentation. It gives more effective output because of the
supervised focus-region technique. Y. Chen et al. (2020) approximate the pixels of a color
image using a low-rank quaternion model and process it further for good segmentation
results. V. Jatla et al. (2020) use a novel method of coronal hole segmentation; the newly
developed methodology improves the object discrimination considerably.
X. Deng et al. (2020) improve the resolution of an image along with the segmentation, both
processed by a doubly coupled network; ISTA is the type of network that is newly
implemented for multi-modal images.
2. RELATED WORK
L. Pan et al. (2020) estimate the flow between frames during the movement of objects in an
image during segmentation; video processing such as enhancing and deblurring is also done.
B. Kim et al. (2020) implement a deep learning algorithm to estimate the loss-function
parameters and achieve efficient image segmentation. I. Kajo et al. (2020) also perform
video processing to distinguish the foreground and background information of the
particular frames; additionally, a tensor completion method is used for video frame
assistance. N. Hidalgo-Gavira et al. (2020) use the conventional convolution method with
variational Bayesian formulas for pathological images. Z. Huang et al. (2020) use
adaptive networks for scale estimation in semantic object differentiation. H. Zhang et al.
(2020) distinguish objects instantly using SSD filters; this is a single-stage
approach that produces effective segmentation results. M. Foare et al. (2020) also
refer to the Mumford-Shah model already used by B. Kim et al. (2020), but the
difference between these two implementations is that one uses the loss function
and the other uses a discrete method with minimization. The alternate data points
are helpful for object discrimination. F. Yuan et al. (2020) also implement a neural
network to estimate smoke density; wavelets are used to find the unknown data
points with the help of known data-point parameters. D. O. Medly et al. (2020) implement
a robust algorithm for exact placement of objects; the shape model in the deep
learning neural network helps to achieve this proper placement of objects. S. Razavikia
et al. (2020) also use reshaping of an image after a blur filter is applied to it. The
process of reconstruction and recovery of the image is done by ranking the data points
available in the image, which are estimated by a Hankel-structured model. A. Chuchvara et al.
(2020) attempt a faster estimation of the data points with high accuracy using sparse
representation parameters for image segmentation.
3. PROPOSED METHODOLOGY
The image applied at the input side is converted into a gray image as shown in Fig.3,
and K-Means clustering is applied to obtain the segmented image at the output, which is
shown in Fig.4.

Fig.1. Flow Sequence of Proposed Implementation


In the first step of K-Means clustering, the data points are created and those points are
separately grouped for further processing [13, 18]. The formation of a group is based on the
similarity between the data points [16, 17]. The similar data points that are grouped
together are also called clusters. Many clustering algorithms are available; K-Means
clustering is one of the efficient and widely accepted algorithms and is the one used in this
implementation. The number of clusters available in an image is represented as K in
K-Means clustering.
At the beginning, the algorithm chooses a group of clusters out of all clusters and
allots the data points to those clusters for the further steps of processing.
The next step of processing is finding the center point of each cluster in order to calculate
the distance of each and every data point from the center point [25]. The reallotment of the
data points to the shortest-distance cluster is done based on this distance calculation [23].
The center point is then found again for the newly grouped clusters. The image
segmentation quality improves by repeating these steps again and again.
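The assignment and re-centering steps described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the data points here are scalar gray-level intensities, and the initial centers are chosen by hand:

```python
import numpy as np

def kmeans_step(points, centers):
    """One K-Means iteration: allot each data point to its nearest
    center, then recompute each center as the mean of its group."""
    # Distance of every data point to every cluster center
    dists = np.abs(points[:, None] - centers[None, :])
    labels = dists.argmin(axis=1)            # shortest-distance reallotment
    new_centers = np.array([points[labels == k].mean()
                            for k in range(len(centers))])
    return labels, new_centers

# Toy gray-level data: two clearly separated intensity groups
points = np.array([10.0, 12.0, 11.0, 200.0, 198.0, 205.0])
centers = np.array([0.0, 255.0])             # assumed initial centers
for _ in range(5):                           # repeat until stable
    labels, centers = kmeans_step(points, centers)
```

After the loop, the dark pixels share one label and center while the bright pixels share the other, which is exactly the grouping behaviour the paragraph describes.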
The algorithm of the proposed model is as follows:
i) The image is acquired using a high-resolution capturing device.
ii) The image, which is in binary form, is converted from RGB to gray
form:
Grayscale image = [(0.3 × Red) + (0.59 × Green) + (0.11 × Blue)] (1)
iii) Image segmentation is applied with the K-Means clustering
method.
iv) The clustering approach includes all the mathematical models and post-processing
approaches.
K(C) = Σ(p=1 to m) |D_p| − Σ(q=1 to n) |C_q| (2)

Where,
K(C) = K-Means clustering measure
m = number of cluster centers
n = number of data points of the final cluster
D = set of data points
C = set of clusters
v) Finally, the image segmentation output is obtained for all input images, as shown
in Fig.4.
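Steps ii) to v) above can be sketched end to end in NumPy. This is a hedged illustration, not the authors' implementation: the 2x2 array stands in for a captured image, k = 2 is an assumed cluster count (the paper does not state the value of K used), and the centers are initialized by spreading them over the gray-level range:

```python
import numpy as np

def rgb_to_gray(img):
    """Eq. (1): weighted sum of the R, G and B channels."""
    return 0.3 * img[..., 0] + 0.59 * img[..., 1] + 0.11 * img[..., 2]

def segment(gray, k=2, iters=10):
    """Cluster the gray levels into k groups and return a label image
    with the same shape as the input (the reshaping step)."""
    points = gray.reshape(-1).astype(float)
    centers = np.linspace(points.min(), points.max(), k)  # assumed init
    for _ in range(iters):
        labels = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
        # Recompute each center; keep it unchanged if its group is empty
        centers = np.array([points[labels == j].mean() if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels.reshape(gray.shape)

# A 2x2 synthetic "image": dark pixels in the top row, bright below
img = np.array([[[10, 10, 10], [20, 20, 20]],
                [[240, 240, 240], [250, 250, 250]]], dtype=float)
seg = segment(rgb_to_gray(img), k=2)
```

The final `reshape` restores the spatial layout of the input, so `seg` is a label image that can be displayed next to the gray-scale sample.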
4. RESULTS AND DISCUSSIONS
To implement this K-Means clustering based image segmentation, three natural sample
images with different scenes and different luminance conditions are taken, as shown in
Fig.2.
Fig.2. (a), (b) and (c), Sample input images


These images are taken as sample images to compute the K-Means clustering, and they are
converted to gray-scale images as the first, pre-processing step of the image
segmentation, as shown in Fig.3. The figure clearly shows the difference between the
images under various luminance conditions.

Fig.3. (a), (b) and (c), Images After RGB to Gray Conversion
Fig.4. (a), (b) and (c), Images After K-Means Clustering


The obtained output images clearly show the segmentation of the natural background in
each image, as represented in Fig.4. The segmentation output is achieved for all
three input sample images by converting them into gray-scale images for further
processing, and post-processing is also done to obtain an efficient segmentation output.
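The post-processing mentioned above, mapping each cluster label back to a representative gray level so the segmented output can be viewed next to the input sample, might look like the sketch below. The label image and center intensities here are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

# Hypothetical segmented label image (e.g. the reshaped output of
# K-Means on one gray-scale sample) and assumed cluster-center values
seg = np.array([[0, 0, 2],
                [1, 1, 2]])
centers = np.array([15.0, 128.0, 245.0])

# Replace every label with its cluster-center gray level so the
# segmented image is displayable in the same intensity range as the input
visual = centers[seg]
```

Indexing `centers` with the integer label array performs the lookup for every pixel at once, which is why the output keeps the shape of the segmented image.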
5. CONCLUSIONS
The efficiency of K-Means clustering is observed in this implementation by trying it out
on three different sample images. In the results, the foreground and background are
clearly distinguishable based on the grouping of data points. The discrimination in the
output is based on the illumination in an image, which is clearly observed by comparing
the output images with the input images. This is only possible by calculating the distance
of the nearest neighbour to form the clusters. The concept of using the label function and
centering the clusters improves the quality of segmentation, and finally the image
reshaping is done so that the output coincides with the input sample images.
6. REFERENCES
1. W. Zhao, X. Hou, X. Yu, Y. He and H. Lu, "Towards Weakly-Supervised Focus
Region Detection via Recurrent Constraint Network," in IEEE Transactions on Image
Processing, vol. 29, pp. 1356-1367, 2020, doi: 10.1109/TIP.2019.2942505.
2. Y. Chen, X. Xiao and Y. Zhou, "Low-Rank Quaternion Approximation for Color
Image Processing," in IEEE Transactions on Image Processing, vol. 29, pp. 1426-
1439, 2020, doi: 10.1109/TIP.2019.2941319.
3. V. Jatla, M. S. Pattichis and C. N. Arge, "Image Processing Methods for Coronal Hole
Segmentation, Matching, and Map Classification," in IEEE Transactions on Image
Processing, vol. 29, pp. 1641-1653, 2020, doi: 10.1109/TIP.2019.2944057.
4. X. Deng and P. L. Dragotti, "Deep Coupled ISTA Network for Multi-Modal Image
Super-Resolution," in IEEE Transactions on Image Processing, vol. 29, pp. 1683-
1698, 2020, doi: 10.1109/TIP.2019.2944270.
5. L. Pan, Y. Dai, M. Liu, F. Porikli and Q. Pan, "Joint Stereo Video Deblurring, Scene
Flow Estimation and Moving Object Segmentation," in IEEE Transactions on Image
Processing, vol. 29, pp. 1748-1761, 2020, doi: 10.1109/TIP.2019.2945867.
6. B. Kim and J. C. Ye, "Mumford–Shah Loss Functional for Image Segmentation With
Deep Learning," in IEEE Transactions on Image Processing, vol. 29, pp. 1856-1866,
2020, doi: 10.1109/TIP.2019.2941265.
7. I. Kajo, N. Kamel and Y. Ruichek, "Self-Motion-Assisted Tensor Completion Method
for Background Initialization in Complex Video Sequences," in IEEE Transactions on
Image Processing, vol. 29, pp. 1915-1928, 2020, doi: 10.1109/TIP.2019.2946098.
8. N. Hidalgo-Gavira, J. Mateos, M. Vega, R. Molina and A. K. Katsaggelos,
"Variational Bayesian Blind Color Deconvolution of Histopathological Images," in
IEEE Transactions on Image Processing, vol. 29, pp. 2026-2036, 2020, doi:
10.1109/TIP.2019.2946442.
9. Z. Huang, C. Wang, X. Wang, W. Liu and J. Wang, "Semantic Image Segmentation
by Scale-Adaptive Networks," in IEEE Transactions on Image Processing, vol. 29,
pp. 2066-2077, 2020, doi: 10.1109/TIP.2019.2941644.
10. H. Zhang, Y. Tian, K. Wang, W. Zhang and F. Wang, "Mask SSD: An Effective
Single-Stage Approach to Object Instance Segmentation," in IEEE Transactions on
Image Processing, vol. 29, pp. 2078-2093, 2020, doi: 10.1109/TIP.2019.2947806.
11. M. Foare, N. Pustelnik and L. Condat, "Semi-Linearized Proximal Alternating
Minimization for a Discrete Mumford–Shah Model," in IEEE Transactions on Image
Processing, vol. 29, pp. 2176-2189, 2020, doi: 10.1109/TIP.2019.2944561.
12. F. Yuan, L. Zhang, X. Xia, Q. Huang and X. Li, "A Wave-Shaped Deep Neural
Network for Smoke Density Estimation," in IEEE Transactions on Image Processing,
vol. 29, pp. 2301-2313, 2020, doi: 10.1109/TIP.2019.2946126.
13. Aswin Kumer S V and Dr. S.K.Srivatsa, “A novel image fusion approach using high
resolution image enhancement technique”, International Journal of Pure and Applied
Mathematics Vol.116, No. 23, 2017, pp.671 – 683, ISSN: 1311-8080 (printed
version); ISSN: 1314-3395 (on-line version), Special Issue.
14. V. Cherukuri, V. Kumar B.G., R. Bala and V. Monga, "Deep Retinal Image
Segmentation With Regularization Under Geometric Priors," in IEEE Transactions on
Image Processing, vol. 29, pp. 2552-2567, 2020, doi: 10.1109/TIP.2019.2946078.
15. Q. Wu, J. Zhang, W. Ren, W. Zuo and X. Cao, "Accurate Transmission Estimation
for Removing Haze and Noise From a Single Image," in IEEE Transactions on Image
Processing, vol. 29, pp. 2583-2597, 2020, doi: 10.1109/TIP.2019.2949392.
16. L. Sun, W. Shao, M. Wang, D. Zhang and M. Liu, "High-Order Feature Learning for
Multi-Atlas Based Label Fusion: Application to Brain Segmentation With MRI," in
IEEE Transactions on Image Processing, vol. 29, pp. 2702-2713, 2020, doi:
10.1109/TIP.2019.2952079.
17. Y. Shin, S. Park, Y. Yeo, M. Yoo and S. Ko, "Unsupervised Deep Contrast
Enhancement With Power Constraint for OLED Displays," in IEEE Transactions on
Image Processing, vol. 29, pp. 2834-2844, 2020, doi: 10.1109/TIP.2019.2953352.
18. K. Mei, B. Hu, B. Fei and B. Qin, "Phase Asymmetry Ultrasound Despeckling With
Fractional Anisotropic Diffusion and Total Variation," in IEEE Transactions on
Image Processing, vol. 29, pp. 2845-2859, 2020, doi: 10.1109/TIP.2019.2953361.
19. Y. Liu, L. Jin and C. Fang, "Arbitrarily Shaped Scene Text Detection With a Mask
Tightness Text Detector," in IEEE Transactions on Image Processing, vol. 29, pp.
2918-2930, 2020, doi: 10.1109/TIP.2019.2954218.
20. X. Wang, X. Jiang, H. Ding and J. Liu, "Bi-Directional Dermoscopic Feature
Learning and Multi-Scale Consistent Decision Fusion for Skin Lesion Segmentation,"
in IEEE Transactions on Image Processing, vol. 29, pp. 3039-3051, 2020, doi:
10.1109/TIP.2019.2955297.
21. U. Gaur and B. S. Manjunath, "Superpixel Embedding Network," in IEEE
Transactions on Image Processing, vol. 29, pp. 3199-3212, 2020, doi:
10.1109/TIP.2019.2957937.
22. Q. Zhang, N. Huang, L. Yao, D. Zhang, C. Shan and J. Han, "RGB-T Salient Object
Detection via Fusing Multi-Level CNN Features," in IEEE Transactions on Image
Processing, vol. 29, pp. 3321-3335, 2020, doi: 10.1109/TIP.2019.2959253.
23. Inthiyaz, S., Madhav, B.T.P. & Madhav, P.V.V. 2017, "Flower segmentation with
level sets evolution controlled by colour, texture and shape features", Cogent
Engineering, vol. 4, no. 1.
24. Katta, S., Siva Ganga Prasad, M. & Madhav, B.T.P. 2018, "Teaching learning-based
algorithm for calculating optimal values of sensing error probability, throughput and
blocking probability in cognitive radio", International Journal of Engineering and
Technology (UAE), vol. 7, no. 2, pp. 52-55.
25. Aswin Kumer S V and Dr. S.K. Srivatsa, “An Implementation of Futuristic Deep
Learning Neural Network in Satellite Images for Hybrid Image Fusion” International
Journal of Recent Technology and Engineering (IJRTE), Volume-8, Issue-1, May
2019, ISSN: 2277-3878, pp.484-487.
26. Inthiyaz, S., Madhav, B.T.P., Kishore Kumar, P.V.V., Vamsi Krishna, M., Sri Sai
Ram Kumar, M., Srikanth, K. & Arun Teja, B. 2016, "Flower image segmentation: A
comparison between watershed, marker-controlled watershed, and watershed edge
wavelet fusion", ARPN Journal of Engineering and Applied Sciences, vol. 11, no. 15,
pp. 9382-9387.