EEL6825 - Character Recognition Algorithm Using Correlation
I. INTRODUCTION
PATTERN RECOGNITION ALGORITHMS find one of their major applications in data/character recognition engines. A learned set of characters (numbers as well), stored in a library database, can be correlated with the blocks of characters in an input image so as to find the best-matching alphabets and dump them into a text file or another machine-readable format. This finds a major use case when there is a huge volume of data to be processed and the human eye and brain cannot process each block of data with equal accuracy and efficiency. Letter/number recognition algorithms work in a similar way to the human brain, which also scans every document it reads and tries to correlate each letter or word with previously learned words; if it finds a match, the reading is successful. In our
approach to machine learning and pattern recognition, if we feed in a scanned snapshot of a document, the algorithm should convert the JPEG image into a text file which can be used in any word processor, stored, and sent or communicated for other functions.
Ideally there are two approaches to the problem of Character Recognition. The first is Feature Extraction: when the input data volume is large, instead of using the entire template we extract certain features like lines, contours, etc., to form feature vector(s) and eventually a reduced representation, e.g., Kernel/Multilinear PCA, Partial Least Squares, Non-Linear Dimensionality Reduction, etc. Its counterpart, and where our problem statement lies, is the second approach, Pattern Recognition. This method is fairly easy and often based on supervised learning, where an initial set of templates, also known as the training set, is already stored in the library. Our solution approach to the above problem is likewise to create classifiers to segregate and match each input against the templates stored in the library. In this project, we put forth a resolution using the template matching method for character recognition, which finds the maximum similarity measure between the input image and the set of preloaded templates in the library. All the processing is preceded by important pre-processing stages where we remove noise by filtering, perform a thresholding operation, segment out the Region of Interest (ROI), and then correlate each and every letter in the document image with the set of libraries we have.
The work here is presented in two main areas. Firstly, we study the different OCR algorithms in use or under study. Many of these algorithms target specific use cases and are sometimes economically unviable; others require heavy machine learning behind the recognition system. Overall, they are developed for their area of use and are mostly adaptive, which makes the algorithms costlier. In the second part, our work on a supervised learning algorithm for Character Recognition is presented. Given any image which contains English alphabets and numbers, we extract the region of interest and devise a method to cut out the text and translate it into a different, machine-readable file. The quality of the image fed in, the illumination, and the environmental situation play a key role in determining the accuracy of this algorithm, so we lay stress on some of the important preprocessing steps. Each step is elaborated in theory, and the related steps in the implemented algorithm are explained in further sections with considerable mathematical analysis. The
library of alphabets and numbers pre-loaded in memory, against which matching is done, is also prepared in the course of this process, and the correlation finds the best match to these. The paper is organized in the following way: Section II presents the existing Character Recognition systems with reference to their evolution and mechanisms. Section III contains the algorithm used for this project, which includes an extensive step-wise approach towards the correlation similarity measure with the library (the supervised or training set). Section IV shows some simulation results using this algorithm. We conclude the paper in Section V with the future scope of work.
II. EXISTING CHARACTER RECOGNITION SYSTEMS
The overall flow of stages is broadly the same for any Character Recognition algorithm. Different subtasks are considered individually, and anything from primitive to modern adaptive algorithms is applied to fine-tune the final output. Below is the broader perspective of the algorithm:
Image Acquisition -> Preprocessing -> Segmentation -> Character Recognition
The subtasks within these stages are: character template vector creation; grayscale conversion and binarization; cropping lines from paragraphs; cropping words from lines; adjusting the aspect ratio of the input characters; correlating the template character vectors with the input characters; and assigning a similarity measure value.
There are several Character Recognition engines, starting from the Tesseract software, an open-source Apache-licensed OCR engine originally developed by HP, up to later ones like OmniPage by Nuance Communications (the first to work on personal computers) and Microsoft Office OneNote 2007. The usage also extends to the map and street recognition engines used by Apple, Google, Nokia, etc., the latest being multi-digit number recognition from Street View imagery using deep convolutional neural networks [6]. To take the example of Google Street View, the Google database has millions of image snapshots taken by the Google Maps team, covering the street numbers and path names inscribed on the name plates of each street. Algorithms match each street and house location against the set of addresses in the database, so as to exactly locate a particular spot anywhere in the world. This is a more challenging problem since, in this case, the images are taken from a camera rather than scanned directly with noise removal, and they contain patch spots and speckles. The problem is handled by an Artificial Neural Network (ANN), which introduces several layers to learn the character templates from the snapshots. It was tested to decipher 99% of CAPTCHAs (a challenge-response test to tell human and machine interaction apart). The approach was to unite the localization, segmentation, and recognition steps into a single one using a deep convolutional network, letting the multilayer neural network take less training time and fewer parameters. As in all the latest OCR engines discussed above, the state-of-the-art technique for Character Recognition has been the artificial neural network. An ANN is costly both computationally and economically, but it is a good technology to learn and implement as required [3].
From an overall perspective, we learned that any pattern recognition software can be based upon one of two models: supervised learning and unsupervised learning. ANNs find their usage mostly in unsupervised learning, since through the process of correlating or recognizing a pattern, the algorithm learns to classify among a wide variety of input images. The system is analogous to a situation where the method to solve a problem is not known and the algorithm tries to figure out a way by processing similar problems in the background. This background network formation is very important: a large number of similar structural elements (inputs, states, and outputs) have to be interconnected like a human neural network, hence the name.
In [4], another novel approach trains the ANN using Negative Correlation Learning, where parts of the network are trained in groups instead of as individual nodes, drastically reducing the processing time discussed above. A different type of classifier is used in [5], known as the Least Squares Support Vector Machine (LS-SVM). The author states that a feed-forward neural network classifier has flaws and issues due to its restrictive theoretical basis. The fundamental pre-processing is to use dynamic thresholding and to extract features based on normalization of the gray values (between 0 and 1). Characters are segmented and recognized (classified) using the LS-SVM model based on these features. Binarization is an important step before feeding an image into an OCR algorithm: the text in the image is converted into a flat black-and-white format. The author compares the image pixels with their local background after applying a Gaussian or median filter. This is also termed Dynamic Thresholding, given mathematically by

$$O = \{(x, y) \mid g(x, y) \geq f(x, y)\} \quad \text{for bright objects}$$

$$O = \{(x, y) \mid g(x, y) \leq f(x, y)\} \quad \text{for dark objects}$$

where $g(x, y)$ is the original image and $f(x, y)$ is the image after applying the localization (background) filter.
Furthermore, the support vector machine model is defined by the author, and Lagrangian minimization is used to minimize the squared SVM objective. In [7], a newer and more robust form of template matching is presented, based on a Maximum Likelihood (ML) formulation. Its primitive form uses image-registration template matching based on the sum of squared differences to find a suitable match, but it was soon discovered that this method does not account for occluding boundaries in scanned or captured images. In that paper, the author describes that each possible template position has a likelihood attached to it, generated by a likelihood function. Given a certain threshold, if the matching uncertainty is lower, the position of the template with the highest likelihood is taken; if it is above the threshold, all such positions are considered. Characters are recognized based on extracted features, matching the features with maximum likelihood under this algorithm.
When an image is scanned and sent to preprocessing, binarization of the image is one of the most important stages. It converts any image into a pure black-and-white image and also removes unwanted speckles and marks which could cause confusion while recognizing characters. As discussed, we need to set a specific threshold for this binarization, and there are several algorithms and methods for this thresholding task [8]. For example, Global Binarization sets a constant threshold value, a fairly good assumption for images with stable background illumination and noise. More advanced methods include the Otsu threshold and the multi-resolution Otsu threshold. These methods are based on the idea of minimizing the variance within the black and the white pixel classes individually (this method has been implemented in the project), and on setting a block size which can engulf a text character completely, so that the environmental lighting conditions within a block are invariant, making the method adaptable to changing illumination. The block size is iteratively enlarged from small to large until it just includes the text letter. Other important parts are de-noising using post-binarization despeckling and wavelet de-noising.
III. CORRELATION OPTICAL CHARACTER RECOGNITION ALGORITHM EXPLAINED
Optical Character Recognition becomes an important and interesting problem especially when non-standard scripts and fonts are in use. In addition, it suffers further loss when we have artifacts like environmental noise, non-uniform character spacing [8], and uneven breakage in the text. So after scanning images for OCR, most algorithms convert them to a bi-level image consisting only of black and white. One of the most important ways to perform this is to threshold such images, which is explained below.
A. Learning pattern of characters: The very first part of our Character Recognition algorithm is to make the machine learn the patterns of the English alphabet and numerals. We feed in a prototype of each character and create a library with classifications such as labels, so that when a set of unknown characters is compared with the templates already in the library, the maximum match identifies the exact character in use. There are several steps before the comparison is made, which will be explained, but the main idea is that, when compared, both the template and the input image must have the same pixel ratio and size. This library is sometimes termed the training data. One important idea behind modern Character Recognition is to reduce the problem domain. For example, if we want faster convergence of our algorithm in a postal department scanning system, where an envelope carries the address and ZIP code in a specific format, the scanning engine and the library should be designed so that only a subset of the main library is compared against the scanned area. A street number or street address should contain only a limited set of characters and no special characters, so the special-character part of the library can be skipped during comparison. This reduces the time consumption, and it is how a scanned-image classifier works based on the type of visual content. A wrongly chosen classifier causes problems in OCR if the algorithm is insufficiently trained or if the characters are extracted erroneously. In our algorithm, SampleTraining.m is a separate code written exclusively for calibration and learning purposes. Each letter of the English alphabet and each numeral is assigned a bmp sample image of that character. We create three separate matrices for upper-case letters, lower-case letters, and numerals. The three matrices are then concatenated into a single character matrix and saved as the template.mat template file. The template is defined as a global variable over the entire scope of the main code RunThis_MainRecognitionAlgo.m, and it is used in a nested loop where the correlation takes place.
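As a rough illustration of this step, the following MATLAB sketch shows how such a template library could be assembled and saved as template.mat. The per-character file naming and the 42-by-24 template size are assumptions made for this sketch; the actual SampleTraining.m may differ in these details.

% Minimal sketch of template-library creation. The .bmp file naming and
% the 42x24 template size are assumed for illustration only.
chars = ['A':'Z', 'a':'z', '0':'9'];              % upper case, lower case, digits
templates = cell(1, numel(chars));

for k = 1:numel(chars)
    img = imread(sprintf('%c.bmp', chars(k)));    % one sample bitmap per character
    if size(img, 3) == 3
        img = rgb2gray(img);                      % collapse RGB to grayscale
    end
    bw = im2bw(img, graythresh(img));             % binarize with Otsu's threshold
    templates{k} = imresize(bw, [42 24]);         % common size so corr2 can compare
end

save('template.mat', 'templates', 'chars');       % loaded later by the main code

The main code can then declare the loaded template as the global variable described above.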
B. Image Scanning and Binarization: Capturing images using a scanner usually gives roughly twice the clarity of capturing a document with a reasonably good camera under average room illumination. This step optically scans the document which the algorithm will read. Preferably, the scanned image should contain pieces of printed text or handwritten letters. The digitization of the document should be around 150 to 300 dpi (dots per inch) for it to be good enough for our recognition algorithm. We avoid using lossy compression when scanning text documents, since characters get pixelated or disrupted during compression and the algorithm might then find a mismatch. We consider scanning into a colored or, preferably, a grayscale image; in our algorithm, the image is ultimately converted to grayscale even if the user forgets and scans a multicolor image.
The next step of our algorithm falls mainly under pre-processing. As part of the subsequent steps, we binarize the image thus obtained by setting a threshold based on the Otsu method. This threshold was settled after several trials, but in more advanced engines it could be derived from a combination of the Otsu method, Chang's method, the margin-error method, etc. Binarization is fairly straightforward if we use a global threshold and put more stress on the scanning process itself, but since we come across varying image illumination and variations in the image histograms, the Otsu method is the best fit for adaptive thresholding. After binarization, we remove all objects of fewer than x pixels; the value x can be user-supplied, and the resulting image is termed a morphologically opened binary image. Binarization should ideally be handled differently for colored image scans: each color channel should be separated into red, green, and blue (or CMYK, depending on the color-bit scheme) and binarization carried out on each color plate, which keeps the homogeneity. In our algorithm, we down-convert the entire RGB image into a grayscale image, so this step can be bypassed.
The Otsu method [8][13][14] for thresholding minimizes the within-class variance (or, equivalently, maximizes the between-class variance) computed from the gray-level histogram, and the threshold is normalized to fall between 0 and 1. Considering $b_i$ to be the $i$-th pixel value standardized to either 1 or 0, we set $b_i = 1$ if $g_i \geq t$ and $b_i = 0$ if $g_i < t$. To find this normalized threshold level $t$, we need to minimize the intra-class variance of both the black and the white pixel clusters. If the gray-level histogram is bimodal, the threshold lies at the global minimum, the valley in between the two peaks; but this is not always realistic, so we calculate the threshold by statistically minimizing the intra-class variance, attaining something close to the bimodal case. Let there be $L$ gray values $(1, 2, \ldots, L)$ and let $n_i$ be the number of pixels at the $i$-th gray level, so that

$$n_1 + n_2 + n_3 + \cdots + n_L = N \qquad (1)$$
The probability of the $i$-th gray level is then

$$p_i = \frac{n_i}{N}, \qquad \sum_{i=1}^{L} p_i = 1$$

(the sum of all probabilities). Given a threshold $t$, the gray values can be classified into two classes, $C_0$ and $C_1$:

$$C_0 = \{1, 2, \ldots, t\} \qquad (2)$$

$$C_1 = \{t+1, t+2, \ldots, L\} \qquad (3)$$

The class occurrence probabilities are now given by

$$\omega_0 = \Pr(C_0) = \sum_{i=1}^{t} p_i = \omega(t) \qquad (4)$$

and

$$\omega_1 = \Pr(C_1) = \sum_{i=t+1}^{L} p_i = 1 - \omega(t) \qquad (5)$$

The class means are

$$\mu_0 = \sum_{i=1}^{t} \frac{i \, p_i}{\omega_0} = \frac{\mu(t)}{\omega(t)} \qquad (6)$$

and

$$\mu_1 = \sum_{i=t+1}^{L} \frac{i \, p_i}{\omega_1} = \frac{\mu_T - \mu(t)}{1 - \omega(t)} \qquad (7)$$

where $\mu_T = \omega_0 \mu_0 + \omega_1 \mu_1 = \sum_{i=1}^{L} i \, p_i$ is the overall mean and $\omega_0 + \omega_1 = 1$. The zeroth- and first-order cumulative moments are

$$\omega(t) = \sum_{i=1}^{t} p_i, \qquad \mu(t) = \sum_{i=1}^{t} i \, p_i \qquad (8)$$

The class variances are

$$\sigma_0^2 = \sum_{i=1}^{t} (i - \mu_0)^2 \, \frac{p_i}{\omega_0} \qquad (9)$$

and

$$\sigma_1^2 = \sum_{i=t+1}^{L} (i - \mu_1)^2 \, \frac{p_i}{\omega_1} \qquad (10)$$

The term to maximize to find the optimal value of $t$ is the between-class variance

$$\sigma_B^2 = \omega_0 (\mu_0 - \mu_T)^2 + \omega_1 (\mu_1 - \mu_T)^2$$

Expanding the above term,

$$\sigma_B^2 = \omega_0 \, \omega_1 \, (\mu_1 - \mu_0)^2 \qquad (11)$$

Finally, the Otsu method maximizes the variance between the white and black pixel clusters (the between-class variance) by maximizing

$$\sigma_B^2(t) = \frac{\{\mu_T \, \omega(t) - \mu(t)\}^2}{\omega(t) \, \{1 - \omega(t)\}} \qquad (12)$$
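To make the derivation concrete, the following MATLAB sketch (saved, say, as otsuThreshold.m) computes the Otsu threshold by directly maximizing the between-class variance of Eq. (12) over a 256-level histogram. It is an illustrative re-implementation; in practice the built-in graythresh performs the same computation.

% Sketch: Otsu's threshold by maximizing the between-class variance, Eq. (12).
function t = otsuThreshold(grayImg)
    counts = imhist(grayImg, 256);     % gray-level histogram n_i, with L = 256
    p      = counts / sum(counts);     % p_i = n_i / N, from Eq. (1)
    i      = (1:256)';
    omega  = cumsum(p);                % omega(t), zeroth-order moment, Eq. (8)
    mu     = cumsum(i .* p);           % mu(t), first-order moment, Eq. (8)
    muT    = mu(end);                  % overall mean mu_T

    sigmaB2 = (muT * omega - mu).^2 ./ (omega .* (1 - omega));  % Eq. (12)
    sigmaB2(~isfinite(sigmaB2)) = 0;   % guard the empty-class endpoints

    [~, idx] = max(sigmaB2);
    t = (idx - 1) / 255;               % normalized to [0, 1], like graythresh
end

The binarization step of subsection B then amounts to bw = im2bw(img, otsuThreshold(img)), followed by bwareaopen(bw, x) to remove objects of fewer than x pixels, yielding the morphologically opened binary image.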
C. Pre-processing: As discussed earlier, thresholding as a part of preprocessing is important because the quality of the binary image depends solely upon the fineness of the thresholding. Our estimation stands good when we have a well-scanned image whose brightness is fairly uniform. The image obtained from thresholding may clip the characters at the edges to some extent or may smear in places. The scanned image may also contain noise (mostly salt-and-pepper) introduced during capture, which causes serious problems for the recognition algorithm. Other preprocessing steps involve the removal of lines and the cleanup of disruptive marks and other noise in the image. The aspect ratio is also adjusted to match the size of each character in the template library; this lessens the effort in the correlation step, since the edges and curves can be mapped better. Some lines may be rotated due to disoriented document placement under the scanner. This disorientation has to be detected, and we have steps to counter the rotation as much as possible, which ultimately makes it easier to segment lines from paragraphs, words from lines, and eventually characters.
D. Segmentation and Layout analysis: A document image consists of several columns, paragraphs, line breaks, indentation, etc. Most algorithms fail because of errors in scanning and improper segmentation of text. Documents may also contain figures and graphical charts which are not of interest; these components have to be distinguished from the required text. To give an example, a postal department OCR machine has to distinguish the address text from stamp images and other artifacts which look legitimate to the naked eye; only then can the proper address be fed into recognition. So we separate the text paragraphs from figures and then cut each line from the paragraph. There are some typical problems in the segmentation subtask: during correlation, the algorithm might take two letters in the scanned document as one if they are connected due to a misprint or the handwriting. Likewise, with a loose threshold, pixels along connecting strokes with intensity slightly above the set value may turn black, making the characters appear connected. Noise can also introduce spurious connections between characters. The performance is poorer when the text snapshot is extracted directly from a graphical image. In our algorithm, SegmentLine.m is called to clip each line from a paragraph and form a vector of lines (see the sketch below). If the document has only one line, the loop is bypassed and the vector stores that single line of text. The next stage is to clip out each character from the line vector; SegmentLetter.m is called from the main code to perform this task. In a similar fashion, one letter is cropped at a time and concatenated onto a vector of letters which holds the rest of the characters. The iteration takes every word in the SegmentLetter vector and creates another vector of letters. The spaces between words are counted separately in the main code, and spaces are introduced at the appropriate locations. This vector of letters is important, as its elements are considered one by one and compared against the stored set of characters.
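A minimal sketch of the line-clipping idea behind SegmentLine.m is given below, using a horizontal projection profile: blank rows separate consecutive text lines. The actual implementation may differ; SegmentLetter.m would apply the analogous operation along columns (or use connected components) to clip individual characters.

% Sketch of line segmentation by horizontal projection. bw is a binary
% image in which text pixels are 1; blank rows delimit the lines.
function lines = segmentLines(bw)
    rowHasInk = any(bw, 2);                 % true where a row contains text pixels
    d   = diff([0; rowHasInk; 0]);
    top = find(d == 1);                     % first row of each text band
    bot = find(d == -1) - 1;                % last row of each text band
    lines = cell(1, numel(top));
    for k = 1:numel(top)
        lines{k} = bw(top(k):bot(k), :);    % clip one line of text
    end
end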
E. Recognition using Correlation with existing library: From the scanned paragraph, each line is clipped out to form a word matrix, which is further clipped to extract each character. All the cropped characters are adjusted to match the aspect ratio of the existing library characters. Once the input character vector is formed, it is matched against the prototype characters using the corr2 correlation function in MATLAB. The prototype which gives the best match (highest correlation value) provides the classification label, and the correlation output determines the similarity measure between the template and the input image. Each template character in the library has an attribute value which forms a marker to that character; the similarity measure SM in the algorithm returns the character whose marker attains the maximum corr2 value among all others in the library. A vector is initialized inside the function where the correlation takes place; after the correct letter is recognized, it is concatenated onto the previously recognized character array, and the array is later dumped into a text file. During recognition, spaces and line breaks are detected, and a separate array is maintained to push the required spaces in between words or characters. There are two major correlation coefficients in use:

$$c = \frac{\sum_{m}\sum_{n} A_{mn} B_{mn}}{\sqrt{\left(\sum_{m}\sum_{n} A_{mn}^{2}\right)\left(\sum_{m}\sum_{n} B_{mn}^{2}\right)}} \qquad (13)$$

or

$$r = \frac{\sum_{m}\sum_{n} (A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\left(\sum_{m}\sum_{n} (A_{mn} - \bar{A})^{2}\right)\left(\sum_{m}\sum_{n} (B_{mn} - \bar{B})^{2}\right)}} \qquad (14)$$

where $\bar{A}$ and $\bar{B}$ denote the mean of the elements of the respective matrix. This basically employs Digital Image Correlation using methods like image registration in 2D, and it could be extended to 3D. The corr2 function computes the correlation coefficient shown in Eq. (14).
F. Data dump onto text file: The text file holds the result vector formed during execution of the correlation function. A loop break is introduced if no more characters are found in the current line. Within the current loop, contiguous letters constitute a word, and in between words the spaces are counted. A space vector is also defined to introduce spaces at the requisite locations. When the sentence finishes, the loop break is called.
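Returning to the matching step of subsection E, it can be sketched as follows in MATLAB. The templates cell array and chars list follow the earlier library sketch and are assumptions about the data layout; corr2 itself computes exactly the coefficient of Eq. (14).

% Sketch: classify one clipped letter by maximum correlation, Eq. (14).
function ch = recognizeLetter(letterImg, templates, chars)
    letter = imresize(letterImg, size(templates{1}));  % match template size/aspect
    scores = zeros(1, numel(templates));
    for k = 1:numel(templates)
        scores(k) = corr2(double(letter), double(templates{k}));
    end
    [~, best] = max(scores);     % template with maximum similarity measure (SM)
    ch = chars(best);            % the marker/label of the best match
end

Recognized characters are appended to a result array, with spaces and line breaks inserted where detected, and the array is finally written to the output text file.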
The entire paragraph is segmented line by line, and a separate vector of lines is created; this is done in SegmentLine.m.
IV. SIMULATION AND RESULTS ANALYSIS
The Character Recognition algorithm depends highly upon all the steps worked through, starting from scanning an image and removing as much noise as possible without damaging the picture itself, up to the correlation or other recognition step. The precision cannot match engines whose back end runs an ANN or some other inherently costly mechanism, but the approximation is fairly good and yields human-readable text. Poor image properties, such as difficult lighting conditions and too much abrupt noise in the image, affect the final output drastically. The following image was scanned and de-noised before being fed to the subsequent steps.
In the second part of segmentation, each letter is separated out and stored in a word matrix. Each letter is clipped and the remaining letters are concatenated in the matrix, and this loop iterates until all the words are broken down into letters. In this step, the algorithm may be unpredictable, since the letters are subsequently adjusted for aspect-ratio matching with the library characters, and an 'i' or 'I' may end up looking like an 'l'. The reason could be the font in use and/or the errors introduced while resizing the characters. The final output is dumped into a text file. There are some errors, but the output is fairly readable and editable if required. The clipping function followed by the size function performs the cropping of words from lines and letters from words. Following are the algorithm runtime and the threshold value calculated just before segmentation.
We have included red text to show the effect of the grayscale conversion and thresholding operations.
Image Property: RGB Image
Pixels removed (x):   5    20    80    150
Accuracy (%):        96    96    90     86

Table 1: Simulation Evaluation based on Parameters

Figure 4: After Thresholding and Binarization
We clearly see a relation between the pixel removal and the elapsed time versus the accuracy of the algorithm, which is quite intuitive. As we remove connected components from the scan, the algorithm is aided without the accuracy degrading, but the accuracy drops once too many connected components are removed and some of the required text is smeared and eroded. With less text to work on, the elapsed time also drops. The normalized threshold value changes when we scan a different image, since the threshold calculation for binarization differs only when different images are used.
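The kind of evaluation summarized in Table 1 can be sketched as a sweep over the pixel-removal size x. The runRecognition wrapper and the groundTruth string here are hypothetical names standing in for the full pipeline and the known text of the scan.

% Sketch of the Table 1 sweep: vary the pixel-removal size x and measure
% accuracy. runRecognition and groundTruth are hypothetical placeholders.
xValues  = [5 20 80 150];
accuracy = zeros(size(xValues));
for k = 1:numel(xValues)
    cleaned     = bwareaopen(bw, xValues(k));  % drop objects under x pixels
    recognized  = runRecognition(cleaned);     % full segment-and-correlate pass
    % character-level accuracy, assuming equal-length strings
    accuracy(k) = 100 * mean(recognized == groundTruth);
end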
V. CONCLUSION AND SCOPE OF FUTURE WORK
Image recognition and pattern extraction form a highly sensitive mechanism: it is sensitive to any form of change and variation. While the human brain adapts to such changes, algorithms can also be customized accordingly; but most of the time, skewing or a small deformation in the image can still cause recognition algorithms to fail with high probability. Our aim is to discard unnecessary information in the image as far as possible, reduce the image to black and white, and based on this object pick an approach, for example template matching techniques or an ANN with OpenCV. In [9], the author deduced that using sample variances to calculate the correlation coefficient is a more accurate way than whole-function correlation. We found the same in our simulations, confirming that a small tweak in the algorithm can make a major difference in character recognition engines. As a further enhancement, stricter rules could be imposed on the use of fonts like OCR-A/B, MICR, etc., which allow better recognition. MICR codes are still found on bank checks, which makes scanning and transactions for the banks very accurate, efficient, fast, and secure [1]. The comparison of the stored library with the input image is better attained using feature-based algorithms such as nearest-neighbor classifiers like K-NN. The best way to get the most out of a character recognition algorithm is to use black ink, use simpler shapes, avoid loops wherever possible, close the loops (for example B and D, which sometimes look like O to the algorithm), and avoid linking characters. A dynamic thresholding operation is an important part of preprocessing in an OCR system, as it would make the algorithm robust to unnatural illumination changes and noise in the scanned characters. As in [8], several methods of dynamic thresholding for binarization were explained, such as considering the variance of the pixel brightness from a global threshold value: if the variance is too large (a defined parameter), the pixel under consideration is assigned a 1 or a 0. Some feature recognition algorithms divide the characters into pieces, and each sub-segment's features are compared to those in the library. A better approach [5] is the least squares SVM, which implements dynamic thresholding plus gray-value normalization for character segmentation, so as to extract features and carry out the correlation operations on these features with an appropriate classifier; as the number of features goes up, the algorithm becomes more robust. A further advancement [11] recognizes handwritten characters by segmenting the connecting strokes in the characters (language independent). The neural network is trained with some basic geometric shapes, and the final classifier recognizes individual characters by checking the possible combinations of shapes; the network has to be trained just once, and the recognizer does not have to be trained with every possible word in that language (since handwriting recognition is challenging unless the possible words are fed in the training sequence).
VI. ACKNOWLEDGEMENT
This project is towards completion of the course EEL6825 Pattern Recognition, Spring 2014, under the guidance of Dr. Dapeng Oliver Wu and TA Mr. Kairan Sun. I thank both of them for encouraging me to pursue this course and approving the topic I selected for the course project. It was a truly enriching experience to tackle problems in pattern recognition and devise methods to solve them.
VII. REFERENCES
[1] Optical Character Recognition, Wikipedia: http://en.wikipedia.org/wiki/Optical_character_recognition
[2] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., Wiley-Interscience.
[3] P. K. Charles, V. Harish, M. Swathi, and Ch. Deepthi, "A Review on the Various Techniques used for Optical Character Recognition," International Journal of Engineering Research and Applications (IJERA), ISSN 2248-9622, vol. 2, no. 1, Jan.-Feb. 2012, pp. 659-662.
[4] B. Kir, C. Oz, and A. Gulbag, "The Application of optical character recognition for mobile device via artificial neural networks with negative correlation learning algorithm," Electronics, Computer and Computation (ICECCO), 2013 International Conference on, pp. 220-223, 7-9 Nov. 2013. doi: 10.1109/ICECCO.2013.6718268
[5] Jianhong Xie, "Optical Character Recognition Based on Least Square Support Vector Machine," Intelligent Information Technology Application (IITA 2009), Third International Symposium on, vol. 1, pp. 626-629, 21-22 Nov. 2009. doi: 10.1109/IITA.2009.327
[6] I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet, "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks," Street View and reCAPTCHA Teams, Google Inc.
[7] C. F. Olson, "Maximum-likelihood template matching," Computer Vision and Pattern Recognition (CVPR 2000), Proceedings, IEEE Conference on, vol. 2, pp. 52-57, 2000. doi: 10.1109/CVPR.2000.854735
[8] M. R. Gupta, N. P. Jacobson, and E. K. Garcia, "OCR binarization and image pre-processing for searching historical documents," Pattern Recognition 40 (2007) 389-397, Elsevier; Electrical Engineering, University of Washington, Seattle, WA 98195, United States.
[9] R. Hercik, R. Slaby, Z. Machacek, and J. Koziorek, "Correlation Method of COR Algorithm for Traffic Sign Detection Implementable in Microcontrollers," VSB-Technical University of Ostrava.
[10] M. U. Akram, Z. Bashir, A. Tariq, and S. A. Khan, "Geometric feature points based optical character recognition," Industrial Electronics and Applications (ISIEA), 2013 IEEE Symposium on, pp. 86-89, 22-25 Sept. 2013. doi: 10.1109/ISIEA.2013.6738973
[11] A. Ali, M. Ahmad, N. Rafiq, J. Akber, U. Ahmad, and S. Akmal, "Language independent optical character recognition for hand written text," Multitopic Conference (INMIC 2004), Proceedings of the 8th International, pp. 79-84, 24-26 Dec. 2004. doi: 10.1109/INMIC.2004.1492850
[12] R. Verma and J. Ali, "A Survey of Feature Extraction and Classification Techniques in OCR Systems," International Journal of Computer Applications & Information Technology, vol. I, issue III, Nov. 2012 (ISSN 2278-7720).
[13] D. H. AlSaeed, A. Bouridane, A. Elzaart, and R. Sammouda, "Two modified Otsu image segmentation methods based on Lognormal and Gamma distribution models," Information Technology and e-Services (ICITeS), 2012 International Conference on, pp. 1-5, 24-26 March 2012. doi: 10.1109/ICITeS.2012.6216680
[14] Dongju Liu and Jian Yu, "Otsu Method and K-means," Hybrid Intelligent Systems (HIS '09), Ninth International Conference on, vol. 1, pp. 344-349, 12-14 Aug. 2009. doi: 10.1109/HIS.2009.74
[15] Digital Image Correlation, Wikipedia: http://en.wikipedia.org/wiki/Digital_image_correlation