The document proposes a hybrid technique combining the Anisotropic Scale Invariant Feature Transform (A-SIFT) with a Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves on traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing, reprocessing images repeatedly until accuracy exceeds a 95% threshold. Tested on similar and dissimilar facial images, the technique outperformed SIFT in retrieval time while using fewer keypoints.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
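The practical difference between the two clustering styles is easiest to see in code. The sketch below is a toy NumPy illustration (not the reviewed papers' implementations): it assigns 1-D pixel intensities to fixed cluster centers, where K-means gives each pixel exactly one label, while fuzzy C-means returns fractional memberships that sum to 1 per pixel.

```python
import numpy as np

def kmeans_labels(x, centers):
    """Exclusive clustering: each point gets exactly one label."""
    d = np.abs(x[:, None] - centers[None, :])
    return d.argmin(axis=1)

def fcm_memberships(x, centers, m=2.0):
    """Overlapping clustering: fractional memberships that sum to 1 per point."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # avoid division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

x = np.array([10.0, 12.0, 50.0, 52.0, 90.0])    # toy pixel intensities
centers = np.array([11.0, 51.0, 90.0])
print(kmeans_labels(x, centers))                # one hard label per pixel
print(fcm_memberships(x, centers).round(3))     # soft memberships per pixel
```

A pixel near a cluster boundary gets a near-tied membership vector under FCM, which is exactly the overlap behavior the review attributes to it.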
COMPARATIVE ANALYSIS OF MINUTIAE BASED FINGERPRINT MATCHING ALGORITHMS (ijcsit)
Biometric matching involves finding the similarity between fingerprint images. The accuracy and speed of the matching algorithm determine its effectiveness. This research compares two types of matching algorithms: (a) matching using global orientation features and (b) matching using minutia triangulation. The comparison is based on accuracy, time, and number of similar features. The experiment is conducted on a dataset of 100 candidates with four (4) fingerprints from each candidate, sampled from a mass registration conducted by a reputable organization in Kenya. The research reveals that fingerprint matching based on algorithm (b) performs better in speed, with an average of 38.32 milliseconds, compared to matching based on algorithm (a), with an average of 563.76 milliseconds. On accuracy, algorithm (a) performs better, with an average accuracy score of 0.142433 compared to 0.004202 for algorithm (b).
Techniques of Brain Cancer Detection from MRI using Machine Learning (IRJET Journal)
The document discusses techniques for detecting brain cancer from MRI scans using machine learning. It first provides background on brain tumors and MRI. It then outlines the cancer detection process, including pre-processing the MRI data, segmenting the images, extracting features, and classifying tumors using techniques like CNNs, SVMs, MLP, and Naive Bayes. The document reviews related work applying these techniques and compares their results, finding accuracy can be improved with larger, higher resolution datasets.
Literature Survey on Detection of Brain Tumor from MRI Images (IOSR Journals)
This document provides a literature survey on methods for detecting brain tumors from MRI images. It discusses several segmentation and clustering techniques that have been used for this purpose, including thresholding, edge-based segmentation, region-based segmentation, fuzzy c-means clustering, and k-means clustering. The document also reviews related work applying these methods and evaluates their effectiveness at automatically detecting and segmenting brain tumors from MRI data.
This document describes a project report submitted by three students for their Bachelor of Engineering degree. The project involves developing a system for classifying brain images using machine learning techniques. It discusses challenges in detecting brain tumors and the need for automated classification methods. It also provides an overview of techniques for image segmentation, clustering, and feature extraction that will be used in the project.
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHM (AM Publications)
The current study investigated a median filter with the fuzzy level set method to propose fuzzy segmentation of magnetic resonance imaging (MRI) cerebral tissue images. An MRI image was used as an input image. A median filter and fuzzy c-means (FCM) clustering were utilized to remove image noise and create image clusters, respectively. The image clusters showed initial and final cluster centers. The level set method was then used for segmentation after separating and extracting white matter from gray matter. Fuzzy c-means was sensitive to the choice of the initial cluster center. Improper center selection caused the method to produce suboptimal solutions. The proposed algorithm was successfully utilized to segment MRI cerebral tissue images. The algorithm efficiently performed segmentation of test MRI cerebral tissue images compared with algorithms proposed in previous studies.
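The abstract's point about initialization sensitivity can be made concrete with a bare-bones FCM loop. This is a generic 1-D fuzzy C-means sketch in NumPy, not the paper's median-filter/level-set pipeline: membership and center updates alternate, and with reasonably spread initial centers the three intensity groups are recovered.

```python
import numpy as np

def fcm(x, centers, m=2.0, iters=50):
    """Alternate the FCM membership update and the weighted center update."""
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers)

# Three synthetic intensity groups (e.g. CSF, grey matter, white matter).
x = np.concatenate([np.full(20, 20.0), np.full(20, 120.0), np.full(20, 200.0)])
recovered = fcm(x, np.array([10.0, 100.0, 210.0]))  # well-spread initial centers
print(recovered)
```

A degenerate initialization, with all centers starting inside one intensity group, can stall near a suboptimal fixed point, which is the sensitivity to the initial cluster center that the abstract describes.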
This document describes a system for detecting brain tumors in MRI images using image segmentation. It discusses how existing manual detection of tumors is difficult due to noise and requires many days. The proposed system applies preprocessing like filtering and grayscale conversion. It then uses image segmentation techniques to detect tumor edges and boundaries. Features are extracted and classification is used to differentiate between normal and tumor images, helping doctors detect tumors earlier. The system is implemented in MATLAB and aims to overcome difficulties in early tumor detection.
IRJET- An Efficient Brain Tumor Detection System using Automatic Segmenta... (IRJET Journal)
This document presents a proposed method for an efficient brain tumor detection system using automatic segmentation with convolutional neural networks. The proposed method uses median filtering for noise removal, Otsu's thresholding for segmentation, and morphological operations for filtering. A convolutional neural network is then used for tumor classification. The methodology is tested on a brain MRI dataset, with evaluations of performance metrics like accuracy, precision, recall, and processing time. The goal is to develop an automated system for early detection of brain tumors using deep learning techniques for analysis of medical images.
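Otsu's thresholding, used here for segmentation, picks the grey level that maximizes the between-class variance of the image histogram. A self-contained NumPy sketch on a synthetic bright-blob image (illustrative, not the paper's code):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(img, bins=nbins, range=(0, 256))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                   # class-0 (background) probability
    mu = np.cumsum(p * centers)         # cumulative mean
    mu_t, w1 = mu[-1], 1 - np.cumsum(p)
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[between.argmax()]

# Synthetic "MRI slice": dark background plus one bright 20x20 blob.
img = np.full((64, 64), 30.0)
img[20:40, 20:40] = 200.0
t = otsu_threshold(img)
mask = img > t
print(t, int(mask.sum()))   # threshold lands between the two modes; blob is 400 px
```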
Utilization of Super Pixel Based Microarray Image Segmentation (ijtsrd)
Superpixels have been a key component of computer vision for roughly a decade. Many algorithms and methods exist for extracting superpixels, but among them Simple Linear Iterative Clustering (SLIC) has recently become the most widely used. Analysis of microarray gene expression is useful for recognizing tumors and other malignant diseases, and the cDNA microarray is a well-established tool for this purpose. Segmentation of microarray images is the first step in microarray analysis. In this paper, we propose an algorithm for segmenting cDNA microarray images using a SLIC-based Self-Organizing Maps (SOM) method. The proposed algorithm also takes on the challenging task of handling low-quality images. The image is processed in two phases: first, the input image is pre-processed to reduce noise levels; second, the image is segmented using the SLIC-based SOM approach. Mr. Davu Manikanta | Mr. Parasurama N | K Keerthi "Utilization of Super Pixel Based Microarray Image Segmentation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-5, August 2021, URL: https://www.ijtsrd.com/papers/ijtsrd46274.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/46274/utilization-of-super-pixel-based-microarray-image-segmentation/mr-davu-manikanta
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ... (Journal For Research)
This paper describes binarization of computerized brain MR images as a preprocessing step for feature extraction and brain abnormality identification. Binarization is used as an intermediate step in many methods for detecting normal and abnormal brain tissues in MR images. One of the main problems of MRI binarization is that many pixels of the brain region cannot be correctly binarized, due to the extensive black background and the large variation in contrast between background and foreground. The proposed binarization determines a threshold value using the mean, variance, standard deviation, and entropy, followed by a non-gamut enhancement that can overcome this problem. The proposed technique is extensively tested on a variety of MR images and produces good binarization with improved accuracy and reduced error.
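The background problem the abstract raises can be sketched directly: if the threshold statistics include the large black border, the threshold collapses toward zero. The toy NumPy version below excludes near-black pixels before computing a mean-plus-k·std threshold; the specific combination (and the `k` and `ignore_below` parameters) are illustrative stand-ins, not the paper's exact formula.

```python
import numpy as np

def brain_binarize(img, k=0.0, ignore_below=5.0):
    """Global threshold from image statistics. Excluding the near-black
    background keeps the mean/std from being dragged toward zero by the
    large empty border around the brain. mean + k*std is an illustrative
    stand-in for the paper's mean/variance/entropy combination."""
    fg = img[img > ignore_below]
    t = fg.mean() + k * fg.std()
    return (img > t).astype(np.uint8), t

img = np.zeros((64, 64))
img[16:48, 16:48] = 80.0       # "brain" tissue
img[28:36, 28:36] = 180.0      # brighter abnormal region
mask, t = brain_binarize(img)
print(t, int(mask.sum()))      # threshold sits between tissue and abnormality
```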
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati... (inventy)
This document presents a dualistic sub-image histogram equalization technique for medical image enhancement and segmentation. The technique divides an image histogram into two parts based on mean and median, then equalizes each sub-histogram independently. It enhances images effectively while constraining average luminance shift. For segmentation, canny edge detection and neural networks are used. The technique is tested on medical images and shows improved completeness and correctness over previous methods, with neural networks increasing accuracy to 98.3257%.
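The core of dualistic sub-image histogram equalization can be sketched in a few lines: split the histogram at a central statistic, then equalize each half into its own output sub-range, which limits the shift in average luminance. A toy NumPy version splitting at the median (the paper also uses the mean, and its edge detection and neural-network stages are omitted here):

```python
import numpy as np

def dshe(img, levels=256):
    """Equalize the sub-images below and above the median separately,
    mapping each into its own half of the output range."""
    med = np.median(img)
    out = np.empty(img.shape, dtype=float)
    for part, (a, b) in (((img <= med), (0, levels // 2 - 1)),
                         ((img > med), (levels // 2, levels - 1))):
        vals = img[part]
        if vals.size == 0:
            continue
        cdf = np.searchsorted(np.sort(vals), vals, side="right") / vals.size
        out[part] = a + cdf * (b - a)   # empirical CDF stretched over [a, b]
    return out

img = (np.arange(64).reshape(8, 8) * 3).astype(float)
eq = dshe(img)
print(eq.min(), eq.max())
```

Because the dark half can never map above the midpoint and the bright half never below it, the output mean stays close to the input's relative brightness ordering, which is the luminance-preserving property the abstract claims.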
Comparative performance analysis of segmentation techniques (IAEME Publication)
This document compares the performance of several image segmentation techniques: global thresholding, adaptive thresholding, region growing, and level set segmentation. It applies these techniques to medical and synthetic images corrupted with noise and evaluates the segmentation results using binary classification metrics like sensitivity, specificity, accuracy, and precision. The results show that level set segmentation best preserves object boundaries, adaptive thresholding captures most image details, and global thresholding has the highest success rate at extracting regions of interest. Overall, the study aims to determine the optimal segmentation method for medical images from CT scans.
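The global-versus-adaptive trade-off the study measures is easy to reproduce on a synthetic image with an illumination gradient. In this NumPy sketch (block size and offset are illustrative choices, not the paper's settings), a single global threshold misses the blob in the dark half of the image, while a per-block local-mean threshold finds both blobs:

```python
import numpy as np

def adaptive_threshold(img, block=8, offset=30.0):
    """Local-mean thresholding: each pixel is compared with the mean of its
    block plus an offset (the offset suppresses spurious detections in flat
    or smoothly varying regions)."""
    out = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile > tile.mean() + offset
    return out

# Background with a vertical illumination gradient and two equally bright blobs.
img = np.add.outer(np.arange(32) * 4.0, np.zeros(32))
img[4:8, 4:8] += 60.0      # blob in the dark half
img[24:28, 24:28] += 60.0  # blob in the bright half
g = img > 130              # global threshold: only the bright-half blob survives
a = adaptive_threshold(img)  # local threshold: both blobs are found
print(int(g.sum()), int(a.sum()))
```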
Brain tumor detection by thresholding approach (Sahil Prajapati)
This technical paper proposes a method for detecting tumors in MRI brain images using thresholding and morphological operations. The methodology involves preprocessing images using sharpening filters, histogram equalization, and median filtering. Threshold segmentation is then used to create binary images, and morphological operations like erosion and dilation are applied. Finally, tumor regions are extracted using image subtraction, which removes closely packed pixels. The authors found that this approach, combining thresholding with morphological operations and subtraction, was effective at detecting and segmenting tumor regions in MRI brain images.
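The threshold-then-morphology-then-subtract pipeline can be sketched with plain NumPy (a 3x3 structuring element is assumed): thresholding yields a binary mask, erosion shrinks it, and subtracting the eroded mask from the original leaves the region outline.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation built from shifted copies of the mask."""
    out = mask.copy()
    p = np.pad(mask, 1)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(mask):
    """Erosion as the complement of dilating the complement."""
    return ~dilate(~mask)

img = np.zeros((32, 32))
img[10:20, 10:20] = 255.0
mask = img > 128                 # threshold segmentation -> binary image
boundary = mask & ~erode(mask)   # subtraction leaves the region outline
print(int(mask.sum()), int(erode(mask).sum()), int(boundary.sum()))
```

For the 10x10 square, erosion removes the one-pixel rim, so the subtraction isolates exactly the 36 boundary pixels.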
Design and development of pulmonary tuberculosis diagnosing system using image (IAEME Publication)
The document describes a system for detecting pulmonary tuberculosis (PTB) using image processing techniques and an artificial neural network (ANN). X-ray images are segmented and enhanced to extract shape and texture features. These features along with clinical sputum examination results are used to train an ANN. The trained ANN is then used to classify unknown X-ray images as TB or non-TB and indicate severity. The system was tested on 110 images and achieved 94.5% accuracy in detection. Image processing techniques like enhancement, segmentation, and ANN provide an automated method for PTB diagnosis using visual features from chest X-rays.
An Efficient Brain Tumor Detection Algorithm based on Segmentation for MRI Sy... (ijtsrd)
A collection, or mass, of abnormal cells in the brain is called a brain tumor. The skull, which encloses the brain, is very rigid, so growth inside such a restricted space can cause problems. Brain tumors can be malignant or benign. Segmentation in magnetic resonance imaging (MRI) is an emergent research area in the field of medical imaging systems. In this paper, an efficient algorithm is proposed for tumor detection based on segmentation and morphological operators: the quality of the scanned image is first enhanced, and morphological operators are then applied to detect the tumor in the scanned image. Merlin Asha. M | G. Naveen Balaji | S. Mythili | A. Karthikeyan | N. Thillaiarasu "An Efficient Brain Tumor Detection Algorithm based on Segmentation for MRI System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-2, February 2018, URL: http://www.ijtsrd.com/papers/ijtsrd9667.pdf http://www.ijtsrd.com/engineering/electronics-and-communication-engineering/9667/an-efficient-brain-tumor-detection-algorithm-based-on-segmentation-for-mri-system/merlin-asha-m
Classification of Osteoporosis using Fractal Texture Features (IJMTST Journal)
In our proposed method an automatic osteoporosis classification system is developed. The input to the system is a lumbar spine digital radiograph, which is subjected to pre-processing: conversion of the grayscale image to a binary image and enhancement using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique. Segmentation-based Fractal Texture Analysis (SFTA) features are then extracted, and the image is classified as osteoporosis, osteopenia, or normal using a Probabilistic Neural Network (PNN). A total of 158 images have been used, of which 86 are used for training the network, 32 for testing, and 40 for validation. The network is evaluated using a confusion matrix, and evaluation parameters such as sensitivity, specificity, precision, and accuracy are computed.
Brain Image Fusion using DWT and Laplacian Pyramid Approach and Tumor Detecti... (INFOGAIN PUBLICATION)
Image fusion is the process of combining important information from two or more images into a single image; the resulting image is more informative than any of the inputs. The idea of combining multiple image modalities to furnish a single, more enhanced image is well established, and several fusion methods have been proposed in the literature. This paper is based on image fusion using the Laplacian pyramid and Discrete Wavelet Transform (DWT) methods. The system uses a simple and effective algorithm for multi-focus image fusion which applies fusion rules to create the fused image; the fused image is then obtained by applying the inverse discrete wavelet transform. Finally, a watershed segmentation algorithm is applied to detect the tumor region in the fused image.
Comparative study of brain tumor detection using morphological operators (eSAT Journals)
Abstract
Segmentation divides an image into a foreground object and the background. In our case the foreground object is the brain tumor and the background consists of CSF, white matter, and grey matter. The aim of our study is to detect the tumor, remove the background completely, and compare the morphological operations that can be used for this purpose. Segmentation remains a challenging area for researchers since many segmentation methods result in over-segmentation or under-segmentation and hence lead to false interpretation of the results. The proposed work is a comparative study of morphological segmentation methods for segmenting brain tumors from MRI images. Before segmentation, filtering is carried out using two methods, the non-local means (NL-means) filter and the median filter, and their results are compared using MSE and PSNR. The NL-means filter preserves sharp edges and fine details in an image and is therefore preferred over the median filter. The tumor location is also identified, to get an approximate idea of the position of the tumor in the brain, i.e. in which part of the brain the tumor is located. The tumor is identified using different morphology-based algorithms such as watershed segmentation, morphological erosion, and hole filling, and a comparison between them is carried out based on parameters such as accuracy, sensitivity, and elapsed time. Each segmentation result is compared with the tumor obtained using the interactive tool in MATLAB R2013b.
Keywords: Brain tumor, MRI images, Image segmentation, Morphology, Erosion, Thresholding, Hole filling, Watershed segmentation
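The MSE/PSNR comparison of denoising filters described in the abstract can be reproduced in a few lines. The NumPy sketch below scores a 3x3 median filter against impulse noise on toy data (NL-means itself is omitted for brevity; the noise model and sizes are illustrative assumptions):

```python
import numpy as np

def mse(a, b):
    return ((np.asarray(a, float) - np.asarray(b, float)) ** 2).mean()

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

def median3(img):
    """3x3 median filter with edge replication."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    shifts = [p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = clean.copy()
noisy[rng.random(clean.shape) < 0.05] = 255.0   # impulse ("salt") noise
print(psnr(clean, noisy), psnr(clean, median3(noisy)))
```

The filtered PSNR is higher than the noisy PSNR, which is the kind of score the study uses to rank the two filters.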
Medical Image Fusion Using Discrete Wavelet Transform (IJERA Editor)
Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multimodal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. The domain where image fusion is readily used nowadays is in medical diagnostics to fuse medical images such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and MRA. This paper aims to present a new algorithm to improve the quality of multimodality medical image fusion using Discrete Wavelet Transform (DWT) approach. Discrete Wavelet transform has been implemented using different fusion techniques including pixel averaging, maximum minimum and minimum maximum methods for medical image fusion. Performance of fusion is calculated on the basis of PSNR, MSE and the total processing time and the results demonstrate the effectiveness of fusion scheme based on wavelet transform.
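A minimal version of wavelet-domain fusion: take a one-level 2-D Haar transform of each input, average the approximation bands, keep the stronger of the two detail coefficients, then invert. This is a generic sketch of the idea, not the paper's exact rule set (which also evaluates pixel-averaging and max/min rules):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: (LL, LH, HL, HH) sub-bands."""
    a = (x[0::2] + x[1::2]) / 2     # row averages
    d = (x[0::2] - x[1::2]) / 2     # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Average the approximation band; keep the stronger detail coefficient."""
    c1, c2 = haar2(img1), haar2(img2)
    LL = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(p) >= np.abs(q), p, q)
               for p, q in zip(c1[1:], c2[1:])]
    return ihaar2(LL, *details)

img = np.add.outer(np.arange(8.0), np.arange(8.0))
print(np.allclose(fuse(img, img), img))   # fusing an image with itself returns it
```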
MEDICAL IMAGE TEXTURE SEGMENTATION USING RANGE FILTER (cscpconf)
Medical image segmentation is a frequent processing step in image understanding and computer-aided diagnosis. In this paper, we propose medical image texture segmentation using a texture filter. Three different image enhancement techniques are utilized to remove strong speckle noise as well as to enhance the weak boundaries of medical images. We propose to exploit the concept of range filtering to extract the texture content of a medical image. Experiments are conducted on the ImageCLEF2010 database, and the results show the efficacy of the proposed medical image texture segmentation.
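A range filter is simply the local maximum minus the local minimum over a sliding window, so flat regions respond with zero and textured regions or boundaries respond strongly. A minimal NumPy sketch with a 3x3 window (window size is an assumed choice):

```python
import numpy as np

def range_filter(img):
    """3x3 range filter: local max minus local min. Flat regions give 0;
    textured regions and boundaries give large responses."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    shifts = np.stack([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    return shifts.max(axis=0) - shifts.min(axis=0)

img = np.zeros((16, 16))
img[:, 8:] = 100.0        # two flat regions separated by a vertical boundary
r = range_filter(img)
print(r[:, :6].max(), r[:, 7:9].min())   # 0 in flat areas, 100 along the boundary
```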
PPT on BRAIN TUMOR detection in MRI images based on IMAGE SEGMENTATION (khanam22)
The document presents three methods for tumor detection in MRI images: 1) K-means clustering with watershed algorithm, 2) Optimized K-means using genetic algorithm, and 3) Optimized C-means using genetic algorithm. It evaluates each method, finding that C-means clustering with genetic algorithm most accurately detects tumors by assigning data points to multiple clusters and finding the optimal solution in less time. The proposed approach successfully detects tumors with high accuracy, identifies the tumor area and internal structure, and provides a colorized output image.
The document discusses a method for classifying brain tumor images using artificial neural networks. It involves three main steps: 1) preprocessing MRI images using morphological operations to remove noise, 2) extracting texture and statistical features using GLCM and GLRLM techniques, and 3) classifying images using a probabilistic neural network (PNN) and measuring accuracy. Features are extracted from 50 brain tumor images and 65 images are tested, achieving a classification accuracy of up to 98%.
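A GLCM for one offset is just a normalized table of co-occurring grey-level pairs, and texture features such as contrast and energy are moments of that table. A small NumPy sketch for the horizontal-neighbor offset (the GLRLM features and the PNN classifier from the pipeline are omitted):

```python
import numpy as np

def glcm(img, levels=4):
    """Normalized co-occurrence matrix for horizontally adjacent pixels."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
g = glcm(img)
contrast = sum(g[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
energy = (g ** 2).sum()
print(round(contrast, 4), round(float(energy), 4))  # contrast = 1/3, energy = 1/6
```

These scalar features, computed over many offsets and angles, form the texture vector fed to the classifier.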
Brain tumour segmentation based on local independent projection based classif... (eSAT Journals)
This document summarizes a research paper on brain tumour segmentation using local independent projection based classification. The proposed method uses MRI images and consists of four main steps: preprocessing using median filtering, feature extraction using patches, tumour segmentation using local independent projection classification, and post processing to analyze the tumour region. Local independent projection classification treats segmentation as a classification problem and uses local anchor embedding and softmax regression to improve performance. The method was able to classify tumour and edema regions and calculate the tumour area and perimeter pixels.
Identifying brain tumour from MRI image using modified FCM and support (IAEME Publication)
This document summarizes a research paper that proposes a technique for identifying brain tumors in MRI images. The technique involves 4 steps: 1) preprocessing the MRI image, 2) segmenting the image using a modified fuzzy C-means algorithm, 3) extracting features from the segmented regions like mean, standard deviation, and pixel orientation, and 4) classifying the image as tumorous or normal using support vector machine classification on the extracted features. The technique is evaluated on MRI brain images and achieves a testing accuracy of 93%, demonstrating its effectiveness at detecting brain tumors compared to other segmentation and classification methods.
Implementing Tumor Detection and Area Calculation in MRI Image of Human Brain... (IJERA Editor)
This paper is based on research on human brain tumors and uses MRI imaging to capture the image. In the proposed work, the brain tumor area is calculated to define the stage, or level of seriousness, of the tumor. Image processing techniques are used for the tumor area calculation and neural network algorithms for the tumor position calculation; classification of the tumor based on a few parameters is also planned as a further advancement. The proposed work is divided into the following modules: Module 1: image pre-processing; Module 2: feature extraction and segmentation using the K-means and Fuzzy C-means algorithms; Module 3: tumor area calculation and stage detection; Module 4: classification and position calculation of the tumor using a neural network.
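Once a binary tumor mask exists, the stage-relevant area is just the pixel count scaled by the scanner's per-pixel area. A minimal sketch (the 0.5 mm pixel spacing is an assumed value; in practice it comes from the scan's DICOM header):

```python
import numpy as np

def tumor_area_mm2(mask, spacing_mm=(0.5, 0.5)):
    """Physical area = tumor pixel count x per-pixel area (assumed spacing)."""
    return float(mask.sum()) * spacing_mm[0] * spacing_mm[1]

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 25:45] = True            # 20 x 20 px segmented "tumor"
print(tumor_area_mm2(mask))          # 400 px * 0.25 mm^2/px = 100.0 mm^2
```

Stage thresholds can then be applied directly to the returned area.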
Utilization of Super Pixel Based Microarray Image Segmentationijtsrd
In the division of PC vision pictures, Super pixels are go probably as key part from 10 years prior. There are various counts and methodology to separate the Super pixels anyway whole all of them the best super pixel looking at strategy is Simple Linear Iterative Clustering SLIC have come to pivot continuously recently. The concentrating of small scale group quality verbalization from MRI imaging is more useful to perceive tumors or some other dangerous development contaminations, so the fundamental DNA cDNA microarray is a grounded device for analyzing the same. The division of microarray pictures is the essential development in a microarray assessment. In this paper, we proposed a figuring to dividing the cDNA small show picture using Simple Linear Iterative Clustering SLIC based Self Organizing Maps SOM method. In any case, the proposed figuring is taken up a moving task to look at the bad quality of pictures in addition. There are two phases to separate the image, introductory, a pre setting up the applied picture to diminish fuss levels and second, to piece the image using SLIC based SOM approach. Mr. Davu Manikanta | Mr. Parasurama N | K Keerthi "Utilization of Super Pixel Based Microarray Image Segmentation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-5 , August 2021, URL: https://www.ijtsrd.com/papers/ijtsrd46274.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/46274/utilization-of-super-pixel-based-microarray-image-segmentation/mr-davu-manikanta
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...Journal For Research
Computerized MR of brain image binarization for the uses of preprocessing of features extraction and brain abnormality identification of brain has been described. Binarization is used as intermediate steps of many MR of brain normal and abnormal tissues detection. One of the main problems of MRI binarization is that many pixels of brain part cannot be correctly binarized due to the extensive black background or the large variation in contrast between background and foreground of MRI. Proposed binarization determines a threshold value using mean, variance, standard deviation and entropy followed by a non-gamut enhancement that can overcome the binarization problem. The proposed binarization technique is extensively tested with a variety of MRI and generates good binarization with improved accuracy and reduced error.
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati... | inventy
This document presents a dualistic sub-image histogram equalization technique for medical image enhancement and segmentation. The technique divides an image histogram into two parts based on mean and median, then equalizes each sub-histogram independently. It enhances images effectively while constraining average luminance shift. For segmentation, canny edge detection and neural networks are used. The technique is tested on medical images and shows improved completeness and correctness over previous methods, with neural networks increasing accuracy to 98.3257%.
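A minimal sketch of the core idea, splitting the histogram at the mean and equalizing each half into its own output range (the paper's Canny and neural-network segmentation stages are omitted, and the mapping below is a generic formulation, not the paper's code):

```python
import numpy as np

def dualistic_hist_eq(img, split="mean"):
    """Dualistic sub-image histogram equalization: split at the mean
    (or median) and equalize the two sub-images independently, each
    into its own half of the output range.  This constrains the shift
    of the average luminance compared with global equalization."""
    img = img.astype(float)
    t = img.mean() if split == "mean" else float(np.median(img))
    out = np.empty_like(img)
    for mask, (a, b) in (((img <= t), (0.0, t)), ((img > t), (t, 255.0))):
        vals = img[mask]
        if vals.size == 0:
            continue
        levels, counts = np.unique(vals, return_counts=True)
        cdf = np.cumsum(counts) / vals.size     # sub-image CDF
        out[mask] = np.interp(vals, levels, a + cdf * (b - a))
    return out.astype(np.uint8)
```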
Comparative performance analysis of segmentation techniques | IAEME Publication
This document compares the performance of several image segmentation techniques: global thresholding, adaptive thresholding, region growing, and level set segmentation. It applies these techniques to medical and synthetic images corrupted with noise and evaluates the segmentation results using binary classification metrics like sensitivity, specificity, accuracy, and precision. The results show that level set segmentation best preserves object boundaries, adaptive thresholding captures most image details, and global thresholding has the highest success rate at extracting regions of interest. Overall, the study aims to determine the optimal segmentation method for medical images from CT scans.
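The evaluation metrics named above follow directly from the confusion counts of a predicted mask against ground truth; a small helper (a generic sketch, not the paper's code):

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Binary-mask evaluation: sensitivity, specificity, accuracy and
    precision computed from TP/TN/FP/FN counts."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # foreground found correctly
    tn = np.sum(~pred & ~truth)     # background found correctly
    fp = np.sum(pred & ~truth)      # background marked as foreground
    fn = np.sum(~pred & truth)      # foreground missed
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
    }
```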
brain tumor detection by thresholding approach | Sahil Prajapati
This technical paper proposes a method for detecting tumors in MRI brain images using thresholding and morphological operations. The methodology involves preprocessing images using sharpening filters, histogram equalization, and median filtering. Threshold segmentation is then used to create binary images, and morphological operations like erosion and dilation are applied. Finally, tumor regions are extracted using image subtraction, which removes closely packed pixels. The authors found that this approach, combining thresholding with morphological operations and subtraction, was effective at detecting and segmenting tumor regions in MRI brain images.
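A pure-NumPy sketch of the thresholding-plus-morphology step described above (the 3×3 structuring element and the subtraction rule are assumptions; also note `np.roll` wraps at the image borders, which real implementations avoid):

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element,
    implemented with array shifts (no SciPy dependency; borders wrap)."""
    r = k // 2
    out = np.zeros_like(mask, dtype=bool)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask.astype(bool), dy, 0), dx, 1)
    return out

def erode(mask, k=3):
    # erosion is dilation of the complement (morphological duality)
    return ~dilate(~mask.astype(bool), k)

def tumor_region(img, thresh):
    """Threshold, clean with an opening (erosion then dilation), and
    return the speckle removed by image subtraction -- a simplified
    version of the paper's pipeline."""
    binary = img > thresh
    opened = dilate(erode(binary))
    residue = binary & ~opened      # pixels removed by the opening
    return opened, residue
```

Opening removes isolated bright pixels while leaving larger closely packed regions (the candidate tumor) intact.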
Design and development of pulmonary tuberculosis diagnosing system using image | IAEME Publication
The document describes a system for detecting pulmonary tuberculosis (PTB) using image processing techniques and an artificial neural network (ANN). X-ray images are segmented and enhanced to extract shape and texture features. These features along with clinical sputum examination results are used to train an ANN. The trained ANN is then used to classify unknown X-ray images as TB or non-TB and indicate severity. The system was tested on 110 images and achieved 94.5% accuracy in detection. Image processing techniques like enhancement, segmentation, and ANN provide an automated method for PTB diagnosis using visual features from chest X-rays.
An Efficient Brain Tumor Detection Algorithm based on Segmentation for MRI Sy... | ijtsrd
A collection, or mass, of abnormal cells in the brain is called a brain tumor. The skull, which encloses the brain, is very rigid, so growth inside such a restricted space can cause problems. Brain tumors can be malignant or benign. Segmentation in magnetic resonance imaging (MRI) is an emerging research area in the field of medical imaging systems. In this paper, an efficient algorithm is proposed for tumor detection based on segmentation and morphological operators: the quality of the scanned image is enhanced, and morphological operators are then applied to detect the tumor in the scanned image. Merlin Asha. M | G. Naveen Balaji | S. Mythili | A. Karthikeyan | N. Thillaiarasu "An Efficient Brain Tumor Detection Algorithm based on Segmentation for MRI System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-2, February 2018, URL: http://www.ijtsrd.com/papers/ijtsrd9667.pdf http://www.ijtsrd.com/engineering/electronics-and-communication-engineering/9667/an-efficient-brain-tumor-detection-algorithm-based-on-segmentation-for-mri-system/merlin-asha-m
Classification of Osteoporosis using Fractal Texture Features | IJMTST Journal
In our proposed method an automatic osteoporosis classification system is developed. The input to the system is a lumbar-spine digital radiograph, which is pre-processed by converting the grayscale image to a binary image and enhancing it with the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique. Fractal texture features (SFTA) are then extracted, and the image is classified as Osteoporosis, Osteopenia, or Normal using a Probabilistic Neural Network (PNN). A total of 158 images have been used: 86 for training the network, 32 for testing, and 40 for validation. The network is evaluated using a confusion matrix, and evaluation parameters such as sensitivity, specificity, precision, and accuracy are computed.
Brain Image Fusion using DWT and Laplacian Pyramid Approach and Tumor Detecti... | INFOGAIN PUBLICATION
Image fusion is the process of combining important information from two or more images into a single image; the resulting image is more informative than any of the inputs. The idea of combining multiple image modalities to produce a single, enhanced image is well established, and several fusion methods have been proposed in the literature. This paper is based on image fusion using the Laplacian pyramid and Discrete Wavelet Transform (DWT) methods. The system uses a simple and effective multi-focus fusion algorithm in which fusion rules combine the decomposed coefficients, and the fused image is obtained by applying the inverse discrete wavelet transform. A watershed segmentation algorithm is then applied to detect the tumor in the fused image.
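The paper's exact wavelet and fusion rules are not specified in the summary; as an illustrative sketch, a one-level Haar DWT fusion with average-approximation / max-magnitude-detail rules (both rules are common defaults, assumed here) can be written as:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2          # row averages
    d = (x[0::2] - x[1::2]) / 2          # row details
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    """Average the approximation subbands; keep the larger-magnitude
    detail coefficient from either input."""
    b1, b2 = haar_dwt2(img1.astype(float)), haar_dwt2(img2.astype(float))
    fused = [(b1[0] + b2[0]) / 2]
    for c1, c2 in zip(b1[1:], b2[1:]):
        fused.append(np.where(np.abs(c1) >= np.abs(c2), c1, c2))
    return haar_idwt2(*fused)
```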
Comparative study of brain tumor detection using morphological operators | eSAT Journals
Abstract
Segmentation divides an image into a foreground object and the background. In our case the foreground object is the brain tumor and the background is CSF, white matter, and grey matter. The aim of our study is to detect the tumor, remove the background completely, and compare the morphological operations that can be used for this purpose. Segmentation remains a challenging area for researchers, since many segmentation methods result in over-segmentation or under-segmentation and hence lead to false interpretation of the results. The proposed work is a comparative study of morphological segmentation methods for segmenting brain tumors from MRI images. Before segmentation, filtration is carried out using two methods, the non-local means (NL-means) filter and the median filter, and their results are compared using MSE and PSNR; the NL-means filter preserves sharp edges and fine details in an image and is therefore preferred over the median filter. The tumor location is also identified, to get an approximate idea of which part of the brain the tumor lies in. The tumor is identified using different morphology-based algorithms, namely watershed segmentation, morphological erosion, and a hole-filling algorithm, and these are compared on parameters such as accuracy, sensitivity, and elapsed time. Each segmentation result is compared with the tumor obtained using the interactive tool in MATLAB R2013b.
Keywords: Brain tumor, MRI images, Image segmentation, Morphology, Erosion, Thresholding, Hole filling, Watershed segmentation
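The MSE/PSNR comparison of the two filters mentioned above uses the standard definitions:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between a filtered image and a reference."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the filtered
    image is closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```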
Medical Image Fusion Using Discrete Wavelet Transform | IJERA Editor
Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multimodal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. The domain where image fusion is readily used nowadays is in medical diagnostics to fuse medical images such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and MRA. This paper aims to present a new algorithm to improve the quality of multimodality medical image fusion using Discrete Wavelet Transform (DWT) approach. Discrete Wavelet transform has been implemented using different fusion techniques including pixel averaging, maximum minimum and minimum maximum methods for medical image fusion. Performance of fusion is calculated on the basis of PSNR, MSE and the total processing time and the results demonstrate the effectiveness of fusion scheme based on wavelet transform.
MEDICAL IMAGE TEXTURE SEGMENTATION USING RANGE FILTER | cscpconf
Medical image segmentation is a frequent processing step in image understanding and computer-aided diagnosis. In this paper, we propose medical image texture segmentation using a texture filter. Three different image enhancement techniques are used to remove strong speckle noise and to enhance the weak boundaries of medical images. We propose to exploit the concept of range filtering to extract the texture content of a medical image. Experiments are conducted on the ImageCLEF2010 database, and the results show the efficacy of the proposed medical image texture segmentation.
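A range filter replaces each pixel with the local max-minus-min of its neighbourhood, so flat regions map to zero and textured regions to large responses; a direct, unoptimized sketch (edge handling by replication is an assumption):

```python
import numpy as np

def range_filter(img, k=3):
    """Texture extraction via range filtering: each output pixel is
    the max minus the min of its k x k neighbourhood (edge-replicated
    padding)."""
    r = k // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win = padded[y:y + k, x:x + k]
            out[y, x] = win.max() - win.min()
    return out
```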
PPT on BRAIN TUMOR detection in MRI images based on IMAGE SEGMENTATION | khanam22
The document presents three methods for tumor detection in MRI images: 1) K-means clustering with watershed algorithm, 2) Optimized K-means using genetic algorithm, and 3) Optimized C-means using genetic algorithm. It evaluates each method, finding that C-means clustering with genetic algorithm most accurately detects tumors by assigning data points to multiple clusters and finding the optimal solution in less time. The proposed approach successfully detects tumors with high accuracy, identifies the tumor area and internal structure, and provides a colorized output image.
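The K-means step underlying the first two methods can be sketched on raw intensities (the genetic-algorithm initialization and watershed stage are omitted; this is the generic algorithm, not the slides' code):

```python
import numpy as np

def kmeans(X, k=2, n_iter=20, seed=0):
    """Plain K-means on a 1-D intensity array: alternate nearest-centre
    assignment and centre recomputation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):               # guard empty clusters
                centers[j] = X[labels == j].mean()
    return labels, centers
```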
The document discusses a method for classifying brain tumor images using artificial neural networks. It involves three main steps: 1) preprocessing MRI images using morphological operations to remove noise, 2) extracting texture and statistical features using GLCM and GLRLM techniques, and 3) classifying images using a probabilistic neural network (PNN) and measuring accuracy. Features are extracted from 50 brain tumor images and 65 images are tested, achieving a classification accuracy of up to 98%.
Brain tumour segmentation based on local independent projection based classif... | eSAT Journals
This document summarizes a research paper on brain tumour segmentation using local independent projection based classification. The proposed method uses MRI images and consists of four main steps: preprocessing using median filtering, feature extraction using patches, tumour segmentation using local independent projection classification, and post processing to analyze the tumour region. Local independent projection classification treats segmentation as a classification problem and uses local anchor embedding and softmax regression to improve performance. The method was able to classify tumour and edema regions and calculate the tumour area and perimeter pixels.
Identifying brain tumour from mri image using modified fcm and support | IAEME Publication
This document summarizes a research paper that proposes a technique for identifying brain tumors in MRI images. The technique involves 4 steps: 1) preprocessing the MRI image, 2) segmenting the image using a modified fuzzy C-means algorithm, 3) extracting features from the segmented regions like mean, standard deviation, and pixel orientation, and 4) classifying the image as tumorous or normal using support vector machine classification on the extracted features. The technique is evaluated on MRI brain images and achieves a testing accuracy of 93%, demonstrating its effectiveness at detecting brain tumors compared to other segmentation and classification methods.
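The standard fuzzy C-means update at the heart of step 2 is shown below on toy 1-D data (the paper's modification is not reproduced; this is the textbook algorithm with fuzzifier m = 2):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy C-means.  X: (n_samples, n_features).
    Returns the membership matrix U (rows sum to 1) and the centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                   # avoid divide-by-zero
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
    return U, centers
```

Unlike K-means, each pixel keeps a graded membership in every cluster, which is what makes FCM an overlapping clustering method.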
Implementing Tumor Detection and Area Calculation in Mri Image of Human Brain... | IJERA Editor
This paper is based on research on human brain tumors, using the MRI imaging technique to capture the images. In the proposed work the brain tumor area is calculated to define the stage, or level of seriousness, of the tumor. Image processing techniques are used for the tumor area calculation and neural network algorithms for the tumor position calculation; classification of the tumor based on a few parameters is also planned as a further advancement. The proposed work is divided into the following modules: Module 1: Image Pre-Processing; Module 2: Feature Extraction and Segmentation using the K-Means and Fuzzy C-Means algorithms; Module 3: Tumor Area Calculation and Stage Detection; Module 4: Classification and Position Calculation of the Tumor using a Neural Network.
Global Outsourcing: Recent Issues & Challenges in the Pharmaceutical Industry | Wei Garofolo
The document discusses recent issues and challenges facing the pharmaceutical industry regarding global outsourcing. It notes that drug development costs have risen tremendously to over $1 billion on average, while successful new drug approvals have declined. Pharmaceutical companies are focusing more on biomarkers, biologics, and genomic research to develop new drug targets. However, stricter regulations from the FDA and increased development costs and times pose significant challenges, especially for drugs targeting age-related diseases. Outsourcing certain functions and improving efficiency are seen as ways to help address these challenges.
The document provides guidelines for regulating herbal medicines in Europe and Japan. In Europe, herbal medicinal products are regulated and must comply with guidelines on quality control, safety, efficacy, advertising and more. Herbal medicines from foreign countries entering the market must also provide proof of quality, safety and efficacy. In Japan, traditional herbal medicines called Kampo are regulated similarly to conventional drugs, and must undergo clinical trials and comply with good manufacturing practices. Japan also has various systems for monitoring adverse reactions to herbal medicines. The regulatory frameworks aim to standardize quality while allowing traditional herbal approaches to be practiced legally.
Drug discovery challenges and different discovery approaches | Hitesh Soni
The document discusses several challenges in drug discovery and different discovery approaches. It outlines issues with the traditional high-throughput screening approach such as low success rates. It then describes alternative approaches like considering transient binding drugs that interact weakly with multiple targets, leveraging natural products as drug leads, and exploring a multi-target drug discovery strategy to address complex diseases involving multiple molecular dysfunctions.
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
Clinical trials involve several phases to test a drug's safety and efficacy. Phase I trials test safety in healthy volunteers. Phase II trials test dosage and side effects in patients. Phase III trials test efficacy in large patient groups. Legal and procedural aspects require ethics committee approval, informed consent, and regulatory oversight. Clinical trials involve clinical investigators, institutions to host the trial, sponsors to fund the trial, and regulatory authorities to provide legal approval. The clinical trial protocol, informed consent process, and role of ethics committees are important to protect patient rights and welfare in clinical research.
Liquisolid technology is a topic in pharmaceutics, presented by Konatham Teja Kumar Reddy from Chilkur Balaji College of Pharmacy, Hyderabad, Telangana.
Georgina Gal, Regulatory Affairs Manager, AbbVie, Hungary
Presentation at EIPG – BIPA Symposium “Clinical Trials Research” at the Faculty of Pharmacy, Medical University of Sofia, Sofia 2014.
Solubility enhancement by using various techniques | Prajakta Chavan
This document discusses various techniques for enhancing the solubility of drugs, including particle size reduction, hydrotropy, cosolvency, solubilization by surfactants, solid dispersions, pH adjustment, high pressure homogenization, supercritical fluid recrystallization, sonocrystallization, complexation, spray drying, inclusion complex formation, liquisolid technique, microemulsions, and self-emulsifying drug delivery systems. Particle size reduction techniques like micronization and nanosuspensions increase surface area to enhance dissolution rate and solubility. Other techniques utilize excipients like surfactants, cosolvents, and polymers to solubilize drugs.
This document provides an overview of the regulatory aspects of herbal medicines in India, Europe, and the United States. It discusses the key regulatory bodies and guidelines around herbal medicines in each region. In India, herbal medicines are regulated by the Ayush Ministry, ICMR, and Drugs and Cosmetics Act. In Europe, herbal medicines fall under European directives and are evaluated by the EMA and HMPC. In the US, herbal products are classified and regulated differently depending on if they are considered a dietary supplement, food, or drug.
Formulation and Evaluation of Liquisolid Compacts of Carvedilol | IOSR Journals
The purpose of this study is to develop a novel liquisolid technique to enhance the dissolution rate of the poorly water-soluble drug Carvedilol, a BCS class II drug and β-blocker, by using different excipients. The main components of a liquisolid system are a non-volatile solvent, carrier and coating materials, and a disintegrant. A liquisolid system refers to formulations formed by converting liquid drugs, drug suspensions, or drug solutions in non-volatile solvents into dry, non-adherent, free-flowing, and compressible powder mixtures by blending with suitable carrier and coating materials. Hence the dissolution step, a prerequisite for drug absorption, is bypassed and better bioavailability of the poorly soluble drug is achieved. Liquisolid tablets of carvedilol are prepared using PEG, PG, and glycerine as non-volatile liquid vehicles, with Avicel PH 101 and 102 as carrier and Aerosil as coating material. The optimized formulation, containing 20% drug in PEG 400 with Avicel 101 as carrier and Aerosil as coating material, showed 98.4% drug release within 20 min, which is better than the marketed product (CARCA 12.5 mg, Intas). DSC and XRD studies are performed to investigate the physicochemical properties of the formulation and drug-excipient interactions. The results are found to be satisfactory.
Personalized medicine involves the prescription of specific therapeutics best suited for an individual based on their genetic or proteomic profile. This talk discusses current approaches in drug discovery/development, the role of genetics in drug metabolism, and lawful/ethical issues surrounding the deployment of new health technology.
The document discusses the key stages in the drug discovery and development process including target selection, compound screening and hit optimization, selecting a drug candidate through further optimization of properties like absorption and metabolism, safety testing in animals and humans, proof of concept clinical trials in patients, large phase 3 clinical trials for registration and approval, and finally launch and life cycle management. It notes that the entire process from discovery to approval can take 12-16 years and cost over $1 billion.
The document discusses the process of drug discovery, including target selection, lead discovery, medicinal chemistry, in vitro and in vivo studies, and clinical trials. Target selection involves identifying cellular or genetic targets involved in disease through techniques like genomics, proteomics, and bioinformatics. Lead discovery focuses on identifying small-molecule modulators of protein function through methods like synthesis, combinatorial chemistry, assay development, and high-throughput screening. Medicinal chemistry then works to optimize these leads.
An Assimilated Face Recognition System with effective Gender Recognition Rate | IRJET Journal
This document summarizes an assimilated face recognition system that can also perform gender recognition. The system conducts experiments using databases like GENDER-FERET and Cambridge AT&T. For face recognition, it uses the Eigenfaces algorithm to extract features and classify faces. For gender recognition, it uses a trainable COSFIRE filter with Gabor filters to obtain face descriptors, which are classified using an SVM classifier. The experiments achieve a gender recognition rate of over 90%. The paper shows that the gender recognition approach outperforms other methods using handcrafted features and raw pixels.
Local Descriptor based Face Recognition System | IRJET Journal
This document describes a local descriptor-based face recognition system that uses the Asymmetric Region Local Binary Pattern (AR-LBP) operator along with Principal Component Analysis (PCA) for facial expression recognition. The proposed AR-LBP operator addresses limitations of existing LBP operators in terms of scale, feature histogram length, and discriminability. The system divides input face images into regions, extracts AR-LBP histograms from each region, and concatenates them into a feature vector. It was evaluated on three datasets and achieved recognition accuracies of 96.43%, 97.14%, and 86.67%, respectively. Evaluation using different similarity metrics found that Mahalanobis Cosine distance performed best. Experiments varied grid and operator sizes.
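For reference, the classic 3×3 LBP baseline that AR-LBP extends can be sketched as follows (the asymmetric-region variant itself is not reproduced here):

```python
import numpy as np

def lbp(img):
    """Basic 3x3 LBP: code each interior pixel by thresholding its 8
    neighbours against the centre and packing the results into bits."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left, bit weights 1..128
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram: the per-region feature vector that
    systems like the one above concatenate across the face grid."""
    h, _ = np.histogram(lbp(img), bins=bins, range=(0, bins))
    return h / h.sum()
```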
Effectual Face Recognition System for Uncontrolled Illumination | IIRindia
Facial recognition systems are biometric methods used to pinpoint the identities of faces present in various digital formats by comparing them against facial databases. Variation in illumination conditions is a major hindrance to the efficient operation of facial verification systems, but the effects of changing ambient lighting and shadow formation can be nullified by a simple pre-processing system. This paper presents an effectual facial recognition system consisting of three stages: illumination-insensitive preprocessing, feature extraction, and score fusion. In the preprocessing stage, light-sensitive images are converted to light-insensitive images so that uncontrolled lighting is no longer a liability for identification. In the feature extraction stage, hybrid Fourier classifiers are used to obtain transforms, which are projected into subspaces using PCLDA theory. The output is passed to the score fusion stage, where the discriminating powers of the classifiers are unified using LLR and ground-truth optimizations. The proposal has been evaluated on the Face Recognition Grand Challenge (FRGC) Version-2 experiment and on the Extended Yale B and FERET datasets.
ANOVA and Fisher Criterion based Feature Selection for Lower Dimensional Univ... | CSCJournals
Unethical uses of data-hiding methods have made image steganalysis a very important area of research in the field of digital investigations. The effectiveness of any image steganalysis algorithm depends on feature selection and feature reduction. The goal of this paper is to develop a reduced-dimensional merged feature set for universal image steganalysis using the Fisher criterion and ANOVA techniques. Statistical features extracted from wavelet subbands and binary similarity patterns extracted from the DCT of an image are merged into a combined feature set. The Fisher criterion and an ANOVA test are applied to score the combined feature vector, and only those features found sensitive under both selection methods are retained. The resulting 15-dimensional feature vector is used to train an SVM classifier with an RBF kernel. The proposed algorithm is tested against steganography methods such as F5, Outguess, and an LSB-based method. Stego images are generated using widely available stego tools for two standard image databases, CorelDraw and BSDS500, and the results are further validated using 10-fold cross-validation. The proposed algorithm achieves 97% overall detection accuracy against the various steganography methods.
IRJET- Face Recognition of Criminals for Security using Principal Component A... | IRJET Journal
This document presents a face recognition system using principal component analysis to identify criminals at airports. The system is trained on images of known criminals collected from law enforcement agencies. It uses PCA for dimensionality reduction to generate eigenfaces from the training images. During testing, it generates an eigenface from the input image and calculates the Euclidean distance between this eigenface and the eigenfaces of the training images. It identifies the criminal as the one corresponding to the training image with the minimum distance, alerting authorities. The document outlines the methodology, including preprocessing steps like subtracting the mean face, and reviews prior work applying PCA and other algorithms to face recognition.
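The eigenface pipeline described above, mean subtraction, PCA projection, and minimum-Euclidean-distance matching, can be sketched with SVD-based PCA (the data in the test is synthetic; this is the generic algorithm, not the paper's code):

```python
import numpy as np

def train_eigenfaces(faces, n_components=2):
    """Eigenfaces via SVD.  faces: (n_images, n_pixels).
    Returns the mean face and the top principal components."""
    mean = faces.mean(axis=0)
    X = faces - mean                       # subtract the mean face
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:n_components]         # rows are eigenfaces

def project(face, mean, eigenfaces):
    """Coordinates of a face in eigenface space."""
    return eigenfaces @ (face - mean)

def identify(probe, gallery, mean, eigenfaces):
    """Return the index of the gallery face with minimum Euclidean
    distance to the probe in eigenface space."""
    p = project(probe, mean, eigenfaces)
    dists = [np.linalg.norm(p - project(g, mean, eigenfaces))
             for g in gallery]
    return int(np.argmin(dists))
```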
1) The document presents a new touchless palmprint verification method that uses shock filtering for preprocessing, SIFT feature extraction and matching, and I-RANSAC and LPD refinement for feature matching.
2) Shock filtering is used to enhance the palmprint images by dividing them clearly at edges and flattening signals within areas. This improves SIFT feature detection by producing more keypoints.
3) SIFT is used to extract scale and rotation invariant features from the palmprint images. I-RANSAC and LPD refinement are then used to match features between images and improve matching accuracy.
Touchless Palmprint Verification using Shock Filter, SIFT, I-RANSAC, and LPD | iosrjce
An Efficient Face Recognition Using Multi-Kernel Based Scale Invariant Featur... | CSCJournals
Face recognition has gained significant attention in the research community due to its wide range of commercial and law-enforcement applications. Thanks to developments over the past few decades, face recognition now employs advanced feature identification techniques and matching methods; in spite of vast research, it remains an open problem due to the challenges posed by illumination, occlusion, pose variation, scaling, etc. This paper proposes a face recognition technique with high accuracy, based on an improved SIFT algorithm. In the proposed approach, face features are extracted using a novel multi-kernel function (MKF) based SIFT technique, and classification is done using an SVM classifier. Experimental results show the superiority of the proposed algorithm over the SIFT technique: evaluated on the CVL face database, the proposed approach achieves a recognition rate of 99%.
Biometric identification with improved efficiency using sift algorithm | IJARIIT
This document summarizes a research paper that proposes using the SIFT (Scale-Invariant Feature Transform) algorithm to increase the efficiency of biometric identification systems using fingerprints. The paper provides background on biometric identification and describes the SIFT algorithm and its key steps. It then outlines the methodology used, which applies SIFT to extract features from fingerprint images to reduce identification time. The results showed SIFT provided higher accuracy and reduced identification time compared to other techniques. Future work is suggested to make the system more robust and adaptive.
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDY | csandit
The majority of applications require high-resolution images to derive and analyze data accurately and easily, and image super-resolution plays an effective role in those applications. Image super-resolution is the process of producing a high-resolution image from a low-resolution image. In this paper, we study various image super-resolution techniques with respect to the quality of the results and the processing time. This comparative study covers four single-image super-resolution algorithms; for a fair comparison, the algorithms are tested on the same dataset and the same platform to show the major advantages of one over the others.
Face Recognition based on STWT and DTCWT using two dimensional Q-shift Filters | IJERA Editor
Biometrics recognize a person more effectively than traditional methods of identification. In this paper, we propose face recognition based on the Single Tree Wavelet Transform (STWT) and the Dual Tree Complex Wavelet Transform (DTCWT). The face images are preprocessed to enhance image quality and resized. DTCWT and STWT are applied to the face images to extract features, and the Euclidean distance is used to compare features of database images with test face images to compute performance parameters. The performance of STWT is compared with that of DTCWT, and it is observed that DTCWT gives better results than the STWT technique.
IRJET - A Review on Face Recognition using Deep Learning Algorithm | IRJET Journal
This document provides an overview of face recognition using deep learning algorithms. It discusses how deep learning approaches like convolutional neural networks (CNNs) have achieved high accuracy in face recognition tasks compared to earlier methods. CNNs can learn discriminative face features from large datasets during training to generalize to new images, handling variations in pose, illumination and expression. The document reviews popular CNN architectures and training approaches for face recognition. It also discusses other traditional face recognition methods like PCA and LDA, and compares their performance to deep learning methods.
IRJET- Class Attendance using Face Detection and Recognition with OPENCVIRJET Journal
This document describes a system to automate class attendance using face detection and recognition with OpenCV. The system uses the Viola-Jones algorithm for face detection and local binary pattern histograms for face recognition. Detected faces are converted to grayscale images for better accuracy. The system trains on positive images of faces and negative images without faces to build a classifier. It then detects faces in class and recognizes students by matching features to a stored database, updating attendance and notifying administrators. The proposed system aims to reduce time spent on manual attendance and increase accuracy by automating the process through computer vision techniques.
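The local binary pattern (LBP) representation underlying this recognizer assigns each pixel a code built from threshold comparisons with its 8 neighbours; histograms of these codes over image regions form the face descriptor. A minimal sketch of the per-pixel code (the clockwise neighbour ordering is a convention chosen here for illustration):

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """LBP code of the centre pixel of a 3x3 patch: each of the 8 neighbours
    contributes one bit (1 if neighbour >= centre), read clockwise from top-left."""
    c = patch[1, 1]
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= c)

patch = np.array([[9, 1, 9],
                  [1, 5, 9],
                  [1, 1, 9]])
code = lbp_code(patch)
```

Because the code depends only on orderings relative to the centre pixel, it is robust to monotonic lighting changes, which is one reason LBP histograms work well for attendance-style recognition.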
This document describes a proposed system for identifying individual faces among a crowd using video footage. The system utilizes a training process involving face detection, feature extraction using HOG, and classification with SVM. Testing involves extracting video frames, detecting faces using LBP features, extracting HOG features, and classifying faces using SVM. The goal is to identify specific suspects from videos recorded with a CCTV camera mounted 2.5 meters high at a 60 degree angle, achieving the highest accuracy and lowest processing time from frame sampling and threshold testing.
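The HOG features used in both the training and testing pipelines reduce to histograms of gradient orientations, weighted by gradient magnitude, computed per cell. A simplified single-cell sketch (cell size, bin count, and block normalization details of the actual system are not reproduced):

```python
import numpy as np

def orientation_histogram(patch: np.ndarray, bins: int = 9) -> np.ndarray:
    """Core HOG step: bin gradient magnitudes by unsigned orientation (0-180 deg)
    for one cell; a full descriptor concatenates many such histograms."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist

# A vertical edge puts all gradient energy in the 0-degree (horizontal-gradient) bin.
patch = np.tile([0.0, 0.0, 10.0, 10.0], (4, 1))
h = orientation_histogram(patch)
```

The concatenated histograms are then fed to the SVM classifier for the identify-the-suspect decision.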
1. The document discusses various techniques that have been proposed for face detection and attendance systems, including Haar classifiers, improved support vector machines, and local binary patterns algorithms.
2. It reviews several papers that have implemented different methods for face recognition for attendance systems, such as using HOG features and PCA for dimensionality reduction along with SVM classification.
3. The document also summarizes a paper that proposed a context-aware local binary feature learning method for face recognition that exploits contextual information between adjacent image bits.
A Survey on Different Relevance Feedback Techniques in Content Based Image Re...IRJET Journal
This document summarizes several relevance feedback techniques used in content-based image retrieval to bridge the semantic gap between low-level visual features and high-level semantic concepts. It reviews subspace learning algorithms like feature adaptation and relevance feedback, probabilistic feature weighting with positive and negative examples, asymmetric bagging and random subspaces for support vector machines, navigation pattern-based relevance feedback, biased discriminative Euclidean embedding, and feature line embedding biased discriminant analysis. The goal of these techniques is to retrieve more semantically relevant images through an iterative feedback process between the user and retrieval system.
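Many of these relevance-feedback schemes build on the core idea captured by the classic Rocchio update: move the query representation toward the centroid of user-marked relevant images and away from non-relevant ones. A minimal sketch (the surveyed subspace and embedding methods are considerably more sophisticated):

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """One relevance-feedback iteration on a query feature vector."""
    return (alpha * query
            + beta * np.mean(relevant, axis=0)    # pull toward relevant centroid
            - gamma * np.mean(nonrelevant, axis=0))  # push away from non-relevant centroid

q = np.array([1.0, 0.0])
rel = np.array([[2.0, 0.0], [4.0, 0.0]])
nonrel = np.array([[0.0, 4.0]])
q2 = rocchio(q, rel, nonrel)
```

Iterating this loop with fresh user feedback is what gradually narrows the semantic gap between low-level features and the user's intent.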
Iaetsd multi-view and multi band face recognitionIaetsd Iaetsd
The document discusses multi-view and multi-band face recognition using wavelet transforms. It begins with an abstract describing the challenges of face recognition due to variations in lighting, expression, and aging, then introduces a multi-band face recognition algorithm that uses wavelet transforms to extract features from multiple video bands. It covers preprocessing the images, feature extraction using PCA and wavelet transforms, and feature matching, and concludes from the experimental results that wavelet transforms are well suited to feature extraction and face matching, giving high accuracy with less response time.
This document describes a facial recognition and biometric security system called Digiyathra that is intended to streamline airport security checks. It would allow passengers to complete check-in, bag drop, and boarding using only their face as identification. During online ticket booking, passengers would submit a passport photo that would be added to a database and used for verification at various points throughout their journey. This system aims to accelerate passenger throughput while reducing costs by minimizing the need for paper-based ID checks. It provides details on how facial recognition works, describing the five main steps of detection, analysis, template generation, matching, and result determination. Local Binary Patterns Histograms are discussed as the specific method used to recognize and identify faces within this system.
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptxAravindHari22
1) The document proposes a real-time unconstrained face recognition system using deep convolutional neural networks (DCNN).
2) The system performs face detection, extracts DCNN features, and computes similarity to perform face recognition on images and video frames.
3) It was tested on challenging datasets like CASIA-WebFace, IJB-A, and LFW and was able to achieve accurate recognition with variations in pose, illumination, expression, resolution and occlusion.
An Efficient Image Forensic Mechanism using Super Pixel by SIFT and LFP Algor...IRJET Journal
This document summarizes a research paper that proposes an efficient image forensic mechanism using super pixels, scale-invariant feature transform (SIFT), and local fingerprint (LFP) algorithm to detect copy-move forgery. The mechanism applies wavelet decomposition to compute super pixel sizes for segmentation, extracts features using SIFT, and performs region growing to detect forged regions. Experimental results showed increased performance in precision, sensitivity, specificity, and F1 score measures for forgery detection compared to existing techniques. The document also reviews several related works on image forgery detection techniques.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
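The SNR/BER trade-off such comparisons rest on can be illustrated with standard AWGN formulas: BPSK's exact bit error rate and a common Gray-coded approximation for 16-QAM (used here instead of 256-QAM to keep the formula simple; Eb/N0 is a linear ratio, not dB):

```python
from math import erfc, sqrt

def ber_bpsk(ebn0: float) -> float:
    """Theoretical BPSK bit error rate over AWGN."""
    return 0.5 * erfc(sqrt(ebn0))

def ber_16qam(ebn0: float) -> float:
    """Common Gray-coded approximation of 16-QAM bit error rate over AWGN."""
    return 0.375 * erfc(sqrt(0.4 * ebn0))

ebn0 = 10.0  # i.e. 10 dB
# At equal Eb/N0, 16-QAM pays a BER penalty for carrying 4 bits per symbol.
```

This is exactly the trade the document explores: higher-order QAM packs more bits per symbol but demands a higher SNR to reach the same BER.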
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probe-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
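The region-dependent thresholding rules described above are all built from two primitives, hard and soft thresholding of wavelet coefficients. A minimal sketch of those primitives (the paper's modified-hard and semi-soft rules interpolate between these; the wavelet decomposition itself is not reproduced):

```python
import numpy as np

def soft_threshold(coeffs: np.ndarray, t: float) -> np.ndarray:
    """Shrink wavelet coefficients toward zero by t (soft thresholding)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def hard_threshold(coeffs: np.ndarray, t: float) -> np.ndarray:
    """Zero coefficients below t in magnitude, keep the rest unchanged."""
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

c = np.array([-3.0, -0.5, 0.2, 2.0])
soft = soft_threshold(c, 1.0)
hard = hard_threshold(c, 1.0)
```

Applying a harder rule to voiced regions and a softer one to unvoiced regions, as the paper does, preserves strong speech harmonics while suppressing noise-dominated coefficients.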
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
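The Urdhva Tiryakbhyam ("vertically and crosswise") idea, forming every column of cross partial products independently so they can all be generated concurrently in hardware, can be modelled in software as follows. This is a behavioural sketch for checking the arithmetic, not the paper's VHDL design:

```python
def urdhva_multiply(a: int, b: int, width: int = 8) -> int:
    """Multiply two width-bit numbers by forming each column of bitwise
    cross products independently, then rippling the carries."""
    xa = [(a >> i) & 1 for i in range(width)]
    xb = [(b >> i) & 1 for i in range(width)]
    # Column sums: all partial products for bit position k are generated together.
    cols = [sum(xa[i] * xb[k - i] for i in range(width) if 0 <= k - i < width)
            for k in range(2 * width - 1)]
    # Carry propagation turns the column sums into the final product bits.
    result, carry = 0, 0
    for k, c in enumerate(cols):
        total = c + carry
        result |= (total & 1) << k
        carry = total >> 1
    return result | (carry << (2 * width - 1))
```

In hardware the column generation is fully parallel, which is where the reported path-delay reduction over the sequential Booth recoding comes from.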
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
This document provides a review of synthetic aperture radar (SAR) engineering. It begins with an introduction to SAR and its uses in remote sensing and defense. It then discusses designs for SAR systems and antennas. The document reviews recent literature on SAR, including works discussing antenna mask design to optimize SAR performance, polarimetric SAR for mapping terrain changes, and using SAR to monitor cryospheric regions. It concludes that SAR is a useful technique for achieving good image quality through optimized antenna design.
This document analyzes the capacity of MIMO wireless channels when accounting for impairments from physical transceiver hardware limitations. It is shown that when including the effects of transceiver impairments like non-linearities, phase noise, and quantization noise, the capacity of MIMO channels reaches a finite limit as SNR increases, rather than increasing without bound. This results in a zero multiplexing gain, unlike the ideal case without impairments. However, the relative capacity increase from MIMO over single-antenna channels remains at least as large when including impairments. Various figures are presented showing the capacity and multiplexing gain for different channel models and transceiver configurations. The document concludes by stating the analysis provides insights into understanding
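The finite capacity limit can be seen in a simplified scalar model where residual distortion power scales with signal power, with kappa playing the role of the squared error vector magnitude (EVM). This is an illustrative model under those assumptions, not the document's exact channel setup:

```python
from math import log2

def capacity_with_impairments(snr: float, kappa: float, streams: int = 4) -> float:
    """Per-stream capacity when distortion power is kappa times the signal power:
    the effective SINR is snr / (kappa * snr + 1), which saturates at 1/kappa."""
    return streams * log2(1.0 + snr / (kappa * snr + 1.0))
```

As snr grows without bound the effective SINR approaches 1/kappa, so capacity tends to streams * log2(1 + 1/kappa) rather than increasing indefinitely, which is the zero-multiplexing-gain behaviour the document describes.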
1) The document discusses various Internet of Things (IoT) based digital agriculture monitoring systems that have been developed by researchers to optimize resource utilization and increase crop production.
2) It describes different technologies like Bluetooth, Zigbee, GSM, WiFi that have been used to monitor agriculture parameters such as temperature, moisture, humidity and communicate this sensor data to monitoring systems.
3) The paper also proposes a new IoT monitoring system using sensors to measure temperature, soil moisture and humidity, an ESP8266 WiFi module to transmit data to the cloud, and a user interface to view environmental parameter graphs remotely.
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE)
e-ISSN: 2278-2834, p-ISSN: 2278-8735. Volume 11, Issue 1, Ver. III (Jan.-Feb. 2016), PP 44-54
www.iosrjournals.org
DOI: 10.9790/2834-11124454
Hybrid Technique to Identify the Faces Using Key Point Hypothesis Prediction Algorithm (KHPA)
K. Raghavendra Prasad¹, H. Girisha²
¹Department of EEE, Rao Bahadur Y. Mahabaleswarappa Engineering College, India
²Department of CSE, Rao Bahadur Y. Mahabaleswarappa Engineering College, India
Abstract: To detect faces accurately and retrieve criminal records from the detected faces, a feedback accuracy system is developed. The model improves the detection of criminal faces using the Anisotropic Scale Invariant Feature Transform (A-SIFT) algorithm, which applies anisotropic scaling to the query images for accurate corner detection. The identified true-positive keypoints are then used to match database images with the unsupervised RESVM technique. The result is further evaluated using a proposed hypothesis test with defined rules that compare the resultant RESVM image with the actual image; based on these rules, the test decides whether the match is accurate. The degree of accuracy determines the acceptance or rejection of images: if the accuracy falls below the threshold, the image is resubmitted to RESVM, and the entire process is repeated until the image attains an accuracy level of 95% or more. The proposed method is tested with both similar and different reference and input facial images, and the results prove better and more efficient in terms of retrieval time and keypoint reduction.
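The accept/resubmit loop described in the abstract can be sketched as follows. All names here are hypothetical illustrations; the real system's RESVM classifier and hypothesis-test rules are not reproduced:

```python
def khpa_feedback(classify, image, threshold=0.95, max_rounds=10):
    """Resubmit the image to the classifier until the hypothesis test
    accepts it, i.e. accuracy reaches the threshold (or rounds run out)."""
    for round_no in range(1, max_rounds + 1):
        label, accuracy = classify(image)
        if accuracy >= threshold:
            return label, accuracy, round_no  # accepted by the hypothesis test
    return None, accuracy, max_rounds  # rejected after max_rounds

# Mock classifier whose confidence improves on each resubmission.
state = {"acc": 0.80}
def mock_classify(img):
    state["acc"] = min(1.0, state["acc"] + 0.06)
    return img, state["acc"]

result = khpa_feedback(mock_classify, "query_face")
```

With the mock classifier the loop accepts on the third round, mirroring the paper's repeat-until-95% behaviour.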
I. Introduction
In recent years, feature detection and analysis have played a major role in image analysis and processing for
obtaining relevant information [1]. Feature detection is performed through pattern recognition [2], annotation [3],
recognition [4] and sequence analysis [5]. Among these, the highly distinctive invariant Scale Invariant Feature
Transform (SIFT), proposed by David G. Lowe, has proved to be an effective technique for pre-processing an image
while remaining robust in pattern recognition [6]. SIFT can detect and describe local features across an entire image
for object recognition, tracking and mapping applications.
SIFT locates extrema in scale space to detect keypoints that are invariant to location, scale and rotation.
Disruption in recognition can be reduced by local image feature extraction rather than filtering, which also helps
solve the hand-deformation problem. In addition, SIFT remains robust under compression, added noise and
filtering. However, SIFT performs inefficiently because of the numerous patterns generated at each filtration
stage [7], and an isotropic filter for preprocessing followed by matching of the preprocessed images does not
perform well in terms of matched points [8].
Apart from the robustness issues due to filtering, accuracy is affected when SIFT is used with the 2D symbolic
aggregate approximation feature [9] and with Comp-Code feature extraction during recognition [10]. One main
problem with the SIFT technique is discriminating the keypoints of an image: orientation histogram descriptors
are not sufficient [11]. Another major problem is that matching keypoints between images to attain rotation and
translation invariance suffers because topological relations are ignored.
SIFT is mainly used for extracting features from a multispectral gray image, and in doing so it possesses certain drawbacks. The first is that the conversion of multispectral data to gray information reduces the level of image information extracted, since image formation is strongly affected by gray-image conversion, leading to degraded SIFT features. The second is that it fails to detect the spectral information used for differentiating regional characteristics. This reduces the number of useful descriptors or distinctive point features in a multispectral image, leading to lower accuracy [12].
There are certain considerations when pre-processing an image with respect to the above drawbacks. To avoid the problem caused by gray scaling, [13] used a technique based on opponent information that retains the image information even when the image is gray-scaled with different intensity shifts. To improve the descriptors of an image after applying SIFT, a combination of color and global information builds color-invariant components and global components using a log-polar histogram [14]. This makes the descriptors richer after pre-processing the image relative to other images.
Unsupervised recognition plays a crucial role in detecting human faces in an image. To achieve this, the Support Vector Machine (SVM) can contribute substantially, and to improve detection rates an ensemble learning approach using SVM is employed. The Robust Ensemble Support Vector Machine (RESVM) plays a major role in learning patterns from positive and unlabelled (PU) data [15], studying the patterns given by the descriptors of the modified SIFT technique. Furthermore, the accuracy of the system is greatly increased by this PU pattern learning, which reduces the noise present in an image. This can be
Hybrid Technique To Identify The Faces Using Key Point Hypothesis Prediction Algorithm (KHPA)
DOI: 10.9790/2834-11124454 www.iosrjournals.org
justified in terms of PU patterns whose descriptors represent the actual image rather than the false unlabelled data. To attain PU learning, the class-weighted SVM (CWSVM) approach initially considers the unlabelled data to be negative cases [15]. The CWSVM is then trained to avoid contamination by positive instances, which would otherwise lead to an accumulation of unlabelled data. Bootstrap resampling on both positive and unlabelled instances reduces this accumulation. RESVM uses an extra degree of freedom to set the misclassification penalty between the positive and unlabelled data, and ensemble learning is performed on base-model decision values by resampling the positive and unlabelled data in the SVM.
This paper aims at the detection of criminal faces from a group photograph, a process whose accuracy is complex to achieve. The model involves two approaches: a query given at the front end, and pre-processing and classification using A-SIFT and RESVM, respectively. The latter includes the accuracy increment and the retrieval process. Dynamic detection of the human face is done using an anomaly-detection method based on the SIFT technique. In the SIFT technique, due to the unavailability of angular points, a large number of features are extracted, leading to redundant information; the generation of high-dimensional feature points increases calculation time, affecting real-time performance [16]. This concern has been addressed in [17], and our concern is to improve the directionality of keypoints. To meet this objective, we apply anisotropic scaling over SIFT (A-SIFT). Rich directional keypoints obtained using A-SIFT improve the keypoint behaviour of the feature transform. The obtained keypoints are processed in further stages to obtain better descriptors. These descriptors are trained with RESVM and compared with the original image to eliminate the accumulation of unlabelled data; PU learning with positive labels thus further increases the accuracy. By generating rules for accurate detection, the accuracy is improved through hypothesis testing of the predicted values against the generated rules. If the resulting accuracy is less than the threshold value of the model, the system re-runs all steps repeatedly over the given query image. Once the accuracy exceeds the threshold level, the related image is retrieved from the database.
The contribution of the paper is organized as follows: Section 2 presents the proposed model for accuracy improvement, Section 3 discusses the correctness of the proposed model, validation is done in Section 4, and the conclusion and future scope are included in Section 5.
II. Methodologies
The proposed method uses the Scale Invariant Feature Transform (SIFT) over a group photo for extracting its features. The main feature to be extracted is the human face visible in the group photo; this is done to detect criminals using a facial detection technique. An automated model is used for extracting the selected feature, i.e. the face, from the group photo. The remaining attributes are eliminated, and facial features are accurately selected using automated selection of threshold accuracy levels for each region of a face. The facial attributes are extracted using the A-SIFT technique, which uses anisotropic scaling to improve the accuracy of scaling. In the next stage, PU learning using the Robust Ensemble SVM (RESVM) removes the unlabelled data and increases the efficiency of retrieval. In the third stage, a threshold accuracy value (TAV) determines the accuracy of retrieval from the previous two stages. The TAV is generated based on the rule defined in terms of a hypothesis, proving whether the obtained image is significant or insignificant. If the accuracy value is less than the TAV, pre-processing and classification are repeated until the image retrieval attains a value greater than the TAV; otherwise, the image whose accuracy value is greater than the TAV is retrieved from the database.
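The three-stage loop described above (A-SIFT pre-processing, RESVM classification, then a hypothesis test against the TAV, repeated until the threshold is met) can be sketched as follows. This is only a control-flow sketch: the stage functions are passed in as parameters because they stand for the components described later in this section, not an implementation of them.

```python
def khpa_identify(query_image, database, a_sift, resvm_classify,
                  hypothesis_accuracy, retrieve, tav=0.95, max_rounds=10):
    """Control loop of the proposed KHPA method: pre-process with A-SIFT,
    classify with RESVM, test accuracy against the TAV, and repeat until
    the threshold is met or the round budget is exhausted."""
    for _ in range(max_rounds):
        keypoints = a_sift(query_image)                    # stage 1: A-SIFT keypoints
        labels = resvm_classify(keypoints)                 # stage 2: PU classification
        accuracy = hypothesis_accuracy(labels, database)   # stage 3: hypothesis test
        if accuracy >= tav:                                # TAV reached: retrieve image
            return retrieve(database, labels)
    return None                                            # accuracy never reached TAV
```

With stub stages the loop either retrieves once the accuracy crosses the TAV or gives up after the round budget.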
Fig. 1. Key Point Hypothesis Prediction Algorithm (KHPA)
2.1 Anisotropic-SIFT
The main steps in the Scale Invariant Feature Transform are: scale-space extrema detection, which searches for and detects all the scale-space extreme points of an image using the Difference of Gaussians (DoG); accurate positioning of the size and location of the keypoints; orientation assignment, which assigns each keypoint a desired direction, thus converting the image data to feature points; and descriptor assignment, in which descriptors are attached to the feature points to count the keypoints of the current scaled area [17].
2.1.1. Difficulty in the SIFT Algorithm
The robustness issue in SIFT arises from the selection of keypoints, as addressed in [17]. Corner detectors may not help when an image is scaled, and the canny-edge-detector-based extraction proposed in [12] also fails under scaling. Our concern is to provide better directional scaling of the keypoints to achieve a proper orientation of keypoints. Adding anisotropy for directional scaling improves the accuracy and coherence of the directional vectors in obtaining proper feature points. The main difficulty in the SIFT algorithm can thus be removed using this coherent technique, and the redundant feature vectors arising from improper scaling in the featured direction can be eliminated. The process associated with the modified SIFT technique is described below:
2.1.2. Adaptive Thresholding
The initial process involves the detection of keypoints using a windowing function. Since the image varies under different scalings, exact detection of accurate keypoints is required. To achieve this, an adaptive window with a similarity difference is applied through weighted-average processing between the images [18]. Objects are approximated in terms of a collection of keypoints in the image, and anisotropic diffusion is used to reduce noise. An adaptive smoothing parameter helps attain smooth images while protecting the edge features [19]. Under this principle, the diffusion coefficient K diffuses more near smooth areas than near boundary regions. Texture and noise areas are set with small variations for edge preservation, and large variations are set for flat areas. Thus, using a constant K0, the semi-adaptive diffusion coefficients are given in Eqs. (1) and (2):
K' = K0 / (1 + |∇I| / 255)        (1)

K' = K0 / (1 + exp(−255 / |∇I|))        (2)
The constant K0 is larger than the K used in normal anisotropic processing; in semi-adaptive diffusion, K' is large in flat regions and small in edge regions, so as to diffuse more and less, respectively. Performing the diffusion over multiple passes helps achieve better denoising effects: the diffusion process is run over several iterations and automatically follows the smoothing functions, because the K' values depend entirely on the manually specified K0 value, called the semi-adaptive threshold. Finally, the improved coefficients for the smoothing function with K' are defined as:
c(∇I) = exp( −[ |∇I| (1 + |∇I| / 255) / K0 ]^2 )        (3)

c(∇I) = exp( −[ |∇I| (1 + exp(−255 / |∇I|)) / K0 ]^2 )        (4)
The smoothing function helps remove the noise present in the reference image using the adaptive thresholding technique, as represented in fig. 2.
Fig. 2. Initial Pre-Processing and filtering of noises using Adaptive Thresholding
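One plausible reading of Eqs. (1) and (3) is a Perona-Malik-style diffusion whose threshold shrinks with the local gradient, so flat regions smooth strongly while edges are preserved. The sketch below assumes 4-neighbour differences, periodic boundaries and a fixed step size; the constants are illustrative, not taken from [18].

```python
import numpy as np

def semi_adaptive_diffusion(img, k0=30.0, iters=20, dt=0.15):
    """Iterative anisotropic diffusion with a semi-adaptive threshold:
    K' = K0 / (1 + |grad I| / 255) as in Eq. (1), and diffusion
    coefficient c = exp(-(|grad I| / K')^2) as in Eq. (3)."""
    img = np.asarray(img, dtype=float).copy()
    for _ in range(iters):
        # finite differences toward the four compass neighbours
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        grad = np.sqrt(dn ** 2 + de ** 2)        # gradient magnitude estimate
        k = k0 / (1.0 + grad / 255.0)            # semi-adaptive threshold, Eq. (1)
        c = np.exp(-(grad / k) ** 2)             # edge-stopping coefficient, Eq. (3)
        img += dt * c * (dn + ds + de + dw)      # explicit diffusion update
    return img
```

Because c falls toward zero at strong edges, repeated passes denoise flat areas while leaving boundaries largely intact.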
2.1.3. Scale Space Detection
The semi-adaptive-threshold smoothing coefficient finds potential keypoints through a local-maxima solution across scale and space, which correspondingly provides a set of values on the x and y axes at scale K'. Here, instead of the Difference of Gaussians (DoG), we apply a database of keypoints taken from several facial images with various facial expressions. DoG helps find the feature points in the whole image, but the facial regions alone cannot be extracted. So the relevant keypoints within the whole-image keypoint set are found using database values containing facial image values as ground vectors. The database values are chosen as ground-truth values; a sample image is shown in fig. 3(a), while fig. 3(b) shows the crowded, inefficient SIFT points over the reference image.
Fig. 3. Reference image (a) normal face (b) initial SIFT points
Initially, the ground-truth images are used for feature extraction, and the features are saved in the database. From these ground values, a new image m is trained with this training data. The trained image is then tested against the same database over several iterations. To achieve proper keypoints in scale space, a weighted Euclidean distance is used; it finds the minimum distance between the keypoints over a given descriptor. The weights are based on the training data samples from the database, using neural networks with inputs from the ground-truth values.
d(a, b) = sqrt( Σ_{i=1}^{n} w_i [ (x_i^1 − b_i^1)^2 + (x_i^2 − b_i^2)^2 ] )        (5)
where x_i^1 and x_i^2 represent the i-th measure of edge regions along the x and y coordinates relative to point 1, respectively; b_i^1 and b_i^2 represent the i-th measure of edge regions along the x and y coordinates relative to point 2, respectively; and w_i is a weight for the i-th measure that relates to the ground-truth value from the database.
2.1.4 Keypoint Localization
A threshold parameter is used to find more matching points relative to the facial region. Its value is chosen in the range [0, 1]: the more matching points there are, the larger the value can be. A larger value range directly affects the accuracy and the time-matching precision. This parameter is defined as:
d_R(w_i) = d(x_i^1, b_i^2) / d(x_i^1, b_i^3)        (6)
The numerator in the above equation represents the weighted Euclidean distance between points 1 and 2, and the denominator the weighted Euclidean distance between points 1 and 3, both computed with the weights w_i. Thus, using keypoint localization, the reference points with training data samples help remove redundancy and reduce the keypoints to a large extent, as shown in fig. 4.
Fig. 4. Removal of Redundant Keypoints over the facial regions using Keypoint localization
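Read as a Lowe-style nearest/second-nearest ratio test, the thresholding of Eq. (6) can be sketched as follows; the descriptor layout and the default d_R = 0.8 are assumptions, not values from the paper.

```python
import numpy as np

def ratio_filter(query_desc, db_desc, d_r=0.8):
    """Keep a query keypoint only when the distance to its nearest database
    descriptor, divided by the distance to the second nearest, falls below
    the threshold d_R in [0, 1] (ambiguous matches are discarded)."""
    db_desc = np.asarray(db_desc, dtype=float)
    kept = []
    for i, q in enumerate(np.asarray(query_desc, dtype=float)):
        dists = np.sqrt(((db_desc - q) ** 2).sum(axis=1))
        nearest, second = np.sort(dists)[:2]     # two closest database entries
        if second > 0 and nearest / second < d_r:
            kept.append(i)                       # unambiguous match: keep it
    return kept
```

A keypoint equidistant from two database entries has ratio 1 and is dropped, which is how the redundant points of fig. 3(b) would be pruned.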
2.1.5 Orientation Assignment
For each keypoint on the image contour, the orientation is assigned in terms of directional derivative,
defined as:
D(q) = argmax_k |∇I(P_q; k)| ,   k = 1, 2, …, K        (7)
where P_q represents the pixel on the contour corresponding to keypoint q. The use of pixel edges with keypoints correlates with their corresponding directions: D(q) of adjacent pixels is the same when the edge segment is smooth, and differs in corner regions. The measure at an interior point of the contour is defined as:
D̃(q) = min( |D(q−1) − D(q+1)| , |D(q−1) + D(q+1)| )        (8)
For the contour endpoints, the measure is defined as:
6. Hybrid Technique To Identity The Faces Using Key Point Hypothesis Prediction Algorithm (Khpa)
DOI: 10.9790/2834-11124454 www.iosrjournals.org 49 | Page
D̃(1) = D̃(q) = min( |D(2) − D(q−1)| , |D(2) + D(q−1)| )        (9)
Also, for an open contour, the measure is defined as:
D̃(1) = min( |D(1) − D(2)| , |D(1) + D(2)| )
D̃(q) = min( |D(q) − D(q−1)| , |D(q) + D(q−1)| )        (10)
The above expressions help reduce the sensitivity of the measured directional derivatives of edges or corners with keypoints; this sensitivity arises from noise and local neighbourhood variations in the contours [20]. The orientation assignment compares the facial access points among all the points in fig. 2b and fig. 3. Eqs. (8), (9) and (10) help add the relevant keypoints from fig. 2b, thus producing fig. 5.
Fig. 5. Adding relevant keypoints using Orientation Assignment
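The orientation rule of Eq. (7) and the interior measure of Eq. (8), as reconstructed above, can be sketched as follows; the patch_grads input and the list layout of D are illustrative assumptions.

```python
import numpy as np

def dominant_direction(patch_grads):
    """Eq. (7): the orientation D(q) is the index of the directional
    derivative with the largest magnitude at a contour pixel
    (patch_grads holds the K directional responses for that pixel)."""
    return int(np.argmax(np.abs(patch_grads)))

def interior_measure(D, q):
    """Eq. (8): smoothness measure for an interior contour point q,
    comparing the dominant directions of its two neighbours; small values
    indicate a smooth edge, large values a corner candidate."""
    return min(abs(D[q - 1] - D[q + 1]), abs(D[q - 1] + D[q + 1]))
```

On a smooth contour the neighbouring directions agree and the measure is near zero, while a direction change at a corner makes it large.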
2.1.6 Keypoint Matching
Thus, the weighted Euclidean distance with sample weights chosen from the ground-truth values helps achieve the nearest keypoints in scale space. The threshold value d_R is compared between the ground-image value and the actual image; when the obtained result is less than d_R, the corresponding point is excluded. This removes the extreme points with low contrast and eliminates poor localization. The redundant extreme points are removed using the large curvature across edges and the small curvature in the DoG function: when the ratio of the larger to the smaller eigenvalue exceeds the threshold, a Hessian matrix is formed at the keypoint location and the corresponding keypoint is rejected. Thus, with the help of training data from the neural networks, the accurate keypoints from the ground-truth image refine the keypoints precisely, finally eliminating the poorly localized regions and edges of a facial image using scale-space detection with adaptive thresholding. A further advantage of this technique is that the use of Laplacian operators for maximum-distance finding is eliminated. With manual computation of keypoints, we can achieve exact keypoints over the facial regions, as represented in fig. 5.
Fig. 5. Exact keypoint matching over Facial regions
2.2 RESVM
In a given image, a description is given to the training classes, training data and test points. The classes of interest are based on the faces in the image and are treated as classes of interest within image regions. Training data belonging to the facial classes involves training the system with these classes and learning their parameters. The facial part of an image is chosen to represent the class from which training data is obtained, by choosing the facial access points near the facial regions. Facial access points are simply an irregular submatrix of the ground-truth image. The pixel values in the irregular submatrix are converted into a vector, formed by stacking each submatrix column under the previous column. This pixel vector is treated as the training data of the relevant class. For image classification, random pixels are selected as test points belonging to the selected classes; in the RESVM classification method, an equal number of test points is selected over each region representing a facial class. The misclassification error on the test points helps estimate the performance of the RESVM after classification from A-SIFT.
The Robust Ensemble Support Vector Machine (RESVM) bagging method uses the Class-Weighted Support Vector Machine (CWSVM), a supervised technique whose penalty for misclassified A-SIFT labels differs per class. CWSVM performs PU (P – positive, U – unlabelled) learning through an unlabelled dataset from the facial data that contains negative labels, i.e. noise on keypoints. Training with CWSVM distinguishes the positive labels (P) from the unlabelled labels (U). During the training phase, misclassification of positive instances is penalized more heavily than that of unlabelled cases, to reflect the higher degree of certainty on P. In this context, the optimization for PU learning in the training phase of CWSVM is defined as:
min_{α, ξ, b}  (1/2) Σ_{i,j} α_i α_j y_i y_j κ(x_i, x_j) + C_P Σ_{i∈P} ξ_i + C_U Σ_{i∈U} ξ_i

s.t.  y_i ( Σ_{j=1}^{N} α_j y_j κ(x_i, x_j) + b ) ≥ 1 − ξ_i ,   i = 1, …, N
      ξ_i ≥ 0 ,   i = 1, …, N        (11)
where α ∈ R^N is the vector of support values, y ∈ {−1, +1}^N is the label vector, κ(·,·) is the kernel function, ξ ∈ R^N are the slack variables and b is the bias term. C_P and C_U are the misclassification penalties, which are required to satisfy C_P > C_U. These per-class penalties are used for tackling imbalanced datasets. RESVM resamples P along with U and uses a degree of freedom for controlling the misclassification penalty between P and U instances, which further reduces the variability between the base models (ground-truth images).
RESVM resamples the potentially contaminated P and U sets with replacement, which induces inconsistency across the resampled sets of U and P during training. The inconsistency between resamples increases with increased contamination of the original dataset; contamination levels below 50% of a given dataset are treated as mislabeled. As the size of a resample grows, its expected contamination converges to that of the original dataset being resampled, so with larger resamples the inconsistency in contamination is reduced. Varied contamination among training sets induces inconsistency among the facial images and creates a diverse set of facial image samples through resampling both the P and U instances. Variance reduction exploits this inconsistency in the facial-image model using resampling,
and a trade-off arises between the increased inconsistency from training on smaller resamples and the improved stability of base models trained on larger sets. Finally, better classification of criminal faces can be obtained with P and U training in RESVM by dividing into subclasses, as shown in fig. 6.
A training case in the SVM model is quantified by its dual weight α, and cases can be distinguished into three sets: (i) training cases correctly classified, lying outside the margin (α = 0); (ii) training cases correctly classified, lying on the margin (α ∈ [0, C]); and (iii) training cases incorrectly classified, lying inside the margin (α = C). A training case misclassified during training is bounded at the maximal α and is considered a bounded support vector. With bounded SVMs, mislabeled training instances can be partially suppressed during the training phase when learning from noisy labels; in the best case, a mislabeled training case is classified in accordance with its true label by the bounded-SVM training procedure.
Fig. 6. Classification into subclasses using RESVM with P and U training
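A toy stand-in for CWSVM training and RESVM-style voting is sketched below. It assumes a linear kernel and a simple subgradient solver rather than the solver used in [15]; only the class-weighted penalties (C_P > C_U) of Eq. (11) and the majority vote over bootstrap-resampled base models follow the text.

```python
import numpy as np

def cwsvm_train(X, y, c_pos=10.0, c_unl=1.0, lr=0.01, lam=0.01,
                epochs=200, seed=0):
    """Minimal class-weighted linear SVM (hinge loss, subgradient descent):
    positives (y = +1) carry a larger misclassification penalty C_P than
    unlabelled cases (y = -1, penalty C_U)."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    cost = np.where(y > 0, c_pos, c_unl)            # C_P for P, C_U for U
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w + b) < 1:            # inside margin: hinge update
                w = (1 - lr * lam) * w + lr * cost[i] * y[i] * X[i]
                b += lr * cost[i] * y[i]
            else:                                    # only the regularizer acts
                w = (1 - lr * lam) * w
    return w, b

def resvm_predict(models, x):
    """RESVM-style ensemble: majority vote of base models trained on
    bootstrap resamples of the P and U sets."""
    votes = [np.sign(x @ w + b) for w, b in models]
    return np.sign(sum(votes))
```

Each base model sees a different bootstrap of the contaminated data, and the vote averages out their disagreement, which is the variance-reduction argument made above.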
2.3 Hypothesis Testing
Consider the pixel classes of two images (one main image and one ground-truth image) with means µ1 and µ2. The test point for the image is chosen as x0, with (x11, x12, x13, …, x1n1) and (x21, x22, x23, …, x2n2) as the training data for classes 1 and 2, respectively. The hypothesis between classes 1 and 2 of images 1 and 2, respectively, is chosen to perform the pixel-level keypoint hypothesis test. The null hypothesis is that the image classes have equal means, with identical distributions between classes 1 and 2. The t-test is appropriate for the null (equal-mean) hypothesis, and the Wilcoxon rank-sum test is used to assess the alternate hypothesis. This research considers two tests, defined as follows:
1. Hypothesis Test 1 (H1): place the test point over the training data of class 1 of image 1 to test the null hypothesis H0.
2. Hypothesis Test 2 (H2): place the test point over the training data of class 2 of image 2 to test against class 1.
The p-values for H1 and H2 are denoted PV1(x0) and PV2(x0), and the prior probabilities of the classes with training data are denoted p1 and p2. A small PV1(x0) together with a large PV2(x0) maintains the difference between the classes when the observation is made on class 1; when the small and large values occur the other way around for class 2, the boundary between the classes becomes blurred.
The relative test probability calculated at the test point x0 is:

PV1(x0) / ( PV1(x0) + PV2(x0) )        (12)

This is the probability that x0 belongs to class 1; the probability that x0 does not belong to class 1 is defined as:
9. Hybrid Technique To Identity The Faces Using Key Point Hypothesis Prediction Algorithm (Khpa)
DOI: 10.9790/2834-11124454 www.iosrjournals.org 52 | Page
1 − PV1(x0) / ( PV1(x0) + PV2(x0) )        (13)
This method classifies x0 into class 1 when PV2(x0)·p1 < PV1(x0)·p2, and into class 2 when PV1(x0)·p2 < PV2(x0)·p1. The training data is chosen from the classes of image 1, the ground-truth image, with subclasses. The subclasses refer to classes within the facial regions representing the eyes (subclass 1), nose (subclass 2), ears (subclass 3), chin (subclass 4), cheeks (subclass 5), etc. The image is fixed at a standard size of 512 × 512, and this training data is chosen as the null-hypothesis data. This is done to check the accurate relevance of keypoints and test points in the class-1 image. When the accuracy of the resultant keypoints using RESVM and the test points is greater than the threshold accuracy value (TAV), image 1 is set as the reference image for finding the faces in image 2. Then, with the keypoints and test points, the image with multiple faces is identified using A-SIFT and RESVM. The accuracy of the resultant points with multiple subclasses in image 2 is tested against the reference points of multiple subclasses in the reference image using the t-test. When the accuracy is greater than the TAV, the keypoints are extracted and matched with the image from the database.
The t-test statistic for class 1, with the subclasses represented by n1 and n2 points, is:

T1 = [ x0 − (x11 + x12 + … + x1n1 + x21 + x22 + … + x2n2) / (n1 + n2) ] / sqrt( sd1^2 / n1 + σ2^2 / n2 )        (14)
where σ2^2 is the variance of class 2, and the standard deviation for class 1 is given by:

sd1^2 = Σ_{i=1}^{n1} (x1i − x̄1)^2 / (n1 − 1) ,   with   x̄1 = (1/n1) Σ_{i=1}^{n1} x1i        (15)
The t-test statistic for class 2 is:

T2 = [ x0 − (x21 + x22 + … + x2n2 + x11 + x12 + … + x1n1) / (n1 + n2) ] / sqrt( sd2^2 / n2 + σ1^2 / n1 )        (16)
where σ1^2 is the variance of class 1, and the standard deviation for class 2 is given by:

sd2^2 = Σ_{i=1}^{n2} (x2i − x̄2)^2 / (n2 − 1) ,   with   x̄2 = (1/n2) Σ_{i=1}^{n2} x2i        (17)
Thus, for a given threshold accuracy value (TAV), the conditions are written as follows:
1. When max( PV1(x0), PV2(x0) ) ≥ TAV, the test p-value exceeds the TAV; x0 then represents class 1 if PV2(x0)·p1 < PV1(x0)·p2, and class 2 if PV1(x0)·p2 < PV2(x0)·p1.
2. When max( PV1(x0), PV2(x0) ) < TAV, the test p-value falls below the TAV; the assignment of x0 under the same rule is not accepted, and the image is resubmitted for reprocessing.
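The decision rule above can be sketched as follows, using a t statistic in the spirit of Eqs. (14)-(15); treating PV1, PV2 and the priors as given numbers is a simplification for illustration.

```python
import numpy as np

def t_statistic(x0, class_a, class_b):
    """Compares the test point x0 against the pooled mean of both training
    samples, scaled by the standard deviation of the reference class,
    following the reconstruction of Eqs. (14)-(15)."""
    pooled_mean = (np.sum(class_a) + np.sum(class_b)) / (len(class_a) + len(class_b))
    sd = np.std(class_a, ddof=1)              # reference-class standard deviation
    return (x0 - pooled_mean) / (sd * np.sqrt(1 / len(class_a) + 1 / len(class_b)))

def classify(pv1, pv2, p1, p2, tav):
    """TAV decision rule: accept only when max(PV1, PV2) >= TAV, then
    assign x0 to class 1 when PV2*p1 < PV1*p2 and to class 2 otherwise;
    below the TAV the image is resubmitted (None)."""
    if max(pv1, pv2) < tav:
        return None                           # below TAV: reprocess the image
    return 1 if pv2 * p1 < pv1 * p2 else 2
```

A test point near the reference class yields a small |T| (consistent with the null hypothesis), while a distant point yields a large one.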
Results for Hypothesis Testing
Initially, the test result is obtained over SIFT and compared with A-SIFT. The image's A-SIFT keypoints are then classified with RESVM to remove the noisy keypoints, yielding the authorized keypoints, or facial access points, from the input faces. This is first done over a test or reference image for the initial logging into the system. The input photo of the criminal face is then taken, and the A-SIFT and RESVM process is applied over it to find the accurate keypoints. Hypothesis testing is then used to compare the image with the reference image, as shown in fig. 8. Finally, the image with the higher threshold value is selected and displayed as the output image.
Fig. 8. Testing of Hypothesis with reference image
In the first test, the same reference image is given as the input image to detect the face. The keypoints are refined using the RESVM cases, which further refine the keypoints and make the detection simpler, as shown in fig. 8. The keypoints in fig. 8 are then matched with the keypoints from fig. 7, and the suitable areas are divided into subclasses. Each subclass is then divided and measured, finally retrieving the original face as shown in fig. 9.
Fig. 9. Detected face with the proposed technique
2.4 Accuracy Detection
The TAV is obtained by cross-validation with the training data, choosing a value in the range (0, 1) that makes the cross-validation error rate on the training data low and gives the best performance. The dominant reason for fixing a threshold is that it helps find the best face from the database using hypothesis testing. Here, significance levels of 0.01 and 0.05 declare the significant solution for the desired input training data: at a significance of 0.05 the risk of error is higher, and vice versa. A value below 0.01, such as 0.001, yields fewer test results and does not lead to better significance; the necessary improvement in the result is also hindered at the 0.001 level. So the significance is finally maintained in the 0.01–0.05 range to find the relative face obtained from the database. To assess the accuracy of the system, several images were tested, and significance within the 0.01–0.05 range was found: images tested against the same reference images produced results within this range, while reference images that differed slightly produced a significance higher than this range. Finally, the efficacy of the proposed system is demonstrated in terms of A-SIFT, RESVM and hypothesis testing.
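The TAV selection by cross-validation could look like the following grid search; the per-image scores and binary labels are hypothetical inputs, since the paper does not specify the exact protocol.

```python
import numpy as np

def choose_tav(scores, labels, candidates=None, folds=5):
    """Try candidate thresholds in (0, 1) and keep the one with the lowest
    cross-validated error rate: scores are per-image accuracy values and
    labels mark which retrievals were actually correct."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    if candidates is None:
        candidates = np.linspace(0.05, 0.95, 19)
    idx = np.arange(len(scores))
    best_tav, best_err = None, np.inf
    for tav in candidates:
        fold_errs = []
        for f in range(folds):
            val = idx[f::folds]                          # held-out fold
            pred = (scores[val] >= tav).astype(int)      # accept when score >= TAV
            fold_errs.append(np.mean(pred != labels[val]))
        err = np.mean(fold_errs)
        if err < best_err:                               # keep lowest-error threshold
            best_tav, best_err = tav, err
    return best_tav
```

On data where the true accept/reject boundary is known, the search recovers a threshold near that boundary.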
III. Conclusions
The proposed keypoint hypothesis technique proved effective in detecting faces. The use of different techniques at each stage results in the effective removal of redundant keypoints, making detection simpler. To achieve this, we used Anisotropic SIFT, modifying SIFT at each stage to make A-SIFT suitable for facial images. This is followed by RESVM, which enables P and U learning over the keypoint facial image and segregates the subclasses over facial regions by grouping facial access points over particular regions. This helps the later stages effectively compare the trained or reference image with a suitable database image. The comparison is done using hypothesis testing with two hypotheses, which makes the accuracy detection more effective. Depending on the TAV results, the process either continues or ends. Thus the effectiveness of the system is demonstrated for facial images. The proposed system could be applied in other environments for detecting objects, and comparison with other facial-recognition techniques would enhance the system further.
References
[1] Deng, Houtao, et al. "A time series forest for classification and feature extraction." Information Sciences 239 (2013): 142-153.
[2] Tao, Dacheng, et al. "General tensor discriminant analysis and Gabor features for gait recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence 29.10 (2007): 1700-1715.
[3] del Rivero, José Antonio Sánchez, et al. "Feature selection for classification of animal feed ingredients from near infrared microscopy spectra." Information Sciences 241 (2013): 58-69.
[4] Tao, Dapeng, et al. "Similar handwritten Chinese character recognition using discriminative locality alignment manifold learning." Document Analysis and Recognition (ICDAR), 2011 International Conference on. IEEE, 2011.
[5] Huang, Qing-Hua. "Discovery of time-inconsecutive co-movement patterns of foreign currencies using an evolutionary biclustering method." Applied Mathematics and Computation 218.8 (2011): 4353-4364.
[6] Wang, Yu, et al. "License plate recognition based on SIFT feature." Optik - International Journal for Light and Electron Optics 126.21 (2015): 2895-2901.
[7] Wang, Shen, Chen Cui, and Xiamu Niu. "Watermarking for DIBR 3D images based on SIFT feature points." Measurement 48 (2014): 54-62.
[8] Wu, Xiangqian, Qiushi Zhao, and Wei Bu. "A SIFT-based contactless palmprint verification approach using iterative RANSAC and local palmprint descriptors." Pattern Recognition 47.10 (2014): 3314-3326.
[9] Chen, Jiansheng, and Yiu-Sang Moon. "Using SIFT features in palmprint authentication." Pattern Recognition, 2008. ICPR 2008. 19th International Conference on. IEEE, 2008.
[10] Zhao, Qiushi, Wei Bu, and Xiangqian Wu. "SIFT-based image alignment for contactless palmprint verification." Biometrics (ICB), 2013 International Conference on. IEEE, 2013.
[11] Kong, Adams Wai-Kin, and David Zhang. "Competitive coding scheme for palmprint verification." Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on. Vol. 1. IEEE, 2004.
[12] Li, Yanshan, et al. "GA-SIFT: A new scale invariant feature transform for multispectral image using geometric algebra." Information Sciences 281 (2014): 559-572.
[13] Abdel-Hakim, Alaa E., and Aly Farag. "CSIFT: A SIFT descriptor with color invariant characteristics." Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. Vol. 2. IEEE, 2006.
[14] Li, Canlin, and Lizhuang Ma. "A new framework for feature descriptor based on SIFT." Pattern Recognition Letters 30.5 (2009): 544-557.
[15] Claesen, Marc, et al. "A robust ensemble approach to learn from positive and unlabeled data using SVM base models." Neurocomputing 160 (2015): 73-84.
[16] Yang, Xiao-Min, et al. "Image feature extraction and matching technology." Optics and Precision Engineering 9 (2009): 033.
[17] Gao, Xiaorong, et al. "Vehicle bottom anomaly detection algorithm based on SIFT." Optik - International Journal for Light and Electron Optics 126.23 (2015): 3562-3566.
[18] Wang, Yingmei, et al. "Adaptive filtering with self-similarity for low-dose CT imaging." Optik - International Journal for Light and Electron Optics 126.24 (2015): 4949-4953.
[19] Xu, Jiangtao, et al. "An improved anisotropic diffusion filter with semi-adaptive threshold for edge preservation." Signal Processing 119 (2016): 80-91.
[20] Zhang, Wei-Chuan, and Peng-Lang Shui. "Contour-based corner detection via angle difference of principal directions of anisotropic Gaussian directional derivatives." Pattern Recognition 48.9 (2015): 2785-2797.