Adil Khan
Dr. Adil Khan (ORCID: http://orcid.org/0000-0003-2862-5718) received a C.T. from AIOU Islamabad, a B.Ed. from the University of Peshawar, a BS (Honors) in Computer Science from Edwards College Peshawar, and an M.S. in Computer Science from City University of Science and Information Technology, Peshawar, Pakistan. From 2014 to 2016 he was a Lecturer in the Higher Education Department, KPK, Pakistan, and from 2016 to 2019 he was a research scholar at the School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, PR China. He is currently a Lecturer at the Department of Computer Science, SZIC, University of Peshawar. He has published in top-tier academic journals and conferences, and his research interests include machine learning, game artificial intelligence (Game-AI), neural networks, real-time strategy games, first-person-shooter games, sandbox open-world games, computer vision, and image processing (breast cancer detection). He can be reached at adil.adil25@yahoo.com.
Address: Islamabad, Islamabad, Pakistan
Uploads
Determining whether two ear images belong to the same person is challenging due to variations in lighting, background, pose, scale, and occlusion. This paper presents an improved method for the unconstrained ear recognition problem based on local feature fusion, and further analyzes the performance and efficiency of discriminative local feature fusion for aligned and non-aligned ear images. First, local discriminative features such as LPQ, HOG, LBP, POEM, BSIF, and Gabor features are extracted from the ear images. Then, Discriminant Correlation Analysis (DCA) is applied for feature fusion and dimensionality reduction. Finally, a support vector machine (SVM) is used for classification. Experiments are conducted on popular ear databases: USTB I, USTB II, and IIT Delhi II. Furthermore, we report encouraging results on a difficult and challenging ear database, the annotated web ear (AWE) database, which was collected in the wild. The experimental results show the superiority of the proposed approach, which achieves high performance for non-aligned images (the AWE and USTB II datasets), while individual local features achieve promising recognition rates for aligned images (the USTB I and IIT Delhi II datasets).
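The pipeline described in the abstract (local texture features, feature fusion with dimensionality reduction, then an SVM classifier) can be sketched roughly as follows. This is an illustrative outline only, not the authors' implementation: it uses a basic hand-rolled LBP code and a crude whole-image gradient-orientation histogram in place of the full feature set (LPQ, POEM, BSIF, Gabor), PCA as a stand-in for DCA (which is not available in common libraries), and synthetic images in place of the ear databases.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def lbp_histogram(img, bins=256):
    """Basic 8-neighbour LBP: compare each pixel with its neighbours,
    pack the 8 comparison bits into a code, and histogram the codes."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()

def grad_orientation_histogram(img, bins=9):
    """Crude HOG-like descriptor: a magnitude-weighted histogram of
    unsigned gradient orientations over the whole image (no cells/blocks)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# Synthetic stand-in data: two "subjects", each a base texture plus noise.
rng = np.random.default_rng(0)
X, y = [], []
for subject in range(2):
    base = rng.integers(0, 256, size=(32, 32))
    for _ in range(10):
        img = np.clip(base + rng.normal(0, 20, (32, 32)), 0, 255)
        feat = np.concatenate([lbp_histogram(img.astype(np.uint8)),
                               grad_orientation_histogram(img)])
        X.append(feat)
        y.append(subject)
X, y = np.array(X), np.array(y)

# "Fusion" by concatenation + PCA (stand-in for DCA), then a linear SVM.
Xr = PCA(n_components=10).fit_transform(X)
clf = SVC(kernel="linear").fit(Xr[::2], y[::2])  # even samples: train
acc = clf.score(Xr[1::2], y[1::2])               # odd samples: test
```

In the paper's actual method, DCA additionally uses class-label information when projecting the fused features, which is what distinguishes it from an unsupervised reduction such as the PCA used here.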