Article in International Journal of Interactive Mobile Technologies (iJIM) · May 2022
DOI: 10.3991/ijim.v16i10.30051


Paper—Human Gender and Age Detection Based on Attributes of Face

Human Gender and Age Detection Based on Attributes of Face
https://doi.org/10.3991/ijim.v16i10.30051

Shaimaa Hameed Shaker1(*), Farah Qais Al-Khalidi2


1 Department of Computer Sciences, University of Technology, Baghdad, Iraq
2 Department of Computer Sciences, Mustansiriyah University, Baghdad, Iraq
Shaimaa.h.shaker@uotechnology.edu.iq

Abstract—The main target of this work is to detect the gender and age of a person with an accurate decision in efficient time, based on a number of facial outward attributes extracted using Linear Discriminant Analysis (LDA) to classify a person into a certain category according to his or her gender and age. The work deals with color facial images, using the Iterative Dichotomiser 3 (ID3) algorithm as a classifier to detect the age of a person after the gender has been detected. The Face and Gesture Recognition Research Network (FG-NET) aging dataset was used. All facial images in the dataset were categorized into two gender categories using k-means, followed by dividing all samples according to the age classes belonging to each gender category. This division process enabled a quick and accurate decision. The results show that the accuracy of the proposal was 90.93% and the F-measure was 89.4.

Keywords—facial image, features extraction, human age and gender, k-means, LDA, ID3

1 Introduction
Gender and age are human identity attributes that play a main role in social communication [1]. A detection system combines two phases, gender detection and age detection, and is structured in three parts: face detection, gender estimation, and age estimation. Face detection is used to localize the faces in an image; it is quite challenging due to several factors such as environment, lighting, movement, orientation, and facial expressions, which lead to variations in color, shadows, luminance, and contours of images [2, 3]. In the real world, some males and females may look like the other gender, which causes errors, and some people may look younger or older than their real age, which leads to differences between apparent age and real age [4–6]. Various attributes can be identified from the color image of a human face, such as hair on the upper lip, gender, hair on the chin, age, scars, hair, height, skin color, weight, glasses, tattoo marks, and other facial attributes [7–10]. Required information can be extracted from these attributes and compared with the patterns stored in a database to determine an identity [11, 12]. There are some difficulties in computer-based facial gender and age estimation [13]:

176 http://www.i-jim.org

A. The speed of aging is a result of different health conditions and the environment.
B. Different forms of aging emerge at different age levels.
C. It is difficult to search for and collect historic images taken long ago.
D. Some females have a propensity to make their faces appear younger.

To distinguish the face in an image, key points in human faces must be detected and extracted [14, 15]. These key points are called landmarks and include the eyes, nose, and mouth, as shown in Figure 1 [16]. Muhammad Sajid et al. [17], in 2019, proved the importance of exact similarity aging in efficient age estimation; the findings of that work depended on two large datasets. Fatma S. Abousaleh et al. [18], in 2016, proposed a comparative deep learning framework named CCRCNN; the proposal first compares the input face image with known face ages considered as references, producing a set of hints about whether the input face is younger or older than each reference, and an estimation stage later combines the hints to approximate the person's age. Sudip Mandal et al. [19], in 2017, proposed an automatic age estimation system from facial images using wrinkle features and a neural network; the system was implemented in MATLAB, and only three age groups were considered: child, young, and old. Prajakta A. Melange and G. S. Sable [8], in 2018, introduced a method to predict the sex and age of persons depending on some facial characteristics; a preprocessing phase was applied, then some geometric features were selected for classification, depending on the face angle, the left-eye-to-right-eye distance, the eye-to-nose distance, the eye-to-chin distance, and the eye-to-lip distance. P. Bose [9], in 2021, introduced a method of feature extraction based on the ratio of the size of the face to the size of the eye to identify a human.

The main objective of this work is to find an automatic method to rapidly and accurately estimate the gender and age of a human, using some information and attributes of the color face image, based on the Iterative Dichotomiser 3 (ID3) algorithm. In addition to the introduction, the paper consists of six other sections. Section two describes the features extracted from images, while section three gives details about the dataset used. Section four, with all its subsections, explains the required steps of the proposed method design; the results are then discussed in section five. Section six expresses the conclusions and some ideas for future work.

2 Features of facial image detection

Classification of people depends on face images that contain signs and characteristics used to classify those persons, so face aging prediction is used in many applications in digital entertainment. Features of facial image detection consist of the facial feature regions: nose detection, eyebrows, lip detection, mustache, beard, left/right eye position, and skin wrinkle analysis. Eyebrows, for example, also help in gender recognition: female eyebrows are longer, thinner, and curly at the ends, while male eyebrows are less managed and thicker. Also, the male face has a more protuberant nose, brow, and chin/jaw than the female face. Gender and age detection are estimated according to the number of these facial geometric features, called the attributes [20–22].

iJIM ‒ Vol. 16, No. 10, 2022 177



3 FG-NET dataset

The FG-NET aging database is a publicly accessible aging database that has been broadly utilized for evaluation. The database consists of 1,002 color images of 82 different subjects: 607 color images of males and 395 color images of females. Most subjects have from 10 to 13 images of themselves [23–26]. Figure 2 shows some images from the FG-NET database. These images were split into a training set containing 720 images divided into 14 classes, as shown in Table 1. Results demonstrate that some classes have the same number of features after extracting the features from these 14 classes; they were therefore combined into multi-classes according to the sum of feature attributes they contain, as shown in Table 2 (explained in section 4.1).

Fig. 1. Facial landmarks [6]

Fig. 2. Examples of some images from the FG-NET Aging database


Table 1. Catalog of classes

Class#      Gender (Male = 0, Female = 1)   Range of Age
One         0                               3–7
Two         0                               8–13
Three       0                               14–19
Four        0                               20–25
Five        0                               26–30
Six         0                               31–40
Seven       0                               41–50
Eight       1                               3–7
Nine        1                               8–13
Ten         1                               14–19
Eleven      1                               20–25
Twelve      1                               26–30
Thirteen    1                               31–40
Fourteen    1                               41–50

Table 2. The proposed multi-classes upon facial features

Class #   Range                   Description
One       (3–7)(26–30)            Male
Two       (3–7)(26–30)            Female
Three     (8–13)(14–19)(20–25)    Male
Four      (8–13)(14–19)(20–25)    Female
Five      (31–40)(41–50)          Male
Six       (31–40)(41–50)          Female

4 Proposed method design

The proposed method applies decision tree mechanisms to intelligent gender and age estimation from facial images, using the ID3 classifier on the FG-NET dataset after extracting the features with the LDA algorithm. The contents of the FG-NET dataset are categorized into two categories by a k-means classifier, one for the 607 male images and the other for the 395 female images, as a gender detection process using the attributes extracted from each face image in the dataset. The proposed method is shown in Figure 3.

4.1 Preprocessing phase

The first phase of the proposed system is the preprocessing phase. This phase includes six steps: image capturing; converting the image into grayscale; removing noise from it using median filtering; detecting the face in the color image using the Viola-Jones algorithm, which consists of four levels (Haar-like features, integral image, AdaBoost training, and cascade classifier), as shown in Figure 3; normalization, which can be done using contrast stretching; and finally clipping to delete undesirable outer parts of the color image, such as white space in the background around the face. In the proposal, the dataset was categorized into six categories, as shown in Table 2. These categories were adopted after many experiments, where it was found that the chosen age ranges for females or males within each category have the same number of attributes and do not constitute a distinction, so they were considered within the same category. To estimate the human gender and age, there are two phases: the training set was 80% of the dataset and the remaining 20% was the testing set.
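Two of the preprocessing steps above, noise removal by median filtering and normalization by contrast stretching, can be sketched in pure NumPy as follows. This is an illustrative sketch, not the authors' code; the function names are ours, and the face detection step would, in practice, be done with a Viola-Jones implementation from a library such as OpenCV.

```python
import numpy as np

def median_filter_3x3(img):
    # Noise-removal step: 3x3 median filter over a grayscale image,
    # with edge padding so the output keeps the input shape.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0).astype(img.dtype)

def contrast_stretch(img):
    # Normalization step: stretch intensities to the full 0-255 range.
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return np.zeros_like(img)
    return ((img.astype(np.float32) - lo) / (hi - lo) * 255).astype(np.uint8)
```

A lone bright pixel in an otherwise uniform region is removed by the median filter, and an image occupying only a narrow intensity band is expanded to span 0–255.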

4.2 Phasing of data mining-procedure

The second phase of this work is the data mining phase, which includes the feature extraction process and the classification process.

Dimension reduction as the feature extraction process. Feature extraction decreases the dimension by extracting the most significant information from the entire data [27], using Fisher's faces, which depend on the mechanism of Linear Discriminant Analysis (LDA) [28]. This is an important step used to decrease the dimension of the images in the dataset while keeping the classes well separable, to avoid the problem of overfitting and to decrease the total computational cost. The steps of LDA are as follows:
1st step: The image, a 2D matrix of size n×m, is converted into a column vector of size nm×1.

2nd step: Calculate the d-dimensional mean vector of each class j in the dataset using eq (1):

$$\hat{I}_j = \frac{1}{N_j} \sum_{i=1}^{N_j} X_i \tag{1}$$

3rd step: Calculate the scatter matrices, which comprise three parts: eq (2) calculates the within-class scatter matrix, eq (3) the class-covariance matrices, and eq (4) the between-class scatter matrix:

$$S_w = \sum_{j=1}^{c} \sum_{i=1}^{N_j} (I_i^j - \hat{I}_j)(I_i^j - \hat{I}_j)^T \tag{2}$$

where $I_i^j$ is the ith sample of class j, $\hat{I}_j$ is the mean value of class j, c is the number of classes, and $N_j$ is the number of samples in class j.

$$\Sigma_j = \frac{1}{N_j} \sum_{i=1}^{N_j} (X_i - \hat{I}_j)(X_i - \hat{I}_j)^T \tag{3}$$

$$S_b = \sum_{j=1}^{c} (\hat{I}_j - \hat{I})(\hat{I}_j - \hat{I})^T \tag{4}$$

where $\hat{I}$ represents the mean of all classes and $\hat{I}_j$ is the mean value of class j.


4th step: Calculate the eigenvectors and eigenvalues of the scatter matrices using eq (5):

$$AV = \lambda V \tag{5}$$

where $A = S_w^{-1} S_b$, V is an eigenvector, and $\lambda$ is an eigenvalue.

5th step: Sort the eigenvectors by decreasing eigenvalue and select the n eigenvectors with the largest eigenvalues.

6th step: Form the projection matrix W from the selected eigenvectors and use it to transform the samples into the new subspace; W is chosen to maximize eq (6):

$$\arg\max_{w} \frac{w^T S_b w}{w^T S_w w} \tag{6}$$
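Steps 2–6 can be sketched in NumPy as follows. This is a minimal illustration under our own variable names, not the authors' implementation; it computes the scatter matrices and projection on already-vectorized samples.

```python
import numpy as np

def lda_projection(X, y, n_components):
    # X: samples as rows (each a flattened image vector), y: class labels.
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)          # mean over all samples
    d = X.shape[1]
    Sw = np.zeros((d, d))                  # within-class scatter, eq (2)
    Sb = np.zeros((d, d))                  # between-class scatter, eq (4)
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)           # class mean, eq (1)
        centered = Xc - mean_c
        Sw += centered.T @ centered
        diff = (mean_c - overall_mean).reshape(-1, 1)
        Sb += diff @ diff.T                # some formulations weight this by N_j
    # Steps 4-5: eigen-decomposition of Sw^-1 Sb (eq 5), sorted by eigenvalue.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real  # step 6: projection matrix
    return X @ W
```

For two well-separated classes, projecting onto the top eigenvector keeps the classes clearly apart in the reduced space.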
Classification process. The other part of data mining is classification. The proposal uses a decision tree classification approach to build a tree as a model to predict the class of the face image. The ID3 classifier reads the features of the face extracted by the LDA algorithm and uses them for classification. To construct a decision tree, ID3 relies on the Entropy calculated by eq (7) and the Information Gain calculated by eq (8):

$$Entropy(S) = -\sum_{i=1}^{c} P_i \log_2 P_i \tag{7}$$

where c takes different values and $P_i$ is the probability of S belonging to class i.

$$Gain(S, A) = Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} Entropy(S_v) \tag{8}$$

where A is an attribute with a set of possible values v and $S_v$ is the subset of S for which A takes the value v. To implement the decision tree algorithm, the entropy of each target was calculated, the dataset was divided by the distinct attributes, and the entropy value of each division was computed and accumulated to obtain the overall total entropy. The Information Gain value is computed as the difference between the resulting entropy value and the entropy value before the division. The attribute with the largest information gain was chosen as a decision node and the dataset was split along its branches; this procedure was repeated on every branch. A leaf node is generated when the entropy value is zero, while a node with an entropy value greater than zero is a non-leaf node and is split further. All non-leaf branches are considered by the ID3 algorithm, which executes recursively until all data is classified. To detect human gender and age, the image is classified into one of six classes, as shown in Table 2. The number of features selected from the image is checked against the feature number of class one; if it matches, the test image features are matched against the rules of class one. The same procedure is done for all six classes; on a true match, the gender and age are estimated. Figure 4 summarizes this procedure, where Ti represents the test image and nf represents the number of features.
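The entropy and information-gain computations of eqs (7) and (8) can be sketched as follows; the helper names are ours, not the paper's.

```python
import math
from collections import Counter

def entropy(labels):
    # Eq (7): Entropy(S) = -sum_i P_i * log2(P_i).
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    # Eq (8): entropy before the split minus the weighted entropy of each
    # subset S_v obtained by splitting on one attribute's values.
    n = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(label)
    return entropy(labels) - sum(len(s) / n * entropy(s) for s in subsets.values())
```

ID3 picks, at each node, the attribute with the largest gain; a subset whose entropy is zero becomes a leaf, exactly as described above.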


Fig. 3. Steps of the proposed method

5 Results and discussion

The proposed system has three phases: face image detection, the data mining model, and the gender and age detection model. The normalization step was implemented using contrast stretching on the images after the face detection step of preprocessing the input images. The next step is feature extraction from the face image using the LDA algorithm. The final step is classification based on ID3, which deals with the attributes of the face image found in the previous step. It is worth mentioning that 120 additional images of human faces that the classifier was trained on were added to the test dataset, to obtain a test dataset containing both known and unknown face images.

Table 3 describes the correctly and incorrectly classified percentages of the six class categories and the total correct rate of gender detection. Class one has 7 attributes, with 195 images correctly classified and 20 images incorrectly classified. Class three has 5 attributes, with 377 images correctly classified and 32 incorrectly classified. Class five has 9 attributes, with 68 images correctly classified and 28 incorrectly classified. Class two has 8 attributes, with 192 images correctly classified and 23 incorrectly classified. Class four has 6 attributes, with 371 images correctly classified and 38 incorrectly classified. The last one, class six, has 11 attributes, with 66 images correctly classified and 30 incorrectly classified. So the total correctly classified percentages are 85.9 and 85.6 for males and females respectively, while the total misclassified percentages are 14.28 and 14.29 for males and females respectively.

Table 4 reviews the accuracy of the performance evaluation measures of the ID3 classification step on the items used as the training set, where the total number of these items is 720. The criteria Mean Absolute Error (M.A.E.) and Root Mean Square Error (R.M.S.E.) are measures of the error rate in prediction [29, 30]; R.M.S.E. is more sensitive to extreme values than M.A.E. [31]. A small value for these criteria means that the estimated model is close to the real values; thus an M.A.E. of 0.7628 and an R.M.S.E. of 14.2814 mean the error rate is very low.

In Table 5, based on LDA and ID3, the human age of classes 1 and 2, with a sample size of 450, has a total correct rate equal to 93.3%, while the 210 males and 330 females of classes 3 and 4 have a total correct rate equal to 93.5%, and the human age of classes 5 and 6, with a sample size of 100, has a total correct rate equal to 86%. That means the total correct rate over the whole sample of 1090 is 90.93%. Figure 5 shows the diagram of age detection based on the samples of Table 5.

When comparing the results of the proposed method for classifying human gender as male or female with other existing methods, like PNN and SVM1, on the gender test using the 20% of items held out for testing, the proposed method introduces an acceptable rate of correct classification corresponding to the PNN and SVM1 [32] methods, as shown in Table 6. So the proposal's achievement is very close to the other existing methods.

In Table 7, the number of all instances was 1090 images. For the male gender, the highest precision is for class 2, because its FP is the biggest relative to the number of attributes of this class, and the class sample size is bigger than the others. Class 1 has the highest recall; although class 3 has the same FN as class 1, the number of attributes of class 3 is bigger than that of class 1, and the sample size of class 3 is smaller than that of class 1. Also, notice that the highest F-measure is for class 2, because the F-measure provides a single score that balances the concerns of both precision and recall in one number. For the female gender, the highest precision is for class 4, because of the balance of the number of attributes and an FP that is large enough relative to the sample size, while class 5 has the highest recall; although class 6 has a bigger FN, the number of attributes of class 6 is bigger than that of class 5. So the highest F-measure is for class 4, because of the balance between the number of facial attributes and the sample size. Figure 6 shows the accuracy of the proposed detection method depending on the number of attributes.

Table 8 displays the accuracy results of the LDA and ID3 classifier. For the six classes, the precision, recall, and F-measure were calculated. The average accuracy in the three classes of the male gender gave a precision of 83.066, a recall of 93.8, and an F-measure of 88.49, while the average accuracy in the three classes of the female gender gave a precision of 87.8, a recall of 93.33, and an F-measure of 90.4. The obtained results of the proposal were compared with other classifiers.


Fig. 4. Flow chart of gender and age estimation


Table 3. Age and gender results of the LDA and ID3 classifier

Class No   Gender   Class Range            Instances   Attributes   Correctly Classified   Correct %    Incorrectly Classified   Incorrect %
One        Male     (3–7)(26–30)           215         7            195                    90.6977%     20                       9.3023%
Three      Male     (8–13)(14–19)(20–25)   409         5            377                    92.1760%     32                       7.8240%
Five       Male     (31–40)(41–50)         96          9            72                     75.0000%     24                       25.0000%
Two        Female   (3–7)(26–30)           215         8            192                    89.3023%     23                       10.6977%
Four       Female   (8–13)(14–19)(20–25)   409         6            371                    90.7090%     38                       9.2900%
Six        Female   (31–40)(41–50)         96          11           74                     77.0835%     22                       22.9160%
Total M                                    720                      644                    85.9570%     78                       14.2800%
Total F                                    720                      637                    85.6000%     83                       14.2960%

Table 4. Accuracy of the proposed method

Accuracy measure                                                                  Result
$MAE = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|$, where n = 1442               0.0198
$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$, where n = 1442     0.7525
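The two error measures of Table 4 follow their standard definitions; a small self-contained sketch (function names are ours):

```python
import math

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the prediction errors.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root Mean Square Error: errors are squared before averaging, so large
    # deviations are penalized more heavily than in MAE.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```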

Table 5. The total correct rate of human age using LDA and ID3

Age                    Gender   Sample Size   Correctly Detected   Correct Rate   Total Correct Rate
(3–7)(26–30)           Male     200           186                  93.00%         93.30%
                       Female   250           234                  93.60%
(8–13)(14–19)(20–25)   Male     210           189                  90.00%         93.50%
                       Female   330           322                  97.00%
(31–40)(41–50)         Male     59            52                   82.00%         86.00%
                       Female   41            37                   90.00%
Total                  M+F      1090          1020                 90.93%         90.93%


[Figure: bar chart of the sample size and correct rate for classes 1–6]

Fig. 5. The correct rate of human age

Table 6. Result of the gender test on the 20% of dataset items used for testing

                         PNN               SVM1              Proposed Method
Gender Type              Male    Female    Male    Female    Male     Female
Male                     89.75   10.25     95.08   4.29      89.73    10.7
Female                   11.88   88.12     4.41    95.59     12.77    87.23
Correctly classified %       88.935            95.335             88.48

Table 7. Accuracy using the ID3 classifier

Class No.   Gender   Class-Age Range        #Attribute   TP    FP   TN    FN   Precision   Recall   F-Measure
1           Male     (0–7)(26–30)           7            122   15   73    5    0.890       0.960    0.9200
2           Male     (8–13)(14–19)(20–25)   5            244   28   143   4    0.897       0.938    0.9380
3           Male     (31–40)(41–50)         9            55    23   13    5    0.705       0.916    0.7967
4           Female   (0–7)(26–30)           8            173   18   19    5    0.905       0.971    0.9368
5           Female   (8–13)(14–19)(20–25)   6            283   31   88    7    0.901       0.975    0.9365
6           Female   (31–40)(41–50)         11           53    7    13    9    0.828       0.854    0.8407
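The per-class scores in Table 7 follow the standard definitions of precision, recall, and F-measure over the confusion counts; as a sketch (the helper name is ours):

```python
def prf_from_confusion(tp, fp, fn):
    # Precision: fraction of positive predictions that are correct.
    precision = tp / (tp + fp)
    # Recall: fraction of actual positives that are detected.
    recall = tp / (tp + fn)
    # F-measure: harmonic mean balancing precision and recall.
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```

For class 1 (TP = 122, FP = 15, FN = 5) this gives approximately 0.890, 0.961, and 0.924, consistent with the printed row.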


[Figure: chart of F-measure versus number of attributes for classes 1–6]

Fig. 6. The relation of detection and no. of attributes

Table 8. Results of various classifiers

Accuracy      LDA+ID3   ID3 Classifier   Multilayer Perceptron   Hoeffding Tree   J-48      SMO
Precision     85.0794   85.62            89.3113                 73.9333          91.0000   92.8667
Recall        86.569    93.56            87.6004                 78.0000          90.9000   92.8333
F-measure     80        89.445           87.7374                 73.1667          90.2333   92.4333

6 Acknowledgments
The authors would like to thank the University of Technology – Iraq (www.uotechnology.edu.iq) and Mustansiriyah University – Iraq (www.uomustansiriyah.edu.iq) for supporting the present work.

7 References
[1] P. Rodríguez, G. Cucurull, J. M. Gonfaus, F. X. Roca, and J. Gonzalez, “Age and gender rec-
ognition in the wild with deep attention,” Pattern Recognition, vol. 72, pp. 563–571, 2017.
https://doi.org/10.1016/j.patcog.2017.06.028
[2] A. Kumar, A. Kaur, and M. Kumar, “Face detection techniques: a review,” Artificial Intelli-
gence Review, vol. 52, no. 2, pp. 927–948, 2019. https://doi.org/10.1007/s10462-018-9650-2
[3] F. Q. Abdulalla and S. H. Shaker, “A survey of human face detection methods,” Journal of Al-Qadisiyah for Computer Science and Mathematics, vol. 10, no. 2, pp. 108–117, 2018. https://doi.org/10.29304/jqcm.2018.10.2.392


[4] W. Little, R. McGivern, and N. Kerins, Introduction to sociology-2nd Canadian edition. BC


Campus, 2016.
[5] A. D. Sokolova, A. S. Kharchevnikova, and A. V. Savchenko, “Organizing multimedia
data in video surveillance systems based on face verification with convolutional neural net-
works,” in International Conference on Analysis of Images, Social Networks and Texts,
2017, pp. 223–230: Springer. https://doi.org/10.1007/978-3-319-73013-4_20
[6] Y. Liu, F. Wei, J. Shao, L. Sheng, J. Yan, and X. Wang, “Exploring disentangled feature
representation beyond face identification,” in Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, 2018, pp. 2080–2089. https://doi.org/10.1109/
CVPR.2018.00222
[7] M. O. Sigo, M. Selvam, S. Venkateswar, and C. Kathiravan, “Application of ensemble
machine learning in the predictive data analytics of indian stock market,” CIFR Paper Forth-
coming, vol. 16, no. 2, 2020. https://doi.org/10.14704/WEB/V16I2/a195
[8] P. A. Melange and G. Sable, “Age group estimation and gender recognition using face fea-
tures,” Int. J. Eng. Sci, vol. 7, no. 7, pp. 1–7, 2018.
[9] P. Bose, “A proposed method for age detection of person based on the size of face to the size
of the eye of a face,” Turkish Journal of Computer Mathematics Education, vol. 12, no. 12,
pp. 1815–1818, 2021.
[10] R. ALairaji and H. Salim, “Abnormal behavior detection of students in the examina-
tion hall from surveillance videos,” in Advanced Computational Paradigms and Hybrid
Intelligent Computing, vol. 1373: Springer Singapore, 2022, pp. 113–125. https://doi.
org/10.1007/978-981-16-4369-9_12
[11] A. Bulat and G. Tzimiropoulos, “Binarized convolutional landmark localizers for human
pose estimation and face alignment with limited resources,” in Proceedings of the IEEE
International Conference on Computer Vision, 2017, pp. 3706–3714. https://doi.org/10.1109/
ICCV.2017.400
[12] A. Manju and P. Valarmathie, “Organizing multimedia big data using semantic based
video content extraction technique,” in 2015 International Conference on Soft-Com-
puting and Networks Security (ICSNS), 2015, pp. 1–4: IEEE. https://doi.org/10.1109/
ICSNS.2015.7292370
[13] R. Rothe, R. Timofte, and L. Van Gool, “Some like it hot-visual guidance for preference pre-
diction,” in Proceedings of the IEEE conference on computer vision and pattern recognition,
2016, pp. 5553–5561. https://doi.org/10.1109/CVPR.2016.599
[14] G. Geetha and K. M. Prasad, “An hybrid ensemble machine learning approach to predict
type 2 diabetes mellitus,” Webology, vol. 18, no. Special Issue on Information Retrieval and
Web Search, pp. 311–331, 2021. https://doi.org/10.14704/WEB/V18SI02/WEB18074
[15] Y. He, K. Cao, C. Li, and C. C. Loy, “Merge or not? learning to group faces via imitation
learning,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[16] A. Dirin, N. Delbiaggio, and J. Kauttonen, “Comparisons of facial recognition algorithms
through a case study application,” International Journal of Interactive Mobile Technologies
(iJIM), vol. 14, no. 14, pp. 121–133, 2020. https://doi.org/10.3991/ijim.v14i14.14997
[17] M. Sajid, N. Iqbal Ratyal, N. Ali, B. Zafar, S. H. Dar, M. T. Mahmood, and Y. B. Joo, “The
impact of asymmetric left and asymmetric right face images on accurate age estimation,” Math-
ematical Problems in Engineering, vol. 2019, 2019. https://doi.org/10.1155/2019/8041413
[18] F. S. Abousaleh, T. Lim, W.-H. Cheng, N.-H. Yu, M. A. Hossain, and M. F. Alhamid,
“A novel comparative deep learning framework for facial age estimation,” EURASIP Jour-
nal on Image Video Processing, vol. 2016, no. 1, pp. 1–13, 2016. https://doi.org/10.1186/
s13640-016-0151-4
[19] S. Mandal, C. Debnath, and L. Kumari, “Automated age prediction using wrinkles features
of facial images and neural network,” International Journal, vol. 12, 2017.


[20] S. Ghosh and S. K. Bandyopadhyay, “Gender classification and age detection based on
human facial features using multi-class SVM,” British Journal of Applied Science Technol-
ogy, vol. 10, no. 4, pp. 1–15, 2015. https://doi.org/10.9734/BJAST/2015/19284
[21] C. Huda, H. Tolle, and F. Utaminingrum, “Mobile-based driver sleepiness detection
using facial landmarks and analysis of EAR values,” International Journal of Interactive
Mobile Technologies (iJIM), vol. 14, no. 14, pp. 16–30, 2020. https://doi.org/10.3991/ijim.
v14i14.14105
[22] R. A. Azeez, M. K. Abdul-Hussein, M. S. Mahdi, and H. T. S. ALRikabi, “Design a system
for an approved video copyright over cloud based on biometric iris and random walk genera-
tor using watermark technique,” Periodicals of Engineering Natural Sciences, vol. 10, no. 1,
pp. 178–187, 2021. https://doi.org/10.21533/pen.v10i1.2577
[23] G. Ozbulak, Y. Aytar, and H. K. Ekenel, “How transferable are CNN-based features for age
and gender classification?,” in 2016 International Conference of the Biometrics Special Inter-
est Group (BIOSIG), 2016, pp. 1–6: IEEE. https://doi.org/10.1109/BIOSIG.2016.7736925
[24] G. Panis and A. Lanitis, “An overview of research activities in facial age estimation using the
fg-net aging database,” in European Conference on Computer Vision, 2014, pp. 737–750:
Springer. https://doi.org/10.1007/978-3-319-16181-5_56
[25] H. T. Salim and I. A. Aljazaery, “Encryption of color image based on dna strand and expo-
nential factor,” International journal of online and biomedical engineering(iJOE), vol. 18,
no. 3, 2022. https://doi.org/10.3991/ijoe.v18i03.28021
[26] H. TH and N. Alseelawi, “A novel method of multimodal medical image fusion based on
hybrid approach of NSCT and DTCWT,” International journal of online and biomedical
engineering, vol. 18, no. 3, 2022.
[27] F. Q. Al-Khalidi, S. H. Al-Kananee, and S. A. Hussain, “Monitoring the breathing rate in
the human thermal image based on detecting the region of interest,” Journal of Theoretical
Applied Information Technology, vol. 99, no. 8, 2021.
[28] A. Onan, H. Bulut, and S. Korukoglu, “An improved ant algorithm with LDA-based rep-
resentation for text document clustering,” Journal of Information Science, vol. 43, no. 2,
pp. 275–292, 2017. https://doi.org/10.1177/0165551516638784
[29] N. Nordin and N. M. Fauzi, “A web-based mobile attendance system with facial recognition
feature,” 2020. https://doi.org/10.3991/ijim.v14i05.13311
[30] H. Tauma, and H. Salim, “Enhanced data security of communication system using combined
encryption and steganography,” International Journal of Interactive Mobile Technologies,
vol. 15, no. 16, pp. 144–157, 2021. https://doi.org/10.3991/ijim.v15i16.24557
[31] R. Rothe, R. Timofte, and L. Van Gool, “Deep expectation of real and apparent age from a
single image without facial landmarks,” International Journal of Computer Vision, vol. 126,
no. 2, pp. 144–157, 2018. https://doi.org/10.1007/s11263-016-0940-3
[32] M. Sedaghi, “A comparative study of gender and age classification in speech signals,”
Iranian Journal of Electrical & Electronic Engineering, vol. 5, no. 1, 2009.

8 Authors

Dr. Shaimaa Hameed Shaker—computer sciences, pattern recognition, 2006, University of Technology – Iraq, Baghdad. She received the B.Sc. in computer science from the University of Technology – Iraq in 1991, graduated with the Master of Science in computer science (visual cryptography) in 1996 from the University of Technology – Iraq, and finished the Ph.D. degree in computer sciences (pattern recognition) in 2006. Her interests are in pattern recognition, cryptography and data security, information hiding, image processing, and bioinformatics. Currently, she is head of the network management department of the computer science college, University of Technology, where she is a lecturer. She can be contacted at email: Shaimaa.h.shaker@uotechnology.edu.iq

Dr. Farah Qais Al-Khalidi—computer sciences, image processing, 2012, University of Technology – Iraq, Baghdad. She has been in the Computer Science Department of Mustansiriyah University since January 2005, where she is an Assistant Professor in computer science. She received her Ph.D. from Sheffield Hallam University (January 2007 – January 2012); her field of study is information technology (computer science). She can be contacted at email: farahqaa@uomustansiriyah.edu.iq

Article submitted 2022-02-14. Resubmitted 2022-03-13. Final acceptance 2022-03-14. Final version
published as submitted by the authors.


