- SZABIST (szabist.edu.pk), Computer Science, Graduate Student
- Universiti Kuala Lumpur, Malaysia Institute of Information Technology (MIIT), Department Member
- Image Features Extraction, VLSI CAD, Access Control, Handwritten Signature Verification, Electronic Computer Systems Engineering, Computer Vision, Digital Image Processing, Computer Science, Image Processing, Image Recognition (Computer Vision), Big Data Analytics, Object Tracking (Computer Vision), Document Image Analysis, Machine Learning, Handwriting Recognition (Computer Vision), Character Recognition, Opinion Mining, Big Data, Artificial Intelligence, Optical Character Recognition, Data Mining, Object Recognition (Computer Vision), Digital Signal Processing (Engineering), Gesture Recognition, Feature Extraction, Principal Component Analysis (PCA), and Digital Signal and Image Processing
- I am a hardworking researcher.
There has been a significant increase in the attention paid to resource management in smart grids, and several energy forecasting models have been published in the literature. It is well known that energy forecasting plays a crucial role in several smart grid applications, including demand-side management, optimal dispatch, and load shedding. A significant challenge in smart grid models is managing forecasts efficiently while ensuring the smallest feasible prediction error. Recurrent neural networks, a type of artificial neural network, are frequently used to forecast time series data. However, due to certain limitations of recurrent neural networks, such as vanishing gradients and lack of memory retention, sequential data can instead be modeled using convolutional networks, which have strong capabilities for solving complex problems. In this research, a temporal convolutional network is proposed to handle seasonal short-term energy forecasting. The proposed temporal convolutional network computes outputs in parallel, reducing computation time compared to recurrent neural networks. A further performance comparison with the traditional long short-term memory network in terms of MAD and sMAPE shows that the proposed model outperforms the recurrent neural network.
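The parallel computation attributed above to the temporal convolutional network comes from causal dilated convolutions: every output position depends only on current and past inputs and can be computed independently. A minimal NumPy sketch of that core operation (illustrative only, not the paper's architecture; the toy load series and kernel are assumptions):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation=1):
    """One causal dilated 1-D convolution, the building block of a TCN.
    Output at time t depends only on x[t], x[t-d], x[t-2d], ... so no
    future values leak in, and all output positions are independent."""
    k = len(kernel)
    pad = (k - 1) * dilation          # left-pad so output length == input length
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([
        sum(kernel[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# toy hourly load series; kernel [0.5, 0.5] averages each value with its predecessor
load = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
out = causal_dilated_conv(load, kernel=[0.5, 0.5], dilation=1)
```

Stacking such layers with growing dilation (1, 2, 4, ...) is what gives a TCN a long receptive field without recurrence.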
Research Interests:
Energy consumption prediction has always remained a concern for researchers because of the rapid growth of the human population and the number of customers joining smart grid networks for smart home facilities. Recently, the spread of COVID-19 has dramatically increased energy consumption in the residential sector. Hence, it is essential to produce energy according to residential customers' requirements, improve economic efficiency, and reduce production costs. Previously published papers in the literature have considered overall energy consumption prediction, making it difficult for production companies to produce energy according to customers' future demand. Using the proposed study, production companies can produce energy accurately according to their customers' needs by forecasting future energy consumption demands. Scientists and researchers are trying to minimize energy consumption by applying different optimization and prediction techniques; hence, this study proposes a daily, weekly, and monthly energy consumption prediction model using the Temporal Fusion Transformer (TFT). This study relies on a TFT model for energy forecasting, which considers both primary and valuable data sources and batch training techniques. The model's performance has been compared with the Long Short-Term Memory (LSTM), interpretable LSTM, and Temporal Convolutional Network (TCN) models. The model performed better than the other algorithms, with a mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) of 4.09, 2.02, and 1.50, respectively. Further, the overall symmetric mean absolute percentage error (sMAPE) of the LSTM, interpretable LSTM, TCN, and proposed TFT remained at 29.78%, 31.10%, 36.42%, and 26.46%, respectively. The sMAPE of the TFT shows that the model performed better than the other deep learning models.
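The sMAPE figures quoted above follow the standard definition of symmetric mean absolute percentage error; a small self-contained sketch (the exact variant used in the paper may differ, and the toy numbers are illustrative):

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent:
    mean of |F - A| / ((|A| + |F|) / 2)."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(forecast - actual) / denom)

y_true = [10.0, 12.0, 8.0]    # illustrative daily consumption
y_pred = [11.0, 11.0, 9.0]
score = smape(y_true, y_pred)  # roughly 10% for these toy values
```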
Research Interests:
Research Interests: Computer Science, Artificial Intelligence, Machine Learning, Data Mining, Renewable Energy, Time Series, Energy Consumption, Energy Efficiency, Smart Grid, Hybrid Renewable Energy Forecasting, Big Data, Artificial Neural Network, Building Energy Prediction, Convolutional Neural Network, and Consumption Sociology
Smart grids and smart homes are getting people's attention in the modern era of smart cities. The advancements of smart technologies and smart grids have created challenges related to energy efficiency and production according to the future demand of clients. Machine learning, specifically neural network-based methods, has remained successful in energy consumption prediction, but there are still gaps due to uncertainty in the data and limitations of the algorithms. Research published in the literature has used small datasets and profiles of primarily single users; therefore, models have difficulties when applied to large datasets with profiles of different customers. Thus, a smart grid environment requires a model that handles consumption data from thousands of customers. The proposed model enhances the newly introduced method of Neural Basis Expansion Analysis for interpretable Time Series (N-BEATS) with a big dataset of the energy consumption of 169 customers. Further, to validate the results of the proposed model, a performance comparison has been carried out with the Long Short-Term Memory (LSTM), Blocked LSTM, Gated Recurrent Units (GRU), Blocked GRU, and Temporal Convolutional Network (TCN) models. The proposed interpretable model improves prediction accuracy on the big dataset containing the energy consumption profiles of multiple customers. Incorporating covariates into the model improved accuracy by learning past and future energy consumption patterns. Based on a large dataset, the proposed model performed better for daily, weekly, and monthly energy consumption predictions. The forecasting accuracy of the N-BEATS interpretable model for 1-day-ahead energy consumption with "day as covariates" remained better than in the 1-, 2-, 3-, and 4-week scenarios.
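As an illustration of how "day as covariates" can enter a 1-day-ahead forecasting setup, the sketch below builds a supervised matrix from lagged consumption plus a one-hot weekday covariate. The function name, lag window, and toy series are illustrative assumptions, not the paper's N-BEATS configuration:

```python
import numpy as np

def make_supervised(series, days, n_lags=7):
    """Build a supervised dataset for 1-day-ahead forecasting.
    Each row: [last n_lags daily values, one-hot weekday of the target day].
    Each target: the next day's consumption.
    `days` holds the weekday index (0-6) of every observation."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        onehot = np.zeros(7)
        onehot[days[t]] = 1.0          # covariate: weekday of the target day
        X.append(np.concatenate([series[t - n_lags:t], onehot]))
        y.append(series[t])
    return np.array(X), np.array(y)

consumption = np.arange(30, dtype=float)   # 30 days of toy data
weekday = np.arange(30) % 7
X, y = make_supervised(consumption, weekday)
```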
Research Interests:
Standard manufacturing organizations follow certain rules. The most ubiquitous organizing principles in infrastructure design are modularity and symmetry, both of which are of the utmost importance. Symmetry is a substantial principle in the manufacturing industry, and symmetrical procedures act as the structural apparatus for manufacturing design. The needs of a rapidly growing population outstrip infrastructure such as roads, bridges, railway lines, and commercial and residential buildings. Numerous underground facilities are also installed to fulfill different requirements of the people. Among these facilities, one of the most important is the water supply pipeline network. Therefore, it is essential to regularly analyze the water supply pipelines' risk index in order to avoid economic and human losses. In this paper, we propose a simplified hierarchical fuzzy logic (SHFL) model to reduce the set of rules. To this end, we have considered four essential factors of water supply pipe...
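The rule-reduction benefit of a hierarchical fuzzy system can be illustrated by simple counting. Assuming, for illustration only, four inputs with three linguistic terms each, combined pairwise (the paper's actual SHFL rule base and membership functions may differ):

```python
def flat_rule_count(n_inputs, n_terms):
    """Rules in a single flat fuzzy system covering all input combinations."""
    return n_terms ** n_inputs

def hierarchical_rule_count(n_inputs, n_terms, fan_in=2):
    """Rules when inputs are combined pairwise in a hierarchy of small
    fuzzy units; each internal unit contributes n_terms**fan_in rules."""
    total, signals = 0, n_inputs
    while signals > 1:
        units = signals // fan_in
        total += units * n_terms ** fan_in
        signals = units + signals % fan_in   # unit outputs feed the next level
    return total

flat = flat_rule_count(4, 3)          # 3^4 = 81 rules in one flat system
hier = hierarchical_rule_count(4, 3)  # three 2-input units -> 3 * 3^2 = 27 rules
```

The exponential-to-linear drop in rule count is the usual motivation for hierarchical fuzzy designs.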
Research Interests:
Representation of Pre-RST information using visualized elements is very useful for realizing the benefits of requirement traceability, and it improves practitioners' motivation to maintain Pre-RST information during life cycle processes. A few researchers have proposed visualizations for Post-RST, due to which many benefits of requirement traceability cannot be realized. This paper proposes an improved visualization representing Pre-RST information that demonstrates various benefits of requirement traceability. To evaluate it empirically, an experiment was conducted and a textual representation of traceability information was obtained. To strengthen our claim, a survey was conducted to compare the textual representation of traceability information with the proposed visualization, and the results are compiled.
Research Interests:
A program slice is the part of a program that may take the program off the path of the desired output at some point of its execution. Such a point is known as the slicing criterion. This point is generally identified as a location in a given program coupled with a subset of the program's variables. The process in which program slices are computed is called program slicing. Weiser gave the original definition of a program slice in 1979. Since this first definition, many ideas related to program slices have been formulated, along with numerous techniques to compute them. Meanwhile, a distinction between static slices and dynamic slices was also made. Program slicing is now among the most useful techniques for fetching the particular elements of a program that are related to a particular computation. Quite a large number of variants of program slicing have been analyzed, along with algorithms to compute the slice. Model-based slicing spli...
Research Interests:
Nanotechnology is generating researchers' interest in cost-free and environment-friendly biosynthesis of nanoparticles. In this research, biosynthesis of stable copper nanoparticles has been carried out using aloe vera leaf extract prepared in de-ionized water. The aim of this study is the tracing of an object by green synthesis of copper oxide nanoparticles through the interaction of leaf extract and copper salt, and its dye removal efficiency. The results have confirmed the efficient removal of Congo red (CR) dye using copper oxide nanoparticles. Furthermore, we have examined the effect of variables such as concentration, time, pH, and adsorbent dosage. We observed a maximum dye removal of 1.1 mg/g at a 10 min time interval, pH 2, and 5 mg/g nanoparticles. The copper nanoparticles were spherical in shape, with grain sizes in the range of 80–120 nm. The EDX of the synthesized nanoparticles showed 38% copper and 65% oxygen. UV spectrophotometer analysis confirms a peak of the c...
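The removal figure above is in mg of dye per gram of adsorbent, which is conventionally computed with the standard batch adsorption capacity formula q = (C0 − Ce) · V / m. A quick sketch with purely illustrative concentrations (the values are assumptions, not measurements from the study):

```python
def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Batch adsorption capacity q = (C0 - Ce) * V / m: mg of dye removed
    per gram of adsorbent, from initial/equilibrium concentrations (mg/L),
    solution volume (L), and adsorbent mass (g)."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# illustrative numbers chosen to land near the reported 1.1 mg/g figure
q = adsorption_capacity(c0_mg_l=20.0, ce_mg_l=9.0, volume_l=0.05, mass_g=0.5)
```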
Research Interests:
Energy is considered the most costly and scarce resource, and demand for it is increasing daily. Globally, a significant amount of energy is consumed in residential buildings, i.e., 30–40% of total energy consumption. An active energy prediction system is highly desirable for efficient energy production and utilization. In this paper, we have proposed a methodology to predict short-term energy consumption in a residential building. The proposed methodology consists of four layers, namely data acquisition, preprocessing, prediction, and performance evaluation. For experimental analysis, real data collected from four multi-storey buildings in Seoul, South Korea, has been used. The collected data is provided as input to the data acquisition layer. In the preprocessing layer, several data cleaning and preprocessing schemes are applied to the input data to remove abnormalities. Preprocessing further consists of two processes, namely the computation...
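One simple "abnormality removal" scheme of the kind a preprocessing layer might apply is a z-score filter on the readings. This sketch is an illustrative assumption, not the paper's exact cleaning pipeline; the threshold and toy data are chosen for demonstration:

```python
import statistics

def remove_abnormal(readings, z_max=2.0):
    """Drop readings whose z-score (distance from the mean in standard
    deviations) exceeds z_max -- a basic outlier filter."""
    mu = statistics.fmean(readings)
    sigma = statistics.pstdev(readings)
    if sigma == 0:
        return list(readings)
    return [x for x in readings if abs(x - mu) / sigma <= z_max]

# hourly kWh readings; 50.0 stands in for a faulty-meter spike
data = [3.1, 2.9, 3.0, 3.2, 50.0, 3.1]
clean = remove_abnormal(data)
```

With very few samples a single spike cannot reach a z-score of 3, so a lower threshold (here 2.0) is used for the toy data; on real hourly series a threshold of 3 is more common.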
Research Interests:
The advancements in electronic devices have increased the demand for Internet of Things (IoT) based smart homes, where the number of connected devices is growing at a rapid pace. Connected electronic devices are more common in smart buildings, smart cities, smart grids, and smart homes. Advancements in smart grid technologies have made it possible to monitor every moment of energy consumption in smart buildings. The issue with smart devices is higher energy consumption compared to ordinary buildings. Due to the growth rates of smart cities and smart homes, the demand for efficient resource management is also growing day by day. Energy is a vital resource, and its production cost is very high. Due to that, scientists and researchers are working on optimizing energy usage, especially in smart cities, while also providing a comfortable environment. The central focus of this paper is energy consumption optimization in smart buildings or smart homes. For the comfort index (thermal, visual, and air qua...
Research Interests:
In recent years, due to the unnecessary wastage of electrical energy in residential buildings, the requirements of energy optimization and user comfort have gained vital importance. In the literature, various techniques have been proposed to address the energy optimization problem. The goal of each technique is to maintain a balance between user comfort and energy requirements, such that the user can achieve the desired comfort level with the minimum amount of energy consumption. Researchers have addressed the issue with the help of different optimization algorithms and variations in the parameters to reduce energy consumption. To the best of our knowledge, this problem has not been solved yet due to its challenging nature. The gaps in the literature are due to advancements in technology, the drawbacks of optimization algorithms, and the introduction of new optimization algorithms. Further, many newly proposed optimization algorithms have produced better accuracy on benchmark instances but have not yet been applied to the optimization of energy consumption in smart homes. In this paper, we have carried out a detailed literature review of the techniques used for the optimization of energy consumption and scheduling in smart homes. A detailed discussion has been carried out on the different factors contributing to thermal comfort, visual comfort, and air quality comfort. We have also reviewed the fog and edge computing techniques used in smart homes.
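One common way to formalize the comfort-versus-energy trade-off described above is a weighted objective over the three comfort factors. The weights, normalization, and function names below are illustrative assumptions, not the formulation of any particular surveyed paper:

```python
def comfort_index(thermal, visual, air_quality, weights=(1/3, 1/3, 1/3)):
    """Aggregate comfort in [0, 1] from three per-factor scores in [0, 1]."""
    w1, w2, w3 = weights
    return w1 * thermal + w2 * visual + w3 * air_quality

def objective(comfort, energy_kwh, alpha=0.7, energy_max_kwh=10.0):
    """Quantity an optimizer would maximize: alpha trades comfort
    against normalized energy use."""
    return alpha * comfort - (1 - alpha) * (energy_kwh / energy_max_kwh)

c = comfort_index(thermal=0.9, visual=0.8, air_quality=1.0)
score = objective(c, energy_kwh=4.0)
```

An optimization algorithm (GA, PSO, etc.) would then search appliance set-points to maximize `score`.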
Research Interests:
The ratio of aging and chronic diseases is increasing day by day; therefore, people are interested in better health management. They are interested in patient-centered methods instead of traditional, conventional hospitalized services, and the idea of U-Healthcare is gaining popularity. U-Healthcare is responsible for observing different states of health during running, walking, and jogging. Researchers and developers are focusing on telemedicine systems composed of Mobile, Ubiquitous, and Wireless Body Area Networks. The U-Healthcare system is still somewhat vague and obscure, and due to these shortcomings, complete adoption of the U-Healthcare system is not yet possible. For this purpose, the next generation of U-Healthcare based on Mobile, Ubiquitous, and Wireless Body Area Networks needs to incorporate the latest, well-sophisticated hardware, communications, interconnections, trademark computing, advanced routing, and privacy. In this paper, we have critically analyzed the relevant papers on Mobile, Ubiquitous, and Wireless Body Area Networks, specifically in terms of routing and security issues.
Research Interests:
The aim of this paper is to facilitate energy suppliers in making decisions for the provision of energy to different residential buildings according to their demand, which will enable the energy suppliers to manage and optimize energy consumption efficiently. In this paper, we have used a Multi-Layer Perceptron and Random Forest to classify residential buildings according to their energy consumption. The hourly consumed historical data of two types of buildings have been predicted: high power consumption and low power consumption buildings. The prediction consists of three stages: data retrieval, feature extraction, and prediction. In the data retrieval stage, the hourly consumed data, on a daily basis, is retrieved from the database. In the feature extraction stage, statistical features, namely mean, standard deviation, skewness, and kurtosis, are computed from the retrieved data. In the prediction stage, the Multi-Layer Perceptron and Random Forest have been used for the prediction of high power and low power consumption buildings. The hourly consumed historical data of 400 residential buildings have been used for experimentation. The data was divided into 70% (280 buildings) for training and 30% (120 buildings) for testing. The Multi-Layer Perceptron achieved 95.00% accuracy, whereas the accuracy observed for Random Forest was 90.83%.
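The feature-extraction stage described above reduces each building's hourly profile to four statistics before classification. A hedged NumPy sketch, with synthetic random profiles standing in for the 400-building dataset (the numbers and function names are illustrative, not the paper's data or code):

```python
import numpy as np

def daily_features(hourly):
    """Reduce one building's 24 hourly readings to the four statistics
    used as classifier inputs: mean, standard deviation, skewness, kurtosis."""
    x = np.asarray(hourly, float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    skew = (z ** 3).mean()
    kurt = (z ** 4).mean()        # non-excess kurtosis
    return np.array([mu, sigma, skew, kurt])

rng = np.random.default_rng(0)
profiles = rng.uniform(0.5, 5.0, size=(400, 24))     # 400 synthetic buildings
X = np.array([daily_features(p) for p in profiles])
split = int(0.7 * len(X))                             # 280 train / 120 test
X_train, X_test = X[:split], X[split:]
```

`X_train` and `X_test` would then feed the MLP and Random Forest classifiers.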
Research Interests:
In this paper, a new statistical-features-based approach (SFBA) for hourly energy consumption prediction using a Multi-Layer Perceptron is presented. The model consists of four stages: data retrieval, data pre-processing, feature extraction, and prediction. In the data retrieval stage, historical hourly energy consumption data has been retrieved from the database. During data pre-processing, filters have been applied to make the data more suitable for further processing. In the feature extraction stage, the mean, variance, skewness, and kurtosis are extracted. Finally, a Multi-Layer Perceptron has been used for prediction. After experimentation with the Multi-Layer Perceptron under different training algorithms, a final model of the network was designed in which the scaled conjugate gradient (trainscg) was used as the network training function, the tangent sigmoid (Tansig) as the hidden layer transfer function, and a linear function as the output layer transfer function. For hourly energy consumption prediction, a total of six weeks of data from ten residential buildings has been used. To evaluate the performance of the proposed approach, the Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE) evaluation measures were applied.
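The three evaluation measures can be written directly from their definitions; a minimal dependency-free sketch (the toy numbers are illustrative):

```python
import math

def mae(y, yhat):
    """Mean Absolute Error: average absolute deviation."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mse(y, yhat):
    """Mean Squared Error: average squared deviation."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root Mean Squared Error: square root of the MSE."""
    return math.sqrt(mse(y, yhat))

y_true = [3.0, 5.0, 4.0]
y_pred = [2.5, 5.5, 4.0]
errors = (mae(y_true, y_pred), mse(y_true, y_pred), rmse(y_true, y_pred))
```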
Research Interests:
The accurate analysis of the energy consumption of home appliances for future energy management in residential buildings is a challenging problem due to its high impact on the human surrounding environment. In this paper, a prediction methodology is presented for the energy consumption of home appliances in residential buildings. The aim of the paper is the daily power consumption prediction of home appliances based on classification according to the hourly power consumption of all home appliances being used in residential buildings. The process consists of five stages: data source, data collection, feature extraction, prediction, and performance evaluation. Different machine learning algorithms have been applied to data containing the historical hourly energy consumption of home appliances used in residential buildings. We have divided the data into different training and testing ratios and applied different quantitative and qualitative measures to find the prediction capability and efficiency of each algorithm. After performing extensive experiments, it was concluded that the highest accuracy of 98.07% was observed for Logistic Regression with a 70-30% training and testing ratio. The Multi-Layer Perceptron and Random Forest achieved accuracies of 96.53% and 96.15% with a 75-25% training and testing ratio. The accuracy of KNN was 94.96% with a 60-40% training and testing ratio. To further establish the effectiveness of the proposed model, cross-validation with different numbers of folds has been applied. Each classifier also shows significant variation in performance with different ratios of training and testing proportions.
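The k-fold cross-validation mentioned above partitions the data so that every sample is tested exactly once. A dependency-free sketch of the index bookkeeping (illustrative, not the paper's experimental code):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cross_validate(n, k):
    """Yield (train_idx, test_idx) pairs: each fold serves once as the
    test set while the remaining folds train."""
    folds = kfold_indices(n, k)
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(cross_validate(n=10, k=5))   # 5 splits of 8 train / 2 test
```

In practice the data would be shuffled first; the contiguous split keeps the sketch short.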
Research Interests:
Forensic applications have great importance in the digital era for the investigation of different types of crimes. Forensic analysis includes Deoxyribonucleic Acid (DNA) testing, crime scene video and images, forged document analysis, computer-based data recovery, fingerprint identification, handwritten signature verification, and facial recognition. Signatures are divided into two types, i.e., genuine and forged. A forged signature can lead to huge financial losses and create other legal issues as well. The process of forensic investigation for the verification of genuine signatures and the detection of forged signatures in law-related departments has been manual; it can be automated using digital image processing techniques and automated forensic signature verification applications. A signature represents a person's authority, so forged signatures may also be used in a crime. Research has been done to automate the forensic investigation process, but due to the internal variations of signatures, the automation of signature verification has remained a challenging problem for researchers. In this paper, we have further extended previous research carried out in [1-2] and propose a forensic signature verification model based on two classifiers, i.e., Multi-Layer Perceptron (MLP) and Random Forest, for the classification of genuine and forged signatures.
Research Interests:
The Mean of Neighbors of Minimum Degree Algorithm (MNMA) is proposed in this paper. The MNMA produces an optimal or near-optimal vertex cover for any known undirected, unweighted graph. At each step, the MNMA adds to the vertex cover a vertex chosen from among the neighbors of minimum-degree vertices whose degree equals the mean value. The performance of the MNMA is compared with other algorithms on small benchmark instances as well as on large benchmark instances such as BHOSLIB and DIMACS. The MNMA is an efficient and fast algorithm and outperformed all the compared algorithms.
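The abstract gives MNMA's selection rule only in outline, so the sketch below is not MNMA itself but the classic maximum-degree greedy baseline that such vertex cover heuristics are typically compared against:

```python
def greedy_vertex_cover(edges):
    """Max-degree greedy heuristic for minimum vertex cover: repeatedly
    pick the vertex covering the most still-uncovered edges.  A standard
    baseline, not the MNMA selection rule."""
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        degree = {}
        for e in uncovered:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        v = max(degree, key=degree.get)       # vertex covering most edges
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover

# 4-cycle plus a chord: edges 0-1, 1-2, 2-3, 3-0, 0-2
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cover = greedy_vertex_cover(edges)            # {0, 2} covers every edge
```

Heuristics like MNMA refine the selection rule precisely because pure max-degree greed fails on adversarial instances.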
Research Interests:
The minimum vertex cover (MVC) and maximum independent set (MIS) problems ask, for a given graph, for a smallest set of vertices that covers all the edges and a largest set of vertices no two of which are adjacent, respectively. MVC and MIS are notable for their capability of modelling other combinatorial problems and real-world applications. The aim of this paper is twofold: first, to investigate failures of the state-of-the-art algorithms for the MVC problem on small graphs, and second, to propose a simple and efficient approximation algorithm for the minimum vertex cover problem. Most state-of-the-art approximation algorithms for the MVC problem are based on greedy approaches or are inspired by MIS approaches; hence, these approaches regularly fail to provide optimal results on specific graph instances. These problems motivated us to propose the Max Degree Around (MDA) approximation algorithm for the MVC problem. The proposed algorithm is simpler and more efficient than the other heuristic algorithms for the MVC problem. In this paper, we have introduced small benchmark instances, and some state-of-the-art algorithms (MDG, VSA, and MVSA) have been tested along with the proposed algorithm. The proposed algorithm performed well compared to its counterparts on graphs with up to 1000 vertices and 150,000 vertices for Minimum Vertex Cover (MVC) and Maximum Independent Set (MIS).
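The abstract does not spell out MDA's rule, but it does name the classic maximum-degree greedy (MDG) as a comparison baseline. A minimal sketch of that baseline, which the greedy failures discussed above refer to:

```python
def mdg_vertex_cover(edges):
    """Maximum-degree greedy (MDG) baseline for minimum vertex cover:
    repeatedly add the highest-degree vertex and delete its edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    cover = set()
    while any(adj[v] for v in adj):
        u = max(adj, key=lambda v: len(adj[v]))  # highest remaining degree
        cover.add(u)
        for w in list(adj[u]):       # remove all edges covered by u
            adj[w].discard(u)
        adj[u].clear()
    return cover
```

On a star graph MDG immediately takes the hub and is optimal; the paper's point is that on certain crafted small instances such degree-greedy choices lock in suboptimal covers, which motivates MDA.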
The face describes the personality of humans and has considerable importance in identification and verification processes. The human face provides information such as age, gender, facial expression, and ethnicity. Research has been carried out in the areas of face detection, identification, verification, and gender classification to correctly identify humans. The focus of this paper is gender classification, for which various methods have been formulated based on measurements of facial features. An efficient gender classification technique helps to accurately identify a person as male or female and also enhances the performance of other applications such as computer-user interfaces, investigation, monitoring, business profiling, and Human-Computer Interaction (HCI). In this paper, the most prominent gender classification techniques are evaluated in terms of their strengths and limitations.
The analysis of MRI images is a manual process carried out by experts, which needs to be automated to accurately classify normal and abnormal images. We have proposed a compact, three-stage model comprising pre-processing, feature extraction, and classification steps. In pre-processing, noise is removed from the grayscale images using a median filter, and the grayscale images are then converted to color (RGB) images. In feature extraction, the red, green, and blue channels of each RGB image are extracted because they are highly informative and easy to process. The first three color moments (mean, variance, and skewness) are calculated for the red, green, and blue channels of each image. The features extracted in this stage are classified as normal or abnormal with K-Nearest Neighbors (k-NN). The method was applied to 100 images (70 normal, 30 abnormal). The proposed method gives 98.00% training and 95.00% test accuracy on the normal images and 100% training and 90.00% test accuracy on the abnormal images. The average computation time per image was 0.06 s.
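The feature and classification stages above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the skewness convention (signed cube root of the third central moment) is a common choice and an assumption here, and the channel values are passed in as flat lists.

```python
import math

def color_moments(channel):
    """First three color moments of one channel: mean, variance, skewness."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((x - mean) ** 2 for x in channel) / n
    # Skewness as the signed cube root of the third central moment
    # (convention assumed; the paper's exact formula is not given).
    m3 = sum((x - mean) ** 3 for x in channel) / n
    skew = math.copysign(abs(m3) ** (1 / 3), m3)
    return [mean, var, skew]

def knn_classify(train_feats, labels, query, k=3):
    """Minimal k-NN over the 9-D feature vectors (3 moments x R, G, B):
    majority label among the k nearest training vectors."""
    dists = sorted((math.dist(f, query), y) for f, y in zip(train_feats, labels))
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)
```

In use, `color_moments` is called once per channel and the three results concatenated into one feature vector per image before `knn_classify` is applied.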
Image processing is a technique developed by computer and information technology scientists and is used in all fields of research, including the medical sciences. The focus of this paper is the use of image processing for tumor detection in brain Magnetic Resonance Imaging (MRI). For brain tumor detection, Computed Tomography (CT) and MRI are the prominent imaging techniques, but most experts prefer MRI over CT. The traditional method of tumor detection in MRI images is manual inspection, which produces varying results when the same images are analyzed by different experts; in view of these limitations of manual analysis, there is a need for an automated system that can produce globally acceptable and accurate results. A substantial body of published literature addresses replacing the manual inspection of MRI images with digital computer systems using image processing techniques. In this paper, we review digital image processing techniques in the context of brain MRI and critically analyze them to identify their gaps and limitations, so that the gaps can be filled and the techniques improved for more precise and better results.
Police and police stations have considerable importance all around the world in this era of high crime rates, and the situation in Pakistan is no different. Currently, police stations in Pakistan use the old hard-copy method of FIR registration, which requires extra effort to maintain criminal records; tracing an individual's record also takes unnecessary time, which could be saved by digitizing police station records. Although some police stations do keep digital records in Excel sheets, integrity problems arise in file-based records, and access is slow: to find a single record, the officer or official has to go through all the records in the sheet, which consumes extra time. An Excel sheet can only be used by one person at a time and has no security mechanism; anyone with access to the computer can easily reach the sensitive records. To overcome these issues, we have developed an application for police stations that digitizes the FIR system and other important official records about the staff and the necessary registers used by police stations.
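The integrity and search-speed argument above can be illustrated with a small sketch of the kind of indexed record store such an application would use. The schema and field names here are illustrative assumptions, not the authors' actual design.

```python
import sqlite3

def make_fir_db():
    """Create an in-memory FIR store (illustrative schema, not the paper's)."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE fir (
        fir_no  INTEGER PRIMARY KEY,  -- unique per FIR, unlike free-form sheets
        cnic    TEXT NOT NULL,        -- complainant/suspect national ID
        offence TEXT NOT NULL)""")
    # An index makes per-person lookup fast, replacing the linear scan
    # through an entire Excel sheet that the text describes.
    db.execute("CREATE INDEX idx_fir_cnic ON fir(cnic)")
    return db

def find_by_cnic(db, cnic):
    """Indexed search for all FIRs registered against one ID."""
    return db.execute(
        "SELECT fir_no, offence FROM fir WHERE cnic = ?", (cnic,)
    ).fetchall()
```

A database engine also gives the properties the paragraph asks for beyond search speed: the `PRIMARY KEY` constraint enforces record integrity, and concurrent multi-user access and permission controls come with the server rather than with a shared file.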