Abdul Salam Shah
  • International Islamic University Malaysia
  • 00923015550455
There has been a significant increase in the attention paid to resource management in smart grids, and several energy forecasting models have been published in the literature. It is well known that energy forecasting plays a crucial role in several applications in smart grids, including demand-side management, optimum dispatch, and load shedding. A significant challenge in smart grid models is managing forecasts efficiently while ensuring the smallest feasible prediction error. Recurrent neural networks, a type of artificial neural network, are frequently used to forecast time series data. However, due to certain limitations of recurrent neural networks, such as vanishing gradients and a lack of memory retention, sequential data can instead be modeled using convolutional networks, which have strong capabilities for solving complex problems. In this research, a temporal convolutional network is proposed to handle seasonal short-term energy forecasting. The proposed temporal convolutional network computes outputs in parallel, reducing the computation time compared to recurrent neural networks. A further performance comparison with the traditional long short-term memory network in terms of MAD and sMAPE has shown that the proposed model outperforms the recurrent neural network.
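The MAD and sMAPE figures referenced above follow their common definitions. As a minimal illustration (not the authors' code), the sketch below computes both metrics for a hypothetical hourly load series and forecast:

```python
# Minimal sketch (not the authors' code): MAD and sMAPE as commonly defined,
# for comparing a TCN forecast against an LSTM baseline.
import numpy as np

def mad(actual, forecast):
    """Mean absolute deviation of the forecast errors."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(actual - forecast) / denom)

# Hypothetical hourly loads (kWh) and a model forecast.
y_true = np.array([3.1, 2.8, 3.5, 4.0])
y_tcn  = np.array([3.0, 2.9, 3.4, 4.2])
print(mad(y_true, y_tcn), smape(y_true, y_tcn))
```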
Energy consumption prediction has always remained a concern for researchers because of the rapid growth of the human population and of customers joining smart grid networks for smart home facilities. Recently, the spread of COVID-19 has dramatically increased energy consumption in the residential sector. Hence, it is essential to produce energy per the residential customers' requirements, improve economic efficiency, and reduce production costs. Previously published papers in the literature have considered overall energy consumption prediction, making it difficult for production companies to produce energy per customers' future demand. Using the proposed study, production companies can accurately produce energy per their customers' needs by forecasting future energy consumption demands. Scientists and researchers are trying to minimize energy consumption by applying different optimization and prediction techniques; hence, this study proposes a daily, weekly, and monthly energy consumption prediction model using the Temporal Fusion Transformer (TFT). The study relies on a TFT model for energy forecasting, which considers both primary and valuable data sources and batch training techniques. The model's performance has been compared with the Long Short-Term Memory (LSTM), LSTM interpretable, and Temporal Convolutional Network (TCN) models. The model's performance has remained better than that of the other algorithms, with a mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) of 4.09, 2.02, and 1.50, respectively. Further, the overall symmetric mean absolute percentage error (sMAPE) of LSTM, LSTM interpretable, TCN, and the proposed TFT remained at 29.78%, 31.10%, 36.42%, and 26.46%, respectively. The sMAPE of the TFT shows that the model performed better than the other deep learning models.
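For readers who want to set up a comparable experiment, the sketch below trains a Temporal Fusion Transformer on a synthetic hourly consumption series using the open-source darts library. It only illustrates the model class, not the authors' implementation; the series, hyperparameters, and forecast horizon are assumptions.

```python
# Hedged sketch of a TFT forecasting setup with the darts library (assumed installed).
import numpy as np
import pandas as pd
from darts import TimeSeries
from darts.models import TFTModel

# Hypothetical hourly consumption series (the paper's dataset is not public here).
idx = pd.date_range("2021-01-01", periods=24 * 60, freq="H")
load = 2.0 + np.sin(np.arange(len(idx)) * 2 * np.pi / 24) + 0.1 * np.random.randn(len(idx))
series = TimeSeries.from_times_and_values(idx, load)

# add_relative_index lets the TFT run without explicit future covariates.
model = TFTModel(input_chunk_length=168, output_chunk_length=24,
                 hidden_size=16, batch_size=64, n_epochs=2, add_relative_index=True)
model.fit(series)
forecast = model.predict(n=24)   # next-day (24-hour) forecast
```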
Smart grids and smart homes are getting people's attention in the modern era of smart cities. The advancements of smart technologies and smart grids have created challenges related to energy efficiency and production according to the future demand of clients. Machine learning, specifically neural network-based methods, has remained successful in energy consumption prediction, but there are still gaps due to uncertainty in the data and limitations of the algorithms. Research published in the literature has used small datasets and profiles of primarily single users; therefore, models have difficulties when applied to large datasets with profiles of different customers. Thus, a smart grid environment requires a model that handles consumption data from thousands of customers. The proposed model enhances the newly introduced method of Neural Basis Expansion Analysis for interpretable Time Series (N-BEATS) with a big dataset of the energy consumption of 169 customers. Further, to validate the results of the proposed model, a performance comparison has been carried out with the Long Short-Term Memory (LSTM), Blocked LSTM, Gated Recurrent Unit (GRU), Blocked GRU, and Temporal Convolutional Network (TCN) models. The proposed interpretable model improves the prediction accuracy on the big dataset containing energy consumption profiles of multiple customers. Incorporating covariates into the model improved accuracy by learning past and future energy consumption patterns. Based on a large dataset, the proposed model performed better for daily, weekly, and monthly energy consumption predictions. The forecasting accuracy of the N-BEATS interpretable model for 1-day-ahead energy consumption with "day as covariates" remained better than the 1-, 2-, 3-, and 4-week scenarios.
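A comparable interpretable N-BEATS configuration can be sketched with the darts library as well. The synthetic series, the day-of-week covariate encoding, and the hyperparameters below are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of an interpretable N-BEATS setup with a day-of-week covariate (darts assumed installed).
import numpy as np
import pandas as pd
from darts import TimeSeries
from darts.models import NBEATSModel
from darts.utils.timeseries_generation import datetime_attribute_timeseries

idx = pd.date_range("2021-01-01", periods=24 * 90, freq="H")
load = 2.0 + np.sin(np.arange(len(idx)) * 2 * np.pi / 24) + 0.1 * np.random.randn(len(idx))
series = TimeSeries.from_times_and_values(idx, load)

# "Day as covariate": day-of-week encoded as a past covariate series.
day_cov = datetime_attribute_timeseries(series, attribute="dayofweek")

# generic_architecture=False selects the interpretable (trend + seasonality) N-BEATS variant.
model = NBEATSModel(input_chunk_length=168, output_chunk_length=24,
                    generic_architecture=False, n_epochs=2)
model.fit(series, past_covariates=day_cov)
forecast = model.predict(n=24, past_covariates=day_cov)   # 1-day-ahead forecast
```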
Standard manufacturing organizations follow certain rules. The most ubiquitous organizing principles in infrastructure design are the modular idea and symmetry, both of which are of the utmost importance. Symmetry is a substantial principle in the manufacturing industry, and symmetrical procedures act as the structural apparatus for manufacturing design. The needs of the rapidly growing population outstrip infrastructure such as roads, bridges, railway lines, and commercial and residential buildings. Numerous underground facilities are also installed to fulfill different requirements of the people. Among these facilities, one of the most important is water supply pipelines. Therefore, it is essential to regularly analyze the water supply pipelines' risk index in order to escape economic and human losses. In this paper, we proposed a simplified hierarchical fuzzy logic (SHFL) model to reduce the set of rules. To this end, we have considered four essential factors of water supply pipe...
Representing Pre-RST information with visual elements is very useful for realizing the benefits of requirement traceability, and it improves practitioners' motivation to maintain Pre-RST information during life cycle processes. Few researchers have proposed visualizations, and only for Post-RST, due to which many benefits of requirement traceability cannot be realized. This paper proposes an improved visualization representing Pre-RST information that demonstrates various benefits of requirement traceability. In order to evaluate it empirically, an experiment was conducted and a textual representation of the traceability information was obtained. To strengthen our claim, a survey was conducted to compare the textual representation of traceability information with the proposed visualization, and the results were compiled.
A program slice is the part of a program that may take the program off the path of the desired output at some point of its execution. Such a point is known as the slicing criterion. This point is generally identified as a location in a given program coupled with a subset of the program's variables. The process in which program slices are computed is called program slicing. Weiser gave the original definition of the program slice in 1979. Since this first definition, many ideas related to the program slice have been formulated, along with numerous techniques to compute program slices. Meanwhile, the distinction between the static slice and the dynamic slice was also made. Program slicing is now among the most useful techniques that can fetch the particular elements of a program which are related to a particular computation. Quite a large number of variants of program slicing have been analyzed, along with the algorithms to compute the slices. Model-based slicing spli...
Nanotechnology is generating researchers' interest in the cost-free and environment-friendly biosynthesis of nanoparticles. In this research, the biosynthesis of stable copper nanoparticles has been done using aloe vera leaf extract prepared in de-ionized water. The aim of this study is the tracing of an object by the green synthesis of copper oxide nanoparticles through the interaction of the leaf extract and copper salt, and its dye removal efficiency. The results have confirmed the efficient removal of Congo red (CR) dye using copper oxide nanoparticles. Furthermore, we have examined the effect of variables such as concentration, time, pH, and adsorbent dosage. We have observed a maximum of 1.1 mg/g dye removal at a 10 min time interval, pH 2, and 5 mg/g nanoparticles. The shape of the copper nanoparticles was spherical, and their grain range was 80–120 nm. The EDX of the synthesized nanoparticles showed 38% copper and 65% oxygen. UV spectrophotometer analysis confirms the peak of the c...
Energy is considered the most costly and scarce resource, and demand for it is increasing daily. Globally, a significant amount of energy is consumed in residential buildings, i.e., 30–40% of total energy consumption. An active energy prediction system is highly desirable for efficient energy production and utilization. In this paper, we have proposed a methodology to predict short-term energy consumption in a residential building. The proposed methodology consists of four different layers, namely data acquisition, preprocessing, prediction, and performance evaluation. For experimental analysis, real data collected from four multi-storied buildings situated in Seoul, South Korea, has been used. The collected data is provided as input to the data acquisition layer. In the preprocessing layer, several data cleaning and preprocessing schemes are then applied to the input data for the removal of abnormalities. Preprocessing further consists of two processes, namely the computation...
The advancements in electronic devices have increased the demand for internet of things (IoT) based smart homes, where the number of connected devices is growing at a rapid pace. Connected electronic devices are more common in smart buildings, smart cities, smart grids, and smart homes. The advancements in smart grid technologies have made it possible to monitor every moment of energy consumption in smart buildings. The issue with smart devices is higher energy consumption compared to ordinary buildings. Due to the growth rates of smart cities and smart homes, the demand for efficient resource management is also growing day by day. Energy is a vital resource, and its production cost is very high. Due to that, scientists and researchers are working on optimizing energy usage, especially in smart cities, besides providing a comfortable environment. The central focus of this paper is on energy consumption optimization in smart buildings or smart homes. For the comfort index (thermal, visual, and air qua...
In recent years, due to the unnecessary wastage of electrical energy in residential buildings, the requirement of energy optimization and user comfort has gained vital importance. In the literature, various techniques have been proposed to address the energy optimization problem. The goal of each technique is to maintain a balance between user comfort and energy requirements, such that the user can achieve the desired comfort level with the minimum amount of energy consumption. Researchers have addressed the issue with the help of different optimization algorithms and variations in the parameters to reduce energy consumption. To the best of our knowledge, this problem has not been solved yet due to its challenging nature. The gaps in the literature are due to advancements in technology, the drawbacks of optimization algorithms, and the introduction of new optimization algorithms. Further, many newly proposed optimization algorithms have produced better accuracy on the benchmark instances but have not yet been applied to the optimization of energy consumption in smart homes. In this paper, we have carried out a detailed literature review of the techniques used for the optimization of energy consumption and scheduling in smart homes. A detailed discussion has been carried out on the different factors contributing towards thermal comfort, visual comfort, and air quality comfort. We have also reviewed the fog and edge computing techniques used in smart homes.
Background and Objective: Solving the Maximum Clique (MC) problem through approximation algorithms is harder; however, the Minimum Vertex Cover (MVC) problem can easily be solved using approximation algorithms. In this paper, a technique has been proposed to use the approximation algorithms of the Minimum Vertex Cover (MVC) problem for the solution of the Maximum Clique (MC) problem. Materials and Methods: To test the proposed technique, selected approximation algorithms have been applied to small graph instances. The algorithms used for the experiments are Maximum Degree Greedy (MDG), Vertex Support Algorithm (VSA), Mean of Neighbors of Minimum Degree Algorithm (MNMA), Modified Vertex Support Algorithm (MVSA), Maximum Adjacent Minimum Degree Algorithm (MAMA), and Clever Steady Strategy Algorithm (CSSA). Results: The development of an efficient approximation algorithm for the Maximum Clique (MC) problem is very difficult due to its complex nature. The alternative is to use an approximation algorithm for the Minimum Vertex Cover (MVC) in the solution of the Maximum Clique (MC) problem. The experimental analysis has revealed that the Maximum Clique (MC) problem can be efficiently solved with approximation algorithms for the Minimum Vertex Cover (MVC); the proposed approach solved the Maximum Clique (MC) problem within a reduced time limit. Conclusions: It is a difficult task to directly solve the Maximum Clique (MC) problem through approximation algorithms. The proposed method provides a platform to efficiently solve the Maximum Clique (MC) problem by using the approximation algorithms of the Minimum Vertex Cover (MVC).
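The core reduction used here is that a clique of a graph G is an independent set of the complement of G, which in turn is the complement of a vertex cover of that complement graph. A minimal sketch of this idea, using a simple Maximum Degree Greedy cover rather than the paper's full set of algorithms, is shown below.

```python
# Sketch of the MVC-to-MC reduction (not the authors' exact algorithms): run an MVC
# approximation on the complement graph and take the remaining vertices as a clique.
import networkx as nx

def greedy_mvc(graph):
    """Maximum Degree Greedy (MDG): repeatedly remove a highest-degree vertex."""
    g = graph.copy()
    cover = set()
    while g.number_of_edges() > 0:
        v = max(g.degree, key=lambda item: item[1])[0]
        cover.add(v)
        g.remove_node(v)
    return cover

def approx_max_clique(graph):
    comp = nx.complement(graph)
    cover = greedy_mvc(comp)
    return set(graph.nodes) - cover   # independent set of comp == clique of graph

G = nx.complete_graph(4)
G.add_edge(3, 4)                      # a 4-clique plus a pendant vertex
print(approx_max_clique(G))           # typically recovers the 4-clique {0, 1, 2, 3}
```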
In this paper, a detailed literature review and comparative analysis of some well-known approximation algorithms for the Minimum Vertex Cover (MVC) problem, based on their simplicity, efficiency, and run-time complexity, have been carried out. The key contribution of this paper is the provision of small benchmark graphs on which the given approximation algorithms fail to provide optimal results. The small benchmark graphs will help researchers to evaluate efficient approximation algorithms. Generally, different terminologies and different styles have been adopted for writing pseudocode for different algorithms. To avoid such difficulties, a uniform set of terminologies and pseudocode for each algorithm is provided in this paper, which will help researchers to easily understand the approximation algorithms for the Minimum Vertex Cover (MVC) problem.
The ratio of aging and chronic diseases is increasing day by day; therefore, people are interested in better health management. They are interested in patient-centered methods instead of traditional and conventional hospitalized services. The idea of U-Healthcare is gaining popularity. U-Healthcare is responsible for the observation of different states of health during running, walking, and jogging. Researchers and developers are focusing on a telemedicine system composed of Mobile, Ubiquitous and Wireless Body Area Networks. The U-Healthcare system is still somewhat vague and obscure, and due to these shortcomings, the complete adoption of the U-Healthcare system is not yet possible. For this purpose, the latest well-sophisticated hardware, communications, interconnections, trademark computing, advanced routing, and privacy need to be included in the upcoming generation of U-Healthcare based on Mobile, Ubiquitous and Wireless Body Area Networks. In this paper, we have critically analyzed the relevant papers on Mobile, Ubiquitous and Wireless Body Area Networks, specifically in terms of routing and security issues.
The aim of this paper is to facilitate energy suppliers in making decisions about the provision of energy to different residential buildings according to their demand, which will enable the energy suppliers to manage and optimize energy consumption in an efficient manner. In this paper, we have used a Multi-Layer Perceptron and Random Forest to classify residential buildings according to their energy consumption. The hourly consumed historical data of two types of buildings, high power and low power consumption buildings, have been predicted. The prediction consists of three stages: data retrieval, feature extraction, and prediction. In the data retrieval stage, the hourly consumed data on a daily basis is retrieved from the database. In the feature extraction stage, statistical features, namely the mean, standard deviation, skewness, and kurtosis, are computed from the retrieved data. In the prediction stage, the Multi-Layer Perceptron and Random Forest have been used for the prediction of high power and low power consumption buildings. The hourly consumed historical data of 400 residential buildings have been used for experimentation. The data was divided into 70% (280 buildings) for training and 30% (120 buildings) for testing. The Multi-Layer Perceptron achieved a 95.00% accurate result, whereas the accuracy observed for Random Forest was 90.83%.
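A rough Python analogue of this pipeline could look like the following; the feature set and classifiers match the description, while the data, labels, and settings are synthetic stand-ins since the building dataset is not reproduced here.

```python
# Hedged sketch: daily statistical features from hourly consumption, then MLP and
# Random Forest classifiers with a 70/30 split. All data below are synthetic.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
hourly = rng.gamma(2.0, 2.0, size=(400, 24))                    # 400 hypothetical buildings x 24 hours
labels = (hourly.sum(axis=1) > np.median(hourly.sum(axis=1))).astype(int)  # high vs low power

features = np.column_stack([hourly.mean(1), hourly.std(1), skew(hourly, 1), kurtosis(hourly, 1)])
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)

for clf in (MLPClassifier(max_iter=1000, random_state=0), RandomForestClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```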
In this paper, a new statistical features based approach (SFBA) for hourly energy consumption prediction using a Multi-Layer Perceptron is presented. The model consists of four stages: data retrieval, data pre-processing, feature extraction, and prediction. In the data retrieval stage, historical hourly consumed energy data have been retrieved from the database. During data pre-processing, filters have been applied to make the data more suitable for further processing. In the feature extraction stage, the mean, variance, skewness, and kurtosis are extracted. Finally, a Multi-Layer Perceptron has been used for prediction. After experimentation with the Multi-Layer Perceptron with different training algorithms, a final model of the network was designed in which the scaled conjugate gradient (trainscg) was used as the network training function, the tangent sigmoid (tansig) as the hidden layer transfer function, and a linear function as the output layer transfer function. For hourly energy consumption prediction, a total of six weeks of data from ten residential buildings has been used. To evaluate the performance of the proposed approach, the Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE) evaluation measurements were applied.
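The MATLAB configuration described above (trainscg training, tansig hidden layer, linear output) can be approximated in Python as sketched below; sklearn's MLPRegressor with a tanh hidden layer and linear output is used only as an illustration, and the features and targets are synthetic.

```python
# Rough Python analogue of the described MATLAB setup, not the authors' exact configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))            # hypothetical mean/variance/skewness/kurtosis features
y = X @ np.array([1.5, -0.7, 0.3, 0.1]) + 0.1 * rng.normal(size=1000)  # hypothetical hourly load

# tanh hidden layer ~ tansig; MLPRegressor's output layer is linear by design.
model = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh", max_iter=2000, random_state=1)
model.fit(X[:800], y[:800])
pred = model.predict(X[800:])

mae = mean_absolute_error(y[800:], pred)
mse = mean_squared_error(y[800:], pred)
print(mae, mse, np.sqrt(mse))             # MAE, MSE, RMSE as in the evaluation
```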
The accurate analysis of energy consumption by home appliances for future energy management in residential buildings is a challenging problem due to its high impact on the human surrounding environment. In this paper, a prediction methodology is presented for the energy consumption of home appliances in residential buildings. The aim of the paper is the daily power consumption prediction of home appliances based on classification according to the hourly consumed power of all home appliances being used in residential buildings. The process consists of five stages: data source, data collection, feature extraction, prediction, and performance evaluation. Different machine learning algorithms have been applied to data containing the historical hourly energy consumption of home appliances used in residential buildings. We have divided the data into different training and testing ratios and have applied different quantitative and qualitative measures for finding the prediction capability and efficiency of each algorithm. After performing extensive experiments, it has been concluded that the highest accuracy of 98.07% has been observed for Logistic Regression with a 70-30% training and testing ratio. The Multi-Layer Perceptron and Random Forest achieved accuracies of 96.53% and 96.15% with a 75-25% training and testing ratio. The accuracy of KNN was 94.96% with a 60-40% training and testing ratio. To find the further effectiveness of the proposed model, cross-validation with different folds has been applied. Each classifier also shows significant variations in performance with different ratios of training and testing proportions.
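The evaluation protocol, several classifiers scored over different training/testing ratios plus k-fold cross-validation, can be sketched as follows on stand-in data; the dataset, classifier settings, and fold count are assumptions.

```python
# Sketch of the evaluation protocol only (the appliance dataset itself is not reproduced).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)  # stand-in data
models = {"LR": LogisticRegression(max_iter=1000), "MLP": MLPClassifier(max_iter=1000),
          "RF": RandomForestClassifier(), "KNN": KNeighborsClassifier()}

for test_size in (0.30, 0.25, 0.40):                        # 70-30, 75-25, 60-40 splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, random_state=0)
    for name, clf in models.items():
        acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
        cv = cross_val_score(clf, X, y, cv=10).mean()       # 10-fold cross-validation
        print(f"{name} test_size={test_size}: holdout={acc:.3f} cv={cv:.3f}")
```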
Forensic applications have great importance in the digital era for the investigation of different types of crimes. Forensic analysis includes Deoxyribonucleic Acid (DNA) testing, crime scene video and images, forged document analysis, computer-based data recovery, fingerprint identification, handwritten signature verification, and facial recognition. Signatures are divided into two types, i.e., genuine and forged. A forged signature can lead to huge financial losses and create other legal issues as well. The process of forensic investigation for the verification of genuine signatures and the detection of forged signatures in law-related departments has been manual, and it can be automated using digital image processing techniques and automated forensic signature verification applications. Signatures represent a person's authority, so forged signatures may also be used in a crime. Research has been done to automate the forensic investigation process, but due to the internal variations of signatures, the automation of signature verification has remained a challenging problem for researchers. In this paper, we have further extended previous research carried out in [1-2] and proposed a forensic signature verification model based on two classifiers, i.e., Multilayer Perceptron (MLP) and Random Forest, for the classification of genuine and forged signatures.
The vital role of medical imaging in automatic and efficient diagnosis and treatment within a short frame of time cannot be ignored. There are many imaging techniques for the diagnosis of the human brain, with each technique having its own advantages and disadvantages. One of the most important imaging modalities for the diagnosis and treatment of brain diseases is Magnetic Resonance Imaging (MRI). In our work, we have used MRI for the classification of the brain into normal or abnormal due to its capability of going into much finer details of the brain's soft tissues. With the advancement of technology, new algorithms and techniques are developed for the automatic discrimination of the normal human brain from the abnormal. In this paper, a Random Subspace (RS) ensemble classifier that uses K-Nearest Neighbors as the base classifier has been used to classify human brain MRI into normal and abnormal. A total of nine features are extracted from the red, green, and blue channels of the color MRI, which are then entered into the random subspace classifier for classification. The results have been compared with state-of-the-art techniques, and the proposed algorithm has proved to be very simple and efficient, with an accuracy of 98.64%.
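A random subspace ensemble with a k-NN base learner can be emulated as sketched below; scikit-learn's BaggingClassifier with feature subsampling stands in for the RS classifier used in the paper, and the nine-dimensional features and labels are synthetic.

```python
# Hedged approximation of a random-subspace ensemble with a K-NN base learner.
# The nine RGB-channel features and the MRI labels below are stand-ins.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 9))                       # 9 features per MRI (3 per R/G/B channel)
y = rng.integers(0, 2, size=150)                    # normal vs abnormal labels (synthetic)

rs_knn = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                           n_estimators=30, max_samples=1.0, bootstrap=False,
                           max_features=0.6)        # each member sees a random feature subspace
print(cross_val_score(rs_knn, X, y, cv=5).mean())
```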
There are many medical imaging modalities used for the analysis and cure of various diseases. One of the most important of these modalities is Magnetic Resonance Imaging (MRI). MRI is advantageous over other modalities due to its high spatial resolution and its excellent capability of discriminating soft tissues. In this paper, an automated classification approach for normal and pathological MRI is proposed. The proposed model has three simple stages: preprocessing, feature extraction, and classification. Two types of features, color moments and texture features, have been considered as the main features for the description of brain MRI. A probabilistic classifier based on the logistic function has been used for the MRI classification. A standard dataset consisting of one hundred and fifty images has been used in the experiments, which was divided into 66% training and 34% testing. The proposed approach gave 98% accurate results for the training dataset and 94% accurate results for the testing dataset. For validation of the proposed approach, 10-fold cross-validation was applied, which gave 90.66% accurate results. The classification capability of the probabilistic classifier has been compared with different state-of-the-art classifiers, including Support Vector Machine (SVM), Naïve Bayes, Artificial Neural Network (ANN), and a normal-densities-based linear classifier.
The Mean of Neighbors of Minimum Degree Algorithm (MNMA) is proposed in this paper. The MNMA produces an optimal or near-optimal vertex cover for any known undirected, un-weighted graph. To construct the vertex cover, at each step the MNMA adds to the cover a vertex chosen from among the neighbors of minimum degree vertices whose degree is equal to the mean value. The performance of MNMA is compared with other algorithms on small benchmark instances as well as on large benchmark instances such as BHOSLIB and DIMACS. The MNMA is an efficient and fast algorithm and outperformed all the compared algorithms.
The minimum vertex cover (MVC) and maximum independent set (MIS) problems ask, for a given graph, for a small set of vertices that covers all the edges and a large set of vertices no two of which are adjacent, respectively. MVC and MIS are notable for their capability of modelling other combinatorial problems and real-world applications. The aim of this paper is twofold: first, to investigate failures of the state-of-the-art algorithms for the MVC problem on small graphs, and second, to propose a simple and efficient approximation algorithm for the minimum vertex cover problem. Mostly, the state-of-the-art approximation algorithms for the MVC problem are based on greedy approaches or are inspired by MIS approaches; hence, these approaches regularly fail to provide optimal results on specific graph instances. These problems motivated us to propose the Max Degree Around (MDA) approximation algorithm for the MVC problem. The proposed algorithm is simpler and more efficient than the other heuristic algorithms for the MVC problem. In this paper, we have introduced small benchmark instances, and some state-of-the-art algorithms (MDG, VSA, and MVSA) have been tested along with the proposed algorithm. The proposed algorithm performed well compared to the counterpart algorithms, tested on graphs with up to 1000 vertices and 150,000 vertices for the Minimum Vertex Cover (MVC) and Maximum Independent Set (MIS) problems.
Nowadays, every vendor and IT service provider wants to switch to a cloud environment for better Quality of Service (QoS), scalability, performance, and reasonable cost. Many software developers are trying to get the benefits of cloud computing and want to access cloud environments at low cost and with easy access. For this rationale and for real-time cloud services, a reliable virtual platform is required. Many issues are encountered in the development and deployment of these platforms regarding programming models, application architecture, APIs, and the services they provide. On the other hand, there are many issues on the client side, including the limitations of tools, the interaction between client and service provider, and user requirements in a specific cloud. As the cloud is an inherently distributed environment, it creates gaps in communication and coordination between stakeholders. To cope with these obstacles and overcome challenges during software development in cloud computing, it is necessary to have a framework that resolves the issues and a software process model that meets the user requirements and provides quality of service within time and budget. In this paper, the literature review mainly focuses on software process models with their strengths and weaknesses. The literature review also analyzes some attributes of the software life cycle, including cost, time, scalability, and QoS.
There are many security risks for the users of cloud computing, but organizations are still switching towards the cloud. The cloud provides data protection and a huge amount of memory usage remotely or virtually. Organizations have not adopted cloud computing completely due to some security issues. Research in cloud computing has a strong focus on privacy and security under the new categorization of the attack surface. User authentication is an additional overhead for companies besides the management of the availability of cloud services. This paper proposes a model that provides a central authentication technique so that secure access to resources can be provided to users instead of adopting unordered user authentication techniques. The model has also been implemented as a prototype.
The face describes the personality of humans and has adequate importance in the identification and verification process. The human face provides information such as age, gender, facial expression, and ethnicity. Research has been carried out in the areas of face detection, identification, verification, and gender classification to correctly identify humans. The focus of this paper is on gender classification, for which various methods have been formulated based on the measurement of face features. An efficient technique for gender classification helps in the accurate identification of a person as male or female and also enhances the performance of other applications like computer-user interfaces, investigation, monitoring, business profiling, and Human Computer Interaction (HCI). In this paper, the most prominent gender classification techniques have been evaluated in terms of their strengths and limitations.
The analysis of MRI images is a manual process carried out by experts, which needs to be automated to accurately classify normal and abnormal images. We have proposed a reduced, three-staged model comprising pre-processing, feature extraction, and classification steps. In preprocessing, the noise has been removed from grayscale images using a median filter, and then the grayscale images have been converted to color (RGB) images. In feature extraction, the red, green, and blue channels of the RGB image have been extracted because they are informative and easy to process. The first three color moments, namely the mean, variance, and skewness, are calculated for the red, green, and blue channels of each image. The features extracted in the feature extraction stage are classified into normal and abnormal with K-Nearest Neighbors (k-NN). This method is applied to 100 images (70 normal, 30 abnormal). The proposed method gives 98.00% training and 95.00% test accuracy with datasets of normal images, and 100% training and 90.00% test accuracy with abnormal images. The average computation time for each image was 0.06 s.
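The described feature extraction, the first three color moments per RGB channel followed by k-NN classification, can be sketched as follows; the images, labels, and split are synthetic stand-ins rather than the paper's dataset.

```python
# Minimal sketch of the described feature extraction (not the authors' code): first three
# color moments (mean, variance, skewness) per R/G/B channel, fed to a K-NN classifier.
import numpy as np
from scipy.stats import skew
from sklearn.neighbors import KNeighborsClassifier

def color_moments(rgb_image):
    """rgb_image: H x W x 3 array; returns a 9-dimensional feature vector."""
    feats = []
    for c in range(3):
        channel = rgb_image[:, :, c].ravel().astype(float)
        feats += [channel.mean(), channel.var(), skew(channel)]
    return np.array(feats)

# Hypothetical stand-in images and labels (0 = normal, 1 = abnormal).
rng = np.random.default_rng(3)
images = rng.integers(0, 256, size=(100, 64, 64, 3))
labels = rng.integers(0, 2, size=100)

X = np.array([color_moments(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:70], labels[:70])
print(knn.score(X[70:], labels[70:]))
```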
Offline signature recognition has great importance in our day-to-day activities. Researchers are trying to use signatures as biometric identification in various areas like banks, security systems, and other identification purposes. Fingerprint, iris, thumb impression, and face detection based biometrics are successfully used for the identification of individuals because of their static nature. However, people's signatures show variability that makes it difficult to recognize the original signatures correctly and to use them as biometrics. Handwritten signatures have importance in banks for cheque and credit card processing and for legal and financial transactions, and signatures are the main target of fraud. To deal with complex signatures, there should be a robust signature verification method in places such as banks that can correctly classify signatures as genuine or forged to avoid financial fraud. This paper presents a pixel intensity level based offline signature verification model for the correct classification of signatures. To achieve this target, three statistical classifiers are used: Decision Tree (J48), probability-based Naïve Bayes (NB tree), and Euclidean distance based k-Nearest Neighbor (IBk). For comparison of the accuracy rates of offline signatures with online signatures, the three classifiers were applied to an online signature database and achieved a 99.90% accuracy rate with the decision tree (J48), 99.82% with the Naïve Bayes tree, and 98.11% with k-Nearest Neighbor (with 10-fold cross-validation). The results for offline signatures were a 64.97% accuracy rate with the decision tree (J48), 76.16% with the Naïve Bayes tree, and 91.91% with k-Nearest Neighbor (IBk) (without forgeries). The accuracy rate dropped with the inclusion of forged signatures to 55.63% with the decision tree (J48), 67.02% with the Naïve Bayes tree, and 88.12% (with forgeries).
Police stations have adequate importance in society for controlling the law and order situation of the country. In Pakistan, police stations manage criminal records and information manually. We have previously developed and improved a desktop application for the record keeping of the different registers of police stations. The data of police stations is sensitive and needs to be handled within secure and fully functional software to avoid any unauthorized access. For the proper utilization of the newly developed software, it is necessary to test and analyze the system before deployment into the real environment. In this paper, we have performed the testing of the application. For this purpose, we have used Ranorex, an automated testing tool, for functional and performance testing, and reported the results of the test cases as pass or fail.
Cloud computing has attracted users due to the high speed and bandwidth of the internet. E-commerce systems make the best use of cloud computing. The cloud can be accessed with a password and username and is completely dependent upon the internet. The threats to confidentiality, integrity, and authentication and the other vulnerabilities that are associated with the internet are also associated with the cloud. The internet and the cloud can be secured from threats by ensuring proper security and authorization. The channel between the user and the cloud server must be secured with a proper authorization mechanism. Research has been carried out and different models have been proposed by authors to ensure the security of clouds. In this paper, we have critically analyzed the already published literature on the security and authorization of the internet and the cloud.
The rate of production of scientific literature has increased in the past few decades; new topics and information are added in the form of articles, papers, text documents, web logs, and patents. The growth of information at a rapid rate has caused a tremendous number of additions to current and past knowledge; during this process, new topics have emerged, some topics have split into many sub-topics, and, on the other hand, many topics have merged to form a single topic. The manual selection and search of a topic in such a huge amount of information is an expensive and workforce-intensive task. For the emerging need of an automatic process to locate, organize, connect, and make associations among these sources, researchers have proposed different techniques that automatically extract components of the information presented in various formats and organize or structure them. The targeted data to be processed for component extraction might be in the form of text, video, or audio. Different algorithms have structured the information, grouped similar information into clusters, and weighted the clusters on the basis of their importance. The organized, structured, and weighted data is then compared with other structures to find similarity using various algorithms. Semantic patterns can be found by employing visualization techniques that show the similarity or relation between topics over time or related to a specific event. In this paper, we have proposed a model based on the Cosine Similarity Algorithm for a citation network, which will answer questions like how to connect documents with the help of citation and content similarity and how to visualize and navigate through the documents.
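The content-similarity component of such a model can be sketched with TF-IDF vectors and pairwise cosine similarity; the documents below are stand-ins, and the citation-linking and visualization steps are not shown.

```python
# Illustrative sketch of the content-similarity part: TF-IDF vectors and pairwise
# cosine similarity over a few stand-in documents (e.g., titles or abstracts).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "energy consumption prediction in smart grids",
    "forecasting residential energy demand with neural networks",
    "approximation algorithms for the minimum vertex cover problem",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sim = cosine_similarity(tfidf)          # sim[i, j] close to 1 => candidate link between docs i and j
print(sim.round(2))
```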
The Telecommunication laboratory plays an important role in carrying out research in different fields like Telecommunication, Information Technology, Wireless Sensor Networks, Mobile Networks, and many other fields. Every engineering university has a setup of laboratories for students, particularly for Ph.D. scholars, to work on the performance analysis of different telecommunication networks, including WLANs, 3G/4G, and Long Term Evolution (LTE). The laboratories help students to have hands-on practice with the theoretical concepts they have learned during their teaching at the university. The technical subjects also have a practical part, which boosts the knowledge of students and the learning of new ideas. The Telecommunication and Engineering laboratories are equipped with different electronic equipment like digital trainers, simulators, etc., and some additional supportive devices like computers, air conditioners, projectors, and large screens, with a power backup facility, which creates the perfect environment for experimentation. The setup of Telecommunication and Engineering laboratories costs a huge amount, required to purchase and maintain the equipment. A risk factor is involved in any working environment. To handle and avoid risks, there must be a risk management policy to deal with accidents and other damage during work in the laboratory, whether it is humans or equipment at risk. In this paper, we have proposed a risk management policy for Telecommunication and Engineering laboratories, which can be generalized for similar types of laboratories in engineering fields of study.
Image processing is a technique developed by computer and information technology scientists and is being used in all fields of research, including the medical sciences. The focus of this paper is the use of image processing in tumor detection from brain Magnetic Resonance Imaging (MRI). For brain tumor detection, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are the prominent imaging techniques, but most experts prefer MRI over CT. The traditional method of tumor detection in MRI images is manual inspection, which produces variations in the results when the images are analyzed by different experts; therefore, in view of the limitations of the manual analysis of MRI, there is a need for an automated system that can produce globally acceptable and accurate results. There is a sufficient amount of published literature available on replacing the manual inspection process of MRI images with a digital computer system using image processing techniques. In this paper, we have provided a review of digital image processing techniques in the context of brain MRI processing and critically analyzed them to identify the gaps and limitations of the techniques, so that the gaps can be filled and the limitations of the various techniques can be improved for precise and better results.
A literature review is an excruciating part of the process of research. It requires an analysis of published material on the topic of interest. Moreover, for a new researcher, it is challenging to extract the great number of required objectives, including the problem identification,
no longer a great deal in this era of Information and Communication Technology (ICT); instead, overloading of the literature is a major problem and the great challenge to be handled. Postgraduate research students often raise three questions to their peers and supervisors. First, how many articles suffice for a good literature review? Second, how many past years of literature will be enough to meet the required level for a good literature review? And third,
in this research paper a novel hypothetical model is proposed to answer the first two questions: the number of articles required for a good and reasonable literature review, and the number of years backward for which the analysis of articles is required for the same. Our results indicate that the analysis of data partially supports our hypothetical model and its assumptions.
Keywords: literature review; hypothetical model; load reduction; proposal writing; information systems.
Police and police stations have adequate importance all around the world in this era when the crime rate is very high, and the situation in Pakistan is the same. Currently, the police stations in Pakistan are utilizing the old (hard paper) method of FIR registration, which requires extra effort to maintain the record of criminals; tracing someone's record also requires unnecessary time, which can be saved by digitizing the police stations' records. Although some police stations do use digital record keeping in Excel sheets, an integrity problem is noticed in file-based records, and access is slower: to search for a single record, the officer/official has to go through all the records in the sheet, which consumes extra time. Excel sheets can only be used by a single person at a time, and they also do not have any security mechanism; anyone who has access to the computer can easily access the sensitive records. To overcome these issues, we have developed an application for the police station to digitize the FIR system and other important official records about the staff and the necessary registers used by police stations.
