Dr. Sabrina Ahmad received a Bachelor of Information Technology (Hons) from Universiti Utara Malaysia and an MSc in Real-Time Software Engineering from Universiti Teknologi Malaysia. She later obtained a Ph.D. in Computer Science from The University of Western Australia. She specializes in requirements engineering, focusing on improving software quality. She is currently an Associate Professor at the Faculty of Information and Communication Technology, Universiti Teknikal Malaysia Melaka. Her research interests include software engineering, requirements engineering, enterprise architecture, software quality and software testing.
She holds a Professional Certification in Requirements Engineering (IREB CPRE-FL), is a certified tester (ISTQB CTFL), and is a certified Information Technology Architect (IASA CITA-F).
Abstracts of Keynote Talks: Old and New Directions in Requirements Elicitation Research and Practice: A Sociotechnical Perspective
International Journal of Electrical and Computer Engineering (IJECE), 2023
Timber quality control is undoubtedly a very laborious process in the secondary wood industry. Manual inspections by operators are prone to human error, thereby resulting in poor timber quality inspections and low production volumes. The automation of this process using an automated vision inspection (AVI) system integrated with artificial intelligence appears to be the most plausible approach due to its ease of use and minimal operating costs. This paper provides an overview of previous works on the automated inspection of timber surface defects, as well as various machine learning and deep learning approaches that have been implemented for the identification of timber defects. Contemporary algorithms and techniques used in both machine learning and deep learning are discussed and outlined in this review. Furthermore, the paper highlights the possible limitations of employing both approaches for identifying timber defects, along with several future directions that may be further explored.
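As an illustration of the deep learning branch of this survey, the following is a minimal sketch of a convolutional classifier for timber surface patches. It is not taken from the reviewed works: the patch size, architecture, and defect class names are assumptions invented for the example.

```python
# Minimal sketch of a CNN for timber surface defect classification.
# Illustrative only: the class count, patch size, and architecture are
# assumptions, not drawn from the papers surveyed above.
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # e.g. sound, knot, crack, blue stain (hypothetical labels)

def build_defect_classifier(input_shape=(64, 64, 1)):
    """Small CNN mapping a grayscale timber patch to a defect class."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_defect_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_patches, train_labels, epochs=10)  # with real image data
```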
Journal of Telecommunication, Electronic and Computer Engineering, 2018
The Named Entity Recognition (NER) task is among the important tasks in analysing unstructured textual data, offering a way to extract important and valuable information from text documents. The task is widely used in Natural Language Processing (NLP) to analyse languages with distinctive writing styles, characteristics and word structures. Social media act as the primary sources from which most of this information and unstructured textual data are obtained. In this paper, unstructured textual data were analysed through the NER task, focusing on the Malay language. The analysis investigated the impact of the text feature transformation set used for recognising entities in unstructured Malay textual data with the fuzzy c-means method. It uses Bernama Malay news as the dataset and proceeds through several experimental steps, namely pre-processing, text feature transformation, experimentation and evaluation. In conclusion, the overall percentage a...
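The fuzzy c-means step named above can be sketched from first principles. This is a generic implementation of the standard algorithm, not the paper's code; the feature matrix, cluster count, and fuzzifier value are placeholder choices.

```python
# Generic fuzzy c-means over text feature vectors, illustrating the
# clustering step described in the abstract. The toy data and the number
# of clusters are placeholders, not the paper's configuration.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features); returns (centroids, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])  # rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-10)                 # avoid division by zero
        inv = dist ** (-2.0 / (m - 1))              # standard FCM update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            break
        U = U_new
    return centroids, U

# Toy usage: rows stand in for transformed features of Malay tokens.
X = np.random.default_rng(1).normal(size=(200, 10))
centroids, U = fuzzy_cmeans(X, c=3)
labels = U.argmax(axis=1)  # hard assignment for evaluation
```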
Various methods of biometric identification for authentication purposes have been introduced, and using the human voice to establish identity is one method gaining interest from researchers. The human voice carries an anatomical structure that is unique to every person. This quality of information makes identification possible, especially for security purposes. During identification, the analysis should concentrate only on the human voice; however, besides the voice signal, background sound, termed noise, must also be considered. As the noise does not contribute to identification, it should be eliminated. This research focuses on integrating the MFCC feature extraction method with a wavelet-based coefficient method to extract the anatomical features of the human voice and eliminate the noise concurrently. The Mel frequency cepstral coefficient (MFCC) is used to represent the unique anatomical structure of the human voice. The enhanced technique is expected to increase the accura...
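A minimal sketch of the described pipeline, wavelet-based denoising followed by MFCC extraction, might look as follows. The library choices (librosa, PyWavelets), the db4 wavelet, and the universal-threshold rule are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: wavelet denoising followed by MFCC extraction, in the spirit of
# the approach above. Wavelet name, level, and threshold rule are assumed.
import numpy as np
import librosa
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold detail coefficients to suppress background noise."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def mfcc_features(path, n_mfcc=13):
    """Denoise the recording, then extract per-utterance MFCC statistics."""
    y, sr = librosa.load(path, sr=16000)
    y = wavelet_denoise(y)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-length vector per utterance

# features = mfcc_features("speaker_001.wav")  # hypothetical recording
```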
International Journal of Recent Technology and Engineering, 2019
This paper proposes a new framework for crisis-mapping with a flood prediction model based on crowdsourced data. Crisis-mapping is still at an early stage of development and offers opportunities for exploration. The application of crisis-mapping provides fast information delivery and continuous updates for crises and emergency evacuation using sensors. However, current crisis-mapping focuses on the dissemination of flood-related information and lacks flood prediction capability. Therefore, this paper applies an artificial neural network for the flood prediction model in the proposed framework. Sensor data from the crowdsourcing platform can be used to predict flood-related measures to support continuous flood monitoring. In addition, the proposed framework makes use of unstructured data from Twitter to support the dissemination of flood warnings and to locate flooded areas with no sensor installation. Based on the results of the experiment, the fitted model from ...
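As a hedged illustration of the prediction component, a small neural network regressor can map sensor readings to a flood-related measure. The feature names and synthetic data below are hypothetical stand-ins for the crowdsourced inputs, not the framework's actual variables.

```python
# Illustrative sketch of the flood-prediction component: a small neural
# network regressor over (synthetic) sensor readings. Features are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Hypothetical features: rainfall (mm), upstream level (m), soil moisture (%)
X = rng.uniform([0, 0, 10], [120, 5, 90], size=(500, 3))
# Synthetic water-level proxy driven mostly by rainfall and upstream level.
y = 0.4 * X[:, 0] / 120 + 0.5 * X[:, 1] / 5 + rng.normal(0, 0.05, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out readings:", model.score(X_te, y_te))
```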
Indonesian Journal of Electrical Engineering and Computer Science, 2022
Cross-project defect prediction (CPDP) has been a popular approach to address the limited historical datasets available when building a defect prediction model. Directly applying cross-project datasets to learn the prediction model produces an unsatisfactory predictive model. Therefore, the selection of training data is essential. Many studies have examined the effectiveness of training data selection methods, and the best-performing method varied across datasets. Since no method consistently outperformed the others across all datasets, predicting the best method for a specific dataset is essential. This study proposed a recommendation system to select the most suitable training data selection method in the CPDP setting. We evaluated the proposed system using 44 datasets, 13 training data selection methods, and six classification algorithms. The findings concluded that the recommendation system effectively recommends the best method for selecting training data.
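The core idea, learning to recommend a selection method from dataset characteristics, can be sketched as a meta-learning classifier. The meta-features and the stand-in "best method" labels below are illustrative assumptions; the paper's actual meta-features and evaluation protocol may differ.

```python
# Sketch of the recommendation idea: describe each dataset with
# meta-features, then learn to predict the best-performing training-data
# selection method. Labels here are random stand-ins for measured winners.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def meta_features(X, y):
    """Simple dataset descriptors: size, dimensionality, defect ratio, spread."""
    return [X.shape[0], X.shape[1], y.mean(), X.std(axis=0).mean()]

rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(n, 20)), rng.integers(0, 2, n))
            for n in rng.integers(100, 1000, size=44)]
M = np.array([meta_features(X, y) for X, y in datasets])
best_method = rng.integers(0, 13, size=len(datasets))  # stand-in labels

recommender = RandomForestClassifier(random_state=0).fit(M, best_method)
# For a new project, recommend a selection method from its meta-features:
print(recommender.predict(M[:1]))
```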
Bulletin of Electrical Engineering and Informatics, 2022
Directly learning a defect prediction model from cross-project datasets results in a model with poor performance. Hence, training data selection becomes a feasible solution to this problem. The limited comparative studies investigating the effect of training data selection on prediction performance have presented contradictory results, and they did not analyze why a training data selection method underperforms. This study aims to investigate the impact of training data selection on the defect prediction model and on data complexity measures. The method is based on an empirical comparison between prediction performance and data complexity measures before and after selection. This study compared 13 training data selection methods on 61 projects using six classification algorithms and measured data complexity using six complexity measures focusing on class overlap, noise level, and class imbalance ratio. Experimental results indicate that the best method varies depending on the dataset and classifier. Training data selection most affects the noise rate and class imbalance. We concluded that critically selecting the training data method could improve the performance of the prediction model. We recommend dealing with noise and unbalanced classes when designing training data selection methods.
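For concreteness, one commonly cited style of training data selection in CPDP is a nearest-neighbour filter that keeps only source instances close to the target project. The sketch below is a generic example of such a method, not necessarily one of the 13 methods compared in the study.

```python
# Sketch of a nearest-neighbour training-data filter for cross-project
# defect prediction: keep source instances near the target project.
# A generic example method, not reproduced from the study above.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_filter(X_source, y_source, X_target, k=10):
    """Select, for each target instance, its k nearest source instances."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_source)
    _, idx = nn.kneighbors(X_target)
    keep = np.unique(idx.ravel())          # de-duplicate selected rows
    return X_source[keep], y_source[keep]

# Toy usage with random stand-ins for software metrics.
rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)
X_tgt = rng.normal(size=(200, 20))
X_sel, y_sel = nn_filter(X_src, y_src, X_tgt)
print(f"kept {len(X_sel)} of {len(X_src)} source instances")
```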
Journal of Telecommunication, Electronic and Computer Engineering, 2018
This paper presents an evaluation of a boilerplate technique, with the assistance of a tool-based prototype, intended to improve Software Requirements Specification (SRS) quality in terms of comprehensibility, correctness and consistency. The value of this boilerplate is to ease the process of identifying essential requirements for a generic information management system and translating them into standard requirements statements in the SRS. An empirical investigation environment was adapted, and the expert judgment method was used for evaluation. Results showed that the tool-based boilerplate technique improves the completeness, correctness and consistency of requirements in the SRS.
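A requirements boilerplate of this kind can be pictured as a fixed sentence pattern with constrained slots. The template wording and slot names below are invented for illustration; the paper's actual boilerplates are not reproduced here.

```python
# Minimal sketch of a requirements boilerplate: a fixed sentence pattern
# with constrained slots. Template wording and example values are assumed.
def render_requirement(system, action, obj, condition=None):
    """Fill the boilerplate slots and return a standard SRS statement."""
    statement = f"The {system} shall {action} {obj}"
    if condition:
        statement += f" {condition}"
    return statement + "."

print(render_requirement(
    system="library system",
    action="notify",
    obj="the borrower",
    condition="when a reserved item becomes available",
))
# -> The library system shall notify the borrower when a reserved item
#    becomes available.
```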
International Journal of Innovative Technology and Exploring Engineering, 2019
The revolution of the Internet of Things (IoT) will revive the way people use technology for greater benefit. As we embark on the golden age of technology, smart home applications are gaining popularity for the convenience, comfort and peace of mind they add. There is a variety of smart home applications worldwide with diverse functionality, differing perspectives and embedded assumptions. This situation leads to uncertainty among developers and to unnecessary effort to elicit requirements every time a new application is to be developed. Therefore, this paper presents an exploration to determine the essential requirements for a smart home application based on end-user needs. An empirical investigation based on a survey was conducted to determine those essential requirements, with a case study of residents in the Satellite City of Muadzam Shah, Pahang. The analysis was done using the T-Test and One-Way Analysis of Variance...
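The analysis techniques named, an independent-samples T-Test and a one-way ANOVA, can be run with scipy.stats as sketched below. The respondent groups and ratings are hypothetical stand-ins, not the study's data.

```python
# Sketch of the statistical analysis named in the abstract: an independent
# t-test and a one-way ANOVA over hypothetical survey ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 5-point importance ratings from two respondent groups.
owners = rng.integers(3, 6, size=40)
renters = rng.integers(2, 6, size=35)
t, p = stats.ttest_ind(owners, renters, equal_var=False)
print(f"t-test: t={t:.2f}, p={p:.3f}")

# One-way ANOVA across three hypothetical respondent groups.
g1, g2, g3 = rng.integers(2, 6, (3, 25))
f, p = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F={f:.2f}, p={p:.3f}")
```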
Feature ranking is an essential step in determining significant features for handwriting images. Its goal is to increase classification performance while reducing computational cost. In the context of handwriting recognition, the extraction of image features can lead to the problem of high-dimensional data. The variety of generated features contributes to irrelevant or redundant features, which may even be correlated with each other, burdening the classification process. As a result, identification accuracy drops as computational complexity increases. This paper uses a Systematic Literature Review (SLR) to compile feature-ranking-based techniques that overcome the drawbacks above. An SLR is a literature review that collects and critically analyzes multiple studies to answer a research question. Five research questions were drawn for th...
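As one concrete instance of the feature-ranking techniques this review compiles, a filter method can score and rank features by mutual information with the class label. The sketch below uses scikit-learn's digits dataset as a stand-in for handwriting images; mutual information is an example criterion, not necessarily one the SLR covers.

```python
# Sketch of filter-based feature ranking: score each pixel feature by
# mutual information with the class, then keep the top-ranked subset.
# The digits dataset stands in for real handwriting images.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import mutual_info_classif

X, y = load_digits(return_X_y=True)          # 64 pixel features per image
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]           # best features first
top_k = ranking[:16]                         # keep a reduced subset
print("top features:", top_k)
X_reduced = X[:, top_k]                      # lower-dimensional input
```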
Enterprise Architecture (EA) is associated with alignment between business needs and the availability of IT services. Business architecture, as a part of EA, has a crucial role in understanding and blending with business needs. Through business architecture, the stakeholders' needs and interests are outlined in detail. The level of data detail in a business architecture can determine how well it supports strategic decision making in achieving business goals. Inaccurate determination of the entity structure in a business architecture affects the success rate of EA implementation. In early EA implementation, an enterprise architect needs to define an accurate entity structure. Unfortunately, current EA frameworks recommend a generic entity structure. The challenge is how to define an accurate entity structure for a particular industry. This study explores the determination of the core entities for a business architecture that can guide the initiation of EA implementation. In this paper, an upstream petroleum...