Faizur Rashid

• I have been working in the domain of computer science for more than a decade, and I maintain an interest in modern sciences...
    Automated document classification is a fundamental machine learning task that refers to automatically assigning categories to scanned images of documents. Although it has reached a state-of-the-art stage, the performance and efficiency of the algorithms still need to be verified by comparison. The objective was to identify the most efficient classification algorithm according to the fundamentals of the science. Experimental methods were used, collecting data from a total of 1080 students and researchers at Ethiopian universities together with the Banknote, Crowdsourced Mapping, and VxHeaven datasets provided by UC Irvine. 25% of the respondents felt that KNN is better than the other models. The overall performance of KNN, SVM, Perceptron and Gaussian NB was analysed through several parameters, namely an accuracy of 99.85%, a precision of 0.996, a recall of 100%, an F-score of 0.997, classification time, and running time. KNN performed better than the other classification algorithms, with a lower error rate of 0.0002 as well as the shortest classification time and running time of ~413 and 3.6978 microseconds respectively. Considering all the parameters, it is concluded that the KNN classifier is the best algorithm.
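    A minimal sketch of the comparison described in this abstract: KNN, SVM, Perceptron and Gaussian NB evaluated with accuracy, precision, recall, F-score and timing. A built-in scikit-learn dataset stands in here for the UCI Banknote/Crowdsourced Mapping/VxHeaven data used in the paper, and the hyperparameters are illustrative assumptions.

```python
import time
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Stand-in binary dataset; the paper's UCI datasets would be loaded here instead.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Perceptron": Perceptron(max_iter=1000),
    "Gaussian NB": GaussianNB(),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)        # running (training) time
    t1 = time.perf_counter()
    y_pred = model.predict(X_test)     # classification time
    t2 = time.perf_counter()
    print(f"{name:12s} acc={accuracy_score(y_test, y_pred):.4f} "
          f"prec={precision_score(y_test, y_pred):.3f} "
          f"rec={recall_score(y_test, y_pred):.3f} "
          f"f1={f1_score(y_test, y_pred):.3f} "
          f"train={t1 - t0:.4f}s predict={t2 - t1:.4f}s")
```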
    The rapid progress in synthetic image generation and manipulation has now reached a point where it raises serious concerns about its implications for society. In the best case this leads to a loss of trust in digital content, but it may cause further harm by spreading false information and enabling the creation of fake news. In this paper, we examine the realism of state-of-the-art image manipulations and how difficult it is to detect them, either automatically or by humans. Specifically, we focus on Deep Fakes, copy-move, splicing, resampling, and statistical manipulations as prominent representatives of image forgery. Traditional image forensics techniques are usually not well suited to compressed images, because compression strongly degrades the data. Thus, this paper follows a deep learning approach and presents two networks, both with a low number of layers, to focus on the macroscopic properties of images. We generated more than half a million manipulated images for each approach. The resulting publicly available dataset is at least an order of magnitude larger than comparable alternatives, and it enables us to train data-driven forgery detectors in a supervised manner. We show that the use of additional domain-specific learning improves forgery detection to a remarkable accuracy.
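    A minimal sketch of the kind of shallow forgery detector this abstract describes: a CNN with a low number of layers, so the model attends to macroscopic image properties rather than fine texture. The layer sizes, input resolution, and binary real/fake labelling are illustrative assumptions, not the exact architectures from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detector(input_shape=(128, 128, 3)):
    """Shallow CNN that classifies an image as real (0) or manipulated (1)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),   # first shallow feature extractor
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),   # second and last conv block
        layers.GlobalAveragePooling2D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),     # real vs. fake probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

detector = build_detector()
detector.summary()
# detector.fit(train_images, train_labels, ...) would follow on a labelled
# real/manipulated dataset such as the one described in the abstract.
```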
    Objective: To develop document summarization for the Afaan Oromo language based on a query entered by the user. Methods: This study follows the design science research methodology because of its emphasis on thoughtful, intellectual, and creative activity during problem solving and the creation of knowledge. The developed query-based framework uses the TF-IDF term-weighting method. Development tools such as HornMorpho are employed for morphological analysis, whereas the Natural Language Processing Toolkit is used for text processing. The system was experimented with at extraction rates of 10%, 20%, and 30%. The results were evaluated using recall, precision, and F-measure for the objective analysis, whereas the subjective analysis was carried out by language experts. Findings: The evaluations showed that the proposed system registered F-measures of 90%, 91%, and 93% at summary extraction rates of 10%, 20%, and 30% respectively. For informativeness and coherence, the proposed system registered its best average scores of 51.67%, 56.67%, and 54.17% on a five-point scale at extraction rates of 10%, 20%, and 30% respectively when both methods were used together. Novelty: By using a morphological analysis tool, the performance of the system improved from an 80.67% to a 91.3% F-measure compared with previous work, even though further research is still needed to improve Afaan Oromo text summarization.
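    A minimal sketch of query-based extractive summarization with TF-IDF term weighting, the core mechanism this abstract describes. Sentences are split naively, scored by cosine similarity to the query, and the top fraction (the extraction rate) is kept. HornMorpho-based morphological analysis for Afaan Oromo is omitted; this is a language-agnostic illustration, and the sample text and function names are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(document: str, query: str, rate: float = 0.2) -> str:
    """Return the sentences most similar to the query, at the given extraction rate."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    vectorizer = TfidfVectorizer()
    sent_vecs = vectorizer.fit_transform(sentences)      # TF-IDF weights per sentence
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, sent_vecs).ravel()
    k = max(1, int(len(sentences) * rate))                # extraction rate -> sentence count
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return ". ".join(sentences[i] for i in top) + "."

text = ("Afaan Oromo is widely spoken in Ethiopia. Text summarization reduces "
        "a document to its key sentences. Query-based summarization keeps the "
        "sentences most relevant to the user query. Evaluation uses recall, "
        "precision and F-measure.")
print(summarize(text, "query based summarization", rate=0.3))
```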
    Image processing is a popular practical technology in the field of computer science. It plays an important role in analysing, recognizing, identifying, and predicting images using a variety of platforms and algorithms. This work is aimed at the analysis of image processing algorithms on a cloud platform. Many algorithms are available for image processing and computing; here a selection of state-of-the-art algorithms is applied to test image processing execution and timing using different strategies and platforms. Based on the structure of the dataset and the performance of the system, a suitable algorithm can be chosen for the final operation. A real-time image processing system based on SOPC technology is built, and the corresponding functional receiving unit is designed for real-time image storage, editing, viewing, and analysis. The study shows that the cloud-based image processing system increases the speed of image data processing by 12.7% compared with the other platform, especially in the case of image segmentation and enhancement. The analysis also shows advantages in image compression and image restoration on a cloud platform.
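    A minimal sketch of the kind of timing measurement the abstract refers to: how long basic segmentation (thresholding) and enhancement (histogram equalization) take on an image. The synthetic image and NumPy-only implementations are stand-ins; the paper's cloud and SOPC setup is not reproduced.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(1024, 1024), dtype=np.uint8)  # synthetic grayscale image

def segment(img, threshold=128):
    return (img > threshold).astype(np.uint8)                 # simple binary segmentation

def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())   # normalized cumulative histogram
    return cdf[img].astype(np.uint8)                          # histogram equalization

for name, op in [("segmentation", segment), ("enhancement", equalize)]:
    t0 = time.perf_counter()
    op(image)
    print(f"{name}: {time.perf_counter() - t0:.4f} s")
```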
    Process management is one of the important tasks performed by the operating system. The performance of the system depends on the CPU scheduling algorithm. The main aim of CPU scheduling algorithms is to minimize waiting time, turnaround time, response time, and context switching while maximizing CPU utilization. First-Come-First-Served (FCFS), Round Robin (RR), Shortest Job First (SJF), and Priority Scheduling are some popular CPU scheduling algorithms. In time-shared systems, Round Robin CPU scheduling is the preferred choice. In Round Robin scheduling, the performance of the system depends on the choice of the optimal time quantum. This paper presents an improved Round Robin CPU scheduling algorithm that enhances CPU performance by combining the features of Shortest Job First and Round Robin scheduling with a varying time quantum. The proposed algorithm is experimentally shown to be better than conventional RR. The simulation results show that the waiting time and turnaround time have been ...
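    A minimal sketch of a Round Robin variant in the spirit of this abstract: the ready queue is ordered Shortest-Job-First and the time quantum is recomputed each round (here as the mean of the remaining burst times). This is an illustrative simulation, assuming all processes arrive at time 0; it is not the exact algorithm from the paper.

```python
def sjf_rr(burst_times):
    """Simulate SJF-ordered Round Robin with a per-round varying quantum."""
    remaining = dict(enumerate(burst_times))      # pid -> remaining burst time
    waiting = {pid: 0.0 for pid in remaining}
    while remaining:
        order = sorted(remaining, key=remaining.get)            # SJF ordering of ready queue
        quantum = sum(remaining.values()) / len(remaining)      # varying quantum: mean burst
        for pid in order:
            run = min(quantum, remaining[pid])
            for other in remaining:                             # others wait during this slice
                if other != pid:
                    waiting[other] += run
            remaining[pid] -= run
            if remaining[pid] <= 1e-9:                          # process finished
                del remaining[pid]
    turnaround = {pid: waiting[pid] + burst_times[pid] for pid in waiting}
    return waiting, turnaround

bursts = [24, 3, 3, 12]          # assumed burst times, all arriving at time 0
w, t = sjf_rr(bursts)
print("avg waiting   :", sum(w.values()) / len(w))
print("avg turnaround:", sum(t.values()) / len(t))
```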