Internet of Things (IoT) can be used in the healthcare sector to exchange patients' information, but there are many concerns about the privacy and security of patients' confidential information and of its transfer. Data integrity is difficult to ensure, since the data generated by IoT devices are split into parts and stored on numerous edge servers in various locations; data loss and improper storage on edge servers make integrity hard to guarantee. The security challenges and data-integrity issues of edge computing can be handled by integrating blockchain (BC) technologies. The BC paradigm provides a new infrastructure and security rules that enable IoT devices to achieve trusted interoperability for information and business, so healthcare institutions built on BC technology can store data and support trust. This work presents an IoT-Edge framework for exchanging data without alteration, utilizing data-processing and BC techniques. IoT devices can monitor the patient's status remotely and thereby help avert critical cases. The proposed system serves healthcare institutions by completely preserving patients' data, transmitting it confidentially, and delivering patient examination results safely. The proposed system is user-friendly and offers the required utilities for the integrity and confidentiality of information. Simulation and performance experiments were conducted; the findings indicate acceptable performance for an IoT-Edge framework based on BC technology.
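The tamper-evidence that the BC layer contributes can be illustrated with a minimal hash-chained ledger sketch; the field names and record contents below are illustrative stand-ins, not the paper's actual schema.

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Create a block whose hash covers its contents and the previous block's hash."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Tampering with any block breaks every hash link after it."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

# Genesis block plus two patient-record blocks (illustrative data).
genesis = make_block(0, {"record": "genesis"}, "0" * 64)
b1 = make_block(1, {"patient_id": "P001", "reading": "bp 120/80"}, genesis["hash"])
b2 = make_block(2, {"patient_id": "P001", "reading": "glucose 95"}, b1["hash"])
chain = [genesis, b1, b2]
print(chain_is_valid(chain))  # True

b1["hash"] = "tampered"       # an edge server silently altering a stored part
print(chain_is_valid(chain))  # False
```

The point of the sketch is only the integrity check: once a stored block is altered, every later block's `prev_hash` link fails to match, which is the property the framework relies on to detect improper storage.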
Due to the development of the internet, data transfer has become faster, and it is easier to transmit and receive different data types. The possibility of data loss or data modification by a third party is high, so designing a model that allows stakeholders to share their data confidently over the internet is urgent. Steganography hides information by concealing the very existence of the embedded data inside different types of multimedia. In this chapter, a steganography model is proposed that embeds an image into a cover image based on the discrete wavelet transform (DWT) as the first phase; the embedded secret image is then extracted from the stego-image as the second phase. Model performance was evaluated using the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean square error (MSE). The proposed DWT-based steganographic model is applied to hide confidential images of a nuclear reactor and military devices. The findings indicate that the proposed model provides a relatively high embedding payload with no visual distortion in the stego-image; it improves security and maintains the correctness of the hidden image.
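The MSE and PSNR metrics used to judge stego-image distortion can be computed as follows; this is a minimal sketch over flat 8-bit pixel lists, and the sample values are made up rather than taken from the chapter's images.

```python
import math

def mse(original, stego):
    """Mean squared error between two equal-sized pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, stego)) / len(original)

def psnr(original, stego, max_val=255):
    """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
    err = mse(original, stego)
    if err == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / err)

cover = [52, 55, 61, 59, 79, 61, 76, 61]
stego = [52, 55, 60, 59, 79, 62, 76, 61]  # two pixels nudged by embedding
print(round(mse(cover, stego), 3))   # 0.25
print(round(psnr(cover, stego), 2))  # 54.15
```

A PSNR in the 50 dB range, as here, is well above the roughly 35-40 dB level usually taken as visually indistinguishable, which is what "no visual distortion" claims in practice.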
Providing air pollution information to individuals enables them to understand the air quality of their living environments. Thus, the association between people's wellbeing and the properties of the surrounding environment is an essential area of investigation. This paper proposes predicting air quality by harvesting public/open data, which are usually incomplete, and leveraging them to obtain a Personal Air Quality Index. To cope with the problem of missing data, we applied the KNN imputation method. To predict the Personal Air Quality Index, we apply a voting regression approach based on three base regressors: a Gradient Boosting regressor, a Random Forest regressor, and a linear regressor. Evaluating the experimental results using the RMSE metric, we obtained an average score of 35.39 for Walker and 51.16 for Car.
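The two processing steps, KNN imputation of missing readings and uniform-vote averaging over base regressors, can be sketched as below; the sensor rows and the three base predictors are illustrative stand-ins, not the paper's trained Gradient Boosting, Random Forest, and linear models.

```python
def euclid(a, b, skip):
    """Distance over all columns except the one being imputed."""
    return sum((x - y) ** 2
               for i, (x, y) in enumerate(zip(a, b)) if i != skip) ** 0.5

def knn_impute(rows, k=2):
    """Replace each None with the mean of that column over the k nearest complete rows."""
    complete = [r for r in rows if None not in r]
    filled = [list(r) for r in rows]
    for r in filled:
        for j, v in enumerate(r):
            if v is None:
                nearest = sorted(complete, key=lambda c: euclid(r, c, j))[:k]
                r[j] = sum(c[j] for c in nearest) / k
    return filled

def voting_predict(x, regressors):
    """Uniform voting: average the base regressors' predictions."""
    preds = [f(x) for f in regressors]
    return sum(preds) / len(preds)

# Hourly sensor rows: (temperature, wind, PM index); the None is a lost reading.
readings = [
    [35.0, 12.0, 60.0],
    [36.0, 14.0, None],
    [34.0, 11.0, 58.0],
    [40.0, 20.0, 75.0],
]
filled = knn_impute(readings, k=2)
print(filled[1][2])  # 59.0 (mean of the two nearest rows' PM values)

# Stand-ins for the trained Gradient Boosting / Random Forest / linear models.
base = [lambda x: 50.0, lambda x: 54.0, lambda x: 52.0]
print(voting_predict([36.0, 14.0, 59.0], base))  # 52.0
```

In a real pipeline one would use scikit-learn's `KNNImputer` and `VotingRegressor`; the sketch only shows the arithmetic they perform.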
Hyperspectral imaging is employed in a broad array of applications, and common to all of them is the need to classify hyperspectral image data. Hyperspectral data consist of many bands, up to hundreds, covering the electromagnetic spectrum; the result is a hyperspectral data cube of hundreds of bands, which poses a big-data challenge. In this paper, an unsupervised hyperspectral image classification algorithm, the Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA), is used to produce a classified image and extract agricultural information, using ENVI (Environment for Visualizing Images), a software application for processing and analyzing geospatial imagery. The study area is in Florida, USA; the hyperspectral dataset of Florida was generated by the SAMSON sensor. Performance was evaluated on the basis of an accuracy assessment of the process after applying Principal Component Analysis (PCA) and the ISODATA algorithm. The overall accuracy of the classification process is 75.6187%.
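The core assign-and-update iteration underlying ISODATA (omitting its cluster split/merge refinements, which is what distinguishes it from plain k-means) can be sketched over toy two-band pixel spectra; all values below are made up.

```python
def assign(pixels, centers):
    """Label each pixel with the index of the nearest cluster centre."""
    def d2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))
    return [min(range(len(centers)), key=lambda i: d2(p, centers[i]))
            for p in pixels]

def update(pixels, labels, k):
    """Move each centre to the mean of its assigned pixels."""
    new = []
    for i in range(k):
        members = [p for p, lab in zip(pixels, labels) if lab == i]
        new.append(tuple(sum(band) / len(members) for band in zip(*members)))
    return new

# Four two-band pixel spectra and two rough initial centres.
pixels = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
centers = [(0.0, 0.0), (1.0, 1.0)]
for _ in range(5):  # fixed iteration count stands in for a convergence test
    labels = assign(pixels, centers)
    centers = update(pixels, labels, len(centers))
print(labels)  # [0, 0, 1, 1]
```

ENVI's ISODATA additionally splits clusters with large spread and merges centres that drift too close together between iterations; the loop above is only the shared skeleton.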
This paper proposes a comparative study using machine learning algorithms to predict the shooting success of basketball players in the National Basketball Association (NBA). The work focuses on analyzing an NBA regular-season dataset, which can help NBA teams prepare their game plans based on the other team's players' performance: for instance, how well each player usually shoots from different distances and which defense strategies they often use. In this work, Random Forest and XGBoost models are used for shooting prediction.
This paper aims to improve patients' quality of life and provide them with the lifestyle they need. We intend to achieve this by creating a mobile application that analyzes patient data on conditions such as diabetes, blood pressure, and kidney disease, and then implementing a system that diagnoses chronic diseases using machine learning techniques such as classification. It is hard for chronic-disease patients to record their measurements on paper every time they measure blood pressure, sugar level, or any other quantity that needs periodic measurement; the paper might be lost, which can prevent the doctor from fully understanding the case. So the application records measurements in a database. It is also difficult for patients to decide what to eat or how often they should exercise given their situation. Our idea is to recommend a lifestyle for the patient and have the doctor participate in it by writing not...
Genetic mapping is an approach to identifying genes and processes. Genetic maps are essential tools for analyzing DNA sequence data, not only providing a blueprint of the genome but also unlocking linkage patterns between genetic markers, chromosomal regions with more than one sequence variant. Studying these linkage patterns enables diverse applications in identifying the biological features underlying problems in health, agriculture, and the study of biodiversity. Genetic mapping provides a means to understand the basis of genetic and biochemical diseases and provides genetic markers. Mapping studies can be done in a single large pedigree; the larger the number of affected individuals sampled, the better the estimate of recombination between the gene causing the disease and one or more nearby genetic markers. This work proposes an algorithm to improve methods for detecting breast cancer by analyzing DNA data and detecting issues in the DNA samples. The work is based on big data and machine learning techniques to classify all samples into two main classes. It evaluates the performance of different classification algorithms on the dataset and also provides a website application as a tool to help specialists predict breast cancer based on a stated genetic mutation.
Data protection has become a more critical issue, and the necessity of securing a transmission channel has become more serious. Therefore, steganography, the art of hiding data in digital media by embedding a secret message in a cover document without letting anyone but the intended recipient suspect the data's existence, has become a relevant topic of research. The real challenge in steganography is how to obtain high robustness and capacity without damaging the imperceptibility of the cover document. This article presents two steganography approaches based on the Similarity of English Font Styles (SEFS), in which the main document font style is replaced by a similar font style to embed the secret message after encoding it. This is done by using 1) the upper-case letters and punctuation marks of the carrier document, or 2) the white space between words and the start and end letters of each word that has more than 2 letters in the carrier document. These approache...
IBM is one of the top companies in the technology and business fields. The aim of this research is to measure the awareness of computer science students and graduates in Saudi Arabia of some of the main IBM cloud solutions, namely IBM Bluemix and Watson Analytics. The method used to measure awareness was publishing both online and paper surveys. The surveys got a total of 208 participants, who came from different universities and different majors in the computer science field. The survey results show that 139 of the 208 participants knew about IBM, and their knowledge of the company came from colleges and social media networks rather than from the IBM website or its charity and community work. The study also shows that few people know about IBM solutions: only 16 knew about IBM Bluemix and only 14 about Watson Analytics.
A website is a software product used by different organizations for marketing and information exchange, and it is one of the best technologies for information system applications. Universities generally have complex websites comprising many sub-websites for their different sections. This work employed a software-tools-based evaluation method and an evaluator-based evaluation method to comprehensively evaluate five large university websites in Saudi Arabia: King Saud University (KSU), King Faisal University (KFU), Princess Nourah Bint Abdulrahman University (PNU), Prince Sultan University (PSU), and Dar Al Uloom University (DAU). The evaluation involves testing sample pages from the selected universities. This study provides an overview of the weaknesses and strengths of the five Saudi university websites. It aims to comprehensively evaluate them using the software tools (WebCHECK and Sitebeam) and...
Reconstruction of High Resolution Image from a Set of Blurred, Warped, Undersampled, and Noisy Measured Images. Sahar A. El Rahman, Hala A. Elqader, Mazen Selim. Electrical Department, Faculty of Engineering-Shoubra, Benha University, Cairo, Egypt ...
Information technology is a significant part of the healthcare system. The availability, integrity, security, and accuracy of the data in every healthcare process are vital, so such data should be kept up to date to support the continued improvement of services at each healthcare provider. Thus, several information systems must be integrated with healthcare systems. A healthcare record includes information such as specific allergies and medications, medical history, immunization status, radiology images, lab and examination results, vital statistics such as weight and age, appointments, ordered tests, and diagnoses. This record is identified by a patient ID. Authorized hospital employees and doctors use their ID and password to log in to the application, for privacy and security. A request for that patient's record is then sent using the patient ID and received by the doctor; the record ought to be recent since the patient fi...
In Arab and Muslim countries, the Arabic language is important because it is the language of the Quran, so teaching it to children is an important issue. However, some children may suffer from a learning difficulty or disability such as dyslexia or deafness. Unfortunately, it is hard to find any educational application in Arabic to help them and make their studies easier. Therefore, the aim of this work is to provide a good-quality, interesting, and encouraging application that teaches both the Arabic alphabet and Arabic sign language to children who suffer from either a reading difficulty (dyslexia) or a hearing disability (deafness). The system teaches the child the Arabic letters systematically and sequentially, starting from the first letter and ending with the last, so that the child learns step by step and does not move from one letter to the next until passing the current one. It uses an interesting interf...
The patient file number and findings embedded inside a medical image provide significant information that should be unavailable to an unauthorized person without access to the image. Medical images are affected by various types of noise that corrupt image quality, which matters when protecting the information in the image. In this project, we present a method to reduce interference in the embedded medical image. The method encrypts the patient file number and findings using RSA and then embeds them in the medical image with the discrete wavelet transform (DWT) technique. For noise reduction, we filter the medical image after the previous process with a median filter. With this system, any medical image that is transferred will have the patient file number and findings hidden and embedded in it with better quality.
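The RSA encryption step applied to the file number can be illustrated at textbook scale; the tiny primes and the sample file number are for readability only, and real deployments use large keys with proper padding.

```python
# Textbook RSA with tiny primes; real systems use large keys and OAEP padding.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)

file_number = 1234        # illustrative patient file number to protect
cipher = pow(file_number, e, n)     # encrypt before DWT embedding
plain = pow(cipher, d, n)           # decrypt after extraction
print(plain == file_number)  # True
```

Only the ciphertext is embedded in the image, so even if the hidden payload is extracted by an attacker, recovering the file number still requires the private exponent.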
Breast cancer (BC) is considered the most common cause of cancer deaths in women. This study aims to identify BC early using machine learning algorithms and feature selection methods. The overall methodology of this work was adapted from the knowledge discovery in databases (KDD) process and includes four datasets, a preprocessing phase (data cleaning and splitting into training and testing sets), a processing phase (feature selection, k-fold validation, and classification), and finally model evaluation. This paper presents a comparison between different classifiers: decision tree (DT), random forest (RF), logistic regression (LR), Naive Bayes (NB), K-nearest neighbor (KNN), and support vector machine (SVM). Four different breast cancer datasets (Wisconsin Prognosis Breast Cancer (WPBC), Wisconsin Diagnosis Breast Cancer (WDBC), Wisconsin Breast Cancer (WBC), and the Mammographic Mass dataset (MM-Dataset) based on BI-RADS findings) are used in the experiments. The proposed models were evaluated using classification accuracy and the confusion matrix. The experimental results indicate that classification based on the RF technique with the Genetic Algorithm (GA) as the feature selection method outperforms the other classifiers, with an accuracy of 96.82% on the WBC dataset. On the WDBC dataset, classification using the C-SVM technique with the RBF (Radial Basis Function) kernel is superior to the other classifiers, with an accuracy of 99.04%. On the WPBC dataset, classification using the RF technique with recursive feature elimination (RFE) as the feature selection method is best, with an accuracy of 74.13%. On the MM-Dataset, classification using the DT technique is best, with an accuracy of 83.74%. The findings indicate that the proposed models are effective compared with existing models.
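The two evaluation measures used here, classification accuracy and the binary confusion matrix, can be computed as below; the label vectors are illustrative, not results from the paper's datasets.

```python
def confusion_matrix(y_true, y_pred):
    """2x2 counts [[TN, FP], [FN, TP]] for binary labels 0 (benign) / 1 (malignant)."""
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted class matches the true class."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))  # [[3, 1], [1, 3]]
print(accuracy(y_true, y_pred))          # 0.75
```

Reporting the confusion matrix alongside accuracy matters for medical data because it separates the two error types: a false negative (missed malignancy) is far more costly than a false positive.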
The Internet of Things (IoT) supports a wide range of applications, including smart cities, traffic congestion, waste management, structural health, security, emergency services, logistics, retail, industrial control, and healthcare. IoT is a mega-technology that can establish a connection with anything and anyone, at any time and place, on any service, platform, and network. It has a great impact on the whole chain of businesses, smart objects and devices, and systems and services enabled by heterogeneous network connectivity, and it has developed into a smart pervasive framework of smart devices. IoT devices are in use in many fields; they connect to complex devices, interface with hostile environments, and are deployed on various uncontrolled platforms, and therefore face many security issues and challenges. Since the IoT offers a potential platform for integrating any type of network and complex system, it can encounter vulnerabilities inherent to the individual systems within the integrated network. This research paper studies the security issues of the individual systems responsible for IoT interconnection and their impact on the integrated IoT system.
In recent years, oil spill detection by hyperspectral imaging has moved from experimental to operational use. In this paper, the researchers use and compare four classification approaches for the identification of oil spills: support vector machine (SVM), parallelepiped, minimum distance (MD), and binary encoding (BE). These approaches are used to identify oil spill areas in two study areas, selected in the Gulf of Mexico and the Adriatic Sea. The classifiers are applied to the study areas after pre-processing that includes spatial and spectral subsetting and atmospheric correction. The classifiers are applied to the full dataset and to a region of interest (ROI), both before and after performing principal component analysis (PCA). PCA is utilised to eliminate redundant data, reduce the vast amount of information, and consequently decrease processing times. The findings indicate that the SVM, MD, and BE approaches achieve higher classification accuracy than the parallelepiped approach on both datasets obtained from the selected regions.
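Of the four classifiers, minimum distance is the simplest to sketch: each pixel is assigned to the class whose mean spectrum is nearest in Euclidean terms. The three-band spectra below are made-up values, not statistics from the Gulf of Mexico or Adriatic scenes.

```python
def md_classify(pixel, class_means):
    """Minimum-distance rule: pick the class with the nearest mean spectrum."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda name: dist2(pixel, class_means[name]))

# Illustrative 3-band mean reflectance spectra for two classes.
means = {"oil": (0.12, 0.10, 0.08), "water": (0.03, 0.05, 0.02)}
print(md_classify((0.11, 0.09, 0.07), means))  # oil
print(md_classify((0.04, 0.05, 0.03), means))  # water
```

Real hyperspectral pixels would have dozens to hundreds of bands (or a handful of PCA components after reduction), but the decision rule is unchanged, which is why MD remains fast even on full data cubes.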
Internet and web services are fast becoming critically important to business, industry, and individuals, and creating an online brand and community is the chief objective of web serving. Organizations recognize that web-based systems can enhance their scale of communication, as the Internet can deliver large amounts of data to the public speedily. To be successful, web-based systems need good usability, a measure of how easy the interface is to use. To achieve this, we need to analyze websites to detect their drawbacks and find ways to improve them. This paper aims to analyze some of the top hospital websites in Saudi Arabia: Dr. Sulaiman Alhabib Hospital, King Fahad Medical City, Saad Specialist Hospital, Dallah Hospital, King Faisal Specialist Hospital and Research Center, and International Medical Center. The evaluation involves testing sample pages related to the selected hospitals. This study provides an overview regarding...
Recently, multimedia and cellular technologies have spread dramatically, and the demand for digital information has increased; speech is one of the most effective forms of communication. This paper presents three approaches for the transmission of compressed speech signals over a convolutional Coded Orthogonal Frequency Division Multiplexing (COFDM) system with a chaotic interleaving technique. The speech signal is compressed using the Set Partitioning In Hierarchical Trees (SPIHT) algorithm, an improved version of EZW characterized by a simple and effective method for further compression. To mitigate the fading caused by multipath wireless channels, this paper proposes a COFDM system based on the fractional Fourier transform (FrFT), a COFDM system based on the discrete cosine transform (DCT), and a COFDM system based on the discrete wavelet transform (DWT). The FrFT can address the frequency-offset problem, which shifts the received frequency-domain sub-carriers so that the orthogonality between subcarriers deteriorates even with equalization. The DCT has the advantage of increased computational speed, as only real calculations are required. The DWT is spectrally efficient since it does not use a cyclic prefix (CP). These systems were designed under the assumption that corruptive background noise is absent; therefore, denoising techniques, namely wavelet denoising and Wiener filtering, are applied at the receiver to enhance speech quality. The simulation experiments show that the proposed COFDM-DWT with Wiener filtering at the receiver offers the best trade-off between BER, spectral efficiency, and signal distortion. Hence, BER performance is improved with small bandwidth occupancy, and, thanks to the denoising stage, speech quality is improved to achieve good intelligibility.
Statistics serve an important role in the evaluation of teaching and learning, as part of the promotion and tenure process, and as an important piece of the course-specification improvement process. Instructor and course evaluations are still paper-based at many universities; however, some universities are beginning to adopt web-based solutions to this time-consuming and wasteful process. Students play a complementary role in university academic life through their participation in course and instructor evaluation via the Course Online Evaluation System (COES), supporting quality teaching and academic excellence. COES is an interactive web-based system designed and implemented for students of the College of Computer and Information Sciences (CCIS) at Princess Nourah bint Abdulrahman University (PNU) to evaluate their instructors and courses. While the application was implemented on a relatively small scale, it is applicable to any college or department interested in automating the course and instructor evaluation process. Finally, the results of this work show that an online evaluation system can provide accurate, timely, and more detailed information to instructors, departments, and faculty, with security and confidentiality, while retaining the functionality of the traditional paper-based approach without its problems.
This paper proposes a new algorithm for super-resolution (SR) restoration that uses the affine block-based registration algorithm in the maximum likelihood estimator. The proposed SR algorithm is restricted to linear space-invariant (LSI) blur and global uniform translational displacement between the measured images. It is tested using synthetic grayscale and mono-color images, where the reconstructed image can be compared with its original (17). All the simulations use synthetic data in order to bypass problems such as estimating the translation between measurements and estimating the blurring function. The proposed algorithm improves the accuracy of the commonly used translational registration.
Multimodal biometric systems can fuse information at different levels and achieve higher recognition performance than unimodal systems. This paper studies the performance of different classification techniques and fusion rules in the context of unimodal and multimodal biometric systems based on the electrocardiogram (ECG) and fingerprint. Experiments are conducted on ECG and fingerprint databases to evaluate the performance of the proposed biometric systems. The MIT-BIH database is used for ECG, the FVC 2004 database is used for fingerprint, and further experiments evaluate the proposed multimodal system with 47 subjects from a virtual multimodal database. The performance of the proposed unimodal and multimodal biometric systems is measured using the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), sensitivity, specificity, efficiency, standard error of the mean, and likelihood ratio. The findings indicate an AUC of up to 0.985 for the sequential multimodal system and up to 0.956 for the parallel multimodal system, compared with the unimodal systems, which achieved AUCs of up to 0.951 and 0.866 for the ECG and fingerprint biometrics, respectively. The overall performance of the proposed multimodal systems is better than that of the unimodal systems across different classifiers and different fusion levels and rules.
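The AUC comparison above can be sketched with score-level fusion using the simple sum rule. The match-score distributions below are synthetic stand-ins (not the MIT-BIH or FVC 2004 scores), and AUC is computed as the Mann-Whitney probability that a genuine score exceeds an impostor score.

```python
import numpy as np

def auc(genuine, impostor):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen genuine score exceeds a random impostor score."""
    g = np.asarray(genuine)[:, None]
    i = np.asarray(impostor)[None, :]
    return (g > i).mean() + 0.5 * (g == i).mean()

rng = np.random.default_rng(0)
# hypothetical match scores for two modalities (higher = more genuine)
ecg_gen, ecg_imp = rng.normal(1.0, 1.0, 2000), rng.normal(0.0, 1.0, 2000)
fp_gen,  fp_imp  = rng.normal(0.8, 1.0, 2000), rng.normal(0.0, 1.0, 2000)

# parallel fusion at the score level with the sum rule
fused_gen = ecg_gen + fp_gen
fused_imp = ecg_imp + fp_imp

print(auc(ecg_gen, ecg_imp), auc(fp_gen, fp_imp), auc(fused_gen, fused_imp))
```

With independent modality scores, summing averages out per-modality noise, so the fused AUC exceeds either unimodal AUC, mirroring the trend reported above.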
Reconstruction of High Resolution Image from a Set of Blurred, Warped, Undersampled, and Noisy Measured Images. Sahar A. El-Rahman, Hala A. Elqader, Mazen Selim. Electrical Department, Faculty of Engineering-Shoubra, Benha University, Cairo, Egypt.
This paper proposes an algorithm to reconstruct a High Resolution (HR) image from a set of blurred, warped, undersampled, and noisy measured images. The proposed algorithm uses the affine block-based algorithm in the maximum likelihood (ML) estimator. It is tested using synthetic images, where the reconstructed image can be compared with its original. A number of experiments were performed with the proposed algorithm to evaluate its behavior before and after noise addition, and again after noise removal. The results show that the enhancement factor is better after noise removal than when no noise is added, and that the PSNR difference compares favorably with the results of another system.
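The PSNR comparison mentioned above uses the standard definition, which is worth stating concretely. This is the textbook metric, not code from the paper; the `peak` parameter is 255 for 8-bit images or 1.0 for normalized ones.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstructed or degraded image of the same shape:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A higher PSNR after denoising than before indicates the reconstruction moved closer to the original, which is how the before/after-noise-removal comparison is scored.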
This paper addresses the problem of reconstructing a High Resolution (HR) still image from a set of displaced, undersampled, and blurred measured images. It proposes an algorithm that uses the affine block-based algorithm in the maximum likelihood estimator. The algorithm is tested using synthetic Grayscale and Mono_Color images, where the reconstructed image can be compared with its original. A number of experiments were performed over different sets of Low Resolution (LR) images to evaluate its behavior as a function of the number of available LR images. All simulations use synthetic data in order to bypass problems such as estimating the translation between measurements and estimating the blurring function. The proposed algorithm accurately recovers the HR image even when only very few input images are provided.
A major drawback of Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems is their inherently high Peak-to-Average Power Ratio (PAPR), which arises from the coherent superposition of many subcarriers. The Partial Transmit Sequence (PTS) technique is one approach proposed to overcome this problem and is among the most popular schemes for peak power reduction in OFDM systems: it partitions the data into sub-blocks and determines the optimal phase weighting factor for each so that the overall PAPR is reduced. Lower PAPR and lower computational complexity can be achieved by finding the optimum phase weighting factors and sub-block partition schemes, but an exhaustive search over the number of sub-blocks and the rotation factors must be performed. As the number of sub-blocks and rotation factors increases, PAPR reduction improves; however, the number of calculations also increases, so the complexity grows exponentially along with the processing delay. The simulation results show that the PTS technique achieves PAPR reduction at the expense of increased transmit signal power, increased BER, and data-rate loss, and that it provides better PAPR reduction with reduced computational complexity compared to the conventional PTS scheme in MIMO-OFDM systems.
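The exhaustive-search PTS procedure described above can be sketched for a single OFDM symbol. This toy uses an interleaved sub-block partition, a four-phase rotation set, and no oversampling; the partition scheme and parameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def pts(symbols, n_blocks=4, phases=(1, -1, 1j, -1j)):
    """Exhaustive-search PTS: split the subcarriers into disjoint
    sub-blocks, weight each block's time-domain signal by a phase
    rotation factor, and keep the combination with the lowest PAPR."""
    n = len(symbols)
    parts = []
    for b in range(n_blocks):
        block = np.zeros(n, complex)
        block[b::n_blocks] = symbols[b::n_blocks]   # interleaved partition
        parts.append(np.fft.ifft(block))
    best, best_papr = None, np.inf
    for w in product(phases, repeat=n_blocks):      # |phases|**n_blocks trials
        cand = sum(wi * pi for wi, pi in zip(w, parts))
        p = papr_db(cand)
        if p < best_papr:
            best, best_papr = cand, p
    return best, best_papr

rng = np.random.default_rng(1)
qpsk = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), 64) / np.sqrt(2)
plain = np.fft.ifft(qpsk)
reduced, reduced_papr = pts(qpsk)
print(papr_db(plain), reduced_papr)
```

The search space here is 4^4 = 256 candidates; doubling the sub-block count squares that figure, which is the exponential complexity growth the abstract refers to. The all-ones weighting is always a candidate, so the selected PAPR never exceeds the unmodified signal's.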
High Peak-to-Average Power Ratio (PAPR) in MIMO-OFDM systems remains a demanding and difficult issue. Radio transmitter stations must use a High Power Amplifier (HPA) to cover their desired area with enough transmitted power. For the HPA to achieve maximum output power efficiency, it must be designed to work close to the saturation region; because of the high PAPR of the input signals, this introduces memoryless nonlinear distortion into the communication channels. The efficiency of the MIMO-OFDM receiver is sensitive to the HPA. Keeping the out-of-band power under the specified limits requires the HPA to operate in its linear region, which leads to inefficient amplification and expensive transmitters; it is therefore necessary to investigate PAPR reduction techniques for MIMO-OFDM systems. Numerous techniques for reducing PAPR have been recommended to date. In this paper, the performance and efficiency of two of them are discussed and simulated, and our suggested method for a conventional OFDM system is then proposed.
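One of the simplest PAPR reduction techniques of the kind surveyed above is amplitude clipping, shown here as an illustrative example; the abstract does not name which two techniques it simulates, so this sketch and its 3 dB clip ratio are assumptions. Clipping trades PAPR for exactly the in-band and out-of-band distortion the HPA discussion warns about.

```python
import numpy as np

def papr_db(x):
    # peak-to-average power ratio in dB (no oversampling; a real
    # measurement would oversample to catch peaks between samples)
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip(x, clip_ratio_db=3.0):
    """Amplitude clipping: limit the envelope to a threshold set
    relative to the RMS level, preserving each sample's phase."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    thresh = rms * 10 ** (clip_ratio_db / 20.0)
    mag = np.maximum(np.abs(x), 1e-12)          # avoid divide-by-zero
    return x * np.minimum(1.0, thresh / mag)

rng = np.random.default_rng(2)
sym = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), 64) / np.sqrt(2)
ofdm = np.fft.ifft(sym)
clipped = clip(ofdm)
print(papr_db(ofdm), papr_db(clipped))
```

Clipping bounds the PAPR near the chosen clip ratio, but the hard limiting is itself a memoryless nonlinearity, so it spreads spectral energy out of band, the same trade-off that motivates distortionless schemes such as PTS.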
