Particulate matter in the atmosphere obscures visibility, causing a condition known as haze. Other natural phenomena such as mist, fog, and dust also obscure vision, because the scattering of light attenuates its intensity. All of these degrade image quality. Hazy images are problematic because they cannot be used reliably for computer vision and image processing applications such as pattern and object recognition. Dehazing improves the clarity and contrast of images, making them more suitable for computer vision and image processing. This paper presents a method for dehazing images using a CNN. The proposed model is trained on the D-HAZY [1] and SOTS [13] datasets, which contain a mix of natural and synthesized hazy images. To assess the model's performance, we employ the PSNR and SSIM metrics.
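The PSNR metric mentioned above has a standard closed form: 10·log10(MAX²/MSE), where MAX is the peak pixel value. A minimal NumPy sketch (the function name and toy images are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between a ground-truth and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: two 8-bit "images" differing by 1 at every pixel, so MSE = 1
a = np.zeros((4, 4), dtype=np.uint8)
b = np.ones((4, 4), dtype=np.uint8)
print(round(psnr(a, b), 2))  # 10 * log10(255^2) ≈ 48.13
```

SSIM is more involved (luminance, contrast, and structure terms over local windows); in practice it is usually taken from a library such as `skimage.metrics.structural_similarity` rather than reimplemented.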
Clouds play a vital role in Earth’s water cycle and the energy balance of the climate system; understanding them and their composition is crucial to comprehending the Earth–atmosphere system. The dataset “Understanding Clouds from Satellite Images” contains satellite-captured cloud pattern images downloaded from NASA Worldview, divided into four classes labeled Fish, Flower, Gravel, and Sugar. Semantic segmentation, also known as semantic labeling, is a fundamental yet complex problem in remote sensing image interpretation: assigning a semantic class label to each pixel of a given picture. In this study, we propose a novel approach for the semantic segmentation of cloud patterns. We began with a simple convolutional neural network-based model and worked our way up to a complex model consisting of a U-shaped encoder-decoder network, residual blocks, and an attention mechanism for efficient and accurate semantic segmentation. Being an architecture of the first of ...
Indian Journal of Computer Science and Engineering
Nowadays, traffic monitoring systems are at the frontline of the smart city movement, and traffic density estimation is useful to such a system. The system in this work estimates traffic density using a five-layered CNN with a variety of feature-map counts and filter sizes. There are 64, 64, 96, 96, and 96 feature maps for the five convolution/max-pooling pairs, with corresponding filter sizes of 5×5, 3×3, 5×5, 3×3, and 3×3. The proposed system divides traffic into three categories, High, Medium, and Low, based on images taken from traffic videos recorded by traffic surveillance cameras. To test the system, we used the WSDT (Washington State Department of Transportation) dataset of video footage recorded by highway CCTVs in Seattle, Washington. The model achieves 0.99 precision, 0.99 recall, a 0.99 F1-score, and 99.6% accuracy. By examining the dataset, we trained the model in a manner that produces better results.
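The conv/pool stack described above fixes the filter sizes and feature-map counts but not the input resolution or padding. Assuming unpadded ("valid") convolutions, stride-1 filters, 2×2 max-pooling, and a 224×224 input (all assumptions, not stated in the abstract), the spatial size at each stage can be sketched as:

```python
def conv_pool_shape(size: int, kernel: int, pool: int = 2) -> int:
    """Spatial size after an unpadded convolution followed by 2x2 max-pooling."""
    return (size - kernel + 1) // pool

# Per-pair filter sizes and feature-map counts, as listed in the abstract
kernels = [5, 3, 5, 3, 3]
feature_maps = [64, 64, 96, 96, 96]

size = 224  # assumed input resolution
for k, f in zip(kernels, feature_maps):
    size = conv_pool_shape(size, k)
    print(f"{f} maps of {size}x{size}")  # 110, 54, 25, 11, then 4
```

Under these assumptions the network ends with 96 feature maps of 4×4, which a classifier head could flatten into the three High/Medium/Low classes.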
A Question Answering System (QAS) automatically answers questions asked in natural language. Due to the varying dimensions and approaches available, QAS has a very diverse solution space, and a proper bibliometric study is required to map the entire domain. This work presents a bibliometric and literature analysis of QAS. Scopus and Web of Science, two well-known research databases, are used for the study. A systematic analytical study comprising performance analysis and science mapping is performed. In the performance analysis, recent research trends, seminal work, and influential authors are identified using statistical tools on research constituents. Science mapping, in turn, is performed using network analysis on citation and co-citation network graphs. Through this analysis, the domain’s conceptual evolution and intellectual structure are shown. We have divided the literature into four important architecture types and have provided the literature analysis of K...
All data, of every kind, need physical storage. There has been an explosion in the volume of images, video, and similar data types circulated over the internet. Internet users expect intelligible data even under multiple resource constraints such as bandwidth bottlenecks and noisy channels. Data compression is therefore becoming a fundamental problem across wider engineering communities. There has been related work on data compression using neural networks, and various machine learning approaches are currently applied to compression techniques and tested to obtain better lossy and lossless compression results. Efficient and varied research is already available for image compression; this is not the case for video compression. Because of the explosion of big data and the widespread use of cameras globally, around 82% of the data generated involve videos. Proposed approaches have used Deep Neural Networks ...
Computer networks have experienced explosive growth over the past few years, which has led to some severe congestion problems. Reliable protocols like TCP work well in wired networks, where loss occurs mostly because of congestion. However, in wireless networks, loss also occurs because of bit errors and handoffs. TCP responds to all losses with congestion control and avoidance algorithms, which degrades TCP’s end-to-end performance in wireless networks. This paper discusses issues and problems regarding the use of TCP in wireless networks and provides a comprehensive survey of schemes to improve TCP performance in wireless networks. Keywords—TCP, Mobile-IP, Wireless networks, Protocol design.
Improving the performance of the Transmission Control Protocol (TCP) in wireless environments has been an active research area. The main reason behind TCP's performance degradation is its inability to detect the actual cause of packet losses in a wireless environment. In this paper, we provide simulation results for TCP-P (TCP-Performance), an intelligent protocol for wireless environments that can distinguish the actual reasons for packet losses and apply an appropriate remedy to each. TCP-P deals with three main issues: congestion in the network, disconnection in the network, and random packet losses. TCP-P consists of a congestion avoidance algorithm and a disconnection detection algorithm, with some changes to the TCP header. If congestion occurs in the network, the congestion avoidance algorithm is applied; in it, TCP-P calculates the number of sent packets and received acknowledgments and accordingly sets a sending buffer value, so that i...
TCP is a reliable end-to-end protocol for transporting applications. TCP was originally designed for wired links, where the bit error rate is very low and packet losses can be assumed to be due to congestion in the network. But TCP performance in wireless networks suffers from significant issues such as bit error rate, bandwidth usability, disconnections and handoffs, congestion, and random packet losses. Traditional TCP treats all packet losses as a result of congestion and cannot recognize packet losses due to other causes, which significantly affects communication efficiency in wireless networks. A new protocol, "TCP-P, i.e. TCP-Performance," designed in this paper works in three stages to overcome the above issues. In the first stage it examines whether congestion is occurring by calculating the available bandwidth using the receiving rate of acknowledgements. In the second stage it detects an occurring disconnection by sensing the medium, and in th...
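The first stage, inferring congestion from the ACK receiving rate, can be sketched as follows. The function names, the 0.5 baseline threshold, and the decision rule are illustrative assumptions; the abstract does not give TCP-P's exact formula.

```python
def ack_receive_rate(acked_bytes: int, interval_s: float) -> float:
    """Estimate available bandwidth (bytes/s) from acknowledgements received
    over a measurement interval."""
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    return acked_bytes / interval_s

def classify_loss(rate_now: float, rate_baseline: float, threshold: float = 0.5) -> str:
    """If the ACK rate has collapsed relative to a recent baseline, treat the
    loss as congestion; otherwise treat it as a random wireless loss and avoid
    shrinking the congestion window unnecessarily."""
    return "congestion" if rate_now < threshold * rate_baseline else "random-loss"

# 12 kB acknowledged in 100 ms against a 1 MB/s baseline: well under half
print(classify_loss(ack_receive_rate(12_000, 0.1), 1_000_000))  # congestion
```

The design point this illustrates is the one both TCP-P abstracts make: only a congestion-classified loss should trigger congestion avoidance, while a random wireless loss should be retransmitted without backing off.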