This paper proposes a software-based parallel CRC (Cyclic Redundancy Check) algorithm called 'N-byte RCC (Repetition of Computation and Combination)'. The algorithm is an iterative process of message computation using 'slicing-by-4' and combination through 'zero block lookup tables', and it can parallelize the CRC calculation over any number of processors. To evaluate its performance, we employ two different communication architectures: a single-bus architecture and a 1-star topology NoC (Network on Chip) architecture. We explore the parallel algorithm on both architectures using a TLM (Transaction Level Model). The simulation results show that the proposed parallel CRC algorithm reduces processing time by 28 percent on the bus architecture and 38 percent on the NoC architecture compared to 'slicing-by-8', the fastest of the existing software-based algorithms. Furthermore, the 1-star NoC architecture of the parallel CRC outperforms the single-bus architecture regardless of the number of processors.
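The key enabler for this kind of parallelization is that a CRC is linear: the CRC of a whole message can be assembled from independently computed per-chunk CRCs. The following minimal sketch is not the paper's N-byte RCC implementation; it assumes a plain MSB-first CRC-32 (polynomial 0x04C11DB7, zero initial value, no final XOR) so the combination step is explicit, and it replaces the paper's precomputed zero-block lookup tables and slicing-by-4 kernel with a simple "feed zero bytes" advance.

# Minimal sketch: per-chunk CRCs computed independently, then combined.
POLY = 0x04C11DB7
TABLE = []
for i in range(256):
    c = i << 24
    for _ in range(8):
        c = ((c << 1) ^ POLY if c & 0x80000000 else c << 1) & 0xFFFFFFFF
    TABLE.append(c)

def crc32_raw(data, crc=0):
    """Table-driven CRC of `data`, continuing from state `crc`."""
    for b in data:
        crc = ((crc << 8) & 0xFFFFFFFF) ^ TABLE[((crc >> 24) ^ b) & 0xFF]
    return crc

def advance_through_zeros(crc, n_zero_bytes):
    """CRC state after appending n zero bytes; the paper replaces this
    linear-time loop with precomputed 'zero block lookup tables'."""
    return crc32_raw(bytes(n_zero_bytes), crc)

def parallel_crc(message, n_chunks):
    """Per-chunk CRCs (could run on separate processors), then combination."""
    size = -(-len(message) // n_chunks)             # ceiling division
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    partials = [crc32_raw(c) for c in chunks]       # independent computations
    result = 0
    for chunk, partial in zip(chunks, partials):
        result = advance_through_zeros(result, len(chunk)) ^ partial
    return result

msg = bytes(range(256)) * 8
assert parallel_crc(msg, 4) == crc32_raw(msg)       # matches the serial CRC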
This paper presents an improved Frame Level Redundancy Scrubbing (FLR) algorithm that uses a Cyclic Redundancy Check (CRC) as the error detection technique for configuration memory scrubbing; it is developed as a solution to mitigate Single Event Upsets (SEUs) through upset detection and correction. Fault injection was performed on FPGA configuration memory frames across a varying number of modules to emulate SEUs. The improved FLR algorithm was implemented and system-level simulation was carried out in MATLAB. Its performance was compared with that of the existing FLR algorithm using error correction time and energy consumption as metrics. The results show that the improved FLR algorithm achieves a 31.6% improvement in error correction time and a 61.1% improvement in energy consumption over the existing FLR algorithm.
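For readers unfamiliar with CRC-based scrubbing, the sketch below illustrates the general idea of a readback scrubbing loop rather than the paper's improved FLR algorithm: each frame is read back, its CRC is compared against a golden reference, and only mismatching frames are rewritten. The in-memory "configuration memory", the frame size, and the use of CRC-32 are assumptions for illustration only.

import zlib
import random

FRAME_BYTES = 101 * 4        # assumed frame size (e.g. 101 32-bit words)

def scrub(config_memory, golden_memory, golden_crcs):
    """Read back each frame, detect upsets via CRC, rewrite only bad frames."""
    corrected = 0
    for idx, frame in enumerate(config_memory):
        if zlib.crc32(frame) != golden_crcs[idx]:     # upset detected
            config_memory[idx] = golden_memory[idx]   # frame-level repair
            corrected += 1
    return corrected

# Toy configuration memory and its golden reference.
golden = [bytes(random.randrange(256) for _ in range(FRAME_BYTES)) for _ in range(16)]
crcs = [zlib.crc32(f) for f in golden]
config = list(golden)

# Emulate an SEU: flip one bit in one frame.
frame_idx, byte_idx, bit = 5, 17, 3
upset = bytearray(config[frame_idx])
upset[byte_idx] ^= 1 << bit
config[frame_idx] = bytes(upset)

print("frames corrected:", scrub(config, golden, crcs))   # -> 1
assert config == golden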
Performance analysis of two algorithms, MatrixPower and FactorPower, that generate all φ(2^r − 1)/r degree-r primitive polynomials (φ is Euler's totient function) is presented. MatrixPower generates each new degree-r primitive polynomial in O(r^4) to O(r^4 ln r) time. FactorPower generates each new degree-r primitive polynomial in ...
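As a quick sanity check of the counting formula on a small case (a standard fact, not a result of the paper):

\[
  \frac{\phi(2^4 - 1)}{4} = \frac{\phi(15)}{4} = \frac{8}{4} = 2,
\]
% and indeed the two degree-4 primitive polynomials over GF(2) are
\[
  x^4 + x + 1 \quad\text{and}\quad x^4 + x^3 + 1 .
\]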
In this paper, we introduce error correction to the Bluetooth Low Energy (BLE) standard by utilising data redundancy provided by the Cyclic Redundancy Check (CRC) code used to detect erroneous packets. We assume a scenario with an energy-constrained transmitter and a constraint-free infrastructure, which allows us to introduce additional signal processing at the receiving side while keeping the transmitter intact. A novel approach of applying iterative decoding techniques to the BLE CRC code is investigated in this work. By using these techniques and real BLE packets collected in an office environment, we show that by enabling CRC error correction, the sensitivity of the BLE receiver can be improved by up to 3 dB. At the same time, up to 60% of corrupted packets can be corrected, which directly translates to a significant reduction in the number of retransmissions and a noticeable energy saving.
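To give a flavour of CRC-based correction, the toy sketch below corrects a single flipped bit by exhaustive trial, which is far simpler than the iterative decoding techniques studied in the paper. It uses Python's CRC-32 rather than the BLE CRC-24, and the packet contents are made up; it is an illustration of the principle, not of the proposed receiver.

import zlib

def correct_single_bit(packet, expected_crc):
    """Return a corrected copy of `packet` if flipping exactly one bit makes
    its CRC match `expected_crc`, otherwise None."""
    if zlib.crc32(packet) == expected_crc:
        return packet                          # nothing to correct
    trial = bytearray(packet)
    for byte_idx in range(len(trial)):
        for bit in range(8):
            trial[byte_idx] ^= 1 << bit        # flip one candidate bit
            if zlib.crc32(trial) == expected_crc:
                return bytes(trial)
            trial[byte_idx] ^= 1 << bit        # undo the flip
    return None                                # not correctable this way

payload = b"BLE-like payload with a protected CRC"
crc = zlib.crc32(payload)

corrupted = bytearray(payload)
corrupted[7] ^= 0x10                           # emulate a single bit error
fixed = correct_single_bit(bytes(corrupted), crc)
assert fixed == payload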
GFP frame delineation is specified by the ITU-T in recommendation G.7041. At the transmitter, the Core Header Error Check (cHEC) field forms the third and fourth bytes of the GFP frame core header. The cHEC field is calculated from the first two bytes of the core header, i.e., ...
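The sketch below shows the kind of computation the cHEC involves: a CRC-16 over the two PLI bytes of the core header. The generator x^16 + x^12 + x^5 + 1 (0x1021), all-zero initial value, MSB-first processing, and the example PLI value are assumptions based on common CRC-CCITT practice; G.7041 gives the normative definition, including the scrambling applied to the core header before transmission.

def crc16(data, poly=0x1021, init=0x0000):
    """Bitwise MSB-first CRC-16 over `data`."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

pli = (512).to_bytes(2, "big")                       # hypothetical payload length indicator
core_header = pli + crc16(pli).to_bytes(2, "big")    # PLI followed by its cHEC
print(core_header.hex())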
In all types of data communication systems, errors may occur; error control is therefore necessary for reliable data communication. Error control involves both error detection and error correction. Traditionally, error detection is performed with Cyclic Redundancy Check (CRC) codes, and error correction is achieved by retransmitting the corrupted data block, a scheme popularly known as Automatic Repeat Request (ARQ). However, CRC codes can only detect errors after the entire block of data has been received and processed. In this work we use a new and "continuous" technique for error detection, namely Continuous Error Detection (CED). The "continuous" nature of the error detection comes from the use of arithmetic coding. CED improves the overall performance of communication systems because it can detect errors while the data block is still being processed. We focus only on ARQ-based transmission systems and show how the proposed CED technique can improve the throughput of ARQ...
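A back-of-the-envelope stop-and-wait model (ours, not the paper's analysis) makes the throughput argument concrete. Let T be the block transmission time, p the block error probability, and α the average fraction of the block processed before CED flags an error:

\[
  \mathbb{E}[t_{\text{CRC}}] = \frac{T}{1-p},
  \qquad
  \mathbb{E}[t_{\text{CED}}] = T + \alpha T\,\frac{p}{1-p}
                             = T\,\frac{1-p+\alpha p}{1-p},
\]
% so detecting errors mid-block improves throughput by a factor of
\[
  \frac{\mathbb{E}[t_{\text{CRC}}]}{\mathbb{E}[t_{\text{CED}}]}
  = \frac{1}{1-p+\alpha p} \;\ge\; 1 \qquad (0 \le \alpha \le 1).
\]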
A switched-capacitor logarithmic pipeline ADC scheme that does not require squaring or any other complex analog functions is described. This approach is ideal where a high dynamic range, but not a high peak SNDR, is required. A signed, 8-bit, 1.5-bit-per-stage logarithmic pipeline ADC is implemented in 0.18 µm CMOS. The 22 MS/s ADC achieves a measured DR of 80 dB and a measured SNDR of 36 dB, occupies 0.56 mm², and consumes 2.54 mW from a 1.62 V supply. The measured dynamic range figure of merit is 174 dB.
Communication is one of the basic necessities of human beings, and file transfer is one of its basic forms. Reliability is a key issue raised by the complex nature of networks and the growth of computer science. In this paper we devise a file transfer technique that identifies whether some portion of the file has been received corrupted and, if so, exactly which portion is corrupt. The technique provides reliability by eliminating corruption from a file while requiring less network bandwidth, since only the corrupted portion needs to be re-sent. Reliability is ensured with the help of the file signature generation method we devise in this paper. The strength of this technique is that it generates hashes which are not easy to break, thereby also ensuring the security of the file. We use TCP as the underlying protocol; although TCP is generally considered reliable, it does not in fact guarantee reliable transfer over the network, because it relies on a CRC that is still vulnerable to adverse network conditions and malicious attacks. Our technique operates at the application layer and addresses the reliability of the file transfer itself. We have also developed a prototype to test the integrity of our technique, and empirical results confirm its reliability. The emphasis of this paper is to provide users with corruption-free file transfer over the network, so that their time and valuable resources can be saved.
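The idea of locating the corrupt portion can be illustrated with per-block digests: the sender ships a list of block hashes alongside the file, and the receiver re-requests only the blocks whose hashes mismatch. The block size, the use of SHA-256, and the comparison logic below are our assumptions for illustration, not the signature-generation method devised in the paper.

import hashlib

BLOCK = 64 * 1024                              # assumed 64 KiB blocks

def signature(data):
    """List of per-block digests; corruption in block i changes entry i only."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def corrupt_blocks(sent_sig, received):
    """Indices of blocks whose digest no longer matches the sender's signature."""
    recv_sig = signature(received)
    return [i for i, (a, b) in enumerate(zip(sent_sig, recv_sig)) if a != b]

original = bytes(range(256)) * 2048            # 512 KiB test payload
sig = signature(original)

damaged = bytearray(original)
damaged[200_000] ^= 0xFF                       # corrupt a single byte
print(corrupt_blocks(sig, bytes(damaged)))     # -> [3]: only block 3 is re-sent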
In this paper, we present the performance of fixed decode-and-forward cooperative networks with relay selection over independent but not identically distributed Nakagami-m fading channels, for integer values of the fading severity parameter m. Specifically, closed-form expressions for the symbol error probability and the outage probability are derived using the statistical characteristics of the signal-to-noise ratio. We also perform Monte Carlo simulations to verify the analytical results.
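In the spirit of that verification, the sketch below compares a Monte-Carlo outage estimate with a closed-form expression for the simplified case of selecting the best of K independent, non-identically distributed Nakagami-m links with integer m (the instantaneous SNR of a link with severity m and average SNR W is Gamma-distributed with shape m and scale W/m). The link parameters and threshold are made up, and the full decode-and-forward relay-selection protocol of the paper is not modelled.

import math
import numpy as np

rng = np.random.default_rng(0)

links = [(1, 4.0), (2, 6.0), (3, 2.5)]   # hypothetical (m_k, average SNR) pairs
snr_th = 1.0                             # outage threshold
N = 200_000

# Simulation: outage occurs when even the best link falls below the threshold.
snrs = np.column_stack([rng.gamma(shape=m, scale=w / m, size=N) for m, w in links])
p_out_sim = np.mean(snrs.max(axis=1) < snr_th)

# Analysis: product of per-link Gamma CDFs; for integer m the CDF has the
# closed form 1 - exp(-m x / W) * sum_{k<m} (m x / W)^k / k!.
def gamma_cdf_int_m(x, m, w):
    a = m * x / w
    return 1.0 - math.exp(-a) * sum(a**k / math.factorial(k) for k in range(m))

p_out_ana = math.prod(gamma_cdf_int_m(snr_th, m, w) for m, w in links)
print(f"simulated {p_out_sim:.4f}  vs  analytical {p_out_ana:.4f}")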
In this paper, a hybrid turbo decoding algorithm is used in which the outer code, a Cyclic Redundancy Check (CRC) code, is not used for error detection as usual but for error correction and performance improvement. The algorithm effectively combines iterative decoding with Rate-Compatible Insertion Convolution Turbo decoding, treating the CRC code and the turbo code as an integrated whole in the decoding process. In addition, we propose an effective error detection method based on the normalized Euclidean distance to compensate for the loss of the error detection capability that would otherwise have been provided by the CRC code. Simulation results show that with the proposed approach, a 0.5-2 dB performance gain can be achieved for code blocks with short information lengths.
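One plausible form of such a normalized-Euclidean-distance test (our illustration; the paper's exact definition may differ) compares the received soft values with the re-encoded candidate codeword and declares the block erroneous when the distance exceeds a threshold:

\[
  D(\mathbf{r}, \hat{\mathbf{c}})
  = \frac{1}{N} \sum_{i=1}^{N} \bigl( r_i - \hat{c}_i \bigr)^2
  \;\; \gtrless \;\; \lambda ,
  \qquad \hat{c}_i \in \{-1, +1\}.
\]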
DVB-H offers reliable high-data-rate reception for mobile handheld and battery-powered devices. A link layer with error correction is defined to work on top of the DVB-T physical layer. The DVB-H standard suggests using Reed-Solomon coding combined with cyclic redundancy check error detection as the link layer forward error correction. However, more powerful decoding methods exist. In this paper, a detailed comparison of five different decoding strategies is presented, all of which are compatible with the current standard. The comparison is based on frame error, IP packet error and byte error rates after decoding. The effect of errors on the visual experience of a video stream is also analyzed.
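To see why decoding strategies that exploit CRC-based reliability information can outperform plain error-only decoding, recall the standard Reed-Solomon error/erasure bound (textbook material, not a contribution of the paper), applied to the RS(255,191) code of DVB-H MPE-FEC: a decoder correcting e unknown errors and s marked erasures succeeds whenever

\[
  2e + s \;\le\; n - k \;=\; 255 - 191 \;=\; 64 ,
\]
% so error-only decoding handles at most 32 corrupted bytes per codeword,
% while a decoder fed CRC-derived erasure marks can recover up to 64.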
A suitcase-size Ka-band satellite terminal has been developed by CRC for data rates up to 2.048 Mbps, for demonstrations over the Advanced Communications Technology Satellite (ACTS) in cooperation with the USAF at Rome Labs, NY. A 0.5 m offset-fed ...
Fade duration, fade slope and cloud absorption are analyzed using results from measurements at CRC of the 20-GHz beacon of the Anik F2 satellite and from a 12-frequency profiling radiometer. The impact of low-pass filtering on fade duration and slope statistics derived from Anik F2 beacon data is discussed. The performance of four models predicting fade duration statistics, including one