This paper presents a protocol for a relay configuration of one quantum CubeSat and two quantum drones positioned at distant places over the Earth, where: (a) an entangled pair is generated and distributed by the CubeSat between both drones located above the clouds, (b) each drone descends through the clouds with its respective entangled photon, and (c) each drone generates a new entangled photon pair, conserves one photon, and distributes the other to a Mobile Ground Station (MGS). These latter photons allow the teleportation of both of the CubeSat's entangled photons to the MGSs. Once on Earth, the CubeSat's entangled photons constitute a bridge for the teleportation of an arbitrary qubit between the MGSs. In this way, we solve the main problem of all quantum communication between a satellite and the Earth: the weather and other unfavorable environmental conditions. Finally, this paper evaluates the performance of the protocol, which first teleports the CubeSat's entangled photons and, by means of these, the final desired qubit, with implementations on the 16-qubit Melbourne processor of IBM Q Experience. This evaluation constitutes the first stage of a project that aims to communicate two distant points of the Earth at any time, regardless of the weather.
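Step (c) amounts to entanglement swapping: a Bell-state measurement on one photon of each pair leaves the two outer photons entangled. The abstract gives no circuit details, so the following is a minimal NumPy statevector sketch of the swapping step only; the qubit ordering, gate set, and the post-selected Bell outcome are assumptions for illustration:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])           # control = first (more significant) qubit
P0 = np.array([[1, 0], [0, 0]])           # projector onto |0>

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # |Phi+>
state = np.kron(bell, bell)                        # pairs (q0,q1) and (q2,q3)

# Bell-state measurement on the inner photons q1, q2
state = np.kron(I, np.kron(CNOT, I)) @ state            # CNOT q1 -> q2
state = np.kron(I, np.kron(H, np.kron(I, I))) @ state   # H on q1
proj = np.kron(I, np.kron(P0, np.kron(P0, I)))          # post-select outcome 00 (|Phi+>)
state = proj @ state
state /= np.linalg.norm(state)

outer = state[[0b0000, 0b0001, 0b1000, 0b1001]]    # amplitudes of q0,q3 with q1=q2=0
print(outer)    # ~ [0.707, 0, 0, 0.707]: q0 and q3 now share |Phi+>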
This paper introduces an Enhanced Boolean version of the Correlation Matrix Memory (CMM), which is useful for working with binary memories. A novel Boolean Orthonormalization Process (BOP) is presented to convert a non-orthonormal Boolean basis, i.e., a set of non-orthonormal binary vectors (in a Boolean sense), into an orthonormal Boolean basis, i.e., a set of orthonormal binary vectors (in a Boolean sense). This work shows that it is possible to improve the performance of the Boolean CMM thanks to the BOP algorithm. Besides, the BOP algorithm has many additional fields of application, e.g., steganography, Hopfield networks, bi-level image processing, etc. Finally, it is important to mention that BOP is an extremely stable and fast algorithm.
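The abstract does not spell out BOP itself, but the underlying Boolean CMM (a Willshaw-style associative memory) can be sketched: key-value pairs are stored by OR-ing Boolean outer products, and recall thresholds the matrix-vector product. A minimal NumPy sketch, with the storage rule and threshold as standard assumptions:

import numpy as np

def store(pairs, n_in, n_out):
    # Boolean CMM: superpose outer products with logical OR
    M = np.zeros((n_out, n_in), dtype=bool)
    for x, y in pairs:
        M |= np.outer(y, x).astype(bool)
    return M

def recall(M, x):
    # sum of active weights, thresholded by the number of active input bits
    s = M.astype(int) @ x.astype(int)
    return (s >= x.sum()).astype(int)

x1 = np.array([1, 0, 1, 0]); y1 = np.array([0, 1, 1])
x2 = np.array([0, 1, 0, 1]); y2 = np.array([1, 0, 1])
M = store([(x1, y1), (x2, y2)], 4, 3)
print(recall(M, x1), recall(M, x2))    # -> [0 1 1] [1 0 1]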
In this study, a new version of the quantum teleportation protocol is presented, which requires neither a Bell state measurement (BSM) module on the sender side (Alice), nor a unitary transform to reconstruct the teleported state on the receiver side (Bob), nor a disambiguation process through two classical bits that travel through a classical disambiguation channel located between sender and receiver. The corresponding theoretical deduction of the protocol, as well as the experimental verification of its operation for several examples of qubits through implementation on an optical table, complete the present study. Both the theoretical and experimental outcomes show a marked superiority in the performance of the new protocol over the original version, with more simplicity, lower implementation costs, and identical fidelity in its most complete version.
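For reference, the three modules that the new protocol eliminates are exactly the steps of the standard protocol, summarized here in its textbook form (this is the original version, not the authors' new one):

\[
|\psi\rangle_A\,|\Phi^+\rangle_{BC} = \tfrac{1}{2}\big(
|\Phi^+\rangle_{AB}\,|\psi\rangle_C +
|\Phi^-\rangle_{AB}\,Z|\psi\rangle_C +
|\Psi^+\rangle_{AB}\,X|\psi\rangle_C +
|\Psi^-\rangle_{AB}\,XZ|\psi\rangle_C \big).
\]

Alice's BSM on (A, B) selects one of the four Bell states, the two classical bits tell Bob which branch occurred, and Bob applies the corresponding inverse unitary (I, Z, X, or ZX) to recover \(|\psi\rangle\).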
This paper introduces a Quantum Correlation Matrix Memory (QCMM) and an Enhanced QCMM (EQCMM), which are useful for working with quantum memories. A version of the classical Gram-Schmidt orthogonalisation process in Dirac notation (called the Quantum Orthogonalisation Process: QOP) is presented to convert a non-orthonormal quantum basis, i.e., a set of non-orthonormal quantum vectors (called qudits), into an orthonormal quantum basis, i.e., a set of orthonormal qudits. This work shows that it is possible to improve the performance of the QCMM thanks to the QOP algorithm. Besides, the EQCMM algorithm has many additional fields of application, e.g., steganography, as a replacement for Hopfield networks, bi-level image processing, etc. Finally, it is important to mention that the EQCMM is extremely easy to implement in any firmware.
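The abstract does not reproduce QOP itself, but the classical Gram-Schmidt process in Dirac notation, which QOP adapts, takes the standard form

\[
|w_k\rangle = |v_k\rangle - \sum_{j=1}^{k-1} \langle e_j | v_k \rangle\, |e_j\rangle,
\qquad
|e_k\rangle = \frac{|w_k\rangle}{\sqrt{\langle w_k | w_k \rangle}} ,
\]

where the \(|v_k\rangle\) are the non-orthonormal qudits and the \(|e_k\rangle\) form the resulting orthonormal basis.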
We describe a novel method for removing noise (in the wavelet domain) of unknown variance from microarrays. The method is based on a smoothing of the coefficients of the highest subbands. Specifically, we decompose the noisy microarray into wavelet subbands, apply smoothing within each highest subband, and reconstruct a microarray from the modified wavelet coefficients. This process is applied a single time, and exclusively to the first level of decomposition, i.e., in most cases a multiresolution analysis is not necessary. Denoising results compare favorably to most of the methods in use at the moment.
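A minimal sketch of this single-level pipeline with PyWavelets; the 3x3 moving-average smoothing and the 'db2' mother wavelet are assumptions, since the abstract does not specify either:

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def denoise_microarray(img, wavelet='db2'):
    # one decomposition level only: approximation + 3 detail (highest) subbands
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    # smooth each highest subband (3x3 moving average is an assumption)
    cH, cV, cD = (uniform_filter(c, size=3) for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

noisy = np.random.rand(64, 64)    # stand-in for a noisy microarray image
print(denoise_microarray(noisy).shape)    # (64, 64)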
Super-resolution is nowadays used to produce a high-resolution image from several low-resolution noisy frames. In this work, we consider the problem of high-quality interpolation of a single noise-free image. Such images may come from different sources, i.e., they may be frames of videos, individual pictures, etc. On the other hand, in the encoder we apply a downsampling via bidimensional interpolation of each frame, and in the decoder we apply an upsampling by which we restore the original size of the image. If the compression ratio is very high, then we use a convolutive mask that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. In fact, the mentioned mask is coded inside the texture memory of a GPGPU.
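A CPU-side sketch of the encoder/decoder pair (the GPGPU texture-memory mapping is omitted); the bilinear interpolation order and the sharpening-mask coefficients are assumptions, since the abstract does not specify them:

import numpy as np
from scipy.ndimage import zoom, convolve

SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])    # assumed edge-restoring convolutive mask

def encode(img, factor=2):
    # downsampling via bidimensional (bilinear) interpolation
    return zoom(img, 1 / factor, order=1)

def decode(small, factor=2, deblur=True):
    up = zoom(small, factor, order=1)               # upsampling to original size
    return convolve(up, SHARPEN, mode='nearest') if deblur else up

img = np.random.rand(128, 128)
rec = decode(encode(img))
print(rec.shape)    # (128, 128)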
In this work, we present a comparison between different techniques of image compression. First, the image is divided into blocks, which are organized according to a certain scan. Later, several compression techniques are applied, combined or alone. Such techniques are: wavelets (Haar's basis), the Karhunen-Loève Transform, etc. Simulations show that the combined versions are the best, with lower Mean Squared Error (MSE), higher Peak Signal to Noise Ratio (PSNR), and better image quality, even in the presence of noise.
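As an illustration of the wavelet branch of the toolbox named above, a one-level 2D Haar transform of a single block (the 8x8 block size is an assumption):

import numpy as np

def haar2d(block):
    # one-level 2D Haar: average/difference along rows, then along columns
    def haar1d(x):
        return np.concatenate([(x[0::2] + x[1::2]) / 2,
                               (x[0::2] - x[1::2]) / 2])
    rows = np.apply_along_axis(haar1d, 1, block)
    return np.apply_along_axis(haar1d, 0, rows)

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = haar2d(block)
print(coeffs[:4, :4])    # LL subband: smooth 4x4 approximation of the block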
In this work, we developed the concept of supercompression, i.e., compression above the compression standard used. In this context, both compression rates are multiplied. In fact, supercompression is based on super-resolution. That is to say, supercompression is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, then we use a convolutive mask inside the decoder that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
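The multiplication of the two compression rates can be made concrete with an illustrative example (the numbers are assumptions, not results from the paper): downsampling by a factor of 2 in each dimension gives a spatial ratio of 4:1, and a codec at 0.5 bits per pixel on 8-bit images gives 16:1, so

\[
CR_{\mathrm{total}} = CR_{\mathrm{spatial}} \times CR_{\mathrm{bpp}} = 4 \times 16 = 64{:}1 .
\]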
In this work, we present a comparison between two techniques of image compression. In the first case, the image is divided into blocks, which are collected according to a zig-zag scan. In the second one, we apply the Fast Cosine Transform to the image, and then the transformed image is divided into blocks, which are collected according to a zig-zag scan too. Later, in both cases, the Karhunen-Loève transform is applied to the mentioned blocks. On the other hand, we present three new metrics based on eigenvalues for a better comparative evaluation of the techniques. Simulations show that the combined version is the best, with lower Mean Absolute Error (MAE) and Mean Squared Error (MSE), higher Peak Signal to Noise Ratio (PSNR), and better image quality. Finally, the new technique was far superior to JPEG and JPEG2000.
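A sketch of the shared back end: zig-zag collection of 8x8 blocks followed by a KLT computed from the empirical covariance of the block ensemble (the paper's three eigenvalue-based metrics are not reproduced here):

import numpy as np

def zigzag_indices(n=8):
    # traversal of an n x n block along anti-diagonals, alternating direction
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[1] if (p[0] + p[1]) % 2 else -p[1]))

def klt(blocks):
    # blocks: (num_blocks, 64) zig-zag vectors; KLT basis = eigenvectors of covariance
    C = np.cov(blocks, rowvar=False)
    w, V = np.linalg.eigh(C)                  # ascending eigenvalues
    V = V[:, ::-1]                            # strongest components first
    return blocks @ V, V                      # transformed blocks and basis

img = np.random.rand(64, 64)
zz = zigzag_indices()
blocks = np.array([[img[bi + i, bj + j] for (i, j) in zz]
                   for bi in range(0, 64, 8) for bj in range(0, 64, 8)])
coeffs, basis = klt(blocks)
print(coeffs.shape)    # (64, 64): 64 blocks, 64 KLT coefficients each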
This work deals with unsupervised image deblurring. We present a new deblurring procedure for images provided by low-resolution synthetic aperture radar (SAR), or simply by multimedia sources, in the presence of multiplicative (speckle) or additive noise, respectively. The method we propose is defined as a two-step process. First, we use an original technique for noise reduction in the wavelet domain. Then, the learning of a Kohonen self-organizing map (SOM) is performed directly on the denoised image to remove the blur from it. This technique has been successfully applied to real SAR images, and the simulation results are presented to demonstrate the effectiveness of the proposed algorithms.
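The Kohonen learning in the second step follows the standard SOM update rule (how the trained map is then used to remove the blur is the paper's contribution and is not detailed in this abstract): for input vector \(\mathbf{x}(t)\), weights \(\mathbf{w}_i(t)\), learning rate \(\alpha(t)\), and neighborhood function \(h_{ci}(t)\) centered on the best-matching unit \(c\),

\[
c = \arg\min_i \lVert \mathbf{x}(t) - \mathbf{w}_i(t) \rVert,
\qquad
\mathbf{w}_i(t+1) = \mathbf{w}_i(t) + \alpha(t)\, h_{ci}(t)\,\big(\mathbf{x}(t) - \mathbf{w}_i(t)\big).
\]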
The Gram-Schmidt Process (GSP) is used to convert a non-orthogonal basis (a set of linearly independent vectors) into an orthonormal basis (a set of orthogonal, unit-length vectors). The process consists of taking each vector and subtracting the components it has in common with the previous vectors. This paper introduces an Enhanced version of the Gram-Schmidt Process (EGSP) with inverse, which is useful for signal and image processing applications.
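A sketch of one natural reading of "GSP with inverse": alongside the orthonormal vectors, keep the projection coefficients so that the original basis can be recovered exactly. This QR-style bookkeeping is an assumption; the paper's EGSP may differ:

import numpy as np

def gsp_with_inverse(V):
    # columns of V: the non-orthogonal basis; returns Q (orthonormal) and R with V = Q R
    n, k = V.shape
    Q = np.zeros((n, k)); R = np.zeros((k, k))
    for j in range(k):
        w = V[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ V[:, j]     # component in common with previous vectors
            w -= R[i, j] * Q[:, i]          # subtract it
        R[j, j] = np.linalg.norm(w)
        Q[:, j] = w / R[j, j]
    return Q, R

V = np.array([[1., 1., 0.], [1., 0., 1.], [0., 1., 1.]])
Q, R = gsp_with_inverse(V)
print(np.allclose(Q @ R, V), np.allclose(Q.T @ Q, np.eye(3)))    # True True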
Since his famous discussions with Niels Bohr, Albert Einstein considered quantum entanglement (QE) a spooky action at a distance, due to the violation of locality necessary so that two entangled particles can share this effect in an instantaneous way despite being at a great distance from each other, i.e., not being local. In other words, a notification about the change of state in one of them could only cover the space that separates them at a speed greater than that of light, which we know is impossible according to the Theory of Relativity (TR). Besides, QE directly confronts the two main pillars of Physics, TR and Quantum Theory (QT), becoming the bone of contention between both theories. Quite the contrary, in this work we will see that QE is the meeting point of both theories, so much so that QE could be considered the cornerstone of the Theory of Everything (TOE). Consistent with this, the entangled particles retain a certain autonomy unknown to date, and in addition, they...
A quantum time-dependent spectrum analysis, or simply, quantum spectral analysis (QuSA), is presented in this work; it is based on the Schrödinger equation, a partial differential equation that describes how the quantum state of a non-relativistic physical system changes with time. In the classical world it is named bandwidth at time (BAT), and it is presented here in opposition to, and as a complement of, traditional frequency-dependent spectral analysis based on Fourier theory. Besides, BAT is a metric which assesses the impact of the flanks of a signal on its frequency spectrum, something not taken into account by Fourier theory, and even less in real time. Moreover, unlike all tools derived from Fourier theory (i.e., the continuous, discrete, fast, short-time, fractional, and quantum Fourier Transforms, as well as Gabor), BAT has the following advantages: a) compact support with excellent energy-output treatment, b) low computational cost, O(N) for signals and O(N^2) for images, c) no phase uncertainties (indeterminate phase for magnitude = 0), unlike the Discrete and Fast Fourier Transforms (DFT and FFT, respectively), d) among others. In fact, BAT constitutes one side of a triangle (which from now on is closed) consisting of the original signal in time, spectral analysis based on Fourier theory, and BAT. Thus a toolbox is completed, which is essential for all applications of Digital Signal Processing (DSP) and Digital Image Processing (DIP); and even, in the latter, BAT allows edge detection (called flank detection in the case of signals), denoising, despeckling, compression, and superresolution of still images. Such applications include signals intelligence and imagery intelligence. On the other hand, we present other DIP tools which are also derived from the Schrödinger equation. Besides, we discuss several examples of spectral analysis, edge detection, denoising, despeckling, compression, and superresolution in an important section on Applications and Simulations. Finally, we finish this work with a special section dedicated to Conclusions.
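The abstract does not give BAT's formula. As a plain stand-in (explicitly not the authors' metric) that shares the two properties named above, a first-difference flank energy can be computed in O(N) and localizes in time the flanks that a global Fourier magnitude spectrum hides:

import numpy as np

def flank_energy(x):
    # O(N) stand-in for a time-local flank measure: squared first differences
    return np.diff(x) ** 2

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t)
x[500:] += 1.0                      # a sharp flank at t = 0.5
e = flank_energy(x)
print(e.argmax())                   # 499: the flank's position in time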
In this paper, three techniques of internal image representation in a quantum computer are compared: the Flexible Representation of Quantum Images (FRQI), the Novel Enhanced Quantum Representation of digital images (NEQR), and Quantum Boolean Image Processing (QBIP). All conspicuous technical items are considered in this comparison for a complete analysis: i) performance as a Classical-to-Quantum (Cl2Qu) interface, ii) characteristics of the employed qubits, iii) sparsity of the used internal registers, iv) number and size of the required registers, v) quality in the recovery of outcomes, vi) number of required gates and the consequent accumulated noise, vii) decoherence, and viii) fidelity. These analyses and demonstrations are automatically extended to all variants of FRQI and NEQR. This study demonstrated the practical infeasibility of implementing FRQI and NEQR on a physical quantum computer (QPU), while QBIP proved extremely successful on: a) the four main quantum simulators on the cloud, b) two QPUs, and c) optical circuits from three labs. Moreover, QBIP also demonstrated its economy regarding the resources needed for its proper function and its great robustness (immunity to noise), among other advantages, in fact, without any exceptions.
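For context, the two representations found impractical are defined in the literature as follows (for a \(2^n \times 2^n\) image; FRQI stores the gray level of pixel \(i\) in an amplitude via an angle \(\theta_i \in [0, \pi/2]\), while NEQR stores its \(q\)-bit value \(f(i)\) in a basis-state register):

\[
|I\rangle_{\mathrm{FRQI}} = \frac{1}{2^{n}} \sum_{i=0}^{2^{2n}-1}
\big( \cos\theta_i\, |0\rangle + \sin\theta_i\, |1\rangle \big) \otimes |i\rangle,
\qquad
|I\rangle_{\mathrm{NEQR}} = \frac{1}{2^{n}} \sum_{i=0}^{2^{2n}-1}
|f(i)\rangle \otimes |i\rangle .
\]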
Teleportation is the most important and impactful tool in the arsenal of quantum communications, with a particular projection on the quantum internet. We propose a non-ambiguous alternative to the original teleportation protocol, which completely eliminates the classical disambiguation channel used by the original version. Experimental evidence on a quantum platform, via the IBM cloud, is provided to demonstrate its performance.
Instantaneous teleportation, i.e., the transmission and reconstruction over arbitrary distances of an unknown state without any type of disambiguation based on classical bits, is demonstrated, supporting the fact that instantaneous information transfer via an Einstein-Podolsky-Rosen channel is definitely possible. In other words, quantum correlations can be used to send signals, reinforcing the existence of an action at a distance and hence paving the way for a better understanding of quantum entanglement and its consequent impact on the quantum internet, as well as on a realistic relationship between Special Relativity and Quantum Mechanics.
Quantum stretching is a technique to make quasi-copies of an arbitrary qubit without violating the No-Cloning Theorem. These quasi-copies of the original qubit contain all the information of the original, but at the cost of a duplication in size each time the technique is applied. Basically, a stretched bit or subit is obtained by applying a single, unitary, and reversible gate, i.e., we can recover the original qubit from its subit. Quantum stretching will allow us simultaneous teleportation of the same subit to multiple destinations, with the consequent potential that this has for the quantum internet in 1-to-N configurations, always taking into account that the size of the state doubles every time the technique is applied. Finally, quantum stretching is particularly useful for making satellite bifurcations in the quantum internet context.
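The abstract does not specify which gate produces the subit; one natural candidate with exactly the properties listed (single, unitary, reversible, and doubling the size of the state without cloning it) is a CNOT acting on the qubit and a fresh ancilla, shown here as an assumption:

\[
\mathrm{CNOT}\big( (\alpha|0\rangle + \beta|1\rangle) \otimes |0\rangle \big)
= \alpha|00\rangle + \beta|11\rangle .
\]

The result is an entangled quasi-copy, not the product state \((\alpha|0\rangle + \beta|1\rangle)^{\otimes 2}\) forbidden by the No-Cloning Theorem, and applying the same CNOT again recovers the original qubit.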
In this draft, we present a new technique of quantum key distribution, where the initial key is teleported only the first time, and after that, the entanglement is held in a fictitious way. In the transmitter, the new key is built with the previous key and the plaintext, and with the new key and the plaintext we obtain the ciphertext. On the receiver side, with the previous key and the ciphertext we obtain the new key, and with it and the ciphertext we obtain the plaintext. In other words, it is as if we emulate successive teleportations of new keys that do not exist.
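A classical sketch of the key-chaining step. The combining functions are not specified in the abstract; XOR encryption with the running key and a hash-based key update are assumptions made here so that both sides derive the same chain without further teleportations:

import hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tx(prev_key: bytes, plaintext: bytes):
    cipher = xor(plaintext, prev_key)                        # encrypt with running key
    new_key = hashlib.sha256(prev_key + plaintext).digest()  # key update: key + plaintext
    return cipher, new_key

def rx(prev_key: bytes, cipher: bytes):
    plaintext = xor(cipher, prev_key)                        # recover plaintext first
    new_key = hashlib.sha256(prev_key + plaintext).digest()  # same update, no teleportation
    return plaintext, new_key

k = hashlib.sha256(b'initial teleported key').digest()       # stand-in for the teleported key
msg = b'attack at dawn'.ljust(32, b'.')                      # pad to the 32-byte key length
c, k_tx = tx(k, msg)
p, k_rx = rx(k, c)
print(p, k_tx == k_rx)    # both sides now hold the same next key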