
Quantum Computing Report


QUANTUM COMPUTING PROJECT REPORT

Submitted by: Abhay Pratap (285), Uday Kumar (219)

CONTENTS
Introduction
Concepts of Quantum Computing
History
What Makes Quantum Computing Better than Classical Computing
Real-Time Applications
Problems
Disadvantages
The Limits of Quantum Computing
Current Status
Different Algorithms Used in Quantum Computing
Problems in Execution
Applications of Quantum Computers
Conclusion

Abstract
Imagine a computer whose memory is exponentially larger than its apparent physical size; a computer that can manipulate an exponential set of inputs simultaneously; a computer that computes in the twilight zone of Hilbert space. You would be thinking of a quantum computer. Relatively few and simple concepts from quantum mechanics are needed to make quantum computers a possibility. The subtlety has been in learning to manipulate these concepts. Is such a computer an inevitability, or will it be too difficult to build? The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error-correcting codes, Turing machines, and computational complexity. The principles of quantum mechanics are then outlined, and the EPR experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, data compression, quantum error correction, the universal quantum computer, and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource. Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, high-Q cavities, and NMR. The review concludes with an outline of the main features of quantum information physics and avenues for future research. Reference: http://seminarprojects.com/Thread-quantum-computing-fullreport#ixzz2gfpIRfPT

Contents
INTRODUCTION
CONCEPTS OF QUANTUM COMPUTING
HISTORY
Difference Between Classical and Quantum Computing
What Classical Computers Can and Cannot Do
Real-Time Applications and Problems
Problems
Disadvantages and Limitations
Limitations
Current Status
Different Algorithms of Quantum Computer
Applications of Quantum Computers
OUR COMMENTS

INTRODUCTION
Quantum computing is a combination of physics, mathematics, and computer science. Quantum algorithms can exponentially speed up certain classical computations. The basic paradigm for quantum algorithms is the quantum circuit model, which is composed of the basic quantum units of information (qubits) and quantum gates. By interacting with each other while being isolated from the external environment, qubits can perform certain calculations exponentially faster than conventional computers. The quantum computer, following the laws of quantum physics, would gain enormous processing power through the ability to be in multiple states, and to perform tasks using all possible permutations simultaneously. By doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size. Reference: http://seminarprojects.com/Thread-quantum-computing-fullreport#ixzz2gfrQEGqv

CONCEPTS OF QUANTUM COMPUTING

Different Elements of Quantum Computing
We can think of a quantum computer as a classical computer with a quantum circuit attached to it, with some kind of interface between conventional and quantum logic. Since there are only a few things a quantum computer does better than a classical computer, it makes sense to do the bulk of the processing on the classical machine.

1) Bits and Qubits
These are the nuts and bolts of quantum computing: qubits, gates, and circuits. Quantum computers perform operations on qubits, which are analogous to conventional bits but have an additional property in that they can be in a superposition. A quantum register with 3 qubits can store 8 numbers in superposition simultaneously, and a 250-qubit register holds more numbers (superposed) than there are atoms in the universe.

2) Single Qubit
Classical computers use two discrete states to represent a unit of information; this state is called a binary digit (or bit for short). A bit has the following two values: 0 and 1. There is no intermediate state between them, i.e. the value of the bit cannot be in a superposition. Quantum bits, or qubits, can on the other hand be in a state between 0 and 1, but only during the computational phase of a quantum operation. When measured, a qubit collapses to either |0> or |1>. The | > symbolic notation is part of the Dirac notation.

3) Multiple Qubits
The potential amount of information available during the computational phase grows exponentially with the size of the system, i.e. the number of qubits. This is because if we have n qubits the number of basis states is 2^n. E.g. if we have two qubits forming a quantum register, then there are four (= 2^2) computational basis states: |00>, |01>, |10>, and |11>. Reference: http://seminarprojects.com/Thread-quantum-computing-full-report#ixzz2gfsPNnnE
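The exponential growth of the state space can be illustrated with a small simulation. The sketch below (a classical NumPy simulation, not a real quantum register) builds a 3-qubit register as a length-2^3 vector of amplitudes:

```python
import numpy as np

# A single qubit is a length-2 complex vector; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition (|0> + |1>)/sqrt(2): both amplitudes are 1/sqrt(2).
plus = (ket0 + ket1) / np.sqrt(2)

# An n-qubit register is the tensor (Kronecker) product of single-qubit states,
# so its state vector has 2**n amplitudes.
n = 3
register = plus
for _ in range(n - 1):
    register = np.kron(register, plus)

print(len(register))          # 8 amplitudes for 3 qubits
print(np.abs(register)**2)    # each basis state measured with probability 1/8
```

Note that the classical simulation must store all 2^n amplitudes explicitly, which is exactly what becomes infeasible for a 250-qubit register.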

The following concepts are important for quantum computing:

1) Superposition
Superposition means a system can be in two or more of its states simultaneously. For example, a single particle can be traveling along two different paths at once. This implies that the particle has wave-like properties, which can mean that the waves from the different paths can interfere with each other. Interference can cause the particle to act in ways that are impossible to explain without these wave-like properties. The ability of the particle to be in a superposition is where we get the parallel nature of quantum computing: if each of the states corresponds to a different value, then, if we have a superposition of such states and act on the system, we effectively act on all the states simultaneously.

2) Entanglement
In 1935 Einstein, Podolsky, and Rosen demonstrated a paradox in an attempt to refute the undefined nature of quantum systems. The results of their thought experiment seemed to show that quantum systems were defined, having a local state BEFORE measurement. Although the original hypothesis was later proven wrong (i.e. it was proven that quantum systems do not have a local state before measurement), the effect they demonstrated was still important, and later became known as entanglement. Entanglement is the ability of pairs of particles to interact over any distance instantaneously. Particles don't exactly communicate, but there is a statistical correlation between the results of measurements on each particle that is hard to understand using classical physics. To become entangled, two particles are allowed to interact; they then separate and, on measuring, say, the velocity of one of them (regardless of the distance between them), we can be sure of the value of the velocity of the other one (before it is measured). The reason we say that they communicate instantaneously is because they store no local state and only have a well-defined state once they are measured. Because of this limitation, entangled particles can't be used to transmit classical messages faster than the speed of light, as we only know the states upon measurement. Entanglement has applications in a wide variety of quantum algorithms and machinery.

3) Uncertainty
The quantum world is irreducibly small, so it's impossible to measure a quantum system without having an effect on that system, as our measurement device is also quantum mechanical. As a result, there is no way of accurately predicting all of the properties of a particle. There is a trade-off: the properties occur in complementary pairs (like position and momentum, or vertical spin and horizontal spin), and if we know one property with a high degree of certainty then we must know almost nothing about the other property. That unknown property's behavior is essentially random. An example of this is a particle's position and velocity: if we know exactly where it is, then we know nothing about how fast it is going. This indeterminacy is exploited in quantum cryptography. Reference: http://seminarprojects.com/Thread-quantum-computing-fullreport#ixzz2gfuUaQbH

Data Retrieval
In general, an n-qubit register can represent the numbers 0 through 2^n − 1 simultaneously. If we attempt to retrieve the values represented within a superposition, the superposition randomly collapses to represent just one of the original values.

Relationships Among Data: Entanglement
Entanglement is the ability of quantum systems to exhibit correlations between states within a superposition. Imagine two qubits, each in the state |0> + |1> (a superposition of 0 and 1). We can entangle the two qubits such that the measurement of one qubit is always correlated to the measurement of the other qubit.

Quantum Gates
Quantum gates are similar to classical gates, but do not have a degenerate output, i.e. their original input state can be derived uniquely from their output state: they must be reversible. This means that a deterministic computation can be performed on a quantum computer only if it is reversible. Luckily, it has been shown that any deterministic computation can be made reversible. Reference: http://seminarprojects.com/Thread-quantum-computing-fullreport#ixzz2gfvlQgcK
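Reversibility is easy to check for a concrete gate. The sketch below (a NumPy simulation) applies the standard CNOT gate twice and recovers the input exactly, and also shows how CNOT entangles a superposed control qubit with its target:

```python
import numpy as np

# CNOT acts on 2 qubits: it flips the target qubit when the control is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Reversibility: applying the gate twice restores any input state,
# because quantum gates are unitary.
state = np.array([0, 0, 1, 0], dtype=complex)   # the basis state |10>
flipped = CNOT @ state                           # becomes |11>
restored = CNOT @ flipped                        # back to |10>
print(np.allclose(restored, state))              # True

# Entangling example: CNOT on ((|0>+|1>)/sqrt(2)) tensor |0> gives a Bell state.
plus0 = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0]))
bell = CNOT @ plus0
print(np.round(np.abs(bell)**2, 3))  # probability 0.5 on |00> and |11>: correlated outcomes
```

The Bell state at the end is exactly the kind of correlated superposition described above: measuring either qubit immediately fixes the other.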

HISTORY:
1982: Feynman proposed the idea of creating machines based on the laws of quantum mechanics instead of the laws of classical physics.
1985: David Deutsch developed the quantum Turing machine, showing that quantum circuits are universal.
1994: Peter Shor came up with a quantum algorithm to factor very large numbers in polynomial time.
1996: Lov Grover developed a quantum search algorithm with O(√N) complexity.
1998: 2-qubit register developed.
2001: First execution of Shor's algorithm at IBM's Almaden Research Center and Stanford University.
2005: First qubyte (8 qubits) created.
2006: Experimental 12-qubit quantum computer built.
2009: Yale creates a solid-state quantum processor.
2011: D-Wave announces a commercial quantum computer.

Difference Between Classical and Quantum Computing


What the Quantum Can Do
o If we grant that P ≠ NP, then only one hope remains for solving NP-complete problems in polynomial time: namely, to broaden what we mean by "computer". At first sight, quantum mechanics would appear to provide just the kind of resources needed. Quantum mechanics makes it possible to store and manipulate a vast amount of information in the states of a relatively small number of particles. To see how this comes about, imagine that we have 1,000 particles and that each particle, when measured, can be found to be either spinning up or spinning down. For our purposes, what it means for a particle to spin up or down is irrelevant; all that matters is that there is some property of the particle that has one of two values when measured.
o To describe the quantum state of this collection of particles, one must specify a number for every possible result of measuring the particles. These numbers are called the amplitudes of the possible outcomes and relate to each outcome's probability, but unlike probabilities, quantum amplitudes can be positive or negative (in fact, they are complex numbers). For example, an amplitude is needed for the possibility that all 1,000 particles will be found spinning up, another amplitude for the possibility of finding that the first 500 particles are spinning up and that the remaining 500 are spinning down, and so on. There are 2^1,000 possible outcomes, or about 10^300, so that is how many numbers are needed: more than there are atoms in the visible universe! The technical terminology for this situation is that the 1,000 particles are in a superposition of those 10^300 states.
o We can store 10^300 numbers on our 1,000 particles simultaneously. Then, by performing various operations on the particles and on some auxiliary ones (perhaps hitting them with a sequence of laser pulses or radio waves), we can carry out an algorithm that transforms all 10^300 numbers (each one a potential solution) at the same time.
o If at the end of doing that we could read out the particles' final quantum state accurately, we really would have a magic computer: it would be able to check 10^300 possible solutions to a problem, and at the end we could quickly discern the right one. Unfortunately, there is a catch. When the particles are measured (as is necessary to read out their final state), the rules of quantum mechanics dictate that the measurement will pick out just one of the 10^300 possibilities at random and that all the others will then disappear. (To go back to the quantum slacks developed at Haggar, if you tried to wear them you would find yourself in either formal or casual attire, not both.) We would seem to be no better off than if we used a classical computer and tried out one randomly chosen possible solution: in either case, we end up knowing about only one such possible solution.

So a good quantum computer algorithm would ensure that computational paths leading to a wrong answer have amplitudes of opposite signs, so that they cancel out through destructive interference. It would also ensure that the paths leading to a correct answer all have amplitudes with the same sign, which yields constructive interference and thereby boosts the probability of finding them when the particles are measured at the end.
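A toy calculation makes the sign argument concrete. Assume two computational paths reach the same outcome with amplitude 0.5 each (hypothetical numbers chosen purely for illustration):

```python
# Two computational paths lead to the same measurement outcome, each with
# amplitude 0.5. Quantum mechanics adds amplitudes BEFORE squaring, so the
# relative sign decides whether the paths cancel or reinforce.
a1, a2 = 0.5, 0.5

constructive = abs(a1 + a2) ** 2   # (0.5 + 0.5)^2 = 1.0: probability boosted
destructive  = abs(a1 - a2) ** 2   # (0.5 - 0.5)^2 = 0.0: paths cancel entirely
classical    = a1**2 + a2**2       # 0.25 + 0.25 = 0.5: probabilities just add

print(constructive, destructive, classical)  # 1.0 0.0 0.5
```

Classically, probabilities can only add; only quantum amplitudes can cancel, which is the mechanism a good quantum algorithm exploits.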

What Classical Computers Can and Cannot Do


Computer scientists categorize problems according to how many computational steps it would take to solve a large example of the problem using the best algorithm known. The problems are grouped into broad, overlapping classes based on their difficulty. Three of the most important classes are listed below. Contrary to myth, quantum computers are not known to be able to solve efficiently the very hard class called NP-complete problems.

P PROBLEMS: Ones computers can solve efficiently, in polynomial time
Example: Given a road map showing n towns, can you get from any town to every other town? For a large value of n, the number of steps a computer needs to solve this problem increases in proportion to n^2, a polynomial. Because polynomials increase relatively slowly as n increases, computers can solve even very large P problems within a reasonable length of time.

NP PROBLEMS: Ones whose solutions are easy to verify
Example: You know an n-digit number is the product of two large prime numbers, and you want to find those prime factors. If you are given the factors, you can verify that they are the answer in polynomial time by multiplying them. Every P problem is also an NP problem, so the class NP contains the class P within it. The factoring problem is in NP but conjectured to be outside of P, because no known algorithm for a standard computer can solve it in only a polynomial number of steps. Instead, the number of steps increases exponentially as n gets bigger.
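The asymmetry between finding and verifying factors can be shown in a few lines of Python (the numbers here are toy values chosen for illustration):

```python
# Verifying an NP witness is cheap: multiplying the claimed prime factors
# takes time polynomial in the number of digits.
def verify(p, q, N):
    return p * q == N

print(verify(3, 47, 141))   # True: checked with one multiplication

# Finding the factors without a hint means searching. Trial division tries
# up to sqrt(N) candidates, and sqrt(N) grows exponentially in the number
# of digits of N, so this approach becomes hopeless for large inputs.
def factor(N):
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d, N // d
        d += 1
    return None

print(factor(141))          # (3, 47)
```

For a 2,048-bit RSA modulus, the verification above would still take microseconds, while the search loop would run longer than the age of the universe.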

Real-Time Applications and Problems

Problems :
Unfortunately, there are a number of practical problems that are preventing this from happening:
1. The electron only remains in its excited state for about a microsecond before it falls to the ground state. Bearing in mind that the required duration of each laser pulse is around 1 nanosecond, there is a limit to the number of computational steps that can be made before information is lost.
2. Constructing quantum dots is a very difficult process because they are so small. A typical quantum dot measures just 10 atoms (1 nanometer) across. The technology needed to build a computer from these dots doesn't yet exist.

Disadvantages and Limitations:

1. Security and Privacy Issues:


Password cracking would become much easier with quantum computing: a sufficiently large quantum computer could break many widely used types of encryption, such as RSA.

2. Moral, ethical, social, and economic issues:


a. Growing too much dependency on machines b. Advanced technology falling into evil hands c. Economic division: who can/cannot afford technology

Limitations
1. Classical decidability is the same as quantum: a quantum computer can decide exactly the same class of problems as a classical computer; the only difference is the cost.
2. Classical computation can simulate quantum computation in polynomial space: quantum storage (qubits) is only useful during the quantum computation itself.
3. Lower bounds exist for quantum computations (OR, AND, PARITY: N/2).

Current status

1. D-Wave Defies World of Critics With First Quantum Cloud

Geordie Rose has a Ph.D. in quantum physics, but he's also a world champion in Brazilian jiu-jitsu and a Canadian national champion wrestler. That may seem like an odd combination, but this dual background makes him the perfect fit for his chosen profession. Rose is the CTO and founder of D-Wave. He calls it the world's only quantum computer company, but the world's quantum computer experts don't agree with him. The result is a nearly 10-year fight to prove each other wrong, and at least in some ways, Geordie Rose is winning. The quantum computer is the holy grail of tech research. The idea is to build a machine that uses the mind-bending properties of very small particles to perform calculations that are well beyond the capabilities of machines here in the world of classical physics. But it's still not completely clear that a true quantum computer can actually be built. There's no shortage of quantum physicists, mathematicians, and computer scientists who say that D-Wave's machine is no quantum computer. "D-Wave's technology has been an enigma, in a negative sense," says Greg Kuperberg, a math professor at the University of California, Davis. Just this week, physicists at the University of New South Wales unveiled new research they hope will eventually lead to the first quantum computer. http://www.wired.com/wiredenterprise/2012/02/dwave-quantum-cloud/all/

2. Google's Quantum Computer Proven To Be Real Thing:


D-Wave, the company that built the thing, calls it the world's first quantum computer, a seminal creation that foretells the future of mathematical calculation. But many of the world's experts see it quite differently, arguing the D-Wave machine is something other than the computing holy grail the scientific community has sought since the mid-1980s. No doubt, the argument will continue. But today, researchers at the University of Southern California published a paper that comes that much closer to showing the D-Wave is indeed a quantum computer. USC houses and operates the D-Wave system owned by Lockheed, and the researchers, led by Daniel Lidar, a professor of electrical engineering, chemistry, and physics, say they have at least shown the machine is not using a computing model known as simulated annealing, which obeys the laws of classical physics (the physics of everyday life) rather than the more elusive properties of quantum physics.
http://www.wired.com/wiredenterprise/2013/06/d-wave-quantum-computer-usc/

3. Google, NASA Open New Lab to Kick Tires on Quantum Computer

If machines are going to become as smart as Google and NASA want them to be, they may need a whole new type of computing to get them there. Quantum computing, that is. So today Google said it's opening a lab, complete with a quantum computer, called the Quantum Artificial Intelligence Lab. It's hosted at NASA's Ames Research Center, located just down Highway 101 from Google's Mountain View headquarters, and run in conjunction with the Universities Space Research Association, a non-profit group devoted to space science. The lab will operate a 512-qubit quantum computer called the D-Wave Two, a machine that's also being tested out by Lockheed Martin. Google has had some success using its vast computing resources to build machine learning into services such as voice and image recognition, but this work is incredibly compute-intensive. Although it's still in the early days of experimentation, quantum computing could herald a new era of number-crunching. That's because it uses quantum physics to break computer processing out of the binary computing paradigm that has dominated for the past half-century. Instead of binary bits, these computers measure qubits, which can simultaneously represent many more values. http://www.wired.com/wiredenterprise/2013/05/google-dwave/

4. Research Gives New Permanence to Quantum Memory:


Quantum computers are real, but thanks to the fragility of quantum information, they can't yet do anything you couldn't do faster on a normal computer. Now, a team of researchers at the University of Sydney and Dartmouth College have found a way to make quantum information more reliable. "In these superconducting systems, the quantum information only persists for about 100 microseconds, a tiny fraction of a second," says Dr. Michael J. Biercuk, director of the Quantum Control Laboratory in the University of Sydney's School of Physics and ARC Centre for Engineered Quantum Systems. This information decay, called decoherence, is a problem even when information is idle. But Biercuk and his colleagues have found a way to make quantum information persist for several hours. Their research will be published on Wednesday in Nature Communications. Quantum computing takes advantage of the unique properties of quantum particles, creating something called qubits in order to do calculations. Researchers believe that this new breed of computer could one day solve certain types of problems in a fraction of the time today's classical computers can, and major progress has been made towards that goal. For example, Google and NASA recently bought a machine created by the Canadian company D-Wave, which the inventors claim is a working quantum computer. But many scientists remain unconvinced that the D-Wave machine can outperform traditional computers, if it's even a quantum computer at all. Others, such as IBM, have built proof-of-concept quantum computers, but they are all held back by decoherence.

http://www.wired.com/wiredenterprise/2013/06/quantum-memory/

Different Algorithms of Quantum Computer

1. Shor's Algorithm:
Shor's algorithm, named after mathematician Peter Shor, is a quantum algorithm (an algorithm which runs on a quantum computer) for integer factorization, discovered in 1994. Informally, it solves the following problem: given an integer N, find its prime factors. On a quantum computer, to factor an integer N, Shor's algorithm runs in polynomial time (the time taken is polynomial in log N, which is the size of the input). Specifically, it takes time O((log N)^3), demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is thus in the complexity class BQP. This is exponentially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in subexponential time: about O(e^((log N)^(1/3) (log log N)^(2/3))). The efficiency lies in the efficiency of the quantum Fourier transform, and modular exponentiation by squaring. Given a quantum computer with a sufficient number of qubits, Shor's algorithm can be used to break the widely used public-key cryptography scheme known as RSA. RSA is based on the assumption that factoring large numbers is computationally infeasible. So far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor in polynomial time. However, Shor's algorithm shows that factoring is efficient on a quantum computer, so an appropriately large quantum computer can break RSA. It was also a powerful motivator for the design and construction of quantum computers and for the study of new quantum computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.

In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5, using an NMR implementation of a quantum computer with 7 qubits. [2] However, some doubts have been raised as to whether IBM's experiment was a true demonstration of quantum computation, since no entanglement was observed. [3] Since IBM's implementation, several other groups have implemented Shor's algorithm using photonic qubits, emphasizing that entanglement was observed.
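The classical skeleton of Shor's algorithm can be sketched by replacing the quantum period-finding step with brute force (feasible only for tiny N, which is exactly why a quantum computer is needed for large ones):

```python
from math import gcd

def order(x, N):
    # Smallest r > 0 with x**r ≡ 1 (mod N). On a quantum computer this step
    # is done by quantum period finding; here we brute-force it classically.
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def shor_classical(x, N):
    # The classical wrapper of Shor's algorithm for a chosen x.
    if gcd(x, N) != 1:
        return gcd(x, N)            # lucky guess: x already shares a factor
    r = order(x, N)
    if r % 2 == 1:
        return None                 # odd order: retry with another x
    f = gcd(pow(x, r // 2) - 1, N)
    return f if 1 < f < N else None

# Factor 15, the same number IBM factored in 2001:
print(shor_classical(2, 15))  # order of 2 mod 15 is 4; gcd(2**2 - 1, 15) = 3
```

Everything here runs in polynomial time except `order`, whose brute-force loop is exponential in the number of digits of N; that one subroutine is what the quantum Fourier transform accelerates.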

2. Grover's Algorithm

Grover's algorithm is a quantum algorithm for searching an unsorted database with N entries in O(√N) time and using O(log N) storage space. Lov Grover formulated it in 1996. In models of classical computation, searching an unsorted database cannot be done in less than linear time. Grover's algorithm illustrates that in the quantum model searching can be done faster than this; in fact, its time complexity O(√N) is asymptotically the fastest possible for searching an unsorted database in the quantum model. It provides a quadratic speedup, unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts. However, even quadratic speedup is considerable when N is large.
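Grover's iteration is simple enough to simulate with explicit amplitudes. In this sketch (a NumPy simulation; the marked index 11 is an arbitrary choice for illustration), the marked entry's probability is amplified over roughly (π/4)·√N steps:

```python
import numpy as np

# Grover search over N = 2**n items, simulated with an explicit amplitude vector.
n, marked = 4, 11
N = 2 ** n

state = np.full(N, 1 / np.sqrt(N))               # start in uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~O(sqrt(N)) iterations
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the marked amplitude's sign
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

print(np.argmax(np.abs(state) ** 2))     # 11: the marked item now dominates
```

After just 3 iterations (versus up to 16 classical lookups), measuring the register returns the marked item with probability above 95%.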

3. Deutsch–Jozsa Algorithm
The Deutsch–Jozsa algorithm is a quantum algorithm, proposed by David Deutsch and Richard Jozsa in 1992, with improvements by Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca in 1998. Although of little practical use, it is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm. It is also a deterministic algorithm, meaning that it always produces an answer, and that answer is always correct. In the Deutsch–Jozsa problem, we are given a black-box quantum computer, known as an oracle, that implements the function f : {0,1}^n => {0,1}. In layman's terms, it takes n-digit binary values as input and produces either a 0 or a 1 as output for each such value. We are promised that the function is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half). The task then is to determine if f is constant or balanced by using the oracle.
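The whole algorithm can be simulated for small n. In this sketch the oracle is represented as a phase flip (-1)^f(x); after the final layer of Hadamard gates, the amplitude of |0...0> is ±1 when f is constant and 0 when f is balanced, so a single oracle query decides the promise:

```python
import numpy as np

def deutsch_jozsa(f_values):
    # Decide constant vs balanced with one (simulated) oracle query.
    # f_values: the 0/1 outputs of f over all 2**n inputs, in order.
    N = len(f_values)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    while Hn.shape[0] < N:                 # build H tensored n times
        Hn = np.kron(Hn, H)

    state = np.zeros(N)
    state[0] = 1                           # start in |00...0>
    state = Hn @ state                     # uniform superposition over inputs
    state = state * (-1) ** np.array(f_values)  # phase oracle (-1)^f(x)
    state = Hn @ state                     # interfere the branches
    # Amplitude of |00...0> is ±1 if f is constant, exactly 0 if balanced.
    return "constant" if abs(state[0]) > 0.5 else "balanced"

print(deutsch_jozsa([0, 0, 0, 0]))         # constant
print(deutsch_jozsa([0, 1, 1, 0]))         # balanced
```

A deterministic classical algorithm would need 2^(n-1) + 1 queries in the worst case; here the interference step answers after one.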

Applications of Quantum Computers

1. Factorizing RSA


The actual algorithm can then be constructed as follows:
1. Find x coprime to N.
2. Using quantum period finding, obtain the order r.
3. If r does not satisfy the conditions from 3.1, retry by returning to step 1.
4. Return a factor of N from GCD(x^(r/2) − 1, N).

It should be noted that all the steps can be performed on a classical computer, as they are classical by their nature. The only exception is step 2. This factorization relies on Quantum Period Finding (QPF) to do all the heavy lifting. QPF works by first calculating the 2^j powers of x modulo N: x, x^2, x^4, x^8, ..., x^(2^(n−1)) mod N. This computation is performed by a square-and-multiply algorithm, also known as modular exponentiation or exponentiation by squaring. Next, a register, denoted by z, is prepared in a superposition of states and the following operation is performed:

x^z mod N = (x^(z_0·2^0) mod N)(x^(z_1·2^1) mod N) ... (x^(z_(n−1)·2^(n−1)) mod N).

Since, naively speaking, every qubit of z represents both 0 and 1 at the same time, the product x^z contains x raised to every exponent up to 2^n − 1. As there is no way for us to just read the contents from a quantum register, due to the collapse of the wave function after measurement, we perform a quantum Fourier transform on register z to obtain the period r.

To illustrate Shor's algorithm, we will factor the number 21. First we choose x = 2, since 2 is a prime number and so it is coprime to 21. Second, we find the period r of f(a) = 2^a mod 21. Because we do not have a quantum computer at hand (yet), we make a table of f(a) by conventional means:

a:    0  1  2  3  4  5  6
f(a): 1  2  4  8 16 11  1

It is obvious that the period r is 6, which is even. It also satisfies the relation 1 < 2^3 mod 21 < 20, and GCD(8 + 1, 21) = 3 actually yields a factor of 21. In this particular case GCD(8 − 1, 21) = 7 does as well.

Breaking RSA
Even though RSA encryption can be broken by factoring N, there exists a shortcut. In the previous section we said that we encrypt a message by the relation c ≡ m^e mod N. For decrypting this message, knowledge of the order r of c modulo N is sufficient. To show this, suppose we know d such that e·d ≡ 1 mod r. Thus we can write ed = 1 + qr for some integer q, and c^d ≡ (m^e)^d ≡ m^(ed) ≡ m^(1+qr) mod N. As m has to be coprime to N for RSA to function, exponentiation of m has to be a periodic function. Exponentiation of c is a periodic function as well, and m is part of the set of values that c takes during exponentiation, because c^d ≡ m mod N. This, in turn, means that the period of m must be equal to the period of c. Strictly speaking, numbers coprime with N form subgroups under the group modulo N, and the order r denotes the number of their elements. Since the subgroup generated by c contains m ≡ c^d and the subgroup generated by m contains c ≡ m^e, they must be identical, which is the reason their orders are the same. If the order of m is r, then m^r ≡ 1 mod N and m^(1+qr) ≡ m·(m^r)^q ≡ m·1^q ≡ m mod N. Now we see that we can obtain d from equation (3.3) with the extended Euclidean algorithm if we know the period r of the encrypted message c. (d, N) is then the private key pair used for decryption.
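The decryption shortcut can be checked numerically. This sketch uses toy values N = 21, e = 5, m = 2 (chosen purely for illustration) and finds the period r of the ciphertext by brute force, standing in for quantum period finding:

```python
def order(c, N):
    # Period r of c modulo N; quantum period finding would perform this step.
    r, y = 1, c % N
    while y != 1:
        y = (y * c) % N
        r += 1
    return r

# Toy RSA: modulus N = 21, public exponent e = 5, plaintext m = 2.
N, e, m = 21, 5, 2
c = pow(m, e, N)                 # ciphertext: 2**5 mod 21 = 11

# The shortcut from the text: find the order r of c, then d with e*d ≡ 1 (mod r).
r = order(c, N)                  # r = 6
d = pow(e, -1, r)                # modular inverse (Python 3.8+): 5*5 ≡ 1 (mod 6)
recovered = pow(c, d, N)

print(c, r, d, recovered)        # 11 6 5 2: the plaintext is recovered
```

Note that the attacker never factors N here: knowing the period of the ciphertext alone yields a working decryption exponent, which is why efficient period finding breaks RSA directly.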

Quantum simulation
Feynman's initial motivation for constructing a quantum computer was the efficient simulation of quantum dynamics. He proposed a universal quantum simulator consisting of a lattice of spins with nearest-neighbor interactions that are freely specifiable and can efficiently reproduce the dynamics of any other many-particle quantum system with a finite-dimensional state space. A universal quantum simulator is a controlled device that, operating itself at the quantum level, efficiently reproduces the dynamics of any other many-particle quantum system that evolves according to short-range interactions. Quantum simulation in the Aspuru-Guzik research group focuses on developing digital and analog quantum simulators for quantum chemistry. Fully understanding the complicated electronic structure of molecules is one of the well-known challenges for quantum chemists. The aim is to develop quantum algorithms and quantum simulators that can achieve significant advantages for electronic-structure problems, the simulation of chemical dynamics, protein folding, photosynthetic systems, and other tasks.

Spintronics
Spintronics is an emerging field of nanoscale electronics involving the detection and manipulation of electron spin. Electron spin can be detected as a magnetic field having one of two orientations, known as down and up. This provides an additional two binary states to the conventional low and high logic values, which are represented by simple currents. With the addition of the spin state to the mix, a bit can have four possible states, which might be called down-low, down-high, up-low, and up-high. These four states represent quantum bits (qubits). Spintronic technology has been tested in mass-storage components such as hard drives. The technology also holds promise for digital electronics in general.
The existence of four, rather than two, defined states for a logic bit translates into higher data transfer speed, greater processing power, increased memory density, and increased storage capacity, provided the properties of electron spin can be sufficiently controlled for practical applications.

Quantum Cryptography
When cryptographers consider 'attacks using quantum computers', they mostly think of Shor's algorithm, which allows us to efficiently factor large numbers and solve the discrete logarithm problem on a quantum computer. Shor's algorithm thus gives an exponential speedup over the best known classical algorithms. Cryptosystems that rely on the hardness of such problems, such as RSA and Elliptic Curve Cryptography, will be much less secure (read: broken) once we have a quantum computer. Because these systems are widely used in practice, we would like to find alternatives that, as far as we know, remain secure against attacks using a quantum computer. The alternatives discussed at the workshop were based on problems from the fields of lattices, error-correcting codes, and multivariate quadratic equations. For each of these fields there was an introductory tutorial and one or two invited talks. Additionally, there were two tutorials and one invited talk on quantum algorithms. The tutorials on quantum algorithms were very interesting, because quantum algorithms are often no more than black boxes to cryptographers, and these talks explained some of the 'magic' behind them.

The first quantum algorithms tutorial was given by Stephen Jordan. He gave a high-level overview of the kinds of quantum algorithms that provide super-polynomial speedups, such as Shor's algorithm. These algorithms solve some variant of the Hidden Subgroup Problem (HSP) for various groups. He also described several groups for which we do not yet know how to solve the HSP. For example, if we could solve the HSP over the symmetric group, we would be able to solve the Graph Isomorphism problem using a quantum computer. Finally, he mentioned that we might be able to solve certain lattice problems if we could construct certain quantum states corresponding to a Gaussian or similar distribution over lattices. Constructing these states efficiently is an open problem.
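The quantum core of Shor's algorithm is period finding, the Hidden Subgroup Problem over the integers: find the period r of f(x) = a^x mod N. The classical post-processing that turns a period into factors can be sketched with a brute-force stand-in for the quantum step (the names and toy numbers here are illustrative):

```python
from math import gcd

def find_period(a, N):
    # Brute-force the period r of f(x) = a**x mod N.
    # (Finding r is the step a quantum computer does efficiently.)
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(a, N):
    # Classical post-processing of Shor's algorithm: an even period r
    # with a**(r//2) != -1 (mod N) yields nontrivial factors of N.
    r = find_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None  # unlucky choice of base a; pick another
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

print(shor_classical_part(7, 15))  # → (3, 5)
```

The brute-force loop takes exponential time in the bit length of N, which is exactly the gap the quantum Fourier transform closes.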
The other tutorial on quantum algorithms was given by Frederic Magniez. It focused on applications of Grover's algorithm, which allows us to search an unsorted database in time proportional to the square root of its size. When used to find collisions in specific functions, this can sometimes even be improved to a cube root. This kind of quantum algorithm therefore gives polynomial speedups. For symmetric cryptography, the existence of Grover's algorithm usually means that we need to double the key length to obtain a comparable security level against a quantum computer. However, it is not always trivial to apply Grover's algorithm to other search problems. This led to some interesting open problems, such as whether Grover's algorithm could be used to speed up other classical algorithms in cryptanalysis.

As mentioned before, Shor's factorization algorithm has yet to be implemented on more than a few qubits. But if the efficient factorization of large numbers becomes possible, RSA cryptography will need to be replaced by a new form of cryptography, one that will not be foiled by classical or quantum computers. Conveniently, such a method already exists; in fact, it was developed before Shor invented his factorization algorithm. Coincidentally, it too relies on quantum mechanics. The cryptographic method in question is quantum key distribution, which was introduced in 1984 by Charles H. Bennett and Gilles Brassard in "Quantum cryptography: Public key distribution and coin tossing" and is thus commonly referred to as BB84. In short, quantum key distribution is secure not because messages are encrypted in some difficult-to-decrypt way, but because eavesdroppers cannot intercept messages undetected, regardless of their computational resources.

Although quantum key distribution is the most famous cryptographic application of quantum mechanics, it is not the only one. In the 1960s, Stephen Wiesner conceived of two applications: a way to send two messages of which only one can be read, and a way to design money that cannot be counterfeited. His ideas were largely unknown until 1983, when he described them in an article called "Conjugate coding". Quantum cryptography uses our current knowledge of physics to develop a cryptosystem that cannot be defeated, that is, one that is completely secure against being compromised without the knowledge of the sender or the receiver of the messages. The word "quantum" itself refers to the most fundamental behavior of the smallest particles of matter and energy: quantum theory explains everything that exists, and nothing can be in violation of it. Quantum cryptography differs from traditional cryptographic systems in that it relies on physics, rather than mathematics, as the key aspect of its security model.
Essentially, quantum cryptography is based on the use of individual particles/waves of light (photons) and their intrinsic quantum properties to develop an unbreakable cryptosystem, essentially because it is impossible to measure the quantum state of any system without disturbing that system. It is theoretically possible that other particles could be used, but photons offer all the necessary qualities: their behavior is comparatively well understood, and they are the information carriers in optical fiber cables, the most promising medium for extremely high-bandwidth communications.
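A toy simulation can illustrate why eavesdropping is detectable in BB84: an interceptor who measures in a randomly chosen basis disturbs roughly a quarter of the sifted key bits. This sketch (all names illustrative) models photons as bits plus basis labels:

```python
import random

def bb84(n_bits, eavesdrop=False, rng=random):
    # Toy BB84: basis 0 = rectilinear, basis 1 = diagonal.
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    received = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            # Eve measures in a random basis; a wrong basis randomizes
            # the bit, and the photon is re-sent in Eve's basis.
            e_basis = rng.randint(0, 1)
            if e_basis != basis:
                bit = rng.randint(0, 1)
            basis = e_basis
        received.append((bit, basis))

    # Bob measures; sifting keeps positions where Alice's and Bob's
    # publicly announced bases happen to match.
    sifted_alice, sifted_bob = [], []
    for i, (bit, basis) in enumerate(received):
        measured = bit if bob_bases[i] == basis else rng.randint(0, 1)
        if bob_bases[i] == alice_bases[i]:
            sifted_alice.append(alice_bits[i])
            sifted_bob.append(measured)

    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    return errors / max(len(sifted_alice), 1)

random.seed(1)
print(bb84(2000))                  # no eavesdropper: error rate 0
print(bb84(2000, eavesdrop=True))  # Eve induces roughly 25% errors
```

By comparing a random sample of the sifted key in public, Alice and Bob can estimate this error rate and abort if it is suspiciously high.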

How It Works in Practice


In practice, quantum cryptography has been demonstrated in the laboratory by IBM and others, but only over relatively short distances. More recently, fiber-optic cables with extremely pure optical properties have successfully transmitted photon bits over distances of up to 60 kilometers. Beyond that, bit error rates (BERs) caused by a combination of the Heisenberg uncertainty principle and microscopic impurities in the fiber make the system unworkable. Some research has demonstrated successful transmission through the air, but only over short distances and in ideal weather conditions. It remains to be seen how much further technology can push the distances at which quantum cryptography is practical.

Practical applications in the US are suspected to include a dedicated line between the White House and the Pentagon in Washington, and some links between key military sites and major defense contractors and research laboratories in close proximity.

Complexity theory:
Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and on relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity), and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.

Closely related fields in theoretical computer science are the analysis of algorithms and computability theory. A key distinction between the analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter asks what kinds of problems can, in principle, be solved algorithmically.
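The idea of quantifying a resource for a particular algorithm can be made concrete by counting the resource in question, here the number of comparisons two search strategies make on the same problem (a simple illustrative sketch):

```python
def linear_search_steps(xs, target):
    # Count the comparisons a linear scan needs: O(n) in the worst case.
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(xs, target):
    # Count the comparisons binary search needs on sorted input: O(log n).
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            break
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # 1000000 comparisons
print(binary_search_steps(data, 999_999))  # roughly 20 comparisons
```

Analysis of algorithms studies counts like these for one algorithm; complexity theory asks whether any algorithm for the problem can do better under the given resource bounds.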

DMRG theory:

The density matrix renormalization group (DMRG) is a numerical variational technique devised to obtain the low-energy physics of quantum many-body systems with high accuracy. It was invented in 1992 by Steven R. White, and it is nowadays the most efficient method for 1-dimensional systems. The main problem of quantum many-body physics is the fact that the Hilbert space grows exponentially with size. For example, a spin-1/2 chain of length L has 2^L degrees of freedom. The DMRG is an iterative, variational method that reduces the effective degrees of freedom to those most important for a target state, which is often the ground state.

After a warm-up cycle, the method splits the system into two blocks, which need not have equal sizes, and two sites in between. A set of representative states has been chosen for each block during the warm-up. This combination of left block + two sites + right block is known as the superblock. Now a candidate for the ground state of the superblock, which is a reduced version of the full system, may be found. It may have rather poor accuracy at first, but the method is iterative and improves with the steps below.

The system is decomposed into left and right blocks, according to DMRG. The candidate ground state that has been found is projected into the subspace for each block using a density matrix, whence the name. Thus, the relevant states for each block are updated. Now one of the blocks grows at the expense of the other, and the procedure is repeated. When the growing block reaches its maximum size, the other starts to grow in its place. Each time we return to the original (equal-sizes) situation, we say that a sweep has been completed. Normally, a few sweeps are enough to reach a precision of one part in 10^10 for a 1D lattice.

The first application of the DMRG, by Steven White and Reinhard Noack, was a toy model: finding the spectrum of a spin-0 particle in a 1D box. This model had been proposed by Kenneth G. Wilson as a test for any new renormalization group method, because they all happened to fail on this simple problem. The DMRG overcame the problems of previous renormalization group methods by connecting two blocks with the two sites in the middle, rather than just adding a single site to a block at each step, and by using the density matrix to identify the most important states to be kept at the end of each step. After succeeding with the toy model, the DMRG method was tried with success on the quantum Heisenberg model.
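The density-matrix truncation at the heart of the method can be sketched numerically: form the reduced density matrix of one block and keep only the eigenstates of largest weight. A minimal illustration, assuming NumPy is available; the block dimensions and the number of kept states m are arbitrary choices for the demo:

```python
import numpy as np

# Reduced density matrix of the left block for a random superblock state.
# DMRG keeps only the m eigenvectors with the largest eigenvalues (weights).
dim_left, dim_right, m_kept = 8, 8, 3

rng = np.random.default_rng(0)
psi = rng.normal(size=(dim_left, dim_right))
psi /= np.linalg.norm(psi)                 # normalized superblock state

rho_left = psi @ psi.T                     # partial trace over the right block
w, v = np.linalg.eigh(rho_left)            # eigenvalues (weights) sum to 1
kept = v[:, np.argsort(w)[::-1][:m_kept]]  # basis of the m heaviest states

truncation_error = 1 - np.sort(w)[::-1][:m_kept].sum()
print(round(truncation_error, 4))          # discarded weight of the state
```

The discarded weight printed at the end is the quantity DMRG monitors: when it is small, the truncated block basis represents the target state almost exactly.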

N-representability theory:

Several reconstructive proposals for density matrix functional theory are investigated, each of which expresses the two-electron density matrix, and therefore the electronic energy, as a functional of the natural orbitals and their occupation numbers. It is shown that for each of these functionals, half of the parallel-spin eigenvalues of the reconstructed two-electron density matrix are necessarily negative. Illustrative all-electron calculations for Be and LiH, in a variety of Gaussian basis sets, demonstrate that these spurious negative eigenvalues lower the electronic energy substantially. In spite of this, there is no indication that the variationally optimized energy diverges as the basis set approaches completeness, as has been suggested based on calculations with a small number of active orbitals. The apparent variational instability reported previously is attributed to qualitative differences between the minimal-basis and extended-basis potential curves for certain functionals. However, we identify one functional that yields accurate LiH potential curves, comparable to full configuration interaction results, in both minimal and extended basis sets. Explicitly antisymmetric reconstructions are recommended as a remedy for the positivity problem.

No-cloning theorem:
The no-cloning theorem is a result of quantum mechanics that forbids the creation of identical copies of an arbitrary unknown quantum state. It was stated by Wootters, Zurek, and Dieks in 1982, and has profound implications in quantum computing and related fields. The state of one system can be entangled with the state of another system; for instance, one can use the controlled-NOT gate and the Walsh-Hadamard gate to entangle two qubits. This is not cloning: no well-defined state can be attributed to a subsystem of an entangled state, whereas cloning is a process whose result is a separable state with identical factors. According to Asher Peres and David Kaiser, the publication of the no-cloning theorem was prompted by a proposal of Nick Herbert for a superluminal communication device using quantum entanglement.
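The linearity argument behind the theorem fits in a few lines:

```latex
% Suppose a unitary U could clone two states:
%   U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle, \quad
%   U|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle.
% Taking the inner product of these equations and using U^\dagger U = I:
\langle\psi|\phi\rangle
   = \langle\psi|\langle 0|\, U^{\dagger}U \,|\phi\rangle|0\rangle
   = \langle\psi|\langle\psi|\,|\phi\rangle|\phi\rangle
   = \langle\psi|\phi\rangle^{2}
% so \langle\psi|\phi\rangle must equal 0 or 1: only orthogonal or
% identical states can be cloned by one device, never an arbitrary
% unknown state.
```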

OUR COMMENTS
Based on what we have read and learnt about quantum computing in general, there are a few important issues that we feel need to be tackled before quantum technology can emerge from the researcher's laboratory into the mainstream and become a part of everyday computing and security technology.

The most severe limitation of quantum computers at their current stage of development is the fact that they are large and expensive, and are custom-built by researchers in a very controlled laboratory environment. It remains to be seen whether quantum computers of a reasonable size (say, comparable to personal computers 20 years ago) can be built and packaged in a form amenable to mass production. This will require addressing the issue of decoherence in large and complicated quantum logic circuits. Another issue with quantum computers is the fact that a program has to be run many times, and the distribution of the outputs has to be analyzed to determine the correct output of the program. It remains to be seen how this model of computation fits in with everyday computational tasks such as simple addition, or managing a company's accounts.

As for quantum cryptographic techniques, the central concern is that of building practical quantum communication channels. The biggest problem is that using repeaters in quantum channels is not possible, since doing so would amount to eavesdropping and would destroy the information contained in the transmitted qubits. Research efforts are underway to integrate quantum channels into the IP and IPsec framework by incorporating quantum links in the link layer. Whether this can be scaled up for mass deployment as the next generation of the Internet remains to be seen. This will, of course, require building quantum channels that can transmit qubits over significant distances, since the current transmission rates are too low for practical usage.

http://www-cs-students.stanford.edu/~adityagp/acads/netsec-writeup.pdf
