We propose to look at the evolution of ideas related to parallel systems, algorithms, and applications during the past three decades and then glimpse at the future. The journey starts with the massively parallel systems of the early nineties, transitions gently to computing for the masses on the clouds of the first decade of the new millennium, and continues with what we can expect from quantum computers and quantum parallelism in the next few decades.

In the late 1980s several technology companies were tasked with building massively parallel systems for classified and unclassified work. Supercomputers built with high-performance processors and low-latency interconnection networks provided the computing cycles for a relatively small population of sophisticated users from national laboratories and universities. For example, the Touchstone Delta and then the Paragon had a hypercube interconnection network connecting hundreds or thousands of i860s, RISC processors launched with great fanfare by Intel but mostly forgotten today. Virtually all unclassified applications running on such systems required the development of parallel libraries and parallel algorithms for different areas of scientific computing. We'll discuss in some depth an application in the area of structural biology that we developed for the Touchstone Delta: the 3D structure determination of viruses with unknown symmetry at high resolution.

The tide turned, and in the second half of the first decade of the new millennium utility computing captured the attention of the masses. Large farms of servers built with off-the-shelf components, the so-called clouds, appeared in different time zones on Earth, rather than in the sky, to support enterprise computing. Anybody with a credit card and a problem involving big data now had access to vast amounts of computing cycles and storage space. The number of Cloud Service Providers (CSPs), the spectrum of services offered by the CSPs, and the number of cloud users have increased dramatically during the last few years. For example, in 2007 EC2 (Elastic Compute Cloud) was the first service provided by AWS (Amazon Web Services); five years later, in 2012, AWS was used by businesses in 200 countries. Amazon's S3 (Simple Storage Service) has surpassed two trillion stored objects and routinely handles more than 1.1 million requests per second at peak. Well-known methods to partition data and process it concurrently were rediscovered and given new names; the Elastic MapReduce service has launched 5.5 million clusters since the start of the service in May 2010 (a minimal sketch of the underlying pattern appears below). In our presentation we'll concentrate on the IaaS (Infrastructure as a Service) cloud delivery model, discuss several classes of applications running successfully on a cloud, and present the results of some cloud benchmarks.

While the computer science and computer engineering communities were busy developing operating systems, file systems, compilers, and algorithms for parallel systems, a physicist, albeit a very famous one, Richard Feynman, showed in 1982 that computers obeying the laws of classical physics will never be able to carry out some difficult tasks, for example, exactly simulating a quantum system. He predicted that a system where information is encoded in the properties of quantum particles and processed using reversible state transformations would be capable of delivering extraordinary computing power.
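The sketch promised above makes the partition-and-process pattern behind MapReduce concrete: a word count in Python, where the data is split into chunks, each chunk is counted in parallel (the map step), and the partial counts are merged (the reduce step). The chunking scheme, pool size, and sample input are illustrative assumptions; this is a sketch of the general map/reduce idea, not of Amazon's Elastic MapReduce service.

```python
# Word-count sketch of the "partition the data, process it concurrently,
# then combine" pattern popularized as MapReduce. Illustrative only.
from collections import Counter
from multiprocessing import Pool


def map_chunk(lines):
    """Map step: count the words in one partition of the input."""
    counts = Counter()
    for line in lines:
        counts.update(line.lower().split())
    return counts


def reduce_counts(partial_counts):
    """Reduce step: merge the per-partition counters into a total."""
    total = Counter()
    for c in partial_counts:
        total.update(c)
    return total


if __name__ == "__main__":
    documents = [
        "the quick brown fox",
        "the lazy dog",
        "quick brown dogs and quick foxes",
        "the end",
    ]
    # Partition the data into two chunks, one per worker process.
    chunks = [documents[i::2] for i in range(2)]
    with Pool(processes=2) as pool:
        partials = pool.map(map_chunk, chunks)
    print(reduce_counts(partials).most_common(3))
```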
A number of alternatives for building a quantum computer, including quantum dots based on spin or charge, trapped ions in a cavity, liquid NMR (Nuclear Magnetic Resonance), the quantum Hall effect, and other physical processes, are now being investigated in many labs around the world. The race to build a quantum computer is on. To understand why, we overview the physical phenomena exploited by quantum parallelism and then discuss Shor's factoring algorithm, which achieves an exponential speed-up compared with the best known classical algorithms. In 2009 it took almost two years and hundreds of computers to factor a 768-bit number; it is estimated that factoring a 1,024-bit number would take 1,000 times longer. A quantum computer would most likely be able to factor a 2,048-bit number in days. But so far only quantum computers with a few qubits have been built; thus, information encrypted with keys of a few hundred bits is safe!
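To show why period finding is the heart of Shor's algorithm, the following sketch illustrates the classical post-processing: once the period r of f(x) = a^x mod N is known (the part a quantum computer would compute exponentially faster), the factors of N fall out of greatest-common-divisor calculations. The brute-force period search below is only a stand-in for the quantum subroutine and works only for toy-sized N; the function names and the choice of example are ours.

```python
# Classical post-processing of Shor's algorithm, with a brute-force
# stand-in for the quantum period-finding step. Illustrative sketch only.
from math import gcd


def find_period_classically(a, n):
    """Brute-force stand-in for the quantum period-finding subroutine."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r


def shor_classical_part(n, a):
    """Given n and a trial base a, try to recover two factors of n."""
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)    # lucky guess already factors n
    r = find_period_classically(a, n)       # quantum computer's job
    if r % 2 == 1:
        return None                         # odd period: try another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                         # trivial square root: try another a
    p, q = gcd(y - 1, n), gcd(y + 1, n)
    return (p, q) if p * q == n else None


if __name__ == "__main__":
    print(shor_classical_part(15, 7))       # prints (3, 5)
```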