
Computer

A computer is a programmable machine that receives input, stores and manipulates data, and provides
output in a useful format.

While a computer can, in theory, be made out of almost anything (see misconceptions section), and
mechanical examples of computers have existed through much of recorded human history, the first
electronic computers were developed in the mid-20th century (1940–1945). Originally, they were the
size of a large room, consuming as much power as several hundred modern personal computers (PCs).
Modern computers based on integrated circuits are millions to billions of times more capable than the
early machines, and occupy a fraction of the space. Simple computers are small enough to fit into mobile
devices, and can be powered by a small battery. Personal computers in their various forms are icons of
the Information Age and are what most people think of as "computers". However, the embedded
computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots
are the most numerous.

Contents
1 Misconceptions
A computer does not need to be electronic, nor does it even need a processor, RAM, or a hard
disk. The minimal definition of a computer is anything that transforms information in a
purposeful way.

o 1.1 Required technology

 Computational systems as flexible as a personal computer can be built out of almost
anything. For example, a computer can be made out of billiard balls (the billiard-ball
computer); this unintuitive but pedagogical example shows just how broad the range of
possible substrates is. More realistically, modern computers are made out of transistors
fabricated in photolithographed semiconductors.

 Historically, computers evolved from mechanical computers, and eventually from vacuum
tubes to transistors.

 There is active research to make computers out of many promising new types of
technology, such as optical computing, DNA computers, neural computers, and quantum
computers. Some of these can easily tackle problems that modern computers cannot; for
example, quantum computers could break some modern encryption algorithms by quantum
factoring.
o 1.2 Computer architecture paradigms

There are several different paradigms for building a computer from the ground up:


 RAM machines
These are computers with a CPU, main memory, and so on, which understand basic
instructions in a machine language; the concept evolved from the Turing machine (a
minimal sketch of such a machine appears at the end of this subsection).
 Brains
Brains are massively parallel processors made of neurons, wired in intricate
patterns, that communicate via electricity and neurotransmitter chemicals.
 Programming languages
Languages such as the lambda calculus, or modern programming languages, are virtual
computers built on top of other computers.
 Cellular automata
For example, the game of Life can create "gliders" and "loops"
and other constructs that transmit information; this paradigm can be applied to DNA
computing, chemical computing, etc.
 Groups and committees
The linking of multiple computers (brains) is itself a computer.
Logic gates are a common abstraction that can apply to most of the above digital or
analog paradigms.
The ability to store and execute lists of instructions called programs makes computers
extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a
mathematical statement of this versatility: any computer with a certain minimum capability
(that is, any Turing-complete computer) is, in principle, capable of performing the same
tasks that any other computer can perform. Therefore, any type of computer (netbook,
supercomputer, cellular automaton, etc.) is able to perform the same computational tasks,
given enough time and storage capacity.
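
To make the RAM-machine paradigm concrete, below is a minimal sketch in Python of a stored
program being executed by a fetch-decode-execute loop. The instruction set (LOAD, ADD, JNZ,
HALT) and the example program are invented purely for illustration; they do not correspond to
any real machine language.

# A minimal sketch of the "RAM machine" paradigm: a stored program is fetched,
# decoded, and executed one instruction at a time over a memory array.
# The instruction set here is hypothetical and chosen only for illustration.
def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":      # LOAD addr, value : memory[addr] = value
            memory[args[0]] = args[1]
        elif op == "ADD":     # ADD dst, a, b    : memory[dst] = memory[a] + memory[b]
            memory[args[0]] = memory[args[1]] + memory[args[2]]
        elif op == "JNZ":     # JNZ addr, target : jump if memory[addr] != 0
            if memory[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "HALT":
            break
        pc += 1
    return memory

# Example: compute 2 + 3 and leave the result in memory cell 2.
prog = [("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 2, 0, 1), ("HALT",)]
print(run(prog, [0] * 4))   # prints [2, 3, 5, 0]

Because the program is just data held alongside the memory it operates on, the same loop can run
any program written in this tiny language, which is the essence of the versatility described above.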

o 1.3 Limited-function computers

Conversely, a computer which is limited in function (one that is not "Turing-complete")
cannot simulate arbitrary things. For example, simple four-function calculators cannot
simulate a real computer without human intervention. As a more complicated example, a
gaming console that cannot be programmed by its user can never accomplish what a
programmable calculator from the 1990s could (given enough time); the system as a
whole is not Turing-complete, even though it contains a Turing-complete component
(the microprocessor). Living organisms (the body, not the brain) are also limited-
function computers designed to make copies of themselves; they cannot be
reprogrammed without genetic engineering.

o 1.4 Virtual computers


A "computer" is commonly considered to be a physical device. However, one can create
a computer program which describes how to run a different computer, i.e. "simulating a
computer in a computer". Not only is this a constructive proof of the Church-Turing
thesis, but is also extremely common in all modern computers. For example, some
programming languages use something called an interpreter, which is a simulated
computer built on top of the basic computer; this allows programmers to write code
(computer input) in a different language than the one understood by the base computer
(the alternative is to use a compiler). Additionally, virtual machines are simulated
computers which virtually replicate a physical computer in software, and are very
commonly used by IT. Virtual machines are also a common technique used to create
emulators, such game console emulators.
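
As a concrete illustration of "simulating a computer in a computer", below is a minimal sketch in
Python of an interpreter for a tiny, invented stack language. The language and its instructions
(PUSH, ADD, MUL, PRINT) are hypothetical and exist only to show how a host program can
execute code written in a different language than the one it is itself written in.

# A minimal interpreter sketch: the host Python program simulates a tiny
# stack machine whose (invented) language it defines itself.
def interpret(source):
    stack = []
    for line in source.strip().splitlines():
        op, *arg = line.split()
        if op == "PUSH":        # push a literal integer onto the stack
            stack.append(int(arg[0]))
        elif op == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":       # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":     # print the value on top of the stack
            print(stack[-1])
        else:
            raise ValueError("unknown instruction: " + op)

# A program in the simulated language, computing (2 + 3) * 4.
interpret("""
PUSH 2
PUSH 3
ADD
PUSH 4
MUL
PRINT
""")   # prints 20

A compiler would instead translate the PUSH/ADD/MUL program into the base computer's own
instructions ahead of time, rather than executing it step by step as the interpreter does.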

2 History of computing
The first use of the word "computer" was recorded in 1613, referring to a person who carried
out calculations, or computations, and the word continued to be used in that sense until the
middle of the 20th century. From the end of the 19th century onwards, though, the word began
to take on its more familiar meaning, describing a machine that carries out computations.

o 2.1 Limited-function ancient computers

 The history of the modern computer begins with two separate technologies—automated
calculation and programmability—but no single device can be identified as the earliest
computer, partly because of the inconsistent application of that term. Examples of early
mechanical calculating devices include the abacus, the slide rule and arguably the
astrolabe and the Antikythera mechanism, an ancient astronomical computer built by the
Greeks around 80 BC.[4] The Greek mathematician Hero of Alexandria (c. 10–70 AD)
built a mechanical theater which performed a play lasting 10 minutes and was operated
by a complex system of ropes and drums that might be considered to be a means of
deciding which parts of the mechanism performed which actions and when.[5] This is the
essence of programmability.

 The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to


be the earliest programmable analog computer.[6][verification needed] It displayed the zodiac, the
solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway
causing automatic doors to open every hour,[7][8] and five robotic musicians who played
music when struck by levers operated by a camshaft attached to a water wheel. The
length of day and night could be re-programmed to compensate for the changing lengths
of day and night throughout the year.[6]

 The Renaissance saw a re-invigoration of European mathematics and engineering.
Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators
constructed by European engineers, but none fit the modern definition of a computer,
because they could not be programmed.
o 2.2 First general-purpose computers

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series
of punched paper cards as a template which allowed his loom to weave intricate patterns
automatically. The resulting Jacquard loom was an important step in the development of
computers because the use of punched cards to define woven patterns can be viewed as an early,
albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first
recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a
fully programmable mechanical computer, his analytical engine. Limited finances and Babbage's
inability to resist tinkering with the design meant that the device was never completed.

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable
medium. Prior uses of machine-readable media had been for control, not data. "After
some initial trials with paper tape, he settled on punched cards ..." To process these punched
cards he invented the tabulator and the keypunch machine. These three inventions were the
foundation of the modern information processing industry. Large-scale automated data
processing of punched cards was performed for the 1890 United States Census by Hollerith's
company, which later became the core of IBM. By the end of the 19th century a number of
technologies that would later prove useful in the realization of practical computers had begun to
appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the
teleprinter.

During the first half of the 20th century, many scientific computing needs were met by
increasingly sophisticated analog computers, which used a direct mechanical or electrical model
of the problem as a basis for computation. However, these were not programmable and generally
lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing
provided an influential formalisation of the concepts of algorithm and computation with the
Turing machine, providing a blueprint for the electronic digital computer.[11] Of his role in the
creation of the modern computer, Time magazine, in naming Turing one of the 100 most
influential people of the 20th century, stated: "The fact remains that everyone who taps at a
keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of
a Turing machine".[11]
[Image caption: The Zuse Z3, 1941, considered the world's first working programmable, fully
automatic computing machine.]

[Image caption: The ENIAC, which became operational in 1946, is considered to be the first
general-purpose electronic computer.]

[Image caption: EDSAC was one of the first computers to implement the stored-program (von
Neumann) architecture.]

[Image caption: Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its
packaging.]

The inventor of the program-controlled computer was Konrad Zuse, who built the first working
computer in 1941 and later in 1955 the first computer based on magnetic storage.[12]

George Stibitz is internationally recognized as a father of the modern digital computer. While
working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he
dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to
use binary circuits to perform an arithmetic operation. Later models added greater sophistication
including complex arithmetic and programmability.[13]

A succession of steadily more powerful and flexible computing devices was constructed in the
1930s and 1940s, gradually adding the key features that are seen in modern computers. The use
of digital electronics (largely invented by Claude Shannon in 1937) and more flexible
programmability were vitally important steps, but defining one point along this road as "the first
digital electronic computer" is difficult (Shannon 1940). Notable achievements include:

 Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working
machine featuring binary arithmetic, including floating point arithmetic and a measure of
programmability. In 1998 the Z3 was proved to be Turing-complete, thereby making it the
world's first operational computer.[14]
 The non-programmable Atanasoff–Berry Computer (1941) which used vacuum tube
based computation, binary numbers, and regenerative capacitor memory. The use of
regenerative memory allowed it to be much more compact than its peers (being
approximately the size of a large desk or workbench), since intermediate results could be
stored and then fed back into the same set of computation elements.
 The secret British Colossus computers (1943),[15] which had limited programmability but
demonstrated that a device using thousands of tubes could be reasonably reliable and
electronically reprogrammable. It was used for breaking German wartime codes.
 The Harvard Mark I (1944), a large-scale electromechanical computer with limited
programmability.
 The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal
arithmetic and is sometimes called the first general purpose electronic computer (since
Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however,
ENIAC had an inflexible architecture which essentially required rewiring to change its
programming.

o 2.3 Stored-program architecture

 Several developers of ENIAC, recognizing its flaws, came up with a far more flexible
and elegant design, which came to be known as the "stored program architecture" or von
Neumann architecture. This design was first formally described by John von Neumann in
the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of
projects to develop computers based on the stored-program architecture commenced
around this time, the first of these being completed in Great Britain. The first working
prototype to be demonstrated was the Manchester Small-Scale Experimental Machine
(SSEM or "Baby") in 1948. The Electronic Delay Storage Automatic Calculator
(EDSAC), completed a year after the SSEM at Cambridge University, was the first
practical, non-experimental implementation of the stored program design and was put to
use immediately for research work at the university. Shortly thereafter, the machine
originally described by von Neumann's paper—EDVAC—was completed but did not see
full-time use for an additional two years.
 Nearly all modern computers implement some form of the stored-program architecture,
making it the single trait by which the word "computer" is now defined. While the
technologies used in computers have changed dramatically since the first electronic,
general-purpose computers of the 1940s, most still use the von Neumann architecture.

 Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov
conducted research on ternary computers, devices that operated on a base three
numbering system of −1, 0, and 1 rather than the conventional binary numbering system
upon which most computers are based. They designed the Setun, a functional ternary
computer, at Moscow State University. The device was put into limited production in the
Soviet Union, but it was supplanted by the more common binary architecture.
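
To make the base-three numbering system of −1, 0, and 1 (commonly called balanced ternary)
concrete, below is a small illustrative sketch in Python of converting integers to and from that
representation. It illustrates the number system only and is not a model of Setun's actual
hardware.

# A small sketch of balanced ternary, the -1/0/1 base-three system used by
# ternary computers such as Setun. This illustrates the number system only.
def to_balanced_ternary(n):
    """Return the balanced-ternary digits of n, most significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:              # a digit of 2 becomes -1 with a carry into the next place
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits[::-1]

def from_balanced_ternary(digits):
    """Evaluate balanced-ternary digits given most significant digit first."""
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

print(to_balanced_ternary(7))                         # [1, -1, 1], i.e. 9 - 3 + 1
print(from_balanced_ternary(to_balanced_ternary(7)))  # 7

A convenient property of balanced ternary is that a number is negated simply by flipping the sign
of every digit.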

o 2.4 Semiconductors and microprocessors

 Computers using vacuum tubes as their electronic elements were in use throughout the
1950s, but by the 1960s had been largely replaced by transistor-based machines, which
were smaller, faster, cheaper to produce, required less power, and were more reliable.
The first transistorised computer was demonstrated at the University of Manchester in
1953.[16] In the 1970s, integrated circuit technology and the subsequent creation of
microprocessors, such as the Intel 4004, further decreased size and cost and further
increased speed and reliability of computers. By the late 1970s, many products such as
video recorders contained dedicated computers called microcontrollers, and they started
to appear as a replacement for mechanical controls in domestic appliances such as
washing machines. The 1980s witnessed home computers and the now ubiquitous
personal computer. With the evolution of the Internet, personal computers are becoming
as common as the television and the telephone in the household.

 Modern smartphones are fully programmable computers in their own right, and as of
2009 may well be the most common form of such computers in existence.

 3 Programs
o 3.1 Stored program architecture
o 3.2 Bugs
o 3.3 Machine code
o 3.4 Higher-level languages and program design
 4 Function
o 4.1 Control unit
o 4.2 Arithmetic/logic unit (ALU)
o 4.3 Memory
o 4.4 Input/output (I/O)
o 4.5 Multitasking
o 4.6 Multiprocessing
o 4.7 Networking and the Internet
 5 Further topics
o 5.1 Artificial intelligence
o 5.2 Hardware
o 5.3 Software
o 5.4 Programming languages
o 5.5 Professions and organizations
 6 See also
 7 Notes
 8 References
 9 External links
