Computer Programming: Software Development Process
• 1 Overview
• 2 History of programming
• 3 Modern programming
o 3.1 Quality requirements
o 3.2 Algorithmic complexity
o 3.3 Methodologies
o 3.4 Measuring language usage
o 3.5 Debugging
• 4 Programming languages
• 5 Programmers
• 6 References
• 7 See also
• 8 External links
Overview
Within software engineering, programming (the implementation) is regarded as one phase
in a software development process.
There is an ongoing debate on the extent to which the writing of programs is an art, a
craft or an engineering discipline.[1] Good programming is generally considered to be the
measured application of all three, with the goal of producing an efficient and evolvable
software solution (the criteria for "efficient" and "evolvable" vary considerably). The
discipline differs from many other technical professions in that programmers generally do
not need to be licensed or pass any standardized (or governmentally regulated)
certification tests in order to call themselves "programmers" or even "software
engineers." However, representing oneself as a "Professional Software Engineer" without
a license from an accredited institution is illegal in many parts of the world.
Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir–Whorf hypothesis[2] in linguistics, which postulates that a particular language's nature influences the habitual thought of its speakers: different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.
Said another way, programming is the craft of transforming requirements into something
that a computer can execute.
History of programming
See also: History of programming languages
The concept of devices that operate following a pre-defined set of instructions traces back to Greek mythology, notably Hephaestus and his mechanical servants.[3] The Antikythera mechanism was a calculator utilizing gears of various sizes and configurations to determine its operation. The earliest known programmable machines (machines whose behavior can be controlled and predicted with a set of instructions) were Al-Jazari's programmable automata of 1206.[4] One of Al-Jazari's robots was a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. Programming this mechanism's behavior meant placing pegs and cams into a wooden drum at specific locations; these would then bump into little levers that operated a percussion instrument. The output of this device was a small drummer playing various rhythms and drum patterns.[5][6] Another sophisticated programmable machine by Al-Jazari was the castle clock, notable for its concept of variables, which the operator could manipulate as necessary (i.e., the length of day and night). The Jacquard Loom, which Joseph Marie Jacquard developed in 1801, used a series of pasteboard cards with holes punched in them; the hole pattern represented the pattern the loom had to follow in weaving cloth, and the loom could produce entirely different weaves using different sets of cards. Charles Babbage adopted the use of punched cards around 1830 to control his Analytical Engine. The synthesis of numerical calculation, predetermined operation and output, along with a way to organize and input instructions in a manner relatively easy for humans to conceive and produce, led to the modern development of computer programming, which accelerated through the Industrial Revolution.
In the late 1880s Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine-readable media, described above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..."[7] To process these punched cards, first known as "Hollerith cards," he invented the tabulator and the key punch machines. These three inventions were the foundation of the modern information-processing industry. In 1896 he founded the Tabulating Machine Company (which later became the core of IBM). The addition of a control panel to his 1906 Type I Tabulator allowed it to do different jobs without having to be physically rebuilt. By the late 1940s there were a variety of plug-board programmable machines, called unit record equipment, that performed data-processing tasks such as card reading. Early computer programmers used plug-boards for the variety of complex calculations requested of the newly invented machines.
Data and instructions could be stored on external punch cards, which were kept in order
and arranged in program decks.
The invention of the von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions of the particular machine, often in binary notation. Every model of computer was likely to need different instructions to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g., ADD X, TOTAL). In 1954 Fortran, the first higher-level programming language, was invented. This allowed programmers to specify calculations by entering a formula directly (e.g., Y = X*2 + 5*X + 9). The program text, or source, was converted into machine instructions using a special program called a compiler. Many other languages were developed, including ones for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape (see computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards.
As time has progressed, computers have made giant leaps in processing power. This has brought about newer programming languages that are more abstracted from the underlying hardware. Although these more abstracted languages require additional overhead, in most cases the huge increase in the speed of modern computers has meant little performance decrease compared to earlier counterparts. The benefits of these more abstracted languages are that they offer an easier learning curve for people less familiar with older, lower-level programming languages, and they allow a more experienced programmer to develop simple applications quickly. Despite these benefits, large, complicated programs, and programs that depend heavily on speed, still require faster, relatively low-level languages on today's hardware. (The same concerns were raised about the original Fortran language.)
Throughout the second half of the twentieth century, programming was an attractive
career in most developed countries. Some forms of programming have been increasingly
subject to offshore outsourcing (importing software and services from other countries,
usually at a lower wage), making programming career decisions in developed countries
more complicated, while increasing economic opportunities in less developed areas. It is
unclear how far this trend will continue and how deeply it will impact programmer wages
and opportunities.
Modern programming
Quality requirements
Whatever the approach to software development may be, the final program must satisfy some fundamental properties. Among the most relevant are reliability, robustness, usability, portability, and maintainability.
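Robustness, for example, is a program's ability to cope with malformed input rather than fail. A minimal Python sketch (the function name and messages here are invented for illustration):

    def read_positive_int(prompt):
        # Re-prompt until the user supplies a valid positive integer,
        # instead of crashing on malformed input.
        while True:
            text = input(prompt)
            try:
                value = int(text)
            except ValueError:
                print("Please enter a whole number.")
                continue
            if value <= 0:
                print("Please enter a number greater than zero.")
                continue
            return value

A fragile program would simply call int(input(prompt)) and crash on the first typo; the loop above trades a few lines of code for predictable behavior.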
Algorithmic complexity
The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into orders using so-called Big O notation (e.g., O(n)), which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
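To make the distinction concrete, consider searching a sorted list: a linear scan runs in O(n) time, while binary search runs in O(log n). A short Python sketch (ordinary textbook routines, not drawn from this article's sources):

    import bisect

    def linear_search(items, target):
        # O(n): may inspect every element in the worst case.
        for i, item in enumerate(items):
            if item == target:
                return i
        return -1

    def binary_search(sorted_items, target):
        # O(log n): halves the search range at each step;
        # requires the input to be sorted.
        i = bisect.bisect_left(sorted_items, target)
        if i < len(sorted_items) and sorted_items[i] == target:
            return i
        return -1

On a million-element list the difference is roughly a million comparisons versus about twenty, which is why expert programmers weigh complexity when choosing an algorithm.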
Methodologies
The first step in most formal software development projects is requirements analysis, followed by modeling, implementation, testing, and failure elimination (debugging). Many differing approaches exist for each of these tasks. One approach popular for requirements analysis is use case analysis.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and
Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a
notation used for both OOAD and MDA.
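As a loose illustration of object-oriented analysis and design, a use case such as "a customer places an order" might first be modeled as collaborating classes. A minimal Python sketch (the class names and attributes are invented for the example):

    from dataclasses import dataclass, field

    @dataclass
    class Product:
        name: str
        price: float

    @dataclass
    class Order:
        # An Order aggregates Products, mirroring the association
        # a UML class diagram for this use case would capture.
        items: list = field(default_factory=list)

        def add(self, product: Product) -> None:
            self.items.append(product)

        def total(self) -> float:
            return sum(p.price for p in self.items)

In OOAD the emphasis is on identifying such objects, their attributes, and their relationships before committing to an implementation.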
Measuring language usage
It is very difficult to determine which modern programming languages are most popular. Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in the corporate data center, often on large mainframes; Fortran in engineering applications; and C in embedded applications), while some languages are regularly used to write many different kinds of applications.[8]
Debugging
Debugging is often done with IDEs such as Visual Studio, NetBeans, and Eclipse. Standalone debuggers such as gdb are also used; these often provide less of a visual environment and are usually driven from a command line.
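As a small sketch of command-line debugging, Python's standard pdb module can pause a program so the programmer can inspect its state (the buggy function is invented for the example):

    import pdb

    def average(numbers):
        # Bug: divides by a hard-coded constant instead of len(numbers).
        return sum(numbers) / 2

    pdb.set_trace()  # drops into the debugger before the call below
    print(average([1, 2, 3, 4]))

At the (Pdb) prompt, commands such as step, next, p (print), and continue let the programmer walk through execution and locate the faulty division.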
Programming languages
Allen Downey, in his book How to Think Like a Computer Scientist, writes:
The details look different in different languages, but a few basic instructions
appear in just about every language:
• input: Get data from the keyboard, a file, or some other device.
• output: Display data on the screen or send data to a file or other
device.
• math: Perform basic mathematical operations like addition and
multiplication.
• conditional execution: Check for certain conditions and execute
the appropriate sequence of statements.
• repetition: Perform some action repeatedly, usually with some
variation.
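A few lines of Python can exhibit all five of these basic instructions at once (the program is illustrative, not taken from Downey's book):

    # input: get data from the keyboard.
    n = int(input("How many numbers? "))
    total = 0
    # repetition: perform an action repeatedly.
    for i in range(1, n + 1):
        # math: basic arithmetic.
        total = total + i
    # conditional execution: act only when a condition holds.
    if n > 0:
        # output: display data on the screen.
        print("The sum of 1 through", n, "is", total)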
Programmers
Main article: Programmer
Computer programmers are those who write computer software. Their jobs usually
involve:
• Coding
• Compilation
• Documentation
• Integration
• Maintenance
• Requirements analysis
• Software architecture
• Software testing
• Specification
• Debugging
References
1. ^ Paul Graham (2003). Hackers and Painters. http://www.paulgraham.com/hp.html. Retrieved on 2006-08-22.
2. ^ Kenneth E. Iverson, the originator of the APL programming language, believed that the Sapir–Whorf hypothesis applied to computer languages (without actually mentioning the hypothesis by name). His Turing Award lecture, "Notation as a tool of thought", was devoted to this theme, arguing that more powerful notations aided thinking about computer algorithms. Iverson, K.E., "Notation as a tool of thought", Communications of the ACM, 23: 444–465 (August 1980).
3. ^ New World Encyclopedia Online Edition, New World Encyclopedia.
4. ^ Al-Jazari - the Mechanical Genius, MuslimHeritage.com.
5. ^ A 13th Century Programmable Robot, University of Sheffield.
6. ^ Fowler, Charles B. (October 1967), "The Museum of Music: A History of Mechanical Instruments", Music Educators Journal 54 (2): 45–49.
7. ^ Columbia University Computing History - Herman Hollerith.
8. ^ Survey of job advertisements mentioning a given language.