SOFTWARE DEVELOPMENT
Lesson 4
Programming Techniques and Paradigms
Learning Objectives
2. The ability to re-use the same code at different places in the program without copying it.
3. An easier way to keep track of program flow than a collection of "GOTO" or "JUMP" statements
(which can turn a large, complicated program into spaghetti code).
The main benefit of procedural programming over first- and second-generation languages
is that it allows for modularity, which is generally desirable, especially in large, complicated
programs.
Modularity was one of the earliest abstraction features identified as desirable for a
programming language.
Scoping is another abstraction technique that helps to keep procedures strongly
modular.
It prevents a procedure from accessing the variables of other procedures (and vice
versa), including previous instances of itself (as in recursion).
Procedures are a convenient way to combine pieces of code written by different people or
different groups, including through programming libraries.
1. Assertions
2. Parameter Checking
Assertions
As we program, we make many assumptions about the state of the program at
each point in the code, for example:
• A variable's value is in a particular range
• data != null
• data is sorted
If you don't have control over the calling code, throw exceptions
• e.g., your product might be a class library that is called by code you don’t control
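As a minimal sketch (not from the lesson; the Account class and withdraw method are hypothetical), the Java fragment below combines parameter checking, an exception thrown to calling code we do not control, and an assertion documenting an internal assumption:

public class Account {
    private double balance;

    // Parameter checking: callers we do not control receive an exception.
    public void withdraw(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amount);
        }
        if (amount > balance) {
            throw new IllegalStateException("insufficient funds");
        }
        balance -= amount;
        // Assertion: an internal assumption, checked only when the JVM runs with -ea.
        assert balance >= 0 : "balance must never become negative";
    }
}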
4. Object-oriented programming
paradigm
Object-oriented programming (OOP) is a programming paradigm that uses
"objects" – data structures encapsulating data fields and procedures together
with their interactions – to design applications and computer programs.
Associated programming techniques may include features such as data
abstraction, encapsulation, modularity, polymorphism, and inheritance.
Many modern programming languages now support OOP.
OOP concepts: class
A class defines the abstract characteristics of a thing (an object): its attributes (also
called fields or properties) and its behaviors (the operations it can perform, also called
its methods).
One might say that a class is a blueprint or factory that describes the nature of
something.
Classes provide modularity and structure in an object-oriented computer program.
A class should typically be recognizable to a non-programmer familiar with the problem
domain, meaning that the characteristics of the class should make sense in context.
Also, the code for a class should be relatively self-contained (generally using
encapsulation).
Collectively, the properties and methods defined by a class are called its members.
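For illustration (a sketch, not part of the original material; the Dog class is hypothetical), a small Java class shows attributes and methods, which together form the class's members:

// A class is a blueprint: it declares attributes (fields) and behaviors (methods).
public class Dog {
    // Attributes (fields / properties)
    private String name;
    private int age;

    public Dog(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // Behaviors (methods)
    public void bark() {
        System.out.println(name + " says: Woof!");
    }

    public int getAge() {
        return age;
    }
}

// Objects are created from the blueprint, e.g.:
//   Dog rex = new Dog("Rex", 3);
//   rex.bark();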
OOP concepts: object
To this day, functional programming has not been widely used outside a restricted
number of application areas, such as artificial intelligence.
John Backus presented the FP programming language in his 1977 Turing Award lecture "Can
Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of
Programs".
In the 1970s the ML programming language was created by Robin Milner at the
University of Edinburgh, and David Turner initially developed the language SASL at the
University of St. Andrews and later the language Miranda at the University of Kent.
ML eventually developed into several dialects, the most common of which are now
Objective Caml, Standard ML, and F#.
Also in the 1970s, the development of the Scheme programming language (a partly
functional dialect of Lisp), as described in the influential "Lambda Papers" and the 1985
textbook "Structure and Interpretation of Computer Programs", brought awareness of the
power of functional programming to the wider programming-languages community.
The Haskell programming language was released in the late 1980s in an attempt to gather
together many ideas in functional programming research.
Functional programming languages, especially purely functional ones, have
largely been emphasized in academia rather than in commercial software
development.
Imperative functions, by contrast, can have side effects; because of this they lack
referential transparency, i.e. the same language expression can result in different
values at different times depending on the state of the executing program.
Eliminating side-effects can make it much easier to understand and predict the
behavior of a program, which is one of the key motivations for the
development of functional programming.
Functional Programming: Higher-Order
Functions
Most functional programming languages use higher-order functions, which are
functions that can either take other functions as arguments or return functions as
results.
The differential operator d/dx that produces the derivative of a function f is an example
of this in calculus.
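As an illustrative Java sketch (the derivative method and step size h are assumptions for this example, not part of the lesson), a higher-order function can take a function f and return a new function that numerically approximates its derivative:

import java.util.function.DoubleUnaryOperator;

public class HigherOrder {
    // Higher-order: takes a function as an argument and returns a new function.
    static DoubleUnaryOperator derivative(DoubleUnaryOperator f, double h) {
        return x -> (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
    }

    public static void main(String[] args) {
        DoubleUnaryOperator square = x -> x * x;               // f(x) = x^2
        DoubleUnaryOperator dSquare = derivative(square, 1e-6);
        System.out.println(dSquare.applyAsDouble(3.0));        // approximately 6.0, since f'(x) = 2x
    }
}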
• If the result of a pure expression is not used, it can be removed without affecting other expressions.
• If a pure function is called with parameters that cause no side-effects, the result is constant with respect to that
parameter list (referential transparency), i.e. if the pure function is again called with the same parameters, the same
result will be returned (this can enable caching optimizations).
• If there is no data dependency between two pure expressions, then they can be evaluated in any order, or they can be
performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure
expression is thread-safe and enables parallel execution).
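The properties listed above can be sketched in Java (an illustrative example only; add and addAndCount are hypothetical):

public class Purity {
    private static int counter = 0;

    // Pure: the result depends only on the arguments and there are no side effects,
    // so repeated calls with the same arguments can be cached or reordered freely.
    static int add(int a, int b) {
        return a + b;
    }

    // Impure: the result depends on, and modifies, external state,
    // so the same call can return different values over time.
    static int addAndCount(int a, int b) {
        counter++;
        return a + b + counter;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));          // always 5 (referential transparency)
        System.out.println(addAndCount(2, 3));  // 6 on the first call...
        System.out.println(addAndCount(2, 3));  // ...7 on the second
    }
}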
If the entire language does not allow side-effects, then any evaluation strategy can be used; this
gives the compiler freedom to reorder or combine the evaluation of expressions in a program.
This allows for much more freedom in optimizing the evaluation.
The notion of pure function is central to code optimization in compilers, even for procedural
programming languages.
While most compilers for imperative programming languages can detect pure functions, and
perform common-subexpression elimination for pure function calls, they cannot always do this
for pre-compiled libraries, which generally do not expose this information, thus preventing
optimizations that involve those external functions.
Some compilers, such as GCC, provide extra keywords or function attributes that let a
programmer explicitly mark external functions as pure, to enable such optimizations.
Fortran 95 also allows functions to be designated "pure" for the same purpose.
Functional Programming: Recursion
Iteration in functional languages is usually accomplished via recursion.
Recursion may require maintaining a stack, and thus may lead to inefficient memory
consumption, but tail recursion can be recognized and optimized by a compiler into the
same code used to implement iteration in imperative languages.
The Scheme programming language standard requires implementations to recognize and
optimize tail recursion.
Tail recursion optimization can be implemented by transforming the program into
continuation passing style during compilation, among other approaches.
Common patterns of recursion can be factored out using higher-order functions such as
catamorphisms and anamorphisms, which "fold" and "unfold" a nest of recursive
function calls.
Using such advanced techniques, recursion can be implemented in an efficient manner in
functional programming languages.
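A Java sketch of the idea (illustrative only; note that the standard JVM, unlike Scheme implementations, does not perform tail-call elimination, so the accumulator version is shown next to the loop a tail-call-optimizing compiler could effectively produce):

public class SumExamples {
    // Plain recursion: each call must wait for the recursive result, so the stack grows with n.
    static long sum(int n) {
        return n == 0 ? 0 : n + sum(n - 1);
    }

    // Tail recursion: the recursive call is the last action; the accumulator carries the partial result.
    static long sumTail(int n, long acc) {
        return n == 0 ? acc : sumTail(n - 1, acc + n);
    }

    // The equivalent loop that a tail-call-optimizing compiler can generate from sumTail.
    static long sumLoop(int n) {
        long acc = 0;
        while (n != 0) { acc += n; n--; }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sum(10));         // 55
        System.out.println(sumTail(10, 0));  // 55
        System.out.println(sumLoop(10));     // 55
    }
}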
Functional Programming: Eager vs. Lazy
Evaluation
Functional languages can be categorized by whether they use strict (eager) or non-strict
(lazy) evaluation, concepts that refer to how function arguments are processed when an
expression is being evaluated. Under strict evaluation, the evaluation of any term
containing a failing subterm will itself fail. For example, the expression
print length([2+1, 3*2, 1/0, 5-4])
will fail under eager evaluation because of the division by zero in the third element of the
list. Under lazy evaluation, the length function will return the value 4 (the length of the
list), since evaluating it will not attempt to evaluate the terms making up the list.
Eager evaluation fully evaluates function arguments before invoking the function. Lazy
evaluation does not evaluate function arguments unless their values are required to
evaluate the function call itself.
The usual implementation strategy for lazy evaluation in functional languages is graph
reduction. Lazy evaluation is used by default in several pure functional languages,
including Miranda, Clean and Haskell.
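Java is strict by default, but the list example above can be imitated with Supplier (a sketch of the idea only, not how Miranda, Clean or Haskell actually implement laziness): the failing element is never forced, so taking the length succeeds.

import java.util.List;
import java.util.function.Supplier;

public class LazyDemo {
    public static void main(String[] args) {
        // Each element is wrapped in a Supplier, so nothing is evaluated yet.
        List<Supplier<Integer>> list = List.of(
                () -> 2 + 1,
                () -> 3 * 2,
                () -> 1 / 0,   // would throw ArithmeticException if forced
                () -> 5 - 4);

        // "Lazy" length: the elements are never forced, so the division by zero never runs.
        System.out.println(list.size());   // prints 4

        // Eager behavior would force every element first and fail:
        // list.forEach(s -> System.out.println(s.get()));   // ArithmeticException
    }
}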
Functional Programming: Type Inference
Especially since the development of Hindley–Milner type inference in the 1970s, functional
programming languages have tended to use typed lambda calculus, as opposed to the untyped
lambda calculus used in Lisp and its variants (such as Scheme).
Type inference, or implicit typing, refers to the ability to deduce automatically the type of the
values manipulated by a program. It is a feature present in some strongly statically typed
languages.
The presence of strong compile-time type checking makes programs more reliable, while type
inference frees the programmer from the need to manually declare types to the compiler.
Type inference is characteristic of, but not limited to, functional programming languages.
Many imperative programming languages have adopted type inference in order to
improve type safety.
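Java's local-variable type inference (var, available since Java 10) is a limited, local form of this idea rather than full Hindley–Milner inference; the compiler deduces each static type from the initializer, and the variables remain statically typed:

import java.util.List;
import java.util.Map;

public class InferenceDemo {
    public static void main(String[] args) {
        var message = "hello";                 // inferred as String
        var numbers = List.of(1, 2, 3);        // inferred as List<Integer>
        var index = Map.of("a", 1, "b", 2);    // inferred as Map<String, Integer>

        // The inferred types are still checked at compile time:
        // message = 42;   // would not compile: incompatible types
        System.out.println(message + " " + numbers + " " + index);
    }
}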
Functional Programming: In Non-functional
Languages
It is possible to employ a functional style of programming in languages that are not
traditionally considered functional languages.
Some non-functional languages have borrowed features such as higher-order functions
and list comprehensions from functional programming languages. This makes it easier to
adopt a functional style when using these languages.
Functional constructs such as higher-order functions and lazy lists can be obtained in C++
via libraries, such as in FC++.
In C, function pointers can be used to get some of the effects of higher-order functions.
Many object-oriented design patterns are expressible in functional programming terms: for
example, the Strategy pattern dictates use of a higher-order function, and the Visitor
pattern roughly corresponds to a catamorphism, or fold.
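For instance, in Java the Strategy pattern collapses to passing a function value (an illustrative sketch; the pricing example and discount strategies are invented for this purpose):

import java.util.function.DoubleUnaryOperator;

public class PricingDemo {
    // Instead of a Strategy interface plus several implementing classes,
    // the strategy is simply a function passed as an argument.
    static double price(double base, DoubleUnaryOperator discountStrategy) {
        return discountStrategy.applyAsDouble(base);
    }

    public static void main(String[] args) {
        DoubleUnaryOperator none = p -> p;
        DoubleUnaryOperator tenPercentOff = p -> p * 0.9;

        System.out.println(price(100.0, none));           // 100.0
        System.out.println(price(100.0, tenPercentOff));  // 90.0
    }
}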
7. Reflective programming
paradigm
Reflection is the process by which a computer program can observe and
modify its own structure and behavior at runtime.
In most computer architectures, program instructions are stored as data -
hence the distinction between instruction and data is merely a matter of how
the information is treated by the computer and programming language.
Normally, instructions are executed and data is processed; however, in some
languages, programs can also treat instructions as data and therefore make
reflective modifications.
Reflection is most commonly used in high-level virtual machine programming
languages like Smalltalk and scripting languages, and less commonly used in
manifestly typed and/or statically typed programming languages such as
Java, C, ML or Haskell.
Reflection-oriented programming includes self-examination, self-
modification, and self-replication.
Ultimately, the reflection-oriented paradigm aims at dynamic program
modification, which can be determined and executed at runtime.
Some imperative approaches, such as procedural and object-oriented
programming paradigms, specify that there is an exact predetermined
sequence of operations with which to process data.
The reflection-oriented programming paradigm, however, adds that
program instructions can be modified dynamically at runtime and invoked
in their modified state.
That is, the program architecture itself can be decided at runtime based
upon the data, services, and specific operations that are applicable at
runtime.
Reflection can be used for observing and/or modifying program execution at
runtime. A reflection-oriented program component can monitor the execution of
an enclosure of code and can modify itself according to a desired goal related to
that enclosure. This is typically accomplished by dynamically assigning program
code at runtime.
Reflection can thus be used to adapt a given program to different situations
dynamically.
Reflection-oriented programming almost always requires additional knowledge,
framework, relational mapping, and object relevance in order to take advantage
of this much more generic code execution mode.
It therefore requires the translation process to retain in the executable code much of
the higher-level information present in the source code, which leads to larger
executables.
However, in cases where the language is interpreted, much of this information is
already kept for the interpreter to function, so not much overhead is required in
these cases.
A language supporting reflection provides a number of features available at
runtime that would otherwise be very obscure or impossible to accomplish in a
lower-level language. Some of these features are the abilities to:
• Discover and modify source code constructions (such as code blocks, classes, methods,
protocols, etc.) as a first-class object at runtime.
• Convert a string matching the symbolic name of a class or function into a reference to or
invocation of that class or function.
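In Java, for example, a class and method named only by strings at runtime can be looked up and invoked through java.lang.reflect (a minimal sketch; the choice of java.lang.String and toUpperCase is arbitrary):

import java.lang.reflect.Method;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Convert a string into a reference to a class...
        Class<?> stringClass = Class.forName("java.lang.String");

        // ...discover one of its methods...
        Method toUpper = stringClass.getMethod("toUpperCase");

        // ...and invoke it on an instance chosen at runtime.
        Object result = toUpper.invoke("reflection");
        System.out.println(result);   // REFLECTION

        // The program can also enumerate structure it was not compiled against:
        for (Method m : stringClass.getDeclaredMethods()) {
            System.out.println(m.getName());
        }
    }
}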
The Common Gateway Interface allowed scripting languages to control web servers,
and thus communicate over the web. Scripting languages that made use of CGI early in
the evolution of the Web include Perl, ASP, and PHP.
Modern web browsers typically provide a language for writing extensions to the
browser itself, as well as several standard embedded languages for controlling the
browser, such as JavaScript, CSS, and ActionScript.
Scripting Languages: Types of Scripting
Languages
Job control languages and shells
• A major class of scripting languages has grown out of the automation of job control, which
relates to starting and controlling the behavior of system programs. (In this sense, one might
think of shells as being descendants of IBM's JCL, or Job Control Language, which was used for
exactly this purpose.)
• Many of these languages' interpreters double as command-line interpreters, such as the Unix
shell or the MS-DOS COMMAND.COM.
• Others, such as AppleScript, offer the use of English-like commands to build scripts. Combined
with Mac OS X's Cocoa framework, this allows users to build entire applications using
AppleScript and Cocoa objects.
GUI scripting
• With the advent of graphical user interfaces a specialized kind of scripting language emerged
for controlling a computer. These languages interact with the same graphic windows, menus,
buttons, and so on that a system generates.
• They do this by simulating the actions of a human user. These languages are typically used to
automate user actions or configure a standard state. Such languages are also called "macros"
when control is through simulated key presses or mouse clicks.
• They can be used to automate the execution of complex tasks in GUI-controlled applications.
Application-specific scripting languages
• Many large application programs include an idiomatic scripting language tailored to the needs of the
application user.
• Likewise, many computer game systems use a custom scripting language to express the game
components’ programmed actions.
• Languages of this sort are designed for a single application; and, while they may superficially resemble a
specific general-purpose language (e.g. QuakeC, modeled after C), they have custom features that
distinguish them.
• Emacs Lisp, a dialect of Lisp, contains many special features that make it useful for extending the editing
functions of the Emacs text editor.
• A host of special-purpose languages has developed to control web browsers' operation. These include
JavaScript, VBScript (Microsoft Internet Explorer), XUL (Mozilla Firefox), and XSLT, a language for
transforming XML content.
• Client-side scripting generally refers to the class of computer programs on the web that are executed by the
user's web browser, instead of server-side (on the web server). This type of computer programming is an
important part of the Dynamic HTML (DHTML) concept, enabling web pages to be scripted; that is, to have
different and changing content depending on user input, environmental conditions (such as the time of day), or
other variables.
• Web authors write client-side scripts in languages such as JavaScript (Client-side JavaScript) and VBScript.
• Techniques that combine XML and JavaScript scripting to improve the user's impression of
responsiveness have become significant enough to acquire names of their own, such as AJAX.
Scripting Languages: Types of Scripting
Languages
Client-side scripts are often embedded within an HTML document (hence known as an
"embedded script"), but they may also be contained in a separate file that is referenced
by the documents that use it (hence known as an "external script").
Upon request, the necessary files are sent to the user's computer by the web server on
which they reside. The user's web browser executes the script using an embedded
interpreter, then displays the document, including any visible output from the script.
Client-side scripts may also contain instructions for the browser to follow in response to
certain user actions (e.g., clicking a button). Often, these instructions can be followed
without further communication with the server.
In contrast, server-side scripts, written in languages such as Perl, PHP, and server-side
VBScript, are executed by the web server when the user requests a document. They
produce output in a format understandable by web browsers (usually HTML), which is
then sent to the user's computer. Documents produced by server-side scripts may, in turn,
contain or refer to client-side scripts.
• Client-side scripts have greater access to the information and functions available on the
user's browser, whereas server-side scripts have greater access to the information and
functions available on the server.
• Server-side scripts require that their language's interpreter be installed on the server,
and produce the same output regardless of the client's browser, operating system, or
other system details.
• Client-side scripts do not require additional software on the server (making them popular
with authors who lack administrative access to their servers). However, they do require
that the user's web browser understands the scripting language in which they are
written. It is therefore impractical for an author to write scripts in a language that is not
supported by popular web browsers.
• Unfortunately, even languages that are supported by a wide variety of browsers may not
be implemented in precisely the same way across all browsers and operating systems.
9. Aspect-oriented programming
paradigm
• Aspect-oriented programming entails breaking down program logic into
distinct parts (so-called concerns, or cohesive areas of functionality). Most paradigms
let such concerns be grouped and encapsulated into separate, independent entities
such as procedures or classes.
• But some concerns defy such encapsulation and are called
cross-cutting concerns because they "cut across" multiple abstractions in a
program.
• The aspects can potentially be applied to different programs, provided that the
pointcuts are applicable.
Aspect-Oriented Programming:
Implementation
Most implementations produce programs through a process known as weaving - a
special case of program transformation.
An aspect weaver reads the aspect-oriented code and generates appropriate
object-oriented code with the aspects integrated.
AOP programs can affect other programs in two different ways, depending on the
underlying languages and environments:
1. a combined program is produced, valid in the original language and indistinguishable from an
ordinary program to the ultimate interpreter
2. the ultimate interpreter or environment is updated to understand and implement AOP features
[Figure: the weaving process. During the compilation process, an aspect weaver combines the base code with the aspect code to produce the woven code.]
Aspect-Oriented Programming:
History
AOP as such has a number of antecedents: the Visitor Design Pattern, CLOS MOP
(Common Lisp Object System’s MetaObject Protocol).
Gregor Kiczales and colleagues at Xerox PARC developed AspectJ (perhaps the
most popular general-purpose AOP package) and made it available in 2001.
Aspect-Oriented Programming: Motivation
• Typically, an aspect is scattered or tangled as code, making it harder to understand and
maintain.
• It is scattered by virtue of its code (such as logging) being spread over a number of
unrelated functions that might use it, possibly in entirely unrelated systems, different
source languages, etc.
• That means that changing the logging strategy can require modifying all affected modules.
Aspects become tangled not only with the mainline function of the systems in which they are
expressed but also with each other.
• That means changing one concern entails understanding all the tangled concerns or
having some means by which the effect of changes can be inferred.
Aspect-Oriented Programming: Join Point
Model
• The advice-related component of an aspect-oriented language defines a join point model
(JPM). A JPM defines three things:
• When the advice can run. These are called join points because they are points in a running program where
additional behavior can be usefully joined. A join point needs to be addressable and understandable by an ordinary
programmer to be useful. It should also be stable across inconsequential program changes in order for an aspect to
be stable across such changes. Many AOP implementations support method executions and field references as join
points.
• A way to specify (or quantify) join points, called pointcuts. Pointcuts determine whether a given join point matches.
Most useful pointcut languages use a syntax like the base language (for example, AspectJ uses Java signatures) and
allow reuse through naming and combination.
• A means of specifying code to run at a join point. AspectJ calls this advice, and can run it before, after, and around
join points. Some implementations also support things like defining a method in an aspect on another class.
• Join-point models can be compared based on the join points exposed, how join points are
specified, the operations permitted at the join points, and the structural enhancements that
can be expressed.
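A minimal sketch in AspectJ's Java annotation style ties these three elements together (it assumes the AspectJ runtime and weaver are available; the banking package and logging advice are hypothetical): the pointcut selects the join points, and the advice runs before them.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class LoggingAspect {
    // Pointcut: selects join points -- here, the execution of any public method
    // of any class in the (hypothetical) banking package or its subpackages.
    @Pointcut("execution(public * banking..*.*(..))")
    void bankingOperation() {}

    // Advice: the code to run at the selected join points (in this case, before them).
    @Before("bankingOperation()")
    public void logCall(JoinPoint jp) {
        System.out.println("Entering " + jp.getSignature());
    }
}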
Aspect-Oriented Programming:
Implementation
Java's well-defined binary form enables bytecode weavers to work with any Java
program in .class-file form. Bytecode weavers can be deployed during the build
process or, if the weave model is per-class, during class loading.
AspectJ started with source-level weaving in 2001, delivered a per-class bytecode
weaver in 2002, and offered advanced load-time support after the integration of
AspectWerkz in 2005.
Deploy-time weaving offers another approach. This basically implies post-processing,
but rather than patching the generated code, this weaving approach subclasses
existing classes so that the modifications are introduced by method-overriding. The
existing classes remain untouched, even at runtime, and all existing tools (debuggers,
profilers, etc.) can be used during development.
Aspect-Oriented Programming: Problems
Programmers need to be able to read code and understand what is happening in order to prevent errors.
Even with proper education, understanding crosscutting concerns can be difficult without proper support for
visualizing both static structure and the dynamic flow of a program. Starting in 2010, IDEs such as Eclipse have
begun to support the visualizing of crosscutting concerns, as well as aspect code assist and refactoring.
Given the intrusive power of AOP weaving, if a programmer makes a logical mistake in expressing crosscutting, it
can lead to widespread program failure.
Conversely, another programmer may change the join points in a program – e.g., by renaming or moving
methods – in ways that the aspect writer did not anticipate, with unintended consequences.
One advantage of modularizing crosscutting concerns is enabling one programmer to affect the entire system
easily; as a result, such problems present as a conflict over responsibility between two or more developers for a
given failure.
However, the solution for these problems can be much easier in the presence of AOP, since only the aspect need
be changed, whereas the corresponding problems without AOP can be much more spread out.
Aspect-Oriented Programming: Implementations
• The following programming languages have implemented AOP, within the language,
or as an external library:
• C / C++ / C#, COBOL, Objective-C frameworks, ColdFusion, Common Lisp, Delphi, Haskell, Java, JavaScript, ML,
PHP, Scheme, Perl, Prolog, Python, Ruby, Squeak Smalltalk and XML.
References
1. John von Neumann. First Draft of a Report on the EDVAC, 1945.
2. A. M. Turing. On Computable Numbers, with an Application to the Entscheidungsproblem.
Proceedings of the London Mathematical Society, Series 2, 42: 230–265, 1937.
3. J. R. Gurd, C. C. Kirkham, I. Watson. The Manchester prototype dataflow computer.
Communications of the ACM, Volume 28, Issue 1 (special section on computer architecture),
January 1985, pages 34–52. ACM, New York, NY, USA.
4. Alan Bawden, Richard Greenblatt, Jack Holloway, Thomas Knight, David Moon, Daniel
Weinreb, LISP Machine Progress Report, MIT AI Lab memos, AI-444, 1977.
5. John Backus. Can programming be liberated from the von Neumann style?: a functional
style and its algebra of programs. Communications of the ACM, Volume 21, Issue 8, August
1978, pages 613–641. ACM, New York, NY, USA.
6. Harold Abelson, Gerald Jay Sussman. Structure and Interpretation of Computer
Programs. The MIT Press. 1996.