Intermediate Python Programming
It was in 1989, December to be precise, that a man named Guido van Rossum
began working on a computer language that he named Python. He had
previously worked with the team that devised the ABC language that was
part of the Amoeba operating system from the 1980s. Although he found the
ABC language workable, it lacked a few features, and this caused him no
end of frustration. What he wanted was a high-level programming language,
something that would speed up the development of Amoeba project utilities,
and ABC certainly wasn't it. However, ABC was still set to play an influential and
significant part in the development of the new language because Guido
borrowed the bits of ABC that he did like and then teamed them up with
features that were missing from the language.
The very first edition of Python was published in February 1991. It was
object oriented; it had a system of modules; exception handling was included,
along with functions and all the core data types. Python v1.0 was officially
released in January of 1994 and included programming concepts like map,
lambda, filter and reduce.
Guido van Rossum released v1.2 while he was still working on the Amoeba
project before he moved on to the Corporation for National Research
Initiatives in Virginia. From there, he continued to work on Python using
indirect funding from DARPA to release a few more versions of the
programming language.
By the time Python 1.4 rolled around, several new features had been
included, such as keyword arguments inspired by Modula-3, built-in support
for complex numbers, and name mangling, a basic form of data hiding. On
31 December 1997, v1.5 was released, and September 5,
2000 saw v1.6. This was followed very closely by v2.0, in October 2000 and
this latest version included even more features, like list comprehensions. This
concept came from two other programming languages, Haskell and SETL. A
garbage collection system capable of collecting reference cycles was also
introduced.
The first big update to Python came with v2.2. All the in-built types in
Python and all user-defined classes that were written in the Python language
were unified into a single hierarchy. This unification is what gave Python its
model of pure and consistent object orientation. The update to the Python
class system brought more new features to provide a better experience of
programming for all users, including:
The ability to subclass any of the built-in data types
The introduction of class and static methods
The introduction of properties defined through getter and setter methods
Updates to metaclasses, to the __new__() method, and to the super()
function, along with an update to the MRO algorithm
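As a brief illustration of the first two of these features, here is a minimal sketch; the class and method names are invented for illustration:

```python
# Subclassing a built-in type, possible since the Python 2.2 unification
class EvenList(list):
    def evens(self):
        return [x for x in self if x % 2 == 0]

# Class and static methods, introduced in the same release
class Greeter(object):
    @classmethod
    def cls_name(cls):
        # a class method receives the class itself as its first argument
        return cls.__name__

    @staticmethod
    def greet(name):
        # a static method receives no implicit first argument
        return "hello " + name

print(EvenList([1, 2, 3, 4]).evens())   # [2, 4]
print(Greeter.cls_name())               # Greeter
print(Greeter.greet("obi"))             # hello obi
```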
Python remained like this until December 3, 2008, when the next major
version was released. This was Python 3, and it was released to fix a few basic
design flaws evident in Python 2. These fixes could not be implemented in
the Python 2 series while maintaining backward compatibility; they required
a major new release.
Python 2 vs Python 3
Python 3 has caused the most disruption to the ecosystem, bringing with it
some major changes. These changes include:
print is now a function rather than a statement
Some well-known APIs, like dict.keys(), dict.values(), and range,
now return views or iterators instead of lists, improving efficiency
when these APIs are used
The ordering comparison rules are now stricter. For example, you
can no longer sort a heterogeneous list, because all of the list elements
have to be comparable to one another
The integer types have been reduced to one: what used to be a long is now
simply an int
When carrying out division of a pair of integers, the result will now be
a float and not an integer. If you want integer (floor) division, you
will need to use the // operator.
All Python text is now Unicode, while encoded Unicode text is
represented as binary data. Attempting to mix text and data raises
an exception, which breaks backward compatibility with
Python 2
New syntax was introduced, including the nonlocal statement, function
annotations, set literals, extended iterable unpacking, and dictionary
comprehensions, to name just a few
Other syntax was updated, including metaclass specification, exception
handling and list comprehensions.
You can find all the details about the changes from Python 2 to Python 3 on
the official Python website but, for the purposes of this book, we are going to
be using Python 3.
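A few of the listed changes can be observed directly in a Python 3 session; a minimal sketch:

```python
print(7 / 2)           # true division of two ints now yields a float: 3.5
print(7 // 2)          # floor division returns an int: 3
print(range(5))        # range is now a lazy object: range(0, 5)
print(list(range(5)))  # materialize it explicitly: [0, 1, 2, 3, 4]
```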
That completes your introduction to the Python programming language. We
are now going to delve into some of the intermediate concepts of Python and
I have provided you with plenty of working examples so that you can see
exactly how these concepts work.
Welcome to the world of intermediate Python programming!
Chapter 1: Object-Oriented Programming (OOP)
class Account(object):
    num_accounts = 0

    def del_account(self):
        Account.num_accounts -= 1

    def inquiry(self):
        return self.balance
The following objects are introduced by a class definition:
Class objects
Instance objects
Method objects
Class Objects
When a program is executed and a class definition is encountered, a new
namespace is created; this is the namespace that every class variable binding
and method definition goes into. Be aware that this namespace does not
create a new local scope that class methods can use. Because of this, fully
qualified names are needed when accessing a variable in a method. For
example, let's say you have a class called Account with a variable called
num_accounts. Any method that attempts to access this variable can only do
so by using the fully qualified name, Account.num_accounts. If this name is
not used in the __init__() method, the result will be an error. The correct
version is displayed below:
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def inquiry(self):
        return self.balance
>>> Account.num_accounts
0
>>> x = Account('obi', 0)
>>> x.deposit(10)
>>> Account.inquiry(x)
10
Class and Static Methods
Every method defined in a class operates, by default, on instances.
However, you can define a class method or a static method by decorating
the method with the appropriate @classmethod or @staticmethod decorator.
Static Methods
A static method is a normal function that resides in a class namespace. In
Python 2, referencing a static method from a class returns a function type
rather than an unbound method type (in Python 3 the distinction disappears,
since ordinary methods referenced from the class also come back as plain
functions), as in this example:
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @staticmethod
    def type():
        return "Current Account"
>>> Account.deposit
<unbound method Account.deposit>
>>> Account.type
<function type at 0x106893668>
When you define a static method, you use the @staticmethod decorator, and
the method does not take the self argument. Static methods allow for better
organization: all the code related to a specific class is placed inside that
class, and a subclass can override the method if necessary.
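For instance, a static method can be called either on the class or on an instance; a minimal sketch using a stripped-down Account class:

```python
class Account(object):
    @staticmethod
    def type():
        return "Current Account"

print(Account.type())    # called on the class: Current Account
acct = Account()
print(acct.type())       # called on an instance: Current Account
```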
Class Methods
As the name implies, a class method operates on a class rather than on an
instance. We use the @classmethod decorator, and it is the class, not the
instance, that is passed to the method as the first argument:
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_json(cls, params_json):
        params = json.loads(params_json)
        return cls(params.get("name"), params.get("balance"))

    @staticmethod
    def type():
        return "Current Account"
A good way to think of a class method is as a kind of factory for object
creation. Imagine that the data for the Account class arrives in different
formats: strings, JSON, tuples, and so on. We can't define an unlimited
number of __init__ methods, because a Python class is only allowed one, so
this is where class methods step into the breach to save the day.
In the Account class definition above, we wanted to initialize an account
from a JSON string object, so we defined a class factory method called
from_json; it takes a JSON string, extracts the parameters, and then creates
the account object using those parameters. Another class method example is
dict.fromkeys(), which is used to create dict objects from a sequence of
keys and a value.
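To make the factory idea concrete, here is a self-contained sketch of the from_json alternative constructor in use; the JSON string is invented for illustration:

```python
import json

class Account(object):
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

    @classmethod
    def from_json(cls, params_json):
        # alternative constructor: build an Account from a JSON string
        params = json.loads(params_json)
        return cls(params.get("name"), params.get("balance"))

acct = Account.from_json('{"name": "obi", "balance": 10}')
print(acct.name, acct.balance)         # obi 10
print(dict.fromkeys(["a", "b"], 0))    # {'a': 0, 'b': 0}
```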
Special Methods
On occasion, you may have user-defined classes that you want to customize,
maybe to change how the class object is created and then initialized or
perhaps you want to provide certain operations with polymorphic behavior.
Polymorphic behavior allows a user-defined class to define its own
implementation for certain operations, and to support this Python provides a
set of special methods. These usually follow the __*__ format, where * is
the name of the method. __new__ and __init__ are examples of special
methods for customizing the way objects are created and initialized; for
emulating built-in types you would use methods such as __getitem__,
__len__, __add__, and __sub__; and for customizing access to attributes you
would use __getattr__ or __getattribute__, for example. There are many
more special methods, and we are going to look at the most important ones
below.
Object Creation Special Methods
To create a new instance of a class, Python uses the __new__ method to
create the instance and then the __init__ method to initialize it. Most of you
will already be familiar with defining an __init__ method; the __new__
method is not usually defined by the user for each class, though you can
define it if you want to customize how class instances are created.
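As a sketch of what overriding __new__ can look like, the TrackedObject class below (an invented name, for illustration) counts how many instances have been created:

```python
class TrackedObject(object):
    instances_created = 0

    def __new__(cls, *args, **kwargs):
        # __new__ creates the instance; __init__ then initializes it
        cls.instances_created += 1
        return super(TrackedObject, cls).__new__(cls)

    def __init__(self, name):
        self.name = name

a = TrackedObject("first")
b = TrackedObject("second")
print(TrackedObject.instances_created)  # 2
```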
Attribute Access Special Methods
We can also customize attribute access for a class instance by implementing
a handful of special methods. The examples below use the following Account
class:
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_dict(cls, params):
        params_dict = json.loads(params)
        return cls(params_dict.get("name"), params_dict.get("balance"))

    @staticmethod
    def type():
        return "Current Account"
The __getattr__(self, name) method is only called when we reference an
attribute that is neither an instance attribute nor found in the object's class
tree. The method should return a value for the attribute or raise an
AttributeError exception. Let's say that acct is an Account class instance;
attempting to access an attribute that doesn't exist will result in the method
being called:
>>> acct = Account("obi", 10)
>>> acct.number
But hold on! Where is the attribute called number? There isn't one, so
__getattr__ is invoked. Also beware: if __getattr__ itself references an
instance attribute that doesn't exist, you will end up with an infinite loop,
because __getattr__ will be called again and again without end.
The __setattr__(self, name, value) method is called whenever we attempt to
assign an attribute. __setattr__ should insert the value into the instance
attribute dictionary directly (self.__dict__[name] = value), because writing
self.name = value would result in a recursive call and an infinite loop.
Whenever we execute del obj.name, __delattr__(self, name) is called.
And lastly, __getattribute__(self, name) is called unconditionally to
implement every attribute access for a class instance.
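The interplay between these methods can be sketched with a small, invented class that records attribute writes and rejects unknown reads:

```python
class Recorded(object):
    def __getattr__(self, name):
        # called only when normal attribute lookup fails
        raise AttributeError("no attribute named %r" % name)

    def __setattr__(self, name, value):
        print("setting %s = %r" % (name, value))
        # write through __dict__; self.name = value here would recurse forever
        self.__dict__[name] = value

r = Recorded()
r.balance = 10      # prints: setting balance = 10
print(r.balance)    # 10; found normally, so __getattr__ is not called
```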
Type Emulation Special Methods
Python has special syntax that works with specific types. For example, we
can access elements in tuples and lists with the index notation []; we can
use the + operator to add numeric values; and so on. We can also create
classes that use this special syntax, and we do so by implementing the
special methods that the interpreter calls whenever it comes across the
syntax. The example below shows how this works to emulate a list in
Python:
class CustomList(object):
    def __init__(self, container=None):
        self.container = container if container is not None else []

    def __len__(self):
        # this is called when a user calls len(CustomList instance)
        return len(self.container)

    def __repr__(self):
        return str(self.container)
These are some of the ways in which class behavior can be customized with
the definition of different special methods.
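To round the emulation example off, here is a runnable sketch of CustomList that also supports indexing; the __init__ shown is an assumption about how the container is populated:

```python
class CustomList(object):
    def __init__(self, items=None):
        self.container = list(items or [])

    def __len__(self):
        # called when a user calls len(CustomList instance)
        return len(self.container)

    def __getitem__(self, index):
        # called for the cl[index] syntax
        return self.container[index]

    def __repr__(self):
        return str(self.container)

cl = CustomList([1, 2, 3])
print(len(cl))  # 3
print(cl[1])    # 2
print(cl)       # [1, 2, 3]
```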
Chapter 2: Descriptors
Descriptors are an important part of Python and are very widely used. It is
vital that you understand descriptors if you want an edge over other
programmers. To help, in this section I am going to discuss descriptors using
some common scenarios that you are likely to come across in your
programming. I will also explain what descriptors are and how to use them
to solve these scenarios; throughout, new-style classes are assumed.
Imagine a program where strict type checking must be enforced for object
attributes. Python is a dynamic language and so has no built-in support for
type checking, but we can put our own version of it in place, even if it is a
very basic one. The example below shows the conventional way we would
type check an object attribute:
class Person(object):  # an illustrative class
    def __init__(self, name, age):
        if isinstance(name, str):
            self.name = name
        else:
            raise TypeError("Must be a string")
        if isinstance(age, int):
            self.age = age
        else:
            raise TypeError("Must be an int")
This method is one way to enforce type checking, but it gets unwieldy as
more arguments are added. Another way would be to create a
type_check(type, val) function to be called before each assignment in the
__init__ method; but how would we then implement the checking when the
attribute value is set somewhere else? Some would suggest the Java
approach of getters and setters, but that is cumbersome and really isn't
Pythonic.
Now imagine a program where we want an attribute that is initialized just
once at runtime and then becomes read-only. You could probably come up
with a few special methods to implement this, but it would still be a
cumbersome way of doing things.
Lastly, imagine a program in which we customize access to object attributes,
for example to log each access. Again, it is not too hard to find a solution,
but once again it will be cumbersome and, perhaps more importantly, may
not be reusable.
All three scenarios are linked by one fact: they each relate to attribute
references, in that we are attempting to customize attribute access.
How Do Python Descriptors Provide Solutions?
The solutions that descriptors provide to each of these scenarios are simple,
robust, reusable, and quite beautiful. To put it simply, a Python descriptor is
an object that represents an attribute value. This means that if an account
object has an attribute name, the descriptor is an object that represents that
attribute's value. A descriptor is any object that implements any of the
following special methods: __set__, __get__, or __delete__. Below you can
see the signature for each method:
descr.__get__(self, obj, type=None) --> value
descr.__set__(self, obj, value) --> None
descr.__delete__(self, obj) --> None

As an example, the descriptor below enforces type checking on an attribute;
the value is stored on the instance under an underscore-prefixed name so that
attribute access does not recurse into the descriptor:

class TypedProperty(object):
    def __init__(self, name, type, default=None):
        self.name = "_" + name
        self.type = type
        self.default = default if default else type()
    def __get__(self, instance, cls):
        return getattr(instance, self.name, self.default)
    def __set__(self, instance, value):
        if not isinstance(value, self.type):
            raise TypeError("Must be a %s" % self.type)
        setattr(instance, self.name, value)
    def __delete__(self, instance):
        raise AttributeError("Cannot delete the attribute")

class Foo(object):
    name = TypedProperty("name", str)
    num = TypedProperty("num", int, 42)
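Pulling the pieces together, here is a self-contained sketch of the typed-attribute descriptor in use; it stores each value under an underscore-prefixed name so that reads do not recurse into the descriptor itself:

```python
class TypedProperty(object):
    def __init__(self, name, typ, default=None):
        self.name = "_" + name
        self.typ = typ
        self.default = default if default is not None else typ()

    def __get__(self, instance, cls):
        return getattr(instance, self.name, self.default)

    def __set__(self, instance, value):
        if not isinstance(value, self.typ):
            raise TypeError("Must be a %s" % self.typ)
        setattr(instance, self.name, value)

class Foo(object):
    name = TypedProperty("name", str)
    num = TypedProperty("num", int, 42)

f = Foo()
f.name = "obi"
print(f.name)   # obi
print(f.num)    # 42, the default
try:
    f.num = "five"
except TypeError as e:
    print(e)    # Must be a <class 'int'>
```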
Python provides the built-in property function as a simpler way to create a
data descriptor:

property(fget=None, fset=None, fdel=None, doc=None)

class Account(object):
    def __init__(self, acct_num):
        self._acct_num = acct_num
    def get_acct_num(self):
        return self._acct_num
    def set_acct_num(self, value):
        self._acct_num = value
    def del_acct_num(self):
        del self._acct_num
    acct_num = property(get_acct_num, set_acct_num, del_acct_num, "the account number")

The same property can be defined with the decorator syntax:

class C(object):
    def __init__(self):
        self._x = None
    @property
    def x(self):
        # the getter; with only this defined, x is a read-only property
        return self._x
    @x.setter
    def x(self, value):
        # the setter makes the property writeable
        self._x = value
    @x.deleter
    def x(self):
        del self._x
To make a property read-only, all you need to do is leave out the setter
method.
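A short sketch of a read-only property; the Circle class is invented for illustration:

```python
class Circle(object):
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        # no setter is defined, so radius is read-only
        return self._radius

c = Circle(2)
print(c.radius)   # 2
try:
    c.radius = 5
except AttributeError:
    print("radius is read-only")
```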
Descriptors are used very widely within Python itself; functions, class
methods, and static methods are all examples of non-data descriptors.
Chapter 3: Functions in Python
Functions are first-class objects in Python; defining one and inspecting it
shows the attributes it carries:

def square(x):
    return x**2

>>> square
<function square at 0x031AA230>
>>> dir(square)
['__annotations__', '__call__', '__class__', '__closure__', '__code__', '__defaults__',
'__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__',
'__get__', '__getattribute__', '__globals__', '__gt__', '__hash__', '__init__',
'__init_subclass__', '__kwdefaults__', '__le__', '__lt__', '__module__', '__name__',
'__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__',
'__setattr__', '__sizeof__', '__str__', '__subclasshook__']
>>>
Some of the more important attributes of functions are listed below.
__doc__ returns the docstring of the specified function:
def square(x):
    """return the square of the specified number"""
    return x**2
>>> square.__doc__
'return the square of the specified number'
__name__ returns the function name.
def square(x):
    """return the square of the specified number"""
    return x**2
>>> square.__name__
'square'
__module__ will return the module name where the function is defined.
def square(x):
    """will return the square of the specified number"""
    return x**2
>>> square.__module__
'__main__'
__defaults__ returns a tuple of the default argument values, while
__globals__ returns a reference to the dictionary holding the global
variables for the function.
def square(x):
    """will return the square of the specified number"""
    return x**2
>>> square.__globals__
{'__builtins__': <module 'builtins' (built-in)>, '__name__': '__main__', 'square':
<function square at 0x10f099c08>, '__doc__': None, '__package__': None}
__dict__ returns the namespace that supports arbitrary function
attributes:
def square(x):
    """will return the square of the specified number"""
    return x**2
>>> square.__dict__
{}
__closure__ returns a tuple of the cells that hold the bindings for the free
variables of the function.
You can pass a function as an argument to another function; a function that
can take another function as an argument is usually referred to as a
higher-order function. Higher-order functions are vital to functional
programming, and an example is map. This takes a function and an iterable
and applies the function to each of the items in the iterable, returning an
iterator over the results (which we can turn into a list). The next example
shows how this works when we pass the previously defined square function
and an iterable of numbers to map:
>>> list(map(square, range(10)))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
We can also define a function inside the body of another function, and a
function may be returned from another function call:
def outer():
    outer_var = "outer variable"
    def inner():
        return outer_var
    return inner
In that example, we defined a function called inner inside another called
outer, and returned inner when outer was executed. You can also assign a
function to a variable in the same way you would any object:
def outer():
    outer_var = "outer variable"
    def inner():
        return outer_var
    return inner
>>> func = outer()
>>> func
<function inner at 0x031AA270>
>>>
In this example, the outer function returns a function when it is called, and
that function is assigned to the variable func; func can then be called in the
same way as the returned function:
>>> func()
'outer variable'
Function Definitions
To create a user-defined function, we use the def keyword. Any function
definition is an executable statement:
def square(x):
    return x**2
Note that when the module containing the function is loaded into the
interpreter, or when the function is defined in the Python REPL, the
definition statement, def square(x), is executed.
Function Call Arguments
As well as normal arguments, functions in Python provide support for a
variable number of arguments. These come in three types as described below:
Default argument values - allow users to define default values for a
function's arguments. In that case, fewer arguments are needed to call
the function; Python uses the supplied defaults for the arguments that
are not given during the function call. The next example shows how
this works:
def show_args(arg, def_arg=1, def_arg2=2):
    return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)
We defined this function with a single normal positional argument, arg, and
a pair of default arguments, def_arg and def_arg2. The function can be
called in any one of the following ways:
By supplying only the non-default positional argument values; the other
arguments take on the supplied default values:
def show_args(arg, def_arg=1, def_arg2=2):
    return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)
>>> show_args("peace")
'arg=peace, def_arg=1, def_arg2=2'
By supplying values that override some of the defaults, in addition to the
non-default positional arguments:
def show_args(arg, def_arg=1, def_arg2=2):
    return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)
>>> show_args("peace", "love")
'arg=peace, def_arg=love, def_arg2=2'
At the time of the function call, the normal argument is supplied as usual,
while the optional arguments may be supplied by keyword or unpacked into
the call.
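Python also supports arbitrary argument lists via * and **; a minimal sketch of how extra positionals and keywords are collected:

```python
def show_args(arg, *args, **kwargs):
    # *args collects extra positional arguments into a tuple,
    # **kwargs collects extra keyword arguments into a dict
    return "arg={}, args={}, kwargs={}".format(arg, args, kwargs)

print(show_args("peace", 1, 2, key="value"))
# arg=peace, args=(1, 2), kwargs={'key': 'value'}
```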
Anonymous Functions
Anonymous functions are also supported in Python; we create these with
the lambda keyword. Python lambda expressions take this form:
lambda_expr ::= "lambda" [parameter_list]: expression
A lambda expression returns a function object once it has been evaluated,
and that object has the same attributes as a named function. Lambda
expressions are normally reserved for simple functions, like this:
>>> square = lambda x: x**2
>>> for i in range(10):
...     print(square(i))
...
0
1
4
9
16
25
36
49
64
81
>>>
This is the same as the named function below:
def square(x):
    return x**2
Nested Functions and Closures
A function that is defined inside another function is called a nested function,
as in the following example:
```python
def outer():
outer_var = "outer variable"
def inner():
return outer_var
return inner
```
In a function definition of this type, inner is only in scope inside outer, so
such a definition is most useful when the inner function is returned (moved
into the outer scope) or passed to another function.
A new instance of the nested function is created on every call to the outer
function, because each time outer executes, the def statement for inner is
executed, while inner's body is not.
Nested functions can access the environment in which they were created;
this follows directly from the semantics of function definition in Python. As
a result, a variable defined in outer may be referenced by inner even after
the outer function has finished executing.
def outer():
    outer_var = "outer variable"
    def inner():
        return outer_var
    return inner
>>> x = outer()
>>> x
<function inner at 0x0273BCF0>
>>> x()
'outer variable'
When a nested function references a variable from the enclosing function,
the nested function is said to be closed over the referenced variable. We use
the __closure__ special attribute to access the closed-over variables, like
this:
>>> cl = x.__closure__
>>> cl
(<cell at 0x029E4470: str object at 0x02A0FD90>,)
>>> cl[0].cell_contents
'outer variable'
Python closures are a little odd in nature. In Python 2.x, a variable that
pointed to an immutable type, such as a number or a string, could not be
rebound inside a closure, as in this example:
def counter():
    count = 0
    def c():
        count += 1
        return count
    return c
>>> c = counter()
>>> c()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in c
UnboundLocalError: local variable 'count' referenced before assignment
A somewhat odd workaround is to use a mutable type to capture the closure,
like this:
def counter():
    count = [0]
    def c():
        count[0] += 1
        return count[0]
    return c
>>> c = counter()
>>> c()
1
>>> c()
2
>>> c()
3
Python 3 introduced the nonlocal keyword, which is what we use to fix this
closure scoping issue, like this:
def counter():
    count = 0
    def c():
        nonlocal count
        count += 1
        return count
    return c
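Using the nonlocal-based counter looks like this:

```python
def counter():
    count = 0
    def c():
        nonlocal count  # rebind the variable in the enclosing scope
        count += 1
        return count
    return c

c = counter()
print(c())  # 1
print(c())  # 2
```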
We can use a closure to maintain state and, in a few simple cases, it provides
a neater solution than a class, which is what we would normally use to
maintain state. To show how this works, consider a trivial class-based
logging API that logs at several levels:
class Log:
    def __init__(self, level):
        self._level = level

    def __call__(self, message):
        print("{}: {}".format(self._level, message))

log_info = Log("info")
log_warning = Log("warning")
log_error = Log("error")
We can implement the same functionality with closures like this:
def make_log(level):
    def _(message):
        print("{}: {}".format(level, message))
    return _

log_info = make_log("info")
log_warning = make_log("warning")
log_error = make_log("error")
As you can see, the closure-based version is more concise and more
readable, even though it implements exactly the same functionality.
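A quick sketch of the closure-based logger in use; this variant returns the formatted string instead of printing it, so the result can be inspected:

```python
def make_log(level):
    def _(message):
        return "{}: {}".format(level, message)
    return _

log_info = make_log("info")
print(log_info("service started"))  # info: service started
```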
Chapter 4: Generators and Iterators
An iterator is an object that implements the iterator protocol, that is, the
__iter__ and __next__ methods. A simple counting iterator looks like this:

class count_iterator(object):
    def __init__(self, n=0):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        y = self.n
        self.n += 1
        return y
An example of its usage can be seen below. Note that the final line of the
program tries to convert the object into a list; the result is an infinite loop,
simply because this particular iterator never ends:
>>> counter = count_iterator()
>>> next(counter)
0
>>> next(counter)
1
>>> next(counter)
2
>>> next(counter)
3
>>> list(counter) # This will result in an infinite loop!
Finally, to be accurate, we should make one amendment to the above:
objects that do not have the __iter__ method defined can still be iterable if
they define __getitem__. In this case, Python's built-in iter function returns
an iterator of type iterator for the object, which uses __getitem__ to go over
the items in the list. If __getitem__ raises an IndexError or a StopIteration
exception, the iteration stops. Look at an example of how that works:
class SimpleList(object):
    def __init__(self, *items):
        self.items = items

    def __getitem__(self, i):
        return self.items[i]

Another iterator example is the Hofstadter Q sequence, started from a given
initial state:

class qsequence(object):
    def __init__(self, s):
        self.s = list(s)

    def __next__(self):
        try:
            q = self.s[-self.s[-1]] + self.s[-self.s[-2]]
            self.s.append(q)
            return q
        except IndexError:
            raise StopIteration()

    def __iter__(self):
        return self

    def current_state(self):
        return self.s
And this is how it gets used:
>>> Q = qsequence([1, 1])
>>> next(Q)
2
>>> next(Q)
3
>>> [next(Q) for __ in range(10)]
[3, 4, 5, 5, 6, 6, 6, 8, 8, 8]
Generators
A generator is an iterator defined with a slightly simpler function notation.
Basically, a generator is a function that contains a yield expression.
Generators do not return a value; instead, they yield a result when one is
ready. Python automates the process of remembering a generator's context,
that is, the values of its local variables and the position of the control flow.
Each time the generator's __next__ is called, the yield supplies the next
iteration value. __iter__ is also implemented automatically, which means a
generator can be used wherever an iterator is needed. The next example
shows an implementation that is functionally equivalent to the iterator class
we looked at earlier, but easier to read:
def count_generator():
    n = 0
    while True:
        yield n
        n += 1
Now let's see how it works:
>>> counter = count_generator()
>>> counter
<generator object count_generator at 0x106bf1aa0>
>>> next(counter)
0
>>> next(counter)
1
>>> iter(counter)
<generator object count_generator at 0x106bf1aa0>
>>> iter(counter) is counter
True
>>> type(counter)
<class 'generator'>
Now we are going to use a generator to implement the Hofstadter Q
sequence. Note that this is a much simpler implementation, but we are no
longer able to provide methods like the current_state method we used
earlier: a variable stored in a generator's context cannot be accessed from
outside the generator, so current_state cannot be reached from the object.
def hofstadter_generator(s):
    a = s[:]
    while True:
        try:
            q = a[-a[-1]] + a[-a[-2]]
            a.append(q)
            yield q
        except IndexError:
            return
Note that we have used a return statement with no parameters to end the
generator's iteration; internally, this raises a StopIteration exception.
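Here is the generator in action, started from the state [1, 1]; the first ten values it yields are:

```python
def hofstadter_generator(s):
    a = s[:]
    while True:
        try:
            # each new term is the sum of two earlier terms, indexed
            # by the values of the two most recent terms
            q = a[-a[-1]] + a[-a[-2]]
            a.append(q)
            yield q
        except IndexError:
            return

g = hofstadter_generator([1, 1])
print([next(g) for _ in range(10)])  # [2, 3, 3, 4, 5, 5, 6, 6, 6, 8]
```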
The next example uses the Groupon randomness-extraction interview
question. One generator implements a Bernoulli process, an infinite
sequence of random Boolean values where True has probability p and False
has probability q = 1 - p. Another generator implements a von Neumann
extractor, which takes a Bernoulli process with 0 < p < 1 as an entropy
source and returns a Bernoulli process with p = 0.5.
import random

def bernoulli_process(p):
    if p > 1.0 or p < 0.0:
        raise ValueError("p must be between 0.0 and 1.0.")
    while True:
        yield random.random() < p

def von_neumann_extractor(process):
    while True:
        x, y = next(process), next(process)
        if x != y:
            yield x
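A sketch of the extractor in use; its output is random, so no fixed values are shown:

```python
import random

def bernoulli_process(p):
    if p > 1.0 or p < 0.0:
        raise ValueError("p must be between 0.0 and 1.0.")
    while True:
        yield random.random() < p

def von_neumann_extractor(process):
    while True:
        # take pairs; emit the first element when the pair is unequal
        x, y = next(process), next(process)
        if x != y:
            yield x

vn = von_neumann_extractor(bernoulli_process(0.3))
samples = [next(vn) for _ in range(5)]
print(samples)
```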
Lastly, a generator is a good tool for when we want to implement a discrete
dynamical system. The next example will show you how the tent map
dynamical system can be properly implemented with the use of generators:
>>> def tent_map(mu, x0):
...     x = x0
...     while True:
...         yield x
...         x = mu * min(x, 1.0 - x)
...
>>>
>>> t = tent_map(2.0, 0.1)
>>> for __ in range(30):
...     print(next(t))
...
...
0.1
0.2
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.799999999999
0.400000000001
0.800000000003
0.399999999994
0.799999999988
0.400000000023
0.800000000047
0.399999999907
0.799999999814
0.400000000373
0.800000000745
0.39999999851
0.79999999702
Another example that is similar is the Collatz sequence:
def collatz(n):
    yield n
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        yield n
Again, note that we don't need to raise the StopIteration exception manually,
because it is raised automatically when the control flow reaches the end of
the function. The next example shows the Collatz generator in use:
>>> # If the Collatz conjecture were true then list(collatz(n)) for any n will
... # always terminate (though you might find your machine runs out of memory before!)
>>> list(collatz(7))
[7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
>>> list(collatz(13))
[13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
>>> list(collatz(17))
[17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
>>> list(collatz(19))
[19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
Recursive Generators
A generator may be recursive, in the same way that any function may be.
To show how this works, we will implement a simplified version of
itertools.permutations, a generator that yields all permutations of a
specified list of items. (Note: in practice, use itertools.permutations rather
than the method shown here, as it is a lot quicker.) The idea is very simple:
we swap each element of the list into the first position and then permute the
rest of the list:
def permutations(items):
    if len(items) == 0:
        yield []
    else:
        pi = items[:]
        for i in range(len(pi)):
            pi[0], pi[i] = pi[i], pi[0]
            for p in permutations(pi[1:]):
                yield [pi[0]] + p
>>> for p in permutations([1, 2, 3]):
... print p
...
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
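As a quick sanity check, the recursive version should agree with itertools.permutations both in count (n! results) and in content. A sketch, using range so it also runs under Python 3:

```python
import itertools
from math import factorial

def permutations(items):
    # The recursive generator from above.
    if len(items) == 0:
        yield []
    else:
        pi = items[:]
        for i in range(len(pi)):
            pi[0], pi[i] = pi[i], pi[0]
            for p in permutations(pi[1:]):
                yield [pi[0]] + p

result = list(permutations([1, 2, 3, 4]))
print(len(result) == factorial(4))              # True: 4! = 24 permutations
# Compare as sorted tuples, since itertools.permutations yields tuples.
print(sorted(map(tuple, result)) ==
      sorted(itertools.permutations([1, 2, 3, 4])))  # True: same set
```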
Generator Expressions
A generator expression allows you to define a generator using a simple
notation, very much like the notation for list comprehension in Python. The
next example will provide us with a generator that will iterate over every
perfect square. Note that the result of a generator expression is an object of
generator type and therefore implements the iterator protocol: __iter__ and
next (called __next__ in Python 3).
>>> g = (x ** 2 for x in itertools.count(1))
>>> g
<generator object <genexpr> at 0x1029a5fa0>
>>> next(g)
1
>>> next(g)
4
>>> iter(g)
<generator object <genexpr> at 0x1029a5fa0>
>>> iter(g) is g
True
>>> [g.next() for __ in xrange(10)]
[9, 16, 25, 36, 49, 64, 81, 100, 121, 144]
The Bernoulli process can also be implemented with a generator expression;
in the next example, the success probability is p = 0.4. If the generator
expression needs another iterator as a loop counter, the best choice is
itertools.count, especially if the result is to be an infinite sequence.
Otherwise, you can use xrange:
>>> g = (random.random() < 0.4 for __ in itertools.count())
>>> [g.next() for __ in xrange(10)]
[False, False, False, True, True, False, True, False, False, True]
As I said earlier, you can pass a generator expression to any function that
expects an iterator as one of its arguments. For example, we could write the
following to sum the squares of the integers 0 through 9:
>>> sum(x ** 2 for x in xrange(10))
285
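Two further points are worth a quick sketch: a generator expression can feed any consumer that accepts an iterable, and it is exhausted after a single pass:

```python
# A generator expression feeds any function that accepts an iterable.
squares = (x ** 2 for x in range(10))
print(sum(squares))  # 285, as above

# A generator is exhausted after one pass; a second sum() gets nothing.
print(sum(squares))  # 0

# any() short-circuits, so only as many terms are generated as needed.
print(any(x ** 2 > 50 for x in range(10)))  # True
```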
Chapter 5: Lambda, Map, Filter, & Reduce
F = map(fahrenheit, temp)
C = map(celsius, F)
In this example, we did not use a lambda. Had we done so, we would not
have needed to define and name the fahrenheit() and celsius() functions at
all, as the next example makes clear:
>>> Celsius = [39.2, 36.5, 37.3, 37.8]
>>> Fahrenheit = map(lambda x: (float(9)/5)*x + 32, Celsius)
>>> print Fahrenheit
[102.56, 97.700000000000003, 99.140000000000001, 100.03999999999999]
>>> C = map(lambda x: (float(5)/9)*(x-32), Fahrenheit)
>>> print C
[39.200000000000003, 36.5, 37.300000000000004, 37.799999999999997]
>>>
We can apply map() to more than one list, but the lists must all be the same
length. map() applies the lambda function to the elements of the argument
lists position by position: first to all the elements at index 0, then to those at
index 1, and so on up to the nth index:
>>> a = [1,2,3,4]
>>> b = [17,12,11,10]
>>> c = [-1,-4,5,9]
>>> map(lambda x,y:x+y, a,b)
[18, 14, 14, 14]
>>> map(lambda x,y,z:x+y+z, a,b,c)
[17, 10, 19, 23]
>>> map(lambda x,y,z:x+y-z, a,b,c)
[19, 18, 9, 5]
This example shows that the parameter x obtains its values from the list a,
while y obtains its values from b and z gets them from the list c.
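For comparison, the same element-wise combination can be written with zip() and a list comprehension. The sketch below checks that the two spellings agree (the list() call around map() is needed under Python 3, where map returns an iterator):

```python
a = [1, 2, 3, 4]
b = [17, 12, 11, 10]
c = [-1, -4, 5, 9]

# map over several lists is equivalent to zipping them and combining
# each tuple of corresponding elements.
mapped = list(map(lambda x, y, z: x + y + z, a, b, c))
zipped = [x + y + z for x, y, z in zip(a, b, c)]
print(mapped)            # [17, 10, 19, 23]
print(mapped == zipped)  # True
```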
Filtering
The function filter(f, l) gives us a nice way of keeping only those elements
of a list l for which a function f returns True. The first argument to filter
must be a function f that returns a Boolean value, either False or True, and
this function is applied to each element of the list l. If f returns True, the
element is included in the result list; if it returns False, it is not.
>>> fib = [0,1,1,2,3,5,8,13,21,34,55]
>>> result = filter(lambda x: x % 2, fib)
>>> print result
[1, 1, 3, 5, 13, 21, 55]
>>> result = filter(lambda x: x % 2 == 0, fib)
>>> print result
[0, 2, 8, 34]
>>>
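The same filtering can be expressed with a list comprehension carrying an if clause. A small sketch comparing the two:

```python
fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

# filter keeps the elements for which the function is truthy; a list
# comprehension with an if clause does the same job.
odd = list(filter(lambda x: x % 2, fib))
print(odd)                               # [1, 1, 3, 5, 13, 21, 55]
print(odd == [x for x in fib if x % 2])  # True
```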
Reducing a List
The reduce() function repeatedly applies a function func to a sequence seq
and returns a single value.
If seq = [s1, s2, s3, ..., sn], calling reduce(func, seq) works in this way:
To start with, func is applied to the first two elements of seq, so the list that
reduce() is working on now looks like [func(s1, s2), s3, ..., sn].
Next, func is applied to the previous result and to the third element, and the
list now looks like [func(func(s1, s2), s3), s4, ..., sn].
This continues until a single element is left; this is the element that is
returned as the reduce() result. The next example shows you how this works:
>>> reduce(lambda x,y: x+y, [47,11,42,13])
113
reduce() Examples
How to work out the maximum of a list of numeric values with reduce():
>>> f = lambda a,b: a if (a > b) else b
>>> reduce(f, [47,11,42,102,13])
102
>>>
How to use reduce() to work out the sum of the numbers from 1 to 100:
>>> reduce(lambda x, y: x+y, range(1,101))
5050
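A further sketch: reduce() computes a product just as easily as a sum, and an optional third argument seeds the fold, which also covers an empty sequence. (Note that reduce lives in functools, which is where Python 3 requires you to import it from.)

```python
from functools import reduce  # required in Python 3; also works in Python 2.6+

# reduce folds the sequence pairwise into a single value, so a product
# works exactly like the running sum above.
print(reduce(lambda x, y: x * y, range(1, 6)))  # 120, i.e. 5!

# The optional third argument is the initial value of the fold.
print(reduce(lambda x, y: x * y, [], 1))        # 1
```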
Conclusion
Well, we have reached the end of our peek into intermediate Python
programming and I hope that I have been able to teach you, at the very least,
the basic concepts. Obviously, intermediate programming is much harder than
Python for beginners but, provided you have a good understanding of the
absolute basics of Python, you shouldn't do too badly with the
intermediate material.
Please don't hesitate to read this book over and over until you are
comfortable with the contents; it won't do to move straight on to advanced
Python concepts until you are, or you will simply find yourself lost in the mire.
There is plenty of help available for intermediate students, lots of forums and
full Python courses that can help you to truly understand what you are doing
before you attempt anything even harder.
Once again, I hope this course has been of some help to you.