If you’re a professional software developer, you may have to work with several
C/C++/Java libraries but find the usual write/compile/test/re-compile cycle is too slow.
Perhaps you’re writing a test suite for such a library and find writing the testing code a
tedious task. Or maybe you’ve written a program that could use an extension language,
and you don’t want to design and implement a whole new language for your application.
You could write a Unix shell script or Windows batch files for some of these tasks, but
shell scripts are best at moving around files and changing text data, not well-suited for
GUI applications or games. You could write a C/C++/Java program, but it can take a lot
of development time to get even a first-draft program. Python is simpler to use, available
on Windows, Mac OS X, and Unix operating systems, and will help you get the job done
more quickly.
Python is simple to use, but it is a real programming language, offering much more
structure and support for large programs than shell scripts or batch files can offer. On
the other hand, Python also offers much more error checking than C, and, being a very-
high-level language, it has high-level data types built in, such as flexible arrays and
dictionaries. Because of its more general data types Python is applicable to a much
larger problem domain than Awk or even Perl, yet many things are at least as easy in
Python as in those languages.
Python allows you to split your program into modules that can be reused in other Python
programs. It comes with a large collection of standard modules that you can use as the
basis of your programs — or as examples to start learning to program in Python. Some
of these modules provide things like file I/O, system calls, sockets, and even interfaces
to graphical user interface toolkits like Tk.
Python is an interpreted language, which can save you considerable time during
program development because no compilation and linking is necessary. The interpreter
can be used interactively, which makes it easy to experiment with features of the
language, to write throw-away programs, or to test functions during bottom-up program
development. It is also a handy desk calculator.
Python enables programs to be written compactly and readably. Programs written in
Python are typically much shorter than equivalent C, C++, or Java programs, for several
reasons:
the high-level data types allow you to express complex operations in a single statement;
statement grouping is done by indentation instead of beginning and ending brackets;
no variable or argument declarations are necessary.
Python is extensible: if you know how to program in C it is easy to add a new built-in
function or module to the interpreter, either to perform critical operations at maximum
speed, or to link Python programs to libraries that may only be available in binary form
(such as a vendor-specific graphics library). Once you are really hooked, you can link
the Python interpreter into an application written in C and use it as an extension or
command language for that application.
By the way, the language is named after the BBC show “Monty Python’s Flying Circus”
and has nothing to do with reptiles. Making references to Monty Python skits in
documentation is not only allowed, it is encouraged!
Now that you are all excited about Python, you’ll want to examine it in some more detail.
Since the best way to learn a language is to use it, the tutorial invites you to play with
the Python interpreter as you read.
In the next chapter, the mechanics of using the interpreter are explained. This is rather
mundane information, but essential for trying out the examples shown later.
The rest of the tutorial introduces various features of the Python language and system
through examples, beginning with simple expressions, statements and data types,
through functions and modules, and finally touching upon advanced concepts like
exceptions and user-defined classes.
2. Using the Python Interpreter
2.1. Invoking the Interpreter
(An interpreter is a computer program that directly executes instructions written in a
programming or scripting language, without requiring them previously to have been
compiled into a machine-language program.)
The Python interpreter is usually installed as /usr/local/bin/python3.8 on those
machines where it is available; putting /usr/local/bin in your Unix shell’s search path
makes it possible to start it by typing the command:
python3.8
to the shell. 1 Since the choice of the directory where the interpreter lives is an
installation option, other places are possible; check with your local Python guru or
system administrator. (E.g., /usr/local/python is a popular alternative location.)
On Windows machines where you have installed Python from the Microsoft Store,
the python3.8 command will be available. If you have the py.exe launcher installed,
you can use the py command. See Excursus: Setting environment variables for other
ways to launch Python.
The interpreter’s line-editing features include interactive editing, history substitution and
code completion on systems that support the GNU Readline library. Perhaps the
quickest check to see whether command line editing is supported is typing Control-P to
the first Python prompt you get. If it beeps, you have command line editing; see
Appendix Interactive Input Editing and History Substitution for an introduction to the
keys. If nothing appears to happen, or if ^P is echoed, command line editing isn’t
available; you’ll only be able to use backspace to remove characters from the current
line.
The interpreter operates somewhat like the Unix shell: when called with standard input
connected to a tty device, it reads and executes commands interactively; when called
with a file name argument or with a file as standard input, it reads and executes
a script from that file.
A second way of starting the interpreter is python -c command [arg] ..., which
executes the statement(s) in command, analogous to the shell’s -c option. Since
Python statements often contain spaces or other characters that are special to the shell,
it is usually advised to quote command in its entirety with single quotes.
Some Python modules are also useful as scripts. These can be invoked
using python -m module [arg] ..., which executes the source file for module as if
you had spelled out its full name on the command line.
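As a quick illustration (the module names below are real standard-library modules; the interpreter may be named python3.8, py, or python3 depending on your platform):

```shell
# Run a single statement with -c (quote it so the shell passes it through intact):
python3 -c 'print(2 + 2)'

# Run a standard-library module as a script with -m:
python3 -m calendar 2024 1
```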
When a script file is used, it is sometimes useful to be able to run the script and enter
interactive mode afterwards. This can be done by passing -i before the script.
All command line options are described in Command line and environment.
$ python3.8
Python 3.8 (default, Sep 16 2015, 09:25:04)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more
information.
>>>
Continuation lines are needed when entering a multi-line construct. As an example, take
a look at this if statement:
>>>
>>> the_world_is_flat = True
>>> if the_world_is_flat:
... print("Be careful not to fall off!")
...
Be careful not to fall off!
To declare an encoding other than the default one, a special comment line should be
added as the first line of the file. The syntax is as follows:
# -*- coding: encoding -*-
where encoding is one of the valid codecs supported by Python.
For example, to declare that Windows-1252 encoding is to be used, the first line of your
source code file should be:
# -*- coding: cp1252 -*-
One exception to the first line rule is when the source code starts with a UNIX
“shebang” line. In this case, the encoding declaration should be added as the second
line of the file. For example:
#!/usr/bin/env python3
# -*- coding: cp1252 -*-
Footnotes
1
On Unix, the Python 3.x interpreter is by default not installed with the executable
named python, so that it does not conflict with a simultaneously installed Python
2.x executable.
3. An Informal Introduction to Python
Many of the examples in this manual, even those entered at the interactive prompt,
include comments. Comments in Python start with the hash character, #, and extend to
the end of the physical line. A comment may appear at the start of a line or following
whitespace or code, but not within a string literal. A hash character within a string literal
is just a hash character. Since comments are to clarify code and are not interpreted by
Python, they may be omitted when typing in examples.
Some examples:
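A short sketch of how comments behave (the variable names are illustrative):

```python
# this is the first comment
spam = 1  # and this is the second comment
          # ... and now a third!
text = "# This is not a comment because it's inside quotes."
print(spam, text)
```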
3.1. Using Python as a Calculator
Let’s try some simple Python commands. Start the interpreter and wait for the primary
prompt, >>>.
3.1.1. Numbers
The interpreter acts as a simple calculator: you can type an expression at it and it will
write the value. Expression syntax is straightforward: the operators +, -, * and / work
just like in most other languages (for example, Pascal or C); parentheses (()) can be
used for grouping. For example:
>>>
>>> 2 + 2
4
>>> 50 - 5*6
20
>>> (50 - 5*6) / 4
5.0
>>> 8 / 5 # division always returns a floating point number
1.6
The integer numbers (e.g. 2, 4, 20) have type int, the ones with a fractional part
(e.g. 5.0, 1.6) have type float. We will see more about numeric types later in the
tutorial.
Division (/) always returns a float. To do floor division and get an integer result
(discarding any fractional result) you can use the // operator; to calculate the
remainder you can use %:
>>>
>>> 17 / 3  # classic division returns a float
5.666666666666667
>>> 17 // 3  # floor division discards the fractional part
5
>>> 17 % 3  # the % operator returns the remainder of the division
2
>>> 5 * 3 + 2  # floored quotient * divisor + remainder
17
With Python, it is possible to use the ** operator to calculate powers:
>>> 5 ** 2 # 5 squared
25
>>> 2 ** 7 # 2 to the power of 7
128
The equal sign (=) is used to assign a value to a variable. Afterwards, no result is
displayed before the next interactive prompt:
>>>
>>> width = 20
>>> height = 5 * 9
>>> width * height
900
If a variable is not “defined” (assigned a value), trying to use it will give you an error:
>>>
>>> n  # try to access an undefined variable
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'n' is not defined
There is full support for floating point; operators with mixed type operands convert the
integer operand to floating point:
>>>
>>> 4 * 3.75 - 1
14.0
In interactive mode, the last printed expression is assigned to the variable _. This
means that when you are using Python as a desk calculator, it is somewhat easier to
continue calculations, for example:
>>>
>>> tax = 12.5 / 100
>>> price = 100.50
>>> price * tax
12.5625
>>> price + _
113.0625
>>> round(_, 2)
113.06
This variable should be treated as read-only by the user. Don’t explicitly assign a value
to it — you would create an independent local variable with the same name masking the
built-in variable with its magic behavior.
In addition to int and float, Python supports other types of numbers, such
as Decimal and Fraction. Python also has built-in support for complex numbers, and
uses the j or J suffix to indicate the imaginary part (e.g. 3+5j).
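A brief sketch of the j suffix in action:

```python
z = 3 + 5j
print(z.real, z.imag)  # the real and imaginary parts, as floats
print(abs(z))          # the magnitude of the complex number
```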
3.1.2. Strings
Besides numbers, Python can also manipulate strings, which can be expressed in
several ways. They can be enclosed in single quotes ('...') or double quotes ("...")
with the same result 2. \ can be used to escape quotes:
>>>
>>> 'spam eggs'  # single quotes
'spam eggs'
>>> 'doesn\'t'  # use \' to escape the single quote...
"doesn't"
>>> "doesn't"  # ...or use double quotes instead
"doesn't"
In the interactive interpreter, the output string is enclosed in quotes and special
characters are escaped with backslashes. While this might sometimes look different
from the input (the enclosing quotes could change), the two strings are equivalent. The
string is enclosed in double quotes if the string contains a single quote and no double
quotes, otherwise it is enclosed in single quotes. The print() function produces a
more readable output, by omitting the enclosing quotes and by printing escaped and
special characters:
>>>
>>> '"Isn\'t," they said.'
'"Isn\'t," they said.'
>>> print('"Isn\'t," they said.')
"Isn't," they said.
String literals can span multiple lines. One way is using triple-
quotes: """...""" or '''...'''. End of lines are automatically included in the string,
but it’s possible to prevent this by adding a \ at the end of the line. The following
example:
print("""\
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
""")
produces the following output (note that the initial newline is not included):
Usage: thingy [OPTIONS]
     -h                        Display this usage message
     -H hostname               Hostname to connect to
Strings can be concatenated (glued together) with the + operator, and repeated with *:
>>>
>>> 3 * 'un' + 'ium'  # 3 times 'un', followed by 'ium'
'unununium'
Two or more string literals (i.e. the ones enclosed between quotes) next to each other
are automatically concatenated.
>>>
>>> 'Py' 'thon'
'Python'
This feature is particularly useful when you want to break long strings:
>>>
>>> text = ('Put several strings within parentheses '
...         'to have them joined together.')
>>> text
'Put several strings within parentheses to have them joined together.'
This only works with two literals though, not with variables or expressions:
>>>
>>> prefix = 'Py'
>>> prefix 'thon'  # can't concatenate a variable and a string literal
  File "<stdin>", line 1
    prefix 'thon'
           ^
SyntaxError: invalid syntax
If you want to concatenate variables or a variable and a literal, use +:
>>> prefix + 'thon'
'Python'
Strings can be indexed (subscripted), with the first character having index 0. There is no
separate character type; a character is simply a string of size one:
>>>
>>> word = 'Python'
>>> word[0]  # character in position 0
'P'
>>> word[5]  # character in position 5
'n'
Indices may also be negative numbers, to start counting from the right:
>>>
>>> word[-1]  # last character
'n'
>>> word[-2]  # second-last character
'o'
>>> word[-6]
'P'
Note that since -0 is the same as 0, negative indices start from -1.
In addition to indexing, slicing is also supported. While indexing is used to obtain
individual characters, slicing allows you to obtain a substring:
>>>
>>> word[0:2]  # characters from position 0 (included) to 2 (excluded)
'Py'
>>> word[2:5]  # characters from position 2 (included) to 5 (excluded)
'tho'
Note how the start is always included, and the end always excluded. This makes sure
that s[:i] + s[i:] is always equal to s:
>>>
>>> word[:2] + word[2:]
'Python'
>>> word[:4] + word[4:]
'Python'
Slice indices have useful defaults; an omitted first index defaults to zero, an omitted
second index defaults to the size of the string being sliced.
>>>
>>> word[:2]   # characters from the beginning to position 2 (excluded)
'Py'
>>> word[4:]   # characters from position 4 (included) to the end
'on'
>>> word[-2:]  # characters from the second-last (included) to the end
'on'
One way to remember how slices work is to think of the indices as pointing between
characters, with the left edge of the first character numbered 0:
+---+---+---+---+---+---+
| P | y | t | h | o | n |
+---+---+---+---+---+---+
0 1 2 3 4 5 6
-6 -5 -4 -3 -2 -1
The first row of numbers gives the position of the indices 0…6 in the string; the second
row gives the corresponding negative indices. The slice from i to j consists of all
characters between the edges labeled i and j, respectively.
For non-negative indices, the length of a slice is the difference of the indices, if both are
within bounds. For example, the length of word[1:3] is 2.
Attempting to use an index that is too large will result in an error:
>>>
>>> word[42]  # the word only has 6 characters
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: string index out of range
However, out of range slice indexes are handled gracefully when used for slicing:
>>>
>>> word[4:42]
'on'
>>> word[42:]
''
Python strings cannot be changed — they are immutable. Therefore, assigning to an
indexed position in the string results in an error:
>>>
>>> word[0] = 'J'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
If you need a different string, you should create a new one:
>>> 'J' + word[1:]
'Jython'
The built-in function len() returns the length of a string:
>>>
>>> s = 'supercalifragilisticexpialidocious'
>>> len(s)
34
See also
Text Sequence Type — str
Strings are examples of sequence types, and support the common operations
supported by such types.
String Methods
The old formatting operations invoked when strings are the left operand of
the % operator are described in more detail here.
3.1.3. Lists
Python knows a number of compound data types, used to group
together other values. The most versatile is the list, which can be
written as a list of comma-separated values (items) between
square brackets. Lists might contain items of different types, but
usually the items all have the same type.
>>>
>>> squares = [1, 4, 9, 16, 25]
>>> squares
[1, 4, 9, 16, 25]
Like strings (and all other built-in sequence types), lists can be
indexed and sliced:
>>>
>>> squares[0]  # indexing returns the item
1
>>> squares[-1]
25
>>> squares[-3:]  # slicing returns a new list
[9, 16, 25]
>>> squares[:]
[1, 4, 9, 16, 25]
>>>
>>> squares + [36, 49, 64, 81, 100]
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
Unlike strings, which are immutable, lists are a mutable type, i.e. it
is possible to change their content:
>>>
>>> cubes = [1, 8, 27, 65, 125]  # something's wrong here
>>> 4 ** 3  # the cube of 4 is 64, not 65!
64
>>> cubes[3] = 64  # replace the wrong value
>>> cubes
[1, 8, 27, 64, 125]
You can also add new items at the end of the list, by using
the append() method (we will see more about methods later):
>>>
>>> cubes.append(216)  # add the cube of 6
>>> cubes.append(7 ** 3)  # and the cube of 7
>>> cubes
[1, 8, 27, 64, 125, 216, 343]
>>> i = 256*256
>>> print('The value of i is', i)
The value of i is 65536
>>>
>>> a, b = 0, 1
>>> while a < 1000:
... print(a, end=',')
... a, b = b, a+b
...
0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,
Footnotes
2
Unlike other languages, special characters such as \n have the same meaning with both
single ('...') and double ("...") quotes. The only difference between the two is that
within single quotes you don’t need to escape \" (but you have to escape \') and vice
versa.
4. More Control Flow Tools
Besides the while statement just introduced, Python uses the usual flow control
statements known from other languages, with some twists.
4.1. if Statements
Perhaps the most well-known statement type is the if statement. For example:
>>>
>>> x = int(input("Please enter an integer: "))
Please enter an integer: 42
>>> if x < 0:
...     x = 0
...     print('Negative changed to zero')
... elif x == 0:
...     print('Zero')
... elif x == 1:
...     print('Single')
... else:
...     print('More')
...
More
There can be zero or more elif parts, and the else part is optional. The keyword ‘elif’ is
short for ‘else if’, and is useful to avoid excessive indentation. An if … elif … elif …
sequence is a substitute for the switch or case statements found in other languages.
4.2. for Statements
The for statement in Python differs a bit from what you may be used to in C or Pascal.
Python’s for statement iterates over the items of any sequence (a list or a string), in the
order that they appear in the sequence. For example:
>>>
>>> words = ['cat', 'window', 'defenestrate']
>>> for w in words:
...     print(w, len(w))
...
cat 3
window 6
defenestrate 12
Code that modifies a collection while iterating over that same collection can be tricky to get
right. Instead, it is usually more straight-forward to loop over a copy of the collection or to create
a new collection:
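A minimal sketch of both strategies, using a made-up users dictionary (the names and statuses are purely illustrative):

```python
users = {'Hans': 'active', 'Eleonore': 'inactive', 'Keitaro': 'active'}

# Strategy 1: iterate over a copy, deleting from the original
for user, status in users.copy().items():
    if status == 'inactive':
        del users[user]

# Strategy 2: build a new collection instead of mutating in place
active_users = {}
for user, status in users.items():
    if status == 'active':
        active_users[user] = status

print(users)
print(active_users)
```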
4.3. The range() Function
If you do need to iterate over a sequence of numbers, the built-in function range()
comes in handy. It generates arithmetic progressions:
>>>
>>> for i in range(5):
...     print(i)
...
0
1
2
3
4
The given end point is never part of the generated sequence; range(10) generates 10 values,
the legal indices for items of a sequence of length 10. It is possible to let the range start at
another number, or to specify a different increment (even negative; sometimes this is called the
‘step’):
range(5, 10)
5, 6, 7, 8, 9
range(0, 10, 3)
0, 3, 6, 9
To iterate over the indices of a sequence, you can combine range() and len() as follows:
>>>
>>> a = ['Mary', 'had', 'a', 'little', 'lamb']
>>> for i in range(len(a)):
...     print(i, a[i])
...
0 Mary
1 had
2 a
3 little
4 lamb
In most such cases, however, it is convenient to use the enumerate() function, see Looping
Techniques.
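For comparison, a quick sketch of enumerate() doing the same job:

```python
# enumerate() yields (index, value) pairs, avoiding the range(len(...)) pattern
for i, v in enumerate(['tic', 'tac', 'toe']):
    print(i, v)
```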
>>>
>>> print(range(10))
range(0, 10)
In many ways the object returned by range() behaves as if it is a list, but in fact it isn’t. It is an
object which returns the successive items of the desired sequence when you iterate over it, but it
doesn’t really make the list, thus saving space.
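One way to see the space saving for yourself (the exact byte counts vary by platform, so treat the numbers as illustrative):

```python
import sys

r = range(1_000_000)  # a compact object storing only start, stop, and step
l = list(r)           # an actual list holding a million elements
print(sys.getsizeof(r), sys.getsizeof(l))
```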
We say such an object is iterable, that is, suitable as a target for functions and constructs that
expect something from which they can obtain successive items until the supply is exhausted. We
have seen that the for statement is such a construct, while an example of a function that takes an
iterable is sum():
>>>
>>> sum(range(4)) # 0 + 1 + 2 + 3
6
Later we will see more functions that return iterables and take iterables as arguments. Lastly,
maybe you are curious about how to get a list from a range. Here is the solution:
>>>
>>> list(range(4))
[0, 1, 2, 3]
4.4. break and continue Statements, and else Clauses on Loops
The break statement, like in C, breaks out of the innermost enclosing for or while loop.
Loop statements may have an else clause; it is executed when the loop terminates through
exhaustion of the iterable (with for) or when the condition becomes false (with while), but not
when the loop is terminated by a break statement. This is exemplified by the following loop,
which searches for prime numbers:
>>>
>>> for n in range(2, 10):
...     for x in range(2, n):
...         if n % x == 0:
...             print(n, 'equals', x, '*', n//x)
...             break
...     else:
...         # loop fell through without finding a factor
...         print(n, 'is a prime number')
...
2 is a prime number
3 is a prime number
4 equals 2 * 2
5 is a prime number
6 equals 2 * 3
7 is a prime number
8 equals 2 * 4
9 equals 3 * 3
(Yes, this is the correct code. Look closely: the else clause belongs to
the for loop, not the if statement.)
When used with a loop, the else clause has more in common with the else clause of
a try statement than it does with that of if statements: a try statement’s else clause runs
when no exception occurs, and a loop’s else clause runs when no break occurs. For more on
the try statement and exceptions, see Handling Exceptions.
The continue statement, also borrowed from C, continues with the next iteration of the loop:
>>>
>>> for num in range(2, 10):
...     if num % 2 == 0:
...         print("Found an even number", num)
...         continue
...     print("Found an odd number", num)
...
Found an even number 2
Found an odd number 3
Found an even number 4
Found an odd number 5
Found an even number 6
Found an odd number 7
Found an even number 8
Found an odd number 9
4.5. pass Statements
The pass statement does nothing. It can be used when a statement is required
syntactically but the program requires no action. For example:
>>>
>>> while True:
...     pass  # Busy-wait for keyboard interrupt (Ctrl+C)
...
This is commonly used for creating minimal classes:
>>>
>>> class MyEmptyClass:
... pass
...
Another place pass can be used is as a place-holder for a function or conditional body when you
are working on new code, allowing you to keep thinking at a more abstract level. The pass is
silently ignored:
>>>
>>> def initlog(*args):
...     pass   # Remember to implement this!
...
4.6. Defining Functions
We can create a function that writes the Fibonacci series to an arbitrary boundary:
>>>
>>> def fib(n):    # write Fibonacci series up to n
...     """Print a Fibonacci series up to n."""
...     a, b = 0, 1
...     while a < n:
...         print(a, end=' ')
...         a, b = b, a+b
...     print()
...
>>> # Now call the function we just defined:
... fib(2000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597
The keyword def introduces a function definition. It must be followed by the function name and
the parenthesized list of formal parameters. The statements that form the body of the function
start at the next line, and must be indented.
The first statement of the function body can optionally be a string literal; this string literal is the
function’s documentation string, or docstring. (More about docstrings can be found in the
section Documentation Strings.) There are tools which use docstrings to automatically produce
online or printed documentation, or to let the user interactively browse through code; it’s good
practice to include docstrings in code that you write, so make a habit of it.
The execution of a function introduces a new symbol table used for the local variables of the
function. More precisely, all variable assignments in a function store the value in the local
symbol table; whereas variable references first look in the local symbol table, then in the local
symbol tables of enclosing functions, then in the global symbol table, and finally in the table of
built-in names. Thus, global variables and variables of enclosing functions cannot be directly
assigned a value within a function (unless, for global variables, named in a global statement,
or, for variables of enclosing functions, named in a nonlocal statement), although they may be
referenced.
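A small sketch of these lookup and assignment rules (the function names are invented for illustration):

```python
x = "global"

def outer():
    x = "enclosing"
    def inner():
        nonlocal x       # assign to the enclosing function's x
        x = "rebound"
    inner()
    return x

def set_global():
    global x             # assign to the module-level x
    x = "changed"

print(outer())   # the enclosing x was rebound by inner()
set_global()
print(x)         # the global x was changed
```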
The actual parameters (arguments) to a function call are introduced in the local symbol table of
the called function when it is called; thus, arguments are passed using call by value (where
the value is always an object reference, not the value of the object). 1 When a function calls
another function, or calls itself recursively, a new local symbol table is created for that call.
A function definition associates the function name with the function object in the current symbol
table. The interpreter recognizes the object pointed to by that name as a user-defined function.
Other names can also point to that same function object and can also be used to access the
function:
>>>
>>> fib
<function fib at 10042ed0>
>>> f = fib
>>> f(100)
0 1 1 2 3 5 8 13 21 34 55 89
Coming from other languages, you might object that fib is not a function but a procedure since
it doesn’t return a value. In fact, even functions without a return statement do return a value,
albeit a rather boring one. This value is called None (it’s a built-in name). Writing the
value None is normally suppressed by the interpreter if it would be the only value written. You
can see it if you really want to using print():
>>>
>>> fib(0)
>>> print(fib(0))
None
It is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of
printing it:
>>>
>>> def fib2(n):  # return Fibonacci series up to n
...     """Return a list containing the Fibonacci series up to n."""
...     result = []
...     a, b = 0, 1
...     while a < n:
...         result.append(a)    # see below
...         a, b = b, a+b
...     return result
...
>>> f100 = fib2(100)    # call it
>>> f100                # write the result
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
The return statement returns with a value from a function. return without an
expression argument returns None. Falling off the end of a function also returns None.
The statement result.append(a) calls a method of the list object result. A method
is a function that ‘belongs’ to an object and is named obj.methodname, where obj is
some object (this may be an expression), and methodname is the name of a method that
is defined by the object’s type. Different types define different methods. Methods of
different types may have the same name without causing ambiguity. (It is possible to
define your own object types and methods, using classes, see Classes) The
method append() shown in the example is defined for list objects; it adds a new
element at the end of the list. In this example it is equivalent
to result = result + [a], but more efficient.
4.7. More on Defining Functions
It is also possible to define functions with a variable number of arguments. There are three
forms, which can be combined.
4.7.1. Default Argument Values
The most useful form is to specify a default value for one or more arguments. This creates a
function that can be called with fewer arguments than it is defined to allow. For example:
def ask_ok(prompt, retries=4, reminder='Please try again!'):
    while True:
        ok = input(prompt)
        if ok in ('y', 'ye', 'yes'):
            return True
        if ok in ('n', 'no', 'nop', 'nope'):
            return False
        retries = retries - 1
        if retries < 0:
            raise ValueError('invalid user response')
        print(reminder)
This function can be called in several ways:
giving only the mandatory argument: ask_ok('Do you really want to quit?')
giving one of the optional arguments: ask_ok('OK to overwrite the file?', 2)
or even giving all arguments: ask_ok('OK to overwrite the file?', 2, 'Come on, only yes or no!')
This example also introduces the in keyword. This tests whether or not a sequence contains a
certain value.
The default values are evaluated at the point of function definition in the defining scope, so that
i = 5

def f(arg=i):
    print(arg)

i = 6
f()
will print 5.
Important warning: The default value is evaluated only once. This makes a difference when the
default is a mutable object such as a list, dictionary, or instances of most classes. For example,
the following function accumulates the arguments passed to it on subsequent calls:
def f(a, L=[]):
    L.append(a)
    return L

print(f(1))
print(f(2))
print(f(3))
This will print
[1]
[1, 2]
[1, 2, 3]
If you don’t want the default to be shared between subsequent calls, you can write the function
like this instead:
def f(a, L=None):
    if L is None:
        L = []
    L.append(a)
    return L
4.7.2. Keyword Arguments
Functions can also be called using keyword arguments of the form kwarg=value. For
instance, the following function:
def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):
    print("-- This parrot wouldn't", action, end=' ')
    print("if you put", voltage, "volts through it.")
    print("-- Lovely plumage, the", type)
    print("-- It's", state, "!")
accepts one required argument (voltage) and three optional arguments (state, action,
and type). This function can be called in any of the following ways:
parrot(1000)                                          # 1 positional argument
parrot(voltage=1000)                                  # 1 keyword argument
parrot(voltage=1000000, action='VOOOOOM')             # 2 keyword arguments
parrot(action='VOOOOOM', voltage=1000000)             # 2 keyword arguments
parrot('a million', 'bereft of life', 'jump')         # 3 positional arguments
parrot('a thousand', state='pushing up the daisies')  # 1 positional, 1 keyword
but all the following calls would be invalid:
parrot()                     # required argument missing
parrot(voltage=5.0, 'dead')  # non-keyword argument after a keyword argument
parrot(110, voltage=220)     # duplicate value for the same argument
parrot(actor='John Cleese')  # unknown keyword argument
In a function call, keyword arguments must follow positional arguments. All the keyword
arguments passed must match one of the arguments accepted by the function (e.g. actor is not a
valid argument for the parrot function), and their order is not important. This also includes
non-optional arguments (e.g. parrot(voltage=1000) is valid too). No argument may
receive a value more than once. Here’s an example that fails due to this restriction:
>>>
>>> def function(a):
...     pass
...
>>> function(0, a=0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: function() got multiple values for argument 'a'
When a final formal parameter of the form **name is present, it receives a dictionary
(see Mapping Types — dict) containing all keyword arguments except for those corresponding
to a formal parameter. This may be combined with a formal parameter of the
form *name (described in the next subsection) which receives a tuple containing the positional
arguments beyond the formal parameter list. (*name must occur before **name.) For
example, if we define a function like this:
def cheeseshop(kind, *arguments, **keywords):
    print("-- Do you have any", kind, "?")
    print("-- I'm sorry, we're all out of", kind)
    for arg in arguments:
        print(arg)
    print("-" * 40)
    for kw in keywords:
        print(kw, ":", keywords[kw])
Note that the order in which the keyword arguments are printed is guaranteed to match the order
in which they were provided in the function call.
4.7.3. Special parameters
By default, arguments may be passed to a Python function either by position or explicitly by
keyword. For readability and performance, it makes sense to restrict the way arguments can be
passed so that a developer need only look at the function definition to determine if items are
passed by position, by position or keyword, or by keyword.
A function definition may look like:
def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):
      -----------    ----------     ----------
        |             |                  |
        |        Positional or keyword   |
        |                                - Keyword only
         -- Positional only
where / and * are optional. If used, these symbols indicate the kind of parameter by how the
arguments may be passed to the function: positional-only, positional-or-keyword, and keyword-
only. Keyword parameters are also referred to as named parameters.
>>>
>>> def standard_arg(arg):
...     print(arg)
...
>>> def pos_only_arg(arg, /):
...     print(arg)
...
>>> def kwd_only_arg(*, arg):
...     print(arg)
...
>>> def combined_example(pos_only, /, standard, *, kwd_only):
...     print(pos_only, standard, kwd_only)
...
The first function definition, standard_arg, is the most familiar form; it places no restrictions
on the calling convention, and arguments may be passed by position or keyword:
>>>
>>> standard_arg(2)
2
>>> standard_arg(arg=2)
2
The second function pos_only_arg is restricted to only use positional parameters as there is
a / in the function definition:
>>>
>>> pos_only_arg(1)
1
>>> pos_only_arg(arg=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: pos_only_arg() got an unexpected keyword argument 'arg'
The third function kwd_only_args only allows keyword arguments as indicated by a * in the
function definition:
>>>
>>> kwd_only_arg(3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: kwd_only_arg() takes 0 positional arguments but 1 was given
>>> kwd_only_arg(arg=3)
3
And the last uses all three calling conventions in the same function definition:
>>>
>>> combined_example(1, 2, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: combined_example() takes 2 positional arguments but 3 were given
>>> combined_example(1, 2, kwd_only=3)
1 2 3
Finally, consider this function definition which has a potential collision between the positional
argument name and **kwds which has name as a key:
def foo(name, **kwds):
    return 'name' in kwds
There is no possible call that will make it return True as the keyword 'name' will always bind
to the first parameter. For example:
>>>
>>> foo(1, **{'name': 2})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() got multiple values for argument 'name'
But using / (positional only arguments), it is possible since it allows name as a positional
argument and 'name' as a key in the keyword arguments:
def foo(name, /, **kwds):
    return 'name' in kwds
>>> foo(1, **{'name': 2})
True
In other words, the names of positional-only parameters can be used in **kwds without
ambiguity.
4.7.3.5. Recap
The use case will determine which parameters to use in the function definition:
As guidance:
Use positional-only if you want the name of the parameters to not be available to the
user. This is useful when parameter names have no real meaning, if you want to enforce
the order of the arguments when the function is called or if you need to take some
positional parameters and arbitrary keywords.
Use keyword-only when names have meaning and the function definition is more
understandable by being explicit with names or you want to prevent users relying on the
position of the argument being passed.
For an API, use positional-only to prevent breaking API changes if the parameter’s name
is modified in the future.
4.7.4. Arbitrary Argument Lists
Finally, the least frequently used option is to specify that a function can be called with an
arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and
Sequences). Before the variable number of arguments, zero or more normal arguments may
occur.
Normally, these variadic arguments will be last in the list of formal parameters, because they
scoop up all remaining input arguments that are passed to the function. Any formal parameters
which occur after the *args parameter are ‘keyword-only’ arguments, meaning that they can
only be used as keywords rather than positional arguments.
>>>
>>> def concat(*args, sep="/"):
...     return sep.join(args)
...
>>> concat("earth", "mars", "venus")
'earth/mars/venus'
>>> concat("earth", "mars", "venus", sep=".")
'earth.mars.venus'
4.7.5. Unpacking Argument Lists
The reverse situation occurs when the arguments are already in a list or tuple but need to be
unpacked for a function call requiring separate positional arguments. For instance, the built-in
range() function expects separate start and stop arguments. If they are not available
separately, write the function call with the *-operator to unpack the arguments out of a list or
tuple:
>>>
>>> args = [3, 6]
>>> list(range(*args))  # call with arguments unpacked from a list
[3, 4, 5]
In the same fashion, dictionaries can deliver keyword arguments with the **-operator:
>>>
>>> def parrot(voltage, state='a stiff', action='voom'):
...     print("-- This parrot wouldn't", action, end=' ')
...     print("if you put", voltage, "volts through it.", end=' ')
...     print("E's", state, "!")
...
>>> d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"}
>>> parrot(**d)
-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !
4.7.6. Lambda Expressions
Small anonymous functions can be created with the lambda keyword. Lambda functions can
be used wherever function objects are required; they are syntactically restricted to a single
expression. For example:
>>>
>>> def make_incrementor(n):
...     return lambda x: x + n
...
>>> f = make_incrementor(42)
>>> f(0)
42
>>> f(1)
43
The above example uses a lambda expression to return a function. Another use is to pass a small
function as an argument:
>>>
>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
>>> pairs.sort(key=lambda pair: pair[1])
>>> pairs
[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]
4.7.7. Documentation Strings
Here are some conventions about the content and formatting of documentation strings.
The first line should always be a short, concise summary of the object’s purpose. For brevity, it
should not explicitly state the object’s name or type, since these are available by other means
(except if the name happens to be a verb describing a function’s operation). This line should
begin with a capital letter and end with a period.
If there are more lines in the documentation string, the second line should be blank, visually
separating the summary from the rest of the description. The following lines should be one or
more paragraphs describing the object’s calling conventions, its side effects, etc.
The Python parser does not strip indentation from multi-line string literals in Python, so tools
that process documentation have to strip indentation if desired. This is done using the following
convention. The first non-blank line after the first line of the string determines the amount of
indentation for the entire documentation string. (We can’t use the first line since it is generally
adjacent to the string’s opening quotes so its indentation is not apparent in the string literal.)
Whitespace “equivalent” to this indentation is then stripped from the start of all lines of the
string. Lines that are indented less should not occur, but if they occur all their leading whitespace
should be stripped. Equivalence of whitespace should be tested after expansion of tabs (to 8
spaces, normally).
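The convention described above is implemented in the standard library by inspect.cleandoc, which can serve as a reference for tools that process docstrings:

```python
import inspect

doc = """Summary line.

    First paragraph of the description,
    indented to line up with the code.

        An extra-indented line keeps its relative indent.
    """
# cleandoc strips the common leading whitespace from the second and
# further lines, and trims leading/trailing blank lines.
print(inspect.cleandoc(doc))
```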
Here is an example of a multi-line docstring:
>>>
>>> def my_function():
...     """Do nothing, but document it.
...
...     No, really, it doesn't do anything.
...     """
...     pass
...
>>> print(my_function.__doc__)
Do nothing, but document it.

    No, really, it doesn't do anything.

4.7.8. Function Annotations
Function annotations are completely optional metadata information about the types used by
user-defined functions (see PEP 3107 and PEP 484 for more information).
Annotations are stored in the __annotations__ attribute of the function as a dictionary and
have no effect on any other part of the function. Parameter annotations are defined by a colon
after the parameter name, followed by an expression evaluating to the value of the annotation.
Return annotations are defined by a literal ->, followed by an expression, between the parameter
list and the colon denoting the end of the def statement. The following example has a required
argument, an optional argument, and the return value annotated:
>>>
>>> def f(ham: str, eggs: str = 'eggs') -> str:
...     print("Annotations:", f.__annotations__)
...     print("Arguments:", ham, eggs)
...     return ham + ' and ' + eggs
...
>>> f('spam')
Annotations: {'ham': <class 'str'>, 'eggs': <class 'str'>, 'return': <class 'str'>}
Arguments: spam eggs
'spam and eggs'
4.8. Intermezzo: Coding Style
For Python, PEP 8 has emerged as the style guide that most projects adhere to; it promotes a
very readable and eye-pleasing coding style. Every Python developer should read it at some
point; here are the most important points extracted for you:
Use 4-space indentation, and no tabs.
Wrap lines so that they don’t exceed 79 characters.
This helps users with small displays and makes it possible to have several code files side-by-side on larger displays.
Use blank lines to separate functions and classes, and larger blocks of code inside
functions.
When possible, put comments on a line of their own.
Use docstrings.
Use spaces around operators and after commas, but not directly inside bracketing
constructs: a = f(1, 2) + g(3, 4).
Name your classes and functions consistently; the convention is to
use UpperCamelCase for classes and lowercase_with_underscores for
functions and methods. Always use self as the name for the first method argument
(see A First Look at Classes for more on classes and methods).
Don’t use fancy encodings if your code is meant to be used in international environments.
Python’s default, UTF-8, or even plain ASCII work best in any case.
Likewise, don’t use non-ASCII characters in identifiers if there is only the slightest
chance people speaking a different language will read or maintain the code.
Footnotes
1
Actually, call by object reference would be a better description, since if a mutable object
is passed, the caller will see any changes the callee makes to it (items inserted into a list).
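A short sketch of what "call by object reference" means in practice (the function name is invented for illustration):

```python
def add_item(lst):
    lst.append(99)   # mutates the caller's list object
    lst = [0]        # rebinding the local name does NOT affect the caller

items = [1, 2]
add_item(items)
print(items)  # the appended 99 is visible; the rebinding is not
```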
5. Data Structures
This chapter describes some things you’ve learned about already in more detail, and
adds some new things as well.
5.1. More on Lists
The list data type has some more methods. Here are all of the methods of list objects:
list.append(x)
Add an item to the end of the list. Equivalent to a[len(a):] = [x].
list.extend(iterable)
Extend the list by appending all the items from the iterable. Equivalent
to a[len(a):] = iterable.
list.insert(i, x)
Insert an item at a given position. The first argument is the index of the element
before which to insert, so a.insert(0, x) inserts at the front of the list,
and a.insert(len(a), x) is equivalent to a.append(x).
list.remove(x)
Remove the first item from the list whose value is equal to x. It raises
a ValueError if there is no such item.
list.pop([i])
Remove the item at the given position in the list, and return it. If no index is
specified, a.pop() removes and returns the last item in the list. (The square
brackets around the i in the method signature denote that the parameter is
optional, not that you should type square brackets at that position. You will see
this notation frequently in the Python Library Reference.)
list.clear()
Remove all items from the list. Equivalent to del a[:].
list.index(x[, start[, end]])
Return zero-based index in the list of the first item whose value is equal to x.
Raises a ValueError if there is no such item.
The optional arguments start and end are interpreted as in the slice notation and
are used to limit the search to a particular subsequence of the list. The returned
index is computed relative to the beginning of the full sequence rather than
the start argument.
list.count(x)
Return the number of times x appears in the list.
list.sort(*, key=None, reverse=False)
Sort the items of the list in place (the arguments can be used for sort customization;
see sorted() for their explanation).
list.reverse()
Reverse the elements of the list in place.
list.copy()
Return a shallow copy of the list. Equivalent to a[:].
>>>
>>> fruits = ['orange', 'apple', 'pear', 'banana', 'kiwi', 'apple', 'banana']
>>> fruits.count('apple')
2
>>> fruits.count('tangerine')
0
>>> fruits.index('banana')
3
>>> fruits.index('banana', 4)  # Find next banana starting at position 4
6
>>> fruits.reverse()
>>> fruits
['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple',
'orange']
>>> fruits.append('grape')
>>> fruits
['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple',
'orange', 'grape']
>>> fruits.sort()
>>> fruits
['apple', 'apple', 'banana', 'banana', 'grape', 'kiwi',
'orange', 'pear']
>>> fruits.pop()
'pear'
You might have noticed that methods like insert, remove or sort that
only modify the list have no return value printed – they return the
default None. 1 This is a design principle for all mutable data structures in
Python.
Another thing you might notice is that not all data can be sorted or compared.
For instance, [None, 'hello', 10] doesn’t sort because integers can’t
be compared to strings and None can’t be compared to other types. Also,
there are some types that don’t have a defined ordering relation. For
example, 3+4j < 5+7j isn’t a valid comparison.
5.1.3. List Comprehensions
List comprehensions provide a concise way to create lists. For example, assume we
want to make a list of squares, like:
>>> squares = []
>>> for x in range(10):
... squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
Note that this creates (or overwrites) a variable named x that still exists after
the loop completes. We can calculate the list of squares without any side
effects using:
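Either map() or a list comprehension avoids the leftover loop variable:

```python
squares = list(map(lambda x: x**2, range(10)))
# or, equivalently and more readably:
squares = [x**2 for x in range(10)]
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```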
>>> combs = []
>>> for x in [1,2,3]:
... for y in [3,1,4]:
... if x != y:
... combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
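The same pairs can be produced by a single list comprehension, with the clauses in the
same order as in the loop:

```python
# Combine the elements of two lists when they are not equal
combs = [(x, y) for x in [1, 2, 3] for y in [3, 1, 4] if x != y]
print(combs)
# [(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```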
Note how the order of the for and if statements is the same in both these
snippets.
If the expression is a tuple (e.g. the (x, y) in the previous example), it must
be parenthesized.
5.1.4. Nested List Comprehensions
The initial expression in a list comprehension can be any arbitrary expression, including
another list comprehension. Consider the following example of a 3x4 matrix
implemented as a list of 3 lists of length 4:
>>> matrix = [
... [1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12],
... ]
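The nested list comprehension that the loops below unroll looks like this; each inner
comprehension collects one column of the matrix:

```python
matrix = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]
transposed = [[row[i] for row in matrix] for i in range(4)]
print(transposed)  # [[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```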
>>> transposed = []
>>> for i in range(4):
... transposed.append([row[i] for row in matrix])
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
>>>
>>> transposed = []
>>> for i in range(4):
... # the following 3 lines implement the nested listcomp
... transposed_row = []
... for row in matrix:
... transposed_row.append(row[i])
... transposed.append(transposed_row)
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
In the real world, you should prefer built-in functions to complex flow
statements. The zip() function would do a great job for this use case:
>>>
>>> list(zip(*matrix))
[(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]
See Unpacking Argument Lists for details on the asterisk in this line.
5.2. The del statement
There is a way to remove an item from a list given its index instead of its value:
the del statement. It can also be used to remove slices from a list or clear the entire list:
>>>
>>> a = [-1, 1, 66.25, 333, 333, 1234.5]
>>> del a[0]
>>> a
[1, 66.25, 333, 333, 1234.5]
>>> del a[2:4]
>>> a
[1, 66.25, 1234.5]
>>> del a[:]
>>> a
[]
del can also be used to delete entire variables:
>>> del a
Referencing the name a hereafter is an error (at least until another value is
assigned to it). We’ll find other uses for del later.
5.3. Tuples and Sequences
A tuple consists of a number of values separated by commas, for
instance t = 12345, 54321, 'hello!'.
Though tuples may seem similar to lists, they are often used in different
situations and for different purposes. Tuples are immutable, and usually
contain a heterogeneous sequence of elements that are accessed via
unpacking (see later in this section) or indexing (or even by attribute in the
case of namedtuples). Lists are mutable, and their elements are usually
homogeneous and are accessed by iterating over the list.
>>> empty = ()
>>> singleton = 'hello', # <-- note trailing comma
>>> len(empty)
0
>>> len(singleton)
1
>>> singleton
('hello',)
>>>
>>> t = 12345, 54321, 'hello!'  # tuple packing
>>> x, y, z = t
This is called, appropriately enough, sequence unpacking and works for any
sequence on the right-hand side. Sequence unpacking requires that there
are as many variables on the left side of the equals sign as there are
elements in the sequence. Note that multiple assignment is really just a
combination of tuple packing and sequence unpacking.
5.4. Sets
Python also includes a data type for sets. A set is an unordered collection with
no duplicate elements. Basic uses include membership testing and
eliminating duplicate entries. Set objects also support mathematical
operations like union, intersection, difference, and symmetric difference.
Curly braces or the set() function can be used to create sets. Note: to create
an empty set you have to use set(), not {}; the latter creates an empty
dictionary, a data structure that we discuss in the next section.
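A brief sketch with illustrative values:

```python
# Duplicates are removed on creation; membership tests are fast
basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}
print('orange' in basket)      # True
print('crabgrass' in basket)   # False

# Set operations on unique letters from two words
a = set('abracadabra')
b = set('alacazam')
print(sorted(a - b))   # letters in a but not in b
print(sorted(a & b))   # letters in both a and b
```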
5.5. Dictionaries
Another useful data type built into Python is the dictionary (see Mapping Types
— dict). Dictionaries are sometimes found in other languages as
“associative memories” or “associative arrays”. Unlike sequences, which
are indexed by a range of numbers, dictionaries are indexed by keys,
which can be any immutable type; strings and numbers can always be
keys. Tuples can be used as keys if they contain only strings, numbers, or
tuples; if a tuple contains any mutable object either directly or indirectly, it
cannot be used as a key. You can’t use lists as keys, since lists can be
modified in place using index assignments, slice assignments, or methods
like append() and extend().
The main operations on a dictionary are storing a value with some key and
extracting the value given the key. It is also possible to delete a key:value
pair with del. If you store using a key that is already in use, the old value
associated with that key is forgotten. It is an error to extract a value using
a non-existent key.
Performing list(d) on a dictionary returns a list of all the keys used in the
dictionary, in insertion order (if you want it sorted, just
use sorted(d) instead). To check whether a single key is in the
dictionary, use the in keyword.
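A small telephone-directory sketch (the names and numbers are illustrative):

```python
tel = {'jack': 4098, 'sape': 4139}
tel['guido'] = 4127        # store a value under a key
print(tel['jack'])         # extract the value for a key: 4098
del tel['sape']            # delete a key:value pair
tel['irv'] = 4127
print(list(tel))           # keys in insertion order
print('guido' in tel)      # True
print('jack' not in tel)   # False
```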
When the keys are simple strings, it is sometimes easier to specify pairs using
keyword arguments:
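For example (values illustrative):

```python
# dict() accepts keyword arguments when the keys are simple strings
d = dict(sape=4139, guido=4127, jack=4098)
print(d)  # {'sape': 4139, 'guido': 4127, 'jack': 4098}
```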
5.6. Looping Techniques
When looping through a sequence, the position index and corresponding value
can be retrieved at the same time using the enumerate() function.
>>>
>>> for i, v in enumerate(['tic', 'tac', 'toe']):
... print(i, v)
...
0 tic
1 tac
2 toe
To loop over two or more sequences at the same time, the entries can be
paired with the zip() function.
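For example, pairing two illustrative lists:

```python
questions = ['name', 'quest', 'favorite color']
answers = ['lancelot', 'the holy grail', 'blue']
for q, a in zip(questions, answers):
    # zip stops at the shortest input and yields tuples of paired items
    print('What is your {0}?  It is {1}.'.format(q, a))
```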
To loop over a sequence in sorted order, use the sorted() function which
returns a new sorted list while leaving the source unaltered.
>>>
It is sometimes tempting to change a list while you are looping over it; however,
it is often simpler and safer to create a new list instead.
>>>
5.7. More on Conditions
The conditions used in while and if statements can contain any operators, not just
comparisons.
The comparison operators in and not in check whether a value occurs (does
not occur) in a sequence. The operators is and is not compare whether
two objects are really the same object; this only matters for mutable
objects like lists. All comparison operators have the same priority, which is
lower than that of all numerical operators.
Comparisons may be combined using the Boolean operators and and or, and
the outcome of a comparison (or of any other Boolean expression) may be
negated with not. These have lower priorities than comparison operators;
between them, not has the highest priority and or the lowest, so
that A and not B or C is equivalent to (A and (not B)) or C. As
always, parentheses can be used to express the desired composition.
The Boolean operators and and or are so-called short-circuit operators: their
arguments are evaluated from left to right, and evaluation stops as soon
as the outcome is determined. For example, if A and C are true but B is
false, A and B and C does not evaluate the expression C. When used as
a general value and not as a Boolean, the return value of a short-circuit
operator is the last evaluated argument.
>>>
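For example, the value of a short-circuit expression can be assigned (the strings are
illustrative):

```python
string1, string2, string3 = '', 'Trondheim', 'Hammer Dance'
non_null = string1 or string2 or string3  # first truthy operand wins
print(non_null)  # Trondheim
```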
Note that comparing objects of different types with < or > is legal provided that
the objects have appropriate comparison methods. For example, mixed
numeric types are compared according to their numeric value, so 0 equals
0.0, etc. Otherwise, rather than providing an arbitrary ordering, the
interpreter will raise a TypeError exception.
Footnotes
Other languages may return the mutated object, which allows method chaining,
such as d->insert("a")->remove("b")->sort();
6. Modules
If you quit from the Python interpreter and enter it again, the definitions you have made
(functions and variables) are lost. Therefore, if you want to write a somewhat longer
program, you are better off using a text editor to prepare the input for the interpreter and
running it with that file as input instead. This is known as creating a script. As your
program gets longer, you may want to split it into several files for easier maintenance.
You may also want to use a handy function that you’ve written in several programs
without copying its definition into each program.
To support this, Python has a way to put definitions in a file and use them in a script or
in an interactive instance of the interpreter. Such a file is called a module; definitions
from a module can be imported into other modules or into the main module (the
collection of variables that you have access to in a script executed at the top level and
in calculator mode).
A module is a file containing Python definitions and statements. The file name is the
module name with the suffix .py appended. Within a module, the module’s name (as a
string) is available as the value of the global variable __name__. For instance, use your
favorite text editor to create a file called fibo.py in the current directory with the
following contents:
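A version of fibo.py consistent with the interpreter session shown below:

```python
# Fibonacci numbers module

def fib(n):    # write Fibonacci series up to n
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a + b
    print()

def fib2(n):   # return Fibonacci series up to n
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)
        a, b = b, a + b
    return result
```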
Now enter the Python interpreter and import this module with the following command:
>>>
>>> import fibo
This does not enter the names of the functions defined in fibo directly in the current
symbol table; it only enters the module name fibo there. Using the module name you
can access the functions:
>>>
>>> fibo.fib(1000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
>>> fibo.fib2(100)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> fibo.__name__
'fibo'
If you intend to use a function often you can assign it to a local name:
>>>
>>> fib = fibo.fib
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
Each module has its own private symbol table, which is used as the global symbol table by all
functions defined in the module. Thus, the author of a module can use global variables in the
module without worrying about accidental clashes with a user’s global variables. On the other
hand, if you know what you are doing you can touch a module’s global variables with the same
notation used to refer to its functions, modname.itemname.
Modules can import other modules. It is customary but not required to place
all import statements at the beginning of a module (or script, for that matter). The imported
module names are placed in the importing module’s global symbol table.
There is a variant of the import statement that imports names from a module directly into the
importing module’s symbol table. For example:
>>>
>>> from fibo import fib, fib2
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
This does not introduce the module name from which the imports are taken in the local symbol
table (so in the example, fibo is not defined).
There is even a variant to import all names that a module defines:
>>>
>>> from fibo import *
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
This imports all names except those beginning with an underscore (_). In most cases Python
programmers do not use this facility since it introduces an unknown set of names into the
interpreter, possibly hiding some things you have already defined.
Note that in general the practice of importing * from a module or package is frowned upon, since
it often causes poorly readable code. However, it is okay to use it to save typing in interactive
sessions.
If the module name is followed by as, then the name following as is bound directly to the
imported module.
>>>
>>> import fibo as fib
>>> fib.fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
This is effectively importing the module in the same way that import fibo will do, with the
only difference of it being available as fib.
It can also be used when utilising from with similar effects:
>>>
>>> from fibo import fib as fibonacci
>>> fibonacci(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
For efficiency reasons, each module is only imported once per interpreter session. Therefore, if
you change your modules, you must restart the interpreter – or, if it’s just one module you want
to test interactively, use importlib.reload(),
e.g. import importlib; importlib.reload(modulename).
6.1.1. Executing modules as scripts
When you run a Python module with python fibo.py <arguments>,
the code in the module will be executed, just as if you imported it, but with the __name__ set
to "__main__". That means that by adding this code at the end of your module:
if __name__ == "__main__":
    import sys
    fib(int(sys.argv[1]))
you can make the file usable as a script as well as an importable module, because the code that
parses the command line only runs if the module is executed as the “main” file:
$ python fibo.py 50
0 1 1 2 3 5 8 13 21 34
If the module is imported, the code is not run:
>>>
>>> import fibo
>>>
This is often used either to provide a convenient user interface to a module, or for testing
purposes (running the module as a script executes a test suite).
6.1.2. The Module Search Path
When a module named spam is imported, the interpreter first searches for a built-in module with
that name. If not found, it then searches for a file named spam.py in a list of directories given
by the variable sys.path. sys.path is initialized from these locations:
The directory containing the input script (or the current directory when no file is
specified).
PYTHONPATH (a list of directory names, with the same syntax as the shell
variable PATH).
The installation-dependent default.
Note
On file systems which support symlinks, the directory containing the input script is calculated
after the symlink is followed. In other words the directory containing the symlink is not added to
the module search path.
After initialization, Python programs can modify sys.path. The directory containing the script
being run is placed at the beginning of the search path, ahead of the standard library path. This
means that scripts in that directory will be loaded instead of modules of the same name in the
library directory. This is an error unless the replacement is intended. See section Standard
Modules for more information.
6.1.3. "Compiled" Python files
To speed up loading modules, Python caches the compiled version of each module in
the __pycache__ directory under the name module.version.pyc, where the version encodes
the format of the compiled file; it generally contains the Python version number.
Python checks the modification date of the source against the compiled version to see if it’s out
of date and needs to be recompiled. This is a completely automatic process. Also, the compiled
modules are platform-independent, so the same library can be shared among systems with
different architectures.
Python does not check the cache in two circumstances. First, it always recompiles and does not
store the result for the module that’s loaded directly from the command line. Second, it does not
check the cache if there is no source module. To support a non-source (compiled only)
distribution, the compiled module must be in the source directory, and there must not be a source
module.
You can use the -O or -OO switches on the Python command to reduce the size of a
compiled module. The -O switch removes assert statements, the -OO switch removes
both assert statements and __doc__ strings. Since some programs may rely on having
these available, you should only use this option if you know what you’re doing.
“Optimized” modules have an opt- tag and are usually smaller. Future releases may
change the effects of optimization.
A program doesn’t run any faster when it is read from a .pyc file than when it is read
from a .py file; the only thing that’s faster about .pyc files is the speed with which they
are loaded.
The module compileall can create .pyc files for all modules in a directory.
There is more detail on this process, including a flow chart of the decisions, in PEP 3147.
6.2. Standard Modules
Python comes with a library of standard modules, described in a separate document, the Python
Library Reference (“Library Reference” hereafter). Some modules are built into the interpreter;
these provide access to operations that are not part of the core of the language but are
nevertheless built in, either for efficiency or to provide access to operating system primitives
such as system calls. The set of such modules is a configuration option which also depends on
the underlying platform. For example, the winreg module is only provided on Windows
systems. One particular module deserves some attention: sys, which is built into every Python
interpreter. The variables sys.ps1 and sys.ps2 define the strings used as primary and
secondary prompts:
>>>
>>> import sys
>>> sys.ps1
'>>> '
>>> sys.ps2
'... '
These two variables are only defined if the interpreter is in interactive mode.
The variable sys.path is a list of strings that determines the interpreter’s search path for
modules. It is initialized to a default path taken from the environment variable PYTHONPATH, or
from a built-in default if PYTHONPATH is not set. You can modify it using standard list
operations:
>>>
>>> import sys
>>> sys.path.append('/ufs/guido/lib/python')
6.3. The dir() Function
The built-in function dir() is used to find out which names a module defines. It returns a
sorted list of strings.
Without arguments, dir() lists the names you have defined currently:
>>>
>>> a = [1, 2, 3, 4, 5]
>>> import fibo
>>> fib = fibo.fib
>>> dir()
['__builtins__', '__name__', 'a', 'fib', 'fibo', 'sys']
Note that it lists all types of names: variables, modules, functions, etc.
dir() does not list the names of built-in functions and variables. If you want a list of those, they
are defined in the standard module builtins:
>>>
>>> import builtins
>>> dir(builtins)
6.4. Packages
Packages are a way of structuring Python’s module namespace by using “dotted module names”.
For example, the module name A.B designates a submodule named B in a package named A. Just
like the use of modules saves the authors of different modules from having to worry about each
other’s global variable names, the use of dotted module names saves the authors of multi-module
packages like NumPy or Pillow from having to worry about each other’s module names.
Suppose you want to design a collection of modules (a “package”) for the uniform handling of
sound files and sound data. There are many different sound file formats (usually recognized by
their extension, for example: .wav, .aiff, .au), so you may need to create and maintain a
growing collection of modules for the conversion between the various file formats. There are
also many different operations you might want to perform on sound data (such as mixing, adding
echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will
be writing a never-ending stream of modules to perform these operations. Here’s a possible
structure for your package (expressed in terms of a hierarchical filesystem):
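A possible layout (consistent with the echo, surround, and reverse submodules referenced
below):

```
sound/                        Top-level package
      __init__.py             Initialize the sound package
      formats/                Subpackage for file format conversions
              __init__.py
              wavread.py
              wavwrite.py
              ...
      effects/                Subpackage for sound effects
              __init__.py
              echo.py
              surround.py
              reverse.py
              ...
      filters/                Subpackage for filters
              __init__.py
              equalizer.py
              vocoder.py
              karaoke.py
              ...
```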
When importing the package, Python searches through the directories on sys.path looking for
the package subdirectory.
The __init__.py files are required to make Python treat directories containing the file as
packages. This prevents directories with a common name, such as string, unintentionally
hiding valid modules that occur later on the module search path. In the simplest
case, __init__.py can just be an empty file, but it can also execute initialization code for the
package or set the __all__ variable, described later.
Users of the package can import individual modules from the package, for example:
import sound.effects.echo
This loads the submodule sound.effects.echo. It must be referenced with its full name:
sound.effects.echo.echofilter(input, output, delay=0.7, atten=4)
An alternative way of importing the submodule is:
from sound.effects import echo
This also loads the submodule echo, and makes it available without its package prefix, so it can
be used as follows:
echo.echofilter(input, output, delay=0.7, atten=4)
Yet another variation is to import the desired function or variable directly:
from sound.effects.echo import echofilter
Again, this loads the submodule echo, but this makes its function echofilter() directly
available:
echofilter(input, output, delay=0.7, atten=4)
Note that when using from package import item, the item can be either a submodule (or
subpackage) of the package, or some other name defined in the package, like a function, class or
variable. The import statement first tests whether the item is defined in the package; if not, it
assumes it is a module and attempts to load it. If it fails to find it, an ImportError exception is
raised.
Contrarily, when using syntax like import item.subitem.subsubitem, each item except
for the last must be a package; the last item can be a module or a package but can’t be a class or
function or variable defined in the previous item.
6.4.1. Importing * From a Package
What happens when the user writes from sound.effects import *? Ideally, one would
hope that this somehow goes out to the filesystem, finds which submodules are present in the
package, and imports them all. This could take a long time, and importing sub-modules might
have unwanted side-effects that should only happen when the sub-module is explicitly imported.
The only solution is for the package author to provide an explicit index of the package.
The import statement uses the following convention: if a package’s __init__.py code
defines a list named __all__, it is taken to be the list of module names that should be imported
when from package import * is encountered. It is up to the package author to keep this list
up-to-date when a new version of the package is released. Package authors may also decide not
to support it, if they don’t see a use for importing * from their package. For example, the
file sound/effects/__init__.py could contain the following code:
__all__ = ["echo", "surround", "reverse"]
This would mean that from sound.effects import * would import the three named
submodules of the sound package.
If __all__ is not defined, the statement from sound.effects import * does not import
all submodules from the package sound.effects into the current namespace; it only ensures
that the package sound.effects has been imported (possibly running any initialization code
in __init__.py) and then imports whatever names are defined in the package. This includes
any names defined (and submodules explicitly loaded) by __init__.py. It also includes any
submodules of the package that were explicitly loaded by previous import statements. Consider
this code:
import sound.effects.echo
import sound.effects.surround
from sound.effects import *
In this example, the echo and surround modules are imported in the current namespace
because they are defined in the sound.effects package when
the from...import statement is executed. (This also works when __all__ is defined.)
Although certain modules are designed to export only names that follow certain patterns when
you use import *, it is still considered bad practice in production code.
You can also write relative imports, with the from module import name form of import
statement. These imports use leading dots to indicate the current and parent packages involved in
the relative import. From the surround module for example, you might use:
from . import echo
from .. import formats
from ..filters import equalizer
Note that relative imports are based on the name of the current module. Since the name of the
main module is always "__main__", modules intended for use as the main module of a Python
application must always use absolute imports.
Packages support one more special attribute, __path__, which is initialized to a list containing
the name of the directory holding the package’s __init__.py before the code in that file is
executed. This variable can be modified; doing so affects future searches for modules and
subpackages contained in the package. While this feature is not often needed, it can be used to
extend the set of modules found in a package.
Footnotes
In fact function definitions are also ‘statements’ that are ‘executed’; the execution of a
module-level function definition enters the function name in the module’s global
symbol table.
7. Input and Output
7.1. Fancier Output Formatting
Often you’ll want more control over the formatting of your output than simply printing
space-separated values. There are several ways to format output.
To use formatted string literals, begin a string with f or F before the opening
quotation mark or triple quotation mark. Inside this string, you can write a Python
expression between { and } characters that can refer to variables or literal
values.
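A minimal example (values illustrative):

```python
year = 2016
event = 'Referendum'
# Expressions inside {} are evaluated and inserted into the string
print(f'Results of the {year} {event}')  # Results of the 2016 Referendum
```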
When you don’t need fancy output but just want a quick display of some variables for
debugging purposes, you can convert any value to a string with
the repr() or str() functions.
The str() function is meant to return representations of values which are fairly human-
readable, while repr() is meant to generate representations which can be read by the
interpreter (or will force a SyntaxError if there is no equivalent syntax). For objects
which don’t have a particular representation for human consumption, str() will return
the same value as repr(). Many values, such as numbers or structures like lists and
dictionaries, have the same representation using either function. Strings, in particular,
have two distinct representations.
Some examples:
>>>
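A few illustrative values:

```python
s = 'Hello, world.'
print(str(s))     # Hello, world.
print(repr(s))    # 'Hello, world.'  (with quotes)
print(str(1/7))   # 0.14285714285714285
x = 10 * 3.25
y = 200 * 200
print('The value of x is ' + repr(x) + ', and y is ' + repr(y) + '...')
print(repr((x, y, ('spam', 'eggs'))))  # (32.5, 40000, ('spam', 'eggs'))
```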
The string module contains a Template class that offers yet another way to
substitute values into strings, using placeholders like $x and replacing them with values
from a dictionary, but offers much less control of the formatting.
>>>
Passing an integer after the ':' will cause that field to be a minimum number of
characters wide. This is useful for making columns line up.
>>>
Other modifiers can be used to convert the value before it is formatted. '!
a' applies ascii(), '!s' applies str(), and '!r' applies repr():
>>>
For a reference on these format specifications, see the reference guide for the Format
Specification Mini-Language.
>>>
>>> print('We are the {} who say "{}!"'.format('knights', 'Ni'))
We are the knights who say "Ni!"
The brackets and characters within them (called format fields) are replaced with the
objects passed into the str.format() method. A number in the brackets can be used
to refer to the position of the object passed into the str.format() method.
>>>
>>> print('{0} and {1}'.format('spam', 'eggs'))
spam and eggs
>>> print('{1} and {0}'.format('spam', 'eggs'))
eggs and spam
If keyword arguments are used in the str.format() method, their values are referred
to by using the name of the argument.
>>>
>>> print('This {food} is {adjective}.'.format(
...       food='spam', adjective='absolutely horrible'))
This spam is absolutely horrible.
If you have a really long format string that you don’t want to split up, it would be nice if
you could reference the variables to be formatted by name instead of by position. This
can be done by simply passing the dict and using square brackets '[]' to access the
keys.
>>>
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}
>>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; '
...       'Dcab: {0[Dcab]:d}'.format(table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678
This is particularly useful in combination with the built-in function vars(), which returns
a dictionary containing all local variables.
>>>
For a complete overview of string formatting with str.format(), see Format String
Syntax.
>>>
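Columns can also be lined up manually, for example with str.rjust(), which right-justifies a
string in a field of a given width:

```python
# Table of squares and cubes, right-justified in fixed-width columns
for x in range(1, 11):
    print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ')
    print(repr(x*x*x).rjust(4))
```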
(Note that the one space between each column was added by the way print() works:
it always adds spaces between its arguments.)
There is another method, str.zfill(), which pads a numeric string on the left with
zeros. It understands about plus and minus signs:
>>>
>>> '12'.zfill(5)
'00012'
>>> '-3.14'.zfill(7)
'-003.14'
>>> '3.14159265359'.zfill(5)
'3.14159265359'
7.2. Reading and Writing Files
open() returns a file object, and is most commonly used with two
arguments: open(filename, mode).
>>>
>>> f = open('workfile', 'w')
The first argument is a string containing the filename. The second argument is another
string containing a few characters describing the way in which the file will be
used. mode can be 'r' when the file will only be read, 'w' for only writing (an existing
file with the same name will be erased), and 'a' opens the file for appending; any data
written to the file is automatically added to the end. 'r+' opens the file for both reading
and writing. The mode argument is optional; 'r' will be assumed if it’s omitted.
Normally, files are opened in text mode, that means, you read and write strings from
and to the file, which are encoded in a specific encoding. If encoding is not specified,
the default is platform dependent (see open()). 'b' appended to the mode opens the
file in binary mode: now the data is read and written in the form of bytes objects. This
mode should be used for all files that don’t contain text.
In text mode, the default when reading is to convert platform-specific line endings (\n
on Unix, \r\n on Windows) to just \n. When writing in text mode, the default is to
convert occurrences of \n back to platform-specific line endings. This behind-the-
scenes modification to file data is fine for text files, but will corrupt binary data like that
in JPEG or EXE files. Be very careful to use binary mode when reading and writing such
files.
It is good practice to use the with keyword when dealing with file objects. The
advantage is that the file is properly closed after its suite finishes, even if an exception is
raised at some point. Using with is also much shorter than writing
equivalent try-finally blocks:
>>>
>>> with open('workfile') as f:
... read_data = f.read()
>>> # We can check that the file has been automatically closed.
>>> f.closed
True
If you’re not using the with keyword, then you should call f.close() to close the file
and immediately free up any system resources used by it.
Warning
Calling f.write() without using the with keyword or calling f.close() might result
in the arguments of f.write() not being completely written to the disk, even if the
program exits successfully.
>>>
>>> f.close()
>>> f.read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file.
To read a file’s contents, call f.read(size), which reads some quantity of data and
returns it as a string (in text mode) or bytes object (in binary mode). size is an optional
numeric argument. When size is omitted or negative, the entire contents of the file will
be read and returned; it’s your problem if the file is twice as large as your machine’s
memory. Otherwise, at most size characters (in text mode) or size bytes (in binary
mode) are read and returned. If the end of the file has been reached, f.read() will
return an empty string ('').
>>>
>>> f.read()
'This is the entire file.\n'
>>> f.read()
''
f.readline() reads a single line from the file; a newline character (\n) is left at the
end of the string, and is only omitted on the last line of the file if the file doesn’t end in a
newline. This makes the return value unambiguous; if f.readline() returns an empty
string, the end of the file has been reached, while a blank line is represented by '\n', a
string containing only a single newline.
>>>
>>> f.readline()
'This is the first line of the file.\n'
>>> f.readline()
'Second line of the file\n'
>>> f.readline()
''
For reading lines from a file, you can loop over the file object. This is memory efficient,
fast, and leads to simple code:
>>>
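A sketch, creating a small file first so the loop has something to read:

```python
with open('workfile', 'w') as f:
    f.write('This is the first line of the file.\nSecond line of the file\n')

# Looping over the file object reads it line by line, memory efficiently
with open('workfile') as f:
    for line in f:
        print(line, end='')
```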
If you want to read all the lines of a file in a list you can also
use list(f) or f.readlines().
f.write(string) writes the contents of string to the file, returning the number of
characters written.
>>>
>>> f.write('This is a test\n')
15
Other types of objects need to be converted – either to a string (in text mode) or a bytes
object (in binary mode) – before writing them:
>>>
>>> value = ('the answer', 42)
>>> s = str(value)  # convert the tuple to string
>>> f.write(s)
18
f.tell() returns an integer giving the file object’s current position in the file
represented as number of bytes from the beginning of the file when in binary mode and
an opaque number when in text mode.
To change the file object’s position, use f.seek(offset, whence). The position is
computed from adding offset to a reference point; the reference point is selected by
the whence argument. A whence value of 0 measures from the beginning of the file, 1
uses the current file position, and 2 uses the end of the file as the reference
point. whence can be omitted and defaults to 0, using the beginning of the file as the
reference point.
>>>
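A sketch in binary mode (the filename is illustrative):

```python
f = open('seekdemo', 'wb+')
f.write(b'0123456789abcdef')
f.seek(5)          # go to the 6th byte in the file
print(f.read(1))   # b'5'
f.seek(-3, 2)      # go to the 3rd byte before the end
print(f.read(1))   # b'd'
f.close()
```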
In text files (those opened without a b in the mode string), only seeks relative to the
beginning of the file are allowed (the exception being seeking to the very file end
with seek(0, 2)) and the only valid offset values are those returned from
f.tell(), or zero. Any other offset value produces undefined behaviour.
File objects have some additional methods, such as isatty() and truncate() which
are less frequently used; consult the Library Reference for a complete guide to file
objects.
Rather than having users constantly writing and debugging code to save complicated
data types to files, Python allows you to use the popular data interchange format
called JSON (JavaScript Object Notation). The standard module called json can take
Python data hierarchies, and convert them to string representations; this process is
called serializing. Reconstructing the data from the string representation is
called deserializing. Between serializing and deserializing, the string representing the
object may have been stored in a file or data, or sent over a network connection to
some distant machine.
Note
The JSON format is commonly used by modern applications to allow for data exchange.
Many programmers are already familiar with it, which makes it a good choice for
interoperability.
If you have an object x, you can view its JSON string representation with a simple line
of code:
>>>
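For example:

```python
import json

x = [1, 'simple', 'list']
print(json.dumps(x))  # [1, "simple", "list"]
```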
Another variant of the dumps() function, called dump(), simply serializes the object to
a text file. So if f is a text file object opened for writing, we can do this:
json.dump(x, f)
To decode the object again, if f is a text file object which has been opened for reading:
x = json.load(f)
This simple serialization technique can handle lists and dictionaries, but serializing
arbitrary class instances in JSON requires a bit of extra effort. The reference for
the json module contains an explanation of this.
See also
pickle - the pickle module. Contrary to JSON, pickle is a protocol which allows the
serialization of arbitrarily complex Python objects.
8. Errors and Exceptions
There are (at least) two distinguishable kinds of errors: syntax errors and exceptions.
8.1. Syntax Errors
Syntax errors, also known as parsing errors, are perhaps the most common kind of
complaint you get while you are still learning Python:
>>>
>>> while True print('Hello world')
  File "<stdin>", line 1
    while True print('Hello world')
                   ^
SyntaxError: invalid syntax
The parser repeats the offending line and displays a little ‘arrow’ pointing at the earliest
point in the line where the error was detected. The error is caused by (or at least
detected at) the token preceding the arrow: in the example, the error is detected at the
function print(), since a colon (':') is missing before it. File name and line number
are printed so you know where to look in case the input came from a script.
8.2. Exceptions
Even if a statement or expression is syntactically correct, it may cause an error when an
attempt is made to execute it. Errors detected during execution are
called exceptions and are not unconditionally fatal: you will soon learn how to handle
them in Python programs. Most exceptions are not handled by programs, however, and
result in error messages as shown here:
>>> 10 * (1/0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>> 4 + spam*3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'spam' is not defined
>>> '2' + 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "int") to str
The last line of the error message indicates what happened. Exceptions come in
different types, and the type is printed as part of the message: the types in the example
are ZeroDivisionError, NameError and TypeError. The string printed as the
exception type is the name of the built-in exception that occurred. This is true for all
built-in exceptions, but need not be true for user-defined exceptions (although it is a
useful convention). Standard exception names are built-in identifiers (not reserved
keywords).
The rest of the line provides detail based on the type of exception and what caused it.
The preceding part of the error message shows the context where the exception
happened, in the form of a stack traceback. In general it contains a stack traceback
listing source lines; however, it will not display lines read from standard input.
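A minimal sketch of the try statement in action (the helper name to_int is
hypothetical):

```python
def to_int(text):
    """Return text as an integer, or None if it is not a valid number."""
    try:
        return int(text)           # may raise ValueError
    except ValueError:
        print("Oops! That was no valid number.")
        return None

print(to_int("42"))    # -> 42
print(to_int("spam"))  # prints the message, then None
```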
First, the try clause (the statement(s) between the try and except keywords) is
executed.
If no exception occurs, the except clause is skipped and execution of
the try statement is finished.
If an exception occurs during execution of the try clause, the rest of the clause is
skipped. Then if its type matches the exception named after
the except keyword, the except clause is executed, and then execution
continues after the try statement.
If an exception occurs which does not match the exception named in the except
clause, it is passed on to outer try statements; if no handler is found, it is
an unhandled exception and execution stops with a message as shown above.
A try statement may have more than one except clause, to specify handlers for
different exceptions. At most one handler will be executed. Handlers only handle
exceptions that occur in the corresponding try clause, not in other handlers of the
same try statement. An except clause may name multiple exceptions as a
parenthesized tuple, for example:
class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass

for cls in [B, C, D]:
    try:
        raise cls()
    except D:
        print("D")
    except C:
        print("C")
    except B:
        print("B")
Note that if the except clauses were reversed (with except B first), it would have
printed B, B, B — the first matching except clause is triggered.
The last except clause may omit the exception name(s), to serve as a wildcard. Use this
with extreme caution, since it is easy to mask a real programming error in this way! It
can also be used to print an error message and then re-raise the exception (allowing a
caller to handle the exception as well):
import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error: {0}".format(err))
except ValueError:
    print("Could not convert data to an integer.")
except:
    print("Unexpected error:", sys.exc_info()[0])
    raise
The try … except statement has an optional else clause, which, when present, must
follow all except clauses. It is useful for code that must be executed if the try clause
does not raise an exception. For example:
for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except OSError:
        print('cannot open', arg)
    else:
        print(arg, 'has', len(f.readlines()), 'lines')
        f.close()
The use of the else clause is better than adding additional code to the try clause
because it avoids accidentally catching an exception that wasn’t raised by the code
being protected by the try … except statement.
When an exception occurs, it may have an associated value, also known as the
exception’s argument. The presence and type of the argument depend on the exception
type.
The except clause may specify a variable after the exception name. The variable is
bound to an exception instance with the arguments stored in instance.args. For
convenience, the exception instance defines __str__() so the arguments can be
printed directly without having to reference .args. One may also instantiate an
exception first before raising it and add any attributes to it as desired.
>>> try:
...     raise Exception('spam', 'eggs')
... except Exception as inst:
...     print(type(inst))    # the exception instance
...     print(inst.args)     # arguments stored in .args
...     print(inst)          # __str__ allows args to be printed directly,
...                          # but may be overridden in exception subclasses
...     x, y = inst.args     # unpack args
...     print('x =', x)
...     print('y =', y)
...
<class 'Exception'>
('spam', 'eggs')
('spam', 'eggs')
x = spam
y = eggs
If an exception has arguments, they are printed as the last part (‘detail’) of the message
for unhandled exceptions.
Exception handlers don’t just handle exceptions if they occur immediately in the try
clause, but also if they occur inside functions that are called (even indirectly) in the try
clause. For example:
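A sketch (this_fails is an illustrative name): the handler catches the
ZeroDivisionError even though it is raised inside the called function:

```python
def this_fails():
    x = 1/0    # raises ZeroDivisionError

try:
    this_fails()
except ZeroDivisionError as err:
    print('Handling run-time error:', err)
```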
The sole argument to raise indicates the exception to be raised. This must be either
an exception instance or an exception class (a class that derives from Exception). If
an exception class is passed, it will be implicitly instantiated by calling its constructor
with no arguments:
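A short sketch of that equivalence:

```python
# 'raise ValueError' is shorthand for 'raise ValueError()':
try:
    raise ValueError            # class: implicitly instantiated
except ValueError as exc:
    first = exc

try:
    raise ValueError()          # instance: raised as-is
except ValueError as exc:
    second = exc

print(type(first) is type(second))   # -> True
```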
If you need to determine whether an exception was raised but don’t intend to handle it,
a simpler form of the raise statement allows you to re-raise the exception:
>>> try:
... raise NameError('HiThere')
... except NameError:
... print('An exception flew by!')
... raise
...
An exception flew by!
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
NameError: HiThere
Exception classes can be defined which do anything any other class can do, but are
usually kept simple, often only offering a number of attributes that allow information
about the error to be extracted by handlers for the exception. When creating a module
that can raise several distinct errors, a common practice is to create a base class for
exceptions defined by that module, and subclass that to create specific exception
classes for different error conditions:
class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class InputError(Error):
    """Exception raised for errors in the input.

    Attributes:
        expression -- input expression in which the error occurred
        message -- explanation of the error
    """

class TransitionError(Error):
    """Raised when an operation attempts a state transition that's not
    allowed.

    Attributes:
        previous -- state at beginning of transition
        next -- attempted new state
        message -- explanation of why the specific transition is not allowed
    """
Most exceptions are defined with names that end in “Error”, similar to the naming of the
standard exceptions.
Many standard modules define their own exceptions to report errors that may occur in
functions they define. More information on classes is presented in chapter Classes.
>>> try:
...     raise KeyboardInterrupt
... finally:
...     print('Goodbye, world!')
...
Goodbye, world!
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
KeyboardInterrupt
If a finally clause is present, the finally clause will execute as the last task before
the try statement completes. The finally clause runs whether or not
the try statement produces an exception. The following points discuss more complex
cases when an exception occurs:
If an exception occurs during execution of the try clause, the exception may be
handled by an except clause. If the exception is not handled by
an except clause, the exception is re-raised after the finally clause has been
executed.
An exception could occur during execution of an except or else clause. Again,
the exception is re-raised after the finally clause has been executed.
If the try statement reaches a break, continue or return statement,
the finally clause will execute just prior to
the break, continue or return statement’s execution.
If a finally clause includes a return statement, the returned value will be the
one from the finally clause’s return statement, not the value from
the try clause’s return statement.
For example:
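A sketch illustrating both points (the function names bool_return and divide are
illustrative):

```python
def bool_return():
    try:
        return True
    finally:
        return False        # the finally clause's return wins

print(bool_return())        # -> False

def divide(x, y):
    try:
        result = x / y
    except ZeroDivisionError:
        print("division by zero!")
    else:
        print("result is", result)
    finally:
        print("executing finally clause")

divide(2, 1)                # result is 2.0, then the finally message
divide(2, 0)                # division by zero!, then the finally message
try:
    divide("2", "1")        # TypeError is re-raised after the finally clause runs
except TypeError:
    print("TypeError escaped divide()")
```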
As you can see, the finally clause is executed in any event. The TypeError raised
by dividing two strings is not handled by the except clause and therefore re-raised after
the finally clause has been executed.
In real world applications, the finally clause is useful for releasing external resources
(such as files or network connections), regardless of whether the use of the resource
was successful.
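Consider, for example, reading a file without any explicit clean-up (the file name
and contents here are made up for this sketch):

```python
# Create a small example file first
out = open("myfile.txt", "w")
out.write("first line\nsecond line\n")
out.close()

# Reading it like this leaves the file object open until the
# garbage collector happens to reclaim it:
for line in open("myfile.txt"):
    print(line, end="")
```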
The problem with this code is that it leaves the file open for an indeterminate amount of
time after this part of the code has finished executing. This is not an issue in simple
scripts, but can be a problem for larger applications. The with statement allows objects
like files to be used in a way that ensures they are always cleaned up promptly and
correctly.
with open("myfile.txt") as f:
    for line in f:
        print(line, end="")
After the statement is executed, the file f is always closed, even if a problem was
encountered while processing the lines. Objects which, like files, provide predefined
clean-up actions will indicate this in their documentation.
9. Classes
Classes provide a means of bundling data and functionality together. Creating a new
class creates a new type of object, allowing new instances of that type to be made.
Each class instance can have attributes attached to it for maintaining its state. Class
instances can also have methods (defined by its class) for modifying its state.
Compared with other programming languages, Python’s class mechanism adds classes
with a minimum of new syntax and semantics. It is a mixture of the class mechanisms
found in C++ and Modula-3. Python classes provide all the standard features of Object
Oriented Programming: the class inheritance mechanism allows multiple base classes,
a derived class can override any methods of its base class or classes, and a method
can call the method of a base class with the same name. Objects can contain arbitrary
amounts and kinds of data. As is true for modules, classes partake of the dynamic
nature of Python: they are created at runtime, and can be modified further after creation.
(Lacking universally accepted terminology to talk about classes, I will make occasional
use of Smalltalk and C++ terms. I would use Modula-3 terms, since its object-oriented
semantics are closer to those of Python than C++, but I expect that few readers have
heard of it.)
By the way, I use the word attribute for any name following a dot — for example, in the
expression z.real, real is an attribute of the object z. Strictly speaking, references to
names in modules are attribute references: in the
expression modname.funcname, modname is a module object and funcname is an
attribute of it. In this case there happens to be a straightforward mapping between the
module’s attributes and the global names defined in the module: they share the same
namespace! [1]
Namespaces are created at different moments and have different lifetimes. The
namespace containing the built-in names is created when the Python interpreter starts
up, and is never deleted. The global namespace for a module is created when the
module definition is read in; normally, module namespaces also last until the interpreter
quits. The statements executed by the top-level invocation of the interpreter, either read
from a script file or interactively, are considered part of a module called __main__, so
they have their own global namespace. (The built-in names actually also live in a
module; this is called builtins.)
The local namespace for a function is created when the function is called, and deleted
when the function returns or raises an exception that is not handled within the function.
(Actually, forgetting would be a better way to describe what actually happens.) Of
course, recursive invocations each have their own local namespace.
Although scopes are determined statically, they are used dynamically. At any time
during execution, there are 3 or 4 nested scopes whose namespaces are directly
accessible:
the innermost scope, which is searched first, contains the local names
the scopes of any enclosing functions, which are searched starting with the
nearest enclosing scope, contain non-local, but also non-global names
the next-to-last scope contains the current module’s global names
the outermost scope (searched last) is the namespace containing built-in names
If a name is declared global, then all references and assignments go directly to the
middle scope containing the module’s global names. To rebind variables found outside
of the innermost scope, the nonlocal statement can be used; if not declared nonlocal,
those variables are read-only (an attempt to write to such a variable will simply create
a new local variable in the innermost scope, leaving the identically named outer variable
unchanged).
Usually, the local scope references the local names of the (textually) current function.
Outside functions, the local scope references the same namespace as the global scope:
the module’s namespace. Class definitions place yet another namespace in the local
scope.
It is important to realize that scopes are determined textually: the global scope of a
function defined in a module is that module’s namespace, no matter from where or by
what alias the function is called. On the other hand, the actual search for names is done
dynamically, at run time — however, the language definition is evolving towards static
name resolution, at “compile” time, so don’t rely on dynamic name resolution! (In fact,
local variables are already determined statically.)
def scope_test():
    def do_local():
        spam = "local spam"

    def do_nonlocal():
        nonlocal spam
        spam = "nonlocal spam"

    def do_global():
        global spam
        spam = "global spam"

    spam = "test spam"
    do_local()
    print("After local assignment:", spam)
    do_nonlocal()
    print("After nonlocal assignment:", spam)
    do_global()
    print("After global assignment:", spam)

scope_test()
print("In global scope:", spam)
The output of the example code is:
After local assignment: test spam
After nonlocal assignment: nonlocal spam
After global assignment: nonlocal spam
In global scope: global spam
Note how the local assignment (which is default) didn’t change scope_test’s binding
of spam. The nonlocal assignment changed scope_test’s binding of spam, and
the global assignment changed the module-level binding.
You can also see that there was no previous binding for spam before
the global assignment.
class ClassName:
    <statement-1>
    .
    .
    .
    <statement-N>
Class definitions, like function definitions (def statements) must be executed before
they have any effect. (You could conceivably place a class definition in a branch of
an if statement, or inside a function.)
In practice, the statements inside a class definition will usually be function definitions,
but other statements are allowed, and sometimes useful — we’ll come back to this later.
The function definitions inside a class normally have a peculiar form of argument list,
dictated by the calling conventions for methods — again, this is explained later.
When a class definition is entered, a new namespace is created, and used as the local
scope — thus, all assignments to local variables go into this new namespace. In
particular, function definitions bind the name of the new function here.
When a class definition is left normally (via the end), a class object is created. This is
basically a wrapper around the contents of the namespace created by the class
definition; we’ll learn more about class objects in the next section. The original local
scope (the one in effect just before the class definition was entered) is reinstated, and
the class object is bound here to the class name given in the class definition header
(ClassName in the example).
class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'
then MyClass.i and MyClass.f are valid attribute references, returning an integer and
a function object, respectively. Class attributes can also be assigned to, so you can
change the value of MyClass.i by assignment. __doc__ is also a valid attribute,
returning the docstring belonging to the class: "A simple example class".
Class instantiation uses function notation. Just pretend that the class object is a
parameterless function that returns a new instance of the class. For example (assuming
the above class):
x = MyClass()
creates a new instance of the class and assigns this object to the local variable x.
The instantiation operation (“calling” a class object) creates an empty object. Many
classes like to create objects with instances customized to a specific initial state.
Therefore a class may define a special method named __init__(), like this:
def __init__(self):
    self.data = []
When a class defines an __init__() method, class instantiation automatically
invokes __init__() for the newly-created class instance. So in this example, a new,
initialized instance can be obtained by:
x = MyClass()
Of course, the __init__() method may have arguments for greater flexibility. In that
case, arguments given to the class instantiation operator are passed on
to __init__(). For example,
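For instance, a sketch in the spirit of the classic complex-number example:

```python
class Complex:
    def __init__(self, realpart, imagpart):
        self.r = realpart
        self.i = imagpart

x = Complex(3.0, -4.5)
print(x.r, x.i)   # -> 3.0 -4.5
```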
x.counter = 1
while x.counter < 10:
    x.counter = x.counter * 2
print(x.counter)
del x.counter
The other kind of instance attribute reference is a method. A method is a function that
“belongs to” an object. (In Python, the term method is not unique to class instances:
other object types can have methods as well. For example, list objects have methods
called append, insert, remove, sort, and so on. However, in the following discussion,
we’ll use the term method exclusively to mean methods of class instance objects,
unless explicitly stated otherwise.)
Valid method names of an instance object depend on its class. By definition, all
attributes of a class that are function objects define corresponding methods of its
instances. So in our example, x.f is a valid method reference, since MyClass.f is a
function, but x.i is not, since MyClass.i is not. But x.f is not the same thing
as MyClass.f — it is a method object, not a function object.
x.f()
In the MyClass example, this will return the string 'hello world'. However, it is not
necessary to call a method right away: x.f is a method object, and can be stored away
and called at a later time. For example:
xf = x.f
while True:
    print(xf())
What exactly happens when a method is called? You may have noticed that x.f() was
called without an argument above, even though the function definition for f() specified
an argument. What happened to the argument? Surely Python raises an exception
when a function that requires an argument is called without any — even if the argument
isn’t actually used…
Actually, you may have guessed the answer: the special thing about methods is that the
instance object is passed as the first argument of the function. In our example, the
call x.f() is exactly equivalent to MyClass.f(x). In general, calling a method with a
list of n arguments is equivalent to calling the corresponding function with an argument
list that is created by inserting the method’s instance object before the first argument.
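As a sketch of this equivalence (the Greeter class is hypothetical):

```python
class Greeter:
    def hello(self, name):
        return 'hello ' + name

g = Greeter()
print(g.hello('world'))            # method call              -> hello world
print(Greeter.hello(g, 'world'))   # equivalent function call -> hello world
```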
If you still don’t understand how methods work, a look at the implementation can
perhaps clarify matters. When a non-data attribute of an instance is referenced, the
instance’s class is searched. If the name denotes a valid class attribute that is a function
object, a method object is created by packing (pointers to) the instance object and the
function object just found together in an abstract object: this is the method object. When
the method object is called with an argument list, a new argument list is constructed
from the instance object and the argument list, and the function object is called with this
new argument list.
class Dog:

    kind = 'canine'         # class variable shared by all instances

    def __init__(self, name):
        self.name = name    # instance variable unique to each instance
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.kind # shared by all dogs
'canine'
>>> e.kind # shared by all dogs
'canine'
>>> d.name # unique to d
'Fido'
>>> e.name # unique to e
'Buddy'
As discussed in A Word About Names and Objects, shared data can have possibly
surprising effects when it involves mutable objects such as lists and dictionaries. For
example, the tricks list in the following code should not be used as a class variable
because just a single list would be shared by all Dog instances:
class Dog:

    tricks = []             # mistaken use of a class variable

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks # unexpectedly shared by all dogs
['roll over', 'play dead']
class Dog:

    def __init__(self, name):
        self.name = name
        self.tricks = []    # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks
['roll over']
>>> e.tricks
['play dead']
If the same attribute name occurs in both an instance and in its class, then attribute
lookup prioritizes the instance:
class Warehouse:
    purpose = 'storage'
    region = 'west'
>>> w1 = Warehouse()
>>> print(w1.purpose, w1.region)
storage west
>>> w2 = Warehouse()
>>> w2.region = 'east'
>>> print(w2.purpose, w2.region)
storage east
There is no shorthand for referencing data attributes (or other methods!) from within
methods. I find that this actually increases the readability of methods: there is no
chance of confusing local variables and instance variables when glancing through a
method.
Often, the first argument of a method is called self. This is nothing more than a
convention: the name self has absolutely no special meaning to Python. Note,
however, that by not following the convention your code may be less readable to other
Python programmers, and it is also conceivable that a class browser program might be
written that relies upon such a convention.
Any function object that is a class attribute defines a method for instances of that class.
It is not necessary that the function definition is textually enclosed in the class definition:
assigning a function object to a local variable in the class is also ok. For example:
# Function defined outside the class
def f1(self, x, y):
    return min(x, x+y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g
Now f, g and h are all attributes of class C that refer to function objects, and
consequently they are all methods of instances of C — h being exactly equivalent to g.
Note that this practice usually only serves to confuse the reader of a program.
Methods may call other methods by using method attributes of the self argument:
class Bag:
    def __init__(self):
        self.data = []

    def add(self, x):
        self.data.append(x)

    def addtwice(self, x):
        self.add(x)
        self.add(x)
Methods may reference global names in the same way as ordinary functions. The global
scope associated with a method is the module containing its definition. (A class is never
used as a global scope.) While one rarely encounters a good reason for using global
data in a method, there are many legitimate uses of the global scope: for one thing,
functions and modules imported into the global scope can be used by methods, as well
as functions and classes defined in it. Usually, the class containing the method is itself
defined in this global scope, and in the next section we’ll find some good reasons why a
method would want to reference its own class.
Each value is an object, and therefore has a class (also called its type). It is stored
as object.__class__.
9.5. Inheritance
Of course, a language feature would not be worthy of the name “class” without
supporting inheritance. The syntax for a derived class definition looks like this:
class DerivedClassName(BaseClassName):
    <statement-1>
    .
    .
    .
    <statement-N>
The name BaseClassName must be defined in a scope containing the derived class
definition. In place of a base class name, other arbitrary expressions are also allowed.
This can be useful, for example, when the base class is defined in another module:
class DerivedClassName(modname.BaseClassName):
Execution of a derived class definition proceeds the same as for a base class. When the
class object is constructed, the base class is remembered. This is used for resolving
attribute references: if a requested attribute is not found in the class, the search
proceeds to look in the base class. This rule is applied recursively if the base class itself
is derived from some other class.
There’s nothing special about instantiation of derived
classes: DerivedClassName() creates a new instance of the class. Method references
are resolved as follows: the corresponding class attribute is searched, descending down
the chain of base classes if necessary, and the method reference is valid if this yields a
function object.
Derived classes may override methods of their base classes. Because methods have
no special privileges when calling other methods of the same object, a method of a
base class that calls another method defined in the same base class may end up calling
a method of a derived class that overrides it. (For C++ programmers: all methods in
Python are effectively virtual.)
An overriding method in a derived class may in fact want to extend rather than simply
replace the base class method of the same name. There is a simple way to call the
base class method directly: just
call BaseClassName.methodname(self, arguments). This is occasionally useful to
clients as well. (Note that this only works if the base class is accessible
as BaseClassName in the global scope.)
For most purposes, in the simplest cases, you can think of the search for attributes
inherited from a parent class as depth-first, left-to-right, not searching twice in the same
class where there is an overlap in the hierarchy. Thus, if an attribute is not found
in DerivedClassName, it is searched for in Base1, then (recursively) in the base
classes of Base1, and if it is not found there, it is searched for in Base2, and so on.
In fact, it is slightly more complex than that; the method resolution order changes
dynamically to support cooperative calls to super(). This approach is known in some
other multiple-inheritance languages as call-next-method and is more powerful than the
super call found in single-inheritance languages.
Dynamic ordering is necessary because all cases of multiple inheritance exhibit one or
more diamond relationships (where at least one of the parent classes can be accessed
through multiple paths from the bottommost class). For example, all classes inherit
from object, so any case of multiple inheritance provides more than one path to
reach object. To keep the base classes from being accessed more than once, the
dynamic algorithm linearizes the search order in a way that preserves the left-to-right
ordering specified in each class, that calls each parent only once, and that is monotonic
(meaning that a class can be subclassed without affecting the precedence order of its
parents). Taken together, these properties make it possible to design reliable and
extensible classes with multiple inheritance. For more detail,
see https://www.python.org/download/releases/2.3/mro/.
Since there is a valid use-case for class-private members (namely to avoid name
clashes of names with names defined by subclasses), there is limited support for such a
mechanism, called name mangling. Any identifier of the form __spam (at least two
leading underscores, at most one trailing underscore) is textually replaced
with _classname__spam, where classname is the current class name with leading
underscore(s) stripped. This mangling is done without regard to the syntactic position of
the identifier, as long as it occurs within the definition of a class.
Name mangling is helpful for letting subclasses override methods without breaking
intraclass method calls. For example:
class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of original update() method

class MappingSubclass(Mapping):

    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)
Note that the mangling rules are designed mostly to avoid accidents; it still is possible to
access or modify a variable that is considered private. This can even be useful in
special circumstances, such as in the debugger.
Notice that code passed to exec() or eval() does not consider the classname of the
invoking class to be the current class; this is similar to the effect of
the global statement, the effect of which is likewise restricted to code that is byte-
compiled together. The same restriction applies
to getattr(), setattr() and delattr(), as well as when
referencing __dict__ directly.
class Employee:
    pass
Instance method objects have attributes, too: m.__self__ is the instance object with
the method m(), and m.__func__ is the function object corresponding to the method.
9.8. Iterators
By now you have probably noticed that most container objects can be looped over using
a for statement:
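For example (a sketch using built-in containers):

```python
for element in [1, 2, 3]:
    print(element)
for key in {'one': 1, 'two': 2}:
    print(key)
for char in "123":
    print(char)
```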
This style of access is clear, concise, and convenient. The use of iterators pervades and
unifies Python. Behind the scenes, the for statement calls iter() on the container
object. The function returns an iterator object that defines the
method __next__() which accesses elements in the container one at a time. When
there are no more elements, __next__() raises a StopIteration exception which
tells the for loop to terminate. You can call the __next__() method using
the next() built-in function; this example shows how it all works:
>>> s = 'abc'
>>> it = iter(s)
>>> it
<str_iterator object at 0x00A1DB50>
>>> next(it)
'a'
>>> next(it)
'b'
>>> next(it)
'c'
>>> next(it)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
next(it)
StopIteration
Having seen the mechanics behind the iterator protocol, it is easy to add iterator
behavior to your classes. Define an __iter__() method which returns an object with
a __next__() method. If the class defines __next__(), then __iter__() can just
return self:
class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]
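Instances of such a class can then be iterated with for; a sketch (the class is
repeated here so the example runs on its own):

```python
class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]

for char in Reverse('spam'):
    print(char)    # prints m, a, p, s on separate lines
```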
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]
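The generator can be consumed like any other iterator; a sketch (the definition is
repeated so the example runs on its own):

```python
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]

print(list(reverse('golf')))   # -> ['f', 'l', 'o', 'g']
```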
Anything that can be done with generators can also be done with class-based iterators
as described in the previous section. What makes generators so compact is that
the __iter__() and __next__() methods are created automatically.
Another key feature is that the local variables and execution state are automatically
saved between calls. This made the function easier to write and much more clear than
an approach using instance variables like self.index and self.data.
In addition to automatic method creation and saving program state, when generators
terminate, they automatically raise StopIteration. In combination, these features
make it easy to create iterators with no more effort than writing a regular function.
Examples:
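A couple of generator-expression sketches:

```python
print(sum(i*i for i in range(10)))           # sum of squares -> 285

xvec = [10, 20, 30]
yvec = [7, 5, 3]
print(sum(x*y for x, y in zip(xvec, yvec)))  # dot product -> 260
```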
Footnotes
[1] Except for one thing. Module objects have a secret read-only attribute
called __dict__ which returns the dictionary used to implement the module’s
namespace; the name __dict__ is an attribute but not a global name.
Obviously, using this violates the abstraction of namespace implementation, and
should be restricted to things like post-mortem debuggers.
>>> import os
>>> os.getcwd()      # Return the current working directory
'C:\\Python38'
>>> os.chdir('/server/accesslogs')   # Change current working directory
>>> os.system('mkdir today')   # Run the command mkdir in the system shell
0
Be sure to use the import os style instead of from os import *. This will
keep os.open() from shadowing the built-in open() function which operates much
differently.
The built-in dir() and help() functions are useful as interactive aids for working with
large modules like os:
>>> import os
>>> dir(os)
<returns a list of all module functions>
>>> help(os)
<returns an extensive manual page created from the module's docstrings>
For daily file and directory management tasks, the shutil module provides a higher
level interface that is easier to use:
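For example (the file names here are made up for this sketch):

```python
import shutil

# Create a small file, then copy it with shutil
with open('data.db', 'w') as f:
    f.write('records')

shutil.copyfile('data.db', 'archive.db')   # returns the destination path
print(open('archive.db').read())           # -> records
```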
import argparse
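A sketch of typical argparse usage (the option names and file names are
hypothetical; parse_args is given an explicit list here instead of reading
sys.argv):

```python
import argparse

parser = argparse.ArgumentParser(description='Show top lines from each file')
parser.add_argument('filenames', nargs='+')
parser.add_argument('-l', '--lines', type=int, default=10)

args = parser.parse_args(['--lines', '5', 'alpha.txt', 'beta.txt'])
print(args.lines, args.filenames)   # -> 5 ['alpha.txt', 'beta.txt']
```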
>>> import re
>>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest')
['foot', 'fell', 'fastest']
>>> re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat')
'cat in the hat'
When only simple capabilities are needed, string methods are preferred because they
are easier to read and debug:
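For example:

```python
print('tea for too'.replace('too', 'two'))   # -> tea for two
```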
10.6. Mathematics
The math module gives access to the underlying C library functions for floating point
math:
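A short sketch of those C-library functions:

```python
import math

print(math.cos(math.pi / 4))   # cosine of 45 degrees
print(math.log(1024, 2))       # logarithm base 2 -> 10.0
```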
>>> import random
>>> random.choice(['apple', 'pear', 'banana'])
'apple'
>>> random.sample(range(100), 10) # sampling without replacement
[30, 83, 16, 4, 8, 81, 41, 50, 18, 33]
>>> random.random() # random float
0.17970987693706186
>>> random.randrange(6) # random integer chosen from range(6)
4
The statistics module calculates basic statistical properties (the mean, median,
variance, etc.) of numeric data:
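A sketch of the basic statistics functions (the data values are illustrative):

```python
import statistics

data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]
print(statistics.mean(data))      # arithmetic mean
print(statistics.median(data))    # middle value -> 1.25
print(statistics.variance(data))  # sample variance
```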
The SciPy project <https://scipy.org> has many other modules for numerical
computations.
10.10. Performance Measurement
Some Python users develop a deep interest in knowing the relative performance
of different approaches to the same problem. Python provides a measurement tool
that answers those questions immediately.
For example, it may be tempting to use the tuple packing and unpacking feature
instead of the traditional approach to swapping arguments. The timeit module
quickly demonstrates a modest performance advantage:
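A sketch of the comparison described above; absolute timings vary by machine, so no particular numbers should be expected:

```python
from timeit import Timer

# Swap two variables with a temporary versus tuple packing/unpacking.
t_classic = Timer('t=a; a=b; b=t', 'a=1; b=2').timeit()
t_tuple = Timer('a,b = b,a', 'a=1; b=2').timeit()
print(t_classic, t_tuple)
```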
In contrast to timeit’s fine level of granularity, the profile and pstats modules
provide tools for identifying time critical sections in larger blocks of code.
10.11. Quality Control
One approach for developing high quality software is to write tests for each function as it
is developed and to run those tests frequently during the development process.
The doctest module provides a tool for scanning a module and validating tests
embedded in a program’s docstrings. Test construction is as simple as cutting-and-
pasting a typical call along with its results into the docstring. This improves the
documentation by providing the user with an example and it allows the doctest module
to make sure the code remains true to the documentation:
def average(values):
    """Computes the arithmetic mean of a list of numbers.

    >>> print(average([20, 30, 70]))
    40.0
    """
    return sum(values) / len(values)

import doctest
doctest.testmod()   # automatically validate the embedded tests
The unittest module is not as effortless as the doctest module, but it allows a
more comprehensive set of tests to be maintained in a separate file:
import unittest

class TestStatisticalFunctions(unittest.TestCase):

    def test_average(self):
        self.assertEqual(average([20, 30, 70]), 40.0)
        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)
        with self.assertRaises(ZeroDivisionError):
            average([])
        with self.assertRaises(TypeError):
            average(20, 30, 70)

unittest.main()  # Calling from the command line invokes all tests
The pprint module offers more sophisticated control over printing both built-in and
user defined objects in a way that is readable by the interpreter. When the result is
longer than one line, the “pretty printer” adds line breaks and indentation to more clearly
reveal data structure:
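A sketch of the pretty printer on a nested structure (the data is illustrative):

```python
import pprint

t = [[[['black', 'cyan'], 'white', ['green', 'red']],
      [['magenta', 'yellow'], 'blue']]]
# A narrow width forces line breaks that reveal the nesting.
pprint.pprint(t, width=30)
```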
The textwrap module formats paragraphs of text to fit a given screen width:
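A sketch of textwrap.fill() (the sample paragraph is illustrative):

```python
import textwrap

doc = """The wrap() method is just like fill() except that it returns
a list of strings instead of one big string with newlines to separate
the wrapped lines."""
# Reflow the paragraph so no output line exceeds 40 characters.
print(textwrap.fill(doc, width=40))
```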
11.2. Templating
The string module includes a versatile Template class with a simplified syntax
suitable for editing by end-users. This allows users to customize their applications
without having to alter the application.
The format uses placeholder names formed by $ with valid Python identifiers
(alphanumeric characters and underscores). Surrounding the placeholder with braces
allows it to be followed by more alphanumeric letters with no intervening spaces.
Writing $$ creates a single escaped $:
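The placeholder rules above can be sketched as:

```python
from string import Template

# $cause and ${village} are placeholders; $$ escapes a literal dollar sign.
t = Template('${village}folk send $$10 to $cause.')
msg = t.substitute(village='Nottingham', cause='the ditch fund')
print(msg)   # Nottinghamfolk send $10 to the ditch fund.
```

Note that substitute() raises KeyError when a placeholder is not supplied; safe_substitute() leaves the placeholder unchanged instead.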
Template subclasses can specify a custom delimiter. For example, a batch renaming
utility for a photo browser may elect to use percent signs for placeholders such as the
current date, image sequence number, or file format:
>>> import time, os.path
>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']
>>> class BatchRename(Template):
...     delimiter = '%'
>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format):  ')
Enter rename style (%d-date %n-seqnum %f-format):  Ashley_%n%f

>>> t = BatchRename(fmt)
>>> date = time.strftime('%d%b%y')
>>> for i, filename in enumerate(photofiles):
...     base, ext = os.path.splitext(filename)
...     newname = t.substitute(d=date, n=i, f=ext)
...     print('{0} --> {1}'.format(filename, newname))

img_1074.jpg --> Ashley_0.jpg
img_1076.jpg --> Ashley_1.jpg
img_1077.jpg --> Ashley_2.jpg
Another application for templating is separating program logic from the details of
multiple output formats. This makes it possible to substitute custom templates for XML
files, plain text reports, and HTML web reports.
11.3. Working with Binary Data Record Layouts
The struct module provides pack() and unpack() functions for working with
variable length binary record formats. The following example shows how to loop
through header information in a ZIP file without using the zipfile module. Pack
codes "H" and "I" represent two and four byte unsigned numbers respectively.
The "<" indicates that they are standard size and in little-endian byte order:
import struct

with open('myfile.zip', 'rb') as f:
    data = f.read()

start = 0
for i in range(3):                      # show the first 3 file headers
    start += 14
    fields = struct.unpack('<IIIHH', data[start:start+16])
    crc32, comp_size, uncomp_size, filenamesize, extra_size = fields

    start += 16
    filename = data[start:start+filenamesize]
    start += filenamesize
    extra = data[start:start+extra_size]
    print(filename, hex(crc32), comp_size, uncomp_size)
11.4. Multi-threading
Threading is a technique for decoupling tasks which are not sequentially dependent.
Threads can be used to improve the responsiveness of applications that accept user
input while other tasks run in the background. A related use case is running I/O in
parallel with computations in another thread.
The following code shows how the high level threading module can run tasks in
the background while the main program continues to run:
import threading, zipfile

class AsyncZip(threading.Thread):
    def __init__(self, infile, outfile):
        threading.Thread.__init__(self)
        self.infile = infile
        self.outfile = outfile

    def run(self):
        f = zipfile.ZipFile(self.outfile, 'w', zipfile.ZIP_DEFLATED)
        f.write(self.infile)
        f.close()
        print('Finished background zip of:', self.infile)

background = AsyncZip('mydata.txt', 'myarchive.zip')
background.start()
print('The main program continues to run in foreground.')
background.join()    # Wait for the background task to finish
print('Main program waited until background was done.')
While those tools are powerful, minor design errors can result in problems that are
difficult to reproduce. So, the preferred approach to task coordination is to concentrate
all access to a resource in a single thread and then use the queue module to feed that
thread with requests from other threads. Applications using Queue objects for inter-
thread communication and coordination are easier to design, more readable, and more
reliable.
11.5. Logging
The logging module offers a full featured and flexible logging system. At its simplest,
log messages are sent to a file or to sys.stderr:
import logging
logging.debug('Debugging information')
logging.info('Informational message')
logging.warning('Warning:config file %s not found', 'server.conf')
logging.error('Error occurred')
logging.critical('Critical error -- shutting down')
The logging system can be configured directly from Python or can be loaded from a
user editable configuration file for customized logging without altering the application.
11.6. Weak References
Python does automatic memory management (reference counting for most objects
and garbage collection to eliminate cycles). The memory is freed shortly after
the last reference to it has been eliminated.
This approach works fine for most applications but occasionally there is a need
to track objects only as long as they are being used by something else.
Unfortunately, just tracking them creates a reference that makes them
permanent. The weakref module provides tools for tracking objects without
creating a reference. When the object is no longer needed, it is automatically
removed from a weakref table and a callback is triggered for weakref objects.
Typical applications include caching objects that are expensive to create:
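A sketch of a weak-value cache; note that the prompt removal of the entry relies on CPython's reference counting:

```python
import gc
import weakref

class A:
    def __init__(self, value):
        self.value = value
    def __repr__(self):
        return str(self.value)

a = A(10)                         # create a reference
d = weakref.WeakValueDictionary()
d['primary'] = a                  # does not create a reference
print(d['primary'])               # fetch the object if it is still alive
del a                             # remove the one reference
gc.collect()                      # run garbage collection right away
print('primary' in d)             # False: entry was removed automatically
```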
11.7. Tools for Working with Lists
Many data structure needs can be met with the built-in list type. However,
sometimes there is a need for alternative implementations with different
performance trade-offs.
The array module provides an array() object that is like a list that stores
only homogeneous data and stores it more compactly. The following example shows
an array of numbers stored as two byte unsigned binary numbers (typecode "H")
rather than the usual 16 bytes per entry for regular lists of Python int
objects:
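The compact storage described above can be sketched as:

```python
from array import array

a = array('H', [4000, 10, 700, 22222])   # 'H': two-byte unsigned ints
print(sum(a))     # 26932
print(a[1:3])     # slicing returns another array
```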
The collections module provides a deque() object that is like a list with faster
appends and pops from the left side but slower lookups in the middle. These objects are
well suited for implementing queues and breadth first tree searches:
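A simple queue sketched with deque:

```python
from collections import deque

d = deque(["task1", "task2", "task3"])
d.append("task4")                # enqueue on the right
print("Handling", d.popleft())   # dequeue from the left: task1
```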
The heapq module provides functions for implementing heaps based on regular lists.
The lowest valued entry is always kept at position zero. This is useful for applications
which repeatedly access the smallest element but do not want to run a full list sort:
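A sketch of heap usage, with the smallest entry always at position zero:

```python
from heapq import heapify, heappop, heappush

data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
heapify(data)                    # rearrange the list into heap order
heappush(data, -5)               # add a new entry
smallest = [heappop(data) for i in range(3)]
print(smallest)                  # [-5, 0, 1]
```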
11.8. Decimal Floating Point Arithmetic
The decimal module offers a Decimal datatype for decimal floating point
arithmetic. Compared to the built-in float implementation of binary floating
point, the class is especially helpful for:
- financial applications and other uses which require exact decimal
  representation,
- control over precision,
- control over rounding to meet legal or regulatory requirements,
- tracking of significant decimal places, or
- applications where the user expects the results to match calculations done
  by hand.
For example, calculating a 5% tax on a 70 cent phone charge gives different results in
decimal floating point and binary floating point. The difference becomes significant if the
results are rounded to the nearest cent:
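The tax calculation can be sketched as:

```python
from decimal import Decimal

# 5% tax on a 70 cent phone charge, rounded to the nearest cent.
print(round(Decimal('0.70') * Decimal('1.05'), 2))   # 0.74
print(round(.70 * 1.05, 2))                          # 0.73
```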
The Decimal result keeps a trailing zero, automatically inferring four place significance
from multiplicands with two place significance. Decimal reproduces mathematics as
done by hand and avoids issues that can arise when binary floating point cannot exactly
represent decimal quantities.
Exact representation enables the Decimal class to perform modulo calculations and
equality tests that are unsuitable for binary floating point:
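These modulo calculations and equality tests can be sketched as:

```python
from decimal import Decimal

print(Decimal('1.00') % Decimal('.10'))               # 0.00 exactly
print(1.00 % 0.10)                                    # slightly below 0.1
print(sum([Decimal('0.1')] * 10) == Decimal('1.0'))   # True
print(sum([0.1] * 10) == 1.0)                         # False
```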
The decimal module provides arithmetic with as much precision as needed:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 36
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857')
Different applications can then use different virtual environments. To resolve the earlier
example of conflicting requirements, application A can have its own virtual environment
with version 1.0 installed while application B has another virtual environment with
version 2.0. If application B requires a library be upgraded to version 3.0, this will not
affect application A’s environment.
To create a virtual environment, decide upon a directory where you want to
place it, and run the venv module as a script with the directory path:
python3 -m venv tutorial-env
This will create the tutorial-env directory if it doesn’t exist, and also create
directories inside it containing a copy of the Python interpreter, the standard library, and
various supporting files.
A common directory location for a virtual environment is .venv. This name keeps the
directory typically hidden in your shell and thus out of the way while giving it a name that
explains why the directory exists. It also prevents clashing with .env environment
variable definition files that some tooling supports.
On Windows, run:
tutorial-env\Scripts\activate.bat
On Unix or MacOS, run:
source tutorial-env/bin/activate
(This script is written for the bash shell. If you use the csh or fish shells,
there are alternate activate.csh and activate.fish scripts you should use
instead.)
Activating the virtual environment will change your shell’s prompt to show what virtual
environment you’re using, and modify the environment so that running python will get
you that particular version and installation of Python. For example:
$ source ~/envs/tutorial-env/bin/activate
(tutorial-env) $ python
Python 3.5.1 (default, May 6 2016, 10:59:36)
...
>>> import sys
>>> sys.path
['', '/usr/local/lib/python35.zip', ...,
'~/envs/tutorial-env/lib/python3.5/site-packages']
>>>
You can install the latest version of a package by specifying a package's name:
(tutorial-env) $ pip install novas
You can also install a specific version of a package by giving the package name
followed by == and the version number:
(tutorial-env) $ pip install requests==2.6.0
If you re-run this command, pip will notice that the requested version is already
installed and do nothing. You can supply a different version number to get that version,
or you can run pip install --upgrade to upgrade the package to the latest version:
pip uninstall followed by one or more package names will remove the packages
from the virtual environment.
pip list will display all of the packages installed in the virtual environment:
pip freeze will produce a similar list of the installed packages, but the output uses the
format that pip install expects. A common convention is to put this list in
a requirements.txt file:
The requirements.txt can then be committed to version control and shipped as
part of an application. Users can then install all the necessary packages
with install -r:
(tutorial-env) $ pip install -r requirements.txt
pip has many more options. Consult the Installing Python Modules guide for complete
documentation for pip. When you’ve written a package and want to make it available
on the Python Package Index, consult the Distributing Python Modules guide.
This tutorial is part of Python’s documentation set. Some other documents in the set
are:
The Python Standard Library: You should browse through this manual, which gives
complete (though terse) reference material about types, functions, and the
modules in the standard library.
For Python-related questions and problem reports, you can post to the
newsgroup comp.lang.python, or send them to the mailing list at
python-list@python.org. The newsgroup and mailing list are gatewayed, so messages posted to
one will automatically be forwarded to the other. There are hundreds of postings a day,
asking (and answering) questions, suggesting new features, and announcing new
modules. Mailing list archives are available at https://mail.python.org/pipermail/.
Before posting, be sure to check the list of Frequently Asked Questions (also called the
FAQ). The FAQ answers many of the questions that come up again and again, and may
already contain the solution for your problem.
One alternative enhanced interactive interpreter that has been around for quite some
time is IPython, which features tab completion, object exploration and advanced history
management. It can also be thoroughly customized and embedded into other
applications. Another similar enhanced interactive environment is bpython.
15. Floating Point Arithmetic: Issues and Limitations
Floating-point numbers are represented in computer hardware as base 2 (binary)
fractions. For example, the decimal fraction
0.125
has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction
0.001
has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real
difference being that the first is written in base 10 fractional notation, and the second in
base 2.
The problem is easier to understand at first in base 10. Consider the fraction 1/3. You
can approximate that as a base 10 fraction:
0.3
or, better,
0.33
or, better,
0.333
and so on. No matter how many digits you’re willing to write down, the result will never
be exactly 1/3, but will be an increasingly better approximation of 1/3.
In the same way, no matter how many base 2 digits you’re willing to use, the decimal
value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1/10 is the
infinitely repeating fraction
0.0001100110011001100110011001100110011001100110011...
Stop at any finite number of bits, and you get an approximation. On most machines
today, floats are approximated using a binary fraction with the numerator using the first
53 bits starting with the most significant bit and with the denominator as a power of two.
In the case of 1/10, the binary fraction is 3602879701896397 / 2 ** 55 which is
close to but not exactly equal to the true value of 1/10.
Many users are not aware of the approximation because of the way values are
displayed. Python only prints a decimal approximation to the true decimal value of the
binary approximation stored by the machine. On most machines, if Python were to print
the true decimal value of the binary approximation stored for 0.1, it would have to
display
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits
manageable by displaying a rounded value instead
>>> 1 / 10
0.1
Just remember, even though the printed result looks like the exact value of 1/10, the
actual stored value is the nearest representable binary fraction.
Interestingly, there are many different decimal numbers that share the same nearest
approximate binary fraction. For example, the numbers 0.1
and 0.10000000000000001
and 0.1000000000000000055511151231257827021181583404541015625 are all
approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values
share the same
approximation, any one of them could be displayed while still preserving the
invariant eval(repr(x)) == x.
Historically, the Python prompt and built-in repr() function would choose the one with
17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most
systems) is now able to choose the shortest of these and simply display 0.1.
Note that this is in the very nature of binary floating-point: this is not a bug in Python,
and it is not a bug in your code either. You’ll see the same kind of thing in all languages
that support your hardware’s floating-point arithmetic (although some languages may
not display the difference by default, or in all output modes).
For more pleasant output, you may wish to use string formatting to produce a limited
number of significant digits:
>>> import math
>>> format(math.pi, '.12g')  # give 12 significant digits
'3.14159265359'
>>> format(math.pi, '.2f')   # give 2 digits after the point
'3.14'
>>> repr(math.pi)
'3.141592653589793'
It’s important to realize that this is, in a real sense, an illusion: you’re simply rounding
the display of the true machine value.
One illusion may beget another. For example, since 0.1 is not exactly 1/10, summing
three values of 0.1 may not yield exactly 0.3, either:
>>> .1 + .1 + .1 == .3
False
Also, since 0.1 cannot get any closer to the exact value of 1/10 and 0.3 cannot
get any closer to the exact value of 3/10, pre-rounding with the round()
function cannot help:
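The point can be sketched directly:

```python
# Each round(.1, 1) still returns the same binary approximation of 0.1,
# so pre-rounding the addends changes nothing.
print(round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1))   # False
```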
Though the numbers cannot be made closer to their intended exact values,
the round() function can be useful for post-rounding so that results with inexact values
become comparable to one another:
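Post-rounding can be sketched as:

```python
# Rounding the final results makes values with inexact intermediate
# representations comparable to one another.
print(round(.1 + .1 + .1, 10) == round(.3, 10))   # True
```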
Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is
explained in precise detail below, in the “Representation Error” section. See The Perils
of Floating Point for a more complete account of other common surprises.
As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of
floating-point! The errors in Python float operations are inherited from the floating-point
hardware, and on most machines are on the order of no more than 1 part in 2**53 per
operation. That’s more than adequate for most tasks, but you do need to keep in mind
that it’s not decimal arithmetic and that every float operation can suffer a new rounding
error.
While pathological cases do exist, for most casual use of floating-point arithmetic you’ll
see the result you expect in the end if you simply round the display of your final results
to the number of decimal digits you expect. str() usually suffices, and for finer control
see the str.format() method’s format specifiers in Format String Syntax.
For use cases which require exact decimal representation, try using
the decimal module which implements decimal arithmetic suitable for accounting
applications and high-precision applications.
If you are a heavy user of floating point operations you should take a look at the
Numerical Python package and many other packages for mathematical and statistical
operations supplied by the SciPy project. See <https://scipy.org>.
Python provides tools that may help on those rare occasions when you really do want to
know the exact value of a float. The float.as_integer_ratio() method expresses
the value of a float as a fraction:
>>> x = 3.14159
>>> x.as_integer_ratio()
(3537115888337719, 1125899906842624)
Since the ratio is exact, it can be used to losslessly recreate the original value:
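The lossless round trip can be sketched as:

```python
x = 3.14159
num, den = x.as_integer_ratio()
print(x == num / den)   # True: the ratio is exact
```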
The float.hex() method expresses a float in hexadecimal (base 16), again giving the
exact value stored by your computer:
>>> x.hex()
'0x1.921f9f01b866ep+1'
This precise hexadecimal representation can be used to reconstruct the float value
exactly:
>>> x == float.fromhex('0x1.921f9f01b866ep+1')
True
Since the representation is exact, it is useful for reliably porting values across different
versions of Python (platform independence) and exchanging data with other languages
that support the same format (such as Java and C99).
Another helpful tool is the math.fsum() function which helps mitigate loss-of-precision
during summation. It tracks “lost digits” as values are added onto a running total. That
can make a difference in overall accuracy so that the errors do not accumulate to the
point where they affect the final total:
>>> sum([0.1] * 10) == 1.0
False
>>> math.fsum([0.1] * 10) == 1.0
True
15.1. Representation Error
Representation error refers to the fact that some (most, actually) decimal fractions
cannot be represented exactly as binary (base 2) fractions. This is the chief reason why
Python (or Perl, C, C++, Java, Fortran, and many others) often won’t display the exact
decimal number you expect.
Why is that? 1/10 is not exactly representable as a binary fraction. Almost all machines
today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms
map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of
precision, so on input the computer strives to convert 0.1 to the closest fraction it can of
the form J/2**N where J is an integer containing exactly 53 bits. Rewriting
1 / 10 ~= J / (2**N)
as
J ~= 2**N / 10
and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value
for N is 56:
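The claim can be checked directly:

```python
# With N == 56, J = 2**56 // 10 lies in the 53-bit range [2**52, 2**53).
print(2 ** 52 <= 2 ** 56 // 10 < 2 ** 53)   # True
```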
That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible
value for J is then that quotient rounded:
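The quotient and remainder can be computed as:

```python
q, r = divmod(2 ** 56, 10)
print(q)   # 7205759403792793
print(r)   # 6: more than half of 10, so round up
```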
Since the remainder is more than half of 10, the best approximation is obtained by
rounding up:
>>> q+1
7205759403792794
Therefore the best possible approximation to 1/10 in 754 double precision is:
7205759403792794 / 2 ** 56
Dividing both the numerator and denominator by two reduces the fraction to:
3602879701896397 / 2 ** 55
Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not
rounded up, the quotient would have been a little bit smaller than 1/10. But in no case
can it be exactly 1/10!
So the computer never “sees” 1/10: what it sees is the exact fraction given above, the
best 754 double approximation it can get:
>>> 0.1 * 2 ** 55
3602879701896397.0
If we multiply that fraction by 10**55, we can see the value out to 55 decimal digits:
>>> 3602879701896397 * 10 ** 55 // 2 ** 55
1000000000000000055511151231257827021181583404541015625
meaning that the exact number stored in the computer is equal to the decimal value
0.1000000000000000055511151231257827021181583404541015625. Instead of
displaying the full decimal value, many languages (including older versions of Python),
round the result to 17 significant digits:
>>> format(0.1, '.17f')
'0.10000000000000001'
The fractions and decimal modules make these calculations easy:
>>> from fractions import Fraction
>>> from decimal import Decimal
>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
16. Appendix
16.1. Interactive Mode
16.1.1. Error Handling
When an error occurs, the interpreter prints an error message and a stack trace. In
interactive mode, it then returns to the primary prompt; when input came from a file, it
exits with a nonzero exit status after printing the stack trace. (Exceptions handled by
an except clause in a try statement are not errors in this context.) Some errors are
unconditionally fatal and cause an exit with a nonzero exit status; this applies to internal
inconsistencies and some cases of running out of memory. All error messages are
written to the standard error stream; normal output from executed commands is written
to standard output.
Typing the interrupt character (usually Control-C or Delete) to the primary or secondary
prompt cancels the input and returns to the primary prompt. 1 Typing an interrupt while
a command is executing raises the KeyboardInterrupt exception, which may be
handled by a try statement.
16.1.2. Executable Python Scripts
On Unix systems, Python scripts can be made directly executable, like shell
scripts, by putting the line
#!/usr/bin/env python3.5
(assuming that the interpreter is on the user's PATH) at the beginning of the
script and giving the file an executable mode. The #! must be the first two
characters of the file.
On some platforms, this first line must end with a Unix-style line ending ( '\n'), not a
Windows ('\r\n') line ending. Note that the hash, or pound, character, '#', is used to
start a comment in Python.
The script can be given an executable mode, or permission, using
the chmod command:
$ chmod +x myscript.py
16.1.3. The Interactive Startup File
When you use Python interactively, it is frequently handy to have some standard
commands executed every time the interpreter is started. You can do this by
setting an environment variable named PYTHONSTARTUP to the name of a file
containing your start-up commands. This is similar to the .profile feature of
the Unix shells.
This file is only read in interactive sessions, not when Python reads commands from a
script, and not when /dev/tty is given as the explicit source of commands (which
otherwise behaves like an interactive session). It is executed in the same namespace
where interactive commands are executed, so that objects that it defines or imports can
be used without qualification in the interactive session. You can also change the
prompts sys.ps1 and sys.ps2 in this file.
If you want to read an additional start-up file from the current directory, you can program
this in the global start-up file using code
like if os.path.isfile('.pythonrc.py'): exec(open('.pythonrc.py').rea
d()). If you want to use the startup file in a script, you must do this explicitly in the
script:
import os
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
with open(filename) as fobj:
startup_file = fobj.read()
exec(startup_file)
16.1.4. The Customization Modules
Python provides two hooks to let you customize it: sitecustomize
and usercustomize. To see how it works, you need first to find the location of
your user site-packages directory. Start Python and run this code:
>>> import site
>>> site.getusersitepackages()
Now you can create a file named usercustomize.py in that directory and put anything
you want in it. It will affect every invocation of Python, unless it is started with the -
s option to disable the automatic import.
Footnotes
1. A problem with the GNU Readline package may prevent this.