Code Optimization

Here we will look briefly at how to time and profile your code, and then at approaches to making your code run faster. There is a sequence of mini-goals that applies to nearly every programming problem:

  1. Make it run
  2. Make it right
  3. Make it fast

Note that the list does not start with Make it fast. Testing, debugging and optimization are the strategies and practices used to achieve those goals. Only optimization is covered in these notes; pointers to resources for testing and debugging are provided but not discussed in detail.

Testing code

In [4]:
%%file distance.py

import numpy as np

def euclidean_dist(u, v):
    """Returns Euclidean distance betwen numpy vectors u and v."""
    w = u - v
    return np.sqrt(np.sum(w**2))
Writing distance.py
In [10]:
%%file test_distance.py
import numpy as np
from numpy.testing import assert_almost_equal
from distance import euclidean_dist

def test_non_negativity():
    for i in range(10):
        u = np.random.normal(size=3)
        v = np.random.normal(size=3)
        assert euclidean_dist(u, v) >= 0

def test_coincidence_when_zero():
    u = np.zeros(3)
    v = np.zeros(3)
    assert euclidean_dist(u, v) == 0

def test_coincidence_when_not_zero():
    for i in range(10):
        u = np.random.random(3)
        v = np.zeros(3)
        assert euclidean_dist(u, v) != 0

def test_symmetry():
    for i in range(10):
        u = np.random.random(3)
        v = np.random.random(3)
        assert euclidean_dist(u, v) == euclidean_dist(v, u)

def test_triangle():
    u = np.random.random(3)
    v = np.random.random(3)
    w = np.random.random(3)
    assert euclidean_dist(u, w) <= euclidean_dist(u, v) + euclidean_dist(v, w)

def test_known1():
    u = np.array([0])
    v = np.array([3])
    assert_almost_equal(euclidean_dist(u, v), 3)

def test_known2():
    u = np.array([0,0])
    v = np.array([3, 4])
    assert_almost_equal(euclidean_dist(u, v), 5)

def test_known3():
    u = np.array([0,0])
    v = np.array([-3, -4])
    assert_almost_equal(euclidean_dist(u, v), 5)
Overwriting test_distance.py
In [11]:
! py.test
============================= test session starts ==============================
platform darwin -- Python 2.7.11, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /Users/cliburn/git/sta-663-2016/lectures, inifile:
collected 8 items

test_distance.py ........

=========================== 8 passed in 0.68 seconds ===========================

Debugging

Tools within Jupyter from the official tutorial

After an exception occurs, you can call %debug to jump into the Python debugger (pdb) and examine the problem. Alternatively, if you call %pdb, IPython will automatically start the debugger on any uncaught exception. You can print variables, see code, execute statements and even walk up and down the call stack to track down the true source of the problem. This can be an efficient way to develop and debug code, in many cases eliminating the need for print statements or external debugging tools.

You can also step through a program from the beginning by calling %run -d theprogram.py.
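
For example, after a cell raises an exception you can open the debugger on the most recent traceback (a minimal sketch; the buggy function and values here are made up purely for illustration):

def divide(a, b):
    return a / b

divide(1, 0)   # raises ZeroDivisionError

# In the next cell, drop into pdb at the point of the failure
%debug
# Useful pdb commands: p <name> to print a variable, l to list source,
# u/d to move up/down the call stack, q to quit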

Timing and profiling code

Install profiling tools:

pip install --pre line-profiler
pip install psutil
pip install memory_profiler

References:

  1. http://scipy-lectures.github.com/advanced/optimizing/index.html
  2. http://pynash.org/2013/03/06/timing-and-profiling.html

Timing code

  • 1 s = 1000 ms
  • 1 ms = 1000 µs
  • 1 µs = 1000 ns

Simple approach

In [1]:
import time
import timeit

def f(nsec=1.0):
    """Function sleeps for nsec seconds."""
    time.sleep(nsec)

start = timeit.default_timer()
f()
elapsed = timeit.default_timer() - start
elapsed
Out[1]:
1.0024928400525823

We can make a decorator for convenience

In [2]:
def process_time(f):
    """Decorator that prints the elapsed time of each call to f."""
    def func(*args, **kwargs):
        start = timeit.default_timer()
        result = f(*args, **kwargs)
        print(timeit.default_timer() - start)
        return result
    return func
In [3]:
@process_time
def f1(nsec=1.0):
    """Function sleeps for nsec seconds."""
    time.sleep(nsec)
In [4]:
f1()
1.000329414033331

Within the Jupyter notebook, use the %timeit magic function

In [5]:
%timeit f(0.01)
100 loops, best of 3: 11.2 ms per loop
In [6]:
%timeit -n10 f(0.01)
10 loops, best of 3: 11.3 ms per loop
In [7]:
%timeit -r10 f(0.01)
100 loops, best of 10: 11.2 ms per loop
In [8]:
%timeit -n10 -r3 f(0.01)
10 loops, best of 3: 11.4 ms per loop

Profiling code

Profiling can be done in a notebook with %prun, which reports the following columns:

  • ncalls: the number of calls
  • tottime: the total time spent in the given function, excluding time spent in calls to sub-functions
  • percall: the quotient of tottime divided by ncalls
  • cumtime: the cumulative time spent in this function and all sub-functions, from invocation till exit; this figure is accurate even for recursive functions
  • percall: the quotient of cumtime divided by the number of primitive calls
  • filename:lineno(function): the location of each function
In [9]:
def foo1(n):
    return sum(i**2 for i in range(n))

def foo2(n):
    return sum(i*i for i in range(n))

def foo3(n):
    [foo1(n) for i in range(10)]
    foo2(n)

def bar(n):
    return sum(i**3 for i in range(n))

def work(n):
    foo1(n)
    foo2(n)
    foo3(n)
    bar(n)
In [10]:
%prun -q -D work.prof work(int(1e6))

*** Profile stats marshalled to file 'work.prof'.
In [11]:
import pstats
p = pstats.Stats('work.prof')
p.print_stats()
pass
Mon Mar  7 11:18:30 2016    work.prof

         14000048 function calls in 10.525 seconds

   Random listing order was used

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        2    0.000    0.000    0.699    0.350 <ipython-input-9-48ed291f45d5>:4(foo2)
       14    2.233    0.159   10.525    0.752 {built-in method builtins.sum}
        1    0.000    0.000    1.008    1.008 <ipython-input-9-48ed291f45d5>:11(bar)
        1    0.000    0.000   10.525   10.525 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
  1000001    0.803    0.000    0.803    0.000 <ipython-input-9-48ed291f45d5>:12(<genexpr>)
       11    0.000    0.000    8.818    0.802 <ipython-input-9-48ed291f45d5>:1(foo1)
        1    0.000    0.000   10.525   10.525 <ipython-input-9-48ed291f45d5>:14(work)
        1    0.000    0.000    8.362    8.362 <ipython-input-9-48ed291f45d5>:7(foo3)
        1    0.000    0.000    8.011    8.011 <ipython-input-9-48ed291f45d5>:8(<listcomp>)
        1    0.000    0.000   10.525   10.525 {built-in method builtins.exec}
  2000002    0.410    0.000    0.410    0.000 <ipython-input-9-48ed291f45d5>:5(<genexpr>)
 11000011    7.079    0.000    7.079    0.000 <ipython-input-9-48ed291f45d5>:2(<genexpr>)


In [12]:
p.sort_stats('time', 'cumulative').print_stats('foo')
pass
Mon Mar  7 11:18:30 2016    work.prof

         14000048 function calls in 10.525 seconds

   Ordered by: internal time, cumulative time
   List reduced from 13 to 3 due to restriction <'foo'>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       11    0.000    0.000    8.818    0.802 <ipython-input-9-48ed291f45d5>:1(foo1)
        2    0.000    0.000    0.699    0.350 <ipython-input-9-48ed291f45d5>:4(foo2)
        1    0.000    0.000    8.362    8.362 <ipython-input-9-48ed291f45d5>:7(foo3)


In [13]:
p.sort_stats('ncalls').print_stats(5)
pass
Mon Mar  7 11:18:30 2016    work.prof

         14000048 function calls in 10.525 seconds

   Ordered by: call count
   List reduced from 13 to 5 due to restriction <5>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 11000011    7.079    0.000    7.079    0.000 <ipython-input-9-48ed291f45d5>:2(<genexpr>)
  2000002    0.410    0.000    0.410    0.000 <ipython-input-9-48ed291f45d5>:5(<genexpr>)
  1000001    0.803    0.000    0.803    0.000 <ipython-input-9-48ed291f45d5>:12(<genexpr>)
       14    2.233    0.159   10.525    0.752 {built-in method builtins.sum}
       11    0.000    0.000    8.818    0.802 <ipython-input-9-48ed291f45d5>:1(foo1)


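The line-profiler package installed earlier also provides an %lprun magic for line-by-line timing of selected functions (a minimal sketch; the exact output will vary):

%load_ext line_profiler

# Time each line of foo1 while running work(); -f selects the function(s) to profile
%lprun -f foo1 work(int(1e5))
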
Checking memory usage

In [14]:
%load_ext memory_profiler
In [15]:
%%file foo.py

def foo(n):
    phrase = 'repeat me'
    pmul = phrase * n
    pjoi = ''.join([phrase for x in range(n)])
    pinc = ''
    for x in range(n):
        pinc += phrase
    del pmul, pjoi, pinc
Overwriting foo.py
In [16]:
# %mprun requires the code to be in a file;
# functions declared interactively in Python will not work

from foo import foo

%mprun -f foo foo(100000)

In [17]:
# However, %memit does work with interactively defined functions.
# Unlike %mprun, which gives a line-by-line analysis,
# %memit reports only the total amount of memory used.

def gobble(n):
    x = [i*i for i in range(n)]

%memit -r 3 gobble(1000000)
peak memory: 136.37 MiB, increment: 17.44 MiB

Data structures and algorithms

There are many ways to speed up slow code. However, the first thing that should come to mind (after profiling to identify the bottlenecks) is whether there is a more appropriate data structure or algorithm that can be used. The reason is that this is the only approach that makes a difference to the big O complexity, and this makes all the difference for scalability. A few examples are shown here and after the list of strategies below; a large collection of classic data structures and algorithms in Python with detailed explanations is available at Problem Solving with Algorithms and Data Structures

You are highly encouraged to take an algorithms class, where you will discover strategies such as:

  • adaptive methods (e.g. adaptive quadrature, adaptive Runge-Kutta)
  • divide and conquer (e.g. Barnes-Hut, Fast Fourier Transform)
  • tabling and dynamic programming (e.g. Viterbi algorithm for Hidden Markov Models)
  • graph and network algorithms (e.g. shortest path, max flow min cut)
  • hashing (e.g. locality sensitive hashing, Bloom filters)
  • probabilistic algorithms (e.g. randomized projections, Monte Carlo integration)
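
As a small illustration of how an algorithmic change (rather than micro-optimization) alters the big O behaviour, repeated membership tests can go from a linear scan to a binary search once the data are kept sorted. This is a sketch using the standard-library bisect module; the function names are made up for illustration:

from bisect import bisect_left

def contains_linear(items, x):
    """O(n) scan of an unsorted list."""
    return x in items

def contains_sorted(sorted_items, x):
    """O(log n) binary search of a sorted list."""
    i = bisect_left(sorted_items, x)
    return i < len(sorted_items) and sorted_items[i] == x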
In [18]:
import numpy as np

xs = np.random.randint(0, 1000, 100)
ys = np.random.randint(0, 1000, 100)

Using lists

In [19]:
def common1(xs, ys):
    """Using lists."""
    zs = set([])
    for x in xs:
        for y in ys:
            if x==y:
                zs.add(x)
    return zs
In [20]:
%timeit -n3 -r3 common1(xs, ys)
3 loops, best of 3: 1.58 ms per loop

Using sets

In [21]:
%timeit -n3 -r3 set(xs) & set(ys)
3 loops, best of 3: 44.7 µs per loop

Using a list with repeated sorting

In [22]:
alist = list(np.random.randint(1000, 100000, 1000))
blist = alist[:]
entries = np.random.randint(1, 10000, 10000)
In [23]:
def f1(alist, entries):
    """Using repeated sorts."""
    zs = []
    for entry in entries:
        alist.append(entry)
        alist.sort(reverse=True)
        zs.append(alist.pop())
    return zs
In [24]:
%timeit f1(alist, entries)
1 loops, best of 3: 398 ms per loop

Using a heap (priority queue)

In [25]:
from heapq import heappushpop, heapify
In [26]:
def f2(alist, entries):
    """Using a priority queue."""
    heapify(alist)
    zs = []
    for entry in entries:
        zs.append(heappushpop(alist, entry))
    return zs
In [27]:
%timeit f2(blist, entries)
100 loops, best of 3: 3.61 ms per loop

Python idioms for speed

String concatenation

In [28]:
def concat1(alist):
    """Using string concatenation."""
    s = alist[0]
    for item in alist[1:]:
        s += " " + item
    return s

def concat2(alist):
    """Using join."""
    return " ".join(alist)

alist = ['abcde'] * 1000000
%timeit -r3 -n3 concat1(alist)
%timeit -r3 -n3 concat2(alist)
3 loops, best of 3: 352 ms per loop
3 loops, best of 3: 19.3 ms per loop

Avoiding loops

In [29]:
"""Avoiding loops."""

import math

def loop1(n):
    """Using for loop with function call."""
    z = []
    for i in range(n):
        z.append(math.sin(i))
    return z

def loop2(n):
    """Using local version of function."""
    z = []
    sin = math.sin
    for i in range(n):
        z.append(sin(i))
    return z

def loop3(n):
    """Using list comprehension."""
    sin = math.sin
    return [sin(i) for i in range(n)]

def loop4(n):
    """Using map."""
    sin = math.sin
    return list(map(sin, range(n)))

def loop5(n):
    """Using numpy."""
    return np.sin(np.arange(n)).tolist()

n = 1000000
%timeit -r1 -n1 loop1(n)
%timeit -r1 -n1 loop2(n)
%timeit -r1 -n1 loop3(n)
%timeit -r1 -n1 loop4(n)
%timeit -r1 -n1 loop5(n)

assert(np.all(loop1(n) == loop2(n)))
assert(np.all(loop1(n) == loop3(n)))
assert(np.all(loop1(n) == loop4(n)))
assert(np.all(loop1(n) == loop5(n)))
1 loops, best of 1: 476 ms per loop
1 loops, best of 1: 373 ms per loop
1 loops, best of 1: 286 ms per loop
1 loops, best of 1: 268 ms per loop
1 loops, best of 1: 111 ms per loop

Using in-place operations

In [30]:
a = np.arange(1e6)

%timeit global a; a = a * 0
%timeit global a; a *= 0
100 loops, best of 3: 5.6 ms per loop
100 loops, best of 3: 2.96 ms per loop

Using appropriate indexing

In [31]:
def idx1(xs):
    """Using loops."""
    s = 0
    for x in xs:
        if (x > 10) and (x < 20):
            s += x
    return s

def idx2(xs):
    """Using logical indexing."""
    return np.sum(xs[(xs > 10) & (xs < 20)])

n = 1000000
xs = np.random.randint(0, 100, n)
%timeit -r3 -n3 idx1(xs)
%timeit -r3 -n3 idx2(xs)
3 loops, best of 3: 431 ms per loop
3 loops, best of 3: 10.3 ms per loop

Using views to implement stencils

In [32]:
def average1(xs):
    """Using loops."""
    ys = xs.copy()
    rows, cols = xs.shape
    for i in range(rows):
        for j in range(cols):
            s = 0
            for u in range(i-1, i+2):
                if u < 0 or u >= rows:
                    continue
                for v in range(j-1, j+2):
                    if v < 0 or v >= cols:
                        continue
                    s += xs[u, v]
            ys[i, j] = s/9.0
    return ys

def average2(xs):
    """Using shifted array views and border to avoid out of bounds checks."""
    rows, cols = xs.shape
    xs1 = np.zeros((rows+2, cols+2))
    xs1[1:-1, 1:-1] = xs[:]
    ys = (xs1[:-2, :-2]  + xs1[1:-1, :-2]  + xs1[2:, :-2] +
          xs1[:-2, 1:-1] + xs1[1:-1, 1:-1] + xs1[2:, 1:-1] +
          xs1[:-2, 2:]   + xs1[1:-1, 2:]   + xs1[2:, 2:])/9.0
    return ys

n = 25
xs = np.random.uniform(0,10,(n, n))
%timeit -r3 -n3 average1(xs)
%timeit -r3 -n3 average2(xs)
3 loops, best of 3: 4.68 ms per loop
3 loops, best of 3: 107 µs per loop
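
The same zero-padded 3x3 neighbourhood average can also be obtained from a library routine; this is a sketch using scipy.ndimage (assuming scipy is installed; it is not timed here):

from scipy.ndimage import uniform_filter

# mode='constant' with cval=0 pads the border with zeros, matching average2 above
ys = uniform_filter(xs, size=3, mode='constant', cval=0.0)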

Using generalized universal functions (gufuncs)

In [33]:
xs = np.random.random((1000, 10))
xs
Out[33]:
array([[ 0.0239949 ,  0.9448899 ,  0.38954008, ...,  0.90376854,
         0.00426179,  0.85283056],
       [ 0.84696008,  0.41795534,  0.33089656, ...,  0.77224093,
         0.13336724,  0.37291434],
       [ 0.96967546,  0.24677243,  0.91441873, ...,  0.6430914 ,
         0.1975462 ,  0.91088953],
       ...,
       [ 0.70033398,  0.23787021,  0.36570841, ...,  0.07397977,
         0.64451552,  0.13583896],
       [ 0.34292311,  0.95505168,  0.8044513 , ...,  0.98800589,
         0.43128007,  0.67242465],
       [ 0.36700419,  0.84937765,  0.44672394, ...,  0.82128124,
         0.1343562 ,  0.11249669]])
In [34]:
ys = np.random.random((1000, 10))
ys
Out[34]:
array([[ 0.02427837,  0.59096979,  0.19144668, ...,  0.30528168,
         0.73472361,  0.52060017],
       [ 0.89185624,  0.41738057,  0.35344497, ...,  0.15926106,
         0.56084192,  0.85950004],
       [ 0.56798758,  0.42511275,  0.83825657, ...,  0.04916259,
         0.94247933,  0.46567012],
       ...,
       [ 0.19913379,  0.62601032,  0.47914341, ...,  0.19906258,
         0.49500519,  0.6781382 ],
       [ 0.36574487,  0.25007863,  0.92439174, ...,  0.03072802,
         0.35768397,  0.06059906],
       [ 0.51692658,  0.37195484,  0.59856346, ...,  0.25166055,
         0.48383847,  0.93378644]])
In [35]:
from numpy.core.umath_tests import inner1d

%timeit -n3 -r3 np.array([x @ y for x, y in zip(xs, ys)])
%timeit -n3 -r3 inner1d(xs, ys)
3 loops, best of 3: 2.05 ms per loop
3 loops, best of 3: 21.7 µs per loop
In [36]:
from numpy.core.umath_tests import matrix_multiply
In [37]:
xs = np.random.randint(0, 10, (500, 2, 2))
ys = np.random.randint(0, 10, (500, 2, 2))
In [38]:
%timeit -n3 -r3 np.array([x @ y for x, y in zip(xs, ys)])
%timeit -r3 -n3 matrix_multiply(xs, ys)
3 loops, best of 3: 3.89 ms per loop
3 loops, best of 3: 18.7 µs per loop
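
Note that numpy.core.umath_tests is a private module. The same batched operations can be written with public NumPy functions; a sketch (not timed here), assuming a reasonably recent NumPy:

a = np.random.random((1000, 10))
b = np.random.random((1000, 10))
row_dots = np.einsum('ij,ij->i', a, b)   # row-wise inner products, same result as inner1d(a, b)

A = np.random.random((500, 2, 2))
B = np.random.random((500, 2, 2))
C = np.matmul(A, B)                      # batched matrix products, same result as matrix_multiply(A, B)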

Memoization

In [39]:
from functools import lru_cache
In [40]:
def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

# A simple example of memoization - in practice, use `lru_cache` from functools
def memoize(f):
    store = {}
    def func(n):
        if n not in store:
            store[n] = f(n)
        return store[n]
    return func

@memoize
def mfib(n):
    return fib(n)

@lru_cache()
def lfib(n):
    return fib(n)

assert(fib(10) == mfib(10))
assert(fib(10) == lfib(10))

%timeit -r1 -n10 fib(30)
%timeit -r1 -n10 mfib(30)
%timeit -r1 -n10 lfib(30)
10 loops, best of 1: 430 ms per loop
10 loops, best of 1: 42.4 ms per loop
10 loops, best of 1: 43.6 ms per loop
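
Note that mfib and lfib only cache the top-level call; the recursive calls inside fib are still unmemoized, so most of the speedup above comes from repeated timing loops hitting the cache. To memoize the recursion itself, decorate the recursive function directly (a minimal sketch; fib_cached is a new name introduced here):

@lru_cache(maxsize=None)
def fib_cached(n):
    if n <= 2:
        return 1
    return fib_cached(n-1) + fib_cached(n-2)

# Each value of n is now computed only once, so fib_cached(30) returns almost instantly
assert fib_cached(30) == fib(30)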