Scalable data storage and structures

When dealing with big data, minimizing the amount of memory used is critical to avoid having to use disk-based access, which can be on the order of 100,000 times slower than random access to memory. This notebook deals with ways to minimize data storage for several common use cases:

  • Large arrays of homogeneous data (often numbers)
  • Large string collections
  • Counting distinct values
  • Yes/No responses to queries

Methods covered range from the mundane (use numpy arrays rather than lists), to classic but less well-known data structures (e.g. prefix trees or tries), to algorithmically ingenious probabilistic data structures (e.g. Bloom filters and HyperLogLog).

In [1]:
import sys
import numpy as np

Selective retrieval from disk-based storage

We have already seen that there are many ways to retrieve into memory only the parts of the data we need at a particular moment (a brief sketch of two of these options follows the list below). Options include

  • generators (e.g. to read a file a line at a time)
  • numpy.memmap
  • HDF5 via h5py
  • Key-value stores (e.g. redis)
  • SQL and NoSQL databases (e.g. sqlite3)
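
As a minimal sketch of two of these options: a generator yields one line at a time so the whole file never sits in memory, and numpy.memmap lazily maps a file of raw values into an array so that only the slices actually accessed are read from disk. The file name data/big_array.dat is purely illustrative.

def lines(filename):
    # yield one line at a time; the whole file is never held in memory
    with open(filename) as f:
        for line in f:
            yield line

# memory-map a (hypothetical) file of raw float64 values; slicing reads
# only the requested portion from disk
arr = np.memmap('data/big_array.dat', dtype='float64', mode='r')
arr[:10]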

Storing numbers

Less memory is used when storing numbers in numpy arrays rather than lists. (Note that sys.getsizeof counts only the list object and its array of pointers, not the int objects it points to, so the true memory cost of the list is even higher than shown.)

In [2]:
sys.getsizeof(list(range(int(1e8))))
Out[2]:
900000112
In [3]:
np.arange(int(1e8)).nbytes
Out[3]:
800000000

Using only the precision needed can also save memory.

In [4]:
np.arange(int(1e8)).astype('float32').nbytes
Out[4]:
400000000
In [5]:
np.arange(int(1e8)).astype('float64').nbytes
Out[5]:
800000000

Storing strings

In [6]:
import itertools as it

def flatmap(func, items):
    return it.chain.from_iterable(map(func, items))
In [7]:
def flatten(xss):
    return (x for xs in xss for x in xs)

Using a list

In [8]:
with open('data/Ulysses.txt') as f:
    word_list = list(flatten(line.split() for line in f))
In [9]:
sys.getsizeof(word_list)
Out[9]:
2258048
In [10]:
target = 'WARRANTIES'
In [11]:
%timeit -r1 -n1 word_list.index(target)
6.33 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Using a sorted list

In [12]:
word_list.sort()
In [13]:
import bisect
%timeit -r1 -n1 bisect.bisect(word_list, target)
8.48 µs ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
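
Note that bisect.bisect only returns an insertion point, not a yes/no answer. A minimal sketch of an actual membership test on the sorted list:

# target is present iff the element at its insertion point equals it
i = bisect.bisect_left(word_list, target)
found = i < len(word_list) and word_list[i] == target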

Using a set

In [14]:
word_set = set(word_list)
In [15]:
sys.getsizeof(word_set)
Out[15]:
2097376
In [16]:
%timeit -r1 -n1 target in word_set
1.2 µs ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Using a trie (prefix tree)

! pip install hat_trie
In [17]:
%load_ext memory_profiler
In [18]:
from hat_trie import Trie
In [19]:
%memit word_trie = Trie(word_list)
peak memory: 70.50 MiB, increment: 0.10 MiB
In [20]:
%timeit -r1 -n1 target in word_trie
3.73 µs ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Data Sketches

A sketch is a probabilistic algorithm or data structure that approximates some statistic of interest, typically using very little memory and processing time. Often they are applied to streaming data, and so must be able to incrementally process data. Many data sketches make use of hash functions to distribute data into buckets uniformly. Typically, data sketches have the following desirable properties

  • sub-linear in space
  • single scan
  • can be parallelized
  • can be combined (merge)

Some statistics that sketches have been used to estimate include

  • indicator variables (event detection)
  • counts
  • quantiles
  • moments
  • entropy

Packages for data sketches in Python are relatively immature, and if you are interested, you could make a large contribution by creating a comprehensive open source library of data sketches in Python.

Morris counter

The Morris counter is used as a simple illustration of a probabilistic data structure, with the standard trade-off of using less memory in return for less accuracy. The algorithm is extremely simple: keep a counter \(c\) that represents the exponent, so that when the Morris counter is \(c\), the estimated count is \(2^c\). The probabilistic part comes from the way the counter is incremented: a uniform random variate is compared to \(1/2^c\), and the counter is incremented only if the variate is smaller.

In [21]:
from random import random

class MorrisCounter:
    def __init__(self, c=0):
        self.c = c

    def __len__(self):
        # the estimated count is 2^c
        return 2 ** self.c

    def add(self, item):
        # increment the exponent with probability 1/2^c
        self.c += random() < 1/(2**self.c)
In [22]:
mc = MorrisCounter()
In [23]:
print('True\t\tMorris\t\tRel Error')
for i, word in enumerate(word_list):
    mc.add(word)
    if i%int(.2e5)==0:
        print('%8d\t%8d\t%.2f' % (i, len(mc), 0 if i==0 else abs(i - len(mc))/i))
True            Morris          Rel Error
       0               2        0.00
   20000           32768        0.64
   40000           32768        0.18
   60000           32768        0.45
   80000           65536        0.18
  100000           65536        0.34
  120000           65536        0.45
  140000           65536        0.53
  160000          131072        0.18
  180000          131072        0.27
  200000          131072        0.34
  220000          131072        0.40
  240000          131072        0.45
  260000          131072        0.50

Increasing accuracy

A simple way to increase the accuracy is to keep multiple Morris counters and take the average. These two ideas, using a probabilistic calculation and averaging multiple samples to improve precision, are the basis for the more useful probabilistic data structures described below.

In [24]:
mcs = [MorrisCounter() for i in range(10)]
In [25]:
print('True\t\tMorris\t\tRel Error')
for i, word in enumerate(word_list):
    for j in range(10):
        mcs[j].add(word)
    estimate = np.mean([len(m) for m in mcs])
    if i%int(.2e5)==0:
        print('%8d\t%8d\t%.2f' % (i, estimate, 0 if i==0 else abs(i - estimate)/i))
True            Morris          Rel Error
       0               2        0.00
   20000           20480        0.02
   40000           38502        0.04
   60000           45875        0.24
   80000           72089        0.10
  100000          134348        0.34
  120000          163840        0.37
  140000          176947        0.26
  160000          176947        0.11
  180000          203161        0.13
  200000          203161        0.02
  220000          229376        0.04
  240000          255590        0.06
  260000          255590        0.02

Distinct value sketches

The Morris counter is of limited practical use, because the memory saved compared to exact counting is small unless the counts are staggeringly large. In contrast, counting the number of distinct elements exactly requires storing every distinct element seen (e.g. in a set), and hence memory grows with the cardinality \(n\). Probabilistic data structures known as Distinct Value Sketches can estimate this count with a tiny, fixed memory size.

Examples where counting distinct values is useful:

  • number of unique users in a Twitter stream
  • number of distinct records to be fetched by a database query
  • number of unique IP addresses accessing a website
  • number of distinct queries submitted to a search engine
  • number of distinct DNA motifs in genomics data sets (e.g. microbiome)

Hash functions

A hash function takes data of arbitrary size and converts it into a number in a fixed range. Ideally, given an arbitrary set of data items, the hash function generates numbers that follow a uniform distribution within the fixed range. Hash functions are immensely useful throughout computer science (for example - they power Python sets and dictionaries), and especially for the generation of probabilistic data structures.

A simple hash function mapping

Note the collisions. If collisions are not handled, there is a loss of information. In practice, hash functions commonly return a 32- or 64-bit integer. Also note that arbitrarily many different hash functions can be constructed that return numbers within a given range.

Note also that because the hash function is deterministic, the same item will always map to the same bin.

In [26]:
def string_hash(word, n):
    return sum(ord(char) for char in word) % n
In [27]:
sentence = "The quick brown fox jumps over the lazy dog."
for word in sentence.split():
    print(word, string_hash(word, 10))
The 9
quick 1
brown 2
fox 3
jumps 9
over 4
the 1
lazy 8
dog. 0

Built-in Python hash function

In [28]:
help(hash)
Help on built-in function hash in module builtins:

hash(obj, /)
    Return the hash value for the given object.

    Two objects that compare equal must also have the same hash value, but the
    reverse is not necessarily true.

In [29]:
for word in sentence.split():
    print('{:<10s} {:24}'.format(word, hash(word)))
The            -4859935776507312418
quick           9157615745031482514
brown           4123312298496538273
fox            -2015214628178477320
jumps            -71379956079029581
over           -6974446915587241323
the            -5638214675285202096
lazy            1423964815621844201
dog.           -1983643758301440122

Using a hash function from the MurmurHash3 library

Note that the hash function accepts a seed, allowing the creation of multiple hash functions. We also display the hash result as a 32-bit binary string.

In [30]:
import mmh3

for word in sentence.split():
    print('{:<10} {:+032b} {:+032b}'.format(word.ljust(10), mmh3.hash(word, seed=1234),
          mmh3.hash(word, seed=4321)))
The        +0001000011111110001001110101100 +1110110100100101010111100011010
quick      -0101111111011110110101100101000 +1000100001101010110000101101100
brown      +1000101010000110110010001110101 -1101101110000000010001100010100
fox        -1000000010010010000111001111011 +0111011111000011001001001110111
jumps      +0000010111000011010000100101010 +0010010001111110100010010110011
over       -0110101101111001001101011111011 -1101110111110010000101101000100
the        -1000000101110000000110011111001 +0001000111100111011000011100101
lazy       -1101011000111111110011111001100 +0010101110101100001000101110000
dog.       +0100110101101111101011110111111 -0101111000110000001011110001011

LogLog family

The binary digits in a (say) 32-bit hash are effectively random, and equivalent to a sequence of fair coin tosses. Hence, if the smallest hash value seen so far begins with a run of 5 zeros, this suggests that about \(2^5\) unique items have been added, since on average only one item in \(2^5\) hashes to a value with 5 leading zeros. This is the intuition behind the loglog family of Distinct Value Sketches. Note that the biggest count we can track with 32 bits is \(2^{32} = 4294967296\).

The accuracy of the sketch can be improved by averaging results over multiple coin flippers. In practice, this is done by using the first \(k\) bits of the hash to assign each item to one of \(2^k\) different coin flippers (registers). Hence, the maximum count each register can track is now \(2^{32-k}\). The hyperloglog algorithm uses the harmonic mean of the \(2^k\) flippers, which reduces the effect of outliers and hence the variance of the estimate.
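
To make the idea concrete, here is a minimal toy estimator in the spirit of HyperLogLog. It is not the implementation used by the hyperloglog package below: the bucket indexing and bias constant follow the standard HyperLogLog formula, and the small- and large-cardinality corrections are omitted.

def toy_hyperloglog(items, k=10):
    # 2^k registers, each tracking the longest run of leading zeros seen
    m = 2 ** k
    registers = [0] * m
    for item in items:
        h = mmh3.hash(item) & 0xFFFFFFFF        # unsigned 32-bit hash
        bucket = h >> (32 - k)                  # first k bits pick the register
        rest = h & ((1 << (32 - k)) - 1)        # remaining 32-k bits
        rho = (32 - k) - rest.bit_length() + 1  # leading zeros + 1
        registers[bucket] = max(registers[bucket], rho)
    # harmonic mean of 2^register across registers, with the usual bias constant
    alpha = 0.7213 / (1 + 1.079 / m)
    return alpha * m * m / sum(2.0 ** -r for r in registers)

# e.g. toy_hyperloglog(word_list) should come out close to len(word_set)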

In [31]:
for i in range(1, 15):
    k = 2**i
    hashes = [''.join(map(str, np.random.randint(0,2,32))) for i in range(k)]
    print('%6d\t%s' % (k, min(hashes)))
     2  01001110101100101111011010111111
     4  10000010000111011000111110010010
     8  01001001110010100010101011000100
    16  00011011100001111110100010110011
    32  00000001000100111110110100100110
    64  00000011101010100011001100010101
   128  00000011001000100100001110011001
   256  00000000011011001011111011011001
   512  00000000101100110010111101011100
  1024  00000000001110100101101000011111
  2048  00000000000100001010110101000100
  4096  00000000000010100001011100111011
  8192  00000000000000001011101100000101
 16384  00000000000000100011011110111100
! pip install hyperloglog
In [32]:
from hyperloglog import HyperLogLog
In [33]:
hll = HyperLogLog(0.01) # accept 1% counting error
In [34]:
print('True\t\tHLL\t\tRel Error')
s = set([])
for i, word in enumerate(word_list):
    s.add(word)
    hll.add(word)
    if i%int(.2e5)==0:
        print('%8d\t%8d\t\t%.2f' % (len(s), len(hll), 0 if i==0 else abs(len(s) - len(hll))/i))
True            HLL             Rel Error
       1               1                0.00
    6585            6560                0.00
   11862           11777                0.00
   15390           15318                0.00
   18358           18236                0.00
   24705           24712                0.00
   28693           28750                0.00
   30791           30946                0.00
   34530           34677                0.00
   36002           36077                0.00
   41720           42091                0.00
   45842           46384                0.00
   46389           46979                0.00
   49524           50226                0.00

Bloom filters

Bloom filters are designed to answer queries about whether a specific item is in a collection. If the answer is no, it is definitive. However, if the answer is yes, it might be a false positive. The possibility of a false positive makes the Bloom filter a probabilistic data structure.

A Bloom filter consists of a bit vector of length \(k\), initially set to zero, and \(n\) different hash functions that each return a hash value falling into one of the \(k\) bins. In the construction phase, for every item in the collection, \(n\) hash values are generated by the \(n\) hash functions, and every position indicated by a hash value is set to one. In the query phase, given an item, \(n\) hash values are calculated as before: if any of these \(n\) positions is a zero, then the item is definitely not in the collection. However, because of the possibility of hash collisions, even if all the positions are one, this could be a false positive. Clearly, the rate of false positives depends on the ratio of zero and one bits, and there are Bloom filter implementations that dynamically bound this ratio and hence the false positive rate.
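
A minimal sketch of this construction, using mmh3 with \(n\) different seeds to play the role of the \(n\) hash functions. This is only an illustration; the pybloom package used below is the practical choice.

class ToyBloomFilter:
    def __init__(self, k=2**20, n=5):
        self.k, self.n = k, n
        self.bits = bytearray(k // 8 + 1)   # bit vector of length k, all zeros

    def _positions(self, item):
        # n hash functions: one mmh3 hash per seed, mapped into the k bins
        return [mmh3.hash(item, seed) % self.k for seed in range(self.n)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # any zero bit means definitely absent; all ones means "probably present"
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))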

Possible uses of a Bloom filter include:

  • Does a particular sequence motif appear in a DNA string?
  • Has this book been recommended to this customer before?
  • Check if an element exists on disk before performing I/O
  • Check if URL is a potential malware site using in-browser Bloom filter to minimize network communication
  • As an alternative way to generate distinct value counts cheaply (only increment the count if the Bloom filter says no; see the sketch at the end of this section)
! pip install git+https://github.com/jaybaird/python-bloomfilter.git
In [35]:
from pybloom import ScalableBloomFilter

# The Scalable Bloom Filter grows as needed to keep the error rate small
# The default error_rate=0.001
sbf = ScalableBloomFilter()
In [36]:
for word in word_set:
    sbf.add(word)
In [37]:
test_words = ['banana', 'artist', 'Dublin', 'masochist', 'Obama']
In [38]:
for word in test_words:
    print(word, word in sbf)
banana True
artist True
Dublin True
masochist False
Obama False
In [39]:
### Check against the exact word set
for word in test_words:
    print(word, word in word_set)
banana True
artist True
Dublin True
masochist False
Obama False
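
As a rough illustration of the last use case listed above, here is a sketch of distinct counting with a fresh Bloom filter, using the same pybloom API as before. The count is incremented only when the filter reports a word as unseen, so false positives can only make the estimate an undercount.

bf = ScalableBloomFilter()
distinct = 0
for word in word_list:
    if word not in bf:      # "no" from the filter is definitive
        distinct += 1
        bf.add(word)
print(distinct, len(word_set))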