Counting Pizza with Python

I'm a full-time nerd: even when I'm ordering pizza online I can't stop myself from investigating how the websites I'm ordering from work. My latest investigation was Domino's, where I found a neat way to count the number of orders they process throughout the day. This post highlights the potential dangers of exposing integer IDs, and how they can allow someone motivated (or sad) enough to track data you might not want to share.

Below is a graph of the data I collected over two weeks of monitoring the Domino's website; it gives a good indication of how pizza sales fluctuate over that period. The data was collected in early 2016 and I contacted Domino's with details about this potential issue. I received no reply, but they did fix it, as explained below.

Domino's Order Volume

The issue

After you order a delicious (if a bit expensive) Domino's pizza you have the option to track your order as it is being cooked and delivered. After opening up Chrome's dev tools I noticed that the tracker page was making a request to the following URL:

This URL returns a JSON response with the information used to update the tracker, such as the status (cooking, prepared, out for delivery, etc.). While I was writing my dissertation I ordered a few too many pizzas over a couple of weeks, and I noticed that the orderId parameter was just an incrementing number. I also discovered that it wasn't tied to a user's session, so you could send any arbitrary order ID in the parameter and get the status of that order.

So an idea was born: what if I made a script that uses this to roughly track how many Domino's pizzas were ordered online over time? Once you have a 'start' orderId you just need to increment it by X and keep requesting that ID until a valid response comes back and the orderId has been 'filled'. You can then log how long that took, increment orderId by X again, and repeat.

import requests, time

start = 1234  # The orderId of a yummy pizza you have literally just bought
current = start

def order_exists(order_id):
    # The tracker URL prefix is omitted here; it returns a 404 for unknown orders
    resp = requests.get("{0}".format(order_id))
    if resp.status_code == 404:
        return False
    return True

while True:
    if order_exists(current):
        # Log when this ID was 'filled', then jump ahead by X (10 here)
        print("{0} = {1}".format(time.time(), current))
        current += 10
    time.sleep(1)  # be gentle: poll once a second


The actual code is a lot messier and a bit more complex: I found that some Domino's orders are never fulfilled, so you also need to check current + 1 and current + 2 to be sure. From the output of the script you can work out the order rate over time and produce the graph above.
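
As a rough illustration of that last step, here is a minimal sketch (not my original analysis code) that turns the script's timestamped output into an orders-per-minute figure, assuming each logged line means X (10 here) orders were placed since the previous one and that the log lives in a hypothetical orders.log file:

X = 10

def order_rate(lines):
    # Each line looks like "<unix timestamp> = <orderId>"
    times = [float(line.split(" = ")[0]) for line in lines]
    rates = []
    for previous, current in zip(times, times[1:]):
        elapsed_minutes = (current - previous) / 60
        rates.append((current, X / elapsed_minutes))  # orders per minute
    return rates

with open("orders.log") as f:
    for timestamp, rate in order_rate(f.read().splitlines()):
        print("{0:.0f}: ~{1:.1f} orders/minute".format(timestamp, rate))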

The fix

Domino's have since fixed this with the release of their new tracker, which resembles a creepy HAL-like thing. Now the orderId parameter is a base64 string with the following value:


As you can see, the actual integer order ID is still there but a GUID follows it. If you attempt to request the status of an order without the correct GUID an error is returned, which fixes this issue.
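
To illustrate the idea with a purely hypothetical value (the real parameter isn't reproduced here, and the exact layout is my assumption), a token of this style pairs the guessable integer with an unguessable GUID:

import base64

# Hypothetical example of an "integer ID + GUID" order token
decoded = b"123456789|a1b2c3d4-e5f6-7890-abcd-ef1234567890"
token = base64.b64encode(decoded)

print(token)                    # what the tracker would send as orderId
print(base64.b64decode(token))  # the integer is still visible, but guessing
                                # the GUID part is infeasible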

This goes to show that you should be very careful when exposing public, incrementing integer IDs. They can easily enable someone (like a competitor) to fairly accurately track the health of your business by how many orders you are processing.

The results

Note: I think the first Monday's results are skewed due to some problems with my script, as the following Monday's sales are far lower.

Domino's sell a lot of pizzas. These numbers cover at least their online orders; I doubt in-store or over-the-phone orders are included. Anyway, at their peak on Tuesday they were processing ~300 orders every 30 seconds. That's 10 a second, and with the ridiculous profit margins Domino's has on their pizzas (a single pizza can set you back £15-20!) that's a serious amount of money. It seems their busiest days are Tuesday, Friday, Saturday and Sunday, and whilst Monday, Wednesday and Thursday are busy, they are a lot quieter than the others. Still, 4 out of 7 busy days isn't bad!

What's interesting is that lunchtime sales are pretty negligible compared to the evening rush; on most days only about an order a second is processed for the whole of the UK. It also seems like some people like getting pizza at 10am!

Whilst running the script I noticed a few times when the Domino's site went down. Below is a graph that shows this happening twice on Tuesday:

You can see the first crash happens right after ~300 orders in 30 seconds; perhaps this contributed to it? The next is smaller, but each must have resulted in a fairly large number of missed orders as they happened at peak times.

The conclusion

Don't expose incrementing integer primary keys for anything sensitive; they can be used to extract trends over time. Even if you attempt to protect yourself with authorization checks, returning a 404 for an invalid ID and a 403 for a valid one still tells an attacker which orders exist and which do not.
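
One common remedy, sketched minimally below (and not necessarily what Domino's actually did), is to look orders up by a random, unguessable token rather than by the integer key alone:

import uuid

# Give every order a random public token alongside its internal integer
# primary key, and only ever expose the token in URLs.
def new_order_token():
    return str(uuid.uuid4())

# e.g. /track?token=0b1f6a7e-... instead of /track?orderId=1234
print(new_order_token())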

I've been struggling with myself over whether to call this a security issue. It technically leaks information that you might not want public, but it's not exactly going to allow an attacker to do anything bad to your site.

Syntax highlighting and CSS support added to wordinserter

I recently added syntax highlighting and support for CSS stylesheets to wordinserter, and the implementation was satisfying enough that I thought I would blog about it.

Wordinserter is a library I maintain that lets you insert HTML documents/snippets into Word documents. Its primary use case is when you have a WYSIWYG editor in a user's browser that outputs HTML and you want to put that HTML into a Word document. I guess you could say it inserts things... into Word. Doing this with wordinserter is as simple as:

from wordinserter import parse, insert

# `document` and `constants` refer to an open Word document via COM
html = "<h1>Hello there!</h1>"\
       "<p><strong>I'm strong!</strong></p>"
operations = parse(html, parser="html")
insert(operations, document=document, constants=constants)

You can read some more examples and documentation on the GitHub project here, and you can see comparison images of how Firefox and wordinserter render particular HTML snippets here.

Anyway, back to the topic at hand. One of the features our WYSIWYG editor supports is syntax highlighting, and I've always wanted to add proper support for this in wordinserter. In HTML, code is usually represented by a pre or code tag like so:

def test():
    import urllib
    urllib.urlopen("")

The pre/code tag has some unusual properties, such as respecting all whitespace included within it, but other than that it's just a normal tag. Websites (like this one) use various JS libraries or server-side processing to highlight the contents of these tags and make them more visually appealing, which usually boils down to sticking a bunch of span tags with CSS classes/inline styles in the right places. For example, the snippet below is the highlighted HTML contents of the snippet above:

def <span class="hljs-function"><span class="hljs-title">test</span><span class="hljs-params">()</span></span>:
    import urllib
    urllib.<span class="hljs-function"><span class="hljs-title">urlopen</span><span class="hljs-params">(<span class="hljs-string">""</span>)</span></span>

So how does wordinserter highlight code?

You can tell wordinserter to insert highlighted code in two ways. The first, and simplest, is to send a <pre> tag with a language attribute like so:

<pre language="python">
def test():
    import urllib
    urllib.urlopen("")
</pre>

This uses the awesome pygments library under the hood to highlight the code, using magic.
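
Under the hood that boils down to something like the following (a rough sketch of typical pygments usage, not wordinserter's exact code):

from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import HtmlFormatter

code = 'def test():\n    import urllib\n    urllib.urlopen("")\n'

# Look up a lexer from the <pre language="..."> attribute and produce
# HTML with inline styles that can then be inserted into Word.
lexer = get_lexer_by_name("python")
print(highlight(code, lexer, HtmlFormatter(noclasses=True)))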

This worked for a while, until some clever chap walked up to me and said, "Hey, the syntax highlighting works for this code in the WYSIWYG editor but it doesn't display correctly in the document". I looked into it and the problem was a mismatch between using pygments to highlight the code in the document and highlight.js (hljs) to do it on the frontend. The only natural way forward was to unify them both around a single style, so I added support for CSS files to wordinserter.

You what?

Yeah. That's what I thought to myself when I first had the idea. We have a CSS file that styles things on the frontend, and we want the highlighted code to look the same in the generated document. The only way forward that I could see was to send the CSS file along with the hljs-highlighted code (the one with all the spans) to wordinserter. It would then see a span tag with, say, the hljs-function class, look up the appropriate rule in the CSS file and apply it. You can use it like so:

from wordinserter import parse, insert

html = "<h1>Hello there!</h1>"\
       "<p><strong>I'm strong!</strong></p>"
operations = parse(html, parser="html", stylesheets=["h1 { color: red; }"])
insert(operations, document=document, constants=constants)

The implementation was actually really simple:

All it does is parse the CSS file using the awesome cssutils library, then crudely run through each rule, find all elements that match it and copy the rule's properties onto those elements as inline styles. Not amazing, but it gets the job done and required minimal modifications to any other part of the library. I had to make some big changes later when I figured out that the inheritance of these styles was wonky (parent styles overrode child styles), but that's fixed now so it's all groovy.
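
In spirit it looks roughly like this; a simplified sketch using cssutils together with lxml and cssselect, not the actual wordinserter implementation:

import cssutils
from lxml import html

def inline_stylesheet(document_html, stylesheet_text):
    tree = html.fromstring(document_html)
    sheet = cssutils.parseString(stylesheet_text)

    for rule in sheet:
        if rule.type != rule.STYLE_RULE:
            continue
        # Find every element matching the rule's selector and copy the
        # rule's declarations onto it as inline styles.
        for element in tree.cssselect(rule.selectorText):
            existing = element.get("style", "")
            declarations = "; ".join(
                "{0}: {1}".format(p.name, p.value) for p in rule.style)
            element.set("style", "; ".join(filter(None, [existing, declarations])))

    return html.tostring(tree, encoding="unicode")

print(inline_stylesheet('<pre><span class="hljs-title">test</span></pre>',
                        '.hljs-title { color: #900; font-weight: bold; }'))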

In both cases the finished document will look like this:

Segfaulting Python with afl-fuzz

American Fuzzy Lop is both a really cool tool for fuzzing programs and an adorable breed of bunny. In this post I'm going to show you how to get the tool (rather than the rabbit) up and running and find some crashes in the CPython interpreter.


Explaining in detail what fuzzing is would be a post in itself, so here is the summary from the Wikipedia article:

Fuzz testing or fuzzing is a software testing technique, often automated or semi-automated, that involves providing invalid, unexpected, or random data to the inputs of a computer program. The program is then monitored for exceptions such as crashes, or failing built-in code assertions or for finding potential memory leaks. Fuzzing is commonly used to test for security problems in software or computer systems. It is a form of random testing which has been used for testing hardware or software.

In layman's terms it's the equivalent of "throwing enough shit at a wall to see what sticks". For example, if you have an MP3 player written in an unsafe language like C/C++, fuzzing might involve taking a valid MP3 file, changing parts of it and then attempting to play it. If the player is written correctly any invalid file will result in an error, whereas if it is coded incorrectly the program will crash. Repeat this 1,000,000 times and you might find a lot of different bugs.
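
A naive mutation step from that example might look like this; a toy sketch (the valid.mp3 file is hypothetical), and real fuzzers like AFL are far smarter about which bytes to change:

import random

def mutate(data, flips=10):
    """Flip a handful of random bits in an otherwise valid input file."""
    mutated = bytearray(data)
    for _ in range(flips):
        position = random.randrange(len(mutated))
        mutated[position] ^= 1 << random.randrange(8)  # flip one random bit
    return bytes(mutated)

with open("valid.mp3", "rb") as f:
    testcase = mutate(f.read())
# ...then feed `testcase` to the player and watch for crashes.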

That's the long and short of it. AFL has some really clever tricks up its sleeve to make this process more efficient, to the point where it can conjure up JPEG images just by running a JPEG parser, or write valid Bash scripts. That's pretty awesome and made me want to have a crack at Python, specifically the CPython interpreter.

Setting up afl-fuzz

For best results use Linux: OS X has some performance issues that make afl really slow, and Windows isn't supported. The instructions below assume Ubuntu since that's what I used.

Downloading and compiling afl was really painless. Other than build-essential I didn't have to install any dependencies.

➜ ~ sudo apt-get install build-essential
➜ ~ wget
➜ ~ tar xvf afl-latest.tgz && cd afl-2.06b
➜ ~ make -j8

After building the project with make you will have a few executables in the current directory, including afl-gcc, afl-fuzz and afl-whatsup.

Fuzzing Python

Now that we have our afl-fuzz executable we need a program for it to fuzz. To be fuzzable a program must meet two requirements: it must be compiled with afl-gcc (or afl-g++), and it must accept a file name through a command line parameter (or take the data via stdin; no sockets, for example). When afl starts fuzzing it generates testcases using something akin to black magic and writes each case to a file in a directory. It then invokes the target program with the path to that file, e.g. my-program /afl/testcase1. The target should then read the file and do something with it, and because it is compiled with the special afl-gcc/g++ compilers, afl-fuzz can see which parts of the program executed. If it sees that a testcase made the program execute in a new way it starts building new testcases around it, allowing it to slowly map out large portions of the target and, in theory, find edge cases that cause it to crash.

So I reasoned the simplest way to do this was to just download the CPython source code, compile the interpreter with afl-gcc and then run it through afl-fuzz. The stock interpreter already executes code from a file, because that's what it's built to do. Let's go grab the source code:

➜ ~ sudo apt-get install mercurial
➜ ~ cd ~
➜ ~ hg clone
➜ ~ cd cpython
➜ ~ hg checkout 3.5
➜ ~ CC=~/afl-2.06b/afl-gcc ./configure && make -j8

The configure script reads the name of the C compiler from the CC environment variable. By passing CC=~/afl-2.06b/afl-gcc you're telling it to use afl-gcc as the C compiler to build the interpreter. If you've done it right you should see lines like this in the output:

afl-as 2.06b by <[email protected]>
[+] Instrumented 97 locations (64-bit, non-hardened mode, ratio 100%).
../afl-gcc -pthread -c -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes    -Werror=declaration-after-statement   -I. -IInclude -I./Include    -DPy_BUILD_CORE -o Parser/grammar.o Parser/grammar.c
afl-cc 2.06b by <[email protected]>

Now that we have our special interpreter there is one more thing to do before we run it through afl-fuzz: make some test cases. afl-fuzz works best if you give it some example inputs to start from, so create some files in a directory like ~/python_testcases and fill them with simple Python syntax constructs, e.g.:

class ABC(object):
    def x(self):
        for i in range(10):
            y = i + 2
            yield y

o = ABC()

def consume():
    # 'yield from' is only valid inside a function
    yield from o.x()

print(list(consume()))

Then invoke afl-fuzz like so:

➜ ~ ~/afl-2.06b/afl-fuzz -i python_testcases -o fuzz cpython/python @@

You should see a screen like the one below:

Fuzzing Python fast

Notice something about the image above?

Damn. That's terrible. We are never going to get anywhere at 20 executions a second; we need millions and millions of executions to get good results. Luckily for us there is a trick we can use to greatly improve that speed.

Use llvm-mode and Python Bytecode files

Currently our target program is the CPython interpreter itself, which has a fairly large startup overhead. This includes reading the user environment, possibly executing a user's startup files, etc. This is slow. We are also feeding it actual Python source code, which has to be parsed into bytecode before being executed, another avoidable overhead. CPython actually stores this parsed bytecode on disk, as file_name.pyc in Python 2 and inside a __pycache__ directory in Python 3. These files are just an optimization so that the interpreter doesn't have to constantly re-parse the source files, but the interpreter can execute them directly, and afl-fuzz happens to work much better with binary formats than textual ones. To avoid the startup and parsing overhead we need to make a stripped-down interpreter.
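
For reference, producing one of these bytecode files by hand is a one-liner (shown here for a hypothetical testcase.py; the compileall step used later does the same thing for a whole directory):

import py_compile

# Compile a single source file to bytecode; in Python 3 the .pyc lands
# in a __pycache__ directory next to the source file.
pyc_path = py_compile.compile("testcase.py")
print(pyc_path)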

I've never actually written anything more than "hello world" in C, so this was a bit of a learning curve. After reading the Python C documentation I figured the best way to go was to use PyRun_AnyFileEx. I made a new file inside cpython/Programs/ called test.c to hold my stripped-down program, and after a bit of tinkering I wrote the following C:

#include <Python.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    Py_NoSiteFlag = 1;
    Py_InitializeEx(0);

    /* Defer afl until after the slow interpreter startup */
    __AFL_INIT();

    FILE *f = fopen(argv[1], "rb");
    if (PyRun_AnyFileEx(f, argv[1], 1) == -1) {
        return 1;
    }
    return 0;
}
The __AFL_INIT() call above is a special experimental feature of afl that greatly increases the speed of our fuzzing. In short it lets you 'defer initialization' of afl until after some expensive setup that the program has to execute. In this case the Py_InitializeEx function is pretty slow, so we place the __AFL_INIT() call after it; when afl fuzzes the program, each run effectively starts from the point after Py_InitializeEx has been called. This skips repeating the slow startup and speeds fuzzing up a lot, because we want to fuzz PyRun_AnyFileEx and not waste time running Py_InitializeEx() over and over again. To enable this you have to compile a special afl tool called afl-clang-fast:

➜ ~ pushd ~/afl-2.06b/llvm_mode/
➜ ~ sudo apt-get install clang llvm-dev llvm
➜ ~ make
➜ ~ popd

And then run ./configure again with afl-clang-fast instead of afl-gcc:

➜ ~ CC=~/afl-2.06b/afl-clang-fast CXX=~/afl-2.06b/afl-clang-fast++ ./configure

This is optional though; it works fine without it.

After all this we have our target program; now we need to make it build. Getting this bit working took a while because I had no idea what I was doing. In the end I manually added this code to the $(BUILDPYTHON) Makefile target:

# Build the interpreter

    # New lines here:
    $(MAINCC) -c $(PY_CORE_CFLAGS) -o $(srcdir)/Programs/test.o $(srcdir)/Programs/test.c
    $(LINKCC) $(PY_LDFLAGS) $(LINKFORSHARED) -o Programs/test Programs/test.o $(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)

Yes, this is probably really horrible and not a good way to do it, but it works. After running make -j8 you will have a file called test located inside Programs. If make errors out, don't worry; as long as the file is there it will execute fine.

Running afl-fuzz

We have everything set up, so let's find some crashes. Compile your Python testcases to pyc files with these commands:

➜  ~ python3.4 -m compileall python_testcases
➜  ~ cp python_testcases/__pycache__/* python_testcases
➜  ~ mv python_testcases/*.py .

Then execute afl-fuzz. The -f input.pyc flag forces the input file to have the .pyc extension, which Python requires when running pyc files:

➜  ~ afl-2.06b/afl-fuzz -i python_testcases -o fuzz -f input.pyc cpython/Programs/test @@

You should see the following screen:

Notice the speed: 373.4/sec, up from 20! And we've already found 100 unique crashes!

And just to confirm that those crashes are real and not an artefact of our special program, we can try them on the system Python:

➜  ~ cp fuzz/crashes/id:000000,sig:11,src:000000,op:flip1,pos:13 test.pyc
➜  ~ python3.5 test.pyc
[1]    3689 segmentation fault  python3.5 test.pyc

It's worth pointing out here that this isn't really a problem or any kind of security risk. pyc files are internal to CPython and not many validation checks are run on them. If you're in a position to send someone a malicious pyc file that crashes the interpreter, a far more malicious thing to do would be to simply execute __import__('shutil').rmtree('/', ignore_errors=True) rather than crash the process.


So we've got afl-fuzz to find some real-life crashes in the CPython interpreter by fuzzing its bytecode. We've also discovered that there are a lot of crashes and that the interpreter doesn't really protect itself against invalid bytecode, which takes a bit of the fun away.

The next step is to fire up a debugger like gdb and use it to explore why the bytecode crashes the interpreter. I'm going to write a post on how to do this in the near future because it's quite lengthy and this post is already long enough, but as a sample the first segfault afl found was in this code:

for (i = 0; i < n_cellvars; i++) {
    Py_ssize_t j;
    PyObject *cell = PyTuple_GET_ITEM(cellvars, i);
    for (j = 0; j < total_args; j++) {
        PyObject *arg = PyTuple_GET_ITEM(varnames, j);
        if (!PyUnicode_Compare(cell, arg)) {
            cell2arg[i] = j;
            used_cell2arg = 1;
            break;
        }
    }
}
The issue is that arg, returned by the line PyObject *arg = PyTuple_GET_ITEM(varnames, j);, can be NULL if varnames is an empty tuple (which it can be if the pyc file is malformed). No NULL check is done on arg before it is passed to PyUnicode_Compare, and this causes a segfault.

Scraping websites with Cyborg

I often find myself creating one-off scripts to scrape data off websites for various reasons. My go-to approach for this is to hack something together with Requests and BeautifulSoup, but this was getting tiring. Enter Cyborg, my library that makes writing web scrapers quick and easy.

Cyborg is an asyncio-based, pipeline-oriented scraping framework. In English, that means you create a couple of functions to scrape individual parts of a site and throw them together in a sequence, with each of those parts running asynchronously and in parallel. Imagine you had a site with a list of users and you wanted to get the age and profile picture of each of them. Here's how this is done in Cyborg, showing off some of the cool features:

from cyborg import Job, scraper
from aiopipes.filters import unique
import sys

@scraper("/list_of_users")
def scrape_user_list(data, response, output):
    for user in response.find("#list_of_users > li"):
        yield from output({
            "username": user.get(".username").text
        })

@scraper("/user/{username}")
def scrape_user(data, response):
    data['age'] = response.get(".age").text
    data['profile_pic'] = response.get(".pic").attr["href"]
    return data

if __name__ == "__main__":
    pipeline = Job("UserScraper") | scrape_user_list | unique('username') | scrape_user.parallel(5)
    pipeline > sys.stdout
    pipeline.monitor() > sys.stdout

That's it! The idea behind Cyborg is to keep the focus on the actual scraping while getting benefits that are usually hard, like parallel tasks, error handling and monitoring, for free.

The library is very much in alpha, but you can find the project here on GitHub. Feedback welcome!

Our pipeline

Our pipeline is defined like so:

pipeline = Job("UserScraper") | scrape_user_list | unique('username') | scrape_user.parallel(5)
pipeline > sys.stdout

The Job() class is the start of our pipeline and holds information like the name of the task ('UserScraper'). We use the | operator to add tasks to the pipeline, the first one being scrape_user_list. Any output from that task is passed to unique, which, as you may have guessed, filters out any duplicate usernames that are produced. This then passes its output to the scrape_user function, and .parallel(5) means five parallel workers are started to process usernames.

The > operator is used to pipe the output of the pipeline to standard output, but the target could be any file-like object or function instead. This means you could write an import_into_database function that takes the scraped data and uses SQL to add it to a database, as sketched below.
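
For example, something along these lines could sit at the end of the pipeline; this is a rough sketch, and it assumes the pipeline simply calls the target function with each output record:

import sqlite3

connection = sqlite3.connect("users.db")
connection.execute(
    "CREATE TABLE IF NOT EXISTS users (username TEXT, age TEXT, profile_pic TEXT)")

def import_into_database(record):
    # Called with each scraped user dictionary produced by scrape_user
    connection.execute(
        "INSERT INTO users (username, age, profile_pic) VALUES (?, ?, ?)",
        (record["username"], record["age"], record["profile_pic"]))
    connection.commit()

# pipeline > import_into_database   # instead of pipeline > sys.stdout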

A key aim of Cyborg is to make monitoring the pipeline simple. The pipeline.monitor() > sys.stdout line handles this for us by piping status information to standard output every second. Below is some sample output from a real version of our pipeline (one that handles pagination and does a bit more work). You can see a progress bar for each task, including the five scrape_user workers. Error totals are also displayed here, if there are any.

UserScraper: 3 pipes
Runtime: 9 seconds
 |-------- scrape_user_list
           Input: 14/26 read.  53% done
           [=========*          ]  14/26
 |-------- unique 
           Input: 825/825 read. 100% done
           [===================*] 825/825
 |-------- scrape_user
           Input: 8/825 read.   0% done
           [*                   ]   8/825
  |------- [*                   ]   3/66
  |------- [*                   ]   3/92
  |------- [===*                ]   5/22
  |------- [===============*    ]   8/10


A scraper is a function decorated with @scraper, where the decorator's first argument is the URL of the page to be scraped. The response from that URL is passed as a parameter to the function, which should parse it to extract the relevant information.

The scrape_user_list function is fairly simple: it takes a static URL (/list_of_users) and runs a simple CSS query on the response to find the HTML elements we are interested in. It then uses yield from output to send a dictionary to the next stage of the pipeline. We need yield from here because our scraper could produce an arbitrary number of outputs, and yield from ensures that output is buffered until tasks further down the pipeline are ready to handle it.

The dictionary produced by scrape_user_list is used to format the scrape_user URL. So if scrape_user_list produces {'username': 'test'} then scrape_user's URL resolves to /user/test. This is then fetched, the age and profile picture are extracted from the response, and the output is passed on. As this is the last function in the pipeline, its output is written to stdout in JSON format.
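
That URL resolution is just standard string formatting against the output record, assuming a /user/{username} style template as in the example above:

template = "/user/{username}"
record = {"username": "test"}

print(template.format(**record))  # -> /user/test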

The library itself

The library is pretty new: I wrote a 'draft' version that I wasn't very happy with, and this is a rewrite much closer to what I had originally imagined. You can find the code on GitHub, or use pip install cyborg to install it locally.

HtmlToWord is now WordInserter

I've released a redesign of my HtmlToWord library: it now supports Markdown and multiple different ways to interact with Word. It has also been renamed to WordInserter to reflect this.

Originally HtmlToWord was designed to take HTML input, process it and then insert a representation of it into a Word document. I made it for a project at work that involved taking HTML input from a user (created using a WYSIWYG editor) and generating a Word document containing it. I was surprised to find no native way to do this in Word (other than emulating copy+paste, eww), so I made and released HtmlToWord. That library was tied directly to HTML: each supported tag was an individual class responsible for rendering itself.

This quickly got messy, and a later version of the work project used Markdown instead, so I decided to rewrite the library from scratch to handle this. HtmlToWord used the HTML input to create a number of objects, one for each tag, and then called a Render method on each of them. As WordInserter needs to process both HTML and Markdown, I decided to properly decouple the parsing from the rendering, otherwise the library would be full of duplicate code. It now takes any supported input and creates a tree of operations to perform, which is then recursively fed to a specific renderer responsible for processing it (a simplified sketch of this pattern is below).
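
The general shape of the idea looks something like this; an illustrative sketch of the parse/render split, not WordInserter's actual classes:

# Illustrative only: parsers build a tree of plain operation objects,
# and a renderer walks that tree without knowing which format it came from.

class Operation:
    def __init__(self, name, children=(), **attrs):
        self.name, self.children, self.attrs = name, list(children), attrs

def parse_markdown(text):
    # A real parser would build this tree from the input document
    return Operation("document", [
        Operation("heading", [Operation("text", text=text.strip("# \n"))], level=3),
    ])

class WordRenderer:
    def render(self, operation):
        handler = getattr(self, "render_" + operation.name, self.render_children)
        handler(operation)

    def render_children(self, operation):
        for child in operation.children:
            self.render(child)

    def render_heading(self, operation):
        print("Heading level", operation.attrs["level"])
        self.render_children(operation)

    def render_text(self, operation):
        print("Text:", operation.attrs["text"])

WordRenderer().render(parse_markdown("### This is a title"))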

The result is that the code is a lot cleaner and more maintainable, and it supports different ways of taking the input and inserting it into Word. Currently only COM is supported, but if other projects that directly manipulate .docx files mature a bit I can create a renderer that works on Linux in the future.

Using the library is super simple:

from wordinserter import render, parse
from comtypes.client import CreateObject

# This opens Microsoft Word and creates a new document.
word = CreateObject("Word.Application")
word.Visible = True  # Visible so you can watch it work; don't do this in production!
document = word.Documents.Add()
from comtypes.gen import Word as constants  # available once Word has been started

markdown = """
### This is a title

![I go below the image as a caption]()

*This is **some** text* in a [paragraph]()

  * Boo! I'm a **list**
"""

operations = parse(markdown, parser="markdown")
render(operations, document=document, constants=constants)

I have also created an automated test script that renders a bunch of HTML and Markdown documents in both Firefox and Word. This is used to make a comparison document so I can quickly find any regressions or issues. Judging by the number of installs from PyPI and the number of other contributors to the GitHub project this library is useful to some people, and I hope they take a look at the redesign.

Here is a snapshot of the top of the comparison page: