In the last couple of years I've become more and more obsessed with e-paper displays. I partly blame my Kindle – by far my favourite and the best electronic device I own to date – and my love for 1-bit graphics.

My first computer was a Macintosh SE, and most of my early years in front of a monitor were spent experimenting with HyperCard: you tend to develop a certain (life-long) taste for black and white graphics.

Before the summer I bought an Inky pHAT display and put it to use with a spare Raspberry Pi Zero W I had in a drawer in the office, and my old PlayStation Eye camera.
The PlayStation Eye works "out of the box" with the Pi: taking pictures with it is pretty straightforward. The next logical step for me was to display them as beautiful dithered black and white images on the small e-paper display.

Disclaimer: I cannot really code! The code I produced to make this work is a fierce cut & paste job, 'cause I was eager to have my "prototype". If anything, working on the script pushed me to buy a copy of Learn Python 3 the Hard Way.

The aim of this post is to link and mention all the sources (guides, personal blogs, forum posts, and online manuals) that I used and that enabled me to create my little prototype and waste the better part of a rainy weekend.

What does it do? Why “Atkinson Machine”?

In short: it takes a picture, crops and dithers it, and, last, adds some red overlaying images (mostly patterns).
Pimoroni's displays are capable of producing black, white, and one additional colour: red, in the case of mine.
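The crop-and-grayscale step can be sketched roughly like this. This is my own illustration, not the actual script: `prepare` is a hypothetical helper, and I'm assuming the Inky pHAT's 212×104 resolution (the wHAT is 400×300).

```python
from PIL import Image

# Inky pHAT resolution (the wHAT would be 400x300 instead)
TARGET_W, TARGET_H = 212, 104

def prepare(img):
    """Centre-crop a capture to the display's aspect ratio,
    resize it, and convert to 8-bit grayscale ready for dithering."""
    target_ratio = TARGET_W / TARGET_H
    w, h = img.size
    if w / h > target_ratio:              # too wide: trim the sides
        new_w = int(h * target_ratio)
        left = (w - new_w) // 2
        img = img.crop((left, 0, left + new_w, h))
    else:                                 # too tall: trim top and bottom
        new_h = int(w / target_ratio)
        top = (h - new_h) // 2
        img = img.crop((0, top, w, top + new_h))
    return img.resize((TARGET_W, TARGET_H)).convert('L')

shot = Image.new('RGB', (640, 480), (120, 180, 90))  # stand-in for a webcam frame
small = prepare(shot)
print(small.size, small.mode)
```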

The specific dithering process applied to the image is the same one used in early Apple computers, and it was developed by Bill Atkinson who, later, was part of the team that gave us the Macintosh and created HyperCard.
It is safe to say that Bill created a series of (amazing) tools that, in the late '80s and early '90s, propelled me, and a lot of other like-minded people, into careers as digital designers, developers, and engineers on the early web.

"The bicycle for the mind."

But I digress…

You can "Atkinson-dither" your own images directly online here:
https://gazs.github.io/canvas-atkinson-dither/

On my "Atkinson Machine", instead, I'm using Mike Teczno's Python implementation of it:

import sys, PIL.Image

# open the image named on the command line and convert it to 8-bit grayscale
img = PIL.Image.open(sys.argv[-1]).convert('L')

# lookup table: values 0-127 map to black, 128-255 to white
threshold = 128*[0] + 128*[255]

for y in range(img.size[1]):
    for x in range(img.size[0]):

        old = img.getpixel((x, y))
        new = threshold[old]
        err = (old - new) >> 3  # divide the error by 8

        img.putpixel((x, y), new)

        # push 1/8 of the error onto each of six neighbouring pixels
        for nxy in [(x+1, y), (x+2, y), (x-1, y+1), (x, y+1), (x+1, y+1), (x, y+2)]:
            try:
                img.putpixel(nxy, img.getpixel(nxy) + err)
            except IndexError:
                pass

img.show()

See also David Lindecrantz's nifty "1-Bit Camera" for iPhone (sadly no longer available/supported on modern iPhones) and live processing by Windell Oskay here.

Tests and more tests

The picture above shows one of the initial tests with the PlayStation Eye and a newer Pi 3A+.
I then replaced the pHAT display with a bigger wHAT and bought the smaller Raspberry Pi Camera to replace the USB webcam.
The only differences are the connector used and moving to the PiCamera library instead of the fswebcam command to capture images.
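The two capture routes look roughly like this. The fswebcam flags are real options, but the resolution, filename, and the `fswebcam_cmd` helper are illustrative choices of mine; both calls obviously need the actual camera attached to run.

```python
import subprocess

# fswebcam route (USB camera such as the PlayStation Eye)
def fswebcam_cmd(path, width=640, height=480):
    # -r sets the resolution, --no-banner removes the timestamp strip
    return ['fswebcam', '-r', f'{width}x{height}', '--no-banner', path]

# subprocess.run(fswebcam_cmd('capture.jpg'), check=True)  # needs a camera

# PiCamera route (the official Pi Camera on the CSI connector):
# from picamera import PiCamera
# with PiCamera() as camera:
#     camera.capture('capture.jpg')

print(' '.join(fswebcam_cmd('capture.jpg')))
```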

The operations are pretty straightforward, as listed before: the image is taken, cropped, converted to grayscale, and then dithered using Python's PIL and Wand.
It is then merged with the red overlays, picked randomly from an array of PNG files, and (this was a painful part) the three-colour palette has to be rearranged in the correct order, otherwise the e-paper display will render the image in the wrong colour combination or in negative.

# image is correct up to negative.png, but the palette is in the wrong order:
# create a reference palette and quantize the image against it
from PIL import Image

PALETTE = [
    255, 255, 255,  # white,  00
    0,   0,   0,    # black,  01
    255, 0,   0,    # red,    10
] + [0, ] * 252 * 3

# a palette image to use for quantizing
pimage = Image.new("P", (1, 1), 0)
pimage.putpalette(PALETTE)

# open the source image
image = Image.open('negative.png')
image = image.convert("RGB")

# quantize it using our palette image
imagep = image.quantize(palette=pimage)

# save
imagep.save('positive.png')
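The overlay-merging step isn't shown above; a minimal sketch of how it can be done with PIL's `paste()` follows. The tiny generated images are stand-ins (in the real script the overlay is a random PNG from a folder); using the RGBA overlay itself as the paste mask keeps its transparent areas untouched.

```python
from PIL import Image

# stand-ins: a white "photo" and a transparent overlay with a red diagonal
photo = Image.new('RGB', (8, 8), (255, 255, 255))
overlay = Image.new('RGBA', (8, 8), (0, 0, 0, 0))
for x in range(8):
    overlay.putpixel((x, x), (255, 0, 0, 255))

# paste with the overlay as its own mask: only opaque pixels land on the photo
photo.paste(overlay, (0, 0), overlay)

print(photo.getpixel((3, 3)), photo.getpixel((0, 3)))
```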

The last thing I added is a "fat bits" option that makes the pixels more visible: the original picture is taken at half size and then enlarged 200% without any interpolation (keeping hard edges, showing the pixels).
Fat bits on the left, standard on the right:
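The fat-bits trick boils down to one PIL call: resize with nearest-neighbour sampling, so each source pixel becomes a crisp 2×2 block instead of being smoothed away. A minimal sketch on a stand-in image:

```python
from PIL import Image

# half-resolution source standing in for the camera capture
small = Image.new('L', (2, 2))
small.putpixel((0, 0), 255)   # one white pixel, three black

# 200% enlargement with no interpolation: hard edges, visible pixels
big = small.resize((small.width * 2, small.height * 2), Image.NEAREST)

print(big.size, big.getpixel((1, 1)), big.getpixel((2, 2)))
```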

It has been running non-stop, pinned to a corkboard above my desk for the last few months, taking a new picture every four minutes.
The plan is to add a few more functionalities:

• Option to save the pictures, as at the moment it intentionally overwrites them;

• Option to upload the pictures to an Instagram account (this will require some config screens and potentially some input devices somewhere);

• Embedding it all in an actual frame and adding a motion sensor to trigger the shutter, instead of keeping it on a four-minute loop;

• Generating specific graphic overlays based on geolocation.

But I should clean up my existing spaghetti code first.

Extra resources:
Increase exposure time on the Pi's camera
Wand: cropping and resizing
Fixed colour palettes with PIL
PIL image blend modes