Doggerel on seeing yet another display ad in a mall

I wish they’d stop calling food “naked”
It’s as if they’ve conflated
Unclad with unadulterated.

Antisocial August

I’m spending August off Twitter, Facebook and Tumblr, with a few exceptions – I’ll still be posting deepdreams stuff to last-visible-dog, and I might start a new series on @FSVO.

And I’ll probably show off about the City to Surf next weekend.

If you need to get in touch, there’s email, or comment here – I’ll still be blogging.

Deepdreams Dive Two: the MIT Places Neural Net

Here are some more sample images run on the same randomised coloured fog as the last dive. These are done with a different neural net, based on the MIT Places database. They lack the biomorphic horrors of the default net, but they have a kind of weird beauty.


3a. Low-level features, and a tendency to diffract colour


3b. These look curiously suggestive of the spiral patterns on ancient Celtic artefacts.


4a. You can really see how the Places net gets cues from colour here: as you might expect, the green parts seem to want to turn into parks.


4b. This layer is just crazy for chairs. And windfarms.


4c. Pagodas as far as the eye can see.


4d. The wind farms are back, and the pagodas are starting to look like stupas.


4e. It’s strange how at the higher levels, this model starts to go wonky. There are amphitheatres forming in the lower left.


5a. Oddly specific details here: a watertower and a fountain.


5b. This is the only layer in this net that I find very unsettling. There are suggestions of distorted faces, and something like a pine-forest in the middle.

Nine Layers of the DeepDream Algorithm Ranked in Order of Eldritch Abominationhood

Like many nerds, I’ve spent a lot of spare time over the past week playing with the open-source code for Google’s DeepDreams visualisations. This runs a visual-recognition neural network—basically, an artificial model of a visual cortex which has been trained on a big image database—through a feedback loop, which adds all sorts of psychedelic hallucinations to images. (If you are interested in the technicalities, here’s my post on how I got it running. If you want to get started with less fuss, I’d recommend Ryan Kennedy’s containerised version.)

Most of the output has only used a single layer of the net: there are nine available layers [as it turns out, there are stacks more than nine, but these are the ones named “output”] so I set up a loop to apply each of them to a single image (some randomly-generated colour noise) and see what the different effects were. Here they are, ranked in ascending order of eldritch horror.
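For anyone wanting to reproduce the experiment, here is a minimal sketch of that loop. The layer names are the nine “output” layers from the BVLC GoogLeNet model the original notebook uses; the `deepdream` and `net` objects come from Google’s released code and are only shown in a commented-out sketch, and `colour_noise` is my own stand-in for the random coloured fog:

```python
import numpy as np

# The nine "output" layers of the GoogLeNet model used by the
# original DeepDream notebook (Caffe layer names).
OUTPUT_LAYERS = [
    "inception_3a/output", "inception_3b/output",
    "inception_4a/output", "inception_4b/output",
    "inception_4c/output", "inception_4d/output",
    "inception_4e/output", "inception_5a/output",
    "inception_5b/output",
]

def colour_noise(width=640, height=480, seed=None):
    """Random coloured fog to use as a starting image (uint8 RGB array)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)

# Sketch of the loop itself -- `deepdream` and `net` are defined in
# Google's open-sourced notebook, not here:
#
# img = colour_noise(seed=42)
# for layer in OUTPUT_LAYERS:
#     result = deepdream(net, img, end=layer)
#     save_image(result, layer.replace("/", "_") + ".jpg")
```

The `end=` keyword is how the released notebook selects which layer the gradient ascent maximises, which is all the loop needs to vary.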

3a. This is the lowest-level layer, so it’s all rather abstract and decorative in a 1950s-public-art way. Like bacteria, or tapeworms, as my daughter said.

Layer 3a

4b. When I set this experiment running, I expected the results to get steadily more nightmarish as they became more figurative, but I was wrong. These incredibly stoned dogs are almost cute.

Layer 4b

4a. The level preceding 4b is similar, except that the dogs look pissed-off. I call this one “The Seer”.

Layer 4a

4c. This is the default layer which the open-sourced code uses: “puppyslugs”, or, I guess, eventually, “DeepDreams classic”. Here we see a single puppyslug in its native habitat of trippy-as-hell eye-cobwebs.


If this gives you the creeps, close the tab now. We’re not even halfway. The Google researchers mercifully withheld the worst.

5a. While layers 3b, 4a, 4b and 4c all show a distinct tendency to furriness, at 5a we’ve left the mammalian world far behind. This is the second-last layer, but even though it’s a mass of exoskeletons and scales, it’s still creating things that you could imagine seeing. Through a microscope, in a drop of very bad pond water. When you’d dropped acid.

A riot of carapaces and scaly writhing forms

3b. This is a step back to the lower levels of abstraction, and at first it seems like interesting fuzzy spirals and loops. Until you notice the… well, they’re not exactly dogs. Doglike features. Cells, even. Eyes. Emerging.


5b. The Elder Gods. This is the final layer in the neural-net stack, and it’s pretty bad. At least the flea-fish-microbe things of 5a had three-dimensional-ish bodies. At least the puppyslugs had… faces. I miss the puppyslugs. I miss their little weird faces.


4e. No.



Are those… apes? Eating a piano-accordion. Inside cobwebs. (Don’t look too closely at the cobwebs.) This is a Max Ernst nightmare.

4d. China Miéville’s fictional fantasy world Bas-Lag has a region called The Torque, a relic of an ancient war, which makes things go wrong. I never really cared to see anyone try to visualise its effects.


Yeah, nah.

Day 30 – a song you discovered during the Challenge

The 30 day song challenge has made me realise that I need to get out (of the 80s and 90s) more. Alpine’s “Hands”, which I discovered when Kate posted it on her blog for day 3 (a song that makes me happy), is great:

Thanks for following along with me: it’s been fun.

Day 29 – a song you want played at your funeral

I am not happy with this prompt, because thinking of what song I want played at my funeral is the sort of thing I do when I’m very depressed. But The Books’ vocals always remind me of Simon and Garfunkel, and the final track from their 2005 album Lost and Safe, “Twelve Fold Chain”, is one of the few songs about death which I find comforting.

Day 28 – a song from your childhood

I absolutely loved it when my parents put their tape of Simon and Garfunkel’s Bridge Over Troubled Water on in the car: it always makes me think of driving down the coast.

“The Boxer” was my favourite song at the time, for the way the sound and arrangement slowly build. It also contains my favourite personal mondegreen: I can’t hear it without thinking about that horse on Seventh Avenue.