Category Archives: computers

Here Comes Everybody

So literally the day after blogging about being off Twitter and #WhyIStayOnMastodon, a big crowd of Australian left Twitter, including most of my mutuals, showed up in the Fediverse. So much for my plan to ease off the compulsive Mastodon checking.

This isn’t the first wave of birdsite refugees to hit Mastodon and it won’t be the last, although there are signs that we’ve reached some sort of threshold where the new arrivals start finding enough to talk about that they stick around. But I thought I should post a few things about what I’ve noticed about the place, and how it is, and isn’t, like Twitter.

It’s not just Mastodon. There are different types of software in the federated social media sphere which can federate with one another using the OStatus and ActivityPub protocols (OStatus was developed for GNU Social, which came first). There’s also Pleroma, which is younger than Mastodon. And there’s drama.

There’s also drama within Mastodon, most interestingly about how the project is governed. This is normal and healthy, and how free software should work.

There’s no one Fediverse. This has always been true of Twitter, which is what phrases like “X Twitter” imply: different slices of the thing have their own cultures. It’s even more true in the fediverse, where each instance has its own moderation policy, (sometimes) its own topical focus and (more rarely) custom software modifications: there’s an instance which rejects all posts containing the letter ‘e’, and its anti-instance. Mastodon’s culture of strong local moderation and content warnings — and its reputation as a safe haven for furries, people of colour and queer and trans folk who were fed up with Twitter’s miserable efforts at countering abuse — can give the impression that it’s trying to be one big earnest safe space, as can the anti-Nazi policies which most instances, particularly the ones in EU jurisdictions, are happy to enforce. But some instances are, or have been, a lot more 4chan-like. Any fediverse server is free to block federation from any other, and in late 2017 and early 2018 there seemed to be some kind of instance block war going on, although I wasn’t paying too much attention to it. I think it reached some sort of equilibrium, but a mass influx of shitposters from political Twitter is just the sort of thing which could fire things up again. Conflict of this sort is inherent in the federated model, and there’s no telling what will happen if things really snowball.

Actually, content warnings are good. If I were asked to sum up the difference in ethos between Mastodon and other services I’ve used, I’d say that it tries to give you the most explicit control about what you let people see, on the most granular level. That’s why the privacy settings for individual posts can seem over-the-top, but actually make sense, and it’s why content warnings are a really good way to communicate to your readers what a post contains or is trying to do. Spoiler alert: it’s about letting people not read your post if it’s not relevant to them or would harm them, or give them spoilers. And when used properly, they can add the crucial dimension of timing to a good shitpost. (Apparently, Pleroma calls them ‘subjects’, so I guess someone was triggered by the term ‘content warning’).

Irony still works here. As do the usual range of shitposting strategies. (People were trying to get ‘pooptooting’ happening, but it never took off.) I’ve seen people say things like “will there be weird Mastodon like there was weird Twitter”, as if the whole lifecycle of the platform will recapitulate itself, but social media is Heraclitus’ river where the water is made of terrible memes and references to 90s culture, and you can’t step into it twice, nor would you necessarily want to. But Mastodon has been evolving its own vocabulary of in-jokes, because it’s full of clowns like you and me.

Earnestness works too. It still feels like it’s at the stage where you can make connections with people about shared interests, and the communities haven’t gotten too hidebound. It’s still absurdly friendly, if you’re used to Twitter. It can also be really long-winded and obsessive.

It has its problems. It’s still got too many straight white blokes who work with computers on it, and if anything, the recent Twitter influx seems to be making that worse. I don’t know what we can do about that other than to follow, pay attention to and boost other voices as much as possible.

Oh, and retweets are boosts now, which is what they always were. I’ve always thought that the best Twitter filter would be to block everyone who has a ‘RT ≠ endorsement’ disclaimer in their bio, and Mastodon has made it explicit: if you spread something around, you’re helping it, whether you like it or not.

If you give it a go, my primary Mastodon account is


Hush! Caution! Echoland!

It’s a cliché to speak of Ulysses as an endpoint, the culmination of the nineteenth century novel, finally bled dry of plot and incident and to a certain extent character (we only feel that we know Stephen and Bloom so well because we spend so much time with them) and erupting into a monstrous growth of period detail and stylistic parody. It’s not the last station on the line, but at least, unlike with Finnegans Wake, one can still pretend it’s something like a readable work of fiction.

But Ulysses was my gateway into mainstream literature: before that, excepting what I was forced to read for educational purposes, I’d only read science fiction and fantasy. Literature was too boring, just a bunch of normal people doing grown up stuff. Ulysses was different: the first handful of chapters were pretty slow and contained a great deal of matter relating to Thomas Aquinas which I let slide by in peaceful incomprehension. But once the newspaper headlines started in the seventh chapter it started to get fun, if not easier to understand.

So for me, it’s always felt like a starting point, not a conclusion. It’s not exactly a friendly introduction to the Western canon, but there’s a lot of writers I first heard of, or was exposed to parodies of, under its influence. And its attitude of “hey, keep up with this if you can”, the sense you get of being complicit with someone taking everything they knew about every book they’d ever read for a dance, is exhilarating.

This post started out as another very short science fiction story, which is what I usually post here on Bloomsday, but it felt like it was getting into territory I’ve covered too often: a sort of dystopian scenario where after the Singularity, or some parody thereof, the AIs really do reconstruct Dublin from Ulysses, and put a bunch of human consciousnesses in it, and it’s terrible, like being trapped in a Bloomsday costume party for all eternity. I gave it up because, for one thing, I was unconsciously plagiarising part of a short story by Ian Watson from the 80s called, I think, “The Bloomsday Revolutions”. (I thought of Ian Watson for the first time in years the other day. He’s a good writer, look him up if you get the chance.)

The other reason I stopped was that I’m weary of science fiction being about computers and AI. I think that the real event underlying the Singularity is the collapse of the sf imagination into the computational. Too many of my own attempts to write longer pieces of fiction have gotten stuck or faltered for the same reasons.

I was remembering Jorn Barger, the guy who coined the word ‘blog’ and had a kind of internet celebrity which then dissolved into anti-Semitism and silence. Barger was an autodidact Joyce fan and had a site called “IQ Infinity”, the central thesis of which was that in Ulysses and Finnegans Wake, Joyce had solved AI, that through sheer brainpower he’d comprehended how the human mind worked. I don’t remember, if I ever really understood, exactly in what sense Barger thought that the works themselves constituted artificial intelligence: could one create the personality of “Leopold Bloom” from the text if it were somehow transformed into software? As another Joyce fanboy I can understand Barger’s reverent awe — without sharing it to that extent. And looked at dispassionately, it’s a ridiculous and self-infatuated idea: if only everyone else loved my favourite author as much as I do, they’d understand how consciousness works, too.

I was also thinking of Ted Nelson’s Xanadu project, the origin of the term ‘hypertext’: I’d known about it for decades, and based on what I’d read of his writing at various times, I thought of him as a crank, embittered by the success of the web. Xanadu proposes a much more complex way of linking and embedding documents within one another, with links that go both ways, and an elaborate system of building a top-level document from a variety of sources. Having come across this summary by a recent participant, I sensed, in an obscure way, the allure of this vision of a global network of interpenetrating words. But in another way, it feels nightmarish.

In my mind, Xanadu’s “transclusion” is a codified and rigid version of the sort of association of ideas which the reading mind does in a flexible way all by itself. All writing depends on this, but it’s essential to a text like Ulysses, and even more so Finnegans Wake. Rather than narrating, “Stephen thought about Aquinas’ doctrines of sense perception as he walked along Sandymount”, Joyce interpolates “the ineluctable modality of the visible” and so on, all those weird terms I didn’t understand the first time, leaving it to the reader to either follow the echoes, if they are aware of the reference, or, if not, to fold the unusual texts into their own memory, to be echoed later or in other texts.
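To make the “codified and rigid” point concrete, here is a toy sketch of transclusion — emphatically not Xanadu’s actual data model, just the bare idea: a document is a list of pieces, some of them literal text and some of them references to spans of other documents, so the same words appear in many places without ever being copied. All the names (`Piece`, `resolve`, the sample library) are mine, invented for illustration.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type SourceId = String

-- A piece of a document: either literal text, or a span of some
-- other document (which source, where it starts, how many characters).
data Piece = Text String | Span SourceId Int Int

type Library = Map SourceId String

-- Resolve a document (a list of pieces) against a library of sources,
-- pulling each transcluded span out of its source text.
resolve :: Library -> [Piece] -> String
resolve lib = concatMap render
  where
    render (Text s) = s
    render (Span src off len) =
      case Map.lookup src lib of
        Just body -> take len (drop off body)
        Nothing   -> ""

lib :: Library
lib = Map.fromList [("proteus", "Ineluctable modality of the visible")]

doc :: [Piece]
doc = [Text "Stephen walks along Sandymount: ", Span "proteus" 0 20]

main :: IO ()
main = putStrLn (resolve lib doc)
```

The nightmarish part is that the `Span` is explicit and permanent: the echo the reading mind makes for itself becomes a typed, addressable record.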

I love this process: to some extent, what I’ve just described is what being literate means to me. But I enjoy doing it with my own mind, or letting my own mind do it for me, and the thought of it being made explicit, with coloured markers joining the texts in different columns, makes me queasy, as do the very few working xanalogical demos.

I should add that sometimes just reading Joyce gives me the same feeling of vertigo. There’s a central image, or nightmare, behind these different incarnations, a cousin of Borges’ total library, the idea of mind as a sort of infinite glossary. It makes sense that my imagination, in trying to come up with a response to Ulysses as Bloomsday comes around each year, would return to the machines with which I work, and the fantasy that one day they’ll be able to read our favourite books so well that they’ll bring them to life.

I often get the same feeling reading blogs from the rationalist and AI risk communities. I suspect that these are not so much a school of philosophy as a literary genre, in which people with a very particular form of intelligence — discursive, articulate, fond of numerical arguments, insistent that any discipline can either be reduced to economics or physics, or is empty or misleading — imagine, with the same kind of self-infatuation, that magnified forms of this form of intelligence will either save or wreck the world. Earlier this year, I got so compulsive about reading this kind of thing that I had to use a site-blocking extension to stop myself.

I console myself with the idea that Joyce, had he lived in our era, would have been very bad (one imagines with glee his towering contempt and exasperation) at using computers.

Speculative Execution

Speculative execution is not exactly how thought works, it’s how you work without thinking about it. When philosophers talk about determinism versus free will, they treat the brain as if it were a black box with memory and sensory perceptions going in and actions coming out, with a clear sequence of causality from the first to the last. For the determinists, this is enough. For those who believe in free will, there’s an extra something special added at some point — the Soul, some kind of quantum magic going on in the synapses, whatever sort of swerve away from clockwork perfection seems convincing this decade — but it’s just another station on a linear progression.

Cognitive psychology and neuroscience undermine all of this because the brain doesn’t work like a black box. Without your noticing, it’s continually second-guessing and anticipating in all sorts of different ways. Your visual field is not the beautiful and transparent 360-degree spherical sensorium, God’s own VR headset, that you think it is: it’s a little dot of fine-detailed vision in constant motion with the gaps filled in by how your visual centres have come to assume that the world works. You anticipate what other people are about to say; your own words come tumbling out of your mouth without any conscious composition. The mind isn’t some Cartesian homunculus behind your eyes, marshalling inputs and emitting appropriate commands like some idealised 18th century lord. It’s a democratic and noisy playroom of independently-acting modules, all fighting for what little bandwidth your senses and memory afford them, and only too keen to proceed as far as they can on what guesses they can make.

And just as in CPUs, the goal of all this mess, this willingness to go out on a limb, is efficiency. Err on the side of caution if you think there’s a predator or, more realistically, the hostile or mocking attention of your peers; get distracted by anything which seems promising, an attractive person or an appetising aroma, because who knows that it might not be your last chance.
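The CPU pattern being borrowed here can be sketched in a few lines — nothing like real silicon, just the shape of it: a predictor guesses which way a branch will go, work proceeds on the guess, and when the actual condition finally resolves, the work is either committed or thrown away and redone. Everything here (`Outcome`, `speculate`) is my own illustrative naming.

```haskell
-- Did the speculated work survive, or was it discarded and redone?
data Outcome a = Committed a | RolledBack a deriving (Show, Eq)

speculate :: Bool          -- the predictor's guess at the branch
          -> Bool          -- the condition, once it actually resolves
          -> (Bool -> a)   -- the work that depends on the branch taken
          -> Outcome a
speculate guess actual work
  | guess == actual = Committed (work guess)    -- the guess paid off
  | otherwise       = RolledBack (work actual)  -- backtrack, re-execute

main :: IO ()
main = do
  print (speculate True True  (\b -> if b then "exit ramp" else "stay"))
  print (speculate True False (\b -> if b then "exit ramp" else "stay"))
```

The win is that when the guess is right you've already done the work; the cost, as the rest of this post is about, is everything you computed down the path not taken.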

That’s the evolutionary story, and while we like to locate the life-and-death struggles behind the bundle of hacks we call consciousness in the savage prehistoric past, think of how much more we need to rely on speculative processing in the buzzing and blooming and overcrowded Umwelt we’ve built around ourselves. Sure, we might have evolved on the savannah, but all of these words and walls and works and wills and won’ts are what we’ve built to suit us, and they give our phantom selves such a lot of opportunity to run down the paths of might-have-been.

You’re about to change lanes and you map out the trajectory towards the exit ramp but: there’s someone coming up the inside. Backtrack. You’re indulging in a daydream fantasy about an attractive co-worker and then have to be polite and efficient with him for an hour-long team meeting. Backtrack. You’re following the plot of a movie and then what is he doing? Didn’t she get shot? Backtrack.

And this is just on a small scale. You marry young, anticipating decades of mutual happiness, only to have to unpick it all in a messy divorce in your early thirties. You choose a degree based on a school friend you hero-worshipped but get sidetracked out of it and have to explain it away for the next decade. A swarm of ghost lives, decisions and commitments and purchases and options which, if we’re lucky, we get to retrospectively make sense of, justify, tell ourselves it was destiny or fate, that it was what we were aiming for all along, what we really needed. But perhaps the truth, and it need not be an unkind one, is that a human life needs a sort of virtual scaffolding of possibilities, that the might-have-beens which we’ve unconsciously or consciously rejected are what hold us together.

Certain mental illnesses and mood disorders can be seen as a perversion of this tendency. Depression is the paralysis brought on by too keen an awareness of the sheer volume — number is too narrow a word — of possibilities exploding from every moment; anxiety is a failure of the shielding which lets our minds evaluate them without bothering us with the nagging sense that we are dancing over an abyss. In the manic phase of bipolar disorder there is a dimming of the red light and the bell that clangs to signal that it’s time to backtrack, and impulses are followed through to their destructive last.

It doesn’t take very much paranoia to imagine that our brain’s talent for speculative execution could be an exploitable vulnerability. Maybe back in the days of the savannah — any predator will have a keen instinct for the false steps and feints of its prey — but now? The misdirection of the magician, the fortune teller’s cold read, the confidence of the con artist, sure in their ability to anticipate just how far down the garden path their marks will lead themselves. The manipulative and abusive, those who gaslight and interrogate, the grandstanding attorney and the demagogue: do they not take their victim’s or audience’s might-have-beens and magnify them into terrors or seductions? Facebook keeps a record of not only the posts you write, but those you cancel. The algorithms that watch us will have a better map of our shadow self than we will, seeing all the links we follow and then hurriedly click shut, the people we stalk, the products we dare not purchase.

Except that we know from a hundred ads which clumsily ape our ten most recent Google queries that the algorithms are not yet that subtle. The idea that our brains could be hacked by means as delicate as those which can be used to steal the ghosts of data from the might-have-beens of CPU caches is science fiction. And what is fiction, if not a way to coax an audience into the speculative execution of a series of thoughts, a shared illusion, a thing which could never be?

Waiting for Broadband

I’m home from work this morning, waiting for a Telstra technician to come and activate the phone line in my new flat so that my ISP can switch my broadband over. It’s about six weeks since I got approved to move here, and almost four weeks since I moved in.

This has got me thinking a fair bit about Australian broadband, and about how people complain about it, and why I think they’ve been complaining about the wrong thing, and how it’s probably too late, but anyway:

The problem with Australian internet is not that it’s slow, it’s that it’s not a utility.

Australian complaints about the speed of the internet fall into two categories:

  • Robot surgeons on the Moon, and
  • I wanna pirate that rapey dragon show faster

Neither of these are a good argument for the NBN, because only a few places are going to need some kind of massive network connectivity to do gee-whiz immersive futuristic stuff that isn’t actually happening yet, and because complaining that you can’t watch Game of Thrones intersects with another big tedious argument, the one about copyright and how unfair it is that Australia doesn’t get everything at the same time as America. And if you wanted a show that would make broadband seem like a compelling mainstream issue, GoT is exactly the opposite of that show.

What we should have got out of the NBN, and what I would love right now, is for network access to become like power, water and gas: so we don’t have to layer it on top of an analogue voice network and deal with two levels of call centres. My flat is hard to identify in databases because it has a couple of different addresses, which is the main reason for the delay. I had this issue with the power, too, but the way I resolved that was to walk out to the backyard and write down the number on the meter.

I should have been able to do something that simple so that my kids could do a bunch of boring but necessary stuff, like read their email and access their school’s website, immediately after we moved in.

Anyone who is still talking about ‘futureproofing’ in this context needs to be put in a box and left out for the council cleanup. Our network infrastructure isn’t even presentproof.

“to get a new one is the best thing ever”

I love you so much fun and I have to be a good day to be a good time to get a new one is the best thing ever is when you have to be a good day to be a good time to get a new one is the best thing ever is when you have to be a good day to be a good time to get a new one is the best thing ever

(Written by choosing the first word suggested by iOS 8’s visual autocorrect)

Learning Haskell

Chris Okasaki – Purely Functional Data Structures
Simon Thompson – Haskell: The Craft of Functional Programming
Bryan O’Sullivan, Don Stewart, and John Goerzen – Real World Haskell

My team at UTS, the eResearch Support Group, has just finished off two major projects, for which I wrote a lot of integration code in Perl and hacked around a lot more than I cared to in Velocity, a horribly ugly template library for Java which is now at the top of my least-favourite-technologies list.

Despite the fact that my job involves less coding than it used to, and that this proportion is likely to drop further, I needed to clean all the glue out of my brain, so I’ve returned to teaching myself Haskell. The last time I attempted this was about five or six years ago: since then, an open-source ecosystem seems to have developed around what was once a very academic language. There are web frameworks and package repositories and things which aren’t really possible in other languages, like a search engine by type signature.

The best way to learn a language is to do something useful with it, and I have a work project for the end of next year in mind: whether Haskell is the right choice or not is something I haven’t quite decided yet.

I read the first two of the three books a couple of months ago: The Craft of Functional Programming was a good way to remind myself of the basics, but I didn’t have the time or energy to work through the exercises, so it didn’t stick. I looked up Purely Functional Data Structures because a friend who is much brighter than I am mentioned it; it’s not so much about Haskell per se, as it is a demonstration that the kind of reasoning which makes for efficient data processing in imperative languages can be applied to functional ones. Not having a background in computer science, I think I understood about forty percent of it, but it left me with the feeling that those who deeply understand such things will have taken care of the details and written great libraries for me to use.

I’m currently reading Real World Haskell properly and doing all of the exercises in spare moments and it’s a lot of fun. I think I’ve even started to understand, or remember, what monads are. (In a side-effect-free programming environment, state – or a sequence of imperative instructions, which is really just a particular kind of state – can be modelled as a sequence of nested evaluations.)
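The parenthesis above can be made concrete with a minimal hand-rolled `State` — not the one from the standard `mtl` library, but the same idea stripped down: “state” is just a value threaded through nested function applications, and the monad instance is what hides the threading.

```haskell
-- A stateful computation is a function from a state to a result
-- paired with the new state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s ->
    let (a, s') = g s         -- evaluate, producing a new state...
    in runState (f a) s'      -- ...and feed it to the next evaluation

-- An "imperative" counter: returns the current count, increments it.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)
```

Sequencing three `tick`s from an initial state of 0 yields `(2,3)`: the result of the last tick, and the final state — a sequence of imperative instructions, modelled as nested pure evaluations, with no side effects anywhere.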

Google image search
