Robin Sloan & Writing With The Machine

I am just so compelled by the notion of a text editor that possesses a deep, nuanced model of… what? Everything ever written by you? By your favorite authors? Your nemesis? All staff writers at the New Yorker, present and past? Everyone on the internet? It’s provocative any way you slice it.

I should say clearly: I am absolutely 100% not talking about an editor that ‘writes for you,’ whatever that means. The world doesn’t need any more dead-eyed robo-text.

The animating ideas here are augmentation; partnership; call and response.

The goal is not to make writing “easier”; it’s to make it harder.

The goal is not to make the resulting text “better”; it’s to make it different — weirder, with effects maybe not available by other means.

Robin Sloan, “Writing With The Machine.” robinsloan.com. May 2016.

Robin Sloan hacked together some software that uses a neural network, trained on a corpus of text (e.g., Shakespeare), to suggest ways to complete the sentences you write. He’s right that it doesn’t make writing easier. It makes it harder, because it essentially implants non sequiturs into your writing that then have to be thoughtfully incorporated or erased. But it does send your mind off in directions it would never take if you were merely composing something on your own.

To try it, you have to install Torch, torch-rnn, torch-rnn-server, the Atom text editor, and rnn-writer. It’s probably easiest to get it going on Linux or macOS. The instructions are not entirely clear, and I failed to get it working the first time I tried. I made a second attempt yesterday, and I got it to work. The main thing is to work through all the instructions; when a step fails, look for alternate flags or a way to do it manually. I also hadn’t realized that torch-rnn-server has to be cloned with git, and that you have to change into that directory to get the server to run.
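For reference, the part that tripped me up looks roughly like this. The clone URL is Robin Sloan’s actual repo, but the `th server.lua` invocation is from my memory of its README, so double-check there if it doesn’t start:

```
# Clone Robin Sloan's server (a thin wrapper around torch-rnn)
# and run everything from inside the cloned directory --
# this is the step I originally missed
git clone https://github.com/robinsloan/torch-rnn-server.git
cd torch-rnn-server

# Start the completion server with Torch; see the repo's README
# for the flag to point it at a specific model checkpoint
th server.lua
```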

Also, you should use the pre-trained model to make sure everything works first. Get the server running, connect it to your Atom editor and get a feel for the possibilities. However, you’ll probably find that the pre-trained model leaves quite a bit to be desired.
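Before fiddling with Atom, it’s worth poking the server directly to confirm it’s listening. Port 8080 is what I recall rnn-writer defaulting to; the exact endpoint path is in the torch-rnn-server README, but any HTTP response at all tells you the process is up:

```
# Sanity check (assumes the default port of 8080; adjust if the
# README says otherwise) -- even an error page means it's running
curl -i http://127.0.0.1:8080/
```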

But training the model on a text of your own choosing can take quite a while on a consumer-grade computer. I picked the King James Version of the Bible as my reference text, and I’m doing the training on an old laptop. By my calculations, it will take around 70 hours to train the model. It’s not a trivial exercise.
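For the curious, training in torch-rnn is a two-step affair: preprocess the raw text, then run the training script. The flags below are the ones documented in the torch-rnn README; `kjv.txt` is just my stand-in filename, and `-gpu -1` forces CPU mode on a machine without a usable GPU:

```
# Convert raw text into the HDF5 + JSON format torch-rnn expects
python scripts/preprocess.py \
  --input_txt data/kjv.txt \
  --output_h5 data/kjv.h5 \
  --output_json data/kjv.json

# Train on the CPU (-gpu -1); checkpoints are written to cv/
th train.lua -input_h5 data/kjv.h5 -input_json data/kjv.json -gpu -1
```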

After trying the pre-trained model, it probably makes the most sense to train on the tiny-shakespeare.txt file included with torch-rnn-server, as a first walk-through of training your own network on a specific text. That way you aren’t spending several days training something you aren’t sure is going to work.
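Once a small run like that finishes, you can sample from the newest checkpoint to see whether the model actually learned anything before committing to a multi-day job. The checkpoint filename below is illustrative; use whatever the latest file in cv/ happens to be:

```
# Generate 500 characters from a trained checkpoint, CPU-only
th sample.lua -checkpoint cv/checkpoint_10000.t7 -length 500 -gpu -1
```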

After that, take a look at Project Gutenberg. I can imagine neural networks trained on years of our emails, Montaigne, old slang dictionaries, or Grimm’s Fairy Tales, each to different effect on our writing. There seems to be a world of possibility in this approach.

Good luck!