Gardener’s Vision

“Without communication, connection, and empathy, it becomes easy for actors to take on the ‘gardener’s vision’: to treat those they are acting upon as less human or not human at all and to see the process of interacting with them as one of grooming, of control, of organization. This organization, far from being a laudable form of efficiency, is inseparable from dehumanization.”

—Os Keyes, “The Gardener’s Vision of Data.” Real Life. May 6, 2019.

Semi-Auto Cut-Up: Another Offering (KJV)

Eat the congregation, together,
smite them afraid, blast vessels.
O ye dead life, mercy, semblance
of a kind, a measure of shadow.

But, they had no prophet, neighbors
and friends, children of fate, troubled
the Others, remember them not, no-name,
wilderness sacrifices, sore consumed.

Desolation came and passed, Death,
bare the enemy, dead, desolate,
good and great together into the land,
begat headstones, seeds unto the earth.

Father of water, turn thyself,
out to sea, high between, separate,
sing of the living, purify mouths
and soul, fetch their inheritance.

Surge, rain on the wise borderlands,
pull the stopper, flush the tidal womb,
from the unleavened, and the unclean,
reborn, another offering to be devoured.

Another A.I. assist, this time from a neural network freshly trained on the King James Version (KJV) as its corpus. Since this was the first neural network I tried myself, I learned many valuable lessons. The first go-round took 2.5 days, and the result was unusable because of all the newline characters in the original text. I didn’t realize that the text would have to be pre-processed, or what that would entail. I plan on writing a post about the process of making Project Gutenberg texts usable for writing with machine co-authors.
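As a rough sketch of the kind of pre-processing I mean, something like the following works on a standard Project Gutenberg plain-text file: cut the licensing boilerplate and unwrap the hard line breaks so the network isn’t learning the 70-column layout. The marker strings and the unwrap heuristic are assumptions on my part; check them against your particular file.

```python
# Rough sketch of cleaning a Project Gutenberg text for training.
# ASSUMPTIONS: the START/END marker strings and the unwrap heuristic
# fit the usual Gutenberg layout; check them against your file.
import re
import sys

def clean_gutenberg(raw):
    # Keep only the text between the "*** START OF ..." and
    # "*** END OF ..." boilerplate markers, if both are present.
    start = raw.find("*** START OF")
    end = raw.find("*** END OF")
    if start != -1 and end != -1:
        raw = raw[raw.index("\n", start) + 1 : end]
    # Unwrap hard-wrapped lines: newlines inside a paragraph become
    # spaces; blank lines (paragraph breaks) are preserved.
    paragraphs = re.split(r"\n\s*\n", raw)
    paragraphs = [" ".join(p.split()) for p in paragraphs if p.strip()]
    return "\n\n".join(paragraphs)

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8-sig") as f:
        print(clean_gutenberg(f.read()))
```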

Also, there is a point to be made about the inherent class obstacles in learning and using neural networks. Having a dedicated machine with the right hardware is the difference between waiting a few hours to train a new model and waiting days. Speed, in turn, limits how large a data set you are willing to start with. The KJV is about 5MB, and it took two days. The text Robin Sloan used for his pre-trained model is around 123MB. Spending 24+ days to train a model is a serious barrier to entry.
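The back-of-the-envelope math, assuming training time scales roughly linearly with corpus size (a simplification: model size, number of epochs, and hardware matter at least as much):

```python
# Naive estimate: training time scales linearly with corpus size.
kjv_mb, kjv_days = 5, 2        # my corpus, and roughly what it took
sloan_mb = 123                 # size of Robin Sloan's training text
print(f"{kjv_days * sloan_mb / kjv_mb:.0f} days")  # ~49 days on the same hardware
```

On hardware like mine, “24+ days” is, if anything, a generous floor.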

AI For Everyone

“AI is not only for engineers. If you want your organization to become better at using AI, this is the course to tell everyone–especially your non-technical colleagues–to take. In this course, you will learn:

– The meaning behind common AI terminology, including neural networks, machine learning, deep learning, and data science
– What AI realistically can–and cannot–do
– How to spot opportunities to apply AI to problems in your own organization
– What it feels like to build machine learning and data science projects
– How to work with an AI team and build an AI strategy in your company
– How to navigate ethical and societal discussions surrounding AI

Though this course is largely non-technical, engineers can also take this course to learn the business aspects of AI.”

A.I. for Everyone.

Audit the course for free. Takes 6-8 hours to complete.

h/t, Open Culture.

Robin Sloan & Writing With The Machine

“I am just so compelled by the notion of a text editor that possesses a deep, nuanced model of… what? Everything ever written by you? By your favorite authors? Your nemesis? All staff writers at the New Yorker, present and past? Everyone on the internet? It’s provocative any way you slice it.

I should say clearly: I am absolutely 100% not talking about an editor that ‘writes for you,’ whatever that means. The world doesn’t need any more dead-eyed robo-text.

The animating ideas here are augmentation; partnership; call and response.

The goal is not to make writing “easier”; it’s to make it harder.

The goal is not to make the resulting text “better”; it’s to make it different — weirder, with effects maybe not available by other means.”

—Robin Sloan, “Writing With The Machine.” robinsloan.com. May 2016.

Robin Sloan hacked together some software that uses a neural network trained on a corpus of text, e.g., Shakespeare, to suggest completions for the sentences you write. He’s right that it doesn’t make writing easier. It makes writing harder, because it essentially implants non sequiturs into your writing that then have to be thoughtfully incorporated or erased. But it does send your mind off in directions it would not go if you were merely composing something on your own.

To try it, you have to install torch, torch-rnn, torch-rnn-server, the Atom text editor, and rnn-writer. It’s probably easiest to get it going on Linux or macOS. The instructions are not entirely clear, and I failed to get it working the first time I tried. I made a second attempt yesterday and got it to work. The main thing is to work through all of the instructions; when a step fails, look for flags to pass or a way to do it manually. I also didn’t realize that torch-rnn-server has to be cloned with git, and that the server has to be started from inside that directory.

Also, use the pre-trained model first to make sure everything works. Get the server running, connect it to your Atom editor, and get a feel for the possibilities. You’ll probably find, though, that the pre-trained model leaves quite a bit to be desired.
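A quick way to check that the server side is alive before pointing Atom at it is a request from Python. Fair warning: the port, route, and parameters below are assumptions on my part, not the documented API; confirm the real ones in the torch-rnn-server README.

```python
# Sanity check that torch-rnn-server is answering before wiring up Atom.
# ASSUMPTIONS: the port (8080), route (/generate), and query parameters
# are placeholders; confirm them in the torch-rnn-server README.
import json
import urllib.parse
import urllib.request

def complete(start_text, n=3, base="http://localhost:8080"):
    """Ask the server for n completions of start_text (hypothetical API)."""
    query = urllib.parse.urlencode({"start_text": start_text, "n": n})
    with urllib.request.urlopen(f"{base}/generate?{query}", timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(complete("It was a dark and stormy "))
```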

But training a model on a text of your choosing can take quite a while on a consumer-grade computer. I picked the King James Version of the Bible as my reference text, and I’m doing the training on an old laptop. By my calculations, it will take around 70 hours to train the model. It’s not a trivial exercise.

After trying the pre-trained model, it probably makes the most sense to train on the tiny-shakespeare.txt file included with torch-rnn-server, as a first walk-through of training your own neural network on a specific text. That way you aren’t spending several days training something you aren’t sure is going to work.
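For reference, the example run in the torch-rnn README looks roughly like this. The flags may differ slightly in the torch-rnn-server fork, so treat this as a sketch and defer to the README of whichever repo you cloned:

```
# Run from the torch-rnn directory. Flags per the torch-rnn README;
# they may differ in the torch-rnn-server fork.
python scripts/preprocess.py \
  --input_txt data/tiny-shakespeare.txt \
  --output_h5 data/tiny-shakespeare.h5 \
  --output_json data/tiny-shakespeare.json

th train.lua -input_h5 data/tiny-shakespeare.h5 -input_json data/tiny-shakespeare.json
```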

After that, take a look at Project Gutenberg. I can imagine neural networks trained on years of our emails, on Montaigne, on old slang dictionaries, or on Grimm’s Fairy Tales, each working on our writing to different effect. There seems to be a world of possibility in this approach.

Good luck!