Deformin’ in the Rain: How (and Why) to Break a Classic Film

“…this essay subjects a single film to a series of deformations: the classic musical Singin’ in the Rain. Accompanying more than twenty original audiovisual deformations in still image, GIF, and video formats, the essay considers both what each new version reveals about the film (and cinema more broadly) and how we might engage with the emergent derivative aesthetic object created by algorithmic practice as a product of the deformed humanities.”

—Jason Mittell, “Deformin’ in the Rain: How (and Why) to Break a Classic Film.” Digital Humanities Quarterly. 2021, vol. 15, no. 1.

I thought this approach of altering a film to better understand aspects of it was a pretty interesting technique, one that could be applied to a wide variety of artistic media. Film is perhaps especially interesting because it incorporates so many different elements.

NetHack / The NetHack Learning Environment

Reminded of NetHack this morning after hearing of Facebook’s release of The NetHack Learning Environment.

“NetHack is one of the oldest and arguably most impactful videogames in history, as well as being one of the hardest roguelikes currently being played by humans. It is procedurally generated, rich in entities and dynamics, and overall an extremely challenging environment…”

I’ve only played it casually, but it’s very complex. It might make a fun project for learning a little bit about artificial intelligence. Or, you might simply wish to play the game yourself.
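The NLE’s pitch is that NetHack becomes a reinforcement-learning environment driven by the standard Gym `reset()`/`step()` loop. As a minimal sketch of what that “fun project” looks like, here is a random-agent episode loop; `MockNetHackEnv` is a made-up stand-in so the example runs without `nle` installed:

```python
import random

# A stand-in environment with the Gym-style reset()/step() interface that
# the NetHack Learning Environment (NLE) exposes. MockNetHackEnv is invented
# here for illustration; with the `nle` package installed you would create
# the real environment with gym.make("NetHackScore-v0") instead.
class MockNetHackEnv:
    NUM_ACTIONS = 8  # the real NLE action space is much larger

    def __init__(self, max_steps=50):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.steps = 0
        return {"message": b"", "glyphs": []}  # NLE observations are dicts

    def step(self, action):
        """Advance one turn; return (observation, reward, done, info)."""
        self.steps += 1
        done = self.steps >= self.max_steps
        reward = 1.0 if done else 0.0  # toy reward: 1 point for surviving
        return {"message": b"", "glyphs": []}, reward, done, {}


def run_random_agent(env):
    """Play one episode with uniformly random actions; return total reward."""
    env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = random.randrange(env.NUM_ACTIONS)  # random-policy baseline
        _obs, reward, done, _info = env.step(action)
        total_reward += reward
    return total_reward
```

A random agent like this is the usual first baseline before attempting any learning; the same loop would run against the real NLE environment.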

Worth a look. It’s free and runs on pretty much any computer you’d want to use. Almost everyone will want to get a version with graphic tiles.

Ironies of Automation

“…the more we automate, and the more sophisticated we make that automation, the more we become dependent on a highly skilled human operator.”

—Adrian Colyer, “Ironies of automation.” the morning paper. January 8, 2020.

A robot surgeon might be a great idea, but it’s going to handle the routine, the easy surgeries. What’s left is what’s hard. That’ll be the new work for human surgeons.

And who fixes the surgeries that the robot got wrong? Who watches the robot surgeons and steps in when they can’t do their job?

This is true of automation in every area. The jobs it eliminates are the easy, routine jobs. With more automation, the level of difficulty simply goes up.

If the robot does the job better, then it gets the job. But someone who does the job better than the robots will always have to evaluate their work and step in when the work is beyond them.

Where will we find such people, if we don’t become them?

Excavating A.I.

“Datasets aren’t simply raw materials to feed algorithms, but are political interventions. As such, much of the discussion around ‘bias’ in AI systems misses the mark: there is no ‘neutral,’ ‘natural,’ or ‘apolitical’ vantage point that training data can be built upon. There is no easy technical ‘fix’ by shifting demographics, deleting offensive terms, or seeking equal representation by skin tone. The whole endeavor of collecting images, categorizing them, and labeling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform.”

—Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Images in Machine Learning Training Sets.” Excavating.AI. October 2019.


Hebrew, Frozen, Dark

“Beginning of a six-part fiction series about a man working completely alone aboard a spaceship bound for a new planet. His fellow passengers will remain cryogenically frozen for the 20 years it will take for the ship to reach its destination; Frank’s work is to maintain the environment and make sure all is proceeding as it should. Despite his solitude, the show is actually a dialogue between Frank and Casper, the spaceship’s AI. They have an abrasive, dependent relationship, and the progression of the series made me think a lot about where our current interactions with AI tech might lead (12m38s).”

—”Hebrew, Frozen, Dark.” September 19, 2019.

Wearable Robots | Science

“…it is realistic to think that we will witness, in the next several years, the development of robust human-robot interfaces to command wearable robotics based on the decoding of a representative part of the neural code of movement in humans. The need for wearable technologies that minimally alter human biomechanics will result in a transition from rigid wearable robots to soft exosuits such as the one reported by Kim et al., and, eventually, to implantable neuroprostheses that can influence or assist human movement. The need for preserving human neuromechanics while using assistive technology will likely lead to implantable and networked recording and stimulation neuroprostheses. Such devices would implement effective interfaces to decode the wearer’s movement intent and influence it when necessary to enhance human performance (7).”

—José L. Pons, et al., “Witnessing a wearables transition.” Science. 16 Aug 2019: Vol. 365, Issue 6454, pp. 636-637.

Practical applications to efforts like Elon Musk’s Neuralink weren’t immediately apparent to me. Ok, a neural implant as a human-artificial intelligence interface may make sense a few decades off. However, a neuronal interface for a soft exosuit seems like something that could be used today.