Readow.ai

“Write the title of the book you last read and liked. You can also enter just any book you like.
The more titles you add to the list, the more our recommendations will match your preferences.”

https://readow.ai/

Artificial intelligence offering book recommendations. All that is needed is a CSV file upload, the Goodreads API, or similar. It would be great to be able to input a list and get back both the books to read first and the books that should be on the list but aren’t.
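
As a minimal sketch of that workflow, here is how you might pull your liked titles out of a Goodreads-style library export to paste into a recommender. The column names and sample rows are assumptions for illustration; a real export has many more columns.

```python
import csv
import io

# Hypothetical sample rows mimicking a Goodreads library export.
# The real export has more columns; only "Title" and "My Rating" are used here.
SAMPLE_EXPORT = """Title,Author,My Rating
Snow Crash,Neal Stephenson,5
Middlemarch,George Eliot,4
The Da Vinci Code,Dan Brown,2
"""

def liked_titles(csv_text, min_rating=4):
    """Return titles rated at or above min_rating, ready to paste into a recommender."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Title"] for row in reader if int(row["My Rating"]) >= min_rating]

print(liked_titles(SAMPLE_EXPORT))  # → ['Snow Crash', 'Middlemarch']
```

From there, a service like Readow only needs the titles; the ratings just filter the list down to books you actually liked.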

Creative Immiseration

“These tools represent the complete corporate capture of the imagination, that most private and unpredictable part of the human mind. Professional artists aren’t a cause for worry. They’ll likely soon lose interest in a tool that makes all the important decisions for them. The concern is for everyone else. When tinkerers and hobbyists, doodlers and scribblers—not to mention kids just starting to perceive and explore the world—have this kind of instant gratification at their disposal, their curiosity is hijacked and extracted. For all the surrealism of these tools’ outputs, there’s a banal uniformity to the results. When people’s imaginative energy is replaced by the drop-down menu “creativity” of big tech platforms, on a mass scale, we are facing a particularly dire form of immiseration.

By immiseration, I’m thinking of the late philosopher Bernard Stiegler’s coinage, “symbolic misery”—the disaffection produced by a life that has been packaged for, and sold to, us by commercial superpowers. When industrial technology is applied to aesthetics, “conditioning,” as Stiegler writes, “substitutes for experience.” That’s bad not just because of the dulling sameness of a world of infinite but meaningless variety (in shades of teal and orange). It’s bad because a person who lives in the malaise of symbolic misery is, like political philosopher Hannah Arendt’s lonely subject who has forgotten how to think, incapable of forming an inner life. Loneliness, Arendt writes, feels like “not belonging to the world at all, which is among the most radical and desperate experiences of man.” Art should be a bulwark against that loneliness, nourishing and cultivating our connections to each other and to ourselves—both for those who experience it and those who make it.”

-Annie Dorsen, “AI is plundering the imagination and replacing it with a slot machine.” Bulletin of the Atomic Scientists. October 27, 2022.

Strikes me as another example of the two computing revolutions. One makes things easy, with a touch interface. The other requires deep knowledge of a complicated topic, such as building machine learning models – not to mention the resources to do so at the highest level.

The point I would make is that creativity by proxy is still creativity. You may not understand how the A.I. generates its content, but you can still have an aesthetic sense of what is good and what isn’t, which the A.I. doesn’t provide.

Simulated Selves

“This Mum and Dad live inside an app on my phone, as voice assistants constructed by the California-based company HereAfter AI and powered by more than four hours of conversations they each had with an interviewer about their lives and memories. (For the record, Mum isn’t that untidy.) The company’s goal is to let the living communicate with the dead. I wanted to test out what it might be like.

Technology like this, which lets you “talk” to people who’ve died, has been a mainstay of science fiction for decades…

…“The biggest issue with the [existing] technology is the idea you can generate a single universal person,” says Justin Harrison, founder of a soon-to-launch service called You, Only Virtual. “But the way we experience people is unique to us.” …

But she warns that users need to be careful not to think this technology is re-creating or even preserving people. “I didn’t want to bring back his clone, but his memory,” she says. The intention was to “create a digital monument where you can interact with that person, not in order to pretend they’re alive, but to hear about them, remember how they were, and be inspired by them again.”

-Charlotte Jee, “Technology that lets us “speak” to our dead relatives has arrived. Are we ready?” technologyreview.com. October 18, 2022

Advances in artificial intelligence are opening up new possibilities for creating virtual representations of people. It’s a kind of advanced Turing test: not a machine intelligence passing itself off as human, but passing itself off as a specific person you know or knew.

If you provide enough data – in the form of video, voice and text – you presumably can approximate what a person might do or say in certain contexts. It becomes possible to create individual avatars or constructs that approach the real thing.

The first application is for people to process grief. It seems obvious that this will be a thing, where people use this technology to capture the people around them and keep them alive in a sense. As with most change, there are benefits and risks to consider. On one hand, it would be nice to be able to talk and confer with digital avatars of people who have died or left our lives for one reason or another. On the other hand, it is easy to imagine these “relationships” becoming maladaptive, consuming the limited time that we have and preventing us from meeting new people and spending the time necessary to build meaningful relationships with them.

Beyond grief, I think, in some sense, we already have inner representations of people in our minds. For example, when I want to make a comment that lacks tact, I sometimes have a version of my wife in my head saying something like, “You can say that, but say it nicely,” which, in fact, is something my wife says to me several times a year. I’d guess a digital assistant version, which I could consult about the right way to handle certain social situations, might be better than the version I have in my head. But, then again, I could just ask her in person. Wouldn’t the digital version get in the way of the real person, and ultimately damage my real relationship?

I like the idea of having multiple versions of myself. I imagine the process of adding data to be much like working on a blog, where the process of documenting surfaces thoughts that you might not have had otherwise. It changes you.

Then, you’d be able to consult with a different version of yourself. You’d be able to check in with past versions and see how you have changed. You could get second opinions from a close approximation of yourself. There are also hazards here because, ultimately, this is a past-facing exercise, and temperamentally, I try to live more in the future, or in the moment, when I can manage it.

In any event, this is interesting food for thought. I’d expect this technology to be in common use at funerals, or by people who want to live on in some sense after they die, within the next decade or two. It’s probably useful to think through the various tradeoffs before then.

Sudowrite: Writing with Artificial Intelligence

Robin Sloan described a process for “writing with the machine” back in 2016 that I tried in 2019. The interesting part of doing it yourself is that you can select the corpus the A.I. is trained on and get writing in the style of that subspecialty. But it took a bit of work to set up correctly, and these generative text models have gotten a lot better with GPT and other efforts.
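
Sloan’s setup fine-tuned a neural network on a corpus of his choosing. As a toy stand-in for that idea — not his actual method — a word-level Markov chain shows the core point: the corpus you feed in determines the style that comes out. The tiny corpus below is invented for illustration.

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, seed_word, length=10, rng=None):
    """Walk the chain from seed_word, sampling a successor at each step."""
    rng = rng or random.Random(0)
    out = [seed_word]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A tiny hypothetical corpus; in practice you would load the texts
# whose style you want the generator to imitate.
corpus = "the sea was calm and the sea was dark and the night was calm"
print(generate(build_chain(corpus), "the"))
```

Swap in Melville and you get faux-Melville; swap in your own blog archive and you get faux-you. The modern models do this vastly better, but the corpus-selection knob is the same one Sloan was turning.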

So, writing with A.I. will likely become a standard feature in word processors and text editors within five years or so. If you have never tried it, you can try Sudowrite, which makes the whole process easy to set up.

Nightmare Fuel: Artificial Intelligence for Drug Discovery Repurposed to Discover New Chemical Weapons

“Our company—Collaborations Pharmaceuticals, Inc.—had recently published computational machine learning models for toxicity prediction in different areas, and, in developing our presentation to the Spiez meeting, we opted to explore how AI could be used to design toxic molecules. It was a thought exercise we had not considered before that ultimately evolved into a computational proof of concept for making biochemical weapons.”

-Urbina, F., Lentzos, F., Invernizzi, C. et al. “Dual use of artificial-intelligence-powered drug discovery.” Nat Mach Intell (2022). March 7, 2022.

What could possibly go wrong? h/t the Economist.

Alex Karp: Palantir & Privacy

“Palantir Technologies is considered as one of the most secretive companies in the world. The customer list of the data specialist from Palo Alto, California, by all accounts includes nearly all governments and secret services of the Western world. As well as an increasing number of companies who want to deliver better products thanks to the structured data analysis from Palantir. In the first of the two-part podcast interview with Alex Karp, who has also been on the supervisory board of Axel Springer since April 2018, Mathias Döpfner asks him how he counters critics of Palantir, whether Palantir was involved in locating Osama bin Laden and what it is that makes him most proud of Palantir. 

During the first part of the interview, which lasts a good 20 minutes, Alex Karp, who is usually as reserved in public as Palantir itself, also provides insights into the early days of the company, when hardly anyone believed in the potential of data, and explains why he sees protecting data as a competitive advantage. Karp, addressing Europe, also warns against softening data protection regulations. According to Karp, it’s all about striving for the best combination of “maximum effective Artificial Intelligence and maximum effective data protection”. “Because nobody, or nobody at least in Europe, wants to live in a world where they have no private sphere.” 

-“Mathias Döpfner interviews Alex Karp in the Axel Springer: ‘No one wants to live in a world where they have no private sphere’.” inside.pod. January 23, 2022.

I haven’t listened to it yet. So, this is more bookmark than recommendation. However, I understand this tries to address some of the philosophical objections to Palantir, which are many.

You.com

“You.com, which bills itself as the world’s first open search engine, today announced its public beta launch…

Founded in 2020 by Socher and Bryan McCann, You.com leverages natural language processing (NLP) — a form of AI — to understand search queries, rank the results, and semantically parse the queries into different languages, including programming languages. The platform summarizes results from across the web and is extensible with built-in search apps so that users can complete tasks without leaving the results page.

“The first page of Google can only be modified by paying for advertisements, which is both annoying to users and costly for companies. Our new platform will enable companies to contribute their most useful actual content to that first page, and — if users like it — they can take an action right then and there,” Socher continued. “Most companies and partners will prefer this new interface to people’s digital lives over the old status quo of Google.”

—Kyle Wiggers, “AI-driven search engine You.com takes on Google with $20M.” VentureBeat.com. November 9, 2021.

This is the first I’m hearing of You.com, but it’s clear that something like this is the next iteration of search. Bookmarking to look into later.

The Computers Are Out of Their Boxes

“What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-­making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes…

…AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-­learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.”

—Will Douglas Heaven, “How AI is reinventing what computers are.” MIT Technology Review. October 22, 2021.

Open Question: As artificial intelligence becomes more pervasive, what limits should we impose, as a society and on ourselves, on how we use this technology in order to minimize its negative impact?

The key changes described in this article:

  • Volume: many less precise calculations carried out in parallel
  • Defining success by outcomes rather than defining processes
  • Machine autonomy, i.e., artificial intelligence prompts people, acting as surrogate and agent

All to the good. But there are negative social implications. As this technology reaches critical mass among populations, a significant portion of people will off-load a subset of decisions to machines, which may be a net positive. However, it is easy to imagine that it undermines people’s ability to think for themselves, that the subset creeps into classes of decisions where it shouldn’t, e.g., prison sentences, and that, within the areas where it is commonly used, it creates a decision-making monoculture that crowds out alternative values. For example, suppose a dominant flavor of A.I. decides that Zojirushi makes the best automated rice cookers, which they do, and only makes that recommendation. Some large percentage of people will then only buy Zojirushi. The natural result is that other rice cooker options get pushed out of the market, and it becomes difficult for new, possibly better, companies to emerge.
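
The feedback loop in the rice cooker example can be made concrete with a toy simulation. Everything here is invented for illustration — the brand names, the starting shares, and the assumption that the recommender always points at the current market leader.

```python
import random

def simulate_market(shares, rounds=1000, follow_rate=0.8, rng=None):
    """Toy model: each round, one buyer either follows the single A.I.
    recommendation (always the current market leader) or picks a brand
    at random. Returns cumulative purchase counts per brand."""
    rng = rng or random.Random(42)
    counts = dict(shares)
    for _ in range(rounds):
        leader = max(counts, key=counts.get)
        if rng.random() < follow_rate:
            choice = leader  # recommender steers the buyer to the leader
        else:
            choice = rng.choice(list(counts))  # independent shopper
        counts[choice] += 1
    return counts

# Three hypothetical rice cooker brands starting nearly even.
final = simulate_market({"A": 34, "B": 33, "C": 33})
print(final)  # the early leader captures most of the new purchases
```

Even a one-unit head start snowballs into dominance, because the recommendation itself manufactures the lead it then reports. That is the monoculture dynamic: the outcome reflects the recommender’s feedback loop, not the underlying quality of the products.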

Lots of strange network effects will happen due to this trend, and they deserve careful consideration. Even on a personal level, it would be good to have a clear idea of what, exactly, you’d like to use A.I. for, so you don’t undermine your own autonomy, as has happened in other computing eras, such as when Microsoft dominated the desktop market.

Deformin’ in the Rain: How (and Why) to Break a Classic Film

“…this essay subjects a single film to a series of deformations: the classic musical Singin’ in the Rain. Accompanying more than twenty original audiovisual deformations in still image, GIF, and video formats, the essay considers both what each new version reveals about the film (and cinema more broadly) and how we might engage with the emergent derivative aesthetic object created by algorithmic practice as a product of the deformed humanities.”

—Jason Mittell, “Deformin’ in the Rain: How (and Why) to Break a Classic Film.” Digital Humanities Quarterly. 2021. Vol. 15. No. 1.

This approach of altering a film to better understand aspects of it strikes me as an interesting technique that could be applied to a wide variety of artistic media. Film is perhaps the most interesting case because it incorporates so many different elements.