You.com

You.com, which bills itself as the world’s first open search engine, today announced its public beta launch…

Founded in 2020 by Socher and Bryan McCann, You.com leverages natural language processing (NLP) — a form of AI — to understand search queries, rank the results, and semantically parse the queries into different languages, including programming languages. The platform summarizes results from across the web and is extensible with built-in search apps so that users can complete tasks without leaving the results page.

“The first page of Google can only be modified by paying for advertisements, which is both annoying to users and costly for companies. Our new platform will enable companies to contribute their most useful actual content to that first page, and — if users like it — they can take an action right then and there,” Socher continued. “Most companies and partners will prefer this new interface to people’s digital lives over the old status quo of Google.”

—Kyle Wiggers, “AI-driven search engine You.com takes on Google with $20M.” VentureBeat.com. November 9, 2021.

First I’m hearing of You.com, but it’s clear that something like this is the next iteration of search. Bookmarking to look into later.

The Computers Are Out of Their Boxes

“What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes…

…AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.”

—Will Douglas Heaven, “How AI is reinventing what computers are.” MIT Technology Review. October 22, 2021.
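The quote’s gloss on reinforcement learning, “learns how to solve a task through trial and error,” can be made concrete with a toy sketch. This is a simple epsilon-greedy bandit in Python, not the system Google used for chip layout; the candidate “layouts” and their reward values are made up for illustration.

```python
# Toy trial-and-error learning: an epsilon-greedy agent discovers which
# candidate "layout" yields the highest reward, purely by trying them.
# (Hypothetical rewards; not Google's chip-placement system.)
import random

rewards = {"layout_a": 0.2, "layout_b": 0.8, "layout_c": 0.5}
estimates = {k: 0.0 for k in rewards}
counts = {k: 0 for k in rewards}
epsilon = 0.1  # fraction of the time we explore at random

for _ in range(1000):
    if random.random() < epsilon:
        choice = random.choice(list(rewards))       # explore
    else:
        choice = max(estimates, key=estimates.get)  # exploit best so far
    reward = rewards[choice] + random.gauss(0, 0.1)  # noisy feedback
    counts[choice] += 1
    # incremental mean update of the value estimate for this choice
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(max(estimates, key=estimates.get))  # converges on "layout_b"
```

No human tells the agent which layout is best; it settles on the right answer only because trying things and observing rewards eventually makes the estimate accurate.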

Open Question: As artificial intelligence becomes more pervasive, what limits should we impose, as a society and on ourselves, on how we use this technology so that its negative impact is minimized?

The key changes described in this article:

  • High-volume, less precise calculations carried out in parallel
  • Defining success by outcomes rather than by specifying processes (see the sketch after this list)
  • Machine autonomy, i.e., artificial intelligence prompting people and acting as their surrogate and agent
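To make the second change concrete, here is a minimal, hypothetical sketch using scikit-learn (my choice of library, not the article’s): the traditional version defines the process as a hand-written rule, while the machine-learning version defines only the desired outcomes as labeled examples and lets the model infer a rule.

```python
# Process vs. outcome: two ways to get the same classifier.
from sklearn.tree import DecisionTreeClassifier

# Process: a human writes the decision rule explicitly.
def is_spam_by_rule(num_links: int) -> bool:
    return num_links > 3  # hand-chosen threshold

# Outcome: a human supplies examples of the desired result instead.
examples = [[0], [1], [2], [4], [5], [7]]   # feature: number of links
labels   = [0,   0,   0,   1,   1,   1]     # desired outcome: spam or not

model = DecisionTreeClassifier().fit(examples, labels)
print(model.predict([[6]]))  # the learned "process" classifies new input
```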

All to the good. But there are negative social implications. As this technology reaches critical mass among populations, a significant portion of people will off-load a subset of decisions to machines, which may be a net positive. However, it’s easy to imagine it undermining people’s ability to think for themselves, the subset creeping into classes of decisions where it shouldn’t, e.g., prison sentences for people, and, within the areas where it is commonly used, creating a decision-making monoculture that crowds out alternative values. For example, suppose a dominant flavor of A.I. decides that Zojirushi makes the best automated rice cookers, which they do, and only makes that recommendation. Some large percentage of people will then only buy Zojirushi. The natural result is that other rice cooker options get pushed out of the market, making it difficult for new, possibly better, companies to emerge.

This trend will produce a lot of strange network effects that should be given careful consideration. Even on a personal level, it would be good to have a clear idea of what exactly you’d like to use A.I. for, so you don’t undermine your own autonomy, as has happened in other computing eras, such as when Microsoft dominated the desktop market.

Deformin’ in the Rain: How (and Why) to Break a Classic Film

“…this essay subjects a single film to a series of deformations: the classic musical Singin’ in the Rain. Accompanying more than twenty original audiovisual deformations in still image, GIF, and video formats, the essay considers both what each new version reveals about the film (and cinema more broadly) and how we might engage with the emergent derivative aesthetic object created by algorithmic practice as a product of the deformed humanities.”

—Jason Mittell, “Deformin’ in the Rain: How (and Why) to Break a Classic Film.” Digital Humanities Quarterly. 2021. Vol. 15. No. 1.

This approach of altering a film to better understand aspects of it is an interesting technique that could be applied to a wide variety of artistic media. Film is perhaps more interesting than most because it can incorporate so many different elements.
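As a crude example in the spirit of the essay’s deformations (this is my own sketch, not one of Mittell’s), the few lines below collapse an entire film into a single averaged image, assuming OpenCV and NumPy are installed; the filename is hypothetical.

```python
# A toy "deformation": average every frame of a video into one image.
import cv2
import numpy as np

cap = cv2.VideoCapture("singin_in_the_rain.mp4")  # hypothetical local copy
total = None
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = frame.astype(np.float64)
    total = frame if total is None else total + frame
    count += 1
cap.release()

if count:
    average = (total / count).astype(np.uint8)
    cv2.imwrite("average_frame.png", average)  # the film reduced to one image
```

The resulting blur is useless as a viewing copy but surprisingly revealing about a film’s dominant palette and compositional habits, which is exactly the kind of insight-through-breakage the essay is after.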

NetHack / The NetHack Learning Environment

Reminded of NetHack this morning after hearing of Facebook’s release of The NetHack Learning Environment.

“NetHack is one of the oldest and arguably most impactful videogames in history, as well as being one of the hardest roguelikes currently being played by humans. It is procedurally generated, rich in entities and dynamics, and overall an extremely challenging environment…”

I’ve only played NetHack casually, but it’s very complex. The learning environment might be a fun project for learning a little bit about artificial intelligence. Or, you might simply wish to play the game yourself.

Worth a look. It’s free and runs on pretty much any computer you’d want to use. Almost everyone will want to get a version with graphic tiles.
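If you want to try the learning environment rather than the game, the snippet below reflects my understanding of the quickstart in the project’s README; the task id "NetHackScore-v0" and the classic gym step interface are assumptions worth verifying against the current release.

```python
# A minimal session with The NetHack Learning Environment.
import gym
import nle  # registers the NetHack environments with gym

env = gym.make("NetHackScore-v0")
obs = env.reset()  # start a new procedurally generated game
env.render()       # print the familiar ASCII dungeon view

for _ in range(10):
    # take random actions; a real agent would choose them from obs
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        break
env.close()
```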

Ironies of Automation

“…the more we automate, and the more sophisticated we make that automation, the more we become dependent on a highly skilled human operator.”

—Adrian Colyer, “Ironies of automation.” the morning paper. January 8, 2020.

A robot surgeon might be a great idea, but it’s going to handle the routine, the easy surgeries. What’s left is what’s hard. That’ll be the new work for human surgeons.

And who fixes the surgeries that the robot got wrong? Who watches the robot surgeons and steps in when they can’t do the job?

This is true of automation in every area. The jobs it eliminates are the easy, routine jobs. With more automation, the level of difficulty simply goes up.

If the robot does the job better, then it gets the job. But someone who does the job better than robots will always have to evaluate their work and step in when the work is beyond them.

Where will we find such people, if we don’t become them?

Excavating A.I.

“Datasets aren’t simply raw materials to feed algorithms, but are political interventions. As such, much of the discussion around ‘bias’ in AI systems misses the mark: there is no ‘neutral,’ ‘natural,’ or ‘apolitical’ vantage point that training data can be built upon. There is no easy technical ‘fix’ by shifting demographics, deleting offensive terms, or seeking equal representation by skin tone. The whole endeavor of collecting images, categorizing them, and labeling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform.”

—Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Images in Machine Learning Training Sets.” Excavating.AI. October 2019.

Directive

“Beginning of a six-part fiction series about a man working completely alone aboard a spaceship bound for a new planet. His fellow passengers will remain cryogenically frozen for the 20 years it will take for the ship to reach its destination; Frank’s work is to maintain the environment and make sure all is proceeding as it should. Despite his solitude, the show is actually a dialogue between Frank and Casper, the spaceship’s AI. They have an abrasive, dependent relationship, and the progression of the series made me think a lot about where our current interactions with AI tech might lead (12m38s).”

—”Hebrew, Frozen, Dark.” TheListener.co. September 19, 2019.