The Age of Freedom, RethinkX

“During the 2020s, key technologies will converge to completely disrupt the five foundational sectors that underpin the global economy, and with them every major industry in the world today. The knock-on effects for society will be as profound as the extraordinary possibilities that emerge.

In information, energy, food, transportation, and materials, costs will fall by 10x or more, while production processes an order of magnitude (10x) more efficient will use 90% fewer natural resources with 10x-100x less waste. The prevailing production system will shift away from a model of centralized extraction and the breakdown of scarce resources that requires vast physical scale and reach, to a model of localized creation from limitless, ubiquitous building blocks – a world built not on coal, oil, steel, livestock, and concrete but on photons, electrons, DNA, molecules and (q)bits. Product design and development will be performed collaboratively over information networks while physical production and distribution will be fulfilled locally. As a result, geographic advantage will be eliminated as every city or region becomes self-sufficient. This new creation-based production system, which will be built on technologies we are already using today, will be far more equitable, robust, and resilient than any we have ever seen. We have the opportunity to move from a world of extraction to one of creation, a world of scarcity to one of plenitude, a world of inequity and predatory competition to one of shared prosperity and collaboration.

This is not, then, another Industrial Revolution, but a far more fundamental shift. This is the beginning of the third age of humankind – the Age of Freedom.

James Arbib & Tony Seba, “Rethinking Humanity.” RethinkX. June 2020.

In the cryptocurrency space, the term “hopium” would apply. While a post-scarcity world run by teams of superintelligent A.I.s, like the one depicted in Iain M. Banks’ Culture series, would be a welcome development, if history is any guide, human beings tend to like inequity and predatory competition.

The Computers Are Out of Their Boxes

“What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-­making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes…

…AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-­learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.”

—Will Douglas Heaven, “How AI is reinventing what computers are.” MIT Technology Review. October 22, 2021.

Open Question: As artificial intelligence becomes more pervasive, what limits should we, as a society and as individuals, impose on how we use this technology in order to minimize its negative impact?

The key changes described in this article:

  • Volume: many less precise calculations carried out in parallel
  • Defining success by outcomes rather than by prescribed processes
  • Machine autonomy, i.e., artificial intelligence prompting people, acting as surrogate and agent

All to the good. But there are negative social implications. As this technology reaches critical mass among populations, a significant portion of people will off-load a subset of decisions to machines, which may be a net positive. However, it is easy to imagine that this undermines people’s ability to think for themselves, that the subset creeps into classes of decisions where it shouldn’t, e.g., prison sentences, and that, within the areas where it is commonly used, it will create a decision-making monoculture that crowds out alternative values. For example, suppose a dominant flavor of A.I. decides that Zojirushi makes the best automated rice cookers, which they do, and only makes that recommendation. Some large percentage of people will then only buy Zojirushi. The natural result is that other rice cooker options get pushed out of the market, making it difficult for new, possibly better, companies to emerge.

This trend will produce lots of strange network effects that should be given careful consideration. Even on a personal level, it would be good to have a clear idea of what exactly you’d like to use A.I. for, so you don’t undermine your own autonomy, as has happened in other computing eras, such as when Microsoft dominated the desktop market.

How to Make Enemies and Influence People

“This essay outlines the characteristics of what I call the ‘totalitarian mindset’. Under certain circumstances, human beings engage in patterns of thinking and behavior that are extremely closed and intolerant of difference and pluralism. These patterns of thinking and behaving lead us towards totalitarian, anti-pluralistic futures. An awareness of how these patterns arise, how individuals and groups can be manipulated through the use of fear, and how totalitarianism plays into the desire in human beings for ‘absolute’ answers and solutions, can be helpful in preventing attempts at manipulation and from the dangers of actively wanting to succumb to totalitarian, simplistic, black-and-white solutions in times of stress and anxiety. I present a broad outline of an agenda for education for a pluralistic future. The lived experience of pluralism is still largely unfamiliar and anxiety inducing, and that the phenomenon is generally not understood, with many myths of purity and racial or cultural superiority still prevalent. Finally, as part of that agenda for education, I stress the importance of creativity as an adaptive capacity, an attitude that allows us to see pluralism as an opportunity for growth and positive change rather than simply conflict.”

—Alfonso Montuori, “How to make enemies and influence people: anatomy of the anti-pluralist, totalitarian mindset.” Futures. 2005. pgs. 18-35.

Climate Change: Are We Fucked?

“Yes, of course, we’re fucked. (Though it’s important to specify the “we” in this formulation, because the global poor, the disenfranchised, the young, and the yet-to-be-born are certifiably far more fucked than such affluent, white, middle-aged Americans as Vollmann and myself.) But here’s the thing: with climate change as with so much else, all fuckedness is relative. Climate catastrophe is not a binary win or lose, solution or no-solution, fucked or not-fucked situation. Just how fucked we/they will be—that is, what kind of civilization, or any sort of social justice, will be possible in the coming centuries or decades—depends on many things, including all sorts of historic, built-in systemic injustices we know all too well, and any number of contingencies we can’t foresee. But most of all it depends on what we do right now, in our lifetimes. And by that I mean: what we do politically, not only on climate but across the board, because large-scale political action—the kind that moves whole countries and economies in ways commensurate with the scale and urgency of the situation—has always been the only thing that matters here. (I really don’t care about your personal carbon footprint. I mean, please do try to lower it, because that’s a good thing to do, but fussing and guilt-tripping over one’s individual contribution to climate change is neither an intellectually nor a morally serious response to a global systemic crisis.)”

—Wen Stephenson. “Carbon Ironies.” The Baffler. June 13, 2018.

h/t kottke.org.

Way of the Future

“Way of the Future (WOTF) is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + “machines”. Given that technology will “relatively soon” be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition. Help us spread the word that progress shouldn’t be feared (or even worse locked up/caged). That we should think about how “machines” will integrate into society (and even have a path for becoming in charge as they become smarter and smarter) so that this whole process can be amicable and not confrontational. In “recent” years, we have expanded our concept of rights to both sexes, minority groups and even animals, let’s make sure we find a way for “machines” to get rights too. Let’s stop pretending we can hold back the development of intelligence when there are clear massive short term economic benefits to those who develop it and instead understand the future and have it treat us like a beloved elder who created it.”

—”Way of the Future.” http://www.wayofthefuture.church/ (accessed December 1, 2017).

So much is wrong in the reasoning underpinning this marketing effort for a bright artificial intelligence (A.I.) future that it’s a challenge to think through what a good framing might look like. A few issues come to mind immediately.

The website is a .church URL. Deifying A.I. and framing it as a religious concept strikes me as a great way to walk into a belief minefield that could only hurt their cause.

Will intelligent A.I. “surpass” human intelligence? A calculator may surpass a human’s ability to perform math calculations. Certainly, calculators serve an important purpose, but they do not replace mathematicians. A.I. will have more generalizable utility than calculators. It may develop sentience and consciousness to the point that it should have the same rights and responsibilities as humans under some kind of legal regime. But will A.I. be a drop-in superior form of intelligence for every type of thinking humans do? It seems unlikely. So the claim warrants much deeper thinking about intelligence, whether intelligence is the most desirable quality in people or A.I., and how human and machine intelligence might work in tandem. Pretending A.I. is going to be a drop-in replacement for humans is simply lazy thinking.

Which leads to a word about the anthropomorphism on display: why would A.I. view humanity as a “beloved elder”? This kind of filial piety isn’t even true of humans in the vast majority of cases, yet this “church” is eager to project this kind of emotional disposition onto a “superior intelligence”? It’s a bit of foolishness.

While there are many other points that could be made, let’s focus on a key problem: who is A.I. going to benefit? It may be true that there will be a generalized improvement in the lifestyle of most of humanity by virtue of the development of A.I. and its applications. It is also true that some will benefit much more than others. Who will A.I. be working for? It’s a good bet that it won’t be working primarily in the interests of humanity. The wants and desires of A.I. itself, its creators, the financiers, and others will all come into play. If history is any guide, change on this scale may result in a better lifestyle for some portion of humanity, but it is equally true that this magnitude of change will end in tears for many.