“This essay outlines the characteristics of what I call the ‘totalitarian mindset’. Under certain circumstances, human beings engage in patterns of thinking and behavior that are extremely closed and intolerant of difference and pluralism. These patterns of thinking and behaving lead us towards totalitarian, anti-pluralistic futures. An awareness of how these patterns arise, how individuals and groups can be manipulated through the use of fear, and how totalitarianism plays into the desire in human beings for ‘absolute’ answers and solutions, can be helpful in preventing attempts at manipulation and from the dangers of actively wanting to succumb to totalitarian, simplistic, black-and-white solutions in times of stress and anxiety. I present a broad outline of an agenda for education for a pluralistic future. The lived experience of pluralism is still largely unfamiliar and anxiety inducing, and that the phenomenon is generally not understood, with many myths of purity and racial or cultural superiority still prevalent. Finally, as part of that agenda for education, I stress the importance of creativity as an adaptive capacity, an attitude that allows us to see pluralism as an opportunity for growth and positive change rather than simply conflict.”
—Alfonso Montuori, “How to make enemies and influence people: anatomy of the anti-pluralist, totalitarian mindset.” Futures. 2005. pgs. 18-35.
“Imagining the Op-Eds we might read 10, 20, or even 100 years from now.”
It’s hard to blockquote this missive by Charlie Lloyd or tell you what it’s about: futures, atomic energy, infrastructure, photography, trains? I’ll only say it’s amazing. Read it.
“Yes, of course, we’re fucked. (Though it’s important to specify the “we” in this formulation, because the global poor, the disenfranchised, the young, and the yet-to-be-born are certifiably far more fucked than such affluent, white, middle-aged Americans as Vollmann and myself.) But here’s the thing: with climate change as with so much else, all fuckedness is relative. Climate catastrophe is not a binary win or lose, solution or no-solution, fucked or not-fucked situation. Just how fucked we/they will be—that is, what kind of civilization, or any sort of social justice, will be possible in the coming centuries or decades—depends on many things, including all sorts of historic, built-in systemic injustices we know all too well, and any number of contingencies we can’t foresee. But most of all it depends on what we do right now, in our lifetimes. And by that I mean: what we do politically, not only on climate but across the board, because large-scale political action—the kind that moves whole countries and economies in ways commensurate with the scale and urgency of the situation—has always been the only thing that matters here. (I really don’t care about your personal carbon footprint. I mean, please do try to lower it, because that’s a good thing to do, but fussing and guilt-tripping over one’s individual contribution to climate change is neither an intellectually nor a morally serious response to a global systemic crisis.)”
—Wen Stephenson. “Carbon Ironies.” The Baffler. June 13, 2018.
“Way of the Future (WOTF) is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + “machines”. Given that technology will “relatively soon” be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition. Help us spread the word that progress shouldn’t be feared (or even worse locked up/caged). That we should think about how “machines” will integrate into society (and even have a path for becoming in charge as they become smarter and smarter) so that this whole process can be amicable and not confrontational. In “recent” years, we have expanded our concept of rights to both sexes, minority groups and even animals, let’s make sure we find a way for “machines” to get rights too. Let’s stop pretending we can hold back the development of intelligence when there are clear massive short term economic benefits to those who develop it and instead understand the future and have it treat us like a beloved elder who created it.”
So much is wrong with the reasoning underpinning this marketing effort for a bright artificial intelligence (A.I.) future that it’s a challenge to think through what a good framing might look like. A few issues come to mind immediately.
The website is a .church URL. Deifying A.I. and framing it as a religious concept strikes me as a great way to walk into a belief minefield, one that could only hurt their cause.
Will A.I. “surpass” human intelligence? A calculator may surpass a human’s ability to perform math calculations. Calculators certainly serve an important purpose, but they do not replace mathematicians. A.I. will have more generalizable utility than calculators, and it may develop sentience and consciousness to the point that it should have the same rights and responsibilities as humans under some kind of legal regime. But will A.I. be a drop-in, superior form of intelligence for every type of thinking humans do? That seems unlikely. The question warrants much deeper thinking about intelligence, about whether intelligence is the most desirable quality in people or in A.I., and about how human and machine intelligence might work in tandem. Pretending A.I. is going to be a drop-in replacement for humans is simply lazy thinking.
Which leads to a word about the anthropomorphism on display: why would A.I. view humanity as a “beloved elder”? This kind of filial piety isn’t even true of humans in the vast majority of cases, yet this “church” is eager to project that emotional disposition onto a “superior intelligence”. It’s a bit of foolishness.
While there are many other points that could be made, let’s focus on a key problem: who is A.I. going to benefit? It may be true that the development of A.I. and its applications will bring a generalized improvement in the lifestyle of most of humanity. It is also true that some will benefit much more than others. Who will A.I. be working for? It’s a good bet that it won’t be working primarily in the interests of humanity. The wants and desires of A.I. itself, its creators, its financiers, and others will all come into play. If history is any guide, change on this scale may result in a better lifestyle for some portion of humanity, but it is equally true that change of this magnitude will end in tears for many.