The Vatican is the Oldest Computer in the World

Francis Spufford once said that Bletchley Park was an attempt to build a computer out of human beings, so the credit for this metaphor belongs to him. But it can be generalised to any bureaucracy. They are all attempts to impose an algorithmic order on the messiness of the world, and to extract from it only those facts which are useful to decision makers.

With that said, it’s clear that the Vatican is the oldest continuously running computer in the world. Now read on …

One way of understanding the Roman Catholic Church is to think of the Vatican as the oldest computer in the world. It is a computer made of human parts rather than electronics, but so are all bureaucracies: just like computers, they take in information, process it according to a set of algorithms, and act on the result.

The Vatican has an operating system that has been running since the days of the Roman Empire. Its major departments are still called “dicasteries”, a term last used in the Roman civil service in about 450 AD.

Like any very long-running computer system, the Vatican has problems with legacy code: all that embarrassing stuff about usury and cousin marriage from the Middle Ages, or the more recent “Syllabus of Errors”, in which Pope Pius IX in 1864 denounced as heresy the belief that he, or any Pope, “can, and ought to, reconcile himself, and come to terms with progress, liberalism and modern civilization”, can no longer be acted on, but can’t be thrown away, either. Instead it is commented out and entirely different code added: this process is known as development.

But changing the code that the system runs on, while it is running, is a notoriously tricky operation…

-Andrew Brown, “The Vatican is the oldest computer in the world.” andrewbrown.substack.com. November 24, 2025

[Commenting on the above.]

It is.

What I like about this essay is how it suggests a different perspective on other computer-like ‘machines’ that exist in our world. For years I’ve thought of corporations — especially large ones — as ‘superintelligent machines’ (which is why I think that much of the faux-nervous speculation about what it would be like to live in a world dominated by superintelligent machines is fatuous. We already know the answer to that question: it’s like living in contemporary liberal democracies!)

Charlie Stross, the great sci-fi writer, calls corporations “Slow AIs”. Henry Farrell (Whom God Preserve) writes that since Large Language Models (LLMs) are ‘cultural technologies’ — i.e., information-processing machines — they belong in the same class as other information-processing machines, like markets (as Hayek thought), bureaucracies and even states. David Runciman, in his book The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, makes similar points.

Of course these are all metaphors with the usual upsides and downsides. But they are also tools for thinking about current — and emerging — realities.

-John Naughton, “Wednesday, 26 November 2025.” memex.naughtons.org. November 26, 2025

Creative Immiseration

“These tools represent the complete corporate capture of the imagination, that most private and unpredictable part of the human mind. Professional artists aren’t a cause for worry. They’ll likely soon lose interest in a tool that makes all the important decisions for them. The concern is for everyone else. When tinkerers and hobbyists, doodlers and scribblers—not to mention kids just starting to perceive and explore the world—have this kind of instant gratification at their disposal, their curiosity is hijacked and extracted. For all the surrealism of these tools’ outputs, there’s a banal uniformity to the results. When people’s imaginative energy is replaced by the drop-down menu “creativity” of big tech platforms, on a mass scale, we are facing a particularly dire form of immiseration.

By immiseration, I’m thinking of the late philosopher Bernard Stiegler’s coinage, “symbolic misery”—the disaffection produced by a life that has been packaged for, and sold to, us by commercial superpowers. When industrial technology is applied to aesthetics, “conditioning,” as Stiegler writes, “substitutes for experience.” That’s bad not just because of the dulling sameness of a world of infinite but meaningless variety (in shades of teal and orange). It’s bad because a person who lives in the malaise of symbolic misery is, like political philosopher Hannah Arendt’s lonely subject who has forgotten how to think, incapable of forming an inner life. Loneliness, Arendt writes, feels like “not belonging to the world at all, which is among the most radical and desperate experiences of man.” Art should be a bulwark against that loneliness, nourishing and cultivating our connections to each other and to ourselves—both for those who experience it and those who make it.”

-Annie Dorsen, “AI is plundering the imagination and replacing it with a slot machine.” Bulletin of the Atomic Scientists. October 27, 2022

Strikes me as another example of the two computing revolutions. One makes things easy with a touch interface. The other requires deep knowledge of a complicated topic, such as building machine learning models – not to mention the resources to do so at the highest level.

The point I would make is that creativity by proxy is still creativity. You may not understand how the A.I. generates its content, but we can still have an aesthetic sense of what is good and what isn’t, which the A.I. doesn’t provide.

Metaphorical Apes & Gorillas

Apes & Gorillas is another little gem from Joe Armeanio. It closely mirrors the idea of the two computing revolutions talked about in this post:

  1. Apps with easy-to-use interfaces designed for casual users
  2. Application layers that provide tools enabling ways of using a computer that were previously impossible

There’s a huge difference in needs between traders doing swaps and solo miners using a node wallet. The general principle is applicable to most areas of life that technology touches.

How-to: Be a Darknet Drug Lord

“[The advice in this article can be adapted to suit the needs of other hidden services, including ones which are legal in your jurisdiction. The threat model in mind is that of a drug market. The tone is that of a grandfather who is always annoyingly right, who can’t help but give a stream-of-consciousness schooling to some whippersnapper about the way the world works. If this article inspires you to go on a crime spree and you get caught, don’t come crying to me about it.]

-nachash@observers.net, “So, you want to be a darknet drug lord…” pastebin.com. Unknown date.

The tl;dr of this piece is the notion of parallel construction and that all it takes is one fuck-up for someone with the necessary resources to find you. But, on the other hand, if your threat profile is trying to become a less vulnerable target to criminals, learning what criminals do to avoid law enforcement will put you far ahead of the kinds of people they normally target. Obviously, don’t be a darknet drug lord. There are easier ways to make money.

Questions About Technology Investment: CharaChorder

“The CharaChorder is a new kind of typing peripheral that promises to let people type at superhuman speeds. It’s so fast that the website Monkeytype, which lets users participate in typing challenges and maintains its own leaderboard, automatically flagged CharaChorder’s CEO as a cheater when he attempted to post his 500 WPM score on its leaderboards.

It’s a strange looking device, the kind of thing Keanu Reeves would interface with in Johnny Mnemonic. Your palms rest on two black divots out of which rise nine different finger-sized joysticks. These 18 sticks move in every direction and, its website claims, can hit every button you need on a regular keyboard. “CharaChorder switches detect motion in 3 dimensions so users have access to over 300 unique inputs without their fingers breaking contact with the device,” it said.”

-Matthew Gault, “This Keyboard Lets People Type So Fast It’s Banned From Typing Competitions.” Vice. January 6, 2022.
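To make the “300 unique inputs” claim concrete, the core idea behind a chording keyboard is that a *set* of simultaneously pressed inputs maps to one output, so a small number of switches covers a large output space. Here is a minimal sketch of that idea; the switch names and chord table are hypothetical, not CharaChorder’s actual mappings:

```python
# Illustrative sketch of chorded entry (not CharaChorder firmware).
# A chord is a set of simultaneously actuated inputs; the table maps
# each chord to the text it emits.

# Hypothetical chord table: frozensets are hashable, so they can be
# dict keys, and they make press order irrelevant.
CHORDS = {
    frozenset({"L1-up"}): "e",
    frozenset({"L1-up", "R1-down"}): "the",
    frozenset({"L2-left", "R3-up"}): "and",
}

def emit(pressed):
    """Return the text for a chord, or None if the chord is unmapped."""
    return CHORDS.get(frozenset(pressed))

# With 18 sticks, each movable in several directions, single inputs
# alone number in the dozens, and chords multiply that combinatorially,
# which is why ">300 unique inputs" is plausible.
print(emit(["R1-down", "L1-up"]))  # order doesn't matter -> "the"
```

The same structure underlies stenotype machines: whole words per stroke is what makes superhuman WPM possible.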

Open Question: What is a good “investment” in technology?

Let’s imagine you have a child that is at the age where they are starting to use a computer and a QWERTY-style keyboard. Do you spend $250 and get them this kind of peripheral knowing:

  • It’s a new technology that likely will not be around in 20 years
  • It seems likely that in 20 years or so the main input for computing will be via voice and/or video
  • It is even possible that in 20 years everyone will have a brain-computer interface.

Personally, I think it is useful to learn how to use new devices, even if they turn out to be novelties. It’s easy to see how certain popular devices that became obsolete paved the way for the devices that followed them. Examples:

  • Mainframe computing led to personal computing which led to mobile computing
  • Blackberry, PalmOS, iPods were the precursors to Android and iPhones
  • Every few years, someone makes a new chat app, from ICQ and IRC to Telegram and Discord.

Familiarity with the previous version can help you transition to new variants. So, it’s probably a good idea to get familiar with technologies, even if you don’t think they will last.

A Boring Dystopia: Mouse Movers

““The pandemic has proved to be a catalyst to saying no to the ‘9-to-5’ schedule. The tables have turned in favor of the Worker,” Rodriguez told me. “They are in power today. They value work flexibility. They are ambitious. They value work-life balance and are not afraid of saying no to employers who don’t share those values. The Mouse Mover is a new tool in that shift—and we stand with the Knowledge Worker.” 

-Samantha Cole, “Workers Are Using ‘Mouse Movers’ So They Can Use the Bathroom in Peace.” Vice. December 8, 2021

Just to recap: the claim being made here is that being able to buy a mouse mover, a device that moves your mouse to simulate computer use, is a tool of worker empowerment. Maybe in a dystopia any resistance to digital Taylorism counts as empowerment. But needing a “mouse mover” isn’t empowerment. It’s a sign of your alienation.

Get a different job, if you can. If you can’t, don’t pretend to yourself that using tools like this is empowering. It isn’t. It’s a symptom that you should seek some kind of real empowerment, such as the ability to use the toilet whenever you want without someone wondering why you aren’t working. Or, to engage in true utopian thinking, find work where you have control over how and why you spend your time, because making those kinds of decisions is valued in the environment where you work, rather than merely being present to respond at a moment’s notice to your boss.

The Computers Are Out of Their Boxes

“What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes…

…AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.”

—Will Douglas Heaven, “How AI is reinventing what computers are.” MIT Technology Review. October 22, 2021.

Open Question: As artificial intelligence becomes more pervasive, what limits should we, as a society and as individuals, impose on how we use this technology in order to minimize its negative impact?

The key changes described in this article:

  • Volume: many less precise calculations carried out in parallel
  • Defining success by outcomes rather than by prescribed processes
  • Machine autonomy, i.e., artificial intelligence prompting people, acting as surrogate and agent

All to the good. But there are negative social implications. As this technology reaches critical mass, a significant portion of people will off-load a subset of their decisions to machines, which may be a net positive. However, it is easy to imagine that it undermines people’s ability to think for themselves, that the subset creeps into classes of decisions where it shouldn’t, e.g., prison sentences, and that, within the areas where it is commonly used, it creates a decision-making monoculture that crowds out alternative values. For example, suppose a dominant flavor of A.I. decides that Zojirushi makes the best automated rice cookers, which they do, and only makes that recommendation. Some large percentage of people will then only buy Zojirushi. The natural result is that it pushes other rice cooker options out of the market and makes it difficult for new, possibly better, companies to emerge.
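The monoculture worry can be made concrete with a toy simulation. This is my own illustration, not from the article: assume one brand is always recommended by the A.I., and some fraction of buyers simply follow that recommendation while the rest choose on their own:

```python
import random

# Toy model: "brand 0" is the one the A.I. always recommends.
# A fraction of buyers defer to the recommendation; everyone else
# picks uniformly among all brands (a stand-in for organic choice).

def market_share(follow_ai_fraction, brands=5, buyers=10_000, seed=0):
    """Return brand 0's share of the market under the toy assumptions."""
    rng = random.Random(seed)  # seeded for reproducibility
    recommended = 0
    wins = [0] * brands
    for _ in range(buyers):
        if rng.random() < follow_ai_fraction:
            wins[recommended] += 1
        else:
            wins[rng.randrange(brands)] += 1
    return wins[recommended] / buyers

# With no A.I. influence, brand 0 gets ~1/5 of the market; with 60% of
# buyers deferring, it gets roughly 0.6 + 0.4/5, about 68%.
print(round(market_share(0.0), 2), round(market_share(0.6), 2))
```

Even in this crude sketch, modest deference to a single recommender more than triples the favored brand’s share, which is the mechanism by which alternatives get crowded out.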

Lots of strange network effects will happen due to this trend, and they should be given careful consideration. Even on a personal level, it would be good to have a clear idea of what exactly you’d like to use A.I. for, so you don’t undermine your own autonomy, as happened in other computing eras, such as when Microsoft dominated the desktop market.