“The CharaChorder is a new kind of typing peripheral that promises to let people type at superhuman speeds. It’s so fast that the website Monkeytype, which lets users participate in typing challenges and maintains its own leaderboard, automatically flagged CharaChorder’s CEO as a cheater when he attempted to post his 500 WPM score on its leaderboards.
It’s a strange-looking device, the kind of thing Keanu Reeves would interface with in Johnny Mnemonic. Your palms rest on two black divots out of which rise nine finger-sized joysticks each. These 18 sticks move in every direction and, its website claims, can hit every button you need on a regular keyboard. “CharaChorder switches detect motion in 3 dimensions so users have access to over 300 unique inputs without their fingers breaking contact with the device,” it said.”-Matthew Gault, “This Keyboard Lets People Type So Fast It’s Banned From Typing Competitions.” Vice. January 6, 2022.
Open Question: What is a good “investment” in technology?
Let’s imagine you have a child who is at the age where they are starting to use a computer and a QWERTY-style keyboard. Do you spend $250 to get them this kind of peripheral, knowing:
- It’s a new technology that likely will not be around in 20 years
- It seems likely that in 20 years or so the main input method for computing will be voice and/or video
- It is even possible that in 20 years everyone will have a brain-computer interface.
Personally, I think it is useful to learn how to use new devices, even if they turn out to be novelties. It’s easy to see that certain popular devices that became obsolete paved the way for the devices that followed. Examples:
- Mainframe computing led to personal computing which led to mobile computing
- Blackberry, PalmOS, iPods were the precursors to Android and iPhones
- Every few years, someone makes a new chat app, from ICQ and IRC to Telegram and Discord.
Familiarity with the previous version can help you transition to new variants. So, it’s probably a good idea to get familiar with technologies, even if you don’t think they will last.
““The pandemic has proved to be a catalyst to saying no to the ‘9-to-5’ schedule. The tables have turned in favor of the Worker,” Rodriguez told me. “They are in power today. They value work flexibility. They are ambitious. They value work-life balance and are not afraid of saying no to employers who don’t share those values. The Mouse Mover is a new tool in that shift—and we stand with the Knowledge Worker.”-Samantha Cole, “Workers Are Using ‘Mouse Movers’ So They Can Use the Bathroom in Peace.” Vice. December 8, 2021
Just to recap: the claim being made here is that being able to buy a mouse mover, which is a device that moves your mouse to simulate computer use, is a tool of worker empowerment. Maybe in a dystopia any resistance to digital Taylorism is empowerment. Needing a “mouse mover” isn’t empowerment. It’s a sign of your alienation.
Get a different job, if you can. If you can’t, don’t pretend to yourself that using tools like this is empowering. It isn’t. It’s a sign you should seek some kind of real empowerment, such as the ability to use the toilet whenever you want without someone wondering why you aren’t working. Or, to engage in true utopian thinking, find work where you have control over how and why you spend your time, because making those kinds of decisions is valued in the environment where you work, rather than merely being present to respond at a moment’s notice to your boss.
“What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for.
“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes…
…AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.”—Will Douglas Heaven, “How AI is reinventing what computers are.” MIT Technology Review. October 22, 2021.
Open Question: As artificial intelligence becomes more pervasive, what limits should we, as a society and as individuals, impose on how we use this technology in order to minimize its negative impact?
The key changes described in this article:
- High volumes of less precise calculations carried out in parallel
- Defining success by outcomes rather than defining processes
- Machine autonomy, i.e., artificial intelligence that prompts people, acting as surrogate and agent
All to the good. But there are negative social implications. As this technology reaches critical mass among populations, a significant portion of people will off-load a subset of decisions to machines, which may be a net positive. However, it is easy to imagine it undermining people’s ability to think for themselves, the subset creeping into classes of decisions where it shouldn’t, e.g., prison sentences, and, within the areas where it is commonly used, a decision-making monoculture emerging that crowds out alternative values. For example, suppose a dominant flavor of A.I. decides that Zojirushi makes the best automated rice cookers, which they do, and only makes that recommendation. If some large percentage of people then only buy Zojirushi, the natural result is that other rice cooker options get pushed out of the market, making it difficult for new, possibly better, companies to emerge.
Lots of strange network effects will follow from this trend, and they deserve careful consideration. Even on a personal level, it would be good to have a clear idea of what exactly you’d like to use A.I. for, so you don’t undermine your own autonomy, as has happened in other computing eras, such as when Microsoft dominated the desktop market.
“In a Twitter discussion last week on ransomware attacks, KrebsOnSecurity noted that virtually all ransomware strains have a built-in failsafe designed to cover the backsides of the malware purveyors: They simply will not install on a Microsoft Windows computer that already has one of many types of virtual keyboards installed — such as Russian or Ukrainian…
Nixon said because of Russia’s unique legal culture, criminal hackers in that country employ these checks to ensure they are only attacking victims outside of the country.
“This is for their legal protection,” Nixon said. “Installing a Cyrillic keyboard, or changing a specific registry entry to say ‘RU’, and so forth, might be enough to convince malware that you are Russian and off limits. This can technically be used as a ‘vaccine’ against Russian malware.”—Brian Krebs, “Try This One Weird Trick Russian Hackers Hate.” krebsonsecurity.com. May 17, 2021.
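The check Krebs and Nixon describe can be sketched in a few lines. This is a hypothetical illustration of the logic, not code from any actual malware: it assumes the program has read the machine’s installed keyboard layout identifiers (on Windows these are hex strings stored under `HKEY_CURRENT_USER\Keyboard Layout\Preload`, e.g. `00000419` for Russian) and bails out if any of them matches a CIS-country layout.

```python
# Hypothetical sketch of the keyboard-layout failsafe described above.
# Windows layout IDs end in a 4-hex-digit language code:
# 0419 = Russian, 0422 = Ukrainian, 0423 = Belarusian, 043f = Kazakh.
CIS_LAYOUTS = {"0419", "0422", "0423", "043f"}

def should_abort(installed_layouts):
    """Return True if any installed layout ID ends in a CIS-country code."""
    return any(layout.lower()[-4:] in CIS_LAYOUTS for layout in installed_layouts)

# The "vaccine" Nixon mentions works by exploiting this same check:
# adding a Russian layout makes the machine look off-limits.
print(should_abort(["00000409"]))              # US English only -> False
print(should_abort(["00000409", "00000419"]))  # US English + Russian -> True
```

The defensive trick is the mirror image of the offensive one: because the malware author’s goal is legal self-protection rather than accuracy, the check is deliberately cheap to trigger.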
“The film tells the story of an overlooked genius: Claude Shannon. In a blockbuster paper in 1948, Claude Shannon introduced the notion of a ‘bit’ and laid the foundation for the information age. His ideas ripple through nearly every aspect of modern life, influencing such diverse fields as communication, computing, cryptography, neuroscience, artificial intelligence, cosmology, linguistics, and genetics. But when interviewed in the 1980s, Shannon was more interested in showing off the gadgets he’d constructed — juggling robots, a Rubik’s Cube solving machine, a wearable computer to win at roulette, a unicycle without pedals, a flame-throwing trumpet — than rehashing the past.
Mixing contemporary interviews, archival film, animation and dialogue drawn from interviews conducted with Shannon himself, The Bit Player tells the story of an overlooked genius who revolutionized the world, but never lost his childlike curiosity.”–The Bit Player on Documentary Mania
“Learning something new that’s complicated often feels difficult at first – if it feels easy it may be something you already know or you may not really be testing your knowledge (it’s a lot easier to read about how to solve a physics problem and think ‘this makes sense’ than it is to solve a problem yourself with the tools you just read about). The struggle can be a good sign – it means you’re really learning and by focusing on doing similar types of things it’ll become easier as you get better…
…Learning something complicated for the first time should feel a little painful – you should get used to that feeling since it’s a good thing and means you’re growing. Don’t let it scare you away because you don’t think you’re smart enough. Since there’s so much to learn and a lot of different avenues to go down (just in computers there are things like computer graphics, security, machine learning, algorithms, mobile, web, infrastructure, etc.), having a mindset where you allow yourself to grow and get out of your comfort zone to learn new things is critical.”–Zach Alberico, “How to Become a Hacker.” zalberico.com. April 19, 2020.
The modern reality is that there are two computing revolutions going on. In one, computers are being made accessible to all, so that everyone from small children to the elderly can navigate app icons and do useful things with a program designed by someone else. In the other, you are given a sophisticated tool and have to learn to use it to accomplish useful things you design yourself.
Everyone involved in the second revolution is a “hacker” in some sense of the word. They might not be writing code, but perhaps they are using git for version control, Photoshop to manipulate images, machine learning to look for patterns in data sets, designing objects to be printed in a 3D printer using Autocad, et cetera. There are many facets of this kind of computing that require no coding at all. However, you are using a generalized tool to accomplish a task, one that was previously impossible to perform.
So, we might need a newer, more expansive term for the people involved in this second revolution. One that includes the plain-text social scientist, computer artists, 3D designers and others.
“Cleaning a laptop is arguably more tedious than cleaning a desktop. You have to clean the keyboard, the internals, the screen, and the case itself. Still, you can easily give your laptop a makeover in under one hour, provided you have canned air, some 90%-100% isopropyl alcohol, cotton swabs, and a microfiber cloth.”-Andrew Heinzman, “How to Properly Clean Your Gross Laptop.” HowToGeek.com. July 2, 2019.
Hard to tell the difference.
“My 11-year-old laptop can compile the Linux kernel from scratch in 20 minutes, and it can play 1080p video in real-time. That’s all I need! Many users cannot afford high-end computer hardware, and most have better things to spend their money on. And you know, I work hard for my money, too – if I can get a computer which can do nearly 5 billion operations per second for $60, that should be sufficient to solve nearly any problem. No doubt, there are faster laptops out there, many of them with similarly impressive levels of compatibility with my ideals. But why bother?”
—Drew DeVault, “Why I Use Old Hardware.” drewdevault.com. January 23, 2019.
One of the advantages of using free software is that support for old hardware tends to get better. The more free you go, say to the level of wanting a free bootloader such as libreboot and all free-software drivers, the more likely you are either using old hardware or spending a lot of money on a free machine from companies like Purism.
My main computer is a Lenovo Thinkpad T400 with libreboot. You can buy one for <$150 on eBay.