Bookmarking. h/t Hacker News.
Language imposes limitations. When we reason, we use language, whether symbolic or natural. But our understanding, or perhaps it is better to call it intuition, runs deeper than our reason.
A common example can be found in terms like “creepy,” “janky,” etc. We use these terms when there is uncertainty, when something is unreliable or unpredictable. The “creepy” guy on the bus is one that could possibly do something unexpected and unwanted. The “janky” piece of equipment will fail when it is needed. But, if we were certain, if we were able to reason that this person or piece of equipment were bad in some way, we would move toward judgment. This person is a bad person and must be avoided. This equipment is faulty; it must be replaced. Creepy and janky imply that we aren’t certain, but that we know more than our reason can tell.
Of course, some of what makes up our intuition is a worldview, which can be faulty. For example, people will look for information that confirms their biases, such as invoking the “precautionary principle” with respect to vaccines on some rationale, such as an untested vaccine platform or antibody-dependent enhancement of infection. However, the precautionary principle has its own bias: against the new.
There are other principles. You could also use a decision-making model that frames a decision in terms of risk and benefit. But this also has a bias. Being able to assess risks and benefits presupposes relevant experience; the model is useless where we have none.
Another would be focusing on the signal-to-noise ratio when processing information. High signal means a lot of precision in what you hear, but it also implies that you may be missing signal. When you’ve attenuated what you are listening to down to a level that screens out most noise, you are likely screening out signal as well. Perhaps that lost signal makes a difference in judgment? High signal implies a value judgment based on prior experience, and with it a level of confirmation bias.
You could probably think of many different ways of thinking about information and making decisions, and most of them would favor the status quo. So, perhaps, one way to break the tendency is to look for ways of making decisions that favor options with more unknowns, where it is difficult to make an assessment based on our prior experience. Experience forms the understructure of our thought. Broadening our experience helps us change our thinking from the ground up. More experience enables more variability in our intuitions, which in turn changes our more formal, “rational” thoughts.
“What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for.
“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes…
…AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.”—Will Douglas Heaven, “How AI is reinventing what computers are.” MIT Technology Review. October 22, 2021.
Open Question: As artificial intelligence becomes more pervasive, what limits should we impose, as a society and on ourselves, on how we use this technology in order to minimize its negative impact?
The key changes described in this article:
- Volume: many less precise calculations carried out in parallel
- Defining success by outcomes rather than by prescribing processes
- Machine autonomy, i.e., artificial intelligence prompts people, acting as surrogate and agent
All to the good. But there are negative social implications. As this technology reaches critical mass among populations, a significant portion of people will off-load a subset of decisions to machines, which may be a net positive. However, it is easy to imagine that it undermines people’s ability to think for themselves, that the subset creeps into classes of decisions where it shouldn’t, e.g., prison sentences, and that, within the areas where it is commonly used, it will create a decision-making monoculture that crowds out alternative values. For example, suppose a dominant flavor of A.I. decides that Zojirushi makes the best automated rice cookers, which they do, and only makes that recommendation. Some large percentage of people then only buy Zojirushi. The natural result is that other rice cooker options get pushed out of the market, making it difficult for new, possibly better, companies to emerge.
Many strange network effects will arise from this trend, and they deserve careful consideration. Even on a personal level, it would be good to have a clear idea of what exactly you’d like to use A.I. for, so you don’t undermine your own autonomy, as has happened in other computing eras, such as when Microsoft dominated the desktop market.
“As soon as you’ve done the easy bit, everything around it becomes easier. This is the way we solve the puzzle.
This is also the way we fix the world…
…If I run into a problem I can’t solve yet, or I encounter a subject that’s too hard for me, I go “Huh, interesting”, and save it for later, or leave it to someone better suited to it.
I don’t give up. This is important. I just move on to something else, often something nearby.
I find a problem I can solve, and then I solve it.
And everything else becomes easier.”—David R. MacIver, “You have to do the easy bits first.” notebook.drmaciver.com, July 27, 2021.
Strikes me as in the same space as my recent commentary on incrementalism. This is the way, but most problems are not jigsaw or Sudoku puzzles. The temptation with problems without a clear endpoint is to do the minimum necessary.
“…irrelevant information or unavailable options often cause people to make bad choices. When both elements are present, the probability of a poor decision is even greater.”—Chadd, I., Filiz-Ozbay, E. & Ozbay, E.Y. “The relevance of irrelevant information.” Experimental Economics. November 11, 2020. https://doi.org/10.1007/s10683-020-09687-3
Determining which options are actually available and which information is relevant to the choice is key to good decision-making. It’s obvious, but it’s also worth keeping at the forefront of our minds when making decisions.
“I often like to think in terms of these three options when I have a big decision to make.
I can call. I can maintain the status quo and keep my energy investment the same as before. I can raise. I can escalate the situation and put more energy into it. Or I can fold by exiting the situation…
…Most of the time you should be thinking: raise or fold. This is because when a choice seems difficult, it’s usually because the best option is to raise or fold, and you’re not sure which is best.
So if you find yourself at a crossroads, consider that you may really have just two viable options: raise or fold. Go big or go home…
When you’ve decided that it’s time to fold, you may have a tendency to keep asking yourself, But if I fold this hand, then what will I have left? If I fold my job, how will I pay my bills? If I fold my relationship, then who will love me again?
And the answer is simple. Just get back into the game, and you’ll be dealt a fresh hand. A fresh hand brings fresh hope. A weak hand doesn’t.”—Steve Pavlina, “Call, Raise or Fold.” StevePavlina.com. January 30, 2020.
We often make assumptions that are reasonable in one context, abstract them into a guideline, and apply that guideline to a new situation. Often, it is difficult to assess whether these situations are close enough for what we know to apply to what we don’t.
At base, this is the problem of induction. There is no rational basis to argue from circumstances we have experienced to another situation we have not.
But, we’ve all done it. Life presents us with situations where we have to make an intuitive leap that is good enough to get us to a good outcome, better than if we made assumptions based on the probabilities of random chance. However, the post from today on How Not to be Stupid suggests elements that undermine our ability to make these intuitive leaps, such as:
- We are applying it to something new. Hard to assess something that you have no experience with.
- It is a high stress situation. When the stakes are high, it is easier to make mistakes.
- We need to make a decision quickly. It’s just a form of stress.
- We are invested in a particular outcome, i.e., it is hard to get someone to see something that their livelihood depends on them not seeing.
- There is too much information to consider. When it is all noise and/or all signal it is difficult to figure out what to use to inform our intuitions and pare it down to what is essential.
- There is social suasion in the form of individual and group dynamics that influence us in particular directions.
When the whole enterprise is compromised, it is hard to tell when you have moved from a place of reasonable guesswork to one where you have come completely unmoored. The first indication is being wrong more often than average, which means you need to track how well your decisions turn out and feed that information back into your system. Otherwise, you might never realize the extent to which you are cognitively compromised.
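One lightweight way to build that feedback loop is to log each significant decision with a stated confidence, record the outcome later, and compare. The sketch below is purely illustrative; the `DecisionLog` class and the Brier-score framing are my own assumptions, not something from the quoted sources.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class DecisionLog:
    """Tracks predictions and outcomes to reveal calibration over time."""
    # Each record is (stated confidence in [0, 1], whether it came true)
    records: List[Tuple[float, bool]] = field(default_factory=list)

    def record(self, confidence: float, outcome: bool) -> None:
        """Log one prediction's confidence and its eventual outcome."""
        self.records.append((confidence, outcome))

    def hit_rate(self) -> float:
        """Fraction of predictions that came true."""
        return sum(o for _, o in self.records) / len(self.records)

    def brier_score(self) -> float:
        """Mean squared gap between confidence and outcome.

        Lower is better; always guessing 50% scores 0.25, so doing
        worse than that suggests your intuitions are miscalibrated.
        """
        return sum((c - o) ** 2 for c, o in self.records) / len(self.records)


log = DecisionLog()
log.record(0.9, True)   # confident and right
log.record(0.9, False)  # confident and wrong
log.record(0.6, True)   # tentative and right
print(f"hit rate: {log.hit_rate():.2f}, Brier score: {log.brier_score():.3f}")
```

Even a crude log like this surfaces the pattern the passage warns about: if your high-confidence calls keep failing, you are unmoored and the log is the only way you will notice.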
“A mental model is an explanation of how something works. It is a concept, framework, or worldview that you carry around in your mind to help you interpret the world and understand the relationship between things. Mental models are deeply held beliefs about how the world works…
…To quote Charlie Munger again, ’80 or 90 important models will carry about 90 percent of the freight in making you a worldly-wise person. And, of those, only a mere handful really carry very heavy freight…’
…My hope is to create a list of the most important mental models from a wide range of disciplines and explain them in a way that is not only easy to understand, but also meaningful and practical to the daily life of the average person. With any luck, we can all learn how to think just a little bit better.”
—James Clear. “Mental Models: How to Train Your Brain to Think in New Ways.” Medium.com. February 15, 2018.
His list of the most useful mental models might warrant revisiting every now and again.