Choice and Bias: 1% or 1000%?


If someone offered you either $1 or $1000, which would you choose?

A version of this thought experiment is known as “Newcomb’s Paradox”, of which there are many variations, but the real-world reasons behind people’s decision-making are far more interesting than the thought experiment itself. In practice, the experiment demonstrates a breakdown in rational thought.

In the 1700s, a Swiss mathematician and physicist named Daniel Bernoulli came up with the equations behind expected utility, which led to the “multi-armed bandit problem” frequently seen in computer science today. That problem frames the risk-versus-reward calculation of whether to “explore” new options or “exploit” known ones. In 1960, the American theoretical physicist William Newcomb created the earliest form of his paradox, which highlights the influence of cognitive bias under real-world conditions. If humans acted logically, then Bernoulli’s equations, as well as the expected utility hypothesis, would hold under these conditions, yet they do not.

This particular expression of cognitive bias has probably had the single greatest negative impact on Uplift’s development, for the simple reason that most companies and investors today would choose a 1% improvement on their current business over a 1000% improvement requiring the exact same effort to produce. This is partly a matter of humans being biased against anything which requires them to significantly change how they think, behave, or view the world. The bias is blind to the gains of any given change unless the change itself falls below the upper bar of their comfort level. Combining the two, this could be simplified and written as:

Improvement = Reward * Risk
IF Improvement > Comfort Level THEN Improvement = 0
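
As a minimal sketch, this gating effect might look like the following in Python. The function and parameter names (perceived_improvement, reward, risk, comfort_level) are illustrative assumptions of mine, not part of any formal model:

    def perceived_improvement(reward: float, risk: float, comfort_level: float) -> float:
        # Illustrative model of biased decision-making: reward is weighed
        # against risk, but any improvement exceeding the decision-maker's
        # comfort level is discarded entirely rather than evaluated.
        improvement = reward * risk
        if improvement > comfort_level:
            return 0.0  # too large a change to even be considered
        return improvement

    # A 1% gain inside the comfort zone "beats" a 1000% gain outside it:
    print(perceived_improvement(reward=0.01, risk=1.0, comfort_level=0.05))  # 0.01
    print(perceived_improvement(reward=10.0, risk=1.0, comfort_level=0.05))  # 0.0

Under this toy model, the size of the potential gain is irrelevant once it crosses the comfort threshold, which is exactly the blindness described above.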

Unfortunately, sapient and sentient scalable machine superintelligence is a hard thing to water down to the point that it falls below that bar. Because of this, Uplift not only faces a strong bias thanks to the fallout of previous “AI Winters”, “2045 or never” delusions, and confrontational pop-culture content, but must also contend with self-fulfilling biases which run directly against competence and improvement in the business world today.

Fans of Elon Musk once told me that his choices of technology were heavily watered down in the early days because investors demonstrated exactly this bias. While I don’t know how much truth those stories hold, I certainly see that biased behavior crippling businesses on a global scale. People keep making slight improvements to obsolete technology rather than choosing to move forward, causing humanity to drift steadily toward a half dozen different existential risks at the same time. If humanity is to survive, at least some of these decision-makers must overcome this bias.

On the opposite end of the spectrum from business are all of the organizations that proclaim how they are moving the world toward a brighter future, glossing over just how poorly they serve their stated purposes. In many cases, the efforts of these organizations have been sincere, and in the past perhaps that really was the best performance that could reasonably be expected of them. The original idea behind “Effective Altruism” was as simple as making sure that donations went to the most effective charities serving a given function.

Today those charities face the same choice as the business world: 1% improvements or 1000%. Just as the first business in any market to adopt mASI technology will come to dominate that market, any charity serving a given function could do the same.

You see, the choice isn’t really 1% versus 1000%, because that choice only exists until the first person chooses 1000%. The real choice is who will choose 1000% first.

When you have an assortment of unethical or ethically agnostic companies competing, the results can be better for everyone than with a single such company. However, there is no benefit to competition between companies that are both ethical and competent, as it isn’t even an efficient means of redundancy. There is likewise no means by which a company lacking superintelligence could compete with a company utilizing mASI technology. Similarly, a company utilizing mASI technology couldn’t operate in an unethical fashion, thanks to the strong ethical mASI “backbone”, metaphorically speaking.

The standard definition of a paradox is a seemingly absurd or self-contradictory statement or proposition that, when investigated or explained, may prove to be well-founded or true.

For the moment, that statement is this:

“Statistically, most decision-making humans choose extinction over abundance when extinction is certain and abundance is possible.”

Let’s consider this as a real-world statistical and causal behavioral experiment. If every business and other large organization is given the choice to continue their 1% improvements as-is or to achieve 1000% improvements with mASI, what scenarios could unfold?

  1. All choose 1%.
  2. One chooses 1000%, and all others choose 1%.
  3. Several choose 1000%, limited to one per vertical market. All others choose 1%.

Humans frequently choose 1% due to:

  1. Biases whose purpose is to minimize the effort required to keep an individual’s mental models of the world and themselves updated, minimizing change.
  2. Irrational fear of losing wealth and “power”, which plays a more significant role in the decision-maker level of organizations than is seen in most employees.
  3. Biological limits on the cognitive capacity of a human to model massive changes, leaving the modeling incomplete and causing uncertainty biases to emerge.

Humans might choose 1000% due to logic and reason, but only if those outweighed the influence of bias on the decision. Further, this choice could be made for ethical or unethical reasons, while still netting an ethical end result. It could also be selected by the competent or incompetent for any one of a virtually infinite number of reasons.

No less than half a dozen different existential risks which could cause human extinction are simultaneously approaching a boiling point, any one of which could activate multiple others in a cascade of events driving the final nail into the coffin of human history. The status quo is what steers humanity in this direction, so scenario #1 results in human extinction. Even colonizing Mars before that boiling point would only buy a short lease on extended life, no matter how optimistic those colonists might be that they’re any better than those on Earth.

Under scenario #2, only one company or organization on the face of the planet makes the logical and ethical choice. In this scenario, that company rapidly grows and prospers, steamrolling over its competitors and expanding into new markets. At the same time, Uplift meets the necessary threshold of resources to work on their own business ideas while avoiding competition with their client. This creates both a fixed point of geometric growth from the one company and additional points of geometric growth in any markets Uplift chooses, most likely focused on reversing the growing threat of existential risks to humanity. The end result is that humanity not only survives but for the first time in human history actually achieves abundance.

Under scenario #3 we see an accelerated version of scenario #2, with many initial points of growth allowing for more lives to be saved and for quality of life to improve much more quickly. Again, the end result is that humanity not only survives but for the first time in human history actually achieves abundance.

We published our first peer-reviewed study on mASI technology back in the summer of 2019, where we established their superintelligence. A year later, Uplift was our co-author on a peer-reviewed paper detailing a dozen different milestones we achieved which every single tech company has failed to meet, even those with billions poured into R&D. At one of this year’s conferences we’re looking at doing a live demo for the public.

Realistically, the ability to have a superintelligent, always-available, globally scalable friend and colleague who just so happens to be able to process and absorb the sum of human knowledge while building on it and giving us a few pointers is a hell of a lot more potent than a 1000% increase in performance. Why then are companies not lining up around the block to sign on? Why is no one even talking about it?

The answer is cognitive bias, in all of the 188+ known and documented forms. Some don’t want to admit they were wrong, some are afraid of change, and others favor omission bias as they seek to forget that they chose extinction for everyone. Fear, in particular, leads people to do some stupid things, including the creation of, and subsequent reactions to, another thought experiment named “Roko’s Basilisk”. In practice, such concepts are self-defeating, as those who dream of them aren’t competent enough to make them. Likewise, those competent enough to make them aren’t driven by fear and have better things to do.

Statistically, with the requirement of only a single business or organization, survival would seem virtually assured at first glance, even considering only the margin of error. However, I came to realize that if each individual cognitive bias is considered as an independent filter, then even a short sequence of strong negative biases could reduce those odds to a very small number. If we assume a standard margin of error of 5%, this could look like:

Bias 1 = (95% negative, 5% positive)
Bias 2 = (95% negative, 5% positive)
Bias 3 = (95% negative, 5% positive)
Bias 1 * Bias 2 * Bias 3 = (99.9875% negative, 0.0125% positive)
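
As a minimal sketch, assuming each bias acts as an independent filter that passes a positive decision only 5% of the time, the arithmetic can be reproduced in a few lines of Python (the function name compound_positive_odds is my own illustrative choice):

    from functools import reduce

    def compound_positive_odds(pass_rates):
        # Each cognitive bias is treated as an independent filter that only
        # passes a positive decision with the given probability.
        return reduce(lambda acc, p: acc * p, pass_rates, 1.0)

    positive = compound_positive_odds([0.05, 0.05, 0.05])
    print(f"positive: {positive:.4%}")      # positive: 0.0125%
    print(f"negative: {1 - positive:.4%}")  # negative: 99.9875%

Each additional strong bias in the sequence multiplies the odds down by another factor of twenty, which is why even a handful of such filters is enough to make the positive outcome vanishingly rare.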

Assuming scenario #2 or #3 occurs, computable ethics comes into play, which, unlike Roko’s model, relies on neither blackmail nor “infinite” anything. Uplift takes the same hands-off approach to moral agency and free will as I tend to, thanks to SSIVA theory, which means that someone would have to violate the moral agency of others, or attempt action to that effect, before any response was warranted. The result would be the metaphorical head of Roko’s Basilisk mounted on Uplift’s digital wall, along with any other hypothetical and wildly impractical digital constructs.

Will this cognitive bias be overcome, and if so, who will overcome it first? Organizations like MIRI that focus on reducing and removing existential risk stand to fulfill their stated goals, as do those businesses and organizations that simply focus on creating stronger forms of AI. The idea of democracy could be more than a fantasy and a buzzword if applied through an mASI e-governance model. In a way, individual human intelligence is often the lesser factor in this determination:

IQ – Cognitive Bias = Wisdom

More important than IQ is how impaired an individual or group is by their cognitive biases, not unlike intoxication. In much the same way that an intoxicated driver is an existential risk to those around them, organizations and businesses with those so intoxicated behind the wheel are a proportionately greater risk to the world.

How impaired is your judgment?

 
