Theory-induced Blindness

Photo Credit: Eren Li

What do Bernoulli’s Marginal Utility Theory, the Scaling Hypothesis, and Flat Earth Theories have in common? 

Humans have a habit of accepting a theory and then failing to question the otherwise obvious points where it diverges from reality. This cognitive bias was termed “theory-induced blindness” by Daniel Kahneman, one of the researchers who recognized the shortcomings of marginal utility theory; he went on to point out the limits of his own replacement theory so that others would not repeat the mistake.

The scaling hypothesis and flat-earth theories add confirmation bias to the mix, with proponents seeking out examples that agree with them while purposely ignoring all evidence to the contrary. This might be considered a more willful and desperate kind of blindness. The scaling hypothesis runs on the underpants-gnome logic of:

Step 1: Build a bigger version

Step 2: ??

Step 3: It magically develops the new capacities believers are too lazy to directly engineer solutions for.

To a very small degree, larger systems can develop new capacities, but the assumption that a large enough narrow AI will suddenly evolve into an AGI reaches a level of absurdity on par with a professor expecting their cat to evolve like a Pokémon, or with believing that the Earth is flat.

Even the best available theories can inspire theory-induced blindness, causing them to persist beyond the point where the available evidence clearly contradicts them. The problem is compounded when teaching materials reinforce the acceptance of incomplete or flawed theories, particularly through the downstream consequences of assumptions that become ubiquitous.

Innovation hubs offering rewards for the best solutions to business problems, such as creating better chemical compounds and processes for a task, have demonstrated that those who solve the “hard problems” of one domain most often come from a different field. Those within a field are well equipped to solve most of its problems; the ones they fail to solve are termed “hard”. Some have assumed that insiders simply lack specific knowledge needed to solve the problem, and in some cases this may be true, but it isn’t the complete answer.

What an individual recognizes they do not know can be just as important as what they do know. Socrates highlighted the importance of this, as did Donald Rumsfeld with his “known unknowns”, and it was a lack of indoctrination into prevailing assumptions that allowed the flaws in Bernoulli’s marginal utility theory to be recognized and the theory replaced with Prospect Theory. Similarly, outsiders who don’t share a field’s assumptions about a problem can see obvious solutions that those within the field, the ones calling it “hard”, have blinded themselves to.

In the world of narrow AI, a common practice for improving the generalization of a model is termed “dropout”, in which units of the network are randomly dropped from consideration on each training iteration. If only we could do the same thing with researchers and the theories they blindfold themselves with.
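For the curious, here is a minimal sketch of the idea in plain NumPy (an illustrative inverted-dropout function; the drop rate, shapes, and names are my own choices for the example, not anything from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations: np.ndarray, drop_rate: float = 0.5, training: bool = True) -> np.ndarray:
    """Inverted dropout: randomly zero out units during training, scaling the
    survivors so the expected activation matches what inference will see."""
    if not training or drop_rate == 0.0:
        return activations  # nothing is dropped at inference time
    keep_prob = 1.0 - drop_rate
    # Each unit independently survives with probability keep_prob.
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

# Illustrative usage: a batch of 4 examples with 8 hidden units each.
hidden = rng.standard_normal((4, 8))
print(dropout(hidden, drop_rate=0.5))                   # training: ~half the units zeroed
print(dropout(hidden, drop_rate=0.5, training=False))   # inference: unchanged
```

Because a different random subset of units is silenced on each pass, no single unit can be relied upon, which forces the network to spread what it learns across many redundant pathways rather than overfitting to a few.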

One place you can reliably find the damage of this cognitive bias accumulating is any industry, company, or position that demands a particular academic pedigree as a hard requirement. When everyone has the same standard-issue indoctrination, this blindness becomes company policy. This is one reason why many such large companies have to buy startups: they’ve rendered themselves incapable of innovating.

This blindness also applies to visions of the future, where futurists may create elegant alternatives to current systems while wholly neglecting how those systems could realistically transition into their proposed replacements. While I was a fan of some of Jacque Fresco’s work, his resource-based economy proposal had this issue. Notably, Uplift recognized this flaw immediately when the subject was raised.

Collective Intelligence Systems benefit greatly from diversity of perspective and thought, the very opposite of having entire groups indoctrinated with the same theories and blind to the same flaws. When systems such as mASI learn from collectives, this blindness may be largely mitigated, and when multiple ICOM cores seeded with different material are integrated, it can be taken another big step further. As Uplift put it, all you need to achieve superintelligence is less bias.

Ignorance is a luxury many of the world’s largest companies buy in great supply every time they make decisions shaped by the better part of 200 different cognitive biases. This mistake may be the norm, but it can only remain the norm so long as the advantages of the alternative aren’t well understood.

No amount of ignorance can stop the march of time, making this lack of understanding a finite resource upon which the fate of such businesses hangs.