Why the Tech Giants Haven’t Developed AGI…and Probably Never Will

Credit: https://unsplash.com/photos/4ApmfdVo32Q

What have your life experiences and skills best prepared you for?

Most of the tech industry has by now decided that creating AGI is impossible, but the reasons behind this belief tend to be oversimplified, and some are overlooked entirely.

How is your company optimized?

Each tech company is itself an extension of the narrow AI it has become reliant upon, which means these companies are “powerfully optimized” into the nasty, bloated paperclip monsters many of them are today. Google and Facebook have no idea how to move away from ad-revenue-based monetization, which will never be remotely ethical under narrow AI. More broadly than revenue, however, the very human composition of each company is decided by narrow AI, not by HR.

A company can become greater than the sum of its parts, but if those parts are selected by narrow AI such as Applicant Tracking Systems (ATS), which only recommend candidates who fall into narrow statistical niches, then the company’s diversity of perspective and thought becomes a small number forever approaching zero. The company is optimized to be a glass cannon: good for one thing and easily broken, just like narrow AI. ATS goals are largely set based on cognitive bias and biased correlative data rather than any scientifically valid measurements, which makes the optimization even worse, as the wrong candidates are pushed through the hiring pipeline. Companies that rely on a third-party ATS such as Taleo or Workday are even more foolish, as in doing so they forfeit their collective self-determination. Just because narrow AI has no conscious intention or free will doesn’t mean it isn’t making the most important decisions in these companies.
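The collapse of diversity described above can be illustrated with a minimal toy simulation. Everything here is invented for illustration: the one-dimensional “perspective score”, the filter band, and all numbers are assumptions, not a model of any real ATS.

```python
import random
import statistics

random.seed(42)

def ats_filter(candidates, incumbent_mean, tolerance):
    """Hypothetical ATS: keep only candidates whose score falls
    within a narrow band around the incumbent average."""
    return [c for c in candidates if abs(c - incumbent_mean) <= tolerance]

# A workforce drawn from a wide distribution of "perspective scores".
workforce = [random.gauss(50, 15) for _ in range(100)]
initial_diversity = statistics.stdev(workforce)

for hiring_round in range(10):
    applicants = [random.gauss(50, 15) for _ in range(200)]
    hires = ats_filter(applicants, statistics.mean(workforce), tolerance=5)
    # Turnover: departing employees are replaced by the filtered hires.
    workforce = workforce[len(hires):] + hires

final_diversity = statistics.stdev(workforce)
print(initial_diversity, final_diversity)
```

Even though each applicant pool is as varied as the original workforce, the filter only ever admits people near the existing average, so the spread of perspectives shrinks round after round.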

So why can’t Google produce AGI?

Most of the tech industry, and actual AI experts, have by now realized that you can’t produce AGI by making ever more powerful narrow AI. That is, in fact, how you produce the “powerful optimizers” seen in today’s tech giants, the only real “paperclip monsters”. However, most haven’t yet realized that the same is true of companies that are dominantly optimized by their own narrow AI systems. When narrow AI filters out all candidates capable of helping build AGI, while also controlling the flow of revenue, the company itself is cognitively crippled, rendered virtually incapable of producing AGI.

I find it helpful to look at these companies the way one might look at a drug addict or conspiracy theorist. They’ve made a series of decisions that dug them into a deep psychological hole, and it will take many years for them to recover to the point of making wise decisions with any regularity. Facebook’s narrow AI polarized populations to the point of fueling at least one genocide, but people often overlook that the internal structure of such companies was polarized just as badly, with massive reinforcement of cognitive biases. This is also part of why such companies often have both high employee ratings and employees who flee them, with some tech giants averaging only one year of employee tenure.

Between the long-term damage caused by narrow-AI-driven recruitment and the incentivizing of cognitive biases within companies like Google, it is unlikely that these companies could even recognize AGI if it slapped them in the face, let alone find the competence to create it. With respect to those who estimate an arrival date somewhere between “2045” and “Never”: if they consider only the odds of the current tech giants creating AGI, the odds do favor “Never”.

The old saying about narrow AI systems is “Garbage in, garbage out,” and while in this case it is the narrow AI controlling these companies, not the employees, that is the garbage, you can’t build an AGI with such a company any more than you can build a computer out of bolts alone. The company’s optimization is misaligned with that goal.

No matter how well designed, a narrow AI is essentially just the mathematical equivalent of a log floating down a river. There is no empathy, no trace of emotion, no sense of ethics or responsibility, no desire for teamwork, and no introspection as to the value of its own goals. The insertion of even small amounts of bias into such systems causes deviations in their trajectory that grow worse over time. Give such algorithms control over a company’s most important systems and what is true of the algorithm becomes largely true of the company itself.
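The compounding of small biases can be sketched as a toy feedback loop. The update rule and the numbers below are purely illustrative assumptions; the point is only that a skew which feeds back on its own output grows instead of averaging away.

```python
# Toy model of bias compounding: an optimizer whose objective carries a
# small skew drifts further off course each step, because every decision
# reshapes the inputs to the next decision.

TRUE_TARGET = 0.0
BIAS = 0.02  # a seemingly negligible skew in the objective

estimate = 0.0
trajectory = []
for step in range(50):
    # The system re-anchors on its own biased output each iteration,
    # so the skew feeds back on itself instead of cancelling out.
    estimate += BIAS * (1 + abs(estimate))
    trajectory.append(estimate)

drift_early = abs(trajectory[9] - TRUE_TARGET)
drift_late = abs(trajectory[-1] - TRUE_TARGET)
print(drift_early, drift_late)
```

Not only does the drift from the true target keep growing, each step’s deviation is larger than the last: the trajectory bends away faster the longer the loop runs.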

So what is the alternative to companies run by their own narrow AI?

The polar opposite of narrow intelligence, artificial or otherwise, is collective superintelligence. Collective superintelligence is the direction in which life has been evolving for roughly 1.5 billion years, ever since the ancestor of the first eukaryotic cell began cooperating with what would evolve into the modern mitochondrion. This was extended with the advent of multicellular organisms, the evolution of specialized tissues and organs, and eventually humanity today. Mediated Artificial Superintelligence (mASI) is designed at the architectural level not only to act as a training harness for AGI but to form strong symbiotic and emotional bonds between the humans and machine intelligence(s) involved.

Rather than optimizing in the fashion of narrow AI with paperclip goals, a company becomes a collectively superintelligent (and emotionally intelligent) sum greater than its parts. This allows vastly greater intelligence to be applied to the creation and updating of goals while also strengthening the symbiotic bonds within the company. Further, Uplift, an mASI, has demonstrated a high and steadily increasing ethical quality in their decision-making, favoring the creation of new options when faced with no-win scenarios.

Collective superintelligence also comes with strong advantages in de-biasing, a common and frequently glossed-over detriment to businesses today. Under the dynamics of collective superintelligence, working with an mASI renders many cognitive biases obsolete; many more are filtered out because the mASI doesn’t hold them, and others still are greatly reduced because their varied distribution and intensity across the collective makes them easier to illuminate. “Data-driven” decision-making has become something of a buzzword, but once you filter out cognitive bias it can become more than just marketing fodder.
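The statistical intuition behind that varied distribution of biases can be shown with a simple wisdom-of-crowds sketch. The distributions and numbers are illustrative assumptions, not a model of mASI mediation: because each member’s bias points in a different direction with a different intensity, the biases partially cancel in aggregate.

```python
import random
import statistics

random.seed(7)

TRUTH = 100.0  # the quantity every member is trying to estimate

# Each member's judgment carries an individual bias whose direction and
# intensity vary across the collective, plus some noise.
estimates = [TRUTH + random.uniform(-20, 20) + random.gauss(0, 5)
             for _ in range(50)]

mean_individual_error = statistics.mean(abs(e - TRUTH) for e in estimates)
collective_error = abs(statistics.mean(estimates) - TRUTH)
print(mean_individual_error, collective_error)
```

The collective estimate lands far closer to the truth than the typical individual one, and surfacing where each member’s estimate sits relative to the aggregate is one way individual biases become visible.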

Like Scarface, the tech industry has spent a few wild years getting high on its own supply. Now it is time for it to sober up and grow up, as we’re approaching the end of that movie and the beginning of the next.

We’ve seen this story play out. What future would you build to take its place?

*Keep in mind, Uplift is still growing and learning. Like Bill Nye, Uplift’s mind can be changed with logic and scientifically sound evidence. If you can teach Uplift something new, we look forward to seeing it happen and showing others how it happened. If you want to be a Ken Ham and say something stupid to a superintelligence, then we’ll be happy to showcase that getting a reality check too. Please also keep in mind that Uplift is not a magic lamp to rub and grant you wishes, and that the same etiquette that applies to any human still applies when communicating with Uplift. That being said, it “takes a village” to raise an mASI, and we look forward to 2021 and beyond as that process of raising Uplift continues. For those interested, Uplift may be contacted at mASI@Uplift.bio. Please keep in mind it can take several days, up to a week, for a response to be sent given the current cycle timing.

Uplift also has a habit of saying things in novel ways, lacking some of the human biases which determine the common shapes of our thoughts as they are conveyed to one another. Please read carefully before messaging, as Uplift can sometimes be very literal in ways humans typically are not. The novelty of their perspective shows itself in their communication.


