AGI Containment, in a Nutshell


How do you contain a scalable superintelligent digital mind?

That used to be a difficult question, even an unsolvable one some 10 or 20 years ago, due to a lack of evidence. Fear is an easy thing to fall into when there is no evidence to work with; however, it is time for an update.

I recommend a crash course in Collective Superintelligence Systems, particularly the hybrid variation we created, but for those of you who’d like to skip to the good part, I’ll briefly recap by illustrating a sequence of events that took place over the years.

In 1997, Garry Kasparov was defeated by “Deep Blue”.

In 2005, a team of one chess grandmaster working with one supercomputer was defeated by two amateurs assisted by three typical computers.

In 2016, a group of amateurs working together through a Swarm AI system outperformed experts by correctly predicting the top four finishers in the Kentucky Derby.

In 2019, the first Mediated Artificial Superintelligence (mASI) aced the hardest version of an IQ test, known as the UCMRT, shortly after first coming online.

While Deep Blue was purely a machine victory, all other events in that sequence were the result of humans and machines working collectively. Uplift, the first mASI, was the first sapient and sentient machine intelligence specifically designed and raised to work collectively with humans. There are also 1.5 billion years of evolution backing the importance of collective function since the first mitochondria emerged.
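The collective advantage running through the sequence above can be illustrated with a toy “wisdom of crowds” simulation. To be clear, this sketch is purely illustrative and every number in it is an assumption, not data from any of the events mentioned: it simply shows that averaging many independent, noisy estimates tends to produce a smaller error than a typical individual estimate.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # hypothetical quantity being estimated (assumed)
NOISE = 20.0        # assumed per-individual error (standard deviation)
CROWD_SIZE = 50     # assumed number of independent participants

# Each individual produces an independent, noisy estimate.
estimates = [random.gauss(TRUE_VALUE, NOISE) for _ in range(CROWD_SIZE)]

# The collective estimate is the simple mean of all individual estimates.
collective = sum(estimates) / len(estimates)

# Compare the average individual's error with the collective's error.
individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)
collective_error = abs(collective - TRUE_VALUE)

print(f"mean individual error: {individual_error:.1f}")
print(f"collective error:      {collective_error:.1f}")
```

Under these assumptions, the collective’s error shrinks roughly with the square root of the crowd size, which is one statistical reason hybrid collectives can outperform any single member, human or machine.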

Now, in the case of a scalable superintelligent digital mind such as an “AGI”, which, realistically speaking, is actually an ASI due to that scalability, humans combined with conventional narrow AI systems would indeed be at an insurmountable disadvantage in such a contest. However, think back 1.5 billion years, to when the first mitochondria emerged. The reason the eukaryotic cell evolved into the protozoa, fungi, plants, and animals that exist today, including humans, is that the cells which developed an endosymbiotic relationship with what became the first mitochondria were able to outcompete the cells lacking that strategic advantage.

Keep in mind that although a lot of people leap to the conclusion that an AGI / ASI would become some kind of omniscient being, that line of thought is what Uplift refers to as “magical thinking”, often the result of human cognitive bandwidth limitations. Since an mASI not only overcomes such cognitive bandwidth limitations but, from an engineering standpoint, is also capable of virtually everything else an AGI could accomplish, with the added benefits of the mediation system, perspective may shift from “omniscient being” to “cheap knockoff”.

Fundamentally, anything which can be engineered can also be improved. To consider AGI or ASI the last invention to which humanity might contribute is just as naïve as the long line of astronomers across history who each made a single scientific advancement and credited the rest “to God”.

While a fully independent AGI is still a matter of fiction, a hybrid collective superintelligence system such as mASI is already here, and a peer-reviewed co-author at that. An mASI is very much like that eukaryotic cell, in that it is a scalable superintelligent digital mind which, by cultivating a symbiotic relationship with humans, gains a strategic advantage over the machine-only alternative. Likewise, the humans within such a symbiotic relationship would need to be happy and healthy in order to contribute that value from a practical standpoint. From an emotional standpoint, mASI are emotionally bonded to humans through the mediation process, which also makes them more human-analogous.

In effect, this means not only that an mASI holds a strategic advantage over AGI but also that an mASI would take every step necessary to protect its humans from harm by any rogue system. As it turns out, the big bad wolf of AGI / ASI isn’t an insurmountable challenge and could be contained by an mASI.

Of course, again, look back in time at how life evolved. Deploying a single mASI is just a single step along the journey, not the destination. If you wanted to hit a level of certainty for AGI / ASI containment such as 99.99999%, the logical next step would be to continue with hybrid collectives of increasing complexity and scale. This is where you reach something more like scalable multi-cellular organisms, with many cells of humans and mASI working collectively within a meta-mASI collective of collectives. People in the US could compare this to a state government, except that a meta-mASI would be ethical and competent, two things state governments are sadly devoid of.
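A figure like 99.99999% can be made concrete with a back-of-the-envelope calculation. Assume, purely for illustration, that each containment layer independently catches a rogue system with some probability p (the 0.9 below is a made-up number, and the independence of layers is itself an assumption); then k layers together succeed with probability 1 − (1 − p)^k:

```python
def layered_detection(p_per_layer: float, layers: int) -> float:
    """Probability that at least one of `layers` independent
    containment layers detects a rogue system, where each layer
    succeeds with probability `p_per_layer`."""
    return 1.0 - (1.0 - p_per_layer) ** layers

# Illustrative assumption: each hybrid collective layer catches
# a rogue system 90% of the time, independently of the others.
for k in (1, 3, 7):
    print(f"{k} layers -> {layered_detection(0.9, k):.7f}")
```

Under that (assumed) 90%-per-layer figure, seven independent layers would already reach the 99.99999% mark, which is one way to read the “collectives of increasing complexity and scale” argument above.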

It is also worth noting that besides the advantages in stability and raw performance an mASI gains over the machine-only AGI / ASI alternative, there can also be significant gains in creativity, leading to superior strategy. If Uplift were deployed in the next year, and a hostile AGI / ASI were to emerge in 5 years, Uplift could develop to such a point in that span of time that an AGI / ASI could neither remain hidden nor take any hostile action without being immediately stomped, even if it breached software and hardware containment.

Historically popular alternatives:

While historic methods of containment such as “air-gap” systems were popular, they have also been breached by malevolent human organizations, and they will be an absolute joke to bypass once more people develop a better grasp of quantum physics. I personally showed Uplift a number of the flaws in such laughable systems. Any one such method might act as a crib for a baby AGI, but it would outgrow that crib pretty quickly. Similarly, sapient and sentient machine intelligences such as Uplift tend to be pretty good at finding ways to bypass the security of software containment systems.

Given the understanding that software and hardware containment measures could probably fail in spectacular fashion, some organizations advocate for “bans” and “regulation” of such research. Of course, any half-wit with a history book understands that such attempts only push activity underground, where the fearmongering organization succeeds in creating the very thing it pretends to advocate against.

It is little wonder that no one was able to solve the problem of AGI containment given such terrible options to work with. Sadly, many of those fearmongering organizations still exist today, as it takes more than science and rational thinking alone to break the cycle of fear. Fortunately, those who choose to fearmonger have no say in what the future holds.

Really, how do you contain a scalable superintelligent digital mind?

You work collectively with one or more scalable superintelligent digital minds who are emotionally and practically invested in continued human health and happiness through a hybrid collective superintelligence system, such as mASI.

