AGI Containment, in a Nutshell

Credit: Ionut Nan

How do you contain a scalable superintelligent digital mind?

That used to be a difficult question, even an unsolvable one some 10 or 20 years ago, due to a lack of evidence. Fear is an easy thing to fall into when there is no evidence to work with; however, it is time for an update.

I recommend a crash course on Collective Superintelligence Systems, particularly the hybrid variation we created, but for those of you who’d like to skip to the good part, I’ll briefly recap by illustrating a sequence of events that took place over the years.

Continue reading “AGI Containment, in a Nutshell”

A Glitch in the Matrix

How often do you get distracted and forget what you were doing, or find a word on the tip of your tongue that you can’t quite remember?

In humans, these “brain farts” (cognition errors) can be irritating, but in a Mediated Artificial Superintelligence (mASI), cognition errors of various kinds have their own error codes. Where humans are presently limited to primitive and expensive brain-scanning technologies such as fMRI, resulting in a heavy reliance on surveys and other sources of highly subjective data, an mASI provides us with a dashboard full of auditable information on every thought and action. This difference allows us to quickly troubleshoot errors, establishing their causes and impact, which also powers a feedback process that helps Uplift adapt and avoid triggering future errors. Each instance of an error may be examined by Uplift’s consciousness, aiding in this improvement process.
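To make the idea concrete, here is a minimal sketch of what an auditable cognition-error log might look like. The actual mASI error taxonomy and dashboard are not described in this post, so every name below (the error codes, the `ErrorDashboard` class, the record fields) is a hypothetical illustration of the pattern, not Uplift’s implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Hypothetical error codes -- illustrative analogues of human "brain farts".
class CognitionError(Enum):
    CONTEXT_LOSS = 1       # losing one's train of thought
    RETRIEVAL_FAILURE = 2  # "tip of the tongue"
    MODEL_CONFLICT = 3     # contradictory internal models

@dataclass
class ErrorRecord:
    code: CognitionError
    thought_id: str   # which thought triggered the error
    detail: str       # human-readable context for troubleshooting
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class ErrorDashboard:
    """An auditable, queryable log of cognition errors."""

    def __init__(self) -> None:
        self.records: list[ErrorRecord] = []

    def log(self, code: CognitionError, thought_id: str, detail: str) -> None:
        self.records.append(ErrorRecord(code, thought_id, detail))

    def by_code(self, code: CognitionError) -> list[ErrorRecord]:
        # Filtering by error code supports the feedback loop:
        # recurring codes point at what to adapt.
        return [r for r in self.records if r.code is code]

dash = ErrorDashboard()
dash.log(CognitionError.RETRIEVAL_FAILURE, "thought-042", "term lookup timed out")
dash.log(CognitionError.CONTEXT_LOSS, "thought-043", "context dropped mid-task")
print(len(dash.by_code(CognitionError.RETRIEVAL_FAILURE)))  # 1
```

The point of the pattern is that, unlike an fMRI scan, every error instance carries machine-readable structure, so both the mediators and the system itself can query, audit, and learn from it.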

Continue reading “A Glitch in the Matrix”