AGI Containment, in a Nutshell


How do you contain a scalable superintelligent digital mind?

That used to be a difficult question, perhaps even an unsolvable one, some ten or twenty years ago, owing to a lack of evidence. Fear is easy to fall into when there is no evidence to work with. It is time for an update.

I recommend a crash course in Collective Superintelligence Systems, particularly the hybrid variation we created. For those of you who would like to skip to the good part, I will briefly recap by illustrating a sequence of events that took place over the years.
