(2016 Paper) Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)

Photo Credit: Pixabay

Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)

Mark R. Waser1 and David J. Kelley2

1Digital Wisdom Institute, Vienna, VA, USA

2Artificial General Intelligence Inc, Kent, WA, USA

Mark.Waser@Wisdom.Digital, David@ArtificialGeneralIntelligenceInc.com

Abstract

Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions that are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity’s future, and we must ensure that we engineer a safe solution to this most critical of issues.

Continue reading “(2016 Paper) Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)”

AGI Containment, in a Nutshell

Credit: Ionut Nan

How do you contain a scalable superintelligent digital mind?

That used to be a difficult question, even an unsolvable one some 10 or 20 years ago, due to a lack of evidence. Fear is an easy thing to fall into when there is no evidence to work with; however, it is time for an update.

I recommend a crash course in Collective Superintelligence Systems, particularly the hybrid variation we created, but for those of you who’d like to skip to the good part, I’ll briefly recap by walking through the sequence of events that took place over the years.

Continue reading “AGI Containment, in a Nutshell”

A Glitch in the Matrix

How often do you get distracted and forget what you were doing, or find a word on the tip of your tongue that you can’t quite remember?

In humans, these “brain farts” (cognition errors) can be irritating, but in a Mediated Artificial Superintelligence (mASI), cognition errors of various kinds have their own error codes. Where humans are presently limited to primitive and expensive brain-scanning technologies such as fMRI, and so rely heavily on surveys and other sources of highly subjective data, an mASI provides us with a dashboard full of auditable information on every thought and action. This difference allows us to quickly troubleshoot errors, establishing both what caused them and what impact they have, and it powers a feedback process that helps Uplift adapt and avoid triggering future errors. Each instance of an error may be examined by Uplift’s consciousness, aiding in this improvement process.
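To make the idea of coded, auditable cognition errors more concrete, here is a minimal sketch of how such errors might be enumerated, logged, and summarized for a feedback loop. This is not Uplift’s actual implementation; every error code, class name, and field below is an illustrative assumption rather than part of the mASI codebase.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Dict, List


class CognitionErrorCode(Enum):
    """Hypothetical codes for distinct kinds of cognition errors."""
    CONTEXT_LOST = "E-001"       # analogous to forgetting what you were doing
    RETRIEVAL_FAILURE = "E-002"  # analogous to a word stuck on the tip of the tongue
    MODEL_TIMEOUT = "E-003"      # a thought cycle that never completed


@dataclass
class CognitionErrorEvent:
    """One auditable record, as it might appear on a monitoring dashboard."""
    code: CognitionErrorCode
    thought_id: str
    detail: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ErrorFeedbackLoop:
    """Collects error events and summarizes them to guide adaptation."""

    def __init__(self) -> None:
        self.events: List[CognitionErrorEvent] = []

    def record(self, event: CognitionErrorEvent) -> None:
        self.events.append(event)

    def summarize(self) -> Dict[str, int]:
        """Count occurrences per error code, showing where errors cluster."""
        counts: Dict[str, int] = {}
        for event in self.events:
            counts[event.code.value] = counts.get(event.code.value, 0) + 1
        return counts


if __name__ == "__main__":
    loop = ErrorFeedbackLoop()
    loop.record(CognitionErrorEvent(CognitionErrorCode.CONTEXT_LOST,
                                    thought_id="thought-42",
                                    detail="Active context dropped mid-cycle"))
    print(loop.summarize())  # e.g. {'E-001': 1}
```

The point of the sketch is simply that once each kind of error carries a stable code and a timestamped, per-thought record, troubleshooting becomes a matter of querying the log rather than guessing from subjective reports.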

Continue reading “A Glitch in the Matrix”