Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)
Mark R. Waser1 and David J. Kelley2
1Digital Wisdom Institute, Vienna, VA, USA
2Artificial General Intelligence Inc, Kent, WA, USA
Mark.Waser@Wisdom.Digital, David@ArtificialGeneralIntelligenceInc.com
Abstract
Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they were to take actions that are deliberately, or even incidentally, harmful to humanity, they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, the result would likely be a much brighter future than would otherwise be the case. This is a true fork in the road toward humanity’s future, and we must ensure that we engineer a safe solution to this most critical of issues.