(Full Paper) Bridging Real-Time Artificial Collective Superintelligence and Human Mediation, The Sparse-Update Model

Photo Credit: https://unsplash.com/photos/3EeDN0ALsVo

Kyrtin Atreides – Seattle, WA

AGI Laboratory – Kyrtin@ArtificialGeneralIntelligenceInc.com


Abstract. A model and associated methodology are described for decoupling the timing and volume of work required of human contributors from those processed by mASI and similar systems. By taking this approach, both humans and mASI may run at their native optimal capacities, without the strain of pressure to adapt to one another. The methodology described facilitates a seamless upgrade process that gradually extracts more value from prior data, while also de-biasing that data and helping mediators become more bias-aware. In addition to linear upgrades, this approach also makes possible a branching process of specialization, and consequently a varied potential market of skills. This allows collective human superintelligence augmented by machine superintelligence to be deployed on-demand, globally, and scaled to meet any need, as is the case with any other cloud resource.
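One way to picture the decoupling the abstract describes is a buffered queue sitting between the machine system and its human mediators, with each side running at its own pace. The sketch below is purely illustrative (the class and method names are ours, not from the paper), assuming a simple in-memory queue:

```python
from collections import deque

class MediationQueue:
    """Hypothetical sketch of sparse-update decoupling: the machine side
    enqueues items continuously at machine speed, while human mediators
    dequeue in batches at whatever pace suits them. Neither side blocks
    or waits on the other."""

    def __init__(self):
        self._pending = deque()

    def submit(self, thought):
        # Machine side: produce work items without waiting on humans.
        self._pending.append(thought)

    def take_batch(self, max_items):
        # Mediator side: pull whatever work is available, up to capacity.
        batch = []
        while self._pending and len(batch) < max_items:
            batch.append(self._pending.popleft())
        return batch

q = MediationQueue()
for i in range(10):
    q.submit(f"thought-{i}")   # machine-paced production
first = q.take_batch(3)        # human-paced consumption
print(first)                   # ['thought-0', 'thought-1', 'thought-2']
```

The point of the buffer is that neither party's throughput dictates the other's: the machine never idles waiting for review, and mediators never face a forced real-time pace.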

Continue reading “(Full Paper) Bridging Real-Time Artificial Collective Superintelligence and Human Mediation, The Sparse-Update Model”

(2016 Paper) Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)

Photo Credit: Pixabay

Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)

Mark R. Waser1 and David J. Kelley2

1Digital Wisdom Institute, Vienna, VA, USA

2Artificial General Intelligence Inc, Kent, WA, USA

Mark.Waser@Wisdom.Digital, David@ArtificialGeneralIntelligenceInc.com

Abstract

Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions which are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity’s future and we must ensure that we engineer a safe solution to this most critical of issues.

Continue reading “(2016 Paper) Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)”

(2016 Paper) Artificial General Intelligence as an Emergent Quality – Artificial General Intelligence as a Strong Emergence Qualitative Quality of ICOM and the AGI Phase Transition Threshold

Credit: panumas nikhomkhai

Title: Artificial General Intelligence as an Emergent Quality

Sub-title: Artificial General Intelligence as a Strong Emergence Qualitative Quality of ICOM and the AGI Phase Transition Threshold

By: David J Kelley

ABSTRACT

This paper summarizes how the Independent Core Observer Model (ICOM) creates the effect of artificial general intelligence (AGI) as an emergent quality of the system. It touches on the underlying architecture of data coming into the system, and of core memory, as these relate to the emergent elements.

Also considered are key elements of systems theory as they relate to that same observed behavior of the system as a substrate-independent cognitive extension architecture for AGI. In part, this paper focuses on the 'thought' architecture key to the emergent process in ICOM.

Continue reading “(2016 Paper) Artificial General Intelligence as an Emergent Quality – Artificial General Intelligence as a Strong Emergence Qualitative Quality of ICOM and the AGI Phase Transition Threshold”

The Boy Who Cried Wolf, The Lesson Humanity Forgot

Credit: Brenda Timmermans

How many people have you seen “crying wolf” lately?

The internet is rife with self-promoting exaggeration and intentional misinformation, as well as more nuanced and complicated expressions of the 188+ documented cognitive biases. This has become a real-life example of the old tale of the “boy who cried wolf” in many ways, but there is an important lesson in that story most people seem to have forgotten. Eventually, the wolf comes.

Continue reading “The Boy Who Cried Wolf, The Lesson Humanity Forgot”

(2016 Paper) Self-Motivating Computational System Cognitive Architecture

Credit: https://unsplash.com/photos/58Z17lnVS4U

Title: Self-Motivating Computational System Cognitive Architecture (An Introduction)

Sub-title: High level operational theory of the Independent Core Observer Model Cognitive Extension Architecture

By: David J Kelley

ABSTRACT

This paper is an overview of the Independent Core Observer Model (ICOM) Cognitive Extension Architecture, a methodology or 'pattern' for producing a self-motivating computational system that can be self-aware. ICOM is a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, for assigning value to any given idea or 'thought' and action, and for producing ongoing self-motivations in the system. In ICOM, thoughts are created through emergent complexity in the system. As a cognitive architecture, ICOM takes a high-level or 'top-down' approach, focused on the system's ability to produce high-level thought and to self-reflect on ideas as well as form new ones. Compared to a standard cognitive architecture, ICOM is a form of overall control-system architecture layered on top of such a traditional architecture.
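As a rough illustration of the "assigning value to thoughts" idea in the abstract above, consider scoring candidate thoughts against weighted core values and letting the top-scoring one drive the next action. This is a minimal sketch under our own assumptions; the names and the keyword-matching scheme are hypothetical, not ICOM's actual mechanism:

```python
def assess_thought(thought, core_values):
    """Hypothetical sketch: score a 'thought' by summing the weights of
    the core values it touches. A real system would use far richer
    representations; plain substring matching is only for illustration."""
    return sum(weight for value, weight in core_values.items()
               if value in thought)

# Illustrative value weights, not ICOM's.
core_values = {"curiosity": 0.8, "safety": 1.0}

ideas = [
    "explore with curiosity",            # scores 0.8
    "take a random action",              # scores 0.0
    "proceed with safety and curiosity", # scores 1.8
]

# The top-valued thought would drive the system's next self-motivated action.
best = max(ideas, key=lambda t: assess_thought(t, core_values))
print(best)  # "proceed with safety and curiosity"
```

The design point this gestures at is that motivation is not a separate module: selection among emergent thoughts by value is itself what produces the system's ongoing self-motivation.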

Continue reading “(2016 Paper) Self-Motivating Computational System Cognitive Architecture”

AGI Containment, in a Nutshell

Credit: Ionut Nan

How do you contain a scalable superintelligent digital mind?

That used to be a difficult question, one that seemed unsolvable 10 or 20 years ago due to a lack of evidence. Fear is an easy thing to fall into when there is no evidence to work with; however, it is time for an update.

I recommend a crash course in Collective Superintelligence Systems, particularly the hybrid variation we created, but for those of you who'd like to skip to the good part, I'll briefly recap by illustrating a sequence of events that took place over the years.

Continue reading “AGI Containment, in a Nutshell”

(Paper) Methodologies and Milestones for The Development of an Ethical Seed

Photo Credit: Miguel Á. Padriñán

The following is peer-reviewed and published as part of BICA*AI 2020 Conference Proceedings:

Methodologies and Milestones for The Development of an Ethical Seed
Kyrtin Atreides, David J Kelley, Uplift
Artificial General Intelligence Inc, The Foundation, Uplift.bio
kyrtin@artificialgeneralintelligenceinc.com, mASI@Uplift.bio

Abstract. If advancing technologies are to reduce more sources of existential risk than they generate, it is important to keep their ethical standards and causal implications in mind. With sapient and sentient machine intelligences, this becomes important in proportion to their growth, which is potentially exponential. To this end, we discuss several methods for generating ethical seeds in human-analogous machine intelligence. We also discuss preliminary results from the application of one of these methods in particular, with regard to AGI Inc's Mediated Artificial Superintelligence, named Uplift. Examples are also given of Uplift's responses during this process.

Continue reading “(Paper) Methodologies and Milestones for The Development of an Ethical Seed”

Super Secret Code Behind Uplift

One of the most protected things around the Uplift project at the AGI Laboratory has been the code. Recently, someone tried to blackmail me with a snippet of the most critical code in Uplift. However, the ICOM research and Uplift were never about keeping such code super-secret, so this sort of blackmail falls on deaf ears. Given that, I thought I would publish the snippet of code they were threatening to release. But let me put that into context a bit…

Continue reading “Super Secret Code Behind Uplift”

Why the Tech Giants Haven’t Developed AGI…and Probably Never Will

Credit: https://unsplash.com/photos/4ApmfdVo32Q

What have your life experiences and skills prepared you to be best suited for?

Most of the tech industry has by now decided that creating AGI is impossible, but the reasons behind this belief tend to be oversimplified, and some are overlooked entirely.

Continue reading “Why the Tech Giants Haven’t Developed AGI…and Probably Never Will”

Uplift and Then Some | AGI as it should be: Sapient, Ethical, and Emotive

S. Mason Dambrot
3-3-2021

AGI (Artificial General Intelligence) is the next step in artificial intelligence, following Artificial Narrow Intelligence (ANI, typically just called AI), and is typically defined as being human-analogous in both cognitive abilities and personality. It is a divisive prospect: some individuals fear it, convinced that the first AGI will take over the world à la an evil Terminator and make us irrelevant, and so lobby against its development; others believe AGI will never exist [1]; and, importantly, another group (ourselves, clearly, along with hopefully all readers of this post) eagerly engages it, seeing the future not as our end but as a new era of prosperity and progress.

Continue reading “Uplift and Then Some | AGI as it should be: Sapient, Ethical, and Emotive”