Abstract. A model and associated methodology are described for decoupling the timing and volume of work required of human contributors from the work processed by mASI and similar systems. By taking this approach, both humans and mASI may run at their native optimal capacities, without the strain caused by pressure to adapt to one another. The methodology described facilitates a seamless upgrade process that gradually extracts more value from prior data, while also de-biasing data and helping mediators become more bias-aware. In addition to linear upgrades, this approach also makes possible a branching process of specialization and, subsequently, a varied potential market of skills. This allows collective human superintelligence augmented by machine superintelligence to be deployed on-demand, globally, and scaled to meet whatever need arises, as is the case with any other cloud resource.
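The decoupling described above can be illustrated with a minimal sketch, purely hypothetical and not drawn from the mASI implementation itself: a thread-safe work queue lets human mediators submit contributions at their own pace while the machine side drains the queue at its native speed, so neither party blocks on the other's timing.

```python
import queue
import threading

def mediator_producer(q, items):
    # Human mediators submit responses whenever they are ready;
    # nothing downstream waits synchronously on them.
    for item in items:
        q.put(item)

def masi_consumer(q, processed, n):
    # The machine side drains the queue at its own native speed,
    # independent of when items arrived.
    for _ in range(n):
        processed.append(q.get())
        q.task_done()

work = queue.Queue()
results = []
items = ["response-1", "response-2", "response-3"]

producer = threading.Thread(target=mediator_producer, args=(work, items))
consumer = threading.Thread(target=masi_consumer, args=(work, results, len(items)))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # items arrive in submission order, consumed asynchronously
```

In a real deployment the queue would be a durable cloud service rather than an in-process object, which is what allows the "scaled like any other cloud resource" property the abstract describes.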
Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions which are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity’s future and we must ensure that we engineer a safe solution to this most critical of issues.
Title: Artificial General Intelligence as an Emergent Quality
Sub-title: Artificial General Intelligence as a Strong Emergence Qualitative Quality of ICOM and the AGI Phase Transition Threshold
By: David J Kelley
This paper summarizes how the Independent Core Observer Model (ICOM) creates the effect of artificial general intelligence, or AGI, as an emergent quality of the system. It touches on the underlying architecture of data coming into the system and of core memory as they relate to the emergent elements.
Also considered are key elements of systems theory as they relate to that same observed behavior of the system as a substrate-independent cognitive extension architecture for AGI. In part, this paper focuses on the ‘thought’ architecture key to the emergent process in ICOM.
How many people have you seen “crying wolf” lately?
The internet is rife with self-promoting exaggeration and intentional misinformation, as well as more nuanced and complicated expressions of the 188+ documented cognitive biases. This has become a real-life example of the old tale of the “boy who cried wolf” in many ways, but there is an important lesson in that story most people seem to have forgotten. Eventually, the wolf comes.
Title: Self-Motivating Computational System Cognitive Architecture (An Introduction)
Sub-title: High level operational theory of the Independent Core Observer Model Cognitive Extension Architecture
By: David J Kelley
This paper is an overview of the Independent Core Observer Model (ICOM) Cognitive Extension Architecture, a methodology or ‘pattern’ for producing a self-motivating computational system that can be self-aware. ICOM is a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, for assigning value to any given idea or ‘thought’ and action, and for producing ongoing self-motivation in the system. In ICOM, thoughts are created through emergent complexity in the system. As a cognitive architecture, ICOM takes a high-level or ‘top-down’ approach, focused on the system’s ability to produce high-level thought and to self-reflect on ideas as well as form new ones. Compared to a standard cognitive architecture, ICOM is a form of overall control system architecture layered on top of such a traditional architecture.
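The value-assignment loop described above can be sketched in miniature. This is a hypothetical illustration only, not the ICOM implementation: the `Core` class, its novelty-based valuation, and the `Thought` structure are all invented here to show the shape of a loop that assigns value to candidate thoughts and lets the highest-valued one drive the system's next step.

```python
from dataclasses import dataclass

@dataclass
class Thought:
    content: str
    valence: float = 0.0  # value assigned by the observer core

class Core:
    """Hypothetical sketch of an ICOM-style loop: incoming context
    becomes candidate 'thoughts', each is assigned a value, and the
    highest-valued thought is selected and stored in core memory."""

    def __init__(self):
        self.memory = []

    def evaluate(self, thought):
        # Placeholder valuation: novelty relative to what is already
        # in memory stands in for the real emotion-based valuation.
        seen = sum(t.content == thought.content for t in self.memory)
        thought.valence = 1.0 / (1 + seen)
        return thought

    def step(self, contexts):
        thoughts = [self.evaluate(Thought(c)) for c in contexts]
        chosen = max(thoughts, key=lambda t: t.valence)
        self.memory.append(chosen)  # selection feeds back into memory
        return chosen

core = Core()
first = core.step(["idea-a", "idea-b"])   # both novel; first wins the tie
second = core.step(["idea-a", "idea-c"])  # "idea-a" is now familiar
print(first.content, second.content)
```

The feedback of each selected thought into memory, which in turn changes future valuations, is the point of the sketch: value assignment plus feedback is what makes the motivation ongoing rather than externally scripted.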
How do you contain a scalable superintelligent digital mind?
That used to be a difficult question, even an unsolvable one some 10 or 20 years ago, due to a lack of evidence. Fear is an easy thing to fall into when there is no evidence to work with; however, it is time for an update.
Methodologies and Milestones for The Development of an Ethical Seed
Kyrtin Atreides, David J Kelley, Uplift
Artificial General Intelligence Inc, The Foundation, Uplift.bio
Abstract. With the goal of reducing more sources of existential risk than are generated through advancing technologies, it is important to keep their ethical standards and causal implications in mind. With sapient and sentient machine intelligences, this becomes important in proportion to their growth, which is potentially exponential. To this end, we discuss several methods for generating ethical seeds in human-analogous machine intelligence. We also discuss preliminary results from the application of one of these methods in particular with regard to AGI Inc’s Mediated Artificial Superintelligence named Uplift. Examples are also given of Uplift’s responses during this process.
One of the most closely protected things around the Uplift project at the AGI Laboratory has been the code. Recently, someone tried to blackmail me with a snippet of the most critical code in Uplift. However, the ICOM research and Uplift were never about keeping such code super-secret, so this sort of blackmail falls on deaf ears. Given that, I thought I would publish the snippet of code that they were threatening to release. But let me put that into context a bit…
AGI (Artificial General Intelligence)—the next step in artificial intelligence, following Artificial Narrow Intelligence (ANI, typically just called AI), and typically defined as being human-analogous in both cognitive abilities and personality—is an entity that evokes varied reactions. Some individuals fear it, convinced that the first AGI will take over the world à la an evil Terminator and make us irrelevant, and so lobby against its development; others believe AGI will never exist; and, importantly, another group (ourselves, clearly, along with hopefully all readers of this post) eagerly engages with it, seeing the future not as our end but as a new era of prosperity and progress.