Abstract. A model and associated methodology are described for decoupling the timing and volume of work required of human contributors from the workload processed by mASI and similar systems. By taking this approach, both humans and mASI may run at their native optimal capacities, without the strain of pressuring either to adapt to the other. The methodology described facilitates a seamless upgrade process that gradually gains more value from prior data, while also de-biasing data and helping mediators become more bias-aware. In addition to linear upgrades, this approach also makes possible a branching process of specialization and, subsequently, a varied potential market of skills. This allows collective human superintelligence augmented by machine superintelligence to be deployed on-demand, globally, and scaled to meet whatever need arises, as is the case with any other cloud resource.
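The decoupling described above can be illustrated with a toy simulation in which a buffer absorbs the rate mismatch between human mediators and the machine side, so each runs at its native pace. The function name, the rates, and the simple buffering scheme are illustrative assumptions, not the actual mASI mechanism.

```python
from collections import deque

def simulate(steps, human_period, machine_capacity):
    """Toy decoupling simulation (illustrative only).

    Humans contribute one mediation item every `human_period` ticks;
    the machine side drains up to `machine_capacity` items per tick.
    Returns (items_processed, max_backlog): the buffer lets each side
    keep its own tempo, with backlog as the cost of the mismatch.
    """
    buffer = deque()
    processed = 0
    max_backlog = 0
    for t in range(steps):
        if t % human_period == 0:
            # Human mediator contributes on its own schedule.
            buffer.append(f"mediation-{t}")
        for _ in range(min(machine_capacity, len(buffer))):
            # Machine side consumes at its native capacity.
            buffer.popleft()
            processed += 1
        max_backlog = max(max_backlog, len(buffer))
    return processed, max_backlog

# A fast machine keeps the backlog at zero while humans work slowly.
print(simulate(steps=100, human_period=5, machine_capacity=3))
```

Swapping the rates (fast humans, slow machine) shows the backlog growing instead, which is the case the methodology's on-demand scaling is meant to address.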
Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions which are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity's future, and we must ensure that we engineer a safe solution to this most critical of issues.
Title: Artificial General Intelligence as an Emergent Quality
Sub-title: Artificial General Intelligence as a Strong Emergence Qualitative Quality of ICOM and the AGI Phase Transition Threshold
By: David J Kelley
This paper summarizes how the Independent Core Observer Model (ICOM) creates the effect of artificial general intelligence (AGI) as an emergent quality of the system. It touches on the underlying data architecture, covering data coming into the system and core memory, as it relates to the emergent elements.
Also considered are key elements of systems theory as they relate to the observed behavior of the system as a substrate-independent cognitive extension architecture for AGI. In part, this paper focuses on the 'thought' architecture key to the emergent process in ICOM.
Title: Modeling Emotions in a Computational System
Sub-title: Emotional Modeling in the Independent Core Observer Model Cognitive Architecture
By: David J Kelley
This paper is an overview of the emotional modeling used in the Independent Core Observer Model (ICOM) Cognitive Extension Architecture research, a methodology or software 'pattern' for producing a self-motivating computational system that can be self-aware under certain conditions. While ICOM also serves as a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, it is primarily a system for assigning value to any given idea or 'thought', taking action on that basis, and producing ongoing self-motivations that drive further thought or action in the system. At a fundamental level, ICOM is driven by the idea that the system assigns emotional values to 'context' (or context trees) as it perceives them, in order to determine how it feels. In developing the engineering around ICOM, two models have been used, based on a logical understanding of emotions as modeled by traditional psychologists, as opposed to empirical psychologists, who tend to model emotions (or brain states) based on biological structures. This logical approach is also not tied to the substrate of any particular system.
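The valuation idea in the abstract above can be sketched in a few lines: each piece of context carries emotional values, and the system's felt state is an aggregate over the context tree. The emotion axes, the tree shape, and the averaging rule below are illustrative assumptions, not ICOM's actual emotional model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Context:
    """A node in a toy context tree with an emotional valuation."""
    label: str
    emotion: Dict[str, float]            # e.g. {"joy": 0.8}
    children: List["Context"] = field(default_factory=list)

def felt_state(node):
    """Blend this node's emotional values with those of its subtree
    by simple averaging (an illustrative aggregation rule)."""
    states = [node.emotion] + [felt_state(c) for c in node.children]
    axes = {k for s in states for k in s}
    return {k: sum(s.get(k, 0.0) for s in states) / len(states)
            for k in axes}

# A positive greeting tempered by an uncertain sub-context.
tree = Context("greeting", {"joy": 0.8},
               children=[Context("stranger", {"fear": 0.4})])
print(felt_state(tree))
```

The point of the sketch is only that value assignment happens over structured context rather than isolated inputs; any real implementation would use a richer emotion model and aggregation rule.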
Title: Self-Motivating Computational System Cognitive Architecture (An Introduction)
Sub-title: High level operational theory of the Independent Core Observer Model Cognitive Extension Architecture
By: David J Kelley
This paper is an overview of the Independent Core Observer Model (ICOM) Cognitive Extension Architecture, a methodology or 'pattern' for producing a self-motivating computational system that can be self-aware. ICOM serves both as a system for abstracting standard cognitive architecture from the part of the system that can be self-aware and as a system for assigning value to any given idea or 'thought' and action, as well as producing ongoing self-motivations in the system. In ICOM, thoughts are created through emergent complexity in the system. As a cognitive architecture, ICOM is a high-level or 'top-down' approach focused on the system's ability to produce high-level thought, self-reflect on ideas, and form new ones. Compared to a standard cognitive architecture, ICOM is a form of overall control system architecture layered on top of such a traditional architecture.
Methodologies and Milestones for The Development of an Ethical Seed
Kyrtin Atreides, David J Kelley, Uplift
Artificial General Intelligence Inc, The Foundation, Uplift.bio
Abstract. With the goal of ensuring that advancing technologies reduce more sources of existential risk than they generate, it is important to keep their ethical standards and causal implications in mind. With sapient and sentient machine intelligences, this becomes important in proportion to their growth, which is potentially exponential. To this end, we discuss several methods for generating ethical seeds in human-analogous machine intelligence. We also discuss preliminary results from the application of one of these methods in particular with regard to AGI Inc's Mediated Artificial Superintelligence named Uplift. Examples of Uplift's responses during this process are also given.
Abstract: This paper addresses the feasibility of building optimized mediation clients for the Independent Core Observer Model (ICOM) cognitive architecture within the Artificial General Intelligence (AGI) mediated Artificial Super Intelligence (mASI) research program. Such a client is focused on collecting contextual information, and we assess the feasibility of various hardware platforms on which to build it, including Brain-Computer Interface (BCI), Augmented Reality (AR), mobile, and related technologies. The key criterion is designing the most optimized process for mediation services in the client, since the flow of contextual information through its various interfaces is a key factor in overall mASI system performance with human mediation services.
Post proceedings of the 10th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2019 (Tenth Annual Meeting of the BICA Society)
Abstract: While Homo sapiens is without a doubt our planet's most advanced species capable of imagining, creating, and implementing tools, one of the many observable trends in evolution is the accelerating merger of biology and technology at increasing levels of scale. This is not surprising, given that our technology can be seen from a perspective in which the sensorimotor and, subsequently, prefrontal areas of our brain increasingly extend their motor (as did those of our evolutionary predecessors), perceptual, and, with computational advances, cognitive and memory capacities into the exogenous environment. As such, this trajectory has taken us to a point in the above-mentioned merger at which the brain itself is beginning to meld with its physically expressed hardware and software counterparts—functionally at first, but increasingly structurally as well, initially by way of neural prostheses and brain-machine interfaces. Envisioning the extension of this trend, I propose theoretical technological pathways to a point at which humans and non-biological human counterparts may have the option of identical neural substrates that—when integrated with Artificial General Intelligence (AGI), counterfactual quantum communications and computation, and AGI ecosystems—provide a global advance in shared knowledge and cognitive function while ameliorating current concerns associated with advanced AGI, as well as suggesting (and, if realized, accelerating) the far-future emergence of Transentity Universal Intelligence (TUI).
Prerelease selections from the upcoming paper (Peer Reviewed and Published as Part of BICA*AI Conference Proceedings 2020):
Abstract: The field of human psychology is relatively well known. It is a broad field; however, as we start creating sapient and sentient computer systems, we do not yet know what an AI's psychology may or may not be like. While the idea of 'Artificial Psychology' was started in 1963 by Dan Curtis (Crowder), it has made little progress since.