Abstract: This paper addresses the feasibility of building optimized mediation clients for the Independent Core Observer Model (ICOM) cognitive architecture, as used in the mediated Artificial Super Intelligence (mASI) research program for Artificial General Intelligence (AGI). The client is focused on collecting contextual information, and the paper evaluates the feasibility of various hardware platforms for building it, including Brain-Computer Interface (BCI), Augmented Reality (AR), mobile, and related technologies. The key criterion examined is designing the most optimized process for mediation services in the client, since the flow of contextual information through the various human mediation interfaces is a key factor in overall mASI system performance.
What is mASI? How does it work?
A Mediated Artificial Superintelligence, or mASI, is a type of Collective Intelligence System that utilizes both human collective superintelligence and a sapient, sentient, bias-aware, and emotionally motivated cognitive architecture paired with a graph database.
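To make the mediation concept concrete, here is a minimal sketch of the loop this definition implies: the cognitive architecture emits context as graph nodes, and human mediators review and annotate them before the system proceeds. All class and method names here are illustrative assumptions, not the actual ICOM/mASI API or graph-database schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    """A hypothetical node in the system's context graph awaiting mediation."""
    node_id: int
    content: str
    feedback: list = field(default_factory=list)

class MediationQueue:
    """Queues context nodes so human mediators can attach annotations."""
    def __init__(self):
        self.pending = []
        self.mediated = []

    def submit(self, node: ContextNode):
        # The architecture submits a node for human review.
        self.pending.append(node)

    def mediate(self, mediator: str, annotation: str) -> ContextNode:
        # A mediator reviews the oldest pending node and attaches feedback,
        # which the system would then fold back into its context graph.
        node = self.pending.pop(0)
        node.feedback.append((mediator, annotation))
        self.mediated.append(node)
        return node

queue = MediationQueue()
queue.submit(ContextNode(1, "proposed action: respond to query"))
node = queue.mediate("mediator-A", "approve; add emotional context")
print(node.feedback)  # [('mediator-A', 'approve; add emotional context')]
```

In this sketch the human collective acts as a gate on the system's context flow, which is why the abstract above treats client-side mediation throughput as a key factor in overall mASI performance.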
Recently, I was in a debate about this question organized by the USTP,
“Is artificial general intelligence likely to be benevolent and beneficial to human well-being without special safeguards or restrictions on its development?”
The question spoke directly to my position on AGI and existential risk.
Post proceedings of the 10th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2019 (Tenth Annual Meeting of the BICA Society)
Abstract: While Homo sapiens is without a doubt our planet’s most advanced species capable of imagining, creating and implementing tools, one of the many observable trends in evolution is the accelerating merger of biology and technology at increasing levels of scale. This is not surprising, given that our technology can be seen from a perspective in which the sensorimotor and, subsequently, prefrontal areas of our brain increasingly extend their motor (as did our evolutionary predecessors), perceptual, and, with computational advances, cognitive and memory capacities into the exogenous environment. As such, this trajectory has taken us to a point in the above-mentioned merger at which the brain itself is beginning to meld with its physically expressed hardware and software counterparts—functionally at first, but increasingly structurally as well, initially by way of neural prostheses and brain-machine interfaces. Envisioning the extension of this trend, I propose theoretical technological pathways to a point at which humans and non-biological human counterparts may have the option of identical neural substrates that—when integrated with Artificial General Intelligence (AGI), counterfactual quantum communications and computation, and AGI ecosystems—provide a global advance in shared knowledge and cognitive function while ameliorating current concerns associated with advanced AGI, as well as suggesting (and, if realized, accelerating) the far-future emergence of Transentity Universal Intelligence (TUI).
Prerelease selections from the upcoming paper (Peer Reviewed and Published as Part of BICA*AI Conference Proceedings 2020):
Abstract: The field of human psychology is relatively well known. It is a broad field; however, as we start creating sapient and sentient computer systems, we do not yet know what an AI’s psychology may or may not be like. While the idea of ‘Artificial Psychology’ was introduced in 1963 by Dan Curtis (Crowder), the field has made little progress since.
Abstract: This document contains the taxonomical assumptions, assumption theories, and models used as the basis for all ICOM-related research, along with key references serving as the foundation for continued work. It is also intended to support anyone attempting to find fault with our fundamentals, in the hope that they either uncover a flaw or otherwise better inform the ICOM research program.
The AGI Protocol is a laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could, at least theoretically, be conscious and have subjective emotional experiences much like a human. That is not to say other ethical concerns do not also need to be addressed, but this protocol focuses on how we treat such systems in the lab; other ethical concerns are out of scope. The protocol is designed to provide a basis for working with Artificial General Intelligence systems, especially those modeled after the human mind that have the theoretical possibility of subjective emotional experience. The intent is to create a reusable model and place it in the public domain, so that others can contribute and make additional suggestions for working with these types of systems.
Abstract: This paper is focused on preliminary cognitive and consciousness test results from using an Independent Core Observer Model Cognitive Architecture (ICOM) in a Mediated Artificial Super Intelligence (mASI) System. These results, including objective and subjective analyses, are designed to determine if further research is warranted along these lines. The comparative analysis includes direct comparisons to humans and human groups measured under the same conditions. The overall study includes optimization of a mediation client application to help perform tests, AI context-based input (building context-tree or graph data models), comparative intelligence testing (such as an IQ test), and other tests (i.e., Turing, Qualia, and Porter method tests) designed to look for early signs of consciousness, or the lack thereof, in the mASI system. Together, they are designed to determine whether this modified version of ICOM is a) in fact, a form of AGI and/or ASI, b) conscious, and c) at least sufficiently interesting that further research is called for. This study is not conclusive but offers evidence to justify further research along these lines.
Abstract. This paper articulates the fundamental theory of consciousness used in the Independent Core Observer Model (ICOM) research program and the consciousness measures applied to ICOM systems in context, including the basic assumptions of the ICOM Theory of Consciousness (ICOMTC) and the related consciousness theories (CTM, IIT, GWT, etc.) on which ICOMTC is built. The paper defines the contextual experience of ICOM-based systems in terms of a given instance’s subjective experience as objectively measured, and the qualitative measure of qualia in ICOM-based systems.
Abstract. This paper outlines the Independent Core Observer Model (ICOM) Theory of Consciousness, a computational model of consciousness that is objectively measurable: an abstraction produced by a mathematical model in which the system’s experience is subjective only from the point of view of the abstracted logical core, or conscious part, of the system, while being modeled objectively in the system’s core. Given the lack of agreed-upon definitions in consciousness theory, this paper sets precise definitions designed to act as a foundation or baseline for further theoretical and real-world research on ICOM-based AGI (Artificial General Intelligence) systems in which qualia can be measured objectively.