Post proceedings of the 10th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2019 (Tenth Annual Meeting of the BICA Society)
Abstract: While Homo sapiens is without a doubt our planet’s most advanced species capable of imagining, creating and implementing tools, one of the many observable trends in evolution is the accelerating merger of biology and technology at increasing levels of scale. This is not surprising, given that our technology can be seen from a perspective in which the sensorimotor and, subsequently, prefrontal areas of our brain have increasingly extended their motor (as did those of our evolutionary predecessors), perceptual, and—with computational advances—cognitive and memory capacities into the exogenous environment. As such, this trajectory has taken us to a point in the above-mentioned merger at which the brain itself is beginning to meld with its physically expressed hardware and software counterparts—functionally at first, but increasingly structurally as well, initially by way of neural prostheses and brain-machine interfaces. Envisioning the extension of this trend, I propose theoretical technological pathways to a point at which humans and non-biological human counterparts may have the option to have identical neural substrates that—when integrated with Artificial General Intelligence (AGI), counterfactual quantum communications and computation, and AGI ecosystems—provide a global advance in shared knowledge and cognitive function while ameliorating current concerns associated with advanced AGI, as well as suggesting (and, if realized, accelerating) the far-future emergence of Transentity Universal Intelligence (TUI).
Introduction
While investigating the overall space comprising real-time neuromorphic Artificial General Intelligence ecosystems is itself a complex task, the constituent elements—Artificial General Intelligence, Neuromorphic Computing, and Counterfactual Quantum Entanglement—are themselves (as well as their nested components) highly complex. Accordingly, this paper presents a review of the relevant literature, augmented by relevant historical events, identifies science and technology trends, and envisions hypothetical but probabilistically viable future scenarios.
Core Technologies
The above triad of technologies establishes the foundation not only of our path towards a technofuture of real-time neuromorphic AGI ecosystems, but also of changes that, while foreseeable beyond that horizon, cannot yet be fully realized.
Artificial General Intelligence
Artificial General Intelligence (AGI) is a well-researched field focused on developing human-analogous AI (i.e., a machine intelligence that can successfully perform any human intellectual task) and, in a broader context, an intelligence functionally equivalent to human cognitive, emotional and other neural capacities, other than consciousness. However, the majority of AGI R&D to date has not achieved its expected goals, generating an expanding circular dilemma:
- Due largely to industry demand, AGI is increasingly addressing specific fields and issues—historically the realm of standard, or narrow, AI
- This narrowing focus is negatively impacting AGI funding and thereby momentum
- Consequently, expectations that AGI will be developed as projected are pushed further out
Moreover, most current AGI models are based on logic and inner dialogue rather than the affective foundation of human cognition, in which perception and emotion precede and influence cognition and decision-making [1].
Rather than having an AGI focus on intelligence in the form of resolving goals, tasks and problems when making decisions, the Independent Core Observer Model (ICOM) [2] utilizes emotion and motivation, as humans do. Moreover, the ICOM Theory of Consciousness (ICOMTC) [3] aims to provide AGIs with the most salient but elusive aspect of human awareness: the qualia of consciousness.
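To make this contrast concrete, consider the toy sketch below, in which a fast affective appraisal precedes and gates deliberate evaluation, so that emotion determines what the system attends and reacts to. The names, weightings and data structures are hypothetical illustrations of this emotion-first pattern, not the actual ICOM implementation.

```python
# Illustrative sketch only: a toy decision loop in which affective appraisal
# precedes and weights deliberation, as in emotion-first architectures such
# as ICOM. All names and weightings are hypothetical.
from dataclasses import dataclass

@dataclass
class Percept:
    stimulus: str
    valence: float   # -1.0 (aversive) .. +1.0 (appetitive)
    arousal: float   #  0.0 (calm)     ..  1.0 (intense)

def affective_appraisal(p: Percept) -> float:
    """Fast, automatic emotional weighting applied before deliberation."""
    return p.valence * (0.5 + 0.5 * p.arousal)

def decide(percepts: list[Percept]) -> str:
    # Emotion biases which percept reaches deliberate cognition at all:
    # the most affectively salient stimulus wins the competition.
    salient = max(percepts, key=lambda p: abs(affective_appraisal(p)))
    return "approach" if affective_appraisal(salient) > 0 else "avoid"

print(decide([Percept("food", 0.6, 0.4), Percept("predator", -0.9, 0.9)]))  # avoid
```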
Emotion and Perception
A causal or associative connection between emotion and visual perception has for the most part been seen as unlikely at best. Nevertheless, Zadra and Clore showed that not only is this a viable physiological association, but a surprisingly variegated one at that. They concluded (Table 1: Emotion/Perception Associations) that this emotion/perception interaction “allows affective information to have immediate and automatic effects without deliberation on the meaning of emotionally evocative stimuli or the consequences of potential actions” [4].
Computational Empathy
Defined as the capacity to relate to another’s emotional state, empathy has recently been modeled in artificial agents by leveraging advances in neuroscience, psychology and ethology. Expanding that definition—empathic capacity as “the capacity to relate and react to another’s emotional state, [which] consists of emotional communication competence, emotion regulation and cognitive mechanisms that result in a broad spectrum of behavior” [5]—has allowed researchers to propose a modelling approach that incorporates affective computing, social computing and dialogue research techniques. While the scientists conclude that further research is needed, they note that a successful computational model of empathy could address ethical and moral issues being discussed in the AI community.
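As a rough illustration of that three-part definition—emotional communication, emotion regulation, and cognitive response selection—the sketch below wires the components into a single agent. The class, thresholds and behaviors are hypothetical, not the model proposed by Yalcin and DiPaola [5].

```python
# A minimal sketch of the three-component view of empathy described above.
# Names, thresholds and behaviors are hypothetical illustrations.

class EmpathicAgent:
    def __init__(self, regulation_gain: float = 0.5):
        self.regulation_gain = regulation_gain  # how strongly mirrored affect is damped
        self.own_affect = 0.0

    def perceive_emotion(self, observed_affect: float) -> float:
        """Emotional communication: recognize the other's affective state."""
        return observed_affect

    def regulate(self, mirrored: float) -> float:
        """Emotion regulation: damp raw emotional contagion."""
        self.own_affect = mirrored * (1.0 - self.regulation_gain)
        return self.own_affect

    def respond(self, observed_affect: float) -> str:
        """Cognitive mechanism: choose a behavior appropriate to the state."""
        felt = self.regulate(self.perceive_emotion(observed_affect))
        if felt < -0.3:
            return "offer comfort"
        if felt > 0.3:
            return "share enthusiasm"
        return "acknowledge neutrally"

agent = EmpathicAgent()
print(agent.respond(-0.8))  # offer comfort
```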
Table 1. Emotion/Perception Associations

| Emotion | Perception | Benefits |
| --- | --- | --- |
| Fear | Low-level visual processes | Increases the probability of perceiving potential threats |
| Sadness | Visual illusions | Positive moods encourage maintaining the current perspective; negative moods encourage a change |
| Goal-directed desire | Apparent size of goal-relevant objects | Objects that are emotionally and motivationally relevant draw attention and may become more easily detected by appearing larger; perception is systematically altered in ways that may aid goal attainment; emotion can change perceived spatial layout to motivate economical actions and deter potentially dangerous actions |
Source: Zadra, J. R., & Clore, G. L.: Emotion and perception: the role of affective information. Wiley Interdisciplinary Reviews: Cognitive Science, 2(6), 676–685 (2011).
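The fear/perception row of Table 1 lends itself to a simple computational reading: affective state lowers the evidence required to perceive a potential threat. The sketch below illustrates this pattern; the base threshold and scaling factor are hypothetical numbers chosen for the example.

```python
# Illustrative sketch of the Table 1 pattern in which affective state modulates
# low-level perception, e.g. fear lowering the detection threshold for threats.
# Threshold and scaling values are hypothetical.

def detection_threshold(base_threshold: float, fear: float) -> float:
    """Fear (0..1) lowers the evidence needed to perceive a threat."""
    return base_threshold * (1.0 - 0.5 * fear)

def perceives_threat(signal_strength: float, fear: float) -> bool:
    return signal_strength >= detection_threshold(base_threshold=0.6, fear=fear)

# The same faint shadow goes unnoticed when calm but is perceived when afraid.
print(perceives_threat(0.4, fear=0.0))  # False: 0.4 < 0.6
print(perceives_threat(0.4, fear=0.8))  # True:  0.4 >= 0.36
```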
Human Level Machine Intelligence: Researchers’ Predictions
Published as a book chapter in 2016 [6], a 2012–2013 survey of Artificial Intelligence researchers asked respondents to name the years by which they assigned a 10%, 50%, and 90% chance of achieving Human Level Machine Intelligence (HLMI)—analogous to AGI, defined here as machine intelligence that outperforms humans in all intellectual tasks. The resulting medians were 2040 at 50% confidence and 2080 at 90% confidence, with ‘never’ drawing 20% of responses.
More specifically, among the 100 most cited AI authors, the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents also assigned a median 50% probability to machine superintelligence being invented within 30 years of the invention of approximately human-level machine intelligence.
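As a worked example of reading these estimates, the snippet below encodes the top-100-authors medians (10% by 2024, 50% by 2050, 90% by 2070) and linearly interpolates between them; the interpolation is an illustrative assumption, not part of the survey methodology.

```python
# Reported medians from the survey above; linear interpolation between them
# is an illustrative assumption for reading off intermediate years.
MEDIANS = [(2024, 0.10), (2050, 0.50), (2070, 0.90)]

def hlmi_confidence(year: int) -> float:
    """Interpolated probability that HLMI exists by the given year."""
    if year <= MEDIANS[0][0]:
        return MEDIANS[0][1]
    if year >= MEDIANS[-1][0]:
        return MEDIANS[-1][1]
    for (y0, p0), (y1, p1) in zip(MEDIANS, MEDIANS[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(f"{hlmi_confidence(2040):.2f}")  # ~0.35
```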
Artificial Superintelligence
Artificial Superintelligence (ASI)—an AI variant more powerful than AGI in breadth, depth and performance—has been succinctly defined as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” [7].
A modified hive ASI, Mediated Artificial Superintelligence (mASI)—demonstrated in the lab and usable in environments from research to business—provides superhuman-level ASI cognition without the ethical or safety concerns of autonomous systems, and markedly reduces training time [8]. The key to mASI is its requirement that human support be available at all times to mediate the process, to the degree that the mASI’s thinking and operations do not function without human involvement.
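The mediation requirement can be pictured as a human-in-the-loop gate between the core’s candidate thoughts and any action on the world. The sketch below is a minimal illustration of that pattern, with hypothetical names and a simple queue; it is not the actual mASI implementation.

```python
# A minimal sketch of the mediation pattern described above: no machine-
# generated action executes without a human mediator's approval.
# Names and the queue-based design are hypothetical.
from queue import Queue

class MediatedCore:
    def __init__(self):
        self.pending: Queue = Queue()   # thoughts awaiting human mediation

    def propose(self, thought: str) -> None:
        """The core may generate candidate thoughts freely..."""
        self.pending.put(thought)

    def mediate(self, approve) -> list[str]:
        """...but nothing acts on the world until a human approves it."""
        approved = []
        while not self.pending.empty():
            thought = self.pending.get()
            if approve(thought):            # human-in-the-loop gate
                approved.append(thought)
        return approved

core = MediatedCore()
core.propose("summarize research notes")
core.propose("acquire more compute autonomously")
print(core.mediate(lambda t: "autonomously" not in t))  # human rejects the second
```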
As discussed earlier, the mASI cognitive architecture is based on the ICOM Theory of Consciousness [9], which itself is based on Global Workspace Theory [10], the Integrated Information Theory of Mechanisms of Consciousness [11]—and at some level is demonstrably conscious [12,13].
That being said, it should be kept in mind that the mASI is not currently an independent AGI, though it has the potential to become one if and when the right context arises. Moreover, based on ICOM-related research to date, the original goal—a self-motivating, emotion-based cognitive architecture functionally similar to the human brain but substrate-independent—appears to have been shown possible.
Given these recent AGI/ASI/mASI developments—most significantly that researchers have now developed an operational mASI—the survey estimates above may have to be reevaluated in the near future.
Real-Time Neuromorphic AGI Ecosystems
Neuromorphic Ecosystems can be based on a surprisingly wide range of substrates—some of which are theoretical at this time—including carbon variants, electrolytes, photonics, spintronics, quantum mechanics, synthetic genomics, and multifactor systems. Moreover, hypothetical Real-Time Neuromorphic AGI Ecosystems include those that utilize counterfactual quantum communications to operate as real-time networks, those that are based on Artificial General Intelligence, and those that incorporate both components.
Carbon
Carbon (chemical element C with atomic number 6) has atoms that can form differently structured allotropes—each with significantly different physical properties—by bonding in various configurations (Fig. 1: Carbon Allotropes). Of these, the most familiar allotropes—both naturally occurring and synthesized—are graphene, graphite, diamond, amorphous carbon, fullerenes, carbon fiber, and carbon nanotubes.
Fig. 1. Depiction of eight carbon allotropes: (a) Diamond (b) Graphite (c) Lonsdaleite (d) C60 (Buckminsterfullerene) (e) C540 (Fullerene) (f) C70 (Fullerene) (g) Amorphous carbon (h) Single-walled carbon nanotube. Created by Michael Ströck (mstroeck). Wikimedia CC BY-SA 3.0.
Conclusions
Our journey has taken us from early toolmaking, through today’s interdigitating science and technology, and is accelerating towards a future—and descendants—that may well be difficult to recognize, on a shorter timeframe than we might expect. The most salient challenge is not, as one might expect, continuing the voyage we have charted to date, but rather managing it wisely.
To review the rest of this paper, you can download a copy here: https://www.sciencedirect.com/science/article/pii/S1877050920302453
References
[1] Soon, Chun Siong, Marcel Brass, Hans-Jochen Heinze, and John-Dylan Haynes. (2008) “Unconscious determinants of free decisions in the human brain.” Nature Neuroscience 11: 543–545.
[2] Kelley, David J., and Mathew A. Twyman. (2019) “Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and the Associated Consciousness Measures.” AAAI Spring Symposia 2019.
[3] Kelley, David J., and Mark R. Waser. (2018) “Human-like Emotional Responses in a Simplified Independent Core Observer Model System.” Procedia Computer Science 123: 221–227.
[4] Zadra, Jonathan R., and Gerald L. Clore. (2011) “Emotion and perception: the role of affective information.” WIREs Cogn Sci, 2: 676–685.
[5] Yalcin, Ӧzge Nilay, and Steve DiPaola. (2018) “A computational model of empathy for interactive agents.” Biologically Inspired Cognitive Architectures 26: 20–25.
[6] Müller, Vincent C., and Nick Bostrom. (2014) “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Vincent C. Müller (ed.) Fundamental Issues of Artificial Intelligence, Springer Synthese Library (Studies in Epistemology, Logic, Methodology, and Philosophy of Science), Cham, Switzerland, Springer International Publishing 376: 553–571.
[7] Bostrom, Nick. (2014) “Paths to Superintelligence”, Chapter 2 in Superintelligence: Paths, Dangers, Strategies, Oxford, Oxford University Press.
[8] Jangra, Ajay, Adima Awasthi, and Vandana Bhatia. (2013) “A Study on Swarm Artificial Intelligence.” International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE) 9 (8).
[9] Kelley, David J., and Mathew A. Twyman. (2019) ibid.
[10] Baars, Bernard J. (2005) “Global workspace theory of consciousness: toward a cognitive neuroscience of human experience?” Progress in Brain Research 150: 45–53.
[11] Oizumi, Masafumi, Larissa Albantakis, and Giulio Tononi. (2014) “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0.” PLoS Comput Biol 10 (5): e1003588.
[12] Yampolskiy, Roman V. (2018) “Artificial Intelligence Safety and Security.” Chapman and Hall/CRC Artificial Intelligence and Robotics Series, London/New York.
[13] Kelley, David J. (in peer review) “Architectural Overview of a ‘Mediated’ Artificial Super Intelligent Systems based on the Independent Core Observer Model Cognitive Architecture.” Informatica Journal.