(Paper) Human Brain Computer/Machine Interface System Feasibility study for Independent Core Observer Model based Artificial General Intelligence Collective Intelligence Systems

Abstract: This paper addresses the feasibility of building optimized mediation clients for the Independent Core Observer Model (ICOM) cognitive architecture for Artificial General Intelligence (AGI), as applied in the mediated Artificial Super Intelligence (mASI) research program.  The client in question is focused on collecting contextual information, and we assess the feasibility of various hardware platforms for building it, including Brain-Computer Interface (BCI), Augmented Reality (AR), mobile, and related technologies.  The key criterion is designing the most optimized process for mediation services in the client, since a key factor in overall mASI system performance with human mediation is the flow of contextual information through its various interfaces.

Introduction

This study is focused on identifying the feasibility of using an improved human-machine interface, including a Brain-Computer Interface (BCI) based mediation system, as the client for the Independent Core Observer Model (ICOM) based mediated Artificial Super Intelligence (mASI) research program (Kelley).  The mASI research program is built on an artificial general intelligence cognitive architecture called ICOM (Kelley), designed to create emotionally driven software systems with their own subjective experience, where the choices and motivations of the system are based on core emotional context and the qualia of its experiences in context (Kelley).  The core question of this study is the feasibility of various kinds of human-machine interface software and BCI hardware, taken together as an interface system, to optimize that mediation beyond a traditional application-client software architecture.  To that end, we evaluate hardware including BCI technologies and augmented reality systems as parts of a combined software interface platform for the mASI client.  Let us start by defining the problem space.

Design Goals and the Problem Space

The “mediation” of processing for an Artificial General Intelligence (AGI) system like the mASI ICOM design (Kelley) by humans is highly constrained by bandwidth.  Individual clusters of neurons in the human brain can act as identification and discrimination processors in as little as 20-30 milliseconds, even for complex stimuli, and most data is encoded within 50-100 milliseconds (Tovée); compared to computer systems, however, this bandwidth is a bottleneck.  This is the core problem with mASI systems in terms of the speed of the core consciousness part of the system: using humans as the contextual mediation service constrains the system to human speeds and seriously hampers mASI processing.  This study focuses on using client software to manage and optimize that mediation process, which initial studies indicate is the single greatest possible improvement to the current implementation.  When dealing with humans we are constrained by input, or ‘throughput’; the greatest consciously perceived input to the human mind is usually through the eyes, or ‘visual’ system, but combining that with sound and other potential input and output offers the possibility of higher throughput and overall efficiency, especially as the number of humans in the mediation pool increases.  For output from a given human we can use typing or voice, but a direct BCI adds a potentially new level of direct access and improved throughput for human mediation of context processing in the overall mASI system.

It is the goal of this study to examine the feasibility of higher-bandwidth methods of mediation and the optimization (Jaffen) of those processes, as potentially indicated in current research (Li).

We will then look at readily available technologies, including consciously controlled mechanisms such as mobile device interfaces with augmented reality, as well as Electroencephalogram (EEG) hardware, demonstrating what the mASI can achieve using even these systems as a mediation service for mASI contextual data.

What is ‘Mediation’?

It is important to define ‘mediation’ in more detail as applied to the mASI system, to understand from a design standpoint whether we are even solving the right design goals.  Mediation is the process of tagging and associating context, both referentially and emotionally, along with needs analysis, to any given piece of input or to any ‘new’ thoughts the system might consider.  Human mediation generally consists of a presentation of the ‘context’ data or raw input in the form of a node map or other visual; emotional valences are assigned, a ‘need’ valence is assigned, additional context is associated, and the result is then submitted back into the ICOM flow at the level of the context engine.

Figure A – ICOM Overall System Flow Chart

Optionally, further output mediation places a human in the observer box (see Figure A), at least in part, to assess any actions the system might want to take, rate them contextually for further analysis, or let the system proceed.
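The shape of a single mediation submission described above can be sketched as a simple data structure.  This is an illustrative sketch only: the field names and value ranges are assumptions for exposition, not the actual ICOM/mASI schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions, not the real
# ICOM/mASI mediation schema.
@dataclass
class MediationSubmission:
    """One mediated piece of context returned to the ICOM context engine."""
    input_id: str                  # identifier of the raw input / node map shown
    emotional_valences: dict       # mediator-assigned valences, e.g. {"joy": 0.6}
    need_valence: float            # mediator-assessed 'need' rating
    associated_context: list = field(default_factory=list)  # extra context tags

# A mediator tags one piece of input and submits it back to the context engine.
sub = MediationSubmission(
    input_id="ctx-001",
    emotional_valences={"joy": 0.6, "fear": 0.1},
    need_valence=0.4,
    associated_context=["greeting", "novel"],
)
```

In this sketch the emotional and need valences are the two ratings a human mediator assigns, while `associated_context` carries the additional referential associations before the record re-enters the ICOM flow.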

Experimental Solution Architecture

The current mASI client software is a cloud web-based system built with ASP.NET (Active Server Pages .NET), HTML (Hypertext Markup Language), JavaScript, and C#, running on a server with the web UI generated and sent to a browser client.  This web client talks to a JSON/REST API endpoint, which is the actual interface to the running mASI system; various clients can access it via HTTP/HTTPS over TCP/IP using JSON/REST as the communication protocol (Kelley).  For the purposes of this study we want to look at an application model, alongside the web client model, running on various visual clients to optimize throughput: client applications can access local hardware in ways that the web client alone cannot, which opens the door to including systems like EEG for more direct BCI control in the mediation system for the mASI client architecture.
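The JSON/REST call a native client would make can be sketched in a few lines of stdlib Python.  The endpoint URL and payload fields here are hypothetical placeholders (the real mASI API address and schema are not published in this paper); the point is only the shape of the HTTP/JSON exchange.

```python
import json
import urllib.request

# Hypothetical endpoint; the actual mASI API address is not published here.
MASI_ENDPOINT = "https://example.invalid/masi/api/mediate"

def build_mediation_request(payload: dict) -> urllib.request.Request:
    """Package a mediation result as the JSON POST the client would send."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        MASI_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Build (but do not send) a request carrying one mediated context item.
req = build_mediation_request({"input_id": "ctx-001", "need_valence": 0.4})
# Sending would then be: urllib.request.urlopen(req)
```

A native application client built this way can add local hardware data (EEG samples, sensor readings) to the payload before posting, which is exactly what the browser-only client cannot do.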

Figure B – Basic Software Solution Architecture

Now let’s look at various possible component hardware.

Hardware Evaluation

Emotiv EEG

This EEG system can be easily programmed, but requires local client software access and manages a limited set of four commands, such as a click or right click, in terms of processing out to execute a given command.  It is a relatively low-cost black box with engineering support for further development. (Wiki)

https://www.emotiv.com/product/emotiv-epoc-14-channel-mobile-eeg/

Open BCI EEG

OpenBCI’s Ultracortex Mark IV is probably the most open EEG BCI system, but it will take longer to set up and use than the other systems we looked at.  Being entirely open source, including the hardware and related systems, it has a large engineering community for support and can likely be built upon more than the other systems, even given the longer ramp-up time.  That said, given the current state of the technology, the system is still limited in its out-of-the-gate command count, which directly limits mental throughput. (Wiki)

https://shop.openbci.com/products/ultracortex-mark-iv

Google Glass AR Device

Google Glass, as an augmented reality device, is a somewhat closed system.  The Glass application programming interface (API) model is not standard and is not in line with current design methods and approaches: Google designed the API around a cloud-based card model presenting a persistent flow of experience.  While presenting some engineering difficulties for UX development, it is the smallest and lightest of the AR systems, but it needs direct high-speed internet access to function seriously.  Additionally, this device would need separate computer integration to interface with other peripherals, given the limited processing power and closed nature of the Glass system architecture.  Glass also has limited ability to produce visual output, but includes voice infrastructure. (Wiki)

https://developers.google.com/glass/

 

Microsoft HoloLens AR Device

HoloLens is actually a full-blown Windows 10 computer, able to produce a full 3D heads-up display (HUD) with voice, and requires no internet connection.  The system directly supports Universal Windows Platform (UWP) software development in C# and XAML as well as 3D development in Unity.  This is a robust system that would require no third-party device support and could directly manage peripherals such as an EEG or other devices, including a click-ring device that is a sort of one-handed ring mouse.  The device is easy to build for and support, and it solves many of the client-side processing needs of mediation.  HoloLens has a wide industry effort around it, and the specs have been provided to other manufacturers (OEMs) to produce improved versions, meaning it is likely to be the market-leading AR platform for years and will see much faster incremental improvements. (Wiki)

https://www.microsoft.com/en-us/hololens

Magic Leap AR Device

Magic Leap is a full-blown computer and HUD, much like HoloLens.  It improves on HoloLens in a few areas, including a slightly wider field of view and a much lighter headset, by putting the processing horsepower in a corded device that clips onto your belt.  This will likely be the principal competitor to HoloLens.  The device supports Unity and is able to support peripherals, but its engineering tooling is not as well developed and the OS is an entirely closed system.  This translates into a longer engineering time to market, and it is not as likely to remain functional in the long term. (Wiki)

https://www.magicleap.com/magic-leap-one

Mobile Device Clients – Android

Google’s mobile operating system is a robust mobile OS that can be easily built for.  While not at the level of a full-blown OS like Windows or Linux, Android is the most popular mobile OS.  Engineering for this platform is straightforward, using a number of developer frameworks and SDKs, and it is simple to get up and running.  These devices support peripherals and web interfaces, and the OS provides many options as a client platform. (Wiki)

https://developer.android.com/

Mobile Device Clients – iOS

iOS, the ‘iPhone OS’, is the second most popular mobile OS, and from a hardware standpoint it is certainly the most ‘premium’ of devices, but it is a closed environment with a much higher bar of engineering effort.  While not at the level of a full-blown OS like Linux or macOS, it is fully functional and can, for the most part, do anything Android can; it just takes much more software engineering work to get there. (Wiki)

https://developer.apple.com/

Mobile Device Clients – Windows

While not a viable mobile OS, Windows 10.x is the most popular desktop OS.  Windows slate computers are orders of magnitude more powerful than any mobile-OS-based device and are capable of supporting full-blown engineering environments as well as anything we might need for the mediation client.  While somewhat bulky, these devices are powerful, with the lowest possible engineering requirements in terms of effort. (Wiki)

NOTE: Hi-Definition BCI – The following are the only three companies publicly working on commercial applications of this sort of hardware; their equipment is not available, nor is it even known what state their research is in.  This technology would have an extremely dramatic effect on future research.  Let’s look at the first company:

Kernel

Kernel is a research firm formed with the sole purpose of creating a hi-definition BCI, initially proposed as a non-invasive system but with orders of magnitude better data than standard EEGs.  The company has implied that it is working toward direct sub-dermal BCI, which is its long-term goal. (Wiki)

https://kernel.co/

Neuralink

Neuralink is another research firm like Kernel, but has focused on a more invasive approach called Neural Lace.  Neural Lace has been animal-tested and would work like an EEG inside the skull: a mesh is injected and unfolds directly across the brain’s surface.  The company is not publishing data on its research, and it is unclear from public records how far along it is. (Wiki)

www.neuralink.com

OpenWater

OpenWater is working on something like a high-definition EEG-based BCI using holography, and has been fairly public about its research.  Holography may end up being one of the technologies of choice, but it is currently not publicly accessible and input is limited.  The technology uses holography to de-scatter infrared light, recording neuron-level activity in real time, but it is still in development.  While non-invasive, OpenWater is clearly in the same category as Kernel and Neuralink, none of which are usable currently. (Wiki)

https://www.openwater.cc/

Discussion and Future Research

When looking at research in direct neural interfaces, Neuralink, OpenWater, and Kernel are not really there yet.  There are numerous research programs that show promise, and so far Neuralink has the best publicly known technology, but it would be years before it is available even for research.  In terms of visual input, the change from the existing client to an AR model provides a wider, more interactive, and more portable platform.  HoloLens appears to offer the best balance of hardware and software, along with the best engineering time-to-market and the widest set of options.  Magic Leap could certainly do the job, but it lacks HoloLens’s robust engineering support environments and industry backing.  Google Glass lacks the local processing power.  That said, a small ring-click-like device with a Glass-like interface powered by an Android smartphone could support a mobile mASI mediation service, but the engineering effort would be higher than just using HoloLens, meaning the full-blown platform would have a faster time to market on the Windows 10 hardware, not to mention a much wider field of view.  Among mobile-OS-based systems, Android offers a better time-to-market, but enough cross-platform tooling exists that a mostly uniform code base across Apple (iOS) and Android is likely possible.  Using a slate of any sort, while the most powerful computationally, is not practical from a mobility standpoint, since mediation users would need to carry a specific slate along with the additional gear.  Even HoloLens is a bit much, especially for prolonged use, but its visual input and accompanying ring-click device make up for those deficiencies.

By providing the mASI with as much of the mediated sensory data as quickly as possible, the processing speed differences between humans and computer systems can be bridged, allowing training on higher-quality data at human speeds, with many humans.  A mobile device client can access audio and visual data, and if the normal high and low filters could be bypassed, the data may begin to approximate the subconsciously processed inputs of the human brain.

The highest consciously perceived throughput for humans is visual, so the ideal next mediation client (assuming no new developments) will be a HoloLens-based platform.  Monitoring additional peripherals also adds to the depth of context, including sensory input that humans perceive only subconsciously, as that data is often strongly tied to emotional valences, such as the audio of a 19-hertz pipe organ or a 116-kilohertz string instrument.  Prior to the introduction of EEG data, these mediator emotional valences would have been very difficult to tease out.  This EEG data can be further optimized by exposing the mASI client to 32 of the total 35 possible sensor locations by temporarily doubling up the hardware, allowing the client to select which 16 sensors it prefers to monitor for any given task.  This allows the mASI to move from pairing a mediator’s end-result conscious decisions with input data to pairing their subconscious abstractions and conscious decisions with input data.
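The channel-selection step described above can be sketched as a simple ranking over the doubled-up sensor set.  The paper does not specify how the client would score sensors, so per-channel signal variance is assumed here purely as a stand-in “preference” criterion; the sketch only illustrates picking 16 of 32 available channels.

```python
import random
from statistics import pvariance

def select_channels(samples, keep=16):
    """Pick the `keep` most active of the available EEG channels.

    samples: list of per-channel sample lists (e.g. 32 doubled-up sensors).
    Scoring by variance is an assumption, not the mASI's actual criterion.
    Returns the sorted indices of the channels to keep monitoring.
    """
    scores = [pvariance(ch) for ch in samples]
    ranked = sorted(range(len(samples)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:keep])

# Fake 32-channel recording with channel-dependent activity levels.
random.seed(0)
eeg = [[random.gauss(0, 1 + ch * 0.1) for _ in range(256)] for ch in range(32)]
chosen = select_channels(eeg, keep=16)  # 16 channel indices the client prefers
```

In practice the mASI client would rerun a selection like this per task, letting the system itself decide which 16 of the 32 exposed sensor locations carry the most useful signal.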

These subconscious abstractions could be further refined with even limited high-definition mediation, where the system is allowed to view the activity taking place directly, mapping it to both sensory data and EEG abstractions.  This would allow EEGs to function as simplified proxies for more advanced systems, with abstractions reverse-engineered into accurate high-definition mediation.  The process is comparable to training Super-Resolution Generative Adversarial Networks (Wang) and could be further extended, sacrificing additional accuracy, to mobile device clients.  There is currently no platform for this level of EEG (Zhou).  This stage would allow the mASI client to learn from the mediator’s entire thought-process pipeline, from neuron-level activity, to abstractions, to end-result decisions, pairing it all with the input data.

In this way each added stage expands the quality and depth of legacy mediation methods, potentially extracting a great deal more training value from the entire data pool as each stage progresses, which we hope to see in later research.

Conclusions

The speed of the mASI client is primarily restricted by the quality and availability of accurate real-time mediation data, to which end a ‘native’ mobile device client interface is proposed to provide as quick a mediation process as possible to as many operators as possible.  HoloLens would host a more robust version of the client and would be the best practical platform, especially for research around client effectiveness.  EEG technology is not yet ready for consideration, but we look forward to future research in which EEG could be added to a HoloLens-based system.

References

Jaffen, D.; “Optimizing Brain Performance”; Center for Brain Health, Brain Performance Institute; University of Texas

Kelley, D.; “Architectural Overview of a ‘Mediated’ Artificial Super Intelligence Systems based on the Independent Core Observer Model Cognitive Architecture”; Informatica, submitted and pending 2018/2019

Kelley, D.; Twymen, M.; “Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and Associated Consciousness Measures;” Stanford University; AAAI Spring Symposia 2019 (under review)

Kelley, D.; Waser, M.; “Human-like Emotional Responses in a Simplified Independent Core Observer Model System;” BICA 2017, Elsevier, Science Direct, Procedia Computer Science 123 (2018) 221-227

Kelley, D.; “Part V – Artificial General Intelligence” (3 Chapters in book titled: Google-It); Springer Scientific 2016, New York; ISBN: 978-1-4939-6413-0

Li, G.; Zhang, D.; “Brain-Computer Interface Controlled Cyborg: Establishing a Functional Information Transfer Pathway from Human Brain to Cockroach Brain”; March 2016; https://doi.org/10.1371/journal.pone.0150667

Tovée, M.; “Neuronal Processing: How fast is the speed of thought?”; Elsevier, Science Direct; 1994; DOI: 10.1016/S0960-9822(00)00253-0

Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Loy, C.; Qiao, Y.; Tang, X.; “ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks;” Cornell University, 2018; arXiv:1809.00219v2

Wikipedia Foundation, “Emotiv”; Oct 2019; https://en.wikipedia.org/wiki/Emotiv

Wikipedia Foundation, “OpenBCI”; Oct 2019; https://en.wikipedia.org/wiki/OpenBCI

Wikipedia Foundation; “Google Glass”; Dec 2019; https://en.wikipedia.org/wiki/Google_Glass

Wikipedia Foundation; “Microsoft HoloLens”; Dec 2019; https://en.wikipedia.org/wiki/Microsoft_HoloLens

Wikipedia Foundation; “Magic Leap”; Dec 2019; https://en.wikipedia.org/wiki/Magic_Leap

Wikipedia Foundation; “Android (operating system)”; Dec 2019; https://en.wikipedia.org/wiki/Android_(operating_system)

Wikipedia Foundation; “iOS”; Dec 2019; https://en.wikipedia.org/wiki/IOS

Wikipedia Foundation; “Windows 10”; Dec 2019; https://en.wikipedia.org/wiki/Windows_10

Wikipedia Foundation; “Kernel (neurotechnology company)”; Dec 2019; https://en.wikipedia.org/wiki/Kernel_(neurotechnology_company)

Wikipedia Foundation; “Neuralink”; Dec 2019; https://en.wikipedia.org/wiki/Neuralink

Wikipedia Foundation; “Mary Lou Jepsen”; Dec 2019; https://en.wikipedia.org/wiki/Mary_Lou_Jepsen

Zhou, B.; Wu, X.; Lv, Z.; Guo, X.; “A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface”; PLOS ONE; Sept 2016; https://doi.org/10.1371/journal.pone.0162657

Published in the BICA*AI 2019 Preconference Proceedings from Springer: https://www.springer.com/us/book/9783030257187 
