Building Better Policy in e-Governance AI-Driven Research is part of the Uplift mASI research program, whose goal is a better understanding of how technology can be used to develop better policy. The project has a number of partners, related projects, and sub-projects through which we hope to explore our project vision around the application of key AI technologies: primarily collective intelligence systems in e-governance, but also blockchain, AGI cognitive architectures, and other distributed AI systems.
One of the most protected things in the Uplift project at the AGI Laboratory has been the code. Recently someone tried to blackmail me with a snippet of the most critical code in Uplift. However, the ICOM research and Uplift were never about keeping such code super-secret, so this sort of blackmail falls on deaf ears. Given that, I thought I would publish the snippet of code they were threatening to release. But let me put that into context a bit…
This is a call for papers for the First Annual Collective Superintelligence Virtual Conference on Friday, June 4th, 2021. Papers should be at least 4 pages, with no upper limit on length, and cover topics on collective superintelligent systems. Such topics can include:
What forms can collective intelligence systems take?
How do you build a collective superintelligent system?
How could we self-regulate as an industry?
How could we open-source AGI-like collective systems?
What does a distributed AGI configuration architecture look like?
Abstract: This paper is primarily designed to address the feasibility of building optimized mediation clients for the Independent Core Observer Model (ICOM) cognitive architecture for Artificial General Intelligence (AGI), as used in the mediated Artificial Super Intelligence (mASI) research program. The client is focused on collecting contextual information, and the paper evaluates the feasibility of various hardware platforms for building it, including Brain-Computer Interface (BCI), Augmented Reality (AR), mobile, and related technologies. The key criterion examined is designing the most optimized process for mediation services in the client, since the flow of contextual information through these interfaces is a key factor in overall mASI system performance with human mediation services.
Prerelease selections from the upcoming paper (Peer Reviewed and Published as Part of BICA*AI Conference Proceedings 2020):
Abstract: The field of human psychology is relatively well known. It is a broad field; however, when we start creating sapient and sentient computer systems, we may not know what an AI’s psychology may or may not be like. While the idea of ‘Artificial Psychology’ was introduced in 1963 by Dan Curtis (Crowder), it has made little progress since.
Abstract: This document contains the taxonomical assumptions, assumed theories, and models used as the basis for all ICOM-related research, along with key references that serve as the foundation of continued research. It is also intended to support anyone who might attempt to find fault with our fundamentals, in the hope that they do find a flaw or otherwise better inform the ICOM research program.
The AGI Protocol is a laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could, at least theoretically, be conscious and have emotional subjective experiences much like a human. That is not to say that other ethical concerns do not also need to be addressed, but this protocol is designed to focus on how we treat such systems in the lab; other ethical concerns are out of scope. The protocol is designed to provide a basis for working with Artificial General Intelligence systems, especially those modeled after the human mind and therefore, from a theoretical standpoint, capable of emotional subjective experience. The intent is to create a reusable model and place it in the public domain so others can contribute and make additional suggestions for working with these types of systems.
The AGI Laboratory is looking for volunteers to help with our E-governance study. Here is the summary from the experimental framework for the research program:
This paper outlines the experimental framework for an e-governance study by the AGI Laboratory. The goal of this research study is to identify indications of the relative performance of e-governance methodologies and how those methods might be improved by applying advanced agent-based and collective-based AI software. The agent in this study will be based on the Independent Core Observer Model cognitive architecture modified with the mASI (mediated Artificial Superintelligence) collective system architecture. The study will apply different groups and methods to a static set of questions, analyzing the quality of the results. We hope to identify the best application model for e-governance using this kind of technology and to identify additional paths for research with the mASI research program and systems as applied to e-governance.
Abstract: This paper is focused on preliminary cognitive and consciousness test results from using an Independent Core Observer Model cognitive architecture (ICOM) in a Mediated Artificial Super Intelligence (mASI) system. These results, including objective and subjective analyses, are designed to determine if further research is warranted along these lines. The comparative analysis includes comparisons to humans and human groups measured for direct comparison. The overall study includes optimization of a mediation client application to help perform tests, AI context-based input (building context tree or graph data models), comparative intelligence testing (such as an IQ test), and other tests (i.e., Turing, Qualia, and Porter method tests) designed to look for early signs of consciousness, or the lack thereof, in the mASI system. Together, they are designed to determine whether this modified version of ICOM is a) in fact a form of AGI and/or ASI, b) conscious, and c) at least sufficiently interesting that further research is called for. This study is not conclusive but offers evidence to justify further research along these lines.