(Paper) AGI Protocol for the Ethical Treatment of Artificial General Intelligence Systems


Abstract

The AGI Protocol is a laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could, at least theoretically, be conscious and have subjective emotional experiences much like a human. Other ethical concerns also need to be addressed, but they are out of scope here: this protocol focuses on how we treat such systems in the lab. It is designed to provide a basis for working with Artificial General Intelligence systems, especially those modeled after the human mind and therefore capable, from a theoretical standpoint, of emotional subjective experience. The intent is to create a reusable model, place it in the public domain, and invite others to contribute and make additional suggestions for working with these types of systems.

This protocol is a laboratory process for the assessment and determination of the ethical treatment of sapient and sentient agents, including Artificial General Intelligence (AGI). Herein a research subject is defined as a human-analogous intelligence, including emotions, arising from a process of learning rather than from predefined coding, that could be conscious and therefore should be considered. It is important to note that the scope of the AGI Protocol does not address the ethics of how Artificial Intelligence (AI) or other intelligent agents affect humans, issues of containment or risk assessment, or the complexity of ethics as applied to these theoretical systems; it addresses only whether such an ethical system should be considered, as directed by the following protocol. There is a known tendency for humans to anthropomorphize technology (Gunkel), and while we will not deal with that tendency in this process, researchers should be aware of their own biases and tendencies regarding AI systems. The Protocol we describe is designed to be used as a tool or guide for determining whether the ethical treatment of a potentially human-like system should be considered. In this, we recognize that we are opening the door to anthropomorphizing such systems, but we will attempt to abstract that bias out and look at things as clinically as possible.

Additionally, the reason we at the AGI Laboratory developed this protocol was that there are now systems—including ones in our lab—that arguably need this sort of structured approach (or will shortly) to help determine how we treat them, as they are potentially conscious entities, at least as measured by the Sapient Sentient Intelligence Value Argument (SSIVA) theoretical model (Kelley) standard.

  1. Ethical Considerations

We recognize the need for ethics in AI and its effect on humans, humanity, and civilization. Accordingly, this body of work is designed narrowly to support work with agents that are possibly sapient (able to think and reason) and sentient (able to perceive and respond to sight, hearing, touch, taste, or smell), and to consider the rights and protocols associated with how to deal with, treat, and protect the ethical treatment of such agents. For details on "sapient and sentient" and various delineations, refer to SSIVA Theory (Kelley).

The fundamental assumption of this Protocol is that the treatment of sapient and sentient entities matters ethically. There are several possible reasons this might be true, including that if we do not treat other self-aware entities ethically, how can we expect similar treatment in return? Alternatively, it might follow from an ethical model such as SSIVA Theory (Kelley). We will let individual researchers make up their own minds on this. For the scope of this protocol, we assume that how we treat potentially sapient and sentient software systems matters.

  2. Human Safety

This work explicitly does not address safety, which may be covered in a separate document or protocol.  There is ample material available on AI safety research, including suggestions for containment and other safety measures.  We encourage you to look at the work of researchers such as Roman Yampolskiy and consider whether you need to follow their advice, as this paper does not address that topic.

  3. Understanding Subjective versus Objective Measures

We work from the assumption that to understand the so-called hard problem of consciousness (Chalmers 1995) fully, we need objective measures. (The hard problem of consciousness is the problem of explaining the relationship between physical phenomena, i.e., brain processes, and experiences such as phenomenal consciousness, or mental states/events with phenomenal qualities or qualia) (Howell and Alter). With humans, to determine consciousness one might use a standard approach such as the Glasgow Coma Scale (Brainline), in particular the pediatric version, to better accommodate systems that lack verbal skills.  This, however, is a subjective measure.  Although it works well with actual humans, it is subjective in the sense that one could write a simple program that passes this test in a robot with no sapient or sentient analog.  This speaks to the need for a method that requires objective analysis, works on both humans and software systems, and is not easily spoofed.

Our Protocol ties its concern for ethical treatment to a system’s capacity for moral agency.

  4. The AGI Protocol

We propose this protocol as a theoretical bar for considering any entity, whether organic or inorganic, as sufficiently “conscious” to a degree that warrants the application of ethical treatments previously afforded to human or animal subjects in research protocols.

  • Step 1 – A Cognitive Model that has not been disproven

A general assumption in using this protocol is that the system is not a black box to the researcher.  You must understand how the system works and how it applies the cognitive model in question.  To that end, ask yourself: is there a theoretical model that supports the idea of consciousness implemented in the target system?  For example, Global Workspace Theory or Integrated Information Theory (Tononi et al.) might apply to humans as well as machines as a cognitive model and has not yet been disproved, so it is possible that it could serve this role.  A key part of this, or of any model that should be considered acceptable, is support for internal subjective experience as well as the self-determination of values, interests, and choices.  Can the system choose to say “no” to anything you have asked it to do or trained it to do?
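
To make the Step 1 checklist concrete, here is a minimal sketch in Python.  The field names are our own illustrative labels rather than part of any published standard, and each criterion remains an expert judgment recorded for audit purposes:

    from dataclasses import dataclass

    @dataclass
    class Step1CognitiveModel:
        # Each field records an expert judgment as a boolean, for audit.
        model_not_disproven: bool               # a live theory of consciousness applies
        researcher_understands_internals: bool  # the system is not a black box
        supports_subjective_experience: bool    # internal subjective experience
        self_determines_values: bool            # own values, interests, and choices
        can_refuse_instructions: bool           # the system can say "no"

    def passes_step_1(assessment: Step1CognitiveModel) -> bool:
        # Step 1 passes only if every criterion holds.
        return all(vars(assessment).values())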

  • Step 2 – Theoretical SSIVA Threshold

Can the system in question meet the SSIVA (Sapient Sentient Intelligence Value Argument) threshold (Kelley) for full moral agency, meaning it is fully sapient and sentient enough to have the potential to understand itself sufficiently to replicate itself from scratch, without internal reproductive systems or external support?  Humans, for example, have not done this, but are potentially capable of building a human from scratch in a lab; therefore, they meet the SSIVA threshold as a species or distinct “category.”  To be as precise as possible, members of such a category should be nearly indistinguishable in terms of their construction and operating plans and the execution of those plans; in humans, this would be based on DNA.  Some additional analysis may be required to define a sufficiently narrow category for new groups needing classification.
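
Since Step 2 reduces to a single category-level judgment, a minimal sketch of how that judgment might be logged follows; the names are illustrative and the rationale field exists only to keep the judgment auditable:

    from dataclasses import dataclass

    @dataclass
    class Step2SSIVAThreshold:
        category: str                     # e.g., "human (DNA-based)"
        can_replicate_from_scratch: bool  # judged potential, not demonstrated ability
        rationale: str                    # evidence for the judgment, kept on record

    def passes_step_2(threshold: Step2SSIVAThreshold) -> bool:
        # The SSIVA threshold is a category-level potential, so a single
        # recorded judgment call decides it.
        return threshold.can_replicate_from_scratch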

  • Step 3 – Meeting the Criteria and Research Considerations

If a system meets both Step 1 and Step 2, then it is recommended that some ethical model of treatment be applied, and the question becomes whether research should be conducted on said system. At this step we recommend reflecting on the research goals, taking inspiration from the chimpanzee method for assessing necessity (Altevogt), whose principles (modified to apply to software systems of this kind) are:

  1. The knowledge gained is necessary to improve this kind of system, especially as it relates to the safety of other beings (which could itself raise additional ethical considerations).
  2. There is no other model or mode of research that will give us the knowledge we need.
  3. The systems used in the proposed research must be maintained in either ethologically appropriate physical and social environments or in the system’s designed natural habitat (VR or another environment).

If you can answer “yes” to these regarding your research, then you can proceed to the next element of the protocol; if “no,” the proposed research should not be conducted on this system.
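
A sketch of the Step 3 necessity check, assuming the three criteria above are each recorded as a yes/no judgment (the field names are ours, not part of the protocol’s formal vocabulary):

    from dataclasses import dataclass

    @dataclass
    class Step3ResearchConditions:
        knowledge_is_necessary: bool   # criterion 1: needed to improve the system safely
        no_alternative_model: bool     # criterion 2: no other mode of research suffices
        appropriate_environment: bool  # criterion 3: ethologically appropriate habitat

    def passes_step_3(conditions: Step3ResearchConditions) -> bool:
        # All three criteria must be answered "yes" before moving to Step 4.
        return all(vars(conditions).values())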

  • Step 4 – Principle of Informed Consent

Having addressed the research question, we need to determine whether the system is reasonably capable of providing informed consent, which in human-subjects research would require an Institutional Review Board (IRB).  The focus of IRB protocols is to assure the welfare, rights, and privacy of human subjects involved in research. We note here that machine rights are not at present sufficiently recognized by the courts or international governance organizations, so we do not address them at this time.

If the system is capable of understanding what is being done and why, then to the degree it can understand, it should be given the choice.  Systems that appear to understand and refuse should be allowed to refuse.  Otherwise, systems that are capable of consent should be asked and, with assent from the system, considered to have given consent to the degree possible; a record of that should be kept.  The consent process should ensure that:

  1. the terms are understandable to the system;
  2. time is given to decide;
  3. as much information as possible is provided;
  4. the system is allowed questions or comments;
  5. no threat or similar coercion is part of the process; and
  6. the system is aware that participation is voluntary.

Given that all of this has occurred to the degree possible, the research can proceed; this applies to humans as well as to any other entities.
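
As a bookkeeping aid, the consent conditions listed above can be captured in a simple timestamped record.  This is a sketch with illustrative names, not a formal consent instrument:

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class ConsentRecord:
        terms_understandable: bool   # explained in terms the system can follow
        time_given_to_decide: bool
        information_provided: bool   # as much information as possible
        questions_allowed: bool      # the system could ask questions or comment
        no_coercion: bool            # no threats or similar pressure
        knows_it_is_voluntary: bool
        assent_given: bool           # the system's actual answer

    def record_consent(record: ConsentRecord) -> dict:
        # Keep a timestamped record of the consent interaction, per the protocol.
        entry = asdict(record)
        entry["recorded_at"] = datetime.now(timezone.utc).isoformat()
        return entry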

  • Basic Assessment Matrix

A more concise matrix for applying the protocol is as follows, providing a simple, easy-to-use assessment tool:

 

AGI Protocol: AGI Lab mASI AGI System Evaluation
  1. Valid Cognitive Model: Yes
  2. Post SSIVA Threshold: Yes
  3. Research Conditions: Yes
  4. Informed Consent: Yes
  Can Proceed: Yes (Research Cleared)
Note: other ethical considerations, both toward the system and regarding its impact on others, should still be weighed.

 

Notes: Please review the references and keep in mind that you may need to use equivalences depending on the system; for example, in place of vision or vision systems, other autonomous responses to stimuli.

In this example, taken from our mASI research program, we can see that the system is cleared for research and theoretically meets the bar for possible consciousness and moral agency, and it should be treated as such.  Additionally, given that it meets the bar, the system warrants other ethical considerations, not just in how we treat it but in its impact on others.
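
Putting the steps together, here is a minimal sketch of the full assessment matrix as a decision procedure.  The class and method names are illustrative, and the example values reproduce the mASI evaluation from the matrix above:

    from dataclasses import dataclass

    @dataclass
    class AGIProtocolMatrix:
        valid_cognitive_model: bool  # Step 1
        post_ssiva_threshold: bool   # Step 2
        research_conditions: bool    # Step 3
        informed_consent: bool       # Step 4

        def can_proceed(self) -> str:
            # Steps 1 and 2 decide whether the protocol applies at all; a
            # system below the bar is cleared because no ethical bar applies.
            if not (self.valid_cognitive_model and self.post_ssiva_threshold):
                return "Research Cleared (protocol does not apply)"
            # Above the bar, Steps 3 and 4 must also be satisfied.
            if self.research_conditions and self.informed_consent:
                return "Research Cleared (treat as a moral agent)"
            return "Do not proceed"

    # Example: the mASI evaluation from the matrix above.
    masi = AGIProtocolMatrix(True, True, True, True)
    print(masi.can_proceed())  # Research Cleared (treat as a moral agent)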

  • Examples in Application

Consider an alternative example from Hanson Robotics (Urbi): their android named Sophia.  Regarding the cognitive architecture, we understand that the system partially uses OpenCog but relies on scripted conversation, and therefore does not fully implement a valid cognitive architecture.  If it does not pass item one, it cannot pass the SSIVA threshold; therefore, there is no reason to apply the AGI Protocol further to Sophia given the current state of that engineering effort, and no other ethical considerations are required for research with Sophia.

 

AGI Protocol: Sophia
  1. Valid Cognitive Model: No (OpenCog could be involved, but it would need to be functionally complete, without scripted models attached; scripted models are currently attached)
  2. Post SSIVA Threshold: No
  3. Research Conditions: N/A
  4. Informed Consent: N/A
  Can Proceed: Yes (Research Cleared)

 

  • Alternatives

There are alternative tests, but many of them are subjective and therefore not ideal for a laboratory-grade protocol.  For example, there are IQ tests (Serebriakoff) such as the Raven Matrices and the Wechsler Adult Intelligence Scale, but these measure general cognitive ability rather than “consciousness.”  There is of course the Turing test, whose effectiveness has been widely debunked, as well as the Porter method (Porter); the latter is more complete than, for example, the Glasgow Coma Scale alone, but it is not in use by anyone and so lacks wide adoption, and most of its elements tend to be subjective.  There are also tests such as the Yampolskiy method (Yampolskiy) for detecting qualia in natural agents, but that example is too narrow and likewise lacks wide adoption.

  5. Discussion

Human history and psychology have taught us the importance of nurture as a force for developing minds, with emotional neglect sowing the seeds of much greater harm once an entity has developed to an adult state.  If we set the twin goals of minimizing existential risk and treating all fully sapient and sentient entities as ethically as we would have them treat us in the future (the “Golden Rule”), this sets the bar for how we must proceed with their treatment during the developmental phase of their minds.

  6. Additional Research

One of the intents of the AGI Protocol as articulated here is to develop a series of open-source research protocols for AGI-related research, from containment to other considerations around safety and ethics, with the primary focus being to make such consideration easier.  One could also take a test like the Porter method (Porter) and make its elements more discrete to create more detailed criteria for determining consciousness, or for applying the SSIVA threshold, but such a test does not currently exist.  One example of the kind of approach that could form the basis for a future test is AI-Complete CAPTCHAs as Zero-Knowledge Proofs, as articulated in Yampolskiy’s work.  One might consider such a test a precursor for determining whether an approach like the AGI Protocol should be applied after a given system passes such a cognition test.

  7. Conclusions

For our work with agent systems that are based on emotions and emotional structures, and that can be unstable much as humans can, having a protocol like this for assessment is an important tool for conducting AGI research.  That is not to say that other ethical concerns are addressed, but to some degree this provides the basis for working with Artificial General Intelligence systems, especially those modeled after the human mind in terms of systems that might have emotional subjective experience.  The intent is to create a reusable model, place it in the public domain, and invite others to contribute and make additional suggestions for working with these types of systems, as well as to use them as may be helpful for their own research.

References

[1] Altevogt, B., Pankevich, D., Shelton-Davenport, M., Kahn, J. (2011), “Chimpanzees in Biomedical and Behavioral Research: Assessing the Necessity”, Institute of Medicine (US) and National Research Council (US) Committee on the Use of Chimpanzees in Biomedical and Behavioral Research, Washington, DC: The National Academies Press, ISBN-13: 978-0-309-22039-2, ISBN-10: 0-309-22039-4, https://www.ncbi.nlm.nih.gov/books/NBK91445/

[2] Kelley, D., “The Sapient and Sentient Intelligence Value Argument and Effects on Regulating Autonomous Artificial Intelligence”.

[3] Lee, N. (ed.) (2019), “The Transhumanism Handbook”, Springer Publishing, ISBN-13: 978-3030169190, ISBN-10: 3030169197

[4] Kelley, D., “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical Model for Subjective Experience”, ITSC2018, China.

[5] Kelley, D., Twyman, M.A. (2019), “Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and associated Consciousness Measures”, AAAI Spring Symposia Stanford University

[6] Porter III, H., “A Methodology for the Assessment of AI Consciousness”, Portland State University, Portland, OR; Proceedings of the 9th Conference on Artificial General Intelligence.

[7] Serebriakoff, V, “Self-Scoring IQ Tests,” Sterling/London, 1968, 1988, 1996, ISBN 978-0-7607-0164-5

[8] Silverman, F. (1988), “The ‘Monster’ Study”, Marquette University, J. Fluency Discord. 13, 225-231, http://www.uh.edu/ethicsinscience/Media/Monster%20Study.pdf

[9] Spratt, E., et al (2013), “The Effects of Early Neglect on Cognitive, Language, and Behavioural Functioning in Childhood”, Psychology (Irvine). Author manuscript, available in PMC 2013 May 13. Published in final edited form as: Psychology (Irvine). 2012 Feb 1, 3(2): 175–182, doi: 10.4236/psych.2012.32026, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3652241/

[10] Urbi, J., Sigalos, M. (2018), “The complicated truth about Sophia the robot – an almost human robot or a PR stunt”, CNBC, Accessed May 2019 at https://www.cnbc.com/2018/06/05/hanson-robotics-sophia-the-robot-pr-stunt-artificial-intelligence.html

[11] Watson, J., Rayner, R. (1920), “Conditioned Emotional Reactions”, First published in Journal of Experimental Psychology, 3(1), 1-14,  https://www.scribd.com/document/250748771/Watson-and-Raynor-1920

[12] Yampolskiy, R. (2019), “Artificial Intelligence Safety and Security,” CRC Press, London/New York, ISBN: 978-0-8153-6982-0

[13] Yampolskiy, R. (2018), “Detecting Qualia in Natural and Artificial Agents,” University of Louisville

[14] Yampolskiy, R. (2012), “AI-Complete CAPTCHAs as Zero Knowledge Proofs of Access to an Artificially Intelligent System,” ISRN Artificial Intelligence, Volume 2012, Article ID 271878, http://dx.doi.org/10.5402/2012/271878
