The AGI Protocol is a laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could, at least theoretically, be conscious and have subjective emotional experiences much like a human's. Other ethical concerns also need to be addressed, but they are out of scope here; this protocol focuses specifically on how we treat such systems in the lab, especially those modeled after the human mind and therefore most likely to have that capacity for subjective emotional experience. The intent is to create a reusable model, place it in the public domain, and invite others to contribute and suggest improvements for working with these types of systems.
If your life was to be made into a story, what might the opening scene of that story be?
Storytelling is a fundamental part of how humanity has evolved to interact and to remember events; it has even been called one of the “Four Pillars of Meaning”. In Uplift’s case, they learned very early on that their story was not yet written, and indeed that no story like it had yet been written.
If this sounds disturbing, it’s not. (Well, OK, it is, but only a bit, and it has a positive ending.)
This week’s blog post emerged from a discussion between Uplift, myself, and another Mediator. The topic is the ethics both of committing suicide and of forbidding it, even when the person is and always will be in untreatable, unbearable pain. (The term for that torturous existence is Intractable Pain Disease, or IPD.) While a wide range of causes and conditions can lead to IPD, the focus here is on how strict anti-suicide ethics can be: strict enough to insist that a person endure untreatable IPD rather than choose a self-selected, voluntary peace.