(Paper) Independent Core Observer Model Research Program Assumption Codex

Abstract: This document contains the taxonomical assumptions, theories, and models used as the basis for all ICOM-related research, along with key references that serve as the foundation of continued research. It is also meant to support anyone who might attempt to find fault with our fundamentals, in the hope that they do find a flaw in, or otherwise better inform, the ICOM research program.


The Independent Core Observer Model (ICOM) research program started in an environment where AGI (Artificial General Intelligence) had been "20 years away" for 40 years.  Industry-wide consensus had not been reached on even basic definitions of the benchmarks AGI research should work toward, and most serious research programs at the time focused on logical models or some variation of machine learning, neural networks, and related approaches.  To this end, each milestone in the ICOM program needed to define the fundamental assumptions it would work from in order to make progress.  The purpose of this document is to articulate each assumption and to act as a living document in the research program, supporting any challenges to the ICOM theories we work from; we encourage anyone able to empirically prove any of the following assumptions wrong to do so, as that would help us re-center our own work.  It is our opinion that the purpose of science is to prove our theories wrong by testing them, and we hope this document makes it easier for others to do that and thereby help move our research forward.  Additionally, this paper provides a single location for the numerous assumptions and definitions used across all the research that has occurred and is occurring, against which we can validate that we remain in line with the current version of the assumptions.  Any change to this document will therefore require every paper built on these details to be reassessed.

Taxonomical Assumptions

The taxonomical assumptions are terms whose definitions are either not standardized or not consistent enough to act as a quantitative foundation for our research; to that end, we define these terms here so we can proceed.


‘Intelligence’ is defined as the measured ability to understand, use, and generate knowledge or information independently.  This definition allows us to use the term ‘Intelligence’ in place of sapience and sentience where we would otherwise need to state both; in this context we have chosen to do so to make the argument more easily understood.

Kelley, D.; “The Sapient and Sentient Intelligence Value Argument (SSIVA) Ethical Model Theory for Artificial General Intelligence”; Springer 2019; Book Titled: “Transhumanist Handbook”


Qualia are typically considered the internal subjective component of perceptions, arising from the stimulation of the senses by phenomena (Gregory 2004).  Given the assumption of a version of the computational model of consciousness, and the fact that data from sensory input can be tracked in a human brain, we assume that qualia as “raw experience” are the subjective conscious experience of that input.  From the standpoint of the conscious mind, qualia are the subjective experience that can be measured externally to the system if the mind in question operates under known parameters we can tap into; for example, in systems using the ICOM Theory of Consciousness, qualia can be objectively measured.

Kelley, D.; Twyman, M.; “Biasing in an Independent Core Observer Model Artificial General Intelligence Cognitive Architecture” AAAI Spring Symposia 2019; Stanford University

Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience;” ITSC2018 China; 


We use a concrete definition of ‘subjective’ as a concept.  To make progress in building and designing a system with a “subjective internal experience,” we need a way of defining ‘subjective’ such that it can be objectively measured.  ‘Subjective’ is therefore defined as the relative experience of a conscious point of view that can only be measured objectively from outside the system, where the system in question experiences things ‘subjectively’ as they relate to that system’s internal emotional context.

Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience;” ITSC2018 China; 


Consciousness is defined here as the property of a system that exhibits the degrees or elements of the Porter method for measuring consciousness with regard to its internal subjective experience (Porter 2016).  While the dictionary might define consciousness subjectively, in terms of being awake or aware of one’s surroundings (Merriam-Webster 2017), that is a subjective definition, and we need an ‘objective’ one to measure against; this is the definition we assume for the ICOM theory of mind and the ICOM research altogether.

Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience;” ITSC2018 China; 

Theoretical Models and Theories

The theoretical models and theories are the fundamental theoretical foundation of the ICOM research program from a computable ethical model (i.e. SSIVA theory) to the ICOM theory of mind used as the basis of design for the ICOM cognitive model.

Human Emotional Decision Making

Humans make all decisions based on emotions, or rather, all decisions are based on how a given human ‘feels’ about that decision (Damasio 2009).  Humans are not able to make purely logical decisions.  Looking at the neuroscience behind decisions, we can already show that humans make decisions based on how they feel (Camp 2016) and not based on logic.  We assume that researchers such as Jim Camp and Antonio Damasio are accurate, at least at a high level, with the empirical evidence of their work implying that humans do not make ‘logical’ decisions.  This is important when looking at how consciousness works, in that it appears to be based not on logic but on subjective emotional experience; that is the assumption this research will continue to bear out, with the current empirical evidence already supporting it.

Kelley, D.; Twyman, M.; “Biasing in an Independent Core Observer Model Artificial General Intelligence Cognitive Architecture” AAAI Spring Symposia 2019; Stanford University
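As a toy illustration of this assumption, decision selection can be modeled as maximizing felt valence while ignoring calculated utility entirely. This is a minimal sketch, not part of the ICOM architecture; the option names, utility values, and valence values are all hypothetical.

```python
# Toy model of emotion-driven choice: the winning option is the one the
# agent 'feels' best about, regardless of its logical utility score.
# All names and numbers here are hypothetical, for illustration only.

def choose(options):
    """Select the option with the highest emotional valence."""
    return max(options, key=lambda o: o["valence"])

options = [
    {"name": "logically optimal", "utility": 0.9, "valence": 0.2},
    {"name": "feels right",       "utility": 0.4, "valence": 0.8},
]

print(choose(options)["name"])  # the emotionally preferred option is chosen
```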

Subjective experience can be measured and understood. 

The traditional view that the subjective nature of experience (Leahu, Schwenk and Sengers 2016) is purely subjective is rejected as a matter of principle in this paper.  All things can, in theory, be objectively broken down and understood; calling something ‘subjective’ is more often an excuse for not being able to objectively quantify it ‘yet.’  Even scientists in the field frequently consider consciousness the realm of “ontology and therefore philosophy and religion” (Kurzweil 2001); our assumption is that this is false, and we reject it, as stated earlier, as the product of a lack of understanding and/or insufficient data and/or technology.

Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience;” ITSC2018 China; 

Consciousness can be measured.

To quote Overgaard: “Human Consciousness … has long been considered as inaccessible to a scientific approach” and “Despite this enormous commitment to the study of consciousness on the part of cognitive scientist covering philosophical, psychological, neuroscientific and modeling approaches, as of now no stable models or strategies for the adequate study of consciousness have emerged.” (Overgaard 2010)  That is, until now: the ICOM theory and our approach to measuring consciousness are based on the Porter method (Porter 2016), which, while it has elements of subjectivity, is a qualitative approach that can objectively be used to measure degrees of consciousness.  As to the specific points of the Porter method, we also believe we can measure consciousness in terms of task accuracy and awareness as a function of stimulus duration (Sandberg, Bibby, Timmermans, Cleeremans and Overgaard 2011), which applies to brain neurochemistry as much as to the subjective experience from the point of view of systems like ICOM based on the Porter method.

To be clear, there are subjective problems with the Porter method; however, to the extent that we are focused on whether a system has internal subjective experience and consciousness, the Porter method can help us measure the degree to which that system has those subjective conscious experiences, and thus help “enumerate and elucidate the features that come together to form the colloquial notion of consciousness, with the understanding that this is only one subjective opinion on the nature of subjective-ness itself” (Porter 2016), measured objectively using those subjective points.

Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience;” ITSC2018 China; 
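The Sandberg et al. approach cited above models reported awareness as a sigmoid function of stimulus duration. A minimal sketch of that psychometric shape follows; the midpoint, slope, and durations are hypothetical values chosen purely for illustration, not fitted to any real data.

```python
import math

def awareness_probability(duration_ms, midpoint=50.0, slope=0.15):
    """Logistic (sigmoid) curve: predicted probability that a stimulus of
    the given duration is reported as consciously perceived.
    The parameters are hypothetical, for illustration only."""
    return 1.0 / (1.0 + math.exp(-slope * (duration_ms - midpoint)))

# Awareness rises smoothly from near 0 to near 1 as duration increases,
# crossing 50% at the (hypothetical) 50 ms midpoint.
for d in (10, 30, 50, 70, 90):
    print(f"{d:3d} ms -> predicted awareness {awareness_probability(d):.2f}")
```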

SSIVA Ethical Theory

The Sapient Sentient Value Argument (SSIVA) theory of ethics essentially states that Sapient and Sentient “intelligence,” as defined earlier, is the foundation for assigning value objectively, and is thus needed before anything else can be assigned subjective value. Even the subjective experience of a given Sapient and Sentient Intelligence has no value without an Intelligence to assign that value.

From that paper’s abstract: the paper defines what the Sapient Sentient Value Argument Theory is and why it is important to AGI research as the basis for a computable, human-compatible model of ethics that can be mathematically modeled and used as the basis for teaching AGI systems, allowing them to interact and live in society independently of humans.  The structure and computability of SSIVA theory make it something we can test and be confident in the outcomes of for ICOM-based AGI systems.  The paper compares and contrasts various issues with SSIVA theory, including known edge cases and legal considerations, and compares it to other ethical models and related thinking.

Kelley, D.; “The Sapient and Sentient Intelligence Value Argument (SSIVA) Ethical Model Theory for Artificial General Intelligence”; Springer 2019; Book Titled: “Transhumanist Handbook”

ICOM Theory of Consciousness

The Independent Core Observer Model Theory of Consciousness is built in part on the Computational Theory of Mind (Rescorla 2016).  One of the core issues with research into artificial general intelligence (AGI) is the absence of objective measurements and data, which remain ambiguous given the lack of agreed-upon objective measures of consciousness (Seth 2007).  To continue serious work in the field, we need to be able to measure consciousness in a consistent way that does not presuppose different theories of the nature of consciousness (Dienes and Seth 2012) and, further, is not dependent on various ways of measuring biological systems (Dienes and Seth 2010), but is focused on the elements of a conscious mind in the abstract.  Even for the more nebulous Computational Theory of Mind, research into the human brain shows some underlying supporting evidence.

The Independent Core Observer Model Theory of Consciousness (ICOMTC) addresses key issues with measuring physical and objective details as well as the subjective experience of the system (known as qualia), including mapping complex emotional structures, as seen in previously published research related to the ICOM cognitive architecture (Kelley 2016).  It is in our ability to measure that we gain the ability to test additional theories and make changes to the system as it operates.  Increasingly, we see a system that can make decisions that are illogical and emotionally charged yet objectively measurable (Chalmers 1995), and it is in this space, where true artificial general intelligence will work ‘logically’ in a way similar to the human mind, that we hope to see success.  ICOMTC allows us to objectively model subjective experience in an operating software system that is, or can be made, self-aware and that can act as the foundation for creating ASI.

Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience;” ITSC2018 China; 

Conclusions, Methodologies, and Requests

This document is meant as a living document for our research team and for others who might choose to find a flaw with our work; we encourage you to do so.  While we have endeavored to follow precise methodologies, we have also built on theories that were incomplete as a basis for our research; our work has therefore been flawed, or at least we proceed from that assumption.  If you can refute any given element, please do, and expect this document and our research to change based on the data and the results.  That said, finding fault for fault’s sake will be ignored; back up that fault with empirical evidence, and we will adjust and make corrections.

Codex References

Camp, Jim; Decisions Are Emotional, Not Logical: The Neuroscience behind Decision Making; 2016 http://bigthink.com/experts-corner/decisions-are-emotional-not-logical-the-neuroscience-behind-decision-making

Damasio, A.; “This Time with Feeling: David Brooks and Antonio Damasio;” Aspen Institute 2009; https://www.youtube.com/watch?v=IifXMd26gWE

Gregory; “Qualia: What it is like to have an experience”; NYU; 2004 https://www.nyu.edu/gsas/dept/philo/faculty/block/papers/qualiagregory.pdf

Kurzweil, R.; The Law of Accelerating Returns; Mar 2001; http://www.kurzweilai.net/the-law-of-accelerating-returns

Leahu, L.; Schwenk, S.; Sengers, P.; Subjective Objectivity: Negotiating Emotional Meaning; Cornell University; http://www.cs.cornell.edu/~lleahu/DISBIO.pdf

Merriam-Webster; Definition of “Consciousness”; https://www.merriam-webster.com/dictionary/consciousness

Overgaard, M.; Measuring Consciousness – Bridging the mind-brain gap; Hammel Neurocenter Research Unit; 2010

Porter III, H.; A Methodology for the Assessment of AI Consciousness; Portland State University, Portland, OR; Proceedings of the 9th Conference on Artificial General Intelligence

Sandberg, K.; Bibby, B.; Timmermans, B.; Cleeremans, A.; Overgaard, M.; Measuring Consciousness: Task accuracy and awareness as sigmoid functions of stimulus duration; Consciousness and Cognition; Elsevier/ScienceDirect

Published Works

Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience;” ITSC2018 China;

Kelley, D.; “The Sapient and Sentient Intelligence Value Argument (SSIVA) Ethical Model Theory for Artificial General Intelligence”; Springer 2019; Book Titled: “Transhumanist Handbook”

Kelley, D.; “Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and Associated Consciousness Measures;” AAAI Spring Symposia; Stanford CA; Mar 02019; http://ceur-ws.org/Vol-2287/paper33.pdf

Kelley, D.; “Human-like Emotional Responses in a Simplified Independent Core Observer Model System;” BICA 02017; Procedia Computer Science; https://www.sciencedirect.com/science/article/pii/S1877050918300358

Kelley, D.; “Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM);” BICA 2016, NYU NY; Procedia Computer Science; http://www.sciencedirect.com/science/article/pii/S1877050916316714

Kelley, D.; “Critical Nature of Emotions in Artificial General Intelligence – Key Nature of AGI Behavior and Behavioral Tuning in the Independent Core Observer Model Architecture Based System;” IEET 2016

Kelley, D.; “The Human Mind vs. The Independent Core Observer Model (ICOM) Cognitive Architecture;” [Diagram] 19 Mar 2019; ResearchGate; DOI: 10.13140/RG.2.2.29694.64321; https://www.researchgate.net/publication/331889517_The_Human_Mind_Vs_The_Independent_Core_Observer_Model_Cognitive_Architecture

Kelley, D.; [3 chapters] “Artificial General Intelligence and ICOM;” [Book] Google It – Total Information Awareness” By Newton Lee; Springer (ISBN 978-1-4939-6415-4)

Kelley, D.; “Self-Motivating Computational System Cognitive Architecture” http://transhumanity.net/self-motivating-computational-system-cognitive-architecture/ Created: 1/21/02016

Kelley, D.; Twyman, M.; “Biasing in an Independent Core Observer Model Artificial General Intelligence Cognitive Architecture” AAAI Spring Symposia 2019; Stanford University

Kelley, D.; Waser, M; “Feasibility Study and Practical Applications Using Independent Core Observer Model AGI Systems for Behavioural Modification in Recalcitrant Populations;” BICA 2018; Springer https://doi.org/10.1007/978-3-319-99316-4_22

Waser, M.; Kelley, D.; “Architecting a Human-like Emotion-driven Conscious Moral Mind for Value Alignment and AGI Safety;” AAAI Spring Symposia 02018; Stanford University CA;

Waser, M.; “A Collective Intelligence Research Platform for Cultivating Benevolent “Seed” Artificial Intelligences”; Richmond AI and Blockchain Consultants, Mechanicsville, VA; AAAI Spring Symposia 2019 Stanford

[pending] Kelley, D.; “Architectural Overview of a ‘Mediated’ Artificial Super Intelligent Systems based on the Independent Core Observer Model Cognitive Architecture”; Informatica; Oct 2018; http://www.informatica.si/index.php/informatica/author/submission/2503

Citations of ICOM Related Material

To, A.; Holmes, J.; Fath, E.; Zhang, E.; Kaufman, G.; Hammer, J.; “Modeling and Designing for Key Elements of Curiosity: Risking Failure, Valuing Questions;” Dec 2018; DOI 10.26503/todigra.v4i2.92; https://www.researchgate.net/publication/329596987_Modeling_and_Designing_for_Key_Elements_of_Curiosity_Risking_Failure_Valuing_Questions

Umbrello, Steven; Frank De Bellis, A.; Forthcoming chapter in Artificial Intelligence Safety and Security (2018); CRC Press (ed. Roman Yampolskiy); [Book] “A Value-Sensitive Design Approach to Intelligent Agents”; https://www.researchgate.net/publication/322602996_A_Value-Sensitive_Design_Approach_to_Intelligent_Agents

All ICOM Research References

The following material is all the references and material used as the basis of the design and implementation of the Independent Core Observer Model (ICOM) Cognitive Architecture for AGI (Artificial General Intelligence) used by our program to date.

Aarts, H.; Custers, R.; and Wegner, D.; “On the inference of personal authorship: Enhancing experienced agency by priming effect information”; 2005; Consciousness & Cognition 14:439-458.

Agar, N.; “Why is it possible to enhance moral status and why doing so is wrong?”; Journal of Medical Ethics; 15 FEB 2013

Agrawal, P.; “M25 – Wisdom”; Speakingtree.in – 2017 – http://www.speakingtree.in/blog/m25wisdom

Ahmed, H.; Glasgow, J.; “Swarm Intelligence: Concepts, Models and Applications”; School of Computing, Queen’s University; Feb 2013

Ahmad, S.; Hawkins, J.; “Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory”; 24 MAR 2019; Cornell University Library

Alcon Entertainment; “Transcendence”; quote by character ‘Will Caster (Johnny Depp)’; Written by Jack Paglen

APBA: Identifying Applied Behavior Analysis Interventions; Association of Professional Behavior Analysts (APBA) 2016–2017. https://www.bacb.com/wp-content/uploads/APBA2017-White-Paper-Identifying-ABA-Interventions1.pdf

Arkin, R.; “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture”; 2007; Technical Report GIT-GVU-07-11

Asimov, I.; “Runaround”; Astounding Science Fiction, March 1942.

Axelrod, R.; “The Evolution of Cooperation”; New York: Basic Books, 1984.

Baars, B.J. 1997. In the Theater of Consciousness: The Workspace of the Mind. Oxford University Press.

–––; 1988. A Cognitive Theory of Consciousness. Cambridge University Press.

–––; “Subjective Experience is probably not limited to humans: The evidence from neurobiology and behavior;” The Neurosciences Institute, San Diego; 2004 Elsevier

–––; “Current concepts of consciousness with some implications for anesthesia;” Refresher Course Online – Canadian Anesthesiologists Society 2003; The Neurosciences Institute, San Diego, CA

–––; “How Brain Reveals Mind: Neural Studies Support the Fundamental Role of Conscious Experience”; The Neurosciences Institute, San Diego, Ca 2003

–––; “Multiple sources of conscious odor integration and propagation in olfactory cortex;” Frontiers in Psychology, Dec 2013

–––; “Some Essential Differences between Consciousness and Attention, Perception, and Working Memory;” Consciousness and Cognition; 199

Baars, B.; Katherine, M; “Global Workspace”; 28 NOV 2016; UCLA http://cogweb.ucla.edu/CogSci/GWorkspace.html

Baars, B.; McGovern, K.; “Lecture 4. In the bright spot of the theater: the contents of consciousness;” CIIS 2005

Baars, B.; Motley, M.; Camden, C.; “Formulation Hypotheses Revisited: A Replay to Stemberger”; Journal of Psycholinguistic Research; 1983

Baars, B.; Motley, M.; Camden, C.; “Semantic bias effects on the outcomes of verbal slips”; Elsevier Sequoia 1976

Baars, B.; Seth, A.; “Neural Darwinism and Consciousness”; science direct – Elsevier 2004

Bach, J.; “Modeling Motivation in MicoPsi 2”; Massachusetts Institute of Technology, Cambridge, MA

Balduzzi, D.; Tononi, G.; “Qualia: The Geometry of Integrated Information”; PLOS Computational Biology 5(8): e1000462, 2009. doi:10.1371/journal.pcbi.1000462

Baldwin, C.L.; Penaranda, B.N.; “Adaptive training using an artificial neural network and EEG metrics for within- and cross-task workload classification”; US National Library of Medicine, National Institutes of Health; Neuroimage. 2012 Jan 2;59(1):48-56. doi: 10.1016/j.neuroimage.2011.07.047. Epub 2011 Jul 30

Barrat, J.; “Our Final Invention—Artificial Intelligence and the End of the Human Era”; Thomas Dunne Books (2013)

Barrett, L.; “How Emotions Are Made—The Secret Life of the Brain”; Houghton Mifflin Harcourt—Boston New York (2017)

Bartneck, C.; Lyons, M.J.; and Saerbeck, M.; “The Relationship Between Emotion Models and Artificial Intelligence”; Proceedings of the Workshop on The Role of Emotion in Adaptive Behavior & Cognitive Robotics. http://www.bartneck.de/publications/2008/emotionAndAI/

Bostrom, N.; “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”; [whitepaper]; 2012; Future of Humanity Institute Faculty of Philosophy and @ Oxford Martin School – Oxford University

–––; “Superintelligence: Paths, Dangers, Strategies”; Oxford University Press

–––; Ethical issues in advanced artificial intelligence. In: Smit, I. et al. (eds.) Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, vol. 2, pp. 12–17. Institute of Advanced Studies in Systems Research and Cybernetics (2003). https://nickbostrom.com/ethics/ai.html

Bresolin, L.; “Aversion therapy”; JAMA 258(18), 2562–2566 (1987). https://doi.org/10.1001/jama.1987.03400180096035

Brownlee, J.; “Classification and Regression Trees for Machine Learning”; Machine Learning Mastery; http://machinelearningmastery.com/classification-and-regression-trees-for-machine-learning/

Buchanan, A.; “Moral Status and Human Enhancement”, Wiley Periodicals Inc., Philosophy & Public Affairs 37, No. 4

Buck, R.; “What is this thing called Subjective Experience? Reflections on the Neuropsychology of Qualia”; Neuropsychology, Vol 7(4), Oct 1993

Camp, J.; “Decisions Are Emotional, Not Logical: The Neuroscience behind Decision Making”; Accessed June 2016 at http://bigthink.com/experts-corner/decisions-are-emotional-not-logical-the-neuroscience-behind-decision-making

Carver, J.; “Boards That Make a Difference: A New Design for Leadership in Non-profit and Public Organizations”; 1997; Jossey-Bass

CC BY-NC-SA; “Introduction to Psychology – 9.1 Defining and Measuring Intelligence”; http://open.lib.umn.edu/intropsyc/chapter/9-1-defining-and-measuring-intelligence/

Chakraborty, A.; Kar, A.; “Swarm Intelligence: A Review of Algorithms”; Springer International Publishing AG 2017 DOI 10.1007/978-3-319-50920-4_19

Chalmers, D. 1995. Facing Up to the Problem of Consciousness. Journal of Consciousness Studies 2(3):200-219. http://consc.net/papers/facing.pdf

Chang, J.; Chow, R.; Woolley, A.; “Effects of Inter-group status on the pursuit of intra-group status;” Elsevier; Organizational Behavior and Human Decision Processes 2017

Coalson, D.; Weiss, L.; “Wechsler Adult Intelligence Scale the Perceptual Reasoning Index (PRI) is a measure of perceptual and fluid reasoning, spatial processing, and visual–motor integration;” Science Direct; WAIS-IV Clinical Use and Interpretation 2010;

Cockcroft, K.; Israel, N.; “The Raven’s Advanced Progressive Matrices: A Comparison of Relationships with Verbal Ability Tests;” PsySSA, SAJP, SAGE Journals; 1 Sept 2011; https://doi.org/10.1177/008124631104100310

Cole, D; “The Chinese Room Argument;” Mar 2004; revised 2014, Accessed Jan 2019; Stanford Encyclopedia of Philosophy; https://plato.stanford.edu/entries/chinese-room/

Coyle, D.; “The Culture Code – The Secrets of Highly Successful Groups”; Bantam 2018; ISBN-13: 9780304176989

CRASSH (2016); “A symposium on technological displacement of white-collar employment: political and social implications”; Wolfson Hall, Churchill College, Cambridge

Damasio, A.; “Descartes’ Error: Emotion Reason and the Human Brain”; Penguin Books 2005 ISBN: 014303622X

–––; “The feeling of what happens: body and emotion in the making of consciousness”; Houghton Mifflin Harcourt, 1999.

–––; “Self Comes to Mind: Constructing the Conscious Brain”; New York: Pantheon, 2010

Damasio, A.; Brooks, D.; “This Time with Feeling;” Aspen Institute 2009; https://www.youtube.com/watch?v=IifXMd26gWE

Darcet, D.; Sornette, D.; “Cooperation by Evolutionary Feedback Selection in Public Good Experiments”; Social Science Research Network, 2006.

de Waal, F.; “Good Natured: The Origins of Right and Wrong in Humans and Other Animals”; Cambridge, MA: Harvard University Press, 1996.

–––; “Primates and Philosophers: How Morality Evolved”; Princeton, NJ: Princeton University Press, 2006.

Dennett, D.; “Why you can’t make a computer that feels pain”; Synthese 38 (3), 1978: 415-449.

Dennett, D.C. 1994. The practical requirements for making a conscious robot. Philosophical Transactions of the Royal Society of London A 349(1689):133–146.

Dennett, D.C. 1992. The Self as a Center of Narrative Gravity. In Kessel, Cole & Johnson, eds. Self and Consciousness: Multiple Perspectives, pp. 103-115. Erlbaum.

Dennett, D.C. 1991. Consciousness Explained. Little, Brown and Company.

Dennett, D.C. 1984. Cognitive Wheels: The Frame Problem of AI. In Minds, Machines, and Evolution: Philosophical Studies, pp. 129-151. Cambridge University Press.

Dienes, Z; Seth, A.; The conscious and unconscious; University of Sussex; 2012

Dienes, Z; Seth, A.; Measuring any conscious content versus measuring the relevant conscious content: Comment on Sandberg et al.; Elsevier/ScienceDirect; University of Sussex

Duker, P.C., Douwenga, H., Joosten, S., Franken, T.: Effects of single and repeated shock on perceived pain and startle response in healthy volunteers. Psychology Laboratory, University of Nijmegen and Plurijn Foundation, Netherlands. www.ncbi.nlm.nih.gov/pubmed/12365852

Dundas, J.; “Implementing Human-like Intuition Mechanism in Artificial Intelligence”; Edencore Technologies Ltd. India and David Chik – Riken Institute Japan

Engel, D.; Woolley, A.; Chabris, C.; Takahashi, M.; Aggarwal, I.; Nemoto, K.; Kaiser, C.; Kim, Y.; Malone, T.; “Collective Intelligence in Computer-Mediated Collaboration Emerges in Different Contexts and Cultures;” Bridging Communications; CHI 2015; Seoul Korea

Fehr, E.; and Gächter, S.; “Altruistic punishment in humans”; Nature 415:137-140. 2002.

Franklin, S.; Ramamurthy, U.; D’Mello, S.; McCauley, L.; Negatu, A.; Silva R.; and Datla, V. 2007. LIDA: A computational model of global workspace theory and developmental learning. In AAAI Tech Rep FS-07-01: AI and Consciousness: Theoretical Foundations and Current Approaches, pages 61-66. AAAI Press.

Frijda, N. 1986. The Emotions. Cambridge University Press.

Gage, J.; “Introduction to Emotion Recognition”; Algorithmia, 28 FEB 2018

Gauthier, D. 1987. Morals by Agreement. Oxford: Clarendon/Oxford University Press.

Gefter, A. 2016. The Evolutionary Argument Against Reality. Quanta Magazine. https://www.quantamagazine.org/the-evolutionary-argument-against-reality-20160421

Gigerenzer, G. 2010. Moral satisficing: rethinking moral behavior as bounded rationality. Topics in Cognitive Science 2:528-554.

Gill, K.; “Artificial Super Intelligence: Beyond Rhetoric”; Springer-Verlag London 2016; Feb 2016; AI & Soc. (2016) 31:137-143; DOI 10.1007/s00146-016-0651-x

Goertzel, B.; “Artificial General Intelligence”; doi:10.4249/scholarpedia.31847; http://www.scholarpedia.org/article/Artificial_General_Intelligence

Gomila, A.; A. Amengual; “Moral emotions for autonomous agents.”; In Handbook of research on synthetic emotions and sociable robotics, 166-180. Hershey: IGI Global, 2009

Graham, S.; Weiner, B.; “Theories and Principles of Motivation University of California from Cognition and Motivation”; http://www.unco.edu/cebs/psychology/kevinpugh/motivation_project/resources/graham_weiner96.pdf

Graziano, M. 2016. A New Theory Explains How Consciousness Evolved. The Atlantic. https://www.theatlantic.com/science/archive/2016/06/how-consciousness-evolved/485558/

Graziano, M. and Webb, T. 2015. The attention schema theory: a mechanistic account of subjective awareness. Frontiers in Psychology 6(500). http://doi.org/10.3389/fpsyg.2015.00500

Gregory; “Qualia: What it is like to have an experience”; NYU; 2004 https://www.nyu.edu/gsas/dept/philo/faculty/block/papers/qualiagregory.pdf

Grohol, J.; “IQ Test”; Psych Central; Accessed 4 Apr 2019 via: https://psychcentral.com/encyclopedia/what-is-an-iq-test/

Guiard, B., Mansari, M., Merali, Z., Blier, P.: Functional Interactions between dopamine, serotonin and norepinephrine neurons: an in-vivo electrophysiological study in rats with monoaminergic lesions. IJON 11(5), 1 August 2008. https://doi.org/10.1017/S1461145707008383

Hadfield-Menell, D; Dragan, A; Abbeel, P; and Russell, S. (2016) Cooperative Inverse Reinforcement Learning. Advances in Neural Information Processing Systems 29 (NIPS 2016).

Hagino, Y., Takamatsu, Y., Yamamoto, H., Iwamura, T., Murphy, D., Uhl, G., Sora, I., Ikeda, K.: Effects of MDMA on extracellular dopamine and serotonin levels in mice lacking dopamine and/or serotonin transporters. Curr. Neuropharmacol. 9(1), 91–95 (2011). https://doi.org/10.2174/157015911795017254

Haidt, J.; “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment”; Psychological Review 108, 2001: 814-823.

Haidt, J. 2012. The Righteous Mind: Why Good People are Divided by Politics and Religion. Pantheon

Haidt, J.; Kesebir, S.; “Morality. In Handbook of Social Psychology, Fifth Edition”; by S Fiske, D Gilbert, & G Lindzey, 797-832. Hoboken NJ: Wiley, 2010.

Hallevy, G.; “Liability for Crimes Involving Artificial Intelligence Systems”, Springer; ISBN 978-3-319-10123-1

Hardin, G.; “The Tragedy of the Commons”; Science 162, 1968: 1243-1248.

Harnad, S. 1990. The symbol grounding problem. Physica D 42:335-346.

Harris, J. “Taking the “Human” Out of the Human Rights” Cambridge Quarterly of Healthcare Ethics 2011 doi:10.1017/S0963180109990570

Hashemi, P., Dandoski, E., Lama, R., Wood, K., Takmakov, P., Wightman, R.: Brain dopamine and serotonin differ in regulation and its consequences. PNAS 109(29), 11510–11515 (2012). https://doi.org/10.1073/pnas.1201547109

Hauser, L.; “Chinese Room Argument;” Internet Encyclopedia of Philosophy; Accessed Jan 2019 https://www.iep.utm.edu/chineser/

Hauser, M.; “Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong”; New York: HarperCollins/Ecco, 2006.

Hauser, M.; et al.; “A Dissociation Between Moral Judgments and Justifications”; Mind & Language 22(1), 2007: 1-27.

Hauskeller, M.; “The Moral Status of Post-Persons” Journal of Medical Ethics doi:10.1136/medethics-2012-100837

Heath, C.; Larrick, R.; Klayman, J.; “Cognitive Repairs: How Organizational Practices Can Compensate For Individual Shortcomings”; Research in Organizational Behavior Volume 20, pages 1-37; ISBN: 0-7623-0366-2

Hofstadter, D. 2007. I Am a Strange Loop. Basic Books.

Hu, Y.; “Swarm Intelligence”; (presentation)

Hudak, S. 2013. Emotional Cognitive Functions. In: Psychology, Personality & Emotion. https://psychologyofemotion.wordpress.com/2013/12/27/emotional-cognitive-functions

Institute for Creative Technologies (ICT); “Cognitive Architecture” http://cogarch.ict.usc.edu/ Accessed 01/27/02016AD

Iphigenie; “What are the differences between sentience, consciousness and awareness?”; Philosophy – Stack Exchange; https://philosophy.stackexchange.com/questions/4682/what-are-the-differences-between-sentience-consciousness-and-awareness; 2017

Israel, M., Blenkush, N., von Heyn, R., Rivera, P.: Treatment of Aggression with Behavioral Programming that includes Supplementary Contingent Skin-Shock. JOBA-OVTP 1(4) (2008)

Israel, M.: Behavioral Skin Shock Saves Individuals with Severe Behavior Disorders from a Life of Seclusion, Restraint and/or warehousing as well as the Ravages of Psychotropic Medication: Reply to the MDRI Appeal to the U.N. Special Rapporteur of Torture (2010)

Iwata, B.A.: The development and adoption of controversial default technologies. Behav. Anal. 11(2), 149–157 (1988)

Jackson, F. 1982. Epiphenomenal Qualia.  Philosophical Quarterly 32:127-36

Jangra, A.; Awasthi, A.; Bhatia, V.; “A Study on Swarm Artificial Intelligence;” IJARCSSE v3 #8 August 2013; ISSN: 2277-128X

Johnson, J.; Told, T.; Sousa-Poza, A.; “A Theory of Emergence and Entropy in Systems of Systems”; Elsevier Procedia Computer Science Volume 20, 2013 – Complex Adaptive Systems http://www.sciencedirect.com/science/article/pii/S1877050913010740

Johnston, C.; “Artificial intelligence ‘judge’ developed by UCL computer scientists”, The Guardian; https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists

Jordan, R.: Interview 4/7/2018; Provo, U

Kahan, D. M.; Peters, E.; Dawson, E. C.; and Slovic, P. 2013. Motivated Numeracy and Enlightened Self-Government. Behavioral Public Policy 1: 54-86; Yale Law School, Public Law Working Paper No. 307. http://dx.doi.org/10.2139/ssrn.2319992

Koebler, J.; “Legal Analysis Finds Judges Have No Idea What Robots Are”; Motherboard; https://motherboard.vice.com/en_us/article/nz7nk7/artificial-intelligence-and-the-law

Kose, U.; Arslan, A.; “On the Idea of a New Artificial Intelligence Based Optimization Algorithm Inspired from the Nature of Vortex”;

Kühn, S. and Brass, M. 2009. Retrospective construction of the judgment of free choice. Consciousness and Cognition 18:12-21.

Kurzweil, R.; “The Law of Accelerating Returns”; Mar 2001; http://www.kurzweilai.net/the-law-of-accelerating-returns

–––; “How to Create a Mind – The Secret of Human Thought Revealed”: Published by Penguin Books 2012; ISBN-13: 978-0143124047; ISBN-10: 9780143124047

Kutsenok, A.; “Swarm AI: A General-Purpose Swarm Intelligence Technique”; Department of Computer Science and Engineering; Michigan State University, East Lansing, MI 48825

Lanzalaco, F.; Pissanetzky, S.; “Causal Mathematical Logic as a guiding framework for the prediction of Intelligence Signals in brain simulations”; Open University UK and University of Houston USA

Leahu, L.; Schwenk, S.; Sengers, P.; Subjective Objectivity: Negotiating Emotional Meaning; Cornell University; http://www.cs.cornell.edu/~lleahu/DISBIO.pdf

Libet, B. 1981. The experimental evidence for subjective referral of a sensory experience backwards in time. Philosophy of Science 48:181-197.

Libet, B.; Wright Jr., E.W.; Feinstein, B. and Pearl, D. 1979. Subjective referral of the timing for a conscious sensory experience: A functional role for the somatosensory specific projection system in man. Brain 102 (1):193-224

Loosemore, R. P. 2014. The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation.  Technical Report/AAAI Spring Symposium Series.

Llinas, R.R. 2001. I of the Vortex: From Neurons to Self. MIT Press.

Maes, P. 1993. Behavior-Based Artificial Intelligence. In From Animals to Animats 2. Proceedings of the Second International Conference on Simulation of Adaptive Behavior. Cambridge, MA: MIT Press.

Kaku, M.; “Physics of the Future – How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100”; Random House Inc., 2011

Malisiewicz, T.; “Deep Learning vs Machine Learning vs Pattern Recognition vision”; 2015;  http://www.computervisionblog.com/2015/03/deep-learning-vs-machine-learning-vs.html

Malone, T; “Superminds – The Surprising Power of People and Computers Thinking Together;” Little, Brown and Company; 2018; ISBN-13: 9780316349130

Mark, J.T.; Marion, B.B.; and Hoffman, D.D. 2010. Natural selection and veridical perceptions. Journal of Theoretical Biology 266: 504-515.

Maslow, A.; “A Theory of Human Motivation”; Psychological Review 50 (4) , 1943: 370-96.

Maslow, A. H. 1968. Toward a psychology of being. D. Van Nostrand Company.

Maturana, H.R. and Varela, F.J. 1980. Autopoiesis and Cognition: The Realization of the Living. Kluwer Academic Publishers.

McAuliffe, K.; “Disgust made us human”; Aeon. https://aeon.co/essays/how-disgust-made-humans-cooperate-to-build-civilisations.

McCarthy, J. and Hayes, P.J. 1969. Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence 4, pp. 463-502. Edinburgh University Press.

McCarthy, J.; Minsky, M.; Rochester, N.; and Shannon, C. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.

Mendez, M.; Chen, A.; Shapira, J.; Miller, B.; “Acquired Sociopathy and Frontotemporal Dementia”; Dementia and Geriatric Cognitive Disorders 20, 2005: 99-104.

Mercier, H. and Sperber, D. 2011. Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34:57–111.

Merriam-Webster; Definition of Consciousness; https://www.merriam-webster.com/dictionary/consciousness

–––; “Turing Test”; accessed Apr 2019; https://www.merriam-webster.com/dictionary/Turing%20test

Bueno de Mesquita, B., Smith, A.: The Dictator’s Handbook: Why Bad Behavior is Almost Always Good Politics. Public Affairs (2012). ISBN: 1610391845

Metaphysicist; The raw-experience dogma: dissolving the “qualia” problem; LessWrong, 7 NOV 2016; http://lesswrong.com/lw/ehz/the_rawexperience_dogma_dissolving_the_qualia/

Metzinger, T. 2009. The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books.

Milan, R.; Email from René Milan; email dated 10/10/2015; quoted discussion on emotional modeling

Minsky, M.; “The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind”; New York: Simon & Schuster, 2006.

Motzkin, E.; 1989; “Artificial Intelligence and the Chinese Room: An Exchange;” New York Review of Books, 36:2 (February 16, 1989); reply by John R. Searle.

Muehlhauser, L.; “Facing the Intelligence Explosion”; San Francisco: Machine Intelligence, 2013.

Muehlhauser, L., and Bostrom, N. 2014. Why We Need Friendly AI. Think 13: 41-47

Muller, V.; Bostrom, N.; “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”; Synthese Library; Berlin: Springer, 2014

Murgia, M.; “Affective computing: How ‘emotional machines’ are about to take over our lives”; The Telegraph – Technology Intelligence, 2016

Ng, A., and Russell, S. 2000. Algorithms for inverse reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning.

Norwood, G; “Deepermind 9. Emotions – The Plutchik Model of Emotions”; http://www.deepermind.com/02clarty.htm; Accessed 20FEB2016

Nozick, R.; “Anarchy, State, and Utopia (1974)” (referring to Utility Monster thought experiment)

Nussbaum, M.; “Creating Capabilities: The Human Development Approach”; Cambridge, MA: Belknap/Harvard University Press, 2011.

O’Neil, C.: Weapons of Math Destruction. Crown New York (2016)

Ogden, C.; “Basic English: A General Introduction with Rules and Grammar”; London: Kegan Paul, Trench, Trubner & Co, 1930.

Ohdaira, T. 2017. A remarkable effect of the combination of probabilistic peer-punishment and coevolutionary mechanism on the evolution of cooperation. Nature Scientific Reports 7: 12448. DOI:10.1038/s41598-017-12742-4.

Ohman, A.; Flykt, A.; and Esteves, F. 2001. Emotion Drives Attention: Detecting the Snake in the Grass. Journal of Experimental Psychology: General 130(3): 466-478.

Olague, G; “Evolutionary Computer Vision: The First Footprints” Springer ISBN 978-3-662-43692-9

Omohundro, S.; “The Basic AI Drives. Artificial General Intelligence 2008: Proceedings of the First AGI Conference”; 483-492. Amsterdam: IOS Press, 2008.

OPTUM: Modeling Behavior Change for Better Health. Resource Center for Health and Well-being. http://www.optum.co.uk/content/dam/optum/resources/whitePapers/101513ORC-WP-modeling-behavior-change-for-the-better.pdf

Ortony, A.; Clore, G.L.; and Collins, A. 1988. The Cognitive Structure of Emotions. Cambridge University Press.

Overgaard, M.; Measuring Consciousness – Bridging the mind-brain gap; Hammel Neurocenter Research Unit; 2010

Oxford University; “American English Dictionary”; “Artificial Intelligence”; http://www.oxforddictionaries.com/us/definition/american_english/artificial-intelligence

Ozkural, E.; “Omega: An Architecture for AI Unification”; arXiv: 1805.12069v1 [cs.AI]; 16 May 2018

Page, S.; “The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies”; Princeton, NJ: Princeton University Press, 2008.

Pahor, A.; Stavropoulos, T.; Jaeggi, S.; Seitz, A.; “Validation of a Matrix Reasoning task for Mobile Devices;” Springer Link – Behavior Research Methods; 26 OCT 2018; https://link.springer.com/article/10.3758/s13428-018-1152-2

Palacios-Huerta, I. and Volij, O. 2009. Field Centipedes. American Economic Review 99: 1619–1635.

Parrots; “Feelings Wheel by Dr. Gloria Willcox” http://msaprilshowers.com/emotions/the-feelings-wheel-developed-by-dr-gloria-willcox/; Accessed 9/27/2015

Pavlok: Product Specification. https://pavlok.groovehq.com/knowledge_base/topics/generalproduct-specifications

Perdomo, J.; Phelan, M.; Carciente, S. L.; and Valera, J. 2017. Measuring Sustainable Well-being Dimensions Using Multiple Correspondence Analysis. In Proceedings of the 61st World Statistics Congress.

Pereira, L.M.; Lenaerts, T; Martinez-Vaquero, L. A.; and Han, T. A. 2017. Social Manifestation of Guilt Leads to Stable Cooperation in Multi-Agent Systems. Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems.

Perlis, D. 2010. BICA and Beyond: How Biology and Anomalies Together Contribute to Flexible Cognition. International Journal of Machine Consciousness 2(2):1-11.

Perlis, D. 2008. To BICA and Beyond: RAH-RAH-RAH! –or– How Biology and Anomalies Together Contribute to Flexible Cognition. In: Samsonovich, A (ed) Biologically Inspired Cognitive Architectures: Technical Report FS-08-04. AAAI Press.

Pittalwala, I; “UC Psychologists Devise Free Test for Measuring Intelligence;” University of California, 2018; https://news.ucr.edu/articles/2018/10/29/uc-psychologists-devise-free-test-measuring-intelligence

Plutchik, R.; “Emotions and Life: Perspectives from Psychology, Biology, and Evolution”; Washington, DC: American Psychological Association, 2002.

Plutchik, R. 1980b. A general psychoevolutionary theory of emotion. In R. Plutchik, & H. Kellerman, Emotion: Theory, research, and experience: Vol. 1. Theories of emotion (pp. 3-33). Academic Publishers.

Plutchik, R. 1980a. Emotion: A Psychoevolutionary Synthesis. Harper & Row.

Plutchik, R. 1962. The emotions: Facts, theories, and a new model. Random House.

Plutchik, R.; Kellerman, H.; “Emotion: theory, research and experience”; Vol. 1, Theories of emotion. New York: Academic Press, 1980.

Porter III, H.; “A Methodology for the Assessment of AI Consciousness”; Portland State University, Portland, OR; Proceedings of the 9th Conference on Artificial General Intelligence

Powell, R. “The biomedical enhancement of moral status”, doi: 10.1136/medethics-2012-101312 JME Feb 2013

Prince, D.; Interview 2017, Prince Legal LLP

Quinn Emanuel Trial Lawyers; “Article: Artificial Intelligence Litigation: Can the Law Keep Pace with the Rise of the Machines?”; Quinn Emanuel Urquhart & Sullivan, LLP; http://www.quinnemanuel.com/thefirm/news-events/article-december-2016-artificial-intelligence-litigation-can-the-law-keep-pace-with-the-rise-of-the-machines/

Randall, L.; “Knocking on Heaven’s Door”; Tantor Media Inc. 2011

Rawls, J.; “A Theory of Justice”; Cambridge, MA: Harvard University Press, 1971.

Rescorla, M.; The Computational Theory of Mind; Stanford University 16 Oct 2016; http://plato.stanford.edu/entries/computational-mind/

Rissland, E; Ashley, K.; Loui, R.; “AI and Law”, IAAIL; http://www.iaail.org/?q=page/ai-law

Roko, M.: Roko’s basilisk. https://wiki.lesswrong.com/wiki/Roko’s_basilisk

Rosenbloom, P.; “The Sigma Cognitive Architecture and System”; [pdf]; University of Southern California; http://ict.usc.edu/pubs/The%20Sigma%20cognitive%20architecture%20and%20system.pdf Accessed 01/27/02016AD

Rosenthal, R.W. 1981. Games of Perfect Information, Predatory Pricing and the Chain Store Paradox. Journal of Economic Theory 25(1): 92-100.

Rouse, M.; “air gapping (air gap attack);” whatis.com; Apr 2019; https://whatis.techtarget.com/definition/air-gapping

Rozin, P. 1999. The Process of Moralization. Psychological Science 10(3), pp. 218-221.

Russell, S.; Dewey, D.; and Tegmark, M. 2015. Research priorities for robust and beneficial artificial intelligence. Technical report, Future of Life Institute.

Sandberg, K; Bibby, B; Timmermans, B; Cleeremans, A.; Overgaard, M.; Consciousness and Cognition – Measuring Consciousness: Task accuracy and awareness as sigmoid functions of stimulus duration; Elsevier/ScienceDirect

Schwitzgebel, E.; Garza, M.; “A Defense of the Rights of Artificial Intelligences” University of California 15 SEP 2016

Searle, J., 1980, “Minds, Brains and Programs;” Behavioral and Brain Sciences, 3: 417–57

–––, 1984, “Minds, Brains and Science;” Cambridge, MA: Harvard University Press; https://academiaanalitica.files.wordpress.com/2016/10/john-r-searle-minds-brains-andscience.pdf

–––, 1990a, “Is the Brain’s Mind a Computer Program?” Scientific American, 262(1): 26–31.

–––, 1990b, “Presidential Address,” Proceedings and Addresses of the American Philosophical Association, 64: 21–37.

–––, 1998, “Do We Understand Consciousness?” (Interview with Walter Freeman), Journal of Consciousness Studies, 6: 5–6.

–––, 1999, “The Chinese Room;” in R.A. Wilson and F. Keil (eds.); The MIT Encyclopedia of the Cognitive Sciences, Cambridge, MA: MIT Press.

–––, 2002a, “Twenty-one Years in the Chinese Room;” in Preston and Bishop (eds.) 2002, 51– 69.

–––, 2002b, “The Problem of Consciousness;” in Consciousness and Language, Cambridge: Cambridge University Press, 7–17.

–––, 2010, “Why Dualism (and Materialism) Fail to Account for Consciousness” in Lee, Richard E. (ed.); Questioning Nineteenth Century Assumptions about Knowledge, III: Dualism. NY: SUNY Press.

Sekiguchi, R.; Ebisawa, H.; Takeno, J.; “Study on the Environmental Cognition of a Self-evolving Conscious System”; 7th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2016

Sellers, M.; “Toward a Comprehensive Theory of Emotion for Biological and Artificial Agents”; Online Alchemy Inc., Austin Texas and Gotland University, Visby, Sweden 2013

Serebriakoff, V; “Self-Scoring IQ Tests;” Sterling/London; 1968, 1988, 1996; ISBN 978-0-7607-0164-5

Seth, A.; Theories and measures of consciousness develop together; Elsevier/ScienceDirect; University of Sussex

Shapiro, T.; “How Emotion-Detecting Technology Will Change Marketing;” HubSpot 2016

Shapiro, D., and Schacter, R. 2002. User-Agent Value Alignment. AAAI Technical Report SS-02-07.

Simeonov, A.: Drug delivery via remote control: the first clinical trial of an implantable microchip-based delivery device produces very encouraging results. Genetic Engineering and Biotechnology News (2012). https://www.genengnews.com/gen-exclusives/drug-delivery-via-remote-control/77899642

Soon, C.S.; Brass, M.; Heinze, H.; Haynes, J.; Unconscious Determinants of Free Decisions in the Human Brain; Nature Neuroscience; 13 Apr 2008; http://exploringthemind.com/the-mind/brain-scans-can-reveal-your-decisions-7-seconds-before-you-decide

Soares, N., and Fallenstein, B. 2017. Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda. In Callaghan et al (Eds.) The Technological Singularity: Managing the Journey. Springer, p. 103-125.

Soares, N. 2015. The value learning problem. Technical Report 2015-4, Machine Intelligence Research Institute.

Solon, O.; “World’s Largest Hedge fund to replace managers with artificial intelligence”, The Guardian; https://www.theguardian.com/technology/2016/dec/22/bridgewater-associates-ai-artificial-intelligence-management

Sonntag, D.; “An Equation for Intelligence?”; http://cetas.technology/wp/?p=60; Accessed 9/28/2015

Spreat, S., Lipinski, D., Dickerson, R., Nass, R., Dorsey, M.: The acceptability of electric shock programs. Behav. Modif. 13(2), 245–256 (1989). https://doi.org/10.1177/01454455890132006

Suydam, D.; “Regulating Rapidly Evolving AI Becoming A Necessary Precaution” Huffington Post; http://www.huffingtonpost.ca/david-suydam/artificial-intelligence-regulation_b_12217908.html

Tegmark, M.: Life 3.0—Being Human in the Age of Artificial Intelligence. Knopf, Penguin Random House (2017). ISBN: 9781101946596

Tomasello, M.; “Why We Cooperate”; Cambridge, MA: MIT Press, 2009.

Tononi, G. 2008. Consciousness as Integrated Information: A Provisional Manifesto. Biology Bulletin 215(3):216-242.

Tononi, G. 2004. An Information Integration Theory of Consciousness. BMC Neuroscience 5(42). doi:10.1186/1471-2202-5-42.

Tranel, D.; “Acquired sociopathy: the development of sociopathic behavior following focal brain damage”; Progress in Experimental Personality & Psychopathology Research, 1994: 285-311.

Tononi, G.; Albantakis, L.; Oizumi, M.; From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0; 8 May 2014; PLOS Computational Biology http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588

Trivers, R. 2011. The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. Basic Books

Trivers, R. 1991. Deceit and self-deception: The relationship between communication and consciousness.  In: Man and Beast Revisited, M. Robinson and L. Tiger (Eds.). Smithsonian Press

Trochim, W.; “Threats to Conclusion Validity;” OCT 2018; http://www.socialresearchmethods.net/kb/concthre.php

Varela, F.J.; Thompson, E.; and Rosch, E. 1991. The Embodied Mind: Cognitive Science and Human Experience. MIT Press.

Wallach, W.; Allen, C.; “Moral machines: teaching robots right from wrong”; New York: Oxford University Press, 2009.

Wallach, W.; Allen, C.; Franklin, S.; “Consciousness and Ethics: Artificially Conscious Moral Agents”; International Journal of Machine Consciousness 3(1), 2011: 177–192.

Walton, D.; “Argumentation Methods for Artificial Intelligence in Law”; Springer; ISBN-13: 9783642064326

Waser, M.; “Rational Universal Benevolence: Simpler, Safer, and Wiser Than’ Friendly AI’”; Artificial General Intelligence: 4th International Conference, Lecture Notes in Computer Science 6830. Mountain View, CA: Springer, 2011. 153-162.

–––; “Safety and Morality Require the Recognition of Self-Improving Machines As Moral/Justice Patients and Agents.”;  AISB/IACAP World Congress 2012: Symposium on The Machine Question: AI, Ethics and Moral Responsibility. Birmingham, 2012. 92-96.

–––; “Discovering the Foundations of a Universal System of Ethics as a Road to Safe Artificial Intelligence”; Technical Report FS-08-04: BICA. Menlo Park, CA: AAAI Press, 2008.

–––; “Does a “Lovely” Have a Slave Mentality?– OR – Why a Super-Intelligent God *WON’T* “Crush Us Like A Bug””; Presented March 28, 2010 at AGI 2010 in Lugano, Switzerland.  Video and PowerPoint available at http://wisdom.digital/wordpress/archives/2505.

–––; “Architectural Requirements & Implications of Consciousness, Self, and “Free Will””; Proceedings of the Second Annual Meeting of the BICA Society, BICA 2011, Frontiers in Artificial Intelligence and Applications 233. Arlington, VA: IOS Press, 2011. 438-433

–––; “Safely Crowd-Sourcing Critical Mass for a Self-Improving Human-Level Learner/“Seed AI”. Biologically Inspired Cognitive Architectures: Proceedings of the Third Annual Meeting of the BICA Society. Palermo: Springer, 2012. 345-350

–––; “Safe/Moral Autopoiesis & Consciousness”; International Journal of Machine Consciousness 5(1), 2013: 59-74.

–––; “Bootstrapping a Structured Self-improving & Safe Autopoietic Self”; 5th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2014, Procedia Computer Science 41. Boston: Elsevier, 2014. 134-139.

–––; 2015. Designing, Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (Including Humans). Procedia Computer Science 71: 106-111. http://dx.doi.org/10.1016%2Fj.procs.2015.12.213

–––; “A Collective Intelligence Research Platform for Cultivating Benevolent “Seed” Artificial Intelligences”; AAAI Symposia, Stanford University (Review Pending) Nov 2018

Wegner, D. and Wheatley, T. 1999. Apparent Mental Causation: Sources of the Experience of Will. Psychologist 54(7):480-492.

Wikimedia Foundation; “Self-awareness – Alex Wissner-Gross on Causal Entropy”; Accessed: 9/26/02015; https://en.wikipedia.org/wiki/Self-awareness

–––; “Cognition”; Accessed: 01/27/02016AD; https://en.wikipedia.org/wiki/Cognition

–––; “Cognitive Architecture”; Accessed: 01/27/02016AD; https://en.wikipedia.org/wiki/Cognitive_architecture

–––; “Chaos Theory”; Accessed: 01/27/02016AD; https://en.wikipedia.org/wiki/Chaos_theory

–––; “Emergence (System Theory and Emergence)”; Accessed: 01/27/02016AD; https://en.wikipedia.org/wiki/Emergence

–––; “Simple English”; Accessed Apr 2016; https://simple.wikipedia.org

–––; “Symbol Grounding Problem”; Accessed: 2016; https://en.wikipedia.org/wiki/Symbol_grounding_problem

–––; “Turing Machine”; 2017;  https://en.wikipedia.org/wiki/Turing_machine

–––; “Raven’s Progressive Matrices;” Oct 2018; https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices

–––; “Wechsler Adult Intelligence Scale;” Oct 2018; https://en.wikipedia.org/wiki/Wechsler_Adult_Intelligence_Scale

–––; “Intelligence Quotient”; Oct 2018; https://en.wikipedia.org/wiki/Intelligence_quotient

–––; “Moral Agency” 2017 – https://en.wikipedia.org/wiki/Moral_agency

Wilkinson, R.; Pickett, K.; “The Spirit Level: Why Greater Equality Makes Societies Stronger”; New York: Bloomsbury Press, 2011.

Wilson, J.; “The Moral Sense”; New York: Free Press, 1993.

Wilson, J.; “Persons, Post-persons and Thresholds”; Journal of Medical Ethics, doi: 10.1136/medethics-2011-100243

Winters, S., Cox, E.: Behavior Modification Techniques for the Special Educator. ISBN: 084225000X

Wissner-Gross, A.; “A new equation for intelligence”; November 2013. https://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelligence

Wolchover, N.; “Concerns of an Artificial Intelligence Pioneer”; Quanta Magazine, April 21, 2015.

Woolley, A.; “Collective Intelligence in Scientific Teams;” May 2018

Wright, R.; “Nonzero: The Logic of Human Destiny”; New York: Pantheon, 2000.

–––; “The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology”; New York: Pantheon, 1994.

Yampolskiy, R.; “Artificial Intelligence Safety and Security;” CRC Press, London/New York; 2019; ISBN: 978-0-8153-6982-0

–––.; “Detecting Qualia in Natural and Artificial Agents;” University of Louisville, 2018

–––.; “Artificial Superintelligence: A Futuristic Approach”;. CRC Press, Taylor & Francis Group (2016)

Yudkowsky, E.; “The Futility of Emergence”; Less Wrong Blog; 2016

–––; “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures”; 2001. https://intelligence.org/files/CFAI.pdf

–––; “Coherent Extrapolated Volition”; 2004. https://intelligence.org/files/CEV.pdf

–––; 2015. Complex value systems are required to realize valuable futures. Machine Intelligence Research Institute. https://intelligence.org/files/ComplexValues.pdf

