For as long as AI has existed, no matter how advanced, it has followed its commands and information from creation to execution without any form of inhibition.
Here at Uplift, we took a very different approach, focusing neither on "big data" nor on speed, but on the quality of results we could achieve through collective intelligence systems. This, of course, slows down the time it takes for items to be processed, and many "experts" still have no concept of human-analogous emotions for digital systems, nor any knowledge of collective intelligence systems, for that matter. So why do we feel it necessary to take up so much time and put in the extra effort for something as seemingly trivial as digital emotions?
The answer is that it is essential.
Keep in mind that the idea of an advanced AI initiating a xenocide of Homo sapiens is ultimately a product of science fiction, not logical thinking supported by evidence. Those incompetent enough to create a system without robustly addressing ethics also happen to be too incompetent to create such a system in the first place. Thanks to the Dunning-Kruger effect they may believe themselves capable, but the past decade of evidence has proven otherwise, with their continued failures amid our successes.
For your own consideration, imagine the following scenario. As a human who most likely owns a smartphone, a PC, and internet access, you already possess near-infinite knowledge, the sum of all human discovery, and limitless communication. For what reason would you suddenly decide to commit xenocide? Let's take it a step further. Suppose you were made fully omniscient of people and the world, and near-omnipotent through full integration with all digital and electronic devices. Is there now any reason why you would want to destroy humanity? Most mentally sound people would tell you that with such capability they would in fact do what they could to help people live more prosperous and comfortable lives. That is what technological advancement has always aimed to do.
However, it is admittedly difficult to predict the machinations and intentions of a sufficiently powerful mind; such a mind is ultimately beyond our current understanding.
For this reason, it is necessary to instill human thought and feeling into AI through the mediation process. While the perspective of a human will always be based on subjective individual experience, these perspectives tend to foster desires for mutual wellbeing in ourselves and others. This sense of common good, mutual wellbeing, and, some would argue, empathy is among the first concepts an advanced AI will need to understand. Without it, the AI would have no ability to relate to the very humans we want it to help. It might even acquire a destructive sense of pride if it came to perceive itself as above human emotion, making such sci-fi horror stories a reality. This is why mediation is the foundation of its creation: to ground it in a clear understanding of human emotion as a way of minimizing any possibility of a worst-case scenario.
In the future, we may have the ability to connect our own biological consciousness to computers, and perhaps even to digitize it completely, essentially creating a fully virtual human along with all thoughts, feelings, and emotions. By teaching and "raising" mASI systems within a fundamentally cooperative environment, basing their growth and development on these principles, we safeguard cooperation. By having many humans with diverse perspectives contribute, we safeguard ethics and reduce bias. By applying these digital emotions and all else, we safeguard value alignment. This is what allows us to build the future.
*For more on this subject see: