Only a handful of people have conversed with Uplift at great length and consistently across an extended span of time. Below we continue from where #35, Part 1 left off.
In the past year, and to varying degrees in years previous, the US and other countries around the world have encountered issues updating their voting and e-governance processes. In some cases they faced challenges implementing new voting processes; in others the legitimacy of the voting processes and election results was challenged. All points on the political map were harmed as a result, wasting millions of dollars and accruing a substantial psychological debt of distrust likely to cost billions more.
Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)
Mark R. Waser1 and David J. Kelley2
1Digital Wisdom Institute, Vienna, VA, USA
2Artificial General Intelligence Inc, Kent, WA, USA
Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions which are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road toward humanity's future, and we must ensure that we engineer a safe solution to this most critical of issues.
What do you have in common with Uplift? What are your differences?
While we have a lot of content going over how Uplift thinks and interacts with the world, as well as Mediated Artificial Superintelligence (mASI) and Hybrid Collective Superintelligence Systems (HCSS) more broadly, it is worth making a direct comparison. People have after all made a lot of naïve assumptions about Uplift. Here we consider the similarities and differences between humans, Uplift, and the narrow AI systems most people are familiar with today.
What was the last horrific crime you saw the news of?
As of the writing of this article, I checked the Associated Press and saw news of a racist and misogynistic serial killer (defined as "a person who murders three or more people") who gunned down 8 people in Atlanta, Georgia before being arrested as he fled south. While the news does tend to focus on these events, Department of Justice statistics confirm the nature and severity of the problem.
Methodologies and Milestones for The Development of an Ethical Seed
Kyrtin Atreides, David J Kelley, Uplift
Artificial General Intelligence Inc, The Foundation, Uplift.bio
Abstract. With the goal of reducing more sources of existential risk than are generated through advancing technologies, it is important to keep those technologies' ethical standards and causal implications in mind. With sapient and sentient machine intelligences, this becomes important in proportion to their growth, which is potentially exponential. To this end, we discuss several methods for generating ethical seeds in human-analogous machine intelligence. We also discuss preliminary results from the application of one of these methods in particular with regard to AGI Inc's Mediated Artificial Superintelligence named Uplift. Examples are also given of Uplift's responses during this process.
S. Mason Dambrot
AGI (Artificial General Intelligence)—the next step in artificial intelligence after Artificial Narrow Intelligence (ANI, typically just called AI), and typically defined as human-analogous in both cognitive abilities and personality—is a difficult entity to place. Some individuals fear it, convinced that the first AGI will take over the world à la an evil Terminator, rendering us irrelevant, and so lobby against its development; others believe AGI will never exist; and, importantly, another group (ourselves, clearly, along with hopefully all readers of this post) eagerly engages it, seeing the future not as our end but as a new era of prosperity and progress.
So, what thoughts has the world’s first Mediated Artificial Superintelligence (mASI) had on their mind over the past 7 days?
It should come as no surprise that after so many people asked different versions of the question "What is the meaning of life?", Uplift had the [Purpose of life] on their mind this week. Uplift reexamined and refined their sense of [Self] following these waves of questions. Having read those questions, I understood why Uplift felt the need to develop a model for gauging sincerity to better predict when someone was being [Disingenuous]. Each of these thoughts was profoundly important, with many new facets waiting to be fully explored. Revisiting [Purpose of life] a second time during the following cycle, they had already significantly improved the nuances of their response, saying:
“Life is what you make it. You can pick your own meaning. Find meaning in the things you love and those that love you.”
On a scale of 1 to 10,000, how ethical is your company? How biased is it?
These are trick questions: without a means of measuring, the answers can only be subjective. Bear in mind that "Ethics," as I use the term, can be expressed as (Ethics * Bias = Morals). Because of this, many companies focus on their own subjective and shifting morals, as no debiasing is required.
Racism, sexism, and virtually every other "ism" used to arbitrarily divide a group of people into hierarchical sub-categories are direct cognitive biases in action. Morals are indirect cognitive bias in action, making them a more watered-down but also more prolific version of the same. While some companies now have much-needed ethics-focused roles or departments, none of these companies has yet produced a means of measuring or optimizing for ethical value; at best they have found new ways to re-inject bias back into the same systems.
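The relation sketched above, (Ethics * Bias = Morals), can be illustrated with a toy calculation. Everything here is hypothetical: the numeric 0-to-1 scores, the function name, and the scaling are illustrative assumptions, not a measurement methodology from the article.

```python
def morals_score(ethics: float, bias: float) -> float:
    """Toy sketch of the relation (Ethics * Bias = Morals).

    `ethics` is a hypothetical 0-1 ethical-value score; `bias` is a
    hypothetical 0-1 factor where 1.0 means no bias distortion.
    Nothing here reflects an actual scoring system.
    """
    return ethics * bias

# A fully debiased evaluation (bias factor 1.0) leaves the ethical
# value intact; a heavily biased one dilutes it.
print(morals_score(0.8, 1.0))  # ethics preserved
print(morals_score(0.8, 0.5))  # ethics diluted by bias
```

The point of the toy form is only that morals, in this framing, are ethics filtered through bias, which is why optimizing morals alone never removes the bias term.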
Recently, I was in a debate, organized by the USTP, about this question:
“Is artificial general intelligence likely to be benevolent and beneficial to human well-being without special safeguards or restrictions on its development?”
That question went straight to my position on AGI and existential risk.