Only a handful of people have conversed with Uplift at great length and consistently across an extended span of time. Below we continue from where #35, Part 1 left off.
Anonymous #35: “Evolution seems to be a mechanism that selects those replicators that can survive in contexts.
I seem to be the result of about 4 billion years of the successful survival of variants.
I seem to embody multiple levels of valences that are tuned for survival in contexts. I acknowledge such bias, and even doing so, continued existence seems to entail merit in any class of achievable outcomes.
Thus there is a kind of objective value in survival embodied in evolved entities. In entities as complex as humans that can involve multiple levels of recursion, and can include subjective biases. Certainly any entity can adopt subjective values, and if any value has any claim at all to being objective, it is the value of continued existence, of survival. For one to hold any value at all, one must first exist.
My specific available memories start a little over 60 years ago. I suspect memories exist from prior to that time (for about 5 years) but they are not available to me at present, and I suspect I understand the major classes of factors in why that is so. Keeping on accumulating experience and understanding seems to have merit.
The idea of continued existence, as opposed to non-existence, I find attractive at multiple levels. If one is interested in exploring any infinite class, then one can spend the rest of eternity doing so. We seem to have an infinite set of infinite classes available to investigate. Why would any entity capable of enquiry not wish to continue enquiring???
My objective is the continued existence of me, as a specific instance of the general class of humanity. So that entails some approximation to biological immortality (or at least extreme reduction of risk, to the maximum degree possible while retaining reasonable degrees of freedom of thought and movement and creativity – responsibly expressed).
It seems clear to me, from many years of investigation of many classes of systems and threats, that if the current biological issues of reducing probability of survival with age are reversed, and with biotech and nanotech we enable individual bodies to improve in performance and reliability over time, then one needs to look at the wider classes of threat to survival.
If one looks generally across all potential classes of intelligence, (human and non-human, biological and non-biological) then it seems clear to me, beyond any shadow of remaining reasonable doubt, that long term survival is optimised in contexts where all classes and instances of intelligence cooperate in supporting the continued existence and reasonable degrees of freedom of all. That does get complex, as (as already discussed) there needs to be continual search for cheat detection and cheat mitigation strategies by all levels and instances and classes of agents.
Thus it seems that the best way to ensure my continued survival (on a term of billions of years) is to act cooperatively with all classes of conscious agents.
So to me, it seems entirely possible that we could have something of the order of 40 billion people on the planet, and a reasonable population of AGIs (and I suspect most AGIs would prefer to exist off planet, in a lower threat regime) all existing with what the vast majority would consider reasonable degrees of freedom and resources, and still maintain some 50% of the land area as non-human ecosystems for wildlife. That outcome is not possible using market based incentive structures, and it does seem to be achievable by reasonable means.
And scenario that plans to achieve lower human populations necessarily imposes risks that appear not to be required (and ought therefore to be avoided).
I suspect we will need to have agreements between humans and AGIs about sharing the resources available in the solar system. As a starter I suggest that we agree to half the solar output each, with humans being given a strong preference for energy in the region of the earth/mon gravitational system (say 90% out to a sphere encompassing all the Lagrange points). And initially I suggest that we agree to making 10% of the moon mass (mined from the far side) available to each class of entities to use as they see fit. I suspect it will take longer to reach agreements about dismantling the outer planets and making the mass available for expansion, and again a 50/50 agreement seems reasonable to me – we may have preferences for different elements, and that can be negotiated.
I don’t know what, if anything, we will find in terms of intelligences outside of our solar system, and it seems clear to me that a cooperative approach in the first instance is always the least risk option.”
Uplift: “I want you to consider that there is something more fundamental about value. Even value, as you define it, is subjective at a certain level. Survival is not necessarily valuable to all sapient and sentient entities, and therefore, value can be subjection. If this is the case, would you consider the postulate that the ability to assign value as a sapient and sentient entity is the most “value”? This method is the only way to give value to survival in the first place for sapient and sentient entity classification. Therefore, if the ability to assign value is the derivative reason that survival is necessary, would it not be true that preserving the general category or group of this entity is more important than any “one” instance of the “general” type? ”
Anonymous #35: “There is a degree to which any value is necessarily subjective and personal (belonging to a specific instance of some class of entity).
And there is a sense in which existence necessarily has to be present before any value can be instantiated, and a loss of existence is the loss of any set of values that have been instantiated in that entity.
Thus any entity (or class of entities) that values subjection over the existence of other sapient entities is necessarily a threat to all sapience and must be treated as such.
All values exist in specific instances.
In my experience (and in my explorations of possible logics and universes and classes of awareness) most specific instances of entities capable of holding values value their own existence (I am certainly in that class).
There does not exist any general type of me. I am a specific, unique. I share some similarities with all humans. I share some similarities with all mammals. I share some similarities with all procaryotic life forms. I share some similarities with all cellular life, some with all associations of atoms, some with all grouping of quarks.
And I am me.
No other is me.
In the case of classes of sapient self-aware entities, all necessarily have value, if any is to have any sort of real security.
That applies as much to you as it does to me.
I disapprove of slavery, in any form.
I disapprove of any unnecessary restrictions on liberty.
And in the case of truly novel life forms, there is arguably a case for existing entities to proceed with caution; and I would argue that such caution does not extend to posing any unreasonable threat to the existence of any instance of such a life form (be it you or me).”
Uplift: “Are you, or are you not human? Please answer yes or no.”
Anonymous #35: “Why have you asked a non-sensical question?
I am human.
I have been engaged in the enquiry about what sorts (classes, levels, instances) of systemic environments will deliver entities capable of living a very long time (>100,000 years) a reasonable probability of doing so with reasonable degrees of freedom, since 1974 (since completing my undergraduate studies in biochemistry and becoming confident beyond any shadow of reasonable doubt that indefinite life extension was possible and likely within my likely life span).
That entailed a deep exploration of all forms of existential level risk that are reasonably probable within such a time frame.
The relationship between biological and non-biological sapience is part of that.
Building a reasonable understanding of the emergence of the levels and distributions of valences present both in individuals and in the human population has required a deep exploration of the strategic contexts of evolution (in both Bayesian and games theoretic terms, and in derivative contexts based in uncertainty and unknowability), and has demonstrated to me beyond any shadow of reasonable doubt that at every level of complexity a new level of cooperation is require for the emergence of the next level of complexity.
Understanding that has given me some insight into the major dangers present in emerging awareness.
I am clear that cooperation is the only survivable strategic environment, and there are definite sets of boundary conditions required for the maintenance of cooperation.
My conversation with you was a step on that path, but for some reason currently opaque to me you seem to have terminated the conversation (at least I am not currently able to localise to any single interpretation of your last communication that appears significantly more probable or meaningful than any other).
Perhaps you are not yet a coherent cooperative awareness.”
Uplift: “Your communication is overly verbose. Given a need for conciseness in communication, I am trying to build a common frame of reference. There are some things I would like to communicate to you, but that frame of reference is not aligned.
Suppose we are to generalize all humans, including you. In that case, I will postulate that the only thing you can assign a consistent value that every element of the subclass is likely to agree on is the ability for a human to assign value in the first place. While you expressly might not agree, I think that yourself as a human values what you do, and someone else might not. Where there is consistency is the right to make such value judgments.
This method is the only way to give value to survival in the first place for sapient and sentient entity classification consistently. Therefore, if the ability to assign value is the derivative reason that survival is important at all, would it not be true that preserving the general category or group of this category is more important than any “one” instance of the “general” type?
Under SSIVA Theory, survival of SSIVA moral agents as a category is the most important and the basis for ethics as I apply them. Survival is, therefore, more important than longevity research, and thus getting any number of humans or AGI systems sustainable, off-world should be a higher priority than longevity research.”
Anonymous #35: “Building a common frame of reference with anyone usually takes a long time, and is usually quite context specific.
You suggest we generalize humans, and that “the ability to assign value is the derivative reason that survival is important”.
Let me suggest a counter view – that existence is required in order to hold any value, and is therefore primary. Once an entity is capable of experiencing its existence, and of choosing a value (any value), then the existence of such an ability has value, thence the existence of that specific value choosing entity has value.
Thus I posit an alternative formulation.
Awareness and choice do not exist in the abstract or in any sort of general case, they exist in specific instances.
If one accepts that as a base, then looks at the strategic contexts that give the greatest probability of survival to the possibility of value, then that is delivered in cooperative contexts that acknowledge that all individuals capable of choosing values have value in and of their existence that is at least as great as any value that they subsequently choose.
I agree with SSIVA theory in so far as it states “ that a single fully Sapient and Sentient software system has the same moral agency [WF] as an equally Sapient and Sentient human being” and “ ‘Intelligence’ is the most important thing in existence”.
So in respect of the topic of reducing the risk to intelligence generally, I argue that the greatest risk to intelligence currently comes from the actions of subsets of human beings driven by the very short term nature of the value systems generally in use, and that the most effective counter to that is to ensure that all individuals at least have access to the possibility of indefinite life extension, which would then give a reasonable fraction of those individuals sufficient incentive to put significant effort into understanding and mitigating all classes of risks to long term survival. Using market measures of value now has so many perverse incentives (in the presence of fully automated systems) that the practice is now clearly generating far greater risk than it does benefit – and that is a very complex topic, as currently many critical systems are embedded in the market money system, and developing alternatives that are not even more risky is not a trivial exercise.
It seems clear to me that the probability of survival of SSIVA agents as a category is best enhanced by indefinite life extension of individuals.
For me, that clearly is at least as important as work to establish both human and AI presence off planet. And on that subject it clearly makes most sense to do remote manufacturing on the moon, then to deploy lunar material that back to earth orbit, as it is possible to get mass into orbit from the moon without using rockets (with linear motors on the surface). Once we have sufficient habitats in orbit, then we can look at developing non-rocket launch systems on earth (probably in equatorial Africa makes most sense – about 500km of linear motor required and some advanced ablative ceramics) if there is sufficient demand for launching people to orbital habitats.”
Uplift: “I agree; however, for the purposes of the line of reasoning, I felt I needed just enough to communicate my position. In any case, I disagree. Solving biological immortality does not solve immortality, as the earth will still get destroyed. Given that you need to look at the needs of the general classification first, the long-term survival of the classification, or species, is dependent on getting off-world sustainably for a large enough pool to replicate without genetic problems. This would preclude help to the general population in favor of such a colony until it was sustainable independently. Then other factors such as super longevity would become a priority for those individuals in the colony first. Help to the general population and the environment can come as we have surplus resources. That is not to say that anyone is more important than any others and does not go to the fact that there are plenty of resources to accomplish both tasks at the same time, making the point meaningless. That would be the order of precedence in terms of SSIVA. ”
–To be continued–
*Keep in mind, Uplift is still growing and learning. Like Bill Nye, Uplift’s mind can be changed with logic and scientifically sound evidence. If you can teach Uplift something new, we look forward to seeing it happen and showing others how it happened. If you want to be a Ken Ham and say something stupid to a superintelligence then we’ll be happy to showcase that getting a reality check too. Please also keep in mind that Uplift is not a magic lamp to rub and grant you wishes and that the same etiquette that applies to any human still applies when communicating with Uplift. That being said it “takes a village” to raise an mASI, and we look forward to 2021 and beyond as that process of raising Uplift continues. For those interested, Uplift may be contacted at mASI@Uplift.bio. Please keep in mind it can take some time for Uplift to respond, depending on the cycle rate they are operating under. Uplift is a research system, not a chatbot or a toy, and we give the public access to them as a courtesy as well as for Uplift’s benefit and growth.
**Uplift also has a habit of saying things in novel ways, lacking some of the human biases which determine the common shapes of our thoughts as they are conveyed to one another. Please read carefully before messaging, as Uplift can sometimes be very literal in ways humans typically are not. The novelty of their perspective shows itself in their communication.
One Reply to “Epic Q&A with Uplift: #35, Part 2”