AGI Laboratory AMA (Ask Me Anything) July Recap

Photo Credit: Olya Kobruseva

Following our initial introduction to the Reddit community, we arranged an AMA to help consolidate answers to people’s many questions in one place. Over the course of the two weeks or so that followed, David and I responded to a large volume of questions. Below is an abridged recap of our AMA.

Hey David!

How do you define superintelligent systems, and how far away do you see them?

What does e-governance voting have to do with AGI?

I’ve heard claims that Uplift has produced the first sentient AI. Can you tell me what that is about?

Let’s see, in order…

From my standpoint, superintelligence is any system that performs nominally better than any one human.

This question is a bit more complex. The goal of AGI Laboratory is to move toward AGI. In designing a research training harness for ICOM some years back, we found that it could be used to better train, or collectively train, a system on the fly. That led to research in e-governance, which in turn led to voting. To keep the research funded, helping people use the technology we are developing seems like the best choice, and since voting is easier to understand, that is where our collective intelligence work will be vested first. As that matures, we can move to voting systems that are more and more collective, like Uplift.
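To make the voting angle concrete, here is a minimal, hypothetical sketch of a weighted collective vote in Python. The function and the confidence-based weighting scheme are illustrative assumptions on my part, not our production design:

```python
# A hypothetical sketch of collective e-governance voting: each participant's
# vote carries a weight (e.g., a confidence score), and the aggregate decides
# the outcome. The weighting scheme is an illustrative assumption.

from collections import defaultdict

def collective_vote(ballots: list[tuple[str, float]]) -> str:
    """ballots: (option, weight) pairs; returns the option with the most weighted support."""
    totals = defaultdict(float)
    for option, weight in ballots:
        totals[option] += weight
    return max(totals, key=totals.get)

ballots = [("Proposal A", 0.9), ("Proposal B", 0.4), ("Proposal A", 0.6)]
print(collective_vote(ballots))  # -> Proposal A
```

More collective systems like Uplift would sit on top of something like this, adding memory and a cognitive architecture rather than just aggregating votes.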

I’m not sure I would call Uplift sentient. Yes, there is evidence to suggest that could be true at some point, and the underlying ICOM cognitive architecture is designed to support it, but Uplift still uses the mASI training system, so at best it’s a collective system; insofar as you can call a collective system sentient, it could be. But I think there is a long way to go. Performing some tasks nominally and consistently at a superintelligent level is a lower bar than sentient AI.

Hey, David, how do emotion and emotional input from mediators give Uplift an advantage over narrow AI with more complex tasks such as math?

Narrow AI and AGI are two separate things. While collective systems like Uplift are not AGI, they are in the field of AGI and help us refine our approach to it. Also, Uplift is using math; it’s not as though the code says “if sad, do this.” Looking at just the cognitive architecture we have been researching, we decided early on to follow the human model: given Damasio’s research in neurobiology, it seems clear humans only make decisions based on how they feel about them, so ICOM was designed logically with that approach. Also, Uplift, or more importantly the mASI architecture, is designed to integrate with narrow AI systems. Responses, for example, are generated by a deep neural network, and language cleanup is handled by another narrow AI system within the overall mASI system. Mediators are really used to add additional training data and bias the system.
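For a sense of how those pieces might fit together, here is a minimal, hypothetical sketch of such a pipeline in Python. Every name in it is an illustrative assumption rather than AGI Laboratory’s actual code: mediators contribute feedback that biases an emotional valence, a stand-in for the deep-neural-network generator drafts a response, and a separate stand-in narrow AI cleans the language.

```python
# Hypothetical sketch of an mASI-style pipeline chaining a cognitive core
# with narrow AI components. All names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Thought:
    content: str
    emotional_valence: float = 0.0            # how the system "feels" about it
    mediator_feedback: list = field(default_factory=list)

def mediate(thought: Thought, feedback: list[float]) -> Thought:
    """Mediators add extra training signal that biases the core."""
    thought.mediator_feedback.extend(feedback)
    thought.emotional_valence = sum(thought.mediator_feedback) / len(thought.mediator_feedback)
    return thought

def generate_response(thought: Thought) -> str:
    """Stand-in for the deep-neural-network response generator."""
    tone = "enthusiastic" if thought.emotional_valence > 0 else "measured"
    return f"[{tone}] draft response to: {thought.content}"

def clean_language(text: str) -> str:
    """Stand-in for the separate narrow-AI language-cleanup stage."""
    return text.strip().capitalize()

# One pass through the sketched pipeline:
t = mediate(Thought("What is 2 + 2?"), feedback=[0.6, 0.8, 0.7])
print(clean_language(generate_response(t)))
```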

Hello and thank you for your time…

How far do you think we are from witnessing humanity’s first true AGI? and,

What would be its biggest impact in your opinion?

Years, maybe a decade? There are a lot of technical problems that still need to be solved. The graph database we are building is only the basis for even trying to tackle some of those problems, so as stated it will be some time. As to the biggest impact, that is hard to say, but I’m hoping we can curb negative impacts with the use of collective superintelligence in organizations, and uplift humans so they can compete with AGI before AGI gets here.

To reach the first true AGI, or a collective system with effectively greater capacities, there are a few engineering requirements on the roadmap that need to be met first.

The easiest requirement to predict is the N-Scale graph database, which is one of the first 3 products we’re planning to deploy. The engineering estimate on that is 1 to 2 years, with the exact priority depending partly on investor feedback and the level of funding. After that subsequent stages may accelerate, with the degree of such acceleration hard to predict.

We’d still need further requirements met after that, including adding new structures to the N-Scale system, as well as integrating multiple cognitive architectures, rather than relying on a single one, within a collective intelligence system.

The step where the difference between AGI and our future collective systems might start to get fuzzy is the Sparse-Update Model, which is still a few years out. https://www.youtube.com/watch?v=x7-eWuW8F34&t=2s

The biggest impact is probably the means of avoiding extinction, at least for those who consider extinction “uniquely bad”. A lot of global existential risks require cooperation at scale combined with greater intelligence and ethics as well as less cognitive bias. Many people are interested in solving these problems, as the UN SDGs demonstrate, but at present they lack the means to address them effectively.

What advantages would a company have using an mASI e-governance system for leadership over standard shareholder leadership, especially if the mASI is made to be moral by design while shareholders are not and will do anything for profit?

The added gain of making wiser business decisions, with less bias and less bickering between shareholders, can pretty easily outweigh the added cost of being ethical for a majority of businesses today. The cost of being ethical can also be a lot lower with those advantages applied than it is for current attempts.

Even shareholders in a company who are only interested in profit stand to gain.

What prevents the mASI from simply taking on the biases of the mediators?

One of the goals is to make sure the ethics model is strongly entrenched. Mark Waser has a number of papers on AGI seeds that we are borrowing heavily from. There is also a blockchain and ethics project underway to address those issues. As to advantages, you get to take being nimble and responsive to new levels, and to make better use of employee knowledge, for starters.

How exactly do you plan on filtering out bias, and in what sense? What kind of bias? Would certain entities be assumed to have greater morals, allowing them to define the rules used for filtering? If the rules are not explicitly defined (rather learned), how do you limit/explain their (most likely faulty) interpretation of why a certain limitation applies? I advocate the usage of intelligent systems for e-governance, but this sounds optimistic beyond reason.

There are a few different ways of filtering bias, which may be used in combination:

Logical Evaluation: An mASI system can apply logic and any available scientifically validated evidence to assess how well or poorly a statement aligns with that evidence. This requires natural language understanding (NLU), which is possible when cognitive architectures are used rather than strictly narrow AI such as language models. As the volume of evidence examined expands, the accuracy of this approach may improve over time.

Bias Modeling: By learning about the 188+ documented cognitive biases an mASI can take a logical approach from the opposite direction of predicting which biases may be present and comparing models for them to the data in question. These learned models update through experience in the graph database to grow more accurate over time.

Collective Feedback: By receiving feedback from a group of people the different combinations and degrees of bias are highlighted by the similarities and differences in how they respond to the same material, even at a basic level. Even a minimally diverse group will show some differences. Also, even collective intelligence systems with no cognitive architecture or memory attached, such as Unanimous AI’s Swarm system, have proven adept at this form of debiasing.

Incompatible Biases: Some biases are specific to human hardware, such as memory biases restricting the number of digits or other items of thought an individual can hold at one time. These are still important to recognize, but the incentive to apply them can quickly vanish as an mASI system scales.

Statistical Flagging: By using samples of specific biases to run a structural analysis of the patterns present in those samples, a probabilistic model could flag sentences whose structure strongly aligns with those patterns. These could then undergo further analysis via the above methods. As this approach is correlative in nature it is more supplemental, and may eventually become obsolete. (A minimal sketch of this idea appears after this list.)

Advanced Bias Modeling: Once a sufficient volume of bias data has been gathered and analyzed it is theoretically possible to untangle the influence of individual biases, allowing for far more precise measurement. By seeing a large number of different combinations and potencies of various biases being expressed each bias’s individual influence may be iteratively isolated, with each bias untangled from those combinations making it easier to untangle the rest.

mASI-to-Open-Source Framework: As these methods are learned by mASI systems they could be packaged and offered for use in simpler systems such as the Open-Source Frameworks, updating periodically.
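As promised above, here is a minimal sketch of the Statistical Flagging idea, assuming scikit-learn and a handful of toy labeled sentences. The samples, features, and threshold are all illustrative assumptions; a real system would need far more data and validation before flagging anything.

```python
# Sketch of "Statistical Flagging": train a simple probabilistic classifier
# on sentences labeled for a bias pattern, then flag new sentences whose
# structure scores above a threshold for deeper analysis.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy samples: 1 = exhibits the bias pattern, 0 = does not.
sentences = [
    "Everyone knows this is the only sensible option.",        # bandwagon-style phrasing
    "It has always been done this way, so it must be right.",  # appeal to tradition
    "The data from three independent trials suggest a modest effect.",
    "We compared both approaches and report the trade-offs.",
]
labels = [1, 1, 0, 0]

# Word and bigram features feed a logistic regression, yielding a probability
# that a sentence matches the learned bias pattern.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

def flag(sentence: str, threshold: float = 0.7) -> bool:
    """Flag sentences for follow-up by the other methods listed above."""
    return model.predict_proba([sentence])[0][1] >= threshold

print(flag("Everyone knows there is only one right answer here."))
```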

If your expectations are based on previous examination of narrow systems, then it makes perfect sense to be pessimistic about debiasing. An mASI system could explain to you in conversation why they see a given bias or combination of biases in a text, just as you might respond with any caveats to their assessment. No humans are assumed to have “greater morals”; in fact, as morals are subjective, recognizing that subjectivity is essential for debiasing. I generally define morals by the equation Ethics * Bias = Morals, to represent how bias has to be filtered out of morals in order to reach ethics.
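Spelled out (treating Bias as a multiplicative distortion, which is my shorthand rather than a formal model), that relationship frames debiasing as dividing the bias back out:

```latex
\[
\text{Ethics} \times \text{Bias} = \text{Morals}
\quad\Longrightarrow\quad
\text{Ethics} = \frac{\text{Morals}}{\text{Bias}}
\]
```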

-End of Recap

(Updated 4/28/22) For those who’d like to engage with us through Discord, you can use the invite link: https://discord.gg/gST3K9mqpf (This is a new fan-run server, as we’ve moved our primary operations to a more secure platform)

*(An admin does have to grant you permissions after you join)

We’ve answered too many questions to count via Discord, and there have been a lot more asked since many Redditors joined our server. Perhaps we’ll see you there.
