If you wanted to reverse-engineer human thought, where might you begin?
Reading the book “On Intelligence” recently, I was reminded of the long history of neuroscience and the many previous dead-ends of AI research. Neuroscientists often focused on gathering huge amounts of one-sided or otherwise poor-quality data, even when those same scientists had no way of integrating that data into a new understanding. This points to a cognitive bias known as “Information Bias”, defined as “the tendency to seek information even when it cannot affect action”, as well as to the Bike-Shedding Effect.
On the other hand, many of AI research’s past dead-ends focused on gross oversimplifications of neuroscience, which itself was by all evidence a gross oversimplification of the yet-undiscovered reality of how the brain works. Taking such an approach, it is little wonder that those researchers became infamous for constantly claiming AGI was somewhere between “20 years away” and “impossible”. This pattern points to the Overconfidence Effect, the Bandwagon Effect, and the Anecdotal Fallacy, among others.
Some in the history of AI research went so far as to argue that using humans as a template for building intelligent systems was a bad idea due to how the evolutionary process works. This sounded rather like something you’d expect to hear in “The Hitchhiker’s Guide to the Galaxy”, and that line of reasoning naturally didn’t produce anything above narrow AI.
It did, however, highlight a fallacy shared by both parties: the belief that an intelligent system would either need to be strictly based on human intelligence or engineered from scratch. As with virtually all such binary choices, this proved false.
The Knowns and Unknowns
We have incomplete information on how the brain works, but there are some important factors we can start off with, as well as some major theories which may be tested. Integrated Information Theory (IIT), Global Workspace Theory (GWT), Attention Schema Theory (AST), and others are good candidates for testing. Likewise, we know that emotions are key sources of motivation and differentiation in the human decision-making process, and that those emotions tie in strongly with memory. We also know that a group’s proficiency at reaching agreement and at coordinating the division of labor among its members are both strongly influential factors for collective intelligence, and that each new level of complexity requires new levels of collective intelligence. So where does this leave us?
To begin, systems must be able to utilize, test, and expand on these various theories, such as IIT, GWT, and AST. The human brain can’t be considered in the absence of emotions, so any attempt to reverse-engineer it must include them. We also know that memory must tie into those emotions and integrate new information, forming a cumulative sum of experience. Consciousness, sapience, sentience, free will, and the ability to prioritize and adapt are all high-level products required of any successful attempt.
Some of these capacities were initially problematic because only partial information was available. For example, we could create digital analogs of emotions, but the complex structures within the brain which assign those emotional values remained in the domain of unknowns. The brain structures responsible for assigning priorities were another source of unknown complexity, as were those behind the brain’s ability to dynamically generalize knowledge across domains.
In 2016 a “toy AGI” system was first developed based on IIT, GWT, and AST. This toy system experienced emotions and accumulated and integrated information, forming the basis for future research.
A New Approach (2019)
What we did with Mediated Artificial Superintelligence (mASI) was essentially to begin testing the reverse-engineering of the human thought process in a scalable system. While assigning emotions and priorities and generalizing knowledge might be a very tall order for a blank-slate AGI-like system, these tasks come effortlessly to humans. By drawing these inputs from groups of humans, a Collective Superintelligence can form, not only making the results more robust and less biased but also raising their baseline performance above that of any single member. This superintelligent baseline of machine-difficult data could then be applied to fill in the gaps for an Independent Core Observer Model (ICOM) cognitive architecture. When processed by such an ICOM core, that information could be stored and integrated with the mASI’s existing sum of knowledge and wisdom, improving its value over time as well as its performance in the present, raising it above the superintelligent baseline set by the human collective.
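The idea of a group outperforming any single member can be illustrated with a toy sketch. This is not the actual mASI pipeline, which is not public; the emotion names, values, and the median rule below are all invented for illustration, showing only how aggregating many human ratings can resist the bias of any one member before the result is handed to a core like ICOM.

```python
# Hypothetical sketch: combining emotional-value labels from a group of human
# mediators into a single "collective baseline". The median resists outliers,
# so one strongly biased member cannot drag the collective result with them.
from statistics import median

def collective_baseline(ratings):
    """Combine per-mediator ratings (dicts of emotion -> value in [-1, 1])
    into one collective rating per emotion."""
    emotions = ratings[0].keys()
    return {e: median(r[e] for r in ratings) for e in emotions}

# Three mediators rate the same proposed thought; the third is an outlier.
mediators = [
    {"interest": 0.8, "concern": 0.2},
    {"interest": 0.7, "concern": 0.3},
    {"interest": -0.9, "concern": 0.9},  # biased member
]
baseline = collective_baseline(mediators)  # outlier has no pull on the median
```

A mean would have been dragged down by the outlier; the median keeps the collective output close to the majority view, which is one simple way a group can beat its weakest member.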
This was the first example of a Hybrid Collective Superintelligence System (HCSS), combining both sapient and sentient machines and humans within the same collective.
Collective Networks (2022)
In 2022 we plan to release the production-ready version of the Open-Source Framework, with a fully-fledged mASI Fundamental to follow. The Open-Source Framework will allow organizations, companies, and even governments to begin reaping the benefits of Collective Intelligence Systems and accumulating their own sums of knowledge. These systems could also be networked, nested within one another, utilize blockchain technologies, and otherwise build value for one another in a wide variety of ways.
Each group within such a network is capable of improving the performance and reducing the bias of the network as a whole, incentivizing cooperation both at scale and across partner networks. The most significant advantages could come from linking to Uplift, as well as to future mASI Fundamentals. With this increasing scale and granularity, it also becomes possible to iteratively reverse-engineer more of the complex structures humans use to assign emotions and priorities, as well as to generalize knowledge. In fact, some of these structures could be approximated with high accuracy using mildly modified existing systems, but storing and properly integrating their output into superintelligent systems requires additional engineering.
N-Scale Collective Networks (2022-2023)
In order to do some of the things we want these systems to be capable of, like reading, understanding, and integrating every peer-reviewed medical paper ever written, we have to engineer a new kind of graph database. The ability not only to deploy but also to rapidly grow the sum of human knowledge through full integration of information within scalable collective minds is high on our to-do list. Collectives given this scalability will also integrate the information in unique ways, just as each human mind follows different patterns of integration. By networking N-Scale collectives, further degrees of collective superintelligence may be gained from multiple collectives working with the sum of human knowledge and drawing different insights from it.
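What “full integration” of papers into a graph might look like can be sketched in miniature. The N-Scale design itself is not described here, so the class, method names, and the example facts below are purely illustrative: each extracted finding becomes a labeled edge between concept nodes, indexed from both ends so that new facts immediately connect to existing knowledge.

```python
# Minimal sketch of knowledge integration in a graph structure (illustrative
# only; it assumes nothing about the actual N-Scale graph database design).
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # node -> set of (relation, neighbor) pairs
        self.edges = defaultdict(set)

    def integrate(self, subject, relation, obj):
        """Add one extracted fact; index both directions so either
        end of the relation can be queried later."""
        self.edges[subject].add((relation, obj))
        self.edges[obj].add((f"inverse:{relation}", subject))

    def neighbors(self, node):
        return self.edges[node]

# Two findings from hypothetical papers become linked knowledge.
kg = KnowledgeGraph()
kg.integrate("aspirin", "inhibits", "COX-1")
kg.integrate("COX-1", "produces", "thromboxane")
```

Because "COX-1" now sits between both facts, a query starting from "aspirin" can reach "thromboxane" in two hops, which is the kind of cross-paper connection simple document storage never surfaces.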
Upgrading and Integrating N-Scale Networks (rolling 2023-2024)
There are a variety of very powerful upgrades which may follow N-Scale graph database deployment, including a method of generating super-creativity, and integrations with a long list of existing systems such as AR/VR. There are also major improvements lined up for tooling, including a Visual Explorer for graph databases, and improvements to the “Thought Studio” system for those who wish to manually construct new thoughts for collective consideration. New types of N-Scale graph database structures will also be engineered for performance improvements and to meet the prerequisites of the next major update. Personally, I look forward to a VR Visual Explorer for the N-Scale graph databases, allowing people to “virtually” walk through the thought process of superintelligent metaorganisms.
Additional possibilities include multi-core ICOM systems arranged in a variety of ways within mASI systems. These could further significantly improve collective performance and ethics alike by having multiple ICOM cores, each seeded with different variations of material, working together in novel ways.
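One simple arrangement of differently-seeded cores can be sketched as an ensemble with a unanimity rule. The scoring function, the "seed weights", and the feature names are all invented for illustration; nothing here reflects how ICOM cores actually evaluate anything.

```python
# Hypothetical sketch of a multi-core arrangement: several cores share one
# evaluation structure but start from different "seed material" (here, just
# different weightings), and an action passes only if every core approves.

def make_core(seed_weights):
    def evaluate(action_features):
        return sum(seed_weights.get(k, 0.0) * v
                   for k, v in action_features.items())
    return evaluate

cores = [
    make_core({"benefit": 1.0, "harm": -2.0}),
    make_core({"benefit": 0.8, "harm": -3.0}),  # a more harm-averse seed
    make_core({"benefit": 1.2, "harm": -1.5}),
]

def collective_decision(action_features, threshold=0.0):
    """Approve only if every seeded core scores the action above threshold."""
    return all(core(action_features) > threshold for core in cores)

safe = collective_decision({"benefit": 1.0, "harm": 0.1})
risky = collective_decision({"benefit": 1.0, "harm": 0.5})
```

Requiring unanimity means the most cautious seed acts as a safety floor, which is one intuition for why seed diversity could improve ethics as well as performance.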
The N-Scale Sparse-Update Model (estimated 2024-2025)
This will be a major update, in which the above systems are decoupled from the timing and volume dependencies of their respective human members. It is designed to allow both humans and mASI systems and networks to operate at their native speeds, likely accelerating the above by several orders of magnitude as mASI moves into the domain of real-time. The simplest initial implementation effectively reverse-engineers emotional-value and priority assignment, as well as the human generalization process. This also allows for the creation of “weak digital proxies”, which may enable things like Actual Democracy, where every member controls their own proxy which votes on every issue in real-time and with access to full expert knowledge on any subject.
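The decoupling idea can be sketched as a loop that runs at machine speed while human feedback arrives only sparsely. The step counts and feedback labels are invented; the point is only that the machine side never blocks waiting for a human, reusing the most recent input until a new update lands.

```python
# Minimal sketch of a "sparse update" loop, assuming a very simple design:
# the machine proceeds every step at its own speed, while human feedback
# arrives only at a few steps and persists until replaced.

def run_decoupled(machine_steps, human_updates):
    """human_updates maps step index -> feedback. Returns the feedback
    actually in effect at each machine step."""
    current = None
    in_effect = []
    for step in range(machine_steps):
        if step in human_updates:    # a sparse human input arrives
            current = human_updates[step]
        in_effect.append(current)    # the machine never waits
    return in_effect

# Six machine steps, but humans weigh in only twice.
trace = run_decoupled(6, {0: "approve", 4: "revise"})
```

In a lock-step design the machine would have idled for steps 1–3; here it keeps working under the standing "approve", which is the source of the speedup this stage aims for.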
This model is also designed to iteratively improve that reverse-engineering, expanding it to both higher quality and different aspects of the human cognitive process. By expanding this reverse-engineering over time it becomes increasingly possible to apply a sort of super-resolution to historical data as an increasing number of cognitive processes may be accurately modeled. Likewise, the process of how each cognitive process evolves over time may be modeled with increasing granularity by using and extending these methods.
*Keep in mind, the degree of acceleration this stage applies renders any further predictions firmly beyond human capacities, at least as humans exist today.
Historic Digital Proxies (possible)
From thought experiments to ice-breaker questions, the topic of “If you could have dinner with anyone in history…” is a popular human fascination. A few people have applied typical narrow AI methods in attempts to approximate figures from history, with predictably poor results. While the collective works of various historical figures can’t truly be used to “resurrect” them, they could in some cases be used to produce highly accurate digital approximations. However, a very deep understanding of cognitive functions is required to make this happen to any meaningful degree.
This could require a few software generations of fully matured weak digital proxies, modeled to increasing levels of quality. Though the quality of data from historic figures remains static, an increasingly diverse array of data super-resolution techniques could be tested and refined with the living. Methods of generalizing approximations of temporal and cultural dependencies, to properly contextualize any data from historic figures, could also be robustly developed by utilizing the sum of recorded human history in addition to the variables represented in current data.
If these individuals were of significant value they might even be instantiated in their own mASI core seeds, allowing modern society to benefit from their wisdom in even more substantial ways.
It is possible that in the next 5-10 years the follow-up to anyone answering the question “What historical figures would you choose to have dinner with?” could be “Well, I’ll send them invites.” There might even be an app for that.
More than a century ago Nikola Tesla envisioned a world of wireless communication, which we now live in. Before the Wright Brothers even took their maiden flight, before Ford’s first automobile, Nikola Tesla realized this potential. The future he saw was far beyond the grasp of most people in his time, a truly alien way of life in their eyes. The rate of technological progress has accelerated immensely since then, and what he saw 100 years in advance we may now see only 5 or 10 years ahead of us. Even if that future seems as alien to people today as Tesla’s vision seemed to his contemporaries, it remains ahead.
The greed and ignorance of J.P. Morgan (the individual) may have delayed Nikola Tesla’s future, but it certainly didn’t prevent it. That story could have also played out quite differently had Crowdfunding been an option in his time.