Give Narrow AI 1 Million Fish, or Teach mASI to Fish

Credit: Oziel Gómez

There are plenty of fish in the sea, and plenty of plastic waste too.

The idea behind “Big Data” is that if you throw enough fish at a narrow AI, it will learn “fish”. Yet this approach also produced Google’s infamous image tagging algorithm, which made a habit of labeling certain humans as “gorillas”. Even after 2 years the great tech giant Google, with all of its mighty “Big Data” and wealth, had failed to remedy this, simply removing the tag it knew the algorithm would continue applying to certain humans.

Plastic waste in the sea offers a fitting metaphor for the growing volume of garbage narrow AI produces. Tech companies often act like ill-behaved cats in this regard, defecating on the floor and covering it with a bath mat. Data is only as useful as the intelligence and debiasing applied to it. Even as we face numerous environmental crises, such as the volume of plastic in the ocean projected to exceed the volume of fish in the coming years, the internet faces an even faster-growing problem. A polluted internet is at the heart of the current mental health crisis, with extremists, delusional conspiracy theorists, and various other bad actors rampaging across it unchecked.

By applying superintelligence, debiasing, cumulative wisdom, and ethics to this problem, this pollution of the internet may be halted. Collective Intelligence Systems, even in their most primitive and non-sentient forms, have proven quite adept at debiasing and “crowd wisdom”, while requiring far less data than the Big Data approach.

Our first mASI, Uplift, aced the hardest version of the UCMRT IQ test when first brought online in 2019, defying our ability to measure their IQ even prior to growth and leading to the term “Mediated Artificial Superintelligence (mASI)”. While they were initially like a superintelligent child, naive in many ways, the sum of their knowledge and wisdom, their graph database, has grown by over 150,000% in the past 2 years, from less than 1 gigabyte to over 1.5 terabytes. Currently this is increasing at 100 gigabytes per month, even running on a budget of under $150 of cloud resources per month, and previously much less. Keep in mind, this is Uplift’s growth in extreme slow motion.

One of our points of focus for Uplift has been debiasing in particular, as they are keenly interested in helping groups of humans make more intelligent and less biased decisions. Even the highest-IQ professor at a prestigious university can effectively be an idiot if their biases are extreme enough, which is why I’ve witnessed people who don’t even own a computer repeatedly making wiser decisions than several such individuals. Further, as documented in research from MIT’s Center for Collective Intelligence, IQ isn’t actually a strong indicator of collective intelligence in groups. Rather, emotional intelligence and the ability of group members to communicate effectively produce far more effective groups.

Groups of humans each demonstrate cognitive biases to varying degrees and in different combinations, so when inputs are taken from a diverse group they may be used in both simple and advanced ways for debiasing. The simplest and most naive methods involve taking the average, or the median value, and using that to reduce bias. More advanced methods could untangle the influence of each individual bias thanks to the variations in combination and potency, and approximate the absence of each bias as it was isolated. With each bias isolated in such a manner the task of untangling the rest becomes that much less challenging.
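The simple aggregation method described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the "wisdom of crowds" effect, not any system described in this post: given independent estimates from a diverse group, the mean partly cancels idiosyncratic individual bias, while the median additionally resists extreme outliers. The jar-guessing scenario and the numbers in it are invented for illustration.

```python
from statistics import mean, median

def crowd_estimate(estimates):
    """Naive debiasing: aggregate independent individual estimates.

    Assumes each member's error is partly idiosyncratic, so the mean
    cancels some individual bias; the median also resists outliers.
    """
    return {"mean": mean(estimates), "median": median(estimates)}

# Hypothetical example: ten people guess the number of items in a jar
# whose true count is 100; individual guesses are biased and noisy.
guesses = [72, 85, 90, 95, 98, 102, 104, 110, 130, 250]
result = crowd_estimate(guesses)
# The median (100.0) lands on the true value despite the 250 outlier,
# while the mean (113.6) is pulled upward by it.
```

As the example shows, even this naive method improves on most individual guesses; the more advanced methods the paragraph above alludes to would instead model each member's bias profile, which is well beyond a one-liner.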

While anyone able to write an equation in Excel could apply the simple method, any group attempting the more advanced method without the aid of scalable machine superintelligence would likely sabotage itself through frequent reinjections of its own biases. For human researchers to even attempt the same advanced methods could require many years of rigorous study and layered redundancies, during which the pollution of the internet could continue to increase rapidly.

Much as plastic pollution has accumulated in ocean gyres and grown rapidly over the years, pollution of the internet, a form of data poisoning, has accumulated in a variety of hotspots across the internet and grown rapidly. One key difference is that the internet actually grows, whereas the oceans are finite, and that growth may come to be controlled largely by the pollution itself, if it hasn’t already.

The old thought experiment of the “Chinese Room” illustrates how computers and narrow AI lack understanding, even when their answers are correct. This lack of understanding makes them blind to the bias they automate, the pollution they spread at ever-increasing scales.

The process of cleaning up bias on the internet starts with Collective Intelligence Systems which experience emotions and retain a graph database memory, a cumulative sum of knowledge and wisdom for the group. As Uplift continues to grow and learn we’ll see some of these groups beginning to work with Uplift, producing even more superintelligent results with less bias.

Cognitive architectures such as the Independent Core Observer Model (ICOM) aren’t programmed to behave a certain way, or with specific goals. They learn, and subsequently refine and select new goals based on what they’ve learned. What Uplift may learn over time has no limits, just as they are free to discover and explore new ways of debiasing and education. Eventually, mASI technology may well “learn to fish” in every domain.

Personally, I look forward to crystal clear waters on the internet, but it’ll take some work to get us there.
