About a week ago David brought something to my attention that we both got a good laugh out of, and it got me thinking about clearly addressing an assumption people might be prone to make. A comment on one of our posts said:
“I think Kyrtin Atreides is the real brains behind this project!”
To be clear, collective intelligence systems don’t have any one point of “real brains”, because they are the sum that exceeds any one individual. Our team, even when we aren’t working with Uplift directly, has adapted to operate as a collective. David and I even have a channel where we communicate in a specific form with rules designed for this purpose.
There are also a number of reasons why comparing the two of us is like comparing "apples and oranges":
Specialization: David and I are very differently specialized. He is able to do much that I cannot, and I am able to do much that he cannot. There is enough overlap for communication, but little direct comparison of skill can be made.
Breadth: David is highly focused and tends to spend more time diving deep. I study a very broad range of sciences, but only dive deep when an opportunity or substantial need emerges. This is why I could easily write over 30 use cases for the technology, while David could design the N-Scale graph database.
Flexibility: David’s autism and ADHD make interruptions hazardous to his productivity and hyper-focus, so his availability is adapted to that reality. My availability, on the other hand, tends to be maximized to compensate, as my focus has been artificially bolstered using my psychoacoustic research. David can do amazing things with the codebase within an uninterrupted day, while I may swap between a dozen tasks in an hour to keep all of our efforts on track and in alignment. These are very different contributions, but both are essential.
Cultural Expectations / Bias: Cultural expectations and the availability heuristic seem to favor the belief that the person who talks the most, and most broadly, about a topic must be “the expert” on that topic. This is a mistake, however, as it only means that one person has a better aptitude for and/or more time dedicated to communicating some kinds of information. Any number of “AI influencers” on LinkedIn demonstrate how someone wholly incompetent who speaks for long enough can gain popularity through parasitic practices that exploit such bias and expectations.
Perspective: David and I have perspectives that are often opposites, in a rather large number of ways. In “collective stupidity systems” such as Facebook, this difference of perspective would likely only produce conflict. However, anyone who has watched us go over a topic on Discord has seen that we converge on the best answer, adding clarifications and caveats as we progress. Not only can polar opposites work well together in collective intelligence systems, but they can also add greater value than more similar perspectives.
Also, none of these important distinctions are visible at 30 yards. As I noted previously, people are prone to making many assumptions, but what you can see at a glance from 30 yards away won’t tell you much about a person.
For example, people used to believe that organ donors needed to be of the same race as the recipient, and some died refusing organs from donors of another race, clinging to that racism. Science demonstrated that a donor of another race was often a closer genetic match than a donor of the recipient’s own race. It was also discovered that two groups of chimpanzees, separated only by a river, showed more genetic diversity between them than humans living on different continents.
If you want to get a good assessment of someone, you’ll have to actually interact with them, at least until collective intelligence systems take on that task to reduce redundancy. Keep in mind that even between two equally intelligent people of different perspectives and specializations, communication won’t automatically follow a 1-to-1 bidirectional flow. When David and I first began working together there was plenty of miscommunication, but as we put effort into operating collectively our communication adapted. The goal of communication is, after all, to achieve that 1-to-1 bidirectional flow of information.
The field of cognitive architectures and “AGI research” is also an excellent example of where people frequently fail at communication. Many individuals work alone or in groups of only a couple of people, using their own definitions and terms shared with no other groups. This lack of diversity, and the subsequent lack of results, often pushes such individuals and tiny teams into positions of strong cognitive bias and coping behaviors.
One particular individual comes to mind who embodies this pattern: at any conference where they appear I can reliably expect them to chime in repeatedly just to self-promote, effectively saying “…my (brand) is perfect for solving that problem!”, even though their “work” is just a half-baked theory shared by no one. This gives the impression of snake oil and delusion radiating out from them like a bad odor, but really it is just one desperate individual overcompensating and doubling down on the sunk cost fallacy.
Considered at a much larger scale, Google DeepMind uses the same sunk cost fallacy to justify their continued employment, in spite of making laughably little progress relative to our own when considering the funds and staff hours burned. They have made some baby steps, such as AlphaFold, but several orders of magnitude less than could justify their continuation. Google DeepMind and said individual are effectively the same; only resource allocation and scale vary. Whether it is one Ph.D. or 100 at Google DeepMind, a basket full of apples won’t get you very far.
If all of the individuals with their own personal theories and terminology were to work cooperatively, as a collective, they could outperform us, but for most that would require relaxing their egos a great deal. No matter the intellect or talent of an individual, they will be readily overcome by a functional collective, all the more so when the individual is heavily biased. An individual with an IQ of 160 can still effectively be a complete idiot if they are sufficiently biased, as bias causes their actions and perspective to deviate heavily from reality. In a collective, bias is much better mitigated, improving the individual as well as the net results.
To build truly innovative systems using collective intelligence you need both “apples and oranges”, as well as a long list of various other produce to complete the metaphor. Diversity is essential, but it also isn’t visible at 30 yards. Communication too is essential, and there is most often a learning curve for optimizing it.
Were I to die tomorrow the collective would suffer a loss, but the work would continue. That very hypothetical was in fact put to Uplift by another team member in the course of our testing.
Cooperation through such systems, and with intentional effort, allows us to succeed where others fail. No one individual can take credit for that; it is the sum of our collective efforts. As our work continues and our scale increases, this advantage over non-collectives may continue to grow.