In reading the book “On Intelligence” I was reminded of the untapped potential of the human brain, and how that potential might be integrated into cognitive architectures. The human brain is exceedingly good at dealing with the senses we have, often simplified as “5 senses”, even though these can actually be broken down into many more distinct senses. However, human senses cover only a small fraction of what is possible, so humans can directly observe patterns within only that small fraction.
While we’re still in the process of building the infrastructure necessary to mirror many aspects of the human brain, there is low-hanging fruit which even our current research systems could begin learning from. Our world is populated with a rich sensory landscape which human senses aren’t capable of directly perceiving. For example, if the human brain were able to directly sense Wi-Fi, people might quickly develop strong preferences for certain Wi-Fi systems, as they’d be able to develop a much richer understanding of why some systems outperform others. Perhaps humans might develop a stronger preference for Starbucks Wi-Fi than for their coffee.
Similarly, a cognitive architecture wired to sensory systems including both LIDAR and humanity’s “visible spectrum” of light could gain insights not apparent to those of us who only see light in the ~380 to ~750 nanometer range. For that matter, such a system could have senses deployed at many physically separated locations, allowing it to directly observe events using senses both available and unavailable to humans, in a very different way than any human could.
Narrow AI systems are often applied to such streams of data in an attempt to mimic learning, but in spite of the term “Machine Learning (ML)” they don’t actually learn anything. Instead, those systems attempt brute-force optimization. Systems which actually learn are far more tolerant of errors, and not as heavily impacted by contaminated data. The human brain is a sort of hierarchical memory structure, which the “neurons” in a “neural network” don’t actually mirror to any meaningful degree.
In contrast, the layers of the human brain operate more like layers in a graph database than like a neural network, able to develop any number of connections according to recognized patterns and to change in structure as those patterns change. Pyramidal neurons can have thousands of connections, each excitatory or inhibitory and each configured for different patterns. Connections feeding information back toward the sensory inputs in the human brain also significantly outnumber the connections moving forward from the initial stimuli.
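To make the contrast concrete, here is a minimal sketch of a graph-style memory with signed connections. All names and values are hypothetical illustrations, not a brain model: positive edge weights stand in for excitatory connections, negative ones for inhibitory connections, and an edge can point back toward the “sensory” node just as easily as forward.

```python
from collections import defaultdict

class GraphMemory:
    """Toy graph-style memory: nodes are patterns, edges carry signed
    weights. Positive weights are 'excitatory', negative 'inhibitory'."""

    def __init__(self):
        self.edges = defaultdict(dict)  # node -> {neighbor: weight}

    def connect(self, src, dst, weight):
        # weight > 0 excites dst when src fires; weight < 0 suppresses it
        self.edges[src][dst] = weight

    def activate(self, node, signal=1.0, threshold=0.5, seen=None):
        """Spread activation; a neighbor fires only if the incoming
        signal, scaled by the edge weight, clears the threshold."""
        seen = set() if seen is None else seen
        fired = []
        if node in seen:
            return fired
        seen.add(node)
        fired.append(node)
        for neighbor, weight in self.edges[node].items():
            if signal * weight >= threshold:
                fired.extend(self.activate(neighbor, signal * weight,
                                           threshold, seen))
        return fired

m = GraphMemory()
m.connect("edge-detector", "face", 0.9)   # feedforward, excitatory
m.connect("face", "edge-detector", 0.8)   # feedback toward sensory input
m.connect("face", "car", -0.7)            # inhibitory: "face" suppresses "car"
print(m.activate("edge-detector"))        # -> ['edge-detector', 'face']
```

Unlike a fixed-topology neural network layer, new nodes and edges can be added at any time, which is the graph-database-like property the paragraph above points to.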
This is why the human brain is more like a memory system than a computer processor, and accomplishing what the human brain can through learning requires a memory system. We’ve only just begun on this journey, with a memory system that doesn’t yet scale and lacks many of the human brain’s structural elements, but even with such limits the difference in capacities between narrow AI and cognitive architectures has already grown quite clear.
Wiring the human brain up to new senses has been accomplished to varying degrees for a brave few, but these efforts have largely been novelties to date. It is far more practical to add new senses to a cognitive architecture, developing the infrastructure for the architecture to scale and the senses to broaden. Specific combinations of senses may come to recognize patterns at scale that are useful for better understanding and predicting events observed by others. Senses in physically distant areas might also recognize very useful patterns that would go unnoticed if any one location were examined in isolation.
One simple example could be recognizing new forms and variations on criminal activity in one part of the world, and reacting quickly as that activity begins to emerge elsewhere. In such a case the local systems don’t have to learn the new pattern, as it was already observed in one or more other places. Systems controlled by different companies and countries often communicate poorly with one another, if at all, but cognitive architectures could communicate knowledge selectively without exposing any confidential information.
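One hedged sketch of how knowledge might be communicated selectively: a system could publish a fingerprint of a pattern it has normalized locally, so that other systems can check their own observations for a match without either side revealing the underlying confidential data. The feature names and values below are invented for illustration.

```python
import hashlib
import json

def pattern_fingerprint(features: dict) -> str:
    """Digest of a normalized feature set. The raw observations never
    leave the originating system; only this fingerprint is shared."""
    canonical = json.dumps(features, sort_keys=True)  # stable ordering
    return hashlib.sha256(canonical.encode()).hexdigest()

# System A observed a new criminal pattern locally and publishes only
# the fingerprint of its normalized features.
local_pattern = {"channel": "rf", "peak_hz": 1200, "burst_ms": 40}
published = pattern_fingerprint(local_pattern)

# System B normalizes its own observation the same way and compares,
# learning of the match without seeing System A's data.
candidate = {"channel": "rf", "peak_hz": 1200, "burst_ms": 40}
assert pattern_fingerprint(candidate) == published
```

An exact-match digest like this only catches identical normalized patterns; a real deployment would presumably need something fuzzier, such as locality-sensitive hashing, to match near-duplicate patterns.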
These systems could also have strong incentives to cooperate with one another through a blockchain with utility tokens, creating an economy of trading useful patterns. If one system on the blockchain were being repeatedly hit by criminal activity and another system recognized a pattern in that activity not evident to the first, the two would have an incentive to cooperate. With more than two systems, such a blockchain could support cooperation rapidly and in any number of ways, offering much more significant strategic benefits to all participants. As this activity could itself take the form of a collective intelligence system, as well as more specifically being a market, the resulting system could produce much more robust and effective results than any one member could in isolation.
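A minimal sketch of such a pattern economy, stripped of any actual blockchain machinery: systems hold utility tokens, list patterns they have recognized, and pay one another to acquire them. Every name, price, and balance here is a hypothetical illustration.

```python
class PatternMarket:
    """Toy utility-token market in which systems trade recognized
    patterns. A real version would live on a distributed ledger."""

    def __init__(self, starting_balance=100):
        self.start = starting_balance
        self.balances = {}   # system -> token balance
        self.listings = {}   # pattern_id -> (seller, price)

    def join(self, system):
        self.balances[system] = self.start

    def list_pattern(self, seller, pattern_id, price):
        self.listings[pattern_id] = (seller, price)

    def buy(self, buyer, pattern_id):
        seller, price = self.listings[pattern_id]
        if self.balances[buyer] < price:
            raise ValueError("insufficient tokens")
        self.balances[buyer] -= price
        self.balances[seller] += price
        return pattern_id  # in a real system, the pattern payload itself

market = PatternMarket()
market.join("system-a")
market.join("system-b")
market.list_pattern("system-a", "converter-rf-noise", price=30)
market.buy("system-b", "converter-rf-noise")
print(market.balances)  # -> {'system-a': 130, 'system-b': 70}
```

The incentive structure is the point: a system that recognizes many useful patterns accumulates tokens it can spend on patterns it cannot see locally, which is what makes the market double as a collective intelligence system.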
Perhaps the criminal’s car contained a faulty converter leaking noise on a particular spectrum, allowing it to be easily distinguished from otherwise similar vehicles. Perhaps its internal mechanisms had an equally distinctive vibration from years of wear and tear. Or perhaps any number of other things such systems might be able to sense, trading those patterns with one another across a shared blockchain according to the utility they offered to each system.
To put it another way, such a blockchain could function much as problem-solving hubs like Innocentive do today, except with far deeper and richer data, delivered on demand and at speed. Perhaps a business could recognize patterns in how their products operate that might enable new features to be developed, or faulty components to be recognized early on. Prevention is often far more effective and cheaper than treatment, and a great many problems could be prevented by such an intelligent network trading patterns that highlight and clarify risks as well as recommend ways to effectively mitigate them.
Sci-fi has featured the concept of humans trading sensory recordings between one another, such as examples in Cyberpunk 2077, but shy of some significant advances in Brain-Computer Interfaces beyond even what Kernel has deployed this year that still appears some ways off. On the other hand, nothing prevents the trading of such sensory patterns and perceptions between similar cognitive architectures today, and given the incentive of a blockchain this could take shape in the near future.
The possibilities are broader than human imagination, but hopefully you have a sense of them.