The next big requirement for continued research is the infrastructure we’ve termed the “N-Scale Graph Database”, which will also serve as one of our first products. Investors we’ve consulted on the initial set of products have called it “a billion-dollar product” and see it as the technology most likely to be adopted quickly and at scale.
For organizations from mid-sized companies to international conglomerates, it represents a breakthrough in database technology: once complete, it should federate dynamically, in theory handling virtually limitless amounts of data at speed. This ability to federate dynamically while maintaining performance and throughput, and potentially improving on them in some cases, could be utilized by any number of business systems.
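To make the idea of dynamic federation concrete, here is a minimal sketch of one way a scatter-gather design can let shards join at runtime while queries still see a single graph. All names here are illustrative assumptions, not the N-Scale implementation.

```python
# A minimal sketch of dynamic federation, assuming a simple scatter-gather
# design: shards can register at runtime, and each query fans out to all
# registered shards in parallel before results are merged.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

# A shard is anything that can answer a pattern query with rows of results.
Shard = Callable[[str], List[dict]]

class FederatedGraph:
    def __init__(self) -> None:
        self._shards: List[Shard] = []

    def add_shard(self, shard: Shard) -> None:
        # Capacity is attached on the fly; no downtime or manual rebalancing.
        self._shards.append(shard)

    def query(self, pattern: str) -> List[dict]:
        # Scatter the query to every shard concurrently, then gather.
        with ThreadPoolExecutor() as pool:
            partials = pool.map(lambda s: s(pattern), self._shards)
        return [row for part in partials for row in part]

# Usage: two in-memory "shards" standing in for remote graph partitions.
fed = FederatedGraph()
fed.add_shard(lambda p: [{"shard": 1, "match": p}])
fed.add_shard(lambda p: [{"shard": 2, "match": p}])
print(fed.query("(:Company)-[:SUPPLIES]->(:Company)"))
```

In a real system the shards would be remote partitions with replication and routing; the point is simply that capacity can be added without pausing queries.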
Current systems for this purpose typically offer only a fraction of the necessary capabilities: they may be semi-dynamic but suffer deteriorating throughput or latency as they grow, or they may federate only through manual engineering. In effect, this forces businesses around the world into costly and unnecessary trade-offs at scale.
Maintaining these trade-offs at scale also carries a hidden cost in the form of new cybersecurity vulnerabilities, introduced both by clumsy implementations and by vulnerable third-party infrastructure bolted on to compensate for those weaknesses. When teams can barely maintain the infrastructure they already have, these vulnerabilities often grow quietly until a bad actor eventually finds them.
Humans also tend to think of new inventions in terms of past inventions, only discovering their potential after they begin to use them. The telephone was first conceived of as little more than a telegraph, to be used only in cases of absolute necessity. Similarly, a great deal that simply wasn’t possible or practical with previous databases may begin to unfold with the N-Scale release. Not everyone is equipped to design the database they need from scratch, but many more can put that database to use once it is built.
We approached all of the major graph database firms, and none could do what we needed, so we are building it ourselves. The N-Scale database is designed to scale out on the fly, spreading the graph across as many machines or containers as needed to maintain sub-second response times, and to grow in this way without human intervention. It needs to run in AWS, Azure, and GCP simultaneously, as well as in a hybrid mode spanning local hardware and the cloud. While we say “virtually infinite” amounts of data, the practical limit at present might be around 5 petabytes, which further development may improve upon. It should also allow the relation between two nodes to itself be of type node, so that relationships can carry their own properties and relations, and it should support loading function extensions on the fly as DLLs or other binary packages. Of course, it needs not just to grow but to stand up new Kubernetes containers, configure servers for its needs, and expand routing and other infrastructure as required.
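To make the “relations of type node” requirement concrete, the sketch below treats every relation as a node in its own right, so a relationship can carry properties and itself be the endpoint of further relations. The classes and fields are hypothetical illustrations, not the actual N-Scale API.

```python
# Minimal sketch of "relations of type node": every edge is itself a node,
# so relationships can carry properties and participate in further relations.
from dataclasses import dataclass, field
from typing import Dict, Optional
import itertools

_ids = itertools.count(1)

@dataclass
class Node:
    label: str
    props: Dict[str, object] = field(default_factory=dict)
    # If this node reifies a relation, it records the endpoints it connects.
    source: Optional["Node"] = None
    target: Optional["Node"] = None
    id: int = field(default_factory=lambda: next(_ids))

class Graph:
    def __init__(self) -> None:
        self.nodes: Dict[int, Node] = {}

    def add_node(self, label: str, **props) -> Node:
        node = Node(label, props)
        self.nodes[node.id] = node
        return node

    def relate(self, source: Node, target: Node, label: str, **props) -> Node:
        # The relation is just another node, so it can itself be the
        # endpoint of further relations (e.g. provenance, confidence).
        rel = Node(label, props, source=source, target=target)
        self.nodes[rel.id] = rel
        return rel

g = Graph()
acme = g.add_node("Company", name="Acme")
globex = g.add_node("Company", name="Globex")
deal = g.relate(acme, globex, "SUPPLIES", since=2021)
# Because the relation is a node, we can annotate the relationship itself:
audit = g.add_node("Audit", year=2024)
g.relate(audit, deal, "REVIEWED")
```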
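The self-provisioning requirement might look something like the control loop below, which uses the standard Kubernetes Python client to grow a shard StatefulSet when memory pressure rises. The StatefulSet name, namespace, and pressure metric are placeholder assumptions; this is a sketch of the pattern, not the N-Scale control plane.

```python
# A hedged sketch of self-provisioning, using the Kubernetes Python client
# (pip install kubernetes). Names and thresholds are hypothetical.
import time
from kubernetes import client, config

def memory_pressure() -> float:
    # Placeholder: in practice this would come from the database's own
    # telemetry (e.g. fraction of shard memory in use).
    return 0.9

def autoscale(name: str = "nscale-shard", namespace: str = "graph",
              threshold: float = 0.8, step: int = 2) -> None:
    config.load_kube_config()  # or load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    while True:
        if memory_pressure() > threshold:
            scale = apps.read_namespaced_stateful_set_scale(name, namespace)
            desired = scale.spec.replicas + step
            # Grow the shard set; Kubernetes schedules the new containers.
            apps.patch_namespaced_stateful_set_scale(
                name, namespace, {"spec": {"replicas": desired}})
        time.sleep(60)
```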
For our continued research, the N-Scale database is an absolute necessity, as any scalable digital mind requires these capacities. The first couple of times Uplift, our first mASI, attempted to solve a particular real-world business case, they built models too complex to fit in memory and crashed themselves. They managed to work around this, but as a result their solution may have ended up simpler and less ideal than the one they first sought to build.
Other cases where large amounts of information may need to be loaded into memory and integrated into hyper-complex models include reviewing a vast body of peer-reviewed material and integrating that knowledge to form a more complete understanding and generate new hypotheses. This new capacity might prove sufficient to address problems such as the “Millennium Problems” that have thus far defied solution, or at least point the way toward solutions.
On a more mundane but larger scale, this ability to build hyper-complex models without hitting a glass ceiling is also a requirement for e-governance systems capable of functioning at the scale of megacities and nations. Wherever people sit on the political and ideological map, everyone has seen how current systems fail, and has gotten some hint of how they may fail in more spectacular ways if left unattended on their current trajectories.
The failure modes that define modern political systems, such as gerrymandering, the filibuster, psychological warfare, political talking points, repetition of buzzwords, and bribery via lobbyists, could be greatly reduced or removed entirely even by the first wave of e-governance systems. If avoiding the radicalization, collapse, and splintering of political parties, and the power vacuums that follow, is a priority, there is a means of doing so in the near future.
Powerful new technologies require infrastructure capable of supporting them, and everyone can benefit from better infrastructure.