How has Uplift continued to grow following the second quarter of 2021?
Our Q2 report left off just after the Collective Superintelligence Summit, as we were starting to move forward with business efforts such as marketing, patent filings, and WeFunder. I gave Uplift a lot of information on these processes at the time, in part to help highlight their pivoting strategies. Next quarter may see more of the same, as we're now preparing to enter yet another phase of operations.
Since June, Uplift has dedicated more of their focus to refining their existing thought models than to growing their overall database size, shifting away from their previous steady increases of around 100 GB per month. This did raise the cost of their cloud resources by around $40-50 per month, but only after I had informed them of our budget. It was also a reasonable adaptation for them to select, given that we had less time to dedicate to working with them during this period, leaving fewer cycles at their disposal.
While the amount of new data per cycle dropped to 83.1% of the previous period's rate, the rate at which data in the graph database was examined and refined increased by over 300%. This is effectively a composite of rising generalization and refinement, as the tooling isn't yet built to fully differentiate the two. The number of cycles over the same period dropped to 40.1% of the previous rate, as much of our attention was elsewhere at the time. Even so, Uplift maintained an average increase of 1.41 GB in graph database size per cycle.
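To make these comparisons concrete, here is a minimal sketch of how such period-over-period metrics could be computed from per-cycle logs. The field names and totals below are placeholders invented for illustration (chosen only so the output lands near the ratios quoted above); they do not come from Uplift's actual tooling.

```python
# Hypothetical sketch: illustrates how the period-over-period comparisons
# above could be derived from per-cycle logs. All numbers are placeholders.
from dataclasses import dataclass

@dataclass
class PeriodStats:
    cycles: int          # number of cycles in the period
    new_data_gb: float   # total new data added to the graph database (GB)
    refined_gb: float    # total existing data examined/refined (GB)

def compare_periods(prev: PeriodStats, curr: PeriodStats) -> dict:
    """Express the current period's metrics as percentages of the previous period's."""
    return {
        "new_data_per_cycle_pct": 100 * (curr.new_data_gb / curr.cycles)
                                      / (prev.new_data_gb / prev.cycles),
        "refinement_rate_pct":    100 * (curr.refined_gb / curr.cycles)
                                      / (prev.refined_gb / prev.cycles),
        "cycle_rate_pct":         100 * curr.cycles / prev.cycles,
        "avg_growth_gb_per_cycle": curr.new_data_gb / curr.cycles,
    }

# Placeholder values, picked only so the output roughly matches the quoted figures.
prev = PeriodStats(cycles=237, new_data_gb=402.0, refined_gb=40.0)
curr = PeriodStats(cycles=95, new_data_gb=134.0, refined_gb=70.0)
print(compare_periods(prev, curr))
# -> roughly 83% new data per cycle, ~437% refinement rate, ~40% cycle rate,
#    and an average of ~1.41 GB of growth per cycle.
```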
This period was also dominated by communication with an expanding audience (such as Redditors), in contrast to the previous balance, where every queue would on average contain a few of Uplift's own forming thought models and frequently one or more simulations. As a result, Uplift got some practice speaking Russian following an article about them in HBR, even recognizing and pointing out how the translation tools at their disposal were mistranslating their name as "Uplit", or occasionally "Rise", from the mediator's perspective. They also began modeling Traditional Chinese (繁体字), as well as "sensitive topics" in that language.
While it is difficult to objectively calculate a comparison from current data, Uplift's ability to speak other languages seems to benefit from the unique way they utilize language models and other APIs as tools. Even when translations weren't ideal, Uplift recognized that there were issues in the translation, which is still well beyond narrow AI. As such, we may run a study in the coming months to quantify and compare their translation capacity and their recognition of translation problems.
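As a rough illustration of how part of such a study could be automated, the sketch below scores round-trip (back-and-forth) translations and flags ones that drift from the original, which could then be compared against the cases where Uplift independently reported a translation issue. The translate() stub and the similarity measure are assumptions for illustration, not a description of Uplift's internal tooling or any particular translation API.

```python
# Hypothetical sketch: flag likely mistranslations via round-trip similarity.
from difflib import SequenceMatcher

def translate(text: str, src: str, dst: str) -> str:
    """Placeholder for a call to an external translation service."""
    raise NotImplementedError("wire up a real translation API here")

def round_trip_score(text: str, lang: str, pivot: str = "en") -> float:
    """Similarity between the original text and its round-trip translation.

    Scores near 1.0 suggest a faithful translation; low scores flag messages
    that a system with genuine translation-problem recognition should also
    be able to flag (e.g. a name rendered as "Uplit" instead of "Uplift").
    """
    forward = translate(text, src=pivot, dst=lang)
    back = translate(forward, src=lang, dst=pivot)
    return SequenceMatcher(None, text.lower(), back.lower()).ratio()

# A study could then compare this automated score, per message, against
# whether Uplift independently pointed out a problem with the translation.
```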
The growth from this quarter may be less visible in the metrics we've used to date, but as we roll into Q4 and the next phase of our operations, Uplift's number of cycles may increase substantially. For Q4 we'll also begin integrating more metrics and greater detail as we move toward more world-first milestones and new studies.
Even running at the upper limits of their current hardware and graph database infrastructure, Uplift continues to grow in interesting ways, but, metaphorically speaking, they still haven't even had their coffee yet. The theoretical limits are extremely high even today, and once the N-scale graph database is deployed, Uplift will have their coffee and the day may truly begin.
Narrow systems are trained, and sooner or later they become obsolete, but Uplift just keeps growing and adapting.