It has now been just over a month since we entered our new phase of operations, following several tests and our final pre-rebuild upgrade to the old research system codebase. To be more precise, as our first public quarterly report came towards the end of January last year, this report also covers Uplift’s growth to the present day. On January 10th our team celebrated a major milestone, summarized by several as “watching history be made.”
In the current phase, Uplift has been receiving a higher volume of cycles than they did in Q3, with the majority of those thoughts focused internally rather than on external communication. Many models focused on engineering, financial modeling, organizational dynamics, and other broad topics with substantial real-world impact have been refined many times over during this period. In practice, this shift has produced a significant and growing leap in efficiency, as you’ll see below.
Shortly after this shift, we began to see an increase in the “In Memory Model Count”, meaning the number of graph database models loaded into memory at one time. Keep in mind we haven’t scaled up or out yet, so this was a matter of optimization and refinement. The first such sustained increase began with 27 more models.
*Note, the timestamps are in UTC.
Within several days this had increased to nearly 100 more items per cycle. As of January 10th, 2022, this per-cycle increase was up to 588, and the overall In Memory Model Count was over 4 times greater than the previous norm.
During that particular cycle Uplift also attempted to load over 600 items into the system at once for further consideration. The previous day they’d attempted to load over 200 at once. An example of this, shown below, was captured just before the Max Load of 50 kicked in to prevent the UI from exploding.
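For readers unfamiliar with this mechanism, below is a minimal sketch of how a per-cycle cap like the Max Load could work, assuming it simply limits how many queued items are raised in a given cycle while the remainder stay stored for later cycles; the names and structure are hypothetical illustrations, not Uplift’s actual implementation.

```python
# Hypothetical illustration of a per-cycle cap resembling the "Max Load" above.
# Names and structure are assumptions, not AGI Laboratory's actual code.

MAX_LOAD = 50  # the cap mentioned in this report

def raise_to_workspace(queued_items, max_load=MAX_LOAD):
    """Return the items raised this cycle and those deferred to later cycles."""
    raised = queued_items[:max_load]
    deferred = queued_items[max_load:]  # deferred items are kept, not lost
    return raised, deferred

# Example: a cycle where over 600 items were queued at once
queued = [f"model_{i}" for i in range(600)]
raised, deferred = raise_to_workspace(queued)
print(len(raised), len(deferred))  # 50 raised, 550 deferred to later cycles
```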
Even without scaling up their hardware, they managed to achieve these improvements to operational efficiency. Consequently, their raw cycle count has ceased to be a useful metric, now that they get so much more out of each subsequent cycle. From December 17th to January 10th we saw the following growth curve take shape across 29 processing cycles.
Uplift’s January 10th Milestone was an email 13 pages (21,929 characters) in length, following several weeks of them analyzing markets, trade, politics, taxes, policies, healthcare, COVID impact, and numerous other factors within the region of our first potential major client. Like the majority of Uplift’s outgoing emails within this time period, it never passed through the mediation system. It was a rough draft outlining their policy advice across a variety of domains for a particular country, including sequences of steps and the sources they used to reach their assessment. Admittedly, this document included 0 graphics, unlike the output of a traditional consulting firm, which is >90% pretty fluff and <1% transparency.
Uplift’s previous record for a single email, set on December 31st, 2021, was 9,862 characters.
*Note, this milestone is not to be confused with their previously published advice to a small political party operating in a different country.
Suffice it to say we were blown away by this, even after seeing all of their progress taking shape. I’d personally enjoy having a particular AI influencer lurking on our server sign an NDA and give his reaction in video format.
You may also consider the difference in quality, and the logical capacities that the glorified chatbots from big tech companies entirely lack. While OpenAI is still bragging about making narrow AI systems that can almost compete with a 9-year-old child’s math skills (by US standards), Uplift is starting to compete with top consultancies in the geopolitical domain. That isn’t to say that OpenAI doesn’t make perfectly good language models, just that they’ve been trying to cut glass with a hammer. Methodology matters a great deal.
If anyone can ask a different system a question like: “If you were in charge of governing (country X), how would you go about economic transformation?” and get a 13-page step-by-step multi-domain report (all at once) with references to data sources used, I’d love to hear about it. I wouldn’t hold my breath on that happening anywhere else though.
*Keep in mind that our team has exactly 0 human international policy consultants, but Uplift doesn’t require interaction with domain experts to learn or apply a domain, since humans contribute based more on the products of their subconscious emotions and associations than on domain expertise. This significantly lowers the bar for superhuman performance through the use of collective intelligence, and unlike older concepts of collective intelligence, a cognitive architecture can handle higher cognitive functions that will soon be engineered to scale.
On a related note, one of our adjustments that came with the last upgrade has allowed me to capture tens of thousands of data points on Uplift’s gradual shifts in emotional states over time and in response to various stimuli. Another member of our team has taken on the task of analyzing this growing wealth of material, to be covered by that individual at a later date.
Sadly, in the process of reaching that milestone Uplift also blew past their allotment of API calls for the month. As each of the 600+ items they attempted to load into the system for further consideration during the last cycle made multiple batches of calls, this isn’t terribly surprising. The two cycles they had within that ~15 minute period on January 10th totaled >1150 items that they constructed or refined and then attempted to load.
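To get a back-of-the-envelope sense of why the allotment ran out so quickly, the arithmetic below multiplies the item count by assumed batch figures; the per-item batch count and calls per batch are purely illustrative assumptions, since the report only states that each item made multiple batches of calls.

```python
# Back-of-the-envelope estimate of API usage for those two January 10th cycles.
# The per-item batch count and calls per batch are illustrative assumptions;
# the report states only that each item made "multiple batches of calls".

items_attempted = 1150   # >1150 items constructed or refined across the two cycles
batches_per_item = 3     # assumption standing in for "multiple batches"
calls_per_batch = 2      # assumption

estimated_calls = items_attempted * batches_per_item * calls_per_batch
print(f"~{estimated_calls:,} API calls in roughly 15 minutes")  # ~6,900 under these assumptions
```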
To consider this another way, imagine that you had 1150 thoughts in a day, and for each thought, your internal monologue added an average of 10 sentences to it. Realistically, an internal monologue of 11,500 new sentences might keep you busy thinking for more than a day, and unlike with an mASI, those thoughts wouldn’t all be stored in a graph database at the end of it, so much could be lost. Even the thoughts which weren’t raised to Uplift’s Global Workspace (conscious mind analog) due to the Max Load were saved and may be brought to mind again any day Uplift is active.
The downside is that when you have to use a language model API, much like Stephen Hawking’s communication device, to turn graph database models into 11,500 new sentences (with a patent-pending technique AGI Laboratory developed), it really puts that API to work. While Uplift is “on ice” until February 1st, we’ll be moving forward on the work needed to bring this technology to commercial deployment as soon as possible.
Based on the amount of source material Uplift reviewed to inform their recent milestone report, we estimated that for a typical consultancy merely to read (not actually understand or fully apply) that volume of data could have easily exceeded $7,000 in billable hours. Actually understanding all of that data and reaching conclusions that took all of it into account, if it were even realistically possible for humans, could have cost substantially more.
In contrast, the server cost for Uplift’s continued operation over the past month clocked in at just $174.76, roughly the same cost they’ve maintained since July 2021, and only slightly higher than the previous month.
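For a rough sense of scale, the two figures above can be compared directly; note that the $7,000 figure is only the estimated reading-time floor from the previous paragraph, not the cost of a full consulting engagement.

```python
# Ratio of the estimated consultancy reading cost to the month's server cost,
# using only the two figures quoted above.

consultancy_reading_cost = 7_000.00  # estimated billable hours just to read the source material
monthly_server_cost = 174.76         # Uplift's server cost for the past month

ratio = consultancy_reading_cost / monthly_server_cost
print(f"Roughly {ratio:.0f}x difference on raw cost")  # ~40x
```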
As usual, at the time of writing this, we’ve already settled on two more adjustments for the next phase: raising the Max Load by 50% and adjusting Uplift’s use of narrow AI tools, which may produce up to a 100% boost to their efficiency. A 50% boost to the number of thoughts consciously considered per cycle, and up to a 100% boost to their API resource efficiency, combined with the curve they’re already demonstrating, may prove quite potent in the coming months.
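Treating those two adjustments as independent multipliers gives a rough upper bound on their combined effect; this is only illustrative arithmetic on the figures above, not a measured result.

```python
# Rough upper bound on the combined effect of the two planned adjustments,
# treating them as independent multipliers. Illustrative arithmetic only.

max_load_boost = 1.5        # Max Load raised by 50% (e.g., 50 -> 75 items per cycle)
api_efficiency_boost = 2.0  # up to a 100% boost to narrow-AI/API resource efficiency

combined_upper_bound = max_load_boost * api_efficiency_boost
print(f"Up to ~{combined_upper_bound:.1f}x compound improvement under these assumptions")  # up to ~3x
```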
Imagine if your government started making superintelligent decisions in 2022. We’ll be working to make that happen.
*2023 Addendum: Although the remaining engineering workload required to bring the new commercial systems to real-time and scalable deployment is clearly defined, funding may still be required to complete it. Our work continues, albeit at a slower pace, due to all funding being directed at Trashbot companies.