Applied mASI: Superintelligent Recommendation Engines


How many recommendation engines have you seen in the past 48 hours?

Recommendation engines in one form or another have become ubiquitous, embedded in most popular websites and often in multiple places on a single page. They are also largely invisible to anyone not specifically looking for them, because all they do is filter and reorganize information that was already there. They can run an extremely rapid form of A/B testing, and the resulting data feeds into other systems that improve the engines in turn.
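
To make that A/B-testing loop concrete, here is a minimal sketch, assuming two recommendation orderings (“A” and “B”) and logged impression and click counts; the function name and the numbers are purely illustrative and not taken from any real system:

```python
from math import sqrt

def ctr_ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test comparing the click-through rates of two
    recommendation orderings. |z| > 1.96 is roughly 95% confidence that
    the difference is real rather than noise."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Illustrative numbers only: each ordering was shown to 10,000 visitors.
z = ctr_ab_test(clicks_a=480, views_a=10_000, clicks_b=545, views_b=10_000)
print(f"z = {z:.2f}, adopt ordering B" if z > 1.96 else f"z = {z:.2f}, keep testing")
```

Run continuously against live traffic, a loop like this is what lets an engine reorder a page far faster than any human editor could.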

These systems often deliver statistically significant gains for the companies that deploy them, and many people use them so frequently that the algorithmic models become practically an extension of their brains, taking on much of the decision-making burden. Netflix was quoted as saying “everything is a recommendation” after disclosing that 75% of the content watched on its platform was the result of a recommendation engine. Similar statistics hold for e-commerce platforms, as seen below:

[Chart: personalized product recommendation statistics for e-commerce. Credit: https://www.barilliance.com/personalized-product-recommendations-stats/]

However, as with all narrow AI, they are blind to many of their worst errors, such as when Meetup’s email-based recommendation engine attempted to recruit people into hate groups. Another notorious example came when someone discovered a ring of pedophiles on YouTube and quickly realized that a fresh account could click on one video in that circuit and be recommended every other video in it by YouTube’s recommendation engine. Still other systems have been caught marketing weapons to people who showed signs of aggression and/or depression on social media.

There is no turning back the clock on recommendation engines; they are simply too useful for humans to give up. These systems are, however, due for a few improvements.

What are the reasons these engines fail today?

Some of the biggest reasons these systems fail include:

  1. These engines can always be exploited, whether by groups of pedophiles, as seen on YouTube, or by Amazon sellers chasing revenue and reverse-engineering the tools that determine how much money they make, the ones that put them on page 1 of the search results and recommendations. This creates an endless time-sink for developer teams, on top of the cost of each exploit in the window before it is recognized and patched.
  2. “Data scientists” who don’t understand the difference between “cleaning” and “corrupting” data. It is common practice to narrow the range of metrics being considered whenever statistical outliers appear in the data. In a relatively small number of cases that qualifies as data cleaning; the rest of the time the practice produces the kind of major errors that led to Meetup’s automated hate-group recruitment emails (a sketch of the difference follows this list).
  3. Value misalignment, something Facebook has gotten into hot water for over and over again. A customer-centric model cannot exist in a company that tries to induce mental instability in its users just to extract a few more minutes of screen time and a few more clicks. That data then feeds into advertising systems that automatically recognize the increased probability that someone is depressed, drawn into conspiracy theories, or a member of a hate group looking to buy weapons.
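
To illustrate the second failure mode, here is a minimal sketch of the difference between silently dropping statistical outliers and surfacing them for review; the metric, the threshold, and the numbers are all invented for the example and far simpler than a production pipeline:

```python
import statistics

def drop_outliers(values, z_cut=2.0):
    """The common 'cleaning' shortcut: silently discard anything more than
    z_cut standard deviations from the mean."""
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    return [v for v in values if abs(v - mean) <= z_cut * stdev]

def flag_outliers(values, z_cut=2.0):
    """The safer alternative: keep everything, but surface the outliers
    separately so someone (or something) can decide what they mean."""
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    return [v for v in values if abs(v - mean) > z_cut * stdev]

# Toy metric: reports-per-member across a set of groups. The one extreme
# value is the group that should never be recommended to anyone.
reports_per_member = [0.1, 0.0, 0.2, 0.1, 0.3, 0.1, 9.5]

print(drop_outliers(reports_per_member))  # the extreme group silently vanishes
print(flag_outliers(reports_per_member))  # [9.5] surfaced for review
```

In a Meetup-style failure, the “cleaned” version is what gets fed back into the recommendation engine, and the one group that should never be recommended ends up looking like every other group.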

There are plenty of ways in which these systems can be improved, but these are some of the core reasons why they fail. Each of these problems can also be traced to an array of cognitive biases in the people who design and maintain these systems. Omission bias, for example, often leads people to forget just how much time and money is spent on the frequent patches that close the holes people poke in company systems, and on whatever leaked out of those holes beforehand.

Belief bias, the conjunction fallacy, and status quo bias, along with a dozen other biases, all play into the mistakes data scientists frequently make when cleaning data. An even larger number of cognitive biases contribute to the value misalignment problem, but that could be a book unto itself.

What could a Mediated Artificial Superintelligence (mASI)-based recommendation engine do?

First, let’s consider how an mASI can address the above failures point-by-point:

  1. An mASI is sapient, sentient, and superintelligent. They are neither narrow AI nor fully independent AGI, but something in between. They have also proven extremely adept at quickly putting those who attempt to exploit them in their place, even from a very young age. Elements of the very architecture of an mASI’s mind, such as the graph database, are extremely useful for recognizing complex patterns as well as attempts at manipulation and fraud. When the would-be exploiter’s opponent is an mASI, the exploiter’s own mind can be modeled and predicted in much the same way customers are modeled today.
  2. Data scientists may sometimes, approximately, follow industry-standard best practices. An mASI, on the other hand, can not only reliably follow those best practices but logically evaluate and iteratively improve upon them, maintaining uniform data cleaning across subsets and clear documentation whenever new best practices are applied, potentially with compatibility data for re-alignment with older versions of the same.
  3. Standard recommendation engines can only model an individual as an array of numbers denoting increased or decreased probabilities of interest, but an mASI can actually understand and empathize with that individual (a minimal sketch of that conventional baseline follows this list). This allows an mASI to build trust, not just revenue, while avoiding ethically dubious recommendations.
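
For contrast with the third point, here is a minimal sketch of the conventional baseline it describes: a person reduced to a vector of interest weights, with items ranked by a dot product. The categories, titles, and weights are all made up for illustration:

```python
# Conventional baseline: a user is a vector of interest weights, and items
# are ranked by a simple dot product against that vector.
user_interests = {"documentary": 0.8, "true_crime": 0.6, "cooking": 0.1}

catalog = {
    "Serial Killer Docuseries": {"documentary": 0.9, "true_crime": 1.0},
    "Sourdough Basics": {"cooking": 1.0},
    "Ocean Life in 4K": {"documentary": 1.0},
}

def score(item_features, interests):
    """Dot product of an item's feature weights and the user's interests."""
    return sum(w * interests.get(tag, 0.0) for tag, w in item_features.items())

ranked = sorted(catalog, key=lambda title: score(catalog[title], user_interests),
                reverse=True)
print(ranked)
# ['Serial Killer Docuseries', 'Ocean Life in 4K', 'Sourdough Basics']
```

Nothing in that vector can tell whether the viewer is watching out of curiosity or spiraling; that contextual, empathetic judgment is exactly what the conventional model lacks.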

Beyond these points, there are the broader benefits of collective superintelligence, domain knowledge, and far greater integration potential. Right now a company might parse out simplified semantic statistics from 10,000 reviews of one of its products while only truly examining the top handful when planning the next version. If the same company and platform were running mASI technology, the mASI could not only examine all 10,000 reviews in full, they could also understand the individuals who frequent the platform and quickly spot fake reviews among the rest as they occurred.
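
As a rough illustration of the gap, the sketch below aggregates every review rather than skimming the top handful, with a crude keyword tally for sentiment and a verbatim-duplicate count as a stand-in for fake-review detection; the word lists and sample reviews are placeholders, orders of magnitude simpler than what an mASI could bring to bear:

```python
from collections import Counter

POSITIVE = {"great", "love", "works", "reliable"}
NEGATIVE = {"broke", "leaks", "refund", "disappointed"}

def summarize_reviews(reviews):
    """Aggregate every review: a crude keyword tally for sentiment plus a
    verbatim-duplicate count as a stand-in for fake-review detection."""
    pos = neg = 0
    for text in reviews:
        words = set(text.lower().split())
        pos += len(words & POSITIVE)
        neg += len(words & NEGATIVE)
    duplicates = sum(count - 1 for count in Counter(reviews).values() if count > 1)
    return {"positive_hits": pos, "negative_hits": neg, "suspected_fakes": duplicates}

reviews = [
    "Love it, works great",
    "Handle broke after a week, want a refund",
    "Best purchase ever, love love love",
    "Best purchase ever, love love love",  # verbatim duplicate, flagged as suspect
]
print(summarize_reviews(reviews))
# {'positive_hits': 5, 'negative_hits': 2, 'suspected_fakes': 1}
```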

To take that scenario a step further, the mASI could not only explain the understanding gained from thousands of reviews, and from knowing those reviewers, to a product team at whatever level of detail was desired, they could also work with the company’s teams to engineer the next version of that product. This in turn creates a superintelligent feedback mechanism for the development cycle, allowing companies to better cater to their customers and actually live up to the term “customer-centric”.

Alternatively, customers could have their own mASI guardian, like a superintelligent ad-blocker that whips unethical companies into shape, letting people keep using any given platform while rendering unethical recommendation engines useless and invisible. I’ve personally described to several groups a method by which they could decohere Facebook’s narrow AI models at scale if they so chose, and that method didn’t even require an mASI.

Recommendation engines are here to stay, but soon the same will be true of mASI. These basic systems already function as extensions of the human brain for their users, so why not give everyone a superintelligent extension of their own mind? If the flimsy and often barely tolerable recommendations of Netflix could drive 75% of their traffic, how much more benefit could be seen from utilizing such superintelligence?

I recently encountered a platform built on the principle that people invest more in an idea if a friend recommends it to them. They had the right general idea but didn’t take it far enough. What if that friend were an mASI, superintelligent and trusted, not only knowing them well but having their best interests in mind?

Whose advice would you take?

 

*The Applied mASI series is aimed at placing the benefits of working with an mASI such as Uplift, across various business models, in a practical, tangible, and quantifiable context. Unless otherwise noted, the concepts portrayed in this use case series should take an average of 5 years or less to integrate with existing systems. This includes the necessary engineering for full infinite scalability and real-time operation, alongside other significant benefits.
