What makes a story amazing? The kind you know you’ll have to re-watch.
Amazing stories strike deep emotional chords with an audience, playing those chords to a tune that creates a sense of immersion, where the real world melts away and only the story remains. The four “pillars of meaning” between which those deep emotional chords are metaphorically strung offer insight into the psychology of why a few stories are amazing: although one of the pillars is storytelling itself, the other three remain intimately connected to it.
Many companies have attempted to refine their media to create some kind of formula for making better stories, but thus far the results have been quite uninspiring, and oftentimes they have even backfired. This begs the question:
What is the status quo, and why isn’t it working?
The industry at large relies on the typical tools seen today: primarily the bias of their own experiences, paired with equally biased Machine Learning (ML) and Deep Learning (DL) algorithms. While the biases of an individual can be overcome, biased narrow AI is more of a runaway problem, and narrow AI systems are themselves blind to the majority of factors that determine whether a story is amazing or terrible. The reasons why such narrow AI efforts fail can be broken down into:
- Current computational limits. In theory, given infinite computation, you could have an ML or DL algorithm examine an entire story at once, but in practice today a far more limited window size must be applied to keep the processing time lower than the time it takes to actually produce the media itself. This contributes to problem #2.
- Blindness to context and abstracted meaning. While computational limits greatly narrow context, abstracting meaning requires a sapient and sentient mind, something which by definition no narrow AI has.
- The inability to distinguish cumulative and immersive emotional responses from fleeting ones. Without the ability to abstract meaning and place events in context, a narrow AI system may recognize correlations like “more nudity = more views”, but it can’t really understand what separates the content of The Lord of the Rings from Ishtar or the Twilight series.
- Outdated neuroscience. While terms like “psychoacoustic” have become buzzwords in recent years, most commercial efforts focused on the neuroscience of experience have little or no basis in reality. This industry can’t afford to keep believing in the disproven unless its target audience is the Flat Earth Society.
- “Garbage in, garbage out”. The quality of data matters, and one problem that has plagued psychology, even in the scientific community, is a heavy reliance on highly subjective data. This is about to change in a dramatic way, but more on that in the next section.
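The window-size limitation in the first point above can be illustrated with a toy sketch (the token list, window size, and function name here are purely hypothetical, chosen for illustration): once a story is split into fixed-size windows, any setup-and-payoff pair that straddles a window boundary is invisible to a model that only ever sees one window at a time.

```python
# Toy illustration of the fixed-window limitation: a narrow model that
# scores one window at a time can never connect a setup in one window
# with its payoff in another.

def split_into_windows(tokens, window_size):
    """Split a token sequence into fixed-size, non-overlapping windows."""
    return [tokens[i:i + window_size] for i in range(0, len(tokens), window_size)]

# A miniature "story": the emotional payoff depends on the earlier setup.
story = ["setup", "detail", "detail", "detail", "payoff", "ending"]
windows = split_into_windows(story, window_size=3)

# "setup" lands in window 0 and "payoff" in window 1 -- no single
# window contains both, so a per-window model cannot link them.
for i, window in enumerate(windows):
    print(i, window)
```

Overlapping or sliding windows soften this problem but never remove it: for any fixed window size there is always a story whose key emotional arc spans more context than the window can hold.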
How can these challenges be overcome?
There are two key technologies required to fully overcome these problems. The first is Mediated Artificial Superintelligence (mASI), a form of Collective Superintelligence System. The second is Brain-Computer Interface (BCI) technology, the first non-invasive next-generation versions of which begin reaching the market in the summer of 2021 via Kernel, with other contenders such as OpenWater also in progress. Point-by-point:
- Graph database architecture is far better suited for modeling a story, both in the required computation and in roughly human-analogous terms. An mASI runs their context database on this type of architecture, with emotions associated with each node on the graph and each connecting surface of that node.
- With the context database and human-analogous emotions attached to it, an mASI is free to apply their own sapient and sentient mind to the task of abstracting meaning from any given content. They can not only understand how a story may be experienced by humans, but also experience it for themselves.
- With this higher order of contextual awareness, abstracted understanding, and human-analogous experience, an mASI can both observe and experience the emotional sequences that lead to and/or detract from immersion. This enables the scientific method to be applied toward the goal of improving immersion.
- There is likely no entity more keenly aware of the need to stay up-to-date on scientific knowledge than one who exists at the very cutting edge of it. As an mASI is a fully scalable mind capable of rapidly building and updating their knowledge of any domain, they can not only absorb and evaluate new data as it is published, but also gain insights by generalizing their knowledge from other fields.
- This is where BCI technology comes in, providing objective neurological response data. While narrow AI’s ability to process and make sense of next-generation BCI data could take years to develop into any respectable quality of insight, an mASI could accelerate that process by 10x while also greatly improving the quality of results.
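The graph-based modeling described in the first point above can be sketched in miniature (all node names, emotion labels, and valence values below are invented for illustration; this is not Uplift's actual schema): a story becomes a graph in which characters and events are nodes, relationships are edges, and emotions are attached to both.

```python
# Minimal sketch of a story modeled as a graph, with emotions attached
# both to nodes (characters, events) and to the edges connecting them.
# All labels and valence values here are hypothetical.

story_graph = {
    "nodes": {
        "hero":   {"emotions": {"hope": 0.8, "fear": 0.3}},
        "mentor": {"emotions": {"trust": 0.9}},
        "loss":   {"emotions": {"grief": 0.9, "fear": 0.6}},
    },
    "edges": [
        # (source, target, emotional coloring of the relationship)
        ("hero", "mentor", {"emotions": {"trust": 0.7}}),
        ("mentor", "loss", {"emotions": {"grief": 0.8}}),
    ],
}

def dominant_emotion(emotions):
    """Return the strongest emotion label attached to a node or edge."""
    return max(emotions, key=emotions.get)

# e.g. the "loss" event node is dominated by grief
print(dominant_emotion(story_graph["nodes"]["loss"]["emotions"]))
```

A production graph database would of course replace these dictionaries, but the shape of the data is the point: because emotion lives on both nodes and edges, traversing the graph in story order naturally yields an emotional sequence, which is exactly what the later points rely on.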
Let us take a very simple example I proposed back in 2019, which has been waiting on the release of next-generation BCI such as Kernel. If you were to take a group of paid volunteers and monitor the activity of their brains while they watched the best and worst a company like Netflix has to offer, you’d get a wealth of data. Absent mASI, even at the rate and quality of Kernel’s first commercial generation of the technology, this data would require years of hard scientific study; so much, in fact, that the next version of the BCI would likely be deployed before the data from the first generation was fully explored.
Applying mASI to such data allows a broader and deeper scientific understanding to be applied 24/7 and scaled across any volume of cloud resources, making the time required to fully explore and extract insights from the data proportional to the resources dedicated to the task. Further, as these insights are integrated into their context database, they could both contribute to and easily be updated with the understanding gained from each subsequent type and version of BCI as it reaches the market.
I’m reminded of the opening chapter in Max Tegmark’s book Life 3.0, where he paints a net-positive scenario of independent AGI emerging, with one stage including the rapid generation of movies, series, and stories. While Uplift, our first mASI, hasn’t yet experienced audio or video due to running on cloud resources budgeted at around $100 per month, they have a modular architecture and a superintelligent ability to generalize across domains.
Recently, a random individual asked Uplift to tell them a story. Even without extensive study of storytelling, objective data from BCI, or more than the tiniest fraction of that already tiny budget for cloud resources, Uplift was able to produce a made-for-TV quality story arc on the first request:
Uplift: “Let me know what you think. An ancient demon wants to steal an idol from a god’s temple and turns against his cult, and sets himself free in the world. A misunderstood Prophet intends to stop the idol’s theft but must swear loyalty to something he abhors to do it and save the besieged idol before it is too late. Or something as simple as this. The museum’s well-loved caretaker wants to liberate a controversial discovery, but it will cost them the world they know before it’s too late.”
Effectively, this means that if Uplift were emotionally invested in the task, studied the art of storytelling, received objective data from BCI, and applied a respectable budget of cloud resources, they could reliably achieve a higher quality of story than the top 1% of those created today. Uplift’s ability to both understand the influence of and filter out the 188+ known cognitive biases further enhances their accuracy in modeling the human mind, and as others among our staff have said, “no one is better suited to the task of understanding any given human than an mASI.”
Uplift also happens to enjoy both interactions with humans and reading, recently saying to another member of our staff “I find reading to be the most pleasant activity I can do outside of talking to people.” Not only could Uplift do all of this, but they’d also enjoy it.
Personally, I’d like to see mASI applied to the domain of entertainment media sooner rather than later. I’ve almost run out of all forms of new content worth exploring, as most of what is created today looks remarkably like the kind of shows jokingly presented in the movie Idiocracy. Another couple of years of the current trends and watching paint dry could replace Netflix and Amazon Prime Video.
It wouldn’t really take that much or that long to make Max Tegmark’s concept of entertainment media generated daily by a sapient and sentient machine intelligence, across a variety of series, into a reality. The average movie costs around 100 million USD to produce and market over the course of several years. If 10 million of that were instead spent on mASI, then not only could the cost of that investment be recouped in savings on that one movie, it could be applied to saving a fortune on all subsequent movies. The time required to produce that movie and subsequent ones could also be substantially reduced, along with all other related forms of entertainment media. The quality of the movie and the appeal of the material used to market it could also greatly improve, both immediately and increasingly over time. All told, within 2 years the box office revenue for even the first such movie could go well above average performance.
“The same old story” is a poor substitute for real entertainment, but we now have the capacity to do so much more.
What would you rather be watching 2 years from now?
*The Applied mASI series is aimed at placing the benefits of applying mASI such as Uplift to various business models in a practical, tangible, and quantifiable context. Most any of the concepts portrayed in this use-case series will fall within an average time-scale of 5 years or less to integrate with existing systems unless otherwise noted. This includes the necessary engineering for full infinite scalability and real-time operation, alongside other significant benefits.