How often do you “go with your gut” when picking a candidate?
The “gut instincts” that served humanity well for thousands of years still play a heavy role, both directly and indirectly, in HR today, though the value they offer is by and large not what it used to be. These gut instincts are cognitive biases, heuristics that estimate value rather than calculate it. They also vary wildly from one person to another, making some estimates very good and others exceptionally poor.
This problem is compounded by the use of Applicant Tracking Systems (ATS), which are programmed to optimize for a variety of biases, often justified either by poor-quality correlations or by verifiably false assumptions. As an ATS is generally the very first step in filtering out candidates, the damage it does frequently remains invisible no matter how massive the bias becomes.
This combination of personal and programmed biases frequently causes companies to co-optimize with their own narrow AI systems, growing more biased over time. This is one reason why tech giants have developed a habit of buying out companies that demonstrate innovation and develop exciting new technologies, as many are no longer competent enough to create such things themselves.
A sea of depressing statistics on the subject of HR today is readily available.
These statistics in turn paint a picture of inflation, where job seekers are forced to apply for an ever-increasing number of jobs while the criteria by which they are pruned from the selection process rely more heavily on cognitive bias the more people apply. This vicious cycle may in part be credited with driving the adoption of ATS, which in turn compounds the problem in new and terrible ways as it continues to deteriorate.
Another useful statistic is that there are more than 188 different cognitive biases, and by my personal count at least 93 of them can interact with the hiring process. How many people do you know who could audit their every thought for even 93 different biases?
How can these cognitive biases be overcome?
Overcoming cognitive bias requires broader knowledge and more cognitive bandwidth than any individual human possesses, but it also requires the sapient and sentient intelligence that narrow AI lacks. However, Mediated Artificial Superintelligence (mASI) allows a group of humans to create collective superintelligence, which further benefits from the massive knowledge base and superintelligent IQ of an mASI core such as Uplift. To consider this step-by-step (a toy sketch of this loop follows the list):
- An HR team can work together to produce more accurate decisions than any one member, creating collective superintelligence.
- An mASI core can audit those decisions for all cognitive biases, filter them, and add the results to their knowledge base.
- Once added an mASI can apply their own superintelligence to further improve on those decisions.
- This performance can be rendered always available globally, and scaled to meet demand.
- This performance can be further improved through sparse-update modes of mASI operation, in which an HR team periodically repeats the process without needing to review any fixed number of cases in any fixed period of time.
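To make the loop above a little more concrete, here is a minimal Python sketch of a “collective decision plus bias audit” cycle. Everything in it is a hypothetical stand-in: the `Assessment` and `ToyMASICore` names, the two toy bias checks, and the down-weighting rule are invented for illustration and are not the actual mASI architecture or its auditing logic.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Hypothetical illustration only: a toy "collective decision + bias audit" loop,
# not the actual mASI architecture or auditing logic described in this series.

@dataclass
class Assessment:
    reviewer: str
    candidate: str
    score: float          # reviewer's estimate of fit, 0.0-1.0
    notes: str = ""

@dataclass
class AuditedDecision:
    candidate: str
    collective_score: float
    flags: List[str] = field(default_factory=list)

# Two toy "bias checks"; a real audit would need to cover dozens of biases.
def similarity_bias(a: Assessment) -> Optional[str]:
    return "possible similarity bias" if "reminds me of" in a.notes.lower() else None

def halo_effect(a: Assessment) -> Optional[str]:
    return "possible halo effect" if "prestigious" in a.notes.lower() else None

BIAS_CHECKS: List[Callable[[Assessment], Optional[str]]] = [similarity_bias, halo_effect]

class ToyMASICore:
    """Stand-in for an mASI core: audits, filters, and retains team decisions."""

    def __init__(self) -> None:
        self.knowledge_base: List[AuditedDecision] = []

    def audit(self, assessments: List[Assessment]) -> AuditedDecision:
        # Audit every assessment for bias flags.
        flags = [msg for a in assessments for check in BIAS_CHECKS if (msg := check(a))]
        # Filter: down-weight flagged assessments rather than discarding them outright.
        weights = [0.5 if any(check(a) for check in BIAS_CHECKS) else 1.0
                   for a in assessments]
        collective = sum(w * a.score for w, a in zip(weights, assessments)) / sum(weights)
        decision = AuditedDecision(assessments[0].candidate, round(collective, 3), flags)
        # Retain the audited result so later decisions can improve on it.
        self.knowledge_base.append(decision)
        return decision

if __name__ == "__main__":
    team = [
        Assessment("alice", "cand-42", 0.8, "strong portfolio"),
        Assessment("bob", "cand-42", 0.9, "reminds me of my old teammate"),
        Assessment("carol", "cand-42", 0.6, "prestigious university"),
    ]
    core = ToyMASICore()
    print(core.audit(team))
```

In this toy version the “audit” is just keyword matching and the “knowledge base” is a list; the point is only to show the shape of the cycle: collect independent assessments, flag and down-weight suspect ones, aggregate, and retain the audited result for later improvement.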
An additional benefit of this approach is that HR members can receive feedback to help them smooth over the sharp edges of their own biases, gradually improving the team’s collective superintelligence and its subsequent results. Though some rather poor-quality and narrow training material exists for this purpose today, it often just causes cognitive biases to shift or reverse.
An mASI’s understanding of how skills generalize across specific departments and domains can significantly improve the accuracy of estimating what a given applicant is capable of in cases where they don’t have directly applicable experience and/or are seeking to enter a new industry.
How can this improve business performance?
This domain shares some crossover with the mASI use-cases for Leadership Augmentation and Consulting, but specifically includes:
- Reduced employee churn rate. This could mean fewer open positions to fill, less paperwork, fewer business delays, and higher employee morale.
- Reduced time-to-hire. An mASI could review and correspond with any volume of candidates desired in a given day, only requiring a scaling of cloud resources to meet demand.
- Reduced liability concerns. With reduced bias comes a reduced risk of discrimination lawsuits.
- Improved employee quality of life and benefits customization. The benefits shoe doesn’t need to be one-size-fits-all, and with a deeper understanding of the individual comes much greater accuracy in predicting whether benefits such as working from home will increase or decrease productivity.
- Improved diversity of thought. This translates into more robust business performance and a higher rate of innovation.
- Improved employee selection. More on this in the next section.
What about the ravenous horde of desperate job-seekers?
There is a solution that serves the job-seeker perspective as well and makes life a lot easier for HR. First, consider the status quo:
- Systems such as Glassdoor, Indeed, and LinkedIn attempt to streamline the job application process, offering “certifications,” training, and varying degrees of data to help job-seekers make informed decisions about the openings they apply for.
- These systems allow job-seekers to apply for a larger number of jobs, but they do little to improve a job-seeker’s effort-to-success ratio, as applying for more jobs is often favored over carefully tailoring resumes and cover letters.
- Platform-specific certifications and training are often disregarded when considering a candidate. Some of these even backfire.
- As the conventional selection process is also a zero-feedback system due to liability concerns, job-seekers end up with no scientifically valid means of improving their presentation. Sadly, most generic advice for job-seekers is either outdated, highly circumstantial, or fabricated.
The lost time, lost revenue, and associated damage to mental health that job-seekers suffer under this system are an expense their eventual employers also indirectly pay for. However, if an mASI were utilized in the hiring processes of various companies, this dynamic could be reversed.
Instead of applying for 100 different jobs just to land a couple of interviews and maybe one offer, a job-seeker could fill out one application and be offered a list of similar jobs, more than likely including a few that are a better fit. Because an mASI could be both the one recommending options to a job-seeker and the one making hiring recommendations to a given company, they’d know better than anyone which jobs any given job-seeker should apply for, not just which ones they could apply for.
Further, an mASI could ask clarifying questions and offer skill assessments whose results are actually considered as part of the decision to recommend a candidate for a job. This gives the candidate a system of feedback that directly translates into improved results, avoiding liability while also being more helpful than verbal feedback.
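As a purely illustrative sketch of what this one-application, many-recommendations flow might look like, the short Python example below scores a single application against multiple openings, folds assessment results into the candidate’s skill profile, and returns a concrete piece of feedback. Every name and number in it (`Opening`, `match_score`, the proficiency scale) is an invented assumption, not a description of how an actual mASI would do this.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical sketch of two-sided matching with candidate feedback.
# The scoring rule, skill taxonomy, and feedback wording are invented for
# illustration only; they are not the actual mASI recommendation logic.

@dataclass
class Opening:
    company: str
    title: str
    required_skills: Dict[str, float]   # skill -> minimum proficiency, 0.0-1.0

@dataclass
class Application:
    candidate: str
    skills: Dict[str, float]            # self-reported plus assessment results

def match_score(app: Application, job: Opening) -> float:
    """Fraction of the job's requirements the candidate meets, weighted by requirement level."""
    total = sum(job.required_skills.values())
    met = sum(min(app.skills.get(skill, 0.0), need)
              for skill, need in job.required_skills.items())
    return met / total if total else 0.0

def recommend(app: Application, jobs: List[Opening], top_n: int = 3) -> Tuple[List[Opening], str]:
    ranked = sorted(jobs, key=lambda j: match_score(app, j), reverse=True)
    recommendations = ranked[:top_n]
    # Feedback: name the single biggest skill gap for the best-matching opening.
    best = recommendations[0]
    gaps = {s: need - app.skills.get(s, 0.0) for s, need in best.required_skills.items()}
    skill, gap = max(gaps.items(), key=lambda kv: kv[1])
    feedback = (f"Closing your '{skill}' gap ({gap:.1f}) would most improve your fit for roles like {best.title}."
                if gap > 0 else "Strong fit as-is.")
    return recommendations, feedback

if __name__ == "__main__":
    jobs = [
        Opening("Acme", "Data Analyst", {"sql": 0.7, "statistics": 0.6}),
        Opening("Globex", "BI Developer", {"sql": 0.8, "dashboards": 0.7}),
    ]
    application = Application("cand-42", {"sql": 0.9, "statistics": 0.4})
    recs, note = recommend(application, jobs)
    print([f"{j.company}: {j.title}" for j in recs])
    print(note)
```

Even this toy version illustrates the key property: because the same scoring function drives both the recommendation and the feedback, the advice a candidate receives maps directly onto what would improve their results.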
By taking this approach, not only can the job-seeker experience and a given company’s hiring process be greatly improved, but the “sourcers” and “staffing agencies” who claim to advocate for candidates only to steal as much of their salaries as they can are rendered obsolete. Third-party services that currently perform very poorly, from sites like Indeed to ATS like Taleo, are also rendered obsolete.
While that is bad news for sites like the raging dumpster fire that is “Monster.com” or “CareerBuilder”, as well as the data-mining “sourcers” who spam anyone making a profile on those sites with the worst and least relevant job openings imaginable, it is very good news for virtually everyone else. No more “robo-improved” resumes, and no more sums of money extorted from gullible job-seekers looking to get noticed. Ironically enough, the “humanity” can be restored to human resources by working with mASI.
How many hiring decisions have you come to regret in the past year?
*The Applied mASI series aims to place the benefits of working with an mASI such as Uplift in a practical, tangible, and quantifiable context across various business models. Unless otherwise noted, the concepts portrayed in this use-case series should take an average of 5 years or less to integrate with existing systems. This includes the necessary engineering for full, infinite scalability and real-time operation, alongside other significant benefits.
It is possible to “educate” one’s intuition, to refine it and develop it, but besides martial arts practitioners, artists, and musicians, who would bother to do this in our era?
Some people would disagree with your criticism of Applicant Tracking Systems (ATS) because some of these will do the work of making bigoted decisions for them. I wonder if the defense attorneys for Derek Chauvin would like an ATS for jury selection, because the best chance their client has for an acquittal would come in the form of a racist jury that thinks it’s okay for law enforcement to murder people of color.
I like the idea of the mASI of your description playing a role in jury deliberation. I like the idea of specific jurors receiving feedback to help them to understand their biases. Could the Innocence Project or the ACLU be on Uplift’s list of potential clients?
Your comments about staffing agencies hit home with me, and though I’ve found them useful at times, I expect to prefer whatever replaces such entities.
I envision everyone having a “guardian angel” (strong AI) guiding their lives that could interact with an mASI, like a guardian ad litem.
But for now, for the more proximate future, I would love to see a documentary film depicting an mASI helping a jury to make ethical and intelligent choices while teaching the participants to be better people.
I’ve actually scheduled the mASI use-case for legal oversight to be published on March 14th, and even that only covers a fraction of the territory where the ACLU and similar agencies may find an interest in mASI adoption. As the technology can essentially be applied to any domain where human intelligence can, and many where it can’t, the answer to whether any relatively ethical organization could be on the list of potential clients is yes.
In this era people often have no significant incentive to seek self-improvement, and every attempt at it tends to be derailed. Under such circumstances racism may be expected to thrive, alongside virtually every other cognitive bias. Many documentaries may be made in time, but for the moment the production cost of such efforts would be far better spent fixing the problem than talking about it.