Authored by: Garrick Throckmorton
The New York Times recently published a very insightful article titled “When A.I.’s Output Is a Threat to A.I. Itself.” This article is a must-read for all organizations, as 80% of companies worldwide are using or exploring A.I. in their business operations.
The New York Times article unpacks a risk that many A.I. use cases may present. As companies scour the web for data to train their A.I. models, they begin to ingest some of their own A.I.-generated content and responses along the way. This creates a feedback loop in which the output from one A.I. becomes the input for another.
Consider the following metaphor. You are a woodworker building 2-foot-wide shelves. The first shelf you cut was 1/8th of an inch short of your mark. However, you used that shelf (which is slightly shorter than 2 feet) as your template to mark the next board. You then failed to line up the edges perfectly before drawing your next line. The result was a second shelf that was ½ inch shorter than the first. This process continues until you put the shelves up. At that moment, you realize you have failed this project. You are not Bob Vila.
As A.I. outputs inform the inputs, the same phenomenon occurs. A copy of a copy begins to drift away from the original intent, and the work “collapses” on itself and creates A.I. “slop.” Per the author, one solution is for companies to pay for data instead of scooping it up from the internet, to ensure human origin and high quality.
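The shelf metaphor can be sketched in a few lines of code. This is a minimal, purely illustrative simulation (the shelf length, the 1/8-inch maximum error per copy, and the number of copies are all assumptions for the example, not figures from the article): each “generation” is traced from the previous one with a small loss, and the losses compound rather than cancel.

```python
import random

def copy_with_error(length, max_error=0.125):
    """Trace a template with a small, one-sided error -- like marking
    a board from a shelf that is already slightly short. The error is
    assumed to be between 0 and 1/8 inch per copy (illustrative only)."""
    return length - random.uniform(0, max_error)

random.seed(0)          # fixed seed so the example is repeatable
target = 24.0           # a 2-foot (24-inch) shelf
shelf = target
history = [shelf]

# Make ten successive copies, each traced from the previous copy,
# never from the original 24-inch mark.
for _ in range(10):
    shelf = copy_with_error(shelf)
    history.append(shelf)

drift = target - history[-1]
print(f"final shelf: {history[-1]:.2f} in, total drift: {drift:.2f} in")
# No single copy loses much, but because each copy is made from the
# last copy rather than the original, the error only accumulates --
# the same dynamic as A.I.-generated data feeding future A.I. models.
```

The key design point is that `copy_with_error` receives the *previous copy*, not `target`. Change the loop to always copy from `target` and the drift stays bounded by a single step's error, which is exactly the difference between training on curated original data and training on recycled output.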
So What?
The biggest challenge in the above scenario is that the degradation in the responses provided to the user is difficult to notice and discern. The impact, however, is hallucinations, errors, and A.I. “slop.” This risk is substantial and disruptive given the investment at stake and the ROI to be garnered through appropriate and accurate use of A.I.
The findings above align with the learnings of our team over the past 2 years. And trust us, there have been countless “aha” moments! During this timeframe, we have built the world’s largest evidence-based GenAI talent development companion, the Career Architect©.
A Solution to A.I. “Slop”
Our database contains over 15,000 development tips and 8,000,000+ words of supporting development and interviewing content. The Career Architect© database is private and secure. We do not scrape the World Wide Web, as we know doing so pulls in “crap.” Further, the Career Architect© does not ingest its own answers, so we protect against A.I. slop. Rather, the Career Architect© leans on a stable and uncontaminated database of talent development truth.
Consider the fact that we spend approximately 33% of our lives working. As A.I. is leveraged to provide equitable access to career development support, it is vital to ensure that the support received is accurate, useful, and valid. No one wants to mortgage 33% of their lives to A.I. development tools that provide inaccurate guidance!
As you vet A.I. solutions for talent management, consider the following questions to protect against slop, crap, hallucinations, and more:
- Is the A.I. engine known? Tested? Trusted? Secure?
- Where does the content in the database originate from? Is it curated? Vetted?
- Does the content cover the needs of a diverse audience? From interns to the C-Suite?
- Will it provide a depth of content that covers important topics that elevate our bench strength? Is there neuroscience content? Teams? High Potential? Emotional Intelligence? Personality? Behavioral-based language?
- Can multiple users leverage the system to solve problems in their context? Will it guide my development needs, and support my need to coach and develop others?
- Is the model informed by data from the World Wide Web? Does it ingest its own answers in a way that informs future output?