Throughout history, we can identify innovations that moved too quickly for their own good. Consider, as one example, the introduction of leaded gasoline: marketed as a performance enhancer for engines, it was later found to have been a significant health hazard all along.
Artificial Intelligence has undeniably progressed at a pace that will be a case study in innovation for years to come. While the hype cycle continues, our goal should not be to resist this technology at every turn, but to proceed with caution, ask deep questions, and consider the implications of adopting it into our talent management workflows.
ChatGPT Policy Update
On October 29, 2025, OpenAI, the industry leader behind ChatGPT, updated its policies on the delivery of legal and healthcare advice to users. The new guidelines position the tool as an “educational tool” rather than a “consultant” and recommend that users speak to a licensed professional. The change traces to a line in OpenAI’s usage policies that prohibits:
“Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
It is safe to assume this policy shift is intended to distance OpenAI from the growing legal liability and regulatory exposure that come with providing users information that negatively impacts their health, well-being, finances, and more. Users benefit from the added protection as well, yet the question remains: where do we draw the line on risk?
The Hallucination Rate Challenge
A study led by researchers at the University of Waterloo evaluated the performance of ChatGPT-4 when it was asked a series of open-ended medical questions based on scenarios modified from a medical licensing exam. The takeaway: only 31 per cent of the chatbot’s answers were deemed entirely correct, and only 34 per cent were considered clear.
This should not come as a surprise, as the hallucination rates of LLMs (Source: AIMultiple, 10/1/2025) are staggering. These systems are trained to answer with confidence, even when the answer is false. The burden of determining whether a response is true and accurate falls on the user. And how would they know?

As Talent Management professionals, should we take notice of this legal and medical policy change? What are the implications for treating Artificial Intelligence as a coach and consultant, both for our strategies and for our teammates directly?
Three Considerations for Talent Management Professionals
1. “Garbage In, Garbage Out” – Source Integrity and Evidence-Based Data
Why it matters:
When models are trained or prompted on open, scraped, or unverified data (like internet and social media content), hallucinations and biases multiply — and the risk shifts from “bad advice” to legally consequential harm (e.g., biased promotion paths, unsafe coaching guidance, mental health missteps).
What to do:
- Insist on evidence-based, research-derived data as the foundation for any AI coach — ideally from validated talent management frameworks (e.g., KSAs, competencies, 9-box development models).
- Ask vendors to disclose data lineage: where their behavioral and career recommendations come from (see the sketch after this list).
- Require human-in-the-loop design for sensitive or developmental conversations.
Signal to Talent Management: “If we can’t verify the data, we can’t validate the advice.”
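To make those requirements concrete, here is a minimal sketch of an AI coach that can only draw on a curated, evidence-based corpus and attaches data lineage to every answer. All names (SourcedContent, CuratedCorpus, coach_answer) are hypothetical, and the keyword-overlap retrieval is a stand-in for whatever a production system would actually use:

```python
# Minimal sketch, assuming a vetted corpus: the coach retrieves only from
# validated, research-derived content and cites its lineage, or declines.
from dataclasses import dataclass

@dataclass
class SourcedContent:
    text: str        # validated talent-development content
    source: str      # data lineage: the framework, study, or publication
    framework: str   # e.g., "competencies", "9-box"

class CuratedCorpus:
    """Only vetted content is ever retrievable; nothing scraped or open-web."""
    def __init__(self, items: list[SourcedContent]):
        self.items = items

    def retrieve(self, query: str, k: int = 3) -> list[SourcedContent]:
        # Toy keyword-overlap scoring; a real system would use embeddings.
        words = set(query.lower().split())
        scored = [(sum(w in item.text.lower() for w in words), item)
                  for item in self.items]
        ranked = sorted(scored, key=lambda pair: -pair[0])
        return [item for score, item in ranked if score > 0][:k]

def coach_answer(query: str, corpus: CuratedCorpus) -> str:
    hits = corpus.retrieve(query)
    if not hits:
        # If we can't verify the data, we can't validate the advice.
        return "No validated guidance found; routing you to a human coach."
    citations = "; ".join(f"{h.source} ({h.framework})" for h in hits)
    # The generation step (omitted) would be constrained to these passages.
    return f"Guidance grounded in: {citations}"

corpus = CuratedCorpus([SourcedContent(
    "Delegation readiness grows through structured feedback practice.",
    "Validated leadership framework (illustrative)", "competencies")])
print(coach_answer("How do I build delegation skills?", corpus))
```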
2. AI Coaches Are Becoming “Employee Development Advisors” – and That Invokes Duties of Care
Why it matters:
The line between career guidance and psychological influence is blurring. As AI coaches provide reflective feedback, emotional tone analysis, or motivational advice, they begin to occupy a quasi-clinical role. Without oversight, that creates organizational exposure under emerging AI safety, data privacy, and employment law frameworks (e.g., EU AI Act, U.S. EEOC AI guidance).
What to do:
- Ensure that AI coaches are explicitly scoped to developmental learning, not psychological or medical support.
- Establish clear disclosure and consent so employees understand they’re interacting with an AI resource, not a human coach.
- Integrate ethical guardrails and escalation protocols (e.g., flagging sensitive inputs to human coaches or HR professionals; a minimal sketch follows this section).
Signal to Talent Management: “Once an AI gives career or well-being advice, it becomes part of our duty of care to ensure that advice is safe and appropriate.”
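As an illustration of the guardrails above, here is a minimal sketch of a disclosure-plus-triage flow that keeps an AI coach scoped to developmental learning and escalates sensitive inputs to a human. The keyword screen and every name here (Route, SENSITIVE_TERMS, triage) are hypothetical stand-ins for a proper classifier and routing system:

```python
# Minimal sketch: disclose the AI's nature up front, then screen each
# message and escalate anything quasi-clinical to a human professional.
from enum import Enum

class Route(Enum):
    AI_COACH = "ai_coach"       # in-scope developmental topic
    HUMAN = "human_escalation"  # flagged for a human coach or HR professional

DISCLOSURE = ("You are interacting with an AI development resource, "
              "not a human coach or a medical/psychological professional.")

# Out-of-scope signals; a production system would use a trained classifier.
SENSITIVE_TERMS = {"depressed", "anxiety", "medication",
                   "self-harm", "diagnosis", "therapy"}

def triage(message: str) -> Route:
    """Route sensitive inputs to a human instead of letting the AI answer."""
    if set(message.lower().split()) & SENSITIVE_TERMS:
        return Route.HUMAN
    return Route.AI_COACH

print(DISCLOSURE)  # clear disclosure and consent come first
if triage("I feel constant anxiety about my career path") is Route.HUMAN:
    print("This topic is outside the AI coach's scope. "
          "Connecting you with a human coach or HR professional.")
```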
3. Transparency, Auditability, and the “Explainability Mandate”
Why it matters:
Organizations are now being asked to prove that AI recommendations are fair, explainable, and traceable. If an AI coach suggests one employee is “ready for leadership” and another “needs more emotional resilience,” that recommendation must be auditable and non-discriminatory.
What to do:
- Select solutions that enable explainable outputs and documentation trails, so your HR and compliance teams can review how insights were formed (see the sketch after this section).
- Incorporate AI risk assessment into existing talent governance (e.g., alongside performance, DEI, and data privacy audits).
- Align with SOC 2, ISO, and EEOC standards for data security, fairness, and transparency.
Signal to Talent Management: “If you can’t explain how the AI reached its conclusion, or the source of its answer, you can’t defend it in a court of law or in the court of employee trust.”
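One way to picture that documentation trail: a minimal sketch that writes every AI-coach recommendation, together with its inputs, sources, and model version, to an append-only log before the employee ever sees it. The schema and names below are illustrative assumptions, not a standard:

```python
# Minimal sketch: an append-only JSONL audit log so HR and compliance can
# review how each AI recommendation was formed and what evidence it used.
import json, hashlib
from datetime import datetime, timezone

def log_recommendation(path: str, employee_id: str, recommendation: str,
                       inputs: dict, sources: list[str], model_version: str):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee_id": employee_id,
        "recommendation": recommendation,  # e.g., "ready for leadership"
        "inputs": inputs,                  # the evidence the model saw
        "sources": sources,                # data lineage behind the advice
        "model_version": model_version,    # which system produced it
    }
    # A content hash makes individual records tamper-evident during audits.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Usage: log before delivery, so every insight is traceable and reviewable.
log_recommendation(
    "ai_coach_audit.jsonl",
    employee_id="E-1042",
    recommendation="Recommend a stretch assignment on a cross-functional team",
    inputs={"competency_scores": {"collaboration": 4, "strategy": 3}},
    sources=["Validated competency framework v2 (illustrative)"],
    model_version="coach-model-2025.10",
)
```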
Bottom Line
AI coaches have enormous promise for scalable, personalized development, but in the eyes of regulators and employees, they are no longer “experimental chatbots.” They are agents of influence inside your culture. The path forward is curated data, transparent design, and human partnership, not blind trust.
TalentTelligent® AI Coaches are all backed by the world’s largest evidence-based database of talent development content. Learn more…

