The Ethicsverse Healthcare: AI Risk, FCA Liability, and the Gaps Already Costing Organizations


You signed the contract. The vendor said it was HIPAA-compliant. Six months later, a False Claims Act audit is underway — and no one can explain how the model was trained. This episode’s discussion covers the most pressing gaps that compliance, ethics, and HR professionals are facing right now: underprepared vendor contracts, False Claims Act exposure from AI-generated coding errors, HIPAA liability tied to cloud-based AI tools, hallucination risk and how it actually works, and what a realistic human-in-the-loop oversight process looks like. Rather than a theoretical overview, this session delivers practical frameworks — from pre-implementation vendor questions to compliance work plan integration — that professionals can apply immediately to protect their organizations and elevate the function of compliance in an AI-accelerated environment.
This episode of The Ethicsverse Healthcare examines the intersection of artificial intelligence adoption and regulatory compliance in healthcare settings, with particular attention to the liability exposures, governance deficiencies, and operational risks that accompany the rapid proliferation of AI-enabled clinical and revenue cycle tools. Drawing on practitioner experience in health information management, compliance program design, and AI implementation consulting, the discussion identifies education and policy alignment as foundational prerequisites to responsible AI deployment — arguing that organizations are frequently implementing AI tools without reconciling those tools against existing policies, training staff adequately, or structuring meaningful oversight processes.
Featuring:
- Leslie Boles, Co-Owner & President, Revu Healthcare
- Nick Gallo, Chief Servant & Co-CEO, Ethico
Key Takeaways
Education and Policy Alignment Are the Foundation of AI Governance
- Before implementing any AI tool, organizations must assess whether existing policies contradict the processes the technology is designed to replace or transform, since compliance cannot function when its foundational rules are internally inconsistent.
- Staff education must move beyond basic AI literacy — understanding the difference between a large language model, generative AI, and automation — and advance into workforce integration, change management, and practical skill development for those whose roles will be affected.
- Organizations that treat AI education as just another compliance training module will face the same resistance that already plagues training programs; reframing AI education as workforce enablement makes it both more palatable and more effective.
Governance Starts with Honest Internal Conversations
- Before selecting a tool or vendor, organizations should convene a cross-functional committee or working group to define what they are actually trying to achieve with AI, where genuine operational gaps exist, and whether the organization is structurally ready for specific technologies.
- Treating AI adoption as a strategic decision rather than a trend-following exercise helps organizations avoid common failure modes, such as signing up for agentic AI without the governance infrastructure to manage autonomous decision-making systems.
- These conversations do not need to be public, but they must be honest — surfacing not just aspirational use cases but real data quality problems, process inconsistencies, and workforce readiness gaps that AI will amplify rather than fix.
AI Produces Risk at Scale — Faster Than Traditional Oversight Can Track
- Unlike human-generated work, AI tools can produce patterns of clinical documentation, billing, or coding output at a volume and velocity that would historically take years to accumulate — compressing the timeline for False Claims Act exposure dramatically.
- The compliance function must evolve from a reactive posture toward proactive pattern detection, because waiting to identify problems after AI has already generated thousands of data points is not a viable risk management strategy.
- Organizations that do not build monitoring and course-correction mechanisms into their AI workflows before deployment are setting themselves up for liability that will be difficult to attribute, costly to remediate, and damaging to organizational trust (a minimal monitoring sketch follows this list).
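To make proactive pattern detection concrete, here is a minimal Python sketch. It assumes a hypothetical claim record with an AI-assigned CPT code and a historical baseline of code frequencies; every name is illustrative, not any real vendor's API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    cpt_code: str        # procedure code assigned by the AI tool (hypothetical field)
    ai_generated: bool

def flag_code_drift(claims: list[Claim], baseline: dict[str, float],
                    threshold: float = 0.02) -> list[str]:
    """Flag CPT codes whose share of AI-generated output exceeds the
    historical baseline share by more than `threshold` (absolute)."""
    ai_claims = [c for c in claims if c.ai_generated]
    if not ai_claims:
        return []
    counts = Counter(c.cpt_code for c in ai_claims)
    total = len(ai_claims)
    flags = []
    for code, n in counts.items():
        share = n / total
        expected = baseline.get(code, 0.0)
        if share - expected > threshold:
            flags.append(f"{code}: {share:.1%} of AI output vs. "
                         f"{expected:.1%} baseline; route for human audit")
    return flags
```

Run against each daily or weekly batch, a check like this surfaces a drifting coding pattern after hundreds of claims rather than after an auditor finds thousands.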
Contractual Liability Is the Most Overlooked AI Risk
- Compliance officers and legal teams must be directly involved in reviewing AI vendor agreements before execution — not as a formality, but as a substantive risk function, because vendors frequently use contractual language that obscures where liability actually sits in the event of a breach, audit, or system failure.
- Any vendor that classifies the training data, model architecture, or algorithm redirection process as proprietary and non-disclosable should be treated as a red flag, particularly in coding, billing, and clinical decision support contexts where regulatory accountability is inescapable.
- Verbal assurances from vendors — including claims of SOC 2 compliance, HIPAA readiness, or cloud security partnerships — are not substitutes for explicit, negotiated contractual provisions that define indemnification, liability allocation, data use restrictions, and breach response obligations.
Hallucination Risk Is Dynamic, Not Fixed
- AI hallucination is not a static background risk that organizations can acknowledge and set aside; it is a function of model architecture, context window management, and usage patterns, meaning it can change over time within the same tool as operational conditions evolve.
- Compliance professionals who are not personally experimenting with AI models — including Claude, ChatGPT, Perplexity, and others — are missing the hands-on exposure needed to recognize hallucinated outputs in practice, because you cannot identify a failure mode you have never experienced firsthand.
- Organizations should ask vendors explicit questions about how context windows are managed at the user and organizational level, how the model updates itself as coding rules and regulations change, and what accuracy benchmarks were established during the training and testing phases.
AI Vendor Due Diligence Requires Pre-Implementation Vetting Questions
- Before signing any AI vendor agreement, compliance teams should ask who trained the model, what reference sources were used, how the model will be redirected when errors are identified, and what the accuracy rate was across the claims or cases processed during the training period.
- Healthcare organizations must verify whether a vendor’s model was trained by individuals who carry both clinical and coding expertise, because that combination can produce a tool that operates outside the defined scope of practice for non-clinical coding professionals — creating its own compliance exposure.
- Testing should precede any full deployment. Running trial sets of claims through the system, comparing AI output against established benchmarks, and evaluating whether the tool’s behavior aligns with organizational policy before scale-up is not optional due diligence — it is the baseline (a trial-run sketch follows this list).
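As one hedged illustration of that trial-run step, the sketch below compares AI-assigned codes against codes assigned by credentialed human coders on the same claims, reporting accuracy and the disagreements to review; the CPT codes shown are invented for the example.

```python
def trial_run_accuracy(pairs: list[tuple[str, str]]) -> dict:
    """Compare (ai_code, human_gold_code) pairs from a trial batch;
    return overall accuracy plus the disagreements for human review."""
    disagreements = [(ai, gold) for ai, gold in pairs if ai != gold]
    total = len(pairs)
    accuracy = (total - len(disagreements)) / total if total else 0.0
    return {"total": total, "accuracy": accuracy,
            "disagreements": disagreements}

# Hypothetical trial: 3 of 4 AI-assigned E/M codes match the human coder.
result = trial_run_accuracy([("99213", "99213"), ("99214", "99213"),
                             ("99213", "99213"), ("99215", "99215")])
print(f"Trial accuracy: {result['accuracy']:.0%}")  # Trial accuracy: 75%
```

Comparing that measured accuracy against the vendor's claimed benchmark, before scale-up, is exactly the kind of evidence a compliance file should contain.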
A Realistic Human-in-the-Loop Framework Has Three Phases
- The beginning phase encompasses all proactive pre-implementation activities: vetting vendors, asking detailed questions about model training and decision-making logic, reviewing contracts, and establishing the governance structures that will guide ongoing oversight.
- The middle phase involves active operational oversight — building AI-specific KPIs and dashboards, tracking how often outputs are accepted versus overridden by human reviewers, monitoring how frequently the system stalls or requires redirection, and using that data to continuously evaluate model performance (a minimal KPI sketch follows this list).
- The end phase focuses on scaling decisions: organizations should resist the impulse to scale broadly from a single successful use case and instead apply learnings from one controlled implementation to adjacent areas incrementally, using dashboard data to validate readiness before expanding.
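A minimal sketch of the middle-phase KPIs, assuming a hypothetical review log that records whether each AI output was accepted as-is and whether the system stalled or needed redirection:

```python
from dataclasses import dataclass

@dataclass
class ReviewEvent:
    output_id: str
    accepted: bool      # reviewer accepted the AI output without changes
    redirected: bool    # system stalled or required human redirection

def hitl_kpis(events: list[ReviewEvent]) -> dict[str, float]:
    """Middle-phase oversight KPIs: how often humans override the AI,
    and how often the system requires redirection."""
    n = len(events)
    if n == 0:
        return {"override_rate": 0.0, "redirection_rate": 0.0}
    overrides = sum(1 for e in events if not e.accepted)
    redirects = sum(1 for e in events if e.redirected)
    return {"override_rate": overrides / n,
            "redirection_rate": redirects / n}
```

Trended on a dashboard, a rising override rate is an early signal that the model has drifted away from policy or current coding rules.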
AI Risk Should Become a Formal Compliance Program Element
- Compliance professionals should proactively incorporate AI risk into their annual work plans and compliance program assessments right now, without waiting for the OIG to formally codify it as an eighth element of an effective compliance program — the regulatory direction is clear, and the risk is present today.
- Adding AI as a discrete, identifiable line item on the compliance work plan — with associated risk ratings, mitigation strategies, and audit activities — gives the compliance function the organizational standing and budget justification it needs to address AI risk substantively (one way to structure such a line item is sketched after this list).
- Including AI readiness questions in compliance program assessments, and facilitating conversations with operational leaders about how AI is being used across departments, allows compliance to fulfill its role as the circulatory system of the organization rather than reacting to problems after they have already metastasized.
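One possible way to make AI a discrete, auditable line item is to give it the same structured fields as any other work plan entry. The schema below is a sketch; every field name and value is an assumption for illustration, not an OIG-prescribed format.

```python
from dataclasses import dataclass

@dataclass
class WorkPlanItem:
    risk_area: str
    risk_rating: str             # e.g., "high", "medium", "low"
    mitigation: list[str]
    audit_activities: list[str]
    owner: str

ai_item = WorkPlanItem(
    risk_area="AI-assisted coding and clinical documentation",
    risk_rating="high",
    mitigation=["vendor contract review", "human-in-the-loop sign-off"],
    audit_activities=["quarterly trial-batch accuracy audit",
                      "override-rate dashboard review"],
    owner="Compliance Officer",
)
```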
Section 1557 and Evolving Regulatory Obligations Demand Proactive Legal Review
- Section 1557’s patient care decision support tool provisions, which took effect in May 2025, are not yet well understood by most healthcare organizations, and many are still applying the language only to narrow categories of tools like ambient scribes rather than the full spectrum of AI being deployed.
- Legal and compliance teams should treat the current regulatory language as a floor, not a ceiling — the framework is likely to expand to cover a broader range of AI applications, and organizations that are not already assessing their posture against existing requirements will be significantly behind when that expansion occurs.
- Bias embedded in AI tools trained on historical healthcare data presents a distinct compliance and equity risk that requires its own audit methodology; organizations can leverage AI tools themselves — fed with public regulatory guidance documents — to help design monitoring frameworks that surface these disparities before they affect patient care outcomes (a simple disparity check is sketched below).
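As a hedged starting point for that audit methodology, the sketch below computes a simple disparity ratio: each patient group's adverse-outcome rate from an AI tool (for example, a denial or flag rate) relative to a reference group. The group labels and counts are invented for illustration.

```python
def disparity_ratios(outcomes_by_group: dict[str, tuple[int, int]],
                     reference: str) -> dict[str, float]:
    """Compare each group's adverse-outcome rate, given as (events, total),
    to a reference group; ratios far from 1.0 warrant a bias investigation."""
    ref_events, ref_total = outcomes_by_group[reference]
    ref_rate = ref_events / ref_total
    ratios = {}
    for group, (events, total) in outcomes_by_group.items():
        rate = events / total
        # Ratio is undefined if the reference rate is zero.
        ratios[group] = rate / ref_rate if ref_rate else float("nan")
    return ratios

# Hypothetical denial counts per group: (denied, total).
print(disparity_ratios({"group_a": (40, 1000), "group_b": (72, 900)},
                       reference="group_a"))
# group_b's ratio of 2.0 would trigger a deeper audit of the model's inputs.
```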
Compliance Professionals Must Become Proactive Risk Architects
- The AI era is a genuinely rare opportunity for the compliance function to shed its historically reactive identity and demonstrate strategic value — the organizations whose compliance teams build new governance muscles now will be disproportionately better positioned as AI accelerates across healthcare.
- Speed of expertise application is becoming the defining competitive variable for compliance professionals; the ability to rapidly deploy established risk management frameworks against new AI risks, rather than starting from scratch each time, is what will distinguish high-impact compliance programs from those that continue to lag.
- Reframing AI-related training and governance work as workforce enablement rather than compliance overhead — hiding the medicine in the peanut butter, as one speaker put it — is a practical strategy for overcoming organizational resistance and getting the cross-functional collaboration that effective AI governance requires.
Conclusion
This episode’s conversation makes one thing unmistakably clear: AI risk in healthcare is not a future problem — it is a present one, already generating False Claims Act exposure, HIPAA liability, and contractual gaps in organizations that believed vendor assurances were sufficient. The path forward for compliance, ethics, and HR professionals is not to wait for regulatory agencies to formalize new requirements, but to apply the risk management expertise already in their toolkit — faster, more proactively, and with a deeper understanding of how these technologies actually work. From pre-implementation vetting to AI dashboarding to work plan integration, the frameworks exist. What the profession needs now is the urgency and organizational influence to use them.