Trending Now Q4: Hottest Topics in E&C and HR


From executive orders attempting to override state AI regulations to an independent UK researcher using artificial intelligence to detect fraud in Boston cancer research, Q4 2025 proved that compliance risk no longer respects organizational boundaries, jurisdictional limits, or even national borders. As organizations navigate an increasingly complex environment of state-federal regulatory conflicts, emerging whistleblower mechanisms, and AI-enabled fraud detection, compliance professionals face unprecedented challenges in protecting their organizations while managing expanding personal liability. This quarterly review examines the most critical developments in anti-corruption enforcement, AI regulation, cybersecurity failures, and emerging whistleblower trends that will shape compliance strategies heading into 2026.
This episode of The Ethicsverse examines fourth-quarter 2025 developments across ethics, compliance, and human resources domains, with particular emphasis on regulatory enforcement evolution and emerging technological governance challenges. The review explores the Department of Justice’s resumed Foreign Corrupt Practices Act (FCPA) enforcement following a mid-2025 pause, analyzing the strategic shift toward individual accountability, expedited case resolution, and enhanced declination opportunities for cooperating organizations. Special attention is devoted to the proliferation of artificial intelligence governance frameworks, including examination of executive order conflicts with state-level regulation, the inadequacy of “human-in-the-loop” controls as effective safeguards, and the unprecedented liability exposure facing compliance officers tasked with auditing non-transparent algorithmic systems.
Featuring:
- Karen Moore, Principal, Sounding Board Compliance LLC
- Matt Kelly, CEO & Editor, Radical Compliance
- Nick Gallo, Chief Servant & Co-CEO, Ethico
Key Takeaways
FCPA Enforcement Returns with Strategic Pivot
- The Department of Justice officially announced at its December FCPA conference that the mid-2025 enforcement pause has ended and cases will resume at what officials describe as a “traditional cadence,” with prosecutors explicitly stating they will pursue cases against both foreign and U.S. companies that harm American business interests abroad.
- DOJ leadership has fundamentally shifted enforcement strategy away from corporate penalties toward individual accountability, with prosecutors seeking what one expert described as “scalps not fines” by targeting executives and compliance officers personally rather than treating violations as mere costs of doing business absorbed through corporate settlements.
- The strategic pivot creates unprecedented personal liability exposure for senior leaders and compliance professionals, as recent enforcement actions like the $120 million Millicom settlement demonstrate DOJ’s willingness to pursue substantial penalties while simultaneously holding individuals criminally accountable for compliance program failures.
Voluntary Disclosure Creates High-Stakes Time Pressure
- The DOJ’s enhanced voluntary self-disclosure program offers a presumption of declination for companies that disclose FCPA violations, but the policy’s six-month discovery window and one-year remediation requirement create intense pressure for organizations conducting post-acquisition due diligence to uncover corruption risks within compressed timelines.
- Missing the voluntary disclosure deadline transforms what could have been a declination into a prosecutorial roadmap, as organizations that discover misconduct after the six-month window must disclose findings that DOJ can then use to build enforcement cases without the benefit of cooperation credit.
- Even when organizations successfully obtain declinations through voluntary disclosure, they still face substantial financial penalties and extensive remediation obligations. The program offers mitigation of consequences rather than complete amnesty, and it requires significant upfront investment in compliance infrastructure to position organizations for potential disclosure scenarios.
U.S. and International Anti-Corruption Enforcement Alignment
- The UK Serious Fraud Office issued updated compliance guidance in Q4 2025 that harmonizes with DOJ expectations by emphasizing continuous risk assessment processes rather than periodic one-off evaluations, reinforcing that adequate compliance measures require ongoing monitoring and dynamic adaptation to evolving corruption risks.
- Both U.S. and UK enforcement authorities now explicitly require compliance programs to address not only bribery but also fraud prevention, with the SFO’s guidance incorporating “failure to prevent fraud” alongside traditional anti-corruption requirements and evaluating whether organizations demonstrate meaningful prevention capabilities rather than merely documenting policies.
- International enforcement coordination continues to intensify as authorities increasingly share intelligence and align prosecution strategies, making robust compliance programs essential for multinational organizations regardless of which jurisdiction initiates investigation, particularly given that most major FCPA enforcement actions in recent years have targeted foreign companies operating across multiple regulatory regimes.
Paper Compliance Programs Face Elimination
- Both DOJ and SFO guidance explicitly reject what enforcement authorities derisively call “paper programs” that exist only in policy binders and training slides without meaningful operational integration, measurable impact on employee behavior, or documented effectiveness in preventing misconduct.
- Prosecutors now expect compliance programs to demonstrate tangible prevention capabilities through comprehensive risk assessments that identify specific corruption vulnerabilities, tailored controls directly addressing those identified risks, and quantitative metrics proving the program actually reduces misconduct rather than merely creating documentation that policies exist.
- Organizations relying on generic policies copied from templates, checkbox e-learning modules, and superficial monitoring face severe sanctions when violations occur, as enforcement authorities increasingly view such programs as evidence of willful blindness rather than good-faith compliance efforts, making substantive program investment with demonstrable business integration a critical enforcement defense strategy.
AI Regulatory Chaos Creates Compliance Paralysis
- The Trump administration’s executive order attempting to preempt state-level AI regulation has created regulatory fragmentation rather than the promised clarity, as states continue implementing diverse AI governance frameworks ranging from algorithmic bias audits to transparency requirements while federal agencies lack authority to override state laws through executive action alone.
- Unlike data privacy regulation, where some degree of interstate harmonization has emerged through similar legislative frameworks, AI governance shows no sign of comprehensive federal legislation despite multiple congressional proposals, leaving organizations to navigate contradictory state requirements without the federal preemption clarity that typically resolves such conflicts.
- The executive order’s adversarial positioning of federal agencies against state regulators wastes enforcement resources on jurisdictional battles rather than establishing coherent national governance standards, while simultaneously failing to address how AI-specific regulations intersect with existing data privacy laws, biometric identification restrictions, and algorithmic discrimination prohibitions already in effect across multiple states.
AI Governance Liability Exceeds Technical Competence
- Compliance officers increasingly receive responsibility for AI governance and oversight despite lacking the technical training necessary to audit algorithmic decision-making processes or validate outputs from proprietary “black box” neural network systems, creating asymmetric liability exposure where accountability vastly exceeds practical auditing capability.
- The AI governance challenge differs fundamentally from traditional compliance domains like financial controls, where CFOs possess accounting expertise enabling them to verify the financial statements they certify, whereas compliance officers tasked with AI oversight typically cannot examine the mathematical decision pathways within machine learning models or independently assess whether training data introduces prohibited biases.
- Organizations must provide compliance officers with either substantial technical resources capable of meaningful AI system auditing, explicit contractual liability protections acknowledging the technical limitations of non-expert oversight, or alternative governance structures that distribute AI accountability to data scientists and technical personnel rather than defaulting to compliance as the designated responsible function simply because the role involves enterprise-wide risk management.
Human-in-the-Loop Controls Provide False Security
- The widespread reliance on “human review” as a primary AI control mechanism demonstrates a dangerous misunderstanding of how such controls degrade over time through automation bias, where users develop overconfidence in AI accuracy and gradually reduce scrutiny of algorithmic outputs they are nominally tasked with validating.
- Historical precedents from financial trading systems show that employees required to approve automated decisions inevitably begin clicking through approval prompts without meaningful review once they develop trust in system accuracy, transforming what was designed as a substantive control into a liability-shifting formality that provides false assurance of human oversight.
- Human-in-the-loop controls represent static safeguards operating in dynamic risk environments where control effectiveness deteriorates as AI capabilities improve and recommendation accuracy increases, making comprehensive documentation of AI decision logic, continuous statistical validation of outputs against expected parameters, and periodic testing of human reviewer attentiveness far more critical than nominal human approval checkpoints in risk management frameworks.
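The “periodic testing of human reviewer attentiveness” mentioned above can be operationalized with simple statistics. As a rough illustration, a compliance team could flag reviewers whose approval rates on AI recommendations approach 100%, a common signature of click-through review. The function name, data shape, and thresholds below are hypothetical, not calibrated guidance:

```python
def flag_rubber_stamping(decisions, min_sample=50, max_approval_rate=0.98):
    """Flag reviewers whose approval rate suggests click-through review.

    decisions: dict mapping reviewer name -> list of booleans,
    where True means the reviewer approved the AI recommendation.
    Thresholds are illustrative only; real programs would calibrate
    them against baseline disagreement rates for the task.
    """
    flagged = []
    for reviewer, outcomes in decisions.items():
        if len(outcomes) < min_sample:
            continue  # too few decisions to draw a conclusion
        rate = sum(outcomes) / len(outcomes)
        if rate >= max_approval_rate:
            flagged.append((reviewer, round(rate, 3)))
    return flagged


# Hypothetical review logs: one reviewer approves 99 of 100
# recommendations, another rejects a fifth of them.
logs = {
    "alice": [True] * 99 + [False],
    "bob": [True] * 40 + [False] * 10,
    "carol": [True] * 10,  # below min_sample, skipped
}
print(flag_rubber_stamping(logs))
```

A flagged reviewer is not proof of automation bias, but it is a trigger for the kind of attentiveness testing (e.g., seeding known-bad recommendations) that the bullet above describes.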
AI Use Case Proliferation Outpaces Inventory Capabilities
- The non-deterministic nature of generative AI tools enables continuous expansion of applications beyond initially intended use cases as employees discover new ways to leverage AI capabilities, creating “AI creep” where tools adopted for specific purposes gradually permeate unrelated business processes without compliance awareness or risk assessment.
- Organizations fundamentally lack effective mechanisms to maintain current inventories of AI applications as employees adopt shadow AI solutions, create custom GPTs for departmental workflows, or repurpose approved tools for novel applications, making point-in-time risk assessments obsolete almost immediately upon completion.
- Third-party vendor due diligence questionnaires capture only partial AI exposure because vendors themselves cannot comprehensively disclose AI usage when their own employees utilize AI tools in operational contexts beyond the core product offering, meaning organizations face a “pickling effect” where AI gradually saturates all processes including vendor operations without visibility into actual deployment scope or associated risks.
Basic Access Controls Remain Catastrophically Inadequate
- Multiple major data breaches during Q4 2025 resulted from organizations failing to revoke system access for departed employees, including an education technology company where hackers exploited credentials from an employee who had left over a year earlier with super-administrator IT privileges that were never disabled.
- South Korea’s Coupang, the nation’s Amazon equivalent, suffered a breach exposing personal data of every adult citizen after a former employee who had returned to China retained valid system credentials for more than a year post-departure, demonstrating that even large sophisticated technology companies fail to implement fundamental access management protocols.
- The Federal Trade Commission has now sanctioned three separate organizations in recent months for data breaches involving unrevoked former-employee credentials, including a cryptocurrency firm that lost $186 million when hackers used an old employee ID to penetrate systems, suggesting systemic cultural problems where operational convenience persistently overrides security discipline despite catastrophic breach consequences.
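The failure pattern in each of these breaches, credentials outliving employment, can be caught with a routine reconciliation between HR departure records and the identity provider's active accounts. The records and field names below are hypothetical; a real audit would pull both datasets from the HR system and identity provider rather than hard-coding them:

```python
from datetime import date

# Hypothetical HR departure records: username -> last day of employment.
hr_departures = {
    "jdoe": date(2024, 8, 1),
    "asmith": date(2025, 3, 15),
}

# Hypothetical identity-provider export of accounts still enabled.
active_accounts = {
    "jdoe": {"role": "super-admin"},
    "mlee": {"role": "analyst"},
}


def stale_accounts(departures, accounts, as_of):
    """Return (user, role, days_since_departure) for every account
    that remains active after its holder's departure date."""
    flagged = []
    for user, left_on in departures.items():
        if user in accounts and left_on < as_of:
            days = (as_of - left_on).days
            flagged.append((user, accounts[user]["role"], days))
    return flagged


print(stale_accounts(hr_departures, active_accounts, date(2025, 12, 1)))
```

Run on a schedule, a check like this would have surfaced the year-old super-administrator credentials described above long before attackers did; the harder organizational problem is ensuring the HR and identity systems feeding it stay in sync.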
Independent AI-Enabled Whistleblowers Transform Enforcement
- The Dana-Farber Cancer Institute paid $15 million to settle False Claims Act charges in December 2025 after a UK-based biomedical researcher with no institutional affiliation, working as a hobbyist, used AI image-analysis tools to identify manipulated and duplicate images in research studies that formed the basis for federal grant applications.
- The independent whistleblower received a $2.6 million award for detecting fraud that internal peer review processes failed to catch, demonstrating how AI tools enable globally distributed individuals with no insider access to identify corporate misconduct faster and more comprehensively than traditional internal controls or compliance monitoring.
- Organizations now face an enforcement arms race where sophisticated external actors equipped with AI analytical capabilities can scrutinize publicly available documents, social media, regulatory filings, and research publications to detect anomalies that trigger whistleblower lawsuits, requiring companies to deploy comparable AI-enabled monitoring systems to identify compliance problems before external whistleblowers weaponize findings in qui tam litigation.
Conclusion
The fourth quarter of 2025 crystallized several inflection points that will define the compliance profession’s evolution into 2026 and beyond. The simultaneous intensification of individual accountability in FCPA enforcement, the regulatory vacuum in AI governance, and the proliferation of AI-enabled external whistleblowers collectively signal that traditional compliance approaches emphasizing policies, training, and periodic assessments no longer suffice. Compliance officers must transition from administrative coordinators to technical risk managers capable of auditing algorithmic systems, validating data integrity, and anticipating how technological capabilities enable both new forms of misconduct and new detection mechanisms. Organizations that continue treating compliance as a cost center focused on regulatory box-checking will find themselves increasingly vulnerable to enforcement actions driven by sophisticated external actors, while those that invest in technically competent, proactive compliance functions position themselves for competitive advantage through superior risk management.