Why AI Governance Matters for Healthcare Providers and How NJII and Cognome Are Making It Real

Artificial intelligence is no longer confined to research labs. Today, AI shapes clinical workflows in triage, imaging, risk stratification, chart abstraction, and operational decision-making. The potential to improve healthcare is enormous, but so are the risks. AI models can introduce bias, lose accuracy over time, and sometimes undermine patient safety or equity. For health systems, strong AI governance is now a necessity to protect patients and foster responsible innovation.

Why Healthcare Needs AI Governance

AI influences critical decisions—from diagnoses to treatment plans and resource allocation. Mistakes in these systems can have real, sometimes severe, consequences for patients. When AI models are trained on unrepresentative data or use flawed proxies, they can amplify disparities and erode trust. Healthcare organizations also face a complex web of regulations, from FDA oversight to HIPAA and state laws. Without proper controls, health systems may lose visibility into how AI models work or change, and subtle shifts in EHR data or patient populations can silently degrade performance.

What Makes AI Governance Effective?

A robust governance program brings together clinical leaders, informatics experts, compliance officers, privacy and security professionals, operations teams, and patient advocates. The first step is to maintain a living inventory of all AI tools, both in-house and from vendors, and classify them by clinical impact.
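To make the inventory concrete, here is a minimal sketch of what a single inventory record might capture, written in Python purely for illustration. The field names, the three risk tiers, and the example vendor model are assumptions chosen for this sketch, not a prescribed schema or an NJII/Cognome artifact.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Illustrative impact tiers; each organization defines its own criteria."""
    HIGH = "high"      # directly informs diagnosis or treatment
    MEDIUM = "medium"  # influences triage, prioritization, or resource use
    LOW = "low"        # administrative or back-office use only


@dataclass
class AIInventoryEntry:
    """One record in a living inventory of deployed AI tools."""
    name: str
    vendor: str                      # "internal" for in-house models
    intended_use: str
    clinical_setting: str
    risk_tier: RiskTier
    model_version: str
    last_local_validation: date | None = None
    known_limitations: list[str] = field(default_factory=list)


# Example entry for a hypothetical vendor triage model.
sepsis_alert = AIInventoryEntry(
    name="ED Sepsis Early Warning",
    vendor="ExampleVendor",
    intended_use="Flag emergency department patients at elevated sepsis risk",
    clinical_setting="Emergency department",
    risk_tier=RiskTier.HIGH,
    model_version="2.3.1",
    last_local_validation=None,
    known_limitations=["Not validated on pediatric patients"],
)
```

Even a simple structure like this makes gaps visible at a glance: any high-risk entry with no local validation date is an immediate candidate for review.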

Before rolling out any model, organizations should validate its performance on local data and analyze results across subgroups defined by race, age, or comorbidities. Comprehensive documentation, such as model cards and dataset datasheets, should clearly state each model’s purpose, limitations, training history, and version changes.
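As one illustration of what local subgroup validation can look like in practice, the sketch below computes AUROC separately for each subgroup in a local validation dataset. The column names (y_true, y_score, and the demographic grouping column) and the minimum-stratum-size cutoff are assumptions made for this example; a real validation would also examine calibration, confidence intervals, and clinically meaningful operating points.

```python
# Minimal sketch of a subgroup performance check on local validation data.
import pandas as pd
from sklearn.metrics import roc_auc_score


def subgroup_auroc(df: pd.DataFrame, group_col: str,
                   label_col: str = "y_true",
                   score_col: str = "y_score",
                   min_n: int = 100) -> pd.DataFrame:
    """Compute AUROC separately for each subgroup, skipping tiny strata."""
    rows = []
    for group, sub in df.groupby(group_col):
        # Skip strata that are too small or contain only one outcome class.
        if len(sub) < min_n or sub[label_col].nunique() < 2:
            continue
        rows.append({
            group_col: group,
            "n": len(sub),
            "prevalence": sub[label_col].mean(),
            "auroc": roc_auc_score(sub[label_col], sub[score_col]),
        })
    out = pd.DataFrame(rows)
    return out.sort_values("auroc") if not out.empty else out


# Usage: df holds local outcomes and model scores joined with demographics.
# report = subgroup_auroc(df, group_col="race")
# print(report)
```

A large gap in AUROC or prevalence-adjusted performance between subgroups is exactly the kind of finding that should be documented in the model card and escalated before go-live.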

Vendor contracts must require auditability, timely notification of updates, access to logs, and clear remediation processes. Continuous monitoring in production is essential to track performance, detect data drift, and ensure fairness. When issues arise, incident response playbooks should guide swift action. Above all, AI outputs must be integrated into clinical workflows in ways that preserve human oversight and empower clinicians to intervene when necessary.
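Drift detection can take many forms; one widely used screening statistic is the population stability index (PSI), which compares the distribution of a feature or model score at baseline against its distribution in recent production data. The sketch below is a minimal implementation under that assumption; the bin count and the 0.1/0.25 thresholds are common rules of thumb rather than regulatory standards, and trigger_incident_review is a hypothetical placeholder for whatever an organization's playbook specifies.

```python
# Minimal sketch of one common drift check: the population stability index (PSI).
import numpy as np


def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline distribution and recent production data."""
    # Bin edges come from baseline quantiles so both samples are bucketed
    # the same way; duplicates from tied quantiles are collapsed.
    edges = np.unique(np.quantile(baseline, np.linspace(0, 1, bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Small floor avoids division by zero in empty bins.
    base_pct = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    curr_pct = np.clip(curr_counts / curr_counts.sum(), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


# Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate.
# if population_stability_index(train_scores, last_week_scores) > 0.25:
#     trigger_incident_review()  # hypothetical hook into the incident playbook
```

Checks like this are cheap to run on a schedule, which is what makes it realistic to watch every production model rather than only the ones that have already caused problems.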

Real-World Lessons: When Governance Fails

The need for governance is not theoretical. In 2019, researchers led by Obermeyer found that a widely used algorithm underestimated the health needs of Black patients because it used healthcare spending as a proxy for illness. The result: Black patients received less access to care-management programs, deepening existing disparities. Similarly, studies have shown that some pulse oximeters and measurement tools perform inconsistently across different skin tones, highlighting the importance of rigorous evaluation and ongoing monitoring.

NJII and Cognome: Turning Governance Into Practice

In March 2025, the New Jersey Innovation Institute (NJII) partnered with Cognome to help health systems adopt AI responsibly. NJII brings deep expertise in healthcare transformation and professional services, while Cognome contributes advanced healthcare AI models and the ExplainerAI™ governance platform.

ExplainerAI is purpose-built for healthcare, emphasizing transparency, explainability, and bias reduction. The platform aligns with federal frameworks like the NIST AI Risk Management Framework and integrates AI inventory and risk assessment directly into procurement and clinical operations. This ensures that governance is not an afterthought, but a continuous process from evaluation through retirement.

NJII and Cognome also provide documentation templates, procurement language to guarantee auditability, and services for local validation and clinician-in-the-loop pilots. Their monitoring tools track model performance, drift, and fairness in real time, helping organizations respond quickly to emerging issues.

How Health Systems Can Get Started

Building AI governance doesn’t have to be overwhelming. Here’s a practical roadmap:

  • This quarter: Inventory all AI tools and assess clinical risk.
  • Within 90 days: Require subgroup performance reports and validate high- and medium-risk models on local data.
  • Within six months: Establish a cross-functional AI governance committee and adopt documentation standards like model cards and datasheets.
  • Ongoing: Monitor models in production, enforce vendor audit clauses, and train clinicians on oversight and escalation.

For any organization piloting or deploying AI in clinical or operational settings, now is the time to build governance frameworks that protect patients and support innovation. NJII and Cognome offer a comprehensive suite of platforms, validation services, and procurement templates to help health systems operationalize safe and responsible AI adoption.

References