Dive Brief:
- The Coalition for Health AI, a network of health systems and technology vendors working to create standards for the safe deployment of artificial intelligence in healthcare, has released a draft framework outlining some of those standards.
- The guidelines published Wednesday lay out a lifecycle for product development, principles for trustworthy AI and potential use cases. CHAI also released a checklist meant to help developers and organizations implementing AI assess and report on their own performance.
- The coalition is seeking public comment on the draft framework for 60 days. CHAI said it will use the feedback to finalize the guidelines and update them as needed.
Dive Insight:
CHAI was founded in 2021 and has since grown to 1,300 member organizations, including tech giants Microsoft, Google and Amazon. The coalition also includes members of the federal government: In March, CHAI announced that Micky Tripathi, National Coordinator for Health Information Technology, and Troy Tazbaz, a director at the Food and Drug Administration, had joined the coalition’s inaugural board as non-voting members.
The group says its aim is to help create a network of quality assurance labs that can evaluate healthcare AI models and to develop best practices for deploying the technology, a key concern for the sector as interest in AI spikes.
Many experts and policymakers are worried that AI is being deployed too rapidly and without adequate oversight, despite assurances from developers and their clients that internal governance controls are keeping any detrimental outcomes from the technology in check.
CHAI’s draft guidelines, called the Assurance Standards Guide, aim to harmonize AI standards in the healthcare sector to avoid those negative results, according to the nonprofit.
Publication of the guidelines shows “that a consensus-based approach across the health ecosystem can both support innovation in healthcare and build trust that AI can serve all of us,” CHAI CEO Brian Anderson said in a statement.
The framework suggests how standards can be evaluated and woven into each stage of the AI development lifecycle, from defining a problem to implementing a small-scale pilot to monitoring the product once it’s been deployed at scale.
Reviewers can use the included checklists to grade their AI's performance and should publicly report algorithms' results in the interest of transparency, CHAI said.
The framework aligns with CHAI’s core principles for trustworthy AI: usability and efficacy, safety and reliability, transparency, equity, and data security and privacy. The guidelines also feature use cases to demonstrate best practices in different scenarios, like using generative AI to extract data from an electronic health record or deploying imaging AI for mammography.
CHAI is far from the only group looking to develop guidelines for responsible AI use in healthcare. More than 200 sets of guidelines have been issued worldwide by governments and other organizations, according to CHAI.
Cloud giant Microsoft, which has been highly active in the health AI space, launched another AI governance group earlier this year, the Trustworthy & Responsible AI Network, which aims to operationalize CHAI's standards.
In building its own standards, the private sector is filling a gap left by the federal government, which has yet to issue a comprehensive regulatory structure for overseeing the futuristic technology in healthcare.
That could soon change. An HHS task force is working on a health AI oversight plan to comply with an executive order issued in October.