ORLANDO, Fla. — Over the last year, Microsoft has announced a number of artificial intelligence initiatives in healthcare, including launching generative AI models aimed at reducing administrative burden on clinicians and a partnership with health records software giant Epic to more quickly test and deploy the models in practice.
Now, the technology giant is doubling down on efforts to responsibly implement AI amid thorny questions about the models’ potential for mistakes and bias.
Last week, Microsoft — along with 16 health systems and two healthcare technology companies — launched a health AI governance network called the Trustworthy & Responsible AI Network, or TRAIN.
David Rhew, Microsoft’s global chief medical officer and vice president of healthcare, shared TRAIN’s goals in an interview with Healthcare Dive at the HIMSS conference.
Rhew also discussed the federal government’s role in overseeing health AI and advice for hospitals interested in standing up the technology in a careful and ethical way.
Editor’s note: This interview has been lightly edited for brevity and clarity.
HEALTHCARE DIVE: Why is TRAIN needed, and why now?
DAVID RHEW: We have recognized across the industry that, while there's amazing potential for AI, we also have to adhere to responsible AI principles. CHAI, the Coalition for Health AI, has provided a nice blueprint for how to look at responsible AI: defining key principles and assessing usefulness, determining whether or not there's bias in algorithms, looking for drift, governance, etcetera.
We're trying to take those key principles and operationalize them. That means you have to leverage tools to make these principles come to life, like testing algorithms in local environments where data never leaves your premises to verify that they're working. We want to do this post-deployment as well.
We want to do an inventory of all the algorithms to make sure we know what type of outcomes to expect, as well as any unintended consequences. We want to be able to create governance models that are easy and semi-automated, so that potentially even a one-person IT department can take advantage of responsible AI.
CHAI includes representatives of the federal government. TRAIN does not. Why?
In order to implement AI, we have to work with organizations that will be implementing it. These are frontline organizations that span small, medium and large hospital providers. Beyond providers, there are going to be many other participants in TRAIN, specifically technology partners, that are going to help us build guardrails to address many of the things people are looking at.
People have asked me, “What’s the difference between CHAI and TRAIN?” CHAI is really more about the what, defining what responsible AI is. What are the standards that need to be applied? And then TRAIN is more about the how. How do you implement responsible AI? How do you apply these technologies?
These are all the key implementation challenges that we have to figure out. And it's not going to be figured out by any one company or even the government. It's going to be figured out by the organizations that are implementing it.
It seems there could be a role for government when it comes to overseeing implementation of these technologies. Do you disagree?
This ties into the relationship between CHAI and TRAIN. The work that CHAI is doing as a public-private model, where you’ve got multi-stakeholder input to define what we should be doing to advance responsible AI, is essential. You have to have that foundation.
But once you get that blueprint, somebody has to now figure out how to build a house. We are now at the point where we have to have the tools to construct processes that allow us to apply AI responsibly. And the government is involved in the CHAI process.
Have you gotten any interest from other health systems in joining TRAIN?
Yes, and I want to make sure people know as well — Microsoft is the first technology partner in the consortium. We expect there to be a large number of partners that are all going to help build these guardrails and enable the implementation of this in health systems. And that's one of the exciting opportunities here: empowering an entire ecosystem to solve a problem that no one company or organization can solve alone.
What about other AI developers like Google — is TRAIN open to them?
You have to keep in mind that a lot of these technologies are built on [Microsoft cloud] Azure. So no, we're not restricting that. But this is why we start with providers, and the providers are the ones that are actually going to be implementing it, so it'll be important that they can utilize these tools.
HHS will issue a plan to regulate AI in healthcare by the end of the year. Does Microsoft think more federal action is needed?
When I mentioned that no one company or organization can tackle these responsibilities by itself, that includes the federal government. The things that we know need to be done to apply responsible AI involve testing the algorithms on very large, diverse datasets. That's going to be very challenging for an organization that doesn't have that data.
So what would likely happen is that they would pass it back to the developers to do that. And developers don't have access to that either. So it's important that we build these public-private partnership models to enable us to be able to achieve some of the goals.
Developing standards, which is what CHAI is doing, requires multi-stakeholder input. It should include patients. It should include providers, regulators, industry. But once you actually have identified the things that should be done, then you need to go to the next stage of how you do it. And that’s where TRAIN comes in. But CHAI has a feedback loop with the government — we’re all working in coordination.
What strategy would you like Washington to take when it comes to regulating AI?
I think the approach they're taking right now with the public-private partnerships is the right approach. It’s a multi-stakeholder attempt to identify what the challenges are and figure out viable approaches.
There needs to be a paradigm change. In the past, it was about developers testing AI and getting the [Food and Drug Administration’s] seal of approval, and then you kind of forget about it and move on. But AI has a lifecycle. It can change over time. So it’s important to periodically monitor the outcomes associated with AI.
Now, as developers, regulators and implementers, each of us has a shared responsibility. We have to recognize that in order for AI to be done responsibly, you can’t rely on any one entity. To solve this, you’re going to need all three working in close collaboration.
In other challenging areas in healthcare, like quality improvement and patient safety, we have recognized that there needs to be more than a one-time assessment. It’s an ongoing assessment. So we'll see down the road how that evolves.
What’s your advice for a hospital interested in standing up AI?
This is the reason why we have a coalition: ultimately, no small hospital has the resources to do everything we just talked about. It's going to have to be done in collaboration with other hospitals or other provider groups, and with technology to make it more efficient. Even large academic medical centers can't do this at the scale that’s required, because there's just too much AI coming in.
So what we have found is that the technology is a critical component of this, because it's the only way for us to automate or semi-automate these processes to make AI more efficient — both time-efficient and cost-efficient. That’s essential because smaller hospitals and under-resourced settings don't have the ability to invest in resources to solve this. They need technology to — hopefully — help them.