Artificial intelligence could be a boon for the healthcare sector, cutting down clinicians’ administrative work, reducing spending and accelerating drug discovery.
But the technology may need guardrails to function safely and effectively; without them, AI could harm patients or exacerbate existing inequities, lawmakers and witnesses said at a Senate Health, Education, Labor and Pensions subcommittee hearing on Wednesday.
“The only time you really get things for the little guy is when the big guys want something,” said Sen. Ed Markey, D-Mass. “So in AI right now, the big guys want something. We’ve got to make sure we put in all the protections for the little guys, and we’ve got to do it simultaneously, not sequentially.”
Senators at the hearing, many of them Democrats, were intrigued by the potential of AI in healthcare, but questioned witnesses on the technology’s risks and on what government oversight might be needed.
“If we're to write rules surrounding AI, let's be careful not to destroy innovation or allow those who would harm us to get ahead of us,” said Sen. Roger Marshall, R-Kan. “After all, artificial intelligence and machine learning have been making remarkable discoveries and improving healthcare for some five decades without much government interference.”
One buzzy, newer use of the technology is generative AI, which can create new content like text or images. Tech giants like Microsoft, Oracle and Amazon have developed notetaking products that aim to reduce the amount of time clinicians spend documenting patient visits.
Some witnesses and lawmakers praised the potential of similar products, arguing they could curtail providers’ heavy administrative burdens.
Keith Sale, vice president and chief physician executive of ambulatory services at the University of Kansas Health System, told lawmakers that AI documentation tools let him shave hours off the time he would otherwise spend in the clinic reviewing notes and entering them into the electronic health record.
“It’s a tool. It is not something that should replace what I decide in practice or how I make decisions that affect my patients,” Sale said. “So ultimately, it is designed to enhance my practice, not replace me in practice.”
AI could also allow clinicians to access and analyze data they couldn't on their own, and the technology could serve as a platform for sharing information and evaluating the performance of the healthcare system, said Kenneth Mandl, a professor at Harvard and director of the computational health informatics program at Boston Children’s Hospital.
“Amazing concept, to go from spending 18% of our GDP down to maybe 8% or 10%, like the rest of the world. And that's one way we can move in that direction,” said Sen. John Hickenlooper, D-Colo.
But some applications of AI, like the use of predictive algorithms by insurers to determine what care should be covered, already operate with limited human oversight unless decisions are challenged, said Christine Huberty, supervising attorney at the Greater Wisconsin Agency on Aging Resources.
Her agency, whose work includes providing legal assistance to seniors facing coverage denials, used to see only one or two such cases per year. Now it handles that many in a week, she said.
A Stat investigation published this spring found Medicare Advantage insurers used algorithms to predict how much care a patient would need, driving coverage denials and pushing patients to self-pay or spend time on lengthy appeals.
In addition, AI is only as good as the data it’s trained on, and datasets dominated by white and male patients could bias algorithms against other groups, said Sen. Ben Ray Luján, D-N.M.
Luján pointed to a 2020 study published in JAMA that found deep learning algorithms using U.S. patient data were disproportionately trained on cohorts from California, Massachusetts and New York, with little representation from the rest of the country.
“The way I'm looking at this, we need technology to help improve health outcomes, reduce health disparities, not exacerbate them. And it's clear that AI has the power to do both,” Luján said.
But designing specific regulations is difficult when the field is changing so rapidly, and the government will need to develop evergreen approaches to monitoring the technology, Harvard’s Mandl said.
To keep up, regulators will also need to build their own AI expertise and stay close to the companies developing these products, where most of the knowledge currently lies, said Thomas Inglesby, director of the Johns Hopkins Center for Health Security.
“Congress should not take their eye off some of the most serious risks, because if those risks become a major problem — either in bias or [...] around life science, pandemic risks or others — I think those kinds of developments could derail or really distract the AI companies, could distract the government for a long time,” he said.