‘We Are Definitely In Uncharted Territory’: AI Innovation, Regulation And Health Equity Prospects
Executive Summary
A panel of industry experts from Microsoft, the Singaporean government and Scripps Health highlighted concerns and opportunities in the emerging AI regulatory landscape at the recent HIMSS AI conference in San Diego.
At the dawning of AI regulation, there is more than a little trepidation among stakeholders about the new era the world is entering.
“We are definitely in uncharted territory right now,” noted Harjinder Sandhu, chief technology officer at Microsoft Health and Life Sciences Platforms and Solutions, at the HIMSS AI Healthcare Forum, held 14-15 December in San Diego, CA.
He emphasized that when it comes to regulating AI systems, health care cannot be considered in isolation. “One thing to keep in mind is that this regulatory discussion – the one that everybody wants to have and should be having – is a much broader concern than just health care,” Sandhu explained during a panel on health care innovation, regulation, and policy in the age of AI.
“When you think of regulation, the systems that you use in health care were different [from] the systems that were powering other [industries]. What's happening with the generative AI systems is that more and more you're going to see the same AI systems getting used in health care,” he said.
Sandhu noted there is a differentiating factor in health care, insofar as it is “one area where you immediately see the negative outcomes of systems.”
There is specialization happening on the AI innovation front as well. Google – a core competitor to Microsoft-backed OpenAI, the maker of ChatGPT – announced last week in a blog post the launch of MedLM, a suite of AI models specifically designed for health care.
Panelist Raymond Chua, deputy director-general of health and health regulation at the Ministry of Health, Singapore, stressed the importance of collaboration between industry experts and regulators to ensure the safe development and implementation of AI.
But there’s a challenging balance to be struck in developing guardrails without stifling innovation, Sandhu said.
“There’s usually a tension between innovation and regulation. The biggest problem that companies face right now is … being able to put some boundaries or clarity around what level of risk is acceptable, what levels of diligence and testing are reasonable.”
Panelist Sanjeev Bhavnani of Scripps Health praised the US Food and Drug Administration for having done a “good job” regulating the 700 machine learning devices that are currently on the market. But he said next-generation AI systems will be more sophisticated and will need to be regulated differently.
“How can we apply the same benefit-risk approach but also ask the question [of] how might that be modified with the new tools that are being developed to ensure that the AI in development is equitable and won’t preserve biases that have been built into the US health care systems for generations and that these tools and devices are accessible to all?” he said.
He continued, “If we can do that, then now we are looking at policy of action, and there’s a social good. Because the last 50 years of care, at least in the United States, has largely not been equitable, and one can argue that the digital divide has been propagated through the availability of machine learning devices.”
Regulatory Outlook
After months of debate, the European Parliament, European Council and European Commission on 8 December reached a provisional agreement on what stands to be the world’s first comprehensive set of rules governing the development and use of artificial intelligence – the EU AI Act. (Also see "Medtech Further Examines Repercussions With EU On Cusp Of Adopting World-First AI Act" - Medtech Insight, 14 Dec, 2023.)
While the final text has yet to be released, the new regulation is expected to have far-reaching impacts across stakeholder communities and industry sectors, including health care.
Seen as the most sweeping set of rules facing AI technologies worldwide, the AI Act includes measures to prohibit certain AI practices and impose transparency rules for systems intended to interact with “natural persons,” as well as those generating or manipulating image, audio or video content, according to the National Law Review. The EU AI Act will apply to any person or entity that offers AI system services in any market in the EU, regardless of whether they are established within the EU or in third countries such as the US, and will also apply to users of AI systems within the EU.
While the EU AI Act has been drawing most eyes, China has been rolling out its own regulations in recent years on recommendation algorithms and synthetically generated content, followed by a draft regulation in April 2023 to address AI chatbots specifically.
Other countries like the US, UK and Singapore have created their own sets of proposals to regulate AI.
“We have not yet come up with a definite stance [on] how we want to regulate [AI],” noted Chua. Singapore continues to take a “wait-and-see” approach as AI governance develops in other jurisdictions, while taking measures to promote fairness, accountability, transparency and robustness, and to develop a system that is free from risks and biases.
Moderator Brigid Bondoc, partner and life sciences attorney at the law firm Morrison Foerster, noted that in the US, President Biden’s Executive Order on AI governance, issued on 30 October, focuses to a large extent on “elimination of bias, transparency and patient safety with respect to the health care [sector] – targeting the areas that pose the greatest risk to society.”
Reuters reported earlier this month that 28 health care companies had signed Biden’s voluntary commitments to ensure the safe development of AI.
“Regulation has become vital, but we’re definitely very early,” Sandhu said. “We are going to be in a world that’s dominated by LLMs for the foreseeable future,” and these systems are becoming much more complex, with researchers still working to understand their behaviors. Regulation will have to evolve at pace, he suggested.
AI For Equity
Bhavnani, senior medical director and principal investigator of health care innovation and practice transformation at Scripps Health, a $4.3bn not-for-profit San Diego-based health care group, expects a top priority for the next phase of regulation to be the responsible and ethical use of AI. (Also see "Medtronic Endoscopy’s AI Head Contemplates ‘Responsible Use Of AI’" - Medtech Insight, 22 Nov, 2023.)
“Regulatory has its role as industry and developers have theirs,” said Bhavnani, who formerly served as a senior medical officer and policy advisor to the Digital Health Center of Excellence at the US FDA. “There’s also the clinical aspect … the implementation of new tools and processes of patient care, whether it’s for chronic diseases or for health and wellness. As we look at the breadth of development that has occurred over the past 10 years with machine learning and the devices that are currently available and those on the horizon, we now start asking, ‘How do we plan to use these in responsible, ethical ways?’”
The issue of equity was raised by many at the HIMSS AI conference. The prospect of deepening systemic inequity, driven by AI systems trained on data that does not accurately or broadly represent the population, remains a key concern. Sandhu agreed that eliminating bias in training data will require a lot of focus. (Also see "‘This Is The Way Of The Future’: Digital Health Experts Share Thoughts, Experiences With Generative AI" - Medtech Insight, 18 May, 2023.)
That said, he has high expectations for the next generation of AI systems and their potential to remove obstacles to equitable care – eg, by helping health providers and others overcome language barriers and addressing cultural differences that pose limitations.
“These AI systems are reshaping the way we interact,” he said. “You haven’t seen it en masse yet, but we will pretty soon enter a world where you don’t need to be a software developer to write code. We are going to see in a few years these systems being accessible to a person that has never touched a computer before … in a language of their choice [using their voice].”
While current LLMs can speak languages besides English, they don’t do it well yet. But Sandhu said that will change.
“I think the companies that are building the LLMs or building the infrastructure around it themselves are incentivized to continue pushing to make these more and more broadly available,” Sandhu said. “I think that will happen whether or not government intervenes.”
He also believes that the cost barrier will come down for health systems to implement AI systems, which currently can run into the tens or hundreds of thousands of dollars and beyond.
Source: Medtech Insight