Hospitals and health systems are increasingly adding tech-focused positions like chief digital officers and chief data officers to the C-suite. Following the rise of artificial intelligence in healthcare, could chief AI officers be far behind?
Chicago-based Northwestern Medicine’s Yuan Luo, PhD, doesn’t think so. He’s the chief AI officer at Feinberg School of Medicine’s Clinical and Translational Sciences Institute and one of the first leaders in the country with that title.
Becker’s reached out to Dr. Luo for his thoughts on whether chief AI officers are coming to hospitals.
Question: What does a chief AI officer do?
Dr. Yuan Luo: A chief AI officer:
— Serves as the hub of AI expertise in the organization, with a well-rounded knowledge of AI algorithms and applications in clinical practice, scientific innovation and efficient operation.
— Lays out and implements coherent organizationwide AI strategies that align well with the health system’s overall goals and federal regulations.
— Stands up the organizational structure for AI governance, oversight, innovation and stewardship.
— Builds a collaborative AI in healthcare enterprise.
Q: Do you think hospitals and health systems will have chief AI officers?
YL: I think more hospitals and health systems will follow the Department of Health and Human Services in creating this position, and I think it’s going to be an important role in crafting an organizationwide AI strategy and delivering value across the entire health system value chain.
Q: How would they differ from similar executives like chief data officers or chief digital officers?
YL: Compared with similar executive roles, a chief AI officer role in a health system demands more of the following:
— Strong technical understanding of both AI/machine learning algorithms and clinical/scientific problems.
— Making strategic decisions on pursuing select marquee partnerships versus building the organization’s own AI tech stack and capabilities.
— Ensuring AI gets applied across functions and bridging collaboration gaps across disciplines — for example, clinicians, scientists and administrators.
— Collaborating with the chief digital officer (if one exists) to expedite and facilitate the conversion of data into AI-enabled insights and actions.
Q. Do you know of any hospitals or health systems that have chief AI officers?
YL: The Department of Health and Human Services appointed its first chief AI officer in March 2021. This is a significant bellwether for health systems and hospitals, and I see that several academic medical centers have already been filling or shaping such roles.
The following academic medical centers or hospitals effectively have chief AI officers:
— Michael Pencina, PhD, is leading Duke AI Health (Durham, N.C.).
— Nigam Shah, PhD, is chief data scientist at Stanford Health Care (Palo Alto, Calif.).
— Anthony Chang, MD, is chief intelligence and innovation officer at Children’s Health of Orange County (Orange, Calif.).
Q: Where do you see AI going next in healthcare, particularly for hospitals and health systems?
YL: I see three major directions for AI in healthcare: proactive AI, collaborative AI and industrial AI.
First, AI in healthcare is going from reactive AI toward proactive AI. Conventional AI workflows passively react to human-expert input and data. This is inadequate to address the ongoing data drift and bias issues in the rapidly evolving healthcare landscape. For example, in the constantly evolving COVID-19 pandemic, as the virus keeps mutating and different populations get infected, AI models built on data collected this month to predict patient outcomes may not work next month.
I advocate for proactive AI, which features a feedback loop that enables the model to tell us where the weak spots in the data are. This helps automate targeted data collection and augmentation to quickly update the models to keep up with the change. This can also help AI algorithms such as reinforcement learning better adapt to patients’ changing baselines and provide up-to-date point-of-care recommendations on treatments and interventions.
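The feedback loop described above can be sketched in miniature. This is a hypothetical illustration, not Dr. Luo’s system: a toy model reports low confidence on inputs unlike its training data, those weak spots are surfaced for targeted collection, and the model is updated. The function names and thresholds are assumptions for the sketch.

```python
# Hypothetical sketch of a proactive-AI feedback loop: the model flags
# the data regions where it is least confident (its "weak spots"), so
# targeted data collection and augmentation can focus there. All names
# and thresholds are illustrative, not from any production system.

def predict_with_confidence(model, patient):
    """Toy model: confidence drops for feature values it has rarely seen."""
    seen = model["seen_values"]
    distance = min(abs(patient - s) for s in seen) if seen else 1.0
    return max(0.0, 1.0 - distance)

def find_weak_spots(model, candidate_pool, threshold=0.5):
    """Feedback step: surface candidates the model is unsure about."""
    return [p for p in candidate_pool if predict_with_confidence(model, p) < threshold]

def update_model(model, new_data):
    """Targeted augmentation: retrain (here, simply remember) weak-spot data."""
    model["seen_values"].extend(new_data)

model = {"seen_values": [0.1, 0.2, 0.3]}   # data collected "this month"
pool = [0.15, 0.25, 0.9, 0.95]             # next month's patients; some have drifted

weak = find_weak_spots(model, pool)        # drifted cases get flagged
update_model(model, weak)                  # targeted collection closes the gap
remaining = find_weak_spots(model, pool)
print(weak, remaining)                     # → [0.9, 0.95] []
```

In a real system the confidence score would come from the model’s own uncertainty estimates and the update step would retrain on newly collected records, but the loop structure is the same.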
Second, AI in healthcare is becoming more collaborative. The old view that AI will displace radiologists and pathologists, which was always a misunderstanding of AI, has lost momentum; nor will clinicians reject AI algorithms. Health systems should create a fertile ground to grow the next generation of clinicians and AI scientists together.
Such a venue will enable them to work together from the beginning, brainstorm ideas, debate solutions, deploy implementations, and conduct continuous evaluations. Such collaborations throughout the entire life cycle of models will cross-pollinate clinicians and AI scientists, letting them invite one another’s perspectives to add an expansive dimension to their own.
Moreover, collaborative AI also means collaboration between AI and humans. Recent technology development allows AI to share decision-making with humans so we can intervene to correct its course when necessary. This is important for giving high-stakes control back to clinicians and hospitals, and it lets us treat AI as a receptive colleague.
Third, AI needs to complete its own Industrial Revolution. One big challenge is the global shortage of AI talent to unleash the value of data, even more so for health systems. I argue that if you view AI as handicrafts, then finding full-stack craftspeople is indeed hard. But can we draw inspiration from the past Industrial Revolutions?
My vision is that by breaking down AI workflows into bite-sized, standard, modular pieces that ordinary people such as scientists, clinicians and administrators can easily master and internalize into muscle memory, we can organize AI workflows as “assembly networks,” complete AI’s own Industrial Revolution and engage the whole health system as our talent pool. This is also why chief AI officers need to have the passion to disseminate and democratize AI literacy, resources and tooling across organizations beyond the central data/analytics teams.
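One way to picture the “assembly network” idea is as small, standard workflow steps that non-specialists can rearrange and compose. The sketch below is a generic illustration under that assumption; the step names are invented for the example, not part of any real framework.

```python
# Hypothetical sketch of an "assembly network": an AI workflow broken into
# small, standard, modular steps that can be composed and rearranged by
# non-specialists. Step names are illustrative assumptions.

def clean(records):
    """Standard step: drop records with missing values."""
    return [r for r in records if r is not None]

def normalize(records):
    """Standard step: scale values into [0, 1] relative to the maximum."""
    top = max(records)
    return [r / top for r in records]

def flag_for_review(records, cutoff=0.9):
    """Standard step: flag high values for clinician review."""
    return [r for r in records if r > cutoff]

def assemble(*steps):
    """Chain modular steps into a pipeline anyone can reorder or extend."""
    def pipeline(data):
        for step in steps:
            data = step(data)
        return data
    return pipeline

workflow = assemble(clean, normalize, flag_for_review)
print(workflow([10, None, 50, 3, 48]))  # → [1.0, 0.96]
```

Because each step has the same simple shape (a list in, a list out), a clinician or administrator can swap steps in and out without touching the others, which is the division-of-labor point the assembly-line analogy makes.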