Panel on AI in Healthcare

This AI Institute panel brought together diverse voices from medicine, psychology, public health, and computer science to discuss how artificial intelligence is reshaping healthcare, and the very real challenges that come with it.

When Raj Shah talks about his patients, he doesn’t see isolated medical conditions. He sees what medical anthropologists call “syndemics” – multiple diseases working together in complex socioeconomic contexts to accelerate poor health outcomes. It’s a perspective that challenges how medicine has traditionally been practiced, one problem at a time, and it’s exactly the kind of systems thinking that artificial intelligence might help us understand better.

Shah, a geriatrician and professor at Rush University Medical Center, was one of five experts who gathered recently for the DePaul University AI Institute’s panel on AI in healthcare. Joined by colleagues from Sinai Urban Health Institute, DePaul’s psychology and public health programs, and the School of Computing, Shah took part in a conversation that revealed both the tremendous promise and sobering realities of deploying AI in real-world healthcare settings.

Beyond the Technology: Human Stories Drive Innovation

The panel’s strength lay not in discussing abstract AI capabilities, but in grounding the conversation in concrete projects addressing real community needs. Kelly McCabe (Associate Director of Community Health Innovations at Sinai Urban Health Institute) described how her team works with community health workers—trusted bridges between Chicago’s South and West Side communities and complex healthcare systems.

“Our community health workers are from the communities that we serve,” McCabe explained. “Patients can identify with them. They speak the same language. There’s a lot of cultural concordance.” Her team is now using AI to better identify high-risk patients who could benefit from these community health workers’ interventions, focusing on social determinants of health like transportation, food, and housing – factors widely estimated to drive as much as 80% of health outcomes.

This human-centered approach ran through all the panelists’ work. Joanna Buscemi (Associate Professor of Psychology at DePaul) has spent years translating evidence-based behavioral interventions into digital formats to reach people who might otherwise lack access to care – whether due to geographic barriers or other constraints. Her work with Latina breast cancer survivors exemplifies how AI and digital health tools can extend the reach of proven psychological interventions.

The Translation Challenge: From Lab to Life

Perhaps the most sobering discussion centered on what healthcare professionals call the “bench to bedside” gap – the infamous 17 years it typically takes for research findings to change actual medical practice. Dan Shober (Associate Professor of Public Health at DePaul) captured this challenge while describing his sleep intervention research with African American women in Chicago.

While apps like Headspace show promise for improving sleep, Shober noted that completion rates hover around 43%. “The challenge that we face with applications like Headspace is that the completion rates are fairly low,” he observed. “So, I think we need to think about how do we get people engaged in that space.”

His solution involves community health workers helping users engage with the technology – keeping the human element central even in digital interventions. It’s a theme that emerged repeatedly: AI works best when it augments rather than replaces human expertise and connection.

Speaking Different Languages: The Collaboration Imperative

One of the panel’s most valuable insights came from exploring how medical professionals and data scientists can work together effectively. As moderator Casey Bennett noted, “it’s almost like we’re speaking two different languages” when trying to collaborate across disciplines.

The panelists offered concrete strategies for bridging this divide. Shah emphasized the importance of regular in-person meetings: “What’s really helped is being able to meet in person every week. I’d come down to DePaul on Thursdays and we talk, because you can see people’s faces. And it’s easier to explain things when you’re talking different languages.”

Buscemi advocated for developing “minimal literacy” in each other’s fields: “You’re not going to be fluent in the other person’s language, but could you learn like a little bit of that language? And couldn’t they learn a little bit of your language?”

The role of students emerged as particularly important in these collaborations. As Shah noted, “students have helped us honestly, because students are curious enough and they don’t have biases yet. So they help us to bridge these two worlds.”

The Equity Imperative: Who Benefits from AI in Healthcare?

The conversation took a critical turn when discussing equity and access. McCabe pointed to a troubling reality: “The digital divide is getting more and more robust, and the patients that probably need that connection the most are the ones who are less and less likely to receive it.”

This concern extended beyond simple access to technology. Shah raised fundamental questions about how AI development is funded and prioritized: “Why are people coming to health care to use AI? Because that’s where the money is.” He challenged the panel to imagine AI that empowers patients rather than just making billing more efficient for providers.

Shober highlighted how even well-intentioned technologies can embed bias. He mentioned research showing that fitness trackers using green light technology work less effectively for people with darker skin tones – a problem that could have been avoided by including diverse populations in the research process from the beginning.

Real Problems, Practical Solutions

When asked about opportunities for AI to address healthcare challenges specific to Chicago, the panelists moved beyond technological solutions to identify fundamental human needs. Shah identified loneliness and lack of belonging as major health challenges that technology might inadvertently worsen but could potentially help address.

Shober pointed to traffic safety as an area ripe for AI intervention, noting that accidents tend to happen in predictable patterns that could be prevented with better data integration and prediction models.

McCabe shared a personal example of AI already improving healthcare delivery: a pediatric visit where conversation recording technology allowed the doctor to maintain eye contact with her son instead of typing notes. “It was a different experience,” she reflected, highlighting how AI can enhance rather than diminish human connection in healthcare.

The Trust Question: Privacy, Liability, and Professional Responsibility

The panel didn’t shy away from difficult questions about data privacy and professional liability. Shah painted a stark picture of the current landscape: “We live in a world where we are obligated as physicians to protect people, and we live in a world where still the physician gets sued.”

This tension between innovation and responsibility emerged as a critical barrier to AI adoption in healthcare. While AI tools might reduce physician burnout and improve patient care, the current legal and regulatory framework places liability squarely on individual practitioners rather than the technology companies developing these tools.

Looking Forward: Human-Centered AI

The panel’s ultimate message was clear: successful AI in healthcare must remain fundamentally human-centered. This means including affected communities in the design process, maintaining human expertise in the loop, and addressing equity concerns from the outset rather than as an afterthought.

As the panelists demonstrated through their diverse projects, AI’s greatest healthcare applications may not be the flashiest technological achievements, but rather the tools that help community health workers better serve their neighbors, enable doctors to look their patients in the eye, and extend evidence-based interventions to communities that have historically lacked access to care.

The conversation left audiences with both hope and homework. The promise of AI in healthcare is real, but realizing that promise requires the kind of thoughtful, inclusive, and collaborative approach that DePaul University’s AI Institute is fostering – bringing together diverse expertise to ensure that technological advancement serves human flourishing.

In a field often dominated by technological enthusiasm, this panel provided a necessary reminder that the most important question isn’t what AI can do, but rather what it should do, for whom, and how we can ensure it serves the communities that need it most.