
Balancing the hype and usefulness of AI in healthcare


Markus Rothmaier, Head of R&D at Polaroid Therapeutics, talks about how to look beyond the hype to explore more use cases of AI.

Rothmaier remains cautiously optimistic regarding the future of AI: “I’m pretty sure this will be solved because the benefit over the risk is so high.” Image Credit: Igor Omilaev/Unsplash.

Artificial Intelligence (AI) is revolutionizing various industries, and healthcare is no exception. The last decade has seen immense excitement and investment in AI technologies that promise to improve diagnostics, personalize treatments, streamline administrative processes, and accelerate drug discovery. However, while the potential is undeniable, so is the need for caution. The assumption that AI will solve all of healthcare’s most entrenched problems quickly and comprehensively is overly optimistic and potentially dangerous.

AI is a powerful tool, perhaps one of the most transformative technologies of our time, but it is not a panacea. In an interview with the Drug and Device World, Markus Rothmaier, Head of R&D at Polaroid Therapeutics, discusses the current hype surrounding AI and the potential uses of the technology.

The Allure of the “Magic Bullet” Myth

AI’s emergence into mainstream healthcare discourse coincided with breakthroughs in deep learning and the democratization of large language models like ChatGPT. Public enthusiasm was swift and sweeping. Suddenly, AI was seen as capable of everything: from diagnosing cancer through imaging to designing novel drugs in a fraction of the time it would take human researchers.

Rothmaier reflects on this evolution, tracing its roots back decades. He notes that rule-based "expert systems" were already being explored in the 1960s and 70s. One early example he cites is Mycin, an expert system developed in the 1970s for diagnosing bacterial infections and recommending treatments. Despite being groundbreaking for its time, it foreshadowed the same limitations that modern AI systems face today: insufficient data quality, hardware constraints, and oversimplified assumptions.

Today’s hype, Rothmaier suggests, is driven by technological accessibility. “Now it’s in my iPhone, and it actually works well.” This portability and user-friendliness create the illusion of universal applicability. However, healthcare is more complex than most other domains. “It’s a great opportunity,” Rothmaier says of AI in drug discovery, “but obviously, it’s not so easy really to achieve what we want.”

Data Is the Bottleneck

One of the biggest misconceptions about AI in healthcare is that it’s just a matter of coding more intelligent software. But according to Rothmaier, “We are missing a bit the data, the foundation, to make it really better when it comes to the predictions that we want.”

The issue is not just the amount of data—but its nuance, diversity, and completeness. Clinical trials often rely on homogeneous populations, such as “only white men between 40 and 50,” Rothmaier highlights. Such narrow data sets limit generalizability across genders, ethnicities, age groups, and medical histories. Without broad and inclusive data, AI predictions can be biased, inaccurate, or outright harmful.

Even with better data, causation and correlation are not easily disentangled, notes Rothmaier. AI might find patterns, but understanding why something happens in human biology is far more complex. Therefore, the notion that AI can independently and accurately derive medical conclusions without robust, nuanced, and representative data is fundamentally flawed.

Tools, Not Replacements

Another pervasive myth is that AI will replace doctors, researchers, or healthcare professionals. Rothmaier firmly counters this: “They will not replace a human doctor so fast. That’s just the question of trust, empathy, transparency.”

He adds that trust is central in healthcare. AI can generate predictions with remarkable speed and accuracy, but it lacks the emotional intelligence, ethical judgment, and experiential wisdom that healthcare professionals provide. Patients want to talk to someone who understands their lived experiences, who can offer comfort, and who can explain treatment options empathetically, not just compute probabilities.

Furthermore, Rothmaier identifies a core limitation of current generative AI systems: the inability to explain their thought processes. “There is no thought process that you can follow… that also creates [a problem for] accountability.” This black-box nature of AI undermines its reliability in life-critical applications. If an AI makes a faulty recommendation, who is liable—the developer, the doctor, the patient?

In Rothmaier’s view, AI should serve as a “tool,” not an autonomous decision-maker. “It helps me to make good decisions. It helps me to save time,” he explains. AI can summarize research, flag anomalies in imaging, and propose hypotheses, but ultimately a qualified human must interpret and act upon this information.

Rothmaier also notes that the belief that anyone can harness AI effectively without deep domain knowledge is misguided. AI isn’t magic; it’s a tool that needs skilled handlers. Rothmaier draws an insightful parallel: “Even with things like ChatGPT… your answer is only as good as your prompt.” A nuanced understanding of both the data and the medical context is necessary to use AI effectively.

This reinforces the importance of medical and data literacy. Training future healthcare workers to collaborate with AI—not merely use it—is critical. Just as a scalpel in untrained hands is dangerous, so too is AI when wielded without understanding.

Use Cases That Work—and Why

Not all AI applications in healthcare are problematic; some are extraordinarily successful, Rothmaier explains. For instance, image recognition in radiology and pathology is a well-established domain where AI has exceeded expectations. According to Rothmaier, “These are the simpler use cases, where AI is more accurate sometimes, it’s a lot faster than a doctor in radiology.”

These success stories share a common theme: well-defined tasks, structured data, and clear performance metrics. AI is most effective where the environment is controlled, the data is labelled, and the outcomes are measurable. It falters in dynamic, complex, and human-centered areas like mental health, chronic disease management, and complex diagnostics.

Rothmaier also highlights administrative applications such as hospital resource management, virtual assistants, and patient triage systems as promising areas. These domains offer immediate improvements without risking patient safety.

Risk, Bias, and Accountability

One under-discussed concern in AI’s role in healthcare is the psychological and legal implications of decision-making. People may defer to AI, not because they trust it, but because they want to avoid responsibility.

“If AI gives one answer… I might go with the AI and think that it relieves me from consequences,” Rothmaier observes. This “liability outsourcing” is dangerous. Trusting AI blindly may protect individuals from blame, but it doesn’t ensure the best outcomes, he adds. Additionally, the question of liability remains unresolved. “If the prediction was wrong, then who’s liable? AI? A doctor? Are you?”

Rothmaier notes that regulatory bodies like the US Food and Drug Administration (FDA) are grappling with these questions, but consensus remains elusive. Until accountability frameworks catch up with the technology, AI must be treated with caution—especially in domains with high-stakes consequences.

Despite these challenges, Rothmaier remains cautiously optimistic. “I’m pretty sure this will be solved because the benefit over the risk is so high.” He envisions a future where AI does more than just analyze existing data—it helps generate new data, design better experiments, and simulate complex systems.

He references the idea of the “digital twin”, a virtual replica of a patient’s body used for simulation and testing. “Let’s play with Markus as the twin,” he jokes, “and let’s see what happens to him with time under certain conditions.” He adds that such models could revolutionize personalized medicine, allowing tailored treatments and risk assessments.

He also touches on futuristic possibilities like brain-computer interfaces and mental health monitoring. “There is a lot going on that cannot just be treated with an implant or with a pill,” he says. Addressing these dimensions will require AI to move beyond pattern recognition into more holistic, integrative frameworks—still a distant goal.

Hope for the future

AI is undeniably a powerful force in healthcare, but it is not a cure-all. It excels in well-structured environments with abundant data but struggles in the messy, variable, and deeply human aspects of medicine. The belief that AI can supplant expert judgment, patient relationships, and nuanced decision-making is not only premature—it’s dangerous.

Rothmaier’s insights underscore this reality: AI is a tool, not a replacement. Its utility is determined not by its theoretical potential, but by how wisely and ethically we choose to deploy it. For AI to fulfill its promise in healthcare, it must be integrated thoughtfully—backed by robust data, guided by human expertise, and tempered by a clear understanding of its limitations.

Innovation in healthcare should be embraced, but never blindly. If we view AI as an assistant rather than an oracle, we can harness its strengths while avoiding its pitfalls. The future of medicine is not machine vs. human—it is machine with human. And in that alliance, the human must always lead.
