
In an era defined by data, the healthcare sector stands at a pivotal crossroads. We have moved from paper charts to electronic health records, from generalized population studies to genomic sequencing, yet a fundamental gap persists: the chasm between the dynamic, continuous nature of human physiology and the static, episodic nature of most clinical data. Doctors typically see patients at discrete points in time—during an annual check-up or a crisis—making decisions based on population averages, single-moment imaging, and symptoms that have already manifested. What if we could shift from treating a patient based on a historical snapshot to continuously modeling their unique, living biology?
This is the ambitious promise of the “digital twin” in medicine. A concept borrowed from advanced engineering, a digital twin is a virtual, dynamic replica of a physical entity, continuously updated with real-world data to simulate, predict, and optimize outcomes. In healthcare, it represents the ultimate fusion of computational science and clinical practice: a living, breathing computational model of an individual patient, informed by their medical imaging, genetics, wearable sensor data, and more. It’s not merely a diagnostic tool but a predictive and preventive platform, allowing clinicians to simulate disease progression and test interventions in a risk-free virtual environment before applying them to the person.
Leading this frontier is Dr. Amanda Randles, director of Duke University’s Center for Computational and Digital Health Innovation and head of the Randles Lab. Her work transcends conventional digital health applications like apps or dashboards. Dr. Randles and her team are building the foundational computational infrastructure to create true patient-specific digital twins, starting with the complex highways of the human cardiovascular system. Their research combines exascale high-performance computing, physics-based fluid dynamics modeling, artificial intelligence, and real-world clinical data to deliver unprecedented, personalized insights into health and disease.
In a recent conversation with the Drug and Device World, Dr. Randles shed light on the journey, challenges, and transformative potential of this work. From the practical reasons for starting with the heart to the intricate dance of interdisciplinary collaboration and the paramount importance of data ethics, she outlined a future where healthcare is continuously informed, deeply personalized, and profoundly proactive. The following interview delves into the making of a medical revolution, one computational heartbeat at a time.
The Shift from Analytics to Actionable Digital Twins
The evolution of medical data analysis has been profound, yet Dr. Randles identifies a critical limitation in current paradigms. “What we’ve been doing in the last 20 years is really focused on single time points,” she explains. “Or we’re looking at population average metrics… And I think what we really need is a shift to have it be much more personalized.”
The problem with population averages is that they can obscure individual risk. What constitutes a “normal” blood pressure or cholesterol level for a population may mask a significant, dangerous change for a specific person from their own unique baseline. The digital twin concept addresses this by anchoring itself to the individual. “What really matters is your change from your baseline, but your baseline is going to be very different than someone else’s,” Dr. Randles notes. The goal, therefore, is to create a longitudinal map of a patient—a digital entity that evolves with them, continuously updated by data from wearable sensors and periodic scans. This moves medicine beyond reactive analysis of what has happened to proactive simulation of what could happen, enabling earlier detection of deviation from a personal norm.
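The core idea of anchoring alerts to a personal baseline rather than a population norm can be sketched in a few lines. This is an illustrative toy, not the lab's actual method; the z-score threshold and the resting-heart-rate figures are assumptions chosen only to make the point.

```python
from statistics import mean, stdev

def deviation_from_baseline(history, reading, z_threshold=3.0):
    """Flag a reading that deviates from this patient's own baseline.

    `history` holds the patient's recent readings (e.g. daily resting
    heart rate); no population norm is consulted.
    """
    baseline = mean(history)
    spread = stdev(history)
    z = (reading - baseline) / spread if spread > 0 else 0.0
    return abs(z) >= z_threshold, baseline, z

# A resting heart rate of 64 bpm is unremarkable for the population,
# but it is a large excursion from a personal baseline near 52 bpm.
history = [51, 53, 52, 50, 54, 52, 53, 51, 52, 53]
flagged, baseline, z = deviation_from_baseline(history, 64)
```

The same reading compared against a population reference range would raise no alarm, which is exactly the gap Dr. Randles describes.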
Why Start with the Heart?
Given the vast complexity of human biology, choosing a starting point for digital twinning is a strategic decision. For the Randles Lab, cardiovascular disease was the logical entry point. “That area has been really open to computational models,” Dr. Randles states, pointing to the commercial and clinical adoption of companies like HeartFlow, which uses CT scans to create 3D models of coronary arteries and compute fractional flow reserve (FFR)—a key indicator of blockage significance.
This existing traction provided a crucial foundation of trust and, importantly, validation. “We started some of our big studies looking at fractional flow reserve… because it’s an accepted metric that we have an invasive guide wire that is measuring,” she says. This allowed her team to perform a critical reality check: their sophisticated digital models had to produce results that matched the gold-standard, invasive physical measurement. Successfully replicating this single time-point metric was the essential first step to prove accuracy before expanding the model’s scope to longitudinal flow mapping and more complex predictions. The cardiovascular field, with its canonical studies and measurable biomarkers, offered “an ideal space to validate the models and then we can extend them to longer time frames.”
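Fractional flow reserve itself is a simple ratio: mean pressure distal to a narrowing divided by mean aortic pressure, with values at or below roughly 0.80 conventionally read as hemodynamically significant. A minimal sketch of the validation check described above, comparing a simulated value against an invasive guide-wire measurement (the pressures, tolerance, and cutoff here are illustrative assumptions, not the lab's protocol):

```python
def fractional_flow_reserve(p_distal, p_aortic):
    """FFR: mean distal coronary pressure over mean aortic pressure,
    measured (or simulated) under hyperemia."""
    return p_distal / p_aortic

def is_significant(ffr, cutoff=0.80):
    # FFR at or below ~0.80 is conventionally treated as a
    # hemodynamically significant stenosis.
    return ffr <= cutoff

# Hypothetical reality check: the simulated value must track the
# invasive measurement within an agreed tolerance.
simulated = fractional_flow_reserve(p_distal=68.0, p_aortic=92.0)
invasive = 0.76
agrees = abs(simulated - invasive) <= 0.05
```

Only once this single-time-point agreement holds does it make sense to trust the model on quantities no guide wire can measure.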
The Interdisciplinary Chasm: Bridging Physics, Computing, and Clinical Reality
One of the most significant challenges in building medical digital twins is not purely technological—it’s cultural and linguistic. The endeavor requires deep collaboration between computational physicists, high-performance computing experts, data scientists, biostatisticians, and clinicians, each with their own methodologies and lexicons.
Dr. Randles highlights this beautifully with the example of synthetic data. From a modeling perspective, creating a vast synthetic population of virtual patients is a powerful way to amplify research and test scenarios. “That really throws biostatisticians for a loop,” she laughs. “You start asking: how many ‘N’ (sample size) do I have, and what is my population? And then you realize, well, I have a population, but I also generated all of these synthetic people.” Reconciling these different frameworks for evidence and validity requires constant, careful dialogue.
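Generating a synthetic cohort, in its simplest form, just means drawing virtual patients from assumed physiological parameter distributions. The parameter names and distributions below are hypothetical placeholders for illustration, not the lab's actual model inputs:

```python
import random

def make_virtual_patient(rng):
    """Draw one synthetic patient from illustrative (hypothetical)
    physiological parameter distributions."""
    return {
        "vessel_diameter_mm": rng.gauss(3.2, 0.4),
        "heart_rate_bpm": rng.gauss(70, 10),
        # Clamp the sampled stenosis severity to a plausible range.
        "stenosis_fraction": min(max(rng.gauss(0.3, 0.15), 0.0), 0.9),
    }

def synthetic_population(n, seed=0):
    # Seeding makes the virtual cohort reproducible, which matters
    # when statisticians later ask exactly what the "population" was.
    rng = random.Random(seed)
    return [make_virtual_patient(rng) for _ in range(n)]

cohort = synthetic_population(10_000)
```

The biostatistician's discomfort is visible even in this toy: the "N" here is whatever the modeler chose to pass in.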
The collaboration extends to the entire pipeline. She recounts graduate students whose thesis committees include both a world expert in parallel computing and an interventional cardiologist. “Trying to get feedback from both and have a coherent discussion… it’s been really exciting.” This integration is not just beneficial but necessary to avoid brilliant but useless solutions. Dr. Randles, trained as a physicist, recalls instances where cardiologists would review a model and respond, “That’s really cute, but we would never be able to put that in practice.” The clinical reality check is vital: “If you gave me that data, there’s nothing actionable I could do with that.” True innovation, therefore, depends on having all disciplines “in the room from the beginning,” ensuring the pursuit of both computational elegance and clinical utility.
The Challenge of a Living Model
A core technical hurdle in digital twinning is the fusion of static, high-resolution medical images with dynamic, often noisy, continuous data streams. A CT or MRI scan provides an exquisitely detailed anatomical snapshot, but it is just that—a snapshot. “Your CT scan is only valid for so long,” Dr. Randles cautions. Tissues change, plaques evolve, and interventions like stents alter the geometry. The model must recognize when its foundational assumptions no longer hold.
This is where AI and multi-faceted prediction come in. The lab’s approach involves predicting not just one primary biomarker but at least two quantities. “One is the thing that you’re using to diagnose disease… but then something else that is calibrating your model,” she explains. This secondary metric acts as a reality check. She offers a relatable example: wearable data corruption. “What if your kid or your dog picks up the wearable and is running with it? How do we make sure that that measurement… the one wrong measurement… does not throw off the model?” By cross-referencing multiple data streams—heart rate, activity, maybe even audio or location cues from the wearable—the model can identify outliers and calibrate itself, distinguishing a medical crisis from a canine caper.
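The cross-referencing idea can be caricatured as a two-stream agreement check: before a sample is allowed to update the model, the primary signal and a calibrating signal must tell a consistent story. The thresholds below are invented for illustration; a real system would use far richer features than this toy:

```python
def plausible_sample(heart_rate, step_cadence, hr_baseline):
    """Cross-check two wearable streams before updating the model.

    A heart-rate spike with no accompanying rise in activity, or a
    burst of steps with no heart-rate response, suggests a corrupted
    sample (e.g. someone else is carrying the wearable) rather than
    a change in the patient's physiology. Thresholds are assumptions.
    """
    hr_elevated = heart_rate > hr_baseline * 1.3
    active = step_cadence > 60          # steps per minute
    return hr_elevated == active        # the two streams must agree

# Steps spike while heart rate sits at baseline: the dog has the
# wearable. Discard the window rather than recalibrate the twin.
keep = plausible_sample(heart_rate=55, step_cadence=160, hr_baseline=54)
```

A sample that fails the check is quarantined instead of propagated, so one bad window cannot drag the model's calibration off the patient's true state.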
From Coronaries to Cancer
While cardiovascular disease was the proving ground, the underlying computational infrastructure developed by the Randles Lab has proven remarkably versatile. The models that simulate blood flow and cell transport in arteries can be repurposed to answer critical questions in other diseases. This expansion happened somewhat serendipitously when, after a talk on high-resolution vascular modeling, a cancer researcher posed a transformative question: “What if you dropped a cancer cell in it? Could we actually tell how the cancer cells are going to move through the body?”
The physics, it turned out, was strikingly similar. Modeling how a circulating tumor cell navigates the bloodstream, adheres to vessel walls, and eventually forms a metastasis involves the same fundamental principles of fluid dynamics and cellular adhesion as modeling cholesterol deposition in an artery. This insight allowed the lab to pivot its powerful framework toward oncology, specifically in modeling metastasis, as well as to hematological conditions like sickle cell anemia, where “red blood cells get more sticky.”
The long-term vision, now being pursued through the broader Duke Center for Computational and Digital Health Innovation, is a universal platform. “The idea is to eventually have this universal platform that we can try different studies,” Randles says. For a neurological study, the platform might emphasize wearable and eye-tracking data over detailed blood flow. For an oncological study, it would integrate the high-resolution vascular transport models. This flexible, multimodal data integration platform is the backbone upon which a comprehensive, whole-body digital twin could one day be built.
Privacy, Security, and the Myth of “De-Identification”
As digital twin research gathers more intimate and continuous data, ethical and security challenges escalate dramatically. Dr. Randles identifies a fundamental issue with current data protection paradigms: the very concept of “de-identified” data is becoming obsolete. “Your step count… while that might be considered de-identified, is pretty much a fingerprint, unique only to you,” she states. In a world of big data, unique patterns of daily activity, heart rate variability, or sleep cycles can be used to re-identify individuals, especially when linked to other databases.
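How easily an "anonymous" activity trace can be linked back to a person is simple to demonstrate with a toy nearest-neighbor linkage attack. The names and step counts below are invented, and real attacks use far richer features, but the mechanism is the same one Dr. Randles warns about:

```python
def reidentify(anon_trace, database):
    """Match an 'anonymous' week of daily step counts to a named
    record by nearest Euclidean distance (a toy linkage attack)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(database, key=lambda name: dist(database[name], anon_trace))

# Hypothetical auxiliary database with identified weekly step patterns.
database = {
    "alice": [8200, 7600, 9100, 8800, 7900, 12000, 11500],
    "bob":   [3100, 2900, 3300, 3000, 2800, 4100, 3900],
    "carol": [15000, 14800, 15200, 14900, 15100, 9000, 8800],
}

# A "de-identified" trace that tracks Alice's routine links straight back.
anon = [8150, 7700, 9000, 8900, 7850, 11900, 11600]
match = reidentify(anon, database)
```

No names or identifiers were needed; the behavioral pattern itself was the fingerprint.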
The threat is amplified by AI itself, which can “easily go in and denoise it for you and re-identify people easier.” This creates a compounding privacy risk: each new data stream added to a model increases the potential for unwanted identification. Her team’s approach is preemptive and collaborative. Setting up a wearable data registry at Duke involved two years of work with the Institutional Review Board (IRB), the health system’s IT department, and security experts. “Having all of those conversations ahead of time… is really important,” she emphasizes. Solutions may involve adding intentional noise to data, using aggregated metrics, or implementing rigorous, layered security protocols for raw data storage. The principle is clear: building public trust requires demonstrating that security and ethics are not afterthoughts but are engineered into the foundation of the research.
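One of the mitigations mentioned, adding intentional noise before release, is the core of the Laplace mechanism from differential privacy. This is a textbook sketch, not the Duke registry's actual scheme; the sensitivity bound and epsilon value are illustrative assumptions:

```python
import math
import random

def noisy_mean(values, sensitivity, epsilon, seed=None):
    """Release the mean of a sensitive series with Laplace noise.

    `sensitivity` bounds how much any one sample can move the mean;
    a smaller `epsilon` means more noise and stronger privacy.
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-transform from Uniform(-0.5, 0.5).
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return sum(values) / len(values) + noise

# Hypothetical week of daily step counts; assume one day can shift
# the 5-day mean by at most 2000 / 5 steps.
daily_steps = [8200, 7600, 9100, 8800, 7900]
released = noisy_mean(daily_steps, sensitivity=2000 / len(daily_steps),
                      epsilon=1.0, seed=0)
```

The released figure is useful for population-level questions while the exact daily pattern, the "fingerprint," is never published.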
The 20-Year Vision
Looking a decade or two ahead, Dr. Randles envisions a fundamental shift in the patient-care paradigm driven by digital twins. “The big thing is going to be moving the monitoring and tracking outside the clinic,” she predicts. This continuous, longitudinal monitoring will empower physicians to understand a patient’s health in the context of their real life, not just the artificial environment of a clinic exam room.
The near-term future (5-10 years) will likely see concrete examples where longitudinal data leads to earlier detection and refined treatment planning for specific conditions. The longer-term future (10-20 years) points toward ubiquity. “We’ll see more of a ubiquitous… here are the 20 diseases that we’re now able to identify earlier,” Randles says. Treatment will become intensely personalized, with baselines and alerts defined by the individual’s own historical data rather than population averages. This will enable the detection of subtle, early-warning decompensation—a 10% decline in function for that patient—long before it reaches a critical, symptomatic threshold.
Building the Living Map of Human Health
The work of Dr. Randles and her team at Duke is more than incremental innovation; it is a foundational effort to re-engineer how we understand and interact with human health. By building the computational scaffolding for patient-specific digital twins, they are charting a course from a medicine of snapshots and averages to a medicine of dynamic, personal narratives.
The journey is fraught with technical hurdles, from scaling exascale computations to fusing disparate data streams. It is complicated by the human challenge of unifying diverse scientific cultures. And it is constrained by the imperative to build systems that are not only powerful but also private and secure. Yet the potential payoff is monumental: a future where disease is intercepted before it strikes, where treatments are optimized in silico before they are applied in vivo, and where every individual’s healthcare is as unique and dynamic as their own physiology. In the quest to map the living human system, digital twins are becoming the most promising cartography tools we have ever developed.


