When “Anything Is Possible” Means “Nothing Will Scale”


Designing Healthcare AI for Scale, Discipline, Durability, and Investability


You’ve probably seen the dazzling AI demo: text writes itself, insights appear in real time, and when you ask if it can handle your unique workflows, the answer is, “Absolutely—we can customize anything.” But if you have lived through failed implementations, that answer sounds like future heartburn, because in healthcare technology “anything is possible” usually means nothing will scale, nothing will be easy to govern, and nothing will be simple to operate three years from now.


This matters because the people who depend on these systems are not stock images in a pitch deck; they are patients and families who hurt, who worry about lost wages and medical bills, and who bear the cost when deficient software keeps clinicians from managing records and treatments efficiently. 


Software meant to manage chronic disease is often failing the people who use it. Among nurses experiencing burnout, nearly one‑third report that their electronic health record is a contributing factor, and about 40% of those nurses say they are likely to leave their organization within the next two years. In the broader system, an estimated 31 million Americans—roughly 12% of adults—borrowed money in a single year to pay medical bills, taking on about 74 billion dollars in debt. Non‑performant software is not just an inconvenience in this environment; it amplifies costs, delays care, and adds friction to processes that are already under strain.


On the capital side, AI has become the center of gravity in healthcare investment. In 2025, AI companies accounted for roughly 46% of all healthcare investment—about 18 billion dollars across the U.S. and Europe—even as total healthcare investment fell 12% from the prior year. That is not pure exuberance; it is a shift toward a smaller number of bets on systems that promise durable value. Investors are voting for AI, but they are also voting for discipline. 


A consistent thread through my career is that you do not win by pushing harder against the physics of the system; you win by finding the few levers that actually change its state. In solid‑state batteries, coupled electrochemical–thermal models and multi‑objective optimization let us choose improved designs rather than guess at them. In energy systems work, we taught engineers to treat grids, vehicles, and controls as one coupled system so they could improve all of them at once. In battery controls, the lesson was that pack‑level strategies only work when they respect real limits on heat, degradation, and imbalance. That is exactly how we now think about healthcare data: as a constrained system where discipline in architecture and feedback matters more than clever demonstrations of improvement in any single component.


You can guess what happens when that sort of discipline is missing. A model that can, in principle, answer any question or drive any workflow almost inevitably picks up layers of special‑case logic, each added to make one more exception behave “just right.” Each new use case adds its own prompts, fields, and branching rules. Upgrades become perilous, because fixing one behavior risks breaking three others. Monitoring and validation turn into archaeology, and what was perfect on last week’s trajectories is unstable on tomorrow’s.
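
To make that decay concrete, here is a deliberately simplified, hypothetical sketch of what accumulated special‑case logic tends to look like; the customer names and rules are invented for illustration:

```python
# Hypothetical example of "anything is possible" accumulating as
# special-case logic. Every exception is patched where it bit last,
# so no single rule can change without re-testing all the others.
def build_discharge_summary(patient, customer):
    sections = ["diagnoses", "medications", "follow_up"]

    if customer == "hospital_a":
        sections.insert(0, "custom_header_v2")   # added for one go-live
    if customer == "hospital_b" and patient.get("age", 0) > 65:
        sections.append("fall_risk_addendum")    # one committee's request
    if customer == "clinic_c":
        sections.remove("follow_up")             # "we track that elsewhere"
        sections.append("follow_up_v2")          # ...until they didn't

    # Three customers in, the "general" behavior no longer exists:
    # every upgrade must reason about each branch separately.
    return sections
```

Nothing here is wrong in isolation; the failure is cumulative, which is exactly why it never shows up in a demo.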


Healthcare cannot afford that kind of instability. Healthcare CIO priorities for 2026 emphasize operations in which AI, compliance, and cyber‑resilience converge. Commentators increasingly describe data interoperability and integrity as infrastructure; failures in exchange or provenance are likened to failures in a national power grid. In the real world, the test of an AI system is not whether it can impress a room, but whether it can stay up, stay safe, and stay useful under ordinary strain and extraordinary events. 


That pushes us toward a different design philosophy. Instead of treating every preference as a requirement, start from a coherent architecture. Allow variation only where it can be contained and governed—through configuration, not fragmentation. Design so that everyday work naturally produces machine‑readable, longitudinal information. Embed AI inside that structure, rather than beside it. 
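
A minimal sketch of what “configuration, not fragmentation” can mean in practice, revisiting the hypothetical example above: variation is allowed only inside a validated schema, so every deployment stays an instance of one architecture. The field names and options are assumptions for illustration, not a specific product’s:

```python
# Minimal sketch: customer variation expressed as validated configuration
# rather than per-customer code branches. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

ALLOWED_SECTIONS = {"diagnoses", "medications", "follow_up", "fall_risk"}

@dataclass(frozen=True)
class SummaryConfig:
    sections: tuple = ("diagnoses", "medications", "follow_up")
    fall_risk_over_age: Optional[int] = None  # None means the feature is off

    def __post_init__(self):
        unknown = set(self.sections) - ALLOWED_SECTIONS
        if unknown:
            # Variation outside the governed schema is rejected up front,
            # not patched in as one more branch somewhere in the pipeline.
            raise ValueError(f"unsupported sections: {sorted(unknown)}")

def build_discharge_summary(patient: dict, config: SummaryConfig) -> list:
    sections = list(config.sections)
    if (config.fall_risk_over_age is not None
            and patient.get("age", 0) > config.fall_risk_over_age):
        sections.append("fall_risk")
    return sections  # one code path, many governed configurations

# A new customer is a configuration row, not a code branch:
hospital_b = SummaryConfig(fall_risk_over_age=65)
print(build_discharge_summary({"age": 70}, hospital_b))
# ['diagnoses', 'medications', 'follow_up', 'fall_risk']
```

The point is not this particular schema; it is that a new customer adds a row of configuration rather than a branch of code, so upgrades and validation still reason about one system.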


In controls engineering, robust designs assume every model is incomplete and the world will misbehave; they rely on feedback and margins, not wishful thinking. Healthcare AI needs the same humility—visible logs, continuous monitoring, well‑defined failure modes, and simple ways to back out of a change that is not working. Security, observability, and governance are inputs to the first architecture diagrams, not tickets filed after go‑live.
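
As one illustration, here is a hedged sketch of a guardrail loop: a change ships behind a flag, a few plain metrics are watched, and the system backs out automatically when a margin is violated. The metric names and thresholds are invented for the example, not drawn from any particular product:

```python
# Hedged sketch of "feedback and margins" applied to a model rollout.
# Metric names and thresholds are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rollout")

GUARDRAILS = {
    # metric name             -> worst acceptable value (upper bound)
    "p95_latency_ms": 2000,
    "task_error_rate": 0.02,
    "clinician_override_rate": 0.15,
}

def within_margins(metrics: dict) -> bool:
    """Return True only if every guardrail metric stays inside its margin."""
    for name, limit in GUARDRAILS.items():
        value = metrics.get(name)
        if value is None:
            log.warning("missing metric %s; treating as a violation", name)
            return False  # no data is itself a failure mode
        if value > limit:
            log.warning("guardrail violated: %s=%.4g > %.4g", name, value, limit)
            return False
    return True

def evaluate_rollout(metrics: dict, enable_flag: bool) -> bool:
    """Decide whether the new behavior stays on; the old path stays warm."""
    if enable_flag and not within_margins(metrics):
        log.info("backing out of change: margins violated")
        return False  # a simple, pre-defined way to back out
    return enable_flag

# Example: a latency regression trips a guardrail and the flag turns off.
flag = evaluate_rollout(
    {"p95_latency_ms": 3100, "task_error_rate": 0.01,
     "clinician_override_rate": 0.05},
    enable_flag=True,
)
assert flag is False
```

The mechanism is deliberately boring: the margins are declared up front, the logs are visible, and backing out is a state change, not an emergency redeploy.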


Investability, in the end, rests on how a platform behaves, and how systematically it can be improved, over time. A system is investable when its behavior is easy to understand, its evolution can be planned rather than improvised, and every new customer tends to make it stronger instead of more fragile. Platforms built on a small set of shared abstractions, with disciplined use of structured data and tight limits on customization, improve as they grow. Platforms that promise “anything” age like over‑tuned models: they look brilliant on yesterday’s demo path and increasingly unpredictable when tomorrow’s reality arrives. The real test of our industry will not be who can stage the best demo. It will be who can build the quiet, disciplined, durable systems that clinicians trust, boards approve, and investors are proud to hold through many cycles.


So instead of only asking, “How impressive is the demo?”, ask, “What does this system do reliably, in the same way, every day?” The best systems must clear both bars: they can show you something striking, and then deliver mathematically disciplined, predictable performance in the messiness of real care. If you hold to that standard—favoring architectures that are understandable, repeatable, and boring in their reliability—you end up with AI that people can trust, organizations can govern, and investors can support over the long term. 

