Exec Perspective: Why I Started Building AI Tools Instead of Just Advising on Them

For most of my career, my job was to sit at the intersection of clinical medicine and organizational strategy — translating evidence into decisions, risk into policy, data into programs. As a CMO with population health and NCQA experience, I knew how to advise. I knew how to evaluate a vendor, scrutinize a whitepaper, and ask the right questions in a boardroom. I was comfortable there.

So what changed?

An explosion of new AI-powered tools — risk stratification engines, care gap platforms, ambient documentation systems — all aiming to answer questions I have tackled my entire career. The technology is remarkable and could genuinely transform healthcare, but it is missing a key design element: physician leadership expertise. The conversation is usually about adoption, integration, compliance, ROI. Almost never about what the tool is actually doing underneath. And answers about methodology are often vague.

That realization drove me to a decision: I was going to stop just evaluating these tools and start understanding how to build them.

The Credibility Gap

There is a version of physician leadership that functions almost entirely as a translation layer — clinical credibility loaned to decisions made by others. I have played that role, and I believe in it. But as AI becomes the operational backbone of health systems, payers, and digital health companies, that arrangement starts to break down.

If you cannot reason about a model — what data it was trained on, what it is actually optimizing for, where it fails — then your clinical judgment is decorating someone else's decision, not informing it. That is not advisory work. That is a rubber stamp with an MD/DO behind it.

The physician leaders who will matter in AI-native healthcare organizations are the ones who can sit in a room with engineers and data scientists and push back with precision. Not "I am not sure I trust this," but "This model appears to be optimizing for utilization reduction, not health outcomes, and here is how I know."

What Building Actually Taught Me

I started with something I knew well: preventive medicine and population risk. I built a tool — what has become the Previty Method — that calculates Health Age: not biological age or chronological age alone, but a composite that accounts for modifiable risk, disease burden, and the gap between how old someone's body is behaving and how old their birth certificate says they are.

The process was humbling. It required me to formalize intuitions I had held for decades — clinical judgments I had made thousands of times — into logic that a machine could execute consistently. It surfaced assumptions I did not know I was making. It forced precision where I had been comfortable with gestalt.
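To make that formalization concrete, here is a minimal sketch of what turning clinical gestalt into executable logic can look like. Everything here is a hypothetical illustration: the `health_age` function, its inputs, and its weights are placeholder assumptions for the sake of the example, not the actual Previty Method.

```python
# Illustrative only: a toy Health Age composite, NOT the Previty Method.
# The weights below are assumptions; a real model would calibrate them
# against outcome data.

def health_age(chronological_age: float,
               modifiable_risk: float,   # 0.0 (none) to 1.0 (high): smoking, inactivity, etc.
               disease_burden: float     # 0.0 (none) to 1.0 (high): chronic condition load
               ) -> float:
    """Shift chronological age by weighted penalties for risk and burden."""
    RISK_WEIGHT = 8.0     # assumed max years added by modifiable risk
    BURDEN_WEIGHT = 12.0  # assumed max years added by disease burden
    return (chronological_age
            + RISK_WEIGHT * modifiable_risk
            + BURDEN_WEIGHT * disease_burden)

def health_age_gap(chronological_age: float,
                   modifiable_risk: float,
                   disease_burden: float) -> float:
    """Years the body is 'running ahead of' the birth certificate."""
    return (health_age(chronological_age, modifiable_risk, disease_burden)
            - chronological_age)
```

Even a toy like this forces the choices that gestalt hides: which inputs count, how they are scaled, and how much each one is allowed to move the answer.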

And it gave me something I did not expect: a much clearer picture of where AI can genuinely help and where it cannot. The tools that earn trust are not the ones that replicate clinical reasoning — they are the ones that augment the parts of clinical work that humans do poorly at scale: pattern recognition across large populations, consistency of risk scoring, surfacing the patient who is slipping through the cracks.

Why This Matters for Physician Leaders

I am not suggesting every CMO should become a developer. That is not the point. The point is that the learning curve for meaningful AI literacy is shorter than most physicians think — and the cost of not closing it is rising fast.

According to the 2025 AMA Physician Sentiment Survey, physician burnout dropped nearly 10 percent in a single year, driven in significant part by AI-assisted documentation tools. Two-thirds of physicians who use AI now say they look forward to coming to work more often. That is not a marginal improvement. That is a structural shift not only in how physicians feel but in how they show up to the practice of medicine.

But those same systems need clinical leaders who can hold them accountable. Who can ask whether the ambient scribe is capturing clinical nuance or just generating plausible-sounding notes. Who can look at a population health dashboard and recognize when the algorithm is flagging the easy-to-find patients while missing the highest-risk ones.

Previty exists at that intersection — where clinical rigor and technical literacy have to coexist. I built these tools because I wanted to understand them from the inside. And I share that work because I believe other physician leaders can do the same.

You do not have to wait to be invited into these conversations. You can build your way in.
