MOST AI STRATEGIES OPTIMIZE FOR USE CASES. THE WINNERS OPTIMIZE FOR WORKFLOWS. THE FIRST CREATES DEMOS. THE SECOND CREATES LEVERAGE. WE START BY MAPPING WHERE THE WORK ACTUALLY HAPPENS.
Most enterprise AI fails at deployment, not strategy. I work on the deployment.
Building internal tools and agents that make knowledge workers measurably better at the work that matters.
// 01 — ABOUT
Most of what I build isn't visible publicly. The tools live behind firewalls. The deployments serve internal users. The wins are measured in productivity and quality rather than press releases.
But this is some of the most consequential AI work happening in enterprise right now: making senior expertise scalable, automating the manual work that buries professionals, and building governance that lets organizations move fast without breaking trust.
I'm here to share what I'm learning — about deployment patterns that actually work, the gap between AI hype and AI value, and what it takes to lead AI inside organizations that can't afford to get it wrong.
// 02 — EXPERIENCE
2003 — Present
20+ years
Consulting
Deep expertise in organizational change, technology strategy, and operating model design. Working at the most complex organizations during their most consequential transformations. The discipline that taught me how organizations actually decide, resist, and adopt.
2015 — Present
8+ years
Product
Product leadership across enterprise software, internal platforms, and client-facing systems. The discipline that taught me the gap between what stakeholders ask for and what users actually need — and how to ship in the space between those two things.
2021 — Present
3+ years
AI
Director-level AI deployment in professional services: agents, evaluation frameworks, governance structures. The work where the other two disciplines converge — strategy depth, product instinct, and the technical fluency to know what's actually possible.
"The combination is the differentiator: strategy depth, product instinct, AI fluency. Twenty-one years building toward a moment where all three matter simultaneously."
// 03 — DOMAINS
Nine intersecting disciplines that show up in almost every engagement.
From prototype to production — integration, volume handling, organizational adoption.
Frameworks for what "good enough" means across high-stakes professional workflows.
Systems that automate and augment expert work without removing human judgment.
Technology decisions inside complex organizations with competing stakeholders.
Why organizational adoption fails and the structural reasons it gets ignored.
Making professional expertise scalable without diluting the expertise itself.
Moving fast without breaking trust — compliance, accountability, audit trails.
Building what users need, not what stakeholders ask for, inside enterprise constraints.
Closing the gap between capability and understanding before it becomes a liability.
// 04 — MANIFESTO
What separates AI investments that compound from those that stall is never the model choice. It's the deployment decision: which workflow it integrates with, what judgment it augments, what feedback loop it creates.
Strategy documents, proofs of concept, and pilots are not AI deployment. They're AI theater. Real deployment means production systems that handle volume, edge cases, and organizational resistance, and that keep working when the novelty wears off.
Every enterprise AI project eventually hits the same wall: how good is good enough? The absence of rigorous evaluation frameworks isn't a research problem. It's a leadership problem — and it's causing billions in misallocated investment.
When AI capability outpaces organizational understanding, you don't get fast adoption. You get fear, workarounds, and poor decisions made confidently. Closing that gap deliberately is the AI leader's actual job.
The strategies that work at AI-first companies create technical and organizational debt when applied inside traditional enterprise. Different constraints, different stakeholders, different definition of done. The playbook transfer problem is real and almost never discussed.
Strong AI POV without deployment infrastructure doesn't create competitive advantage. It creates technical debt, failed pilots, and organizational cynicism. The execution layer isn't implementation detail. It is the strategy.
// 05 — DEPLOYMENT
Four phases. Each one harder than it looks. The work that separates AI investments that compound from those that stall.
// 06 — PUBLIC PROJECTS
Documenting what I build outside of work — small AI systems, experiments, and prototypes. Each shipped publicly with the code, the reasoning, and what I'd do differently next time.
An AI system that generates structured summaries, action items, and decision logs from meeting transcripts. Built on Whisper + Claude. Ships publicly once it clears firm publishing guidelines.
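For a sense of the shape of that pipeline, here is a minimal sketch, assuming the open-source openai-whisper package and the Anthropic Python SDK. The model names and the prompt are illustrative placeholders, not the production configuration.

```python
# Minimal sketch of the transcript-to-notes pipeline described above.
# Assumes `pip install openai-whisper anthropic`; model names and the
# prompt are illustrative, not the production setup.
import anthropic
import whisper

PROMPT = (
    "From the meeting transcript below, produce:\n"
    "1. A structured summary\n"
    "2. Action items, with owners where stated\n"
    "3. A decision log (decision, rationale, who made it)\n\n"
    "Transcript:\n{transcript}"
)

def meeting_notes(audio_path: str) -> str:
    # Step 1: speech-to-text with a local Whisper model.
    transcript = whisper.load_model("base").transcribe(audio_path)["text"]

    # Step 2: structure the raw transcript with Claude.
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2000,
        messages=[{"role": "user", "content": PROMPT.format(transcript=transcript)}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(meeting_notes("standup.mp3"))
```

One design note: running Whisper locally means only text, never audio, leaves the machine, which matters for recordings that can't be shared outside the firm's environment.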
An open framework for evaluating LLM outputs against domain-specific quality criteria. Designed to address the threshold problem: what does "good enough" mean for high-stakes professional work?
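The core idea, sketched below under stated assumptions: score an output against weighted, domain-specific criteria and gate on an explicit threshold. The criteria, weights, and threshold are hypothetical stand-ins, not the framework's actual API.

```python
# Illustrative sketch of the threshold idea: weighted criterion scores
# gated on an explicit "good enough" line. Criteria here are toy examples.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str
    weight: float
    score: Callable[[str], float]  # returns a value in [0.0, 1.0]

def passes(output: str, criteria: list[Criterion], threshold: float) -> bool:
    """Weighted average of criterion scores, compared to the threshold."""
    total_weight = sum(c.weight for c in criteria)
    weighted = sum(c.weight * c.score(output) for c in criteria) / total_weight
    return weighted >= threshold

# Hypothetical criteria for a contract-review summary.
criteria = [
    Criterion("cites_source_clauses", 0.5, lambda o: 1.0 if "Clause" in o else 0.0),
    Criterion("within_length_budget", 0.2, lambda o: 1.0 if len(o) <= 2000 else 0.0),
    Criterion("names_a_recommendation", 0.3, lambda o: 1.0 if "recommend" in o.lower() else 0.0),
]

draft = "We recommend renegotiating the indemnity terms in Clause 7.2."
print(passes(draft, criteria, threshold=0.85))  # True: weighted score is 1.0
```

The toy scorers are beside the point; what matters is that "good enough" becomes an explicit, auditable number instead of a feeling.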
A collection of AI tools targeting specific workflows where professional expertise creates the most value — and where manual effort creates the most drag.
// 07 — WRITING
Observations, frameworks, and arguments about AI in enterprise. Follow on LinkedIn for shorter takes while essays are in progress.
Why most enterprise AI investments fail not in model selection or integration, but in the final 20% between working prototype and organizational adoption.
The absence of rigorous evaluation frameworks is the most underreported problem in enterprise AI deployment — and it's causing billions in misallocated investment.
Why strategies that work at AI-first companies create debt when applied inside traditional enterprise — and what actually works instead.
// 08 — CONNECT