How I work
These are the convictions I've formed — and occasionally had to revise — over 18 years of building products in high-stakes domains. They're not a methodology. They're how I actually think.
Start with the problem, not the solution
The most expensive engineering mistakes I've witnessed didn't fail at implementation. They failed at problem definition. The team built the wrong thing with exceptional competence.
My instinct — developed through years of working in privacy, healthcare, and market research — is to slow down at the beginning. To resist the pull toward architecture diagrams and sprint planning until the team has a shared, honest understanding of what we're actually solving and for whom.
In practice, this means asking uncomfortable questions early: What does success look like in 18 months, not just at launch? Who is harmed if this goes wrong? What assumption, if false, invalidates everything we're about to build? These questions aren't obstacles to progress. They are progress.
Decide at the right altitude
Engineering organizations fail in two ways: leaders who won't make decisions, and leaders who make decisions that belong to the team. Both are forms of abdication, just in opposite directions.
Over time I've developed a working principle: a decision should be made at the lowest level where someone has both the context and the accountability to make it well. My job is to push decisions down whenever I can — and to step in when the stakes or the ambiguity require it.
When I do make a call — on architecture, on prioritization, on whether to ship or hold — I try to be explicit about the reasoning, not just the outcome. Teams that understand why a decision was made can adapt it. Teams that only know what was decided are fragile. Transparent reasoning compounds into a culture of sound judgment.
Build the team to outlast the plan
Product roadmaps are hypotheses. Markets shift. Business priorities change. The thing that lasts — the real organizational asset — is a team of people who know how to think clearly, communicate honestly, and adapt without losing their sense of craft.
Building that team is different from hiring. It requires deliberate investment in how engineers grow — not just their technical skills, but their judgment about tradeoffs, their comfort with ambiguity, their ability to disagree and commit. I've found that the most important coaching conversations aren't about performance. They're about how someone is thinking.
At Intel, I spent years developing engineers who went on to lead teams of their own. That compounding — watching someone you mentored become someone others seek out for mentorship — is the most durable metric I know in this work.
Collaborate across the whole problem
Engineering doesn't produce value in isolation. The products I'm proudest of were built in close, honest collaboration with product managers, designers, data scientists, and business stakeholders — relationships where each party understood enough of the other's domain to have real conversations.
My approach to cross-functional work is to invest heavily upfront in shared understanding. Not just shared requirements — shared mental models of the user, the market, and the constraints. When a product manager and an engineer are working from the same understanding of why something matters, tradeoff conversations stop being territorial and become genuinely productive.
I'm particularly attentive to the relationship between engineering and compliance in regulated domains. At McAfee and Philips, this relationship was existential. Treating legal and regulatory stakeholders as constraints to minimize, rather than partners to include early, is a failure mode I've seen play out badly — and worked hard to avoid.
Technology in service of the human outcome
I've worked at Intel long enough to be genuinely excited by what AI and distributed systems make possible. I've also worked in healthcare long enough to be permanently humble about the cost of getting it wrong.
The through-line in my career — consumer security, clinical imaging AI, privacy protection, market research — is software that has real human consequences. That doesn't mean moving slowly. It means moving deliberately. It means the question "what could go wrong for a user?" is never optional.
I try to instill this in the teams I lead. The goal isn't caution for its own sake. It's building things users can rely on — not just when they work, but in how they fail: gracefully, and by design.
What I optimize for — and consciously avoid
What I optimize for
- Long-term system health over short-term velocity
- Clear problem framing before solution design
- Team judgment that scales without me
- Psychological safety in technical review
- Transparent, reasoned decision-making
- Deep collaboration with product and design
- Architecture that survives the next two business pivots
- Honest failure analysis, not blame
- Early engagement with compliance and legal stakeholders
What I consciously avoid
- Metric theater that obscures real outcomes
- Architecture astronautics — complexity without necessity
- Decisions that belong at the team level made from above
- Velocity at the expense of sustainable pace
- Cargo-cult adoption of frameworks or processes
- Technical roadmaps disconnected from user value
- Hiring for loyalty rather than craft and character
- Post-mortems that produce documentation but no learning