On Building Things That Matter
What 18 years in high-stakes domains taught me about the difference between shipping and delivering.
The question that haunts good engineers isn't "did it ship?" — it's "did it matter?"
I've shipped hundreds of features across my career. A handful of them genuinely mattered. Understanding the difference took me most of those 18 years, and I'm not entirely sure I have it right even now. But I have a working theory, built from the particular domains I've worked in: consumer security, clinical AI, privacy protection, market research. Each of them has a way of clarifying what engineering responsibility actually means.
The work that stays with me isn't the technically impressive stuff. It's the moment a hospital in Singapore shortened a diagnostic turnaround because our imaging workflow ran more efficiently. It's a family in Germany whose children had safer internet access because of a parental control platform we built at Intel. It's someone who, without knowing it, had a data broker's record of their home address scrubbed from the internet because our privacy product quietly did its job. None of those moments made it into a press release. Very few made it into a performance review. But they're why I still find this work meaningful after nearly two decades.
The seduction of complexity
Engineering culture has a complicated relationship with impact. We celebrate the architecturally elegant solution, the system that handles ten million requests per second, the codebase that future engineers will admire. These things have their place. But they can also be elaborate distractions.
I spent years at Intel working on consumer security products — anti-theft mechanisms, parental controls, device protection. The technical problems were interesting. But the more interesting question was always: would someone actually use this? And if they did, would it protect them?
That's a different kind of problem. It requires talking to customers, not just reading usage metrics. It requires empathy for people who don't think about software the way we do. It requires resisting the urge to add capability and instead fighting — sometimes quite hard — for simplicity. The most important architectural decision I made during those years wasn't about databases or microservices. It was choosing what to leave out.
When stakes are permanent
Healthcare changed how I think about engineering responsibility.
At Philips, we were building tools for radiologists — the IntelliSpace PACS Advanced Workspace, an AI-assisted diagnostic imaging platform. The users were clinical professionals under enormous time pressure. Every workflow decision had downstream consequences: how quickly a critical finding was surfaced, how a subtle anomaly was flagged, whether a busy oncologist had the information they needed at precisely the moment they needed it.
There's no user story format that captures what it feels like to build software that a doctor relies on. No sprint review conveys the weight of knowing that a latency problem in your system might have real consequences in a clinical environment.
I'm not saying this to dramatize it. I'm saying it because it recalibrated my entire approach to engineering quality. When you've worked in a domain where the cost of a bug is measured in something other than revenue, your standards shift permanently. That shift followed me to McAfee, to SurveyMonkey, into every conversation I have with an engineering team about what "good enough" actually means.
Privacy as a human problem
Privacy protection isn't life-or-death in the same way as clinical software, but it isn't trivial either. Stalking victims. Domestic abuse survivors. Activists. Teenagers navigating the internet for the first time. These are real people whose safety can be compromised by a failure in a data sanitization pipeline, a false negative in exposure detection, a lag in a remediation workflow.
When we were building the Privacy Exposure platform at McAfee, I kept coming back to a specific failure mode: the system returns a false negative. It tells a user their exposure is low when it isn't. That user, feeling protected, stops looking. The data broker who has their home address continues operating freely. There's no alarm, no error log, no on-call page. Just a person who trusted our product and wasn't protected by it.
That failure mode defined our engineering priorities more than any product requirement document. We treated false negatives as worse than false positives — the opposite of the usual optimization direction. We built the remediation workflows to be aggressive. We invested heavily in adversarial testing: our own red-team would attempt to maintain data broker listings that our system was supposed to catch. Every gap in that testing was a real-world vulnerability.
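To make the "false negatives are worse" idea concrete, here's a minimal sketch of how an asymmetric error-cost policy shifts a detection threshold. This is illustrative only, not the actual McAfee system; the function name, weights, and example scores are my own assumptions.

```python
def choose_threshold(scored_examples, fn_cost=10.0, fp_cost=1.0):
    """Pick the score cutoff that minimizes total weighted error cost.

    scored_examples: list of (score, is_exposed) pairs, where score is
    the detector's confidence that a record is a real exposure.
    Setting fn_cost >> fp_cost pushes the cutoff lower, so the system
    flags borderline records rather than risk missing a real one.
    """
    candidates = sorted({score for score, _ in scored_examples})
    best_cutoff, best_cost = None, float("inf")
    for cutoff in candidates:
        cost = 0.0
        for score, is_exposed in scored_examples:
            flagged = score >= cutoff
            if is_exposed and not flagged:
                cost += fn_cost  # missed exposure: the silent failure
            elif flagged and not is_exposed:
                cost += fp_cost  # false alarm: annoying but visible
        if cost < best_cost:
            best_cutoff, best_cost = cutoff, cost
    return best_cutoff
```

With a symmetric cost (fn_cost == fp_cost), the sweep favors a high cutoff that quietly drops a borderline real exposure; weighting false negatives heavily pulls the cutoff down to catch it. That inversion of the usual optimization direction is the whole point.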
The patent that took years to understand
In 2022, I was granted a US patent for a system that computes privacy exposure risk indices for online entities. It sounds technical and abstract. Let me tell you what it actually is.
The internet has a long memory. Every form you've ever filled out, every service you signed up for and forgot, every time a data broker scraped your information from a public record — it accumulates. Most people have no idea what their digital footprint looks like, which means they have no way to manage it. The system we built analyzed this digital exhaust, scored the risk it represented, and gave users a clear picture of their exposure — and a path to reduce it.
The idea came from a persistent frustration: we kept building reactive security products. Things that responded to threats after they materialized. What if we built something preventive? What if we could help someone understand their exposure before it became a problem?
It took years to get right. The NLP pipelines went through dozens of iterations. The scoring model required both technical accuracy and user intelligibility — a harder combination than it sounds. The user experience had to make a technically complex concept legible to someone who just wants to feel safer online.
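The shape of that scoring problem can be sketched in a few lines: weighted exposure signals rolled into one index, then translated into wording a user can act on. To be clear, the signal names, weights, and bands below are invented for illustration; they are not the patented model.

```python
# Illustrative weights for common exposure signals (assumptions,
# not the production scoring model).
EXPOSURE_WEIGHTS = {
    "home_address_listed": 0.35,
    "phone_number_listed": 0.20,
    "email_in_breach": 0.25,
    "relatives_linked": 0.10,
    "employer_listed": 0.10,
}

def exposure_index(signals):
    """Combine boolean exposure signals into a 0-100 risk index."""
    raw = sum(weight for name, weight in EXPOSURE_WEIGHTS.items()
              if signals.get(name, False))
    return round(100 * raw)

def exposure_band(index):
    """Translate the numeric index into language a user can act on."""
    if index >= 60:
        return "high"
    if index >= 30:
        return "moderate"
    return "low"
```

The hard part was never the arithmetic. It was choosing signals and weights that were defensible, and bands that told someone the truth without either alarming or falsely reassuring them.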
When it worked, it worked. And I understood, for the first time, what it meant to hold a patent not as a credential, but as a record of a problem genuinely solved.
What impact actually looks like
Eighteen years in, I've stopped being impressed by scale metrics. Millions of users, petabytes of data, 99.99% uptime — these numbers matter operationally, but they're not what I mean by impact.
Impact is when a product changes someone's behavior in a way that makes their life meaningfully better. Impact is when a team delivers something they're genuinely proud of. Impact is when the next person who inherits the codebase — or the organization — finds it in better shape than they expected.
That last one matters more than people admit. The invisible work of building maintainable systems, scalable teams, and clear processes — work that no press release covers — is what determines whether an organization can keep building things that matter over years, not quarters.
My job, as I've grown into more senior leadership roles, has increasingly become creating the conditions for others to build well. That's less glamorous than individual contribution. It requires a different kind of attention: to culture, to communication, to the quiet removal of obstacles that slow teams down before those obstacles become visible to anyone else.
But it's the work that compounds. And after 18 years, I've developed a strong preference for work that compounds.
The question isn't just "did it ship?" or even "did it matter?" The question I now ask is: did it leave things better than we found them? That's the standard I've learned to hold myself — and the teams I lead — to.