Selected Experience

Four chapters that shaped how I lead

This isn't a complete job history. These are the moments — some triumphant, some instructive — that most clearly illustrate the decisions I've made as an engineering leader and why I made them.

SurveyMonkey 2024 – Present

Building new capability inside a mature product — without breaking what works

The challenge

Simultaneously managing three distinct products — SM Apply, Market Research Solutions, and Wufoo — each at a different stage of maturity, with shared infrastructure and a single global team.

The constraints

Legacy codebases that couldn't be easily rearchitected. Customer SLAs that couldn't slip. A market research landscape being reshaped in real time by AI.

The outcomes

35% increase in delivery velocity. 18% reduction in operating costs. A global hackathon win in 2024 for a next-generation market research platform built in days — then evolved into product.

When I joined SurveyMonkey's Bengaluru engineering organization, I inherited a team managing three very different products at once. Wufoo — a beloved, high-scale form builder with millions of users — was in maintenance mode. SM Apply was a complex workflow platform for grants and scholarships that was growing rapidly. Market Research Solutions was a newer platform trying to compete in a rapidly evolving space. Each demanded a different kind of leadership attention.

The temptation in a situation like this is to try to unify everything — a single architecture, a single delivery process, a single metric. That instinct is usually wrong. A high-scale legacy product and a growth-stage new product require fundamentally different operating rhythms, different risk tolerances, different conversations about what "quality" means. Flattening those differences creates noise, not clarity.

My decision was to be explicit about the portfolio structure: segment the team's attention deliberately, invest in clean interfaces between the products where they shared infrastructure, and protect Wufoo's stability while giving the Market Research and SM Apply teams space to move faster. This meant harder conversations about headcount allocation than most people expected from an incoming manager. But clarity about where we were and weren't investing was more useful than false balance.

The 2024 Global Hackathon was a signal moment. My team led the creation of a real-time market research platform capable of advanced client segmentation and live industry trend analysis — and won from a global field. The win mattered less than what it demonstrated: a team with real creative energy, capable of building beyond the roadmap when given the space to do so.

"Treating three products as one team is a management comfort. Treating them as three distinct bets — with shared talent and different rhythms — is how you actually make progress on all of them."

McAfee 2021 – 2024

Privacy as a product discipline — not a compliance checkbox

The challenge

Building AI-powered privacy protection products that could accurately identify exposure risk for millions of users across a constantly changing landscape of data brokers and online platforms.

The constraints

Adversarial targets: data brokers actively resist removal requests. Global regulatory variance. NLP pipelines that needed to stay accurate as the internet's structure changed constantly.

The outcomes

40% reduction in sensitive data exposure incidents. Two US patents. Consumer and enterprise-grade products deployed globally protecting millions of users.

The framing that most organizations use for privacy is defensive — minimize liability, meet regulatory requirements, don't make headlines. What McAfee was trying to do was different: make privacy protection a product that users actively valued, understood, and relied on.

That's a substantially harder problem. It requires building AI systems that are accurate, explainable, and actionable — not just technically sophisticated. A user who is told their privacy risk score is high needs to understand why, and needs a clear path to reduce it. That combination of technical depth and user experience clarity is genuinely difficult to build.

The Privacy Exposure and Digital Sanitization platform was the centerpiece of this work. We built NLP pipelines that could identify and classify user data across hundreds of data broker sites, compute a meaningful risk index, and trigger remediation workflows — automated where possible, guided where not. The adversarial dimension was constant: data brokers would restructure their pages, add bot-detection layers, and change their opt-out flows. Our systems had to adapt.

The patent — US 12314431 B2 — came from a question we kept asking: why do privacy products always react to threats rather than predict them? The system we designed quantified exposure before harm occurred, giving users a proactive view of their digital footprint. That shift from reactive to predictive was the fundamental engineering insight that drove the work.

The Social Protection product — an AI-driven social media privacy manager — added a different dimension: the social graph. Protecting users' privacy when the data in question belongs to them but lives on platforms they don't control required a different class of solutions. We used Go for its concurrency model across platform APIs, and built a multi-platform architecture that protected users across Facebook, Twitter, and LinkedIn through a unified experience.
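The concurrency pattern behind that multi-platform design can be sketched as a simple fan-out: one goroutine per platform API, with results gathered on a channel so a slow platform never blocks the others. This is an illustrative sketch, not the product's actual code — `scanPlatform` and the platform list are placeholders standing in for the real API integrations.

```go
package main

import (
	"fmt"
	"sync"
)

// scanPlatform is a placeholder for a per-platform privacy scan.
// In a real system this would call that platform's API and
// return findings; here it just reports completion.
func scanPlatform(platform string) string {
	return fmt.Sprintf("%s: scan complete", platform)
}

func main() {
	platforms := []string{"Facebook", "Twitter", "LinkedIn"}

	var wg sync.WaitGroup
	results := make(chan string, len(platforms))

	// Fan out: one goroutine per platform, so each scan
	// proceeds independently of the others.
	for _, p := range platforms {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			results <- scanPlatform(p)
		}(p)
	}

	// Wait for all scans, then close the channel so the
	// range loop below terminates.
	wg.Wait()
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```

The buffered channel sized to the platform count means no goroutine blocks on send, and the unified-experience layer consumes a single result stream regardless of how many platforms sit behind it.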

"The hardest part of building privacy AI wasn't the NLP. It was defining what 'good enough' looks like when the cost of a false negative isn't an unhappy user — it's someone's home address in the hands of someone who shouldn't have it."

Philips Healthcare 2019 – 2021

Building for clinical environments: where the user is an expert and the cost of error is real

The challenge

Delivering AI-assisted medical imaging workflows for radiologists across 15+ countries — users under time pressure, in high-stakes environments, with zero tolerance for unreliability.

The constraints

Clinical validation requirements. Deep integration with existing PACS systems. Global deployment across varying infrastructure maturity. Regulatory approval processes that operate on their own timeline.

The outcomes

Measurably reduced diagnosis turnaround times in partner hospitals. AI Assistant Widget adopted by medical practitioners for diagnostic image interpretation. Global rollout across healthcare systems in 15+ countries.

Healthcare software is a humbling domain for an engineer. The users are experts — often more expert in their domain than you will ever be. The consequence of a bad user experience isn't frustration. It's a radiologist missing a finding. It's a delay in a critical diagnosis.

At Philips, I led the Solution Acceleration Lab team responsible for the IntelliSpace PACS Advanced Workspace — a platform that radiologists use to manage and interpret medical imaging studies. The challenge was to introduce AI-assisted workflows into a product that clinicians had shaped their daily practice around for years — without disrupting that practice or introducing new failure modes.

The AI Assistant Widget was the most delicate piece of this. Medical image interpretation is not a problem where AI replaces the clinician — it's a problem where AI can surface information faster and flag anomalies that a fatigued radiologist working through a large study queue might otherwise miss. The design principle was: augment, don't automate. Every AI suggestion remained a suggestion. The radiologist made every call.

That principle — AI as cognitive assistant, not decision-maker — required constant discipline in how we shaped the engineering. There was always pressure to make the AI more assertive, more automated, because that made the efficiency story cleaner. We pushed back on that pressure consistently. The right metric wasn't how often the AI made a decision. It was how often it helped the human make a better one.

This experience permanently changed how I think about AI in high-stakes contexts. The engineering challenge is relatively straightforward compared to the design and ethical challenge: understanding precisely where AI adds value without creating new dependencies or failure modes that didn't exist before.

"Working in clinical software taught me that the most important engineering question is sometimes not 'how do we make this smarter?' but 'who should be making this decision, and are we helping them or replacing them?'"

Intel 2008 – 2019

Eleven years: from engineer to architect to leader — and what that arc actually teaches you

The challenge

Building consumer security products for global markets from the ground up — while simultaneously building the engineering organization capable of sustaining and extending them across a decade of platform shifts.

The constraints

Hardware-first corporate culture. Global telecom partnerships with demanding integration requirements. Consumer trust that took years to earn and could be lost in a single incident.

The outcomes

Safe Family deployed globally through telecom partnerships. Mobile Anti-theft platform protecting users across multiple platforms. Six innovation awards. Two patents filed. A 40+ person global engineering organization built and led.

Eleven years at a single company can look like inertia from the outside. From the inside, it was the most compressed education in engineering leadership I could have designed for myself. I held four distinct roles at Intel — and more importantly, I made the mistakes that belong to each level of seniority, learned from them, and moved on.

As a senior engineer, I learned that clever code is rarely good code. The parental control system we were building — Safe Family — needed to work across wildly different device environments, carrier networks, and user demographics. Elegance was a liability. Clarity and reliability were the only things that mattered.

As an architect, I learned that the most important design decisions aren't about data structures or API contracts. They're about organizational seams: where do responsibilities hand off? What does this system assume about the team that will maintain it in three years? The Mobile Anti-theft platform I architected had to work in markets where device theft was endemic and mobile connectivity was unreliable. The reliability requirements pushed us toward a hybrid architecture well before that was a fashionable term.

As a manager, I learned that my judgment about technical decisions was simultaneously my greatest asset and my most significant liability. The teams I led had to be able to make good technical decisions without me in the room — which meant I needed to invest as much in how they thought about problems as in the specific decisions they made. That shift took longer than I expected and is still the most important thing I've learned.

The Safe Family parental control platform represented the full arc. I helped design the initial system. I architected its evolution to a microservices model as it scaled. I managed the team that deployed it globally through partnerships with major telecom operators. By the end, the part I was most proud of wasn't the technology — it was the team that had grown up around it, most of whom had taken on significant leadership responsibilities of their own.

"The measure of a long career at one company isn't tenure. It's whether you actually grew — whether the decisions you make at year 10 are qualitatively different from the ones you made at year 1. I know they are. I have the failed experiments and the corrected instincts to prove it."

The thinking behind the work

Two long-form essays on engineering leadership — written from experience, not from a framework.

Read my writing