About

Architecture is a set of decisions under constraint.

I started as a backend developer. Then data engineering. Then systems architecture across big tech — the kind of work where the decisions you make in week one are still load-bearing in year three. Over twenty years I've sat at the table when the choice was "do we move to Kafka or can we get another year out of this batch pipeline," and I've seen what happens when you get that call wrong.

Now I'm Head of AI Products and Head of Product at a marketing technology company, running a parallel studio of startups on the side. The common thread across all of it is what it always was: systems that work at scale, measurement that's honest, and the discipline to not ship something until it's actually ready.

This portfolio exists because a resume doesn't communicate architectural thinking. You can list ClickHouse and Snowflake and Druid on a resume. What you can't list is the twenty minutes at a whiteboard explaining why you'd choose one over another for a specific workload at a specific scale — and whether you've seen what happens when you get it wrong. That's what these case studies are for.


How I operate

Product Manager: Day job. Roadmaps, specs, stakeholder alignment, prioritization. Systems thinking applied to product problems.
Builder: Startups, consulting sites, personal tools. Ship fast, park cleanly, resume without ceremony.
Content Creator: Writing, multi-channel publishing, building in public. Still developing this muscle deliberately.
Learner: Reading, research, staying current. Synthesis is the output — not just consumption.
Life Ops: Finance, family, network, health, admin. Necessary, not always loved, never optional.

How I think about data infrastructure

The warehouse isn't a technology choice, it's a strategic one. When a team says "we use Snowflake," I want to know whether they're using it as a storage layer, a compute layer, or an activation layer — because those are three different architectural decisions with very different implications for cost, latency, and organizational capability.

Real-time is a spectrum. The question isn't "do we need real-time" — it's "what's the actual latency requirement, and what's the operational cost of maintaining it." Druid and Pinot make sense for sub-second OLAP at scale. Snowflake makes sense for batch/historical work. ClickHouse makes sense when you need both and you're willing to pay the operational overhead. Most teams don't think clearly about where they actually sit on that spectrum before they commit to a stack.
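The spectrum argument above can be caricatured as a toy decision function. The engine-to-workload mapping follows the paragraph directly; the one-second cutoff and the function shape are illustrative assumptions, not a real selection tool:

```python
def suggest_store(p99_latency_s: float, needs_realtime_and_batch: bool) -> str:
    """Toy sketch: map a workload's latency requirement to a candidate engine.

    Thresholds are illustrative; the point is that the requirement,
    not the technology, drives the choice.
    """
    if needs_realtime_and_batch:
        # Covers both modes, but you accept the operational overhead.
        return "ClickHouse"
    if p99_latency_s < 1.0:
        # Sub-second OLAP at scale.
        return "Druid/Pinot"
    # Batch / historical work.
    return "Snowflake"

print(suggest_store(0.2, False))    # sub-second dashboard queries
print(suggest_store(3600.0, False)) # hourly reporting is fine
```

The useful exercise isn't the function itself, it's forcing a team to name the actual numbers that go into its arguments before committing to a stack.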

Measurement is the hardest problem. Not because the technology is hard — the tech is mostly solved — but because clean measurement requires organizational discipline to run experiments correctly, patience to let them run long enough, and honesty to report what the data actually says when it contradicts what marketing wants to hear.
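To make the "tech is mostly solved" point concrete: the statistical core of a conversion-rate A/B readout is a standard two-proportion z-test that fits in a few stdlib lines (variable names here are illustrative). The hard part is everything around it, such as running the experiment correctly and reporting the result honestly:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    Returns (z, p_value). Standard pooled-variance formulation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability of a standard normal.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Identical rates: no evidence of a difference.
z, p = two_proportion_z(50, 1000, 50, 1000)
print(round(p, 3))  # 1.0

# Doubled conversion rate at the same sample size: strong evidence.
z, p = two_proportion_z(50, 1000, 100, 1000)
print(p < 0.01)  # True
```

None of this is novel, which is the point: the math is a solved commodity, while deciding the sample size up front, resisting the urge to peek early, and publishing the number when it disappoints are organizational problems.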

Currently

  • Head of AI Products + Head of Product at a marketing technology company
  • Running a portfolio of startup projects on the side (Orino AI, SaasMatchup, others)
  • Targeting Snowflake Industry Architect — the intersection of data infrastructure and customer-facing advisory
  • Building in public when time allows