Heidi · Product Manager · Americas Health Systems

I build
the thing
I spec.

PM who ships production code with Claude Code and Cursor. Six apps in 30 days. Two in healthcare. All solo. When I write a PRD, I know what I'm asking for — because I've built it myself.

206
Commits
30 days
168
Sessions
Claude Code
6
Products
All shipped
0
Templates
From scratch
What Transfers

Three themes Heidi cares about.

Every project below was a real product with real users. Here's what I learned building them — and why it maps to health system deployments.

01
Healthcare + AI
I've already built
in your domain
Two production apps in clinical monitoring. I've worked with health data pipelines, AI-powered clinical narratives, and evaluation frameworks for patient-facing outcomes. Not theoretical — shipped and used.
Nourish
Infant monitoring app. Custom SVG charts for growth tracking, personalized baselines, and AI-generated clinical narratives via Claude API that summarize weeks of vitals into clinician-ready text.
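The narrative feature boils down to turning a week of structured vitals into a prompt an LLM can summarize. A minimal sketch of that step, assuming a hypothetical (day, weight, SpO2) row shape that is my guess at the data Nourish holds, not its actual schema:

```python
def weekly_narrative_prompt(vitals):
    """Condense a week of readings into a prompt for an LLM summary.

    `vitals` rows are (day, weight_g, avg_spo2) tuples; this schema is
    illustrative only. The returned string would then be sent to the
    Claude API, whose response becomes the clinician-facing narrative.
    """
    lines = [f"{day}: weight {w} g, mean SpO2 {s}%" for day, w, s in vitals]
    return (
        "Summarize the following week of infant vitals for a clinician. "
        "Flag trends, not single readings.\n" + "\n".join(lines)
    )
```

The design point is that the app, not the model, decides what data the narrative is grounded in.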
SpO2 Monitor
Python data pipeline for infant pulse oximetry. AI-powered evaluation of oxygen saturation patterns with visualization dashboards. Turns raw sensor data into actionable clinical insights.
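The core pattern in a pipeline like this, scanning raw readings for sustained dips rather than single-sample blips, is sketched below. The threshold and minimum-duration values are illustrative placeholders, not clinical guidance:

```python
from dataclasses import dataclass

@dataclass
class Desat:
    start: int      # index of first low reading
    duration: int   # consecutive samples below threshold
    nadir: float    # lowest SpO2 in the event

def find_desats(readings, threshold=90.0, min_samples=3):
    """Flag sustained dips below an SpO2 threshold.

    `threshold` and `min_samples` are made-up defaults; real clinical
    limits come from the care team, not the code.
    """
    events, run = [], []
    for i, spo2 in enumerate(readings):
        if spo2 < threshold:
            run.append((i, spo2))
        else:
            if len(run) >= min_samples:
                events.append(Desat(run[0][0], len(run), min(v for _, v in run)))
            run = []
    if len(run) >= min_samples:  # handle a dip that runs to end of trace
        events.append(Desat(run[0][0], len(run), min(v for _, v in run)))
    return events

# One sustained dip is flagged; the single-sample blip at index 6 is not
trace = [97, 96, 88, 87, 86, 95, 89, 96, 97]
events = find_desats(trace)
```

Separating the event from the single reading is exactly the "signal vs. noise" distinction a clinician-facing tool has to get right.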
Why this matters at Heidi
I know what "clinical workflow" means
I've observed how data flows from device → pipeline → evaluation → clinician. When I go on-site at a health system, I won't be learning the vocabulary.
I've built AI evals, not just used AI
SpO2's evaluation pipeline checks AI output against clinical baselines. I understand how to assess whether an LLM's output is clinically useful vs. just fluent.
I treat compliance as a product constraint
Health data isn't regular data. Both apps were designed with data sensitivity as a first-class concern, not an afterthought.
02
Data Fluency
I build the dashboards
I read
Composite scoring, historical comparisons, trend analysis, and interactive data viz. Your JD says "read adoption dashboards and distinguish a training problem from a product problem." I've built tools that do exactly that.
SKODcast
QB draft intelligence platform. Composite scoring across 5 data sources, historical comps, style drift analysis, and tiered rankings. Data viz with Chart.js for pattern recognition across draft classes.
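Composite scoring follows one pattern: scale each signal to a common range, weight it, combine. A rough sketch, where the signal names, weights, and bounds are invented for illustration and not SKODcast's actual model:

```python
def composite_score(signals, weights, bounds):
    """Combine multiple raw signals into one 0-100 score.

    signals: {name: raw value}
    weights: {name: relative weight} (normalized internally)
    bounds:  {name: (lo, hi)} used to min-max scale each signal
    """
    total_w = sum(weights.values())
    score = 0.0
    for name, raw in signals.items():
        lo, hi = bounds[name]
        scaled = max(0.0, min(1.0, (raw - lo) / (hi - lo)))  # clamp to [0, 1]
        score += (weights[name] / total_w) * scaled
    return round(score * 100, 1)

# Hypothetical QB signals; names and numbers are illustrative only
signals = {"accuracy": 68.0, "pressure_rate": 0.31, "epa_per_play": 0.12}
weights = {"accuracy": 0.5, "pressure_rate": 0.2, "epa_per_play": 0.3}
bounds  = {"accuracy": (50, 80), "pressure_rate": (0.2, 0.5), "epa_per_play": (-0.2, 0.4)}
```

An adoption dashboard is the same math with different nouns: the weights encode what the team believes matters, which is why reading one critically means asking how it was built.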
Meal Planner
Domain-heavy data modeling. 4 measurement systems, 128-item produce database, package-size intelligence. Proof I can model complex domains and build structured data flows end-to-end.
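The conversion pattern underneath this kind of modeling is to route every unit through a canonical base unit per dimension. A minimal sketch, covering only a few common units with standard factors:

```python
# Canonical base units: grams for weight, milliliters for volume.
# The dimension split is the point: you can convert oz -> kg,
# but never cup -> lb without knowing the ingredient's density.
TO_BASE = {
    "weight": {"g": 1.0, "kg": 1000.0, "oz": 28.3495, "lb": 453.592},
    "volume": {"ml": 1.0, "l": 1000.0, "tsp": 4.92892, "cup": 236.588},
}

def convert(value, from_unit, to_unit):
    for dim, factors in TO_BASE.items():
        if from_unit in factors:
            if to_unit not in factors:
                raise ValueError(f"cannot convert {dim} unit {from_unit!r} to {to_unit!r}")
            return value * factors[from_unit] / factors[to_unit]
    raise ValueError(f"unknown unit {from_unit!r}")
```

Refusing the cross-dimension conversion, instead of guessing, is the "clean abstraction over messy real-world data" move that EMR integration work demands.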
Why this matters at Heidi
Diagnostic data fluency
SKODcast's composite scoring is the same pattern as adoption dashboards — combine multiple signals, weight them, surface what matters. I built the instrument, not just read it.
I can model complex domains
Meal Planner's unit conversion system (metric, imperial, volume, weight) mirrors the kind of domain logic Heidi hits with EMR integrations — messy real-world data that needs clean abstractions.
I know when a chart is lying
Building data viz from raw data means I understand what gets lost in aggregation. When I see an adoption dashboard, I ask "what's this not showing me?" before "what does this show?"
03
Ship Under Pressure
50 users. One night.
No second chance
War Room shipped for NFL draft night 2026 — a hard deadline with 50+ concurrent users who would immediately find every bug. Build-vs-configure trade-offs with real stakes.
War Room
Real-time multiplayer draft companion. Commissioner controls, bracket scoring, ESPN probability engine, live reactions. Firebase RTDB for instant sync. Shipped solo, used live, zero downtime on draft night.
Flash Crash
iOS SpriteKit game with Bloomberg terminal aesthetics. StoreKit 2, GameKit leaderboards, App Store submission. Proof I can ship across platforms, not just web.
Why this matters at Heidi
I make build-vs-configure trade-offs daily
War Room: Firebase over custom WebSockets (speed to ship). Room codes over OAuth (zero friction for users). ESPN scraper over manual entry (accuracy at scale). Each choice was a deployment decision.
I've shipped for real deadlines with real users
Draft night doesn't move. 50 people expected it to work. This is the same pressure as a health system go-live — the deadline is the deployment, and rollback isn't an option.
First deployment informs the next
Patterns from War Room (room-based architecture, commissioner roles, real-time sync) became reusable modules for future projects. Deployment problems became configuration problems.
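The room-code choice above is the kind of trade-off that replaces an auth system with a few lines. A sketch of the idea, where the alphabet and length are my guesses at a sensible default, not War Room's actual values:

```python
import secrets

# Ambiguous characters (0/O, 1/I/L) removed so a code survives being
# read aloud on draft night; 30 characters over six positions gives
# roughly 729 million possible codes.
ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"

def new_room_code(length=6):
    """Generate an unguessable, speakable join code for a room."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Zero friction for users, no password reset flow to build, and `secrets` keeps the codes unguessable; the cost is that a code is a bearer token, which is an acceptable trade for a one-night draft party.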
How I Build

Claude Code is my engineering team.

Your JD says "build with AI tools and show what you've made." Here's 30 days of building — not a demo, not a tutorial, not a hackathon project. Production code, pushed to production, used by real people.

168
Sessions
Claude Code sessions in 30 days. Not prompting for fun — shipping features, fixing bugs, deploying infrastructure.
206
Commits
~7 commits per day. Each one a shipped change — feature, fix, or deploy.
403
Hours
Logged in Claude Code. Full-stack: frontend, backend, infra, data, deploy.
What I do with AI tools
End-to-end shipping — code, test, deploy, DNS, SEO, monitoring. Not just "write me a function."
Course-correct constantly — redirected Claude from wrong approaches 25 times in 30 days. The AI proposes, I decide.
Build-vs-configure — chose incremental migration over greenfield rewrite. Assembled fixes from existing capabilities before requesting new features.
Multi-project infrastructure — Cloudflare Pages, custom domains, D1 databases, CI/CD pipelines. Six apps on one platform.
How this translates to Heidi
I spec what I can build — my PRDs are grounded in implementation reality, not wishful thinking.
I unblock myself — when the AI gives me the wrong adapter, I debug it. When a deploy fails, I diagnose the root cause. I don't file a ticket and wait.
I prototype fast — if a health system needs a custom pre-charting workflow, I can build a working proof in hours, not weeks.
I know when to stop building — the skill isn't coding. It's knowing when to configure, when to build, and when to say "this isn't worth shipping."
Straight from the machine

These are direct quotes from my Claude Code Insights report — an automated analysis of 168 sessions over 30 days. I didn't write these. The tool analyzed my usage patterns and produced them.

Interaction Style
“High-velocity product shipper who gives Claude clear directives, course-corrects aggressively when it veers off track, and pushes to production relentlessly — treating it as a capable but opinionated junior dev that needs firm supervision.”
What's Working
“You've effectively used Claude as a full-stack shipping partner, going from feature code to Cloudflare deployment to SEO and DNS setup, building out a real multi-project infrastructure largely through conversational iteration.”
Product Instinct
“That instinct to redirect from a greenfield rewrite to an incremental refactor, or shut down unsolicited architecture advice, is why your sessions land so well.”
Solo Shipping
“You run a tight feedback loop where you test changes visually, give precise corrections, and push updates rapidly — often cycling through UI tweaks across multiple rounds in a single session.”
Solo, Not Alone

I built the team I don't have.

Your JD asks: "Can you execute without a legion of data analysts, product marketers, and research coordinators?" I built a PM operating system with specialist AI agents that fill the roles a solo builder can't hire. Every project runs through the same quality gates a full team would enforce.

Staff Engineer
Code Reviewer
Five-axis review: correctness, readability, architecture, security, performance. Categorizes every finding as Critical, Important, or Suggestion. Cites file and line number.
Verdict: approve / request changes
Security Engineer
Security Auditor
OWASP Top 10, auth patterns, input validation, data protection. Requires proof-of-concept for Critical/High findings. Three-tier boundary system: Always Do / Ask First / Never Do.
Severity: critical → info
QA Engineer
Test Engineer
Coverage analysis, test pyramid strategy (unit > integration > E2E). Prove-It Pattern for bugs: write the failing test first, then fix. Outputs recommended tests with priority.
Output: coverage + test plan
Domain Expert
Clinical Reviewer
Modeled on a neonatal pulse oximetry specialist. Reviews threshold validity, alarm accuracy, signal vs. noise, handoff quality, and clinical safety for health-facing features.
Verdict: acceptable / clinically unsafe
Data Scientist
Data Scientist
Reviews data pipelines, model evaluation, statistical rigor, and visualization quality. Catches leakage, vanity metrics, and notebook thinking. Every metric needs a baseline and business translation.
Score: 0–10 rubric + verdict
Every project runs this pipeline
/concept-gen → /rapid_prototype → /review-arch → Build → /review-code → Ship
41 skills · 17 knowledge frameworks · 5 specialist agents · 1 persistent operating system
Let's talk
I want to make
health systems
nicer.

The prospect of making health systems a lot nicer makes me feel warm and fuzzy inside. I'd love to talk about how I can help Heidi strengthen the human connection at the heart of healthcare.