The healthcare problems I care about,
and how I'm starting to solve them.
Three problems in healthcare, three working prototypes — neonatal monitoring, SaMD regulatory infrastructure, clinical evidence design. Built in code, grounded in published standards and clinical literature.
SaMD-OS · 5 specialist agents
Pathway Designer · FHIR CarePlan
SpO2 pipeline · Babysat
Nourish · flash_crash
SaMD teams can't submit on time because their regulatory artifacts are disconnected from their codebase.
SaMD ships every sprint. The 510(k) shows up months later, drawn from spreadsheets and Word docs that drifted the day they were written. SaMD-OS makes the artifacts a build output: generated from the codebase, reviewed against pinned standards, and updated when the code changes.
Docs run on a separate track from code.
SaMD ships continuously. Most teams still keep design controls in spreadsheets, risk files in Word, and SOUP lists in someone's head. A 510(k) pulls from all of it at once, and none of it was built to stay in sync.
Artifacts drift the day they're written.
By the time you submit, the codebase has moved. Teams that shipped first have no real traceability between production and the docs regulators will see. The visible cost is delays and FDA Additional Information (AI) requests. The bigger cost, in my experience, is QA leads burning out trying to reconcile it all by hand.
Treat regulatory documentation as a build artifact.
SaMD-OS generates IEC 62304 design controls, ISO 14971 hazard analyses, and SOUP registers directly from the codebase. The drafts then go through five specialist reviewers (regulatory, QA, safety, cybersecurity, clinical), each grounded in a pinned standard so verdicts are repeatable run after run.
For teams already in production, three reverse-engineering paths recover what should already exist: code → SOUP, code → design inputs, code → hazard candidates.
Each reviewer reads from a canonical reference (FDA guidance, an ISO standard, or peer-reviewed literature) pinned to a specific version, so its judgment is stable across runs. Your team's own QA decisions and meeting notes feed back into the review context, so institutional memory lives in the repo instead of one person's head.
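The code → SOUP path can be sketched in a few lines. This is a minimal illustration, not SaMD-OS itself: the manifest format (pinned Python requirements) and the register field names are assumptions, and a real IEC 62304 SOUP register still needs intended use and known-anomaly review filled in by a reviewer.

```python
# Minimal sketch of the "code -> SOUP" reverse-engineering path:
# derive SOUP register rows from a pinned requirements file.
# Field names and the TODO placeholders are illustrative.
import csv, io, re

def parse_requirements(text: str) -> list[dict]:
    """Turn pinned requirements lines into SOUP register rows."""
    rows = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()      # drop comments
        m = re.match(r"([A-Za-z0-9_.\-]+)==([\w.]+)", line)
        if not m:
            continue                           # skip unpinned deps
        rows.append({
            "soup_item": m.group(1),
            "version": m.group(2),
            "intended_use": "TODO: reviewer fills in",
            "anomaly_review": "TODO: link to issue-tracker query",
        })
    return rows

def to_csv(rows: list[dict]) -> str:
    """Emit the register as CSV for the design history file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

reqs = "numpy==1.26.4\nrequests==2.31.0\npytest  # unpinned, skipped\n"
register = parse_requirements(reqs)
```

The point of the sketch is the shape of the pipeline, not the parser: the register is derived from the same manifest the build uses, so it cannot drift from what actually ships.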
A regulatory CI/CD layer.
Pilot with SaMD teams approaching a 510(k) to confirm the artifacts hold up end-to-end. Wire the reviewers into the pull-request loop so every design-control update ships with the code that triggered it. Extend coverage from FDA to EU MDR and IVDR.
Where this goes: regulatory CI/CD. Every commit knows which design input it satisfies, which SOUP component it touches, and which hazard it might introduce. The six-month pre-submission scramble becomes something that's already done.
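One way "every commit knows which design input it satisfies" could look in practice, as a sketch: a hypothetical commit-trailer convention (`Design-Input:`, `Hazard:`) enforced by a CI check before merge. The trailer names and ID formats are invented for illustration, not part of any standard.

```python
# Sketch of a pull-request gate for regulatory CI/CD. Assumes each
# commit message carries trailers linking it to the design history
# file, e.g.:
#
#   Design-Input: DI-042
#   Hazard: HAZ-007
#
# Trailer names and ID formats are illustrative.
import re

REQUIRED_TRAILERS = {"Design-Input": r"DI-\d+", "Hazard": r"HAZ-\d+"}

def check_commit(message: str) -> list[str]:
    """Return the missing or malformed trailers for one commit."""
    problems = []
    for name, pattern in REQUIRED_TRAILERS.items():
        m = re.search(rf"^{name}:\s*(\S+)", message, re.MULTILINE)
        if m is None:
            problems.append(f"missing trailer {name}")
        elif not re.fullmatch(pattern, m.group(1)):
            problems.append(f"malformed {name} id: {m.group(1)}")
    return problems

good = "Fix alarm debounce\n\nDesign-Input: DI-042\nHazard: HAZ-007\n"
bad = "Fix alarm debounce\n"
```

A gate like this is what turns traceability from an end-of-project reconciliation task into a per-commit invariant.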
Zero studies of connected strength-training devices in the literature: the evidence exists, but no product bridges it.
Resistance training is first-line therapy for sarcopenia. Supervised adherence sits at 72%. Once supervision ends, it falls to 43%. The Clinical Pathway Designer closes that gap with a shared device data spine so the provider and patient see the same signal between visits.
Resistance training works, but adherence collapses without supervision.
Resistance training is first-line therapy for sarcopenia (ICFSR 2018, strong recommendation). Supervised adherence runs 72%. Once supervision ends (insurance won't reimburse it long-term), adherence drops to 43%. Function (gait speed, grip, TUG) responds strongly to progressive loading. Muscle-mass measures don't move.
No connected device closes the loop between provider and patient.
Across 34 digital interventions for adults 60+, none use a connected strength device. Apps coach. Wearables count steps. None of them surface actual session data (load, sets, compliance) to the provider. The bottleneck is the silent eight months between discharge from supervised therapy and the next functional decline.
A shared device data spine for provider and patient.
The Clinical Pathway Designer links the provider workflow and the patient journey through three shared signals: load progression, session compliance, and baseline strength. Both sides see the same view at every visit. Goals are functional only (gait speed, grip, TUG); body composition stays out of the targets because the evidence doesn't support it.
Every phase of the pathway maps to a specific ICFSR adherence barrier: cost, transportation, lack of support. Seven evidence anchors (E1–E7) trace each design decision back to a published source. An illustrative FHIR CarePlan shows what an EHR-shippable structure looks like.
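To make "EHR-shippable structure" concrete, here is an illustrative FHIR R4 CarePlan fragment built as a plain Python dict. The patient reference, goal IDs, and activity description are placeholders; in a real integration the goals would be separate FHIR Goal resources with coded functional targets (gait speed, grip strength, TUG).

```python
# Illustrative FHIR R4 CarePlan for the sarcopenia pathway.
# IDs and references are placeholders, not real records.
import json

care_plan = {
    "resourceType": "CarePlan",
    "status": "active",
    "intent": "plan",
    "title": "Sarcopenia resistance-training pathway",
    "subject": {"reference": "Patient/example"},
    # Goals are functional only; body composition stays out.
    "goal": [
        {"reference": "Goal/gait-speed"},
        {"reference": "Goal/grip-strength"},
        {"reference": "Goal/timed-up-and-go"},
    ],
    "activity": [
        {
            "detail": {
                "kind": "ServiceRequest",
                "status": "in-progress",
                "description": (
                    "Progressive resistance training; load and "
                    "session compliance reported by connected device"
                ),
            }
        }
    ],
}

print(json.dumps(care_plan, indent=2))
```

Because CarePlan is a standard FHIR resource, the same structure the pathway designer emits is the one an EHR can ingest, which is the whole point of the shared data spine.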
From design pathway to fielded clinical evidence.
Pilot with one geriatric clinic and one connected-strength-device manufacturer to validate the pathway end-to-end. Build the FHIR CarePlan integration into a real EHR so adherence data shows up where the provider already looks.
Where this goes: a closed-loop sarcopenia pathway where the device is part of the medical record, providers see adherence during chart review, and reimbursement follows function-restoration outcomes instead of visit volume.
380,000 preterm infants go home each year with no personalized monitoring.
Home pulse oximeters fire constant alarms because they use one-size-fits-all thresholds. Babysat builds personalized baselines from each infant's own data and triggers on the depth and duration of an event, which is what actually matters clinically.
One-size-fits-all thresholds at home.
Each year, 380,000 preterm infants are discharged with bronchopulmonary dysplasia (BPD) or congenital heart disease (CHD). Most go home on fixed SpO2 thresholds that ignore gestational age. A 90% alarm on a 34-week preemie fires constantly.
Alarm fatigue hides the events that matter.
A preterm infant's normal SpO2 range (92–95%) would read as borderline hypoxemia in a term infant, whose typical range is 97–100%. Fixed thresholds either fire on every breath or miss real desaturation events. Sensor accuracy isn't the issue. Clinically significant desaturation is defined by depth and duration, and home monitoring doesn't model either one.
Personalize the baseline. Trigger on physiology.
Babysat builds 14-day rolling baselines from each infant's own data, using GA-adjusted reference ranges validated against Castillo 2008 and Dawson 2010. Alarms only fire when both depth and duration cross significance, so a momentary dip on a healthy baby doesn't wake the household.
Clinical narratives translate the raw numbers into something parents can act on. Provider handoff reports include a BISQ-R sleep assessment and tokenized viewer access for the next clinician on the chart.
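The depth-and-duration trigger can be sketched as follows. This is a minimal illustration under assumed parameters (a 5-point depth below baseline, a 3-sample duration), not the clinically tuned logic, and the 14-day rolling baseline is taken as already computed.

```python
# Sketch: alarm only when SpO2 stays deep enough below the infant's
# own baseline for long enough. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Baseline:
    mean_spo2: float  # infant's own 14-day rolling mean (%)

def alarm_events(samples, baseline, depth_pct=5.0, min_duration=3):
    """Yield (start, end) index pairs where SpO2 stays at least
    depth_pct points below baseline for min_duration+ samples."""
    threshold = baseline.mean_spo2 - depth_pct
    start = None
    for i, s in enumerate(samples):
        if s < threshold:
            if start is None:
                start = i                      # excursion begins
        else:
            if start is not None and i - start >= min_duration:
                yield (start, i)               # sustained event
            start = None                       # momentary dip: reset
    if start is not None and len(samples) - start >= min_duration:
        yield (start, len(samples))

# A one-sample dip below threshold is ignored; a sustained one fires.
base = Baseline(mean_spo2=93.0)
trace = [93, 92, 87, 93, 86, 85, 84, 86, 93, 93]
events = list(alarm_events(trace, base))
```

Requiring both conditions is what separates a wiggle of the sensor from an event worth waking the household for.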
From home monitoring to a clinical evidence stream.
Validate the personalized-baseline approach in a small NICU follow-up cohort, benchmarked against fixed-threshold practice. Wire the HL7v2 interop into a real EHR pilot so triage flags appear inside the existing nursing workflow instead of yet another app.
Where this goes: every infant's home monitoring data becomes part of the clinical record. The next provider doesn't start blind, and population baselines refine themselves as more infants are discharged.