Remote: Yes (US time zones; experienced leading fully remote and hybrid teams)
Willing to relocate: No
Data & AI leader | Healthcare technology executive with 15+ years building data strategies, AI/ML/LLM systems, and digital transformation for PE-backed healthcare and multi-site retail.
Technologies:
- Data & AI (AI/ML/LLM implementation, predictive analytics, model calibration)
- Data Engineering (DuckDB, dbt, Parquet, modern data pipelines)
- Full-stack (Python, Ruby, Go, JavaScript/TypeScript, SQL, HTML/CSS)
- Cloud & Infrastructure (AWS, Azure, Terraform, distributed systems)
- Analytics & Observability (dashboards, real-time monitoring)
# What I’ve built so far
ELO translated to the NFL with margin-of-victory adjustments, a modest home-field term, and week-to-week recency weighting.
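A minimal sketch of the per-game update I mean, assuming a FiveThirtyEight-style MOV damping; the K-factor, home-field bonus, and other constants are placeholders rather than tuned values, and the week-to-week recency weighting (regressing ratings between weeks) isn't shown:

```python
import math

def expected_score(rating_home: float, rating_away: float, home_field: float = 48.0) -> float:
    """Home team's win expectancy; home_field is a rating-point bonus (placeholder value)."""
    diff = (rating_home + home_field) - rating_away
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def mov_multiplier(point_margin: int, winner_rating_diff: float) -> float:
    """Margin-of-victory multiplier, damped so blowouts by heavy favorites don't inflate ratings."""
    return math.log(abs(point_margin) + 1.0) * (2.2 / (0.001 * winner_rating_diff + 2.2))

def update(rating_home: float, rating_away: float, home_pts: int, away_pts: int,
           k: float = 20.0) -> tuple[float, float]:
    """One game's rating update with a modest home-field term and MOV adjustment."""
    exp_home = expected_score(rating_home, rating_away)
    actual_home = 1.0 if home_pts > away_pts else 0.5 if home_pts == away_pts else 0.0
    margin = home_pts - away_pts
    winner_diff = (rating_home - rating_away) if margin >= 0 else (rating_away - rating_home)
    delta = k * mov_multiplier(margin if margin else 1, winner_diff) * (actual_home - exp_home)
    return rating_home + delta, rating_away - delta

# e.g. a 1520-rated home team beating a 1480-rated visitor by ten
print(update(1520.0, 1480.0, 27, 17))
```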
Post-hoc calibration with isotonic regression so 70% predictions land near 0.70 empirically.
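That layer is a thin wrapper over scikit-learn's isotonic fit; a sketch with made-up numbers standing in for a backtest window:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# raw_probs: model win expectancies from past games; outcomes: 1 if that team won, else 0.
# Illustrative arrays; in practice these come from a rolling backtest window.
raw_probs = np.array([0.55, 0.70, 0.62, 0.81, 0.45, 0.30, 0.68, 0.90])
outcomes  = np.array([1,    1,    0,    1,    0,    0,    1,    1   ])

calibrator = IsotonicRegression(out_of_bounds="clip", y_min=0.0, y_max=1.0)
calibrator.fit(raw_probs, outcomes)

# Map new raw probabilities to calibrated ones, so a nominal 0.70 means ~70% empirically.
print(calibrator.predict(np.array([0.70, 0.55])))
```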
Monte Carlo to roll games forward for distributions on weekly win odds and season outcomes, plus basic reliability/Brier/log-loss tracking.
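The simulation piece is just repeated Bernoulli draws over the remaining schedule, plus scoring; a toy version that assumes calibrated per-game win probabilities are already in hand:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_season(win_probs: np.ndarray, n_sims: int = 10_000) -> np.ndarray:
    """Roll the remaining games forward; returns the distribution of total wins across sims."""
    draws = rng.random((n_sims, len(win_probs))) < win_probs  # one Bernoulli draw per game per sim
    return draws.sum(axis=1)

def brier_score(probs: np.ndarray, outcomes: np.ndarray) -> float:
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return float(np.mean((probs - outcomes) ** 2))

# Hypothetical remaining schedule for one team: calibrated win probabilities per game.
win_probs = np.array([0.62, 0.48, 0.71, 0.55, 0.35, 0.80])
wins = simulate_season(win_probs)
print("P(>= 4 more wins):", (wins >= 4).mean())

# Brier score over a (tiny, illustrative) set of past forecasts.
print("Brier:", brier_score(np.array([0.72, 0.55, 0.31]), np.array([1, 1, 0])))
```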
# Where I’m taking it (ensemble ideas)
Blend a few complementary signals: (1) pure ELO strength; (2) schedule-adjusted EPA/Success Rate features; (3) injury/QB continuity and rest/travel effects; (4) a small “market prior” from closing lines; (5) weather, play-style, and pace features.
Combine via a simple stacked model (regularized logistic, isotonic on top), or a Bayesian hierarchical model that lets team effects evolve with partial pooling.
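A minimal sketch of that stacking shape, assuming each signal from the list above has already been reduced to a per-game feature; the column meanings and numbers are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

# Hypothetical base signals per game: ELO win expectancy, schedule-adjusted EPA differential,
# a QB-continuity/rest flag, and a market-implied probability from closing lines.
X = np.array([
    [0.62,  0.10, 1, 0.65],
    [0.48, -0.05, 0, 0.45],
    [0.71,  0.20, 1, 0.74],
    [0.35, -0.15, 0, 0.30],
    [0.55,  0.02, 1, 0.58],
    [0.80,  0.25, 1, 0.78],
])
y = np.array([1, 0, 1, 0, 1, 1])

# Regularized logistic blender (C controls the L2 penalty strength).
blender = LogisticRegression(C=1.0)
blender.fit(X, y)
raw = blender.predict_proba(X)[:, 1]

# Isotonic layer on top; in practice fit this on held-out folds, not the training rows.
iso = IsotonicRegression(out_of_bounds="clip").fit(raw, y)
print(iso.predict(raw))
```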
Separate models for win prob vs. expected margin, then reconcile with a consistent link so the two don’t disagree.
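One way to keep the two heads from disagreeing is a shared normal-CDF link between expected margin and win probability; sigma below is a rough historical stand-in for the spread of NFL margins, not a fitted value:

```python
from math import erf, sqrt
from statistics import NormalDist

# Rough, illustrative spread of NFL game margins; a fitted model would estimate this.
SIGMA = 13.0

def margin_to_win_prob(expected_margin: float, sigma: float = SIGMA) -> float:
    """Win probability implied by an expected point margin under a normal-margin assumption."""
    return 0.5 * (1.0 + erf(expected_margin / (sigma * sqrt(2.0))))

def win_prob_to_margin(p: float, sigma: float = SIGMA) -> float:
    """Inverse link: the expected margin consistent with a given win probability."""
    return sigma * NormalDist().inv_cdf(p)

print(margin_to_win_prob(3.0))    # a 3-point favorite lands around 0.59
print(win_prob_to_margin(0.70))   # ~6.8 points under the same assumption
```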
Emphasis on calibration over leaderboard-chasing: reliability diagrams, ECE, PIT histograms, and backtests that penalize regime drift.
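ECE at least is simple enough to hand-roll (reliability diagrams and PIT histograms are essentially plots of the same binned quantities); a sketch with illustrative data:

```python
import numpy as np

def expected_calibration_error(probs: np.ndarray, outcomes: np.ndarray, n_bins: int = 10) -> float:
    """ECE: average |empirical win rate - mean forecast| across probability bins, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            ece += mask.mean() * abs(outcomes[mask].mean() - probs[mask].mean())
    return float(ece)

# Illustrative forecasts and outcomes; real use would be a rolling backtest window.
probs = np.array([0.72, 0.65, 0.58, 0.80, 0.40, 0.33, 0.69, 0.55])
outcomes = np.array([1, 1, 0, 1, 0, 1, 1, 0])
print(expected_calibration_error(probs, outcomes, n_bins=5))
```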
# Why I’m doing it
It’s a sandbox to teach myself Monte Carlo and ELO end-to-end—data ingest → feature plumbing → simulation → calibration → eval—on a domain with immediate feedback every week.
# How this connects to my day job (healthcare ops)
I work at BlueSprig, running ~150 ABA therapy clinics. I’m exploring whether ELO-like ideas can augment ops decisions:
“Strength” ratings for clinics, care teams, or scheduling templates based on outcome deltas and throughput (margin-of-victory ≈ effect size/efficiency); a toy sketch of this follows below.
Monte Carlo for expansion planning (new-site ramp curves), capacity/OT forecasting, and risk-adjusted outcome monitoring with calibration so probabilities mean something.
Guardrails for fairness and interpretability so ratings don’t become blunt scorecards.
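To make the analogy concrete, and purely as a thought experiment with invented names and numbers rather than anything running at work: an ELO-style nudge where the “margin of victory” becomes a standardized effect size against a peer baseline might look like this:

```python
def update_clinic_rating(rating: float, baseline: float, effect_size: float,
                         k: float = 8.0) -> float:
    """Thought experiment only: nudge a clinic's 'strength' rating toward or away from a
    peer-group baseline, with the move scaled by a standardized effect size (the MOV analogue)."""
    expected = 1.0 / (1.0 + 10.0 ** ((baseline - rating) / 400.0))
    # "Did the clinic beat the peer baseline this period?", weighted by how much.
    actual = 1.0 if effect_size > 0 else 0.0
    return rating + k * (1.0 + abs(effect_size)) * (actual - expected)

# Hypothetical clinic tracked over three review periods against a peer baseline of 1500.
rating = 1500.0
for effect in [0.4, -0.1, 0.25]:   # standardized outcome deltas (made-up numbers)
    rating = update_clinic_rating(rating, baseline=1500.0, effect_size=effect)
print(round(rating, 1))
```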
# Help
If you’ve shipped calibrated ensembles in sports, or have pointers on applying rating systems to multi-site healthcare operations, I’d love to trade notes. And if you need someone to do this kind of work (and more) as their day job, email me at mgracepellon@gmail.com; I would love to do this full-time.
Résumé/CV: https://michellepellon.com
Email: mgracepellon at gmail dot com