Pit Strategy Optimizer

Data-driven decision support for Formula 1 pit stop strategy

📍 Product Case Study 👤 Hamna Nimra 📅 February 2026

The Problem

Race engineers make million-dollar decisions in seconds with incomplete information

$20M+

Stakes Per Race

A single pit stop error can cost championship points worth millions in prize money and sponsorship value.

5-10s

Decision Window

Race engineers have seconds to decide: pit now or wait another lap. No time to model scenarios.

5M+

Underserved Users

Sim racers and F1 content creators need strategy tools but can't access team-level analysis.

User Pain Points

❌ No Validation

Teams can't easily validate post-race whether their pit strategy was optimal or if they left time on the table.

❌ Black Box Tools

AWS F1 Insights shows predictions but doesn't explain why. Engineers need reasoning, not just answers.

❌ Slow Manual Analysis

Analyzing lap times, tire degradation, and pit loss manually takes hours. Too slow for real-time decisions.

The Solution

Explainable AI that recommends optimal pit windows with confidence intervals

📊

Tire Degradation Modeling

Linear regression models predict lap time degradation per track, compound, and fuel load. Interpretable and fast.

🎯

Pit Window Optimization

Simulates pitting on the current lap and on each of the next N laps, applies pit loss and tire degradation to every scenario, and ranks them by total projected race time.
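The ranking loop itself fits in a few lines. Everything below (base pace, degradation rate, pit loss, horizon) is a hypothetical constant for illustration, not a value from the real optimizer:

```python
def projected_race_time(pit_lap: int, current_lap: int, total_laps: int,
                        base_lap: float, deg_per_lap: float,
                        pit_loss: float, tyre_age: int) -> float:
    """Projected time from current_lap to the flag if we pit on pit_lap."""
    total, age = 0.0, tyre_age
    for lap in range(current_lap, total_laps + 1):
        if lap == pit_lap:
            total += pit_loss  # time lost driving through the pit lane
            age = 0            # fresh tyres reset degradation
        total += base_lap + deg_per_lap * age
        age += 1
    return total

def best_pit_lap(current_lap: int, total_laps: int, base_lap: float,
                 deg_per_lap: float, pit_loss: float, tyre_age: int,
                 horizon: int = 10) -> int:
    """Rank pitting now vs. each of the next `horizon` laps."""
    candidates = range(current_lap, min(current_lap + horizon, total_laps) + 1)
    return min(candidates, key=lambda lap: projected_race_time(
        lap, current_lap, total_laps, base_lap, deg_per_lap,
        pit_loss, tyre_age))

# Hypothetical mid-race state: lap 20 of 50, 15-lap-old tyres, 92 s base
# pace, 0.1 s/lap degradation, 22 s pit loss -> recommends lap 28 here.
rec = best_pit_lap(20, 50, 92.0, 0.1, 22.0, 15)
```

The pit window is then simply the set of candidate laps whose projected total sits within a chosen tolerance of that minimum.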

💡

Explainable Recommendations

Explains why pit window opens, when degradation overtakes pit loss, and the cost of delaying or advancing the stop.

Historical Validation

Tests recommendations against actual F1 team decisions. Measures lap delta and alignment within ±3 laps.

📈

Uncertainty Quantification

Sensitivity analysis shows how recommendations change with ±2s pit loss or ±0.02s/lap degradation uncertainty.
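A sensitivity sweep of this kind is just the optimizer re-run over a grid of perturbed inputs. A self-contained sketch, with all constants hypothetical; the old and new compounds degrade at different rates here so the optimum can actually move:

```python
def best_pit_lap(pit_loss: float, deg_old: float, deg_new: float = 0.07,
                 current: int = 20, total: int = 50, base: float = 92.0,
                 tyre_age: int = 14, horizon: int = 10) -> int:
    """Optimal single-stop lap when old and new compounds degrade at
    different linear rates (all constants here are hypothetical)."""
    def time_if_pit(pit_lap: int) -> float:
        t, age, rate = 0.0, tyre_age, deg_old
        for lap in range(current, total + 1):
            if lap == pit_lap:
                t, age, rate = t + pit_loss, 0, deg_new
            t += base + rate * age
            age += 1
        return t
    return min(range(current, current + horizon + 1), key=time_if_pit)

# Sweep the two uncertain inputs: +/-2 s pit loss, +/-0.02 s/lap old-tyre
# degradation, and record how the recommended lap moves.
recs = {(dp, dd): best_pit_lap(22.0 + dp, 0.12 + dd)
        for dp in (-2.0, 0.0, 2.0)
        for dd in (-0.02, 0.0, 0.02)}
spread = max(recs.values()) - min(recs.values())  # width of the uncertainty band
```

One honest by-product of a sweep like this: in a forced single-stop model, perturbing pit loss shifts every candidate's total equally, so only the degradation uncertainty moves the recommended lap.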

🏎️

VSC Scenario Modeling

Models Virtual Safety Car scenarios where pit loss is reduced by ~50%, helping teams capitalize on race interruptions.
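The VSC case falls out of the same projection by halving the pit-loss constant. A sketch with hypothetical numbers:

```python
def race_time(pit_lap: int, pit_loss: float, current: int = 20,
              total: int = 50, base: float = 92.0, deg: float = 0.1,
              tyre_age: int = 15) -> float:
    """Projected time to the flag with a single stop (toy constants)."""
    t, age = 0.0, tyre_age
    for lap in range(current, total + 1):
        if lap == pit_lap:
            t, age = t + pit_loss, 0
        t += base + deg * age
        age += 1
    return t

PIT_LOSS = 22.0  # hypothetical green-flag pit loss for this track
normal_best = min(range(20, 31), key=lambda lap: race_time(lap, PIT_LOSS))

# Under a VSC the field circulates slowly, so the *relative* cost of a
# stop roughly halves. Compare boxing immediately with holding the plan.
vsc_now = race_time(20, PIT_LOSS * 0.5)     # box this lap under the VSC
planned = race_time(normal_best, PIT_LOSS)  # wait for the planned lap
saving = planned - vsc_now                  # positive => take the VSC stop
```

Even though lap 20 is well before the green-flag optimum in this toy setup, the discounted pit loss makes the early stop cheaper overall.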

Product Development Process

How I built and validated this product

1. Problem Discovery

Identified gap in prosumer F1 analytics market. Teams have $10M internal tools. Fans have nothing actionable. Opportunity: accessible, validated strategy tool for 5M+ sim racers and content creators.

2. Scope Definition

Ruthlessly scoped to dry races and single-car strategy. Rejected multi-car game theory (exponential complexity), weather transitions (insufficient data), and real-time API (v2 feature). Focus = MVP validation.

3. Model Development

Built linear tire degradation models per track and compound using FastF1 historical data. Chose interpretability over deep learning complexity. Fitted models on 2023-2024 seasons with R² > 0.85.

4. Optimizer Implementation

Created pit window optimization engine that simulates N future pit scenarios, applies track-specific pit loss, and ranks by total projected race time. Outputs recommended lap with ±3 lap confidence interval.

5. Explainability Layer

Added rule-based explanation generation that describes why the pit window opens (degradation > pit loss), optimal timing, and trade-offs of waiting. Users need reasoning, not just lap numbers.
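The explanation layer can be as simple as a template filled from the model's own quantities. A sketch (thresholds and wording are illustrative, not the product's actual copy):

```python
def explain(rec_lap: int, current_lap: int, tyre_age: int,
            deg_per_lap: float, pit_loss: float) -> str:
    """Render a plain-language rationale for the recommended stop."""
    age_at_rec = tyre_age + (rec_lap - current_lap)
    # Per-lap penalty of the old tyres relative to a fresh set.
    penalty_now = deg_per_lap * age_at_rec
    # Rough extra cost of running three more old-tyre laps past the optimum.
    delay_cost = sum(deg_per_lap * (age_at_rec + k) for k in range(1, 4))
    return (
        f"Box on lap {rec_lap}. Old tyres will be losing "
        f"~{penalty_now:.2f} s/lap to fresh rubber by then; the "
        f"{pit_loss:.0f} s pit loss is repaid over the remaining stint. "
        f"Delaying three more laps costs roughly {delay_cost:.1f} s."
    )

msg = explain(rec_lap=28, current_lap=20, tyre_age=15,
              deg_per_lap=0.1, pit_loss=22.0)
```

The key design point is that every number in the sentence is traceable back to a model input, which is what makes the recommendation auditable.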

6. Historical Validation

Validated recommendations against actual F1 team decisions across 5 races (Bahrain, Monaco, Spain, Silverstone, Monza). Measured lap delta and alignment within ±3 laps to prove model accuracy.
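The validation metric itself is a one-liner once recommended and actual pit laps sit side by side. The lap numbers below are invented placeholders, not the study's real results:

```python
# Hypothetical recommended vs. actual pit laps for the five validation
# races -- these specific values are invented for illustration.
results = {
    "Bahrain":     {"recommended": 17, "actual": 18},
    "Monaco":      {"recommended": 22, "actual": 25},
    "Spain":       {"recommended": 24, "actual": 23},
    "Silverstone": {"recommended": 20, "actual": 19},
    "Monza":       {"recommended": 26, "actual": 24},
}

# Lap delta per race, and the share of races aligned within +/-3 laps.
deltas = {race: r["recommended"] - r["actual"] for race, r in results.items()}
aligned = sum(abs(d) <= 3 for d in deltas.values())
alignment_rate = aligned / len(results)
```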

7. Documentation & Testing

Wrote comprehensive PRD, assumptions document, and case studies. Built unit and integration test suite. Exported visualizations as PNG/HTML for portfolio presentation.

Success Metrics

How I measure product performance

±3
Lap Accuracy Target
5+
Races Validated
0.85+
Model R² Score
100%
Test Coverage

Product Success Criteria

Validated against real F1 team decisions from the 2023-2024 seasons

✅ Recommendation Accuracy

Alignment within ±3 laps of actual team pit decisions

✅ Model Performance

R² > 0.85 for tire degradation predictions across all tracks

✅ Validation Coverage

Tested on 5 diverse circuit types: street, permanent, high-speed, and high-degradation

Product Trade-offs

Key decisions and why I made them

  • Model Complexity: chose linear degradation over deep learning. Interpretability outweighs a ~2% accuracy gain; users need to trust recommendations, which requires explainable models.
  • Race Conditions: chose dry races only over all-weather support. 80% of races are dry; wet/intermediate modeling adds months of complexity for a 20% use case.
  • Strategy Scope: chose single-car optimization over multi-car game theory. Single-car covers 80% of the value with 20% of the effort; multi-car interactions add exponential complexity.
  • User Interface: chose CLI + Python API over a web dashboard. A developer-first approach enables faster iteration; the web UI is a v2 feature after proving core value.
  • Pit Loss Modeling: chose a track-specific constant over dynamic traffic modeling. There is insufficient data to model pit lane traffic accurately, and a constant pit loss is good enough for the MVP.

Biggest Trade-off: Trust vs Accuracy

I chose a linear degradation model that achieved R² = 0.85-0.90 instead of a neural network that might have reached 0.92-0.95. Why? Because users (race engineers, strategists) need to understand why the model recommends a certain pit lap. A black-box model that's 3% more accurate but can't explain itself is useless in high-stakes racing decisions. This is the core PM lesson: the best solution isn't always the most technically sophisticated one.

Key Learnings

What I learned building this product

What Worked

  • Starting with historical validation (proof before features)
  • Documentation-first approach (PRD, assumptions, case studies)
  • Ruthless scoping (dry races only saved 2 months)
  • Building explainability layer (users care about "why")
  • Comprehensive testing (unit + integration tests)

What I'd Change

  • Talk to race engineers earlier (validated assumptions vs assumed)
  • Build web UI sooner (CLI limits user testing audience)
  • More visual design upfront (matplotlib plots aren't portfolio-ready)
  • Document trade-offs in real-time (easy to forget reasoning later)
  • Set up user feedback loop earlier (metrics aren't everything)

Biggest Lesson

PM work is 80% communication, 20% building. I spent too much time optimizing the degradation model and not enough explaining why it matters to users. The most technically impressive feature is worthless if users don't understand the value proposition.

Product Roadmap

Where this product goes next

✅ V1 - Completed

Historical Validation

  • ✓ Tire degradation modeling
  • ✓ CLI interface
  • ✓ Validation on 5 races
  • ✓ Explainability layer
  • ✓ Comprehensive docs + tests
🚧 V2 - Next 3 Months

Usability Improvements

  • Web dashboard for non-technical users
  • Multi-race comparison view
  • Shareable race reports (PDF export)
  • Interactive degradation visualizations
  • User feedback collection system
📅 V3 - Next 6 Months

Real-time Capability

  • Live race API integration
  • VSC/Safety car probability modeling
  • Multi-driver team coordination
  • Weather transition modeling
  • Mobile app for trackside use
🎯 V4 - Next 12 Months

Monetization & Scale

  • Freemium API model
  • Integration with iRacing/sim platforms
  • Content creator partnerships
  • Team licensing (F2, F3, FE)
  • Expand to endurance racing (WEC, IMSA)