A walkthrough of the public Stripboard benchmark
on the indie-feature fixture — a 24-scene, 4-location, 92-page synthetic
production with a 5-person cast and an 18-shoot-day target. The fixture
ships in the Stripboard repo; you can clone it and re-run everything below
with `pnpm bench indie-feature`. Numbers on the live benchmark page are
regenerated from the JSON output of that command, never hand-edited.
This post walks through one fixture so the rest of the benchmark page
makes sense. If you only have ten minutes, skip straight to
/benchmark and run the
harness yourself; this post exists to supply the narrative.
- 24 scenes across four locations: cabin (8 scenes), forest (8), town (4), highway (4).
- Five named cast: hero (in every scene), rival, mentor, sidekick, love, plus a single-day cameo.
- Hard constraint: mentor has a hard cast-out on day 10 (a real-world stand-in for an actor with a back-to-back booking).
- Soft constraints: cluster scenes by location to minimise company moves, batch day-vs-night, and keep cast hold days under three for everyone except hero.
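As a sketch of how constraints like these are typically encoded, the following TypeScript mirrors the list above. Every field name here is hypothetical; the fixture's real schema lives in `constraints.json` in the Stripboard repo, so check that before relying on any of these names.

```typescript
// Hypothetical shape for the fixture's constraints -- illustrative only,
// not the actual Stripboard schema.
interface CastOut {
  castId: string;   // cast member who must wrap
  lastDay: number;  // final shoot day they are available
}

interface Constraints {
  hard: { castOuts: CastOut[] };
  soft: {
    clusterByLocation: boolean;            // minimise company moves
    batchDayNight: boolean;                // group day strips with day, night with night
    maxHoldDays: Record<string, number>;   // per-cast hold-day cap
  };
}

const indieFeature: Constraints = {
  hard: { castOuts: [{ castId: "mentor", lastDay: 10 }] },
  soft: {
    clusterByLocation: true,
    batchDayNight: true,
    // under three hold days for everyone except hero, who works every day
    maxHoldDays: { rival: 3, mentor: 3, sidekick: 3, love: 3 },
  },
};

console.log(indieFeature.hard.castOuts[0].lastDay); // 10
```

The useful distinction is structural: hard constraints are feasibility (a schedule violating them is invalid), while soft constraints are terms in the solver's cost function.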
This is the schedule a working 1st AD might build by hand in Movie Magic after about half a day of strip-shuffling. It clusters cabin scenes in the first window (the cabin is the most location-constrained), then forest, then mixes town and highway on the back half once the bigger cast is gone. Mentor exits after day 10 as required; cabin is empty from day 5 onward; the day/night ratio is the natural mix of the script.
Day/night break inside one location is a typical company-day pattern.
Cabin wraps. From day 5 the company moves to forest.
Note the single-strip day — mentor and sidekick are both inbound for sc-23, so it gets its own slot rather than pairing with a non-mentor scene.
You can see the AD’s reasoning in the layout: cluster the
location-constrained scenes first, get them done before the cast list
expands, save the more flexible town and highway scenes for the back
half when only hero is on call. This is the work hand-scheduling does
well — it bakes in judgement about which constraints matter most on
your particular shoot.
The interesting part of the benchmark isn’t whether the solver
beats the AD on raw days saved (often it doesn’t, on a
well-built ground truth). It’s what happens when the constraints
change. A re-schedule after a cast-availability bump or weather day
takes a 1st AD anywhere from 2–8 hours in Movie Magic, because every
move ripples and there’s no "solve again" button. The solver
re-runs in milliseconds.
So the case-study question for the indie-feature fixture isn’t
"who built the better baseline?" — it’s "what does it cost to
move?" Drop a hard cast-out for love on day 8 and re-solve:
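To make "what does it cost to move" concrete, here is a toy sketch of the first step a re-solve implies. This is not the Stripboard solver (which optimises moves, holds, and batching jointly via simulated annealing), and every name in it is invented; it only answers the narrow question a new hard cast-out raises: which strips are now pinned past the cut-off and must move?

```typescript
// Toy feasibility check, not the real solver. Given a new hard cast-out,
// find the scenes that now sit after the cast member's last available day.
interface Strip {
  scene: string;
  cast: string[];   // named cast appearing in the scene
  day: number;      // currently scheduled shoot day
}

function pinnedByCastOut(strips: Strip[], castId: string, lastDay: number): Strip[] {
  // Every scene using this cast member after their last day must move.
  return strips.filter(s => s.cast.includes(castId) && s.day > lastDay);
}

const schedule: Strip[] = [
  { scene: "sc-07", cast: ["hero", "love"],  day: 6 },
  { scene: "sc-12", cast: ["hero", "love"],  day: 9 },  // violates the new cast-out
  { scene: "sc-19", cast: ["hero", "rival"], day: 11 },
];

// Drop a hard cast-out for love on day 8, as above:
const mustMove = pinnedByCastOut(schedule, "love", 8);
console.log(mustMove.map(s => s.scene)); // [ 'sc-12' ]
```

The hand-scheduling cost comes from what happens next: every strip that moves perturbs location clustering, day/night batching, and hold-day counts for everyone else, which is exactly the ripple a solver re-runs in milliseconds.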
The honest caveat: on the published fixture, the solver's cost function currently optimises aggressively for the simulated-annealing objective and does not yet penalise single-day collapse hard enough on this fixture class. See the /benchmark page for the current per-fixture numbers and for where the solver wins and loses today. That's the whole point of shipping the harness publicly: the methodology and the numbers move together, in the open.
Three places, on this fixture and in general:
`.sex` round-trip with the union ecosystem. EP Payroll, IATSE
paymasters, and completion bond auditors all read `.sex`. We
read `.sex` (so you can ingest an existing schedule into the
solver), but we don't write it. That's an intentional
scoping choice; see the Stripboard FAQ.

If you re-run this benchmark on your own fixture, two things will bite you:
constraints.json so the solver can't produce a
schedule the AD couldn't.

Both are documented in the methodology section on the benchmark page. If something looks wrong on your fixture, file an issue with the JSON output and the host string; three of the four scheduler bugs we've found in the beta came in that way.
The benchmark, the fixtures, the solver, and this site are all Apache-2.0. The single CTA on this post is the same one on the landing page — there’s no signup gate, no email capture, no "book a demo" button.
git clone https://github.com/jmnprlabs/jmnpr.git
cd jmnpr && pnpm install
pnpm --filter @jmnpr/production build
pnpm bench indie-feature
That writes packages/production/bench/results/indie-feature.json.
Diff it against the same file on this commit; numbers should match
within wall-time noise. Then go to
stripboard.jmnpr.co/benchmark
and verify the published numbers came from the same JSON.
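The "match within wall-time noise" check above can be sketched as a comparison that ignores timing fields and requires every scheduling number to match exactly. The field names below (`daysUsed`, `holdDays`, `wallTimeMs`) are assumptions for illustration; check the actual keys in your `indie-feature.json` before adapting this.

```typescript
// Compare two benchmark result objects, tolerating only timing noise.
// Field names are hypothetical -- inspect indie-feature.json for the real keys.
type BenchResult = Record<string, number>;

const TIMING_KEYS = new Set(["wallTimeMs"]); // allowed to differ between runs

function sameUpToTiming(a: BenchResult, b: BenchResult): boolean {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  for (const k of keys) {
    if (TIMING_KEYS.has(k)) continue;   // wall-time noise is expected
    if (a[k] !== b[k]) return false;    // any scheduling number must match
  }
  return true;
}

const published = { daysUsed: 18, holdDays: 7, wallTimeMs: 412 };
const local     = { daysUsed: 18, holdDays: 7, wallTimeMs: 388 };
console.log(sameUpToTiming(published, local)); // true
```

If this returns false on a clean clone at the pinned commit, that is a reportable discrepancy, not noise.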