Most prioritisation frameworks ask you to score ideas on a subjective 1-10 scale. The Impact-Probability (IP) framework, popularised by growth operator Josh Lachkovic of Ballpoint, takes a different approach: it replaces subjective scores with actual financial forecasts.
The result is a single number — expected value — that tells you what each experiment is worth in real terms.
How the IP framework works
The IP framework evaluates experiments using just two inputs:
- Impact — the forecast uplift in your core metric (e.g. contribution margin, revenue, leads) if the experiment succeeds
- Probability — the percentage likelihood the experiment will produce that result
Multiply these together and you get the expected value (EV) of the experiment:
Expected Value (EV) = Forecast uplift × Probability of success
That number is all you need to rank your backlog.
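As a minimal sketch of what this looks like in practice, here is a backlog ranked by expected value in Python. The experiment names and figures are hypothetical, not from the source:

```python
# Minimal sketch: rank a backlog of experiments by expected value (EV).
# All experiment names and figures below are hypothetical.

def expected_value(forecast_uplift: float, probability: float) -> float:
    """EV = forecast uplift (in £) x probability of success (0-1)."""
    return forecast_uplift * probability

backlog = [
    ("Rewrite pricing page", 120_000, 0.35),
    ("New onboarding email", 40_000, 0.70),
    ("Checkout redesign", 200_000, 0.15),
]

# Highest EV first — this ordering is the prioritised backlog.
for name, uplift, prob in sorted(
    backlog, key=lambda e: expected_value(e[1], e[2]), reverse=True
):
    print(f"{name}: EV = £{expected_value(uplift, prob):,.0f}")
```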
A worked example
Imagine you are optimising for contribution margin and have two experiment ideas to choose from:
Experiment A: You believe you could double your conversion rate, adding £250,000 in contribution margin next month. But the probability of that happening is only 20%.
- EV = £250,000 × 0.20 = £50,000
Experiment B: A more modest change that could generate £75,000 in contribution margin, but you estimate an 80% probability of success.
- EV = £75,000 × 0.80 = £60,000
Despite Experiment A having a far higher ceiling, Experiment B has the higher expected value. Start with B.
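The same comparison, sketched in code using the figures above:

```python
# Experiment A: big swing, low odds. Experiment B: modest uplift, high odds.
ev_a = 250_000 * 0.20   # £50,000
ev_b = 75_000 * 0.80    # £60,000
print(f"A: £{ev_a:,.0f}  B: £{ev_b:,.0f}  ->  run {'A' if ev_a > ev_b else 'B'} first")
# A: £50,000  B: £60,000  ->  run B first
```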
Expected value thinking forces you to weigh upside against likelihood, rather than chasing the biggest possible win regardless of the odds.
What makes IP different from ICE
The ICE framework scores ideas on Impact, Confidence, and Ease — each rated 1-10. The IP framework strips this down to two factors and replaces subjective scores with actual numbers.
| | ICE | IP |
|---|---|---|
| Inputs | Impact, Confidence, Ease (1-10 each) | Impact (£), Probability (%) |
| Output | Score out of 1,000 | Expected value in £ |
| Ease/Effort | Included | Excluded |
| Subjectivity | High (three subjective scores) | Lower (anchored to forecasts) |
The most notable difference is that IP drops the Ease component entirely. Lachkovic’s reasoning is straightforward: when performance is deteriorating, the difficulty of an experiment matters less than whether it will actually move the needle.
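To make the contrast concrete, here is the same hypothetical idea evaluated both ways. The ICE scores and IP figures are illustrative assumptions, following the document's convention that the three ICE inputs multiply to a score out of 1,000:

```python
# Illustrative only: one idea scored by ICE (three subjective 1-10 inputs,
# multiplied to a score out of 1,000) and by IP (forecast £ x probability).

def ice_score(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease            # max 10 * 10 * 10 = 1,000

def ip_ev(forecast_uplift: float, probability: float) -> float:
    return forecast_uplift * probability         # expected value in £

# Hypothetical idea: a homepage headline test.
print(ice_score(impact=7, confidence=6, ease=8))        # 336 — a unitless score
print(ip_ev(forecast_uplift=30_000, probability=0.6))   # 18000.0 — £18,000 of EV
```

The ICE output is a unitless score you can only compare against other ICE scores; the IP output is a forecast in pounds you can compare against anything, including the cost of running the experiment.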
When IP works best
The IP framework suits teams that:
- Track a clear financial metric like contribution margin, revenue, or pipeline value
- Have enough data or experience to estimate probability with some confidence
- Are in what Lachkovic calls wartime — when performance is declining and you need to make bets rather than run safe, incremental tests
- Want to cut through subjective scoring debates and anchor decisions to real numbers
It is less useful when you have no baseline data to forecast impact, or when your experiments are exploratory and you cannot estimate probability.
Why probability trumps impact
On this point, IP contradicts most prioritisation advice. Traditional frameworks like RICE and PIE treat confidence as one factor among several; IP makes probability the decisive factor. A high-impact experiment with low probability will lose to a moderate-impact experiment with high probability every time.
This cuts against the instinct many marketers have to swing for the fences. Expected value thinking says: stop chasing moonshots with 5% odds. Run the experiment that is most likely to deliver measurable results, even if the upside is smaller.
As Annie Duke writes in Thinking in Bets: “What makes a decision great is not that it has a great outcome. A great decision is the result of a good process.”
Expected value is that process.
The case against statistical rigour in a crisis
Lachkovic makes another contrarian argument: when your business is in wartime, abandon the pursuit of statistically significant data. Instead, make bigger, bolder bets and move faster.
This directly contradicts the CRO orthodoxy of running clean A/B tests with 95% confidence intervals. His argument is that statistical rigour is a peacetime luxury. When the numbers are heading in the wrong direction, you need speed and conviction more than certainty.
Many teams hide behind the need for “more data” when what they actually need is a decision.
How to estimate probability
The hardest part of IP is estimating probability honestly. A few guidelines:
- Use base rates. If your team’s historical experiment win rate is 30%, that is your starting point, not 80%.
- Anchor to evidence. Experiments backed by customer research, competitor data, or prior test results deserve higher probabilities.
- Be honest about uncertainty. If you are guessing, say so. A probability estimate of 50% usually means “I have no idea” — not “it is a coin flip.”
- Calibrate over time. Track your probability estimates against actual outcomes; the sketch after this list shows one way to do that. Most teams are wildly overconfident at first.
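One way to run that calibration check is to bucket past estimates and compare each bucket's estimated probability with its realised win rate. A minimal sketch, with hypothetical past experiments as the data:

```python
# Minimal calibration sketch: compare estimated probabilities with outcomes.
# The (estimate, won) pairs below are hypothetical past experiments.
from collections import defaultdict

history = [
    (0.8, True), (0.8, False), (0.8, True),
    (0.5, False), (0.5, False), (0.5, True),
    (0.2, False), (0.2, False),
]

buckets: dict[float, list[bool]] = defaultdict(list)
for estimate, won in history:
    buckets[estimate].append(won)

for estimate in sorted(buckets, reverse=True):
    outcomes = buckets[estimate]
    actual = sum(outcomes) / len(outcomes)
    print(f"estimated {estimate:.0%} -> actual win rate {actual:.0%} (n={len(outcomes)})")
```

If your "80% confident" experiments only win 40% of the time, your future estimates should start from 40%, not 80%.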
Combining IP with other frameworks
IP works well as a final filter on top of your existing framework. Score ideas with ICE or HIPE first, then run the top candidates through an expected value calculation before committing resources.
This is especially useful when two experiments score similarly on a composite framework. Expected value breaks the tie with a number grounded in reality rather than another subjective score.
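A sketch of that two-stage flow, using ICE for the first pass. The idea names and numbers are hypothetical:

```python
# Sketch of IP as a final filter: shortlist the top ICE-scored ideas,
# then rank the shortlist by expected value. All numbers are hypothetical.

ideas = [
    # (name, ICE score, forecast uplift £, probability)
    ("Referral incentive", 540, 90_000, 0.30),
    ("Landing page rewrite", 530, 45_000, 0.75),
    ("Pricing experiment", 410, 150_000, 0.10),
]

shortlist = sorted(ideas, key=lambda i: i[1], reverse=True)[:2]   # ICE first
winner = max(shortlist, key=lambda i: i[2] * i[3])                # EV breaks the tie
print(f"Run first: {winner[0]} (EV £{winner[2] * winner[3]:,.0f})")
# Referral incentive EV = £27,000; Landing page rewrite EV = £33,750 -> rewrite wins
```

Note how the two near-identical ICE scores (540 vs 530) tell you nothing, while the EV calculation separates the ideas decisively.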
For more on how Growth Method approaches experiment prioritisation, see the campaign idea scoring guide.