The Impact-Probability Framework: Prioritise Experiments Using Expected Value

Most prioritisation frameworks ask you to score ideas on a subjective 1-10 scale. The Impact-Probability (IP) framework, popularised by growth operator Josh Lachkovic of Ballpoint, takes a different approach: it replaces subjective scores with actual financial forecasts.

The result is a single number — expected value — that tells you what each experiment is worth in real terms.

How the IP framework works

The IP framework evaluates experiments using just two inputs:

  1. Impact — the forecast uplift in your core metric (e.g. contribution margin, revenue, leads) if the experiment succeeds
  2. Probability — the percentage likelihood the experiment will produce that result

Multiply these together and you get the expected value (EV) of the experiment:

Expected Value (EV) = Forecast uplift × Probability of success

That number is all you need to rank your backlog.
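As a minimal sketch, the calculation and ranking take only a few lines. The experiment names and figures below are hypothetical, purely for illustration:

```python
def expected_value(uplift, probability):
    """EV = forecast uplift x probability of success."""
    return uplift * probability

# Hypothetical backlog: (experiment, forecast uplift in GBP, probability of success)
backlog = [
    ("Pricing page redesign", 120_000, 0.30),
    ("Checkout copy tweak", 40_000, 0.75),
    ("New onboarding email", 25_000, 0.90),
]

# Rank the backlog, highest expected value first
ranked = sorted(backlog, key=lambda e: expected_value(e[1], e[2]), reverse=True)
for name, uplift, p in ranked:
    print(f"{name}: EV = £{expected_value(uplift, p):,.0f}")
```

Note that the ranking ignores effort entirely, exactly as the framework prescribes: only uplift and probability enter the sort key.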

A worked example

Imagine you are optimising for contribution margin and have two experiment ideas to choose from:

Experiment A: You believe you could double your conversion rate, adding £250,000 in contribution margin next month. But the probability of that happening is only 20%.

Experiment B: A more modest change that could generate £75,000 in contribution margin, but you estimate an 80% probability of success.

Run the numbers: Experiment A's expected value is £250,000 × 20% = £50,000, while Experiment B's is £75,000 × 80% = £60,000. Despite Experiment A having a far higher ceiling, Experiment B has the higher expected value. Start with B.

Expected value thinking forces you to weigh upside against likelihood, rather than chasing the biggest possible win regardless of the odds.

What makes IP different from ICE

The ICE framework scores ideas on Impact, Confidence, and Ease — each rated 1-10. The IP framework strips this down to two factors and replaces subjective scores with actual numbers.

|             | ICE                                  | IP                         |
|-------------|--------------------------------------|----------------------------|
| Inputs      | Impact, Confidence, Ease (1-10 each) | Impact (£), Probability (%) |
| Output      | Score out of 1,000                   | Expected value in £        |
| Ease/Effort | Included                             | Excluded                   |
| Subjectivity| High (three subjective scores)       | Lower (anchored to forecasts) |

The most notable difference is that IP drops the Ease component entirely. Lachkovic’s reasoning is straightforward: when performance is deteriorating, the difficulty of an experiment matters less than whether it will actually move the needle.

When IP works best

The IP framework suits teams that can put a credible number on both inputs: a forecast uplift grounded in baseline data, and a probability informed by past experiments. It is less useful when you have no baseline data to forecast impact, or when your experiments are exploratory and you cannot estimate probability.

Why probability trumps impact

IP contradicts most prioritisation advice on this point.

Traditional frameworks like RICE and PIE treat confidence as one factor among several. IP makes probability the decisive factor: once the multiplication is done, a high-impact experiment with low probability will usually lose to a moderate-impact experiment with high probability.

This cuts against the instinct many marketers have to swing for the fences. Expected value thinking says: stop chasing moonshots with 5% odds. Run the experiment that is most likely to deliver measurable results, even if the upside is smaller.
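To make that concrete with hypothetical figures, a £1m moonshot at 5% odds loses to a £120k base hit at 80% odds once both are converted to expected value:

```python
# All figures hypothetical, for illustration only
moonshot_ev = 1_000_000 * 0.05  # £1m upside at 5% odds  -> £50,000 EV
base_hit_ev = 120_000 * 0.80    # £120k upside at 80% odds -> £96,000 EV

# The likelier, smaller win carries nearly double the expected value
assert base_hit_ev > moonshot_ev
print(f"Moonshot EV: £{moonshot_ev:,.0f}, base hit EV: £{base_hit_ev:,.0f}")
```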

As Annie Duke writes in Thinking in Bets: “What makes a decision great is not that it has a great outcome. A great decision is the result of a good process.”

Expected value is that process.

The case against statistical rigour in a crisis

Lachkovic makes another contrarian argument: when your business is in wartime, abandon the pursuit of statistically significant data. Instead, make bigger, bolder bets and move faster.

This directly contradicts the CRO orthodoxy of running clean A/B tests with 95% confidence intervals. His argument is that statistical rigour is a peacetime luxury. When the numbers are heading in the wrong direction, you need speed and conviction more than certainty.

Many teams hide behind the need for “more data” when what they actually need is a decision.

How to estimate probability

The hardest part of IP is estimating probability honestly.

Combining IP with other frameworks

IP works well as a final filter on top of your existing framework. Score ideas with ICE or HIPE first, then run the top candidates through an expected value calculation before committing resources.

This is especially useful when two experiments score similarly on a composite framework. Expected value breaks the tie with a number grounded in reality rather than another subjective score.
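A minimal sketch of that tie-break, with hypothetical ICE scores and forecasts:

```python
# Two ideas with near-identical ICE scores (all numbers hypothetical)
ideas = [
    {"name": "Landing page test", "ice": 512, "uplift": 90_000, "p": 0.35},
    {"name": "Email sequence test", "ice": 504, "uplift": 50_000, "p": 0.70},
]

# ICE cannot meaningfully separate 512 from 504, so apply
# expected value as the final filter
for idea in ideas:
    idea["ev"] = idea["uplift"] * idea["p"]

winner = max(ideas, key=lambda i: i["ev"])
print(f"{winner['name']} wins on EV: £{winner['ev']:,.0f}")
```

Here the lower-ICE idea wins the tie-break, because its higher probability more than compensates for its smaller forecast uplift.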

For more on how Growth Method approaches experiment prioritisation, see the campaign idea scoring guide.
