Master AI Prioritisation to Boost Your Growth Strategy

Stuart Brameld, Founder at Growth Method

Look, I've been through every prioritisation framework out there. RICE, ICE, PIE, PXL, HIPE, Value vs. Effort Matrix, Weighted Scoring, WSJF—you name it. They all sound smart on paper, but they don't work in practice. Not for growth teams anyway.

Here's why these frameworks fall short and how AI is changing the game entirely.

Why traditional prioritisation frameworks don't work

After working with countless B2B teams, I've seen the same problems over and over:

Goals keep changing. Your Q1 priority might be lead generation, but by Q3 you're focused on retention. Most frameworks can't adapt quickly enough.

Everything is subjective guesswork. Impact scores? Confidence ratings? Come on. We're terrible at predicting user behaviour. That's literally why A/B testing exists—because our estimates are usually wrong.

B2B teams don't have time for manual prioritisation. The biggest challenge most companies face is moving too slowly. Adding more manual processes just makes this worse.

Teams ignore the frameworks anyway. I've seen too many beautifully crafted priority matrices gathering dust while teams chase whatever seems exciting that week.

AI is reducing effort dramatically. When you can spin up an experiment in hours instead of weeks, the traditional "effort" calculations become meaningless.

As Itamar Gilad points out, impact-effort prioritisation often fails because we can't reliably estimate either variable upfront.

What we really need

Instead of more manual frameworks, we need prioritisation that:

  • Automatically aligns with current goals

  • Removes subjectivity and individual bias

  • Requires zero manual effort

  • Adapts instantly when priorities shift

  • Gets smarter over time

That's where AI comes in.

The AI prioritisation approach

We built our system around two core metrics: Effort and Relevance. But instead of humans making subjective guesses, AI scores everything automatically.

Building on solid foundations

First, we start with well-structured hypotheses using Craig Sullivan's Hypothesis Kit v4. The format is simple:

"Based on {data/research} we believe that {change} for {population} will cause {impact}."

This structure gives AI the context it needs to score accurately. Even junior team members can create hypotheses that provide enough detail for proper prioritisation. If you need help crafting these, check out Johann Van Tonder's Hypothesis Helper GPT.
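
To make this concrete, here's a minimal Python sketch of how a structured hypothesis could be captured and rendered as prompt context. The class and field names are illustrative, not our actual schema:

    # A minimal sketch of a structured hypothesis record.
    # Field names are illustrative, not Growth Method's actual schema.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        data: str        # the data/research the idea is based on
        change: str      # the change we want to make
        population: str  # who the change applies to
        impact: str      # the outcome we expect

        def to_prompt_context(self) -> str:
            # Render in the Hypothesis Kit v4 format so every idea
            # reaches the scoring prompt with the same structure.
            return (
                f"Based on {self.data} we believe that {self.change} "
                f"for {self.population} will cause {self.impact}."
            )

    idea = Hypothesis(
        data="exit-survey feedback",
        change="adding customer logos to the pricing page",
        population="first-time visitors",
        impact="a lift in demo requests",
    )
    print(idea.to_prompt_context())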

Scoring effort with AI

Here's what surprised me: simple AI prompts are remarkably good at estimating effort. Often better than humans.

We use prompts like:

"Here's a list of growth ideas. Rate each based on estimated effort to run the experiment. Add an effort score and explanation."

The AI consistently provides accurate effort assessments with minimal context. It understands the difference between a simple email copy change and a complex funnel rebuild.
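
To show roughly what this looks like in code, here's a sketch using the OpenAI Python SDK. The model, the 1-10 scale, and the output format are assumptions for illustration, not our production setup:

    # Illustrative effort-scoring call, assuming the OpenAI Python SDK
    # (pip install openai). Model choice and scale are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    ideas = [
        "Rewrite the welcome email subject line",
        "Rebuild the onboarding funnel with progressive profiling",
    ]

    prompt = (
        "Here's a list of growth ideas. Rate each based on estimated "
        "effort to run the experiment, on a 1 (trivial) to 10 (major "
        "build) scale. Add an effort score and a short explanation.\n\n"
        + "\n".join(f"- {idea}" for idea in ideas)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)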

Scoring relevance dynamically

This is where it gets interesting. Relevance scores automatically adjust based on your current goals.

If your goal is "Increase conversions from 400 to 500 MQLs per month," the AI scores ideas based on how likely they are to drive MQL growth.

Change the goal to "Increase organic search traffic to 5000 visits per month," and every idea gets rescored automatically. No manual work required.

The prompt structure:

"Our current goal is {primary active goal}. Add relevance scores based on how likely each idea is to achieve this goal."

One score to rule them all

Rather than juggling multiple metrics, we combine effort and relevance into a single score with a clear explanation. This keeps things simple while maintaining flexibility to improve the algorithm over time.

When users hover over a score, they see exactly why it was rated that way. No black box decisions.
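
Our exact weighting isn't the point, and it will keep evolving. But any simple blend of relevance and inverted effort gets you started. A purely illustrative version:

    # One possible way to collapse effort and relevance into one score.
    # The weighting is illustrative, not Growth Method's actual formula.
    def priority_score(relevance: int, effort: int,
                       relevance_weight: float = 0.7) -> tuple[float, str]:
        inverted_effort = 11 - effort  # low effort should raise the score
        score = (relevance_weight * relevance
                 + (1 - relevance_weight) * inverted_effort)
        explanation = (
            f"relevance {relevance}/10 weighted at {relevance_weight:.0%}, "
            f"effort {effort}/10 (inverted) at {1 - relevance_weight:.0%}"
        )
        return round(score, 1), explanation  # explanation feeds the tooltip

    print(priority_score(relevance=9, effort=3))
    # -> (8.7, 'relevance 9/10 weighted at 70%, effort 3/10 (inverted) at 30%')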

Future improvements

The beauty of this approach is that it gets smarter over time. We can layer in additional signals (a rough sketch follows the list):

  • Version weighting: Is this a v2 idea based on previous success?

  • Team engagement: Has the team commented on or liked this idea?

  • Historical performance: How well have similar experiments performed?

  • Individual track record: Who's proposing this idea and what's their success rate?

  • Stakeholder likelihood: How likely is this to get approved?
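
Here's that rough sketch of how those signals could be layered onto the base score. The signal names mirror the list above; the weights and the additive formula are placeholders, not our production values:

    # Hypothetical extension: layer extra signals onto the base score.
    # Weights and the additive formula are placeholder assumptions.
    def adjusted_score(base_score: float, signals: dict[str, float]) -> float:
        weights = {
            "version_weighting": 0.5,       # v2 of a previously successful idea
            "team_engagement": 0.3,         # comments/likes from the team
            "historical_performance": 0.4,  # how similar experiments performed
            "proposer_track_record": 0.3,   # success rate of the idea's author
            "stakeholder_likelihood": 0.2,  # odds of getting approval
        }
        bonus = sum(weights[name] * value for name, value in signals.items())
        return round(base_score + bonus, 1)

    # Each signal is normalised to 0..1 before weighting.
    print(adjusted_score(8.7, {"team_engagement": 1.0,
                               "historical_performance": 0.5}))
    # -> 9.2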

Companies like Pinterest have shown how systematic experiment review processes can supercharge growth teams. AI just makes this scalable for everyone.

The bottom line

Traditional prioritisation frameworks were built for a different era. When experiments took weeks to build and goals stayed stable for quarters, manual scoring made sense.

Today's growth teams need something different: automated, feedback-driven prioritisation that adapts in real time.

AI prioritisation isn't about replacing human judgement—it's about removing the busywork so teams can focus on what matters: running experiments and driving growth.

The tools are already here. Even ClickUp now offers AI-powered task prioritisation. The question isn't whether AI will transform how we prioritise work—it's whether you'll be early or late to adopt it.

For more insights on effective prioritisation techniques, check out Itamar Gilad's comprehensive guide and John Cutler's approach to prioritisation activities.

The future of growth strategy isn't about better frameworks—it's about smarter automation.

"We are on-track to deliver a 43% increase in inbound leads this year. There is no doubt the adoption of Growth Method is the primary driver behind these results."

Laura Perrott, Colt Technology Services

Growth Method is the GrowthOS built for marketing teams focused on pipeline — not projects. Book a call at https://cal.com/stuartb/30min.



