The ICE framework is the original scoring system for growth marketing teams. Created by Sean Ellis — the person who coined the term “growth hacking” and author of Hacking Growth — ICE was designed to help teams at companies like LogMeIn and Dropbox decide which experiments to run first. It remains the most widely used prioritisation framework in growth teams today.
What does ICE stand for?
ICE is an acronym for three factors:
- Impact — How much will this experiment move the needle on your target metric? Consider the potential size of the effect. A change to your pricing page has higher potential impact than tweaking a footer link.
- Confidence — How sure are you that this experiment will produce the expected result? Base this on data, past experience, and supporting evidence. An idea backed by user research or competitor analysis deserves a higher score than a gut feeling.
- Ease — How quickly and cheaply can you run this experiment? Consider development time, design resources, third-party dependencies, and approvals needed. If you can launch it in an afternoon, that is high ease. If it needs a sprint of engineering work, that is low ease.
Each factor is scored on a scale of 1 to 10, where 1 is the lowest and 10 is the highest.
How to calculate an ICE score
The ICE score is calculated by averaging the three individual scores:
ICE Score = (Impact + Confidence + Ease) / 3
This gives a final score between 1 and 10. Some teams simply add the three scores (giving a range of 3 to 30) and rank by the total — the relative order is identical either way. Pick one method and use it consistently.
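The averaging method above can be sketched in a few lines of Python. This is an illustrative helper, not part of any official tooling; the `ice_score` name and the 1-to-10 validation are our assumptions.

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """Average the three 1-10 factor scores into a single ICE score."""
    for score in (impact, confidence, ease):
        if not 1 <= score <= 10:
            raise ValueError("Each factor must be scored between 1 and 10")
    return round((impact + confidence + ease) / 3, 1)

# Impact 8, Confidence 7, Ease 9 averages to 8.0
print(ice_score(8, 7, 9))  # 8.0
```

Because dividing by three does not change the ordering, ranking by the sum instead of the average would put ideas in exactly the same order.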
> “Growth hacking is based on the scientific method — having a hypothesis, testing that hypothesis in an experiment, and learning if the hypothesis were true or not.”

Sean Ellis, author of Hacking Growth
Worked example: scoring real marketing experiments
Here is how ICE scoring works in practice. Say your growth team has four experiment ideas on the backlog:
| Experiment | Impact | Confidence | Ease | ICE Score |
|---|---|---|---|---|
| Rewrite homepage headline based on customer interviews | 8 | 7 | 9 | 8.0 |
| Launch referral programme with double-sided incentive | 9 | 5 | 4 | 6.0 |
| Add exit-intent popup offering lead magnet | 6 | 6 | 8 | 6.7 |
| Rebuild onboarding flow with personalised steps | 9 | 6 | 3 | 6.0 |
The homepage headline rewrite scores highest — not because it has the biggest potential impact, but because the team has high confidence (backed by customer interview data) and it is easy to execute. The referral programme and onboarding rebuild have higher potential impact but score lower because they require more effort and carry less certainty.
ICE stops teams from always chasing the biggest ideas and surfaces the experiments most likely to deliver results quickly.
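The ranking in the table above can be reproduced with a short sort. This is a minimal sketch using the worked-example scores; the `backlog` structure is an illustrative assumption, not a prescribed data format.

```python
# Each entry: (experiment name, impact, confidence, ease), scored 1-10.
backlog = [
    ("Rewrite homepage headline", 8, 7, 9),
    ("Launch referral programme", 9, 5, 4),
    ("Add exit-intent popup", 6, 6, 8),
    ("Rebuild onboarding flow", 9, 6, 3),
]

# Sort by the average of the three factors, highest first.
ranked = sorted(backlog, key=lambda e: sum(e[1:]) / 3, reverse=True)

for name, impact, confidence, ease in ranked:
    print(f"{(impact + confidence + ease) / 3:.1f}  {name}")
```

Python's sort is stable, so the two experiments tied at 6.0 keep their original backlog order.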
When to use the ICE framework
ICE works best in these situations:
- Early-stage growth teams that need a simple, fast way to start prioritising. ICE takes seconds to score an idea, so you can work through dozens in a single session.
- High-velocity experimentation where you run multiple experiments per week and need quick decisions. Three factors keep scoring sessions short.
- Cross-functional teams where marketers, designers, and developers need a shared language for evaluating ideas without getting bogged down in methodology debates.
Strengths of the ICE framework
- Speed. Three factors, scored 1 to 10. You can score an idea in under a minute — ideal for teams that prioritise experiment velocity.
- Simplicity. Anyone can understand and use ICE without training. There is no formula to memorise beyond a simple average.
- Forces structured thinking. Even a quick ICE score makes you consider an idea from three angles before committing resources. That alone beats deciding based on who shouts loudest.
Limitations of the ICE framework
- Subjectivity. The biggest criticism of ICE is that scores are subjective. Two people can score the same idea very differently, particularly on Impact and Confidence. This is especially problematic for newer teams without historical data to calibrate against.
- No reach factor. ICE does not account for how many people an experiment will affect. A high-impact change on a page with 100 monthly visitors is very different from the same change on a page with 100,000 visitors. The RICE framework adds a Reach factor to address this.
- Ease is ambiguous. Teams often struggle with whether Ease refers to effort, time, cost, or some combination. Agree on a shared definition before scoring.
- Anchoring bias. Whoever scores first in a group session can anchor everyone else. Have team members score independently before sharing to reduce this.
> “It takes more than good tools. It takes a complete change of attitude.”

Stefan Thomke, Harvard Business School, on building a culture of experimentation
Tips for better ICE scoring
- Define your scoring scale. Before your first scoring session, agree as a team on what a 1 versus a 10 means for each factor. Write it down and reference it in future sessions.
- Score independently first. Have each team member score the idea on their own before discussing. This prevents anchoring and surfaces genuine disagreements.
- Discuss outliers. When scores diverge sharply — one person gives Impact a 3 and another gives it an 8 — the team has different assumptions. Talk through these before averaging.
- Re-score regularly. An idea scored three months ago may need re-scoring as market conditions, team capacity, or priorities shift.
- Use ICE for ranking, not precision. The scores produce a relative ranking, not exact predictions. Do not agonise over the difference between a 6 and a 7.
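The "score independently, then discuss outliers" tips can be mechanised with a small check that flags factors whose independent scores diverge sharply before the team averages them. This is a hypothetical sketch: the `find_outliers` helper and the three-point threshold are our assumptions, not part of the framework.

```python
def find_outliers(scores: dict[str, list[int]], threshold: int = 3) -> list[str]:
    """Return the factors whose max-min spread across independent
    scorers exceeds the threshold, i.e. the ones worth discussing."""
    return [factor for factor, vals in scores.items()
            if max(vals) - min(vals) > threshold]

# Three team members scored one idea independently:
idea = {"Impact": [3, 8, 6], "Confidence": [6, 7, 6], "Ease": [8, 9, 8]}
print(find_outliers(idea))  # ['Impact'] — a spread of 5 points
```

Here Impact gets flagged (scores of 3 and 8 imply very different assumptions), while Confidence and Ease are close enough to average straight away.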
ICE vs other prioritisation frameworks
ICE is the simplest framework, but not always the best fit. Here is a quick comparison:
| If you need… | Use |
|---|---|
| Speed and simplicity | ICE |
| To account for audience size | RICE |
| CRO-specific prioritisation | PIE |
| To reduce scoring subjectivity | PXL |
| Historical evidence weighting | HIPE |
For a detailed comparison of all frameworks, see our guide: How to pick a prioritisation framework.
Getting started with ICE
Pick your top 10 experiment ideas, score each one using ICE in a team session, and run the highest-scoring experiment that week. Do not overthink the methodology — the value of ICE is in building the habit of structured prioritisation, not in achieving perfect scores.
Growth Method is the only work management platform built specifically for growth teams, with ICE scoring built in. Book a call to learn more.