Most lead scoring models fail not because the math is wrong, but because sales never bought in. The result is predictable: marketing passes leads over the wall, sales ignores the scores, and everyone blames the other team. Building a model that sales actually trusts requires collaboration from day one, transparent logic, and a commitment to ongoing calibration.
## Start With Your Ideal Customer Profile
Before assigning a single point, align marketing and sales on your Ideal Customer Profile (ICP). Pull your last 12 months of closed-won deals and analyze the patterns.
| Firmographic Attribute | High-Fit Example | Points |
|---|---|---|
| Company size | 200–2,000 employees | +15 |
| Industry | SaaS, FinTech | +10 |
| Job title | VP Sales, CRO, RevOps Dir | +15 |
| Geography | North America, UK | +5 |
| Annual revenue | $10M–$500M | +10 |
Leads that match your ICP on three or more attributes should score significantly higher before they take a single action. This firmographic baseline prevents the classic mistake of routing a blog-reading intern to your enterprise AE.
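To make the baseline concrete, here is a minimal Python sketch of a firmographic scorer. The field names (`employees`, `industry`, `title`, `region`, `revenue`) and the rule structure are illustrative assumptions, not a prescribed CRM schema; the point values come straight from the table above.

```python
# Illustrative firmographic scorer; field names and rules are assumptions
# that mirror the ICP table above, not a prescribed schema.
ICP_RULES = [
    ("company_size",   lambda lead: 200 <= lead.get("employees", 0) <= 2000, 15),
    ("industry",       lambda lead: lead.get("industry") in {"SaaS", "FinTech"}, 10),
    ("job_title",      lambda lead: lead.get("title") in {"VP Sales", "CRO", "RevOps Director"}, 15),
    ("geography",      lambda lead: lead.get("region") in {"North America", "UK"}, 5),
    ("annual_revenue", lambda lead: 10_000_000 <= lead.get("revenue", 0) <= 500_000_000, 10),
]

def firmographic_score(lead: dict) -> int:
    """Sum points for every ICP attribute the lead matches."""
    return sum(points for _, matches, points in ICP_RULES if matches(lead))

lead = {"employees": 450, "industry": "SaaS", "title": "CRO",
        "region": "North America", "revenue": 40_000_000}
print(firmographic_score(lead))  # 55: matches all five attributes
```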
## Layer in Behavioral Scoring
Behavioral signals capture intent. Not all actions are equal - weight them by proximity to a buying decision.
| Behavioral Signal | Intent Level | Points |
|---|---|---|
| Pricing page visit | High | +20 |
| Demo request | High | +25 |
| Case study download | Medium | +10 |
| Blog post view | Low | +2 |
| Email open | Low | +1 |
| Webinar attendance | Medium | +8 |
| Repeat site visit (3+) | Medium | +12 |
Decay matters. A pricing page visit from six months ago is not the same as one from yesterday. Apply a time-decay factor - reduce behavioral points by 50% after 30 days of inactivity and by 90% after 90 days.
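One way to implement the decay rule is to discount each behavioral event by its age: full credit for the first 30 days, half credit up to 90 days, then 10%. The sketch below assumes that per-event interpretation; the event structure and type names are hypothetical, and the point values mirror the table above.

```python
from datetime import datetime

# Point values from the behavioral table above; decay schedule follows the
# 50%-after-30-days / 90%-after-90-days rule. Event shape is an assumption.
BEHAVIOR_POINTS = {
    "pricing_page_visit": 20,
    "demo_request": 25,
    "case_study_download": 10,
    "blog_post_view": 2,
    "email_open": 1,
    "webinar_attendance": 8,
    "repeat_site_visit": 12,
}

def decay_factor(days_since: int) -> float:
    """Full credit for 30 days, 50% up to 90 days, 10% afterward."""
    if days_since <= 30:
        return 1.0
    if days_since <= 90:
        return 0.5
    return 0.1

def behavioral_score(events: list[dict], today: datetime) -> float:
    """events: [{"type": "pricing_page_visit", "date": datetime}, ...]"""
    total = 0.0
    for event in events:
        days = (today - event["date"]).days
        total += BEHAVIOR_POINTS.get(event["type"], 0) * decay_factor(days)
    return total

events = [
    {"type": "pricing_page_visit", "date": datetime(2024, 1, 2)},   # ~6 months old
    {"type": "demo_request",       "date": datetime(2024, 6, 28)},  # fresh
]
print(behavioral_score(events, today=datetime(2024, 7, 1)))  # 20*0.1 + 25*1.0 = 27.0
```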
## Set Thresholds That Mean Something
Define clear score bands and map them to specific actions:
- 0–30 points: Cold - stays in marketing nurture
- 31–60 points: Warm - enrolled in targeted sequences
- 61–85 points: MQL - routed to SDR for qualification
- 86+ points: SQL-ready - fast-tracked to AE
Pro tip: Do not set thresholds in a conference room. Analyze your last 100 closed-won deals, retroactively score them, and find the natural breakpoints. If 80% of your wins scored above 70, that is your MQL threshold - not an arbitrary number someone picked on a whiteboard.
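A rough Python sketch of that retroactive exercise: score your closed-won deals with whatever combined scorer you land on, then read the MQL threshold off the distribution. The `score_band` cutoffs match the bands above; using the 20th percentile (so that 80% of wins score above it) is an illustrative assumption taken from the example, and the sample scores are made up.

```python
# Retroactively score past wins and find a defensible MQL cutoff.
# The percentile choice and sample data are illustrative assumptions.
from statistics import quantiles

def score_band(score: float) -> str:
    """Map a total score to the bands defined above."""
    if score <= 30:
        return "cold"
    if score <= 60:
        return "warm"
    if score <= 85:
        return "mql"
    return "sql_ready"

def suggested_mql_threshold(won_deal_scores: list[float]) -> float:
    """Return the score that 80% of closed-won deals exceeded."""
    # quantiles(n=5) returns the 20th/40th/60th/80th percentiles; take the 20th.
    return quantiles(won_deal_scores, n=5)[0]

won_scores = [72, 68, 91, 83, 77, 65, 88, 70, 95, 74]
print(suggested_mql_threshold(won_scores))  # the 20th percentile of past wins
```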
## Calibrate With Sales Feedback
The model is not done at launch - it is a living system. Build a lightweight feedback loop:
- Weekly review: SDRs flag leads that scored high but were clearly unqualified (false positives) and leads that scored low but converted (false negatives).
- Monthly analysis: RevOps pulls conversion rates by score band. If MQLs in the 61–85 range convert to opportunity at less than 15%, the threshold or point values need adjustment.
- Quarterly recalibration: Re-run your ICP analysis with fresh closed-won data. Markets shift, product lines expand, and your scoring model must keep pace.
Track these metrics to measure model health:
- MQL-to-SQL conversion rate (target: 25–35%)
- SQL-to-opportunity rate (target: 40–55%)
- Sales acceptance rate (target: >80%)
- Average time from MQL to first sales touch (target: <4 hours)
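A minimal sketch of the monthly health check described above: MQL-to-SQL conversion by score band plus sales acceptance rate, compared against the targets in the list. The record fields (`band`, `became_sql`, `accepted_by_sales`) are hypothetical, not a CRM schema.

```python
# Monthly model-health check; record fields are illustrative assumptions.
from collections import defaultdict

def conversion_by_band(mqls: list[dict]) -> dict[str, float]:
    """MQL-to-SQL conversion rate per score band."""
    counts, conversions = defaultdict(int), defaultdict(int)
    for lead in mqls:
        counts[lead["band"]] += 1
        conversions[lead["band"]] += lead["became_sql"]
    return {band: conversions[band] / counts[band] for band in counts}

def sales_acceptance_rate(mqls: list[dict]) -> float:
    """Share of routed MQLs that sales accepted (target: >80%)."""
    return sum(lead["accepted_by_sales"] for lead in mqls) / len(mqls)

mqls = [
    {"band": "61-85", "became_sql": True,  "accepted_by_sales": True},
    {"band": "61-85", "became_sql": False, "accepted_by_sales": True},
    {"band": "86+",   "became_sql": True,  "accepted_by_sales": True},
    {"band": "61-85", "became_sql": False, "accepted_by_sales": False},
]
print(conversion_by_band(mqls))      # e.g. {"61-85": 0.33, "86+": 1.0}
print(sales_acceptance_rate(mqls))   # 0.75 -> below the 80% target
```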
## Key Takeaways
- Build firmographic scoring from closed-won deal data, not assumptions - analyze 12 months of wins to identify real ICP patterns
- Weight behavioral signals by intent proximity; a pricing page visit is worth 10x a blog view
- Apply time-decay to behavioral scores so stale engagement does not inflate lead quality
- Set MQL thresholds by retroactively scoring past wins, not by guessing in a planning meeting
- Establish a weekly feedback loop with sales and recalibrate point values quarterly to maintain trust