Why Lead Scoring is Dangerous


Gabriel Lim | 19 March 2019

Image Credit: Jordan Bruner

Recently, I was on a call with the Head of Marketing at one of our long-time customers, a global subsidiary of a listed company.

She told me, "I'm going to let you in on a secret. In the beginning, we were quite skeptical about your product."

"During the first three months, we were loading all our D-tier leads into Saleswhale to test your API. For context, we classify our leads into A, B, C, and D tiers. D-tier is what we call 'bottom-of-the-barrel'. No sales rep will actually bother to call them."

"We weren't expecting anything at all. It was mostly to test the workflow and your AI capabilities before we loaded better leads into your platform."

"But this initial campaign exceeded all expectations. We actually managed to engage 34% of leads, and generated more than 30 sales opportunities. We’ve even closed a couple of them."

"This was a shock to the marketing team, because it showed us that we were either too conservative with our lead scoring, or our models are wrong. Now we wonder how many sales opportunities our lead scoring missed out over the years."

Searching for the edge

As marketing operations get more sophisticated, more and more companies use lead scoring to qualify their leads.

Now, don't get me wrong. I am not against lead scoring. 

The benefits are plenty. Why do companies lead score?

  • To identify ready-to-buy prospects
  • To keep unqualified leads from draining sales productivity
  • To create the initial basis for sales & marketing lead follow-up SLAs

Once marketing teams increase their inbound lead volume, they often introduce a lead scoring system.

In general, implementing a lead scoring system is a smart move. Unfortunately, lead scoring is often implemented in a way that causes more harm than good.

The Glengarry leads

Let’s hear from Tom Tunguz, partner at Redpoint Ventures and former product manager at Google:

"We did lots of lead scoring at Google. We had this internal tool called Glengarry (from the movie Glengarry Glen Ross)."

"We always thought it to be super impactful. But what we found is, using the data we had, at the mid-market and enterprise price points of $15k and above, lead scoring actually generated a negative signal."

"We were actually filtering out leads that could be really good accounts that we are not calling because of the low lead score."

"[When deal values got higher], activity is no longer a good signal, because they may be talking to their procurement teams, discussing internally etc. But if you disqualify them or don't emphasize them, then you may be missing out on a material revenue opportunity."

"At 50k to 150k price points, we realised we were getting 4X higher conversions when not using activity to score leads, versus when we were using it."

"We were actually filtering out leads that could be really good accounts ... [that] we are not calling because of the low lead score.”

Three reasons your lead scoring can fail

Your lead scoring algorithm becomes too complicated

Over time, issues tend to arise when your lead scoring algorithm becomes too complicated.

Here's an example:

Your marketing team may stipulate that once a lead exceeds a lead score of 60, they will be passed to sales.

This stipulation by itself sounds innocuous. However, what does a score of 60 actually mean? How does this algorithm actually work behind the scenes?

In many cases, it's based on an overly complicated set of factors.

For instance, if a lead downloads a white paper, their lead score increases by 5 points. If the lead views the pricing page, their score increases by 20 points. If the lead downloads an e-book, their score increases by 10 points. Additional e-book downloads are worth 5 points each.

This means there are many permutations that could push a lead's score above or below 60. Depending on how your lead scoring is set up, an intern at a pre-revenue startup who downloads eleven e-books over the course of a week might get passed to sales, whereas an important decision maker from a Fortune 2000 company who visits one page and downloads a case study might not.
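To make this concrete, here is a minimal Python sketch of the kind of points-based scoring described above. The event names, weights, and the 60-point threshold mirror the hypothetical example; the case-study weight is an assumed value, and none of this comes from a real scoring tool.

```python
# Minimal sketch of a points-based lead scoring model.
# Event names, weights, and the 60-point threshold follow the hypothetical
# example above; the case-study weight is an assumed value for illustration.

SCORING_RULES = {
    "whitepaper_download": 5,
    "pricing_page_view": 20,
    "first_ebook_download": 10,
    "additional_ebook_download": 5,
    "case_study_download": 10,   # assumed weight, not specified above
}
PASS_THRESHOLD = 60  # leads at or above this score get passed to sales


def score_lead(events):
    """Sum up points for a lead's recorded activity events."""
    score = 0
    ebooks_seen = 0
    for event in events:
        if event == "ebook_download":
            ebooks_seen += 1
            key = "first_ebook_download" if ebooks_seen == 1 else "additional_ebook_download"
            score += SCORING_RULES[key]
        else:
            score += SCORING_RULES.get(event, 0)  # unscored events (e.g. page views) add nothing
    return score


# The e-book-bingeing intern clears the bar: 10 + 10 * 5 = 60
intern = ["ebook_download"] * 11
print(score_lead(intern) >= PASS_THRESHOLD)          # True

# The Fortune 2000 decision maker does not: 0 + 10 = 10
decision_maker = ["homepage_view", "case_study_download"]
print(score_lead(decision_maker) >= PASS_THRESHOLD)  # False
```

The exact weights aren't the point. The point is that a volume of low-value activity can outrank a single high-intent signal.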

So, you need to be very thoughtful about how you set up your lead scoring. Otherwise, you will end up missing valid sales opportunities or surfacing false positives.

Poor data quality

Another danger could arise when you have poor data quality.

Have you ever submitted fake information to a web contact form, in order to avoid being inundated with email newsletters and sales calls?

You're definitely not alone. 😉

When you have poor data quality, it becomes hard to debug what's going wrong, because you end up scoring attributes such as job titles on inaccurate inputs.

Research by MarketingSherpa shows that up to 47% of prospects give fake or inaccurate job titles when filling out forms, and a whopping 62% of prospects provide inaccurate phone numbers when forms ask for them.

Sometimes, prospects may want to answer truthfully, but are too early in their buying process to give accurate answers. Asking for BANT (Budget, Authority, Need, Timeline) information in your forms either puts prospects off from completing them, or encourages them to enter incorrect information.

Lack of a feedback loop to iterate on your lead scoring model

This last point sounds obvious.

But through my conversations with marketing teams, I'm surprised by how few of them actually take a scientific, rigorous approach to fine-tuning their lead scoring models.

Creating your lead scoring algorithm is not a one-and-done deal. Far from it.

Do you backtest your models against historical data from leads that actually turned into paying customers?
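If you can export a table of past leads with their score at hand-off and whether they eventually became customers, even a rough backtest like the sketch below will show the gap. The field names ("score", "became_customer") and the 60-point threshold are illustrative assumptions, not from any particular CRM.

```python
# Minimal backtest of a lead score threshold against historical outcomes.
# The fields ("score", "became_customer") and the 60-point threshold are
# illustrative assumptions, not taken from any specific CRM export.

def backtest_threshold(leads, threshold=60):
    """How well would this threshold have predicted real customers?"""
    flagged = [lead for lead in leads if lead["score"] >= threshold]
    customers = [lead for lead in leads if lead["became_customer"]]
    caught = [lead for lead in customers if lead["score"] >= threshold]

    recall = len(caught) / len(customers) if customers else 0.0    # customers the model would have surfaced
    precision = len(caught) / len(flagged) if flagged else 0.0     # flagged leads that actually converted
    return {"recall": recall, "precision": precision}


# Example with a hypothetical export of past leads:
history = [
    {"score": 75, "became_customer": True},
    {"score": 40, "became_customer": True},   # would have been screened out
    {"score": 90, "became_customer": False},
    {"score": 65, "became_customer": True},
    {"score": 15, "became_customer": False},
]
print(backtest_threshold(history, threshold=60))
# recall ≈ 0.67, precision ≈ 0.67
```

A low recall at your current threshold is exactly the "negative signal" Tunguz describes: the model is screening out accounts that would have bought.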

Sometimes, even simple, ad-hoc discussions with sales reps who receive marketing qualified leads can yield huge dividends.

For example, if you sell Customer Success software, you might assign fewer points to IT managers who sign up than to, say, Sales Managers. But what if a significant number of closed deals actually originated from IT managers? You wouldn't know this unless you've spoken with your colleagues in sales.

It is this feedback loop that turns arbitrary guesswork into accurate predictions of leads' sales-readiness.

Conclusion

As mentioned above, lead scoring can be incredibly valuable for aligning sales and marketing functions. 

Certainly, there are companies that have made lead scoring work for their business. But you need to be extremely thoughtful about how you set up lead scoring in order to succeed. Your lead scoring model must also evolve continually with your prospects' customer journey.

If you are not getting as much out of your lead scoring as you’d like, Saleswhale's AI assistant can help.

Give her all the leads that your lead scoring system de-prioritises. She’ll reach out to every one of them personally over email, engage them, and tease out their buying intention. Any qualified leads will be routed over to sales.

Like our client, you might discover some gold among your “bottom of the barrel” leads.

Want to get more sales-ready leads? Request a demo of our AI sales assistant today!


Gabriel Lim

Co-founder & CEO at Saleswhale, Inc.

