I was recently working with a financial services firm to launch a new program aimed at providing analytically-scored sales leads to a team of field-based sellers. The scoring had tested very well with a pilot group, and we were now considering nuances of the rollout—for example, should we provide the direct score to the salesperson (on a zero to 100-point scale), or should we bucket the leads into simpler categories, such as high, medium and low?
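For concreteness, the bucketing option is just a simple mapping from the 0–100 score to a label. A minimal sketch follows; the threshold values here are illustrative assumptions, not the firm's actual cutoffs:

```python
# Hypothetical sketch: bucket a 0-100 lead score into coarse categories.
# The 70/40 thresholds are assumptions for illustration only; a real
# program would set cutoffs based on pilot conversion data.
def bucket_lead_score(score: float) -> str:
    """Map a 0-100 lead score to 'high', 'medium', or 'low'."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 70:
        return "high"
    if score >= 40:
        return "medium"
    return "low"
```

The trade-off debated above lives entirely in this mapping: the raw score preserves ranking detail for the seller, while the buckets sacrifice precision for a simpler call-to-action.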

As we debated the merits of each approach, it occurred to me that the success of programs like these depends much more on qualitative decisions like this — How do we act upon our analysis? — than on the quality of the analysis itself. In fact, if I reflect on all the lead scoring work I’ve seen over the years, I’d say that most of the failures can be traced back to gaps in alignment with the selling process.

For example, some common gaps I’ve seen are:

  • Providing relatively low-probability leads to highly paid, high-impact sellers. If the cost of a sales meeting is high, the bar for lead quality should be high as well. Otherwise, you risk wasting sales effort on low-value qualification or, more likely, outright rejection of the lead program.

  • Providing lead scores without context. Many B-to-B and B-to-B-to-C sales processes are context dependent. If an opportunity is simply described as “high value,” the seller may not know how to engage and carry on the conversation, and may wind up rehashing old topics or devoting unnecessary time to exploration.

  • Providing too much detail. While sellers typically need context, there is such a thing as too much context. Some scoring programs can become data dumps that slow down salespeople or create questions about data quality that distract from the core value of the program.

  • Providing leads without expected actions. Sales must be held accountable for lead follow-up and know where they have flexibility—and where they don’t. Highly prescriptive programs seldom work for sophisticated sales roles, but the absence of any accountability may excuse inaction.

  • Failing to close the loop. Often leads are handed over to sales and not adequately tracked, but timely tracking is important: It enables understanding what worked and what didn’t, and feeds directly into updating the lead score. Without this, leads may linger at a stale score for too long, hindering prioritization and action.

Avoiding these pitfalls requires an understanding of the selling process and some adaptability on the part of the team driving the lead analytics. To illustrate this point, consider an example from the sales division of a large bank. The team ran a sophisticated scoring approach for sales prospects, but what really distinguished them was their execution. For example:

  • The team leaders learned that for their salespeople, the most frustrating aspect of lead follow-up was “bad data”: phone numbers that didn’t work, or prospects who were interested in the products for another division, or who were already customers. So the team invested in machine learning-driven data cleanup where possible and a third-party call center team that would vet certain leads to ensure accuracy. The reduction in wasted time and sales frustration was well worth the investment.

  • They routed leads to sales in a way that fit with the rhythm of the job. First, they segmented leads between their two teams, with their more prospect-oriented sales team receiving leads that were lower probability but still worth calling. Their more elite team was given a smaller number of well-vetted leads to supplement their existing customer base. These leads were provided every Sunday, since most salespeople started their weekly planning on Monday mornings.

  • Finally, they developed a fast-track process for leads that required next-day follow-up and set accountabilities accordingly. This process was carefully restricted to the most impactful cases (always less than 2% of the total) to avoid unnecessarily disrupting the sales rhythm.

The above scenario won’t work for every sales organization, since buying processes and selling processes differ widely. But it serves to illustrate how analytic lead scoring can be adapted to work within the selling process, rather than outside of it.

Ultimately, it’s important to remember that most sales organizations don’t spend all their time prospecting, and that prospecting sources often go beyond company-provided leads. Organizations that adapt their lead scoring programs to fit with the sales process have the best chance of seeing those leads turn into customers.

Topics: sales process design, Sales data, sales process, Financial Services, lead scoring