Playbook · 8 min read · 2026-05-11

The 19 Signals That Predict Whether a Local Business Will Hire an Agency This Quarter

Most prospect lists treat every local business the same. They aren't. Here are the 19 public-web signals that separate a buyer from a name on a map — with the real weights.


Most prospect lists treat every local business the same. They aren't.

A plumber with 9 reviews and a 2014-era brochure site is in a different market than a plumber with 240 reviews and a site rebuilt last quarter. They are not interchangeable line items. They are different prospects, with different odds of returning your email, and with very different willingness to write a four-figure monthly check.

The job of a scoring layer is to tell you which is which — before you dial.

This is the model behind LocalVein's 19-signal vulnerability score. It is built from public-web data, runs on every business in every scan, and produces a single 0–100 number per row. Higher means more vulnerable to a competitor agency picking them off. Higher means more likely to take the call.

The model

Fifteen core signals sum to exactly 100 points. A handful of supplemental signals (directory-listing claim status, rating divergence across review platforms) are additive on top — the final score is clamped at 100. Every weight below is the production weight, not a hypothesis.

| Signal | Weight | What it tells you |
| --- | --- | --- |
| Review count | 13% | How long they've ignored their online presence |
| Site freshness (months since last meaningful change) | 12% | Whether the site is a living asset or an abandoned brochure |
| Star rating | 11% | Whether reputation is hurting them now |
| No website | 11% | Whether they've made any digital decision at all |
| Incomplete Google Business Profile | 11% | Whether they're claiming free traffic Google would give them |
| Site speed | 7% | Whether the site costs them mobile traffic |
| No social presence | 7% | Whether the channel mix is missing entirely |
| No SSL | 4% | Trust signal failure |
| No ad pixels | 4% | Whether they're already buying ads (paying competitor) |
| Slow review velocity | 4% | Whether the operation is shrinking |
| Domain age 10+ years | 4% | Established-but-coasting — a refresh sell |
| No schema markup | 3% | Technical SEO floor |
| Not actively hiring | 3% | Negative growth signal |
| High competitor density | 3% | Crowded market — visibility matters more |
| DIY website platform | 3% | Built it themselves — usually fixable |

Fifteen signals, exactly 100 points. The math is intentionally boring. The interesting part is why each weight is where it is.
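The arithmetic can be sketched in a few lines of Python. The weights are the production weights from the table; everything else here is an illustrative assumption — the signal names, the idea that each signal arrives as a 0-to-1 severity, and the shape of the supplemental bonus are not the published implementation.

```python
# Production weights from the table above. Signal names are illustrative.
CORE_WEIGHTS = {
    "review_count": 13, "site_freshness": 12, "star_rating": 11,
    "no_website": 11, "incomplete_gbp": 11, "site_speed": 7,
    "no_social": 7, "no_ssl": 4, "no_ad_pixels": 4,
    "slow_review_velocity": 4, "old_domain": 4, "no_schema": 3,
    "not_hiring": 3, "competitor_density": 3, "diy_platform": 3,
}
assert sum(CORE_WEIGHTS.values()) == 100  # fifteen signals, exactly 100 points

def composite_score(signals: dict, supplemental: float = 0.0) -> int:
    """signals maps a signal name to a 0..1 severity (assumed scale).
    Supplemental points are additive on top; the result is clamped at 100."""
    core = sum(w * signals.get(name, 0.0) for name, w in CORE_WEIGHTS.items())
    return min(100, round(core + supplemental))
```

A business maxed out on every core signal plus supplemental points still scores 100, which is the clamp doing its job.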

Why review count is the heaviest

Review count is the single best public proxy for how long a business has been quietly ignoring its online presence. It's a cumulative measure. It only goes up. If a 12-year-old plumber has 9 reviews, the only explanation is that nobody has been asking. That's not a sophistication failure — it's an attention failure. And attention is what a marketing agency sells.

A 4-year-old plumber with 240 reviews already knows how to drive reviews. The agency proposition there is harder; you're displacing someone, not introducing a new motion.

Review count anchors the composite at 13% because it is the most agency-translatable signal we measure. Every other signal in the model is downstream of this one in some way.

Why site freshness is the second heaviest

Site freshness is the surprise of the model.

It's not whether the site looks dated. Plenty of dated-looking sites are actively maintained — the owner is paying someone to update copy every quarter, they just haven't refreshed the visual design. Those businesses are not vulnerable. They have an agency relationship. You're displacing.

Freshness is whether the underlying content has actually changed.

The weight is tiered:

  • 0 to 6 months stale: 0 points. Actively maintained. No signal.
  • 7 to 18 months: 4 points. Soft signal. Could be a lull.
  • 19 to 36 months: 9 points. Meaningfully neglected.
  • 37+ months: the full 12 points. Abandoned brochure site.
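The tiers above reduce to a small lookup. A sketch, assuming months-stale is measured from the last meaningful content change:

```python
def freshness_points(months_stale: int) -> int:
    """Map months since last meaningful site change to freshness points,
    per the tiers above. Function name is illustrative."""
    if months_stale <= 6:
        return 0   # actively maintained, no signal
    if months_stale <= 18:
        return 4   # soft signal; could be a lull
    if months_stale <= 36:
        return 9   # meaningfully neglected
    return 12      # abandoned brochure site, full weight
```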

Most agency cold-email wins live in the 19+ month band. The owner made one site decision four years ago, ran out of attention, and never went back. That is a buyable customer. The site shipped, the owner moved on to running the business, and the asset has been quietly aging since.

Why "no website" gets 11 points instead of 25

The instinct says no website should be the biggest signal in the model. It's the most visible vulnerability.

It isn't, for a single reason. Most businesses with no website at all in 2026 are not running a business that wants a website. They're running a referral-only operation by choice. The owner is 64, the phone rings enough, the calendar is full. They're not buying.

The signal still matters — at 11 points it's tied with star rating and Google Business Profile completeness. But it doesn't dominate, because "no website" is more often a closed door than an open one. Site freshness, by contrast, almost always means an open door: someone tried, ran out of steam, and would say yes to help.

The phone signal isn't in the 19

It's worth saying what isn't in the headline composite.

Phone line type — whether the listed number is a mobile, landline, or routed VoIP — is an enrichment field on every business, but it isn't a vulnerability weight. It's a reachability weight. A mobile line doesn't make a business more likely to need an agency. It makes them more likely to pick up when you call. That's a different axis.

The same is true for whether the listed phone is on the federal Do-Not-Call registry. That doesn't change whether they'd buy. It changes whether you're allowed to call.

The score answers "should I pursue them?" Reachability and compliance answer "how should I pursue them?" Don't conflate.

What the composite gets you in practice

The numerical score isn't the deliverable. The deliverable is the sorting.

When you rank a 200-business list by composite score, the top quartile out-converts the bottom quartile on every campaign type measured — cold email, cold call, LinkedIn — by a margin large enough that a one-person agency feels it inside two weeks. The bottom quartile of any maps-derived list is overwhelmingly businesses that are either too small, too dormant, or too well-served by an existing provider to ever convert.

Cutting the bottom quartile is the easy win. Cutting the bottom half — and only working the top 50 of the original 200 — is where the time savings get serious.
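The cut itself is trivial once scores exist — rank descending, keep a fraction. A sketch, assuming a list of (name, score) pairs:

```python
def top_fraction(businesses, keep=0.5):
    """Rank (name, score) pairs by score descending and keep the top
    fraction — keep=0.25 for the top quartile, keep=0.5 for the top half."""
    ranked = sorted(businesses, key=lambda b: b[1], reverse=True)
    return ranked[: max(1, int(len(ranked) * keep))]
```

On a 200-business list, `top_fraction(scored, keep=0.5)` is the "work the top 100" move; `keep=0.25` is the stricter top-quartile cut.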


The full per-signal breakdown, with reasoning for every weight: Read the methodology →


What the score does not do

A scoring composite is a filter, not a substitute for judgment.

It will surface a plumber who is bleeding on every public signal and turn out, on the call, to be a guy who is six months from retirement and selling the business to his son-in-law. The score didn't know that. Nothing on the public web told it that. The call did.

The point isn't perfect prediction. The point is that the score lets the call happen in the first place — instead of the 60 other calls that would have happened to fine-looking businesses with no reason to switch.

If the score is right 7 times out of 10, that's a 7-out-of-10 list of warm conversations instead of a 1-out-of-10 list of cold ones. That delta is the entire business model.

Where to take it next

If you've been running a maps-pull workflow and feeling diminishing returns at the 100-call mark, this is usually where the leak is. You're doing the finding fine. You're not doing the filtering — because the filtering signals don't live in the maps source.

The fix isn't a new outreach tool. It's a scoring layer that sits upstream of whatever outreach tool you already use.


See how a 19-signal score reshapes a cold list: Compare LocalVein plans →

Find prospects like this in your market

3 free scans. No credit card.

Start scanning free →