Loose match types, bid-caching, and getting what you pay for in digital marketing

When a lack of advertiser control — whatever the platform’s motives — results in unintended purchases, trust inevitably erodes.

As new close variants start trickling into search query reports, paid search marketers are still peeved at Google’s decision to expand the definition of exact match close variants to include queries that share the keyword’s intent, include implied words, or paraphrase it. The possibilities under this new regime of close variants appear bounded only by Google’s own determination of which queries carry the “same meaning” as a keyword.

The core of the argument against this sort of update comes back to the simple expectation that advertisers should feel confident they are getting what they pay for. It’s an issue Google has long faced with regard to keyword matching, and one that is now popping up in other areas of digital marketing as well.

Google’s longstanding issues with query matching

Google’s issues with serving ads for queries through the wrong keywords predate the invention of close variants, and indeed arose even when exact and phrase match keywords were still limited to pure exact and phrase matches.

As George Michie wrote back in 2010, there’s long been an…um…feature in which Google will serve traffic for a query under a broad match keyword, which might carry a higher bid than the keyword that exactly matches the query.

George argued back then, and I’d argue now, that sending traffic from a query which might exactly match one keyword to a different keyword isn’t illegal or unethical — it’s just a bad business decision.

On the user side, it risks sending searchers to a landing page that may be less relevant than the one assigned to the closer keyword match, increasing the likelihood that searchers turn away from ads after landing on irrelevant pages.

On the advertiser side, forcing marketers to pay one price for a specific query through one keyword, when a bid has already been placed for that exact query through a different keyword that matches it precisely, makes bidding to efficiency more difficult. This lack of control can lead to less investment overall, and it can leave advertisers feeling unable to pay for the clicks they do want without the risk of also paying for traffic they don’t.
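
To make the efficiency argument concrete, here is a minimal sketch of a simple value-per-click bidding model; the keywords, conversion rates, values, and target ROAS below are hypothetical illustrations rather than anything from the examples above. The point is that a bid calibrated to one keyword’s expected value is the wrong price whenever the query is routed through a keyword with a different expected value.

```python
# Minimal sketch of value-based bidding: max CPC = conversion rate * value per conversion / target ROAS.
# Every keyword, rate, and value below is a hypothetical illustration.

keywords = {
    # keyword: (conversion_rate, value_per_conversion)
    "running shoes": (0.040, 80.0),  # keyword that exactly matches the query "running shoes"
    "shoes":         (0.015, 60.0),  # broad match keyword that can also absorb that query
}

def target_cpc(conversion_rate: float, value_per_conversion: float, target_roas: float = 4.0) -> float:
    """Bid so that expected revenue per click divided by CPC hits the target ROAS."""
    return conversion_rate * value_per_conversion / target_roas

for keyword, (cvr, value) in keywords.items():
    print(f"{keyword!r}: calibrated max CPC = ${target_cpc(cvr, value):.2f}")

# If the query "running shoes" is served through the broad keyword "shoes", the click is
# priced off a bid calibrated to a different traffic segment, and the exact-match bid the
# advertiser set for that query never enters the auction.
```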

The same concern arises with the recent expansion of close variants, as traffic might shift from a keyword that a query specifically matches to a keyword that Google deems to have the same meaning but doesn’t exactly match the query.

Brief aside: Another bit from that article to keep in mind is that Google’s initial response to the situation George laid out was that it “wasn’t happening.” My advice is to remember that whenever Google talks about what can or can’t happen with match type cannibalization under the current rules. For what it’s worth, current documentation lists only two situations in which a keyword that more closely matches a query would be ignored, though Google does not present them as the *only* situations in which that might happen:

  1. There’s a cheaper keyword with a higher Ad Rank
  2. One of your keywords has a low search volume status

This notion of advertisers setting bids for one segment of traffic and then being pulled into auctions for a different segment is also rearing its head on the programmatic side of digital marketing.

Bid-caching is like the close variant match of programmatic

Index Exchange caught a lot of heat a few weeks ago when it was uncovered that it used bid-caching to take bids for one auction and transfer them to subsequent auctions that didn’t share all the same characteristics. As such, advertisers ended up paying for placements that didn’t match the criteria of what they intended to pay for.

The backlash to this revelation was loud and swift, with Index Exchange quickly eliminating the practice from its business. Speakers at this year’s Advertising Week reportedly went as far as to call it a “white-collar crime.”

There’s a clear difference between Index Exchange’s use of bid-caching and Google’s use of close variants in that the former was kept a secret while the latter has been publicly announced whenever updates are made.

It’s also the case that advertisers can set negative keywords on Google to prevent their ads from showing for specific queries, whereas there was no way to opt out of placements with Index Exchange. As Brad Geddes mentions here, though, the limits on the number of negative keywords permitted may start to become a real issue on Google.
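
One common workflow for exercising that control is to mine the search query report for close variants that underperform and upload them as exact match negatives. Below is a minimal sketch of that idea in Python; the file name, column names, match type label, and thresholds are all hypothetical placeholders, not a reference to any specific report format.

```python
# Sketch: mine a search query report for underperforming close-variant queries
# to add as exact-match negatives. File name, columns, and thresholds are hypothetical.
import pandas as pd

report = pd.read_csv("search_query_report.csv")

# Close variants show up as queries that triggered an exact match keyword
# without matching its text exactly.
close_variants = report[
    (report["match_type"] == "exact (close variant)")
    & (report["query"] != report["matched_keyword"])
]

# Flag variants with enough clicks to judge and no conversions to show for them.
MIN_CLICKS = 30
candidates = close_variants[
    (close_variants["clicks"] >= MIN_CLICKS) & (close_variants["conversions"] == 0)
]

# Emit an exact match negative list for review before uploading.
for query in sorted(set(candidates["query"])):
    print(f"[{query}]")  # bracket syntax denotes an exact match negative
```

The negative keyword limits noted above are the catch: the more aggressively matching expands, the more of that finite budget a routine like this consumes.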

Still, at their core, these two situations seem similar in that advertisers set bids for one segment and then end up targeting any number of other segments as a result.

Of course, Google’s done nothing to change course and doesn’t seem inclined to do so, despite the negative feedback it has received from many in the industry.

What is Google thinking?

Considering the backlash to its decision, it’s worth exploring what motivated Google to make this change in the first place.

One possibility is that Google genuinely thinks the close variants it serves are just as valuable as the true exact matches a keyword triggers ads for. Most advertisers would argue that’s not the case, and Merkle (my employer) data shows that, for non-brand keywords, the median advertiser sees conversion rates for exact match close variants that are 20 to 30 percent lower than for true exact matches.

One area where close variants are equally valuable is on the brand text ad side of things, where I’ve found almost no difference between true exact matches and exact close variants for the median advertiser. While that’s a positive development, it doesn’t alleviate concerns for non-brand keywords.

Another possibility is that Google knows close variants convert at a lower rate and is willing to eat the lower CPCs that come from advertisers reducing bids, so long as entering ads into new auctions increases overall click volume. This would only work if Google determined that a meaningful number of advertisers don’t currently have sufficient keyword coverage and wouldn’t immediately trim out the new close variants via negatives.
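
As a rough illustration of that tradeoff, consider the back-of-the-envelope math below. Every input is hypothetical except the 20 to 30 percent conversion rate gap cited above (the sketch uses its midpoint); the question is simply whether the extra clicks from new auctions outweigh the bid cuts advertisers make to keep their cost per conversion constant.

```python
# Back-of-the-envelope tradeoff: lower blended bids vs. added click volume.
# All inputs are hypothetical except the 20-30% conversion rate gap cited above,
# which is represented by its midpoint.

true_exact_cvr = 0.040      # conversion rate on true exact matches (hypothetical)
variant_cvr_penalty = 0.25  # close variants convert ~25% worse (midpoint of 20-30%)
variant_share = 0.15        # share of clicks that are now close variants (hypothetical)
baseline_clicks = 1000      # clicks before the expansion (hypothetical)
baseline_cpc = 1.00         # CPC calibrated to true exact matches (hypothetical)

# Blended conversion rate once close variants are mixed into the keyword's traffic.
blended_cvr = (
    (1 - variant_share) * true_exact_cvr
    + variant_share * true_exact_cvr * (1 - variant_cvr_penalty)
)

# An advertiser bidding to a constant cost per conversion cuts its bid in
# proportion to the drop in blended conversion rate.
adjusted_cpc = baseline_cpc * blended_cvr / true_exact_cvr

# Treat the close variant clicks as incremental volume from new auctions.
new_clicks = baseline_clicks / (1 - variant_share)

print(f"Adjusted CPC: ${adjusted_cpc:.2f}")
print(f"Ad revenue before expansion: ${baseline_clicks * baseline_cpc:,.2f}")
print(f"Ad revenue after expansion:  ${new_clicks * adjusted_cpc:,.2f}")
```

With these made-up inputs, the extra volume more than offsets the lower CPC, which lines up with the assumption below that Google expects the change to be a net positive for its ad revenue; change the inputs and the math can just as easily go the other way.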

Maybe Google knows it has two different sets of advertisers: those that will just add negatives and adjust campaigns so that performance isn’t meaningfully impacted, and another group of less actively managed accounts that won’t be able to adequately control the new matching and will end up spending more.

Naturally, in the grand scheme of things, Google probably expects this to be a net positive for its ad revenue, though exactly how that plays out is more complicated to assess.

Conclusion

The level of control available to advertisers on Google tends to ebb and flow.

For example, Google took away the ability to target tablet and desktop users independently when it rolled out Enhanced Campaigns and lumped desktops and tablets together. It then (thankfully) gave back that control in 2016, allowing advertisers to adjust bids to account for the significant difference in expected value between desktop and tablet clicks.

In the case of close variants, though, it’s just been one long slog of losing control. Over the years Google moved from optional to mandatory close variant targeting, and then expanded the definition of what constitutes a close variant, and is now in the process of expanding the definition again.

Whether it’s Google, Index Exchange, or any other digital marketing platform, advertisers want to know that they’re getting what they intend to pay for and that they have the controls to make sure of it. It’s now becoming debatable whether advertisers have the controls they need in Google Ads, and we’ll see over the next few weeks just how far the new close variants extend the reach of exact match keywords.

To paraphrase: more control good, less control bad; some platforms sneakily expand, others do it in broad daylight; advertisers mostly not like either way.

Was that a poor paraphrase? Let’s just call it a close variant.
