Using word2vec for finding negative keywords

Looking at your campaigns' search queries in the right way offers a very high potential for optimizing your performance. We have approaches to analyse n-grams of the users' queries and map all relevant KPIs like Cost Per Order, Conversion Rate, Value Per Click, etc.
Wait a moment! n-Grams, what’s that?
Let's say we have the following search query:

hugo boss onlineshop

1-Grams would be:
hugo
boss
onlineshop
2-Grams would be:
hugo boss
hugo onlineshop
boss onlineshop
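
The expansion itself is easy to script. Below is a minimal sketch in Python (the helper name query_ngrams is ours, not from any library); note that the 2-Grams above are all word pairs of the query, so we use combinations rather than only adjacent tokens:

from itertools import combinations

def query_ngrams(query, n):
    """Return all n-word combinations of a search query as space-joined strings."""
    tokens = query.lower().split()
    return [" ".join(combo) for combo in combinations(tokens, n)]

print(query_ngrams("hugo boss onlineshop", 1))
# ['hugo', 'boss', 'onlineshop']
print(query_ngrams("hugo boss onlineshop", 2))
# ['hugo boss', 'hugo onlineshop', 'boss onlineshop']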

When you do this over thousands of queries you will find some very interesting patterns that perform completely differently. If the performance of a pattern is very bad, the action would be to add some negative keywords for it.

This approach works quite well if you have enough sample size (e.g. clicks > 100) on an n-Gram pattern – the problem is that there is still a large number of infrequent words on which you waste a lot of money, but it would take too long to review all of them. How can we automate that?
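
To make the sample-size threshold concrete, here is a minimal sketch of the aggregation step in Python with pandas. The file name search_query_report.csv and the column names query, clicks, cost and conversions are placeholders, not our actual setup:

import pandas as pd
from itertools import combinations

def query_ngrams(query, n):
    return [" ".join(combo) for combo in combinations(query.lower().split(), n)]

report = pd.read_csv("search_query_report.csv")

# Explode every query into its 1-Grams and 2-Grams and carry the KPIs along.
rows = []
for _, r in report.iterrows():
    for n in (1, 2):
        for gram in query_ngrams(r["query"], n):
            rows.append({"ngram": gram, "clicks": r["clicks"],
                         "cost": r["cost"], "conversions": r["conversions"]})

ngram_stats = pd.DataFrame(rows).groupby("ngram", as_index=False).sum()
# Patterns without any conversion end up with an infinite CPO, which is fine for ranking.
ngram_stats["cpo"] = ngram_stats["cost"] / ngram_stats["conversions"]

# Only patterns with enough sample size are worth reviewing.
reviewable = ngram_stats[ngram_stats["clicks"] > 100].sort_values("cpo", ascending=False)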

Build a Word2Vec model for similarity searches

Word2Vec was invented at Google and uses neural networks to build models that understand the context of words. There are pre-trained models out there for several languages – in our case we build our own model from all search queries we paid for in the past.
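
As a rough idea of what building such a model can look like, here is a minimal sketch using the gensim library; the file queries.txt (one paid search query per line) and the hyperparameters are assumptions for illustration, not our production settings:

from gensim.models import Word2Vec

# One paid search query per line, tokenised by simple whitespace splitting.
with open("queries.txt", encoding="utf-8") as f:
    sentences = [line.lower().split() for line in f if line.strip()]

# Train a skip-gram model on the tokenised queries.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=1, workers=4)
model.save("search_queries.w2v")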

AdWords optimization use case: find negative search patterns and add them as negative keywords

Let's say we have one AdWords account that contains keywords for all product brands, e.g. +hugo +boss.
Looking at the n-Gram analysis, we realized that some people are searching for the pattern brand + location. The search intent is to buy locally and not online. That's the reason cities like "berlin" and "hamburg" pop up with quite high CPOs and bad conversion rates when they appear in the search query.
Ok, time to query the model – let’s use “berlin” as input and get some similar words:

{"berlin": [["münchen", 0.7318567037582397], ["hamburg", 0.6991483569145203], ["düsseldorf", 0.6703126430511475], ["essen", 0.6388342976570129], ["wien", 0.6380628347396851], ["österreich", 0.6259065270423889], ["nürnberg", 0.6144401431083679], ["germany", 0.6049420237541199], ["köln", 0.6002721786499023], ["hannover", 0.5998085737228394], ["austria", 0.5866931080818176], ["graz", 0.5863818526268005], ["stuttgart", 0.5808620452880859], ["deutschland", 0.5711857676506042], ["duitsland", 0.5685932040214539], ["munich", 0.5610017776489258], ["frankfurt", 0.5607503652572632], ["dresden", 0.5480561256408691], ["aachen", 0.5386894345283508], ["regensburg", 0.5285172462463379]]}

Wow, pretty impressive results! The output shows similar words together with their similarity values. For this case I limited the result to the top 20.
What does this mean for my negative keyword list now?
Based on just one 1-Gram ("berlin"), where we have enough click data, the model suggests a list of very similar words that currently do not have enough samples to raise our attention by just looking at the n-Gram list. We use this output to add a large number of new negative keywords that prevent us from paying for future search queries that are unlikely to convert.
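
For reference, an output like the one above can be produced with gensim's most_similar; a minimal sketch, assuming the model file from the training sketch earlier:

import json
from gensim.models import Word2Vec

model = Word2Vec.load("search_queries.w2v")

seed = "berlin"
similar = model.wv.most_similar(seed, topn=20)  # list of (word, cosine similarity) pairs

# Print in the same shape as the output shown above.
print(json.dumps({seed: [[word, float(score)] for word, score in similar]}, ensure_ascii=False))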

So the full process looks like this now:

  1. Classify negative search patterns based on their n-Gram data where we have sufficient data
  2. Use this classification as input for querying our word2vec model and get thousands of similar words (see the sketch below)

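Putting both steps together, a sketch of the expansion could look like this; the seed list, the similarity cutoff of 0.5 and the topn value are illustrative choices, not fixed recommendations:

from gensim.models import Word2Vec

model = Word2Vec.load("search_queries.w2v")

seed_negatives = ["berlin", "hamburg"]  # step 1: bad-performing patterns with enough data
similarity_cutoff = 0.5                 # tune per account

candidates = set(seed_negatives)
for seed in seed_negatives:
    if seed in model.wv:
        for word, score in model.wv.most_similar(seed, topn=500):
            if score >= similarity_cutoff:
                candidates.add(word)

# 'candidates' can now be reviewed and added as negative keywords.
print(sorted(candidates))
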
This two-step approach works great for Google AdWords accounts that use:

  • Phrase, Modified Broad or Broad match keywords
  • Google Shopping
  • Dynamic Search Ads

If you are interested in a demo of our Querylyzer SaaS module, which provides a web-based interface for this process (with a Google AdWords API interface for easy action rollout), please write to neefi@sealyzer.com.
