How to Structure Keywords Using Entity Lookups in BigQuery

BigQuery is a powerful tool for analyzing your PPC or SEO data, and databases are optimized to run JOIN statements between large tables. In this post, I will describe how to transform the problem of keyword tagging, or entity recognition, so that it can be handled with pure SQL within Google BigQuery – even for big keyword lists.


    Why are entities useful for keyword analysis?

    In simple words: entities are additional logical layers on top of your keywords or search terms. They make it possible to aggregate numbers and observations.

    By doing this, search patterns will appear that were hidden before. You are effectively doing a kind of clustering on your keywords that is suited even for large data sets – I mean GBs of data.

    Getting started: Build an Entity Lookup List

    First, let's talk about the base structure of an Entity Lookup list: we have some entity keys and the matching entity name. What we will do is search for entity keys within the large keyword set. Whenever a match is found, the entity name is assigned to that keyword. So let's start with a base lookup list:

    The structure of the Entity Lookup List: “key” is the string we are searching for in keywords – “value” is the entity name assigned if a match is found
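    As a minimal illustration of this structure (the keys and entity names below are hypothetical examples, not from a real feed), the lookup list and the matching logic could be sketched like this:

```python
# A minimal entity lookup list: each "key" is a string we search for
# inside the keywords, each "value" is the entity name to assign.
# The keys and names below are made-up examples.
entity_lookup = [
    {"key": "hotel",  "value": "accommodation"},
    {"key": "hotels", "value": "accommodation"},
    {"key": "berlin", "value": "city"},
    {"key": "cheap",  "value": "price_modifier"},
]

def lookup_entities(keyword: str) -> list[str]:
    """Return the entity names whose key appears as a word in the keyword."""
    words = keyword.lower().split()
    return [row["value"] for row in entity_lookup if row["key"] in words]

print(lookup_entities("cheap hotel berlin"))
# ['accommodation', 'city', 'price_modifier']
```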

    You will now realize that you probably have data available somewhere to enrich your entity list. If you are running an e-commerce shop, you will have a product feed – this is great for getting entities like product brand names, categories/lines, and colors or sizes. Put everything you have for your business into the table structure above and 50 percent of the work is already done.

    Enrich your Entity Lookup List – further approaches

    First: the implementation of these approaches is not part of this post – we are going to share detailed posts about that in a follow-up. For our consulting projects, we use these approaches to enrich our entity lists:

    • String similarity using Levenshtein distance: based on existing entity keys in your lookup list, we check untagged n-grams that appear in the keyword set we want to analyze. Here is a Python script example of how you can apply string similarity.
      Examples: “hotel” is very close to “hotels” (singular/plural) or “hottel” (typo)
    • Semantic similarity using word2vec embeddings: this approach discovers semantically related entity keys that string similarity approaches could never find. We use word2vec technology for discovering negative keywords within Google Ads; here are some more details on how it works.
      Example: “hotel” is very close to “accommodation” when it comes to meaning.
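    To make the string-similarity idea concrete, here is a minimal, self-contained sketch using a hand-rolled Levenshtein distance (in practice you might prefer a library implementation; the candidate words are made-up examples):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

# N-grams that are close variants of an existing entity key are
# candidates for being added to the lookup list.
known_key = "hotel"
candidates = ["hotels", "hottel", "flight"]
close = [c for c in candidates if levenshtein(known_key, c) <= 2]
print(close)  # ['hotels', 'hottel']
```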

    Note: our strategy is to work heavily on enriching the entity lookup list. Similarity checks are expensive at runtime, so you should avoid running them every time you want to label a keyword set. A big lookup list instead makes it possible to use database JOINs, which are far faster.

    Labeling Keywords using Google BigQuery

    To run the entity labeling, we of course need another data source: keywords or queries with the KPIs you want to analyze. If you are a PPC professional, the Search Term Performance Report is a good starting point. If you are an SEO professional, you might want to analyze your Search Console data with this approach. Importing the data into BigQuery is not part of this post, but you can automate the import easily using Google Ads Scripts or Google Apps Script.

    The raw keyword/query report that should be labeled with entity values

    Now we have two tables in BigQuery: one contains the keywords that should be tagged; the second contains the label names together with the entity keys. In other words: if the key string is found in the full query, the label should be assigned.

    Transforming Keywords into N-Grams using the ML.NGRAMS function in BigQuery

    You will realize that the existing data structure does not allow database JOIN operations for running the entity extraction. For that reason, we split the keywords into n-grams using ML.NGRAMS within BigQuery.

    Split Queries into N-Grams using BigQueryML

    With this transformation, we split a single row containing a keyword into multiple rows, each containing an n-gram substring of the original keyword. I used 1-grams and 2-grams. If you look at the resulting n-grams, you will realize that JOINs with the entity lookup table are now possible.
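    The transformation can be sketched in plain Python (mimicking what ML.NGRAMS with [1,2] does to each keyword; the sample keyword is illustrative):

```python
def ngrams(keyword: str, n_range=(1, 2)) -> list[str]:
    """Split a keyword into word-level n-grams, like ML.NGRAMS(..., [1,2], ' ')."""
    words = keyword.lower().split()
    grams = []
    for n in range(n_range[0], n_range[1] + 1):
        for i in range(len(words) - n + 1):
            grams.append(" ".join(words[i:i + n]))
    return grams

# One keyword row becomes multiple (ngram, keyword) rows:
rows = [(g, "cheap hotel berlin") for g in ngrams("cheap hotel berlin")]
for g, q in rows:
    print(g, "|", q)
```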

    JOIN N-Grams Transformations with the Entity Table

    ngram_entity does the JOIN with T_entity (the table that contains our lookup values)

    Flatten the Entity Lookup: STRING_AGG

    Grouping by the original query with the STRING_AGG function reduces the rows

    SELECT Statement: Bringing everything together

    Final Select Statement using all components

    When we put everything together and run the full query, we get the desired result:

    Labeled Queries after Running the full SQL Statement

    You will notice that a bunch of new columns have been added to the original keyword source table:

    • Entity: this column contains the entities found within the keyword, combined with a pipe as delimiter. You can easily change that to your needs in the STRING_AGG definition. Tagging this way loses the information about why we assigned each entity; this perspective is useful for analyzing search patterns without needing the details.
    • EntityKeyValues: for when details are important and you want to know the entity values. After each assigned entity, the matching key is shown; the two are separated with “:” as delimiter.
    • EntityKeyValuesJson: you will probably have use cases that require other string transformations than the ones we used for the EntityKeyValues column, which addresses some reporting-related cases. In BigQuery you can make use of the JSON extract functions, which is why we added this column. It makes it very easy to create additional columns like Campaign, AdGroup, or Keyword Labels that use entity data as placeholders.
    • NumberOfUnknownWords: this number tells you how good the coverage of your entity list is. It also makes it possible to apply filters like: add new keywords to the Google Ads account (in a specific campaign or ad group) where a list of entities was found and the number of unknown words is zero. It will also give you a good indication of how the traffic is changing over time, e.g. when Google forces close variant matches in Google Ads.
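    To make these output columns concrete before looking at the SQL, here is a small stdlib-only Python sketch that reproduces the labeling logic in miniature (n-gram split, lookup join, aggregation per query, unknown-word count); the column names follow the post, the sample data is made up, and the sketch assumes single-word entity keys for simplicity:

```python
def label_query(query: str, entity_lookup: dict, n_range=(1, 2)) -> dict:
    """Label one query with entities and count the words not covered."""
    words = query.lower().split()
    # Build 1- and 2-grams, like ML.NGRAMS does in the SQL version.
    grams = [" ".join(words[i:i + n])
             for n in range(n_range[0], n_range[1] + 1)
             for i in range(len(words) - n + 1)]
    # "Join" the n-grams with the lookup list.
    matches = [(key, value) for key, value in entity_lookup.items() if key in grams]
    recognized = {key for key, _ in matches}
    return {
        "Query": query,
        "Entity": " | ".join(value for _, value in matches),
        "EntityKeyValues": " | ".join(f"{value} : {key}" for key, value in matches),
        "NumberOfUnknownWords": len(words) - len(recognized),
    }

entity_lookup = {"hotel": "accommodation", "berlin": "city"}
print(label_query("cheap hotel berlin", entity_lookup))
```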

    You will have to adjust the SQL code to fit your own table structure; nevertheless, I think it makes sense to share the SQL code so you can copy and paste some fragments:

    WITH Ngram AS (
        -- Split each query into 1- and 2-grams (one row per n-gram)
        SELECT Ngram, Query
        FROM `sealyzer-data-science.test.T_Queryreport`,
        UNNEST(ML.NGRAMS(SPLIT(REGEXP_REPLACE(LOWER(Query), r'(\pP)', r' \1 '), ' '), [1,2], ' ')) AS Ngram
    ),
    
    ngram_entity AS (
        -- JOIN the n-grams with the entity lookup table
        SELECT t1.*, t2.value,
            CONCAT("'", t2.value, "'", " : ", "'", t1.Ngram, "'") AS TagValuesJson,
            CONCAT(t2.value, " : ", t1.Ngram) AS TagValues
        FROM Ngram AS t1
        LEFT JOIN `sealyzer-data-science.test.T_entity` AS t2
        ON LOWER(t1.Ngram) = LOWER(t2.key)
        #WHERE value IS NOT NULL
    ),
    
    query_entity AS (
        -- Flatten back to one row per query
        SELECT Query,
            STRING_AGG(value, " | ") AS Entity,
            STRING_AGG(TagValues, " | ") AS EntityKeyValues,
            CONCAT("{", STRING_AGG(TagValuesJson, ","), "}") AS EntityKeyValuesJson,
            ARRAY_AGG(DISTINCT IF(value IS NULL, NULL, Ngram) IGNORE NULLS) AS RecognizedNgram
        FROM ngram_entity
        GROUP BY Query
    )
    
    -- Join the entity columns back to the original report
    SELECT t1.*, t2.Entity, t2.EntityKeyValues, t2.EntityKeyValuesJson,
        (ARRAY_LENGTH(SPLIT(t2.Query, " ")) - IFNULL(ARRAY_LENGTH(RecognizedNgram), 0)) AS NumberOfUnknownWords
    FROM `sealyzer-data-science.test.T_Queryreport` AS t1
    INNER JOIN query_entity AS t2
    ON t1.Query = t2.Query

    Next Steps: Interesting Use Cases for Entity Labeling in SEO and PPC

    Congratulations! You now have a powerful new tool in your toolbox for data-driven optimizations. Now it is time to apply it to some real-world problems in online marketing. Here is a list of ideas where you can test this approach. All you have to do is adjust the input columns and maybe the output. It is quite simple to extract a specific list of keys from the JSON column and flatten them into additional columns:

    JSON_EXTRACT on the entity data makes it easy to create additional columns with the entity name as the column name

    SEO Ranking Monitoring based on Entity Patterns

    You have several options for data sources here. Maybe you have your own rank-monitoring tool that collects data for each keyword. I personally use Google's Custom Search API and save ranking results (a Spreadsheet solution) directly in BigQuery, which makes it very easy to run this analysis. In addition, I also recommend storing your Google Search Console data in BigQuery. The entity tagging part stays the same.

    Value used for entity tagging: Keywords
    KPIs: Ranking, Impressions, Clicks
    Benefits: When you look at single queries, you are often lost in data, and major changes in rankings and performance might be hidden. Grouping keywords together by entities gives you bigger numbers, which helps you identify new action areas.

    SEO Content Analysis using Entity Tagging

    Instead of keywords you can use URL slugs. If you also save the ranking URLs of your competition, you can use the data set from above. Another good data source is the XML sitemap; you can easily extract the URL slugs from a sitemap with a few lines of Python. To get an idea, you can also use our free online tool for SEO content analysis to extract the most frequent n-grams. Now think of an extra layer on top that extracts entities – this will give you bigger keyword clusters.
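    A short sketch of the sitemap idea with the standard library (the sitemap content and domain are placeholders; we parse from a string here to keep it self-contained, but in practice you would fetch the XML first):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

# Placeholder sitemap content; in practice, fetch it from your site,
# e.g. https://example.com/sitemap.xml (hypothetical URL).
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/cheap-hotel-berlin</loc></url>
  <url><loc>https://example.com/flight-deals-munich</loc></url>
</urlset>"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_xml)
# Turn each URL path into a slug, then into words ready for entity tagging.
slugs = [urlparse(url.text).path.strip("/").replace("-", " ")
         for url in root.findall(".//sm:loc", ns)]
print(slugs)  # ['cheap hotel berlin', 'flight deals munich']
```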

    Value used for entity tagging: URL Slugs, Website Titles, Headlines
    KPIs: Count of occurrences
    Benefits: Knowing the high-level content strategies of your SEO competitors, you can adjust your own actions. What is the market benchmark, and how do they use entities to improve their SEO rankings? Of course, you can run the same analysis on your own content, compare it with your competition, and look for the main differences and gaps.

    Value Prediction Models using Entity Features

    This is my favorite one, and I think I will share a deep dive on building machine learning models within BigQuery soon. Every prediction model needs features – and you may have noticed that in the SQL code we used ML.NGRAMS to create n-grams, a very common way to create features out of text. Our approach is even more powerful: using structured entities tailored to your business will result in superior models.

    Next question: what does “value” prediction mean? In a PPC context you can predict conversion rates and average order values – this gives you an estimated VPC (value per click), which is most of what you need to set manual CPC bids in your Google Ads account based on your targets. With a big entity list you can create bidding models superior to Google's Smart Bidding. By knowing your business, you can enrich your keywords with entities, which gives you more observations and better models to judge long-tail keyword performance.

    Model Features: Entity Keys and Entity Values of PPC Search Terms
    Target Variable: PPC Transactions, PPC Order Values
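    As a hedged, stdlib-only sketch of the idea – a naive mean-encoding baseline rather than a real BigQuery ML model, with made-up entities and numbers:

```python
from collections import defaultdict

# Historical search terms represented by their entity sets, together with
# the observed value per click. Entities and numbers are illustration data.
history = [
    ({"accommodation", "city"}, 1.20),
    ({"accommodation"},         0.80),
    ({"city"},                  0.40),
    ({"accommodation", "city"}, 1.00),
]

# Average value per click for each entity (the "model" parameters).
totals = defaultdict(lambda: [0.0, 0])
for entities, vpc in history:
    for e in entities:
        totals[e][0] += vpc
        totals[e][1] += 1

def predict_vpc(entities: set) -> float:
    """Predict value per click as the mean of the per-entity averages."""
    means = [totals[e][0] / totals[e][1] for e in entities if e in totals]
    return sum(means) / len(means) if means else 0.0

# New long-tail keywords inherit the value estimate of their entities.
print(round(predict_vpc({"accommodation", "city"}), 2))
```

    This is exactly why entities help with the long tail: even a keyword never seen before gets a value estimate as soon as its entities are known.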

    If your company has a high-spending SEA department, you should use this approach to prioritize your SEO activities. In the end it is not about ranking gains or traffic growth; it is about the value of the traffic for your business.

    Now I can already hear some SEO professionals screaming: you can't focus on transactional content only! There are more stages before a transaction happens. You are right! Let's model it. The features stay the same, but we have to adjust the target variable:

    Our target is now based on the user coming back to our website after some time, say within a time window of 90 days. You can extract that data at client ID / URL level in Google Analytics. Use e.g. the URL, headlines, or titles to extract the entity features, and create a prediction model that gives you the probability that top-of-funnel content will bring returning users. The feature importances (entities and entity keys) give you a great way to set the right focus for your SEO activities.

    Model Features: Entity Keys and Entity Values of Page Content (URL, Title, Headlines)
    Target Variable: Returning User within X Days time window

    Once you start brainstorming a little, you will find a lot of additional use cases for your business. One more thing: it is very easy to build a nice Google Data Studio dashboard that makes use of these entity insights. Your colleagues will be impressed! 🙂
