Leveraging Machine Learning for Effective Marketing Strategy Development

Tips and tricks for building a successful marketing strategy with ML

Elena K.
Towards Data Science


Image from unsplash.com

Marketing attribution models are widely used today for building marketing strategies. These strategies are based on assigning credit to each touchpoint along the customer journey. There are many different types of models, but they can be classified into two groups: single-touch and multi-touch attribution models. These models are usually easy to interpret and implement, and they can even be useful in rare cases. However, most of them are incapable of supporting a robust marketing strategy on their own. The problem lies in the fact that these models either operate on rules that may not apply to certain data or industries, or they rely on a limited amount of data, leading to a loss of valuable insights. To learn more about the types of marketing attribution models, check out my previous article.

Today, I would like to discuss how we utilized machine learning to develop a marketing strategy, the data we used, and the outcomes we achieved. Within this article, we will address the following questions:

  1. Where should you obtain the data?
  2. How should you prepare the data for model training?
  3. How can you use the model's predictions effectively and draw meaningful conclusions?

I will illustrate all of this using data from one of our clients, with some parts modified; these modifications do not affect the overall results. Let's refer to this company as XYZ. The client has permitted the publication of this data.

Data

There are several ways to obtain traffic logs from websites. These methods don't always provide all the information you may need for your analysis. However, sometimes one source can be integrated into another, and at other times you can manually accumulate and combine data from multiple sources. You can also write your own scripts to gather the necessary information. Now, let's talk a bit about the most popular sources available today and the data you can obtain from them:

Google Analytics

Google Analytics (GA4) is a powerful platform that gives you access to various website analytics tools and allows you to measure engagement and traffic across your apps and websites. By default, it uses last-click attribution; nevertheless, you can build your own custom ML attribution model by collecting GA4 data on events, sessions, and traffic sources.

Google Analytics offers you different events for different industries.
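For example, for online stores GA4 recommends a set of standard e-commerce events. A few of the officially recommended event names (this is not an exhaustive list):

# A few of GA4's recommended e-commerce event names
ECOMMERCE_EVENTS = (
    "view_item",       # a user viewed a product
    "view_item_list",  # a user viewed a list of products
    "add_to_cart",     # a user added a product to the cart
    "begin_checkout",  # a user started the checkout flow
    "purchase",        # a user completed a purchase
)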

Meta Pixel

Meta Pixel is a tool that enables you to track your ad promotions and visitors' activity on your website. It gives you insights into how your audience interacts with your Facebook and Instagram ads, along with data on how these users behave on your website after they click on an ad. In general, you will get the same data as with Google Analytics. However, Meta Pixel is more focused on retargeting, so it offers more tools for that than Google Analytics does.

Yandex Metrika

Yandex Metrika offers features similar to the services above, but it has its pros and cons. On the downside, Yandex Metrika limits the number of processed requests from one account to 5,000 per day, whereas Google Analytics allows 200,000 requests per day. On the upside, Yandex Metrika offers Webvisor, a session-replay tool that records all mouse movements.

These are not the only services you can use to obtain user data. Since most sources provide largely the same types of data, when choosing one you can pay attention to factors such as ease of report configuration and integration with other products. We chose Google Analytics (GA4) because it provides comprehensive data and convenient tools. Additionally, the data integrates easily with BigQuery, and we use the Google Cloud infrastructure.
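The raw data itself is not reproduced here, but as a minimal sketch, here is how such data can be pulled from the GA4 BigQuery export (the project, dataset, and table date are placeholders; the selected fields are standard columns of the export schema):

from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are already configured

# The GA4 export writes one table per day: events_YYYYMMDD
query = """
    SELECT
        user_pseudo_id,
        event_timestamp,
        event_name,
        traffic_source.source AS utm_source,
        traffic_source.medium AS utm_medium,
        traffic_source.name   AS utm_campaign
    FROM `your-project.analytics_123456789.events_20230301`
    LIMIT 1000
"""
raw_events = client.query(query).to_dataframe()
print(raw_events.head())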

Data Preparation

Returning to the task at hand: we aim to determine which ad campaigns are the most attractive for investment, so that we can reduce budget expenses while maintaining or increasing revenue. The GA4 data representation is therefore convenient for us, because it contains information about each user action/touchpoint, such as:

  • Button clicks
  • Scrolling
  • Photo views
  • Searches, etc.

In turn, all these actions can be further transformed into micro-conversions, which are exactly what we need. We will use this set of micro-conversions to predict the likelihood of a user making a purchase in each session.

When solving such a task, the following micro-conversions can be of interest:

  • Visiting the sale page
  • Viewing popular or key products
  • Searching for a specific size
  • Viewing product photos
  • Viewing all product photos
  • Reviewing product care information
  • Adding a product to the shopping cart, etc.

In fact, you can come up with any number of micro-conversions on your own. The choice of micro-conversions greatly depends on the specific characteristics of your store/business.

In the end, we settled on a set of features and micro-conversions for our model; in total, there are 97 features.
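The original feature table is not reproduced here; as an illustration only, a hypothetical subset of such a feature set might look like this (the names below are derived from the micro-conversions described above, not from the client's actual schema):

# Hypothetical subset of the 97 features
FEATURES = [
    # traffic attribution
    "utm_source", "utm_medium", "utm_campaign",
    # micro-conversions (binary flags per touchpoint)
    "visited_sale_page",
    "viewed_key_product",
    "searched_specific_size",
    "viewed_all_product_photos",
    "viewed_product_care_info",
    "added_to_cart",
]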

You can see a lot of features connected to UTM parameters; they mean the following:

  • utm_source is the name of the platform or tool that’s used to create the medium;
  • utm_medium identifies the type or high-level channel of the traffic;
  • utm_campaign is the name of the marketing campaign;
  • the other utm features refer to the first touchpoint inside the user journey or the session.
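As a quick illustration of where these values come from: UTM parameters are simply query parameters attached to the landing-page URL (the URL below is made up):

from urllib.parse import parse_qs, urlparse

# A made-up landing-page URL with UTM tags
url = "https://example.com/sale?utm_source=google&utm_medium=cpc&utm_campaign=spring_sale"

params = parse_qs(urlparse(url).query)
print(params["utm_source"][0])    # google
print(params["utm_medium"][0])    # cpc
print(params["utm_campaign"][0])  # spring_sale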

Let's get back to discussing the other features. Some of the columns are available in the raw data as-is, so you don't have to do anything with them. However, some columns aren't ready for use, and you have to do some manipulation first. Here is an example of how we derived one micro-conversion: adding a product to the shopping cart.
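The original snippet is not reproduced here; below is a minimal pandas sketch of this kind of transformation, assuming a GA4-style event table where each row is one event (in the GA4 export, adding a product to the cart is the standard add_to_cart event):

import pandas as pd

# Toy GA4-style event log; in practice this comes from the BigQuery export
events = pd.DataFrame({
    "session_id": [1, 1, 2, 2],
    "event_name": ["page_view", "add_to_cart", "page_view", "scroll"],
})

# Flag the touchpoints where a product was added to the cart ...
events["added_to_cart"] = (events["event_name"] == "add_to_cart").astype(int)

# ... and roll the flag up to a per-session micro-conversion
session_flags = events.groupby("session_id")["added_to_cart"].max()
print(session_flags)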

Model

I would like to remind you that, using the model, we want to obtain the probability of a user making a purchase at each touchpoint. Then, we convert this into the probability of making a purchase within a session. Therefore, we used a classification model and called predict_proba to get the probability of purchase at each user interaction. After trying several models, ranging from linear models to boosting, we settled on CatBoostClassifier. Before deploying and retraining the model daily, hyperparameter tuning was performed. We will not delve into the details of model creation, as we followed a classic approach: hyperparameter tuning, then model training, then calculation of the relevant metrics.
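A simplified sketch of this setup is shown below. The hyperparameters are placeholders rather than our tuned production values, and the random X and y merely stand in for the real feature matrix and purchase labels:

import numpy as np
from catboost import CatBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Stand-in data: random features and rare positive labels, as with real purchases
rng = np.random.default_rng(42)
X = rng.random((1000, 5))
y = (rng.random(1000) < 0.05).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Placeholder hyperparameters; in production they were tuned beforehand
model = CatBoostClassifier(iterations=200, depth=6, learning_rate=0.1, verbose=False)
model.fit(X_train, y_train)

# Probability of a purchase at each touchpoint
proba = model.predict_proba(X_test)[:, 1]

# Classify a touchpoint as a purchase using the 0.1 threshold discussed below
preds = (proba > 0.1).astype(int)
print("Recall on the TEST:", round(recall_score(y_test, preds), 3))
print("Accuracy on the TEST:", round(accuracy_score(y_test, preds), 3))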

The model is now trained on one month of data, as changing this duration to a longer or shorter period did not yield a significant improvement. Additionally, we use a threshold of 0.1 to determine a purchase. We chose this value because it is 10 times higher than our client's baseline purchase probability. It serves as a trigger for us to examine these events and investigate whether a purchase was actually made, and if not, why. In other words, any touchpoint where the model's probability is greater than 0.1 is classified as a purchase. As a result, we obtained the following values for the recall and accuracy metrics:

Recall on the TEST: 0.947
Accuracy on the TEST: 0.999

Based on these metrics, we can see that we are still missing some purchases. It's possible that the paths to these purchases differ from the typical user journey.

So, we have all the features and model probabilities, and now we want to build a report to understand which ad campaigns are underrated and which are overrated. To obtain the ad_campaign, we combine the utm_source, utm_medium, and utm_campaign features. Then we take the maximum probability within each user session and multiply it by the average order value from the same timeframe as the test dataset. Finally, we generate the report by summing these values for each ad campaign.
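A pandas sketch of this aggregation (the column names, campaign names, and average order value below are illustrative):

import pandas as pd

# Toy touchpoint-level frame; in practice purchase_proba is the model's output
touchpoints = pd.DataFrame({
    "session_id":     [1, 1, 2, 3],
    "utm_source":     ["google", "google", "instagram", "google"],
    "utm_medium":     ["cpc", "cpc", "cpc", "cpc"],
    "utm_campaign":   ["brand_usa", "brand_usa", "010323_main", "brand_latvia"],
    "purchase_proba": [0.02, 0.35, 0.60, 0.05],
})

# Combine the UTM features into a single ad_campaign identifier
touchpoints["ad_campaign"] = (
    touchpoints["utm_source"] + " / "
    + touchpoints["utm_medium"] + " / "
    + touchpoints["utm_campaign"]
)

# Maximum purchase probability within each session ...
sessions = (
    touchpoints.groupby(["session_id", "ad_campaign"])["purchase_proba"]
    .max()
    .reset_index()
)

# ... multiplied by the average order value gives expected revenue
AVG_ORDER_VALUE = 120.0  # illustrative AOV for the test-set timeframe
sessions["expected_revenue"] = sessions["purchase_proba"] * AVG_ORDER_VALUE

# Sum expected revenue per ad campaign to build the report
report = sessions.groupby("ad_campaign")["expected_revenue"].sum()
print(report.sort_values(ascending=False))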

This gives us a per-campaign report of expected revenue.

Now we can move on to the marketing metrics. Since we want to measure the success of marketing campaigns, we can consider the following metrics, which marketers often use:

  • ROAS (Return on Ad Spend) is a marketing metric that measures the efficacy of a digital advertising campaign: ROAS = ad revenue / ad spend;
  • CRR (Cost Revenue Ratio) measures the ratio of marketing expenses to the revenue they generate: CRR = (ad spend / revenue) × 100%.

We will calculate them using our data and compare them with the ROAS and CRR values that marketers typically obtain using last-click attribution.

Since we see only three paid campaigns within the analyzed period, we will pull the metrics for these campaigns from GA4 and add the actual ROAS and CRR based on last-click attribution. We discussed why last-click attribution is not an accurate approach for evaluating ad campaign contribution in the previous article.

Using the formulas mentioned above, we calculate the final report with the predicted ROAS and CRR.
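As a sketch with purely illustrative numbers (predicted revenue comes from the model's report, ad spend from GA4):

# Illustrative per-campaign inputs; none of these numbers are the client's
campaigns = {
    "google / cpc / brand_usa":      {"predicted_revenue": 12000.0, "ad_spend": 6000.0},
    "instagram / cpc / 010323_main": {"predicted_revenue": 4500.0, "ad_spend": 1500.0},
}

for name, c in campaigns.items():
    roas = c["predicted_revenue"] / c["ad_spend"]       # ROAS = ad revenue / ad spend
    crr = c["ad_spend"] / c["predicted_revenue"] * 100  # CRR = (ad spend / revenue) in %
    print(f"{name}: predicted ROAS = {roas:.2f}, predicted CRR = {crr:.1f}%")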

Now we have all the data to draw conclusions about the ad campaigns:

  • The campaign “google/cpc/mg_ga_brand_all_categories_every_usa_0_rem_s_bas” is overrated, as its predicted ROAS is 2 times lower than the ROAS based on last-click attribution. Most likely, users often make purchases after clicking on this ad campaign, but they are already warm customers.
  • The ad campaign “instagram / cpc / 010323_main” is underrated, as its predicted ROAS is 4 times higher than the actual ROAS.
  • The campaign “google / cpc / mg_ga_brand_all_categories_every_latvia_0_rem_s_bas” has similar predicted and actual ROAS.

With this data, you can develop marketing strategies for the next period on your own. Don't forget that marketing strategies require testing, though that is beyond the scope of this article.

In this article, we discussed how machine learning can be used to build a marketing strategy. We touched upon data selection, data preprocessing for modeling, the modeling process itself, and deriving insights from the results. If you are working on a similar task, I would be interested to hear about the approaches you have used.

Thank you for reading!

I hope the insights shared today have been valuable to you. If you want to reach out, feel free to connect with me on LinkedIn.
