The Ultimate Guide to Cracking Product Case Interviews for Data Scientists (Part 2)

4 types of product sense questions and frameworks

Emma Ding
Towards Data Science
12 min read · Apr 13, 2021


Written by Emma Ding and Rob Wang

This is Part 2 of Cracking Business Case Interviews for Data Scientists! See Part 1 of the article here:

If you are more of a video person, check out this YouTube video which is an abbreviated version of this post, and this playlist for sample answers to real business case interview questions from Facebook, LinkedIn, and Lyft.

Table of Contents

  1. 4 Types of Business Case Questions
  2. How to Prepare for a Business Case Interview?
  3. Tips to Ace the Business Case Interview

4 Types of Business Case Questions

In this section, we will summarize the 4 most commonly asked categories of business case questions:

  • Diagnosing a Problem;
  • Measuring Success;
  • Launch or Not;
  • Improving a Product.

We will also provide frameworks for approaching each of these categories. Most metrics questions are open-ended by nature, and there are many solution strategies. These frameworks are meant to serve as a mental checklist for a complete response. They are general enough that you should not recite them verbatim or follow them blindly; additional creative thinking is still required.

Regardless of the business case interview’s particular focus, we always recommend starting with clarifying questions and ending with a summary of your approach. Clarifying questions are particularly relevant for confirming the function and goal of the underlying product. Proceeding without them can be a red flag for the interviewer; imagine how awkward it would be if you spent 5 minutes answering the question, only to realize later that your understanding of the product was wrong. Questions worth asking may include but are not limited to:

  • What does a feature / product do?
  • How is a feature / product used?
  • For whom is a feature / product built?

Diagnosing a Problem


The first question category is Diagnosing a Problem: suppose that an important business metric is trending negatively, and stakeholders ask you to identify the root cause. Here are some sample questions:

  • The creation of Facebook user groups has gone down by 20%. What is going on?
  • How would you investigate a 10% drop in the usage of a product?
  • We have a dashboard tracking our metrics, and the average estimated time of arrival (ETA) is up by 3 minutes. How would you investigate this problem?

The key to passing this interview is convincing the interviewer that you can follow a systematic approach. There are many aspects worth discussing, but throwing random ideas at the interviewer is never a good strategy. Here are 6 useful steps; not every question requires all of them.

  1. Clarify the definition of the metric. For example, for the above ETA question, you could clarify how start time and end time are defined. For a question related to engagement, you could clarify and propose ways of measuring engagement: Is it by the number of created posts, replies to posts or reactions to posts, or shared posts? Or, perhaps it is the time people spend on posts?
  2. The temporal aspect of the change. Did the metric change suddenly or progressively? Once this is answered, you could then discuss whether it was due to internal factors, such as a corrupted data source or bugs in production code, or external factors such as seasonality, industry trends, marketing campaigns of competitors, or special events such as natural disasters or political instability around the same time the metric changed.
  3. Whether other products or features have the same change. You could investigate whether metrics for other related products have experienced the same change. Also, you could ask the interviewer whether changes were made to the overall product line.
  4. Segment users by demographics and behavioral features (e.g. user age group, region, language, and platform). For instance, was the decline happening in an isolated geographic region? Does this change involve only one platform, i.e. iOS, Android, or web users?
  5. Decompose the metric for a more in-depth analysis. You could discuss which particular user group was primarily experiencing the change. For example, DAU = existing users + new users + resurrected users − churned users. Investigating which user segment has the largest influence helps to narrow down the problem.
  6. Summarize your approach to show the interviewer that you have a clear and structured way to analyze the problem. Depending on feedback from the interviewer, you could further brainstorm potential fixes for the root causes that you have identified.
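Steps 4 and 5 can be sketched concretely. The snippet below is a minimal illustration, with made-up daily counts segmented by platform, of how comparing segment-level changes quickly narrows down where a drop is concentrated:

```python
# Hypothetical daily group-creation counts by platform; in practice
# these would come from a metrics warehouse or dashboard query.
last_week = {"iOS": 5200, "Android": 6100, "web": 2700}
this_week = {"iOS": 5150, "Android": 3900, "web": 2680}

def pct_change(before, after):
    """Week-over-week percentage change."""
    return (after - before) / before * 100

for platform in last_week:
    change = pct_change(last_week[platform], this_week[platform])
    print(f"{platform}: {change:+.1f}%")
# Android shows a sharp drop while iOS and web are flat, pointing
# toward a platform-specific cause (e.g. a bug in a recent release).
```

The same pattern applies to any segmentation dimension (region, language, user tenure): compute the change per segment and look for outliers.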

Measuring Success

The second question category is Measuring Success. In particular, you are asked to measure the success or health of a product or feature. Below are some questions that can fall into this category.

  • How would you measure the health of Mentions — Facebook’s app for celebrities? How can Facebook determine if it is worth keeping the feature?
  • Instagram is launching a new feature. How do you tell if it is doing well?

More abstract versions of this question category may include:

  • How would you measure the success of conversations on an online forum?
  • How would you measure the happiness of drivers on a ride-sharing application?

At their core, these questions are aimed at evaluating the candidate’s capability of defining success metrics. To answer this type of question, we recommend providing no more than 3 metrics: 2 success metrics (which measure the effectiveness and success of a product) and 1 guardrail metric (which should not degrade in pursuit of a new product or feature).

In Part 1 of this post, we summarized a few characteristics of good metrics. Here, we emphasize one additional key characteristic: Good metrics should also fit the context. A seemingly reasonable metric might not make sense in a different context. For example, consider measuring the success of a new job recommendation algorithm. The underlying goal is to improve user satisfaction with the recommendation results. In such a scenario, DAU would not be an appropriate success metric. It would make more sense to use metrics that are natural for the context, such as the click-through rate of the results or the percentage of users who applied for a job. A guardrail metric could be the average time taken to get results back, because a good algorithm should not only return good results but also generate these results sufficiently quickly.
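To make the job-recommendation example concrete, here is a minimal sketch of computing the two success metrics and the guardrail metric from an event log. The field names and numbers are hypothetical, purely for illustration:

```python
# Hypothetical per-user outcomes after seeing recommendation results.
events = [
    {"user": "u1", "clicked": True,  "applied": True,  "latency_ms": 120},
    {"user": "u2", "clicked": True,  "applied": False, "latency_ms": 95},
    {"user": "u3", "clicked": False, "applied": False, "latency_ms": 210},
    {"user": "u4", "clicked": True,  "applied": True,  "latency_ms": 150},
]

n = len(events)
ctr = sum(e["clicked"] for e in events) / n             # success metric 1
apply_rate = sum(e["applied"] for e in events) / n      # success metric 2
avg_latency = sum(e["latency_ms"] for e in events) / n  # guardrail metric

print(f"CTR={ctr:.0%}, apply rate={apply_rate:.0%}, avg latency={avg_latency:.0f}ms")
```

In an interview it is usually enough to define the metrics verbally; the point here is that each one maps to a simple aggregation over logged user actions.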

Launch or Not


The third question category is Launch or Not. You are asked how to test a product idea or whether to launch a product / feature. Some sample questions are:

  • How would you set up an experiment to understand a feature change in Instagram stories?
  • How would you decide to launch a feature or not if engagement within a specific cohort decreased while all the rest increased?
  • If a product manager says that he wants to double the number of ads in News Feed, how would you figure out if this is a good idea or not?

This question category is, in general, more challenging than the previous two types because it requires in-depth knowledge of A/B testing. For additional reading, here is a great blog post that covers several commonly asked questions and answers on A/B testing.

Similar to the Measuring Success category, you should first clarify the goal of the product and come up with metrics to measure the success. You should then propose an experiment design for inferring causal impact, making sure to include discussion points such as:

  • Definitions of control and treatment groups (sometimes, multiple treatment groups might make sense).
  • Randomization unit (e.g. User? Visitor? If it’s a user-level experiment, what type of user? Recall that a user can play multiple roles, particularly in multi-sided online platforms.), and time of experiment assignment. Sometimes, a trigger condition should be considered to minimize dilution (e.g. assign a user only if they reach a particular page of the website).
  • Experiment run-time: Usually determined by a power calculation from historical data. The particular calculation will depend on the historical baseline, effect size that you want to measure, power, and variability of the underlying target metric. If the underlying target metric is particularly noisy, an even longer run-time may be warranted. If the underlying target metric is lagged, the experiment results may need to be revisited several weeks after initial launch, particularly for the cohorts that were assigned later on.
  • Common pitfalls and potential fixes, such as novelty effects, peeking (particularly if stakeholders are overly excited about launching a product early), multiple testing (particularly if there are many metrics or many segments of interest), potential interactions between various groups (what are alternative experiment designs?), etc.
  • Long-term monitoring: May consider holdout groups, which can enable both the measurement of long-term effects for a single experiment and the impact of combined product changes from several experiments.
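The run-time bullet above can be backed with a standard power calculation. The sketch below uses the textbook sample-size approximation for a two-proportion z-test; the baseline rate, target lift, and daily traffic are hypothetical numbers, not from any real experiment:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p_base, p_target, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # power requirement
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    delta = p_target - p_base            # minimum detectable effect
    return ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Hypothetical: 10% baseline conversion, want to detect a lift to 12%.
n = sample_size_per_group(0.10, 0.12)
days = ceil(2 * n / 1000)  # assuming ~1,000 eligible users assigned per day
print(f"{n} users per group -> run for ~{days} days")
```

Note how the required sample size grows as the effect size shrinks or the metric's variance grows, which is exactly why noisy or lagged metrics warrant longer run-times.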

In order to provide a complete answer, we always encourage you to make a launch recommendation based on the experiment results, although the recommendation could go in either direction (yes or no). Link the result to the initial goal and business impact. The ideal scenario for recommending a launch is:

  • One or more success metrics show a statistically as well as practically significant increase;
  • No change in guardrail metrics.

However, this does not happen often in practice. The interviewer is likely to ask for your approach when seeing conflicting results. For example, consider the setting of an increase in DAU coupled with an increase in bounce rate. If possible, try to tie the changes to a single business metric, such as revenue (How can a 0.1% increase in DAU translate to revenue? Is it worth it to launch the product given a potential increase in various costs?). Also, comment on the tradeoff between short term and long term impacts. Indeed, even if there is an increase in bounce rate, the product launch could potentially bring in more users to the platform and, in the long term, the benefits would outweigh the drawbacks.
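The back-of-envelope translation to revenue mentioned above can be sketched as follows. Every number here is hypothetical, chosen only to illustrate the shape of the argument:

```python
# Translating a small DAU lift into yearly revenue, then netting out costs.
# All figures are made up for illustration.
dau = 10_000_000                  # assumed current daily active users
revenue_per_dau_per_year = 2.50   # assumed average revenue per DAU per year

dau_lift = 0.001                  # the observed +0.1% DAU increase
extra_revenue = dau * dau_lift * revenue_per_dau_per_year

extra_infra_cost = 15_000         # assumed added yearly serving/infra cost
net_impact = extra_revenue - extra_infra_cost

print(f"Estimated net yearly impact: ${net_impact:,.0f}")
```

Even a rough calculation like this shows the interviewer that you can reason from metric movements to business outcomes, and it frames the launch decision as a cost–benefit tradeoff rather than a purely statistical one.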

Improving A Product

The last question category is Improving a Product. In particular, you are asked how to improve a product / feature or how to shift a business metric to the positive direction. This type of question is more open-ended than the previous types and typically requires more advanced product knowledge. The ability to identify product opportunities is often necessary for providing a good answer. A few sample questions include:

  • What would you change in the Twitter app? How would you test whether the proposed change is effective?
  • How to improve the “what’s on your mind” posting feature on Facebook?
  • How to create a business rule for reducing fraud on an online platform?

If you feel clueless about this kind of question, this video contains a detailed answer to one sample question. In general, to provide an informative and organized answer, we recommend 5 key steps:

Step 1: Clarify the goal and narrow down the scope of the improvement. If the question asks you to improve a product with a diverse set of features, it is worth clarifying which feature to focus on.

Step 2: Explain your approach to identify product opportunities and brainstorm a few ideas. There are many ways to come up with improvement ideas, and here we summarize 3 commonly used methods:

  • Reduce friction in the current user experience: Analyze the “user journey” and focus on actions that users are already performing but could be further simplified. For example, if it takes them several steps to finish the checkout process, then simplifying the flow will likely result in more customers making a purchase on the website.
  • Segment users based on their behaviors and identify key needs of distinct groups. Clearly, the needs of occasional users can be different from those of frequent users. Based on the needs of inactive users, think about ways to turn them into active users.
  • Identify variables that are correlated with the target metric. Build a machine learning model to predict the target metric and propose a follow-up action that can move the metric. For example, suppose the goal is to devise a rule to reduce fraud losses on an online platform. For users that are flagged as fraudsters, a follow-up action might be to restrict them or to prompt them for additional verification. Some key steps can include: Defining what it means for a user or action to be fraudulent (e.g. define the positive labels), coming up with features that are predictive of fraudulent behaviors, evaluating the rule or model offline (using historical data and key metrics such as precision and recall), and designing an A/B test for measuring live performance (typically against another baseline rule / model). For business cases of this type, it is also necessary to comment comprehensively on the tradeoff between fraud reduction and impact on legitimate users.
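The offline evaluation step in the fraud example above can be sketched as follows. The rule, field names, and labeled data are all hypothetical, for illustration only:

```python
# Offline evaluation of a simple fraud rule against historical labels.
transactions = [
    {"amount": 900, "new_account": True,  "is_fraud": True},
    {"amount": 120, "new_account": False, "is_fraud": False},
    {"amount": 650, "new_account": True,  "is_fraud": False},
    {"amount": 980, "new_account": True,  "is_fraud": True},
    {"amount": 300, "new_account": True,  "is_fraud": False},
    {"amount": 870, "new_account": False, "is_fraud": True},
]

def rule_flags(t):
    """Candidate rule: flag high-value transactions from new accounts."""
    return t["amount"] > 500 and t["new_account"]

tp = sum(1 for t in transactions if rule_flags(t) and t["is_fraud"])
fp = sum(1 for t in transactions if rule_flags(t) and not t["is_fraud"])
fn = sum(1 for t in transactions if not rule_flags(t) and t["is_fraud"])

precision = tp / (tp + fp)  # of flagged transactions, how many were fraud
recall = tp / (tp + fn)     # of fraudulent transactions, how many were caught
print(f"precision={precision:.2f}, recall={recall:.2f}")
```

Precision captures the impact on legitimate users (false positives), while recall captures fraud reduction; the tradeoff between the two is exactly what the business case asks you to discuss.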

Step 3: Prioritization. Given the ideas you proposed, which one would you prioritize and why?

Step 4: Define 1 or 2 success metrics to evaluate the success of the idea.

Step 5: Summarize the overall approach.

How to Prepare for a Business Case Interview?


You are already doing a good job with interview preparation by reading and absorbing the content in this post! Beyond that, we recommend 4 additional action items:

Action Item 1: Gather a large pool of sample questions and group them into different themes. You will soon find that the vast majority of the questions fall into the 4 aforementioned categories!

Action Item 2: Develop your own frameworks and answers. This can be done by reading, thinking, and communicating with fellow data scientists. In addition, we recommend a few general resources:

Action Item 3: Talk solutions out loud. When preparing answers to sample questions, it might help to prepare two versions of each answer: a long one and a short one. During phone screens, the short version may be more appropriate for delivering simple and quick insights. During onsite interviews, more time can be spent on the long answers.

Action Item 4: Research the company and understand its product. Although most companies do not require candidates to be very familiar with their products, getting to know the product leads to deeper and ultimately better conversations in the interview.

Tips to Ace the Business Case Interview

Lastly, we want to share a few tips for you to ace the business case interview:

  • (For the second time!) Always clarify the question to make sure you fully understand the high-level goal before you start answering. If the interviewer refuses to answer your clarifying questions, you could read this blog to learn how to deal with 5 different types of interviewers.
  • Interact with the interviewer. During real interviews, the most important thing is to listen to the feedback and to expand or shorten your answers accordingly. Some interviewers may not give you any suggestions or feedback. In these scenarios, you want to make sure that they completely understand your approach. If they do talk, be a good listener, take their feedback seriously, and respond promptly.
  • Prevent the interviewer from losing focus. Interviewers might lose focus during the conversation. When you explain your thought process, it is better to speak out concise bullet points (and to pause frequently with transitional sentences such as “Would you like me to clarify further?” or “Would you like me to add more detail?”) rather than to go off on long, unstructured discussions.
  • Do not follow any framework blindly. If you choose to use the frameworks that we have proposed, be sure to adapt them flexibly and creatively. Interviewers are looking for people who have real problem solving skills rather than those who only follow structured templates (oftentimes, they can immediately tell).

Thanks for Reading!

If you like this post and want to support me…

To continue reading, we recommend the following:
