Facebook recently launched a new feed algorithm. As a result, the average session duration dropped by 20%. What would you do?
Here's how I would approach the problem:
- Understand the nature of the drop
- Confirm there's actually a drop, rather than a reporting or measurement issue
- Once we know there's indeed a drop, isolate it (the goal here is to be able to establish causality)
Nature of the drop
- Is this gradual or steep? I'm assuming gradual; if it were steep, I would match launch timelines against the metric change to see if they line up (steep drops are much easier to investigate)
- Is the drop expected? I'm assuming the feature was A/B tested before it went live to all users. Is this something we checked as part of the A/B test? We should go back and check in parallel. Also, the A/B test was presumably a success, or this feature in all likelihood wouldn't have been rolled out
- (Also check whether this change had a compliance angle, where the company chose to take a hit to stay compliant with the latest rules and regulations - assuming that's not the case here)
Confirm the drop
- Are we measuring the right metric? Time spent per session has dropped, but what about total time spent, total sessions, and total active user count - are they up or down? It's possible that net time spent, sessions, and users are up while per-session metrics are down (see the decomposition sketch after this checklist) - assuming that's not the case here
- Was there a relative surge that makes it look like a drop? For example, time spent during a lockdown would have gone up; when the lockdown relaxes, the metric returns to its usual baseline
- Is there a lag in the data?
- Are there data corruption or population issues?
- Did the metric definition change (in terms of how we've instrumented it)?
- Did we make any changes to the dashboard in terms of what we measure?
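One quick way to run that decomposition is sketched below - a minimal example assuming a hypothetical `sessions` table with one row per session (the table and column names are my own, not Facebook's actual schema). It shows whether the 20% per-session drop is offset by more users or more sessions per user:

```python
# Minimal sketch, assuming a hypothetical `sessions` DataFrame with
# columns: user_id, day, duration_sec (one row per session).
import pandas as pd

def engagement_decomposition(sessions: pd.DataFrame) -> pd.DataFrame:
    # Total time = active users x sessions per user x avg session length,
    # so a 20% per-session drop can coexist with flat or rising total time.
    daily = sessions.groupby("day").agg(
        active_users=("user_id", "nunique"),
        total_sessions=("user_id", "size"),
        total_time_sec=("duration_sec", "sum"),
    )
    daily["sessions_per_user"] = daily["total_sessions"] / daily["active_users"]
    daily["avg_session_sec"] = daily["total_time_sec"] / daily["total_sessions"]
    return daily

# Comparing the week before and after the launch shows which factor
# actually moved: users, session frequency, or session length.
```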
Assuming all of the above are good
Isolate the drop / check if it's specific to a cohort
- Internal factors
----- Is the drop specific to a device OS or platform (app vs. web)?
----- Is it impacting new users or returning users?
----- Are any other metrics hit?
----- Is it specific to a country?
----- Is it specific to an app version?
----- Are any tech metrics showing a decline? Are any error rates up?
----- How's the overall product funnel looking? Is everything normal?
----- Did any major release go out other than the algorithm change? Are any experiments live? Did someone misconfigure an experiment?
- External factors
----- Did the competition make any changes? How are the sister companies' metrics looking? Are they reporting issues?
----- Did any major OS make any changes?
----- Is there a major event that could explain the drop, e.g., a snowstorm leading to a blackout?
Assuming all of the above aren't the issue
Investigate the experiment
- Did we see the drop during the A/B test? Assuming not, or the experiment wouldn't have been rolled out to all users
- Given that 20% per session is a major drop, if sessions and users are unchanged it means a 20% loss in engagement. I would immediately start a reverse A/B test with the older algorithm, with 50% of users exposed, to see if the drop recovers. If the drop starts recovering within a day or two, I'll increase the rollout to, say, 90%, keeping 10% in control to benchmark performance longer term (a sketch of the readout follows this list)
- In all likelihood this is a deployment issue where the production release had a bug; I will get it investigated and fixed and then roll it out again
- In that rollout there will be variants: the older algorithm, the buggy release that led to the drop, and finally the fixed version. Ideally the fixed version should do better than the first two cohorts
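As a rough sketch of how the reverse A/B readout could be evaluated - assuming per-session durations have been pulled for each arm, and with all function and variable names hypothetical:

```python
import numpy as np
from scipy import stats

def compare_arms(control_durations, treatment_durations, alpha=0.05):
    # Session durations are heavily right-skewed, so use the
    # Mann-Whitney U test rather than assuming normality.
    _, p_value = stats.mannwhitneyu(
        control_durations, treatment_durations, alternative="two-sided"
    )
    mean_lift = np.mean(treatment_durations) / np.mean(control_durations) - 1
    return {"p_value": p_value, "mean_lift": mean_lift,
            "significant": p_value < alpha}

# e.g. compare_arms(new_algo_sessions, old_algo_sessions) after a day or
# two of reverse-A/B traffic; a significant positive lift for the old
# algorithm arm would support the new algorithm causing the drop.
```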
Clarification and assumptions:
The drop was observed as part of an A/B test in which we measured the new feed's performance vs. the old one
The new feed aims to provide better content recommendations and a better UI
The drop is in the US, on all platforms
Product:
Let's understand the product/feature and the value it provides to each stakeholder
Product Goal: Increase overall engagement
The News Feed is the default landing page of FB; it provides users with personalized, engaging content generated by friends or relevant sources
Viewers: feel engaged, entertained, and connected
Creators: share life stories, knowledge, etc., and feel fulfilled, appreciated, and heard
FB: gets more engagement and user data for more intelligent recommendations, which increases ad revenue
Advertisers: reach a relevant target audience optimized for conversion
Hypothesis:
Let's generate hypotheses and then look at some of the relevant metrics we need to examine to resolve the tradeoff
It's possible that the new feed provides more personalized content, so users don't have to scroll as long to find the same amount of information or the same level of engagement
The new feed might also have UI optimizations for quickly viewing photo snippets or videos, so users save time
The new feed might give more real estate to shorter content, like Stories, which takes less time to view
My take here is that although session duration might have dropped, the number of sessions may have increased, so total time spent on the platform might not have dropped much. Also, content has become shorter (like Stories) and faster to navigate (the new feed), so it takes less time to view the same amount of content. So even if total time spent sees a small drop, the amount of content viewed might have gone up
Metrics (a sketch computing these follows the list):
Total time spent (D/W)
Session length
# of sessions per user (D/W)
Total content viewed (D/W) per user
Total interactions (Like/comment/share) per user
Total content created per user
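Below is a sketch of how these metrics could be computed, assuming hypothetical `sessions` and `events` tables (the schema is my assumption, not Facebook's actual logging):

```python
import pandas as pd

INTERACTIONS = {"like", "comment", "share"}

def per_user_daily_metrics(sessions: pd.DataFrame,
                           events: pd.DataFrame) -> pd.DataFrame:
    # sessions: user_id, day, session_id, duration_sec
    # events:   user_id, day, event_type in {view, like, comment, share, create}
    time_metrics = sessions.groupby(["user_id", "day"]).agg(
        total_time_sec=("duration_sec", "sum"),
        n_sessions=("session_id", "nunique"),
        avg_session_sec=("duration_sec", "mean"),
    )

    def count(mask, name):
        # Per-user, per-day event counts for one slice of the event log.
        return events[mask].groupby(["user_id", "day"]).size().rename(name)

    return time_metrics.join([
        count(events.event_type == "view", "content_viewed"),
        count(events.event_type.isin(INTERACTIONS), "interactions"),
        count(events.event_type == "create", "content_created"),
    ]).fillna(0)
```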
Resolve the tradeoff:
The new feed might decrease session length and total time spent slightly (say, less than 2%, which is acceptable for a guardrail metric). But I will specifically look for any decrease in content viewed, interactions, or content creation. If any of these metrics decreases, the new feed isn't achieving the product or feature goal, since it is decreasing overall engagement, and it might have the downstream effect of harming the ecosystem.
On the other hand, if the new feed decreases session length and total time spent (within limits) while increasing all the other metrics, then it is a good choice and should be launched.
So there has been a 20% drop in average session duration as soon as we launched a new feed algorithm. In such an incident, I would first of all check whether there actually is a drop, or whether it's just a reporting or measurement issue.
Assuming there wasn't any reporting or measurement issue, next I would think about why this was launched, what goal we were trying to achieve, and finally gather some information on the experiment.
Assume the goal was to help users find more meaningful content to interact with, and thus improve engagement and retention metrics like average time spent and DAU/MAU. Let me also assume this experiment was launched in America initially and the drop pertains to this region only.
Next, I would like to understand more about the drop through the following questions:
Was the drop gradual or sudden?
Was the drop present across device types (mobile, desktop), or only on a particular type?
Finally, I would also check how related metrics are doing, like # of sessions per day and overall average time spent. If those metrics are going up, then what we are facing may not be a drop at all, so I would want to check this as well.
Assuming the following answers to the above questions:
The drop was gradual.
The drop is common across devices.
All the related metrics have gone down as well.
Next, I would split the probable causes into internal and external ones and then dive deep into the issues depending on the situation.
Since the change has been gradual, I would place my bet on external reasons like bad PR, government regulations, and changes in user behaviour.
Of these, I would jump directly to a change in user behaviour, as the experiment directly affects how users behave.
To understand the behaviour change, I would list the user journey one undergoes:
- The user has a need, and thus opens FB
- Lands on the News Feed page
- Scrolls down and interacts with different posts and ads in the feed
- After spending some time, leaves the app
Given this journey, I would check how the following metrics are doing:
Open rate of the application
# of posts interacted with by the 90th-percentile user before bouncing off
# of posts interacted with by the 90th-percentile user before seeing the "You've caught up with everything" message
Assuming the 1st metric has gone up while the rest have gone down significantly, it could mean the algorithm is over-optimizing content for relevance based on a user's previous actions, so the number of posts a user sees is getting reduced. This would also mean users are rarely being shown new genres to try. Once these shrink, users naturally bounce off because there is nothing left to interact with.
To confirm this hypothesis, I would check whether the 2nd and 3rd metrics are roughly equal to each other. Assuming yes, we can conclude that the reason for the drop is that the new algorithm over-optimizes for relevance, so users get fewer and fewer posts to interact with (a toy version of this check is sketched below).
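A toy version of that equality check, assuming hypothetical per-user arrays of post counts (all names are mine):

```python
import numpy as np

def over_optimization_check(posts_before_bounce, posts_before_caught_up,
                            pct=90, rel_tol=0.10):
    # Compare the 90th-percentile post counts from the two funnels.
    p_bounce = np.percentile(posts_before_bounce, pct)
    p_caught_up = np.percentile(posts_before_caught_up, pct)
    # If users hit "caught up" at roughly the same point they bounce,
    # they are exhausting the ranked feed before losing interest, which
    # supports the over-optimization hypothesis.
    return p_bounce, p_caught_up, np.isclose(p_bounce, p_caught_up,
                                             rtol=rel_tol)
```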
Before finalizing what to do, I would reiterate what FB's goal has always been: to help users build meaningful communities and bring the world closer together.
This can be done only when users are shown things related to their interests as well as new things, so that we learn more about users and can create more accurate and personalized ads.
For the above reasons, I would do the following:
Work with the cross-functional team to reduce the score that is causing the over-optimization.
Next, implement the changes and watch how the metrics respond through A/B tests.
To summarize: I would check whether the issue is actually an issue, then understand the experiment and the goal behind it, and finally understand the drop. With all that information, I would dive deep into the probable causes, check the metrics, and come up with a solution.