Your team has implemented a change to the "Share" feature and released it for A/B testing. You notice that usage of the feature is 20%. Would you still decide to release it?
Asked at Meta (Facebook)
Answers (2)
Let's start by asking some clarifying questions:
1. What change was introduced? - Assume a change to make sharing easier.
2. Was this done for the share feature across all types of content (posts, videos, etc.)? - Assume yes.
3. What metric were we tracking as part of the A/B test? - Assume engagement.
4. How long was the A/B test? - Assume 1 month.
5. When you say 20% usage, was that 20% in the test group or overall? - Test group.
6. What was the usage for the control group? - Assume much higher (say 60%).
7. Is it OK to assume that the audience split for the test was 50/50 and that the same profile of customers was used for both groups? - Sure.
8. Are there any constraints we should know about? - Assume none.
Goal - The share feature on Facebook allows users to send content of interest to friends and others on their list so that they can exchange ideas and information, thereby growing engagement and building relationships around common interests.
This ties back well to Meta's overall mission of building community and bringing the world closer together.
Now, the A/B test shows that usage was only 20% in the test group that received the change, far short of the control group, which saw no change to the feature and had much higher usage (around 60%).
The share feature is used by every type of user on Facebook, and the user flow is similar for all of them:
Open Facebook >> read content >> create content >> read more content >> interact with content (like, comment) >> click Share >> find friends to share the content with >> press Send.
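To see where in this flow the test group is dropping off (point 1 below), here is a minimal sketch of a funnel comparison between the two arms. The step names and user counts are hypothetical stand-ins for real logged events, not actual instrumentation.

```python
# Minimal funnel drop-off comparison between the two experiment arms.
# Step names and unique-user counts are hypothetical; in practice they
# would come from logged events for each step of the share flow.
FUNNEL_STEPS = ["open_app", "view_content", "tap_share",
                "select_recipients", "press_send"]

counts = {
    "control": [100_000, 92_000, 31_000, 24_000, 22_000],
    "test":    [100_000, 91_500, 30_500, 11_000,  7_500],
}

def step_conversion(funnel_counts):
    """Fraction of users retained at each step relative to the previous one."""
    return [funnel_counts[i] / funnel_counts[i - 1]
            for i in range(1, len(funnel_counts))]

for arm, funnel_counts in counts.items():
    print(arm)
    for step, rate in zip(FUNNEL_STEPS[1:], step_conversion(funnel_counts)):
        print(f"  {step:>17}: {rate:.1%} of previous step")

# A step whose conversion collapses in "test" but not in "control"
# (here: select_recipients and press_send) is where the new workflow
# is losing users.
```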
There are a few things we can consider here:
1. Analyze the reason for the lower usage - where in the user flow, with the new changes, are we losing activity from the customer (see the funnel sketch above)? This would help us understand the low usage and, if warranted, decide NOT to release the feature.
2. Check telemetry - Are we calculating usage differently between the two groups in a way that leads to a lower number? If so, correct it, re-measure, and only then compare against the control group and make a go/no-go decision.
3. Continue running the A/B test for a few more weeks to see whether the trend is temporary or persists. If it persists, the decision is effectively made for us: do NOT ship the feature.
4. While 20% seems low, I would want to check whether the difference is statistically significant before making a go/no-go decision (a minimal sketch of such a check follows at the end of this answer).
5. I would also not look at the raw usage numbers alone, but check how this change has affected other metrics that matter to the business. If those have improved, I can make a more educated decision about releasing the feature.
Analyzing all of this together would give a good indication of where we stand and help drive the decision.
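As a rough illustration of point 4, below is a minimal sketch of such a significance check, assuming "usage" means the share of users in each arm who used the feature at least once, and assuming hypothetical arm sizes of 50,000 users each (the 20% and 60% rates come from the clarifying questions above).

```python
# Minimal two-proportion z-test sketch. Arm sizes are hypothetical;
# the 20% (test) and 60% (control) usage rates are the assumed results.
from math import sqrt, erf

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 50,000 users per arm; 20% usage in test, 60% in control.
z, p = two_proportion_ztest(10_000, 50_000, 30_000, 50_000)
print(f"z = {z:.1f}, p-value = {p:.3g}")  # p underflows to 0 at this scale
# A gap this large on samples this big is unambiguously significant, so the
# low usage is a real effect of the change, not noise -- which argues
# against shipping it as-is.
```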
C: Thanks. It would be great if I could know what the change is.
I: Assume the change is in the Share feature workflow.
C: Was any cosmetic change made? Was the placement of the Share button changed?
I: No.
C: When I think of a workflow change, I think it could be adding more steps or options, or the reverse.
I: Maybe.
C: Can I know why this change was introduced?
I: To increase usage.
C: (Restating the business goal) The Share feature on FB was changed, by modifying its workflow, in order to increase its usage.
I: Yes.
C: After A/B testing, we observed only 20% usage.
C: What is the duration of this A/B test?
I: Assume a week.
C: So after only a week, we may not be sure how people are using it.
Option 1: We should continue the A/B test for another week to reach more substantial ground (a minimal sketch of the sample size needed is included after this answer). If we see the same results, we would check the following possibilities:
a) Adding more steps to the workflow may have confused users, since they don't want to work through a more detailed "how-to". We might need to reverse the change.
b) If we shortened the workflow, users may be unsure whether their share actually went through without a confirmation step. In that case, we can revert it.
c) After one more week, we may see changes in the results that alter the approach.
d) We should also check other metrics to see whether anything else has changed, for example a drop in WAU.
Overall, I would first check the other metrics, then wait another week before reaching any substantial conclusion.
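To put "reach more substantial ground" on firmer footing, here is a minimal sketch of a standard two-proportion sample-size calculation. The baseline rate, the minimum difference worth detecting, and the significance/power defaults are all assumptions for illustration; real values would come from the actual experiment.

```python
# Minimal sample-size check for deciding whether one week of data is enough.
from statistics import NormalDist

def required_sample_per_arm(p_baseline, min_detectable_diff,
                            alpha=0.05, power=0.80):
    """Users needed per arm to detect the given absolute difference
    in proportions with a two-sided test at the given alpha and power."""
    p_alt = p_baseline + min_detectable_diff
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_baseline * (1 - p_baseline) + p_alt * (1 - p_alt)
    return int((z_alpha + z_beta) ** 2 * variance / min_detectable_diff ** 2) + 1

# Assumed: 60% baseline usage, and we care about a drop of 2 points or more.
n = required_sample_per_arm(p_baseline=0.60, min_detectable_diff=-0.02)
print(f"~{n:,} users per arm")
# If the first week already exposed far more users per arm than this,
# another week is unlikely to change the conclusion; if it exposed fewer,
# extending the test is justified.
```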