You launched a new signup flow to encourage new users to add more profile information. A/B test results indicate that the % of people that added more information increased by 8%. However, 7 day retention decreased by 2%. What do you do?
Assumption: The 2% decrease in retention applies to the users who went through the new sign-up flow.
What value does the added information hold for us? Why are we collecting more information - engagement? More acquisition, revenue, retention?
Is this observed for a particular geography or segment of users?
What have we observed for people on the other side of the A/B test (the control group)? Have they shown any similar increase/decrease?
We would like to compare how long it takes to onboard the user/complete the sign-up process now versus before.
We would also like to understand whether any of the extra information the user is asked for is causing inconvenience, and whether there has been any downtime in the new flow. This can be investigated through tools like Microsoft Clarity.
Are there support queries captured at this stage of the process? Has there been an increase in them?
Are the APIs failing intermittently?
- For OTP
- For any other information that is fetched
Are the UI/UX changes too overwhelming for the user, causing disruption in the flow of the journey? For example:
- An action button is not visible
- the process is tedious
- the options are confusing
- too many steps
- any personal information which is made mandatory
We can possibly add visuals to make it more interesting.
Add an auto-fill option for information the user fills in frequently.
We can also offer the user some benefit for signing up, so that there is satisfaction attached to completing it. It could be one of the following:
- Acknowledgement
- Referral option
- Signing up points or product related benefits
This would encourage the user to complete the signing up process.
- We can also take a 2-step approach: some information is provided at sign-up only, and the rest is collected before the user takes another action - say a purchase, an engagement, or a scroll (here we can put a timer so that after so many minutes/days of exploration, the user must add certain information to proceed). This way the user can have a peek into your offering, see what they are missing out on, and that shall motivate them to provide more information.
Are the metrics giving meaningful insights? Are the metrics linked? That is, are we seeing retention decrease among users who added info, or are these metrics disjoint?
7-day retention metric in question: is it measured only after the new feature, or does it include both time periods? Say the feature is rolled out on Wednesday and people used the product until Tuesday but dropped on Wednesday due to the extra step. This is important because it raises the possibility that the majority of these dropped users are fake users or bots who are unable or unwilling to complete this step. In that case the metric doesn't give any meaningful insight - it could be a bonus in disguise, as you got rid of fake users.
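To check whether the two metrics are actually linked, here is a minimal sketch, assuming we can export one row per new signup with a variant flag, whether extra info was added, and whether the user returned within 7 days (all column names are hypothetical):

```python
# Minimal sketch (hypothetical column names): check whether the retention drop
# is concentrated in users who actually went through the new signup flow.
import pandas as pd

# Assumed export: one row per new signup during the test window.
users = pd.DataFrame({
    "user_id":        [1, 2, 3, 4, 5, 6],
    "variant":        ["new_flow", "new_flow", "new_flow", "control", "control", "control"],
    "added_info":     [True, True, False, False, False, True],
    "returned_in_7d": [False, True, True, True, True, False],
})

# 7-day retention split by variant and by whether extra info was added.
retention = (
    users.groupby(["variant", "added_info"])["returned_in_7d"]
         .mean()
         .rename("d7_retention")
)
print(retention)
# If retention is low only for new_flow users who added info, the two metrics
# are linked; if the drop appears in both variants, look for external causes.
```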
Few clarifying questions:
- Is it the app or a website? (Answer: It is a web app)
- What type of app or website is it? (Answer: It's a social media web app)
- Why did we launch an A/B test? What was the objective for this? (Answer: We wanted to personalize offerings by asking users to provide more information as they sign-up. The overall objective was to improve engagement & retention.)
- What do we get out of this 8% lift in the no. of users, what was the original estimate? (Answer: The original estimate was ~8% hence we are observing a significant and expected lift here.)
- Was a 2% drop in D7 retention expected? (Answer: No, this was not expected.)
- Are the results significant? (Answer: Yes)
- Talk about the user cohorts here
- Was it across the users or a particular set of users? (Answer: All users)
- Was it across geographies or restricted to some geography? (Answer: All geographies)
- Finally, do the timelines of an 8% increase and a 2% drop coincide? (Answer: Yes, the impact of this new A/B test was a drop in retention by 2%.)
- What does a 2% drop signify? Is it acceptable?
- Answer: Since it was not expected, the 2% drop needs investigation
- Why are these users not retaining or at which step these users have dropped for good?
- Was it during signup, to get additional information?
- Was it after signup, when users added the information?
- If it was during signup, then it means every time we ask users additional information they are more likely to not retain
If it's post signup, then it would mean the additional data added by the user is actually reducing the personalization aspect for users, and hence users are not being retained. Here I'd also like to work a bit on the personalization algorithm, to understand what's going on behind the scenes and how it can be improved.
- Adding information causes users to drop off at the same instance - This has to be validated
- If adding more information is hampering the personalization experience, then the algorithm has to be reworked for the new users.
Assumptions:
- A/B test was conducted on new signup flow vs control (Existing flow)
- new flow has additional steps where profile information is collected
- No specific target user groups - eg. Pages or Businesses. The new flow is for regular users
- Assume this signup flow is on FB/a social media app
- Assume there is nothing internal/external affecting the metrics.
- Were other metrics affected? Answer - sign-ups also went down.
- CLARIFY:
- Are the users who are dropping off the same users completing the additional information? You choose.
- Is there anything unique about the users dropping off (ex. they're all on mobile devices, same age range, etc.)? You choose.
- Are there any other changes to the product aside from the application flow? No.
- Are there any changes to the way the metrics were originally measured v. now? No.
- Are all of these users - i.e. the people adding more information and dropping off - fully completing the application? You choose. In this case, I assume that the people abandoning the product have not completed the application (i.e. creating an account and then deleting it). (Interviewer could confirm if such assumption is OK or if it should be a factor to explore.)
- AB TEST BACKGROUND: A product ran an AB test that gave a Control group the application flow as normal and the Experimental group an increased number of questions that allowed the users to provide more information. In the experimental group, 8% of people added additional information, but overall retention is down. (Interviewer confirms description is correct.)
- USERS: There are 4 main user groups to consider. I am most interested in the Experimental Group that leaves the product after 7 days.
- Control:
- Stays on Product after 7 days
- Leaves Product after 7 days
- Experimental:
- Stays on Product after 7 days
- Leaves Product after 7 days
- HYPOTHESIS: Given that the Experimental group experienced a longer application, my hypothesis is that the longer application caused a drop in retention rate. Users are abandoning the longer applications / not finishing them because of its extended length, although they are initially filling in more information on the application than the Control.
- POSSIBLE REASONS FOR DROP IN RETENTION RATE:
- User Fatigue: The user got tired of the long application and abandoned it after 7 days (i.e. never logged back in to finish it)
- Data / Privacy Concerns: The user became wary of the Product, wondered why they had to provide so much information, and abandoned the application due to privacy concerns.
- Time: User felt they did not have enough time to complete the application.
- ACTION ITEMS: There are three action items we could explore.
- Trends Over Time: I'd like to continue measuring the Control v. Experimental groups over a longer time period to see if we see a continued higher drop in retention rate in the Experimental Group. I'd measure every 7 days (7 days, 14 days, 21 days and 28 days).
- Prospect Survey: I'd survey prospects of the Product and ask them how willing would they be to fill out a complete application with the Control application # of questions v. Experimental application # of questions and ask them why they answered the way they did.
- Drops Survey: I'd survey prospects who dropped from the application and ask them why they dropped. I'd be sure to classify them as in the Control Group v. Experimental Group.
- PRIORITIZING ACTION ITEMS:
Action Item - Impact to Hypothesis - Cost:
- Trends Over Time: Impact High (allows us to measure continued behavior of people in the experiment); Cost Low (analytics already seem to be in place).
- Prospect Survey: Impact Medium (users may not respond as they would actually behave; does not directly target the people using the application); Cost Low (not costly but may take time to gather sufficient responses).
- Drops Survey: Impact High (would provide direct insight into why someone is leaving); Cost Low (not costly but may take time to gather sufficient responses).
- SUMMARY: Given prioritization, I'd start by measuring the Trends Over Time, as it is the easiest to implement and will allow us direct insight into the group behaviors. If possible, I'd also survey the users who have dropped to confirm my hypothesis.
Statement
You launched a new web application signup flow to encourage new users to add more profile information. A/B test results indicate that the number of people that added additional profile information increased by 8%. However, 7-day retention decreased by 2%. What do you do?
Clarify the problem
This is an existing product? Yes
The only change that was made to the app was the registration form? Yes
The form is now longer? Takes more time to complete? Not sure
Did the overall % of users that completed the form change? You decide
Using A/B test increase in # users with more information in profile
Drop in retention WoW
Objective indicators
This is not something that has occurred gradually over time - it was sudden
This is not limited to a specific region - country / language / culture
Not limited to a specific platform - desktop, mobile, tablet (iOS, etc.)
This drop is not something we have seen in other cohorts ("older" registrants are still churning like before)
List of reasons
Seasonality
Alternative product
Competition
Other features in the same property (cannibalization)
Other changes happened in the product
The registration form is encouraging some users to add more information and thereby discouraging other users that do not want to supply this additional information. The outcome is that you are getting a different "mix" of users registering, and not necessarily the same types of users that provide more information.
This would not have been a problem were it not for the outcome of higher churn, which means that your quality users were discouraged by the registration process.
Sort through the reasons
Reasons 1-3 are not very likely (or, better said, I do not assume them to be applicable)
Reason 4 is what I think is the probable cause.
How would we test that?
A careful analysis of the data should answer the question if the overall % of completes has increased or decreased. If not then we need to rethink this list of potential causes.
Launch a form with fewer mandatory fields and see if your mix changes.
1. Q1: What is the main business model of the App? At a broad level, is this a subscription-based App or an Ad-funded business?
2. Q2: Is there an LTV model built for the App which takes into account (1) the value of increased user information and (2) 7-day retention?
3. Q3: What was the original goal/hypothesis of the A/B test? Was it to increase the info per user, and how was that quantified in terms of business impact?
If it is an Ad-funded business, increased user information could result in increased CPC/CPMs, and that needs to be quantified and compared to the reduction in engaged users.
Also, the 7-day retention loss of 2% should be segmented to see whether the users being lost are from valuable segments. In the early phase of a Product that would be a more important factor than any short-term gains from increased Ad revenue.
On the other hand, if this is a mature Product past Product-Market fit, i.e. the users being retained are the valuable ones, then the increased info for those users would actually be more beneficial.
Hi,
First of all, I want to make certain assumptions or inferences here:
1. The % of users who filled in additional information increased by 8%, which means that providing additional information was optional.
2. The overall decrease in retention includes people with both the variations (control as well as variation version)
Now, I want to analyze further which user segment's retention rate is affected and suggest a strategy to address the problem -
1. People who did not provide additional information and bounced off Landing Page or in subsequent usage -
I would compare these users' retention rates between the control version and the variation version. Ideally they should be the same. If not, then we should look at the user profiles in each variation to find the differences.
2. People who provided additional information and then bounced off Landing Page or in subsequent usage -
If retention number is lower here, then we need to focus on providing relevant customized offerings basis additional information and check retention rate in subsequent 7-day period.
3. People who signed up but bounced off additional information page and never made it to the landing page -
If the % of such users is high enough to reduce the overall retention rate, then we should look at alternative placement of the additional information or its content (this can be gathered by analyzing which page and field had the maximum bounce-off rate). Maybe we can ask the user to provide information on their 2nd or 3rd usage, etc., as we must give the user a chance to use our application before exiting it.
You launched a new signup flow to encourage new users to add more profile information. A/B test results indicate that % of people that add addtl. profile information increased by 8%. However, 7 day retention decreased by 2%. What do you do?
Let's start from the WHY behind the change - likely you were implementing this change to improve retention; let's proceed with the assumption that this is a social app where profiles play a critical role.
An increase of 8% for a sign-up flow is a significant increase in the number of people completing profile information - I would ask how this was implemented because, from experience, I know that every time we add a step it introduces a 3-4% drop-off.
My assumption here is that this is an added pop up screen during sign up which is causing a drop off in total number of users completing sign up successfully since they drop off on the profile info screen.
*Interviewer nods yes*
Also I would recommend changing the way we measure success of the A/B test, and let me tell you why - consider the following scenarios:
In case A, for every 100 people starting the flow, 50 ended up signing up, of which 20 completed profile information.
In case B (the winning variant), for every 100 people starting the flow, 48 ended up signing up, of which 22 completed their profile.
So the blocking screen is causing an overall drop-off which reduces D7 while increasing the number of people completing their profile.
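To make that concrete, a small sketch using the illustrative numbers above (nothing here is real data) shows how absolute profile completions can rise while the signup rate, and therefore downstream D7, falls:

```python
# Hypothetical funnel numbers from the two cases above: the absolute count of
# completed profiles can rise even while successful signups fall.
def funnel(visitors, signups, profiles_completed):
    return {
        "signup_rate": signups / visitors,
        "profile_rate_of_signups": profiles_completed / signups,
        "profile_rate_of_visitors": profiles_completed / visitors,
    }

case_a = funnel(visitors=100, signups=50, profiles_completed=20)   # control
case_b = funnel(visitors=100, signups=48, profiles_completed=22)   # winning variant

for name, case in [("A (control)", case_a), ("B (variant)", case_b)]:
    print(name, {k: round(v, 2) for k, v in case.items()})
# Case B "wins" on profile completion (22 > 20) while losing signups (48 < 50),
# which is exactly the pattern that can depress D7 retention downstream.
```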
I would re-configure the experiment hypothesis to "users with better profile information have higher retention than users who don't - how can I increase the number of users filling in their profile information on D0 (primary metric) while increasing/not affecting the successful sign-up rate (secondary metric that doubles as a kill metric)?"
If my goal is to increase profile information from new sign-ups, I would focus on passive methods (push notifications, in-app pop-ups, incentives) AFTER users successfully sign up, so as to negate this drop-off.
If my larger business goal is to increase retention, I would reduce the steps of sign-up to increase successful signups and focus on passive methods (push notifications, in-app pop-ups, incentives) AFTER users successfully sign up, so as to increase the total number of users successfully signing up and help improve D7.
Asking clarifying questions is critical to properly answering this Google problem solving question:
- What was the new sign-up flow and how is it different from the old sign-up flow? My assumption is the new sign-up flow asks for more user information, thus the increase in the % of profile information.
- What was our goal? Was it to increase profile information? What was the acceptable counter metric decrease? Is the result within range?
- Increase in profile information - the more data we have, the better our recommendation and personalization system and the network becomes more valuable to the users. Cons, depending on the types of information we ask and require of the user, the user may have a different level of comfort and privacy concerns
- Decrease in 7 day retention - if a user does not come back in 7 days, this is not great for the platform, however, I want to also consider whether 14 day or monthly retention has decreased, perhaps the new user no longer needs to come back on 7th day and add a profile pic or other information.
Assuming no other change was implemented that could have affected these metrics, let's start by breaking down the question. The first piece of information is that additional profile information increased by 8%, which is desirable. We do not have information about what exactly we are asking the user to provide, so let's assume that the additional profile information consists of the user's interests and experiences. The second piece of information is that 7-day retention decreased by 2% because of this change.
The objective of getting more profile information from the user is to find better content and connections related to the user's interests and experiences. If we are not able to use the additional user info to find more relevant and engaging content for the user, then this effort is a waste. So, our primary focus should be to make the best use of this info to improve the experience for the user. This would have an effect on the churn rate too and generate positive word-of-mouth marketing for the product.
Now let's try to understand the possible reasons for the decrease in user retention. One thought that comes to mind is that we may have made the process so elaborate and complicated that users did not even complete it and chose to leave the app. But since the retention metric tracks the user after they have completed the signup process, we can rule out this reason. We ought to be more specific about what kind of information we are asking from the user. For privacy reasons, many users will not be comfortable sharing more personal info. Make sure that users are satisfied with your product's security and privacy policies and are convinced that they have full control over how your product uses their data.
In summary, we need to do the following to improve retention -
- Create a more engaging experience for the user by finding and displaying relevant content based on user's interests.
- Communicate your product's compliance and privacy standards and make sure they trust you with their data.
I will analyze the situation in the following way:
- 7-day retention → the percentage of users who come back to the app/product within 7 days after the time of their first session.
- The hypothesis of the A/B test is that, with the signup flow change, the number of users adding more profile information during signup should increase without any significant effect on other health metrics → Yes
- What were the exact A/B test changes? → I am assuming the A/B test change mainly asks for more user information in a more user-friendly manner.
- Before the start of the A/B test, did the team agree on any accepted % change in the 7-day retention? → If the accepted % change was around 5%, then the team may be satisfied with the A/B test results and with the new change. If the accepted % change was around 0%-1%, the team would need to analyze the situation.
- Similarly, before the start of the A/B test, did the team agree on any accepted % change in new users adding more profile information? → If the accepted % change was around 15%-20%, then the team would not be satisfied with the A/B test results and would not implement the new change. If the accepted % change was around 5%-10%, the team would consider analyzing the situation further before implementing the change.
- What was the A/B test run-time window? And the metric value changes are for what time period? → It should not happen that the A/B test was supposed to run for 1 month and the team is already analyzing the results just 2 weeks in, or that the A/B test was run for more than 1 month.
- Is this change specific to any platform (Android vs iOS), mobile vs web, any region, any specific user segment or any specific version?
- I am assuming none of the other product features got upgraded during the time-window that the A/B test was run.
- What type of profile information is being added by the new users during signup? → If such information is not useful, then the signup flow change is not really required. If the information is useful and later used by the product's underlying models, then the signup flow change seems important.
- Is there a similar proportionate decrease in the 14-day or 1-month retention rates? → If there is no such decrease, we can go ahead with the implementation of the new signup flow change; else we need to analyze further.
- Does this 7-day retention decrease include folks who didn't meaningfully engage with the product but were mere spectators? → Losing high-value folks would matter much more than losing low-value users.
- Is there a change in the number of people signing up and actually getting activated to be considered for 7-day retention? → If more people are getting activated but the same/higher number of people are returning to the product after 7 days, there will obviously be a decrease in the retention rate, which is not worrisome.
- Also, I would like to look at the absolute values to determine the exact impact.
I would like to analyze the old and new signup flow changes and what actions did the user undertake after coming back to the product after 7 days.
It is possible that earlier the users did not provide all the information so they received 1 or 2 notification emails to complete their profile and hence they logged into the product within the 1st 7 days.
Now, with signup flow change, since the users are providing extra information at the signup stage itself, they are no longer getting those notification emails and are not required to log into the product within 7 days of signing up.
Clarifying questions:
What's the product we are offering - Assume this is for Facebook login
Use case of asking for more profile information - curb fake accounts
Since when do we see the change - as soon as the change was made
sudden/gradual - happened right after the change
Sensitive to geography - no
Sensitive to platform - web/app - no
Do we also see any change to avg session length? - no
7 day retention is basically the number of users who are coming back to the app within 7 days of last usage.
The approach would be to review the scenario against the hypotheses below.
Hypotheses:
Fake users couldn’t provide additional info and hence they dropped off:
Next Step - Sample-check their FB usage pattern, the kind of posts made, and whether they respond to 1:1 chats. If the hypothesis is validated, use the learning to remove fake profiles via this method as well, instead of only nudging users to add extra profile information.
Whether login was affected after addition of profile page info
Next Step - Get it QAed and resolve issues if found
Change in how data is now being measured
Next Step - understand the steps and fix
I will first start with Clarifying Questions from the interviewer:-
- What type of service/product is it? Is this transactional, such as Tinder or Cars24, or non-transactional, such as FB, Netflix, or YouTube? This is important because, let's say, if users have filled out all the information on Cars24 about the used car they are looking to sell and operational efficiencies are built into the system, we might have reduced our TAT to list cars from, say, an initial 10 days to 5 days.
- I assume A/B Test results are now live across the entire platform and retention drop is uniform across new users and all regions?
- What do you mean by retention? App launch retention or Doing any core activity retention
- Behavior on day of SignUp:-
- %age of users performing any core activity such as sending a friend request or joining any group or posting on the wall
- Avg. engagement time per user post signup completion
- Is only D7 retention taking a dip, or are D1 and D3 retention also showing a drop? This can indicate whether engagement post signup is happening or not.
- How is profile information being used by the platform?
- Content Recommendation Logics
- No of Ads shown to user per unit of time they are on platform
- Any update being sent to new users
- Any new feature release for new users
- Crashlytics Report
Clarifying Questions -
What was the new sign up flow? How is it different from the old sign up? (Asking for additional information including location, age, and interests)
What was the goal of the sign up flow change? Was it to increase engagement? (Yes)
Re- Stating the question with new data
With the goal of increasing engagement we launched a new signup flow that encouraged users to add more profile info related to their location, age and interests. Our data shows that 8% added more info, but our retention has decreased by 2% shortly after.
My Assumptions
This is a social platform where users get to interact with people
Historically retention has been healthy
With the new sign up flow, nothing in the UI has changed.
Two hypotheses
We leveraged the add'l profile info to feed into our algos and serve better-curated content. Unfortunately the ML models aren't working correctly and we are serving the wrong content, thus making users churn.
The users who do go through the new flow, realize they are uncomfortable with the level of information that they provided. Thus churning from the service.
My approach
I’m leaning towards hypothesis 1 (broken ML, serving no good content). Hypothesis 2 states the churn is caused by an uncomfortable amount of info required. I’m more inclined to believe that if a user is uncomfortable with the level of info they are providing they most likely won’t complete the flow.
In the case of hypothesis 1, I'd like to test the new flow myself with the team to quickly see whether the ML is serving us content that we actually like, or content not relevant to our interests. If our quick test confirms this, the next step I'd like to take is to retract the new user flow to stop the bleeding.
Then I'd run an internal dogfooding session to understand where the issue lies and how we can improve the algo.
Alternatives.
If hypothesis 1 is incorrect, and we determine that the algo is working fine, I’d move on to Hypothesis number 2.
If hypothesis number 2 is incorrect, I'd like to explore external factors that may be causing this decrease in 7-day retention. It could be that users are fine with the new flow, and maybe a new platform has been released that is grabbing our users' attention.
Clarification :
- Sign-up flow: What changes were made to sign-up? Assuming we started asking for more information, does that mean users gave more information but did not complete the sign-up flow? We might be collecting information in steps.
- What are we doing with this new information - are we showing them recommendations or sending them more notifications?
- 7-day retention: How do we define retention? This means that if I come today and come again within 7 days, I would call it a 7-day retained user.
- Check the data the users provide - whether it is correct or not. Maybe auto-suggest or auto-fill is giving wrong data.
- Check the user funnel data of retention before and after.
- Check all the sources of users coming back and check for any issues with emailers/notifications.
- Seasonality: Has this retention dropped on any particular day?
- Check 1-day, 2-day, 3-day... up to 7-day retention data.
- Check competition data for any particular day with less traffic.
First off, I would begin with clarifying questions:
- What was the new sign up flow, and how does it differ from the old sign up flow?
- How is 7-day retention defined?
- What does the user journey look like after the user has completed the creation of their account? Are there critical actions that the user has to take within the 7-day period before the changes to the sign up flow has been implemented?
- What was the ultimate business goal we were trying to move when we had implemented this A/B test?
- This is for a social app where profile information is critical to building the right experience for the user - be it, marketplace, social dating platforms
- 7-day retention is defined from the moment that the user has complete profile creation.
Approach:
My assumption for this question is that the activities we prompt the user to engage in right after they have created their account (e.g. adding a profile picture, adding interests) are activities they actually do, and that these have an impact on the 7-day retention rate.
However, with the new sign-up flow, users have the option to add this information during sign-up itself, and this inevitably led to the decrease in the 7-day retention rate.
To ensure that we are not over focused on one metric, I'd recommend to look at 14-day and 30-day retention to see if there is an impact on these numbers as well.
I would also take a longer-term view and look at how long it typically takes for the user to experience the product's value proposition, and understand whether this group of users is able to experience it quicker (due to the streamlined sign-up flow), has a better retention rate, and affects the business bottom line in a positive manner.
Clarify:
Assume additional info required for FB sign up
Goal for new sign up: To increase richness of profile which helps increasing engagement (# of interactions with other users) and increase personalization of content
As extra info was asked explicitly the % of people adding more info went up
Goals:
The mission of FB is building stronger communities and the reason we added a new flow was to collect more information so as to facilitate meaningful interactions and engagement with content.
The goals of the new feature align with the overall mission
Metrics:
Let's think of some metrics we might be trying to measure during the experiment:
Goal metric: % of new users adding extra information (it increased, which is a good thing)
Guardrail metrics:
Retention (weekly/monthly)
Daily time spent on FB for new users
# of content they interacted with (like/comment/share)
# of posts they created weekly
# of friends added
Hypothesis:
Hypothesis 1: We are asking for information that users perceive as private and hence they are conscious about using the app after signing up.
To validate this hypothesis I will look at weekly and monthly retention, if both are lower it signifies a drop in users coming back to use the platform.
I will also look for other guardrail metrics like Daily time spent, # of content interacted with, # of post created or # of friends added. If these metrics also show a drop, it confirms our hypothesis that users are perceiving the new info as private and the platform as overall not safe.
In this case we need to dig deeper to see what extra info we are asking for and whether the benefits of having it outweigh the cost. Can we also add a short note for the users explaining why we ask for this information, or make it optional?
Hypothesis 2: Users were previously notified to add additional info so they were logging back within 7 days. They aren’t notified in the new flow
To validate this I will check for monthly retention to see if there is a drop in the long term. If our hypothesis is true there should be no drop in monthly retention
Additionally I would check for other guardrail metrics, if they remain the same then that confirms our hypothesis
The drop in retention is temporary in this case so we don’t need to do anything
Clarification
I think we need a better understanding of just what is going on before we can answer this properly.
- What were the changes we made to the sign up flow? Are certain fields now required, or are the input boxes placed more prominently?
- What are we trying to accomplish by capturing more information during the sign up flow? Increased activation and retention as users have more complete profiles to start with?
- How are we measuring retention exactly? Is there some usage threshold involved or is it simply anyone with a single session?
- Assuming the decrease in retention was only observable in the group with the new sign up flow changes.
- How long have we been observing this increase in information and decrease in engagement? Was this a sudden observation or something we noticed after looking at the data over a long time span? While 2% is pretty small, I'm going to assume it is statistically significant.
- Are these observations uniform across geographies or are there certain countries / cities where it is more or less noticeable?
- Is this uniform across all user segments and use cases? Maybe a particular demographic is impacted more or less?
- Does this vary by platform, mobile vs desktop?
- One would assume that more engagement would lead to more retention. Does a more complete user profile actually lead to more engaged users? We could look at the average time spent in the product per week or average number of sessions or examine specific actions.
- If our assumption was that more complete profiles leads to more engaged users we should've confirmed that assumption with the existing users before going down this route.
- Is there anything characteristically different in the users we are retaining that have added the additional information? Maybe they are from a particular high value user segment or engage with the product more as discussed above. Quality over quantity.
- I don't know what the additional information we are capturing is but we should examine how valuable that information is to us. Could we use the new profile pictures to aid our facial recognition research? Could the demographic information improve our ad targeting and corresponding CPM rates?
- Is weekly retention the appropriate time period for us to be looking at? What does the increase or decrease look like for monthly retention?
- Can we use this additional information to increase the level of personalization in the bridgebacks to our product? "Hey Joe, check out these other users also from the Bay Area!"
- Did we have some sort of email campaign or notification scheme that encouraged users to come back and add more profile information? This may have gotten turned off if users already filled out their profile despite them still benefiting from a CTA to come back to the product.
Clarify scope:
What type of product are we referring to? Is it gmail? Or youtube? Or google account?
Things to figure out..
Internally
Among the customers who are not coming back within 7 days, how many were from the test population? (We need to figure out whether the decrease in 7-day retention is related to the A/B test or not.)
Were all the onboarding emails working properly? If we historically send users a welcome email after they sign up but that email was not working properly, that could be a factor in decreased 7-day retention
Did we remove any features recently? Were any features that were part of the reason users signed up got removed?
Has there been any outages during this decrease period? Did anything happen to result in poor customer experience and low confidence in our product?
Externally
Time of the year? What time range was the 2% decrease? Does historical data show similar fluctuations? How does that compare to last year’s data? Example, if we’re talking about a product such as gmail, and 7 day retention is decreasing for users who sign up right before christmas break, the fluctuation could be normal since users might be on holidays and no one is checking their email.
Any new competitor releases that may have impacted our retention rates?
Was there a specific segment of customers that contributed more to the 2% decrease? For example, if we saw a 50% decrease among mobile users, we should look into issues specific to mobile. Were the app stores working? Were customers simply unable to log into the app, and therefore retention decreased?
I will also work with stakeholder teams such as customer support, marketing, and CRM to identify potential reasons. Were there increased customer complaints about something? Did we pause any marketing campaigns or CRM efforts? Among the users who signed up, which campaigns did they come from? Were customers coming in due to incentives that we gave, and are we therefore attracting lower-quality customers who were never going to "stick" with us?
Assuming the drop is due to new sign-up flow,
Case 1:
I'll check if there is a drop in new sign-ups due to the added steps in the journey. If that is the case, I'll review whether the drop justifies the objective of collecting additional information.
If the drop is not justified, I'd explore to optimize the new journey with a change in UX such that the information can be collected post successful sign-up and re-run the experiment till we find optimal balance in both the metrics.
Case 2:
Sign-up rate remains same but users are uninstalling at a later stage. In this case, I'll check if we're using the additional information to target the users for sale of product/services which is causing them to leave the platform. If that is the case, I'll review if the drop justifies the increase in the transactions. If yes, I'd gradually roll-out the new sign-up flow to 100% user base.
Clarifying question:
Candidate: What is the business goal? Is it to collect additional information that is critical to providing great service to customers as well as serving ads to generate revenue?
Interviewer: Yes
Candidate: I assume that users who provide more information tend to have higher engagement, and hence Google tends to generate more ad revenue from them.
So we can measure the long-term value for the cohort that adds more details versus the baseline cohort. Then we can base the decision on whether to keep the feature on an analysis of whether the incremental revenue generated from the group that adds more information offsets the loss from the users who attrite.
Deep dive analysis: We need to measure the long-term value we generate from a user who added the extra information vs. the long-term value we generate from a baseline user.
For example, suppose we generate $15 in revenue each year from users who submit the additional information and $10 each from baseline users. That translates to $5 in incremental revenue from the 8% of customers who add info, and a loss of $10 in revenue for each of the 2% of users who churn.
For X total users, the long-term revenue difference = $5*0.08*X - $10*0.02*X
= $0.4X - $0.2X
= $0.2X
In this case, we recommend continuing the feature. On the other hand, if the incremental revenue is not high enough to offset the loss, we recommend discontinuing it.
Conclusion: We can base our recommendation off two factors a) incremental revenue we generate through increased engagement from the cohort (8% of population) that provide more information and b) revenue that we lose from the cohort that provide less information (2% of population). If the net is positive, we recommend to launch the feature. Otherwise, we recommend not to launch the feature.
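A rough sketch of this tradeoff math, where every dollar figure and the helper function are illustrative assumptions rather than real numbers:

```python
# Rough sketch of the tradeoff described above (all figures are assumptions).
def net_revenue_impact(total_users,
                       uplift_share=0.08,      # share of users adding more info
                       churn_share=0.02,       # share of users lost
                       incremental_rev=5.0,    # extra $/yr per enriched user ($15 - $10)
                       baseline_rev=10.0):     # $/yr lost per churned user
    gain = incremental_rev * uplift_share * total_users
    loss = baseline_rev * churn_share * total_users
    return gain - loss

# Positive result => the incremental revenue offsets the retention loss.
print(net_revenue_impact(total_users=1_000_000))
```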
Clarifying questions:
- What was the objective of the feature that resulted in an increase in profile information added?
- Was a reduction in user retention expected?
- What was the specific change in the sign up flow? I'm assuming that the change was made to the flow post sign up so that it didn't affect the percentage of visitors signing up?
Here is how I would approach this problem:
Ask clarifying questions:
1. When we say the % of people that added more information increased by 8%, I believe we mean that if, earlier, 50 people out of 100 were completing the sign up flow, now 54 out of 100 people are completing the sign up flow.
2. 7-day retention rate is calculated by the percentage of people who return to our app after 7 days.
Taking any decision based on just two of these parameters would not be good. So, we need to look into other metrics as well:
First and foremost, identify the goal of introducing the new user sign up flow. Few of the goals that I could identify are:
a) Getting more information/data about the users for recommendations or monetization purposes
b) You have introduced a new feature in your product for which you need this information
c) It's a legal mandate that you need to have this information
Then, align the goals with the additional metrics that we need to have a look at:
1. Acquisition metrics: We know that the number of people who added more information increased by 8%, but what about the number of people coming to our website? Has it increased, decreased, or remained unchanged?
2. Activation metrics: Out of all those people coming to our website, how many people start the sign up process? In the sign up process, on which page how many people drop off?
3. For retention metrics, we already know that 7 day retention decreased by 2%
4. We would also need to check monetization metrics.
Note that we need to align the goals with the metrics. This is because if the goal of introducing the new sign-up flow was to increase monetization, it may happen that although 7-day retention has dropped, the data you managed to capture is worth more. It may also happen that the number of people visiting your website has increased, and if you have an advertisement-based revenue model, that may benefit you.
In addition to these metrics, you also need to go through the overall impact on the product that the sign up process may have caused:
Due to the change in the sign up flow, was there any impact on any feature inside the product? For example, let's say you started taking more data to give more appropriate recommendations to the users, however, those recommendations did not work out and hence, it led to the decrease in the 7 day retention. So, we would need to check feature level metrics as well.
So, clearly, we need to look into other data as well to reach at a conclusion.
Clarifications
What does 7-day retention mean? How do you measure it? Let's assume it is: number of users who used the service at least twice in the last 8 days / number of active users at the beginning of the 7-day period.
Do you have any reason to connect the A/B test with the retention metric? Was the retention metric considered a key metric to monitor for the A/B test?
To clarify the workflow
1/ Users sign up through the signup flow, 2/ Become a registered user, 3/ Use the service, or not use the service, 4/ Can de-register from the service.
When was the signup flow test started? If it was 6 months back, then there may not be any causal effect anyway.
Assume the A/B test started at the beginning of the trailing 7-day period, and both of these metrics are measured today, after 7 days of the test.
Let's first determine whether the retention metric decrease is even a problem.
What does a 2% decrease mean? Is it a 200 bps decrease, or 2% of the retention metric, which is itself a %? Let's assume it is a 200 bps decrease.
Is this considered an outlier compared to recent behavior?
For example, how does the decrease compare to the same day of the previous week? There may be an expected decrease from Friday to Saturday anyway.
What about the same time period last year? Any seasonality effect?
Compare against the standard deviation of the metric for the last 3 months. If it is less than 1 sigma, then it may not be an issue.
Let's assume it is not seasonal and it is more than 3 sigma - which means it is probably an outlier and hence needs to be investigated.
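A minimal sketch of that sigma check, assuming we have roughly 90 daily D7 retention readings to compare against (the history values below are placeholders):

```python
# Minimal sketch: compare the latest D7 retention reading against the last
# ~90 daily readings to judge whether a 200 bps drop is outside normal variation.
import statistics

history = [0.40 + 0.003 * (i % 5) for i in range(90)]  # placeholder daily D7 retention
latest = 0.38                                           # today's reading after the test

mean = statistics.mean(history)
sigma = statistics.stdev(history)
z = (latest - mean) / sigma

print(f"mean={mean:.3f} sigma={sigma:.4f} z={z:.1f}")
# |z| < 1 -> likely noise; |z| > 3 -> treat the drop as a real outlier and investigate.
```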
The A/B test may not have a meaningful cause-effect impact on the retention metric. Hence, try to find the root cause of the retention metric decrease, in order to fix it.
Retention metric is impacted by the number of active users (denominator) and users using the service twice in 8 days (numerator).
Did any numerator or denominator change in the same time period? By how much?
Assume active users didn't change much but the number of users using the service twice in 8 days decreased.
Did the type of users who joined change? Are they not finding the service useful and dropping off?
Where was the biggest drop in retention rate? Is it coming from users who signed up with new workflow?
We have now narrowed the cause - users from new signup flow are causing the drop in retention rate.
What was the new sign-up flow? Why are users signing up?
Clarify
What was the original goal of the experiment? Because we're getting more information, I'm guessing we want to use that information for either increasing engagement after sign-up or something related.
Are there any higher business goals that I should be aware of? What company am I at? This is important when evaluating tradeoffs and determining how these metrics tie back to the company's overall business goals
Investigate
Did we gather any other metrics from the experiment? I would want to look at whether 14-day or 30-day retention also showed a decrease similar to the 7-day retention. 7-day retention is important, but it's still a short time window, and ideally we should be optimizing for long-term retention because that is most sustainable for the business.
Have other experiments resulted in similar increases/decreases? How difficult have these metrics historically been to move? 2% doesn't seem like much if we're trading it off for profile information, which may be difficult to obtain.
Evaluation
Optimizing for profile information
- Pros: Ability for more personalized product and re-engagement experience
- Cons: More user effort required, data usually can't be acted upon immediately; privacy concerns, especially if this info is required
Optimizing for 7-day retention
- Pros: leading indicator for future engagement and monetization, which are key business metrics
- Cons: need to look at long-term metrics too
Iterate
Because we gathered new information, we may not be using that information the right way. I would clarify the goals of the test, re-examine the user flow, and relaunch the experiment. We can bring more personalization into the 7-day user journey with the information we gathered and see if that change in the product experience counteracts the extra effort required by the user.
Clarification
- Can I assume the signup flow is 1) User signup with Basic Info 2)User asked to provide addtl. profile info 3) User visit the service again? [Yes]
- The only difference between experiment and control is in step 2)? [Yes]
- Is Step 2) optional, meaning even if users skip 2) they can still visit the site later? [Yes]
- The % of people that add addtl. profile information is defined as (# of users that provided additional profile info) / (# of users that completed the basic profile)? [Yes]
- The 7-day retention is defined as (# of users that visited within 7 days post signup) / (# of users that completed the basic profile)? [Yes]
- Can I assume the confidence intervals for both metrics are small enough that I can trust the experiment result? [Yes]
With these definitions, experiment users fall into four groups (computed in the sketch after this list):
- Users who didn't complete the addtl. profile, and showed up within 7 days.
- Users who didn't complete the addtl. profile, and didn't show up within 7 days.
- Users who completed the addtl. profile, and showed up within 7 days.
- Users who completed the addtl. profile, and didn't show up within 7 days.
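A small sketch, on entirely hypothetical data and field names, that computes both metrics as defined above and counts the four groups:

```python
# Sketch (hypothetical data) computing the two metrics as defined above and
# splitting experiment users into the four groups.
from collections import Counter

# Each record: (completed_basic_profile, added_addtl_profile, visited_within_7d)
signups = [
    (True, True,  True),
    (True, True,  False),
    (True, False, True),
    (True, False, False),
    (True, False, True),
]

basic = [u for u in signups if u[0]]
addtl_rate = sum(u[1] for u in basic) / len(basic)       # % adding addtl. profile info
d7_retention = sum(u[2] for u in basic) / len(basic)     # 7-day retention

groups = Counter((u[1], u[2]) for u in basic)            # (added_info, returned_in_7d)
print(f"addtl info rate: {addtl_rate:.0%}, D7 retention: {d7_retention:.0%}")
for (added, returned), n in sorted(groups.items()):
    print(f"added_info={added}, returned_7d={returned}: {n} users")
```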
How are we “encouraging new users to add more profile information”, is it another step in the profile process? Is it an optional step or a mandatory step? Is it a pop up that is removing the user from the main workflow?
What is the objective of the business? To increase retention? To increase user engagement? To increase the amount of targeting data to sell to businesses?
An 8% increase in sign-ups with additional information is a significant increase, assuming that the total number of sign-ups has not decreased. However, for the 2% retention drop, I would first look at the activation %. Has the total activation % dropped since the "encouragement" was introduced? Is retention counted from when the user signed up or from when the user completed their profile? If there was a 2% drop when retention was counted from sign-up, one hypothesis is that the number of sign-ups increased but the number of people who fully completed their profile to activate their accounts stayed stagnant or dropped. Or, it could be that the number of sign-ups stayed the same, but the number of activated accounts dropped.
Assumption: we are seeing a higher drop-off of users who are not completing their profile because the additional steps demotivate them from finishing sign-up. Retention is measured from activation, not sign-up.
A/B testing should measure three aspects
1. Total number of users signing up
2. Number of users who finish completing their profile
3. The number of users who finished completing their profile that are still retained.
What to do:
(Preliminary) Change the way the data is measured
% of users who completed their profile with additional information from signing up
% of users of retention counted from activation among the people with the additional profile information
VS
% of users who completed their profile without additional information from signing up
% of user retention from activation among people without additional profile information (from before the feature is introduced)
If the additional information is optional and activation is completed, then ideally retention should be the same, and we should focus on moving users who signed up without additional information toward adding it.
If the retention drop with the adjusted measurement is happening in only one of the two categories, we need to look into the user profiles and understand whether anything in our engagement and retention efforts changed.
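A sketch of this adjusted measurement, counting retention from the activation date rather than the signup date and splitting by whether additional info was provided (all dates and field names are illustrative):

```python
# Sketch of the adjusted measurement described above: count retention from the
# activation date (profile completed) rather than the signup date, split by
# whether additional info was provided.
from datetime import date

users = [
    # (signup_date, activation_date, last_visit, added_additional_info)
    (date(2024, 1, 1), date(2024, 1, 1), date(2024, 1, 6), True),
    (date(2024, 1, 1), date(2024, 1, 3), date(2024, 1, 3), True),
    (date(2024, 1, 2), date(2024, 1, 2), date(2024, 1, 4), False),
    (date(2024, 1, 2), None,             date(2024, 1, 2), False),  # never activated
]

def retention_from_activation(rows, with_info):
    # Only activated users count; retained = returned within 7 days after activation.
    cohort = [r for r in rows if r[1] is not None and r[3] == with_info]
    if not cohort:
        return None
    retained = [r for r in cohort if r[2] > r[1] and (r[2] - r[1]).days <= 7]
    return len(retained) / len(cohort)

print("with additional info:   ", retention_from_activation(users, True))
print("without additional info:", retention_from_activation(users, False))
```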