
You launched a new signup flow to encourage new users to add more profile information. A/B test results indicate that the % of people that added more information increased by 8%. However, 7 day retention decreased by 2%. What do you do?

Asked at Google
28.8k views
Answers (29)

Assumption: the 2% decrease in retention applies to users who went through the new signup flow.

  • What value does the added information hold for us? Why are we collecting more information - engagement, acquisition, revenue, retention?

  • Is this noticeable for a particular geography or segment of users?

  • What have we observed for people on the other side of the A/B test? Did they show any similar increase or decrease?

  • We would like to compare how long it takes to onboard the user/complete the signup process now versus earlier.

  • We would also like to understand whether any of the extra information the user is asked for is causing inconvenience. Has there been any downtime in the new flow? This can be checked with tools like Microsoft Clarity.

  • Are any support queries captured at this stage of the process? Has there been an increase in them?

  • Is the API failing to respond properly?

    • for OTP
    • for any other information that is collected
  • Are the UI/UX changes too overwhelming for the user, causing disruption in the flow of the journey? Let's say:

    • an action button is not visible
    • the process is tedious
    • the options are confusing
    • there are too many steps
    • some personal information is made mandatory

    We can possibly add visuals to make the flow more interesting.

    We can add an auto-fill option for forms the user fills in frequently.

  • Now, we can offer the user some benefit for signing up, so that satisfaction is attached to it. It could be one of the following:

    • Acknowledgement
    • Referral option
    • Signup points or product-related benefits

This would encourage the user to complete the signup process.

  • We can also take a 2-step approach: some information is provided at signup, and the rest is collected before the user takes another action - say a purchase, an engagement, or a scroll (here we can put a timer so that after so many minutes/days of exploration, the user must add certain information to proceed). This way the user can have a peek into your offering, see what they are missing out on, and that will motivate them to provide more information.
0 likes
badge Platinum PM
Structure and hypotheses are covered in many answers, but I wanted to add one point here.

Are the metrics giving meaningful insights? Are the metrics linked - i.e., are we seeing the retention decrease among users who added info, or are these metrics disjoint?

On the 7-day retention metric in question: is it measured after the new feature, or does it include both time periods? Say the feature was rolled out on Wednesday; people used the product until Tuesday but dropped on Wednesday due to the extra step. This is important because it could point to the possibility that the majority of these dropped users are fake users or bots who are unable to, or don't want to, complete this step. In that case the metric gives no meaningful insight; it could even be a bonus in disguise, as you got rid of fake users.
1 like
badge Gold PM

A few clarifying questions:

  • Is it the app or a website? (Answer: It is a web app)
  • What type of app or website is it? (Answer: It's a social media web app)
  • Why did we launch an A/B test? What was the objective for this? (Answer: We wanted to personalize offerings by asking users to provide more information as they sign-up. The overall objective was to improve engagement & retention.)
  • What do we get out of this 8% lift in the number of users? What was the original estimate? (Answer: The original estimate was ~8%, hence we are observing a significant and expected lift here.)
  •  Was a 2% drop in D7 retention expected? (Answer: No, this was not expected.)
  • Are the results significant? (Answer: Yes)
  • Talk about the user cohorts here 
    • Was it across the users or a particular set of users? (Answer: All users)
    • Was it across geographies or restricted to some geography? (Answer: All geographies)
  • Finally, do the timelines of an 8% increase and a 2% drop coincide? (Answer: Yes, the impact of this new A/B test was a drop in retention by 2%.)
Further analysis:
  • What does a 2% drop signify? Is it acceptable?
    • Answer: Since it was not expected, the 2% drop needs investigation
  • Why are these users not retaining or at which step these users have dropped for good?
    • Was it during signup, to get additional information?
    • Was it after signup, when users added the information?
  • If it was during signup, then it means every time we ask users additional information they are more likely to not retain

If it's post signup, then it would mean the additional data added by the user is actually reducing the personalization quality for users, and hence users are not being retained. Here I'd also like to dig into the personalization algorithm, to understand what's going on behind the scenes and how it can be improved.

Summing up,
  • Adding information causes users to drop off at that very step - this has to be validated
  • If adding more information is hampering the personalization experience, then the algorithm has to be reworked for the new users.
 
Looking for feedback. Thanks! 
2 likes
badge Gold PM

Assumptions:

  1. The A/B test was conducted on the new signup flow vs. control (the existing flow)
  2. The new flow has additional steps where profile information is collected
  3. No specific target user groups (e.g., Pages or Businesses); the new flow is for regular users
  4. Assume this signup flow is on FB/a social media app
  5. Assume there is nothing internal/external affecting the metrics
Clarifying questions  
  1. Were other metrics affected? Answer: signups also went down.
The situation: the % of users that add more profile information is up 8% (= # of users with more info / total # of users). This number can go up as a consequence of either factor - more people adding info, or the total number of people signing up going down.
As a first step I would analyze which of these two factors caused the increase and identify the actual number of people who added additional information.
7-day retention (= % of new users who come back at or after 7 days) has decreased by 2%. I'd then relate this to the drop in total signups to see if there is a correlation; i.e., if total signups also went down by 2%, there is no difference in retention rate for users onboarded in the new flow.
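A minimal sketch of this decomposition, assuming we can pull funnel counts for each A/B arm (all numbers and names below are illustrative, not real data):

```python
# Hypothetical funnel counts per experiment arm, e.g. from the analytics warehouse.
control = {"signups": 10_000, "added_info": 2_000, "retained_d7": 4_000}
variant = {"signups": 9_600, "added_info": 2_300, "retained_d7": 3_700}

for name, arm in (("control", control), ("variant", variant)):
    pct_info = arm["added_info"] / arm["signups"]
    d7 = arm["retained_d7"] / arm["signups"]
    print(f"{name}: signups={arm['signups']}, added_info={pct_info:.1%}, D7={d7:.1%}")

# Comparing raw signup counts alongside the ratios shows whether the 8% lift
# comes from a larger numerator (more users adding info) or a smaller
# denominator (fewer total signups).
```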
 
In this case, it looks like total signups have decreased with no change in retention but a significant increase in profile data collection.
 
Since this is FB, we can safely assume a large user base. At this stage, FB's key goal is monetization. There are strong organic network effects that push people to sign up for FB, and given that FB's revenue is generated by personalized ads, richer profile data will definitely help FB personalize ads with greater revenue potential. I'd imagine the new cohort of users has a higher LTV, as their ads are better personalized. While the drop in signups is concerning, I don't believe it is an urgent problem to solve.
 
As a next step, I'd continue experimenting with this hypothesis to estimate the potential revenue gain for users in the new flow. Once I have this calculated, I'd offset it against the revenue loss from lower signups.
I'd also explore other ways to get the churned users back to sign up.
 
3 likes
badge Platinum PM
  1. CLARIFY: 
    1. Are the users who are dropping off the same users completing the additional information? You choose.
    2. Is there anything unique about the users dropping off (ex. they're all on mobile devices, same age range, etc.)? You choose. 
    3. Are there any other changes to the product aside from the application flow? No.
    4. Are there any changes to the way the metrics were originally measured v. now? No.
    5. Are all of these users - i.e. the people adding more information and dropping off - fully completing the application? You choose. In this case, I assume that the people abandoning the product have not completed the application (i.e. creating an account and then deleting it). (Interviewer could confirm if such assumption is OK or if it should be a factor to explore.)
  2. AB TEST BACKGROUND: A product ran an AB test that gave a Control group the application flow as normal and the Experimental group an increased number of questions that allowed the users to provide more information. In the experimental group, 8% of people added additional information, but overall retention is down. (Interviewer confirms description is correct.)
  3. USERS: There are 4 main user groups to consider. I am most interested in the Experimental Group that leaves the product after 7 days. 
    1. Control:
      1. Stays on Product after 7 days
      2. Leaves Product after 7 days
    2. Experimental:
      1. Stays on Product after 7 days
      2. Leaves Product after 7 days
  4. HYPOTHESIS: Given that the Experimental group experienced a longer application, my hypothesis is that the longer application caused a drop in retention rate. Users are abandoning the longer applications / not finishing them because of its extended length, although they are initially filling in more information on the application than the Control.
  5. POSSIBLE REASONS FOR DROP IN RETENTION RATE:
    1. User Fatigue: User got tired of all the questions and abandoned the application after 7 days (i.e. never logged back in to finish it)
    2. Data / Privacy Concerns: User became wary of the Product / wondered why they had to provide so much information and abandoned the application due to privacy concerns.
    3. Time: User felt they did not have enough time to complete the application.
  6. ACTION ITEMS: There are three action items we could explore.
    1. Trends Over Time: I'd like to continue measuring the Control v. Experimental groups over a longer time period to see if we see a continued higher drop in retention rate in the Experimental Group. I'd measure every 7 days (7 days, 14 days, 21 days and 28 days). 
    2. Prospect Survey: I'd survey prospects of the Product and ask them how willing would they be to fill out a complete application with the Control application # of questions v. Experimental application # of questions and ask them why they answered the way they did.
    3. Drops Survey: I'd survey prospects who dropped from the application and ask them why they dropped. I'd be sure to classify them as in the Control Group v. Experimental Group.
  7. PRIORITIZING ACTION ITEMS: 
    1. Trends Over Time. Impact to hypothesis - High: allows us to measure continued behavior of people in the experiment. Cost - Low: analytics already seem to be in place.
    2. Prospect Survey. Impact to hypothesis - Medium: users may not respond as they would actually behave; does not directly target the people using the application. Cost - Low: not costly but may take time to gather sufficient responses.
    3. Drops Survey. Impact to hypothesis - High: would provide direct insight into why someone is leaving. Cost - Low: not costly but may take time to gather sufficient responses.
  8. SUMMARY: Given prioritization, I'd start by measuring the Trends Over Time, as it is the easiest to implement and will allow us direct insight into the group behaviors. If possible, I'd also survey the users who have dropped to confirm my hypothesis. 
8 likes
badge Bronze PM

Statement 

You launched a new web-application signup flow to encourage new users to add more profile information. A/B test results indicate that the number of people that added additional profile information increased by 8%. However, 7-day retention decreased by 2%. What do you do?

Clarify the problem 

  • Is this an existing product? Yes

  • Was the registration form the only change made to the app? Yes

  • Is the form now longer? Takes more time to complete? Not sure

  • Did the overall % of users that completed the form change? You decide

From the A/B test: an increase in the # of users with more information in their profile, and a drop in retention WoW.

Objective indicators 

  1. This is not something that has occurred gradually over time - it was sudden

  2. This is not limited to a specific region - country / language / culture

  3. Not limited to a specific platform - desktop, mobile, tablet (iOS, etc.)

  4. This drop is not something that we have seen from other cohorts ("older" registrants are still churning like before)

List of reasons 

  1. Seasonality 

  2. Alternative product 

    1. Competition 

    2. Other features in the same property (cannibalization)

  3. Other changes happened in the product 

  4. The registration form is encouraging some users to add more information and thereby discouraging other users who do not want to supply this additional information. The outcome is that you are getting a different "mix" of users registering, and not necessarily the same types of users that provide more information.
    This would not have been a problem, were it not for the higher churn, which means that your quality users were discouraged by the registration process.

Sort through the reasons 

Reasons 1-3 are not very likely (or, better said, I do not assume them to be applicable).

Reason 4 is what I think is the probable cause.

How would we test that?

A careful analysis of the data should answer whether the overall % of completions has increased or decreased. If not, we need to rethink this list of potential causes.

Launch a form with fewer mandatory fields and see if your mix changes.

3 likes
badge Bronze PM
I would begin with clarifying questions to understand (1) the core business KPIs, (2) the A/B test goal, and (3) the current phase of the company, i.e., is user retention more valuable, or revenue (from the increased user info) at the cost of fewer long-term users?

1. Q1: What is the main business model of the app? At a broad level, is this a subscription-based app or an ad-funded business?

2. Q2: Is there an LTV model built for the app that takes into account (1) the value of increased user information and (2) 7-day retention?

3. Q3: What was the original goal/hypothesis of the A/B test? Was it to increase the info per user, and how was that quantified in terms of business impact?

For an ad-funded business, increased user information could result in increased CPCs/CPMs; that needs to be quantified and compared to the reduction in engaged users.

Also, the 7-day retention loss of 2% should be segmented to see whether the users being lost are from valuable segments. In the early phase of a product that would be a more important factor than any short-term gains from increased ad revenue.

On the other hand, if this is a mature product past product-market fit (i.e., the users being retained are the valuable ones), then the increased info for those users would actually be more beneficial.
7 likes
badge Platinum PM

Hi, 

First of all, I want to make certain assumptions or inferences here:

1. The % of users who filled in additional information increased by 8%, which means that providing additional information was optional.

2. The overall decrease in retention includes people in both variations (control as well as the variation version).

Now, I want to analyze which user segments' retention rates are affected, and suggest a strategy to address the problem -

1. People who did not provide additional information and bounced off the landing page or in subsequent usage -

I would compare these users' retention rates between the control and variation versions. Ideally they should be the same. If not, we should look at the user profiles in each variation to find the differences.

2. People who provided additional information and then bounced off the landing page or in subsequent usage -

If the retention number is lower here, we need to focus on providing relevant, customized offerings based on the additional information, and check the retention rate in the subsequent 7-day period.

3. People who signed up but bounced off the additional-information page and never made it to the landing page -

If this % of users is high enough to reduce the overall retention rate, we should look at alternative placement of the additional information, or at its content (this can be gathered by analyzing which page and field had the maximum bounce-off rate). Maybe we can ask the user to provide information on the 2nd or 3rd usage, etc., as we must give the user a chance to use our application before exiting it.

23 likes
badge Platinum PM

You launched a new signup flow to encourage new users to add more profile information. A/B test results indicate that % of people that add addtl. profile information increased by 8%. However, 7 day retention decreased by 2%. What do you do?

Let's start from the WHY behind the change - likely you were implementing this change to improve retention. Let's proceed with the assumption that this is a social app where profiles play a critical role.

An 8% increase for a signup flow is a significant increase in the number of people completing profile information. I would ask how this was implemented, because from experience I know that every added step introduces a 3-4% drop-off.

My assumption here is that this is an added pop-up screen during signup which causes a drop-off in the total number of users completing signup successfully, since they drop off on the profile info screen.

*Interviewer nods yes*

I would also recommend changing the way we measure the success of the A/B test. Let me tell you why - consider the following scenarios.

In case A, for every 100 people starting signup, 50 completed signup, of which 20 completed profile information.

In case B (the winning variant), for every 100 people starting signup, 48 completed signup, of which 22 completed their profile.

So the blocking screen causes an overall drop-off that reduces D7, while increasing the number of people completing their profile.
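A quick sketch of that scenario, using the illustrative funnel numbers above:

```python
# Funnel numbers from the scenario above (per 100 users starting signup).
cases = {
    "A (control)": {"started": 100, "completed_signup": 50, "completed_profile": 20},
    "B (variant)": {"started": 100, "completed_signup": 48, "completed_profile": 22},
}

for name, c in cases.items():
    signup_rate = c["completed_signup"] / c["started"]
    profile_rate = c["completed_profile"] / c["completed_signup"]
    print(f"{name}: signup rate={signup_rate:.0%}, "
          f"profile completion among signups={profile_rate:.0%}")

# B "wins" on profile completion (46% vs 40%) while actually losing completed
# signups (48 vs 50) - exactly how a blocking screen can lift the experiment
# metric and still hurt D7 retention.
```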

I would re-configure the experiment hypothesis to: "users with better profile information have higher retention than users without it - how can I increase the number of users filling in their profile information on D0 (primary metric) while increasing/not affecting the successful signup rate (a secondary metric that doubles as a kill metric)?"

If my goal is to increase profile information for new signups, I would focus on passive methods (push notifications, in-app pop-ups, incentives) AFTER users successfully sign up, so as to negate this drop-off.

If my larger business goal is to increase retention, I would reduce the signup steps to increase successful signups and focus on the same passive methods post signup, so as to increase the total number of users successfully signing up and help improve D7.

29 likes
badge Silver PM

Asking clarifying questions is critical to properly answering this Google problem solving question:

  1. What was the new signup flow and how is it different from the old signup flow? My assumption is the new signup flow asks for more user information, thus the increase in the % of profile information.
  2. What was our goal? Was it to increase profile information? What was the acceptable counter-metric decrease? Is the result within range?
List the pros and cons of both metrics:
  1. Increase in profile information - the more data we have, the better our recommendation and personalization systems become, and the network becomes more valuable to the users. Cons: depending on the types of information we ask for and require, users may have different levels of comfort and privacy concerns.
  2. Decrease in 7-day retention - if a user does not come back in 7 days, this is not great for the platform. However, I also want to consider whether 14-day or monthly retention has decreased; perhaps the new user no longer needs to come back on the 7th day to add a profile pic or other information.
What would I do to validate my hypothesis?
1. One hypothesis is that 7-day retention decreased because new users no longer come back to fill out additional information. I would then compare 14-day and 30-day retention to see if they decreased.
 
29 likes
1 Feedback
badge PM
I agree with the clarifying questions here; they are a must before making assumptions. I also like the structure and approach of the answer.

Merging both answers above would be ideal, though.

Assuming no other change was implemented that could have affected these metrics, let's start by breaking down the question. The first piece of information is that additional profile information increased by 8%, which is desirable. We do not know exactly what we are asking the user to provide, so let's assume the additional profile information consists of the user's interests and experiences. The second piece of information is that 7-day retention decreased by 2% because of this change.

The objective of getting more profile information from the user is to find better content and connections related to the user's interests and experiences. If we are not able to use the additional user info to find more relevant and engaging content for the user, then this effort is a waste. So, our primary focus should be to make the best use of this info to improve the experience for the user. This would have an effect on the churn rate too and generate positive word-of-mouth marketing for the product.

Now let's try to understand the possible reasons for the decrease in user retention. A thought that comes to mind is that we may have made the process so elaborate and complicated that users did not even complete it and chose to leave the app. But since the retention metric tracks users after they have completed the signup process, we can rule out this reason. We ought to be more specific about what kind of information we are asking from the user. For privacy reasons, many users will not be comfortable sharing more personal info. Make sure that users are satisfied with your product's security and privacy policies and are convinced that they have full control over how your product uses their data.

In summary, we need to do the following to improve retention -

  1. Create a more engaging experience for the user by finding and displaying relevant content based on user's interests.
  2. Communicate your product's compliance and privacy standards and make sure they trust you with their data.
4 likes
badge Platinum PM

I will analyze the situation in the following way:

  • 7-day retention → the percentage of users who come back to the app/product within 7 days of their first session.
  • The hypothesis of the A/B test is: with the signup flow change, the number of users adding more profile information during signup should increase without any significant effect on other health metrics? → Yes
  • What were the exact A/B test changes? → I am assuming the A/B test change mainly asks for more user information in a more user-friendly manner.
  • Before the start of the A/B test, did the team agree on any accepted % change in 7-day retention? → If the accepted % change was around 5%, the team may be satisfied with the A/B test results and with the new change. If the accepted % change was between 0% and 1%, the team would need to analyze the situation.
  • Similarly, before the start of the A/B test, did the team agree on any accepted % change in new users adding more profile information? → If the accepted % change was around 15%-20%, the team would not be satisfied with the A/B test results and would not implement the change. If the accepted % change was between 5% and 10%, the team would consider analyzing the situation further before implementing the change.
  • What was the A/B test run-time window, and the metric changes are for what time period? → It should not happen that the A/B test was supposed to run for 1 month and the team is already analyzing results within 2 weeks of its start, or that the test ran for more than 1 month.
  • Is this change specific to any platform (Android vs iOS), mobile vs web, any region, any specific user segment, or any specific version?
  • I am assuming none of the other product features were upgraded during the time window that the A/B test was run.
  • What type of profile information is being added by the new users during signup? → If such information is not useful, then the signup flow change is not really required. If the information is useful and later used by the product's underlying models, then the signup flow change seems important.
  • Is there a similar proportionate decrease in the 14-day or 1-month retention rates? → If there is no such decrease, we can go ahead with the implementation of the new signup flow change; else we need to analyze further.
  • Does this 7-day retention decrease include folks who didn't meaningfully engage with the product but were mere spectators? → Losing high-value folks would matter much more than losing low-value users.
  • Is there a change in the number of people signing up and actually getting activated to be considered for 7-day retention? → If more people are getting activated but the same or a higher number of people are returning to the product after 7 days, there will obviously be a decrease in the retention rate, which is not worrisome.
  • Also, I would like to look at the absolute values to determine the exact impact.

I would like to analyze the old and new signup flows, and what actions the user undertook after coming back to the product within the first 7 days.

  • It is possible that earlier, users did not provide all the information, so they received 1 or 2 notification emails asking them to complete their profile, and hence they logged into the product within the first 7 days.

  • Now, with the signup flow change, since users provide the extra information at the signup stage itself, they no longer receive those notification emails and are not required to log into the product within 7 days of signing up.

0 likes
badge Gold PM

CQ:

  1. What's the product we are offering - Assume this is for Facebook login

  2. Use case of asking for more profile information - curb fake accounts

  3. Since when do we see the change - as soon as the change was made

  4. sudden/gradual - happened right after the change

  5. Sensitive to geography - no

  6. Sensitive to platform - web/app - no

  7. Do we also see any change to avg session length? - no

 

7-day retention is basically the number of users who come back to the app within 7 days of last usage.

 

My approach would be to review the scenario against the hypotheses below.

 

Hypothesis:

 

  1. Fake users couldn't provide the additional info and hence dropped off:

    1. Next step - sample-check their FB usage patterns (the kinds of posts made, whether they respond to 1:1 chats). If the hypothesis is validated, use the learning to remove fake profiles via this method as well, instead of nudging users to add extra profile information.

  2. Login was affected after the addition of the profile info page:

    1. Next step - get it QA'ed and resolve any issues found.

  3. A change in how the data is now being measured:

    1. Next step - understand the measurement steps and fix them.

0 likes
badge PM
First, Clarify: 
1. What is the definition of 7-day retention? Does it mean people stop signing in after 7 days? Is there any obvious logical connection between collecting the additional information and not signing in later (e.g., does the profile info affect any parts of the user experience)?
2. What profile information is being asked for? How will it be used? Is it useful when only 8% of the population provide the info? Has this change passed its own success threshold?
3. For this question, should we know what the product is, along with its objectives and north star metric?
4. Is a 2% drop significantly more than typical?
 
Assuming that this is a free signup software service, and the profile information is not impacting any subsequent user experience.  The information could be used to personalize experience later, but not now. 
 
Assume that the product's north star metric is number of active users, say weekly.  Assume 2% is more than the fluctuations seen.
 
Lay out the structure:  
1. First, I will segment the data to narrow down the problem and see if there are any hints as to why this could be happening.
2. Next, I will brainstorm some hypotheses, internal and external.
3. Then, I will validate these hypotheses.
4. When we find the root cause, we can propose a fix.
 
Segment the data: 
- Assume this is not a data pipeline or outage type of problem
- Assume no change of definition/measurement
1. When did the 2% drop happen? Is it specific to any geographies, languages, user groups (new, existing, or other types - personal, business), platforms, devices, or browsers? (See the segmentation sketch after this list.)
2. Is my test group not truly randomized? Or is it coinciding with other types of tests?
3. Has there been a surge in other related metrics? E.g., higher signups?
4. What is the user journey or funnel? Do we see any obvious drop? Or is there simply less traffic altogether?
>> User visits site -> Views pricing -> Signup -> Profile info -> User home page
>> User visits -> Login -> Home page
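A minimal sketch of that segmentation, assuming per-user experiment data in a pandas DataFrame (all column names and values are illustrative):

```python
import pandas as pd

# Hypothetical per-user rows from the experiment; real columns would come
# from the analytics warehouse.
df = pd.DataFrame({
    "group":       ["control", "variant", "variant", "control", "variant", "control"],
    "platform":    ["web", "web", "mobile", "mobile", "web", "web"],
    "geo":         ["US", "US", "IN", "IN", "DE", "DE"],
    "retained_d7": [1, 0, 0, 1, 1, 1],
})

# Retention by experiment arm within each segment: a large gap confined to
# one slice (say, variant x mobile) narrows the investigation quickly.
print(df.groupby(["group", "platform"])["retained_d7"].mean())
print(df.groupby(["group", "geo"])["retained_d7"].mean())
```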
 
Hypothesize: 
First - understand the change in the user experience. What additional information is asked for that could impact 7-day retention? Are there any other changes beyond the UX? Is the layout different?
1. Are there any new features being launched? E.g., did a 7-day free trial end?
2. Any promotion that coincides?
3. Did we stop anything - e.g., emails or notifications that bring users back?
4. Market changes - a competitive move, or less need for this service (e.g., an SAT test prep site when the SAT is no longer required)
 
Validate against the above. 
 
Fix: 
1. If it is related to marketing changes or external factors, then we should continue to roll out the feature (if 8% is considered good enough).
2. Consider bringing back the emails or notifications if the 2% drop is unintended or unacceptable.
 
 
0 likes

I will first start with clarifying questions for the interviewer:

  1. What type of service/product is it? Is it transactional, such as Tinder or Cars24, or non-transactional, such as FB, Netflix, or YouTube? This is important: say a user has filled out all the information on Cars24 about the used car they are looking to sell, and operational efficiencies are in the system; we might have reduced our TAT to list cars from, say, 10 days initially to 5 days.
  2. I assume the A/B test results are now live across the entire platform, and the retention drop is uniform across new users and all regions?
  3. What do you mean by retention - app-launch retention, or retention on doing any core activity?
I will assume that we are talking about a non-transactional platform such as Facebook, where retention is important and the goal is to make users come back to the platform on a daily basis. I will first look at the following data:
  • Behavior on day of signup:
    • %age of users performing any core activity, such as sending a friend request, joining a group, or posting on the wall
    • Avg. engagement time per user post signup completion
    • Is only D7 retention taking a dip, or are D1 and D3 retention also showing a drop? This can indicate whether engagement post signup is happening or not.
  • How has the profile information been used by the platform?
    • Content recommendation logic
    • No. of ads shown to users per unit of time on the platform
Another angle is to look at Tech Implementation
  • Any update being sent to new users
  • Any new feature release for new users
  • Crashlytics Report 
If everything above looks good and there has not been much we were able to validate, then as a group we need to decide which metric is more important from an organisation POV and how this will impact our DAU or revenue metrics.
For me, retention has always been of paramount importance in products where we want users to visit us on a recurring basis.
0 likes

Clarifying Questions - 

What was the new sign up flow? How is it different from the old sign up? (Asking for additional information including location, age, and interests) 

What was the goal of the sign up flow change? Was it to increase engagement? (Yes)
 

Re- Stating the question with new data

With the goal of increasing engagement, we launched a new signup flow that encouraged users to add more profile info related to their location, age, and interests. Our data shows 8% more users added info, but our retention decreased by 2% shortly after.

 

My Assumptions 

  1. This is a social platform where users get to interact with people 

  2. Historically retention has been healthy

  3. With the new sign up flow, nothing in the UI has changed. 

 

Two hypotheses

  1. We leveraged the additional profile info to feed our algorithms and serve better-curated content. Unfortunately, the ML models aren't working correctly and we are serving the wrong content, thus making users churn.

  2. The users who do go through the new flow realize they are uncomfortable with the level of information they provided, and thus churn from the service.



 

My approach

 

I'm leaning towards hypothesis 1 (broken ML serving poor content). Hypothesis 2 states the churn is caused by an uncomfortable amount of required info; I'm more inclined to believe that a user who is uncomfortable with the level of info they are providing most likely won't complete the flow at all.

In the case of hypothesis 1, I'd like to test the new flow myself with the team to quickly see whether the ML is serving us content that we actually like, or content not relevant to our interests. If our quick test confirms this, the next step I'd take is to retract the new user flow to stop the bleeding.

Then I'd run an internal dogfooding session to understand where the issue lies and how we can improve the algorithm.


 

Alternatives. 

 

If hypothesis 1 is incorrect, and we determine that the algorithm is working fine, I'd move on to hypothesis 2.

 

If hypothesis 2 is also incorrect, I'd like to explore external factors that may be causing this decrease in 7-day retention. It could be that users are fine with the new flow, and a new platform has been released that is grabbing our users' attention.

 

0 likes

Clarification : 

  • Signup flow: what changes were made to signup? Assuming we started collecting more information, does that mean users gave more information but did not complete the signup flow? We might be collecting information in steps.
  • What are we doing with this new information - showing recommendations, or sending more notifications?
  • 7-day retention: how do we define retention? If a user comes today and comes again within 7 days, I would count them as a 7-day retained user.
Factors (based on the assumption that with the new signup flow we take more information from users and show them recommendations):
  • Check the data users provide - whether it is correct or not. An auto-suggest or auto-fill feature might be giving wrong data.
  • Check the user funnel data for retention before and after.
  • Check all the sources through which users come back, and check for any issues with emailers/notifications.
  • Seasonality: has retention dropped on any particular day?
  • Check 1-day, 2-day, 3-day ... up to 7-day retention data.
  • Check competitor data for any particular day with less traffic.
0 likes

First off, I would begin with clarifying questions:

  1. What was the new sign up flow, and how does it differ from the old sign up flow?
  2. How is 7-day retention defined?
  3. What does the user journey look like after the user has completed the creation of their account? Are there critical actions the user had to take within the 7-day period before the changes to the signup flow were implemented?
  4. What was the ultimate business goal we were trying to move when we had implemented this A/B test?
Following which, I will proceed to list down assumptions: 
 
  1. This is for a social app where profile information is critical to building the right experience for the user - be it a marketplace or a social dating platform.
  2. 7-day retention is measured from the moment the user completes profile creation.

Approach: 

My assumption for this question is that the activities we prompt the user to engage in right after account creation (e.g., adding profile pictures, adding interests) are activities they actually do, and that these have an impact on the 7-day retention rate.

However, with the new sign-up flow, users can add this information during signup itself, and this inevitably led to the decrease in the 7-day retention rate.

To ensure that we are not over-focused on one metric, I'd recommend looking at 14-day and 30-day retention to see if there is an impact on those numbers as well.

I would also take a longer-term view: look at how long it typically takes for the user to experience the product's value proposition, and understand whether this group of users is able to experience the value prop quicker (due to the streamlined sign-up flow), has a better retention rate, and affects the business bottom line in a positive manner.

0 likes
badge Silver PM

Clarify:

  • Assume additional info required for FB sign up

  • Goal for the new signup: to increase the richness of profiles, which helps increase engagement (# of interactions with other users) and the personalization of content

  • As extra info was asked for explicitly, the % of people adding more info went up

 

Goals:

The mission of FB is building stronger communities and the reason we added a new flow was to collect more information so as to facilitate meaningful interactions and engagement with content. 

The goals of the new feature align with the overall mission

 

Metrics:

Let's think of some metrics we might be trying to measure during the experiment:

Goal metric: % of new users adding extra information (it increased, which is a good thing)

Guardrail metrics:

  • Retention (weekly/monthly)

  • Daily time spent on FB for new users

  • # of content they interacted with (like/comment/share)

  • # of posts they created weekly

  • # of friends added

 

Hypothesis:

Hypothesis 1: We are asking for information that users perceive as private and hence they are conscious about using the app after signing up.

To validate this hypothesis, I will look at weekly and monthly retention; if both are lower, it signifies a drop in users coming back to use the platform.

I will also look at other guardrail metrics like daily time spent, # of content items interacted with, # of posts created, and # of friends added. If these metrics also show a drop, it confirms our hypothesis that users perceive the new info as private and the platform as not safe overall.

In this case we need to dig deeper to see what extra info we are asking for, and whether the benefits of having it outweigh the cost. Can we also add a short note for users explaining why we ask for this information, or make it optional?

 

Hypothesis 2: Users were previously notified to add additional info, so they were logging back in within 7 days. They aren't notified in the new flow.

To validate this, I will check monthly retention to see if there is a drop in the long term. If our hypothesis is true, there should be no drop in monthly retention.

Additionally, I would check the other guardrail metrics; if they remain the same, that confirms our hypothesis.

The drop in retention is temporary in this case, so we don't need to do anything.

 
0 likes
badge Platinum PM

Clarification

I think we need a better understanding of just what is going on before we can answer this properly. 

  • What were the changes we made to the sign up flow? Are certain fields now required, or are the input boxes placed more prominently?
  •  What are we trying to accomplish by capturing more information during the sign up flow? Increased activation and retention as users have more complete profiles to start with?
  • How exactly are we measuring retention? Is there some usage threshold involved, or is it simply anyone with a single session?
  • Assuming the decrease in retention was only observable in the group with the new sign up flow changes.
 
Zoning in on the difference
Now that we understand the question properly, let's continue to investigate further by looking at what we're seeing by a few different angles:
  • How long have we been observing this increase in information and decrease in engagement? Was this a sudden observation or something we noticed after looking at it over a long time span? While 2% is pretty small, I'm going to assume it is statistically significant.
  • Are these observations uniform across geographies or are there certain countries / cities where it is more or less noticeable?
  • Is this uniform across all user segments and use cases? Maybe a particular demographic is impacted more or less?
  • Does this vary by platform, mobile vs desktop?
 
Impact
When we have a better understanding of what is happening, we can better measure the impact on our product and its goals.
  • One would assume that more engagement would lead to more retention. Does a more complete user profile actually lead to more engaged users? We could look at the average time spent in the product per week, the average number of sessions, or examine specific actions.
    • If our assumption was that more complete profiles lead to more engaged users, we should've confirmed that assumption with existing users before going down this route.
  • Is there anything characteristically different in the users we are retaining that have added the additional information? Maybe they are from a particular high value user segment or engage with the product more as discussed above. Quality over quantity.
  • I don't know what the additional information we are capturing is but we should examine how valuable that information is to us. Could we use the new profile pictures to aid our facial recognition research? Could the demographic information improve our ad targeting and corresponding CPM rates?
  • Is weekly retention the appropriate time period for us to be looking at? What does the increase or decrease look like for monthly retention? 
 
Next Steps
When we better understand the impact, we can decide next steps. We don't necessarily need to take a binary approach of either keeping this change or dropping it; perhaps some more optimized happy medium is in order.
  • Can we use this additional information to increase the level of personalization in the bridgebacks to our product? "Hey Joe, check out these other users also from the Bay Area!"
  • Did we have some sort of email campaign or notification scheme that encouraged users to come back and add more profile information? This may have been turned off for users who already filled out their profiles, despite them still benefiting from a CTA to come back to the product.
 
0 likes
badge Bronze PM

Clarify scope: 

What type of product are we referring to? Is it Gmail? Or YouTube? Or a Google account?

 

Things to figure out..

 

  1. Internally

    1. Among the customers who are not coming back within 7 days, how many were from the test population? (We need to figure out whether the decrease in 7-day retention is related to the A/B test or not.)

    2. Were all the onboarding emails working properly? If we historically send users a welcome email after they sign up but that email was not working properly, that could be a factor in decreased 7-day retention.

    3. Did we remove any features recently? Were any features that were part of the reason users signed up removed?

    4. Have there been any outages during this period? Did anything happen that resulted in a poor customer experience and low confidence in our product?

  2. Externally

    1. Time of the year? What time range was the 2% decrease? Does historical data show similar fluctuations? How does it compare to last year's data? For example, if we're talking about a product such as Gmail, and 7-day retention decreases for users who sign up right before Christmas break, the fluctuation could be normal, since users might be on holiday and not checking email.

    2. Any new competitor releases that may have impacted our retention rates?

    3. Was there a specific segment of customers that contributed more to the 2% decrease? E.g., if we saw a 50% decrease among mobile users, we should look into issues specific to mobile. Were the app stores working? Were customers simply unable to log into the app, and therefore retention decreased?

  3. I will also work with stakeholder teams such as customer support, marketing, and CRM to identify potential reasons. Were there increased customer complaints about something? Did we pause any marketing campaigns or CRM efforts? Among the users who signed up, which campaigns did they come from? Were customers coming in due to incentives we gave, and were we therefore attracting lower-quality customers who were never going to "stick" with us?

0 likes
First of all, I'll check if there's a correlation between the 7-day retention drop and the new sign-up flow. To do that, I'll check the 7-day retention for the new sign-up flow specifically.

Assuming the drop is due to the new sign-up flow:

Case 1:

I'll check if there is a drop in new signups due to the added steps in the journey. If that is the case, I'll review whether the drop justifies the objective of collecting additional information.

If the drop is not justified, I'd explore optimizing the new journey with a UX change such that the information can be collected post successful sign-up, and re-run the experiment until we find the optimal balance in both metrics.

Case 2:

The sign-up rate remains the same, but users are uninstalling at a later stage. In this case, I'll check if we're using the additional information to target users for the sale of products/services, which may be causing them to leave the platform. If that is the case, I'll review whether the drop justifies the increase in transactions. If yes, I'd gradually roll out the new sign-up flow to 100% of the user base.
0 likes
badge Platinum PM

Clarifying question:

Candidate: What is the business goal? Is it to collect additional information that is critical to providing great service to customers, as well as to serve ads to generate revenue?

Interviewer: Yes

Candidate: I assume that users who provide more information tend to have higher engagement, and hence Google tends to generate more ad revenue from them.

So we can measure the long-term value for the cohort that adds more details versus the baseline cohort. We can then base the decision to keep the feature on whether the incremental revenue generated from the enriched group offsets the loss from the users who attrite.

Deep-dive analysis: We need to measure the long-term value we generate from a user who added more information vs. the long-term value we generate from a baseline user.

For example, suppose we generate $15 in revenue each year from each user who submits more information, and $10 each from baseline users. That translates to extra revenue from the 8% of customers who add more information, and a loss of $10 in revenue from each of the 2% of users who churn.

Long-term revenue difference = $15*0.08*X - $10*0.02*X = $1.2X - $0.2X = $1.0X

In this case, we recommend continuing the feature. On the other hand, if the incremental revenue is not high enough to offset the loss, we recommend discontinuing it.
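A minimal sketch of this break-even check, using the illustrative LTV figures above (the function name and user count are hypothetical):

```python
def net_long_term_revenue(ltv_enriched, ltv_baseline, pct_enriched, pct_lost, users):
    """Net yearly revenue impact of keeping the new flow, per the model above.

    ltv_enriched: yearly revenue per user who added more information ($15)
    ltv_baseline: yearly revenue per baseline user ($10)
    pct_enriched: share of users adding more information (0.08)
    pct_lost:     share of users lost to the retention drop (0.02)
    users:        total user base X
    """
    gain = ltv_enriched * pct_enriched * users
    loss = ltv_baseline * pct_lost * users
    return gain - loss

# With X = 1,000,000 users: 1.2M - 0.2M = 1.0M, i.e. $1.0X as derived above.
print(net_long_term_revenue(15, 10, 0.08, 0.02, 1_000_000))
```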

Conclusion: We can base our recommendation on two factors: a) the incremental revenue we generate through increased engagement from the cohort (8% of the population) that provides more information, and b) the revenue we lose from the cohort (2% of the population) that no longer retains. If the net is positive, we recommend launching the feature; otherwise, we recommend not launching it.

0 likes
badge Bronze PM

Clarifying questions:

  • What was the objective of the feature that resulted in an increase in profile information added?
  • Was a reduction in user retention expected?
  • What was the specific change in the sign up flow? I'm assuming that the change was made to the flow post sign up so that it didn't affect the percentage of visitors signing up?
Based on the answers to the above questions, the first objective would be to identify if the reduction in retention is because of the new feature. This can be done by comparing the retention for the 2 buckets. 
Possibility 1:
If we see similar retention for users in both buckets, or better retention for users in the test variant, then the reduction in retention is unlikely to be because of the change in the signup flow.
We can then look at the user funnel as a whole and see what users are doing differently which is causing a reduction.
 
Possibility 2:
If the test variant shows lower retention, then we can look at the funnel for those users specifically and see where the drop-off is coming from. It could be that users in the control variant were returning within 7 days to update their profiles, which wouldn't be the case for users who have already completed them.
Based on the findings, we can decide if the 2% reduction in retention is made up by other KPIs.
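To make the bucket comparison concrete, here is a minimal sketch of a two-proportion z-test, with hypothetical retained/total counts per bucket (real numbers would come from the experiment dashboard):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(retained_a, total_a, retained_b, total_b):
    """Two-sided z-test for a difference in retention rates between buckets."""
    p_a, p_b = retained_a / total_a, retained_b / total_b
    p_pool = (retained_a + retained_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, z, p_value

# Hypothetical counts: control vs. test variant.
diff, z, p = two_proportion_ztest(4_000, 10_000, 3_700, 9_600)
print(f"retention diff={diff:+.1%}, z={z:.2f}, p={p:.4f}")
# A significant positive diff points to Possibility 2 (the new flow itself
# lowers retention); an insignificant diff points to Possibility 1.
```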
 
0 likes

Here is how I would approach this problem:

Ask clarifying questions: 

1. When we say the % of people that added more information increased by 8%, I believe we mean that if, earlier, 50 out of 100 people were completing the signup flow, now 54 out of 100 people are completing it.

2. The 7-day retention rate is calculated as the percentage of people who return to our app after 7 days.

Making a decision based on just these two parameters would not be wise, so we need to look into other metrics as well:

First and foremost, identify the goal of introducing the new user signup flow. A few goals I can identify are:
a) Getting more information/data about the users for recommendations or monetization purposes
b) You have introduced a new feature in your product for which you need this information
c) It's a legal mandate that you need to have this information

Then, align the goals with the additional metrics that we need to have a look at:

1. Acquisition metrics: We know that the number of people who added more information increased by 8%, but what about the number of people coming to our website? Has it increased, decreased, or remained unchanged?

2. Activation metrics: Of all the people coming to our website, how many start the signup process? Within the signup process, how many drop off on each page?

3. For retention metrics, we already know that 7 day retention decreased by 2%

4. We would also need to check monetization metrics.

Note that we need to align the goals with the metrics. If the goal of introducing the new signup flow was to increase monetization, it may be that although 7-day retention dropped, the data captured is worth more. The number of people visiting your website may also have increased; with an advertisement-based revenue model, that may benefit you.

In addition to these metrics, you also need to go through the overall impact on the product that the sign up process may have caused:

Due to the change in the signup flow, was there any impact on any feature inside the product? For example, let's say you started collecting more data to give users more appropriate recommendations; however, those recommendations did not work out, and hence 7-day retention decreased. So we need to check feature-level metrics as well.

So, clearly, we need to look into other data as well to reach a conclusion.

0 likes

Clarifications

What does 7-day retention mean? How do you measure it? Let's assume it's: (number of users who used the service at least twice in the last 8 days) / (number of active users at the beginning of the 7-day period).

 

Do you have any reason to connect the A/B test with the retention metric? Was the retention metric considered a key metric to monitor for the A/B test?


 

To clarify the workflow

1/ Users sign up through the signup flow, 2/ become registered users, 3/ use the service or not, 4/ can de-register from the service.

 

When was the signup flow test started? If it was 6 months ago, then there may not be any causal effect anyway.

Assume the A/B test started at the beginning of the trailing 7-day period, and both metrics are measured today, after 7 days of the test.

 

Let's first determine whether the retention metric decrease is even a problem.

 

What does a 2% decrease mean? Is it a 200bps decrease, or 2% of the retention metric, which is itself a percentage? Let's assume it's a 200bps decrease.

Is this an outlier compared to recent behavior?

E.g., how does the decrease compare to the same day the previous week? There may be an expected decrease from Friday to Saturday anyway.

What about the same time period last year? Any seasonality effect?

Compare against the standard deviation of the metric over the last 3 months. If the drop is less than 1 sigma, it may not be an issue.

Let's assume it's not seasonal and it's more than 3 sigma, which means it's probably an outlier and needs to be investigated.
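A minimal sketch of that sigma check, assuming we can pull a daily history of the metric (the numbers below are illustrative, not real data):

```python
from statistics import mean, stdev

# Hypothetical daily 7-day retention readings for the last ~3 months
# (truncated here for illustration; a real check would use the full series).
history = [0.412, 0.405, 0.398, 0.410, 0.401, 0.407, 0.399, 0.404]
today = 0.385

mu, sigma = mean(history), stdev(history)
z = (today - mu) / sigma
print(f"mean={mu:.3f}, sigma={sigma:.4f}, z={z:.1f}")
# |z| < 1: within normal fluctuation; |z| > 3: likely a genuine outlier
# worth investigating, per the rule of thumb above.
```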

 

The A/B test may not have a meaningful cause-and-effect relationship with the retention metric. Hence, we try to find the root cause of the retention metric decrease, in order to fix it.

 

Retention metric is impacted by the number of active users (denominator) and users using the service twice in 8 days (numerator).

 

Did the numerator or denominator change in the same time period? By how much?

 

Assume active users didn't change much, but users using the service twice in 8 days decreased.
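For concreteness, with made-up numbers: if the denominator holds steady at 10,000 active users while the numerator falls from 4,000 to 3,800 repeat users, retention drops from 40% to 38%, exactly a 200 bps decline, and the investigation narrows to why those 200 users stopped coming back.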

 

Did the type of users who joined change? Are they not finding the service useful and dropping off?

 

Where was the biggest drop in retention rate? Is it coming from users who signed up with the new workflow?

 

We have now narrowed down the cause: users from the new signup flow are driving the drop in retention rate.

 

What was the new sign-up flow? Why are users signing up?

badge Silver PM

Clarify

What was the original goal of the experiment? Since we're collecting more information, I'm guessing we want to use it to increase engagement after sign-up or something related.

Are there any higher business goals that I should be aware of? What company am I at? This is important when evaluating tradeoffs and determining how these metrics tie back to the company's overall business goals.

Investigate

Did we gather any other metrics from the experiment? I would want to look at whether 14-day or 30-day retention showed a similar decrease to the 7-day retention. 7-day retention is important, but it's still a short window; ideally we should optimize for long-term retention because that is most sustainable for the business.

Have other experiments resulted in similar increases/decreases? How difficult have these metrics historically been to move? 2% doesn't seem like much if we're trading it off for profile information that may be difficult to obtain.

Evaluation

Optimizing for profile information

  • Pros: Ability for a more personalized product and re-engagement experience
  • Cons: More user effort required; data usually can't be acted upon immediately; privacy concerns, especially if this info is required

Optimizing for 7-day retention

  • Pros: Leading indicator for future engagement and monetization, which are key business metrics
  • Cons: Need to look at long-term metrics too

Iterate

Because we gathered new information, we may not be using it in the right way. I would clarify the goals of the test, re-examine the user flow, and relaunch the experiment. We can bring more personalization into the 7-day user journey with the information we gathered and see whether that change in the product experience counteracts the extra effort required from the user.

badge Platinum PM

Clarification

  1. Can I assume the signup flow is: 1) user signs up with basic info, 2) user is asked to provide additional profile info, 3) user visits the service again? [Yes]
  2. The only difference between experiment and control is in step 2)? [Yes]
  3. Is step 2) optional, meaning even if users skip it they can still visit the site later? [Yes]
  4. The % of people that add additional profile information is defined as (# of users that provided an additional profile) / (# of users that completed the basic profile)? [Yes]
  5. The 7 day retention is defined as (# of users that visited within 7 days post signup) / (# of users that completed the basic profile)? [Yes]
  6. Can I assume the confidence intervals for both metrics are tight enough that I can trust the experiment results? [Yes] (A quick significance sanity check is sketched below.)
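That assumption is worth verifying. A minimal two-proportion z-test sketch for the retention metric, with all counts hypothetical:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts: retained users / signed-up users per arm.
control_retained, control_n = 4000, 10000      # 40% retention
treatment_retained, treatment_n = 3800, 10000  # 38% retention

p1 = control_retained / control_n
p2 = treatment_retained / treatment_n
p_pool = (control_retained + treatment_retained) / (control_n + treatment_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treatment_n))
z = (p2 - p1) / se
p_value = 2 * NormalDist().cdf(-abs(z))  # two-sided test

# A small p-value suggests the 2% drop is unlikely to be noise.
print(f"z={z:.2f}, p={p_value:.4f}")
```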
Analysis
Awesome, I think there are 4 different user journeys.
  1. Users who didn't complete the additional profile, and showed up within 7 days.
  2. Users who didn't complete the additional profile, and didn't show up within 7 days.
  3. Users who completed the additional profile, and showed up within 7 days.
  4. Users who completed the additional profile, and didn't show up within 7 days.
If we assume there are 100 users that completed the basic signup, and let delta_i be the change (experiment minus control) in the number of users in journey i, we can derive:
delta_3 + delta_4 = 8
delta_1 + delta_3 = -2
So delta_1 = -2 - delta_3: any gain in retained users who completed the profile (journey 3) is more than offset by a loss of retained users who didn't (journey 1), and much of the 8-point completion gain lands in journey 4 (completed but not retained).
This might indicate that the change in step 2) adversely impacted users' perception of the product, so even though they provided additional information, they didn't continue to use it.
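To make the arithmetic concrete with one hypothetical split: if delta_3 = +3, the two equations give delta_4 = +5 and delta_1 = -5. Profile completions are up 8 in total, but 5 previously retained non-completers are gone, netting the 2-point retention drop.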
 
To verify this, I'd also want to look at 14 day and 30 day retention to make sure the trend is consistent with 7 day retention. I'd also compare conversions at different signup stages to get more hints about when and why users drop out.
 
If the trend is consistent, I probably won't launch the feature right away, given that user retention is probably more important than the volume of profiles.
I may iterate with an additional experiment to see whether we can utilize the added profile information to attract users back, for example with an email like "Hey, your friend, x y, has also joined..."
badge Bronze PM
C: Clarifying questions

How are we “encouraging new users to add more profile information”? Is it another step in the sign-up process? Is it an optional step or a mandatory one? Is it a pop-up that pulls the user out of the main workflow?

What is the objective of the business? To increase retention? To increase user engagement? To increase the amount of targeting data to sell to businesses?

 

An 8% increase in sign-ups with additional information is significant, assuming the total number of signups has not decreased. However, for the 2% retention drop, I would first look at the activation %. Has the total activation % dropped since the “encouragement” was introduced? Is retention counted from when the user signed up or from when the user completed their profile? If there was a 2% drop when retention is counted from sign-up, one hypothesis is that the # of sign-ups increased but the # of people who fully completed their profile to activate their accounts stayed stagnant or dropped. Or the # of sign-ups stayed the same, but the # of activated accounts dropped.

 

Assumption: we are seeing a higher drop-off of users who are not completing their profile because the additional steps demotivate them from finishing sign-up. Retention is measured from activation, not sign-up.

 

A/B testing should measure three aspects:

1. Total number of users signing up

2.  Number of users who finish completing their profile

3. The number of users who finished completing their profile and are still retained (see the sketch below).
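A minimal sketch of computing those three numbers per experiment arm, with hypothetical funnel counts:

```python
# Hypothetical per-arm funnel counts from the experiment logging.
arms = {
    "control":   {"signed_up": 10000, "completed_profile": 5000, "retained": 2000},
    "treatment": {"signed_up": 10000, "completed_profile": 5400, "retained": 1900},
}

for name, f in arms.items():
    completion_rate = f["completed_profile"] / f["signed_up"]
    # Retention measured from activation (profile completion), not from signup.
    retention_from_activation = f["retained"] / f["completed_profile"]
    print(f"{name}: completion={completion_rate:.1%}, "
          f"retention(from activation)={retention_from_activation:.1%}")
```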

 

What to do:

(Preliminary) Change the way the data is measured:

% of users who completed their profile with additional information, out of those who signed up

% retention, counted from activation, among people with the additional profile information

VS

% of users who completed their profile without additional information, out of those who signed up

% retention, counted from activation, among people without additional profile information (from before the feature was introduced)

 

If the additional information is optional and activation is completed, then ideally retention should be the same, and we should focus on moving users from the “without additional information” group to the “with additional information” group.

 

If the retention drop, under the adjusted measurement, is concentrated in one of the two categories, we need to look into those users' profiles and understand whether something in our engagement and retention efforts changed.