A user satisfaction survey was conducted for two groups of Facebook users (each with a 50k sample size). Group 1: Enabled certain login security features (Experiment). Group 2: Did not enable these security features. It was found that user satisfaction with Group 1 was 30% lower than with Group 2. What do you think are the reasons? Comment on how the survey was conducted.
With that being said, let's understand the problem at hand in a bit more depth:
- What exactly are the additional security features?
- Can we quantify or qualify the impact the extra features have on strengthening the security of the platform?
- How many steps have they added to the customer journey compared to the control group?
- Did the test reveal any insights on collateral impact? Other than customer satisfaction decreasing, do we see drops at any other steps along the customer journey? For example, has the number of logins decreased? How about DAUs, MAUs, and other usage metrics?
If the initial experiment did not reveal insights about further collateral impact, we can design more A/B tests to understand the collateral impact (or lack thereof); a rough sketch of that kind of check is shown below.
If the collateral impact is high, we can consider pausing the changes to login. However, if the collateral impact is not significant, we can continue with the extra security features to create a safer, more secure Facebook platform for everyone.
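As a rough illustration of the collateral-impact check described above, here is a minimal sketch (in Python) that compares login-completion rates between the two cohorts with a pooled two-proportion z-test. All counts are made-up placeholders, not data from the survey.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal CDF
    return z, p_value

# Hypothetical login-completion counts for the two 50k cohorts (placeholder data).
logins_experiment, n_experiment = 41_200, 50_000  # cohort with the extra security features
logins_control, n_control = 42_100, 50_000        # cohort without them

z, p = two_proportion_ztest(logins_experiment, n_experiment, logins_control, n_control)
delta = logins_experiment / n_experiment - logins_control / n_control
print(f"Login-completion rate delta: {delta:+.3%}, z = {z:.2f}, p = {p:.4f}")
```

The same comparison can be repeated for DAU, MAU, or any other funnel step to see whether the satisfaction drop is accompanied by real usage drops.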
Here is the approach I will take: ask detailed questions and move forward based on the answers.
Things that could have gone wrong:
1. The survey was poorly constructed.
2. The wrong customers were surveyed.
3. The security features were poor and did not contribute to increased security/privacy or a better UX; or users didn't see the value, the value was not communicated clearly, or the features did not fit the perceived "ethics" associated with Facebook.
4. The analysis could be wrong.
5. A competitor could have solved the security issue better.
First, I will understand why the survey was done.
- Was it only a survey, or was there also an additional implementation, such as an A/B test?
- Is the conclusion free of errors?
- Were there security issues found in FB that necessitated the change?
- If so, were the intent and reasoning communicated appropriately to the survey audience (depending on their technical level; security is considered boring by many people)?
Do the security features ACTUALLY increase security, or do they just make the process more cumbersome?
Maybe the security features are known to be poor at providing additional security.
How did the features affect the user experience?
- Did they necessitate changing the password to a more complicated one?
- Did they necessitate 2FA, which adds extra steps to log in?
- Did they affect SSO compatibility with other sites?
How did they affect the privacy of the customer?
- They could require more personal information and therefore be unpopular.
Statistical results are also dependent on the type of customers chosen.
- Who was the target customer for the survey?
- Were they selected properly?
- Was the sample set well distributed among all types of people?
- Could the sample set skew toward less security-savvy people?
- The survey could reflect a local phenomenon that does not generalize to the broader user base.
Are the results statistically significant?
Analysis could indicate that the observed difference is not statistically significant.
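One note on significance: with 50,000 respondents per group, even a very small satisfaction gap would register as statistically significant, so the real question is practical significance and sampling bias. Here is a minimal back-of-the-envelope sketch (in Python, assuming a placeholder 70% baseline satisfaction rate) of the minimum detectable difference at these sample sizes:

```python
import math

# Placeholder assumptions: the prompt only tells us the group sizes (50k each),
# so the baseline satisfaction rate below is an assumed value for illustration.
n_per_group = 50_000
baseline_satisfaction = 0.70  # assumed share of "satisfied" responses in the control group
z_alpha = 1.96                # two-sided alpha = 0.05
z_beta = 0.84                 # 80% power

# Minimum detectable difference in satisfaction rate between two groups of this size.
se = math.sqrt(2 * baseline_satisfaction * (1 - baseline_satisfaction) / n_per_group)
mde = (z_alpha + z_beta) * se

print(f"Minimum detectable difference: {mde:.4f} "
      f"(about {mde / baseline_satisfaction:.1%} relative to baseline)")
# With 50k respondents per group, gaps well under one percentage point are detectable,
# so a 30% gap will clear significance easily; the bigger concern is bias, since users
# self-selected into enabling (or not enabling) the security features.
```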
Was the survey conducted scientifically, and not just using random inputs?
Were the questions clear?
Were the questions appropriate to come to the conclusion about user satisfaction?
Should FB have followed better approaches to the same problem used by Google, Microsoft, or some other company?
Now, as per the results, the group that had to face the login security features was dissatisfied (by a huge margin).
What were the minimum criteria for success for this feature?
Any security feature is bound to create some dissatisfaction, but how much is acceptable?
And what is the security feature? Does it guard against such a big security threat that even such high dissatisfaction levels can be accommodated? Can there be a workaround or another way to achieve security with less dissatisfaction?