Here's how I would answer this product improvement question. Feedback welcomed!
Clarifying questions:
Are we talking about building a product? (Up to you)
What exactly is misinformation? (Fake news, inaccurate facts being shared on social media)
Goal: Reduce the amount of misinformation spread on the platform and improve user trust.
Key users associated with misinformation:
News Accounts (e.g., CNN) - They may get caught up in controversy when rogue actors share fake, photoshopped versions of their articles.
Influencers - Deepfakes can pull celebrities into fake news stories or fabricated videos.
Regular FB users - These are users who spread fake articles by sharing and forwarding them to friends and family without verifying the facts. Some users may also create fake accounts and share fake information.
For the purpose of this exercise, I want to focus on regular users since these are the ones who are creating fake information and spreading rumors.
Some of the problems associated with accounts are:
Lack of Trust
Don’t know who started the article and what is the source of information
Lack of ability to effectively report inaccurate information
Hard to report fake information. The design is cryptic (3 dots)
Lack of information -
People don’t know how to get the most trustworthy information when they see fake information.
These 3 problems are deeply interrelated and can't be solved in isolation. We need a holistic solution that addresses all of them.
Potential solution:
FB Fact Checker
A team of community moderators detects trending fake topics using trending algorithms and creates a report of trending fake topics. A key set of expert fact-checkers then verifies the key trends and publishes a weekly report explaining fake vs. truth to FB users. This report will be shown in the FB newsfeed as a highlighted card.
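To illustrate what the "trending algorithms" step could look like, here is a minimal Python sketch that flags topics whose share volume spikes above a historical baseline. The data shapes, thresholds, and the assumption that share events are already tagged with a topic are all illustrative, not a description of Facebook's actual pipeline.

```python
from collections import Counter

def detect_trending_topics(share_events, baseline_counts, spike_ratio=3.0, min_shares=1000):
    """Flag topics whose share volume in the current window spikes well above
    their historical baseline. share_events is a list of topic strings seen in
    the current window; baseline_counts maps topic -> average shares/window.
    All thresholds are illustrative."""
    current = Counter(share_events)
    trending = []
    for topic, count in current.items():
        baseline = baseline_counts.get(topic, 1)  # treat unseen topics as baseline 1
        if count >= min_shares and count / baseline >= spike_ratio:
            trending.append((topic, count, count / baseline))
    # Biggest spike first, so fact-checkers review the fastest-growing topics
    return sorted(trending, key=lambda t: t[2], reverse=True)

# Example: "miracle cure" jumps from ~200 shares/window to 1,500 -> flagged
print(detect_trending_topics(
    ["miracle cure"] * 1500 + ["local news"] * 300,
    baseline_counts={"miracle cure": 200, "local news": 280},
))
```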
Content recommendation
Use FB community managers to detect fake trending topics, and for those topics recommend similar news articles from verified accounts next to them in the feed. For example, run topic detection and recommend articles on the same topics from verified sources before or after the fake news card element.
Limit information forwarding and improved reporting
For articles that have been shared very often (say, more than 10 times) and that contain topics deemed fake-trending, we should make it clearer to users that the information seems fake and also allow users to flag the content to FB community managers.
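As a rough sketch of the forwarding rule just described (a share count over a limit plus a fake-trending topic triggers a warning), something like the check below could sit in the share flow; the topic set, limit, and label text are illustrative assumptions.

```python
from typing import Optional

FAKE_TRENDING_TOPICS = {"miracle cure", "election rigging"}  # hypothetical output of the fact-checker pipeline
SHARE_LIMIT = 10  # the "more than 10 times" threshold from above

def share_warning(article_topic: str, share_count: int) -> Optional[str]:
    """Return a warning label when an article is both widely forwarded and
    about a topic currently flagged as fake-trending; otherwise None."""
    if share_count > SHARE_LIMIT and article_topic in FAKE_TRENDING_TOPICS:
        return ("This article has been shared many times and covers a topic "
                "flagged as potentially false. You can report it to FB community managers.")
    return None

print(share_warning("miracle cure", 25))  # warning shown
print(share_warning("local news", 25))    # None: topic not flagged
```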
Deep fake detectors
For videos manipulated with deepfakes to make it appear that influencers are spreading fake information, detection is needed using topic modeling and influencer identification. Allow such content to be flagged via explicit flags and label it as manipulated video.
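A minimal sketch of how the labeling step might aggregate per-frame scores, assuming a pre-trained frame-level deepfake classifier already produces a manipulation probability for each frame (that model itself is out of scope here); both thresholds are illustrative.

```python
def label_video(frame_scores, flag_threshold=0.8, min_flagged_fraction=0.3):
    """frame_scores are per-frame manipulation probabilities, assumed to come
    from a pre-trained deepfake classifier (not shown here). The video gets a
    'manipulated video' label when enough frames exceed the threshold; both
    numbers are illustrative, not production values."""
    if not frame_scores:
        return None
    flagged = sum(1 for s in frame_scores if s >= flag_threshold)
    if flagged / len(frame_scores) >= min_flagged_fraction:
        return "manipulated video"  # shown as an explicit label on the post
    return None

# Example: 6 of 10 frames look manipulated -> label applied
print(label_video([0.9, 0.85, 0.2, 0.95, 0.1, 0.82, 0.3, 0.88, 0.4, 0.9]))
```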
Prioritization: I am going to prioritize the solutions based on engineering cost and impact on UX (engagement, reach, and impact on the goal). I am not showing the prioritization matrix here, but I used a t-shirt sizing approach, measured each feature on impact on the goal/UX and engineering cost, and came up with a stack-ranked ROI score.
My prioritized list is
FB Fact Checker (table stakes) - This ought to be built as a team of human community managers who monitor fake news in the market and report to experts for verification. This ensures FB can convey to users that it is monitoring fake news in their country and providing meaningful information.
Deep fake detector (Since it is a huge problem that is being faced by social media outlets)
Content recommendation - This can really help users discover both sides of the story when they see random posts on FB and thus make informed decisions.
Limit information forwarding
Success metrics:
Impact on user trust of FB - measured via brand survey results and NPS surveys
# of fake topics detected and their lifetime on the platform
Time to detect fake topics and how their sharing rate changes over time
- Context:
- FB is a social media platform where a huge chunk of the content is created and consumed by users; the rest comes from brands/media channels. Currently Facebook provides a 'Report a post' feature: you can click the 3 dots at the top right of a post and report it. Facebook passes the post to its content reviewer team, which checks whether the content violates the 'community standards policy'; accordingly, the content's distribution is reduced, and sometimes it might be taken down as well.
- Clarifications
- By solving the problem of misinformation, what do we mean? Do we need to reduce misleading/false information content on Facebook? So basically we need to improve the Facebook product in terms of reducing false information content? Is my understanding correct? Yes
- For the scope of the discussion let's restrict ourselves to the mobile application.
- Goal:
- Reduce false information content on Facebook to improve customer satisfaction of Facebook users and improve retention. I am taking customer satisfaction and retention as the goals for reducing false info, since a wrong piece of information spoils a customer's mood, and if a person engages with such content and it turns out to be harmful, the user might blame Facebook and end up churning from the platform.
- Type of misinformation
- False/incorrect information posted just for fun that does not intend to harm any person - like a false check-in
- False/incorrect information posted to redirect users to malicious websites/links - e.g., to hack users' accounts
- False/incorrect information posted to increase your network
- Out of these I would want to focus on malicious content as it is the most harmful one.
- Pain points of catching this issue from FB perspective
- Difficult to manually review all the content as the amount of content is huge.
- 'Reporting the issue' feature is available but users don't report malicious content
- Difficult to stop creation of such post itself
- Prio
- Pain point 1 - FB already has AI mechanisms to shortlist such posts and forward only the suspicious ones to reviewers -> so this is already in place.
- Pain pt 2-
- Impact - high
- Relevance - high, because it is a community application; thus the best way to weed out misinformation should also be via the community itself.
- Pain pt 3:-
- Impact - high
- Relevance - medium. As a platform, Facebook wants to ensure that authentic content is posted.
- Would prio - pain pt 2 based on above evaluation.
- Solutions
- Incentivising users to report such content by awarding badges for being good community workers. We can have bronze, silver, and platinum badges; as users report more misinformation, and manual review confirms a report is correct, the user's points increase and they earn a badge.
- Badge would be displayed along with the user profile pic
- Reduce users points if the report was found to be incorrect.
- Create a post once user receives a badge, and then user can share it on his/her timeline --> similar to memories feature.
- Incentivize users by featuring them as 'Community workers of the week' - users in your area/country whose reports in the last week were found to be correct.
- Educate users on malicious content -
- From time to time (frequency to be decided), some kind of information should appear on the news feed home page itself - for example, like the 'mark yourself safe' feature - informing users about possible malicious content and what they should do if they find such content.
- It might happen that users end up clicking on a post that takes them to a website or link that is fake or looks fake. In such cases it is very difficult for the user to report the issue, because once the post has been viewed it disappears from the user's feed; the user has to visit the profile or page behind the post and then report --> which is tedious.
- Provide an option to see all viewed posts in the activity log, or also count 'clicking on a link' or clicking on the post as an interaction with the page.
- Evaluation, prio
- Incentivising via badges -
- Impact on FB - high
- Effort - high. FB does not have any points system till now so it would require fresh effort.
- User impact - high. It will motivate other users as well as the user who reported will feel that he has done something for the society.
- Incentivising via featuring
- Impact - med
- Effort - high.
- User impact - might backfire, as users don't want to be featured on an unknown user's profile ... they might start getting unnecessary friend requests.
- Educate the users
- Impact - low
- Effort - low
- User impact - low.
- Activity log to have posts/ ads on which u clicked as well as an interaction
- Impact - medium. As users might not be very aware about the activity log itself.
- Effort - medium as already activity log exists, we just need to add another category. And as a disclaimer we would mention that it tracks from so and so date i.e launch date.
- User impact - medium
- Based on the above, I would prioritize the badge incentive feature, as it has the highest impact and will increase engagement, which is tied to the mission of Facebook.
- Trade off
- Users might start reporting genuine content as misinformation just to earn the badge. This would increase the workload of the reviewers, and the rise in false positives would mean the actual positives take longer to resolve - thus more time to take down misinformation posts, and the overall purpose might be defeated. To avoid this, we can use the AI algorithm currently used to detect malicious content: if reported content has a very low malice score, we reject it via AI and do not take it to a manual reviewer. We can also warn users who have reported more than 1 piece of content in 1 week (some fixed frequency) that if a report is incorrect, points will be deducted, and in the worst case the account might be suspended as well.
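A small sketch of the mitigation just described: reuse the existing malice-scoring model to reject obviously false reports before they reach reviewers, and warn repeat offenders. The threshold, weekly limit, and data shapes are assumptions for illustration.

```python
from collections import defaultdict

AI_REJECT_THRESHOLD = 0.05    # reports on content scored almost certainly benign are auto-rejected
WARNING_REPORTS_PER_WEEK = 1  # mirrors the "more than 1 report per week" warning rule above

false_reports_this_week = defaultdict(int)  # reporter_id -> rejected reports this week

def route_report(reporter_id, content_malice_score):
    """Decide where a user report goes. content_malice_score is the existing
    AI model's probability that the reported content is malicious."""
    if content_malice_score < AI_REJECT_THRESHOLD:
        false_reports_this_week[reporter_id] += 1
        if false_reports_this_week[reporter_id] > WARNING_REPORTS_PER_WEEK:
            return "reject + warn reporter (points deducted, possible suspension)"
        return "reject via AI; skip manual review"
    return "queue for manual review"

print(route_report("user_42", 0.01))  # first bad report: silently rejected
print(route_report("user_42", 0.02))  # second bad report in a week: warned
print(route_report("user_42", 0.90))  # plausible report: goes to a reviewer
```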
- Some metrics to measure impact
- Increase in usage of the 'Reporting an issue' feature with the category 'False information' -> this will indicate whether the feature is being used
- No. of cases reported before and after launch, with a filter for correct reports to ensure the feature is not misused --> we can check month over month
- Customer NPS via surveys to ensure whether customer satisfaction goal is met
Clarify
- Misinformation? Define - factually incorrect? fake accounts?
- FB all platform? including Instagram, Whats app etc?
- Why is FB solving this?
- consumers - individuals consuming info on FB. Could be news, media, movies, songs, photos etc and sharing info
- producers - individuals posting content such as news updates with links to not credible sites, photos
- brands - news orgs, celebrities posting content to engage fans
- Discover content and trust the authenticity of the content (news, video)
- Resharing content to friends should feel safe. Bots can add sad reactions to pics, which kills the emotional connection
- Connecting with people. Lot's of account are fake and have wrong info.
- Privacy - keep my information confidential
Prioritize based on impact to the goal and how painful the problem is. Pick #1.
Solutions
- Verified badge concept for news articles shown in feeds etc., so users know the sources are authentic
- Use AI to detect deepfakes in videos and flag them in red.
- Chat - detect if the person is a bot, using AI to read their 'human index'
- Alerts - alert users that bots have been liking their pics and block them
To start with, Facebook's mission is to give power to the people to build communities and bring people closer together by providing a platform where people can express and share what matters to them and discover what's happening around the world.
The problem of misinformation and fake news is the biggest obstacle for Facebook to make significant progress towards this mission.
Before I start with my answer, let me ensure I understand the scope of the question. Misinformation may come in the form of fake news items, ads, posts, etc. Several actors, whether agencies or users, post inaccurate facts intentionally or unintentionally. On the other side, several users may be reacting to those inaccurate facts and even sharing them with their friends, intentionally or unintentionally, because the content looks engaging. It is important for FB, however, to separate fake facts from opinions.
User Personas:
- Facebook Active Users who want to report fake or inaccurate posts
- Facebook Integrity Agents who want to detect and handle fake posts put up intentionally
- Agents who want to take action to control the impact of fake information
JOURNEY:
- I want to immediately flag a post and, depending on how sure I am, give it a severity warning.
- I would also like to know the inaccurate posts that were flagged by other fellow users or by FB admins
- If I see a post that I thought was valid but is flagged as inaccurate, I would like to challenge the determination with some explanation. I would also like to know if my determination was wrong.
- On each post, there Is an option to flag and indicate severity Low Medium High
- If a particular post is marked suspicious, it could be color coded to indicate that it is under review
- Posts that are deemed inaccurate and high negative impact should be removed / hidden from all community newsfeeds
- I also have a button to challenge a particular flagged post if I feel it is not inaccurate, and an ability to explain why
- MVP: Users can flag posts they suspect - [Low Effort, Potential high engagement]
- MVP + 1: Add Severity indicator (Low, Medium, High) - [Low Effort, Potential high engagement].
- All flagged posts with high priority will go to a team of fact-checker experts. They can pay special attention to any posts that have been shared 10K+ times.
- These agents will have the ability to mark a post as suspicious if they have reason to believe so, which creates the color-coded indicator.
- If the post is deemed inaccurate and high negative impact, agents will have the ability to permanently hide/remove the post from user accounts, and the offending accounts will be suspended.
- Alternative is to appoint monitors from the community (like groups) and award them some points for every fake story reported and confirmed - which they can redeem for some gift cards
Clarifying questions:
- Do you want to solve this problem within the app/product itself or do you want to build another product? (up to you)
- What do you mean by misinformation here? (All types of posts/ reels / stories that are not based on a true fact)
- Would this be for personal accounts with limited following or would it be for public accounts for a larger following / engagement? (both)
- Why are we solving this problem? Have we had trouble with misinformation on Instagram? (increased flagging of content / customer concerns because of an abundance of misinformation)
- Are we solving this for any specific user segment? No
- Do we have any constraints (technical/ budget / legal)? No
About the product:
Instagram is a social networking platform that thrives on user generated content. Meta’s overall vision is to connect people over the internet and also to be a platform where they can host right/ ethical content.
The problem with user-generated content is that it is highly dependent on the creator’s POV which cannot be classified as right or wrong.
Our goal in this exercise is to ensure that the right information is relayed to the user while staying aligned with the vision of Meta (free will when it comes to sharing content).
Users:
Creator:
Consumer:
Misinformation is a big issue for the consumer, as this is the largest segment of the user base and consumers are the victims of misinformation, so I am prioritising consumers.
What kind of content does a consumer consume?
- reels/ stories / posts of friends, family, acquaintances (people they know)
- content shared by public accounts (mostly news/ gossip/ advice etc.)
- content related to celebrities / famous personalities of social media
- viral content (posts / reels etc.) from people they don’t know
- content which contains a lot of filters / deep fake / photoshop
The largest risk of misinformation is with the last three types of content, so I am targeting that.
Pain points:
- herd mentality (if many people are saying so then it must be true)
- no idea about what the public feels about the content.
- have no means to fact check / verify (can always cross-check on Google if there is a high intent to verify the fact)
- no way of expressing their disagreement/agreement with the content (except comments); their opinion gets lost in a sea of comments.
- no authority on Instagram that can prove otherwise (unless a celebrity shares a public post disproving the gossip/ rumour)
- have no idea how to detect fake/ photoshopped videos/ content.
Features:
Public poll: P0
Content that has crossed a level of virality (e.g., more than 1,000 views or 500 likes) will have a feature where anyone can start a poll on whether the content is genuine or not. It will not be a mandatory feature; anyone who wishes to participate can do so. This would give users an idea about whether the content has a source of truth.
This can be limited to content that contains news, gossip, rumours, or advice (legal, personal, health, career, etc.); a rule sketch follows below.
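Here is that rule sketch for poll eligibility, assuming the virality thresholds and content categories mentioned above; all values are illustrative.

```python
VIEW_THRESHOLD = 1000   # illustrative virality cutoffs from the description above
LIKE_THRESHOLD = 500
POLLABLE_CATEGORIES = {"news", "gossip", "rumour", "advice"}

def can_start_authenticity_poll(views: int, likes: int, category: str) -> bool:
    """A public 'is this genuine?' poll can be attached only once content
    crosses the virality bar and falls into an eligible category."""
    is_viral = views > VIEW_THRESHOLD or likes > LIKE_THRESHOLD
    return is_viral and category in POLLABLE_CATEGORIES

print(can_start_authenticity_poll(5000, 200, "gossip"))  # True
print(can_start_authenticity_poll(5000, 200, "pets"))    # False: category not eligible
```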
Deep fake / Photoshop detectors: P3
Instagram should add AI to its algorithm that can detect the use of artificial means like deepfakes or other AI-generated images/content on Instagram. Every time a viewer consumes such content, there is an advisory note to the general public: 'Hey, the content that you are consuming might have been tampered with via Photoshop, deepfakes, etc.'
Automatic flagging of content with negative sentiments: P1
Instagram should build an engine that runs sentiment analysis on content (on a weekly basis) by analysing comments (reserved for content with more than 100 comments), automatically detects negative sentiment, and flags the content. We can place flagged content in buckets like 'violence', 'body shaming', 'hate-speech', and 'racist', based on the top words/phrases used in comments. The flagged content would then be reviewed by a content auditor to double-check, and if needed, strict action can be taken.
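A minimal sketch of the bucketing step, using keyword matching as a stand-in for a trained sentiment/toxicity model; the buckets, keywords, and the 10% flag cutoff are illustrative assumptions.

```python
# Illustrative keyword buckets; a production system would use a trained
# sentiment/toxicity model rather than keyword matching.
BUCKETS = {
    "violence":     {"attack", "kill", "hurt"},
    "body shaming": {"fat", "ugly", "gross"},
    "hate-speech":  {"hate", "disgusting"},
}
MIN_COMMENTS = 100  # the "more than 100 comments" rule from the description

def flag_content(comments):
    """Bucket a post by negative phrases in its comments and return the
    buckets worth sending to a human content auditor."""
    if len(comments) <= MIN_COMMENTS:
        return []
    hits = {bucket: 0 for bucket in BUCKETS}
    for comment in comments:
        words = set(comment.lower().split())
        for bucket, keywords in BUCKETS.items():
            if words & keywords:
                hits[bucket] += 1
    # Flag a bucket if at least 10% of comments matched it (illustrative cutoff)
    return [b for b, n in hits.items() if n / len(comments) >= 0.10]

# Example: 120 comments, 20 of them body-shaming -> ["body shaming"]
print(flag_content(["so gross"] * 20 + ["nice post"] * 100))
```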
Instagram bot moderators: P2
There is a lot of misinformation spread on Instagram via group chats/private DMs. We can have moderators (very similar to Reddit) that monitor the content shared in these DMs. They would detect content that carries misinformation or negative sentiment and generate a warning/advisory to the rest of the members in the group that the shared content might not have a source of truth or is spreading negative sentiment. It is then the user's choice to consume or remove the content.
Tradeoffs:
- Public opinion is not always the right opinion (so public polls will be hard to verify)
- Deepfake/Photoshop detectors are technically very difficult to implement
- It is hard to monitor content on Instagram, which generates billions of pieces of content every week (very hard to implement at such a scale)
- People might feel that we are invading their privacy with the bot moderators (Meta is already in a negative light for this)
Metrics:
- polls created / participated in weekly
- flagged content weekly / # content that is classified as negative (weekly)
- Average bot activity rate in group chats (weekly) (No. of times an advisory is generated in a week per group chat)
- Customer reviews on app store & play store
We first need a definition for misinformation. Do you already have one? Or should I come up with one?
Types of info
1/ Known facts about the physical world. E.g., the Earth is spherical, the Sun rises in the east, the highest mountain is Everest, the set of natural numbers is infinite.
2/ Current Statistics: Population of US. Some of this can be checked.
3/ Historical information:
4/ Ongoing news cycle.
Type of content
Text, audio, images, videos
What does solving mean? Definition of success? Which metric do we want to objectively change?
Goal: Reduce spread of mis-information.
This is very tricky - goes against the core product tenets which is to make it easy to spread info.
Goals
1/ Take down fake content which is already viral
2/ Prevent fake content from going viral beyond a stage
Success metric:
Number of posts/week with virality score > xx that are triaged as fake. For a high threshold xx, this number should be zero.
There are three stages: Detect, Triage, Prevent spread (a routing sketch follows after the table below).
| Stage: Solution | Impact | Execution complexity |
| --- | --- | --- |
| Detect: Enable users to flag fake news | H | L |
| Detect: Send any content with virality score > x for triage | H | L |
| Triage: Use AI to score fakeness, detect deepfakes, and classify content as entertainment vs. information | H | H |
| Triage: Set up a community of moderators to triage potential fake news | H | H (start here) |
| Prevent: Stop virality (needs well-defined policies on the prevention technique to be used) | | |
| Prevent: Take down posts if needed | | |
| Prevent: Take down/suspend key influencer accounts creating/peddling a lot of fake news | | |
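To make the Detect-to-Triage handoff concrete, here is a minimal sketch that routes user-flagged or highly viral posts into a triage queue ordered by virality. The post shape and threshold are assumptions for illustration.

```python
import heapq

VIRALITY_THRESHOLD = 0.9  # the "virality score > x" cutoff; value is illustrative

def build_triage_queue(posts):
    """Route posts to fact-checkers: anything user-flagged or above the
    virality threshold enters triage, most viral first. Each post is assumed
    to be a dict with 'id', 'virality' (0-1), and 'user_flags' (a count)."""
    heap = []
    for post in posts:
        if post["user_flags"] > 0 or post["virality"] > VIRALITY_THRESHOLD:
            # Negate virality so heapq (a min-heap) pops the most viral first
            heapq.heappush(heap, (-post["virality"], post["id"]))
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

posts = [
    {"id": "a", "virality": 0.95, "user_flags": 0},  # viral -> triage
    {"id": "b", "virality": 0.40, "user_flags": 3},  # flagged -> triage
    {"id": "c", "virality": 0.20, "user_flags": 0},  # neither -> skipped
]
print(build_triage_queue(posts))  # ['a', 'b']
```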
Answer Overview -
Each of the answers so far contains some important points, but I feel there could be a more precise structure for this question - one that is comprehensive, easy to follow, and consolidates the important points from each of the answers.
Approach -
This question demands a bit more than the typical structure of Clarifying questions, assumptions, Goals, User groups, problems/pain points, solutions, evaluation criteria, conclusion.
Rather than directly using a user-groups-focused approach, we need to break down the problem from inception to outgrowth and then subdivide based on users within this classification if needed.
Clarifying Questions -
- What is misinformation? - Fake news, inaccurate facts
- Is opinion different from misinformation? Yes, as long as it is explicit and not confused with facts
Assumptions -
- We are focusing on misinformation in general and not to a specific user group/topic etc.
High Level Goals -
- To capture the misinformation
- To reduce the spread of misinformation
- And thus, improve user trust (This is the primary goal and will be auto accomplished by addressing first two goals)
Breaking down the problem in order from inception to outgrowth (the notes in brackets also include some solution ideas) -
- Is the misinformation shared intentionally or unintentionally? (It is difficult to identify!)
- What is the source of the misinformation? (Is it created on FB or shared from other channels/websites/blogs etc.?)
- What is the impact of the misinformation? (How many users it is affecting? How much is the potential damage associated with it?)
- What is the criteria to classify a content as misinformation? (Unauthentic sources, created by untrusted/new/fake accounts, flagged by readers, expert verified, machine/algorithm spotted)
- How to reduce the spread of misinformation? (Authentic source badges, Opinion/Facts Tags, Fake/Misinformation flags, Removing/Deleting verified misinformation, Prioritizing the verification of trending posts, Actions against identified culprits)
Evaluation Criteria -
- Solutions should be evaluated on the basis of how much impact they have in maintaining the user trust. In short term, solutions that can capture and remove trending misinformation posts would be given priority whereas in long term, solutions that can resolve the problem from the root level would be given priority. At the same time we would also need to look at the efforts associated with implementing the solutions.
Prioritization -
| Serial No. | Solution | Focus | Impact | Efforts | Comments | Priority Rank |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Identifying trending misinformation posts | Short term | High | Low | Algorithms or qualified professionals can look at trending topics, verify the information, and remove harmful content | 1 (low-hanging fruit) |
| 2 | Tagging posts based on sources | Long term | High | High | Posts can be tagged as Authentic/Perspective etc. based on the source; for example, established corporations likely publish research-backed content, while blogs likely carry user perspectives. Since this requires information about every link shared on FB, the effort needed is high, but because it solves the problem right at its inception, it would be highly effective in the long run | 2 (solves the core of the problem) |
| 3 | Enabling users to report/flag a suspicious post | Both short and long term | Low | Low | Since users usually don't have the expertise or knowledge to classify information as fact or fake, this solution may not give the desired results; so, though the effort is low, it is not recommended as a priority | 3 (may not be effective enough) |
Thanks
Before we jump into solving misinformation, let's define it. Misinformation is basically false news and doctored content that exists to spread false narratives and erode people's trust. As Facebook's mission is to empower people to build communities, garnering trust is a key aspect of that, and to do so we have to solve misinformation.
There are two prongs to addressing misinformation: i) using Facebook tools/resources, such as the Community Operations team and ML algorithms, to spot and stop misinformation; and ii) empowering Facebook users to spot and stop misinformation. I'll go into the details of each prong here.
i) Using Facebook tools/ resources
We can have content moderators look into content that is posted on Facebook and train them to spot fakes by a) educating them about what the real news is and b) training them to spot patterns. For example, there can be patterns in the source, in the type of narrative, in the way images are doctored, etc. The content moderators can also surface new patterns they have noticed - and then we can teach these patterns to ML algorithms to identify misinformation at scale.
ii)Empowering Facebook users
We can empower Facebook users by directing them to the right source of knowledge beforehand. For example, Facebook currently highlights the CDC's updates most prominently for any COVID-related inquiries. Once the real news is established, users will learn how to spot fake news. Then, we can empower users to stop misinformation from spreading by giving them the tools to flag content as fake news. Our ML algorithm can also learn from these user actions and fortify the defense even further. A minimal sketch of that learning loop follows below.
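Here is that minimal sketch of the learning loop, assuming posts are represented as raw text and reviewer-confirmed flags serve as labels; the scikit-learn model and features are illustrative, not Facebook's actual system.

```python
# Requires scikit-learn. Posts are treated as raw text; the model and
# features here are illustrative stand-ins for a production classifier.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, so it scales to a stream
model = SGDClassifier(loss="log_loss")            # online logistic regression
                                                  # (use loss="log" on older scikit-learn)

def learn_from_flags(batch_texts, batch_labels):
    """Incrementally update the misinformation classifier whenever a batch of
    user flags (1 = confirmed fake, 0 = not fake) is verified by reviewers."""
    X = vectorizer.transform(batch_texts)
    model.partial_fit(X, batch_labels, classes=[0, 1])

learn_from_flags(
    ["miracle cure doctors don't want you to know", "community bake sale this sunday"],
    [1, 0],
)
```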
Once misinformation is spotted - whether by FB users, an algorithm, or the Community Operations team - Facebook can take drastic action to stop it by banning pages that post a lot of misinformation and banning people who can be traced back as the source of misinformation.