- Creators who post such things deliberately
- Creators who post such things inadvertently
| Journey | Pain Point | Severity |
| --- | --- | --- |
| Pre-posting | I trust incorrect sources without knowing it at the time (blogs) | M (most of the time I will try not to rely on blogs) |
| Pre-posting | Sources give incorrect data and I believe it | L (the quality of the source data becomes important, so which source to trust matters) |
| Pre-posting | I trust forwards and create videos based on them | L (I trust the source as a person and assume he/she sent me the details in good faith) |
| During posting | No one tells me this video is filled with inaccuracies | M (it would be better to manage this at the script stage itself) |
| During posting | I don't know that certain words/language can be classified as offensive | M (people are usually sensitised to what is acceptable and what is not, hence lowering the priority to Medium) |
| Post-posting | Confused as to what led to the backlash | S (your comments section will tell you this) |
| Post-posting | How to avoid such a backlash in the future | S (you will learn based on what happens in the comments section) |
| Solutions | Reach | Impact | Effort |
| --- | --- | --- | --- |
| Have a set of trusted sources for images and news, and ask creators to give you the references - much like the reference list at the end of a book. Include this as a parameter when recommending: when the sources are from your whitelist, give that video a higher ranking. | L (this would benefit everyone) | L (because sources are known, you can always cross-check; a references section can show the details further) | L (you will need to create a whitelist of sources and then cross-reference, so this will take some effort) |
| Run the transcript/script through an LLM to check for known fake news or morphed images in the video (see the sketch after this table). | L (this would benefit everyone) | L (because Gemini is trained on a vast corpus, and we can strike deals with publishers to onboard their papers, this will be a good addition) | M (the models have already been trained, so this should be OK) |
| Allow users to run a fact check for things like abusive language, known deep fakes, and known misinformation campaigns. | M (not everyone would think to run it) | M (not everyone would think to run this) | M (LLMs could be leveraged here) |
| Have a community-notes-like feature where a group of people can fact-check a video and have it marked accordingly; also downgrade the ranking of such a video. | M (good as a post-posting measure but might not prevent problems in the long run) | S (the post has already gone out) | M (a crowdsourcing framework may need to be built out) |
| Automatically disable commenting when the comments section is becoming toxic. | M (defining "toxic" will need thought) | S (wouldn't have as much impact) | M (models exist for identifying certain types of toxicity, so these can be leveraged) |
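As a rough illustration of the LLM check above, here is a minimal sketch; `call_llm` is a hypothetical stand-in for whatever model endpoint (e.g., Gemini) the team would actually use, and the prompt and response format are assumptions:

```python
import json

def build_fact_check_prompt(transcript: str) -> str:
    """Assemble a prompt asking the model to flag known misinformation."""
    return (
        "You are a fact-checking assistant. Review the following video "
        "transcript and list any claims that match known misinformation "
        "or reference known manipulated media. Respond as JSON with keys "
        "'flagged' (bool) and 'claims' (list of strings).\n\n" + transcript
    )

def fact_check_transcript(transcript: str, call_llm) -> dict:
    """Run the transcript through an LLM; `call_llm` is any callable that
    takes a prompt string and returns the model's text response."""
    raw = call_llm(build_fact_check_prompt(transcript))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Treat unparseable model output as "needs human review".
        return {"flagged": True, "claims": ["unparseable model output"]}
```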
- Number of whitelist matches and mismatches
- Usage of the fact checker tool that we will build out
Constraint - 20 minutes to answer.
Problem statement - prevent hate, misinformation, and deep-fakes on YouTube.
Approach:
1. Identify use cases/sources for this problem and align with the user
Source of hate & misinformation - users reply in the comment sections of YouTube videos, or create channels to upload such content
Deep fakes - users upload videos that have been edited with other software to manipulate the content
2. Potential options to solve the above
a. Manage upload/comment frequency based on the user's trust profile
1. Content uploader trust profile - factors for creating the profile: how long the creator has been on the platform (older is better), whether the creator uploads from multiple accounts on the same IP (more is worse), and whether the account comments on videos from YouTube channels known to create hate content (this needs a way to tag suspicious YouTube accounts). Based on these factors we can tag the profile as Low, Medium, or High confidence, and restrict how much content each profile can upload or comment in a given day - the lower the confidence, the lower the limit. A rough scoring sketch follows.
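A minimal sketch of such a trust-profile score; the weights and thresholds here are illustrative assumptions, not YouTube's real rules:

```python
from datetime import date

def trust_level(account_created: date, accounts_on_same_ip: int,
                comments_on_flagged_channels: int) -> str:
    """Toy scoring of an uploader's trust profile using the three
    factors described above; weights are invented for illustration."""
    score = 0
    # Older accounts earn trust (capped at 3 points).
    score += min((date.today() - account_created).days // 365, 3)
    # Many accounts from one IP is suspicious.
    score -= 2 * max(accounts_on_same_ip - 1, 0)
    # Interaction with known hate channels lowers trust.
    score -= comments_on_flagged_channels
    if score >= 3:
        return "High"
    if score >= 1:
        return "Medium"
    return "Low"

# Example: a long-tenured account, single IP, no flagged interactions -> "High".
print(trust_level(date(2020, 1, 1), 1, 0))
```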
b. Identify such content using machine learning and either delete it or flag it to users
Once the content has been uploaded, we can identify it with machine learning methods and either delete it or mark it with a special tag so that users are aware of it. Classification-based supervised learning and natural language processing methods would be needed to identify such content. However, implementing NLP is easier than implementing classification. We will need to create test and verification data for the learning models, and might need manual identifiers to start with.
We can also let users flag a comment as fake (only profiles with high confidence), either as input to the machine-learning method or to block the content altogether. A minimal classifier sketch follows.
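To make the supervised-classification idea concrete, here is a toy sketch using scikit-learn; the seed examples and labels are invented for illustration and stand in for the manually identified data mentioned above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hand-labelled seed set; a real system would need far more data.
texts = [
    "this community is ruining our country, drive them out",
    "great tutorial, learned a lot about baking bread",
    "share this before they delete it, the election was stolen",
    "weekly vlog: visiting my grandmother in the countryside",
]
labels = [1, 0, 1, 0]  # 1 = hate/misinformation, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Flag new comments/transcripts whose predicted probability crosses a threshold.
prob = model.predict_proba(["they are all liars, spread the word"])[0][1]
print("flag for review" if prob > 0.5 else "allow")
```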
c. Manage bot-generated content - not specific to hate content, but included for completeness of the solution: use a captcha mechanism and moderate the frequency of content from the same IP.
3. KPI to help indicate whether this is a problem - number of comments relative to the recency of the uploaded content. E.g., if content is uploaded today and suddenly has 1,000 comments in 5 minutes (the threshold can be derived from historical comment trends - mean and deviation for particular categories). Unless those comments come from a verified YouTube channel or a high-trust customer profile, this might mean fake comments are still a problem. You should also get qualitative feedback from your contact centre and business development teams to help validate the hypothesis. A small sketch of the spike check follows.
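A minimal sketch of the mean-and-deviation spike check described above, with invented historical counts:

```python
import statistics

def comment_spike(counts_history: list[int], current_count: int,
                  sigma: float = 3.0) -> bool:
    """Flag when comments-in-first-5-minutes far exceeds the historical
    norm for the category (mean + sigma * standard deviation)."""
    mean = statistics.mean(counts_history)
    stdev = statistics.stdev(counts_history)
    return current_count > mean + sigma * stdev

# Historical first-5-minute comment counts for a category, then a burst of 1000.
history = [12, 8, 20, 15, 9, 14, 11, 18]
print(comment_spike(history, 1000))  # True -> route for bot/fake review
```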
Call out: I realise that in 20 minutes it is not easy to move beyond this level of analysis.
Feedback needed: on the quality of the analysis within the time constraint.
The first thing that comes to mind when answering this Google product design question is the following:
YouTube is a universal destination for video content, accessed by hundreds of millions of people around the world. Due to its huge audience and openness, uploaded content has the potential to influence large audiences without restriction. Content intended to engender hate, or that is pure misrepresentation of fact, can lead to real-life incidents - e.g., content intended to incite one community against another. YouTube should not be the reason for such terrible incidents in society and should do its part to prevent the propagation of such content.
Target Audience/stakeholders:
1. General population - this is a problem that affects everyone in society, particularly its vulnerable sections in various places.
2. Civil and law enforcement authorities - they usually handle the impact of such misinformation
3. Youtube content creators - they are the ones creating content, they need to be clear about the rules
4. Google as a company has certain responsibilities in different regions of the world on these issues
Needs Analysis:
| Target Audience | Needs | Underserved |
| --- | --- | --- |
| General population | Does not have visibility into how video content can be manipulated, designed to incite fear and hatred, or built on lies. Expects good-quality content published by authors with good intent, not for manipulation. | Yes |
| Civil and law enforcement | Need to stay on top of potential bad outcomes online, but lack the expertise - this falls outside their normal job. | Unknown |
| Google | Does not want to be associated with such content as a company, due to its values and the impact on its brand; wants to mitigate issues as soon as possible. | Yes |
| YouTube content creators | Do not want to go through a bureaucratic process; want to be up to speed on content guidelines so that they do not cause headaches. | Yes |
Priority of audience: of the different audiences here, the impact of addressing the needs of the general population seems highest - if they knew enough, perhaps we could prevent the impact of such content on YouTube, and if we could prevent its spread, perhaps bad-actor content creators would be discouraged. While civil/law enforcement needs to monitor, they find out far slower than online hate spreads and cannot really stop the damage.
YouTube already has content restrictions that differentiate adult and kids' content.
| Use cases - General audience | User Satisfaction |
| --- | --- |
| Gen-pop user can view videos that are vetted for malicious content | High |
| Gen-pop user can get more information about the video author before sharing or viewing more content | Medium |
| Gen-pop user can participate in vetting videos for harmful content | High |
| Gen-pop user can get more information about the video to verify authenticity | High |

| Use cases - Google | Satisfaction |
| --- | --- |
| Google needs to detect potentially harmful content at the source/submission | High |
| Google needs human judgement and machine learning to identify content published as malicious | High |
| Google needs to react quickly to prevent viral shares of potentially harmful content | High |
| Google needs to make sure the right to free speech is not violated, with clear guidelines | High |
Solution
A content rating system that collects information and signals from all kinds of sources - the web, YouTube viewers, and a human judgement panel - to identify and remove harmful content.
| Feature | Benefits | Satisfaction | Google Brand/Values/Compliance | Dev/Resource Cost | Recommendation (first version) |
| --- | --- | --- | --- | --- | --- |
| Public content policy | Clear policy to identify bad content | High | High | Low | Yes |
| Web intelligence on uploaded video | Whether the video is associated with tainted sites or web forums, or flagged in the news | Medium | Medium | High | Yes |
| Human judge panel on uploaded video | Internal/external panel of human judges for suspect videos | High | High | Medium | Yes |
| Content intelligence on uploaded video (stop words) | Word analysis and sentiment analysis to identify whether a video is designed to incite | High | High | Medium | Yes |
| YouTube audience feedback/flag on content, proactively | Proactively ask users, based on the credibility of the author and video, to rate content | High | High | Low | Yes |
| Country-specific sensitive-topics filter | Filter for topics that have been sensitive in the past | High | High | Medium | Yes |
| Audit enforcement | Audit the inappropriate-content filtration | High | High | Low | Yes |
| Monitor real-time virality of uploaded videos and top videos | Identify and monitor videos that are growing in influence quickly | High | High | Medium | No |
| Author credibility score | Reputation score for the author based on tenure, history, and offline facts (a toy scoring sketch follows the table) | High | High | Medium | Yes |
| Video credibility score on each video | Create a transparent, explainable score to signal credibility to the audience | High | High | Medium | No |
| Community feedback summary on each video | Create a summary to represent community sentiment | High | Medium | Medium | No |
| Video sharing and search regulation | Throttle video sharing and search based on the video credibility score | Medium | Medium | High | No |
| Video similarity matching | Look for similarity among duplicated videos to reduce mass copy-and-upload | High | High | Low | Yes |
| Upload throttling based on video signature | Throttle uploads based on video signature | High | High | Low | No |
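As one way to make the author credibility score concrete, here is a toy sketch; the inputs (tenure, policy strikes, identity verification) and the weights are illustrative assumptions, not a production formula:

```python
def author_credibility(tenure_years: float, strikes: int,
                       verified_identity: bool) -> float:
    """Weighted 0-100 credibility score from tenure, history, and
    offline facts; all weights are invented for illustration."""
    score = 50.0
    score += min(tenure_years, 10) * 3        # tenure, capped at 10 years
    score -= strikes * 15                      # history of policy strikes
    score += 20 if verified_identity else 0    # offline facts / verification
    return max(0.0, min(100.0, score))

# Example: a 4-year-old verified channel with one strike -> 50 + 12 - 15 + 20 = 67.
print(author_credibility(4, 1, True))
```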
Metrics (hourly, daily, weekly, monthly):
1. % of identified bad videos that remained published for 24+ hours
2. % of users exposed to bad videos
3. Automated bad-video recognition rate with respect to the benchmark human panel (a small sketch of this comparison follows)
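For metric 3, a small sketch of how automated flags could be scored against the human panel's labels; the video IDs are placeholders:

```python
def recognition_rate(auto_flags: set[str], panel_flags: set[str]) -> dict:
    """Compare automated flags against the benchmark human panel's flags."""
    true_pos = auto_flags & panel_flags
    recall = len(true_pos) / len(panel_flags) if panel_flags else 1.0
    precision = len(true_pos) / len(auto_flags) if auto_flags else 1.0
    return {"recall": recall, "precision": precision}

# The panel flagged 4 videos; automation caught 3 of them plus 1 false positive.
print(recognition_rate({"v1", "v2", "v3", "v9"}, {"v1", "v2", "v3", "v4"}))
# -> {'recall': 0.75, 'precision': 0.75}
```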
Feedback appreciated with thanks.
- When you say prevent hate - essentially you mean users should not end up hating someone for an act he/she has not performed, or being impacted by deep-fake content? - Yes
- No constraints on the project? - Yes
- Geo - we can begin with India and then try to replicate this across countries
- Objective
- To identify fake videos and sunset them
- So basically no revenue goal - keep users engaged
- Mission - the mission of YouTube, and Google in general, is to organise all the world's information and make it accessible (for YouTube, its video content)
- Type of users

| User type | Size | Depth |
| --- | --- | --- |
| Creator - High frequency: bloggers, travel enthusiasts, teachers, finfluencers (young population), podcasters (once or twice a week); short- or medium-length content (5-20 mins). Mid frequency: upload a video once a month with all the information in one long video (1-3 hours). Low frequency: have just started uploading content as a hobby | Large | Large |
| Viewer - high, mid, and low frequency | Large | Large |
| Actors/co-actors/artists | Mid | Large |
3. Pain points

| Pain Points | Size | Depth |
| --- | --- | --- |
| Creator - loss of reputation, mental stress | Large | Large |
| Viewer - getting misinformed creates distress; viewers may spread it to multiple users, impacting their sentiments, creating a negative environment, and starting arguments and debates over the topic | Large | Large |
| Actors/talent - loss of reputation, impacts money/business | | |
Solution

| Solution (creators) | Reach | Impact | Effort | Phase |
| --- | --- | --- | --- | --- |
| Use AI to continuously run a cron job in the background that recognises similar face patterns used in a video (face recognition), and also cross-verifies whether the audio and text match another video (a similarity sketch follows this table) | | | | |
| Algo training - post-matching, the confidence score that it is the same video should be more than 85% | | | | |
| Send alerts to creators to cross-verify and confirm to YouTube, so it can take the video down and send a legal notice to the offending channel | | | | |
| Analytics - report daily how many videos were detected, on a T-1 or real-time basis | | | | |
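To illustrate the 85% matching threshold above, here is a minimal sketch; it assumes face and audio embeddings are produced by upstream models (the embedding vectors below are random stand-ins), and cosine similarity is one plausible comparison:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likely_deepfake_copy(face_emb_a, face_emb_b,
                            audio_emb_a, audio_emb_b,
                            threshold: float = 0.85) -> bool:
    """Compare face and audio embeddings of two videos; flag when both
    modalities exceed the 85% confidence bar described above."""
    return (cosine_similarity(face_emb_a, face_emb_b) > threshold and
            cosine_similarity(audio_emb_a, audio_emb_b) > threshold)

# Toy vectors standing in for real model embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=128)
print(is_likely_deepfake_copy(emb, emb * 1.01, emb, emb * 0.99))  # True
```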
| Solution (viewers) | Reach | Impact | Effort | Phase |
| --- | --- | --- | --- | --- |
| Notify viewers if a channel they follow is under the radar for deep fakes (via push/WhatsApp/email) | | | | |
| Give users the option to compare two video URLs through a tool that identifies similar face patterns (lips, cheeks, etc.), audio, and subtitles; if users suspect a deep fake they can run the comparison, and if the score is above 85% the content creator is immediately notified via email/WhatsApp | | | | |
- Videos taken down due to AI detection vs. videos detected
- e.g., 10/100 = 10% conversion
Clarifying questions:
Any particular region, or across the globe?
Any particular genre or type of content?
User Groups:
User
YT channel owner
Govt agencies
YouTube/Google as the company
Pain points:
User:
Does not know which news is fake and which is true
YT channel owner:
Genuine channels: hard time standing out from the fake ones
Govt agencies:
Hard time handling the misinformation spread
Youtube Company:
Hard time figuring out what's fake and what's not
Prioritization:
If YouTube as a company can figure out what’s fake and what’s not, then the problems of the other 3 user groups get resolved!
Solving the problem of YouTube to identify fakes has the most value
Solutions:
Rely on user feedback/review for the “news” category:
Users can give review if the news seems fake or not
This might be biased on a small scale but on a big enough scale, it might yield good results
Allow users to mark news as wrong, offensive, or as having an incomplete source
Ask creators to mention the source of the news (unless they’re verified news channels themselves)
This could be used to verify the legitimacy of the news
News Channels:
Penalty system:
Example 1: Don’t allow the deletion of videos without a proper apology or disclaimer video stating that they were wrong
Example 2: reduce reach and inform them that their reach is restricted for the next couple of days
Based on this feedback and the sources, YouTube can try to build an algorithm to identify fakes
Example: use the auto-generated video transcripts to visit the sources mentioned by creators and verify that the topics & facts covered in the video are actually from those sources. If not, flag it to the creator & unpublish the video. A toy version of this check follows.
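As a toy version of the transcript-vs-source check, here is a sketch that only measures crude word overlap; a real system would verify claims semantically, and the 0.5 threshold is an assumption:

```python
import re

def source_overlap(transcript: str, source_text: str) -> float:
    """Crude check: fraction of substantive transcript words (5+ letters)
    that also appear in the cited source text."""
    words = lambda t: set(re.findall(r"[a-z]{5,}", t.lower()))
    t_words, s_words = words(transcript), words(source_text)
    return len(t_words & s_words) / len(t_words) if t_words else 0.0

transcript = "The council approved the budget increase for public schools"
source = "City council votes to approve a budget increase funding public schools"
if source_overlap(transcript, source) < 0.5:
    print("flag to creator and unpublish pending review")
else:
    print("source appears consistent")
```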
Prioritize Solutions:
P0
The source mentioned by creators: easiest to implement & gives transparency
User feedback mechanism:
P1:
Penalty System
AI/Video transcript based
CQ’s >>
- We are talking about the youtube.com platform here? yes
- Hate can be spread by both videos and comments. Which one should we focus on ? Video
YouTube is a platform that allows users to create and share videos, and gives viewers a wide array of videos to watch for different purposes like news, entertainment, upskilling, etc.
Type of hate, misinformation -
Political parties or their affiliates, who might post videos that provide a polarised perspective for political gain
Extremist groups who will upload videos for recruitment or spreading hate
Spammers who will upload fake videos
Influencers who will upload fake videos for followers or viewership gains
Users of proposed product -
Content creators - there are already some checks for content creators. Impact will be high.
Viewers - if we can create something for viewers that gives them more information, the impact will be very high.
Government officials - they often find out about these things late, hence the impact will be medium.
YouTube team - the YouTube team is already working on this. Impact low.
Based on reach and impact, I will prioritize building a solution for viewers (general population).
Pain points of general population
- How would I know that the facts stated in the video are correct?
- How can I get to know both sides of any argument?
- How will I know if the video is authentic or not?
- How do I know that the video creator is not biased or affiliated to one point of view?
- How would I know that the video is not a deep fake one?
Prioritisation based on impact on the general population and type of videos:
1>2>4>3>5
Solutions -
1. Create a credibility-score system for video creators based on their expertise, profile, and past experience. Impact: High / Effort: Low-Medium (assimilate info about the content creator from different websites and past videos to show their score)
2. Build a video credibility score for each video so the viewer knows it is a trusted video. It can be based on the creator's profile plus some in-house specialists and top contributors. Impact: High / Effort: High (setting up the in-house specialists and identifying top contributors)
3. Implement AI to check the video content and browse the internet to provide a counter view on the topic. Impact: Medium / Effort: Medium
4. Improve public participation in the vetting process: any video that raises a red flag with the algorithm is pushed to various groups of the public, who rate/flag the video per their understanding. Initially we can incentivise users to add their feedback. Impact: High / Effort: Low
Prioritisation: 4 > 1 > 2 > 3
Metrics :
- Number of false-positive flags raised by the algorithm
- Number of people adding their feedback
Let me start with a few clarifying questions:
1. YouTube has a portfolio of products. Can I assume that we want to focus on the user-generated content side - the traditional YouTube product? - Yeah
2. Should I assume that I am a PM at YouTube tasked with this? - Yeah
Thanks to generative AI, users can now create content that is hard to differentiate between fake and real. So it is a valid problem, and it is critical for YouTube to solve it.
At YouTube, we have 4 key user segments -
1. Content creators -> influencers, regular, occasional, commentators
2. Content viewers
3. Google
4. Government
Content viewers are the buyers and end users, so let us focus on their problem statements.
Pain points for content viewers
1. I don't want to see abusive content in my feed or searches
-> (impact on our users: medium)
2. I don't want to watch abusive, hateful, or deep-fake content
-> (impact on our users: high)
3. I don't want to share content that is abusive or has hate content in it
-> (impact on our users: high)
Let us now focus on defining some solutions to ensure that viewers do not see abusive content while watching.
1. We could restrict who can upload content.
2. We could launch a Mechanical Turk-like system to review content when users upload a video, and make content available for viewing only after this review.
-> (accuracy: high, cost of effort: low) -> Google already does some of this.
3. Introduce a rating system and leverage crowdsourcing to rate content; any user can rate the content.
-> (accuracy: medium, cost of effort: low)
4. Leverage AI/ML to automatically review and tag content.
-> (accuracy: high, cost of effort: medium)
I will prioritize curating training data so that our AI/ML algorithm can detect hate, misinformation, and deep fakes.
Success metrics:
1. The accuracy of AI validation vs. the current validation
2. User complaints around such content
Clarifying questions:
I assume we want to build a safe YouTube community by preventing hate, misinformation, and deep-fakes. This is also key to complying with regulators' requirements, and will help avoid scenarios where advertisers boycott a platform that spreads hate, misinformation, and deep-fakes.
A safer community will avoid customer and advertiser attrition, thereby boosting the user base and revenue, and more importantly will contribute to building a safe and healthy society.
User groups
To stop hate, misinformation and deep fakes, we need to build policies and technologies that target the abusive content creators.
Extremist groups: spread misinformation due to their ideologies and recruit unsuspecting people
Spammers: that create abusive content
Political and religious groups: that spread lies and engage in slander for political gains
Influencers with massive followings: spread misinformation due to misplaced beliefs
Groups with a bad track record
In order to create a safe community, we need to target all the creators that generate abusive content.
Pain points
Customers see a stream of user-generated content that includes hate, lies, and misinformation, and as a result make poor life choices
Customers are influenced by extremist groups and are being recruited by them
Users are bullied by internet trolls, which affects their mental health and leads to social anxiety, depression, poor body image, etc.
Kids are exposed to A-rated content
Prioritized pain points
Since we need to weed out abusive content completely to create a just society, we need to prioritize all of these pain points.
Solution:
A) We need to create policies and make it clear to content creators that their content must comply with YouTube community guidelines, otherwise enforcement actions will be taken against their channel.
B) We need to tag videos with abusive keywords; videos whose subtitles indicate abusive content should be sent for additional review. This additional review can be performed by ethics experts or philosophers until we build the ML capability to automate the process (a minimal keyword-scan sketch follows this list).
C) We need to allow users to flag a video for abusive content, to be reviewed manually or by AI. When users with an excellent track record of identifying abusive content flag a video, we can take it down immediately.
D) We need to stop advertising on channels run by extremist or religious groups to avoid funding them, and avoid putting ads on political channels to remain non-partisan.
E) We need to send videos associated with abusive channels or sketchy websites for additional screening before they go live. Similarly, we need to flag videos on sensitive topics such as vaccines and birth control for additional review before publishing.
F) We can incentivize users with monetary rewards or free premium subscriptions when they help YouTube flag and take down abusive content.
G) We need to build a profile for every content creator and assign a risk score; when a creator crosses the risk threshold, their channel should face enforcement.
H) YouTube's recommendation system should be trained not to recommend videos on sensitive topics, and not to include them in trending, so that their dissemination is limited and abusive content is less likely to reach a wide audience.
I) YouTube should attach a "fact check pending" sign to political videos until they are fact-checked, and should immediately block compromised channels to protect the user experience.
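A minimal sketch of the subtitle keyword scan from item B; the pattern list is an illustrative assumption, and a real deployment would use a maintained, localized lexicon plus ML rather than a static keyword file:

```python
import re

# Illustrative seed patterns only.
ABUSIVE_PATTERNS = [r"\bkill\s+them\b", r"\bsubhuman\b", r"\bexterminate\b"]

def needs_additional_review(subtitles: str) -> bool:
    """Route a video to the human review queue (item B above) when its
    subtitle track matches any abusive pattern."""
    return any(re.search(p, subtitles, re.IGNORECASE) for p in ABUSIVE_PATTERNS)

print(needs_additional_review("They are subhuman and must go"))  # True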
Trade-off
While creating mechanisms to stop abusive content, YouTube needs to ensure it does not undermine freedom of speech. Therefore it is critical that the algorithms are trained to minimize false positives.