
How would you prevent hate, misinformation or deep-fakes on YouTube?

Asked at Google
6.3k views
Answers (8)
Gold PM
Step 1 - Scope and clarifying questions
Just to be clear, I want to lay out definitions for some of these words.
Hate - speaking or acting against a certain community or group of people.
Misinformation - information that is factually untrue, or where the data is available but incorrect conclusions are drawn from it.
Deep fakes - the face of one person is superimposed onto another person's video, making it appear they said or did something they never did.
Now, YouTube is a video discovery platform where creators create videos and consumers consume them. The primary source of revenue is ads, and creators also get paid via endorsements. Apart from being the right thing to do ethically, preventing such content is also commercially important: it scares advertisers away from the platform and drives away viewers, leading to a negative viral loop where fewer consumers mean fewer creators and fewer uploaded videos.
 
Step 2 - List user groups and choose one
There are 2 primary user groups here: the consumers and the creators. I would go with the creators, because it is better to tackle this at the source, but let me know if you want me to look at the other side as well.
Even within the creators, I can divide this into 2 categories:
  1. Creators who post such things deliberately
  2. Creators who post such things inadvertently
 
I believe most of this actually stems from people posting such things inadvertently, because people who post deliberately usually get reported and their channels get taken down. So I would like to focus on the people who post inadvertently. I want to pause here and check if you have any questions.
 
Step 3 - List and prioritize the pain points
Journey | Pain point | Severity (L = Large, M = Medium, S = Small)
Pre-posting | I trust incorrect sources without knowing it at the time (e.g., blogs) | M (most of the time I will try not to rely on blogs)
Pre-posting | Sources give incorrect data and I believe it | L (the quality of the source data becomes important, so it matters which source to trust)
Pre-posting | I trust forwards and create videos based on them | L (I trust the sender as a person and assume he/she sent me the details in good faith)
During posting | No one tells me this video is filled with inaccuracies | M (it would be better to manage this at the script stage itself)
During posting | I don't know that certain words/language can be classified as offensive | M (people are usually sensitised to what is acceptable and what is not, hence the medium priority)
Post-posting | Confused as to what led to the backlash | S (your comments section will tell you this)
Post-posting | How to avoid such a backlash in the future | S (you will learn based on what happens in the comments section)
Based on the above, I will prioritize the pain point where the creator is unaware of the quality of the source or the quality of the data.
 
Step 4 - List and prioritize the solutions
Solution | Reach | Impact | Effort
1. Have a set of trusted sources for images and news, and ask creators to provide their references (similar to how authors list references at the end of a book). Include this as a ranking parameter: when the sources are from the whitelist, give that video a higher ranking. | L (benefits everyone) | L (because sources are known, they can always be cross-checked; a references section can surface the details) | L (a whitelist of sources must be created and cross-referenced, so this takes some effort)
2. Run the transcript/script through an LLM and check whether known fake news or morphed images appear in the video. | L (benefits everyone) | L (Gemini is trained on a large corpus, and we can strike deals with publishers to onboard their material) | M (the models have already been trained, so this should be manageable)
3. Allow users to run a fact check for things like abusive language, known deep fakes, and known misinformation campaigns. | M (not everyone would think to run it) | M (not everyone would think to run it) | M (LLMs could be leveraged here)
4. Add a Community Notes-like feature where a group of people can fact-check a video and have it marked accordingly; also downgrade the ranking of such videos. | M (good as a post-publication measure, but might not prevent harm in the long run) | S (the post has already gone out) | M (a crowdsourcing framework may need to be built)
5. Automatically disable commenting when the comments section is becoming toxic. | M (defining "toxic" needs thought) | S (wouldn't have as much impact) | M (models exist for identifying certain types of toxicity, so these can be leveraged)
Based on the above, I would pick the first 2.
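To make solution 1 concrete, here is a minimal sketch of how the whitelist signal could be blended into ranking. The domains, weights, and function names are illustrative assumptions, not YouTube's actual ranking logic:

    # Hypothetical sketch: boost ranking for videos whose cited sources
    # appear on a curated whitelist. Domains and weights are illustrative.
    TRUSTED_SOURCES = {"reuters.com", "apnews.com", "nature.com"}

    def whitelist_score(cited_sources: list[str]) -> float:
        """Fraction of a video's cited sources found on the whitelist."""
        if not cited_sources:
            return 0.0  # no references given: no boost, but no penalty
        matches = sum(1 for s in cited_sources if s in TRUSTED_SOURCES)
        return matches / len(cited_sources)

    def adjusted_rank_score(base_relevance: float, cited_sources: list[str],
                            boost_weight: float = 0.2) -> float:
        """Blend the existing relevance score with the whitelist signal."""
        return base_relevance * (1.0 + boost_weight * whitelist_score(cited_sources))

    # Example: a video citing two trusted sources and one unknown blog
    print(adjusted_rank_score(0.8, ["reuters.com", "apnews.com", "myblog.net"]))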
 
Step 5 - Risks
It is a fractured world, so whatever we whitelist will draw criticism of bias, and some influencers might stay away from or even quit the platform in protest.
 
Step 6 - Metrics
  1. Number of whitelist matches and mismatches
  2. Usage of the fact checker tool that we will build out

Constraint - time to answer: 20 minutes

Problem statement - prevent hate, misinformation, and deep fakes on YouTube

Approach:

1. Identify the sources of this problem and align with the interviewer

Source of hate & misinformation - users reply in the comment sections of YouTube videos, or create channels to upload such content

Deep fakes - users upload videos that have been edited with other software to manipulate the content

2. Potential options to solve the above

a. Manage upload frequency based on the user's trust profile

1. Content uploader trust profile - factors for building the profile: how long the content creator has been on the platform (the older the better), whether the creator uploads from multiple accounts on the same IP (the more the worse), and whether the account comments on videos from YouTube channels known to create hate content (this needs a way to tag suspicious YouTube accounts). Based on these factors we can tag the profile as Low, Medium, or High confidence, and restrict how much content each profile can upload or comment on in a given day - the lower the confidence, the lower the limit. A rough sketch of this scoring follows below.
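A minimal sketch of how such a trust profile might be computed (Python; the signals, weights, and daily limits are invented for illustration, not a real YouTube policy):

    # Illustrative trust-profile sketch: combine simple account signals into
    # a Low/Medium/High confidence tier with a daily upload limit.
    from dataclasses import dataclass

    @dataclass
    class CreatorSignals:
        account_age_days: int
        accounts_on_same_ip: int
        comments_on_flagged_channels: int

    def trust_tier(s: CreatorSignals) -> tuple[str, int]:
        score = min(s.account_age_days / 365, 3.0)         # tenure, capped at 3 years
        score -= 1.0 * max(s.accounts_on_same_ip - 1, 0)   # penalise multi-accounting
        score -= 0.5 * s.comments_on_flagged_channels      # penalise bad associations
        if score >= 2.0:
            return "High", 50    # uploads/comments allowed per day
        if score >= 0.5:
            return "Medium", 10
        return "Low", 2

    print(trust_tier(CreatorSignals(account_age_days=800,
                                    accounts_on_same_ip=1,
                                    comments_on_flagged_channels=0)))  # ('High', 50)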

b. Identify such content using machine learning methods and either delete it or flag it to users

Once content has been uploaded, we can identify problems with machine-learning methods and either delete the content or mark it with a special tag so that users are aware. Supervised classification and natural language processing methods would be needed to identify such content; implementing the NLP checks is easier than building the full classification pipeline. We will need to create training and validation data for the models, and might need manual reviewers to start with.

We can also let users flag content as fake (restricted to high-confidence profiles), either as input to the machine-learning models or to block the content altogether. A toy sketch of the classification idea follows below.
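A toy sketch of the supervised-classification idea (Python with scikit-learn); real systems would need far larger labelled datasets and stronger models, and the examples below are made up:

    # TF-IDF features plus logistic regression over transcripts/comments.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["this community should be attacked",   # toy 'flag' example
             "lovely travel vlog from my trip",     # benign
             "they deserve to be attacked",         # toy 'flag' example
             "recipe tutorial for chocolate cake"]  # benign
    labels = [1, 0, 1, 0]  # 1 = route to manual review, 0 = fine

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["they should be attacked"]))  # likely [1]: send to review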

c. Manage bot-generated content - this is not specific to hate content, but is included for completeness of the solution: gate uploads with a captcha mechanism and moderate the frequency of content from the same IP

3. KPI to help indicate whether this is a problem - number of comments against recency of the uploaded content. For example, if content gets uploaded today and suddenly has 1,000 comments in 5 minutes (the expected number can be derived from historical comment trends - mean and deviation for the particular category), and these comments are not from a verified YouTube channel or a high-trust customer profile, this might mean fake comments are still a problem. You also need qualitative feedback from your contact-center and business-development teams to help validate the hypothesis. A sketch of this check follows below.
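Here is a small sketch of that velocity check (Python; the historical numbers and the 3-sigma threshold are made-up illustrations):

    # Flag a video if comments in its first 5 minutes exceed the historical
    # mean for the category by k standard deviations.
    import statistics

    history = [12, 8, 20, 15, 9, 11, 14, 10]  # first-5-min comment counts, same category
    mean, stdev = statistics.mean(history), statistics.stdev(history)

    def is_anomalous(comments_in_first_5min: int, k: float = 3.0) -> bool:
        return comments_in_first_5min > mean + k * stdev

    print(is_anomalous(1000))  # True: 1,000 comments in 5 minutes is far off trend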

Call out: I realize that in 20 minutes it is not easy to move beyond this level of analysis.

Feedback needed: on the quality of the analysis within the time constraint.

 

 

Gold PM

The first thing that comes to mind when answering this Google product design question is the following:

YouTube is a universal destination for video content, accessed by hundreds of millions of people around the world. Due to its huge audience and openness, uploaded content has the potential to influence large audiences without restriction. Content intended to engender hate, or content that is pure misrepresentation of facts, can lead to real-life incidents - for example, content intended to incite one community against another. YouTube should not be the cause of such terrible incidents in society and should do its part to prevent the propagation of such content.

Target Audience/stakeholders:

1. General population - this problem affects everyone in society, particularly its vulnerable sections in various places.

2. Civil and law enforcement authorities - they usually handle the impact of such misinformation

3. YouTube content creators - they are the ones creating content; they need to be clear about the rules

4. Google - as a company, it has certain responsibilities on these issues in different regions of the world

Needs Analysis:

Target audience | Needs | Underserved?
General population | Does not have visibility into how video content can be manipulated, designed to incite fear and hatred, or built on lies. Expects good-quality content published by authors with good intent, not for manipulation. | Yes
Civil and law enforcement | Need to stay on top of potential bad outcomes from online content, but lack the expertise to do so alongside their normal job. | Unknown
Google | Does not want to be associated with such content, given its values and the impact on its brand; wants to mitigate issues as soon as possible. | Yes
YouTube content creators | Do not want to go through a bureaucratic process; want to be up to speed on content guidelines so they do not run into trouble. | Yes

Priority of audience: Of the different audiences here, the impact of addressing the needs of the general population seems highest - if they knew enough, perhaps we could prevent the impact of such content on YouTube. And if we could prevent its spread on YouTube, bad-actor content creators might be discouraged. While civil and law enforcement authorities need to monitor, they find out far more slowly than online hate spreads, and cannot really stop the damage.

(YouTube already has content restrictions distinguishing adult and kids' content.)

Use cases - general audience | User satisfaction
Genpop user can view videos that are vetted for malicious content | High
Genpop user can get more information about the video author before sharing or viewing more content | Medium
Genpop user can participate in vetting videos for harmful content | High
Genpop user can get more information about the video to verify authenticity | High

Use cases - Google | Satisfaction
Google needs to detect potentially harmful content at the source/submission | High
Google needs both human judgement and machines to identify content published as malicious | High
Google needs to react quickly to prevent viral shares of potentially harmful content | High
Google needs to make sure the right to free speech is not violated, with clear guidelines | High

 

Solution

A content rating system that collects information and signals from all kinds of sources - the web, YouTube viewers, and a human-judgement panel - to identify and remove harmful content.

Feature | Benefit | Satisfaction | Google Brand/Values/Compliance | Dev/Resource cost | Recommendation (first version)
Public content policy | Clear policy to identify bad content | High | High | Low | Yes
Web intelligence on uploaded video | Check whether the video is associated with tainted sites or web forums, or flagged in the news | Medium | Medium | High | Yes
Human judge panel on uploaded video | Internal/external panel of human judges for suspect videos | High | High | Medium | Yes
Content intelligence on uploaded video (stop words) | Word and sentiment analysis to identify whether the video is designed to incite | High | High | Medium | Yes
YouTube audience feedback/flag on content, proactively | Proactively ask users, based on the credibility of the author and video, to rate content | High | High | Low | Yes
Country-specific sensitive-topics filter | Filter for topics that have been sensitive in the past | High | High | Medium | Yes
Audit enforcement | Audit the filtration of inappropriate content | High | High | Low | Yes
Monitor real-time virality of uploaded and top videos | Identify and monitor videos that are growing in influence quickly | High | High | Medium | No
Author credibility score | Reputation score for the author based on tenure, history, and offline facts | High | High | Medium | Yes
Video credibility score on each video | A transparent, explainable score to signal credibility to the audience | High | High | Medium | No
Community feedback summary on each video | A summary representing community sentiment | High | Medium | Medium | No
Video sharing and search regulation | Throttle video sharing and search based on video credibility score | Medium | Medium | High | No
Video similarity matching | Look for similarity in duplicated videos to reduce mass copy-and-upload | High | High | Low | Yes
Upload throttling based on video signature | Throttle uploads based on video signature | High | High | Low | No
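As a rough illustration of how the recommended signals might combine at submission time, here is a sketch with invented signal names, weights, and thresholds (not YouTube's actual pipeline):

    # Combine per-video risk signals into a single screening decision.
    SIGNAL_WEIGHTS = {
        "web_intelligence_risk": 0.30,  # linked from tainted sites/forums
        "stop_word_risk": 0.25,         # word/sentiment analysis score
        "audience_flag_rate": 0.25,     # proactive viewer flags
        "author_risk": 0.20,            # 1 - author credibility score
    }

    def screening_decision(signals: dict[str, float],
                           review_threshold: float = 0.5,
                           block_threshold: float = 0.8) -> str:
        # each signal is assumed normalised to [0, 1]
        risk = sum(w * signals.get(name, 0.0)
                   for name, w in SIGNAL_WEIGHTS.items())
        if risk >= block_threshold:
            return "hold: send to human judge panel before publishing"
        if risk >= review_threshold:
            return "publish with reduced distribution, queue for review"
        return "publish normally"

    print(screening_decision({"web_intelligence_risk": 0.9, "stop_word_risk": 0.7,
                              "audience_flag_rate": 0.4, "author_risk": 0.6}))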

 

Metrics (hourly, daily, weekly, monthly):

1. % of identified bad videos that remained published for 24+ hours

2. % of users exposed to bad videos

3. Automated bad-video recognition rate with respect to a benchmark human panel
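Metric 3 could be computed as simple agreement against the panel; a toy sketch with invented labels:

    # 1 = bad video, 0 = fine: human panel benchmark vs. the automated system
    human_panel = [1, 0, 1, 1, 0, 1, 0, 0]
    automated   = [1, 0, 0, 1, 0, 1, 1, 0]

    true_pos  = sum(h == a == 1 for h, a in zip(human_panel, automated))
    recall    = true_pos / sum(human_panel)  # share of panel-flagged videos caught
    precision = true_pos / sum(automated)    # share of automated flags that were right
    print(f"recall={recall:.0%}, precision={precision:.0%}")  # recall=75%, precision=75%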

Feedback appreciated with thanks.

Feedback (Gold PM): This is a brilliant answer. Loved reading through your solution.
  • When you say prevent hate, do you essentially mean that users should not end up hating someone for an act he/she never performed, and getting impacted by deep-fake content? - Yes
  • Any constraints on the project? - No constraints
  • Geo - we can begin with India, and after that try to replicate this across countries
  • Objective
    • To identify fake videos and sunset them
    • So basically not revenue - keep users engaged
The way I want to structure this answer is: mission, type of user, pain points, solution, success metrics.
 
First of all, does this structure look OK? - Yes
  1. Mission - the mission of YouTube, and Google in general, is to organise the world's information and make it accessible (for YouTube, its video content)
  2. Types of users
Creators:
High frequency - bloggers, travel enthusiasts, teachers, finfluencers (younger population), podcasters (once or twice a week); short- or medium-length content (5 to 20 minutes)
Mid frequency - upload a video once a month with all the information in one long video; content length 1 to 3 hours
Low frequency - have just started uploading content as a hobby

User group | Reach | Impact
Creators (high/mid/low frequency) | Large | Large
Viewers (high/mid/low frequency) | Large | Large
Actors/co-actors/artists | Mid | Large

Hence reach and impact for both categories are large, but we will choose high-frequency users from both viewers and creators, as they are the ones most likely to be affected by this. If we are able to solve for them, we can extend the solution to all categories of users.
 

3. Pain points
User group | Pain points | Size | Depth
Creators | Loss of reputation; mental stress | Large | Large
Viewers | Getting misinformed creates distress, and viewers end up spreading the misinformation to multiple users, which impacts their sentiment; creates a negative environment; starts multiple arguments and debates over the topic | Large | Large
Actors/talent | Loss of reputation; impacts money/business | |


Solution

Solutions for creators:
1. Use AI to continuously run a cron job in the background that tries to recognise similar face patterns across videos (face recognition), and also cross-verify whether the audio and text match another video
2. Algorithm training - after matching, the confidence score that it is the same video should be above 85%
3. Send alerts to creators to cross-verify and confirm to YouTube, so that YouTube can take the video down and send a legal notice to the offending channel's provider
4. Analytics - provide daily counts of how many videos were detected (on a T-1 basis / in real time)

 

 

Solutions for viewers:
1. Notify viewers if a channel they follow comes under the radar for deep fakes (via push notification/WhatsApp/email)
2. Give users the option to compare two video URLs through a tool that identifies similar face patterns (lips, cheeks, etc.) as well as audio and subtitles; if they suspect a deep-fake video, users can run the comparison, and if the score is above 85%, the content creator is immediately informed via email/WhatsApp
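A minimal sketch of the face-similarity comparison behind both solutions (Python/NumPy; embed_face is a hypothetical stand-in for a real face-embedding model, and the 85% threshold mirrors the one above):

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def is_probable_deepfake_match(emb_original: np.ndarray,
                                   emb_suspect: np.ndarray,
                                   threshold: float = 0.85) -> bool:
        return cosine_similarity(emb_original, emb_suspect) >= threshold

    # Toy embeddings standing in for embed_face() output on two face crops
    rng = np.random.default_rng(0)
    emb_a = rng.normal(size=128)
    emb_b = emb_a + rng.normal(scale=0.1, size=128)  # a near-duplicate face
    print(is_probable_deepfake_match(emb_a, emb_b))  # True -> notify the creator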
 
Success metrics

  1. Videos taken down due to AI detection / videos detected
    1. e.g., 10/100 = 10% conversion

 

  1. Clarifying questions: 

    1. any particular region or across the globe?

    2. Any particular genre or type of content?

  2. User Groups:

    1. User

    2. YT channel owner

    3. Govt agencies

    4. YouTube/Google as the company

  3. Pain points:

    1. User:

      1. Does not know which news is fake and which is true

    2. YT channel owner:

      1. Genuine channels: hard time standing out from the fake ones

    3. Govt agencies:

      1. Hard time handling the misinformation spread

    4. Youtube Company:

      1. Hard time figuring out what's fake and what's not

  4. Prioritization:

    1. If YouTube as a company can figure out what’s fake and what’s not, then the problems of the other 3 user groups get resolved!

      1. Solving the problem of YouTube to identify fakes has the most value

  5. Solutions:

    1. Rely on user feedback/review for the “news” category:

      1. Users can give a review indicating whether the news seems fake

        1. This might be biased on a small scale but on a big enough scale, it might yield good results

    2. Allow users to mark news as wrong, offensive, or as having an incomplete source

    3. Ask creators to mention the source of the news (unless they’re verified news channels themselves)

      1. This could be used to verify the legitimacy of the news

    4. News Channels: 

      1. Penalty system:

        1. Example 1: Don’t allow the deletion of videos without a proper apology or disclaimer video stating that they were wrong

        2. Example 2: reduce reach and inform them that their reach is restricted for the next couple of days

    5. Based on this feedback and the sources, YouTube can try to build an algorithm to identify fakes (a simplified sketch of this check follows after this outline)

      1. Example: Use the auto-generated video transcripts to visit the sources mentioned by creators and verify that the topics and facts covered in the video actually come from those sources. If not, flag it to the creator and unpublish the video.

  6. Prioritize Solutions:

    1. P0

      1. The source mentioned by creators: easiest to implement & gives transparency

      2. User feedback mechanism: 

    2. P1:

      1. Penalty System

      2. AI/Video transcript based 
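A highly simplified sketch of the transcript-vs-source check from solution 5 (naive word overlap stands in for a real NLP fact-checking model; the threshold and example texts are illustrative):

    import re

    def tokenize(text: str) -> set[str]:
        return set(re.findall(r"[a-z']+", text.lower()))

    def supported_by_sources(transcript: str, source_texts: list[str],
                             min_overlap: float = 0.5) -> bool:
        claims = tokenize(transcript)
        source_vocab = set().union(*(tokenize(t) for t in source_texts))
        overlap = len(claims & source_vocab) / max(len(claims), 1)
        return overlap >= min_overlap

    transcript = "the new policy reduces taxes for small businesses"
    sources = ["Official release: the policy reduces taxes for small businesses."]
    print(supported_by_sources(transcript, sources))  # True -> no flag raised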

Silver PM
How would you prevent hate, misinformation or deep-fakes on YouTube?

Clarifying questions >>

  1. We are talking about the youtube.com platform here? - Yes
  2. Hate can be spread by both videos and comments. Which one should we focus on? - Video

YouTube is a platform that allows users to create and share videos, and offers viewers a wide array of videos to watch for different purposes like news, entertainment, upskilling, etc.

Types of hate and misinformation -

  1. Political parties or their affiliates, who might post videos providing a polarising perspective for political gain

  2. Extremist groups who will upload videos for recruitment or spreading hate

  3. Spammers who will upload fake videos

  4. Influencers who will upload fake videos for followers or viewership gains

Users of the proposed product -

  1. Content creators - there are already some checks for content creators. Impact will be high

  2. Viewers - if we can create something that gives viewers more information, the impact will be very high

  3. Government officials - they often find out about these things late. Hence the impact will be medium

  4. YouTube team - the YouTube team is already working on this. Impact low

Based on reach and impact, I will prioritise building a solution for viewers (the general population).

Pain points of general population

  1. How would I know that the facts stated in the video are correct?
  2. How can I get to know both sides of any argument?
  3. How will I know if the video is authentic or not?
  4. How do I know that the video creator is not biased or affiliated to one point of view?
  5. How would I know that the video is not a deep fake one?

Prioritisation based on impact on the general population and type of videos:

Priority: 1 > 2 > 4 > 3 > 5

Solutions -

  1. Create a credibility score for each video creator on the basis of their expertise, profile, and past experience. Impact: High / Effort: Low-Medium (assimilate info about the content creator from different websites and past videos to show their score)

  2. Build a video credibility score for each video so that the viewer knows it is a trusted video. It can be based on the creator's profile, plus some in-house specialists and top contributors. Impact: High / Effort: High (setting up the in-house specialists and identifying top contributors takes work)

  3. Implement AI to check the video content and browse the internet to provide a counter-view on the topic. Impact: Medium / Effort: Medium

  4. Improve public participation in the vetting process: any video that raises a red flag in the algorithm is pushed to various groups of the public to rate/flag according to their understanding. Initially we can incentivise users to add their feedback. Impact: High / Effort: Low (a sketch of the vote aggregation follows below)

Priority: 4 > 1 > 2 > 3
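A small sketch of the vote aggregation behind solution 4 (the reviewer reliability weights and the threshold are invented for illustration):

    # Aggregate crowd vetting votes on an algorithm-flagged video, weighting
    # each reviewer by a hypothetical reliability score in [0, 1].
    def crowd_verdict(votes: list[tuple[bool, float]],
                      flag_threshold: float = 0.6) -> str:
        total = sum(weight for _, weight in votes)
        misleading = sum(weight for says_misleading, weight in votes
                         if says_misleading)
        if total == 0:
            return "insufficient reviews"
        return "flag video" if misleading / total >= flag_threshold else "clear video"

    # Three reliable reviewers say misleading; one weaker reviewer disagrees
    print(crowd_verdict([(True, 0.9), (True, 0.8), (True, 0.7), (False, 0.4)]))  # flag video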

Metrics :

  1. Number of false-positive flags raised by the algorithm
  2. Number of people adding their feedback
Silver PM

Let me start with a few clarifying questions.

1. YouTube has a portfolio of products. Can I assume we want to focus on the user-generated-content side, the traditional YouTube product? - Yeah

2. Should I assume that I am a PM at YouTube tasked with this? - Yeah

Thanks to generative AI, users can now create content that is hard to distinguish from the real thing. So it is a valid problem, and it is critical for YouTube to solve it.

At YouTube, we have 4 key user segments -

1. Content creators -> influencers, regular, occasional, commentators

2. Content viewers

3. Google

4. Government

Content viewers are the buyers and end users, so let us focus on their problem statements.

Pain points for content viewers:

1. I don't want to see abusive content in my feed or searches

-> (impact on our users: medium)

2. I don't want to watch abusive, hateful, or deep-fake content

-> (impact on our users: high)

3. I don't want to share content that is abusive or contains hate.

-> (impact on our users: high)

Let us now focus on defining some solutions to ensure that viewers do not see abusive content while watching.

1. We could restrict who can upload content.

2. We can launch a Mechanical Turk-like system to review content when users upload a video, and make the content available for viewing only after this review.

-> (accuracy: high, cost of effort: low) -> Google already does some of this.

3. Introduce a rating system and leverage crowdsourcing to rate content; any user can rate the content.

-> (accuracy: medium, cost of effort: low)

4. Leverage AI/ML to automatically review and tag content.

-> (accuracy: high, cost of effort: medium)

I will prioritise training data and models so that our AI/ML algorithms can detect hate, misinformation, and deep fakes. A rough sketch of how the model's output could gate uploads follows below.
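As a sketch of how that model's output could gate uploads (the score function is a hypothetical placeholder for a trained detector, and the thresholds are assumptions):

    def model_risk_score(video_id: str) -> float:
        # placeholder output; a real detector would inspect the video itself
        return 0.92

    def route_upload(video_id: str) -> str:
        score = model_risk_score(video_id)
        if score >= 0.9:
            return "block pending human review"  # near-certain violation
        if score >= 0.5:
            return "publish with warning tag, queue for review"
        return "publish normally"

    print(route_upload("example-video-id"))  # -> block pending human review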

Success metrics:

1. The accuracy of AI validation vs. the current validation process.

2. User complaints about such content.

 

 

 

Platinum PM

Clarifying questions:

I assume we want to build a safe YouTube community by preventing hate, misinformation, and deep fakes. This is also key to complying with regulators' requirements, and helps avoid scenarios where advertisers boycott a platform that spreads hate, misinformation, and deep fakes.

A safer community avoids customer and advertiser attrition, thereby boosting the user base and revenue, and more importantly contributes to a safe and healthy society.

User groups

To stop hate, misinformation, and deep fakes, we need to build policies and technologies that target abusive content creators.

Extremist groups: spread misinformation due to their ideologies and recruit unsuspecting people

Spammers: that create abusive content

Political and religious groups: spread lies and engage in slander for political gain

Influencers with massive followings: spread misinformation due to misplaced beliefs

Groups with a bad track record:

In order to create a safe community, we need to target all the creators that generate abusive content.

Pain points

Customers see streams of user-generated content that include hate, lies, and misinformation, and therefore make poor life choices

Customers are influenced by extremist groups and are getting recruited by them

Users are bullied by internet trolls, which affects their mental health and leads to social anxiety, depression, poor body image, etc.

Kids are exposed to adult-rated content

Prioritized pain points

Since we need to weed out abusive content completely to create a just society, we need to prioritise all of these pain points.

Solution:

A) We need to create policies and make it clear to content creators that their content must comply with YouTube community guidelines, otherwise enforcement actions will be taken against their channel.

B) We need to tag videos containing abusive keywords or abusive content, as identified from the subtitles, and send these videos for additional review. This additional review can be performed by ethics experts until we build ML capabilities to automate the process (a minimal sketch of the keyword screen follows below).
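A minimal sketch of that subtitle keyword screen (the term list is illustrative only; a production system would use a maintained taxonomy, not a hard-coded set):

    ABUSIVE_TERMS = {"subhuman", "exterminate", "vermin"}  # illustrative list

    def needs_additional_review(subtitles: str) -> bool:
        words = set(subtitles.lower().split())
        return bool(words & ABUSIVE_TERMS)

    print(needs_additional_review("they are vermin and should leave"))  # True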

C) We need to allow users to flag videos for abusive content, to be reviewed manually or by AI.

When users with an excellent track record of identifying abusive content flag a video, we can take it down immediately.

D) We need to stop advertising on channels run by extremist or religious groups, to avoid funding them. We should also avoid putting ads on political channels, to remain non-partisan.

E) We need to send videos associated with abusive channels or sketchy websites for additional screening before they go live. Similarly, we need to flag videos on sensitive topics, such as vaccines or birth control, for additional review before publishing them.

F) We can incentivise users with monetary rewards or free premium subscriptions when they help YouTube flag and take down abusive content.

G) We need to build a profile for every content creator and assign a risk score; when a creator crosses the risk threshold, enforcement actions should be taken against their channel (a sketch of such a running score follows below).
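A sketch of such a running risk score, accruing with strikes and decaying over time (strike weights, decay, and the enforcement threshold are invented for illustration):

    STRIKE_WEIGHTS = {"hate": 0.5, "misinformation": 0.3, "deepfake": 0.6}

    def updated_risk(current_risk: float, strike_type: str,
                     decay: float = 0.95) -> float:
        # older strikes fade slightly; new strikes add weight
        return current_risk * decay + STRIKE_WEIGHTS[strike_type]

    risk = 0.0
    for strike in ["misinformation", "hate", "deepfake"]:
        risk = updated_risk(risk, strike)
        print(f"after {strike}: risk={risk:.2f}, enforce={risk >= 1.0}")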

H) YouTube's recommender should be trained not to recommend videos on sensitive topics and not to include them in trending, so that their dissemination is limited and abusive content is less likely to reach a wide audience.

I) YouTube should attach a fact-check label to political videos until the video is fact-checked. YouTube should immediately block channels that are compromised, to protect the user experience.

Trade-off

While creating mechanisms to stop abusive content, YouTube needs to ensure that it does not undermine freedom of speech. Therefore it is critical that the algorithms are trained to minimise false positives.

 

 

 

 

 
