WhatsApp has zero tolerance for child sexual exploitation and abuse, and we ban users when we become aware they are sharing content that exploits or endangers children. We also report violating content and accounts to the National Center for Missing and Exploited Children (NCMEC), which refers these CyberTips to law enforcement agencies around the world. We have features and controls that help prevent exploitation and abuse, and we employ a dedicated team that includes experts in law enforcement, online safety policy, investigations, and technology development to oversee these efforts.
WhatsApp was built for personal messaging. To protect the privacy and security of our users, we provide end-to-end encryption by default, which means only the sender and recipient can see the content of messages. To keep our users safe, we work to prevent abuse from occurring in the first place. Unlike in public spaces, on WhatsApp you cannot search for people you do not know. You need someone’s phone number to connect with them, and the first time you get a message from someone outside your address book, we ask if you want to block or report them. 90% of messages sent on WhatsApp are between two people, and the average group size is fewer than 10 people. We give users control to decide who can add them to groups, and we cap the number of chats a message can be forwarded to at once to help slow the spread of harmful viral content.
We also work with app store providers to prevent the proliferation of apps that contain child exploitative imagery (CEI) or that attempt to connect people interested in sharing this type of content via group invite links. We restrict the listing of invite links by popular search engines.
To further combat child sexual exploitation, WhatsApp relies on all available unencrypted information, including user reports, to detect and prevent this kind of abuse, and we are constantly improving our detection technology.
Our detection methods include the use of advanced automated technology, including photo- and video-matching technology, to proactively scan unencrypted information such as profile and group photos and user reports for known CEI. We have additional technology to detect new, unknown CEI within this unencrypted information. We also use machine learning classifiers to both scan text surfaces, such as user profiles and group descriptions, and evaluate group information and behavior for suspected CEI sharing.
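At its core, the photo-matching approach described above compares a fingerprint of an image on an unencrypted surface against a database of fingerprints of known violating images. The sketch below is a generic, highly simplified illustration of that lookup pattern, not WhatsApp's actual implementation: it uses an exact cryptographic hash (SHA-256), whereas production systems use perceptual hashing (such as PhotoDNA) that remains robust when an image is resized or re-encoded. The hash values and function names here are hypothetical.

```python
import hashlib

# Hypothetical set of fingerprints of known violating images.
# In a real system this would be a large, vetted hash database;
# the entry below is an illustrative placeholder only.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if this image's fingerprint is in the known-hash set.

    A cryptographic hash only catches byte-for-byte copies; perceptual
    hashes are used in practice so that minor edits still match.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# An image from an unencrypted surface (e.g. a profile photo) is
# hashed and looked up against the database.
print(matches_known_hash(b"example-known-image-bytes"))  # exact copy: True
print(matches_known_hash(b"some-other-image-bytes"))     # no match: False
```

Detecting new, previously unknown material cannot rely on a lookup like this; that is where the machine learning classifiers mentioned above come in, scoring images, text, and behavioral signals rather than matching fingerprints.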
Along with our proactive detection work, WhatsApp encourages users to report problematic content to us. Users can also block or report an individual account or group at any time. Learn more about how to stay safe on WhatsApp in our Help Center.
Using these techniques, WhatsApp bans more than 300,000 accounts per month for suspected CEI sharing.
WhatsApp recognizes the work that law enforcement agencies do to keep people safe around the world. We engage regularly with law enforcement agencies to ensure they know how to contact us and understand how to make requests of WhatsApp. Our Information for Law Enforcement Authorities resource center includes an online system for law enforcement to securely submit these legal requests.
When WhatsApp becomes aware of CEI on the platform, we ban the accounts involved. We also remove the images and report them along with associated account details to NCMEC in compliance with U.S. law.
When NCMEC refers matters to law enforcement for investigation, WhatsApp is prepared to respond to valid law enforcement requests accordingly. WhatsApp has received feedback from law enforcement that our efforts have assisted in rescuing victims of child abuse.
February 2021 (account ban statistics based on Q4 2020 analysis)