The Web 2.0 revolution has led to an explosion of content generated on the internet every day. Social sharing platforms such as Facebook, Twitter, and Instagram have seen astonishing growth in their daily active users but have been at their wits' end when it comes to moderating the content their users generate. Users upload inappropriate content such as nudity or use abusive language while commenting on posts. Such behavior leads to social issues like bullying and revenge porn and also undermines the authenticity of the platform. However, content is generated online at such a pace today that it is nearly impossible to monitor everything manually. On Facebook alone, 136,000 photos are uploaded, 510,000 comments are posted, and 293,000 statuses are updated every 60 seconds. At ParallelDots, we tackled this problem with Machine Learning by building algorithms that can classify nude photos (nudity detection) or abusive content with very high accuracy.
In one of our previous blog posts, we discussed how our text analytics APIs can identify spam and bot accounts on Twitter and prevent them from adding bias to Twitter analysis. Adding another important tool for content moderation, we have released two new APIs – the Nudity Detection API and the Abusive Content Classifier API.
Nudity Detection Classifier
Dataset: Nude and non-nude photos were crawled from different internet sites to build the dataset. We crawled around 200,000 nude images from various nude-picture forums and websites, while non-nude human images were sourced from Wikipedia. As a result, we were able to build a large dataset to train the Nudity Detection classifier.
Architecture: We chose the ResNet50 architecture, proposed by Kaiming He et al. in 2016, for the classifier. The dataset crawled from the internet was randomly split into a train (80%), validation (10%), and test (10%) set. The classifier, trained on the train set with hyperparameters tuned on the validation set, achieves slightly over 95% accuracy.
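The random 80/10/10 split can be sketched in plain Python; the function name and the toy record list below are illustrative, not our actual data pipeline:

```python
import random

def train_val_test_split(items, seed=42):
    """Randomly split a list of items into 80% train, 10% validation, 10% test."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = items[:]         # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Example: split 200,000 hypothetical (image_path, label) records
records = [(f"img_{i}.jpg", i % 2) for i in range(200_000)]
train, val, test = train_val_test_split(records)
```

Fixing the seed matters here: it keeps the test set identical across runs, so accuracy numbers remain comparable while hyperparameters are tuned on the validation set.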
Abusive Content Classifier
Dataset: Similar to the Nudity Detection classifier, the abuse classifier's dataset was built by collecting abusive content from the internet, specifically Twitter. We identified certain hashtags associated with abusive and offensive language, and other hashtags associated with non-abusive language. These tweets were then manually checked to ensure they were labeled correctly.
Architecture: We used Long Short-Term Memory (LSTM) networks to train the abuse classifier. LSTMs model sentences as a chain of forget-remember decisions based on context. By training it on Twitter data, we gave it the ability to handle vague, poorly written tweets full of smileys and spelling mistakes, and still understand the semantics of the content well enough to classify it as abusive.
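To make the "forget-remember" intuition concrete, here is a minimal single LSTM cell step in plain Python, with scalar states and random stand-in weights rather than our trained model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step for scalar input/state, showing the three gates.

    W maps each gate name to weights (w_x, w_h, b).
    """
    gate = lambda name, act: act(W[name][0] * x + W[name][1] * h_prev + W[name][2])
    f = gate("f", sigmoid)    # forget gate: how much old memory to keep
    i = gate("i", sigmoid)    # input gate: how much new info to write
    o = gate("o", sigmoid)    # output gate: how much memory to expose
    g = gate("g", math.tanh)  # candidate new memory content
    c = f * c_prev + i * g    # the "forget-remember" decision
    h = o * math.tanh(c)      # new hidden state passed along the chain
    return h, c

# Run a toy 3-token "sentence" of scalar embeddings through the cell
random.seed(0)
W = {name: (random.uniform(-1, 1), random.uniform(-1, 1), 0.0)
     for name in ("f", "i", "o", "g")}
h, c = 0.0, 0.0
for x in [0.5, -1.0, 0.25]:
    h, c = lstm_step(x, h, c, W)
```

In a real model the states are vectors, the weights are learned, and the final hidden state feeds a classification layer; the point of the sketch is only the per-step gating that lets the network keep or discard context.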
Putting the classifier to work: Use case for content moderation
The abusive content and nudity detection classifiers are powerful tools to filter inappropriate content out of social media feeds, forums, messaging apps, etc. Here we discuss some use cases where these classifiers can be put to work.
Feeds of User Generated content
If you own a mobile app or a website where users actively post photos or comments, you are probably already having a hard time keeping the feed free from abusive content or nude pictures. The current best practice of letting your users flag this content is unreliable and time-consuming, and requires a team of human moderators to check each flagged item and act accordingly. Deploying the Abuse and Nudity Detection classifiers on such apps can improve your response time in handling such content. A perfect scenario is one where the system flags the content as inappropriate and alerts a moderator even before it makes its way to the public feed. If the moderator finds that the content was mistakenly classified as nude or abusive (a false positive), she can authorize the content to go live. Such a machine-augmented human moderation system can ensure that your feeds are clean of inappropriate content and your brand reputation remains intact.
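A flag-before-publish pipeline of this kind might look like the sketch below; `classify_score` is a keyword-based stand-in for a real call to the classifier API, and the threshold value is illustrative:

```python
from collections import deque

REVIEW_THRESHOLD = 0.5  # illustrative cutoff; tune against your own false-positive rate

def classify_score(post):
    """Stand-in for a real classifier call; returns a confidence score in [0, 1]."""
    flagged_words = {"abuse", "nsfw"}
    words = set(post["text"].lower().split())
    return 0.9 if words & flagged_words else 0.1

def moderate(post, public_feed, review_queue):
    """Route a post: publish it directly, or hold it for a human moderator."""
    score = classify_score(post)
    if score >= REVIEW_THRESHOLD:
        review_queue.append(post)   # held back; a moderator is alerted
    else:
        public_feed.append(post)    # low risk: goes live immediately

feed, queue = [], deque()
moderate({"text": "lovely sunset photo"}, feed, queue)
moderate({"text": "this is abuse"}, feed, queue)
```

The key design choice is that a high score routes content to a review queue rather than deleting it outright, which is what lets a human moderator catch false positives before anything is lost.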
One of the internet's biggest inventions has been the ability to dynamically generate content in the form of opinions, comments, Q&As, etc. on forums. A downside, however, is that these forums are often replete with spam and abusive content, leading to issues like bullying. With users hiding behind a wall of anonymity on many of these forums, such content can have a disastrous impact on teenagers and students, sometimes even leading to suicidal tendencies. Using the abuse classifier can help forum owners moderate content and potentially ban users who are repeat offenders.
Similar to forum moderation, one can use the Abuse classifier to keep the comments section of a blog free from abusive content. News media websites are all struggling to keep their content safe and abuse-free as they cover controversial topics like immigration, terrorism, and unemployment. Keeping the comment section clean of abusive or offensive content is one of the top priorities of every news publisher today, and the abuse classifier can play a significant role in combating this menace.
Crowdsourced digital marketing campaigns
Digital marketing campaigns that rely on crowdsourced content, like Doritos' "Crash the Super Bowl" contest, have proven to be a very effective way to drive conversation between brands and consumers. However, content uploaded by consumers in such contests must be monitored carefully to protect brand reputation. Manual verification of each and every submission is tedious; ParallelDots' Nudity Detection and Abusive Content classifiers can be used to flag nude and abusive content automatically.
Filtering Naked content in digital ads
Ad exchanges have grown in popularity with the explosion of digital content creation and remain the only source of monetization for a majority of blogs, forums, mobile apps, etc. A flip side of this, however, is that ads of major brands can end up being shown on websites containing naked content, damaging their brand reputation. In one such instance, ads for Farmers Insurance were served on a site called DrunkenStepfather.com, thanks largely to the growth of exchange-based ad buying. The site's tagline is "We like to have fun with pretty girls"; it is hardly an appropriate venue for serving Farmers Insurance ads.
Ad exchanges and ad servers can integrate ParallelDots' Nudity Detection classifier API to identify publishers or advertisers serving nude pictures and restrict ad delivery before it snowballs into a PR crisis.
How to use Nudity Detection Classifier?
ParallelDots' Nudity Detection classifier is available as an API to integrate with existing applications. The API accepts an image or a piece of text and flags it as nude or abusive content, respectively, in real time. Try the Nudity Detection API directly in the browser by uploading a picture here, and check out the Abusive Content Classifier demo available here. Dive into the API documentation for the Nudity Detection and Abusive Content classifiers, or check out the GitHub repo to get started with API wrappers in a language of your choice.
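A minimal call to the abuse endpoint might look like the sketch below; the base URL, endpoint name, and parameter names are assumptions based on common REST conventions, so verify the exact values in the API documentation:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://apis.paralleldots.com/v4"  # hypothetical base URL; check the docs

def build_request(endpoint, api_key, **fields):
    """Build a POST request for a classifier endpoint (parameter names are assumptions)."""
    data = urllib.parse.urlencode({"api_key": api_key, **fields}).encode()
    return urllib.request.Request(f"{API_BASE}/{endpoint}", data=data, method="POST")

def check_abuse(text, api_key):
    """Send text to the abusive-content endpoint and return the parsed JSON response."""
    req = build_request("abuse", api_key, text=text)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (without sending) a request, to show the wire format
req = build_request("abuse", "YOUR_API_KEY", text="you are a wonderful person")
```

The same `build_request` helper would serve the nudity endpoint by swapping the endpoint name and sending an image URL or file instead of a text field.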
Both classifiers compute a confidence score on a 0-to-1 scale for their prediction. A score of 1 means the content is most likely abusive or nude, with very high confidence, while a score close to 0 implies the algorithm is not very confident in its prediction.