NSFW content moderation

Update: A blogger recently published a comprehensive comparison of all the major NSFW moderation APIs, and Nanonets came out on top.

AI-Powered Content Moderator

The repercussions of hosting objectionable content on your platform or website can be severe, and can cripple a business outright. Detecting and filtering such unsavoury content not only keeps your business safe but also offers a significant competitive edge. While AI-powered moderation is still in its early stages, models trained and tailored to specific needs are already useful for all practical purposes.

One of our current customers uses the Nanonets NSFW model to automatically identify and remove inappropriate images from their social website. The next section briefly describes how Nanonets helped them solve this problem.

Manual Moderation

Given the sheer volume of data that must be reviewed, manually combing through all of it would require an unreasonable number of human moderators. For instance, more than 50 million images are uploaded to our customer's social website every month, which translates to roughly 1.7 million images per day. At a review rate of about 800 images per moderator per hour (see the comparison table below), a day's uploads would take over 2,100 moderator-hours, i.e. a full-time team of several hundred people. Since hiring that many human moderators would not be economically viable, they manually reviewed only a small subset of the total images: the ones flagged as "inappropriate" by users.

Human-reviewed content moderation suffers from several drawbacks. Humans cannot match the speed at which a machine can analyse images. Classifying images into "reject" (NSFW) and "accept" (Safe for Work) is highly subjective and prone to inconsistency when labelled or reviewed by different people. Another often-neglected side effect of using human annotators is the mental health problems they can develop through prolonged exposure to such disturbing content.


Automated Moderation using Nanonets API

What counts as inappropriate depends on factors such as geographical location and the policies of the social platform (you might want to allow suggestive nudity but not explicit nudity). Because of this, there is no one model that fits all. We tackle this problem by building a custom model for each customer's needs: each custom NSFW model is built by fine-tuning the pre-trained Nanonets NSFW model on the customer's own data. This way the model can output customer-specific labels (e.g. gore, porn, nudity, safe, etc.).
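For illustration, here is a minimal sketch of what this kind of fine-tuning looks like in PyTorch. The base architecture (ResNet-50), the folder layout, the label set, and the hyperparameters are assumptions made for the example, not details of the Nanonets production pipeline.

```python
# Illustrative fine-tuning sketch (assumed setup, not the Nanonets
# production pipeline). Expects one folder per customer-specific label,
# e.g. data/train/{safe,nudity,gore,porn}.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
num_classes = len(train_data.classes)  # labels inferred from folder names

# Start from a network pre-trained on a large image corpus, then replace
# the final layer so the model predicts the customer-specific classes.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
```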

For this particular customer, we trained a model to classify an image into two categories: NSFW and SFW. The model was trained on 20,000 images per class, and the trained model was deployed onto the customer's infrastructure as a Docker image.
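As a rough sketch of what consuming such a deployment looks like, the snippet below posts an uploaded image to a model container running locally. The port, the /predict route, and the response shape are assumptions for illustration; the actual container exposes its own documented endpoint.

```python
# Hypothetical client for a model served from a local Docker container,
# e.g. started with: docker run -p 8080:8080 <customer-model-image>
# The endpoint and response format below are assumed for illustration.
import requests

with open("upload.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8080/predict",
        files={"file": ("upload.jpg", f, "image/jpeg")},
    )

result = response.json()  # e.g. {"label": "NSFW", "confidence": 0.97}
if result["label"] == "NSFW":
    print("Image flagged for removal")
```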



Nanonets Impact

| Metric                   | Manual (human-reviewed)   | With Nanonets API   |
|--------------------------|---------------------------|---------------------|
| Rate of analysing images | 800 images/moderator/hour | 200,000 images/hour |
| Cost of reviewing images | $$$                       | $                   |