author    Rahiel Kasim <rahielkasim@gmail.com>  2017-06-15 13:11:01 +0200
committer Rahiel Kasim <rahielkasim@gmail.com>  2017-06-15 13:11:01 +0200
commit    7506e568418ad044d566595fe8cf27353848c8e2 (patch)
tree      d4909183fabbcecd548bc7950b6f6a57f550c845
parent    86fbd5b4d43a3c90992878697929287bfff24e2f (diff)
update README
-rw-r--r--  README.md  90
1 file changed, 43 insertions(+), 47 deletions(-)
diff --git a/README.md b/README.md
index 6cc183e..9782570 100644
--- a/README.md
+++ b/README.md
@@ -1,47 +1,43 @@
-# Open nsfw model
-This repo contains code for running Not Suitable for Work (NSFW) image classification with deep neural network Caffe models.
-
-#### Not suitable for work classifier
-Detecting offensive / adult images is an important problem which researchers have tackled for decades. With the evolution of computer vision and deep learning, the algorithms have matured and we are now able to classify an image as not suitable for work with greater precision.
-
-Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.
-
-Since images and user-generated content dominate the internet today, filtering nudity and other not suitable for work images becomes an important problem. In this repository we open-source a Caffe deep neural network for preliminary filtering of NSFW images.
-
-#### Usage
-
-* The network takes in an image and outputs a probability (a score between 0 and 1) which can be used to filter not suitable for work images. Scores < 0.2 indicate that the image is likely to be safe with high probability. Scores > 0.8 indicate that the image is highly probable to be NSFW. Scores in the middle range may be binned into different NSFW levels.
-* Depending on the dataset, use case and types of images, we advise developers to choose suitable thresholds. Due to the difficult nature of the problem, there will be errors, which depend on the use case / definition / tolerance of NSFW. Ideally, developers should create an evaluation set according to the definition of what is safe for their application, then use a [ROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) curve to choose a suitable threshold if they are using the model as is (see the sketch after this list).
-* ***Results can be improved by [fine-tuning](http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html)*** the model for your dataset / use case / definition of NSFW. We do not provide any guarantees of accuracy of results. Please read the disclaimer below.
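-
-As an illustrative sketch (not part of the original tooling), assuming you have a labelled evaluation set where `y_true` marks NSFW images and `y_score` holds the model's outputs, scikit-learn's `roc_curve` can suggest a threshold:
-
-```
-# Hypothetical threshold selection from a labelled evaluation set.
-# y_true: 1 = NSFW, 0 = SFW; y_score: model scores in [0, 1].
-from sklearn.metrics import roc_curve
-
-y_true = [0, 0, 1, 1, 1]              # example labels
-y_score = [0.1, 0.35, 0.4, 0.8, 0.9]  # example model scores
-
-fpr, tpr, thresholds = roc_curve(y_true, y_score)
-# Pick the threshold maximizing Youden's J = TPR - FPR.
-best = max(range(len(thresholds)), key=lambda i: tpr[i] - fpr[i])
-print("suggested threshold:", thresholds[best])
-```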
-
-#### Description of model
-We trained the model on a dataset with NSFW images as positives and SFW (suitable for work) images as negatives. These images were editorially labelled. We cannot release the dataset or other details due to the nature of the data.
-
-For our experiments we use [CaffeOnSpark](https://github.com/yahoo/CaffeOnSpark), a wonderful framework for distributed learning that brings deep learning to Hadoop and Spark clusters. Big thanks to the CaffeOnSpark team!
-
-The deep model was first pretrained on the ImageNet 1000-class dataset. Then we fine-tuned the weights on the NSFW dataset.
-We used the thin resnet 50 1by2 architecture as the pretrained network. The model was generated with the [pynetbuilder](https://github.com/jay-mahadeokar/pynetbuilder) tool and replicates the [residual network](https://arxiv.org/pdf/1512.03385v1.pdf) paper's 50-layer network (with half the number of filters in each layer). You can find more details on how the model was generated and trained [here](https://github.com/jay-mahadeokar/pynetbuilder/tree/master/models/imagenet).
-
-Please note that deeper networks, or networks with more filters, can improve accuracy. We train the model using a thin residual network architecture since it provides a good trade-off: reasonable accuracy, while the model stays light-weight in terms of runtime (or flops) and memory (or number of parameters).
-
-#### Running the model
-To run this model, please install [Caffe](https://github.com/BVLC/caffe) and its Python extension, and make sure pycaffe is available in your PYTHONPATH.
-
-We can use the [classify.py](https://github.com/BVLC/caffe/blob/master/python/classify.py) script to run the NSFW model. For convenience, we have provided the script in this repo as well, and it prints the NSFW score.
-
- ```
- python ./classify_nsfw.py \
- --model_def nsfw_model/deploy.prototxt \
- --pretrained_model nsfw_model/resnet_50_1by2_nsfw.caffemodel \
- INPUT_IMAGE_PATH
- ```
-
-#### ***Disclaimer***
-The definition of NSFW is subjective and contextual. This model is a general-purpose reference model which can be used for the preliminary filtering of pornographic images. We do not provide guarantees of accuracy of output; rather, we make this available for developers to explore and enhance as an open source project. Results can be improved by [fine-tuning](http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html) the model for your dataset.
-
-#### License
-Code licensed under the [BSD 2-clause license](https://github.com/BVLC/caffe/blob/master/LICENSE). See the LICENSE file for terms.
-
-#### Contact
-The model was trained by [Jay Mahadeokar](https://github.com/jay-mahadeokar/), in collaboration with [Sachin Farfade](https://github.com/sachinfarfade/), [Amar Ramesh Kamat](https://github.com/amar-kamat), [Armin Kappeler](https://github.com/akappeler) and others. Special thanks to Gerry Pesavento for taking the initiative to open-source this model. If you have any queries, please raise an issue and we will get back ASAP.
-
+# open_nsfw--
+
+This is a fork of Yahoo's [open_nsfw][]. The goal is to make the *Not Suitable
+for Work* (NSFW) classification model easily accessible through an HTTP API
+deployable with Docker.
+
+[Install Docker][docker] (available in Debian as [docker.io][dpkg]) and build
+the image (this might take a while):
+``` shell
+docker build -t open_nsfw https://raw.githubusercontent.com/rahiel/open_nsfw--/master/Dockerfile
+```
+
+Then you can start the API:
+``` shell
+docker run -p <port>:8080 open_nsfw
+```
+where you replace `<port>` with the port number at which you want the API to
+be accessible on your local machine.
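+
+For example, a minimal invocation that runs the container detached (`-d`, a
+standard Docker flag) and serves the API on port 8080:
+``` shell
+# Map host port 8080 to the container's port 8080 and run in the background.
+docker run -d -p 8080:8080 open_nsfw
+```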
+
+[open_nsfw]: https://github.com/yahoo/open_nsfw
+[docker]: https://docs.docker.com/engine/installation/
+[dpkg]: https://packages.debian.org/sid/docker.io
+
+# Usage
+
+The API is very simple: you POST a `url` of an image, and the API will then
+fetch the image, classify it and return the probability that it is NSFW. The
+probability is expressed as a real number between 0 and 1.
+
+In the following examples I assume you picked 8080 for the port number, so the
+API is running at `localhost:8080`.
+
+With curl:
+``` shell
+curl -d 'url=http://example.com/image.jpg' localhost:8080
+```
+
+With Python:
+``` python
+import requests
+r = requests.post("http://localhost:8080", data={"url": "http://example.com/image.jpg"})
+nsfw_prob = float(r.text)
+```