fer 22.5.1 (PyPI)

Summary: Facial expression recognition from images
Home page: https://github.com/justinshenk/fer
Author / maintainer: Justin Shenk
License: MIT
Requires Python: >= 3.6
Uploaded: 2023-06-08 16:28:01
Keywords: facial expressions, emotion detection, faces, images
Requirements: matplotlib, keras, opencv-python, opencv-contrib-python, pandas, Pillow, requests, facenet-pytorch, tqdm, moviepy, ffmpeg
FER
===

Facial expression recognition.

![image](https://github.com/justinshenk/fer/raw/master/result.jpg)

[![PyPI version](https://badge.fury.io/py/fer.svg)](https://badge.fury.io/py/fer) [![Build Status](https://travis-ci.org/justinshenk/fer.svg?branch=master)](https://travis-ci.org/justinshenk/fer) [![Downloads](https://pepy.tech/badge/fer)](https://pepy.tech/project/fer)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](http://colab.research.google.com/github/justinshenk/fer/blob/master/fer-video-demo.ipynb)

[![DOI](https://zenodo.org/badge/150107943.svg)](https://zenodo.org/badge/latestdoi/150107943)


INSTALLATION
============

FER supports Python 3.6 and above. It can be installed
with pip:

```bash
$ pip install fer
```

This implementation requires OpenCV \>= 3.2 and TensorFlow \>= 1.7.0
installed on the system, with Python 3 bindings.

They can be installed through pip (pip version \>= 9.0.1):

```bash
$ pip install "tensorflow>=1.7" opencv-contrib-python==3.3.0.9
```

or compiled directly from sources
([OpenCV3](https://github.com/opencv/opencv/archive/3.4.0.zip),
[Tensorflow](https://www.tensorflow.org/install/install_sources)).

Note that a tensorflow-gpu build can be used instead if a GPU
is available on the system, which speeds up inference. It can be
installed with pip:

```bash
$ pip install "tensorflow-gpu>=1.7.0"
```
To process videos that include sound, the ffmpeg and moviepy packages must be installed with pip:

```bash
$ pip install ffmpeg moviepy 
```

USAGE
=====

The following example illustrates the ease of use of this package:

```python
from fer import FER
import cv2

img = cv2.imread("justin.jpg")
detector = FER()
result = detector.detect_emotions(img)
```

Sample output:
```
[{'box': [277, 90, 48, 63], 'emotions': {'angry': 0.02, 'disgust': 0.0, 'fear': 0.05, 'happy': 0.16, 'neutral': 0.09, 'sad': 0.27, 'surprise': 0.41}}]
```

Pretty print it with `import pprint; pprint.pprint(result)`.

Just want the top emotion? Try:

```python
emotion, score = detector.top_emotion(img) # 'happy', 0.99
```
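If you already have the raw `detect_emotions` output, the top emotion can also be computed directly from the scores dictionary. A minimal sketch using the sample output above (the helper function is ours for illustration, not part of the fer API):

```python
def top_emotion(detection):
    """Return the (label, score) pair with the highest score."""
    emotions = detection["emotions"]
    label = max(emotions, key=emotions.get)
    return label, emotions[label]

# The sample detection from above.
detection = {
    "box": [277, 90, 48, 63],
    "emotions": {"angry": 0.02, "disgust": 0.0, "fear": 0.05,
                 "happy": 0.16, "neutral": 0.09, "sad": 0.27,
                 "surprise": 0.41},
}
print(top_emotion(detection))  # ('surprise', 0.41)
```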

#### MTCNN Facial Recognition

By default, faces are detected with OpenCV's Haar Cascade classifier. To use the more accurate
MTCNN network, pass the parameter:

```python
detector = FER(mtcnn=True)
```

#### Video
For recognizing facial expressions in video, the `Video` class splits the video into frames. It can use a local Keras model (the default) or the Peltarion API as the backend:

```python
from fer import Video
from fer import FER

video_filename = "tests/woman2.mp4"
video = Video(video_filename)

# Analyze video, displaying the output
detector = FER(mtcnn=True)
raw_data = video.analyze(detector, display=True)
df = video.to_pandas(raw_data)
```
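The per-frame results can also be reduced without pandas. A hedged sketch, assuming `raw_data` is a list with one list of detections per frame (the mock data below stands in for real analyzer output):

```python
def dominant_emotion_per_frame(raw_data):
    """For each frame, return the highest-scoring emotion of the
    first detected face, or None if no face was found."""
    out = []
    for frame in raw_data:
        if not frame:
            out.append(None)
            continue
        emotions = frame[0]["emotions"]
        out.append(max(emotions, key=emotions.get))
    return out

# Mock data: two frames with one face each, one frame with no face.
mock = [
    [{"box": [0, 0, 10, 10], "emotions": {"happy": 0.8, "sad": 0.2}}],
    [],
    [{"box": [0, 0, 10, 10], "emotions": {"happy": 0.1, "sad": 0.9}}],
]
print(dominant_emotion_per_frame(mock))  # ['happy', None, 'sad']
```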

The detector returns a list of JSON objects. Each JSON object contains
two keys: 'box' and 'emotions':

-   The bounding box is formatted as [x, y, width, height] under the key
    'box'.
-   The emotions are formatted into a JSON object with the keys 'angry',
    'disgust', 'fear', 'happy', 'sad', 'surprise', and 'neutral'.
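Since the box is `[x, y, width, height]`, drawing it with most imaging libraries first requires converting it to corner coordinates. A small helper (ours for illustration, not part of fer), using the box from the sample output above:

```python
def box_to_corners(box):
    """Convert an [x, y, width, height] box to its
    ((x1, y1), (x2, y2)) top-left and bottom-right corners."""
    x, y, w, h = box
    return (x, y), (x + w, y + h)

print(box_to_corners([277, 90, 48, 63]))  # ((277, 90), (325, 153))
```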

Another good usage example can be found in
[demo.py](demo.py), located in the root of this repository.

To run the examples, install click for the command line with `pip install click` and enter `python demo.py [image|video|webcam] --help`.

TF-SERVING
==========

FER supports running inference against a TensorFlow Serving Docker image.

To use it, run `docker-compose up` and initialize FER with `FER(..., tfserving=True)`.

MODEL
=====

FER bundles a Keras model.

The model is a convolutional neural network with weights saved to HDF5
file in the `data` folder relative to the module's path. It can be
overridden by injecting it into the `FER()` constructor during
instantiation with the `emotion_model` parameter.

LICENSE
=======

[MIT License](LICENSE).

CREDIT
======

This code includes methods and package structure copied or derived from
Iván de Paz Centeno's [implementation](https://github.com/ipazc/mtcnn/)
of MTCNN and Octavio Arriaga's [facial expression recognition
repo](https://github.com/oarriaga/face_classification/).

REFERENCE
---------

FER 2013 dataset curated by Pierre Luc Carrier and Aaron Courville, described in:

"Challenges in Representation Learning: A report on three machine learning contests," by Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio, [arXiv:1307.0414](https://arxiv.org/abs/1307.0414).

            
