scikit-surprise


Name: scikit-surprise
Version: 1.1.4
Summary: An easy-to-use library for recommender systems.
Home page: None
Upload time: 2024-05-19 14:25:59
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.8
License: BSD 3-Clause (Copyright (c) 2016, Nicolas Hug)
Keywords: recommender, recommendation system
Requirements: No requirements were recorded.
[![GitHub version](https://badge.fury.io/gh/nicolashug%2FSurprise.svg)](https://badge.fury.io/gh/nicolashug%2FSurprise)
[![Documentation Status](https://readthedocs.org/projects/surprise/badge/?version=stable)](https://surprise.readthedocs.io/en/stable/?badge=stable)
[![python versions](https://img.shields.io/badge/python-3.8+-blue.svg)](https://surpriselib.com)
[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02174/status.svg)](https://doi.org/10.21105/joss.02174)

[![logo](./logo_black.svg)](https://surpriselib.com)

Overview
--------

[Surprise](https://surpriselib.com) is a Python
[scikit](https://projects.scipy.org/scikits.html) for building and analyzing
recommender systems that deal with explicit rating data.

[Surprise](https://surpriselib.com) **was designed with the
following purposes in mind**:

- Give users perfect control over their experiments. To this end, a strong
  emphasis is laid on
  [documentation](https://surprise.readthedocs.io/en/stable/index.html), which we
  have tried to make as clear and precise as possible by pointing out every
  detail of the algorithms.
- Alleviate the pain of [Dataset
  handling](https://surprise.readthedocs.io/en/stable/getting_started.html#load-a-custom-dataset).
  Users can use both *built-in* datasets
  ([Movielens](https://grouplens.org/datasets/movielens/),
  [Jester](https://eigentaste.berkeley.edu/dataset/)) and their own *custom*
  datasets (a minimal loading sketch follows this list).
- Provide various ready-to-use [prediction
  algorithms](https://surprise.readthedocs.io/en/stable/prediction_algorithms_package.html)
  such as [baseline
  algorithms](https://surprise.readthedocs.io/en/stable/basic_algorithms.html),
  [neighborhood
  methods](https://surprise.readthedocs.io/en/stable/knn_inspired.html), matrix
  factorization-based (
  [SVD](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVD),
  [PMF](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#unbiased-note),
  [SVD++](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVDpp),
  [NMF](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.NMF)),
  and [many
  others](https://surprise.readthedocs.io/en/stable/prediction_algorithms_package.html).
  Also, various [similarity
  measures](https://surprise.readthedocs.io/en/stable/similarities.html)
  (cosine, MSD, Pearson, ...) are built-in.
- Make it easy to implement [new algorithm
  ideas](https://surprise.readthedocs.io/en/stable/building_custom_algo.html).
- Provide tools to [evaluate](https://surprise.readthedocs.io/en/stable/model_selection.html),
  [analyse](https://nbviewer.jupyter.org/github/NicolasHug/Surprise/tree/master/examples/notebooks/KNNBasic_analysis.ipynb/)
  and
  [compare](https://nbviewer.jupyter.org/github/NicolasHug/Surprise/blob/master/examples/notebooks/Compare.ipynb)
  the algorithms' performance. Cross-validation procedures can be run very
  easily using powerful CV iterators (inspired by
  [scikit-learn](https://scikit-learn.org/)'s excellent tools), as well as
  [exhaustive search over a set of
  parameters](https://surprise.readthedocs.io/en/stable/getting_started.html#tune-algorithm-parameters-with-gridsearchcv).
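
As a concrete illustration of the custom-dataset point above, here is a minimal
loading sketch. It assumes a tab-separated ratings file in the
`user item rating timestamp` format; the file path below is only an example,
while `Reader` and `Dataset.load_from_file` are the standard Surprise entry
points for this.

```python
import os

from surprise import Dataset, Reader

# Hypothetical path to a tab-separated ratings file where each line looks
# like "user<TAB>item<TAB>rating<TAB>timestamp".
file_path = os.path.expanduser("~/my_ratings.data")

# Tell Surprise how each line is structured and what the rating scale is.
reader = Reader(line_format="user item rating timestamp", sep="\t",
                rating_scale=(1, 5))

# Build a Dataset that any Surprise algorithm can consume.
data = Dataset.load_from_file(file_path, reader=reader)
```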


The name *SurPRISE* (roughly :) ) stands for *Simple Python RecommendatIon
System Engine*.

Please note that Surprise does not support implicit ratings or content-based
information.


Getting started, example
------------------------

Here is a simple example showing how you can (down)load a dataset, split it for
5-fold cross-validation, and compute the MAE and RMSE of the
[SVD](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVD)
algorithm.


```python
from surprise import SVD
from surprise import Dataset
from surprise.model_selection import cross_validate

# Load the movielens-100k dataset (download it if needed).
data = Dataset.load_builtin('ml-100k')

# Use the famous SVD algorithm.
algo = SVD()

# Run 5-fold cross-validation and print results.
cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)
```

**Output**:

```
Evaluating RMSE, MAE of algorithm SVD on 5 split(s).

                  Fold 1  Fold 2  Fold 3  Fold 4  Fold 5  Mean    Std     
RMSE (testset)    0.9367  0.9355  0.9378  0.9377  0.9300  0.9355  0.0029  
MAE (testset)     0.7387  0.7371  0.7393  0.7397  0.7325  0.7375  0.0026  
Fit time          0.62    0.63    0.63    0.65    0.63    0.63    0.01    
Test time         0.11    0.11    0.14    0.14    0.14    0.13    0.02    
```

[Surprise](https://surpriselib.com) can do **much** more (e.g.,
[GridSearchCV](https://surprise.readthedocs.io/en/stable/getting_started.html#tune-algorithm-parameters-with-gridsearchcv))!
You'll find [more usage
examples](https://surprise.readthedocs.io/en/stable/getting_started.html) in the
[documentation](https://surprise.readthedocs.io/en/stable/index.html).
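
As a small taste of that, here is a hedged sketch of tuning SVD with
`GridSearchCV`; the parameter grid below is only illustrative, not a
recommended setting.

```python
from surprise import SVD, Dataset
from surprise.model_selection import GridSearchCV

data = Dataset.load_builtin("ml-100k")

# Example grid; values are illustrative, not tuned recommendations.
param_grid = {"n_epochs": [5, 10], "lr_all": [0.002, 0.005], "reg_all": [0.4, 0.6]}

gs = GridSearchCV(SVD, param_grid, measures=["rmse", "mae"], cv=3)
gs.fit(data)

print(gs.best_score["rmse"])   # best RMSE across the grid
print(gs.best_params["rmse"])  # parameter combination that achieved it

# Retrain the best configuration on the whole dataset.
algo = gs.best_estimator["rmse"]
algo.fit(data.build_full_trainset())
```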


Benchmarks
----------

Here are the average RMSE, MAE and total execution time of various algorithms
(with their default parameters) on a 5-fold cross-validation procedure. The
datasets are the [Movielens](https://grouplens.org/datasets/movielens/) 100k and
1M datasets. The folds are the same for all the algorithms. All experiments are
run on a laptop with an 11th Gen Intel i5 CPU at 2.60 GHz. The code
for generating these tables can be found in the [benchmark
example](https://github.com/NicolasHug/Surprise/tree/master/examples/benchmark.py).
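
For reference, here is a minimal sketch of how such a same-folds comparison
could be set up with Surprise's CV iterators. The tables below were produced by
the benchmark script linked above; the algorithm selection in this sketch is
just a subset.

```python
from surprise import SVD, KNNBaseline, BaselineOnly, NormalPredictor, Dataset
from surprise.model_selection import KFold, cross_validate

data = Dataset.load_builtin("ml-100k")

# Fixing random_state makes every algorithm see the same 5 folds.
kf = KFold(n_splits=5, random_state=0)

for algo in (SVD(), KNNBaseline(), BaselineOnly(), NormalPredictor()):
    results = cross_validate(algo, data, measures=["rmse", "mae"], cv=kf,
                             verbose=False)
    print(
        algo.__class__.__name__,
        results["test_rmse"].mean(),
        results["test_mae"].mean(),
    )
```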

| [Movielens 100k](http://grouplens.org/datasets/movielens/100k)                                                                         |   RMSE |   MAE | Time    |
|:---------------------------------------------------------------------------------------------------------------------------------------|-------:|------:|:--------|
| [SVD](http://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVD)      |  0.934 | 0.737 | 0:00:06 |
| [SVD++ (cache_ratings=False)](http://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVDpp)  |  0.919 | 0.721 | 0:01:39 |
| [SVD++ (cache_ratings=True)](http://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVDpp)  |  0.919 | 0.721 | 0:01:22 |
| [NMF](http://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.NMF)      |  0.963 | 0.758 | 0:00:06 |
| [Slope One](http://surprise.readthedocs.io/en/stable/slope_one.html#surprise.prediction_algorithms.slope_one.SlopeOne)                 |  0.946 | 0.743 | 0:00:09 |
| [k-NN](http://surprise.readthedocs.io/en/stable/knn_inspired.html#surprise.prediction_algorithms.knns.KNNBasic)                        |  0.98  | 0.774 | 0:00:08 |
| [Centered k-NN](http://surprise.readthedocs.io/en/stable/knn_inspired.html#surprise.prediction_algorithms.knns.KNNWithMeans)           |  0.951 | 0.749 | 0:00:09 |
| [k-NN Baseline](http://surprise.readthedocs.io/en/stable/knn_inspired.html#surprise.prediction_algorithms.knns.KNNBaseline)            |  0.931 | 0.733 | 0:00:13 |
| [Co-Clustering](http://surprise.readthedocs.io/en/stable/co_clustering.html#surprise.prediction_algorithms.co_clustering.CoClustering) |  0.963 | 0.753 | 0:00:06 |
| [Baseline](http://surprise.readthedocs.io/en/stable/basic_algorithms.html#surprise.prediction_algorithms.baseline_only.BaselineOnly)   |  0.944 | 0.748 | 0:00:02 |
| [Random](http://surprise.readthedocs.io/en/stable/basic_algorithms.html#surprise.prediction_algorithms.random_pred.NormalPredictor)    |  1.518 | 1.219 | 0:00:01 |


| [Movielens 1M](https://grouplens.org/datasets/movielens/1m)                                                                             |   RMSE |   MAE | Time    |
|:----------------------------------------------------------------------------------------------------------------------------------------|-------:|------:|:--------|
| [SVD](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVD)      |  0.873 | 0.686 | 0:01:07 |
| [SVD++ (cache_ratings=False)](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVDpp)  |  0.862 | 0.672 | 0:41:06 |
| [SVD++ (cache_ratings=True)](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVDpp)  |  0.862 | 0.672 | 0:34:55 |
| [NMF](https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.NMF)      |  0.916 | 0.723 | 0:01:39 |
| [Slope One](http://surprise.readthedocs.io/en/stable/slope_one.html#surprise.prediction_algorithms.slope_one.SlopeOne)                 |  0.907 | 0.715 | 0:02:31 |
| [k-NN](http://surprise.readthedocs.io/en/stable/knn_inspired.html#surprise.prediction_algorithms.knns.KNNBasic)                        |  0.923 | 0.727 | 0:05:27 |
| [Centered k-NN](http://surprise.readthedocs.io/en/stable/knn_inspired.html#surprise.prediction_algorithms.knns.KNNWithMeans)           |  0.929 | 0.738 | 0:05:43 |
| [k-NN Baseline](http://surprise.readthedocs.io/en/stable/knn_inspired.html#surprise.prediction_algorithms.knns.KNNBaseline)            |  0.895 | 0.706 | 0:05:55 |
| [Co-Clustering](http://surprise.readthedocs.io/en/stable/co_clustering.html#surprise.prediction_algorithms.co_clustering.CoClustering) |  0.915 | 0.717 | 0:00:31 |
| [Baseline](http://surprise.readthedocs.io/en/stable/basic_algorithms.html#surprise.prediction_algorithms.baseline_only.BaselineOnly)   |  0.909 | 0.719 | 0:00:19 |
| [Random](http://surprise.readthedocs.io/en/stable/basic_algorithms.html#surprise.prediction_algorithms.random_pred.NormalPredictor)    |  1.504 | 1.206 | 0:00:19 |

Installation
------------

With pip (you'll need [numpy](https://www.numpy.org/) and a C compiler; Windows
users might prefer using conda):

    $ pip install numpy
    $ pip install scikit-surprise

With conda:

    $ conda install -c conda-forge scikit-surprise

For the latest version, you can also clone the repo and build the source
(you'll first need [Cython](https://cython.org/) and
[numpy](https://www.numpy.org/)):

    $ pip install numpy cython
    $ git clone https://github.com/NicolasHug/surprise.git
    $ cd surprise
    $ python setup.py install

License and reference
---------------------

This project is licensed under the [BSD
3-Clause](https://opensource.org/licenses/BSD-3-Clause) license, so it can be
used for pretty much everything, including commercial applications.

I'd love to know how Surprise is useful to you. Please don't hesitate to open
an issue and describe how you use it!

Please make sure to cite the
[paper](https://joss.theoj.org/papers/10.21105/joss.02174) if you use
Surprise for your research:

    @article{Hug2020,
      doi = {10.21105/joss.02174},
      url = {https://doi.org/10.21105/joss.02174},
      year = {2020},
      publisher = {The Open Journal},
      volume = {5},
      number = {52},
      pages = {2174},
      author = {Nicolas Hug},
      title = {Surprise: A Python library for recommender systems},
      journal = {Journal of Open Source Software}
    }

Contributors
------------

The following persons have contributed to [Surprise](https://surpriselib.com):

ashtou, Abhishek Bhatia, bobbyinfj, caoyi, Chieh-Han Chen,  Raphael-Dayan, Олег
Демиденко, Charles-Emmanuel Dias, dmamylin, Lauriane Ducasse, Marc Feger,
franckjay, Lukas Galke, Tim Gates, Pierre-François Gimenez, Zachary Glassman,
Jeff Hale, Nicolas Hug, Janniks, jyesawtellrickson, Doruk Kilitcioglu, Ravi Raju
Krishna, lapidshay, Hengji Liu, Ravi Makhija, Maher Malaeb, Manoj K, James
McNeilis, Naturale0, nju-luke, Pierre-Louis Pécheux, Jay Qi, Lucas Rebscher,
Craig Rodrigues, Skywhat, Hercules Smith, David Stevens, Vesna Tanko,
TrWestdoor, Victor Wang, Mike Lee Williams, Jay Wong, Chenchen Xu, YaoZh1918.

Thanks a lot :) !

Development Status
------------------

Starting from version 1.1.0 (September 2019), I will only maintain the package,
provide bugfixes, and perhaps make occasional performance improvements. I have
less time to dedicate to it now, so I'm unable to consider new features.

For bugs, issues or questions about [Surprise](https://surpriselib.com), please
avoid sending me emails (I will most likely not be able to answer). Please use
the GitHub [project page](https://github.com/NicolasHug/Surprise) instead, so
that others can also benefit from it.

            
