# SOTA Recommender Systems Library
A modern, production-ready Python library for building state-of-the-art recommender systems. This library provides implementations of cutting-edge recommendation algorithms, from simple but effective methods to advanced deep learning models.
[Python 3.8+](https://www.python.org/downloads/)
[MIT License](https://opensource.org/licenses/MIT)
## Features
### ✨ SOTA Algorithms
- **Simple but Effective**
  - 🚀 **EASE** - Embarrassingly Shallow Autoencoders (closed-form solution, incredibly fast)
  - 📊 **SLIM** - Sparse Linear Methods with L1/L2 regularization
- **Matrix Factorization**
  - 📐 **SVD** - Singular Value Decomposition
  - ⭐ **SVD++** - SVD with implicit feedback
  - 🔄 **ALS** - Alternating Least Squares for implicit feedback
- **Deep Learning** (requires PyTorch)
  - 🧠 **NCF** - Neural Collaborative Filtering (GMF + MLP)
  - 🔗 **LightGCN** - Graph Neural Network for recommendations ✅
  - 📝 **SASRec** - Self-Attentive Sequential Recommendations ✅
### 🛠️ Production-Ready Features
- **Comprehensive Evaluation Metrics**: Precision@K, Recall@K, NDCG@K, MAP@K, MRR, Hit Rate, Coverage, Diversity
- **Data Processing**: Built-in dataset loaders (MovieLens, Amazon, etc.), negative sampling, preprocessing
- **Flexible Architecture**: Unified API for all models, easy to extend
- **Performance**: Optimized for both speed and accuracy
## Installation
### Basic Installation
```bash
pip install .
```
### With Deep Learning Support
```bash
pip install -r requirements.txt
```
## Quick Start
```python
from recommender import (
    EASERecommender,
    load_movielens,
    InteractionDataset,
    Evaluator
)
# Load data
df = load_movielens(size='100k')
# Create dataset
dataset = InteractionDataset(df, implicit=True)
train, test = dataset.split(test_size=0.2)
# Train model
model = EASERecommender(l2_reg=500.0)
model.fit(train.data)
# Generate recommendations
user_ids = [1, 2, 3]
recommendations = model.recommend(user_ids, k=10)
# Evaluate
evaluator = Evaluator(metrics=['precision', 'recall', 'ndcg'])
results = evaluator.evaluate(model, test, task='ranking', train_data=train)
evaluator.print_results(results)
```
## Usage Examples
### 1. EASE - Fast and Effective
EASE is perfect for large-scale implicit feedback datasets. It has a closed-form solution, making it extremely fast.
```python
from recommender import EASERecommender, load_movielens, InteractionDataset
# Load MovieLens data
df = load_movielens(size='1m')
dataset = InteractionDataset(df, implicit=True, min_user_interactions=5)
# Train/test split
train, test = dataset.split(test_size=0.2, strategy='random')
# Train EASE
model = EASERecommender(l2_reg=500.0)
model.fit(train.data)
# Get recommendations
recommendations = model.recommend([1, 2, 3], k=10, exclude_seen=True)
print(recommendations)
# Save model
model.save('ease_model.pkl')
```
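For intuition, the closed-form solution behind EASE (Steck, WWW '19) can be sketched in a few lines of NumPy. This is an illustrative sketch of the math only, not the library's actual implementation:

```python
import numpy as np

# X: binary user-item interaction matrix (n_users x n_items); random toy data here.
rng = np.random.default_rng(42)
X = (rng.random((100, 20)) < 0.2).astype(float)

l2_reg = 500.0
G = X.T @ X + l2_reg * np.eye(X.shape[1])  # regularized item-item Gram matrix
P = np.linalg.inv(G)
B = P / (-np.diag(P))        # B[i, j] = -P[i, j] / P[j, j]
np.fill_diagonal(B, 0.0)     # zero diagonal: an item must not predict itself

scores = X @ B               # recommendation scores for every user at once
```

The single matrix inverse is the entire "training" step, which is why EASE is so fast relative to iterative methods.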
### 2. SLIM - Sparse Item-Item Model
SLIM learns a sparse item-item similarity matrix, providing interpretable recommendations.
```python
from recommender import SLIMRecommender
# Train SLIM
model = SLIMRecommender(
    l1_reg=0.1,  # L1 regularization for sparsity
    l2_reg=0.1,  # L2 regularization
    max_iter=100,
    positive_only=True
)
model.fit(train.data)
# Get similar items
similar_items = model.get_similar_items(item_id=123, k=10)
print(f"Items similar to 123: {similar_items}")
```
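Conceptually, SLIM learns its item-item matrix one column at a time: each item's interaction vector is regressed on all other items under an elastic-net penalty (Ning & Karypis, ICDM '11). A minimal sketch using scikit-learn's `ElasticNet`, which may differ from the library's own solver:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Toy binary interaction matrix (n_users x n_items).
rng = np.random.default_rng(0)
X = (rng.random((200, 15)) < 0.3).astype(float)
n_items = X.shape[1]

W = np.zeros((n_items, n_items))
for j in range(n_items):
    A = X.copy()
    A[:, j] = 0.0  # exclude the target item from its own predictors
    reg = ElasticNet(alpha=0.1, l1_ratio=0.5, positive=True,
                     fit_intercept=False, max_iter=500)
    reg.fit(A, X[:, j])
    W[:, j] = reg.coef_  # sparse, non-negative column of item-item weights

scores = X @ W  # recommendation scores from the learned similarities
```

Because each column is fit independently, this loop parallelizes trivially across items.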
### 3. SVD++ - Matrix Factorization with Implicit Feedback
SVD++ incorporates implicit feedback for better predictions on explicit ratings.
```python
from recommender import SVDPlusPlusRecommender
# Load explicit ratings
df = load_movielens(size='100k') # Contains ratings 1-5
dataset = InteractionDataset(df, implicit=False)
train, test = dataset.split(test_size=0.2)
# Train SVD++
model = SVDPlusPlusRecommender(
    n_factors=20,
    n_epochs=20,
    lr=0.005,
    reg=0.02
)
model.fit(train.data)
# Predict ratings
user_ids = [1, 1, 2]
item_ids = [10, 20, 30]
predictions = model.predict(user_ids, item_ids)
print(f"Predicted ratings: {predictions}")
```
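The SVD++ prediction rule (Koren, KDD '08) is r̂ = μ + b_u + b_i + qᵢ · (p_u + |N(u)|^(-1/2) Σⱼ yⱼ), where N(u) is the set of items the user interacted with. A toy NumPy sketch with illustrative values, not the library's internals:

```python
import numpy as np

rng = np.random.default_rng(1)
n_factors = 4
mu = 3.5                                # global mean rating
b_u, b_i = 0.1, -0.2                    # user and item biases
p_u = rng.normal(0, 0.1, n_factors)     # explicit user factors
q_i = rng.normal(0, 0.1, n_factors)     # item factors
N_u = [2, 5, 7]                         # items this user interacted with
y = rng.normal(0, 0.1, (10, n_factors)) # implicit item factors

# Implicit-feedback term: normalized sum of y_j over the user's items.
implicit = y[N_u].sum(axis=0) / np.sqrt(len(N_u))
r_hat = mu + b_u + b_i + q_i @ (p_u + implicit)
```

The implicit term is what lets SVD++ use *which* items were rated, not just the rating values themselves.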
### 4. ALS - Implicit Feedback at Scale
ALS is excellent for large-scale implicit feedback datasets.
```python
from recommender import ALSRecommender
# Train ALS
model = ALSRecommender(
    n_factors=50,
    n_iterations=15,
    reg=0.01,
    alpha=40.0  # Confidence scaling
)
model.fit(train.data)
# Get recommendations
recommendations = model.recommend([1, 2, 3], k=20)
```
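The `alpha` parameter comes from the implicit-ALS confidence model (Hu et al., ICDM '08): c_ui = 1 + α·r_ui, and each user's factors solve a small weighted least-squares problem. One user update can be sketched as follows (illustrative only, not the library's implementation):

```python
import numpy as np

rng = np.random.default_rng(7)
n_items, n_factors, alpha, reg = 30, 8, 40.0, 0.01
Y = rng.normal(0, 0.1, (n_items, n_factors))  # current item factors
r_u = np.zeros(n_items)
r_u[[3, 11, 19]] = 1.0                        # one user's interaction counts

c_u = 1.0 + alpha * r_u                       # confidence weights
p_u = (r_u > 0).astype(float)                 # binary preferences

# Solve (Y^T C_u Y + reg*I) x_u = Y^T C_u p_u for this user's factors.
A = Y.T @ (c_u[:, None] * Y) + reg * np.eye(n_factors)
b = Y.T @ (c_u * p_u)
x_u = np.linalg.solve(A, b)

scores_u = Y @ x_u                            # this user's item scores
```

Unobserved items still contribute with confidence 1, which is why ALS scales well on implicit data: every entry is modeled, but cheaply.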
### 5. NCF - Deep Learning (requires PyTorch)
Neural Collaborative Filtering combines matrix factorization with deep learning.
```python
from recommender import NCFRecommender
# Train NCF
model = NCFRecommender(
    embedding_dim=64,
    hidden_layers=[128, 64, 32],
    learning_rate=0.001,
    batch_size=256,
    epochs=20,
    device='cuda'  # or 'cpu'
)
model.fit(train.data)
# Get recommendations
recommendations = model.recommend([1, 2, 3], k=10)
```
### 6. Custom Data Processing
```python
from recommender.data import (
    filter_by_interaction_count,
    binarize_implicit_feedback,
    create_sequences,
    temporal_split
)
import pandas as pd
# Load your custom data
df = pd.read_csv('your_data.csv')
# Filter sparse users/items
df = filter_by_interaction_count(
    df,
    min_user_interactions=5,
    min_item_interactions=5
)
# Convert to implicit feedback
df = binarize_implicit_feedback(df, threshold=4.0)
# Temporal split (if you have timestamps)
train, test = temporal_split(df, test_size=0.2)
```
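Conceptually, a temporal split just sorts interactions by timestamp and holds out the most recent fraction as the test set, so the model is always evaluated on "future" data. A minimal pandas sketch of the idea (not the library's `temporal_split` implementation):

```python
import pandas as pd

# Toy interaction log with timestamps.
df = pd.DataFrame({
    'user_id':   [1, 1, 2, 2, 3],
    'item_id':   [10, 11, 10, 12, 11],
    'timestamp': [100, 200, 150, 300, 250],
})

df = df.sort_values('timestamp')
cutoff = int(len(df) * (1 - 0.2))  # last 20% of interactions become the test set
train_df, test_df = df.iloc[:cutoff], df.iloc[cutoff:]
```

Unlike a random split, every test interaction occurs after every training interaction, avoiding temporal leakage.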
### 7. Advanced Evaluation
```python
from recommender import Evaluator
# Create evaluator with custom metrics
evaluator = Evaluator(
    metrics=['precision', 'recall', 'ndcg', 'map', 'mrr', 'hit_rate', 'coverage', 'diversity'],
    k_values=[5, 10, 20, 50]
)
# Evaluate model
results = evaluator.evaluate(
    model,
    test_data=test,
    task='ranking',
    exclude_train=True,
    train_data=train
)
# Pretty print results
evaluator.print_results(results)
# Access specific metrics
ndcg_10 = results['ndcg@10']
recall_20 = results['recall@20']
```
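For intuition, the core ranking metrics can be computed by hand for a single user. This is an illustrative sketch of the standard definitions, not the `Evaluator`'s internals:

```python
import numpy as np

relevant = {2, 5, 9}       # the user's held-out test items
ranked = [5, 1, 2, 8, 9]   # model's top-K recommendation list
k = 5

# 1 if the item at each rank is relevant, else 0.
hits = [1.0 if item in relevant else 0.0 for item in ranked[:k]]
precision_k = sum(hits) / k
recall_k = sum(hits) / len(relevant)

# NDCG discounts hits logarithmically by rank and normalizes by the ideal ranking.
dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))
idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
ndcg_k = dcg / idcg
```

Averaging these per-user values over all test users gives the reported Precision@K, Recall@K, and NDCG@K.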
### 8. Cross-Validation
```python
from recommender import cross_validate
# Perform 5-fold cross-validation
cv_results = cross_validate(
    model_class=EASERecommender,
    dataset=dataset,
    n_folds=5,
    metrics=['precision', 'recall', 'ndcg'],
    k_values=[10, 20],
    l2_reg=500.0  # Model hyperparameters
)
```
### 9. Negative Sampling
```python
from recommender.data import UniformSampler, PopularitySampler, create_negative_samples
# Uniform negative sampling
sampler = UniformSampler(n_items=dataset.n_items, seed=42)
# Popularity-based sampling
item_popularity = train.data['item_id'].value_counts().to_dict()
sampler = PopularitySampler(n_items=dataset.n_items, item_popularity=item_popularity)
# Create training data with negatives
train_with_negatives = create_negative_samples(
    interactions_df=train.data,
    sampler=sampler,
    n_negatives_per_positive=4
)
```
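At its core, uniform negative sampling draws random item IDs and rejects any the user has already interacted with. A minimal standalone sketch of that idea (the names below are illustrative, not the library's sampler):

```python
import numpy as np

rng = np.random.default_rng(42)
n_items = 50
user_items = {1: {3, 7}, 2: {10}}  # observed positives per user

def sample_negatives(user_id, n_negatives):
    """Draw items the given user has NOT interacted with, uniformly at random."""
    seen = user_items[user_id]
    negatives = []
    while len(negatives) < n_negatives:
        candidate = int(rng.integers(0, n_items))
        if candidate not in seen:  # reject observed positives
            negatives.append(candidate)
    return negatives

negs = sample_negatives(1, 4)
```

Popularity-based sampling replaces the uniform draw with one weighted by item frequency, which yields harder negatives for ranking losses.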
## Benchmarks
Performance on MovieLens-1M (80/20 split, implicit feedback):
| Model | NDCG@10 | Recall@10 | Precision@10 | Training Time |
|-------|---------|-----------|--------------|---------------|
| EASE | 0.3845 | 0.2156 | 0.1723 | ~5s |
| SLIM | 0.3721 | 0.2089 | 0.1654 | ~2min |
| ALS | 0.3567 | 0.1998 | 0.1589 | ~30s |
| SVD | 0.3289 | 0.1845 | 0.1456 | ~10s |
| NCF | 0.3923 | 0.2234 | 0.1789 | ~5min |
*Note: Results may vary based on hyperparameters and hardware.*
## API Reference
### Core Classes
#### `BaseRecommender`
Abstract base class for all recommenders.
**Methods:**
- `fit(interactions)` - Train the model
- `predict(user_ids, item_ids)` - Predict scores for user-item pairs
- `recommend(user_ids, k, exclude_seen)` - Generate top-K recommendations
- `save(path)` - Save model to disk
- `load(path)` - Load model from disk
#### `InteractionDataset`
Dataset wrapper for user-item interactions.
**Methods:**
- `to_csr_matrix()` - Convert to sparse CSR matrix
- `split(test_size, val_size, strategy)` - Split into train/val/test
- `get_user_items(user_id)` - Get items for a user
#### `Evaluator`
Comprehensive model evaluation.
**Methods:**
- `evaluate(model, test_data, task)` - Evaluate model
- `evaluate_ranking(model, test_data)` - Ranking metrics
- `evaluate_rating_prediction(model, test_data)` - Rating prediction metrics
- `print_results(results)` - Pretty print results
### Models
All models inherit from `BaseRecommender` and follow the same API:
```python
model = ModelClass(**hyperparameters)
model.fit(train_data)
recommendations = model.recommend(user_ids, k=10)
```
**Available Models:**
- `EASERecommender`
- `SLIMRecommender`
- `SVDRecommender`
- `SVDPlusPlusRecommender`
- `ALSRecommender`
- `NCFRecommender` (requires PyTorch)
## Datasets
Built-in dataset loaders:
```python
from recommender.data import (
    load_movielens,
    load_amazon,
    load_book_crossing,
    create_synthetic_dataset
)
# MovieLens
df = load_movielens(size='100k') # '100k', '1m', '10m', '20m', '25m'
# Amazon Reviews
df = load_amazon(category='Books', max_reviews=100000)
# Book-Crossing
df = load_book_crossing()
# Synthetic data for testing
df = create_synthetic_dataset(n_users=1000, n_items=500, n_interactions=10000)
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Citation
If you use this library in your research, please cite:
```bibtex
@software{sota_recommender_library,
  author = {Lobachevskiy, Semen},
  title = {SOTA Recommender Systems Library},
  year = {2025},
  url = {https://github.com/hichnicksemen/svd-recommender}
}
```
## References
- **EASE**: Harald Steck. 2019. Embarrassingly Shallow Autoencoders for Sparse Data. WWW '19.
- **SLIM**: Xia Ning and George Karypis. 2011. SLIM: Sparse Linear Methods for Top-N Recommender Systems. ICDM '11.
- **SVD++**: Yehuda Koren. 2008. Factorization meets the neighborhood. KDD '08.
- **ALS**: Yifan Hu et al. 2008. Collaborative Filtering for Implicit Feedback Datasets. ICDM '08.
- **NCF**: Xiangnan He et al. 2017. Neural Collaborative Filtering. WWW '17.
- **LightGCN**: Xiangnan He et al. 2020. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. SIGIR '20.
- **SASRec**: Wang-Cheng Kang and Julian McAuley. 2018. Self-Attentive Sequential Recommendation. ICDM '18.
## Acknowledgments
This library builds upon research and implementations from the recommender systems community.