rec-pangu

Name: rec-pangu
Version: 0.4.1
Home page: https://github.com/HaSai666/rec_pangu
Summary: Some Rank/Multi-task model implemented by Pytorch
Upload time: 2023-07-26 07:38:14
Author: wk
Maintainer: (not specified)
Docs URL: None
Requires Python: (not specified)
License: (not specified)
Keywords: rank, multi task, deep learning, pytorch, recsys, recommendation
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.

# Rec PanGu
[![stars](https://img.shields.io/github/stars/HaSai666/rec_pangu?color=097abb)](https://github.com/HaSai666/rec_pangu/stargazers)
[![issues](https://img.shields.io/github/issues/HaSai666/rec_pangu?color=097abb)](https://github.com/HaSai666/rec_pangu/issues)
[![license](https://img.shields.io/github/license/HaSai666/rec_pangu?color=097abb)](https://github.com/HaSai666/rec_pangu/blob/main/LICENSE)
<img src='https://img.shields.io/badge/python-3.7+-brightgreen'>
<img src='https://img.shields.io/badge/torch-1.7+-brightgreen'>
<img src='https://img.shields.io/badge/scikit_learn-0.23.2+-brightgreen'>
<img src='https://img.shields.io/badge/pandas-1.0.5+-brightgreen'>
<img src='https://img.shields.io/badge/pypi-0.2.4-brightgreen'>
<a href="https://wakatime.com/badge/user/4f5f529d-94ee-4a12-94de-38e886b0219b/project/5b1e1c2d-5596-4335-937e-f2b5515a7fab"><img src="https://wakatime.com/badge/user/4f5f529d-94ee-4a12-94de-38e886b0219b/project/5b1e1c2d-5596-4335-937e-f2b5515a7fab.svg" alt="wakatime"></a>
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/27fce22d928f412bb7cb3dab813ab481)](https://app.codacy.com/gh/HaSai666/rec_pangu/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[![Downloads](https://pepy.tech/badge/rec-pangu)](https://pepy.tech/project/rec-pangu)
[![Downloads](https://pepy.tech/badge/rec-pangu/month)](https://pepy.tech/project/rec-pangu)
[![Downloads](https://pepy.tech/badge/rec-pangu/week)](https://pepy.tech/project/rec-pangu)
## 1. Project Goals
- Classic ranking/multi-task models are implemented in PyTorch and exposed through a unified API, which greatly reduces the time cost of putting ranking/multi-task models to work.
- Everything is implemented in PyTorch so that newcomers to recommender systems can more easily follow the core ideas of each algorithm.
- Many excellent open-source projects of this kind already exist; for the most generic modules we referred to that prior work, and we are very grateful to those contributors.

<img src='pic/overview.png'>

## 2. Installation
```bash
# Latest version (from source)
git clone https://github.com/HaSai666/rec_pangu.git
cd rec_pangu
pip install -e . --verbose

# Stable version (from PyPI)
pip install rec_pangu --upgrade
```
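
To verify the install, you can import the package and print the installed distribution version. This is a minimal sketch using only the standard library; `importlib.metadata` needs Python 3.8+, and on 3.7 the `importlib-metadata` backport exposes the same function.

```python
# Sanity-check the installation: import the package and report the installed version.
from importlib.metadata import version  # Python 3.8+; use the importlib-metadata backport on 3.7

import rec_pangu  # raises ImportError if the install failed

print("rec_pangu version:", version("rec-pangu"))
```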
## 3. Ranking Models

| Model   | Paper | Year | 
|---------|------|------|
| WDL     | [Wide & Deep Learning for Recommender Systems](https://arxiv.org/pdf/1606.07792)     | 2016 | 
| DeepFM  | [DeepFM: A Factorization-Machine based Neural Network for CTR Prediction](https://arxiv.org/pdf/1703.04247)     | 2017 | 
| NFM     | [Neural Factorization Machines for Sparse Predictive Analytics](https://arxiv.org/pdf/1708.05027.pdf)              | 2017 | 
| FiBiNet | [FiBiNET: Combining Feature Importance and Bilinear Feature Interaction for Click-Through Rate](https://arxiv.org/pdf/1905.09433.pdf) | 2019 | 
| AFM     | [Attentional Factorization Machines](https://arxiv.org/pdf/1708.04617)                  | 2017 | 
| AutoInt | [AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks](https://arxiv.org/pdf/1810.11921.pdf)              | 2018 | 
| CCPM    | [A Convolutional Click Prediction Model](http://www.shuwu.name/sw/Liu2015CCPM.pdf)    | 2015 |
| LR      | /  | /    | 
| FM      | /  | /    | 
| xDeepFM | [xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems](https://arxiv.org/pdf/1803.05170.pdf)     | 2018 |
| DCN     | [Deep & Cross Network for Ad Click Predictions](https://arxiv.org/pdf/1708.05123.pdf) | 2017 | 
| MaskNet | [MaskNet: Introducing Feature-Wise Multiplication to CTR Ranking Models by Instance-Guided Mask](https://arxiv.org/pdf/2102.07619.pdf) | 2021 | 
## 4. Multi-Task Models

| Model       | Paper | Year | 
|-------------|---------------------------------------------------------------------------------------------------------------------------------------------|------|
| MMOE        | [Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts](https://dl.acm.org/doi/pdf/10.1145/3219819.3220007) | 2018 |
| ShareBottom | [Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts](https://dl.acm.org/doi/pdf/10.1145/3219819.3220007) | 2018 |
| ESSM        | [Entire Space Multi-Task Model: An Effective Approach for Estimating Post-Click Conversion Rate](https://arxiv.org/pdf/1804.07931.pdf)      | 2018 |
| OMOE        | [Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts](https://dl.acm.org/doi/pdf/10.1145/3219819.3220007) | 2018 |
| MLMMOE      | /                                                                                                                                           | /    |
| AITM        | [Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising](https://arxiv.org/pdf/2105.08489.pdf)| 2021 |

## 5. Sequential Recall Models

The following types of sequential recall models are currently supported:

- Classic sequential recall models
- Graph-based sequential recall models
- Multi-interest sequential recall models
- LLM-based sequential recall models


| Model           | Type                            | Paper                                                                                                                                                                                                      | Year | 
|-----------------|---------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|
| YotubeDNN       | Classic sequential recall       | [Deep Neural Networks for YouTube Recommendations](https://dl.acm.org/doi/pdf/10.1145/2959100.2959190?utm_campaign=Weekly%20dose%20of%20Machine%20Learning&utm_medium=email&utm_source=Revue%20newsletter) | 2016 |
| Gru4Rec         | Classic sequential recall       | [Session-based Recommendations with Recurrent Neural Networks](https://arxiv.org/pdf/1511.06939)                                                                                                           | 2015 |
| Narm            | Classic sequential recall       | [Neural Attentive Session-based Recommendation](https://arxiv.org/pdf/1711.04725)                                                                                                                          | 2017 |
| NextItNet       | Classic sequential recall       | [A Simple Convolutional Generative Network for Next Item Recommendation](https://arxiv.org/pdf/1808.05163)                                                                                                 | 2019 |
| ContraRec       | Sequential contrastive learning | [Sequential Recommendation with Multiple Contrast Signals](https://dl.acm.org/doi/pdf/10.1145/3522673)                                                                                                     |      |
| ComirecSA       | Multi-interest recall           | [Controllable Multi-Interest Framework for Recommendation](https://arxiv.org/pdf/2005.09347)                                                                                                               | 2020 |
| ComirecDR       | Multi-interest recall           | [Controllable Multi-Interest Framework for Recommendation](https://arxiv.org/pdf/2005.09347)                                                                                                               | 2020 |
| Mind            | Multi-interest recall           | [Multi-Interest Network with Dynamic Routing for Recommendation at Tmall](https://arxiv.org/pdf/1904.08030)                                                                                                | 2019 |
| Re4             | Multi-interest recall           | [Re4: Learning to Re-contrast, Re-attend, Re-construct for Multi-interest Recommendation](https://dl.acm.org/doi/10.1145/3485447.3512094)                                                                  | 2022 |
| CMI             | Multi-interest recall           | [Improving Micro-video Recommendation via Contrastive Multiple Interests](https://arxiv.org/pdf/2205.09593)                                                                                                | 2022 |
| SRGNN           | Graph-based sequential recall   | [Session-based Recommendation with Graph Neural Networks](https://ojs.aaai.org/index.php/AAAI/article/view/3804/3682)                                                                                      | 2019 |
| GC-SAN          | Graph-based sequential recall   | [Graph Contextualized Self-Attention Network for Session-based Recommendation](https://www.ijcai.org/proceedings/2019/0547.pdf)                                                                            | 2019 |
| NISER           | Graph-based sequential recall   | [NISER: Normalized Item and Session Representations to Handle Popularity Bias](https://arxiv.org/pdf/1909.04276)                                                                                           | 2019 |
| GCE-GNN(ToDo)   | Graph-based sequential recall   | [Global Context Enhanced Graph Neural Networks for Session-based Recommendation](https://arxiv.org/pdf/2106.05081.pdf)                                                                                     | 2020 |
| Recformer(ToDo) | LLM-based sequential recall     | [Text Is All You Need: Learning Language Representations for Sequential Recommendation](https://arxiv.org/pdf/2305.13731.pdf)                                                                              | 2023 |




## 6. Graph Collaborative Filtering Models


| Model          | Type                          | Paper                                                                                                                        | Year | 
|----------------|-------------------------------|------------------------------------------------------------------------------------------------------------------------------|------|
| NGCF(ToDo)     | Graph collaborative filtering | [Neural Graph Collaborative Filtering](https://arxiv.org/pdf/1905.08108.pdf)                                                | 2019 |
| LightGCN(ToDo) | Graph collaborative filtering | [LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation](https://arxiv.org/pdf/2002.02126.pdf)     | 2020 |
| NCL(ToDo)      | Graph contrastive learning    | [Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning](https://arxiv.org/pdf/2202.06200) | 2022 |
| SimGCL(ToDo)   | Graph contrastive learning    | [Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation](https://www.researchgate.net/profile/Junliang-Yu/publication/359788233_Are_Graph_Augmentations_Necessary_Simple_Graph_Contrastive_Learning_for_Recommendation/links/624e802ad726197cfd426f81/Are-Graph-Augmentations-Necessary-Simple-Graph-Contrastive-Learning-for-Recommendation.pdf?ref=https://githubhelp.com) | 2022 |
| SGL(ToDo)      | Graph contrastive learning    | [Self-supervised Graph Learning for Recommendation](https://arxiv.org/pdf/2010.10783)                                       | 2021 |


## 7. Demos
The ranking and multi-task models expose very similar APIs. Training metrics can also be monitored in real time with wandb (a minimal sketch is shown below). Demos for the ranking, multi-task, and sequential recall models follow.
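
The sketch below shows one way to track a training run with the plain wandb API (`wandb.init` / `wandb.log`). It is not a description of a built-in rec_pangu integration; the project name and the logged metric value are placeholders.

```python
# Minimal wandb tracking sketch (plain wandb API; not a rec_pangu-specific integration).
import wandb

wandb.init(project="rec-pangu-demo", config={"model": "xDeepFM", "lr": 1e-3, "epoch": 5})

# ... run trainer.fit(...) as in the demos below ...

# Log whatever you compute, e.g. the metric dict returned by trainer.evaluate_model().
wandb.log({"test_auc": 0.75})  # placeholder value for illustration
wandb.finish()
```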
### 7.1 Ranking Demo

```python
import torch
from rec_pangu.dataset import get_dataloader
from rec_pangu.models.ranking import WDL, DeepFM, NFM, FiBiNet, AFM, AFN, AOANet, AutoInt, CCPM, LR, FM, xDeepFM
from rec_pangu.trainer import RankTrainer
import pandas as pd

if __name__ == '__main__':
    df = pd.read_csv('sample_data/ranking_sample_data.csv')
    print(df.head())
    # Declare the data schema
    schema = {
        "sparse_cols": ['user_id', 'item_id', 'item_type', 'dayofweek', 'is_workday', 'city', 'county',
                        'town', 'village', 'lbs_city', 'lbs_district', 'hardware_platform', 'hardware_ischarging',
                        'os_type', 'network_type', 'position'],
        "dense_cols": ['item_expo_1d', 'item_expo_7d', 'item_expo_14d', 'item_expo_30d', 'item_clk_1d',
                       'item_clk_7d', 'item_clk_14d', 'item_clk_30d', 'use_duration'],
        "label_col": 'click',
    }
    # Prepare the data; the sample has only 100 rows, so no train/valid/test split is done here
    train_df = df
    valid_df = df
    test_df = df

    # Select the device
    device = torch.device('cpu')
    # Build the dataloaders
    train_loader, valid_loader, test_loader, enc_dict = get_dataloader(train_df, valid_df, test_df, schema)
    # Create the model; supported ranking models: WDL, DeepFM, NFM, FiBiNet, AFM, AFN, AOANet, AutoInt, CCPM, LR, FM, xDeepFM
    model = xDeepFM(enc_dict=enc_dict)
    # Create the trainer
    trainer = RankTrainer(num_task=1)
    # Train the model
    trainer.fit(model, train_loader, valid_loader, epoch=5, lr=1e-3, device=device)
    # Save the model weights
    trainer.save_model(model, './model_ckpt')
    # Evaluate on the test set
    test_metric = trainer.evaluate_model(model, test_loader, device=device)
    print('Test metric:{}'.format(test_metric))

```
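
The demo above reuses the same 100-row sample as train/valid/test data. On a real dataset you would split it first; a minimal sketch of an 80/10/10 split using scikit-learn (already a project dependency):

```python
# Split a full interaction log into train/valid/test before calling get_dataloader.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('sample_data/ranking_sample_data.csv')
train_df, rest_df = train_test_split(df, test_size=0.2, random_state=42)       # 80% train
valid_df, test_df = train_test_split(rest_df, test_size=0.5, random_state=42)  # 10% valid / 10% test
```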
### 7.2 Multi-Task Demo

```python
import torch
from rec_pangu.dataset import get_dataloader
from rec_pangu.models.multi_task import AITM, ShareBottom, ESSM, MMOE, OMOE, MLMMOE
from rec_pangu.trainer import RankTrainer
import pandas as pd

if __name__ == '__main__':
    df = pd.read_csv('sample_data/multi_task_sample_data.csv')
    print(df.head())
    # Declare the data schema
    schema = {
        "sparse_cols": ['user_id', 'item_id', 'item_type', 'dayofweek', 'is_workday', 'city', 'county',
                        'town', 'village', 'lbs_city', 'lbs_district', 'hardware_platform', 'hardware_ischarging',
                        'os_type', 'network_type', 'position'],
        "dense_cols": ['item_expo_1d', 'item_expo_7d', 'item_expo_14d', 'item_expo_30d', 'item_clk_1d',
                       'item_clk_7d', 'item_clk_14d', 'item_clk_30d', 'use_duration'],
        "label_col": ['click', 'scroll'],
    }
    # Prepare the data; the sample has only 100 rows, so no train/valid/test split is done here
    train_df = df
    valid_df = df
    test_df = df

    # Select the device
    device = torch.device('cpu')
    # Build the dataloaders
    train_loader, valid_loader, test_loader, enc_dict = get_dataloader(train_df, valid_df, test_df, schema)
    # Create the model; supported multi-task models: AITM, ShareBottom, ESSM, MMOE, OMOE, MLMMOE
    model = AITM(enc_dict=enc_dict)
    # Create the trainer
    trainer = RankTrainer(num_task=2)
    # Train the model
    trainer.fit(model, train_loader, valid_loader, epoch=5, lr=1e-3, device=device)
    # Save the model weights
    trainer.save_model(model, './model_ckpt')
    # Evaluate on the test set
    test_metric = trainer.evaluate_model(model, test_loader, device=device)
    print('Test metric:{}'.format(test_metric))
```
### 7.3 Sequential Recall Demo
```python
import torch
from rec_pangu.dataset import get_dataloader
from rec_pangu.models.sequence import ComirecSA,ComirecDR,MIND,CMI,Re4,NARM,YotubeDNN,SRGNN
from rec_pangu.trainer import SequenceTrainer
from rec_pangu.utils import set_device
import pandas as pd

if __name__=='__main__':
    # Declare the data schema
    schema = {
        'user_col': 'user_id',
        'item_col': 'item_id',
        'cate_cols': ['genre'],
        'max_length': 20,
        'time_col': 'timestamp',
        'task_type':'sequence'
    }
    # Model configuration
    config = {
        'embedding_dim': 64,
        'lr': 0.001,
        'K': 1,
        'device':-1,
    }
    config['device'] = set_device(config['device'])
    config.update(schema)

    # Sample data
    train_df = pd.read_csv('./sample_data/sample_train.csv')
    valid_df = pd.read_csv('./sample_data/sample_valid.csv')
    test_df = pd.read_csv('./sample_data/sample_test.csv')

    # Select the device
    device = torch.device('cpu')
    # Build the dataloaders
    train_loader, valid_loader, test_loader, enc_dict = get_dataloader(train_df, valid_df, test_df, schema, batch_size=50)
    # Create the model; supported sequential recall models: ComirecSA, ComirecDR, MIND, CMI, Re4, NARM, YotubeDNN, SRGNN
    model = ComirecSA(enc_dict=enc_dict, config=config)
    # Create the trainer
    trainer = SequenceTrainer(model_ckpt_dir='./model_ckpt')
    # Train the model
    trainer.fit(model, train_loader, valid_loader, epoch=500, lr=1e-3, device=device, log_rounds=10,
                use_earlystoping=True, max_patience=5, monitor_metric='recall@20')
    # Save the model weights and enc_dict
    trainer.save_all(model, enc_dict, './model_ckpt')
    # Evaluate on the test set
    test_metric = trainer.evaluate_model(model, test_loader, device=device)

```
### 7.4 Graph Collaborative Filtering Demo
TODO

            
