torchrec

Name: torchrec
Version: 1.3.0
Home page: https://github.com/pytorch/torchrec
Summary: TorchRec: PyTorch library for recommendation systems
Upload time: 2025-09-13 22:51:07
Maintainer: TroyGarden
Docs URL: None
Author: TorchRec Team
Requires Python: >=3.9
License: BSD-3
Keywords: pytorch, recommendation systems, sharding, distributed training
Requirements: black, click, cmake, fbgemm-gpu, hypothesis, importlib-metadata, iopath, numpy, pandas, pyre-extensions, scikit-build, tensordict, torchmetrics, torchx, tqdm, usort, parameterized, PyYAML, expecttest

# TorchRec

**TorchRec** is a PyTorch domain library built to provide common sparsity and parallelism primitives needed for large-scale recommender systems (RecSys). TorchRec allows training and inference of models with large embedding tables sharded across many GPUs and **powers many production RecSys models at Meta**.
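To make the "sharded across many GPUs" claim concrete, here is a minimal sketch (not taken from this README) of wrapping an `EmbeddingBagCollection` in `DistributedModelParallel`. It assumes a standard `torchrun` launch with NCCL available; the table names, feature names, and sizes are purely illustrative.

```python
import os

import torch
import torch.distributed as dist
import torchrec
from torchrec.distributed import DistributedModelParallel

# Assumes the usual torchrun environment variables (RANK, WORLD_SIZE, LOCAL_RANK, ...).
dist.init_process_group(backend="nccl")
device = torch.device(f"cuda:{int(os.environ['LOCAL_RANK'])}")
torch.cuda.set_device(device)

# Two illustrative embedding tables; names and sizes are made up.
ebc = torchrec.EmbeddingBagCollection(
    tables=[
        torchrec.EmbeddingBagConfig(
            name="t_user", embedding_dim=64, num_embeddings=1_000_000, feature_names=["user_id"]
        ),
        torchrec.EmbeddingBagConfig(
            name="t_item", embedding_dim=64, num_embeddings=1_000_000, feature_names=["item_id"]
        ),
    ],
    device=torch.device("meta"),  # tables are materialized shard-by-shard during wrapping
)

# DistributedModelParallel shards the embedding tables across the participating GPUs,
# generating a sharding plan automatically when none is provided.
model = DistributedModelParallel(module=ebc, device=device)
```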

## External Presence
TorchRec has been used to accelerate advancements in recommendation systems; some examples include:
* [Latest version of Meta's DLRM (Deep Learning Recommendation Model)](https://github.com/facebookresearch/dlrm) is built using TorchRec
* [Disaggregated Multi-Tower: Topology-aware Modeling Technique for Efficient Large-Scale Recommendation](https://arxiv.org/abs/2403.00877) paper
* [The Algorithm ML](https://github.com/twitter/the-algorithm-ml) from Twitter
* [Training Recommendation Models with Databricks](https://docs.databricks.com/en/machine-learning/train-recommender-models.html)
* [Toward 100TB model with Embedding Offloading Paper](https://dl.acm.org/doi/10.1145/3640457.3688037)


## Introduction

To begin learning about TorchRec, check out:
* Our complete [TorchRec Tutorial](https://pytorch.org/tutorials/intermediate/torchrec_intro_tutorial.html)
* The [TorchRec documentation](https://pytorch.org/torchrec/) for an overview of TorchRec and API references


### TorchRec Features
- Parallelism primitives that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism/model-parallelism.
- Sharders to shard embedding tables with different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, column-wise, and table-wise-column-wise sharding.
- Planner that can automatically generate optimized sharding plans for models.
- Pipelined training that overlaps dataloading, device transfer (copy to GPU), inter-device communication (input_dist), and computation (forward, backward) for increased performance.
- Optimized kernels for RecSys powered by [FBGEMM](https://github.com/pytorch/FBGEMM/tree/main).
- Quantization support for reduced precision training and inference, along with optimizing a TorchRec model for C++ inference.
- Common modules for RecSys (a minimal usage sketch follows this list).
- RecSys datasets (Criteo click logs and MovieLens).
- Examples of end-to-end training, such as the DLRM event-prediction model trained on the Criteo click logs dataset.
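As a quick illustration of the common modules and the jagged (variable-length) sparse input format, here is a minimal CPU-only sketch, not taken from this README; the table name, feature name, and sizes are illustrative.

```python
import torch
import torchrec

# One illustrative table serving one sparse feature; sizes are made up.
ebc = torchrec.EmbeddingBagCollection(
    tables=[
        torchrec.EmbeddingBagConfig(
            name="t_movie",
            embedding_dim=16,
            num_embeddings=1000,
            feature_names=["movie_id"],
        )
    ],
    device=torch.device("cpu"),
)

# A KeyedJaggedTensor holds a variable number of sparse ids per example:
# example 0 has ids [42, 7], example 1 has id [3].
kjt = torchrec.KeyedJaggedTensor.from_lengths_sync(
    keys=["movie_id"],
    values=torch.tensor([42, 7, 3]),
    lengths=torch.tensor([2, 1]),
)

pooled = ebc(kjt)                # KeyedTensor of pooled embeddings per feature
print(pooled["movie_id"].shape)  # torch.Size([2, 16])
```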


## Installation

Check out the [Getting Started](https://pytorch.org/torchrec/setup-torchrec.html) section in the documentation for recommended ways to set up TorchRec.

### From Source

**Generally, there isn't a need to build from source**. For most use cases, follow the section above to set up TorchRec. However, to build from source and to get the latest changes, do the following:

1. Install PyTorch. See the [PyTorch documentation](https://pytorch.org/get-started/locally/).
   ```
   CUDA 12.6

   pip install torch --index-url https://download.pytorch.org/whl/nightly/cu126

   CUDA 12.8

   pip install torch --index-url https://download.pytorch.org/whl/nightly/cu128

   CUDA 12.9

   pip install torch --index-url https://download.pytorch.org/whl/nightly/cu129

   CPU

   pip install torch --index-url https://download.pytorch.org/whl/nightly/cpu
   ```

2. Clone TorchRec.
   ```
   git clone --recursive https://github.com/pytorch/torchrec
   cd torchrec
   ```

3. Install FBGEMM.
   ```
   CUDA 12.6

   pip install fbgemm-gpu --index-url https://download.pytorch.org/whl/nightly/cu126

   CUDA 12.8

   pip install fbgemm-gpu --index-url https://download.pytorch.org/whl/nightly/cu128

   CUDA 12.9

   pip install fbgemm-gpu --index-url https://download.pytorch.org/whl/nightly/cu129

   CPU

   pip install fbgemm-gpu --index-url https://download.pytorch.org/whl/nightly/cpu
   ```

4. Install other requirements.
   ```
   pip install -r requirements.txt
   ```

5. Install TorchRec.
   ```
   python setup.py install develop
   ```

6. Test the installation.
   ```
   GPU mode

   torchx run -s local_cwd dist.ddp -j 1x2 --gpu 2 --script test_installation.py

   CPU Mode

   torchx run -s local_cwd dist.ddp -j 1x2 --script test_installation.py -- --cpu_only
   ```
   See [TorchX](https://pytorch.org/torchx/) for more information on launching distributed and remote jobs.

7. To run a more complex example, take a look at the TorchRec [DLRM example](https://github.com/facebookresearch/dlrm/blob/main/torchrec_dlrm/dlrm_main.py).

## Contributing

See [CONTRIBUTING.md](https://github.com/pytorch/torchrec/blob/main/CONTRIBUTING.md) for details about contributing to TorchRec!

## Citation

If you're using TorchRec, please use the BibTeX entry below to cite this work:
```
@inproceedings{10.1145/3523227.3547387,
author = {Ivchenko, Dmytro and Van Der Staay, Dennis and Taylor, Colin and Liu, Xing and Feng, Will and Kindi, Rahul and Sudarshan, Anirudh and Sefati, Shahin},
title = {TorchRec: a PyTorch Domain Library for Recommendation Systems},
year = {2022},
isbn = {9781450392785},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3523227.3547387},
doi = {10.1145/3523227.3547387},
abstract = {Recommendation Systems (RecSys) comprise a large footprint of production-deployed AI today. The neural network-based recommender systems differ from deep learning models in other domains in using high-cardinality categorical sparse features that require large embedding tables to be trained. In this talk we introduce TorchRec, a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production. In this talk we cover the building blocks of the TorchRec library including modeling primitives such as embedding bags and jagged tensors, optimized recommender system kernels powered by FBGEMM, a flexible sharder that supports a variety of strategies for partitioning embedding tables, a planner that automatically generates optimized and performant sharding plans, support for GPU inference and common modeling modules for building recommender system models. TorchRec library is currently used to train large-scale recommender models at Meta. We will present how TorchRec helped Meta’s recommender system platform to transition from CPU asynchronous training to accelerator-based full-sync training.},
booktitle = {Proceedings of the 16th ACM Conference on Recommender Systems},
pages = {482–483},
numpages = {2},
keywords = {information retrieval, recommender systems},
location = {Seattle, WA, USA},
series = {RecSys '22}
}
```

## License
TorchRec is BSD licensed, as found in the [LICENSE](LICENSE) file.

            
