torchft-nightly 2025.7.11

Released: 2025-07-11 · Requires-Python: >=3.8 · PyPI: https://pypi.org/project/torchft-nightly/
            <p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="./media/torchft_logo_dark.svg">
    <img width="55%" src="./media/torchft_logo.svg" alt="torchft">
  </picture>
</p>

<h3 align="center">
Easy Per Step Fault Tolerance for PyTorch
</h3>

<p align="center">
  | <a href="https://pytorch.org/torchft/"><b>Documentation</b></a>
  | <a href="https://github.com/pytorch-labs/torchft/blob/main/media/fault_tolerance_poster.pdf"><b>Poster</b></a>
  | <a href="https://docs.google.com/document/d/1OZsOsz34gRDSxYXiKkj4WqcD9x0lP9TcsfBeu_SsOY4/edit"><b>Design Doc</b></a>
  |
</p>
<p align="center">
  <a href="https://pypi.org/project/torchft-nightly/"><img alt="PyPI - Version" src="https://img.shields.io/pypi/v/torchft-nightly"></a>
</p>

---

This repository implements techniques for per-step fault tolerance so you can
keep training even if errors occur, without interrupting the entire training
job.

[This is based on the large scale training techniques presented at PyTorch
Conference 2024.](./media/fault_tolerance_poster.pdf)

## Overview

torchft is designed to provide the primitives required to implement fault
tolerance in any application/train script as well as the primitives needed to
implement custom fault tolerance strategies.

Out of the box, torchft provides the following algorithms:

* Fault Tolerant DDP
* Fault Tolerant HSDP: fault tolerance across the replicated dimension with any mix of FSDP/TP/etc across the other dimensions.
* LocalSGD
* DiLoCo

To implement these, torchft provides some key reusable components (a rough
per-step sketch follows this list):

1. Coordination primitives that can determine which workers are healthy via
  heartbeating on a per-step basis
2. Fault tolerant ProcessGroup implementations that report errors sanely and
  can be reinitialized gracefully.
3. Checkpoint transports that can be used to do live recovery from a healthy
  peer when doing scale up operations.
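
As a rough illustration of how these pieces come together, the sketch below
shows the per-step pattern that the `Manager` enables (the DDP and Optimizer
wrappers used in the examples later in this README do this for you). It assumes
torchft is installed, a lighthouse is reachable via `TORCHFT_LIGHTHOUSE`, and
the process was launched with torchrun; the method names follow the `Manager`
API, but treat the callback bodies and defaults as illustrative assumptions and
defer to the API docs.

```py
import torch
from torch import nn, optim

from torchft import Manager, ProcessGroupGloo

model = nn.Linear(2, 3)
optimizer = optim.AdamW(model.parameters())

manager = Manager(
    pg=ProcessGroupGloo(),
    # callbacks used for live checkpoint recovery of a rejoining replica group
    load_state_dict=lambda sd: model.load_state_dict(sd),
    state_dict=lambda: model.state_dict(),
)

for step in range(10):
    # 1. Per-step heartbeat/quorum: agree on which replica groups are healthy.
    manager.start_quorum()

    optimizer.zero_grad()
    loss = model(torch.rand(8, 2)).sum()
    loss.backward()

    # 2. Fault tolerant allreduce of gradients: errors mark the step unhealthy
    #    instead of crashing the job (the DistributedDataParallel wrapper
    #    installs this as a communication hook for you).
    futures = [manager.allreduce(p.grad) for p in model.parameters()]
    for fut in futures:
        fut.wait()

    # 3. Only apply the update if the whole quorum succeeded this step
    #    (the Optimizer wrapper performs this check inside step()).
    if manager.should_commit():
        optimizer.step()
```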

The following component diagram shows the high level components and how they
relate to each other:

![Component Diagram](./media/overview.mmd.svg)

See [torchft's documentation](https://pytorch.org/torchft) for more details.

## Examples

### torchtitan (Fault Tolerant HSDP)

torchtitan provides an out of the box fault tolerant HSDP training loop built on
top of torchft that can be used to train models such as Llama 3 70B.

It also serves as a good example of how you can integrate torchft into your own training script for use with HSDP.

See [torchtitan's documentation for end to end usage](https://github.com/pytorch/torchtitan/blob/main/docs/torchft.md).

### Fault Tolerant DDP

We have a minimal DDP train loop that highlights all of the key components in torchft.

See [train_ddp.py](./train_ddp.py) for more info.


### DiLoCo

LocalSGD and DiLoCo are currently experimental.

See
[the diloco_train_loop/local_sgd_train_loop tests](./torchft/local_sgd_integ_test.py)
for an example of how to integrate these algorithms into your training loop; a
rough sketch is also shown below.
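
As a rough illustration, the sketch below shows how wiring in LocalSGD might
look. These APIs are experimental, and the constructor arguments shown
(`manager, model, optimizer, sync_every`) are assumptions, so treat the
integration tests above as the authoritative reference.

```py
import torch
from torch import nn, optim

from torchft import Manager, ProcessGroupGloo
from torchft.local_sgd import LocalSGD

model = nn.Linear(2, 3)
optimizer = optim.AdamW(model.parameters())
manager = Manager(
    pg=ProcessGroupGloo(),
    load_state_dict=lambda sd: model.load_state_dict(sd),
    state_dict=lambda: model.state_dict(),
)

# Instead of allreducing gradients every step, parameters are synchronized
# across replica groups every `sync_every` local steps.
with LocalSGD(manager, model, optimizer, sync_every=100):
    for step in range(1000):
        optimizer.zero_grad()
        loss = model(torch.rand(8, 2)).sum()
        loss.backward()
        optimizer.step()
```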


## Design

torchft is designed to allow for fault tolerance when training with replicated weights, such as in DDP or HSDP (FSDP combined with DDP).

See the [design doc](https://docs.google.com/document/d/1OZsOsz34gRDSxYXiKkj4WqcD9x0lP9TcsfBeu_SsOY4/edit) for the most detailed explanation.

### Lighthouse

torchft implements a lighthouse server that coordinates across the different
replica groups, along with a per-replica-group manager and fault tolerance
library that can be used in a standard PyTorch training loop.

This allows for membership changes at training step granularity, which can
greatly improve efficiency by avoiding stop-the-world restarts when errors occur.

![Lighthouse Diagram](./media/torchft-overview.png)

### Fault Tolerant HSDP Algorithm

torchft provides an implementation of a fault tolerant HSDP/DDP algorithm. The
following diagram shows the high level operations that need to happen in the
train loop to ensure everything stays consistent during a healing operation.

![HSDP Diagram](./media/hsdp_train_loop.png)

See the design doc linked above for more details.

## Installing from PyPI

We have nightly builds available at https://pypi.org/project/torchft-nightly/

To install torchft with minimal dependencies you can run:

```sh
pip install torchft-nightly
```

If you want all development dependencies, you can install:

```sh
pip install 'torchft-nightly[dev]'
```

## Installing from Source

### Prerequisites

Before proceeding, ensure you have the following installed:

- Rust (with necessary dependencies)
- `protobuf-compiler` and the corresponding development package for Protobuf.
- PyTorch 2.7 RC+ or Nightly

Note that the Rust versions available in many conda environments may be outdated. To install the latest version of Rust, we recommend downloading it directly from the official website, as shown in the command below:
```sh
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
```

To install the required packages on a Debian-based system (such as Ubuntu) using apt, run:

```sh
sudo apt install protobuf-compiler libprotobuf-dev
```

or for a Red Hat-based system, run:

```sh
sudo dnf install protobuf-compiler protobuf-devel
```

### Installation

```sh
pip install .
```

This uses pyo3+maturin to build the package, so you'll need maturin installed.

If the installation command fails to invoke `cargo update` due to an inability to fetch the manifest, it may be caused by the `proxy`, `proxySSLCert`, and `proxySSLKey` settings in your `.gitconfig` file affecting the `cargo` command. To resolve this issue, try temporarily removing these fields from your `.gitconfig` before running the installation command.

To install in editable mode with the Rust extensions and development dependencies, you can use the normal pip install command:

```sh
pip install -e '.[dev]'
```

## Usage

### Lighthouse

The lighthouse is used for fault tolerance across replicated workers (DDP/FSDP)
when using synchronous training.

You can start a lighthouse server by running:

```sh
RUST_BACKTRACE=1 torchft_lighthouse --min_replicas 1 --quorum_tick_ms 100 --join_timeout_ms 10000
```
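
For tests or single-host experiments, it can be convenient to start the
lighthouse in-process rather than via the CLI. The sketch below assumes a
`LighthouseServer` binding in `torchft.coordination`; the import path,
constructor arguments, and methods shown are assumptions, so check the
coordination API documentation before relying on them.

```py
import os

# assumed import path for the Rust-backed lighthouse binding
from torchft.coordination import LighthouseServer

# bind to an ephemeral port on all interfaces, requiring one replica group
lighthouse = LighthouseServer(bind="[::]:0", min_replicas=1)

# point local training processes at the in-process lighthouse
os.environ["TORCHFT_LIGHTHOUSE"] = lighthouse.address()

# ... launch / run training here ...

lighthouse.shutdown()
```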

### Example Training Loop (DDP)

See [train_ddp.py](./train_ddp.py) for the full example.

Invoke with:

```sh
TORCHFT_LIGHTHOUSE=http://localhost:29510 torchrun --master_port 29501 --nnodes 1 --nproc_per_node 1 train_ddp.py
```

train_ddp.py (simplified):

```py
import torch
from torch import nn, optim

from torchft import Manager, DistributedDataParallel, Optimizer, ProcessGroupGloo

device = "cpu"  # or "cuda" when running one rank per GPU

manager = Manager(
    pg=ProcessGroupGloo(),
    # user-provided callbacks that restore/capture training state for live recovery
    load_state_dict=...,
    state_dict=...,
)

m = nn.Linear(2, 3).to(device)
m = DistributedDataParallel(manager, m)
optimizer = Optimizer(manager, optim.AdamW(m.parameters()))

for i in range(1000):
    batch = torch.rand(2, 2, device=device)

    optimizer.zero_grad()

    out = m(batch)
    loss = out.sum()

    loss.backward()

    # the wrapped optimizer only commits the update if the quorum is healthy
    optimizer.step()
```

### Running DDP

After starting the lighthouse server by running:

```sh
RUST_BACKTRACE=1 torchft_lighthouse --min_replicas 1 --quorum_tick_ms 100 --join_timeout_ms 10000
```

A test DDP script can be launched with torchX with:

```sh
torchx run
```

Or DiLoCo with:

```sh
USE_STREAMING=True torchx run ./torchft/torchx.py:hsdp --script='train_diloco.py'
```

See [.torchxconfig](.torchxconfig), [torchx.py](./torchft/torchx.py) and the [torchX documentation](https://pytorch.org/torchx/latest/) to understand how DDP is being run.

`torchx.py` can also launch HSDP jobs when `workers_per_replica` is set to a value > 1, provided the training script supports it. For an example HSDP training implementation with torchft enabled, see [torchtitan](https://github.com/pytorch/torchtitan).

Alternatively, to test on a node with two GPUs, you can launch two replica groups running [train_ddp.py](./train_ddp.py) as follows:

On shell 1 (one replica group starts initial training):
```sh
export REPLICA_GROUP_ID=0
export NUM_REPLICA_GROUPS=2

CUDA_VISIBLE_DEVICES=0 TORCHFT_LIGHTHOUSE=http://localhost:29510 torchrun --master_port=29600 --nnodes=1 --nproc_per_node=1 -- train_ddp.py
```

On shell 2 (a second replica group joins):
```sh
export REPLICA_GROUP_ID=1
export NUM_REPLICA_GROUPS=2

CUDA_VISIBLE_DEVICES=1 TORCHFT_LIGHTHOUSE=http://localhost:29510 torchrun --master_port=29601 --nnodes=1 --nproc_per_node=1 -- train_ddp.py
```

By watching the outputs from both shells, you should observe process group reconfiguration and live checkpoint recovery.

### Example Parameter Server

torchft has a fault tolerant parameter server implementation built on its
reconfigurable ProcessGroups. It does not require or use a Lighthouse server.

See [parameter_server_test.py](./torchft/parameter_server_test.py) for an example.

## Contributing

We welcome PRs! See the [CONTRIBUTING](./CONTRIBUTING.md) file.

## License

torchft is BSD 3-Clause licensed. See [LICENSE](./LICENSE) for more details.


            
