theseus-ai-nightly

Name: theseus-ai-nightly
Version: 2023.4.21
Home page: https://github.com/facebookresearch/theseus
Summary: A library for differentiable nonlinear optimization.
Author: Meta Research
Requires Python: >=3.8
Keywords: differentiable optimization, nonlinear least squares, factor graphs
Upload time: 2023-04-21 20:22:28
![](https://raw.githubusercontent.com/facebookresearch/theseus/main/docs/source/img/theseus-color-horizontal.png)

<p align="center">
    <!-- CI -->
    <a href="https://circleci.com/gh/facebookresearch/theseus/tree/main">
        <img src="https://circleci.com/gh/facebookresearch/theseus/tree/main.svg?style=svg" alt="CircleCI" height="20">
    </a>
    <!-- License -->
    <a href="https://github.com/facebookresearch/theseus/blob/main/LICENSE">
        <img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License" height="20">
    </a>
    <!-- pypi -->
    <a href="https://pypi.org/project/theseus-ai/">
        <img src="https://img.shields.io/pypi/v/theseus-ai" alt="pypi"
        heigh="20">
    <!-- Downloads counter -->
    <a href="https://pypi.org/project/theseus-ai/">
        <img src="https://pepy.tech/badge/theseus-ai" alt="PyPi Downloads" height="20">
    </a>
    <!-- Python -->
    <a href="https://www.python.org/downloads/release/">
        <img src="https://img.shields.io/badge/python-3.8%20%7C%203.9%20%7C%203.10-blue.svg" alt="Python" height="20">
    </a>
    <!-- Pre-commit -->
    <a href="https://github.com/pre-commit/pre-commit">
        <img src="https://img.shields.io/badge/pre--commit-enabled-green?logo=pre-commit&logoColor=white" alt="pre-commit" height="20">
    </a>
    <!-- Black -->
    <a href="https://github.com/psf/black">
        <img src="https://img.shields.io/badge/code%20style-black-000000.svg" alt="black" height="20">
    </a>
    <!-- PRs -->
    <a href="https://github.com/facebookresearch/theseus/blob/main/CONTRIBUTING.md">
        <img src="https://img.shields.io/badge/PRs-welcome-green.svg" alt="PRs" height="20">
    </a>
</p>

<p align="center">
    <i>A library for differentiable nonlinear optimization</i>
</p>

<p align="center">
    <a href="https://arxiv.org/abs/2207.09442">Paper</a> •
    <a href="https://ai.facebook.com/blog/theseus-a-library-for-encoding-domain-knowledge-in-end-to-end-ai-models/">Blog</a> •
    <a href="https://sites.google.com/view/theseus-ai/">Webpage</a> •
    <a href="https://github.com/facebookresearch/theseus/tree/main/tutorials">Tutorials</a> •
    <a href="https://theseus-ai.readthedocs.io/">Docs</a>
</p>

Theseus is an efficient, application-agnostic library for building custom nonlinear optimization layers in PyTorch, supporting the construction of problems in robotics and vision as end-to-end differentiable architectures.

![](https://raw.githubusercontent.com/facebookresearch/theseus/main/docs/source/img/theseuslayer.png)

Differentiable nonlinear optimization provides a general scheme for encoding inductive priors: the objective function can be parameterized partly by neural models and partly by expert, domain-specific differentiable models. End-to-end gradients are retained by differentiating through the optimizer, which lets neural models train on the final task loss while also taking advantage of the priors captured by the optimizer.
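
Schematically, this is a bi-level problem (the notation below is ours, for illustration): the inner loop solves a nonlinear least-squares problem whose residuals $f_i$ depend on learnable parameters $\phi$, and the outer loop trains $\phi$ on the task loss $L$ by differentiating through the inner solution:

```math
\theta^*(\phi) = \arg\min_{\theta} \sum_i \| f_i(\theta; \phi) \|^2,
\qquad
\min_{\phi} \, L\big(\theta^*(\phi)\big),
\qquad
\frac{\partial L}{\partial \phi} = \frac{\partial L}{\partial \theta^*}\,\frac{\partial \theta^*}{\partial \phi}.
```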

-----

## Current Features

### Application-agnostic interface
Our implementation provides an easy-to-use interface for building custom optimization layers and plugging them into any neural architecture. The following differentiable features are currently available (a short Lie-group sketch follows the list):
- [Second-order nonlinear optimizers](https://github.com/facebookresearch/theseus/tree/main/theseus/optimizer/nonlinear)
    - Gauss-Newton, Levenberg–Marquardt
- [Linear solvers](https://github.com/facebookresearch/theseus/tree/main/theseus/optimizer/linear)
    - Dense: Cholesky, LU; Sparse: CHOLMOD, LU (GPU-only), [BaSpaCho](https://github.com/facebookresearch/baspacho)
- [Commonly used costs](https://github.com/facebookresearch/theseus/tree/main/theseus/embodied), [AutoDiffCostFunction](https://github.com/facebookresearch/theseus/blob/main/theseus/core/cost_function.py), [RobustCostFunction](https://github.com/facebookresearch/theseus/blob/main/theseus/core/robust_cost_function.py)
- [Lie groups](https://github.com/facebookresearch/theseus/tree/main/theseus/geometry)
- [Robot kinematics](https://github.com/facebookresearch/theseus/blob/main/theseus/embodied/kinematics/kinematics_model.py)
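
For instance, Lie group types can be used directly as variables in an objective. A minimal sketch; the `exp_map`/`compose`/`log_map` calls reflect our reading of the API and may differ slightly across versions:

```python
import torch
import theseus as th

# A batch of 2 rotations built from tangent-space vectors via the
# exponential map; SO3 is a Manifold, so it can serve as an optim_var.
tangent = 0.1 * torch.randn(2, 3)
R = th.SO3.exp_map(tangent)

# Group operations work batch-wise: R^{-1} composed with R is the identity,
# so its log map is (numerically) a batch of zero tangent vectors.
R_rel = R.inverse().compose(R)
print(R_rel.log_map())
```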

### Efficiency-based design
We support several features that improve computation times and memory consumption (see the sketch after this list):
- [Sparse linear solvers](https://github.com/facebookresearch/theseus/tree/main/theseus/optimizer/linear)
- Batching and GPU acceleration
- [Automatic vectorization](https://github.com/facebookresearch/theseus/blob/main/theseus/core/vectorizer.py)
- [Backward modes](https://github.com/facebookresearch/theseus/blob/main/theseus/optimizer/nonlinear/nonlinear_optimizer.py)
    - Implicit, Truncated, Direct Loss Minimization ([DLM](https://github.com/facebookresearch/theseus/blob/main/theseus/theseus_layer.py)), Sampling ([LEO](https://github.com/facebookresearch/theseus/blob/main/examples/state_estimation_2d.py))
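
These options are exposed through the optimizer and layer interfaces. Below is a minimal sketch of selecting a linear solver and a backward mode; the `linear_solver_cls` argument and the `th.CholeskyDenseSolver` name follow our understanding of the API and may vary by version:

```python
import torch
import theseus as th

# One optimization variable and a trivial residual pulling it to a target.
v = th.Vector(1, name="v")
target = th.Variable(torch.ones(1, 1), name="target")

def error_fn(optim_vars, aux_vars):
    return optim_vars[0].tensor - aux_vars[0].tensor

objective = th.Objective()
objective.add(th.AutoDiffCostFunction([v], error_fn, 1, aux_vars=[target]))

# Choose the linear solver used for the inner Gauss-Newton steps.
optimizer = th.GaussNewton(
    objective, max_iterations=10, linear_solver_cls=th.CholeskyDenseSolver
)
layer = th.TheseusLayer(optimizer)

# The backward mode is chosen per forward call; here, implicit differentiation.
solution, info = layer.forward(
    input_tensors={"v": torch.zeros(1, 1)},
    optimizer_kwargs={"backward_mode": "implicit"},
)
```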


## Getting Started

### Prerequisites
- We *strongly* recommend you install Theseus in a venv or conda environment with Python 3.8-3.10.
- Theseus requires a `torch` installation. To install it for your particular CPU/CUDA configuration, follow the instructions on the PyTorch [website](https://pytorch.org/get-started/locally/).
- For GPU support, Theseus requires [nvcc](https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html) to compile custom CUDA operations. Check with `nvcc --version` that it matches the CUDA version used to compile PyTorch (see the version check after this list). If it doesn't, install the matching version and ensure its location is on your system's `$PATH` variable.
- Theseus also requires [`suitesparse`](https://people.engr.tamu.edu/davis/suitesparse.html), which you can install via:
    - `sudo apt-get install libsuitesparse-dev` (Ubuntu).
    - `conda install -c conda-forge suitesparse` (Mac).
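
For example, one way to compare the two CUDA versions (assuming `torch` is already installed):

```bash
nvcc --version                                       # CUDA toolkit seen by nvcc
python -c "import torch; print(torch.version.cuda)"  # CUDA version torch was built with
```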
    
### Installing

- **pypi**
    ```bash
    pip install theseus-ai
    ```
    We currently provide wheels with our CUDA extensions compiled using CUDA 11.6 and Python 3.10.
    For other CUDA versions, consider installing from source or using our 
    [build script](https://github.com/facebookresearch/theseus/blob/main/build_scripts/build_wheel.sh).

    Note that the `pypi` installation doesn't include our experimental [Theseus Labs](https://github.com/facebookresearch/theseus/tree/main/theseus/labs).
    For that, please install from source.

- **From source**
    The simplest way to install Theseus from source is by running the following (see further below for how to also include BaSpaCho):
    ```bash
    git clone https://github.com/facebookresearch/theseus.git && cd theseus
    pip install -e .
    ```
    If you are interested in contributing to Theseus, instead install
    ```bash
    pip install -e ".[dev]"
    ```
    and follow the more detailed instructions in [CONTRIBUTING](https://github.com/facebookresearch/theseus/blob/main/CONTRIBUTING.md).

- **Installing BaSpaCho extensions from source**

    By default, installing from source doesn't include our BaSpaCho sparse solver extension. To include it, follow these steps:

    1. Compile BaSpaCho from source following instructions [here](https://github.com/facebookresearch/baspacho). We recommend using flags `-DBLA_STATIC=ON -DBUILD_SHARED_LIBS=OFF`.
    2. Run
        
        ```bash
        git clone https://github.com/facebookresearch/theseus.git && cd theseus
        BASPACHO_ROOT_DIR=<path/to/root/baspacho/dir> pip install -e .
        ```
        
        where the BaSpaCho root directory must contain the compiled binaries in its `build` subdirectory.

### Running unit tests (requires `dev` installation)
```bash
python -m pytest tests
```
By default, the unit tests include tests for our CUDA extensions. You can add the option `-m "not cudaext"`
to skip them when installing without CUDA support (see the example below). Additionally, the tests for the BaSpaCho
sparse solver are automatically skipped when its extension library is not compiled.
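
For example, to run the test suite without the CUDA extension tests:

```bash
python -m pytest tests -m "not cudaext"
```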


## Examples

[Simple example](https://github.com/facebookresearch/theseus/blob/main/examples/simple_example.py). This example fits the curve $y = v e^x$ to a dataset of $N$ observations $(x, y) \sim D$. This is modeled as an `Objective` with a single `CostFunction` that computes the residual $y - v e^x$. The `Objective` and the `GaussNewton` optimizer are encapsulated in a `TheseusLayer`. With `Adam` and an MSE loss, $x$ is learned by differentiating through the `TheseusLayer`.

```python
import torch
import theseus as th

x_true, y_true, v_true = read_data() # shapes (1, N), (1, N), (1, 1)
x = th.Variable(torch.randn_like(x_true), name="x")
y = th.Variable(y_true, name="y")
v = th.Vector(1, name="v") # a manifold subclass of Variable for optim_vars

def error_fn(optim_vars, aux_vars): # returns y - v * exp(x)
    x, y = aux_vars
    return y.tensor - optim_vars[0].tensor * torch.exp(x.tensor)

objective = th.Objective()
cost_function = th.AutoDiffCostFunction(
    [v], error_fn, y_true.shape[1], aux_vars=[x, y],
    cost_weight=th.ScaleCostWeight(1.0))
objective.add(cost_function)
layer = th.TheseusLayer(th.GaussNewton(objective, max_iterations=10))

phi = torch.nn.Parameter(x_true + 0.1 * torch.ones_like(x_true))
outer_optimizer = torch.optim.Adam([phi], lr=0.001)
for epoch in range(10):
    solution, info = layer.forward(
        input_tensors={"x": phi.clone(), "v": torch.ones(1, 1)},
        optimizer_kwargs={"backward_mode": "implicit"})
    outer_loss = torch.nn.functional.mse_loss(solution["v"], v_true)
    outer_optimizer.zero_grad()  # clear gradients accumulated in earlier epochs
    outer_loss.backward()
    outer_optimizer.step()
```

See the [tutorials](https://github.com/facebookresearch/theseus/blob/main/tutorials/) and the robotics and vision [examples](https://github.com/facebookresearch/theseus/tree/main/examples) to learn about the API and usage.


## Citing Theseus

If you use Theseus in your work, please cite the [paper](https://arxiv.org/abs/2207.09442) with the BibTeX below.

```bibtex
@article{pineda2022theseus,
  title   = {{Theseus: A Library for Differentiable Nonlinear Optimization}},
  author  = {Luis Pineda and Taosha Fan and Maurizio Monge and Shobha Venkataraman and Paloma Sodhi and Ricky TQ Chen and Joseph Ortiz and Daniel DeTone and Austin Wang and Stuart Anderson and Jing Dong and Brandon Amos and Mustafa Mukadam},
  journal = {Advances in Neural Information Processing Systems},
  year    = {2022}
}
```


## License

Theseus is MIT licensed. See the [LICENSE](https://github.com/facebookresearch/theseus/blob/main/LICENSE) for details.


## Additional Information

- Join the community on [GitHub Discussions](https://github.com/facebookresearch/theseus/discussions) for questions and suggestions.
- Use [GitHub Issues](https://github.com/facebookresearch/theseus/issues/new/choose) for bugs and feature requests.
- See [CONTRIBUTING](https://github.com/facebookresearch/theseus/blob/main/CONTRIBUTING.md) if interested in helping out.

Theseus is made possible by the following contributors:

<a href="https://github.com/facebookresearch/theseus/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=facebookresearch/theseus" />
</a>

Made with [contrib.rocks](https://contrib.rocks).

            
