# trellis-3d-python

- **Name**: trellis-3d-python
- **Version**: 0.1.9
- **Home page**: https://github.com/microsoft/trellis
- **Summary**: TRELLIS is a large 3D asset generation model.
- **Author**: Microsoft Corporation
- **Requires Python**: >=3.8
- **License**: MIT
- **Upload time**: 2025-09-05 02:43:24

<img src="assets/logo.webp" width="100%" align="center">
<h1 align="center">Structured 3D Latents<br>for Scalable and Versatile 3D Generation</h1>
<p align="center"><a href="https://arxiv.org/abs/2412.01506"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a>
<a href='https://trellis3d.github.io'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
<a href='https://huggingface.co/spaces/JeffreyXiang/TRELLIS'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Live_Demo-blue'></a>
</p>
<p align="center"><img src="assets/teaser.png" width="100%"></p>

<span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> is a large 3D asset generation model. It takes text or image prompts and generates high-quality 3D assets in various formats, such as Radiance Fields, 3D Gaussians, and meshes. The cornerstone of <span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> is a unified Structured LATent (<span style="font-size: 16px; font-weight: 600;">SL</span><span style="font-size: 12px; font-weight: 700;">AT</span>) representation that allows decoding to different output formats, together with Rectified Flow Transformers tailored for <span style="font-size: 16px; font-weight: 600;">SL</span><span style="font-size: 12px; font-weight: 700;">AT</span> as the powerful backbones. We provide large-scale pre-trained models with up to 2 billion parameters, trained on a large 3D asset dataset of 500K diverse objects. <span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> significantly surpasses existing methods, including recent ones at similar scales, and offers flexible output-format selection and local 3D editing capabilities that previous models did not provide.

***Check out our [Project Page](https://trellis3d.github.io) for more videos and interactive demos!***

<!-- Features -->
## 🌟 Features
- **High Quality**: It produces diverse, high-quality 3D assets with intricate shape and texture details.
- **Versatility**: It takes text or image prompts and can generate various final 3D representations, including but not limited to *Radiance Fields*, *3D Gaussians*, and *meshes*, accommodating diverse downstream requirements.
- **Flexible Editing**: It allows easy editing of generated 3D assets, such as generating variants of the same object or locally editing parts of a 3D asset.

<!-- Updates -->
## ⏩ Updates

**03/25/2025**
- Release training code.
- Release **TRELLIS-text** models and asset variant generation.
  - Examples are provided as [example_text.py](example_text.py) and [example_variant.py](example_variant.py).
  - A Gradio demo is provided as [app_text.py](app_text.py).
  - *Note: For text-to-3D generation, it is always recommended to first generate images with a text-to-image model and then use the TRELLIS-image models for 3D generation; the text-conditioned models are less creative and detailed due to data limitations. A sketch of this two-stage route is shown below.*
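    The sketch below assumes the `diffusers` package and a Stable Diffusion checkpoint (both illustrative choices, not part of this repo); any text-to-image model can stand in for stage 1:
    ```python
    import torch
    from diffusers import StableDiffusionPipeline
    from trellis.pipelines import TrellisImageTo3DPipeline

    # Stage 1: text -> image with an off-the-shelf text-to-image model
    # (the model choice is an assumption, not prescribed by TRELLIS).
    t2i = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    ).to("cuda")
    image = t2i("a wooden rocking chair, studio lighting").images[0]

    # Stage 2: image -> 3D with the TRELLIS-image model, as in the Minimal Example.
    pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
    pipeline.cuda()
    outputs = pipeline.run(image, seed=1)
    ```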

**12/26/2024**
- Release [**TRELLIS-500K**](https://github.com/microsoft/TRELLIS#-dataset) dataset and toolkits for data preparation.

**12/18/2024**
- Implementation of multi-image conditioning for the **TRELLIS-image** model ([#7](https://github.com/microsoft/TRELLIS/issues/7)). This is based on a tuning-free algorithm rather than a specially trained model, so it may not give the best results for all input images.
- Add Gaussian export in `app.py` and `example.py`. ([#40](https://github.com/microsoft/TRELLIS/issues/40))

<!-- Installation -->
## 📦 Installation

### Prerequisites
- **System**: The code is currently tested only on **Linux**. For a Windows setup, you may refer to [#3](https://github.com/microsoft/TRELLIS/issues/3) (not fully tested).
- **Hardware**: An NVIDIA GPU with at least 16GB of memory is required. The code has been verified on NVIDIA A100 and A6000 GPUs.
- **Software**:   
  - The [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive) is needed to compile certain submodules. The code has been tested with CUDA versions 11.8 and 12.2.  
  - [Conda](https://docs.anaconda.com/miniconda/install/#quick-command-line-install) is recommended for managing dependencies.  
  - Python version 3.8 or higher is required. 

### Installation Steps
1. Clone the repo:
    ```sh
    git clone --recurse-submodules https://github.com/microsoft/TRELLIS.git
    cd TRELLIS
    ```
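    If the repository was cloned without `--recurse-submodules`, the submodules can still be fetched afterwards (standard Git, nothing TRELLIS-specific):
    ```sh
    git submodule update --init --recursive
    ```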

2. Install the dependencies:
    
    **Before running the following command, there are a few things to note:**
    - Adding `--new-env` creates a new conda environment named `trellis`. If you want to use an existing conda environment, remove this flag.
    - By default, the `trellis` environment uses PyTorch 2.4.0 with CUDA 11.8. If you want to use a different CUDA version (e.g., if you have CUDA Toolkit 12.2 installed and do not want to install another 11.8 copy just for submodule compilation), remove the `--new-env` flag and install the required dependencies manually. Refer to [PyTorch](https://pytorch.org/get-started/previous-versions/) for the installation command.
    - If you have multiple CUDA Toolkit versions installed, set `PATH` to the correct version before running the command. For example, if you have CUDA Toolkit 11.8 and 12.2 installed, run `export PATH=/usr/local/cuda-11.8/bin:$PATH` first.
    - By default, the code uses the `flash-attn` backend for attention. For GPUs that do not support `flash-attn` (e.g., NVIDIA V100), you can remove the `--flash-attn` flag to install `xformers` only, and set the `ATTN_BACKEND` environment variable to `xformers` before running the code. See the [Minimal Example](#minimal-example) for more details.
    - The installation may take a while due to the large number of dependencies. Please be patient. If you encounter any issues, you can try installing the dependencies one by one, specifying one flag at a time.
    - If you encounter any issues during the installation, feel free to open an issue or contact us.
    
    Create a new conda environment named `trellis` and install the dependencies:
    ```sh
    . ./setup.sh --new-env --basic --xformers --flash-attn --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast
    ```
    The detailed usage of `setup.sh` can be found by running `. ./setup.sh --help`.
    ```sh
    Usage: setup.sh [OPTIONS]
    Options:
        -h, --help              Display this help message
        --new-env               Create a new conda environment
        --basic                 Install basic dependencies
        --train                 Install training dependencies
        --xformers              Install xformers
        --flash-attn            Install flash-attn
        --diffoctreerast        Install diffoctreerast
        --vox2seq               Install vox2seq
        --spconv                Install spconv
        --mipgaussian           Install mip-splatting
        --kaolin                Install kaolin
        --nvdiffrast            Install nvdiffrast
        --demo                  Install all dependencies for demo
    ```
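    For example, on a GPU without `flash-attn` support (see the notes above), the install-and-run pattern might look like the following sketch; the flag set is one plausible choice, adjust it to your setup:
    ```sh
    # Install everything except flash-attn; xformers serves as the attention backend
    . ./setup.sh --new-env --basic --xformers --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast
    # Select the xformers backend at runtime
    export ATTN_BACKEND=xformers
    python example.py
    ```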

<!-- Pretrained Models -->
## 🤖 Pretrained Models

We provide the following pretrained models:

| Model | Description | #Params | Download |
| --- | --- | --- | --- |
| TRELLIS-image-large | Large image-to-3D model | 1.2B | [Download](https://huggingface.co/JeffreyXiang/TRELLIS-image-large) |
| TRELLIS-text-base | Base text-to-3D model | 342M | [Download](https://huggingface.co/JeffreyXiang/TRELLIS-text-base) |
| TRELLIS-text-large | Large text-to-3D model | 1.1B | [Download](https://huggingface.co/JeffreyXiang/TRELLIS-text-large) |
| TRELLIS-text-xlarge | Extra-large text-to-3D model | 2.0B | [Download](https://huggingface.co/JeffreyXiang/TRELLIS-text-xlarge) |

*Note: It is always recommended to use the image-conditioned versions of the models for better performance.*

*Note: All VAEs are included in the **TRELLIS-image-large** model repo.*

The models are hosted on Hugging Face. You can load a model directly by its repository name in the code:
```python
from trellis.pipelines import TrellisImageTo3DPipeline

pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
```

If you prefer loading a model from a local path, you can download the model files from the links above and pass the folder path instead (the folder structure must be preserved):
```python
pipeline = TrellisImageTo3DPipeline.from_pretrained("/path/to/TRELLIS-image-large")
```
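One convenient way to fetch a model repo for local loading is the `huggingface_hub` package (an assumption here, installed separately; `snapshot_download` preserves the repo's folder structure):
```python
from huggingface_hub import snapshot_download

from trellis.pipelines import TrellisImageTo3DPipeline

# Download the full model repo into the local Hugging Face cache and return its path
local_dir = snapshot_download("JeffreyXiang/TRELLIS-image-large")
pipeline = TrellisImageTo3DPipeline.from_pretrained(local_dir)
```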

<!-- Usage -->
## 💡 Usage

### Minimal Example

Here is an [example](example.py) of how to use the pretrained models for 3D asset generation.

```python
import os
# os.environ['ATTN_BACKEND'] = 'xformers'   # Can be 'flash-attn' or 'xformers', default is 'flash-attn'
os.environ['SPCONV_ALGO'] = 'native'        # Can be 'native' or 'auto', default is 'auto'.
                                            # 'auto' is faster but will do benchmarking at the beginning.
                                            # Recommended to set to 'native' if run only once.

import imageio
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline
from trellis.utils import render_utils, postprocessing_utils

# Load a pipeline from a model folder or a Hugging Face model hub.
pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
pipeline.cuda()

# Load an image
image = Image.open("assets/example_image/T.png")

# Run the pipeline
outputs = pipeline.run(
    image,
    seed=1,
    # Optional parameters
    # sparse_structure_sampler_params={
    #     "steps": 12,
    #     "cfg_strength": 7.5,
    # },
    # slat_sampler_params={
    #     "steps": 12,
    #     "cfg_strength": 3,
    # },
)
# outputs is a dictionary containing generated 3D assets in different formats:
# - outputs['gaussian']: a list of 3D Gaussians
# - outputs['radiance_field']: a list of radiance fields
# - outputs['mesh']: a list of meshes

# Render the outputs
video = render_utils.render_video(outputs['gaussian'][0])['color']
imageio.mimsave("sample_gs.mp4", video, fps=30)
video = render_utils.render_video(outputs['radiance_field'][0])['color']
imageio.mimsave("sample_rf.mp4", video, fps=30)
video = render_utils.render_video(outputs['mesh'][0])['normal']
imageio.mimsave("sample_mesh.mp4", video, fps=30)

# GLB files can be extracted from the outputs
glb = postprocessing_utils.to_glb(
    outputs['gaussian'][0],
    outputs['mesh'][0],
    # Optional parameters
    simplify=0.95,          # Ratio of triangles to remove in the simplification process
    texture_size=1024,      # Size of the texture used for the GLB
)
glb.export("sample.glb")

# Save Gaussians as PLY files
outputs['gaussian'][0].save_ply("sample.ply")
```

After running the code, you will get the following files:
- `sample_gs.mp4`: a video showing the 3D Gaussian representation
- `sample_rf.mp4`: a video showing the Radiance Field representation
- `sample_mesh.mp4`: a video showing the mesh representation
- `sample.glb`: a GLB file containing the extracted textured mesh
- `sample.ply`: a PLY file containing the 3D Gaussian representation
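
As a quick sanity check, the exported GLB can be loaded back with `trimesh` (a sketch; `trimesh` is an extra dependency assumed here, not required by the example above):
```python
import trimesh

# Load the exported GLB and inspect the axis-aligned bounds of the scene
scene = trimesh.load("sample.glb")
print(scene.bounds)
```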


### Web Demo

[app.py](app.py) provides a simple web demo for 3D asset generation. Since this demo is based on [Gradio](https://gradio.app/), additional dependencies are required:
```sh
. ./setup.sh --demo
```

After installing the dependencies, you can run the demo with the following command:
```sh
python app.py
```

Then, you can access the demo at the address shown in the terminal.

***The web demo is also available on [Hugging Face Spaces](https://huggingface.co/spaces/JeffreyXiang/TRELLIS)!***


<!-- Dataset -->
## 📚 Dataset

We provide **TRELLIS-500K**, a large-scale dataset containing 500K 3D assets curated from [Objaverse(XL)](https://objaverse.allenai.org/), [ABO](https://amazon-berkeley-objects.s3.amazonaws.com/index.html), [3D-FUTURE](https://tianchi.aliyun.com/specials/promotion/alibaba-3d-future), [HSSD](https://huggingface.co/datasets/hssd/hssd-models), and [Toys4k](https://github.com/rehg-lab/lowshot-shapebias/tree/main/toys4k), filtered based on aesthetic scores. Please refer to the [dataset README](DATASET.md) for more details.


<!-- Training -->
## 🏋️‍♂️ Training

TRELLIS’s training framework is organized to provide a flexible and modular approach to building and fine-tuning large-scale 3D generation models. The training code is centered around `train.py` and is structured into several directories to clearly separate dataset handling, model components, training logic, and visualization utilities.

### Code Structure

- **train.py**: Main entry point for training.
- **trellis/datasets**: Dataset loading and preprocessing.
- **trellis/models**: Different models and their components.
- **trellis/modules**: Custom modules for various models.
- **trellis/pipelines**: Inference pipelines for different models.
- **trellis/renderers**: Renderers for different 3D representations.
- **trellis/representations**: Different 3D representations.
- **trellis/trainers**: Training logic for different models.
- **trellis/utils**: Utility functions for training and visualization.

### Training Setup

1. **Prepare the Environment:**
   - Ensure all training dependencies are installed.
   - Use a Linux system with an NVIDIA GPU (the released models were trained on NVIDIA A100 GPUs).
   - For distributed training, verify that your nodes can communicate through the designated master address and port.

2. **Dataset Preparation:**
   - Organize your dataset in the same way as TRELLIS-500K. Specify your dataset path with the `--data_dir` argument when launching training.

3. **Configuration Files:**
   - Training hyperparameters and model architectures are defined in configuration files under the `configs/` directory.
   - Example configuration files include:

| Config | Pretrained Model | Description |
| --- | --- | --- |
| [`vae/ss_vae_conv3d_16l8_fp16.json`](configs/vae/ss_vae_conv3d_16l8_fp16.json) | [Encoder](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/ss_enc_conv3d_16l8_fp16.safetensors) [Decoder](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/ss_dec_conv3d_16l8_fp16.safetensors) | Sparse structure VAE |
| [`vae/slat_vae_enc_dec_gs_swin8_B_64l8_fp16.json`](configs/vae/slat_vae_enc_dec_gs_swin8_B_64l8_fp16.json) | [Encoder](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/slat_enc_swin8_B_64l8_fp16.safetensors) [Decoder](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/slat_dec_gs_swin8_B_64l8gs32_fp16.safetensors) | SLat VAE with Gaussian Decoder |
| [`vae/slat_vae_dec_rf_swin8_B_64l8_fp16.json`](configs/vae/slat_vae_dec_rf_swin8_B_64l8_fp16.json) | [Decoder](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/slat_dec_rf_swin8_B_64l8r16_fp16.safetensors) | SLat Radiance Field Decoder |
| [`vae/slat_vae_dec_mesh_swin8_B_64l8_fp16.json`](configs/vae/slat_vae_dec_mesh_swin8_B_64l8_fp16.json) | [Decoder](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/slat_dec_mesh_swin8_B_64l8m256c_fp16.safetensors) | SLat Mesh Decoder |
| [`generation/ss_flow_img_dit_L_16l8_fp16.json`](configs/generation/ss_flow_img_dit_L_16l8_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/ss_flow_img_dit_L_16l8_fp16.safetensors) | Image conditioned sparse structure Flow Model |
| [`generation/slat_flow_img_dit_L_64l8p2_fp16.json`](configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-image-large/blob/main/ckpts/slat_flow_img_dit_L_64l8p2_fp16.safetensors) | Image conditioned SLat Flow Model |
| [`generation/ss_flow_txt_dit_B_16l8_fp16.json`](configs/generation/ss_flow_txt_dit_B_16l8_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-text-base/blob/main/ckpts/ss_flow_txt_dit_B_16l8_fp16.safetensors) | Base text-conditioned sparse structure Flow Model |
| [`generation/slat_flow_txt_dit_B_64l8p2_fp16.json`](configs/generation/slat_flow_txt_dit_B_64l8p2_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-text-base/blob/main/ckpts/slat_flow_txt_dit_B_64l8p2_fp16.safetensors) | Base text-conditioned SLat Flow Model |
| [`generation/ss_flow_txt_dit_L_16l8_fp16.json`](configs/generation/ss_flow_txt_dit_L_16l8_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-text-large/blob/main/ckpts/ss_flow_txt_dit_L_16l8_fp16.safetensors) | Large text-conditioned sparse structure Flow Model |
| [`generation/slat_flow_txt_dit_L_64l8p2_fp16.json`](configs/generation/slat_flow_txt_dit_L_64l8p2_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-text-large/blob/main/ckpts/slat_flow_txt_dit_L_64l8p2_fp16.safetensors) | Large text-conditioned SLat Flow Model |
| [`generation/ss_flow_txt_dit_XL_16l8_fp16.json`](configs/generation/ss_flow_txt_dit_XL_16l8_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-text-xlarge/blob/main/ckpts/ss_flow_txt_dit_XL_16l8_fp16.safetensors) | Extra-large text-conditioned sparse structure Flow Model |
| [`generation/slat_flow_txt_dit_XL_64l8p2_fp16.json`](configs/generation/slat_flow_txt_dit_XL_64l8p2_fp16.json) | [Denoiser](https://huggingface.co/JeffreyXiang/TRELLIS-text-xlarge/blob/main/ckpts/slat_flow_txt_dit_XL_64l8p2_fp16.safetensors) | Extra-large text-conditioned SLat Flow Model |


### Command-Line Options

The training script supports the following command-line options:
```sh
usage: train.py [-h] --config CONFIG --output_dir OUTPUT_DIR [--load_dir LOAD_DIR] [--ckpt CKPT] [--data_dir DATA_DIR] [--auto_retry AUTO_RETRY] [--tryrun] [--profile] [--num_nodes NUM_NODES] [--node_rank NODE_RANK] [--num_gpus NUM_GPUS] [--master_addr MASTER_ADDR] [--master_port MASTER_PORT]

options:
  -h, --help                    show this help message and exit
  --config CONFIG               Experiment config file
  --output_dir OUTPUT_DIR       Output directory
  --load_dir LOAD_DIR           Load directory, default to output_dir
  --ckpt CKPT                   Checkpoint step to resume training, default to latest
  --data_dir DATA_DIR           Data directory
  --auto_retry AUTO_RETRY       Number of retries on error
  --tryrun                      Try run without training
  --profile                     Profile training
  --num_nodes NUM_NODES         Number of nodes
  --node_rank NODE_RANK         Node rank
  --num_gpus NUM_GPUS           Number of GPUs per node, default to all
  --master_addr MASTER_ADDR     Master address for distributed training
  --master_port MASTER_PORT     Port for distributed training
```

### Example Training Commands

#### Single-node Training

To train an image-to-3D stage 2 model on a single machine:
```sh
python train.py \
  --config configs/vae/slat_vae_dec_mesh_swin8_B_64l8_fp16.json \
  --output_dir outputs/slat_vae_dec_mesh_swin8_B_64l8_fp16_1node \
  --data_dir /path/to/your/dataset1,/path/to/your/dataset2
```
The script will automatically distribute the training across all available GPUs. Specify the number of GPUs with the `--num_gpus` flag if you want to limit the number of GPUs used.
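
For instance, to restrict the run above to four GPUs, the same command gains one flag (a sketch; paths are placeholders):
```sh
python train.py \
  --config configs/vae/slat_vae_dec_mesh_swin8_B_64l8_fp16.json \
  --output_dir outputs/slat_vae_dec_mesh_swin8_B_64l8_fp16_1node \
  --data_dir /path/to/your/dataset1,/path/to/your/dataset2 \
  --num_gpus 4
```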

#### Multi-node Training

To train an image-to-3D stage 2 model with multiple GPUs across nodes (e.g., 2 nodes):
```sh
python train.py \
  --config configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json \
  --output_dir outputs/slat_flow_img_dit_L_64l8p2_fp16_2nodes \
  --data_dir /path/to/your/dataset1,/path/to/your/dataset2 \
  --num_nodes 2 \
  --node_rank 0 \
  --master_addr $MASTER_ADDR \
  --master_port $MASTER_PORT
```
Be sure to adjust `node_rank`, `master_addr`, and `master_port` for each node accordingly.
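
For example, on the second of the two nodes, the same command is launched with only the rank changed (a sketch):
```sh
python train.py \
  --config configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json \
  --output_dir outputs/slat_flow_img_dit_L_64l8p2_fp16_2nodes \
  --data_dir /path/to/your/dataset1,/path/to/your/dataset2 \
  --num_nodes 2 \
  --node_rank 1 \
  --master_addr $MASTER_ADDR \
  --master_port $MASTER_PORT
```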

#### Resuming Training

By default, training resumes from the latest saved checkpoint in the same output directory. To resume from a specific checkpoint, use the `--load_dir` and `--ckpt` flags:
```sh
python train.py \
  --config configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json \
  --output_dir outputs/slat_flow_img_dit_L_64l8p2_fp16_resume \
  --data_dir /path/to/your/dataset1,/path/to/your/dataset2 \
  --load_dir /path/to/your/checkpoint \
  --ckpt [step]
```

### Additional Options

- **Auto Retry:** Use the `--auto_retry` flag to specify the number of retries in case of intermittent errors.
- **Dry Run:** The `--tryrun` flag allows you to check your configuration and environment without launching full training.
- **Profiling:** Enable profiling with the `--profile` flag to gain insights into training performance and diagnose bottlenecks.
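
A combined sketch using these flags: validate the configuration with `--tryrun` first, then launch with automatic retries (paths are placeholders):
```sh
# Check the config and environment without training
python train.py \
  --config configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json \
  --output_dir outputs/slat_flow_img_dit_L_64l8p2_fp16 \
  --data_dir /path/to/your/dataset1 \
  --tryrun

# Launch for real, retrying up to 3 times on intermittent errors
python train.py \
  --config configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json \
  --output_dir outputs/slat_flow_img_dit_L_64l8p2_fp16 \
  --data_dir /path/to/your/dataset1 \
  --auto_retry 3
```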

Adjust the file paths and parameters to match your experimental setup.


<!-- License -->
## ⚖️ License

TRELLIS models and the majority of the code are licensed under the [MIT License](LICENSE). The following submodules may have different licenses:
- [**diffoctreerast**](https://github.com/JeffreyXiang/diffoctreerast): We developed a CUDA-based real-time differentiable octree renderer for rendering radiance fields as part of this project. This renderer is derived from the [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization) project and is available under the [LICENSE](https://github.com/JeffreyXiang/diffoctreerast/blob/master/LICENSE).


- [**Modified Flexicubes**](https://github.com/MaxtirError/FlexiCubes): In this project, we used a modified version of [Flexicubes](https://github.com/nv-tlabs/FlexiCubes) to support vertex attributes. This modified version is licensed under the [LICENSE](https://github.com/nv-tlabs/FlexiCubes/blob/main/LICENSE.txt).


<!-- Citation -->
## 📜 Citation

If you find this work helpful, please consider citing our paper:

```bibtex
@article{xiang2024structured,
    title   = {Structured 3D Latents for Scalable and Versatile 3D Generation},
    author  = {Xiang, Jianfeng and Lv, Zelong and Xu, Sicheng and Deng, Yu and Wang, Ruicheng and Zhang, Bowen and Chen, Dong and Tong, Xin and Yang, Jiaolong},
    journal = {arXiv preprint arXiv:2412.01506},
    year    = {2024}
}
```


            
