segment-anything-fast

- **Name**: segment-anything-fast
- **Version**: 0.1.2
- **Home page**: https://github.com/opengeos/FastSAM
- **Summary**: Fast Segment Anything
- **Author**: Qiusheng Wu (giswqs@gmail.com)
- **Requires Python**: >=3.8
- **License**: Apache Software License
- **Keywords**: segment-anything
- **Upload time**: 2023-08-16 01:48:34
- **Requirements**: none recorded
![](assets/logo.png)

# Fast Segment Anything

[![image](https://img.shields.io/pypi/v/segment-anything-fast.svg)](https://pypi.python.org/pypi/segment-anything-fast)

[[`📕Paper`](https://arxiv.org/pdf/2306.12156.pdf)] [[`🤗HuggingFace Demo`](https://huggingface.co/spaces/An-619/FastSAM)] [[`Colab demo`](https://colab.research.google.com/drive/1oX14f6IneGGw612WgVlAiy91UHwFAvr9?usp=sharing)] [[`Replicate demo & API`](https://replicate.com/casia-iva-lab/fastsam)] [[`Model Zoo`](#model-checkpoints)] [[`BibTeX`](#citing-fastsam)]

![FastSAM Speed](assets/head_fig.png)

The **Fast Segment Anything Model (FastSAM)** is a CNN-based Segment Anything Model trained using only 2% of the SA-1B dataset published by the SAM authors. FastSAM achieves performance comparable to the SAM method at **50× higher run-time speed**.

![FastSAM design](assets/Overview.png)

**🍇 Updates**

- **`2023/07/06`** Added to [Ultralytics (YOLOv8) Model Hub](https://docs.ultralytics.com/models/fast-sam/). Thanks to [Ultralytics](https://github.com/ultralytics/ultralytics) for help 🌹.
- **`2023/06/29`** Support [text mode](https://huggingface.co/spaces/An-619/FastSAM) in HuggingFace Space. Thanks a lot to [gaoxinge](https://github.com/gaoxinge) for help 🌹.
- **`2023/06/29`** Release [FastSAM_Awesome_TensorRT](https://github.com/ChuRuaNh0/FastSam_Awsome_TensorRT). Thanks a lot to [ChuRuaNh0](https://github.com/ChuRuaNh0) for providing the TensorRT model of FastSAM 🌹.
- **`2023/06/26`** Release [FastSAM Replicate Online Demo](https://replicate.com/casia-iva-lab/fastsam). Thanks a lot to [Chenxi](https://chenxwh.github.io/) for providing this nice demo 🌹.
- **`2023/06/26`** Support [points mode](https://huggingface.co/spaces/An-619/FastSAM) in HuggingFace Space. Better and faster interaction will come soon!
- **`2023/06/24`** Thanks a lot to [Grounding-SAM](https://github.com/IDEA-Research/Grounded-Segment-Anything) for combining Grounding-DINO with FastSAM in [Grounded-FastSAM](https://github.com/IDEA-Research/Grounded-Segment-Anything/tree/main/EfficientSAM) 🌹.

## Installation

Install the package from PyPI:

```shell
pip install segment-anything-fast
```

## <a name="GettingStarted"></a> Getting Started

First download a [model checkpoint](#model-checkpoints).

Then you can run the scripts to try everything mode and the three prompt modes.

```shell
# Everything mode
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg
```

```shell
# Text prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg  --text_prompt "the yellow dog"
```

```shell
# Box prompt (xywh)
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --box_prompt "[[570,200,230,400]]"
```

```shell
# Points prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg  --point_prompt "[[520,360],[620,300]]" --point_label "[1,0]"
```
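
Note that the CLI box prompt above is given in `xywh` format, while the Python API shown below expects boxes as `[x1, y1, x2, y2]`. A tiny illustrative helper (not part of the package) for converting between the two:

```python
def xywh_to_xyxy(box):
    """Convert an [x, y, w, h] box to [x1, y1, x2, y2]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

# The box from the CLI example above, in the format the API expects:
print(xywh_to_xyxy([570, 200, 230, 400]))  # -> [570, 200, 800, 600]
```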

You can use the following Python code to generate all masks, select masks based on prompts, and visualize the results.

```python
from fastsam import FastSAM, FastSAMPrompt

model = FastSAM('./weights/FastSAM.pt')
IMAGE_PATH = './images/dogs.jpg'
DEVICE = 'cpu'
everything_results = model(IMAGE_PATH, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
prompt_process = FastSAMPrompt(IMAGE_PATH, everything_results, device=DEVICE)

# Everything prompt: return all generated masks.
ann = prompt_process.everything_prompt()

# Box prompt: boxes are given as [x1, y1, x2, y2] (default [0, 0, 0, 0]).
ann = prompt_process.box_prompt(bbox=[[200, 200, 300, 300]])

# Text prompt: select masks matching a text description.
ann = prompt_process.text_prompt(text='a photo of a dog')

# Point prompt: points are [[x1, y1], [x2, y2], ...] (default [[0, 0]]);
# point labels are 1 for foreground and 0 for background (default [0]).
ann = prompt_process.point_prompt(points=[[620, 360]], pointlabel=[1])

prompt_process.plot(annotations=ann, output_path='./output/dog.jpg')
```
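
The snippet above runs on the CPU. If a GPU is available, switching the device should give a large speedup; a minimal sketch, assuming the model accepts standard PyTorch device strings and reusing the objects defined above:

```python
import torch

# Prefer a GPU when one is available, otherwise fall back to the CPU.
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
everything_results = model(IMAGE_PATH, device=DEVICE, retina_masks=True,
                           imgsz=1024, conf=0.4, iou=0.9)
prompt_process = FastSAMPrompt(IMAGE_PATH, everything_results, device=DEVICE)
```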

You are also welcome to try our Colab demo: [FastSAM_example.ipynb](https://colab.research.google.com/drive/1oX14f6IneGGw612WgVlAiy91UHwFAvr9?usp=sharing).

## Different Inference Options

We provide various options for different purposes; see [MORE_USAGES.md](MORE_USAGES.md) for details.

## Web demo

### Gradio demo

- We also provide a UI for testing our method, built with Gradio. You can upload a custom image, select the mode, set the parameters, click the segment button, and get a satisfactory segmentation result. Currently, the UI supports interaction with 'everything mode' and 'points mode'; we plan to add support for additional modes in the future. Running the following command in a terminal will launch the demo:

```shell
# Download the pre-trained model to "./weights/FastSAM.pt" first
python app_gradio.py
```

- This demo is also hosted on [HuggingFace Space](https://huggingface.co/spaces/An-619/FastSAM).

![HF_Everything](assets/hf_everything_mode.png) ![HF_Points](assets/hf_points_mode.png)

### Replicate demo

- The [Replicate demo](https://replicate.com/casia-iva-lab/fastsam) supports all modes; you can try the points, box, and text modes online.

![Replicate-1](assets/replicate-1.png) ![Replicate-2](assets/replicate-2.png) ![Replicate-3](assets/replicate-3.png)
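
If you would rather call the Replicate deployment from code, the official `replicate` Python client can be used. A sketch only: the input field name below is hypothetical, and a `:version` suffix may be required; check the model page for the exact schema.

```python
import replicate

# NOTE: "input_image" is a hypothetical field name; consult the Replicate
# model page (https://replicate.com/casia-iva-lab/fastsam) for the real schema.
output = replicate.run(
    "casia-iva-lab/fastsam",  # a ":version" suffix may be required
    input={"input_image": open("./images/dogs.jpg", "rb")},
)
print(output)
```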

## <a name="Models"></a>Model Checkpoints

Two versions of the model are available, with different sizes. Click the links below to download the checkpoint for the corresponding model type.

- **`default` or `FastSAM`: [YOLOv8x-based Segment Anything Model](https://drive.google.com/file/d/1m1sjY4ihXBU1fZXdQ-Xdj-mDltW-2Rqv/view?usp=sharing) | [Baidu Cloud (pwd: 0000)](https://pan.baidu.com/s/18KzBmOTENjByoWWR17zdiQ?pwd=0000)**
- `FastSAM-s`: [YOLOv8s-based Segment Anything Model](https://drive.google.com/file/d/10XmSj6mmpmRb8NhXbtiuO9cTTBwR_9SV/view?usp=sharing)
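
If you prefer to script the download, the Google Drive link above can be fetched with the third-party `gdown` package (`pip install gdown`). A minimal sketch; the file ID is taken from the default-model link above:

```python
import gdown

# File ID from the Google Drive link for the default FastSAM checkpoint above.
FILE_ID = "1m1sjY4ihXBU1fZXdQ-Xdj-mDltW-2Rqv"
gdown.download(id=FILE_ID, output="./weights/FastSAM.pt", quiet=False)
```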

## Results

All results were measured on a single NVIDIA GeForce RTX 3090.

### 1. Inference time

Running speed under different numbers of point prompts (ms).
| method | params | 1 | 10 | 100 | E(16x16) | E(32x32\*) | E(64x64) |
|:------------------:|:--------:|:-----:|:-----:|:-----:|:----------:|:-----------:|:----------:|
| SAM-H | 0.6G | 446 | 464 | 627 | 852 | 2099 | 6972 |
| SAM-B | 136M | 110 | 125 | 230 | 432 | 1383 | 5417 |
| FastSAM | 68M | 40 | 40 | 40 | 40 | 40 | 40 |
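
For reference, per-image latency of this kind can be reproduced with a simple loop; a sketch reusing the `model`, `IMAGE_PATH`, and `DEVICE` objects from the Getting Started example (on GPU, `torch.cuda.synchronize()` should be called before reading the clock):

```python
import time

# Warm-up run so model loading and CUDA initialization are excluded.
model(IMAGE_PATH, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

runs = 10
start = time.perf_counter()
for _ in range(runs):
    model(IMAGE_PATH, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
elapsed_ms = (time.perf_counter() - start) / runs * 1000
print(f"average inference time: {elapsed_ms:.1f} ms")
```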

### 2. Memory usage

|  Dataset  | Method  | GPU Memory (MB) |
| :-------: | :-----: | :-------------: |
| COCO 2017 | FastSAM |      2608       |
| COCO 2017 |  SAM-H  |      7060       |
| COCO 2017 |  SAM-B  |      4670       |
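
Peak GPU memory for a run can be checked with PyTorch's allocator statistics; note that this reports tensor allocations, which is lower than the total process memory shown by `nvidia-smi`. A sketch under the same setup as above:

```python
import torch

torch.cuda.reset_peak_memory_stats()
model(IMAGE_PATH, device='cuda', retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
peak_mb = torch.cuda.max_memory_allocated() / 1024 ** 2
print(f"peak GPU memory (allocator): {peak_mb:.0f} MB")
```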

### 3. Zero-shot Transfer Experiments

#### Edge Detection

Tested on the BSDS500 dataset.
|method | year| ODS | OIS | AP | R50 |
|:----------:|:-------:|:--------:|:--------:|:------:|:-----:|
| HED | 2015| .788 | .808 | .840 | .923 |
| SAM | 2023| .768 | .786 | .794 | .928 |
| FastSAM | 2023| .750 | .790 | .793 | .903 |

#### Object Proposals

##### COCO

|  method   | AR10 | AR100 | AR1000 | AUC  |
| :-------: | :--: | :---: | :----: | :--: |
| SAM-H E64 | 15.5 | 45.6  |  67.7  | 32.1 |
| SAM-H E32 | 18.5 | 49.5  |  62.5  | 33.7 |
| SAM-B E32 | 11.4 | 39.6  |  59.1  | 27.3 |
|  FastSAM  | 15.7 | 47.3  |  63.7  | 32.2 |

##### LVIS

bbox AR@1000:

| method | all | small | med. | large |
|:---------:|:----:|:-----:|:----:|:-----:|
| ViTDet-H | 65.0 | 53.2 | 83.3 | 91.2 |
| *zero-shot transfer methods:* | | | | |
| SAM-H E64 | 52.1 | 36.6 | 75.1 | 88.2 |
| SAM-H E32 | 50.3 | 33.1 | 76.2 | 89.8 |
| SAM-B E32 | 45.0 | 29.3 | 68.7 | 80.6 |
| FastSAM | 57.1 | 44.3 | 77.1 | 85.3 |

#### Instance Segmentation on COCO 2017

|  method  |  AP  | APS  | APM  | APL  |
| :------: | :--: | :--: | :--: | :--: |
| ViTDet-H | .510 | .320 | .543 | .689 |
|   SAM    | .465 | .308 | .510 | .617 |
| FastSAM  | .379 | .239 | .434 | .500 |

### 4. Performance Visualization

Several segmentation results:

#### Natural Images

![Natural Images](assets/eightpic.png)

#### Text to Mask

![Text to Mask](assets/dog_clip.png)

### 5. Downstream Tasks

Results on several downstream tasks demonstrate the effectiveness of FastSAM.

#### Anomaly Detection

![Anomaly Detection](assets/anomaly.png)

#### Salient Object Detection

![Salient Object Detection](assets/salient.png)

#### Building Extraction

![Building Detection](assets/building.png)

## License

The model is licensed under the [Apache 2.0 license](LICENSE).

## Acknowledgement

- [Segment Anything](https://segment-anything.com/) provides the SA-1B dataset and the base codes.
- [YOLOv8](https://github.com/ultralytics/ultralytics) provides codes and pre-trained models.
- [YOLACT](https://arxiv.org/abs/2112.10003) provides a powerful instance segmentation method.
- [Grounded-Segment-Anything](https://huggingface.co/spaces/yizhangliu/Grounded-Segment-Anything) provides a useful web demo template.

## Contributors

Our project wouldn't be possible without the contributions of these amazing people! Thank you all for making this project better.

<a href="https://github.com/CASIA-IVA-Lab/FastSAM/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=CASIA-IVA-Lab/FastSAM" />
</a>

## Citing FastSAM

If you find this project useful for your research, please consider citing the following BibTeX entry.

```bibtex
@misc{zhao2023fast,
      title={Fast Segment Anything},
      author={Xu Zhao and Wenchao Ding and Yongqi An and Yinglong Du and Tao Yu and Min Li and Ming Tang and Jinqiao Wang},
      year={2023},
      eprint={2306.12156},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

[![Star History Chart](https://api.star-history.com/svg?repos=CASIA-IVA-Lab/FastSAM&type=Date)](https://star-history.com/#CASIA-IVA-Lab/FastSAM&Date)

            
