efficient-track-anything

Name: efficient-track-anything
Version: 1.0
Home page: https://yformer.github.io/efficient-track-anything/
Summary: Efficient Track Anything
Upload time: 2024-12-12 22:08:47
Author: Meta AI
Requires Python: >=3.10.0
License: Apache 2.0
Requirements: none recorded
# Efficient Track Anything
[[`📕Project`](https://yformer.github.io/efficient-track-anything/)][[`🤗Gradio Demo`](https://2ab5e2198a0dcbe8a2.gradio.live)][[`📕Paper`](https://arxiv.org/pdf/2411.18933)]

![Efficient Track Anything Speed](figs/examples/speed_vs_latency.png)

The **Efficient Track Anything Model (EfficientTAM)** uses a vanilla lightweight ViT image encoder, together with a proposed efficient memory cross-attention module, to further improve efficiency. EfficientTAMs are trained on the SA-1B (image) and SA-V (video) datasets. EfficientTAM achieves performance comparable to SAM 2 with improved efficiency, and runs at **>10 frames per second** with reasonable video segmentation quality on an **iPhone 15**. Try the demo with a family of EfficientTAMs at [[`🤗Gradio Demo`](https://2ab5e2198a0dcbe8a2.gradio.live)].
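
Since the package is published on PyPI (`pip install efficient-track-anything`), a video can be prompted and tracked much like with SAM 2. Below is a minimal sketch under that assumption: the module path, builder name, config path, and checkpoint filename all mirror SAM 2's video predictor API and are assumptions, not confirmed by this README.

```python
# Minimal prompt-and-propagate video tracking sketch.
# ASSUMPTIONS: module path, builder name, config, and checkpoint filenames
# mirror SAM 2's video predictor API and may differ in the actual release.
import numpy as np
import torch

from efficient_track_anything.build_efficienttam import (  # assumed module path
    build_efficienttam_video_predictor,                     # assumed builder name
)

predictor = build_efficienttam_video_predictor(
    "configs/efficienttam/efficienttam_s.yaml",  # assumed config name
    "checkpoints/efficienttam_s.pt",             # assumed checkpoint name
)

with torch.inference_mode():
    # Load a directory of JPEG frames, SAM 2-style.
    state = predictor.init_state(video_path="./video_frames")

    # Prompt the first frame with one foreground click (label 1 = foreground).
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[320, 240]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate the prompted mask through the remaining frames.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()
```

The click seeds a mask on frame 0 and propagation carries it through the rest of the clip, which is the same prompt-and-propagate workflow SAM 2 uses.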

![Efficient Track Anything design](figs/examples/overview.png)

## News
[Dec. 4, 2024] [`🤗Efficient Track Anything for segment everything`](https://5239f8e221db7ee8a0.gradio.live/). Thanks to @SkalskiP!

[Dec. 2, 2024] We release the codebase of Efficient Track Anything.

## Online Demo & Examples
The online demo and examples can be found on the [project page](https://yformer.github.io/efficient-track-anything/).

## EfficientTAM Video Segmentation Examples
| Model | Video segmentation result |
|:-------------------------:|:-------------------------:|
| SAM 2 | ![SAM 2](figs/examples/sam2_video_segmentation.png) |
| EfficientTAM | ![EfficientTAM](figs/examples/efficienttam_video_segmentation.png) |

## EfficientTAM Image Segmentation Examples
Each comparison figure shows, left to right: input image, SAM, EfficientSAM, SAM 2, EfficientTAM. An illustrative prompting sketch follows the table.

| Prompt | Comparison |
|:-------------------------:|:-------------------------:|
| Point-prompt | ![point-prompt](figs/examples/demo_img_point.png) |
| Box-prompt | ![box-prompt](figs/examples/demo_img_box.png) |
| Segment everything | ![segment everything](figs/examples/demo_img_everything.png) |
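
For single images, point and box prompts like those above could be issued through a SAM 2-style image predictor. The sketch below is illustrative only: the `EfficientTAMImagePredictor` class, module paths, config, and checkpoint names are assumptions, not confirmed by this README.

```python
# Point- and box-prompted image segmentation sketch.
# ASSUMPTIONS: a SAM 2-style image predictor (set_image / predict); the class,
# builder names, and all file paths below are placeholders.
import numpy as np
from PIL import Image

from efficient_track_anything.build_efficienttam import build_efficienttam  # assumed
from efficient_track_anything.efficienttam_image_predictor import (         # assumed
    EfficientTAMImagePredictor,
)

model = build_efficienttam(
    "configs/efficienttam/efficienttam_s.yaml",  # assumed config name
    "checkpoints/efficienttam_s.pt",             # assumed checkpoint name
)
predictor = EfficientTAMImagePredictor(model)

image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

# Single foreground point prompt (label 1 = foreground, 0 = background).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)

# Box prompt in XYXY pixel coordinates.
box_masks, _, _ = predictor.predict(box=np.array([100, 100, 400, 400]))
```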

## Model
EfficientTAM checkpoints will be available soon on the [Hugging Face Space](https://huggingface.co/spaces/yunyangx/EfficientTAM/tree/main).
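
Once the checkpoints are published there, they could be pulled programmatically with `huggingface_hub`. A minimal sketch; the filename below is a placeholder, since the actual checkpoint names are not listed in this README.

```python
# Download a checkpoint from the Hugging Face Space once files are available.
# ASSUMPTION: "efficienttam_s.pt" is a placeholder; browse the Space linked
# above for the real file names.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="yunyangx/EfficientTAM",
    repo_type="space",             # the link above is a Space, not a model repo
    filename="efficienttam_s.pt",  # placeholder filename
)
print(checkpoint_path)  # path to the locally cached file
```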

## Acknowledgement

+ [SAM2](https://github.com/facebookresearch/sam2)
+ [SAM2-Video-Predictor](https://huggingface.co/spaces/fffiloni/SAM2-Video-Predictor)
+ [florence-sam](https://huggingface.co/spaces/SkalskiP/florence-sam)
+ [SAM](https://github.com/facebookresearch/segment-anything)
+ [EfficientSAM](https://github.com/yformer/EfficientSAM)

If you're using Efficient Track Anything in your research or applications, please cite using this BibTeX:
```bibtex
@article{xiong2024efficienttam,
  title={Efficient Track Anything},
  author={Yunyang Xiong and Chong Zhou and Xiaoyu Xiang and Lemeng Wu and Chenchen Zhu and Zechun Liu and Saksham Suri and Balakrishnan Varadarajan and Ramya Akula and Forrest Iandola and Raghuraman Krishnamoorthi and Bilge Soran and Vikas Chandra},
  journal={preprint arXiv:2411.18933},
  year={2024}
}
```

            
