# Birder
Birder is an open-source computer vision framework for wildlife image analysis, focusing on avian species.
* [Introduction](#introduction)
* [Setup](#setup)
* [Getting Started](#getting-started)
* [Pre-trained Models](#pre-trained-models)
* [Detection](#detection)
* [Project Status and Contributions](#project-status-and-contributions)
* [Licenses](#licenses)
* [Acknowledgments](#acknowledgments)
## Introduction
Birder is an open-source computer vision framework designed for wildlife imagery, specifically focused on bird species classification and detection. This project leverages deep neural networks to provide robust models that can handle real-world data challenges.
For comprehensive documentation, tutorials, and more, see the main documentation at [docs/README.md](docs/README.md).
The project features:
* A diverse collection of classification and detection models
* Support for self-supervised pre-training
* Knowledge distillation training (teacher-student)
* Custom utilities and data augmentation techniques
* Comprehensive training scripts
* Advanced error analysis tools
* Extensive documentation and tutorials (hopefully...)
Unlike projects that aim to reproduce ImageNet training results from common papers, Birder is tailored specifically for practical applications in ornithology, conservation, and wildlife photography.
As Ross Wightman eloquently stated in the [timm README](https://github.com/huggingface/pytorch-image-models#introduction):
> The work of many others is present here. I've tried to make sure all source material is acknowledged via links to github, arXiv papers, etc. in the README, documentation, and code docstrings. Please let me know if I missed anything.
The same principle applies to Birder. We stand on the shoulders of giants in the fields of computer vision, machine learning, and ornithology. We've made every effort to acknowledge and credit the work that has influenced and contributed to this project. If you believe we've missed any attributions, please let us know by opening an issue.
## Setup
1. Ensure PyTorch 2.5 is installed on your system
1. Install the latest Birder version:
```sh
pip install birder
```
For detailed installation options, including source installation, refer to our [Setup Guide](docs/getting_started.md#setup).
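To confirm the PyTorch prerequisite from step 1, here is a quick check using the standard `torch` API (nothing Birder-specific):

```python
import torch

# Report the installed PyTorch version and whether a CUDA device is available (a GPU is optional)
print(torch.__version__)
print(torch.cuda.is_available())
```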
## Getting Started
![Example](docs/img/example.jpeg)
Check out the Birder Colab notebook for an interactive tutorial.
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/birder-project/birder/blob/main/notebooks/getting_started.ipynb)
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Project-blue)](https://huggingface.co/birder-project)
If you prefer a local setup, follow the installation instructions in the [Setup](#setup) section above.
Once Birder is installed, you can start exploring its capabilities.
Birder provides pre-trained models that you can download using the `fetch-model` tool.
To download a model, use the following command:
```sh
python -m birder.tools fetch-model mvit_v2_t_il-all
```
Create a data directory and download an example image:
```sh
mkdir data
wget https://f000.backblazeb2.com/file/birder/data/img_001.jpeg -O data/img_001.jpeg
```
To classify bird images, use the `birder-predict` script, specifying the network (`-n`) and the model tag (`-t`):
```sh
birder-predict -n mvit_v2_t -t il-all --show data/img_001.jpeg
```
For more options and detailed usage of the prediction tool, run:
```sh
birder-predict --help
```
For more detailed usage instructions and examples, please refer to our [documentation](docs/README.md).
## Pre-trained Models
Birder provides a comprehensive suite of pre-trained models for bird species classification.
To explore the full range of available pre-trained models, use the `list-models` tool:
```sh
python -m birder.tools list-models --pretrained
```
This command displays a catalog of models ready for download.
### Model Nomenclature
The naming convention for Birder models encapsulates key information about their architecture and training approach.
**Architecture**: The first part of the model name indicates the core neural network structure (e.g., MobileNet, ResNet).
**Training indicators**:
* intermediate: Signifies models that underwent a two-stage training process, beginning with a large-scale weakly labeled dataset before fine-tuning on the primary dataset
* mim: Indicates models that leveraged self-supervised pre-training techniques, primarily Masked Autoencoder (MAE), prior to supervised training
**Other tags**:
* quantized: Model that has been quantized to reduce the computational and memory costs of running inference
* reparameterized: Model that has been restructured to simplify its architecture for optimized inference performance
**Net Param**: The number following the architecture name (e.g., 50, 1.0, 0.5), called the `net_param`, represents a specific configuration choice for the network, which can affect aspects such as model size or complexity.
**Epoch Number** (optional): The last part of the model name may be an underscore followed by a number (e.g., `0`, `200`), which indicates the training epoch of the checkpoint.
For instance, *resnext_50_intermediate_300* represents a ResNeXt model with a `net_param` of 50 that underwent intermediate training and is from epoch 300.
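As an illustration only, the sketch below splits a model name into the components described above; the `parse_model_name` helper and its fixed tag list are hypothetical and not part of the Birder API.

```python
# Hypothetical helper -- illustrates the naming convention above, not part of the Birder API.
def parse_model_name(name: str) -> dict:
    known_tags = {"intermediate", "mim", "quantized", "reparameterized"}
    info = {"architecture": [], "net_param": None, "tags": [], "epoch": None}

    for part in name.split("_"):
        if part in known_tags:
            info["tags"].append(part)
        elif part.replace(".", "", 1).isdigit():
            # The first number is the net_param; a later trailing number is the epoch
            if info["net_param"] is None:
                info["net_param"] = part
            else:
                info["epoch"] = int(part)
        else:
            info["architecture"].append(part)

    info["architecture"] = "_".join(info["architecture"])
    return info


print(parse_model_name("resnext_50_intermediate_300"))
# {'architecture': 'resnext', 'net_param': '50', 'tags': ['intermediate'], 'epoch': 300}
```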
### Self-supervised Image Pre-training
Our pre-training process utilizes a diverse collection of image datasets.
This approach allows our models to learn rich, general-purpose visual representations before fine-tuning on specific bird classification tasks.
The pre-training dataset comprises:
* iNaturalist 2021 (~3.3M)
* WebVision-2.0 (~1.5M random subset)
* imagenet-w21-webp-wds (~1M random subset)
* SA-1B (~220K random subset of 20 chunks)
* COCO (~120K)
* NABirds (~48K)
* Birdsnap v1.1 (~44K)
* CUB-200 2011 (~18K)
* The Birder dataset (~5M)
Total: ~11M images
This carefully curated mix of datasets balances general visual knowledge with domain-specific bird imagery, enhancing the model's overall performance.
For detailed information about these datasets, including descriptions, citations, and licensing details, please refer to [docs/public_datasets.md](docs/public_datasets.md).
## Detection
Detection features are currently under development and will be available in future releases.
## Project Status and Contributions
Birder is currently a personal project in active development. As the sole developer, I am focused on building and refining the core functionalities of the framework. At this time, I am not actively seeking external contributors.
However, I greatly appreciate the interest and support from the community. If you have suggestions, find bugs, or want to provide feedback, please feel free to:
* Open an issue in the project's issue tracker
* Use the project and share your experiences
* Star the repository if you find it useful
While I may not be able to incorporate external contributions at this stage, your input is valuable and helps shape the direction of Birder. I'll update this section if the contribution policy changes in the future.
Thank you for your understanding and interest in Birder!
## Licenses
### Code
The code in this project is licensed under Apache 2.0. See [LICENSE](LICENSE) for details.
Some code is adapted from other projects.
Notices with links to the original sources appear at the top of the relevant file or alongside the specific class or function.
It is your responsibility to ensure compliance with these licenses and with the conditions of any dependent licenses.
If you think we've missed a reference or a license, please create an issue.
### Pretrained Weights
Some of the pretrained weights available here are pretrained on ImageNet. ImageNet was released for non-commercial research purposes only (<https://image-net.org/download>). It's not clear what the implications of that are for the use of pretrained weights from that dataset. It's best to seek legal advice if you intend to use the pretrained weights in a commercial product.
### Disclaimer
If you intend to use Birder, its pretrained weights, or any associated datasets in a commercial product, we strongly recommend seeking legal advice to ensure compliance with all relevant licenses and terms of use.
It's the user's responsibility to ensure that their use of this project, including any pretrained weights or datasets, complies with all applicable licenses and legal requirements.
## Acknowledgments
Birder owes much to the work of others in computer vision, machine learning, and ornithology.
Special thanks to:
* **Ross Wightman**: His work on [PyTorch Image Models (timm)](https://github.com/huggingface/pytorch-image-models) greatly inspired the design and approach of Birder.
* **Image Contributors**:
  * Yaron Schmid of [YS Wildlife](https://www.yswildlifephotography.com/who-we-are), for the generous donation of bird photographs.
This project also benefits from numerous open-source libraries and ornithological resources.
If any attribution is missing, please open an issue to let us know.