finetuner

Name: finetuner
Version: 0.8.1
Home page: https://github.com/jina-ai/finetuner/
Summary: Task-oriented finetuning for better embeddings on neural search.
Upload time: 2023-07-26 08:47:07
Author: Jina AI
Requires Python: >=3.8.0
License: Apache 2.0
Keywords: jina, neural-search, neural-network, deep-learning, pretraining, fine-tuning, pretrained-models, triplet-loss, metric-learning, siamese-network, few-shot-learning
            <br><br>

<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>


<p align="center">
<b>Task-oriented finetuning for better embeddings on neural search</b>
</p>

<p align=center>
<a href="https://pypi.org/project/finetuner/"><img alt="PyPI" src="https://img.shields.io/pypi/v/finetuner?label=Release&style=flat-square"></a>
<a href="https://codecov.io/gh/jina-ai/finetuner"><img alt="Codecov branch" src="https://img.shields.io/codecov/c/github/jina-ai/finetuner/main?logo=Codecov&logoColor=white&style=flat-square"></a>
<a href="https://pypistats.org/packages/finetuner"><img alt="PyPI - Downloads from official pypistats" src="https://img.shields.io/pypi/dm/finetuner?style=flat-square"></a>
<a href="https://discord.jina.ai"><img src="https://img.shields.io/discord/1106542220112302130?logo=discord&logoColor=white&style=flat-square"></a>
</p>

<!-- start elevator-pitch -->

Fine-tuning is an effective way to improve performance on [neural search](https://jina.ai/news/what-is-neural-search-and-learn-to-build-a-neural-search-engine/) tasks.
However, setting up and performing fine-tuning can be very time-consuming and resource-intensive.

Jina AI's Finetuner makes fine-tuning easier and faster by streamlining the workflow and handling all the complexity and infrastructure in the cloud.
With Finetuner, you can easily enhance the performance of pre-trained models,
making them production-ready [without extensive labeling](https://jina.ai/news/fine-tuning-with-low-budget-and-high-expectations/) or expensive hardware.

🎏 **Better embeddings**: Create high-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems,
clustering, duplication detection, anomaly detection, or other uses.

⏰ **Low budget, high expectations**: Bring considerable improvements to model performance, making the most out of as little as a few hundred training samples, and finish fine-tuning in as little as an hour.

📈 **Performance promise**: Enhance pre-trained models so that they deliver state-of-the-art results on domain-specific applications.

🔱 **Simple yet powerful**: Easy access to 40+ mainstream loss functions, 10+ optimizers, layer pruning, weight
freezing, dimensionality reduction, hard-negative mining, cross-modal models, and distributed training. 

☁ **All-in-cloud**: Train using our GPU infrastructure, manage runs, experiments, and artifacts on Jina AI Cloud
without worrying about resource availability, complex integration, or infrastructure costs.

<!-- end elevator-pitch -->

## [Documentation](https://finetuner.jina.ai/)

## Pretrained Text Embedding Models

| Name                   | Parameters | Dimensions | Hugging Face                                                 |
|------------------------|------------|------------|--------------------------------------------------------------|
| jina-embedding-t-en-v1 | 14M        | 312        | [link](https://huggingface.co/jinaai/jina-embedding-t-en-v1) |
| jina-embedding-s-en-v1 | 35M        | 512        | [link](https://huggingface.co/jinaai/jina-embedding-s-en-v1) |
| jina-embedding-b-en-v1 | 110M       | 768        | [link](https://huggingface.co/jinaai/jina-embedding-b-en-v1) |
| jina-embedding-l-en-v1 | 330M       | 1024       | [link](https://huggingface.co/jinaai/jina-embedding-l-en-v1) |
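
These models can be loaded directly from Hugging Face. Below is a minimal sketch, assuming the checkpoints are compatible with the `sentence-transformers` library (as their model cards indicate); the example sentences are purely illustrative:

```python
# Hedged sketch: load one of the jina-embedding models listed above via
# sentence-transformers (assumed compatible per the Hugging Face model cards).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jinaai/jina-embedding-s-en-v1')

# Encode two sentences into 512-dimensional vectors (see the table above).
embeddings = model.encode([
    'How is the weather today?',
    'What is the current weather like today?',
])

# Cosine similarity between the two embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```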

## Benchmarks

<table>
<thead>
  <tr>
    <th>Model</th>
    <th>Task</th>
    <th>Metric</th>
    <th>Pretrained</th>
    <th>Finetuned</th>
    <th>Delta</th>
    <th>Run it!</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td rowspan="2">BERT</td>
    <td rowspan="2"><a href="https://www.kaggle.com/c/quora-question-pairs">Quora</a> Question Answering</td>
    <td>mRR</td>
    <td>0.835</td>
    <td>0.967</td>
    <td><span style="color:green">15.8%</span></td>
    <td rowspan="2"><p align=center><a href="https://colab.research.google.com/drive/1Ui3Gw3ZL785I7AuzlHv3I0-jTvFFxJ4_?usp=sharing"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a></p></td>
  </tr>
  <tr>
    <td>Recall</td>
    <td>0.915</td>
    <td>0.963</td>
    <td><span style="color:green">5.3%</span></td>
  </tr>
  <tr>
    <td rowspan="2">ResNet</td>
    <td rowspan="2">Visual similarity search on <a href="https://sites.google.com/view/totally-looks-like-dataset">TLL</a></td>
    <td>mAP</td>
    <td>0.110</td>
    <td>0.196</td>
    <td><span style="color:green">78.2%</span></td>
    <td rowspan="2"><p align=center><a href="https://colab.research.google.com/drive/1QuUTy3iVR-kTPljkwplKYaJ-NTCgPEc_?usp=sharing"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a></p></td>
  </tr>
  <tr>
    <td>Recall</td>
    <td>0.249</td>
    <td>0.460</td>
    <td><span style="color:green">84.7%</span></td>
  </tr>
  <tr>
    <td rowspan="2">CLIP</td>
    <td rowspan="2"><a href="https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html">Deep Fashion</a> text-to-image search</td>
    <td>mRR</td>
    <td>0.575</td>
    <td>0.676</td>
    <td><span style="color:green">17.4%</span></td>
    <td rowspan="2"><p align=center><a href="https://colab.research.google.com/drive/1yKnmy2Qotrh3OhgwWRsMWPFwOSAecBxg?usp=sharing"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a></p></td>
  </tr>
  <tr>
    <td>Recall</td>
    <td>0.473</td>
    <td>0.564</td>
    <td><span style="color:green">19.2%</span></td>
  </tr>
  <tr>
    <td rowspan="2">M-CLIP</td>
    <td rowspan="2"><a href="https://xmrec.github.io/">Cross market</a> product recommendation (German)</td>
    <td>mRR</td>
    <td>0.430</td>
    <td>0.648</td>
    <td><span style="color:green">50.7%</span></td>
    <td rowspan="2"><p align=center><a href="https://colab.research.google.com/drive/10Wldbu0Zugj7NmQyZwZzuorZ6SSAhtIo"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a></p></td>
  </tr>
  <tr>
    <td>Recall</td>
    <td>0.247</td>
    <td>0.340</td>
    <td><span style="color:green">37.7%</span></td>
  </tr>
  <tr>
    <td rowspan="2">PointNet++</td>
    <td rowspan="2"><a href="https://modelnet.cs.princeton.edu/">ModelNet40</a> 3D Mesh Search</td>
    <td>mRR</td>
    <td>0.791</td>
    <td>0.891</td>
    <td><span style="color:green">12.7%</span></td>
    <td rowspan="2"><p align=center><a href="https://colab.research.google.com/drive/1lIMDFkUVsWMshU-akJ_hwzBfJ37zLFzU?usp=sharing"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a></p></td>
  </tr>
  <tr>
    <td>Recall</td>
    <td>0.154</td>
    <td>0.242</td>
    <td><span style="color:green">57.1%</span></td>
  </tr>

</tbody>
</table>

<sub><sup>All metrics were evaluated at k=20 after training for 5 epochs using the Adam optimizer, with learning rates of 1e-4 for ResNet, 1e-7 for CLIP, 1e-5 for the BERT models, and 5e-4 for PointNet++.</sup></sub>
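
The Delta column is the relative improvement of the finetuned score over the pretrained one, i.e. `(finetuned - pretrained) / pretrained`. A quick check against the BERT mRR row:

```python
# The Delta column reports the relative improvement of the finetuned metric
# over the pretrained one: (finetuned - pretrained) / pretrained.
pretrained, finetuned = 0.835, 0.967  # BERT mRR row from the table above

delta = (finetuned - pretrained) / pretrained
print(f'{delta:.1%}')  # -> 15.8%
```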

<!-- start install-instruction -->

## Install

Make sure you have Python 3.8+ installed. Finetuner can be installed via `pip` by executing:

```bash
pip install -U finetuner
```

To submit fine-tuning jobs to Jina AI Cloud, install Finetuner with the `full` extra:

```bash
pip install "finetuner[full]"
```

<!-- end install-instruction -->

> ⚠️ Starting with version 0.5.0, Finetuner runs its computation on Jina AI Cloud. The last version that runs locally is `0.4.1`,
> which is still available for installation via `pip`. See [Finetuner git tags and releases](https://github.com/jina-ai/finetuner/releases).
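
After installing, a cloud run looks roughly like the sketch below. The backbone name, dataset reference, and hyperparameters here are illustrative assumptions; consult the [documentation](https://finetuner.jina.ai/) for the exact options:

```python
# Hedged sketch of the cloud fine-tuning workflow; the backbone, dataset, and
# hyperparameters are illustrative assumptions -- see the docs for details.
import finetuner

finetuner.login()  # authenticate against Jina AI Cloud

run = finetuner.fit(
    model='bert-base-en',        # assumed backbone name
    train_data='my-train-data',  # assumed dataset pushed to Jina AI Cloud
    epochs=5,
    learning_rate=1e-5,
)

print(run.status())  # poll the status of the cloud run
```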

<!-- start finetuner-articles -->
## Articles about Finetuner

Check out our published blog posts and tutorials to see Finetuner in action!

- [Fine-tuning with Low Budget and High Expectations](https://jina.ai/news/fine-tuning-with-low-budget-and-high-expectations/)
- [Hype and Hybrids: Search is more than Keywords and Vectors](https://jina.ai/news/hype-and-hybrids-multimodal-search-means-more-than-keywords-and-vectors-2/)
- [Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models](https://jina.ai/news/improving-search-quality-non-english-queries-fine-tuned-multilingual-clip-models/)
- [How Much Do We Get by Finetuning CLIP?](https://jina.ai/news/applying-jina-ai-finetuner-to-clip-less-data-smaller-models-higher-performance/)

<!-- end finetuner-articles -->

<!-- start citations -->
If you find Jina Embeddings useful in your research, please cite the following paper:

```text
@misc{günther2023jina,
      title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
      author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
      year={2023},
      eprint={2307.11224},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
<!-- end citations -->

<!-- start support-pitch -->
## Support

- Use [Discussions](https://github.com/jina-ai/finetuner/discussions) to talk about your use cases, questions, and
  support queries.
- Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
- Join our [Engineering All Hands](https://youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne) meet-up to discuss your use case and learn about Jina AI's new features.
    - **When?** The second Tuesday of every month
    - **Where?**
      Zoom ([see our public events calendar](https://calendar.google.com/calendar/embed?src=c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com&ctz=Europe%2FBerlin)/[.ical](https://calendar.google.com/calendar/ical/c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com/public/basic.ics))
      and [live stream on YouTube](https://youtube.com/c/jina-ai)
- Subscribe to the latest video tutorials on our [YouTube channel](https://youtube.com/c/jina-ai)

## Join Us

Finetuner is backed by [Jina AI](https://jina.ai) and licensed under [Apache-2.0](./LICENSE). 

[We are actively hiring](https://jobs.jina.ai) AI engineers and solution engineers to build the next generation of
open-source AI ecosystems.

<!-- end support-pitch -->
            
