oceandb 0.1.0

Name: oceandb
Version: 0.1.0
Summary: Ocean.
Author: Kye Gomez
Requires-Python: >=3.7,<4.0
Uploaded: 2023-07-25 22:51:50
Requirements: none recorded
CI: GitHub Actions (no Travis CI, no Coveralls)
            
# Ocean 🌊🐠

Ocean is a powerful, flexible, and easy-to-use library for cross-modal and modality-specific searching. It provides a unified interface for embedding and querying text, images, and audio. Ocean leverages the latest advancements in deep learning and the power of the ImageBind Embedding model to deliver unparalleled search accuracy and performance.

<p align="center">
  <a href="https://discord.gg/qUtxnK2NMf" target="_blank">
      <img src="https://img.shields.io/discord/1073293645303795742" alt="Discord">
  </a> |
  <a href="https://github.com/ocean-core/ocean/blob/master/LICENSE" target="_blank">
      <img src="https://img.shields.io/static/v1?label=license&message=Apache 2.0&color=white" alt="License">
  </a> |
  <a href="https://docs.tryocean.com/" target="_blank">
      Docs
  </a> |
  <a href="https://www.tryocean.com/" target="_blank">
      Homepage
  </a>
</p>

<p align="center">
  <a href="https://github.com/ocean-core/ocean/actions/workflows/ocean-integration-test.yml" target="_blank">
    <img src="https://github.com/ocean-core/ocean/actions/workflows/ocean-integration-test.yml/badge.svg?branch=main" alt="Integration Tests">
  </a> |
  <a href="https://github.com/ocean-core/ocean/actions/workflows/ocean-test.yml" target="_blank">
    <img src="https://github.com/ocean-core/ocean/actions/workflows/ocean-test.yml/badge.svg?branch=main" alt="Tests">
  </a>
</p>

```bash
pip install git+https://github.com/kyegomez/Ocean.git  # Python client
# for JavaScript: npm install oceandb
# for client-server mode: docker-compose up -d --build
```

The core API is only 4 functions (run our [πŸ’‘ Google Colab](https://colab.research.google.com/drive/1QEzFyqnoFxq7LUGyP1vzR4iLt9PpCDXv?usp=sharing) or [Replit template](https://replit.com/@swyx/BasicOceanStarter?v=1)):

```python
import oceandb
from oceandb.utils.embedding_functions import MultiModalEmbeddingFunction

# Set up Ocean in-memory for easy prototyping. Persistence can be added easily!
client = oceandb.Client()
print(client.heartbeat())

# Text embedding function (vision and audio work the same way)
text_embedding_function = MultiModalEmbeddingFunction(modality="text")
# vision_embedding_function = MultiModalEmbeddingFunction(modality="vision")
# audio_embedding_function = MultiModalEmbeddingFunction(modality="audio")

# Create a collection with the embedding function.
# get_collection, get_or_create_collection, and delete_collection are also available.
collection = client.create_collection(
    "all-my-documents", embedding_function=text_embedding_function
)

# Add a document
collection.add(
    documents=["This is a query about artificial intelligence"],
    ids=["doc1"],
)

# Query for the most similar document
query_text = "What is artificial intelligence?"
results = collection.query(query_texts=[query_text], n_results=1)

print(f"Query text: {query_text}")
print("Most similar document:", results["documents"][0][0])
```

## Features

- **Simple**: Fully-typed, fully-tested, fully-documented == happiness
- **Integrations**: [`πŸ¦œοΈπŸ”— LangChain`](https://blog.langchain.dev/langchain-ocean/) (python and js), [`πŸ¦™ LlamaIndex`](https://twitter.com/atroyn/status/1628557389762007040) and more soon
- **Dev, Test, Prod**: the same API that runs in your python notebook, scales to your cluster
- **Feature-rich**: Queries, filtering, density estimation and more
- **Free & Open Source**: Apache 2.0 Licensed

## Use case: ChatGPT for **\_\_**

For example, the `"Chat your data"` use case:

1. Add documents to your database. You can pass in your own embeddings, embedding function, or let Ocean embed them for you.
2. Query relevant documents with natural language.
3. Compose documents into the context window of an LLM like `GPT3` for additional summarization or analysis.
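The three steps above can be sketched end to end. The snippet below is a minimal, self-contained illustration: `toy_embed` (a bag-of-words counter) and the prompt template are hypothetical stand-ins for a real embedding model and LLM call, not part of the Ocean API.

```python
import math
from collections import Counter

def toy_embed(text: str) -> Counter:
    # Hypothetical stand-in for a real embedding model:
    # a bag-of-words vector keyed by lowercase tokens.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Add documents to the "database"
docs = {
    "doc1": "the golden gate bridge spans the san francisco bay",
    "doc2": "convolutional networks classify images of cats",
}
index = {doc_id: toy_embed(text) for doc_id, text in docs.items()}

# 2. Query relevant documents with natural language
query = "famous bridge in san francisco"
best_id = max(index, key=lambda doc_id: cosine(toy_embed(query), index[doc_id]))

# 3. Compose the retrieved document into an LLM prompt
prompt = f"Context: {docs[best_id]}\n\nQuestion: {query}\nAnswer:"
print(best_id)  # doc1 is the closest match
```

In a real deployment, step 1 and 2 are what Ocean's `add` and `query` do for you, and step 3 is a call to your LLM of choice.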

## Embeddings?

What are embeddings?

- [Read the guide from OpenAI](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)
- **Literal**: Embedding something turns it from image/text/audio into a list of numbers. πŸ–ΌοΈ or πŸ“„ => `[1.2, 2.1, ....]`. This process makes documents "understandable" to a machine learning model.
- **By analogy**: An embedding represents the essence of a document. This enables documents and queries with the same essence to be "near" each other and therefore easy to find.
- **Technical**: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
- **A small example**: If you search your photos for "famous bridge in San Francisco". By embedding this query and comparing it to the embeddings of your photos and their metadata - it should return photos of the Golden Gate Bridge.

Embeddings databases (also known as **vector databases**) store embeddings and allow you to search by nearest neighbors rather than by substrings like a traditional database. By default, Ocean uses [ImageBind](https://github.com/facebookresearch/ImageBind) to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
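To make the nearest-neighbor idea concrete, here is a tiny sketch of what a vector database does under the hood. The vectors are made-up illustrations in the `[1.2, 2.1, ....]` shape described above, not real model outputs:

```python
import math

# Made-up embedding vectors for three photos
embeddings = {
    "golden_gate_photo": [1.2, 2.1, 0.3],
    "cat_photo": [-0.8, 0.5, 1.9],
    "beach_photo": [1.0, 1.8, 0.1],
}

# Pretend this is the embedding of "famous bridge in San Francisco"
query = [1.1, 2.0, 0.2]

# Nearest neighbor by Euclidean distance: the closest vector wins,
# no substring matching involved.
nearest = min(embeddings, key=lambda k: math.dist(query, embeddings[k]))
print(nearest)  # golden_gate_photo
```

A substring search over filenames would find nothing here; the vector search still surfaces the right photo because the query and the photo land near each other in embedding space.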

## Roadmap πŸ—ΊοΈ

- [ ] Integrate the 3 new loss functions (conditional, cross-modal, and unimodal)
- [ ] Integrate ImageBind model to embed images, text, and audio as a native embedder
- [ ] Implement a method to choose query algorithm: `query([vectors], search_algorithm="knn")`
- [ ] Implement shapeless and polymorphic support
- [ ] Explore the integration of database worker agents that manage embedding, tokenization, and indexing (like a swarm)
- [ ] Implement an endless context length embedding model
- [ ] Enable running the ImageBind embedding model offline in a database repository
- [ ] Allow users to choose modality in the upsert method
- [ ] Deploy ImageBind as an API and increase context length
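The pluggable `search_algorithm` switch on the roadmap might look something like the following dispatch sketch. This is a speculative illustration of the planned interface, not the current Ocean API; `knn_search` and `SEARCH_ALGORITHMS` are hypothetical names.

```python
import math

def knn_search(query_vector, vectors, k=1):
    # Brute-force k-nearest-neighbors by Euclidean distance.
    scored = sorted(vectors.items(), key=lambda kv: math.dist(query_vector, kv[1]))
    return [doc_id for doc_id, _ in scored[:k]]

# Registry of available algorithms; more (e.g. ANN indexes) could slot in here.
SEARCH_ALGORITHMS = {"knn": knn_search}

def query(query_vector, vectors, search_algorithm="knn", k=1):
    # Dispatch to the chosen algorithm, mirroring the roadmap's
    # query([vectors], search_algorithm="knn") shape.
    try:
        algorithm = SEARCH_ALGORITHMS[search_algorithm]
    except KeyError:
        raise ValueError(f"unknown search algorithm: {search_algorithm!r}")
    return algorithm(query_vector, vectors, k=k)

vectors = {"doc1": [0.0, 1.0], "doc2": [5.0, 5.0]}
print(query([0.1, 0.9], vectors, search_algorithm="knn"))  # ['doc1']
```

A registry like this keeps the public `query` signature stable while letting new algorithms be added without touching call sites.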

## Get involved at Agora

Ocean is a rapidly developing project. We welcome PR contributors and ideas for how to improve the project.

- [Join the conversation on Discord](https://discord.gg/sbYvXgqc)
- [Review the roadmap and contribute your ideas](https://docs.tryocean.com/roadmap)
- [Grab an issue and open a PR](https://github.com/ocean-core/ocean/issues)

## License

[Apache 2.0](./LICENSE)

            
