docarray

Name: docarray
Version: 0.40.0
Home page: https://docs.docarray.org/
Summary: The data structure for multimodal data
Upload time: 2023-12-22 12:12:25
Author: DocArray
Requires Python: >=3.8,<4.0
License: Apache 2.0
Keywords: docarray, deep-learning, data-structures, cross-modal, multi-modal, unstructured-data, nested-data, neural-search
<p align="center">
<img src="https://github.com/docarray/docarray/blob/main/docs/assets/logo-dark.svg?raw=true" alt="DocArray logo: The data structure for unstructured data" width="150px">
<br>
<b>The data structure for multimodal data</b>
</p>

<p align=center>
<a href="https://pypi.org/project/docarray/"><img src="https://img.shields.io/pypi/v/docarray?style=flat-square&amp;label=Release" alt="PyPI"></a>
<a href="https://bestpractices.coreinfrastructure.org/projects/6554"><img src="https://bestpractices.coreinfrastructure.org/projects/6554/badge"></a>
<a href="https://codecov.io/gh/docarray/docarray"><img alt="Codecov branch" src="https://img.shields.io/codecov/c/github/docarray/docarray/main?&logo=Codecov&logoColor=white&style=flat-square"></a>
<a href="https://pypistats.org/packages/docarray"><img alt="PyPI - Downloads from official pypistats" src="https://img.shields.io/pypi/dm/docarray?style=flat-square"></a>
<a href="https://discord.gg/WaMp6PVPgR"><img src="https://dcbadge.vercel.app/api/server/WaMp6PVPgR?theme=default-inverted&style=flat-square"></a>
</p>

> **Note**
> The README you're currently viewing is for DocArray>0.30, which introduces some significant changes from DocArray 0.21. If you wish to continue using the older DocArray <=0.21, ensure you install it via `pip install docarray==0.21`. Refer to its [codebase](https://github.com/docarray/docarray/tree/v0.21.0), [documentation](https://docarray.jina.ai), and [its hot-fixes branch](https://github.com/docarray/docarray/tree/docarray-v1-fixes) for more information.


DocArray is a Python library expertly crafted for the [representation](#represent), [transmission](#send), [storage](#store), and [retrieval](#retrieve) of multimodal data. Tailored for the development of multimodal AI applications, its design guarantees seamless integration with the extensive Python and machine learning ecosystems. As of January 2022, DocArray is openly distributed under the [Apache License 2.0](https://github.com/docarray/docarray/blob/main/LICENSE) and currently enjoys the status of a sandbox project within the [LF AI & Data Foundation](https://lfaidata.foundation/).



- :fire: Offers native support for **[NumPy](https://github.com/numpy/numpy)**, **[PyTorch](https://github.com/pytorch/pytorch)**, **[TensorFlow](https://github.com/tensorflow/tensorflow)**, and **[JAX](https://github.com/google/jax)**, catering specifically to **model training scenarios**.
- :zap: Based on **[Pydantic](https://github.com/pydantic/pydantic)**, and instantly compatible with web and microservice frameworks like **[FastAPI](https://github.com/tiangolo/fastapi/)** and **[Jina](https://github.com/jina-ai/jina/)**.
- :package: Provides support for vector databases such as **[Weaviate](https://weaviate.io/), [Qdrant](https://qdrant.tech/), [ElasticSearch](https://www.elastic.co/de/elasticsearch/), [Redis](https://redis.io/)**, and **[HNSWLib](https://github.com/nmslib/hnswlib)**.
- :chains: Allows data transmission as JSON over **HTTP** or as **[Protobuf](https://protobuf.dev/)** over **[gRPC](https://grpc.io/)**.

## Installation

To install DocArray from the CLI, run the following command:

```shell
pip install -U docarray
```
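
Optional features (such as the vector database backends shown below) are shipped as extras. As a hedged example (extra names can differ between versions, so check the documentation for your release):

```shell
pip install -U "docarray[hnswlib]"  # e.g. the HNSWLib-backed Document Index
pip install -U "docarray[full]"     # most optional dependencies at once
```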

> **Note**
> To use DocArray <=0.21, make sure you install via `pip install docarray==0.21` and check out its [codebase](https://github.com/docarray/docarray/tree/v0.21.0) and [docs](https://docarray.jina.ai) and [its hot-fixes branch](https://github.com/docarray/docarray/tree/docarray-v1-fixes).

## Get Started
New to DocArray? Depending on your use case and background, there are multiple ways to learn about DocArray:
 
- [Coming from pure PyTorch or TensorFlow](#coming-from-pytorch)
- [Coming from Pydantic](#coming-from-pydantic)
- [Coming from FastAPI](#coming-from-fastapi)
- [Coming from Jina](#coming-from-jina)
- [Coming from a vector database](#coming-from-a-vector-database)
- [Coming from Langchain](#coming-from-langchain)


## Represent

DocArray empowers you to **represent your data** in a manner that is inherently attuned to machine learning.

This is particularly beneficial for various scenarios:

- :running: You are **training a model**: You're dealing with tensors of varying shapes and sizes, each signifying different elements. You desire a method to logically organize them.
- :cloud: You are **serving a model**: Let's say through FastAPI, and you wish to define your API endpoints precisely.
- :card_index_dividers: You are **parsing data**: Perhaps for future deployment in your machine learning or data science projects.

> :bulb: **Familiar with Pydantic?** You'll be pleased to learn
> that DocArray is not only constructed atop Pydantic but also maintains complete compatibility with it!
> Furthermore, we have a [specific section](#coming-from-pydantic) dedicated to your needs!

In essence, DocArray facilitates data representation in a way that mirrors Python dataclasses, with machine learning being an integral component:


```python
from docarray import BaseDoc
from docarray.typing import TorchTensor, ImageUrl
import torch


# Define your data model
class MyDocument(BaseDoc):
    description: str
    image_url: ImageUrl  # could also be VideoUrl, AudioUrl, etc.
    image_tensor: TorchTensor[1704, 2272, 3]  # you can express tensor shapes!


# Stack multiple documents in a Document Vector
from docarray import DocVec

vec = DocVec[MyDocument](
    [
        MyDocument(
            description="A cat",
            image_url="https://example.com/cat.jpg",
            image_tensor=torch.rand(1704, 2272, 3),
        ),
    ]
    * 10
)
print(vec.image_tensor.shape)  # (10, 1704, 2272, 3)
```

<details markdown="1">
  <summary>Click for more details</summary>

Let's take a closer look at how you can represent your data with DocArray:

```python
from docarray import BaseDoc
from docarray.typing import TorchTensor, ImageUrl
from typing import Optional
import torch


# Define your data model
class MyDocument(BaseDoc):
    description: str
    image_url: ImageUrl  # could also be VideoUrl, AudioUrl, etc.
    image_tensor: Optional[
        TorchTensor[1704, 2272, 3]
    ] = None  # could also be NdArray or TensorflowTensor
    embedding: Optional[TorchTensor] = None
```

So not only can you define the types of your data, you can even **specify the shape of your tensors!**

```python
# Create a document
doc = MyDocument(
    description="This is a photo of a mountain",
    image_url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
)

# Load image tensor from URL
doc.image_tensor = doc.image_url.load()


# Compute embedding with any model of your choice
def clip_image_encoder(image_tensor: TorchTensor) -> TorchTensor:  # dummy function
    return torch.rand(512)


doc.embedding = clip_image_encoder(doc.image_tensor)

print(doc.embedding.shape)  # torch.Size([512])
```

### Compose nested Documents

Of course, you can compose Documents into a nested structure:

```python
from docarray import BaseDoc
from docarray.documents import ImageDoc, TextDoc
import numpy as np


class MultiModalDocument(BaseDoc):
    image_doc: ImageDoc
    text_doc: TextDoc


doc = MultiModalDocument(
    image_doc=ImageDoc(tensor=np.zeros((3, 224, 224))), text_doc=TextDoc(text='hi!')
)
```

You rarely work with a single data point at a time, especially in machine learning applications. That's why you can easily collect multiple `Documents`:

### Collect multiple `Documents`

When building or interacting with an ML system, usually you want to process multiple Documents (data points) at once.

DocArray offers two data structures for this:

- **`DocVec`**: A vector of `Documents`. All tensors in the documents are stacked into a single tensor. **Perfect for batch processing and use inside of ML models**.
- **`DocList`**: A list of `Documents`. All tensors in the documents are kept as-is. **Perfect for streaming, re-ranking, and shuffling of data**.

Let's take a look at them, starting with `DocVec`:

```python
from docarray import DocVec, BaseDoc
from docarray.typing import AnyTensor, ImageUrl
import numpy as np


class Image(BaseDoc):
    url: ImageUrl
    tensor: AnyTensor  # this allows torch, numpy, and TensorFlow tensors


vec = DocVec[Image](  # the DocVec is parametrized by your personal schema!
    [
        Image(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
        )
        for _ in range(100)
    ]
)
``` 

In the code snippet above, `DocVec` is **parametrized by the type of document** you want to use with it: `DocVec[Image]`.

This may look weird at first, but we're confident that you'll get used to it quickly!
Besides, it lets us do some cool things, like having **bulk access to the fields that you defined** in your document:

```python
tensor = vec.tensor  # gets all the tensors in the DocVec
print(tensor.shape)  # which are stacked up into a single tensor!
print(vec.url)  # you can bulk access any other field, too
```

The second data structure, `DocList`, works in a similar way:

```python
from docarray import DocList

dl = DocList[Image](  # the DocList is parametrized by your personal schema!
    [
        Image(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
        )
        for _ in range(100)
    ]
)
```

You can still bulk access the fields of your document:

```python
tensors = dl.tensor  # gets all the tensors in the DocList
print(type(tensors))  # as a list of tensors
print(dl.url)  # you can bulk access any other field, too
```

And you can insert, remove, and append documents to your `DocList`:

```python
# append
dl.append(
    Image(
        url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
        tensor=np.zeros((3, 224, 224)),
    )
)
# delete
del dl[0]
# insert
dl.insert(
    0,
    Image(
        url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
        tensor=np.zeros((3, 224, 224)),
    ),
)
```

And you can seamlessly switch between `DocVec` and `DocList`:

```python
vec_2 = dl.to_doc_vec()
assert isinstance(vec_2, DocVec)

dl_2 = vec_2.to_doc_list()
assert isinstance(dl_2, DocList)
```

</details>

## Send

DocArray facilitates the **transmission of your data** in a manner inherently compatible with machine learning.

This includes native support for **Protobuf and gRPC**, along with **HTTP** and serialization to JSON, JSONSchema, Base64, and Bytes.

This feature proves beneficial for several scenarios:

- :cloud: You are **serving a model**, perhaps through frameworks like **[Jina](https://github.com/jina-ai/jina/)** or **[FastAPI](https://github.com/tiangolo/fastapi/)**
- :spider_web: You are **distributing your model** across multiple machines and need an efficient means of transmitting your data between nodes
- :gear: You are architecting a **microservice** environment and require a method for data transmission between microservices

> :bulb: **Are you familiar with FastAPI?** You'll be delighted to learn
> that DocArray maintains full compatibility with FastAPI!
> Plus, we have a [dedicated section](#coming-from-fastapi) specifically for you!

When it comes to data transmission, serialization is a crucial step. Let's delve into how DocArray streamlines this process:


```python
from docarray import BaseDoc
from docarray.typing import ImageTorchTensor
import torch


# model your data
class MyDocument(BaseDoc):
    description: str
    image: ImageTorchTensor[3, 224, 224]


# create a Document
doc = MyDocument(
    description="This is a description",
    image=torch.zeros((3, 224, 224)),
)

# serialize it!
proto = doc.to_protobuf()
bytes_ = doc.to_bytes()
json = doc.json()

# deserialize it!
doc_2 = MyDocument.from_protobuf(proto)
doc_4 = MyDocument.from_bytes(bytes_)
doc_5 = MyDocument.parse_raw(json)
```
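
Collections serialize in the same spirit. Below is a rough sketch, assuming `DocList`'s bytes/Base64 round-trip mirrors the single-document case above:

```python
from docarray import DocList

docs = DocList[MyDocument](
    [
        MyDocument(description="This is a description", image=torch.zeros((3, 224, 224)))
        for _ in range(4)
    ]
)

# serialize the whole collection ...
docs_bytes = docs.to_bytes()
docs_b64 = docs.to_base64()

# ... and bring it back
docs_2 = DocList[MyDocument].from_bytes(docs_bytes)
docs_3 = DocList[MyDocument].from_base64(docs_b64)

# the JSON Schema of the document model itself is available via Pydantic
print(MyDocument.schema_json())
```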

Of course, serialization is not all you need. So check out how DocArray integrates with **[Jina](https://github.com/jina-ai/jina/)** and **[FastAPI](https://github.com/tiangolo/fastapi/)**.

## Store

After modeling and possibly distributing your data, you'll typically want to **store it** somewhere. That's where DocArray steps in!

**Document Stores** provide a seamless way to, as the name suggests, store your Documents. Be it locally or remotely, you can do it all through the same user interface:

- :cd: **On disk**, as a file in your local filesystem
- :bucket: On **[AWS S3](https://aws.amazon.com/de/s3/)**
- :cloud: On **[Jina AI Cloud](https://cloud.jina.ai/)**

The Document Store interface lets you push and pull Documents to and from multiple data sources, all with the same user interface.

For example, let's see how that works with on-disk storage:

```python
from docarray import BaseDoc, DocList


class SimpleDoc(BaseDoc):
    text: str


docs = DocList[SimpleDoc]([SimpleDoc(text=f'doc {i}') for i in range(8)])
docs.push('file://simple_docs')

docs_pull = DocList[SimpleDoc].pull('file://simple_docs')
```
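
The same `push`/`pull` calls work against remote locations by switching the URL scheme. Here is a hedged sketch, assuming the S3 dependencies are installed (e.g. via the `docarray[aws]` extra) and `my-bucket` is a bucket you can write to:

```python
# requires valid AWS credentials in your environment
docs.push('s3://my-bucket/simple_docs')

docs_from_s3 = DocList[SimpleDoc].pull('s3://my-bucket/simple_docs')
```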

## Retrieve

**Document Indexes** let you index your Documents in a **vector database** for efficient similarity-based retrieval.

This is useful for:

- :left_speech_bubble: Augmenting **LLMs and Chatbots** with domain knowledge ([Retrieval Augmented Generation](https://arxiv.org/abs/2005.11401))
- :mag: **Neural search** applications
- :bulb: **Recommender systems**

Currently, Document Indexes support **[Weaviate](https://weaviate.io/)**, **[Qdrant](https://qdrant.tech/)**, **[ElasticSearch](https://www.elastic.co/)**,  **[Redis](https://redis.io/)**, and **[HNSWLib](https://github.com/nmslib/hnswlib)**, with more to come!

The Document Index interface lets you index and retrieve Documents from multiple vector databases, all with the same user interface.

It supports ANN vector search, text search, filtering, and hybrid search.

```python
from docarray import DocList, BaseDoc
from docarray.index import HnswDocumentIndex
import numpy as np

from docarray.typing import ImageUrl, ImageTensor, NdArray


class ImageDoc(BaseDoc):
    url: ImageUrl
    tensor: ImageTensor
    embedding: NdArray[128]


# create some data
dl = DocList[ImageDoc](
    [
        ImageDoc(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
            embedding=np.random.random((128,)),
        )
        for _ in range(100)
    ]
)

# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index')


# index your data
index.index(dl)

# find similar Documents
query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')
```
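
`HnswDocumentIndex` persists its data in `work_dir`, so the index can be reopened later. A minimal sketch, assuming the directory from the snippet above still exists:

```python
# reopening with the same work_dir loads the previously indexed Documents
index_2 = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index')
print(index_2.num_docs())  # 100

results, scores = index_2.find(dl[0], limit=10, search_field='embedding')
```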

---

## Learn DocArray

Depending on your background and use case, there are different ways for you to understand DocArray.

### Coming from DocArray <=0.21

<details markdown="1">
  <summary>Click to expand</summary>

If you are using DocArray version 0.21 or lower, you will be familiar with its [dataclass API](https://docarray.jina.ai/fundamentals/dataclass/).

_DocArray >=0.30 is that idea, taken seriously._ Every document is created through a dataclass-like interface,
courtesy of [Pydantic](https://pydantic-docs.helpmanual.io/usage/models/).

This gives the following advantages:
- **Flexibility:** No need to conform to a fixed set of fields -- your data defines the schema
- **Multimodality:** At their core, documents are just dictionaries. This makes it easy to create and send them from any language, not just Python.

You may also be familiar with our old Document Stores for vector DB integration.
They are now called **Document Indexes** and offer the following improvements (see [here](#store) for the new API):

- **Hybrid search:** You can now combine vector search with text search, and even filter by arbitrary fields
- **Production-ready:** The new Document Indexes are a much thinner wrapper around the various vector DB libraries, making them more robust and easier to maintain
- **Increased flexibility:** We strive to support any configuration or setting that you could perform through the DB's first-party client

For now, Document Indexes support **[Weaviate](https://weaviate.io/)**, **[Qdrant](https://qdrant.tech/)**, **[ElasticSearch](https://www.elastic.co/)**, **[Redis](https://redis.io/)**, and **[HNSWLib](https://github.com/nmslib/hnswlib)**, as well as in-memory exact nearest neighbour search, with more to come.

</details>

### Coming from Pydantic

<details markdown="1">
  <summary>Click to expand</summary>

If you come from Pydantic, you can see DocArray documents as juiced up Pydantic models, and DocArray as a collection of goodies around them.

More specifically, we set out to **make Pydantic fit for the ML world** - not by replacing it, but by building on top of it!

This means you get the following benefits:

- **ML-focused types**: Tensor, TorchTensor, Embedding, ..., including **tensor shape validation**
- Full compatibility with **FastAPI**
- **DocList** and **DocVec** generalize the idea of a model to a _sequence_ or _batch_ of models. Perfect for **use in ML models** and other batch processing tasks.
- **Types that are alive**: ImageUrl can `.load()` a URL to image tensor, TextUrl can load and tokenize text documents, etc.
- Cloud-ready: Serialization to **Protobuf** for use with microservices and **gRPC**
- **Pre-built multimodal documents** for different data modalities: Image, Text, 3DMesh, Video, Audio and more. Note that all of these are valid Pydantic models!
- **Document Stores** and **Document Indexes** let you store your data and retrieve it using **vector search**

The most obvious advantage here is **first-class support for ML centric data**, such as `{Torch, TF, ...}Tensor`, `Embedding`, etc.

This includes handy features such as validating the shape of a tensor:

```python
from docarray import BaseDoc
from docarray.typing import TorchTensor
import torch


class MyDoc(BaseDoc):
    tensor: TorchTensor[3, 224, 224]


doc = MyDoc(tensor=torch.zeros(3, 224, 224))  # works
doc = MyDoc(tensor=torch.zeros(224, 224, 3))  # works by reshaping

try:
    doc = MyDoc(tensor=torch.zeros(224))  # fails validation
except Exception as e:
    print(e)
    # tensor
    # Cannot reshape tensor of shape (224,) to shape (3, 224, 224) (type=value_error)


class Image(BaseDoc):
    tensor: TorchTensor[3, 'x', 'x']


Image(tensor=torch.zeros(3, 224, 224))  # works

try:
    Image(
        tensor=torch.zeros(3, 64, 128)
    )  # fails validation because second dimension does not match third
except Exception as e:
    print(e)


try:
    Image(
        tensor=torch.zeros(4, 224, 224)
    )  # fails validation because of the first dimension
except Exception as e:
    print(e)
    # Tensor shape mismatch. Expected (3, 'x', 'x'), got (4, 224, 224) (type=value_error)

try:
    Image(
        tensor=torch.zeros(3, 64)
    )  # fails validation because it does not have enough dimensions
except Exception as e:
    print(e)
    # Tensor shape mismatch. Expected (3, 'x', 'x'), got (3, 64) (type=value_error)
```

</details>

### Coming from PyTorch

<details markdown="1">
  <summary>Click to expand</summary>

If you come from PyTorch, you can see DocArray mainly as a way of _organizing your data as it flows through your model_.

It offers you several advantages:

- Express **tensor shapes in type hints**
- **Group tensors that belong to the same object**, e.g. an audio track and an image
- **Go directly to deployment**, by re-using your data model as a [FastAPI](https://fastapi.tiangolo.com/) or [Jina](https://github.com/jina-ai/jina) API schema
- Connect model components between **microservices**, using Protobuf and gRPC

DocArray can be used directly inside ML models to handle and represent multimodal data.
This allows you to reason about your data using DocArray's abstractions deep inside of `nn.Module`,
and provides a FastAPI-compatible schema that eases the transition between model training and model serving.

To see the effect of this, let's first observe a vanilla PyTorch implementation of a tri-modal ML model:

```python
import torch
from torch import nn


def encoder(x):
    return torch.rand(512)


class MyMultiModalModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.audio_encoder = encoder  # assign the (dummy) encoder function, not its output
        self.image_encoder = encoder
        self.text_encoder = encoder

    def forward(self, text_1, text_2, image_1, image_2, audio_1, audio_2):
        embedding_text_1 = self.text_encoder(text_1)
        embedding_text_2 = self.text_encoder(text_2)

        embedding_image_1 = self.image_encoder(image_1)
        embedding_image_2 = self.image_encoder(image_2)

        embedding_audio_1 = self.audio_encoder(audio_1)
        embedding_audio_2 = self.audio_encoder(audio_2)

        return (
            embedding_text_1,
            embedding_text_2,
            embedding_image_1,
            embedding_image_2,
            embedding_audio_1,
            embedding_audio_2,
        )
```

Not very easy on the eyes if you ask us. And even worse, if you need to add one more modality you have to touch every part of your code base, changing the `forward()` return type and making a whole lot of changes downstream from that.

So, now let's see what the same code looks like with DocArray:

```python
from docarray import DocList, BaseDoc
from docarray.documents import ImageDoc, TextDoc, AudioDoc
from docarray.typing import TorchTensor
from torch import nn
import torch


def encoder(x):
    return torch.rand(512)


class Podcast(BaseDoc):
    text: TextDoc
    image: ImageDoc
    audio: AudioDoc


class PairPodcast(BaseDoc):
    left: Podcast
    right: Podcast


class MyPodcastModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.audio_encoder = encoder  # assign the (dummy) encoder function, not its output
        self.image_encoder = encoder
        self.text_encoder = encoder

    def forward_podcast(self, docs: DocList[Podcast]) -> DocList[Podcast]:
        docs.audio.embedding = self.audio_encoder(docs.audio.tensor)
        docs.text.embedding = self.text_encoder(docs.text.tensor)
        docs.image.embedding = self.image_encoder(docs.image.tensor)

        return docs

    def forward(self, docs: DocList[PairPodcast]) -> DocList[PairPodcast]:
        docs.left = self.forward_podcast(docs.left)
        docs.right = self.forward_podcast(docs.right)

        return docs
```

Looks much better, doesn't it?
You instantly win in code readability and maintainability. And for the same price you can turn your PyTorch model into a FastAPI app and reuse your Document
schema definition (see [below](#coming-from-fastapi)). Everything is handled in a pythonic manner by relying on type hints.

</details>


### Coming from TensorFlow

<details markdown="1">
  <summary>Click to expand</summary>

Like the [PyTorch approach](#coming-from-pytorch), you can also use DocArray with TensorFlow to handle and represent multimodal data inside your ML model.

To use DocArray with TensorFlow, first install both as follows:

```shell
pip install tensorflow==2.12.0
pip install protobuf==3.19.0
```

Compared to using DocArray with PyTorch, there is one main difference when using it with TensorFlow:
While DocArray's `TorchTensor` is a subclass of `torch.Tensor`, this is not the case for the `TensorFlowTensor`: Due to some technical limitations of `tf.Tensor`, DocArray's `TensorFlowTensor` is not a subclass of `tf.Tensor` but rather stores a `tf.Tensor` in its `.tensor` attribute. 

How does this affect you? Whenever you want to access the tensor data to, let's say, do operations with it or hand it to your ML model, instead of handing over your `TensorFlowTensor` instance, you need to access its `.tensor` attribute.

This would look like the following:

```python
from typing import Optional

from docarray import DocList, BaseDoc
from docarray.typing import AudioTensorFlowTensor

import tensorflow as tf


class Podcast(BaseDoc):
    audio_tensor: Optional[AudioTensorFlowTensor] = None
    embedding: Optional[AudioTensorFlowTensor] = None


class MyPodcastModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.audio_encoder = AudioEncoder()  # placeholder: your own audio encoder layer/model

    def call(self, inputs: DocList[Podcast]) -> DocList[Podcast]:
        inputs.embedding = self.audio_encoder(
            inputs.audio_tensor.tensor
        )  # access the data via audio_tensor's .tensor attribute
        return inputs
```

</details>

### Coming from FastAPI

<details markdown="1">
  <summary>Click to expand</summary>

Documents are Pydantic Models (with a twist), and as such they are fully compatible with FastAPI!

But why should you use them, and not the Pydantic models you already know and love?
Good question!

- Because of the ML-first features, types and validations, [here](#coming-from-pydantic)
- Because DocArray can act as an [ORM for vector databases](#coming-from-a-vector-database), similar to what SQLModel does for SQL databases

And to seal the deal, let us show you how easily documents slot into your FastAPI app:

```python
import numpy as np
from fastapi import FastAPI
from httpx import AsyncClient
from docarray.base_doc import DocArrayResponse
from docarray import BaseDoc
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor


class InputDoc(BaseDoc):
    img: ImageDoc
    text: str


class OutputDoc(BaseDoc):
    embedding_clip: NdArray
    embedding_bert: NdArray


app = FastAPI()


def model_img(img: ImageTensor) -> NdArray:
    return np.zeros((100, 1))


def model_text(text: str) -> NdArray:
    return np.zeros((100, 1))


@app.post("/embed/", response_model=OutputDoc, response_class=DocArrayResponse)
async def create_item(doc: InputDoc) -> OutputDoc:
    doc = OutputDoc(
        embedding_clip=model_img(doc.img.tensor), embedding_bert=model_text(doc.text)
    )
    return doc


input_doc = InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))

# run this inside an async function or async test
async with AsyncClient(app=app, base_url="http://test") as ac:
    response = await ac.post("/embed/", data=input_doc.json())

Just like a vanilla Pydantic model!

</details>

### Coming from Jina

<details markdown="1">
  <summary>Click to expand</summary>

Jina has adopted DocArray as its library for representing and serializing Documents.

Jina lets you serve and scale models and services built with DocArray, making full use of DocArray's serialization capabilities.

```python
import numpy as np
from jina import Deployment, Executor, requests
from docarray import BaseDoc, DocList
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor


class InputDoc(BaseDoc):
    img: ImageDoc
    text: str


class OutputDoc(BaseDoc):
    embedding_clip: NdArray
    embedding_bert: NdArray


def model_img(img: ImageTensor) -> NdArray:
    return np.zeros((100, 1))


def model_text(text: str) -> NdArray:
    return np.zeros((100, 1))


class MyEmbeddingExecutor(Executor):
    @requests(on='/embed')
    def encode(self, docs: DocList[InputDoc], **kwargs) -> DocList[OutputDoc]:
        ret = DocList[OutputDoc]()
        for doc in docs:
            output = OutputDoc(
                embedding_clip=model_img(doc.img.tensor),
                embedding_bert=model_text(doc.text),
            )
            ret.append(output)
        return ret


with Deployment(
    protocols=['grpc', 'http'], ports=[12345, 12346], uses=MyEmbeddingExecutor
) as dep:
    resp = dep.post(
        on='/embed',
        inputs=DocList[InputDoc](
            [InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))]
        ),
        return_type=DocList[OutputDoc],
    )
    print(resp)
```

</details>

### Coming from a vector database

<details markdown="1">
  <summary>Click to expand</summary>

If you came across DocArray as a universal vector database client, you can best think of it as **a new kind of ORM for vector databases**.
DocArray's job is to take multimodal, nested and domain-specific data and to map it to a vector database,
store it there, and thus make it searchable:

```python
from docarray import DocList, BaseDoc
from docarray.index import HnswDocumentIndex
import numpy as np

from docarray.typing import ImageUrl, ImageTensor, NdArray


class ImageDoc(BaseDoc):
    url: ImageUrl
    tensor: ImageTensor
    embedding: NdArray[128]


# create some data
dl = DocList[ImageDoc](
    [
        ImageDoc(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
            embedding=np.random.random((128,)),
        )
        for _ in range(100)
    ]
)

# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index2')


# index your data
index.index(dl)

# find similar Documents
query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')
```

Currently, DocArray supports the following vector databases:

- [Weaviate](https://www.weaviate.io/)
- [Qdrant](https://qdrant.tech/)
- [Elasticsearch](https://www.elastic.co/elasticsearch/) v8 and v7
- [Redis](https://redis.io/)
- [Milvus](https://milvus.io)
- `InMemoryExactNNIndex` as a local alternative with exact kNN search
- [HNSWlib](https://github.com/nmslib/hnswlib) as a local-first ANN alternative

An integration of [OpenSearch](https://opensearch.org/) is currently in progress.

Of course this is only one of the things that DocArray can do, so we encourage you to check out the rest of this readme!

</details>


### Coming from Langchain

<details markdown="1">
  <summary>Click to expand</summary>

With DocArray, you can connect external data to LLMs through Langchain. DocArray gives you the freedom to establish 
flexible document schemas and choose from different backends for document storage.
After creating your document index, you can connect it to your Langchain app using [DocArrayRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/docarray_retriever).

Install Langchain via:
```shell
pip install langchain
```

1. Define a schema and create documents:
```python
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Define a document schema
class MovieDoc(BaseDoc):
    title: str
    description: str
    year: int
    embedding: NdArray[1536]


movies = [
    {"title": "#1 title", "description": "#1 description", "year": 1999},
    {"title": "#2 title", "description": "#2 description", "year": 2001},
]

# Embed `description` and create documents
docs = DocList[MovieDoc](
    MovieDoc(embedding=embeddings.embed_query(movie["description"]), **movie)
    for movie in movies
)
```

2. Initialize a document index using any supported backend:
```python
from docarray.index import (
    InMemoryExactNNIndex,
    HnswDocumentIndex,
    WeaviateDocumentIndex,
    QdrantDocumentIndex,
    ElasticDocIndex,
    RedisDocumentIndex,
)

# Select a suitable backend and initialize it with data
db = InMemoryExactNNIndex[MovieDoc](docs)
```

3. Finally, initialize a retriever and integrate it into your chain!
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.retrievers import DocArrayRetriever


# Create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="embedding",
    content_field="description",
)

# Use the retriever in your chain
model = ChatOpenAI()
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
```

Alternatively, you can use Langchain's built-in DocArray vector stores: [DocArrayInMemorySearch](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/docarray_in_memory) and [DocArrayHnswSearch](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/docarray_hnsw).
Both are user-friendly and best suited to small to medium-sized datasets.
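
For example, `DocArrayInMemorySearch` can be used like any other Langchain vector store. A small sketch, assuming an OpenAI API key is configured in your environment:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

texts = [
    "DocArray is a library for multimodal data",
    "Langchain connects LLMs to external data",
]
db = DocArrayInMemorySearch.from_texts(texts, OpenAIEmbeddings())

docs = db.similarity_search("What is DocArray?", k=1)
print(docs[0].page_content)
```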

</details>


## See also

- [Documentation](https://docs.docarray.org)
- [DocArray<=0.21 documentation](https://docarray.jina.ai/)
- [Join our Discord server](https://discord.gg/WaMp6PVPgR)
- [Donation to Linux Foundation AI&Data blog post](https://jina.ai/news/donate-docarray-lf-for-inclusive-standard-multimodal-data-model/)
- [Roadmap](https://github.com/docarray/docarray/issues/1714)

> DocArray is a trademark of LF AI Projects, LLC
> 

            

Raw data

            {
    "_id": null,
    "home_page": "https://docs.docarray.org/",
    "name": "docarray",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.8,<4.0",
    "maintainer_email": "",
    "keywords": "docarray,deep-learning,data-structures cross-modal multi-modal, unstructured-data, nested-data,neural-search",
    "author": "DocArray",
    "author_email": "",
    "download_url": "https://files.pythonhosted.org/packages/ff/20/4ce3b1324dcadd62e10a1122f16ddc87b837ae9402791f40859e1ff98b87/docarray-0.40.0.tar.gz",
    "platform": null,
    "description": "<p align=\"center\">\n<img src=\"https://github.com/docarray/docarray/blob/main/docs/assets/logo-dark.svg?raw=true\" alt=\"DocArray logo: The data structure for unstructured data\" width=\"150px\">\n<br>\n<b>The data structure for multimodal data</b>\n</p>\n\n<p align=center>\n<a href=\"https://pypi.org/project/docarray/\"><img src=\"https://img.shields.io/pypi/v/docarray?style=flat-square&amp;label=Release\" alt=\"PyPI\"></a>\n<a href=\"https://bestpractices.coreinfrastructure.org/projects/6554\"><img src=\"https://bestpractices.coreinfrastructure.org/projects/6554/badge\"></a>\n<a href=\"https://codecov.io/gh/docarray/docarray\"><img alt=\"Codecov branch\" src=\"https://img.shields.io/codecov/c/github/docarray/docarray/main?&logo=Codecov&logoColor=white&style=flat-square\"></a>\n<a href=\"https://pypistats.org/packages/docarray\"><img alt=\"PyPI - Downloads from official pypistats\" src=\"https://img.shields.io/pypi/dm/docarray?style=flat-square\"></a>\n<a href=\"https://discord.gg/WaMp6PVPgR\"><img src=\"https://dcbadge.vercel.app/api/server/WaMp6PVPgR?theme=default-inverted&style=flat-square\"></a>\n</p>\n\n> **Note**\n> The README you're currently viewing is for DocArray>0.30, which introduces some significant changes from DocArray 0.21. If you wish to continue using the older DocArray <=0.21, ensure you install it via `pip install docarray==0.21`. Refer to its [codebase](https://github.com/docarray/docarray/tree/v0.21.0), [documentation](https://docarray.jina.ai), and [its hot-fixes branch](https://github.com/docarray/docarray/tree/docarray-v1-fixes) for more information.\n\n\nDocArray is a Python library expertly crafted for the [representation](#represent), [transmission](#send), [storage](#store), and [retrieval](#retrieve) of multimodal data. Tailored for the development of multimodal AI applications, its design guarantees seamless integration with the extensive Python and machine learning ecosystems. 
As of January 2022, DocArray is openly distributed under the [Apache License 2.0](https://github.com/docarray/docarray/blob/main/LICENSE) and currently enjoys the status of a sandbox project within the [LF AI & Data Foundation](https://lfaidata.foundation/).\n\n\n\n- :fire: Offers native support for **[NumPy](https://github.com/numpy/numpy)**, **[PyTorch](https://github.com/pytorch/pytorch)**, **[TensorFlow](https://github.com/tensorflow/tensorflow)**, and **[JAX](https://github.com/google/jax)**, catering specifically to **model training scenarios**.\n- :zap: Based on **[Pydantic](https://github.com/pydantic/pydantic)**, and instantly compatible with web and microservice frameworks like **[FastAPI](https://github.com/tiangolo/fastapi/)** and **[Jina](https://github.com/jina-ai/jina/)**.\n- :package: Provides support for vector databases such as **[Weaviate](https://weaviate.io/), [Qdrant](https://qdrant.tech/), [ElasticSearch](https://www.elastic.co/de/elasticsearch/), [Redis](https://redis.io/)**, and **[HNSWLib](https://github.com/nmslib/hnswlib)**.\n- :chains: Allows data transmission as JSON over **HTTP** or as **[Protobuf](https://protobuf.dev/)** over **[gRPC](https://grpc.io/)**.\n\n## Installation\n\nTo install DocArray from the CLI, run the following command:\n\n```shell\npip install -U docarray\n```\n\n> **Note**\n> To use DocArray <=0.21, make sure you install via `pip install docarray==0.21` and check out its [codebase](https://github.com/docarray/docarray/tree/v0.21.0) and [docs](https://docarray.jina.ai) and [its hot-fixes branch](https://github.com/docarray/docarray/tree/docarray-v1-fixes).\n\n## Get Started\nNew to DocArray? Depending on your use case and background, there are multiple ways to learn about DocArray:\n \n- [Coming from pure PyTorch or TensorFlow](#coming-from-pytorch)\n- [Coming from Pydantic](#coming-from-pydantic)\n- [Coming from FastAPI](#coming-from-fastapi)\n- [Coming from Jina](#coming-from-jina)\n- [Coming from a vector database](#coming-from-a-vector-database)\n- [Coming from Langchain](#coming-from-langchain)\n\n\n## Represent\n\nDocArray empowers you to **represent your data** in a manner that is inherently attuned to machine learning.\n\nThis is particularly beneficial for various scenarios:\n\n- :running: You are **training a model**: You're dealing with tensors of varying shapes and sizes, each signifying different elements. 
You desire a method to logically organize them.\n- :cloud: You are **serving a model**: Let's say through FastAPI, and you wish to define your API endpoints precisely.\n- :card_index_dividers: You are **parsing data**: Perhaps for future deployment in your machine learning or data science projects.\n\n> :bulb: **Familiar with Pydantic?** You'll be pleased to learn\n> that DocArray is not only constructed atop Pydantic but also maintains complete compatibility with it!\n> Furthermore, we have a [specific section](#coming-from-pydantic) dedicated to your needs!\n\nIn essence, DocArray facilitates data representation in a way that mirrors Python dataclasses, with machine learning being an integral component:\n\n\n```python\nfrom docarray import BaseDoc\nfrom docarray.typing import TorchTensor, ImageUrl\nimport torch\n\n\n# Define your data model\nclass MyDocument(BaseDoc):\n    description: str\n    image_url: ImageUrl  # could also be VideoUrl, AudioUrl, etc.\n    image_tensor: TorchTensor[1704, 2272, 3]  # you can express tensor shapes!\n\n\n# Stack multiple documents in a Document Vector\nfrom docarray import DocVec\n\nvec = DocVec[MyDocument](\n    [\n        MyDocument(\n            description=\"A cat\",\n            image_url=\"https://example.com/cat.jpg\",\n            image_tensor=torch.rand(1704, 2272, 3),\n        ),\n    ]\n    * 10\n)\nprint(vec.image_tensor.shape)  # (10, 1704, 2272, 3)\n```\n\n<details markdown=\"1\">\n  <summary>Click for more details</summary>\n\nLet's take a closer look at how you can represent your data with DocArray:\n\n```python\nfrom docarray import BaseDoc\nfrom docarray.typing import TorchTensor, ImageUrl\nfrom typing import Optional\nimport torch\n\n\n# Define your data model\nclass MyDocument(BaseDoc):\n    description: str\n    image_url: ImageUrl  # could also be VideoUrl, AudioUrl, etc.\n    image_tensor: Optional[\n        TorchTensor[1704, 2272, 3]\n    ] = None  # could also be NdArray or TensorflowTensor\n    embedding: Optional[TorchTensor] = None\n```\n\nSo not only can you define the types of your data, you can even **specify the shape of your tensors!**\n\n```python\n# Create a document\ndoc = MyDocument(\n    description=\"This is a photo of a mountain\",\n    image_url=\"https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg\",\n)\n\n# Load image tensor from URL\ndoc.image_tensor = doc.image_url.load()\n\n\n# Compute embedding with any model of your choice\ndef clip_image_encoder(image_tensor: TorchTensor) -> TorchTensor:  # dummy function\n    return torch.rand(512)\n\n\ndoc.embedding = clip_image_encoder(doc.image_tensor)\n\nprint(doc.embedding.shape)  # torch.Size([512])\n```\n\n### Compose nested Documents\n\nOf course, you can compose Documents into a nested structure:\n\n```python\nfrom docarray import BaseDoc\nfrom docarray.documents import ImageDoc, TextDoc\nimport numpy as np\n\n\nclass MultiModalDocument(BaseDoc):\n    image_doc: ImageDoc\n    text_doc: TextDoc\n\n\ndoc = MultiModalDocument(\n    image_doc=ImageDoc(tensor=np.zeros((3, 224, 224))), text_doc=TextDoc(text='hi!')\n)\n```\n\nYou rarely work with a single data point at a time, especially in machine learning applications. That's why you can easily collect multiple `Documents`:\n\n### Collect multiple `Documents`\n\nWhen building or interacting with an ML system, usually you want to process multiple Documents (data points) at once.\n\nDocArray offers two data structures for this:\n\n- **`DocVec`**: A vector of `Documents`. 
All tensors in the documents are stacked into a single tensor. **Perfect for batch processing and use inside of ML models**.\n- **`DocList`**: A list of `Documents`. All tensors in the documents are kept as-is. **Perfect for streaming, re-ranking, and shuffling of data**.\n\nLet's take a look at them, starting with `DocVec`:\n\n```python\nfrom docarray import DocVec, BaseDoc\nfrom docarray.typing import AnyTensor, ImageUrl\nimport numpy as np\n\n\nclass Image(BaseDoc):\n    url: ImageUrl\n    tensor: AnyTensor  # this allows torch, numpy, and tensor flow tensors\n\n\nvec = DocVec[Image](  # the DocVec is parametrized by your personal schema!\n    [\n        Image(\n            url=\"https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg\",\n            tensor=np.zeros((3, 224, 224)),\n        )\n        for _ in range(100)\n    ]\n)\n``` \n\nIn the code snippet above, `DocVec` is **parametrized by the type of document** you want to use with it: `DocVec[Image]`.\n\nThis may look weird at first, but we're confident that you'll get used to it quickly!\nBesides, it lets us do some cool things, like having **bulk access to the fields that you defined** in your document:\n\n```python\ntensor = vec.tensor  # gets all the tensors in the DocVec\nprint(tensor.shape)  # which are stacked up into a single tensor!\nprint(vec.url)  # you can bulk access any other field, too\n```\n\nThe second data structure, `DocList`, works in a similar way:\n\n```python\nfrom docarray import DocList\n\ndl = DocList[Image](  # the DocList is parametrized by your personal schema!\n    [\n        Image(\n            url=\"https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg\",\n            tensor=np.zeros((3, 224, 224)),\n        )\n        for _ in range(100)\n    ]\n)\n```\n\nYou can still bulk access the fields of your document:\n\n```python\ntensors = dl.tensor  # gets all the tensors in the DocList\nprint(type(tensors))  # as a list of tensors\nprint(dl.url)  # you can bulk access any other field, too\n```\n\nAnd you can insert, remove, and append documents to your `DocList`:\n\n```python\n# append\ndl.append(\n    Image(\n        url=\"https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg\",\n        tensor=np.zeros((3, 224, 224)),\n    )\n)\n# delete\ndel dl[0]\n# insert\ndl.insert(\n    0,\n    Image(\n        url=\"https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg\",\n        tensor=np.zeros((3, 224, 224)),\n    ),\n)\n```\n\nAnd you can seamlessly switch between `DocVec` and `DocList`:\n\n```python\nvec_2 = dl.to_doc_vec()\nassert isinstance(vec_2, DocVec)\n\ndl_2 = vec_2.to_doc_list()\nassert isinstance(dl_2, DocList)\n```\n\n</details>\n\n## Send\n\nDocArray facilitates the **transmission of your data** in a manner inherently compatible with machine learning.\n\nThis includes native support for **Protobuf and gRPC**, along with **HTTP** and serialization to JSON, JSONSchema, Base64, and Bytes.\n\nThis feature proves beneficial for several scenarios:\n\n- :cloud: You are **serving a model**, perhaps through frameworks like **[Jina](https://github.com/jina-ai/jina/)** or **[FastAPI](https://github.com/tiangolo/fastapi/)**\n- :spider_web: You are **distributing your model** across multiple machines and need an efficient means of transmitting your data between nodes\n- :gear: You are architecting a **microservice** environment and require a method for data transmission between microservices\n\n> :bulb: **Are you familiar with FastAPI?** You'll be delighted to 
learn\n> that DocArray maintains full compatibility with FastAPI!\n> Plus, we have a [dedicated section](#coming-from-fastapi) specifically for you!\n\nWhen it comes to data transmission, serialization is a crucial step. Let's delve into how DocArray streamlines this process:\n\n\n```python\nfrom docarray import BaseDoc\nfrom docarray.typing import ImageTorchTensor\nimport torch\n\n\n# model your data\nclass MyDocument(BaseDoc):\n    description: str\n    image: ImageTorchTensor[3, 224, 224]\n\n\n# create a Document\ndoc = MyDocument(\n    description=\"This is a description\",\n    image=torch.zeros((3, 224, 224)),\n)\n\n# serialize it!\nproto = doc.to_protobuf()\nbytes_ = doc.to_bytes()\njson = doc.json()\n\n# deserialize it!\ndoc_2 = MyDocument.from_protobuf(proto)\ndoc_4 = MyDocument.from_bytes(bytes_)\ndoc_5 = MyDocument.parse_raw(json)\n```\n\nOf course, serialization is not all you need. So check out how DocArray integrates with **[Jina](https://github.com/jina-ai/jina/)** and **[FastAPI](https://github.com/tiangolo/fastapi/)**.\n\n## Store\n\nAfter modeling and possibly distributing your data, you'll typically want to **store it** somewhere. That's where DocArray steps in!\n\n**Document Stores** provide a seamless way to, as the name suggests, store your Documents. Be it locally or remotely, you can do it all through the same user interface:\n\n- :cd: **On disk**, as a file in your local filesystem\n- :bucket: On **[AWS S3](https://aws.amazon.com/de/s3/)**\n- :cloud: On **[Jina AI Cloud](https://cloud.jina.ai/)**\n\nThe Document Store interface lets you push and pull Documents to and from multiple data sources, all with the same user interface.\n\nFor example, let's see how that works with on-disk storage:\n\n```python\nfrom docarray import BaseDoc, DocList\n\n\nclass SimpleDoc(BaseDoc):\n    text: str\n\n\ndocs = DocList[SimpleDoc]([SimpleDoc(text=f'doc {i}') for i in range(8)])\ndocs.push('file://simple_docs')\n\ndocs_pull = DocList[SimpleDoc].pull('file://simple_docs')\n```\n\n## Retrieve\n\n**Document Indexes** let you index your Documents in a **vector database** for efficient similarity-based retrieval.\n\nThis is useful for:\n\n- :left_speech_bubble: Augmenting **LLMs and Chatbots** with domain knowledge ([Retrieval Augmented Generation](https://arxiv.org/abs/2005.11401))\n- :mag: **Neural search** applications\n- :bulb: **Recommender systems**\n\nCurrently, Document Indexes support **[Weaviate](https://weaviate.io/)**, **[Qdrant](https://qdrant.tech/)**, **[ElasticSearch](https://www.elastic.co/)**,  **[Redis](https://redis.io/)**, and **[HNSWLib](https://github.com/nmslib/hnswlib)**, with more to come!\n\nThe Document Index interface lets you index and retrieve Documents from multiple vector databases, all with the same user interface.\n\nIt supports ANN vector search, text search, filtering, and hybrid search.\n\n```python\nfrom docarray import DocList, BaseDoc\nfrom docarray.index import HnswDocumentIndex\nimport numpy as np\n\nfrom docarray.typing import ImageUrl, ImageTensor, NdArray\n\n\nclass ImageDoc(BaseDoc):\n    url: ImageUrl\n    tensor: ImageTensor\n    embedding: NdArray[128]\n\n\n# create some data\ndl = DocList[ImageDoc](\n    [\n        ImageDoc(\n            url=\"https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg\",\n            tensor=np.zeros((3, 224, 224)),\n            embedding=np.random.random((128,)),\n        )\n        for _ in range(100)\n    ]\n)\n\n# create a Document Index\nindex = 
HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index')\n\n\n# index your data\nindex.index(dl)\n\n# find similar Documents\nquery = dl[0]\nresults, scores = index.find(query, limit=10, search_field='embedding')\n```\n\n---\n\n## Learn DocArray\n\nDepending on your background and use case, there are different ways for you to understand DocArray.\n\n### Coming from DocArray <=0.21\n\n<details markdown=\"1\">\n  <summary>Click to expand</summary>\n\nIf you are using DocArray version 0.30.0 or lower, you will be familiar with its [dataclass API](https://docarray.jina.ai/fundamentals/dataclass/).\n\n_DocArray >=0.30 is that idea, taken seriously._ Every document is created through a dataclass-like interface,\ncourtesy of [Pydantic](https://pydantic-docs.helpmanual.io/usage/models/).\n\nThis gives the following advantages:\n- **Flexibility:** No need to conform to a fixed set of fields -- your data defines the schema\n- **Multimodality:** At their core, documents are just dictionaries. This makes it easy to create and send them from any language, not just Python.\n\nYou may also be familiar with our old Document Stores for vector DB integration.\nThey are now called **Document Indexes** and offer the following improvements (see [here](#store) for the new API):\n\n- **Hybrid search:** You can now combine vector search with text search, and even filter by arbitrary fields\n- **Production-ready:** The new Document Indexes are a much thinner wrapper around the various vector DB libraries, making them more robust and easier to maintain\n- **Increased flexibility:** We strive to support any configuration or setting that you could perform through the DB's first-party client\n\nFor now, Document Indexes support **[Weaviate](https://weaviate.io/)**, **[Qdrant](https://qdrant.tech/)**, **[ElasticSearch](https://www.elastic.co/)**, **[Redis](https://redis.io/)**,  Exact Nearest Neighbour search and **[HNSWLib](https://github.com/nmslib/hnswlib)**, with more to come.\n\n</details>\n\n### Coming from Pydantic\n\n<details markdown=\"1\">\n  <summary>Click to expand</summary>\n\nIf you come from Pydantic, you can see DocArray documents as juiced up Pydantic models, and DocArray as a collection of goodies around them.\n\nMore specifically, we set out to **make Pydantic fit for the ML world** - not by replacing it, but by building on top of it!\n\nThis means you get the following benefits:\n\n- **ML-focused types**: Tensor, TorchTensor, Embedding, ..., including **tensor shape validation**\n- Full compatibility with **FastAPI**\n- **DocList** and **DocVec** generalize the idea of a model to a _sequence_ or _batch_ of models. Perfect for **use in ML models** and other batch processing tasks.\n- **Types that are alive**: ImageUrl can `.load()` a URL to image tensor, TextUrl can load and tokenize text documents, etc.\n- Cloud-ready: Serialization to **Protobuf** for use with microservices and **gRPC**\n- **Pre-built multimodal documents** for different data modalities: Image, Text, 3DMesh, Video, Audio and more. 
Note that all of these are valid Pydantic models!\n- **Document Stores** and **Document Indexes** let you store your data and retrieve it using **vector search**\n\nThe most obvious advantage here is **first-class support for ML centric data**, such as `{Torch, TF, ...}Tensor`, `Embedding`, etc.\n\nThis includes handy features such as validating the shape of a tensor:\n\n```python\nfrom docarray import BaseDoc\nfrom docarray.typing import TorchTensor\nimport torch\n\n\nclass MyDoc(BaseDoc):\n    tensor: TorchTensor[3, 224, 224]\n\n\ndoc = MyDoc(tensor=torch.zeros(3, 224, 224))  # works\ndoc = MyDoc(tensor=torch.zeros(224, 224, 3))  # works by reshaping\n\ntry:\n    doc = MyDoc(tensor=torch.zeros(224))  # fails validation\nexcept Exception as e:\n    print(e)\n    # tensor\n    # Cannot reshape tensor of shape (224,) to shape (3, 224, 224) (type=value_error)\n\n\nclass Image(BaseDoc):\n    tensor: TorchTensor[3, 'x', 'x']\n\n\nImage(tensor=torch.zeros(3, 224, 224))  # works\n\ntry:\n    Image(\n        tensor=torch.zeros(3, 64, 128)\n    )  # fails validation because second dimension does not match third\nexcept Exception as e:\n    print()\n\n\ntry:\n    Image(\n        tensor=torch.zeros(4, 224, 224)\n    )  # fails validation because of the first dimension\nexcept Exception as e:\n    print(e)\n    # Tensor shape mismatch. Expected(3, 'x', 'x'), got(4, 224, 224)(type=value_error)\n\ntry:\n    Image(\n        tensor=torch.zeros(3, 64)\n    )  # fails validation because it does not have enough dimensions\nexcept Exception as e:\n    print(e)\n    # Tensor shape mismatch. Expected (3, 'x', 'x'), got (3, 64) (type=value_error)\n```\n\n</details>\n\n### Coming from PyTorch\n\n<details markdown=\"1\">\n  <summary>Click to expand</summary>\n\nIf you come from PyTorch, you can see DocArray mainly as a way of _organizing your data as it flows through your model_.\n\nIt offers you several advantages:\n\n- Express **tensor shapes in type hints**\n- **Group tensors that belong to the same object**, e.g. 
an audio track and an image\n- **Go directly to deployment**, by re-using your data model as a [FastAPI](https://fastapi.tiangolo.com/) or [Jina](https://github.com/jina-ai/jina) API schema\n- Connect model components between **microservices**, using Protobuf and gRPC\n\nDocArray can be used directly inside ML models to handle and represent multimodaldata.\nThis allows you to reason about your data using DocArray's abstractions deep inside of `nn.Module`,\nand provides a FastAPI-compatible schema that eases the transition between model training and model serving.\n\nTo see the effect of this, let's first observe a vanilla PyTorch implementation of a tri-modal ML model:\n\n```python\nimport torch\nfrom torch import nn\n\n\ndef encoder(x):\n    return torch.rand(512)\n\n\nclass MyMultiModalModel(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.audio_encoder = encoder()\n        self.image_encoder = encoder()\n        self.text_encoder = encoder()\n\n    def forward(self, text_1, text_2, image_1, image_2, audio_1, audio_2):\n        embedding_text_1 = self.text_encoder(text_1)\n        embedding_text_2 = self.text_encoder(text_2)\n\n        embedding_image_1 = self.image_encoder(image_1)\n        embedding_image_2 = self.image_encoder(image_2)\n\n        embedding_audio_1 = self.image_encoder(audio_1)\n        embedding_audio_2 = self.image_encoder(audio_2)\n\n        return (\n            embedding_text_1,\n            embedding_text_2,\n            embedding_image_1,\n            embedding_image_2,\n            embedding_audio_1,\n            embedding_audio_2,\n        )\n```\n\nNot very easy on the eyes if you ask us. And even worse, if you need to add one more modality you have to touch every part of your code base, changing the `forward()` return type and making a whole lot of changes downstream from that.\n\nSo, now let's see what the same code looks like with DocArray:\n\n```python\nfrom docarray import DocList, BaseDoc\nfrom docarray.documents import ImageDoc, TextDoc, AudioDoc\nfrom docarray.typing import TorchTensor\nfrom torch import nn\nimport torch\n\n\ndef encoder(x):\n    return torch.rand(512)\n\n\nclass Podcast(BaseDoc):\n    text: TextDoc\n    image: ImageDoc\n    audio: AudioDoc\n\n\nclass PairPodcast(BaseDoc):\n    left: Podcast\n    right: Podcast\n\n\nclass MyPodcastModel(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.audio_encoder = encoder()\n        self.image_encoder = encoder()\n        self.text_encoder = encoder()\n\n    def forward_podcast(self, docs: DocList[Podcast]) -> DocList[Podcast]:\n        docs.audio.embedding = self.audio_encoder(docs.audio.tensor)\n        docs.text.embedding = self.text_encoder(docs.text.tensor)\n        docs.image.embedding = self.image_encoder(docs.image.tensor)\n\n        return docs\n\n    def forward(self, docs: DocList[PairPodcast]) -> DocList[PairPodcast]:\n        docs.left = self.forward_podcast(docs.left)\n        docs.right = self.forward_podcast(docs.right)\n\n        return docs\n```\n\nLooks much better, doesn't it?\nYou instantly win in code readability and maintainability. And for the same price you can turn your PyTorch model into a FastAPI app and reuse your Document\nschema definition (see [below](#coming-from-fastapi)). 

</details>


### Coming from TensorFlow

<details markdown="1">
  <summary>Click to expand</summary>

Like the [PyTorch approach](#coming-from-pytorch), you can also use DocArray with TensorFlow to handle and represent multimodal data inside your ML model.

To use DocArray with TensorFlow, first install TensorFlow and a compatible Protobuf version:

```
pip install tensorflow==2.12.0
pip install protobuf==3.19.0
```

Compared to using DocArray with PyTorch, there is one main difference when using it with TensorFlow:
While DocArray's `TorchTensor` is a subclass of `torch.Tensor`, this is not the case for the `TensorFlowTensor`: Due to some technical limitations of `tf.Tensor`, DocArray's `TensorFlowTensor` is not a subclass of `tf.Tensor` but rather stores a `tf.Tensor` in its `.tensor` attribute.

How does this affect you? Whenever you want to access the tensor data, for example to do operations with it or hand it to your ML model, you need to access the wrapper's `.tensor` attribute instead of handing over the `TensorFlowTensor` instance itself.

This would look like the following:

```python
from typing import Optional

from docarray import DocList, BaseDoc
from docarray.typing import AudioTensorFlowTensor

import tensorflow as tf


class Podcast(BaseDoc):
    audio_tensor: Optional[AudioTensorFlowTensor] = None
    embedding: Optional[AudioTensorFlowTensor] = None


class MyPodcastModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.audio_encoder = AudioEncoder()  # placeholder for your own audio encoder

    def call(self, inputs: DocList[Podcast]) -> DocList[Podcast]:
        for doc in inputs:
            # hand the raw tf.Tensor (via .tensor) to the encoder, not the wrapper
            doc.embedding = self.audio_encoder(doc.audio_tensor.tensor)
        return inputs
```
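
As a minimal standalone illustration of the wrapper behaviour described above (a sketch, assuming the generic `TensorFlowTensor` type from `docarray.typing`):

```python
import tensorflow as tf

from docarray import BaseDoc
from docarray.typing import TensorFlowTensor


class MyDoc(BaseDoc):
    tensor: TensorFlowTensor


doc = MyDoc(tensor=tf.zeros((3, 224, 224)))

# the field holds DocArray's wrapper, not a tf.Tensor ...
print(type(doc.tensor))  # TensorFlowTensor
# ... so unwrap it via .tensor before applying TensorFlow ops
total = tf.reduce_sum(doc.tensor.tensor)
```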

</details>

### Coming from FastAPI

<details markdown="1">
  <summary>Click to expand</summary>

Documents are Pydantic models (with a twist), and as such they are fully compatible with FastAPI!

But why should you use them, and not the Pydantic models you already know and love?
Good question!

- Because of the ML-first features, types and validations [described above](#coming-from-pydantic)
- Because DocArray can act as an [ORM for vector databases](#coming-from-a-vector-database), similar to what SQLModel does for SQL databases

And to seal the deal, let us show you how easily documents slot into your FastAPI app:

```python
import asyncio

import numpy as np
from fastapi import FastAPI
from httpx import AsyncClient

from docarray import BaseDoc
from docarray.base_doc import DocArrayResponse
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor


class InputDoc(BaseDoc):
    img: ImageDoc
    text: str


class OutputDoc(BaseDoc):
    embedding_clip: NdArray
    embedding_bert: NdArray


app = FastAPI()


def model_img(img: ImageTensor) -> NdArray:
    return np.zeros((100, 1))


def model_text(text: str) -> NdArray:
    return np.zeros((100, 1))


@app.post("/embed/", response_model=OutputDoc, response_class=DocArrayResponse)
async def create_item(doc: InputDoc) -> OutputDoc:
    doc = OutputDoc(
        embedding_clip=model_img(doc.img.tensor), embedding_bert=model_text(doc.text)
    )
    return doc


async def main():
    input_doc = InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))

    # call the endpoint in-process with an async test client
    async with AsyncClient(app=app, base_url="http://test") as ac:
        response = await ac.post("/embed/", data=input_doc.json())


asyncio.run(main())
```

Just like a vanilla Pydantic model!

</details>

### Coming from Jina

<details markdown="1">
  <summary>Click to expand</summary>

Jina has adopted DocArray as its library for representing and serializing Documents.

Jina lets you serve and scale models and services built with DocArray, making full use of DocArray's serialization capabilities.

```python
import numpy as np
from jina import Deployment, Executor, requests
from docarray import BaseDoc, DocList
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor


class InputDoc(BaseDoc):
    img: ImageDoc
    text: str


class OutputDoc(BaseDoc):
    embedding_clip: NdArray
    embedding_bert: NdArray


def model_img(img: ImageTensor) -> NdArray:
    return np.zeros((100, 1))


def model_text(text: str) -> NdArray:
    return np.zeros((100, 1))


class MyEmbeddingExecutor(Executor):
    @requests(on='/embed')
    def encode(self, docs: DocList[InputDoc], **kwargs) -> DocList[OutputDoc]:
        ret = DocList[OutputDoc]()
        for doc in docs:
            output = OutputDoc(
                embedding_clip=model_img(doc.img.tensor),
                embedding_bert=model_text(doc.text),
            )
            ret.append(output)
        return ret


with Deployment(
    protocols=['grpc', 'http'], ports=[12345, 12346], uses=MyEmbeddingExecutor
) as dep:
    resp = dep.post(
        on='/embed',
        inputs=DocList[InputDoc](
            [InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))]
        ),
        return_type=DocList[OutputDoc],
    )
    print(resp)
```

</details>

### Coming from a vector database

<details markdown="1">
  <summary>Click to expand</summary>

If you came across DocArray as a universal vector database client, you can best think of it as **a new kind of ORM for vector databases**.
DocArray's job is to take multimodal, nested and domain-specific data, map it to a vector database, store it there, and thus make it searchable:

```python
from docarray import DocList, BaseDoc
from docarray.index import HnswDocumentIndex
import numpy as np

from docarray.typing import ImageUrl, ImageTensor, NdArray


class ImageDoc(BaseDoc):
    url: ImageUrl
    tensor: ImageTensor
    embedding: NdArray[128]


# create some data
dl = DocList[ImageDoc](
    [
        ImageDoc(
            url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
            tensor=np.zeros((3, 224, 224)),
            embedding=np.random.random((128,)),
        )
        for _ in range(100)
    ]
)

# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index2')

# index your data
index.index(dl)

# find similar Documents
query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')
```
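
Since the Document Index API is shared across backends, the same schema and data can be dropped into a different store. As a minimal sketch, reusing `ImageDoc` and `dl` from above with the `InMemoryExactNNIndex` backend (the same backend used in the Langchain section below):

```python
from docarray.index import InMemoryExactNNIndex

# same schema, different backend: exact nearest-neighbor search, fully in memory
index = InMemoryExactNNIndex[ImageDoc]()
index.index(dl)

query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')
```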

Currently, DocArray supports the following vector databases:

- [Weaviate](https://www.weaviate.io/)
- [Qdrant](https://qdrant.tech/)
- [Elasticsearch](https://www.elastic.co/elasticsearch/) v8 and v7
- [Redis](https://redis.io/)
- [Milvus](https://milvus.io)
- `InMemoryExactNNIndex` as a local alternative with exact kNN search
- [HNSWlib](https://github.com/nmslib/hnswlib) as a local-first ANN alternative

An integration of [OpenSearch](https://opensearch.org/) is currently in progress.

Of course this is only one of the things that DocArray can do, so we encourage you to check out the rest of this README!

</details>


### Coming from Langchain

<details markdown="1">
  <summary>Click to expand</summary>

With DocArray, you can connect external data to LLMs through Langchain. DocArray gives you the freedom to establish flexible document schemas and choose from different backends for document storage.
After creating your document index, you can connect it to your Langchain app using [DocArrayRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/docarray_retriever).

Install Langchain via:
```shell
pip install langchain
```

1. Define a schema and create documents:
```python
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Define a document schema
class MovieDoc(BaseDoc):
    title: str
    description: str
    year: int
    embedding: NdArray[1536]


movies = [
    {"title": "#1 title", "description": "#1 description", "year": 1999},
    {"title": "#2 title", "description": "#2 description", "year": 2001},
]

# Embed `description` and create documents
docs = DocList[MovieDoc](
    MovieDoc(embedding=embeddings.embed_query(movie["description"]), **movie)
    for movie in movies
)
```

2. Initialize a document index using any supported backend:
```python
from docarray.index import (
    InMemoryExactNNIndex,
    HnswDocumentIndex,
    WeaviateDocumentIndex,
    QdrantDocumentIndex,
    ElasticDocIndex,
    RedisDocumentIndex,
)

# Select a suitable backend and initialize it with data
db = InMemoryExactNNIndex[MovieDoc](docs)
```

3. Finally, initialize a retriever and integrate it into your chain!
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.retrievers import DocArrayRetriever


# Create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="embedding",
    content_field="description",
)

# Use the retriever in your chain
model = ChatOpenAI()
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
```

Alternatively, you can use built-in vector stores. Langchain supports two of them: [DocArrayInMemorySearch](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/docarray_in_memory) and [DocArrayHnswSearch](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/docarray_hnsw).
Both are user-friendly and best suited to small to medium-sized datasets.

</details>


## See also

- [Documentation](https://docs.docarray.org)
- [DocArray<=0.21 documentation](https://docarray.jina.ai/)
- [Join our Discord server](https://discord.gg/WaMp6PVPgR)
- [Donation to Linux Foundation AI&Data blog post](https://jina.ai/news/donate-docarray-lf-for-inclusive-standard-multimodal-data-model/)
- [Roadmap](https://github.com/docarray/docarray/issues/1714)

> DocArray is a trademark of LF AI Projects, LLC
",
    "bugtrack_url": null,
    "license": "Apache 2.0",
    "summary": "The data structure for multimodal data",
    "version": "0.40.0",
    "project_urls": {
        "Documentation": "https://docs.docarray.org",
        "Homepage": "https://docs.docarray.org/",
        "Repository": "https://github.com/docarray/docarray"
    },
    "split_keywords": [
        "docarray",
        "deep-learning",
        "data-structures cross-modal multi-modal",
        " unstructured-data",
        " nested-data",
        "neural-search"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "27bf90439e206a5d2df089e3467a703dfa0349f17d73f003ec51367db23bf8de",
                "md5": "e92bc393c1d697019b54233aa8d5c827",
                "sha256": "86ceadb84cdec2dc9579e2f79823748a3af094c57df4e0441c5f0bac7e63ef97"
            },
            "downloads": -1,
            "filename": "docarray-0.40.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e92bc393c1d697019b54233aa8d5c827",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8,<4.0",
            "size": 270232,
            "upload_time": "2023-12-22T12:12:16",
            "upload_time_iso_8601": "2023-12-22T12:12:16.923488Z",
            "url": "https://files.pythonhosted.org/packages/27/bf/90439e206a5d2df089e3467a703dfa0349f17d73f003ec51367db23bf8de/docarray-0.40.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ff204ce3b1324dcadd62e10a1122f16ddc87b837ae9402791f40859e1ff98b87",
                "md5": "8da0d7ba7b92fb5938a787be94b35e81",
                "sha256": "c3f3b88b7b6128c10308dddbd21650c9845e270da850cf6718cb1d3d867d5986"
            },
            "downloads": -1,
            "filename": "docarray-0.40.0.tar.gz",
            "has_sig": false,
            "md5_digest": "8da0d7ba7b92fb5938a787be94b35e81",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8,<4.0",
            "size": 195524,
            "upload_time": "2023-12-22T12:12:25",
            "upload_time_iso_8601": "2023-12-22T12:12:25.053005Z",
            "url": "https://files.pythonhosted.org/packages/ff/20/4ce3b1324dcadd62e10a1122f16ddc87b837ae9402791f40859e1ff98b87/docarray-0.40.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-12-22 12:12:25",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "docarray",
    "github_project": "docarray",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "docarray"
}
        