# Python bindings for Lance Data Format
> :warning: **Under heavy development**
<div align="center">
<p align="center">
<img width="257" alt="Lance Logo" src="https://user-images.githubusercontent.com/917119/199353423-d3e202f7-0269-411d-8ff2-e747e419e492.png">
Lance is a new columnar data format for data science and machine learning
</p></div>
Why you should use Lance:
1. Is an order of magnitude faster than Parquet for point queries and the nested data structures common to DS/ML
2. Comes with a fast vector index that delivers sub-millisecond nearest-neighbor search performance
3. Is automatically versioned and supports lineage and time travel for full reproducibility
4. Is already integrated with DuckDB/pandas/polars. Easily convert to/from Parquet in two lines of code
## Quick start
**Installation**
```shell
pip install pylance
```
Make sure you have a recent version of pandas (1.5+), pyarrow (10.0+), and DuckDB (0.7.0+).
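For example, one way to satisfy all three minimums at once:

```shell
pip install "pandas>=1.5" "pyarrow>=10.0" "duckdb>=0.7.0"
```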
**Converting to Lance**
```python
import lance
import pandas as pd
import pyarrow as pa
import pyarrow.dataset
df = pd.DataFrame({"a": [5], "b": [10]})
uri = "/tmp/test.parquet"
tbl = pa.Table.from_pandas(df)
pa.dataset.write_dataset(tbl, uri, format='parquet')
parquet = pa.dataset.dataset(uri, format='parquet')
lance.write_dataset(parquet, "/tmp/test.lance")
```
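Going back the other way is just as short; a minimal sketch converting the Lance dataset back to Parquet (the output path here is hypothetical):

```python
tbl = lance.dataset("/tmp/test.lance").to_table()
pa.dataset.write_dataset(tbl, "/tmp/roundtrip.parquet", format="parquet")
```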
**Reading Lance data**
```python
dataset = lance.dataset("/tmp/test.lance")
assert isinstance(dataset, pa.dataset.Dataset)
```
**Pandas**
```python
df = dataset.to_table().to_pandas()
```
**DuckDB**
```python
import duckdb
# If this segfaults, make sure you have duckdb v0.7+ installed
duckdb.query("SELECT * FROM dataset LIMIT 10").to_df()
```
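DuckDB resolves the table name `dataset` in the SQL string to the Python variable of the same name via its replacement-scan mechanism, so the Lance dataset can be queried without any registration step.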
**Vector search**
Download the sift1m subset
```shell
wget ftp://ftp.irisa.fr/local/texmex/corpus/sift.tar.gz
tar -xzf sift.tar.gz
```
Convert it to Lance
```python
import lance
from lance.vector import vec_to_table
import numpy as np

nvecs = 1_000_000
ndims = 128
# Each fvecs record is a little-endian int32 dimension header followed by
# ndims float32 values, so drop the leading dimension column from each row.
fv = np.fromfile("sift/sift_base.fvecs", dtype=np.float32)
data = fv.reshape(nvecs, ndims + 1)[:, 1:]
dd = dict(zip(range(nvecs), data))

table = vec_to_table(dd)
uri = "vec_data.lance"
sift1m = lance.write_dataset(table, uri, max_rows_per_group=8192, max_rows_per_file=1024*1024)
```
Build the index
```python
sift1m.create_index(
    "vector",
    index_type="IVF_PQ",
    num_partitions=256,  # IVF
    num_sub_vectors=16,  # PQ
)
```
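IVF_PQ combines an inverted-file index (vectors are clustered into `num_partitions` coarse partitions, of which only a subset is searched per query) with product quantization (each vector is compressed into `num_sub_vectors` codes), trading a small amount of recall for much lower latency and memory.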
Search the dataset
```python
# Get top 10 similar vectors
import duckdb
dataset = lance.dataset(uri)
# Sample 100 query vectors. If this segfaults, make sure you have duckdb v0.7+ installed
sample = duckdb.query("SELECT vector FROM dataset USING SAMPLE 100").to_df()
query_vectors = np.array([np.array(x) for x in sample.vector])
# Get nearest neighbors for all of them
rs = [dataset.to_table(nearest={"column": "vector", "k": 10, "q": q})
      for q in query_vectors]
```
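Recall and latency can be traded off at query time. A sketch using the `nprobes` and `refine_factor` options (the values here are illustrative, not tuned):

```python
rs = [dataset.to_table(nearest={
          "column": "vector",
          "q": q,
          "k": 10,
          "nprobes": 20,       # search more IVF partitions => better recall
          "refine_factor": 5,  # re-rank 5x k candidates with exact distances
      })
      for q in query_vectors]
```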
\*More distance metrics, HNSW, and distributed support are on the roadmap.
## Python package details
Install from PyPI: `pip install pylance` (versions >=0.3.0 are the new Rust-based implementation)
Install from source: `maturin develop` (under the `/python` directory)
Run unit tests: `make test`
Run integration tests: `make integtest`
Import via: `import lance`
The Python integration is done via PyO3 plus custom Python code:
1. We make wrapper classes in Rust for Dataset/Scanner/RecordBatchReader that are exposed to Python.
2. These are then used by the LanceDataset / LanceScanner implementations that extend the pyarrow Dataset/Scanner for DuckDB compatibility.
3. Data is delivered via the Arrow C Data Interface.
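Because `LanceDataset` extends the pyarrow `Dataset` API, generic Arrow code works unchanged. A minimal sketch against the quick-start dataset:

```python
import lance
import pyarrow.dataset as ds

dataset = lance.dataset("/tmp/test.lance")
assert isinstance(dataset, ds.Dataset)

# Batches produced by the Rust scanner cross into Python
# via the Arrow C Data Interface.
for batch in dataset.to_batches(columns=["a"]):
    print(batch.num_rows)
```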
## Motivation
Why do we *need* a new format for data science and machine learning?
### 1. Reproducibility is a must-have
Versioning and experimentation support should be built into the dataset instead of requiring multiple tools.<br/>
It should also be efficient and not require expensive copying every time you want to create a new version.<br/>
We call this "Zero copy versioning" in Lance. It makes versioning data easy without increasing storage costs.
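A minimal sketch of zero-copy versioning in practice (the path is hypothetical; each write commits a new version without rewriting existing data):

```python
import lance
import pyarrow as pa

uri = "/tmp/versioned.lance"
lance.write_dataset(pa.table({"x": [1, 2]}), uri)                 # version 1
lance.write_dataset(pa.table({"x": [3, 4]}), uri, mode="append")  # version 2

dataset = lance.dataset(uri)
print(dataset.versions())            # metadata for every committed version
old = lance.dataset(uri, version=1)  # time travel back to the first version
assert old.to_table().num_rows == 2
```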
### 2. Cloud storage is now the default
Remote object storage is now the default for data science and machine learning, and the performance characteristics of cloud storage are fundamentally different.<br/>
Lance format is optimized to be cloud native. Common operations like filter-then-take can be an order of magnitude faster
using Lance than Parquet, especially for ML data.
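As a rough sketch, filter-then-take with the quick-start dataset (the column names come from the example above):

```python
import lance

dataset = lance.dataset("/tmp/test.lance")

# Filter first, then materialize only the matching rows and columns.
tbl = dataset.to_table(columns=["a"], filter="b = 10")

# Random access by row index is also a first-class operation.
rows = dataset.take([0], columns=["a", "b"])
```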
### 3. Vectors must be a first class citizen, not a separate thing
The majority of reasonable-scale workflows should not require the added complexity and cost of a
specialized database just to compute vector similarity. Lance integrates optimized vector indices
into a columnar format so no additional infrastructure is required to get low latency top-K similarity search.
### 4. Open standards are a requirement
The DS/ML ecosystem is incredibly rich and data *must be* easily accessible across different languages, tools, and environments.
Lance makes Apache Arrow integration its primary interface, which means conversion to/from Arrow takes two lines of code, your
code does not need to change after conversion, and nothing is locked up to force you to pay for vendor compute.
We need open-source, not fauxpen-source.