| Field | Value |
| --- | --- |
| Name | llama-index-vector-stores-mariadb |
| Version | 0.3.1 |
| Summary | llama-index vector_stores mariadb integration |
| Author | Kalin Arsov |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| Requires Python | <4.0,>=3.9 |
| License | MIT |
| Keywords | None |
| Upload time | 2025-02-13 23:21:25 |
| Requirements | No requirements were recorded. |
# LlamaIndex Vector_Stores Integration: MariaDB
Starting with version `11.7.1`, the MariaDB relational database ships with built-in vector search.
It can therefore be used as a fully functional vector store in LlamaIndex.
To learn more about the feature, see MariaDB's [Vector Overview documentation](https://mariadb.com/kb/en/vector-overview/).
Please note that versions of this package before `0.3.0` are not compatible with MariaDB 11.7 and later.
They work only with the one-off `MariaDB 11.6 Vector` preview release, which used slightly different syntax.
## Installation
```shell
pip install llama-index-vector-stores-mariadb
```
## Usage
```python
from llama_index.vector_stores.mariadb import MariaDBVectorStore
vector_store = MariaDBVectorStore.from_params(
host="localhost",
port=3306,
user="llamaindex",
password="password",
database="vectordb",
table_name="llama_index_vectorstore",
embed_dim=1536, # OpenAI embedding dimension
default_m=6, # MariaDB Vector system parameter
ef_search=20, # MariaDB Vector system parameter
)
```
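Once constructed, the store can be wired into LlamaIndex like any other vector store. The sketch below is not part of this package's documentation; it assumes a configured embedding model whose output matches `embed_dim=1536` (for example an OpenAI key for the default embeddings) and an illustrative `./data` directory of documents:

```python
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex

# Load some documents (the ./data path is only an example).
documents = SimpleDirectoryReader("./data").load_data()

# Point the index at the MariaDB-backed vector store created above.
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Embeddings are written to and retrieved from the MariaDB table.
query_engine = index.as_query_engine()
print(query_engine.query("What does MariaDB's vector search support?"))
```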
## Development
### Running Integration Tests
A suite of integration tests is available to verify the MariaDB vector store integration.
The tests require a running MariaDB database with vector search support; if one is not found, the tests are skipped.
To make this easy, a sample `docker-compose.yaml` file is provided, so you can simply do:
```shell
docker compose -f tests/docker-compose.yaml up
pytest -v
# Clean up when you finish testing
docker compose -f tests/docker-compose.yaml down
```
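If you prefer not to use the provided compose file, an equivalent throwaway container can be started directly with Docker. This is only a sketch: the image tag, port, and root password below are assumptions and may not match what `tests/docker-compose.yaml` actually configures:

```shell
# Start a disposable MariaDB 11.7 container (vector search is built in as of 11.7.1)
docker run --rm -d --name mariadb-vector-test \
  -e MARIADB_ROOT_PASSWORD=password \
  -p 3306:3306 \
  mariadb:11.7

pytest -v

# Stop and remove the container when done
docker stop mariadb-vector-test
```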