langchain-turbopuffer

Name: langchain-turbopuffer
Version: 0.1.2
Home page: None
Summary: an unofficial langchain binding for turbopuffer
Upload time: 2025-01-12 23:01:02
Maintainer: None
Docs URL: None
Author: Alex Chi Z
Requires Python: <3.13,>=3.10
License: MIT
Keywords: langchain, turbopuffer, vector-store
Requirements: No requirements were recorded.
# langchain-turbopuffer

Use Turbopuffer as a vector store for LangChain.

## Usage

```bash
poetry add git+https://github.com/skyzh/langchain-turbopuffer
# see example.py for usage
```

```python
import os

from langchain_ollama import OllamaEmbeddings
from langchain_turbopuffer import TurbopufferVectorStore

vectorstore = TurbopufferVectorStore(
    embedding=OllamaEmbeddings(model="mxbai-embed-large"),
    namespace="langchain-turbopuffer-test",
    api_key=os.getenv("TURBOPUFFER_API_KEY"),
)
```
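
Once constructed, the store should behave like any other LangChain vector store, so the usual indexing and querying calls apply. Below is a minimal sketch, assuming `TurbopufferVectorStore` implements the standard `add_documents` / `similarity_search` interface (the document contents here are made up for illustration):

```python
from langchain_core.documents import Document

# Illustrative documents only; replace with your own corpus.
docs = [
    Document(page_content="Prompt engineering is the practice of crafting effective model inputs."),
    Document(page_content="Turbopuffer is a vector database built on object storage."),
]
vectorstore.add_documents(docs)

# Embed the query with the same Ollama model and fetch the closest match.
results = vectorstore.similarity_search("What is prompt engineering?", k=1)
print(results[0].page_content)
```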

## Local Development

```bash
git clone https://github.com/skyzh/langchain-turbopuffer
cd langchain-turbopuffer
poetry env use 3.12
poetry install

ollama pull mxbai-embed-large   # embedding model used by the example
ollama pull llama3.2            # chat model used by the example
ollama run llama3.2
export TURBOPUFFER_API_KEY=your_api_key
poetry run python example.py               # first run loads the documents
poetry run python example.py --skip-load   # subsequent runs skip re-loading
```

In the example, you can ask questions like "What is prompt engineering?"
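
Under the hood, `example.py` presumably follows the usual retrieval-augmented QA pattern: fetch the most relevant chunks from Turbopuffer, then have the Ollama chat model answer from them. The sketch below shows that pattern, not the actual contents of `example.py`; it reuses the `vectorstore` from the snippet above and assumes the standard `as_retriever()` bridge is available:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

retriever = vectorstore.as_retriever()   # standard VectorStore -> Retriever bridge
llm = ChatOllama(model="llama3.2")       # the chat model pulled above

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)
chain = prompt | llm

question = "What is prompt engineering?"
docs = retriever.invoke(question)        # similarity search against Turbopuffer
context = "\n\n".join(d.page_content for d in docs)
answer = chain.invoke({"context": context, "question": question})
print(answer.content)
```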

## License

MIT

Note that `example.py` comes from [langchain-risinglight](https://github.com/skyzh/langchain-risinglight) and is based on several online tutorials (see the file header for more details).

            

Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "langchain-turbopuffer",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.13,>=3.10",
    "maintainer_email": null,
    "keywords": "langchain, turbopuffer, vector-store",
    "author": "Alex Chi Z",
    "author_email": "iskyzh@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/4f/e5/59ee8d094fab8f00f17d3c6359e62316b6c76debd05f201d1c825c23e08e/langchain_turbopuffer-0.1.2.tar.gz",
    "platform": null,
    "description": "# langchain-turbopuffer\n\nUse Turbopuffer as a vector store for LangChain.\n\n## Usage\n\n```bash\npoetry add git+https://github.com/skyzh/langchain-turbopuffer\n# see example.py for usage\n```\n\n```python\nfrom langchain_turbopuffer import TurbopufferVectorStore\n\nvectorstore = TurbopufferVectorStore(\n    embedding=OllamaEmbeddings(model=\"mxbai-embed-large\"),\n    namespace=\"langchain-turbopuffer-test\",\n    api_key=os.getenv(\"TURBOPUFFER_API_KEY\"),\n)\n```\n\n## Local Development\n\n```bash\ngit clone https://github.com/skyzh/langchain-turbopuffer\ncd langchain-turbopuffer\npoetry env use 3.12\npoetry install\n\nollama pull mxbai-embed-large llama3.2\nollama run llama3.2\nexport TURBOPUFFER_API_KEY=your_api_key\npoetry run python example.py\npoetry run python example.py --skip-load\n```\n\nIn the example, you can ask questions like \"What is prompt engineering?\"\n\n## License\n\nMIT\n\nNote that the `example.py` is from [langchain-risinglight](https://github.com/skyzh/langchain-risinglight) based on several online tutorials (see the file header for more details).\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "an unofficial langchain binding for turbopuffer",
    "version": "0.1.2",
    "project_urls": null,
    "split_keywords": [
        "langchain",
        " turbopuffer",
        " vector-store"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "46e11a45fa451f943e6aacef2a897d80416fa16e5894908531c06775e4bd8995",
                "md5": "2c5fc97ec547757394e942137d33e985",
                "sha256": "6bdf2d3ef0f8bb35a8ab55737b40d21f2f5442e29283912cecbee5a0a6e87c11"
            },
            "downloads": -1,
            "filename": "langchain_turbopuffer-0.1.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "2c5fc97ec547757394e942137d33e985",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<3.13,>=3.10",
            "size": 3639,
            "upload_time": "2025-01-12T23:01:00",
            "upload_time_iso_8601": "2025-01-12T23:01:00.035536Z",
            "url": "https://files.pythonhosted.org/packages/46/e1/1a45fa451f943e6aacef2a897d80416fa16e5894908531c06775e4bd8995/langchain_turbopuffer-0.1.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4fe559ee8d094fab8f00f17d3c6359e62316b6c76debd05f201d1c825c23e08e",
                "md5": "84bbfd90112f8fc107e82bd19334c2b1",
                "sha256": "92458a136442540cc11f5b1ff759b07832b00c3b75b1def0f197b8ab70391572"
            },
            "downloads": -1,
            "filename": "langchain_turbopuffer-0.1.2.tar.gz",
            "has_sig": false,
            "md5_digest": "84bbfd90112f8fc107e82bd19334c2b1",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.13,>=3.10",
            "size": 3157,
            "upload_time": "2025-01-12T23:01:02",
            "upload_time_iso_8601": "2025-01-12T23:01:02.099953Z",
            "url": "https://files.pythonhosted.org/packages/4f/e5/59ee8d094fab8f00f17d3c6359e62316b6c76debd05f201d1c825c23e08e/langchain_turbopuffer-0.1.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-01-12 23:01:02",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "langchain-turbopuffer"
}
```
        