# LangChain Zunno Integration
A LangChain integration for Zunno LLM and Embeddings, providing easy-to-use wrappers for text generation and embeddings.
## Installation
```bash
pip install langchain-zunno
```
## Quick Start
### Text Generation (LLM)
```python
from langchain_zunno import ZunnoLLM
# Create an LLM instance
llm = ZunnoLLM(model_name="mistral:latest")
# Generate text
response = llm.invoke("Hello, how are you?")
print(response)
```
### Embeddings
```python
from langchain_zunno import ZunnoLLMEmbeddings
# Create an embeddings instance
embeddings = ZunnoLLMEmbeddings(model_name="mistral:latest")
# Get embeddings for a single text
embedding = embeddings.embed_query("Hello, how are you?")
print(f"Embedding dimension: {len(embedding)}")
# Get embeddings for multiple texts
texts = ["Hello world", "How are you?", "Good morning"]
embeddings_list = embeddings.embed_documents(texts)
print(f"Number of embeddings: {len(embeddings_list)}")
```
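Once you have embeddings, a common next step is comparing them. `embed_query` returns a plain list of floats, so cosine similarity can be computed with the standard library alone. A minimal sketch (small dummy vectors stand in for real embeddings here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In real usage you would compare vectors returned by embed_query;
# dummy vectors are used here for illustration.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```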
### Async Usage
```python
import asyncio
from langchain_zunno import ZunnoLLM, ZunnoLLMEmbeddings
async def main():
    # Async LLM
    llm = ZunnoLLM(model_name="mistral:latest")
    response = await llm.ainvoke("Hello, how are you?")
    print(response)

    # Async embeddings
    embeddings = ZunnoLLMEmbeddings(model_name="mistral:latest")
    embedding = await embeddings.aembed_query("Hello, how are you?")
    print(f"Embedding dimension: {len(embedding)}")

asyncio.run(main())
```
## Factory Functions
For convenience, you can use factory functions to create instances:
```python
from langchain_zunno import create_zunno_llm, create_zunno_embeddings
# Create LLM
llm = create_zunno_llm(
    model_name="mistral:latest",
    temperature=0.7,
    max_tokens=100,
)

# Create embeddings
embeddings = create_zunno_embeddings(
    model_name="mistral:latest",
)
```
## Configuration
### LLM Configuration
- `model_name`: The name of the model to use
- `base_url`: API endpoint (default: "http://15.206.124.44/v1/prompt-response")
- `temperature`: Controls randomness in generation (default: 0.7)
- `max_tokens`: Maximum number of tokens to generate (optional)
- `timeout`: Request timeout in seconds (default: 300)
### Embeddings Configuration
- `model_name`: The name of the embedding model to use
- `base_url`: API endpoint (default: "http://15.206.124.44/v1/text-embeddings")
- `timeout`: Request timeout in seconds (default: 300)
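The defaults above can be overridden at construction time. A configuration sketch using the documented parameters (the specific values are illustrative, not recommendations):

```python
from langchain_zunno import ZunnoLLM, ZunnoLLMEmbeddings

llm = ZunnoLLM(
    model_name="mistral:latest",
    temperature=0.2,   # lower temperature for more deterministic output
    max_tokens=256,
    timeout=60,        # fail faster than the 300-second default
)

embeddings = ZunnoLLMEmbeddings(
    model_name="mistral:latest",
    timeout=60,
)
```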
## API Endpoints
The package connects to the following Zunno API endpoints:
- **Text Generation**: `http://15.206.124.44/v1/prompt-response`
- **Embeddings**: `http://15.206.124.44/v1/text-embeddings`
## Error Handling
API calls that fail (for example, on timeouts or HTTP errors) raise exceptions that you can catch and handle:
```python
try:
    response = llm.invoke("Hello")
except Exception as e:
    print(f"Error: {e}")
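Transient network failures are common with remote endpoints, so you may want to retry before giving up. The helper below is a generic sketch (not part of this package) that wraps any callable with simple linear backoff; a stub function stands in for a real `llm.invoke` call:

```python
import time

def with_retries(fn, *args, attempts=3, delay=0.1, **kwargs):
    """Call fn, retrying on any exception with linear backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            last_error = e
            time.sleep(delay * (attempt + 1))
    raise last_error

# Stub that fails twice, then succeeds -- stands in for llm.invoke.
calls = {"n": 0}
def flaky(prompt):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary failure")
    return f"response to {prompt!r}"

print(with_retries(flaky, "Hello"))  # succeeds on the third attempt
```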
## Development
### Installation for Development
```bash
git clone https://github.com/zunno/langchain-zunno.git
cd langchain-zunno
pip install -e ".[dev]"
```
### Running Tests
```bash
pytest
```
### Code Formatting
```bash
black .
isort .
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request
## Support
For support, please open an issue on GitHub or contact us at support@zunno.ai.