langchain-zunno


Name: langchain-zunno
Version: 0.1.6
Summary: LangChain integration for Zunno LLM and Embeddings
Upload time: 2025-08-25 06:14:54
Requires Python: >=3.8
License: MIT
Keywords: langchain, llm, embeddings, zunno, ai, machine-learning
Requirements: none recorded
# LangChain Zunno Integration

A LangChain integration for Zunno LLM and Embeddings, providing easy-to-use wrappers for text generation and embeddings.

## Installation

```bash
pip install langchain-zunno
```

## Quick Start

### Text Generation (LLM)

#### Basic Usage (Returns only response text)
```python
from langchain_zunno import ZunnoLLM

# Create an LLM instance
llm = ZunnoLLM(model_name="mistral:latest")

# Generate text
response = llm.invoke("Hello, how are you?")
print(response)
```

#### Full Response Mode (Returns complete API response)
```python
from langchain_zunno import ZunnoLLM

# Create an LLM instance with full response
llm = ZunnoLLM(
    model_name="mistral:latest",
    return_full_response=True
)

# Get complete API response
full_response = llm.invoke("Hello, how are you?")
print(full_response)
# Returns: {"response": "...", "model_used": "...", "tokens_used": 123, ...}
```

### Embeddings

#### Basic Usage (Returns only embeddings vector)
```python
from langchain_zunno import ZunnoLLMEmbeddings

# Create an embeddings instance
embeddings = ZunnoLLMEmbeddings(model_name="mistral:latest")

# Get embeddings for a single text
embedding = embeddings.embed_query("Hello, how are you?")
print(f"Embedding dimension: {len(embedding)}")

# Get embeddings for multiple texts
texts = ["Hello world", "How are you?", "Good morning"]
embeddings_list = embeddings.embed_documents(texts)
print(f"Number of embeddings: {len(embeddings_list)}")
```
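Embedding vectors like those returned by `embed_query` are commonly compared with cosine similarity. The helper below is a generic sketch, not part of this package:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With real vectors you would compare two embed_query results, e.g.:
# sim = cosine_similarity(embeddings.embed_query("cat"),
#                         embeddings.embed_query("dog"))
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
```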

#### Full Response Mode (Returns complete API response)
```python
from langchain_zunno import ZunnoLLMEmbeddings

# Create an embeddings instance with full response
embeddings = ZunnoLLMEmbeddings(
    model_name="mistral:latest",
    return_full_response=True
)

# Get complete API response
full_response = embeddings.embed_query("Hello, how are you?")
print(full_response)
# Returns: {"embeddings": [...], "model_used": "...", "embedding_dimension": 4096, ...}
```

### Async Usage

```python
import asyncio
from langchain_zunno import ZunnoLLM, ZunnoLLMEmbeddings

async def main():
    # Async LLM
    llm = ZunnoLLM(model_name="mistral:latest")
    response = await llm.ainvoke("Hello, how are you?")
    print(response)
    
    # Async embeddings
    embeddings = ZunnoLLMEmbeddings(model_name="mistral:latest")
    embedding = await embeddings.aembed_query("Hello, how are you?")
    print(f"Embedding dimension: {len(embedding)}")

asyncio.run(main())
```
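The async methods can be fanned out over many inputs with `asyncio.gather`. The sketch below uses a stub coroutine in place of `aembed_query` to show the pattern; in real code you would call the method on a `ZunnoLLMEmbeddings` instance instead:

```python
import asyncio

async def fake_aembed_query(text):
    """Stand-in for embeddings.aembed_query; returns a dummy vector."""
    await asyncio.sleep(0)  # simulate network I/O
    return [float(len(text))]

async def embed_all(texts):
    # Issue all embedding requests concurrently and collect results in order
    return await asyncio.gather(*(fake_aembed_query(t) for t in texts))

vectors = asyncio.run(embed_all(["Hello world", "How are you?", "Good morning"]))
print(len(vectors))  # 3
```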

## Factory Functions

For convenience, you can use factory functions to create instances:

```python
from langchain_zunno import create_zunno_llm, create_zunno_embeddings

# Create LLM with full response
llm = create_zunno_llm(
    model_name="mistral:latest",
    temperature=0.7,
    max_tokens=100,
    return_full_response=True
)

# Create embeddings with full response
embeddings = create_zunno_embeddings(
    model_name="mistral:latest",
    return_full_response=True
)
```

## Configuration

### LLM Configuration

- `model_name`: The name of the model to use
- `base_url`: API endpoint (default: "http://15.206.124.44/v1/prompt-response")
- `temperature`: Controls randomness in generation (default: 0.7)
- `max_tokens`: Maximum number of tokens to generate (optional)
- `timeout`: Request timeout in seconds (default: 300)
- `return_full_response`: Return complete API response instead of just text (default: False)

### Embeddings Configuration

- `model_name`: The name of the embedding model to use
- `base_url`: API endpoint (default: "http://15.206.124.44/v1/text-embeddings")
- `timeout`: Request timeout in seconds (default: 300)
- `return_full_response`: Return complete API response instead of just embeddings (default: False)
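For reference, the documented LLM defaults can be captured in a small dataclass. The class below is illustrative only (it mirrors the parameter lists above and is not part of the package):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ZunnoLLMConfig:
    """Illustrative record of the documented ZunnoLLM parameters."""
    model_name: str
    base_url: str = "http://15.206.124.44/v1/prompt-response"
    temperature: float = 0.7
    max_tokens: Optional[int] = None  # optional; no documented default
    timeout: int = 300  # seconds
    return_full_response: bool = False

cfg = ZunnoLLMConfig(model_name="mistral:latest")
print(cfg.temperature)  # 0.7
```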

## Response Modes

### Basic Mode (Default)
- **LLM**: Returns only the generated text
- **Embeddings**: Returns only the embeddings vector

### Full Response Mode
- **LLM**: Returns complete JSON response with all API fields
- **Embeddings**: Returns complete JSON response with all API fields

Example full response for LLM:
```json
{
  "response": "Hello! I'm doing well, thank you for asking.",
  "model_used": "mistral:latest",
  "tokens_used": 15,
  "prompt_tokens": 5,
  "completion_tokens": 10,
  "total_tokens": 15
}
```

Example full response for embeddings:
```json
{
  "embeddings": [0.123, -0.456, 0.789, ...],
  "model_used": "mistral:latest",
  "embedding_dimension": 4096,
  "normalized": true
}
```
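Code that may receive either mode can normalize the two shapes with a small helper. The field name follows the example responses above; this is a sketch, not a package API:

```python
def extract_text(result):
    """Return the generated text whether `result` is a plain string
    (basic mode) or a full-response dict (return_full_response=True)."""
    if isinstance(result, dict):
        return result["response"]
    return result

print(extract_text("hi"))                                   # hi
print(extract_text({"response": "hi", "tokens_used": 15}))  # hi
```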

## API Endpoints

The package connects to the following Zunno API endpoints:

- **Text Generation**: `http://15.206.124.44/v1/prompt-response`
- **Embeddings**: `http://15.206.124.44/v1/text-embeddings`

## Error Handling

API and network failures surface as exceptions, so wrap calls in `try`/`except`:

```python
try:
    response = llm.invoke("Hello")
except Exception as e:
    print(f"Error: {e}")
```
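Transient network failures can also be retried with exponential backoff. The wrapper below is generic (not provided by the package) and works with any zero-argument callable, e.g. `lambda: llm.invoke("Hello")`; here a flaky stub stands in for the real call:

```python
import time

def with_retries(call, attempts=3, base_delay=1.0):
    """Call `call()`, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; re-raise the last error
            time.sleep(base_delay * (2 ** attempt))

# Flaky stub: fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky, attempts=3, base_delay=0))  # ok
```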

## Development

### Installation for Development

```bash
git clone https://github.com/zunno/langchain-zunno.git
cd langchain-zunno
pip install -e ".[dev]"
```

### Running Tests

```bash
pytest
```

### Code Formatting

```bash
black .
isort .
```

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request

## Support

For support, please open an issue on GitHub or contact us at support@zunno.ai.
