langchain-llm-config

Name: langchain-llm-config
Version: 0.1.5
Summary: A comprehensive LLM configuration package supporting multiple providers (OpenAI, VLLM, Gemini, Infinity) for chat assistants and embeddings
Author: Xingbang Liu <xingbangliu48@gmail.com>
Upload time: 2025-07-23 11:13:48
Requires Python: >=3.9
License: MIT
Keywords: assistant, chat, embeddings, gemini, langchain, llm, openai, vllm

# Langchain LLM Config

Yet another redundant Langchain abstraction: a comprehensive Python package for managing and using multiple LLM providers (OpenAI, VLLM, Gemini, Infinity) with a unified interface for both chat assistants and embeddings.

[![PyPI version](https://badge.fury.io/py/langchain-llm-config.svg)](https://badge.fury.io/py/langchain-llm-config) [![Python package](https://github.com/liux2/Langchain-LLM-Config/actions/workflows/python-package.yml/badge.svg?branch=main)](https://github.com/liux2/Langchain-LLM-Config/actions/workflows/python-package.yml)

## Features

- šŸ¤– **Multiple Chat Providers**: Support for OpenAI, VLLM, and Gemini
- šŸ”— **Multiple Embedding Providers**: Support for OpenAI, VLLM, and Infinity
- āš™ļø **Unified Configuration**: Single YAML configuration file for all providers
- šŸš€ **Easy Setup**: CLI tool for quick configuration initialization
- šŸ”„ **Easy Context Concatenation**: Straightforward way to pass additional context into chat queries
- šŸ”’ **Environment Variables**: Secure API key management
- šŸ“¦ **Self-Contained**: Everything is importable from the top-level package; no deep module paths needed
- ⚔ **Async Support**: Full async/await support for all operations
- 🌊 **Streaming Chat**: Real-time streaming responses for interactive experiences
- šŸ› ļø **Enhanced CLI**: Environment setup and validation commands

## Installation

### Using pip

```bash
pip install langchain-llm-config
```

### Using uv (recommended)

```bash
uv add langchain-llm-config
```

### Development installation

```bash
git clone https://github.com/liux2/Langchain-LLM-Config.git
cd langchain-llm-config
uv sync --dev
uv run pip install -e .
```

## Quick Start

### 1. Initialize Configuration

```bash
# Initialize config in current directory
llm-config init

# Or specify a custom location
llm-config init ~/.config/api.yaml
```

This creates an `api.yaml` file with all supported providers configured.

### 2. Set Up Environment Variables

```bash
# Set up environment variables and create .env file
llm-config setup-env

# Or write the generated .env to a custom path
llm-config setup-env --config-path ~/.config/.env
```

This creates a `.env` file with placeholders for your API keys.

### 3. Configure Your Providers

Edit the generated `api.yaml` file with your API keys and settings:

```yaml
llm:
  openai:
    chat:
      api_base: "https://api.openai.com/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "gpt-3.5-turbo"
      temperature: 0.7
      max_tokens: 8192
    embeddings:
      api_base: "https://api.openai.com/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "text-embedding-ada-002"
  
  vllm:
    chat:
      api_base: "http://localhost:8000/v1"
      api_key: "${OPENAI_API_KEY}"
      model_name: "meta-llama/Llama-2-7b-chat-hf"
      temperature: 0.6
  
  default:
    chat_provider: "openai"
    embedding_provider: "openai"
```
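
The `default` block selects which provider each factory falls back to when you don't pass `provider=` explicitly. A minimal sketch, assuming `create_assistant` resolves `llm.default.chat_provider` when no provider is named:

```python
from langchain_llm_config import create_assistant

# No provider argument: assumed to fall back to llm.default.chat_provider
# ("openai" in the config above)
assistant = create_assistant(response_model=None)
```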

### 4. Set Environment Variables

Edit the `.env` file with your actual API keys:

```bash
OPENAI_API_KEY=your-openai-api-key
GEMINI_API_KEY=your-gemini-api-key
```
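
If your application doesn't load `.env` automatically, you can do it yourself with python-dotenv (a separate dependency, assumed installed) before creating any providers, so the `${OPENAI_API_KEY}` placeholders in `api.yaml` can resolve:

```python
from dotenv import load_dotenv

# Copies KEY=value pairs from .env into os.environ for this process
load_dotenv()
```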

### 5. Use in Your Code

#### Basic Usage (Synchronous)

```python
from langchain_llm_config import create_assistant, create_embedding_provider
from pydantic import BaseModel, Field
from typing import List


# Define your response model
class ArticleAnalysis(BaseModel):
    summary: str = Field(..., description="Article summary")
    keywords: List[str] = Field(..., description="Key topics")
    sentiment: str = Field(..., description="Overall sentiment")


# Create an assistant without response model (raw text mode)
assistant = create_assistant(
    response_model=None,  # Explicitly set to None for raw text
    system_prompt="You are a helpful article analyzer.",
    provider="openai",  # or "vllm", "gemini"
    auto_apply_parser=False,
)

# Use the assistant for raw text output
print("=== Raw Text Mode ===")
result = assistant.ask("Analyze this article: ...")
print(result)

# Apply parser to the same assistant (modifies in place)
print("\n=== Applying Parser ===")
assistant.apply_parser(response_model=ArticleAnalysis)

# Now use the same assistant for structured output
print("\n=== Structured Mode ===")
result = assistant.ask("Analyze this article: ...")
print(result)

# Create an embedding provider
embedding_provider = create_embedding_provider(provider="openai")

# Get embeddings (synchronous)
texts = ["Hello world", "How are you?"]
embeddings = embedding_provider.embed_texts(texts)
```

#### Advanced Usage (Asynchronous)

```python
import asyncio

# Reuses `assistant`, `embedding_provider`, and `texts` from the
# Basic Usage example above


async def main() -> None:
    # Use the assistant (asynchronous)
    result = await assistant.ask_async("Analyze this article: ...")
    print(result["summary"])

    # Get embeddings (asynchronous)
    embeddings = await embedding_provider.embed_texts_async(texts)


asyncio.run(main())
```

#### Streaming Chat

```python
import asyncio
from langchain_llm_config import create_chat_streaming


async def main():
    """Main async function to run the streaming chat example"""
    # Create a streaming chat assistant (vLLM provider here;
    # swap in provider="openai" to test streaming against OpenAI)
    streaming_chat = create_chat_streaming(
        provider="vllm", system_prompt="You are a helpful assistant."
    )

    print("šŸ¤– Starting streaming chat...")
    print("Response: ", end="", flush=True)

    try:
        # Stream responses in real-time
        async for chunk in streaming_chat.chat_stream("Tell me a story"):
            if chunk["type"] == "stream":
                print(chunk["content"], end="", flush=True)
            elif chunk["type"] == "final":
                print(f"\n\nProcessing time: {chunk['processing_time']:.2f}s")
                print(f"Model used: {chunk['model_used']}")
    except Exception as e:
        print(f"\nāŒ Error occurred: {e}")


if __name__ == "__main__":
    # Run the async function
    asyncio.run(main())
```
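
Each yielded chunk is a dict: chunks with `type == "stream"` carry an incremental `content` piece, while the single `type == "final"` chunk carries metadata such as `processing_time` and `model_used`, exactly as the example above reads them.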

## Supported Providers

### Chat Providers

| Provider | Models | Features |
|----------|--------|----------|
| **OpenAI** | GPT-3.5, GPT-4, etc. | Streaming, function calling, structured output |
| **VLLM** | Any HuggingFace model | Local deployment, high performance |
| **Gemini** | Gemini Pro, etc. | Google's latest models |

### Embedding Providers

| Provider | Models | Features |
|----------|--------|----------|
| **OpenAI** | text-embedding-ada-002, etc. | High quality, reliable |
| **VLLM** | BGE, sentence-transformers | Local deployment |
| **Infinity** | Various embedding models | Fast inference |
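
Assuming the provider keys match the lowercase names above, switching embedding backends is a one-line change:

```python
from langchain_llm_config import create_embedding_provider

# "infinity" is assumed to be the key for the Infinity backend
embedding_provider = create_embedding_provider(provider="infinity")
vectors = embedding_provider.embed_texts(["fast local embeddings"])
```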

## CLI Commands

```bash
# Initialize a new configuration file
llm-config init [path]

# Set up environment variables and create .env file
llm-config setup-env [path] [--force]

# Validate existing configuration
llm-config validate [path]

# Show package information
llm-config info
```

## Advanced Usage

### Custom Configuration Path

```python
from langchain_llm_config import create_assistant

assistant = create_assistant(
    response_model=MyModel,
    config_path="/path/to/custom/api.yaml"
)
```

### Context-Aware Conversations

```python
# Add context to your queries (run this inside an async function)
result = await assistant.ask_async(
    query="What are the main points?",
    context="This is a research paper about machine learning...",
    extra_system_prompt="Focus on technical details."
)
```

### Direct Provider Usage

```python
from langchain_llm_config import VLLMAssistant, OpenAIEmbeddingProvider

# Use providers directly
vllm_assistant = VLLMAssistant(
    config={"api_base": "http://localhost:8000/v1", "model_name": "llama-2"},
    response_model=MyModel
)

openai_embeddings = OpenAIEmbeddingProvider(
    config={"api_key": "your-key", "model_name": "text-embedding-ada-002"}
)
```
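
Assuming the direct classes expose the same methods as their factory-created counterparts, usage follows the earlier examples:

```python
# Same call patterns as the factory-created objects above
result = vllm_assistant.ask("Summarize this document: ...")
vectors = openai_embeddings.embed_texts(["Hello world"])
```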

### Complete Example with Error Handling

```python
import asyncio
from langchain_llm_config import create_assistant, create_embedding_provider
from pydantic import BaseModel, Field
from typing import List

class ChatResponse(BaseModel):
    message: str = Field(..., description="The assistant's response message")
    confidence: float = Field(..., description="Confidence score", ge=0.0, le=1.0)
    suggestions: List[str] = Field(default_factory=list, description="Follow-up questions")

async def main():
    try:
        # Create assistant
        assistant = create_assistant(
            response_model=ChatResponse,
            provider="openai",
            system_prompt="You are a helpful AI assistant."
        )
        
        # Chat conversation
        response = await assistant.ask_async("What is the capital of France?")
        print(f"Assistant: {response['message']}")
        print(f"Confidence: {response['confidence']:.2f}")
        
        # Create embedding provider
        embedding_provider = create_embedding_provider(provider="openai")
        
        # Get embeddings
        texts = ["Hello world", "How are you?"]
        embeddings = await embedding_provider.embed_texts_async(texts)
        print(f"Generated {len(embeddings)} embeddings")
        
    except Exception as e:
        print(f"Error: {e}")

# Run the example
asyncio.run(main())
```

## Configuration Reference

### Environment Variables

The package supports environment variable substitution in configuration:

```yaml
api_key: "${OPENAI_API_KEY}"  # Will be replaced with actual value
```
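
This is plain `${VAR}` interpolation from the process environment. The package's own loader may differ, but as a minimal sketch, one way to implement it is a regex pass over each string value:

```python
import os
import re

_VAR = re.compile(r"\$\{([^}]+)\}")


def substitute_env(value: str) -> str:
    """Replace ${NAME} with os.environ["NAME"], leaving unknown names intact."""
    return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)


# With OPENAI_API_KEY set, substitute_env("${OPENAI_API_KEY}") returns its value
```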

### Configuration Structure

```yaml
llm:
  provider_name:
    chat:
      api_base: "https://api.example.com/v1"
      api_key: "${API_KEY}"
      model_name: "model-name"
      temperature: 0.7
      max_tokens: 8192
      top_p: 1.0
      connect_timeout: 60
      read_timeout: 60
      model_kwargs: {}
      # ... other parameters
    embeddings:
      api_base: "https://api.example.com/v1"
      api_key: "${API_KEY}"
      model_name: "embedding-model"
      # ... other parameters
  default:
    chat_provider: "provider_name"
    embedding_provider: "provider_name"
```
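
Combined with the substitution above, resolving the default chat configuration is a couple of dictionary lookups. A minimal sketch using PyYAML (assumed available; the package's own loader may differ):

```python
import yaml

with open("api.yaml") as fh:
    cfg = yaml.safe_load(fh)

default_chat = cfg["llm"]["default"]["chat_provider"]  # e.g. "openai"
chat_cfg = cfg["llm"][default_chat]["chat"]  # that provider's chat section
print(chat_cfg["model_name"])
```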

## Development

### Running Tests

```bash
uv run pytest
```

### Code Formatting

```bash
uv run black .
uv run isort .
```

### Type Checking

```bash
uv run mypy .
```

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request

## License

MIT License - see [LICENSE](LICENSE) file for details.

## Support

- šŸ“– [Documentation](https://github.com/liux2/Langchain-LLM-Config#readme)
- šŸ› [Issue Tracker](https://github.com/liux2/Langchain-LLM-Config/issues)
- šŸ’¬ [Discussions](https://github.com/liux2/Langchain-LLM-Config/discussions)
