AllLLMs

- Name: AllLLMs
- Version: 0.1.1
- Summary: A wrapper for various large language models including GPT, Claude, and Gemini
- Author: Jayam Gupta
- Home page: https://github.com/yourusername/llm-wrapper
- Requires Python: >=3.6
- Upload time: 2024-08-19 11:04:36
# LLM-Wrapper

LLM-Wrapper is a Python package that provides a unified interface for interacting with multiple Large Language Models (LLMs) including ChatGPT, Claude, and Gemini.

## Features

- Easy initialization of LLM clients
- Unified interface for generating outputs from different LLMs
- Support for multiple models within each LLM platform
- API key validation during initialization

## Installation

```bash
pip install AllLLMs
```

## Usage

### Initializing LLMs

You can initialize LLMs individually or all at once:

```python
from LLM import Initialize

# Initialize ChatGPT
Initialize.init_chatgpt("your_openai_api_key")

# Initialize Claude
Initialize.init_claude("your_anthropic_api_key")

# Initialize Gemini
Initialize.init_gemini("your_gemini_api_key")

# Initialize all LLMs at once
Initialize.init_all(
    chatgpt_api_key="your_openai_api_key",
    claude_api_key="your_anthropic_api_key",
    gemini_api_key="your_gemini_api_key"
)
```

Note: During initialization, a few tokens are used to verify that the provided API key is correct.
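Because initialization validates every key (spending a few tokens per provider), it is worth reading keys from the environment rather than hardcoding them. A minimal sketch; the environment-variable names and the `load_api_keys` helper are conventions assumed here, not something AllLLMs requires:

```python
import os

# Hypothetical helper (not part of AllLLMs): read each provider's key from
# environment variables instead of hardcoding it, falling back to the
# placeholder string so the snippet runs without real credentials.
def load_api_keys():
    return {
        "chatgpt_api_key": os.environ.get("OPENAI_API_KEY", "your_openai_api_key"),
        "claude_api_key": os.environ.get("ANTHROPIC_API_KEY", "your_anthropic_api_key"),
        "gemini_api_key": os.environ.get("GEMINI_API_KEY", "your_gemini_api_key"),
    }

keys = load_api_keys()
# Initialize.init_all(**keys)  # uncomment once AllLLMs is installed
```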

### Generating Output

```python
from LLM import Output

# Generate output using ChatGPT
gpt_response = Output.GPT("Tell me a joke about programming.")

# Generate output using Claude
claude_response = Output.Claude("Explain quantum computing in simple terms.")

# Generate output using Gemini
gemini_response = Output.Gemini("What are the benefits of renewable energy?")
```

### Customizing Model Parameters

You can customize model parameters when generating output:

```python
# Using a specific GPT model with custom temperature and max tokens
gpt_response = Output.GPT(
    "Summarize the history of artificial intelligence.",
    model="gpt-4o-mini-2024-07-18",
    temperature=0.7,
    max_tokens=2048
)

# Using a specific Claude model with custom temperature and max tokens
claude_response = Output.Claude(
    "Describe the process of photosynthesis.",
    model="claude-3-5-sonnet-20240620",
    temperature=0.5,
    max_tokens=1000
)
```

## Available Models

You can get information about available LLM models using the `get_llm_info()` function:

```python
from LLM.LLMModels import get_llm_info

llm_info = get_llm_info()
```

To get just the list of available models, you can use the `LLM_MODELS` dictionary:

```python
from LLM.LLMModels import LLM_MODELS

available_models = LLM_MODELS
```
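The package does not document the exact shape of `LLM_MODELS`. Assuming it maps a platform name to a list of model identifiers, listing the models could look like the sketch below; the dictionary contents are hypothetical stand-ins built from model names quoted elsewhere in this README:

```python
# Hypothetical stand-in for LLM.LLMModels.LLM_MODELS; the real
# dictionary's keys and shape may differ.
llm_models = {
    "ChatGPT": ["gpt-4o-mini-2024-07-18"],
    "Claude": ["claude-3-5-sonnet-20240620"],
}

def list_models(models):
    # Flatten the mapping into "platform: model, model" lines.
    return [f"{platform}: {', '.join(names)}" for platform, names in models.items()]

for line in list_models(llm_models):
    print(line)
```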

Note: While the `get_llm_info()` function includes information about image upload support, this functionality is not currently implemented in the LLM-Wrapper.

## Error Handling

The package includes error handling for invalid API keys, unsupported models, and incorrect parameter values. Make sure to handle these exceptions in your code.
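Since the package does not document its exception classes, a defensive caller might catch broadly and return a fallback. A sketch under that assumption; `safe_generate` is a hypothetical helper, not part of AllLLMs:

```python
def safe_generate(generate, prompt, fallback=None):
    """Call a generator such as Output.GPT, returning `fallback` on failure.

    `generate` is any callable taking a prompt string. AllLLMs does not
    document its exception types, so this catches Exception broadly.
    """
    try:
        return generate(prompt)
    except Exception as exc:
        print(f"Generation failed: {exc}")
        return fallback

# Usage (once AllLLMs is initialized):
# joke = safe_generate(Output.GPT, "Tell me a joke.", fallback="No joke today.")
```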

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

This project is licensed under the MIT License.

