langchain-runpod-llm


Name: langchain-runpod-llm
Version: 0.0.5
Home page: https://github.com/tsangwailam/langchain-runpod-llm
Summary: 🐍 | Python library for LangChain using a RunPod API endpoint as the LLM.
Upload time: 2024-07-13 17:17:18
Maintainer: None
Docs URL: None
Author: William Tsang
Requires Python: >=3.9
License: MIT License
Keywords: runpod, ai, langchain, llm, llama2, sdk, api, python, library
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
# Runpod LLM API Endpoint Lib for LangChain
[![PyPI version](https://badge.fury.io/py/langchain-runpod-llm.svg)](https://badge.fury.io/py/langchain-runpod-llm)

## Installation

```
# Install the latest release version
pip install langchain-runpod-llm

# or

# Install the latest development version (main branch)
pip install git+https://github.com/tsangwailam/langchain-runpod-llm
```
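
If the install succeeded, importing the class used throughout this README is a quick sanity check (note that the top-level module is `runpod_llm`, not the PyPI distribution name):

```python
# Quick post-install check: this import should succeed without errors.
from runpod_llm import RunpodLlama2

print(RunpodLlama2)  # prints the class object if the package is importable
```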

## Get a RunPod API key

1. Go to www.runpod.io and create a RunPod account.
2. From the portal, go to Settings > API Keys.
3. Create a new API key by clicking the "+ API Key" button. The snippet below shows one way to keep the key out of your code.
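
Rather than hard-coding the key, a common pattern (not specific to this library) is to export it as an environment variable and read it at runtime. The variable name `RUNPOD_API_KEY` below is illustrative, not something the library requires:

```python
import os

# Assumes the key was exported beforehand, e.g. `export RUNPOD_API_KEY=...`.
runpod_api_key = os.environ["RUNPOD_API_KEY"]
```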

## Usage

```python
from runpod_llm import RunpodLlama2

llm = RunpodLlama2(
    apikey="YOUR_RUNPOD_API_KEY",
    llm_type="7b",  # "7b" or "13b"
    config={
        # Maximum number of tokens to generate per output sequence.
        "max_tokens": 500,
        # Number of output sequences to return for the given prompt.
        "n": 1,
        # Number of output sequences generated from the prompt. From these
        # best_of sequences, the top n are returned. best_of must be >= n.
        # Treated as the beam width when use_beam_search is True; defaults to n.
        "best_of": 1,
        # Penalizes new tokens based on whether they already appear in the
        # generated text. Values > 0 encourage new tokens; values < 0
        # encourage repetition.
        "presence_penalty": 0.2,
        # Penalizes new tokens based on their frequency in the generated text.
        # Values > 0 encourage new tokens; values < 0 encourage repetition.
        "frequency_penalty": 0.5,
        # Controls sampling randomness. Lower values are more deterministic;
        # zero means greedy sampling.
        "temperature": 0.3,
        # Cumulative probability of the top tokens to consider. Must be in
        # (0, 1]; set to 1 to consider all tokens.
        "top_p": 1,
        # Number of top tokens to consider; set to -1 to consider all tokens.
        "top_k": -1,
        # Whether to use beam search instead of sampling.
        "use_beam_search": False,
    },
    verbose=True,  # verbose output
)

some_prompt_template = xxxxx  # e.g. a LangChain prompt template (see below)
output_chain = some_prompt_template | llm
output_chain.invoke({"input": "some input to prompt template"})
```
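
For a concrete version of the chain above, the sketch below fills in `some_prompt_template` with a LangChain `PromptTemplate` from `langchain-core` and reads the API key from the environment as suggested earlier. The trimmed-down `config` is an assumption; if the library expects the full dictionary, copy the one from the example above.

```python
import os

from langchain_core.prompts import PromptTemplate
from runpod_llm import RunpodLlama2

llm = RunpodLlama2(
    apikey=os.environ["RUNPOD_API_KEY"],  # illustrative variable name
    llm_type="13b",
    config={
        "max_tokens": 200,   # assumed: a partial config is accepted
        "temperature": 0.3,
    },
)

# Prompt with a single {input} variable, matching the invoke() call below.
prompt = PromptTemplate.from_template("Answer concisely: {input}")

chain = prompt | llm
print(chain.invoke({"input": "What does RunPod provide?"}))
```

The `prompt | llm` composition and the `invoke({"input": ...})` call mirror the chaining pattern shown in the example above.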

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/tsangwailam/langchain-runpod-llm",
    "name": "langchain-runpod-llm",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": null,
    "keywords": "runpod, ai, langchain, llm, llama2, SDK, API, python, library",
    "author": "William Tsang",
    "author_email": "William Tsang <contact@williamtsang.me>",
    "download_url": "https://files.pythonhosted.org/packages/64/61/e0db8a82ae97f0553947ba7cc98c8344c819b5a678e0fb7e82945002c5e8/langchain_runpod_llm-0.0.5.tar.gz",
    "platform": null,
    "description": "# Runpod LLM API Endpoint Lib for LangChain\n[![PyPI version](https://badge.fury.io/py/langchain-runpod-llm.svg)](https://badge.fury.io/py/langchain-runpod-llm)\n\n## Installation\n\n```\n# Install the latest release version\npip install runpod-llm\n\n# or\n\n# Install the latest development version (main branch)\npip install git+https://https://github.com/tsangwailam/langchain-runpod-llm\n```\n\n## Get Runpod API key\n\n1. Goto www.runpod.io. Create a RunPod account.\n2. From the portal, goto Settings>APIKeys\n3. Create a new API key by click the \"+ API Key\" button.\n\n## Usage\n\n```python\nfrom runpod_llm import RunpodLlama2\n\nllm = RunpodLlama2(\n        apikey=\"YOU_RUNPOD_API_KEY\",\n        llm_type=\"7b|13b\",\n        config={\n            \"max_tokens\": 500, \n            #Maximum number of tokens to generate per output sequence.\n            \"n\": 1,  # Number of output sequences to return for the given prompt.\n            \"best_of\": 1,  # Number of output sequences that are generated from the prompt. From these best_of sequences, the top n sequences are returned. best_of must be greater than or equal to n. This is treated as the beam width when use_beam_search is True. By default, best_of is set to n.\n            \"Presence penalty\": 0.2,  # Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.\n            \"Frequency penalty\": 0.5,  # Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.\n            \"temperature\": 0.3,  # Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.\n            \"top_p\": 1,  # Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.\n            \"top_k\": -1,  # Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.\n            \"use_beam_search\": False,  # Whether to use beam search instead of sampling.\n        },\n        verbose=True, # verbose output\n    )\n\n    some_prompt_template = xxxxx\n    output_chain = some_prompt_template | llm\n    output_chain.invoke({\"input\":\"some input to prompt template\"})\n```\n",
    "bugtrack_url": null,
    "license": "MIT License",
    "summary": "\ud83d\udc0d | Python library for langchain using RunPod API endpoint as LLM.",
    "version": "0.0.5",
    "project_urls": {
        "Bug Tracker": "https://github.com/tsangwailam/langchain-runpod-llm/issues",
        "Changelog": "https://github.com/tsangwailam/langchain-runpod-llm/blob/main/CHANGELOG.md",
        "Documentation": "https://github.com/tsangwailam/langchain-runpod-llm/blob/main/README.md",
        "Homepage": "https://github.com/tsangwailam/langchain-runpod-llm",
        "Repository": "https://github.com/tsangwailam/langchain-runpod-llm"
    },
    "split_keywords": [
        "runpod",
        " ai",
        " langchain",
        " llm",
        " llama2",
        " sdk",
        " api",
        " python",
        " library"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e9645dbeeb2a4dff6770f572655ef1b7420e98ae2ef12e3c92e5b739393b5666",
                "md5": "5a61f7060d9c1a6b8b0f18c569ca089b",
                "sha256": "ee4248db8b46101fafb32b04821af9fb3020806cb35268d381a898112913856b"
            },
            "downloads": -1,
            "filename": "langchain_runpod_llm-0.0.5-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "5a61f7060d9c1a6b8b0f18c569ca089b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 10207,
            "upload_time": "2024-07-13T17:17:16",
            "upload_time_iso_8601": "2024-07-13T17:17:16.478748Z",
            "url": "https://files.pythonhosted.org/packages/e9/64/5dbeeb2a4dff6770f572655ef1b7420e98ae2ef12e3c92e5b739393b5666/langchain_runpod_llm-0.0.5-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6461e0db8a82ae97f0553947ba7cc98c8344c819b5a678e0fb7e82945002c5e8",
                "md5": "5ba62975d01ab6272f35d3b149ba1bca",
                "sha256": "1cc87ef70c3819a1aacacdd092c80c7cb2911c3acb3ba6459452f5dcfb99aded"
            },
            "downloads": -1,
            "filename": "langchain_runpod_llm-0.0.5.tar.gz",
            "has_sig": false,
            "md5_digest": "5ba62975d01ab6272f35d3b149ba1bca",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 9021,
            "upload_time": "2024-07-13T17:17:18",
            "upload_time_iso_8601": "2024-07-13T17:17:18.041081Z",
            "url": "https://files.pythonhosted.org/packages/64/61/e0db8a82ae97f0553947ba7cc98c8344c819b5a678e0fb7e82945002c5e8/langchain_runpod_llm-0.0.5.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-07-13 17:17:18",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "tsangwailam",
    "github_project": "langchain-runpod-llm",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "langchain-runpod-llm"
}
        