# Python Vercel LLM API
[![PyPi Version](https://img.shields.io/pypi/v/vercel-llm-api.svg)](https://pypi.org/project/vercel-llm-api/)
This is a reverse engineered API wrapper for the [Vercel AI Playground](https://play.vercel.ai/), which allows free access to many LLMs, including OpenAI's ChatGPT and Cohere's Command Nightly, as well as some open source models.
## Table of Contents:
- [Features](#features)
- [Limitations](#limitations)
- [Installation](#installation)
- [Documentation](#documentation)
* [Using the Client](#using-the-client)
+ [Downloading the Available Models](#downloading-the-available-models)
+ [Generating Text](#generating-text)
+ [Generating Chat Messages](#generating-chat-messages)
* [Misc](#misc)
+ [Changing the Logging Level](#changing-the-logging-level)
- [Copyright](#copyright)
* [Copyright Notice](#copyright-notice)
*Table of contents generated with [markdown-toc](http://ecotrust-canada.github.io/markdown-toc).*
## Features:
- Download the available models
- Generate text
- Generate chat messages
- Set custom parameters
- Stream the responses
## Limitations:
- User-agent is hardcoded
- No auth support
- Can't use "pro" or "hobby" models
## Installation:
You can install this library by running the following command:
```
pip3 install vercel-llm-api
```
## Documentation:
Examples can be found in the `/examples` directory. To run these examples, simply execute the included Python files from your terminal.
```
python3 examples/generate.py
```
### Using the Client:
To use this library, simply import `vercel_ai` and create a `vercel_ai.Client` instance. You can specify a proxy using the `proxy` keyword argument.
Normal example:
```python
import vercel_ai
client = vercel_ai.Client()
```
Proxied example:
```python
import vercel_ai
client = vercel_ai.Client(proxy="socks5h://193.29.62.48:11003")
```
Note that the following examples assume `client` is the name of your `vercel_ai.Client` instance.
#### Downloading the Available Models:
The client downloads the available models upon initialization, and stores them in `client.models`.
```python
>>> import json
>>> print(json.dumps(client.models, indent=2))

{
  "anthropic:claude-instant-v1": {
    "id": "anthropic:claude-instant-v1", #the model's id
    "provider": "anthropic", #the model's provider
    "providerHumanName": "Anthropic", #the provider's display name
    "makerHumanName": "Anthropic", #the maker of the model
    "minBillingTier": "hobby", #the minimum billing tier needed to use the model
    "parameters": { #a dict of optional parameters that can be passed to the generate function
      "temperature": { #the name of the parameter
        "value": 1, #the default value for the parameter
        "range": [0, 1] #a range of possible values for the parameter
      },
      ...
    }
    ...
  }
}
```
Note that, since auth is not supported yet, any model with the `"minBillingTier"` property present can't be used.
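For instance, here is a quick way to filter out the locked models (a minimal sketch, assuming each value in `client.models` is a dict shaped like the one shown above):
```python
# Minimal sketch: keep only the models usable without auth, i.e. those
# whose metadata has no "minBillingTier" key.
usable_models = [
  model_id
  for model_id, info in client.models.items()
  if "minBillingTier" not in info
]
print(usable_models)
```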
A list of model IDs is also available in `client.model_ids`.
```python
>>> print(json.dumps(client.model_ids, indent=2))
[
  "anthropic:claude-instant-v1", #locked to hobby tier; unusable
  "anthropic:claude-v1", #locked to hobby tier; unusable
  "replicate:replicate/alpaca-7b",
  "replicate:stability-ai/stablelm-tuned-alpha-7b",
  "huggingface:bigscience/bloom",
  "huggingface:bigscience/bloomz",
  "huggingface:google/flan-t5-xxl",
  "huggingface:google/flan-ul2",
  "huggingface:EleutherAI/gpt-neox-20b",
  "huggingface:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
  "huggingface:bigcode/santacoder",
  "cohere:command-medium-nightly",
  "cohere:command-xlarge-nightly",
  "openai:gpt-4", #locked to pro tier; unusable
  "openai:code-cushman-001",
  "openai:code-davinci-002",
  "openai:gpt-3.5-turbo",
  "openai:text-ada-001",
  "openai:text-babbage-001",
  "openai:text-curie-001",
  "openai:text-davinci-002",
  "openai:text-davinci-003"
]
```
A dict of default parameters for each model can be found in `client.model_defaults`.
```python
>>> print(json.dumps(client.model_defaults, indent=2))
{
  "anthropic:claude-instant-v1": {
    "temperature": 1,
    "maximumLength": 200,
    "topP": 1,
    "topK": 1,
    "presencePenalty": 1,
    "frequencyPenalty": 1,
    "stopSequences": [
      "\n\nHuman:"
    ]
  },
  ...
}
```
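For example, to inspect the defaults for a single model before overriding any of them (a minimal sketch; the model ID is assumed to be present in `client.model_defaults`):
```python
# Look up the default parameters for one model; the keys follow the
# structure shown in the dump above.
defaults = client.model_defaults["openai:gpt-3.5-turbo"]
print(defaults)
```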
#### Generating Text:
To generate some text, use the `client.generate` function, which accepts the following arguments:
- `model` - The ID of the model you want to use.
- `prompt` - Your prompt.
- `params = {}` - A dict of optional parameters. See the previous section for how to find these.
The function is a generator which yields the newly generated text in chunks, each as a string.
Streamed Example:
```python
for chunk in client.generate("openai:gpt-3.5-turbo", "Summarize the GNU GPL v3"):
  print(chunk, end="", flush=True)
```
Non-Streamed Example:
```python
result = ""
for chunk in client.generate("openai:gpt-3.5-turbo", "Summarize the GNU GPL v3"):
  result += chunk
print(result)
```
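To pass custom parameters, supply a `params` dict using the names from the model's `parameters` entry (an untested sketch; the parameter names shown are assumed from the defaults listed in the previous section):
```python
# Untested sketch: override two of the optional parameters. The names
# ("temperature", "maximumLength") are taken from the model defaults shown
# in the "Downloading the Available Models" section.
params = {
  "temperature": 0.7,
  "maximumLength": 300
}
for chunk in client.generate("openai:gpt-3.5-turbo", "Summarize the GNU GPL v3", params=params):
  print(chunk, end="", flush=True)
```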
#### Generating Chat Messages:
To generate chat messages, use the `client.chat` function, which accepts the following arguments:
- `model` - The ID of the model you want to use.
- `messages` - A list of messages. The format for this is identical to how you would use the official OpenAI API.
- `params = {}` - A dict of optional parameters. See the "Downloading the Available Models" section for how to find these.
The function is a generator which yields the newly generated text in chunks, each as a string.
```python
messages = [
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Who won the world series in 2020?"},
  {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
  {"role": "user", "content": "Where was it played?"}
]
for chunk in client.chat("openai:gpt-3.5-turbo", messages):
  print(chunk, end="", flush=True)
print()
```
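As with `client.generate`, you can also collect the yielded chunks into a single string instead of streaming them:
```python
# Non-streamed variant: accumulate the yielded chunks into one string.
result = ""
for chunk in client.chat("openai:gpt-3.5-turbo", messages):
  result += chunk
print(result)
```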
### Misc:
#### Changing the Logging Level:
If you want to see the library's log messages, simply call `vercel_ai.logger.setLevel`.
```python
import vercel_ai
import logging
vercel_ai.logger.setLevel(logging.INFO)
```
## Copyright:
This program is licensed under the [GNU GPL v3](https://github.com/ading2210/vercel-llm-api/blob/main/LICENSE). All code has been written by me, [ading2210](https://github.com/ading2210).
### Copyright Notice:
```
ading2210/vercel-llm-api: a reverse engineered API wrapper for the Vercel AI Playground
Copyright (C) 2023 ading2210
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```