api4all

- Name: api4all
- Version: 0.4.0
- Summary: Easy-to-use LLM API from state-of-the-art providers and comparison
- Upload time: 2024-05-10 22:49:40
- Requires Python: >=3.8
- License: MIT (Copyright (c) 2024 Hieu Minh Nguyen)
- Keywords: llm, llmapi, llminference, llmprovider, llmprice, llmlearderboard, llmpricing
# api4all
Easy-to-use LLM API from state-of-the-art providers and comparison.

## Features
- **Easy-to-use**: A simple, consistent API for state-of-the-art language models, used the same way across different providers.
- **Comparison**: Compare the cost and performance of different providers and models, so you can choose the best one for your use case.
- **Log**: Log the response and cost of each request to a log file.
- **Providers**: Support for a wide range of providers, both open-source and closed-source.
- **Result**: See the actual time taken by each request, especially useful when you don't trust the benchmarks.

## Installation

#### 1. Install the package
```bash
pip3 install api4all
```

#### 2. **Optional** - Create and activate a virtual environment
- Unix / macOS
```bash
python3 -m venv venv
source venv/bin/activate
```
- Windows
```bash
python3 -m venv venv
.\venv\Scripts\activate
```

## Quick Start

#### 1. Put the API keys of the providers you want to test in a `.env` file.
```bash
TOGETHER_API_KEY=xxx
OPENAI_API_KEY=xxx
MISTRAL_API_KEY=xxx
ANTHROPIC_API_KEY=xxx
```

or set the environment variables directly:
```bash
export TOGETHER_API_KEY=xxx
export OPENAI_API_KEY=xxx
```

#### 2. Run the code
```python
from api4all import EngineFactory

messages = [
    {"role": "system",
    "content": "You are a helpful assistant for my Calculus class."},
    {"role": "user",
    "content": "What is the current status of the economy?"}
]


engine = EngineFactory.create_engine(provider="together", 
                                    model="google/gemma-7b-it", 
                                    messages=messages, 
                                    temperature=0.9, 
                                    max_tokens=1028, 
                                    )

response = engine.generate_response()

print(response)
```

- There are more examples in the [examples](api4all/examples) folder, or open <a href="https://colab.research.google.com/drive/1nMGqoWIkL2xLlaSE54vOHhpffaHpihY3?usp=sharing"><img src="api4all/img/colab.svg" alt="Open In Colab"></a> to try them in Google Colab.

#### 3. Check the [log file](logfile.log) for the response and the cost of the request.
```log
Request ID - fa8cebd0-265a-44b2-95d7-6ff1588d2c87
	create at: 2024-03-15 16:38:18,129
	INFO - SUCCESS
	
    Response:
		I am not able to provide information about the current status of the economy, as I do not have access to real-time information. Therefore, I recommend checking a reliable source for the latest economic news and data.
	
    Cost: $0.0000154    # Cost of this provider for this request
    Provider: together  # Provider used for this request
    Execution-time: Execution time not provided by the provider
    Actual-time: 0.9448428153991699 # Actual time taken by the request
    Input-token: 33     # Number of tokens used for the input
    Output-token: 44    # Number of tokens used for the output
```
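The logged cost can be reproduced by hand from the token counts and the per-token prices in the tables below (here using Together AI's listed $0.2 input / $0.2 output per million tokens for Gemma 7B it):

```python
# Reproduce the cost shown in the log above from its token counts.
input_tokens, output_tokens = 33, 44
input_price, output_price = 0.2, 0.2  # $ per 1M tokens (Together AI, Gemma 7B it)

cost = (input_tokens * input_price + output_tokens * output_price) / 1_000_000
print(f"${cost:.7f}")  # matches the $0.0000154 in the log
```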

## Providers and Models

### Providers

| Provider | Free Credit | Rate Limit | API Key name | Provider string name |
|:------:|:------:|:------:|:------:|:------:|
|  [Groq](https://wow.groq.com)          |     Unlimited | 30 Requests / Minute  | GROQ_API_KEY | "groq"  |
|  [Anyscale](https://www.anyscale.com)  |     $10      | 30 Requests / Second  |  ANYSCALE_API_KEY | "anyscale"  |
|  [Together AI](https://www.together.ai)|     $25      | 1 Request / Second  | TOGETHER_API_KEY | "together"  | 
|  [Replicate](https://replicate.com)    |     Free to try  | 50 Requests / Second    | REPLICATE_API_KEY | "replicate"  |
|  [Fireworks](https://fireworks.ai)     |     $1      | 600 Requests / Minute  |  FIREWORKS_API_KEY | "fireworks"  |  
|  [Deepinfra](https://deepinfra.com)    |     Free to try     | 200 concurrent requests |  DEEPINFRA_API_KEY | "deepinfra"  |
|  [Lepton](https://www.lepton.ai)    |     $10     | 10 Requests / Minute |  LEPTON_API_KEY | "lepton"  |
|  ------    |     ------     |  ------ |  ------ |  ------  |
|  [Google AI (Vertex AI)](https://ai.google.dev)    |     Unlimited     | 60 Requests / Minute | GOOGLE_API_KEY | "google"  |
|  [OpenAI](http://openai.com)    |     &#x2715;     | 60 Requests / Minute | OPENAI_API_KEY | "openai"  |
|  [Mistral AI](https://mistral.ai)    |     Free to try     | 5 Requests / Second | MISTRAL_API_KEY | "mistral"  |
|  [Anthropic](https://www.anthropic.com)    |     Free to try     | 5 Requests / Minute | ANTHROPIC_API_KEY | "anthropic"  |


- **Free to try**: no credit card required, but limited to a certain number of tokens.
- Rate limits shown are for each provider's free plan; actual limits may differ depending on the plan you choose.
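If you send many requests in a batch, you may need to throttle on the client side to stay under these quotas. A generic sliding-window limiter sketch (not part of api4all; `RateLimiter` is a hypothetical helper):

```python
import time
from collections import deque

class RateLimiter:
    """Block until a call slot is free, given `max_calls` per `period` seconds."""

    def __init__(self, max_calls, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call falls out of the window.
            time.sleep(max(0.0, self.period - (now - self.calls[0])))
            self.calls.popleft()
        self.calls.append(time.monotonic())

limiter = RateLimiter(max_calls=30, period=60.0)  # e.g. Groq's 30 requests/minute
```

Call `limiter.wait()` immediately before each `generate_response()`.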

### Open-source models
| -- | Mixtral-8x7b-Instruct-v0.1 | Gemma 7B it | Mistral-7B-Instruct-v0.1 | LLaMA2-70b | Mistral-7B-Instruct-v0.2 | CodeLlama-70b-Instruct | LLaMA3-8b-Instruct | LLaMA3-70b |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
|  API string name          |     "mistralai/Mixtral-8x7B-Instruct-v0.1"    | "google/gemma-7b-it"    | "mistralai/Mistral-7B-Instruct-v0.1"  | "meta/Llama-2-70b-chat" | "mistralai/Mistral-7B-Instruct-v0.2" | "meta/CodeLlama-2-70b-intruct" | "meta/Llama-3-8b-Instruct" | "meta/Llama-3-80b"
|  Context Length          |     32,768    | 8,192    |  4,096 | 4,096 | 32,768 | 16,384 | 8,192 | 8,192
|  Developer          |     Mistral AI    | Google    |  Mistral AI | Meta | Mistral AI | Meta | Meta | Meta
|  **Cost (Input - Output / MTokens)**          |     ------    | ------    | ------ | ------ | ------ | ------ | ------ | ------ |
|  [Groq](https://wow.groq.com)          |     $0-$0    | $0-$0    | &#x2715; | $0-$0 | &#x2715; | &#x2715; | $0-$0 | $0-$0
|  [Anyscale](https://www.anyscale.com)  |     $0.5-$0.5       | $0.15-$0.15       |  $0.05-$0.25 | $1.0-$1.0 | &#x2715; | $1.0-$1.0 | $0.15-$0.15 | $1.0-$1.0
|  [Together AI](https://www.together.ai)|     $0.6-$0.6        | $0.2-$0.2        | $0.2-$0.2 | $0.9-$0.9 | $0.05-$0.25 | $0.9-$0.9 | $0.2-$0.2 | $0.9-$0.9
|  [Replicate](https://replicate.com)    |     $0.3-$1       | &#x2715;       |  $0.05-$0.25 | $0.65-$2.75 | $0.2-$0.2 | $0.65-$2.75 | $0.05-$0.25 | $0.65-$2.75
|  [Fireworks](https://fireworks.ai)     |     $0.5-$0.5        | &#x2715;        |  $0.2-$0.2  | $0.9-$0.9 | $0.2-$0.2 | $0.9-$0.9 | $0.2-$0.2 | $0.9-$0.9
|  [Deepinfra](https://deepinfra.com)    |     $0.27-$0.27    | $0.13-$0.13    |   $0.13-$0.13 | $0.7-$0.9 | &#x2715; | $0.7-$0.9 | $0.08-$0.08 | $0.59-$0.79
|  [Lepton](https://www.lepton.ai)    |     $0.5-$0.5    | &#x2715;    |   &#x2715; | $0.8-$0.8 | &#x2715; | &#x2715; | $0.07-$0.07 | $0.8-$0.8
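The table above can also be queried programmatically to pick the cheapest provider for a model. A small sketch (prices hand-copied from the Gemma 7B it column, so they will drift as providers change them; `cheapest` is a hypothetical helper, not part of api4all):

```python
# (input, output) prices in $ per 1M tokens for Gemma 7B it, from the table above.
GEMMA_7B_PRICES = {
    "groq": (0.0, 0.0),
    "anyscale": (0.15, 0.15),
    "together": (0.2, 0.2),
    "deepinfra": (0.13, 0.13),
}

def cheapest(prices, input_tokens, output_tokens):
    """Return (provider, cost) with the lowest total cost for a given request size."""
    def cost(provider):
        inp, out = prices[provider]
        return (input_tokens * inp + output_tokens * out) / 1_000_000
    best = min(prices, key=cost)
    return best, cost(best)

print(cheapest(GEMMA_7B_PRICES, 1000, 500))  # → ('groq', 0.0): Groq is free in the table
```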

### Closed-source models
#### 1. Mistral AI

| Model | Input Pricing ($/1M Tokens) | Output Pricing ($/1M Tokens) | Context Length | API string name |
|:------:|:------:|:------:|:------:|:------:|
|  Mistral-7B-Instruct-v0.1          |     $0.25        | $0.25    |  8,192 | "mistral/open-mistral-7b" |
|  Mixtral-8x7b-Instruct-v0.1          |     $0.7        | $0.7    |  8,192 | "mistral/open-mixtral-8x7b" |
|  Mistral Small          |     $2        | $6    |  &#x2715; | "mistral/mistral-small-latest" |
|  Mistral Medium          |     $2.7        | $8.1    |  &#x2715; | "mistral/mistral-medium-latest" |
|  Mistral Large          |     $8        | $24    |  &#x2715; | "mistral/mistral-large-latest" |


#### 2. OpenAI

| Model | Input Pricing ($/1M Tokens) | Output Pricing ($/1M Tokens) | Context Length | API string name |
|:------:|:------:|:------:|:------:|:------:|
|  GPT-3.5-0125          |     $0.5        | $1.5    |  16,385 | "openai/gpt-3.5-turbo-0125" |
|  GPT-3.5          |     $0.5        | $1.5    |  16,385 | "openai/gpt-3.5-turbo" |
|  GPT-4          |     $30        | $60    |  8,192 | "openai/gpt-4" |
|  GPT-4-32k          |     $60        | $120    |  32,768 | "openai/gpt-4-32k" |


#### 3. Anthropic
| Model | Input Pricing ($/1M Tokens) | Output Pricing ($/1M Tokens) | Context Length | API string name |
|:------:|:------:|:------:|:------:|:------:|
|  Claude 3 Opus  |     $15        | $75    |  200,000 | "anthropic/claude-3-opus" |
|  Claude 3 Sonnet  |     $3        | $15    |  200,000 | "anthropic/claude-3-sonnet" |
|  Claude 3 Haiku  |     $0.25        | $1.25    |  200,000 | "anthropic/claude-3-haiku" |
|  Claude 2.1  |     $8        | $24    |  200,000 | "anthropic/claude-2.1" |
|  Claude 2.0  |     $8        | $24    |  100,000 | "anthropic/claude-2.0" |
|  Claude Instant 1.2  |     $0.8        | $2.4    |  100,000 | "anthropic/claude-instant-1.2" |


#### 4. Google
| Model | Input Pricing ($/1M Tokens) | Output Pricing ($/1M Tokens) | Context Length | API string name |
|:------:|:------:|:------:|:------:|:------:|
|  Google Gemini 1.0 Pro  |     $0        | $0    |  32,768 | "google/gemini-1.0-pro" |



## Contributing
Contributions are welcome. If you see updated pricing, new models, new providers, or anything else that needs changing, feel free to open an issue or a pull request.


## Provider problems and solutions

#### Error with Gemini Pro 1.0
```bash
ValueError: The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the response was blocked.
```
**Solution**: The response was blocked because the output exceeded your token budget. Increase `max_tokens`.
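One way to handle this is to retry with a doubled token budget until the response goes through. A generic sketch (the `generate` callable and `generate_with_growing_budget` helper are hypothetical, standing in for a call into api4all that raises `ValueError` as Gemini does here):

```python
def generate_with_growing_budget(generate, max_tokens=1028, retries=3):
    """Call `generate(max_tokens)`, doubling the token budget on each
    ValueError, up to `retries` attempts."""
    for _ in range(retries):
        try:
            return generate(max_tokens)
        except ValueError:
            max_tokens *= 2  # response was blocked; try again with more room
    raise RuntimeError("still blocked after raising max_tokens")
```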

            
