Name | tokonomics |
Version | 0.5.0 |
home_page | None |
Summary | Calculate costs for LLM usage based on token count |
upload_time | 2025-10-06 20:27:44 |
maintainer | None |
docs_url | None |
author | Philipp Temminghoff |
requires_python | >=3.12 |
license | MIT License
Copyright (c) 2024, Philipp Temminghoff
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
# Tokonomics
[Read the documentation!](https://phil65.github.io/tokonomics/)
Calculate costs for LLM usage based on token counts using LiteLLM's pricing data.
## Installation
```bash
pip install tokonomics
```
## Features
- Automatic cost calculation for various LLM models
- Detailed cost breakdown (prompt, completion, and total costs)
- Caches pricing data locally (24-hour default cache duration)
- Supports multiple model name formats (e.g., "gpt-4", "openai:gpt-4")
- Asynchronous API
- Fully typed with runtime type checking
- Zero configuration required
## Usage
```python
import asyncio

from tokonomics import calculate_token_cost


async def main():
    # Calculate cost with token counts
    costs = await calculate_token_cost(
        model="gpt-4",
        input_tokens=100,  # tokens used in the prompt
        output_tokens=50,  # tokens used in the completion
    )

    if costs:
        print(f"Prompt cost: ${costs.input_cost:.6f}")
        print(f"Completion cost: ${costs.output_cost:.6f}")
        print(f"Total cost: ${costs.total_cost:.6f}")
    else:
        print("Could not determine cost for model")


asyncio.run(main())
```
You can customize the cache timeout:
```python
from tokonomics import get_model_costs, clear_cache

# Get model costs with custom cache duration (e.g., 1 hour)
costs = await get_model_costs("gpt-4", cache_timeout=3600)
if costs:
    print(f"Input cost per token: ${costs['input_cost_per_token']}")
    print(f"Output cost per token: ${costs['output_cost_per_token']}")

clear_cache()
```
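Given per-token prices, total cost is simply tokens multiplied by price. A minimal sketch of that arithmetic, using a dict shaped like the `get_model_costs` result above (the prices are made up for illustration, not real gpt-4 rates):

```python
from decimal import Decimal

# Illustrative per-token prices, shaped like a get_model_costs() result.
# The values are invented for this example.
prices = {
    "input_cost_per_token": Decimal("0.00003"),
    "output_cost_per_token": Decimal("0.00006"),
}

input_tokens = 100
output_tokens = 50

input_cost = input_tokens * prices["input_cost_per_token"]
output_cost = output_tokens * prices["output_cost_per_token"]
total_cost = input_cost + output_cost

print(f"Total cost: ${total_cost:.6f}")  # Total cost: $0.006000
```

Using `Decimal` avoids binary floating-point rounding when summing many tiny per-token prices.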
### Getting Model Token Limits
You can retrieve the token limits for a model using `get_model_limits`:
```python
import asyncio

from tokonomics import get_model_limits


async def main():
    # Get token limit information for a model
    limits = await get_model_limits("gpt-4")

    if limits:
        print(f"Maximum total tokens: {limits.total_tokens}")
        print(f"Maximum input tokens: {limits.input_tokens}")
        print(f"Maximum output tokens: {limits.output_tokens}")
    else:
        print("Could not find limit data for model")


asyncio.run(main())
```
The function returns a `TokenLimits` object with three fields:
- `total_tokens`: Maximum combined tokens (input + output) the model supports
- `input_tokens`: Maximum number of input/prompt tokens
- `output_tokens`: Maximum number of output/completion tokens
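Limits like these can drive a simple pre-flight check before sending a request. The dataclass below is a stand-in that mirrors the three fields above; it is not the library's own `TokenLimits` class, and the limit numbers are illustrative:

```python
from dataclasses import dataclass


@dataclass
class Limits:
    """Stand-in mirroring tokonomics' TokenLimits field names."""
    total_tokens: int
    input_tokens: int
    output_tokens: int


def fits(limits: Limits, prompt_tokens: int, max_completion: int) -> bool:
    """Check that a planned request stays within the model's limits."""
    return (
        prompt_tokens <= limits.input_tokens
        and max_completion <= limits.output_tokens
        and prompt_tokens + max_completion <= limits.total_tokens
    )


# Hypothetical limits for illustration
limits = Limits(total_tokens=8192, input_tokens=8192, output_tokens=4096)
print(fits(limits, prompt_tokens=6000, max_completion=1000))  # True
print(fits(limits, prompt_tokens=6000, max_completion=5000))  # False
```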
### Pydantic-AI Integration
If you're using pydantic-ai, you can directly calculate costs from its Usage objects:
```python
from tokonomics import calculate_pydantic_cost

# Assuming you have a pydantic-ai Usage object
costs = await calculate_pydantic_cost(
    model="gpt-4",
    usage=usage_object,
)

if costs:
    print(f"Prompt cost: ${costs.input_cost:.6f}")
    print(f"Completion cost: ${costs.output_cost:.6f}")
    print(f"Total cost: ${costs.total_cost:.6f}")
```
## Model Name Support
The library supports multiple formats for model names:
- Direct model names: `"gpt-4"`
- Provider-prefixed: `"openai:gpt-4"`
- Provider-path style: `"openai/gpt-4"`
Names are matched case-insensitively.
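A rough sketch of how such names can be normalized for lookup — an illustration of the matching behavior described above, not tokonomics' internal implementation:

```python
def normalize_model_name(name: str) -> str:
    """Strip an optional provider prefix ("openai:gpt-4" or "openai/gpt-4")
    and lowercase the result for case-insensitive matching."""
    for sep in (":", "/"):
        if sep in name:
            name = name.split(sep, 1)[1]
            break
    return name.lower()


print(normalize_model_name("GPT-4"))         # gpt-4
print(normalize_model_name("openai:gpt-4"))  # gpt-4
print(normalize_model_name("openai/GPT-4"))  # gpt-4
```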
## Data Source
Pricing data is sourced from [LiteLLM's pricing repository](https://github.com/BerriAI/litellm) and is automatically cached locally using `hishel`. The cache is updated when pricing data is not found or has expired.
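The pricing file is a JSON mapping of model names to per-token prices and context limits. A sketch of reading one entry, using a made-up excerpt in the shape LiteLLM publishes (field names such as `input_cost_per_token` and `max_input_tokens` appear in the real file; the model name and values here are invented):

```python
import json

# Made-up excerpt in the shape of LiteLLM's pricing file
pricing_json = """
{
    "example-model": {
        "max_tokens": 4096,
        "max_input_tokens": 8192,
        "max_output_tokens": 4096,
        "input_cost_per_token": 1e-05,
        "output_cost_per_token": 3e-05
    }
}
"""

pricing = json.loads(pricing_json)
entry = pricing["example-model"]

# Cost for a hypothetical 100-in / 50-out request
cost = 100 * entry["input_cost_per_token"] + 50 * entry["output_cost_per_token"]
print(f"${cost:.6f}")  # $0.002500
```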
## Requirements
- Python 3.12+
- `httpx`
- `platformdirs`
- `upath`
- `pydantic` (≥ 2.0)
## License
This project is licensed under the MIT License - see the LICENSE file for details.