Name | tokencost
Version | 0.1.25
Summary | To calculate token and translated USD cost of string and message calls to OpenAI, for example when used by AI agents
upload_time | 2025-07-22 22:40:33
home_page | None
maintainer | None
docs_url | None
author | None
requires_python | >=3.10
license | None
keywords | None
requirements | No requirements were recorded.
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
<p align="center">
<img src="https://raw.githubusercontent.com/AgentOps-AI/tokencost/main/tokencost.png" height="300" alt="Tokencost" />
</p>
<p align="center">
<em>Clientside token counting + price estimation for LLM apps and AI agents.</em>
</p>
<p align="center">
<a href="https://pypi.org/project/tokencost/" target="_blank">
<img alt="Python" src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" />
<img alt="Version" src="https://img.shields.io/pypi/v/tokencost?style=for-the-badge&color=3670A0">
</a>
</p>
<p align="center">
<a href="https://twitter.com/agentopsai/">🐦 Twitter</a>
<span> • </span>
<a href="https://discord.com/invite/FagdcwwXRR">📢 Discord</a>
<span> • </span>
<a href="https://agentops.ai/?tokencost">🖇️ AgentOps</a>
</p>
# TokenCost
[MIT License](https://opensource.org/licenses/MIT) • [@agentopsai on X](https://x.com/agentopsai)
Tokencost helps calculate the USD cost of using major Large Language Model (LLM) APIs by estimating the cost of prompts and completions.
Building AI agents? Check out [AgentOps](https://agentops.ai/?tokencost).
### Features
* **LLM Price Tracking**: Major LLM providers frequently add new models and update pricing; this repo tracks the latest price changes.
* **Token counting**: Accurately count prompt tokens before sending OpenAI requests.
* **Easy integration**: Get the cost of a prompt or completion with a single function call.
### Example usage:
```python
from tokencost import calculate_prompt_cost, calculate_completion_cost
model = "gpt-3.5-turbo"
prompt = [{ "role": "user", "content": "Hello world"}]
completion = "How may I assist you today?"
prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
# 0.0000135 + 0.000014 = 0.0000275
```
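The figures above can be reproduced by hand. A minimal sketch, assuming gpt-3.5-turbo pricing of $1.50 per million input tokens and $2.00 per million output tokens (check the pricing table for current rates), with 9 prompt tokens and 7 completion tokens as in the example:

```python
from decimal import Decimal

prompt_tokens, completion_tokens = 9, 7      # token counts for the example above
input_price = Decimal("0.0000015")           # assumed: $1.50 per 1M input tokens
output_price = Decimal("0.000002")           # assumed: $2.00 per 1M output tokens

prompt_cost = prompt_tokens * input_price            # 0.0000135
completion_cost = completion_tokens * output_price   # 0.000014
print(prompt_cost + completion_cost)                 # 0.0000275
```

Decimal arithmetic avoids the floating-point rounding that would creep in with plain floats at these magnitudes.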
## Installation
#### Recommended: [PyPI](https://pypi.org/project/tokencost/):
```bash
pip install tokencost
```
## Usage
### Cost estimates
Calculating the cost of prompts and completions from OpenAI requests:
```python
from openai import OpenAI
from tokencost import calculate_prompt_cost, calculate_completion_cost

client = OpenAI()
model = "gpt-3.5-turbo"
prompt = [{ "role": "user", "content": "Say this is a test"}]
chat_completion = client.chat.completions.create(
messages=prompt, model=model
)
completion = chat_completion.choices[0].message.content
# "This is a test."
prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
# 0.0000180 + 0.000010 = 0.0000280
```
**Calculating cost using string prompts instead of messages:**
```python
from tokencost import calculate_prompt_cost
prompt_string = "Hello world"
model = "gpt-3.5-turbo"

prompt_cost = calculate_prompt_cost(prompt_string, model)
print(f"Cost: ${prompt_cost}")
# Cost: $3e-06
```
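The string form is cheaper than the message form because a bare string carries no ChatML framing: "Hello world" alone is 2 tokens, while the same content wrapped in a user message was billed as 9 tokens above. A quick sketch, again assuming $1.50 per million input tokens:

```python
from decimal import Decimal

string_tokens = 2                    # "Hello world" under the cl100k_base encoding
input_price = Decimal("0.0000015")   # assumed: $1.50 per 1M input tokens
string_cost = string_tokens * input_price
print(string_cost)                   # 0.0000030, i.e. the $3e-06 printed above
```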
**Counting tokens**
```python
from tokencost import count_message_tokens, count_string_tokens
message_prompt = [{ "role": "user", "content": "Hello world"}]
# Counting tokens in prompts formatted as message lists
print(count_message_tokens(message_prompt, model="gpt-3.5-turbo"))
# 9
# Alternatively, counting tokens in string prompts
print(count_string_tokens(prompt="Hello world", model="gpt-3.5-turbo"))
# 2
```
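The gap between the two counts (9 vs. 2) is ChatML overhead. A rough sketch of the heuristic popularized by the OpenAI cookbook for gpt-3.5-turbo: every message adds about 3 framing tokens plus the tokens of its role and content, and every request adds about 3 tokens priming the assistant's reply. The per-string token counts are hard-coded below for illustration; in practice they come from Tiktoken.

```python
def approx_message_tokens(messages, token_counts):
    """Approximate ChatML token count (OpenAI cookbook heuristic for gpt-3.5-turbo)."""
    total = 0
    for message in messages:
        total += 3  # per-message framing tokens
        for value in message.values():
            total += token_counts[value]
    return total + 3  # every reply is primed with an assistant header

# Counts a tokenizer would report for each string (hard-coded here):
token_counts = {"user": 1, "Hello world": 2}
print(approx_message_tokens([{"role": "user", "content": "Hello world"}], token_counts))
# 9 -> matches count_message_tokens above
```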
## How tokens are counted
Under the hood, strings and ChatML messages are tokenized using [Tiktoken](https://github.com/openai/tiktoken), OpenAI's official tokenizer. Tiktoken splits text into tokens (which can be parts of words or individual characters) and handles both raw strings and message formats with additional tokens for message formatting and roles.
For Claude 3 and newer Anthropic models (e.g. Claude 3.5 Sonnet, Claude 3.5 Haiku, and Claude 3 Opus), we use the [Anthropic beta token counting API](https://docs.anthropic.com/claude/docs/beta-api-for-counting-tokens) to ensure accurate token counts. For older Claude models, we approximate using Tiktoken with the cl100k_base encoding.
## Cost table
Units denominated in USD. All prices can be located [here](pricing_table.md).