openai-pricing-calc-draft


Name: openai-pricing-calc-draft
Version: 0.5.0
Summary: LLM Cost Calculation
Author: Koken Consulting
Author email: ali@koken-consulting.com
Home page: https://github.com/kokenconsulting/openai-api-pricing/tree/main/pypi
Upload time: 2023-12-14 09:30:57
Keywords: python, video, stream, video stream, camera stream, sockets
Requirements: No requirements were recorded.

# OpenAI API - Price Calculator

## Overview
This package calculates the cost of OpenAI API usage.

Pricing is based on the following source: [OpenAI Pricing API](https://openai-api-pricing-web-api.onrender.com/openai).
Source code: [GitHub](https://github.com/kokenconsulting/openai-api-pricing)
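
The pricing data served by the URL above can also be inspected directly. A minimal sketch using the `requests` library (the response schema is not documented in this README, so this only prints the raw JSON):

```python
import requests

# Fetch the raw pricing data that the calculator is based on.
# The response schema is not documented here, so we just print it.
response = requests.get("https://openai-api-pricing-web-api.onrender.com/openai", timeout=10)
response.raise_for_status()
print(response.json())
```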



## Usage

### Installation

Install the package from PyPI:
```
pip install openai-pricing-calc-draft
```


### Without Surrounding Code

```python
from lll_pricing_calculation import calculate_openai_pricing

# Standalone usage: supply your own token counts (the values below are examples)
embedding_tokens = 1000
prompt_tokens = 500
completion_tokens = 200

costForThousandCurrency, embeddingsCost, promptCost, completionTokenCost, total_cost = calculate_openai_pricing(
    "GPT-3.5 Turbo", "4K context", embedding_tokens, prompt_tokens, completion_tokens
)
print("currency: " + costForThousandCurrency)
print("embeddingsCost: " + str(embeddingsCost))
print("promptCost: " + str(promptCost))
print("completionTokenCost: " + str(completionTokenCost))
print("total cost: " + str(total_cost))
```
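
The returned values break the total down by token type. OpenAI-style pricing is quoted per 1,000 tokens, so each component typically follows a `tokens / 1000 * rate` calculation; the sketch below illustrates that arithmetic with placeholder rates (the real rates come from the pricing API, not these numbers):

```python
# Placeholder per-1K-token rates for illustration only; the package pulls its
# actual rates from the pricing API listed in the Overview.
EMBEDDING_RATE_PER_1K = 0.0001
PROMPT_RATE_PER_1K = 0.0015
COMPLETION_RATE_PER_1K = 0.002

def estimate_cost(embedding_tokens, prompt_tokens, completion_tokens):
    """Rough estimate mirroring the calculator's cost breakdown."""
    embeddings_cost = embedding_tokens / 1000 * EMBEDDING_RATE_PER_1K
    prompt_cost = prompt_tokens / 1000 * PROMPT_RATE_PER_1K
    completion_cost = completion_tokens / 1000 * COMPLETION_RATE_PER_1K
    total = embeddings_cost + prompt_cost + completion_cost
    return embeddings_cost, prompt_cost, completion_cost, total

print(estimate_cost(1000, 500, 200))
```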
### With Surrounding Code Using LlamaIndex
```python
import tiktoken
from llama_index.callbacks import CallbackManager, TokenCountingHandler
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from lll_pricing_calculation import calculate_openai_pricing

sampleQuery = "Sample Query"

# Count embedding, prompt, and completion tokens via a LlamaIndex callback
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("text-davinci-003").encode,
    verbose=False  # set to True to see usage printed to the console
)
callback_manager = CallbackManager([token_counter])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)

def askQuestion(quest, storage, service_context, token_counter):
    token_counter.reset_counts()
    # `index`, `query`, and `dataFolder` are project-specific helpers defined outside this snippet
    specificindex = index.get_index(dataFolder, "./storage" + storage, service_context)
    print(quest)
    result = query.query_index(specificindex, quest, "./storage" + storage)
    print(result)
    # The counts can also be read directly from the handler
    print("Embedding token count (total_embedding_token_count):")
    print(token_counter.total_embedding_token_count)
    print("Detailed counts:")
    print('Embedding Tokens: ', token_counter.total_embedding_token_count, '\n',
          'LLM Prompt Tokens: ', token_counter.prompt_llm_token_count, '\n',
          'LLM Completion Tokens: ', token_counter.completion_llm_token_count, '\n',
          'Total LLM Token Count: ', token_counter.total_llm_token_count)

    # Pricing calculation takes place here
    costForThousandCurrency, embeddingsCost, promptCost, completionTokenCost, total_cost = calculate_openai_pricing(
        "GPT-3.5 Turbo",
        "4K context",
        token_counter.total_embedding_token_count,
        token_counter.prompt_llm_token_count,
        token_counter.completion_llm_token_count,
    )
    print("currency: " + costForThousandCurrency)
    print("embeddingsCost: " + str(embeddingsCost))
    print("promptCost: " + str(promptCost))
    print("completionTokenCost: " + str(completionTokenCost))
    print("total cost: " + str(total_cost))

askQuestion(sampleQuery, "4", service_context, token_counter)
```
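
The snippet above relies on project-specific `index` and `query` helpers. A self-contained variant using the standard LlamaIndex query engine looks roughly like the sketch below; the `./data` folder, the query text, and the tokenizer model are placeholders, and it assumes the same legacy `ServiceContext`-based llama_index API used above:

```python
import tiktoken
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.callbacks import CallbackManager, TokenCountingHandler
from lll_pricing_calculation import calculate_openai_pricing

# Attach a token counter so every embedding/LLM call is measured
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([token_counter])
)

# "./data" is a placeholder folder containing the documents to index
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

response = index.as_query_engine().query("Sample Query")
print(response)

# Turn the measured token counts into a cost estimate
_, _, _, _, total_cost = calculate_openai_pricing(
    "GPT-3.5 Turbo",
    "4K context",
    token_counter.total_embedding_token_count,
    token_counter.prompt_llm_token_count,
    token_counter.completion_llm_token_count,
)
print("total cost: " + str(total_cost))
```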

            
