# CrashLens Logger 🧠💸
Structured Token & Cost Logs for OpenAI / Anthropic Usage
[PyPI package](https://badge.fury.io/py/crashlens_logger)
[MIT License](LICENSE)
> ⚠️ Are you burning money on GPT calls without knowing where or why?
> CrashLens Logger captures cost, tokens, and prompts in JSON logs for FinOps, audits, or debugging.
---
## Purpose
**CrashLens Logger** is a Python package for generating structured, machine-readable logs of LLM (Large Language Model) API usage.
It helps you:
- Track prompt, model, and token usage for every AI call
- Automatically calculate cost using standard model pricing
- Output logs in newline-delimited JSON (NDJSON) for easy analysis, monitoring, and cost tracking
---
## Real Use Cases
- 🔍 Debug fallback loops by logging all model calls with a prompt/token trace
- 💰 Auto-generate cost reports across agents & users
- 🧠 Analyze which prompts are burning tokens (and why)
- 🛡️ Audit LLM usage for compliance or security
---
## Installation
```bash
pip install --upgrade crashlens_logger
```
_This will install or upgrade to the latest version._
---
## Quick Start
```python
from crashlens_logger import CrashLensLogger
import uuid
from datetime import datetime
import openai
logger = CrashLensLogger()

def call_and_log():
    trace_id = str(uuid.uuid4())
    start_time = datetime.utcnow().isoformat() + "Z"
    prompt = "What are the main tourist attractions in Rome?"
    model = "gpt-3.5-turbo"

    # Legacy openai<1.0 ChatCompletion interface, as in the original example
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    end_time = datetime.utcnow().isoformat() + "Z"

    # The SDK's usage block carries prompt/completion token counts
    usage = response["usage"]

    # Emit one NDJSON log line to stdout
    logger.log_event(
        traceId=trace_id,
        startTime=start_time,
        endTime=end_time,
        input={"model": model, "prompt": prompt},
        usage=usage
    )

call_and_log()
```
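If you are on the OpenAI Python SDK v1.0 or later, the legacy `openai.ChatCompletion` interface is no longer available. Below is a minimal sketch of the same flow with the newer client, assuming the logger accepts a plain `usage` dict as in the example above:

```python
from crashlens_logger import CrashLensLogger
from datetime import datetime, timezone
from openai import OpenAI
import uuid

logger = CrashLensLogger()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

trace_id = str(uuid.uuid4())
start_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
prompt = "What are the main tourist attractions in Rome?"
model = "gpt-3.5-turbo"

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
)
end_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# response.usage is a pydantic model in the v1 SDK; convert it to a plain dict
usage = response.usage.model_dump()

logger.log_event(
    traceId=trace_id,
    startTime=start_time,
    endTime=end_time,
    input={"model": model, "prompt": prompt},
    usage=usage,
)
```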
---
## Where Do Logs Go?
By default, logs are printed to `stdout` in newline-delimited JSON (NDJSON) format.
You can redirect output to a file:
```bash
python your_script.py > logs.jsonl
```
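Because each line of `logs.jsonl` is an independent JSON object, summarizing spend takes only a few lines. A quick sketch, assuming the `cost` field shown in the example output below:

```python
import json

total_cost = 0.0
with open("logs.jsonl") as f:
    for line in f:
        event = json.loads(line)
        # "cost" is present when the logger could price the model (see below)
        total_cost += event.get("cost", 0.0)

print(f"Total logged spend: ${total_cost:.4f}")
```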
---
## Example Output
```json
{
"traceId": "trace_norm_01",
"startTime": "2025-07-22T10:30:05Z",
"input": {"model": "gpt-3.5-turbo", "prompt": "What are the main tourist attractions in Rome?"},
"usage": {"prompt_tokens": 10, "completion_tokens": 155, "total_tokens": 165},
"cost": 0.0002375
}
```
---
## What Gets Calculated Automatically?
- **total_tokens**: If you provide `prompt_tokens` and `completion_tokens` in `usage`, the logger adds `total_tokens`.
- **cost**: If you provide `model`, `prompt_tokens`, and `completion_tokens`, the logger calculates cost using standard pricing.
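As a concrete check, the `cost` value in the example output above can be reproduced from the token counts, assuming the commonly quoted gpt-3.5-turbo rates of $0.0005 per 1K prompt tokens and $0.0015 per 1K completion tokens (illustrative assumptions; the logger applies its own built-in pricing):

```python
# Reproduce the example output's cost field by hand.
# Assumed rates (not taken from the package): gpt-3.5-turbo at
# $0.0005 per 1K prompt tokens and $0.0015 per 1K completion tokens.
prompt_tokens, completion_tokens = 10, 155
cost = prompt_tokens / 1000 * 0.0005 + completion_tokens / 1000 * 0.0015
print(round(cost, 7))  # 0.0002375 -- matches the Example Output above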
---
## Troubleshooting
- **Cannot resolve host:** Check your internet connection or DNS.
- **pip cache issues:** Try `pip install --no-cache-dir crashlens_logger`.
- **Permission errors:** Use a virtual environment or add `--user` to your pip command.
- **Module not found:** Ensure youโre using the correct Python environment.
---
## Roadmap
- [ ] Token pricing overrides
- [ ] File/DB exporters
- [ ] SDK instrumentation helpers
- [ ] Pydantic validation for log structure
---
## Testing
Run tests with:
```bash
pytest
```
*100% coverage on core logging logic.*
---
## License
MIT License