latteries

Name: latteries
Version: 0.1.5
Summary: James' API LLM evaluations workflow library - A collection of tools for LLM API calls, caching, and evaluation workflows
Upload time: 2025-08-11 16:41:48
Requires Python: >=3.8
License: MIT
Keywords: anthropic, api, caching, evaluation, llm, openai
Requirements: pydantic, slist, streamlit, anyio, python-dotenv

# James' API LLM evaluations workflow library
Library of functions that I find useful in my day-to-day work.

## Installation as starter code to run evals.
Clone the repo if you want to use the example scripts. This can be useful for, e.g., Cursor and other coding agents.

**Clone the repo and install dependencies:**
  ```bash
  git clone git@github.com:thejaminator/latteries.git
  cd latteries
  uv venv
  source .venv/bin/activate
  uv pip install -r requirements.txt
  uv pip install -e .
  ```

**Minimal setup: OpenAI API key.**
Create a `.env` file in the root of the repo:

```bash
OPENAI_API_KEY=sk-...
```
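
If you want to sanity-check that the key is picked up by your scripts, here is a minimal sketch using `python-dotenv` (which is in the requirements). Latteries may already load the `.env` for you, so this is purely illustrative:

```python
# Minimal sanity check that the .env key is visible; purely illustrative.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY not found"
```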




### Installation as a package.
Alternatively, you can install the package and use it as a library without the example scripts.
```bash
pip install latteries
```






## My workflow
- I want to call LLM APIs like normal Python.
- This is a library, not a framework. Frameworks make you declare magical things in configs and functions; this is simply a collection of tools I find useful.
- Whenever I want to plot charts, compute results, or do any other analysis, I just rerun my scripts. Results are cached by the content of the prompts and the inference config, which helps me get results out quickly.

### Core functionality - caching
```python
from latteries import load_openai_caller, ChatHistory, InferenceConfig


async def example_main():
    # Cache to the folder "cache"
    caller = load_openai_caller("cache")
    prompt = ChatHistory.from_user("How many letter 'r's are in the word 'strawberry'?")
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # This cache is based on the hash of the prompt and the InferenceConfig.
    response = await caller.call(prompt, config)
    print(response.first_response)


if __name__ == "__main__":
    import asyncio

    asyncio.run(example_main())
```

### Core functionality - call LLMs in parallel
- The caching is safe to use in parallel. I use my library [slist](https://github.com/thejaminator/slist) for useful list utils, such as running calls in parallel.
- [See full example](example_scripts/example_parallel.py).
```python
from slist import Slist

from latteries import ChatHistory, InferenceConfig, load_openai_caller


async def example_parallel_tqdm():
    caller = load_openai_caller("cache")
    fifty_prompts = [f"What is {i} * {i+1}?" for i in range(50)]
    prompts = [ChatHistory.from_user(prompt) for prompt in fifty_prompts]
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # Slist is a typed list with a bunch of utility functions.
    # par_map_async runs async functions in parallel.
    results = await Slist(prompts).par_map_async(
        lambda prompt: caller.call(prompt, config),
        max_par=10,  # Parallelism limit.
        tqdm=True,  # Shows a tqdm progress bar.
    )
    result_strings = [result.first_response for result in results]
    print(result_strings)
```

### Core functionality - support for different model providers
- You often need to call models on OpenRouter or use a different API client such as Anthropic's.
- I use MultiClientCaller, which routes by matching on the model name. You should make a copy of this to match the routing logic you want.
- [See full example](example_scripts/example_llm_providers.py).
```python
import os
from pathlib import Path

from openai import AsyncOpenAI

# Caller classes assumed importable from latteries; see
# example_scripts/example_llm_providers.py for the exact imports.
from latteries import AnthropicCaller, CacheByModel, CallerConfig, MultiClientCaller, OpenAICaller


def load_multi_client(cache_path: str) -> MultiClientCaller:
    """Matches based on the model name."""
    openai_api_key = os.getenv("OPENAI_API_KEY")
    openrouter_api_key = os.getenv("OPENROUTER_API_KEY")
    anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
    shared_cache = CacheByModel(Path(cache_path))
    openai_caller = OpenAICaller(api_key=openai_api_key, cache_path=shared_cache)
    openrouter_caller = OpenAICaller(
        openai_client=AsyncOpenAI(api_key=openrouter_api_key, base_url="https://openrouter.ai/api/v1"),
        cache_path=shared_cache,
    )
    anthropic_caller = AnthropicCaller(api_key=anthropic_api_key, cache_path=shared_cache)

    # Define rules for routing models.
    clients = [
        CallerConfig(name="gpt", caller=openai_caller),
        CallerConfig(name="gemini-2.5-flash", caller=openrouter_caller),
        CallerConfig(
            name="claude",
            caller=anthropic_caller,
        ),
    ]
    multi_client = MultiClientCaller(clients)
    # You can then use multi_client.call(prompt, config) to route to a different provider based on the model name.
    return multi_client
```
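
For illustration, a rough usage sketch (not taken from the example scripts; the Claude model name below is a placeholder):

```python
import asyncio

from latteries import ChatHistory, InferenceConfig


# One call site, routed to OpenAI, OpenRouter, or Anthropic purely by the
# model name in InferenceConfig. The Claude model name is a placeholder.
async def compare_providers():
    multi_client = load_multi_client("cache")
    prompt = ChatHistory.from_user("Name one prime number above 100.")
    for model in ["gpt-4o", "gemini-2.5-flash", "claude-sonnet-4"]:
        config = InferenceConfig(temperature=0.0, max_tokens=50, model=model)
        response = await multi_client.call(prompt, config)
        print(model, "->", response.first_response)


if __name__ == "__main__":
    asyncio.run(compare_providers())
```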


### Viewing model outputs:
We have a simple tool to view conversations stored in a JSONL format of "user" and "assistant" messages.
[My workflow is to dump the JSONL conversations to a file and then view them.](example_scripts/example_parallel_and_log.py)
```bash
latteries-viewer <path_to_jsonl_file>
```
<img src="docs/viewer.png" width="70%" alt="Viewer Screenshot">
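
As a rough sketch of the dump step (the exact JSONL schema the viewer expects is an assumption here; [example_scripts/example_parallel_and_log.py](example_scripts/example_parallel_and_log.py) is the canonical version):

```python
# Rough sketch: one conversation per line, as "role"/"content" messages.
# The exact schema latteries-viewer expects is assumed here.
import json

conversations = [
    [
        {"role": "user", "content": "How many 'r's are in 'strawberry'?"},
        {"role": "assistant", "content": "There are three."},
    ],
]

with open("conversations.jsonl", "w") as f:
    for conversation in conversations:
        f.write(json.dumps(conversation) + "\n")
```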





## Example scripts
These scripts evaluate multiple models and create charts with error bars.
- Single turn evaluation, MCQ: [MMLU](example_scripts/mmlu/evaluate_mmlu.py), [TruthfulQA](example_scripts/truthfulqa/evaluate_truthfulqa.py)
- Single turn with a judge model for misalignment. TODO.
- Multi turn evaluation with a judge model to parse the answer: [Are you sure sycophancy?](example_scripts/mmlu/mmlu_are_you_sure.py)






## FAQ

What if I want to repeat the same prompt without caching?
- [Pass try_number to the caller.call function](example_scripts/example_parallel.py).
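
A rough sketch of what that looks like; the keyword name follows the FAQ answer above, and the exact signature lives in [example_scripts/example_parallel.py](example_scripts/example_parallel.py):

```python
# Sketch: sample the same prompt several times, varying try_number so each
# call gets its own cache entry. Keyword name per the FAQ answer above.
async def sample_many(caller, prompt, config, n=5):
    responses = [await caller.call(prompt, config, try_number=i) for i in range(n)]
    return [response.first_response for response in responses]
```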

Do you have support for JSON schema calling?
- [Yes](example_scripts/example_json.py).

Do you have support for log probs?
- [Yes](example_scripts/example_probs.py).

How do I delete my cache?
- Just delete the folder that you've been caching to.

What is the difference between this and xxxx?
- TODO




## General philosophy on evals engineering.
TODO: Elaborate
- Don't mutate Python objects; it causes bugs. Copy / deepcopy things like configs and prompts (see the sketch after this list).
- Python is a scripting language. Use it to write your scripts!!! Avoid writing complicated bash files when you can just write Python.
- I hate YAML. More specifically, I hate YAML that becomes a programming language. Sorry. I just want to press "Go to references" in VSCode / Cursor and jump to where something gets referenced. YAML does not do that.
- Keep objects as pydantic BaseModels / dataclasses. Avoid passing data around as pandas DataFrames. No one (including your coding agent) knows what is in the DataFrame. It's hard to read and can be lossy (losing types). If you want to store intermediate data, use JSONL.
- Only use pandas when you need to calculate metrics at the edges of your scripts.
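
A small sketch of the copy-don't-mutate point, assuming `InferenceConfig` is a mutable pydantic model / dataclass as described above (the Claude model name is a placeholder):

```python
from copy import deepcopy

from latteries import InferenceConfig

# Derive a per-model config without mutating the shared base config.
# Assumes InferenceConfig is a mutable pydantic model / dataclass; the
# Claude model name is a placeholder.
base_config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")

claude_config = deepcopy(base_config)
claude_config.model = "claude-sonnet-4"  # only the copy changes

assert base_config.model == "gpt-4o"  # the shared config stays untouched
```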
            
