# reclm
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
When building AI-based tooling and packages, we often call LLMs while
prototyping and testing our code. A single LLM call can take hundreds of
milliseconds to run, and the output isn’t deterministic. This can really
slow down development, especially if our notebook contains many LLM calls 😞.
While LLMs are new, working with external APIs in our code isn’t. Plenty
of tooling already exists that makes working with APIs much easier. For
example, Python’s `unittest.mock` is commonly used to simulate, or
mock, an API call so that it returns a hardcoded response. This works
really well in the traditional Python development workflow and can make
our tests fast and predictable.
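For instance, a test might stub out the network call so it returns a canned response. Here is a minimal sketch; the `summarise` helper, the model name, and the canned text are purely illustrative:

```python
from unittest.mock import MagicMock, patch
from openai import OpenAI

client = OpenAI(api_key="not-used")   # no real request is made in this test

def summarise(text):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarise: {text}"}],
    )
    return resp.choices[0].message.content

# patch the API call so it returns a hardcoded response instead of hitting the network
fake = MagicMock()
fake.choices[0].message.content = "a hard-coded summary"
with patch.object(client.chat.completions, "create", return_value=fake):
    assert summarise("a long article ...") == "a hard-coded summary"
```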
However, it doesn’t work well in the nbdev workflow, where we often want
to quickly run all the cells in our notebook while we’re developing our
code. While we can use mocks in our test cells, we don’t want our
exported code cells to be mocked. This leaves us with two choices:
- we temporarily mock our exported code cells but undo the mocking
before we export these cells.
- we do nothing and just live with notebooks that take a long time to
run.
Both options are pretty terrible as they pull us out of our flow state
and slow down development 😞.
`reclm` builds on the underlying idea of mocks but adapts it to the
nbdev workflow.
## Usage
To use `reclm`:

- install the package:
  `pip install git+https://github.com/AnswerDotAI/reclm.git`
- import it in each notebook:
  `from reclm.core import enable_reclm`
- call
  [`enable_reclm()`](https://AnswerDotAI.github.io/reclm/core.html#enable_reclm)
  at the top of each notebook
*Note:
[`enable_reclm`](https://AnswerDotAI.github.io/reclm/core.html#enable_reclm)
should be added after you import the OpenAI and/or Anthropic SDK.*
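In practice, the top of a notebook might look something like this (a sketch assuming the OpenAI SDK; the Anthropic SDK works the same way):

```python
from openai import OpenAI             # import the provider SDK first
from reclm.core import enable_reclm

enable_reclm()                        # then enable recording/replaying of LLM calls

client = OpenAI()
```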
Every LLM call you make using OpenAI/Anthropic will now be cached in
`nbs/reclm.json`.
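With the setup cell above, an ordinary call like the one below is recorded the first time it runs and replayed from the cache on subsequent runs (the model and prompt are just placeholders):

```python
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(resp.choices[0].message.content)   # served from nbs/reclm.json after the first run
```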
### Tests
`nbdev_test` will automatically read from the cache. However, if your
notebooks contain LLM calls that haven’t been cached, `nbdev_test` will
call the OpenAI/Anthropic APIs and then cache the responses.
### Cleaning the cache
It is recommended that you clean the cache before committing it.
To clean the cache, run `update_reclm_cache` from your project’s root
directory.
*Note: Your LLM request/response data is stored in a file called
`reclm.json` in your current working directory. All request headers are
removed, so it is safe to include this file in your version control
system (e.g. git). In fact, it is expected that you’ll include this
file in your VCS.*