cache-return


Name: cache-return
Version: 0.1.2
Summary: A library providing a decorator wrapper for adding the caching mechanism to functions.
Upload time: 2024-09-24 18:35:03
Requires Python: >=3.8
License: MIT License, Copyright (c) 2024 Cody Xiaozhan Yang
Keywords: cache, return, result, debug
Requirements: No requirements were recorded.
# Cache Return

This library provides a simple decorator wrapper that adds a caching mechanism to custom functions.


## Use cases and benefits

With the caching mechanism activated, the return result of a sub-function is cached in a dedicated folder on the initial run. In the following runs, the cached result is loaded, bypassing the actual internal processing of the function. (A simplified sketch of this behaviour follows the list below.)

This is mainly useful for testing and debugging tasks in larger projects with sophisticated sub-functions, as it will:

1. Avoid the actual internal function run, thus saving resources (e.g. queries/calls to APIs or databases) and improving efficiency.
2. Speed up testing processes and cycles, as reloading a cached result is generally much faster than running the function.
3. Make "offline" debugging and investigation of real cases possible, as the cached return result can be accessed manually.
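
For intuition, the behaviour described above could be sketched with a decorator along the following lines. This is only a simplified illustration consistent with the documented behaviour, not the library's actual implementation (the `flushing` and `cache_path` options and any error handling are omitted).

```
import functools
import os
import pickle


def cache_return_sketch(func):
    """Illustrative stand-in for the real cache_return decorator."""
    @functools.wraps(func)
    def wrapper(*args, caching=False, **kwargs):
        cache_file = os.path.join('./_cache_', f'{func.__name__}.pkl')
        if caching and os.path.exists(cache_file):
            # Following runs: load the cached result instead of calling the function.
            with open(cache_file, 'rb') as f:
                return pickle.load(f)
        result = func(*args, **kwargs)
        if caching:
            # Initial run: create the cache folder and save the result as a pickle file.
            os.makedirs('./_cache_', exist_ok=True)
            with open(cache_file, 'wb') as f:
                pickle.dump(result, f)
        return result
    return wrapper
```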

## Examples

To add the caching mechanism to a custom function, simply apply the decorator to the function definition as below.

```
from cache_return import cache_return

@cache_return
def custom_function(arg_a, arg_b='default_value'):
    # ... internal processes ...
    results = ...  # placeholder for the actual computation
    return results
```

The custom function can then be used the same way as before. To activate its caching mechanism, simply pass an additional keyword argument `caching=True`.

```
# Before

return_result = custom_function(arg_a, arg_b='actual_value')

# After

return_result = custom_function(arg_a, arg_b='actual_value', caching=True)
```
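
For example, assuming the decorated `custom_function` from above, the second cached call should return noticeably faster than the first, since it only loads the pickle file (exact timings depend on the function):

```
import time

start = time.perf_counter()
first = custom_function('a', arg_b='actual_value', caching=True)   # runs the function and writes the cache
print(f'first call:  {time.perf_counter() - start:.3f}s')

start = time.perf_counter()
second = custom_function('a', arg_b='actual_value', caching=True)  # loads the cached result instead
print(f'second call: {time.perf_counter() - start:.3f}s')
```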

To turn the caching mechanism off in production code, you can either set the keyword argument to `caching=False` or simply remove the `caching=True` keyword argument, since `caching=False` is the default.
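
One hypothetical way to keep a single call site for both debugging and production is to drive the flag from an environment variable (the `DEBUG_CACHING` variable name here is purely illustrative, not part of the library):

```
import os

# Caching is only enabled when DEBUG_CACHING=1 is set in the environment.
return_result = custom_function(arg_a, arg_b='actual_value',
                                caching=os.environ.get('DEBUG_CACHING') == '1')
```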

The cached result is saved as a pickle file named after the custom function, inside the auto-created folder `./_cache_`, which sits in the same directory as the top-level running script (not the script containing the custom function).
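
For illustration, a project layout like the following would result (the script names are hypothetical; only the `_cache_` folder and the pickle file come from the library):

```
project/
├── main.py        # top-level running script calling custom_function(..., caching=True)
├── helpers.py     # module containing the decorated custom_function
└── _cache_/
    └── custom_function.pkl
```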

The cached result can then be accessed manually elsewhere with the code below:

```
import pickle

with open('./_cache_/custom_function.pkl', 'rb') as f:
    return_result = pickle.load(f)
```

### Special note for environments with pandas version >= 2.0.0

In that case, the manual loading code above has to be adjusted as follows:

```
import pandas as pd

with open('./_cache_/custom_function.pkl', 'rb') as f:
    return_result = pd.compat.pickle_compat.load(f)
```
Alternatively, you can downgrade pandas to the 1.x series with `pip install "pandas<2.0.0"`; see the [Stack Overflow topic here](https://stackoverflow.com/questions/75953279/modulenotfounderror-no-module-named-pandas-core-indexes-numeric-using-metaflo).

## Additional arguments

In some cases, if you only want to overwrite/update the cached result with a new function run, you can do so by setting the keyword argument `flushing=True`.
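
For example (the description implies this is used together with `caching=True`):

```
# Force a fresh run and overwrite the existing cached result.
return_result = custom_function(arg_a, arg_b='actual_value', caching=True, flushing=True)
```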

By default the cached result is saved under the directory `./_cache_`. An alternative directory can be set with the `cache_path` argument, as in `cache_path=custom_dir`. The directory will be created automatically if it does not exist.
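
For example, assuming `cache_path` accepts a directory path string as the description suggests:

```
# Store this function's cached result under ./debug_cache instead of ./_cache_.
return_result = custom_function(arg_a, arg_b='actual_value', caching=True,
                                cache_path='./debug_cache')
```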

            
