Pynisher is a library for limiting the resources of a function call in a synchronous manner.
You can use it to ensure that your function doesn't use up more resources than it
should.
## Usage
Limit the time a process can take
```python
import time

import pynisher


def sleepy(x: int) -> int:
    time.sleep(x)
    return x

# You can also use `cpu_time` instead
with pynisher.limit(sleepy, wall_time=7) as limited_sleep:
    x = limited_sleep(10)  # Will raise a TimeoutException
```
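To illustrate the difference the comment above hints at, here is a minimal sketch using `cpu_time` instead: a CPU-bound loop trips the limit, whereas `time.sleep` would not, since sleeping consumes wall time but almost no CPU time. `busy` is a placeholder function for illustration.
```python
from pynisher import limit, CpuTimeoutException

def busy() -> None:
    # Burns CPU time, unlike time.sleep
    while True:
        pass

limited_busy = limit(busy, cpu_time=2)
try:
    limited_busy()
except CpuTimeoutException:
    print("Used more than 2 seconds of CPU time")
```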
Limit the memory usage of a function call
```python
from pynisher import limit, MemoryLimitException, WallTimeoutException


def train_memory_hungry_model(X, y) -> Model:
    # ... do something
    return model


model_trainer = limit(
    train_memory_hungry_model,
    memory=(500, "MB"),
    wall_time=(1.5, "h")  # 1h30m
)

try:
    model = model_trainer(X, y)
except (WallTimeoutException, MemoryLimitException):
    model = None
```
Passing `raises=False` hides all errors; `EMPTY` will be returned if there is no
result to give back.
```python
from pynisher import limit, EMPTY

def f():
    raise ValueError()

limited_f = limit(f, wall_time=(2, "m"), raises=False)
result = limited_f()

if result is not EMPTY:
    ...  # use the result
```
You can even use the decorator, in which case the function will always be limited.
Please note, as described in [Details](#details), that support for the decorator is
limited and mostly Linux-only.
```python
from pynisher import restricted

@restricted(wall_time=1, raises=False)
def notify_remote_server() -> Response:
    """We don't care that this fails, just give it a second to try"""
    server = block_until_access(...)
    return server.notify()

notify_remote_server()
# ... continue on even if it failed
```
You can safely raise errors from inside your function and the same kind of error will be reraised
with a traceback.
```python
from pynisher import limit


def f():
    raise ValueError()

limited_f = limit(f)

try:
    limited_f()
except ValueError as e:
    ...  # do what you need
```
If returning very large items, prefer to save them to a file first and then read the result back,
as sending large objects through pipes can be very slow.
```python
from pathlib import Path
import pickle

from pynisher import limit, MemoryLimitException

def train_gpt3(save_path: Path) -> bool:
    gpt3 = ...
    gpt3.train()
    with save_path.open('wb') as f:
        pickle.dump(gpt3, f)

    return True

path = Path('gpt3.model')
trainer = limit(train_gpt3, memory=(1_000_000, "GB"))

try:
    trainer(save_path=path)

    with path.open("rb") as f:
        gpt3 = pickle.load(f)

except MemoryLimitException as e:
    ...
```
## Details
Pynisher works by running your function inside a subprocess.
Once in the subprocess, resources are limited for that process before your
function runs. The methods for limiting specific resources can be found within the respective
`pynisher/limiters/<platform>.py`.
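As a minimal sketch of the general pattern (not pynisher's actual implementation), on Linux a child process can cap its own address space with `resource.setrlimit` before calling the wrapped function:
```python
import multiprocessing
import resource


def _in_child(fn, limit_bytes: int, args: tuple) -> None:
    # Runs inside the child: cap our own address space first, then call
    # the function. Allocations beyond the cap raise MemoryError here.
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
    fn(*args)


def run_limited(fn, limit_bytes: int, *args) -> None:
    # Run `fn` in a subprocess whose memory is capped (Linux-only sketch)
    p = multiprocessing.Process(target=_in_child, args=(fn, limit_bytes, args))
    p.start()
    p.join()
```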
#### Features
To check if a feature is supported on your system:
```python
from pynisher import limit, supports


for feature in ["cpu_time", "wall_time", "memory", "decorator"]:
    print(f"Supports {feature} - {supports(feature)}")


limited_f = limit(f, ...)
if not limited_f.supports("memory"):
    ...
```
Currently we mainly support Linux with partial support for Mac and Windows:
| OS | `wall_time` | `cpu_time` | `memory` | `@restricted` |
| -- | ----------- | ---------- | -------- | ------------- |
| Linux | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Windows | :heavy_check_mark: | :heavy_check_mark: (1.) | :heavy_check_mark: (1.) | :x: (3.) |
| Mac | :heavy_check_mark: | :heavy_check_mark: (4.) | :x: (2.) | :x: (3.) |
1. Limiting memory and CPU time on Windows is done with the library `pywin32`. There seem
to be installation issues when `pywin32` is installed with `pip install <x>` inside a conda
environment instead of with `conda install <x>`, specifically with `Python 3.8` and `Python 3.9`.
The workaround is to install `pywin32` with conda instead, which can be done with
`pip uninstall pywin32; conda install pywin32`.
Please see this [issue](https://github.com/mhammond/pywin32/issues/1865) for updates.
2. Mac doesn't seem to allow limiting a process's memory. No workaround has been found;
`launchctl` was tried but appears to be global and ignores memory limiting. Possibly `ulimit`
could work, but this needs to be tested. Using `setrlimit(RLIMIT_AS, (soft, hard))` does nothing
and fails either explicitly or silently, hence we advertise memory limiting as not supported.
Passing a memory limit on Mac is still possible, but it may not do anything useful or
even raise an error. If you are aware of a solution, please let us know.
3. This is due to how multiprocessing pickling protocols work, hence `@restricted(...)` does
not work on Mac/Windows. Please use `limit` to limit resources in this case.
(Technically the decorator is supported on Mac with Python 3.7.) This is likely due to the default
`spawn` context on Windows and Mac, but the other available start methods on Mac also seem
not to work. On Linux, the `fork` and `forkserver` contexts seem to work.
4. For unknown reasons, using `time.process_time()` to query the CPU time used within a pynished
function causes the `cpu_time` limit to be ignored on Mac, leading to a function that will hang
indefinitely unless some other limit applies. Please let us know if this is a known issue or
if any workarounds are available.
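Given the platform differences above, one defensive pattern is to only request the limits the current platform supports. A sketch, assuming `supports` is importable from `pynisher` as in the Features example; `f` is a placeholder:
```python
from pynisher import limit, supports

def f() -> None:
    ...

# Only request the limits the current platform supports
kwargs = {"wall_time": (30, "s")}
if supports("memory"):
    kwargs["memory"] = (512, "MB")

limited_f = limit(f, **kwargs)
```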
#### Parameters
The full list of options available with both `limit` and `@restricted` is:
```python
# The name given to the multiprocessing.Process
name: str | None = None
# The memory limit to place. Specify the number of bytes or (int, unit) where unit
# can be "B", "KB", "MB" or "GB"
memory: int | tuple[int, str] | None = None
# The cpu time in seconds to limit the process to. This time is only counted while the
# process is active.
# Can provide in (time, units) such as (1.5, "h") to indicate one and a half hours.
# Units available are "s", "m", "h"
cpu_time: int | tuple[float, str] | None = None
# The wall time in seconds to limit the process to
# Can provide in (time, units) such as (1.5, "h") to indicate one and a half hours.
# Units available are "s", "m", "h"
wall_time: int | tuple[float, str] | None = None
# Whether to raise any errors that occurred in the subprocess or to silently
# discard them. If `False` and an error was raised, `EMPTY` will be returned
# instead. Errors raised in the subprocess are re-raised in the controlling
# process with the same type. The exception to this are MemoryErrors occurring
# in the subprocess, which we convert to MemoryLimitException.
raises: bool = True
# This is the multiprocess context used, please refer to their documentation
# https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
context: "fork" | "spawn" | "forkserver" | None = None
# Whether to emit warnings from limit or not. The current warnings:
# * When the memory limit is lower than the starting memory of a process
# * When trying to remove the memory limit for sending back information
# from the subprocess to the main process
warnings: bool = True
# How to handle errors. If `bool` then this decides whether or not to wrap them in
# a pynisher exception. If `list`, you can specify which errors get wrapped in a
# pynisher exception and if `dict`, you can specify what kind of errors get wrapped
# and how. See `pynisher::Pynisher::__init__` for more details on `dict`
#
# * wrap_errors={ "memory": [ImportError, (OSError, 22)], "pynisher": [ValueError] }
#
# We check that the exception is explicitly of the same type and not just a subclass.
# This is to prevent accidentally wrapping too eagerly.
wrap_errors: bool | list[Exception] | dict = False
# Whether to terminate child processes of your limited function.
# By default, pynisher will kill any subprocesses your function may spawn. If this
# is not desired behaviour, please use `daemon=True` with your spawned subprocesses
# and set `terminate_child_processes` to `False`
terminate_child_processes: bool = True
# Whether keyboard interrupts should forcibly kill any subprocesses or the
# pynished function. If True, it will terminate the process tree of
# the pynished function and then reraise the KeyboardInterrupt.
forceful_keyboard_interrupt: bool = True
```
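A sketch of the `dict` form of `wrap_errors` from the comment above. Based on the namespace-pollution example later in this README, errors listed under `"pynisher"` surface as a `PynisherException`; we assume those under `"memory"` surface as a `MemoryLimitException`. `flaky` is a placeholder:
```python
from pynisher import limit, MemoryLimitException

def flaky() -> None:
    # e.g. an import that fails once a memory limit breaks shared libraries
    raise ImportError("could not map shared object")

lf = limit(
    flaky,
    memory=(200, "MB"),
    wrap_errors={"memory": [ImportError], "pynisher": [ValueError]},
)
try:
    lf()
except MemoryLimitException:
    ...  # the subprocess ImportError was wrapped as a memory error
```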
#### Exceptions
Pynisher will let all subprocess `Exceptions` bubble up to the controlling process.
If a subprocess exceeds a limit, one of `CpuTimeoutException`, `WallTimeoutException` or `MemoryLimitException` is raised, but you can use their base classes to catch them more generally.
```python
class PynisherException(Exception):
    """When a subprocess exceeds a limit"""

class TimeoutException(PynisherException):
    """When a subprocess exceeds a time limit (walltime or cputime)"""

class CpuTimeoutException(TimeoutException):
    """When a subprocess exceeds its cpu time limit"""

class WallTimeoutException(TimeoutException):
    """When a subprocess exceeds its wall time limit"""

class MemoryLimitException(PynisherException, MemoryError):
    """When a subprocess tries to allocate memory that would take it over the limit

    This also inherits from MemoryError as it is technically a MemoryError that we
    catch and convert.
    """
```
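For example, catching the `TimeoutException` base class covers both time limits at once. A short sketch; `slow` is a placeholder function:
```python
import time

from pynisher import limit, TimeoutException

def slow() -> None:
    time.sleep(60)

limited_slow = limit(slow, wall_time=1, cpu_time=1)
try:
    limited_slow()
except TimeoutException:
    ...  # covers both CpuTimeoutException and WallTimeoutException
```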
## Changes from v0.6.0
For simplicity, pynisher no longer tries to control `stdout` and `stderr`; instead,
users can use Python's built-in `redirect_stdout` and `redirect_stderr` to redirect
output as needed.
Pynisher issues warnings through `stderr`. Depending on how you set up the `context`
to spawn a new process, redirecting to objects may not work as intended. The safest
option is to write to a file if needed.
```python
from contextlib import redirect_stderr

from pynisher import limit

# You can always disable warnings
limited_f = limit(func, warnings=False)

# Capture warnings in a file
# Only seems to work properly on Linux
with open("stderr.txt", "w") as stderr, redirect_stderr(stderr):
    limited_f()

with open("stderr.txt", "r") as stderr:
    print(stderr.readlines())
```
Support for passing a `logger` to `Pynisher` has also been removed. The only diagnostic
information that would have been sent to the logger is now communicated with prints to `stderr`.
These diagnostic messages only occur when an attempt to limit resources fails.
They can be captured or disabled as above.
Any other kind of issue will raise an exception with relevant information.
Support for checking `exit_status` was removed; the success of a pynisher process can
be handled in the usual Python manner of checking for errors, with a `try: except:`. If you
don't care about the exit status, use `f = limit(func, raises=False)` and check the
output with `output = f(...)`. This will be `EMPTY` if an error was raised while `raises=False` was set.
Pynisher no longer times your function for you with `self.wall_clock_time`. If you need to measure
the duration it ran, please do so outside of `Pynisher`.
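A minimal sketch of timing a limited function yourself, using `time.perf_counter` from the standard library; `f` is a placeholder:
```python
import time

from pynisher import limit

def f() -> None:
    ...

limited_f = limit(f, wall_time=10)

start = time.perf_counter()
limited_f()
duration = time.perf_counter() - start
print(f"Function ran for {duration:.2f} seconds")
```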
The exceptions were also changed; please see [Exceptions](#exceptions).
## Controlling namespace pollution
As an advanced use case, you may sometimes want to keep the modules imported by your
limited function local to it, preventing them from leaking into the main process
that created the limited function. You have three ways to ensure that a locally
imported error does not pollute the main namespace.
```python
import sys
from pynisher import PynisherException, limit

def import_sklearn() -> None:
    """Imports sklearn into a local namespace and has an sklearn object in its args"""
    from sklearn.exceptions import NotFittedError
    from sklearn.svm import SVR

    assert "sklearn" in sys.modules.keys()
    raise NotFittedError(SVR())


if __name__ == "__main__":
    # Wrapping all errors
    lf = limit(import_sklearn, wrap_errors=True)
    try:
        lf()
    except PynisherException:
        assert "sklearn" not in sys.modules.keys()

    # Wrapping only specific errors
    lf = limit(import_sklearn, wrap_errors=["NotFittedError"])
    try:
        lf()
    except PynisherException:
        assert "sklearn" not in sys.modules.keys()

    # Wrapping that error specifically as a PynisherException
    lf = limit(import_sklearn, wrap_errors={"pynisher": ["NotFittedError"]})
    try:
        lf()
    except PynisherException:
        assert "sklearn" not in sys.modules.keys()
```
## Pynisher and Multithreading
When Pynisher is used together with the Python Threading library, it is possible to run into
a deadlock when using the standard `fork` method to start new processes, as described in:
* https://github.com/Delgan/loguru/issues/231
* https://gist.github.com/mfm24/e62ec5d50c672524107ca00a391e6104
* https://github.com/dask/dask/issues/3759
One way of solving this would be to change the forking behavior as described
[here](https://github.com/google/python-atfork/blob/main/atfork/stdlib_fixer.py), but this
also makes very strong assumptions about how the code is executed. An alternative is to pass a
[context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods)
which uses either `spawn` or `forkserver` as the process start method.
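A sketch of the latter, using the `context` parameter documented above; `f` is a placeholder:
```python
from pynisher import limit

def f() -> None:
    ...

# `spawn` avoids the fork-related deadlocks described above
limited_f = limit(f, wall_time=10, context="spawn")
limited_f()
```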
## Nested Pynisher and Multiprocessing contexts
Be careful when mixing multiprocessing contexts while using `pynisher`. If your
pynished function spawns subprocesses using `"forkserver"` while `pynisher` is set to use
the `"fork"` context, issues can occur when terminating processes.
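If your function manages its own subprocesses and pynisher's cleanup gets in the way, the `terminate_child_processes` parameter documented above can be disabled. A sketch following the advice from the Parameters section (use `daemon=True` on your own children):
```python
import multiprocessing

from pynisher import limit

def spawns_children() -> None:
    # Daemonic children are cleaned up when their parent exits
    p = multiprocessing.Process(target=print, args=("hello",), daemon=True)
    p.start()
    p.join()

limited_f = limit(spawns_children, terminate_child_processes=False)
limited_f()
```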
## Project origin
This repository is based on Stefan Falkner's https://github.com/sfalkner/pynisher.