InterProcessPyObjects

* Version: 1.0.7
* Upload time: 2024-05-14 02:41:45
* Requires Python: >=3.8
* Requirements: hatch, twine, numpy, py-cpuinfo, cengal_light
* Keywords: android, ipc, linux, windows, cengal, crossplatform, ios, inter-process communication, macos, multiprocessing, shared dict, shared memory, shared numpy ndarray, shared object, shared objects, shared set, shared torch tensor
* Summary: This high-performance package delivers blazing-fast inter-process communication through shared memory, enabling Python objects to be shared across processes with exceptional efficiency
![GitHub tag (with filter)](https://img.shields.io/github/v/tag/FI-Mihej/InterProcessPyObjects) ![Static Badge](https://img.shields.io/badge/OS-Linux_%7C_Windows_%7C_macOS-blue)

![PyPI - Version](https://img.shields.io/pypi/v/InterProcessPyObjects) ![PyPI - Format](https://img.shields.io/pypi/format/cengal-light?color=darkgreen) ![Static Badge](https://img.shields.io/badge/wheels-Linux_%7C_Windows_%7C_macOS-blue) ![Static Badge](https://img.shields.io/badge/Architecture-x86__64_%7C_ARM__64-blue) ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cengal-light) ![Static Badge](https://img.shields.io/badge/PyPy-3.8_%7C_3.9_%7C_3.10-blue) ![PyPI - Implementation](https://img.shields.io/pypi/implementation/cengal-light) 

![GitHub License](https://img.shields.io/github/license/FI-Mihej/InterProcessPyObjects?color=darkgreen) ![Static Badge](https://img.shields.io/badge/API_status-Stable-darkgreen)

# InterProcessPyObjects package

> InterProcessPyObjects is a part of the [Cengal](https://github.com/FI-Mihej/Cengal) library. If you have any questions or would like to participate in discussions, feel free to join the [Cengal Discord](https://discord.gg/TAy7xNgR). Your support and involvement are greatly appreciated as Cengal evolves.

This high-performance package delivers blazing-fast inter-process communication through shared memory, enabling Python objects to be shared across processes with exceptional efficiency. By minimizing the need for frequent serialization-deserialization, it enhances overall speed and responsiveness. The package offers a comprehensive suite of functionalities designed to support a diverse array of Python types and facilitate asynchronous IPC, optimizing performance for demanding applications.

![Throughput GiB/s](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartThroughputGiBs.png)

![Dict performance comparison](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartDictPerformanceComparison.png)

## API State

Stable. Guaranteed not to have breaking changes in the future (see below for details).

Any hypothetical future API-breaking change will lead to the creation of a new module within the package. The old version will continue to exist and remain importable by an explicit address (see Details below).

<details>
<summary title="Details"><kbd> Details </kbd></summary>

The current (latest) version can be imported either by:

```python
from ipc_py_objects import *
```

or by

```python
from ipc_py_objects.versions.v_1 import *
```

If further breaking changes are made to the API, a new (`v_2`) version will be created. As a result:

The current (`v_1`) version will remain accessible by an explicit address:

```python
from ipc_py_objects.versions.v_1 import *
```

The latest (`v_2`) version will be accessible either by:

```python
from ipc_py_objects import *
```

or by

```python
from ipc_py_objects.versions.v_2 import *
```

This is a general approach across the entire [Cengal](https://github.com/FI-Mihej/Cengal) library. It gives me the ability to work effectively on its huge codebase, even on my own.

By the way, I'm finishing an implementation of [CengalPolyBuild](https://github.com/FI-Mihej/CengalPolyBuild) - my package creation system, which provides the same approach to users. It is a comprehensive and hackable build system for multilingual Python packages: Cython (including automatic conversion from Python to Cython), C/C++, Objective-C, Go, and Nim, with ongoing expansions to include additional languages. Basically, it will provide easy access to all the same features I'm already using in the Cengal library's package creation and management processes.

</details>

## Key Features

* Shared Memory Communication:
    * Enables sharing of Python objects directly between processes using shared memory.
    * Utilizes a linked list of global messages to inform connected processes about new shared objects.

* Lock-Free Synchronization:
    * Uses memory barriers for efficient communication, avoiding slow syscalls.
    * Ensures each process can access and modify shared memory without contention.

* Supported Python Types:
    * Handles various Python data structures including:
        * Basic types: `None`, `bool`, 64-bit `int`, large `int` (arbitrary precision integers), `float`, `complex`, `bytes`, `bytearray`, `str`.
        * Standard types: `Decimal`, `slice`, `datetime`, `timedelta`, `timezone`, `date`, `time`.
        * Containers: `tuple`, `list`, classes inherited from: `AbstractSet` (`frozenset`), `MutableSet` (`set`), `Mapping` and `MutableMapping` (`dict`).
        * Picklable class instances: custom classes, including `dataclass`.
    * Allows mutable containers (lists, sets, mappings) to store basic types (`None`, `bool`, 64-bit `int`, `float`) inline, optimizing memory use and speed.

* NumPy and Torch Support:
    * Supports numpy arrays by creating shared bytes objects coupled with independent arrays.
    * Supports torch tensors by coupling them with shared numpy arrays.

* Custom Class Support:
    * Projects picklable custom class instances (including `dataclass` instances) onto shared dictionaries in shared memory.
    * Modifies the class instance to override attribute access methods, managing data fields within the shared dictionary.
    * Supports classes with or without a `__dict__` attribute.
    * Supports classes with or without a `__slots__` attribute.

* Asyncio Compatibility:
    * Provides a wrapper module for async-await functionality, integrating seamlessly with asyncio.
    * Ensures asynchronous operations work smoothly with the package's lock-free approach.

## Import

To use this package, simply install it via pip:

```shell
pip install InterProcessPyObjects
```

Then import it into your project:

```python
from ipc_py_objects import *
```

## Main principles

* only one process has access to the shared memory at any given time
* working cycle:
    1. work on your tasks
    2. acquire access to shared memory
    3. work with shared memory as fast as possible (read and/or update data structures in shared memory)
    4. release access to shared memory
    5. continue your work on other tasks
* do not forget to manually destroy your shared objects once they are no longer needed
* feel free not to destroy a shared object if you need it for the whole run and/or do not care about wasting shared memory
* data is not preserved between the Creator's sessions. Shared memory is wiped just before the Creator finishes its work with a shared memory instance (the Consumer's session will already be finished at this point)
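The working cycle above can be sketched generically. The `acquire_shared_memory` context manager below is a hypothetical stand-in for the package's real access context manager (which is used the same way: hold the context briefly, then release):

```python
import asyncio
from contextlib import asynccontextmanager

# Hypothetical stand-in for the package's shared-memory access context manager:
# entering the block is step 2 (acquire), leaving it is step 4 (release).
@asynccontextmanager
async def acquire_shared_memory():
    yield {'counter': 0}  # placeholder for the real shared-memory view

async def worker():
    results = []
    for _ in range(3):
        # 1. work on your own tasks
        await asyncio.sleep(0)
        # 2-4. acquire, touch shared state as briefly as possible, release
        async with acquire_shared_memory() as shared:
            shared['counter'] += 1
            results.append(shared['counter'])
        # 5. continue your work on other tasks
    return results

print(asyncio.run(worker()))  # [1, 1, 1]
```

The key point is the shape of the loop, not the stand-in itself: all heavy work happens outside the `async with` block.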

### ! Important about hashmaps

The package currently uses the built-in Python `hash()` call, which is stable within a single interpreter session but differs across interpreter sessions because of random seeding.

In order to use the same seeding across different interpreter instances (and, as a result, be able to use hashmaps), set the `PYTHONHASHSEED` environment variable to a fixed integer value:

<details>
<summary title=".bashrc"><kbd> .bashrc </kbd></summary>

```bash
export PYTHONHASHSEED=0
```

</details>

<details>
<summary title="Your bash script"><kbd> Your bash script </kbd></summary>

```bash
export PYTHONHASHSEED=0
python YOURSCRIPT.py
```

</details>

<details>
<summary title="Terminal"><kbd> Terminal </kbd></summary>

```shell
$ PYTHONHASHSEED=0 python YOURSCRIPT.py
```

</details>
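The effect is easy to verify: with `PYTHONHASHSEED` fixed, a fresh interpreter produces the same `str` hashes every time. A small sketch (the `str_hash_in_new_interpreter` helper is hypothetical, written for this illustration):

```python
import os
import subprocess
import sys

def str_hash_in_new_interpreter(env_seed):
    # Launch a fresh interpreter and print hash('hello') from inside it.
    env = dict(os.environ)
    env['PYTHONHASHSEED'] = str(env_seed)
    out = subprocess.run(
        [sys.executable, '-c', "print(hash('hello'))"],
        capture_output=True, text=True, env=env,
    )
    return int(out.stdout)

# With a fixed seed, str hashes match across interpreter sessions.
print(str_hash_in_new_interpreter(0) == str_hash_in_new_interpreter(0))  # True
```

Without the fixed seed, two runs of the same script will generally disagree on `hash('hello')`, which is exactly what breaks hashmaps shared across processes.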

The random-seeding behavior of the built-in `hash()` call does **not** affect the following data types:
* `None`, `bool`, `int`, `float`, `complex`, `str`, `bytes`, `bytearray`
* `Decimal`, `slice`, `datetime`, `timedelta`, `timezone`, `date`, `time`
* `tuple`, `list`
* `set` wrapped in a `FastLimitedSet` instance: for example, via a `.put_message(FastLimitedSet(my_set_obj))` call
* `dict` wrapped in a `FastLimitedDict` instance: for example, via a `.put_message(FastLimitedDict(my_dict_obj))` call
* instances of custom classes (including `dataclass`) by default: for example, via a `.put_message(my_obj)` call
* instances of custom classes (including `dataclass`) wrapped in `ForceStaticObjectCopy` or `ForceStaticObjectInplace` instances: for example, via a `.put_message(ForceStaticObjectInplace(my_obj))` call

It affects only the following data types: 
* `AbstractSet` (`frozenset`)
* `MutableSet` (`set`)
* `Mapping`
* `MutableMapping` (`dict`)
* instances of custom classes (including `dataclass`) wrapped in `ForceGeneralObjectCopy` or `ForceGeneralObjectInplace` instances: for example, via a `.put_message(ForceGeneralObjectInplace(my_obj))` call

## Examples

* Async examples (with asyncio):
    * [sender.py](https://github.com/FI-Mihej/InterProcessPyObjects/blob/master/example/sender.py)
    * [receiver.py](https://github.com/FI-Mihej/InterProcessPyObjects/blob/master/example/receiver.py)
    * [shared_objects__types.py](https://github.com/FI-Mihej/InterProcessPyObjects/blob/master/example/shared_objects__types.py)

### Receiver.py performance measurements

* CPU: i5-3570@3.40GHz (Ivy Bridge)
* RAM: 32 GBytes, DDR3, dual channel, 655 MHz
* OS: Ubuntu 20.04.6 LTS under WSL2 on Windows 10

```python
async with ashared_memory_context_manager.if_has_messages() as shared_memory:
    # Taking a message with an object from the queue.
    sso: SomeSharedObject = shared_memory.value.take_message()  # 5_833 iterations/seconds

    # We create local variables once in order to access them many times in the future, ensuring high performance.
    # Applying a principle that is widely recommended for improving Python code.
    company_metrics: List = sso.company_info.company_metrics  # 12_479 iterations/seconds
    some_employee: Employee = sso.company_info.some_employee  # 10_568 iterations/seconds
    data_dict: Dict = sso.data_dict  # 16_362 iterations/seconds
    numpy_ndarray: np.ndarray = data_dict['key3']  # 26_223 iterations/seconds

# Optimal work with shared data (through local variables):
async with ashared_memory_context_manager as shared_memory:
    # List
    k = company_metrics[CompanyMetrics.avg_salary]  # 1_535_267 iterations/seconds
    k = company_metrics[CompanyMetrics.employees]  # 1_498_278 iterations/seconds
    k = company_metrics[CompanyMetrics.in_a_good_state]  # 1_154_454 iterations/seconds
    k = company_metrics[CompanyMetrics.websites]  # 380_258 iterations/seconds
    company_metrics[CompanyMetrics.annual_income] = 2_000_000.0  # 1_380_983 iterations/seconds
    company_metrics[CompanyMetrics.employees] = 20  # 1_352_799 iterations/seconds
    company_metrics[CompanyMetrics.avg_salary] = 5_000.0  # 1_300_966 iterations/seconds
    company_metrics[CompanyMetrics.in_a_good_state] = None  # 1_224_573 iterations/seconds
    company_metrics[CompanyMetrics.in_a_good_state] = False  # 1_213_175 iterations/seconds
    company_metrics[CompanyMetrics.avg_salary] += 1.1  # 299_415 iterations/seconds
    company_metrics[CompanyMetrics.employees] += 1  # 247_476 iterations/seconds
    company_metrics[CompanyMetrics.emails] = tuple()  # 55_335 iterations/seconds (memory allocation performance is planned to be improved)
    company_metrics[CompanyMetrics.emails] = ('sails@company.com',)  # 30_314 iterations/seconds (memory allocation performance is planned to be improved)
    company_metrics[CompanyMetrics.emails] = ('sails@company.com', 'support@company.com')  # 20_860 iterations/seconds (memory allocation performance is planned to be improved)
    company_metrics[CompanyMetrics.websites] = ['http://company.com', 'http://company.org']  # 10_465 iterations/seconds (memory allocation performance is planned to be improved)
    
    # Method call on a shared object that changes a property through the method
    some_employee.increase_years_of_employment()  # 80548 iterations/seconds

    # Object properties
    k = sso.int_value  # 850_098 iterations/seconds
    k = sso.str_value  # 228_966 iterations/seconds
    sso.int_value = 200  # 207_480 iterations/seconds
    sso.int_value += 1  # 152_263 iterations/seconds
    sso.str_value = 'Hello. '  # 52_390 iterations/seconds (memory allocation performance is planned to be improved)
    sso.str_value += '!'  # 35_823 iterations/seconds (memory allocation performance is planned to be improved)

    # Numpy.ndarray
    numpy_ndarray += 10  # 403_646 iterations/seconds
    numpy_ndarray -= 15  # 402_107 iterations/seconds

    # Dict
    k = data_dict['key1']  # 87_558 iterations/seconds
    k = data_dict[('key', 2)]  # 49_338 iterations/seconds
    data_dict['key1'] = 200  # 86_744 iterations/seconds
    data_dict['key1'] += 3  # 41_409 iterations/seconds
    data_dict['key1'] *= 1  # 40_927 iterations/seconds
    data_dict[('key', 2)] = 'value2'  # 31_460 iterations/seconds (memory allocation performance is planned to be improved)
    data_dict[('key', 2)] = data_dict[('key', 2)] + 'd'  # 18_972 iterations/seconds (memory allocation performance is planned to be improved)
    data_dict[('key', 2)] = 'value2'  # 10_941 iterations/seconds (memory allocation performance is planned to be improved)
    data_dict[('key', 2)] += 'd'  # 16_568 iterations/seconds (memory allocation performance is planned to be improved)

# An example of non-optimal work with shared data (without using local variables):
async with ashared_memory_context_manager as shared_memory:
    # An example of a non-optimal method call (without using a local variable) that changes a property through the method
    sso.company_info.some_employee.increase_years_of_employment()  # 9_418 iterations/seconds

    # An example of non-optimal work with object properties (without using local variables)
    k = sso.company_info.income  # 20_445 iterations/seconds
    sso.company_info.income = 3_000_000.0  # 13_899 iterations/seconds
    sso.company_info.income *= 1.1  # 17_272 iterations/seconds 
    sso.company_info.income += 500_000.0  # 18_376 iterations/seconds
    
    # Example of non-optimal usage of numpy.ndarray without a proper local variable
    data_dict['key3'] += 10  # 6_319 iterations/seconds

# Notify the sender about the completion of work on the shared object
async with ashared_memory_context_manager as shared_memory:
    sso.some_processing_stage_control = True  # 298_968 iterations/seconds
```

## Reference (and explaining examples line by line)

Code for shared memory Creator side:
```python
ashared_memory_manager: ASharedMemoryManager = ASharedMemoryManager(SharedMemory('shared_memory_identifier', create=True, size=200 * 1024**2))
# Declares creation and initialization of the shared memory instance with a size of 200 MiB.
```

Code for shared memory Consumer side:
```python
ashared_memory_manager: ASharedMemoryManager = ASharedMemoryManager(SharedMemory('shared_memory_identifier'))
# Declares a connection to the shared memory instance.
```

On shared memory Creator side:
```python
async with ashared_memory_manager as asmm:
# Creates and initializes the shared memory instance and waits for the Consumer to connect. Execute it once per run.
# feel free to share either `asmm` or `ashared_memory_manager` across your coroutines
```

On shared memory Consumer side:
```python
async with ashared_memory_manager as asmm:
# Waits for the shared memory to be created and initialized by the Creator. Execute it once per run.
# feel free to share either `asmm` or `ashared_memory_manager` across your coroutines
```

```python
ashared_memory_context_manager: ASharedMemoryContextManager = asmm()
# Creates a shared memory access context manager. Create it once per coroutine; use it in that coroutine as many times as you need.
```

```python
async with ashared_memory_context_manager as shared_memory:
# acquire access to shared memory as soon as possible
```

```python
async with ashared_memory_context_manager.if_has_messages() as shared_memory:
# acquire access to shared memory if message queue is not empty
```

```python
shared_memory  # an instance of the ValueHolder class from the Cengal library
shared_memory.value  # an instance of `SharedMemory`
shared_memory.existence  # bool. Set to `True` at the beginning of each context (`with`) block. Set it to `False`
    # if you want to release the CPU for a short time before the shared memory is acquired next time.
    # If at least one coroutine leaves it as `True`, the next acquire attempt is made immediately, which
    # lowers latency and increases performance but at the same time consumes more CPU time.
    # The default behavior (`True`) is better for CPU-intensive algorithms,
    # while `False` in all of the process's coroutines (each with its own memory access context manager) is better,
    # for example, for desktop or mobile applications.
```

### `SharedMemory` fields and methods you might frequently use in an async approach (in coroutines)

```python
SharedMemory.size  # the actual size of the shared memory
```

```python
SharedMemory.name  # the identifier of the shared memory
```

```python
SharedMemory.create  # `True` on Creator side. `False` on Consumer side
```

```python
obj_mapped = shared_memory.value.put_message(obj)
# Puts the object into shared memory and creates an appropriate message.
# Returns the mapped version of the object if applicable (the same object otherwise).
# The following types return the same object: None, bool, int, float, str, bytes, bytearray, tuple.
```

```python
obj_mapped, shared_obj_offset = shared_memory.value.put_message_2(obj)
# Puts the object into shared memory and creates an appropriate message.
# Returns:
# * The mapped version of the object if applicable (the same object otherwise).
#   The following types return the same object: None, bool, int, float, str, bytes, bytearray, tuple.
# * An offset to the shared object data structure which holds the shared object's content.
```

```python
has_messages: bool = shared_memory.value.has_messages()
# The main way to check whether the internal message queue is empty.
```

```python
obj_mapped = shared_memory.value.take_message()
# Takes (and removes) the latest message from the internal message queue.
# Creates and returns a mapped object from the corresponding shared object data structure.
# Does not delete the shared object data structure itself.
# Returns the mapped version of the object if applicable (a new copy of the object otherwise).
# The following types return a new copy of the object: None, bool, int, float, str, bytes, bytearray, tuple.
# Raises a NoMessagesInQueueError exception if the internal message queue is empty.
```

```python
obj_mapped, shared_obj_offset = shared_memory.value.take_message_2()
# Takes (and removes) the latest message from the internal message queue.
# Creates and returns a mapped object from the corresponding shared object data structure.
# Does not delete the shared object data structure itself.
# Returns:
# * The mapped version of the object if applicable (a new copy of the object otherwise).
#   The following types return a new copy of the object: None, bool, int, float, str, bytes, bytearray, tuple.
# * An offset to the shared object data structure which holds the shared object's content.
# Raises a NoMessagesInQueueError exception if the internal message queue is empty.
```

```python
shared_memory.value.destroy_obj(shared_obj_offset)
# Destroys the shared object data structure which holds the shared object's content.
# Use it in order to free the memory used by your shared object.


shared_memory.value.destroy_object(shared_obj_offset)
# An alias for the `shared_memory.value.destroy_obj()` call.
```

```python
shared_obj_buffer: memoryview = shared_memory.value.get_obj_buffer(shared_obj_offset)
# Returns a `memoryview` into the data section of your shared object.
# Feel free to use this memory for your own needs while the corresponding shared object has not yet been destroyed.
# The following types provide a buffer to their shared data: bytes, bytearray, str, numpy.ndarray (buffer of the shared bytes object mapped by this ndarray), torch.Tensor (buffer of the shared bytes object mapped by this Tensor).
```

### Useful exceptions

```python
class SharedMemoryError:
# The base exception; all other exceptions inherit from it.
```

```python
class FreeMemoryChunkNotFoundError(SharedMemoryError):
"""Indicates that an unpartitioned chunk of free memory of requested size not being found.

    Regarding this error, it’s important to adjust the size parameter in the SharedMemory configuration. Trying to estimate memory consumption down to the byte is not practical because it fails to account for the memory overhead required by each entity stored (such as entity type metadata, pointers to child entities, etc.).

    When setting the size parameter for SharedMemory, consider using broader units like tens (for embedded systems), hundreds, or thousands of megabytes, rather than precise byte counts. This approach is similar to how you would not precisely calculate the amount of memory needed for a web server hosted externally; you make an educated guess, like assuming that 256 MB might be insufficient but 768 MB could be adequate, and then adjust based on practical testing.

    Also, be aware of memory fragmentation, which affects all memory allocation systems, including the OS itself. For example, if you have a SharedMemory pool sized to store exactly ten 64-bit integers, accounting for additional bytes for system information, your total might be around 200 bytes. Initially, after storing the integers, your memory might appear as ["int", "int", ..., "int"]. If you delete every second integer, the largest contiguous free memory chunk could be just 10 bytes, despite having 50 bytes free in total. This fragmentation means you cannot store a larger data structure like a 20-byte string which needs contiguous space.

    To resolve this, simply increase the size parameter value of SharedMemory. This is akin to how you would manage memory allocation for server hosting or thread stack sizes in software development.
"""
```

```python
NoMessagesInQueueError
# The following calls can raise it if the internal message queue is empty: take_message(), take_message_2(), read_message(), read_message_2()
```

## Performance tips

<details>
<summary title="Data structures"><kbd> Data structures </kbd></summary>

### Data structures

It is recommended to use `IntEnum`+`list`-based data structures instead of dictionaries, or even instead of custom class instances (including `dataclass`), if you want the best performance.

For example, instead of operating with a dict:

<details>
<summary title="Example"><kbd> Example </kbd></summary>

Message sender

```python
company_metrics: Dict[str, Any] = {
    'websites': ['http://company.com', 'http://company.org'],
    'avg_salary': 3_000.0,
    'employees': 10,
    'in_a_good_state': True,
}
company_metrics_mapped: List = shared_memory.value.put_message(company_metrics)
```

Message receiver

```python
company_metrics: Dict[str, Any] = shared_memory.value.take_message()
k = company_metrics['employees']  # 87_558 iterations/seconds
company_metrics['employees'] = 200  # 86_744 iterations/seconds
company_metrics['employees'] += 3  # 41_409 iterations/seconds
```

</details>
<br>

or even instead of operating with a dataclass (classes by default operate faster than dicts):

<details>
<summary title="Example"><kbd> Example </kbd></summary>

Message sender

```python
@dataclass
class CompanyMetrics:
    income: float
    employees: int
    avg_salary: float
    annual_income: float
    in_a_good_state: bool
    emails: Tuple
    websites: List[str]

company_metrics: CompanyMetrics = CompanyMetrics(
    income=1.4,
    employees=12,
    avg_salary=35.0,
    annual_income=30_000.0,
    in_a_good_state=False,
    emails=('sails@company.com', 'support@company.com'),
    websites=['http://company.com', 'http://company.org'],
)
company_metrics_mapped: CompanyMetrics = shared_memory.value.put_message(company_metrics)
```

Message receiver

```python
company_metrics: CompanyMetrics = shared_memory.value.take_message()
k = company_metrics.employees  # 850_098 iterations/seconds
company_metrics.employees = 200  # 207_480 iterations/seconds
company_metrics.employees += 1  # 152_263 iterations/seconds
```

</details>
<br>

it is more beneficial to operate with a list and appropriate `IntEnum` indexes:

<details>
<summary title="Example"><kbd> Example </kbd></summary>

Message sender:

```python
class CompanyMetrics(IntEnum):
    income = 0
    employees = 1
    avg_salary = 2
    annual_income = 3
    in_a_good_state = 4
    emails = 5
    websites = 6

company_metrics: List = intenum_dict_to_list({  # lists with IntEnum indexes are blazing-fast alternative to dictionaries
    CompanyMetrics.websites: ['http://company.com', 'http://company.org'],
    CompanyMetrics.avg_salary: 3_000.0,
    CompanyMetrics.employees: 10,
    CompanyMetrics.in_a_good_state: True,
})  # Unmentioned fields will be filled with None values
company_metrics_mapped: List = shared_memory.value.put_message(company_metrics)
```

Message receiver:

```python
company_metrics: List = shared_memory.value.take_message()
k = company_metrics[CompanyMetrics.avg_salary]  # 1_535_267 iterations/seconds
company_metrics[CompanyMetrics.avg_salary] = 5_000.0  # 1_300_966 iterations/seconds
company_metrics[CompanyMetrics.avg_salary] += 1.1  # 299_415 iterations/seconds
```

</details>
<br>
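For reference, here is a minimal sketch of what a helper like `intenum_dict_to_list` might do. This is an assumption for illustration, not the Cengal implementation; the only contract assumed is "place each value at its enum index and pad the gaps with `None`":

```python
from enum import IntEnum
from typing import Any, Dict, List

class CompanyMetrics(IntEnum):
    income = 0
    employees = 1
    avg_salary = 2

def intenum_dict_to_list(data: Dict[IntEnum, Any]) -> List[Any]:
    # Size the list to the largest mentioned index; pad the gaps with None.
    result: List[Any] = [None] * (max(int(key) for key in data) + 1)
    for key, value in data.items():
        result[int(key)] = value
    return result

metrics = intenum_dict_to_list({
    CompanyMetrics.employees: 10,
    CompanyMetrics.avg_salary: 3_000.0,
})
print(metrics)  # [None, 10, 3000.0]
```

Reads then become plain list indexing (`metrics[CompanyMetrics.employees]`), which is what makes this layout so fast.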

</details>

<details>
<summary title="Sets"><kbd> Sets </kbd></summary>

### Sets

You can use the `FastLimitedSet` wrapper for your set in order to get much faster shared sets.

Just wrap your set with `FastLimitedSet`:

```python
my_obj: List = [
    True,
    2,
    FastLimitedSet({
        'Hello ',
        'World',
        3,
    })
]
my_obj_mapped = shared_memory.value.put_message(my_obj)
```

Drawback of this approach: only the initial set of items is shared. Changes made to the mapped object (added or deleted items) are not shared and are not visible to the other process.

</details>

<details>
<summary title="Dictionaries"><kbd> Dictionaries </kbd></summary>

### Dictionaries

You can use the `FastLimitedDict` wrapper for your dict in order to get much faster shared dictionaries.

Just wrap your dictionary with `FastLimitedDict`:

```python
my_obj: List = [
    True,
    2,
    FastLimitedDict({
        1: 'Hello ',
        '2': 'World',
        3: np.array([1, 2, 3], dtype=np.int32),
    })
]
my_obj_mapped = shared_memory.value.put_message(my_obj)
```

Drawback of this approach: only the initial set of key-value pairs is shared. Added, updated, or deleted key-value pairs are not shared, and such changes are not visible to the other process.

</details>

<details>
<summary title="Custom classes (including `dataclass`)"><kbd> Custom classes (including `dataclass`) </kbd></summary>

### Custom classes (including `dataclass`)

By default, shared custom class instances (including `dataclass` instances) have a static set of attributes (similar to instances of classes with `__slots__`). That means new attributes dynamically added to the mapped object will not become shared. This behavior increases performance.

<details>
<summary title="For example"><kbd> For example </kbd></summary>

#### For example

```python
@dataclass
class SomeSharedObject:
    some_processing_stage_control: bool
    int_value: int
    str_value: str
    data_dict: Dict[Hashable, Any]
    company_info: CompanyInfo

my_obj: List = [
    True,
    2,
    SomeSharedObject(
        some_processing_stage_control=False,
        int_value=18,
        str_value='Hello, ',
        data_dict=None,
        company_info=None,
    ),
]
my_obj_mapped: List = shared_memory.value.put_message(my_obj)

my_obj_mapped[2].some_new_attribute = 'Hi!'  # this attribute will NOT become shared and as a result will not be accessible from the other process
```

</details>

If you need to share a class instance with the ability to add new shared attributes to its mapped instance, you can wrap your object in either `ForceGeneralObjectCopy` or `ForceGeneralObjectInplace`:

<details>
<summary title="For example"><kbd> For example </kbd></summary>

#### For example

```python
@dataclass
class SomeSharedObject:
    some_processing_stage_control: bool
    int_value: int
    str_value: str
    data_dict: Dict[Hashable, Any]
    company_info: CompanyInfo

my_obj: List = [
    True,
    2,
    ForceGeneralObjectInplace(SomeSharedObject(
        some_processing_stage_control=False,
        int_value=18,
        str_value='Hello, ',
        data_dict=None,
        company_info=None,
    )),
]
my_obj_mapped: List = shared_memory.value.put_message(my_obj)

my_obj_mapped[2].some_new_attribute = 'Hi!'  # this attribute WILL become shared and as a result WILL be seen by the other process
```

</details>

Difference between `ForceGeneralObjectCopy` and `ForceGeneralObjectInplace`:
* `ForceGeneralObjectInplace`: a `my_obj_mapped = shared_memory.value.put_message(ForceGeneralObjectInplace(my_obj))` call will change the class of the original `my_obj` object, and `my_obj is my_obj_mapped` holds.
* `ForceGeneralObjectCopy`: a `my_obj_mapped = shared_memory.value.put_message(ForceGeneralObjectCopy(my_obj))` call will not change the original `my_obj` object; `my_obj_mapped` will be constructed from scratch.
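The identity semantics of the two strategies can be illustrated with a toy model. This is not the package's internal mechanism, only a sketch of the observable contract described above:

```python
# Toy model of the Inplace vs Copy identity semantics (not the package's internals).
class Original:
    def __init__(self):
        self.x = 1

class MappedVariant(Original):
    pass

def put_inplace(obj):
    # "Inplace" strategy: retarget the original object's class; identity is preserved.
    obj.__class__ = MappedVariant
    return obj

def put_copy(obj):
    # "Copy" strategy: build a fresh mapped object; the original stays untouched.
    new_obj = MappedVariant()
    new_obj.__dict__.update(obj.__dict__)
    return new_obj

a = Original()
print(put_inplace(a) is a, type(a) is MappedVariant)  # True True

b = Original()
b_mapped = put_copy(b)
print(b_mapped is not b, type(b) is Original)  # True True
```

With the Inplace variant, code that still holds the original reference automatically sees the mapped behavior; with Copy, the original stays usable as a plain local object.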

You can also tune the default behavior by wrapping your object in either `ForceStaticObjectCopy` or `ForceStaticObjectInplace`.

Difference between `ForceStaticObjectCopy` and `ForceStaticObjectInplace`:
* `ForceStaticObjectInplace`: a `my_obj_mapped = shared_memory.value.put_message(ForceStaticObjectInplace(my_obj))` call will change the class of the original `my_obj` object, and `my_obj is my_obj_mapped` holds.
* `ForceStaticObjectCopy`: a `my_obj_mapped = shared_memory.value.put_message(ForceStaticObjectCopy(my_obj))` call will not change the original `my_obj` object; `my_obj_mapped` will be constructed from scratch.

</details>

## How to choose shared memory size

<details>
<summary title="How to choose shared memory size"><kbd> How to choose shared memory size </kbd></summary>

When setting the size parameter for SharedMemory, consider using broader units like tens (for embedded systems), hundreds, or thousands of megabytes, rather than precise byte counts. This approach is similar to how you would not precisely calculate the amount of memory needed for a web server hosted externally; you make an educated guess, like assuming that 256 MB might be insufficient but 768 MB could be adequate, and then adjust based on practical testing.

Also, be aware of memory fragmentation, which affects all memory allocation systems, including the OS itself. For example, if you have a SharedMemory pool sized to store exactly ten 64-bit integers, accounting for additional bytes for system information, your total might be around 200 bytes. Initially, after storing the integers, your memory might appear as ["int", "int", ..., "int"]. If you delete every second integer, the largest contiguous free memory chunk could be just 10 bytes, despite having 50 bytes free in total. This fragmentation means you cannot store a larger data structure like a 20-byte string which needs contiguous space.

To resolve this, simply increase the size parameter value of SharedMemory. This is akin to how you would manage memory allocation for server hosting or thread stack sizes in software development.
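
The fragmentation effect described above can be reproduced with a toy slot pool (the slot size and pool layout here are illustrative numbers, not the package's actual allocator):

```python
def largest_free_run(slots):
    """Length (in slots) of the largest contiguous run of free slots."""
    best = cur = 0
    for slot in slots:
        cur = cur + 1 if slot is None else 0
        best = max(best, cur)
    return best

SLOT_SIZE = 8                  # a toy pool of ten 8-byte integer slots
slots = ['int'] * 10           # pool starts completely full
for i in range(0, 10, 2):
    slots[i] = None            # free every second integer

total_free = slots.count(None) * SLOT_SIZE           # 40 bytes free in total...
largest_chunk = largest_free_run(slots) * SLOT_SIZE  # ...but only 8 contiguous

# A 16-byte object needs contiguous space and will not fit,
# even though 40 bytes are nominally free.
assert total_free == 40 and largest_chunk == 8
```

This is why sizing the pool generously is the practical fix: extra headroom keeps large contiguous regions available despite fragmentation.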

</details>

## Benchmarks

<details>
<summary title="System"><kbd> System </kbd></summary>

* CPU: i5-3570@3.40GHz (Ivy Bridge)
* RAM: 32 GBytes, DDR3, dual channel, 655 MHz
* OS: Ubuntu 20.04.6 LTS under WSL2 (Windows 10)

</details>

### Throughput GiB/s

![Throughput GiB/s](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartThroughputGiBs.png)

<details>
<summary title="Benchmarks results"><kbd> Benchmarks results </kbd></summary>

#### Reference results (sysbench)

```bash
sysbench memory --memory-oper=write run
```

```
5499.28 MiB/sec
```

#### Results

![Throughput GiB/s](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartThroughputGiBs.png)

`*` [multiprocessing.shared_memory.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__shared_memory.py) - a simple implementation, using the same approach as the `uvloop.*`, `asyncio.*`, `multiprocessing.Queue`, and `multiprocessing.Pipe` benchmarking scripts. Similar implementations are expected to be used by the majority of projects.

#### Benchmarks results table

| Approach                        | sync/async | Throughput GiB/s |
|---------------------------------|------------|------------------|
| InterProcessPyObjects (sync)    | sync       | 3.770            |
| InterProcessPyObjects + uvloop  | async      | 3.222            |
| InterProcessPyObjects + asyncio | async      | 3.079            |
| multiprocessing.shared_memory   | sync       | 2.685            |
| uvloop.UnixDomainSockets        | async      | 0.966            |
| asyncio + cengal.Streams        | async      | 0.942            |
| uvloop.Streams                  | async      | 0.922            |
| asyncio.Streams                 | async      | 0.784            |
| asyncio.UnixDomainSockets       | async      | 0.708            |
| multiprocessing.Queue           | sync       | 0.669            |
| multiprocessing.Pipe            | sync       | 0.469            |

#### Benchmark scripts

* InterProcessPyObjects - Sync:
    * [sender.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_sync__sender.py)
    * [receiver.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_sync__receiver.py)
* InterProcessPyObjects - Async (uvloop):
    * [sender.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_uvloop__sender.py)
    * [receiver.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_uvloop__receiver.py)
* InterProcessPyObjects - Async (asyncio):
    * [sender.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_asyncio__sender.py)
    * [receiver.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_asyncio__receiver.py)
* [multiprocessing.shared_memory.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__shared_memory.py)
* [uvloop.UnixDomainSockets.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__uvloop_unix_domain_sockets.py)
* [asyncio_with_cengal.Streams.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__cengal_efficient_streams.py)
* [uvloop.Streams.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__uvloop_streams.py)
* [asyncio.Streams.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__asyncio_streams.py)
* [asyncio.UnixDomainSockets.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__asyncio_unix_domain_sockets.py)
* [multiprocessing.Queue.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__multiprocess_queue.py)
* [multiprocessing.Pipe.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__multiprocess_pipe.py)

</details>

### Shared Dict Performance

![Dict performance comparison](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartDictPerformanceComparison.png)

<details>
<summary title="Benchmarks results"><kbd> Benchmarks results </kbd></summary>

#### Competitors

##### 1. multiprocessing.Manager - dict

Pros:

* Part of the standard library

Cons:

* Slowest solution
* Values are read-only unless they are explicit instances of `multiprocessing.Manager`-supported proxy types (types inherited from `multiprocessing.managers.BaseProxy`, like `DictProxy` or `ListProxy`)

Benchmark scripts:

* [dict__python__multiprocess_dict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__python__multiprocess_dict.py)
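
The read-only-values caveat can be demonstrated with the standard library alone: a plain list stored in a `Manager` dict is pickled into the manager process, so mutating the value you read back changes only a local copy. An illustrative sketch:

```python
from multiprocessing import Manager

if __name__ == '__main__':
    with Manager() as manager:
        d = manager.dict()

        d['metrics'] = [1, 2, 3]  # the list is pickled into the manager process
        d['metrics'].append(4)    # mutates a local copy; the change is silently lost
        assert list(d['metrics']) == [1, 2, 3]

        # For mutable values you must use an explicit manager proxy type:
        d['metrics'] = manager.list([1, 2, 3])
        d['metrics'].append(4)    # goes through the ListProxy, so it sticks
        assert list(d['metrics']) == [1, 2, 3, 4]
```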

##### 2. [UltraDict](https://github.com/ronny-rentner/UltraDict)

Pros:

* Relatively fast writes
* Fast repeated reads of unchanged values
* Has a built-in inter-process synchronization mechanism

Cons:

* Relies on pickle for each change
* Non-dictionary values are read-only from the multiprocessing perspective

Benchmark scripts:

* [dict__thirdparty__ultradict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__thirdparty__ultradict.py)

##### 3. [shared_memory_dict](https://github.com/luizalabs/shared-memory-dict)

Pros:

* At least it is faster than the 'multiprocessing.Manager - dict' solution

Cons:

* Second slowest solution
* Relies on pickle for each change
* Values (even lists or dicts) are read-only from the multiprocessing perspective
* Cannot be initialized from another `dict`, or even from another `SharedMemoryDict` instance: you need to manually put each key-value pair into it in a loop
* Has known issues with inter-process synchronization. It is better for developers to use their own external inter-process synchronization mechanisms (from `multiprocessing.Manager`, for example)

Benchmark scripts:

* [dict__thirdparty__shared_memory_dict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__thirdparty__shared_memory_dict.py)

##### 4. InterProcessPyObjects - IntEnumListStruct

Pros:

* Fastest solution: 15.6 times faster than `UltraDict`
* Good for structures: when all fields are known in advance
* Supports any of InterProcessPyObjects' supported data types as values
* Values of mutable types are naturally mutable: developers do not need to prepare or change their data explicitly

Cons:

* Can use only `IntEnum` (int) keys
* Size is fixed by the size of the provided `IntEnum`

Benchmark scripts:

* [dict__shared_objects__intenum_list.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__shared_objects__intenum_list.py)

Message sender example:

```python
from enum import IntEnum
from typing import List

from ipc_py_objects import *  # provides intenum_dict_to_list, SharedMemory, etc.


class CompanyMetrics(IntEnum):
    income = 0
    employees = 1
    avg_salary = 2
    annual_income = 3
    in_a_good_state = 4
    emails = 5
    websites = 6

company_metrics: List = intenum_dict_to_list({  # lists with IntEnum indexes are a blazing-fast alternative to dictionaries
    CompanyMetrics.websites: ['http://company.com', 'http://company.org'],
    CompanyMetrics.avg_salary: 3_000.0,
    CompanyMetrics.employees: 10,
    CompanyMetrics.in_a_good_state: True,
})  # Unmentioned fields will be filled with None values
company_metrics_mapped: List = shared_memory.value.put_message(company_metrics)
```

Message receiver example:

```python
company_metrics: List = shared_memory.value.take_message()
k = company_metrics[CompanyMetrics.avg_salary]
company_metrics[CompanyMetrics.avg_salary] = 5_000.0
company_metrics[CompanyMetrics.avg_salary] += 1.1
```
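
For intuition, here is a minimal stdlib-only sketch of what a helper like `intenum_dict_to_list` presumably does (illustrative only, not the package's actual implementation): it expands an `IntEnum`-keyed dict into a plain list indexed by the enum values, filling unmentioned fields with `None`.

```python
from enum import IntEnum
from typing import Dict, List

def intenum_dict_to_list_sketch(data: Dict[IntEnum, object]) -> List:
    # The list length is fixed by the size of the IntEnum, which is why
    # the resulting structure has a fixed set of fields.
    enum_cls = type(next(iter(data)))
    result: List = [None] * len(enum_cls)
    for key, value in data.items():
        result[key] = value  # IntEnum members are ints, so they index directly
    return result

class Metrics(IntEnum):
    income = 0
    employees = 1
    avg_salary = 2

row = intenum_dict_to_list_sketch({Metrics.employees: 10})
# row is [None, 10, None]: a list lookup by int index instead of dict hashing
```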

##### 5. InterProcessPyObjects - Dataclass

Pros:

* Fast. Second fastest solution: around 2 times faster than `UltraDict`
* Good for structures and objects: when all fields are known in advance
* Works with objects naturally, without explicit preparation or changes on the developer's side
* Supports any of InterProcessPyObjects' supported data types as values
* Supports `dataclass` instances as well as other objects
* Supports object methods
* Fields of mutable types are naturally mutable: developers do not need to prepare or change their data explicitly
* Does not rely on frequent pickling: uses pickle only for the initial object construction, not for field updates

Cons:

* The set of fields is fixed by the fields of the original object

Benchmark scripts:

* [dict__shared_objects__static_obj.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__shared_objects__static_obj.py)
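
The "projection onto a shared dictionary" idea mentioned earlier (overriding attribute access so fields live in shared memory) can be illustrated with plain Python. This is an illustrative sketch, not the package's implementation; an ordinary dict stands in for shared memory here.

```python
class ProjectedView:
    """Redirects attribute reads/writes to an external storage dict."""

    def __init__(self, storage: dict):
        # Bypass our own __setattr__ while wiring up the backing storage.
        object.__setattr__(self, '_storage', storage)

    def __getattr__(self, name):
        # Called only for names not found normally (i.e. data fields).
        try:
            return self._storage[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._storage[name] = value  # every field write lands in the backing dict

shared_like = {}                 # stands in for a dict living in shared memory
view = ProjectedView(shared_like)
view.income = 1_000_000.0        # attribute write goes to the backing dict
assert shared_like['income'] == 1_000_000.0
shared_like['employees'] = 10    # "another process" updates the backing dict...
assert view.employees == 10      # ...and the attribute read sees it immediately
```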

##### 6. InterProcessPyObjects - Dict

Pros:

* Slightly faster than `SharedMemoryDict`
* Works with objects naturally, without explicit preparation or changes on the developer's side
* Supports any of InterProcessPyObjects' supported data types as values
* Supports any of InterProcessPyObjects' supported hashable data types as keys
* Values of mutable types are naturally mutable: developers do not need to prepare or change their data explicitly
* Does not rely on pickle
* Speed optimizations are architected and planned for implementation

Cons:

* Speed optimization implementation is in progress and not yet released

Benchmark scripts:

* [dict__shared_objects__dict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__shared_objects__dict.py)

#### Results

![Dict performance comparison](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartDictPerformanceComparison.png)

#### Benchmarks results table

| Approach                                  | increments/s |
|-------------------------------------------|--------------|
| InterProcessPyObjects - IntEnumListStruct | 1189730      |
| InterProcessPyObjects - Dataclass         | 143091       |
| UltraDict                                 | 76214        |
| InterProcessPyObjects - Dict              | 44285        |
| SharedMemoryDict                          | 42862        |
| multiprocessing.Manager - dict            | 2751         |

</details>

## Todo

- [ ] Connect more than two processes
- [ ] Use fast third-party hashing implementations instead of, or in addition to, the built-in `hash()` call
- [ ] Continuous performance improvements

## Conclusion

This Python package provides a robust solution for inter-process communication, supporting a variety of Python data structures, types, and third-party libraries. Its lock-free synchronization and asyncio compatibility make it an ideal choice for high-performance, concurrent execution.

# Based on [Cengal](https://github.com/FI-Mihej/Cengal)

This is a stand-alone package for a specific Cengal module. The package is designed to let users install specific Cengal functionality without the burden of the library's full set of dependencies.

The core of this approach lies in our 'cengal-light' package, which houses both Python and compiled Cengal modules. The 'cengal' package itself serves as a lightweight shell, devoid of its own modules, but dependent on 'cengal-light[full]' for a complete Cengal library installation with all required dependencies.

An equivalent import:
```python
from cengal.hardware.memory.shared_memory import *
from cengal.parallel_execution.asyncio.ashared_memory_manager import *
```

The full Cengal library can be installed with:

```bash
pip install cengal
```

https://github.com/FI-Mihej/Cengal

https://pypi.org/project/cengal/


# Projects using Cengal

* [CengalPolyBuild](https://github.com/FI-Mihej/CengalPolyBuild) - A Comprehensive and Hackable Build System for Multilingual Python Packages: Cython (including automatic conversion from Python to Cython), C/C++, Objective-C, Go, and Nim, with ongoing expansions to include additional languages. (Planned to be released soon) 
* [cengal_app_dir_path_finder](https://github.com/FI-Mihej/cengal_app_dir_path_finder) - A Python module offering a unified API for easy retrieval of OS-specific application directories, enhancing data management across Windows, Linux, and macOS 
* [cengal_cpu_info](https://github.com/FI-Mihej/cengal_cpu_info) - Extended, cached CPU info with consistent output format.
* [cengal_memory_barriers](https://github.com/FI-Mihej/cengal_memory_barriers) - Fast cross-platform memory barriers for Python.
* [flet_async](https://github.com/FI-Mihej/flet_async) - wrapper which makes [Flet](https://github.com/flet-dev/flet) async and brings both Cengal.coroutines and asyncio to Flet (Flutter based UI)
* [justpy_containers](https://github.com/FI-Mihej/justpy_containers) - wrapper around [JustPy](https://github.com/justpy-org/justpy) in order to bring more security and more production-needed features to JustPy (VueJS based UI)
* [Bensbach](https://github.com/FI-Mihej/Bensbach) - decompiler from Unreal Engine 3 bytecode to a Lisp-like script and compiler back to Unreal Engine 3 bytecode. Made for game modding purposes
* [Realistic-Damage-Model-mod-for-Long-War](https://github.com/FI-Mihej/Realistic-Damage-Model-mod-for-Long-War) - Mod for both the original XCOM:EW and the mod Long War. Made with Bensbach, which was made with Cengal
* [SmartCATaloguer.com](http://www.smartcataloguer.com/index.html) - TagDB based catalog of images (tags), music albums (genre tags) and apps (categories)

# License

Copyright © 2012-2024 ButenkoMS. All rights reserved.

Licensed under the Apache License, Version 2.0.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "InterProcessPyObjects",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "Android, IPC, Linux, Windows, cengal, crossplatform, iOS, inter-process communication, macOS, multiprocessing, shared dict, shared memory, shared numpy ndarray, shared object, shared objects, shared set, shared torch Tensor",
    "author": null,
    "author_email": "ButenkoMS <gtalk@butenkoms.space>",
    "download_url": "https://files.pythonhosted.org/packages/eb/a5/1add6cb55b93747690900951361997f8db3f6410d2b57d94fc81dfd05a1a/interprocesspyobjects-1.0.7.tar.gz",
    "platform": null,
    "description": "![GitHub tag (with filter)](https://img.shields.io/github/v/tag/FI-Mihej/InterProcessPyObjects) ![Static Badge](https://img.shields.io/badge/OS-Linux_%7C_Windows_%7C_macOS-blue)\n\n![PyPI - Version](https://img.shields.io/pypi/v/InterProcessPyObjects) ![PyPI - Format](https://img.shields.io/pypi/format/cengal-light?color=darkgreen) ![Static Badge](https://img.shields.io/badge/wheels-Linux_%7C_Windows_%7C_macOS-blue) ![Static Badge](https://img.shields.io/badge/Architecture-x86__64_%7C_ARM__64-blue) ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cengal-light) ![Static Badge](https://img.shields.io/badge/PyPy-3.8_%7C_3.9_%7C_3.10-blue) ![PyPI - Implementation](https://img.shields.io/pypi/implementation/cengal-light) \n\n![GitHub License](https://img.shields.io/github/license/FI-Mihej/InterProcessPyObjects?color=darkgreen) ![Static Badge](https://img.shields.io/badge/API_status-Stable-darkgreen)\n\n# InterProcessPyObjects package\n\n> InterProcessPyObjects is a part of the [Cengal](https://github.com/FI-Mihej/Cengal) library. If you have any questions or would like to participate in discussions, feel free to join the [Cengal Discord](https://discord.gg/TAy7xNgR). Your support and involvement are greatly appreciated as Cengal evolves.\n\nThis high-performance package delivers blazing-fast inter-process communication through shared memory, enabling Python objects to be shared across processes with exceptional efficiency. By minimizing the need for frequent serialization-deserialization, it enhances overall speed and responsiveness. 
The package offers a comprehensive suite of functionalities designed to support a diverse array of Python types and facilitate asynchronous IPC, optimizing performance for demanding applications.\n\n![Throughput GiB/s](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartThroughputGiBs.png)\n\n![Dict performance comparison](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartDictPerformanceComparison.png)\n\n## API State\n\nStable. Guaranteed to not have braking changes in the future (see bellow for details).\n\nAny hypothetical further API-breaking changes will lead to new module creation within the package. An old version will continue its existence and continue to be importable by an explicit address (see Details bellow).\n\n<details>\n<summary title=\"Details\"><kbd> Details </kbd></summary>\n\nThe current (currently latest) version can be imported either by:\n\n```python\nfrom ipc_py_objects import *\n```\n\nor by\n\n```python\nfrom ipc_py_objects.versions.v_1 import *\n```\n\nIf further braking changes will be made to the API - a new (`v_2`) version will be made. As result:\n\nCurrent (`v_1`) version will continue to be accessible by an explicit address:\n\n```python\nfrom ipc_py_objects.versions.v_1 import *\n```\n\nLatest (`v_2`) version will be accessible by either: \n\n```python\nfrom ipc_py_objects import *\n```\n\nor by\n\n```python\nfrom ipc_py_objects.versions.v_2 import *\n```\n\nThis is a general approach across the entire [Cengal](https://github.com/FI-Mihej/Cengal) library. It gives me the ability to effectively work on its huge codebase, even by myself.\n\nBy the way. I'm finishing an implementation of [CengalPolyBuild](https://github.com/FI-Mihej/CengalPolyBuild) - my package creation system which provides same approach to users. 
It is a comprehensive and hackable build system for multilingual Python packages: Cython (including automatic conversion from Python to Cython), C/C++, Objective-C, Go, and Nim, with ongoing expansions to include additional languages. Basically, it will provide easy access to all the same features I'm already using in the Cengal library package creation and management processes.\n\n</details>\n\n## Key Features\n\n* Shared Memory Communication:\n    * Enables sharing of Python objects directly between processes using shared memory.\n    * Utilizes a linked list of global messages to inform connected processes about new shared objects.\n\n* Lock-Free Synchronization:\n    * Uses memory barriers for efficient communication, avoiding slow syscalls.\n    * Ensures each process can access and modify shared memory without contention.\n\n* Supported Python Types:\n    * Handles various Python data structures including:\n        * Basic types: `None`, `bool`, 64-bit `int`, large `int` (arbitrary precision integers), `float`, `complex`, `bytes`, `bytearray`, `str`.\n        * Standard types: `Decimal`, `slice`, `datetime`, `timedelta`, `timezone`, `date`, `time`\n        * Containers: `tuple`, `list`, classes inherited from: `AbstractSet` (`frozenset`), `MutableSet` (`set`), `Mapping` and `MutableMapping` (`dict`).\n        * Pickable classes instances: custom classes including `dataclass`\n    * Allows mutable containers (lists, sets, mappings) to save basic types (`None`, `bool`, 64 bit `int`, `float`) internally, optimizing memory use and speed.\n\n* NumPy and Torch Support:\n    * Supports numpy arrays by creating shared bytes objects coupled with independent arrays.\n    * Supports torch tensors by coupling them with shared numpy arrays.\n\n* Custom Class Support:\n    * Projects pickable custom classes instances (including `dataclasses`) onto shared dictionaries in shared memory.\n    * Modifies the class instance to override attribute access methods, managing data 
fields within the shared dictionary.\n    * supports classes with or without `__dict__` attr\n    * supports classes with or without `__slots__` attr\n\n* Asyncio Compatibility:\n    * Provides a wrapper module for async-await functionality, integrating seamlessly with asyncio.\n    * Ensures asynchronous operations work smoothly with the package's lock-free approach.\n\n## Import\n\nTo use this package, simply install it via pip:\n\n```shell\npip install InterProcessPyObjects\n```\n\nThen import it into your project:\n\n```python\nfrom ipc_py_objects import *\n```\n\n## Main principles\n\n* only one process has access to the shared memory at the same time\n* working cycle:\n    1. work on your tasks\n    2. acquire access to shared memory\n    3. work with shared memory as fast as possible (read and/or update data structures in shared memory)\n    4. release access to shared memory\n    5. continue your work on other tasks\n* do not forget to manually destroy your shared objects when they are not needed already\n* feel free to not destroy your shared object if you need it for a whole run and/or do not care about the shared memory waste\n* data will not be preserved between Creator's sessions. Shared memory will be wiped just before Creator finished its work with a shared memory instance (Consumer's session will be finished already at this point)\n\n### ! 
Important about hashmaps\n\nPackage, currently, uses Python `hash()` call which is reliable across interpreter session but unreliable across different interpreter sessions because of random seeding.\n\nIn order to use same seeding across different interpreter instances (and as result, be able to use hashmaps) you can set 'PYTHONHASHSEED` env var to some fixed integer value\n\n<details>\n<summary title=\".bashrc\"><kbd> .bashrc </kbd></summary>\n\n```bash\nexport PYTHONHASHSEED=0\n```\n\n</details>\n\n<details>\n<summary title=\"Your bash script\"><kbd> Your bash script </kbd></summary>\n\n```bash\nexport PYTHONHASHSEED=0\npython YOURSCRIPT.py\n```\n\n</details>\n\n<details>\n<summary title=\"Terminal\"><kbd> Terminal </kbd></summary>\n\n```shell\n$ PYTHONHASHSEED=0 python YOURSCRIPT.py\n```\n\n</details>\n\nAn issue with the behavior of an integrated `hash()` call **does Not** affect the following data types:\n* `None`, `bool`, `int`, `float`, `complex`, `str`, `bytes`, `bytearray`\n* `Decimal`, `slice`, `datetime`, `timedelta`, `timezone`, `date`, `time`\n* `tuple`, `list`\n* `set` wrapped by `FastLimitedSet` class instance: for example by using `.put_message(FastLimitedSet(my_set_obj))` call\n* `dict` wrapped by `FastLimitedDict` class instance: for example by using `.put_message(FastLimitedDict(my_dict_obj))` call\n* an instances of custom classes including `dataclass` by default: for example by using `.put_message(my_obj)` call\n* an instances of custom classes including `dataclass` wrapped by `ForceStaticObjectCopy` or `ForceStaticObjectInplace` class instances. For example by using `.put_message(ForceStaticObjectInplace(my_obj))` call\n\nIt affects only the following data types: \n* `AbstractSet` (`frozenset`)\n* `MutableSet` (`set`)\n* `Mapping`\n* `MutableMapping` (`dict`)\n* an instances of custom classes including `dataclass` wrapped by `ForceGeneralObjectCopy` or `ForceGeneralObjectInplace` class instances. 
For example by using `.put_message(ForceGeneralObjectInplace(my_obj))` call\n\n## Examples\n\n* An async examples (with asyncio):\n    * [sender.py](https://github.com/FI-Mihej/InterProcessPyObjects/blob/master/example/sender.py)\n    * [receiver.py](https://github.com/FI-Mihej/InterProcessPyObjects/blob/master/example/receiver.py)\n    * [shared_objects__types.py](https://github.com/FI-Mihej/InterProcessPyObjects/blob/master/example/shared_objects__types.py)\n\n### Receiver.py performance measurements\n\n* CPU: i5-3570@3.40GHz (Ivy Bridge)\n* RAM: 32 GBytes, DDR3, dual channel, 655 MHz\n* OS: Ubuntu 20.04.6 LTS under WSL2. Windows 10\n\n```python\nasync with ashared_memory_context_manager.if_has_messages() as shared_memory:\n    # Taking a message with an object from the queue.\n    sso: SomeSharedObject = shared_memory.value.take_message()  # 5_833 iterations/seconds\n\n    # We create local variables once in order to access them many times in the future, ensuring high performance.\n    # Applying a principle that is widely recommended for improving Python code.\n    company_metrics: List = sso.company_info.company_metrics  # 12_479 iterations/seconds\n    some_employee: Employee = sso.company_info.some_employee  # 10_568 iterations/seconds\n    data_dict: Dict = sso.data_dict  # 16_362 iterations/seconds\n    numpy_ndarray: np.ndarray = data_dict['key3']  # 26_223 iterations/seconds\n\n# Optimal work with shared data (through local variables):\nasync with ashared_memory_context_manager as shared_memory:\n    # List\n    k = company_metrics[CompanyMetrics.avg_salary]  # 1_535_267 iterations/seconds\n    k = company_metrics[CompanyMetrics.employees]  # 1_498_278 iterations/seconds\n    k = company_metrics[CompanyMetrics.in_a_good_state]  # 1_154_454 iterations/seconds\n    k = company_metrics[CompanyMetrics.websites]  # 380_258 iterations/seconds\n    company_metrics[CompanyMetrics.annual_income] = 2_000_000.0  # 1_380_983 iterations/seconds\n    
company_metrics[CompanyMetrics.employees] = 20  # 1_352_799 iterations/seconds\n    company_metrics[CompanyMetrics.avg_salary] = 5_000.0  # 1_300_966 iterations/seconds\n    company_metrics[CompanyMetrics.in_a_good_state] = None  # 1_224_573 iterations/seconds\n    company_metrics[CompanyMetrics.in_a_good_state] = False  # 1_213_175 iterations/seconds\n    company_metrics[CompanyMetrics.avg_salary] += 1.1  # 299_415 iterations/seconds\n    company_metrics[CompanyMetrics.employees] += 1  # 247_476 iterations/seconds\n    company_metrics[CompanyMetrics.emails] = tuple()  # 55_335 iterations/seconds (memory allocation performance is planned to be improved)\n    company_metrics[CompanyMetrics.emails] = ('sails@company.com',)  # 30_314 iterations/seconds (memory allocation performance is planned to be improved)\n    company_metrics[CompanyMetrics.emails] = ('sails@company.com', 'support@company.com')  # 20_860 iterations/seconds (memory allocation performance is planned to be improved)\n    company_metrics[CompanyMetrics.websites] = ['http://company.com', 'http://company.org']  # 10_465 iterations/seconds (memory allocation performance is planned to be improved)\n    \n    # Method call on a shared object that changes a property through the method\n    some_employee.increase_years_of_employment()  # 80548 iterations/seconds\n\n    # Object properties\n    k = sso.int_value  # 850_098 iterations/seconds\n    k = sso.str_value  # 228_966 iterations/seconds\n    sso.int_value = 200  # 207_480 iterations/seconds\n    sso.int_value += 1  # 152_263 iterations/seconds\n    sso.str_value = 'Hello. '  # 52_390 iterations/seconds (memory allocation performance is planned to be improved)\n    sso.str_value += '!'  
# 35_823 iterations/seconds (memory allocation performance is planned to be improved)\n\n    # Numpy.ndarray\n    numpy_ndarray += 10  # 403_646 iterations/seconds\n    numpy_ndarray -= 15  # 402_107 iterations/seconds\n\n    # Dict\n    k = data_dict['key1']  # 87_558 iterations/seconds\n    k = data_dict[('key', 2)]  # 49_338 iterations/seconds\n    data_dict['key1'] = 200  # 86_744 iterations/seconds\n    data_dict['key1'] += 3  # 41_409 iterations/seconds\n    data_dict['key1'] *= 1  # 40_927 iterations/seconds\n    data_dict[('key', 2)] = 'value2'  # 31_460 iterations/seconds (memory allocation performance is planned to be improved)\n    data_dict[('key', 2)] = data_dict[('key', 2)] + 'd'  # 18_972 iterations/seconds (memory allocation performance is planned to be improved)\n    data_dict[('key', 2)] = 'value2'  # 10_941 iterations/seconds (memory allocation performance is planned to be improved)\n    data_dict[('key', 2)] += 'd'  # 16_568 iterations/seconds (memory allocation performance is planned to be improved)\n\n# An example of non-optimal work with shared data (without using a local variables):\nasync with ashared_memory_context_manager as shared_memory:\n    # An example of a non-optimal method call (without using a local variable) that changes a property through the method\n    sso.company_info.some_employee.increase_years_of_employment()  # 9_418 iterations/seconds\n\n    # An example of non-optimal work with object properties (without using local variables)\n    k = sso.company_info.income  # 20_445 iterations/seconds\n    sso.company_info.income = 3_000_000.0  # 13_899 iterations/seconds\n    sso.company_info.income *= 1.1  # 17_272 iterations/seconds \n    sso.company_info.income += 500_000.0  # 18_376 iterations/seconds\n    \n    # Example of non-optimal usage of numpy.ndarray without a proper local variable\n    data_dict['key3'] += 10  # 6_319 iterations/seconds\n\n# Notify the sender about the completion of work on the shared object\nasync 
with ashared_memory_context_manager as shared_memory:\n    sso.some_processing_stage_control = True  # 298_968 iterations/seconds\n```\n\n## Reference (and explaining examples line by line)\n\nCode for shared memory Creator side:\n```python\nashared_memory_manager: ASharedMemoryManager = ASharedMemoryManager(SharedMemory('shared_memory_identifier', create=True, size=200 * 1024**2))\n# declare creation and initiation of the shared memory instance with a size of 200 MiB.\n```\n\nCode for shared memory Consumer side:\n```python\nashared_memory_manager: ASharedMemoryManager = ASharedMemoryManager(SharedMemory('shared_memory_identifier'))\n# declares connection to shared memory instance\n```\n\nOn shared memory Creator side:\n```python\nasync with ashared_memory_manager as asmm:\n# creates, initiates shared memory instance and waits for the Consumer creation. Execute it once per run.\n# feel free to share either `asmm` or `ashared_memory_manager` across your coroutines\n```\n\nOn shared memory concumer side:\n```python\nasync with ashared_memory_manager as asmm:\n# waits for the shared memory creation and initiation by the shared memory Creator. Execute it once per run.\n# feel free to share either `asmm` or `ashared_memory_manager` across your coroutines\n```\n\n```python\nashared_memory_context_manager: ASharedMemoryContextManager = asmm()\n# creates shared memory access context manager. Create it once per coroutine. Use in the same coroutine as much as you need it.\n```\n\n```python\nasync with ashared_memory_context_manager as shared_memory:\n# acquire access to shared memory as soon as possible\n```\n\n```python\nasync with ashared_memory_context_manager.if_has_messages() as shared_memory:\n# acquire access to shared memory if message queue is not empty\n```\n\n```python\nshared_memory # is an instance of ValueHolder class from the Cengal library\nshared_memory.value  # is an instance of `SharedMemory` instance\nshared_memory.existence  # bool. 
Will be set to True at the beginning of each context (`with`) block. Set it to `False`\n    # if you want to release the CPU for a short time before the shared memory is acquired next time.\n    # If at least one coroutine leaves it set to `True`, the next acquire attempt will be made immediately, which\n    # lowers latency and increases performance but at the same time consumes more CPU time.\n    # The default behavior (`True`) is better for CPU-intensive algorithms,\n    # while `False` in all of the process's coroutines (each of which has its own memory access context manager) is better,\n    # for example, for desktop or mobile applications\n```\n\n### `SharedMemory` fields and methods you might frequently use in an async approach (in coroutines)\n\n```python\nSharedMemory.size  # the actual size of the shared memory\n```\n\n```python\nSharedMemory.name  # the identifier of the shared memory\n```\n\n```python\nSharedMemory.create  # `True` on the Creator side. `False` on the Consumer side\n```\n\n```python\nobj_mapped = shared_memory.value.put_message(obj)\n# Puts the object into the shared memory and creates an appropriate message.\n# Returns the mapped version of the object if applicable (returns the same object otherwise).\n# The following types return the same object: None, bool, int, float, str, bytes, bytearray, tuple.\n```\n\n```python\nobj_mapped, shared_obj_offset = shared_memory.value.put_message_2(obj)\n# Puts the object into the shared memory and creates an appropriate message.\n# Returns:\n# * The mapped version of the object if applicable (the same object otherwise).\n#   The following types return the same object: None, bool, int, float, str, bytes, bytearray, tuple.\n# * An offset to the shared object data structure which holds the shared object's content\n```\n\n```python\nhas_messages: bool = shared_memory.value.has_messages()\n# The main way to check whether the internal message queue is empty\n```\n\n```python\nobj_mapped = shared_memory.value.take_message()\n# Takes 
(and removes) the latest message from the internal message queue.\n# Creates and returns a mapped object from the corresponding shared object data structure.\n# Does not delete the underlying shared object data structure.\n# Returns the mapped version of the object if applicable (returns a new copy of the object otherwise).\n# The following types return a new copy of the object: None, bool, int, float, str, bytes, bytearray, tuple.\n# Raises a NoMessagesInQueueError exception if the internal message queue is empty\n```\n\n```python\nobj_mapped, shared_obj_offset = shared_memory.value.take_message_2()\n# Takes (and removes) the latest message from the internal message queue.\n# Creates and returns a mapped object from the corresponding shared object data structure.\n# Does not delete the underlying shared object data structure.\n# Returns:\n# * The mapped version of the object if applicable (a new copy of the object otherwise).\n#   The following types return a new copy of the object: None, bool, int, float, str, bytes, bytearray, tuple.\n# * An offset to the shared object data structure which holds the shared object's content\n# Raises a NoMessagesInQueueError exception if the internal message queue is empty\n```\n\n```python\nshared_memory.value.destroy_obj(shared_obj_offset)\n# Destroys the shared object data structure that holds the shared object's content.\n# Use it to free the memory used by your shared object.\n\n\nshared_memory.value.destroy_object(shared_obj_offset)\n# An alias for the `shared_memory.value.destroy_obj()` call\n```\n\n```python\nshared_obj_buffer: memoryview = shared_memory.value.get_obj_buffer(shared_obj_offset)\n# Returns a `memoryview` of the data section of your shared object.\n# Feel free to use this memory for your own needs while the shared object is not yet destroyed.\n# The following types provide a buffer to their shared data: bytes, bytearray, str, numpy.ndarray (buffer of the shared bytes object mapped by this ndarray), 
torch.Tensor (buffer of the shared bytes object mapped by this Tensor)\n```\n\n### Useful exceptions\n\n```python\nclass SharedMemoryError:\n# The base exception; all other exceptions inherit from it\n```\n\n```python\nclass FreeMemoryChunkNotFoundError(SharedMemoryError):\n\"\"\"Indicates that an unpartitioned chunk of free memory of the requested size was not found.\n\n    Regarding this error, it\u2019s important to adjust the size parameter in the SharedMemory configuration. Trying to estimate memory consumption down to the byte is not practical because it fails to account for the memory overhead required by each entity stored (such as entity type metadata, pointers to child entities, etc.).\n\n    When setting the size parameter for SharedMemory, consider using broader units like tens (for embedded systems), hundreds, or thousands of megabytes, rather than precise byte counts. This approach is similar to how you would not precisely calculate the amount of memory needed for a web server hosted externally; you make an educated guess, like assuming that 256 MB might be insufficient but 768 MB could be adequate, and then adjust based on practical testing.\n\n    Also, be aware of memory fragmentation, which affects all memory allocation systems, including the OS itself. For example, if you have a SharedMemory pool sized to store exactly ten 64-bit integers, accounting for additional bytes for system information, your total might be around 200 bytes. Initially, after storing the integers, your memory might appear as [\"int\", \"int\", ..., \"int\"]. If you delete every second integer, the largest contiguous free memory chunk could be just 10 bytes, despite having 50 bytes free in total. This fragmentation means you cannot store a larger data structure like a 20-byte string which needs contiguous space.\n\n    To resolve this, simply increase the size parameter value of SharedMemory. 
This is akin to how you would manage memory allocation for server hosting or thread stack sizes in software development.\n\"\"\"\n```\n\n```python\nNoMessagesInQueueError\n# The following calls can raise it if the internal message queue is empty: take_message(), take_message_2(), read_message(), read_message_2()\n```\n\n## Performance tips\n\n<details>\n<summary title=\"Data structures\"><kbd> Data structures </kbd></summary>\n\n### Data structures\n\nFor best performance, it is recommended to use `IntEnum`+`list` based data structures instead of dictionaries or even custom class instances (including dataclasses).\n\nFor example, instead of operating on a dict:\n\n<details>\n<summary title=\"Example\"><kbd> Example </kbd></summary>\n\nMessage sender\n\n```python\ncompany_metrics: Dict[str, Any] = {\n    'websites': ['http://company.com', 'http://company.org'],\n    'avg_salary': 3_000.0,\n    'employees': 10,\n    'in_a_good_state': True,\n}\ncompany_metrics_mapped: List = shared_memory.value.put_message(company_metrics)\n```\n\nMessage receiver\n\n```python\ncompany_metrics: Dict[str, Any] = shared_memory.value.take_message()\nk = company_metrics['employees']  # 87_558 iterations/seconds\ncompany_metrics['employees'] = 200  # 86_744 iterations/seconds\ncompany_metrics['employees'] += 3  # 41_409 iterations/seconds\n```\n\n</details>\n<br>\n\nor even instead of operating on a dataclass (classes by default operate faster than dicts):\n\n<details>\n<summary title=\"Example\"><kbd> Example </kbd></summary>\n\nMessage sender\n\n```python\n@dataclass\nclass CompanyMetrics:\n    income: float\n    employees: int\n    avg_salary: float\n    annual_income: float\n    in_a_good_state: bool\n    emails: Tuple\n    websites: List[str]\n\ncompany_metrics: CompanyMetrics = CompanyMetrics(\n    income=1.4,\n    employees=12,\n    avg_salary=35.0,\n    annual_income=30_000.0,\n    in_a_good_state=False,\n    emails=('sales@company.com', 'support@company.com'),\n    websites=
['http://company.com', 'http://company.org'],\n)\ncompany_metrics_mapped: CompanyMetrics = shared_memory.value.put_message(company_metrics)\n```\n\nMessage receiver\n\n```python\ncompany_metrics: CompanyMetrics = shared_memory.value.take_message()\nk = company_metrics.employees  # 850_098 iterations/seconds\ncompany_metrics.employees = 200  # 207_480 iterations/seconds\ncompany_metrics.employees += 1  # 152_263 iterations/seconds\n```\n\n</details>\n<br>\n\nit would be more beneficial to operate on a list with the appropriate IntEnum indexes:\n\n<details>\n<summary title=\"Example\"><kbd> Example </kbd></summary>\n\nMessage sender:\n\n```python\nclass CompanyMetrics(IntEnum):\n    income = 0\n    employees = 1\n    avg_salary = 2\n    annual_income = 3\n    in_a_good_state = 4\n    emails = 5\n    websites = 6\n\ncompany_metrics: List = intenum_dict_to_list({  # lists with IntEnum indexes are a blazing-fast alternative to dictionaries\n    CompanyMetrics.websites: ['http://company.com', 'http://company.org'],\n    CompanyMetrics.avg_salary: 3_000.0,\n    CompanyMetrics.employees: 10,\n    CompanyMetrics.in_a_good_state: True,\n})  # Unmentioned fields will be filled with None values\ncompany_metrics_mapped: List = shared_memory.value.put_message(company_metrics)\n```\n\nMessage receiver:\n\n```python\ncompany_metrics: List = shared_memory.value.take_message()\nk = company_metrics[CompanyMetrics.avg_salary]  # 1_535_267 iterations/seconds\ncompany_metrics[CompanyMetrics.avg_salary] = 5_000.0  # 1_300_966 iterations/seconds\ncompany_metrics[CompanyMetrics.avg_salary] += 1.1  # 299_415 iterations/seconds\n```\n\n</details>\n<br>\n\n</details>\n\n<details>\n<summary title=\"Sets\"><kbd> Sets </kbd></summary>\n\n### Sets\n\nYou might use the `FastLimitedSet` wrapper for your set in order to get much faster shared sets.\n\nJust wrap your set with `FastLimitedSet`:\n\n```python\nmy_obj: List = [\n    True,\n    2,\n    FastLimitedSet({\n        'Hello ',\n        'World',\n 
       3,\n    })\n]\nmy_obj_mapped = shared_memory.value.put_message(my_obj)\n```\n\nDrawback of this approach: only the initial set of items is shared. Changes made to the mapped object (added or deleted items) will not be shared and will not be visible to the other process.\n\n</details>\n\n<details>\n<summary title=\"Dictionaries\"><kbd> Dictionaries </kbd></summary>\n\n### Dictionaries\n\nYou might use the `FastLimitedDict` wrapper for your dict in order to get a much faster shared dictionary.\n\nJust wrap your dictionary with `FastLimitedDict`:\n\n```python\nmy_obj: List = [\n    True,\n    2,\n    FastLimitedDict({\n        1: 'Hello ',\n        '2': 'World',\n        3: np.array([1, 2, 3], dtype=np.int32),\n    })\n]\nmy_obj_mapped = shared_memory.value.put_message(my_obj)\n```\n\nDrawback of this approach: only the initial set of key-value pairs is shared. Added, updated, or deleted key-value pairs will not be shared, and such changes will not be visible to the other process.\n\n</details>\n\n<details>\n<summary title=\"Custom classes (including `dataclass`)\"><kbd> Custom classes (including `dataclass`) </kbd></summary>\n\n### Custom classes (including `dataclass`)\n\nBy default, shared custom class instances (including `dataclass` instances) have a static set of attributes (similar to instances of classes with `__slots__`). That means that attributes dynamically added to the mapped object will not become shared. 
This behavior increases performance.\n\n<details>\n<summary title=\"For example\"><kbd> For example </kbd></summary>\n\n#### For example\n\n```python\n@dataclass\nclass SomeSharedObject:\n    some_processing_stage_control: bool\n    int_value: int\n    str_value: str\n    data_dict: Dict[Hashable, Any]\n    company_info: CompanyInfo\n\nmy_obj: List = [\n    True,\n    2,\n    SomeSharedObject(\n        some_processing_stage_control=False,\n        int_value=18,\n        str_value='Hello, ',\n        data_dict=None,\n        company_info=None,\n    ),\n]\nmy_obj_mapped: List = shared_memory.value.put_message(my_obj)\n\nmy_obj_mapped[2].some_new_attribute = 'Hi!'  # this attribute will NOT become shared and, as a result, will not be accessible to the other process\n```\n\n</details>\n\nIf you need to share a class instance with the ability to add new shared attributes to its mapped instance, you can wrap your object with either `ForceGeneralObjectCopy` or `ForceGeneralObjectInplace`\n\n<details>\n<summary title=\"For example\"><kbd> For example </kbd></summary>\n\n#### For example\n\n```python\n@dataclass\nclass SomeSharedObject:\n    some_processing_stage_control: bool\n    int_value: int\n    str_value: str\n    data_dict: Dict[Hashable, Any]\n    company_info: CompanyInfo\n\nmy_obj: List = [\n    True,\n    2,\n    ForceGeneralObjectInplace(SomeSharedObject(\n        some_processing_stage_control=False,\n        int_value=18,\n        str_value='Hello, ',\n        data_dict=None,\n        company_info=None,\n    )),\n]\nmy_obj_mapped: List = shared_memory.value.put_message(my_obj)\n\nmy_obj_mapped[2].some_new_attribute = 'Hi!'  # this attribute WILL become shared and, as a result, will be visible to the other process\n```\n\n</details>\n\nDifference between `ForceGeneralObjectCopy` and `ForceGeneralObjectInplace`:\n* `ForceGeneralObjectInplace`. The `my_obj_mapped = shared_memory.value.put_message(ForceGeneralObjectInplace(my_obj))` call will change the class of the original `my_obj` object. 
And `True == (my_obj is my_obj_mapped)`\n* `ForceGeneralObjectCopy`. The `my_obj_mapped = shared_memory.value.put_message(ForceGeneralObjectCopy(my_obj))` call will NOT change the original `my_obj` object. The `my_obj_mapped` object will be constructed from scratch\n\nYou can also tune the default behavior by wrapping your object with either `ForceStaticObjectCopy` or `ForceStaticObjectInplace`.\n\nDifference between `ForceStaticObjectCopy` and `ForceStaticObjectInplace`:\n* `ForceStaticObjectInplace`. The `my_obj_mapped = shared_memory.value.put_message(ForceStaticObjectInplace(my_obj))` call will change the class of the original `my_obj` object. And `True == (my_obj is my_obj_mapped)`\n* `ForceStaticObjectCopy`. The `my_obj_mapped = shared_memory.value.put_message(ForceStaticObjectCopy(my_obj))` call will NOT change the original `my_obj` object. The `my_obj_mapped` object will be constructed from scratch\n\n</details>\n\n## How to choose shared memory size\n\n<details>\n<summary title=\"How to choose shared memory size\"><kbd> How to choose shared memory size </kbd></summary>\n\nWhen setting the size parameter for SharedMemory, consider using broader units like tens (for embedded systems), hundreds, or thousands of megabytes, rather than precise byte counts. This approach is similar to how you would not precisely calculate the amount of memory needed for a web server hosted externally; you make an educated guess, like assuming that 256 MB might be insufficient but 768 MB could be adequate, and then adjust based on practical testing.\n\nAlso, be aware of memory fragmentation, which affects all memory allocation systems, including the OS itself. For example, if you have a SharedMemory pool sized to store exactly ten 64-bit integers, accounting for additional bytes for system information, your total might be around 200 bytes. Initially, after storing the integers, your memory might appear as [\"int\", \"int\", ..., \"int\"]. 
If you delete every second integer, the largest contiguous free memory chunk could be just 10 bytes, despite having 50 bytes free in total. This fragmentation means you cannot store a larger data structure like a 20-byte string which needs contiguous space.\n\nTo resolve this, simply increase the size parameter value of SharedMemory. This is akin to how you would manage memory allocation for server hosting or thread stack sizes in software development.\n\n</details>\n\n## Benchmarks\n\n<details>\n<summary title=\"System\"><kbd> System </kbd></summary>\n\n* CPU: i5-3570@3.40GHz (Ivy Bridge)\n* RAM: 32 GBytes, DDR3, dual channel, 655 MHz\n* OS: Ubuntu 20.04.6 LTS under WSL2. Windows 10\n\n</details>\n\n### Throughput GiB/s\n\n![Throughput GiB/s](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartThroughputGiBs.png)\n\n<details>\n<summary title=\"Benchmarks results\"><kbd> Benchmarks results </kbd></summary>\n\n#### Reference results (sysbench)\n\n```bash\nsysbench memory --memory-oper=write run\n```\n\n```\n5499.28 MiB/sec\n```\n\n#### Results\n\n![Throughput GiB/s](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartThroughputGiBs.png)\n\n`*` [multiprocessing.shared_memory.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__shared_memory.py) - a simple implementation: it uses an approach similar to the one used in the `uvloop.*`, `asyncio.*`, `multiprocessing.Queue`, and `multiprocessing.Pipe` benchmarking scripts. 
Similar implementations are expected to be used by the majority of projects.\n\n#### Benchmarks results table\n\n| Approach                        | sync/async | Throughput GiB/s |\n|---------------------------------|------------|------------------|\n| InterProcessPyObjects (sync)    | sync       | 3.770            |\n| InterProcessPyObjects + uvloop  | async      | 3.222            |\n| InterProcessPyObjects + asyncio | async      | 3.079            |\n| multiprocessing.shared_memory   | sync       | 2.685            |\n| uvloop.UnixDomainSockets        | async      | 0.966            |\n| asyncio + cengal.Streams        | async      | 0.942            |\n| uvloop.Streams                  | async      | 0.922            |\n| asyncio.Streams                 | async      | 0.784            |\n| asyncio.UnixDomainSockets       | async      | 0.708            |\n| multiprocessing.Queue           | sync       | 0.669            |\n| multiprocessing.Pipe            | sync       | 0.469            |\n\n#### Benchmark scripts\n\n* InterProcessPyObjects - Sync:\n    * [sender.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_sync__sender.py)\n    * [receiver.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_sync__receiver.py)\n* InterProcessPyObjects - Async (uvloop):\n    * [sender.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_uvloop__sender.py)\n    * [receiver.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_uvloop__receiver.py)\n* InterProcessPyObjects - Async (asyncio):\n    * 
[sender.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_asyncio__sender.py)\n    * [receiver.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/shared_objects__transfer_asyncio__receiver.py)\n* [multiprocessing.shared_memory.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__shared_memory.py)\n* [uvloop.UnixDomainSockets.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__uvloop_unix_domain_sockets.py)\n* [asyncio_with_cengal.Streams.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__cengal_efficient_streams.py)\n* [uvloop.Streams.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__uvloop_streams.py)\n* [asyncio.Streams.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__asyncio_streams.py)\n* [asyncio.UnixDomainSockets.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__asyncio_unix_domain_sockets.py)\n* [multiprocessing.Queue.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__multiprocess_queue.py)\n* 
[multiprocessing.Pipe.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/plain_python__send_bytes__multiprocess_pipe.py)\n\n</details>\n\n### Shared Dict Performance\n\n![Dict performance comparison](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartDictPerformanceComparison.png)\n\n<details>\n<summary title=\"Benchmarks results\"><kbd> Benchmarks results </kbd></summary>\n\n#### Competitors:\n\n##### 1. multiprocessing.Manager - dict\n\nPros:\n\n* Part of the standard library\n\nCons:\n\n* Slowest solution\n* Values are read-only unless they are explicit instances of `multiprocessing.Manager`-supported types (types inherited from `multiprocessing.BaseProxy`, like `multiprocessing.DictProxy` or `multiprocessing.ListProxy`)\n\nBenchmark scripts:\n\n* [dict__python__multiprocess_dict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__python__multiprocess_dict.py)\n\n##### 2. [UltraDict](https://github.com/ronny-rentner/UltraDict)\n\nPros:\n\n* Relatively fast writes\n* Fast repeated reads of unchanged values\n* Has a built-in inter-process synchronization mechanism\n\nCons:\n\n* Relies on pickle on each change\n* Non-dictionary values are read-only from the multiprocessing perspective\n\nBenchmark scripts:\n\n* [dict__thirdparty__ultradict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__thirdparty__ultradict.py)\n\n##### 3. 
[shared_memory_dict](https://github.com/luizalabs/shared-memory-dict)\n\nPros:\n\n* At least it is faster than the 'multiprocessing.Manager - dict' solution\n\nCons:\n\n* Second slowest solution\n* Relies on pickle on each change\n* Values (even lists or dicts) are read-only from the multiprocessing perspective\n* Cannot be initialized from another `dict` or even from another `SharedMemoryDict` instance: you need to manually put each key-value pair into it in a loop\n* Has known issues with inter-process synchronization. It is better for developers to use their own external inter-process synchronization mechanisms (from `multiprocessing.Manager`, for example)\n\nBenchmark scripts:\n\n* [dict__thirdparty__shared_memory_dict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__thirdparty__shared_memory_dict.py)\n\n##### 4. InterProcessPyObjects - IntEnumListStruct\n\nPros:\n\n* Fastest solution: 15.6 times faster than `UltraDict`\n* Good for structures: when all fields are already known\n* Supports any of InterProcessPyObjects' supported data types as values\n* Values of mutable types are naturally mutable: developers do not need to prepare and change their data explicitly\n\nCons:\n\n* Can use only IntEnum (int) keys\n* Size is fixed by the size of the provided IntEnum\n\nBenchmark scripts:\n\n* [dict__shared_objects__intenum_list.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__shared_objects__intenum_list.py)\n\nMessage sender example:\n\n```python\nclass CompanyMetrics(IntEnum):\n    income = 0\n    employees = 1\n    avg_salary = 2\n    annual_income = 3\n    in_a_good_state = 4\n    emails = 5\n    websites = 6\n\ncompany_metrics: List = intenum_dict_to_list({  # lists with IntEnum indexes are a blazing-fast alternative to dictionaries\n    CompanyMetrics.websites: ['http://company.com', 
'http://company.org'],\n    CompanyMetrics.avg_salary: 3_000.0,\n    CompanyMetrics.employees: 10,\n    CompanyMetrics.in_a_good_state: True,\n})  # Unmentioned fields will be filled with None values\ncompany_metrics_mapped: List = shared_memory.value.put_message(company_metrics)\n```\n\nMessage receiver example:\n\n```python\ncompany_metrics: List = shared_memory.value.take_message()\nk = company_metrics[CompanyMetrics.avg_salary]\ncompany_metrics[CompanyMetrics.avg_salary] = 5_000.0\ncompany_metrics[CompanyMetrics.avg_salary] += 1.1\n```\n\n##### 5. InterProcessPyObjects - Dataclass\n\nPros:\n\n* Fast. Second fastest solution: around 2 times faster than `UltraDict`\n* Good for structures and objects: when all fields are already known\n* Works with objects naturally - without explicit preparation or changes on the developer's side\n* Supports any of InterProcessPyObjects' supported data types as values\n* Supports `dataclass` as well as other objects\n* Supports objects' methods\n* Fields of mutable types are naturally mutable: developers do not need to prepare and change their data explicitly\n* Does not rely on frequent pickling: uses pickle only for initial object construction, not for field updates\n\nCons:\n\n* The set of fields is fixed by the set of fields of the original object\n\nBenchmark scripts:\n\n* [dict__shared_objects__static_obj.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__shared_objects__static_obj.py)\n\n##### 6. 
InterProcessPyObjects - Dict\n\nPros:\n\n* Slightly faster than `SharedMemoryDict`\n* Works with objects naturally - without explicit preparation or changes on the developer's side\n* Supports any of InterProcessPyObjects' supported data types as values\n* Supports any of InterProcessPyObjects' supported Hashable data types as keys\n* Values of mutable types are naturally mutable: developers do not need to prepare and change their data explicitly\n* Does not rely on pickle\n* Speed optimizations are architected and planned for implementation\n\nCons:\n\n* Speed optimization implementation is in progress - not released yet\n\nBenchmark scripts:\n\n* [dict__shared_objects__dict.py](https://github.com/FI-Mihej/Cengal/blob/master/cengal/parallel_execution/asyncio/ashared_memory_manager/versions/v_0/development/dict__shared_objects__dict.py)\n\n#### Results\n\n![Dict performance comparison](https://github.com/FI-Mihej/Cengal/raw/master/docs/assets/InterProcessPyObjects/ChartDictPerformanceComparison.png)\n\n#### Benchmarks results table\n\n| Approach                                  | increments/s |\n|-------------------------------------------|--------------|\n| InterProcessPyObjects - IntEnumListStruct | 1189730      |\n| InterProcessPyObjects - Dataclass         | 143091       |\n| UltraDict                                 | 76214        |\n| InterProcessPyObjects - Dict              | 44285        |\n| SharedMemoryDict                          | 42862        |\n| multiprocessing.Manager - dict            | 2751         |\n\n</details>\n\n## Todo\n\n- [ ] Connect more than two processes\n- [ ] Use third-party fast hashing implementations instead of, or in addition to, the built-in `hash()` call\n- [ ] Continuous performance improvements\n\n## Conclusion\n\nThis Python package provides a robust solution for inter-process communication, supporting a variety of Python data structures, types, and third-party libraries. 
Its lock-free synchronization and asyncio compatibility make it an ideal choice for high-performance, concurrent execution.\n\n# Based on [Cengal](https://github.com/FI-Mihej/Cengal)\n\nThis is a stand-alone package for a specific Cengal module. The package is designed to offer users the ability to install specific Cengal functionality without the burden of the library's full set of dependencies.\n\nThe core of this approach lies in our 'cengal-light' package, which houses both Python and compiled Cengal modules. The 'cengal' package itself serves as a lightweight shell, devoid of its own modules, but dependent on 'cengal-light[full]' for a complete Cengal library installation with all required dependencies.\n\nAn equivalent import:\n```python\nfrom cengal.hardware.memory.shared_memory import *\nfrom cengal.parallel_execution.asyncio.ashared_memory_manager import *\n```\n\nThe Cengal library can be installed with:\n\n```bash\npip install cengal\n```\n\nhttps://github.com/FI-Mihej/Cengal\n\nhttps://pypi.org/project/cengal/\n\n\n# Projects using Cengal\n\n* [CengalPolyBuild](https://github.com/FI-Mihej/CengalPolyBuild) - A Comprehensive and Hackable Build System for Multilingual Python Packages: Cython (including automatic conversion from Python to Cython), C/C++, Objective-C, Go, and Nim, with ongoing expansions to include additional languages. 
(Planned to be released soon) \n* [cengal_app_dir_path_finder](https://github.com/FI-Mihej/cengal_app_dir_path_finder) - A Python module offering a unified API for easy retrieval of OS-specific application directories, enhancing data management across Windows, Linux, and macOS \n* [cengal_cpu_info](https://github.com/FI-Mihej/cengal_cpu_info) - Extended, cached CPU info with consistent output format.\n* [cengal_memory_barriers](https://github.com/FI-Mihej/cengal_memory_barriers) - Fast cross-platform memory barriers for Python.\n* [flet_async](https://github.com/FI-Mihej/flet_async) - A wrapper which makes [Flet](https://github.com/flet-dev/flet) async and brings both Cengal.coroutines and asyncio to Flet (Flutter-based UI)\n* [justpy_containers](https://github.com/FI-Mihej/justpy_containers) - A wrapper around [JustPy](https://github.com/justpy-org/justpy) in order to bring more security and more production-needed features to JustPy (VueJS-based UI)\n* [Bensbach](https://github.com/FI-Mihej/Bensbach) - A decompiler from Unreal Engine 3 bytecode to a Lisp-like script and a compiler back to Unreal Engine 3 bytecode. Made for game modding purposes\n* [Realistic-Damage-Model-mod-for-Long-War](https://github.com/FI-Mihej/Realistic-Damage-Model-mod-for-Long-War) - A mod for both the original XCOM:EW and the Long War mod. Made with Bensbach, which was made with Cengal\n* [SmartCATaloguer.com](http://www.smartcataloguer.com/index.html) - A TagDB-based catalog of images (tags), music albums (genre tags), and apps (categories)\n\n# License\n\nCopyright \u00a9 2012-2024 ButenkoMS. All rights reserved.\n\nLicensed under the Apache License, Version 2.0.\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "This high-performance package delivers blazing-fast inter-process communication through shared memory, enabling Python objects to be shared across processes with exceptional efficiency",
    "version": "1.0.7",
    "project_urls": {
        "Homepage": "https://github.com/FI-Mihej/InterProcessPyObjects"
    },
    "split_keywords": [
        "android",
        " ipc",
        " linux",
        " windows",
        " cengal",
        " crossplatform",
        " ios",
        " inter-process communication",
        " macos",
        " multiprocessing",
        " shared dict",
        " shared memory",
        " shared numpy ndarray",
        " shared object",
        " shared objects",
        " shared set",
        " shared torch tensor"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a3bba090803c9a74bb78fafa1f39b339adb1310f4b4338dfc3e4e73b6b4a8563",
                "md5": "336de93447bb71f4cbae31772eee198b",
                "sha256": "38b58c86daa7c9dc8e1b1c4dadad6b7058088cc2102a8b5e786b6cbfbf32724c"
            },
            "downloads": -1,
            "filename": "interprocesspyobjects-1.0.7-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "336de93447bb71f4cbae31772eee198b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 21278,
            "upload_time": "2024-05-14T02:41:40",
            "upload_time_iso_8601": "2024-05-14T02:41:40.206748Z",
            "url": "https://files.pythonhosted.org/packages/a3/bb/a090803c9a74bb78fafa1f39b339adb1310f4b4338dfc3e4e73b6b4a8563/interprocesspyobjects-1.0.7-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "eba51add6cb55b93747690900951361997f8db3f6410d2b57d94fc81dfd05a1a",
                "md5": "81ba886be137f680add3e5c7eca9ee54",
                "sha256": "725c5ce10a48e1aada2daf3fc8db7f9f9fbb45c92e14769fd8635111ff054c96"
            },
            "downloads": -1,
            "filename": "interprocesspyobjects-1.0.7.tar.gz",
            "has_sig": false,
            "md5_digest": "81ba886be137f680add3e5c7eca9ee54",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 30294,
            "upload_time": "2024-05-14T02:41:45",
            "upload_time_iso_8601": "2024-05-14T02:41:45.386645Z",
            "url": "https://files.pythonhosted.org/packages/eb/a5/1add6cb55b93747690900951361997f8db3f6410d2b57d94fc81dfd05a1a/interprocesspyobjects-1.0.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-14 02:41:45",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "FI-Mihej",
    "github_project": "InterProcessPyObjects",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "hatch",
            "specs": []
        },
        {
            "name": "twine",
            "specs": []
        },
        {
            "name": "numpy",
            "specs": []
        },
        {
            "name": "py-cpuinfo",
            "specs": []
        },
        {
            "name": "cengal_light",
            "specs": [
                [
                    ">=",
                    "4.4.0"
                ]
            ]
        }
    ],
    "lcname": "interprocesspyobjects"
}
        