py-sharedmemory

Name: py-sharedmemory
Version: 1.0.0
Summary: Shared Memory Pipe for Fast Multiprocessing Data Sharing of Large Objects (>1MB)
Author: Tobias Würth
Requires-Python: >=3.9
Keywords: python, python3, multiprocessing, sharedmemory, queue, pipe
Repository: https://github.com/tobiaswuerth/python_shared_memory_queue
Upload time: 2024-05-25 09:56:09
Requirements: No requirements were recorded.
License: MIT License. Copyright (c) 2024 Tobias Würth. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# Shared Memory Pipe for Fast Multiprocessing Data Sharing of Large Objects (>1MB)

Since [multiprocessing](https://docs.python.org/3/library/multiprocessing.html) [Queue](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue)s pickle the data, there is a noticeable performance penalty for larger data objects and large amounts of I/O.

This is why I created a substitute that instead converts data objects into bytes (if needed), places the bytes into shared memory, and only transfers metadata about the data structure to the receiving end via a Queue.
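
The underlying idea, in a minimal sketch using only the standard library (this is not the package's internal code; the function names are illustrative):

```python
import multiprocessing as mp
from multiprocessing.shared_memory import SharedMemory

# Illustrative sketch of the technique: the payload bytes go into a shared
# memory block, while the Queue only carries lightweight metadata
# (block name and size) instead of the pickled payload itself.
def send_via_shm(queue: mp.Queue, payload: bytes) -> SharedMemory:
    shm = SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload     # one copy into shared memory
    queue.put((shm.name, len(payload)))  # small, fixed-size message
    return shm                           # keep a handle until acknowledged

def recv_via_shm(queue: mp.Queue) -> bytes:
    name, size = queue.get()             # cheap metadata transfer
    shm = SharedMemory(name=name)
    payload = bytes(shm.buf[:size])      # one copy back out
    shm.close()
    shm.unlink()                         # free the block once consumed
    return payload
```

The hard part is lifetime management: knowing when the receiver is done so the block can be safely unlinked. The package coordinates this with acknowledgements, which is what `wait_for_all_ack()` in the usage example below refers to.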

The following data types are currently supported:
```
- bytes
- int
- float
- bool
- str
- np.ndarray
- np.dtype
```

As well as any nested structures for the given types:
```
- tuple/NamedTuple
- list
- dict
- set
```
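
For example, a nested payload like this (illustrative values) can be sent as-is with the `put`/`get` API shown in the usage example below:

```python
import numpy as np

# Hypothetical payload mixing several supported types and containers.
payload = {
    "frame": np.zeros((480, 640, 3), dtype=np.uint8),  # np.ndarray
    "timestamp": 1716630969.5,                          # float
    "camera": ("cam0", 42),                             # tuple of str and int
    "flags": {"keyframe", "exposure_locked"},           # set of str
}
```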

# Considerations
There is a certain overhead to the conversion process, which is especially noticeable for smaller objects and more complex data structures.
Use the following approach depending on the size of the data you're handling:

|Payload size|10B|100B|1KB|10KB|100KB|1MB|10MB|100MB|1GB|10GB|
|---|---|---|---|---|---|---|---|---|---|---|
|``mp.Queue()``|✅|✅|✅|✅|✅|❌|❌|❌|❌|❌|
|``SharedMemory``|❌|❌|❌|❌|❌|✅|✅|✅|✅|✅|

Actual performance depends heavily on your memory speed.
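
If a workload mixes payload sizes, one option is to route by estimated size; a rough sketch (the 1MB threshold comes from the table above, and `payload_size` is a hypothetical helper, not part of the package):

```python
import sys
import multiprocessing as mp

SHM_THRESHOLD = 1_000_000  # ~1MB crossover point, per the table above

def payload_size(obj) -> int:
    # Hypothetical helper: np.ndarray reports its exact size via .nbytes,
    # everything else falls back to a rough sys.getsizeof() estimate.
    if hasattr(obj, "nbytes"):
        return obj.nbytes
    return sys.getsizeof(obj)

def route(obj, queue: mp.Queue, sender) -> None:
    # Small payloads go through the regular pickling Queue,
    # large ones through the SharedMemory sender (API shown below).
    if payload_size(obj) >= SHM_THRESHOLD:
        sender.put(obj)
    else:
        queue.put(obj)
```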

# Usage example
```python
import multiprocessing as mp

from memory import create_shared_memory_pair, SharedMemorySender, SharedMemoryReceiver

def producer_sm(sender: SharedMemorySender):
    your_data = "your data"

    # ...
    sender.put(your_data)  # blocks until there is space

    # or
    if sender.has_space():
        sender.put(your_data)
    else:
        pass  # your else case

    # ...
    sender.wait_for_all_ack()  # wait for all data to be received before closing the sender
    # exit

def consumer_sm(receiver: SharedMemoryReceiver):
    # ...
    data = receiver.get()  # blocks

    # or
    data = receiver.get(timeout=3)  # raises queue.Empty exception after 3s

    # or
    data = receiver.get_nowait()
    if data is None:
        pass  # handle empty case

    # ...

if __name__ == '__main__':
    sender, receiver = create_shared_memory_pair(capacity=5)

    mp.Process(target=producer_sm, args=(sender,)).start()
    mp.Process(target=consumer_sm, args=(receiver,)).start()
```
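
One way to coordinate a clean shutdown under this API is a sentinel value plus the acknowledgement wait; a sketch (the `STOP` convention is mine, not part of the package):

```python
from memory import SharedMemorySender, SharedMemoryReceiver

STOP = "STOP"  # sentinel value; any supported type would do

def producer(sender: SharedMemorySender, items):
    for item in items:
        sender.put(item)       # blocks while the pipe is full
    sender.put(STOP)           # signal the consumer that we are done
    sender.wait_for_all_ack()  # make sure everything arrived before exiting

def consumer(receiver: SharedMemoryReceiver):
    while True:
        item = receiver.get()  # blocks until data is available
        if item == STOP:
            break
        print(item)            # replace with your handling code
```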

# Performance Testing
Note that in this test the producer and consumer were dependent on each other (due to the Queue capacity), which is why they take a similar amount of time. Your actual performance may vary depending on the data type, structure, and overall system performance.

Since there is no inter-process data transfer, I/O load drops accordingly; in my case by about 2.4 GB/s.
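
For reference, the numbers below could be produced by a harness of roughly this shape (a minimal sketch under the API above, not my exact script; throughput is bytes moved divided by wall time):

```python
import time
import multiprocessing as mp

from memory import create_shared_memory_pair

def producer_bench(sender, loops: int, size: int):
    payload = bytes(size)                    # dummy data of the target size
    start = time.perf_counter()
    for _ in range(loops):
        sender.put(payload)
    sender.wait_for_all_ack()                # count until everything arrived
    elapsed = time.perf_counter() - start
    print(f"Producer done in SM  {elapsed:.5f}s  @ {loops * size / elapsed / 1e9:.2f} GB/s")

def consumer_bench(receiver, loops: int, size: int):
    start = time.perf_counter()
    for _ in range(loops):
        receiver.get()
    elapsed = time.perf_counter() - start
    print(f"Consumer done in SM  {elapsed:.5f}s  @ {loops * size / elapsed / 1e9:.2f} GB/s")

if __name__ == "__main__":
    loops, size = 1_000, 10_000_000          # e.g. the 9.54 MB row below
    sender, receiver = create_shared_memory_pair(capacity=1000)
    p = mp.Process(target=producer_bench, args=(sender, loops, size))
    c = mp.Process(target=consumer_bench, args=(receiver, loops, size))
    p.start(); c.start()
    p.join(); c.join()
```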

```
--------------------
Bytes: 100 (100 B), loops: 100000, queue capacity: 1000
Producer done in     0.78797s  @ 12.10 MB/s
Consumer done in     0.78698s  @ 12.12 MB/s
Producer done in SM  5.37207s  @ 1.78 MB/s
Consumer done in SM  5.37207s  @ 1.78 MB/s   ❌
--------------------
Bytes: 1000 (1000 B), loops: 100000, queue capacity: 1000
Producer done in     0.92900s  @ 102.66 MB/s
Consumer done in     0.91599s  @ 104.11 MB/s
Producer done in SM  5.45683s  @ 17.48 MB/s
Consumer done in SM  5.45582s  @ 17.48 MB/s   ❌
--------------------
Bytes: 10000 (9.77 KB), loops: 100000, queue capacity: 1000
Producer done in     2.16847s  @ 439.79 MB/s
Consumer done in     2.16147s  @ 441.22 MB/s
Producer done in SM  5.64625s  @ 168.90 MB/s 
Consumer done in SM  5.64625s  @ 168.90 MB/s   ❌
--------------------
Bytes: 100000 (97.66 KB), loops: 100000, queue capacity: 1000
Producer done in     2.80213s  @ 3.32 GB/s
Consumer done in     2.80013s  @ 3.33 GB/s
Producer done in SM  8.24400s  @ 1.13 GB/s
Consumer done in SM  8.24200s  @ 1.13 GB/s   ❌
--------------------
Bytes: 1000000 (976.56 KB), loops: 10000, queue capacity: 1000
Producer done in     4.87300s  @ 1.91 GB/s
Consumer done in     4.87300s  @ 1.91 GB/s
Producer done in SM  4.01900s  @ 2.32 GB/s 
Consumer done in SM  4.01800s  @ 2.32 GB/s   ✅
--------------------
Bytes: 10000000 (9.54 MB), loops: 1000, queue capacity: 1000
Producer done in     6.25722s  @ 1.49 GB/s
Consumer done in     6.28221s  @ 1.48 GB/s
Producer done in SM  3.79851s  @ 2.45 GB/s 
Consumer done in SM  3.80359s  @ 2.45 GB/s   ✅
--------------------
Bytes: 100000000 (95.37 MB), loops: 1000, queue capacity: 100
Producer done in     64.91876s  @ 1.43 GB/s
Consumer done in     65.20476s  @ 1.43 GB/s
Producer done in SM  38.08093s  @ 2.45 GB/s
Consumer done in SM  38.17893s  @ 2.44 GB/s   ✅
--------------------
Bytes: 1000000000 (953.67 MB), loops: 100, queue capacity: 10
Producer done in     63.22359s  @ 1.47 GB/s
Consumer done in     66.07801s  @ 1.41 GB/s
Producer done in SM  36.39488s  @ 2.56 GB/s
Consumer done in SM  37.61108s  @ 2.48 GB/s   ✅
--------------------
Bytes: 10000000000 (9.31 GB), loops: 10, queue capacity: 10
Producer done in     CRASHED 
Consumer done in     CRASHED 
Producer done in SM  28.21499s  @ 3.30 GB/s
Consumer done in SM  34.32684s  @ 2.71 GB/s   ✅✅
```
