einsumt

Name: einsumt
Version: 0.9.4
Home page: https://github.com/mrkwjc/einsumt
Summary: Multithreaded version of numpy.einsum function
Upload time: 2023-05-20 12:29:18
Docs URL: None
Author: Marek Wojciechowski
Requires Python: >=2.7
License: MIT
Keywords: numpy, einsum, hpc
Requirements: No requirements were recorded.

# einsumt
Multithreaded version of numpy.einsum function.

# Reasoning
Numpy's einsum is a fantastic function which allows for sophisticated array operations with a single, clear line of code. However, in general this function does not benefit from the underlying multicore architecture, and all operations are performed on a single CPU.

The idea is then to split the einsum input operands along a chosen subscript, perform the computation in threads, and then compose the final result either by summation (if the subscript is not present in the output) or by concatenation of the partial results.
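
As a rough illustration of this idea, here is a minimal sketch (not the actual einsumt implementation) of the concatenation case, i.e. when the split index appears in the output:

    from multiprocessing.pool import ThreadPool

    import numpy as np

    def threaded_einsum_sketch(subscripts, a, b, nthreads=4):
        # Split the first operand along its leading axis (assumed here to be
        # an output index), einsum the chunks in worker threads, then
        # concatenate the partial results along that same axis.
        chunks = np.array_split(a, nthreads, axis=0)
        with ThreadPool(nthreads) as pool:
            parts = pool.map(lambda chunk: np.einsum(subscripts, chunk, b), chunks)
        return np.concatenate(parts, axis=0)

If the split index were not present in the output, the partial results would be summed instead of concatenated; einsumt selects the leading index automatically (see the "Leading index" field in the benchmark output below).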

# Usage
This function can be used as a replacement for numpy's einsum:

    from einsumt import einsumt as einsum
    result = einsum(*operands, **kwargs)

In the current implementation the first operand *must* be a subscripts string. Any other difference in behavior with respect to numpy's einsum will be treated as an unintended bug.
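
For example, with the operands used in the benchmark below:

    import numpy as np
    from einsumt import einsumt as einsum

    a = np.random.rand(100, 100, 10, 10)
    b = np.random.rand(50, 10, 50)

    # Subscripts string first, then the array operands, as with np.einsum:
    result = einsum('aijk,bkl->ail', a, b)   # result.shape == (100, 100, 50)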

# Benchmarking
To test whether `einsumt` would be beneficial in your particular case, please run the benchmark, e.g.:

    import numpy as np
    from einsumt import bench_einsumt

    bench_einsumt('aijk,bkl->ail',
                  np.random.rand(100, 100, 10, 10),
                  np.random.rand(50, 10, 50))
and the result is:

    Platform:           Linux
    CPU type:           Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
    Subscripts:         aijk,bkl->ail
    Shapes of operands: (100, 100, 10, 10), (50, 10, 50)
    Leading index:      automatic
    Pool type:          default
    Number of threads:  12
    Execution time:
        np.einsum:      2755 ms  (average from 1 runs)
        einsumt:        507.9 ms  (average from 5 runs)
    Speed up:           5.424x
More example benchmark calls can be found in the bench_einsum.py file.
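
As a quick sanity check (not part of the package's benchmark), you can also verify that the multithreaded result matches plain np.einsum; tiny floating-point differences are possible because the partial results are combined in a different order:

    import numpy as np
    from einsumt import einsumt

    a = np.random.rand(100, 100, 10, 10)
    b = np.random.rand(50, 10, 50)

    ref = np.einsum('aijk,bkl->ail', a, b)
    mt = einsumt('aijk,bkl->ail', a, b)
    assert np.allclose(ref, mt)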

# Disclaimer
Before you start to blame me for little or no speedup, please keep in mind that threading costs additional time (for splitting and joining the data, for example), so the `einsumt` function becomes beneficial only for larger arrays. Note also that in many cases numpy's einsum can be efficiently replaced by a combination of optimized dots, tensordots, matmuls, transpositions and so on, instead of `einsumt` (at the cost of code clarity, of course).
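
Purely as an illustration of that trade-off (this is not part of the package), the benchmark contraction 'aijk,bkl->ail' can be expressed without einsum by first summing out the indices that appear in only one operand and then contracting the shared index k with tensordot:

    import numpy as np

    op1 = np.random.rand(100, 100, 10, 10)   # indices a, i, j, k
    op2 = np.random.rand(50, 10, 50)         # indices b, k, l

    # j is summed within op1, b is summed within op2, k is contracted:
    ref = np.einsum('aijk,bkl->ail', op1, op2)
    alt = np.tensordot(op1.sum(axis=2), op2.sum(axis=0), axes=([2], [0]))
    assert np.allclose(ref, alt)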

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/mrkwjc/einsumt",
    "name": "einsumt",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=2.7",
    "maintainer_email": "",
    "keywords": "numpy,einsum,hpc",
    "author": "Marek Wojciechowski",
    "author_email": "mrkwjc@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/06/1f/c049bd0fa7073bc0400404cc2a2b98852ae5e3c0d96b6290671629033bfd/einsumt-0.9.4.tar.gz",
    "platform": null,
    "description": "# einsumt\nMultithreaded version of numpy.einsum function.\n\n# Reasoning\nNumpy's einsum is a fantastic function which allows for sophisticated array operations with a single, clear line of code. However, this function in general does not benefit from the underlaying multicore architecture and all operations are performed on a single CPU.\n\nThe idea is then to split the einsum input operands along the chosen subscript, perform computation in threads and then compose the final result by summation (if subscript is not present in output) or concatenation of partial results.\n\n# Usage\nThis function can be used as a replacement for numpy's einsum:\n\n    from einsumt import einsumt as einsum\n    result = einsum(*operands, **kwargs)\n\nIn current implementation first operand *must* be a subscripts string. Other differences will be treated as unintended bugs.\n\n# Benchmarking\nIn order to test, if `einsumt` would be beneficial in your particular case please run the benchmark, e.g.:\n\n    import numpy as np\n    from einsumt import bench_einsumt\n\n    bench_einsumt('aijk,bkl->ail',\n                  np.random.rand(100, 100, 10, 10),\n                  np.random.rand(50, 10, 50))\nand the result is:\n\n    Platform:           Linux\n    CPU type:           Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz\n    Subscripts:         aijk,bkl->ail\n    Shapes of operands: (100, 100, 10, 10), (50, 10, 50)\n    Leading index:      automatic\n    Pool type:          default\n    Number of threads:  12\n    Execution time:\n        np.einsum:      2755 ms  (average from 1 runs)\n        einsumt:        507.9 ms  (average from 5 runs)\n    Speed up:           5.424x\nMore exemplary benchmark calls are contained in bench_einsum.py file.\n\n# Disclaimer\nBefore you start to blame me because of little or no speedups please keep in mind that threading costs additional time (because of splitting and joining data for example), so `einsumt` function would become beneficial for larger arrays only. Note also that in many cases numpy's einsum can be efficiently replaced with combination of optimized dots, tensordots, matmuls, transpositions and so on, instead of `einsumt` (at cost of code clarity of course).\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Multithreaded version of numpy.einsum function",
    "version": "0.9.4",
    "project_urls": {
        "Homepage": "https://github.com/mrkwjc/einsumt"
    },
    "split_keywords": [
        "numpy",
        "einsum",
        "hpc"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "d4a1757a5410cbde43cffee4bd97adee3c31cb33327dec23b0237d07ee8da1c8",
                "md5": "b9313e2b118e9f446162624164174dfc",
                "sha256": "56f56752dc939be8c619c70f0796d853fe3da84e7587ab2fafc21bf936279738"
            },
            "downloads": -1,
            "filename": "einsumt-0.9.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "b9313e2b118e9f446162624164174dfc",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=2.7",
            "size": 6494,
            "upload_time": "2023-05-20T12:29:16",
            "upload_time_iso_8601": "2023-05-20T12:29:16.608490Z",
            "url": "https://files.pythonhosted.org/packages/d4/a1/757a5410cbde43cffee4bd97adee3c31cb33327dec23b0237d07ee8da1c8/einsumt-0.9.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "061fc049bd0fa7073bc0400404cc2a2b98852ae5e3c0d96b6290671629033bfd",
                "md5": "52034b96ff92d0eea5982fbe5b5aa4db",
                "sha256": "06639495c4ef5e34092e536ae868f83fe2885addd0462270dce6e4530216a931"
            },
            "downloads": -1,
            "filename": "einsumt-0.9.4.tar.gz",
            "has_sig": false,
            "md5_digest": "52034b96ff92d0eea5982fbe5b5aa4db",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=2.7",
            "size": 6168,
            "upload_time": "2023-05-20T12:29:18",
            "upload_time_iso_8601": "2023-05-20T12:29:18.183813Z",
            "url": "https://files.pythonhosted.org/packages/06/1f/c049bd0fa7073bc0400404cc2a2b98852ae5e3c0d96b6290671629033bfd/einsumt-0.9.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-05-20 12:29:18",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "mrkwjc",
    "github_project": "einsumt",
    "travis_ci": true,
    "coveralls": false,
    "github_actions": true,
    "lcname": "einsumt"
}
        