multiprocesspandas

- **Name:** multiprocesspandas
- **Version:** 1.1.5
- **Home page:** https://github.com/akhtarshahnawaz/multiprocesspandas
- **Summary:** Extends Pandas to run apply methods for DataFrames, Series, and groups on multiple cores at the same time.
- **Upload time:** 2023-02-07 12:51:19
- **Author:** Shahnawaz Akhtar
- **License:** MIT
- **Keywords:** pandas, multiprocessing, pandas multiprocessing, parallel, parallize pandas
- **Requirements:** none recorded
            # MultiprocessPandas

The MultiprocessPandas package extends the functionality of Pandas to easily run operations on multiple cores, i.e. to parallelize them. The current version of the package provides the capability to parallelize **_apply()_** methods on DataFrames, Series, and DataFrameGroupBy objects.

Importing the applyparallel module adds an **_apply_parallel()_** method to DataFrame, Series, and DataFrameGroupBy, which allows you to run operations on multiple cores.

## Installation

The package can be pulled from GitHub or installed directly from PyPI.

To install using pip:

```shell
    pip install multiprocesspandas
```

## Setting up the Library

To use the library, you have to import the applyparallel module. The import attaches the required methods to Pandas, and you can call them directly on Pandas data objects.

```python
    from multiprocesspandas import applyparallel
```

## Usage

Once imported, the library adds the ability to call an **_apply_parallel()_** method on your DataFrame, Series, or DataFrameGroupBy. The method accepts the function to be applied and the following named arguments:

- **_static_data_** (external data required by the passed function, defaults to None)
- **_num_processes_** (defaults to the maximum number of cores available on your CPU)
- **_axis_** (only for DataFrames, defaults to 0, i.e. rows; for columns, set axis=1)

**Note:** Any extra module required by the passed function must be re-imported inside the function.
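The reason behind this note: worker processes may start as fresh interpreters (e.g. under the "spawn" start method), so module-level imports from the parent process are not visible inside the shipped function. A minimal stdlib sketch of the pattern (not the library's actual internals), using `multiprocessing.Pool` directly:

```python
import multiprocessing

def worker(x):
    # Re-import inside the function: under the "spawn" start method,
    # each worker is a fresh interpreter without the parent's imports.
    import math
    return math.sqrt(x)

if __name__ == "__main__":
    with multiprocessing.get_context("spawn").Pool(2) as pool:
        print(pool.map(worker, [1, 4, 9]))  # [1.0, 2.0, 3.0]
```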

### Usage with DataFrameGroupBy

```python
    def func(x):
        import pandas as pd
        return pd.Series([x['C'].mean()])

    df.groupby(["A","B"]).apply_parallel(func, num_processes=30)
```
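For a sanity check, the same computation can be run sequentially with pandas' built-in **_apply()_**; **_apply_parallel()_** is intended as a drop-in replacement that distributes the groups across processes. The sample DataFrame below is made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["x", "x", "y", "y"],
    "B": [1, 1, 2, 2],
    "C": [10.0, 20.0, 30.0, 40.0],
})

def func(x):
    import pandas as pd
    return pd.Series([x["C"].mean()])

# Sequential equivalent of df.groupby(["A", "B"]).apply_parallel(func)
result = df.groupby(["A", "B"]).apply(func)
print(result)
```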

If you need external data inside **func()**, it has to be passed and received as positional or keyword arguments.

```python
    data1 = pd.Series([1,2,3])
    data2 = 20

    def func(x, data1, data2):
        import pandas as pd
        output = data1 - x['C'].mean()
        return output * data2

    df.groupby(["A","B"]).apply_parallel(func, data1=data1, data2=data2, num_processes=30)
```

### Usage with DataFrame

Usage with DataFrames is very similar to usage with DataFrameGroupBy; however, you have to pass an extra argument, 'axis', which tells whether to apply the function over rows or columns.

```python
    def func(x):
        return x.mean()

    df.apply_parallel(func, num_processes=30, axis=1)
```

External data can be passed in the same way as with DataFrameGroupBy:

```python
    data = pd.Series([1,2,3])

    def func(x, data):
        return data.sum() + x.mean()

    df.apply_parallel(func, data=data, num_processes=30)
```
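As a reference point, pandas' built-in **_apply()_** accepts the same extra keyword arguments and runs sequentially; the toy DataFrame here is made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]})
data = pd.Series([1, 2, 3])

def func(x, data):
    return data.sum() + x.mean()

# Sequential equivalent of df.apply_parallel(func, data=data, num_processes=30);
# extra keyword arguments are forwarded to func.
result = df.apply(func, data=data)
print(result)
```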

### Usage with Series

Usage with Series is very similar to the usage with DataFrames and DataFrameGroupBy.

```python
    data = pd.Series([1,2,3])

    def func(x, data):
        return data - x

    series.apply_parallel(func, data=data, num_processes=30)
```
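Again, pandas' built-in **_Series.apply()_** provides a sequential reference; since **func** returns a Series for each element, the result expands into a DataFrame (the example values are made up):

```python
import pandas as pd

series = pd.Series([10, 20])
data = pd.Series([1, 2, 3])

def func(x, data):
    return data - x

# Sequential equivalent of series.apply_parallel(func, data=data);
# func is called once per element, keyword arguments are forwarded.
result = series.apply(func, data=data)
print(result)
```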



            
