# mydatapreprocessing
Load data from a web link or a local file (json, csv, Excel, parquet, h5...), consolidate it (resample, clean NaN values, embed strings), derive new features via column derivations, and run preprocessing like standardization or smoothing. To see how the functions work, check their docstrings; working examples with printed results are also in the tests (visual.py).
## Links
[Repo on GitHub](https://github.com/Malachov/mydatapreprocessing)
[Official readthedocs documentation](https://mydatapreprocessing.readthedocs.io)
## Installation
Python >= 3.7 is required (Python 2 is not supported).
Install with
```console
pip install mydatapreprocessing
```
Some dependencies are only needed for specific data inputs, so not every user needs them. If you want to be sure you have all the optional libraries, install with an extras specifier:
```console
pip install mydatapreprocessing[datatypes]
```
Available extras are `["all", "datasets", "datatypes"]`.
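For example, to pull in every optional dependency at once, use the `all` extra listed above:
```console
pip install mydatapreprocessing[all]
```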
## Examples
You can use the live [jupyter demo on binder](https://mybinder.org/v2/gh/Malachov/mydatapreprocessing/HEAD?filepath=demo.ipynb).
<!--phmdoctest-setup-->
```python
import mydatapreprocessing as mdp
import pandas as pd
import numpy as np
```
### Load data
You can use:
- Python objects (numpy.ndarray, pd.DataFrame, list, tuple, dict)
- local files
- web URLs
Supported file formats are:
- csv
- xlsx and xls
- json
- parquet
- h5
You can load multiple datasets at once by passing a list of paths.
The syntax is always the same.
<!--phmdoctest-label test_load_data-->
<!--phmdoctest-share-names-->
```python
data = mdp.load_data.load_data(
    "https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv",
)
# data2 = mdp.load_data.load_data(["PATH_TO_FILE.csv", "PATH_TO_FILE2.csv"])
```
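Loading from an in-memory Python object uses the same call (a minimal sketch; the dict-of-lists shape here is an illustrative assumption based on the supported formats listed above):
```python
records = {"col_1": [1.0, 2.0, 3.0], "col_2": [4.0, 5.0, 6.0]}
data_from_dict = mdp.load_data.load_data(records)
```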
### Consolidation
If you want to use the data in machine learning models, you will probably want to remove NaN values, convert string columns to numeric where possible, do encoding or keep only numeric data, and resample.
Consolidation works with pandas DataFrames, as column names matter here.
There are many functions, but the main one, `consolidate_data`, pipelines the others.
<!--phmdoctest-label test_consolidation-->
<!--phmdoctest-share-names-->
```python
consolidation_config = mdp.consolidation.consolidation_config.default_consolidation_config.do.copy()
consolidation_config.datetime.datetime_column = 'Date'  # column used as the datetime index
consolidation_config.resample.resample = 'M'  # resample to monthly frequency
consolidation_config.resample.resample_function = "mean"
consolidation_config.dtype = 'float32'

consolidated = mdp.consolidation.consolidate_data(data, consolidation_config)
print(consolidated.head())
```
In configs, you can use the shorter update-dict syntax, as all value names are unique.
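For example, the consolidation settings above could be written as a single update (a sketch; it assumes the consolidation config exposes the same `do.update` helper that the preprocessing config uses in the Preprocessing example below):
```python
consolidation_config = mdp.consolidation.consolidation_config.default_consolidation_config.do.copy()
# Assumed helper: update several uniquely named values in one call
consolidation_config.do.update(
    {"datetime_column": "Date", "resample": "M", "resample_function": "mean", "dtype": "float32"}
)
```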
### Feature engineering
Create new columns that can be used, for example, as input for another machine learning model.
```python
import mydatapreprocessing.feature_engineering as mdpf
import mydatapreprocessing as mdp

data = pd.DataFrame(
    [mdp.datasets.sin(n=30), mdp.datasets.ramp(n=30)]
).T

extended = mdpf.add_derived_columns(data, differences=True, rolling_means=10)
print(extended.columns)
print(f"\nIt has fewer rows than the input: {len(extended)}")
```
Functions in `feature_engineering` and `preprocessing` expect data in the shape (*n_samples*, *n_features*). *n_samples* is usually much bigger than *n_features*, so the data is transposed in `consolidate_data` if necessary.
### Preprocessing
Preprocessing can be used on a pandas DataFrame as well as on a numpy array. Column names are not important, as the data is just a matrix with a defined dtype.
There are many functions, but the main one, `preprocess_data`, pipelines the others. Preprocessed data can be converted back with `preprocess_data_inverse`.
<!--phmdoctest-label test_preprocess_data-->
<!--phmdoctest-share-names-->
```python
from mydatapreprocessing import preprocessing as mdpp

df = pd.DataFrame(np.array([range(5), range(20, 25), np.random.randn(5)]).astype("float32").T)
df.iloc[2, 0] = 500  # insert an outlier

config = mdpp.preprocessing_config.default_preprocessing_config.do.copy()
config.do.update({"remove_outliers": None, "difference_transform": True, "standardize": "standardize"})

data_preprocessed, inverse_config = mdpp.preprocess_data(df.values, config)

# The inverse transform needs the first original value to undo the difference transform
inverse_config.difference_transform = df.iloc[0, 0]
data_preprocessed_inverse = mdpp.preprocess_data_inverse(
    data_preprocessed[:, 0], inverse_config
)
```
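A quick way to verify the round trip (a sketch; it assumes the configured transforms invert exactly up to float precision, and that the difference transform consumes the first row):
```python
print(data_preprocessed_inverse)
print(df.values[1:, 0])  # should match the line above up to float precision
```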