# Tablite
![Build status](https://github.com/root-11/tablite/actions/workflows/python-test.yml/badge.svg)
[![codecov](https://codecov.io/gh/root-11/tablite/branch/master/graph/badge.svg?token=A0QEWGO9R6)](https://codecov.io/gh/root-11/tablite)
[![Downloads](https://pepy.tech/badge/tablite)](https://pepy.tech/project/tablite)
[![Downloads](https://pepy.tech/badge/tablite/month)](https://pepy.tech/project/tablite)
[![PyPI version](https://badge.fury.io/py/tablite.svg)](https://badge.fury.io/py/tablite)
--------------
## Contents
- [introduction](#introduction)
- [installation](#installation)
- [feature overview](#feature_overview)
- [api](#api)
- [tutorial](#tutorial)
- [latest updates](#latest_updates)
- [credits](#credits)
## <a name="introduction"></a>Introduction
`Tablite` seeks to be the go-to library for manipulating tabular data with an API that stays as close to pure Python syntax as possible.
### Even smaller memory footprint
Tablite uses [numpy's file format](https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html) as a backend behind a strong abstraction, so that copy, append & repetition of data are handled in pages. This is imperative for [incremental data processing](https://raw.githubusercontent.com/root-11/tablite/74e7b44cfc314950b7a769316cb48d67cce725d0/images/incremental_dataprocessing.svg).
Tablite tests [for memory footprint](https://github.com/root-11/tablite/blob/master/tests/test_memory_footprint.py). One test compares the memory footprint of 10,000,000 integers, where `tablite` uses < 1 MB of RAM in contrast to pure Python, which requires around 133.7 MB of RAM (1M lists with 10 integers each). Tablite also tests to ensure that working with [1 TB of data](https://github.com/root-11/tablite/blob/9bb6e572538a85aee31ef8a4a60c0945a6f857a4/tests/test_filereader_performance.py#L104) is tolerable.
Tablite achieves this minimal memory footprint by using temporary storage, set in `config.Config.workdir`, which defaults to `tempfile.gettempdir()/tablite-tmp`.
If your OS (Windows/Linux/Mac) sits on an SSD, this benefits from high IOPS and permits slices of [9,000,000,000 rows in less than a second](https://github.com/root-11/tablite/blob/master/images/1TB_test.png?raw=true).
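The working directory can be inspected or overridden before any data is loaded. A minimal sketch, assuming `Config` lives in `tablite.config` and exposes the `workdir` attribute named above; the SSD path is hypothetical:

```python
import pathlib
import tempfile

from tablite.config import Config  # assumed import path

# default working directory, as described above
print(pathlib.Path(tempfile.gettempdir()) / "tablite-tmp")

# point the page store at a fast disk before creating or importing tables
Config.workdir = pathlib.Path("/mnt/fast_ssd/tablite-tmp")  # hypothetical path
```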
### Multiprocessing enabled by default
Tablite uses numpy wherever possible and applies multiprocessing to bypass the GIL on all major operations.
CSV import is implemented in `nim` (compiled to C) and is as fast as the hardware allows.
### All algorithms have been reworked to respect memory limits
Tablite respects the limits of free memory by measuring free memory and sizing each task accordingly before any memory-intensive operation (join, groupby, data import, etc.) is started.
If you still run out of memory, try reducing `config.Config.PAGE_SIZE` and rerun your program.
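A hedged sketch of that knob, assuming `Config` is importable from `tablite.config` and that `PAGE_SIZE` counts rows per page (check the API docs for the exact default and unit); the file path is hypothetical:

```python
from tablite.config import Config
from tablite import Table

# smaller pages lower the peak RAM used by each worker task
Config.PAGE_SIZE = 100_000

t = Table.import_file("big_file.csv")  # hypothetical file
```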
### 100% support for all python datatypes
Tablite wants to make it easy for you to work with data. `tablite.Table`s behave like a dict of lists:
`my_table[column name] = [... data ...]`.
Tablite maps to native numpy datatypes where possible and uses type mapping for non-native types such as timedelta, None, date and time, i.e. what you put in is what you get out. This is inspired by [bank python](https://calpaterson.com/bank-python.html).
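A minimal sketch of that behaviour; the column names and values are made up, but the dict-with-lists assignment is exactly the form shown above:

```python
from datetime import date, time, timedelta
from tablite import Table

t = Table()
t["mixed"] = [7, 7.0, None, "7", date(2023, 1, 1), time(21, 10), timedelta(days=1)]
t["numbers"] = list(range(7))

# what you put in is what you get out
print(list(t["mixed"]))
```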
### Light weight
Tablite is ~200 kB.
### Helpful
Tablite wants you to be productive, so a number of helpers are available.
- `Table.import_file` to import csv*, tsv, txt, xls, xlsx, xlsm, ods, zip and logs. There is automatic type detection (see [tutorial.ipynb](https://nbviewer.org/github/root-11/tablite/blob/master/tutorial.ipynb) )
- To peek into any supported file use `get_headers` which shows the first 10 rows.
- Use `mytable.rows` and `mytable.columns` to iterate over rows or columns.
- Create multi-key `.index` for quick lookups.
- Perform multi-key `.sort`.
- Filter using `.any` and `.all` to select specific rows.
- Use multi-key `.lookup` and `.join` to find data across tables.
- Perform `.groupby` and reorganise data as a `.pivot` table with max, min, sum, first, last, count, unique, average, standard deviation, median and mode.
- Append / concatenate tables with `+=` which automatically sorts out the columns - even if they're not in perfect order.
- Should your tables be similar but not identical, you can use `.stack` to "stack" tables on top of each other.

If you're still missing something, add it to the [wishlist](https://github.com/root-11/tablite/issues).
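A short sketch tying a few of the helpers together. The file paths are hypothetical, and importing `get_headers` from the top-level package is an assumption; the rest follows the list above:

```python
from tablite import Table, get_headers

print(get_headers("orders.csv"))            # peek at the first rows of any supported file

t = Table.import_file("orders.csv")         # automatic type detection
print(list(t.columns))                      # column names
for row in t.rows:                          # iterate over rows
    print(row)

t += Table.import_file("more_orders.csv")   # append; columns are matched by name
```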
---------------
## <a name="installation"></a>Installation
Get it from PyPI: [![PyPI version](https://badge.fury.io/py/tablite.svg)](https://badge.fury.io/py/tablite)
Install: `pip install tablite`
Usage: `>>> from tablite import Table`
## <a name="build & test"></a>Build & test
Install nim >= 2.0.0, then run:

`chmod +x ./build_nim.sh`
`./build_nim.sh`

If the default nim is not the one you want, use nim's environment manager (`atlas`) and run `source nim-2.0.0/activate.sh` on UNIX or `nim-2.0.0/activate.bat` on Windows.
```
# requires python >= 3.8
python -m venv /your/venv/dir
source /your/venv/dir/bin/activate   # on Windows: \your\venv\dir\Scripts\activate
pip install -r requirements.txt
pip install -r requirements_for_testing.py
pytest ./tests
```
## <a name="feature_overview"></a>Feature overview
|want to...| this way... |
|---|---|
|loop over rows| `[ row for row in table.rows ]`|
|loop over columns| `[ table[col_name] for col_name in table.columns ]`|
|slice | `myslice = table['A', 'B', slice(0,None,15)]`|
|get column by name | `my_table['A']` |
|get row by index | `my_table[9_000_000_001]` |
|value update| `mytable['A'][2] = new_value` |
|update w. list comprehension | `mytable['A'] = [ x*x for x in mytable['A'] if x % 2 != 0 ]`|
|join| `a_join = numbers.join(letters, left_keys=['colour'], right_keys=['color'], left_columns=['number'], right_columns=['letter'], kind='left')`|
| lookup| `travel_plan = friends.lookup(bustable, (DataTypes.time(21, 10), "<=", 'time'), ('stop', "==", 'stop'))`|
| groupby| `group_by = table.groupby(keys=['C', 'B'], functions=[('A', gb.count)])`|
| pivot table | `my_pivot = t.pivot(rows=['C'], columns=['A'], functions=[('B', gb.sum), ('B', gb.count)], values_as_rows=False)`|
| index| `indices = old_table.index(*old_table.columns)`|
| sort| `lookup1_sorted = lookup_1.sort(**{'time': True, 'name':False, "sort_mode":'unix'})`|
| filter | `true, false = unfiltered.filter( [{"column1": 'a', "criteria":">=", 'value2':3}, ... more criteria ... ], filter_type='all' )`|
| find any | `any_even_rows = mytable.any(**{'A': lambda x: x % 2 == 0, 'B': lambda x: x > 0})`|
| find all | `all_even_rows = mytable.all(**{'A': lambda x: x % 2 == 0, 'B': lambda x: x > 0})`|
| to json | `json_str = my_table.to_json()`|
| from json | `Table.from_json(json_str)`|
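A small end-to-end sketch built from the entries above. The data is made up, and the keyword form of `.any` (column name mapped to a callable) is an assumption based on the corrected row above:

```python
from tablite import Table

t = Table()
t["A"] = list(range(10))
t["B"] = [x % 3 for x in range(10)]

myslice = t["A", "B", slice(0, None, 2)]       # every 2nd row of columns A and B
even = t.any(**{"A": lambda x: x % 2 == 0})    # rows where A is even
round_trip = Table.from_json(t.to_json())      # json round trip
```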
## <a name="api"></a>API
To view the detailed API see [api](https://root-11.github.io/tablite/latest/)
## <a name="tutorial"></a>Tutorial
To learn more see the [tutorial.ipynb](https://github.com/root-11/tablite/blob/master/tutorial.ipynb) (Jupyter notebook)
## <a name="latest_updates"></a>Latest updates
See [changelog.md](https://github.com/root-11/tablite/blob/master/changelog.md)
## <a name="credits"></a>Credits
- [Eugene Antonov](https://github.com/Jetman80) - the API documentation.
- [Audrius Kulikajevas](https://github.com/realratchet) - Edge case testing / various bugs, Jupyter notebook integration.
- [Ovidijus Grigas](https://github.com/omenSi) - various bugs, documentation.
- Martynas Kaunas - GroupBy functionality.
- Sergej Sinkarenko - various bugs.
- Lori Cooper - spell checking.