automatize

- **Name:** automatize
- **Version:** 1.0b7
- **Home page:** https://github.com/ttportela/automatize
- **Summary:** Automatize: A Multiple Aspect Trajectory Data Mining Tool Library
- **Upload time:** 2023-10-26 17:31:00
- **Author:** Tarlis Tortelli Portela
- **Requires Python:** >=3.7
- **License:** GPL Version 3 or superior (see LICENSE file)
- **Keywords:** data-science, machine-learning, data-mining, trajectory, multiple-trajectory, trajectory-classification, movelet, movelet-visualization
# Automatize: Multiple Aspect Trajectory Data Mining Tool Library
---

\[[Publication](#)\] \[[citation.bib](assets/citation.bib)\] \[[GitHub](https://github.com/ttportela/automatize)\] \[[PyPi](https://pypi.org/project/automatize/)\]


Welcome to the Automatize Framework for Multiple Aspect Trajectory Analysis. You can use it as a web platform or as a Python library.

Automatize supports the user in the classification of multiple aspect trajectories, specifically in extracting and visualizing movelets: the parts of a trajectory that best discriminate a class. It integrates the fragmented approaches available for multiple aspect trajectory classification (and multidimensional sequence classification in general) into a single web-based and Python library system, offering both movelet visualization and complete configuration of classification experiment settings.

### Main Modules

- [Datasets](/datasets): Datasets descriptions, statistics and files to download;
- [Methods](/methods): Methods for trajectory classification and movelet extraction;
- [Scripting](/experiments): Script generator for experimental evaluation on available methods (Linux shell);
- [Results](/results): Experiments on trajectory datasets and method rankings;
- [Analysis](/analysis): Multiple Aspect Trajectory Analysis Tool (trajectory and movelet visualization);
- [Publications](/publications): Multiple Aspect Trajectory Analysis related publications;
- [Tutorial](/tutorial): Tutorial on how to use Automatize as a Python library.


### Available Classifiers (needs update):

* **MLP (Movelet)**: Multilayer Perceptron (MLP) with movelet features. The models were implemented in Python with Keras: a fully-connected hidden layer of 100 units, a Dropout layer with dropout rate 0.5, a learning rate of 10⁻³, and a softmax activation in the output layer. Adam optimization is used to minimize the categorical cross-entropy loss, with a batch size of 200 and a total of 200 epochs per training. \[[REFERENCE](https://doi.org/10.1007/s10618-020-00676-x)\]
* **RF (Movelet)**: Random Forest (RF) with movelet features, consisting of an ensemble of 300 decision trees. The models were implemented in Python. \[[REFERENCE](https://doi.org/10.1007/s10618-020-00676-x)\]
* **SVM (Movelet)**: Support Vector Machine (SVM) with movelet features, implemented in Python with a linear kernel; other structure details are default settings. \[[REFERENCE](https://doi.org/10.1007/s10618-020-00676-x)\]
* **POI-S**: Frequency-based method to extract features from trajectory datasets (a TF-IDF approach); the method runs on one dimension at a time (or more, if concatenated). Implemented in Python. \[[REFERENCE](https://doi.org/10.1145/3341105.3374045)\]
* **MARC**: Uses word embeddings for trajectory classification. It encapsulates all trajectory dimensions (space, time, and semantics) and uses them as input to a neural network classifier, applying Geohash to the spatial dimension combined with the others. Implemented in Python with Keras. \[[REFERENCE](https://doi.org/10.1080/13658816.2019.1707835)\]
* **TRF**: Random Forest for trajectory data (TRF). The optimal set of hyperparameters is found for each model via grid search, varying the number of trees (ne), the maximum number of features to consider at every split (mf), the maximum number of levels in a tree (md), the minimum number of samples required to split a node (mss), the minimum number of samples required at each leaf node (msl), and the method of selecting samples for training each tree (bs). \[[REFERENCE](http://dx.doi.org/10.5220/0010227906640671)\]
* **XGBoost**: The optimal set of hyperparameters is found for each model via grid search, varying the number of estimators (ne), the maximum depth of a tree (md), the learning rate (lr), the gamma (gm), the fraction of observations to be randomly sampled for each tree (ss), the subsample ratio of columns when constructing each tree (cst), and the regularization parameters (l1) and (l2). \[[REFERENCE](http://dx.doi.org/10.5220/0010227906640671)\]
* **BITULER**: The optimal set of hyperparameters is found for each model via grid search, keeping 64 as the batch size and 0.001 as the learning rate while varying the units of the recurrent layer (un), the embedding size of each attribute (es), and the dropout (dp). \[[REFERENCE](http://dx.doi.org/10.5220/0010227906640671)\]
* **TULVAE**: The optimal set of hyperparameters is found for each model via grid search, keeping 64 as the batch size and 0.001 as the learning rate while varying the units of the recurrent layer (un), the embedding size of each attribute (es), the dropout (dp), and the latent variable (z). \[[REFERENCE](http://dx.doi.org/10.5220/0010227906640671)\]
* **DEEPEST**: DeepeST employs a Recurrent Neural Network (RNN), either an LSTM or a Bidirectional LSTM (BLSTM). The optimal set of hyperparameters is found for each model via grid search, keeping 64 as the batch size and 0.001 as the learning rate while varying the units of the recurrent layer (un), the embedding size of each attribute (es), and the dropout (dp). \[[REFERENCE](http://dx.doi.org/10.5220/0010227906640671)\]
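Several of the methods above (TRF, XGBoost, BITULER, TULVAE, DeepeST) tune hyperparameters by grid search. The scheme can be sketched in plain Python; the grid values and the `evaluate` function below are hypothetical stand-ins for training and validating a real model:

```python
import itertools

# Hypothetical hyperparameter grid in the spirit of the TRF setup above
# (keys like 'ne', 'md', 'mss' mirror the abbreviations in the text).
grid = {
    "ne": [100, 200, 300],   # number of trees
    "md": [10, 20],          # maximum tree depth
    "mss": [2, 5],           # minimum samples to split a node
}

def evaluate(params):
    """Stand-in for training a model and returning a validation score.

    Illustrative scoring only; a real run would fit a classifier here.
    """
    return 1.0 / (1 + abs(params["ne"] - 200) + params["md"] + params["mss"])

def grid_search(grid):
    """Enumerate every combination in the grid and keep the best scorer."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, _ = grid_search(grid)
```

In the real methods, `evaluate` would train the classifier with those hyperparameters and return its validation accuracy.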

### Installation

Install directly from the PyPI repository, or download from GitHub. Installing with pip also provides command-line scripts (available in the folder `automatize/scripts`). (Python >= 3.7 required.)

```bash
    pip install automatize
```

To use Automatize as a Python library, find examples in this sample Jupyter Notebook: [Automatize_Sample_Code.ipynb](./assets/examples/Automatize_Sample_Code.ipynb)

To run Automatize as a web application, run `MAT.py`. The application will run at http://127.0.0.1:8050/

You can run the application with a path to the datasets directory, for instance:

```bash
    MAT.py -d "./data"
```

This will read the dataset structure for the scripting module. The datasets directory must follow this structure:
```
data
├── multiple_trajectories
│   ├── Brightkite
│   │   ├── Brightkite.md [required: dataset description markdown file]
│   │   ├── Brightkite-stats.md [not required]
│   │   ├── specific_train.csv [train file, .csv/.mat/.zip]
│   │   ├── specific_test.csv [test file, .csv/.mat/.zip]
│   ├── FoursquareNYC
│   │   ├── FoursquareNYC.md
│   │   ├── train.csv
│   │   ├── test.csv
│   ├── Gowalla
│   │   ├── Gowalla.md
│   │   ├── train.csv
│   │   ├── test.csv
│   ├── descriptors [descriptor files for movelets methods]
│   │   ├── Brightkite_specific_hp.json
│   │   ├── Gowalla_specific_hp.json
├── raw_trajectories
│   ├── Geolife
│   │   ├── Geolife.md
│   │   ├── train.csv
│   │   ├── test.csv
│   ├── GoTrack
│   │   ├── GoTrack.md
│   │   ├── train.csv
│   │   ├── test.csv
│   ├── descriptors
```
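A small helper can verify that a datasets directory follows this layout. The function below is an illustrative sketch (an assumed helper, not part of the automatize API): each dataset folder must contain a description `.md` file plus train and test data files.

```python
from pathlib import Path

def check_dataset_dir(root):
    """Sketch of a validator for the layout above (assumed helper, not
    part of the automatize API). Returns a list of problems found."""
    problems = []
    root = Path(root)
    for category in (p for p in root.iterdir() if p.is_dir()):
        for dataset in (p for p in category.iterdir() if p.is_dir()):
            if dataset.name == "descriptors":
                continue  # holds .json descriptor files, not a dataset
            if not any(dataset.glob("*.md")):
                problems.append(f"{dataset}: missing description .md")
            for part in ("train", "test"):
                # accepts any of the .csv/.mat/.zip variants noted above
                if not any(dataset.glob(f"*{part}.*")):
                    problems.append(f"{dataset}: missing {part} file")
    return problems
```

An empty return value means every dataset folder has its description file and both data splits.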

#### Available Scripts:

By installing the package, the following Python scripts become available as system command-line tools:

* `MAT-TC.py`: Script to run classifiers on trajectory datasets; for details type `MAT-TC.py --help`;
* `MAT-MC.py`: Script to run **movelet-based** classifiers on trajectory datasets; for details type `MAT-MC.py --help`;
* `POIS-TC.py`: Script to run POI-F/POI-S classifiers on the methods' feature matrices; for details type `POIS-TC.py --help`;
* `MARC.py`: Script to run the MARC classifier on trajectory datasets; for details type `MARC.py --help`.

One script for running the **POI-F/POI-S** method:

* `POIS.py`: Script to run the POI-F/POI-S feature extraction methods (`poi`, `npoi`, and `wnpoi`); for details type `POIS.py --help`.
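Conceptually, POI-F/POI-S builds TF-IDF-style frequency features from the values of one trajectory dimension (e.g. visited POIs). The sketch below illustrates that idea only; it is not the `POIS.py` implementation, and the exact weighting of `poi`, `npoi`, and `wnpoi` differs:

```python
import math
from collections import Counter

def tfidf_features(trajectories):
    """Illustrative TF-IDF over POI sequences (a simplified stand-in for
    the poi/npoi/wnpoi variants, not the POIS.py implementation).

    Each trajectory is a list of POI labels; returns the vocabulary and
    one feature row per trajectory."""
    vocab = sorted({p for t in trajectories for p in t})
    n = len(trajectories)
    # document frequency: in how many trajectories each POI appears
    df = {p: sum(p in t for t in trajectories) for p in vocab}
    rows = []
    for t in trajectories:
        counts = Counter(t)
        rows.append([
            (counts[p] / len(t)) * math.log(n / df[p])  # tf * idf
            for p in vocab
        ])
    return vocab, rows
```

The resulting feature matrix is what a downstream classifier (such as `POIS-TC.py`) would consume.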

Scripts for helping the management of result files:

* `MAT-CheckRun.py`: Script to check the resulting files from running the generated scripts of an experimental evaluation; the directory structure must comply with the expected naming patterns;
* `MAT-MergeDatasets.py`: Script to merge all `train.csv` and `test.csv` files from a movelet-based feature extraction method; it generates two merged `train.csv` and `test.csv` files from the subclass results;
* `MAT-ResultsTo.py`: Script to export logs and classification files from the results directory of the generated scripts of an experimental evaluation to a compressed file; the directory structure must comply with the expected naming patterns;
* `MAT-ExportResults.py`: Script to export logs and classification files from the results directory of the generated scripts of an experimental evaluation; the directory structure must comply with the expected naming patterns;
* `MAT-ExportMovelets.py`: Script to export movelet `.json` files from the results directory of an experimental evaluation; the directory structure must comply with the expected naming patterns.
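The column-wise merge performed by `MAT-MergeDatasets.py` can be pictured as follows. This is a simplified stand-in, not the real script: it assumes the per-subclass CSVs share row order, and ignores details such as a duplicated class column.

```python
import csv
import io

def merge_feature_csvs(parts):
    """Concatenate the columns of several CSVs (given as strings) that
    share the same row order; illustrative of the merge idea only."""
    merged_rows = None
    for text in parts:
        rows = list(csv.reader(io.StringIO(text)))
        if merged_rows is None:
            merged_rows = rows
        else:
            # append this part's columns to each existing row
            merged_rows = [a + b for a, b in zip(merged_rows, rows)]
    out = io.StringIO()
    csv.writer(out).writerows(merged_rows)
    return out.getvalue()
```

Running it over the per-subclass `train.csv` fragments would yield one wide `train.csv`, and likewise for `test.csv`.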


### Citing

If you use `automatize`, please cite the following paper:

    Portela, Tarlis Tortelli; Bogorny, Vania; Bernasconi, Anna; Renso, Chiara. AutoMATise: Multiple Aspect Trajectory Data Mining Tool Library. 2022 23rd IEEE International Conference on Mobile Data Management (MDM), 2022, pp. 282-285, doi: 10.1109/MDM55031.2022.00060.

[Bibtex](citation.bib):

```bibtex
@inproceedings{Portela2022automatise,
    title={AutoMATise: Multiple Aspect Trajectory Data Mining Tool Library},
    author={Portela, Tarlis Tortelli and Bogorny, Vania and Bernasconi, Anna and Renso, Chiara},
    booktitle = {2022 23rd IEEE International Conference on Mobile Data Management (MDM)},
    volume={},
    number={},
    address = {Online},
    year={2022},
    pages = {282--285},
    doi={10.1109/MDM55031.2022.00060}
}
```

(For disambiguation: the tool name was previously spelled AutoMATise, with an 's'; the current version is spelled AutoMATize, with a 'z'.)

### Collaborate with us

Any contribution is welcome. This is an active project and if you would like to include your algorithm in `automatize`, feel free to fork the project, open an issue and contact us.

Feel free to contribute in any form, such as scientific publications referencing `automatize`, teaching material and workshop videos.

### Related packages

- [scikit-mobility](https://github.com/scikit-mobility/scikit-mobility): Human trajectory representation and visualization in Python;
- [geopandas](https://geopandas.org/en/stable/): Library to help work with geospatial data in Python;
- [movingpandas](https://anitagraser.github.io/movingpandas/): Based on `geopandas`, for movement data exploration and analysis;
- [PyMove](https://github.com/InsightLab/PyMove): A Python library for processing and visualizing trajectories and other spatial-temporal data.

### Change Log

This is the second, more complete version of the previous package, [Automatise](https://github.com/ttportela/automatise).
 
*Dec. 2022:*
 - [POI-F](https://doi.org/10.1145/3341105.3374045) extension: **POIS** is an extension to the POI-F method capable of concatenating dimensions and sequences for trajectory classification. Available for the methods `poi`, `npoi`, and `wnpoi`.
 - New classification methods: *TULVAE, BITULER, DeepeST, XGBoost, Trajectory Random Forest* ([source](https://github.com/nickssonfreitas/ICAART2021)).
 - Classification scripts refactored to follow command-line best practices, and a random seed parameter was added to all methods for reproducibility.
 - Trajectory Generator Module: offers two generation methods, random or sampling from real data, with control over the number of trajectories, points, attributes, and classes.
 - Updated interface for AutoMATize Results and new graphics generation.
 - New visualization tools: trajectory spatial heatmap, etc. (`graphics.py`).
 - New interface for calling scripts.
 - New structure for reading result files to allow easy extension.
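
The random mode of the Trajectory Generator Module could look roughly like this; a conceptual sketch with a hypothetical `generate_trajectories` helper, not the module's actual API:

```python
import random

def generate_trajectories(n_traj, n_points, n_classes, seed=None):
    """Generate synthetic labeled trajectories of (lat, lon) points;
    illustrative of the random generation mode only (hypothetical
    helper, not the automatize Trajectory Generator API)."""
    rng = random.Random(seed)  # seeded for reproducibility
    data = []
    for tid in range(n_traj):
        label = rng.randrange(n_classes)
        points = [(rng.uniform(-90, 90), rng.uniform(-180, 180))
                  for _ in range(n_points)]
        data.append({"tid": tid, "label": label, "points": points})
    return data
```

Passing the same seed twice yields identical datasets, mirroring the random seed parameter mentioned above.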
 
 *TODO*:
 - Comments on all public interface functions and modules
 - Incorporate new classifiers

            
