prescyent

Name: prescyent
Version: 0.7.2
Summary: Data-driven trajectory forecasting library
Upload time: 2024-12-30 19:10:07
License: MIT
Keywords: trajectory, forecasting, pytorch_lightning
Requirements: h5py, matplotlib, numpy, pydantic, pytorch_lightning, protobuf, scipy, tensorboard, torch, tqdm
<p align="center">
    <img alt="PreScyent" src="https://raw.githubusercontent.com/hucebot/prescyent/main/assets/banner.png" width="50%">
</p>

<h2 style="text-align: center;">
Data-driven trajectory forecasting library built in Python
</h2>

<p align="center">
    <img alt="Trajectory visualization" src="https://raw.githubusercontent.com/hucebot/prescyent/main/assets/render_andy_dataset_positions_with_siMLPe_predictions_at_500ms.gif" width="40%">
</p>

# Get Started

Find the user documentation here: https://hucebot.github.io/prescyent/  

PreScyent is a trajectory forecasting library built upon pytorch_lightning.  
It comes with datasets such as:
- [AndyData-lab-onePerson](https://zenodo.org/records/3254403#.Y_9fwBeZMVk)  
- [AndyData-lab-onePersonTeleoperatingICub](https://zenodo.org/record/5913573)  
- [Human3.6M](http://vision.imar.ro/human3.6m/description.php)  
- And synthetic datasets to test simple properties of our predictors.  

It also comes with baselines to run on these datasets, referred to in the code as Predictors, such as:
- [SiMLPe](https://arxiv.org/abs/2207.01567), a MultiLayer Perceptron (MLP) with Discrete Cosine Transform (DCT), shown to be a strong baseline achieving SOTA results against bigger and more complicated models.  
- [Seq2Seq](https://arxiv.org/abs/1409.3215), an architecture mapping an input sequence to an output sequence, which originated in NLP and grew popular for time-series prediction. Here we implement an RNN encoder and an RNN decoder.  
- Probabilistic Movement Primitives (ProMPs), an approach commonly used in robotics to model movements by learning from demonstrations and generating smooth, adaptable trajectories under uncertainty.  
- Some simple ML baselines, such as a fully connected MultiLayer Perceptron and an auto-regressive predictor with LSTMs.  
- Non-machine-learning baselines that maintain the velocity or positions of the inputs, or simply delay them (a minimal sketch of the constant-velocity variant follows this list).  
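
As a point of reference, here is a minimal sketch of the constant-velocity idea in plain PyTorch; it illustrates the concept, not the library's implementation:
```python
import torch

def constant_velocity_baseline(history: torch.Tensor, future_size: int) -> torch.Tensor:
    """Extrapolate the last observed velocity.

    history: tensor of shape (B, H, P, D); returns (B, future_size, P, D).
    """
    last = history[:, -1]                        # last observed frame, (B, P, D)
    velocity = history[:, -1] - history[:, -2]   # displacement between the two last frames
    steps = torch.arange(1, future_size + 1, dtype=history.dtype)
    # broadcast (B, 1, P, D) + (1, F, 1, 1) * (B, 1, P, D) -> (B, F, P, D)
    return last.unsqueeze(1) + steps.view(1, -1, 1, 1) * velocity.unsqueeze(1)
```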


## Task Definition

- With **H** our history size, the number of observed frames  
- With **F** our future size, the number of frames we want to predict  
- With **X** the points and features used as input  
- With **Y** the points and features produced as output  

We define our predictor **P** such that, at a given time step **T**:  



$P(X_{T-H}, \dots, X_T) = Y_{T+1}, \dots, Y_{T+F}$
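
To make the shapes concrete, here is a small illustration in plain PyTorch, using a trivial stand-in predictor (all names and values here are hypothetical):
```python
import torch

B, H, F, P, D = 4, 10, 5, 17, 3  # batch, history, future, points, dims (illustrative values)

def hold_last_frame(x: torch.Tensor) -> torch.Tensor:
    """A stand-in predictor: any callable mapping (B, H, P, D) -> (B, F, P, D)."""
    return x[:, -1:].repeat(1, F, 1, 1)

observed = torch.randn(B, H, P, D)     # X_{T-H}, ..., X_T
predicted = hold_last_frame(observed)  # Y_{T+1}, ..., Y_{T+F}
assert predicted.shape == (B, F, P, D)
```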

## Installation

### From PyPI
You can install released versions of the project from [PyPI](https://pypi.org/project/prescyent/) using pip (you may want to be in a [virtualenv](https://python-guide-pt-br.readthedocs.io/fr/latest/dev/virtualenvs.html) beforehand):  
```bash
pip install prescyent
```
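To check the install, you can print the library version (defined as `__version__` in `prescyent/__init__.py`):
```bash
python -c "import prescyent; print(prescyent.__version__)"
```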
### From Docker
You can build a Docker image from the Dockerfile at the root of the repository.  
Please refer to the [Docker documentation](https://docs.docker.com) for build commands and options.  
The Dockerfile is designed to be run interactively.  

### From source
If you want to set up a dev environment, you should clone the repository:  

```bash
git clone git@github.com:hucebot/prescyent.git
cd prescyent
```
Then install from source using pip (you may want to be in a [virtualenv](https://python-guide-pt-br.readthedocs.io/fr/latest/dev/virtualenvs.html) beforehand). For a dev install (recommended if you intend to add new classes to the lib), use:  
```bash
pip install -e .
```

## Datasets
Each dataset is composed of a list of Trajectories, split into train, test, and val subsets.  
In this lib, a "Trajectory" is a sequence in time split into F frames, tracking P points over D dimensions.  
It is represented with a batched tensor of shape:  
`(B, F, P, D)`.  
Note that unbatched tensors are also allowed for inference.  
For each Trajectory, we describe its tensor with Features, a list of Feature objects describing which dimensions are tracked for each point at each frame. These allow conversions to occur in the background or as preprocessing, as well as the use of distance-specific losses and metrics (e.g. Euclidean distance for Coordinates and geodesic distance for Rotations).  
Alongside each trajectory tensor, some datasets provide additional "context" (images, center of mass, velocities...), represented inside the library as a dictionary of tensors.  
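
As a minimal sketch in plain torch (no library API involved), a batch of trajectories and its optional context could look like this:
```python
import torch

B, F, P, D = 8, 100, 17, 3            # batch, frames, points, dims (illustrative values)
trajectory = torch.randn(B, F, P, D)  # batched trajectory tensor of shape (B, F, P, D)
single = trajectory[0]                # unbatched (F, P, D) is also accepted for inference
context = {                           # additional context as a dictionary of tensors
    "center_of_mass": torch.randn(B, F, 3),
}
```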

### Downloads
We use the HDF5 file format to load datasets internally. Please get the original data and preprocess it using the scripts in `dataset_preprocessing/` to match the library's format.  
Then, when creating an instance of a Dataset, make sure you pass the path of the newly generated hdf5 file to the dataset config's `hdf5_path` attribute.  
The hdf5 versions of these datasets have also been uploaded here: https://gitlab.inria.fr/hucebot/datasets/
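
As an illustration, loading a preprocessed dataset could look like the sketch below; only the `hdf5_path` config attribute is documented above, so the import path and class names are assumptions to adapt to the actual API:
```python
# hypothetical import path and config class; only `hdf5_path` is documented above
from prescyent.dataset import TeleopIcubDataset, TeleopIcubDatasetConfig

config = TeleopIcubDatasetConfig(hdf5_path="AndyData-lab-prescientTeleopICub.hdf5")
dataset = TeleopIcubDataset(config)
```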


#### TeleopIcubDataset
Download AndyData-lab-prescientTeleopICub's data [here](https://zenodo.org/record/5913573/)
and unzip it; it should follow this structure:  
```bash
├── AndyData-lab-prescientTeleopICub
│   └── datasetMultipleTasks
│       └── AndyData-lab-prescientTeleopICub
│           ├── datasetGoals
│           │   ├── 1.csv
│           │   ├── 2.csv
│           │   ├── 3.csv
│           │   ├── ...
│           ├── datasetObstacles
│           │   ├── 1.csv
│           │   ├── 2.csv
│           │   ├── 3.csv
│           │   ├── ...
│           ├── datasetMultipleTasks
│           │   ├── BottleBox
│           │   │   ├── 1.csv
│           │   │   ├── 2.csv
│           │   │   ├── 3.csv
│           │   │   ├── ...
│           │   ├── BottleTable
│           │   │   ├── 1.csv
│           │   │   ├── 2.csv
│           │   │   ├── 3.csv
│           │   │   ├── ...
│           │   ├── ...

```
Then run the script `dataset_preprocessing/teleopicubdataset_to_hdf5.py` to generate the dataset in the lib's format, with the `--data_path` argument providing the path to the downloaded dataset and the optional `--hdf5_path` argument giving the path and name of the generated hdf5 file.  
For example:
```bash
python dataset_preprocessing/teleopicubdataset_to_hdf5.py --data_path AndyData-lab-prescientTeleopICub/ --hdf5_path AndyData-lab-prescientTeleopICub.hdf5
```


#### H36MDataset
For [Human3.6M](http://vision.imar.ro/human3.6m/description.php) you need to download the zip [here](http://www.cs.stanford.edu/people/ashesh/h3.6m.zip) and prepare your data following this directory structure:  
```bash
└── h36m
|   ├── S1
|   ├── S5
|   ├── S6
|   ├── ...
|   ├── S11
```
Then run the script `dataset_preprocessing/h36mdataset_to_hdf5.py` to generate the dataset in the lib's format, with the `--data_path` argument providing the path to the downloaded dataset and the optional `--hdf5_path` argument giving the path and name of the generated hdf5 file.  
For example:
```bash
python dataset_preprocessing/h36mdataset_to_hdf5.py --data_path h36m/ --hdf5_path h36m.hdf5
```


#### AndyDataset
For [AndyDataset](https://andydataset.loria.fr/) you need to download the zip [here](https://zenodo.org/records/3254403/files/xens_mnvx.zip?download=1) and prepare your data following this directory structure:  
```bash
└── AndyData-lab-onePerson
|   └── xsens_mnvx
|       ├── Participant_541
|       ├── Participant_909
|       ├── Participant_2193
|       ├── ...
|       ├── Participant_9875
```
Then run the script `dataset_preprocessing/andydataset_to_hdf5.py` to generate the dataset in the lib's format, with the `--data_path` argument providing the path to the downloaded dataset and the optional `--hdf5_path` argument giving the path and name of the generated hdf5 file.  
For example:
```bash
python dataset_preprocessing/andydataset_to_hdf5.py --data_path AndyData-lab-onePerson/ --hdf5_path AndyData-lab-onePerson.hdf5
```


## Predictors
The trajectory prediction methods are organized as Predictor classes.  
For example, the MlpPredictor class implements a configurable MLP as a baseline for the task of trajectory prediction.  
Relying on the PyTorch Lightning framework, it instantiates or loads a torch Module, wrapped in a generic predictor interface for saving, loading, iterating over a trajectory, and logging.  
Feel free to add new predictor implementations following the example of this simple class, inheriting at least from the BasePredictor class.  

## Evaluator
We also provide a set of functions to run evaluations and plot trajectories.  
Runners take a list of predictors and a list of trajectories, and provide an evaluation summary on the following metrics (sketched in plain torch after this list):  
- Average Displacement Error (ADE): mean distance over all points (feature-wise) over the whole prediction.  
- Final Displacement Error (FDE): mean distance over all points (feature-wise) on the last predicted frame.  
- Mean Per Joint Position Error (MPJPE): mean distance over all points (feature-wise) at each predicted frame.  
- Real Time Factor (RTF): processing time (in seconds) / trajectory duration (in seconds).  
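
As a rough sketch of how the displacement metrics relate, assuming plain Euclidean coordinates and ignoring the library's feature-specific distances:
```python
import torch

def displacement_metrics(pred: torch.Tensor, truth: torch.Tensor) -> dict:
    """pred, truth: (B, F, P, D) coordinate tensors; Euclidean distance only."""
    dist = torch.linalg.vector_norm(pred - truth, dim=-1)  # per-point distance, (B, F, P)
    return {
        "ADE": dist.mean().item(),         # mean over batch, frames, and points
        "FDE": dist[:, -1].mean().item(),  # mean over the last predicted frame only
        "MPJPE": dist.mean(dim=(0, 2)),    # per-frame means, tensor of shape (F,)
    }
```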

## Usage

### Examples and tutorials
Please look into the `examples/` directory to find common usages of the library.  
We use TensorBoard for training logs; run `tensorboard --logdir {log_path}` to view the training and testing info (the default log_path is `data/models/`).  

For example, to run the MLP training script on the TeleopIcub dataset, use this in the environment where prescyent is installed:
```bash
python examples/mlp_icub_train.py
```  
If you want to start a training from a config file (such as `examples/configs/mlp_teleopicub_with_center_of_mass_context.json`), use the following:  
```bash
python examples/train_from_config.py examples/configs/mlp_teleopicub_with_center_of_mass_context.json
```  

The script `start_multiple_trainings.py` is an example of how to generate variations of configuration files and run them using methods from `train_from_config.py`.  

For evaluation purposes, you can see an example running tests and plots using AutoPredictor and AutoDataset in `load_and_plot_predictors.py`, or animations such as in `plot_pred_over_time.py`.  

<p align="center">
    <img alt="Trajectory prediction animation" src="https://raw.githubusercontent.com/hucebot/prescyent/main/assets/test_datasetMultipleTasks_BottleTable_12_animation.gif" width="80%">
</p>

### Extend the lib with a custom dataset or predictor
Predictors inherit from the BasePredictor class, which defines interfaces and core methods to keep consistency between implementations.  
Each Predictor defines its PredictorConfig, with arguments that are passed on to the core class, again with a BaseConfig holding common attributes that need to be defined.  
If you want to implement a new ML predictor using PyTorch, follow the structure of one of our simple predictors such as MlpPredictor and its three files (a minimal module sketch follows this list):  
- in `module.py`, create your torch.nn.Module and forward method as you usually would; you may want to inherit from BaseTorchModule instead of just torch.nn.Module and decorate your forward method with `@self_auto_batch` and `@BaseTorchModule.deriv_tensor` to benefit from some of the lib's features.  
- in `config.py`, create your [pydantic BaseModel](https://docs.pydantic.dev/latest/) inheriting from `ModuleConfig` to ensure your predictor's config has all the needed variables, and add any new values you want as variables of your model's architecture.  
- finally, `predictor.py` simply connects the two above by declaring both classes as class attributes of this specific predictor. Most of the magic happens in the parent classes, using pytorch_lightning with your torch module.  
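
Below is a minimal, self-contained sketch of such a `module.py`; it inherits plain torch.nn.Module so it runs on its own, whereas in the library you would inherit BaseTorchModule and apply the decorators mentioned above:
```python
import torch
from torch import nn

class TinyMlpModule(nn.Module):
    """Hypothetical module mapping (B, H, P, D) -> (B, F, P, D)."""

    def __init__(self, history_size: int, future_size: int, num_points: int, num_dims: int):
        super().__init__()
        self.out_shape = (future_size, num_points, num_dims)
        in_features = history_size * num_points * num_dims
        out_features = future_size * num_points * num_dims
        self.mlp = nn.Sequential(
            nn.Linear(in_features, 256), nn.ReLU(), nn.Linear(256, out_features)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch_size = x.shape[0]
        y = self.mlp(x.flatten(start_dim=1))  # flatten frames, points, and dims
        return y.view(batch_size, *self.out_shape)
```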

In the same way, you can extend the dataset module with a new Dataset inheriting from TrajectoriesDataset, with its own DatasetConfig. Taking one of our implementations such as TeleopIcubDataset as an example, you must:  
- in `dataset.py`, inherit from the TrajectoriesDataset class and implement a `prepare_data` method where you must init `self.trajectories` with a `Trajectories` instance built from your data/files.  
- in `config.py`, create your [pydantic BaseModel](https://docs.pydantic.dev/latest/) inheriting from `TrajectoriesDatasetConfig` to ensure you have all the variables for the dataset processes, and add any new values you want as variables of your dataset (a minimal config sketch closes this section).  
- optionally, use `metadata.py` as we did to store some constants describing your dataset.  
All the logic creating the data samples and dataloaders is handled in the parent class, as long as self.trajectories is defined and the config is valid.  
If you simply want to test a Predictor on some data, you can create an instance of CustomDataset. As long as you have turned your lists of episodes into Trajectories, the CustomDataset lets you split them into training samples using a generic DatasetConfig and use all the functionalities of the library as usual (except that a CustomDataset cannot be loaded using AutoDataset).  
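
As promised above, a minimal config sketch; it inherits plain pydantic.BaseModel so it stays self-contained, whereas in the library you would inherit TrajectoriesDatasetConfig (or ModuleConfig for a predictor), and every field name besides `hdf5_path` is an assumption:
```python
from pydantic import BaseModel

class MyDatasetConfig(BaseModel):
    """Hypothetical config; the real one inherits TrajectoriesDatasetConfig."""
    hdf5_path: str           # path to the preprocessed HDF5 file (documented attribute)
    history_size: int = 10   # H, number of observed frames (assumed field name)
    future_size: int = 10    # F, number of frames to predict (assumed field name)

config = MyDatasetConfig(hdf5_path="my_dataset.hdf5")
print(config.model_dump())   # pydantic v2 serialization (requirements pin pydantic >= 2)
```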

## Ros2
See [dedicated repository](https://github.com/hucebot/ros2_prescyent/tree/dev)

## Run tests
After installing, you can run the test suite to make sure the installation is OK:  

```bash
python -m unittest -v
```

# Contribute

This repo uses GitHub Actions to run the test suite and linting on pushes and merges to the `dev` and `main` branches as continuous integration.  
If the integration workflow succeeds and we are on the `main` branch, the library is built and published to PyPI.  
Please make sure you have updated the library's version in `prescyent/__init__.py` (`__version__`), otherwise the deployment will fail.  

# References
siMLPe  
> Wen Guo, Yuming Du, Xi Shen, Vincent Lepetit, Xavier Alameda-Pineda, et al. Back to MLP: A Simple Baseline for Human Motion Prediction. WACV 2023 - IEEE Winter Conference on Applications of Computer Vision, Jan 2023, Waikoloa, United States. pp. 1-11. ⟨hal-03906936⟩  

AndyDataset  
> Maurice P., Malaisé A., Amiot C., Paris N., Richard G.J., Rochel O., Ivaldi S. « Human Movement and Ergonomics: an Industry-Oriented Dataset for Collaborative Robotics ». The International Journal of Robotics Research, Volume 38, Issue 14, Pages 1529-1537.  

TeleopIcub Dataset  
> Penco, L., Mouret, J., & Ivaldi, S. (2021, July 2). Prescient teleoperation of humanoid robots. arXiv.org. https://arxiv.org/abs/2107.01281  

H36M Dataset  
> Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. (n.d.). IEEE Journals & Magazine | IEEE Xplore. https://ieeexplore.ieee.org/document/6682899  

On the Continuity of Rotation Representations in Neural Networks  
> Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. (2018, December 17). On the Continuity of Rotation Representations in Neural Networks. arXiv.org. https://arxiv.org/abs/1812.07035  

ProMPs  
> Paraschos, Alexandros & Daniel, Christian & Peters, Jan & Neumann, Gerhard. (2018). Using probabilistic movement primitives in robotics. Autonomous Robots. 42. 10.1007/s10514-017-9648-7.

            
