| Field | Value |
| --- | --- |
| Name | itwinai |
| Version | 0.2.2 |
| Summary | AI and ML workflows module for scientific digital twins. |
| Upload time | 2024-09-19 15:53:08 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| Author | None |
| Requires Python | >=3.10 |
| Keywords | ml, ai, hpc |
| License | MIT License Copyright (c) 2023 interTwin Community Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# itwinai
[![GitHub Super-Linter](https://github.com/interTwin-eu/T6.5-AI-and-ML/actions/workflows/lint.yml/badge.svg)](https://github.com/marketplace/actions/super-linter)
[![Markdown Link Check](https://github.com/interTwin-eu/T6.5-AI-and-ML/actions/workflows/check-links.yml/badge.svg)](https://github.com/marketplace/actions/markdown-link-check)
[![SQAaaS source code](https://github.com/EOSC-synergy/itwinai.assess.sqaaas/raw/main/.badge/status_shields.svg)](https://sqaaas.eosc-synergy.eu/#/full-assessment/report/https://raw.githubusercontent.com/eosc-synergy/itwinai.assess.sqaaas/main/.report/assessment_output.json)
![itwinai Logo](./docs/images/icon-itwinai-orange-black-subtitle.png)
See the latest version of our [docs](https://itwinai.readthedocs.io/)
for a quick overview of this platform for advanced AI/ML workflows in digital twin applications.
If you are a **developer**, please refer to the [developers installation guide](#installation-for-developers).
## User installation
Requirements:
- Linux or macOS environment. Windows has not been tested.
### Python virtual environment
Depending on your environment, there are different ways to
select a specific Python version.
#### Laptop or GPU node
If you are working on a laptop
or on a simple on-prem setup, consider using
[pyenv](https://github.com/pyenv/pyenv). See the
[installation instructions](https://github.com/pyenv/pyenv?tab=readme-ov-file#installation). If you are using pyenv,
make sure to read the [suggested build environment](https://github.com/pyenv/pyenv/wiki#suggested-build-environment) notes.
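For example, a minimal pyenv workflow could look like the sketch below (the exact Python version is only illustrative; itwinai requires `>=3.10`):
```bash
# Install a Python interpreter compatible with itwinai (illustrative version)
pyenv install 3.10.13
# Select it for the current project directory
pyenv local 3.10.13
# Verify which interpreter is now active
python --version
```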
#### HPC environment
On HPC systems, dependencies are commonly loaded with
Environment Modules or Lmod. If you don't know which modules to load,
contact your system administrator
to learn how to select the proper ones.
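If your system uses Lmod, commands along the following lines can help you explore what is available (module names and versions differ between systems):
```bash
# Show currently loaded modules
ml list
# Search for available Python modules
ml avail Python
# Show how a module can be loaded, including required dependencies
ml spider Python
```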
##### PyTorch environment
Commands to execute every time **before** installing or activating the Python virtual
environment for PyTorch:
- Juelich Supercomputer (JSC):
```bash
ml --force purge
ml Stages/2024 GCC OpenMPI CUDA/12 cuDNN MPI-settings/CUDA
ml Python CMake HDF5 PnetCDF libaio mpi4py
```
- Vega supercomputer:
```bash
ml --force purge
ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7
ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN
```
##### TensorFlow environment
Commands to execute every time **before** installing or activating the Python virtual
environment for TensorFlow:
- Juelich Supercomputer (JSC):
```bash
ml --force purge
ml Stages/2024 GCC/12.3.0 OpenMPI CUDA/12 MPI-settings/CUDA
ml Python/3.11 HDF5 PnetCDF libaio mpi4py CMake cuDNN/8.9.5.29-CUDA-12
```
- Vega supercomputer:
```bash
ml --force purge
ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7
ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN
```
### Install itwinai
Install itwinai and its dependencies using the
following commands, and follow the instructions:
```bash
# First, load the required environment modules, if on an HPC
# Second, create a python virtual environment and activate it
$ python -m venv ENV_NAME
$ source ENV_NAME/bin/activate
# Install itwinai inside the environment
(ENV_NAME) $ export ML_FRAMEWORK="pytorch" # or "tensorflow"
(ENV_NAME) $ curl -fsSL https://github.com/interTwin-eu/itwinai/raw/main/env-files/itwinai-installer.sh | bash
```
The `ML_FRAMEWORK` environment variable controls whether you are installing
itwinai for PyTorch or TensorFlow.
> [!WARNING]
> itwinai depends on Horovod, which requires `CMake>=3.13` and
> [other packages](https://horovod.readthedocs.io/en/latest/install_include.html#requirements).
> Make sure to have them installed in your environment before proceeding.
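As a quick sanity check, you can verify the CMake prerequisite before running the installer, and confirm the installation afterwards (a minimal sketch):
```bash
# Horovod needs a sufficiently recent CMake
cmake --version
# After installation, itwinai should be importable from the active environment
python -c "import itwinai"
pip show itwinai  # prints version and metadata of the installed package
```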
## Installation for developers
If you are contributing to this repository, please continue below for
more advanced instructions.
> [!WARNING]
> Branch protection rules are applied to all branches whose names
> match this regex: `[dm][ea][vi]*`. When creating new branches,
> please avoid names that match that regex; otherwise branch
> protection rules will block direct pushes to that branch.
### Clone the itwinai repository
```bash
git clone [--recurse-submodules] git@github.com:interTwin-eu/itwinai.git
```
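The `--recurse-submodules` flag is optional. If you cloned without it and need the submodules later, you can fetch them afterwards:
```bash
cd itwinai
git submodule update --init --recursive
```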
### Install itwinai environment
You can create the
Python virtual environments using our predefined Makefile targets.
#### PyTorch (+ Lightning) virtual environment
Makefile targets for environment installation:
- Juelich Supercomputer (JSC): `torch-gpu-jsc`
- Vega supercomputer: `torch-env-vega`
- In any other case, when CUDA is available: `torch-env`
- In any other case, when CUDA is **not** available (CPU-only installation): `torch-env-cpu`
For instance, on a laptop with a CUDA-compatible GPU you can use:
```bash
make torch-env
```
When not on an HPC system, you can activate the Python environment directly with:
```bash
source .venv-pytorch/bin/activate
```
Otherwise, if you are on an HPC system, please refer to
[this section](#activate-itwinai-environment-on-hpc)
explaining how to load the required environment modules before activating the Python environment.
To build a Docker image for the PyTorch version (adapt `TAG` as needed):
```bash
# Local
docker buildx build -t itwinai:TAG -f env-files/torch/Dockerfile .
# Ghcr.io
docker buildx build -t ghcr.io/intertwin-eu/itwinai:TAG -f env-files/torch/Dockerfile .
docker push ghcr.io/intertwin-eu/itwinai:TAG
```
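As a quick smoke test of the image you just built, you can start an interactive container and check that itwinai is importable (sketch; use the same `TAG` as above):
```bash
docker run -it --rm itwinai:TAG bash
# Inside the container:
# python -c "import itwinai"
```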
#### TensorFlow virtual environment
Makefile targets for environment installation:
- Juelich Supercomputer (JSC): `tf-gpu-jsc`
- Vega supercomputer: `tf-env-vega`
- In any other case, when CUDA is available: `tensorflow-env`
- In any other case, when CUDA is **not** available (CPU-only installation): `tensorflow-env-cpu`
For instance, on a laptop with a CUDA-compatible GPU you can use:
```bash
make tensorflow-env
```
When not on an HPC system, you can activate the Python environment directly with:
```bash
source .venv-tf/bin/activate
```
Otherwise, if you are on an HPC system, please refer to
[this section](#activate-itwinai-environment-on-hpc)
explaining how to load the required environment modules before activating the Python environment.
To build a Docker image for the TensorFlow version (adapt `TAG` as needed):
```bash
# Local
docker buildx build -t itwinai:TAG -f env-files/tensorflow/Dockerfile .
# Ghcr.io
docker buildx build -t ghcr.io/intertwin-eu/itwinai:TAG -f env-files/tensorflow/Dockerfile .
docker push ghcr.io/intertwin-eu/itwinai:TAG
```
### Activate itwinai environment on HPC
Usually, HPC systems organize their software in modules, which users need to load
every time they open a new shell, **before** activating a Python virtual environment.
Below are some examples of how to load the correct environment modules on the HPC
systems we are currently working with.
#### Load modules before PyTorch virtual environment
Commands to be executed before activating the Python virtual environment:
- Juelich Supercomputer (JSC):
```bash
ml --force purge
ml Stages/2024 GCC OpenMPI CUDA/12 cuDNN MPI-settings/CUDA
ml Python CMake HDF5 PnetCDF libaio mpi4py
```
- Vega supercomputer:
```bash
ml --force purge
ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7
ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN
```
- When not on an HPC: do nothing.
For instance, on JSC you can activate the PyTorch virtual environment in this way:
```bash
# Load environment modules
ml --force purge
ml Stages/2024 GCC OpenMPI CUDA/12 cuDNN MPI-settings/CUDA
ml Python CMake HDF5 PnetCDF libaio mpi4py
# Activate virtual env
source envAI_hdfml/bin/activate
```
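Once the environment is active, a quick way to check that PyTorch can see the GPUs is the one-liner below (on a login node without GPUs this may legitimately report none):
```bash
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```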
#### Load modules before TensorFlow virtual environment
Commands to be executed before activating the Python virtual environment:
- Juelich Supercomputer (JSC):
```bash
ml --force purge
ml Stages/2024 GCC/12.3.0 OpenMPI CUDA/12 MPI-settings/CUDA
ml Python/3.11 HDF5 PnetCDF libaio mpi4py CMake cuDNN/8.9.5.29-CUDA-12
```
- Vega supercomputer:
```bash
ml --force purge
ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7
ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN
```
- When not on an HPC: do nothing.
For instance, on JSC you can activate the TensorFlow virtual environment in this way:
```bash
# Load environment modules
ml --force purge
ml Stages/2024 GCC/12.3.0 OpenMPI CUDA/12 MPI-settings/CUDA
ml Python/3.11 HDF5 PnetCDF libaio mpi4py CMake cuDNN/8.9.5.29-CUDA-12
# Activate virtual env
source envAItf_hdfml/bin/activate
```
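Similarly, once the TensorFlow environment is active, you can check GPU visibility with:
```bash
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```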
### Test with `pytest`
Do this only if you are a developer who wants to test your code with pytest.
First, create virtual environments for both PyTorch and TensorFlow,
following the instructions above for the system that you are using
(e.g., JSC).
To select the names of the PyTorch and TensorFlow environments in which the tests will be
executed, you can set the following environment variables.
If these variables are not set, the test suite assumes that the
PyTorch environment is under
`.venv-pytorch` and the TensorFlow environment is under `.venv-tf`.
```bash
export TORCH_ENV="my_torch_env"
export TF_ENV="my_tf_env"
```
Functional tests (marked with `pytest.mark.functional`) are executed under
`/tmp/pytest` to guarantee isolation among tests.
To run the functional tests, use:
```bash
pytest -v tests/ -m "functional"
```
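Since functional tests are selected via the `functional` marker, the same mechanism can be used to exclude them, for example to run only the remaining tests:
```bash
pytest -v tests/ -m "not functional"
```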
> [!NOTE]
> Depending on the system that you are using, there is a tailored Makefile
> target to run the test suite on it. Read these instructions to the end!
We provide Makefile targets to run the whole test suite, including unit, integration,
and functional tests. Choose the right target depending on the system that you are using:
Makefile targets:
- Juelich Supercomputer (JSC): `test-jsc`
- In any other case: `test`
For instance, to run the test suite on your laptop, use:
```bash
make test
```
<!--
### Micromamba installation (deprecated)
To manage Conda environments we use Micromamba, a lightweight version of Conda.
It is suggested to refer to the
[Manual installation guide](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html#manual-installation).
Note that Micromamba can use a lot of disk space when building environments, because downloaded
packages are cached on the local filesystem. To clear the cache you can use `micromamba clean -a`.
Micromamba data are kept under `$HOME`. However, on some systems `$HOME` has limited storage
space, and it may be better to install Micromamba in another location with more space
by changing the `$MAMBA_ROOT_PREFIX` variable. See a complete installation example for Linux below, where the
default `$MAMBA_ROOT_PREFIX` is overridden:
```bash
cd $HOME
# Download micromamba (This command is for Linux Intel (x86_64) systems. Find the right one for your system!)
curl -Ls https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj bin/micromamba
# Install micromamba in a custom directory
MAMBA_ROOT_PREFIX='my-mamba-root'
./bin/micromamba shell init $MAMBA_ROOT_PREFIX
# To invoke micromamba from a Makefile, you need to add it explicitly to $PATH
echo 'PATH="$(dirname $MAMBA_EXE):$PATH"' >> ~/.bashrc
```
**Reference**: [Micromamba installation guide](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html).
-->
## Raw data
{
"_id": null,
"home_page": null,
"name": "itwinai",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": "Matteo Bunino <matteo.bunino@cern.ch>, Rakesh Sarma <r.sarma@fz-juelich.de>, Mario Ruettgers <m.ruettgers@fz-juelich.de>, Kalliopi Tsolaki <kalliopi.tsolaki@cern.ch>",
"keywords": "ml, ai, hpc",
"author": null,
"author_email": "Matteo Bunino <matteo.bunino@cern.ch>, Rakesh Sarma <r.sarma@fz-juelich.de>",
"download_url": "https://files.pythonhosted.org/packages/f8/b3/45abc008eb113862c3be205eb45b7ea5d425f03dddcf2c0a1341e3f8f543/itwinai-0.2.2.tar.gz",
"platform": null,
"description": "# itwinai\n\n[![GitHub Super-Linter](https://github.com/interTwin-eu/T6.5-AI-and-ML/actions/workflows/lint.yml/badge.svg)](https://github.com/marketplace/actions/super-linter)\n[![GitHub Super-Linter](https://github.com/interTwin-eu/T6.5-AI-and-ML/actions/workflows/check-links.yml/badge.svg)](https://github.com/marketplace/actions/markdown-link-check)\n [![SQAaaS source code](https://github.com/EOSC-synergy/itwinai.assess.sqaaas/raw/main/.badge/status_shields.svg)](https://sqaaas.eosc-synergy.eu/#/full-assessment/report/https://raw.githubusercontent.com/eosc-synergy/itwinai.assess.sqaaas/main/.report/assessment_output.json)\n\n ![itwinai Logo](./docs/images/icon-itwinai-orange-black-subtitle.png)\n\nSee the latest version of our [docs](https://itwinai.readthedocs.io/)\nfor a quick overview of this platform for advanced AI/ML workflows in digital twin applications.\n\nIf you are a **developer**, please refer to the [developers installation guide](#installation-for-developers).\n\n## User installation\n\nRequirements:\n\n- Linux or macOS environment. Windows was never tested.\n\n### Python virtual environment\n\nDepending on your environment, there are different ways to\nselect a specific python version.\n\n#### Laptop or GPU node\n\nIf you are working on a laptop\nor on a simple on-prem setup, you could consider using\n[pyenv](https://github.com/pyenv/pyenv). See the\n[installation instructions](https://github.com/pyenv/pyenv?tab=readme-ov-file#installation). If you are using pyenv,\nmake sure to read [this](https://github.com/pyenv/pyenv/wiki#suggested-build-environment).\n\n#### HPC environment\n\nIn HPC systems it is more popular to load dependencies using\nEnvironment Modules or Lmod. If you don't know what modules to load,\ncontact the system administrator\nto learn how to select the proper modules.\n\n##### PyTorch environment\n\nCommands to execute every time **before** installing or activating the python virtual\nenvironment for PyTorch:\n\n- Juelich Supercomputer (JSC):\n\n ```bash\n ml --force purge\n ml Stages/2024 GCC OpenMPI CUDA/12 cuDNN MPI-settings/CUDA\n ml Python CMake HDF5 PnetCDF libaio mpi4py\n ```\n\n- Vega supercomputer:\n\n ```bash\n ml --force purge\n ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7\n ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN\n ```\n\n##### TensorFlow environment\n\nCommands to execute every time **before** installing or activating the python virtual\nenvironment for TensorFlow:\n\n- Juelich Supercomputer (JSC):\n\n ```bash\n ml --force purge\n ml Stages/2024 GCC/12.3.0 OpenMPI CUDA/12 MPI-settings/CUDA\n ml Python/3.11 HDF5 PnetCDF libaio mpi4py CMake cuDNN/8.9.5.29-CUDA-12\n ```\n\n- Vega supercomputer:\n\n ```bash\n ml --force purge\n ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7\n ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN\n ```\n\n### Install itwinai\n\nInstall itwinai and its dependencies using the\nfollowing command, and follow the instructions:\n\n```bash\n# First, load the required environment modules, if on an HPC\n\n# Second, create a python virtual environment and activate it\n$ python -m venv ENV_NAME\n$ source ENV_NAME/bin/activate\n\n# Install itwinai inside the environment\n(ENV_NAME) $ export ML_FRAMEWORK=\"pytorch\" # or \"tensorflow\"\n(ENV_NAME) $ curl -fsSL https://github.com/interTwin-eu/itwinai/raw/main/env-files/itwinai-installer.sh | bash\n```\n\nThe `ML_FRAMEWORK` environment variable controls whether you are installing\nitwinai for 
PyTorch or TensorFlow.\n\n> [!WARNING] \n> itwinai depends on Horovod, which requires `CMake>=1.13` and\n> [other packages](https://horovod.readthedocs.io/en/latest/install_include.html#requirements).\n> Make sure to have them installed in your environment before proceeding.\n\n## Installation for developers\n\nIf you are contributing to this repository, please continue below for\nmore advanced instructions.\n\n> [!WARNING]\n> Branch protection rules are applied to all branches which names\n> match this regex: `[dm][ea][vi]*` . When creating new branches,\n> please avoid using names that match that regex, otherwise branch\n> protection rules will block direct pushes to that branch.\n\n### Clone the itwinai repository\n\n```bash\ngit clone [--recurse-submodules] git@github.com:interTwin-eu/itwinai.git\n```\n\n### Install itwinai environment\n\nYou can create the\nPython virtual environments using our predefined Makefile targets.\n\n#### PyTorch (+ Lightning) virtual environment\n\nMakefile targets for environment installation:\n\n- Juelich Supercomputer (JSC): `torch-gpu-jsc`\n- Vega supercomputer: `torch-env-vega`\n- In any other cases, when CUDA is available: `torch-env`\n- In any other cases, when CUDA **NOT** is available (CPU-only installation): `torch-env-cpu`\n\nFor instance, on a laptop with a CUDA-compatible GPU you can use:\n\n```bash\nmake torch-env \n```\n\nWhen not on an HPC system, you can activate the python environment directly with:\n\n```bash\nsource .venv-pytorch/bin/activate\n```\n\nOtherwise, if you are on an HPC system, please refer to\n[this section](#activate-itwinai-environment-on-hpc)\nexplaining how to load the required environment modules before the python environment.\n\nTo build a Docker image for the pytorch version (need to adapt `TAG`):\n\n```bash\n# Local\ndocker buildx build -t itwinai:TAG -f env-files/torch/Dockerfile .\n\n# Ghcr.io\ndocker buildx build -t ghcr.io/intertwin-eu/itwinai:TAG -f env-files/torch/Dockerfile .\ndocker push ghcr.io/intertwin-eu/itwinai:TAG\n```\n\n#### TensorFlow virtual environment\n\nMakefile targets for environment installation:\n\n- Juelich Supercomputer (JSC): `tf-gpu-jsc`\n- Vega supercomputer: `tf-env-vega`\n- In any other case, when CUDA is available: `tensorflow-env`\n- In any other case, when CUDA **NOT** is available (CPU-only installation): `tensorflow-env-cpu`\n\nFor instance, on a laptop with a CUDA-compatible GPU you can use:\n\n```bash\nmake tensorflow-env\n```\n\nWhen not on an HPC system, you can activate the python environment directly with:\n\n```bash\nsource .venv-tf/bin/activate\n```\n\nOtherwise, if you are on an HPC system, please refer to\n[this section](#activate-itwinai-environment-on-hpc)\nexplaining how to load the required environment modules before the python environment.\n\nTo build a Docker image for the tensorflow version (need to adapt `TAG`):\n\n```bash\n# Local\ndocker buildx build -t itwinai:TAG -f env-files/tensorflow/Dockerfile .\n\n# Ghcr.io\ndocker buildx build -t ghcr.io/intertwin-eu/itwinai:TAG -f env-files/tensorflow/Dockerfile .\ndocker push ghcr.io/intertwin-eu/itwinai:TAG\n```\n\n### Activate itwinai environment on HPC\n\nUsually, HPC systems organize their software in modules which need to be imported by the users\nevery time they open a new shell, **before** activating a Python virtual environment.\n\nBelow you can find some examples on how to load the correct environment modules on the HPC\nsystems we are currently working with.\n\n#### Load modules before PyTorch virtual 
environment\n\nCommands to be executed before activating the python virtual environment:\n\n- Juelich Supercomputer (JSC):\n\n ```bash\n ml --force purge\n ml Stages/2024 GCC OpenMPI CUDA/12 cuDNN MPI-settings/CUDA\n ml Python CMake HDF5 PnetCDF libaio mpi4py\n ```\n\n- Vega supercomputer:\n\n ```bash\n ml --force purge\n ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7\n ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN\n ```\n\n- When not on an HPC: do nothing.\n\nFor instance, on JSC you can activate the PyTorch virtual environment in this way:\n\n```bash\n# Load environment modules\nml --force purge\nml Stages/2024 GCC OpenMPI CUDA/12 cuDNN MPI-settings/CUDA\nml Python CMake HDF5 PnetCDF libaio mpi4py\n\n# Activate virtual env\nsource envAI_hdfml/bin/activate\n```\n\n#### Load modules before TensorFlow virtual environment\n\nCommands to be executed before activating the python virtual environment:\n\n- Juelich Supercomputer (JSC):\n\n ```bash\n ml --force purge\n ml Stages/2024 GCC/12.3.0 OpenMPI CUDA/12 MPI-settings/CUDA\n ml Python/3.11 HDF5 PnetCDF libaio mpi4py CMake cuDNN/8.9.5.29-CUDA-12\n ```\n\n- Vega supercomputer:\n\n ```bash\n ml --force purge\n ml Python CMake/3.24.3-GCCcore-11.3.0 mpi4py OpenMPI CUDA/11.7\n ml GCCcore/11.3.0 NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0 cuDNN\n ```\n\n- When not on an HPC: do nothing.\n\nFor instance, on JSC you can activate the TensorFlow virtual environment in this way:\n\n```bash\n# Load environment modules\nml --force purge\nml Stages/2024 GCC/12.3.0 OpenMPI CUDA/12 MPI-settings/CUDA\nml Python/3.11 HDF5 PnetCDF libaio mpi4py CMake cuDNN/8.9.5.29-CUDA-12\n\n# Activate virtual env\nsource envAItf_hdfml/bin/activate\n```\n\n### Test with `pytest`\n\nDo this only if you are a developer wanting to test your code with pytest.\n\nFirst, you need to create virtual environments both for torch and tensorflow,\nfollowing the instructions above, depending on the system that you are using\n(e.g., JSC).\n\nTo select the name of the torch and tf environments in which the tests will be\nexecuted you can set the following environment variables.\nIf these env variables are not set, the testing suite will assume that the\nPyTorch environment is under\n`.venv-pytorch` and the TensorFlow environment is under `.venv-tf`.\n\n```bash\nexport TORCH_ENV=\"my_torch_env\"\nexport TF_ENV=\"my_tf_env\"\n```\n\nFunctional tests (marked with `pytest.mark.functional`) will be executed under\n`/tmp/pytest` location to guarantee isolation among tests.\n\nTo run functional tests use:\n\n```bash\npytest -v tests/ -m \"functional\"\n```\n\n> [!NOTE]\n> Depending on the system that you are using, we implemented a tailored Makefile\n> target to run the test suite on it. Read these instructions until the end!\n\nWe provide some Makefile targets to run the whole test suite including unit, integration,\nand functional tests. 
Choose the right target depending on the system that you are using:\n\nMakefile targets:\n\n- Juelich Supercomputer (JSC): `test-jsc`\n- In any other case: `test`\n\nFor instance, to run the test suite on your laptop user:\n\n```bash\nmake test\n```\n\n<!--\n### Micromamba installation (deprecated)\n\nTo manage Conda environments we use micromamba, a light weight version of conda.\n\nIt is suggested to refer to the\n[Manual installation guide](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html#manual-installation).\n\nConsider that Micromamba can eat a lot of space when building environments because packages are cached on\nthe local filesystem after being downloaded. To clear cache you can use `micromamba clean -a`.\nMicromamba data are kept under the `$HOME` location. However, in some systems, `$HOME` has a limited storage\nspace and it would be cleverer to install Micromamba in another location with more storage space.\nThus by changing the `$MAMBA_ROOT_PREFIX` variable. See a complete installation example for Linux below, where the\ndefault `$MAMBA_ROOT_PREFIX` is overridden:\n\n```bash\ncd $HOME\n\n# Download micromamba (This command is for Linux Intel (x86_64) systems. Find the right one for your system!)\ncurl -Ls https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj bin/micromamba\n\n# Install micromamba in a custom directory\nMAMBA_ROOT_PREFIX='my-mamba-root'\n./bin/micromamba shell init $MAMBA_ROOT_PREFIX\n\n# To invoke micromamba from Makefile, you need to add explicitly to $PATH\necho 'PATH=\"$(dirname $MAMBA_EXE):$PATH\"' >> ~/.bashrc\n```\n\n**Reference**: [Micromamba installation guide](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html).\n\n-->\n",
"bugtrack_url": null,
"license": "MIT License Copyright (c) 2023 interTwin Community Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "AI and ML workflows module for scientific digital twins.",
"version": "0.2.2",
"project_urls": {
"Documentation": "https://itwinai.readthedocs.io/",
"Homepage": "https://www.intertwin.eu/",
"Repository": "https://github.com/interTwin-eu/itwinai"
},
"split_keywords": [
"ml",
" ai",
" hpc"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "8910916fd79130ceea0010567bc650a6eb06e53dc7d65f7ea5a5e8a9f1a0400c",
"md5": "c060891b1453c21374d037b2ff59c05d",
"sha256": "553a5cc0beed3ce486fca6d2ba6a8613a3ac61bc4055672d406098ad032997bd"
},
"downloads": -1,
"filename": "itwinai-0.2.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "c060891b1453c21374d037b2ff59c05d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 61492,
"upload_time": "2024-09-19T15:53:06",
"upload_time_iso_8601": "2024-09-19T15:53:06.669786Z",
"url": "https://files.pythonhosted.org/packages/89/10/916fd79130ceea0010567bc650a6eb06e53dc7d65f7ea5a5e8a9f1a0400c/itwinai-0.2.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "f8b345abc008eb113862c3be205eb45b7ea5d425f03dddcf2c0a1341e3f8f543",
"md5": "69612710dd5edf53bc5fe5e45e962ec2",
"sha256": "6f5ed04adaefc28ec3567b05c2a14b10da111411c2722e2157f8ad72971abc69"
},
"downloads": -1,
"filename": "itwinai-0.2.2.tar.gz",
"has_sig": false,
"md5_digest": "69612710dd5edf53bc5fe5e45e962ec2",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 6111144,
"upload_time": "2024-09-19T15:53:08",
"upload_time_iso_8601": "2024-09-19T15:53:08.341455Z",
"url": "https://files.pythonhosted.org/packages/f8/b3/45abc008eb113862c3be205eb45b7ea5d425f03dddcf2c0a1341e3f8f543/itwinai-0.2.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-19 15:53:08",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "interTwin-eu",
"github_project": "itwinai",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "itwinai"
}