# Quick Start
## Python
This package requires a Python **version between 3.8 and 3.10** (newer versions have not been tested).
You can use a currently installed Python or, for example, **create a miniconda/anaconda environment**:
```bash
conda create --name nnpf python=3.10
conda activate nnpf
```
If you need a specific compute platform, such as an older CUDA version or ROCm support, install PyTorch manually by following the instructions on the [official website](https://pytorch.org/get-started/locally/).
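For example, a CUDA 11.8 build of PyTorch can be installed from the dedicated package index; take the exact command (index URL and packages) from the selector on the official website, the line below is only an illustration:
```bash
# Example only: CUDA 11.8 build of PyTorch from the official index;
# copy the exact command from https://pytorch.org/get-started/locally/
pip install torch --index-url https://download.pytorch.org/whl/cu118
```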
## Install from PyPI
```bash
pip install nnpf
```
## Install from source
**Download** or clone this repository:
```bash
git clone https://github.com/PhaseFieldICJ/nnpf
cd nnpf
```
**Install** the nnpf module:
```bash
pip install .
```
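If you plan to modify the sources, you can instead perform an editable install so that changes in the repository are picked up without reinstalling:
```bash
# Editable (development) install from the repository root
pip install -e .
```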
You can now move to your **working directory**.
## Self test
You can check the installation with:
```bash
nnpf selftest
```
## Basic training
**Launch** training of the reaction term of the Allen-Cahn equation with the default parameters:
```bash
nnpf train Reaction --batch_size 10
```
and/or with custom hidden layers:
```bash
nnpf train Reaction --batch_size 10 --layer_dims 8 8 3 --activation ReLU
```
If you have a CUDA-compatible GPU, you can speed up training by simply adding the `--gpu` option:
```bash
nnpf train Reaction --batch_size 10 --layer_dims 8 8 3 --activation ReLU --gpu 1
```
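If the GPU is not detected, you can verify that PyTorch sees it (this uses only standard PyTorch calls, independent of nnpf):
```bash
# Should print "True <number of GPUs>" on a working CUDA setup
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```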
**Check** the information of a trained model:
```bash
nnpf infos logs/Reaction/version_0
```
**Visualize** the loss evolution and compare hyper-parameters using TensorBoard:
```bash
tensorboard --logdir logs
```
and open your browser at http://localhost:6006/
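If the default port is already taken, or if TensorBoard runs on a remote machine, the standard TensorBoard options apply:
```bash
# Serve on another port and listen on all interfaces (for remote access)
tensorboard --logdir logs --port 6007 --bind_all
```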
# Custom model
You can also create a custom model in a file and make it derive from the problem you want to solve.
For example, create a file `model.py` with the following content:
```python
from torch.nn import Sequential

from nnpf.problems import AllenCahnProblem
from nnpf.models import Reaction, HeatArray
from nnpf.utils import get_default_args

class ModelDR(AllenCahnProblem):
    def __init__(self,
                 kernel_size=17, kernel_init='zeros',
                 layers=[8, 3], activation='GaussActivation',
                 **kwargs):
        super().__init__(**kwargs)

        # Fix kernel size to match domain dimension
        if isinstance(kernel_size, int):
            kernel_size = [kernel_size]
        else:
            kernel_size = list(kernel_size)
        if len(kernel_size) == 1:
            kernel_size = kernel_size * self.domain.dim

        # Hyper-parameters (used for saving/loading the module)
        self.save_hyperparameters(
            'kernel_size', 'kernel_init',
            'layers', 'activation',
        )

        self.model = Sequential(
            HeatArray(
                kernel_size=kernel_size, init=kernel_init,
                bounds=self.hparams.bounds, N=self.hparams.N
            ),
            Reaction(layers, activation),
        )

    def forward(self, x):
        return self.model(x)

    @staticmethod
    def add_model_specific_args(parent_parser, defaults={}):
        parser = AllenCahnProblem.add_model_specific_args(parent_parser, defaults)
        group = parser.add_argument_group("Allen-Cahn DR", "Options specific to this model")
        group.add_argument('--kernel_size', type=int, nargs='+', help='Size of the kernel (nD)')
        group.add_argument('--kernel_init', choices=['zeros', 'random'], help="Initialization of the convolution kernel")
        group.add_argument('--layers', type=int, nargs='+', help='Sizes of the hidden layers')
        group.add_argument('--activation', type=str, help='Name of the activation function')
        group.set_defaults(**{**get_default_args(ModelDR), **defaults})
        return parser
```
The code includes some boilerplate to handle command-line arguments and to save the hyper-parameters (see the PyTorch Lightning documentation).
`ModelDR` is declared as a model of the Allen-Cahn problem and thus inherits the associated training and validation datasets.
You can then display the command-line interface with the `--help` option after specifying the file and the model name:
```bash
nnpf train model.py:ModelDR --help
```
You can start the training for the 2D case with some custom arguments:
```bash
nnpf train model.py:ModelDR --kernel_size 33 --max_epochs 2000 --check_val_every_n_epoch 100
```
For the 3D case:
```bash
nnpf train model.py:ModelDR --bounds [0,1]x[0,1]x[0,1] --N 64 --max_epochs 2000 --check_val_every_n_epoch 100
```
Using a GPU:
```bash
nnpf train model.py:ModelDR --bounds [0,1]x[0,1]x[0,1] --N 64 --max_epochs 2000 --check_val_every_n_epoch 100 --gpus 1
```
Using a configuration file in YAML format:
```bash
nnpf train model.py:ModelDR --config config.yml
```
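The options passed on the command line can instead be stored in that file. Below is a minimal sketch, assuming the keys simply mirror the option names shown above; the exact schema depends on how nnpf loads the configuration, so check `nnpf train model.py:ModelDR --help` for the available names:
```yaml
# Hypothetical config.yml: keys assumed to mirror the command-line options
kernel_size: 33
max_epochs: 2000
check_val_every_n_epoch: 100
```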