## 1 Introduction
A Physics-Informed Neural Network (PINN) implemented with PyTorch, intended mainly to facilitate the solution of ordinary differential equations.
## 2 External dependencies for pypinn
torch, numpy, and tqdm.
## 3 Usage
### 3.1 step 1
Define a class that inherits from `pypinn.Pinn`, for example `Net`, and write the equations to be solved in its `get_f` method. Here is an example:
```python
import pypinn
import torch
import torch.nn as nn
class Net(pypinn.Pinn):
    def __init__(self, input_size: int, hidden_sizes: list, output_size: int, seed=0):
        super().__init__(input_size, hidden_sizes, output_size, seed)

    def get_f(self, x):
        # y[i] and dy[i] are the i-th network output and its derivative w.r.t. x.
        y, dy = self.get_y_and_dy(x)
        # Residuals of the two differential equations.
        f1 = dy[0] - y[1]
        f2 = dy[1] - (-y[1] - (2 + torch.sin(x)) * y[0])
        # Residual of the initial conditions y1(0) = 0, y2(0) = 1.
        f3 = self.forward(torch.tensor([[0.0]])) - torch.tensor([[0.0, 1.0]])
        return f1, f2, f3
```
The above example is used to solve a system of equations:
$$
\begin{cases}
\frac{\mathrm{d}y_1}{\mathrm{d}x} = y_2,\\
\frac{\mathrm{d}y_2}{\mathrm{d}x} = -y_2-(2+\sin x)y_1,\\
y_1(0)=0,y_2(0)=1.
\end{cases}
$$
Tip: `dy[0]` and `dy[1]` represent $\frac{dy_1}{dx}$ and $\frac{dy_2}{dx}$, while `y[0]` and `y[1]` represent $y_1$ and $y_2$. The number of equations can in principle be arbitrarily large.
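
For reference, derivative terms like those returned by `get_y_and_dy` can be obtained with PyTorch's autograd. The following is a minimal sketch of that idea, not pypinn's actual implementation (the function name and return layout are assumptions):
```python
import torch

def y_and_dy(model, x):
    # x has shape (N, 1) and requires_grad=True; y has shape (N, output_size).
    y = model(x)
    # Differentiate each output column with respect to x via autograd.
    dy = [
        torch.autograd.grad(y[:, i:i+1], x,
                            grad_outputs=torch.ones_like(y[:, i:i+1]),
                            create_graph=True)[0]
        for i in range(y.shape[1])
    ]
    # Return the outputs column-wise so y[0], y[1] match y_1, y_2 in the equations.
    return [y[:, i:i+1] for i in range(y.shape[1])], dy
```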
### 3.2 step 2
Instantiate the class we defined. For example, to use 1 input neuron, two hidden layers of 20 neurons each, and 2 output neurons: `mdl = Net(1, [20, 20], 2)`.
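
For intuition, `Net(1, [20, 20], 2)` presumably constructs a fully connected network with one input, two hidden layers of 20 neurons, and two outputs. A rough plain-PyTorch equivalent would be the sketch below (the Tanh activation is an assumption; pypinn's actual choice may differ):
```python
import torch.nn as nn

# Hypothetical equivalent of the network built by Net(1, [20, 20], 2):
# 1 input -> 20 -> 20 -> 2 outputs, with Tanh activations (a common PINN choice).
mlp = nn.Sequential(
    nn.Linear(1, 20), nn.Tanh(),
    nn.Linear(20, 20), nn.Tanh(),
    nn.Linear(20, 2),
)
```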
### 3.3 step 3
Configure the training data, loss function, optimizer, learning rate, and number of iterations, then train with `mdl.train(**settings)`:
```python
settings = {
    # Collocation points on [0, 6]; requires_grad is needed for the autograd derivatives.
    'x': torch.linspace(0, 6, 301, requires_grad=True).view(-1, 1),
    'loss_fn': nn.MSELoss(),
    'optimizer': torch.optim.Adam,
    'lr': 1e-3,
    'epochs': 5000,
}
mdl.train(**settings)
```
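
For orientation, a training call like this typically amounts to minimizing the mean squared residuals returned by `get_f`. A minimal hand-written loop under that assumption (not pypinn's internal code, and assuming `pypinn.Pinn` is a `torch.nn.Module`) might look like:
```python
import torch

# Hypothetical loop roughly equivalent to mdl.train(**settings).
x = settings['x']
loss_fn = settings['loss_fn']
optimizer = settings['optimizer'](mdl.parameters(), lr=settings['lr'])

for epoch in range(settings['epochs']):
    optimizer.zero_grad()
    # Each residual f_i should be driven to zero.
    residuals = mdl.get_f(x)
    loss = sum(loss_fn(f, torch.zeros_like(f)) for f in residuals)
    loss.backward()
    optimizer.step()
```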
### 3.4 step 4
Once the model is trained, its predictions can be visualized. Here is a simple example:
```python
import matplotlib.pyplot as plt
t = torch.linspace(0,6,500).view(-1,1)
plt.plot(t,mdl.predict(t))
plt.show()
```
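
To sanity-check the result, the network's prediction can also be compared against a classical ODE solver. A sketch using `scipy.integrate.solve_ivp` follows (SciPy is not a pypinn dependency, so this is an optional extra):
```python
import numpy as np
import matplotlib.pyplot as plt
import torch
from scipy.integrate import solve_ivp

# Right-hand side of the same ODE system solved above.
def rhs(x, y):
    return [y[1], -y[1] - (2 + np.sin(x)) * y[0]]

# Reference solution on the same interval with the same initial values.
sol = solve_ivp(rhs, (0, 6), [0.0, 1.0], dense_output=True)

xs = np.linspace(0, 6, 500)
t = torch.linspace(0, 6, 500).view(-1, 1)

plt.plot(t, mdl.predict(t))        # PINN prediction (y_1 and y_2)
plt.plot(xs, sol.sol(xs).T, '--')  # solve_ivp reference (dashed)
plt.show()
```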
## Update log
`1.0.1` Added more detailed instructions.
`1.0.0` Initial release.