keras4torch


Name: keras4torch
Version: 1.2.3
Summary: A compatible-with-keras wrapper for training PyTorch models✨
Home page: https://github.com/blueloveTH/keras4torch
Author: blueloveTH
Requires Python: >=3.6
Upload time: 2021-05-10 14:03:56
            <p align="center">
    <img src="imgs/keras4torch_logo.svg" alt="Keras4Torch" height=48>
</p>
<p align="center">
    <strong>A compatible-with-keras wrapper for training PyTorch models✨</strong>
</p>
<p align="center">
    <a href="https://pypi.python.org/pypi/keras4torch"><img src="https://img.shields.io/pypi/v/keras4torch.svg" alt="PyPI"></a>
    <a href="https://pepy.tech/project/keras4torch"><img src="https://pepy.tech/badge/keras4torch" alt="Downloads"></a>
    <!-- <a href="https://www.buymeacoffee.com/blueloveTH"><img src="https://img.shields.io/badge/Buy%20me%20a-coffee-cyan.svg?logo=buy-me-a-coffee&logoColor=cyan"></a> -->
    <a href="https://codecov.io/gh/blueloveTH/keras4torch"><img src="https://codecov.io/gh/blueloveTH/keras4torch/branch/main/graph/badge.svg" alt="CodeCov"/></a>
    <a href="https://github.com/blueloveTH/keras4torch/blob/master/LICENSE"><img src="https://img.shields.io/github/license/blueloveTH/keras4torch.svg" alt="License"></a>
</p>
<p align="center">
    <a href="https://keras4torch.readthedocs.io/en/latest">Documentations</a>
    •
    <a href="https://github.com/blueloveTH/keras4torch/discussions/5">Dev Logs</a>
    •
    <a href="https://github.com/blueloveTH/keras4torch/tree/main/minimum">Mini Version</a>
</p>


`keras4torch` provides a Keras-compatible high-level API for training PyTorch models. This project is designed for beginners, with these objectives:

+   Help people who are new to PyTorch but familiar with Keras
+   Reduce the cost of migrating Keras model implementations to PyTorch

---

<p align="center">
    <h4 align="center">Use <code>keras4torch</code> in Kaggle code competitions! Check out this <a href="https://www.kaggle.com/blueloveth/keras4torch">package dataset</a> and <a href="https://www.kaggle.com/blueloveth/keras4torch-starter">starter notebook</a>.</h4>
</p>


---

## Installation

```
pip install keras4torch
```

PyTorch 1.6+ and Python 3.6+ are required.
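
As a quick sanity check after installing, you can confirm that the interpreter and PyTorch versions meet these requirements. This is a minimal sketch using only the standard library and `torch`:

```python
import sys
import torch

# keras4torch requires Python 3.6+ and PyTorch 1.6+
assert sys.version_info >= (3, 6), "Python 3.6+ is required"
print(torch.__version__)   # should be 1.6 or newer
```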



## Quick start

Suppose you have an `nn.Module` to train.

```python
model = torchvision.models.resnet18(num_classes=10)
```

All you need to do is wrap it with `k4t.Model()`.

```python
import keras4torch as k4t

model = k4t.Model(model)
```

Now there are two workflows you can use for training.

The **NumPy workflow** is compatible with Keras.

+   `.compile(optimizer, loss, metrics)` configures the optimizer, loss and metrics
+   `.fit(x, y, epochs, batch_size, ...)` trains on raw NumPy arrays
+   `.evaluate(x, y)` returns a `dict` of metric results
+   `.predict(x)` makes predictions



The **DataLoader workflow** is more flexible and closer to native PyTorch style.

+   `.compile(optimizer, loss, metrics)` same as in the NumPy workflow
+   `.fit_dl(train_loader, val_loader, epochs)` trains the model via `DataLoader`
+   `.evaluate_dl(data_loader)` same as the NumPy workflow, but takes a `DataLoader`
+   `.predict_dl(data_loader)` same as the NumPy workflow, but takes a `DataLoader`

The two workflows can be mixed.
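
As an illustration only, here is a minimal sketch of the DataLoader workflow, reusing the wrapped `model` from the Quick start. The dataset tensors and loader settings are invented for this sketch; only `compile`, `fit_dl`, and `evaluate_dl` come from the list above.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical in-memory data shaped for the ResNet example above (3x32x32 images, 10 classes)
train_ds = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))
val_ds   = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))

train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
val_loader   = DataLoader(val_ds, batch_size=64)

# DataLoader workflow: compile once, then train and evaluate via loaders
model.compile(optimizer='adam', loss=nn.CrossEntropyLoss(), metrics=['acc'])
history = model.fit_dl(train_loader, val_loader, epochs=10)
print(model.evaluate_dl(val_loader))
```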



## MNIST example

Here we show a complete example of training a ConvNet on MNIST.

```python
import torch
import torchvision
from torch import nn

import keras4torch as k4t
```

#### Step 1: Preprocess data

```python
mnist = torchvision.datasets.MNIST(root='./', download=True)
x, y = mnist.data.unsqueeze(1), mnist.targets    # add a channel dimension -> [N, 1, 28, 28]

x = x.float() / 255.0    # scale the pixels to [0, 1]

x_train, y_train = x[:40000], y[:40000]
x_test, y_test = x[40000:], y[40000:]
```

#### Step 2: Define the model

If you have a `nn.Module` already, just wrap it via `k4t.Model`. For example,

```python
model = torchvision.models.resnet50(num_classes=10)

model = k4t.Model(model)
```

To build a model from scratch, you can use the `KerasLayer` classes (located in `k4t.layers`) for automatic shape inference, which frees you from calculating input channels by hand.

As shown below, `k4t.layers.Conv2d(32, kernel_size=3)` is equivalent to `nn.Conv2d(?, 32, kernel_size=3)`, where the first argument `?` (i.e. `in_channels`) is inferred automatically.

```python
model = torch.nn.Sequential(
    k4t.layers.Conv2d(32, kernel_size=3), nn.ReLU(),
    nn.MaxPool2d(2, 2), 
    k4t.layers.Conv2d(64, kernel_size=3), nn.ReLU(),
    nn.Flatten(),
    k4t.layers.Linear(10)
)
```

A model containing `KerasLayer` needs an extra `.build(input_shape)` operation.

```python
model = k4t.Model(model).build([1, 28, 28])
```

#### Step 3: Summarize the model

```python
model.summary()
```

```txt
=========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
=========================================================================================
├─Conv2d*: 1-1                           [-1, 32, 26, 26]          320
├─ReLU: 1-2                              [-1, 32, 26, 26]          --
├─MaxPool2d: 1-3                         [-1, 32, 13, 13]          --
├─Conv2d*: 1-4                           [-1, 64, 11, 11]          18,496
├─ReLU: 1-5                              [-1, 64, 11, 11]          --
├─Flatten: 1-6                           [-1, 7744]                --
├─Linear*: 1-7                           [-1, 10]                  77,450
=========================================================================================
Total params: 96,266
Trainable params: 96,266
Non-trainable params: 0
Total mult-adds (M): 2.50
=========================================================================================
```

#### Step 4: Configure the optimizer, loss and metrics

```python
model.compile(optimizer='adam', loss=nn.CrossEntropyLoss(), metrics=['acc'])
```

If a GPU is available, it will be used automatically. You can also pass the `device` parameter to `.compile()` explicitly.
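
For instance, a device can be chosen explicitly like this. This is a small sketch; the assumption is that `device` accepts a `torch.device` or a device string such as `'cuda'`:

```python
# Explicitly select the device instead of relying on auto-detection
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model.compile(optimizer='adam', loss=nn.CrossEntropyLoss(), metrics=['acc'], device=device)
```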

#### Step 5: Training

```python
history = model.fit(x_train, y_train,
                    epochs=30,
                    batch_size=512,
                    validation_split=0.2)
```

```txt
Train on 32000 samples, validate on 8000 samples:
Epoch 1/30 - 2.8s - loss: 0.6109 - acc: 0.8372 - val_loss: 0.2712 - val_acc: 0.9235 - lr: 1e-03
Epoch 2/30 - 1.5s - loss: 0.2061 - acc: 0.9402 - val_loss: 0.1494 - val_acc: 0.9579 - lr: 1e-03
Epoch 3/30 - 1.5s - loss: 0.1202 - acc: 0.9653 - val_loss: 0.0974 - val_acc: 0.9719 - lr: 1e-03
Epoch 4/30 - 1.5s - loss: 0.0835 - acc: 0.9757 - val_loss: 0.0816 - val_acc: 0.9769 - lr: 1e-03
... ...
```

#### Step 6: Plot learning curve

```python
history.plot(kind='line', y=['loss', 'val_loss'])
```

<img src="imgs/learning_curve.svg"  />

#### Step 7: Evaluate on test set

```python
model.evaluate(x_test, y_test)
```

```txt
{'loss': 0.06655170023441315, 'acc': 0.9839999675750732}
```
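
To turn the trained model into class predictions, the `.predict()` method from the NumPy workflow can be used. A brief sketch (the exact return type of `.predict()` is not stated here, but `argmax(-1)` works on both tensors and arrays):

```python
y_score = model.predict(x_test)     # one row of 10 class scores per test image
y_pred = y_score.argmax(-1)         # predicted digit for each image
print(y_pred[:10])
```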



## Communication

We have enabled [GitHub Discussions](https://github.com/blueloveTH/keras4torch/discussions) for Q&A and general topics!

To report bugs, please use [GitHub Issues](https://github.com/blueloveTH/keras4torch/issues).



            
