ravdl

- **Name:** ravdl
- **Version:** 0.10
- **Home page:** https://github.com/ravenprotocol/ravdl
- **Author:** Raven Protocol
- **License:** MIT
- **Keywords:** ravdl, deep learning library, algorithms
- **Requirements:** numpy, terminaltables, onnx, ravop, python-dotenv
- **Upload time:** 2023-02-07 14:33:18

<div align="center">
  <img src="https://user-images.githubusercontent.com/36446402/217170090-b3090798-bc0c-4ead-aa3b-7b4ced07e3ec.svg" width="200" height="100">
<h1> RavDL - Deep Learning Library </h1>
</div>

Introducing Raven Protocol's Distributed Deep Learning tool that allows Requesters to easily build, train and test their neural networks by leveraging the compute power of participating nodes across the globe.

RavDL can be thought of as a high-level Python wrapper that defines the mathematical backend for building layers of neural networks. It uses the fundamental operations from the Ravop library to provide the essential abstractions for training complex DL architectures in the Ravenverse.

This framework seamlessly integrates with the [Ravenverse](https://www.ravenverse.ai/), where models are divided into optimized subgraphs that are assigned to participating nodes for secure computation. Once all subgraphs have been computed, the saved model is returned to the requester.

In this manner, a requester can securely train complex models without dedicating their own system to this heavy and time-consuming task.

There is something in it for the providers too! Nodes that contribute their processing power are rewarded with tokens proportionate to the capabilities of their systems and the duration of their participation. More information is available [here](https://github.com/ravenprotocol/ravpy).

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Table of Contents

- [Installation](#installation)
- [Layers](#layers)
  - [Dense](#dense)
  - [BatchNormalization1D](#batchnormalization1d)
  - [BatchNormalization2D](#batchnormalization2d)
  - [LayerNormalization](#layernormalization)
  - [Dropout](#dropout)
  - [Activation](#activation)
  - [Conv2D](#conv2d)
  - [Flatten](#flatten)
  - [MaxPooling2D](#maxpooling2d)
  - [Embedding](#embedding)

- [Optimizers](#optimizers)
- [Loss Functions](#losses)
- [Usage](#usage)
- [Functional Model Definition](#functional-model-definition)
- [Sequential Model Definition](#sequential-model-definition)
- [Activate Graph](#activating-the-graph)
- [Execute Graph](#executing-the-graph)
- [Retrieving Persisting Ops](#retrieving-persisting-ops)
- [License](#license)

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Installation

Make sure [Ravop](https://github.com/ravenprotocol/ravop) is installed and working properly. 

### With PIP
```bash
pip install ravdl
```

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Layers


### Dense
```python
Dense(n_units, initial_W=None, initial_w0=None, use_bias=True)
```
#### Parameters
* ```n_units```: Output dimension of the layer
* ```initial_W```: Initial weights of the layer
* ```initial_w0```: Initial bias of the layer
* ```use_bias```: Whether to use bias or not

#### Shape
* Input: (batch_size, ..., input_dim)
* Output: (batch_size, ..., n_units)
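
A quick illustration (a minimal sketch; ```input_shape``` is the keyword used for the first layer of a Sequential model in the [Usage](#usage) section below):

```python
from ravdl.v2.layers import Dense

# The first layer of a model declares its input dimension explicitly...
hidden = Dense(64, input_shape=(10,))

# ...while subsequent layers infer it from the previous layer's output.
output = Dense(3)
```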



### BatchNormalization1D

```python
BatchNormalization1D(momentum=0.99, epsilon=0.01, affine=True, initial_gamma=None, initial_beta=None, initial_running_mean=None, initial_running_var=None)
```

#### Parameters
* ```momentum```: Momentum for the moving average and variance
* ```epsilon```: Small value to avoid division by zero
* ```affine```: Whether to learn the scaling and shifting parameters
* ```initial_gamma```: Initial scaling parameter
* ```initial_beta```: Initial shifting parameter
* ```initial_running_mean```: Initial running mean
* ```initial_running_var```: Initial running variance

#### Shape
* Input: (batch_size, channels) or (batch_size, channels, length)
* Output: same as input


### BatchNormalization2D

```python
BatchNormalization2D(num_features, momentum=0.99, epsilon=0.01, affine=True, initial_gamma=None, initial_beta=None, initial_running_mean=None, initial_running_var=None)
```

#### Parameters
* ```num_features```: Number of channels in the input
* ```momentum```: Momentum for the moving average and variance
* ```epsilon```: Small value to avoid division by zero
* ```affine```: Whether to learn the scaling and shifting parameters
* ```initial_gamma```: Initial scaling parameter
* ```initial_beta```: Initial shifting parameter
* ```initial_running_mean```: Initial running mean
* ```initial_running_var```: Initial running variance

#### Shape
* Input: (batch_size, channels, height, width)
* Output: same as input


### LayerNormalization

```python
LayerNormalization(normalized_shape=None, epsilon=1e-5, initial_W=None, initial_w0=None)
```

#### Parameters
* ```normalized_shape```: Shape of the input, or an integer giving the last dimension of the input
* ```epsilon```: Small value to avoid division by zero
* ```initial_W```: Initial weights of the layer
* ```initial_w0```: Initial bias of the layer

#### Shape
* Input: (batch_size, ...)
* Output: same as input


### Dropout

```python
Dropout(p=0.5)
```

#### Parameters
* ```p```: Probability of dropping out a unit

#### Shape
* Input: any shape
* Output: same as input

### Activation

```python
Activation(name='relu')
```

#### Parameters
* ```name```: Name of the activation function

> **Currently Supported:** 'relu', 'sigmoid', 'tanh', 'softmax', 'leaky_relu', 'elu', 'selu', 'softplus', 'softsign', 'tanhshrink', 'logsigmoid', 'hardshrink', 'hardtanh', 'softmin', 'softshrink', 'threshold'


#### Shape
* Input: any shape
* Output: same as input

### Conv2D

```python
Conv2D(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', initial_W=None, initial_w0=None)
```

#### Parameters
* ```in_channels```: Number of channels in the input image
* ```out_channels```: Number of channels produced by the convolution
* ```kernel_size```: Size of the convolving kernel
* ```stride```: Stride of the convolution
* ```padding```: Padding added to all 4 sides of the input (int, tuple or string)
* ```dilation```: Spacing between kernel elements
* ```groups```: Number of blocked connections from input channels to output channels
* ```bias```: If True, adds a learnable bias to the output
* ```padding_mode```: 'zeros', 'reflect', 'replicate' or 'circular'
* ```initial_W```: Initial weights of the layer
* ```initial_w0```: Initial bias of the layer

#### Shape
* Input: (batch_size, in_channels, height, width)
* Output: (batch_size, out_channels, new_height, new_width)
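
For reference, ```new_height``` and ```new_width``` follow the standard convolution arithmetic. A minimal sketch (assuming RavDL uses the usual floor-rounding convention):

```python
import math

def conv2d_output_size(size, kernel_size, stride=1, padding=0, dilation=1):
    # Standard convolution arithmetic for one spatial dimension.
    return math.floor(
        (size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride
    ) + 1

# A 3x3 kernel with stride 1 and padding 1 preserves a 28x28 input.
assert conv2d_output_size(28, kernel_size=3, stride=1, padding=1) == 28
```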


### Flatten

```python
Flatten(start_dim=1, end_dim=-1)
```

#### Parameters
* ```start_dim```: First dimension to flatten
* ```end_dim```: Last dimension to flatten

#### Shape
* Input: (batch_size, ...)
* Output: (batch_size, flattened_dimension)


### MaxPooling2D

```python
MaxPooling2D(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
```

#### Parameters
* ```kernel_size```: Size of the max pooling window
* ```stride```: Stride of the max pooling window
* ```padding```: Zero-padding added to both sides of the input
* ```dilation```: Spacing between kernel elements
* ```return_indices```: If True, returns the max indices along with the outputs
* ```ceil_mode```: If True, uses ceil instead of floor to compute the output shape

#### Shape
* Input: (batch_size, channels, height, width)
* Output: (batch_size, channels, new_height, new_width)


### Embedding
```python
Embedding(vocab_size, embed_dim, initial_W=None)
```

#### Parameters
* ```vocab_size```: Size of the vocabulary
* ```embed_dim```: Dimension of the embedding
* ```initial_W```: Initial weights of the layer

#### Shape
* Input: (batch_size, sequence_length)
* Output: (batch_size, sequence_length, embed_dim)


![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Optimizers

### RMSprop

```python
RMSprop(lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
```

#### Parameters
* ```lr```: Learning rate
* ```alpha```: Smoothing constant
* ```eps```: Term added to the denominator to improve numerical stability
* ```weight_decay```: Weight decay (L2 penalty)
* ```momentum```: Momentum factor
* ```centered```: If True, computes centered RMSprop, in which the gradient is normalized by an estimate of its variance

### Adam

```python
Adam(lr=0.001, betas=(0.9,0.999), eps=1e-08, weight_decay=0, amsgrad=False)
```

#### Parameters
* ```lr```: Learning rate
* ```betas```: Coefficients used for computing running averages of the gradient and its square
* ```eps```: Term added to the denominator to improve numerical stability
* ```weight_decay```: Weight decay (L2 penalty)
* ```amsgrad```: If True, uses the AMSGrad variant of this algorithm from the paper *On the Convergence of Adam and Beyond*
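
To clarify how these parameters interact, here is one standard Adam step as an illustrative NumPy sketch (not RavDL's internal implementation; it assumes the common PyTorch-style convention):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, betas=(0.9, 0.999),
              eps=1e-08, weight_decay=0):
    """One standard Adam update for a single parameter array."""
    beta1, beta2 = betas
    if weight_decay:
        grad = grad + weight_decay * param      # L2 penalty folded into the gradient
    m = beta1 * m + (1 - beta1) * grad          # running average of the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2     # running average of its square
    m_hat = m / (1 - beta1 ** t)                # bias corrections for early steps t = 1, 2, ...
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)   # eps guards the division
    return param, m, v
```

With ```amsgrad=True```, the denominator would instead use the running maximum of ```v_hat```, preventing the effective step size from growing over time.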

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Losses
* Mean Squared Error
```python    
ravop.square_loss(y_true, y_pred)
```
* Cross Entropy
```python        
ravop.cross_entropy_loss(y_true, y_pred, ignore_index=None, reshape_target=None, reshape_label=None)
```
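
As a minimal sketch of how a loss op is built (reusing the ```R.t``` tensor wrapper and ```persist_op``` shown in the [Usage](#usage) section below):

```python
import numpy as np
import ravop as R

y_true = R.t(np.array([[1.0], [0.0]]))
y_pred = R.t(np.array([[0.9], [0.2]]))

loss = R.square_loss(y_true, y_pred)    # returns an Op, computed on graph execution
loss.persist_op(name="example_loss")    # persist it so it can be fetched later
```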

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Usage

This section gives a more detailed walkthrough of how a requester can define their ML/DL architectures in Python using RavDL and Ravop functionality.

>**Note:** The complete scripts of the functionalities demonstrated in this document are available in the [Ravenverse Repository](https://github.com/ravenprotocol/ravenverse).   

### Authentication and Graph Definition

The Requester must connect to the Ravenverse using a unique token, which they can generate by logging in to Raven's website with their MetaMask wallet credentials.

```python
import ravop as R
R.initialize('<TOKEN>')
```
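
Since ```python-dotenv``` is among RavDL's dependencies, one convenient (optional, hypothetical) pattern is to keep the token in a ```.env``` file rather than hard-coding it:

```python
# .env contains a line such as: TOKEN=<your token>
import os

from dotenv import load_dotenv
import ravop as R

load_dotenv()                        # read .env into the process environment
R.initialize(os.environ["TOKEN"])    # same call as above, token loaded from .env
```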

In the Ravenverse, each script executed by a requester is treated as a collection of Ravop operations called a Graph.<br>
> **Note:** In the current release, a requester can execute only 1 graph with their unique token. Therefore, to clear any previous/existing graphs, the requester must call the ```R.flush()``` method. <br>

The next step is to create a Graph:

```python
R.flush()

algo = R.Graph(name='cnn', algorithm='convolutional_neural_network', approach='distributed')
```
> **Note:** ```name``` and ```algorithm``` parameters can be set to any string. However, the ```approach``` needs to be set to either "distributed" or "federated". 

The current release of RavDL supports both Functional and Sequential model definitions.

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Functional Model Definition

### Define Custom Layers

The latest release of RavDL supports requester-defined custom layers, allowing requesters to write their own application-specific layers either from scratch or as a composition of existing layers.

A custom layer is defined by inheriting the ```CustomLayer``` class from the ```ravdl.v2.layers``` module. The class defined by the requester must implement the methods shown below:

```python
from ravdl.v2.layers import CustomLayer, Dense, BatchNormalization1D, Dropout

# n_hidden and n_features are assumed to be set from the dataset
class CustomLayer1(CustomLayer):
    def __init__(self) -> None:
        super().__init__()
        self.d1 = Dense(n_hidden, input_shape=(n_features,))
        self.bn1 = BatchNormalization1D(momentum=0.99, epsilon=0.01)

    def _forward_pass_call(self, input, training=True):
        o = self.d1._forward_pass(input)
        o = self.bn1._forward_pass(o, training=training)
        return o

class CustomLayer2(CustomLayer):
    def __init__(self) -> None:
        super().__init__()
        self.d1 = Dense(30)
        self.dropout = Dropout(0.9)
        self.d2 = Dense(3)

    def _forward_pass_call(self, input, training=True):
        o = self.d1._forward_pass(input)
        o = self.dropout._forward_pass(o, training=training)
        o = self.d2._forward_pass(o)
        return o
```
### Defining Custom Model Class

The custom model class is defined by inheriting the ```Functional``` class from the ```ravdl.v2``` module. This allows the requester to compose custom and existing layers into their own model class.

The class defined by the requester must implement the methods shown below:

```python
from ravdl.v2 import Functional
from ravdl.v2.layers import Activation

class ANNModel(Functional):
    def __init__(self, optimizer):
        super().__init__()
        self.custom_layer1 = CustomLayer1()
        self.custom_layer2 = CustomLayer2()
        self.act = Activation('softmax')
        self.initialize_params(optimizer)

    def _forward_pass_call(self, input, training=True):
        o = self.custom_layer1._forward_pass(input, training=training)
        o = self.custom_layer2._forward_pass(o, training=training)
        o = self.act._forward_pass(o)
        return o
```

> **Note:** The ```initialize_params``` method must be called in the ```__init__``` method of the custom model class. This method initializes the parameters of the model and sets the optimizer for the model. 

### Defining the Training Loop

The requester can now define their training loop using the ```batch_iterator``` function from the ```ravdl.v2.utils``` module. This function takes the input and target data as arguments and returns a generator that yields a batch of data at each iteration.

Note that the ```_forward_pass()``` and ```_backward_pass()``` methods of the custom model class must be called in the training loop.

```python
import ravop as R
from ravdl.v2.optimizers import Adam
from ravdl.v2.utils import batch_iterator

optimizer = Adam()
model = ANNModel(optimizer)

epochs = 100

for i in range(epochs):
    for X_batch, y_batch in batch_iterator(X, y, batch_size=25):
        # wrap the NumPy batches as Ravop tensor ops
        X_t = R.t(X_batch)
        y_t = R.t(y_batch)

        out = model._forward_pass(X_t)
        loss = R.square_loss(y_t, out)
        model._backward_pass(loss)
```
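
If the per-batch losses should be retrievable later, they can also be persisted inside the loop. A sketch (the op name is arbitrary; it is chosen here to match the naming convention used under [Retrieving Persisting Ops](#retrieving-persisting-ops)):

```python
for epoch in range(epochs):
    for batch, (X_batch, y_batch) in enumerate(batch_iterator(X, y, batch_size=25)):
        out = model._forward_pass(R.t(X_batch))
        loss = R.square_loss(R.t(y_batch), out)
        model._backward_pass(loss)
        # persist so the loss can be fetched once its subgraph is computed
        loss.persist_op(name="training_loss_epoch_{}_batch_{}".format(epoch, batch))
```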

### Make a Prediction

```python
out = model._forward_pass(R.t(X_test), training=False)
out.persist_op(name="prediction")
```

> **Note:** The ```_forward_pass()``` method takes an additional argument ```training```, which is ```True``` by default and determines whether the model is in training mode. It must be called with ```training=False``` when making predictions.


Complete example scripts for the Functional model can be found here:
- [ANN](https://github.com/ravenprotocol/ravenverse/blob/master/Requester/ann_functional.py)
- [CNN](https://github.com/ravenprotocol/ravenverse/blob/master/Requester/cnn_functional.py)


![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Sequential Model Definition

### Setting Model Parameters

```python
from ravdl.v2 import NeuralNetwork
from ravdl.v2.optimizers import RMSprop, Adam
from ravdl.v2.layers import Activation, Dense, BatchNormalization1D, Dropout, Conv2D, Flatten, MaxPooling2D

model = NeuralNetwork(optimizer=RMSprop(), loss='SquareLoss')
```

### Adding Layers to Model

```python
model.add(Dense(n_hidden, input_shape=(n_features,)))
model.add(BatchNormalization1D())
model.add(Dense(30))
model.add(Dropout(0.9))
model.add(Dense(3))
model.add(Activation('softmax'))
```

You can view a summary of the model in tabular format:

```python
model.summary()
```

### Training the Model

```python
train_err = model.fit(X, y, n_epochs=5, batch_size=25)
```
By default, the batch losses for each epoch are persisted in the Ravenverse and can be retrieved later, as and when their computations are completed.

### Testing the Model on Ravenverse

If required, model inference can be tested using the ```predict``` function. The output is stored as an Op and must be persisted in order to view it later.

```python 
y_test_pred = model.predict(X_test)
y_test_pred.persist_op(name='test_prediction')
```

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Activating the Graph

Once the model (Functional or Sequential) and all required Ops for the Graph have been defined, the Graph can be activated and made ready for execution as follows:

```python
R.activate()
```
Here is what should happen on activating the Graph (the script executed below is available [here](https://github.com/ravenprotocol/ravenverse/blob/master/ANN_example/ANN_compile.py)):
![ANN_compile](https://user-images.githubusercontent.com/36445587/178669352-03758cbd-85ae-4ccf-bdc8-a7a99001a065.gif)

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Executing the Graph
Once the Graph has been activated, no more Ops can be added to it; the Graph is now ready for execution. With Ravop initialized with the token, the graph can be executed and tracked as follows:

```python
R.execute()
R.track_progress()
```
Here is what should happen on executing the Graph (the script executed below is available [here](https://github.com/ravenprotocol/ravenverse/blob/master/Requester/ann_sequential.py)):

![ANN_execute](https://user-images.githubusercontent.com/36445587/178670666-0b98a36b-12f9-4d4b-a956-2d8bafbe6728.gif)

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## Retrieving Persisting Ops
As mentioned above, the batch losses for each epoch can be retrieved as and when they have been computed. The entire Graph need not be computed in order to view a persisting Op that has been computed. Any other Ops that have been made to persist, such as ```y_test_pred``` in the example above, can be retrieved as well.

```python
batch_loss = R.fetch_persisting_op(op_name="training_loss_epoch_{}_batch_{}".format(epoch_no, batch_no))
print("training_loss_epoch_1_batch_1: ", batch_loss)

y_test_pred = R.fetch_persisting_op(op_name="test_prediction")
print("Test prediction: ", y_test_pred)
```
> **Note:** The Ops that have been fetched are of type **torch.Tensor**.
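
Since fetched Ops are torch tensors, converting them for downstream use is straightforward (a minimal sketch, assuming the standard torch API):

```python
y_test_pred_np = y_test_pred.detach().cpu().numpy()   # torch.Tensor -> NumPy array
predicted_classes = y_test_pred_np.argmax(axis=1)     # e.g. class indices from softmax outputs
```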


<!-- ## How to Contribute -->

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)

## License

<a href="https://github.com/ravenprotocol/ravdl/blob/master/LICENSE"><img src="https://img.shields.io/github/license/ravenprotocol/ravdl"></a>

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.



            
