neural-network

Name: neural-network
Version: 0.1.1
Home page: https://github.com/AnhQuoc533/neural-network
Summary: A Neural Network framework for building Multi-layer Perceptron models.
Upload time: 2025-01-03 00:10:45
Author: Anh Quoc
Requires Python: >=3.8
License: MIT
Keywords: neural-network, deep-learning, machine-learning, neural-networks, machine-learning-algorithms
            <h1 align="center">neural-network</h1>

---
**neural-network** is a Python package on TestPyPI that provides a 
Multi-Layer Perceptron (MLP) framework built using only [**NumPy**](https://numpy.org/doc/stable/). 
The framework supports the Gradient Descent, Momentum, RMSProp, and Adam optimizers.
<!-- TABLE OF CONTENTS -->
<details>
  <summary>Table of Contents</summary>
  <ol>
    <li><a href="#installation">Installation</a>
      <ul>
        <li><a href="#dependencies">Dependencies</a></li>
        <li><a href="#user-installation">User installation</a></li>
      </ul>
    </li>
    <li><a href="#simple-usage">Simple Usage</a>
      <ul>
        <li><a href="#designing-the-model-architecture">Designing the Model Architecture</a></li>
        <li><a href="#training-the-model">Training the Model</a></li>
        <li><a href="#making-predictions">Making predictions</a></li>
      </ul>
    </li>
    <li><a href="#beyond-the-framework">Beyond the Framework</a>
      <ul>
        <li><a href="#activation-functions">Activation functions</a></li>
        <li><a href="#loss-functions">Loss functions</a></li>
        <li><a href="#2d-decision-boundary">2D Decision Boundary</a></li>
      </ul>
    </li>
    <li><a href="#license">License</a></li>
  </ol>
</details>
&nbsp;

## Installation

### Dependencies
```
python >= 3.8
numpy >= 1.22.1
matplotlib >= 3.5.1
```

### User installation
You can install neural-network from TestPyPI using `pip`:
```
pip install -i https://test.pypi.org/simple/ neural-network
```
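Note that TestPyPI may not host the required dependencies. If `pip` cannot resolve NumPy or Matplotlib from the test index, a common workaround is to also allow the main index as a fallback via pip's standard `--extra-index-url` option (shown here as a suggestion, not part of the package's documented instructions):
```
pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ neural-network
```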

## Simple Usage

### Designing the Model Architecture
To define your MLP model, you need to specify the number of layers and the number of neurons in each one. \
Unless you want to set up the parameters manually, you do not need to specify the size of the input layer; it is determined automatically during the first training pass.
```python
from neural_network import NeuralNetwork
model = NeuralNetwork(neurons=[64, 120, 1])
```
In this example, the model is a four-layer neural network: an input layer whose size is inferred automatically, 
a first hidden layer with 64 neurons, a second hidden layer with 120 neurons, and a single output neuron.

### Training the Model
To train the model, you need to provide the input data and the corresponding target (or label) data.
```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

model.fit(X, y, epochs=1000, learning_rate=0.1, optimizer='adam')
```
If you train the model without setting the activation and/or loss functions, the framework chooses them for you: it initializes the parameters and the functions according to the type of model (regression or classification) and its architecture.
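For intuition, the sketch below illustrates the kind of pairing such a default might use (binary classification with a sigmoid output and logistic loss, multi-class with softmax and cross-entropy, regression with an identity output and quadratic loss). It is only an illustration: `infer_output_functions` is a hypothetical helper, not part of the package's API, and the top-level imports of the activation and loss functions are assumed.
```python
import numpy as np
from neural_network import sigmoid, softmax, log_loss, cross_entropy_loss, quadratic_loss  # imports assumed

def infer_output_functions(task, n_outputs):
    """Hypothetical helper mirroring the kind of defaults described above."""
    if task == 'regression':
        # Identity output activation paired with a quadratic loss.
        identity = lambda x, derivative=False: np.ones_like(x) if derivative else x
        return identity, quadratic_loss
    if n_outputs == 1:
        # Binary classification: sigmoid output with logistic loss.
        return sigmoid, log_loss
    # Multi-class classification: softmax output with cross-entropy loss.
    return softmax, cross_entropy_loss
```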

### Making predictions
Once the model has been trained, you can use it to make predictions by simply calling the `predict` method.
```python
predictions = model.predict(X)
```
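As a quick sanity check on the XOR data above, the raw outputs can be thresholded and compared against the labels. This assumes `predict` returns values in [0, 1] rather than already-thresholded class labels:
```python
import numpy as np

# Threshold at 0.5 (assumes probabilistic outputs) and measure accuracy.
labels = (predictions > 0.5).astype(int)
accuracy = np.mean(labels == y)
print(f"Training accuracy: {accuracy:.2f}")
```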

## Beyond the Framework
Apart from the neural network framework, the package also provides:
### Activation functions
<table>
<tr>
    <td><a href="https://en.wikipedia.org/wiki/Sigmoid_function">Sigmoid function</a></td>
    <td><code>sigmoid()</code></td>
</tr>
<tr>
    <td><a href="https://www.medcalc.org/manual/tanh-function.php">Hyperbolic tangent function</a></td>
    <td><code>tanh()</code></td>
</tr>
<tr>
    <td><a href="https://paperswithcode.com/method/relu">Rectified linear unit</a></td>
    <td><code>relu()</code></td>
</tr>
<tr>
    <td><a href="https://paperswithcode.com/method/leaky-relu">Leaky Rectified linear unit</a></td>
    <td><code>leaky_relu()</code></td>
</tr>
<tr>
    <td><a href="https://en.wikipedia.org/wiki/Softmax_function">Softmax function</a></td>
    <td><code>softmax()</code></td>
</tr>
<tr>
    <td><a href="https://paperswithcode.com/method/gelu">Gaussian error linear unit</a></td>
    <td><code>gelu()</code></td>
</tr>
</table>

All of the above functions take two parameters (see the usage example after this list):
* `x`: The input values. Although some functions also accept primitive numeric types,
  it is advised to pass a NumPy array.
* `derivative`: A boolean indicating whether the function computes the derivative at `x` instead of the activation value. Defaults to `False`.
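
For example (assuming the functions are importable from the package's top level):
```python
import numpy as np
from neural_network import sigmoid, relu  # top-level import path assumed

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))                   # activation values
print(sigmoid(x, derivative=True))  # element-wise derivative evaluated at x
print(relu(x))                      # [0., 0., 2.]
```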

### Loss functions
<table>
<tr>
    <td><a href="">Logistic loss function</a></td>
    <td><code>log_loss()</code></td>
</tr>
<tr>
    <td><a href="">Cross-entropy loss function</a></td>
    <td><code>cross_entropy_loss()</code></td>
</tr>
<tr>
    <td><a href="">Quadratic loss function</a></td>
    <td><code>quadratic_loss()</code></td>
</tr>
</table>

All of the above functions take three parameters (see the usage example after this list):
* `y_pred`: Predicted labels. Must be a 2D NumPy array with the same shape as `y_true`.
* `y_true`: True labels. Must be a 2D NumPy array with the same shape as `y_pred`.
* `derivative`: A boolean indicating whether the function computes the derivative instead of the loss value. Defaults to `False`.
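
For example (assuming the functions are importable from the package's top level, and taking `derivative=True` to mean the derivative with respect to `y_pred`):
```python
import numpy as np
from neural_network import log_loss  # top-level import path assumed

# Both arguments must be 2D arrays of the same shape.
y_true = np.array([[0], [1], [1], [0]])
y_pred = np.array([[0.1], [0.8], [0.7], [0.2]])

print(log_loss(y_pred, y_true))                   # loss value
print(log_loss(y_pred, y_true, derivative=True))  # derivative w.r.t. y_pred
```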

### 2D Decision Boundary
This utility function is intended for illustration. It takes a trained binary classification model, a 2D NumPy array of input data with two attributes, and the corresponding binary labels. It then plots a 2D decision boundary based on the model's predictions. \
The input model does not have to be an instance of **NeuralNetwork**, but it must have a `predict`
method that accepts a 2D NumPy array as input.
```python
from neural_network import plot_decision_boundary  # top-level import path assumed

plot_decision_boundary(model, train_x, train_y)
```
<p align="center">
  <img src="img/Figure_1.png">
</p>
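
For reference, a plot like the one above can be reproduced with only NumPy and Matplotlib by evaluating the model on a dense grid and drawing a filled contour of the predicted class. The sketch below is a generic reimplementation of that idea (assuming probabilistic outputs thresholded at 0.5), not the package's actual `plot_decision_boundary` code:
```python
import numpy as np
import matplotlib.pyplot as plt

def plot_decision_boundary_sketch(model, X, y, steps=200):
    """Generic sketch: shade the plane by predicted class, then overlay the data."""
    x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
    y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, steps),
                         np.linspace(y_min, y_max, steps))
    grid = np.c_[xx.ravel(), yy.ravel()]               # (steps*steps, 2) inputs
    preds = np.asarray(model.predict(grid)).reshape(xx.shape)
    plt.contourf(xx, yy, preds > 0.5, alpha=0.3, cmap='coolwarm')
    plt.scatter(X[:, 0], X[:, 1], c=y.ravel(), cmap='coolwarm', edgecolors='k')
    plt.show()
```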

## License
This project is licensed under the MIT License, as found in the [LICENSE](LICENSE) file.

            
