NeuralEngine 0.1.1 (PyPI)

Summary: A framework/library for building and training neural networks.
Author/Maintainer: Prajjwal Pratap Shah
Homepage: https://github.com/Prajjwal2404/NeuralEngine
Requires: Python >=3.10; numpy >=1.26.4
License: MIT License with attribution clause (see LICENSE)
Keywords: deep-learning, neural-networks, machine-learning, numpy, cupy, autograd
Uploaded: 2025-07-14 04:45:34
<p align="center">
    <img src="https://raw.githubusercontent.com/Prajjwal2404/NeuralEngine/refs/heads/main/NeuralEngine.jpg" alt="NeuralEngine Cover" width="600" />
</p>

<p align="center">
    <a href="https://github.com/Prajjwal2404/NeuralEngine/pulse" alt="Activity">
        <img src="https://img.shields.io/github/commit-activity/m/Prajjwal2404/NeuralEngine" /></a>
    <a href="https://github.com/Prajjwal2404/NeuralEngine/graphs/contributors" alt="Contributors">
        <img src="https://img.shields.io/github/contributors/Prajjwal2404/NeuralEngine" /></a>
    <a href="https://pypi.org/project/NeuralEngine" alt="PyPI">
        <img src="https://img.shields.io/pypi/v/NeuralEngine?color=brightgreen&label=PyPI" /></a>
    <a href="https://www.python.org/">
        <img src="https://img.shields.io/badge/language-Python-blue">
    </a>
    <a href="mailto:prajjwalpratapshah@outlook.com">
        <img src="https://img.shields.io/badge/-Email-red?style=flat-square&logo=gmail&logoColor=white">
    </a>
    <a href="https://www.linkedin.com/in/prajjwal2404">
        <img src="https://img.shields.io/badge/-Linkedin-blue?style=flat-square&logo=linkedin">
    </a>
</p>


# NeuralEngine

A framework/library for building and training neural networks in Python. NeuralEngine provides core components for constructing, training, and evaluating neural networks, with support for both CPU and GPU (CUDA) acceleration. Designed for extensibility, performance, and ease of use, it is suitable for research, prototyping, and production.

## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Example Usage](#example-usage)
- [Project Structure](#project-structure)
- [Capabilities & Documentation](#capabilities--documentation)
- [Contribution Guide](#contribution-guide)
- [License](#license)
- [Attribution](#attribution)

## Features
- Custom tensor operations (CPU/GPU support via NumPy and optional CuPy)
- Configurable neural network layers (Linear, Flatten, etc.)
- Built-in loss functions, metrics, and optimizers
- Model class for easy training and evaluation
- Device management (CPU/CUDA)
- Utilities for deep learning workflows
- Autograd capabilities using dynamic computational graphs
- Extensible design for custom layers, losses, and optimizers

## Installation
Install via pip:
```bash
pip install NeuralEngine
```
Or clone the repository and install locally:
```bash
git clone https://github.com/Prajjwal2404/NeuralEngine.git
cd NeuralEngine
pip install .
```

### Optional CUDA Support
To enable GPU acceleration, install the `cuda` extra via pip:
```bash
pip install NeuralEngine[cuda]
```
Or install the optional dependency directly:
```bash
pip install cupy-cuda12x
```

## Example Usage
```python
import neuralengine as ne

# Set device (CPU or CUDA)
ne.set_device(ne.Device.CUDA)

# Load your dataset (example: MNIST; load_mnist_data is a user-supplied helper)
x_train, y_train, x_test, y_test = load_mnist_data()

y_train = ne.one_hot(y_train)
y_test = ne.one_hot(y_test)

# Build your model
model = ne.Model(
    input_size=(28, 28),
    optimizer=ne.Adam(),
    loss=ne.CrossEntropy(),
    metrics=ne.ClassificationMetrics()
)
model(
    ne.Flatten(),
    ne.Linear(64, activation=ne.RELU()),
    ne.Linear(10, activation=ne.Softmax()),
)

# Train and evaluate
model.train(x_train, y_train, epochs=30, batch_size=10000)
result = model.eval(x_test, y_test)
```

## Project Structure
```
neuralengine/
    __init__.py
    config.py
    tensor.py
    utils.py
    nn/
        __init__.py
        layers.py
        loss.py
        metrics.py
        model.py
        optim.py
setup.py
requirements.txt
pyproject.toml
MANIFEST.in
LICENSE
README.md
```

## Capabilities & Documentation
NeuralEngine offers the following core capabilities:

### Device Management
- `ne.set_device(device)`: Switch between CPU and GPU (CUDA) for computation.
- Device enum: `ne.Device.CPU`, `ne.Device.CUDA`.
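
A minimal usage sketch (the try/except fallback is an assumption about how a missing CuPy install surfaces, not documented behavior):

```python
import neuralengine as ne

# Prefer the GPU when the optional CuPy backend is available.
try:
    ne.set_device(ne.Device.CUDA)
except Exception:
    # Assumption: selecting CUDA without CuPy installed raises an error.
    ne.set_device(ne.Device.CPU)
```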

### Tensors & Autograd
- Custom tensor implementation supporting NumPy and CuPy backends.
- Automatic differentiation (autograd) using dynamic computational graphs for backpropagation.
- Supports gradients, parameter updates, and custom operations.
- Supported tensor operations:
  - Arithmetic: `+`, `-`, `*`, `/`, `**` (power)
  - Matrix multiplication: `@`
  - Mathematical: `log`, `sqrt`, `exp`, `abs`
  - Reductions: `sum`, `max`, `min`, `mean`, `var`
  - Shape: `transpose`, `reshape`, `concatenate`, `stack`, `slice`
  - Elementwise: `where`, `masked_fill`
  - Comparison: `>`, `>=`, `<`, `<=`
  - Utility: `zero_grad()` (reset gradients)
  - Autograd: `backward()` (compute gradients for the computation graph)
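
Putting a few of these together, a minimal autograd sketch; `backward()` and `zero_grad()` are documented above, while the `.grad` attribute name is an assumption based on common autograd conventions:

```python
import neuralengine as ne

x = ne.tensor([[1.0, 2.0], [3.0, 4.0]])
w = ne.tensor([[0.5, -0.5], [0.25, 1.0]], requires_grad=True)

# Forward pass: a matrix multiply, elementwise ops, and a reduction.
y = ne.sum((x @ w + 1.0) ** 2)

# Backward pass over the dynamic computation graph.
y.backward()
print(w.grad)   # assumption: gradients accumulate in a .grad attribute

w.zero_grad()   # reset gradients before the next forward/backward pass
```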

### Layers
- `ne.Flatten()`: Flattens input tensors to 2D (batch, features).
- `ne.Linear(out_features, activation=None)`: Fully connected layer with optional activation.
- `ne.LSTM(...)`: Long Short-Term Memory layer with options for attention, bidirectionality, sequence/state output. You can build deep LSTM networks by stacking multiple LSTM layers. When stacking, ensure that the hidden units for subsequent layers are set correctly:
    - For a standard LSTM, the hidden state shape for the last timestep is `(batch, hidden_units)`.
    - For a bidirectional LSTM, the hidden and cell state shape becomes `(batch, hidden_units * 2)`.
    - If attention is enabled, the hidden state shape is `(batch, hidden_units + input_size[-1])`.
    - If subsequent layers require state initializations from prior layers, set the hidden units accordingly to match the output shape of the previous LSTM (including adjustments for bidirectionality and attention).
- `ne.MultiplicativeAttention(units)`: Dot-product attention mechanism for sequence models.
- `ne.MultiHeadSelfAttention(num_heads=1, in_size=None)`: Multi-head self-attention layer for transformer and sequence models.
- `ne.Embedding(embed_size, vocab_size, n_timesteps=None)`: Embedding layer for mapping indices to dense vectors, with optional positional encoding.
- `ne.LayerNorm(norm_shape, eps=1e-7)`: Layer normalization for stabilizing training.
- `ne.Dropout(prob=0.5)`: Dropout regularization for reducing overfitting.
- All layers inherit from a common base and support extensibility for custom architectures.
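
As a composition sketch (sizes are illustrative, and `input_size=(32,)` for integer token sequences is an assumption; the `ne.LSTM` signature is not spelled out above, so only layers with documented signatures appear):

```python
import neuralengine as ne

# A small self-attention text classifier over 32-token sequences.
model = ne.Model(
    input_size=(32,),
    optimizer=ne.Adam(),
    loss=ne.CrossEntropy(),
    metrics=ne.ClassificationMetrics(),
)
model(
    ne.Embedding(64, 10000, n_timesteps=32),  # token ids -> 64-dim vectors (+ positions)
    ne.MultiHeadSelfAttention(num_heads=4),
    ne.LayerNorm(64),
    ne.Dropout(prob=0.3),
    ne.Flatten(),
    ne.Linear(5, activation=ne.Softmax()),    # 5 output classes
)
```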

### Activations
- `ne.Sigmoid()`: Sigmoid activation function.
- `ne.Tanh()`: Tanh activation function.
- `ne.RELU(alpha=0, parametric=False)`: ReLU, Leaky ReLU, or Parametric ReLU activation.
- `ne.Softmax(axis=-1)`: Softmax activation for classification tasks.
- All activations inherit from a common base and support extensibility for custom architectures.
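
`ne.RELU` covers three variants through its arguments (a brief sketch):

```python
import neuralengine as ne

relu = ne.RELU()                               # standard ReLU
leaky = ne.RELU(alpha=0.01)                    # Leaky ReLU with fixed slope 0.01
prelu = ne.RELU(alpha=0.01, parametric=True)   # Parametric ReLU: slope is learned

hidden = ne.Linear(128, activation=leaky)      # attach to a layer as usual
```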

### Loss Functions
- `ne.CrossEntropy(binary=False, eps=1e-7)`: Categorical and binary cross-entropy loss for classification tasks.
- `ne.MSE()`: Mean Squared Error loss for regression.
- `ne.MAE()`: Mean Absolute Error loss for regression.
- `ne.Huber(delta=1.0)`: Huber loss, robust to outliers.
- `ne.GaussianNLL(eps=1e-7)`: Gaussian Negative Log Likelihood loss for probabilistic regression.
- `ne.KLDivergence(eps=1e-7)`: Kullback-Leibler Divergence loss for measuring distribution differences.
- All loss functions inherit from a common base and support autograd.
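
For regression, the loss swaps in directly; a sketch with illustrative sizes:

```python
import neuralengine as ne

# Robust 1-D regression on 16-feature inputs.
model = ne.Model(
    input_size=(16,),
    optimizer=ne.SGD(lr=1e-2, momentum=0.9),
    loss=ne.Huber(delta=1.0),   # less sensitive to outliers than MSE
    metrics=ne.RMSE(),
)
model(
    ne.Linear(32, activation=ne.Tanh()),
    ne.Linear(1),
)
```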

### Optimizers
- `ne.Adam(lr=1e-3, betas=(0.9, 0.99), eps=1e-7, reg=0)`: Adam optimizer (switches to RMSProp if only one beta is provided).
- `ne.SGD(lr=1e-2, reg=0, momentum=0, nesterov=False)`: Stochastic Gradient Descent with optional momentum and Nesterov acceleration.
- All optimizers support L2 regularization and gradient reset.
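
Typical configurations (passing a single beta as a 1-tuple to trigger the RMSProp behavior is an assumption about how "only one beta" is expressed):

```python
import neuralengine as ne

adam = ne.Adam(lr=1e-3, betas=(0.9, 0.99), reg=1e-4)   # Adam with L2 regularization
rmsprop = ne.Adam(lr=1e-3, betas=(0.99,))               # single beta -> RMSProp update
sgd = ne.SGD(lr=1e-2, momentum=0.9, nesterov=True)      # Nesterov-accelerated SGD
```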

### Metrics
- `ne.ClassificationMetrics(num_classes=None, acc=True, prec=False, rec=False, f1=False, cm=False)`: Computes accuracy, precision, recall, F1 score, and confusion matrix for classification tasks.
- `ne.RMSE()`: Root Mean Squared Error for regression.
- `ne.R2()`: R2 Score for regression.
- All metrics return results as dictionaries and support batch evaluation.
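
Enabling every classification metric at once (a sketch; the exact dictionary keys are an assumption):

```python
import neuralengine as ne

metrics = ne.ClassificationMetrics(
    num_classes=10, acc=True, prec=True, rec=True, f1=True, cm=True
)
# Passed to ne.Model(...); results come back as a dictionary,
# e.g. {"accuracy": 0.97, "precision": ..., "recall": ..., "f1": ..., "cm": ...}
```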

### Model API
- `ne.Model(input_size, optimizer, loss, metrics)`: Create a model specifying input size, optimizer, loss function, and metrics.
- Add layers by calling the model instance: `model(layer1, layer2, ...)` or using `model.build(layer1, layer2, ...)`.
- `model.train(x, y, epochs=10, batch_size=64, random_seed=None)`: Train the model on data, with support for batching, shuffling, and metric/loss reporting per epoch.
- `model.eval(x, y)`: Evaluate the model on data; prints the loss and metrics (including the confusion matrix if enabled in the metrics) and returns the output tensor.
- Layers are set to training or evaluation mode automatically during `train` and `eval`.
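
End to end, reusing the MNIST-style arrays from the earlier example; `model.build` and `random_seed` are the only additions (a sketch):

```python
import neuralengine as ne

model = ne.Model(
    input_size=(28, 28),
    optimizer=ne.Adam(),
    loss=ne.CrossEntropy(),
    metrics=ne.ClassificationMetrics(),
)

# model(...) and model.build(...) are interchangeable ways to add layers.
model.build(
    ne.Flatten(),
    ne.Linear(64, activation=ne.RELU()),
    ne.Linear(10, activation=ne.Softmax()),
)

# random_seed fixes the shuffling for reproducible runs.
model.train(x_train, y_train, epochs=10, batch_size=64, random_seed=42)
output = model.eval(x_test, y_test)
```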

### Utilities
- Tensor creation: `tensor(data, requires_grad=False)`, `zeros(shape)`, `ones(shape)`, `rand(shape)`, `randn(shape, xavier=False)`, `randint(low, high, shape)` and their `_like` variants for matching shapes.
- Tensor operations: `sum`, `max`, `min`, `mean`, `var`, `log`, `sqrt`, `exp`, `abs`, `concat`, `stack`, `where`, `clip` for elementwise and reduction operations.
- Encoding: `one_hot(labels, num_classes=None)` for converting integer labels to one-hot encoding.
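
A few creation and encoding helpers in one place (passing a plain list to `one_hot` is an assumption; arrays or tensors may be expected):

```python
import neuralengine as ne

w = ne.randn((128, 64), xavier=True)    # Xavier-initialized weights
b = ne.zeros((64,))                     # bias vector
m = ne.randint(0, 2, (128, 64))         # random 0/1 mask
w2 = ne.zeros_like(w)                   # _like variant matches w's shape

labels = ne.one_hot([0, 2, 1], num_classes=3)   # 3x3 one-hot matrix
```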

### Extensibility
NeuralEngine is designed for easy extension and customization:
- **Custom Layers**: Create new layers by inheriting from the `Layer` base class and implementing the `forward(self, x)` method. You can add parameters, initialization logic, and custom computations as needed. All built-in layers follow this pattern, making it simple to add your own.
- **Custom Losses**: Define new loss functions by inheriting from the `Loss` base class and implementing the `compute(self, z, y)` method. This allows you to integrate any custom loss logic with autograd support.
- **Custom Optimizers**: Implement new optimization algorithms by inheriting from the `Optimizer` base class and providing your own `step(self)` method. You can manage optimizer state and parameter updates as required.
- **Custom Metrics**: Add new metrics by inheriting from the `Metric` base class and implementing the `compute(self, z, y)` method. Alternatively, you can pass a function of the form `func(x, y) -> dict[str, float | np.ndarray]` directly to the model's metrics argument for flexible evaluation.
- All core components are modular and can be replaced or extended for research, experimentation, or production use.
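
For instance, a custom loss and a function-style metric might look like the sketch below. Only the `compute(self, z, y)` hook and the `func(x, y) -> dict` metric form are documented above; the `Loss` import path and the `.data` attribute are assumptions:

```python
import numpy as np
import neuralengine as ne
from neuralengine.nn.loss import Loss   # assumption: base class location

class LogCosh(Loss):
    """Log-cosh regression loss built from documented tensor ops."""
    def compute(self, z, y):
        d = z - y
        # log(cosh(d)) = log((exp(d) + exp(-d)) / 2): smooth and outlier-robust
        return ne.mean(ne.log((ne.exp(d) + ne.exp(-d)) / 2))

def mae_metric(x, y):
    # Function-style metric: func(x, y) -> dict[str, float | np.ndarray]
    x, y = np.asarray(x.data), np.asarray(y.data)   # assumption: .data holds raw arrays
    return {"mae": float(np.mean(np.abs(x - y)))}

model = ne.Model(input_size=(16,), optimizer=ne.Adam(),
                 loss=LogCosh(), metrics=mae_metric)
```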

## Contribution Guide

NeuralEngine is an open-source project, and I warmly welcome all kinds of contributions, whether it's code, documentation, bug reports, feature ideas, or sharing cool examples. If you want to help make NeuralEngine better, you're in the right place!

### How to Contribute
- **Fork the repository** and create a new branch for your feature, fix, or documentation update.
- **Keep it clean and consistent**: Try to follow the existing code style, naming conventions, and documentation patterns. Well-commented, readable code is always appreciated!
- **Add tests** for new features or bug fixes if you can.
- **Document your changes**: Update or add docstrings and README sections so others can easily understand your work.
- **Open a pull request** describing what you've changed and why it's awesome.

### What Can You Contribute?
- New layers, loss functions, optimizers, metrics, or utility functions
- Improvements to existing components
- Bug fixes and performance tweaks
- Documentation updates and tutorials
- Example scripts and notebooks
- Feature requests, feedback, and ideas

Every contribution is reviewed for quality and consistency, but don't worry—if you have questions or need help, just open an issue or start a discussion. I'm happy to help and love seeing new faces in the community!

Thanks for making NeuralEngine better, together! 🚀

## License
MIT License with attribution clause. See LICENSE file for details.

## Attribution
If you use this project, please credit the original developer: Prajjwal Pratap Shah.

Special thanks to the Autograd Framework From Scratch project by Eduardo Leitão da Cunha Opice Leão, which served as a reference for tensor operations and autograd implementations.

            
