| Field | Value |
|-------|-------|
| Name | mayini-framework |
| Version | 0.1.0 |
| Home page | https://github.com/907-bot-collab/mayini |
| Summary | A comprehensive deep learning framework with Tensor operations, ANN, CNN, and RNN implementations |
| Upload time | 2025-10-06 15:21:17 |
| Maintainer | None |
| Docs URL | None |
| Author | Abhishek Adari |
| Requires Python | >=3.7 |
| License | MIT (full text in the raw metadata below) |
| Keywords | deep-learning, machine-learning, neural-networks, tensor, pytorch-like, framework |
| Requirements | numpy>=1.21.0, matplotlib>=3.5.0, seaborn>=0.11.0, tqdm>=4.64.0, scikit-learn>=1.1.0 |
| Travis CI | No |
| Coveralls coverage | No |
# MAYINI Deep Learning Framework
[PyPI version](https://badge.fury.io/py/mayini-framework)
[Python >=3.7](https://www.python.org/downloads/)
[License: MIT](https://opensource.org/licenses/MIT)
[Build status](https://github.com/yourusername/mayini-framework/actions)
MAYINI is a comprehensive deep learning framework built from scratch in Python, featuring automatic differentiation, neural network components, and complete training infrastructure. It's designed for educational purposes and research, providing a PyTorch-like API with full transparency into the underlying mechanics.
## 🚀 Key Features
### Core Engine
- **Tensor Operations**: Complete tensor class with automatic differentiation
- **Computational Graph**: Cycle detection and gradient computation
- **Broadcasting Support**: NumPy-style broadcasting for operations (see the sketch just below)
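Broadcasting interacts with autograd the way it does in NumPy and PyTorch: gradients of a broadcast operand are reduced back to its original shape. A small sketch using the same `Tensor` API as the Quick Start below (the 1-D operand and the gradient-reduction behavior are assumptions about the API, not confirmed details):

```python
import mayini as mn

# (2, 2) matrix plus (2,) vector: b is broadcast across both rows
a = mn.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
b = mn.Tensor([10.0, 20.0], requires_grad=True)

out = (a + b).sum()
out.backward()

print(a.grad)  # ones everywhere: each element of a appears once in the sum
print(b.grad)  # [2.0, 2.0]: each element of b is reused in two rows
```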
### Neural Network Components
- **Linear Layers**: Dense layers with multiple initialization methods (Xavier, He, Normal)
- **Convolutional Layers**: 2D convolution with im2col optimization
- **Pooling Layers**: Max and Average pooling with stride and padding support
- **Normalization**: Batch Normalization for improved training
- **Regularization**: Dropout, using the inverted-dropout formulation (sketched below)
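Inverted dropout rescales surviving activations at training time by `1/(1 - p)`, so inference is a plain identity. A minimal NumPy sketch of the idea (illustrative only, not MAYINI's internal implementation):

```python
import numpy as np

def inverted_dropout(x, p=0.5, training=True):
    """Zero each activation with probability p; scale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return x  # inference path: identity, no rescaling needed
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask
```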
### Activation Functions
- **Standard Functions**: ReLU, Sigmoid, Tanh, Softmax
- **Modern Activations**: GELU, Leaky ReLU
- **Numerical Stability**: Implemented with overflow/underflow protection (see the softmax sketch below)
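As an example of that protection, softmax is conventionally computed by subtracting the row maximum before exponentiating, which leaves the result unchanged but prevents overflow. An illustrative NumPy version:

```python
import numpy as np

def stable_softmax(logits, axis=-1):
    # Subtracting the max is a mathematical no-op for softmax,
    # but it keeps np.exp from overflowing on large logits.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=axis, keepdims=True)
```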
### Recurrent Neural Networks
- **RNN Cells**: Vanilla RNN with configurable activations
- **LSTM Cells**: Long Short-Term Memory with proper gate mechanisms (gate update sketched after this list)
- **GRU Cells**: Gated Recurrent Units for efficient sequence modeling
- **Multi-layer Support**: Stack multiple RNN layers with dropout
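The gate mechanism referenced above follows the standard LSTM formulation; a compact NumPy sketch of one cell step (the weight layout and names here are hypothetical, for illustration only):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,), gates stacked i|f|g|o."""
    H = h.shape[-1]
    z = x @ W.T + h @ U.T + b
    i = sigmoid(z[..., 0:H])          # input gate
    f = sigmoid(z[..., H:2 * H])      # forget gate
    g = np.tanh(z[..., 2 * H:3 * H])  # candidate cell update
    o = sigmoid(z[..., 3 * H:4 * H])  # output gate
    c_new = f * c + i * g             # forget old state, admit new candidate
    h_new = o * np.tanh(c_new)        # expose a gated view of the cell state
    return h_new, c_new
```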
### Loss Functions
- **Regression**: MSE Loss, MAE Loss, Huber Loss
- **Classification**: Cross-Entropy Loss, Binary Cross-Entropy Loss
- **Flexible Reduction**: Support for `mean`, `sum`, and `none` reduction modes (illustrated below)
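The three reduction modes behave as the names suggest; sketched here for MSE in NumPy:

```python
import numpy as np

def mse(pred, target, reduction="mean"):
    err = (pred - target) ** 2
    if reduction == "mean":
        return err.mean()  # scalar, averaged over all elements
    if reduction == "sum":
        return err.sum()   # scalar, summed over all elements
    return err             # "none": per-element losses, original shape
```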
### Optimization Algorithms
- **SGD**: Stochastic Gradient Descent with momentum and weight decay
- **Adam**: Adaptive moment estimation with bias correction (update rule sketched after this list)
- **AdamW**: Adam with decoupled weight decay
- **RMSprop**: Root Mean Square Propagation
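For reference, the Adam update with bias correction looks like this in NumPy (a sketch for a single parameter, with hypothetical names; not MAYINI's internal code):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are running moment estimates, t counts steps from 1."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction: moments start at zero
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```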
### Training Infrastructure
- **DataLoader**: Efficient batch processing with shuffling
- **Metrics**: Comprehensive evaluation (accuracy, precision, recall, F1)
- **Early Stopping**: Prevent overfitting with validation monitoring
- **Learning Rate Scheduling**: Step, exponential, and cosine annealing schedulers (cosine form shown below)
- **Checkpointing**: Save and restore model states
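Cosine annealing, for example, sweeps the learning rate along a half cosine from its initial value down to a floor over `t_max` steps:

```python
import math

def cosine_annealing_lr(step, t_max, lr_max, lr_min=0.0):
    """lr_min + (lr_max - lr_min) * (1 + cos(pi * step / t_max)) / 2"""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / t_max))
```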
## 📦 Installation
### From PyPI
```bash
pip install mayini-framework
```
### From Source
```bash
git clone https://github.com/yourusername/mayini-framework.git
cd mayini-framework
pip install -e .
```
### Development Installation
```bash
git clone https://github.com/yourusername/mayini-framework.git
cd mayini-framework
pip install -e ".[dev]"
```
## 🏃 Quick Start
### Basic Tensor Operations
```python
import mayini as mn

# Create tensors with automatic differentiation
x = mn.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = mn.Tensor([[2.0, 1.0], [1.0, 2.0]], requires_grad=True)

# Perform operations
z = x.matmul(y) + x * 2
loss = z.sum()

# Automatic differentiation
loss.backward()
print(f"Gradient of x: {x.grad}")
print(f"Gradient of y: {y.grad}")
```
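For this small example the result can be checked by hand: with `loss = Σ(xy + 2x)`, the gradients are `∂loss/∂x = 1·yᵀ + 2 = [[5, 5], [5, 5]]` and `∂loss/∂y = xᵀ·1 = [[4, 4], [6, 6]]`, where `1` is the 2×2 all-ones matrix.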
### Building Neural Networks
```python
from mayini.nn import Sequential, Linear, ReLU, Softmax
import numpy as np

# Create a simple neural network
model = Sequential(
    Linear(784, 256, init_method='he'),
    ReLU(),
    Linear(256, 128, init_method='he'),
    ReLU(),
    Linear(128, 10),
    Softmax(dim=1)
)

# Forward pass
x = mn.Tensor(np.random.randn(32, 784))
output = model(x)
print(f"Output shape: {output.shape}")
```
### Training a Model
```python
from mayini.optim import Adam
from mayini.nn import CrossEntropyLoss
from mayini.data import DataLoader
from mayini.training import Trainer

# Setup training components (X_train and y_train are NumPy arrays; see note below)
optimizer = Adam(model.parameters(), lr=0.001)
criterion = CrossEntropyLoss()
train_loader = DataLoader(X_train, y_train, batch_size=64, shuffle=True)

# Create trainer and train
trainer = Trainer(model, optimizer, criterion)
history = trainer.fit(train_loader, epochs=10, verbose=True)
```
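Here `X_train` and `y_train` are assumed to be plain NumPy arrays of inputs and integer class labels. For a quick smoke test you could substitute synthetic data (hypothetical shapes matching the model above):

```python
import numpy as np

# Placeholder data only: 512 random 784-dim samples, labels in [0, 10)
X_train = np.random.randn(512, 784).astype(np.float32)
y_train = np.random.randint(0, 10, size=512)
```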
### Convolutional Neural Networks
```python
from mayini.nn import Conv2D, MaxPool2D, Flatten

# CNN for image classification (28x28 grayscale inputs, e.g. MNIST:
# two 2x2 poolings reduce 28x28 to 7x7, hence the 64 * 7 * 7 below)
cnn_model = Sequential(
    Conv2D(1, 32, kernel_size=3, padding=1),
    ReLU(),
    MaxPool2D(kernel_size=2),
    Conv2D(32, 64, kernel_size=3, padding=1),
    ReLU(),
    MaxPool2D(kernel_size=2),
    Flatten(),
    Linear(64 * 7 * 7, 128),
    ReLU(),
    Linear(128, 10),
    Softmax(dim=1)
)
```
### Recurrent Neural Networks
```python
from mayini.nn import RNN, LSTMCell  # LSTMCell is also exposed for cell-level use

# LSTM for sequence modeling
lstm_model = RNN(
    input_size=100,
    hidden_size=128,
    num_layers=2,
    cell_type='lstm',
    dropout=0.2,
    batch_first=True
)

# Process sequences
x_seq = mn.Tensor(np.random.randn(32, 50, 100))  # (batch, seq_len, features)
output, hidden_states = lstm_model(x_seq)
```
## 📚 Documentation
### API Reference
#### Core Components
- **Tensor**: Core tensor class with automatic differentiation
- **Module**: Base class for all neural network modules
- **Sequential**: Container for chaining modules
#### Neural Network Layers
- **Linear**: Fully connected layer
- **Conv2D**: 2D convolutional layer
- **MaxPool2D, AvgPool2D**: Pooling layers
- **BatchNorm1d**: Batch normalization
- **Dropout**: Dropout regularization
#### Activation Functions
- **ReLU, Sigmoid, Tanh, Softmax**: Standard activations
- **GELU, LeakyReLU**: Modern activation functions
#### Loss Functions
- **MSELoss**: Mean squared error
- **CrossEntropyLoss**: Cross-entropy for classification
- **BCELoss**: Binary cross-entropy
- **HuberLoss**: Robust loss for regression
#### Optimizers
- **SGD**: Stochastic gradient descent
- **Adam**: Adaptive moment estimation
- **AdamW**: Adam with decoupled weight decay
- **RMSprop**: Root mean square propagation
### Examples
Complete examples are available in the `examples/` directory:
- **MNIST Classification**: Train a neural network on handwritten digits
- **CIFAR-10 CNN**: Convolutional neural network for image classification
- **Text Classification**: RNN/LSTM for sequence classification
- **Time Series Prediction**: Forecasting with recurrent networks
## 🧪 Testing
Run the test suite:
```bash
pytest tests/
```
Run with coverage:
```bash
pytest --cov=mayini tests/
```
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Development Setup
```bash
git clone https://github.com/yourusername/mayini-framework.git
cd mayini-framework
pip install -e ".[dev]"
pre-commit install
```
### Running Tests and Linters
```bash
pytest tests/
black src/
flake8 src/
```
## 📖 Educational Use
MAYINI is designed with education in mind. Each component is implemented from scratch with clear, readable code and comprehensive documentation. It's perfect for:
- **Learning Deep Learning**: Understand how neural networks work under the hood
- **Research Projects**: Prototype new architectures and algorithms
- **Teaching**: Demonstrate concepts with transparent implementations
- **Experimentation**: Quick prototyping of ideas
## 🔬 Comparison with Other Frameworks
| Feature | MAYINI | PyTorch | TensorFlow |
|---------|--------|---------|------------|
| Educational Focus | ✅ | ❌ | ❌ |
| Transparent Implementation | ✅ | ❌ | ❌ |
| Automatic Differentiation | ✅ | ✅ | ✅ |
| GPU Support | ❌ | ✅ | ✅ |
| Production Ready | ❌ | ✅ | ✅ |
| Easy to Understand | ✅ | ⚠️ | ❌ |
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Inspired by PyTorch's design philosophy
- Built for educational purposes and research
- Thanks to the open-source community for inspiration
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/yourusername/mayini-framework/issues)
- **Documentation**: [Read the Docs](https://mayini-framework.readthedocs.io/)
- **Discussions**: [GitHub Discussions](https://github.com/yourusername/mayini-framework/discussions)
## 🗺️ Roadmap
- [ ] GPU support with CUDA
- [ ] More activation functions (Swish, Mish, etc.)
- [ ] Transformer components
- [ ] Model serialization/deserialization
- [ ] Distributed training support
- [ ] Mobile deployment utilities
---
**MAYINI** - Making AI Neural Intelligence Intuitive 🧠✨
Raw data
{
"_id": null,
"home_page": "https://github.com/907-bot-collab/mayini",
"name": "mayini-framework",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": "Palivela Giridhar <nanipalivela830@gmail.com>",
"keywords": "deep-learning, machine-learning, neural-networks, tensor, pytorch-like, framework",
"author": "Abhishek Adari",
"author_email": "Adari Abhishek <abhishekadari85@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/7a/03/092c2e0fa7fb6e6a61246c9ad433a133fc57434e645d8e3fff05e6627862/mayini_framework-0.1.0.tar.gz",
"platform": null,
"description": "\n# MAYINI Deep Learning Framework\n\n[](https://badge.fury.io/py/mayini-framework)\n[](https://www.python.org/downloads/)\n[](https://opensource.org/licenses/MIT)\n[](https://github.com/yourusername/mayini-framework/actions)\n\nMAYINI is a comprehensive deep learning framework built from scratch in Python, featuring automatic differentiation, neural network components, and complete training infrastructure. It's designed for educational purposes and research, providing a PyTorch-like API with full transparency into the underlying mechanics.\n\n## \ud83d\ude80 Key Features\n\n### Core Engine\n- **Tensor Operations**: Complete tensor class with automatic differentiation\n- **Computational Graph**: Cycle detection and gradient computation\n- **Broadcasting Support**: NumPy-style broadcasting for operations\n\n### Neural Network Components\n- **Linear Layers**: Dense layers with multiple initialization methods (Xavier, He, Normal)\n- **Convolutional Layers**: 2D convolution with im2col optimization\n- **Pooling Layers**: Max and Average pooling with stride and padding support\n- **Normalization**: Batch Normalization for improved training\n- **Regularization**: Dropout with inverted dropout implementation\n\n### Activation Functions\n- **Standard Functions**: ReLU, Sigmoid, Tanh, Softmax\n- **Modern Activations**: GELU, Leaky ReLU\n- **Numerical Stability**: Implemented with overflow/underflow protection\n\n### Recurrent Neural Networks\n- **RNN Cells**: Vanilla RNN with configurable activations\n- **LSTM Cells**: Long Short-Term Memory with proper gate mechanisms\n- **GRU Cells**: Gated Recurrent Units for efficient sequence modeling\n- **Multi-layer Support**: Stack multiple RNN layers with dropout\n\n### Loss Functions\n- **Regression**: MSE Loss, MAE Loss, Huber Loss\n- **Classification**: Cross-Entropy Loss, Binary Cross-Entropy Loss\n- **Flexible Reduction**: Support for mean, sum, and none reduction modes\n\n### Optimization Algorithms\n- **SGD**: Stochastic Gradient Descent with momentum and weight decay\n- **Adam**: Adaptive moment estimation with bias correction\n- **AdamW**: Adam with decoupled weight decay\n- **RMSprop**: Root Mean Square Propagation\n\n### Training Infrastructure\n- **DataLoader**: Efficient batch processing with shuffling\n- **Metrics**: Comprehensive evaluation (accuracy, precision, recall, F1)\n- **Early Stopping**: Prevent overfitting with validation monitoring\n- **Learning Rate Scheduling**: Step, exponential, and cosine annealing schedulers\n- **Checkpointing**: Save and restore model states\n\n## \ud83d\udce6 Installation\n\n### From PyPI\n```bash\npip install mayini-framework\n```\n\n### From Source\n```bash\ngit clone https://github.com/yourusername/mayini-framework.git\ncd mayini-framework\npip install -e .\n```\n\n### Development Installation\n```bash\ngit clone https://github.com/yourusername/mayini-framework.git\ncd mayini-framework\npip install -e \".[dev]\"\n```\n\n## \ud83c\udfc3 Quick Start\n\n### Basic Tensor Operations\n```python\nimport mayini as mn\n\n# Create tensors with automatic differentiation\nx = mn.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)\ny = mn.Tensor([[2.0, 1.0], [1.0, 2.0]], requires_grad=True)\n\n# Perform operations\nz = x.matmul(y) + x * 2\nloss = z.sum()\n\n# Automatic differentiation\nloss.backward()\nprint(f\"Gradient of x: {x.grad}\")\nprint(f\"Gradient of y: {y.grad}\")\n```\n\n### Building Neural Networks\n```python\nfrom mayini.nn import Sequential, Linear, ReLU, Softmax\n\n# Create a simple 
neural network\nmodel = Sequential(\n Linear(784, 256, init_method='he'),\n ReLU(),\n Linear(256, 128, init_method='he'), \n ReLU(),\n Linear(128, 10),\n Softmax(dim=1)\n)\n\n# Forward pass\nx = mn.Tensor(np.random.randn(32, 784))\noutput = model(x)\nprint(f\"Output shape: {output.shape}\")\n```\n\n### Training a Model\n```python\nfrom mayini.optim import Adam\nfrom mayini.nn import CrossEntropyLoss\nfrom mayini.data import DataLoader\nfrom mayini.training import Trainer\n\n# Setup training components\noptimizer = Adam(model.parameters(), lr=0.001)\ncriterion = CrossEntropyLoss()\ntrain_loader = DataLoader(X_train, y_train, batch_size=64, shuffle=True)\n\n# Create trainer and train\ntrainer = Trainer(model, optimizer, criterion)\nhistory = trainer.fit(train_loader, epochs=10, verbose=True)\n```\n\n### Convolutional Neural Networks\n```python\nfrom mayini.nn import Conv2D, MaxPool2D, Flatten\n\n# CNN for image classification\ncnn_model = Sequential(\n Conv2D(1, 32, kernel_size=3, padding=1),\n ReLU(),\n MaxPool2D(kernel_size=2),\n Conv2D(32, 64, kernel_size=3, padding=1),\n ReLU(), \n MaxPool2D(kernel_size=2),\n Flatten(),\n Linear(64 * 7 * 7, 128),\n ReLU(),\n Linear(128, 10),\n Softmax(dim=1)\n)\n```\n\n### Recurrent Neural Networks\n```python\nfrom mayini.nn import RNN, LSTMCell\n\n# LSTM for sequence modeling\nlstm_model = RNN(\n input_size=100,\n hidden_size=128, \n num_layers=2,\n cell_type='lstm',\n dropout=0.2,\n batch_first=True\n)\n\n# Process sequences\nx_seq = mn.Tensor(np.random.randn(32, 50, 100)) # (batch, seq_len, features)\noutput, hidden_states = lstm_model(x_seq)\n```\n\n## \ud83d\udcda Documentation\n\n### API Reference\n\n#### Core Components\n- **Tensor**: Core tensor class with automatic differentiation\n- **Module**: Base class for all neural network modules\n- **Sequential**: Container for chaining modules\n\n#### Neural Network Layers\n- **Linear**: Fully connected layer\n- **Conv2D**: 2D convolutional layer\n- **MaxPool2D, AvgPool2D**: Pooling layers\n- **BatchNorm1d**: Batch normalization\n- **Dropout**: Dropout regularization\n\n#### Activation Functions\n- **ReLU, Sigmoid, Tanh, Softmax**: Standard activations\n- **GELU, LeakyReLU**: Modern activation functions\n\n#### Loss Functions\n- **MSELoss**: Mean squared error\n- **CrossEntropyLoss**: Cross-entropy for classification\n- **BCELoss**: Binary cross-entropy\n- **HuberLoss**: Robust loss for regression\n\n#### Optimizers\n- **SGD**: Stochastic gradient descent\n- **Adam**: Adaptive moment estimation\n- **AdamW**: Adam with decoupled weight decay\n- **RMSprop**: Root mean square propagation\n\n### Examples\n\nComplete examples are available in the `examples/` directory:\n- **MNIST Classification**: Train a neural network on handwritten digits\n- **CIFAR-10 CNN**: Convolutional neural network for image classification\n- **Text Classification**: RNN/LSTM for sequence classification\n- **Time Series Prediction**: Forecasting with recurrent networks\n\n## \ud83e\uddea Testing\n\nRun the test suite:\n```bash\npytest tests/\n```\n\nRun with coverage:\n```bash\npytest --cov=mayini tests/\n```\n\n## \ud83e\udd1d Contributing\n\nWe welcome contributions! 
Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.\n\n### Development Setup\n```bash\ngit clone https://github.com/yourusername/mayini-framework.git\ncd mayini-framework\npip install -e \".[dev]\"\npre-commit install\n```\n\n### Running Tests\n```bash\npytest tests/\nblack src/\nflake8 src/\n```\n\n## \ud83d\udcd6 Educational Use\n\nMAYINI is designed with education in mind. Each component is implemented from scratch with clear, readable code and comprehensive documentation. It's perfect for:\n\n- **Learning Deep Learning**: Understand how neural networks work under the hood\n- **Research Projects**: Prototype new architectures and algorithms\n- **Teaching**: Demonstrate concepts with transparent implementations\n- **Experimentation**: Quick prototyping of ideas\n\n## \ud83d\udd2c Comparison with Other Frameworks\n\n| Feature | MAYINI | PyTorch | TensorFlow |\n|---------|--------|---------|------------|\n| Educational Focus | \u2705 | \u274c | \u274c |\n| Transparent Implementation | \u2705 | \u274c | \u274c |\n| Automatic Differentiation | \u2705 | \u2705 | \u2705 |\n| GPU Support | \u274c | \u2705 | \u2705 |\n| Production Ready | \u274c | \u2705 | \u2705 |\n| Easy to Understand | \u2705 | \u26a0\ufe0f | \u274c |\n\n## \ud83d\udcc4 License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## \ud83d\ude4f Acknowledgments\n\n- Inspired by PyTorch's design philosophy\n- Built for educational purposes and research\n- Thanks to the open-source community for inspiration\n\n## \ud83d\udcde Support\n\n- **Issues**: [GitHub Issues](https://github.com/yourusername/mayini-framework/issues)\n- **Documentation**: [Read the Docs](https://mayini-framework.readthedocs.io/)\n- **Discussions**: [GitHub Discussions](https://github.com/yourusername/mayini-framework/discussions)\n\n## \ud83d\uddfa\ufe0f Roadmap\n\n- [ ] GPU support with CUDA\n- [ ] More activation functions (Swish, Mish, etc.)\n- [ ] Transformer components\n- [ ] Model serialization/deserialization\n- [ ] Distributed training support\n- [ ] Mobile deployment utilities\n\n---\n\n**MAYINI** - Making AI Neural Intelligence Intuitive \ud83e\udde0\u2728# MAYINI Deep Learning Framework\n\n[](https://badge.fury.io/py/mayini-framework)\n[](https://www.python.org/downloads/)\n[](https://opensource.org/licenses/MIT)\n[](https://github.com/yourusername/mayini-framework/actions)\n\nMAYINI is a comprehensive deep learning framework built from scratch in Python, featuring automatic differentiation, neural network components, and complete training infrastructure. 
It's designed for educational purposes and research, providing a PyTorch-like API with full transparency into the underlying mechanics.\n\n## \ud83d\ude80 Key Features\n\n### Core Engine\n- **Tensor Operations**: Complete tensor class with automatic differentiation\n- **Computational Graph**: Cycle detection and gradient computation\n- **Broadcasting Support**: NumPy-style broadcasting for operations\n\n### Neural Network Components\n- **Linear Layers**: Dense layers with multiple initialization methods (Xavier, He, Normal)\n- **Convolutional Layers**: 2D convolution with im2col optimization\n- **Pooling Layers**: Max and Average pooling with stride and padding support\n- **Normalization**: Batch Normalization for improved training\n- **Regularization**: Dropout with inverted dropout implementation\n\n### Activation Functions\n- **Standard Functions**: ReLU, Sigmoid, Tanh, Softmax\n- **Modern Activations**: GELU, Leaky ReLU\n- **Numerical Stability**: Implemented with overflow/underflow protection\n\n### Recurrent Neural Networks\n- **RNN Cells**: Vanilla RNN with configurable activations\n- **LSTM Cells**: Long Short-Term Memory with proper gate mechanisms\n- **GRU Cells**: Gated Recurrent Units for efficient sequence modeling\n- **Multi-layer Support**: Stack multiple RNN layers with dropout\n\n### Loss Functions\n- **Regression**: MSE Loss, MAE Loss, Huber Loss\n- **Classification**: Cross-Entropy Loss, Binary Cross-Entropy Loss\n- **Flexible Reduction**: Support for mean, sum, and none reduction modes\n\n### Optimization Algorithms\n- **SGD**: Stochastic Gradient Descent with momentum and weight decay\n- **Adam**: Adaptive moment estimation with bias correction\n- **AdamW**: Adam with decoupled weight decay\n- **RMSprop**: Root Mean Square Propagation\n\n### Training Infrastructure\n- **DataLoader**: Efficient batch processing with shuffling\n- **Metrics**: Comprehensive evaluation (accuracy, precision, recall, F1)\n- **Early Stopping**: Prevent overfitting with validation monitoring\n- **Learning Rate Scheduling**: Step, exponential, and cosine annealing schedulers\n- **Checkpointing**: Save and restore model states\n\n## \ud83d\udce6 Installation\n\n### From PyPI\n```bash\npip install mayini-framework\n```\n\n### From Source\n```bash\ngit clone https://github.com/yourusername/mayini-framework.git\ncd mayini-framework\npip install -e .\n```\n\n### Development Installation\n```bash\ngit clone https://github.com/yourusername/mayini-framework.git\ncd mayini-framework\npip install -e \".[dev]\"\n```\n\n## \ud83c\udfc3 Quick Start\n\n### Basic Tensor Operations\n```python\nimport mayini as mn\n\n# Create tensors with automatic differentiation\nx = mn.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)\ny = mn.Tensor([[2.0, 1.0], [1.0, 2.0]], requires_grad=True)\n\n# Perform operations\nz = x.matmul(y) + x * 2\nloss = z.sum()\n\n# Automatic differentiation\nloss.backward()\nprint(f\"Gradient of x: {x.grad}\")\nprint(f\"Gradient of y: {y.grad}\")\n```\n\n### Building Neural Networks\n```python\nfrom mayini.nn import Sequential, Linear, ReLU, Softmax\n\n# Create a simple neural network\nmodel = Sequential(\n Linear(784, 256, init_method='he'),\n ReLU(),\n Linear(256, 128, init_method='he'), \n ReLU(),\n Linear(128, 10),\n Softmax(dim=1)\n)\n\n# Forward pass\nx = mn.Tensor(np.random.randn(32, 784))\noutput = model(x)\nprint(f\"Output shape: {output.shape}\")\n```\n\n### Training a Model\n```python\nfrom mayini.optim import Adam\nfrom mayini.nn import CrossEntropyLoss\nfrom mayini.data import 
DataLoader\nfrom mayini.training import Trainer\n\n# Setup training components\noptimizer = Adam(model.parameters(), lr=0.001)\ncriterion = CrossEntropyLoss()\ntrain_loader = DataLoader(X_train, y_train, batch_size=64, shuffle=True)\n\n# Create trainer and train\ntrainer = Trainer(model, optimizer, criterion)\nhistory = trainer.fit(train_loader, epochs=10, verbose=True)\n```\n\n### Convolutional Neural Networks\n```python\nfrom mayini.nn import Conv2D, MaxPool2D, Flatten\n\n# CNN for image classification\ncnn_model = Sequential(\n Conv2D(1, 32, kernel_size=3, padding=1),\n ReLU(),\n MaxPool2D(kernel_size=2),\n Conv2D(32, 64, kernel_size=3, padding=1),\n ReLU(), \n MaxPool2D(kernel_size=2),\n Flatten(),\n Linear(64 * 7 * 7, 128),\n ReLU(),\n Linear(128, 10),\n Softmax(dim=1)\n)\n```\n\n### Recurrent Neural Networks\n```python\nfrom mayini.nn import RNN, LSTMCell\n\n# LSTM for sequence modeling\nlstm_model = RNN(\n input_size=100,\n hidden_size=128, \n num_layers=2,\n cell_type='lstm',\n dropout=0.2,\n batch_first=True\n)\n\n# Process sequences\nx_seq = mn.Tensor(np.random.randn(32, 50, 100)) # (batch, seq_len, features)\noutput, hidden_states = lstm_model(x_seq)\n```\n\n## \ud83d\udcda Documentation\n\n### API Reference\n\n#### Core Components\n- **Tensor**: Core tensor class with automatic differentiation\n- **Module**: Base class for all neural network modules\n- **Sequential**: Container for chaining modules\n\n#### Neural Network Layers\n- **Linear**: Fully connected layer\n- **Conv2D**: 2D convolutional layer\n- **MaxPool2D, AvgPool2D**: Pooling layers\n- **BatchNorm1d**: Batch normalization\n- **Dropout**: Dropout regularization\n\n#### Activation Functions\n- **ReLU, Sigmoid, Tanh, Softmax**: Standard activations\n- **GELU, LeakyReLU**: Modern activation functions\n\n#### Loss Functions\n- **MSELoss**: Mean squared error\n- **CrossEntropyLoss**: Cross-entropy for classification\n- **BCELoss**: Binary cross-entropy\n- **HuberLoss**: Robust loss for regression\n\n#### Optimizers\n- **SGD**: Stochastic gradient descent\n- **Adam**: Adaptive moment estimation\n- **AdamW**: Adam with decoupled weight decay\n- **RMSprop**: Root mean square propagation\n\n### Examples\n\nComplete examples are available in the `examples/` directory:\n- **MNIST Classification**: Train a neural network on handwritten digits\n- **CIFAR-10 CNN**: Convolutional neural network for image classification\n- **Text Classification**: RNN/LSTM for sequence classification\n- **Time Series Prediction**: Forecasting with recurrent networks\n\n## \ud83e\uddea Testing\n\nRun the test suite:\n```bash\npytest tests/\n```\n\nRun with coverage:\n```bash\npytest --cov=mayini tests/\n```\n\n## \ud83e\udd1d Contributing\n\nWe welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.\n\n### Development Setup\n```bash\ngit clone https://github.com/yourusername/mayini-framework.git\ncd mayini-framework\npip install -e \".[dev]\"\npre-commit install\n```\n\n### Running Tests\n```bash\npytest tests/\nblack src/\nflake8 src/\n```\n\n## \ud83d\udcd6 Educational Use\n\nMAYINI is designed with education in mind. Each component is implemented from scratch with clear, readable code and comprehensive documentation. 
It's perfect for:\n\n- **Learning Deep Learning**: Understand how neural networks work under the hood\n- **Research Projects**: Prototype new architectures and algorithms\n- **Teaching**: Demonstrate concepts with transparent implementations\n- **Experimentation**: Quick prototyping of ideas\n\n## \ud83d\udd2c Comparison with Other Frameworks\n\n| Feature | MAYINI | PyTorch | TensorFlow |\n|---------|--------|---------|------------|\n| Educational Focus | \u2705 | \u274c | \u274c |\n| Transparent Implementation | \u2705 | \u274c | \u274c |\n| Automatic Differentiation | \u2705 | \u2705 | \u2705 |\n| GPU Support | \u274c | \u2705 | \u2705 |\n| Production Ready | \u274c | \u2705 | \u2705 |\n| Easy to Understand | \u2705 | \u26a0\ufe0f | \u274c |\n\n## \ud83d\udcc4 License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## \ud83d\ude4f Acknowledgments\n\n- Inspired by PyTorch's design philosophy\n- Built for educational purposes and research\n- Thanks to the open-source community for inspiration\n\n## \ud83d\udcde Support\n\n- **Issues**: [GitHub Issues](https://github.com/yourusername/mayini-framework/issues)\n- **Documentation**: [Read the Docs](https://mayini-framework.readthedocs.io/)\n- **Discussions**: [GitHub Discussions](https://github.com/yourusername/mayini-framework/discussions)\n\n## \ud83d\uddfa\ufe0f Roadmap\n\n- [ ] GPU support with CUDA\n- [ ] More activation functions (Swish, Mish, etc.)\n- [ ] Transformer components\n- [ ] Model serialization/deserialization\n- [ ] Distributed training support\n- [ ] Mobile deployment utilities\n\n---\n\n**MAYINI** - Making AI Neural Intelligence Intuitive \ud83e\udde0\u2728\n",
"bugtrack_url": null,
"license": "MIT License\n \n Copyright (c) 2025 MAYINI Framework\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.",
"summary": "A comprehensive deep learning framework with Tensor operations, ANN, CNN, and RNN implementations",
"version": "0.1.0",
"project_urls": {
"Bug Tracker": "https://github.com/907-bot-collab/mayini/issues",
"Changelog": "https://github.com/907-bot-collab/mayini/blob/main/CHANGELOG.md",
"Documentation": "https://mayini-framework.readthedocs.io/",
"Homepage": "https://github.com/907-bot-collab/mayini",
"Repository": "https://github.com/907-bot-collab/mayini"
},
"split_keywords": [
"deep-learning",
" machine-learning",
" neural-networks",
" tensor",
" pytorch-like",
" framework"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "ea75c0b0e80191f5fcc8165611d0d45790ec26778dbfe484d881fe3bd64a5cff",
"md5": "677a3689d8bbf81e462a83f6874e7783",
"sha256": "2863f7d58c6af38833f73b137035103fe6e88f33d882ccbb24cf4fe810a05220"
},
"downloads": -1,
"filename": "mayini_framework-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "677a3689d8bbf81e462a83f6874e7783",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 31801,
"upload_time": "2025-10-06T15:21:16",
"upload_time_iso_8601": "2025-10-06T15:21:16.036095Z",
"url": "https://files.pythonhosted.org/packages/ea/75/c0b0e80191f5fcc8165611d0d45790ec26778dbfe484d881fe3bd64a5cff/mayini_framework-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "7a03092c2e0fa7fb6e6a61246c9ad433a133fc57434e645d8e3fff05e6627862",
"md5": "40cafb41d6bbca223711af90e3135d4d",
"sha256": "344398e6de31a04953a20bffd4bcda0ff2ac5e6a2dd51cc6fd4bb16a1481c212"
},
"downloads": -1,
"filename": "mayini_framework-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "40cafb41d6bbca223711af90e3135d4d",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 36338,
"upload_time": "2025-10-06T15:21:17",
"upload_time_iso_8601": "2025-10-06T15:21:17.658499Z",
"url": "https://files.pythonhosted.org/packages/7a/03/092c2e0fa7fb6e6a61246c9ad433a133fc57434e645d8e3fff05e6627862/mayini_framework-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-06 15:21:17",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "907-bot-collab",
"github_project": "mayini",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "numpy",
"specs": [
[
">=",
"1.21.0"
]
]
},
{
"name": "matplotlib",
"specs": [
[
">=",
"3.5.0"
]
]
},
{
"name": "seaborn",
"specs": [
[
">=",
"0.11.0"
]
]
},
{
"name": "tqdm",
"specs": [
[
">=",
"4.64.0"
]
]
},
{
"name": "scikit-learn",
"specs": [
[
">=",
"1.1.0"
]
]
}
],
"lcname": "mayini-framework"
}