# 🧠 ConvNet-NumPy
[PyPI](https://pypi.org/project/convnet/) · [Python](https://www.python.org/downloads/) · [NumPy](https://numpy.org/) · [CuPy](https://cupy.dev/) · [License](LICENSE.md)
> **A clean, educational Convolutional Neural Network framework built from scratch using pure Python and NumPy**
This project was created as a school assignment with the goal of understanding deep learning from the ground up. It's designed to be **easy to understand** and **learn from**, implementing a complete CNN framework that uses only NumPy for core computations. Additional modules are used only for progress display (tqdm), JIT acceleration (Numba), model serialization (h5py), and optional GPU acceleration (CuPy).
---
## 🌟 Features
### Core Functionality
- ✅ **Pure NumPy Core** - All neural network math implemented from scratch
- 🔥 **Complete CNN Support** - Conv2D, MaxPool2D, Flatten, Dense layers
- 📊 **Modern Training** - Batch normalization, dropout, early stopping
- 🎯 **Smart Optimizers** - SGD with momentum and Adam optimizer
- 📈 **Learning Rate Scheduling** - Plateau-based LR reduction
- 💾 **Model Persistence** - Save/load models in HDF5 or NPZ format
- 🔄 **Data Augmentation Ready** - Thread-pooled data loading
### Performance Enhancements
- ⚡ **Numba JIT Compilation** - Automatic acceleration of critical operations
- 🚀 **Optional GPU Support** - CUDA acceleration via CuPy
- 🧵 **Multi-threading** - Auto-configured BLAS threads for CPU optimization
- 📦 **Batch Processing** - Efficient mini-batch training
### Developer Experience
- 📚 **Clean Code** - Well-documented and easy to follow
- 🎓 **Educational** - Built for learning deep learning fundamentals
- 🔧 **Modular Design** - Easy to extend and customize
- 💻 **Examples Included** - MNIST training example and GUI demo
---
## 🚀 Quick Start
### Installation
**Install from PyPI (Recommended):**
```bash
# Install the latest version from PyPI
pip install convnet
# Or install with GPU support
pip install "convnet[cuda11]"  # For CUDA 11.x (quotes keep shells like zsh from globbing the brackets)
pip install "convnet[cuda12]"  # For CUDA 12.x
pip install "convnet[cuda13]"  # For CUDA 13.x
```
**Install from Source:**
```bash
# Clone the repository
git clone https://github.com/codinggamer-dev/ConvNet-NumPy.git
cd ConvNet-NumPy
# Install in development mode
pip install -e .
```
### Your First Neural Network in 10 Lines
```python
from convnet import Model
from convnet.layers import Conv2D, Activation, MaxPool2D, Flatten, Dense
# Build a simple CNN
model = Model([
    Conv2D(8, (3, 3)), Activation('relu'),
    MaxPool2D((2, 2)),
    Flatten(),
    Dense(10)
])
# Configure training
model.compile(loss='categorical_crossentropy', optimizer='adam', lr=0.001)
# Train on your data
history = model.fit(train_dataset, epochs=10, batch_size=32)
```
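The snippet above assumes you already have a `train_dataset`. If you just want to smoke-test the pipeline, a toy dataset can be built with the same `data.Dataset` wrapper used in the MNIST example below. The image shape and channel layout here are assumptions for illustration; check what `Conv2D` in `convnet/layers.py` actually expects before training on real data:
```python
import numpy as np
from convnet import data

# Hypothetical toy data: 100 grayscale 28x28 images with random labels for 10 classes.
# The (N, H, W, C) layout is an assumption, not a documented guarantee.
images = np.random.rand(100, 28, 28, 1).astype(np.float32)
labels = np.random.randint(0, 10, size=100)
train_dataset = data.Dataset(images, labels)

# One quick epoch to confirm everything is wired up
history = model.fit(train_dataset, epochs=1, batch_size=32, num_classes=10)
```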
---
## 📖 Complete MNIST Example
Here's a full example training a CNN on MNIST:
```python
import numpy as np
from convnet import Model, data
from convnet.layers import Conv2D, Activation, MaxPool2D, Flatten, Dense, Dropout
# Load MNIST data
train_data, test_data = data.load_mnist_gz('mnist_dataset')
# Build the model
model = Model([
    Conv2D(8, (3, 3)), Activation('relu'),
    MaxPool2D((2, 2)),
    Conv2D(16, (3, 3)), Activation('relu'),
    MaxPool2D((2, 2)),
    Flatten(),
    Dense(64), Activation('relu'), Dropout(0.2),
    Dense(10)  # 10 classes for MNIST
])
# Compile with Adam optimizer
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    lr=0.001,
    weight_decay=1e-4,
    clip_norm=5.0
)
# Create validation split
split_idx = int(0.9 * len(train_data))
X_val = train_data.images[split_idx:].astype(np.float32) / 255.0
y_val = train_data.labels[split_idx:]
train_subset = data.Dataset(train_data.images[:split_idx], train_data.labels[:split_idx])
# Train with early stopping and LR scheduling
history = model.fit(
    train_subset,
    epochs=100,
    batch_size=256,
    num_classes=10,
    val_data=(X_val, y_val),
    early_stopping=True,
    patience=15,
    lr_schedule='plateau',
    lr_factor=0.5,
    lr_patience=4
)
# Save the model
model.save('my_mnist_model.hdf5')
# Later... load and use
loaded_model = Model.load('my_mnist_model.hdf5')
test_images = test_data.images.astype(np.float32) / 255.0  # normalize like the validation split
predictions = loaded_model.predict(test_images)
```
---
## 🧩 Architecture Components
### Available Layers
| Layer | Description | Parameters |
|-------|-------------|------------|
| `Conv2D(filters, kernel_size)` | 2D Convolutional layer | `filters`, `kernel_size`, `stride`, `padding` |
| `Dense(units)` | Fully connected layer | `units`, `use_bias` |
| `MaxPool2D(pool_size)` | Max pooling layer | `pool_size`, `stride` |
| `Activation(type)` | Activation function | `'relu'`, `'tanh'`, `'sigmoid'`, `'softmax'` |
| `Flatten()` | Reshape to 1D | None |
| `Dropout(rate)` | Dropout regularization | `rate` (0.0 to 1.0) |
| `BatchNorm2D()` | Batch normalization | `momentum`, `epsilon` |
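To illustrate the optional parameters listed above, here is a hedged construction sketch. The parameter names come straight from the table, but the example values (and the exact padding/stride conventions) are assumptions — check `convnet/layers.py` for the accepted forms:
```python
from convnet.layers import Conv2D, MaxPool2D, Dropout, BatchNorm2D

conv = Conv2D(16, (3, 3), stride=1, padding=1)  # padding value convention assumed
pool = MaxPool2D((2, 2), stride=2)              # explicit stride; default assumed to match pool size
drop = Dropout(0.5)                             # rate in [0.0, 1.0]
bn = BatchNorm2D(momentum=0.9, epsilon=1e-5)    # example values, not necessarily the defaults
```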
### Optimizers
- **SGD** - Stochastic Gradient Descent with momentum
```python
model.compile(optimizer='sgd', lr=0.01, momentum=0.9)
```
- **Adam** - Adaptive Moment Estimation (recommended)
```python
model.compile(optimizer='adam', lr=0.001, beta1=0.9, beta2=0.999)
```
### Loss Functions
- `'categorical_crossentropy'` - For multi-class classification
- `'mse'` - Mean Squared Error for regression (see the example below)
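For a regression setup, the same `compile` call applies; a minimal sketch using the SGD keyword arguments documented in the Optimizers section above:
```python
# Regression: MSE loss with momentum SGD
model.compile(loss='mse', optimizer='sgd', lr=0.01, momentum=0.9)
```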
---
## 🎮 Examples & Demos
The `examples/` directory contains several demonstrations:
### 1. MNIST Training (`mnist_train-example.py`)
Complete training pipeline with early stopping, LR scheduling, and model persistence.
```bash
python examples/mnist_train-example.py
```
### 2. Interactive GUI Demo (`mnist_gui.py`)
Draw digits and see real-time predictions! Requires tkinter.
```bash
python examples/mnist_gui.py
```
### 3. GPU Training Test (`test_gpu_training.py`)
Benchmark GPU vs CPU performance.
```bash
python examples/test_gpu_training.py
```
### 4. Numba Benchmark (`benchmark_numba.py`)
Compare Numba JIT vs pure NumPy performance.
```bash
python examples/benchmark_numba.py
```
---
## ⚙️ Advanced Features
### GPU Acceleration
ConvNet-NumPy automatically detects and uses CUDA GPUs when CuPy is installed:
```bash
# Install with GPU support using extras
pip install "convnet[cuda11]"  # For CUDA 11.x
pip install "convnet[cuda12]"  # For CUDA 12.x
pip install "convnet[cuda13]"  # For CUDA 13.x
# Or install CuPy separately
pip install cupy-cuda11x # For CUDA 11.x
pip install cupy-cuda12x # For CUDA 12.x
pip install cupy-cuda13x # For CUDA 13.x
```
The framework will automatically:
- Move tensors to GPU
- Use GPU-accelerated operations
- Handle CPU ↔ GPU transfers transparently
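If you want to confirm that the GPU path can activate on your machine, here is a minimal check that queries CuPy directly (this uses CuPy's own API, not a convnet function):
```python
# Check whether CuPy is installed and can see a CUDA device
try:
    import cupy as cp
    print("CUDA devices visible:", cp.cuda.runtime.getDeviceCount())
except ImportError:
    print("CuPy not installed; convnet will run on the NumPy/CPU path")
```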
### Regularization
```python
model.compile(
    optimizer='adam',
    lr=0.001,
    weight_decay=1e-4,  # L2 regularization
    clip_norm=5.0       # Gradient clipping
)
```
### Learning Rate Scheduling
```python
history = model.fit(
    dataset,
    lr_schedule='plateau',  # Reduce LR when validation plateaus
    lr_factor=0.5,          # Multiply LR by 0.5
    lr_patience=5,          # Wait 5 epochs before reducing
    lr_min=1e-6             # Minimum learning rate
)
```
### Early Stopping
```python
history = model.fit(
    dataset,
    val_data=(X_val, y_val),
    early_stopping=True,
    patience=10,      # Stop after 10 epochs without improvement
    min_delta=0.001   # Minimum change to qualify as improvement
)
```
---
## 📊 Model Inspection
```python
# Print model architecture and parameter counts
model.summary()
# Output:
# Model summary:
# Conv2D: params=80
# Activation: params=0
# MaxPool2D: params=0
# Conv2D: params=1168
# Activation: params=0
# MaxPool2D: params=0
# Flatten: params=0
# Dense: params=40064
# Activation: params=0
# Dropout: params=0
# Dense: params=650
# Total params: 41962
```
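As a sanity check, most of the per-layer counts above can be reproduced by hand. A sketch of the arithmetic for the layers whose input sizes are fixed by the architecture (the first Dense layer's count additionally depends on the flattened spatial size, so it is omitted here):
```python
# Conv2D params = filters * (kernel_h * kernel_w * in_channels + 1 bias per filter)
conv1 = 8 * (3 * 3 * 1 + 1)    # grayscale input -> 80
conv2 = 16 * (3 * 3 * 8 + 1)   # 8 input channels from conv1 -> 1168
# Dense params = inputs * units + units (bias)
dense_out = 64 * 10 + 10       # Dense(10) fed by Dense(64) -> 650
print(conv1, conv2, dense_out)  # 80 1168 650, matching the summary above
```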
---
## 🔧 Configuration
### Thread Configuration
The framework automatically configures BLAS threads for optimal CPU performance. To opt out, set this environment variable before importing `convnet`:
```python
import os
os.environ['NN_DISABLE_AUTO_THREADS'] = '1' # Disable auto-configuration
import convnet
```
### Custom RNG Seeds
For reproducibility:
```python
import numpy as np
rng = np.random.default_rng(seed=42)
model = Model([
    Conv2D(8, (3, 3), rng=rng),
    Dense(10, rng=rng)
])
```
---
## 📚 Understanding the Code
This project is designed for learning. Here's how to explore:
### Start Here
1. **`convnet/layers.py`** - See how Conv2D, Dense, and other layers work
2. **`convnet/model.py`** - Understand forward/backward propagation
3. **`convnet/optim.py`** - Learn how optimizers update weights
4. **`examples/mnist_train-example.py`** - Complete training example
### Key Concepts Implemented
- 🔄 **Backpropagation** - Full gradient computation chain
- 📉 **Gradient Descent** - SGD and Adam optimization
- 🎲 **Weight Initialization** - Glorot/Xavier uniform
- 🧮 **Convolution Math** - Pure NumPy implementation
- 📊 **Batch Normalization** - Running mean/variance tracking
- 🎯 **Softmax & Cross-Entropy** - Numerically stable implementation (see the sketch below)
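To make the last point concrete, here is a standalone NumPy sketch of the standard numerical-stability trick (shifting logits by their row max before exponentiating, and clipping before the log). It illustrates the idea and is not copied from `convnet/losses.py`:
```python
import numpy as np

def stable_softmax(logits):
    # Subtracting the row-wise max leaves softmax unchanged
    # but prevents exp() from overflowing on large logits.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def cross_entropy(probs, one_hot, eps=1e-12):
    # Clip to avoid log(0) for confidently wrong predictions.
    return -np.mean(np.sum(one_hot * np.log(np.clip(probs, eps, 1.0)), axis=1))

logits = np.array([[2.0, 1.0, 0.1],
                   [1000.0, 0.0, -1000.0]])  # second row would overflow a naive softmax
targets = np.eye(3)[[0, 0]]                  # one-hot labels, both class 0
print(cross_entropy(stable_softmax(logits), targets))
```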
---
## 🎯 Project Goals
This framework was built to:
1. **Understand** deep learning by implementing it from scratch
2. **Learn** how CNNs actually work under the hood
3. **Teach** others the fundamentals of neural networks
4. **Provide** a clean, readable codebase for education
**Not for production use** - Use PyTorch, TensorFlow, or JAX for real applications!
---
## 📦 Project Structure
```
ConvNet-NumPy/
├── convnet/                  # Core framework
│   ├── __init__.py           # Package initialization & auto-config
│   ├── layers.py             # Layer implementations
│   ├── model.py              # Model class with training loop
│   ├── optim.py              # Optimizers (SGD, Adam)
│   ├── losses.py             # Loss functions
│   ├── data.py               # Data loading utilities
│   ├── utils.py              # Helper functions
│   ├── cuda.py               # GPU acceleration wrapper
│   ├── numba_ops.py          # JIT-compiled operations
│   └── io.py                 # Model save/load
├── examples/                 # Example scripts
│   ├── mnist_train-example.py
│   ├── mnist_gui.py
│   ├── test_gpu_training.py
│   └── benchmark_numba.py
├── requirements.txt          # Dependencies
├── setup.py                  # Package setup
├── LICENSE.md                # MIT License
└── README.md                 # This file
```
---
## 🤝 Contributing
This is an educational project, but contributions are welcome! Feel free to:
- 🐛 Report bugs
- 💡 Suggest improvements
- 📖 Improve documentation
- ✨ Add new features
---
## 📝 Requirements
### Core Dependencies
- **Python** 3.8 or higher
- **NumPy** ≥ 1.20.0 (the star of the show! 🌟)
- **tqdm** ≥ 4.60.0 (progress bars)
- **h5py** ≥ 3.0.0 (model serialization)
- **Numba** ≥ 0.56.0 (JIT compilation)
### Optional Dependencies
- **CuPy** ≥ 10.0.0 (GPU acceleration)
- **tkinter** (for GUI demo, usually included with Python)
---
## 📄 License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.
Copyright (c) 2025 Tim Bauer
---
## 🙏 Acknowledgments
- Built as a school project to learn deep learning fundamentals
- Inspired by PyTorch and TensorFlow's clean APIs
- Thanks to the NumPy, Numba, and CuPy teams for amazing tools
- MNIST dataset by Yann LeCun and Corinna Cortes - the perfect dataset for learning CNNs
---
## 💬 Questions?
Feel free to open an issue on GitHub if you have questions or run into problems!
---
<div align="center">
**Made with ❤️ for learning and education**
⭐ If this helped you understand CNNs better, consider giving it a star! ⭐
</div>