## APTx Neuron
This repository provides the PyTorch implementation of the APTx Neuron, introduced in the paper "APTx Neuron: A Unified Trainable Neuron Architecture Integrating Activation and Computation".
**Paper Title**: APTx Neuron: A Unified Trainable Neuron Architecture Integrating Activation and Computation
**Author**: [Ravin Kumar](https://mr-ravin.github.io)
**Sources**:
- [arXiv.org](https://arxiv.org/abs/2507.14270)
- [ResearchGate](https://www.researchgate.net/publication/393889376_APTx_Neuron_A_Unified_Trainable_Neuron_Architecture_Integrating_Activation_and_Computation)
#### GitHub Repositories:
- **APTx Neuron** (PyTorch + PyPI Package): [APTx Neuron](https://github.com/mr-ravin/aptx_neuron)
- **APTx Activation Function** (PyTorch + PyPI Package): [APTx Activation Function](https://github.com/mr-ravin/aptx_activation)
- **Experimentation Results with MNIST** (APTx Neuron): [MNIST Experimentation Code](https://github.com/mr-ravin/APTxNeuron)
#### Cite Paper as:
```
Kumar, Ravin. "APTx Neuron: A Unified Trainable Neuron Architecture Integrating Activation and Computation." arXiv preprint arXiv:2507.14270 (2025).
```
Or,
```
@article{kumar2025aptx,
  title={APTx Neuron: A Unified Trainable Neuron Architecture Integrating Activation and Computation},
  author={Kumar, Ravin},
  journal={arXiv preprint arXiv:2507.14270},
  year={2025}
}
```
---
### APTx Neuron
<b>Abstract</b>: We propose the APTx Neuron, a novel, unified neural computation unit that integrates non-linear activation and linear transformation into a single trainable expression. The APTx Neuron is derived from the [APTx activation function](https://arxiv.org/abs/2209.06119), thereby eliminating the need for separate activation layers and making the architecture both computationally efficient and elegant. The proposed neuron follows the functional form $y = \sum_{i=1}^{n} ((\alpha_i + \tanh(\beta_i x_i)) \cdot \gamma_i x_i) + \delta$, where all parameters $\alpha_i$, $\beta_i$, $\gamma_i$, and $\delta$ are trainable. We validate our APTx Neuron-based architecture on the MNIST dataset, achieving up to 96.69% test accuracy within 11 epochs using approximately 332K trainable parameters. The results highlight the superior expressiveness and computational efficiency of the APTx Neuron compared to traditional neurons, pointing toward a new paradigm in unified neuron design and the architectures built upon it.
The APTx Neuron is a novel computational unit that unifies linear transformation and non-linear activation into a single, expressive formulation. Inspired by the parametric APTx activation function, this neuron architecture removes the strict separation between computation and activation, allowing both to be learned as a cohesive entity. It is designed to enhance representational flexibility while reducing architectural redundancy.
#### Mathematical Formulation
Traditionally, a neuron computes the output as:
$y = \phi\left( \sum_{i=1}^{n} w_i x_i + b \right)$
where:
- $x_i$ are the inputs,
- $w_i$ are the weights,
- $b$ is the bias,
- and $\phi$ is an activation function such as ReLU, Swish, or Mish.
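In PyTorch, this conventional decomposition is a linear layer followed by a separate activation module, for example:

```python
import torch.nn as nn

# Conventional neuron: linear transformation, then a separate activation φ (here ReLU).
conventional_neuron = nn.Sequential(nn.Linear(8, 1), nn.ReLU())
```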
The APTx Neuron merges these components into a unified trainable expression as:
$y = \sum_{i=1}^{n} \left[ (\alpha_i + \tanh(\beta_i x_i)) \cdot \gamma_i x_i \right] + \delta$
where:
- $x_i$ is the $i$-th input feature,
- $\alpha_i$, $\beta_i$, and $\gamma_i$ are trainable parameters for each input,
- $\delta$ is a trainable scalar bias.
This equation allows the neuron to modulate each input through a learned, per-dimension non-linearity and scaling operation. The term $(\alpha_i + \tanh(\beta_i x_i))$ introduces adaptive gating, and $\gamma_i x_i$ provides multiplicative control.
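To make the formulation concrete, here is a minimal PyTorch sketch of an APTx layer written directly from the equation above. It is an illustration, not the packaged implementation: the class name `APTxLayerSketch` and the initialization choices (ones for $\alpha$, small random values for $\beta$ and $\gamma$) are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class APTxLayerSketch(nn.Module):
    """Illustrative APTx layer: y_j = sum_i (alpha_ji + tanh(beta_ji * x_i)) * gamma_ji * x_i + delta_j."""
    def __init__(self, input_dim: int, output_dim: int):
        super().__init__()
        # Per-neuron, per-input parameters, shape (output_dim, input_dim).
        self.alpha = nn.Parameter(torch.ones(output_dim, input_dim))
        self.beta = nn.Parameter(torch.empty(output_dim, input_dim).uniform_(-0.1, 0.1))
        self.gamma = nn.Parameter(torch.empty(output_dim, input_dim).uniform_(-0.1, 0.1))
        # One scalar bias per neuron.
        self.delta = nn.Parameter(torch.zeros(output_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_dim) -> (batch, 1, input_dim), broadcasting against
        # the (output_dim, input_dim) parameter matrices.
        x = x.unsqueeze(1)
        gated = (self.alpha + torch.tanh(self.beta * x)) * self.gamma * x
        return gated.sum(dim=-1) + self.delta  # shape: (batch, output_dim)
```

Each neuron holds one $(\alpha_i, \beta_i, \gamma_i)$ triplet per input plus a scalar $\delta$, i.e. $(3n + 1)$ parameters per neuron, which matches the counts given in the Usage section below.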
---
## 📥 Installation
```bash
pip install aptx_neuron
```
or,
```bash
pip install git+https://github.com/mr-ravin/aptx_neuron.git
```
----
## Usage
<b>1</b>. APTx Neuron-based layer with all $\alpha_i$, $\beta_i$, $\gamma_i$, and $\delta$ trainable:
Setting `is_alpha_trainable = True` keeps $\alpha_i$ trainable. Each APTx neuron then has $(3n + 1)$ trainable parameters, where $n$ is the input dimension. Note: `is_alpha_trainable` defaults to `True`.
```python
import aptx_neuron

input_dim = 8   # dimension of the input vector
output_dim = 1  # number of APTx neurons in the layer

aptx_neuron_layer = aptx_neuron.aptx_layer(input_dim=input_dim, output_dim=output_dim, is_alpha_trainable=True)
```
<b>2</b>. APTx Neuron-based layer with $\alpha_i = 1$ fixed (non-trainable), while $\beta_i$, $\gamma_i$, and $\delta$ remain trainable:
Setting `is_alpha_trainable = False` fixes $\alpha_i$ (non-trainable). Each APTx neuron then has $(2n + 1)$ trainable parameters, where $n$ is the input dimension, reducing memory use and training time per epoch.
```python
import aptx_neuron

input_dim = 8   # dimension of the input vector
output_dim = 1  # number of APTx neurons in the layer

aptx_neuron_layer = aptx_neuron.aptx_layer(input_dim=input_dim, output_dim=output_dim, is_alpha_trainable=False)  # α_i is fixed (not trainable)
```
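As a quick sanity check, the parameter counts above can be verified directly. The sketch below assumes `aptx_layer` behaves as a standard `nn.Module` that maps a `(batch, input_dim)` tensor to `(batch, output_dim)`:

```python
import torch
import aptx_neuron

layer = aptx_neuron.aptx_layer(input_dim=8, output_dim=4, is_alpha_trainable=True)

x = torch.randn(32, 8)   # a batch of 32 input vectors of dimension 8
y = layer(x)             # expected shape: (32, 4)

# With is_alpha_trainable=True, each neuron has 3n + 1 trainable parameters,
# so 4 neurons over n = 8 inputs should give 4 * (3*8 + 1) = 100.
n_trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(y.shape, n_trainable)
```

With `is_alpha_trainable=False`, the expected count drops to $4 \times (2 \times 8 + 1) = 68$.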
----
#### Conclusion
This work introduced the APTx Neuron, a unified, fully trainable neural unit that integrates linear transformation and non-linear activation into a single expression, extending the APTx activation function. By learning per-input parameters $\alpha_i$, $\beta_i$, and $\gamma_i$ for each input $x_i$, and a shared bias term $\delta$ within a neuron, the APTx Neuron removes the need for separate activation layers and enables fine-grained input transformation. The APTx Neuron generalizes traditional neurons and activations, offering greater representational power. Our experiments show that a fully connected APTx Neuron-based feedforward neural network achieves 96.69% test accuracy on the MNIST dataset within 11 epochs using approximately 332K trainable parameters, demonstrating rapid convergence and high efficiency. This design lays the groundwork for extending APTx Neurons to CNNs and transformers, paving the way for more compact and adaptive deep learning architectures.
----
### 📜 Copyright License
```text
Copyright (c) 2025 Ravin Kumar
Website: https://mr-ravin.github.io
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation
files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy,
modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```