Papers
arxiv:2507.14270

APTx Neuron: A Unified Trainable Neuron Architecture Integrating Activation and Computation

Published on Jul 18
AI-generated summary

The APTx Neuron, a unified neural computation unit integrating non-linear activation and linear transformation, demonstrates superior expressiveness and computational efficiency compared to traditional neurons.

Abstract

We propose the APTx Neuron, a novel, unified neural computation unit that integrates non-linear activation and linear transformation into a single trainable expression. The APTx Neuron is derived from the APTx activation function, thereby eliminating the need for separate activation layers and making the architecture both computationally efficient and elegant. The proposed neuron follows the functional form $y = \sum_{i=1}^{n} (\alpha_i + \tanh(\beta_i x_i)) \cdot \gamma_i x_i + \delta$, where all parameters $\alpha_i$, $\beta_i$, $\gamma_i$, and $\delta$ are trainable. We validate our APTx Neuron-based architecture on the MNIST dataset, achieving up to 96.69% test accuracy within 11 epochs using approximately 332K trainable parameters. The results highlight the superior expressiveness and computational efficiency of the APTx Neuron compared to traditional neurons, pointing toward a new paradigm in unified neuron design and the architectures built upon it.
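The functional form above is simple enough to sketch directly. The following is a minimal, framework-free illustration of a single forward pass through one APTx neuron; the function name and the concrete parameter values are illustrative choices, not taken from the paper (where all parameters are learned during training).

```python
import math

def aptx_neuron(x, alpha, beta, gamma, delta):
    """One APTx neuron forward pass:
    y = sum_i (alpha_i + tanh(beta_i * x_i)) * gamma_i * x_i + delta.

    x, alpha, beta, gamma are per-input sequences of equal length;
    delta is a scalar bias. In the paper all of these are trainable.
    """
    return sum(
        (a + math.tanh(b * xi)) * g * xi
        for xi, a, b, g in zip(x, alpha, beta, gamma)
    ) + delta

# Illustrative values only: two inputs, hand-picked parameters.
y = aptx_neuron([1.0, -0.5],
                alpha=[1.0, 1.0],
                beta=[1.0, 1.0],
                gamma=[0.5, 0.5],
                delta=0.1)
# y ≈ 0.8463
```

Note that because the `tanh` gate multiplies the same linear term `gamma_i * x_i`, activation and linear transformation happen inside one expression, which is what lets the architecture drop separate activation layers.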
