Idea


While MLPs use saturation-type nonlinearities, another important class of neural networks, the radial basis function (RBF) network, uses localized basis functions, typically Gaussian activation functions, organized in a single hidden layer.

Working


The network description is

\[y = \sum_{i=1}^{n_h} w_i \, h(\|x - c_i\|)\]

For a Gaussian activation function this becomes

\[y = \sum_{i=1}^{n_h} w_i \exp\!\left(-\|x - c_i\|_2^2 / \sigma_i^2\right)\]

with input \(x \in \mathbb{R}^m\), output \(y \in \mathbb{R}\), output weights \(w \in \mathbb{R}^{n_h}\), centers \(c_i \in \mathbb{R}^m\), and widths \(\sigma_i \in \mathbb{R}\) for \(i = 1, \dots, n_h\), where \(n_h\) denotes the number of hidden neurons.
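The forward pass above can be sketched directly in NumPy; the centers, widths, and weights below are hypothetical placeholders chosen for illustration, not values from the text:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Gaussian RBF network output: y = sum_i w_i * exp(-||x - c_i||^2 / sigma_i^2)."""
    # Squared Euclidean distance from the input x to each center c_i.
    d2 = np.sum((centers - x) ** 2, axis=1)
    # Localized Gaussian activation of each hidden neuron.
    h = np.exp(-d2 / widths ** 2)
    # Linear combination by the output weights.
    return weights @ h

# Hypothetical example with m = 2 inputs and n_h = 3 hidden neurons.
centers = np.array([[0.0, 0.0],
                    [1.0, 0.0],
                    [0.0, 1.0]])   # c_i, one row per hidden neuron
widths  = np.array([1.0, 0.5, 0.5])   # sigma_i
weights = np.array([1.0, -2.0, 0.5])  # w_i

y = rbf_forward(np.array([0.0, 0.0]), centers, widths, weights)
```

At the query point \([0, 0]\) the first basis function is fully active (\(h_1 = 1\)) while the other two, centered one unit away with width 0.5, contribute only \(e^{-4}\) each, illustrating the localized nature of the basis functions.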