Encog.Engine.Network.Activation Namespace
Classes
Class  Description  

ActivationBiPolar 
BiPolar activation function. This will scale the neural data into the bipolar
range. Greater than zero becomes 1, less than or equal to zero becomes -1.
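As an illustration of the thresholding described above, the following sketch applies ActivationBiPolar directly to an array. It assumes the usual Encog 3.x in-place calling convention ActivationFunction(double[] d, int start, int size); confirm the signature against the installed Encog version.

    using Encog.Engine.Network.Activation;

    // Usage sketch (assumed Encog 3.x signature): the array is transformed in place.
    IActivationFunction bipolar = new ActivationBiPolar();
    double[] values = { 0.75, -0.2, 0.0 };
    bipolar.ActivationFunction(values, 0, values.Length);
    // Expected per the description above: { 1.0, -1.0, -1.0 }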
 
ActivationBipolarSteepenedSigmoid 
The bipolar sigmoid activation function is like the regular sigmoid activation function,
except that the output range is -1 to 1 instead of the more normal 0 to 1.
This activation is typically part of a CPPN neural network, such as
HyperNEAT.
It was developed by Ken Stanley while at The University of Texas at Austin.
http://www.cs.ucf.edu/~kstanley/
 
ActivationClippedLinear 
Linear activation function that bounds the output to [-1,+1]. This
activation is typically part of a CPPN neural network, such as
HyperNEAT.
The idea for this activation function was developed by Ken Stanley, of
the University of Texas at Austin.
http://www.cs.ucf.edu/~kstanley/
 
ActivationCompetitive 
An activation function that only allows a specified number, usually one, of
the outbound connections to win. These connections will share in the sum of
the output, whereas the other neurons will receive zero.
This activation function can be useful for "winner take all" layers.
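A minimal sketch of the winner-take-all behavior is shown below. The single-argument constructor (the number of winning neurons) is an assumption carried over from the Java API, as is the in-place calling convention; verify both against the installed version.

    using Encog.Engine.Network.Activation;

    // Sketch only: allow a single winning neuron (assumed constructor argument).
    IActivationFunction competitive = new ActivationCompetitive(1);
    double[] outputs = { 0.1, 0.7, 0.2 };
    competitive.ActivationFunction(outputs, 0, outputs.Length);
    // The largest value keeps a non-zero share of the summed output;
    // the remaining entries become zero.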
 
ActivationElliott  
ActivationElliottSymmetric  
ActivationGaussian 
An activation function based on the Gaussian function. The output range is
between 0 and 1. This activation function is used mainly for the HyperNeat
implementation.
A derivative is provided, so this activation function can be used with
propagation training. However, its primary intended purpose is for
HyperNeat. The derivative was obtained with the R statistical package.
If you are looking to implement an RBF-based neural network, see the
RBFNetwork class.
The idea for this activation function was developed by Ken Stanley, of
the University of Texas at Austin.
http://www.cs.ucf.edu/~kstanley/
 
ActivationLinear 
The Linear layer is really not an activation function at all. The input is
simply passed on, unmodified, to the output. This activation function is
primarily theoretical and of little actual use. Usually an activation
function that scales between 0 and 1 or -1 and 1 should be used.
 
ActivationLOG 
An activation function based on the logarithm function.
This type of activation function can be useful to prevent saturation. A
hidden node of a neural network is said to be saturated on a given set of
inputs when its output is approximately -1 or 1 "most of the time". If this
phenomenon occurs during training, the learning of the network can be
slowed significantly, since the error surface is very flat in this instance.
 
ActivationRamp 
A ramp activation function. This function has a high and low threshold. If
the high threshold is exceeded, a fixed value is returned. Likewise, if the
input falls below the low threshold, another fixed value is returned.
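The sketch below shows one way the ramp might be configured. The four-argument constructor order (thresholdHigh, thresholdLow, high, low) is an assumption based on the Java API and should be verified for the installed version.

    using Encog.Engine.Network.Activation;

    // Sketch only: clamp to fixed values outside the [-1, +1] thresholds.
    // Constructor order (thresholdHigh, thresholdLow, high, low) is assumed.
    IActivationFunction ramp = new ActivationRamp(1.0, -1.0, 1.0, -1.0);
    double[] values = { 2.5, 0.3, -4.0 };
    ramp.ActivationFunction(values, 0, values.Length);
    // Inputs above the high threshold return the fixed high value (1.0),
    // inputs below the low threshold return the fixed low value (-1.0),
    // and inputs in between pass through along the ramp.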
 
ActivationSigmoid 
The sigmoid activation function takes on a sigmoidal shape. Only positive
numbers are generated. Do not use this activation function if negative number
output is desired.
 
ActivationSIN 
An activation function based on the sin function, with a double period.
This activation is typically part of a CPPN neural network, such as
HyperNEAT.
It was developed by Ken Stanley while at The University of Texas at Austin.
http://www.cs.ucf.edu/~kstanley/
 
ActivationSoftMax 
The softmax activation function.
 
ActivationSteepenedSigmoid 
The Steepened Sigmoid is an activation function typically used with NEAT.
A valid derivative was calculated with the R package, so this does work with
non-NEAT networks too.
It was developed by Ken Stanley while at The University of Texas at Austin.
http://www.cs.ucf.edu/~kstanley/
 
ActivationStep 
The step activation function is a very simple activation function. It is the
activation function that was used by the original perceptron. Using the
default parameters it will return 1 if the input is 0 or greater. Otherwise
it will return -1.
The center, low and high properties allow you to define how this activation
function works. If the input is equal to center or higher the high property
value will be returned, otherwise the low property will be returned. This
activation function does not have a derivative, and can not be used with
propagation training, or any other training that requires a derivative.
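A short sketch of the center, low and high behavior follows. The (low, center, high) constructor order is an assumption taken from the Java API; confirm it against the installed version.

    using Encog.Engine.Network.Activation;

    // Sketch only: (low, center, high) constructor order is assumed from the Java API.
    IActivationFunction step = new ActivationStep(-1.0, 0.0, 1.0);
    double[] values = { 0.4, -0.4 };
    step.ActivationFunction(values, 0, values.Length);
    // Input >= center returns the high value; otherwise the low value.
    // No derivative is defined, so this function cannot be used with
    // propagation training.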
 
ActivationTANH 
The hyperbolic tangent activation function takes the curved shape of the
hyperbolic tangent. This activation function produces both positive and
negative output. Use this activation function if both negative and positive
output is desired.
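In practice these classes are chosen when the layers of a network are defined. The sketch below follows the common Encog 3.x pattern of passing an activation function to each BasicLayer; the constructor order (activation, bias, neuron count) matches the standard Encog examples, but should be confirmed for the installed version.

    using Encog.Engine.Network.Activation;
    using Encog.Neural.Networks;
    using Encog.Neural.Networks.Layers;

    // Sketch of the usual way activation functions are attached to layers.
    var network = new BasicNetwork();
    // Input layer: no activation function is applied to the raw input.
    network.AddLayer(new BasicLayer(null, true, 2));
    // Hidden layer: ActivationTANH yields output in the -1 to +1 range.
    network.AddLayer(new BasicLayer(new ActivationTANH(), true, 3));
    // Output layer: ActivationSigmoid yields only positive output (0 to 1).
    network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
    network.Structure.FinalizeStructure();
    network.Reset();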

Interfaces
Interface  Description  

IActivationFunction 
This interface allows various activation functions to be used with the neural
network. Activation functions are applied to the output from each layer of a
neural network. Activation functions scale the output into the desired range.
Methods are provided both to process the activation function, as well as the
derivative of the function. Some training algorithms, particularly back
propagation, require that it be possible to take the derivative of the
activation function.
Not all activation functions support derivatives. If you implement an
activation function that is not derivable, an exception should be thrown
inside the DerivativeFunction method implementation.
Non-derivable activation functions are perfectly valid; they simply cannot be
used with every training algorithm.
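The sketch below illustrates the two responsibilities described above, using ActivationSigmoid as a concrete implementation. The signatures ActivationFunction(double[] d, int start, int size) and DerivativeFunction(double b, double a), where b is the value before activation and a the value after, are assumptions based on Encog 3.x and should be checked against the installed version.

    using System;
    using Encog.Engine.Network.Activation;

    // Sketch only: assumed Encog 3.x member signatures (see note above).
    IActivationFunction act = new ActivationSigmoid();
    double[] d = { 0.5 };
    double before = d[0];                   // value entering the activation (b)
    act.ActivationFunction(d, 0, d.Length);
    double after = d[0];                    // value leaving the activation (a)
    // Propagation training evaluates the derivative at this point. An
    // implementation without a derivative should throw from this method.
    double slope = act.DerivativeFunction(before, after);
    Console.WriteLine(slope);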
 
IActivationFunctionCL 
This interface allows various activation functions to be used with the neural
network. Activation functions are applied to the output from each layer of a
neural network. Activation functions scale the output into the desired range.
Methods are provided both to process the activation function, as well as the
derivative of the function. Some training algorithms, particularly back
propagation, require that it be possible to take the derivative of the
activation function.
Not all activation functions support derivatives. If you implement an
activation function that is not derivable, an exception should be thrown
inside the DerivativeFunction method implementation.
Non-derivable activation functions are perfectly valid; they simply cannot be
used with every training algorithm.
