Encog.Engine.Network.Activation Namespace
Encog Machine Learning Framework for .NET

This namespace contains the activation functions used by Encog neural networks, along with the interfaces that activation functions implement. A usage sketch follows the class list below.

Classes

Public class ActivationBiPolar
The bipolar activation function. This scales neural data into the bipolar range: values greater than zero become 1, and values less than or equal to zero become -1 (see the sketch after this list).
Public class ActivationBipolarSteepenedSigmoid
A steepened bipolar sigmoid activation function. It is like the regular sigmoid activation function, except that its output range is -1 to 1 instead of the usual 0 to 1. This activation is typically part of a CPPN neural network, such as HyperNEAT. It was developed by Ken Stanley while at The University of Texas at Austin. http://www.cs.ucf.edu/~kstanley/
Public class ActivationClippedLinear
Linear activation function that bounds the output to [-1,+1]. This activation is typically part of a CPPN neural network, such as HyperNEAT. The idea for this activation function was developed by Ken Stanley, of the University of Texas at Austin. http://www.cs.ucf.edu/~kstanley/
Public class ActivationCompetitive
An activation function that only allows a specified number, usually one, of the out-bound connections to win. The winning connections share the sum of the output, whereas the other neurons receive zero. This activation function can be useful for "winner take all" layers.
Public class ActivationElliott
The Elliott activation function, a computationally inexpensive approximation of the sigmoid. Its output range is 0 to 1.
Public class ActivationElliottSymmetric
The symmetric Elliott activation function, a computationally inexpensive approximation of the hyperbolic tangent. Its output range is -1 to 1.
Public class ActivationGaussian
An activation function based on the Gaussian function. The output range is between 0 and 1. This activation function is used mainly for the HyperNEAT implementation. A derivative is provided, so this activation function can be used with propagation training; however, its primary intended purpose is HyperNEAT. The derivative was obtained with the R statistical package. If you are looking to implement an RBF-based neural network, see the RBFNetwork class. The idea for this activation function was developed by Ken Stanley, of the University of Texas at Austin. http://www.cs.ucf.edu/~kstanley/
Public class ActivationLinear
The Linear layer is really not an activation function at all. The input is simply passed on, unmodified, to the output. This activation function is primarily theoretical and of little actual use. Usually an activation function that scales between 0 and 1 or -1 and 1 should be used.
Public class ActivationLOG
An activation function based on the logarithm function. This type of activation function can be useful to prevent saturation. A hidden node of a neural network is said to be saturated on a given set of inputs when its output is approximately 1 or -1 "most of the time". If this phenomenon occurs during training, learning can be slowed significantly, because the error surface is very flat in this region.
Public class ActivationRamp
A ramp activation function. This function has a high and a low threshold. If the input exceeds the high threshold, a fixed value is returned; if the input falls below the low threshold, another fixed value is returned. Between the two thresholds, the input is passed through linearly.
Public class ActivationSigmoid
The sigmoid activation function takes on a sigmoidal shape. Only positive numbers are generated. Do not use this activation function if negative number output is desired.
Public class ActivationSIN
An activation function based on the sin function, with a double period. This activation is typically part of a CPPN neural network, such as HyperNEAT. It was developed by Ken Stanley while at The University of Texas at Austin. http://www.cs.ucf.edu/~kstanley/
Public class ActivationSoftMax
The softmax activation function. The outputs are scaled so that they sum to 1, which allows them to be treated as probabilities.
Public class ActivationSteepenedSigmoid
The steepened sigmoid is an activation function typically used with NEAT. A valid derivative was calculated with the R statistical package, so this activation function also works with non-NEAT networks. It was developed by Ken Stanley while at The University of Texas at Austin. http://www.cs.ucf.edu/~kstanley/
Public class ActivationStep
The step activation function is a very simple activation function. It is the activation function that was used by the original perceptron. Using the default parameters, it will return 1 if the input is 0 or greater; otherwise it will return 0. The center, low and high properties allow you to define how this activation function works: if the input is equal to center or higher, the high property value is returned, otherwise the low property value is returned (see the sketch after this list). This activation function does not have a derivative, and cannot be used with propagation training or any other training that requires a derivative.
Public class ActivationTANH
The hyperbolic tangent activation function takes the curved shape of the hyperbolic tangent. This activation function produces both positive and negative output. Use this activation function if both negative and positive output is desired.
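
The classes above are normally supplied to layer constructors when a network is assembled. The following is a minimal sketch, assuming the BasicNetwork and BasicLayer types from the Encog.Neural.Networks namespaces; constructor overloads may differ slightly between Encog versions.

using Encog.Engine.Network.Activation;
using Encog.Neural.Networks;
using Encog.Neural.Networks.Layers;

// Build a small feedforward network, choosing an activation function per layer.
var network = new BasicNetwork();
network.AddLayer(new BasicLayer(null, true, 2));                     // input layer: no activation
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 4));  // hidden layer: outputs in [0, 1]
network.AddLayer(new BasicLayer(new ActivationTANH(), false, 1));    // output layer: outputs in [-1, 1]
network.Structure.FinalizeStructure();
network.Reset(); // randomize the weights

ActivationSigmoid is a common choice when outputs must stay between 0 and 1, while ActivationTANH suits targets that span -1 to 1.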
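
The threshold behavior described for ActivationStep and ActivationBiPolar can also be seen by applying the functions directly to a buffer. This sketch assumes the in-place ActivationFunction(double[] x, int start, int size) signature from the IActivationFunction contract described below.

using Encog.Engine.Network.Activation;

double[] stepValues = { -2.0, -0.1, 0.0, 0.1, 2.0 };
var step = new ActivationStep();                            // defaults per the description above: center 0, low 0, high 1
step.ActivationFunction(stepValues, 0, stepValues.Length);
// stepValues is now { 0, 0, 1, 1, 1 }: inputs at or above the center return the high value, others the low value

double[] bipolarValues = { -2.0, -0.1, 0.0, 0.1, 2.0 };
var bipolar = new ActivationBiPolar();
bipolar.ActivationFunction(bipolarValues, 0, bipolarValues.Length);
// bipolarValues is now { -1, -1, -1, 1, 1 }: inputs greater than zero become 1, the rest become -1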
Interfaces

Public interface IActivationFunction
This interface allows various activation functions to be used with the neural network. Activation functions are applied to the output from each layer of a neural network and scale that output into the desired range. Methods are provided both to compute the activation function and its derivative. Some training algorithms, particularly back propagation, require that the derivative of the activation function can be taken. Not all activation functions support derivatives; if you implement an activation function that is not derivable, an exception should be thrown inside your DerivativeFunction implementation. Non-derivable activation functions are perfectly valid; they simply cannot be used with every training algorithm. A usage sketch follows the interface list below.
Public interface IActivationFunctionCL
This interface defines the same contract as IActivationFunction: methods to process the activation function and its derivative, with an exception thrown from DerivativeFunction for non-derivable functions. See IActivationFunction for details.
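
As a rough illustration of the derivative half of the contract, the sketch below applies ActivationTANH through the IActivationFunction interface. The two-argument DerivativeFunction(b, a) form, taking the pre-activation value b and the post-activation value a, is assumed here and may vary between Encog versions; a function without a derivative would throw from this method instead.

using Encog.Engine.Network.Activation;

IActivationFunction fn = new ActivationTANH();

double b = 0.5;                 // pre-activation value (the weighted sum)
double[] buffer = { b };
fn.ActivationFunction(buffer, 0, buffer.Length);
double a = buffer[0];           // post-activation output, tanh(0.5)

// Slope of the activation at this point; for tanh this evaluates to 1 - a * a.
double slope = fn.DerivativeFunction(b, a);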