Five most widely used algorithms for training neural networks




The procedure used to carry out the learning process in a neural network is called the optimization algorithm (or optimizer). There are many different optimization algorithms. All have different characteristics and performance in terms of memory requirements, processing speed, and numerical precision.

Four major parameters are estimated when developing neural network-based models:

1) Activation function from the input layer to the hidden layer
2) Activation function from the hidden layer to the output layer
3) Number of hidden layers
4) The magnitudes of the connection weights
(To learn more about these parameters, see my tutorial on Artificial Neural Networks.)

This article is about the methods used to estimate the weights of the connections. Weight estimation is essentially an optimization problem: the weights are the design variables, and the transfer function that maps inputs to outputs serves as the objective function. The method used to determine the weights is therefore comparable to the optimization or programming techniques used to generate values for design variables within applied constraints. Generally, the weights are limited to the range 0 to 1 or, in some cases, -1 to 1. The main job of the programming technique is to reduce the gap between the desired and estimated magnitudes of the output variables (supervised ANN) or to get as close as possible to a target value (unsupervised ANN).
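The framing above can be sketched as a small optimization problem: the weights are the design variables, bounded to [-1, 1], and the objective is the error between desired and predicted outputs. A minimal illustration (the single-neuron network, data, and weight values here are invented for demonstration):

```python
import numpy as np

# A single neuron: output = sigmoid(inputs . weights)
def predict(weights, inputs):
    return 1.0 / (1.0 + np.exp(-inputs @ weights))

# Objective function: mean squared error between desired and predicted outputs
def objective(weights, inputs, desired):
    return np.mean((desired - predict(weights, inputs)) ** 2)

# Toy data (invented): three cases with two inputs each
inputs = np.array([[0.2, 0.7], [0.9, 0.1], [0.5, 0.5]])
desired = np.array([1.0, 0.0, 0.5])

# The weights are the design variables, constrained here to [-1, 1]
weights = np.clip(np.array([0.3, -0.8]), -1.0, 1.0)
error = objective(weights, inputs, desired)
```

Any of the training algorithms discussed below can then be seen as a strategy for searching this weight space to drive the objective toward zero.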

Because the rows of the training data represent different cases, a key feature of neural networks is an iterative learning process: data cases (rows) are presented to the network one at a time, and the weights associated with the input values are adjusted after each presentation. After all cases have been presented, the process often starts over again from the current best weights, i.e., the weights at which the difference between the desired and predicted (or target) values is minimum.
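One way to sketch this row-by-row adjustment is the delta rule on a single linear neuron: each case updates the weights in proportion to its error, and the pass over all cases is repeated. The data, learning rate, and epoch count below are invented for illustration:

```python
import numpy as np

# Toy training data (invented): each row is one case
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])

weights = np.zeros(2)
learning_rate = 0.1

# Each epoch presents the cases one at a time, adjusting weights after each
for epoch in range(200):
    for inputs, desired in zip(X, y):
        predicted = inputs @ weights               # linear neuron output
        error = desired - predicted                # gap between desired and predicted
        weights += learning_rate * error * inputs  # delta-rule weight adjustment
```

After enough passes, the weights settle near the values that minimize the error over all cases (here, close to [0, 1]).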

Numerous methods are used as the programming technique, collectively known as training or learning algorithms. Broadly, they can be classified into three distinct groups: propagation algorithms, gradient descent algorithms, and other special-purpose algorithms such as the Levenberg-Marquardt algorithm, Newton's method, and genetic algorithms.

The five most popular training algorithms are:
1) Quick Propagation

There are numerous other new and modified algorithms now widely used to estimate the connection weights, but most of them are based on the algorithms above.

To select the best among the above algorithms, use the ODM tool with the following criteria:
i) Accuracy
ii) Reliability (what is the standard deviation of the results with respect to the input data?)
iii) Time of learning (how long does the algorithm take to train the network?)
iv) Popularity (how many citations have papers using that training algorithm received?)
v) Requirements (what additional parameters or variables are required to use the algorithm?)

If you want to give the highest priority to accuracy, assign the maximum score to that criterion; then, while scoring the options, if Levenberg-Marquardt (LM) has the highest accuracy, assign 100 to the accuracy criterion for the LM option. After scoring all options, click the Update button to get the result: the best algorithm for training the network according to your preferences.
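The scoring step can be sketched as a simple weighted sum over the criteria, which is the same principle the ODM tool applies. All criterion weights and option scores below are invented for illustration:

```python
# Criterion weights: a higher weight means a higher priority (invented values)
criteria_weights = {"accuracy": 100, "reliability": 60, "learning_time": 40,
                    "popularity": 30, "requirements": 20}

# Score of each training algorithm per criterion, on a 0-100 scale (invented values)
scores = {
    "Quick Propagation":   {"accuracy": 70, "reliability": 65, "learning_time": 80,
                            "popularity": 50, "requirements": 70},
    "Levenberg-Marquardt": {"accuracy": 100, "reliability": 85, "learning_time": 60,
                            "popularity": 90, "requirements": 40},
}

def weighted_score(option):
    """Weighted average of an option's scores across all criteria."""
    total_weight = sum(criteria_weights.values())
    return sum(criteria_weights[c] * scores[option][c]
               for c in criteria_weights) / total_weight

# The option with the highest weighted score is the recommended algorithm
best = max(scores, key=weighted_score)
```

With these sample numbers, prioritizing accuracy makes Levenberg-Marquardt the winner; changing the criterion weights can change the recommendation.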


Dr. Mrinmoy Majumder
Admin and Editor (Hon.)
