Neural Networks (NNs) are an important data mining tool used for classification and clustering. An NN learns from examples: when supplied with enough examples, it can perform classification and even discover new trends or patterns in the data. A basic NN is composed of three layers: an input layer, a hidden layer, and an output layer. Each layer contains a number of nodes; the nodes of the input layer are connected to the nodes of the hidden layer, and the nodes of the hidden layer are connected to the nodes of the output layer. These connections carry the weights between nodes. This paper describes the popular Back-Propagation (BP) algorithm for feed-forward NNs. BP is a well-known representative of the iterative gradient descent algorithms used for supervised learning in neural networks, and the aim here is to show the logic behind it. In the BP algorithm, the output of the NN is evaluated against the desired output; if the result is not satisfactory, the weights between the layers are adjusted and the process is repeated until the error is minimized. A BP example is demonstrated in this paper on a 2-2-1 NN architecture with a momentum term. It is shown that the most commonly used back-propagation learning algorithms are special cases of the developed general algorithm. The sigmoid activation function is used to analyze the convergence of the weights, under the algorithm, toward minima of the error function.
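
To illustrate the mechanics summarized above, the following is a minimal sketch of the BP weight-update loop for a 2-2-1 network with sigmoid activations and momentum. The XOR-style training data, learning rate, momentum coefficient, and epoch count are illustrative assumptions, not values taken from the paper.

```python
# Sketch of back-propagation with momentum for a 2-2-1 feed-forward network.
# Assumptions: sigmoid activations on both layers, XOR-style data, and
# arbitrarily chosen learning rate, momentum, and epoch count.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(a):
    # Derivative written in terms of the activation a = sigmoid(x).
    return a * (1.0 - a)

rng = np.random.default_rng(0)

# Example data: 2 inputs, 1 target output per pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for the 2-2-1 architecture.
W1 = rng.normal(scale=0.5, size=(2, 2))   # input -> hidden
b1 = np.zeros((1, 2))
W2 = rng.normal(scale=0.5, size=(2, 1))   # hidden -> output
b2 = np.zeros((1, 1))

eta, alpha = 0.5, 0.9                     # learning rate and momentum
dW1_prev = np.zeros_like(W1); db1_prev = np.zeros_like(b1)
dW2_prev = np.zeros_like(W2); db2_prev = np.zeros_like(b2)

for epoch in range(10000):
    # Forward pass: compute hidden activations and network output.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: error signals (deltas) for output and hidden layers.
    delta_out = (Y - T) * sigmoid_deriv(Y)
    delta_hid = (delta_out @ W2.T) * sigmoid_deriv(H)

    # Gradient-descent updates plus a momentum term from the previous step.
    dW2 = -eta * H.T @ delta_out + alpha * dW2_prev
    db2 = -eta * delta_out.sum(axis=0, keepdims=True) + alpha * db2_prev
    dW1 = -eta * X.T @ delta_hid + alpha * dW1_prev
    db1 = -eta * delta_hid.sum(axis=0, keepdims=True) + alpha * db1_prev

    W2 += dW2; b2 += db2; W1 += dW1; b1 += db1
    dW2_prev, db2_prev, dW1_prev, db1_prev = dW2, db2, dW1, db1

print("final mean squared error:", np.mean((Y - T) ** 2))
```

The loop repeats the compare-and-adjust cycle described above: the network output is compared with the desired output, the resulting error signal is propagated backwards, and the weights are nudged down the error gradient, with the momentum term reusing a fraction of the previous update to smooth convergence.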