Backpropagation, or "backward propagation of errors," is an algorithm for supervised learning of artificial neural networks using gradient descent.
Given an artificial neural network and an error function, the backpropagation method calculates the gradient of the error function with respect to the neural network's weights. It is a generalization of the delta rule for perceptrons to multilayer feedforward neural networks.
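To make the gradient computation concrete, here is a minimal sketch in Python with NumPy, assuming a one-hidden-layer feedforward network with sigmoid units and a squared-error function; all variable names and the toy data are illustrative, not part of any particular library.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples, 2 input features, scalar targets (illustrative only).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights

# Forward pass.
h = sigmoid(X @ W1)            # hidden-layer activations
y_hat = sigmoid(h @ W2)        # network output

# Error function: E = 0.5 * mean((y_hat - y)^2).
# Backward pass: apply the chain rule layer by layer.
delta2 = (y_hat - y) * y_hat * (1 - y_hat)   # dE/d(output pre-activation)
grad_W2 = h.T @ delta2 / len(X)              # dE/dW2

delta1 = (delta2 @ W2.T) * h * (1 - h)       # error propagated back to hidden layer
grad_W1 = X.T @ delta1 / len(X)              # dE/dW1

# One gradient-descent step on the weights.
lr = 0.5
W2 -= lr * grad_W2
W1 -= lr * grad_W1
```

The `delta` terms are the multilayer analogue of the delta rule: each layer's delta is computed from the layer above it, which is exactly the "backward propagation" that gives the algorithm its name.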