Quickprop, an Alternative to Back-Propagation

Johanna Appel · Published in Towards Data Science · 9 min read
August 25, 2020

Due to the slowly converging nature of the vanilla back-propagation algorithms of the ’80s/’90s, Scott Fahlman invented a learning algorithm dubbed Quickprop [1] that is roughly based on Newton’s method. His simple idea outperformed back-propagation (with various adjustments) on problem domains like the ‘N-M-N Encoder’ task — i.e. training a de-/encoder network with N inputs, M hidden units and N outputs.
One of the problems Quickprop specifically tackles is finding a domain-specific optimal learning rate, or rather, finding an algorithm that adjusts the learning rate dynamically and appropriately.

In this article, we’ll look at the simple mathematical idea behind Quickprop. We’ll implement the basic algorithm and some improvements that Fahlman suggests — all in Python and PyTorch.

A rough implementation of the algorithm and some background can already be found in this useful blog post by Giuseppe Bonaccorso. We are going to expand on that — both on the theory and code side — but if in doubt, have a look at how Giuseppe explains it.

The motivation to look into Quickprop came from writing my last article on the “Cascade-Correlation Learning Architecture” [2]. There, I used it to train the network’s output and hidden neurons, which, as I only realized later, was a mistake; we’ll look into that here as well.

To follow along with this article, you should be familiar with how neural networks can be trained using back-propagation of the loss gradient (as of 2020, a widely used approach). That is, you should understand how the gradient is usually calculated and applied to the parameters of a network to try to iteratively achieve convergence of the loss to a global minimum.

Overview

We’ll start with the mathematics behind Quickprop and then look at how it can be implemented and improved step by step.
To make following along easier, all equations and derivation steps are explained in more detail than in the original paper.

The Mathematics Behind Quickprop

The often used learning method of back-propagation for neural networks is based on the idea of iteratively ‘riding down’ the slope of a function, by taking short steps in the inverse direction of its gradient.

These ‘short steps’ are the crux here. Their length usually depends on a learning rate factor, and that is kept intentionally small to not overshoot a potential minimum.

Back in the day when Fahlman developed Quickprop, choosing a good learning rate was a major problem. As he mentions in his paper, for the best performing algorithm the researchers chose the learning rate ‘by eye’ (i.e. manually and based on experience) at every step along the way! [1]

Faced with this, Fahlman came up with a different idea: Solving a simpler problem.

Minimizing the loss function L, especially for deep neural networks, can become extremely difficult analytically (i.e. in a general way on the entire domain).
In back-propagation, for instance, we only evaluate the gradient point-wise and then take small steps in the right direction. If we knew what the ‘terrain’ of the function looked like in general, we could ‘jump’ to the minimum directly.

But what if we could replace the loss function with a simpler version whose terrain we do know? This is exactly the assumption Fahlman makes in Quickprop: He presumes that L can be approximated by a simple parabola that opens in the positive direction. This way, calculating the minimum (of the parabola) is as simple as finding the intersection of a line with the x-axis.
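(For a parabola p(w) = a (w - w_0)^2 + c with a > 0, the derivative p'(w) = 2a (w - w_0) is a straight line, and the parabola’s minimum sits exactly where that line crosses the x-axis, at w = w_0.)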

And if that point is not yet a minimum of the loss function, the next parabola can be approximated from there, like in the graphic below.

Animation of Quickprop
A parabola is fitted to the original function and a step is taken towards its minimum. From there, the next parabola is fitted and the next step is taken. The two dotted lines mark the current and the previous stationary point of the parabola. (Graphic by author)

So… How exactly can we approximate L? Easy — using a Taylor series, and a small trick.

Note that for the following equations, we consider the components of the weight vector w to be trained independently, so w can be treated as a scalar. In the implementation we can still exploit the SIMD architecture of GPUs, since all computations are done component-wise.

We start off with the second order Taylor expansion of L, giving us a parabola (without an error term):
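With w_t denoting the current weight and L', L'' the first and second derivative of the loss with respect to the weight, the expansion around w_t reads:

T(w) = L(w_t) + L'(w_t) \, (w - w_t) + \tfrac{1}{2} \, L''(w_t) \, (w - w_t)^2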

(To understand how this was created, check out the Wikipedia article on Taylor series linked above — it’s as simple as inputting L into the general Taylor formula up to the second term and dropping the rest.)

We can now define the update rule for the weights based on a weight difference, and input that into T:
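Defining the weight difference as \Delta w_t := w_{t+1} - w_t and plugging w = w_t + \Delta w_t into T gives:

T(w_t + \Delta w_t) = L(w_t) + L'(w_t) \, \Delta w_t + \tfrac{1}{2} \, L''(w_t) \, \Delta w_t^2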

Quickprop now further approximates L’’ linearly using the difference quotient (this is the small trick mentioned above):
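With \Delta w_{t-1} = w_t - w_{t-1} denoting the weight step taken in the previous iteration:

L''(w_t) \approx \frac{L'(w_t) - L'(w_{t-1})}{w_t - w_{t-1}} = \frac{L'(w_t) - L'(w_{t-1})}{\Delta w_{t-1}}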

Using this, we can rewrite the Taylor polynomial to this ‘Quickprop’ adjusted version and build its gradient:
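T_{qp}(w_t + \Delta w_t) = L(w_t) + L'(w_t) \, \Delta w_t + \frac{L'(w_t) - L'(w_{t-1})}{2 \, \Delta w_{t-1}} \, \Delta w_t^2

\frac{\partial T_{qp}}{\partial \Delta w_t} = L'(w_t) + \frac{L'(w_t) - L'(w_{t-1})}{\Delta w_{t-1}} \, \Delta w_t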

And that last equation, finally, can be used to calculate the stationary point of the parabola:
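Setting the gradient to zero and solving for \Delta w_t:

\Delta w_t = \Delta w_{t-1} \, \frac{L'(w_t)}{L'(w_{t-1}) - L'(w_t)}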

That’s it! Now, to put things together: given the current weight, the previous weight difference and the loss slope at the previous and current weight, Quickprop calculates the new weight simply by:
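w_{t+1} = w_t + \Delta w_t, \qquad \Delta w_t = \Delta w_{t-1} \, \frac{L'(w_t)}{L'(w_{t-1}) - L'(w_t)}

This is exactly the dw = dw_prev * dL / (dL_prev - dL) update we will implement in the code below.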

Putting It Into Code

Before starting with the actual Quickprop implementation, let’s import some foundational libraries:

import numpy as np
import torch

With the final update rule from earlier, we can start implementing Quickprop! If you read the first article on Cascade-Correlation, you might already be familiar with this — here, we’ll concentrate on the essential parts of the algorithm first, and put it all together in the end.

Note that we use PyTorch to do the automatic gradient calculation for us. We also assume that an activation function and a loss function have been defined beforehand.

# Setup torch autograd for weight vector w
w_var = torch.autograd.Variable(torch.Tensor(w), requires_grad=True)

# Calc predicted values based on input x and loss based on expected output y
predicted = activation(torch.mm(x, w_var))
L = loss(predicted, y)

# Calc differential
L.backward()

# And, finally, do the weight update
dL = w_var.grad.detach() # =: partial(L) / partial(w)
dw = dw_prev * dL / (dL_prev - dL)

dw_prev = dw.clone()

w += learning_rate * dw

This is the simplest Quickprop version for one epoch of learning. To actually make use of it, we’ll have to run it several times and see if the loss converges (we’ll cover that bit later).

However, this implementation is flawed in several ways, which we are going to investigate and fix in the following sections:

  • We didn’t actually initialize any of the ..._prev variables - in the last article I statically initialized them with ones, but that is also not a good idea (see next points)
  • The weight delta variable might get stuck on zero values, since it is used as a factor in its own update step
  • The implementation might overshoot or generally fail to converge, if the gradient ‘explodes’
  • It will result in division by zero if the gradient doesn’t change in one iteration

Improvement: Init via Gradient Descent

The first simple fix we can apply is using gradient descent (with a very small learning rate) to prepare the dw_prev and dL_prev variables. This will give us a good first glimpse of the loss function terrain, and kick-starts Quickprop in the right direction.

Gradient descent is easily implemented using PyTorch again — we’ll also use the opportunity to refactor the code above a bit:

def calc_gradient(x, y, w, activation, loss):
    # Helper to calc the loss gradient
    w_var = torch.autograd.Variable(torch.Tensor(w), requires_grad=True)
    predicted = activation(torch.mm(x, w_var))
    L = loss(predicted, y)
    L.backward()
    dL = w_var.grad.detach()
    return L, dL, predicted

def grad_descent_step(x, y, w, activation, loss, learning_rate=1e-5):
    # Calculate the gradient as usual
    L, dL, predicted = calc_gradient(x, y, w, activation, loss)

    # Then do a simple gradient descent step
    dw = -learning_rate * dL
    new_w = w + dw

    return new_w, dw, L, dL

Improvement: Conditional Gradient Addition

Sometimes, the weight deltas become vanishingly small when using the Quickprop parabola approach. To prevent that from happening when the gradient is not zero, Fahlman recommends conditionally adding the slope to the weight delta.
The idea can be described like this: Go further if you have been moving in that direction anyway, but don’t push on if your previous update sent you in the opposite direction (to prevent oscillation).

With a little piece of decider code, this can be implemented quite easily:

# (This code is just to illustrate the process before the real implementation, it won't execute)

# We'll receive dw and dw_prev and need to decide whether to apply the update or not.
# To not have to include conditional execution (if clauses) we'll want to do it branchless.
# This can be achieved by a simple multiplication rule using the sign function:

# Sign gives us either -1, 0 or 1 depending on whether the argument is negative, zero or positive
# (check the docs for specifics):
np.sign(dw) + np.sign(dw_prev)
# This sum can take one of five values: -2, -1, 0, 1, 2.
# But actually, we're really only interested in whether it is 0 or not, so we can do:
np.clip(np.abs(np.sign(dw) + np.sign(dw_prev)), a_min=0, a_max=1)
# And use that as our deciding factor: it is 0 when dw and dw_prev point in opposite
# directions (or both are zero), and 1 otherwise.

With this, we can put it all into one small function:

def cond_add_slope(dw, dw_prev, dL, learning_rate=1.5):
    ddw = np.clip(np.abs(np.sign(dw) + np.sign(dw_prev)), a_min=0, a_max=1)
    return dw + ddw * (-learning_rate * dL)

Improvement: Maximum Growth Factor

As a second step, we’ll fix the issue of exploding weight deltas near certain function features (e.g. near singularities).
To do that, Fahlman suggests clipping the weight update if it would be bigger than the last weight update times a maximum growth factor:

def clip_at_max_growth(dw, dw_prev, max_growth_factor=1.75):
    # Get the absolute maximum element-wise growth
    max_growth = max_growth_factor * np.abs(dw_prev)

    # And implement this branchless with a min/max clip
    return np.clip(dw, a_min=(-max_growth), a_max=max_growth)

Improvement: Prevent Division by Zero

On some occasions, the previous and current computed slope can be the same. The result is that we would divide by zero in the weight update rule and afterwards be stuck working with NaNs, which obviously breaks the training.
The simple fix here is to do a gradient descent step instead.

Observe the two update rules:

# Quickprop
dw = dw_prev * dL / (dL_prev - dL)
# Gradient descent
dw = -learning_rate * dL

# We'll get a nicer result if we shuffle the equations a bit:
dw = dL * dw_prev / (dL_prev - dL)
dw = dL * (-learning_rate)

Apart from the last factor, they now look quite alike, don’t they?
That means we can go branchless again (i.e. save ourselves some if-clauses), stay element-wise, and pack everything into one formula:

# (This code is just to illustrate the process before the real implementation, it won't execute)

# If (dL_prev - dL) is zero, we want to multiply by the learning rate instead,
# i.e. we want to switch to gradient descent. We can accomplish it this way:

# First, we make sure we only use absolute values (the 'magnitude', but element-wise)
np.abs(dL_prev - dL)
# Then we map this value onto either 0 or 1, depending on whether it is 0 or not (using the sign function)
ddL = np.sign(np.abs(dL_prev - dL))

# We can now use this factor to 'decide' between quickprop and gradient descent:
# Note: adding (1 - ddL) to the denominator makes it 1 exactly where it would otherwise be 0;
# that case is masked out by ddL anyway, so we avoid producing NaNs
quickprop_factor = ddL       * (dw_prev / (dL_prev - dL + (1 - ddL)))
grad_desc_factor = (1 - ddL) * (-learning_rate)

# Overall we get:
dw = dL * (quickprop_factor + grad_desc_factor)

The attentive reader probably noted the ‘learning rate’ factor we used above — a parameter we thought we could get rid of…
Well, actually we sort of did, or at least we did get rid of the problem of having to adjust the learning rate over the course of the training. The Quickprop learning rate can stay fixed throughout the process. It only has to be adjusted once per domain in the beginning. The actual dynamic step sizes are chosen through the parabola jumps, which in turn depend heavily on the current and last calculated slope.

If you think this sounds awfully similar to how back-propagation learning rate optimizers work (think: momentum), you’d be on the right track. In essence, Quickprop achieves something very similar to them — just that it doesn’t use back-propagation at its core.

Coming back to the code: Since we already implemented gradient descent earlier on, we can build on that and re-use as much as possible:

def quickprop_step(x, y, w, dw_prev, dL_prev,
                   activation, loss,
                   qp_learning_rate=1.5,
                   gd_learning_rate=1e-5):
    # Calculate the gradient as usual
    L, dL, predicted = calc_gradient(x, y, w, activation, loss)

    # Calculate a 'decider' bit between quickprop and gradient descent
    ddL = np.ceil(np.clip(np.abs(dL_prev - dL), a_min=0, a_max=1) / 2)

    # (see the division-by-zero section above: the (1 - ddL) term keeps the denominator non-zero)
    quickprop_factor = ddL       * (dw_prev / (dL_prev - dL + (1 - ddL)))
    grad_desc_factor = (1 - ddL) * (-gd_learning_rate)

    dw = dL * (quickprop_factor + grad_desc_factor)

    # Use the conditional slope addition
    dw = cond_add_slope(dw, dw_prev, dL, qp_learning_rate)

    # Use the max growth factor
    dw = clip_at_max_growth(dw, dw_prev)

    new_w = w + dw

    return new_w, dw, L, dL, predicted

Putting It All Together

With all of these functions in place, we can put it all together. The bit of boilerplate code still necessary just does the initialization and checks for convergence of the mean loss per epoch.

# Param shapes: x_: (n,i), y_: (n,o), weights: (i,o)
#   Where n is the size of the whole sample set, i is the input count, o is the output count
#   We expect x_ to already include the bias
# Returns: trained weights, last prediction, last iteration, last loss
# NB: Differentiation is done via torch
def quickprop(x_, y_, weights,
              activation=torch.nn.Sigmoid(),
              loss=torch.nn.MSELoss(),
              learning_rate=1e-4,
              tolerance=1e-6,
              patience=20000,
              debug=False):
    # Box params as torch datatypes
    x = torch.Tensor(x_)
    y = torch.Tensor(y_)
    w = torch.Tensor(weights)

    # Keep track of mean residual error values (used to test for convergence)
    L_mean = 1
    L_mean_prev = 1
    L_mean_diff = 1

    # Keep track of loss and weight gradients
    dL = torch.zeros(w.shape)
    dL_prev = torch.ones(w.shape)
    dw_prev = torch.ones(w.shape)

    # Initialize the algorithm with a GD step
    # (assign the GD gradient to dL so that the first loop iteration carries it over into dL_prev)
    w, dw_prev, L, dL = grad_descent_step(x, y, w, activation, loss)

    i = 0
    predicted = []

    # This algorithm expects the mean losses to converge or the patience to run out...
    while L_mean_diff > tolerance and i < patience:
        # Prep iteration
        i += 1
        dL_prev = dL.clone()

        w, dw, L, dL, predicted = quickprop_step(x, y, w, dw_prev, dL_prev, activation, loss, qp_learning_rate=learning_rate)

        dw_prev = dw.clone()

        # Keep track of losses and use as convergence criterion if mean doesn't change much
        L_mean = L_mean + (1/(i+1))*(L.detach().numpy() - L_mean)
        L_mean_diff = np.abs(L_mean_prev - L_mean)
        L_mean_prev = L_mean

        if debug and i % 100 == 99:
            print("Residual           ", L.detach().numpy())
            print("Residual mean      ", L_mean)
            print("Residual mean diff ", L_mean_diff)

    return w.detach().numpy(), predicted.detach().numpy(), i, L.detach().numpy()
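
To get a feeling for how to call it, here is a minimal sketch of a smoke test. The toy data, shapes and variable names are made up purely for illustration and are not part of the original setup:

import numpy as np

# Hypothetical toy problem: 8 samples, 2 inputs plus a bias column, 1 output
n, n_in, n_out = 8, 3, 1
x_ = np.hstack([np.random.rand(n, n_in - 1), np.ones((n, 1))])  # inputs incl. bias column
y_ = np.random.rand(n, n_out)                                   # made-up targets
weights = np.random.rand(n_in, n_out) * 0.1                     # small random initial weights

w_trained, predicted, iterations, final_loss = quickprop(x_, y_, weights, debug=True)
print(iterations, final_loss)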

Caveats

Quickprop has one major caveat that greatly reduces its usefulness: The mathematical ‘trick’ we used, i.e. approximating the second-order derivative of the loss function with a simple difference quotient, relies on the assumption that this second-order derivative is a continuous function.
That is not the case for activation functions like the rectified linear unit, or ReLU for short: the second-order derivative is discontinuous, and the behavior of the algorithm can become unreliable (e.g. it might diverge).

Looking back at my earlier article covering the implementation of Cascade-Correlation: there, we trained the hidden units of the network using Quickprop and used the covariance function as a way to estimate loss in that process. However, the covariance (as implemented there) is wrapped in an absolute value function, i.e. its second-order derivative is discontinuous, and therefore Quickprop should not be used. The careful reader of Fahlman et al.’s Cascade-Correlation paper [2] may also have noticed that they actually use gradient ascent to calculate this maximum covariance.

Apart from that, Quickprop also seems to deliver better results on some domains than on others. An interesting summary by Brust et al. showed that it achieved better training results than back-propagation based techniques on some simple image classification tasks (classifying basic shapes), while doing worse on more realistic image classification tasks [3].
I haven’t done any research in that direction, but I wonder if this could imply that Quickprop works better on less fuzzy and more structured data (think data frames/tables used in a business context). That would surely be interesting to investigate.

Summary

This article covered Scott Fahlman’s idea of improving back-propagation. We had a look at the mathematical foundations and a possible implementation.

Now go ahead and try it out in your own projects — I’d love to see what Quickprop can be used for!

If you would like to see variants of Quickprop in action, check out my series of articles on the Cascade-Correlation Learning Architecture.

All finished notebooks and code of this series are also available on GitHub. Please feel encouraged to leave feedback and suggest improvements.

Finally, if you’d like to support the creation of this and similarly fascinating articles, you can sign up for a medium membership and/or follow my account.

[1] S. E. Fahlman, An empirical study of learning speed in back-propagation networks (1988), Carnegie Mellon University, Computer Science Department

[2] S. E. Fahlman and C. Lebiere, The cascade-correlation learning architecture (1990), Advances in neural information processing systems (pp. 524–532)

[3] C. A. Brust, S. Sickert, M. Simon, E. Rodner and J. Denzler, Neither Quick Nor Proper — Evaluation of QuickProp for Learning Deep Neural Networks (2016), arXiv preprint arXiv:1606.04333
