This is a continuation of the previous linear regression document. This time, however, the focus is on a few different ways to perform non-linear regression using Javascript and various Python frameworks (such as NumPy, SciKit Learn, and TensorFlow).
Quadratic Regression
Perhaps this is the first logical step beyond linear regression: a parabola of best fit is found for the dataset as opposed to a line of best fit. We should note that quadratic regression is not necessarily a regression problem in the strict sense; it is usually referred to as curve fitting instead. The equation of a parabola is $y = ax^2 + bx + c$, and the goal is to determine the coefficients a, b and c.
To be brief, the fundamental math is omitted and just the final solution is shown. By constructing the following matrix equation and then solving the resulting system, the coefficients (a, b, c) can easily be found:
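$$\begin{bmatrix} \sum x_i^4 & \sum x_i^3 & \sum x_i^2 \\ \sum x_i^3 & \sum x_i^2 & \sum x_i \\ \sum x_i^2 & \sum x_i & m \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} \sum x_i^2 y_i \\ \sum x_i y_i \\ \sum y_i \end{bmatrix}$$
Here m is the number of data points and the sums run over all of them; these are exactly the sums accumulated in the code below.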
Also, the $R^2$ value can be found using its standard definition:
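$$R^2 = 1 - \frac{\sum_{i=1}^{m} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{m} (y_i - \bar{y})^2}$$
where $\hat{y}_i$ is the value predicted by the fitted parabola and $\bar{y}$ is the mean of the observed y values.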
Python Solution…
There is no premade quadratic regression function in NumPy or SciKit Learn, since there is a better and more generic solution that works for any polynomial (which will be examined further in the next section).
However, NumPy makes it very easy to solve the above equations since it contains some nice matrix functions.
import numpy as np

# accumulate the sums needed for the matrix equation above
sum_x = 0
sum_x2 = 0
sum_x3 = 0
sum_x4 = 0
sum_y = 0
sum_xy = 0
sum_x2y = 0
for point in points:
    sum_x += point[0]
    sum_x2 += point[0] ** 2
    sum_x3 += point[0] ** 3
    sum_x4 += point[0] ** 4
    sum_y += point[1]
    sum_xy += point[1] * point[0]
    sum_x2y += point[1] * point[0] ** 2

# build the coefficient matrix and the right-hand side vector
A = np.matrix ( [ [ sum_x4, sum_x3, sum_x2 ],
                  [ sum_x3, sum_x2, sum_x ],
                  [ sum_x2, sum_x, len(points) ] ] )
B = np.matrix ( [ sum_x2y, sum_xy, sum_y ] )

# invert A and solve; since A is symmetric, B * Ainv gives the
# same (row vector) result as Ainv * B.T
Ainv = np.linalg.inv (A)
coefs = B * Ainv

print ('a (x^2) --> ', coefs.item(0))
print ('b (x^1) --> ', coefs.item(1))
print ('c (x^0) --> ', coefs.item(2))
Javascript Solution…
A small app was made using Javascript so one can interactively see the fitting results (click here to view it in a separate window). To calculate the matrices, the math.js library was used.
Polynomial Regression/Fitting…
Polynomial fitting is a bit more generic: the goal is to fit a polynomial of some specified degree (n) to the data points, that is, to determine the values of each coefficient ($\alpha_i$) that best fit the data.
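In other words, the aim is to find the coefficients of
$$P(x) = \alpha_0 + \alpha_1 x + \alpha_2 x^2 + \dots + \alpha_n x^n$$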
Vandermonde Matrix Technique
There are a few different ways to solve such a problem; perhaps the most efficient is to use the Vandermonde matrix (VM) based technique. First, the VM matrix is formed by taking powers from 0 up to the required degree (n) of all m given x values:
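$$V = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^n \\ 1 & x_2 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_m & x_m^2 & \cdots & x_m^n \end{bmatrix}$$
with one row per data point.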
Then, using the VM matrix, the $\alpha$ values can be determined by some matrix manipulation; note that $\alpha$ and y are column vectors of lengths n + 1 and m respectively:
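$$V\alpha = y \quad\Longrightarrow\quad \alpha = \left( V^T V \right)^{-1} V^T y$$
which is the usual least squares solution to this (generally overdetermined) system.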
A Python solution can easily be made using NumPy's matrix related functions, as sketched below. A Javascript based solution was also written for the interactive app.
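For example, a rough sketch of the matrix approach (assuming xPoints and yPoints are plain Python lists and maxDegree is the desired polynomial degree) might look like:

import numpy as np

# build the Vandermonde matrix with powers 0 .. maxDegree (increasing order)
V = np.vander (np.array(xPoints, dtype=float), maxDegree + 1, increasing=True)

# solve the least squares problem V * coefs ~= y
coefs, residuals, rank, sv = np.linalg.lstsq (V, np.array(yPoints, dtype=float), rcond=None)

for i, c in enumerate (coefs):
    print (f'x^{i} --> {c}')

Note that np.linalg.lstsq solves the same least squares problem without explicitly forming $(V^T V)^{-1}$.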
NumPy Solution
NumPy's polyfit function finds the polynomial of best fit using the Vandermonde matrix. Hence, things are a bit easier than in the previous solution:
import numpy as np

coefs = np.polynomial.polynomial.polyfit (np.array(xPoints),
                                          np.array(yPoints),
                                          maxDegree)
for i in range(len(coefs)):
    print (f'x^{i} --> {coefs[i]}')
SciPy Solution
SciPy contains the curve_fit function, which can be used to find the quadratic function of best fit (as seen in the code below). It does not use the above matrix technique; rather, it uses non-linear least squares to fit a given function to the data. Note that the function can be anything, not just a polynomial.
import scipy.optimize

# the model function whose parameters curve_fit will determine
def QuadraticFunc (x, a, b, c):
    return a * x**2 + b * x + c

# curve_fit returns the optimal parameters and their covariance matrix
results = scipy.optimize.curve_fit (QuadraticFunc, xPoints, yPoints)
coefs = results[0]

print ('x^0 --> ', coefs[2])
print ('x^1 --> ', coefs[1])
print ('x^2 --> ', coefs[0])
SciKit Learn
SciKit Learn takes a slightly different approach: the polynomial terms are generated first and an ordinary (linear) least squares fit is then performed on them. The code is slightly more complicated since it uses two elements: PolynomialFeatures and LinearRegression…
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# generate the polynomial terms (x, x^2, ..., x^maxDegree) from the x values
polyFeatures = PolynomialFeatures (degree=maxDegree, include_bias=False)
polyX = polyFeatures.fit_transform (np.array(xPoints).reshape(-1, 1))

# fit an ordinary linear regression on the polynomial terms
model = LinearRegression (fit_intercept=True)
model.fit (polyX, yPoints)

print (f'x^0 --> {model.intercept_}')
for i in range (len(model.coef_)):
    print (f'x^{i+1} --> {model.coef_[i]}')
Gradient Descent
Gradient descent is a numerical technique used to perform optimization on a particular function (the objective/cost/loss function). The main goal is to adjust the parameters so that the cost function is minimized. While there are a variety of numerical optimization techniques out there, gradient descent is fairly simple and so it's easy to implement. However, it has some caveats: the cost function needs to be convex (otherwise only a local minimum may be found), and the learning rate cannot be so large that it steps over a possible solution.
Recall that we are trying to find the polynomial $P(x) = a_0 + a_1 x + \dots + a_n x^n$ that best fits the dataset. Note that it is up to the user to choose the maximum degree (n) so that the results do not under or overfit the data.
Next, the loss (or cost) function ($\ell$) that is used is simply the mean squared error function and is defined as:
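$$\ell = \frac{1}{2m} \sum_{i=1}^{m} \left( P(x_i) - y_i \right)^2$$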
Two things to note about $\ell$: it is computed over all the given points in the dataset (from 1 to m), and there is an extra 2 in the denominator so that things are slightly neater when the derivative is computed.
During each step of the gradient descent process, each coefficient value $a_j$ is recomputed using its current value along with the derivative of the loss function and the learning rate ($\alpha$) in the following manner:
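$$a_j \leftarrow a_j - \alpha \, \frac{\partial \ell}{\partial a_j}$$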
Finding the derivative (or gradient) of the loss function we get:
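$$\frac{\partial \ell}{\partial a_j} = \frac{1}{m} \sum_{i=1}^{m} \left( P(x_i) - y_i \right) x_i^{\,j}$$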
Therefore, at each gradient descent step, for all coefficient values we need to do the following calculations:
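$$a_j \leftarrow a_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( P(x_i) - y_i \right) x_i^{\,j} \qquad j = 0, 1, \dots, n$$

As a rough NumPy sketch of this procedure (xValues and yValues are assumed to be 1-D NumPy arrays; the learning rate, number of coefficients and number of epochs are placeholders to be tuned):

import numpy as np

alpha = 0.001                  # learning rate
numCoeffs = 3                  # n + 1 coefficients for a degree-n polynomial
maxEpochs = 5000

powers = np.vander (xValues, numCoeffs, increasing=True)   # columns are x^0 .. x^n
a = np.zeros (numCoeffs)                                   # start all coefficients at 0

for epoch in range (maxEpochs):
    residuals = powers @ a - yValues                       # P(x_i) - y_i for every point
    gradient = (powers.T @ residuals) / len (xValues)      # dl/da_j for every coefficient
    a -= alpha * gradient                                  # the update step above

for i, coef in enumerate (a):
    print (f'x^{i} --> {coef}')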
TensorFlow Solution
There are a few components in TensorFlow that make it possible to solve such regression problems in a manner similar to the gradient descent technique above. Instead of computing the derivative ourselves, TensorFlow 2 has the GradientTape functionality, which computes the gradients of the cost with respect to all coefficients automatically.
The main steps of solving a polynomial fitting problem with TensorFlow are listed below:
import tensorflow as tf

# NUM_COEFFS, MAX_EPOCHS, alpha (the learning rate), xValues and yValues
# are assumed to be defined/loaded elsewhere
coefs = tf.Variable ([0.0] * NUM_COEFFS, name='coefficients')

# the polynomial prediction function
def PolyFunc (x):
    yValue = 0
    for i in range (NUM_COEFFS):
        yValue += coefs[i] * x ** i
    return yValue

# mean squared error cost function
def CostFunc (yActual, yPred):
    return tf.reduce_mean (tf.square (yActual - yPred))

for epoch in range (MAX_EPOCHS+1):
    with tf.GradientTape () as tape:
        yPredicted = PolyFunc (xValues)
        cost = CostFunc (yValues, yPredicted)
    gradients = tape.gradient (cost, coefs)
    coefs.assign_sub (alpha * gradients)

for i, coef in enumerate (coefs.numpy()):
    print (f'x^{i} --> {coef}')
Note that only a TensorFlow 2 solution was created. We are moving on from TensorFlow 1.
Logistic Regression
Logistic regression is perhaps one of the most important types of regression methods out there. It is used to model the probability of a certain class or event occurring, such as pass/fail, or for classifying something (such as whether food is a hot dog or not a hot dog).
Since logistic regression is used here to classify between two categories, the sigmoid function ($\sigma$) is used to map the various input values onto two distinct classes (for multi-category logistic regression, the softmax function is used instead):
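$$\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad z = w_1 x_1 + w_2 x_2 + \dots + w_k x_k + b$$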
Where z is a linear function comprised of weights (w_i) of each feature (x_i) and a bias (b) term to adjust the fitting.
To keep things simple, all of our solutions have only one feature, so we only need to solve for one weight value; that is:
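$$z = wx + b \qquad\Longrightarrow\qquad \hat{y} = \sigma(wx + b)$$
where $\hat{y}$ is the predicted probability.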
The goal with logistic regression is to determine the weight and bias terms, which then determine the classes/categories. Since the sigmoid function produces probabilities, a threshold value of 0.5 is typically used to decide what class a prediction belongs to; that is:
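$$\text{class} = \begin{cases} 1 & \text{if } \sigma(z) > 0.5 \\ 0 & \text{otherwise} \end{cases}$$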
In the following Javascript program, one can play around with the weight and bias and see how they impact the resulting sigmoid curve. Another important thing to notice is that different solution values can produce the same prediction outcome after the 0.5 threshold is applied. Finally, since logistic regression here applies to cases with two classes, any new points are 'snapped' to the y = 0 or y = 1 lines based on their proximity.
Gradient Descent Solution…
Manipulating the sigmoid function, one can quickly see that there is no analytical solution to logistic regression problems. Hence it needs to be solved numerically. Perhaps the most popular way is to use gradient descent.
The main difference when using gradient descent to solve logistic regression is the cost function: instead of the simple mean squared error function, the cross entropy function ($L_{CE}$) is used, and is given by:
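$$L_{CE} = -\frac{1}{m} \sum_{i=1}^{m} \left[ \, y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \, \right], \qquad \hat{y}_i = \sigma(w x_i + b)$$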
The derivative of $L_{CE}$ uses a few tricks related to logs and the sigmoid function, and works out to:
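$$\frac{\partial L_{CE}}{\partial w} = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right) x_i, \qquad \frac{\partial L_{CE}}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)$$

A minimal NumPy sketch of this gradient descent solution (assuming X and y are 1-D NumPy arrays of feature values and 0/1 labels; the learning rate and number of epochs are placeholders to be tuned):

import numpy as np

alpha = 0.01
maxEpochs = 5000
w, b = 0.0, 0.0

for epoch in range (maxEpochs):
    yHat = 1.0 / (1.0 + np.exp (-(w * X + b)))   # sigmoid predictions
    wGrad = np.mean ((yHat - y) * X)             # dL/dw
    bGrad = np.mean (yHat - y)                   # dL/db
    w -= alpha * wGrad
    b -= alpha * bGrad

print (f'Weight: {w}  Bias: {b}')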
SciKit Learn Solution…
SciKit Learn also solves the problem numerically (here using the lbfgs solver), however it makes things easy since no derivatives need to be determined by hand and no learning rate parameters need to be tweaked. Everything is nice and clean and contained in the LogisticRegression object.
import numpy as np
from sklearn.linear_model import LogisticRegression
logReg = LogisticRegression (random_state=0, solver='lbfgs')
logReg.fit (X, y)
print (f'Coefs: {logReg.coef_}')
print (f'Intercept: {logReg.intercept_}')
print (f'Actual Results: {y}')
print (f'Predict Results: {logReg.predict (X)}')
print (f'Model Score: {logReg.score (X, y)}')
StatsModel Solution…
The StatsModels package also contains a logistic regression solution, however it isn't as nice as the SciKit Learn version. The main problem is that, by default, no intercept (bias) term is included, so the predicted probability at x = 0 is fixed at 0.5. To change this, the add_constant function needs to be called to add a column of ones to the dataset.
import numpy as np
import statsmodels.api as sm
X, y = loadData ( <DataFile> )
# uncomment to add a constant to allow for a different x-intercept
# X = sm.add_constant (X, prepend=True)
# solve and report the results...
logit = sm.Logit (y, X).fit()
print (logit.summary())
print (logit.predict (X))
print (logit.predict (X) > 0.5)
print (y)
TensorFlow2 Solution…
TensorFlow2 has a built-in sigmoid function that is used as an activation function. However, for practice a gradient descent technique was used for our solution, as seen below…
import tensorflow as tf
import numpy as np

# Key parameters
ALPHA = 0.001
MAX_EPOCHS = 2000

# Weight and bias
tf.random.set_seed (2021)
W = tf.Variable (tf.random.normal ([1], mean=0.0))
b = tf.Variable (tf.random.normal ([1], mean=0.0))

# Hypothesis/Prediction Function
def predict (X):
    z = tf.math.multiply (X, W) + b
    pred = 1 / (1 + tf.exp (-z))
    return pred

# Training... that is solve using gradient descent
for e in range (MAX_EPOCHS+1):
    with tf.GradientTape() as tape:
        hypothesis = predict (X)
        cost = tf.reduce_mean (-tf.reduce_sum (y * tf.math.log (hypothesis) + (1 - y) * tf.math.log (1 - hypothesis)))
    W_grad, b_grad = tape.gradient (cost, [W, b])
    W.assign_sub (ALPHA * W_grad)
    b.assign_sub (ALPHA * b_grad)

# report results...
results = predict (X).numpy().tolist()
for i, value in enumerate (results):
    print (f'Probability: {value[0]:.5f} --> {(value[0] > 0.5)} Actual: {y[i] > 0.5}')
In the code above, only one weight is used; if multiple weights are needed, then the following changes need to be made (to make sure that matrices are used):
# N needs to be number of weights
W = tf.Variable (tf.random.normal ([N,1], mean=0.0))
# and when computing z...
z = tf.matmul (X, W) + b
Code…
All the code for this project (both in Python and Javascript) is available here.