# Least Squares Fit of a General Polynomial to Data

To finish the progression of examples, I will give the equations needed to fit any polynomial to a set of data. For this I'll return to x,y data pairs, and determine coefficients for an (m-1)th order polynomial in the form:

$$ y(x) = c_1 + c_2 x + c_3 x^2 + \cdots + c_m x^{m-1} $$

I've chosen this peculiar form for the maximum power of x, to obtain a clean final notation for the problem. In summation notation, this polynomial becomes:

$$ y(x) = \sum_{j=1}^{m} c_j x^{j-1} $$

The form for the error function is:

$$ E = \sum_{i=1}^{n} \left[ y_i - \sum_{j=1}^{m} c_j x_i^{j-1} \right]^2 $$

As before, the minimum error is at the point where the partial derivatives of the error function with respect to the coefficients are all zero.
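The summation form translates directly into code. As an illustrative sketch (in Python rather than the book's Fortran; the names `poly_eval` and `sq_error` are my own), the polynomial and its error function might look like:

```python
def poly_eval(c, x):
    """Evaluate y(x) = c[0] + c[1]*x + ... + c[m-1]*x**(m-1).

    c holds the m coefficients; Python's 0-based indexing shifts the
    text's c_j * x**(j-1) to c[j] * x**j.
    """
    return sum(cj * x**j for j, cj in enumerate(c))

def sq_error(c, xs, ys):
    """Sum of squared residuals E for candidate coefficients c."""
    return sum((y - poly_eval(c, x))**2 for x, y in zip(xs, ys))

# Example: y = 1 + 2x passes through (0,1), (1,3), (2,5) exactly, so E = 0.
print(sq_error([1.0, 2.0], [0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))
```

The least-squares machinery below amounts to choosing the `c` that minimizes `sq_error` for the given data.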

The equation resulting from evaluating the partial derivative with respect to c_j is:

$$ \frac{\partial E}{\partial c_j} = -2 \sum_{i=1}^{n} x_i^{j-1} \left[ y_i - \sum_{k=1}^{m} c_k x_i^{k-1} \right] = 0 $$

Dividing both sides of the final form by 2 and rearranging gives:

$$ \sum_{k=1}^{m} \left[ \sum_{i=1}^{n} x_i^{j+k-2} \right] c_k = \sum_{i=1}^{n} x_i^{j-1} y_i , \qquad j = 1, \ldots, m $$

Using standard notation for linear algebra, these equations can be written as:

$$ A\,c = b , \qquad A_{jk} = \sum_{i=1}^{n} x_i^{j+k-2} , \qquad b_j = \sum_{i=1}^{n} x_i^{j-1} y_i $$

I leave the Fortran to you for now.
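Since the Fortran is left as an exercise, here is a minimal sketch of the same procedure in Python (the name `polyfit_normal` is my own): build the matrix and right-hand side from the sums of powers of x, then solve the m-by-m linear system with Gaussian elimination and partial pivoting.

```python
def polyfit_normal(xs, ys, m):
    """Fit an (m-1)th order polynomial to data by the normal equations.

    Returns the m coefficients c, with 0-based indices, so
    A[j][k] = sum of x**(j+k) matches the text's sum of x**(j+k-2).
    """
    A = [[sum(x**(j + k) for x in xs) for k in range(m)] for j in range(m)]
    b = [sum(x**j * y for x, y in zip(xs, ys)) for j in range(m)]

    # Forward elimination with partial pivoting.
    for p in range(m):
        piv = max(range(p, m), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, m):
            f = A[r][p] / A[p][p]
            for col in range(p, m):
                A[r][col] -= f * A[p][col]
            b[r] -= f * b[p]

    # Back substitution.
    c = [0.0] * m
    for j in range(m - 1, -1, -1):
        c[j] = (b[j] - sum(A[j][k] * c[k] for k in range(j + 1, m))) / A[j][j]
    return c

# Example: data generated from y = 1 + x + x**2, fit with m = 3.
print(polyfit_normal([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 7.0, 13.0], 3))
```

Be aware that the normal-equations matrix becomes badly conditioned as m grows; for low-order fits on modest data sets, as here, plain elimination is adequate.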

Before closing the discussion of general curve fitting, it's time to answer the question: Why polynomials? Basically because they provide the simplest functions in which the undetermined coefficients appear as linear terms. Without this linearity, the equations that result from setting the partial derivatives of the error to zero are themselves nonlinear and more difficult to solve.

It's common in engineering to look for coefficients to fit an equation in the form:

$$ y = c_1 x^{c_2} $$

The standard error function for fitting this to a set of data is:

$$ E = \sum_{i=1}^{n} \left( y_i - c_1 x_i^{c_2} \right)^2 $$

When you set the partial derivatives with respect to c1 and c2 equal to zero, you will discover a set of two nonlinear equations that could be solved with a Newton iteration. However, there is an easier way out of the problem. Recast the original fitting equation as:

$$ \ln(y) = \ln(c_1) + c_2 \ln(x) $$

and the error function as:

$$ E = \sum_{i=1}^{n} \left[ \ln(y_i) - \ln(c_1) - c_2 \ln(x_i) \right]^2 $$

We are now fitting a straight line to the logarithms of the original data.
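The linearized fit reduces to the familiar slope/intercept formulas for a straight line, applied to the (ln x, ln y) pairs. A sketch (Python again; `power_fit` is my own name):

```python
import math

def power_fit(xs, ys):
    """Fit y = c1 * x**c2 by linear least squares on the logs of the data.

    Requires all xs and ys positive, since we take logarithms.
    Returns (c1, c2).
    """
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    Sx = sum(lx)
    Sy = sum(ly)
    Sxx = sum(v * v for v in lx)
    Sxy = sum(a * b for a, b in zip(lx, ly))
    # Straight-line fit of ln(y) = ln(c1) + c2 * ln(x).
    c2 = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
    ln_c1 = (Sy - c2 * Sx) / n
    return math.exp(ln_c1), c2

# Example: data generated from y = 2 * x**3.
print(power_fit([1.0, 2.0, 3.0, 4.0], [2.0, 16.0, 54.0, 128.0]))
```

Note that minimizing error in the logarithms is not identical to minimizing error in the original variables: the transformed fit effectively weights relative rather than absolute deviations. For many engineering purposes that is acceptable, and it avoids the Newton iteration entirely.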