# Extension G: How are the logistic regression coefficients computed?

In logistic regression, the regression coefficients deal in probabilities, so they cannot be calculated in the same way as they are in linear regression. While in theory we could run a linear regression with logits as our outcome, we don't actually have a logit for each individual observation; we just have 0's or 1's. The regression coefficients therefore have to be estimated from the pattern of observations (0's and 1's) in relation to the explanatory variables in the data.

We don't need to be concerned with exactly how this works, but the process of maximum likelihood estimation (MLE) starts with an initial, arbitrary "guesstimate" of what the logit coefficients should be. MLE then adjusts the b's to maximise the log likelihood (LL), which reflects how likely it is that the observed values of the outcome could have been produced from the explanatory variables. After this initial function is estimated, the fit is assessed and a re-estimate is made with an improved function; the process is repeated (usually about half a dozen times) until convergence is reached, that is, until the improvement in the LL does not differ significantly from zero.
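The iterative process described above can be sketched in a few lines of code. This is a minimal illustration, not the exact algorithm any particular package uses: it simulates some 0/1 data (the coefficients and sample size are made up for the example), starts from an arbitrary guess of zero for the b's, and repeatedly improves them with a Newton-Raphson step until the improvement in the log likelihood is effectively zero.

```python
import numpy as np

# Illustrative data: one explanatory variable and a binary outcome.
# The "true" coefficients below are assumptions for the simulation.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])      # intercept + explanatory variable
true_b = np.array([-0.5, 1.2])
p_true = 1 / (1 + np.exp(-X @ true_b))
y = rng.binomial(1, p_true)               # observed 0's and 1's

b = np.zeros(2)                           # initial arbitrary "guesstimate"
old_ll = -np.inf
for iteration in range(25):
    p = 1 / (1 + np.exp(-X @ b))          # fitted probabilities
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))  # log likelihood
    if ll - old_ll < 1e-8:                # converged: LL improvement ~ zero
        break
    old_ll = ll
    W = p * (1 - p)                       # weights for the Newton step
    grad = X.T @ (y - p)                  # gradient of the LL
    hess = X.T @ (X * W[:, None])         # (negative) Hessian of the LL
    b = b + np.linalg.solve(hess, grad)   # re-estimate the b's

print(iteration, b)
```

In line with the text, this typically converges in around half a dozen iterations, and the estimated b's land close to the values used to simulate the data.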