
Logistic Regression

Buildup

Let’s think about performing a regression $Y \gets X_{1} , \cdots, X_{p}$. Here, $Y$ is a categorical variable, particularly one with only two classes, such as male and female, success and failure, positive and negative, $0$ and $1$, etc. For convenience, write $Y = 0$ or $Y = 1$. When the dependent variable is binary, the question of interest is: given the independent variables $X_{1} , \cdots , X_{p}$, what is $Y$?

However, since $Y$ is a qualitative variable, it cannot be expressed by a linear combination $y = \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}$ of regression coefficients and variables as in ordinary regression analysis. Instead, we approach the problem by modeling the probability that $Y = 1$.

Given $X = x$, let’s set the probability that $Y = 1$ as follows: $$\displaystyle \pi := P ( Y = 1 | X = x ) = {{ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} } \over { 1+ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} }}$$
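To make this concrete, $\pi$ can be computed directly from the linear predictor. Below is a minimal sketch in Python; the coefficient values and the observation are made up for illustration, and `sigmoid` is just a name for the map above (note that $1 / (1 + e^{-z}) = e^{z} / (1 + e^{z})$).

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued linear predictor z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients (beta_0, beta_1, ..., beta_p) and one observation;
# the leading 1 in x pairs with the intercept beta_0.
beta = np.array([-1.0, 0.5, 2.0])
x = np.array([1.0, 0.8, -0.3])

pi = sigmoid(beta @ x)  # P(Y = 1 | X = x)
print(pi)               # strictly between 0 and 1
```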

  • (i) Since the exponential function is always greater than $0$, the denominator of $\pi$ is strictly greater than its numerator, so $0 < \pi < 1$.
  • (ii) Naturally, the probability that $Y = 0$ is $$ \begin{align*} 1 - \pi =& P ( Y = 0 | X = x ) \\ =& 1 - {{ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} } \over { 1+ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} }} \\ =& {{ 1 } \over { 1+ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} }} \end{align*} $$ and thus the odds are $$\displaystyle { { \pi } \over { 1 - \pi } } = { { \displaystyle {{ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} } \over { 1+ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} }} } \over { \displaystyle {{ 1 } \over { 1+ e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} }} } } = e^{ \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p}} .$$ Taking the natural log of both sides gives $$\displaystyle \ln \left( { { \pi } \over { 1 - \pi } } \right) = \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{p} x_{p} .$$

Taking the log in this way is called a Logit Transformation, and $\displaystyle \ln \left( { { \pi } \over { 1 - \pi } } \right)$ is referred to as the Logit.
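As a quick sanity check of this algebra, here is a small sketch of the logit and its inverse; the function names are my own.

```python
import numpy as np

def logit(p):
    """Logit transformation: the log-odds ln(p / (1 - p))."""
    return np.log(p / (1.0 - p))

def inv_logit(eta):
    """Inverse logit: recovers p = e^eta / (1 + e^eta) from the log-odds eta."""
    return 1.0 / (1.0 + np.exp(-eta))

p = 0.8
eta = logit(p)         # ln(0.8 / 0.2) = ln 4 ≈ 1.386
print(inv_logit(eta))  # recovers 0.8
```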

Model 1

A multiple regression analysis $\displaystyle \ln \left( { { \pi } \over { 1 - \pi } } \right) \gets X_{1} , \cdots, X_{p}$ that takes the logit as the dependent variable is called Logistic Regression.

By applying the inverse of the logit transformation to the values fitted by the logistic model, we can recover the original probability $\pi$ we wanted to know. When the coefficient $\beta_{i}$ of $X_{i}$ is positive, the probability that $Y = 1$ increases as $X_{i}$ increases; when it is negative, the probability that $Y = 1$ decreases (equivalently, the probability that $Y = 0$ increases) as $X_{i}$ increases.
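For example, here is a hedged sketch of fitting such a model with scikit-learn on synthetic data; the true coefficients ($0.5$, $2.0$, $-1.5$) are invented so that the signs of the fitted coefficients can be checked against the interpretation above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: Y = 1 grows more likely with x1 and less likely with x2.
n = 500
X = rng.normal(size=(n, 2))
true_logit = 0.5 + 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

model = LogisticRegression().fit(X, y)
print(model.intercept_, model.coef_)  # signs should roughly match +2.0 and -1.5
```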

Moreover, while logistic regression is a prediction technique in that it reports the probability of an outcome under given conditions, it can also serve as a classification technique once an appropriate Threshold is chosen for that probability, as sketched below.
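Classification then amounts to comparing each predicted probability against the chosen threshold; the probabilities and the cutoff below are hypothetical.

```python
import numpy as np

# Hypothetical fitted probabilities P(Y = 1 | X = x) for five observations
pi_hat = np.array([0.10, 0.45, 0.52, 0.80, 0.97])

threshold = 0.5  # an assumed cutoff; choose per application
y_pred = (pi_hat >= threshold).astype(int)
print(y_pred)    # [0 0 1 1 1]
```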

  1. Hadi. (2006). Regression Analysis by Example (4th Edition): p318~320. ↩︎