ML - Expectation-Maximization Algorithm
In real-world machine learning applications, it is common to have many relevant
features available for learning, of which only a small subset is observable. For a
variable that is sometimes observable and sometimes not, we can use the instances
in which it is observed to learn, and then predict its value in the instances in which
it is not observed.
The Expectation-Maximization (EM) algorithm goes further: it can also be used for
latent variables (variables that are never directly observed and are instead inferred
from the values of other observed variables) to predict their values, on the condition
that the general form of the probability distribution governing those latent variables
is known to us. This algorithm underlies many unsupervised clustering algorithms in
the field of machine learning.
It was proposed, explained, and given its name in a 1977 paper by Arthur Dempster,
Nan Laird, and Donald Rubin. It is used to find local maximum-likelihood parameters
of a statistical model in cases where latent variables are involved and the data is
missing or incomplete.
Algorithm:
1. Given a set of incomplete data, start with a set of initial parameters.
2. Expectation step (E-step): Using the observed data in the dataset, estimate
(guess) the values of the missing data.
3. Maximization step (M-step): Use the complete data generated in the
expectation (E) step to update the parameters.
4. Repeat step 2 and step 3 until convergence.
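As a concrete illustration of these four steps, here is a minimal NumPy sketch of EM for a two-component one-dimensional Gaussian mixture. The function name em_gmm_1d, the initialization scheme, and the toy data are our own choices for illustration, not a prescribed implementation:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, tol=1e-6, seed=0):
    """EM for a two-component 1-D Gaussian mixture: returns the mixing
    weights, means, variances, and the last computed log-likelihood."""
    rng = np.random.default_rng(seed)
    # Step 1: starting parameters for the incomplete data (rough guesses).
    pi = np.array([0.5, 0.5])                  # mixing weights
    mu = rng.choice(x, size=2, replace=False)  # component means
    var = np.array([x.var(), x.var()])         # component variances
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: estimate the missing data, i.e. the posterior
        # responsibility of each component for each observed point.
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))        # shape (n, 2)
        joint = pi * dens
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: update the parameters using the "completed" data.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        # Step 4: repeat until the log-likelihood stops improving.
        ll = np.log(joint.sum(axis=1)).sum()
        if ll - ll_old < tol:
            break
        ll_old = ll
    return pi, mu, var, ll

# Toy usage: data drawn from two well-separated Gaussians.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 200)])
print(em_gmm_1d(x))
```

With data like the above, the estimated means should land near 0 and 5 and the mixing weights near 0.6 and 0.4, though the exact values depend on the random initialization.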
Usage of EM algorithm –
It can be used to fill the missing data in a sample.
It can be used as the basis of unsupervised learning of clusters (see the sketch after this list).
It can be used for the purpose of estimating the parameters of Hidden Markov
Model (HMM).
It can be used for discovering the values of latent variables.
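As one illustration of the clustering use above: scikit-learn's GaussianMixture estimator fits mixture parameters with EM internally, so a hedged sketch of EM-based clustering (with toy data of our own) looks like this:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: two 2-D Gaussian blobs.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, (300, 2)),
                    rng.normal(5.0, 1.0, (200, 2))])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.means_)          # estimated cluster centers
print(gm.predict(X[:5]))  # hard cluster assignments for a few points
```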
Advantages of EM algorithm –
The likelihood is guaranteed not to decrease with each iteration.
For many problems, the E-step and M-step are straightforward to implement.
Solutions to the M-step often exist in closed form.
Disadvantages of EM algorithm –
It converges slowly.
It converges only to a local optimum (a multi-start sketch follows this list).
It requires both the forward and backward probabilities (numerical
optimization requires only the forward probability).
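Because EM only reaches a local optimum, a common practical remedy (not part of the algorithm itself) is to run it from several random initializations and keep the best fit. A sketch, assuming the em_gmm_1d function and toy data x from the earlier example are in scope:

```python
# Run EM from several seeds and keep the restart with the highest
# log-likelihood (the last element returned by em_gmm_1d).
fits = [em_gmm_1d(x, seed=s) for s in range(10)]
pi, mu, var, ll = max(fits, key=lambda fit: fit[-1])
print(f"best log-likelihood over 10 restarts: {ll:.2f}")
```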