
A worked out LDA example



Folks

 I am attaching a scan of four pages from Kevin Murphy's book on Machine Learning. The last page has a small worked-out example of LDA parameter estimation.

Here is how to read the file:

First page: This is there just to let you see the LDA plate model (Kevin uses yet another notation to describe it, with \pi in place of \theta and q in place of z).

Second page: shows the unrolled LDA model *before* and *after* marginalizing out \pi. You will notice, as I mentioned in my mail yesterday, that when the \pi variables are integrated out, the q variables become correlated. (What is interesting here, if you look closely, is that the connections between the q variables are direction-less. That makes that part of the network a Markov network (which we haven't discussed in class) rather than a Bayes network, so the whole network becomes a mixed graphical model after marginalization. This is part of the complexity under the hood. Note that Hinrich's paper derives the Gibbs sampling update rules directly from the joint over the q variables, without bothering with the graphical model view.)
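
For reference (this is my own summary, not part of the scan), the collapsed Gibbs update that such a derivation arrives at, written in the q notation of the scan with symmetric hyperparameters \alpha and \beta (the scan may use slightly different symbols), is

    p(q_i = k | q_{-i}, w)  \propto  (n^{-i}_{d_i,k} + \alpha) * (n^{-i}_{k,w_i} + \beta) / (n^{-i}_{k,\cdot} + V\beta)

where n^{-i}_{d,k} counts the words in document d currently assigned to topic k, n^{-i}_{k,w} counts the assignments of word w to topic k, n^{-i}_{k,\cdot} is the total count for topic k, V is the vocabulary size, and the superscript -i means the counts are taken with the current position i removed. The first factor comes from integrating out the document-topic proportions \pi, the second from integrating out the topic-word distributions.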

Third page: the collapsed Gibbs sampler is derived (with many more missing steps than in Hinrich's).
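
If it helps to see the sampler as code, here is a minimal sketch of my own (not from the scan) of collapsed Gibbs sweeps that implement the update above; docs is assumed to be a list of lists of word ids in 0..V-1, and alpha/beta are the symmetric hyperparameters:

    import numpy as np

    def collapsed_gibbs_lda(docs, K, V, alpha=0.1, beta=0.01, iters=100, seed=0):
        # A toy collapsed Gibbs sampler for LDA (illustration only).
        rng = np.random.default_rng(seed)
        D = len(docs)
        ndk = np.zeros((D, K))   # document-topic counts
        nkw = np.zeros((K, V))   # topic-word counts
        nk = np.zeros(K)         # topic totals
        z = [rng.integers(K, size=len(doc)) for doc in docs]  # random init of the q variables

        # initialize counts from the random assignment
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

        for _ in range(iters):
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    k = z[d][i]
                    # remove word i from the counts (the "-i" in the update rule)
                    ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                    # p(q_i = k | q_{-i}, w) up to a normalizing constant
                    p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                    k = rng.choice(K, p=p / p.sum())
                    # add the word back under its newly sampled topic
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
                    z[d][i] = k
        return z, ndk, nkw

You can run it on a toy corpus, e.g. collapsed_gibbs_lda([[0, 1, 2], [2, 3, 3]], K=2, V=4), and read the (smoothed, normalized) rows of ndk and nkw as the estimated document-topic and topic-word distributions.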

End of third and beginning of fourth page: a short worked-out example.


regards
Rao

Attachment: lda-kevin-murphy.pdf
Description: Adobe PDF document