
Principal components analysis writeup




The link

http://rakaposhi.eas.asu.edu/cse494/notes/pplcomps.pdf

contains a very nice writeup on principal component analysis--which is
the theory behind LSI-style dimensionality reduction techniques.

The write-up is taken from Christopher Bishop's "Neural Networks for
Pattern Recognition". The picture of the ellipse-shaped data I showed
in class was from there.

This write-up explains very elegantly exactly why eigenvectors wind
up getting involved in dimensionality reduction. It also points out
that dimensionality reduction using eigenvectors is a "linear"
method--in that it considers only linear translations and rotations of
the coordinate frame. It doesn't support nonlinear transformations (see
the little circle picture).
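To make the linearity point concrete, here is a minimal sketch of PCA on some synthetic ellipse-shaped data (the data, seed, and shapes below are illustrative assumptions, not the figure from the writeup): center the data, eigendecompose the covariance matrix, and project onto the top eigenvector.

```python
import numpy as np

# Synthetic ellipse-shaped 2-D data: stretch one axis, then rotate 45 degrees.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * np.array([3.0, 1.0])
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = X @ R.T

# PCA: eigendecomposition of the covariance matrix of mean-centered data.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues

# Keep the eigenvector with the largest eigenvalue and project onto it.
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:1]]                # 2x1 projection matrix
X_reduced = Xc @ W                       # 500x1 reduced representation

print(X_reduced.shape)
```

Note that the reduction is just a matrix multiply--a rotation of the coordinate frame followed by dropping axes--which is exactly the sense in which the method is "linear": data lying on a circle or other curved manifold cannot be unrolled this way.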

Here is hoping that someone reads this stuff and redeems the hour I
spent scanning the pages, converting them to jpegs, including them in
a word file, converting the word file to postscript and then to pdf,
and uploading the pdf. 

Rao
[Feb  5, 2001]


ps: What did you say--give you xerox copies?  Aww..that is so
outmoded--look at all the time I saved by not having to go to the
xerox machine.