# Adaptive Algorithms for PCA - CNEL

**Part II**

Recap: Oja's rule is the basic learning rule for PCA and extracts the first principal component. The deflation procedure can then be used to estimate the minor eigencomponents. Sanger's rule performs an on-line deflation, using Oja's rule to estimate each eigencomponent.

Problems with Sanger's rule: strictly speaking, Sanger's rule is non-local, which makes it a little harder to implement in VLSI. Non-local rules are also termed biologically non-plausible (as engineers, we don't care very much about this). Sanger's rule also converges slowly; we will see later that many algorithms for PCA converge slowly.

## Other Adaptive Structures for PCA

The first step is to change the architecture of the network so that the update rules become local.

*(Figure: a single-layer network with input x(n), feedforward weights W, and lateral weights C between the outputs.)*
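For reference, Oja's rule with Sanger's on-line deflation (the Generalized Hebbian Algorithm) can be sketched as follows. The data, step-size, and number of passes are illustrative choices of mine, not from the original slides:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative zero-mean data whose covariance is diag(9, 2.25, 0.25),
# so the top two eigenvectors are the first two coordinate axes.
X = rng.standard_normal((4000, 3)) @ np.diag([3.0, 1.5, 0.5])

n_out = 2                      # number of principal components to extract
W = rng.standard_normal((n_out, X.shape[1])) * 0.1
eta = 1e-3                     # step-size (illustrative)

for _ in range(5):             # a few passes over the data
    for x in X:
        y = W @ x
        # Sanger's rule: Oja's rule per output unit, with the deflation
        # carried out on-line by the lower-triangular term.
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

W /= np.linalg.norm(W, axis=1, keepdims=True)
print(np.abs(W))               # rows approximate the top two eigenvectors
```

Note the `np.tril` term: it is exactly the non-local part of the rule, since unit i needs the outputs of all units above it.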

This is the Rubner-Tavan model. The output vector is given by

$$y = Wx + Cy$$

or, component-wise for a two-output network,

$$y_1 = w_1^T x, \qquad y_2 = w_2^T x + c\, y_1$$

C is a lower-triangular matrix, usually called the lateral weight matrix or the lateral inhibitor matrix. The feedforward weights W are trained using Oja's rule, and the lateral weights are trained using the anti-Hebbian rule:

$$c_{ij}(n+1) = c_{ij}(n) - \eta\, y_i(n)\, y_j(n)$$

Why does this work? We know that the eigenvectors are all orthonormal, so the outputs of the converged network are all uncorrelated. Since anti-Hebbian learning decorrelates signals, we can use it to train the lateral network.
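A minimal sketch of the Rubner-Tavan scheme for a two-output network, assuming illustrative data and step-sizes (both are my choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative zero-mean data; the top two eigenvectors are the first two axes.
X = rng.standard_normal((4000, 3)) @ np.diag([3.0, 1.5, 0.5])

n_out = 2
W = rng.standard_normal((n_out, X.shape[1])) * 0.1   # feedforward weights
C = np.zeros((n_out, n_out))                         # lateral weights, lower triangular
eta_w, eta_c = 1e-3, 1e-3                            # step-sizes (illustrative)

for _ in range(5):
    for x in X:
        y = W @ x
        for i in range(1, n_out):                    # lateral inhibition: y = Wx + Cy
            y[i] += C[i, :i] @ y[:i]
        for i in range(n_out):
            W[i] += eta_w * y[i] * (x - y[i] * W[i])  # Oja's rule (local)
            C[i, :i] -= eta_c * y[i] * y[:i]          # anti-Hebbian rule (local)

W /= np.linalg.norm(W, axis=1, keepdims=True)
print(np.abs(W))   # rows approximate the top two eigenvectors
```

Every update here uses only quantities available at the two ends of the corresponding connection, which is the locality property the architecture was designed for.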

## Most Important Contributions of the Rubner-Tavan Model

- Local update rules, and hence biological plausibility.
- Introduction of the lateral network for estimating the minor components, instead of using deflation.

However, the Rubner-Tavan model is slow to converge. The APEX (Adaptive Principal Component Extraction) network slightly improves the convergence speed of the Rubner-Tavan method. APEX uses exactly the same network architecture as Rubner-Tavan, and the feedforward weights are trained using Oja's rule as before, but the lateral weights are trained using a *normalized* anti-Hebbian rule instead of the plain anti-Hebbian rule:

$$c_{ij}(n+1) = c_{ij}(n) - \eta \left[\, y_i(n)\, y_j(n) + y_i^2(n)\, c_{ij}(n) \,\right]$$

This is very similar to the normalization we applied to Hebbian learning; you can think of it as Oja's rule for anti-Hebbian learning. APEX is faster because the normalized anti-Hebbian rule is used to train the lateral net. It should be noted that at convergence all the lateral weights must go to zero: once the outputs are uncorrelated, there should not be any connections between them (and once the cross-correlation term vanishes, the decay term $y_i^2(n)\, c_{ij}(n)$ drives each $c_{ij}$ to zero).

## Faster Methods for PCA

All the adaptive models we have discussed so far are based on gradient formulations. Simple gradient methods are usually slow, and their convergence depends heavily on the selection of the right step-sizes.
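Circling back to APEX for a moment: the only change from the Rubner-Tavan sketch is the decay term in the lateral update (data and step-sizes are again illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative zero-mean data; the top two eigenvectors are the first two axes.
X = rng.standard_normal((4000, 3)) @ np.diag([3.0, 1.5, 0.5])

n_out = 2
W = rng.standard_normal((n_out, X.shape[1])) * 0.1   # feedforward weights
C = np.zeros((n_out, n_out))                         # lateral weights, lower triangular
eta = 1e-3                                           # step-size (illustrative)

for _ in range(5):
    for x in X:
        y = W @ x
        for i in range(1, n_out):                    # lateral inhibition: y = Wx + Cy
            y[i] += C[i, :i] @ y[:i]
        for i in range(n_out):
            W[i] += eta * y[i] * (x - y[i] * W[i])   # Oja's rule, as before
            # Normalized anti-Hebbian rule: the extra y_i^2 * c term decays
            # the lateral weights toward zero as the outputs decorrelate.
            C[i, :i] -= eta * (y[i] * y[:i] + y[i] ** 2 * C[i, :i])

W /= np.linalg.norm(W, axis=1, keepdims=True)
print(np.abs(W))       # rows approximate the top two eigenvectors
print(abs(C[1, 0]))    # lateral weight, near zero at convergence
```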

Usually, the right step-size depends directly on the eigenvalues of the input data. Researchers have used different optimization criteria instead of simple steepest descent; these no doubt increase the speed of convergence, but they increase the computational cost as well. There is always a trade-off between speed of convergence and complexity. Some subspace techniques, like the natural-power method and Projection Approximation Subspace Tracking (PAST), are faster than the traditional Sanger's or APEX rules but are computationally intensive; most of them involve direct matrix multiplications.

## CNEL Rule for PCA

$$y_a(n) = w^T(n-1)\, x(n)$$

$$P(n) = P(n-1) + x(n)\, y_a(n)$$

$$Q(n) = Q(n-1) + y_a^2(n)$$

$$w(n) = (1-T)\, w(n-1) + T\, \frac{P(n)}{Q(n)}$$

Any value of T can be chosen, as long as T < 1.

There is no step-size in the algorithm, the algorithm is O(N) and on-line, and, most importantly, it is faster than many other PCA algorithms.

*(Figure: performance on a violin time series, eigenspread = 802, comparing the CNEL rule with Sanger's rule.)*
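As reconstructed above, the CNEL rule is a fixed-point-style update driven by the running sums P and Q rather than a step-size. A sketch under the same illustrative data assumptions as before:

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative zero-mean data; the first principal component is the first axis.
X = rng.standard_normal((3000, 3)) @ np.diag([3.0, 1.5, 0.5])

m = X.shape[1]
w = rng.standard_normal(m)
w /= np.linalg.norm(w)
P = np.zeros(m)        # running input-output correlation
Q = 0.0                # running output power
T = 0.9                # any value with T < 1; there is no step-size to tune

for x in X:
    ya = w @ x                     # output using the previous weights
    P += x * ya
    Q += ya ** 2
    w = (1 - T) * w + T * (P / Q)  # fixed-point-style update

w /= np.linalg.norm(w)
print(np.abs(w))                   # approximates the first principal component
```

Since P/Q estimates Rw / (w'Rw), the unit-norm principal eigenvector is a fixed point of the iteration, which is why no step-size appears.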
