
Fwd: neural networks class in Spring 2002




>Date: Mon, 29 Oct 2001 11:05:00 -0700
>From: Kari Torkkola <Kari.Torkkola@motorola.com>
>Subject: neural networks class
>To: rao@asu.edu, hliu@asu.edu
>Organization: Motorola Labs
>X-Mailer: Mozilla 4.61 [en]C-CCK-MCD   (WinNT; I)
>X-Accept-Language: en
>
>Huan, Rao,
>
>If you have students who might be interested in neural networks
>from a machine learning perspective in the spring semester 2002,
>could you please pass the attached ad to them. It would also be
>advantageous to stress that they should register early to avoid
>cancellation of the class.
>
>Thanks!
>
>- Kari
>
>EEE511: Artificial Neural Computation Systems
>
>This course covers the principles of artificial neural networks (ANN):
>collective computational phenomena emerging from simple interconnected
>elements.  The emphasis is on the word "artificial" as opposed to real.
>This class will not touch much on actual neuroscience; it concentrates
>on the engineering aspects of the field. Our point of view is that of machine
>learning: how do we make a network of neuron-like elements learn, from
>a set of data representative of the problem, to solve that problem,
>which may be pattern classification (speech recognition, handwriting
>recognition), function approximation, control of a robot or a plant,
>or anything else among the myriad possible applications.
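>As a hedged illustration of that viewpoint (my own sketch, not course
>material), here is a single neuron-like element learning a toy
>classification problem, the logical AND, from four data points using
>the classic perceptron rule:

```python
import numpy as np

# Toy data: inputs and desired outputs for logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights of the single neuron
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(100):
    mistakes = 0
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        if pred != target:
            # Perceptron rule: nudge weights toward the misclassified example.
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
            mistakes += 1
    if mistakes == 0:   # converged: every training point classified correctly
        break

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # -> [0, 0, 0, 1]
```

>The network is "trained" purely from examples; nothing about AND is
>hard-coded. The problem must be linearly separable for this single-layer
>rule to converge, which motivates the multilayer networks on the syllabus.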
>
>Our purpose is to obtain a coherent overview of the field, the ability to
>find and digest deeper knowledge in any particular subfield, and, most
>importantly, the ability to apply an appropriate type of ANN model to a
>given problem. A firm grounding in statistics is maintained throughout.
>
>Of the grade, 70% will be determined by 4-5 sets of homework, each
>involving problems, some programming, and experimentation.
>The course will culminate in a final project, which constitutes 30% of
>the grade. This is a largish real-world problem, preferably of the
>student's own choice and from a domain close to his/her heart.
>
>Further information is available at the class website
>http://www.eas.asu.edu/~eee511
>
>Instructor: Dr. Kari Torkkola, Motorola Labs, <Kari.Torkkola@motorola.com>
>
>Textbook: Neural Networks: A Comprehensive Foundation, 2nd ed.,
>by Simon Haykin.  Prentice Hall,  1999,  842 pages
>
>The tentative syllabus is as follows:
>      Artificial neural networks in relation to other disciplines
>      Application areas of ANNs
>      Motivating demonstrations
>      Neuron models
>      Single layer networks and unconstrained optimization methods (Ch. 3)
>      Single layer networks: LMS (Ch. 3)
>      Single layer networks: Perceptrons  (Ch. 3)
>      Multilayer perceptrons and back-propagation (Ch. 4)
>      Representational capabilities of MLPs, training issues (Ch. 4)
>      Generalization, overfitting with multilayer networks (Ch. 4)
>      Regularization, Cross-Validation (Ch. 4)
>      Optimization methods for MLPs (Ch. 4)
>      Radial basis function networks, Introduction, Exact interpolation (Ch. 5)
>      Radial basis function networks, Regularization, Model selection (Ch. 5)
>      Radial basis function networks, Learning basis functions (Ch. 5)
>      The self-organizing map, Introduction, The basic algorithm  (Ch. 9)
>      The self-organizing map, Some example applications, Analysis of the algorithm (Ch. 9)
>      The self-organizing map,  Variants of SOMs  (Ch. 9)
>      The self-organizing map: Tree-Structured SOMs, Applications in optimization.
>      Learning vector quantization   (Ch. 9)
>      Committee Machines: Ensemble averaging, Boosting (Ch. 7)
>      Committee Machines: Mixtures of experts  (Ch. 7)
>      Support vector machines, introduction  (Ch. 6)
>      Support vector machines: Separable and nonseparable cases (Ch. 6)
>      Support vector machines: More on VC-dimension, the kernel trick (Ch. 6)
>      Support vector machines: Regression, examples (Ch. 6)
>      Temporal processing with feedforward networks (Ch 13)
>      Recurrent networks (Ch 15)
>      Reinforcement learning (Ch 12)
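>For a taste of the single-layer LMS topic above, here is a minimal
>sketch (my own illustration, not course material) of the LMS
>(Widrow-Hoff) rule: stochastic gradient descent on squared error for a
>linear neuron, recovering a known linear target from noiseless data.
>The data and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))   # random 2-D inputs
true_w = np.array([2.0, -1.0])          # target weights to recover
d = X @ true_w                          # desired responses (noiseless)

w = np.zeros(2)
lr = 0.05                               # small step size for stability
for epoch in range(10):
    for xi, di in zip(X, d):
        e = di - xi @ w                 # instantaneous error
        w += lr * e * xi                # LMS update: w <- w + lr * e * x

print(w)  # close to [2.0, -1.0]
```

>Because the data are noiseless and the step size is small, the weights
>converge essentially to the target; with noisy data, LMS instead hovers
>around the least-squares solution with a misadjustment set by the step size.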