Table of Contents
CHAPTER I - DATA FITTING WITH LINEAR MODELS ... 4
1. INTRODUCTION ... 5
2. LINEAR MODELS ... 11
3. LEAST SQUARES ... 15
4. ADAPTIVE LINEAR SYSTEMS ... 20
5. ESTIMATION OF THE GRADIENT - THE LMS ALGORITHM ... 28
6. A METHODOLOGY FOR STABLE ADAPTATION ... 36
7. ...
Blind Source Separation Using Renyi's
Mutual Information
Kenneth E. Hild II *
Computational NeuroEngineering Lab
University of Florida
Gainesville, FL 32611
k.hild@ieee.org
Deniz Erdogmus
Computational NeuroEngineering Lab
University of Florida
Gainesville, FL 32611
JOSE C. PRINCIPE
Renyi's entropy
U. OF FLORIDA
EEL 6935
352-392-2662
principe@cnel.ufl.edu
History: Alfred Renyi was looking for the most general definition of
information measures that would preserve additivity for independent events and be compatible with the axioms of probability.
Statistical Learning Theory: The Structural
Risk Minimization Principle
Jose Principe, Ph.D.
and
Sohan Seth
Distinguished Professor ECE, BME
Computational NeuroEngineering Laboratory and
principe@cnel.ufl.edu
Statistical Learning Theory
Now that we have a ...
Support vector machines
October 16, 2009
Idea: Given a training sample, the support vector machine constructs a hyperplane as the decision surface in such a way that the margin of separation between positive and negative examples is maximized.
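The margin idea above can be sketched numerically. The toy data, candidate hyperplane, and variable names below are illustrative choices for this sketch, not taken from the course materials: for a hyperplane w.x + b = 0, the geometric margin of a labeled point (x, y) is y(w.x + b)/||w||, and the margin of separation is the smallest such value over the training set.

```python
import math

def geometric_margin(w, b, x, y):
    """Signed distance of labeled point (x, y) to the hyperplane; positive if correctly classified."""
    wx = sum(wi * xi for wi, xi in zip(w, x))
    return y * (wx + b) / math.sqrt(sum(wi * wi for wi in w))

# Linearly separable 2-D toy set with +/-1 labels (made up for illustration).
data = [(( 2.0,  2.0), +1), (( 3.0,  1.0), +1),
        ((-1.0, -1.0), -1), ((-2.0,  0.0), -1)]

# A candidate separating hyperplane x1 + x2 = 0, i.e. w = (1, 1), b = 0.
w, b = (1.0, 1.0), 0.0
margins = [geometric_margin(w, b, x, y) for x, y in data]
print(min(margins))  # margin of separation for this hyperplane, about 1.414 (sqrt(2))
```

The SVM training problem searches over (w, b) for the hyperplane whose smallest per-point margin is largest; this block only evaluates one fixed candidate.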
Class:
EEL 6814
NEURAL NETWORKS FOR SIGNAL PROCESSING (3)
Department of Electrical and Computer Engineering, University of Florida
Graduate
Prereq: Knowledge of adaptive signal processing.
Nonlinear signal processing and neural networks. Gradient descent
learning ...
The MRMI Algorithm: The batch-mode adaptation algorithm for the rotation matrix, which is parameterized in terms of Givens rotations, can be summarized as follows. 1. Whiten the observations {z_1, ..., z_N} using W to produce the samples {x_1, ...
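The whitening step above can be sketched for the 2-D case. Everything here is illustrative (made-up mixing matrix, uniform sources, sample size); the point is only that W = D^(-1/2) E^T, built from the eigendecomposition of the sample covariance, gives whitened samples with identity covariance.

```python
import math, random

random.seed(0)
N = 2000
# Independent uniform sources, mixed by an illustrative matrix A.
S = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]
A = [[1.0, 0.6], [0.4, 1.0]]
Z = [(A[0][0]*s0 + A[0][1]*s1, A[1][0]*s0 + A[1][1]*s1) for s0, s1 in S]

def covariance(X):
    """Entries (c00, c01, c11) of the 2x2 sample covariance."""
    n = len(X)
    m0 = sum(x for x, _ in X) / n
    m1 = sum(y for _, y in X) / n
    c00 = sum((x - m0) ** 2 for x, _ in X) / n
    c11 = sum((y - m1) ** 2 for _, y in X) / n
    c01 = sum((x - m0) * (y - m1) for x, y in X) / n
    return c00, c01, c11

def whitening_matrix(c00, c01, c11):
    """W = D^(-1/2) E^T from the eigendecomposition of a symmetric 2x2 covariance."""
    tr = c00 + c11
    gap = math.sqrt((c00 - c11) ** 2 / 4 + c01 * c01)
    l1, l2 = tr / 2 + gap, tr / 2 - gap          # eigenvalues
    theta = 0.5 * math.atan2(2 * c01, c00 - c11)  # angle of the first eigenvector
    e1 = (math.cos(theta), math.sin(theta))
    e2 = (-math.sin(theta), math.cos(theta))
    return [[e1[0] / math.sqrt(l1), e1[1] / math.sqrt(l1)],
            [e2[0] / math.sqrt(l2), e2[1] / math.sqrt(l2)]]

c00, c01, c11 = covariance(Z)
W = whitening_matrix(c00, c01, c11)
X = [(W[0][0]*z0 + W[0][1]*z1, W[1][0]*z0 + W[1][1]*z1) for z0, z1 in Z]
w00, w01, w11 = covariance(X)
print(abs(w00 - 1) < 1e-6, abs(w01) < 1e-6, abs(w11 - 1) < 1e-6)
```

After whitening, the remaining demixing step in MRMI reduces to a pure rotation, which is why the rotation matrix can be parameterized by Givens angles.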
Perceptron
Neuron
The McCulloch-Pitts neuron is a logic unit
Decision
Training
Given samples
How does it work?
If ...
then the right decision is made in the next iteration!
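The update described above can be sketched in a few lines. This is a minimal perceptron on the AND problem (data, learning rate, and epoch count are illustrative): when a sample is misclassified, the weights move toward or away from it, so the same sample is more likely to be classified correctly on the next pass.

```python
def predict(w, b, x):
    """Threshold unit: fire (1) if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(samples, epochs=20, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, d in samples:
            err = d - predict(w, b, x)          # 0 when the decision is already right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x) for x, _ in AND])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop stops making errors after finitely many updates.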
Delta rule
Regression
Chain rule
Radial basis function network
October 13, 2009
Cover's theorem: A complex pattern-classification problem, cast in
a high-dimensional space nonlinearly, is more likely to be linearly
separable than in a low-dimensional space, provided that the space
is not densely populated.
Information theoretic learning models
November 3, 2010
Motivation: Optimal adaptive filtering requires E[Xe] = 0
Uncorrelated is not independent! Consider X ~ U[-1, 1] and
Y = X^2.
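A quick numerical check of this example: X uniform on [-1, 1] and Y = X^2 are uncorrelated, since E[XY] = E[X^3] = 0 by symmetry, yet Y is completely determined by X. A symmetric grid stands in for the uniform distribution here.

```python
# Symmetric grid on [-1, 1] approximating X ~ U[-1, 1].
xs = [i / 1000 for i in range(-1000, 1001)]
ys = [x * x for x in xs]                      # Y = X^2: fully dependent on X
n = len(xs)

mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
print(abs(cov) < 1e-9)   # uncorrelated: covariance vanishes by symmetry
# ...yet knowing X pins down Y exactly, so X and Y are not independent.
```

This is precisely why second-order criteria (correlation) can miss structure that information-theoretic criteria capture.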
Information: Which has more information?
1. NN project is due today. 2. NN project is ...
EEL 6814
Project 1
Due November 2, 2010
This project deals with the development of a neural-network-based classifier to separate
mines from rocks using sonar: returns bounced off a metal cylinder (mine) versus returns
bounced off a roughly cylindrical rock. This problem ...
Homework #2
Bayes and Fisher Discriminant Classifiers
Evan Kriminger
9/28/2010
Overview of Data
The four input features are petal width (PW), petal length (PL), sepal width (SW), and sepal
length (SL). The Parzen window empirical distributions for each of ...
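A minimal sketch of the Parzen-window estimate mentioned above, using a Gaussian kernel on a made-up 1-D feature sample (the actual iris measurements are not reproduced here; bandwidth and grid are illustrative).

```python
import math

def parzen_density(samples, x, h):
    """Gaussian-kernel Parzen estimate of p(x) with bandwidth h."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) /
               (h * math.sqrt(2 * math.pi)) for s in samples) / n

feature = [1.3, 1.4, 1.5, 4.5, 4.7, 5.0]        # hypothetical petal lengths (cm)
grid = [i * 0.01 for i in range(-500, 1200)]    # evaluation grid covering all kernels
dens = [parzen_density(feature, x, h=0.3) for x in grid]
area = sum(d * 0.01 for d in dens)              # Riemann sum of the estimate
print(abs(area - 1.0) < 0.01)                   # a valid density: integrates to ~1
```

The bandwidth h trades off smoothness against fidelity: too small gives a spiky estimate, too large blurs the two clusters together.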
EEL 6814
Neural Networks for Signal Processing
Homework 1-Adaptive Linear Systems
Time embedded Data
I. PROBLEM 1
The first problem is system identification of a nonlinear
plant using a linear model. In some cases a linear model can
capture the characteristics ...
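The adaptation loop for such a system-identification setup can be sketched with LMS. The plant here is a made-up linear FIR system, not the course's nonlinear plant, and the step size and signal are illustrative; the point is the structure of the loop.

```python
import random

random.seed(1)
true_w = [0.5, 0.3]                    # unknown 2-tap plant (illustrative)
x = [random.uniform(-1, 1) for _ in range(5000)]

w = [0.0, 0.0]                         # adaptive filter weights
mu = 0.05                              # LMS step size
for n in range(1, len(x)):
    u = [x[n], x[n - 1]]               # tap-delay input vector
    d = true_w[0]*u[0] + true_w[1]*u[1]    # desired signal = plant output
    y = w[0]*u[0] + w[1]*u[1]              # adaptive filter output
    e = d - y                              # error drives the update
    w = [wi + mu * e * ui for wi, ui in zip(w, u)]   # LMS weight update

print([round(wi, 2) for wi in w])      # -> [0.5, 0.3]
```

With a noiseless linear plant the weights converge to the plant taps exactly; with a nonlinear plant, as in the homework, LMS converges only to the best linear approximation.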
Table of Contents
CHAPTER IV - DESIGNING AND TRAINING MLPS ... 3
2. CONTROLLING LEARNING IN PRACTICE ... 4
3. OTHER SEARCH PROCEDURES ... 15
4. STOP CRITERIA ... 29
5. HOW GOOD ARE MLPS AS LEARNING MACHINES? ... 33
6. ERROR CRITERION ... 38
7. NETWORK SIZE AND GENERALIZATION ...
Table of Contents
CHAPTER V- FUNCTION APPROXIMATION WITH MLPS, RADIAL BASIS FUNCTIONS, AND SUPPORT VECTOR
MACHINES .3
1. INTRODUCTION .4
2. FUNCTION APPROXIMATION .7
3. CHOICES FOR THE ELEMENTARY FUNCTIONS .12
4. PROBABILISTIC INTERPRETATION OF THE MAPPING ...
Statistical Learning Theory and the C-Loss
cost function
Jose Principe, Ph.D.
Distinguished Professor ECE, BME
Computational NeuroEngineering Laboratory and
principe@cnel.ufl.edu
Statistical Learning Theory
In the methodology of science there are two prim...
EEL 6814
Homework II
Due September 28, 2010
In this problem you will design several classifiers to distinguish between three types of
flowers using measurements of petal and sepal length and width. The dataset is called the
IRIS data and it is in the cou...
EEL 6814
HMW # 3
Due October 5, 2010
1- Code the backpropagation algorithm and test it in the following 2 class problem:
Star problem:

 x1     x2     d
  1      0     1
  0      1     1
 -1      0     1
  0     -1     1
  0.5    0.5   0
 -0.5    0.5   0
  0.5   -0.5   0
 -0.5   -0.5   0
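The backpropagation loop for this kind of problem can be sketched with a small one-hidden-layer sigmoid MLP. The architecture (2-5-1), seed, learning rate, and epoch count below are illustrative choices, not prescribed by the assignment.

```python
import math, random

# Star problem data: outer points (class 1) vs. inner points (class 0).
data = [((1, 0), 1), ((0, 1), 1), ((-1, 0), 1), ((0, -1), 1),
        ((0.5, 0.5), 0), ((-0.5, 0.5), 0), ((0.5, -0.5), 0), ((-0.5, -0.5), 0)]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

random.seed(0)
H = 5
W1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]  # hidden: (w1, w2, bias)
W2 = [random.uniform(-0.5, 0.5) for _ in range(H + 1)]                  # output weights + bias

def forward(x):
    h = [sigmoid(w[0]*x[0] + w[1]*x[1] + w[2]) for w in W1]
    o = sigmoid(sum(W2[j] * h[j] for j in range(H)) + W2[H])
    return h, o

def mse():
    return sum((d - forward(x)[1]) ** 2 for x, d in data) / len(data)

loss_before = mse()
lr = 0.5
for _ in range(3000):
    for x, d in data:
        h, o = forward(x)
        delta_o = (o - d) * o * (1 - o)                   # output-layer delta
        for j in range(H):                                 # backpropagate to hidden layer
            delta_h = delta_o * W2[j] * h[j] * (1 - h[j])
            W1[j][0] -= lr * delta_h * x[0]
            W1[j][1] -= lr * delta_h * x[1]
            W1[j][2] -= lr * delta_h
        for j in range(H):
            W2[j] -= lr * delta_o * h[j]
        W2[H] -= lr * delta_o
loss_after = mse()
print(loss_after < loss_before)  # training error decreases
```

Note the order of the updates: the hidden-layer deltas use the output weights from before the current sample's output-layer update, which is the standard per-pattern backpropagation step.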
2- The sleep datasets are larger, more involved ...
EEL 6814
Homework #4
Due October 14, 2010
Problem 1.
Train a Radial Basis Function (RBF) network on the Spiral data classification problem. Compare
the performance as a function of the number of processing elements.
Problem 2.
Train the one-hidden-layer MLP in the ...
EEL 6814
HMW#6
Due November 30, 2010
The purpose of this homework is to let you program the backpropagation-through-time
algorithm to train recurrent networks. The problem is to create an oscillator that will
trace a figure 8 in 2D space by learning the ...