Radial Basis Function Networks
PR, ANN, & ML

Radial Basis Function Networks
- A special type of ANN with three layers:
  - Input layer
  - Hidden layer
  - Output layer
- The mapping from the input layer to the hidden layer is nonlinear
- The mapping from the hidden layer to the output layer is linear

Comparison
- Multilayer perceptron:
  - Multiple hidden layers
  - Nonlinear mapping
  - W: inner product
  - Global mapping
  - Warps the classifiers
  - Stochastic approximation
- RBF networks:
  - Single hidden layer
  - Nonlinear + linear mapping
  - W: distance
  - Local mapping
  - Warps the data
  - Curve fitting

Another View: Curve Fitting
- We try to estimate a mapping from patterns to classes: f(patterns) -> classes, f(X) -> d
- Patterns are represented as feature vectors X
- Classes are decisions d
- Training samples: f(X_i) -> d_i, i = 1, ..., n
- The task is interpolation of f based on the samples

Yet Another View: Warping Data
- If the problem is not linearly separable, an MLP uses multiple neurons to define complicated decision boundaries (it warps the classifiers)
- An alternative is to warp the data into a higher-dimensional space where the classes are much more likely to be linearly separable (a single perceptron will then do)
- This is very similar to the idea behind Support Vector Machines

Example
- XOR, with inputs (0,0), (0,1), (1,0), (1,1)
- Warped XOR, using two Gaussian basis functions:
  \varphi_1(\mathbf{x}) = e^{-\|\mathbf{x}-\mathbf{t}_1\|^2}, \quad \mathbf{t}_1 = [1,1]^T
  \varphi_2(\mathbf{x}) = e^{-\|\mathbf{x}-\mathbf{t}_2\|^2}, \quad \mathbf{t}_2 = [0,0]^T
  \mathbf{y}(\mathbf{x}) = [\varphi_1(\mathbf{x}), \varphi_2(\mathbf{x})]^T

More Examples
- (figure: further warping examples)

A Pure Interpolation Approach
- Given: (X_i, d_i), i = 1, ..., n
- Desired: f(X_i) = d_i
- Solution: f(X), with f(X_i) = d_i
- Radial basis function solution:
  f(\mathbf{X}) = \sum_i w_i \varphi(\|\mathbf{X} - \mathbf{X}_i\|)
- The general form \varphi(\mathbf{X}, \mathbf{X}_i) is made shift and rotation invariant:
  - Shift invariance requires \varphi(\mathbf{X} - \mathbf{X}_i)
  - Rotation invariance requires \varphi(\|\mathbf{X} - \mathbf{X}_i\|)
- Examples:
  - Multiquadrics: \varphi(r) = \sqrt{r^2 + c^2}
  - Inverse multiquadrics: \varphi(r) = 1/\sqrt{r^2 + c^2}
  - Gaussian: \varphi(r) = e^{-r^2/(2\sigma^2)}

Graphical Interpretation
- Each neuron responds based on the distance to the center of its receptive field
- The bottom level is a nonlinear mapping
- The top level is a linear weighted sum
- (figure: inputs x_1, ..., x_m feed hidden units \varphi(\|\mathbf{X}-\mathbf{X}_1\|), ..., \varphi(\|\mathbf{X}-\mathbf{X}_n\|), whose outputs are combined with weights w_1, ..., w_n into f(\mathbf{X}) = \sum_i w_i \varphi(\|\mathbf{X}-\mathbf{X}_i\|))

Other Alternatives: Global
- Lagrange polynomials:
  f(x) = \sum_{k=0}^{n} y_k L_{k,n}(x)
  L_{k,n}(x) = \frac{(x-x_0)\cdots(x-x_{k-1})(x-x_{k+1})\cdots(x-x_n)}{(x_k-x_0)\cdots(x_k-x_{k-1})(x_k-x_{k+1})\cdots(x_k-x_n)}

Other Alternatives: Local
- Bezier basis
- B-spline basis

B-Spline Interpolation
- A big subject in mathematics
- Used in many disciplines:
  - Approximation
  - Pattern recognition
  - Computer graphics
- As far as pattern recognition is concerned:
  - Determine the order of the spline (degrees of freedom)
    - Knot vectors (partition the domain into intervals)
    - Fitting in each interval
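The XOR warping on the "Example" slide can be checked numerically. This is a minimal sketch (the centers t1 = (1,1) and t2 = (0,0) come from the slide; everything else is illustrative):

```python
import math

def phi(x, t):
    """Gaussian radial basis: exp(-||x - t||^2)."""
    return math.exp(-sum((xi - ti) ** 2 for xi, ti in zip(x, t)))

t1, t2 = (1.0, 1.0), (0.0, 0.0)          # centers from the slide
corners = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # XOR labels

for x, label in corners.items():
    y = (phi(x, t1), phi(x, t2))          # warped representation y(x)
    print(x, "->", tuple(round(v, 4) for v in y), "class", label)
```

In the warped space, (0,0) and (1,1) map near (0.1353, 1) and (1, 0.1353), while (0,1) and (1,0) both map to about (0.3679, 0.3679), so a single line such as phi1 + phi2 = 0.9 separates the two XOR classes, which no line can do in the original space.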
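The pure interpolation approach above reduces to a linear system: stack phi(||X_j - X_i||) into an n-by-n matrix Phi and solve Phi w = d so that f(X_i) = d_i exactly. A minimal sketch, assuming a Gaussian basis and 1-D sample points (the data values and sigma are illustrative, not from the slides):

```python
import numpy as np

def gaussian(r, sigma=1.0):
    """Gaussian basis phi(r) = exp(-r^2 / (2 sigma^2))."""
    return np.exp(-r ** 2 / (2 * sigma ** 2))

X = np.array([0.0, 1.0, 2.0, 3.0])       # training inputs X_i (illustrative)
d = np.array([0.0, 1.0, 0.0, -1.0])      # desired outputs d_i (illustrative)

# Interpolation matrix: Phi[j, i] = phi(||X_j - X_i||)
Phi = gaussian(np.abs(X[:, None] - X[None, :]))
w = np.linalg.solve(Phi, d)              # weights so that f(X_i) = d_i

def f(x):
    """f(x) = sum_i w_i * phi(||x - X_i||)."""
    return gaussian(np.abs(x - X)) @ w

print(np.allclose([f(x) for x in X], d))  # exact at the samples
```

For distinct centers the Gaussian interpolation matrix is nonsingular, so the system always has a unique solution; between the samples f is a smooth blend of the local bumps, which is the "local mapping" behavior contrasted with the MLP on the comparison slide.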
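The Lagrange formula on the "Other Alternatives: Global" slide can be sketched directly: each basis polynomial L_{k,n} is 1 at x_k and 0 at every other sample point, so the weighted sum passes through all the data (sample points here are illustrative):

```python
def lagrange(xs, ys):
    """Return f interpolating the points (xs[k], ys[k]) via Lagrange basis."""
    def f(x):
        total = 0.0
        for k, (xk, yk) in enumerate(zip(xs, ys)):
            term = yk
            for i, xi in enumerate(xs):
                if i != k:
                    term *= (x - xi) / (xk - xi)  # factor of L_{k,n}(x)
            total += term
        return total
    return f

f = lagrange([0.0, 1.0, 2.0], [1.0, 3.0, 7.0])  # fits y = x^2 + x + 1
print(f(3.0))  # -> 13.0
```

Note the "global" label on the slide: every basis polynomial is nonzero over the whole domain, so moving one data point changes the fit everywhere, unlike the local RBF and spline bases.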
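The knot-vector machinery on the B-spline slide can be sketched with the Cox–de Boor recursion, which evaluates the basis functions N_{i,p}(u) over the intervals the knot vector defines. A minimal sketch; the knot vector and degree are illustrative assumptions:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th degree-p B-spline basis at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

knots = [0, 0, 0, 1, 2, 3, 3, 3]   # clamped knot vector (illustrative)
p = 2                              # spline order / degrees of freedom choice
n = len(knots) - p - 1             # number of basis functions
vals = [bspline_basis(i, p, 1.5, knots) for i in range(n)]
print(round(sum(vals), 10))  # -> 1.0, partition of unity inside the domain
```

Each N_{i,p} is nonzero on only p+1 knot intervals, so fitting is local, and choosing the degree and the knot vector is exactly the "order of spline" and "partition into intervals" decision listed on the slide.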
This note was uploaded on 08/06/2008 for the course CS 290I taught by Professor Wang during the Spring '07 term at UCSB.
