
Applications of RBF networks include time series prediction. The network has three layers:

- an input layer of dimension d (the dimension of the training patterns);
- a hidden layer of up to m locally tuned neurons, each centered over a receptive field;
- an output layer that provides the response of the network.

Figure 1: Radial Basis Function Network

The output of an RBF network can be expressed as a weighted sum of its radial basis functions:

y(x) = \sum_{j=1}^{m} w_j \phi(\|x - \mu_j\|)

The radial basis function is a Gaussian without a normalization constant:

\phi(r) = \exp\left( -\frac{r^2}{2\sigma^2} \right)

Note: the hidden layer has a variable number of neurons (the optimal number is determined by the training process). As usual, the more neurons in the hidden layer, the higher the model complexity. Each neuron consists of a radial basis function centered on a point \mu_j with the same dimension as the input data. The radii \sigma_j of the RBF functions may differ, and the centers and radii can be determined through clustering or an EM algorithm.

When a vector x is presented at the input layer, each hidden neuron computes the radial distance \|x - \mu_j\| from its center and applies the RBF to this distance. The resulting values are passed to the output layer and weighted together to form the output.

This can be expressed in matrix form as

Y = \Phi W

where Y is the n \times k matrix of output variables, \Phi is the n \times m matrix of radial basis function values, and W is the m \times k matrix of weights. Here k is the number of outputs, n is the number of data points, and m is the number of hidden units. If k = 1, Y and W are column vectors.

(Source: Stat841 Wiki Course Notes, wikicoursenote.com/w/index.php?title=Stat841&printable=yes, 10/09/2013)
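The forward pass and the matrix form Y = ΦW above can be sketched in NumPy. This is a minimal illustration, not the notes' prescribed procedure: it places centers by quantiles of the data (where the notes suggest clustering or EM), uses a single shared radius σ, and fits the output weights W by least squares on a toy 1-D regression problem.

```python
import numpy as np

def gaussian_rbf(r, sigma):
    # Gaussian basis without a normalization constant: phi(r) = exp(-r^2 / (2 sigma^2))
    return np.exp(-r**2 / (2.0 * sigma**2))

def design_matrix(X, centers, sigma):
    # Phi is n x m: row i, column j holds phi(||x_i - mu_j||)
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return gaussian_rbf(dists, sigma)

# toy data: approximate y = sin(x) on [0, 2*pi]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(50, 1))
y = np.sin(X).ravel()

# hypothetical center placement by quantiles (the notes use clustering/EM instead)
centers = np.quantile(X, np.linspace(0.05, 0.95, 10)).reshape(-1, 1)
sigma = 0.5

Phi = design_matrix(X, centers, sigma)            # n x m
W, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # solve Y = Phi W (k = 1, so W is a vector)
pred = Phi @ W
err = float(np.max(np.abs(pred - y)))             # training error should be small
```

With k = 1 the weight matrix W collapses to a column vector, matching the remark at the end of the matrix-form paragraph.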
Related reading:

- Introduction of the Radial Basis Function (RBF) Networks [20] (http://axiom.anu.edu.au/~daa/courses/GSAC6017/rbf.pdf)
- Paper about the RBFN for multi-task learning [21] (http://books.nips.cc/papers/files/nips18/NIPS2005_0628.pdf)
- Radial Basis Function (RBF) Networks [22] (http://documents.wolfram.com/applications/neuralnetworks/index6.html)
- [23] (http://lcn.epfl.ch/tutorial/english/rbf/html/index.html)
- [24] (http://www.dtreg.com/rbf.htm)

Advantages of RBFN:
1. It can model any nonlinear function using a single hidden layer, which removes some design- d...