For each data point we check how similar it is to the first center, how similar it is to the second center, and so on for every center. This checking process applies to all data points. Therefore, the feature space gives another representation of our data set.

Methods for selecting centers:

Sub-sampling (http://en.wikipedia.org/wiki/Sampling_(statistics)): Randomly chosen training points are copied to the radial units. Since they are randomly selected, they will represent the distribution of the training data in a statistical sense.
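Sub-sampling can be sketched in a few lines of NumPy. The training set here is illustrative (not from the notes): the key point is that each radial unit's center is literally a copy of a randomly drawn training point.

```python
import numpy as np

# Hypothetical training set: 100 points in 2-D (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))

# Sub-sampling: copy randomly chosen training points into the radial units.
n_centers = 10
idx = rng.choice(len(X), size=n_centers, replace=False)
centers = X[idx]

# Because each center is one of the training points, the set of centers
# follows the distribution of the training data in a statistical sense.
```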
K-Means algorithm (http://en.wikipedia.org/wiki/K-means_clustering): Given K radial units, it adjusts the positions of the centers so that: each training point belongs to a cluster center and is nearer to this center than to any other center; each cluster center is the centroid of the training points that belong to it.
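The two conditions above are exactly the two alternating steps of Lloyd's algorithm: assign each point to its nearest center, then move each center to the centroid of its points. A minimal sketch (the two-blob data set is a made-up example, not from the notes):

```python
import numpy as np

def kmeans_centers(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: return k cluster centers for the rows of X."""
    rng = np.random.default_rng(seed)
    # Initialize the k centers with randomly chosen training points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each training point joins its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the centroid of its points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Illustrative data: two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-2, size=(50, 2)),
               rng.normal(loc=+2, size=(50, 2))])
centers = kmeans_centers(X, k=2)
```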
The size of the deviation (smoothing factor) determines how spiky the Gaussian functions are. Deviations should typically be chosen so that each Gaussian overlaps with a few neighbouring centers.
Methods for choosing the deviation are:
Choose the deviation ourselves (set it by hand).
Select the deviation to reflect the number of centers and the volume of space they occupy.
K-Nearest Neighbor algorithm (http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm): Each unit's deviation is individually set to the mean distance to its K nearest neighbours.
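The K-nearest-neighbour rule for the deviations can be sketched as follows (the center coordinates are a made-up example): each unit looks at the other centers, keeps its K closest ones, and uses their mean distance as its own deviation.

```python
import numpy as np

def knn_deviations(centers, k=2):
    """Set each unit's deviation to the mean distance to its k nearest fellow centers."""
    # Pairwise distances between all centers.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a center is not its own neighbour
    nearest = np.sort(d, axis=1)[:, :k]  # the k smallest distances per center
    return nearest.mean(axis=1)

# Illustrative centers: three clustered units and one isolated unit.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
sigmas = knn_deviations(centers, k=2)
# The isolated center at (5, 5) gets a larger deviation than the clustered ones,
# so its Gaussian is broad enough to overlap with its neighbours.
```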
If the Gaussians are too spiky, the network will not interpolate between known points, and the network loses the ability to generalize. If the Gaussians are very broad, the
network loses fine detail.
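The effect of a too-spiky deviation can be demonstrated with a tiny exact-interpolation RBF (one Gaussian per training point, weights found by solving the interpolation system). This is a hypothetical sketch, not from the notes: with a very narrow sigma the network outputs almost zero between known points, while a sigma that lets the Gaussians overlap interpolates well.

```python
import numpy as np

def rbf_predict(x_train, y_train, x_test, sigma):
    """Exact-interpolation RBF: one Gaussian per training point, solve Phi w = y."""
    phi = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
    w = np.linalg.solve(phi(x_train, x_train), y_train)
    return phi(x_test, x_train) @ w

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 2.0, 3.0])  # points on the line y = x
mid = np.array([1.5])               # query halfway between known points

spiky = rbf_predict(x, y, mid, sigma=0.05)  # Gaussians far too narrow
smooth = rbf_predict(x, y, mid, sigma=1.0)  # Gaussians overlap a few neighbours
# spiky collapses to ~0 between the known points; smooth stays close to 1.5.
```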
1. Some examples, advantages & disadvantages: Radial Basis Function (RBF) Networks
TWO-STAGE LEARNING NETWORKS: EXPLOITATION OF SUPERVISED DATA IN THE SELECTION OF HIDDEN UNIT PARAMETERS
2. Comparison between BP & RBF: (http://nlpr-web.ia.ac.cn/2006papers/gjhy/gh93.pdf) Model selection or complexity control for RBF Network - a...