Homework # 5, EE5353
1. In K-means clustering, means can be updated during the data pass or recalculated
afterwards.
(a) Under what conditions are the resulting clusters identical?
(b) If we update the means during the data pass, under what conditions wi
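The two update schemes in question can be sketched as follows — a minimal 1-D illustration assuming squared-Euclidean distances; the function names (kmeans_batch_pass, kmeans_incremental_pass) are illustrative, not part of the assignment.

```c
#include <assert.h>

/* Two ways to update K-means cluster means over one data pass,
   using 1-D patterns and squared-Euclidean distances. */

static double dist2(double a, double b) { double d = a - b; return d * d; }

/* Batch scheme: assign every pattern first, then recompute each mean
   afterwards as the average of the patterns assigned to it. */
void kmeans_batch_pass(const double *x, int np, double *mean, int k) {
    double sum[8] = {0}; int cnt[8] = {0};      /* assumes k <= 8 */
    for (int p = 0; p < np; p++) {
        int best = 0;
        for (int j = 1; j < k; j++)
            if (dist2(x[p], mean[j]) < dist2(x[p], mean[best])) best = j;
        sum[best] += x[p]; cnt[best]++;
    }
    for (int j = 0; j < k; j++)
        if (cnt[j] > 0) mean[j] = sum[j] / cnt[j];
}

/* Incremental scheme: the winning mean moves immediately after each
   pattern (a running average), so later patterns see updated means. */
void kmeans_incremental_pass(const double *x, int np, double *mean, int k) {
    int cnt[8] = {0};                           /* assumes k <= 8 */
    for (int p = 0; p < np; p++) {
        int best = 0;
        for (int j = 1; j < k; j++)
            if (dist2(x[p], mean[j]) < dist2(x[p], mean[best])) best = j;
        cnt[best]++;
        mean[best] += (x[p] - mean[best]) / cnt[best];
    }
}
```

On well-separated data with good initial means, no pattern changes sides mid-pass and both schemes finish with identical means; when a moving mean pulls a later pattern across a decision boundary, the assignments, and hence the clusters, can differ.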
Exam # 2, EE5353, Fall 2013
1. A sigmoidal MLP has 10 inputs, 8 units in the first hidden layer, 7 units in the
second hidden layer, and 2 outputs. It is fully connected. As usual, thresholds in
the hidden and output layers are handled by adding an 11th input
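For self-checking questions of this type, the weight count can be tallied layer by layer — a sketch assuming connections only between adjacent layers (no bypass weights), with each layer's thresholds absorbed as one extra constant input; the function name is illustrative.

```c
#include <assert.h>

/* Weights in a fully connected feedforward MLP with two hidden layers,
   assuming connections only between adjacent layers and thresholds
   handled as one extra constant input to each layer. */
int mlp_weight_count(int n_in, int h1, int h2, int n_out) {
    return (n_in + 1) * h1      /* input layer   -> first hidden  */
         + (h1 + 1) * h2        /* first hidden  -> second hidden */
         + (h2 + 1) * n_out;    /* second hidden -> output        */
}
```

Under that assumption the 10-8-7-2 network above has (10+1)*8 + (8+1)*7 + (7+1)*2 = 167 weights; if "fully connected" in the course's sense also includes bypass connections, additional terms are needed.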
Neural Net Project 2:
Linear Networks for Function Approximation
(1) Download and unzip Map.zip and compile the C program. Familiarize yourself
with the code.
(a) Download the file Twod.tra from the webpage. This file has 8 inputs and 7
outputs.
(b) Apply
Neural Net Project 3: Functional Link Net (Volterra Filter) Design Using
Regression
In this project, we upgrade our software from project 2 to design a 2nd-degree
polynomial network (called a functional link net or Volterra filter) for function
approximation.
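The degree-2 basis expansion at the heart of such a network can be sketched as below — a constant term, the linear terms, and all distinct quadratic products; the function name is illustrative, not the project's code.

```c
#include <assert.h>

/* Build the degree-2 functional-link (Volterra) basis for an N-input
   pattern x: a constant, the linear terms x[i], and all quadratic
   terms x[i]*x[j] with i <= j.  Returns the number of basis functions,
   which is 1 + N + N*(N+1)/2. */
int fln_basis2(const double *x, int n, double *X) {
    int L = 0;
    X[L++] = 1.0;                      /* constant term */
    for (int i = 0; i < n; i++)
        X[L++] = x[i];                 /* linear terms */
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)
            X[L++] = x[i] * x[j];      /* quadratic terms, i <= j */
    return L;
}
```

For the 8-input data of project 2 this gives 1 + 8 + 36 = 45 basis functions per pattern.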
Neural Net Project 4: 11/15/2012
Comparing One- and Two-step MLP Training Algorithms
In this project, we train MLPs on two data files using BP3, CG, and MOLF, all of
which have been discussed somewhat in class.
1. The Random10-2 data file has 10 zero-mean, unit-variance
Homework # 4, EE5353
1. For a Bayes-Gaussian classifier the mean vector for the ith class is mi and has elements mi(n), the
covariance matrix for the ith class is Ci, and the elements of the inverse covariance matrix are
a(i,m,n), where m is the row number
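The quadratic form that appears in the Bayes-Gaussian discriminant can be written out directly in terms of the inverse-covariance elements a(i,m,n) — a sketch for one class, with the inverse covariance passed as a row-major N x N array; the full discriminant would also carry the -(1/2)ln|Ci| and prior terms.

```c
#include <assert.h>

/* Quadratic form  d_i(x) = (x - m_i)^T Ci^{-1} (x - m_i)
   written with the inverse-covariance elements a(i,m,n):
   d_i(x) = sum_m sum_n [x(m) - mi(m)] a(i,m,n) [x(n) - mi(n)].
   Here `a` is the N x N inverse covariance of class i, row-major. */
double mahalanobis_sq(const double *x, const double *mi,
                      const double *a, int nn) {
    double d = 0.0;
    for (int m = 0; m < nn; m++)
        for (int n = 0; n < nn; n++)
            d += (x[m] - mi[m]) * a[m * nn + n] * (x[n] - mi[n]);
    return d;
}
```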
Neural Net Project 1:
Small Linear Networks for Function Approximation
1. Read the Reference Material below.
2. Using the data specified in part C, implement the linear equation solution of part D,
printing out r, c, and w. Implement the steepest descent
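The steepest-descent part can be sketched as an iteration toward the normal equations R w = c (R the input autocorrelation matrix, c the cross-correlation vector): the MSE gradient with respect to w is 2(Rw - c), and the learning factor z must be small enough for convergence (below 1/lambda_max of R, roughly). Names and storage layout here are assumptions, not the project's code.

```c
#include <assert.h>
#include <math.h>

/* Steepest-descent solution of R w = c for a linear network.
   R is n x n row-major, c and w are length n.  Each sweep computes
   the full gradient g = 2(Rw - c), then moves w against it. */
void linear_sd(const double *R, const double *c, double *w,
               int n, double z, int iters) {
    double g[16];                        /* gradient buffer, n <= 16 */
    for (int it = 0; it < iters; it++) {
        for (int m = 0; m < n; m++) {
            g[m] = -2.0 * c[m];
            for (int k = 0; k < n; k++)
                g[m] += 2.0 * R[m * n + k] * w[k];
        }
        for (int m = 0; m < n; m++)
            w[m] -= z * g[m];            /* descend the MSE surface */
    }
}
```

With R = I the iteration contracts the error by (1 - 2z) per sweep, which makes the step-size bound easy to see.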
Neural Net Project 5:
Simple Nonlinear Networks for Function Approximation and Classification
In this project, we begin the task of producing multilayer perceptron (MLP)
training software for 3-layer networks having floating point inputs and outputs.
The
I. Introduction
A. Approximating Functions of One
Variable, Review
1. Functions of Time
Goal: Review approximation techniques
for functions of time
a. Example Applications
(1) Approximating message signals in
communications
(2) Finding local approximation
Exam # 3, EE5353, Fall 2013
1. Sometimes continuous approximations are needed. Consider a smoothed PLN
(SPLN) that uses a weighted squared Euclidean distance measure. In order to make
the mapping continuous, we can calculate intermediate outputs ypk as Ak
Homework # 1, EE5353
1. An XOR network has two inputs, one hidden unit, and one output. It is fully
connected. Give the network's weights if the output unit has a step activation and the
hidden unit activation is
(a) Also a step function
(b) The square of
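For intuition on part (a): because the net is fully connected, the inputs feed the output unit directly, which is what makes a single hidden unit sufficient. One classic weight choice (other weight sets also work) can be verified against the truth table:

```c
#include <assert.h>

/* One classic solution of XOR with a single hidden unit and step
   activations, using the direct input-to-output connections:
     h   = step(x1 + x2 - 1.5)        (h = AND(x1, x2))
     out = step(x1 + x2 - 2h - 0.5)   (OR minus twice the AND)  */
static int step(double net) { return net > 0.0 ? 1 : 0; }

int xor_net(int x1, int x2) {
    int h = step(x1 + x2 - 1.5);
    return step(x1 + x2 - 2.0 * h - 0.5);
}
```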
Exam # 3, EE5353, Fall 2012
1. Four PLNs are to be fit to some training patterns (xp, tp). The forward (F)
network maps x to t, and has a mean-squared error (MSE) EF(i) for 1 ≤ i ≤ M. The
reverse (R) network maps t to x, and has an MSE of ER(n) for 1 ≤ n ≤ N. Now
Exam # 3, EE5353, Fall 2011
1. In a PLN with K clusters, yp = Ak xap, where the kth cluster is closest to xp. We
want to calculate derivatives of this mapping.
(a) Let ak(m,n) denote an element of Ak and let g(i,n) denote the partial derivative
of ypi with
Exam # 2, EE5353, Fall 2012
1. A sigmoidal MLP has 8 inputs, 12 units in the first hidden layer, 6 units in the
second hidden layer, and 3 outputs. It is fully connected. As usual, thresholds in
the hidden and output layers are handled by adding a 9th input
Exam # 2, EE5353, Fall 2011
1. There are Nv training patterns for a sigmoidal MLP network having N inputs and
M outputs. Assume that we want the network to memorize the training patterns.
(a) How many hidden units should the network have if it has 1 hidden layer?
Exam # 1, EE5353, Fall 2012
1. Here we consider MLPs with binary-valued inputs (0 or 1).
(a) If the MLP has N inputs, what is the maximum degree D of its PBF model?
(b) If the MLP has N inputs, what is the maximum value of L' in its PBF model?
(c) If the
Exam # 1, EE5353, Fall 2011
1. Here we consider MLPs with binary-valued inputs (0 or 1).
(a) If the MLP has N inputs, what is the maximum degree D of its PBF model?
(b) If the MLP has N inputs, what is the maximum value of L' in its PBF model?
(c) If the
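A quick way to see why binary inputs collapse the polynomial basis function (PBF) model: x_n^2 = x_n for x_n in {0,1}, so every monomial reduces to a product over a subset of the inputs. The sketch below (illustrative names) counts and evaluates those subset basis functions.

```c
#include <assert.h>

/* For binary inputs x_n in {0,1}, x_n^2 = x_n, so each polynomial
   basis function reduces to a product of a subset of the inputs.
   The subsets of {x_1,...,x_N} give 2^N distinct basis functions,
   and the largest surviving degree is N (the product of all inputs). */
int pbf_binary_count(int n) {
    return 1 << n;          /* number of subsets of N inputs */
}

/* Evaluate the basis function for subset bitmask s on pattern x. */
int pbf_binary_eval(const int *x, int n, unsigned s) {
    int v = 1;
    for (int k = 0; k < n; k++)
        if (s & (1u << k)) v *= x[k];
    return v;
}
```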
Exam # 1, EE5353, Fall 2013
1. A functional link net has N inputs, M outputs, and degree D. The weights wik,
which feed into output number i, are found by minimizing the error function,
E(i) = (1/Nv) Σ_{p=1 to Nv} [tp(i) - yp(i)]^2

where

yp(i) = Σ_{m=1 to L} wim Xp(m)
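Evaluating E(i) for a candidate weight vector can be sketched as below, with the basis outputs Xp(m) stored as an Nv x L row-major array and t holding the targets tp(i); the names and storage layout are assumptions.

```c
#include <assert.h>

/* Evaluate the functional-link error for output i:
     E(i) = (1/Nv) * sum_p [tp(i) - yp(i)]^2,
     yp(i) = sum_m wim * Xp(m),
   with X stored Nv x L row-major and w holding the L weights wim. */
double fln_error(const double *X, const double *t, const double *w,
                 int nv, int L) {
    double E = 0.0;
    for (int p = 0; p < nv; p++) {
        double y = 0.0;
        for (int m = 0; m < L; m++)
            y += w[m] * X[p * L + m];    /* yp(i) for this pattern */
        double e = t[p] - y;
        E += e * e;
    }
    return E / nv;
}
```

Setting the gradient of E(i) with respect to each wim to zero yields linear normal equations, which is why these weights can be found by regression rather than iterative training.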