Homework # 5, EE5353
1. In K-means clustering, means can be updated during the data pass or recalculated
afterwards.
(a) Under what conditions are the resulting clusters identical?
(b) If we update t
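The two update schemes in problem 1 can be sketched as follows. This is a generic illustration (function names and data handling are my own, not code from the course): recalculating afterwards is the Lloyd-style batch rule, while updating during the pass is the MacQueen-style running-average rule.

```python
import numpy as np

def kmeans_batch(X, means, n_passes=10):
    """Lloyd-style K-means: means are recalculated after each full data pass."""
    for _ in range(n_passes):
        # assign every pattern to its nearest mean
        labels = np.argmin(((X[:, None, :] - means[None, :, :]) ** 2).sum(-1), axis=1)
        # recompute each mean from its assigned patterns
        for k in range(len(means)):
            if np.any(labels == k):
                means[k] = X[labels == k].mean(axis=0)
    return means

def kmeans_online(X, means, n_passes=10):
    """MacQueen-style K-means: the winning mean is updated during the data pass."""
    counts = np.ones(len(means))
    for _ in range(n_passes):
        for x in X:
            k = np.argmin(((means - x) ** 2).sum(-1))
            counts[k] += 1
            means[k] += (x - means[k]) / counts[k]   # running-average update
    return means
```

Note for part (a): the batch rule is independent of pattern order within a pass, while the in-pass rule is not, which is one place the two can differ.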
Neural Net Project 3: Functional Link Net (Volterra Filter) Design Using
Regression
In this project, we upgrade our software from project 2 to design a 2nd degree
polynomial network (called a functio
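A second-degree polynomial (Volterra) network can be trained by expanding each input vector into polynomial basis functions and solving a linear regression for the output weights. A minimal sketch, with basis ordering and function names of my own choosing rather than the project's code:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly2_basis(X):
    """Degree-2 polynomial (Volterra) basis: constant, linear, and quadratic terms."""
    cols = [np.ones(len(X))]
    N = X.shape[1]
    for n in range(N):
        cols.append(X[:, n])                    # linear terms x(n)
    for i, j in combinations_with_replacement(range(N), 2):
        cols.append(X[:, i] * X[:, j])          # quadratic terms x(i)*x(j)
    return np.column_stack(cols)

def train_fln(X, T):
    """Solve for the output weights by linear least squares (regression)."""
    Phi = poly2_basis(X)
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return W
```

Because the network is linear in its weights, training reduces to one least-squares solve; no iterative descent is needed.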
Neural Net Project 4: 11/15/2012
Comparing One- and Two-step MLP Training Algorithms
In this project, we train networks on 2 data files using BP3, cg, and MOLF, all of
which have been discussed in class.
Homework # 4, EE5353
1. For a Bayes-Gaussian classifier the mean vector for the ith class is mi and has elements mi(n), the
covariance matrix for the ith class is Ci, and the elements of the inverse c
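For reference, the Bayes-Gaussian discriminant built from mi and Ci can be sketched as below, dropping the constant term common to all classes. This is a generic illustration of the classifier, not the homework's required solution:

```python
import numpy as np

def gaussian_discriminant(x, m, C, prior=1.0):
    """Bayes-Gaussian discriminant for one class: log prior plus log Gaussian
    likelihood, with the (2*pi)^(N/2) factor common to all classes dropped."""
    d = x - m
    Cinv = np.linalg.inv(C)                 # inverse covariance matrix
    return (-0.5 * d @ Cinv @ d
            - 0.5 * np.log(np.linalg.det(C))
            + np.log(prior))

def classify(x, means, covs, priors):
    """Assign x to the class with the largest discriminant value."""
    scores = [gaussian_discriminant(x, m, C, p)
              for m, C, p in zip(means, covs, priors)]
    return int(np.argmax(scores))
```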
Neural Net Project 1:
Small Linear Networks for Function Approximation
1. Read the Reference Material below.
2. Using the data specified in part C, implement the linear equation solution of part D,
pr
Neural Net Project 5:
Simple Nonlinear Networks for Function Approximation and Classification
In this project, we begin the task of producing multilayer perceptron (MLP)
training software for 3-layer
I. Introduction
A. Approximating Functions of One Variable, Review
1. Functions of Time
Goal: Review approximation techniques for functions of time
a. Example Applications
(1) Approximating message si
Neural Net Project 2:
Linear Networks for Function Approximation
(1) Download and unzip Map.zip and compile the c program. Familiarize yourself
with the code.
(a) Download the file Twod.tra from the w
Exam # 2, EE5353, Fall 2013
1. A sigmoidal MLP has 10 inputs, 8 units in the first hidden layer, 7 units in the
second hidden layer, and 2 outputs. It is fully connected. As usual, thresholds in
the h
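Counting the trainable weights in such an MLP is mechanical; the sketch below counts one threshold per non-input unit, and the bypass flag covers the alternate convention in which "fully connected" also includes connections from all earlier layers (an assumption on my part; check the course's definition):

```python
def mlp_weight_count(layers, bypass=False):
    """Count trainable weights in an MLP with thresholds (biases) in every
    non-input layer. layers = [inputs, hidden1, ..., outputs]. With
    bypass=True, each layer receives inputs from all earlier layers."""
    total = 0
    for j in range(1, len(layers)):
        sources = layers[:j] if bypass else [layers[j - 1]]
        total += layers[j] * (sum(sources) + 1)   # +1 threshold per unit
    return total
```

For the 10-8-7-2 network above, layer-to-layer counting gives 8*(10+1) + 7*(8+1) + 2*(7+1) = 167 weights.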
Exam # 3, EE5353, Fall 2013
1. Sometimes continuous approximations are needed. Consider a smoothed PLN
(SPLN) that uses a weighted squared Euclidean distance measure. In order to make
the mapping cont
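The weighted squared Euclidean distance measure mentioned above can be written as d_k(x) = sum over n of w(n)*(x(n) - m_k(n))^2. A one-line sketch (whether the weights are shared or per-cluster is not specified in the excerpt, so a single weight vector is assumed here):

```python
import numpy as np

def weighted_sq_dist(x, m, w):
    """Weighted squared Euclidean distance between pattern x and center m."""
    return float(np.sum(w * (x - m) ** 2))
```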
Exam # 3, EE5353, Fall 2012
1. Four PLNs are to be fit to some training patterns (xp, tp). The forward (F)
network maps x to t, and has a mean-squared error (MSE) EF(i) for 1 ≤ i ≤ M. The
reverse (R) netw
Exam # 3, EE5353, Fall 2011
1. In a PLN with K clusters, yp = Ak xap where the kth cluster is closest to xp. We
want to calculate derivatives of this mapping.
(a) Let ak(m,n) denote an element of Ak an
Exam # 2, EE5353, Fall 2012
1. A sigmoidal MLP has 8 inputs, 12 units in the first hidden layer, 6 units in the
second hidden layer, and 3 outputs. It is fully connected. As usual, thresholds in
the h
Exam # 2, EE5353, Fall 2011
1. There are Nv training patterns for a sigmoidal MLP network having N inputs and
M outputs. Assume that we want the network to memorize the training patterns.
(a) How many
Exam # 1, EE5353, Fall 2012
1. Here we consider MLPs with binary-valued inputs (0 or 1).
(a) If the MLP has N inputs, what is the maximum degree D of its PBF model?
(b) If the MLP has N inputs, what i
Exam # 1, EE5353, Fall 2011
1. Here we consider MLPs with binary-valued inputs (0 or 1).
(a) If the MLP has N inputs, what is the maximum degree D of its PBF model?
(b) If the MLP has N inputs, what i
Exam # 1, EE5353, Fall 2013
1. A functional link net has N inputs, M outputs, and is degree D. The weights wik,
which feed into output number i, are found by minimizing the error function,
E(i) = (1/Nv)
Homework # 1, EE5353
1. An XOR network has two inputs, one hidden unit, and one output. It is fully
connected. Give the network's weights if the output unit has a step activation and the
hidden unit a
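The homework statement is cut off before the hidden unit's activation is specified; purely as an illustration, here is one weight set that realizes XOR with a single hidden unit and direct input-to-output connections, using step activations for both units (not necessarily the activations the homework intends):

```python
def step(v):
    """Step activation: 1 if the net input is nonnegative, else 0."""
    return 1 if v >= 0 else 0

def xor_net(x1, x2):
    """XOR with one hidden unit plus direct input-to-output connections.
    The hidden unit computes AND of the inputs; the output subtracts it
    from the OR-like sum of the direct connections."""
    h = step(x1 + x2 - 1.5)              # hidden unit: AND(x1, x2)
    return step(x1 + x2 - 2 * h - 0.5)   # output: OR minus 2*AND
```

Checking all four input pairs confirms the truth table of XOR: (0,0) and (1,1) give 0, while (0,1) and (1,0) give 1.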