4. (18+5 pts.) Neural networks

...positive. Suppose we use the "absolute error" function:

E = \sum_e |T_e - O_e|

where the sum is taken over the examples in the training set, T_e is the correct value for example e, and O_e is the network's output. Suppose also that O_e must be in the range [0, 1]. By writing out an expression for the error in terms of O_e, find the value of O_e that minimizes the error.

(e) (4, RELATIVELY HARD) Let us try to find an error function such that the error in the above situation is minimized when O_e = p, i.e., we want the trained network to output the probability that the example is positive. Suppose that we try the following kind of error function:

E = \sum_e (T_e - O_e)^n

for some n. Show that n = 2 is the only possibility. Reminder: d(u^n)/dx = n (du/dx) u^{n-1}.

(f) (5, EXTRA CREDIT) Analyze the case where we allow the error to be any polynomial function of (T_e - O_e)...
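A quick numeric sketch can make the contrast behind parts (d) and (e) concrete. Assuming (hypothetically) a training set in which a fraction p of the examples are positive (T_e = 1) and the network emits a constant output o for all of them, the expected absolute error is p|1 - o| + (1 - p)|o|, while the expected squared error is p(1 - o)^2 + (1 - p)o^2. A grid search over o shows the absolute error is minimized at an extreme (0 or 1), whereas the squared error is minimized exactly at o = p:

```python
import numpy as np

# Hypothetical setup (not from the problem statement): a fraction p of
# examples are positive, and the network outputs the same value o for all.
p = 0.3
o = np.linspace(0.0, 1.0, 1001)

# Expected absolute error: p*|1 - o| + (1 - p)*|o|  (linear in o)
abs_err = p * np.abs(1 - o) + (1 - p) * np.abs(o)

# Expected squared error: p*(1 - o)^2 + (1 - p)*o^2  (quadratic in o)
sq_err = p * (1 - o) ** 2 + (1 - p) * o ** 2

print(o[np.argmin(abs_err)])  # absolute error is minimized at an extreme (0.0 here, since p < 1/2)
print(o[np.argmin(sq_err)])   # squared error is minimized at o = p = 0.3
```

This is consistent with what part (e) asks you to prove analytically: only the n = 2 error function has its minimum at O_e = p.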
This note was uploaded on 05/17/2009 for the course CS 188 taught by Professor Staff during the Spring '08 term at University of California, Berkeley.