

(cont.)

y_i(t) = f_i(net_i(t-1))

- At each time step, activation propagates forward through one layer of connections only.
- Once some level of activation is present in the network, it will continue to flow around the units, even in the absence of any new input whatsoever.
- We can now present the network with a time series of inputs and require that it produce an output based on this series.
- These networks can be used to model many new kinds of problems; however, they also present many new, difficult issues in training.

ECE 517: Reinforcement Learning in AI

Simple Recurrent Network

- Consider the Elman network. At each time step, a copy of the hidden layer units is made to a copy layer.
- Processing is done as follows:
  1. Copy the inputs for time t to the input units.
  2. Compute the hidden unit activations using the net input from the input units and from the copy layer.
  3. Compute the output unit activations as usual.
  4. Copy the new hidden unit activations to the copy layer.
- By computing the activations in this order, we have eliminated cycles, so the requirement that the activations of all posterior nodes be known is met.

Simple Recurrent Network (cont.)...
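The four Elman processing steps can be sketched in NumPy as below. This is a minimal illustration, not the course's implementation: the layer sizes, weight names (W_in, W_copy, W_out), random initialization, and the tanh/linear activation choices are all assumptions made for the example.

```python
# Sketch of an Elman (simple recurrent) network forward pass.
# Assumed details: layer sizes, weight matrix names, tanh hidden units,
# and linear outputs are illustrative choices, not from the slides.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2

W_in = rng.standard_normal((n_hidden, n_in)) * 0.1      # input -> hidden
W_copy = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # copy layer -> hidden
W_out = rng.standard_normal((n_out, n_hidden)) * 0.1    # hidden -> output

copy_layer = np.zeros(n_hidden)  # holds hidden activations from time t-1

def step(x):
    """One time step: because the hidden layer reads the *copy* of its
    previous activations rather than itself, the computation has no cycles."""
    global copy_layer
    # Step 1: x is the input for time t (already on the "input units").
    # Step 2: hidden units see net input from the inputs and the copy layer.
    hidden = np.tanh(W_in @ x + W_copy @ copy_layer)
    # Step 3: compute output unit activations as usual.
    output = W_out @ hidden
    # Step 4: copy the new hidden activations to the copy layer.
    copy_layer = hidden.copy()
    return output

# Present the network with a time series of inputs; each output depends
# (through the copy layer) on the entire input history so far.
series = [rng.standard_normal(n_in) for _ in range(4)]
outputs = [step(x) for x in series]
```

Note that the copy layer is exactly the mechanism that lets activation "continue to flow" across time steps: even with a zero input, the hidden units would still receive net input from the copied previous state.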

This note was uploaded on 05/04/2013 for the course ECE 517 taught by Professor Arel during the Fall '11 term at University of Tennessee.
