
Time-Delay Neural Networks (TDNN)
- One of the simplest ways of performing sequence recognition
- Allows conventional backpropagation algorithms to be used
- Downsides:
  - Memory is limited by the length of the tapped delay line
  - If a large number of input units is needed, computation can be slow and many training examples are required

ECE 517: Reinforcement Learning in AI

TDNN (cont.)
- A simple extension is to allow non-uniform sampling:

      x_i(t) = x(t − ω_i)

  where ω_i is the integer delay associated with input component i.
- Another approach is for each "input" to be a convolution of the original input sequence:

      x_i(t) = Σ_{τ=1..t} c_i(t − τ) x(τ)

- In the case of delay-line memories, the kernel is:

      c_i(t − τ) = 1 if t − τ = ω_i, and 0 otherwise

Elman Nets (1990) – Simple Recurrent Neural Networks
- Elman nets are feed-forward networks with partial recurrence
- Unlike feed-forward nets, Elman nets have a memory, or sense of time
- Can also be viewed as a "Markovian" NN

Learning Time Sequences
- Recurrent networks have one or more feedback loops
- Many tasks require learning a temporal sequence of events
- These problems can be broken into three distinct types of tasks:
  - Sequence Recognition: produce a particular output pattern when a specific input...
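The two TDNN memory formulations above can be sketched in code. This is a minimal illustration, not from the slides; the function names and zero-padding convention for times before the start of the signal are my own assumptions. It builds the delayed inputs x_i(t) = x(t − ω_i) directly, and also via the general convolution memory, so that a one-hot kernel at lag ω_i reproduces the delay line exactly.

```python
import numpy as np

def delay_line_inputs(x, delays):
    """Direct delay-line memory: x_i(t) = x(t - omega_i).

    x: 1-D signal (0-based array); delays: integer delays omega_i.
    Returns shape (len(delays), len(x)); entries whose delayed index
    falls before the start of the signal are zero (an assumption).
    """
    T = len(x)
    out = np.zeros((len(delays), T))
    for i, w in enumerate(delays):
        if w < T:
            out[i, w:] = x[: T - w]
    return out

def convolution_inputs(x, kernels):
    """General memory: x_i(t) = sum over tau <= t of c_i(t - tau) x(tau)."""
    T = len(x)
    out = np.zeros((len(kernels), T))
    for i, c in enumerate(kernels):
        for t in range(T):
            for tau in range(t + 1):
                lag = t - tau
                if lag < len(c):
                    out[i, t] += c[lag] * x[tau]
    return out
```

Using a kernel c_i that is 1 at lag ω_i and 0 elsewhere, `convolution_inputs` produces the same array as `delay_line_inputs`, matching the slide's claim that the delay line is a special case of the convolution memory.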
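The Elman net's "memory" can be made concrete with a short forward-pass sketch. This is a hypothetical minimal implementation, assuming tanh hidden units and a linear output layer (the slides do not specify activations); the context layer is simply a copy of the previous hidden state fed back alongside the input.

```python
import numpy as np

def elman_step(x, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    """One time step of an Elman (simple recurrent) network.

    The context layer is a copy of the previous hidden state h_prev,
    which is what gives the network its memory / sense of time.
    """
    h = np.tanh(W_xh @ x + W_hh @ h_prev + b_h)  # hidden + context input
    y = W_hy @ h + b_y                           # linear output layer
    return h, y

def elman_run(xs, W_xh, W_hh, W_hy, b_h, b_y):
    """Process a whole sequence, threading the hidden state through time."""
    h = np.zeros(W_hh.shape[0])
    ys = []
    for x in xs:
        h, y = elman_step(x, h, W_xh, W_hh, W_hy, b_h, b_y)
        ys.append(y)
    return np.array(ys)
```

Because the output at each step depends on the accumulated hidden state, presenting the same inputs in a different order generally yields different outputs, which is exactly the "sense of time" that distinguishes Elman nets from plain feed-forward networks.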