224s.09.lec14

CS 224S/LING 281 Speech Recognition, Synthesis, and Dialogue
Dan Jurafsky
Lecture 14: Dialogue: MDPs and Speaker Detection

Outline for today
- MDP dialogue architectures
- Speaker recognition

Now that we have a success metric
- Could we use it to help drive learning?
- In recent work, this metric is used to learn an optimal policy, or strategy, for how the conversational agent should behave.

New idea: modeling a dialogue system as a probabilistic agent
A conversational agent can be characterized by:
- The current knowledge of the system
- A set of states S the agent can be in
- A set of actions A the agent can take
- A goal G, which implies:
  - a success metric that tells us how well the agent achieved its goal
  - a way of using this metric to create a strategy, or policy, for what action to take in any particular state

What do we mean by actions A and policies?
Kinds of decisions a conversational agent needs to make:
- When should I ground/confirm/reject/ask for clarification on what the user just said?
- When should I ask a directive prompt, and when an open prompt?
- When should I use user, system, or mixed initiative?

A threshold is a human-designed policy!
Could we instead learn which action is right:
- Rejection
- Explicit confirmation
- Implicit confirmation
- No confirmation
by learning a policy which, given various information about the current state, dynamically chooses the action that maximizes dialogue success?

Another strategy decision
- Open versus directive prompts
- When to use mixed initiative

How do we do this optimization? Markov Decision Processes.

Review: Open vs. directive prompts
- Open prompt: the system gives the user very few constraints; the user can respond however they please. "How may I help you?" "How may I direct your call?"
- Directive prompt: the system explicitly instructs the user how to respond. "Say yes if you accept the call; otherwise, say no."
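The "threshold is a human-designed policy" point above can be made concrete. Below is a minimal sketch of a hand-designed confirmation policy that maps an ASR confidence score to one of the four grounding actions; the specific cutoff values are illustrative assumptions, not numbers from the lecture.

```python
# A hand-designed confirmation policy: map ASR confidence to one of the
# four grounding actions discussed above. The threshold values are
# invented for illustration.

def choose_confirmation_action(asr_confidence: float) -> str:
    """Pick a grounding action given an ASR confidence score in [0, 1]."""
    if asr_confidence < 0.30:
        return "reject"                 # too unsure: ask the user to repeat
    elif asr_confidence < 0.60:
        return "explicit_confirmation"  # "Did you say Boston?"
    elif asr_confidence < 0.85:
        return "implicit_confirmation"  # "Flying to Boston. What day?"
    else:
        return "no_confirmation"        # confident enough to just proceed

# The point of the MDP approach is to *learn* these cutoffs (or a richer
# state-to-action mapping) from data rather than fixing them by hand.
```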
Review: Restrictive vs. non-restrictive grammars
- Restrictive grammar: a language model that strongly constrains the ASR system, based on the dialogue state
- Non-restrictive grammar: an open language model that is not restricted to a particular dialogue state

Kinds of initiative
How do I decide which of these initiatives to use at each point in the dialogue?

Grammar            Open Prompt          Directive Prompt
Restrictive        Doesn't make sense   System Initiative
Non-restrictive    User Initiative      Mixed Initiative
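The MDP framing above (states S, actions A, and a reward derived from the success metric) can be sketched end to end on a toy dialogue task. Everything below — the states, actions, transition probabilities, and reward numbers — is invented for illustration; the lecture only sets up the framework. The sketch uses standard value iteration to recover a policy.

```python
# Value iteration on a toy dialogue MDP. States, actions, transition
# probabilities, and rewards are invented for illustration; the lecture
# only frames the problem as (S, A, G) plus a success metric.

# Toy task: fill one slot, then confirm it. Open prompts are error-prone;
# directive prompts are more reliable.
STATES = ["no_info", "have_info", "done"]
ACTIONS = ["open_prompt", "directive_prompt", "confirm"]

# T[s][a] = list of (next_state, probability) pairs
T = {
    "no_info": {
        "open_prompt":      [("have_info", 0.6), ("no_info", 0.4)],
        "directive_prompt": [("have_info", 0.9), ("no_info", 0.1)],
        "confirm":          [("no_info", 1.0)],   # nothing to confirm yet
    },
    "have_info": {
        "open_prompt":      [("have_info", 1.0)],
        "directive_prompt": [("have_info", 1.0)],
        "confirm":          [("done", 1.0)],
    },
    "done": {a: [("done", 1.0)] for a in ACTIONS},  # absorbing state
}

def reward(state, next_state):
    if state == "done":
        return 0.0                                  # episode already over
    return 20.0 if next_state == "done" else -1.0   # success bonus vs. per-turn cost

def value_iteration(gamma=0.95, iters=200):
    """Compute state values, then read off the greedy policy."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in STATES:
            V[s] = max(sum(p * (reward(s, s2) + gamma * V[s2])
                           for s2, p in T[s][a])
                       for a in ACTIONS)
    return {s: max(ACTIONS,
                   key=lambda a: sum(p * (reward(s, s2) + gamma * V[s2])
                                     for s2, p in T[s][a]))
            for s in STATES}

policy = value_iteration()
# With these numbers the learned policy prefers the reliable directive
# prompt while the slot is empty, then confirms once it is filled.
```

The interesting point is that the per-turn cost and success bonus play the role of the dialogue success metric: change the relative cost of an extra turn versus a recognition error, and the learned open-vs-directive choice can flip, exactly the kind of trade-off a hand-set threshold cannot adapt to.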
This note was uploaded on 04/21/2011 for the course CS 224 taught by Professor De during the Spring '11 term at Kentucky.
