Northwestern University
Marciano Siniscalchi
Fall 2009
Econ 3310
THE VALUE OF INFORMATION
1. Introduction
This lecture marks the beginning of our study of dynamic choice. This week we focus on a simple, but extremely important type of problem: prior to making a final decision, an individual can, typically for a price, receive information that may help her decide. These problems are dynamic, in that there are (at least) two points in time when the DM (decision-maker) must decide: first, she must decide whether or not to acquire information; second, she must act upon the information she receives. We shall focus on how to best use information, and on its value to the DM.
In order to study dynamic choice, the state-space framework turns out to be really useful. That is, we will specify a state space S and a set of prizes X (typically real numbers); the objects of choice are random variables, viewed as maps X : S → X. The collection of random variables of interest is denoted by F.
2. The Value of Information: An Example
The basic take-home point of this lecture is that "information is always potentially beneficial to the decision-maker." This emphasizes that (1) more information is a good thing, but (2) information is not good 'per se': rather, it has the potential to be beneficial, provided the decision-maker makes good use of it. A simple example clarifies these points.
You are the sales manager of a monopolistic firm that faces consumers with uncertain demand. More precisely, you know that, if you charge a price of p, demand will be equal to

Q(p, s) = s − p,

where s ∈ R is a parameter whose value you do not know. Your objective is to maximize the firm's profits; suppose that your technology exhibits constant marginal cost 2 and no fixed costs.
Let us map this problem to a state-space framework. We can think of s as being the state of the world (representing market conditions, if you wish). Prizes represent monetary outcomes, or profits. Utility is linear: u(x) = x.
The acts we are interested in are those corresponding to price choices; thus, we can denote by X_p the random variable corresponding to profits in case the unit price equals p. Clearly,

∀p,  X_p(s) = (p − 2) · Q(p, s).
That is, in state s, fixing a price p results in sales equal to Q(p, s), and unit profits of (p − 2). The collection F of random variables of interest is thus given by

F = {X_p : p ∈ R}.
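The demand and profit functions above are simple enough to sketch numerically. The following is a minimal illustration, not part of the lecture; the function names `demand` and `profit` are ours.

```python
# Demand: Q(p, s) = s - p; constant marginal cost 2, no fixed costs.

def demand(p, s):
    """Quantity demanded at price p when the market parameter is s."""
    return s - p

def profit(p, s):
    """X_p(s) = (p - 2) * Q(p, s): unit margin times quantity sold."""
    return (p - 2) * demand(p, s)

# Example: in state s = 10, charging p = 6 yields (6 - 2) * (10 - 6) = 16.
print(profit(6, 10))
```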
Turn now to the analysis of this model. Suppose first, as a preliminary step, that you actually know the value of s. Clearly, if s = 0, the optimal choice is to set p* = 0; if s > 0, we can solve for the optimal price p* via the first-order condition

dX_p(s)/dp = s − p − (p − 2) = s + 2 − 2p = 0.
Note that the second derivative is negative, so the first-order condition identifies an interior maximum:

p*(s) = (1/2) s + 1.
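We can sanity-check this first-order-condition solution numerically, by scanning a grid of candidate prices and confirming that the best one coincides with p*(s) = s/2 + 1. This is a sketch for verification only; it is not part of the lecture.

```python
# Profit in state s at price p: (p - 2) * (s - p), as derived above.
def profit(p, s):
    return (p - 2) * (s - p)

# Candidate optimum from the first-order condition: p*(s) = s/2 + 1.
def p_star(s):
    return s / 2 + 1

s = 10.0
grid = [i / 100 for i in range(0, 2001)]       # prices 0.00, 0.01, ..., 20.00
best = max(grid, key=lambda p: profit(p, s))   # grid maximizer of profit
print(best, p_star(s))                         # both are 6.0 when s = 10
```

Because the profit function is strictly concave in p, the grid maximizer agrees with the analytical solution wherever the grid contains p*(s).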