Lecture 17 - March 9th - Recap

• Midterm 3 on Thursday
• Chapters 7, 8, 9, 10, 11
• Lectures 12, 13, 14, 15, 16, 17
• Skip Chapter 12
• Review sessions
  ▪ Tomorrow, Kleiber Hall 3, 6:00 to 8:00 pm
  ▪ E-mail TAs with questions you want to discuss
• Last-minute Q&A session as usual, Thursday 11:00 to 12:30
• Different sources of knowledge
  ▪ Personal experience, authority
  ▪ Limitations?
    ▪ Seek information consistent with our beliefs and ignore inconsistent information
    ▪ Inferences based on little information
    ▪ Our expectations influence what we see
    ▪ Base rate fallacy, ignore comparisons
    ▪ Assume that propositions that feel wrong to us are invalid
  ▪ Bottom line: wrong conclusions, not self-correcting
• Better source of knowledge? The scientific method
  ▪ Higher chance of getting the correct answer
  ▪ The process is self-correcting!
• What are the steps?
  ▪ Theory about how something works
  ▪ Generate hypotheses (predictions)
  ▪ Conduct SYSTEMATIC empirical observation
  ▪ Test the hypothesis
• Where do we find our questions?
  ▪ Existing theories, practical needs, observations
• What do we already know?
  ▪ Primary vs. secondary sources
• Hypothesis: specific and falsifiable
• Descriptive vs. causal
  ▪ Descriptive: does not explain the phenomenon
  ▪ Causal: explains possible causes for the pattern described, i.e. how the IV affects the DV
• Directional vs. non-directional
• Identify and define variables
  ▪ Independent vs. dependent variables
    ▪ IV = manipulated or causal variable
    ▪ Non-true IV = not directly manipulated; participant variables
    ▪ DV = affected or outcome variable
• Create operational definitions of our variables
  ▪ Definition of the procedure the researcher uses to measure (DV) or manipulate (IV) a variable
• Measurement type
  ▪ Behavioral, physiological, self-report, test
• Measurement scale
  ▪ Nominal, ordinal, interval, ratio
  ▪ Choice depends on the question
  ▪ Parametric tests can only be used on interval and ratio data
1. Who do we observe? Who do you want to generalize to?
  • Random sampling (see the sketch below)
    ▪ Simple, systematic, stratified, cluster
  • Convenience sampling
    ▪ Quota, snowball
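A minimal Python sketch of the sampling distinction, using a made-up participant pool (the names and strata are invented for illustration): simple random sampling draws from the whole pool, while stratified sampling draws separately within each stratum.

```python
import random

# Made-up pool of participants, each tagged with a stratum (class year).
pool = [("Ana", "junior"), ("Ben", "senior"), ("Cam", "junior"), ("Dee", "senior"),
        ("Eli", "junior"), ("Fay", "senior"), ("Gus", "junior"), ("Hal", "senior")]

random.seed(1)

# Simple random sampling: every member of the pool has an equal chance of selection.
simple = random.sample(pool, k=4)

# Stratified sampling: sample within each stratum so the sample mirrors the
# population's composition (here, 2 juniors and 2 seniors).
strata = {}
for person in pool:
    strata.setdefault(person[1], []).append(person)
stratified = [p for group in strata.values() for p in random.sample(group, k=2)]

print("simple:    ", simple)
print("stratified:", stratified)
```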
2. How can we limit error? Random and systematic error
  • Reliability: the extent to which our measurements are consistent! (free from random error)
    ▪ The entity being measured is constant
    ▪ Increasing the # of indicators reduces random error
    ▪ Test-retest, alternate forms, internal consistency, inter-rater reliability
    ▪ Measured by a correlation coefficient (see the sketch below)
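As a rough illustration of "measured by a correlation coefficient," here is a sketch of test-retest reliability as the Pearson correlation between two administrations of the same measure. The scores are made up and scipy is assumed to be available.

```python
from scipy.stats import pearsonr

# Made-up scores for the same six participants, measured twice, two weeks apart.
time1 = [12, 18, 25, 9, 30, 21]
time2 = [14, 17, 27, 10, 28, 22]

# Test-retest reliability: correlation between the two administrations.
# A value close to 1.0 indicates consistent (reliable) measurement.
r, p = pearsonr(time1, time2)
print(f"test-retest reliability r = {r:.2f}")
```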
  • Validity: the extent to which our measurements are free from both sources of error
    ▪ Construct validity: are we measuring what we intend to?
      ▪ Nomological, face, content, convergent, discriminant
    ▪ Internal validity: can we infer that the effect is due to our manipulation?
    ▪ External validity: are our results generalizable?
3. What kind of study design? Depends on the hypothesis: descriptive vs. causal
  • Descriptive design
    ▪ Describe individual variables as they exist
    ▪ Not concerned with relationships between variables
    ▪ Observational studies, case study, survey
    ▪ Issues in building and administering surveys
  • Correlational design
    ▪ Examine and describe the relationship between two or more variables; variables are not manipulated
    ▪ Association: linear, monotonic, curvilinear
  • Experimental designs: allow us to establish a cause-effect relationship; variables ARE manipulated
    ▪ Variables can be manipulated between or within subjects
    ▪ Simple experiment: post-test only, pre-test/post-test (1 IV, 2 groups)
    ▪ Multiple-groups experiments (1 IV, multiple groups)
    ▪ Factorial designs: between, within, mixed (multiple IVs, multiple groups)
    ▪ Internal validity is essential!
      ▪ Control for possible confounds:
        ▪ Group non-equivalence: randomly assign participants to conditions! (see the sketch below)
        ▪ Participant and experimenter effects
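A small sketch of random assignment (the participant IDs are invented): shuffling the list of participants and splitting it in half gives each person an equal chance of landing in either condition, which guards against group non-equivalence.

```python
import random

participants = list(range(1, 21))   # hypothetical participant IDs 1-20

random.seed(42)
random.shuffle(participants)        # randomize the order

# First half -> treatment, second half -> control.
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print("treatment:", sorted(treatment))
print("control:  ", sorted(control))
```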
• Descriptive design: describe data with descriptive statistics (see the sketch below)
  ▪ Measures of central tendency: mean, median, mode
  ▪ Measures of variability: range, variance, standard deviation
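A minimal sketch of these descriptive statistics using Python's standard statistics module on a made-up sample of scores.

```python
import statistics as st

scores = [4, 7, 7, 8, 10, 12, 15]   # made-up sample

# Measures of central tendency
print("mean:", st.mean(scores), "median:", st.median(scores), "mode:", st.mode(scores))

# Measures of variability (range, sample variance, sample standard deviation)
print("range:", max(scores) - min(scores),
      "variance:", round(st.variance(scores), 2),
      "sd:", round(st.stdev(scores), 2))
```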
• Correlational design: association between variables
  ▪ Correlation coefficient: existence, direction, strength
  ▪ Different coefficients for different data and forms of association (see the sketch below)
    ▪ Pearson: linear; interval/ratio data
    ▪ Spearman: monotonic; ordinal data
    ▪ Point-biserial: one nominal, one interval
    ▪ Phi: two nominal
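A sketch of the first three coefficients with scipy.stats; the data are made up. (Phi, for two dichotomous nominal variables, can be obtained as a Pearson correlation between two 0/1-coded variables.)

```python
from scipy.stats import pearsonr, spearmanr, pointbiserialr

hours  = [1, 2, 3, 4, 5, 6]         # interval/ratio predictor (made up)
exam   = [55, 60, 68, 71, 80, 88]   # interval/ratio outcome (made up)
ranks  = [6, 5, 4, 3, 2, 1]         # ordinal ranks of the exam scores
passed = [0, 0, 1, 1, 1, 1]         # dichotomous nominal variable

print(pearsonr(hours, exam))         # Pearson: linear, interval/ratio data
print(spearmanr(hours, ranks))       # Spearman: monotonic, ordinal data
print(pointbiserialr(passed, exam))  # point-biserial: one nominal, one interval
```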
  ▪ An established relationship allows:
    ▪ Prediction of future behavior
    ▪ Prediction of one variable (criterion) from another (predictor)
      ▪ Linear regression analyses (see the sketch below)
      ▪ Coefficient of determination: amount of shared variance
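A sketch of a simple linear regression with scipy.stats.linregress, predicting a criterion from a predictor; the numbers are made up. Squaring the correlation gives the coefficient of determination.

```python
from scipy.stats import linregress

predictor = [1, 2, 3, 4, 5, 6]          # e.g. hours studied (made up)
criterion = [55, 60, 68, 71, 80, 88]    # e.g. exam score (made up)

result = linregress(predictor, criterion)

# Regression equation for predicting the criterion from the predictor.
print(f"predicted score = {result.intercept:.1f} + {result.slope:.1f} * hours")

# Coefficient of determination: the proportion of variance the variables share.
print(f"r^2 = {result.rvalue ** 2:.2f}")
```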
• Experimental design: test of the null hypothesis. Is the difference between groups/treatments due to chance? (The decision rule is sketched below.)
  1. Set the critical level α.
  2. What cutoff score on our statistic's distribution (z, t, F) corresponds to α?
  3. Does the value of the statistic for our experiment exceed the cutoff value? (> or < if non-directional, 2-tailed test; either > or < if directional, 1-tailed test)
     ▪ If yes, reject the null hypothesis.
     ▪ If not, fail to reject the null hypothesis.
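A sketch of the three-step decision rule for a two-tailed t-test; the α level, degrees of freedom, and observed t are invented, and scipy is assumed to be available.

```python
from scipy.stats import t

alpha = 0.05    # 1. set the critical level
df    = 18      # degrees of freedom for this hypothetical test
t_obs = 2.40    # made-up t statistic from our experiment

# 2. Find the cutoff on the t distribution that corresponds to alpha (two-tailed).
cutoff = t.ppf(1 - alpha / 2, df)

# 3. Compare the observed statistic to the cutoff.
if abs(t_obs) > cutoff:
    print(f"|t| = {abs(t_obs):.2f} > {cutoff:.2f}: reject the null hypothesis")
else:
    print(f"|t| = {abs(t_obs):.2f} <= {cutoff:.2f}: fail to reject the null hypothesis")
```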
  • Failure to reject the null hypothesis: WHY?
    ▪ Start the process again: revise the theory, generate a new hypothesis, conduct a new study, test the new hypothesis
• How do we pick our test?
  1. Are the scores distributed normally?
     ▪ Assume yes in the scenarios provided on the exam
  2. Are the scores ratio/interval?
     ▪ Choose one of the parametric tests: z-test, t-test, F-test
  3. Are the scores nominal/ordinal?
     ▪ Choose a nonparametric test
     ▪ No need to identify which one
• Comparing a sample to a population mean (see the sketch below)
  ▪ When σ is known: z-test
    ▪ Rarely do we know the standard deviation of the population
  ▪ When σ is unknown: t-test
    ▪ Rarely do we compare a sample mean to a population mean
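A sketch of comparing a sample mean to a population mean when σ is unknown, using a one-sample t-test from scipy; the reaction times and the claimed population mean are made up.

```python
from scipy.stats import ttest_1samp

# Made-up reaction times (ms); suppose the population mean is claimed to be 300 ms.
sample = [312, 298, 305, 321, 290, 315, 308, 301]

t_stat, p_value = ttest_1samp(sample, popmean=300)
print(t_stat, p_value)   # compare p_value to alpha to decide about the null hypothesis
```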
• Comparing two groups? T-test (both cases are sketched below)
  ▪ Correlated t-test: if the scores are dependent
    ▪ Before-after design or any other within-participants design
    ▪ Between-participants when participants are matched on a variable (paired comparison = matched pairs = matched design)
  ▪ Independent-samples t-test: if the scores are independent
    ▪ Between-participants when participants are NOT matched
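A sketch of both two-group cases with scipy (all scores invented): a correlated (paired) t-test for dependent scores and an independent-samples t-test for unmatched groups.

```python
from scipy.stats import ttest_rel, ttest_ind

# Within-participants (before-after) design: the same people measured twice.
before = [10, 12, 9, 14, 11, 13]
after  = [12, 14, 10, 15, 13, 15]
print(ttest_rel(before, after))      # correlated / paired t-test

# Between-participants design with unmatched groups.
group_a = [10, 12, 9, 14, 11, 13]
group_b = [13, 15, 12, 16, 14, 17]
print(ttest_ind(group_a, group_b))   # independent-samples t-test
```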
• Comparing more than two groups? More than one IV? ANOVA (= F-test)
  ▪ Repeated-measures ANOVA if the scores are dependent/correlated
    ▪ Within-participants design
  ▪ ANOVA if the scores are independent
    ▪ Between-participants design
• ANOVA for a multiple-groups study (i.e. one IV with more than 2 levels)
  ▪ One-way ANOVA (see the sketch below)
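A sketch of a one-way ANOVA for one IV with three levels (three independent groups of made-up scores), using scipy's F-test.

```python
from scipy.stats import f_oneway

# One IV with three levels -> three independent groups (made-up scores).
low    = [4, 5, 6, 5, 4]
medium = [6, 7, 7, 8, 6]
high   = [9, 8, 10, 9, 11]

f_stat, p_value = f_oneway(low, medium, high)
print(f_stat, p_value)   # a single F test across all three groups
```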
• ANOVA for factorial designs (i.e. more than one IV)
  ▪ Two-way ANOVA if there are 2 IVs (= factors)
  ▪ Three-way ANOVA if there are 3 IVs (= factors)
  ▪ Etc.
• To find out if there are main effects
  ▪ Look at the marginal means
    ▪ If the difference between means = 0: no main effect
    ▪ If the difference between means ≠ 0: there is a main effect
• To find out if there is an interaction
  ▪ Graph the means (the numbers in the cells); see the sketch after this list
    ▪ Lines parallel? No interaction
    ▪ Lines have the same direction of slope? Ordinal interaction
    ▪ Lines have different directions of slope? Disordinal interaction
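Instead of graphing, here is a numerical sketch of reading a 2x2 table of cell means (the numbers are invented): marginal means reveal main effects, and comparing the simple effect of one factor at each level of the other shows whether the plotted lines would be parallel.

```python
# Made-up cell means for a 2x2 factorial design:
# factor A has levels a1/a2 (rows), factor B has levels b1/b2 (columns).
cells = {("a1", "b1"): 10, ("a1", "b2"): 14,
         ("a2", "b1"): 16, ("a2", "b2"): 20}

# Marginal means for each factor.
mean_a1 = (cells[("a1", "b1")] + cells[("a1", "b2")]) / 2
mean_a2 = (cells[("a2", "b1")] + cells[("a2", "b2")]) / 2
mean_b1 = (cells[("a1", "b1")] + cells[("a2", "b1")]) / 2
mean_b2 = (cells[("a1", "b2")] + cells[("a2", "b2")]) / 2

print("main effect of A?", mean_a1 != mean_a2)  # difference between marginal means != 0
print("main effect of B?", mean_b1 != mean_b2)

# Interaction: compare the simple effect of B at each level of A.
# Equal differences -> parallel lines -> no interaction.
effect_b_at_a1 = cells[("a1", "b2")] - cells[("a1", "b1")]
effect_b_at_a2 = cells[("a2", "b2")] - cells[("a2", "b1")]
print("interaction?", effect_b_at_a1 != effect_b_at_a2)
```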