181. ULERY, HICKLIN, BUSCAGLIA & ROBERTS, supra note 176, at 7736. Once again, the authors arrive at a lower error rate (7.5%) due to their inclusion of 1,856 “inconclusive” conclusions in the denominator. When the inconclusive conclusions (which represented nearly 1/3 of the mated samples) are set aside, the false negative error rate is 450 / 4,113 = 10.94%.
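The authors’ 7.5% figure appears to follow from keeping the same 450 false negatives while returning the 1,856 inconclusive conclusions to the denominator (an arithmetic reconstruction from the numbers above, not a calculation the authors state in these terms):
450 / (4,113 + 1,856) = 450 / 5,969 ≈ 7.5%.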
182. United States v. Love, No. 10cr2418–MMM, 2011 WL 2173644, at *5 (S.D. Cal. June 1, 2011) (“[A] false positive rate of 0.1% [is] quite low.”).

being tested. We know that fingerprint examiners respond differently when they know that they are in a testing situation.183 The representativeness of the samples used is also questionable.184 Finally, this study fails the disinterested researcher requirement for Type II proficiency tests because it was paid for by the FBI, and two of the four authors work for the FBI.185 The authors
183. Langenburg, supra note 168, at 242 (referring to a “bias loop” that arises when examiners know their work will be checked by verifiers who also know that they are merely verifying another examiner’s decision). Others take issue with the suggestion, raised in Ralph Norman Haber & Lyn Haber’s article, that examiners who know they are being tested will “perform better than when the tests are not announced and cannot be differentiated from routine work.” Haber & Haber, supra note 175, at 386. R. Austin Hicklin et al. respond as follows:
While participants in tests may indeed have different performance than in
routine work, it is not reasonable to conclude that the results are necessarily
better in the tests: a few examiners who are not taking the test seriously could
have notably affected the results of a study, especially with respect to rare
events. For example, we do not know if the examiner who made two erroneous
individualizations was acting as s/he would have in routine work, or was just
tired and apathetic, given it was just a test. It seems likely that at least some of
the participants took the test less seriously than casework, given the serious
implications of actual casework, and the absence of any negative implications
on an anonymous test.
R. Austin Hicklin, Bradford T. Ulery, JoAnn Buscaglia & Maria Antonia Roberts, In Response to Haber and Haber, “Experimental Results of Fingerprint Comparison Validity and Reliability: A Review and Critical Analysis,” 54 SCI. & JUST. 390, 391 (2014).
184. BRADFORD T. ULERY, R. AUSTIN HICKLIN, JOANN BUSCAGLIA & MARIA ANTONIA ROBERTS, A STUDY OF THE ACCURACY AND RELIABILITY OF FORENSIC LATENT FINGERPRINT DECISIONS APPENDIX: SUPPORTING INFORMATION 3 (2011) (cautioning that “the overall distribution of the fingerprint data cannot as a whole be considered as statistically representative of operational data,” though they suggest that the prints used included a large proportion of poor quality prints).

