Accuracy (ACC) is calculated as the total number of correct predictions (TP + TN) divided by the total number of examples in the dataset (P + N).
Confusion Matrix

                 Predicted No    Predicted Yes
Actual No        TN              FP
Actual Yes       FN              TP

TP = True Positive      Accuracy Rate = (TP + TN) / Total
FP = False Positive     True Positive Rate = TP / Actual Yes
TN = True Negative      False Positive Rate = FP / Actual No
FN = False Negative     Specificity = TN / Actual No
                        Sensitivity = True Positive Rate
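As a quick sketch (not part of the original notes), the measures above can be computed directly from the four counts; the function name and return structure are illustrative choices.

```python
# Sketch: basic confusion-matrix measures from the four counts.
def basic_measures(tp, tn, fp, fn):
    total = tp + tn + fp + fn          # P + N
    actual_yes = tp + fn               # P
    actual_no = tn + fp                # N
    return {
        "accuracy": (tp + tn) / total,
        "true_positive_rate": tp / actual_yes,   # sensitivity / recall
        "false_positive_rate": fp / actual_no,
        "specificity": tn / actual_no,
    }
```

With the counts from the worked example later in these notes (TP = 6, TN = 8, FP = 2, FN = 4), this returns an accuracy of 0.7, TPR of 0.6, FPR of 0.2, and specificity of 0.8.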

RAM MOHAN (+61406471624, please text via WhatsApp)
2018 sem 1
Sensitivity (Recall or True positive rate)
Sensitivity (SN) is calculated as the number of correct positive predictions divided by the total
number of positives. It is also called recall (REC) or true positive rate (TPR). The best sensitivity is
1.0, whereas the worst is 0.0.
In symbols: SN = TP / P = TP / (TP + FN).
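A one-line sketch of this formula (the function name is an illustrative choice):

```python
# Sensitivity / recall / TPR: correct positives over all actual positives (P = TP + FN).
def sensitivity(tp, fn):
    return tp / (tp + fn)
```

It is 1.0 when FN = 0 and 0.0 when TP = 0, matching the best and worst values above.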
Specificity (True negative rate)
Specificity (SP) is calculated as the number of correct negative predictions divided by the total
number of negatives. It is also called true negative rate (TNR). The best specificity is 1.0, whereas
the worst is 0.0.
In symbols: SP = TN / N = TN / (TN + FP).
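The same sketch for specificity (again, the function name is an illustrative choice):

```python
# Specificity / TNR: correct negatives over all actual negatives (N = TN + FP).
def specificity(tn, fp):
    return tn / (tn + fp)
```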
Precision (Positive predictive value)
Precision (PREC) is calculated as the number of correct positive predictions divided by the total
number of positive predictions. It is also called positive predictive value (PPV). The best precision is
1.0, whereas the worst is 0.0.
In symbols: PREC = TP / (TP + FP).
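And for precision (function name illustrative); note the denominator is the predicted positives, not the actual positives:

```python
# Precision / PPV: correct positives over all positive predictions (TP + FP).
def precision(tp, fp):
    return tp / (tp + fp)
```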

False positive rate
False positive rate (FPR) is calculated as the number of incorrect positive predictions divided by the total number of negatives. The best false positive rate is 0.0, whereas the worst is 1.0. It can also be calculated as 1 - specificity.
In symbols: FPR = FP / N = FP / (FP + TN).
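A matching sketch (function name illustrative); equivalently, 1 - specificity, since FP / N = 1 - TN / N:

```python
# False positive rate: incorrect positives over all actual negatives (N = FP + TN).
def false_positive_rate(fp, tn):
    return fp / (fp + tn)
```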
Example

                 Predicted NO    Predicted YES
Actual NO        TN = 8          FP = 2
Actual YES       FN = 4          TP = 6
Then, the calculations of basic measures are straightforward once the confusion matrix is created.
Measure                                                      Calculated value
Error rate (ERR)                                             6 / 20 = 0.3
Accuracy (ACC)                                               14 / 20 = 0.7
Sensitivity (SN) / true positive rate (TPR) / recall (REC)   6 / 10 = 0.6
Specificity (SP) / true negative rate (TNR)                  8 / 10 = 0.8
Precision (PREC) / positive predictive value (PPV)           6 / 8 = 0.75
False positive rate (FPR)                                    2 / 10 = 0.2
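The whole table can be reproduced with a few lines of arithmetic (variable names are illustrative):

```python
# Counts from the example confusion matrix.
tp, tn, fp, fn = 6, 8, 2, 4
total = tp + tn + fp + fn     # 20
err  = (fp + fn) / total      # error rate: 6 / 20 = 0.3
acc  = (tp + tn) / total      # accuracy: 14 / 20 = 0.7
sn   = tp / (tp + fn)         # sensitivity: 6 / 10 = 0.6
sp   = tn / (tn + fp)         # specificity: 8 / 10 = 0.8
prec = tp / (tp + fp)         # precision: 6 / 8 = 0.75
fpr  = fp / (fp + tn)         # false positive rate: 2 / 10 = 0.2
```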

