Screening: Owais Raza - PhD Epidemiology - Tehran University of Medical Sciences
Definition
Application of a test to people who are as yet asymptomatic for the purpose of classifying them with respect to their likelihood of having a particular disease
Types of Screening
Mass screening: applied to the whole population.
Multiple/multiphasic screening: several screening tests applied at the same time.
Targeted screening: applied to groups with a specific exposure.
Case finding/opportunistic screening: applied to patients who consult a doctor for some other purpose.
Screening Test
A screening test should be inexpensive, easy to administer, and cause minimal discomfort to participants.
The test should have validity and reliability.
              Disease +ve   Disease -ve   Total
Test +ve       90 (a)        30 (b)       120
Test -ve       10 (c)       170 (d)       180
Total         100           200           300

Sensitivity = TP / (TP + FN) = a / (a + c) = 90 / 100 = 90%
Specificity = TN / (FP + TN) = d / (b + d) = 170 / 200 = 85%
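As a quick check, here is a minimal Python sketch (not from the lecture) that reproduces these calculations from the 2x2 cell counts:

```python
# Minimal sketch: sensitivity and specificity from the 2x2 table,
# using the cell labels a, b, c, d shown above.

def sensitivity(a, c):
    # Proportion of diseased people correctly identified: a / (a + c).
    return a / (a + c)

def specificity(b, d):
    # Proportion of non-diseased people correctly identified: d / (b + d).
    return d / (b + d)

a, b, c, d = 90, 30, 10, 170  # counts from the table above
print(f"Sensitivity = {sensitivity(a, c):.0%}")  # 90%
print(f"Specificity = {specificity(b, d):.0%}")  # 85%
```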
False Positives
Burden on the healthcare system
Cost (physical, emotional, financial)
Stigma
Simultaneous screening
Two screening tests are performed at the same time (in parallel), and a person who is positive on either test is considered screen-positive.
Sequential Screening
In sequential (two-stage) screening, the second test is applied only to those who test positive on the first test.
Net sensitivity = 315 / 500 = 63%
Net specificity = (7600 + 1710) / 9500 = 98%
Relative to the individual tests, sequential screening decreases net sensitivity and increases net specificity.
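The slide shows only the combined counts. Below is a sketch of the two-stage calculation, assuming example per-test characteristics (Test 1: sensitivity 70%, specificity 80%; Test 2: sensitivity 90%, specificity 90%; 500 diseased and 9,500 non-diseased people) that reproduce the counts above:

```python
# Sketch of two-stage (sequential) screening: test 2 is applied only to
# those who screen positive on test 1. Per-test values are assumed example
# parameters consistent with the slide's counts (315/500, 7600 + 1710 / 9500).

diseased, non_diseased = 500, 9_500           # assumed population split
se1, sp1 = 0.70, 0.80                         # assumed test 1 characteristics
se2, sp2 = 0.90, 0.90                         # assumed test 2 characteristics

# Stage 1
tp1 = diseased * se1                          # 350 diseased test positive
tn1 = non_diseased * sp1                      # 7600 non-diseased correctly negative
fp1 = non_diseased - tn1                      # 1900 false positives go on to test 2

# Stage 2 (only stage-1 positives are retested)
tp2 = tp1 * se2                               # 315 diseased positive on both tests
tn2 = fp1 * sp2                               # 1710 additional true negatives

net_sensitivity = tp2 / diseased              # 315 / 500 = 63%
net_specificity = (tn1 + tn2) / non_diseased  # 9310 / 9500 = 98%
print(f"Net sensitivity = {net_sensitivity:.0%}, net specificity = {net_specificity:.0%}")
```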
Simultaneous Screening
Worked example: 1,000 people are screened, of whom 200 have the disease and 800 do not. Test 1 has sensitivity 80% and specificity 60%; Test 2 (from the counts below) has sensitivity 90% and specificity 90%.
Among the 200 people with the disease: Test 1 is positive in 160, Test 2 is positive in 180, and 144 are positive on both tests.
Positive on at least one test = 160 + 180 - 144 = 196
Net sensitivity = 196 / 200 = 98%
Among the 800 people without the disease: Test 1 is negative in 480, Test 2 is negative in 720, and 432 are negative on both tests (48 are negative on Test 1 only, 288 on Test 2 only).
Net specificity = 432 / 800 = 54%
In simultaneous screening there is a gain in net sensitivity and a loss of net specificity.
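Assuming the two tests err independently, the net values above follow from the standard parallel-testing formulas; a short Python sketch using the test characteristics from the worked example:

```python
# Sketch of simultaneous (parallel) screening: a person is screen-positive
# if positive on either test. Independence of the two tests is assumed.

def net_sensitivity_parallel(se1, se2):
    # A diseased person is missed only if missed by both tests.
    return 1 - (1 - se1) * (1 - se2)

def net_specificity_parallel(sp1, sp2):
    # A non-diseased person is correctly negative only if negative on both tests.
    return sp1 * sp2

se1, sp1 = 0.80, 0.60  # Test 1 (from the worked example)
se2, sp2 = 0.90, 0.90  # Test 2 (from the worked example)
print(f"Net sensitivity = {net_sensitivity_parallel(se1, se2):.0%}")  # 98%
print(f"Net specificity = {net_specificity_parallel(sp1, sp2):.0%}")  # 54%
```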
In practice, the choice between sequential and simultaneous testing depends on factors such as length of hospital stay, cost, degree of invasiveness of the tests, and third-party insurance coverage.
Predictive Values
How good is the test at identifying people with the disease and people without the disease?
If the test is positive, what proportion of those people actually have the disease? This is the positive predictive value (PPV).
If the test is negative, what proportion of those people are actually disease-free? This is the negative predictive value (NPV).
              Gold standard +ve   Gold standard -ve   Total
Test +ve       90 (a)              30 (b)             120
Test -ve       10 (c)             170 (d)             180
Total         100                 200                 300

Positive predictive value (PPV) = a / (a + b) = 90 / 120 = 75%
Negative predictive value (NPV) = d / (c + d) = 170 / 180 = 94.4%
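A minimal Python sketch (not from the lecture) computing the predictive values from the same cell counts:

```python
# Predictive values from the 2x2 table above.
a, b, c, d = 90, 30, 10, 170   # cells as labeled above

ppv = a / (a + b)              # proportion of test-positives who have the disease
npv = d / (c + d)              # proportion of test-negatives who are disease-free
print(f"PPV = {ppv:.1%}")      # 75.0%
print(f"NPV = {npv:.1%}")      # 94.4%
```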
Effect of disease prevalence on positive predictive value (sensitivity and specificity held constant):

Prevalence (%)   Positive predictive value (%)
 0.1              1.8
 1.0             15.4
 5.0             48.6
50.0             94.7
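These PPV values can be reproduced with Bayes' theorem; the sketch below assumes a test with 90% sensitivity and 95% specificity (these parameters are not stated on the slide but are consistent with the table):

```python
# PPV as a function of prevalence, via Bayes' theorem.
# Sensitivity 90% and specificity 95% are assumed example values.

def ppv(prevalence, sensitivity=0.90, specificity=0.95):
    true_pos = prevalence * sensitivity            # P(diseased and test +)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(not diseased and test +)
    return true_pos / (true_pos + false_pos)

for prev in (0.001, 0.01, 0.05, 0.50):
    print(f"Prevalence {prev:.1%}: PPV = {ppv(prev):.1%}")
# Prevalence 0.1%: PPV = 1.8%
# Prevalence 1.0%: PPV = 15.4%
# Prevalence 5.0%: PPV = 48.6%
# Prevalence 50.0%: PPV = 94.7%
```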
Yield
Reliability
Reliability means the reproducibility (repeatability) of test results.
Sources of variation affecting reliability:
Intrasubject variation
Intraobserver variation
Interobserver variation
Interobserver variation can be quantified by percent agreement and by the kappa statistic.
Kappa = (percent agreement observed - percent agreement expected by chance) / (100% - percent agreement expected by chance)
Worked example result: kappa = 0.81.
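A short Python sketch of the kappa calculation for two observers rating the same subjects; the agreement counts in the example are hypothetical, not from the lecture:

```python
# Kappa statistic from a 2x2 agreement table between two observers.
# Cell naming: first label is observer 1's rating, second is observer 2's.

def kappa_2x2(pos_pos, pos_neg, neg_pos, neg_neg):
    n = pos_pos + pos_neg + neg_pos + neg_neg
    observed = (pos_pos + neg_neg) / n                  # percent agreement observed
    p1_pos = (pos_pos + pos_neg) / n                    # observer 1's positive rate
    p2_pos = (pos_pos + neg_pos) / n                    # observer 2's positive rate
    expected = p1_pos * p2_pos + (1 - p1_pos) * (1 - p2_pos)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical counts: two readers classify 100 slides.
print(f"kappa = {kappa_2x2(pos_pos=40, pos_neg=5, neg_pos=5, neg_neg=50):.2f}")  # ~0.80
```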