985 Comments
DR:

What do you think the rate of false positives is?

Thorsten:

I don't have a clue, really. First, the tests react to acidic pH, so if you don't precisely follow the instructions, you could trigger a positive result in the absence of any pathogen. Next, even if done correctly, just because the manufacturer has found no cross-reactivity in a lab doesn't mean there is none in a real-world setting. What you would need is a large number of positive samples from real people (a few hundred, maybe), then PCR confirmation, and then you could calculate a false positive rate. I don't know if such a study has been done.

DR:

The false positive rate for antigen tests seems to be very low (<1%).

CJ:

Wrong.

Common sense would suggest that a test with 99% specificity returns only about 1 false positive result in 100. But that is not how it works. The proportion of positives that are false is far higher when disease prevalence is as low as the studies have found. The PPV of screening testing is very low when background prevalence is low (Bokhorst et al. 2012; Skittrall et al. 2020; Dinnes et al. 2021).

If 1,000 people are tested at random in a population where 1% have the illness at issue, and the test is 99% specific, we get roughly one true positive and one false positive for every 100 tests. So testing 1,000 people yields about 10 true positives and 10 false positives.

Using BMJ's very own test accuracy calculator (based on available real-world background prevalence data and test accuracy data), take a conservative assumption of 1% pre-test probability of active infection (a higher level of active infection than was found in the large vaccine clinical trials), and assume the 58% sensitivity and 99% specificity found by the Cochrane meta-analysis of antigen test accuracy when used to test asymptomatic cases (Dinnes, J. et al. 2021). The result in this scenario is 50% false positives (one true positive for every one false positive), even with a 99% specificity test.

50% is the same as random chance. In other words, this 99% specificity test can do no better than a coin flip when declaring a positive result. Data that is no better than a coin flip is not data. It is random chance.
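The arithmetic above can be checked with a short Bayes' theorem sketch (illustrative only; the function name and exact figures plugged in are mine, taken from the numbers cited in this thread):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: the share of positive results that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Perfect sensitivity, 99% specificity, 1% prevalence: about 1 true per 1 false positive
print(f"{ppv(1.00, 0.99, 0.01):.2f}")  # ~0.50
# With the Cochrane 58% sensitivity figure, false positives outnumber true ones
print(f"{ppv(0.58, 0.99, 0.01):.2f}")  # ~0.37
```

So the "coin flip" figure corresponds to assuming perfect sensitivity; plugging in the lower Cochrane sensitivity pushes the PPV below 50%.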

It gets worse, because neither PCR nor antigen tests are close to a 99% specificity level in practice (Braunstein et al. 2021).

Stop spreading lies David.

DR:

The false positive rate is test dependent and is independent of actual prevalence. Maybe you're confusing it with the false discovery rate (the percentage of positive results that are actually negative), which is prevalence dependent.

Considering that real-world positivity rates were often below 0.5%, and false positives are a subset of all positives, the false positive rate must be <<0.5%.

If the positive rate is <<1%, widespread testing is inaccurate and a waste of time.
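The distinction DR draws can be made concrete with a small sketch (the confusion-matrix numbers are illustrative, chosen to match the 1,000-test scenario discussed above):

```python
def fpr_and_fdr(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """False positive rate (FP among the truly negative) vs.
    false discovery rate (FP among all positive results)."""
    fpr = fp / (fp + tn)
    fdr = fp / (fp + tp)
    return fpr, fdr

# 1,000 tests, 1% prevalence, 100% sensitivity, 99% specificity:
# 10 true positives, ~10 false positives, 980 true negatives, 0 false negatives
fpr, fdr = fpr_and_fdr(tp=10, fp=10, tn=980, fn=0)
print(f"FPR={fpr:.3f}  FDR={fdr:.3f}")  # FPR stays near 1%; FDR is 50% at this low prevalence
```

The same test has a fixed ~1% FPR regardless of prevalence, while the FDR balloons to 50% when prevalence is 1%, which is the quantity CJ's calculation actually describes.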

Igor Chudov:

Correct.

[Comment deleted, Jul 1, 2023]
DR:

Do you need help?
