Since the onset of the SARS-CoV-2 pandemic, companies have moved quickly to produce rapid antigen tests that can detect SARS-CoV-2 within minutes, compared with the hours to days required by gold-standard nucleic acid amplification testing (NAAT), such as PCR. These rapid antigen tests, designed like pregnancy tests, are relatively cheap, return results within minutes, and require no specialized equipment or personnel. However, antigen tests are inherently less sensitive: they do not amplify their protein signal, whereas NAAT can reliably detect just a few copies of the virus. These tests have been plagued by issues including false positives and false negatives. Some of these issues are inherent to the challenges of the test design; others result from the statistical nature of testing, which must account for disease prevalence.
Rapid antigen tests typically work by using antibody pairs. For example, a sample from a nasopharyngeal swab is applied to the test strip, where it encounters gold-conjugated antibodies that recognize one part of SARS-CoV-2, such as a specific target on the nucleocapsid protein (Np). If the sample contains SARS-CoV-2, the Np will be bound by the antibody-gold conjugate. This complex is then carried in solution along the strip until it reaches the strip-immobilized antibody (also known as the capture antibody), which recognizes a different region of the Np. The Np-antibody-gold complex binds tightly to the capture antibody and becomes visible as a colored line due to the accumulated gold nanoparticles.
Initially, the challenge in developing rapid antigen tests was finding the right antibody pairs: both antibodies must bind to different sites on a single protein, they must not interfere with each other, and ideally they should not cross-react with proteins from other coronaviruses.1
In addition to the specificity and sensitivity of a test, it is also important to consider disease prevalence. Consider an antigen test with 98% specificity. When disease prevalence is 10%, that test has roughly an 80% positive predictive value (PPV) and will return 2 false positive results for every 10 positive test results.2 In other words, for every 8 true cases the test reports 10 positives, identifying about 25% more infections than actually exist.
However, when the same test is used in a scenario of 1% prevalence, there will be roughly 2 false positives for every true positive, giving a PPV of about 33% (one in three). In the real world, most tests on the market have a sensitivity between 50% and 90%, so they detect fewer true positives and false positives outnumber true positives by an even wider margin in low-prevalence settings.
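The PPV figures above follow directly from Bayes' theorem. A minimal sketch of the calculation, assuming a hypothetical sensitivity of 90% (the article does not state one), with the 98% specificity and the two prevalence scenarios discussed:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: the probability that a positive
    result reflects a true infection, P(infected | test positive)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# 98% specificity, assumed 90% sensitivity, at 10% and 1% prevalence
for prevalence in (0.10, 0.01):
    print(f"prevalence {prevalence:.0%}: PPV = {ppv(0.90, 0.98, prevalence):.0%}")
```

With these assumed inputs the PPV drops from roughly 80% at 10% prevalence to about 30% at 1% prevalence, matching the pattern described above: the same test produces mostly true positives in a high-prevalence population and mostly false positives in a low-prevalence one.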
Rapid tests may not be suitable for broad screening of air and cruise-ship travelers or of attendees at concerts and sporting events. However, with careful design and with interpretation that accounts for disease prevalence, symptoms, and close contact with positive cases, these tests still have a role in controlling the spread of COVID-19 in regions of high prevalence and in settings where physical distancing cannot be maintained.