919 – Putting the PASC Score to the Test: Clinical vs Statistical Accuracy in Long COVID, 2025, Azola et al

forestglip

Moderator
Staff member
Now published, see post #9
---------


919 – Putting the PASC Score to the Test: Clinical vs Statistical Accuracy in Long COVID

Alba M. Azola, Leah H. Rubin, Rebecca E. Easter, Rebecca Veenhuis, Hannah Parker, Christina Della Penna, Holly Schultz, Isabel Santiuste

[Line breaks added]


Background
Long COVID (LC) is a mass disabling event affecting millions worldwide. Given the broad definitions and lack of biomarkers, there is an urgent need for diagnostic tools to identify those affected.

Here we aimed to validate the RECOVER Post-Acute Sequelae of SARS-CoV-2 Infection (PASC) Score in a cohort of SARS-CoV-2 infected patients with LC and fully recovered individuals while iteratively improving the tool’s sensitivity and specificity.

Methods
Participants included 100 LC patients from LC clinics in Baltimore, MD, between August 2023 and July 2024 who met the National Academy of Medicine (NAM) 2024 LC definition, and 18 SARS-CoV-2 infected but fully recovered individuals.

LC participants were required to have at least one neuropsychiatric symptom (e.g., brain fog). Exclusion criteria included history of psychosis, recent substance misuse (nicotine and cannabis excluded), and lack of English proficiency.

Participants completed comprehensive surveys and questionnaires assessing symptoms based on the methods of the PASC score publication.

Using the NAM 2024 LC definition as the ‘true’ condition, we compared evaluation metrics for the RECOVER PASC score cutoff (PASC Total > 12) as well as for the presence of individual symptoms, pairs, and triplets of symptoms. Evaluation metrics (e.g., sensitivity, specificity, F1) were calculated based on these classifications for the overall PASC score and symptom combinations.
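To make the evaluation procedure concrete, here is a minimal sketch (not the authors' code) of how such metrics could be computed for the cutoff and for symptom combinations, assuming a pandas DataFrame with a binary LC label column lc_nasem, a pasc_total column, and 0/1 symptom columns (all names hypothetical):

```python
# Minimal sketch, not the authors' code: evaluate the PASC > 12 cutoff and
# "any symptom in the set present" rules against the LC reference label.
# df is assumed to be a pandas DataFrame; column names are hypothetical.
from itertools import combinations

def metrics(y_true, y_pred):
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "PPV": tp / (tp + fp) if tp + fp else float("nan"),
        "NPV": tn / (tn + fn) if tn + fn else float("nan"),
        "F1": 2 * tp / (2 * tp + fp + fn) if tp else 0.0,
    }

def evaluate(df, symptom_cols):
    results = {}
    # RECOVER cutoff: classify as LC when the total PASC score exceeds 12
    results[("PASC total > 12",)] = metrics(df["lc_nasem"], (df["pasc_total"] > 12).astype(int))
    # Single symptoms, pairs, and triplets: classify as LC when any symptom in the set is present
    for k in (1, 2, 3):
        for combo in combinations(symptom_cols, k):
            pred = (df[list(combo)].sum(axis=1) > 0).astype(int)
            results[combo] = metrics(df["lc_nasem"], pred)
    return results
```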

Results
The LC cohort (n=100) had a mean age of 47.7 years and was predominantly female (73%), White (78%), and well-educated (76% >16 years). Controls (n=18) had similar demographic characteristics.

LC diagnosis and PASC scores were significantly associated (χ² = 44.72, P < 0.001). The PASC score showed excellent specificity (100%) and positive predictive value (PPV; 100%) albeit limited sensitivity (80%), missing approximately 20% of the patients with LC. The negative predictive value (NPV) was 47.37%, indicating that only 47% of those who tested negative via the PASC score did not have LC.

When examining whether combinations of symptoms performed better than the total PASC score cutoff, we found that the presence of loss of smell/taste, post-exertional malaise, or brain fog demonstrated 93% sensitivity, 100% specificity and PPV, 72% NPV, and an F1 score of 0.964.
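For readers who want to check the arithmetic: the NPV and F1 figures quoted above follow directly from the reported sensitivity/specificity and the group sizes (100 LC patients, 18 recovered controls). An illustrative back-of-envelope check, not from the paper:

```python
# Back-of-envelope check of the reported figures, reconstructed from
# the stated sensitivity/specificity and group sizes (100 LC, 18 recovered).

# PASC total > 12 cutoff: 80% sensitivity, 100% specificity
tp, fn, tn, fp = 80, 20, 18, 0
print(f"NPV: {tn / (tn + fn):.4f}")               # 0.4737 -> the reported 47.37%

# Loss of smell/taste OR post-exertional malaise OR brain fog:
# 93% sensitivity, 100% specificity/PPV
tp, fn, tn, fp = 93, 7, 18, 0
print(f"NPV: {tn / (tn + fn):.2f}")               # 0.72 -> the reported 72%
print(f"F1:  {2 * tp / (2 * tp + fp + fn):.3f}")  # 0.964 -> the reported F1
```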

Conclusions
Validation of the RECOVER PASC supports its utility and highlights the need for ongoing refinement of the LC definition. We call for national efforts to create and validate a readily implementable clinical tool for LC diagnosis.

Link (Conference on Retroviruses and Opportunistic Infections) [Abstract Only]
 
Just to illustrate its real world relevance for anyone who might not know: if you're a patient who gets told "No, you don't have long COVID based on this score", the negative predictive value (NPV) tells you the chance that you really don't have long COVID.

But that percentage only makes sense if it's based on the real prevalence of long COVID in the population, which this study's NPV is not (see the sketch after the quote below).

As Wikipedia says:
Note that the positive and negative predictive values can only be estimated using data from a cross-sectional study or other population-based study in which valid prevalence estimates may be obtained. In contrast, the sensitivity and specificity can be estimated from case-control studies.
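To make that concrete, here is a small sketch of how NPV shifts with prevalence when the reported sensitivity (80%) and specificity (100%) are held fixed. The 5% population prevalence used below is purely an assumed, illustrative number, not an estimate from the study:

```python
# Illustrative only: NPV recomputed for different prevalences, holding the
# study's reported sensitivity (80%) and specificity (100%) fixed.
sens, spec = 0.80, 1.00

def npv(prevalence):
    # P(no LC | negative score) via Bayes' theorem
    true_neg = spec * (1 - prevalence)
    false_neg = (1 - sens) * prevalence
    return true_neg / (true_neg + false_neg)

print(f"{npv(100 / 118):.2f}")  # ~0.47: the sample's LC fraction reproduces the reported NPV
print(f"{npv(0.05):.2f}")       # ~0.99: at an assumed 5% prevalence, a negative score means something very different
```

Whether 5% is anywhere near the real prevalence is exactly the open question; the point is only that the 47% NPV reported here is a property of this sample's case mix, not of the test itself.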
 
Putting the PASC Score to the Test: Clinical vs. Statistical Accuracy in Long COVID Diagnosis

Azola, Alba; Dastgheyb, Raha M.; Easter, Rebecca; Parker, Hannah; Della Penna, Christina; Santiuste, Isabel; Schultz, Holly; Ehrenspeck, Ana; Veenhuis, Rebecca; Rubin, Leah H.

Objective
To validate the RECOVER Post-Acute Sequelae of SARS-CoV-2 infection (PASC) score in a cohort of patients who develop long COVID (LC) or fully recover while iteratively improving the tool’s sensitivity and specificity.

Methods
A cross-sectional study in 130 LC patients followed at LC clinics in Baltimore, MD, USA, who met the National Academies of Sciences, Engineering, and Medicine (NASEM) 2024 LC definition, and 60 SARS-CoV-2 exposed but fully recovered individuals.

LC participants were required to have at least one neuropsychiatric symptom. Participants completed comprehensive surveys and questionnaires assessing symptoms based on published methods to determine PASC score.

Using the NASEM 2024 LC definition as the “true” condition, we compared evaluation metrics for the RECOVER PASC score cutoff (PASC > 12) and the presence of individual/multiple symptoms. Evaluation metrics (e.g., sensitivity, specificity, F1) were calculated based on these classifications for the overall PASC score and symptom combinations.

Results
The LC cohort (n = 130) had a mean age of 47.2 years and was predominantly female (72%), White (79%), and well-educated (77% > 16 years). Controls (n = 60) were similar demographically.

LC diagnosis and PASC scores were significantly associated (χ² = 102.99, P < 0.001). The PASC score showed excellent specificity (100%) and positive predictive value (PPV; 100%) albeit limited sensitivity (80%), missing 20% of participants with LC.

We found that loss of smell/taste, post-exertional malaise, or lack of sexual desire or capacity demonstrated 94% sensitivity, 92% specificity, 96% PPV, 87% NPV, and an F1 score of 0.949.

Conclusion
Validation of the RECOVER PASC supports its utility and highlights the need for ongoing refinement of the LC definition. We call for national efforts to develop readily implementable clinical tools for LC diagnosis.

Web | DOI | PDF | Journal of General Internal Medicine
 
What do you expect to happen when you recruit two different groups, one based on positively having symptoms and one based on positively not having symptoms?

Of course a questionnaire asking about symptoms will do a pretty good job at differentiating them. That doesn’t mean the questionnaire is any good or has any practical value.
 