Global prevalence of long COVID and its most common symptoms among healthcare workers: a systematic review and meta-analysis, 2025, Al-Oraibi et al

forestglip

Global prevalence of long COVID and its most common symptoms among healthcare workers: a systematic review and meta-analysis

Amani Al-Oraibi, Katherine Woolf, Jatin Naidu, Laura B Nellums, Daniel Pan, Shirley Sze, Carolyn Tarrant, Christopher A Martin, Mayuri Gogoi, Joshua Nazareth, Pip Divall, Brendan Dempsey, Danielle Lamb, Manish Pareek

Objectives
Long COVID, a condition where symptoms persist after the acute phase of COVID-19, is a significant concern for healthcare workers (HCWs) due to their higher risk of infection. However, there is limited knowledge regarding the prevalence, symptoms and clustering of long COVID in HCWs. We aimed to estimate the pooled prevalence and identify the most common symptoms of long COVID among HCWs who were infected with SARS-CoV-2 virus globally, and investigate any differences by geographical region and other factors.

Design
Systematic review and meta-analysis (PROSPERO CRD42022312781).

Data sources
We searched MEDLINE, CINAHL, EMBASE, PsycINFO and the grey literature from 31 December 2019 until 18 February 2022.

Eligibility criteria
We included studies reporting primary data on long COVID prevalence and symptoms in adult HCWs who had SARS-CoV-2 infection.

Data extraction and synthesis
Methodological quality was assessed using the Joanna Briggs Institute checklist. Meta-analysis was performed for prevalence data of long COVID following SARS-CoV-2 infection.

Results
Out of 5737 articles, 28 met the inclusion criteria, with a combined sample size of 6 481 HCWs. 15 articles scored equal to or above the median score for methodological quality. The pooled prevalence of long COVID among HCWs who had SARS-CoV-2 infection was 40% (95% CI: 29% to 51%, I2: 97.2%; 12 studies), with a mean follow-up period of 22 weeks. The most prevalent symptoms reported were fatigue (35%), neurologic symptoms (25%), loss/decrease of smell and/or taste (25%), myalgia (22%) and shortness of breath (19%).

Conclusion
This review highlights the substantial burden of long COVID among HCWs worldwide. However, limitations in data quality and inconsistent definitions of long COVID impact the generalisability of these findings. To improve future interventions, we recommend enhanced cohort study designs for better characterisation of long COVID prevalence and symptoms in HCWs.

Link | PDF (BMJ Public Health) [Open Access]
 
That’s more than three years before the publication of this review. Is it normal to have such an early cut-off date?
Since they're hypothesizing that both increased exposure to COVID and extreme work stressors among healthcare workers are risk factors for LC, I suspect they cut it off in 2022 because they wanted to focus on periods of highest hospitalization rates. I'm not sure why they cut it off in Feb though if Omicron was still raging by then.

That’s kind of ridiculous it took that long to get published.
It's rough but not unheard of, I know other papers that took that long despite being good papers. I suspect they got a lot of reviewer pushback on inconsistency in LC definitions that they note in the conclusion and had to substantially rework the paper.
 
I appreciate the insight but I’m still confused.
Since they're hypothesizing that both increased exposure to COVID and extreme work stressors among healthcare workers are risk factors for LC, I suspect they cut it off in 2022 because they wanted to focus on periods of highest hospitalization rates. I'm not sure why they cut it off in Feb though if Omicron was still raging by then.
It was publicly funded as well, so someone presumably wanted the results for something.
Funding: We would like to thank NHS Race and Health Observatory for funding this work. Grant number [2122-59].
If they started the work in Feb 21, I don’t understand how it could have taken them more than a year to do it. They screened over 2000 abstracts, but that’s about a month of work for one person if you assume 5 minutes per abstract. If you spend the same time checking for duplicates before that (5000 articles), you’re still only at about four months of work for one person.
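As a sketch of that back-of-envelope estimate (the 5 minutes per record, 8-hour day and 21 working days per month are assumptions from the post, not figures from the paper):

```python
# Back-of-envelope estimate of screening effort for one person.
# All rates here are assumptions for illustration, not from the review.
def person_months(records, minutes_each=5, hours_per_day=8, days_per_month=21):
    """Convert a record count into person-months at a fixed pace."""
    hours = records * minutes_each / 60
    return hours / hours_per_day / days_per_month

screening = person_months(2000)  # ~2000 abstracts screened
dedup = person_months(5000)      # ~5000 records checked for duplicates
print(round(screening, 1), round(dedup, 1), round(screening + dedup, 1))
# prints: 1.0 2.5 3.5
```

So under those assumptions the whole screening stage comes out at roughly three and a half person-months, consistent with the "about four months" figure above.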

They were a team of at least six.
AA-O, KW, MP, LBN and CT designed the review. AA-O wrote the first draft with input from all authors. JaN, DP, SS, CAM, MG and JoN supported the article selection process, including screening, data extraction and quality assessment. PD supported the search strategy for conducting the review.
It's rough but not unheard of, I know other papers that took that long despite being good papers. I suspect they got a lot of reviewer pushback on inconsistency in LC definitions that they note in the conclusion and had to substantially rework the paper.
But wouldn’t that be something they could have anticipated? Assessing the homogeneity of the data is a key part of any review. If you fail at that, you have really not done your job.
 
If they started the work in Feb 21, I don’t understand how it could have taken them more than a year to do it. They screened over 2000 abstracts, but that’s about a month of work for one person if you assume 5 minutes per abstract. If you spend the same time checking for duplicates before that (5000 articles), you’re still only at about four months of work for one person.
Most researchers, especially the ones doing work that doesn't need to be done in certain time constraints in a wet lab, are often working on multiple big projects at the same time. It's probably a situation where it was the third or fourth priority project in a really busy year, so they just couldn't get around to it.

It's taken me like 9 months to do a project that was probably a couple dozen hours of work time because I just had other higher priority things. Also the (vast) majority of authors on a paper aren't doing anything for the primary analysis, they're just providing occasional opinions. 90% of the work tends to be one person on a smaller paper [Edit: even though there was support for screening and data extraction]

But wouldn’t that be something they could have anticipated? Assessing the homogeneity of the data is a key part of any review. If you fail at that, you have really not done your job.
Ideally, yes. I'm just providing my best guess at what held them up; it's possible it was another issue altogether, or that they did try their best to get homogeneous data but a particularly stringent set of reviewers could not be satisfied.

I've personally ended up in a situation where the person giving critique simply did not understand even the basics of my project despite my many, many attempts to explain them, and therefore kept forcing me to redo the analysis when it wasn't necessary. There are some safeguards for getting out of a situation like that, but often they cause even more of a delay.

Also, doing the revisions is yet another time demand that might not have been immediately possible if the first author had other pressing priorities. I also know situations where the first author pushed a paper out the door as soon as they were leaving a position, so by the time the review process came around, they were at a completely different job and were using their limited free time to put the paper through review.

A long review process doesn't necessarily mean that the paper was bad to begin with.
 
Most researchers, especially the ones doing work that doesn't need to be done in certain time constraints in a wet lab, are often working on multiple big projects at the same time. It's probably a situation where it was the third or fourth priority project in a really busy year, so they just couldn't get around to it.

It's taken me like 9 months to do a project that was probably a couple dozen hours of work time because I just had other higher priority things. Also the majority of authors on a paper aren't doing anything for the primary analysis, they're just providing occasional opinions.
Thank you for explaining how that works. It’s completely different to what I’m used to as a consultant. There are strict deadlines and you pretty much have to meet them, with very few exceptions. So the entire operation is built around resource management and scheduling (with the occasional overtime). I keep forgetting that that’s not how organisations usually operate, and it’s part of why there’s any work for us at all.

I’m starting to think that this whole peer review process isn’t a very good idea after all...
 
Thank you for explaining how that works. It’s completely different to what I’m used to as a consultant. There are strict deadlines and you pretty much have to meet them, with very few exceptions. So the entire operation is built around resource management and scheduling (with the occasional overtime). I keep forgetting that that’s not how organisations usually operate, and it’s part of why there’s any work for us at all.

I’m starting to think that this whole peer review process isn’t a very good idea after all...
Different labs work in different ways, some will run a very tight ship like what you describe. Either way, the one consistent feature for all grad students, post docs, and lab techs is being asked to do way too much at all times.

I think peer review is overall a good thing for science, but the logistics of it could definitely use an overhaul.
 
Different labs work in different ways, some will run a very tight ship like what you describe. Either way, the one consistent feature for all grad students, post docs, and lab techs is being asked to do way too much at all times.
Sounds familiar!
I think peer review is overall a good thing for science, but the logistics of it could definitely use an overhaul.
To avoid completely derailing the thread, I’ll link this recent one about evidence-based medicine, although I have not gotten around to writing down my thoughts yet.
https://www.s4me.info/threads/evidence-based-medicine.43739/
 
Since they're hypothesizing that both increased exposure to COVID and extreme work stressors among healthcare workers are risk factors for LC, I suspect they cut it off in 2022 because they wanted to focus on periods of highest hospitalization rates. I'm not sure why they cut it off in Feb though if Omicron was still raging by then.
If that was the case, they could have gathered studies from outside the periods of highest hospitalization rates as a sort of control. Without them saying why, it's easy to find reasons.
It's rough but not unheard of, I know other papers that took that long despite being good papers.
One of mine took nine months to even find a set of peer reviewers (but then we also got three instead of the two the journal usually had).

@Utsikt There are tools that can be used to look for duplicates. I should hope they used something like that; then it doesn't take that long at all (there can of course be some errors depending on how the tool works, since publication details can vary a bit from place to place). I guess there are tools for screening abstracts too. Last time I did it, I wrote a simple R script for collecting data from PubMed and downloaded all the relevant information (to me) into a searchable table. Worked a charm :thumbup:
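The kind of title-based duplicate check those tools automate can be sketched in a few lines (Python here rather than R; the records below are made up for illustration, not taken from the review):

```python
import re

def normalise(title):
    # Lowercase and strip punctuation/whitespace so small formatting
    # differences between databases don't hide a duplicate
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records):
    """Keep the first record seen for each normalised title."""
    seen = set()
    unique = []
    for rec in records:
        key = normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical example: the same study indexed by two databases
records = [
    {"title": "Long COVID in healthcare workers: a review", "source": "MEDLINE"},
    {"title": "Long COVID in Healthcare Workers: A Review.", "source": "EMBASE"},
    {"title": "Another unrelated study", "source": "CINAHL"},
]
print(len(deduplicate(records)))  # prints: 2
```

Real reference managers use fuzzier matching (author, year, DOI), which is where the errors the post mentions creep in when publication details differ between databases.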
 