
Final version accepted 12 January 2021

Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease.

Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group.

Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020.

Study selection Studies that developed or validated a multivariable covid-19 related prediction model.

Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).

Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models.

Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available online. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.

Systematic review registration Protocol and registration details are available online.

Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements. When citing this paper please consider adding the update number and date of access for clarity.


The novel coronavirus disease 2019 (covid-19) presents an important and urgent threat to global health. Since the outbreak in early December 2019 in the Hubei province of the People’s Republic of China, the number of patients confirmed to have the disease has exceeded 47 million as the disease spread globally, and the number of people infected is probably much higher. More than 1.2 million people have died from covid-19 (up to 3 November 2020).1 Despite public health responses aimed at containing the disease and delaying the spread, several countries have been confronted with a critical care crisis, and more countries could follow.234 Outbreaks lead to important increases in the demand for hospital beds and shortage of medical equipment, while medical staff themselves can also become infected. Several regions have had or are experiencing second waves, and despite improvements in testing and tracing, several regions are again facing the limits of their test capacity, hospital resources and healthcare staff.56

To mitigate the burden on the healthcare system, while also providing the best possible care for patients, efficient diagnosis and information on the prognosis of the disease are needed. Prediction models that combine several variables or features to estimate the risk of people being infected or experiencing a poor outcome from the infection could assist medical staff in triaging patients when allocating limited healthcare resources. Models ranging from rule based scoring systems to advanced machine learning models (deep learning) have been proposed and published in response to a call to share relevant covid-19 research findings rapidly and openly to inform the public health response and help save lives.7

We aimed to systematically review and critically appraise all currently available prediction models for covid-19, in particular models to predict the risk of covid-19 infection or being admitted to hospital with the disease, models to predict the presence of covid-19 in patients with suspected infection, and models to predict the prognosis or course of infection in patients with covid-19. We included model development and external validation studies. This living systematic review, with periodic updates, is being conducted by the international COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group, in collaboration with the Cochrane Prognosis Methods Group.


We searched the publicly available, continuously updated publication list of the covid-19 living systematic review.8 We validated whether the list is fit for purpose (online supplementary material) and further supplemented it with studies on covid-19 retrieved from arXiv. The online supplementary material presents the search strings. We included studies if they developed or validated a multivariable model or scoring system, based on individual participant level data, to predict any covid-19 related outcome. These models included three types of prediction models: diagnostic models to predict the presence or severity of covid-19 in patients with suspected infection; prognostic models to predict the course of infection in patients with covid-19; and prediction models to identify people in the general population at risk of covid-19 infection or at risk of being admitted to hospital with the disease.

We searched the database repeatedly up to 1 July 2020 (supplementary table 1). As of the third update (search date 1 July), we only include peer reviewed articles (indexed in PubMed and Embase through Ovid). Preprints (from bioRxiv, medRxiv, and arXiv) that were already included in previous updates of the systematic review remain included in the analysis. Reassessment takes place after publication of a preprint in a peer reviewed journal. No restrictions were made on the setting (eg, inpatients, outpatients, or general population), prediction horizon (how far ahead the model predicts), included predictors, or outcomes. Epidemiological studies that aimed to model disease transmission or fatality rates, diagnostic test accuracy, and predictor finding studies were excluded. We focus on studies published in English. Starting with the second update, retrieved records were initially screened by a text analysis tool developed using artificial intelligence to prioritise sensitivity (supplementary material). Titles, abstracts, and full texts were screened for eligibility in duplicate by independent reviewers (pairs from LW, BVC, MvS) using EPPI-Reviewer,9 and discrepancies were resolved through discussion.

Data extraction of included articles was done by two independent reviewers (from LW, BVC, GSC, TPAD, MCH, GH, KGMM, RDR, ES, LJMS, EWS, KIES, CW, AL, JM, TT, JAAD, KL, JBR, LH, CS, MS, MCH, NS, NK, SMJvK, JCS, PD, CLAN, RW, GPM, IT, JYV, DLD, JW, FSvR, PH, VMTdJ, BCTvB, ICCvdH, DJM, MK, and MvS). Reviewers used a standardised data extraction form based on the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist10 and PROBAST (prediction model risk of bias assessment tool)11 for assessing the reported prediction models. We sought to extract each model’s predictive performance by using whatever measures were presented. These measures included any summaries of discrimination (the extent to which predicted risks discriminate between participants with and without the outcome) and calibration (the extent to which predicted risks correspond to observed risks), as recommended in the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) statement.12 Discrimination is often quantified by the C index (C index=1 if the model discriminates perfectly; C index=0.5 if discrimination is no better than chance). Calibration is often quantified by the calibration intercept (which is zero when the risks are not systematically overestimated or underestimated) and the calibration slope (which is one if the predicted risks are not too extreme or too moderate).13 We focused on performance statistics as estimated from the strongest available form of validation (in order of strength: external (evaluation in an independent database), internal (bootstrap validation, cross validation, random training test splits, temporal splits), apparent (evaluation by using exactly the same data used for development)). Any discrepancies in data extraction were discussed between reviewers, and remaining conflicts were resolved by LW or MvS.
The online supplementary material provides details on data extraction. Some studies investigated multiple models and some models were investigated in multiple studies (that is, in external validation studies). The unit of analysis was a model within a study, unless stated otherwise. We considered aspects of PRISMA (preferred reporting items for systematic reviews and meta-analyses)14 and TRIPOD12 in reporting our study. Details on all reviewed studies and prediction models are publicly available online.

Patient and public involvement

It was not possible to involve patients or the public in the design, conduct, or reporting of our research. A lay summary of the project’s aims is available online. The study protocol and preliminary results are publicly available on medRxiv.


We retrieved 37 412 titles through our systematic search (of which 23 203 were included in the present update; supplementary table 1, fig 1). We included a further nine studies that were publicly available but were not detected by our search. Of 37 421 titles, 444 studies were retained for abstract and full text screening (of which 169 are included in the present update). One hundred sixty nine studies describing 232 prediction models met the inclusion criteria (of which 62 studies and 87 models were added in the present update; supplementary table 1).15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 These studies were selected for data extraction and critical appraisal. The unit of analysis was the model within a study: of these 232 models, 208 were unique, newly developed models for covid-19. The remaining 24 analyses were external validations of existing models (in a study other than the model development study). Some models were validated more than once (in different studies, as described below). Many models are publicly available (box 1). A database with the description of each model and its risk of bias assessment is publicly available online.

Fig 1

PRISMA (preferred reporting items for systematic reviews and meta-analyses) flowchart of study inclusions and exclusions

Box 1

Availability of models in format for use in clinical practice

Two hundred and eight unique models were developed in the included studies. Thirty (14%) of these models were presented as a model equation including intercept and regression coefficients. Eight (4%) models were only partially presented (eg, intercept or baseline hazard were missing). The remaining 170 (82%) models did not provide the underlying model equation.

Seventy two models (35%) are available as a tool for use in clinical practice (in addition to or instead of a published equation). Twenty seven models were presented as a web calculator (13%), 12 as a sum score (6%), 11 as a nomogram (5%), 8 as a software object (4%), 5 as a decision tree or set of predictions for subgroups (2%), 3 as a chart score (1%), and 6 in other usable formats (3%).

All these presentation formats make predictions readily available for use in the clinic. However, because all models were at high or uncertain risk of bias, we do not recommend their routine use before they are externally validated, ideally by independent investigators.


Primary datasets

One hundred seventy four (75%) models used data from a single country (table 1), 42 (18%) models used international data, and for 16 (7%) models it was unclear how many (and which) countries contributed data. Two (1%) models used simulated data and 12 (5%) used proxy data to estimate covid-19 related risks (eg, Medicare claims data from 2015 to 2016). Most models were intended for use in confirmed covid-19 cases (47%) and a hospital setting (51%). The average patient age ranged from 39 to 71 years, and the proportion of men ranged from 35% to 75%, although this information was often not reported. One study developed a prediction model for use in paediatric patients.27

Table 1

Characteristics of reviewed prediction models for diagnosis and prognosis of coronavirus disease 2019 (covid-19)

Based on the studies that reported study dates, data were collected from December 2019 to June 2020. Some centres provided data to multiple studies, and several studies used open GitHub184 or Kaggle185 data repositories (version or date of access often unspecified), so it was unclear how much these datasets overlapped across our identified studies.

Among the diagnostic model studies, the reported prevalence of covid-19 varied between 7% and 71% (if a cross sectional or cohort design was used). Because 75 diagnostic studies used either case-control sampling or an unclear method of data collection, the prevalence in these diagnostic studies might not be representative of their target population.

Among the studies that developed prognostic models to predict mortality risk in people with confirmed or suspected infection, the percentage of deaths ranged from 1% to 52%. This wide variation is partly because of substantial sampling bias caused by studies excluding participants who still had the disease at the end of the study period (that is, they had neither recovered nor died). Additionally, length of follow-up varied between studies (but was often not reported), and there is likely to be local and temporal variation in how people were diagnosed as having covid-19 or were admitted to the hospital (and therefore recruited for the studies).

Models to predict risk of covid-19 in the general population

We identified seven models that predicted risk of covid-19 in the general population. Three models from one study used hospital admission for non-tuberculosis pneumonia, influenza, acute bronchitis, or upper respiratory tract infections as proxy outcomes in a dataset without any patients with covid-19.16 Among the predictors were age, sex, previous hospital admission, comorbidities, and social determinants of health. The study reported C indices of 0.73, 0.81, and 0.81. A fourth model used deep learning on thermal videos from the faces of people wearing facemasks to determine abnormal breathing (not covid related) with a reported sensitivity of 80%.92 A fifth model used demographics, symptoms, and contact history in a mobile app to assist general practitioners in collecting data and to risk-stratify patients. It was contrasted with two further models that included additional blood values and blood values plus computed tomography (CT) images. The authors reported a C index of 0.71 with demographics only, which rose to 0.97 and 0.99 as blood values and imaging characteristics were added.151 Calibration was not assessed in any of the general population models.

Diagnostic models to detect covid-19 in patients with suspected infection

We identified 33 multivariable models to distinguish between patients with and without covid-19. Most models targeted patients with suspected covid-19. Reported C index values ranged between 0.65 and 0.99. Calibration was assessed for seven models using calibration plots (including two at external validation), with mixed results. The most frequently included predictors (≥10 times) were vital signs (eg, temperature, heart rate, respiratory rate, oxygen saturation, blood pressure), flu-like signs and symptoms (eg, shiver, fatigue), age, electrolytes, image features (eg, pneumonia signs on CT scan), contact with individuals with confirmed covid-19, lymphocyte count, neutrophil count, cough or sputum, sex, leukocytes, liver enzymes, and red cell distribution width.

Ten studies aimed to diagnose severe disease in patients with covid-19: nine in adults, with reported C indices between 0.80 and 0.99, and one in children that reported perfect classification of severe disease.27 Calibration was not assessed in any of the models. Predictors of severe covid-19 used more than once were comorbidities, liver enzymes, C reactive protein, imaging features, lymphocyte count, and neutrophil count.

Seventy five prediction models were proposed to support the diagnosis of covid-19 or covid-19 pneumonia (and some also to monitor progression) based on images. Most studies used CT images or chest radiographs. Others used spectrograms of cough sounds55 and lung ultrasound.75 The predictive performance varied considerably, with reported C index values ranging from 0.70 to more than 0.99. Only one model based on imaging was evaluated by use of a calibration plot, and it appeared to be well calibrated at external validation.186

Prognostic models for patients with diagnosis of covid-19

We identified 107 prognostic models for patients with a diagnosis of covid-19. The intended use of these models (that is, when to use them, and for whom) was often not clearly described. Prediction horizons varied between one and 37 days, but were often unspecified.

Of these models, 39 estimated mortality risk and 28 aimed to predict progression to a severe or critical disease. The remaining studies used other outcomes (single or as part of a composite), including recovery, length of hospital stay, intensive care unit admission, intubation, (duration of) mechanical ventilation, acute respiratory distress syndrome, cardiac injury, and thrombotic complications. One study used data from 2015 to 2019 to predict mortality and prolonged assisted mechanical ventilation (as a non-covid-19 proxy outcome).115 The most frequently used categories of prognostic factors (for any outcome, included at least 20 times) were age, comorbidities, vital signs, image features, sex, lymphocyte count, and C reactive protein.

Studies that predicted mortality reported C indices between 0.68 and 0.98. Four studies also presented calibration plots (including at external validation for three models), all indicating miscalibration15 69 118 or showing plots for integer scores without clearly explaining how these were translated into predicted risks.143 The studies that developed models to predict progression to a severe or critical disease reported C indices between 0.58 and 0.99. Five of these models were also evaluated with calibration plots, two of them at external validation; although calibration appeared good, the plots were constructed in an unclear way.85 121 Reported C indices for other outcomes varied between 0.54 (admission to intensive care) and 0.99 (severe symptoms three days after admission), and five models had calibration plots (of which three at external validation), with mixed results.

Risk of bias

All models were at high (n=226, 97%) or unclear (n=6, 3%) risk of bias according to assessment with PROBAST, which suggests that their predictive performance when used in practice is probably lower than that reported (fig 2). Therefore, we have cause for concern that the predictions of the proposed models are unreliable when applied in other people. Figure 2 and box 2 give details on common causes of risk of bias for each type of model.

Fig 2

PROBAST (prediction model risk of bias assessment tool) risk of bias for all included models combined (n=232) and broken down per type of model

Box 2

Common causes of risk of bias in the reported prediction models

Models to predict coronavirus disease 2019 (covid-19) risk in general population

All of these models had unclear or high risk of bias for the participant, outcome, and analysis domains. All were based on proxy outcomes to predict covid-19 related risks, such as presence of or hospital admission due to severe respiratory disease, in the absence of data from patients with covid-19.16 92 151

Diagnostic models

Ten models (30%) used inappropriate data sources (eg, due to a non-nested case-control design), nine (27%) used inappropriate inclusion or exclusion criteria such that the study data was not representative of the target population, and eight (24%) selected controls that were not representative of the target population for a diagnostic model (eg, controls for a screening model had viral pneumonia). Other frequent problems were dichotomisation of predictors (nine models, 27%), and tests used to determine the outcome (eight models, 24%) or predictor definitions or measurement procedures (seven models, 21%) that varied between participants.

Diagnostic models for severity classification

Two models (20%) used predictor data that was assessed while the severity (the outcome) was known. Other concerns include non-standard or lack of a prespecified outcome definition (two models, 20%), predictor measurements (eg, fever) being part of the outcome definition (two models, 20%) and outcomes being assessed with knowledge of predictor measurements (two models, 20%).

Diagnostic models based on medical imaging

Generally, studies did not clearly report which patients had imaging during clinical routine. Fifty five (73%) used an inappropriate or unclear study design to collect data (eg, a non-nested case-control design). It was often unclear (39 models, 52%) whether the controls were selected from the target population (that is, patients with suspected covid-19). Outcome definitions were often not defined or determined in the same way in all participants (18 models, 24%). Diagnostic model studies that used medical images as predictors were all scored as unclear on the predictor domain. These publications often lacked clear information on the preprocessing steps (eg, cropping of images). Moreover, complex machine learning algorithms transform images into predictors in an opaque way, which makes it challenging to fully apply the PROBAST predictors section to such imaging studies. However, a more favourable assessment of the predictor domain would not lead to a better overall judgment of risk of bias for the included models. Careful descriptions of model specification and subsequent estimation were frequently lacking, challenging the transparency and reproducibility of the models. Studies used different deep learning architectures, some established and others specifically designed, without benchmarking the chosen architecture against alternatives.

Prognostic models

Dichotomisation of predictors was a frequent concern (22 models, 21%). Other problems include inappropriate inclusions or exclusions of study participants (18 models, 17%). Study participants were often excluded because they did not develop the outcome at the end of the study period but were still in follow-up (that is, they were in hospital but had not recovered or died), yielding a selected study sample (12 models, 11%). Additionally, many models (16 models, 15%) did not account for censoring or competing risks.


Ninety eight models (42%) had a high risk of bias for the participants domain, which indicates that the participants enrolled in the studies might not be representative of the models’ targeted populations. Unclear reporting on the inclusion of participants led to an unclear risk of bias assessment in 58 models (25%), and 76 (33%) had a low risk of bias for the participants domain. Fifteen models (6%) had a high risk of bias for the predictor domain, which indicates that predictors were not available at the models’ intended time of use, not clearly defined, or influenced by the outcome measurement. One hundred and thirty five (58%) models were rated unclear and 82 (35%) rated at low risk of bias for the predictor domain. Most studies used outcomes that are easy to assess (eg, death, presence of covid-19 by laboratory confirmation), and hence 95 (41%) were rated at low risk of bias. Nonetheless, there was cause for concern about bias induced by the outcome measurement in 50 models (22%), for example, due to the use of subjective or proxy outcomes (eg, non-covid-19 severe respiratory infections). Eighty seven models (38%) had an unclear risk of bias due to opaque or ambiguous reporting. Two hundred and eighteen (94%) models were at high risk of bias for the analysis domain. The reporting was insufficiently clear to assess risk of bias in the analysis in 13 studies (6%). Only one model had a low risk of bias for the analysis domain (<1%). Twenty nine (13%) models had low risk of bias on all domains except analysis, indicating adequate data collection and study design, but issues that could have been avoided by conducting a better statistical analysis. Many studies had small to modest sample sizes (table 1), which led to an increased risk of overfitting, particularly if complex modelling strategies were used. In addition, 50 models (22%) were neither internally nor externally validated. 
Performance statistics calculated on the development data of these models are probably optimistic. Calibration was assessed with calibration plots for only 22 models (10%), of which 11 used external validation data.
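The optimism that arises from evaluating a model on its own development data can be reproduced in miniature. This sketch (synthetic data only, illustrative of the mechanism rather than of any reviewed model) fits a logistic regression with many candidate predictors on a small development set: the apparent C index is far above what the same model achieves in new data from the same population.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_dev, n_new, p = 60, 10_000, 20               # small development set, many predictors
X = rng.normal(size=(n_dev + n_new, p))
beta = np.zeros(p)
beta[:3] = 0.5                                  # only three predictors carry real signal
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

model = LogisticRegression(max_iter=1000).fit(X[:n_dev], y[:n_dev])

# Apparent performance: evaluated on the same data used to fit the model
auc_apparent = roc_auc_score(y[:n_dev], model.predict_proba(X[:n_dev])[:, 1])
# Performance in new data drawn from the same population
auc_new = roc_auc_score(y[n_dev:], model.predict_proba(X[n_dev:])[:, 1])

print(f"apparent C index:    {auc_apparent:.2f}")
print(f"C index in new data: {auc_new:.2f}")
```

Internal validation techniques such as bootstrapping or cross validation estimate and correct for exactly this gap, which is why models validated only "apparently" were rated as being at high risk of optimistic performance claims.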

We found two models that were generally of good quality, built on large datasets, and rated low risk of bias on most domains, but with an overall rating of unclear risk of bias owing to unclear details on one signalling question within the analysis domain (table 2 provides a summary). Jehi and colleagues presented findings from developing a diagnostic model; however, there was substantial missing data, it remains unclear whether the use of median imputation influenced results, and there are unexplained discrepancies between the online calculator, nomogram, and published logistic regression model.141 Hence, the calculator should not be used without further validation. Knight and colleagues developed a prognostic model for in-hospital mortality; however, continuous predictors were dichotomised, which reduces the granularity of predicted risks (even though the model had a C index comparable with that of a generalised additive model).143 The model was also converted into a sum score, but it was unclear how the scores were translated to the predicted mortality risks that were used to evaluate calibration.

Table 2

Prediction models with unclear risk of bias overall and large development samples

External validation

Forty six models were developed and externally validated in the same study (in an independent dataset, excluding random training test splits and temporal splits). In addition, 24 external validations of models (developed for covid-19 or before the covid-19 pandemic) were performed in separate studies. However, none of the 70 external validations was scored as low risk of bias: three were rated as unclear risk of bias and 67 as high risk of bias. One common concern is that the datasets used for external validation were likely not representative of the target population (eg, patients not recruited consecutively, use of an inappropriate study design, use of unrepresentative controls, or exclusion of patients still in follow-up). Consequently, predictive performance could differ if the models are applied in the targeted population. Moreover, only 15 (21%) external validations had 100 or more events, which is the recommended minimum.187 188 Only 11 (16%) external validations presented a calibration plot.

Table 3 shows the results of external validations that had at most an unclear risk of bias and at least 100 events in the external validation set. The model by Jehi et al has been discussed above.141 Luo and colleagues validated the CURB-65 score, originally developed to predict mortality of community acquired pneumonia, to assess its ability to predict in-hospital mortality in patients with confirmed covid-19. This validation was conducted in a large retrospective cohort of patients admitted to two Chinese hospitals designated to treat patients with pneumonia from SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2).155 It was unclear whether all consecutive patients were included (although this is likely given the retrospective design), no calibration plot was presented because the score outputs an integer rather than an estimated risk, and the score uses dichotomised predictors. Overall, the external validation by Luo et al was performed well. Studies that validated CURB-65 in patients with covid-19 obtained C indices of 0.58, 0.74, 0.75, 0.84, and 0.88.130 148 155 164 189 These observed differences might be due to differences in risk of bias (all except Luo et al were rated high risk of bias), heterogeneity in study populations (South Korea, China, Turkey, and the United States), outcome definitions (progression to severe covid-19 v mortality), and sampling variability (the numbers of events were 36, 55, 131, 201, and unclear).

Table 3

External validations with unclear risk of bias and large validation samples


In this systematic review of prediction models related to the covid-19 pandemic, we identified and critically appraised 232 models described in 169 studies. These prediction models can be divided into three categories: models for the general population to predict the risk of having covid-19 or being admitted to hospital for covid-19; models to support the diagnosis of covid-19 in patients with suspected infection; and models to support the prognostication of patients with covid-19. All models reported moderate to excellent predictive performance, but all were appraised to have high or uncertain risk of bias owing to a combination of poor reporting and poor methodological conduct for participant selection, predictor description, and statistical methods used. Models were developed on data from different countries, but the majority used data from a single country. Often, the available sample sizes and number of events for the outcomes of interest were limited. This problem is well known when building prediction models and increases the risk of overfitting the model.190 A high risk of bias implies that the performance of these models in new samples will probably be worse than that reported by the researchers. Therefore, the estimated C indices, often close to 1 and indicating near perfect discrimination, are probably optimistic. The majority of studies developed new models specifically for covid-19, but only 46 carried out an external validation, and calibration was rarely assessed. We cannot yet recommend any of the identified prediction models for widespread use in clinical practice, although a few diagnostic and prognostic models originated from studies that were clearly of better quality. We suggest that these models should be further validated in other data sets, and ideally by independent investigators.141143

Challenges and opportunities

The main aim of prediction models is to support medical decision making in individual patients. Therefore, it is vital to identify a target setting in which predictions serve a clinical need (eg, emergency department, intensive care unit, general practice, symptom monitoring app in the general population), and a representative dataset from that setting (preferably comprising consecutive patients) on which the prediction model can be developed and validated. The clinical setting and patient characteristics should be described in detail (including timing within the disease course, severity of disease at the moment of prediction, and comorbidity), so that readers and clinicians can judge whether the proposed model is suited to their population. Unfortunately, the studies included in our systematic review often lacked an adequate description of the target setting and study population, which leaves users of these models in doubt about the models’ applicability. Although we recognise that the earlier studies were done under severe time constraints, we recommend that any studies currently in preprint and all future studies adhere to the TRIPOD reporting guideline12 to improve the description of their study population and to guide their modelling choices. TRIPOD translations (eg, in Chinese and Japanese) are also available at

A better description of the study population could also help us understand the observed variability in the reported outcomes across studies, such as covid-19 related mortality and covid-19 prevalence. The variability in mortality could be related to differences in included patients (eg, age, comorbidities) and interventions for covid-19. The variability in prevalence could in part be reflective of different diagnostic standards across studies.

Covid-19 prediction will often not present as a simple binary classification task. Complexities in the data should be handled appropriately. For example, a prediction horizon should be specified for prognostic outcomes (eg, 30 day mortality). If study participants have neither recovered nor died within that time period, their data should not be excluded from analysis, which some reviewed studies have done. Instead, an appropriate time to event analysis should be considered to allow for administrative censoring.13 Censoring for other reasons, for instance because of quick recovery and loss to follow-up of patients who are no longer at risk of death from covid-19, could necessitate analysis in a competing risk framework.191
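A minimal sketch of the point about administrative censoring, with hypothetical data (the `kaplan_meier_survival` helper is illustrative and not taken from any reviewed study): a Kaplan-Meier estimate keeps patients who had neither died nor recovered by the end of follow-up in the analysis as censored observations, instead of dropping them:

```python
def kaplan_meier_survival(times, events, horizon):
    """Kaplan-Meier survival estimate at `horizon`.
    events[i] is 1 if death was observed at times[i], 0 if the patient
    was censored (eg, still in hospital when follow-up ended)."""
    # At tied times, process deaths before censorings (usual convention).
    ordered = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk, surv = len(ordered), 1.0
    for t, death in ordered:
        if t > horizon:
            break
        if death:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
    return surv

# Five hypothetical patients: days from admission; 1 = died, 0 = censored.
times = [5, 12, 20, 35, 40]
died = [1, 1, 0, 1, 0]
mortality_30d = 1 - kaplan_meier_survival(times, died, horizon=30)
print(round(mortality_30d, 2))  # 0.4
```

In this made-up example the Kaplan-Meier 30 day mortality is 0.40, whereas excluding the patient censored at day 20 would give 2/4 = 0.50, illustrating the bias introduced by dropping unresolved cases. Competing events such as recovery would, as noted above, instead call for a competing risk analysis rather than simple censoring.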

We reviewed 75 studies that used only medical images to diagnose covid-19 or covid-19 related pneumonia, or to assist in segmentation of lung images, the majority using advanced machine learning methodology. The reported performance measures suggested a high to almost perfect ability to identify covid-19, although these models and their evaluations were also at high risk of bias, notably because of poor reporting and an artificial mix of patients with and without covid-19. Currently, none of these models can be recommended for use in clinical practice. An independent systematic review and critical appraisal (using PROBAST12) of machine learning models for covid-19 using chest radiographs and CT scans came to the same conclusions, even though it focused on models that met a minimum requirement of study quality, based on specialised quality metrics for assessing radiomics and deep learning based diagnostic models in radiology.192

A prediction model applied in a new healthcare setting or country often produces predictions that are miscalibrated193 and might need to be updated before it can safely be applied in that new setting.13 This requires data from patients with covid-19 to be available from that system. Instead of developing and updating predictions in their local setting, individual participant data from multiple countries and healthcare systems might allow better understanding of the generalisability and implementation of prediction models across different settings and populations. This approach could greatly improve the applicability and robustness of prediction models in routine care.194195196197198
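Model updating of the kind referred to here is often done by logistic recalibration: the original model's linear predictor is kept fixed and only an intercept and slope are re-estimated on local data. The sketch below uses synthetic data and a plain gradient descent fit; it is an illustration of the idea, not a reference implementation:

```python
import math

def recalibrate(linear_predictors, outcomes, lr=0.3, steps=20000):
    """Refit intercept a and slope b of p = sigmoid(a + b*lp) on local
    data by gradient descent on the logistic log loss, keeping the
    original model's linear predictor lp fixed (logistic recalibration)."""
    a, b, n = 0.0, 1.0, len(outcomes)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for lp, y in zip(linear_predictors, outcomes):
            p = 1.0 / (1.0 + math.exp(-(a + b * lp)))
            grad_a += (p - y) / n
            grad_b += (p - y) * lp / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Hypothetical local validation data: linear predictors from an existing
# model, and locally observed outcomes.
lps = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [0, 1, 0, 1, 1]
a, b = recalibrate(lps, ys)
```

After fitting, the mean predicted risk matches the local event rate (a property of maximum likelihood logistic regression), so a fitted intercept below zero would indicate that the original model over-predicts risk in the new setting.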

The evidence base for the development and validation of prediction models related to covid-19 will continue to grow in the coming months. To leverage its full potential, international and interdisciplinary collaboration on data acquisition, model building, and validation is crucial.

Study limitations

With new publications on covid-19 related prediction models rapidly entering the medical literature, this systematic review cannot be viewed as an up-to-date list of all currently available covid-19 related prediction models. Also, 80 of the studies we reviewed were only available as preprints. These studies might improve after peer review, when they enter the official medical literature; we will reassess these peer reviewed publications in future updates. We also found other prediction models that are currently being used in clinical practice without scientific publications,199 and web risk calculators launched for use while the scientific manuscript is still under review (and unavailable on request).200 These unpublished models naturally fall outside the scope of this review of the literature. As we have argued extensively elsewhere,201 transparent reporting that enables validation by independent researchers is key for predictive analytics, and clinical guidelines should only recommend publicly available and verifiable algorithms.

Implications for practice

All reviewed prediction models were found to have an unclear or high risk of bias, and evidence from independent external validations of the newly developed models is still scarce. However, the urgency of diagnostic and prognostic models to assist in quick and efficient triage of patients in the covid-19 pandemic might encourage clinicians and policymakers to prematurely implement prediction models without sufficient documentation and validation. Earlier studies have shown that models were of limited use in the context of a pandemic,202 and they could even cause more harm than good.203 Therefore, we cannot recommend any model for use in practice at this point.

The current oversupply of insufficiently validated models is not useful for clinical practice. Moreover, predictive performance estimates obtained from different populations, settings, and types of validation (internal v external) are not directly comparable. Future studies should focus on validating, comparing, improving, and updating promising available prediction models.13 The models by Knight and colleagues143 and Jehi and colleagues141 are good candidates for validation studies in other data. We advise Jehi and colleagues to make all model equations available for independent validation.141 Such external validations should assess not only discrimination, but also calibration and clinical utility (net benefit),193 198 203 in large datasets187 188 collected using an appropriate study design. In addition, these models’ transportability to other countries or settings remains to be investigated. Owing to differences between healthcare systems (eg, Chinese and European), and to changes over time in when patients are admitted to and discharged from hospital and in the testing criteria for patients with suspected covid-19, we anticipate that most existing models will be miscalibrated; researchers could, however, attempt to update and adjust a model to the local setting.
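Net benefit, mentioned above as part of clinical utility, can be computed directly from a validation set. The sketch below uses made-up predictions and outcomes (not from any reviewed model) and follows the usual decision curve formulation: true positives per patient minus false positives per patient, the latter weighted by the odds of the chosen risk threshold, compared against treating everyone:

```python
def net_benefit(probs, outcomes, threshold):
    """Net benefit of treating patients with predicted risk >= threshold:
    true positives per patient minus false positives per patient,
    weighted by the odds of the risk threshold."""
    n = len(outcomes)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_treat_all(outcomes, threshold):
    """Reference strategy that treats every patient, model or no model."""
    prev = sum(outcomes) / len(outcomes)
    return prev - (1 - prev) * threshold / (1 - threshold)

# Hypothetical validation set: predicted risks and observed outcomes.
probs = [0.9, 0.8, 0.6, 0.3, 0.2, 0.1]
deaths = [1, 1, 0, 1, 0, 0]
nb_model = net_benefit(probs, deaths, threshold=0.5)   # 2/6 - 1/6
nb_all = net_benefit_treat_all(deaths, threshold=0.5)  # 0.5 - 0.5 = 0.0
```

A model is clinically useful at a given threshold only if its net benefit exceeds both treat-all and treat-none (net benefit 0); in this made-up example the model clears both.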

Most reviewed models used data from a hospital setting; few are available for primary care and the general population. Additional research is needed, including validation of any recently proposed models not yet included in the current update of the living review (eg, Clift et al204). The models reviewed to date predict covid-19 diagnosis or assess the risk of mortality or deterioration, whereas long term morbidity and functional outcomes remain understudied and could be a target outcome of interest in future studies developing prediction models.205 206

When creating a new prediction model, we recommend building on previous literature and expert opinion to select predictors, rather than selecting predictors in a purely data driven way.13 This is especially important for datasets with limited sample size.207 Frequently used predictors included in multiple models identified by our review are vital signs, age, comorbidities, and image features, and these should be considered when appropriate. Flu-like symptoms should be considered in diagnostic models, and sex, C reactive protein, and lymphocyte counts could be considered as prognostic factors.

By pointing to the most important methodological challenges and issues in design and reporting of the currently available models, we hope to have provided a useful starting point for further studies, which should preferably validate and update existing ones. This living systematic review has been conducted in collaboration with the Cochrane Prognosis Methods Group. We will update this review and appraisal continuously to provide up-to-date information for healthcare decision makers and professionals as more international research emerges over time.


Several diagnostic and prognostic models for covid-19 are currently available, and all report moderate to excellent discrimination. However, these models are all at high or unclear risk of bias, mainly because of model overfitting, inappropriate model evaluation (eg, calibration ignored), use of inappropriate data sources, and unclear reporting. Their performance estimates are therefore probably optimistic and not representative of the target population. The COVID-PRECISE group does not recommend any of the current prediction models for use in practice, but one diagnostic and one prognostic model originated from higher quality studies and should be (independently) validated in other datasets. For details of the reviewed models, see Future studies aimed at developing and validating diagnostic or prognostic models for covid-19 should explicitly address the concerns raised here and follow existing methodological guidance for prediction modelling studies, because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Finally, sharing data and expertise for the validation and updating of covid-19 related prediction models is urgently needed.

What is already known on this topic

  • The sharp recent increase in coronavirus disease 2019 (covid-19) incidence has put a strain on healthcare systems worldwide; an urgent need exists for efficient early detection of covid-19 in the general population, for diagnosis of covid-19 in patients with suspected disease, and for prognosis of covid-19 in patients with confirmed disease

  • Viral nucleic acid testing and chest computed tomography imaging are standard methods for diagnosing covid-19, but are time consuming

  • Earlier reports suggest that elderly patients, patients with comorbidities (chronic obstructive pulmonary disease, cardiovascular disease, hypertension), and patients presenting with dyspnoea are vulnerable to more severe morbidity and mortality after infection

What this study adds

  • Seven models identified patients at risk in the general population (using proxy outcomes for covid-19)

  • Thirty-three diagnostic models were identified for detecting covid-19, in addition to 75 diagnostic models based on medical images, 10 diagnostic models for severity classification, and 107 prognostic models for predicting, among other outcomes, mortality risk and progression to severe disease

  • Proposed models are poorly reported and at high risk of bias, raising concern that their predictions could be unreliable when applied in daily practice

  • Two prediction models (one for diagnosis and one for prognosis) were identified as being of higher quality than others and efforts should be made to validate these in other datasets


We thank the authors who made their work available by posting it on public registries or sharing it confidentially. A preprint version of the study is publicly available on medRxiv.


  • Contributors: LW conceived the study. LW and MvS designed the study. LW, MvS, and BVC screened titles and abstracts for inclusion. LW, BVC, GSC, TPAD, MCH, GH, KGMM, RDR, ES, LJMS, EWS, KIES, CW, JAAD, PD, MCH, NK, AL, KL, JM, CLAN, JBR, JCS, CS, NS, MS, RS, TT, SMJvK, FSvR, LH, RW, GPM, IT, JYV, DLD, JW, FSvR, PH, VMTdJ, MK, ICCvdH, BCTvB, DJM, and MvS extracted and analysed data. MDV helped interpret the findings on deep learning studies and MMJB, LH, and MCH assisted in the interpretation from a clinical viewpoint. RS and FSvR offered technical and administrative support. LW and MvS wrote the first draft, which all authors revised for critical content. All authors approved the final manuscript. LW and MvS are the guarantors. The guarantors had full access to all the data in the study, take responsibility for the integrity of the data and the accuracy of the data analysis, and had final responsibility for the decision to submit for publication. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: LW, BVC, LH, and MDV acknowledge specific funding for this work from Internal Funds KU Leuven, KOOR, and the COVID-19 Fund. LW is a postdoctoral fellow of Research Foundation-Flanders (FWO) and receives support from ZonMw (grant 10430012010001). BVC received support from FWO (grant G0B4716N) and Internal Funds KU Leuven (grant C24/15/037). TPAD acknowledges financial support from the Netherlands Organisation for Health Research and Development (grant 91617050). VMTdJ was supported by the European Union Horizon 2020 Research and Innovation Programme under ReCoDID grant agreement 825746. KGMM and JAAD acknowledge financial support from Cochrane Collaboration (SMF 2018). KIES is funded by the National Institute for Health Research (NIHR) School for Primary Care Research. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care. GSC was supported by the NIHR Biomedical Research Centre, Oxford, and Cancer Research UK (programme grant C49297/A27294). JM was supported by the Cancer Research UK (programme grant C49297/A27294). PD was supported by the NIHR Biomedical Research Centre, Oxford. MOH is supported by the National Heart, Lung, and Blood Institute of the United States National Institutes of Health (grant R00 HL141678). ICCvDH and BCTvB received funding from Euregio Meuse-Rhine (grant Covid Data Platform (coDaP) interref EMR-187). The funders played no role in study design, data collection, data analysis, data interpretation, or reporting.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at and declare: support from Internal Funds KU Leuven, KOOR, and the COVID-19 Fund for the submitted work; no competing interests with regard to the submitted work; LW discloses support from Research Foundation-Flanders; RDR reports personal fees as a statistics editor for The BMJ (since 2009), consultancy fees for Roche for giving meta-analysis teaching and advice in October 2018, and personal fees for delivering in-house training courses at Barts and the London School of Medicine and Dentistry, and the Universities of Aberdeen, Exeter, and Leeds, all outside the submitted work; MS coauthored the editorial on the original article.

  • Ethical approval: Not required.

  • Data sharing: The study protocol is available online at Detailed extracted data on all included studies are available on

  • The lead authors affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.

  • Dissemination to participants and related patient and public communities: The study protocol is available online at

  • Provenance and peer review: Not commissioned; externally peer reviewed.



This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See:


  1. Dong E, Du H, Gardner L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect Dis 2020:S1473-3099(20)30120-1. doi:10.1016/S1473-3099(20)30120-1 pmid:32087114

  2. Arabi YM, Murthy S, Webb S. COVID-19: a novel coronavirus and a novel challenge for critical care. Intensive Care Med 2020. doi:10.1007/s00134-020-05955-1 pmid:32125458

  3. Grasselli G, Pesenti A, Cecconi M. Critical care utilization for the COVID-19 outbreak in Lombardy, Italy: early experience and forecast during an emergency response. JAMA 2020. doi:10.1001/jama.2020.4031 pmid:32167538

  4. Xie J, Tong Z, Guan X, Du B, Qiu H, Slutsky AS. Critical care crisis and some recommendations during the COVID-19 epidemic in China. Intensive Care Med 2020. doi:10.1007/s00134-020-05979-7 pmid:32123994

  5. Looi M-K. Covid-19: Is a second wave hitting Europe? BMJ 2020;371:m4113. doi:10.1136/bmj.m4113 pmid:33115704

  6. Woolf SH, Chapman DA, Lee JH. COVID-19 as the leading cause of death in the United States. JAMA 2021;325:123-4. pmid:33331845

  7. Wellcome Trust. Sharing research data and findings relevant to the novel coronavirus (COVID-19) outbreak. 2020.

  8. Institute of Social and Preventive Medicine. Living evidence on COVID-19. 2020.

  9. Thomas J, Brunton J, Graziosi S. EPPI-Reviewer 4.0: software for research synthesis [program]. EPPI-Centre Software. London: Social Science Research Unit, Institute of Education, University of London, 2010.

  10. Moons KG, de Groot JA, Bouwmeester W, et al. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist. PLoS Med 2014;11:e1001744. doi:10.1371/journal.pmed.1001744 pmid:25314315

  11. Moons KGM, Wolff RF, Riley RD, et al. PROBAST: a tool to assess risk of bias and applicability of prediction model studies: explanation and elaboration. Ann Intern Med 2019;170:W1-33. doi:10.7326/M18-1377 pmid:30596876

  12. Moons KGM, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med 2015;162:W1-73. doi:10.7326/M14-0698 pmid:25560730

  13. Steyerberg EW. Clinical prediction models: a practical approach to development, validation, and updating. Springer US, 2019. doi:10.1007/978-3-030-16399-0

  14. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009;6:e1000100. doi:10.1371/journal.pmed.1000100 pmid:19621070

  15. Xie J, Hungerford D, Chen H, et al. Development and external validation of a prognostic multivariable model on admission for hospitalized patients with COVID-19. medRxiv [Preprint] 2020. doi:10.1101/2020.03.28.20045997

  16. DeCaprio D, Gartner J, Burgess T, et al. Building a COVID-19 vulnerability index. arXiv [Preprint] 2020.

  17. Bai X, Fang C, Zhou Y, et al. Predicting COVID-19 malignant progression with AI techniques. medRxiv [Preprint] 2020. doi:10.1101/2020.03.20.20037325

  18. Feng C, Huang Z, Wang L, et al. A novel triage tool of artificial intelligence assisted diagnosis aid system for suspected covid-19 pneumonia in fever clinics. medRxiv [Preprint] 2020. doi:10.1101/2020.03.19.20039099

  19. Jin C, Chen W, Cao Y, et al. Development and evaluation of an AI system for covid-19 diagnosis. medRxiv [Preprint] 2020. doi:10.1101/2020.03.20.20039834

  20. Meng Z, Wang M, Song H, et al. Development and utilization of an intelligent application for aiding COVID-19 diagnosis. medRxiv [Preprint] 2020. doi:10.1101/2020.03.18.20035816

  21. Caramelo F, Ferreira N, Oliveiros B. Estimation of risk factors for COVID-19 mortality – preliminary results. medRxiv [Preprint] 2020. doi:10.1101/2020.02.24.20027268

  22. Lu J, Hu S, Fan R, et al. ACP risk grade: a simple mortality index for patients with confirmed or suspected severe acute respiratory syndrome coronavirus 2 disease (COVID-19) during the early stage of outbreak in Wuhan, China. medRxiv [Preprint] 2020. doi:10.1101/2020.02.20.20025510

  23. Qi X, Jiang Z, Yu Q, et al. Machine learning-based CT radiomics model for predicting hospital stay in patients with pneumonia associated with SARS-CoV-2 infection: a multicenter study. medRxiv [Preprint] 2020. doi:10.1101/2020.02.29.20029603

  24. Yan L, Zhang H-T, Xiao Y, et al. Prediction of criticality in patients with severe Covid-19 infection using three clinical features: a machine learning-based prognostic model with clinical data in Wuhan. medRxiv [Preprint] 2020. doi:10.1101/2020.02.27.20028027
  25. Yuan M, Yin W, Tao Z, Tan W, Hu Y. Association of radiologic findings with mortality of patients infected with 2019 novel coronavirus in Wuhan, China. PLoS One 2020;15:e0230548. doi:10.1371/journal.pone.0230548 pmid:32191764

  26. Song Y, Zheng S, Li L, et al. Deep learning enables accurate diagnosis of novel coronavirus (covid-19) with CT images. medRxiv [Preprint] 2020. doi:10.1101/2020.02.23.20026930

  27. Yu H, Shao J, Guo Y, et al. Data-driven discovery of clinical routes for severity detection in covid-19 pediatric cases. medRxiv [Preprint] 2020. doi:10.1101/2020.03.09.20032219

  28. Gozes O, Frid-Adar M, Greenspan H, et al. Rapid AI development cycle for the coronavirus (covid-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv [Preprint] 2020.

  29. Chen J, Wu L, Zhang J, et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: a prospective study. medRxiv [Preprint] 2020. doi:10.1101/2020.02.25.20021568

  30. Xu X, Jiang X, Ma C, et al. Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv [Preprint] 2020.

  31. Shan F, Gao Y, Wang J, et al. Lung infection quantification of covid-19 in CT images with deep learning. arXiv [Preprint] 2020.

  32. Wang S, Kang B, Ma J, et al. A deep learning algorithm using CT images to screen for corona virus disease (covid-19). medRxiv [Preprint] 2020. doi:10.1101/2020.02.14.20023028

  33. Song C-Y, Xu J, He J-Q, et al. COVID-19 early warning score: a multi-parameter screening tool to identify highly suspected patients. medRxiv [Preprint] 2020. doi:10.1101/2020.03.05.20031906

  34. Barstugan M, Ozkaya U, Ozturk S. Coronavirus (COVID-19) classification using CT images by machine learning methods. arXiv [Preprint] 2020.

  35. Jin S, Wang B, Xu H, et al. AI-assisted CT imaging analysis for COVID-19 screening: building and deploying a medical AI system in four weeks. medRxiv [Preprint] 2020. doi:10.1101/2020.03.19.20039354
  36. Li L, Qin L, Xu Z, et al. Artificial intelligence distinguishes covid-19 from community acquired pneumonia on chest CT. Radiology 2020:200905. doi:10.1148/radiol.2020200905 pmid:32191588

  37. Lopez-Rincon A, Tonda A, Mendoza-Maldonado L, et al. Accurate identification of SARS-CoV-2 from viral genome sequences using deep learning. bioRxiv [Preprint] 2020. doi:10.1101/2020.03.13.990242

  38. Shi F, Xia L, Shan F, et al. Large-scale screening of covid-19 from community acquired pneumonia using infection size-aware classification. arXiv [Preprint] 2020.

  39. Shi Y, Yu X, Zhao H, Wang H, Zhao R, Sheng J. Host susceptibility to severe COVID-19 and establishment of a host risk score: findings of 487 cases outside Wuhan. Crit Care 2020;24:108. doi:10.1186/s13054-020-2833-7 pmid:32188484

  40. Zheng C, Deng X, Fu Q, et al. Deep learning-based detection for covid-19 from chest CT using weak label. medRxiv [Preprint] 2020. doi:10.1101/2020.03.12.20027185

  41. Chowdhury MEH, Rahman T, Khandakar A, et al. Can AI help in screening viral and COVID-19 pneumonia? arXiv [Preprint] 2020.

  42. Sun Y, Koh V, Marimuthu K, et al. Epidemiological and clinical predictors of covid-19. Clin Infect Dis 2020;ciaa322. pmid:32211755

  43. Martin A, Nateqi J, Gruarin S, et al. An artificial intelligence-based first-line defence against COVID-19: digitally screening citizens for risks via a chatbot. bioRxiv [Preprint] 2020. doi:10.1101/2020.03.25.008805

  44. Wang S, Zha Y, Li W, et al. A fully automatic deep learning system for covid-19 diagnostic and prognostic analysis. medRxiv [Preprint] 2020. doi:10.1101/2020.03.24.20042317

  45. Wang Z, Weng J, Li Z, et al. Development and validation of a diagnostic nomogram to predict covid-19 pneumonia. medRxiv [Preprint] 2020. doi:10.1101/2020.04.03.20052068

  46. Sarkar J, Chakrabarti P. A machine learning model reveals older age and delayed hospitalization as predictors of mortality in patients with covid-19. medRxiv [Preprint] 2020. doi:10.1101/2020.03.25.20043331

  47. Wu J, Zhang P, Zhang L, et al. Rapid and accurate identification of COVID-19 infection through machine learning based on clinical available blood test results. medRxiv [Preprint] 2020. doi:10.1101/2020.04.02.20051136

  48. Zhou Y, Yang Z, Guo Y, et al. A new predictor of disease severity in patients with covid-19 in Wuhan, China. medRxiv [Preprint] 2020. doi:10.1101/2020.03.24.20042119

  49. Abbas A, Abdelsamea M, Gaber M. Classification of covid-19 in chest x-ray images using DeTraC deep convolutional neural network. medRxiv [Preprint] 2020. doi:10.1101/2020.03.30.20047456

  50. Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine 2020. doi:10.1007/s13246-020-00865-4

  51. Bukhari SUK, Bukhari SSK, Syed A, et al. The diagnostic evaluation of Convolutional Neural Network (CNN) for the assessment of chest X-ray of patients infected with COVID-19. medRxiv [Preprint] 2020. doi:10.1101/2020.03.26.20044610

  52. Chaganti S, Balachandran A, Chabin G, et al. Quantification of tomographic patterns associated with covid-19 from chest CT. arXiv [Preprint] 2020.

  53. Fu M, Yi S-L, Zeng Y, et al. Deep learning-based recognizing covid-19 and other common infectious diseases of the lung by chest CT scan images. medRxiv [Preprint] 2020. doi:10.1101/2020.03.28.20046045

  54. Gozes O, Frid-Adar M, Sagie N, et al. Coronavirus detection and analysis on chest CT with deep learning. arXiv [Preprint] 2020.

  55. Imran A, Posokhova I, Qureshi HN, et al. AI4COVID-19: AI enabled preliminary diagnosis for covid-19 from cough samples via an app. arXiv [Preprint] 2020.

  56. Li K, Fang Y, Li W, et al. CT image visual quantitative evaluation and clinical classification of coronavirus disease (COVID-19). Eur Radiol 2020. doi:10.1007/s00330-020-06817-6 pmid:32215691

  57. Li X, Li C, Zhu D. COVID-MobileXpert: on-device covid-19 screening using snapshots of chest x-ray. arXiv [Preprint] 2020.

  58. Hassanien AE, Mahdy LN, Ezzat KA, et al. Automatic x-ray covid-19 lung image classification system based on multi-level thresholding and support vector machine. medRxiv [Preprint] 2020. doi:10.1101/2020.03.30.20047787

  59. Tang Z, Zhao W, Xie X, et al. Severity assessment of coronavirus disease 2019 (covid-19) using quantitative features from chest CT images. arXiv [Preprint] 2020.

  60. Zhang J, Xie Y, Li Y, et al. COVID-19 screening on chest X-ray images using deep learning based anomaly detection. arXiv [Preprint] 2020.

  61. Zhou M, Chen Y, Wang D, et al. Improved deep learning model for differentiating novel coronavirus pneumonia and influenza pneumonia. medRxiv [Preprint] 2020. doi:10.1101/2020.03.24.20043117

  45. Huang H, Cai S, Li Y, et al. Prognostic factors for COVID-19 pneumonia progression to severe symptom based on the earlier clinical features: a retrospective analysis. medRxiv [Preprint] 2020. doi:10.1101/2020.03.28.20045989
    Google Scholar

  46. Pourhomayoun M, Shakibi M. Predicting mortality risk in patients with covid-19 using artificial intelligence to help medical decision-making. medRxiv [Preprint] 2020. doi:10.1101/2020.03.30.20047308
    CrossRefPubMedGoogle Scholar

  47. Zeng L, Li J, Liao M, et al. Risk assessment of progression to severe conditions for patients with COVID-19 pneumonia: a single-center retrospective study. medRxiv [Preprint] 2020. doi:10.1101/2020.03.25.20043166
    Abstract/FREE Full TextGoogle Scholar
    1. Al-Najjar H,
    2. Al-Rousan N

    . A classifier prediction model to predict the status of coronavirus covid-19 patients in South Korea. Eur Rev Med Pharmacol Sci2020;24:3400-3. doi:10.26355/eurrev_202003_20709 pmid:32271458
    Google Scholar

  48. Angelov P, Soares E. Explainable-by-design approach for covid-19 classification via CT-scan. medRxiv [Preprint] 2020. doi:10.1101/2020.04.24.20078584.
    CrossRefPubMedGoogle Scholar

  49. Arpan M, Surya K, Harish R, et al. CovidAID: COVID-19 Detection Using Chest X-Ray. ArXiv e-prints [Preprint] 2020
    1. Bai HX,
    2. Wang R,
    3. Xiong Z,
    4. et al

    . AI augmentation of radiologist performance in distinguishing covid-19 from pneumonia of other etiology on chest CT. Radiology2020;201491.pmid:32339081
    Google Scholar

  50. Barda N, Riesel D, Akriv A, et al. Performing risk stratification for COVID-19 when individual level data is not available, the experience of a large healthcare organization. medRxiv [Preprint] 2020. doi:10.1101/2020.04.23.20076976

  51. Bassi PRAS, Attux R. A deep convolutional neural network for covid-19 detection using chest x-rays. ArXiv e-prints [Preprint] 2020.

  52. Batista AFdM, Miraglia JL, Donato THR, et al. COVID-19 diagnosis prediction in emergency care patients: a machine learning approach. medRxiv [Preprint] 2020. doi:10.1101/2020.04.04.20052092
    Bello-Chavolla OY, Bahena-López JP, Antonio-Villa NE, et al. Predicting mortality due to SARS-CoV-2: a mechanistic score relating obesity and diabetes to COVID-19 outcomes in Mexico. J Clin Endocrinol Metab 2020;105:dgaa346. doi:10.1210/clinem/dgaa346 pmid:32474598

  53. Benchoufi M, Bokobza J, Anthony C, et al. Lung injury in patients with or suspected COVID-19: a comparison between lung ultrasound and chest CT-scanner severity assessments, an observational study. medRxiv [Preprint] 2020. doi:10.1101/2020.04.24.20069633
    Borghesi A, Maroldi R. COVID-19 outbreak in Italy: experimental chest X-ray scoring system for quantifying and monitoring disease progression. Radiol Med 2020;125:509-13. doi:10.1007/s11547-020-01200-3 pmid:32358689

  54. Born J, Brandle G, Cossio M, et al. POCOVID-Net: Automatic detection of covid-19 from a new lung ultrasound imaging dataset (POCUS). ArXiv e-prints [Preprint] 2020.

  55. Brinati D, Campagner A, Ferrari D, et al. Detection of covid-19 infection from routine blood exams with machine learning: a feasibility study. medRxiv [Preprint] 2020. doi:10.1101/2020.04.22.20075143.

  56. Carr E, Bendayan R, O’Gallagher K, et al. Supplementing the National Early Warning Score (NEWS2) for anticipating early deterioration among patients with covid-19 infection. medRxiv [Preprint] 2020. doi:10.1101/2020.04.24.20078006

  57. Castiglioni I, Ippolito D, Interlenghi M, et al. Artificial intelligence applied on chest X-ray can aid in the diagnosis of COVID-19 infection: a first experience from Lombardy. medRxiv [Preprint] 2020. doi:10.1101/2020.04.08.20040907.

  58. Chassagnon G, Vakalopoulou M, Battistella E, et al. AI-driven CT-based quantification, staging and short-term outcome prediction of covid-19 pneumonia. medRxiv [Preprint] 2020. doi:10.1101/2020.04.17.20069187.
    Chen X, Tang Y, Mo Y, et al. A diagnostic model for coronavirus disease 2019 (COVID-19) based on radiological semantic and clinical features: a multi-center study. Eur Radiol 2020. doi:10.1007/s00330-020-06829-2 pmid:32300971

    Colombi D, Bodini FC, Petrini M, et al. Well-aerated lung on admitting chest CT to predict adverse outcome in covid-19 pneumonia. Radiology 2020;201433. doi:10.1148/radiol.2020201433 pmid:32301647

  59. Das A, Mishra S, Gopalan SS. Predicting community mortality risk due to CoVID-19 using machine learning and development of a prediction tool. medRxiv [Preprint] 2020. doi:10.1101/2020.04.27.20081794.

  60. Diaz-Quijano FA, Silva JMNd, Ganem F, et al. A model to predict SARS-CoV-2 infection based on the first three-month surveillance data in Brazil. medRxiv [Preprint] 2020. doi:10.1101/2020.04.05.20047944.

  61. Guiot J, Vaidyanathan A, Deprez L, et al. Development and validation of an automated radiomic CT signature for detecting covid-19. medRxiv [Preprint] 2020. doi:10.1101/2020.04.28.20082966.

  62. Guo Y, Liu Y, Lu J, et al. Development and validation of an early warning score (EWAS) for predicting clinical deterioration in patients with coronavirus disease 2019. medRxiv [Preprint] 2020. doi:10.1101/2020.04.17.20064691

  63. Hu C, Liu Z, Jiang Y, et al. Early prediction of mortality risk among severe covid-19 patients using machine learning. medRxiv [Preprint] 2020. doi:10.1101/2020.04.13.20064329.
    Hu H, Yao N, Qiu Y. Comparing rapid scoring systems in mortality prediction of critical ill patients with novel coronavirus disease. Acad Emerg Med 2020;27:461-8. doi:10.1111/acem.13992 pmid:32311790

    Hu R, Ruan G, Xiang S, et al. Automated diagnosis of covid-19 using deep learning and data augmentation on chest CT. medRxiv [Preprint] 2020. doi:10.1101/2020.04.24.20078998

  64. Islam MT, Fleischer JW. Distinguishing L and H phenotypes of covid-19 using a single x-ray image. medRxiv [Preprint] 2020. doi:10.1101/2020.04.27.20081984.
    Ji D, Zhang D, Xu J, et al. Prediction for progression risk in patients with covid-19 pneumonia: the CALL score. Clin Infect Dis 2020;ciaa414. pmid:32271369

    Jiang X, Coffee M, Bari A, et al. Towards an artificial intelligence framework for data-driven prediction of coronavirus clinical severity. Computers, Materials & Continua 2020;63:537-51. doi:10.32604/cmc.2020.010691

  65. Jiang Z, Hu M, Fan L, et al. Combining visible light and infrared imaging for efficient detection of respiratory infections such as covid-19 on portable device. ArXiv e-prints [Preprint] 2020.

  66. Kana GEB, Kana ZMG, Kana DAF, et al. A web-based diagnostic tool for covid-19 using machine learning on chest radiographs (CXR). medRxiv [Preprint] 2020. doi:10.1101/2020.04.21.20063263.

  67. Rezaul KM, Döhmen T, Rebholz-Schuhmann D, et al. DeepCOVIDExplainer: explainable covid-19 predictions based on chest x-ray images. ArXiv e-prints [Preprint] 2020.
    Khan AI, Shah JL, Bhat MM. CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest x-ray images. Comput Methods Programs Biomed 2020;196:105581. doi:10.1016/j.cmpb.2020.105581 pmid:32534344

  68. Kumar R, Arora R, Bansal V, et al. Accurate prediction of covid-19 using chest x-ray images through deep feature learning model with SMOTE and machine learning classifiers. medRxiv [Preprint] 2020. doi:10.1101/2020.04.13.20063461.

  69. Kurstjens S, van der Horst A, Herpers R, et al. Rapid identification of SARS-CoV-2-infected patients at the emergency department using routine testing. medRxiv [Preprint] 2020. doi:10.1101/2020.04.20.20067512

  70. Levy TJ, Richardson S, Coppa K, et al. Estimating survival of hospitalized covid-19 patients from admission information. medRxiv [Preprint] 2020. doi:10.1101/2020.04.22.20075416.

  71. Li Z, Zhong Z, Li Y, et al. From community acquired pneumonia to covid-19: a deep learning based method for quantitative analysis of covid-19 on thick-section CT scans. medRxiv [Preprint] 2020. doi:10.1101/2020.04.17.20070219.

  72. Liu Q, Fang X, Tokuno S, et al. Prediction of the clinical outcome of COVID-19 patients using T lymphocyte subsets with 340 cases from Wuhan, China: a retrospective cohort study and a web visualization tool. medRxiv [Preprint] 2020. doi:10.1101/2020.04.06.20056127.
    Lyu P, Liu X, Zhang R, Shi L, Gao J. The performance of chest CT in evaluating the clinical severity of COVID-19 pneumonia: identifying critical cases based on CT characteristics. Invest Radiol 2020;55:412-21. pmid:32304402

    McRae MP, Simmons GW, Christodoulides NJ, et al. Clinical decision support tool and rapid point-of-care platform for determining disease severity in patients with COVID-19. Lab Chip 2020;20:2075-85. doi:10.1039/D0LC00373E pmid:32490853

    Mei X, Lee HC, Diao KY, et al. Artificial intelligence-enabled rapid diagnosis of patients with COVID-19. Nat Med 2020;26:1224-8. doi:10.1038/s41591-020-0931-3 pmid:32427924

    Menni C, Valdes AM, Freidin MB, et al. Real-time tracking of self-reported symptoms to predict potential COVID-19. Nat Med 2020;26:1037-40. doi:10.1038/s41591-020-0916-2 pmid:32393804

  73. Moutounet-Cartan PGB. Deep convolutional neural networks to diagnose covid-19 and other pneumonia diseases from posteroanterior chest x-rays. ArXiv e-prints [Preprint] 2020.

    Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Rajendra Acharya U. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med 2020;121:103792. doi:10.1016/j.compbiomed.2020.103792 pmid:32568675

    Rahimzadeh M, Attar A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Inform Med Unlocked 2020;19:100360. doi:10.1016/j.imu.2020.100360 pmid:32501424

  74. Rehman A, Naz S, Khan A, et al. Improving coronavirus (covid-19) diagnosis using deep transfer learning. medRxiv [Preprint] 2020. doi:10.1101/2020.04.11.20054643.
    Singh D, Kumar V, Vaishali NA, Kaur M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. Eur J Clin Microbiol Infect Dis 2020;39:1379-89. doi:10.1007/s10096-020-03901-z pmid:32337662

  75. Singh K, Valley TS, Tang S, et al. Validating a widely implemented deterioration index model among hospitalized covid-19 patients. medRxiv [Preprint] 2020. doi:10.1101/2020.04.24.20079012.

  76. Soares F, Villavicencio A, Anzanello MJ, et al. A novel high specificity COVID-19 screening method based on simple blood exams and artificial intelligence. medRxiv [Preprint] 2020. doi:10.1101/2020.04.10.20061036.

  77. Tordjman M, Mekki A, Mali RD, et al. Pre-test probability for SARS-Cov-2-related Infection Score: the PARIS score. medRxiv [Preprint] 2020. doi:10.1101/2020.04.28.20081687.
    Ucar F, Korkmaz D. COVIDiagnosis-Net: deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Med Hypotheses 2020;140:109761. doi:10.1016/j.mehy.2020.109761 pmid:32344309

  78. Vaid A, Somani S, Russak AJ, et al. Machine learning to predict mortality and critical events in covid-19 positive New York City patients. medRxiv [Preprint] 2020. doi:10.1101/2020.04.26.20073411.

  79. Vazquez Guillamet C, Vazquez Guillamet R, Kramer AA, et al. Toward a covid-19 score-risk assessments and registry. medRxiv [Preprint] 2020. doi:10.1101/2020.04.15.20066860.

  80. Wang C, Deng R, Gou L, et al. Preliminary study to identify severe from moderate cases of COVID-19 using NLR&RDW-SD combination parameter. medRxiv [Preprint] 2020. doi:10.1101/2020.04.09.20058594

  81. Wu Y-H, Gao S-H, Mei J, et al. JCS: an explainable covid-19 diagnosis system by joint classification and segmentation. ArXiv e-prints [Preprint] 2020.

  82. Zhang H, Shi T, Wu X, et al. Risk prediction for poor outcome and death in hospital in-patients with COVID-19: derivation in Wuhan, China and external validation in London. medRxiv [Preprint] 2020. doi:10.1101/2020.04.28.20082222.

  83. Zhao B, Wei Y, Sun W, et al. Distinguish coronavirus disease 2019 patients in general surgery emergency by CIAAD scale: development and validation of a prediction model based on 822 cases in China. medRxiv [Preprint] 2020. doi:10.1101/2020.04.18.20071019.
    Zhu Z, Cai T, Fan L, et al. Clinical value of immune-inflammatory parameters to assess the severity of coronavirus disease 2019. Int J Infect Dis 2020;95:332-9. doi:10.1016/j.ijid.2020.04.041 pmid:32334118

    Gong J, Ou J, Qiu X, et al. A tool to early predict severe corona virus disease 2019 (covid-19): a multicenter study using the risk nomogram in Wuhan and Guangdong, China. Clin Infect Dis 2020;ciaa443. pmid:32296824

    Apostolopoulos ID, Aznaouridis SI, Tzani MA. Extracting possibly representative covid-19 biomarkers from x-ray images with deep learning approach and image data related to pulmonary diseases. J Med Biol Eng 2020;01:1-8. doi:10.1007/s40846-020-00529-4 pmid:32412551

    Ardakani AA, Kanafi AR, Acharya UR, Khadem N, Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: results of 10 convolutional neural networks. Comput Biol Med 2020;121:103795. doi:10.1016/j.compbiomed.2020.103795 pmid:32568676

    Bar S, Lecourtois A, Diouf M, et al. The association of lung ultrasound images with COVID-19 infection in an emergency room cohort. Anaesthesia 2020;75:1620-5. doi:10.1111/anae.15175 pmid:32520406

    Bi X, Su Z, Yan H, et al. Prediction of severe illness due to covid-19 based on an analysis of initial fibrinogen to albumin ratio and platelet count. Platelets 2020;31:674-9. doi:10.1080/09537104.2020.1760230 pmid:32367765

    Borghesi A, Zigliani A, Golemi S, et al. Chest x-ray severity index as a predictor of in-hospital mortality in coronavirus disease 2019: a study of 302 patients from Italy. Int J Infect Dis 2020;96:291-3. doi:10.1016/j.ijid.2020.05.021 pmid:32437939

    Burian E, Jungmann F, Kaissis GA, et al. Intensive care risk estimation in covid-19 pneumonia based on clinical and imaging parameters: experiences from the Munich cohort. J Clin Med 2020;9:E1514. doi:10.3390/jcm9051514 pmid:32443442

    Cecconi M, Piovani D, Brunetta E, et al. Early predictors of clinical deterioration in a cohort of 239 patients hospitalized for covid-19 infection in Lombardy, Italy. J Clin Med 2020;9:E1548. doi:10.3390/jcm9051548 pmid:32443899

    Cheng FY, Joshi H, Tandon P, et al. Using machine learning to predict ICU transfer in hospitalized covid-19 patients. J Clin Med 2020;9:E1668. doi:10.3390/jcm9061668 pmid:32492874

    Choi MH, Ahn H, Ryu HS, et al. Clinical characteristics and disease progression in early-stage covid-19 patients in South Korea. J Clin Med 2020;9:E1959. doi:10.3390/jcm9061959 pmid:32585855

    Clemency BM, Varughese R, Scheafer DK, et al. Symptom criteria for covid-19 testing of heath care workers. Acad Emerg Med 2020;27:469-74. doi:10.1111/acem.14009 pmid:32396670

    Dong Y, Zhou H, Li M, et al. A novel simple scoring model for predicting severity of patients with SARS-CoV-2 infection. Transbound Emerg Dis 2020;67:2823-9. doi:10.1111/tbed.13651 pmid:32469137

    El Asnaoui K, Chawki Y. Using X-ray images and deep learning for automated detection of coronavirus disease. J Biomol Struct Dyn 2020;1-12. doi:10.1080/07391102.2020.1767212 pmid:32397844

    Fu L, Li Y, Cheng A, Pang P, Shu Z. A novel machine learning-derived radiomic signature of the whole lung differentiates stable from progressive covid-19 infection: a retrospective cohort study. J Thorac Imaging 2020. doi:10.1097/RTI.0000000000000544 pmid:32555006

    Galloway JB, Norton S, Barker RD, et al. A clinical risk score to identify patients with COVID-19 at high risk of critical care admission or death: an observational cohort study. J Infect 2020;81:282-8. doi:10.1016/j.jinf.2020.05.064 pmid:32479771

    Gezer NS, Ergan B, Barış MM, et al. COVID-19 S: a new proposal for diagnosis and structured reporting of COVID-19 on computed tomography imaging. Diagn Interv Radiol 2020;26:315-22. doi:10.5152/dir.2020.20351 pmid:32558646

    Gidari A, De Socio GV, Sabbatini S, Francisci D. Predictive value of National Early Warning Score 2 (NEWS2) for intensive care unit admission in patients with SARS-CoV-2 infection. Infect Dis (Lond) 2020;52:698-704. doi:10.1080/23744235.2020.1784457 pmid:32584161

    Hong Y, Wu X, Qu J, Gao Y, Chen H, Zhang Z. Clinical characteristics of coronavirus disease 2019 and development of a prediction model for prolonged hospital length of stay. Ann Transl Med 2020;8:443. doi:10.21037/atm.2020.03.147 pmid:32395487

    Huang D, Wang T, Chen Z, Yang H, Yao R, Liang Z. A novel risk score to predict diagnosis with coronavirus disease 2019 (COVID-19) in suspected patients: a retrospective, multicenter, and observational study. J Med Virol 2020;92:2709-17. doi:10.1002/jmv.26143 pmid:32510164

    Huang J, Cheng A, Lin S, Zhu Y, Chen G. Individualized prediction nomograms for disease progression in mild COVID-19. J Med Virol 2020;92:2074-80. doi:10.1002/jmv.25969 pmid:32369205

    Jehi L, Ji X, Milinovich A, et al. Individualizing risk prediction for positive coronavirus disease 2019 testing: results from 11,672 patients. Chest 2020;158:1364-75. doi:10.1016/j.chest.2020.05.580 pmid:32533957

    Joshi RP, Pejaver V, Hammarlund NE, et al. A predictive tool for identification of SARS-CoV-2 PCR-negative emergency department patients using routine test results. J Clin Virol 2020;129:104502. doi:10.1016/j.jcv.2020.104502 pmid:32544861

    Knight SR, Ho A, Pius R, et al, ISARIC4C investigators. Risk stratification of patients admitted to hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: development and validation of the 4C Mortality Score. BMJ 2020;370:m3339. doi:10.1136/bmj.m3339 pmid:32907855

    Ko H, Chung H, Kang WS, et al. COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: model development and validation. J Med Internet Res 2020;22:e19569. doi:10.2196/19569 pmid:32568730

    Li Q, Zhang J, Ling Y, et al. A simple algorithm helps early identification of SARS-CoV-2 infection patients with severe progression tendency. Infection 2020;48:577-84. doi:10.1007/s15010-020-01446-z pmid:32440918

    Li Y, Yang Z, Ai T, Wu S, Xia L. Association of “initial CT” findings with mortality in older patients with coronavirus disease 2019 (COVID-19). Eur Radiol 2020;30:6186-93. doi:10.1007/s00330-020-06969-5 pmid:32524220

    Li Z, Zeng B, Lei P, et al. Differentiating pneumonia with and without COVID-19 using chest CT images: from qualitative to quantitative. J Xray Sci Technol 2020;28:583-9. doi:10.3233/XST-200689 pmid:32568167

    Liang W, Liang H, Ou L, et al, China Medical Treatment Expert Group for COVID-19. Development and validation of a clinical risk score to predict the occurrence of critical illness in hospitalized patients with COVID-19. JAMA Intern Med 2020;180:1081-9. doi:10.1001/jamainternmed.2020.2033 pmid:32396163

    Liu F, Zhang Q, Huang C, et al. CT quantification of pneumonia lesions in early days predicts progression to severe illness in a cohort of COVID-19 patients. Theranostics 2020;10:5613-22. doi:10.7150/thno.45985 pmid:32373235

    Liu X, Shi S, Xiao J, et al. Prediction of the severity of coronavirus disease 2019 and its adverse clinical outcomes. Jpn J Infect Dis 2020;73:404-10. doi:10.7883/yoken.JJID.2020.194 pmid:32475880

    Liu Y, Wang Z, Ren J, et al. A covid-19 risk assessment decision support system for general practitioners: design and development study. J Med Internet Res 2020;22:e19786. doi:10.2196/19786 pmid:32540845

    Liu YP, Li GM, He J, et al. Combined use of the neutrophil-to-lymphocyte ratio and CRP to predict 7-day disease severity in 84 hospitalized patients with COVID-19 pneumonia: a retrospective cohort study. Ann Transl Med 2020;8:635. doi:10.21037/atm-20-2372 pmid:32566572

    Lorente-Ros A, Monteagudo Ruiz JM, Rincón LM, et al. Myocardial injury determination improves risk stratification and predicts mortality in COVID-19 patients. Cardiol J 2020;27:489-96. pmid:32589258

    Luo L, Luo Z, Jia Y, et al. CT differential diagnosis of COVID-19 and non-COVID-19 in symptomatic suspects: a practical scoring method. BMC Pulm Med 2020;20:129. doi:10.1186/s12890-020-1170-6 pmid:32381057

    Luo M, Liu J, Jiang W, Yue S, Liu H, Wei S. IL-6 and CD8+ T cell counts combined are an early predictor of in-hospital mortality of patients with COVID-19. JCI Insight 2020;5:139024. doi:10.1172/jci.insight.139024 pmid:32544099

    Luo Y, Yuan X, Xue Y, et al. Using a diagnostic model based on routine laboratory tests to distinguish patients infected with SARS-CoV-2 from those infected with influenza virus. Int J Infect Dis 2020;95:436-40. doi:10.1016/j.ijid.2020.04.078 pmid:32371192

    Matos J, Paparo F, Mussetto I, et al. Evaluation of novel coronavirus disease (COVID-19) using quantitative lung CT and clinical data: prediction of short-term outcome. Eur Radiol Exp 2020;4:39. doi:10.1186/s41747-020-00167-0 pmid:32592118

    Mazzaccaro D, Giacomazzi F, Giannetta M, et al. Non-overt coagulopathy in non-ICU patients with mild to moderate covid-19 pneumonia. J Clin Med 2020;9:E1781. doi:10.3390/jcm9061781 pmid:32521707

    Murphy K, Smits H, Knoops AJG, et al. Covid-19 on the chest radiograph: a multireader evaluation of an artificial intelligence system. Radiology 2020;296:E166-72. doi:10.1148/radiol.2020201874 pmid:32384019

    Obeid JS, Davis M, Turner M, et al. An artificial intelligence approach to COVID-19 infection risk assessment in virtual visits: a case report. J Am Med Inform Assoc 2020;27:1321-5. doi:10.1093/jamia/ocaa105 pmid:32449766

    Pu J, Leader J, Bandos A, et al. Any unique image biomarkers associated with COVID-19? Eur Radiol 2020;30:6221-7. doi:10.1007/s00330-020-06956-w pmid:32462445

    Rajaraman S, Antani S. Weakly labeled data augmentation for deep learning: a study on covid-19 detection in chest x-rays. Diagnostics (Basel) 2020;10:E358. doi:10.3390/diagnostics10060358 pmid:32486140

    Roland LT, Gurrola JG 2nd, Loftus PA, Cheung SW, Chang JL. Smell and taste symptom-based predictive model for COVID-19 diagnosis. Int Forum Allergy Rhinol 2020;10:832-8. doi:10.1002/alr.22602 pmid:32363809

    Satici C, Demirkol MA, Sargin Altunok E, et al. Performance of pneumonia severity index and CURB-65 in predicting 30-day mortality in patients with COVID-19. Int J Infect Dis 2020;98:84-9. doi:10.1016/j.ijid.2020.06.038 pmid:32553714

    Song J, Wang H, Liu Y, et al. End-to-end automatic differentiation of the coronavirus disease 2019 (COVID-19) from viral pneumonia based on chest CT. Eur J Nucl Med Mol Imaging 2020;47:2516-24. doi:10.1007/s00259-020-04929-1 pmid:32567006

    Sun L, Song F, Shi N, et al. Combination of four clinical indicators predicts the severe/critical symptom of patients infected COVID-19. J Clin Virol 2020;128:104431. doi:10.1016/j.jcv.2020.104431 pmid:32442756

    Toraih EA, Elshazli RM, Hussein MH, et al. Association of cardiac biomarkers and comorbidities with increased mortality, severity, and cardiac injury in COVID-19 patients: a meta-regression and decision tree analysis. J Med Virol 2020;92:2473-88. doi:10.1002/jmv.26166 pmid:32530509

    Tuncer T, Dogan S, Ozyurt F. An automated residual exemplar local binary pattern and iterative relieff based covid-19 detection method using chest x-ray image. Chemometr Intell Lab Syst 2020;203:104054. doi:10.1016/j.chemolab.2020.104054 pmid:32427226

    Vaid S, Kalantar R, Bhandari M. Deep learning COVID-19 detection bias: accuracy through artificial intelligence. Int Orthop 2020;44:1539-42. doi:10.1007/s00264-020-04609-7 pmid:32462314

    Vultaggio A, Vivarelli E, Virgili G, et al. Prompt predicting of early clinical deterioration of moderate-to-severe COVID-19 patients: usefulness of a combined score using IL-6 in a preliminary study. J Allergy Clin Immunol Pract 2020;8:2575-2581.e2. doi:10.1016/j.jaip.2020.06.013 pmid:32565226

    Wang F, Hou H, Wang T, et al. Establishing a model for predicting the outcome of COVID-19 based on combination of laboratory tests. Travel Med Infect Dis 2020;36:101782. doi:10.1016/j.tmaid.2020.101782 pmid:32526372

    Wang K, Zuo P, Liu Y, et al. Clinical and laboratory predictors of in-hospital mortality in patients with coronavirus disease-2019: a cohort study in Wuhan, China. Clin Infect Dis 2020;71:2079-88. doi:10.1093/cid/ciaa538 pmid:32361723

    Wang L, Liu Y, Zhang T, et al. Differentiating between 2019 novel coronavirus pneumonia and influenza using a nonspecific laboratory marker-based dynamic nomogram. Open Forum Infect Dis 2020;7:a169. doi:10.1093/ofid/ofaa169 pmid:32490031

    Wu S, Du Z, Shen S, et al. Identification and validation of a novel clinical signature to predict the prognosis in confirmed COVID-19 patients. Clin Infect Dis 2020;ciaa793. doi:10.1093/cid/ciaa793 pmid:32556293

    Wu X, Hui H, Niu M, et al. Deep learning-based multi-view fusion model for screening 2019 novel coronavirus pneumonia: a multicentre study. Eur J Radiol 2020;128:109041. doi:10.1016/j.ejrad.2020.109041 pmid:32408222

    Yang P, Wang P, Song Y, Zhang A, Yuan G, Cui Y. A retrospective study on the epidemiological characteristics and establishment of an early warning system of severe COVID-19 patients. J Med Virol 2020;92:2173-80. doi:10.1002/jmv.26022 pmid:32410285

    Yang Y, Shen C, Li J, et al. Plasma IP-10 and MCP-3 levels are highly associated with disease severity and predict the progression of COVID-19. J Allergy Clin Immunol 2020;146:119-127.e4. doi:10.1016/j.jaci.2020.04.027 pmid:32360286

    Yu C, Lei Q, Li W, et al. Clinical characteristics, associated factors, and predicting covid-19 mortality risk: a retrospective study in Wuhan, China. Am J Prev Med 2020;59:168-75. doi:10.1016/j.amepre.2020.05.002 pmid:32564974

    Zhang C, Qin L, Li K, et al. A novel scoring system for prediction of disease severity in covid-19. Front Cell Infect Microbiol 2020;10:318. doi:10.3389/fcimb.2020.00318 pmid:32582575

    Zhang K, Liu X, Shen J, et al. Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of covid-19 pneumonia using computed tomography. Cell 2020;181:1423-1433.e11. doi:10.1016/j.cell.2020.04.045 pmid:32416069

    Zheng QN, Xu MY, Zheng YL, Wang XY, Zhao H. Prediction of the rehabilitation duration and risk management for mild-moderate COVID-19. Disaster Med Public Health Prep 2020;14:652-7. doi:10.1017/dmp.2020.214 pmid:32576328

    Zhou Y, He Y, Yang H, et al. Development and validation a nomogram for predicting the risk of severe COVID-19: a multi-center study in Sichuan, China. PLoS One 2020;15:e0233328. doi:10.1371/journal.pone.0233328 pmid:32421703

    Zou X, Li S, Fang M, et al. Acute physiology and chronic health evaluation II score as a predictor of hospital mortality in patients of coronavirus disease 2019. Crit Care Med 2020;48:e657-65. doi:10.1097/CCM.0000000000004411 pmid:32697506

  84. Cohen JP, Morrison P, Dao L. COVID-19 image data collection. arXiv e-prints [Preprint] 2020.

  85. Kaggle. COVID-19 Kaggle community contributions 2020.
    Wang S, Zha Y, Li W, et al. A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. Eur Respir J 2020;56:2000775. doi:10.1183/13993003.00775-2020 pmid:32444412

    Collins GS, Ogundimu EO, Altman DG. Sample size considerations for the external validation of a multivariable prognostic model: a resampling study. Stat Med 2016;35:214-26. doi:10.1002/sim.6787 pmid:26553135

    Vergouwe Y, Steyerberg EW, Eijkemans MJ, Habbema JD. Substantial effective sample sizes were required for external validation studies of predictive logistic regression models. J Clin Epidemiol 2005;58:475-83. doi:10.1016/j.jclinepi.2004.06.017 pmid:15845334

    Levy TJ, Richardson S, Coppa K, et al. Estimating survival of hospitalized covid-19 patients from admission information. medRxiv [Preprint] 2020. doi:10.1101/2020.04.22.20075416

    Riley RD, Ensor J, Snell KIE, et al. Calculating the sample size required for developing a clinical prediction model. BMJ 2020;368:m441. doi:10.1136/bmj.m441 pmid:32188600

    Austin PC, Lee DS, Fine JP. Introduction to the analysis of survival data in the presence of competing risks. Circulation 2016;133:601-9. doi:10.1161/CIRCULATIONAHA.115.017719 pmid:26858290

  86. Roberts M, Driggs D, Thorpe M, et al. Machine learning for covid-19 detection and prognostication using chest radiographs and CT scans: a systematic methodological review. arXiv 2020:200806388.
    Van Calster B, McLernon DJ, van Smeden M, Wynants L, Steyerberg EW, Topic Group ‘Evaluating diagnostic tests and prediction models’ of the STRATOS initiative. Calibration: the Achilles heel of predictive analytics. BMC Med 2019;17:230. doi:10.1186/s12916-019-1466-7 pmid:31842878

    Riley RD, Ensor J, Snell KI, et al. External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges [correction: BMJ 2019;365:l4379]. BMJ 2016;353:i3140. doi:10.1136/bmj.i3140 pmid:27334381

    Debray TP, Riley RD, Rovers MM, Reitsma JB, Moons KG, Cochrane IPD Meta-analysis Methods group. Individual participant data (IPD) meta-analyses of diagnostic and prognostic modeling studies: guidance on their use. PLoS Med 2015;12:e1001886. doi:10.1371/journal.pmed.1001886 pmid:26461078

    Steyerberg EW, Harrell FE Jr. Prediction models need appropriate internal, internal-external, and external validation. J Clin Epidemiol 2016;69:245-7. doi:10.1016/j.jclinepi.2015.04.005 pmid:25981519

    Wynants L, Kent DM, Timmerman D, Lundquist CM, Van Calster B. Untapped potential of multicenter studies: a review of cardiovascular risk prediction models revealed inappropriate analyses and wide variation in reporting. Diagn Progn Res 2019;3:6. doi:10.1186/s41512-019-0046-9 pmid:31093576

    Wynants L, Riley RD, Timmerman D, Van Calster B. Random-effects meta-analysis of the clinical utility of tests and prediction models. Stat Med 2018;37:2034-52. doi:10.1002/sim.7653 pmid:29575170

  87. Infervision. Infervision launches #AI-based #Covid-19 solution in Europe 2020.

  88. Surgisphere Corporation. COVID-19 response center 2020.
    Van Calster B, Wynants L, Timmerman D, Steyerberg EW, Collins GS. Predictive analytics in health care: how can we know it works? J Am Med Inform Assoc 2019;26:1651-4. doi:10.1093/jamia/ocz130 pmid:31373357

    Enfield K, Miller R, Rice T, et al. Limited utility of SOFA and APACHE II prediction models for ICU triage in pandemic influenza. Chest 2011;140:913A. doi:10.1378/chest.1118087

    Van Calster B, Vickers AJ. Calibration of risk prediction models: impact on decision-analytic performance. Med Decis Making 2015;35:162-9. doi:10.1177/0272989X14547233 pmid:25155798

    Clift AK, Coupland CAC, Keogh RH, et al. Living risk prediction algorithm (QCOVID) for risk of hospital admission and mortality from coronavirus 19 in adults: national derivation and validation cohort study. BMJ 2020;371:m3731. doi:10.1136/bmj.m3731 pmid:33082154

    Mahase E. Covid-19: What do we know about “long covid”? BMJ 2020;370:m2815. doi:10.1136/bmj.m2815 pmid:32665317

    Klok FA, Boon GJAM, Barco S, et al. The post-covid-19 functional status scale: a tool to measure functional status over time after covid-19. Eur Respir J 2020;56:2001494. doi:10.1183/13993003.01494-2020 pmid:32398306

    van Smeden M, Moons KG, de Groot JA, et al. Sample size for binary logistic prediction models: beyond events per variable criteria. Stat Methods Med Res 2019;28:2455-74. doi:10.1177/0962280218784726 pmid:29966490
