Table 5 Quality assessment of the selected studies by the Prediction model Risk Of Bias ASsessment Tool (PROBAST) (detailed)

From: Clinical prediction models for the early diagnosis of obstructive sleep apnea in stroke patients: a systematic review

| Items | Sico/2017 | Brown/2020 | Boulos/2019 | Katzan/2016 | Bernardini/2021 | Zhang/2019 | Boulos/2016 | Petrie/2021 | Šiarnik/2020 | Camilo/2014 | Srijithesh/2011 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Participants | | | | | | | | | | | |
| Were appropriate data sources used, e.g., cohort, randomized controlled trial, or nested case–control study data? | + | + | + | + | + | + | + | + | + | + | + |
| Were all inclusions and exclusions of participants appropriate? | + | + | + | + | + | + | + | + | + | + | + |
| Predictors | | | | | | | | | | | |
| Were predictors defined and assessed in a similar way for all participants? | + | + | ? | + | + | ? | + | + | + | + | + |
| Were predictor assessments made without knowledge of outcome data? | ? | + | ? | + | + | ? | + | + | + | + | + |
| Are all predictors available at the time the model is intended to be used? | + | + | + | + | + | + | + | + | + | + | + |
| Outcome | | | | | | | | | | | |
| Was the outcome determined appropriately? | + | + | + | + | + | + | + | + | + | + | + |
| Was a prespecified or standard outcome definition used? | + | + | + | + | + | + | + | + | + | + | + |
| Were predictors excluded from the outcome definition? | + | + | + | + | + | + | + | + | + | + | + |
| Was the outcome defined and determined in a similar way for all participants? | + | + | + | + | + | + | + | + | + | + | + |
| Was the outcome determined without knowledge of predictor information? | + | + | + | + | + | + | + | + | + | + | + |
| Was the time interval between predictor assessment and outcome determination appropriate? | + | + | + | + | + | + | + | + | + | + | + |
| Analysis | | | | | | | | | | | |
| Were there a reasonable number of participants with the outcome? | − | − | + | + | − | − | − | + | + | − | − |
| Were continuous and categorical predictors handled appropriately? | + | + | + | + | + | + | + | + | + | + | + |
| Were all enrolled participants included in the analysis? | + | + | + | + | + | + | + | + | + | + | + |
| Were participants with missing data handled appropriately? | + | ? | ? | + | ? | + | ? | ? | ? | ? | ? |
| Was selection of predictors based on univariable analysis avoided? (Model development studies only) | + | + | + | + | + | + | + | + | + | + | + |
| Were complexities in the data (e.g., censoring, competing risks, sampling of control participants) accounted for appropriately? | + | + | + | + | + | + | + | + | + | + | + |
| Were relevant model performance measures evaluated appropriately? | − | − | − | − | − | − | − | − | − | − | − |
| Were model overfitting and optimism in model performance accounted for? (Model development studies only) | − | − | − | ? | − | − | − | − | − | − | − |
| Do predictors and their assigned weights in the final model correspond to the results from the reported multivariable analysis? (Model development studies only) | − | − | − | + | − | − | − | − | − | − | − |

+ Low risk of bias; − High risk of bias; ? Unclear