
Table 2 Additional aspects of utility scoring criteria

From: Instruments to measure patient experience of healthcare quality in hospitals: a systematic review

 

Rating levels: Excellent (****), Good (***), Fair (**), Poor (*)

Questions for cost efficiency

 1. How many observations (patients, raters, times) are needed to reach the required level of reliability for the purpose of the instrument?

   Excellent (****): Only a small sample needed (<30)
   Good (***): A moderate sample size (30–49)
   Fair (**): Not explicitly stated but can be assumed, or 50–99 assessments needed
   Poor (*): No details given, or ≥100 assessments needed

 2. How long does an assessment take to complete?

   Excellent (****): ≤15 min
   Good (***): ≤30 min
   Fair (**): 30–60 min
   Poor (*): >60 min

 3. What are the administrative costs of completing the assessment?

   Excellent (****): Easily embedded within existing resource; little additional support required
   Good (***): Some administrative resource but no specialist resource required
   Fair (**): A large amount of resource required to assess and administer
   Poor (*): Significant specialist expertise and administrative time required to assess and administer

 4. What is the cost to complete a reliable sample?

   Excellent (****): Minimal
   Good (***): Moderate
   Fair (**): Considerable
   Poor (*): Extensive

Questions for acceptability

 1. Is there evidence of subjects' understanding of the instrument/assessment?

   Excellent (****): Investigation of subjects' understanding (i.e. cognitive testing of instruments)
   Good (***): Estimated evidence of subjects' understanding (i.e. high number of questions missed)
   Fair (**): Subject understanding not explicitly stated but some can be assumed (i.e. student guide to OSCE)
   Poor (*): No evidence of subject understanding

 2. How many assessments are not completed?

   Excellent (****): Low numbers of missing items (<10 %) and an adequate response rate (>40 %)
   Good (***): High numbers of missing items (>10 %) but an adequate response rate (>40 %)
   Fair (**): Low numbers of missing items (<10 %) but an inadequate response rate (<40 %)
   Poor (*): High numbers of missing items (>10 %) and a poor response rate (<40 %)

 3. Has the instrument/assessment been tested in an appropriate context?

   Excellent (****): Evidence of successful administration/use within an appropriate setting
   Good (***): Tested in vivo, and the changes recommended would be achievable
   Fair (**): Tested in vivo and the changes recommended would be difficult, or only partially tested in vivo
   Poor (*): Testing has only been conducted in vitro/simulation

Questions for educational impact

 1. Is there evidence of the instrument's intended purpose being achieved (i.e. if the aim is to enable hospital ranking for patient selection, is there evidence that the results are actually influencing patient choice)?

   Excellent (****): Clear evidence of the intended purpose being fulfilled
   Good (***): Explanatory or theoretical link between intended and actual use, but no clear evidence
   Fair (**): Evidence of theoretical work, but the relationship between intended and actual purpose is poorly described or not described
   Poor (*): No evidence of the intended purpose becoming actual

 2. Is the scoring system easily translated or available in an easy-to-use format?

   Excellent (****): Explicitly stated and easy to calculate
   Good (***): Explicitly stated but not easy to calculate
   Fair (**): Scoring can only be calculated by someone with statistical knowledge
   Poor (*): Scoring not explained well enough to calculate

 3. Can the feedback from the results be readily used for action where necessary?

   Excellent (****): Feedback is readily available in a format that enables necessary action
   Good (***): Feedback is readily available but not drilled down enough to enable targeted action
   Fair (**): Minimal feedback available, or delay results in limited impact
   Poor (*): No explanation to determine adequacy of feedback; no direct feedback could be readily used without additional expertise