Nancy Heine
Loma Linda University, Clinical Skills Education Center, Faculty Member
It is increasingly difficult to provide adequate clinical training for new dietetics graduates. Dietetic students obtain clinical experience by visiting patients and reviewing their charts in hospital settings, but they rarely counsel them. The objective was to examine the change in nutrition and dietetic students' perceived readiness to practice after completing three Objective Structured Clinical Examinations (OSCEs). Thirty-seven students (mean age 26.6 ± 5.4 years, 95% female) from the Schools of Public Health and Allied Health Professions were enrolled in a medical nutrition therapy course. Using a pre-post test design, the students completed the first 3 weeks of the laboratory section of the course at the medical center, followed by 3 weeks of OSCEs. OSCE stations included reviewing a chart, counseling a standardized patient, and discussing findings with other healthcare professionals. Students answered the Perceived Readiness for Dietetic Practice questionnaire before and after the OSCEs. The OSCEs significantly improved stu...
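The pre-post design described above can be sketched with a paired comparison of questionnaire scores. All values below are invented for illustration; the questionnaire's actual scale and the study's statistical method are not specified here.

```python
# Illustrative sketch: paired pre- vs post-OSCE comparison of
# perceived-readiness ratings. Scores are hypothetical, not study data.
import math
import statistics as st

pre  = [3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2]  # hypothetical pre-OSCE ratings
post = [3.9, 3.6, 4.1, 3.5, 3.8, 4.0, 3.4, 3.7]  # hypothetical post-OSCE ratings

# Paired t statistic: mean of within-student differences over its
# standard error.
diffs = [b - a for a, b in zip(pre, post)]
mean_d = st.mean(diffs)
se = st.stdev(diffs) / math.sqrt(len(diffs))
t = mean_d / se
print(f"mean gain = {mean_d:.2f}, paired t = {t:.2f}")
```

A significant positive mean difference on such a paired test is what "significantly improved" would correspond to in this design.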
Purpose
To examine validity evidence of local graduation competency examination scores from seven medical schools using shared cases, and to provide rater training protocols and guidelines for scoring patient notes (PNs).
Method
Between May and August 2016, clinical cases were developed, shared, and administered across seven medical schools (990 students participated). Raters were calibrated using training protocols, and guidelines were developed collaboratively across sites to standardize scoring. Data included scores from standardized patient encounters for history taking, physical examination, and PNs. Descriptive statistics were used to examine scores from the different assessment components. Generalizability studies (G-studies) using variance components were conducted to estimate reliability for composite scores.
Results
Validity evidence was collected for response process (rater perception), internal structure (variance components, reliability), relations to other variables (interassessment correlations), and consequences (composite score). Student performance varied by case and task. In the PNs, justification of the differential diagnosis was the most discriminating task. G-studies showed that schools accounted for less than 1% of total variance; however, for the PNs, there were differences in scores for varying cases and tasks across schools, indicating a school effect. Composite score reliability was maximized when the PN was weighted between 30% and 40%. Raters preferred case-specific scoring guidelines with clear point-scoring systems.
Conclusions
This multisite study presents validity evidence for PN scores based on a scoring rubric and case-specific scoring guidelines that offer rigor and feedback for learners. Variability in PN scores across participating sites may signal different approaches to teaching clinical reasoning among medical schools.
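The idea of choosing a PN weight that maximizes composite reliability can be illustrated with a textbook composite-reliability formula (true-score variance over observed variance of the weighted sum). The component reliabilities and inter-component correlation below are invented for illustration; the study's 30-40% finding came from G-study variance components, not from these numbers.

```python
# Hypothetical sketch: how the reliability of a weighted composite of
# history-taking, physical-exam, and patient-note (PN) scores varies
# with the PN weight. All inputs are assumed values, not study data.

comps = ["history", "exam", "PN"]
rel = {"history": 0.72, "exam": 0.68, "PN": 0.80}  # assumed reliabilities
r = 0.35                                           # assumed correlation (unit SDs)

def composite_reliability(w):
    """Reliability of the weighted composite: true variance / observed variance."""
    obs = true = 0.0
    for i in comps:
        for j in comps:
            cov = 1.0 if i == j else r  # observed (co)variance
            obs += w[i] * w[j] * cov
            # True-score variance on the diagonal is the observed
            # variance scaled by that component's reliability.
            true += w[i] * w[j] * (cov * rel[i] if i == j else cov)
    return true / obs

# Grid-search the PN weight; split the remaining weight evenly.
curve = {}
for step in range(21):
    p = step / 20
    w = {"PN": p, "history": (1 - p) / 2, "exam": (1 - p) / 2}
    curve[p] = composite_reliability(w)

best_p = max(curve, key=curve.get)
print(f"composite reliability peaks at PN weight {best_p:.2f} ({curve[best_p]:.3f})")
```

With these invented inputs the optimum lands at an interior weight rather than at either extreme, which is the qualitative pattern the abstract reports.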
Background: The accuracy of standardized patients (SPs) as checklist recorders is an ongoing concern in medical education. Consistent feedback from an expert observer during a clinical examination might enhance SPs' accuracy in completing checklists. Purpose: To determine the frequency of feedback necessary to maximize SP checklist accuracy. Method: SPs completed student checklists after each encounter and received varying levels of feedback from their trainers. To determine checklist accuracy, multiple reviewers developed an answer key for each student encounter studied. Two hundred ninety-eight encounters were examined for agreement across 6,566 checklist items. Results: Random feedback resulted in significantly higher SP accuracy than no feedback; there was no significant difference between random and constant feedback. Conclusions: This study suggests that random feedback given to SPs is sufficient to enhance SP checklist accuracy and should be part of implementation protocols in all required clinical performance examinations.
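The accuracy measure described above, agreement between an SP's checklist and an expert answer key, can be sketched as a per-item match rate. The checklist items and responses below are invented for illustration.

```python
# Minimal sketch: compare an SP-recorded checklist against an expert
# answer key and report the proportion of items that agree.
# Item names and values are hypothetical, not study data.

sp_checklist = {"asked_onset": True, "asked_meds": False,
                "examined_abdomen": True, "washed_hands": True}
answer_key   = {"asked_onset": True, "asked_meds": True,
                "examined_abdomen": True, "washed_hands": True}

matches = sum(sp_checklist[item] == answer_key[item] for item in answer_key)
accuracy = matches / len(answer_key)
print(f"SP checklist accuracy: {accuracy:.0%}")  # 3 of 4 items agree -> 75%
```

Aggregating this item-level agreement over all encounters (6,566 items across 298 encounters in the study) yields the accuracy levels being compared across feedback conditions.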