Gerald Elsworth (BSc Melbourne, PhD James Cook) is a program evaluator and research methodologist who specialises in the application of survey methods and self-report measurement to evaluation research in public health, community safety and education. He is also interested in the development and application of program logic and program theory models in program and policy evaluation.
As part of an evaluative study of the impact of the year in which initial classroom teaching experience occurs, a sample of 768 students in concurrent and consecutive teacher education courses responded to test-retest administrations of a set of professional self-perception scales and a measure of commitment to teaching. Reliable individual changes over the eight-month period of the study were observed on commitment and on the seven dimensions of self-perception measured, and the relationships between these two facets were stronger at posttest than at pretest. Significant concomitant changes in self-perception and commitment also occurred. The results suggest that professional self-perceptions become more central to the commitment of student teachers during training and that self-perception and commitment to teaching are dynamically interrelated. It is concluded that course and practicum experiences during the year of professional experience have a major impact on the two facets of professional socialization studied.
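The "reliable individual changes" referred to in this abstract are conventionally assessed with a reliable change index. The sketch below is a minimal illustration of the Jacobson-Truax formulation only, not the authors' actual procedure; the reliability value and the example scores are assumed for illustration.

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax reliable change index (RCI).

    pre, post   : an individual's pretest and posttest scores
    sd_pre      : standard deviation of pretest scores in the sample
    reliability : test-retest (or internal-consistency) reliability of the scale
    |RCI| > 1.96 is conventionally read as reliable individual change.
    """
    sem = sd_pre * math.sqrt(1.0 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2.0) * sem                 # standard error of the difference score
    return (post - pre) / s_diff

# Hypothetical values for illustration only.
rci = reliable_change_index(pre=3.2, post=3.9, sd_pre=0.6, reliability=0.85)
print(f"RCI = {rci:.2f}, reliable change: {abs(rci) > 1.96}")
```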
BACKGROUND: As health resources and services are increasingly delivered through digital platforms, electronic health (eHealth) literacy has become a set of essential capabilities for improving consumer health in the digital era. To understand eHealth literacy needs, a meaningful measure of the concept is required. Strong initial evidence for the reliability and construct validity of inferences drawn from the eHealth Literacy Questionnaire (eHLQ) was obtained during its development in Denmark, but validity testing for varying purposes is an ongoing and cumulative process.

OBJECTIVE: This study aimed to examine validity evidence based on relations to other variables, using data collected with a known-groups approach, to further explore whether the eHLQ is a robust tool for understanding eHealth literacy needs in different contexts.

METHODS: A priori hypotheses were set for expected score differences by age, sex, education, and information and communication technology (ICT) use for each of the 7 eHealth literacy scales of the eHLQ. A Bayesian mediated multiple indicators, multiple causes (MIMIC) model approach was used to simultaneously identify group differences and test measurement invariance through differential item functioning (DIF) across groups, with ICT use as a mediator. Data were collected at 3 diverse health sites in Australia.

RESULTS: Being older was significantly related to lower scores in 4 scales, with ‘3. Ability to actively engage with digital services’ (total effect=-.37, P=.00) being the strongest, followed by ‘1. Using technology to process health information’ (total effect=-.32, P=.00), ‘5. Motivated to engage with digital services’ (total effect=-.21, P=.01), and ‘7. Digital services that suit individual needs’ (total effect=-.21, P=.02). However, these effects were only partially mediated by ICT use. Higher education was associated with higher scores on the latent variables representing ‘1. Using technology to process health information’ (total effect=.22, P=.01) and ‘3. Ability to actively engage with digital services’ (total effect=.25, P=.00); most of these effects were mediated by ICT use. Higher ICT use was related to higher scores on the latent variables representing most of the eHLQ scales, except ‘2. Understanding health concepts and language’ and ‘4. Feel safe and in control’. No or only ignorable DIF was found across the 4 groups.

CONCLUSIONS: Using a Bayesian mediated MIMIC model, this study provides supportive validity evidence for the eHLQ based on relations to other variables and also establishes evidence on internal structure related to measurement invariance across groups for the 7 scales in the Australian community health context. It also demonstrates that the eHLQ can be used to gain valuable insights into people’s eHealth literacy needs, helping to optimize access to and use of digital health and to promote health equity.

CLINICALTRIAL: Not applicable.
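The mediated MIMIC structure described in the methods can be sketched in simplified form with the semopy package in Python. This is a frequentist approximation, not the Bayesian estimation used in the study, and the names in it (item1-item5, ict_use, ehlq3, ehlq_survey.csv) are placeholders rather than the actual eHLQ items or data.

```python
import pandas as pd
from semopy import Model

# Simplified, frequentist sketch of a mediated MIMIC model:
# covariates -> mediator (ict_use) -> latent eHLQ scale, plus direct
# covariate -> latent paths. Item and variable names are placeholders.
desc = """
# measurement model: one eHLQ scale measured by its items
ehlq3 =~ item1 + item2 + item3 + item4 + item5

# structural model: mediator and latent factor regressed on covariates
ict_use ~ age + sex + education
ehlq3 ~ ict_use + age + sex + education

# a DIF check would add direct covariate -> item paths, e.g.:
# item1 ~ age
"""

df = pd.read_csv("ehlq_survey.csv")  # hypothetical file with item and covariate columns
model = Model(desc)
model.fit(df)
print(model.inspect())  # parameter estimates; the indirect effect of age is the
                        # product of the age -> ict_use and ict_use -> ehlq3 paths
```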
BACKGROUND: Digital technologies have changed how we manage our health, and eHealth literacy is needed to engage with health technologies. Any eHealth strategy will be ineffective if users’ eHealth literacy needs are not addressed, and a robust measure of eHealth literacy is essential for understanding these needs. On the basis of the eHealth Literacy Framework, which identified 7 dimensions of eHealth literacy, the eHealth Literacy Questionnaire (eHLQ) was developed. The tool has demonstrated robust psychometric properties in the Danish setting, but validity testing should be an ongoing and cumulative process.

OBJECTIVE: This study aims to evaluate validity evidence based on test content, response process, and internal structure of the eHLQ in the Australian community health setting.

METHODS: A mixed methods approach was used: cognitive interviewing was conducted to examine evidence on test content and response process, and a cross-sectional survey was undertaken for evidence on internal structure. Data were collected at 3 diverse community health sites in Victoria, Australia. Psychometric testing included both classical test theory and item response theory approaches. Methods included Bayesian structural equation modeling for confirmatory factor analysis, internal consistency and test-retest coefficients for reliability, and the Bayesian multiple-indicators, multiple-causes model for testing differential item functioning.

RESULTS: Cognitive interviewing identified only 1 confusing term, which was clarified; all items were easy to read and understood as intended. A total of 525 questionnaires were included for psychometric analysis. All scales were homogeneous, with composite scale reliability ranging from 0.73 to 0.90. The intraclass correlation coefficient for test-retest reliability for the 7 scales ranged from 0.72 to 0.95. A 7-factor Bayesian structural equation model using small-variance priors for cross-loadings and residual covariances was fitted to the data, and the model of interest produced a satisfactory fit (posterior predictive P=.49, 95% CI for the difference between observed and replicated chi-square values −101.40 to 108.83, prior-posterior predictive P=.92). All items loaded on the relevant factor, with loadings ranging from 0.36 to 0.94, and no significant cross-loading was found. There was no evidence of differential item functioning for administration format, site area, or health setting. However, discriminant validity was not well established for scales 1, 3, 5, 6, and 7. Item response theory analysis found that all items except 1 provided precise information at different trait levels; all items demonstrated different sensitivity to different trait levels and represented a range of difficulty levels.

CONCLUSIONS: The evidence suggests that the eHLQ is a tool with robust psychometric properties, although further investigation of discriminant validity is recommended. It is ready to be used to identify eHealth literacy strengths and challenges and to assist the development of digital health interventions, ensuring that people with limited digital access and skills are not left behind.
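The two reliability statistics reported above, composite scale reliability and the test-retest intraclass correlation, can be illustrated with a brief sketch. The loadings and scores below are invented for illustration, and pingouin's intraclass_corr is one common implementation, not necessarily the routine used in the study.

```python
import numpy as np
import pandas as pd
import pingouin as pg

def composite_reliability(loadings):
    """Composite (congeneric) reliability from standardized factor loadings:
    (sum lambda)^2 / [(sum lambda)^2 + sum(1 - lambda^2)]."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

# Hypothetical standardized loadings for one multi-item eHLQ scale.
print(round(composite_reliability([0.72, 0.65, 0.80, 0.58, 0.70]), 2))

# Test-retest ICC: each person measured twice (time 1 and time 2) on a scale score.
# Hypothetical long-format data; the output table lists the ICC variants.
scores = pd.DataFrame({
    "person": list(range(10)) * 2,
    "time":   ["t1"] * 10 + ["t2"] * 10,
    "score":  [2.1, 3.4, 2.8, 3.9, 1.7, 3.1, 2.5, 3.6, 2.9, 3.3,
               2.3, 3.5, 2.7, 3.8, 1.9, 3.0, 2.6, 3.7, 3.0, 3.2],
})
icc = pg.intraclass_corr(data=scores, targets="person", raters="time", ratings="score")
print(icc[["Type", "ICC"]])
```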