Thomas Archibald
I am an associate professor and extension specialist in the Agricultural, Leadership, and Community Education department at Virginia Tech. My work is primarily focused on program evaluation, with a specific focus on evaluation capacity building (ECB) within the Cooperative Extension system. I am also the Director of the Feed the Future Senegal Jeunesse en Agriculture (Youth in Agriculture) project, a 5-year, $4 million project supported by USAID/Senegal. I am a recipient of the 2017 Marcia Guttentag Promising New Evaluator Award of the American Evaluation Association. I serve on the Board of Directors of the Eastern Evaluation Research Society and on the editorial boards of the American Journal of Evaluation and New Directions for Evaluation.
I received my PhD in Adult and Extension Education from Cornell University. Working with Arthur (Butch) Wilson, William Trochim, and Stacey Langwick, my dissertation research was a qualitative sociological study of three cases involving efforts to connect research and practice in non-formal educational contexts. Specifically, I focused on the epistemological politics inherent in the evidence-based program and evidence-based practice movements in non-formal education.
During my graduate studies, I was also a Graduate Research Assistant with the Cornell Office for Research on Evaluation under the direction of William Trochim. There, I worked with a dynamic team on a multi-year, multi-site research project in which we developed, implemented, and conducted research on a systems thinking-influenced approach to strengthening the capacity of non-formal educators to meaningfully evaluate their own education and outreach programs.
Supervisors: Arthur (Butch) Wilson, William Trochim, and Stacey Langwick
Papers by Thomas Archibald
Purpose: The purpose of the study presented in this paper is to refine the construct of evaluative thinking by exploring its underlying dimensions and to ascertain the internal consistency of the Evaluative Thinking Inventory (ETI), an instrument developed to measure evaluative thinking.
Setting: The ETI was developed as part of an evaluation capacity building (ECB) initiative focused on non-formal science, technology, engineering, and math (STEM) education in the United States, and was tested as part of a study focused on evaluating gifted education programs, also in the United States.
Intervention: Not applicable.
Research design: Survey research and exploratory factor analysis (EFA).
Data collection & analysis: The ETI was administered to participants in a study measuring the effectiveness of a tool used to conduct internal evaluations of gifted education programs. SPSS was used to conduct an EFA on 96 completed ETIs. Cronbach’s alpha was used to estimate the internal consistency of the instrument.
Findings: The analysis of the ETI revealed a two-factor model of evaluative thinking (i.e., "believe in and practice evaluation" and "pose thoughtful questions and seek alternatives"). This study also provided internal consistency evidence for the ETI, with alpha reliabilities of 0.80 and 0.82 for the two factors. The ETI has potentially wide applicability in research and practice in ECB and in the field of evaluation more generally.
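The internal-consistency statistic reported in the abstract can be computed directly from an item-response matrix. The following is a minimal Python sketch of Cronbach's alpha using NumPy; the rating data are made up for illustration and are not the actual ETI responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 8 respondents rating 4 items on a 5-point scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
])
print(round(cronbach_alpha(scores), 2))  # → 0.94
```

Values near or above 0.80, like those reported for the two ETI factors, are conventionally read as acceptable internal consistency for a research instrument.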
Book Reviews by Thomas Archibald