Cfa (Initial Ppt)
Confirmatory Factor Analysis (CFA)
CFA is a statistical technique used primarily in the social sciences to test whether
a set of observed variables aligns with a hypothesized factor structure.
This method is distinct from Exploratory Factor Analysis (EFA), which is
used to discover the underlying relationships without a predefined
model. CFA is employed to validate theories regarding the relationships
between observed variables and their latent constructs.
Objectives and Process of CFA
The main goal of CFA is to confirm if the data fits a proposed measurement
model based on prior theoretical frameworks or empirical research. The process
typically involves several critical steps:
1. Defining Constructs: Researchers begin by clearly defining the theoretical
constructs they wish to measure. This often includes pretesting to ensure that the
measurement items accurately reflect the intended constructs.
2. Developing the Measurement Model: Establishing unidimensionality is essential,
where each factor is represented by multiple observed variables. A common practice
is to have at least three items per construct.
3. Specifying the Model: Researchers must specify the number of factors and the
loading patterns of the observed variables on these factors, based on theoretical
expectations.
4. Assessing Model Fit: The fit of the measurement model is evaluated using various
indices, such as Chi-square, Root Mean Square Error of Approximation (RMSEA), the
Goodness of Fit Index (GFI), and the Comparative Fit Index (CFI). Adequate factor
loadings (typically above 0.7) are also assessed to determine the validity of the
model.
5. Considerations: Successful CFA requires adherence to certain assumptions,
including multivariate normality, an adequate sample size (usually n > 200), and
proper model specification based on theoretical or empirical justification. Random
sampling is also crucial to ensure that findings can be generalized.
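The fit-assessment step above can be sketched as a quick rule-of-thumb check. The thresholds used here (RMSEA below 0.08, CFI and GFI above 0.90) are common conventions rather than universal cutoffs, and the function name is illustrative, not a standard API:

```python
def assess_model_fit(rmsea, cfi, gfi, loadings):
    """Flag each fit criterion as met/unmet using conventional thresholds."""
    return {
        "RMSEA": rmsea < 0.08,   # lower is better; 0.08 is a common convention
        "CFI": cfi > 0.90,       # higher is better
        "GFI": gfi > 0.90,       # higher is better
        "loadings": all(l > 0.7 for l in loadings),  # per the text: above 0.7
    }

checks = assess_model_fit(rmsea=0.05, cfi=0.95, gfi=0.92,
                          loadings=[0.81, 0.76, 0.88])
print(checks)  # every criterion True -> acceptable fit by these conventions
```

In practice these indices come from the CFA software's output; the point of the sketch is that fit is judged against several indices jointly, not any single one.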
How does CFA differ from Exploratory Factor Analysis (EFA)?
Confirmatory Factor Analysis (CFA) and Exploratory Factor Analysis (EFA) are both
techniques used in factor analysis, but they serve different purposes and are applied in
distinct contexts.
• Purpose:
EFA: Used to explore the underlying structure of a set of variables without any
preconceived notions about how many factors exist or how they relate to one
another. It is primarily a data-driven approach aimed at identifying potential
factor structures.
CFA: Used to test a specific hypothesized factor structure based on prior research
or theory. It assesses whether the observed data fits the predefined model,
confirming whether the data aligns with the expected relationships among variables.
• Assumptions:
EFA: Assumes that the researcher has little or no prior knowledge about the number
of factors or the relationships between variables. It allows variables to load on
multiple factors.
CFA: Requires a clear theoretical framework or prior empirical evidence to specify
the number of factors and the expected loadings of variables on those factors.
Each variable is assigned to a specific factor.
The two techniques also differ in model specification and in the output and
interpretation of results.
In summary, EFA is best utilized in the early stages of research when exploring
new constructs or developing measurement tools, while CFA is suited for testing
specific hypotheses about the relationships between observed variables and their
underlying factors. Using EFA first can provide valuable insights that inform
subsequent CFA, ensuring a robust analysis of the data.
When to Choose CFA
Hypothesis Testing: CFA is appropriate when you have a clear hypothesis regarding
the relationships between observed variables and their underlying latent
constructs. This contrasts with Exploratory Factor Analysis (EFA), which is used
to discover the factor structure without predefined hypotheses.
Theoretical Framework: If your research is grounded in existing theory or prior
empirical research that suggests a specific factor structure, CFA allows you to
confirm whether that structure holds true in your data.
3. Imputation Methods
If your data is missing at random (MAR), consider the following imputation techniques:
• Full Information Maximum Likelihood (FIML): This method uses all available
data to estimate parameters and is valid under the MAR assumption. Many
software packages, including JASP and lavaan, implement FIML
automatically for CFA.
• Mean/Median Imputation: Substitute missing values with the mean or
median of the observed values. While simple, this method can reduce
variability and may not be suitable if the percentage of missing data is
high.
• Multiple Imputation: This technique creates several different plausible
datasets by imputing missing values multiple times, then combines the
results. This approach accounts for the uncertainty of the missing data.
• Last Observation Carried Forward (LOCF): In longitudinal data, replace
missing values with the last observed value. This method is straightforward
but can introduce bias if the data has a trend.
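Mean imputation and LOCF from the list above can be sketched in a few lines of plain Python (multiple imputation extends this idea by filling gaps repeatedly with random draws and pooling the results; FIML is handled internally by the CFA software):

```python
from statistics import mean

def mean_impute(values):
    """Replace missing entries (None) with the mean of the observed values.
    Simple, but it shrinks variability, as noted in the text."""
    observed = [v for v in values if v is not None]
    m = mean(observed)
    return [m if v is None else v for v in values]

def locf(values):
    """Last observation carried forward; a leading gap stays missing."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

scores = [4.0, None, 5.0, None, 3.0]
print(mean_impute(scores))  # [4.0, 4.0, 5.0, 4.0, 3.0]
print(locf(scores))         # [4.0, 4.0, 5.0, 5.0, 3.0]
```

Note how the two methods fill the same gaps differently: mean imputation pulls every gap toward the center, while LOCF propagates whatever value came last, which is where its bias under a trend comes from.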
4. Data Removal
If the percentage of missing data is small, you might consider removing cases
with missing values. However, be cautious, as this can lead to loss of valuable
information and reduce the sample size significantly.
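Case removal (listwise deletion) amounts to keeping only complete rows, which is why the sample size can drop sharply; a minimal sketch:

```python
def listwise_delete(rows):
    """Keep only complete cases: rows with no missing (None) values."""
    return [r for r in rows if all(v is not None for v in r)]

data = [(1.0, 2.0), (None, 3.0), (4.0, 5.0)]
print(listwise_delete(data))  # [(1.0, 2.0), (4.0, 5.0)] -- one case lost
```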
5. Sensitivity Analysis
Conduct sensitivity analyses by comparing results from the complete cases
(those without any missing data) against those from the full dataset using
imputation methods. This can help assess the robustness of your findings.
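A sensitivity analysis in this sense just recomputes the same estimate under both strategies and compares; here, sketched for a single variable's mean under complete-case analysis versus LOCF imputation (the data are made up for illustration):

```python
from statistics import mean

def complete_cases(values):
    """Drop missing entries (None), as in a complete-case analysis."""
    return [v for v in values if v is not None]

def locf_impute(values):
    """Fill gaps by carrying the last observed value forward."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

scores = [4.0, None, 5.0, None, 3.0, 5.0]
cc_mean = mean(complete_cases(scores))   # estimate from complete cases only
im_mean = mean(locf_impute(scores))      # estimate from the imputed dataset
print(round(cc_mean, 2), round(im_mean, 2))  # 4.25 4.33
```

If the two estimates diverge noticeably, the findings are sensitive to how missingness was handled and that should be reported.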
Applications and Software
CFA is widely used in the development and validation of measurement
instruments, such as surveys and psychological scales. It helps
researchers ascertain whether their instruments effectively measure the
constructs they are intended to assess. Common statistical software for
conducting CFA includes AMOS, LISREL, SAS, and R's lavaan package, which offer various
tools for model specification and analysis. In summary, CFA is an
essential tool for researchers looking to validate their measurement
models and ensure the reliability and validity of their findings, making it
a critical component in the toolkit of quantitative research
methodologies.