
Meta-analysis Formula

Effect Size Measures

The formula for the Standardized Mean Difference (SMD) in meta-analysis is used to quantify the effect size when the outcome measures in different studies are on different scales. Here's the general formula for SMD:

SMD = (M1 - M2) / SD

Where:
- SMD: Standardized Mean Difference.
- M1: Mean of the first group (e.g., treatment group or group using a specific green building practice).
- M2: Mean of the second group (e.g., control group or group not using the specific green building practice).
- SD: Pooled standard deviation.

The pooled standard deviation (SD) is a measure of the variability within the two groups and, when the groups are of equal size, can be calculated as follows:

SD = √( (SD1^2 + SD2^2) / 2 )

Where:
- SD1: Standard deviation of the first group.
- SD2: Standard deviation of the second group.

With unequal group sizes, the general form is SD = √( ((n1 - 1)·SD1^2 + (n2 - 1)·SD2^2) / (n1 + n2 - 2) ), where n1 and n2 are the two group sample sizes.

The SMD expresses the effect size in standard deviation units, allowing you to compare and combine results from studies that use different outcome measures or units of measurement. Positive SMD values indicate an effect in favor of the first group, while negative values indicate an effect in favor of the second group. The magnitude of the SMD reflects the size of the effect, with larger absolute values indicating a larger effect.
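
As a minimal illustration, here is a Python sketch that computes the SMD from hypothetical group summaries; the means, standard deviations, and sample sizes are made-up values, and the pooled SD uses the general unequal-n form given above:

```python
import math

def pooled_sd(sd1, sd2, n1, n2):
    """Pooled standard deviation for two groups of (possibly) unequal size."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference: (M1 - M2) / pooled SD."""
    return (m1 - m2) / pooled_sd(sd1, sd2, n1, n2)

# Hypothetical example: treatment group vs. control group
print(smd(m1=42.0, m2=37.5, sd1=8.0, sd2=9.0, n1=30, n2=32))  # about 0.53
```
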
Weighting Methods

The Inverse Variance Weighting method is a common approach used in meta-analysis to assign weights to each study based on the inverse of the variance of the effect size estimate. Here's the formula for the Inverse Variance Weighting method:

Weight = 1 / Variance

Where:
- Weight: The weight assigned to a particular study in the meta-analysis.
- Variance: The variance of the effect size estimate in that study.
In the context of a systematic literature review (SLR) or meta-analysis, you typically calculate the effect size and its
corresponding variance for each study. The effect size can be, for example, the Standardized Mean Difference (SMD) or any
other effect size metric relevant to your research question.

To apply the Inverse Variance Weighting method:

1. Calculate the effect size for each study.
2. Calculate the variance of the effect size estimate for each study.
3. Take the inverse of the variance to obtain the weight for each study.
4. Use these weights when aggregating the effect sizes across all the studies in your meta-analysis. The studies with smaller
variances (greater precision) receive higher weights, indicating that their results have more influence on the overall meta-
analysis result.

This method acknowledges that studies with more precise estimates (i.e., smaller variances) should carry more weight in the
meta-analysis because they are considered more reliable and informative. It's a standard approach to combine results from
different studies while giving more consideration to those that are more precise in their effect size estimates.
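
A minimal Python sketch of these four steps, using hypothetical effect sizes and variances:

```python
import numpy as np

# Hypothetical effect sizes (e.g., SMDs) and their variances from five studies
effect_sizes = np.array([0.45, 0.30, 0.62, 0.15, 0.50])
variances    = np.array([0.04, 0.09, 0.02, 0.12, 0.05])

weights   = 1.0 / variances                    # inverse-variance weights
pooled    = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))     # standard error of the pooled estimate

print(f"Pooled effect size: {pooled:.3f} (SE = {pooled_se:.3f})")
```
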
Meta-Analysis Models

In meta-analysis, you can choose between two common models to combine effect sizes from multiple studies: the Fixed-Effects Model and the Random-Effects Model. Here are the formulas and applications for both models:

Fixed-Effects Model:
The Fixed-Effects Model is used when you assume that all studies in your meta-analysis share a common true effect size. In
other words, you believe that any observed variation between studies is due to sampling error and not due to true variation
in effect sizes.
Formula for the Fixed-Effects Model:

Weighted Mean Effect Size (ES_FE) = Σ (w_i * ES_i) / Σ w_i

Where:
- ES_FE: Weighted mean effect size in the Fixed-Effects Model.
- w_i: Weight for each study (typically based on Inverse Variance Weighting, as previously explained).
- ES_i: Effect size estimate for each study.

Application of the Fixed-Effects Model:


- Use this model when you have a strong theoretical reason to believe that the effect size is consistent across all the studies.
- It provides a more precise estimate of the true effect size when there is little heterogeneity (variability) among the included
studies.
- This model assumes that the variation between studies is due to random error only.

Random-Effects Model:

The Random-Effects Model is used when you assume that the true effect size may vary between studies. It acknowledges the
presence of both random error and true variation in effect sizes between studies.

Formula for the Random-Effects Model:

Weighted Mean Effect Size (ES_RE) = Σ (w_i * ES_i) / Σ w_i

Where:
- ES_RE: Weighted mean effect size in the Random-Effects Model.
- w_i: Weight for each study. Unlike the Fixed-Effects Model, the weight incorporates the estimated between-study variance τ²: w_i = 1 / (Variance_i + τ²), with τ² commonly estimated by the DerSimonian-Laird method.
- ES_i: Effect size estimate for each study.

Application of the Random-Effects Model:


- Use this model when there is a possibility that the true effect size varies between the studies due to differences in
methodology, population, or other factors.
- It provides a more conservative estimate of the true effect size by taking into account both within-study and between-study
variability.
- This model is appropriate when heterogeneity is present among the included studies.

In systematic literature reviews (SLRs) and meta-analyses, you may choose between these models based on your
understanding of the data and research question. If you believe that the studies are highly similar and the variation is mostly
due to random error, the Fixed-Effects Model might be appropriate. If you suspect significant variation in effect sizes between
studies due to real differences, the Random-Effects Model is more suitable. Both models have their strengths and limitations,
and the choice should align with the research objectives and data characteristics.
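
A minimal Python sketch that pools the same hypothetical data under both models, estimating the between-study variance τ² with the DerSimonian-Laird method (one common choice; other estimators exist):

```python
import numpy as np

def fixed_and_random_effects(es, var):
    """Pool effect sizes under both models; tau^2 via DerSimonian-Laird."""
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / var                                   # fixed-effect (inverse-variance) weights
    es_fe = np.sum(w * es) / np.sum(w)

    # Between-study variance (DerSimonian-Laird estimator)
    q = np.sum(w * (es - es_fe) ** 2)
    df = len(es) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    w_re = 1.0 / (var + tau2)                       # random-effects weights
    es_re = np.sum(w_re * es) / np.sum(w_re)
    return es_fe, es_re, tau2

# Hypothetical data
es_fe, es_re, tau2 = fixed_and_random_effects([0.45, 0.30, 0.62, 0.15, 0.50],
                                              [0.04, 0.09, 0.02, 0.12, 0.05])
print(f"Fixed: {es_fe:.3f}  Random: {es_re:.3f}  tau^2: {tau2:.3f}")
```
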
Heterogeneity Assessment

The Q-statistic and I² statistic are commonly used to assess heterogeneity in a meta-analysis, indicating the extent to which the observed variation in effect sizes among studies is due to true differences rather than random error. Here are the formulas and applications for both measures:

Q-Statistic (Cochran's Q):

The Q-statistic is a test statistic used to assess whether the observed variation in effect sizes among studies is statistically
significant. The formula for the Q-statistic is as follows:

Q = Σ (w_i * (ES_i - ES_W)^2)

Where:
- Q: The Q-statistic.
- w_i: Weight for each study (typically based on Inverse Variance Weighting).
- ES_i: Effect size estimate for each study.
- ES_W: The weighted mean effect size (conventionally computed with the fixed-effect, inverse-variance weights).

Application of the Q-Statistic:


- The Q-statistic is used to test the null hypothesis that all studies in the meta-analysis share a common effect size (no
heterogeneity).
- If the Q-statistic is statistically significant (i.e., p < 0.05), it indicates the presence of heterogeneity, suggesting that the
variation in effect sizes is not solely due to random error.
- A significant Q-statistic suggests that further investigation into the sources of heterogeneity is needed.

I² (I-Squared) Statistic:

The I² statistic quantifies the proportion of total variation in effect sizes that is due to true between-study heterogeneity
rather than random error. It is expressed as a percentage and can range from 0% to 100%.

The formula to calculate I² is as follows:

I² = [(Q - df) / Q] * 100%

Where:
- I²: The I² statistic.
- Q: The Q-statistic.
- df: Degrees of freedom (equal to the number of studies minus one). If Q is smaller than df, I² is truncated at 0%.
Application of the I² Statistic:
- I² provides a quantitative measure of the degree of heterogeneity, with values between 0% and 100%.
- I² values can be interpreted as follows:
- 0% to 25%: Low heterogeneity (most of the variation is due to random error).
- 26% to 50%: Moderate heterogeneity (a moderate portion of the variation is due to true differences).
- 51% to 75%: Substantial heterogeneity (a large proportion of the variation is due to true differences).
- 76% to 100%: Considerable heterogeneity (most of the variation is due to true differences).

The I² statistic helps researchers and readers understand the extent of heterogeneity in a meta-analysis and can guide
decisions about the suitability of using a Fixed-Effects or Random-Effects model, as well as the need for subgroup analysis or
sensitivity analysis to explore potential sources of heterogeneity.
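
A minimal Python sketch computing Q, its p-value, and I² for hypothetical data, assuming scipy is available for the chi-square p-value:

```python
import numpy as np
from scipy import stats

def heterogeneity(es, var):
    """Cochran's Q, its p-value, and I^2 for a set of effect sizes and variances."""
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / var
    es_w = np.sum(w * es) / np.sum(w)           # inverse-variance weighted mean
    q = np.sum(w * (es - es_w) ** 2)
    df = len(es) - 1
    p = stats.chi2.sf(q, df)                    # Q ~ chi-square with k-1 df under homogeneity
    i2 = (max(0.0, (q - df) / q) * 100) if q > 0 else 0.0
    return q, p, i2

q, p, i2 = heterogeneity([0.45, 0.30, 0.62, 0.15, 0.50],
                         [0.04, 0.09, 0.02, 0.12, 0.05])
print(f"Q = {q:.2f}, p = {p:.3f}, I^2 = {i2:.1f}%")
```
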
Publication Bias Assessment

Creating a Funnel Plot:

A funnel plot is a graphical tool used in meta-analysis to assess the presence of publication bias and the spread of effect sizes. Here are instructions on how to create a funnel plot and its interpretation:

1. Gather your data: Collect the effect sizes and their corresponding standard errors or variances from each study in your
systematic literature review (SLR) or meta-analysis.
2. Calculate the effect sizes: Depending on the type of effect size used in your analysis (e.g., standardized mean difference,
odds ratio, risk ratio), calculate the effect size for each study.
3. Calculate standard error or variance: For each study, calculate the standard error or variance of the effect size estimate.
This is typically done using the formula for the standard error or variance specific to the chosen effect size measure.
4. Prepare your dataset: Create a dataset with two columns: one for the effect sizes and another for the standard errors or
variances.
5. Create the funnel plot: On a scatterplot, plot the effect sizes on the horizontal axis (x-axis) and the standard errors on the vertical axis (y-axis), with the y-axis usually inverted so that the most precise studies appear at the top. Each point on the plot represents a study.
6. Add reference lines: You can add a vertical line representing the overall effect size (calculated using your chosen meta-analysis model) and diagonal lines representing pseudo 95% confidence limits, which form the funnel shape (see the plotting sketch after this list).
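
A minimal plotting sketch of steps 5 and 6 in Python, assuming matplotlib is available; the effect sizes and standard errors are hypothetical, and the pooled effect is a simple fixed-effect estimate:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical effect sizes and standard errors
es = np.array([0.45, 0.30, 0.62, 0.15, 0.50, 0.05, 0.70])
se = np.array([0.20, 0.30, 0.14, 0.35, 0.22, 0.40, 0.12])

w = 1 / se**2
pooled = np.sum(w * es) / np.sum(w)             # fixed-effect pooled estimate

fig, ax = plt.subplots()
ax.scatter(es, se)
ax.axvline(pooled, linestyle="--")              # vertical line at the pooled effect

# Diagonal pseudo 95% confidence limits forming the funnel
se_grid = np.linspace(0, se.max() * 1.05, 100)
ax.plot(pooled - 1.96 * se_grid, se_grid, linestyle=":")
ax.plot(pooled + 1.96 * se_grid, se_grid, linestyle=":")

ax.invert_yaxis()                               # most precise studies at the top
ax.set_xlabel("Effect size (SMD)")
ax.set_ylabel("Standard error")
ax.set_title("Funnel plot")
plt.show()
```
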

Interpreting a Funnel Plot:


Symmetry: If the plot is roughly symmetric, with studies evenly distributed around the overall effect, there is no strong evidence of publication bias. Smaller studies scatter more widely toward the bottom of the plot because of their larger standard errors, but they do so on both sides of the pooled effect.
Asymmetry: If the plot shows asymmetry, with a gap in one lower corner of the funnel, it may suggest publication bias. This could be because smaller studies with nonsignificant, negative, or null results are less likely to be published.

Small Study Effects: If there is an excess of smaller studies on one side of the plot (typically on the left side), it might indicate
the presence of small study effects, which could be related to publication bias or other factors.

Egger's Test and Begg's Test:


Egger's test and Begg's test are statistical tests used to quantitatively assess the presence of publication bias in a meta-
analysis. Here's the formula for Egger's test and its application:

Egger's Test:
Egger's test is based on the funnel plot and assesses whether there is a linear relationship between the effect size estimates
and their standard errors (precision).

Formula for Egger's Test:

SND = Effect Size / Standard Error

Where:
- SND: The standard normal deviate (standardized effect) for each study, used as the outcome in Egger's regression.

Application of Egger's Test:

1. Calculate the standard normal deviate for each study using the formula.
2. Regress these values against the precision (1 / Standard Error) of each study.
3. Test whether the intercept of this regression differs from zero. A significant intercept (e.g., p < 0.05) suggests funnel plot asymmetry and possible publication bias, and the size of the intercept indicates the degree of asymmetry (the slope estimates the underlying pooled effect).
Begg's Test:

Begg's test is another test used to assess publication bias. It is a rank correlation test that assesses the association between the standardized effect sizes and their variances across studies, using Kendall's tau.

Application of Begg's Test:
1. Rank the standardized effect sizes and their variances, then calculate Kendall's tau from the ranked data.
2. Perform a significance test on Kendall's tau.
3. A significant p-value suggests the presence of publication bias.

These tests provide quantitative assessments of publication bias, helping you determine if the funnel plot's asymmetry is
statistically significant.
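
A minimal Python sketch of both tests on hypothetical data, assuming statsmodels and scipy are available; the Begg computation here is a simplified form that correlates standardized deviations from the pooled effect with the study variances:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import kendalltau

# Hypothetical effect sizes and standard errors
es = np.array([0.45, 0.30, 0.62, 0.15, 0.50, 0.05, 0.70])
se = np.array([0.20, 0.30, 0.14, 0.35, 0.22, 0.40, 0.12])

# Egger's test: regress the standardized effect on precision and test the intercept
snd = es / se                       # standard normal deviate
precision = 1 / se
X = sm.add_constant(precision)      # adds the intercept term
egger = sm.OLS(snd, X).fit()
print(f"Egger intercept = {egger.params[0]:.3f}, p = {egger.pvalues[0]:.3f}")

# Begg's test: Kendall rank correlation between standardized effects and variances
pooled = np.sum(es / se**2) / np.sum(1 / se**2)
std_dev = (es - pooled) / se
tau, p = kendalltau(std_dev, se**2)
print(f"Begg's Kendall tau = {tau:.3f}, p = {p:.3f}")
```
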
Subgroup Analysis

Subgroup analysis is a method used in meta-analysis to investigate whether the effect size varies across different subgroups within your study population. It allows you to explore potential sources of heterogeneity and understand whether certain factors influence the overall effect.

Application of Subgroup Analysis:

1. Define Subgroups: Identify the potential subgroups you want to analyze. These could be based on characteristics of the
study population, the intervention, the outcome, or other relevant factors. For example, in the context of green building
practices, subgroups could be defined by building type (residential vs. commercial), geographical region, building age, or the
type of green technology used.

2. Calculate Effect Sizes: Calculate the effect size for each study in each subgroup. This may involve applying the same effect
size formula to each subgroup separately. For example, if you're looking at energy efficiency in green residential buildings
and green commercial buildings separately, calculate the effect size for each subgroup.

3. Perform Subgroup Analysis: Using statistical software or tools, conduct a subgroup analysis. This typically involves running
a meta-analysis for each subgroup using the effect sizes and variances specific to that subgroup.

4. Compare Subgroups: After conducting the subgroup analyses, compare the effect sizes, confidence intervals, and tests of
heterogeneity (e.g., Q-statistic, I²) between the subgroups. Determine whether there are significant differences in effect sizes
between the subgroups.

5. Interpret the Results: Interpret the results of the subgroup analysis. If you find that effect sizes differ significantly between
subgroups, this suggests that the effect of the intervention or exposure varies based on the subgroup characteristics. This
can provide valuable insights into the sources of heterogeneity in your meta-analysis.
6. Consider Sources of Variation: Examine the reasons for the differences between subgroups. Are there clear patterns or
associations? Consider the implications for the broader research question.

Subgroup analysis is a powerful tool for exploring heterogeneity and understanding how the effect size might vary among
different subpopulations or under different conditions. However, it's essential to use subgroup analysis with caution and to
predefine subgroups in your research protocol to avoid data dredging or selective reporting.
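
A minimal Python sketch of steps 2 to 4, pooling hypothetical studies within each subgroup using inverse-variance weights; the subgroup labels and numbers are illustrative:

```python
import numpy as np

# Hypothetical studies: (subgroup label, effect size, variance)
studies = [
    ("residential", 0.45, 0.04), ("residential", 0.30, 0.09),
    ("residential", 0.55, 0.05), ("commercial", 0.20, 0.06),
    ("commercial", 0.10, 0.08), ("commercial", 0.25, 0.05),
]

def pool(rows):
    """Inverse-variance weighted mean and its standard error."""
    es = np.array([r[1] for r in rows])
    w = 1 / np.array([r[2] for r in rows])
    return np.sum(w * es) / np.sum(w), np.sqrt(1 / np.sum(w))

for group in ("residential", "commercial"):
    est, se = pool([r for r in studies if r[0] == group])
    print(f"{group}: pooled effect = {est:.3f} "
          f"(95% CI {est - 1.96*se:.3f} to {est + 1.96*se:.3f})")
```
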
Sensitivity Analysis

Sensitivity analysis is a crucial step in a meta-analysis of a systematic literature review (SLR) as it helps assess the robustness of the results by examining the impact of various decisions or assumptions on the overall outcome.

Scenario: Your SLR and meta-analysis focus on examining the effect of various insulation materials on energy efficiency in
buildings.

Steps to Conduct Sensitivity Analysis:

1. Identify Key Assumptions: Begin by identifying the critical assumptions, decisions, or potential sources of bias in your meta-
analysis. These may include the choice of meta-analysis model (Fixed-Effects vs. Random-Effects), inclusion criteria for
studies, handling of missing data, and the selection of effect size metric (e.g., Standardized Mean Difference or Odds Ratio).

2. Define the Sensitivity Analysis Scenarios: Create a list of different scenarios that represent changes to these assumptions
or decisions. For example:
- Scenario 1: Use a Fixed-Effects model instead of a Random-Effects model.
- Scenario 2: Exclude studies with a high risk of bias.
- Scenario 3: Use a different effect size metric (e.g., change from SMD to Risk Ratio).
- Scenario 4: Include or exclude studies published before a specific year.
- Scenario 5: Use alternative methods for imputing missing data.
- Scenario 6: Use different statistical software for the analysis.

3. Reanalyze the Data: For each scenario, reanalyze the data by applying the specified change or assumption. This may involve
recalculating the effect sizes, weights, and conducting a new meta-analysis based on the modified conditions. You should use
the same dataset but apply the adjustments specific to each scenario.

4. Compare Results: Compare the results from each sensitivity analysis scenario to the primary analysis. Pay attention to
changes in the overall effect size, confidence intervals, and the statistical significance of the findings. Assess how sensitive
the results are to the variations in assumptions or decisions.
5. Interpret the Findings: Interpret the results of the sensitivity analysis. If the conclusions and the direction of the effect
remain consistent across different scenarios, it suggests that the meta-analysis results are robust. On the other hand, if the
results are sensitive to changes in assumptions, it indicates that the findings may be influenced by those assumptions or
decisions.

6. Report the Sensitivity Analysis: In your SLR or meta-analysis report, clearly document the results of the sensitivity analysis,
including the scenarios tested and the impact on the overall findings. Discuss the implications of the sensitivity analysis and
whether the results can be considered robust under different conditions.

Sensitivity analysis is a valuable practice in meta-analysis because it helps ensure the reliability and validity of your findings
by exploring the influence of various assumptions and decisions on the results. It also allows you to communicate the
uncertainty associated with your conclusions and the potential impact of different methodological choices.
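
A minimal Python sketch of steps 2 to 4, using hypothetical effect sizes, variances, and risk-of-bias flags, and fixed-effect pooling for brevity; the scenario labels are illustrative:

```python
import numpy as np

# Hypothetical studies: effect size, variance, and a risk-of-bias flag
es   = np.array([0.45, 0.30, 0.62, 0.15, 0.50])
var  = np.array([0.04, 0.09, 0.02, 0.12, 0.05])
high_risk = np.array([False, True, False, True, False])

def pooled_fixed(es, var):
    """Inverse-variance (fixed-effect) pooled estimate."""
    w = 1 / var
    return np.sum(w * es) / np.sum(w)

scenarios = {
    "Primary analysis (all studies)": (es, var),
    "Scenario: exclude high risk-of-bias studies": (es[~high_risk], var[~high_risk]),
    "Scenario: exclude the most precise study": (np.delete(es, np.argmin(var)),
                                                 np.delete(var, np.argmin(var))),
}

for name, (e, v) in scenarios.items():
    print(f"{name}: pooled effect = {pooled_fixed(e, v):.3f}")
```
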
Bayesian Meta-Analysis

Bayesian meta-analysis is a statistical approach that uses Bayesian methods to synthesize evidence from multiple studies in a systematic literature review (SLR) or meta-analysis. It allows for the incorporation of prior knowledge and beliefs about the distribution of effect sizes and uncertainty.

Scenario: Your SLR focuses on the effectiveness of different types of sustainable building materials in reducing energy
consumption in residential buildings.

Bayesian Meta-Analysis Formula:

Bayesian meta-analysis models can vary in complexity, but here's a simple example of a Bayesian model for a meta-analysis
using a normal distribution for effect sizes:

1. Define the Prior Distribution: In Bayesian meta-analysis, you start by specifying a prior distribution for the effect sizes. This
represents your prior beliefs about the distribution of effect sizes before observing the data. A common choice is the normal
distribution.

Prior distribution for effect size: θ ~ N(μ, τ²)

- θ: The effect size.
- μ: The prior mean of the effect size.
- τ²: The prior variance of the effect size.
2. Update with Data: You then update your prior beliefs with the observed data. Each study contributes its likelihood function,
which quantifies the probability of the data given the effect size.

Likelihood function for each study: Data | θ, σ ~ N(θ, σ²)

- Data: The observed data from each study.
- θ: The effect size.
- σ²: The variance of the effect size estimate.

3. Posterior Distribution: The Bayesian analysis combines the prior distribution with the likelihood function to compute the
posterior distribution of the effect size. This represents your updated beliefs about the effect size based on the data.

Posterior distribution for effect size: θ | Data ~ N(μ_post, σ_post²)

- μ_post: The posterior mean of the effect size.
- σ_post²: The posterior variance of the effect size.
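
Under these normal-normal assumptions with known study variances, the posterior for a common effect size has a closed form. The Python sketch below implements that conjugate update for hypothetical data and an assumed prior; the prior mean and variance shown are illustrative choices, not values from the document:

```python
import numpy as np

# Hypothetical observed effect sizes and their (known) variances
y      = np.array([0.45, 0.30, 0.62, 0.15, 0.50])
sigma2 = np.array([0.04, 0.09, 0.02, 0.12, 0.05])

# Prior: theta ~ N(mu, tau^2) -- illustrative values
mu, tau2 = 0.0, 1.0

# Conjugate normal-normal update (common effect size, known variances)
post_precision = 1 / tau2 + np.sum(1 / sigma2)
post_var  = 1 / post_precision
post_mean = post_var * (mu / tau2 + np.sum(y / sigma2))

print(f"Posterior: N({post_mean:.3f}, {post_var:.4f})")
```
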

Application of Bayesian Meta-Analysis:

- In your SLR, you collect data from multiple studies on the energy efficiency of different building materials.
- You specify your prior beliefs about the distribution of effect sizes based on existing knowledge and assumptions.
- You calculate the likelihood functions for each study based on the observed data and known variances.
- By combining the prior distribution with the likelihood functions, you obtain the posterior distribution for the effect size.
- The posterior distribution provides a more informed estimate of the effect size, incorporating both prior beliefs and the
evidence from the included studies.

Bayesian meta-analysis allows you to account for prior knowledge, quantify uncertainty, and provide a more comprehensive
picture of the effect size distribution in your meta-analysis. It's particularly useful when dealing with limited data or when
you want to explicitly incorporate expert opinions and prior research findings into your analysis. The specific implementation
of Bayesian meta-analysis may vary based on the software and tools used, as well as the complexity of the models.
Network Meta-Analysis (NMA)

Here's an example of how you can use Network Meta-Analysis (NMA) to analyze studies comparing the energy efficiency of different HVAC (heating, ventilation, and air conditioning) systems in buildings.

Scenario: You are conducting a systematic literature review (SLR) on HVAC systems in buildings and want to determine which system is the most energy-efficient among various options, including traditional HVAC systems, heat pumps, and solar heating.

Example of Network Meta-Analysis (NMA):

1. Create a Network Plot: Start by creating a network plot that illustrates the relationships between the different HVAC
systems and the studies that provide direct comparisons.

Network Plot (not shown): each node represents an HVAC system, and the lines between nodes represent studies that directly compare the energy efficiency of those systems.

2. Collect Data: Gather data from the identified studies, including effect size estimates (e.g., energy efficiency coefficients)
and their corresponding variances for each pairwise comparison between HVAC systems.

3. Specify the NMA Model: Choose a suitable NMA model, such as the Bayesian NMA model. This model allows you to
incorporate both direct and indirect evidence into your analysis.

4. Estimate Treatment Effects: Use the NMA model to estimate the treatment effects for each HVAC system relative to a
reference system (usually chosen as the common comparator). This model combines both direct evidence from studies with
head-to-head comparisons and indirect evidence through the network of studies.

5. Rank the HVAC Systems: NMA provides estimates of the treatment effects for each HVAC system and their associated
credible intervals (like confidence intervals). You can rank the HVAC systems based on their estimated energy efficiency and
the uncertainty around these estimates.

6. Interpret the Results: You can interpret the NMA results to draw conclusions about which HVAC system is the most energy
efficient. You should consider both the point estimates and credible intervals when ranking the systems.
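
As a minimal illustration of how direct and indirect evidence combine, the Python sketch below applies the Bucher adjusted indirect comparison to hypothetical pairwise estimates; a full NMA would fit all comparisons jointly (for example, in a Bayesian framework):

```python
import numpy as np

# Hypothetical direct comparisons vs. a common comparator (traditional HVAC):
# mean difference in an energy-efficiency metric and its variance
d_heatpump_vs_trad, var_hp = 0.40, 0.03   # heat pump vs. traditional
d_solar_vs_trad,    var_so = 0.25, 0.05   # solar heating vs. traditional

# Bucher indirect comparison: heat pump vs. solar heating
d_indirect   = d_heatpump_vs_trad - d_solar_vs_trad
var_indirect = var_hp + var_so            # variances add for independent comparisons
se_indirect  = np.sqrt(var_indirect)

print(f"Indirect estimate (heat pump vs. solar): {d_indirect:.2f} "
      f"(95% CI {d_indirect - 1.96*se_indirect:.2f} to {d_indirect + 1.96*se_indirect:.2f})")
```
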
Key Benefits of Network Meta-Analysis:

- Enables the comparison and ranking of multiple HVAC systems, even when direct head-to-head comparisons are limited.
- Provides a comprehensive overview of the relative energy efficiency of different systems in buildings.
- Incorporates both direct and indirect evidence, which can improve the precision of estimates.
- Helps inform decisions regarding the choice of HVAC systems for energy-efficient building designs.

By using NMA in your SLR, you can provide a more comprehensive analysis of the available HVAC systems, considering a wide range of options and indirect evidence. This approach allows you to make evidence-based recommendations for selecting the most energy-efficient HVAC system for various building types and applications.
Multilevel Meta-Analysis

Here's an example of multilevel meta-analysis in the context of collected data on energy savings using various materials for constructing energy-efficient roofs:

Scenario: You are conducting a systematic literature review (SLR) on the effectiveness of different roofing materials in
achieving energy savings in buildings. The data you have collected is organized hierarchically, with multiple studies, each
reporting energy savings from various roofing materials. Within each study, energy savings data are measured across different
buildings, and within each building, multiple measurements are taken over time.

Example of Multilevel Meta-Analysis:

1. Data Hierarchy: Start by illustrating the hierarchical structure of the data. In your SLR, the data might be organized as follows:

- Multiple measurements taken over time within each building (Level 1).
- Buildings within each study that have different roofing materials (Level 2).
- Studies that report the effectiveness of roofing materials (Level 3).

2. Hierarchical Model: Introduce a multilevel meta-analysis model. In this model, you will specify equations for each level to
estimate the variance components:

- Level 1 (Within-building variance): Variability in energy savings measurements within each building.
- Level 2 (Between-building variance): Variability in energy savings between buildings with different roofing materials.
- Level 3 (Between-study variance): Variability in energy savings between different studies.
3. Data Preparation: You should prepare the data by extracting the relevant information from each study. This includes the
effect size estimates (energy savings), their variances, the type of roofing material used, and information on the building or
structure where the measurements were taken.

4. Model Equations: Specify equations to estimate the within-building variance, between-building variance, and between-study variance. The model will also include equations for calculating the overall effect size and its uncertainty.

5. Estimate the Model: Use statistical software to estimate the parameters of the multilevel model. The software will calculate
the overall effect size, within-study variance, between-building variance, between-study variance, and other parameters
based on the hierarchical structure.

6. Interpret the Results: You can interpret the results by examining the overall effect size, its confidence intervals, and the
variance components. You can also discuss how different roofing materials compare in terms of energy savings and whether
the choice of material significantly affects energy efficiency.
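
As a rough sketch of this hierarchy, the following Python code simulates hypothetical measurement-level data and fits a linear mixed model with a random intercept for study and a building-within-study variance component using statsmodels. Treat it only as an illustration of the three-level structure; a multilevel meta-analysis of study-level effect sizes with known sampling variances would typically use dedicated tooling (for example, the rma.mv function in R's metafor package):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical measurement-level data: repeated energy-savings measurements (%)
# nested within buildings, nested within studies
rows = []
for study in range(1, 6):
    study_eff = rng.normal(0, 2)                  # study-level deviation
    for building in range(1, 4):
        building_eff = rng.normal(0, 1)           # building-level deviation
        for t in range(4):                        # repeated measurements
            rows.append({"study": study,
                         "building": f"s{study}_b{building}",
                         "savings": 10 + study_eff + building_eff + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

# Random intercept for study, plus a building-within-study variance component
model = smf.mixedlm("savings ~ 1", df, groups=df["study"], re_formula="1",
                    vc_formula={"building": "0 + C(building)"})
result = model.fit()
print(result.summary())
```
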

Key Benefits of Multilevel Meta-Analysis:

- Accounts for the hierarchical structure of the data, considering variability within and between studies, buildings, and
measurements.
- Provides a more accurate analysis when dealing with complex data structures, enhancing the reliability of effect size
estimates.
- Allows for the exploration of variability at multiple levels, shedding light on the sources of heterogeneity in energy savings
data.

By using multilevel meta-analysis in your SLR, you can address the nested and hierarchical nature of your data, leading to
more robust conclusions about the effectiveness of roofing materials in achieving energy savings. This approach is particularly
valuable when conducting research on energy efficiency in construction and building design.
Meta-Regression

Scenario: Your SLR focuses on the energy savings achieved in buildings with different shapes and orientations. You want to investigate how building characteristics, such as shape (e.g., rectangular, L-shaped) and orientation (e.g., north-facing, south-facing), influence energy savings.

Modified Meta-Regression Formula for Building Shape and Orientation:

In this context, the meta-regression formula can be adapted to explore the relationship between building shape, orientation, and energy savings:

ES_i = β0 + β1 * Shape_i + β2 * Orientation_i + ε_i

Where:
- ES_i: Effect size (energy savings) for study i.
- β0: Intercept (the expected effect size when the covariates are at their reference levels).
- β1, β2: Coefficients for building shape and orientation (coded, for example, as indicator variables).
- ε_i: Residual error for study i (in a random-effects meta-regression, this term also reflects between-study variance not explained by the covariates).
Application of Meta-Regression:

1. Data Collection: Collect data from your SLR, including effect sizes for energy savings, their variances, and information on
the building shape and orientation for each study.

2. Specify the Meta-Regression Model: Specify the meta-regression model, including building shape and orientation as
covariates. You are interested in understanding how these building characteristics influence energy savings.

3. Conduct the Meta-Regression: Use statistical software to estimate the coefficients (β0, β1, β2) by fitting the meta-regression model to your data. The software will provide estimates of the coefficients and their standard errors.

4. Assess Significance: Conduct hypothesis tests (e.g., Wald tests) on the coefficients to determine if building shape and orientation significantly explain the variation in energy savings. Significant β1 and β2 values indicate a meaningful relationship.
5. Interpret the Results: Interpret the results based on the estimated coefficients. If β1 and/or β2 are significant, it suggests that building shape and orientation have a significant impact on energy savings. You can quantify the size and direction of these impacts (a weighted-regression sketch follows this list).
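
A minimal Python sketch of steps 3 and 4, assuming statsmodels is available and using hypothetical study-level data; this is a fixed-effect (inverse-variance weighted) meta-regression, whereas a random-effects meta-regression would add an estimated between-study variance to each study's variance:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: effect size (energy savings), its variance,
# and indicator covariates for building shape and orientation
es  = np.array([0.50, 0.35, 0.60, 0.20, 0.45, 0.30])
var = np.array([0.04, 0.06, 0.03, 0.08, 0.05, 0.07])
shape_lshaped = np.array([0, 1, 0, 1, 0, 1])   # 1 = L-shaped, 0 = rectangular
orient_south  = np.array([1, 0, 1, 0, 1, 0])   # 1 = south-facing, 0 = north-facing

X = sm.add_constant(np.column_stack([shape_lshaped, orient_south]))
model = sm.WLS(es, X, weights=1 / var).fit()   # inverse-variance weighted regression

print(model.params)    # [beta0, beta1 (shape), beta2 (orientation)]
print(model.pvalues)   # Wald-type p-values for each coefficient
```
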

Meta-regression in this context allows you to explore how different building characteristics influence energy savings in a
nuanced way. It helps you understand which building shapes and orientations are associated with greater or lesser energy
efficiency and provides valuable insights for architects and builders.
Cumulative Meta-Analysis

Cumulative meta-analysis is a technique used to systematically update a meta-analysis as new studies become available over time. This approach helps researchers monitor how the overall effect estimate evolves as new evidence is added.

Scenario: Your systematic literature review (SLR) aims to evaluate the energy efficiency of various insulation materials used
in building construction. As new studies become available, you want to conduct a cumulative meta-analysis to track changes
in the overall effect estimate over time.

Cumulative Meta-Analysis Formula:

A cumulative meta-analysis re-estimates the overall effect size at each stage as new studies are added. In its simplest (unweighted) form, the cumulative standardized mean difference (SMD) can be expressed as follows; in practice, the inverse-variance weighted mean from the standard meta-analysis is usually recomputed at each stage:

SMD_cumulative = Σ SMD_i / n   (summing i = 1 to n)

Where:
- SMD_cumulative: The cumulative standardized mean difference at a given stage.
- SMD_i: The standardized mean difference for the i-th study.
- n: The number of studies included up to that cumulative stage.

Application of Cumulative Meta-Analysis:

1. Data Collection: Collect data from the initial set of studies in your SLR. This data should include effect sizes (SMDs) and
their variances for different insulation materials.

2. Cumulative Stages: Define specific cumulative stages or time intervals at which you will update the meta-analysis. For
example, you can conduct cumulative analyses at six-month intervals or based on the publication date of new studies.
3. Calculate Cumulative Effect Sizes: At each cumulative stage, calculate the cumulative effect size by summing the SMDs from
the studies included up to that stage and dividing them by the number of studies.

4. Plot the Cumulative Results: Create a cumulative meta-analysis plot that shows how the cumulative effect size changes
over time or as more studies are added. This plot allows you to visualize the evolution of the overall effect estimate.

5. Interpret the Results: Interpret the results by looking at the cumulative effect sizes at each stage. Analyze whether the
effect size stabilizes or converges as more studies are added, indicating the degree of robustness in the findings.

6. Update Periodically: Periodically update your cumulative meta-analysis as new studies become available. This allows you
to continuously monitor the evolving evidence base and the stability of the overall effect estimate.
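
A minimal Python sketch with hypothetical studies ordered by publication year; it recomputes the inverse-variance weighted pooled SMD at each cumulative stage rather than the simple average shown above:

```python
import numpy as np

# Hypothetical studies sorted by publication date: (year, SMD, variance)
studies = [(2018, 0.60, 0.05), (2019, 0.40, 0.04), (2020, 0.55, 0.06),
           (2021, 0.35, 0.03), (2022, 0.45, 0.04)]

for k in range(1, len(studies) + 1):
    included = studies[:k]                        # studies available up to this stage
    es = np.array([s[1] for s in included])
    w  = 1 / np.array([s[2] for s in included])   # inverse-variance weights
    cumulative = np.sum(w * es) / np.sum(w)       # weighted cumulative estimate
    print(f"Up to {included[-1][0]}: cumulative SMD = {cumulative:.3f} ({k} studies)")
```
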

Cumulative meta-analysis is a valuable approach to assess the robustness and consistency of findings in your SLR over time.
It helps researchers identify whether the conclusions are influenced by early or small-sample studies or whether they remain
stable as more evidence accumulates. This method is particularly useful when studying dynamic fields where new research
is continually being published.
