2024-25 Seminars and Recordings
February 14, 2025, 11 am CST
Adapting Methods for Correcting Selective Reporting Bias in Meta-Analysis of Dependent Effect Sizes
-
Speaker: Dr. Man Chen, University of Texas at Austin
-
Description: Various methods have been proposed to adjust for selective reporting bias in average effect size estimates under the assumption that each study produces one effect size. However, these adjustment methods have yet to be thoroughly investigated in meta-analyses with multiple, dependent effect sizes per study. In this presentation, I will demonstrate novel adaptations of several adjustment methods, based on a multivariate working model and weighting scheme, that correct for selective reporting bias while handling dependence among effect sizes. I will present results from an extensive Monte Carlo simulation study evaluating the performance of the existing adjustment methods and the novel adaptations. I will conclude with a discussion of limitations, future directions, and suggestions for correcting selection bias in the context of dependent effect sizes. The presentation covers research described in a preprint available at https://doi.org/10.31222/osf.io/jq52s.
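For context, the sketch below illustrates the kind of method being adapted: a standard step-function selection model fit to independent effect sizes using the metafor package. This is not the talk's multivariate adaptation for dependent effect sizes, only the one-effect-per-study baseline it builds on.

```r
# Illustrative sketch only: a step-function (three-parameter) selection
# model for the one-effect-size-per-study case, using the metafor package.
# The talk's adaptations extend this idea to dependent effect sizes.
library(metafor)

dat <- dat.hackshaw1998                  # example dataset bundled with metafor
res <- rma(yi, vi, data = dat)           # unadjusted random-effects model

# Selection model with a cutpoint at p = .025 (one-tailed), allowing a
# lower probability of reporting for non-significant results
sel <- selmodel(res, type = "stepfun", steps = c(0.025, 1))
summary(sel)                             # bias-adjusted average effect size
```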
-
Video Recording
December 13, 2024, 11 am CST
Friday the 13th & Single Case Experimental Designs: Allegedly Unlucky Encounters for Meta-Analysts
-
Speakers: Dr. John Ferron, University of South Florida, and Dr. Megan Kirby, Language Dynamics Group
-
Description: With the increased use of single-case experimental designs to study intervention effects, there is a growing need to include single-case experimental design studies in meta-analyses. In this session, we will discuss alternative methods for estimating effect sizes from single-case studies, illustrate how to compute these effect sizes using freely available software apps, and provide guidance on choosing among the effect size options.
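As a taste of what such tools compute, here is a minimal sketch using the SingleCaseES R package, one of the freely available options in this space (the apps demonstrated in the session may differ). The data are invented for illustration.

```r
# Hypothetical data from one AB single-case design: baseline (A) and
# intervention (B) phase observations for a single participant.
library(SingleCaseES)

A <- c(4, 5, 3, 4, 5)     # baseline-phase outcome measurements
B <- c(8, 9, 7, 10, 9)    # intervention-phase outcome measurements

NAP(A_data = A, B_data = B)    # non-overlap of all pairs
LRRi(A_data = A, B_data = B)   # log response ratio (for increasing outcomes)
```

Different metrics can lead to different conclusions about the same case, which is why guidance on choosing among the effect size options matters.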
-
Video Recording
November 15, 2024, 11 am CST
Using GPT API Models as Second Screeners of Titles and Abstracts in High-Quality Systematic Reviews
-
Speaker: Dr. Mikkel Vembye, Danish Center for Social Science Research
-
Description: Independent human double screening of titles and abstracts is a critical step in ensuring the quality of systematic reviews and meta-analyses. However, double screening is a resource-demanding procedure that slows down the review process. In this presentation, I will demonstrate how reviewers can reliably implement OpenAI’s GPT models as second screeners of titles and abstracts to mitigate this resource demand. I will provide a practical demonstration of conducting title and abstract screening with GPT API models through the newly developed R package AIscreenR and highlight how this approach differs from simply using the ChatGPT interface. I will discuss the implementation requirements for ensuring performance that is at least on par with typical human screening. To facilitate this, I will introduce a new screening benchmark scheme based on typical human screening performance in 22 large-scale systematic reviews. As an initial proof of concept, I will present results from using GPT API screening in three large classification experiments. I will conclude by discussing limitations and future directions for this screening method, as well as how it can be combined with traditional (semi-)automated screening tools. The background for this presentation can be found at https://osf.io/preprints/osf/yrhzm.
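A minimal sketch of the workflow, based on AIscreenR's documented tabscreen_gpt() interface as I understand it (argument details may differ across package versions, and the references and inclusion criteria below are invented for illustration):

```r
library(AIscreenR)
library(tibble)

set_api_key()   # interactively store an OpenAI API key for this session

# Hypothetical references; real data would come from a reference manager
refs <- tibble(
  studyid  = 1:2,
  title    = c("Effects of tutoring on reading outcomes",
               "Teacher beliefs about homework: a survey"),
  abstract = c("We randomly assigned 120 students to a tutoring program...",
               "We surveyed 200 teachers about their beliefs...")
)

prompt <- "We are reviewing the effects of tutoring interventions on
K-12 reading outcomes. Include randomized or quasi-experimental studies
of tutoring programs; exclude surveys and qualitative studies."

screened <- tabscreen_gpt(
  data     = refs,
  prompt   = prompt,
  studyid  = studyid,
  title    = title,
  abstract = abstract,
  model    = "gpt-4o-mini"
)
screened    # returns per-reference inclusion/exclusion decisions
```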
-
Video Recording
October 18, 2024, 9 am CDT
Fitting Flexible Meta-Analytic Models with Structural Equation Modeling
-
Speaker: Dr. Mike Cheung, National University of Singapore
-
Description: Understanding the differences between various meta-analytic models, such as fixed-effect, random-effects, and multiplicative-error models, can be challenging for researchers. This presentation explains these models using a graphical representation within a structural equation modeling (SEM) framework. Users specify meta-analytic models using lavaan syntax; the metaSEM package in R can then convert these specifications into graphical models, derive the model-implied means and variances, and fit the models using full information maximum likelihood (FIML) estimation. Complex meta-analytic models, such as multivariate meta-analyses and mediation models, can also be fitted. This allows users without strong statistical or programming backgrounds to develop and implement novel meta-analytic models that are not available in standard meta-analysis software.
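As context, here is a minimal sketch of the fixed-effect versus random-effects distinction in metaSEM, which estimates meta-analytic models as structural equation models via maximum likelihood in OpenMx; the lavaan-syntax workflow described above generalizes this interface.

```r
library(metaSEM)

data(Becker83)    # example dataset bundled with metaSEM

# Random-effects model: di = effect sizes, vi = sampling variances
re <- meta(y = di, v = vi, data = Becker83)
summary(re)

# Fixed-effect model: constrain the between-study variance to zero
fe <- meta(y = di, v = vi, data = Becker83, RE.constraints = 0)
summary(fe)
```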
-
Video Recording
September 27, 2024
Getting Meta-Analyses Done: Practical Perspectives from Experienced Reviewers
-
Panelists:
- Dr. Carlton Fong, Texas State University
- Dr. Amanda Neitzel, Johns Hopkins University
- Dr. Peng Peng, University of Texas at Austin
-
Description: Our panel of prolific and experienced systematic reviewers will discuss the practical side of systematic reviews and meta-analyses, common challenges and roadblocks, and tips for getting meta-analyses done, with plenty of time for audience questions and discussion.
-
Video Recording