2023-24 Seminars and Recordings
May 17, 2024
Investigating Patterns of Fadeout using MERF, the Meta-Analysis of Educational RCTs with Follow-up
-
Speaker: Emma Hart, Doctoral Candidate, Teachers College, Columbia University
-
Description: Researchers and policymakers aspire for educational interventions to change children’s long-run developmental trajectories. Developmental theory has informed this expectation: boosts to child skills should initiate developmental cascades that sustain long-run intervention effects. Yet increasing evidence suggests that while interventions often generate impacts on child skills at post-test, these effects commonly fade in the years following the intervention’s end. In this talk, I will present new work that uses meta-analysis to investigate the breadth of fadeout by skill type and intervention characteristics. I will discuss using meta-analysis to test developmental theory and inform policy decision-making.
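As a rough illustration of the kind of analysis involved (not taken from the talk; the data, slope, and variable names below are invented), fadeout can be probed by meta-regressing effect sizes on time since the intervention ended, weighting each estimate by its precision:

```python
# Hypothetical illustration only: simulate follow-up effect sizes that
# fade with time since the intervention ended, then fit an
# inverse-variance-weighted meta-regression of effect size on years
# of follow-up. Data, slope, and variable names are all invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = rng.uniform(0, 5, size=40)            # years since post-test
se = rng.uniform(0.05, 0.15, size=40)         # standard error of each d
d = 0.30 - 0.05 * years + rng.normal(0, se)   # true fadeout: -0.05/yr

X = sm.add_constant(years)                    # intercept + follow-up time
fit = sm.WLS(d, X, weights=1.0 / se**2).fit() # precision weighting
print(fit.params)  # [post-test effect, estimated annual fadeout]
```

A real analysis would use a random-effects meta-regression with moderators for skill type and intervention characteristics; this fixed-effect sketch only shows the mechanics of relating effect sizes to follow-up time.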
-
Video Recording
April 19, 2024
ESHackathon: Unlocking Evidence Synthesis Through Innovation
-
Speaker: Matthew Grainger, Researcher, Norwegian Institute for Nature Research
-
Description: Founded in 2017 by Martin Westgate and Neal Haddaway, the Evidence Synthesis Hackathon (ESHackathon) aims to support the development, testing, and promotion of new software and workflows; build networks and capacity among researchers, practitioners, and developers; and advocate for open synthesis. The organization’s goal is to create workflows that are open, reproducible, based on the best available technology and methods, and supported by the community. ESHackathon is not just for coders; it is a community of practice in which everyone can make a contribution and which welcomes a diversity of skills and career stages. In this talk, Dr. Matt Grainger will highlight some of ESHackathon’s most popular tools, including citationchaser, EviAtlas, and PRISMA2020. He will also showcase some of the tools currently in development and discuss ideas for new tools.
-
Video Recording
March 15, 2024
Enhancing Reliability in Automated Record Screening: A Resampling Algorithm
-
Speaker: Zhipeng Hou, Biostatistician, Eversana Life Science Services
-
Description: Record screening is a critical step in systematic review and meta-analysis, involving the meticulous task of identifying relevant records from a pool of candidate papers. This process is widely acknowledged to be time-consuming, costly, and susceptible to human error. Although automatic literature screening methods leveraging machine learning and AI have proliferated in recent years to alleviate this burden, methods that come with performance guarantees remain notably absent. In this presentation, we will introduce a flexible resampling algorithm that can work alongside any existing screening prioritization algorithm to ensure consistent performance. We will delve into the mathematical and probabilistic foundations of this algorithm and discuss its real-life implementation in record screening.
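The talk's specific algorithm is not detailed here, but the general idea behind resampling-based stopping can be sketched as follows: alongside a ranked screening queue, periodically screen a simple random sample of the still-unscreened pool to estimate how many relevant records remain. All names and the stopping threshold below are hypothetical:

```python
# Hypothetical sketch, not the speaker's algorithm: estimate how many
# relevant records remain in the unscreened pool by screening a simple
# random sample of it, and stop prioritized screening only when the
# scaled-up estimate drops below a chosen tolerance.
import random

def estimate_remaining(pool, is_relevant, sample_size=100, seed=0):
    """Estimate the count of relevant records left in `pool` from a
    simple random sample; `is_relevant` stands in for a human call."""
    if not pool:
        return 0.0
    rng = random.Random(seed)
    sample = rng.sample(pool, min(sample_size, len(pool)))
    hits = sum(1 for record in sample if is_relevant(record))
    return (hits / len(sample)) * len(pool)   # scale prevalence up

# Usage sketch: keep screening in ranked order until, say,
#   estimate_remaining(unscreened, reviewer_call) < 1.0
```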
-
Video Recording
February 16, 2024
Overcoming Barriers to Conducting Systematic Reviews of Non-intervention Research
-
Speaker: Marta Topor, Postdoctoral Researcher, Linköping University
-
Description: Evidence synthesis methodology is evolving quickly thanks to detailed guidance tools, new software solutions, and computational approaches. Researchers are aware of these advancements and of the value of systematic reviews/meta-analyses. While keen to implement these approaches, they face an obstacle. Traditionally, systematic reviews have been conducted on interventional research within health-related fields. Those working in other disciplines and with other study designs, whether experimental or observational, find it challenging to follow guidance originally tailored towards interventions. They tend to make subjective adaptations to the guidance tools or omit irrelevant sections, which leads to inconsistent implementation. The Non-Intervention Reproducible and Open Evidence Synthesis (NIROES) collaboration was launched in response to this issue. We have introduced a guidance tool for systematic reviews (NIRO-SR) and are currently working on a project focused on the risk of bias and quality assessment of individual studies. During the talk, I will introduce NIRO-SR, provide an update on our current project, and discuss further plans and ideas.
-
Video Recording
December 8, 2023
Knowledge mobilization and lessons for communicating meta-analytic results
-
Speaker: Kaitlyn Fitzgerald, Azusa Pacific University, Department of Mathematics, Physics, and Statistics
-
Description: To make evidence-based decisions, education decision-makers are increasingly tasked with making sense of evidence across a collection of studies, which involves complex reasoning about statistical ideas such as effect sizes, uncertainty, and meta-analytic results. This talk will highlight some of the inherent difficulties in communicating meta-analytic results to non-researchers, cognitive pitfalls to avoid, and best practices established in the data visualization, statistical cognition, and human-computer interaction literatures. We will discuss the curse of expertise and why education researchers should not rely on common visualizations such as forest plots when communicating meta-analyses to decision-makers. Finally, we discuss the need for the education research community to directly study strategies for disseminating evidence and mobilizing knowledge, and we present a framework for organizing such studies.
-
Video Recording
November 17, 2023
Robust Bayesian meta-regression: Model-averaged moderation analysis in the presence of publication bias
-
Speaker: František Bartoš, University of Amsterdam
-
Description: Meta-regression constitutes an essential meta-analytic tool for investigating sources of heterogeneity and assessing the impact of moderators. However, existing methods for meta-regression have limitations that include inadequate consideration of model uncertainty and poor performance under publication bias. To overcome these limitations, we extend robust Bayesian meta-analysis (RoBMA) to meta-regression (RoBMA-regression). RoBMA-regression allows for moderator analyses while simultaneously taking into account the uncertainties about the presence and impact of other factors (i.e., the main effect, heterogeneity, publication bias, and other potential moderators). We offer guidance on how to specify prior distributions for continuous and categorical moderators and introduce a Savage-Dickey density ratio test to quantify the evidence for and against the presence of the effect at different levels of categorical moderators. We illustrate RoBMA-regression in an empirical example and demonstrate its performance in a simulation study. We implemented the methodology in the RoBMA R package. Overall, RoBMA-regression presents researchers with a powerful and flexible tool for conducting robust and informative meta-regression analyses.
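For background on the Savage-Dickey device mentioned in the abstract: when a point null hypothesis H0: θ = θ0 is nested within an alternative model M1, the Bayes factor in favor of the null reduces to the ratio of M1's posterior to prior density at θ0, which is the quantity RoBMA-regression evaluates at the levels of a categorical moderator. In general form:

```latex
% Savage–Dickey density ratio: Bayes factor for a point null
% H0: theta = theta_0 nested within alternative model M1.
\[
\mathrm{BF}_{01}
  = \frac{p(\mathrm{data} \mid H_0)}{p(\mathrm{data} \mid M_1)}
  = \frac{p(\theta = \theta_0 \mid \mathrm{data},\, M_1)}
         {p(\theta = \theta_0 \mid M_1)}
\]
```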
-
Video Recording
October 20, 2023
The logic of generalization from systematic reviews of intervention effects to policy and practice contexts
-
Speaker: Julia Littell, Bryn Mawr College, Graduate School of Social Work and Social Research
-
Description: Systematic reviews and meta-analyses (SRMAs) of controlled studies of intervention effects are potent tools for generalized causal inference, but the logic of generalization from SRMAs to diverse policy and practice contexts is woefully underdeveloped. Using recent SRMAs of two widely disseminated psychosocial interventions as examples, I explore the logic of generalization from these SRMAs from three perspectives: 1) probability theory and representative sampling, 2) principles for generalized causal inference, and 3) common rubrics used by reviewers and clearinghouses. I show that, because they are based on nonprobability samples of studies that themselves relied on nonprobability samples of programs and participants, SRMAs can produce pooled estimates that are not representative of any larger set of studies, programs, or people. Application of Shadish, Cook, and Campbell’s (2002) principles for generalized causal inference is hampered by insufficient descriptive data and risks of bias in impact evaluations. Common rubrics used to formulate generalizations from systematic reviews are not well supported by theory or evidence and tend to overestimate the generalizability and applicability of prominent interventions. Results of systematic reviews are widely misinterpreted as evidence that can be easily generalized and applied to diverse populations and settings. SRMAs can be used to test claims about the generalizability of treatment effects and to identify directions for further research that would support stronger generalized causal inferences and better applications. Their usefulness in developing new insights into issues of generalizability and applicability may, however, be compromised by limitations of the available data. Additional work is needed to articulate principles and best practices for formulating generalizations based on the results of SRMAs.
-
Video Recording
September 22, 2023
Designing for Epistemic Uncertainty in Research Synthesis
-
Speaker: Alex Kale, University of Chicago, Department of Computer Science
-
Description: Summarizing what can be learned from bodies of scientific literature requires difficult judgments about which study results can be meaningfully compared, and whether it makes sense to aggregate evidence in a meta-analysis. Numerous tools for assessing quality of evidence offer guidance on identifying sources of epistemic uncertainty, such as common threats to internal or external validity of study results. However, existing software for systematic review and meta-analysis does little to emphasize how epistemic uncertainty should inform analytic choices for synthesizing findings. I present MetaExplorer, a prototype web application designed to provide a guided process for reasoning about epistemic uncertainty in meta-analysis. I also summarize findings from interviews with research synthesis methodologists and practitioners in biomedical science, education, computer science, and statistics. This work highlights the cognitive pitfalls, technical hurdles, and inconsistent standards across research communities that pose challenges to addressing epistemic uncertainty in research synthesis. I reflect on opportunities for future software development and invite the audience to join me in discussion.
-
Video Recording