Meta-analyses are considered the highest level of evidence in research design, providing a quantitative effect size estimate for an outcome of interest pooled from some or all primary studies on a particular topic. The usefulness and accuracy of a meta-analysis depend on the rigor of the included studies and the methods adopted by the meta-analyst. In recent years, the number of meta-analyses in the field of mindfulness has increased, but their relative rigor has varied. Our aim with this report, therefore, is to provide a guide for future authors of mindfulness meta-analyses that offers recommendations and explanations for best practices in meta-analysis. We selected high-quality literature on meta-analysis and used the 19 meta-analyses published in the journal Mindfulness between January 2022 and January 2024 to highlight methodological approaches that can enhance the rigor of future meta-analyses on mindfulness research. Although instructive content already exists on meta-analytic techniques, in this work we bring together meta-analytic recommendations specific to mindfulness studies, presenting recommendations for meta-analytic methods, the reporting of meta-analytic results, and what to include in discussion sections. The present article also provides an overall checklist of mandatory and recommended items to be included in a meta-analysis.
Meta-analyses are considered the highest level of evidence in research design (Burns et al., 2011) because they provide a quantitative effect size estimate for an outcome of interest pooled from some or all of the primary studies investigating the outcome (Papakostidis & Giannoudis, 2023). The usefulness and accuracy of a meta-analysis, however, depend on the rigor of the included studies and the methodology adopted by the meta-analyst (e.g., search strategy comprehensiveness, statistical analyses; Borenstein, 2019). In recent years, because of the increasing number of primary studies focused on mindfulness, the number of meta-analyses in the field has also increased (Chin et al., 2019; Zhang et al., 2021). However, the relative rigor of meta-analyses on the topic of mindfulness has varied. Meta-analyses of mindfulness studies have been limited by methodological weaknesses, lack of diversity, a relatively small number of primary studies on any one population or outcome, and the wide array of outcome measures used in primary studies (Chin et al., 2019; Waldron et al., 2018; Zhang et al., 2021). To provide relevant examples, we hand-searched mindfulness meta-analyses published from January 2022 to January 2024 in the journal Mindfulness (n = 19); many had methodological weaknesses of various kinds that could be addressed with updated methodology and tools. For example, practices such as using at least two coders for data extraction, assessing the risk of bias in primary articles, and reporting the meta-analytic statistical model were not uniformly employed. These methods, however, are relatively easy to adopt in meta-analyses. Throughout this work, we highlight what we found in the 19 meta-analytic articles (or subsets thereof) to support our overall recommendations.
Although instructive content already exists on meta-analytic techniques (e.g., Borenstein, 2019; Field & Gillett, 2010; Folk & Dunn, 2023; Johnson & Hennessy, 2019; Macnamara & Burgoyne, 2023), the present article brings together general meta-analytic recommendations with specific reference to studies of mindfulness-based interventions. First, the present article presents a review and recommendations for meta-analytic methods, including types of reviews, pre-registration, inclusion and exclusion criteria, and search strategies. Second, data management and analyses are considered, including model choice, effect size calculation and type, and how to conduct more complex analyses such as moderation. Third, recommendations are made for best practices in reporting results, including the use of PRISMA flowcharts, risk of bias reporting, and publication bias, as well as reporting participant characteristics, study characteristics, and intervention types. Finally, this article considers what should be included in the discussion section and concludes by providing an overall checklist of what we consider to be mandatory and recommended items in a meta-analysis (Table 1).
Table 1
Checklist of best practices for meta-analyses
□ Pre-registration on PROSPERO (Recommended)
□ Search terms defined (i.e., a list of the search terms and definitions being included), e.g., “mindfulness,” “meditation,” “yoga,” “breathwork,” “visualization,” “body scanning” (Mandatory)
□ Librarian consultation for additional search terms or alternate spellings of words (e.g., “visualization” and “visualisation”) (Recommended)
□ Clear and justified inclusion/exclusion criteria for primary studies (e.g., population of interest, intervention of interest, specific setting or location) (Mandatory)
□ Report whether any date restriction was used in the search and, if so, why (e.g., did research on this topic only begin a certain number of years ago?) (Mandatory)
□ Multi-author independent literature search (Highly recommended)
□ Gray literature included, or justification for why not; gray literature helps mitigate publication bias and will typically give more accurate results (e.g., dissertations, null results) (Highly recommended)
□ Bibliographic and hand search (e.g., search the references of primary studies and previous systematic reviews) (Highly recommended)
□ Databases appropriate and comprehensive; we recommend Medline/PubMed, CINAHL, Scopus, ERIC, PsycINFO (Mandatory)
□ Use PRISMA as a guideline for what to include (Highly recommended)
□ PRISMA flowchart for search data (Mandatory)
□ Combined or separate study designs justified (e.g., why combine or separate randomized vs. non-randomized studies; these can often be combined if a subgroup analysis compares study types; were objective and self-reported outcome measures combined?) (Mandatory)
□ Multi-author independent coding of data for rigor (Highly recommended)
□ Number of studies > 10 for accuracy of results (Highly recommended)
□ Software (and package, if applicable) specified (Mandatory)
□ Model specified and correct for the data (a random-effects model must be used if primary studies are pulled from the literature) (Mandatory)
□ Heterogeneity assessment includes I2, T2, T, 95% CI, and PI (recall that I2 cannot accurately be used to report a percentage of heterogeneity) (Mandatory)
□ Subgroup and meta-regression analyses decided a priori; e.g., subgroup analyses on participant characteristics (age group, race, ethnicity, sexual orientation), intervention characteristics (type of mindfulness; whether yoga, education, journaling, or prescribed home practice was included), and study characteristics (setting by country or continent, objective vs. self-reported outcome measures, study design, delivery method); meta-regressions on continuous variables (age, session duration, number of sessions, length of intervention) (Recommended)
□ Each subgroup and meta-regression group includes a minimum of k = 4–5 studies (Recommended)
□ Appropriate risk-of-bias tool used (e.g., RoB 2 for randomized studies vs. ROBINS-I for non-randomized studies) (Mandatory)
□ Sensitivity analysis (one-study-removed test) (Mandatory)
□ Multiple publication bias tests (e.g., funnel plots, Egger’s regression, trim and fill) (Mandatory)
Please see the Supplement for examples that follow these guidelines and that may be used in designing and reporting a mindfulness meta-analysis.
Methods
Different Types of Meta-analyses
Quantitative meta-analyses combine and analyze quantitative data on a single topic from multiple primary studies (Borenstein et al., 2021). A meta-analysis is possible when the primary articles report enough data to support a statistical analysis of the results alongside a narrative synthesis (i.e., a systematic review; Borenstein et al., 2021). A crucial initial step in conducting a systematic review and meta-analysis is to define its objectives. Meta-analyses generally aim to establish either the efficacy of an intervention or the strength of association between two constructs. The first type typically involves computing a pooled effect size from pre-post data or from comparisons with controls (e.g., waitlist, treatment as usual, or an active control condition such as Cognitive-Behavior Therapy); some meta-analyses combine these computations, allowing a more comprehensive analysis of the effectiveness of mindfulness-based interventions (MBIs; e.g., Goldberg et al., 2018; Khoury et al., 2013a).
Interventional meta-analyses are invaluable as they help establish the efficacy of different interventions and ascertain the magnitude and consistency of their effects across diverse populations. For instance, within the field of mindfulness, it is common to investigate the effectiveness of MBIs for specific psychological disorders or medical conditions (e.g., anxiety, Bamber & Morpeth, 2019; multiple sclerosis, Carletto et al., 2020; psychosis, Khoury et al., 2013a; depression, Klainin-Yobas et al., 2012; stress, Sperling et al., 2023). It is also common to focus on the effects of MBIs for particular populations (e.g., youth, Borquist-Conlon et al., 2019; incarcerated populations, Per et al., 2020; healthcare workers, Spinelli et al., 2019); settings where MBIs took place (e.g., schools, Carsley et al., 2018; prisons, Per et al., 2020); and outcomes of the MBIs (e.g., weight loss, Carrière et al., 2018; suicide, Schmelefske et al., 2022). MBIs have also been investigated for their effects on clinical diagnostic accuracy (e.g., Pinnock et al., 2021).
Objectives may also include assessing clinical populations (e.g., Goldberg et al., 2018; Khoury et al., 2013b; Spijkerman et al., 2016) or non-clinical populations’ responses to MBIs (e.g., Galante et al., 2021; Khoury et al., 2015; Querstret et al., 2020). In addition, some meta-analyses examine the effectiveness of specific MBIs (e.g., mindful parenting, Anand et al., 2023; Mindfulness-Based Stress Reduction, MBSR, Khoury et al., 2015) or meditation-only practices that tend to be more broadly defined (e.g., brief meditation, Gill et al., 2020; traditional meditation retreats, Khoury et al., 2017; loving-kindness meditation, Zeng et al., 2015).
The second category of systematic reviews and meta-analyses aims to assess the presence and strength of associations between two constructs, often called correlational meta-analysis. In the context of mindfulness, examples include examining the relationship between dispositional mindfulness and other constructs (e.g., satisfaction with life, Mattes, 2019; personality traits, Banfi & Randall, 2022; symptoms, Harper et al., 2022; thoughts, intentions, or behaviors, Karyadi et al., 2014; prosocial intentions or actions, Malin, 2023; suicidal thoughts or behavior, Per et al., 2022; or health behaviors, Sala et al., 2020). In addition, within this category, some reviews/meta-analyses examine mediators (e.g., mindful parenting as a mediator of the association between mindfulness trait and child outcomes, Kil et al., 2021).
Such associations-based meta-analyses play a crucial role in establishing the relationships between different constructs and assessing their robustness. Associations-based meta-analyses are particularly pertinent when exploring various facets or components of mindfulness disposition (such as the five facets of the Five Facet Mindfulness Questionnaire, FFMQ, Baer et al., 2006) and other related constructs, symptoms, or behaviors (e.g., association between the five facets of FFMQ and affective symptoms, Carpenter et al., 2019). Establishing such associations and determining their strength serve as an essential preliminary step before conducting clinical studies such as randomized controlled trials (RCTs) and longitudinal investigations, which can validate these associations and establish causal relationships. Additionally, understanding the association between different dimensions, facets, or factors involved in various constructs, such as the comparative contribution of different facets in self-harm or suicidal thoughts/behaviors (e.g., Per et al., 2022), can inform future clinical interventions by highlighting which aspects of mindfulness to cultivate.
Meta-analyses enable the identification of moderators that may influence the magnitude of the reported effects (such as age, gender, or quality of the studies, Khoury et al., 2013a, 2013b, 2015). Identifying moderators of the effects of interventions, or associations between constructs, holds direct clinical relevance and can inform the development of clinical guidelines for treating various conditions or disorders, ultimately contributing to improved physical and mental health outcomes. In summary, meta-analyses on the efficacy of an intervention or the strength of association between two constructs can complement each other in synthesizing the knowledge about mindfulness research, understanding the associations of mindfulness with other concepts or behaviors, and enhancing the impacts of mindfulness/meditation training on a wide range of outcomes and for various populations and settings.
Pre-registration
Pre-registration of systematic reviews and meta-analyses is increasingly encouraged and sometimes required by journals; a well-known pre-registration database is PROSPERO (National Institute for Health and Care Research, n.d.). Pre-registration involves documenting the research plan before conducting the review, which has several benefits, such as ensuring transparency and preventing selective reporting (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, PRISMA-P, Moher et al., 2015). It is important to note that the absence of pre-registration does not necessarily imply a poor-quality meta-analysis.
A comprehensive plan should be in place before pre-registering a meta-analysis. For example, PROSPERO requires details such as title, anticipated or actual start and completion dates, current stage of review, contact information and organizational affiliation, team members, funding sources, conflict(s) of interest, review questions, planned search strategies (e.g., databases, search dates, language restrictions, publication date restrictions), target population and condition under study, interventions of interest, types of controls, study designs to be included, primary and secondary outcomes, data extraction methods (e.g., inclusion and exclusion criteria, codebook, number of coders), risk of bias assessment, data synthesis strategy (e.g., software, model), planned subgroup analyses, and keywords (National Institute for Health and Care Research, n.d.).
Researchers should state deviations from their intended protocol in any published works. To effectively gauge the quality of a pre-registered meta-analysis, it would be necessary to compare the published methodology with the pre-registered protocol. This comparison would include aspects like inclusion/exclusion criteria, search strategies, data extraction, and data analyses. Scientific journals, however, rarely perform this thorough and time-demanding comparison. Instead, authors are expected to report any deviations from the pre-registered protocol, but verification of such reporting by journals is currently inconsistent. We encourage peer reviewers to assess this criterion.
Inclusion and Exclusion Criteria
Once the aims of the systematic review and meta-analysis are established, the next step involves clearly defining inclusion and exclusion criteria. This can include types of studies to be considered, a target population or setting, specific outcome measures, a date range, language criteria, and other relevant factors. These criteria should be beta tested on a set of abstracts and study reports to determine if they need to be redefined or modified (Field & Gillett, 2010). Properly defining and testing inclusion and exclusion criteria help prevent the introduction of subjective bias and ensure the integrity and reliability of the analyses and findings (Field & Gillett, 2010).
For meta-analyses focusing on the efficacy of an intervention, clear inclusion and exclusion criteria are essential. For example, in examining the effectiveness of MBIs, what constitutes an MBI (and what does not) must be clearly defined; some reviews may include only studies where mindfulness meditation is a central and primary component of the intervention (e.g., Khoury et al., 2015), whereas others may require that mindfulness practices be incorporated into every session with meditation being a primary focus. Researchers may also restrict inclusion to studies that required direct meditation practice, as opposed to non-meditative mindfulness techniques. Alternatively, some researchers may opt for simplicity by only including studies with well-established MBIs (e.g., Mindfulness-Based Stress Reduction, MBSR; Khoury et al., 2015; Mindfulness-Based Stress Reduction and Mindfulness-Based Cognitive Therapy, MBSR/MBCT; Querstret et al., 2020).
Many different inclusion and exclusion criteria are valid, and in general, if choices are explained, there is little cause for concern. Additional issues to consider and explain might include the exclusion of a subgroup of the target population (as done by Chai et al., 2022 who assessed MBIs among people with schizophrenia), mean age cutoffs (e.g., Johnson et al., 2023), or the inclusion/exclusion of movement-based mindfulness practices (e.g., Verhaeghen, 2023). Meta-analysts may also choose to include or exclude different definitions or operationalizations of mindfulness, such as including studies that used objective measures (e.g., alpha and theta brainwaves) versus studies that used self-reported outcome measures. Duration of an intervention is another component that might be part of an inclusion/exclusion list, for example, only including MBIs longer than 6 weeks (e.g., Lam et al., 2022) or excluding interventions with only one session (e.g., Han & Kim, 2023).
For correlational meta-analyses, the primary inclusion and exclusion criteria concern the measures used to assess the concepts, symptoms, or behaviors under investigation, for example, in reviews examining the relationship between dispositional mindfulness and other variables. Correlational meta-analyses typically involve two steps: first, researchers must decide which measures of mindfulness to include or exclude. Some reviews may choose to incorporate all validated measures of trait mindfulness (e.g., Per et al., 2022), while others may prefer to focus on specific measures (e.g., only the FFMQ; Carpenter et al., 2019). Second, it is essential to define which measure(s) of the targeted concept, symptoms, or behaviors should be included or excluded. A clear rationale should be provided for the inclusion or exclusion of measures of dispositional mindfulness, as well as of the other variables under investigation.
Population of Interest
Once the interventions or targeted concepts are well-defined, the population of interest must be clearly specified. The targeted population can vary widely in specificity, from very narrow to broadly defined, depending on the focus of the researchers and the research questions being addressed. In the latter case, it is important to conduct analyses and report findings separately for each subpopulation (e.g., clinical versus non-clinical populations). Additionally, clear and empirically driven criteria for selecting the population need to be established. For instance, when targeting youth or the elderly, the age range should be clearly defined based on established developmental models.
It is worth noting that gaps exist in most scientific research, including mindfulness studies, which often include samples from primarily Western and White populations. This presents a significant limitation for the generalizability of the findings to non-White/non-Western contexts. Future primary research and reviews should focus on the inclusion of racial, ethnic, and other marginalized groups to determine best practices for diverse populations.
Determining Eligible Outcome Measures
For meta-analyses aiming to assess the effectiveness of interventions, it is imperative to define the outcome measures. These can range from specific outcomes, such as anxiety or depression, to broader categories such as any psychological or physiological symptoms. Additionally, outcome measures can be based on a specific instrument, such as the Beck Depression Inventory-II (BDI-II; Beck et al., 1996) for measuring depression, or could include any validated measures of depression. The latter approach is typically recommended unless there is a specific rationale for including only one measure.
When selecting eligible outcome measures, it is crucial to include only validated measures applicable to the targeted population. For instance, if the target population is youth and the outcome is depression, only measures of depression validated with youth should be included. Including non-validated measures for the targeted population can result in inaccurate analyses and findings, as the measure might suffer from low internal consistency or fail to accurately assess the targeted outcomes. Alternatively, a subgroup analysis could be completed to compare the mean effect sizes from studies with validated outcome measures vs. non-validated (see the “Moderator Analyses” section).
Language Filters
Researchers should seek to include publications written in other languages besides English. The widespread availability of software tools can facilitate the translation of non-English reports, thereby allowing for a broader range of reports written in other languages to be included (although verification of the translation by a fluent speaker is recommended). This can enhance the generalizability of the findings to different geographical and cultural contexts.
Date of Publication or Completion
Researchers need to indicate the date range they will include for their search and the rationale for their decision. The date range is typically determined by the existing knowledge about the targeted field, specifically the existence of previous systematic reviews/meta-analyses with similar objectives, target populations, and outcome measures. In such cases, the authors might aim to replicate or append the existing review while adding new studies that were published after the previous review. Limiting the date range with no justification is a methodological weakness, as relevant studies could easily be missed.
While it is understandable that the search date will be earlier than the date of submission of the review, it is important to note that most journals would prefer a search date within 1 year (or less). This ensures that the review includes the most recent and relevant studies available at the time of publication and boosts the impact of the review paper.
Study Design
Quantitative intervention-based meta-analyses often only include RCTs to enhance the quality of the evidence. Although including only RCTs is justified, it can be limiting when reviewing an emerging field of research, as well-designed RCTs tend to take considerable time and effort before reaching publication. In addition, including only RCTs does not guarantee a high quality of evidence, as RCTs can be of low quality (e.g., lack of allocation concealment). Alternatively, a subgroup analysis can be completed comparing the magnitude of the effect sizes of RCTs to those of non-randomized study designs to assess whether there is a difference between the two mean effect sizes (see the “Moderator Analyses” section).
For correlational meta-analyses, most of the primary studies will likely use a cross-sectional or longitudinal design. Including different study designs in one meta-analysis can be done. For recommendations on the appropriate management of different study designs, see the “Data Management and Analyses” section.
Definitions of Search Terms
The first step in defining search terms is to ensure that the terms will include studies that align with the desired type of intervention, population, outcomes, and study design. Additionally, these terms should be specific enough to exclude studies that do not meet the defined criteria. For instance, if the review aims to include mindfulness-based interventions specifically, but not necessarily all meditation-based studies, it would be recommended to include mindful* as a search keyword rather than meditat*. The asterisk (*) acts as a truncation symbol (the search term will retrieve results containing variations of the word “mindful,” such as “mindfulness,” “mindfulness-based,” and so on). This approach ensures that the search yields all relevant studies while minimizing irrelevant results.
It is also useful to look up or consult a research librarian about common spellings or terms in other countries. For example, studies written in US English may use “somatization” whereas those in UK English may use “somatisation.” Searching all relevant terms, using variations of spellings, will ensure a thorough literature search.
Gray Literature
The decision to incorporate gray literature (unpublished studies or studies distributed outside of traditional publishing methods) into a meta-analysis should be guided by several factors, including the potential for published literature to introduce biases that cannot be adequately addressed through meta-analytical methods such as publication bias analyses (Song, 2023). Furthermore, the novelty of the targeted field and the availability of existing publications should also influence this decision. Generally, searching and utilizing gray literature are recommended to mitigate publication bias and potential inflation of the mean effect size estimate.
Non-Primary Literature
Meta-analysts must be careful to identify and document companion articles. Companion articles are those that use the same data as an already published article. Because the data are the same, companion articles need to be eliminated from the meta-analyses, and the reason for non-inclusion should be documented. Including more than one report using the same dataset violates the assumption of independence of samples (i.e., that all participants are counted only once). Companion articles can be recognized during the data coding process; meta-analysts should identify any coincidences in coded data (for example, two articles having an intervention group size of 43 and a control group size of 41, identical outcome measures, or a similar set of author names). If a companion study is identified, the meta-analyst has the option to select either study but not both. Some meta-analysts prefer to prioritize the article that was published first to maintain consistency or to prioritize the article that includes a more comprehensive report of the data.
Sufficient Power
Including only sufficiently powered studies in systematic reviews and meta-analyses may appear to be a reasonable recommendation to enhance the quality of synthesized evidence (Folk & Dunn, 2023). However, there are several caveats to consider. First, establishing a fixed sample size based on a predetermined power level, such as the commonly used 80%, may not be empirically valid, as the effect size can vary depending on numerous factors, including study design, intervention type, target population, and measurement tools (Folk & Dunn, 2023). Second, by setting a minimum sample size threshold for study inclusion, authors risk excluding valuable studies with smaller sample sizes, thereby introducing bias into the review (Khoury, 2023).
An alternative approach is to include studies with varying power levels and treat power as a moderator, exploring whether higher power influences study outcomes. Comparing results from sufficiently powered studies with those from studies with low power can shed light on potential differences and their underlying causes. This approach allows authors to include power in the review while minimizing the risk of bias associated with strict sample size criteria.
Search Strategies
An iterative and transparent process ensures that the search results are relevant and manageable for further review. We recommend having more than one researcher perform searches to cross-check results, and report how many researchers were involved with the searches and how their results were managed and combined.
Databases
The selection of databases should align with the review’s focus area. While there is no set number of databases required for the search, researchers should include as many relevant databases as possible and consider the search complete when additional databases return only duplicates, rather than limiting the number of databases before the search. In some cases, such as with clinical research, that may mean more than ten relevant databases. Common databases encompassing mindfulness research include PubMed, Medline, CINAHL, Embase, ProQuest, PsycINFO, Scopus, ClinicalTrials.gov, ISRCTN, Web of Science, ERIC, Sage, ScienceDirect, Cochrane Library, and Google Scholar. Key sources for gray literature include theses and dissertations, accessible through databases like ProQuest. Moreover, institutional preprint websites such as PsyArXiv, repositories, web portals, and conference proceedings serve as supplementary resources for obtaining gray literature. Software tools may be used to identify and eliminate duplicate records retrieved from different databases. Researchers should list the software tools used and provide a rationale for their selection.
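As one illustration of automated deduplication, the base-R sketch below drops exact DOI matches and near-duplicate titles from a combined export of search records; the file name and column names are hypothetical, and dedicated tools (e.g., Zotero or systematic review software) perform more sophisticated matching.

```r
# Sketch: simple deduplication of combined database exports (hypothetical
# "records.csv" with doi and title columns).
records <- read.csv("records.csv", stringsAsFactors = FALSE)

# Normalize titles (lowercase, strip non-alphanumerics) before matching
norm_title <- gsub("[^a-z0-9]", "", tolower(records$title))
dup <- (duplicated(records$doi) & records$doi != "") | duplicated(norm_title)
unique_records <- records[!dup, ]
```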
Hand Searching and Bibliographic Searching
Following a database search, it is recommended that the authors conduct a thorough search of the bibliographies of select publications (bibliographic searching). This process helps in minimizing the number of omitted papers that fit the selection criteria. Another method is hand searching, where researchers identify specific journals that are likely to include articles relevant to their study aims, and they search a certain number or date range of issues. With many journals making past issues available online, hand searching is certainly not as tedious as it used to be.
PRISMA Flowchart
The PRISMA flowchart (Fig. 1; Page et al., 2021) includes (1) the number of identified records from the selected databases, including the number of records per database and the total number of records; (2) the number of records removed before screening (duplicates, books, ineligible records identified by automation tools); (3) the number of records screened, i.e., those where only the title/abstract is consulted; (4) the number of records excluded after screening based on inclusion/exclusion criteria (if a software tool is used for this process, it should be indicated); (5) the number of full articles retrieved; (6) the number of reports excluded based on inclusion/exclusion criteria, specifically the number of reports excluded per criterion; and (7) the total number of studies included in the review.
Case Sample: A systematic review and meta-analysis of the effectiveness of mindfulness-based interventions for youth with anxiety.
Inclusion criteria:
1. Intervention type: established and standardized mindfulness-based interventions such as MBSR or MBCT.
2. Target population: youth aged 11 to 18 years.
3. Outcomes: anxiety, defined as symptoms or formal anxiety disorders including generalized anxiety disorder (GAD), social anxiety disorder (SAD), panic disorder (PD), or specific phobias (SP).
4. Outcome measures: any validated measure of anxiety in youth.
5. Study design: all RCTs, regardless of pre-registration or power level.
6. Language: English or French (in this example, the research team was fluent in both).
7. Date: from the earliest available date to the present.
Exclusion criteria:
1. Intervention type: interventions incorporating mindfulness as part of another treatment, such as Acceptance and Commitment Therapy (ACT) or Dialectical Behavior Therapy (DBT); interventions using other forms of meditation (e.g., transcendental meditation or Loving-Kindness Meditation (LKM)); interventions based on meditation instruction; and meditation retreats.
Search strategy:
1. Search query: (mindful* or mbsr or mbct) and (youth or young or adolescen* or teen*) and (anxi* or phobia or worry or fear or nervous* or GAD or SAD or PD or SP).
2. Databases: PubMed, Medline, and PsycINFO.
3. Additional searches: hand search of Mindfulness for the past 5 years; bibliographic searching of mindfulness meta-analyses published in the past 5 years. If all articles found were already identified in the database search, discontinue; if new articles are uncovered through hand or bibliographic searching, expand to the past 10 years, and repeat for each additional 5-year period.
Data Management and Analyses
Data Management
A plan for data management should ideally be in place prior to screening. Search-related outcomes can be managed using an Excel spreadsheet, which is a relatively simple approach, or using dedicated software such as the bibliographic managers Zotero and EndNote; these programs are useful for managing citations, tracking duplicate citations, and generating a reference section.
Data Extraction
Extracting data for a meta-analysis is a multi-step process. The first step is to develop a codebook: a set of instructions specifying the inclusion criteria, what data to extract from each included article, and how to store those data. For categorical data, for example, a meditation intervention may be coded as 1 = yoga, 2 = breathing exercises, 3 = meditation only, and so on. To establish reliable codes, all coders must apply the codebook in the same way.
Other commonly coded information includes general study characteristics, such as authors, title, year of publication or completion as well as whether the study was funded, sample size, and drop-out rate. Intervention characteristics include the name of the intervention or program, duration, number of sessions, time-length of sessions, the setting, the facilitator/teacher type, the delivery modality, whether the home practice was assigned, and whether the study included a control group. Participants’ characteristics include sex, gender, race, ethnicity, mean age and/or age range, sexual orientation, and health diagnosis.
The next necessary category to code is outcome data, including the name of the outcome measure, the direction of effect (i.e., what does a lower number or higher number indicate for that outcome measure), if authors cite validation data, and if it is a self-reported or objective measure. Meta-analyses can assess if there are differences in effect size based on outcome measure or type of outcome measure (objective vs subjective, for example).
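To illustrate, a single row of a coding sheet might look like the following sketch; every column name and category label here is a hypothetical example of the fields discussed above, and a real codebook would also spell out the coding rules for each field.

```r
# Sketch: one hypothetical row of a meta-analytic coding sheet.
codebook_row <- data.frame(
  study_id        = "Smith2021",
  intervention    = factor(3, levels = 1:3,
                           labels = c("yoga", "breathing exercises",
                                      "meditation only")),
  n_mbi           = 43,    # final analyzed n, mindfulness condition
  n_ctrl          = 41,    # final analyzed n, control condition
  outcome_measure = "PSS-10",
  lower_is_better = TRUE,  # direction of effect for this measure
  self_report     = TRUE   # self-reported vs. objective measure
)
```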
All available statistics should be extracted from each study report; meta-analysis software can manage many different types of data (Field & Gillett, 2010; Lin & Aloe, 2021). In Comprehensive Meta-Analysis (CMA; Borenstein et al., 2022) and R Software (R Core Team, 2021), for example, there are over 100 different types of statistical data that can be included in a meta-analysis to calculate effect sizes.
Sometimes the data reported in a study report is not adequate to calculate an effect size. There are several strategies for obtaining these data. One strategy is to determine if open-access data is available. Open-access data has been a requirement of NIH-funded studies since January 2023 (National Institutes of Health, 2023) and is increasingly a requirement for publication in journals. A second strategy is to request the missing data directly from the corresponding author of the study. If the missing data cannot be obtained, the article must be excluded, and the reason for exclusion (inadequate data) should be recorded in the PRISMA flowchart (Page et al., 2021).
The best practice is to have two or more researchers code each article, calculate inter-coder reliability, and ultimately resolve coding discrepancies via discussion. Systematic review software (e.g., Rayyan; Ouzzani et al., 2016) can assist with this process. Two or more coders provide more reliable and transparent coding, minimize data entry errors, and increase confidence in the results of the meta-analysis (Bossert et al., 2023). The categories of the codebook should be available to readers for full transparency of the research rigor (often included in supplemental materials). Although multi-author coding has long been recommended for meta-analyses (Russo, 2007), limitations in budgets and time may make it tempting to forego this step. For example, of the 19 meta-analyses published in Mindfulness between 2022 and 2024, eight had only a single coder.
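Inter-coder reliability for categorical codes can be quantified with Cohen’s kappa; the base-R sketch below implements the standard formula, where coder1 and coder2 are hypothetical vectors of the codes each rater assigned to the same set of studies (dedicated packages such as irr offer additional statistics).

```r
# Sketch: Cohen's kappa for two coders (base R).
cohen_kappa <- function(coder1, coder2) {
  cats <- union(unique(coder1), unique(coder2))
  tab  <- table(factor(coder1, levels = cats),
                factor(coder2, levels = cats))
  po   <- sum(diag(tab)) / sum(tab)                      # observed agreement
  pe   <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
  (po - pe) / (1 - pe)                                   # kappa
}
```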
Data Analyses
Choosing a Model
Three conceptual models are used in meta-analysis: fixed effect, fixed effects (plural), and random effects (Borenstein, 2019; Field & Gillett, 2010). A fixed-effect model supposes that researchers are sampling from one population with a fixed average effect size for the outcome being explored (Borenstein et al., 2022, pp. 3–4; Field & Gillett, 2010). The studies must have identical methods and populations (Borenstein, 2019, p. 3). Researchers using this method are able to make an inference about one specific population (Borenstein, 2019, p. 3). In contrast, a fixed-effects (plural) model is used when a researcher has deliberately selected studies from different populations (Borenstein, 2019, p. 3). The studies have not been sampled from the total available studies, and they therefore cannot return an effect size for all populations (Borenstein, 2019, p. 7). It is incorrect to attempt to generalize the results of a fixed-effect or fixed-effects meta-analysis to any other population or setting than the one used in the study (Borenstein, 2019). Finally, the random-effects model is used when researchers recognize all available studies (e.g., mindfulness studies), sample as many as possible (i.e., pull studies from the literature), and aim to generalize results back to all populations and settings included in the studies (Borenstein, 2019, p. 3). This includes when a meta-analysis includes studies with different populations and different study designs (Borenstein, 2019). Whenever meta-analysts retrieve studies from the literature, they will have different populations, and therefore, a random-effects model must be used (Borenstein, 2019, p. 15). Even if heterogeneity tests are not significant, the populations are different, so no other model is appropriate (Borenstein, 2019, p. 15). In meta-analyses that have high variability, the fixed-effects model often delivers a larger effect size, which can be tempting to report, but this would be incorrect (Borenstein, 2019, p. 15). It is also incorrect to use a fixed-effect model when there are too few studies to estimate between-studies variance, as the use of the fixed-effect model will only increase error in reporting (Dettori, 2010). Using the random-effects model in mindfulness research with primary studies pulled from the literature or trial registration sites is essential.
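To make the model choice concrete, the following minimal sketch fits a random-effects model in R using the open-source metafor package (one of several suitable tools, alongside CMA); the data frame dat is a hypothetical object with one row per study, where yi holds each study’s effect size and vi its sampling variance, following metafor’s conventions.

```r
# Minimal sketch (assumes a hypothetical data frame "dat" with one row per
# study: yi = effect size, vi = sampling variance).
library(metafor)

# Random-effects model; REML is metafor's default estimator for the
# between-study variance (T2).
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)  # pooled effect, CI, T2, I2, and heterogeneity tests
```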
The fixed-effect and fixed-effects models do not account for variability in effect sizes across studies, as they assume the average effect size represents the true effect size for the entire population (Borenstein, 2019, p. 13). Typically, the goal of a publishable meta-analysis is to be able to assess an intervention within a population (e.g., medical students, prisoners, breast cancer survivors) and generalize findings to similar populations (e.g., all US medical students, prisoners in different facilities, breast cancer survivors in different regions) with the expectation that the effect size will fall within the prediction interval (Borenstein, 2019, p. 24). The fixed-effects model, however, cannot support such a generalization. Therefore, using this model in a meta-analysis without clearly stating that the findings are limited to the specific population studied may lead to ethical concerns. If the fixed-effects model is applied, it should be explicitly noted that the effect size only pertains to that specific group and is not a reliable prediction for other populations. To avoid potential confusion and to better address the aims of a meta-analysis, the random-effects model is generally preferred. In the 19 recent meta-analyses published in Mindfulness, 17 used the random-effects model and the other two did not specify the model they applied.
Although the random-effects model will virtually always be the correct choice in mindfulness meta-analyses, there are several limitations of the random-effects model (Borenstein, 2019, pp. 26–35). Namely, meta-analysts will likely violate some of the assumptions of the random-effects model and should recognize and report how these violations may affect results (Borenstein, 2019, p. 26). The assumptions are (1) the universe of studies is well-defined and relevant to our research questions, (2) the studies that were completed are a random sample of studies from all possible studies, (3) the studies are an unbiased sample of all the studies completed, and (4) we have enough studies to return an accurate between-study variance estimate (Borenstein, 2019, p. 26). In retrieving studies from the literature, many of these assumptions may be violated; for instance, the universe of studies may not be well-defined because each study has its own distinct inclusion/exclusion criteria (Borenstein, 2019, p. 29). The studies may not use randomization, and/or they may use convenience samples (Borenstein, 2019, p. 29). The sample of studies may also be biased for a variety of reasons, including publication bias (Borenstein, 2019, p. 29). Finally, estimating the between-study variance with reasonable accuracy may require on the order of 20 primary studies, a condition many meta-analyses do not meet (Borenstein, 2019, p. 30).
Effect Size
A comprehensive discussion of statistical pros and cons is beyond the scope of this article, but there are several ways to calculate an effect size. Cohen’s d is often compared to Hedges’ g (an adjusted Cohen’s d; Field & Gillett, 2010). Some recent evidence suggests that Hedges’ g is not as unbiased as once thought, and Cohen’s d may be the more accurate choice (Lin & Aloe, 2021). According to Cohen, the effect size of Cohen’s d can be classified as small (0.20), moderate (0.50), or large (0.80; Sullivan & Feinn, 2012). However, a mean effect size may not adequately describe the data, particularly if some subgroups benefit greatly from an intervention and others do not (Borenstein, 2019). Therefore, providing a caveat when reporting a mean effect size may be necessary, and moderator analyses may assist in clarifying results.
The total number of study participants must be identified to adequately calculate an effect size. Importantly, for experimental and interventional studies, the sample size of each condition (i.e., the mindfulness condition vs. control condition) should be recorded. Providing an accurate sample size for each condition ensures a correctly calculated effect size estimate. Determining the correct sample size can be challenging as it requires careful reading of the study to find the final sample size after attrition, missing data, or removed data.
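As an illustration of effect size computation, the sketch below uses metafor’s escalc() function; with measure = "SMD" it returns the bias-corrected standardized mean difference (Hedges’ g) and its variance. All column names (means, SDs, and per-condition sample sizes after attrition) are hypothetical.

```r
# Sketch: per-study standardized mean differences from summary statistics.
library(metafor)

dat <- escalc(measure = "SMD",                               # Hedges' g
              m1i = mean_mbi,  sd1i = sd_mbi,  n1i = n_mbi,  # mindfulness arm
              m2i = mean_ctrl, sd2i = sd_ctrl, n2i = n_ctrl, # control arm
              data = coded_studies)
head(dat[, c("yi", "vi")])  # effect size (yi) and its variance (vi) per study
```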
Combining or Separating Study Designs and Control Conditions
Because of the difference in methodology and, therefore, potential biases, meta-analysts should consider separating study designs; for example, single-group pre/post designs separate from RCTs (Borenstein, 2019). This is commonly managed by performing one search but then conducting two separate meta-analyses. Another alternative is to include the design type as a moderator analysis (see the “Moderator Analyses” section). Because quantitative study designs have different levels of rigor (with RCTs tending to have greater rigor), combining the effect sizes from all study designs may produce a summary effect size that misrepresents the actual efficacy of an intervention (Borenstein, 2019). Similarly, combining control conditions (active and passive) could also influence the summary effect size, as intervention groups compared with an active control would theoretically yield a smaller effect size than those compared with a passive control (Au et al., 2020). However, these aspects of primary studies can be coded and assessed objectively using bias assessment tools (see the “Assessing Risk of Bias in Primary Studies” section), and a subgroup analysis could be performed. Finally, studies with both self-reported and objective measures may be combined in one meta-analysis, but the data should be coded to allow a subgroup analysis. If there are insufficient study-level effect sizes to conduct a subgroup analysis that compares objective and self-reported measures, the limitations of combining subjective and objective measures should be acknowledged. In mindfulness research, outcomes are often self-reported, making them more vulnerable to response, confirmation, and historical biases, among others (Bauhoff, 2014), something that should be mentioned in the limitations section of a meta-analysis.
Direction of Effect
An important component to be aware of when coding primary studies is that not every outcome measure has the same direction of effect. This simply means that for some outcome measures lower scores indicate improvement, but for other outcome measures higher scores may indicate improvement. In mindfulness research, this may become relevant when measuring stress outcomes from a mindfulness intervention, for example. If some primary studies have a stress outcome measure in which a lower score means improvements in stress, but some have a stress outcome measure in which a high score means improvements in stress, it is important to inform the reader which direction you have chosen to indicate improvement. In CMA and other meta-analytic software, the direction of effect must be specified for each primary study to calculate an accurate effect size.
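A simple way to harmonize the direction of effect, sketched below under the assumption that the coded data include a hypothetical logical column reverse_scored flagging measures on which lower scores indicate improvement, is to flip the sign of those effect sizes before pooling so that positive values mean improvement for every study.

```r
# Sketch: harmonize the direction of effect before pooling
# (reverse_scored is a hypothetical coder-entered flag).
dat$yi <- ifelse(dat$reverse_scored, -dat$yi, dat$yi)
```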
Assessing Heterogeneity and Related Statistics
The heterogeneity among effect sizes in a meta-analysis must be assessed; heterogeneity is expected in mindfulness research due to the range of study designs, interventions, durations, and populations across a collection of eligible mindfulness studies. For the mean summary effect size, both the prediction interval and the confidence interval should be reported. The prediction interval indicates the range of effect sizes across the population and gives a measure of the dispersion of effect sizes, whereas the confidence interval estimates the accuracy of the mean effect size (Borenstein, 2019). The Knapp-Hartung adjustment is also recommended, as it provides a wider and more accurate confidence interval for the mean effect size by basing it on the t distribution instead of the narrower Z distribution (Jackson et al., 2017).
Other related statistics that current research supports include I2, Tau-squared (T2), and Tau (T). I2 is the proportion of the variance in observed effect sizes that reflects variance in the true effects (T2) rather than sampling error (Borenstein, 2019). Importantly, I2 does not specify the total amount of variance and therefore cannot accurately be used to report a “level” of heterogeneity, as is commonly seen (e.g., 25%, 50%, or 75%; Borenstein, 2019, p. 103); however, the I2 value is useful for determining whether the variance in observed effect sizes is representative of the population (Borenstein, 2019). T2 is the variance of true effects (as opposed to sampling error), and T is the standard deviation of true effects, used to compute the prediction interval (Borenstein, 2019). The Q-within value and its respective p-value do not need to be reported for the mean summary effect size, because Q-within only applies to a fixed-effect model in which all studies are assumed to share a common effect size (and Q is merely a sum of squares on the way to finding T2; Borenstein, 2019).
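The sketch below shows how these statistics might be obtained with metafor, again assuming the hypothetical data frame dat; test = "knha" applies the Knapp-Hartung adjustment, and predict() returns both the confidence interval and the prediction interval.

```r
# Sketch: heterogeneity statistics with the Knapp-Hartung adjustment.
res <- rma(yi, vi, data = dat, method = "REML", test = "knha")
predict(res)    # pooled effect, 95% CI (ci.lb/ci.ub), 95% PI (pi.lb/pi.ub)
res$I2          # proportion of observed variance reflecting true-effect variance
res$tau2        # T2: variance of the true effects
sqrt(res$tau2)  # T: standard deviation of the true effects
```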
Sensitivity Analyses
Including a sensitivity analysis in a meta-analysis is essential, as it helps determine the robustness of the findings by assessing how different methodological choices impact the results (Borenstein et al., 2021, p. 404). There are several ways to perform a sensitivity analysis. Firstly, subgroup analyses can serve this purpose, for instance, examining the effect size difference when only objective outcome measures are included vs. when all outcome measures are included. Secondly, statistical tests can identify outliers, and the effect size can then be recalculated with and without these outliers. Thirdly, comparing different effect size metrics, such as Cohen’s d vs. odds ratio, can provide insight into result stability (Borenstein, 2019, p. 404). Removing studies marked as “high risk of bias” also highlights how bias may influence the overall effect size. Finally, a one-study-removed sensitivity analysis assesses the impact of each study individually, offering a summary effect size and heterogeneity across studies that can then be compared to the original findings (Patsopoulos et al., 2008).
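For the one-study-removed analysis specifically, metafor offers a one-line option, sketched here for a previously fitted random-effects model res (without moderators).

```r
# Sketch: one-study-removed sensitivity analysis. leave1out() refits the
# model k times, omitting one study per fit, so shifts in the pooled
# estimate and heterogeneity can be inspected study by study.
leave1out(res)
```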
Moderator Analyses
Moderator analyses are statistical processes for determining if sample or study characteristics are contributing to the heterogeneity within a meta-analysis and if those moderator variables change the efficacy of an intervention or detect true differences in effect sizes in correlational meta-analyses (Borenstein, 2019; Sackett et al., 1986). The two types of moderator analyses are subgroup analysis and meta-regression. If a study was determined to be appropriate for a random-effects model, as virtually all mindfulness studies should be, all moderator analyses should be conducted with the mixed-effects model (Borenstein, 2019).
Subgroup Analyses
Subgroup analyses are for categorical variables that researchers wish to compare (Borenstein, 2019). In a pre-registered study, subgroup analyses are expected to be reported a priori based on the authors’ hypotheses. We encourage meta-analysts to consider theoretical moderators, such as the length of the intervention, whether the sample was clinical or non-clinical, or whether home practice was required. Theoretically, for example, longer interventions, clinical samples, and interventions that required daily home practice would be expected to have larger effect sizes. It might also be expected that effect sizes vary depending on categorical sample characteristics, such as race, ethnicity, gender, educational level, or previous mindfulness practice. Effect sizes may also be moderated by methodological variables, such as the type of design, control group, intervention delivery (e.g., in person, online), and outcome measure. Other intervention characteristics that may influence the magnitude of effect sizes might include elements such as the delivery of participant education, incorporation of exercise, instructor certifications, or standardized vs. modified program protocols. When comparing protocols, it is important to account for modifications that may deviate from standard protocols (e.g., a study reports using MBSR but removes yoga).
It is common to see subgroup analyses done with fewer than an ideal number of studies. There is no consensus on how many studies should be in a subgroup, though Borenstein recommends at least ten studies per group for a reliable estimate of between-study variance while acknowledging that this minimum number will likely vary depending on the meta-analysis (Borenstein, 2019, p. 201). Subgroup analyses with a minimum of three (k) effect sizes are often adopted, but in such cases, we strongly recommend that the limitations of a small k be clearly stated. Q-between and its respective p-value should be reported for subgroup analyses (Borenstein, 2019, p. 196).
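In metafor, a subgroup analysis can be specified as a mixed-effects model with a categorical moderator, as in the sketch below; design is a hypothetical column coding, for example, RCT versus non-randomized designs, and the QM statistic reported by the model corresponds to the between-groups Q test.

```r
# Sketch: subgroup analysis via a mixed-effects model with a categorical
# moderator (hypothetical "design" column). The QM test is the omnibus
# test of whether subgroup mean effects differ.
res_sub <- rma(yi, vi, mods = ~ factor(design), data = dat)
res_sub
```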
Meta-Regression
Meta-regression can be conducted using continuous predictor variables to determine whether levels of these predictor variables are associated with the relative magnitude of the mean summary effect size (Borenstein, 2019, p. 229). An effect size derived from mindfulness studies can be regressed on predictor variables such as intervention duration, session frequency, session length, participants’ mean age, and so on. The greater the number of effect sizes (k) available for meta-regression tests, the more likely the results will be stable and reliable. A meta-regression on fewer than 5–10 studies or measures may be less accurate, though no set minimum has been established. As with subgroup analyses, plans to conduct meta-regressions should be registered a priori.
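A meta-regression is specified the same way but with a continuous moderator, as in this sketch; weeks is a hypothetical column recording each intervention’s duration, and the slope estimates the change in effect size per additional week.

```r
# Sketch: meta-regression on a continuous moderator (hypothetical "weeks"
# column giving intervention duration).
res_mr <- rma(yi, vi, mods = ~ weeks, data = dat)
res_mr  # the moderator's slope and its test appear in the output
```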
Assessing Risk of Bias in Primary Studies
Assessing the risk of bias in the primary studies included in a meta-analysis provides a level of quality control, giving the reader a sense of how precise the pooled effect size might be. There are many acceptable tools for assessing risk of bias, including the Revised Cochrane Risk-of-Bias tool for Randomized Trials (RoB2; Sterne et al., 2019), and the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I; Sterne et al., 2016). For example, the RoB2 assesses elements such as randomization, allocation concealment, baseline group differences, fidelity of the intervention, missing outcome data, and adherence to assigned interventions (Sterne et al., 2019).
In mindfulness studies specifically, implementing interventions can be particularly challenging due to control group dropouts, which would be accounted for in a risk-of-bias assessment in adherence sections (De Cassai et al., 2023). Fidelity to the intervention is particularly important as a measure of reliability. If modifications were made to the intervention or there was a high drop-out rate, reproducibility is affected and should be reflected in a lower risk-of-bias score (De Cassai et al., 2023). Modifications to the intervention can include changes to mindfulness activities or changes in protocol such as reducing the number of weeks or duration of sessions. There are limitations with all risk-of-bias tools, including that they use the researcher’s subjective judgement in the ratings and not all aspects of study design may be covered (De Cassai et al., 2023). Of the 19 meta-analyses published in Mindfulness from January 2022 to 2024, 16 used a validated risk-of-bias tool.
Publication Bias
Studies with significant findings are more likely to be published than those with null findings because researchers tend not to submit, and reviewers tend to reject, manuscripts reporting null findings (Dickersin et al., 1992; Hedges, 1984). Because studies with larger effect sizes are more likely to be published than studies with small or null effect sizes (especially when sample sizes are also small), publication bias can lead to overestimating the effect sizes in meta-analyses. For example, over the course of 60 years of research in psychology, more than 95% of published studies reported positive findings (Scheel et al., 2021; Sterling, 1959). This proportion drops dramatically to 44% in pre-registered studies that are accepted for publication prior to data collection based solely on the study protocol (Scheel et al., 2021). Because publication bias may be substantial, meta-analysts must seek to estimate and interpret publication bias.
Tests of publication bias estimate the extent to which there may be “missing” studies with null effects. Bias tests may also enable researchers to identify a bias in the direction of significant results that are published, suggesting an underlying assumption in the scientific community. For example, a recent meta-analysis noted a possible directional bias in the literature: namely, that studies were more likely to be published if they demonstrated a significant association between personality change and ill-being rather than a significant association between personality change and well-being (Sutton, 2023).
Types of Publication Bias Tests
Publication bias tests seek to identify whether there is heterogeneity in effect sizes, and if so, whether the observed heterogeneity is due to publication bias or other moderating variables (Field & Gillett, 2010). Several tests of publication bias are available. Each has its strengths and weaknesses (please refer to Field & Gillett, 2010, for detailed explanations), and as such, the best practice is to include and interpret several of these tests (Field & Gillett, 2010).
Rosenthal’s fail-safe N estimates the number of unpublished studies that would be needed to reduce an estimated population effect size to non-significance (Rosenthal, 1979). There are variations on this test, such as Orwin’s fail-safe N, which estimates the number of unpublished studies needed to reduce the effect size to a predetermined nominal value (Orwin, 1983). These fail-safe N methods, however, are not reliable when attempting to estimate a mean effect size (as most meta-analyses aim to do), leading to recommendations that these tests not be used (Becker, 2005). In our review of recent meta-analyses in Mindfulness, we found four papers that reported the fail-safe N, and we encourage authors to avoid the use of this test in the future.
Funnel plot-based methods and selection models also assess publication bias (Lin & Chu, 2018). In a funnel plot, study effect sizes are plotted against a measure of precision (e.g., standard error or sample size), providing a visual estimate of publication bias. If the meta-analysis sample is unbiased, the points form a symmetrical funnel around the population effect size. If there is publication bias, the plot will be asymmetric, usually because of a lack of studies with small effects and/or small sample sizes, which would typically appear in the bottom left of the plot (Borenstein, 2019, p. 158). Asymmetry in funnel plots may also arise for other reasons, however, including hidden moderators (Lau et al., 2006), which is another reason multiple publication bias tests are indicated.
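As a minimal illustration, the following Python sketch (the function name is ours; matplotlib is assumed to be available) draws a basic funnel plot with pseudo 95% confidence limits converging on a pooled estimate; dedicated meta-analysis software produces more refined versions.

```python
import numpy as np
import matplotlib.pyplot as plt

def funnel_plot(effects, se, pooled):
    """Plot effect sizes against standard error, inverting the y-axis
    so that large, precise studies sit at the top of the funnel."""
    effects, se = np.asarray(effects, float), np.asarray(se, float)
    fig, ax = plt.subplots()
    ax.scatter(effects, se)
    ax.axvline(pooled, linestyle="--")
    # Pseudo 95% confidence limits around the pooled estimate
    grid = np.linspace(1e-6, se.max(), 100)
    ax.plot(pooled - 1.96 * grid, grid, color="grey")
    ax.plot(pooled + 1.96 * grid, grid, color="grey")
    ax.invert_yaxis()  # smaller SE (larger studies) at the top
    ax.set_xlabel("Effect size")
    ax.set_ylabel("Standard error")
    plt.show()
```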
After identifying publication bias, meta-analysts should also attempt to correct the effect size estimate. In selection models, studies are weighted by the likelihood of their being selected for inclusion in the meta-analysis, using criteria such as publication status or reported significance level, and a corrected effect size is estimated (Duval & Tweedie, 2000). By modeling the process of study selection, these tests take into account the possibility that some studies are more likely to be published than others because of their results. The trim and fill method, for example, first “trims” (removes) the smaller studies that are causing asymmetry in the funnel plot (Duval & Tweedie, 2000). Second, it estimates the population effect size from this trimmed funnel, and finally, it “fills” the plot by restoring the trimmed studies and adding their “missing” counterparts (Duval & Tweedie, 2000). In this way, both the number of missing studies and the corrected effect size are estimated. The trim and fill method assumes, however, that the missing studies have small effect sizes, and it may therefore overcorrect (Vevea & Woods, 2005).
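To make the procedure concrete, below is a minimal sketch of trim and fill based on Duval and Tweedie's L0 estimator, written in Python and assuming the missing studies lie on the left of the funnel (i.e., small or negative effects were suppressed). It is illustrative only; in practice, an established implementation (e.g., trimfill() in the R package metafor) should be preferred.

```python
import numpy as np
from scipy.stats import rankdata

def pooled_fixed(y, v):
    """Inverse-variance (fixed-effect) pooled estimate."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w)

def trim_and_fill(y, v, max_iter=50):
    """Minimal trim-and-fill sketch (Duval & Tweedie's L0 estimator).
    Returns the adjusted pooled estimate and the number of imputed
    studies, assuming suppression of small/negative effects."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    n_trim = 0
    for _ in range(max_iter):
        # Re-estimate the pooled effect without the n_trim largest effects
        kept = np.argsort(y)[: k - n_trim]
        mu = pooled_fixed(y[kept], v[kept])
        # L0 estimate of the number of missing studies
        d = y - mu
        ranks = rankdata(np.abs(d))
        t_pos = np.sum(ranks[d > 0])          # Wilcoxon-type statistic
        l0 = (4.0 * t_pos - k * (k + 1)) / (2.0 * k - 1.0)
        new_trim = min(max(0, int(round(l0))), k - 2)
        if new_trim == n_trim:
            break
        n_trim = new_trim
    if n_trim == 0:
        return pooled_fixed(y, v), 0
    # Fill: mirror the trimmed (rightmost) studies about the estimate
    trimmed = np.argsort(y)[k - n_trim:]
    y_all = np.concatenate([y, 2.0 * mu - y[trimmed]])
    v_all = np.concatenate([v, v[trimmed]])
    return pooled_fixed(y_all, v_all), n_trim
```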
Alternatively, more complex methods weight the likelihood of a study being published based on criteria such as its significance level (Lin & Chu, 2018). For example, Egger’s regression test regresses each study’s standardized effect size on its precision; if there is no bias, the regression intercept is zero (Egger et al., 1997). Alternatives to this test use weighted regression and avoid the need to standardize the effect sizes (Lin & Chu, 2018).
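As a sketch, Egger's test reduces to a simple linear regression with a test of the intercept, as in the following Python example (the function name is ours; scipy is assumed to be available).

```python
import numpy as np
from scipy import stats

def eggers_test(y, se):
    """Egger's regression test: regress each study's standardized
    effect (y / se) on its precision (1 / se). Under no small-study
    effects, the intercept is zero."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    precision = 1.0 / se
    snd = y / se                      # standard normal deviate
    res = stats.linregress(precision, snd)
    # scipy reports the slope's p-value; Egger's test needs the intercept's
    t = res.intercept / res.intercept_stderr
    p = 2.0 * stats.t.sf(abs(t), df=len(y) - 2)
    return res.intercept, p
```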
Publication Bias Recommendations
In the 19 meta-analyses published in Mindfulness reviewed for this article, two papers reported no test of publication bias, and seven reported only a single test. Another six papers reported two tests, usually funnel plots combined with Egger’s test. Only a small minority (n = 4) used at least three different tests of publication bias. Given that there is no “perfect” publication bias test or estimate, we recommend that authors report at least three approaches to estimating publication bias. For a visual estimate of bias, we recommend the funnel plot. It is also essential to provide a quantifiable estimate of the extent of bias, for example with a rank correlation test, and to estimate the corrected effect size with trim and fill or a regression-based method (Duval & Tweedie, 2000; Egger et al., 1997). In reporting and interpreting the results, the focus should be on how any publication bias discovered could influence the magnitude of the reported effect size.
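For the rank correlation approach (commonly attributed to Begg and Mazumdar), a minimal Python sketch follows (the function name is ours; scipy is assumed to be available): it computes Kendall's tau between standardized, centered effects and their variances.

```python
import numpy as np
from scipy import stats

def begg_rank_test(y, v):
    """Rank-correlation test for funnel-plot asymmetry: a significant
    Kendall's tau between standardized effects and their variances
    suggests small-study (publication) bias."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    v_mu = 1.0 / np.sum(w)            # variance of the pooled estimate
    y_star = (y - mu) / np.sqrt(v - v_mu)
    tau, p = stats.kendalltau(y_star, v)
    return tau, p
```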
What to Include in a Discussion Section
When writing a Discussion section, authors should address the limitations that are inherent in, or common to, meta-analyses. First, if there is high heterogeneity among the primary studies, this should be commented on and discussed (Lee, 2019). Heterogeneity is expected and is reported statistically, but we recommend that meta-analysts also hypothesize about its sources. For example, many mindfulness studies use 6 weeks, 8 weeks, 12 weeks, or another duration for their interventions, and an explanation for the chosen length is sometimes, but not always, provided. Comparing outcomes across studies with different intervention lengths is not a clean comparison, particularly if there were too few studies for a meta-regression on intervention length. At the same time, much of the valuable information in a meta-analysis comes from the heterogeneity among the primary studies. Describing the different components of interventions can offer rich insights that may help others design future interventions. Where relevant, authors should comment on differences in the type of mindfulness, session length, intervention duration, number of sessions, setting, delivery modality, and instructor credentials, as this information will be of interest to readers.
Second, meta-analyses are only as good as the primary studies they include (Lee, 2019), which is why it is important to report on the quality of the primary studies using a standardized tool. Because some readers may not realize that low study quality can bias meta-analytic results, this should be pointed out when interpreting the findings. Similarly, if there is evidence of publication bias or other biases in the meta-analysis, the discussion should address how these may have influenced the results. Any ethical issues noted in the primary studies should also be addressed; for example, in some mindfulness studies, participation has been compulsory, raising concerns about personal choice in research participation.
Third, a narrative review of the similarities and differences among the primary studies is clearly necessary, generally accompanied by a table of pertinent details (e.g., Sperling et al., 2023). This helps the reader understand the content of the primary studies, such as what the interventions involved and who was targeted. If particular participant groups are absent from the research, it is important to note that the results of the meta-analysis may not generalize to those groups and that further research is needed.
Additionally, clinical significance, and not just statistical significance, should be discussed if the studies included clinical or potentially clinical populations (e.g., students are not a clinical population when assessed in their learning environments, but many may be seeking help for mental health). For example, a meta-analysis of mindfulness studies may find a small but statistically significant difference in effect sizes between interventions that used peer teaching and interventions that used small-group discussion; in practice, these two interventions might be so similar that clinical significance is unlikely. Trends may also be worth noting, for example, that there were too few studies to conduct a subgroup analysis on seated versus walking mindfulness, but that the walking mindfulness studies all had relatively high effect sizes, suggesting that further research may be warranted.
Finally, it is helpful to future researchers to suggest research directions that follow from the findings of the meta-analysis. For example, if the review finds a strong relationship between variables, such as mindfulness and well-being, but a lack of high-quality intervention studies, the meta-analysts could recommend that future researchers focus their efforts on interventions rather than on further testing a well-established relationship. A “what next” section can be helpful in this regard.
Conclusions
We hope that the information provided here, including the checklist of best practices (Table 1), will be helpful in planning a meta-analysis on mindfulness or related topics. Although conducting a meta-analysis requires considerable time and meticulous attention to detail, its benefits for assessing the current state of research on a topic are substantial. Meta-analyses not only reveal gaps in existing research but also steer future research directions. They can also uncover inconsistencies in research practices and results reporting among primary studies, thereby promoting greater rigor and standardization in future research.
Declarations
Conflict of Interest
The authors declare no competing interests.
Use of Artificial Intelligence
No artificial intelligence was used in the preparation of this manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.