Toward a comprehensive evidence map of overview of systematic review methods: paper 1—purpose, eligibility, search and data extraction

Abstract

Background

Overviews of systematic reviews attempt to systematically retrieve and summarise the results of multiple systematic reviews. Methods for conducting, interpreting and reporting overviews are in their infancy. To date, there has been no evidence map of the methods used in overviews, thus making it difficult to determine the gaps and priorities for methods research. Our objectives were to develop and populate a comprehensive framework of methods for conducting, interpreting and reporting overviews (stage I) and to create an evidence map by mapping studies that have evaluated overview methods to the framework (stage II).

Methods

We searched methods collections (e.g. Cochrane Methodology Register, Meth4ReSyn library, AHRQ Effective Health Care Program) to identify eligible studies for both stages of this research. In stage I, cross-sectional studies, guidance documents and commentaries that described methods proposed for, or used in, overviews were used to develop and populate the framework of methods. Drafts and multiple iterations of the framework were discussed and refined by all authors. In stage II, we identified and described studies evaluating overview methods and mapped these evaluations to the framework.

Results

In this paper, we present results for the four initial steps of conducting an overview: (a) specification of the purpose, objectives and scope, (b) specification of the eligibility criteria, (c) search methods and (d) data extraction. Twenty-nine studies mentioned or described methods relevant to one or more of these steps. In the developed framework, identified methods and approaches were grouped according to the steps an overview author would need to undertake. Fifteen studies evaluated identified methods, all of which mapped to the search methods step. These studies either reported the development and evaluation of a new search filter to retrieve systematic reviews or compared the performance of multiple filters.

Conclusion

Gaps in the evaluation of methods were found for the majority of steps in the framework. More empirical studies are needed to evaluate the methods outlined and provide a comprehensive evidence map. The framework is useful for planning these evaluations and for planning methods required to deal with challenges that arise when conducting an overview.

Background

Overviews of systematic reviews synthesise the results of multiple systematic reviews. Overviews are typically broader in scope than systematic reviews (SRs) and may examine different interventions for the same condition, the same intervention for different conditions, or the same intervention for the same condition but focusing on different outcomes [1,2,3,4].

The number of published overviews has increased steadily in recent years, largely in response to the increasing number of SRs [5, 6]. The main steps and many of the methods used in the conduct of SRs, such as independent study selection and data extraction, are directly transferable to overviews [7]. However, many features are unique to overviews and require the application of different or additional methods: for example, methods for assessing the quality or risk of bias of SRs, dealing with the inclusion of the same trial in multiple SRs, dealing with out-of-date SRs, and dealing with discordant results across SRs [6].

Despite the growth in overviews, there has been no evidence map identifying the range of methods for overviews and examining the evidence for using these methods. Evidence mapping is a systematic method used to characterise and catalogue a body of literature pertaining to evidence on a topic and is useful for identifying gaps in the literature [8, 9]. Evidence mapping has been commonly used to map the effects of healthcare interventions; however, the approach may also be applied for mapping the evidence on other topics, such as collating and synthesising evidence on the range and performance of research methods.

It is critical to determine whether there is evidence to support the use of methods for overviews because the validity and reliability of the findings from overviews depend on the performance of the underlying methods. This research aims to provide a comprehensive framework of overview methods and the evidence underpinning these methods—an evidence map of overview methods. In doing so, we aim to help overview authors plan for common scenarios encountered when conducting an overview and enable prioritisation of methods development and evaluation.

Objectives

The objectives of this study were to (a) develop and populate a comprehensive framework of methods that have been used, or may be used, in conducting, interpreting and reporting overviews of systematic reviews of interventions (stage I); (b) map studies that have evaluated these methods to the framework (creating an evidence map of overview methods) (stage II); and (c) identify unique methodological challenges of overviews and methods proposed to address these.

This paper is the first of two companion papers. In this first paper, we present the methods framework for the four initial steps of conducting an overview: (a) specification of the purpose, objectives and scope of the overview; (b) specification of the eligibility criteria; (c) search methods and (d) data extraction methods (stage I). We then map studies evaluating methods to this framework (stage II). In a second paper, we will present the methods framework, and a map of evaluation studies, for the subsequent steps in conducting an overview: assessing risk of bias of primary studies and SRs; certainty of evidence arising from the overview; synthesis, presentation and summary of findings; and interpretation of findings and drawing conclusions (Fig. 1).

Fig. 1

Summary of the research reported in each paper

We use the term ‘methods framework’ (or equivalently, ‘framework of methods’) to describe the organising structure we have developed to group related methods and against which methods evaluations can be mapped. The highest level of this structure is the broad steps of conducting an overview (e.g. search methods). The methods framework, together with the studies that have evaluated these methods, form the evidence map of overview methods.

Methods

A protocol for this study has been published [10]. The methods for the two stages (Fig. 2) are now briefly described, along with deviations from the planned methods.

Fig. 2

Stages in the development of an evidence map of overview methods

Stage I: development and population of the framework of methods

Search methods

We searched MEDLINE from 2000 onwards and the following methods collections: Cochrane Methodology Register, Meth4ReSyn library, Scientific Resource Center Methods library of the AHRQ Effective Health Care Program, and Cochrane Colloquium abstracts. Searches were last run on December 2, 2015 (see Additional file 1 for search strategies). We also set aside any methods articles that we identified through screening citations as part of a related research project to develop a search strategy to identify overviews in MEDLINE [5]. To identify other potentially relevant studies, we examined the reference lists of included studies and undertook forward citation searches of seminal articles using Google Scholar, Scopus and Web of Science. We contacted authors of posters to retrieve the poster, or the full report of the study, and to ask if they were aware of any related methods articles. We planned to contact researchers with expertise in methods for overviews to identify articles missed by our search, but did not undertake this step due to time constraints.

Eligibility criteria

For the development and population of the framework, we identified articles describing methods used, or recommended for use, in overviews of systematic reviews of interventions.

Inclusion criteria:

  i. Articles describing methods for overviews of systematic reviews of interventions
  ii. Studies examining methods used in a cross-section or cohort of overviews
  iii. Guidance (e.g. handbooks and guidelines) for undertaking overviews
  iv. Commentaries or editorials that discuss methods for overviews

Exclusion criteria:

  i. Articles published in languages other than English
  ii. Studies describing methods for network meta-analysis
  iii. Articles exclusively about methods for overviews of other review types (i.e. not of interventions)

We populated the framework with methods that were different from, or additional to, those required to undertake a SR of primary research. Methods of relevance to overviews that were evaluated in the context of other ‘overview’ products, such as guidelines, were included.

The eligibility criteria were piloted by three reviewers independently on a sample of articles retrieved from the search to ensure consistent application.

Study selection

Two reviewers independently reviewed titles and abstracts for their potential inclusion against the eligibility criteria. Full-text articles were retrieved when both reviewers agreed that inclusion criteria were met or when there was uncertainty. Any disagreement was resolved by discussion or by arbitration of a third reviewer. In instances where there was limited or incomplete information regarding a study’s eligibility (e.g. when only an abstract was available), the study authors were contacted to request the full text or further details.

Data extraction, coding and analysis

One author collected data from all included articles using a pre-tested form; a second author collected data from a 50% sample of the articles.

Data collected on the characteristics of included studies

We collected data about: (i) the type of article (coded as per our inclusion criteria), (ii) the main contribution(s) of the article (e.g. critique of methods), (iii) the extent to which each article described methods or approaches pertaining to each step of an overview (e.g. mention without description, described—insufficient detail to implement, described—implementable), (iv) a precis of the methods or approaches covered and (v) the data on which the article was based (e.g. audit of methods used in a sample of overviews, author’s experience).

Coding and analysis to develop and populate the framework of methods

We planned to code articles in NVivo software, applying a coding frame to extract descriptions of methods pertaining to each step of an overview [10]. However, during the initial phases of analysis, we found the extracts difficult to interpret when read out of context because many methods were either sparsely described or were inferred rather than explicit. As a consequence of the difficulty coding these data, we revised our analytic approach. We separated studies that described a method pertaining to a step in the overview process from those that made cursory mention of a method. The subset of articles coded as providing description were read by two authors (CL and SB, JM or SM) who independently drafted the framework for that step to capture and categorise all identified or inferred methods. To ensure comprehensiveness of the framework, methods were inferred when a clear alternative existed to a reported method (e.g. using decision rules or an algorithm to combine eligibility criteria was rarely mentioned, but was clearly an option for multiple sub-steps).

The drafts and multiple iterations of the framework were discussed and refined by all authors, during which we delineated unique decision points faced when planning each step of an overview (e.g. determining eligibility criteria to deal with SRs with overlap, determining how discrepant data across SRs will be handled) and the methods/options available for each. We grouped conceptually similar approaches together and extracted examples to illustrate the options. For example, we categorised all approaches that involved specifying criteria to select one SR from multiple overlapping SRs together, and then listed examples of criteria suggested in included studies (e.g. select most recent SR, highest quality, most comprehensive).

Stage II: Identification and mapping of evaluations of methods

Search methods

In addition to the main searches outlined in the ‘Search methods’ section for stage I, we planned to undertake purposive searches to locate evaluations of methods where the main searches were unlikely to have located these evaluations. For this paper, we undertook a purposive search to locate evaluations of search filters for the retrieval of SRs (Additional file 2), since articles describing the development and evaluation of search strategies for SRs may reasonably not have mentioned ‘overviews’ (or its synonyms) and thus would not be identified in the main searches. For the other steps, the identified methods were specific to overviews, so evaluations were judged likely to be retrieved by our main search.

Eligibility criteria

To create the evidence map, we identified articles describing evaluations of methods for overviews of systematic reviews of interventions.

Inclusion criteria:

  i. SRs of methods studies that have evaluated methods for overviews
  ii. Methods studies that have evaluated methods for overviews

Exclusion criteria:

  i. Articles published in languages other than English
  ii. Methods studies that have evaluated methods for network meta-analysis

We added the additional criterion that methods studies had to have a stated aim to evaluate methods, since our focus was on evaluation and not just application of a method.

Study selection

We used the same process for determining which studies met the inclusion criteria for stage II as for stage I (see the ‘Study selection’ section for stage I).

Data extraction

The only methods evaluations identified were evaluations of search filters for SRs, from which we extracted the data listed in Table 1. We had originally planned to extract quantitative results from the methods evaluations relating to the primary objectives; however, on reflection, we opted not to do this since we felt this lay outside the purpose of the evidence map. Data were extracted independently by two authors (CL, JEM) from four (of 15) studies. The remaining data were extracted by one author (CL).

Table 1 Data extracted from methods studies evaluating search filters for SRs

Assessment of the risk of bias

We planned to report the characteristics of the stage II evaluation studies that may plausibly be associated with bias. For methods evaluations of search filters for identifying SRs, we used assessment criteria informed by Harbour [11]. The assessment criteria included existence of a protocol and validation of the filter on a data set distinct from the derivation set (external validation).

Analysis

The yield and characteristics of the methods evaluation studies were described and mapped to the framework of methods.

Results

Results of the search

We retrieved 1850 records through searching databases and methods collections. A further 1384 records were identified through other sources (methods articles identified as part of a related research project [5], reference checking, and forward citation searching). After removal of duplicate records, 1179 records remained (Fig. 3). From screening titles and abstracts, we excluded 1092 records that were ineligible. We assessed 87 full-text reports for eligibility and excluded 21, with reasons noted in Additional file 3. Of the remaining 66, 42 were included in stage I and 24 in stage II.

Fig. 3

Flowchart of studies retrieved for both stages I and II. *The 42 stage I studies contributed to multiple steps

Our purposive search strategy (dated May 2016) to identify studies evaluating search filters for the retrieval of SRs resulted in the inclusion of three more stage II studies (see Fig. 4 for details), bringing the total number of methods evaluations to 27.

Fig. 4

Flowchart of stage II studies of search filter evaluations

Of the 42 stage I and 27 stage II studies, 29 and 15, respectively, pertained to one or more of the four initial steps in conducting an overview and so are included in this first paper; the remainder will be included in our second companion paper. All 15 stage II studies were evaluations of search filter studies for retrieval of SRs.

Stage I: development and population of the framework of methods

We first describe the characteristics of the included articles (see ‘Characteristics of included articles’; Table 2), followed by presentation of the developed methods framework. This presentation is organised into sections representing the broad steps of conducting an overview (sections ‘Specification of purpose, objectives and scope’, ‘Specification of eligibility criteria’, ‘Search methods’ and ‘Data extraction’; Tables 3, 4, 5 and 6). In each section, we orient readers to the structure of the methods framework, which includes a set of steps and sub-steps (e.g. under ‘Search methods’, the steps are ‘plan the sources to search’, ‘plan the search strategy for retrieval of SRs’, and ‘plan how primary studies will be retrieved’). Components within the tables are referred to using labels and numbers (e.g. 2.1.3). We highlight methods/approaches to deal with commonly encountered scenarios for which overview authors need to plan (see ‘Addressing common scenarios unique to overviews’; Table 7). Our description focuses on methods/options that are distinct from, or add complexity compared with, those of SRs of primary studies, or that have been proposed to deal with major challenges in undertaking an overview. Importantly, the methods/approaches and options reflect the ideas presented in the literature and should not be interpreted as endorsement for the use of the methods. Reporting considerations for all steps are reported in Additional file 4.

Table 2 Characteristics of stage I descriptive studies
Table 3 Specification of purpose, objectives and scope
Table 4 Specification of eligibility criteria
Table 5 Search methods
Table 6 Data extraction
Table 7 Methods and approaches for addressing common scenarios unique to overviews

Characteristics of included articles

The characteristics of the included articles and the extent to which each described methods or approaches pertaining to the initial steps of an overview are indicated in Table 2. The majority of articles were published as full reports (n = 24/29; 83%). The most common type of article was one in which methods for overviews were described (n = 16/29; 55%), followed by articles that examined the methods used in a cross-section of overviews (n = 8/29; 28%), guidance documents (n = 4/29; 14%) and commentaries and editorials (n = 1/29; 3%). Methods for the specification of purpose, objectives and scope (n = 22); specification of eligibility criteria (n = 21); search methods (n = 18) and methods for data extraction (n = 17) were similarly mentioned or described. Relatively few articles described methods across all of the initial steps in conducting an overview (n = 6).

Specification of purpose, objectives and scope

The two steps in the framework under ‘specification of purpose, objectives and scope’ were ‘determine stakeholder involvement in planning the overview (1.0)’ and ‘define the purpose, objectives and scope (2.0)’ (Table 3). In the following, we focus on the methods/approaches and options for the step ‘define the purpose, objectives and scope (2.0)’. Other methods/approaches are similar to those in planning a SR, but have been included in the framework for completeness.

We identified different purposes for undertaking an overview (2.1), including ‘map the type and quantity of available evidence (2.1.1)’, ‘compare multiple interventions with the intent of drawing inferences about the comparative effectiveness of the interventions for the same condition (2.1.2)’ and ‘summarise the effects of an intervention across different conditions, populations, or problems (2.1.4)’. The latter borrows strength when data for a single condition are sparse and a similar mechanism of action of the intervention is predicted across conditions. Options for confirming that an overview is the appropriate type of study for addressing the purpose and objectives (as compared with an intervention review or network meta-analysis) (2.2) included the ‘use of a decision tool (2.2.1)’ or ‘use other reasoning (2.2.2)’. A further identified sub-step was to ‘determine any constraints that will restrict the scope of the overview (2.3)’. Considerations arising from sub-steps 2.1–2.3 will influence whether an overview is conducted to address a narrow or broad question (2.4). This decision is then operationalised in the final identified sub-step, ‘define the objectives using Population, Intervention, Comparison, Outcome (PICO) elements (or equivalent) to develop an answerable question (2.5)’.

Specification of eligibility criteria

The two steps in the framework under ‘specification of eligibility criteria’ were ‘plan the eligibility criteria (1.0)’ and ‘plan the study selection process (2.0)’ (Table 4). In the following, we focus on the step ‘plan the eligibility criteria (1.0)’, which covers methods that are key to dealing with common scenarios and challenges that arise in overviews (Table 7).

A unique decision in planning overviews is to ‘determine methodological eligibility criteria for SRs (1.4)’. Multiple criteria were identified, including approaches for selecting reviews that meet minimum quality criteria, or reviews that take a particular methodological approach (1.4.2). These criteria underpin many of the identified approaches for dealing with SRs with overlap in information and data (1.5). Overlap can arise when SRs with similar topics include one or more identical primary studies. One identified option was to include all SRs that meet the PICO criteria irrespective of overlap, that is, ignore overlap, note overlap, or deal with overlap using other methods (e.g. data extraction, synthesis) (1.5.1). However, other approaches aim to minimise overlap by specifying criteria to select one SR from multiple (1.5.2). These approaches include selecting one SR based on methodological criteria for SRs (see options in 1.4.2), selecting the most comprehensive SR, or excluding SRs that do not contain any unique primary studies (1.5.4). The latter approach may still result in inclusion of multiple overlapping SRs. An inherent complexity in using eligibility criteria to deal with overlap is that using single criteria can result in unintended loss of information through exclusion of important SRs (for example, the most recent SR could be excluded if only the highest quality SR is selected). An approach that overcomes this is to combine multiple criteria in an algorithm (1.5.3).
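Combining multiple criteria in an algorithm (option 1.5.3) can be pictured as a fixed-order decision rule over overlapping SRs. The sketch below is illustrative only: the record fields (a quality score such as an AMSTAR total, the SR's last search date, the set of included primary studies) and the ordering of criteria are assumptions, not recommendations from the literature reviewed.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SystematicReview:
    """Hypothetical record of a candidate SR; field names are illustrative."""
    title: str
    search_date: date           # date the SR's searches were last run
    quality_score: int          # e.g. AMSTAR total, higher is better
    primary_studies: set = field(default_factory=set)

def select_one_sr(overlapping_srs):
    """Select one SR from multiple overlapping SRs by combining criteria
    in a fixed order: highest methodological quality first, then most
    recent search, then most comprehensive (number of primary studies)."""
    return max(
        overlapping_srs,
        key=lambda sr: (sr.quality_score, sr.search_date, len(sr.primary_studies)),
    )
```

Ordering the criteria this way means a lower-quality but more recent SR is never preferred, which is exactly the trade-off (possible loss of the most recent evidence) that motivates making the rule explicit in the protocol.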

Another identified decision was whether to include additional primary studies (1.6). One option was to include primary studies only if pre-specified eligibility criteria are met (1.6.2). Circumstances that may prompt inclusion of primary studies are outlined in 1.6.2.

Search methods

The three steps in the framework under ‘search methods’ were ‘plan the types of sources to search (1.0)’, ‘plan the search strategy for retrieval of SRs (2.0)’, and ‘plan how primary studies will be retrieved, if eligibility criteria determine that primary studies should be included (3.0)’ (Table 5). Search methods for overviews largely parallel those used in a SR of primary studies. Unique considerations relate to the option to restrict searches to SR databases (1.1.1), the use of filters developed to retrieve SRs (2.1), and approaches to searching for additional primary studies.

If additional primary studies are eligible for the overview, authors will need to determine the sequence of searching for SRs and primary studies. The search for primary studies may be done in parallel with the search for SRs (3.1.1), or in sequence, searching first for SRs then for primary studies (3.1.2). The latter strategy focuses on retrieving primary studies where evidence is missing (i.e. where SRs are not up-to-date or where the SRs provide incomplete coverage of the overview question).
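One way to operationalise the sequential strategy (3.1.2) is to date the primary-study search from the included SRs' last search dates, flagging SRs whose searches are older than some threshold as out of date. A hedged sketch; the function name and the two-year threshold are illustrative assumptions, not recommendations from the paper.

```python
from datetime import date, timedelta

def plan_sequential_search(sr_search_dates, out_of_date_after=timedelta(days=2 * 365)):
    """Sketch of the sequential strategy: search for primary studies from the
    earliest last-search date among included SRs (so nothing missed by the
    most out-of-date SR is skipped), and flag SRs whose searches are older
    than the chosen threshold. Threshold is an assumption, not guidance."""
    start = min(sr_search_dates)
    today = date.today()
    out_of_date = [d for d in sr_search_dates if today - d > out_of_date_after]
    return start, out_of_date
```

In practice the primary-study search might instead be restricted per SR or per PICO element; this sketch only shows the simplest global rule.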

Data extraction

The two steps in the framework under ‘data extraction’ were ‘plan the data elements to extract (1.0)’ and ‘plan the data extraction process (2.0)’ (Table 6). We now highlight methods/approaches for dealing with these two steps, with a focus on methods for dealing with scenarios described in Table 7.

An identified sub-step in planning the data elements to extract (1.0) was determining the data to extract about the results from the SRs (1.3). For overviews, this will be driven by the purpose of the overview (e.g. whether the aim of the overview is to summarise results narratively from included SRs, or synthesise the results from component trials, or meta-analyses, from the included SRs). In addition to determining the data to extract about results from SRs, if the eligibility criteria of the overview include primary studies, then the data to extract from primary studies will also need to be determined (1.4).

A complexity that arises when undertaking an overview is the challenge of how to deal with overlapping (2.2) and discrepant (2.3) information and data across SRs. Identified options include extraction of information from all SRs, noting any discrepancies (2.2.1, 2.3.1), or extraction of data and information from only one SR (2.2.2, 2.3.2) based on pre-specified criteria, such as using the most recent SR, or the SR of the highest quality. Alternatively, when there are discrepancies, different data elements (e.g. effect estimates, quality assessments) might be extracted from different SRs that meet certain decision rules (2.3.3), such as the SR that reports the most complete information on effect estimates. Methods for dealing with variation in the information reported and missing data are outlined in sub-step 2.4. In overviews, compared with SRs, there is additional complexity in resolving variation in information reported and missing data since there is an additional source of information (SRs in addition to primary studies).

Addressing common scenarios unique to overviews

Many of the identified methods were proposed to overcome common methodological challenges unique to overviews. Table 7 summarises these scenarios, showing methods that could be used to address each. While the literature reviewed often suggested a single method or step at which a scenario should be dealt with, Table 7 shows that there are multiple options, some of which can be combined.

Stage II: identification and mapping of evaluations of methods

We found no studies that had evaluated methods in the steps of the framework for ‘specification of purpose, objectives and scope’, ‘specification of eligibility criteria’ and ‘data extraction’. Fifteen studies, published between 1998 and 2016, evaluated search filters for the retrieval of SRs (Table 8). One study [12] evaluated the performance of seven bibliographic databases to determine their coverage of SRs. This evaluation mapped to the option ‘select the types of databases to search’ (1.1.1) of the ‘search methods’ step of the framework (Table 5). Of the remaining 14 studies, two compared the performance of multiple published filters [13, 14], four developed new search filters and compared their performance against other published filters [15,16,17,18], and eight developed and evaluated new search filters (but without comparison with other published filters) [19,20,21,22,23,24,25,26]. These evaluations mapped to the option ‘select a published SR filter’ (2.1.1) of the ‘search methods’ step of the framework (Table 5).

Table 8 Characteristics of stage II evaluation of methods studies

The filters were designed to retrieve SRs across a range of databases (CINAHL, DARE, EMBASE, PsycINFO, Epistemonikos, MEDLINE, Simple Web Indexing System for Humans (SWISH) and TRIP). Seven studies developed the gold standard by handsearching journals, three used a combination of handsearching journals and database searches and five used only database searches. The performance measures used included sensitivity/recall, specificity, precision, accuracy and the number needed to read. In terms of risk of bias, none of the evaluation studies referred to a study protocol or noted the existence of one, and only three validated their search filter on a data set distinct from the derivation set [13, 16, 17].
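The performance measures listed above all derive from the 2×2 classification of records retrieved by a filter against the gold standard. A minimal sketch (function name and example counts are illustrative):

```python
def filter_performance(tp, fp, fn, tn):
    """Standard retrieval measures for a search filter evaluated against a
    gold standard set of SRs: tp/fp = relevant/irrelevant records retrieved,
    fn/tn = relevant/irrelevant records not retrieved. Number needed to read
    is the number of records screened per relevant record found."""
    retrieved = tp + fp
    return {
        "sensitivity": tp / (tp + fn),          # also called recall
        "specificity": tn / (tn + fp),
        "precision": tp / retrieved,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "number_needed_to_read": retrieved / tp,
    }
```

For example, a filter retrieving 80 of 100 gold-standard SRs from 1000 records, with 20 false positives, gives sensitivity 0.80, precision 0.80 and a number needed to read of 1.25.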

Discussion

Despite the emergence of overviews as a common form of evidence synthesis, to date, there has been no comprehensive map of overview methods or the evidence underpinning these methods. We aimed to address this gap. A framework was developed for the initial steps in the conduct, interpretation and reporting of an overview (specification of the purpose, objectives and scope; specification of the eligibility criteria; search methods; and data extraction methods) with associated methods/approaches and options. The framework makes explicit the large number of steps and methods that need to be considered when planning an overview and demonstrates some of the added complexity in an overview compared with a SR of primary studies. The framework also demonstrates that challenges in undertaking an overview, such as dealing with overlapping information across SRs, may be dealt with at different steps of the overview process (e.g. specification of eligibility criteria or data extraction). Fifteen evaluation studies were found in stage II, all of which mapped to the ‘search methods’ step of the framework. These studies either developed and evaluated a new search filter or compared the performance of existing search filters to retrieve SRs.

What this study adds to guidance and knowledge about overview methods

Our analysis aligns with findings of other recent reviews in identifying important gaps in guidance on the conduct of overviews [27, 28]. These gaps include patchy coverage of methods (guidance covers selected options but not alternatives) and insufficient description to operationalise many methods (Table 2). While others have concluded there is a lack of consensus over many methods [28], overviews serve many purposes, and different approaches are needed for different purposes. Recognising this, the framework attempts to capture the spectrum of options available to overview authors, providing a tool for systematic consideration of alternative approaches. We highlight the scenarios for which overview authors need to plan and identify methods proposed to tackle each scenario (Table 7). While these contributions help to address the patchy coverage of methods, the framework cannot address the lack of operational detail in current guidance. A forthcoming update of the Cochrane Handbook should help [28], but other guidance will be needed to cover the many methods not applicable to Cochrane overviews. For authors writing guidance, the framework could serve as a checklist to ensure comprehensive coverage of the methods proposed in the literature.

The lack of evaluation studies identified in stage II indicates that there is limited evidence to inform methods decision-making in overviews. For each of the steps in the framework, there is often a range of different methods to use, which could conceivably impact the results and conclusions of the overview, their utility for decision-makers, and the time/resources required to complete the overview. This lack of evaluation of methods [28] means there may be inappropriate variability in the methods employed across overviews (as has been observed [6]). Further, overviews that seek to address the same research question, but which are undertaken using different methods, may reach discordant conclusions.

How might the framework be used by overview authors and methodologists?

The framework may be useful to researchers conducting overviews and to methodologists. As highlighted above, it makes explicit the decisions overview authors need to make when planning an overview. Using the framework as a checklist to plan methods for dealing with common scenarios should lessen the challenges that arise when conducting an overview. Using the framework during protocol development may also reduce the post hoc decision-making that can arise from not being aware of the decisions that need to be made before commencing the overview; less post hoc decision-making may limit potential bias in the process of undertaking the overview. For overview methodologists, the provision of comparative options for each step of the framework facilitates identification (and prioritisation) of methods evaluations that might be undertaken, for example, examining those steps where selection of a different option is hypothesised to have an important impact on the results and conclusions of the overview (discussed below under ‘Future research to refine and populate the framework and evidence map’).

Strengths and limitations

To our knowledge, this is the first attempt to create a comprehensive framework of the many methods proposed for use in overviews. It is also the first study, of which we are aware, to use evidence mapping in the context of methods research. A protocol for this investigation has been published [10], and any post hoc decisions have been documented. During our analysis, we developed an organising structure to group related methods and used consistent language to synthesise the varied descriptions encountered in the literature. We also made inferences to ensure that, where a clear alternative to a described method existed, it was captured in the framework. Both steps helped generate a more uniform and complete inventory of methods than would have been possible by simply collating methods as described.

Methods studies related to overviews are challenging to find other than in specialist methodology registers, such as the Cochrane Methodology Register and the Meth4ReSyn library, meaning that some methods articles may have been missed. We conducted reference checking and forward citation searching in three databases to minimise the number of missed articles. Further, we focused our search on locating articles that used the term ‘overview’ (or related terminology). However, methods that may be applicable to overviews, such as those used in clinical practice guidelines, may not have been located. We did not broaden our search, or specifically examine guidance documents for producing guidelines, to keep the project containable. Our analysis involved piecing together information spread across multiple sources and ‘translating’ varied descriptions of methods into a common language. This process, and the many decisions involved in structuring our framework, required considerable judgement. While the process led to a more complete and uniform description of methods than we identified in any other source, the subjective nature of this analysis means that other researchers may have made different decisions.

Future research to refine and populate the framework and evidence map

Future research will involve seeking input on the framework from methodologists and researchers conducting overviews, in terms of its face validity; that is, the structure of the framework and the comprehensiveness of its steps and identified methods. Hence, the framework will likely be refined and evolve over time. Further, as methods for overviews are evaluated, the evidence map can be further populated. While there is currently too little methods evaluation for a visual representation (or map) of the evidence to be useful, the framework provides the structure for creating this map. Some priority areas requiring evaluation, which we encourage methodologists to consider, relate to the decisions around eligibility and data extraction. For example, what is the effect of selecting one SR from multiple SRs addressing the same topic versus including all SRs? Outcomes of interest may include proximal measures, such as whether eligible primary studies or important data are missed. More distal measures include the time taken to complete the overview, utility for decision-makers, and whether the findings and conclusions of the overview change. Additionally, researchers could examine whether observed effects vary when different eligibility criteria are used to select one SR from multiple SRs. Similar questions can be posed about the effects of extracting data from one versus multiple SRs, or from SRs only versus from primary studies, and so on. Evidence arising from these evaluations should lead to further refinement of the framework and, more importantly, empirical data about the trade-offs associated with alternative methodological approaches.

Conclusions

A framework of methods for conducting, interpreting and reporting overviews of systematic reviews was developed and populated for the initial four steps of undertaking an overview. Studies evaluating methods for overviews were identified and mapped to the framework. Evaluation of methods allows informed choices about the most appropriate methods to use; however, gaps in evaluation were found for the majority of steps, and more evaluation of the methods used in overviews is needed. The results of this research are useful for identifying and prioritising methods research on overviews and provide a basis for the development of planning and reporting checklists.

Abbreviations

AHRQ’s EPC:

Agency for Healthcare Research and Quality’s Evidence-based Practice Center

AMSTAR:

A Measurement Tool to Assess Systematic Reviews

CDSR:

Cochrane Database of Systematic Reviews

CMIMG:

Comparing Multiple Interventions Methods Group

CRD:

Centre for Reviews and Dissemination

JBI:

Joanna Briggs Institute

M-A:

Meta-analysis

N/A:

Not applicable

NNR:

Number needed to read

NR:

Not reported

PH:

Public Health

PICO:

Population (P), intervention (I), comparison (C) and outcome (O)

PROSPERO:

International Prospective Register of Systematic Reviews

RCT:

Randomised controlled trial

SRs:

Systematic reviews

SWISH:

Simple Web Indexing System for Humans

References

  1. Becker LA, Oxman AD. Overviews of reviews. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Hoboken: John Wiley & Sons; 2008. p. 607–31.

  2. Chen YF, Hemming K, Chilton PJ, Gupta KK, Altman DG, Lilford RJ. Scientific hypotheses can be tested by comparing the effects of one treatment over many diseases in a systematic review. J Clin Epidemiol. 2014;67(12):1309–19.

  3. Salanti G, Becker L, Caldwell D, Churchill R, Higgins J, Li T, Schmid C. Evolution of Cochrane intervention reviews and overviews of reviews to better accommodate comparisons among multiple interventions. In: Report from a meeting of the Cochrane comparing multiple interventions methods groups. Madrid: Cochrane Comparing Multiple Interventions Methods Groups; 2011.

  4. CMIMG. Review Type & Methodological Considerations: background paper for the first part of the Paris CMIMG discussion. 2012.

  5. Lunny C, McKenzie JE, McDonald S. Retrieval of overviews of systematic reviews in MEDLINE was improved by the development of an objectively derived and validated search strategy. J Clin Epidemiol. 2016;74:107–18.

  6. Pieper D, Buechter R, Jerinic P, Eikermann M. Overviews of reviews often have limited rigor: a systematic review. J Clin Epidemiol. 2012;65(12):1267–73.

  7. Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, Wentz R. Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records. Stat Med. 2002;21(11):1635–40.

  8. Snilstveit B, Vojtkova M, Bhavsar A, Stevenson J, Gaarder M. Evidence & Gap Maps: a tool for promoting evidence informed policy and strategic research agendas. J Clin Epidemiol. 2016;79:120–9.

  9. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

  10. Lunny C, Brennan SE, McDonald S, McKenzie JE. Evidence map of studies evaluating methods for conducting, interpreting and reporting overviews of systematic reviews of interventions: rationale and design. Syst Rev. 2016;5:4.

  11. Harbour J, Fraser C, Lefebvre C, Glanville J, Beale S, Boachie C, Duffy S, McCool R, Smith L. Reporting methodological search filter performance comparisons: a literature review. Health Inf Libr J. 2014;31(3):176–94.

  12. Rathbone J, Carter M, Hoffmann T, Glasziou P. A comparison of the performance of seven key bibliographic databases in identifying all relevant systematic reviews of interventions for hypertension. Syst Rev. 2016;5:27.

  13. Boluyt N, Tjosvold L, Lefebvre C, Klassen TP, Offringa M. Usefulness of systematic review search strategies in finding child health systematic reviews in MEDLINE. Arch Pediatr Adolesc Med. 2008;162(2):111–6.

  14. Wong SS, Wilczynski NL, Haynes RB. Comparison of top-performing search strategies for detecting clinically sound treatment studies and systematic reviews in MEDLINE and EMBASE. J Med Libr Assoc. 2006;94(4):451–5.

  15. Boynton J, Glanville J, McDaid D, Lefebvre C. Identifying systematic reviews in MEDLINE: developing an objective approach to search strategy design. J Inf Sci. 1998;24(3):137–54.

  16. Lee E, Dobbins M, Decorby K, McRae L, Tirilis D, Husson H. An optimal search filter for retrieving systematic reviews and meta-analyses. BMC Med Res Methodol. 2012;12:51.

  17. Montori VM, Wilczynski NL, Morgan D, Haynes RB. Optimal search strategies for retrieving systematic reviews from Medline: analytical survey. BMJ. 2005;330(7482):68.

  18. White VJ, Glanville JM, Lefebvre C, Sheldon TA. A statistical approach to designing search filters to find systematic reviews: objectivity enhances accuracy. J Inf Sci. 2001;27(6):357–70.

  19. Eady AM, Wilczynski NL, Haynes RB. PsycINFO search strategies identified methodologically sound therapy studies and review articles for use by clinicians and researchers. J Clin Epidemiol. 2008;61(1):34–40.

  20. Golder S, McIntosh HM, Loke Y. Identifying systematic reviews of the adverse effects of health care interventions. BMC Med Res Methodol. 2006;6:22.

  21. Shojania KG, Bero LA. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Eff Clin Pract. 2001;4(4):157–62.

  22. Wilczynski NL, Haynes RB. EMBASE search strategies achieved high sensitivity and specificity for retrieving methodologically sound systematic reviews. J Clin Epidemiol. 2007;60(1):29–33.

  23. Wilczynski NL, Haynes RB. Consistency and accuracy of indexing systematic review articles and meta-analyses in MEDLINE. Health Inf Libr J. 2009;26(3):203–10.

  24. Wilczynski NL, McKibbon KA, Haynes RB. Sensitive clinical queries retrieved relevant systematic reviews as well as primary studies: an analytic survey. J Clin Epidemiol. 2011;64(12):1341–9.

  25. Zacks MP, Hersh WR. Developing search strategies for detecting high quality reviews in a hypertext test collection. Proc AMIA Symp. 1998:663–7.

  26. Wong SS, Wilczynski NL, Haynes RB. Optimal CINAHL search strategies for identifying therapy studies and review articles. J Nurs Scholarsh. 2006;38(2):194–9.

  27. Ballard M, Montgomery P. Risk of bias in overviews of reviews: a scoping review of methodological guidance and four-item checklist. Res Synth Methods. 2017;8(1):92–108.

  28. Pollock M, Fernandes RM, Becker LA, Featherstone R, Hartling L. What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary. Syst Rev. 2016;5(1):190.

  29. Baker PRA, Costello JT, Dobbins M, Waters EB. The benefits and challenges of conducting an overview of systematic reviews in public health: a focus on physical activity. Aust J Public Health. 2014;36(3):517–21.

  30. Bolland MJ, Grey A, Reid IR. Differences in overlapping meta-analyses of vitamin D supplements and falls. J Clin Endocrinol Metab. 2014;99(11):4265–72.

  31. Caird J, Sutcliffe K, Kwan I, Dickson K, Thomas J. Mediating policy-relevant evidence at speed: are systematic reviews of systematic reviews a useful approach? Evid Policy. 2015;11(1):81–97.

  32. Cooper H, Koenka AC. The overview of reviews: unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. Am Psychol. 2012;67(6):446–62.

  33. Flodgren G, Shepperd S, Eccles M. Challenges facing reviewers preparing overviews of reviews (P2A194). In: Cochrane Colloquium. Madrid; 2011.

  34. Foisy M, Becker LA, Chalmers JR, Boyle RJ, Simpson EL, Williams HC. Mixing with the ‘unclean’: including non-Cochrane reviews alongside Cochrane reviews in overviews of reviews (P2A157). In: Cochrane Colloquium. Madrid; 2011.

  35. Hartling L, Chisholm A, Thomson D, Dryden DM. A descriptive analysis of overviews of reviews published between 2000 and 2011. PLoS One. 2012;7(11):e49667.

  36. Hartling L, Dryden D, Vandermeer B, Fernandes R. Generating empirical evidence to support methods for overviews of reviews. In: Cochrane Colloquium. Quebec City; 2013.

  37. Hartling L, Vandermeer B, Fernandes RM. Systematic reviews, overviews of reviews and comparative effectiveness reviews: a discussion of approaches to knowledge synthesis. Evid Based Child Health Cochrane Rev J. 2014;9(2):486–94.

  38. Ioannidis JPA. Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses. CMAJ. 2009;181(8):488–93.

  39. James BM, Baker PRA, Costello JT, Francis DP. Informing methods for preparing public health overviews of reviews: a comparison of public health overviews with Cochrane overviews published between 1999 and 2014. In: Cochrane Colloquium. Hyderabad; 2014.

  40. Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc. 2015;13(3):132–40.

  41. Joanna Briggs Institute. Methodology for JBI umbrella reviews. Adelaide: The University of Adelaide; 2014.

  42. Kovacs FM, Urrutia G, Alarcon JD. “Overviews” should meet the methodological standards of systematic reviews. Eur Spine J. 2014;23(2):480.

  43. Kramer S, Langendam M, Elbers R, Scholten R, Hooft L. Preparing an overview of reviews: lessons learned. Poster. In: Cochrane Colloquium. Singapore; 2009.

  44. Li LM, Tian JT, Tian H, Sun R, Liu Y, Yang K. Quality and transparency of overviews of systematic reviews. J Evid-Based Med. 2012;5(3):166–73.

  45. Buchter R, Pieper D, Jerinic P. Overviews of systematic reviews often do not assess methodological quality of included reviews. Poster. In: 19th Cochrane Colloquium. Madrid: Cochrane Database Syst Rev; 2011. p. 105–6.

  46. Pieper D, Antoine S-L, Morfeld J-C, Mathes T, Eikermann M. Methodological approaches in conducting overviews: current state in HTA agencies. Res Synth Methods. 2014;5(3):187–99.

  47. Pieper D, Antoine S, Neugebauer EA, Eikermann M. Up-to-dateness of reviews is often neglected in overviews: a systematic review. J Clin Epidemiol. 2014;67(12):1302–8.

  48. Robinson KA, Chou R, Berkman ND, Newberry SJ, Fu R, Hartling L, Dryden D, Butler M, Foisy M, Anderson J, et al. Integrating bodies of evidence: existing systematic reviews and primary studies. In: Methods guide for effectiveness and comparative effectiveness reviews. Rockville: Agency for Healthcare Research and Quality (US); 2008.

  49. Robinson KA, Chou R, Berkman ND, Newberry SJ, Fu R, Hartling L, Dryden D, Butler M, Foisy M, Anderson J, et al. Twelve recommendations for integrating existing systematic reviews into new reviews: EPC guidance. J Clin Epidemiol. 2016;70:38–44.

  50. Robinson KA, Whitlock EP, O'Neil ME, Anderson JK, Hartling L, Dryden DM, Butler M, Newberry SJ, McPheeters M, Berkman ND. Integration of existing systematic reviews. In: Research white paper (prepared by the Scientific Resource Center under contract no. 290-2012-00004-C). Rockville: Agency for Healthcare Research and Quality; 2014.

  51. White CM, Ip S, McPheeters MC, Tim S, Chou R, Lohr KN, Robinson K, McDonald K, Whitlock EP. Using existing systematic reviews to replace de novo processes in conducting comparative effectiveness reviews. In: Methods guide for comparative effectiveness reviews. Rockville: Agency for Healthcare Research and Quality; 2009.

  52. Whitlock EP, Lin JS, Chou R, Shekelle P, Robinson KA. Using existing systematic reviews in complex systematic reviews. Ann Intern Med. 2008;148:776–82.

  53. Ryan R, Hill S. Developing an overview of systematic reviews: a framework for synthesising the evidence on interventions to improve communication. In: Cochrane Colloquium. Melbourne; 2005.

  54. Ryan RE, Kaufman CA, Hill SJ. Building blocks for meta-synthesis: data integration tables for summarising, mapping, and synthesising evidence on interventions for communicating with health consumers. BMC Med Res Methodol. 2009;9:16.

  55. Silva V, Grande AJ, Carvalho AP, Martimbianco AL, Riera R. Overview of systematic reviews—a new type of study. Part II. Sao Paulo Med J. 2015;133(3):206–17.

  56. Singh JP. Development of the metareview assessment of reporting quality (MARQ) checklist. Revista Facultad de Medicina de la Universidad Nacional de Colombia. 2012;60(4):325–32.

  57. Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011;11(1):15.

  58. Thomson D, Russell K, Becker L, Klassen TP, Hartling L. The evolution of a new publication type: steps and challenges of producing overviews of reviews. Res Synth Methods. 2010;1(3–4):198–211.

  59. Thomson D, Foisy M, Oleszczuk M, Wingert A, Chisholm A, Hartling L. Overview of reviews in child health: evidence synthesis and the knowledge base for a specific population. Evid Based Child Health Cochrane Rev J. 2013;8(1):3–10.

  60. Büchter R, Pieper D. How do authors of Cochrane overviews deal with conflicts of interest relating to their own systematic reviews? In: Cochrane Colloquium. Vienna; 2015.

Acknowledgements

Not applicable.

Funding

This work was conducted as part of a Ph.D. undertaken by CL, who is funded by an Australian Postgraduate Award and an International Postgraduate Research Scholarship administered through Monash University, Australia. JEM holds a National Health and Medical Research Council (NHMRC) Australian Public Health Fellowship (1072366). The funding bodies had no involvement in the design of the study, data collection, analysis, interpretation, preparation of the manuscript or the decision to submit the manuscript.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

Author information

CL, JEM, SEB and SM are responsible for the conception and design of the study. CL, JEM and SM did the search strategy development. CL, JEM and SEB contributed in the study selection and data extraction. CL, JEM, SEB and SM took part in the independent development of the framework and group refinement and consensus and in the drafting and editing of the manuscript. All authors read and approved the final manuscript.

Correspondence to Joanne E. McKenzie.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

JEM is an Associate Editor of Systematic Reviews and is a Guest Editor for the thematic series ‘Overviews of systematic reviews: development and evaluation of methods’, to which this paper was submitted. SEB is an Associate Editor of Systematic Reviews. Neither JEM nor SEB were involved in the peer-review or editorial decisions for this manuscript. CL and SM declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Search strategies. (DOCX 3.49 kb)

Additional file 2:

Purposive search strategies. (DOCX 3.53 kb)

Additional file 3:

Characteristics of excluded studies. (DOCX 5.65 kb)

Additional file 4:

Table of reporting considerations. (DOCX 6.65 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Lunny, C., Brennan, S.E., McDonald, S. et al. Toward a comprehensive evidence map of overview of systematic review methods: paper 1—purpose, eligibility, search and data extraction. Syst Rev 6, 231 (2017) doi:10.1186/s13643-017-0617-1

Keywords

  • Overviews of systematic reviews
  • Overview
  • Meta-review
  • Umbrella review
  • Review of reviews
  • Overview methods
  • Systematic review methods
  • Evidence mapping
  • Evaluation of methods
  • Evidence synthesis
