The influence of the team in conducting a systematic review
Systematic Reviews volume 6, Article number: 149 (2017)
There is an increasing body of research documenting flaws in the methodological and reporting conduct of many published systematic reviews. When good systematic review practice is questioned, attention is rarely turned to the composition of the team that conducted the review. This commentary highlights a number of relevant articles indicating how the composition of the review team could jeopardise the integrity of a systematic review and its conclusions. Key biases, such as sponsorship bias and researcher allegiance, require closer attention, but there may also be less obvious affiliations in teams conducting secondary evidence syntheses. Transparency and disclosure are now firmly on the agenda for clinical trials and primary research, but the meta-biases to which systematic reviews may be susceptible require further scrutiny.
Systematic reviews benefit from team working, and co-production is an essential part of high-quality research synthesis and healthcare decision-making [1, 2]. However, despite their reputation as transparent and rigorous products, systematic reviews are influenced by the people who conduct them, and this in turn could affect the resulting conclusions. A review team may comprise experienced systematic reviewers, information specialists, statisticians, and content experts; or, since no licence is required to conduct a systematic review, the team may include none of these specialties. In the latter case, it may be wise to consider who is conducting the systematic review and why. The number of systematic reviews indexed in MEDLINE has increased threefold over the last decade, indicating a steady spread in their use globally. However, as in primary research, systematic reviews can vary hugely in their reporting and methodological quality [5,6,7]. Indeed, many systematic reviews are receiving increasing criticism for failing to live up to their reputation as high-quality, well-conducted pieces of research [8, 9]. They can be susceptible to bias, for example, when reviews are conducted by people who have a stake in the conclusions (researcher allegiance) [10, 11]. Alternatively, reviews could be conducted carelessly, through the chosen methods of meta-analysis or study selection (meta-bias), or by failing to report research misconduct even when it is identified. If flawed systematic reviews continue to be published, they risk losing their eminent position in the evidence hierarchy; closer examination of who might be conducting them, and how the output can be affected, is therefore warranted.
Ordinarily, systematic reviews are conducted to summarise evidence on an intervention's effectiveness objectively for decision-makers or to highlight where evidence to support an intervention is lacking. Funding bodies often require applicants proposing new primary research to reference an existing systematic review or to review the relevant evidence prior to commencing new clinical trials. A less worthy motive for conducting a systematic review could be simply to get something published, whether or not the authors are affiliated with the intervention. A more troubling motive is that systematic reviews may be conducted and published by people who have a clear stake in publishing positive results. How can we know whether the authors of a systematic review are affiliated with the review question? Despite the existence of guidelines and reporting standards for systematic reviews, it is not currently a requirement to declare the motives of the reviewers, i.e., who is behind the origination of the review question (except for sponsored work) and whether the same people who set the review question are also involved in answering it. Information about whether the research question originated from outside the review team, such as through a commissioned call, is not currently deemed relevant to the assessment of potential bias in systematic reviews. Recent research by Borah et al. indicates that funded reviews took significantly longer to complete and involved more authors and team members than studies that did not report funding. A longer timeframe could be indicative of increased rigour in funded reviews. The same authors also noted, from inspection of 195 published reviews identified from the PROSPERO database, that the number of "team members" is discrepant from the number of authors of the resulting publications. Their findings indicate that by the time a review reaches publication, it has more authors than the people initially registered.
How might systematic reviews accrue authors from protocol to final publication and should this be of concern?
Whilst explicit and transparent a priori methods are a hallmark of systematic reviews, in practice they can be an iterative process, with movement back and forth between phases depending on the results obtained at each step. Despite this flexibility, systematic review authors are responsible for ensuring transparency. A systematic review protocol aims to ensure that necessary deviations from planned methods are evidenced and ideally discussed by the review team. However, even when methods are stated a priori in systematic review protocols, deviations from the protocol, whether justified or not, may be seldom reported in the final publication, as highlighted by Silagy et al., who found that 43 of a sample of 47 published Cochrane reviews had undergone a major change compared with the most recently published protocol. Moreover, Kirkham et al. found discrepancies in the reported primary outcome between protocol and published review in a sample of 288 Cochrane reviews. The authors highlight this potential bias in the systematic review process, with changes being made after knowledge of the results of the individual trials. These findings, relating specifically to Cochrane reviews, indicate that even when stringent guidelines are in place and content experts are likely to be involved, adherence to good practice does not necessarily follow. Additionally, Page and colleagues found, when analysing 682 systematic reviews (both Cochrane and non-Cochrane), that at least a third did not report use of a systematic review protocol. This means that a substantial portion of systematic reviews are being conducted without an accountable public record of the planned methods. Reporting guidelines are in place for systematic reviews in the form of the PRISMA statement [19, 20] to promote adherence to good practice, but again, compliance with this guidance among systematic review authors is varied and often poor.
Wasiak and colleagues report that, in a review of 60 systematic reviews in burn care management, 13 of the 27 PRISMA checklist items were addressed in fewer than 50% of cases. Whilst journals may stipulate that authors follow the PRISMA checklist when submitting systematic reviews for publication, peer reviewers and journal editors may be unlikely to find the time to monitor or regulate authors' adherence to these guidelines, in addition to reviewing the scientific and reporting quality of the draft manuscript. The responsibility for maintaining good practice in systematic reviews should therefore reside with the systematic review team. Awareness of relevant methodological guidelines and adherence to good practice may be more likely in a review team comprising at least one experienced reviewer.
From protocol design to analysis, there are opportunities for team members to shape the project, and these individual influences can affect the output. Biases may be fairly obvious, such as highlighting the most favourable results, or more subtle and hard to detect, such as biases in study selection (into the review or into the meta-analysis) towards positive studies. Motives within the review team, such as financial or other interests, can influence whether research findings represent true treatment effects and can be replicated. Ebrahim et al. conducted a study of 185 meta-analyses of antidepressants and found that 29% of the papers had authors who were employees of the manufacturer of the assessed drug and that 79% had an industry link to the drug assessed. The same study found that meta-analyses including an author who was an employee of the manufacturer of the assessed drug were 22 times less likely to include negative statements about the drug than other meta-analyses. Additionally, Gómez-García et al. assessed 220 reviews to investigate the role of the funding source in systematic reviews and meta-analyses in psoriasis. They report that reviews containing a high number of authors with conflicts of interest were of lower quality. Financial "conflict of interest" (CoI) has also been shown to influence the results of published systematic reviews in the field of sugar-sweetened beverages and weight gain or obesity. This evidence collectively suggests that a review team comprising individuals who are known to have a stake in the outcome of the review is a potential threat to the integrity of the report (sponsorship bias).
However, standard CoI statements often focus on narrow commercial interests and may be inadequate to reveal potentially hidden agendas. Consumers of systematic reviews cannot rely solely on declarations of competing interests, which may relate to recent pecuniary funding (within the last 3 years) rather than to longer-term affiliations with health interventions, to know whether those conducting the review have an interest in the results of the research (researcher allegiance). For example, some practitioners, such as homoeopaths or psychotherapists, rely for their very employment on a given intervention's reputation, and they are unsurprisingly unlikely to publish a rigorous review whose neutral or negative conclusions would undermine the basis of their profession. A recent study investigating the relationship between CoI and the conclusions of systematic reviews of psychological therapies found that "non-financial CoI", and particularly the inclusion of the reviewers' own primary studies, were frequently seen in systematic reviews of psychological therapies. Moreover, author allegiance to the psychological therapy went undisclosed in 15 out of 95 reviews. Similarly, despite the apparent benefit that topic experts may bring to the review team, Gøtzsche and Ioannidis (2012) point out that the strong opinions of content area experts, such as clinical experts, can make it difficult to perform unbiased systematic reviews. For example, they assert that "people who have an interest in concealing uncomfortable evidence, clinicians, for example, find it particularly difficult to acknowledge the harms their interventions may cause". Current methods for describing who is involved in the conduct of systematic reviews, and their level of affiliation, may receive insufficient attention in final peer-reviewed journal publications.
Objective research, such as secondary evidence synthesis, need not necessarily be carried out dispassionately. The contribution of user perspectives and lay-expert engagement in research is now well recognised. Without patient and public involvement, due consideration may not be given to whether the review question or the included evidence is patient-centred. In addition, global collaboration benefits the visibility and quality of scientific research activity, and failure to work collaboratively between institutions can have a silo effect on evidence synthesis projects deemed to be at the forefront of evidence-based medicine. However, methodologists with particular skill sets who are not systematic reviewers, such as statisticians and modellers employed in systematic reviews, may be likely to recommend strategies that play to their strengths and that they have employed in the past, possibly without due consideration of other appropriate methods. Systematic reviews that fail to accommodate complex research questions may be at risk of being inflexible to useful innovations through a reliance on "tried and tested" methodology. For example, focusing on efficacy studies rather than pragmatic trials may jeopardise the external validity of review findings [36, 37]. Review teams preoccupied with critiquing the heterogeneity of the existing evidence may be unlikely to elaborate on the external validity of the review, e.g., by searching and critiquing relevant grey literature or service-user perspectives. Team experience or expertise could therefore affect the external validity of a systematic review. For example, research indicates that few strategies in the Cochrane Collaboration explicitly address the research priorities of disadvantaged populations, and innovative approaches are needed to ensure that the research priorities of diverse stakeholders are considered [38, 39].
Equitable representation of population demographics within the team, as well as the more obviously required skills and experience, may influence the importance and uptake of the review.
In summary, beyond ensuring that the team comprises individuals with experience in systematic reviews, a variety of factors may influence the efficiency and appropriateness of the review team. A well-trained methodological research workforce, continuing professional development, and the involvement of non-conflicted stakeholders may help to ensure reproducible, accurate findings [2, 40]. But there are some less clear components underlying the nature and conduct of a review, influenced by motive, bias, or error, that may not be sufficiently addressed by current quality assessment tools for systematic reviews such as AMSTAR, GRADE [42, 43], and ROBIS, or be explicitly evident from CoI declarations. Therefore, closer attention to who is conducting these secondary research projects, and why, is required, rather than relying on the methods and conclusions of systematic reviews to speak for themselves. Systematic reviews are a powerful instrument in evidence-based medicine and are arguably more cost-effective than primary research. But if systematic reviews are to maintain their primacy, their methodology should evolve through continued scrutiny, in order to understand some of the (un)seen biases that can influence reviews, arising from the composition of the review team as well as from the evidence that goes into them.
References
Swan J, et al. Evidence in management decisions (EMD)—advancing knowledge utilization in healthcare management. Final report. NIHR Health Services and Delivery Programme; 2012.
Ioannidis JP, et al. Meta-research: evaluation and improvement of research methods and practices. PLoS Biol. 2015;13(10):e1002264.
Page MJ, et al. Investigation of bias in meta-analyses due to selective inclusion of trial effect estimates: empirical study. BMJ Open. 2016;6(4):e011863.
Flacco ME, et al. Head-to-head randomized trials are mostly industry sponsored and almost always favor the industry sponsor. J Clin Epidemiol. 2015;68(7):811–20.
Moher D, et al. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.
Page MJ, et al. Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions. The Cochrane Library. 2014.
Pussegoda K, et al. Identifying approaches for assessing methodological and reporting quality of systematic reviews: a descriptive study. Syst Rev. 2017;6(1):117.
Iqbal SA, et al. Reproducible research practices and transparency across the biomedical literature. PLoS Biol. 2016;14(1):e1002333.
Ioannidis JP. Discussion: why “an estimate of the science-wise false discovery rate and application to the top medical literature” is false. Biostatistics. 2014;15(1):28–36.
Egger M, Smith GD. Bias in location and selection of studies. BMJ. 1998;316(7124):61.
Munder T, et al. Researcher allegiance in psychotherapy outcome research: an overview of reviews. Clin Psychol Rev. 2013;33(4):501–11.
Goodman S, Dickersin K. Metabias: a challenge for comparative effectiveness research. Ann Intern Med. 2011;155(1):61–2.
Elia N, et al. How do authors of systematic reviews deal with research malpractice and misconduct in original studies? A cross-sectional analysis of systematic reviews and survey of their authors. BMJ Open. 2016;6(3):e010442.
Humaidan P, Polyzos NP. (Meta) analyze this: systematic reviews might lose credibility. Nat Med. 2012;18(9):1321.
Borah R, et al. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open. 2017;7(2):e012545.
Silagy CA, Middleton P, Hopewell S. Publishing protocols of systematic reviews: comparing what was done to what was planned. JAMA. 2002;287(21):2831–4.
Kirkham JJ, Altman DG, Williamson PR. Bias due to changes in specified outcomes during the systematic review process. PLoS One. 2010;5(3):e9810.
Page MJ, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13(5).
Moher D, et al. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.
Moher D, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.
Wasiak J, et al. Poor methodological quality and reporting standards of systematic reviews in burn care management. Int Wound J. 2016.
Dudden RF, Protzko SL. The systematic review team: contributions of the health sciences librarian. Med Ref Serv Q. 2011;30(3):301–15.
Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124.
Ebrahim S, et al. Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. J Clin Epidemiol. 2016;70:155–63.
Gómez-García F, et al. Systematic reviews and meta-analyses on psoriasis: role of funding sources, conflict of interest, and bibliometric indices as predictors of methodological quality. Br J Dermatol. 2017.
Bes-Rastrollo M, et al. Financial conflicts of interest and reporting bias regarding the association between sugar-sweetened beverages and weight gain: a systematic review of systematic reviews. PLoS Med. 2014;10(12):e1001578.
Spiegelhalter D. The importance of what you don’t see. In: [Blog] Understanding Uncertainty; 2016.
Eisner M, et al. Disclosure of financial conflicts of interests in interventions to improve child psychosocial health: a cross-sectional study. PLoS One. 2015;10(11):e0142803.
Lieb K, et al. Conflicts of interest and spin in reviews of psychological therapies: a systematic review. BMJ Open. 2016;6(4).
Gøtzsche PC, Ioannidis JP. Content area experts as authors: helpful or harmful for systematic reviews and meta-analyses? BMJ. 2012;345:e7031.
Fleurence RL, et al. Engaging patients and stakeholders in research proposal review: the patient-centered outcomes research institute. Ann Intern Med. 2014;161(2):122–30.
Basch E, et al. Methodological standards and patient-centeredness in comparative effectiveness research. JAMA. 2012;307(15):1636–40.
Catalá-López F, et al. Global collaborative networks on meta-analyses of randomized trials published in high impact factor medical journals: a social network analysis. BMC Med. 2014;12(1):1.
Lomas J. Using 'linkage and exchange' to move research into policy at a Canadian foundation. Health Aff. 2000;19(3):236.
Boudreau K, et al. The novelty paradox and bias for normal science: evidence from randomized medical grant proposal evaluations. 2012.
Steckler A, McLeroy KR. The importance of external validity. Am J Public Health. 2008;98(1):9–10.
Petticrew M. Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’. Syst Rev. 2015;4(1):1.
Nasser M, et al. An equity lens can ensure an equity-oriented approach to agenda setting and priority setting of Cochrane Reviews. J Clin Epidemiol. 2013;66(5):511–21.
Burford BJ, et al. Testing the PRISMA-Equity 2012 reporting guideline: the perspectives of systematic review authors. PLoS One. 2013;8(10):e75122.
Ioannidis JP, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.
Shea BJ, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7.
Andrews J, et al. GRADE guidelines: 14. Going from evidence to recommendations: the significance and presentation of recommendations. J Clin Epidemiol. 2013;66(7):719–25.
Guyatt GH, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924.
Glasziou P, Djulbegovic B, Burls A. Are systematic reviews more cost-effective than randomised trials? Lancet. 2006;367(9528):2057–8.
Authors’ information

The authors are employed by the Universities of Sheffield and Birmingham. Lesley Uttley is a Research Fellow in Evidence Synthesis at the School of Health and Related Research (ScHARR), University of Sheffield, UK. Paul Montgomery is a Professor of Social Intervention at the Department of Social Policy and Social Work, University of Birmingham, UK.

Competing interests

The authors declare that they have no competing interests.
Cite this article
Uttley, L., Montgomery, P. The influence of the team in conducting a systematic review. Syst Rev 6, 149 (2017) doi:10.1186/s13643-017-0548-x