Module 3 - Finding the Evidence - Searching Principles - Claire Twose, MLIS, Guest Lecturer
Mastering Systematic Review Searching: Principles, Strategies, and Documentation
Conducting a high-quality systematic review and meta-analysis hinges on performing an exceptionally thorough and well-documented search. This process, akin to a scientific experiment, is foundational to the integrity and reliability of your findings. This article outlines the essential principles, strategies, and documentation practices required to conduct a comprehensive, unbiased search that meets rigorous systematic review standards.
The Critical Role of a Comprehensive Search
The quality of your search profoundly impacts the quality of your research and can even have life-altering consequences. Many researchers are surprised by the extensive effort and time required for a high-quality search, often questioning if simply using a large database like PubMed is sufficient. The answer is a definitive “no.”
Why a Comprehensive Search is Essential
- Addressing Bias: Bias can infiltrate your systematic review in two primary ways:
- Risk of Bias (Individual Studies): Bias present within the individual studies included in your review (e.g., how patients are allocated to treatment, data collection methods). This relates to the “quality” of included studies.
- Metabias (Systematic Review Process): Bias in how you conduct your systematic review and meta-analysis. This often relates to publication bias, where studies with positive or significant findings are more likely to be published than those with negative or null results. Failing to find all relevant evidence can severely skew your review’s findings.
- Ensuring Completeness and Accuracy: Relying on a single database, even a large one, is insufficient. Studies have consistently shown that each database contributes unique articles to a comprehensive review, making it necessary to search multiple sources to capture all relevant literature.
- Real-World Impact:
- The Hexamethonium Trial Example: Over a decade ago, a researcher at Johns Hopkins undertook an asthma study involving bronchial challenge in healthy subjects. A 24-year-old research assistant died due to adverse effects from the drug used. Subsequent searches by information professionals revealed that evidence flagging the procedure as dangerous was readily available. Had the researcher consulted an information professional and conducted an extensive search, this tragedy might have been averted. This highlights how a high-quality search can literally impact individuals’ lives.
- The Aprotinin Study Example: A compelling study on the effectiveness of aprotinin (a drug used to reduce blood loss in cardiac surgery) analyzed published randomized controlled trials (RCTs). Researchers observed a surprisingly low citation rate among these studies, with the most frequently cited being the original one, not necessarily the largest or most impactful. All published studies showed positive findings for aprotinin. A meta-analysis demonstrated that after just 12 studies, the results were unequivocal. However, between that point and the last study cited, over 4,000 additional people were randomized to prove the same point. This means at least half of them did not receive the best known treatment because researchers did not fully rely on previously published literature.
Developing a Search Protocol
To minimize bias and ensure transparency, you must develop a detailed sub-protocol for your search.
- Documentation is Key: Begin documenting every step from the outset. This saves significant time and effort later, as recalling specific terms or unsuccessful attempts days later can be challenging.
- Identify and Document Sources: Record all sources used, including large bibliographic databases, specialized registers, and other processes like hand-searching and citation tracking.
- Detailed Search Strategy Records: Document the dates you conducted searches and the exact strategies used in each database.
- Inclusion/Exclusion Criteria: Clearly define how you make decisions about what to include or exclude.
- Duplicate Screening: To reduce bias, systematic reviews often employ duplicate screening, where two people independently review each citation and abstract for inclusion.
- Information Specialists: Standards like those from the Institute of Medicine emphasize the need to include an information specialist when conducting systematic reviews. Chapter six of the Cochrane Handbook provides an excellent resource for detailed guidance on this topic.
Key Sources for Evidence Identification
A comprehensive search requires consulting a variety of electronic databases, unpublished literature sources, and manual techniques.
Major Bibliographic Databases
These large, developed databases provide citations and abstracts across various subject areas.
- Importance of Multiple Databases: Studies, like one by Lawrence on public health injury prevention, have shown that searching multiple databases is crucial. In Lawrence’s study, every database searched contributed unique articles across five different areas, underscoring the necessity of a multi-database approach for comprehensive results.
- Controlled Vocabulary:
- Definition: Most large, subject-specific bibliographic databases utilize a controlled vocabulary – a standardized list of terminology used to index information and facilitate retrieval.
- Purpose: This provides consistency. For example, if “myocardial infarction” is the controlled vocabulary term for “heart attack,” searching for this term will retrieve all relevant articles, regardless of the language they were published in.
- Strategy: For a systematic review, it’s important to use both controlled vocabulary and keywords (free-text searching of words in titles, abstracts, and other fields) to ensure maximum sensitivity.
- PubMed/MEDLINE:
- Relationship: PubMed is the National Library of Medicine’s (NLM) online access point to the MEDLINE database and additional citations.
- Content: MEDLINE contains citations and abstracts from over 5,600 journals, with over 19 million records. PubMed totals over 22 million records, including newer, unindexed articles and older articles rolled into the product.
- Indexing: Each MEDLINE citation is indexed by PhD professionals at the NLM who review articles and apply terms from a controlled vocabulary. This ensures articles are discoverable even if a term isn’t in the abstract or if no abstract is present.
- Medical Subject Headings (MeSH): PubMed’s controlled vocabulary consists of Descriptors and Subheadings.
- A thesaurus is available to search for terms and their definitions used by indexers.
- MeSH terms are organized in a hierarchical “tree” structure with narrower and broader terms. Indexers apply the most specific term.
- Entry Terms: These are synonyms or related phrases that, if present in an article, will result in the application of a specific MeSH term. They are valuable for identifying effective keywords for articles that haven’t been indexed yet.
- Automatic Inclusion: When searching with a MeSH term, PubMed automatically includes articles indexed with its narrower terms unless you specifically opt out.
- Indexing for Specific Study Types:
- Randomized Controlled Trials (RCTs): Historically, developing effective indexing for RCTs has been a challenge, with terms changing over time (e.g., no specific terms before 1977). To retrieve older literature, include older terms in your search. The Cochrane Collaboration at Johns Hopkins partnered with NLM to improve RCT indexing.
- Observational Studies: An even broader range of terms exists (e.g., Epidemiologic studies, Case control, Cohort studies). A pilot study by Susan Wieland found that while outcome and design terms were fairly well indexed, exposure terms were less so. Precision (identifying only relevant studies) and Sensitivity (comprehensively finding all relevant citations) varied depending on the term type.
- EMBASE:
- Description: A European medical bibliographic database, similar to MEDLINE, based on Excerpta Medica and provided by Elsevier.
- Content: Includes 19 million citations from over 7,000 periodicals. It often licenses MEDLINE content, so searching Embase.com can cover both.
- Controlled Vocabulary: Uses EMTREE terms, which is a distinct vocabulary from MeSH.
- Cochrane Central Register of Controlled Trials (CENTRAL):
- Description: Provided in the Cochrane Library by the Cochrane Collaboration, a voluntary organization dedicated to high-quality systematic reviews.
- Content: Cochrane review groups systematically comb the literature for controlled trials and contribute them to CENTRAL.
- Benefit: Searching CENTRAL complements searches in PubMed and Embase, as it benefits from this extensive manual curation. Articles are indexed using MeSH terms.
- Web of Science and Scopus:
- Description: These databases differ from the medical-focused ones by covering a much broader range of disciplines (social sciences, arts, humanities, all sciences).
- Key Feature: Citation Searching: For every article (some dating back to the 1800s), the reference list is included. This enables Snowballing, where you can:
- Look at the reference lists of key articles.
- See who has cited a key article since its publication, allowing you to follow research trends.
- Limitation: Due to their broad scope, they typically do not have a controlled vocabulary. PubMed also offers a “related articles” feature for similar functionality.
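The precision and sensitivity mentioned for study-design indexing terms can be made concrete. A minimal sketch with invented counts (these numbers are illustrative, not from Wieland’s pilot study): sensitivity is the share of all relevant records a filter retrieves; precision is the share of retrieved records that are relevant.

```python
# Illustrative counts for evaluating a search filter against a
# hand-checked gold standard of relevant records.
retrieved_relevant = 90   # relevant records the filter found
total_relevant = 100      # all relevant records (gold standard)
total_retrieved = 450     # everything the filter returned

sensitivity = retrieved_relevant / total_relevant  # comprehensiveness
precision = retrieved_relevant / total_retrieved   # relevance of hits

print(f"sensitivity={sensitivity:.2f}, precision={precision:.2f}")
```

A highly sensitive filter (as systematic reviews require) typically sacrifices precision, which is why screening large result sets is unavoidable.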
Sources of Unpublished (“Gray”) Literature
Including unpublished literature is crucial for a comprehensive, sensitive search and to counteract publication bias.
- Clinical Trials Registries: Increasingly required for systematic reviewers, these registries list ongoing and completed trials. Examples include ClinicalTrials.gov, ISRCTN, WHO International Clinical Trials Registry Platform, and the EU Clinical Trials Register.
- Government Sites: Agencies like the FDA are excellent sources, particularly for drug studies submitted for licensing.
- Organizational and Foundation Websites: Key organizations and foundations in your topic area may publish literature not found in traditional databases.
- Specialized Search Engines: Scirus.com is tailored to find high-quality literature from the open web.
- Impact of Unpublished Data: Hart and colleagues’ meta-analysis of citations for nine new medications compared results with and without including FDA-identified studies. In several cases, including unpublished data changed efficacy estimates (sometimes higher, sometimes lower). Crucially, estimates of harm were almost always greater when unpublished data was included, highlighting its importance for a balanced review.
Manual (Hand) Searching and Personal Contacts
These methods supplement electronic searches, ensuring maximum coverage.
- Hand-Searching:
- Requirement: Mandatory for Cochrane reviews to ensure comprehensive searches.
- Process: Identify key journals for your topic, select a range of years, and review their tables of contents. While often done electronically now, it remains a deliberate, article-by-article review.
- Additional Sources: Conference proceedings (available from Biosis, open web searches), and reviewing reference lists of all included studies. Gray literature repositories for dissertations and reports are also valuable.
- Effectiveness: Research by Sally Hopewell and colleagues found that hand-searching was highly effective (identifying 92–100% of known relevant RCTs), often outperforming searches in MEDLINE, Embase, and PsycINFO alone. While too time-consuming for all literature, it effectively augments established database search strategies.
- Personal Contacts: Leverage experts in your field or industry contacts to help identify relevant articles or studies that might otherwise be missed.
Crafting a High-Quality Search Strategy
Building an effective electronic search strategy is central to a systematic review. The guiding principle is replicability – ensuring your search can be precisely recreated by others.
Iterative Development Process
The process for developing a MEDLINE search strategy for a systematic review is iterative:
- Start Simple: Begin with a relatively simple search using initial terms derived from a few known citations.
- Retrieve and Analyze: Retrieve some initial results. Pull out the Medical Subject Headings (MeSH) and keywords from these studies, creating a table to track their presence across articles.
- Identify Useful Terms: From your analysis, identify the MeSH terms and keywords most useful for your search.
- Revise and Rerun: Refine your strategy by adding these new terms and rerun the search.
- Document Consistently: Document your actions at every stage, even noting terms that proved unhelpful. This saves repeated effort.
- Optimize and Import: Once you achieve an optimal search strategy, run it to retrieve citations and import them into bibliographic management software (e.g., EndNote).
- Adapt for Other Databases: When moving to other databases, adapt your strategy by finding the corresponding controlled vocabulary, truncation symbols, and other features specific to each source.
Breaking Down the Research Question (PICO)
Take your research question and break it down into core concepts, often using the PICO format:
- Population: The group of subjects being studied.
- Intervention: The treatment or exposure being investigated.
- Comparison: The alternative treatment or control.
- Outcome: The result of interest.
Example: “Are intravitreal injections of Lucentis better than Avastin to prevent vision loss in patients with age-related macular degeneration?”
- P: Age-related macular degeneration
- I: Intravitreal injections, Lucentis
- C: Avastin
- O: Vision loss
Once concepts are identified, translate them into database-understandable terms. For a systematic review, fewer concepts are generally better to maintain a manageable result set. It is rare to include outcome terms in a systematic review search strategy due to the high risk of overlooking a relevant term and missing studies.
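Once synonyms are chosen, assembling the query is mechanical: OR within each concept, AND across concepts. A minimal sketch using hypothetical synonym lists (the outcome concept is omitted, per the advice above; the field tags follow PubMed’s style):

```python
# Hypothetical synonym lists for two PICO concepts.
population = ["macular degeneration[MeSH]", "AMD[tiab]"]
intervention_comparison = [
    "ranibizumab[tiab]", "lucentis[tiab]",
    "bevacizumab[tiab]", "avastin[tiab]",
]

def or_group(terms):
    """Join a concept's synonyms with OR and wrap in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# AND the concepts together to form the full strategy.
query = " AND ".join(or_group(g) for g in [population, intervention_comparison])
print(query)
```

Keeping synonyms in per-concept lists like this also makes the later step of adapting the strategy for other databases much easier.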
Boolean Logic
Boolean logic forms the basis of database searching:
- AND: Narrows your search, finding the intersection between two sets of terms (e.g., “macular degeneration AND intravitreal injections”).
- OR: Expands your search, finding articles containing any of the synonyms or related terms (e.g., “Lucentis OR Avastin”).
- NOT: Excludes terms. While useful for assessing search results (e.g., “Search 1 NOT Search 2” to see unique results), it should generally not be used in final search strategies for systematic reviews as it can inadvertently exclude relevant articles.
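These operators behave exactly like set operations, which is a handy way to reason about what a query will return. A toy sketch with invented PMIDs:

```python
# Toy citation sets: which records each single-term search would return.
lucentis = {"PMID:1", "PMID:2", "PMID:3"}
avastin = {"PMID:2", "PMID:3", "PMID:4"}

both = lucentis & avastin            # AND: intersection narrows
either = lucentis | avastin          # OR: union expands
only_lucentis = lucentis - avastin   # NOT: difference excludes

print(sorted(both))           # ['PMID:2', 'PMID:3']
print(len(either))            # 4
print(sorted(only_lucentis))  # ['PMID:1']
```

The NOT example shows the risk: PMID:2 and PMID:3 vanish from `only_lucentis` even though they discuss Lucentis, simply because they also mention Avastin.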
PubMed in Practice
- Search Details: When you type terms into PubMed’s search box, PubMed often automatically maps your terms to MeSH headings. Always check the “Search Details” to see the exact query PubMed executed. This helps identify unintended mappings.
- Extracting Terms from Records:
- In the abstract view of a retrieved citation, look for “Publication Types, MeSH Terms, and Other Terms” (often hidden behind a plus sign). Clicking this reveals the MeSH terms applied by indexers, offering a great way to identify relevant terms for your search.
- Changing the display settings from “Abstract” to “Medline” view shows the raw database record structure, including Field Codes (e.g., MH for MeSH Heading, TA for journal title, TI for title). This helps understand where specific information is stored.
- Using the MeSH Database: Access the dedicated MeSH database from the PubMed homepage. Here, you search the vocabulary itself, not articles. You can explore MeSH terms, view their definitions, narrower/broader terms, and entry terms (useful for keywords). You can also explicitly opt out of PubMed’s automatic inclusion of narrower MeSH terms.
- Common Field Tags for Keywords: When keyword searching, consider these common field tags:
- [tw] (text word): A good compromise, including most but not all text fields (e.g., excludes affiliation or individual author fields).
- [tiab] (title/abstract): Narrows the search if “text word” pulls up too many false results.
- [All Fields]: Can be useful for less core medical topics or those less well indexed.
Refining Your Strategy
- Variations: Pay attention to plurals (e.g., “therapy” vs. “therapies”), abbreviations (if specific enough), and spelling variations (e.g., British vs. American English, like “anaemia” vs. “anemia”). Truncation symbols (e.g., diseas* for disease, diseases) can manage these.
- Caution with Limits: Be very cautious about adding limits to your search, such as language or publication year. You must justify any limit.
- Language Limits: Egger’s study found that more positive findings tended to be published in English than German for the same research project. Limiting to English only could bias results towards positive findings. Similar trends were seen in complementary medicine studies from China, Russia, and Taiwan.
- Date Limits: Generally only used if a specific event, like a drug’s introduction or disease emergence, dictates a clear start date.
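Truncation can be previewed outside the database with a simple prefix match, which helps confirm a stem captures the variants you intend and nothing unexpected. A sketch (the word list is illustrative):

```python
import re

def matches_truncation(stem, words):
    """Emulate database truncation (e.g., diseas*) as a prefix match
    over word characters, case-insensitively."""
    pattern = re.compile(r"^" + re.escape(stem) + r"\w*$", re.IGNORECASE)
    return [w for w in words if pattern.match(w)]

words = ["disease", "diseases", "diseased", "disorder", "Disease-free"]
print(matches_truncation("diseas", words))
# ['disease', 'diseases', 'diseased']
```

Note that real databases differ in their truncation symbols and rules (some limit how many characters the wildcard covers), so always check the source’s own documentation.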
Study Design Filters
For systematic reviews of specific study designs like randomized controlled trials or observational studies, pre-tested search filters can be invaluable.
- Purpose: These filters consist of concepts and terms specifically designed to identify particular study designs within a database.
- Examples: The Cochrane Highly Sensitive Search Strategy (available in Chapter 6 of the Cochrane Handbook) is a well-known example for identifying RCTs in MEDLINE and EMBASE. Research has also been done to develop optimal search strategies for finding other study types.
- Implementation: If tested filters are available for your chosen databases, you can add them to your content-specific search.
Adapting Strategies Across Databases
One of the most overlooked aspects is the need to adapt your search strategy when moving from one database (e.g., PubMed) to another (e.g., EMBASE, Cochrane). Each database has its own:
- Controlled Vocabulary: (e.g., MeSH vs. EMTREE).
- Field Tags:
- Truncation Symbols:
- Syntax Rules:
You must convert your carefully crafted strategy to work effectively in each new source.
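Part of this conversion can be mechanized once you know each database’s conventions. The sketch below rewrites PubMed-style title/abstract tags into an assumed Ovid-style suffix; the target syntax here is illustrative only, so confirm the exact field tags and truncation symbols in each database’s help pages before searching.

```python
def to_ovid_style(pubmed_terms):
    """Rewrite PubMed-style [tiab] keywords into an assumed
    Ovid-style .ti,ab. suffix (illustrative, not authoritative)."""
    out = []
    for term in pubmed_terms:
        base = term.replace("[tiab]", "")
        out.append(base + ".ti,ab.")
    return out

print(to_ovid_style(["lucentis[tiab]", "avastin[tiab]"]))
```

Controlled-vocabulary terms cannot be converted this mechanically: a MeSH heading must be looked up and replaced with its closest EMTREE (or other vocabulary) equivalent by hand.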
Evaluating Search Quality
- PRESS Checklist: Margaret Sampson and colleagues identified seven key criteria for assessing the quality of electronic search strategies, known as PRESS (Peer Review of Electronic Search Strategies). This checklist is an excellent tool for double-checking your work and is often provided alongside systematic review training materials.
- Common Errors: Research by Sampson and McGowan revealed common errors even in published search strategies, often related to the failure to adapt strategies properly across different databases.
Documentation and Reporting Standards
Thorough documentation is not just for your benefit; it’s a critical component of transparent and reproducible systematic review reporting.
Bibliographic Management Software
These tools are essential for managing the vast number of citations collected.
- Purpose: They serve as your database of records for all collected citations, allowing you to store key information about each one (e.g., whether it was included or excluded).
- Examples: EndNote, RefWorks, Reference Manager, and QUOSA (which offers automatic full-text retrieval for some citations).
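Whichever tool you use, deduplication across databases usually keys on a normalized title plus year, since the same article is exported with slightly different formatting from each source. A minimal sketch with invented records:

```python
def normalize(title):
    """Lowercase and strip non-alphanumerics so minor formatting
    differences between databases don't hide duplicates."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def dedupe(citations):
    """Keep the first occurrence of each (normalized title, year) pair."""
    seen, unique = set(), []
    for c in citations:
        key = (normalize(c["title"]), c["year"])
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

records = [
    {"title": "Ranibizumab for AMD.", "year": 2011, "source": "PubMed"},
    {"title": "Ranibizumab for AMD",  "year": 2011, "source": "Embase"},
    {"title": "Bevacizumab for AMD",  "year": 2012, "source": "Embase"},
]
print(len(dedupe(records)))  # 2
```

Remember to record how many duplicates you removed: that count feeds directly into the PRISMA flowchart.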
Essential Search Documentation
You need to record precise details about your search process:
- Date of Search: Day, month, and year.
- Sources Used: Specific databases (e.g., PubMed, Embase, CENTRAL) and other sources (e.g., clinical trial registries, government sites, organizational websites).
- Exact Strategy: The precise search query as executed in each database (e.g., the full “Search Details” from PubMed).
- Trial Registers: Which registers were searched and the strategies used for them.
- Communication: Any communications with experts or industry contacts that led to identifying studies.
- Bibliographies/Citation Tracking: Which bibliographies were searched, and any citation tracking processes used (e.g., in Web of Science).
- Inclusion/Exclusion Decisions: Document the flow of citations through your screening process.
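One lightweight way to keep these records is a structured log entry per search. The schema below is illustrative (not a standard), and the strategy and hit count are invented for the example:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SearchLogEntry:
    """One row of the search log: when, where, what, and how many."""
    search_date: date
    source: str
    strategy: str
    hits: int
    notes: str = ""

log = [
    SearchLogEntry(
        search_date=date(2023, 10, 26),
        source="PubMed",
        strategy="(macular degeneration[MeSH] OR AMD[tiab]) AND "
                 "(lucentis[tiab] OR avastin[tiab])",
        hits=412,  # illustrative count
        notes="Strategy v3, after adding entry terms as keywords",
    ),
]
for entry in log:
    print(entry.search_date, entry.source, entry.hits)
```

A spreadsheet works just as well; the point is that every search, including unproductive ones, gets a dated, reproducible record.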
PRISMA Guidelines
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) provides a standard for transparent reporting of systematic reviews and meta-analyses.
- Purpose: It’s an excellent tool for understanding what’s required for a high-quality, rigorous review and is widely adopted by journal editors (over 125 have signed on).
- Components: Includes a 27-item checklist and a flowchart to visually represent the selection process.
- PRISMA Flowchart Breakdown: This flowchart illustrates the specific pieces of information you need to record:
- Records identified: Total number of hits from all database searches (e.g., PubMed, Embase, Cochrane, subject-specific databases).
- Records identified through other sources: Number found via reference lists, tables of contents, snowballing, etc.
- Duplicates removed: As many databases overlap, you must track and report the number of duplicate citations removed.
- Records screened: The number remaining after duplicate removal.
- Records excluded (title/abstract screen): Number of citations excluded at the initial title and abstract screening stage.
- Full-text articles assessed for eligibility: The number of articles for which full text was retrieved.
- Full-text articles excluded (with reasons): The number excluded after full-text review, along with the reasons for exclusion.
- Studies included in qualitative synthesis: The final number of studies included in your qualitative review.
- Studies included in quantitative synthesis (meta-analysis): The subset of studies included in any meta-analysis.
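Because each flowchart box derives arithmetically from the one before it, the counts can be sanity-checked before reporting. The numbers below are illustrative:

```python
# Illustrative PRISMA flow counts; each stage derives from the last.
identified_db = 1200       # hits from all database searches
identified_other = 35      # reference lists, snowballing, etc.
duplicates = 300           # overlap removed across databases

screened = identified_db + identified_other - duplicates
excluded_title_abstract = 850
fulltext_assessed = screened - excluded_title_abstract
fulltext_excluded = 60     # excluded after full-text review, with reasons
included = fulltext_assessed - fulltext_excluded

# Internal consistency checks before the flowchart is drawn.
assert screened == 935
assert fulltext_assessed == 85
assert included == 25
print(f"screened={screened}, full-text={fulltext_assessed}, included={included}")
```

If the boxes in a drafted flowchart don’t reconcile this way, something was miscounted during screening, and it is far easier to find the discrepancy before submission than after.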
Conclusion
Developing a comprehensive, sensitive search to support your systematic review and meta-analysis is a complex but crucial endeavor. It involves:
- Breaking down your research question into components using a structured approach like PICO.
- Developing tailored search strategies for large electronic bibliographic databases.
- Leveraging a variety of sources, including peer-reviewed literature and vital gray literature.
- Critically, documenting every step of your process from the very beginning.
By adhering to these principles and utilizing the tools and guidelines available (like the PRESS checklist and PRISMA standards), you will be well on your way to producing a rigorous and impactful systematic review.
Core Concepts
- High-Quality Search Strategy: A structured and comprehensive approach to finding evidence that is reproducible and minimizes bias in systematic reviews.
- Documentation of Search: The critical process of meticulously recording every step, source, date, and strategy used in a systematic review search to ensure transparency and reproducibility.
- Bias (in Systematic Reviews): Various systematic errors that can affect the validity of individual studies (risk of bias) or the overall systematic review process (metabias), including publication bias.
- Electronic Databases and Controlled Vocabulary: Key online repositories of published literature, often utilizing standardized indexing terms (like MeSH or EMTREE) to facilitate precise and consistent retrieval of information.
- PICO Framework and Boolean Logic: A structured method for formulating research questions by breaking them into core components (Population, Intervention, Comparison, Outcome) and the logical operators (AND, OR, NOT) used to combine search terms within databases.
- Gray Literature and Hand-Searching: Sources of unpublished or non-peer-reviewed information and techniques like reviewing journal tables of contents or reference lists to find additional relevant studies beyond major databases.
- PRISMA Standards: International guidelines that provide a checklist and flowchart for transparent and complete reporting of systematic reviews and meta-analyses.
Concept Details and Examples
High-Quality Search Strategy
Detailed Explanation: A high-quality search strategy for a systematic review is akin to a scientific experiment itself, where the ‘subjects’ are the literature. It emphasizes comprehensiveness, precision, and reproducibility to ensure all relevant evidence is found and bias is minimized. This involves using multiple databases, controlled vocabularies, keywords, and specific filters. Examples:
- Instead of just searching PubMed for ‘asthma treatment,’ a high-quality search would involve searching PubMed, Embase, Cochrane CENTRAL, and potentially Web of Science, using a combination of MeSH terms (e.g., ‘Asthma/drug therapy’) and keywords (e.g., ‘bronchodilator,’ ‘inhaler’).
- To ensure reproducibility, every search string, database accessed, and date of access is recorded, allowing another researcher to replicate the exact search if needed. Common Pitfalls/Misconceptions: A common pitfall is underestimating the time and effort required; many assume a simple PubMed search is sufficient. Misconception: A large number of hits always means a comprehensive search, whereas precision and relevance are equally crucial.
Documentation of Search
Detailed Explanation: Documenting the search involves creating a sub-protocol for the search itself, detailing every source consulted, the exact search strategies (including Boolean operators, field tags, and specific terms), the dates of the searches, and how decisions were made about inclusion/exclusion. This record-keeping is vital for transparency, reproducibility, and fulfilling reporting standards. Examples:
- A systematic review protocol might include a table listing each database searched (e.g., PubMed, Embase), the date of the search (e.g., ‘2023-10-26’), and the full search string used (e.g., ‘(macular degeneration[MeSH] OR ‘AMD’[TIAB]) AND (lucentis[TIAB] OR avastin[TIAB])’).
- Using bibliographic management software like EndNote to store all retrieved citations, noting whether each was included or excluded, and why. Common Pitfalls/Misconceptions: A pitfall is delaying documentation, leading to forgotten details. Misconception: Only the final search string needs to be documented, whereas the iterative process and all sources (even those with few hits) should be noted.
Bias (in Systematic Reviews)
Detailed Explanation: Bias refers to systematic errors that can lead to misleading results. In systematic reviews, bias can occur at two levels: ‘risk of bias’ within the individual studies included (e.g., how patients were allocated) and ‘metabias’ in the systematic review process itself (e.g., publication bias, where studies with positive findings are more likely to be published). The goal is to identify and mitigate these biases. Examples:
- Risk of Bias: If an individual clinical trial’s participants were not randomly assigned to treatment groups, it introduces selection bias, which is a ‘risk of bias’ in that study.
- Metabias: If a systematic review only includes studies published in English, and studies with negative findings are more often published in non-English journals, this introduces language bias, a form of ‘metabias’ that could alter the overall conclusion. Common Pitfalls/Misconceptions: A common pitfall is only considering publication bias (positive results being published) and overlooking other forms of metabias like selective outcome reporting. Misconception: A meta-analysis can correct for poor quality or biased individual studies.
Electronic Databases and Controlled Vocabulary
Detailed Explanation: Electronic databases are vast online repositories of scientific literature (e.g., PubMed, Embase, Cochrane CENTRAL, Web of Science, Scopus). To ensure consistent and comprehensive retrieval, many of these databases employ controlled vocabularies (e.g., MeSH in PubMed, EMTREE in Embase). These are standardized lists of terms used by human indexers to categorize articles, ensuring that a search for a specific concept retrieves all relevant articles, regardless of the exact phrasing used by authors. Examples:
- Electronic Databases: Searching PubMed for medical literature, Embase for pharmacological and biomedical content, or Cochrane CENTRAL specifically for controlled trials.
- Controlled Vocabulary: An article about a heart attack might use the MeSH term ‘Myocardial Infarction,’ ensuring it’s found even if the abstract only uses ‘heart attack’ or a different synonym. If an article doesn’t explicitly mention ‘myocardial infarction’ in its title or abstract but is about the topic, the MeSH term ensures its discoverability. Common Pitfalls/Misconceptions: A pitfall is relying solely on keyword searching without utilizing controlled vocabulary, missing comprehensively indexed articles. Misconception: All databases use the same controlled vocabulary, whereas each (e.g., PubMed, Embase) has its own unique system.
PICO Framework and Boolean Logic
Detailed Explanation: The PICO framework (Population, Intervention, Comparison, Outcome) is a mnemonic used to structure a research question, breaking it down into manageable concepts for search strategy development. Boolean logic, utilizing operators like AND, OR, and NOT, allows researchers to combine these concepts in a precise way: AND narrows a search, OR broadens it by including synonyms, and NOT excludes specific terms. Examples:
- PICO Framework: For the question ‘Are intravitreal injections of Lucentis better than Avastin to prevent vision loss in age-related macular degeneration (AMD)?’ P=AMD, I=Lucentis, C=Avastin, O=Vision Loss.
- Boolean Logic: Using ‘(macular degeneration OR AMD) AND (lucentis OR avastin)’ combines the population synonyms with the intervention/comparison synonyms. Using ‘NOT animal studies’ would exclude animal research from the results. Common Pitfalls/Misconceptions: A common pitfall is using too many ‘AND’ operators, making the search too restrictive and missing relevant articles. Misconception: The ‘NOT’ operator is safe to use in final searches; it can unintentionally exclude highly relevant studies that mention the excluded term in an unexpected context.
Gray Literature and Hand-Searching
Detailed Explanation: Gray literature refers to unpublished or non-peer-reviewed materials, such as clinical trial registries, government reports (e.g., FDA), dissertations, and conference proceedings. Hand-searching involves systematically reviewing journal tables of contents (often electronically) and the reference lists of included studies (‘snowballing’) to identify additional relevant articles that might be missed by database searches alone. These methods are crucial for a truly comprehensive and unbiased review. Examples:
- Gray Literature: Checking ClinicalTrials.gov for registered but unpublished trials on a specific drug or looking for adverse event reports on the FDA’s website.
- Hand-Searching: Systematically reviewing the table of contents for the last five years of a highly relevant journal (e.g., ‘Ophthalmology’) or checking the reference list of a key review article to find older studies. Common Pitfalls/Misconceptions: A pitfall is underestimating the importance of gray literature, which can often contain negative or null findings susceptible to publication bias if overlooked. Misconception: Hand-searching is an outdated practice no longer relevant with large databases; studies show it still identifies unique, critical articles.
PRISMA Standards
Detailed Explanation: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) provides a 27-item checklist and a four-phase flow diagram to guide authors in transparently reporting systematic reviews and meta-analyses. Adhering to PRISMA standards ensures that all key information about the review’s methods and findings, including the search process and results, is clearly presented, enhancing the review’s rigor and reproducibility. Examples:
- Checklist Use: A review team consults the PRISMA checklist to ensure they have reported all necessary elements, such as the eligibility criteria, information sources, search strategy, and data collection process.
- Flowchart Use: The review team fills out the PRISMA flow diagram, indicating the number of records identified from databases and other sources, duplicates removed, records screened, full texts assessed for eligibility, and finally, the number of studies included in the review.

Common Pitfalls/Misconceptions: A common pitfall is treating PRISMA as merely a post-hoc reporting tool rather than a guide that should shape the systematic review’s design from the outset. A common misconception is that PRISMA applies only to meta-analyses; it applies to all systematic reviews, even those without a quantitative synthesis.
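The flow diagram’s four phases reduce to simple bookkeeping, and checking that the reported counts are internally consistent is a useful sanity check before submission. This sketch shows the arithmetic; the function and field names are illustrative (PRISMA prescribes the diagram, not any code), and the example numbers are hypothetical:

```python
# Consistency check for PRISMA flow diagram counts (illustrative sketch).
def prisma_flow(db_records, other_records, duplicates,
                screened_out, fulltext_excluded):
    identified = db_records + other_records        # phase 1: identification
    screened = identified - duplicates             # phase 2: screening
    fulltext_assessed = screened - screened_out    # phase 3: eligibility
    included = fulltext_assessed - fulltext_excluded  # phase 4: included
    if min(screened, fulltext_assessed, included) < 0:
        raise ValueError("reported counts are internally inconsistent")
    return {
        "identified": identified,
        "screened": screened,
        "fulltext_assessed": fulltext_assessed,
        "included": included,
    }

# Hypothetical example numbers for a small review
flow = prisma_flow(db_records=1200, other_records=35, duplicates=310,
                   screened_out=780, fulltext_excluded=112)
print(flow)
```

Running a check like this catches transcription errors (e.g., a screened count larger than the deduplicated total) that reviewers and readers frequently spot in published flow diagrams.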
Application Scenario
A research team is planning a systematic review on the efficacy of mindfulness-based interventions for reducing anxiety in adolescents. They aim to include all relevant studies, both published and unpublished. The lesson’s concepts would be applied by first defining their PICO question, then developing a comprehensive search strategy across multiple databases (using controlled vocabulary and keywords), meticulously documenting every search step, and including searches for gray literature. Finally, they would adhere to PRISMA guidelines for transparent reporting of their search and selection process to minimize bias.
Quiz
- Which of the following is NOT a primary purpose of developing a high-quality search strategy for a systematic review?
  a) To ensure the search is reproducible.
  b) To discover all relevant evidence comprehensively.
  c) To provide a quick and easy way to find articles.
  d) To minimize bias in the evidence collection process.
- True or False: Using the ‘NOT’ Boolean operator is generally recommended for the final search strategy in a systematic review to precisely exclude irrelevant articles.
- Short Answer: Name two distinct types of bias discussed in the lesson that can affect a systematic review, and briefly explain the difference between them.
- A research team is conducting a systematic review on a rare disease. They’ve searched PubMed and Embase using a combination of MeSH/Emtree terms and keywords. What additional steps or sources, beyond these major databases, should they consider to ensure a comprehensive search, as per the lesson’s principles?
  a) Only search Google Scholar for additional articles.
  b) Focus only on studies published in English journals to ensure quality.
  c) Conduct hand-searching of key journals and review clinical trial registries.
  d) Limit the search to only the last 5 years to include the most current evidence.
Answers
- c) To provide a quick and easy way to find articles. Explanation: The lesson emphasizes that a high-quality search for a systematic review is time-consuming and effort-intensive, not quick and easy. Its primary purposes are reproducibility, comprehensiveness, and bias minimization.
- False. Explanation: The lesson advises caution with the ‘NOT’ operator in final search strategies because it can unintentionally exclude relevant studies. It’s suggested for assessing search performance but not for the final strategy.
- Risk of Bias and Metabias. Explanation: Risk of Bias refers to bias within the individual studies included in the review (e.g., selection bias in a clinical trial). Metabias refers to bias in the systematic review process itself (e.g., publication bias, where certain types of studies are more likely to be published and found).
- c) Conduct hand-searching of key journals and review clinical trial registries. Explanation: The lesson stresses the importance of going beyond major bibliographic databases to include gray literature sources like clinical trial registries and hand-searching (reviewing journal tables of contents and reference lists) to ensure a comprehensive and unbiased search, especially for potentially under-reported areas like rare diseases. Options a, b, and d are either insufficient or introduce potential biases (language bias, date bias).