
Module 2 - Framing The Question

Framing Your Research Question for a Systematic Review: A Comprehensive Guide

Framing the research question is arguably the most critical step in conducting a systematic review. Get this foundational element right, and the subsequent stages—from selecting studies to interpreting findings—will naturally fall into place. This guide will walk you through the essential considerations and established frameworks for formulating a precise and answerable question.

Section 1: Essential Resources for Question Framing

The process of framing your research question is collaborative and iterative, often requiring multiple sessions with your team. Fortunately, several authoritative resources can guide you:

  1. “Finding What Works in Health Care: Standards for Systematic Reviews” by the Institute of Medicine (IOM): Published in 2011, this book (also freely available online) sets standards for systematic reviews. Its section on “Initiating a Systematic Review” outlines crucial preliminary steps:

    • Establish your team: Form your working group.
    • Manage bias and conflict of interest: Identify and address potential biases within your team and from stakeholders. Stakeholders might include environmental epidemiologists, clinicians, or patients/consumers, whose input is vital.
    • Formulate the topic or frame the question: This involves several substeps:
      • Confirm the need for a new review: Ensure a similar review hasn’t been recently completed or updated.
      • Develop an analytic framework (optional for this course): This visual tool (discussed in Section 7) helps clarify complex relationships.
      • Use a standard format to articulate each question: The PICO/PECO format is highly recommended.
      • State your rationale for each question.
      • Refine the question iteratively: Expect to refine your question as you discover existing literature.
  2. Cochrane Handbook for Systematic Reviews of Interventions: This comprehensive handbook, with version 5.1 (2011) being the latest online, is another invaluable resource. Chapter 5, “Defining the review question and developing criteria for including studies,” offers detailed guidance, including separate sections on participants, interventions, and other criteria. While the Cochrane Handbook primarily focuses on interventions, its principles are broadly applicable. Key points from the Cochrane Handbook for defining a well-framed question include:

    • Specify the PICO/PECO elements.
    • Focus on clearly defined outcomes: This is an increasingly recognized critical area in systematic reviews.

You do not need to use both resources; choose one as your primary guide for this course.

Section 2: Classifying Your Research Question and Study Designs

Not all research questions are created equal. Each type of question demands a specific study design to minimize bias and provide the most reliable answer. Therefore, the first crucial step after forming your team is to classify your question.

Common Types of Health-Related Questions and Corresponding Study Designs (a simple lookup sketch follows this list):

  • Incidence Question: What proportion of the population is newly diagnosed with this problem each year?
    • Study Designs: Surveys, Cohort Studies.
  • Prevalence Question: What proportion of the population is currently living with this problem?
    • Study Designs: Surveys, Cohort Studies.
  • Therapy Question: What should be done to treat this problem?
    • Study Designs: Randomized Clinical Trials (RCTs).
  • Screening Question: Will detecting this problem early (before symptoms) make a difference in my health (outcome)?
    • Study Designs: Randomized Clinical Trials (RCTs).
  • Prevention Question: How can this problem be prevented?
    • Study Designs: Randomized Clinical Trials (RCTs).
  • Diagnostic Accuracy Question: How good is this test at detecting this problem?
    • Study Designs: Cross-sectional Studies (RCTs are ideal but rarely found). Focus on sensitivity and specificity.
  • Prognosis Question: What is the likely outcome of this problem?
    • Study Designs: Cohort Studies (Clinical trials can also provide prognostic data).
  • Harm Question: Will there be any negative effects of this intervention?
    • Study Designs: Randomized Clinical Trials (ideal but often too short or too small for rare harms), Cohort Studies, Case-Control Studies (often necessary due to limitations of RCTs for harm detection).
  • Etiology Question: What causes this problem?
    • Study Designs: Observational Studies (Cohort Studies, Case-Control Studies).
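
For teams that keep protocol decisions in machine-readable form, the mapping above can be captured as a small lookup table. The sketch below is purely illustrative; the dictionary, its labels, and the designs_for helper are our own and not part of any standard tool.

```python
# Illustrative lookup of question types to the study designs that best answer them,
# following the list above. Labels and the helper function are hypothetical.
PREFERRED_DESIGNS = {
    "incidence": ["survey", "cohort"],
    "prevalence": ["survey", "cohort"],
    "therapy": ["randomized controlled trial"],
    "screening": ["randomized controlled trial"],
    "prevention": ["randomized controlled trial"],
    "diagnostic accuracy": ["cross-sectional"],  # RCTs ideal but rarely found
    "prognosis": ["cohort"],                     # trials can also provide prognostic data
    "harm": ["randomized controlled trial", "cohort", "case-control"],
    "etiology": ["cohort", "case-control"],
}

def designs_for(question_type: str) -> list[str]:
    """Return the study designs usually sought for a given question type."""
    return PREFERRED_DESIGNS[question_type.lower()]

if __name__ == "__main__":
    print(designs_for("Therapy"))   # ['randomized controlled trial']
    print(designs_for("Etiology"))  # ['cohort', 'case-control']
```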

The Hierarchy of Evidence: This widely recognized “pyramid of evidence” is a useful concept, but it’s crucial to understand its specific application. The Hierarchy of Evidence primarily applies to intervention questions (therapy, harm, screening, prevention). It generally does not apply well to questions of etiology, prognosis, incidence, or prevalence.

  • Top (Highest Evidence): Systematic reviews of Randomized Clinical Trials (RCTs).
  • High Evidence: Single RCTs (when no systematic review is available, or the available review includes fewer than two RCTs).
  • Lower Evidence: Systematic reviews of observational studies, single observational studies (cohort, case-control), expert opinion, unsystematic clinical observations.

The hierarchy illustrates that the studies you find least often (e.g., systematic reviews of RCTs) are often the highest form of evidence for determining intervention effectiveness. Studies found frequently (e.g., unsystematic clinical observations) provide very low evidence for treatment decisions.

As statistician John Tukey once said, “Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” This maxim is particularly relevant when framing systematic review questions, as it encourages focusing on meaningful, even if initially broad, inquiries.

Section 3: Deciding the Type and Scope of Your Question: Broad vs. Narrow

The choice between a broad or narrow question profoundly impacts your systematic review. While precision is appealing, overly narrow questions can limit applicability and the number of available studies.

Factors by which Studies can Differ: Studies can vary significantly in their:

  • Population characteristics: Inclusion/exclusion criteria (e.g., presence of co-morbidities like diabetes).
  • Intervention definition: Dose, timing, duration, route of administration.
  • Comparison group: Placebo, standard care, no treatment, different intervention.
  • Outcome definition: How the outcome is measured, at what time points.
  • Study design and quality: Methodological rigor, handling of missing data.

Consequences of Narrow Questions:

  • Limited applicability: Results may only apply to a very specific subgroup (e.g., aspirin studies only in men aged 35-40 for heart attack prevention).
  • Fewer studies: Can lead to a smaller sample size, increasing the risk of spurious findings (e.g., a spurious association between dysfunctional uterine bleeding and BMI in African American women that arose from an overly narrow question).

Consequences of Broad Questions:

  • “Apples and Oranges” criticism: Combining disparate studies (e.g., different aspirin doses or timings for heart attack prevention) can lead to a meta-analysis that compares dissimilar interventions. The goal is to compare “different kinds of apples,” not “apples and oranges.” You might plan subgroup analyses if differences are suspected (e.g., for men vs. women).
  • Difficult literature search: Broad questions can generate a massive number of search results, making the screening and synthesis process more challenging and time-consuming.

The Cochrane Handbook (Table 5.6.a) provides extensive tables comparing the pros and cons of broad and narrow questions. Your team must carefully decide how much variation in study characteristics is acceptable for a valid answer.

Section 4: Elements of a Well-Constructed Question: PICO/PECO

A well-framed question is the first step in the entire evidence-based healthcare process:

  1. Frame the question.
  2. Find the best evidence.
  3. Critically appraise the evidence.
  4. Apply the evidence (e.g., in policy or practice).

Without a clear question, effective evidence application is impossible. The standard format for constructing an answerable clinical question is PICO or PECO:

  • P: Patients, Population, or Problem
  • I: Intervention (for intervention studies)
  • E: Exposure (for observational/etiology studies)
  • C: Comparison Group
  • O: Outcome

Detailed Breakdown of PICO/PECO Elements:

  1. P (Patients, Population, or Problem):

    • Condition/Disease: Define the specific condition of interest, including explicit diagnostic criteria (e.g., how “heart attack” is defined). Systematic reviewers often accept definitions used by included studies, but may also specify their own.
    • Setting: Community-based, hospitalized, outpatient, nursing home, intensive care.
    • Demographics: Age group, race/ethnicity, sex.
  2. I/E (Intervention or Exposure):

    • Timing: When did the intervention/exposure occur (e.g., how long after a first heart attack)?
    • Route of Administration: How was it delivered?
    • Dose/Level: Specific quantity (e.g., aspirin dosage, alcohol consumption level).
    • Duration: How long was the treatment or exposure (e.g., “exercise” defined as 30 minutes of walking daily)?
  3. C (Comparison Group):

    • For Clinical Trials: Typically placebo, standard therapy, or no treatment. Ethical considerations are paramount when no treatment or placebo is used for serious conditions (e.g., severe depression).
    • For Epidemiologic/Etiology Studies: Defining a suitable control group is challenging. It could be people not exposed, people exposed to a different level, or people from a similar background not working in a specific environment (e.g., office workers vs. chicken plant workers). Multiple comparison groups may be found in systematic reviews of epidemiologic studies.
  4. O (Outcome): Outcomes are increasingly complex to define and are crucial for the review’s relevance.

    • Importance: Is the outcome important to the patient/consumer (e.g., ability to return to work, nausea) or primarily to clinicians (e.g., lab values)? Death is universally important.
    • Timing of Measurement: When was the outcome assessed (e.g., 2 weeks, 1 year, 10 years)?
    • Measurement Method/Scale: How was it measured (e.g., Hamilton Anxiety Rating Scale, Snellen Visual Acuity Chart)?
    • Specific Metric: The type of data reported (e.g., a value at a certain time point, change from baseline).
    • Method of Aggregation: How data were summarized (e.g., mean value across a group).

Example PICO Question (Amblyopia): In preschool children with mild to moderate visual acuity impairment (P), are glasses (spectacles) plus patching (I) effective in improving visual acuity (O) compared with glasses alone or no treatment (C)?

(Note: While the question itself is concise, the full details for timing and measurement of outcomes are elaborated in subsequent eligibility criteria.)
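
One way to keep the elements of a question explicit while drafting the protocol is to record them as structured fields. The following is a minimal sketch, assuming nothing beyond the amblyopia example above; the PICOQuestion class and its field names are illustrative, not a format prescribed by the IOM or Cochrane guidance.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Structured record of one review question (field names are illustrative)."""
    population: str
    intervention_or_exposure: str
    comparison: str
    outcome: str
    rationale: str = ""  # the IOM steps ask for a stated rationale per question

# The amblyopia example above, expressed as structured fields.
amblyopia = PICOQuestion(
    population="Preschool children with mild to moderate visual acuity impairment",
    intervention_or_exposure="Glasses (spectacles) plus patching",
    comparison="Glasses alone or no treatment",
    outcome="Improvement in visual acuity",
    rationale="Illustrative placeholder; state your own rationale here",
)

print(amblyopia.comparison)
```

Writing the elements down this explicitly makes it easier to spot a missing comparison group or an under-specified outcome before the search begins.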

A well-formulated question guides the entire systematic review process:

  • It determines your eligibility criteria for study selection.
  • It helps develop your search strategy for bibliographic databases and other sources (a brief illustration follows this list).
  • It specifies what data to abstract from included studies for meta-analysis.
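
As a small preview of the second point, the sketch below assembles a naive boolean search string from synonym lists for two PICO elements of the amblyopia question. The term lists and the OR/AND construction are placeholders only; real strategies, covered in the next module, combine free-text terms with controlled vocabulary and database-specific syntax.

```python
# Hypothetical synonym lists for the amblyopia question; a real strategy would
# also use controlled vocabulary (e.g., MeSH terms) tailored to each database.
population_terms = ["amblyopia", "preschool children", "visual acuity impairment"]
intervention_terms = ["spectacles", "glasses", "patching", "occlusion therapy"]

def or_block(terms):
    """Join synonyms with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Combine the population block AND the intervention block into one query string.
search_string = " AND ".join([or_block(population_terms), or_block(intervention_terms)])
print(search_string)
```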

Section 5: Refining the Question and Detailed Outcome Definition

The Cochrane Handbook’s Box 5.2a provides factors for defining participants (P), Box 5.3a for interventions (I), and Box 5.4c for outcomes (O). These “crib notes” are excellent checklists for ensuring thoroughness.

While Cochrane focuses on interventions, the same principles apply to exposures; simply adapt the terminology.

Comprehensive Outcome Definition (from ClinicalTrials.gov): To define an outcome rigorously, consider these five elements (courtesy of Ian Saldanha and Deborah Zarin of ClinicalTrials.gov):

  1. Domain: The name of the outcome (e.g., anxiety, heart attack, visual acuity).
  2. Measurement: The specific tool or method used (e.g., Hamilton Anxiety Rating Scale, Snellen Visual Acuity).
  3. Specific Metric: The type of value derived (e.g., value at a time point, change from baseline).
  4. Method of Aggregation: How individual data points are summarized (e.g., mean across a group).
  5. Time Point: When the outcome was measured (e.g., 1 month, 3 months, 6 months).

While complex, explicitly defining these elements ensures clarity and allows for proper data abstraction and synthesis.
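
As an illustration, the five elements for the visual acuity outcome from the amblyopia example might be recorded as follows; the OutcomeDefinition class and its field names are simply our own shorthand for the list above.

```python
from dataclasses import dataclass

@dataclass
class OutcomeDefinition:
    """Five-element outcome specification (labels follow the list above)."""
    domain: str                # name of the outcome
    measurement: str           # tool or method used
    specific_metric: str       # e.g., value at a time point, change from baseline
    method_of_aggregation: str # how individual data points are summarized
    time_point: str            # when the outcome was measured

visual_acuity = OutcomeDefinition(
    domain="Visual acuity",
    measurement="Snellen chart",
    specific_metric="Change from baseline",
    method_of_aggregation="Mean change per treatment group",
    time_point="6 months after start of treatment",
)
```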

PICOTS/PICOS: Some researchers expand PICO to PICOTS or PICOS to explicitly include Timing and Setting. For instance, Timing might refer to the minimum follow-up duration required (e.g., 1 year for education effectiveness studies) or the timing of the intervention. Setting might specify community-dwelling individuals versus nursing home residents. While we typically incorporate these elements elsewhere in the eligibility criteria, using PICOTS is also an acceptable approach.

Balancing Prior Knowledge with Data: While it’s crucial to formulate your hypothesis before seeing the data to avoid bias, some familiarity with the topic is beneficial. Knowing what outcomes have been measured in previous trials and what outcomes experts in the field deem important (e.g., death in cancer studies) can help you select relevant and impactful outcomes for your review. For instance, quality of life, now recognized as important, might not have been measured in older trials. You must decide whether to only include outcomes measured in studies or to prioritize patient-important outcomes regardless of their common measurement.

Section 6: Practical Examples of Framing Questions

Let’s look at how a concise research question translates into detailed eligibility criteria using the PICO/PECO framework.

Example 1: Drug Therapy for Hypertension (Intervention/Therapy)

  • Concise Question: Is drug therapy associated with long-term morbidity and mortality in older persons with moderate hypertension?

  • PICO Breakdown (a brief eligibility-check sketch follows this example):

    • P (Population/Patients): Older persons with moderate hypertension.
      • Definition of “Older Person”: People older than 60 years who are outpatients.
      • Definition of “Moderate Hypertension”: Systolic blood pressure of 140-179 mmHg AND diastolic blood pressure of 90-109 mmHg.
    • I (Intervention): Drug therapy for moderate hypertension.
      • Eligible Drug Classes: ACE inhibitors, Angiotensin Receptor Antagonists (ARA), Beta-adrenergic blockers, Combined alpha and beta blockers, Calcium-channel blockers, Diuretics, Central sympatholytics, Direct vasodilators. (Note: This is a broad definition, encompassing various drug classes).
    • C (Comparison): Not explicitly stated in the question, but would be implicit (e.g., placebo, no treatment, or different drug class).
    • O (Outcome): Long-term morbidity and mortality.
      • Definition of “Long-term”: At least one year (≥ 1 year).
      • Definition of “Morbidity and Mortality”: Fatal and nonfatal strokes, fatal and nonfatal coronary heart disease, cardiovascular events, and total mortality. (Each of these would require further specific definition, though often systematic reviewers will accept the definitions used by the original study authors unless there is significant disagreement in the field.)
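
To show how these definitions become operational screening rules, here is a minimal sketch of an eligibility check for Example 1. The thresholds come from the definitions above (age over 60, outpatient, systolic 140-179 mmHg and diastolic 90-109 mmHg, at least one year of follow-up, eligible drug classes); the function names and the idea of coding the check at all are purely illustrative.

```python
# Eligible drug classes, following the list above.
ELIGIBLE_DRUG_CLASSES = {
    "ACE inhibitor", "angiotensin receptor antagonist", "beta-adrenergic blocker",
    "combined alpha and beta blocker", "calcium-channel blocker", "diuretic",
    "central sympatholytic", "direct vasodilator",
}

def meets_population_criteria(age_years: float, outpatient: bool,
                              systolic_mmHg: float, diastolic_mmHg: float) -> bool:
    """Older outpatient with moderate hypertension, per the definitions above."""
    return (age_years > 60 and outpatient
            and 140 <= systolic_mmHg <= 179
            and 90 <= diastolic_mmHg <= 109)

def meets_study_criteria(drug_class: str, follow_up_years: float) -> bool:
    """Eligible drug class and long-term follow-up (at least 1 year)."""
    return drug_class in ELIGIBLE_DRUG_CLASSES and follow_up_years >= 1

print(meets_population_criteria(72, True, 165, 95))  # True
print(meets_study_criteria("diuretic", 2.5))         # True
```

A check like this mirrors the eligibility criteria that reviewers apply during study screening, which is exactly what the framed question is meant to determine.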

Example 2: Exercise Training and Falls (Exposure/Etiology)

  • Concise Question: Is a history of exercise training associated with falls in community-dwelling and institutionalized people?

  • PECO Breakdown:

    • P/S (Population/Setting): Community-dwelling and institutionalized people.
      • Definition: Outpatients and people in nursing homes, aged ≥ 65 years old.
    • E (Exposure): History of exercise training.
      • Definition: Training in the past two years, including balance training, mobility training, physical therapy, strength training, Tai Chi.
    • C (Comparison): Implicit (e.g., no history of exercise training).
    • O (Outcome): Falls.
      • Definition: Number of falls, injurious falls (to be defined), hospitalization, fracture, and death. (Considerations: how “fall” is recorded, e.g., through diaries).

Example 3: Alcohol Consumption and Stroke (Exposure/Etiology - Group Project Example)

  • Concise Question: Does moderate to heavy alcohol consumption reduce the risk of stroke?

  • PECO Breakdown (as formulated by a 2009 course group):

    • P (Population): Adults without prior stroke.
    • E (Exposure): Alcohol consumption.
      • Definition: Presented as “drinks per day,” measured over the past month or longer, including binge drinking or short-term consumption (broad definition).
    • C (Comparison): Non-drinkers (total non-drinkers).
    • O (Outcome): Ischemic or hemorrhagic stroke, or both.
      • Required Information: Enough data for authors to estimate relative risk, odds ratio, and attributable risk with 95% confidence intervals.
    • Study Design Constraint: Only cohort studies were included.

These examples illustrate that a short research question is merely the tip of the iceberg, with extensive definitions and criteria underpinning each PICO/PECO element.

Section 7: Optional: Analytic Frameworks

An analytic framework (also known as a logic model, conceptual framework, or influence diagram) is a visual representation that links evidence and explains how interventions or exposures relate to outcomes within specific populations. While optional for this course due to time constraints, developing one is highly encouraged, as it’s becoming a standard in systematic reviews.

Purpose of Analytic Frameworks:

  • Clarify thinking: Visually map out the logical chain from intervention/exposure to desired outcomes.
  • Identify multiple questions: Helps to disentangle a complex chain of logic into distinct questions, which may require different types of evidence.
  • Distinguish intermediate from final outcomes: For example, Hemoglobin A1C (an interim outcome in diabetes studies) versus final health outcomes like amputation or death.

Components: Analytic frameworks use boxes (rounded for populations, square for interventions/outcomes) and arrows to depict relationships. Numbers often denote key research questions.

Sample Framework (Mammography Screening): A framework for mammography screening might show:

  1. Population at Risk: E.g., women over 60 at risk of breast cancer.
  2. Intervention/Screening: Mammography.
  3. Key Questions/Paths:
    • Effectiveness of Screening: How does screening affect breast cancer mortality (a desired final outcome)?
    • Adverse Effects of Screening: Are there negative consequences like anxiety from false positives?
    • Early Detection and Treatment: Screening leads to early detection, which leads to treatment.
    • Adverse Effects of Treatment: What are the harms of treating early-detected breast cancer (e.g., chemotherapy side effects, long-term radiation effects)?
    • Intermediate Outcomes: Detection of breast cancer, type of surgery, disease progression.
    • Final Outcomes: Reduced morbidity and mortality.

By illustrating these relationships, the analytic framework helps define the specific questions being asked, the interventions/exposures, outcomes, and populations. Crucially, it aids in making informed decisions about which data to abstract from the included studies, ensuring you collect just the right amount of relevant information.
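
For reviewers who like to draft the framework before drawing it, the mammography pathway can be sketched as a small directed graph. The nodes and edges follow the example above, while the adjacency-list representation and the downstream helper are only an illustration; in practice the framework is usually presented as a diagram.

```python
# Adjacency list for the mammography screening framework sketched above.
FRAMEWORK = {
    "Women over 60 at risk of breast cancer": ["Mammography screening"],
    "Mammography screening": [
        "Early detection of breast cancer",
        "Adverse effects of screening (e.g., anxiety from false positives)",
    ],
    "Early detection of breast cancer": ["Treatment"],
    "Treatment": [
        "Reduced breast cancer morbidity and mortality",
        "Adverse effects of treatment (e.g., chemotherapy side effects)",
    ],
}

def downstream(node: str, graph: dict = FRAMEWORK) -> list[str]:
    """List every node reachable from a starting node (depth-first walk)."""
    found, stack = [], list(graph.get(node, []))
    while stack:
        current = stack.pop()
        found.append(current)
        stack.extend(graph.get(current, []))
    return found

# Everything that flows from the screening decision, including harms.
print(downstream("Mammography screening"))
```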

Conclusion

Framing your research question effectively is the bedrock of a robust systematic review. It’s an iterative process that requires careful thought, collaboration, and a deep understanding of your topic. By systematically applying frameworks like PICO/PECO, classifying your question type, and considering the scope, you set the stage for a comprehensive and impactful review. The next critical step will be developing a systematic search strategy to identify all relevant literature.

Core Concepts

  • Framing the Research Question: The foundational step in a systematic review, involving defining the specific inquiry to be addressed, which subsequently dictates all other review processes.
  • PICO/PECO Framework: A standardized mnemonic (Population/Patient, Intervention/Exposure, Comparison, Outcome) used to structure and articulate a focused and answerable research question for systematic reviews.
  • Question Type Classification: The process of categorizing a research question (e.g., therapy, diagnosis, prognosis, etiology, incidence, prevalence) to identify the most suitable study designs for evidence gathering and bias minimization.
  • Hierarchy of Evidence: A structured ranking of different study designs based on their inherent ability to minimize bias, primarily emphasizing the strength of evidence for intervention questions.
  • Broad vs. Narrow Questions: The strategic decision regarding the scope of a systematic review question, balancing the potential for wider applicability and more comprehensive data with the risk of heterogeneity or limited studies.
  • Outcome Definition (5 Elements): A comprehensive approach to specifying research outcomes by detailing the domain, measurement tool, specific metric, method of aggregation, and time point of assessment.
  • Analytic Framework (Logic Model): A visual or conceptual diagram that illustrates the presumed causal pathways and relationships between an intervention/exposure, intermediate outcomes, and ultimate health outcomes, clarifying the review’s scope and questions.

Concept Details and Examples

Framing the Research Question

Framing the research question is the crucial first step in a systematic review, as a well-defined question ensures all subsequent stages, from literature searching to data synthesis, are focused and relevant. It sets the scope, determines eligibility criteria for studies, and ultimately impacts the applicability of the review’s findings. Getting this step right is paramount, as errors here can invalidate the entire review process.

  • Example 1 (Too Vague): “Is diet good for health?” This question is too broad; it doesn’t specify which diet, for whom, or which aspect of health.
  • Example 2 (Well-Framed): “In overweight adults, does a ketogenic diet significantly reduce cardiovascular disease risk markers (e.g., cholesterol, blood pressure) compared to a low-fat diet over a 6-month period?” This question clearly defines PICO elements.
  • Common Pitfall: Not spending enough time on this step, leading to ill-defined questions that result in irrelevant search results or difficulty in synthesizing heterogeneous data.

PICO/PECO Framework

The PICO (Population/Patient, Intervention, Comparison, Outcome) or PECO (Population/Patient, Exposure, Comparison, Outcome) framework is a mnemonic used to formulate a clear, answerable clinical or research question. It guides the systematic reviewer in identifying the key elements of their inquiry, making it easier to define search strategies and inclusion/exclusion criteria for studies. The “E” in PECO is specifically used for observational studies where an exposure (e.g., smoking, environmental factor) rather than an intervention is being investigated.

  • Example 1 (PICO - Therapy): “In adults with Type 2 Diabetes (P), does Metformin (I) improve glycemic control (O) compared to lifestyle changes alone (C)?”
  • Example 2 (PECO - Etiology): “In pregnant women (P), is exposure to certain pesticides (E) associated with an increased risk of neural tube defects in offspring (O) compared to no pesticide exposure (C)?”
  • Common Pitfall: Forgetting to specify the comparison group, leading to an incomplete question, or having a comparison that isn’t truly comparable. Another pitfall is defining an “intervention” for an epidemiological question where “exposure” is more appropriate.

Question Type Classification

Classifying the type of research question involves categorizing it based on its primary objective, such as therapy, diagnosis, prognosis, etiology (harm), incidence, or prevalence. This classification is critical because each question type inherently requires and is best addressed by specific study designs that minimize bias relevant to that objective. For instance, therapy questions are ideally answered by randomized controlled trials, while etiology questions often rely on cohort or case-control studies.

  • Example 1 (Therapy Question): “Is acupuncture effective in reducing chronic low back pain?” This would ideally require randomized controlled trials.
  • Example 2 (Prognosis Question): “What is the likelihood of developing chronic kidney disease in patients with newly diagnosed hypertension over 10 years?” This is best addressed by cohort studies.
  • Common Pitfall: Using an inappropriate study design to answer a specific question type (e.g., trying to answer a therapy question with case series or anecdotal evidence), which can lead to biased or unreliable conclusions.

Hierarchy of Evidence

The Hierarchy of Evidence is a tiered model that ranks different study designs based on their internal validity and ability to minimize bias, particularly for intervention-based questions. At the apex are systematic reviews and meta-analyses of randomized controlled trials (RCTs), followed by individual RCTs, then observational studies (cohort, case-control), and finally, expert opinion or case series. This hierarchy helps researchers prioritize which types of evidence to seek when answering certain questions, recognizing that higher-tier evidence generally provides more reliable answers.

  • Example 1 (Highest Evidence): A systematic review of multiple randomized controlled trials examining the effectiveness of a new vaccine.
  • Example 2 (Lower Evidence): A case series describing the outcomes of a few patients who received an experimental treatment, without a control group.
  • Common Pitfall: Misapplying the hierarchy to all question types (it’s primarily for intervention effectiveness) or assuming that a higher-tier study automatically means better quality without critical appraisal. A systematic review of poorly conducted RCTs might be less reliable than a well-conducted cohort study for certain questions (e.g., rare harms).

Broad vs. Narrow Questions

The decision to formulate a broad or narrow question involves balancing the generalizability of findings against the specificity of the inquiry and the feasibility of finding sufficient, homogeneous studies. Broad questions (e.g., “Is exercise effective for heart health?”) can yield many studies and wide applicability but risk comparing “apples and oranges” (heterogeneity). Narrow questions (e.g., “Is 30 minutes of moderate-intensity aerobic exercise daily effective in reducing blood pressure in adults aged 40-60 with pre-hypertension?”) offer precision and reduce heterogeneity but might yield too few relevant studies or limit the applicability of findings.

  • Example 1 (Broad Question): “Does diet affect cancer risk?” This question is too broad, leading to an overwhelming and likely heterogeneous body of literature on various diets and cancer types.
  • Example 2 (Narrow Question): “In postmenopausal women with a history of breast cancer, does a Mediterranean diet reduce recurrence rates compared to a standard Western diet over a 5-year follow-up period?” This is highly specific, potentially limiting the number of available studies.
  • Common Pitfall: Making the question too narrow, leading to very few or no studies that directly answer it, or making it too broad, resulting in unmanageable literature searches and problematic synthesis due to extreme heterogeneity.

Outcome Definition (5 Elements)

A proper outcome definition includes five key elements: Domain, Measurement, Specific Metric, Method of Aggregation, and Time Point. This detailed specification ensures clarity and comparability across studies included in a systematic review. For example, instead of just “pain,” a precise outcome might be “Pain intensity (domain) measured by a 0-10 Numeric Rating Scale (measurement), as a mean change from baseline (specific metric), aggregated as the average change across all participants (method of aggregation) at 3 months post-intervention (time point).”

  • Example 1 (Incomplete Outcome): “Improved vision.” This is vague.
  • Example 2 (Complete Outcome): “Visual acuity (domain) measured by Snellen chart (measurement), expressed as mean change from baseline (specific metric) for each treatment group (method of aggregation) at 6 months post-treatment (time point).”
  • Common Pitfall: Using vague or incomplete outcome definitions, which makes it difficult to compare results across studies or to determine if a study truly measured the outcome of interest in a consistent way. Over-reliance on “surrogate” outcomes (like lab values) without connecting them to patient-important outcomes is another pitfall.

Analytic Framework (Logic Model)

An analytic framework, also known as a logic model or conceptual framework, visually depicts the relationships between an intervention/exposure, intermediate outcomes, and final health outcomes. It clarifies the causal pathways hypothesized in a review, helps identify all relevant questions (including those on harms or intermediate steps), and guides the selection of data for abstraction. This structured approach ensures a comprehensive understanding of the entire process from intervention to impact.

  • Example 1 (Intervention to Outcome): A framework might show “Preschool Reading Program” -> “Improved Early Literacy Scores (intermediate outcome)” -> “Higher High School Graduation Rates (final health outcome).”
  • Example 2 (Complex Pathways): For mammography screening: “Mammography Screening” -> “Early Detection of Breast Cancer (intermediate outcome)” -> “Reduced Breast Cancer Mortality (final health outcome).” It might also branch to “False Positives/Biopsies (adverse effects of screening)” and “Overtreatment (adverse effects of treatment).”
  • Common Pitfall: Not developing or using an analytic framework, which can lead to overlooking important intermediate outcomes, harms, or missing key questions within a complex intervention pathway. It can also make it harder to justify the scope of the review or to explain complex findings.

Application Scenario

A public health agency is concerned about rising rates of childhood obesity and wants to understand the effectiveness of school-based interventions. To inform policy, they decide to commission a systematic review. The key concepts from this lesson would guide them in framing their question, classifying it as an intervention type, and identifying suitable study designs like randomized controlled trials to ensure the highest level of evidence. They would use PICO to define the student population, specific interventions (e.g., nutrition education, increased physical activity), comparisons (e.g., standard curriculum), and crucial outcomes like BMI reduction or improved dietary habits, while also considering potential harms. An analytic framework would further clarify the links between intervention components, intermediate changes (e.g., knowledge, behavior), and long-term health outcomes.

Quiz

  1. Multiple Choice: Which of the following is NOT typically a primary goal of classifying a research question in a systematic review?
    a) To determine the most appropriate study designs to include.
    b) To minimize bias in the evidence selected.
    c) To immediately identify all relevant studies without further searching.
    d) To guide the development of eligibility criteria.

  2. True/False: The Hierarchy of Evidence states that a systematic review of randomized controlled trials (RCTs) is the highest form of evidence for all types of research questions, including those on prognosis and incidence.

  3. Short Answer: Explain why a very broad systematic review question, like “Is diet good for health?”, can be problematic for a systematic reviewer. Provide at least two reasons.

  4. Application Question: A systematic review is being planned on the effectiveness of mindfulness meditation for reducing stress in healthcare workers. Formulate a complete, well-structured PICO question for this review, ensuring all four elements are clearly identifiable.


ANSWERS

  1. c) To immediately identify all relevant studies without further searching.

    • Explanation: Classifying a question helps narrow down what kind of studies to look for and how to search, but it doesn’t instantly identify all studies. Searching databases and applying eligibility criteria are subsequent, distinct steps.
  2. False.

    • Explanation: The Hierarchy of Evidence is primarily applicable to intervention questions (e.g., therapy, screening, prevention, harm). While a systematic review of RCTs is indeed the highest form of evidence for interventions, other question types (like prognosis or incidence) require different study designs (e.g., cohort studies for prognosis, surveys for incidence) to provide the most appropriate and least biased evidence.
  3. Explanation: A very broad question like “Is diet good for health?” is problematic for several reasons:

    • Overwhelming Literature: It would likely yield an unmanageably large number of hits across diverse dietary interventions, health outcomes, and populations, making the literature search extremely difficult and time-consuming.
    • Heterogeneity (Apples and Oranges): The studies found would be highly heterogeneous, comparing vastly different diets (e.g., ketogenic vs. vegan), different health outcomes (e.g., cardiovascular health vs. mental health), and different populations. This makes meaningful synthesis and meta-analysis challenging, as it would truly be comparing “apples and oranges.”
    • Lack of Specificity: The lack of specific PICO elements makes it difficult to establish clear eligibility criteria, leading to an unfocused review that cannot provide a precise or actionable answer.
  4. Application Question:

    • P (Population): Healthcare workers (e.g., nurses, doctors, allied health professionals).
    • I (Intervention): Mindfulness meditation programs.
    • C (Comparison): Standard stress management techniques, no intervention, or waitlist control.
    • O (Outcome): Reduction in perceived stress levels (e.g., measured by Perceived Stress Scale), reduction in burnout rates, or improvement in psychological well-being.

    Well-structured PICO question: “In healthcare workers (P), is participation in mindfulness meditation programs (I) effective in reducing perceived stress levels (O) compared to standard stress management techniques or no intervention (C)?”