Evidence-based practice: what it is and how to use it properly.
Evidence-based medicine, whose philosophical origins extend back to mid-19th-century Paris and earlier, remains a hot topic for clinicians, public health practitioners, purchasers, planners, and the public. Over the past decade or so it has become a subject of lively debate among health care practitioners on social media, at conferences, and in the classroom. While it makes for better medicine when used appropriately, misunderstandings persist about what it is and how to use it.
Evidence-based practice is defined by Sackett et al. as the integration of the current best research evidence with clinical expertise and patient values. Practicing evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice. Increased expertise is reflected in many ways, but especially in more effective and efficient diagnosis and in the more thoughtful identification and compassionate use of individual patients’ predicaments, rights, and preferences in making clinical decisions about their care.
The three components to evidence-based practice are:
- Best available evidence
- The clinician’s experience, knowledge, and skills
- The patient’s needs, wants, and beliefs
The American Physical Therapy Association discusses these three components and gives equal value to each. From its website:
“Document HOD P06‐19‐10‐05: APTA supports development and use of evidence‐based practice that integrates best available research, clinical expertise, and patient values and circumstances.”
To determine the best available evidence, you need to draw on well-designed studies. A variety of rating systems and hierarchies of evidence grade the strength or quality of evidence generated by a research study or report. Being knowledgeable about evidence-based practice and levels of evidence is important for every clinician, who needs to be confident about how much emphasis to place on a study, report, practice alert, or clinical practice guideline when making decisions about a patient’s care.
The levels of evidence listed here have been developed with the help of nurse experts and others in the medical industry. Evidence-based information ranges from Level A (the strongest) to Level C (the weakest). In 2013, Level ML (multilevel) was added to identify clinical practice guidelines that contain recommendations based on more than one level of evidence:
LEVEL A: Evidence obtained from:
- Randomized controlled trials (RCTs): the classic “gold standard” study design. In RCTs, subjects are randomly selected and randomly assigned to groups to undergo rigorously controlled experimental conditions or interventions.
- Systematic review or meta-analysis of all relevant RCTs. A systematic review is a critical assessment of existing evidence that addresses a focused clinical question, includes a comprehensive literature search, appraises the quality of studies and reports results in a systematic manner. Meta-analysis is a study design that uses statistical techniques to combine and analyze data from many RCTs.
- Clinical practice guidelines: based on systematic reviews of RCTs. Evidence-based clinical practice guidelines provide the strongest level of evidence to guide clinical practice because they are based on rigorous reviews of the best evidence on specific topics.
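As an illustration of the statistical pooling a meta-analysis performs, here is a minimal fixed-effect, inverse-variance sketch in Python. The trial effect estimates and standard errors are hypothetical, and real meta-analyses involve many further steps (heterogeneity assessment, random-effects models, bias checks):

```python
import math

# Hypothetical trials: (effect estimate, standard error)
trials = [
    (0.42, 0.15),  # trial 1
    (0.30, 0.10),  # trial 2
    (0.55, 0.20),  # trial 3
]

# Weight each trial by the inverse of its variance (1 / se^2),
# so more precise trials contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The pooled estimate lands closest to the most precise trial (smallest standard error), which is exactly the behavior that makes a well-conducted meta-analysis stronger evidence than any single trial.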
LEVEL B: Evidence obtained from:
- Well-designed controlled trials without randomization: in this type of study, random assignment is not used to allocate subjects to experimental and control groups. This type of research is therefore weaker in internal validity, because it cannot be assumed that the subjects are equal on major demographic and clinical variables at the start of the trial. Frequent problems include intentional or unintentional bias in sample enrollment; nonblinding; unclear criteria for participant selection; and unreliable or invalid measurement tools.
- Clinical cohort study: an examination of groups of people who have common characteristics or exposure experiences to compare outcomes in those exposed vs. outcomes in those not exposed (e.g., development of heart disease after exposure or non-exposure to 10 years of secondhand smoke).
- Case-control study: an observational approach in which subjects known to have a disease or outcome are compared with subjects known not to have that disease or outcome. Subjects are matched on characteristics so that they are as similar as possible except for the disease or outcome. Case-control studies are generally designed to estimate the odds of developing the studied condition or disease (using an odds ratio) and can determine whether an association exists between the condition or disease and risk factors.
- Uncontrolled study: studies that do not control participant selection or interventions (e.g., a convenience sample, such as patients on a given unit, may be studied because it is the only group reasonably available).
- Epidemiological study: studies that observe people over a long time to determine risk or likelihood of developing diseases. These studies include retrospective database searches or prospective studies that follow a population over time.
- Qualitative/quantitative studies: qualitative studies describe word-based phenomena, such as symptoms, behaviors, culture, and group dynamics; quantitative studies use statistical methods to establish numerical relationships that are correlational or cause and effect.
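The odds ratio used in case-control studies above comes from a standard 2×2 exposure table. A minimal sketch, with entirely hypothetical counts:

```python
# Hypothetical case-control 2x2 table:
#                exposed   unexposed
#   cases          a=40        b=60
#   controls       c=20        d=80

a, b = 40, 60   # cases:    exposed, unexposed
c, d = 20, 80   # controls: exposed, unexposed

# Odds of exposure among cases divided by odds of exposure among controls;
# algebraically the same as (a*d) / (b*c).
odds_ratio = (a / b) / (c / d)
print(f"odds ratio = {odds_ratio:.2f}")
```

An odds ratio above 1 suggests the exposure is associated with the outcome; an odds ratio of 1 suggests no association. Note that a case-control design yields an association, not proof of causation.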
LEVEL C: Evidence obtained from:
- Consensus viewpoint and expert opinion: a study that obtains agreement about specific practices from all clinical experts on a review panel. Expert opinion involves obtaining agreement from a majority of clinical experts on a review panel. Note: This level of evidence is used when there are no quantitative or qualitative studies in a particular area.
- Meta-synthesis: a systematic review that synthesizes findings from qualitative studies using an interpretive technique to bring small study findings, such as case studies, to clinical application.
LEVEL ML (multilevel): clinical practice guidelines with recommendations based on evidence obtained from:
- More than one level of evidence as defined in the rating system.
After taking the research evidence into consideration, the clinician’s knowledge and skills need to be part of clinical decision making. This accumulation of knowledge, patient care experience, treatment decisions, and outcomes makes up a critical part of evidence-based practice. Sackett et al. reported that the clinician’s “proficiency and judgement gained from school, continuing education, and clinical practice experience should be considered when making patient care decisions”. They go on to say that “clinical expertise is not just an afterthought, and if it is not integrated practice risks becoming tyrannized by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient.”
Finally, the patient’s needs, wants, and beliefs should be given equal consideration. Patient preferences can include religious or spiritual values, social and cultural values, thoughts about what constitutes quality of life, personal priorities, and beliefs about health. One way to elicit these is to use the ASK (AskShareKnow) Patient–Clinician Communication Model. This tool teaches patients and families three questions to ask their healthcare providers to get the information they need to make healthcare decisions:
- What are my options?
- What are the possible benefits and harms of those options?
- How likely are each of those benefits and harms to happen to me, and what will happen if I do nothing?
Sackett et al. summed it up well: “Evidence based medicine is not ‘cookbook’ medicine. Because it requires a bottom up approach that integrates the best external evidence with individual clinical expertise and patients’ choice, it cannot result in slavish, cookbook approaches to individual patient care. External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision. Similarly, any external guideline must be integrated with individual clinical expertise in deciding whether and how it matches the patient’s clinical state, predicament, and preferences, and thus whether it should be applied. Clinicians who fear top down cookbooks will find the advocates of evidence-based medicine joining them at the barricades”.