American Journal of Nursing
© 1999 Lippincott Williams & Wilkins, Inc.

Volume 99(9)             September 1999             pp 51-59
Want to Know How You're Doing?
[Article]

Cassidy, Catherine A. PhD, FNP

Catherine Cassidy is an associate professor in the Graduate Department of Nursing at Seton Hall University's College of Nursing in South Orange, NJ.



Abstract

Here's a step-by-step guide, with examples, to improving patient care through outcomes evaluation.



Suppose that a representative from your managed care organization arrives at your office to audit your charts. She'll be looking for adequacy of care as reflected in 10 randomly chosen charts of your patients with type 2 diabetes. Or imagine that a hospitalized patient wants to know why you're performing a dressing change one way when the night nurse does it another. For a moment, you wonder: How can you be certain that the care you provide is the best?

Either of these scenarios could provoke anxiety, unless you administer nursing care that you know to be effective because it's based on research findings. You won't have to worry about the managed care representative if you've done your own chart audit and have developed a form that adequately documents standards of care for patients with diabetes. And you'll feel comfortable with how you perform dressing changes if your unit has already determined that wounds heal better when dressed in a certain way.

You can move toward an evidence-based nursing practice by compiling your observations of changes in patients' status, generating theories about what caused them, testing the theories, and implementing changes in accordance with the results. If you've wanted to investigate your theories about patient outcomes but haven't been sure how to go about it, your next step could be developing and implementing an outcomes evaluation study.

SELECTING THE PROBLEM

To conduct a study that will elicit data useful for improving practice, first select a problem that generates a clear question. For example, suppose a high percentage of your patients with type 2 diabetes have blood glucose levels that suggest their disease isn't well controlled. This problem might generate the question "If I implement an individualized patient teaching program, will it make a difference?" In a 1994 article, D. Peters identified the following criteria for determining the merits of your question:

* Does your facility have an interest or expertise in this area?
* Do baseline data exist?
* Is this a problem related to high patient volume, high risk to patients, or high cost? (In other words, will finding a solution to this problem help improve care while curtailing costs? Will it be worth the time, money, and other resources used?)
* Has a deficiency been identified in the problem area? (For example, regarding your patients whose diabetes isn't well controlled, is there a deficiency in the education program?)
* Does the problem relate to the mission and goals of the agency?

Suppose that a chart review of the 80 patients in your primary care clinic reveals that 50 patients had been diagnosed with type 2 diabetes, and that over the last six months 40 of those 50 patients showed no improvement in blood glucose levels. Your research into this limited population can be made more manageable by looking only at what might improve blood glucose levels. You can thus create a researchable question, such as "What effect does an individualized education program have on patients with type 2 diabetes?"
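The chart-review tally above is simple arithmetic, and can be sketched in a few lines. All the figures are the hypothetical ones from the example:

```python
# Tally from the hypothetical chart review described above:
# 80 charts, 50 patients with type 2 diabetes, 40 showing no improvement.
total_charts = 80
type2_patients = 50
no_improvement = 40

share_type2 = type2_patients / total_charts          # fraction of the clinic
share_uncontrolled = no_improvement / type2_patients  # fraction uncontrolled

print(f"{share_type2:.1%} of charted patients have type 2 diabetes")
print(f"{share_uncontrolled:.1%} of them showed no improvement")
# -> 62.5% of charted patients have type 2 diabetes
# -> 80.0% of them showed no improvement
```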

Next, ask yourself how the question meets the five criteria listed above. By choosing a problem (in this case, poor glucose control) that affects the quality of life for a significant number of patients (40 of the patients with type 2 diabetes in your clinic), you verify interest in the problem area. Baseline data exist in the form of lab values; as you determine other pertinent variables such as age or gender, you'll need to gather baseline data for them as well. Given the high volume of patients with type 2 diabetes, complications from uncontrolled blood glucose levels carry both high risk and high cost, in terms of quality of life and health care dollars. A deficiency exists in your clinic's diabetes education program: it isn't individualized according to a patient's readiness to change relevant self-care behaviors (with respect to diet and exercise, for instance). And your clinic evidently seeks to improve patient health as its primary goal.

REVIEWING THE LITERATURE

A literature search will help you determine whether and how the identified problem has been handled previously. As you investigate, you should ask:

* Has this problem been previously identified and studied?
* If so, what were the results?
* Can those results provide a solution to your problem?
* If your problem hasn't been specifically researched, can you build on existing findings of studies of similar problems?

If an initial Internet search of library databases yields too little, try searching under different phrases or word combinations. For instance, your search using the phrase "diabetes education programs" produces little information addressing your specific question about individualized programs. But when you search using the phrase "behavioral changes," you find something you can use: the transtheoretical model of behavioral change developed by psychologist James Prochaska in the early 1990s.

This model proposes that efficient behavioral change depends on doing the right things (cognitive and behavioral processes) at the right times (stages). Prochaska has determined that people pass through five stages of change as they master a new behavior: precontemplation, contemplation, preparation, action, and maintenance. To aid this transition, the clinician identifies the most appropriate strategy and supports the patient in using it; cognitive strategies (such as raising patients' awareness about health risks) have demonstrated effectiveness during the precontemplation and contemplation stages, and behavioral processes (such as contracting to check glucose levels more often) have demonstrated effectiveness during the preparation, action, and maintenance stages.

Practitioners have used the transtheoretical model to design interventions, such as smoking-cessation and weight-loss programs, that target not only those ready and willing to change behavior, but also those, the majority of people, who are not yet prepared to do so. (In their 1996 study of exercise behavior, psychologist Bess Marcus and colleagues found that 54% of subjects were in either the precontemplative or the contemplative stage of change.) By identifying a patient's stage of change, you can individualize an intervention for him, potentially accelerating his progress in adopting and maintaining desired behaviors.
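The stage-to-strategy pairing the model describes can be sketched as a simple lookup. The stage names come from the article; the strategy wording and the function itself are purely illustrative:

```python
# Hypothetical sketch: matching a teaching strategy to a patient's
# stage of change, following the transtheoretical model described above.
COGNITIVE_STAGES = {"precontemplation", "contemplation"}
BEHAVIORAL_STAGES = {"preparation", "action", "maintenance"}

def strategy_for(stage: str) -> str:
    """Return the class of strategy suited to a given stage of change."""
    stage = stage.lower()
    if stage in COGNITIVE_STAGES:
        return "cognitive (e.g., raise awareness of health risks)"
    if stage in BEHAVIORAL_STAGES:
        return "behavioral (e.g., contract to check glucose more often)"
    raise ValueError(f"unknown stage: {stage}")

print(strategy_for("contemplation"))
# -> cognitive (e.g., raise awareness of health risks)
```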

On discovering this theoretical model, you decide to tailor your patient teaching by first ascertaining the stage of change of each of the 40 patients with uncontrolled diabetes; you then employ the processes of change described in the transtheoretical model to create an education program capable of being individualized to each patient.

DEVELOPING A MODEL TO EXPLAIN THE PROBLEM

At this point you're ready to devise a conceptual model, a map of your ideas. Essentially, it's a tool to help you clarify the problem to be studied, develop an intervention, and guide analysis of the most relevant variables and the ways they interact with the outcomes you're studying. Your literature review may have turned up an existing model that you can adapt and use. Or a model can be developed through observation, clinical experience, literature review, or a combination of these.

For example, Conceptual Model for Patients with Type 2 Diabetes on page 52 illustrates the demographic and clinical factors, interventions, and outcomes for patients with type 2 diabetes, and their possible relationships. (It's based on public health expert Avedis Donabedian's linear model.) The clinical and demographic factors can affect patient outcomes, but may not be affected by the "treatment" (in our example, the individualized education program).




Figure. Conceptual Model for Patients with Type 2 Diabetes

SPECIFYING VARIABLES

After you've developed or adapted a conceptual model, you can use it to identify and select the factors that could affect the patient outcomes you're studying. Once identified, you'll determine how they can be described or quantified using one or more measures.

For example, Conceptual Model for Patients with Type 2 Diabetes designates certain "givens," patient characteristics that could affect the study's results, including clinical factors (such as duration of illness and comorbidity) and demographics (such as age and stage of change). If you've noticed that all of your patients with uncontrolled type 2 diabetes are over age 69 while all of the patients with controlled diabetes are under age 40, you'd probably select age as a significant factor. The literature review also indicated that stage of change, duration of illness, and comorbidity are important variables. The patient outcomes, or dependent variables, to be studied include glycosylated hemoglobin (HbA1c) levels on days 1, 90, 180, and 270 (measured by laboratory values), and quality-of-life levels (measured by self-reported surveys or questionnaires, observation, or both). The intervention, or treatment variable, that you've selected for study is the individualized education program.
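One way to keep the selected variables straight is a small "codebook" record listing each variable and how it's measured. This is only an illustration; the field names are invented, not part of the study design above:

```python
# Illustrative record of the study variables named in the conceptual
# model. Field names and sample values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    age: int                                   # demographic questionnaire
    stage_of_change: str                       # demographic questionnaire
    illness_duration_years: float              # chart review
    comorbidities: list = field(default_factory=list)
    hba1c_by_day: dict = field(default_factory=dict)   # labs, days 1/90/180/270
    quality_of_life: float = None              # self-reported survey score

p = PatientRecord(age=72, stage_of_change="contemplation",
                  illness_duration_years=6.0)
p.hba1c_by_day[1] = 8.9   # baseline HbA1c from the laboratory report
print(p.stage_of_change, p.hba1c_by_day)
# -> contemplation {1: 8.9}
```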

Depending on the problem chosen for study and the available resources (such as time, equipment, and number of data collectors), you may decide to work with several variables or only a few.

SELECTING INSTRUMENTS FOR DATA MEASUREMENT

In most cases when choosing an instrument, a compromise is in order: Scientific rigor has to be balanced with practical constraints. It helps to remember that anything investigators measure is removed from its context, and that there's some degree of error inherent in all measurement. Furthermore, consider the measure from various standpoints, such as the following:

From your patient's point of view, ask

* Is the measure (survey or questionnaire) too complex or lengthy to complete easily?
* Will the measure require multiple self-reports or diary entries?

From your point of view, ask

* Will I have to measure the variable so frequently that it will interfere with overall patient care?
* What are the time constraints? How long will data collection last (days, weeks, months)?
* What degree of patient interaction is involved (number of interactions, time involved)? Will my patients be accessible for the study's duration (through home visits or phone calls)?

For data analysis, ask

* What kinds of questions does the measure ask? (Open-ended questions usually yield longer, more complex responses, may require more interpretation, and are less amenable to quantitative analysis, compared to multiple-choice and yes-or-no questions.)
* What kind of data collection form is being used? Will responses be scannable, or will they require keying? (Data entry can be time-consuming even in a small-scale study; computerized records and scannable forms can help.)

Also consider whether you'll use a generic or disease-specific measure, whether it will cover all possible situations or only a few representative ones, and what level of precision you'll need. And it's important to avoid collecting data that can't or won't be used, either because of the complexity of the analysis required or because of an excess of collected data. Planning clearly at the outset how data will be analyzed helps eliminate collection of redundant or superfluous information.

It's also crucial to determine beforehand an instrument's reliability (the extent to which a measure yields the same results in repeated applications on an unchanged population or phenomenon) and validity (confidence that a test measures what it's intended to measure). Otherwise, a researcher can't say with certainty that the findings are accurate and reproducible.

If you can't find an instrument that has been tested, you'll have to create your own, developing reliability and validity scores using a population other than your study sample. Using instruments that you or another researcher has already tested strengthens the likelihood that your findings result from your intervention and not from poorly measured data. And review the study question and the conceptual model to clarify the possible relationships between the variables; by doing so, you can develop a study that stays on course.

Information about selected variables such as age, stage of change, duration of illness, and comorbidity can be obtained from demographic questionnaires. Glycosylated hemoglobin levels can be determined from laboratory reports. Generic (as opposed to disease-specific) quality-of-life questionnaires that have already been developed and tested for reliability and validity are available in books and published research studies. You might also look for questionnaires designed for similar illnesses and conditions that could be adapted to your patient group. For example, nurse educator Barbara Redman recently collected and published a set of measurement tools for evaluating patient education programs. Keeping data collection simple and focused will facilitate completion of the study.

RESOURCES

The resources required to administer an intervention and to manage and analyze the data must also be determined; estimate both what you'll need and what's actually available.

For example, consider the costs, time, and effort involved in creating and implementing the individualized education program and in developing, printing, and administering the quality-of-life questionnaires. In order to accurately estimate these costs and identify glitches in the study protocol, carry out a pilot test by walking through every aspect of the study protocol with a small number of patients. Be mindful of the demands made on respondents; asking patients to complete long questionnaires will probably result in a low participation rate.

DEVELOPING THE ANALYSIS PLAN

The conceptual model provides a general framework for the analysis plan, the specifics of which will be determined by the nature of the variables. For instance, in order to measure the effects of your education program on your patients, you'll compare quality-of-life scores both before and after the program's implementation, keeping in mind that outcomes are influenced by many considerations, both measurable and immeasurable. Because of this, the results of an outcomes study may not be determined solely or predominantly by the intervention. Still, it's important to establish a relationship between a treatment and its supposed effects, even if the relationship doesn't account for all of the effects. Guidelines for the provision of nursing care are determined by a clear understanding of how the study treatment influenced the outcome.

The main question is whether the treatment or intervention (in this case, the individualized education program) is significantly related to the outcomes (HbA1c levels and quality of life), while taking into account the demographic and preexisting clinical variables (the effects of age, stage of change, duration of illness, and comorbidity). These relationships can be examined by multiple regression analyses, which can help ascertain the extent to which independent variables predict the outcomes (dependent variables) of interest. In our example, this type of analysis would provide answers to the question "Compared to patients who weren't part of this study, how much impact did the education program have on HbA1c levels over and above age, stage of change, duration of illness, and comorbidity?"
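A multiple regression of this kind can be sketched as an ordinary least-squares fit of the outcome on a treatment indicator plus covariates. The data below are simulated (with a built-in treatment effect), and only age is included as a covariate for brevity; none of these numbers come from the study described above:

```python
# Hedged sketch of the multiple regression described above: regress the
# HbA1c outcome on a treatment indicator plus a pre-existing covariate.
# All data here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 40
age = rng.uniform(40, 80, n)                     # hypothetical ages
treated = rng.integers(0, 2, n).astype(float)    # 1 = got the program
# Invented data-generating process: the program lowers HbA1c by ~0.8.
hba1c = 8.5 + 0.02 * age - 0.8 * treated + rng.normal(0, 0.3, n)

# Design matrix: intercept, treatment indicator, age.
X = np.column_stack([np.ones(n), treated, age])
coef, *_ = np.linalg.lstsq(X, hba1c, rcond=None)

print(f"estimated treatment effect: {coef[1]:+.2f} HbA1c points")
```

The coefficient on the treatment indicator estimates the program's effect on HbA1c "over and above" the covariates included in the model, which is exactly the question posed above.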

FINAL POINTS

Depending on where and how the study is conducted, it's important to submit it to the appropriate institutional board for review. And be sure to communicate with all those affected by the study plan, including patients, colleagues, and support staff. Anyone involved in data collection will need to be trained to ensure that the intervention and the data gathering are carried out consistently. And those potentially affected by the study's results also need to be involved in its development. Practitioners tend to think that their research studies will help bring about favorable change. But a nonresearching coworker whose job may be influenced by a study's findings might not think so. Establishing a time line for the evaluation and a monitoring system to assure accuracy in data collection is also essential to a study's success.

You might encounter a loss of potential subjects if, for example, secretarial staff aren't alerted to the need to solicit all patients who have type 2 diabetes. By developing the study plan before beginning data collection-and by alerting all staff to procedures ahead of time-such a problem can be avoided.

To determine the relationship between nursing care and a patient's change in health status, ask yourself two questions: Was there a beneficial effect? And can the aspect of care that most likely caused the effect be identified by the research? If the answer to both questions is yes, the intervention can be instituted as sound, evidence-based nursing practice. If the answer to either question is no, the information from this study can be used as the basis of the next research project. Either way, you will have participated in developing evidence that advances nursing practice.

SELECTED REFERENCES

Cassidy CA. Using the transtheoretical model to facilitate behavior change in patients with chronic illness. J Am Acad Nurse Pract 1999;11(7):281-7.

DiClemente CC. Changing addictive behaviors: a process perspective. Curr Dir Psychol Sci 1993;2(4):101-6.

Donabedian A. Evaluating the quality of medical care. Milbank Memorial Fund Q 1966;44(Part 2):166-206.

Marcus BH, et al. Longitudinal shifts in employees' stages and processes of exercise behavior change. Am J Health Promot 1996;10(3):195-200.

Metcalfe D. The measurements of outcomes in general practice. In: Stewart M, et al., eds. Tools for primary care research. Newbury Park, CA: Sage Publications; 1992. p. 14-27.

Peters D. Strategic directions for using outcomes. Remington Rep 1994 June-July; p. 9-13.

Prochaska JO, et al. In search of how people change: applications to addictive behaviors. Am Psychol 1992;47(9):1102-14.

Redman BK, editor. Measurement tools in patient education. New York: Springer Pub; 1998.

Tilly KF, et al. Continuous monitoring of health status outcomes: experience with a diabetes education program. Diabetes Educ 1995;21(5):413-9.


