Category Archives: Interviews

Hand stand–a look at manual blood smear reviews.

Parham, Sue. Feature Story: Hand stand–a look at manual blood smear reviews. CAP Today. April 2005.

Though labs have largely automated the process of performing CBCs, thanks to better hematology instruments, hematopathologists agree it’s still necessary to perform manual peripheral blood smear reviews on a certain percentage of specimens. But pinpointing the appropriate frequency of manual blood smear reviews can be tricky. There’s no set formula: too many manual reviews waste technologists’ valuable time, but too few could threaten the quality of patient care.

Since the number of manual peripheral blood smear reviews a laboratory performs is so tightly bound to patient population (a sicker patient population requires more manual reviews), labs can’t tap any real guideline to settle on an appropriate frequency for the procedure. Until recently, they also didn’t have any real data to help them compare their manual peripheral blood smear review rates with those of their peers.

But first-of-its-kind benchmark data are now available in a new CAP Q-PROBES report, “Rate of Peripheral Blood Smear Review,” which explores the current standard of practice in U.S. labs. “I visit a lot of laboratories, and I’ve seen some labs that are manually reviewing 60 to 70 percent of their CBCs, while others are manually reviewing only five to 10 percent of their CBCs,” says David Wilkinson, MD, PhD, professor and chairman of the Department of Pathology at Virginia Commonwealth University (VCU), Richmond, and lead author of the Q-PROBES study. The aim of the study, he says, was to establish a normative distribution of the review rate among participating laboratories, “and then try to associate certain types of practice with higher or lower rates of review.”

To establish this normative distribution, Dr. Wilkinson and his coauthors David Novis, MD, Jonathan Ben-Ezra, MD, and Mary St. Louis, MT (ASCP), asked the 263 participating laboratories to select 10 automated CBCs from each traditional laboratory shift and determine if a manual review was performed on the blood smear. For each manual review, patient age, hemoglobin value, WBC count, platelet count, and primary reason for the review were recorded. Participants were also asked to determine if the manual review turned up any new information. The laboratories continued to do this until 60 manual reviews had been collected. The authors then determined manual review rates, examined demographic and practice parameters, and searched for associations between review rates and institutions’ demographic and practice variables. The study was based on the review of more than 95,000 automated CBC specimens.
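The tallying step described above is simple enough to sketch. The following is an illustrative example only, not the study's actual instrument: it shows how a lab might record the sampled CBCs (with the fields the study tracked) and compute its own manual review rate. All names and values here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SampledCBC:
    """One sampled automated CBC, with the fields the Q-PROBES study recorded."""
    patient_age: int
    hemoglobin: float        # g/dL
    wbc_count: float         # x10^9/L
    platelet_count: int      # x10^3/uL
    manually_reviewed: bool
    review_reason: str = ""  # primary reason, recorded only when a review occurred

def manual_review_rate(samples):
    """Fraction of sampled automated CBCs that received a manual smear review."""
    if not samples:
        return 0.0
    reviewed = sum(1 for s in samples if s.manually_reviewed)
    return reviewed / len(samples)

# Hypothetical shift sample: 10 CBCs, 3 of which triggered a manual review.
samples = [
    SampledCBC(60, 13.5, 7.2, 250, i < 3, "instrument flag" if i < 3 else "")
    for i in range(10)
]
print(f"manual review rate: {manual_review_rate(samples):.1%}")  # 30.0%
```

A lab running this over its own sampled shifts could then place its rate against the study's distribution (median 26.7 percent).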

The median laboratory participating in the study manually reviewed 26.7 percent of its automated CBCs, and the mean was even higher, at 28.7 percent. “I was surprised to learn that more than one-fourth of all CBCs actually have a technologist look at the smear under the microscope. I would have expected that number to be lower,” says Dr. Ben-Ezra, professor of pathology and director of hematology laboratories at the Medical College of Virginia Campus of VCU.

What’s more, 10 percent of the laboratories surveyed were performing manual reviews of 50 percent or more of their smears. “I would have expected that number to be less as well,” says Dr. Novis, chief of pathology and medical director of the clinical laboratory at Wentworth-Douglass Hospital, Dover, NH. From an efficiency standpoint, he says, technology should be able to reduce that rate.

Neither of these numbers surprised Dr. Wilkinson, who, based on anecdotal information he collected before the study was launched, suspected that labs were performing a significant number of manual reviews. Still, he says, the findings provide useful benchmark data for the clinical laboratory community. “Not that the median number of manual peripheral blood smear reviews being performed by labs is anything magic, but if you are a large lab and you are reviewing 50 to 60 percent of the peripheral smears, and you can see that half the people in the country are reviewing less than 26 percent of them, then maybe you are doing too many,” Dr. Wilkinson says. “On the other hand, if your review rate is five percent, you may be doing too few.”

Each lab must determine its own cutpoints or triggers for manual review because patient populations differ, as do clinicians’ needs and desires. But laboratories can now use the Q-PROBES findings to help them evaluate and set their own strategies for performing manual peripheral blood smear reviews, Dr. Wilkinson says.

The study data are broken down in a number of different ways, and the authors provide insight into what types of labs are performing the fewest and the most manual reviews, though they note in the analysis that higher or lower review rates do not necessarily indicate better or worse performance. As expected, institution size does matter when it comes to manual blood smear review rates. “The larger the hospital, the greater the percentage of smears they tended to review,” Dr. Ben-Ezra says. “My feeling is that larger hospitals see more complex clinical cases, and therefore there are more things that would trigger such a smear review.”

Higher rates of manual reviews were often associated with lower efficiency for the lab in terms of billed tests per full-time equivalent. “The more labor you put into turning out a CBC result, the more your productivity drops. That’s self-evident, but now we have documentation of that,” Dr. Novis says. Conversely, lower total review rates and lower leukocyte differential count review rates were associated with higher productivity ratios.

Also associated with the lower manual review rates and the higher productivity ratios were automated instrument triggers that were set at a higher threshold in the labs that performed fewer manual reviews. “Our study showed, for example, that some labs are now using platelet counts over 1 million as their cutpoint,” Dr. Wilkinson says. “At the 50th percentile, the upper limit is 800,000, and at the 90th percentile it’s 1 million.” A laboratory whose cutpoint is triggering a manual peripheral blood smear review at 500,000 or 600,000 is in the minority, he adds, because 90 percent of the labs that participated in the study don’t review a smear unless the count is higher than that.
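The percentile figures above give a lab a rough yardstick for its own platelet trigger. As a hedged sketch (the benchmark numbers come from the article; the function and its wording are illustrative, not part of the study), a lab could position its cutpoint like this:

```python
# Benchmark upper platelet-count cutpoints reported in the article (per uL):
# 50th percentile = 800,000; 90th percentile = 1,000,000.
BENCHMARKS = {50: 800_000, 90: 1_000_000}

def compare_cutpoint(lab_cutpoint):
    """Roughly place a lab's upper platelet trigger against the reported distribution."""
    if lab_cutpoint < BENCHMARKS[50]:
        return "below the median cutpoint (triggers more reviews than most labs)"
    if lab_cutpoint < BENCHMARKS[90]:
        return "between the 50th and 90th percentile cutpoints"
    return "at or above the 90th percentile cutpoint (triggers fewer reviews)"

print(compare_cutpoint(600_000))
```

A lab triggering reviews at 600,000/uL, for example, would see that it sits below the median and could use that as a starting point for discussion, not as proof its practice is wrong.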

Hospitals that have criteria that limit how often they review a smear manually tend to have a lower rate of review. “However, those criteria must be hospital-driven,” Dr. Ben-Ezra says. “The number of manual reviews a hospital performs ultimately depends on the needs of the clinical staff, which in turn depends on the complexity mix of the patients seen at that particular institution.”

How can labs use these data to increase their efficiency? If a lab has a very high rate of manual differential counts, it may be able to substitute the less labor-intensive manual scans for the more labor-intensive manual differential counts, say the study’s authors. Dr. Novis says some laboratories may be able to use the study’s information on thresholds to begin a discussion about adjusting the criteria for what triggers a manual blood smear review.

In addition, he says, if labs want to make changes, they should find out what their customers want from manual reviews. “I suspect there may be a disconnect between the users and the providers of this service, given that our data indicate that only 3.5 percent of the manual diffs or screens were done at the request of the doctors.” Most of them were prompted within the labs by flags on the instruments. “It may be that if the flags on the instruments were adjusted a little bit, they could save labs some time and overhead, without adversely affecting patient care,” Dr. Novis says.

If a lab is determined to find its optimal rate of review, it should first define what optimal means. “You’d need to go to providers and determine under what conditions they absolutely need to have a manual smear review performed, and work backwards from there,” Dr. Novis says. “You’d have to correlate your manual review rates with some kind of clinical outcome. For example, if certain patients benefited from a review, you would need to figure out what that benefit might be and then measure it and work backwards.”

More than 36 percent of the study’s participants reported that when a manual peripheral smear review was done, useful information was discovered. This suggests that in many cases the manual reviews uncover something that the automated instruments might have missed, but there’s no way to know what that information was, why it might have been useful, and who found it useful, Dr. Ben-Ezra says. “We don’t have a handle on the type of information that was gleaned, and consequently we don’t know whether that median smear review rate of 26 percent is really worth the additional effort.”

These are the types of questions laboratories will grapple with as they use the Q-PROBES data to guide their efforts to boost efficiency, and it may take another study to generate firm answers. “If this Q-PROBES is repeated, perhaps the next thing would be to ask both technologists and physicians if they learned new information from performing the manual review, and if there is a disconnect there, what it is and why it’s happening,” Dr. Novis says. “This information alone could allow you to go back and reset the flags on your instruments and cut down the number of manual reviews you’re doing.”

In an upcoming issue of the Archives of Pathology & Laboratory Medicine, the study’s authors will discuss the findings in detail, as well as the activities of some laboratories that are attempting to recommend certain triggers for manual review. “These triggers are not based on outcome data or evidence, so more work needs to be done in this area,” Dr. Novis says. In the meantime, labs can use the new Q-PROBES data to make changes. “The idea is to cut your overhead and increase quality at the same time, and believe it or not, it is possible,” he says.

Interviews


Elliott, Victoria Stagg. Generation gaps: Managing a multigenerational staff. AMA News. June 21, 2010. http://www.ama-assn.org/amednews/2010/06/14/bisa0614.htm

Pepper, Leslie. Can you trust your lab results? Good Housekeeping. July 2007. Pages 45-50.

Landro, Laura. Hospitals Move to Cut Dangerous Lab Errors. Wall Street Journal. June 14, 2006. Pages D1 and D11.
