
Providing Value

As might occur at a national meeting of any group of doctors, the informal conversations that take place among pathologists in conference hallways and hotel bars often gravitate to the two occupational disasters they fear most: misdiagnosing lesions and losing their jobs. The pathology literature is packed with advice on the former but does not offer much on the latter. That is not surprising. Who wants to share his failures with colleagues?

When relations between hospital administrators and pathologists begin to strain, pathologists assume that it is about money: the hospital wants to ratchet down its contract fee in order to improve the bottom line. In fact, it is almost never about money; it is about value.

In search of value

Hospital administrators do not like problems with doctors. Skirmishes with one or two physicians have a way of embroiling the entire medical staff in political, time-consuming, emotionally draining, and counterproductive wars. It would be a naïve administrator who would jeopardize a well-functioning service contract and alienate a contented medical staff just to trim a fraction of a percentage point off the hospital operating margin. But give a CEO cause to doubt the value of those services? That contract becomes a blip on his fiduciary radar.

In any business, customers must be able to articulate the value that service providers bring to them. Clinicians must be able to cite the merits of their pathology departments beyond recounting anecdotes concerning national experts who confirmed diagnoses made by local pathologists. Administrators must have a handle on precisely what their laboratory medical directors do to run the laboratory.

Making the correct diagnosis is not enough

Clinicians regard accurate and timely pathology diagnoses as baseline performance — nothing less than the service they expect from mechanics who repair their cars or waiters who serve them their meals. If doctors are dissatisfied with pathology services, it is not with the analytic phase — the level of diagnostic acumen — but, rather, with the pre- and post-analytic phases of service.

Pathologists are usually unaware of physician dissatisfaction. They wait for doctors to complain, but nobody likes to complain, and doctors especially dislike face-to-face confrontations with other physicians. Disgruntled clinicians air their grievances in operating rooms, doctors’ lounges, and administrators’ offices — everywhere but in pathology laboratories. Most medical-staff dissatisfaction is resolvable, but only if clinicians believe the pathologists are earnest in making things right. Nothing moves dissatisfaction to anger more quickly than the feeling that no one is listening.1

For instance, pathologists may be proud of their latest, cutting-edge, CAP-based template pathology reports. But those reports may anger physicians who cannot convince pathologists that they preferred the antiquated but colorful narratives. Physicians unhappy with laboratory turnaround times may become irate when stonewalled with arguments that laboratory performance exceeds national benchmarks, especially when those benchmarks were established in institutions other than the one in which they practice.2

By the time pathologists become aware of complaints, or at least aware of the degree to which they have ignited passions, it may be too late to recover. The administrator may have already begun to look at solutions, some of which exclude them.

Problems in the laboratory

Pathologists’ relationships with medical technologists and laboratory managers are equally fragile. Some laboratory medical directors believe that the standards of the Clinical Laboratory Improvement Amendments (CLIA) bestow upon them some form of entitlement. They might believe they have a license to hire and fire employees summarily, to purchase equipment with little or no justification, and to see their orders executed without question. (Actually, CLIA is a directive not to laboratory medical directors but to the laboratory owners who hire them.) Hospital administrators already have on their payrolls laboratory managers and the superiors to whom those managers report. They are unlikely to welcome another layer of management. When overzealous pathologist oversight creates, rather than solves, problems, administrators may begin to rethink relationships.

Problems with laboratory oversight can also head in the opposite direction. Pathologists may regard their laboratory duties as interfering with their anatomic-pathology activities, especially if anatomic pathology provides their major source of revenue. They may abdicate the lion’s share of their CLIA responsibilities to laboratory managers and do little more than affix their signatures to laboratory documents. If the laboratory managers are competent, pathologists can fly under the radar of scrutiny for quite some time. But if accreditation-inspection reports begin to accumulate citations, or if doctors start complaining about laboratory services, these pathologists’ images begin to appear in the crosshairs of institutional reorganization. Administrators who never embraced the notion of having pathologists oversee laboratory operations in the first place may start to wonder what their hospitals are getting for the six-figure oversight checks they dole out.

Creating opportunities to fail

It is not that pathologists do not care. They simply have not been trained in the business-related skills that define the executive positions they are asked to occupy. Never in their training did they learn the basics of production, contract negotiation, or customer service. Pathology residents themselves report being poorly trained in laboratory management.3 Indeed, the individuals responsible for training young doctors may be veterans of the large urban academic centers they joined immediately after their own residencies, yet they deploy recruits to small community hospitals with which they have no experience. As well intentioned as they may be, these mentors may never have had to sign a paycheck, be held accountable for falling revenues, risk their personal finances to grow a business, or fend off a national laboratory marketing blitz.

The manner in which pathology services are engaged can also undermine pathologists’ relationships with hospitals. In most practices, pathologists provide services under exclusive contracts. Pathologists may view these monopolies favorably, but monopolies can backfire. They are not known to raise the bar of innovation or customer service; they tend to disincentivize accountability and to let providers, rather than customers, define the levels of service. Paraphrasing Henry Ford, some pathologists might say, “You may have a pathology report in any format you like as long as it is this one. You can have us assist you with needle aspirations at any time as long as it is not on Friday after 5:00 p.m.”

Pathologists may see no reason to develop performance standards and may even dread the notion. The medical staff may not want to force the issue for fear of having peers scrutinize their own performance. Pay-for-performance incentives are, however, being incorporated into some “Part A” pathology contracts.4 And without performance metrics by which to gauge the level of service, pathologists may never know when they are drifting off course and heading toward an iceberg.

Hospital administrators may bear some responsibility for this. Hospital CEOs do not welcome laboratory medical directors onto their executive staffs as they do, say, hospital medical directors. They provide no platform by which to make pathologists aware of, let alone contribute to resolving, the day-to-day tribulations of hospital operations. How is a laboratory medical director to know that her request for a new hematology analyzer arrived on the day the CEO had to deal with news of a competing surgi-center, an impending nursing shortage, and a plummeting bond rating? Distanced from the “big hospital picture,” pathologists are left to focus only on the small laboratory details. They are squeezed into operational vacuums that keep them out of touch and bias their perceptions.

Controlling the damage

Waiting for the day customers complain before taking action is waiting one day too long. Table 1 offers suggestions for steps pathologists can take proactively to improve customer satisfaction. Not all suggestions are appropriate for every hospital. Among other things, they must be customized to the institutional culture, hospital operations, the expertise and interests of the pathology department’s members, and the degree to which relations may have deteriorated.

References

1. Gilly MC. Post-complaint processes: from organizational response to repurchase behavior. J Consum Aff. 1987;21:293-313.

2. Novis DA. The quality of customer service in anatomic pathology. Diagn Histopathol. 2008;14:308-315.

3. Kass ME, et al. Adequacy of pathology resident training for employment: a survey report from the Future of Pathology Task Group. Arch Pathol Lab Med. 2007;131:545-555.

4. Raich M, president/CEO, Vachette Pathology. Personal communication, August 15, 2002.


Customer Service

Abstract. Customer service, namely ensuring that the quality of goods and services meets the expectations of those who use them, is a fundamental element by which customers gauge the value of a company. The subject of customer service in the practice of Anatomic Pathology (AP) receives little time in pathologists’ training programs and little print in the medical literature. In this paper, the author discusses the importance of customer service to customer retention in the practice of AP. The author also compares two metrics by which the success of customer service is evaluated: one of process, test turnaround time, and one of outcome, customer satisfaction.

Prospective Review of Surgical Pathology Cases

FULL ARTICLE:   http://davidnovis.com/wp-content/uploads/2014/03/Doubleread-copy-2.pdf

ABSTRACT: When surgical pathology reports are discovered to contain errors after they have been released to clinicians, it is common practice for pathologists to correct and reissue them as amended reports. Measuring the rate at which reports are amended is a convenient quality assurance tool for gauging the frequency of errors in surgical pathology reporting. The purpose of this study was to determine whether routine review of surgical pathology case material prior to the release of reports would lower the rate at which reports were amended to correct misdiagnoses. In the year-long periods before and after institution of this intervention, the annual rates of amended reports issued to correct misdiagnoses were 1.3 per 1000 cases and 0.6 per 1000 cases, respectively.

Novis DA. Current state of malpractice litigation. Acta Cytol. 1998;42:1302-1304.

To the Editors:

I enjoyed reading the definitive and comprehensive review by Frable et al concerning the current state of malpractice litigation, as well as the thoughtful and provocative commentaries that followed it [Frable WJ, et al. Medicolegal affairs. IAC Task Force summary. Acta Cytol. 1998;42:76-132].

My interest alighted on several endorsements, both explicit and implied, concerning the notion of establishing centralized panels to review Pap smears in litigation. Until recently, I was convinced that the creation of review panels would improve our system of malpractice litigation. I also believed that the American Society of Cytopathology (ASC) should be the institution that establishes these panels because I thought that might be a way for the ASC to resolve several major problems facing it. I now believe that these review panels are unworkable, and that the ASC is already well on its way to resolving its problems without needing to establish review panels.

It had seemed to me that an institutionalized mechanism of slide review might undermine what I consider to be the betrayal of our membership by officers who use their ASC status to profit from malpractice litigation brought against the very Society members who elected them to those offices in the first place. I’m not saying that our colleagues shouldn’t be allowed to sell their expertise to plaintiffs’ attorneys. However, when expert witnesses bolster their credentials in court by conjuring up their positions of leadership in our esteemed Society, lawyers have a way of making it sound as if they speak for all of us. If that be the case, I think we should be a part of the process that determines what the standards of performance are going to be, and who, in the name of our Society, will articulate them.

As it turns out, the Society is already attempting to deal with this issue. Candidates for ASC office must now declare their malpractice activity to the membership. If we choose, we can take these activities into account when we cast our votes. Secondly, I believed that an ASC-based national arbitration board would show that, contrary to the characterization that it sometimes inadvertently projects, the Society’s leadership is truly sensitive to the anxieties of its members. In a recent poll conducted by the ASC, members indicated that the number one issue that they wanted the Society to confront was that of practice standards, particularly regarding malpractice litigation. There, too, the Society may be on the way to resolving this, if it indeed embraces the so-called South Carolina Guidelines.

Finally, and this really provided the main impetus for the concept: the creation of an impartial arbitration board reviewing litigation material, never knowing whether it was rendering opinions for the plaintiff or the defense, struck me as a fair way to decide whether or not a defendant achieved, and the plaintiff received, a reasonable standard of care.

Subsequently, I came to find out that the Committee on Cytopathology Practice considered, and then rejected, the notion of a review board quite some time ago. To understand why, I retraced their research. I talked to lawyers and malpractice risk managers representing The Doctors Company, the College of American Pathologists, the American College of Radiology, and the American Medical Association, as well as to private practitioners of malpractice law. Their opinions, with only a few exceptions, were much the same: the system is not about what is or is not fair to cytopathologists angered at having their competence publicly impugned. It’s about winning cases in malpractice court.

The people with whom I spoke all agreed that a central review board is, in concept, a great idea. In fact, many states have arbitration boards for civil litigation. Nobody uses them. In many states, the court itself can call its own unbiased expert witnesses. They don’t. This does not represent some sort of legal irony; it’s how our legal justice system operates. Malpractice attorneys don’t start out with missed cells on a Pap smear. They start out with a client who claims injury and an obligation to that client to convince a jury that the client should be compensated for that injury. If the plaintiff’s attorney needs to show that Pap smear results contributed to the injury, he/she will try to find someone to say so. Indeed, in most states, a plaintiff’s attorney cannot initiate legal action without the endorsement of an expert witness.

The defense cannot coerce the plaintiff into submitting a smear to some central arbitration panel. Defendants’ insurance companies do not necessarily endorse these arbitration panels, either. Once a case has been filed, insurance companies prefer to have their arguments articulated by experienced experts who are adept at defense testimony rather than by impartial panels that may render an opinion less than favorable to their position. In fact, the last thing the defense wants is to give the plaintiff’s expert a soapbox upon which to perch in front of a jury and crow about how cumbersome and unnecessary the review panel is for concluding what is obvious to the most casual observer, namely, that the defendant’s error was gross and that the laboratory’s practice did not meet the most minimal standard of care.

Until we see tort reform in America, I think we’re stuck with this system. Rather than trying to change the entire legal system, maybe all the ASC can do is try to change the behavior of those who choose to belong to it. The Society can establish standards of practice for its members. It can devise mechanisms of case review for members who would like to measure how their practice compares to that of their peers. It can ratify uniform standards of slide review, such as those embodied in the South Carolina Guidelines. I suspect that not many members would choose to deviate from Society standards, at least not if they desired maintaining the esteem of their fellow Society members, let alone their very membership in the Society.

Perhaps, too, Society members might perceive these types of activities as adding value to their ASC membership. As I understand it, the Committee on Cytopathology Practice is engaged in setting standards of practice and standards of behavior for members involved in malpractice litigation. I patiently await their report later this year.

David A Novis, M.D. Wentworth Douglass Hospital Dover, New Hampshire 03820


Laboratory Accreditation

The College of American Pathologists (CAP) is the primary professional organization accrediting clinical medical laboratories. The CAP bases its accreditation standards on scientific evidence linking best practices to best outcomes. The following is a list of CAP Accreditation Checklist standards that emanated from clinical research published by Dr. Novis and his coworkers.

COMMISSION ON LABORATORY ACCREDITATION

Laboratory Accreditation Program

All Checklists are ©2005 College of American Pathologists. All rights reserved.

LABORATORY GENERAL CHECKLIST

GEN.20316 Phase II N/A YES NO

Are key indicators of quality monitored and evaluated to detect problems and opportunities for improvement?

NOTE: Key indicators are those that reflect activities critical to patient outcome, that affect a large proportion of the laboratory’s patients, or that have been problematic in the past. The laboratory must document that the selected indicators are regularly compared against a benchmark, where available and applicable. The benchmark may be a practice guideline, CAP Q-PROBES data, or the laboratory’s own experience. New programs or services should be measured to evaluate their impact on laboratory service. The number of monitored indicators should be consistent with the laboratory’s scope of care. Special function laboratories may monitor a single indicator; larger laboratories should monitor multiple aspects of the scope of care commensurate with their scope of service. (However, there is no requirement that an indicator(s) be assessed in every section of the laboratory during every calendar year.)

Examples of key indicators include, but are not limited to the following. (Indicators related to CAP patient safety goals include numbers 1, 4, 7, 8 and 9.)

1. Patient/Specimen Identification. May be any of the following: percent of patient wristbands with errors, percent of ordered tests with patient identification errors, or percent of results with identification errors.

2. Test Order Accuracy. Percent of test orders correctly entered into a laboratory computer.

3. Stat Test Turnaround Time. May be collection-to-reporting turnaround time or receipt-in-laboratory-to-reporting turnaround time of tests ordered with a stat priority. May be confined to the Emergency Department or intensive care unit if a suitable reference database is available. Laboratories may monitor mean or median turnaround time or the percent of specimens with turnaround time that falls within an established limit.

4. Critical Value Reporting. Percent of critical values with documentation that values have been reported to caregivers.

5. Customer Satisfaction. Must use a standardized satisfaction survey tool with a reference database of physician or nurse respondents.

6. Specimen Acceptability. Percent of general hematology and/or chemistry specimens accepted for testing.

7. Corrected Reports, General Laboratory. Percent of reports that are corrected.

8. Corrected Reports, Anatomic Pathology. Percent of reports that are corrected.

9. Surgical Pathology/Cytology Specimen Labeling. Percent of requisitions or specimen containers with one or more errors of pre-defined type.

10. Blood Component Wastage. Percentage of red blood cell units or other blood components that are not transfused to patients and not returned to the blood component supplier for credit or reissue.

11. Blood Culture Contamination. Percent of blood cultures that grow bacteria that are highly likely to represent contaminants.

While there is no requirement that the specific key quality indicators listed above be monitored, these indicators have been field-tested and shown to be measurable in a consistent manner, to demonstrate variability from laboratory to laboratory, and to be important to clinicians and to patient care. For the above indicators, performance should be compared with multi-institutional performance surveys that have been conducted within ten years of the laboratory’s most recent measurement, where such surveys are available (see references below). Action plans should be developed for any indicator in which laboratory performance falls below the 25th percentile (i.e., 75% or more of the other laboratories in the study perform better). Use of the indicators listed above does not require enrollment in any quality monitoring product.

4) Novis DA, et al. Biochemical markers of myocardial injury test turnaround time. Arch Pathol Lab Med. 2004; 128:158-164;

10) Novis DA, et al. Quality indicators of fresh frozen plasma and platelet utilization. Arch Pathol Lab Med. 2002;126:527-532.

GEN.20348 Phase II N/A YES NO

Are preanalytic variables monitored?

NOTE: Preanalytic (i.e., pre-examination) variables include all steps in the process prior to the analytic phase of testing, starting with the physician’s order. Examples include accuracy of transmission of physicians’ orders, specimen transport and preparation, requisition accuracy, quality of phlebotomy services, specimen acceptability rates, etc. This list is neither all-inclusive nor exclusive. The variables chosen should be appropriate to the laboratory’s scope of care.

13) Dale JC, Novis DA. Outpatient phlebotomy success and reasons for specimen rejection. A Q-Probes study. Arch Pathol Lab Med. 2002;126:416-419;

GEN.20364 Phase II N/A YES NO

Are postanalytic variables monitored?

NOTE: Postanalytic (i.e., post-examination) variables include all steps in the overall laboratory process between completion of the analytic phase of testing and results receipt by the requesting physician. Examples are accuracy of data transmission across electronic interfaces, reflex testing, turnaround time from test completion to chart posting (paper and/or electronic), and interpretability of reports. This list is neither all-inclusive nor exclusive, providing the variables chosen are appropriate to the laboratory’s scope of care.

1) Novis DA, Dale JC. Morning rounds inpatient test availability. A College of American Pathologists Q-Probes study of 79,860 morning complete blood cell count and electrolyte test results in 367 institutions. Arch Pathol Lab Med. 2000;124:499-503;

4) Jones BA, Novis DA. Nongynecologic cytology turnaround time. A College of American Pathologists Q-Probes study of 180 laboratories. Arch Pathol Lab Med. 2001;125:1279-1284

POINT-OF-CARE TESTING CHECKLIST

POC.03200 Phase II N/A YES NO

Is the POCT program enrolled in the appropriate available graded CAP Surveys or a CAP approved alternative proficiency testing program for the patient testing performed?

COMMENTARY:

The POCT program must participate in a CAP Surveys or CAP-approved program of graded interlaboratory comparison testing appropriate to the scope of the laboratory, if available. This must include enrollment in surveys with analytes matching those for which the laboratory performs patient testing (e.g., patient whole blood glucose testing requires enrollment in CAP survey WBG or approved equivalent). Laboratories will not be penalized if they are unable to participate in an oversubscribed program.

6) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q-Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502

POC.03225 Phase II N/A YES NO

For tests for which CAP does not require PT, does the laboratory at least semiannually 1) participate in external PT, or 2) exercise an alternative performance assessment system for determining the reliability of analytic testing?

NOTE: Appropriate alternative performance assessment procedures may include: participation in ungraded proficiency testing programs, split sample analysis with reference or other laboratories, split samples with an established in-house method, assayed material, regional pools, clinical validation by chart review, or other suitable and documented means. It is the responsibility of the Laboratory Director to define such alternative performance assessment procedures, as applicable, in accordance with good clinical and scientific laboratory practice.

COMMENTARY:

For analytes where graded proficiency testing is not available, performance must be checked at least semiannually with appropriate procedures such as: participation in ungraded proficiency surveys, split sample analysis with reference or other laboratories, split samples with an established in-house method, assayed material, regional pools, clinical validation by chart review, or other suitable and documented means. It is the responsibility of the Laboratory Director to define such procedures, as applicable, in accordance with good clinical and scientific laboratory practice.

2) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q-Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502;

POC.03500 Phase II N/A YES NO

Does the point-of-care testing program have a written QC/QM program?

NOTE: The QM/QC program for POCT must be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, and processing; and accurate result reporting. The program must be capable of detecting problems and identifying opportunities for system improvement. The laboratory must be able to develop plans of corrective/preventive action based on data from its QM system.

COMMENTARY:

The quality control (QC) and quality management (QM) program in POCT should be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, and processing; and accurate result reporting. The program must be capable of detecting problems and identifying opportunities for system improvement. The POCT program must be able to develop plans of corrective/preventive action based on data from its QM system.

Before patient results are reported, QC data must be judged acceptable. The Laboratory Director or designee must review QC data at least monthly. Beyond these specific requirements, a laboratory may (optionally) perform more frequent review at intervals that it determines appropriate. Because of the many variables across laboratories, the CAP makes no specific recommendations on the frequency of any additional review of QC data.

5) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q-Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502

POC.08800 Phase II N/A YES NO

For QUANTITATIVE tests, are control materials at more than one concentration (level) used for all tests at least daily?

NOTE: For coagulation tests under CLIA-88, 2 different levels of control material are required during each 8 hours of patient testing, and each time there is a change in reagents. For blood gas testing under CLIA-88, a minimum of 1 quality control specimen for pH, pCO2, and pO2 is required during each 8 hours of patient testing.

COMMENTARY:

For quantitative tests, an appropriate quality control (QC) system must be in place.

The daily use of 2 levels of instrument and/or electronic controls as the only QC system is acceptable only for unmodified test systems cleared by the FDA and classified under CLIA-88 as “waived” or “moderate complexity.” The laboratory is expected to provide documentation of its validation of all instrument reagent systems for which daily controls are limited to instrument and/or electronic controls. This documentation must include the federal complexity classification of the testing system and data showing that calibration status is monitored.

6) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q-Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502

TRANSFUSION MEDICINE CHECKLIST

TRM.20000 Phase II N/A YES NO

Does the transfusion medicine section have a written quality management/quality control (QM/QC) program?

NOTE: The QM/QC program in the transfusion medicine section must be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, preservation, transportation, and processing; and accurate, timely result reporting. The program must be capable of detecting problems in the laboratory’s systems, and identifying opportunities for system improvement. The laboratory must be able to develop plans of corrective/preventive action based on data from its QM system.

All QM questions in the Laboratory General Checklist pertain to the transfusion medicine section.

9) Novis DA, et al. Quality indicators of blood utilization. Three College of American Pathologists Q-Probes studies of 12,288,404 red blood cell units in 1639 hospitals. Arch Pathol Lab Med. 2002;126:150-156;

10) Novis DA, et al. Quality indicators of fresh frozen plasma and platelet utilization. Three College of American Pathologists Q-Probes studies of 8,981,796 units of fresh frozen plasma and platelets in 1639 hospitals. Arch Pathol Lab Med. 2002;126:527-532;

11) Novis DA, et al. Operating room blood delivery turnaround time. A College of American Pathologists Q-Probes study of 12,647 units of blood components in 466 institutions. Arch Pathol Lab Med. 2002;126:909-914.

CYTOPATHOLOGY CHECKLIST

CYP.00800 Phase II N/A YES NO

Is there a clearly defined and documented quality management program in cytopathology?

NOTE: Laboratories should consistently review activities and monitor their effectiveness in improving performance. Each laboratory should design a program that meets its needs and conforms to appropriate regulatory and accreditation standards.

6) Jones BA, Novis DA. Cervical biopsy-cytology correlation. A College of American Pathologists Q-Probes study of 22,439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523-531;

CYP.07569 Phase II N/A YES NO

Is an effort made to correlate gynecologic cytopathology findings with available clinical information?

NOTE: Methods of clinical correlation should be documented in the laboratory procedure manual, and selected reports can be reviewed to confirm practice. Possible mechanisms may include: focused rescreening of cases based on clinical history, history of bleeding, or previous abnormality; correlation of glandular cells with hysterectomy status, age of patient, and last menstrual period; review of previous or current biopsy material. Documentation of clinical correlation may include policies, problem logs with resolution, or notes in reports.

COMMENTARY:

An effort must be made to correlate gynecologic cytopathology findings with available clinical information.

3) Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology. A College of American Pathologists Q-Probes study of 16,132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124:665-671;

CYP.07690 Phase I N/A YES NO

Are 90% of reports on routine non-gynecologic cytology cases completed within 2 working days of receipt by the laboratory performing the evaluation?

NOTE: This question is primarily concerned with the majority of routine specimens, and applies to all laboratories. Longer reporting times may be allowed for specimens requiring special processing or staining (e.g., immunohistochemistry or other molecular analysis), or for screening (as opposed to diagnostic) specimens (for example, urines). If the laboratory has certain classes of specimens, patient types, etc., for which longer turnaround times are clinically acceptable, these must be identified, together with reasonable target reporting times, for Inspector review. Documentation may consist of continuous monitoring of data or periodic auditing of reports by the laboratory. In lieu of this documentation, the Inspector may audit sufficient reports to confirm turnaround time.

Jones BA, Novis DA. Nongynecologic cytology turnaround time. A College of American Pathologists Q-Probes study of 180 laboratories. Arch Pathol Lab Med. 2001;125:1279-1284.

LIMITED SERVICE LABORATORY CHECKLIST

LSV.37050 Phase II N/A YES NO

Are routine and STAT results available within a reasonable time?

NOTE: A reasonable time for routine daily service, assuming receipt or collection of specimen in the morning, is 4 to 8 hours. Emergency or STAT results that do not require additional verification procedures should be reported within 1 hour after specimen receipt in the laboratory.

COMMENTARY:

Routine and stat results must be available within a reasonable time. A reasonable time for routine daily service, assuming receipt or collection of specimen in the morning, is 4 to 8 hours. Emergency or stat results that do not require additional verification procedures should be reported within 1 hour after specimen receipt in the laboratory.

2) Steindel SJ, Novis DA. Using outlier events to monitor test turnaround time. A College of American Pathologists Q-Probes study in 496 laboratories. Arch Pathol Lab Med. 1999;123:607-614;

CAP Today

CAP Today is the trade news journal of the College of American Pathologists.

Low and inside: reducing laboratory staff turnover. 

May 2019

On a regular basis, CAP Today includes Q & A and general education sections for CAP physician members. The following is a list of questions for which the answers reference published studies performed by Dr. Novis and his coworkers.

December 2005

Q. Is there a benchmark or community standard for the percentage of stat tests relative to total workload? Our overseas military hospital would probably be most comparable to a small community hospital.

A. Stats vary according to the institutional services and, as such, are a barometer of those services, rather than a target to be achieved. I am unaware of any published general benchmarks, although specific articles appear occasionally regarding turnaround time, which may have data regarding stats embedded in them, such as for troponin measurements (Novis DA, Jones BA, Dale JC, et al. Arch Pathol Lab Med. 2004;128:158-164).

I believe staff should review the CAP Laboratory Accreditation Program standards regarding TAT and ensure that the needs of the medical staff are being met. If there is a perception that too many stats are being ordered (although that is not implied in the question), the medical director should perhaps review lab TAT in general or with respect to particular tests to ascertain if process improvement is needed. Alternatively, if there are individual abusers among the medical staff who regularly order stats inappropriately, then it is the medical director’s responsibility to attempt to change these ordering patterns and, thereby, assure appropriate laboratory use and resource consumption.

June 2005

Q. Our laboratory maintains cytology/histology correlation in a separate file and keeps track of any discrepancies noted. We send out a letter to clinicians for all abnormal cytology cases without correlation asking for documentation of followup. We have several questions about these correlations: Should a comment on correlation be included in our pathology reports? When should requests for followup be made, and what should be done with this information?

A. CLIA-88 mandates laboratory comparison of clinical information, when available, with cytology reports, and comparison of all gynecologic cytology reports with a diagnosis of high-grade squamous intraepithelial lesion (HSIL), adenocarcinoma, or other malignant neoplasm with the histopathology report, if available in the laboratory (either on-site or in storage), and determination of the causes of any discrepancies.1 These requirements are reflected in cytopathology checklist questions CYP.01900, CYP.07543, CYP.07556, and CYP.07569.2

The method for documenting the cytohistologic correlation results is not specified and is left to each individual laboratory’s discretion. Communication of cytology and biopsy correlation results to clinicians is key and provides critical information for optimal patient management.6 Cytohistologic correlation for individual patients can be documented in biopsy reports, via phone calls, or in letters, and, in more general terms, correlation statistics can be discussed in interdepartmental committees or conferences.

Evaluation of cytohistologic correlations is also an important part of a laboratory’s quality improvement program.10 The definition of what constitutes a diagnostic discrepancy should be established, and it should be recognized that perfect correlation is not realistic. The 1996 CAP Q-Probes study5 of 22,439 paired cervicovaginal cytology and biopsy specimens reported a discrepancy rate of 16.5 percent, with a Pap sensitivity of 89 percent and specificity of 65 percent. The majority of discrepancies in this study were due to sampling differences rather than screening or interpretive errors. Because both the Pap test and the colposcopic biopsy are subject to sampling errors, reasons for discrepancies should be pursued when the biopsy is negative, as the biopsy may not always represent the gold standard. Negative cytology cases should also be reviewed, if available, when the biopsy is positive. Peer or multidisciplinary review, or both, of noncorrelating specimens may be helpful in achieving consensus. Regular summary and evaluation of results can identify trends and improvements.
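For readers who track these figures themselves, the sensitivity and specificity quoted above reduce to simple ratios over a two-by-two table of paired results. The Python sketch below is a minimal illustration under the simplifying assumption that each Pap and each biopsy is classified only as abnormal or negative; the counts are hypothetical and are not data from the Q-Probes study.

```python
# Minimal sketch: Pap sensitivity and specificity, treating the biopsy as the
# reference standard. All counts below are hypothetical, for illustration only.

def pap_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute screening metrics from a 2x2 cytology-biopsy table.

    tp: abnormal Pap, abnormal biopsy;  fp: abnormal Pap, negative biopsy
    fn: negative Pap, abnormal biopsy;  tn: negative Pap, negative biopsy
    """
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),          # abnormal biopsies detected by cytology
        "specificity": tn / (tn + fp),          # negative biopsies with negative cytology
        "discrepancy_rate": (fp + fn) / total,  # crude cytology-biopsy disagreement
    }

# Hypothetical paired-case counts:
print(pap_performance(tp=890, fp=350, fn=110, tn=650))
```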

There is no requirement for correlating biopsies with lesser abnormalities, for correlating subsequent cytology with previous biopsies, or for correlating concurrent biopsies. However, these cytohistologic correlations are recorded in many laboratories and may be a useful part of the quality improvement program. Laboratories are required to document the number of cases that do have histologic correlation in the annual laboratory cytology statistical report.

In a 1997 article by Andrew Renshaw, MD,7 the optimal time for correlation of cytology and subsequent biopsies was found to be between 60 and 100 days, during which time the correlation of the Pap test with the subsequent biopsies was the highest. Biopsies performed more than 100 days after the Pap test were less likely to correlate with the initial Pap. The difference may be explained by regression of lesions over time.

When followup material is not available in the laboratory, documentation of followup correspondence or reports, telephone calls, or requests for information, whether as separate letters or in the histology report, must be maintained and kept for two years. In cases without biopsy followup, other studies such as human papillomavirus testing, repeat Pap tests, and colposcopic examination findings may provide useful information, especially in cases of HSIL, glandular abnormalities, and carcinoma.

References

5. Jones BA, Novis DA. Cervical biopsy-cytology correlation. A College of American Pathologists Q-Probes study of 22,439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523-531.

May 2003

Is there a right time for cyto/histo correlation in gyn cytology?

Cytologic-histologic correlation is an important component of any quality improvement program in cytology. A documented effort must be made to obtain and review follow-up histologic reports or material that is available within the laboratory when high-grade squamous intraepithelial lesion or malignant findings are identified in gynecologic cytology.1 There is no specific requirement to obtain correlation for any gynecologic cytology specimen in the absence of HSIL, and there is no specific requirement that histologic findings be correlated with cytologic findings, though many laboratories do make these correlations and the results certainly can be a component of a quality improvement program.2-5

The time period over which these correlations should be made is not specified. Data do suggest, however, that the optimal period for examination may be 60 to 100 days. In a study involving 419 low-grade squamous intraepithelial lesion and 277 HSIL smears, Renshaw, et al.6 correlated the rate of subsequent biopsy and the rate of correlation with that biopsy over a period of one year. In this study, 811 biopsies were performed. While biopsies that correlated with the initial cytologic finding could be identified as late as one year after the initial cytology, the highest rate of confirmation was obtained in biopsies performed within 60 days, and fully 78 percent of all correlating biopsies were obtained within the first 100 days. The chance of finding a correlating biopsy decreased after that time. In other words, biopsies performed more than 100 days after the initial cytology were less likely to correlate with the initial cytologic finding.

One explanation for the increased number of discrepancies was regression of the lesions. After 100 days, there is a greater likelihood of regression, which leads to an increase in the number of perceived false-positive cytology results when, in fact, a number of them are actually true positives. Limiting correlations to 100 days after the cytologic specimen was obtained is a reasonable way to limit the impact of false-positive correlations on the quality improvement program and the cytologic staff, while at the same time obtaining the majority of all biopsies for which correlation is available.

More controversial is whether cytologic specimens should be taken at the same time as the biopsy and correlated with it. Some literature suggests that cytologic specimens taken at that time have a higher likelihood of being false negatives; that is, the cytologic specimen is more likely to not sample the lesion found in the biopsy.7 In the study by Renshaw, et al.,6 this was not found to be the case, and indeed cytologic specimens obtained at the time of biopsy were more likely to correlate with the results of biopsy than cytologic specimens taken at any subsequent time.

No requirement specifies that concurrent Pap tests need to be correlated with the biopsy, since these cytology specimens were not the reason for obtaining the biopsy. Technically, concurrent biopsies are not a followup to the cytology. In the interests of patient care, however, HSIL or malignancy identified on the cytology specimen with a concurrent negative or low-grade biopsy result should be reconciled. Furthermore, the subsequent histologic specimens must be correlated. It appears that the optimal biopsies to correlate are those obtained within 60 to 100 days after the Pap test.

Reference

• Jones BA, Novis DA. Cervical biopsy-cytology correlation. Arch Pathol Lab Med. 1996;120:523-531.

November 2002

Fresh frozen plasma and platelet utilization

The authors of this study reported normative rates of expiration and wastage for units of fresh frozen plasma and platelets. Participants in the CAP Q-Probes laboratory quality improvement program collected data retrospectively on the number of units of FFP and PLTs that expired or were wasted due to mishandling. The participants also completed questionnaires describing their hospitals’ and blood banks’ laboratory and transfusion practices. The studies covered 1,639 public and private institutions and included data submitted on 8,981,796 units of FFP and PLTs. The aggregate combined FFP and PLT expiration rates ranged from 5.8 to 6.4 percent, and aggregate combined FFP and PLT wastage rates ranged from 2.0 to 2.5 percent. Among the top-performing participants (at the 90th percentile and above), FFP and PLT expiration rates were 0.6 percent or lower and FFP and PLT wastage rates were 0.5 percent or lower. Among the worst-performing participants (at the 10th percentile and below), expiration rates were 13.8 percent or higher and wastage rates were 6.8 percent or higher. The authors were unable to associate selected hospital characteristics or blood bank practices with lower rates of FFP and PLT utilization. They concluded that it is possible for hospital blood bank personnel to achieve FFP and PLT expiration and wastage rates of less than one percent.

Novis DA, Renner S, Friedberg RC, et al. Quality indicators of fresh frozen plasma and platelet utilization. Arch Pathol Lab Med. 2002;126:527-532.

Reprints: Contact Dr. David A. Novis at davidnovis.com.
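For readers who want to reproduce the study’s metrics locally, the following minimal Python sketch computes expiration and wastage rates from unit counts and compares them with the percentile cutoffs reported above. The unit counts are hypothetical; only the 0.6/0.5 percent and 13.8/6.8 percent thresholds come from the study.

```python
# Minimal sketch: blood component expiration and wastage rates, compared with
# the percentile cutoffs reported in the Q-Probes study. Unit counts are invented.

def utilization_rates(total_units: int, expired: int, wasted: int) -> dict:
    return {
        "expiration_pct": 100.0 * expired / total_units,
        "wastage_pct": 100.0 * wasted / total_units,
    }

# From the study: top performers (90th percentile and above) kept expiration
# at or below 0.6% and wastage at or below 0.5%; the worst performers (10th
# percentile and below) ran 13.8% and 6.8% or higher, respectively.
rates = utilization_rates(total_units=4200, expired=55, wasted=18)
print(rates)
print("top-decile expiration" if rates["expiration_pct"] <= 0.6
      else "room to improve on expiration")
print("top-decile wastage" if rates["wastage_pct"] <= 0.5
      else "room to improve on wastage")
```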

August 2002

Q. New requirement for nongyn TAT

A. The 2002 CAP Laboratory Accreditation Program checklist contains a new question related to turnaround time of nongynecologic cytology, or NGC, cases:

CYP.06532 Phase I: Are 90 percent of reports on routine nongynecologic cytology cases completed within two working days of receipt by the laboratory performing the evaluation?

This question was added to the checklist to underscore the importance of turnaround time as a measure of laboratory service quality. In a 2000 Q-Probe authored by Bruce A. Jones, MD, and David A. Novis, MD (QP08), the factors influencing TAT for 16,925 NGC specimens from 180 laboratories were analyzed. The authors found that 50 percent of participating laboratories had a mean TAT of 2.1 days or less from specimen collection to final report sign-off. The factors that delayed TAT included the use of reference laboratories for screening, lack of timely transcription, difficulty obtaining adequate specimen information from the submitting physician, and pulling old slides/tissue blocks for review or performing special stains, or both.

The CAP believes that a goal of two working days TAT for routine NGC specimens is reasonable. Documentation can consist of continuous monitoring of data or periodic auditing of reports. Longer times may be allowed for specimens requiring special processing or staining (for example, immunohistochemistry), provided these special classes of specimens are documented so that the inspector can evaluate their appropriateness.

For laboratories that are finding it difficult to meet the CAP TAT guidelines, the 2000 Q-Probe study makes recommendations for improving overall TAT. They are as follows: minimize the use of reference laboratories; educate the submitting physician’s office staff or change requisitions to expedite the gathering of important information, or both; reevaluate general laboratory workflow and transcription services; and continuously monitor TAT.

Nongynecologic cytology plays an important role in diagnosing and managing patients, many of whom may be acutely ill. The new CAP guideline emphasizes the importance of NGC turnaround time for patient care and clinical decision-making in today’s competitive, customer-service-oriented health care systems. Of course, the quality of diagnosis should never be compromised for the sake of TAT.
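As a supplement to the continuous-monitoring recommendation above, here is a minimal Python sketch of the underlying arithmetic: the percentage of routine NGC cases reported within two working days of receipt. The case dates are invented, and the day counter naively skips weekends only, ignoring holidays.

```python
# Sketch: percent of nongynecologic cytology cases reported within 2 working
# days of receipt. Dates are invented; holiday handling is omitted.
from datetime import date, timedelta

def working_days(received: date, reported: date) -> int:
    """Count weekdays elapsed between receipt and report (naive: no holidays)."""
    days, d = 0, received
    while d < reported:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

cases = [(date(2002, 6, 3), date(2002, 6, 4)),
         (date(2002, 6, 3), date(2002, 6, 5)),
         (date(2002, 6, 7), date(2002, 6, 10)),   # spans a weekend
         (date(2002, 6, 10), date(2002, 6, 14))]  # misses the target

within_target = sum(1 for rec, rep in cases if working_days(rec, rep) <= 2)
print(f"{100.0 * within_target / len(cases):.0f}% within 2 working days "
      f"(target: 90%)")
```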

June 2001

Q. I have a question about correlation between Pap tests, both regular and ThinPrep, and biopsy. What percentage is the benchmark? Some physicians are not satisfied with our service, and they seem to expect 100 percent correlation.

A. The percentage of cytology-biopsy discrepancies depends on the definition of discrepancy and the methods used to track discrepancies. One discrepancy definition offered in the CAP Quality Improvement Manual in Anatomic Pathology is a difference in interpretation that would have an impact on patient management decisions.1 Another definition is a two-step interpretive difference, for example, low-grade squamous intraepithelial lesion on biopsy versus squamous cancer on the Pap. Excluding certain specimen types, for example, endocervical curettings, will also have an impact on the discrepancy rates. Finally, the time interval and number of specimens considered per patient (single versus multiple cytology-histology combinations) will also affect the calculation.
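To make the “two-step interpretive difference” definition concrete, the short Python sketch below places interpretations on an ordinal scale and flags pairs that differ by two or more steps. The scale assignments are an illustrative assumption, not an official grading scheme; a laboratory would substitute its own categories.

```python
# Sketch of a "two-step" discrepancy screen. The ordinal ranks below are an
# illustrative assumption, not an official grading scheme.
RANK = {"negative": 0, "ASC-US": 1, "LSIL": 2, "HSIL": 3, "carcinoma": 4}

def two_step_discrepant(cytology: str, biopsy: str) -> bool:
    """Flag pairs whose interpretations differ by >= 2 ordinal steps."""
    return abs(RANK[cytology] - RANK[biopsy]) >= 2

pairs = [("carcinoma", "LSIL"),  # e.g., cancer on Pap vs LSIL on biopsy -> discrepant
         ("HSIL", "LSIL"),       # one step apart -> concordant under this rule
         ("negative", "HSIL")]   # discrepant
for cyto, bx in pairs:
    verdict = "discrepant" if two_step_discrepant(cyto, bx) else "within one step"
    print(cyto, "vs", bx, "->", verdict)
```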

A discordant Pap-biopsy combination, as defined by Joste et al, is one “in which one of the specimens is reported as a significant squamous or glandular lesion and the other specimen is reported as within normal limits.”2 This definition excluded atypical squamous cells of undetermined significance and biopsies lacking the transformation zone. In their 14-month study of 56,497 cervical smears, 2.8 percent (1,582) were followed by cervical biopsy. Of 1,582 paired samples, 175 cases (11 percent) were identified as discrepant. (This group represents 0.3 percent of all smears reviewed.) In the vast majority (93.2 percent), both cytologic and histologic diagnoses were confirmed and the discrepancies were classified as sampling errors. Only 3.4 percent of cases were found to have correctable (interpretive or screening) errors. Tritz et al also found an 11 percent discrepancy rate, with the majority representing sampling issues, although their definition of discrepancy involved a two-step difference in interpretation.3

Jones and Novis reported results of 12 months’ followup of 16,132 cervical smears from 306 laboratories as part of a CAP Q-Probes evaluation.4 They found that 18 percent of patients with low-grade squamous intraepithelial lesion on cytology had high-grade squamous intraepithelial lesion, or HSIL, on followup biopsy. Only 67 percent of patients with LSIL on cytology had LSIL on biopsy, and 86.5 percent had any abnormal biopsy. Of those patients with HSIL on smear, 15.5 percent had LSIL on corresponding biopsy, 75.5 percent had HSIL on biopsy, and 93.5 percent had an abnormal biopsy.4 Similar to the American experience, the United Kingdom’s screening program ranges between 65 and 85 percent concordance for biopsy-proven HSIL after HSIL on cervical smear.5

Brown et al evaluated 48 discrepant cases of HSIL on cervical smears with corresponding biopsies revealing LSIL.6 Biopsy specimens were tested and typed for HPV with molecular techniques. Thirty-seven cases were positive for HPV DNA: two for low-risk HPV types, 17 for high-risk types, and 18 for types of unknown oncogenicity. The prevalence of high-risk HPV was significantly higher in LSIL biopsies with a history of HSIL smears.6

Some cytology-histology discrepancy data have also been reported using liquid-based cytology. For example, Diaz-Rosario and Kabawat reported that 20.9 percent of HSIL ThinPreps and 26.8 percent of LSIL ThinPreps were followed by negative biopsies.7

It is unrealistic to expect 100 percent correlation between cervical cytology and cervical biopsies, and an open discussion with concerned clinicians is recommended. Cervical cytology is appropriately used as a screening test, which means that some specificity will be sacrificed for increased sensitivity, while the colposcopically guided cervical biopsy is recommended as a confirmatory test. Both tests are subject to sampling error. Although the cervical biopsy is often considered the gold standard, not all lesions will be fully characterized on an initial colposcopy, and a lesion that is small or deep in the glands may not be sampled. Some lesions will regress in the interval between the Pap test and the colposcopy. In some cases, the cervical smear may better represent the pathology of the cervix than the biopsy.2-6 Appropriate treatment and followup should then be dictated by a combination of clinical, cytology, and biopsy data. In addition, the pathologist’s advice or report comments may be extremely helpful.

References

4. Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology. A College of American Pathologists Q-Probes study of 16,132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124:665-671.

May 2001

Sidestepping common deficiencies

A top deficiency from the anatomic pathology checklist comes from this recently revised question, 08:1182, on frozen section turnaround time: “Are at least 90 percent of frozen section interpretations rendered within 20 minutes of specimen arrival in the frozen section area?”

The new guideline is based on a Q-Probes study of frozen section turnaround time published in the Archives of Pathology & Laboratory Medicine (Novis DA, et al. 1997;121:559-567). It requires specimens to be prepared, analyzed, interpreted, and reported within 20 minutes. Previously, frozen section slides had to be ready for a pathologist to analyze within 15 minutes.

“A lot of labs just didn’t realize that it changed or they’re not tracking their turnaround time, so they can’t say whether they’re hitting that [target] or not,” Dr. Ruhlen says.

Complicated cases that require multiple frozen sections, however, aren’t expected to meet this new standard. One example is a skin lesion with multiple margins that requires several frozen specimens for a complete interpretation. “Clearly it would be ridiculous to say you have to do them all in 20 minutes when that’s often just impossible,” Dr. Ruhlen says.