Providing Value

As might occur at a national meeting of any group of doctors, the informal conversations that take place among pathologists in conference hallways and hotel bars often gravitate to the two occupational disasters they fear most: misdiagnosing lesions and losing their jobs. The pathology literature is packed with advice on the former but does not offer much on the latter. That is not surprising. Who wants to share his failures with colleagues?

When relations between hospital administrators and pathologists begin to strain, pathologists assume that it is about money: the hospital wants to ratchet down its contract fee in order to improve the bottom line. In fact, it is almost never about money; it is about value.

In search of value

Hospital administrators do not like problems with doctors. Skirmishes with one or two physicians have a way of embroiling the entire medical staff in political, time-consuming, emotionally draining, and counterproductive wars. It would be a naïve administrator who would jeopardize a well-functioning service contract and alienate a contented medical staff just to trim a fraction of a percentage point off the hospital operating margin. But give a CEO cause to doubt the value of those services, and that contract becomes a blip on his fiduciary radar.

In any business, customers must be able to articulate the value that service providers bring to them. Clinicians must be able to cite the merits of their pathology departments beyond recounting anecdotes concerning national experts who confirmed diagnoses made by local pathologists. Administrators must have a handle on precisely what their laboratory medical directors do to run the laboratory.

Making the correct diagnosis is not enough

Clinicians regard accurate and timely pathology diagnoses as baseline performance — nothing less than the service they expect from mechanics who repair their cars or waiters who serve them their meals. If doctors are dissatisfied with pathology services, it is not with the analytic phase — the level of diagnostic acumen — but, rather, with the pre- and post-analytic phases of service.

Pathologists are usually unaware of physician dissatisfaction. They wait for doctors to complain, but nobody likes to complain, and doctors especially dislike face-to-face confrontations with other physicians. Disgruntled clinicians verbalize their grievances in operating rooms, doctors’ lounges, and administrators’ offices; everywhere, that is, but in pathology laboratories. Most medical-staff dissatisfaction is resolvable, but only if clinicians believe the pathologists are earnest about making things right. Nothing turns dissatisfaction into anger more quickly than the feeling that no one is listening.1

For instance, pathologists may be proud of their latest, cutting-edge, CAP-based template pathology reports. But those reports may anger physicians who cannot convince pathologists that they prefer the old, antiquated but colorful narratives. Physicians unhappy with laboratory turnaround times may become irate when stonewalled with arguments that laboratory performance exceeds national benchmarks, especially when those benchmarks were established in institutions other than the one in which they practice.2

By the time pathologists become aware of complaints, or at least aware of the degree to which they have ignited passions, it may be too late to recover. The administrator may have already begun to look at solutions, some of which exclude them.

Problems in the laboratory

Pathologists’ relationships with medical technologists and laboratory managers are equally fragile. Some laboratory medical directors believe that the standards of the Clinical Laboratory Improvement Amendments (CLIA) bestow upon them some form of entitlement. They might believe they have a license to hire and fire employees summarily, purchase equipment with little or no justification, and see their orders executed without question. (Actually, CLIA is a directive not to laboratory medical directors but to the laboratory owners who hire them.) Hospital administrators already have on their payrolls laboratory managers and the superiors to whom those managers report; it is unlikely that they welcome another layer of management. When overzealous pathologist oversight creates, rather than solves, problems, administrators may begin to rethink relationships.

Problems with laboratory oversight can head in the opposite direction. Pathologists may regard their laboratory duties as interfering with their anatomic-pathology activities, especially if anatomic pathology provides their major source of revenue. They may abdicate the lion’s share of their CLIA responsibilities to laboratory managers and do little more than affix their signatures to laboratory documents. If the laboratory managers are competent, pathologists can fly under the radar of scrutiny for quite some time. But if accreditation-inspection reports begin to accumulate citations, or if doctors start complaining about laboratory services, these pathologists’ images begin to appear in the cross hairs of institutional reorganization. Administrators who never embraced the notion of having pathologists oversee laboratory operations in the first place may start to wonder what their hospitals are getting for the six-figure oversight checks they dole out.

Creating opportunities to fail

It is not that pathologists do not care. They simply have not been trained in the business-related skills that define the executive positions they are asked to occupy. Never in their training did they learn the basics of production, contract negotiation, or customer service. Pathology residents complain that they are poorly trained in laboratory management. Indeed, the individuals responsible for training young doctors may be veterans of the large urban academic centers they joined immediately after residency, yet they deploy recruits to small community hospitals with which they themselves have no experience. As well intentioned as they may be, these mentors may never have had to sign a paycheck, be held accountable for falling revenues, risk their personal finances to grow a business, or fend off a national laboratory marketing blitz.

The manner in which pathology services are engaged can also undermine pathologists’ relationships with hospitals. In most practices, pathologists provide services under exclusive contracts. Pathologists may view these monopolies favorably, but they can backfire. Monopolies are not known for raising the bar of innovation or customer service. They tend to disincentivize accountability and encourage providers (rather than customers) to define the levels of service. Paraphrasing Henry Ford, some pathologists might say, “You may have a pathology report in any format you like as long as it is this one. You can have us assist you with needle aspirations at any time as long as it is not on Friday after 5:00 p.m.”3

Pathologists may see no reason to develop performance standards, and may even dread the notion. The medical staff may not want to force the issue for fear of having peers scrutinize their own performance. Nevertheless, pay-for-performance incentives are being incorporated into some “Part A” pathology contracts.4 Without performance metrics by which to gauge the level of service, pathologists may never know when they are drifting off course and heading toward an iceberg.

Hospital administrators may bear some responsibility for this. Hospital CEOs do not welcome laboratory medical directors onto their executive staffs as they do, say, physician hospital medical directors. They provide no platform by which to make pathologists aware of, let alone contribute to resolving, the day-to-day tribulations of hospital operations. How is a laboratory medical director to know that her request for a new hematology analyzer came on the day the CEO had to deal with news of a competing surgi-center, an impending nursing shortage, and a plummeting bond rating? Distanced from the “big hospital picture,” pathologists are left to focus only on the small laboratory details. They are squeezed into operational vacuums that keep them out of touch and bias their perceptions.

Controlling the damage

Waiting until the day customers complain before taking action is waiting one day too long. Table 1 offers suggestions as to what steps pathologists can take proactively to improve customer satisfaction. Not all suggestions are appropriate for every hospital. Among other things, they must be customized to institutional culture, hospital operations, the expertise and interests of pathology department members, and the degree to which relations may have deteriorated.

References

1. Gilly MC. Post-complaint processes: from organizational response to repurchase behavior. J Consum Aff. 1987;21:293-313.

2. Novis DA. The quality of customer service in anatomic pathology. Diagn Histopathol. 2008;14:308-315.

3. Kass ME, et al. Adequacy of pathology resident training for employment: a survey report from the Future of Pathology Task Group. Arch Pathol Lab Med. 2007;131:545-555.

4. Raich M, president/CEO, Vachette Pathology. Personal communication, August 15, 2002.

Customer Service

Abstract. Customer service, namely ensuring that the quality of goods and services meets the expectations of those who use them, is a fundamental element by which customers gauge the value of a company. The subject of customer service in the practice of anatomic pathology (AP) receives little time in pathologists’ training programs and little print in the medical literature. In this paper, the author discusses the importance of customer service to customer retention in the practice of AP. The author also compares the use of two metrics, one of process (test turnaround time) and one of outcome (customer satisfaction), by which the success of customer service is evaluated.

Prospective Review of Surgical Pathology Cases

FULL ARTICLE:   http://davidnovis.com/wp-content/uploads/2014/03/Doubleread-copy-2.pdf

ABSTRACT: When surgical pathology reports are discovered to contain errors after they have been released to clinicians, it is common practice for pathologists to correct and reissue them as amended reports. The rate at which surgical pathology reports are amended is therefore a convenient quality assurance measure of the frequency of errors in surgical pathology reporting. The purpose of this study was to determine whether routine review of surgical pathology case material prior to the release of reports would lower the rate at which reports were amended to correct misdiagnoses. In the year-long periods before and after institution of this intervention, the annual rates of amended reports issued to correct misdiagnoses were 1.3 per 1000 cases and 0.6 per 1000 cases, respectively.
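The per-1000 rates quoted above are simple proportions. As a minimal sketch of how a laboratory might track this indicator (the case counts below are illustrative only; the abstract reports rates, not raw denominators):

```python
def amended_rate_per_1000(amended_reports: int, total_cases: int) -> float:
    """Amended-report rate expressed per 1000 cases."""
    return 1000 * amended_reports / total_cases

# Illustrative counts only, chosen to reproduce the abstract's rates:
before = amended_rate_per_1000(13, 10_000)  # 1.3 per 1000 cases
after = amended_rate_per_1000(6, 10_000)    # 0.6 per 1000 cases
```

Tracking the amended-report rate this way, on a fixed schedule, is what lets a before-and-after comparison such as the study's be made at all.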

Novis, DA. Current state of malpractice litigation. Acta Cytol. 1998; 42:1302-4.

To the Editors:

I enjoyed reading the definitive and comprehensive review by Frable et al concerning the current state of malpractice litigation, as well as the thoughtful and provocative commentaries that followed it [Frable WJ et al: Medicolegal affairs. IAC Task Force summary. Acta Cytol 1998;42:76-132].

My interest alighted on several endorsements, both explicit and implied, concerning the notion of establishing centralized panels to review Pap smears in litigation. Until recently, I was convinced that the creation of review panels would improve our system of malpractice litigation. I also believed that the American Society of Cytopathology (ASC) should be the institution that establishes these panels because I thought that might be a way for the ASC to resolve several major problems facing it. I now believe that these review panels are unworkable, and that the ASC is already well on its way to resolving its problems without needing to establish review panels.

It had seemed to me that an institutionalized mechanism of slide review might counter what I consider a betrayal of our membership by officers who use their ASC status to profit from malpractice litigation brought against the very Society members who elected them to those offices in the first place. I’m not saying that our colleagues shouldn’t be allowed to sell their expertise to plaintiffs’ attorneys. However, when expert witnesses bolster their credentials in court by conjuring up their positions of leadership in our esteemed Society, lawyers have a way of making it sound as if they speak for all of us. If that be the case, I think we should be a part of the process that determines what the standards of performance are going to be and who, in the name of our Society, will articulate them.

As it turns out, the Society is already attempting to deal with this issue. Candidates for ASC office must now declare their malpractice activity to the membership. If we choose, we can take these activities into account when we cast our votes. Secondly, I believed that an ASC-based national arbitration board would show that, contrary to the characterization that it sometimes inadvertently projects, the Society’s leadership is truly sensitive to the anxieties of its members. In a recent poll conducted by the ASC, members indicated that the number one issue that they wanted the Society to confront was that of practice standards, particularly regarding malpractice litigation. There, too, the Society may be on the way to resolving this, if it indeed embraces the so-called South Carolina Guidelines.

Finally, and this really provided the main impetus for the concept, an impartial arbitration board reviewing litigation material without knowing whether it was rendering opinions for the plaintiff or the defense struck me as a fair way to decide whether a defendant delivered, and a plaintiff received, a reasonable standard of care.

Subsequently, I came to find out that the Committee on Cytopathology Practice considered, and then rejected, the notion of a review board quite some time ago. To understand why, I retraced their research. I talked to lawyers and malpractice risk managers representing The Doctors Company, the College of American Pathologists, the American College of Radiology, and the American Medical Association, as well as to private practitioners of malpractice law. Their opinions, with only a few exceptions, were much the same: the system is not about what is or is not fair to cytopathologists angered at having their competence publicly impugned. It’s about winning cases in malpractice court.

The people with whom I spoke all agreed that a central review board is, in concept, a great idea. In fact, many states have arbitration boards for civil litigation. Nobody uses them. In many states, the court itself can call its own unbiased expert witnesses. They don’t. This does not represent some sort of legal irony; it’s how our legal justice system operates. Malpractice attorneys don’t start out with missed cells on a Pap smear. They start out with a client who claims injury and an obligation to that client to convince a jury that the client should be compensated for that injury. If the plaintiff’s attorney needs to show that Pap smear results contributed to the injury, he/she will try to find someone to say so. Indeed, in most states, a plaintiff’s attorney cannot initiate legal action without the endorsement of an expert witness.

The defense cannot coerce the plaintiff into submitting a smear to some central arbitration panel. Defendants’ insurance companies do not necessarily endorse these arbitration panels, either. Once a case has been filed, insurance companies prefer to have their arguments articulated by experienced experts who are adept at defense testimony rather than by impartial panels that may render an opinion less than favorable to their own position. In fact, the last thing the defense wants is to give the plaintiff’s expert a soapbox upon which to perch in front of a jury and crow about how cumbersome and unnecessary the review panel is for concluding what is obvious to the most casual observer, namely, that the defendant’s error was gross and that the laboratory’s practice did not meet the most minimal standard of care.

Until we see tort reform in America, I think we’re stuck with this system. Rather than trying to change the entire legal system, maybe all the ASC can do is try to change the behavior of those who choose to belong to it. The Society can establish standards of practice for its members. It can devise mechanisms of case review for members who would like to measure how their practice compares with that of their peers. It can ratify uniform standards of slide review, such as those embodied in the South Carolina Guidelines. I suspect that not many members would choose to deviate from Society standards, at least not if they desire to maintain the esteem of their fellow Society members, let alone their very membership in the Society.

Perhaps, too, Society members might perceive these types of activities as adding value to their ASC membership. As I understand it, the Committee on Cytopathology Practice is engaged in setting standards of practice and standards of behavior for members involved in malpractice litigation. I patiently await their report later this year.

David A Novis, M.D. Wentworth Douglass Hospital Dover, New Hampshire 03820


Laboratory Accreditation

The College of American Pathologists (CAP) is the primary professional organization accrediting clinical medical laboratories. The CAP bases its accreditation standards on scientific evidence that links best practices to best outcomes. The following is a list of the CAP Accreditation Checklist standards that have emanated from clinical research published by Dr. Novis and his coworkers.

COMMISSION ON LABORATORY ACCREDITATION

Laboratory Accreditation Program

All checklists are ©2005 College of American Pathologists. All rights reserved.

LABORATORY GENERAL CHECKLIST

GEN.20316 Phase II N/A YES NO

Are key indicators of quality monitored and evaluated to detect problems and opportunities for improvement?

NOTE: Key indicators are those that reflect activities critical to patient outcome, that affect a large proportion of the laboratory’s patients, or that have been problematic in the past. The laboratory must document that the selected indicators are regularly compared against a benchmark, where available and applicable. The benchmark may be a practice guideline, CAP Q-PROBES data, or the laboratory’s own experience. New programs or services should be measured to evaluate their impact on laboratory service. The number of monitored indicators should be consistent with the laboratory’s scope of care. Special function laboratories may monitor a single indicator; larger laboratories should monitor multiple aspects of the scope of care commensurate with their scope of service. (However, there is no requirement that an indicator(s) be assessed in every section of the laboratory during every calendar year.)

Examples of key indicators include, but are not limited to the following. (Indicators related to CAP patient safety goals include numbers 1, 4, 7, 8 and 9.)

1. Patient/Specimen Identification. May be any of the following: percent of patient wristbands with errors, percent of ordered tests with patient identification errors, or percent of results with identification errors.

2. Test Order Accuracy. Percent of test orders correctly entered into a laboratory computer.

3. Stat Test Turnaround Time. May be collection-to-reporting turnaround time or receipt-in-laboratory-to-reporting turnaround time of tests ordered with a stat priority. May be confined to the Emergency Department or intensive care unit if a suitable reference database is available. Laboratories may monitor mean or median turnaround time or the percent of specimens with turnaround time that falls within an established limit.

4. Critical Value Reporting. Percent of critical values with documentation that values have been reported to caregivers

5. Customer Satisfaction. Must use a standardized satisfaction survey tool with a reference database of physician or nurse respondents.

6. Specimen Acceptability. Percent of general hematology and/or chemistry specimens accepted for testing.

7. Corrected Reports General Laboratory. Percent of reports that are corrected.

8. Corrected Reports Anatomic Pathology. Percent of reports that are corrected.

9. Surgical Pathology/Cytology Specimen Labeling. Percent of requisitions or specimen containers with one or more errors of pre-defined type.

10. Blood Component Wastage. Percentage of red blood cell units or other blood components that are not transfused to patients and not returned to the blood component supplier for credit or reissue.

11. Blood Culture Contamination. Percent of blood cultures that grow bacteria that are highly likely to represent contaminants.

While there is no requirement that the specific key quality indicators listed above be monitored, these indicators have been field-tested and shown to be measurable in a consistent manner, to demonstrate variability from laboratory to laboratory, and to be important to clinicians and to patient care. For the above indicators, performance should be compared with multi-institutional performance surveys that have been conducted within ten years of the laboratory’s most recent measurement, where such surveys are available (see references below). Action plans should be developed for any indicator in which laboratory performance falls below the 25th percentile (i.e., 75% or more of the other laboratories in the study perform better). Use of the indicators listed above does not require enrollment in any quality monitoring product.
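As a concrete illustration of indicator 3 (stat test turnaround time), the note above says laboratories may monitor the median turnaround time or the percent of specimens falling within an established limit. A minimal sketch of that computation follows; the 60-minute limit and the specimen times are hypothetical examples, not CAP requirements:

```python
from statistics import median

def stat_tat_summary(tat_minutes, limit_minutes=60):
    """Median stat-test turnaround time and percent of specimens within a limit.

    tat_minutes: collection-to-reporting times, in minutes, for stat specimens.
    limit_minutes: a locally established acceptable limit (60 is illustrative).
    """
    within = sum(1 for t in tat_minutes if t <= limit_minutes)
    return {
        "median_tat": median(tat_minutes),
        "pct_within_limit": 100 * within / len(tat_minutes),
    }

# Hypothetical turnaround times for eight stat specimens:
summary = stat_tat_summary([32, 45, 51, 58, 63, 70, 41, 55])
# median is 53.0 minutes; 6 of 8 specimens (75.0%) fall within the 60-minute limit
```

Either figure, trended over time and compared against a benchmark such as CAP Q-PROBES data, is the kind of documentation this checklist item asks for.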

4) Novis DA, et al. Biochemical markers of myocardial injury test turnaround time. Arch Pathol Lab Med. 2004; 128:158-164;

10) Novis DA, et al. Quality indicators of fresh frozen plasma and platelet utilization. Arch Pathol Lab Med. 2002;126:527-532

GEN.20348 Phase II N/A YES NO

Are preanalytic variables monitored?

NOTE: Preanalytic (i.e., pre-examination) variables include all steps in the process prior to the analytic phase of testing, starting with the physician’s order. Examples include accuracy of transmission of physicians’ orders, specimen transport and preparation, requisition accuracy, quality of phlebotomy services, specimen acceptability rates, etc. This list is neither all-inclusive nor exclusive. The variables chosen should be appropriate to the laboratory’s scope of care.

13) Dale JC, Novis DA. Outpatient phlebotomy success and reasons for specimen rejection. A Q-Probes study. Arch Pathol Lab Med. 2002;126:416-419;

GEN.20364 Phase II N/A YES NO

Are postanalytic variables monitored?

NOTE: Postanalytic (i.e., post-examination) variables include all steps in the overall laboratory process between completion of the analytic phase of testing and results receipt by the requesting physician. Examples are accuracy of data transmission across electronic interfaces, reflex testing, turnaround time from test completion to chart posting (paper and/or electronic), and interpretability of reports. This list is neither all-inclusive nor exclusive, providing the variables chosen are appropriate to the laboratory’s scope of care.

1) Novis DA, Dale JC. Morning rounds inpatient test availability. A College of American Pathologists Q-Probes study of 79 860 morning complete blood cell count and electrolyte test results in 367 institutions. Arch Pathol Lab Med. 2000;124:499-503;

4) Jones BA, Novis DA. Nongynecologic cytology turnaround time. A College of American Pathologists Q-Probes study of 180 laboratories. Arch Pathol Lab Med. 2001;125:1279-1284

POINT-OF-CARE TESTING CHECKLIST

POC.03200 Phase II N/A YES NO

Is the POCT program enrolled in the appropriate available graded CAP Surveys or a CAP approved alternative proficiency testing program for the patient testing performed?

COMMENTARY:

The POCT program must participate in the CAP Surveys program or a CAP-approved program of graded interlaboratory comparison testing appropriate to the scope of the laboratory, if available. This must include enrollment in surveys with analytes matching those for which the laboratory performs patient testing (e.g., patient whole blood glucose testing requires enrollment in CAP Survey WBG or an approved equivalent). Laboratories will not be penalized if they are unable to participate in an oversubscribed program.

6) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502

POC.03225 Phase II N/A YES NO

For tests for which CAP does not require PT, does the laboratory at least semiannually 1) participate in external PT, or 2) exercise an alternative performance assessment system for determining the reliability of analytic testing?

NOTE: Appropriate alternative performance assessment procedures may include: participation in ungraded proficiency testing programs, split sample analysis with reference or other laboratories, split samples with an established in-house method, assayed material, regional pools, clinical validation by chart review, or other suitable and documented means. It is the responsibility of the Laboratory Director to define such alternative performance assessment procedures, as applicable, in accordance with good clinical and scientific laboratory practice.

COMMENTARY:

For analytes where graded proficiency testing is not available, performance must be checked at least semi annually with appropriate procedures such as: participation in ungraded proficiency surveys, split sample analysis with reference or other laboratories, split samples with an established in house method, assayed material, regional pools, clinical validation by chart review, or other suitable and documented means. It is the responsibility of the Laboratory Director to define such procedures, as applicable, in accordance with good clinical and scientific laboratory practice.

2) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502;

POC.03500 Phase II N/A YES NO

Does the point-of-care testing program have a written QC/QM program?

NOTE: The QM/QC program for POCT must be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, and processing; and accurate result reporting. The program must be capable of detecting problems and identifying opportunities for system improvement. The laboratory must be able to develop plans of corrective/preventive action based on data from its QM system.

COMMENTARY:

The quality control (QC) and quality management (QM) program in POCT should be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, and processing; and accurate result reporting. The program must be capable of detecting problems and identifying opportunities for system improvement. The POCT program must be able to develop plans of corrective/preventive action based on data from its QM system.

Before patient results are reported, QC data must be judged acceptable. The Laboratory Director or designee must review QC data at least monthly. Beyond these specific requirements, a laboratory may (optionally) perform more frequent review at intervals that it determines appropriate. Because of the many variables across laboratories, the CAP makes no specific recommendations on the frequency of any additional review of QC data.

5) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502

POC.08800 Phase II N/A YES NO

For QUANTITATIVE tests, are control materials at more than one concentration (level) used for all tests at least daily?

NOTE: For coagulation tests under CLIA-88, 2 different levels of control material are required during each 8 hours of patient testing, and each time there is a change in reagents. For blood gas testing under CLIA-88, a minimum of 1 quality control specimen for pH, pCO2, and pO2 is required during each 8 hours of patient testing.

COMMENTARY:

For quantitative tests, an appropriate quality control (QC) system must be in place.

The daily use of 2 levels of instrument and/or electronic controls as the only QC system is acceptable only for unmodified test systems cleared by the FDA and classified under CLIA-88 as “waived” or “moderate complexity.” The laboratory is expected to provide documentation of its validation of all instrument reagent systems for which daily controls are limited to instrument and/or electronic controls. This documentation must include the federal complexity classification of the testing system and data showing that calibration status is monitored.

6) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502

TRANSFUSION MEDICINE CHECKLIST

TRM.20000 Phase II N/A YES NO

Does the transfusion medicine section have a written quality management/quality control (QM/QC) program?

NOTE: The QM/QC program in the transfusion medicine section must be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, preservation, transportation, and processing; and accurate, timely result reporting. The program must be capable of detecting problems in the laboratory’s systems, and identifying opportunities for system improvement. The laboratory must be able to develop plans of corrective/preventive action based on data from its QM system.

All QM questions in the Laboratory General Checklist pertain to the transfusion medicine section.

9) Novis DA, et al. Quality indicators of blood utilization. Three College of American Pathologists Q-Probes studies of 12 288 404 red blood cell units in 1639 hospitals. Arch Pathol Lab Med. 2002;126:150-156;

10) Novis DA, et al. Quality indicators of fresh frozen plasma and platelet utilization. Three College of American Pathologists Q-Probes studies of 8 981 796 units of fresh frozen plasma and platelets in 1639 hospitals. Arch Pathol Lab Med. 2002;126:527-532;

11) Novis DA, et al. Operating room blood delivery turnaround time. A College of American Pathologists Q-Probes study of 12 647 units of blood components in 466 institutions. Arch Pathol Lab Med. 2002;126:909-914.

CYTOPATHOLOGY CHECKLIST

CYP.00800 Phase II N/A YES NO

Is there a clearly defined and documented quality management program in cytopathology?

NOTE: Laboratories should consistently review activities and monitor their effectiveness in improving performance. Each laboratory should design a program that meets its needs and conforms to appropriate regulatory and accreditation standards.

6) Jones BA, Novis DA. Cervical biopsy-cytology correlation. A College of American Pathologists Q-Probes study of 22 439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523-531;

CYP.07569 Phase II N/A YES NO

Is an effort made to correlate gynecologic cytopathology findings with available clinical information?

NOTE: Methods of clinical correlation should be documented in the laboratory procedure manual, and selected reports can be reviewed to confirm practice. Possible mechanisms may include: focused rescreening of cases based on clinical history, history of bleeding, or previous abnormality; correlation of glandular cells with hysterectomy status, age of patient, and last menstrual period; review of previous or current biopsy material. Documentation of clinical correlation may include policies, problem logs with resolution, or notes in reports.

COMMENTARY:

An effort must be made to correlate gynecologic cytopathology findings with available clinical information.

3) Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology. A College of American Pathologists Q-Probes study of 16 132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124:665-671.

CYP.07690 Phase I N/A YES NO

Are 90% of reports on routine non-gynecologic cytology cases completed within 2 working days of receipt by the laboratory performing the evaluation?

NOTE: This question is primarily concerned with the majority of routine specimens, and applies to all laboratories. Longer reporting times may be allowed for specimens requiring special processing or staining (e.g., immunohistochemistry or other molecular analysis), or for screening (as opposed to diagnostic) specimens (for example, urines). If the laboratory has certain classes of specimens, patient types, etc., for which longer turnaround times are clinically acceptable, these must be identified, together with reasonable target reporting times, for Inspector review. Documentation may consist of continuous monitoring of data or periodic auditing of reports by the laboratory. In lieu of this documentation, the Inspector may audit sufficient reports to confirm turnaround time.

Jones BA, Novis DA. Nongynecologic cytology turnaround time. A College of American Pathologists Q-Probes study of 180 laboratories. Arch Pathol Lab Med. 2001;125:1279-1284.

LIMITED SERVICE LABORATORY CHECKLIST

LSV.37050 Phase II N/A YES NO

Are routine and STAT results available within a reasonable time?

NOTE: A reasonable time for routine daily service, assuming receipt or collection of specimen in the morning is 4 to 8 hours. Emergency or STAT results that do not require additional verification procedures should be reported within 1 hour after specimen receipt in the laboratory.

COMMENTARY:

Routine and stat results must be available within a reasonable time. A reasonable time for routine daily service, assuming receipt or collection of specimen in the morning, is 4 to 8 hours. Emergency or stat results that do not require additional verification procedures should be reported within 1 hour after specimen receipt in the laboratory.

2) Steindel SJ, Novis DA. Using outlier events to monitor test turnaround time. A College of American Pathologists Q-Probes study in 496 laboratories. Arch Pathol Lab Med. 1999;123:607-614.