Small community-based pathology practice–The current issues. CBLPath, Rye Brook, New York. April 21, 2010
Powerpoint presentation not available.
PDF: /wp-content/uploads/2013/12/TheInformedPatient.pdf
College of American Pathologists Cancer Protocols and Checklists: Upper Aerodigestive Tract (Including Salivary Glands). Updated November 16, 2006. http://www.cap.org/apps/docs/cancer_protocols/2005/upperaero05_ckw.doc
CAP Today is the trade news journal of the College of American Pathologists. On a regular basis, CAP Today includes Q&A and general education sections for CAP physician members. The following is a list of questions for which the answers reference published studies performed by Dr. Novis and his coworkers.
December 2005
Q. Is there a benchmark or community standard for the percentage of stat tests relative to total workload? Our overseas military hospital would probably be most comparable to a small community hospital.
A. Stats vary according to the institutional services and, as such, are a barometer of those services, rather than a target to be achieved. I am unaware of any published general benchmarks, although specific articles appear occasionally regarding turnaround time, which may have data regarding stats embedded in them, such as for troponin measurements (Novis DA, Jones BA, Dale JC, et al. Arch Pathol Lab Med. 2004;128:158-164).
I believe staff should review the CAP Laboratory Accreditation Program standards regarding TAT and ensure that the needs of the medical staff are being met. If there is a perception that too many stats are being ordered (although that is not implied in the question), the medical director should perhaps review lab TAT in general or with respect to particular tests to ascertain if process improvement is needed. Alternatively, if there are individual abusers among the medical staff who regularly order stats inappropriately, then it is the medical director’s responsibility to attempt to change these ordering patterns and, thereby, assure appropriate laboratory use and resource consumption.
June 2005
Q. Our laboratory maintains cytology/histology correlation in a separate file and keeps track of any discrepancies noted. We send out a letter to clinicians for all abnormal cytology cases without correlation asking for documentation of followup. We have several questions about these correlations: Should a comment on correlation be included in our pathology reports? When should requests for followup be made, and what should be done with this information?
A. CLIA 88 mandates that there is laboratory comparison of clinical information, when available, with cytology reports and comparison of all gynecologic cytology reports with a diagnosis of high-grade squamous intraepithelial lesion (HSIL), adenocarcinoma, or other malignant neoplasms with the histopathology report, if available in the laboratory (either on-site or in storage), and determination of the causes of any discrepancies.1 These requirements are reflected in cytopathology checklist questions CYP.01900, CYP.07543, CYP.07556, and CYP.07569.2
The method for documenting the cytohistologic correlation results is not specified and is left to each individual laboratory's discretion. Communication of cytology and biopsy correlation results to clinicians is key and provides critical information for optimal patient management.6 Cytohistologic correlation for individual patients can be documented in biopsy reports, via phone calls, or in letters, and, in more general terms, correlation statistics can be discussed in interdepartmental committees or conferences.
Evaluation of cytohistologic correlations is also an important part of a laboratory's quality improvement program.10 The definition of what constitutes diagnostic discrepancy should be established, and it should be recognized that perfect correlation is not realistic. The 1996 CAP Q-Probes study5 of 22,439 paired cervicovaginal cytology and biopsy specimens reported a discrepancy rate of 16.5 percent with a Pap sensitivity of 89 percent and specificity of 65 percent. The majority of discrepancies in this study were due to sampling differences rather than screening or interpretive errors. Because both the Pap test and colposcopic biopsy are subject to sampling errors, reasons for discrepancies should be pursued when the biopsy is negative, as it may not always represent the gold standard. Negative cytology cases should also be reviewed, if available, when the biopsy is positive. Peer or multidisciplinary review or both of noncorrelating specimens may be helpful in achieving consensus. Regular summary and evaluation of results can identify trends and improvements.
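The performance figures cited above (discrepancy rate, sensitivity, specificity) follow from a simple 2x2 tabulation of paired cytology and biopsy results. A minimal sketch, using hypothetical counts rather than the Q-Probes study data:

```python
# Sensitivity, specificity, and discrepancy rate from paired
# cytology (Pap) and biopsy results. The counts below are
# hypothetical, for illustration only.
tp = 890   # abnormal Pap, abnormal biopsy
fp = 110   # abnormal Pap, negative biopsy
fn = 55    # negative Pap, abnormal biopsy
tn = 205   # negative Pap, negative biopsy

total = tp + fp + fn + tn
sensitivity = tp / (tp + fn)          # abnormal biopsies detected by the Pap
specificity = tn / (tn + fp)          # negative biopsies with a negative Pap
discrepancy_rate = (fp + fn) / total  # noncorrelating pairs

print(f"Sensitivity: {sensitivity:.1%}")
print(f"Specificity: {specificity:.1%}")
print(f"Discrepancy rate: {discrepancy_rate:.1%}")
```

Note that how a "discrepancy" is defined (any noncorrelation versus, say, a two-step interpretive difference) changes which cells of the table count toward the rate.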
There is no requirement for correlating biopsies with lesser abnormalities, for correlating subsequent cytology with previous biopsies, or for correlating concurrent biopsies. However, these cytohistologic correlations are recorded in many laboratories and may be a useful part of the quality improvement program. Laboratories are required to document the number of cases that do have histologic correlation in the annual laboratory cytology statistical report.
In a 1997 article by Andrew Renshaw, MD,7 the optimal time for correlation of cytology and subsequent biopsies was found to be between 60 and 100 days, during which time the correlation of the Pap test and the subsequent biopsies was the highest. Biopsies performed more than 100 days after the Pap test were less likely to correlate with the initial Pap. The difference may be explained by regression of lesions over time.
When followup material is not available in the laboratory, documentation of followup correspondence or reports, telephone calls, or requests for information, whether as separate letters or in the histology report, must be maintained and kept for two years. In cases without biopsy followup, other studies such as human papillomavirus testing, repeat Pap tests, and colposcopic examination findings may provide useful information, especially in cases of HSIL, glandular abnormalities, and carcinoma.
References
5. Jones BA, Novis DA. Cervical biopsy-cytology correlation. A College of American Pathologists Q-Probes study of 22,439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523-531.
May 2003
Is there a right time for cyto/histo correlation in gyn cytology?
Cytologic-histologic correlation is an important component of any quality improvement program in cytology. A documented effort must be made to obtain and review follow-up histologic reports or material that is available within the laboratory when high-grade squamous intraepithelial lesion or malignant findings are identified in gynecologic cytology.1 There is no specific requirement to obtain correlation for any gynecologic cytology specimen in the absence of HSIL, and there is no specific requirement that histologic findings be correlated with cytologic findings, though many laboratories do make these correlations and the results certainly can be a component of a quality improvement program.2-5
The time period over which these correlations should be made is not specified. Data do suggest, however, that the optimal period for examination may be 60 to 100 days. In a study involving 419 low-grade squamous intraepithelial lesion and 277 HSIL smears, Renshaw, et al.6 correlated the rate of subsequent biopsy and the rate of correlation with that biopsy over a period of one year. In this study, 811 biopsies were performed. While biopsies that correlated with the initial cytologic finding could be identified as late as one year after the initial cytology, the highest rate of confirmation was obtained in biopsies performed within 60 days, and fully 78 percent of all correlating biopsies were obtained within the first 100 days. The chance of finding a correlating smear decreased after that time. In other words, biopsies performed more than 100 days after the initial biopsy were less likely to correlate with the initial cytologic finding.
One explanation for the increased number of discrepancies was regression of the lesions. After 100 days, there is a greater likelihood of regression, which leads to an increase in the number of perceived false-positive cytology results when, in fact, a number of them are actually true positives. Limiting correlations to 100 days after the cytologic specimen was obtained is a reasonable way to limit the impact of false-positive correlations on the quality improvement program and the cytologic staff, while at the same time obtaining the majority of all biopsies for which correlation is available.
More controversial is whether cytologic specimens should be taken at the same time as the biopsy and correlated with it. Some literature suggests that cytologic specimens taken at that time have a higher likelihood of being false negatives; that is, the cytologic specimen is more likely to not sample the lesion found in the biopsy.7 In the study by Renshaw, et al.,6 this was not found to be the case, and indeed cytologic specimens obtained at the time of biopsy were more likely to correlate with the results of biopsy than cytologic specimens taken at any subsequent time.
No requirement specifies that concurrent Pap tests need to be correlated with the biopsy since these cytology specimens were not the reason for obtaining the biopsy. Technically concurrent biopsies are not a followup to the cytology. In the interests of patient care, however, HSIL or malignancy identified on the cytology specimen with a concurrent negative or low-grade biopsy result should be reconciled. Furthermore, the subsequent histologic specimens must be correlated. It appears that the optimal biopsies to correlate are those obtained within 60 to 100 days after the Pap test.
Reference
Jones BA, Novis DA. Cervical biopsy-cytology correlation. A College of American Pathologists Q-Probes study of 22,439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523-531.
November 2002
Fresh frozen plasma and platelet utilization
The authors of this study reported normative rates of expiration and wastage for units of fresh frozen plasma (FFP) and platelets (PLTs). Participants in the CAP Q-Probes laboratory quality improvement program collected data retrospectively on the number of units of FFP and PLTs that expired or were wasted due to mishandling. The participants also completed questionnaires describing their hospitals' and blood banks' laboratory and transfusion practices. The studies covered 1,639 public and private institutions and included data submitted on 8,981,796 units of FFP and PLTs. The aggregate combined FFP and PLT expiration rates ranged from 5.8 to 6.4 percent, and aggregate combined FFP and PLT wastage rates ranged from 2.0 to 2.5 percent. Among the top-performing participants (at the 90th percentile and above), FFP and PLT expiration rates were 0.6 percent or lower and FFP and PLT wastage rates were 0.5 percent or lower. Among the worst-performing participants (at the 10th percentile and below), expiration rates were 13.8 percent or higher and wastage rates were 6.8 percent or higher. The authors were unable to associate selected hospital characteristics or blood bank practices with lower rates of FFP and PLT utilization. They concluded that it is possible for hospital blood bank personnel to achieve FFP and PLT expiration and wastage rates of less than one percent.
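The expiration and wastage rates reported in this study are straightforward proportions of total units handled. A minimal sketch, with hypothetical unit counts, of how a blood bank might track its own rates against the study's benchmarks:

```python
# Expiration and wastage rates as proportions of units handled.
# All counts below are hypothetical, for illustration only.
units_handled = 4_200
units_expired = 38   # outdated before use
units_wasted = 17    # lost to mishandling

expiration_rate = units_expired / units_handled
wastage_rate = units_wasted / units_handled

# Compare against the study's top-performer benchmarks.
print(f"Expiration rate: {expiration_rate:.2%} (top performers: 0.6% or lower)")
print(f"Wastage rate: {wastage_rate:.2%} (top performers: 0.5% or lower)")
```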
Novis DA, Renner S, Friedberg RC, et al. Quality indicators of fresh frozen plasma and platelet utilization. Arch Pathol Lab Med. 2002;126:527-532.
Reprints: contact Dr. David A. Novis at davidnovis.com.
August 2002
Q. New requirement for nongyn TAT
A. The 2002 CAP Laboratory Accreditation Program checklist contains a new question related to turnaround time of nongynecologic cytology, or NGC, cases:
CYP.06532 Phase I: Are 90 percent of reports on routine nongynecologic cytology cases completed within two working days of receipt by the laboratory performing the evaluation?
This question was added to the checklist to underscore the importance of turnaround time as a measure of laboratory service quality. In a 2000 Q-Probe authored by Bruce A. Jones, MD, and David A. Novis, MD (QP08), the factors influencing TAT for 16,925 NGC specimens from 180 laboratories were analyzed. The authors found that 50 percent of participating laboratories had a mean TAT of 2.1 days or less from specimen collection to final report sign-off. The factors that delayed TAT included the use of reference laboratories for screening, lack of timely transcription, difficulty obtaining adequate specimen information from the submitting physician, and pulling old slides/tissue blocks for review or performing special stains, or both.
The CAP believes that a goal of two working days TAT for routine NGC specimens is reasonable. Documentation can consist of continuous monitoring of data or periodic auditing of reports. Longer times may be allowed for specimens requiring special processing or staining (for example, immunohistochemistry), provided these special classes of specimens are documented so that the inspector can evaluate their appropriateness.
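The periodic auditing mentioned above amounts to counting, for a sample of reports, the working days from specimen receipt to report sign-off and computing the percent that meet the two-day target. A sketch with a hypothetical helper and hypothetical dates:

```python
from datetime import date, timedelta

def working_days(received: date, reported: date) -> int:
    """Count working days (Mon-Fri) from specimen receipt to report sign-off."""
    days, d = 0, received
    while d < reported:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

# Hypothetical audit sample of (received, reported) date pairs.
cases = [
    (date(2024, 3, 4), date(2024, 3, 5)),
    (date(2024, 3, 4), date(2024, 3, 6)),
    (date(2024, 3, 8), date(2024, 3, 11)),  # Fri -> Mon = 1 working day
    (date(2024, 3, 5), date(2024, 3, 12)),  # exceeds the 2-day target
]
within_target = sum(working_days(r, s) <= 2 for r, s in cases)
pct = within_target / len(cases)
print(f"{pct:.0%} of routine NGC reports within 2 working days (target: 90%)")
```

Specimens requiring special processing (for example, immunohistochemistry) would be tracked separately, per the documentation allowance described above.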
For laboratories that are finding it difficult to meet the CAP TAT guidelines, the 2000 Q-Probe study makes recommendations for improving overall TAT. They are as follows: minimize the use of reference laboratories; educate the submitting physician’s office staff or change requisitions to expedite the gathering of important information, or both; reevaluate general laboratory workflow and transcription services; and continuously monitor TAT.
Nongynecologic cytology plays an important role in diagnosing and managing patients, many of whom may be acutely ill. The new CAP guideline emphasizes the importance of NGC turnaround time for patient care and clinical decision-making in today's competitive, customer-service-oriented health care systems. Of course, the quality of diagnosis should never be compromised for the sake of TAT.
June 2001
Q. I have a question about correlation between Pap tests, both regular and ThinPrep, and biopsy. What percentage is the benchmark? Some physicians are not satisfied with our service, and they seem to expect 100 percent correlation.
A. The percentage of cytology-biopsy discrepancies depends on the definition of discrepancy and the methods used to track discrepancies. One discrepancy definition offered in the CAP Quality Improvement Manual in Anatomic Pathology is a difference in interpretation that would have an impact on patient management decisions.1 Another definition is a two-step interpretive difference, for example low-grade squamous intraepithelial lesion on biopsy versus squamous cancer on the Pap. Excluding certain specimen types, for example endocervical curettings, will also have an impact on the discrepancy rates. Finally, the time interval and number of specimens considered per patient (single versus multiple cytology-histology combinations) will also affect the calculation.
A discordant Pap-biopsy combination, as defined by Joste et al, is one "in which one of the specimens is reported as a significant squamous or glandular lesion and the other specimen is reported as within normal limits."2 This definition excluded atypical squamous cells of undetermined significance and biopsies lacking the transformation zone. In their 14-month study of 56,497 cervical smears, 2.8 percent (1,582) were followed by cervical biopsy. Of 1,582 paired samples, 175 cases (11 percent) were identified as discrepant. (This group represents 0.3 percent of all smears reviewed.) In the vast majority of these discrepant cases (93.2 percent), both cytologic and histologic diagnoses were confirmed and the discrepancies were classified as sampling errors. Only 3.4 percent of cases were found to have correctable (interpretive or screening) errors. Tritz et al also found an 11 percent discrepancy rate, with the majority representing sampling issues, although their definition of discrepancy involved a two-step difference in interpretation.3
Jones and Novis reported results of 12 months’ followup of 16,132 cervical smears from 306 laboratories as part of a CAP Q-Probes evaluation.4 They found that 18 percent of patients with low-grade squamous intraepithelial lesion on cytology had high-grade squamous intraepithelial lesion, or HSIL, on followup biopsy. Only 67 percent of patients with LSIL on cytology had LSIL on biopsy, and 86.5 percent had any abnormal biopsy. Of those patients with HSIL on smear, 15.5 percent had LSIL on corresponding biopsy, 75.5 percent had HSIL on biopsy, and 93.5 percent had an abnormal biopsy.4 Similar to the American experience, the United Kingdom’s screening program ranges between 65 and 85 percent concordance for biopsy-proven HSIL after HSIL on cervical smear.5
Brown et al evaluated 48 discrepant cases of HSIL on cervical smears with corresponding biopsies revealing LSIL.6 Biopsy specimens were tested and typed for HPV with molecular techniques. Thirty-seven cases were positive for HPV DNA: two for low-risk HPV types, 17 for high-risk types, and 18 for types of unknown oncogenicity. The prevalence of high-risk HPV was significantly higher in LSIL biopsies with a history of HSIL smears.6
Some cytology-histology discrepancy data have also been reported using liquid-based cytology. For example, Diaz-Rosario and Kabawat reported that 20.9 percent of HSIL ThinPreps and 26.8 percent of LSIL ThinPreps were followed by negative biopsies.7
It is unrealistic to expect 100 percent correlation between cervical cytology and cervical biopsies, and an open discussion with concerned clinicians is recommended. Cervical cytology is appropriately used as a screening test, which means that some specificity will be sacrificed for increased sensitivity, while the colposcopically guided cervical biopsy is recommended as a confirmatory test. Both tests are subject to sampling error. Although the cervical biopsy is often considered the gold standard, not all lesions will be fully characterized on an initial colposcopy, and a lesion that is small or deep in the glands may not be sampled. Some lesions will regress in the interval between the Pap test and the colposcopy. In some cases, the cervical smear may better represent the pathology of the cervix than the biopsy.2-6 Appropriate treatment and followup should then be dictated by a combination of clinical, cytology, and biopsy data. In addition, the pathologist’s advice or report comments may be extremely helpful.
References
4. Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology. A College of American Pathologists Q-Probes study of 16,132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124:665-671.
May 2001
Sidestepping common deficiencies
A top deficiency from the anatomic pathology checklist comes from this recently revised question, 08:1182, on frozen section turnaround time: “Are at least 90 percent of frozen section interpretations rendered within 20 minutes of specimen arrival in the frozen section area?”
The new guideline is based on a Q-Probe study of frozen section turnaround time published in the Archives of Pathology & Laboratory Medicine (Novis DA, et al. 1997;121: 559-567). It requires specimens to be prepared, analyzed, interpreted, and reported within 20 minutes. Previously, frozen section slides had to be ready for a pathologist to analyze within 15 minutes.
“A lot of labs just didn’t realize that it changed or they’re not tracking their turnaround time, so they can’t say whether they’re hitting that [target] or not,” Dr. Ruhlen says.
Complicated cases that require multiple frozen sections, however, aren’t expected to meet this new standard. One example is a skin lesion with multiple margins that requires several frozen specimens for a complete interpretation. “Clearly it would be ridiculous to say you have to do them all in 20 minutes when that’s often just impossible,” Dr. Ruhlen says.
The College of American Pathologists (CAP) is the primary professional organization accrediting clinical medical laboratories. The CAP bases its accreditation standards on scientific evidence that links best practices to best outcomes. The following is a list of the CAP Accreditation Checklist standards that have emanated from clinical research published by Dr. Novis and his coworkers.
COMMISSION ON LABORATORY ACCREDITATION
Laboratory Accreditation Program
All checklists are ©2005 College of American Pathologists. All rights reserved.
LABORATORY GENERAL CHECKLIST
GEN.20316 Phase II N/A YES NO
Are key indicators of quality monitored and evaluated to detect problems and opportunities for improvement?
NOTE: Key indicators are those that reflect activities critical to patient outcome, that affect a large proportion of the laboratory’s patients, or that have been problematic in the past. The laboratory must document that the selected indicators are regularly compared against a benchmark, where available and applicable. The benchmark may be a practice guideline, CAP Q-PROBES data, or the laboratory’s own experience. New programs or services should be measured to evaluate their impact on laboratory service. The number of monitored indicators should be consistent with the laboratory’s scope of care. Special function laboratories may monitor a single indicator; larger laboratories should monitor multiple aspects of the scope of care commensurate with their scope of service. (However, there is no requirement that an indicator(s) be assessed in every section of the laboratory during every calendar year.)
Examples of key indicators include, but are not limited to the following. (Indicators related to CAP patient safety goals include numbers 1, 4, 7, 8 and 9.)
1. Patient/Specimen Identification. May be any of the following: percent of patient wristbands with errors, percent of ordered tests with patient identification errors, or percent of results with identification errors.
2. Test Order Accuracy. Percent of test orders correctly entered into a laboratory computer.
3. Stat Test Turnaround Time. May be collection-to-reporting turnaround time or receipt-in-laboratory-to-reporting turnaround time of tests ordered with a stat priority. May be confined to the Emergency Department or intensive care unit if a suitable reference database is available. Laboratories may monitor mean or median turnaround time or the percent of specimens with turnaround time that falls within an established limit.
4. Critical Value Reporting. Percent of critical values with documentation that values have been reported to caregivers.
5. Customer Satisfaction. Must use a standardized satisfaction survey tool with a reference database of physician or nurse respondents.
6. Specimen Acceptability. Percent of general hematology and/or chemistry specimens accepted for testing.
7. Corrected Reports General Laboratory. Percent of reports that are corrected.
8. Corrected Reports Anatomic Pathology. Percent of reports that are corrected.
9. Surgical Pathology/Cytology Specimen Labeling. Percent of requisitions or specimen containers with one or more errors of pre-defined type.
10. Blood Component Wastage. Percentage of red blood cell units or other blood components that are not transfused to patients and not returned to the blood component supplier for credit or reissue.
11. Blood Culture Contamination. Percent of blood cultures that grow bacteria that are highly likely to represent contaminants.
While there is no requirement that the specific key quality indicators listed above be monitored, these indicators have been field-tested and shown to be measurable in a consistent manner, to demonstrate variability from laboratory to laboratory, and to be important to clinicians and to patient care. For the above indicators, performance should be compared with multi-institutional performance surveys that have been conducted within ten years of the laboratory's most recent measurement, where such surveys are available (see references below). Action plans should be developed for any indicator in which laboratory performance falls below the 25th percentile (i.e., 75% or more of the other laboratories in the study perform better). Use of the indicators listed above does not require enrollment in any quality monitoring product.
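The percentile comparison described in this note can be sketched as follows. The peer rates, the nearest-rank percentile helper, and the contamination figures are all hypothetical; for a rate where lower is better (such as blood culture contamination), performing below the 25th percentile means exceeding the peers' 75th percentile value:

```python
def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile of a sorted copy of values (pct in 0-100)."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(pct / 100 * (len(s) - 1))))
    return s[k]

# Hypothetical multi-institutional blood culture contamination rates (%).
peer_contamination_rates = [1.1, 1.8, 2.0, 2.4, 2.6, 2.9, 3.3, 4.0, 5.2]
our_rate = 4.5  # hypothetical local rate (%)

threshold = percentile(peer_contamination_rates, 75)
if our_rate > threshold:
    print(f"Action plan needed: {our_rate}% exceeds peer 75th percentile ({threshold}%)")
```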
4) Novis DA, et al. Biochemical markers of myocardial injury test turnaround time. Arch Pathol Lab Med. 2004; 128:158-164;
10) Novis DA, et al. Quality indicators of fresh frozen plasma and platelet utilization. Arch Pathol Lab Med. 2002;126:527-532
GEN.20348 Phase II N/A YES NO
Are preanalytic variables monitored?
NOTE: Preanalytic (i.e., pre-examination) variables include all steps in the process prior to the analytic phase of testing, starting with the physician's order. Examples include accuracy of transmission of physicians' orders, specimen transport and preparation, requisition accuracy, quality of phlebotomy services, specimen acceptability rates, etc. This list is neither all-inclusive nor exclusive. The variables chosen should be appropriate to the laboratory's scope of care.
13) Dale JC, Novis DA. Outpatient phlebotomy success and reasons for specimen rejection. A Q-Probes study. Arch Pathol Lab Med. 2002;126:416-419;
GEN.20364 Phase II N/A YES NO
Are postanalytic variables monitored?
NOTE: Postanalytic (i.e., post-examination) variables include all steps in the overall laboratory process between completion of the analytic phase of testing and results receipt by the requesting physician. Examples are accuracy of data transmission across electronic interfaces, reflex testing, turnaround time from test completion to chart posting (paper and/or electronic), and interpretability of reports. This list is neither all-inclusive nor exclusive, providing the variables chosen are appropriate to the laboratory’s scope of care.
1) Novis DA, Dale JC. Morning rounds inpatient test availability. A College of American Pathologists Q-Probes study of 79 860 morning complete blood cell count and electrolyte test results in 367 institutions. Arch Pathol Lab Med. 2000;124:499-503;
4) Jones BA, Novis DA. Nongynecologic cytology turnaround time. A College of American Pathologists Q-Probes study of 180 laboratories. Arch Pathol Lab Med. 2001;125:1279-1284
POINT-OF-CARE TESTING CHECKLIST
POC.03200 Phase II N/A YES NO
Is the POCT program enrolled in the appropriate available graded CAP Surveys or a CAP-approved alternative proficiency testing program for the patient testing performed?
COMMENTARY:
The POCT program must participate in CAP Surveys or a CAP-approved program of graded interlaboratory comparison testing appropriate to the scope of the laboratory, if available. This must include enrollment in surveys with analytes matching those for which the laboratory performs patient testing (e.g., patient whole blood glucose testing requires enrollment in CAP survey WBG or approved equivalent). Laboratories will not be penalized if they are unable to participate in an oversubscribed program.
6) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502
POC.03225 Phase II N/A YES NO
For tests for which CAP does not require PT, does the laboratory at least semiannually 1) participate in external PT, or 2) exercise an alternative performance assessment system for determining the reliability of analytic testing?
NOTE: Appropriate alternative performance assessment procedures may include: participation in ungraded proficiency testing programs, split sample analysis with reference or other laboratories, split samples with an established in-house method, assayed material, regional pools, clinical validation by chart review, or other suitable and documented means. It is the responsibility of the Laboratory Director to define such alternative performance assessment procedures, as applicable, in accordance with good clinical and scientific laboratory practice.
COMMENTARY:
For analytes where graded proficiency testing is not available, performance must be checked at least semiannually with appropriate procedures such as: participation in ungraded proficiency surveys, split sample analysis with reference or other laboratories, split samples with an established in-house method, assayed material, regional pools, clinical validation by chart review, or other suitable and documented means. It is the responsibility of the Laboratory Director to define such procedures, as applicable, in accordance with good clinical and scientific laboratory practice.
2) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502;
POC.03500 Phase II N/A YES NO
Does the point-of-care testing program have a written QC/QM program?
NOTE: The QM/QC program for POCT must be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, and processing; and accurate result reporting. The program must be capable of detecting problems and identifying opportunities for system improvement. The laboratory must be able to develop plans of corrective/preventive action based on data from its QM system.
COMMENTARY:
The quality control (QC) and quality management (QM) program in POCT should be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, and processing; and accurate result reporting. The program must be capable of detecting problems and identifying opportunities for system improvement. The POCT program must be able to develop plans of corrective/preventive action based on data from its QM system.
Before patient results are reported, QC data must be judged acceptable. The Laboratory Director or designee must review QC data at least monthly. Beyond these specific requirements, a laboratory may (optionally) perform more frequent review at intervals that it determines appropriate. Because of the many variables across laboratories, the CAP makes no specific recommendations on the frequency of any additional review of QC data.
5) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502
POC.08800 Phase II N/A YES NO
For QUANTITATIVE tests, are control materials at more than one concentration (level) used for all tests at least daily?
NOTE: For coagulation tests under CLIA 88, 2 different levels of control material are required during each 8 hours of patient testing, and each time there is a change in reagents. For blood gas testing under CLIA 88, a minimum of 1 quality control specimen for pH, pCO2, and pO2 is required during each 8 hours of patient testing.
COMMENTARY:
For quantitative tests, an appropriate quality control (QC) system must be in place.
The daily use of 2 levels of instrument and/or electronic controls as the only QC system is acceptable only for unmodified test systems cleared by the FDA and classified under CLIA 88 as “waived” or “moderate complexity.” The laboratory is expected to provide documentation of its validation of all instrument reagent systems for which daily controls are limited to instrument and/or electronic controls. This documentation must include the federal complexity classification of the testing system and data showing that calibration status is monitored.
6) Novis DA, Jones BA. Interinstitutional comparison of bedside glucose monitoring. Characteristics, accuracy performance, and quality control documentation: a College of American Pathologists Q Probes study of bedside glucose monitoring performed in 226 small hospitals. Arch Pathol Lab Med. 1998;122:495-502
TRANSFUSION MEDICINE CHECKLIST
TRM.20000 Phase II N/A YES NO
Does the transfusion medicine section have a written quality management/quality control (QM/QC) program?
NOTE: The QM/QC program in the transfusion medicine section must be clearly defined and documented. The program must ensure quality throughout the preanalytic, analytic, and post-analytic (reporting) phases of testing, including patient identification and preparation; specimen collection, identification, preservation, transportation, and processing; and accurate, timely result reporting. The program must be capable of detecting problems in the laboratory’s systems and identifying opportunities for system improvement. The laboratory must be able to develop plans of corrective/preventive action based on data from its QM system.
All QM questions in the Laboratory General Checklist pertain to the transfusion medicine section.
9) Novis DA, et al. Quality indicators of blood utilization. Three College of American Pathologists Q-Probes studies of 12 288 404 red blood cell units in 1639 hospitals. Arch Pathol Lab Med. 2002;126:150-156;
10) Novis DA, et al. Quality indicators of fresh frozen plasma and platelet utilization. Three College of American Pathologists Q-probes studies of 8 981 796 units of fresh frozen plasma and platelets in 1639 hospitals. Arch Pathol Lab Med. 2002;126:527-532;
11) Novis DA, et al. Operating room blood delivery turnaround time. A College of American Pathologists Q-Probes study of 12 647 units of blood components in 466 institutions. Arch Pathol Lab Med. 2002;126:909-914.
CYTOPATHOLOGY CHECKLIST
CYP.00800 Phase II N/A YES NO
Is there a clearly defined and documented quality management program in cytopathology?
NOTE: Laboratories should consistently review activities and monitor their effectiveness in improving performance. Each laboratory should design a program that meets its needs and conforms to appropriate regulatory and accreditation standards.
6) Jones BA, Novis DA. Cervical biopsy-cytology correlation. A College of American Pathologists Q-Probes study of 22 439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523-531;
CYP.07569 Phase II N/A YES NO
Is an effort made to correlate gynecologic cytopathology findings with available clinical information?
NOTE: Methods of clinical correlation should be documented in the laboratory procedure manual, and selected reports can be reviewed to confirm practice. Possible mechanisms may include: focused rescreening of cases based on clinical history, history of bleeding, or previous abnormality; correlation of glandular cells with hysterectomy status, age of patient, and last menstrual period; review of previous or current biopsy material. Documentation of clinical correlation may include policies, problem logs with resolution, or notes in reports.
COMMENTARY:
An effort must be made to correlate gynecologic cytopathology findings with available clinical information.
3) Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology. A College of American Pathologists Q-Probes study of 16 132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124:665-671.
CYP.07690 Phase I N/A YES NO
Are 90% of reports on routine non-gynecologic cytology cases completed within 2 working days of receipt by the laboratory performing the evaluation?
NOTE: This question is primarily concerned with the majority of routine specimens, and applies to all laboratories. Longer reporting times may be allowed for specimens requiring special processing or staining (e.g., immunohistochemistry or other molecular analysis), or for screening (as opposed to diagnostic) specimens (for example, urines). If the laboratory has certain classes of specimens, patient types, etc., for which longer turnaround times are clinically acceptable, these must be identified, together with reasonable target reporting times, for Inspector review. Documentation may consist of continuous monitoring of data or periodic auditing of reports by the laboratory. In lieu of this documentation, the Inspector may audit sufficient reports to confirm turnaround time.
Jones BA, Novis DA. Nongynecologic cytology turnaround time. A College of American Pathologists Q-Probes study of 180 laboratories. Arch Pathol Lab Med. 2001;125:1279-1284.
LIMITED SERVICE LABORATORY CHECKLIST
LSV.37050 Phase II N/A YES NO
Are routine and STAT results available within a reasonable time?
NOTE: A reasonable time for routine daily service, assuming receipt or collection of specimen in the morning, is 4 to 8 hours. Emergency or STAT results that do not require additional verification procedures should be reported within 1 hour after specimen receipt in the laboratory.
COMMENTARY:
Routine and stat results must be available within a reasonable time. A reasonable time for routine daily service, assuming receipt or collection of specimen in the morning, is 4 to 8 hours. Emergency or stat results that do not require additional verification procedures should be reported within 1 hour after specimen receipt in the laboratory.
2) Steindel SJ, Novis DA. Using outlier events to monitor test turnaround time. A College of American Pathologists Q-Probes study in 496 laboratories. Arch Pathol Lab Med. 1999;123:607-614;
HEMATOLOGY – COAGULATION CHECKLIST
HEM.23150 Phase II N/A YES NO
Are routine and STAT results available within a reasonable time?
NOTE: A reasonable time for routine daily service, assuming receipt or collection of specimen in the morning, is 4-8 hours. For common hematology and coagulation tests, emergency or STAT results that do not require additional verification procedures should be reported within 1 hour after specimen receipt in the laboratory.
Victoria Stagg Elliott. Generation gaps: Managing a multigenerational staff. AMA News June 21, 2010. http://www.ama-assn.org/amednews/2010/06/14/bisa0614.htm
Pepper, Leslie. Can you trust your lab results? Good Housekeeping. July 2007. Pages 45-50
Landro, Laura. Hospitals Move to Cut Dangerous Lab Errors. Wall Street Journal. June 14, 2006. Pages D1 and D11.
Parham, Sue. Feature Story: Hand stand–a look at manual blood smear reviews. CAP Today. April 2005.
Neither of these numbers surprised Dr. Wilkinson, who, based on anecdotal information he collected before the study was launched, suspected that labs were performing a significant number of manual reviews. Still, he says, the findings provide useful benchmark data for the clinical laboratory community. “Not that the median number of manual peripheral blood smear reviews being performed by labs is anything magic, but if you are a large lab and you are reviewing 50 to 60 percent of the peripheral smears, and you can see that half the people in the country are reviewing less than 26 percent of them, then maybe you are doing too many,” Dr. Wilkinson says. “On the other hand, if your review rate is five percent, you may be doing too few.”
Hospitals that have criteria that limit how often they review a smear manually tend to have a lower rate of review. “However, those criteria must be hospital-driven,” Dr. Ben-Ezra says. “The number of manual reviews a hospital performs ultimately depends on the needs of the clinical staff, which in turn depends on the complexity mix of the patients seen at that particular institution.”
How can labs use these data to increase their efficiency? If a lab has a very high rate of manual differential counts, the less labor-intensive manual scans may be able to be substituted for the more labor- intensive manual differential counts, say the study’s authors. Dr. Novis says some laboratories may be able to use the study’s information on thresholds to begin a discussion about adjusting the criteria they are using for what triggers a manual blood smear review.
In addition, he says, if labs want to make changes, they should find out what their customers want from manual reviews. “I suspect there may be a disconnect between the users and the providers of this service, given that our data indicate that only 3.5 percent of the manual diffs or screens were done at the request of the doctors.” Most of them were prompted within the labs by flags on the instruments. “It may be that if the flags on the instruments were adjusted a little bit, they could save labs some time and overhead, without adversely affecting patient care,” Dr. Novis says.
If a lab is determined to find its optimal rate of review, it should first define what optimal means. “You’d need to go to providers and determine under what conditions they absolutely need to have a manual smear review performed, and work backwards from there,” Dr. Novis says. “You’d have to correlate your manual review rates with some kind of clinical outcome. For example, if certain patients benefited from a review, you would need to figure out what that benefit might be and then measure it and work backwards.”
More than 36 percent of the study’s participants reported that when a manual peripheral smear review was done, useful information was discovered. This suggests that in many cases the manual reviews uncover something that the automated instruments might have missed, but there’s no way to know what that information was, why it might have been useful, and who found it useful, Dr. Ben-Ezra says. “We don’t have a handle on the type of information that was gleaned, and consequently we don’t know whether that median smear review rate of 26 percent is really worth the additional effort.”
These are the types of questions laboratories will grapple with as they use the Q-PROBES data to guide their efforts to boost efficiency, and it may take another study to generate firm answers. “If this Q-PROBES is repeated, perhaps the next thing would be to ask both technologists and physicians if they learned new information from performing the manual review, and if there is a disconnect there, what it is and why it’s happening,” Dr. Novis says. “This information alone could allow you to go back and reset the flags on your instruments and cut down the number of manual reviews you’re doing.”
In an upcoming issue of the Archives of Pathology & Laboratory Medicine, the study’s authors will discuss the findings in detail, as well as the activities of some laboratories that are attempting to recommend certain triggers for manual review. “These triggers are not based on outcome data or evidence, so more work needs to be done in this area,” Dr. Novis says. In the meantime, labs can use the new Q-PROBES data to make changes. “The idea is to cut your overhead and increase quality at the same time, and believe it or not, it is possible,” he says.
Summary of Projects
CONSULTING
Dr. Novis assists clients directly, and as a subcontractor for other consulting companies, in providing laboratory and pathology services. Dr. Novis has:
For hospital administrators:
For private laboratories:
For pathology organizations and support vendors:
For physicians:
MEDICAL PRACTICE, MANAGEMENT, ADMINISTRATION
Courtagen Life Sciences, Woburn, MA.
Since 2011, Dr. Novis has been the CLIA Laboratory Director of Courtagen Life Sciences, a privately held clinical laboratory that uses next-generation sequencing (NGS) technology to diagnose mitochondrial disease in children and adults. Dr. Novis has:
Oxford Diagnostic Laboratories (ODL), Marlborough, MA
Since 2008, Dr. Novis has been the Medical Director of ODL, a privately held clinical laboratory that performs the TSpot-TB test, a cutting-edge blood test for the diagnosis of tuberculosis that is replacing the antiquated TB skin test. Dr. Novis has:
Young Novis PA (YNPA)
For 25 years, YNPA provided anatomic and clinical pathology services to two community hospitals, the University of New Hampshire Student Health Center laboratory, and Path Lab Inc., a regional private laboratory. As managing partner of YNPA, Dr. Novis:
Northeast New England Pathology Associates (NENEPA)
For 10 years, NENEPA, a consortium of four pathology practices, provided anatomic and clinical pathology services to regional and national laboratories in the Seacoast region of New Hampshire and Southern Maine. As founding partner and President, Dr. Novis:
Physicians Professional Management Company (PPMC) www.ppmcbilling.com
PPMC is a physician billing and practice management company serving office-based and hospital practices in Massachusetts, Maine, and New Hampshire. As Founding Partner and Director, Dr. Novis participated in:
QUALITY
College of American Pathologists (CAP) www.cap.org
The CAP is the professional organization of pathologists. Dr. Novis has served as Vice Chair of the Quality Practices Committee and is a laboratory inspector for the CAP’s Laboratory Accreditation Program.
Wentworth Douglass Hospital (WDH) www.wdhospital.com
WDH is the largest of five acute care community hospitals serving a population of 100,000 people residing in the Seacoast region of New Hampshire and Southern Maine.
Advised hospital Performance Improvement Committee; work cited for excellence by the JCAHO.
HOSPITAL GOVERNANCE AND PLANNING
Wentworth Douglass Hospital (WDH) www.wdhospital.com
WDH is the largest of five acute care community hospitals serving a population of 100,000 people residing in the Seacoast region of New Hampshire and Southern Maine. WDH maintains a full service cancer diagnostic and treatment facility and owns The Works Family Health and Fitness Center, one of the largest health and fitness centers in New Hampshire. For 10 years, Dr. Novis was a Hospital Trustee, serving on the Strategic Planning and Community Benefits Committees. He was Chairman of the Board of The Works.
PROFESSIONAL ACTIVITIES
College of American Pathologists (CAP) www.cap.org; Clinical Laboratory Management Association (CLMA) www.clma.org; American Society of Cytopathology (ASC) www.cytopathology.org; Northeast Medical Association (NEMA) www.nemaonline.com
Dr. Novis has served on the CAP’s Quality Practices, Education and Cancer Committees and is the New Hampshire Representative to the House of Delegates.
The ASC is the professional organization of pathologists and technologists sub-specializing in the field of cytopathology. Dr. Novis headed the ASC’s task force on membership and was a member of its Strategic Planning Committee.
NEMA is a multi-specialty medical society devoted to studying advances in outdoor medicine and sports-related injuries. Dr. Novis has served as NEMA’s Secretary-Treasurer and President.
As a member of these organizations, Dr. Novis has: