Data Integrity Considerations for Conventional and Rapid Microbiological Methods

Data integrity refers to maintaining and assuring the accuracy and consistency of data over its entire life-cycle, and it is a critical aspect of the design, implementation, and use of any system that stores, processes, or retrieves data. Data integrity is a key regulatory concern, and to aid those working in pharmaceuticals and healthcare, guidance documents have been produced by the U.S. Food and Drug Administration (FDA) and the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) (1, 2). These call for the ALCOA approach to be followed: data must be attributable, legible (and permanent), contemporaneous, original, and accurate.

Data integrity applies to all production processes and laboratories, including microbiology. That microbiological practices are a potential concern is evidenced in several FDA warning letters, especially in relation to sample handling and plate reading. These concerns extend both to conventional methods of testing and to rapid methods, and to methods reliant upon paper documentation as well as those using computerized systems (3).

Even with the growing adoption of rapid methods, microbiology laboratories handle a large volume of data. Unlike analytical laboratories, however, the data are more often an even mix of quantitative and qualitative results. Examples include the result of a settle plate exposed during the environmental monitoring of a cleanroom; the endotoxin content of a water sample; the weight recorded when a raw material is weighed for the pharmacopeial Microbial Limits Test; and so on.

Despite its current high profile, data integrity in the pharmaceutical microbiology laboratory has been afforded relatively little specific attention in regulatory guidance. This article looks at two data integrity concerns within the microbiology laboratory, one relating to conventional methods and one to rapid methods, and considers some of the steps that can be taken to address identified weaknesses.

Conventional Methods
Environmental monitoring using conventional methods, such as air sampling, settle plates, and contact plates, contributes to the overall assessment that provides assurance of finished product quality. However, traditional culture-based microbiological methods possess inherent and unavoidable variability, and as such they can lead to erroneous conclusions (4). These concerns have been raised in FDA warning letters. To take two examples: “plate counting, where colony forming units are miscounted” and “missing samples, such as environmental monitoring samples not being taken or dropped in transit to an incubator.” While the latter reflects carelessness, colony counting errors can occur where growth is confluent.

Some regulators have requested secondary checks for plate reading. This demand is erroneous and unnecessary: personnel are either competent at plate reading or they are not. Measures to reduce colony counting errors include adopting rapid methods; good training in testing (to ensure that colony numbers fall within the countable range for the plate size); and effective training in reading plates, including recognizing phenomena such as merged colonies, spreading organisms, and diminutive colonies that require reading under magnification and with white light.
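To illustrate the countable-range point, the following is a minimal Python sketch. The 25 to 250 CFU range shown is a common convention for a 90 mm plate and is an assumption here, as are the function and variable names; the range actually applied should be the one validated for the plate size and method concerned.

# Hypothetical sketch: flag plate counts that fall outside an assumed countable range.
# The 25-250 CFU range is a common convention for 90 mm plates; adjust it to the
# range validated for the plate size and method in use.
COUNTABLE_RANGE = (25, 250)

def assess_plate_count(cfu: int, countable_range=COUNTABLE_RANGE) -> str:
    """Return a simple disposition for a colony count."""
    low, high = countable_range
    if cfu < low:
        return "below countable range - report as an estimated count"
    if cfu > high:
        return "above countable range - consider 'too numerous to count' or further dilution"
    return "within countable range - report the count"

if __name__ == "__main__":
    for count in (4, 87, 412):
        print(count, "->", assess_plate_count(count))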

Rapid Microbiological Methods
There is an array of different rapid microbiological methods, each with its own technology and testing protocol. Data integrity concerns arise at the design, validation, and operation stages. Taking validation, samples need to be representative of what will be tested using the instrument, and they need to be tested multiple times and by different technicians in order to demonstrate repeatability and robustness. Aspects that give validity to the result, such as the limit of detection and the limit of quantification (either directly in relation to microorganisms or indirectly through the monitoring of biological events), need to be established.

In terms of operations, data integrity extends to data capture, retention, archiving, and processing. Most rapid methods use computerized systems, and these systems should be designed in a way that encourages compliance with the principles of data integrity. Examples include multi-level password control; user access rights that prevent (or audit trail) data amendments; measures to prevent user access to system clocks; automated data capture; and data backup.
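As a simple illustration of the audit trail and access control principles above, the following is a minimal Python sketch; the class, field, and role names are hypothetical and are not drawn from any particular rapid-method system. It records who changed what and when in an append-only trail and restricts amendments to authorized roles.

# Hypothetical sketch of an append-only audit trail with basic role checks.
# Names and roles are illustrative only; a real system would add secure
# authentication, tamper-evident storage, and validated backup.
from dataclasses import dataclass, field
from datetime import datetime, timezone

AUTHORIZED_TO_AMEND = {"supervisor", "qa"}

@dataclass
class AuditEntry:
    timestamp: str
    user: str
    action: str
    detail: str

@dataclass
class ResultRecord:
    sample_id: str
    value: float
    audit_trail: list = field(default_factory=list)

    def _log(self, user: str, action: str, detail: str) -> None:
        self.audit_trail.append(
            AuditEntry(datetime.now(timezone.utc).isoformat(), user, action, detail)
        )

    def amend(self, user: str, role: str, new_value: float, reason: str) -> None:
        if role not in AUTHORIZED_TO_AMEND:
            self._log(user, "amendment rejected", f"role '{role}' not authorized")
            raise PermissionError("user not authorized to amend results")
        old = self.value
        self.value = new_value
        self._log(user, "amendment", f"{old} -> {new_value}; reason: {reason}")

record = ResultRecord("EM-2024-001", 12.0)
record.amend("jsmith", "qa", 13.0, "transcription error corrected")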

When carrying out reviews of rapid microbiological methods, five key data integrity questions are:

1. Is electronic data available?
2. Is electronic data reviewed?
3. Is metadata (audit trails) reviewed regularly?
4. Is there clear segregation of duties?
5. Has the system been validated for its intended use?

A satisfactory answer should be available for each of these five key questions.

Summary
Data integrity is applicable to microbiology, both conventional and rapid methods, as with every other part of the pharmaceutical operation. This short article has highlighted two areas that microbiologists need to be mindful of in terms of plate counting and computerized systems. Data integrity is an important subject and a regulatory ‘hot topic’; the ideas presented here and wider issues require careful attention by the microbiologist.

References

1. FDA (2016) Data Integrity and Compliance With CGMP, Draft Guidance for Industry, April 2016, U.S. Department of Health and Human Services, Food and Drug Administration, Washington

2. MHRA (2015) MHRA GMP Data Integrity Definitions and Guidance for Industry, March 2015, Medicines and Healthcare products Regulatory Agency, London, UK

3. Sandle, T. (2016) Data Integrity Considerations for the Pharmaceutical Microbiology Laboratory, Journal of GXP Compliance, 20 (6): 1-12

4. Tidswell, E. C. and Sandle, T. (2017) Microbiological Test Data – Assuring Data Integrity, PDA Journal of Pharmaceutical Science and Technology, doi:10.5731/pdajpst.2017.008151

Overview of a CAPA program

Introduction

Our objective is to identify problems in a timely manner, to correct them, and to prevent their recurrence. As a corollary to this, we want to foresee potential issues and ward them off before they become real. The ultimate goal is for the laboratory operation to become self-correcting.

To this end, management has put into place a corrective and preventative action (CAPA) system. Any CAPA system must work effectively, and its “effectiveness” must be measured. It must also work efficiently: the CAPA program itself should not become a problem. The yoke of the CAPA program should be light and easy to bear.

Management Commitment to Quality

Commercial off-the-shelf software is available to facilitate the administration of the CAPA program and is designed around many of the common industry needs (2). Such software programs help one to log and track CAPAs, responsible persons, effectiveness checks, completion dates, etc. and have many desirable attributes. The essential ingredients for implementing an effective CAPA program are, however, not the tools used. Rather, they are (1) knowing what you want the CAPA program to do and (2) having an environment (quality culture) in which to install the program that will support and promote it and allow it to change. This is what we mean by “Management’s Commitment to Quality”. In this way, the CAPA program is no different from any other quality system in place at the compliant facility.

Management puts the process into place, funds it, oversees it, and tends to it. It is important to note that it is Management’s commitment, and not the Quality Assurance group’s commitment, we are talking about. Management has the resources, vision, and a vested interest, and puts a Quality Assurance group into place as one, albeit important, “cog in the wheel”.

Roles and Responsibilities

Understanding “who is responsible for what” in the CAPA program is critical, and our approach is simple: Management has the resources and sets the goals and therefore must drive the CAPA program. To do this, in part, Management puts a Quality Assurance program in place. Quality Assurance performs audits, finds problems, and reports them to Management. Management must “own” the problems as well as their solutions. It is not the job of the QA group to “fix” the problems (although they may make suggestions), since to do so would create a conflict of interest, much like the fox guarding the chicken coop. Rather, an independent QA group plus a strong and actively engaged Management work together to bring about the desired improvements.

Management and QA work together to approve the written procedures (SOPs) to be followed in the conduct of the CAPA program, including how the program is to be audited, how CAPA data are to be shared across the company, how CAPAs are to be reported to Management, approved, and ultimately “closed” and so on.

To effectively implement the CAPA program, everyone must “buy-in.” For each corrective and preventative action (CAPA), Management identifies the “responsible” person or persons who will work to close-out the CAPA. Such individuals typically report to Management, have an intimate knowledge of the problem that has been cited for CAPA (they may in fact have caused it) and can significantly contribute to its solution. In the “no-fault” culture we promote, we are not looking to blame. The responsible person plays a key role in helping to determine the root cause and will take actions to ensure improvement. They do not do this alone.

Another prominent role is that of CAPA coordinator. This role is two-fold: (1) to coordinate the activities required to ultimately close out (finalize) the CAPA; and (2) to help manage and improve the CAPA program. The position is noted on the organizational chart, has an associated job description, and is tied to specific job training that is documented in the employee’s training binder. The task is shared by two individuals: one from the lab side (typically at the group leader level) and the other from the QA side (typically an experienced auditor).

The CAPA coordinators conduct regularly scheduled meetings (called CAPA meetings) where they invite responsible persons to enter into a brainstorming session in order to determine root causes and corrective and preventative actions. Present also is an experienced QA auditor, who determines an “effectiveness check” on the CAPA, communicates this to the group, and ensures that such checks are done according to schedule.

All employees must know that they are an integral part of the CAPA program and that they will be treated fairly. Having a mechanism in place by which any employee can contest the legitimacy of any CAPA finding is a cornerstone.

Leadership

Managing the CAPA program requires leadership skills. The approach must be risk-based while simultaneously building credibility with employees and outside auditors. This is done by (1) making decisions based on complete and good data; and (2) following up with a high level of consistency and transparency. It is essential that Leadership provide the program with high visibility, while protecting client confidentiality. The CAPA program must function as a “performance driver,” using terms, definitions, and actions that are clear-cut for all employees to understand and that facilitate meaningful training. Managing is just one part, albeit an important part, of implementing an effective CAPA program.

The Quality Systems Approach

The quality systems approach to organizing and managing an operation that will comply with regulatory requirements is standard practice in the Pharmaceutical and Medical Device industries, and is well documented in the literature (3).

A quality system is a portion of the larger operation that you want to bring under control via a system of management. For example, lab management decides to use only a qualified instrument in the conduct of an experimental study. Next, the decision is made to identify this instrument as “qualified,” to maintain an instrument logbook on it, along with a calibration schedule, etc. They soon find they need to do this for a number of instruments and determine that each instrument qualification shall follow a standard operating procedure and be documented in the same way; they are scaling this operation up. They consider all the details that must be addressed to ensure that laboratory instruments, associated documentation, reference standards, etc. meet all pre-determined criteria for performance, reliability, and compliance with regulations: in short, “Quality.” They call this quality system “Metrology” and hire or train a Metrologist to manage it.

The quality systems together form a platform on which to conduct quality regulated laboratory work.

Written Procedures (SOPs)

Standard Operating Procedures (SOPs) provide a basis for setting expectations, organizing lab and study activities, maintaining qualified equipment and instruments, conducting training, and auditing laboratory and administrative functions (e.g., Document Control). Table 1 provides a listing of some of the key SOPs (by title) in place to support the CAPA program. Having good written procedures in place is critical. The CAPA program generates data used by Management to decide who should be trained on SOPs and how frequently, how to improve, add, or delete written procedures (including the SOP on the CAPA program), and how to improve training on written procedures. Every employee either working in the lab or providing support to the GMP/GLP program must be trained on the CAPA program. Training on procedures that are specific to experimental studies, such as analytical (test) methods and study protocols, or to Metrology, Quality Assurance, or Document Control, may be aimed at employees working in these areas. The old adage “say what you are going to do and then do what you say” is akin to writing good procedures, performing quality audits against these procedures, and taking corrective and preventative actions when such audits reveal a failure to follow these procedures and/or flawed procedures.

Quality Metrics

By “quality metric,” we mean a quantitatively measurable attribute or parameter that we wish to optimize and that reports back on the work product, process, or personnel. Quality metrics often have pre-defined acceptance criteria, target values, or specifications assigned to them. Furthermore, the numerical value assigned to that “specification” will often be set through a program of Quality Risk Management.
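To make the idea concrete, here is a minimal Python sketch of a quality metric carrying a pre-defined acceptance criterion; the metric name, limit, and field names are illustrative assumptions rather than values taken from this article.

# Hypothetical sketch: a quality metric with a pre-defined acceptance criterion.
# The metric name and limit are illustrative; in practice they would be set
# through Quality Risk Management and approved by QA and Management.
from dataclasses import dataclass

@dataclass
class QualityMetric:
    name: str
    description: str
    acceptance_limit: float  # e.g., the maximum allowed count per review period

    def conforms(self, observed_value: float) -> bool:
        """True if the observed value meets the pre-defined criterion."""
        return observed_value <= self.acceptance_limit

protocol_conformance = QualityMetric(
    name="unapproved_protocol_deviations_per_study",
    description="Conformance with the GMP or GLP study protocol",
    acceptance_limit=0,
)
print(protocol_conformance.conforms(0))  # True
print(protocol_conformance.conforms(2))  # False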

Quality metrics are monitored at strategic positions in our work flow. It is not possible in an article of this length to show all the details; however, Figure 4 should suffice to make the point. In Figure 4, we’ve shown a few positions (P1-P7) in the work flow of an experimental study where we find it strategic to take CAPA data specific to the quality metric “conformance with the GMP or GLP study protocol.” Referring to the figure, the study begins with the study protocol and ends with the archiving of all study materials. At P1, the Principal Scientist checks that participating scientists are trained in the protocol; at P2, the Sample Coordinator checks that all samples, test articles, control articles, and standards have been properly received and are stored under the conditions specified in the protocol; at P3, QA checks that analysts are using only those analytical test methods called for in the protocol and that those methods have been properly validated or verified, while the analyst checks that all system suitability results and reportable test results meet any/all specifications called out in the protocol; at P4, QA checks that there were no unapproved deviations from the protocol; at P5, QA checks that any/all CAPAs generated in the conduct of the study have been closed out; at P6, QA checks that the data report has been written in conformance with the protocol; at P7, QA checks that the study has been archived in accordance with the protocol.

Monitoring other quality metrics, such as “proper recording and storage of raw data,” is done similarly and a diagram of the work flow through the lab that is not study-specific would also follow the same logic.

Numerous quality systems are put into place at the laboratory to facilitate hitting the target or optimal value of each quality metric. The CAPA quality system is put into place to identify when the target was not hit, why, who is responsible, and what should be done about it. These concerns translate, in CAPA lingo, to deviation, root cause, responsible person, corrective action, preventative action, and effectiveness check. The effort is made to ensure that the quality metrics used by the lab point to high-level company goals so that they are meaningful. Working together, Quality Assurance and Management define the set of quality metrics or standards to which the lab is held.

Managing CAPAs

Managing CAPAs involves logging, reporting, closing, and trending. The CAPA log is the repository of all the data gathered, mostly by the QA group, on all of the quality metrics over time. The CAPA coordinators conduct the CAPA meetings and keep the CAPA log current. This log is a searchable database, with an audit trail function, that is shared with authorized personnel as a read-only file on the company’s public drive. Such a database permits QA to correlate data, to provide trend analysis across any/all of the company’s quality metrics, and to follow up on the effectiveness of the preventative actions taken.
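For illustration only, a CAPA log entry might carry fields such as those in the following Python sketch; the field names are assumptions, and in practice the log would live in a validated, access-controlled database (commercial CAPA software typically provides this) rather than a simple script.

# Hypothetical sketch of a CAPA log entry and a simple search over the log.
# In practice this would live in a validated, access-controlled database with
# an audit trail, shared read-only with authorized personnel.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CapaLogEntry:
    capa_number: str
    quality_metric: str                      # the metric the finding relates to
    description: str
    responsible_person: str
    root_cause: Optional[str] = None
    corrective_action: Optional[str] = None
    preventative_action: Optional[str] = None
    effectiveness_check: Optional[str] = None
    target_completion: Optional[str] = None  # ISO date
    status: str = "open"

capa_log = [
    CapaLogEntry("CAPA-2024-013", "system_suitability_failures",
                 "Repeated system suitability failures on one test method",
                 responsible_person="Group Leader A"),
]

# Simple query supporting reporting and trend review across the log.
open_capas = [entry for entry in capa_log if entry.status == "open"]
print(len(open_capas), "open CAPA(s)")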

The procedure followed to handle a single CAPA is given in the flow chart of Figure 5. Referring to this figure, problem identification includes assigning a CAPA number and responsible person to the problem, making all required notifications and coordinating the CAPA meeting; determining actions includes root cause analysis and determining the corrective action(s) and preventative action(s); setting the target completion date involves a risk assessment and evaluation of available resources; implementation of actions typically includes going through a change control procedure to ensure that the CAPA does not create more problems; determining an effectiveness check includes setting criteria by which to judge effectiveness; performing the effectiveness check and evaluating “effectiveness” is just that; and closing out the CAPA involves documenting that all changes were made as proposed and all required documentation is appropriately approved and filed.
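The flow just described can be summarized as an ordered sequence of stages. The following minimal Python sketch paraphrases the stages of Figure 5 (the stage names and helper function are assumptions for illustration) and simply refuses to let a CAPA skip a step on its way to closure.

# Hypothetical sketch: the single-CAPA procedure as an ordered sequence of stages.
# Stage names paraphrase the flow described for Figure 5; a real system would also
# capture approvals, change control records, and signatures at each step.
CAPA_STAGES = [
    "problem_identified",           # CAPA number and responsible person assigned
    "actions_determined",           # root cause, corrective and preventative actions
    "target_date_set",              # risk assessment and resource evaluation
    "actions_implemented",          # via change control
    "effectiveness_check_defined",  # criteria for judging effectiveness
    "effectiveness_evaluated",
    "closed",                       # documentation approved and filed
]

def advance(current_stage: str) -> str:
    """Move a CAPA to the next stage, refusing to skip steps."""
    index = CAPA_STAGES.index(current_stage)
    if index == len(CAPA_STAGES) - 1:
        raise ValueError("CAPA is already closed")
    return CAPA_STAGES[index + 1]

stage = "problem_identified"
while stage != "closed":
    stage = advance(stage)
print(stage)  # closed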

A hypothetical example of trending is the following: The QA group has determined from examination of CAPA data collected over a period of time that there has been an increase in the number of chromatographic system suitability failures logged. While failing system suitability is not a deviation from the GMPs, GLPs, or any written procedure used in our lab, a large number of failures is indicative of a problem and a grossly inefficient use of resources, including analyst and instrument time. Metrology records show no indication of a problem with any of the associated instruments over the time period concerned, and closer examination shows that the problem stems from a particular analyst running a particular analytical method, regardless of the instrument used. The root cause is determined to be a lack of understanding of the analytical method by the analyst; the corrective action is to assign the analysis to a more experienced analyst; and the preventative action is to provide additional training to analysts on the test method. The effectiveness check is to track the number of system suitability failures for a period of time to see whether the number associated with this analyst and this method has decreased.
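A minimal Python sketch of the trending step in this hypothetical might count failures by analyst and method to surface the pattern described; the records, names, and method identifiers below are invented for illustration.

# Hypothetical sketch: counting system suitability failures by analyst and method
# over a review period. The records below are invented for illustration.
from collections import Counter

failure_records = [
    {"analyst": "Analyst B", "method": "Assay-001", "instrument": "HPLC-2"},
    {"analyst": "Analyst B", "method": "Assay-001", "instrument": "HPLC-5"},
    {"analyst": "Analyst C", "method": "Assay-003", "instrument": "HPLC-2"},
    {"analyst": "Analyst B", "method": "Assay-001", "instrument": "HPLC-1"},
]

failures_by_analyst_method = Counter(
    (record["analyst"], record["method"]) for record in failure_records
)

for (analyst, method), count in failures_by_analyst_method.most_common():
    print(f"{analyst} / {method}: {count} failure(s)")
# A cluster under one analyst and one method, spread across instruments, points
# away from the instruments and toward method understanding or training.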

In this way, Quality Assurance and Management identify problems, correct them in a timely manner consistent with their risk level, and prevent their recurrence. Trend analysis allows us to foresee potential issues and ward them off before they become real. Having such a CAPA system installed in a strong quality culture ensures that the laboratory operation becomes self-correcting.

Change Control

Implementing corrective and preventative actions necessitates change. It is essential that these changes be accommodated through the company change control program. The guiding principle is akin to the Hippocratic principle of “first, do no harm.” Our quality systems align with well-defined unit operations in the work flow through the laboratory. Thus it is relatively easy to see how a proposed change in one part of the operation causes, or has the potential to cause, an undesirable change in another.

References

1. “CAPA for the FDA-Regulated Industry” by Jose Rodriguez-Perez, ASQ Quality Press, Milwaukee, WI, 2011. (ISBN 978-0-87389-797-6)

2. “How to Set Up a CAPA Program from Scratch” by Gabriela Bodea, Journal of GXP Compliance, April 2007, Volume 11, Number 3.

3. ICH Q10 Pharmaceutical Quality System, Guidance for Industry, April 2009; and 21 Code of Federal Regulations, Part 820: Medical Devices: Current Good Manufacturing Practice Final Rule: Quality System Regulations, 1996.