
Why you never really validate your analytical method unless you use the total error approach (part…


Opinion

Part I: Concept

Image from Pixnio.

By Thomas de Marchin (Senior Manager Statistics and Data Sciences at Pharmalex), Milana Filatenkova (Manager Statistics and Data Sciences at Pharmalex) and Eric Rozet (Director Statistics and Data Sciences at Pharmalex)

Consistent and efficient use of any analytical procedure requires knowledge of its reliability prior to use. It is therefore necessary for each laboratory to validate its analytical methods. Validation is not only required by regulatory authorities [ICH, FDA, GxP] and to obtain accreditation [ISO 17025]; it is also an essential step prior to the routine use of the method. The role of analytical method validation is to provide confidence in laboratory-generated results that will later be used to make critical decisions.

The validation must give confidence that each future measurement generated in routine use will be sufficiently close to the true value. This exercise is traditionally performed by assessing different characteristics of method performance, such as Trueness (also, confusingly, referred to as Accuracy in ICH Q2), Precision (Repeatability and Intermediate Precision), Specificity, Detection Limit, Quantitation Limit, Linearity, Range and Robustness. To determine whether a method is valid, the estimates of these parameters obtained in validation experiments are compared to pre-defined acceptance criteria.

Here, we will focus on the concepts of Trueness and Precision. Analytical errors are commonly divided into two components (Figure 1): random error (herein referred to as "precision") and systematic error (herein referred to as "trueness"). These two elements of the analytical error can be estimated from several repeated measurements of the same sample. It is worth noting that precision measured on multiple independent series of repeated measurements can be further split into repeatability and series-to-series variability; their combination is defined as "intermediate precision". Precision is calculated as the standard deviation (SD) of the repeated measurements, and trueness is computed as the difference between the average of the repeats (Xm) and a reference value (µT). In practice, the metrics most often reported in validation for these two types of error are the %CV (SD/µT × 100) for precision and the relative bias ((Xm − µT)/µT × 100) for trueness.

Figure 1: Schematic representation of the analytical error components; Orange arrow: systematic error or bias; Green arrows: random error or standard deviation; Red arrow: total error. Xm is the average of the results (red dots) and µT is the true value of the sample. Image by Author.
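The two metrics above are straightforward to compute. The following is a minimal sketch in Python with made-up numbers (the function name and data are illustrative, not part of the article's methodology):

```python
import statistics

def precision_and_trueness(measurements, true_value):
    """Estimate %CV (precision) and relative bias (trueness)
    from repeated measurements of a sample with known true value µT."""
    xm = statistics.mean(measurements)       # average of the repeats (Xm)
    sd = statistics.stdev(measurements)      # standard deviation of the repeats
    cv_pct = sd / true_value * 100           # %CV = SD / µT * 100
    rel_bias_pct = (xm - true_value) / true_value * 100  # (Xm - µT) / µT * 100
    return cv_pct, rel_bias_pct

# Example: six repeated measurements of a sample whose true value is 100
repeats = [98.2, 101.5, 99.7, 100.9, 97.8, 100.3]
cv, bias = precision_and_trueness(repeats, 100.0)
print(f"%CV = {cv:.2f}%, relative bias = {bias:.2f}%")
```

Note that each metric taken alone says nothing about how far a single future measurement may fall from the true value; that is the point developed next.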

As mentioned above, the common practice is to compare the metrics obtained during the validation to some pre-defined acceptance criteria (for example, 15% CV for precision and 10% relative bias for trueness) to determine whether the method is valid or not. This is the so-called "descriptive" approach. But hold on… Let’s come back to the initial objective of the validation: the validation must give confidence that each future measurement made in routine use will be close to the true value. This statement concerns a single future measurement, not a mean or dispersion of repeated measurements. As illustrated in Figure 1, a single measurement depends on its total analytical error (herein also referred to as accuracy), i.e. the simultaneous combination of the systematic and random parts of the error. Therefore, the impact of these two parts taken individually is irrelevant. Indeed, it is not important whether an analytical method has poor trueness or poor precision individually, as long as the combination of both error components is acceptable.

Since assessing precision and trueness separately and comparing them to pre-defined criteria doesn’t answer the question posed by validation, a procedure should instead be qualified with regard to its total error. The analytical method's performance may be regarded as acceptable if it is highly likely ("guaranteed") that the difference between every future measurement (Xi) of a sample and its "true value" (µT) falls inside the acceptance limits predefined by the analyst. The notion of a "good analytical procedure" with a known risk can be translated into the following equation:

P(|Xi − µT| < λ) ≥ β

where P is the probability of any future result Xi falling inside the acceptance limits λ fixed a priori by the analyst according to the objectives of the method (for example, ±30%), and β is the minimum quality level (let’s say 0.95).
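To make this criterion concrete, here is a minimal sketch that plugs estimated bias and precision into a normal model for the relative error of a future measurement and evaluates the probability above. This is an assumption-laden illustration: it takes normality for granted and, unlike the β-expectation tolerance intervals used in the accuracy profile methodology, it ignores the uncertainty of the bias and precision estimates themselves.

```python
from statistics import NormalDist

def prob_within_limits(rel_bias_pct, cv_pct, lam_pct):
    """Plug-in estimate of P(-λ <= relative error <= λ), assuming the
    relative error of a future measurement is Normal(bias, CV)."""
    dist = NormalDist(mu=rel_bias_pct, sigma=cv_pct)
    return dist.cdf(lam_pct) - dist.cdf(-lam_pct)

# A method with 2% relative bias and 5% CV, acceptance limits λ = ±15%:
p = prob_within_limits(2.0, 5.0, 15.0)
print(f"P = {p:.4f}")  # compare to the minimum quality level β, e.g. 0.95
```

Under these numbers the probability exceeds 0.95, so the method would be declared acceptable even though neither the bias nor the CV is zero, which is exactly the point of judging the two error components jointly.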

But how can this probability be represented, and how can the decision be made against the criteria? The accuracy profile may serve as an adequate rule for this decision (Hubert et al. 2004; Hubert, Nguyen-Huu, Boulanger, Chapuzet, Chiap, et al. 2007; Hubert, Nguyen-Huu, Boulanger, Chapuzet, Cohen, et al. 2007). Figure 2 shows an example of an accuracy profile. For each concentration level, a prediction interval (also referred to as a beta-expectation tolerance interval) is computed to evaluate the expected relative error range for 95% of future measurements. This interval is computed from the available estimates of the bias and precision of the analytical procedure. The lower limits of the prediction intervals are then connected, as are the upper limits. The green area in Figure 2 describes the dosage interval in which the procedure is able to produce measurements with a known accuracy and a risk level fixed by the analyst. If, for example, an acceptable risk level of 5% is preset by the analyst, this validation methodology will guarantee that, on average, 95% of the future results given by the analytical method will fall within the acceptance limits fixed according to the requirements (e.g. 1% or 2% for bulk, 5% for pharmaceutical specialties, 15% in bioanalysis, …).

Figure 2: Illustration of the accuracy profile as decision tool. In this example, 3 series of measurements were performed at 5 concentration levels. The Relative error of each measurement are represented by the points. The area between the two dotted blue lines represents the prediction interval where we expect 95% of future measurements. The valid dosing range is represented by the light green area. This figure was generated using the Enoval software (https://www.pharmalex.com/enoval).

If, as illustrated in Figure 2, a subsection of the accuracy profile falls outside the acceptance limits, then new limits of quantification are to be defined and, consequently, a new dosage interval. Figure 2 shows these new limits, the ULOQ (upper limit of quantification) and the LLOQ (lower limit of quantification), which are in perfect agreement with the definition of this criterion, i.e. the largest and smallest quantities of the substance to be analyzed that can be measured with a defined accuracy (trueness + precision), respectively.

The use of the accuracy profile as a single decision tool not only reconciles the objective of the validation with that of the analytical method, but also makes it possible to visually grasp the capacity of the procedure to fulfill its analytical objective [Hubert et al. 2004; Hubert, Nguyen-Huu, Boulanger, Chapuzet, Chiap, et al. 2007; Hubert, Nguyen-Huu, Boulanger, Chapuzet, Cohen, et al. 2007]. This last point becomes even more critical with the new analytical Quality by Design (aQbD) and analytical procedure lifecycle concepts developed in the forthcoming USP 1220 and ICH Q14. In order to demonstrate that the analytical procedure is fit for its intended purpose, USP 1220 states that the validation criteria should be aligned with product and process specifications. In this context, the total error approach greatly facilitates the interpretation of the method's performance in the context of its use. Indeed, the end user of the results associates their quality with their distance to the true value, i.e. the total error, rather than with their dispersion or bias individually. It is also worth mentioning that the use of prediction intervals is proposed in USP 1210 to evaluate whether an analytical procedure is fit for its intended purpose.

In conclusion, we have seen that the total error approach is an adequate approach that meets the objective of validation and demonstrates that the method is fit for purpose. It is a key tool for implementing the aQbD and analytical procedure lifecycle concepts.

In Part II, we will see that the total error approach also has strong advantages over the traditional validation approach in terms of reducing business and consumer risks.

Bibliography

USP General Chapter 1210, Statistical Tools for Procedure Validation.

USP Draft General Chapter 1220, Analytical Procedure Lifecycle.

Hubert, Ph., J. J. Nguyen-Huu, B. Boulanger, E. Chapuzet, P. Chiap, N. Cohen, P. A. Compagnon, W. Dewé, M. Feinberg, M. Lallier, M. Laurentie, N. Mercier, G. Muzard, C. Nivet, and L. Valat. 2004. "Harmonization of Strategies for the Validation of Quantitative Analytical Procedures: A SFSTP Proposal – Part I." Journal of Pharmaceutical and Biomedical Analysis 36(3):579–86. doi: 10.1016/j.jpba.2004.07.027.

Hubert, Ph., J. J. Nguyen-Huu, B. Boulanger, E. Chapuzet, P. Chiap, N. Cohen, P. A. Compagnon, W. Dewé, M. Feinberg, M. Lallier, M. Laurentie, N. Mercier, G. Muzard, C. Nivet, L. Valat, and E. Rozet. 2007. "Harmonization of Strategies for the Validation of Quantitative Analytical Procedures: A SFSTP Proposal – Part II." Journal of Pharmaceutical and Biomedical Analysis 45(1):70–81. doi: 10.1016/j.jpba.2007.06.013.

Hubert, Ph., J. J. Nguyen-Huu, B. Boulanger, E. Chapuzet, N. Cohen, P. A. Compagnon, W. Dewé, M. Feinberg, M. Laurentie, N. Mercier, G. Muzard, L. Valat, and E. Rozet. 2007. "Harmonization of Strategies for the Validation of Quantitative Analytical Procedures: A SFSTP Proposal–Part III." Journal of Pharmaceutical and Biomedical Analysis 45(1):82–96. doi: 10.1016/j.jpba.2007.06.032.
