AAPS PharmSciTech. 2016 April; 17(2): 523–527.
Published online 2015 August 12. doi:  10.1208/s12249-015-0380-3
PMCID: PMC4984887

Process and Method Variability Modeling to Achieve QbD Targets

Abstract

A statistical modeling tool is presented that enables real-time viewing of how changes in method, process, and stability variability/bias impact product acceptance rate. The tool can be used to set and justify specifications. As needed, additional sources of variability/bias can be added to further optimize the tool’s prediction power. The tool can be used to assess each manufacturing run to ensure the process is in control. Aberrant results can then be investigated to see what source of variability/bias may have changed. To enable continuous improvement, the impact of new processes, methods, or technologies can also be addressed and such changes justified.

Electronic supplementary material

The online version of this article (doi:10.1208/s12249-015-0380-3) contains supplementary material, which is available to authorized users.

KEY WORDS: AtP, method variability, modeling, process variability, stability

Health authorities globally require development work that demonstrates adequate understanding of the pharmaceutical manufacturing process. Statistical modeling of the manufacturing process can be used to show this understanding. Such modeling is becoming more of a regulatory expectation for pharmaceutical products, as evident in the FDA Validation Guidance (1), ICH Q8(R2) Guidance (2), and various regulatory presentations (3,4). It is needed to inter-relate the various manufacturing sources of variability and show how changes in any of these affect overall drug substance and drug product quality. In addition, such modeling makes business sense, as it:

  • Reduces the number of out of specification (OOS) events and defects
  • Aids in the ability to define transfer functions that will be used in process control
  • Is a critical link in establishing line of sight from the Target Product Profile (TPP) and Quality Target Product Profile (QTPP) to Critical Quality Attributes (CQAs) to process models and to release assays
  • Reduces development time
  • Ensures patients have a continuous supply of safe, efficacious, and quality product
  • Eliminates post-approval submissions, if the change is within design space
  • Enables continuous improvement

Two main sources of variability are needed to develop a model: variability of the process and variability of analytical methods used to test the product. Understanding each of these separately is not enough. We need to understand how they inter-relate and influence each other. Only then can we ensure product quality. Presented here is a unique tool to show this inter-relationship to set specifications, a control strategy, and enable continuous improvement.

Components of Variation

The overall variability comprises several component sources:

  • Process variability and method variability
  • Method only variability
  • Bias of the process mean
  • Bias of the method
  • Stability

Once these sources of variability are determined, acceptable specifications can be set.

Setting Specifications: Product Acceptance Criteria

Specification setting is a continuous process that needs to be employed throughout the development cycle. It starts early in development with wider specifications, which are then tightened as the sources of variability are reduced. Developing models and tools to visualize and quantify variability will help in setting specifications.

Control Strategy and Continuous Improvement

Once the specifications are set, a control strategy is needed to ensure they are continuously met. This control strategy needs to enable continuous improvement and movement to new technologies. Again, having a modeling tool is essential to meet these goals.

This paper presents examples of how method and process variability can be determined and how these variabilities can be modeled to show their inter-relationships and influence on product acceptance criteria. This model, termed an Accuracy to Precision (AtP) model, helps meet the goals of Quality by Design and, in turn, the ICH Q8(R2) and FDA Validation Guidelines. (The AtP model uses a non-Bayesian approach. Readers interested in Bayesian approaches are directed to references (5–7).)

Case Studies

Determining Method Variability and Bias

Method variability can be determined through a recovery study in which drug substance is spiked at 75, 100, and 125% of target label into separate placebo solutions. The data from such a study are then statistically analyzed. The resultant analysis in Fig. 1 shows a root mean squared error (RMSE) of 1%. The distribution plot of the percent recovery data in Fig. 2 indicates the mean has shifted by 1%; this is the method bias. These values for method variability (1%) and bias (1%) will be used in the AtP model.

Fig. 1
Bivariate fit of percent recovery by percent spiked
Fig. 2
Distribution of recovery data

This method development and validation is the first key step to understanding method variability, which can then be factored out of the overall product variability to determine the process variability.
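The recovery-study analysis above can be sketched in a few lines. The data below are hypothetical (the paper's raw recovery values are not given), and the pooled sample standard deviation is used as a simple stand-in for the regression RMSE of Fig. 1:

```python
import statistics

def recovery_bias_and_variability(recoveries):
    """Estimate method bias and variability from percent-recovery data.

    Bias is the mean recovery minus 100%; variability is the sample
    standard deviation of the recoveries (a stand-in for the RMSE
    when recovery is flat across the spike levels).
    """
    bias = statistics.mean(recoveries) - 100.0
    variability = statistics.stdev(recoveries)
    return bias, variability

# Hypothetical recoveries pooled across the 75, 100, and 125% spike levels
recoveries = [100.0, 102.0, 101.0, 100.0, 102.0, 101.0]
bias, variability = recovery_bias_and_variability(recoveries)
```

For these illustrative data the bias is 1.0% with a variability of about 0.9%, comparable in magnitude to the case-study values of 1% each.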

Note that based on the phase of development, a broader DOE-based validation approach can be performed if new dosage strengths or formulations are anticipated. Such an approach is outlined in the following Method Validation by Design (MVbD) reference (8).

Determining Product Variability and Bias

Overall product variability is typically determined by assaying samples across a manufacturing run at equally spaced time intervals. Figure 3 shows a graphical representation of such data: the overall product mean is 99% of target and the variability is 2%. This overall product variability comprises both the method and process variability, as defined by the following equation:

Overall Product Variability = √(Method Variability² + Process Variability²)
Fig. 3
Bivariate fit of product assay by time

Using this equation and the previously determined method variability, the process variability can be determined, which for this example is 1%.
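Because independent variances add, the standard deviations combine in quadrature, and the method contribution can be factored back out of the overall variability. A minimal sketch (the example values below are illustrative, not the paper's):

```python
import math

def overall_variability(method_sd, process_sd):
    # Independent variances add, so standard deviations combine in quadrature.
    return math.sqrt(method_sd**2 + process_sd**2)

def process_variability(overall_sd, method_sd):
    # Inverse: factor the method contribution out of the overall variability.
    return math.sqrt(overall_sd**2 - method_sd**2)
```

For example, `process_variability(1.5, 1.0)` gives about 1.12%: the process contribution is smaller than a naive subtraction of standard deviations would suggest.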

To determine the process mean, the overall product mean (99%) is corrected for the 1% method bias, equaling 98%.

These values for process variability (1%) and process mean (98%) will be used in the AtP model.

Stability Variability

Another source of variability that can be factored into the model is product stability. For this example, the degradation rate or variability is 0.1% of the acceptance criteria per month. Again, based on the phase of development, this can be determined by Accelerated Stability Modeling (9,10) followed by confirmatory stability studies. For non-linear stability models, the following reference can be used (11).

The Accuracy to Precision Model

The Accuracy to Precision (AtP) model is generated from the variability and bias of the method, the process variability and mean, and, for this example, the product's stability variability. Based on the data set, a normal distribution is assumed for each of the variabilities. (For other data sets, alternative distributions, such as a t-distribution, may be needed and the model adjusted accordingly.) A model is then generated using a prediction profiler, which is available in standard statistical software packages such as JMP. The equation used by the prediction profiler to determine the "% Samples in Specification" is below. This and other equations presented in this paper are written in the JMP format for ease of use.

% Samples in Specification = [1 − Normal Distribution((LSL − (Method Bias + Process Mean (no method bias error))) / √(Method Variability² + Process Variability² + Stability Variability²))] − [1 − Normal Distribution((USL − (Method Bias + Process Mean (no method bias error))) / √(Method Variability² + Process Variability² + Stability Variability²))]
LSL: lower acceptance criterion (95% of label)
USL: upper acceptance criterion (105% of label)
Method bias: mean percent recovery − 100% (1%)
Process mean (no method bias error): mean of the process with the method bias error removed (98%)
Method variability: mean standard deviation (1%)
Process variability: process variability (1%)
Stability variability: stability degradation rate, (% of acceptance criteria)/month (0.1%)

The numbers in parentheses above are the values based on the case study presented.
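The profiler equation can be reproduced outside JMP with only the standard library, using the error function to build the normal CDF. A sketch, with the case-study values plugged in:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (no SciPy required)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pct_samples_in_spec(lsl, usl, method_bias, process_mean,
                        method_var, process_var, stability_var):
    """'% Samples in Specification' per the profiler equation."""
    mean = method_bias + process_mean  # bias-corrected overall mean
    sigma = math.sqrt(method_var**2 + process_var**2 + stability_var**2)
    return 100.0 * (normal_cdf((usl - mean) / sigma)
                    - normal_cdf((lsl - mean) / sigma))

# Case-study inputs: LSL 95%, USL 105%, method bias 1%, process mean 98%,
# method and process variability 1% each, stability variability 0.1%
print(round(pct_samples_in_spec(95, 105, 1, 98, 1, 1, 0.1), 2))  # → 99.76
```

Raising the process variability argument to 2 reproduces the roughly 95.94% acceptance rate discussed for Fig. 5.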

The prediction profiler determines the impact of variability and bias on the overall product acceptance rate, as shown in Fig. 4. For this example, the method and process variability of 1%, method bias of 1%, process mean of 98%, and stability variability of 0.1% predict that 99.76% of samples will be within specification.

Fig. 4
AtP profiler model

Different values for variability and bias can be entered to determine the percent of samples that will be within specification. The advantage of this tool is that it can easily and readily show the impact of changes. For example, if the process variability increased to 2%, the acceptance rate would decrease to 95.94% (Fig. 5). This means that about 4 out of 100 batches would fail specification and need to be rejected or reworked. Depending on the cost of each batch, this could be a significant impact.

Fig. 5
AtP profiler model with 2% process variability

Values for %CV, CpK, K Sigma, and PPM can also be readily calculated based on the percent samples in specifications and the specification limits. Equations used for each of these and the data table used to generate the prediction profiler are included in Attachment 1.
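The paper's exact equations are in Attachment 1, which is not reproduced here; as an illustration, the standard definitions of these capability metrics can be computed directly from the mean, sigma, and specification limits:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def capability_metrics(lsl, usl, mean, sigma):
    """%CV, Cpk, and PPM out of specification (standard definitions)."""
    pct_cv = 100.0 * sigma / mean
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
    in_spec = normal_cdf((usl - mean) / sigma) - normal_cdf((lsl - mean) / sigma)
    ppm = (1.0 - in_spec) * 1e6  # expected defects per million samples
    return pct_cv, cpk, ppm
```

With the case-study mean of 99% and a combined sigma of √2.01 ≈ 1.42%, Cpk comes out near 0.94 and the defect rate near 2,400 PPM, consistent with the 99.76% acceptance rate above.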

Figure 6 shows the relationship between method variability and method bias. This is useful because it indicates what method variability and bias are needed to meet the specifications. As long as the combination of method variability and bias stays within the "white space," the method will be acceptable.

Fig. 6
Method variability to method bias relationship

Control Strategy and Continuous Improvement

Using this AtP model, changes in method and process variability can be monitored, with shifts in the "% Samples in Specification" triggering an investigation as needed.
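Such monitoring can be sketched as a simple screen over per-run estimates: recompute the predicted acceptance rate for each run and flag any run that falls below a chosen target. The run data and the 99% target below are hypothetical:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pct_in_spec(mean, sigma, lsl=95.0, usl=105.0):
    # Predicted '% Samples in Specification' for one run.
    return 100.0 * (normal_cdf((usl - mean) / sigma)
                    - normal_cdf((lsl - mean) / sigma))

def flag_runs(runs, target=99.0):
    """Flag runs whose predicted acceptance rate falls below the target."""
    return [run_id for run_id, mean, sigma in runs
            if pct_in_spec(mean, sigma) < target]

# Hypothetical per-run estimates: (run id, observed mean, combined sigma)
runs = [("run-01", 99.0, 1.42), ("run-02", 99.1, 1.40), ("run-03", 98.5, 2.30)]
print(flag_runs(runs))  # → ['run-03']
```

A flagged run would then be investigated to determine which source of variability or bias has shifted.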

The impact of new analytical technologies and/or formulations is assessed based on their impact on the overall product quality. If the impact on variability still meets the “% Samples in Specification,” then movement to these technologies and/or formulations is justified. This enables continuous improvement for efficiency and quality gains.

This tool can also be used to justify moving to new specifications by showing that the desired "% Samples in Specification" is still met.

As needed, other sources of variability can be added into the AtP model, such as raw materials, process equipment changes, inter-lot variability as well as variability introduced by transferring to different manufacturing and testing sites.

Summary

This paper presents a tool that shows the inter-relationship among method, process, and stability variability/bias and how they impact the percentage of samples expected to be within specification for each manufacturing process run. The tool also allows additional sources of variability to be added, as needed.

This tool allows assessment of different levels of variability and bias impact on acceptance rate. Additionally, this tool can be used to monitor each manufacturing process to ensure it is in control. The impact of the introduction of new processes, methods, or technologies can also be addressed.

Electronic supplementary material

Below is the link to the electronic supplementary material.

ESM 1(97K, pdf)

(PDF 97 kb)

Disclaimer

This article represents the opinion of the authors and not necessarily those of their respective companies.

Contributor Information

Mark Alasandro, Phone: 215-478-3594, malasandro@aol.com.

Thomas A. Little, Phone: 1-925-285-1847, drlittle@dr-tom.com.

References

1. FDA, Process validation: general principles and practices. (Rockville, MD, Jan 2011).
2. ICH. Q8(R2) pharmaceutical development; ICH. Q9 quality risk management.
3. “EMA-FDA pilot program for parallel assessment of quality-by-design applications: lessons learnt and Q&A resulting from the first parallel assessment”. 20 August 2013, EMA/430501/2013, Human Medicines Development and Evaluation
4. “Questions and answers on design space verification”. October 2013, EMA/603905/2013
5. Peterson JJ. A posterior predictive approach to multiple response surface optimization. J Qual Technol. 2004;36(2):139–153.
6. Miro-Quesada G, del Castillo E, Peterson JJ. A Bayesian approach of multiple response surface optimization in the presence of noise variables. J Appl Stat. 2004;31(3):251–270. doi: 10.1080/0266476042000184019. [Cross Ref]
7. LeBlond D, Mockus L. The posterior probability of passing a compendial standard, part 1: uniformity of dosage units. Stat Biopharma Res. 2014;6:270. doi: 10.1080/19466315.2014.928231. [Cross Ref]
8. Alasandro M, Little TA, Fleitman J. Method validation by design to support formulation development. Pharma Technol. 2013.
9. Waterman KC, Colgan ST. A science-based approach to setting expiry dating for solid drug products. Regulat Rapporteur. 2008;5:7/8.
10. Waterman KC, MacDonald BC. Package selection for moisture protection for solid, oral drug products. J Pharm Sci. 2010;99(11):4437–52. doi: 10.1002/jps.22161. [PubMed] [Cross Ref]
11. Alasandro M, Little TA. Multifactor non-linear modeling for accelerated stability analysis and prediction. Pharma Technol. 2014.
