# Introduction

Individuals involved in supplier management need to be able to determine whether the quality performance of one supplier is significantly different from that of another. This information can be used in supplier selection, in allocating the amount of product purchased from each supplier, in supplier process improvement programs, and in the decision to terminate the purchasing relationship. "The goal of having a good performance metric is to allow the purchaser to assess supplier related performance risk and to take appropriate action" [Bernstein, 1996]. Given a quality characteristic and specified requirements for conformance, several statistics are commonly used to measure supplier process capability. These include the traditional fraction nonconforming (i.e., p or NC) and the modern capability indices C_p, C_pk, C_pm, C_pmk, etc. [Kotz, 1993]. The proper application of these modern indices assumes that the process distribution is stable and approximately Normal. To get around this Normality requirement, several authors have offered alternative solutions [Chou, 1998], [Somerville, 1997]; alternatively, a Box-Cox power transform can be used to Normalize the observed non-Normal data distribution. Statistical two-sample comparison test procedures have been developed for all of the common capability indices. However, another measure of potential process risk is net sensitivity (NS) [Flaig, 1999]. Net Sensitivity is a measure of the robustness of the process to potential changes in the mean, the variance, and/or the specification limits. More specifically, Net Sensitivity is the instantaneous rate of change of the combined areas under the distribution curve above the USL and below the LSL given a change in parameters or specifications. It measures the potential effect on the nonconformance rate of changes in the distribution mean, standard deviation, USL, or LSL. This is a useful process performance measure, but it is relatively new, and until now there was no two-sample comparison test procedure for practitioners to use to compare net sensitivity results. A reasonable approach to evaluating the differences in supplier performance for the purchasing department is to measure the nonconformance rate and the net sensitivity for each supplier and then test whether any observed differences are significant. Since tests for differences in nonconformance rates already exist, the only remaining task is to develop a test for differences in net sensitivity. This is the goal of the next section.

# a) Methodology

It is assumed that the product performance distributions for both suppliers are stable and mound shaped, that the specification limits are near the tails of each distribution for the quality characteristic of interest, and that the observed distributions can be adequately approximated by Johnson distribution curves. This is illustrated in Figure 1.

Figure 1: The performance distributions of Supplier A and Supplier B.

For Normal distributions, the net sensitivity can be written in terms of constants k_i defined using the transformed specification limits, where x_L = LSL and z_L is its transformed value, and x_U = USL and z_U is its transformed value. The formulas used to compute the constants γ, δ, λ, and ε for the Johnson S_U and S_B distributions are given by Farnum [Farnum, 1996].
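A minimal sketch of the transformation step is shown below, assuming the commonly used Johnson S_U and S_B forms, z = γ + δ·asinh((x − ε)/λ) and z = γ + δ·ln((x − ε)/(ε + λ − x)). The function names and the numeric constants are illustrative assumptions only; in practice γ, δ, λ, and ε would be fitted from the observed data as described by Farnum [Farnum, 1996].

```python
import math

def johnson_su_z(x, gamma, delta, lam, eps):
    """Transform x to a standard Normal z value using a Johnson S_U curve."""
    return gamma + delta * math.asinh((x - eps) / lam)

def johnson_sb_z(x, gamma, delta, lam, eps):
    """Transform x to a standard Normal z value using a Johnson S_B curve
    (valid for eps < x < eps + lam)."""
    return gamma + delta * math.log((x - eps) / (eps + lam - x))

# Hypothetical fitted constants, for illustration only; in practice they are
# computed from the sample data as described by Farnum [Farnum, 1996].
gamma, delta, lam, eps = 0.5, 1.2, 4.0, -2.0

LSL, USL = -1.0, 2.0
z_L = johnson_su_z(LSL, gamma, delta, lam, eps)   # transformed lower spec limit
z_U = johnson_su_z(USL, gamma, delta, lam, eps)   # transformed upper spec limit
print(z_L, z_U)
```

Once the specification limits have been transformed to the standard Normal scale, the tail areas and their sensitivities can be evaluated exactly as in the Normal case.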
If the observed distribution is approximately Normal, or can be transformed into an approximately Normal distribution, then the Net Sensitivity (NS) can be approximated by equation (1). Net Sensitivity is an estimate of the instantaneous rate of change in the fraction nonconforming (i.e., the area under the approximating curve to the left of the LSL combined with the area to the right of the USL) given a change in the mean, the standard deviation, or the specification limits of the process.

The variability of NS is determined by the random variables in equation (1). The fixed variables in equation (1) are the specification limits (i.e., USL and LSL), and the random variables are the mean and standard deviation (i.e., m and s). The two distributions making up NS are Normal, and the mean and standard deviation are independent for Normal distributions, so the variability of NS follows from the sampling distributions of the random variables m and s. The standard error of the mean is SE_m = s/√n, and the standard error of the standard deviation is approximately SE_s = s/√(2n).

# II. Example

Let the following supplier management scenario, product specifications, and performance results for two suppliers form the basis for the comparison. The supply base manager would like to know if the nonconformance risk performance difference between the two suppliers is significant at a 95% confidence level. The procedure for answering this question might go as follows:

1. The engineer selects Net Sensitivity as the metric to assess each process's nonconformance risk.
2. Assume H_0: NS_A = NS_B and H_1: NS_A ≠ NS_B.
3. The product has characteristic performance requirements of LSL = -1 and USL = 2.
4. The characteristic performance distributions for each supplier are approximately Normal.
5. The sample characteristic performance statistics from each supplier are: Supplier A: n = 100, m = 0, s = 1; Supplier B: n = 100, m = 1.7, s = 1.
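Before the confidence limits are evaluated below, the sampling-error quantities can be checked directly. The following sketch, assuming df = n − 1 for the t quantile and using a hypothetical corner_points helper, reproduces SE_m = 0.1000, SE_s ≈ 0.0707, and t(α/2, df) ≈ 1.98 for Supplier A, and lists the four (m ± t·SE_m, s ± t·SE_s) combinations at which equation (1) would be evaluated.

```python
from math import sqrt
from scipy.stats import t

def corner_points(m, s, n, alpha=0.05):
    """Return the four (mean, std dev) combinations m ± t*SE_m, s ± t*SE_s
    used to bound the net sensitivity.  Using df = n - 1 is an assumption."""
    se_m = s / sqrt(n)            # standard error of the mean
    se_s = s / sqrt(2 * n)        # approximate standard error of the std dev
    t_crit = t.ppf(1 - alpha / 2, df=n - 1)
    return [(m + a * t_crit * se_m, s + b * t_crit * se_s)
            for a in (+1, -1) for b in (+1, -1)]

# Supplier A from the example: n = 100, m = 0, s = 1
for mean, std in corner_points(0.0, 1.0, 100):
    print(round(mean, 4), round(std, 4))
# SE_m = 0.1000, SE_s ≈ 0.0707, and t(0.025, 99) ≈ 1.98 match the values quoted
# in the example; NS would then be evaluated via equation (1) at each corner.
```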
Then the 95% confidence interval (CI) for the net sensitivity of Supplier A's performance (NS_A) is constructed. The two-sided 100*(1-α)% confidence limits are found by evaluating equation (1) using the t-distribution quantile t(α/2, df) at the four combinations of the mean, m ± t(α/2, df)*SE_m, and the standard deviation, s ± t(α/2, df)*SE_s. For Supplier A, t(α/2, df) = 1.98, SE_m = 0.1000, and SE_s = 0.0707. Evaluating the four cases yields:

NS(+,+) = 101,000 DPM/unit x (where DPM is Defects Per Million)
NS(+,-) = 124,000 DPM/unit x
NS(-,+) = 219,000 DPM/unit x
NS(-,-) = 283,000 DPM/unit x

Given the sample estimate of the supplier's net sensitivity, the confidence interval for the population value of NS is denoted NS_L < NS < NS_U. The lower and upper values of the confidence interval are computed as follows:

NS_L = min{NS(+,+), NS(+,-), NS(-,+), NS(-,-)} = 101,000 DPM/unit x, and
NS_U = max{NS(+,+), NS(+,-), NS(-,+), NS(-,-)} = 283,000 DPM/unit x.

The result is the 100*(1-α)% confidence interval for NS, i.e., NS_L < NS < NS_U. For Supplier A:

CI_A = (101,000 < NS_A < 283,000)
Min(Abs(NS_A)) = 101,000 DPM/unit x

Similarly, the confidence interval for the net sensitivity of Supplier B's performance (NS_B) is:

CI_B = (-459,000 < NS_B < -286,000)
Min(Abs(NS_B)) = 286,000 DPM/unit x

The objective in robust process design is to have the value of Net Sensitivity (NS) as close to zero as possible. So in this case there is sufficient evidence to reject the null hypothesis and conclude that the nonconformance risk of Supplier A is significantly smaller than that of Supplier B, with 95% confidence.

The practitioner needs to exercise care when applying equation (1) to non-Normal data, as it can lead to significant errors because NS is quite sensitive to the distribution shape. Hence, when computing the confidence interval for non-Normal data, the practitioner must apply the correct dz/dx formula for the type of Johnson curve being used to approximate the observed data distribution, or alternatively use the Box-Cox transformation to Normalize the observed data distribution.

# III. Summary

Sensitivity analysis provides a way of assessing the robustness of a process, i.e., the possible impact of changes in the process distribution parameters or specification limits on process capability. So it is important to be able to determine whether the net sensitivity of one supplier is significantly different from that of another. However, this test should be combined with a test for the difference in fraction nonconforming to get a more complete picture of the similarities and differences between suppliers. In some sense, the nonconformance test is a test of expected performance, and the net sensitivity test is a test of the potential variance of performance. Applying both tests provides a rigorous decision-making tool for supplier management.

# References

* Bernstein, P. L. (1996). The New Religion of Risk Management. Harvard Business Review, March-April 1996.
* Chou, Y.-M., Polansky, A. M., and Mason, R. L. (1998). Transforming Non-Normal Data to Normality in Statistical Process Control. Journal of Quality Technology, 30(2).
* Flaig, J. J. (1999). Process Capability Sensitivity Analysis. Quality Engineering, 11.
* Flaig, J. J. (2000). Process Capability Sensitivity Analysis. Quality Control and Applied Statistics, 45.
* Farnum, N. R. (1996). Using Johnson Curves to Describe Non-normal Process Data. Quality Engineering, 9(2). Marcel Dekker.
* Kotz, S., and Johnson, N. L. (1993). Process Capability Indices. Chapman and Hall, New York.
* Somerville, S. E., and Montgomery, D. C. (1997). Process Capability Indices and Non-normal Distributions. Quality Engineering, 9.
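As an illustrative recap of the comparison in Section II, the sketch below encodes the reported interval endpoints and applies one reading of the example's decision logic: the |NS| intervals for the two suppliers do not overlap, so the difference in nonconformance risk sensitivity is judged significant at the 95% level. The abs_interval helper and the non-overlap decision rule are assumptions made for illustration, not the paper's formal test statistic.

```python
def abs_interval(lo, hi):
    """Interval of |NS| implied by a confidence interval (lo, hi) for NS."""
    if lo <= 0.0 <= hi:                 # interval straddles zero
        return (0.0, max(abs(lo), abs(hi)))
    return tuple(sorted((abs(lo), abs(hi))))

# 95% confidence intervals reported in Section II (DPM/unit x)
ci_a = (101_000, 283_000)
ci_b = (-459_000, -286_000)

abs_a, abs_b = abs_interval(*ci_a), abs_interval(*ci_b)
print("min |NS_A| =", abs_a[0], " min |NS_B| =", abs_b[0])

# One reading of the decision logic: the |NS| intervals do not overlap,
# so the difference in nonconformance risk sensitivity is judged significant.
no_overlap = abs_a[1] < abs_b[0] or abs_b[1] < abs_a[0]
print("Reject H0 (NS_A = NS_B)?", no_overlap)   # True for these data
```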