Data Analysis FAQs 

Questions:

How do I report a non-numeric result on the data submission form?

Where can I find a detailed explanation of the Data Reduction Report?

Can you provide guidance on interpreting the results?

Can you provide guidance on setting acceptable performance criteria?

What should I do if I get results that fall outside of the applied limits?

What should I do if I get results that fall outside of the applied limits, but I am consistent with my Peer Group?

What should I do if I get results that fall outside of the applied limits and I am different from my Peer Group?

When I use the data analysis method described in your product insert, I get different target values than those from your data reduction report.  Why?

Answers:

How do I report a non-numeric result on the data submission form? 
Enter your results on the data submission form exactly as the instrument system reports them.  A non-numeric result does not always imply a ‘zero’ recovery.  Reporting a non-numeric result as zero (0) on your data submission form can affect your data analysis.


Where can I find a detailed explanation of the Data Reduction Report? 
You can download a copy of our Data Reduction Report Explanation.


Can you provide guidance on interpreting the results? 
Prior to interpretation, your laboratory should establish acceptable performance criteria for linearity and calibration verification experiments for each analyte. Interpretation should always include consideration from a ‘clinical use’ perspective for each analyte.

Care should be taken when interpreting results that fall outside of the applied limits at the extreme low end of a linearity and calibration verification evaluation. The applied limits are based solely on a statistical evaluation of the data.  Often, the analytical differences are small and do not impact clinical use of the test. Clinical significance of differences needs to be considered in the interpretation of any results.


Can you provide guidance on setting acceptable performance criteria? 
For each test performed, the laboratory is responsible for establishing performance specifications for calibration verification and for reportable range verification. When setting limits for non-linearity, the error allowed can be based on either analytical goals or clinical goals.  Ideally, the amount of error allowed due to non-linearity should be independent of the error that results from bias and other analytical error (imprecision).  Bias is typically the error that results from calibration and the accuracy imparted by the calibration set points.  Imprecision can be influenced by a number of variables such as system maintenance and sample integrity.  The CLIA limits represent the total error allowed, which includes all three sources of error outlined above.  Each laboratory should establish what portion of this total error budget is allowed for non-linearity.
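One way to picture this error-budget idea is sketched below in Python.  The 25% bias and 50% imprecision allocations, and the 10% total allowable error, are hypothetical placeholders only; each laboratory must choose its own allocations.

    def nonlinearity_allowance(total_allowable_error, bias_fraction=0.25, imprecision_fraction=0.50):
        """Return the portion of the total error budget left for non-linearity."""
        remaining_fraction = 1.0 - bias_fraction - imprecision_fraction
        if remaining_fraction <= 0:
            raise ValueError("bias and imprecision allocations consume the whole budget")
        return total_allowable_error * remaining_fraction

    # Hypothetical example: a 10% total allowable error for an analyte
    print(nonlinearity_allowance(10.0))  # -> 2.5 (% of the budget left for non-linearity)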

Acceptance criteria should always include consideration from a ‘clinical use’ perspective for each analyte.  For example, ask the question:  with a "Target" of 4.30 mg/dL for glucose, is a recovered value of 4.5 mg/dL glucose an acceptable value?  This type of logic can be applied to the entire reportable range to help set acceptance criteria.
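As a simple illustration of applying this logic across the reportable range, the Python sketch below checks each level's recovered value against its target using a hypothetical 5% allowance.  All targets, recovered values, and the 5% limit are assumptions for illustration, not recommendations.

    def within_criteria(target, recovered, allowable_pct):
        """True if the recovered value is within the allowable % deviation of the target."""
        return abs(recovered - target) / target * 100.0 <= allowable_pct

    # Hypothetical targets and recovered values for Levels 1-5
    levels = {1: (4.30, 4.50), 2: (50.0, 51.2), 3: (150.0, 148.8),
              4: (300.0, 305.0), 5: (500.0, 540.0)}
    for level, (target, recovered) in levels.items():
        verdict = "acceptable" if within_criteria(target, recovered, 5.0) else "review"
        print(f"Level {level}: {recovered} vs target {target} -> {verdict}")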

The CLIA '88 'criteria for acceptable performance' can be found on the internet in two locations.


What should I do if I get results that fall outside of the applied limits? 
Example:  Level 5 is outside of the applied limits for linearity analysis.

Results need to be compared to the acceptance criteria you established for the analyte.  Points that fall outside of the applied limits may or may not affect the linear range, depending upon your acceptance criteria.

Care should be taken when interpreting results that fall outside of the applied limits at the extreme low or high end of a linearity and calibration verification evaluation.  The results shown in our output table are based solely on a statistical evaluation of the data – clinical significance is not taken into account.  Often, at the low or high end of the evaluation, the statistical differences may not impact clinical use of the test.  Clinical significance of differences needs to be considered in the interpretation of any result.  More information on statistical flags on your data reduction reports is provided in this letter.

If the nonlinearity seen is deemed acceptable or not clinically significant, you could accept the result and document your reasoning.  If the nonlinearity seen is determined to be unacceptable or clinically significant, you could proceed with troubleshooting.  The initial troubleshooting recommendation is to recalibrate the assay, rerun the linearity set for the analyte, and resubmit the data.
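This decision flow can be illustrated with a small Python sketch.  The statistical limits, target, recovered value, and 10% clinical allowance below are all hypothetical; they stand in for the applied limits on your data reduction report and for your laboratory's own acceptance criteria.

    def evaluate_level(recovered, target, stat_low, stat_high, clinical_allowable_pct):
        """Classify one level against the applied (statistical) limits and the lab's clinical criteria."""
        within_stat = stat_low <= recovered <= stat_high
        within_clinical = abs(recovered - target) / target * 100.0 <= clinical_allowable_pct
        if within_stat:
            return "within applied limits"
        if within_clinical:
            return "outside applied limits, but clinically acceptable - accept and document reasoning"
        return "outside applied limits and clinically unacceptable - troubleshoot"

    # Hypothetical Level 5: statistically flagged, but within a 10% clinical allowance
    print(evaluate_level(recovered=540.0, target=500.0,
                         stat_low=480.0, stat_high=520.0,
                         clinical_allowable_pct=10.0))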

It may be useful for you to submit method information and receive Peer Group Analysis.  Peer Group Analysis allows you to see how your results compare to others running the same VALIDATE® test kit on the same analyzer.  We have generated FAQs about results that are outside the limits and consistent with Peer Group and results that are outside the limits and different from the Peer Group.


What should I do if I get results that fall outside of the applied limits, but I am consistent with my Peer Group? 
Example:  Level 5 is outside of the applied limits for linearity analysis, but is consistent with the Peers at that level.

Peer Group Analysis allows you to see how your results compare to others running the same VALIDATE® test kit on the same analyzer.  If your results are nonlinear but consistent with your peers, this indicates that the nonlinearity is not limited to your specific analyzer and that the method itself may truly be nonlinear.  Any result that falls outside the allowable error limits should be evaluated by the laboratory for clinical significance.  If the nonlinearity seen is deemed acceptable or not clinically significant, you could accept the result and document your reasoning.  If the nonlinearity seen is determined to be unacceptable or clinically significant, you could choose to limit the upper end of your range to the mean of the highest level tested that was within the statistical limits (Level 4 in the example above), or proceed with troubleshooting.  In the case of nonlinear results that are consistent with Peers, the recommendation would be to contact your instrument manufacturer for troubleshooting steps.


What should I do if I get results that fall outside of the applied limits and I am different from my Peer Group? 
Example:  Level 4 and Level 5 are outside of the applied limits for linearity analysis, and are different from the Peers at those levels.

Peer Group Analysis allows you to see how your results compare to others running the same VALIDATE® test kit on the same analyzer.  If your results are nonlinear and not consistent with your peers, this indicates that the nonlinearity may be limited to your analyzer.  Any result that falls outside the allowable error limits should be evaluated by the laboratory for clinical significance.  If the nonlinearity seen is deemed acceptable or not clinically significant, you could accept the result and document your reasoning.  If the nonlinearity seen is determined to be unacceptable or clinically significant, you could choose to limit the upper end of your range to the mean of the highest level tested that was within the statistical limits (Level 3 in the example above), or proceed with troubleshooting.  The initial troubleshooting recommendation in this case would be to recalibrate the assay, rerun the linearity set for the analyte, and resubmit the data.


When I use the data analysis method described in your product insert, I get different target values than those from your data reduction report.  Why? 
The data analysis methods described in our product insert can easily be performed without the use of complicated statistics.  Our Data Reduction method uses more complex statistical analysis to calculate target values.  Both are acceptable methods for analyzing the data.

One method of analysis in the package insert uses two consecutive levels to calculate the target values.  The difference between the chosen consecutive levels is first calculated.  Then, this difference is added or subtracted as needed to calculate the five target values based on the chosen levels.
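A Python sketch of this first method is shown below, using hypothetical means for two consecutive levels; the function name and example values are illustrative only.

    def targets_from_consecutive_levels(level_a, mean_a, level_b, mean_b):
        """Project target values for Levels 1-5 from the means of two consecutive levels."""
        assert level_b == level_a + 1, "levels must be consecutive"
        delta = mean_b - mean_a
        return {level: mean_a + (level - level_a) * delta for level in range(1, 6)}

    # Hypothetical means for Levels 2 and 3
    print(targets_from_consecutive_levels(2, 100.0, 3, 150.0))
    # -> {1: 50.0, 2: 100.0, 3: 150.0, 4: 200.0, 5: 250.0}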

The second method in the package insert uses Levels 1 and 5 to calculate the target values.  The difference between Level 5 and Level 1 is divided by four to determine the delta for each level.  Then, this difference is consecutively added to Level 1 three times to determine the target values for Levels 2 through 4.
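The second method can be sketched the same way; the Level 1 and Level 5 means below are hypothetical.

    def targets_from_endpoints(mean_level1, mean_level5):
        """Interpolate target values for Levels 2-4 from the Level 1 and Level 5 means."""
        delta = (mean_level5 - mean_level1) / 4.0
        return {level: mean_level1 + (level - 1) * delta for level in range(1, 6)}

    # Hypothetical means for Levels 1 and 5
    print(targets_from_endpoints(50.0, 250.0))
    # -> {1: 50.0, 2: 100.0, 3: 150.0, 4: 200.0, 5: 250.0}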

Our Data Reduction method uses linear regression with specific data points as part of the target value calculation.  This method is more complex than the ones outlined in the product insert and most likely will not result in exactly the same target values.  For a detailed explanation of this process, please refer to our Data Reduction Report Explanation.
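For orientation only, the Python sketch below shows one generic regression-based approach: fit an ordinary least-squares line to the measured means versus level number and read the fitted values back as targets.  This is an assumption-laden illustration, not the actual Data Reduction calculation, which uses its own selection of data points and statistics.

    def regression_targets(level_means):
        """Fit y = a + b * level by least squares and return the fitted values as targets."""
        n = len(level_means)
        xs, ys = list(level_means.keys()), list(level_means.values())
        x_bar, y_bar = sum(xs) / n, sum(ys) / n
        slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
                sum((x - x_bar) ** 2 for x in xs)
        intercept = y_bar - slope * x_bar
        return {level: intercept + slope * level for level in level_means}

    # Hypothetical measured means for Levels 1-5
    print(regression_targets({1: 52.0, 2: 98.0, 3: 151.0, 4: 204.0, 5: 249.0}))
    # -> roughly {1: 50.8, 2: 100.8, 3: 150.8, 4: 200.8, 5: 250.8}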
