
NIR Discussion Forum » Bruce Campbell's List » General, All others » Accuracy and precision of quantitative models.


Jerry Jin (jcg2000)
Senior Member
Username: jcg2000

Post Number: 48
Registered: 1-2009
Posted on Tuesday, February 26, 2013 - 11:41 am:   

Hi, Yi

For your reference, see the NIR guideline published by the European Agency for the Evaluation of Medicinal Products (CPMP/QWP/3309/01, EMEA/CVMP/961/01). It gives an explicit equation for accuracy comparison, but you will probably want to justify it before using it.

Jerry Jin

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 521
Registered: 9-2001
Posted on Tuesday, February 26, 2013 - 11:17 am:   

Yi - I agree with Gabi. You can think of it this way: one quantity that is always missing in discussions of NIR error is the "true" value of the constituent. That's because it is unknown. If we knew it, we wouldn't have to try to measure it.

But since we don't know "truth", all we can do is take measurements, which unfortunately all have error. So suppose you had an NIR method with no error, so that every time you took a reading it reported the "true" value for the sample. We still couldn't know that; we could only compare it to the reference lab, and the error of the reference lab reading would then appear as the error of the analysis.

Your method of using duplicate measurements from the reference lab is good, but it doesn't report the "true" error any more than it reports the "true" value for any given sample. There are many reasons for this; one, for example, is that there are often systematic errors in reference readings, and those won't show up in the duplicate measurements. But measuring duplicates (or triplicates, or ...) is the best we can do to estimate the error, just as it is for estimating the sample value.

Over the years, chemists and statisticians have devised methods for overcoming all these problems, but those are long, cumbersome and expensive. It's a lousy situation, but we're stuck with it unless you want to make a major research project out of every routine calibration exercise you do.

So for practical purposes, you're best off to use Gabi's "rules of thumb" to estimate what you should expect.

\o/
/_\

Gabi Levin (gabiruth)
Senior Member
Username: gabiruth

Post Number: 84
Registered: 5-2009
Posted on Tuesday, February 26, 2013 - 10:36 am:   

Hi all,
There is not much one can add after all that has been said. Again, from a practitioner's point of view, and relying on the best memory chip in my head, I have gathered over time a rule of thumb that is useful in many cases but can't always be assumed to hold. The rule I use for myself is: if the reference error is X (as reported by the user, not necessarily the true value), the anticipated SEP from the NIR will be 1.05X to 1.1X.
I am not sure it helps you, but if it does, so much the better.
There is another aspect of reference error that I find is often overlooked, misunderstood, or simply not known about.

For this purpose I will call the reference error the Uncertainty, U, i.e., the amount of uncertainty we have in the value from the reference method.
The second factor is the range over which we have data, which I will denote by R.
It is important to understand that for a calibration of even minimum value, the following rule should be observed:

U < 0.05R

For really good, valuable calibrations the following should be achieved:

U < 0.03R, and of course, the smaller it is, the better.
At the same time, increasing the value of R is useful, but due to the statistical nature of the regressions, the SEP increases with it. The most desirable approach was and will always be reducing U.
I can testify about people who went through hundreds of samples and got nowhere because U > 0.1R. They couldn't understand why, even though they had collected hundreds of samples.
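A minimal Python sketch of this rule of thumb (the thresholds are the ones above; the function name and the "marginal" band between 0.05R and 0.1R are my own labelling):

```python
# Hypothetical helper: classify a calibration by the ratio of the
# reference uncertainty U to the calibration range R.
def rate_calibration(u, r):
    ratio = u / r
    if ratio < 0.03:
        return "really good"      # U < 0.03R
    elif ratio < 0.05:
        return "minimum value"    # U < 0.05R
    elif ratio <= 0.10:
        return "marginal"         # my own label for the grey zone
    else:
        return "getting nowhere"  # U > 0.1R: more samples won't help

# Example: a 10-16% constituent range (R = 6) with U = 0.2%
print(rate_calibration(0.2, 6.0))  # U/R ~ 0.033 -> "minimum value"
```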
I hope this was a small contribution.

Gabi Levin
Brimrose

Yi Peng (yi_peng)
New member
Username: yi_peng

Post Number: 2
Registered: 2-2013
Posted on Tuesday, February 26, 2013 - 10:29 am:   

Hi Howard,
Normally, after calibration we apply the model to predict unknown samples, and we use SEP or RMSEP to evaluate the prediction results. We consider that these should be as low as possible, but they cannot be zero, so the question is how low is acceptable. In my opinion the SEP should be lower than or equal to the lab error, because the lab chemical analysis has error as well. Another question is what the lab error is and how to calculate it. I used the standard deviation of differences (SDD), calculated from the replicate values for each sample; I have two replicates for each soil sample. If the SEP is lower than or equal to 2*SDD, that means the NIR technique is acceptable compared to the traditional lab measurement. But in the soil NIR field, nobody does this comparison, so I don't know whether my method is right or not.
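For what it's worth, here is a small Python sketch of the SDD calculation from duplicate pairs. Conventions differ (some divide the sum of squared differences by 2n, others by n), so the formula below is one common choice, not necessarily the exact one used above; the numbers are made up.

```python
import math

def sdd(duplicates):
    # Standard deviation of differences for duplicate pairs,
    # using the common convention s = sqrt(sum(d_i^2) / (2n)).
    d2 = sum((a - b) ** 2 for a, b in duplicates)
    return math.sqrt(d2 / (2 * len(duplicates)))

# Hypothetical duplicate lab values for four soil samples
pairs = [(2.10, 2.14), (1.95, 1.91), (2.30, 2.28), (2.05, 2.11)]
lab_sdd = sdd(pairs)   # 0.030 for these numbers
sep = 0.05             # hypothetical SEP from NIR validation
print(sep <= 2 * lab_sdd)  # the acceptance check described above
```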

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 520
Registered: 9-2001
Posted on Tuesday, February 26, 2013 - 9:22 am:   

Yi - Ordinarily, since only the difference between the NIR measurement and the reference lab value is available, it's difficult to separate the errors of the two. However, if you have one or more additional, independent analytical values beyond just those two measurements, it can be done, in principle at least.

A paper in Anal. Chem.; 61(5), p.398-403 (1989) describes two algorithms for sorting out the errors when values from three or more independent analytical methods are available. One algorithm is suitable for three analytical methods, the other one for when there are more than three analytical methods.

\o/
/_\

Yi Peng (yi_peng)
New member
Username: yi_peng

Post Number: 1
Registered: 2-2013
Posted on Tuesday, February 26, 2013 - 8:08 am:   

Hi all,
Can anybody send me some papers about how to compare the prediction error (such as RMSEP or SEP) from NIR with the error from lab chemical analysis? Thank you!
[email protected]

Yi

Licen Ziaato
Posted on Tuesday, March 02, 2004 - 1:30 am:   

I have learned that the Vset-bias is a measure of the precision of a model, this value being somehow the difference between the predicted values and the true (measured) values. Systematic errors should be represented by this value.
The supplier of the chemometric software I am using claims that it is possible to calculate a model's accuracy without repeated measurements, because they say 95% of the results for a given sample would fall within a +/- 2 SEP interval. Can anyone explain to me how this is possible?
NIR/chemometrics being a secondary measuring technique, I think that its accuracy should somehow be related to the accuracy of the parent (primary) method. Is there any calculation that gives the accuracy starting from the accuracy of the primary method and indicators of the model (like SEP, for instance)?
Thanks
Licen

hlmark
Posted on Tuesday, March 02, 2004 - 10:00 am:   

Licen - yes and no. The original basis of calibration goes way back to the time of Gauss, and regression analysis for many years was the province of statisticians. Most of the calibration methods we now use are based, in one way or another, on that. Unfortunately, many chemometricians ignore what the statisticians have learned. One of those things is that regression analysis is properly defined only when certain assumptions hold. Among those assumptions is one that says that there should be no error in the X (independent) variable(s): i.e., the NIR absorbance values.

When that assumption does in fact hold, then the only error present is that in the Y (dependent) variable, i.e., the reference values against which we calibrate the NIR instruments, and the value of the SEP will be the same as the error of the reference method. Adding certain additional assumptions, such as the one that states that the errors are Normally (Gaussian) distributed, leads to the statement by the vendor, since the +/- 2 SD interval of the Normal distribution includes 95% of the cases.

In typical NIR usage, the assumption doesn't hold, since the NIR readings also have error. Part of the saving grace of NIR is that at least some of the instrumental errors are very small, so we can say they approach zero. Not all of them are, however, especially errors related to the sample or to optical effects (optical scatter, etc.). Also, using an improper, non-optimum calibration model will introduce additional errors. There are several ways to approach dealing with this:

1) At the simplest level, we can note that we cannot expect the instrumental values to agree with the reference values any better than they agree with themselves, i.e., the reference method error is a limiting value for agreement between NIR and reference.

2) With two methods, each having error, you can't separate the contributions of the two methods to the total. Under some circumstances, however, you can calculate the error due to the individual methods, if results from more than two methods are available. This is an empirical approach that simply uses the available data to estimate the errors.

3) More sophisticated approaches can be used to estimate the various contributions to the NIR method, and sum them all together to get a value for the NIR error.
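Point 1) can be illustrated with a quick Python simulation (all the error magnitudes here are made-up values, chosen only to show the effect): even when the NIR error is small, the apparent NIR-vs-reference error is dominated by the reference error.

```python
import math
import random

random.seed(0)
n = 20000
ref_sd, nir_sd = 0.30, 0.10  # assumed (made-up) method errors

truth = [random.uniform(10, 16) for _ in range(n)]
ref = [t + random.gauss(0, ref_sd) for t in truth]  # reference lab values
nir = [t + random.gauss(0, nir_sd) for t in truth]  # NIR predictions

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

observed = rms([a - b for a, b in zip(nir, ref)])
# The observed error approaches sqrt(nir_sd^2 + ref_sd^2) ~ 0.32,
# close to the reference error alone, even though the NIR error is 0.10.
print(round(observed, 2))
```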

If you would like to discuss these points, you can contact me directly at [email protected] and we can delve into them further.

Howard

\o/
/_\

Christopher D. Brown
Posted on Tuesday, March 02, 2004 - 12:07 pm:   

Licen,

The terms accuracy and precision are often used loosely, so let me first clarify my internal definitions:

accuracy: ability of a method to indicate the 'true' property values over many measurements of a sample (does the mean value approach truth?). The accuracy is usually concentration/property dependent, so I personally think of accuracy as indicated by the regression line for predicted values vs. reference values. (Slope of 1, intercept of 0 is perfectly accurate)

precision: ability of a method to produce the same value over many measurements of a sample (what is the variability of the repeated measures)

Under these definitions, NIR is certainly dependent on the accuracy of the reference method on which it was calibrated. If the reference method is biased, the NIR method will also be biased.

The _true_ precision of the NIR method is only loosely dependent on the precision of the reference method, however. Provided you have quite a few samples per variable/factor in your calibration model, the reference precision is not usually a big deal. But, the _apparent_ precision of the NIR method is directly dependent on the precision of the reference method. Since you're comparing the NIR predictions to a flawed reference (two sources of variance) the _observed_ RMS error is

RMS_observed^2 = RMS_NIR^2 + RMS_ref^2

Therefore, if you have an estimate of the reference error you can directly estimate the true RMS error of the NIR method for predicting new samples. [Note that there are a couple of assumptions for this formula, perhaps most importantly that the reference precision can be adequately summarized with an RMS error, sometimes not the case for reference methods showing proportional error.]
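A minimal Python sketch of that correction (the function name is mine; it simply inverts the formula above, under the independence assumption stated):

```python
import math

def true_nir_rms(rms_observed, rms_ref):
    # Invert RMS_observed^2 = RMS_NIR^2 + RMS_ref^2 to estimate
    # the NIR method's own RMS error.
    diff = rms_observed ** 2 - rms_ref ** 2
    if diff < 0:
        raise ValueError("reference error exceeds observed error; "
                         "check the two error estimates")
    return math.sqrt(diff)

# Made-up numbers: observed RMSEP 0.5, reference RMS error 0.4
print(round(true_nir_rms(0.5, 0.4), 3))  # -> 0.3
```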

I can pass along a few literature references if you're interested in further detail.

~ Chris

Bruce H. Campbell (Campclan)
Posted on Tuesday, March 02, 2004 - 1:49 pm:   

I want to add a comment to what Chris has so nicely stated: the formula given holds as long as the intercepts are statistically close to zero. I am not in a position to define "statistically close"; I am passing along what a Ph.D. statistician told me.

I have used this relationship in many calibrations and have found that when I take the variance (the squared standard deviation) of the NIR method, subtract it from the variance of the validation, and take the square root of the result, the standard deviation assigned to the reference method is close to the correctly calculated reproducibility of the reference method. (Here I am using reproducibility as defined by the standard deviation measured over days, by different analysts, and with different instruments, but the same model.)
Bruce

Licen Ziaato
Posted on Wednesday, March 03, 2004 - 5:24 am:   

Dear all,
Yes, in my first message I mixed up accuracy and precision; shame on me.
I can follow what you have said (I hope), and I think I can't accept that the NIR spectrometer is free from errors. Isn't that against the second principle (uhh, philosophy)? Anyway, I think I'll follow the calculation Chris suggested, as it is a general calculation valid for any set of combined processes (the variance of a combined process being the sum of the variances of each single process). Unfortunately this does not save me time, since now I have to measure, via repeated analyses, the variance in my spectrometric measurements. The conclusion seems to be that the vendor is a bit too confident in the capabilities of his instrument, or he is telling a kind of hopeful truth.
Thanks
Licen
