Significance of the bias

Tony Davies (td)
Moderator
Username: td

Post Number: 230
Registered: 1-2001
Posted on Tuesday, May 25, 2010 - 5:35 pm:   

Hello again Leonardo,

Just seen your data (it's 23:23 here in the UK). I've been out, and tomorrow I'm going sailing, so I need to answer now. Luckily some friends have done it for me! Your example has very few samples and the bias values are quite small. In these cases RMSEP and SEP are quite similar, but the difference between n and n-1 becomes significant.

It is OK to have small sets when you are learning how to run the software, but for real calibration you MUST use much larger sets. (Kim's book is very good and it will repay all the time you spend on completing all the examples!)

In general I consider the RMSEP to be the more important statistic. When the SEP comes out lower, do not be tempted to quote it! Bias has its own set of uncertainties!

Best wishes,

Tony

David W. Hopkins (dhopkins)
Senior Member
Username: dhopkins

Post Number: 142
Registered: 10-2002
Posted on Tuesday, May 25, 2010 - 10:45 am:   

Leonardo,

It is worthwhile for you to go through the validation exercise so you can reproduce the calculations that The Unscrambler presents. In addition to the question of dividing by (n-1) inside the square root for SEP (because the bias is estimated) and by n for RMSEP, where n is the number of observations in the data set, there is the issue of the calibration set: some sources recommend dividing by (n-1-nf) or (n-nf), where nf is the number of factors, and there is some discussion about whether this should be done for PLS. I have checked this for The Unscrambler in the past, but I don't know what they do in the latest version of the program. It is never a good idea to assume that the software you use is doing a calculation as you expect, and it is a relatively simple exercise to take the listed errors for the sample set and do the calculations in a spreadsheet, so you can see what assumptions are made.
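
A minimal sketch of that check in Python with NumPy (the reference and predicted values below are made up for illustration, not taken from the Esbensen data set):

    import numpy as np

    # Hypothetical reference and predicted values for a small test set
    y_ref  = np.array([10.2, 12.5, 11.8,  9.9, 13.1, 10.7, 12.0, 11.3,  9.5, 12.8, 11.1])
    y_pred = np.array([10.5, 12.1, 12.0, 10.3, 12.8, 10.9, 11.6, 11.7,  9.8, 12.4, 11.4])

    e  = y_pred - y_ref                 # prediction residuals
    n  = e.size                         # number of validation samples
    nf = 3                              # number of PLS factors (used only in the calibration-set variants)

    bias  = e.mean()                                    # mean residual
    rmsep = np.sqrt(np.sum(e**2) / n)                   # divide by n
    sep   = np.sqrt(np.sum((e - bias)**2) / (n - 1))    # divide by n-1, bias removed

    # Variants some sources recommend for the calibration set
    sec_a = np.sqrt(np.sum((e - bias)**2) / (n - 1 - nf))
    sec_b = np.sqrt(np.sum(e**2) / (n - nf))

    print(f"bias = {bias:.3f}, RMSEP = {rmsep:.3f}, SEP = {sep:.3f}")
    print(f"(n-1-nf) variant = {sec_a:.3f}, (n-nf) variant = {sec_b:.3f}")

Comparing these numbers with what the software reports shows immediately which divisor it uses.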

Best regards,
Dave

Pierre Dardenne (dardenne)
Senior Member
Username: dardenne

Post Number: 47
Registered: 3-2002
Posted on Tuesday, May 25, 2010 - 10:25 am:   

Leonardo,

Most of the time, RMSEP > SEP.
In this case, with only 11 samples, SEP is larger than RMSEP because of the division by (I-1), as Marion mentioned.
For example, for the last one:
RMSEP = 7.20, SEP = 7.54,
(7.20^2)*11 > (7.54^2)*10
570.2 > 568.5
The values are very close because the bias is negligible.
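
With those divisors the relation is exact: n*RMSEP^2 = (n-1)*SEP^2 + n*bias^2. A quick Python check on the reported propanol values, as a sketch (the values are rounded to two decimals, so a small discrepancy remains):

    # Reported values for propanol with 2 factors
    n, rmsep, sep, bias = 11, 7.20, 7.54, -0.31

    lhs = n * rmsep**2                      # total sum of squared residuals, from the RMSEP
    rhs = (n - 1) * sep**2 + n * bias**2    # the same quantity split into SEP and bias parts

    print(lhs, rhs)                         # 570.24 vs 569.57: equal up to rounding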

Pierre

Ciaccheri Leonardo (leonardo)
New member
Username: leonardo

Post Number: 2
Registered: 5-2010
Posted on Tuesday, May 25, 2010 - 9:36 am:   

I am reporting this example of RMSEP, SEP and Bias values to answer Tony's question.

The source data come from a training data set supplied with the book "Multivariate Analysis in Practice" by Kim Esbensen. It contains NIR spectra of alcohol mixtures (methanol, ethanol, propanol). There are 15 samples in the calibration set and 11 samples in the test set.

I made a PLS2 run and, retaining 3 PLS factors, I got these values:

Methanol: RMSEP = 1.99, SEP = 2.08, Bias = -0.15
Ethanol: RMSEP = 3.08, SEP = 3.06, Bias = -0.99
Propanol: RMSEP = 1.71, SEP = 1.34, Bias = +1.15

While Ethanol and Propanol show an RMSEP larger than the SEP (as I expected), Methanol has an RMSEP smaller than the SEP.

Retaining only 2 factors, instead, I got the following table:

Methanol: RMSEP = 3.93, SEP = 4.09, Bias = +0.67
Ethanol: RMSEP = 4.68, SEP = 4.90, Bias = -0.36
Propanol: RMSEP = 7.20, SEP = 7.54, Bias = -0.31

Now the RMSEP is smaller than the SEP for all three alcohols.

Best regards,

Leonardo Ciaccheri

Marion Cuny (marion)
Junior Member
Username: marion

Post Number: 7
Registered: 6-2009
Posted on Tuesday, May 25, 2010 - 8:20 am:   

that's right :-)

Gabi Levin (gabiruth)
Senior Member
Username: gabiruth

Post Number: 33
Registered: 5-2009
Posted on Tuesday, May 25, 2010 - 7:46 am:   

Hi,

From a practical point of view, when running a validation set that is completely independent from the calibration set, we predict the values and can then calculate the following:
1. SEP - the standard deviation of the deviations between the predicted and measured values.
2. Bias - the average of the deviations from the measured values. If the average of the deviations is zero, the bias is zero; if it is different from zero, we say that we have a bias. The cross validation in The Unscrambler (which I use regularly) does a calculation which I believe is the same, unless I am wrong. So Marion, being a CAMO person, would you confirm that it is doing the same, or are they using a different formula for the bias?
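
In code, those two numbers are simply the mean and the (n-1) standard deviation of the deviations; a minimal NumPy sketch with made-up deviations:

    import numpy as np

    dev = np.array([0.3, -0.4, 0.2, 0.4, -0.3, 0.2, -0.4, 0.4, 0.3, -0.4, 0.3])  # predicted minus measured

    bias = dev.mean()            # average deviation: zero means no bias
    sep  = dev.std(ddof=1)       # standard deviation of the deviations, (n-1) divisor

    print(f"bias = {bias:.3f}, SEP = {sep:.3f}")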

Gabi

Marion Cuny (marion)
Junior Member
Username: marion

Post Number: 6
Registered: 6-2009
Posted on Tuesday, May 25, 2010 - 6:01 am:   

Hi,

I can try to give an answer for CAMO.

I think we say in our documents that this is an approximate relation.
In Martens and Næs (page 251) it is mentioned that the relation holds for an individual object,
but there is also the issue of dividing by (I-1) or I (which we do for RMSECV but not for RMSEC).
The formula shows why the SEP can be larger than the RMSEP:
SEP = sqrt( sum( (yhat_i - y_i - (yhatmean - ymean))^2 ) / (I-1) )
where (yhatmean - ymean) is the bias.

The interpretation of the bias is only relevant in the case of a test set. For cross validation the bias is always close to 0.

Regards,

Marion

--
Dr. Marion Cuny
Chemometrician

tel.: (+47) 22 39 63 01, www.camo.com/contact

CAMO Software "Inspired by Science"

Tony Davies (td)
Moderator
Username: td

Post Number: 229
Registered: 1-2001
Posted on Tuesday, May 25, 2010 - 5:34 am:   

Hello Leonardo!

Welcome to the Forum.

Before we get too involved can you give us a little more information?

What are your values for RMSEP, SEP and bias? How many samples in your validation set?

Best wishes

Tony

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 320
Registered: 9-2001
Posted on Tuesday, May 25, 2010 - 5:29 am:   

Leonardo - that's a question best asked of the software manufacturer (CAMO). Historically, different people used the same terms to mean different things (and vice versa). Some of these usages became encapsulated in the software packages that are available. The software manufacturers are sometimes reluctant to change their software to conform to what might otherwise be considered "standard" in the community, because they are concerned about confusing their existing customers and making their historical results incomparable with new results they might obtain.

Also, while you can formally show that

RMSEP^2 = SEP^2 + Bias^2

I think this should be discouraged. "SEP" has a meaning, and any statistician can tell you how to find confidence limits for that. Similarly, "Bias" has a meaning, and any statistician can tell you how to find confidence limits for THAT. For the RMSEP as defined above, however, you cannot find a confidence interval, because its distribution is unknown. This is all elementary Statistics, but unfortunately few people doing calibration work seem to bother with it.
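
For readers who want the intervals Howard refers to, a minimal sketch with SciPy (assuming the residuals are independent and approximately Normal; the n, SEP and bias values are illustrative):

    import numpy as np
    from scipy.stats import chi2, t

    n, sep, bias, alpha = 11, 2.08, -0.15, 0.05   # illustrative values
    dof = n - 1

    # Chi-square interval for a standard deviation (applied to the SEP)
    sep_lo = sep * np.sqrt(dof / chi2.ppf(1 - alpha / 2, dof))
    sep_hi = sep * np.sqrt(dof / chi2.ppf(alpha / 2, dof))

    # t interval for a mean (applied to the bias)
    half_width = t.ppf(1 - alpha / 2, dof) * sep / np.sqrt(n)

    print(f"SEP  95% CI: [{sep_lo:.2f}, {sep_hi:.2f}]")
    print(f"bias 95% CI: [{bias - half_width:.2f}, {bias + half_width:.2f}]")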

Howard

\o/
/_\

Ciaccheri Leonardo (leonardo)
New member
Username: leonardo

Post Number: 1
Registered: 5-2010
Posted on Tuesday, May 25, 2010 - 4:21 am:   

Hello,

I am a researcher from Italy and I am new to this discussion board. I am learning to use chemometric tools and I am a bit confused about RMSEP, SEP and Bias.
I have found in my books (for example "A User-Friendly Guide to Multivariate Calibration and Classification", NIR Publications) that SEP accounts for random errors, Bias accounts for systematic errors, and RMSEP is an estimate of the overall error. With the definitions given in my books, they are linked by the following formula:

RMSEP^2 = SEP^2 + Bias^2

My software (The Unscrambler), however, gives me a SEP larger than the RMSEP. At first I thought that The Unscrambler simply exchanged the roles of SEP and RMSEP. However, the values I get for SEP, RMSEP and Bias usually do not fulfil the above formula. Why?

Is it possible that The Unscrambler uses alternative definitions for the prediction errors?

Thanks for your assistance.

Leonardo Ciaccheri

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 317
Registered: 9-2001
Posted on Thursday, April 29, 2010 - 1:29 pm:   

David - you well know that in NIR, "bias" always means the average difference between the instrumentally measured values and the reference values, and does not have the more general statistical meaning of bias.

That said, we should also note that the Y-intercept is also known as the "constant term of the equation", and also as the "B0" term of the equation.

In my book "Principles and Practice of Spectroscopic Calibration" I show that under certain conditions (mainly when the particle size variation, or, as it's also known, the "repack variation", becomes large) the intercept can be expected to approach the mean value of the constituent. It turns out that this is one condition for minimizing the sum-squared difference between two sets of data, when the spectral values aren't enough to accommodate them.

\o/
/_\

David W. Hopkins (dhopkins)
Senior Member
Username: dhopkins

Post Number: 140
Registered: 10-2002
Posted on Thursday, April 29, 2010 - 11:56 am:   

Howard,

The term 'bias' is sometimes confusing. Does this refer to the Y-intercept value, or to the average of the predictions minus the average of the reference values? The Y-intercept can often be far from zero, and I am more concerned with the average deviation of the results from the expected value.

Thanks,
Dave

Bruno Bernuy (brunober)
New member
Username: brunober

Post Number: 2
Registered: 4-2010
Posted on Thursday, April 29, 2010 - 11:27 am:   

Thanks for your time and for such a generous answer.
Tomorrow I should have "Statistics in Spectroscopy" in
my hands.
best regards
BB

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 316
Registered: 9-2001
Posted on Wednesday, April 28, 2010 - 1:36 pm:   

Bruno - yes, this is a standard statistical t-test, although the equation is not quite in standard form. It is described in virtually every book on elementary Statistics. You could do worse than to read "Statistics in Spectroscopy", Elsevier (1991).
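
As a concrete illustration of the test as quoted, a minimal SciPy sketch (the bias, SEP and nval values are made up):

    import numpy as np
    from scipy.stats import t

    bias, sep, nval = -0.31, 7.54, 11       # illustrative validation-set values

    t_calc = abs(bias) * np.sqrt(nval) / sep
    t_crit = t.ppf(0.975, nval - 1)         # two-sided 95% critical value, n-1 degrees of freedom

    print(f"t = {t_calc:.2f}, critical t = {t_crit:.2f}")
    print("bias significant" if t_calc > t_crit else "bias not significant")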

There are a number of caveats that go into the interpretation of a t-test, though, including the fact that the residuals used to calculate the SEP should be random, independent and Normally distributed. If the residuals do not meet those criteria, then the probability points do not fall at the specified values in the standard tables that would normally be used for determining significance.

In practice, few NIR spectroscopists are aware of these details, and fewer bother to check them. To some extent, it is also true that a bias that is statistically significant is not necessarily of practical importance.

Because of these fairly vague areas of imprecision, in practice the strict statistical criteria have been replaced by rules of thumb, such as the "30% of RMSEP" rule you mention. There are others.

You can, if you want, generate a rule of thumb for your own use, from the following formula:

Let Y = the RMSEP for your data
Let B = the bias

Calculate YY = sqrt (Y^2 + B^2) for various values of B; this will give you a value for total error (YY) that combines the random and systematic parts. Set your allowable limit for bias (B) to be that value corresponding to the maximum value of YY you feel you can tolerate. Don't compare this to any statistical tables, however, since it will NOT have the proper distribution for such a comparison. An interesting exercise for you will be to graph YY versus B; it may help you pick a value to use for your rule of thumb.

You'll likely come up with a value near 30% anyway, I suspect. When the bias is less than the RMSEP, the SEP value will dominate the calculation, with little effect from the bias. When the bias is greater than the SEP, the bias will dominate the calculation and the value of YY you calculate will be close to the bias value. A value of 30% more than the SEP is in the region where both errors are comparable in their effect on the total, and there will be a range of values that have that property, so you will probably want to choose one of them.
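
A quick way to run that exercise, as a sketch (the RMSEP value Y is illustrative; a plot of YY against B can replace the printed table):

    import numpy as np

    Y = 2.08                                  # RMSEP for your data (illustrative value)

    for B in np.linspace(0.0, 2.0 * Y, 9):    # hypothetical bias values from 0 to 2*RMSEP
        YY = np.sqrt(Y**2 + B**2)             # combined total error as defined above
        print(f"B = {B:5.2f} ({100 * B / Y:5.1f}% of RMSEP)  ->  YY = {YY:5.2f}")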

Howard

\o/
/_\

Bruno Bernuy (brunober)
New member
Username: brunober

Post Number: 1
Registered: 4-2010
Posted on Wednesday, April 28, 2010 - 11:20 am:   

Dear list members;
I am trying to test the significance of the bias in my PLS models.
I found a Brazilian reference that mentions a t-test, which they describe as follows:

"First, an average bias is calculated for the validation set. Then the SEP is obtained. Finally, the t value is given by:

t = ( absolute_value (bias) * sqrt (nval) ) / SEP

where nval denotes the number of samples in the validation set.
If the calculated t is greater than the critical t value at the 95% confidence level, there is evidence that the bias included in the multivariate model is significant."

Do you agree with the concept?
I cannot find the original source. Do you know another source?

Thanks for any hint
BB

P.S.: I've read in other posts (and I am aware) that "one can also agree in advance to accept a 'bias' that is 30% of RMSEP" and that "this analysis does not make the NIR determinations more accurate".
