Water in Drug Powder

Ralf Marbach (ralf)
Junior Member
Username: ralf

Post Number: 7
Registered: 9-2007
Posted on Tuesday, May 19, 2009 - 3:22 pm:   

Howard:

>> The spectrum of PURE water is well-known. But just as well-known is the fact that the
>> spectrum changes due to interactions with ANYTHING

Exactly, that's my point; that's why spectroscopic expertise is needed and better than statistical correlation. The shape of the water response spectrum is application-specific anyway because of differences in the spectrometer, sampling optic, and scattering properties of the sample. And in my opinion, only spectroscopic expertise and application knowledge can tell how much of the (pre-processed) spectrum is linear and time-invariant, i.e., useful for measurement.

Tony:

>> There was almost no correlation between moisture in flour and their absorption at
>> 1940nm (the KNOWN absorption maximum of water)

Sorry, misunderstanding. Univariate is hopeless here given the baseline fluctuations, we agree. I meant multivariate measurement based on whole spectral regions of the water response spectrum.

Best,

Ralf

Tony Davies (td)
Moderator
Username: td

Post Number: 189
Registered: 1-2001
Posted on Tuesday, May 19, 2009 - 3:41 am:   

Ralf,

Tom Fearn demonstrated the need for multiple regression techniques in 1986 (the Osborne and Fearn book). There was almost no correlation between moisture in flour and its absorption at 1940 nm (the KNOWN absorption maximum of water). The data have to be corrected for the (unknown) effects of particle size; MLR is the quickest way to find the correction, as the sketch below illustrates.
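
As a crude illustration of the idea, a synthetic two-wavelength sketch in Python (not the flour data; the 2100 nm reference point and every coefficient are made up for demonstration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 60
    moisture = rng.uniform(10, 16, n)               # % water, synthetic
    baseline = rng.uniform(-0.2, 0.2, n)            # particle-size offset, different per sample
    a_1940 = 0.02 * moisture + baseline + rng.normal(0, 0.002, n)  # water band
    a_2100 = 0.30 + baseline + rng.normal(0, 0.002, n)             # reference wavelength, no water signal

    # Univariate: the 1940 nm absorbance alone correlates only weakly with moisture
    print(np.corrcoef(a_1940, moisture)[0, 1])

    # MLR on both wavelengths lets the regression find the baseline correction itself
    X = np.column_stack([a_1940, a_2100, np.ones(n)])
    b, *_ = np.linalg.lstsq(X, moisture, rcond=None)
    print(np.corrcoef(X @ b, moisture)[0, 1])
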
Best wishes,

Tony

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 230
Registered: 9-2001
Posted on Monday, May 18, 2009 - 7:27 pm:   

Ralf - I have to disagree with your statement:

"..., because the spectral response of the water is well known"

The spectrum of PURE water is well-known. But just as well-known is the fact that the spectrum changes due to interactions with ANYTHING: dissolved impurities, intended solutes, surfaces (including suspended contaminants), temperature, pH (and differently for different anions), etc.

\o/
/_\

Ralf Marbach (ralf)
Junior Member
Username: ralf

Post Number: 6
Registered: 9-2007
Posted on Monday, May 18, 2009 - 6:21 pm:   

Dan,

>> The data set consists of about 220 samples,
>> 150 are used for calibration and 70 are used
>> for validation. What is troubling me is that
>> the RMSEC (0.15) is greater than the RMSEP
>> (0.10)

There is a beautiful paper, T.W. Anderson, Asymptotic theory for principal component analysis, Ann. Math. Stat. 34, 122-148 (1963). The key statement is also given in the book by Mardia, Kent, Bibby (1979), p. 230. The multivariate case is 1:1 to the univariate case in that, when you estimate a standard deviation, the standard deviation of the standard deviation is 1/sqrt(2*n) of the true standard deviation, where n is the number of independent samples. This is an approximation, but numerically accurate for n > 10 and thus useful in virtually all practical cases. I love this tool because it is so much easier than the more complicated testing procedures described elsewhere.

So, following Anderson, the 2-sigma confidence interval of your RMSEP number is:
0.10 +/- 2 x 0.10/sqrt(2*70) = 0.10 +/- 0.017
and the 2-sigma confidence interval of your RMSEC number is:
0.15 +/- 2 x 0.15/sqrt(2*150) = 0.15 +/- 0.017
This would indicate that the two are indeed different, for some reason. However, we know that this is unlikely. You said (in the other thread, "RMSEP vs RMSEC") that the data sets were balanced in terms of range and distribution, and in practice we know that any real difference is then quite unlikely to exist. In my view, the most likely explanation is therefore as follows. Your error is dominated by some physical or chemical effect in the spectra (likely to do with particle size) for which you do not have 150 and 70 independent realizations, but fewer, maybe only around 10. (Any suspects? DoE?) In other words, I believe the difference between your two numbers, 0.10 and 0.15, is just random, because the formulas above have to be computed not with n=150/n=70, but with n=10..20. Not sure if you can share any details, but that's my guess. If interested, you could probably get lots of feedback on possible physical-effect culprits from this group.
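
In code, the whole calculation is a few lines (a minimal sketch of the arithmetic above; the last two lines show what happens if the effective n is only about 10):

    import math

    def ci2(rmse, n):
        # Anderson: the sd of an estimated sd is ~ sd/sqrt(2*n) for n independent samples
        return 2 * rmse / math.sqrt(2 * n)

    print(f"RMSEP: 0.10 +/- {ci2(0.10, 70):.3f}")    # 0.10 +/- 0.017
    print(f"RMSEC: 0.15 +/- {ci2(0.15, 150):.3f}")   # 0.15 +/- 0.017
    # If the dominant error source has only ~10 independent realizations:
    print(f"RMSEP: 0.10 +/- {ci2(0.10, 10):.3f}")    # 0.10 +/- 0.045
    print(f"RMSEC: 0.15 +/- {ci2(0.15, 10):.3f}")    # 0.15 +/- 0.067 -- the intervals now overlap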

Regards,

Ralf
VTT Optical Instrument Center
MTT Multantiv
Finland

PS: BTW, my contribution here does not mean that I condone the use of PLS, or other statistical calibration methods, for this application. Measurement of water in the NIR is a prime example of an application that belongs to the field of measurement science, not statistics, because the spectral response of the water is well known. If it is not necessary to measure and rely on a population of calibration standards, then why do it? The optimum calibration can, and should be, determined directly from spectroscopic know-how.

Richard Kramer (kramer)
Junior Member
Username: kramer

Post Number: 10
Registered: 1-2001
Posted on Saturday, May 09, 2009 - 9:35 am:   

Quoting NIR Discussions <[email protected]>:


>Posted by Bruce H. Campbell (campclan) on Friday, May 08, 2009 - 9:38 pm:

>The following is from Dan Miller.

>I am using Unscrambler to develop a PLS calibration to predict water content in a formulated drug powder. The data set consists of about 220 samples, 150 are used for calibration and 70 are used for validation. The NIR spectral data were pre-processed using MSC; several different pre-processing techniques were used, but MSC resulted in the best RMSEP in a reasonable number of PLS factors (3). The RMSEP is about 0.1, which is reasonable, given that the reference technique (Karl Fischer titrimetry) has a comparable precision, about 0.1 % w/w.

If I understand your description correctly, you have chosen a particular calibration as "best" based on the SEPs when "validating" each candidate calibration with 70 "independent" "validation" samples. I use quotation marks in the previous sentence because if you use "validation" samples for the purpose of choosing the "best" calibration, then these "validation" samples are NOT validation samples. Having contributed to the decisions involved in picking the "best" calibration, these samples are simply calibration samples used in a way which is different from the way the other 150 samples were used.

>What is troubling me is that the RMSEC (0.15) is greater than the RMSEP (0.10). This makes no sense to me, as I'd expect that the "error" with unknown samples would be greater than that within a calibration set. I've looked for outliers, and haven't found anything particularly worrisome (I avoid dropping data points unless there is a good justification). I've also scrutinized the calibration and validation data sets, and didn't see any samples that had large y-residuals in the calibration set (which would tend to exaggerate RMSEC).

If I have correctly understood the way you chose the winning calibration, these results are not particularly surprising, since the described procedure could easily choose as the "winner" the calibration which happened to best overfit the noise in the "validation" samples. When a "winning" calibration is chosen as you describe, that calibration must subsequently be validated with another, completely independent set of validation samples. If the "validation" samples were used to choose the "winner," then in every sense of the word the calibration cannot yet be said to have been validated.
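
A sketch of the distinction, in scikit-learn terms with purely illustrative split sizes and placeholder data:

    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(220, 200))    # placeholder spectra
    y = rng.uniform(1, 5, 220)         # placeholder reference values

    # Hold out a validation set that is touched exactly once, at the very end
    X_work, X_valid, y_work, y_valid = train_test_split(X, y, test_size=70, random_state=0)

    # Split the rest into calibration and model-selection sets; all comparisons
    # of pre-processing and factor counts happen on the selection set only
    X_cal, X_sel, y_cal, y_sel = train_test_split(X_work, y_work, test_size=50, random_state=0)

    # ...fit candidates on (X_cal, y_cal), pick the winner on (X_sel, y_sel),
    # then report one RMSEP from (X_valid, y_valid) and stop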

Also, you did not mention how you segregated calibration and "validation" samples. If the "validation" samples are distributed throughout the calibration space differently than the calibration samples, then it is easily possible to get the kind of results you describe.

It is not uncommon to observe that calibrations with poorer SEC tend to be the ones which deliver better SEP, and it is also not uncommon to observe that SEC and SEP are statistically indistinguishable (often the case when the reference values are poorer than the underlying ability of the multivariate calibration). When SEP is statistically significantly better than SEC, that almost always means that something is wrong.

In general, I would recommend approaches which provide more validation samples than calibration samples.

You might want to have a look at ASTM E2617 "Standard Practice for Validating Empirically Derived Multivariate Calibrations".

Richard

>Perhaps the answer is that the above RMSEC and RMSEP values are similar enough that I shouldn't worry about it (i.e., one should only be concerned when there is a gross difference in the RMSEC and RMSEP values).

>-Dan

Jerry Jin (jcg2000)
Member
Username: jcg2000

Post Number: 11
Registered: 1-2009
Posted on Friday, May 08, 2009 - 3:16 pm:   

It is possible that the concentrations of your test samples lie very close to the mean value of the training samples. In that case, the RMSEP comes out smaller.

How did you choose the training data and the testing data? If you used random selection, you are likely to be happy with your results.
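
One standard way to see why is the univariate least-squares analogue, where the standard error of a new prediction is smallest at the training mean and grows with the leverage term (x - xbar)^2/Sxx. A toy sketch (sigma and the x-range are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 150)        # calibration x-values, synthetic
    sigma = 0.15                       # residual error of the fit, arbitrary
    n, xbar, sxx = x.size, x.mean(), ((x - x.mean()) ** 2).sum()

    def pred_se(x_new):
        # standard error of a new prediction from a univariate least-squares fit
        return sigma * np.sqrt(1 + 1 / n + (x_new - xbar) ** 2 / sxx)

    print(pred_se(xbar))      # at the training mean: smallest
    print(pred_se(x.max()))   # larger at the edge; the gap widens for small n or narrow sxx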

Jerry Jin

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 227
Registered: 9-2001
Posted on Friday, May 08, 2009 - 3:08 pm:   

The RMSEP and RMSEC are reasonably similar, but what about the ranges? If the validation set has an appreciably smaller range, then that might point to a clue. And what are the correlation coefficients, and other global statistics between predicted and reference values, for the two data sets?
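
Something like the following, with your own reference and predicted values in place of the placeholders, would show both at a glance (a minimal numpy sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    y_ref = rng.uniform(1, 5, 70)              # placeholder reference values
    y_hat = y_ref + rng.normal(0, 0.1, 70)     # placeholder predictions

    def summarize(y_true, y_pred, label):
        r = np.corrcoef(y_true, y_pred)[0, 1]
        print(f"{label}: y-range {y_true.min():.2f}..{y_true.max():.2f}, r = {r:.3f}")

    summarize(y_ref, y_hat, "validation")      # repeat with the calibration set's vectors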

\o/
/_\

Bruce H. Campbell (campclan)
Moderator
Username: campclan

Post Number: 115
Registered: 4-2001
Posted on Friday, May 08, 2009 - 2:38 pm:   

The following is from Dan Miller.

I am using Unscrambler to develop a PLS calibration to predict water content in a formulated drug powder. The data set consists of about 220 samples, 150 are used for calibration and 70 are used for validation. The NIR spectral data were pre-processed using MSC; several different pre-processing techniques were used, but MSC resulted in the best RMSEP in a reasonable number of PLS factors (3). The RMSEP is about 0.1, which is reasonable, given that the reference technique (Karl Fischer titrimetry) has a comparable precision, about 0.1 % w/w.
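
The shape of that computation, as a minimal Python/scikit-learn sketch on placeholder data (the MSC here is the textbook regress-each-spectrum-on-the-mean-spectrum version, which may differ in detail from Unscrambler's):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = np.linspace(0, 1, 200) + 0.05 * rng.normal(size=(220, 200))  # placeholder spectra
    y = rng.uniform(1, 5, 220)                                       # placeholder water content, % w/w

    def msc(spectra, reference=None):
        # multiplicative scatter correction: regress each spectrum on the
        # (calibration) mean spectrum, then remove slope and offset
        ref = spectra.mean(axis=0) if reference is None else reference
        out = np.empty_like(spectra)
        for i, s in enumerate(spectra):
            slope, offset = np.polyfit(ref, s, 1)
            out[i] = (s - offset) / slope
        return out, ref

    X_cal, X_val, y_cal, y_val = X[:150], X[150:], y[:150], y[150:]
    Xc_cal, ref = msc(X_cal)
    Xc_val, _ = msc(X_val, reference=ref)   # validation corrected against the calibration reference

    pls = PLSRegression(n_components=3).fit(Xc_cal, y_cal)
    rmsec = np.sqrt(np.mean((pls.predict(Xc_cal).ravel() - y_cal) ** 2))
    rmsep = np.sqrt(np.mean((pls.predict(Xc_val).ravel() - y_val) ** 2))
    print(rmsec, rmsep)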

What is troubling me is that the RMSEC (0.15) is greater than the RMSEP (0.10). This makes no sense to me, as I'd expect that the "error" with unknown samples would be greater than that within a calibration set. I've looked for outliers, and haven't found anything particularly worrisome (I avoid dropping data points unless there is a good justification). I've also scrutinized the calibration and validation data sets, and didn't see any samples that had large y-residuals in the calibration set (which would tend to exaggerate RMSEC).

Perhaps the answer is that the above RMSEC and RMSEP values are similar enough that I shouldn't worry about it (i.e., one should only be concerned when there is a gross difference in the RMSEC and RMSEP values).

-Dan
