How are detection limits determined in NIR?


There is a variant of the discussion on secondary calibrations, for the case where one has a direct response to the species of interest at high concentrations. The range is then extended so it approaches the detection limit. Does the calibration go from a direct response to a mixed response (direct plus secondary) and finally to a secondary response? If so, how would one determine whether that is happening? And what would it do to the detection limits? (I am using the definition of the detection limit as three times the standard deviation of replication.)

This also may be a question on detection limits; i.e., is the signal-to-noise ratio the only/major limitation on detection limits? What effect does wavelength reproducibility have on detection limits? Does the noise become non-normally distributed, which would change the mathematical relationship between noise and detection limits?
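The 3-sigma definition mentioned above is straightforward to compute. A minimal sketch, using invented replicate readings purely for illustration:

```python
import statistics

# Hypothetical replicate readings (% analyte) of one low-concentration
# sample; the values are invented for illustration only.
replicates = [0.052, 0.048, 0.055, 0.047, 0.051, 0.049, 0.053, 0.050]

sigma = statistics.stdev(replicates)  # standard deviation of replication
lod = 3 * sigma                       # detection limit per the 3-sigma definition

print(f"sigma = {sigma:.4f}, LOD = {lod:.4f}")
```

Note that this captures only the replication noise; as the discussion below points out, other error sources would also have to enter a true detection limit.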

Bruce - very good questions: nobody has considered the concept of detection limits as such in NIR. Some of the reasons I think I know; others I can make a good stab at. As usual, many of them have to do with the historical development of the technique: it was not developed by analytical chemists, so the idea of a detection limit never arose.

Add to this the legitimate fact that the validity of an NIR analysis is defined only over the range of the calibration samples used (for good reasons, as we all know). Since a real analysis never (or at least almost never; in any case I have never heard of anybody doing this) goes all the way down to zero concentration, the concept of a "detection limit" as such (i.e., how to distinguish a sample containing a minimal amount of analyte from one that contains none) simply doesn't apply to the situation. Does anybody know of a real case where the range of concentrations went all the way down to zero? I do NOT include contrived sample sets such as my own water-methanol-acetic acid mixtures (where all the constituents ranged from 0-100%) as falling into this category.

Theoretically, in addition to pure random noise, there are a number of other phenomena that would affect the detection limit, some of which you mentioned. Beyond those, there is what I will call the "statistical noise", for want of a better term: the instability of the calibration model itself due to the existence of noise and error in the constituent values. While we all know that this contributes to the total measured error of an NIR analysis, few non-statisticians appreciate the effect of these errors on the calibration itself: how a different set of samples with the same overall noise characteristics would give rise to a different model, solely due to differences in the details of how the individual errors are distributed among the samples.

This "statistical" model variation is separate from, and in addition to, the real (and more obvious) variations due to drift, wavelength non-reproducibility, etc., but all of these would contribute to the total error that would have to be taken into account when determining the true detection limit.
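The "statistical noise" idea above can be demonstrated with a small simulation: repeatedly generate sample sets that all have the same noise level in the reference values, fit a simple univariate calibration to each, and watch the fitted coefficients vary. This is a minimal sketch; the linear relation, noise level, and sample layout are all invented for illustration, not taken from any real NIR data.

```python
import random

random.seed(0)

# Assumed underlying linear relation between response x and concentration y.
TRUE_SLOPE, TRUE_INTERCEPT = 2.0, 0.5
xs = [0.1 * i for i in range(1, 21)]  # fixed measurement points

def fit_ols(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Each simulated sample set has the SAME overall noise characteristics
# (Gaussian, sigma = 0.05) in the constituent values, yet the fitted
# model differs each time, solely because of how the individual errors
# happen to fall on the individual samples.
slopes = []
for _ in range(500):
    ys = [TRUE_INTERCEPT + TRUE_SLOPE * x + random.gauss(0, 0.05) for x in xs]
    slope, _ = fit_ols(xs, ys)
    slopes.append(slope)

mean_slope = sum(slopes) / len(slopes)
spread = (sum((s - mean_slope) ** 2 for s in slopes) / len(slopes)) ** 0.5
print(f"mean slope = {mean_slope:.3f}, slope spread across sample sets = {spread:.3f}")
```

The nonzero spread of the slope across equally noisy sample sets is exactly the model instability described above, and it contributes to the total error alongside instrumental effects.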

On top of it all, some of these are affected by methodology: the contribution of instrument drift, for example, would be affected by how often, and how carefully, bias and skew checks were done and the variations corrected. The better this was done, the lower the detection limit would be, even for the same calibration model, noise level, etc.

As far as I know, except for some old work of my own (1,2), nobody has done any research into the effect of wavelength variation, non-normal data, or any of those other phenomena on the nature of the calibrations produced. I guess the advent of PCA/PLS calibration methodologies, and their promotion as "the magic answer to all possible calibration problems", simply drew all attention away from the fact that real analytical chemistry could be done with NIR, if anybody would only bother.

I guess the bottom line here is that, like most other aspects of NIR, it is not a simple question. Or rather, it is a simple question, but the answer gets complicated because of the way the calibration interacts with the hardware, the sample sets and everything else, so that it becomes impossible to separate out one factor and study it in isolation, as is normally done in scientific studies. And few people have the time, inclination and/or resources to tackle it properly.

As far as the nature of the calibration goes, if the range is high enough that you're pretty sure you are using a direct response to the species of interest, then the mere fact that the range includes zero would not change the nature of the calibration, assuming you're using one calibration for the whole range. On the other hand, if you use "range splitting", with different calibrations for the high and the low ends, then it may very well happen that the low-end calibration finds and keys in on a secondary characteristic. That case would be no different from any other calibration for low concentrations, and the same considerations you use there would apply.
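The "range splitting" arrangement described above can be sketched as two separate calibration functions selected at prediction time. Everything here is hypothetical: the coefficients, the cutoff, and the screening logic are invented purely to show the structure, not any real NIR calibration.

```python
# Hypothetical range-split calibration: separate models for the low and
# high concentration ranges, chosen by a screening prediction.
LOW_CUTOFF = 1.0  # % analyte; assumed boundary between the two calibrations

def predict_low(absorbance):
    # Low-range model; as noted above, such a model may in fact be
    # keying in on a secondary characteristic rather than a direct response.
    return 0.4 + 1.8 * absorbance

def predict_high(absorbance):
    # High-range model, presumed to use a direct response.
    return 0.5 + 2.0 * absorbance

def predict(absorbance):
    # Screen with the full-range (high) model, then re-predict with the
    # low-range model if the sample appears to fall below the cutoff.
    estimate = predict_high(absorbance)
    return predict_low(absorbance) if estimate < LOW_CUTOFF else estimate

print(predict(0.1), predict(1.5))
```

The practical consequence is the one stated above: whatever validation and detection-limit considerations apply to any low-concentration calibration apply to the low-range branch here.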


1) Mark, H. & Workman, J.; Spectroscopy; 3(11), p.28-36 (1987)

2) Mark, H.; Appl. Spect.; 42(8), p.1427-1440 (1988)