Absolute Referencing in Remission

NIR Discussion Forum » Bruce Campbell's List » Equipment » Absolute Referencing in Remission


Jakob Schultz (schultz)
New member
Username: schultz

Post Number: 4
Registered: 9-2005
Posted on Friday, September 09, 2011 - 11:02 am:   

Howard,

Thanks for the explanation. It is very helpful.

/jakob

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 456
Registered: 9-2001
Posted on Tuesday, September 06, 2011 - 2:24 pm:   

Jakob - I couldn't easily find what I was looking for, but it's so simple and straightforward (if a little messy because of the proliferation of terms) that it makes more sense just to recreate it here:

We collect data as transmission (or reflectance, depending on the experimental setup, but that doesn't affect the math). The result is a set of spectral data (T1, T2, ... Tn), each of which is the result of dividing the sample reading at each wavelength by the reference reading at the same wavelength:

T1 = S1 / R1 (Eq 1a)
T2 = S2 / R2 (Eq 1b)
.....
Tn = Sn / Rn (Eq 1c)

If there's a change in the reference reading, say because the reference got dirty, then the reference values all change by the same constant factor. Then the new transmittance values are:

T'1 = S1 / (R1 * e) (Eq 2a)
T'2 = S2 / (R2 * e) (Eq 2b)
.....
T'n = Sn / (Rn * e) (Eq 2c)

where e represents the fractional change in the reference reading.

If we were to use the original data in a calibration model, where b represents the calibration coefficients of the model, the predicted value (P) is:

P = b0 - b1 Log(T1) - b2 Log(T2) - ... - bn Log(Tn) (Eq 3)

(the negative signs come from the fact that absorbance, which is the quantity used in calibration models, is -log(T))

Expanding those to look at the effects of the measured data:

P = b0 - b1 Log(S1 / R1) - b2 Log(S2 / R2) - ... - bn Log(Sn / Rn) (Eq 4)

Expanding this expression, and noting that log (S/R) = log(S)-log(R):

P = b0 - b1*Log(S1) + b1*log(R1) - b2*Log(S2) + b2*log(R2) - ... - bn*Log(Sn) + bn*log(Rn) (Eq 5)


Now, that's all before the reference reading changed. After the reference reading changed, equation 5 becomes:

P = b0 - b1*Log(S1) + b1*log(R1 * e) - b2*Log(S2) + b2*log(R2 * e) - ... - bn*Log(Sn) + bn*log(Rn * e) (Eq 6)

Note that since only the reference changed, there's no change in the sample reading, or the logarithm of the sample reading.

Again we note that log(Ri * e) = log(Ri) + log(e); therefore we apply that substitution to each of the terms of the equation:

P = b0 - b1*Log(S1) + b1*(log(R1) + log(e)) - b2*Log(S2) + b2*(log(R2) + log(e)) - ... - bn*Log(Sn) + bn*(log(Rn) + log(e)) (Eq 7)

We now distribute each coefficient across the terms in parentheses:

P = b0 - b1*Log(S1) + b1*log(R1) + b1*log(e) - b2*Log(S2) + b2*log(R2) + b2*log(e) - ... - bn*Log(Sn) + bn*log(Rn) + bn*log(e) (Eq 8)


Now we rearrange the terms, to bring all the terms involving log(e) to the right-hand end of the expression, which we write on a separate line:


P = b0 - b1*Log(S1) + b1*log(R1) - b2*Log(S2) + b2*log(R2) - ... - bn*Log(Sn) + bn*log(Rn) (Eq 9, first line)

+ b1*log(e) + b2*log(e) + ... + bn*log(e) (Eq 9, second line)


Here's the key to the whole shebang: note that the expression on the first line of equation 9 is term-by-term identical to equation 5.

The second line of equation 9 (which can also be rewritten as log(e) * (b1 + b2 + ... + bn)) is composed of a bunch of "variables" that are actually constants, because their values, once measured or computed, do not change. Therefore that expression, once evaluated, can simply be added to the b0 term of the calibration model (giving a new constant term, which we will call b0'), and the final result is:


P = b0' - b1*Log(S1) + b1*log(R1) - b2*Log(S2) + b2*log(R2) - ... - bn*Log(Sn) + bn*log(Rn) (Eq 10)


Therefore, we see that equation 10, which is the same as equation 5 with a different b0 (constant) term, gives us the same result after the change in the reference as equation 5 did before the change. This confirms my previous statement that when the logarithm transform is used, only the constant term of the calibration model need be changed to correct for the effect of a change in the reference reading. A caveat here is that the change must itself be constant, i.e., after the reference changes once, it cannot change any more.
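For anyone who wants to check this numerically, here is a short sketch (my own illustrative values and variable names, not from any instrument) showing that multiplying every reference reading by the same factor e only shifts the prediction by the constant log(e) * (b1 + ... + bn) from equation 9:

```python
import math

# Illustrative (made-up) readings at three wavelengths
S = [0.42, 0.31, 0.55]   # sample readings
R = [0.95, 0.93, 0.96]   # reference readings (before the change)
b = [1.2, -0.7, 0.4]     # calibration coefficients b1..bn
b0 = 0.1                 # constant term of the model
e = 0.9                  # fractional change in the reference

def predict(b0, b, S, R):
    # Eq 3/4: P = b0 - sum(bi * log(Si / Ri)), since absorbance = -log(T)
    return b0 - sum(bi * math.log10(si / ri) for bi, si, ri in zip(b, S, R))

P_before = predict(b0, b, S, R)
P_after = predict(b0, b, S, [r * e for r in R])

# Eq 9, second line: the difference is exactly log(e) * (b1 + ... + bn)
shift = math.log10(e) * sum(b)
print(P_after - P_before, shift)  # the two numbers agree (up to rounding)
```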

Howard

\o/
/_\

Jakob Schultz (schultz)
New member
Username: schultz

Post Number: 3
Registered: 9-2005
Posted on Tuesday, September 06, 2011 - 8:39 am:   

Hi Howard

Do you have a reference where the effect of using the log(1/R) transform instead of reflectance is shown in more detail?

/jakob

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 428
Registered: 9-2001
Posted on Wednesday, May 25, 2011 - 6:37 pm:   

Gabi - in an ideal world, when you shine light at any given wavelength on a sample, the amount reflected could vary between 0 (zero, when all the light is absorbed) and 100% (when all the light is reflected). The reflectance is defined as the amount of energy remitted from the sample divided by the amount of energy impinging on the sample. (For simplicity's sake we'll ignore spatial and angular variations of the reflectance.) For practical reasons (it's very difficult to measure both the light energy impinging on a sample and the amount reflected) we measure "reflectance" by comparing the amount of light energy reflected from the sample with the light energy reflected from a highly reflecting material, which thus serves as a surrogate for the measurement of the light impinging on the sample.

It would be useful to be able to measure the amount reflected, as a fraction of the light at the given wavelength impinging on the sample, since (we are still in the ideal world) that would be an inherent intensive property of the sample. In this same ideal world, we would be able to determine that by measuring the response of a detector when exposed to light after reflection from a sample, and to light from two "standards" reflecting 0% and 100%. If you know the true reflectances of your "standards" you could use those to correct the measured reflectance of your sample to an "absolute reflectance" basis.

In the real world, there exist no known materials that absorb 100% or that reflect 100%. Therefore, even under the best conditions, the accuracy of the "reflectance" measurement depends on the accuracy of knowledge of the reflectance of the two "standards".

Therefore, in answer to Don's question, there's not really very much hardware needed to determine the absolute reflectance, most of what's necessary can be done in software. Some instruments, particularly the more sophisticated ones, have "dark" and "white" references built in, and the necessary software to automatically measure them and make the appropriate corrections to the sample data. However, for those concerned about these things, the necessary measurements can always be made manually, simply by measuring the "dark" and "white" standards whenever samples are being measured. The calculations and the software needed to make the corrections to the sample data are near-trivial.
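To make the "near-trivial" claim concrete, the two-point dark/white correction can be sketched in a few lines (my own illustrative function and names; it assumes a linear detector response):

```python
def corrected_reflectance(sample, dark, white, r_white=1.0, r_dark=0.0):
    """Correct a raw detector reading to a reflectance scale.

    sample, dark, white: detector readings for the sample and the two standards
    r_white, r_dark:     the known (assigned) reflectances of the standards
    """
    # Linear interpolation between the dark and white readings
    frac = (sample - dark) / (white - dark)
    return r_dark + frac * (r_white - r_dark)

# e.g. a sample reading halfway between the two standards, with a white
# standard of assigned reflectance 0.99:
# corrected_reflectance(0.5, 0.0, 1.0, r_white=0.99)  -> 0.495
```

As noted above, the accuracy of the result still hinges entirely on how well r_white and r_dark are known.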

All this still leaves a couple of holes in the process, however. First is that the true reflectances of the "standards" are usually unknown. With the possible exception of a small number of academicians, this discrepancy is usually ignored.

What is considered more important is consistency of readings, especially over time. As long as the reflectances of the two "standards" are constant, then the sample readings, referenced to those two "standards" will be constant as well, and therefore usable for calibration purposes, which is the ultimate goal of the whole exercise.

Here's where the log (1/R) transform comes in, along with the reason it is critical for the practical usage of NIR spectroscopy.

No, it is not the theoretically correct transform of reflection data from solid powders.

No, it does not ALWAYS allow for spectra to be subtracted, despite the fact that the subtraction can be done more often than we would expect from theoretical considerations.

But the use of log(1/R) has a property that is all-important to the practical usage of quantitative NIR, and that property is this:

Suppose you measure the spectra of a set of samples, and perform a calibration; the calibration model always has the form:

C = b0 + b1*A1 + b2*A2 + b3*A3 + ...

where:
C = the computed concentration of the analyte in the samples
bi = the calibration coefficient for the ith wavelength
Ai = the absorbance of the sample at the ith wavelength.

All this, of course, took a lot of time, effort and resources to determine. Suppose, now, that you've been using this calibration model successfully for some period of time and then something happens; let's say that someone drops the white standard on the floor and it gets dirty. What do you do? There are several possibilities. You can ignore the dirt and continue to use the standard, you can try to clean it (including, for example, removing the surface layer with sandpaper, if the standard is Spectralon), or you can buy a new standard from the instrument manufacturer. No matter what you do, the new white standard will have a different reflectance than the old one (or than the standard before it got dirty). What is the effect of this on the results from your calibration model?

It turns out that if you used the log(1/R) transform, and only if you used the log(1/R) transform, when you work through the equations, you find that the effect on the model of changing the reflectance of the reference is to add a constant value to the b0 term of the calibration equation.

Therefore, the seemingly disastrous accident that changed the reflectance of the "standard", that is used under the assumption that it's constant, can be easily compensated for by measuring some small number (say, 10) of samples with known concentration values, calculating the average error of the new predicted values, and performing a "bias correction" by changing the b0 term of the model.
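The bias correction itself amounts to a few lines of arithmetic; here is a sketch (function and variable names are my own, for illustration only):

```python
def bias_correct(b0, predicted, known):
    """Return the adjusted constant term b0'.

    b0:        the model's current constant term
    predicted: model predictions for the check samples (with the new reference)
    known:     the corresponding known (reference-method) concentration values
    """
    # Average prediction error introduced by the change in the reference
    errors = [p - k for p, k in zip(predicted, known)]
    bias = sum(errors) / len(errors)
    # Folding the average error into the constant term removes the bias
    return b0 - bias

# e.g. with ~10 check samples of known concentration:
# b0_new = bias_correct(b0, model_predictions, lab_values)
```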

By using any transformation other than log(1/R), an accident of the sort described WOULD have been a disaster, because it would have required the user to completely recalibrate the instrument.

THIS is the reason log(1/R) is universally used in NIR spectroscopy. Yes, that data transformation has other useful properties, but it is the ease of recovery from disaster that gives it the status it has, of being the sole data transformation used for quantitative analysis.

\o/
/_\

Gabi Levin (gabiruth)
Senior Member
Username: gabiruth

Post Number: 59
Registered: 5-2009
Posted on Wednesday, May 25, 2011 - 5:05 am:   

Hi Donald,

I am not sure what absolute referencing in remission means. However, if you refer to, say, a Spectralon-type high-reflectance "so-called" standard that is built into the spectrometer and can be used for "baseline" correction, i.e., collecting signal from this reference and storing the intensity of light at each wavelength for subsequent calculation of "transmittance" or "reflectance", then many instruments do include such a "standard".
However, there is at least one spectrometer that does not require such a device, because it is a real-time dual-beam instrument: the light sent to the sample is at a selected wavelength, not white light, and on the way to the sample it is sampled by a reference detector and measured at each wavelength in real time while scanning the spectrum.
If this is what you are looking for, I will be glad to send more info on the dual-beam arrangement if you write to me at
[email protected]

Peter Tillmann (tillmann)
Advanced Member
Username: tillmann

Post Number: 21
Registered: 11-2001
Posted on Wednesday, May 25, 2011 - 4:45 am:   

My understanding is: "no" they don't. A few do, but most don't.

For daily use a "rugged" reference is needed, typically opal glass, gold standard or some other standard. In some software these references are traced back to a NIST standard and the reference scan is corrected accordingly.

From a practical viewpoint this is of little importance, since the design of the light-sample-detector interface limits every calibration model to one model of instrument, or at least to similar instruments; e.g., a difference in the sample-to-detector distance will give different absorption spectra.

For individual instruments of a given model, the manufacturer will be keen to have a standard that is "spectrally reproducible", so differences between standards will not cause too much trouble in calibration transfer.

So from a practical viewpoint there is little benefit in absolute referencing in remission. In transmission things are clearly different.


Peter

Donald J Dahm (djdahm)
Senior Member
Username: djdahm

Post Number: 61
Registered: 2-2007
Posted on Wednesday, May 25, 2011 - 3:49 am:   

Do most instruments come equipped with the hardware and/or software to do absolute referencing in remission?
In the eyes of users, is this an important feature?
