
NIR Discussion Forum » Bruce Campbell's List » I need help » Bruker S/N


Phil James
Posted on Tuesday, April 20, 2004 - 12:23 pm:   

Has anybody got any signal-to-noise data on Bruker's MPA instrument?

Schmitt
Posted on Tuesday, October 19, 2004 - 6:24 am:   

I have no specific data...

However, the noise is quite significant with this instrument (the Bruker MPA).
When you apply a 2nd derivative to an MPA spectrum, you see mostly noise...

The spectral repeatability of the MPA is lower than that of the FOSS 5000.

David Russell (Russell)
Posted on Tuesday, October 19, 2004 - 7:17 am:   

How is the 2nd derivative being calculated?
Some vendors implement it poorly and accentuate the noise.

hlmark
Posted on Tuesday, October 19, 2004 - 9:46 am:   

I'd like to comment on Dave's posting. There are many ways to calculate a second derivative (or a derivative of any order). I don't think it's fair to characterize any of them as "good" or "bad", although they definitely behave differently.

The analytic mathematical definition of a derivative clearly emphasizes the high-frequency components of whatever signal is the subject of the derivative process. That makes the derivative tend to emphasize the noise, since noise shows up at high frequencies.

There are several ways that various programs try to reduce that tendency. One way, which is implemented in the FOSS software as well as others, is to compute the derivative from data spaced several data (wavelength) increments apart; this emphasizes the signal and therefore increases the S/N.

Another way is to average several adjacent data points together, which reduces the noise (by the square root of the number of points averaged). The two procedures can also be combined.

Another way is to use the Savitzky-Golay convolution functions; this both averages the data from adjacent points and includes contributions to the derivative from data points separated by several increments.
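
For illustration only (this is not any vendor's actual routine), here is a minimal Python/NumPy sketch of the three approaches just described; the gap, segment, and window sizes are arbitrary choices:

    import numpy as np
    from scipy.signal import savgol_filter

    def gap_second_derivative(y, gap):
        # Second difference computed from points 'gap' increments apart
        # (the spaced-point style of derivative described above).
        y = np.asarray(y, dtype=float)
        d2 = np.full(len(y), np.nan)
        d2[gap:-gap] = y[2 * gap:] - 2.0 * y[gap:-gap] + y[:-2 * gap]
        return d2

    def segment_averaged_gap_derivative(y, gap, segment):
        # Average 'segment' adjacent points first (noise drops by ~sqrt(segment)),
        # then take the gap derivative; the two procedures combined.
        y_smooth = np.convolve(np.asarray(y, dtype=float),
                               np.ones(segment) / segment, mode="same")
        return gap_second_derivative(y_smooth, gap)

    def sg_second_derivative(y, window=11, polyorder=2):
        # Savitzky-Golay: fit a local polynomial and return its second derivative.
        return savgol_filter(y, window, polyorder, deriv=2)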

There is a general tendency that, the more you improve the S/N behavior of any order derivative, the more the calculated derivative departs from the underlying analytic mathematical value. So whether a particular method of calculation is "good" or "bad" depends on your purpose. For quantitative analysis, it is generally found that good S/N performance is more important than conforming to the "true" (that is the analytic mathematical) derivative.

Dave Hopkins has been investigating the behavior of derivatives and the results from different ways of calculating them, for a long time. He can probably tell you more about this. He will probably disagree with some of my statements in the previous paragraph, though.

I don't know what Bruker does to calculate derivatives, and if it's not in their manual then we may not be able to find out. But there is a "trick" that Dave Hopkins showed me that can help. If you can prepare a data file containing what engineers call an "impulse function" (a file where all the values are zero except that one data point in the middle has the value unity), then putting this file through the derivative algorithm will shed light on what the algorithm is doing.
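
A sketch of that "trick" in Python, with Savitzky-Golay merely standing in for whatever unknown routine you are probing:

    import numpy as np
    from scipy.signal import savgol_filter

    # An "impulse" file: all zeros except a single 1.0 in the middle.
    impulse = np.zeros(101)
    impulse[50] = 1.0

    # Push it through whatever derivative routine you want to characterize;
    # savgol_filter is only a stand-in for the unknown vendor algorithm here.
    response = savgol_filter(impulse, 11, 2, deriv=2)

    # The non-zero span of the output is the effective convolution kernel,
    # revealing the window width and weights the routine is really applying.
    span = np.flatnonzero(np.abs(response) > 1e-12)
    print(span, response[span])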

Howard

\o/
/_\

David W. Hopkins (Dhopkins)
Posted on Tuesday, October 19, 2004 - 1:18 pm:   

Howard, Dave, and all,

I fundamentally agree with Howard and Dave. I think your Bruker can give you good results, and you may want to work with a different resolution to improve your S/N.

However, I do have a problem with talking about an "analytic" derivative for anything other than a continuous function. When we have digitized data, we are always faced with the task of estimating the derivatives. It is a mistake to take a 2-point forward difference and think you are estimating the "analytic" derivative. You will always do better with 3 or more points in a convolution function, and as soon as you employ more points, you get the benefit of averaging.

The use of more points does two things: it reduces the high-frequency noise by the square root of the number of readings in the average, and, by using points that are further separated, it allows a better estimate of the signal, as Howard describes. This is a kind of leverage effect: the outer points contribute better estimates than the inner points.

The Savitzky-Golay convolutes arrive at the weighting factors for all the points in the interval by a numerical process that fits a curve and takes the true analytic derivative of the fitted curve. But the closeness of the estimate still has to be judged by looking at the results, seeing how much the curves have been broadened by the selected method, and choosing the method that seems to be the best compromise between smoothing and broadening.

Elsewhere I have presented the use of the RSSC statistic on convolution factors to give the total effect on high-frequency noise reduction in the results. See the Derivatives Thread under the General area in the Discussion Group. I have published this also in NIR News Vol 12 No. 3 (2001) and the Korean Journal, Near Infrared Analysis 2(1), 1-13 (2001).
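
As a hedged sketch of the idea (the window widths are arbitrary, and savgol_coeffs simply supplies some convolution factors to apply the statistic to):

    import numpy as np
    from scipy.signal import savgol_coeffs

    def rssc(coeffs):
        # Root Sum of Squared Coefficients: the factor by which a convolution
        # scales the standard deviation of white noise in the input.
        return np.sqrt(np.sum(np.asarray(coeffs, dtype=float) ** 2))

    # Example: high-frequency noise amplification of Savitzky-Golay
    # second-derivative convolutes of a few (arbitrary) window widths.
    for window in (5, 11, 25):
        print(window, rssc(savgol_coeffs(window, polyorder=2, deriv=2)))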

I hope this helps clear up the discussion.

Dave Hopkins

hlmark
Posted on Tuesday, October 19, 2004 - 2:08 pm:   

I agree with Dave Hopkins' assessment of the "true" analytic mathematical derivative: it's a mathematical construct that really only applies to mathematical functions. I think there's a pretty good argument that can be made that a sampled function does not have a derivative, because if you calculate the difference between two adjacent data points, that "derivative" applies only in that interval. The next interval will have a different "derivative" and there will be a discontinuity at the data point itself: presumably the place you want to know the derivative at.

That said, the underlying spectrum is presumably continuous and will have a continuous derivative, but we don't know what that is. At best, any attempt to compute the derivative will result only in an estimate of its value at any point, whether or not noise is present. Noise complicates the situation, but does not change its basic nature. All the different methods of computing the derivative produce different estimates: some suppress the noise more, while others provide better estimates of the underlying "true" derivative at the expense of increased noise. As Dave H. said, averaging tends to broaden the spectral peaks while it reduces the noise level. Both of these considerations have to be taken into account, and the "best" derivative depends on your goals and your purpose in doing the project in the first place.

\o/
/_\

Bob Limon
Posted on Wednesday, October 20, 2004 - 4:58 pm:   

While I could not find anything on the Bruker instrument specifically, I found this note that addresses the problem in more general terms. It shows that dispersive grating-based instruments such as the Foss units outperform FT instruments like the Bruker with regard to signal-to-noise.

To get repeatable measurements from an instrument, you must have low noise (high S/N ratios). If you are making a quantitative measurement:

% constituent = K0 + K1 [f(wavelength) + photometric noise]

hence, the noise contributes directly to the imprecision of the measurement. The effect of the noise depends on the application and on the size of K1, which can range from about 10 to 3000. To maintain a measurement precision on the order of 0.05% absolute, the photometric noise must be on the order of 10-20 µAU.
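
A small, purely illustrative simulation of that single-wavelength argument; K0, K1 and the absorbance below are assumed values, not from any real calibration:

    import numpy as np

    K0, K1 = 10.0, 2500.0        # %, %/AU; K1 somewhere in the quoted 10-3000 range
    noise_sd = 20e-6             # 20 micro-AU of photometric noise
    true_absorbance = 0.5        # AU, arbitrary

    rng = np.random.default_rng(0)
    readings = true_absorbance + rng.normal(0.0, noise_sd, size=10_000)
    predictions = K0 + K1 * readings

    print(predictions.std())     # simulated precision, about 0.05 % absolute
    print(abs(K1) * noise_sd)    # the same number analytically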

Random noise on spectra controls the repeatability error or precision of any calibration. Normally, the precision error is much smaller than the error in accuracy. In these cases, the random error has only a small effect on the accuracy. However, if the precision error approaches the error in accuracy, then the random error will affect the accuracy as well.

Noise is less important for identification of sufficiently different materials, but it can affect the number of samples required for the library. FT-NIR instruments often require 10-15 samples for identification, whereas a good dispersive instrument can use 3. When you need to discriminate closely related materials or obtain quality information, a more rigorous distance algorithm is needed. Here you are looking at small differences in peak height, and these will be more affected by noise. This means that, with its worse noise, an FT instrument will be less able to distinguish closely related materials or a lower-level contaminant.

Instrument Performance (Polystyrene Spectra)
Instrument (scans) / Range (nm) / Average Std. Dev. / RMS Difference
Bomem (10) / 1100-2200 / 292 / 160
Perkin Elmer (10) / 1100-2200 / 293 / 1490
Foss (32) / 1100-2450 / 44 / 13
Foss (1) / 1100-2450 / 60 / 90

This table shows noise values for several instruments, with data taken on repeat polystyrene scans. The "Average Std. Dev." column summarizes the standard deviation of the polystyrene peaks; the "RMS Difference" column shows the RMS of the difference between two repeat spectra. The Foss dispersive spectra are clearly less noisy, by a factor of 10-100, than the Bomem and Perkin Elmer Michelson-based interferometers. This makes the dispersive instruments clearly the more repeatable.

hlmark
Posted on Wednesday, October 20, 2004 - 9:09 pm:   

To expand on Bob Limon's discussion: his formula applies to single-wavelength calibration models. When you're using a multiwavelength model you can characterize the situation this way:

The corresponding multivariate equation is:

%C = K0 + K1*(A1+N1) + K2*(A2+N2) + ... + Kn*(An+Nn)

where:
%C represents the predicted constituent value

The Ki represent the coefficients of the model

The Ai represent the "true" absorbances at each wavelength

The Ni represent the noise of the absorbance measurement

I have shown back in 1991 that with these definitions, the noise in the prediction of the constituent value is:

Nc = N * sqrt( sum(Ki^2) )

where:

Nc is the standard deviation of the predicted values due to the noise.

N represents the (overall) standard deviation of the noise of the absorbance at each wavelength. This presumes that the S.D. of the noise at all the wavelengths is the same. If they are not the same, a somewhat more complicated formula is needed.

The summation is taken over all the calibration coefficients EXCEPT K0.
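
A quick numerical check of the formula with invented coefficients and absorbances (none of these numbers come from a real calibration):

    import numpy as np

    K0 = 5.0
    K = np.array([120.0, -85.0, 40.0, -10.0])   # hypothetical K1..Kn
    A = np.array([0.45, 0.62, 0.30, 0.55])      # hypothetical "true" absorbances
    N = 20e-6                                   # same noise S.D. at every wavelength

    rng = np.random.default_rng(1)
    noise = rng.normal(0.0, N, size=(100_000, K.size))
    predictions = K0 + (A + noise) @ K

    print(predictions.std())                    # Monte Carlo estimate of Nc
    print(N * np.sqrt(np.sum(K ** 2)))          # Nc = N * sqrt( sum(Ki^2) )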

The rest of Bob's comments contain good advice.

Howard

\o/
/_\

hlmark
Posted on Wednesday, October 20, 2004 - 9:17 pm:   

Addendum to my posting: I stated that A represented the absorbance. Oftentimes a data transform is applied to the absorbance data before computing a calibration model (e.g., a first or second derivative). In this case, the equations I posted apply equally well to the transformed data, as long as the "noise" S.D. is the S.D. of the noise in the transformed data.

\o/
/_\

David W. Hopkins (Dhopkins)
Posted on Thursday, October 21, 2004 - 11:10 am:   

Howard and Bob,

Where are you located, Bob?

I would like to tie together what Bob and Howard have said. For transformations that can be expressed as convolution functions, as I have discussed above, the noise in the transformed data is RSSC * the noise in the absorbance data. Therefore the noise of the predictions from a calibration using a multivariate regression of transformed variables is RSSC(cal) * RSSC(conv) * (SD of the noise in absorbance), where RSSC(cal) is ICE, or the RSSC calculated for the regression coefficients as Howard describes above, and RSSC(conv) is calculated for the convolution coefficients as I have described above. I just lifted the derivation of RSSC from Howard's book and gave it a name that suggests its derivation: Root Sum of Squared Coefficients.
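
A sketch of that combined estimate, assuming an 11-point Savitzky-Golay second-derivative pretreatment and invented regression coefficients; every number here is illustrative:

    import numpy as np
    from scipy.signal import savgol_coeffs

    def rssc(c):
        # Root Sum of Squared Coefficients, as above.
        return np.sqrt(np.sum(np.asarray(c, dtype=float) ** 2))

    conv_coeffs = savgol_coeffs(11, polyorder=2, deriv=2)   # the pretreatment convolution
    reg_coeffs = np.array([120.0, -85.0, 40.0, -10.0])      # hypothetical model coefficients
    sigma_abs = 20e-6                                       # absorbance noise S.D.

    # Estimated prediction noise: RSSC(cal) * RSSC(conv) * SD of absorbance noise
    print(rssc(reg_coeffs) * rssc(conv_coeffs) * sigma_abs)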

Dave

Bob Limon
Posted on Thursday, December 02, 2004 - 12:37 pm:   

To get back to Phil James' original question, a colleague in Germany told me that Bruker is claiming noise of 20 µAU, while Foss says theirs is 2 µAU. It looks like the Foss instrument is 10 times better. I guess grating instruments really are better in this regard.

suchin
Posted on Thursday, December 02, 2004 - 12:53 pm:   

An additional question on the comparison of S/N between FT and grating instruments: does the resolution setting on an FT instrument affect the final S/N? I mean, is it possible to get different S/N results at low resolution (e.g., 64 cm-1) and at high resolution (2 or 4 cm-1)? If the answer is yes, then how do we compare the S/N between FT and grating instruments on a comparable set-up?

hlmark
Posted on Thursday, December 02, 2004 - 2:26 pm:   

Suchin - yes, it's been known since the beginning of FTIR that increasing the resolution of the FTIR spectrum decreases the S/N ratio. The same is true for dispersive instruments, by the way. Nobody notices that fact with the modern dispersive instruments available, however, since very few of them give the user a way to adjust the spectral resolution.

The problem of comparing FTIR with dispersive instruments involves even more fundamental issues than that, however: specifically, the fact that in FTIR instruments the resolution is constant in wavenumbers, while in dispersive instruments it is constant in wavelength. Since wavelength and wavenumber are inversely related, if you measure on one type of instrument and try to compare with the other, you find that the effective bandwidth changes with the square of the wavelength (inversely with the square of the wavenumber) as you go across the spectrum. The bandwidth affects the energy throughput, and that relationship therefore makes it extremely difficult to get comparable readings over more than a very small portion of the spectrum, if it can be done at all.
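
A small worked example of that bandwidth relationship in Python; the 8 cm-1 resolution is just an assumed setting:

    # A constant resolution in wavenumbers corresponds to a wavelength bandwidth
    # that grows as the square of the wavelength: d(lambda) = lambda^2 * d(nu),
    # with lambda in cm and nu in cm-1.
    def bandwidth_nm(wavelength_nm, resolution_cm1):
        wavelength_cm = wavelength_nm * 1e-7
        return wavelength_cm ** 2 * resolution_cm1 * 1e7   # convert back to nm

    for wl in (1000, 1700, 2500):          # across the NIR region
        print(wl, bandwidth_nm(wl, 8.0))   # 8 cm-1 is 0.8 nm at 1000 nm but 5 nm at 2500 nm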

Howard

\o/
/_\

Stephen Medlin (Medlin)
Posted on Friday, December 03, 2004 - 9:51 am:   

One comment on the S/N of an FT instrument (and I've seen Bruker data on this): the FT processing parameters can make a dramatic difference in the resulting S/N. Specifically, you might want to evaluate the phase correction and/or the apodization function. The phase correction can certainly change the resulting S/N (try Mertz vs. Power, for example).

The earlier comments regarding resolution are also true. During feasibility assessments, I normally collect data at several different resolutions to see what minimum resolution is required. There is no need to go to a higher resolution than the application requires, since the result is lower S/N without any significant gain in model accuracy.

Phil James
Posted on Wednesday, December 22, 2004 - 7:24 am:   

Stephen,
Can you share some of the Bruker data that you've seen?
Phil
