NIR substitution

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 64
Registered: 9-2001
Posted on Thursday, January 04, 2007 - 2:36 pm:   

I wasn't actively involved in the following, but I heard some of the stories:

The Coblentz Society, which is essentially the mid-IR cousin of the CNIRS, sponsored a project to measure mid-IR spectra of various materials. Mostly liquids, which is what was of interest to the mid-IR world at the time. I don't know the details, but eventually they came up with definitions for three "classes" of spectra:

Class "A" represents "the" spectrum of the material, without any influence of the measuring device, environment, or anything else.

Class "B" represents a superior spectrum, with as little influence from external considerations as could be achieved.

Class "C" spectra are ordinary research-quality spectra.

A spectrum not up to class C standard isn't even classified. I'm sure the more stringent minds would say that they're not worthy of consideration or anybody's time.

As far as I know, at that time no class A spectra were produced, and I believe no class A spectra have been produced to this day.

Class B spectra were extraordinarily difficult to measure, but many materials were eventually measured to that level, and form the compendium of mid-IR spectra that they (along with NIST) recently converted to electronic form and make available.

To address Tony's questions: I don't know if there's a "problem" or not. On the one side of the argument is the fact that we've been successfully applying NIR, for a long time, to "real-world" situations and benefitting people by so doing, and relying on the chemometrics to enable that for us.

On the other side of the argument is the fact that, after all, we're supposed to be scientists, and simply saying "it works" shouldn't be good enough for us. As scientists, we should want to be able to understand and control the systems we work with.

So at the bottom, it comes down to a matter of attitude, and how we see ourselves as the experts in our field.

\o/
/_\

Tony Davies (td)
Moderator
Username: td

Post Number: 137
Registered: 1-2001
Posted on Thursday, January 04, 2007 - 11:58 am:   

Instrumental differences

I have been trying to devise or discover a universal NIR standard for over 20 years, but very few others have been interested. Woody Barton's recent paper (JNIRS 14-6) demonstrated that 2D correlation can be used to show the differences between instruments, which is a useful advance, but it does not solve the problem.
I agree with Howard that photometric response is the more difficult one to handle; perhaps this is why most people ignore it. I wrote recently in NIR News (17.7): "There is no such thing as THE spectrum of sucrose ..." What we may have is a spectrum of sucrose measured on a particular spectrometer at a certain time. I expected a reaction, but so far nothing has happened!
Questions:
1) Do we agree that there is a problem?
2) Do we believe that it can be solved?
3) Are we going to collaborate to find the solution?

I do have a vested interest in this topic. We have (at last) published our paper on the method for CARNAC-D (JNIRS 14-6). I have great hopes that this is the future for NIR analysis. In this future we will not have calibrations but large databases of analysed samples which can be utilised by CARNAC. This cannot work unless we are able to tie all NIR spectra to a common standard.
Best wishes for 2007!

Tony

venkatarman (venkynir)
Senior Member
Username: venkynir

Post Number: 29
Registered: 3-2004
Posted on Wednesday, January 03, 2007 - 11:12 pm:   

Kenneth Gallaher

Thanks for your brief discussion. I do agree with you. But the off-line equipment should be rigid in its calibration and other related parameters.
I have read the prospectuses of various NIR-based moisture gauge transmitters. They claim ±0.1%, and it is very difficult to prepare samples well enough to verify a claim at that level.
For on-line measurement in particular, it is very difficult to draw up specifications for the claimed accuracy. But NIR will work properly over the intended span.

Kenneth Gallaher (ken_g)
New member
Username: ken_g

Post Number: 5
Registered: 7-2006
Posted on Wednesday, January 03, 2007 - 1:48 pm:   

"It's possible to create a calibration model that would be valid for several instruments, of the same type and for a given type of sample. There are two problems in implementing it, though:

It won't save you the need to measure the samples on all the instruments."

Depends on what the differences are and if you know how to correct them. On some analyzers a wavelength shift is correctable. Or a pathlength difference. If so this should be handled by preprocessing before any calibration or prediction.
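
For a known, constant shift, the preprocessing can be as simple as re-gridding by interpolation. A minimal numpy sketch (my own illustration, not any vendor's routine; it assumes an ascending wavelength axis and a known shift and pathlength ratio):

```python
import numpy as np

def correct_spectrum(wavelengths, absorbance, shift_nm=0.0, pathlength_ratio=1.0):
    """Map a slave spectrum onto the master wavelength axis and pathlength.

    shift_nm         : slave axis reads shift_nm higher than the master axis
    pathlength_ratio : slave cell pathlength / master cell pathlength
    """
    # Undo the wavelength shift by sampling the spectrum at the positions
    # the master instrument would have measured.
    corrected = np.interp(wavelengths, wavelengths - shift_nm, absorbance)
    # Beer-Lambert: absorbance scales linearly with pathlength.
    return corrected / pathlength_ratio
```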



"Ken - the problem with the efforts to fix the hardware, including the PioNIR and the FTIR instruments, is that they concentrate on the wavelength scale, without paying comparable attention to the absorbance (or reflectance, or energy) measurement. No question different technologies are easier to deal with the wavelength reproducibility issues than others, but even having done that, you've only addressed half the problem."

True, the wavelength shift is relatively easy to fix on some FT instruments (not all), but some vendors have gone well beyond that. We get into proprietary territory here, but suffice it to say it is optics and electronics. Done correctly, you subtract standard samples from multiple instruments and you are left with nothing but noise.

"Even using toluene for a standard only addresses the wavelength measurement issues. And I've become convinced over the years that the wavelength problem is by far the easier one."

Toluene addresses everything in a spectral subtraction.

"And I don't know that anyone has even THOUGHT about how to deal with the effects of changes in the atmospheric temperature/pressure/composition (humidity!) and how that might affect an instrument. Tony Davies wrote an article years and years ago, about how he found his readings changing in step with the air conditioner going on and off."

Umm, you thermostat the analyzer and dry the air, and compensate for barometric pressure if you must ... the other issues are for the most part academic.

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 63
Registered: 9-2001
Posted on Wednesday, January 03, 2007 - 12:03 pm:   

Nuno - what you said isn't COMPLETELY true. It's possible to create a calibration model that would be valid for several instruments, of the same type and for a given type of sample. There are two problems in implementing it, though:

1) It won't save you the need to measure the samples on all the instruments.

2) None of the commonly available software packages have provision to deal with the data in the necessary manner. By suitably fiddling with the data you could implement it on an unmodified MLR program, but hardly anyone uses MLR these days. A PLS or PCR program would have to be rewritten to accommodate it.

And you would still have to bias-correct the model for the individual instruments, although that could be done using the same data as for the calibration.


Ken - the problem with the efforts to fix the hardware, including the PioNIR and the FTIR instruments, is that they concentrate on the wavelength scale, without paying comparable attention to the absorbance (or reflectance, or energy) measurement. No question some technologies can deal with the wavelength reproducibility issues more easily than others, but even having done that, you've only addressed half the problem.

Even using toluene for a standard only addresses the wavelength measurement issues. And I've become convinced over the years that the wavelength problem is by far the easier one.

And I don't know that anyone has even THOUGHT about how to deal with the effects of changes in the atmospheric temperature/pressure/composition (humidity!) and how that might affect an instrument. Tony Davies wrote an article years and years ago, about how he found his readings changing in step with the air conditioner going on and off.

More recently, Peter Griffiths did a masterful exercise to check the wavelength calibration of NIST standards (published in JNIRS), using water vapor bands, and found discrepancies between his values and the NIST values. Discussions with Steve Choquette revealed that the NIST values were corrected to vacuum, while Peter's weren't; making the correction brought them into agreement. At these levels, all kinds of unexpected phenomena can influence the results. So while humidity can be dealt with easily enough if you're aware of it and can put a dry air or N2 flush on your instrument, when basic phenomena like the speed and wavelength of light change because the air pressure changes, that's not so easily dealt with.
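
To give a rough sense of scale (my own back-of-the-envelope figure, not a number from Peter's paper): the vacuum correction is nu_vac = nu_air / n_air, with n_air approximately 1.00027 at ordinary conditions, so a band read as 7000 cm-1 in air sits near 7000/1.00027, about 6998.1 cm-1, on the vacuum scale. That is a shift of almost 2 cm-1, enormous compared with the level of wavelength agreement being sought between instruments.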

And that's still only the easier part of the problem!!

Howard

\o/
/_\

Kenneth Gallaher (ken_g)
New member
Username: ken_g

Post Number: 4
Registered: 7-2006
Posted on Wednesday, January 03, 2007 - 10:55 am:   

Thanks for your good comments, as usual, Howard. Yes, stating the problem is a start, and I did that because many are still trying to solve a hardware problem with software, so it needs to be repeated.

Some have addressed the hardware problem, and some have done it quite well. One who addressed it was Perkin Elmer with their diode array analyzer, the PIONIR (I don't work for them and never did). I do not know how well that worked, but they surely tried. That analyzer is now sold by Sundstrand. A major problem with it is that it is limited to the third overtone and so has limited application.

I would argue that calibration transfer with a moving-grating analyzer is essentially impossible. No matter how well built, there is still variation from scan to scan and month to month due to grating motor slop and wear. If you doubt this, take 100 single (not co-added) scans of toluene or some other sharp-banded substance and take the standard deviation at each wavelength. You will see that the deviation is largest on the sides of peaks, because of subtle wavelength shifts from scan to scan. Because moving gratings are so much a part of NIR history, the whole technique is still often tarred by moving-grating weaknesses.
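
If anyone wants to run that test, a minimal numpy sketch (illustrative only; it assumes the scans are stacked into a scans-by-wavelengths array):

```python
import numpy as np

# scans: shape (100, n_wavelengths), single (not co-added) toluene scans
def scan_to_scan_noise(scans):
    """Per-wavelength standard deviation across repeated single scans."""
    return scans.std(axis=0, ddof=1)

# The noise trace peaks on the sides of sharp bands because, to first
# order, a small wavelength jitter d_lambda produces an absorbance error
# of roughly (dA/d_lambda) * d_lambda, largest where the slope is steepest.
```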

Diode array analyzers have no moving parts, but they are limited by the finite number of detection elements and by the vendor's ability and/or care in controlling detector and other optical positions from analyzer to analyzer, and in providing good temperature control so that alignment does not change.

FT instruments have the best chance for calibration transfer, but transfer is still by no means a given. Great care must still be taken. Some FT designs are prone to wear in the scanning mechanism; in others the detectors are too slow, or non-linear. In others insufficient care is taken in optical alignment, temperature control, or electronics design. I have seen FT instruments that were "the same" where the apertures were clearly different; it matters. It should also be noted that the end-user's optics can affect transfer: if the sampling optics are the limiting aperture, transfer will be affected.

If any vendor claims calibration transfer, get them to define what that means quantitatively, in a hardware performance test. At minimum, compare toluene (or other sharp spectra) on 3+ analyzers for wavelength and absorbance precision. And compare again after routine service: replacement of the source, laser, scanning motor, etc. An environmental temperature test is also in order. I seriously doubt any honest/knowledgeable vendor will claim the ability to transfer calibrations between vendors; they have no control over your old hardware or calibration.


"There are no optical standards that exist that are accurate or stable to that degree." True, but you can invent your own internally. Reagent-grade toluene (a pure compound and hence well defined) works when you use the same thermostated cell, or when you correct for slightly different pathlengths with different cells.

When one thinks about the cost of a "super-instrument", one needs to consider the whole cost of calibration, method certification, etc. For a difficult calibration the hardware is a small fraction of total project cost. These types of instruments have been commercially competitive, and in fact dominant, for 10+ years in some applications.

"To a large extent, one of the activities of the chemometric models is just to compensate for all these various changes in the instrument, as exhibited by their effect on the calibration data." Chemometrics will help with past hardware errors but not future ones, and at the cost of poorer calibration performance. It has become very clear to me that chemometrics is limited mainly by lack of information. The information limit is imposed by the hardware.

"But that only works while the effects themselves are linear." Actually, I have been amazed at how well PLS can do on non-linear data. Do a calibration on transmittance data some time. I don't recommend this; it's just true to a fair degree.

Nuno Matos (nmatos)
Senior Member
Username: nmatos

Post Number: 41
Registered: 2-2005
Posted on Wednesday, January 03, 2007 - 6:38 am:   

Dear all,

I have to agree with Howard. Reality shows that instruments from the same supplier vary significantly; "under control" has no parallel in reality. I had the chance to do calibration transfers between two instruments of the same model from the same vendor, and guess what: the calibrations gave different results. What I did was to linearly correct the result from each calibration, using spectra obtained from the same samples measured on both instruments. I do not know if this is the best or fastest way of doing calibration transfer, but at the time it was the only possibility due to software limitations. Now, imagine you have 20 NIRs and you want to replace them with 20 identical ones. Quite a heavy job, right? And this gets even worse if we consider that those NIRs are used for on-line monitoring.
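
A minimal sketch of that kind of linear (slope/bias) correction in numpy; this is my illustration of the idea, not the actual software involved:

```python
import numpy as np

def slope_bias_correction(y_master, y_slave):
    """Fit y_master ~ a*y_slave + b on predictions for a common sample set,
    so later slave-instrument predictions can be mapped onto the master scale."""
    a, b = np.polyfit(y_slave, y_master, 1)   # least-squares line
    return a, b

# usage: a, b = slope_bias_correction(pred_instr_A, pred_instr_B)
#        corrected = a * new_pred_instr_B + b
```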

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 62
Registered: 9-2001
Posted on Tuesday, January 02, 2007 - 9:44 pm:   

Ken has certainly summarized the technical issues regarding calibration transfer, succinctly. But stating a problem doesn't solve it - although it may be a good place to start.

Part of the reason that some of the issues Ken mentions are problems is that NIR instruments are so exquisitely sensitive and precise. It is necessary that they be so, or NIR would not be as valuable a technique as it is.

But the other side of that coin is that, as Ken says, the instruments are not in control, and possibly may never be able to be brought into control. From the beginning of the development of NIR as a broadly used analytical technique, the instrumental noise level and short-term precision have been on the order of a few microabsorbance. Translated into other terms, that means precision and short-term stability of one part in 100,000 to one part in 1,000,000.

No optical standards exist that are accurate or stable to that degree. NIST does not, and does not even try to, provide optical standards certified to within two orders of magnitude of that precision, because of the stability problem. In fact, there is only one other technique used in chemical applications that even comes close, and that is weighing. And there is only one other physical measurement that I know of at all that has better precision and accuracy, and that's the measurement of time (or, alternatively, frequency).

And without standards, how can anyone expect to bring or keep an instrument in control?

And all that is even without consideration of financial issues. In every other type of device (cameras, hi-fis, cars, telescopes, almost anything you can think of) the tendency is for the price to increase more-or-less exponentially with the quality. So even if we had the capability of building a "super-instrument" that could theoretically be brought into the type of control Ken indicates, it likely would not be something that could be made into a commercially competitive system.

If, in fact, the problem could be solved, so that two instruments could be made identical in all respects, then clearly the same calibration could be used on both of them. But in the real world, instruments are never the same, if for no other reason than manufacturing tolerances. One instrument may differ from another by only, say, one part in 10,000, which is a reasonable guesstimate of how well a manufacturer can reproduce all their instruments. Looking at that as a raw number, it looks pretty good. But taken in the context of NIR, we find that it's still two orders of magnitude worse than the ultimate precision and stability that's needed. So we should not be surprised that the 1 part in 10,000 difference becomes a major factor in causing calibrations to give different results on the different instruments.

Individual parts of the instrument have similar effects, for example the linearity (or non-linearity) of the detector(s). If the source energy changes (as it might with changes in temperature, alignment of the optics, voltage supplied to the source, aging of the lamp, etc.) then different portions of the detector's active range will be used, and the non-linearity, which again may differ by parts in 10,000 to parts in 100,000, will affect the ultimate result.

We all know that the chemometrics play a major role in giving us accurate results. But to a large extent, one of the activities of the chemometric models is just to compensate for all these various changes in the instrument, as exhibited by their effect on the calibration data, as well as for the changes in the sample. But that only works while the effects themselves are linear, and it is usually achieved by having coefficients that give equal and opposite contributions to the final result from the various wavelengths that are measured. Thus, the chemometrics act as a "noise rejection" filter, where the "noise" here is the sum total of all these extraneous effects that act on the instrument, and through it, on the data. The better this is done, the better the results are. But the other side of THAT coin is that the more suppression there is of the top-level variations, the closer the uncompensated variations are brought to the level at which their effects become noticeable.

I've ignored Ken's discussion of the problems with samples. Those are true enough, but when those sorts of issues arise, I think we have to say that when you have different samples you need different calibrations. We all know that, so I don't know why anyone should be surprised about it. Even if somebody thinks they're the same, or would like them to be the same, that has no bearing on the fact that under the conditions Ken describes, they're NOT the same and unless someone is very lucky, won't run on the same calibration. If we could solve the instrument problem, I think we'd be doing very well; then would be the time to worry about measuring different samples with the same calibration.

PS - regardless what the time-stamp is that the web site assigns to this message when you receive it, it's only about 10:30 PM EST on Jan 2, here!

Howard

\o/
/_\

Bruce H. Campbell (campclan)
Moderator
Username: campclan

Post Number: 92
Registered: 4-2001
Posted on Tuesday, January 02, 2007 - 7:34 pm:   

The recent comments about drift, etc., reminded me of software I had with a grating spectrometer. This software compared the spectrum of a standard, taken during analysis, with that of the same standard taken during calibration, and applied a correction factor. When tested over a period of years, the correction gave results that were more than acceptable. I was even able to use the correction to transfer the calibration from the grating spectrometer to an AOTF-based one. I don't know if this correction software is available anymore; I don't see it in The Unscrambler.
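
I don't know what that package did internally, but the usual single-standard corrections are a per-wavelength ratio or offset; here is a hypothetical sketch of both (the real software may have done either, or something more elaborate):

```python
import numpy as np

def standard_correction(sample, std_now, std_at_cal, additive=False):
    """Correct a sample spectrum using a standard measured at analysis
    time (std_now) versus at calibration time (std_at_cal)."""
    if additive:
        # per-wavelength offset correction
        return sample - (std_now - std_at_cal)
    # per-wavelength ratio (multiplicative) correction
    return sample * (std_at_cal / std_now)
```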

Kenneth Gallaher (ken_g)
New member
Username: ken_g

Post Number: 3
Registered: 7-2006
Posted on Tuesday, January 02, 2007 - 3:14 pm:   

I see the calibration transfer issue as two different issues being confounded.
One is the spectrometer hardware. If the hardware is not the same over time (the term "drift" is often used), then the hardware is not under control. If one cannot change a serviceable part such as a light bulb without the results changing, then the hardware is not under control. If many analyzers that are "the same" give different results due to wavelength shifts, resolution differences, or non-linearity, then the hardware is not under control. All of these issues are verifiable INDEPENDENT of any calibration, and checking them should be a first step in any analysis. If one needs to make "drift" corrections, bias corrections, etc., one has to ask why; it's a hardware problem.

Historically these conditions have not been well met in NIR hardware, and many early analyses were successful because the spectra were broad, the variations observed were large, and recalibration was easy. For many instruments the "bias correction" was a daily ritual. As one pushes the technique, tries to transfer calibrations, does complex calibrations where a univariate bias correction is not enough, and faces recalibrations that are unacceptably expensive, the demands on the hardware grow and grow.

Now a separate issue is whether a calibration developed at plant A is applicable in plant B. That depends on the samples, and on whether they are really the same. For example, my experience is that gasoline/mogas calibrations are only roughly transferable, because the sample is a highly varying complex blend. Within a single refinery as many as 12-15 separate RON calibrations may be needed to get the desired results, because they make so many grades for different markets and seasons. On the other hand, I have seen reformate calibrations (a single refinery stream) transfer quite well out of the box.

The confusion arises in folks' minds because they observe calibration transfer failure only through the failure of a specific application, which does not point to the source of the problem. Is it the hardware? Is it the highly variant sample? Or perhaps is it poorly selected calibration parameters (whose selection might be helped by properly understanding the answers to the first two questions)? These answers only come by handling the problem piecewise: first assuring stable hardware, second understanding the sample well, and lastly doing good chemometrics, and perhaps spectral preprocessing, based on everything you know about hardware behavior and the sample.

David Russell (russell)
Member
Username: russell

Post Number: 30
Registered: 2-2001
Posted on Saturday, December 23, 2006 - 1:41 pm:   

The only scenario I can imagine for replacing 20 units after 3 yrs is that the original instruments had a technology limitation that needed to be overcome.

The application would, additionally, have to have a significant payback, whether in improved safety, environmental protection, or direct savings.

As Howard mentioned in his original reply, most NIR instruments last way more than 3 years.

venkatarman (venkynir)
Advanced Member
Username: venkynir

Post Number: 21
Registered: 3-2004
Posted on Thursday, December 21, 2006 - 11:43 pm:   

NIRS is based on an inferential technique, and a global calibration model may not be suited, for two reasons. Look at developing countries, where raw material utilization still has a long way to go (e.g. paper manufacturing industries in India use gunny, waste materials, and whatever other resource materials are available). In such cases the global model fails. NIR calibration is still a tedious task. I do agree that in a homogeneous environment and industry such calibrations and models work perfectly.

Lez Dix (lez_dix)
New member
Username: lez_dix

Post Number: 5
Registered: 10-2006
Posted on Thursday, December 21, 2006 - 5:42 am:   

Firstly I have to state a vested interest here. I work for a company that supplies NIR analysers.

I am aware of at least one multinational company that uses NIR analysers across its entire group (approx. 30 analysers) via a global calibration model; they are multi-product, multi-parameter. They have a default NIR analyser but do have NIR analysers from other manufacturers, though not by choice. Their default position is to plan rolling replacements of their NIRs on a 7-year basis. Incoming instruments have to be cloned before use, due to the technology those analysers use. They have a system of standard samples sent around the group, with spectra collected and corrections applied. This system works well, but it has been designed around that technology. However, with some FT-NIR analysers the need for cloning is significantly reduced, if not removed, and drift is not such an issue either. Filter instruments are not really suited to this idea, except that maybe taking out the drift factor is the thinking behind the 3-year replacement.

Nuno Matos (nmatos)
Senior Member
Username: nmatos

Post Number: 40
Registered: 2-2005
Posted on Thursday, December 21, 2006 - 2:32 am:   

First of all let me thank you for your replies.

Howard:
This is something I heard about. Supposedly it is a procedure followed in a certain company. We can also discuss whether the measure is right or wrong, but at this point I think this is a good point and a nice challenge for us to think about. So let's assume that the 20 NIRs need to be replaced every 3 years.

Venkatarman:
That is another interesting point. There are ways to test the ongoing validity of the models; you don't need calibration transfer to do this. You can use the reference method (whatever it may be) to test the error of prediction with a certain frequency.

Michel:
You present two options. The first assumes that all the NIRs are used on the same product. Then, using an independent set of samples, one can transfer the calibration models with only a linear correction. I have done that already, and it generally works, with one problem: you should assess the conformity of a spectrum before interrogating it with the calibration model (EMEA guidelines), so you use some discriminative tool. However, a spectrum from another NIR will probably step outside the "conformity"; in other words, it will move away from the "center" of the cluster. The test is biased. How can you transfer a discriminative model?
The second option you present would be to use a standardization algorithm on a standard spectrum. Which algorithm do you suggest? And should the spectrum used be that of an international standard (e.g. a NIST standard), or a standard from the process you are monitoring (an internal standard)?
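
For concreteness, one common conformity statistic is Hotelling's T-squared in a PCA subspace; a minimal sketch (my own illustration; the EMEA guideline does not prescribe a particular statistic):

```python
import numpy as np

def hotelling_t2(X_cal, x_new, n_pc=5):
    """Score a new spectrum against the calibration spectral space:
    PCA on mean-centered calibration spectra, then Hotelling's T^2."""
    mu = X_cal.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_cal - mu, full_matrices=False)
    P = Vt[:n_pc].T                          # loadings (p x n_pc)
    lam = s[:n_pc] ** 2 / (len(X_cal) - 1)   # score variances
    t = (x_new - mu) @ P                     # scores of the new spectrum
    return np.sum(t ** 2 / lam)

# A spectrum from a second instrument will often show an inflated T^2
# even when its predicted value is acceptable, which is exactly the
# bias problem described above.
```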

Michel Coene (michel)
Senior Member
Username: michel

Post Number: 42
Registered: 2-2002
Posted on Thursday, December 21, 2006 - 1:43 am:   

Solving this problem needs to be started before you take the first analyser into service. Select an instrument provider which has experience in calibration transfer. Consider a standardisation algorithm where you first convert the spectra to an instrument-independent form, based on some physical standards you have scanned on the instrument. If the 20 NIRS are used on the same product(s), join their spectra into a "global" model, and then perform a slope/bias adjustment for each instrument based on just a few samples. There are consultancy companies which can set up a network of NIRS instruments and deal with all this. (I can give you one, but surely there must be more.)
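
Michel doesn't name the standardisation algorithm; one published option is direct standardization (Wang, Veltkamp and Kowalski, Anal. Chem. 1991), sketched here under the assumption that the same transfer standards have been scanned on both instruments:

```python
import numpy as np

def direct_standardization(S_master, S_slave):
    """Find F such that S_slave @ F ~ S_master, where each matrix holds
    the transfer standards as rows (standards x wavelengths)."""
    F, *_ = np.linalg.lstsq(S_slave, S_master, rcond=None)
    return F

# afterwards, map any slave spectrum x with x @ F before feeding it to
# the master-instrument model; a per-instrument slope/bias adjustment
# can then follow, as Michel suggests.
```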

venkatarman (venkynir)
Intermediate Member
Username: venkynir

Post Number: 20
Registered: 3-2004
Posted on Wednesday, December 20, 2006 - 11:43 pm:   

Howard Mark may well be right: a robust and rugged calibration should be provided at the instrument's development stage itself. But I have my doubts about instruments being used for 20 years or even more, as he says. My understanding is that periodic calibration, and transfer of calibration standards, is what helps on-line instruments keep meeting the required measurement range.

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 61
Registered: 9-2001
Posted on Wednesday, December 20, 2006 - 7:10 pm:   

The first question I would ask is why in the world a company would want to do that, especially when it unnecessarily causes just these sorts of problems. The instruments last a lot longer than 3 years. Some of the original crop of Technicon instruments, for example, have been in use for 20 and even 30 years and are still going strong.

\o/
/_\

Nuno Matos (nmatos)
Senior Member
Username: nmatos

Post Number: 39
Registered: 2-2005
Posted on Wednesday, December 20, 2006 - 11:28 am:   

Dear all,

Imagine a company with 20 NIR instruments used for on-line monitoring. Imagine that the company replaces those instruments every 3 years. What about the calibration models? Calibration transfer, or calibration of new models?
