Instrument to instrument transfer

NIR Discussion Forum » Bruce Campbell's List » Calibration transfer » Instrument to instrument transfer


Peter Tillmann (tillmann)
Senior Member
Username: tillmann

Post Number: 30
Registered: 11-2001
Posted on Thursday, January 17, 2013 - 8:40 am:   

The answer to such a question is (as always) it depends.

We have been doing this kind of calibration transfer for 15 years with other instruments, and for 5 years with Bruker instruments as well. We have MPA and Matrix-I instruments in our network and use a common calibration model. We do succeed in transferring our models between Bruker instruments. The pitfalls are not the instruments themselves.

Of course this doesn't mean you will always be fortunate with a transfer, because it depends on many other things.

Peter

Sirinnapa Saranwong (mui)
Member
Username: mui

Post Number: 15
Registered: 10-2006
Posted on Thursday, January 17, 2013 - 6:27 am:   

Bruker provides calibration transfer tools of various kinds across its range of instruments. In many cases you do not have to do a calibration transfer between Bruker instruments (MPA, Matrix, Tango) at all.

This is quite a common procedure. Your local Bruker staff should be able to assist you with this issue. Do you have their contact numbers?

Gabi Levin (gabiruth)
Senior Member
Username: gabiruth

Post Number: 82
Registered: 5-2009
Posted on Thursday, January 17, 2013 - 5:22 am:   

This seems to be a question almost as old as the technology - but since it was asked about a specific product from a specific manufacturer, the burden of proof resides with the manufacturer.
On basic grounds:
If both Bruker instruments are FT, the calibration can be transferred from one to the other over the range of wavenumbers that overlap. If the calibration for the first instrument covers X to Y wavenumbers and the second instrument covers X to Z, where Z > Y, then you should have an "easy" transfer, using the same resolution of course and the range from X to Y, leaving the range Y to Z out of the scanning range. If Z < Y, then you need to find out whether the calibration on the first instrument holds well for the reduced range X to Z; if it does (the result could be somewhat worse than for the full range, but still useful), then you should again have an "easy" transfer.
If one instrument is FT and the other is dispersive, it is a whole different story. If Bruker doesn't provide you with a reliable transfer function, then you either have to create a new calibration from scratch or buy a transfer from people who sell such transfers - I can provide information about a research centre in Belgium that can do that for payment, but it takes time and patience to work with it, even after the transfer.
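To make the overlap point concrete, here is a minimal sketch (plain NumPy, not Bruker/OPUS code - the arrays, spacing and variable names are only illustrative) of trimming two instruments' spectra to their common wavenumber range before one model is applied to both:

import numpy as np

def common_range(wn_a, spectra_a, wn_b, spectra_b):
    # Keep only the wavenumbers covered by BOTH instruments.
    lo = max(wn_a.min(), wn_b.min())
    hi = min(wn_a.max(), wn_b.max())
    keep_a = (wn_a >= lo) & (wn_a <= hi)
    keep_b = (wn_b >= lo) & (wn_b <= hi)
    return wn_a[keep_a], spectra_a[:, keep_a], wn_b[keep_b], spectra_b[:, keep_b]

# Made-up example: instrument A covers 4000-12500 cm-1 (the "X to Y" case),
# instrument B only 4000-10000 cm-1, so both are cut back to 4000-10000 cm-1.
wn_a = np.arange(4000.0, 12501.0, 8.0)
wn_b = np.arange(4000.0, 10001.0, 8.0)
spectra_a = np.random.rand(5, wn_a.size)   # 5 spectra from instrument A
spectra_b = np.random.rand(3, wn_b.size)   # 3 spectra from instrument B
wn_a2, sa2, wn_b2, sb2 = common_range(wn_a, spectra_a, wn_b, spectra_b)
print(wn_a2.min(), wn_a2.max(), sa2.shape, sb2.shape)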

I hope this helps a little

Gabi Levin

Tony Davies (td)
Moderator
Username: td

Post Number: 293
Registered: 1-2001
Posted on Thursday, January 17, 2013 - 4:39 am:   

Hello Vaithilingam,

Welcome to the group.

The first answer is: ask Bruker! It will obviously need some software to match up the wavenumbers, and of course it would only be possible for the wavenumbers common to both instruments.

Best wishes,

Tony

Vaithilingam (nvlingam)
New member
Username: nvlingam

Post Number: 1
Registered: 1-2013
Posted on Thursday, January 17, 2013 - 3:31 am:   

Are Bruker calibrations seamlessly transferable from one of their instruments to another (MPA, Matrix-i, Tango), despite their having different wavenumber ranges and different monochromators?

Fatih KAHRIMAN (caucasus)
Junior Member
Username: caucasus

Post Number: 7
Registered: 10-2010
Posted on Thursday, February 10, 2011 - 1:08 am:   

Hi All

I would like to add some information on this subject, which is one of our research areas. We are working on the transfer of calibrations between different models of spectrometer, from a Bruker FT-IR to a Spectrastar NIR. This is not a commercial route, because the transfer is not from one NIR instrument to another of the same kind. We have used some calibration development programs, namely OPUS and CWS, and we are trying various data manipulations to make the calibrations transferable. This research will be finished in about five months. It is a basic study, concerned only with calibration transfer and good calibration model development.

Fatih KAHRIMAN

David Russell (russell)
Senior Member
Username: russell

Post Number: 36
Registered: 2-2001
Posted on Wednesday, October 03, 2007 - 10:18 am:   

My practice in transferring calibrations has been to expect the worst and hope for the best.

So I develop models on the new instrument and have those available along with the incumbents.

When the instruments are similar (same vendor, same technology, not necessarily the same "model"), a well-done incumbent may outperform the new one.

I've experienced this first hand. But with production at stake you have to be prepared for anything.

Kenneth Gallaher (ken_g)
Senior Member
Username: ken_g

Post Number: 33
Registered: 7-2006
Posted on Wednesday, October 03, 2007 - 8:40 am:   

What you can do - and what has been done - is produce one well-controlled spectrometer that is like all its kin... transferring between different brands or families is a long way off.
The salesmen who claim transfer from any instrument to theirs? %^$#@ expletive deleted...

David W. Hopkins (dhopkins)
Senior Member
Username: dhopkins

Post Number: 127
Registered: 10-2002
Posted on Tuesday, October 02, 2007 - 9:39 pm:   

Hi Tony,

I think that we will always have to use actual samples or similar compounds, because the resulting spectra depend upon the half-band width of the spectrophotometer and the band widths of the components of interest.

I just don't see how moving mirrors (or any optics we can devise) can emulate that.

Best regards,
Dave

Tony Davies (td)
Moderator
Username: td

Post Number: 169
Registered: 1-2001
Posted on Tuesday, October 02, 2007 - 5:34 pm:   

Good question Michel; glad to know that you are still coming up with them! I would also like to know your new e-mail address; please e-mail me.

The only comment I will make at this time is that the transfer problem will not be solved until we have a universal standard. (Yes, it's that old hobby horse again!) I have been trying for over 20 years to find it. If only some clever people would do a bit of thinking, I'm sure an answer would have been found by now.

Best wishes,

Tony

Kenneth Gallaher (ken_g)
Senior Member
Username: ken_g

Post Number: 32
Registered: 7-2006
Posted on Tuesday, October 02, 2007 - 11:25 am:   

Hmm, yes, I did read that article, and required several aspirin afterwards - good job. Ultimately it comes back to 1) know your spectrometer, 2) know your statistics, 3) know where your samples came from, and 4) know where your reference data came from. There are still NIR papers published - papers that somehow get past the editors - that clearly miss one or more of these.

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 166
Registered: 9-2001
Posted on Tuesday, October 02, 2007 - 10:35 am:   

Ken - the situation is worse than that. I recommend you read Spectroscopy 22(6), pp. 20-26 (2007).

Nevertheless, with good data it's entirely feasible to "mix" data from different spectrometers. What you'll arrive at is an "average" model from all the spectrometers - not necessarily optimized for any one of them, but one that could work satisfactorily on all of them, and with a good chance of working well on other spectrometers of that type. There is a "trick" or two involved, however.
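As a rough sketch of the "mixing" idea (my illustration only, not Howard's actual procedure or the trick he alludes to - the spectra and reference values below are simulated), pooling calibration data from several instruments of the same type and fitting a single PLS model could look like this in scikit-learn:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_inst1, y1 = rng.normal(size=(40, 200)), rng.normal(size=40)   # instrument 1
X_inst2, y2 = rng.normal(size=(40, 200)), rng.normal(size=40)   # instrument 2

X = np.vstack([X_inst1, X_inst2])    # pooled spectra from both instruments
y = np.concatenate([y1, y2])         # corresponding lab reference values

pls = PLSRegression(n_components=5).fit(X, y)    # one "average" model
print(pls.predict(X_inst2[:3]).ravel())          # applied to either instrument

The point is simply that the instrument-to-instrument variation is represented in the calibration set, so the model is forced to become insensitive to it.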

\o/
/_\

Kenneth Gallaher (ken_g)
Senior Member
Username: ken_g

Post Number: 31
Registered: 7-2006
Posted on Tuesday, October 02, 2007 - 9:32 am:   

"The manufacturers announce their RMS noise for one instrument. Why do not ask them to provide the RMS noise between instruments? It will be the first base of comparison without any prediction model involved. This requirement should be added in their brochures."

This is exactly correct, and it is what companies who can really do calibration transfer have done for a long time - even though they may not publish explicit specifications. You may have heard terms like "toluene test". Using a model to "prove" calibration transfer is a way to hide, because you can always come up with some dumb, simple calibration that will transfer. But using a two-factor PLS model to "prove" transfer when your real model has 7 factors is fooling yourself.

Beyond that, rigid hardware specifications allow the user to separate problems. Is it the hardware or the model? If you are trying to answer that with a model, you cannot. If you have explicit, model-independent hardware specifications, you can.

There has been discussion here of combining data and hardware differences from several different instruments. That works - but at a price that may or may not be acceptable. Any such combination adds noise to the data set, which the model is then taught to ignore. If you get your result that way, fine - but it is adding noise, which ultimately limits what you can analyze. PLS really does analyze down to the noise - so the more noise you have, the sooner you will be limited in what you can do. If you are using that kind of equipment and you find you cannot do an application, you should try different equipment - not assume that NIR cannot do it. There is a lot of very poor legacy hardware out there that you would never consider today if you were starting from scratch.
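Along those lines, a model-independent check is easy to write down. A sketch (mine, not any vendor's specification - the spectra and noise level are made up) of reporting the RMS spectral difference between two instruments scanning the same sealed standard:

import numpy as np

def rms_difference(spectrum_1, spectrum_2):
    # RMS of the point-by-point difference between two absorbance spectra of
    # the same standard, measured on two instruments on a common wavenumber grid.
    d = np.asarray(spectrum_1) - np.asarray(spectrum_2)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical example: two scans of the same sealed standard.
s1 = np.random.rand(500)
s2 = s1 + np.random.normal(scale=1e-4, size=500)   # small instrument-to-instrument difference
print(f"instrument-to-instrument RMS difference: {rms_difference(s1, s2):.2e} AU")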

Pierre Dardenne (dardenne)
Senior Member
Username: dardenne

Post Number: 26
Registered: 3-2002
Posted on Tuesday, October 02, 2007 - 5:04 am:   

Hi,

Interesting discussion.

I set up our first network of instruments with Dr John Shenk in 1987 and have been using the rules given by John ever since. A good transfer is achieved when the spectral differences between instruments are lower than the packing error: i) standardize the instruments as well as you can, using the same kind of samples as the ones you have to measure (the samples must cover the absorbance range, and the same cups must be used to avoid packing effects; all the attempts using (grey) standards failed); ii) add some spectra from the secondary instrument; iii) recalibrate quickly with the same settings. This procedure works fine with at least Foss instruments. When you already have 15-20 instruments in the database, the models become very robust and it is not always necessary to add new spectra when an (n+1)th instrument is included in the network.
Of course, as has been said, the transfer also depends on the parameter. It is easier to transfer a moisture model than one for glucosinolates in rapeseed (it is a function of the sensitivity of the NIR response to the analyte).

The problem, I feel, is that we monitor models with very small data sets (10-20 samples); users observe biases and then correct for biases which are not significant. Shenk and Westerhaus published, in Agricultural Handbook 643, the statistics for checking the significance of the bias and of the SEP. With 10 samples the bias must be about 70% of the SEC (SECV) to be considered different from zero.
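(For anyone who wants to reproduce that check: a small sketch of the bias confidence limit as I understand the Shenk and Westerhaus approach, roughly t * SEC / sqrt(n). The numbers and variable names are illustrative only.)

import numpy as np
from scipy import stats

def bias_confidence_limit(sec, n, alpha=0.05):
    # Limit the observed bias must exceed (in absolute value) before it is
    # considered significantly different from zero on n monitoring samples.
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return t * sec / np.sqrt(n)

sec = 1.0                      # SEC (or SECV) of the calibration, in reference units
for n in (10, 20, 50):
    print(n, round(bias_confidence_limit(sec, n), 2))
# With n = 10 the limit is about 0.72 * SEC, so a small apparent bias seen on
# 10 samples is usually not significant and should not be "corrected".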
It is strange that we would trust 10 new samples more than the thousands already in the calibration database.
Another drawback is that the models are often checked with reference data coming from different labs, and the lab biases are generally larger than the NIR biases.
We are currently transferring databases between different brands of instrument following the same scheme, and the results are satisfactory. But most brands still need bias adjustments when transferring models from one instrument to another of the same type.
The manufacturers quote their RMS noise for a single instrument. Why not ask them to provide the RMS noise between instruments? It would be a first basis of comparison without any prediction model involved. This requirement should be added to their brochures.

Pierre

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 165
Registered: 9-2001
Posted on Monday, October 01, 2007 - 4:10 pm:   

Ralf - I don't think you need to worry. Nobody is proposing a standard for how to transfer a calibration.

The proposal is for a standard way to ascertain whether or not you've successfully transferred one, whatever method you use to do the transfer itself.

\o/
/_\

Charles E. Miller (millerce)
New member
Username: millerce

Post Number: 2
Registered: 10-2006
Posted on Monday, October 01, 2007 - 4:04 pm:   

Hello Michel:

I was put into a similar situation some years back, involving the same model of instrument on 4 different production lines.

Practical constraints on the calibration data collection protocol required us to attempt to "mix" calibration data from different instruments. Somewhat surprisingly, this produced good results for many (but not all) of the methods required.

I suppose that we got away with this because the inter-instrument differences didn't adversely affect the ability to quantitate most of the analytes. What the heck, though - it saved us $$$ on running standards on all the instruments.

Regards,
Chuck

Ralf Marbach (ralf)
New member
Username: ralf

Post Number: 1
Registered: 9-2007
Posted on Monday, October 01, 2007 - 3:03 pm:   

Mike, All

It is better to hold your horses with new standards. The existing standards are bad enough and need clean-up first.

Technically, re. calibration transfer, note: (1) all multivariate calibrations consist of two and only two parts: (a) the analyte signal (the "response spectrum") and (b) the spectral noise (the covariance matrix of everything else except the analyte -- interfering spectra, hardware noise, everything). Obvious or implicit, there are only these two parts in any calibration.

Calibration transfer is straightforward in SBC, because both parts are explicitly controlled by the user. Use the same signal on all instruments (*), measure the noise fresh if required (**), and you are done.

Because of (1) above, the same SHOULD also hold for PLS, PCR etc. But we all know it doesn't. What is the conclusion? -- Many calibrations are not measuring what their owners think they are measuring. Rather, they are measuring something else or something more. Proof? -- Otherwise, a simple offset adjustment would be all that's ever needed for transfer.

(*) Situations where the signal is not always the same from instrument to instrument I don't call "transfer" -- I call that an ugly, uncontrolled measurement situation that should not even be considered for serious analysis.

(**) If the overall SNR is good enough, you can add instrument-to-instrument "noise" as an extra component to the noise estimate used for calibration, in which case transfer is often no longer necessary because the calibration is now global. This is the preferred option... at least in Finland. But if you insist, or are desperately struggling for SNR, then you can make an instrument-specific noise estimate, i.e. leave the instrument-to-instrument noise part out of your calibration noise. In this case you need to update the noise estimate during transfer, which often just means an offset adjustment.
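(To illustrate that two-part structure numerically - this is a toy sketch, not Ralf's SBC software; the response spectrum g, the interferent spectra and the instrument-to-instrument difference spectra are all simulated - the regression vector is b = Sigma^-1 g / (g' Sigma^-1 g), and "going global" just means folding instrument-to-instrument difference spectra into the noise estimate:)

import numpy as np

rng = np.random.default_rng(1)
p = 120                                                 # number of spectral points
g = np.exp(-0.5 * ((np.arange(p) - 60) / 8.0) ** 2)     # toy analyte response spectrum

interferents = 0.05 * rng.normal(size=(30, p))          # non-analyte ("noise") spectra
instr_diffs = 0.02 * rng.normal(size=(10, p))           # instrument-to-instrument differences

def sbc_vector(g, noise_spectra, ridge=1e-6):
    # b = Sigma^-1 g / (g' Sigma^-1 g); the small ridge keeps Sigma invertible
    # when there are fewer noise spectra than spectral points.
    sigma = np.cov(noise_spectra, rowvar=False) + ridge * np.eye(len(g))
    si_g = np.linalg.solve(sigma, g)
    return si_g / (g @ si_g)

b_local = sbc_vector(g, interferents)                              # one instrument
b_global = sbc_vector(g, np.vstack([interferents, instr_diffs]))   # "global" noise estimate
print(b_local @ g, b_global @ g)   # both give unit response to the analyte signal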

Regards

Ralf

PS -- I will be very busy the next 2 weeks, sorry, no quick replies possible.

Michael C Mound (mike)
Senior Member
Username: mike

Post Number: 37
Registered: 7-2007
Posted on Monday, October 01, 2007 - 2:24 pm:   

Howard,

Guess I must be getting a reputation for maverick thinking... I don't really know where I might be in November... I'll check out the details on the site, as you suggested. If I do make it there, it would be a pleasure to meetcha, and I might even get motivated to suggest a draft, as you put it.

I have a taste of how lengthy the process of adoption can be with standards committees, but sometimes one has to begin somewhere.

Thanx,

Mike

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 164
Registered: 9-2001
Posted on Monday, October 01, 2007 - 1:42 pm:   

Mike - EAS is where it "always" is: in Somerset, New Jersey. You can check it out at eas.org

Don't go there just for the ASTM meeting, though. Even though Gary is in favor, a proposal to develop a practice for determining when a calibration has been transferred may not be voted in, and even if it is, it may not see action for some while afterward. Unless you'd like to come to the meeting and volunteer to write some drafts.

\o/
/_\

Hector Casal (casalh)
New member
Username: casalh

Post Number: 4
Registered: 1-2007
Posted on Monday, October 01, 2007 - 12:57 pm:   

Ken,

There are optical reasons why one might need a bias or slope correction. The 'easiest' one to see is stray light - of course, if you have stray light you have a badly configured instrument. However, with some sampling devices (disposable vials, for one) some optical distortions are introduced and biases will show up. It takes a few well-designed tests to detect and quantify them - and also someone who does not believe anything until it is proven twice.

Hector

Michael C Mound (mike)
Senior Member
Username: mike

Post Number: 36
Registered: 7-2007
Posted on Monday, October 01, 2007 - 12:41 pm:   

Thanks, guys,

Howard, you're right on the timeline. I was working with B & L's instruments in 1988, and before then (1983), with a variety of applications in a tin smelter, plastics, chemicals, petrochemicals, metals, etc. Can you imagine someone wanting to appear older than he actually is???

Anyway, I am pleased that both of you are so much behind this very worthwhile goal, transparent to the diaphane, as it may be...

Ken, I also like and appreciate your stylized approach to where we should exercise caution and skepticism, without yielding to problems that were insurmountable in the (good?) old days but have since advanced to a more reasonable possibility of achievement. Good to shake off the scales of academic agnosticism based on antediluvian concepts...

Howard, when and where is the EAS this year? Living in Switzerland, as I do, I am not au courant.

Thanks again,

Mike

Kenneth Gallaher (ken_g)
Senior Member
Username: ken_g

Post Number: 30
Registered: 7-2006
Posted on Monday, October 01, 2007 - 11:48 am:   

I plead guilty to having worked for a couple of vendors, but would argue that that makes one particularly well qualified to discuss calibration transfer since we are able to see many more analyzers than the typical user.

One aside, on the several comments that such transfer might be impossible: it was not possible, by my definition, on older-technology analyzers, so if that is your experience you are correct - not possible.

A critical point is that every NIR user needs to be interested in calibration transfer. This is because, even if you are only interested in one analyzer, analyzers will change with time, and components will fail and need to be replaced. So on a grating unit the motor will die, the lamp will die. On an FT unit the laser will die, the lamp will die, and in some the interferometer itself will die.

A second point is that the ability - or not - to transfer calibrations is very application dependent. Any statement about calibration transfer needs to specify the calibration and the application. Early NIR applications worked exactly because the spectroscopic demands were low. The broad peaks of food products and such do not require accurate wavelength control, nor high-factor calibrations, nor good electronics or detectors. On the other hand, analysis of motor fuels, with sharpish peaks and demanding prediction performance criteria, is more problematic. "Everyone" did octane and related parameters 10 years ago; very few vendors will touch it now.

Regarding bias and slope corrections: I would argue that if they are needed, that is a huge hint that by modern criteria the analyzer has problems. Why would a slope/sensitivity correction ever be needed in a ratioed instrument? How could a slope and bias correction made on one or a few samples be expected to correct the instrument problems in a multifactor calibration?

So how to test? Of course there are ways. I would never trust a vendor without proof on this. They all claim calibration transfer, LOL.

So - test what matters to you. Have the vendor move your calibration from one analyzer to another, and watch what he has to do to do that. Measure the SEP on a good test set before and after. Watch the vendor change a grating motor, lamp, whatever, and see what he has to do. Measure the SEP on a good test set before and after. Get the specifics of the warranty in writing.
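(As a concrete version of that before-and-after test - just a sketch with made-up reference values and predictions - bias and SEP on a fixed validation set could be computed like this:)

import numpy as np

def bias_and_sep(y_ref, y_pred):
    # Bias = mean residual; SEP = standard deviation of the residuals
    # (bias-corrected standard error of prediction).
    e = np.asarray(y_pred) - np.asarray(y_ref)
    bias = e.mean()
    sep = np.sqrt(np.sum((e - bias) ** 2) / (len(e) - 1))
    return bias, sep

y_ref = np.array([10.1, 11.4, 9.8, 12.0, 10.7, 11.1])                  # lab values
y_before = y_ref + np.random.normal(scale=0.15, size=y_ref.size)       # predictions before
y_after = y_ref + np.random.normal(scale=0.15, size=y_ref.size) + 0.3  # predictions after

for label, y_hat in (("before", y_before), ("after", y_after)):
    b, s = bias_and_sep(y_ref, y_hat)
    print(f"{label}: bias = {b:+.2f}   SEP = {s:.2f}")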

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 163
Registered: 9-2001
Posted on Monday, October 01, 2007 - 11:44 am:   

Mike - there's something wrong with the timeline. As I recall, B+L didn't take over Technicon until at least 1985, and maybe as late as 1987. Before then the instruments always carried the Technicon name. But what's a few years among friends??

In any case, I agree that the goal of all such efforts is to reduce the requirements for the users. Initially we were hoping to make a "black box" that the user could use to read a couple of samples to adjust zero and sensitivity (although we didn't call them that, even then) and then use the instruments just like you'd use a ruler, or a scale or any other common device. After all, where would we be now, if we couldn't just pick up a ruler, or a micrometer, or a stopwatch, and just use it with precision and accuracy built in? But alas, that was not to be.

Nevertheless, the question of defining what we mean by "calibration transfer" is a serious and important scientific question. If it is not answered, it just means what the Mad Hatter (I think it was) proposed to Alice: that it means anything you want it to mean.

In fact, when I got the e-mail message containing my own rant, I forwarded it to Gary Ritchie, who's currently chair of the Chemometrics committee of ASTM, and he thought it was a reasonable activity for the committee to take up. So maybe we can make some progress on that front, after all. The next meeting of the committee will be at EAS this November. All are invited; ASTM committee meetings are open to the public. I predict fireworks (with 99% probability, I think).

\o/
/_\

Michael C Mound (mike)
Senior Member
Username: mike

Post Number: 35
Registered: 7-2007
Posted on Monday, October 01, 2007 - 10:51 am:   

Howard,

Actually, there is some hope and clarity (not to be confused with hope and charity) in terms of what I think (!!!) is intended by calibration transfer. Let's agree that it is probably not quite such a simple matter as opening the box, plugging that shiny new instrument in, and expecting it to start running. We are not talking about fundamental parameters (I started fooling around with such at a few bus stops of my own, including some work with Bran + Lubbe, etc., in 1983... not quite the hoary pedigree you can claim, but... one does what one can...), but rather a kind of "starter" base. After all, if you are really transferring from device to device, what you are trying to accomplish is to know how much to depend on bias adjustments that will effectively "dampen" poor R values, whether or not you are also concerned about RMSD and SEP (not much difference there, I reckon).

Of course, the devil's in the details...the point of this is that we are hoping to reduce the tweaking from instrument to instrument to shorten time and effort in the service of standardization and robustness...

Thanks,

Mike

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 162
Registered: 9-2001
Posted on Monday, October 01, 2007 - 9:31 am:   

Mike and Mike: forget the data for now. You've got a much more fundamental problem: defining/deciding what it means to "transfer a calibration".

In all the time that NIR has been a factor in the analytical community, "transferable calibrations", or "universal calibrations" as they used to be called, have been one of the holy grails of NIR technology. When I started working at Technicon Instruments in 1976 (for you youngsters out there, Technicon was one of the first three companies to make analytical instruments based on NIR and chemometrics, a situation that lasted for quite a long time before other companies saw the "light" (pun intended)), developing a "universal calibration" for protein in wheat was a goal that the vice-president listed as one of the top-priority activities for me.

But then, as now, there was no clear definition of what that meant. When we say an instrument (or a calibration model) must work "out of the box", are we allowed (for example) to make a "bias adjustment" (with all respect to Klaas)? Note that every instrument made has provision for adjusting the zero point, which is essentially what a "bias adjustment" is.

Similarly, what we call "slope adjustment" or "skew adjustment" is equivalent to adjusting the sensitivity in other instrumental technologies; is that allowed?
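(For concreteness, the adjustments being debated are, in practice, just a linear correction of the predicted values fitted on a handful of check samples - a sketch with made-up numbers:)

import numpy as np

y_ref = np.array([10.2, 11.5, 9.9, 12.3, 10.8])    # lab values for a few check samples
y_pred = np.array([10.6, 11.8, 10.4, 12.6, 11.3])  # predictions on the "new" instrument

bias = np.mean(y_ref - y_pred)                      # bias-only ("zero") adjustment
slope, intercept = np.polyfit(y_pred, y_ref, 1)     # slope-and-bias ("sensitivity") adjustment

print("bias adjustment:       y =  y_pred +", round(bias, 3))
print("slope/bias adjustment: y =", round(slope, 3), "* y_pred +", round(intercept, 3))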

And whether either or both adjustments are allowed (or not), how do we know when we've succeeded in "transferring the calibration"? Do we base the decision on SEP, RMSEP, or maximum allowed error? Do we include "mean prediction error" as a criterion, or some measure of non-linearity, etc.?

None of these items are defined, or even addressed, in most discussions of "calibration transfer". There's hardly any discussion of these points (or any at all, I think) in the NIR literature, even for a single application, much less as a general principle. That allows a company to claim anything they want as representing a "calibration transfer".

Anybody have any ideas about how to start addressing this question?

\o/
/_\

Michael C Mound (mike)
Senior Member
Username: mike

Post Number: 34
Registered: 7-2007
Posted on Monday, October 01, 2007 - 5:00 am:   

Michael,

This is food for a really good discussion. Most of us have longed for a kind of global calibration. Some time ago there was a consortium in the UK, led by a L. Hanna, which came close to such a holy grail, but the results were not published. Recently I have seen claims from some vendors for transfer between probes and instruments that made sense, especially if you were able to determine the ILS of each instrument. I have also seen warnings that no two units of the same model from the same manufacturer could be automatically assumed to be plug-and-play.

We are planning to test some of these claims, but have no hard data so far.

Please keep me informed if you have success or otherwise...I will do likewise.

Good luck,

Mike

Michel Coene (michel)
Senior Member
Username: michel

Post Number: 44
Registered: 2-2002
Posted on Monday, October 01, 2007 - 3:38 am:   

I recently started at a new job and found myself in an already-running NIR project. The vendor claims the calibration can be taken off the instrument and used on another one, even without so much as a slope/bias correction. I have heard this promised so many times, but I have never seen it done. (This excludes Foss-style setups with worldwide calibrations run on 3 different masters; I am talking about one instrument on one production line.) Has anybody here ever swapped an instrument and gotten good results "straight out of the box"? I would appreciate it if the vendors here refrain from answering - time to let the customers speak!
