Norris vs Savitzky-Golay



Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 515
Registered: 9-2001
Posted on Sunday, December 16, 2012 - 6:16 pm:   

Dave - Apparently I was a bit careless and gave the wrong number for the ASTM standard I was thinking of. The one I had in mind was E2617-10, "Standard Practice for Validation of Empirically Derived Multivariate Calibrations". That also might be confused with a QC procedure, but it is also far more complicated and time-consuming than would be reasonable for a daily check procedure. Also, as you say, any QC procedure should be continued indefinitely, or at least as long as the process and analytical procedure are in use!

\o/
/_\

David Semmes (dsemmes)
Member
Username: dsemmes

Post Number: 13
Registered: 6-2010
Posted on Sunday, December 16, 2012 - 4:30 pm:   

Howard, it was definitely an oversight that I did not mention the bias test. I focused on the "check sample" test because you and I had recently discussed it, and because it seems less common to me, but still very important.

To avoid misunderstanding, I'm sure you meant D6122 (Validation of Performance of Spectrophotometer), and not D1622 (Density of Rigid Cellular Plastics) or E1655 (Infrared Multivariate Quantitative Analysis).

While D6122-10 is not a QC procedure, it does include creating and routinely executing QC procedures. Besides what it calls "Probationary Validation," it includes "General and Continual Validation." And while the continual validation includes or focuses on what is essentially a bias test, it also includes measurements of "check samples" to test the analyzer independently of the samples.

I also did not mean to give the impression I'd suggest repeating D6122 as a QC procedure. Instead, as in D6122, I create a calibration for the "check sample" that is the same as, or analogous to, the test method for samples, and I simply have the "check sample" tested in QC for consistent, in-spec results. If the "check sample" results are not consistent or in-spec, that may of course indicate a relatively small but significant instrument response change after events like the use of a new lot of sample cells, a change in ATR alignment, replacement of the lamp, relocation of the instrument, etc., even if the wavelength accuracy and other tests are fully in-spec. My intention is that the "check sample" test assesses the effect of relatively small changes in the system response on the sample test method with the same sensitivity with which the change might affect sample results.

I agree too that the frequency of the test could vary with circumstances.

You wrote "that SOME QC procedure should be instituted when a new model is deployed." I agree. I also suggest that generally the QC procedure should (1) continue indefinitely, (2) include tests of bias in the results of ongoing sample testing, which may change due to raw material or process changes, and (3) include "check sample" monitoring so that instrument/system changes can be identified independently of sample changes. Generally, my experience is that I can achieve this with simple, routine pass/fail tests of lactose, water, etc., with limits that reflect the high reproducibility of the instrument and are far tighter than changes that would normally be detected except by the test method itself.

It occurs to me that your recommended procedure is fundamentally different: you do not include the "check sample" test from D6122. Instead, you include a less frequent test of slope and bias in addition to a more frequent bias test.

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 514
Registered: 9-2001
Posted on Friday, December 14, 2012 - 9:17 am:   

Dave - most of your posting is good, practical advice for Fernanda to follow, and is basically a rudimentary Quality Control procedure for NIR measurements.

However, I have to point out that the ASTM D1622-10 standard is NOT intended as a QC procedure; it is far too lengthy and resource-intensive for routine repetition. D1622 is intended as a one-time (or few-time) validation procedure, as its title tells us, intended to thoroughly and exhaustively test whether a newly-developed calibration model is in fact suitable for use in routine analysis for the relevant application.

Your method of using check samples, etc. for testing the instrument/model performance every day is very much more to the point here. Even here, there are variations that can be applied. For instance, most NIR practitioners would recommend performing a bias check at least once a day. Beyond that, more complicated tests (e.g., for bias and slope) could also be applied at various intervals, depending on the particular instrument, analysis, model and QC environment.

My default recommendation is that when a new model is first deployed, a test for bias and (non-unity) slope be run every day for at least a week. If the system proves to be stable for at least a week, then the test interval can be increased to once a week for at least a month. If the system is found to be stable for a month, then the test can be applied monthly on a continuing basis, as a monitoring method. I personally would feel uncomfortable extending the interval to longer than a month, but if that's found appropriate in some particular case, that could be done too. The purpose is to match the test interval to what is empirically found to be necessary. Daily bias checks are also run on a continuing basis, in addition to the longer test method. These two tests provide a reasonably comprehensive QC procedure that takes some extra work at the beginning, but then requires only minimal additional work on an ongoing basis.
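
To make that concrete, here is a minimal sketch in Python of such a bias and (non-unity) slope test, assuming paired reference and NIR-predicted values. The function name, significance level, and return structure are illustrative only, not any ASTM-prescribed procedure:

import numpy as np
from scipy import stats

def bias_slope_check(reference, predicted, alpha=0.05):
    """Illustrative QC check: test for bias and for non-unity slope."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)

    # Bias test: one-sample t-test of the residuals against zero.
    residuals = predicted - reference
    t_bias, p_bias = stats.ttest_1samp(residuals, popmean=0.0)

    # Slope test: regress predicted on reference and test slope != 1.
    fit = stats.linregress(reference, predicted)
    t_slope = (fit.slope - 1.0) / fit.stderr
    p_slope = 2.0 * stats.t.sf(abs(t_slope), df=len(reference) - 2)

    return {"bias": residuals.mean(), "bias_ok": p_bias > alpha,
            "slope": fit.slope, "slope_ok": p_slope > alpha}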

Some may feel that this schedule is not appropriate, and certainly there is room for other opinions on this. The point, I think, is that SOME QC procedure should be instituted when a new model is deployed.

Beyond that rule-of-thumb method I described, someone needing more comprehensive or more objectively justifiable procedures can institute formal SQC (Statistical Quality Control) procedures. These, however, are far beyond the scope of this discussion group, I think.

\o/
/_\

David Semmes (dsemmes)
Member
Username: dsemmes

Post Number: 12
Registered: 6-2010
Posted on Friday, December 14, 2012 - 5:15 am:   

Fernanda, I think that routine monitoring of the entire method, not just the instrument, is critical to test whether the method consistently gives accurate results over time after calibration. In several completely different spectral methods, I have observed unexpected changes that could have led to inaccurate results and which otherwise might not have been detected. For example, in a NIR method for liquids, results differed between nominally identical quartz liquid cells because the roughly ±1 mil (25.4 µm) pathlength specification was much greater than the corresponding reproducibility of the NIR method. In an ATR method for powders, after several years of use, a dependence of the relative band intensities on a relatively small change in the ATR alignment was measured. I've also experienced other examples of unexpected method changes.

ASTM D6122-10, the Standard Practice for Validation of the Performance of Multivariate Online, At-Line, and Laboratory Infrared Spectrophotometer Based Analyzer Systems, is an exhaustive approach to ongoing monitoring of spectral methods, but I am not aware of its use in practice.

My practice for testing for changes like those has been to routinely monitor "Check Standards" which are purposely similar in composition, physical properties, and measurement procedure to the test samples. I purposely do not use flat, shiny, internal, or spectrally featureless Certified Reference Materials. My goal is not just to test whether results are consistent, but to assess the magnitude of any changes larger than normal variability, compared to the spectral differences measured by the method. I've measured NIR and ATR spectra of lactose, water and other materials that have remained consistent for years for this purpose. I usually begin by simply assessing spectral overlay plots, so that I don't miss any spectral changes that are not detected by a particular calibration. I can then test the system suitability for a method before a sample measurement, with a sensitivity comparable to the difference that is measured by the method.
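
As a sketch of what the pass/fail part of that monitoring can reduce to, assuming 1-D absorbance arrays on a common wavelength axis (the function name and the single scalar limit are illustrative; in practice the limit is set from the method's own reproducibility, and overlay plots come first):

import numpy as np

def check_standard_ok(new_spectrum, stored_reference, limit):
    # Compare today's check-standard spectrum against the stored
    # reference spectrum, point by point, against a pass/fail limit.
    diff = np.abs(np.asarray(new_spectrum) - np.asarray(stored_reference))
    return diff.max() <= limit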

A much more basic approach is to monitor only the instrument, using standard instrument qualification protocols to test wavelength accuracy and similar variables. Monitoring variables like wavelength accuracy, however, tests the consistency of the method only very indirectly and insensitively. Neither of the examples above would have been detected that way.

I also think it's a common good practice to evaluate the Q-residual and Hotelling's T-squared statistics to assess whether an observed sample spectrum is consistent with a calibration model. A problem with only testing the fit that way, and not also using a "Check Standard", is that it would be hard to determine whether any unexpected results in the future are due to changes in the samples or to changes in the instrument or method.
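
For readers who have not met those statistics, here is a minimal sketch of how they can be computed against a PCA model (the component count and any control limits are application-dependent assumptions):

import numpy as np
from sklearn.decomposition import PCA

def q_and_t2(calibration_spectra, new_spectrum, n_components=3):
    # Fit a PCA model to the calibration spectra (rows = spectra).
    pca = PCA(n_components=n_components).fit(calibration_spectra)
    scores = pca.transform(new_spectrum.reshape(1, -1))

    # Q residual: squared reconstruction error left outside the model.
    reconstructed = pca.inverse_transform(scores).ravel()
    q = float(((new_spectrum - reconstructed) ** 2).sum())

    # Hotelling's T-squared: squared scores scaled by the variance
    # captured by each principal component.
    t2 = float((scores ** 2 / pca.explained_variance_).sum())
    return q, t2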

Good luck! I hope that's helpful and not too long!

Fernanda Haffner (fernanda)
New member
Username: fernanda

Post Number: 4
Registered: 12-2012
Posted on Wednesday, December 12, 2012 - 5:51 pm:   

Howard,

Do you have any reference in mind that could help me with the routine monitoring of the instrument you just mentioned?

I appreciate it!

Fernanda

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 510
Registered: 9-2001
Posted on Tuesday, December 11, 2012 - 5:52 am:   

Fernanda - in the end, that's what counts: accuracy and robustness. Underneath all the math and statistics, NIR is "just another" method of chemical analysis, and has to be looked at in that light: does it do what all analytical methods have to do?

So to follow that line of thought: now that you've developed your analytical method, you need to set up a program for routine monitoring of the instrument/method, to ensure its continued accuracy.

\o/
/_\

Fernanda Haffner (fernanda)
New member
Username: fernanda

Post Number: 3
Registered: 12-2012
Posted on Monday, December 10, 2012 - 8:18 pm:   

Thanks Tony and Howard. I should say I do not feel intimidated, but somehow inspired to learn more. And of course, starting a NIR calibration from scratch and with minimal guidance requires a lot of effort... I am getting there.
Unfortunately my software does not show the calculations (as far as I know). But my model, with 172 calibration and 58 validation samples, seems to be robust.

Thanks for your comments again!

Best,
Fernanda

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 509
Registered: 9-2001
Posted on Saturday, December 08, 2012 - 10:12 am:   

Fernanda - If there wasn't so much good advice contained in them I would be tempted, at this point, to advise you to ignore all the previous comments, including my own!

The reason is this: usually, when so many of us NIR "experts" have commented on a question, especially one from a beginner, at least one responder has the good sense to ask the key questions that should be the starting point for any rational discussion, but that is missing from this thread:

"What is it that you're trying to measure, what are you trying to measure it in, and how accurate do your results have to be?"

Obviously, I'm as guilty as the rest of the herd! Also, Don, I could have said what I wanted to say without being offensive about it, especially to a newcomer who is probably already intimidated by making their first posting to this crew. Following your own advice, however, I make no apology to YOU for lumping you in with the rest of us "older and wiser(?) heads"; if we were all as smart as we like to think we are, we wouldn't all have missed that basic point!!

Karl, the main problem with calling an algorithm a "Norris Derivative", which is intended to do you (deserved) honor, is that the software manufacturers put an algorithm into their software package that is intended to duplicate the functionality of the software that you originally wrote and used (and still use, to great effect), with the intention of implying that if a user applies the "Norris Derivative" they can get the same results that you would have obtained on any given data set. Not mentioned is the unavoidable fact that, of necessity, they have to leave out the most important and critical part of the algorithm, which is the part that goes on inside your head while you're using your computer to deal with the computational details. Therefore, with the best intentions, nobody is going to be able to duplicate what you do with the Norris Derivative, at least not until you can develop a method for licensing out parts of your brain!

Fernanda - I think that what we are all saying, each in our own way, is that there are many ways to develop a good model for doing any particular analysis, and in the end they are probably all going to be equivalent. In that context, worrying about the details of how different "derivative" methods work is more in the nature of "rearranging the deck chairs on the Titanic" (as the saying goes), as far as the contribution to the performance of the model is concerned. First you need to answer the question: is a derivative needed at all, or is some other way of dealing with the data preferred? Otherwise you may be directing your attention, and your time and efforts, to the wrong part of the problem.

\o/
/_\

Tony Davies (td)
Moderator
Username: td

Post Number: 289
Registered: 1-2001
Posted on Friday, December 07, 2012 - 4:54 am:   

Dear Fernanda,

Welcome to the group. I'm sorry if you got too much technical information and not much practical help!

First of all, you have the right idea: test different settings and see what works best as measured by your SEP. I'm afraid there is nearly always a BUT! But 1) Before you start calibrating you should know what will be an acceptable result (in terms of SEP). But 2) It does depend on where you are in the calibration development. I always start by running the raw data to see if there is a chance of finding something useful, then try a smoothing routine and see if I get an improvement. But 3) How many samples do you have in your validation set? If the number is small, you can very quickly obtain a calibration that is optimised to that set and will be of little use for your development of a useful calibration. You need quite large databases before you can decide what is going to be the best pre-processing of your spectra, but remember that you do not need "The Best"; you need a satisfactory solution.

Your question does raise two important questions:
1) How can you learn how to do NIR calibrations?
There are books and there is quite a lot of on-line help, but you really need to go on a training course. Your best chance of finding a training course is at an NIR conference. The next meeting of the international body, ICNIRS, is in France next summer. If you can get to it you will learn a lot.

2) Your software should tell you how it makes the calculations, but not all software does (which is why no one has offered to answer your question: the answer depends on the actual algorithm in your computer). NIR software requirements are very demanding, and not many spectrometer companies have the required capacity in their software departments. So what you get from the spectrometer developer is a sub-set of programs sufficient to get you started, but what you need is very dependent on the problems you have to tackle. If your spectrometer is aimed at a specialised segment of the NIR market, then it may have very good software for that segment. I have always suggested that instrument developers should give you sufficient control of the spectrometer to get the best possible spectra for your application, and leave it to commercial software developers to provide all the programs (tools) that we would like to have. If your company has made a long-term commitment to NIR analysis, I recommend you ask them to buy an NIR package from one of the specialised chemometric development companies.

I hope this helps,

Best wishes,
Tony

Fernanda Haffner (fernanda)
New member
Username: fernanda

Post Number: 2
Registered: 12-2012
Posted on Wednesday, December 05, 2012 - 6:52 pm:   

Thank you all for the comments and knowledge shared.

Norris: The software can be a little confusing then (at least the version 8 I have). Savitzky-Golay and the Norris derivative filter are both under 'smoothing'. And thanks again for your comment!

Karl Norris (knnirs)
Senior Member
Username: knnirs

Post Number: 68
Registered: 8-2009
Posted on Wednesday, December 05, 2012 - 2:03 pm:   

Hi Fernanda,
You asked a question which in my opinion has not been answered, in part because, as far as I know, "Norris smoothing" does not exist. I do not have access to the Thermo software, but I believe it includes the use of a so-called "Norris derivative". I did not create this name, but I do use a derivative routine which uses a segment and a gap. The segment size varies the smoothing in the derivative calculation. This cannot be compared to Savitzky-Golay smoothing, because the latter does not include the derivative. There is a Savitzky-Golay derivative function, but it does not include the smoothing. I hope this helps you.
Karl
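
For readers who want to experiment, here is a minimal sketch of a gap-segment first derivative of the kind Karl describes (the indexing convention, defaults, and edge handling are illustrative; vendor implementations differ in detail):

import numpy as np

def gap_segment_derivative(spectrum, segment=5, gap=5):
    # Difference of two segment-point block means separated by a gap;
    # a larger segment gives more smoothing in the derivative.
    y = np.asarray(spectrum, dtype=float)
    window = 2 * segment + gap            # left block + gap + right block
    deriv = np.full_like(y, np.nan)       # edge points stay undefined
    for start in range(len(y) - window + 1):
        left = y[start : start + segment].mean()
        right = y[start + segment + gap : start + window].mean()
        deriv[start + window // 2] = right - left
    return deriv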

Donald J Dahm (djdahm)
Senior Member
Username: djdahm

Post Number: 81
Registered: 2-2007
Posted on Wednesday, December 05, 2012 - 8:58 am:   

Karl may chime in and tell me I'm full of crap, but I don't think he was worried about most of the stuff being talked about when he "invented" the Norris derivatives. Obviously, he didn't want to introduce noise, but his criterion for what kind of derivative to take tended to be dominated by improving the calibration.
Remember, he prefers MLR, as any right-thinking spectroscopist would (tongue only a little bit in cheek). When he was developing a calibration he would vary the gap and segment in order to get rid of interferences. Sometimes he was creating wavelengths where the interferents' spectral contribution was zero. Other times he was creating new isosbestic points in the spectra, so that their contribution was constant. There was a lot of art in what he was doing, but there was tons of spectroscopy as well.
In my opinion, it is fine to wonder about the question being raised, but the Norris Derivative should not be considered a "standard" in the realm of statistical issues concerning derivatives.
And Howard, if I had to apologize every time I told a student to hit the books to answer a question he just asked, I would be a "sayin' sorry fool". Letting a newcomer know when the answer is easy and when it's going to be hard is a great service. Keep at it, unapologetically.

Dusan Kojic (dkojic)
Junior Member
Username: dkojic

Post Number: 8
Registered: 7-2011
Posted on Wednesday, December 05, 2012 - 4:13 am:   

Marta,
If you go through Howard's response again, you might notice the part (for example):
"It's a lifetime task."

Please be aware that some things just take time. No IQ involved.

I've been reading through quite a few vigorous discussions on this forum and NONE of them was offensive in any way!
Instead, could you be more precise about the part that sounds discouraging? I'm sure some kind of misunderstanding has taken place here.

Kind regards,
Dusan

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 508
Registered: 9-2001
Posted on Wednesday, December 05, 2012 - 3:47 am:   

I suppose I should apologize to Fernanda and to Marta. Upon rereading my previous response I can see where it might be taken to be somewhat mean, which was not my intention. I thought it would come through that my tongue was planted pretty firmly in my cheek while I was writing it, but from the other responses, I guess I was wrong about that.

Therefore I apologize.

\o/
/_\

Gabi Levin (gabiruth)
Senior Member
Username: gabiruth

Post Number: 79
Registered: 5-2009
Posted on Wednesday, December 05, 2012 - 1:51 am:   

Hi guys,

I definitely sympathize with Marta. On the technical side, though, I agree with much of what Howard says.
We need to separate two important issues:

Resolution (which should be expressed in terms of the ability to distinguish between two neighboring peaks, because at the end of the day this is all that matters)
Quantitative "usefulness". I know this is a term I just "invented" for the sake of this discussion, but again, at the end of the day this is all that matters.
Resolution is very important in some cases, for example distinguishing between nylon 6 and nylon 6,6, or between two very similar excipients in pharmaceutical formulations. In such cases, resolution could be the only thing between us and achieving a practical goal.
When it comes to quantitative "usefulness" the situation can be totally different: as in most cases the entities we wish to quantify differ substantially from the rest of the matrix, resolution takes second place to noise. If achieving resolution means sacrificing noise performance, the preference should be to reduce noise.
To give an example: in quantifying oil in certain seeds at 30 seeds per minute, we found that using even 17-point smoothing (8 points on each side) yields better prediction capability. We still use a 2 nm increment in moving the center wavelength of our beam when scanning the spectrum, but we use more smoothing to reduce the noise that comes from the fact that, in transmission through large seeds, the signal is quite weak.
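
That kind of moving-average (boxcar) smoothing is nearly a one-liner in Python; a sketch, assuming a 1-D spectrum array (the function name is illustrative, and edge points are only partially averaged with this mode):

import numpy as np

def boxcar_smooth(spectrum, width=17):
    # 17-point moving average: 8 points on each side of the centre point.
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(spectrum, dtype=float), kernel, mode="same")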

I would summarize it all with a very old piece of wisdom: when you have an application, study the nature of the problem and define what is important; if you can't define it by logical analysis, use trials, compare the results, and then define it.

I assume that with experience and time you will be able to rely more and more on logical analysis, and need fewer and fewer trials. If you are young enough, you will achieve a reasonable degree of success before you retire.

best of luck

Gabi Levin
Brimrose

venkatarman (venkynir)
Senior Member
Username: venkynir

Post Number: 159
Registered: 3-2004
Posted on Wednesday, December 05, 2012 - 12:57 am:   

Hi Marta,

I have gone through Howard Mark's answer: don't take anything to heart. There is no straight answer to your question. However, I always work with Savitzky-Golay and other modified versions, and I have found them good.

Marta Lichtig (marta)
Junior Member
Username: marta

Post Number: 9
Registered: 8-2009
Posted on Tuesday, December 04, 2012 - 11:55 pm:   

I always enjoy the answers given by the wise NIR people to us beginners, and I have learned a lot from them.
But this last answer hurt my feelings as a NIR beginner. We all know we need to learn (a lot; it is a process that never ends), but we also would like to find some shortcuts if possible. Such answers may discourage people like myself from asking questions, and that would be a pity for the whole community!

Howard Mark (hlmark)
Senior Member
Username: hlmark

Post Number: 507
Registered: 9-2001
Posted on Tuesday, December 04, 2012 - 1:50 pm:   

Fernanda -

You asked:

"Is there a setting where both are smoothing the data the same way?"

The short answer is: NO.

You also said:

"We all know that smoothing decreases the spectral resolution and I want to avoid such a thing."

To which I answer: "why?" In conventional spectroscopy the main, and often the only, criterion available is the ability to reproduce the original, unsmoothed spectrum. When using the spectra for quantitative calibration, however, we often find that distortion of the spectrum has very little effect on the final results, such as test statistics like the RMSEP. Heinz Siesler has done considerable work on these effects; you should look up some of his papers.

Finally, you said:

"if I use a segment length of 5nm and 5 points as gap between segments (Norris) is that directly comparable to 7 data points and polynomial order of 3 (Savitzky Golay)?"

Once more, the short answer is NO.

The medium answer is: in the context of calibration, it doesn't matter a whole lot. There's a whole slew of effects that can and will change the results far more than using a different type of smoothing function, or a different set of parameters for a given smoothing function. It's usually difficult, if not downright impossible, to tell a priori how a given change to the data, or to the algorithm, is going to filter through the computations and change the results. But as I said above, it usually doesn't matter; the differences are so small that they are less than what the random error of the data will cause, so you can't tell the difference even if it exists.
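
For what it's worth, trying the Savitzky-Golay settings from the original question takes one call in Python (a sketch only; the data file is hypothetical, and the parameters are simply those from the posting):

import numpy as np
from scipy.signal import savgol_filter

spectrum = np.loadtxt("spectrum.txt")   # hypothetical 1-D absorbance data

# 7-point window, 3rd-order polynomial, as in the original question.
smoothed = savgol_filter(spectrum, window_length=7, polyorder=3)

# The empirical comparison recommended above: refit the calibration with
# each pre-treatment and compare RMSEP, rather than seeking an exact
# Norris/Savitzky-Golay equivalence.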

The long answer is: there are a lot of books and papers on NIR, statistics, chemometrics and data analysis that you need to study before you can understand all the ramifications of your questions. It's a lifetime task. If you REALLY want to know the answers, there's no other way than to get started immediately.

\o/
/_\

Fernanda Haffner (fernanda)
New member
Username: fernanda

Post Number: 1
Registered: 12-2012
Posted on Tuesday, December 04, 2012 - 10:59 am:   

Hello all,

I'm quite new to the NIR field and would like to ask a question, perhaps a silly one, regarding a possible direct comparison between the Norris and Savitzky-Golay smoothing methods. I would like to be able to decide which one best suits my purpose.

I work with a software package called TQ-Analyst from Thermo, which means I can only play around with the parameters built into it.

We all know that smoothing decreases the spectral resolution and I want to avoid such a thing. That being said, my question is: how can I effectively compare the degree of smoothing of Norris vs Savitzky-Golay? Is there a setting where both are smoothing the data the same way? Then I could base my decision on the lower RMSEP calculated with each.
For instance, if I use a segment length of 5 nm and 5 points as the gap between segments (Norris), is that directly comparable to 7 data points and a polynomial order of 3 (Savitzky-Golay)?

Hope my question makes sense.

Thank you all for the attention and for sharing your knowledge!

Best,
Fernanda
