Noah
Posted on Wednesday, February 23, 2005 - 4:35 am:
Hi! After reading Nuno's thread, some questions arose in my mind about qualitative calibration. In order to decide whether a sample is "good" or "bad", I've read that you don't suggest putting out-of-spec samples into the calibration. I don't understand why. If I want to discriminate good from bad, I need 2 different clusters. And if I use validation samples in order to "validate" my calibration set, what can I validate if I introduce out-of-spec samples that are not present in the calibration set? I hope my question is clear. In front of my eyes it seems like a random coil!! Thanks Noah
Nuno Matos (Nmatos)
Posted on Wednesday, February 23, 2005 - 4:43 am:
If you add OOS samples to the calibration set, the model will associate "OOS" only with those particular OOS samples. In question form, it would be like:
1. You to the model: These are good and these are bad.
2. You to the model, with a new sample: Is this sample good?
3. Model to you: This sample is like the good samples you've presented! OR This sample is like the bad ones you've presented.
What I want the model to answer instead is:
3'. Model to you: This sample is like the good ones you've presented! OR This sample isn't like the good ones. Please investigate!
Is the coil still random? Nuno
Noah
Posted on Wednesday, February 23, 2005 - 4:56 am:
Uhm... now the coil is not so random. Looking at your question form, it seems to me that 3 and 3' are really close. The result is, more or less, the same: in fact, if the sample is similar to the bad ones, you must investigate. Noah
hlmark
Posted on Wednesday, February 23, 2005 - 5:03 am:
Noah - basically, because a material can be "out of spec" in many different ways, and presumably most of those would affect the material's spectrum differently. If you don't know beforehand ALL the ways that out-of-spec can occur, then you'll likely miss some. Then, when one of those happens during routine use, it will erroneously register as being in-spec - not good! By creating a model based only on "good" samples, you're essentially telling the algorithm, "this is what good material looks like; let me know if you see anything different". Measuring out-of-spec samples during the validation step is sometimes called "challenging" the qualitative model, to verify that it can, at least, flag samples that are known to be bad. The more thoroughly you can challenge your model, the more confidence you can have that it will perform correctly in routine use. Howard \o/ /_\
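To make the "model only the good samples" idea concrete, here is a minimal sketch of a one-class conformity test using a Mahalanobis distance to the cluster of good spectra. It is only an illustration of the concept discussed in this thread, not anyone's actual software: the function names, the simulated 4-point "spectra", and the acceptance threshold of 3.0 are all made up for the example (in practice the cutoff would be set from the distribution of calibration distances, and the model would be challenged with known-bad samples during validation, as Howard describes).

```python
import numpy as np

def fit_good_model(good_spectra):
    """Fit a one-class 'conformity' model from in-spec spectra only.

    Returns the mean spectrum and a (regularized) inverse covariance,
    which together define a Mahalanobis distance to the 'good' cluster.
    No out-of-spec samples are needed at calibration time.
    """
    X = np.asarray(good_spectra, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])   # small ridge for numerical stability
    inv_cov = np.linalg.inv(cov)
    return mean, inv_cov

def mahalanobis(x, mean, inv_cov):
    """Distance of one sample from the center of the 'good' cluster."""
    d = np.asarray(x, dtype=float) - mean
    return float(np.sqrt(d @ inv_cov @ d))

def is_conforming(x, mean, inv_cov, threshold=3.0):
    """True if the sample looks like the good ones; False => investigate.

    The threshold is illustrative; a real method would derive it from
    the calibration-set distances.
    """
    return mahalanobis(x, mean, inv_cov) <= threshold

# Illustrative data: 50 'good' spectra measured at 4 wavelengths.
rng = np.random.default_rng(0)
good = rng.normal(loc=1.0, scale=0.05, size=(50, 4))
mean, inv_cov = fit_good_model(good)

# A sample near the good cluster is accepted; a sample with a large
# shift in one band (an OOS mode never seen in calibration) is flagged.
near_good = mean + 0.01
unseen_oos = mean + np.array([0.5, 0.0, 0.0, 0.0])
print(is_conforming(near_good, mean, inv_cov))
print(is_conforming(unseen_oos, mean, inv_cov))
```

Note how this captures Nuno's point: the model never answers "this is like the bad samples", because it was never shown any; it only answers "this is like the good ones" or "this is different, please investigate", so even failure modes unknown at calibration time are flagged.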
Noah
Posted on Friday, February 25, 2005 - 2:43 am:
Thank you very much for your comments! Noah