11-16-2020, 04:55 PM
Interesting one. I have a few ideas:
- It could be a limit of the numerical stability of our implementation. Even the strongly visible peak is a deviation of only 0.02%, and with a sigma of 20 the algorithm calculates 60 samples to the left and right of the Gauss distribution, sums up all values, and normalizes by the raw sum of these samples from the Gauss curve. The small values at the edge in particular will be close to the resolution of the floating-point numbers when added to the sum, so something might be happening there...
- It could be that the peak occurs exactly 60 samples before the end of the measurement. As mentioned above, we calculate a width of 3*sigma = 60 samples to get numerically close to a true Gaussian, which would otherwise have to be calculated across all samples. When we do not have enough samples to the right (or left), we need to change strategy: instead of using a pre-calculated normalization value, we normalize with the weights that match the available data (see the sketch below). Again, I do not see it right away in the code, but this will also have a numerical limit. Could you check whether the peak is 60 samples from the end?
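To make the second point concrete, here is a minimal NumPy sketch of that scheme. This is not our actual code (the function name and data are made up), but it shows the truncated kernel and the per-position renormalization at the edges:

```python
import numpy as np

def gaussian_smooth(y, sigma=20.0):
    # Truncated Gaussian kernel: radius 3*sigma = 60 samples for sigma = 20
    radius = int(3 * sigma)
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)

    n = len(y)
    out = np.empty(n)
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        # Slice of the kernel that overlaps the available data
        k = kernel[lo - i + radius : hi - i + radius]
        # Near the edges, normalize by the sum of the available weights
        # instead of the pre-calculated full-kernel sum; this switchover
        # is exactly where a small bias or float artifact could hide
        out[i] = np.dot(y[lo:hi], k) / k.sum()
    return out
```

Feeding it a flat or slowly varying synthetic signal and inspecting the last 60 samples would show directly whether the switch to the clipped normalization produces a step of the order you are seeing.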
By the way: have you tried LOESS smoothing? I am not sure it is more stable, though.
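If you want to try it, a quick LOESS pass could look like this, assuming statsmodels is available (the signal and the frac value are only placeholders; frac would need tuning to roughly match the Gaussian window):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Placeholder signal; replace with the measured trace
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 10, 1000)) + rng.normal(0, 0.1, 1000)
x = np.arange(len(y))

# frac = fraction of samples per local fit; 0.12 ~ 120 samples here
y_loess = lowess(y, x, frac=0.12, return_sorted=False)
```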