08-28-2020, 05:56 AM
Hello, I'm a new user to phyphox and I have a question/observation. I'm looking at the accelerometer data without g, with the phone at rest on a table (an iPhone 6s). Since an accelerometer naturally measures the local gravitational field, I'm assuming that "accelerometer without g" applies a correction to the z-data to ostensibly get zero.
However, when I do the experiment, I see a small positive offset in the z-data. The x- and y-data are very close to zero (variable due to noise, no more than ±0.02 m/s²), but the z-reading is consistently between +0.05 and +0.10 m/s², roughly 5x larger than x or y. And indeed, when I switch to the "with g" mode, the z-data is consistently about 9.9, not 9.8.
I'm trying to understand why this might be. Should I assume this is merely a systematic offset in the accelerometer of my particular phone? Is the non-zero value an error in the other sensors used to determine the correction (e.g., the gyroscope used to estimate "down")? Is the applied correction simply not accurate beyond the 0.1 m/s² level? Is this common to all smartphone accelerometers? Any thoughts or suggestions?
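For what it's worth, here is a minimal sketch of how I imagine the correction working and why a sensor bias would survive it. This is purely illustrative, not phyphox's or iOS's actual code: I'm assuming the OS subtracts a nominal-magnitude gravity vector along the estimated "down" direction, and I've made up a 0.07 m/s² z-axis bias to match what I observe. Under that assumption, a constant bias passes straight through into the "without g" output.

```python
import numpy as np

G = 9.81                            # nominal gravity subtracted by the correction (m/s^2), assumed
bias = np.array([0.0, 0.0, 0.07])   # hypothetical constant z-axis sensor bias (m/s^2)
down = np.array([0.0, 0.0, 1.0])    # phone flat on a table, face up

def raw_accel(true_linear):
    """What the sensor reports at rest: gravity reaction plus bias plus any real motion."""
    return true_linear + G * down + bias

def without_g(raw):
    """Assumed correction: subtract a fixed-magnitude g along the estimated 'down'."""
    return raw - G * down

at_rest = without_g(raw_accel(np.zeros(3)))
print(at_rest)  # z component equals the bias, not zero
```

If this picture is right, the "without g" reading at rest is exactly the sensor's own bias, and "with g" reads G plus that bias (~9.88 here), which would explain seeing ~9.9 instead of ~9.8.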