Originally Posted by
apricissimus
This makes sense only if the computer does not sense every pass of the magnet at higher speeds. It would then correct for this by throwing out very short term low speed readings in the middle of sustained high speed readings. Is this true?
I'm not sure exactly what you mean. My understanding of decent computers is that they integrate over a window of a few seconds, calculating distance as (# of rotations x circumference) and time as the difference between the recorded times of the first and last measurements. A computer that's even reasonably decent and correctly set up won't be dropping measurements; that sort of behavior would make your speed readout look very glitchy.
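To make that concrete, here's a minimal sketch of the window-averaging idea as I understand it. The window length, circumference value, and function names are all my own placeholders, not anything from a real computer's firmware.

Code:
from collections import deque

WHEEL_CIRCUMFERENCE_M = 2.096  # example 700c value; you'd measure your own
WINDOW_S = 3.0                 # assumed averaging window of a few seconds

pulses = deque()  # timestamps (seconds) of each magnet pass, oldest first

def record_pulse(t):
    """Store a magnet-pass timestamp and drop anything older than the window."""
    pulses.append(t)
    while pulses and t - pulses[0] > WINDOW_S:
        pulses.popleft()

def speed_kmh():
    """Average speed over the window: rotations x circumference / elapsed time."""
    if len(pulses) < 2:
        return 0.0
    rotations = len(pulses) - 1          # complete intervals between passes
    elapsed = pulses[-1] - pulses[0]     # first-to-last measurement time
    return (rotations * WHEEL_CIRCUMFERENCE_M / elapsed) * 3.6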
As I understand it, there are two competing effects:
* As you go faster, accuracy *increases* because you get more measurements (i.e., rotations) over the averaging window.
* As you go faster, accuracy *decreases* because it eventually gets hard for the electronics to pin down exactly when the magnet is right over the sensor. Note this is also a function of where the magnet sits on the spoke: mounted closer to the hub it moves past the sensor more slowly, which gives the pickup more time to register it.
What I don't know is how those two effects play out, or which one dominates. I expect it varies from computer to computer. There's probably also a "sweet spot" where you get the benefit of averaging before the detection error starts to grow significantly. A rough toy model is sketched below.
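For anyone who wants to play with the numbers, here's a toy simulation of just the detection-timing side of it: it feeds synthetic magnet-pass times, each nudged by a made-up per-pulse jitter, through the first-to-last averaging above. The circumference, window length, and jitter figure are all guesses you'd want to adjust; this isn't a claim about how any real computer behaves.

Code:
import random

CIRC_M = 2.096      # assumed wheel circumference
WINDOW_S = 3.0      # assumed averaging window
JITTER_S = 0.002    # assumed per-pulse detection uncertainty (pure guess)

def simulated_error_kmh(true_kmh, trials=1000):
    """Mean absolute speed error (km/h) from jittered pulse timestamps."""
    period = CIRC_M / (true_kmh / 3.6)   # seconds per wheel rotation
    errors = []
    for _ in range(trials):
        # magnet-pass timestamps inside one window, each nudged by jitter
        n = int(WINDOW_S / period)
        times = [i * period + random.uniform(-JITTER_S, JITTER_S)
                 for i in range(n + 1)]
        if len(times) < 2:
            errors.append(true_kmh)      # too slow to measure within the window
            continue
        elapsed = times[-1] - times[0]
        measured = (len(times) - 1) * CIRC_M / elapsed * 3.6
        errors.append(abs(measured - true_kmh))
    return sum(errors) / trials

for kmh in (5, 15, 30, 50):
    print(f"{kmh:>2} km/h -> mean error ~{simulated_error_kmh(kmh):.3f} km/h")

Changing JITTER_S and WINDOW_S is where it gets interesting: a longer window dilutes the timing error but makes the readout lag behind actual speed changes.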