When the students use a hand-held timer to time an object, the timer
provides a reading down to the hundredths of a second. The resolution of
the measurement is about 0.01 s.
However, the measurement is not accurate to 0.01 s. Because of reaction time,
the measurement cannot be repeated exactly from one trial to the next.
For example, when timing a ball that is dropped from about a meter, most
students obtain a spread in their timings of about 0.1 s.
In other words, the precision is on the order of 0.1 s and the resolution
is on the order of 0.01 s.
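If it helps to see the two numbers side by side, here is a quick Python
sketch of the dropped-ball timing (the 0.1 s reaction-time scatter and the
~0.45 s fall time are illustrative assumptions, not measured values):

import numpy as np

rng = np.random.default_rng(0)

g = 9.8                        # m/s^2
h = 1.0                        # drop height in m
t_true = (2 * h / g) ** 0.5    # ~0.45 s true fall time

# Assumed model: each timing is smeared by reaction time with a
# standard deviation of roughly 0.1 s.
raw = rng.normal(t_true, 0.1, size=30)

# The stopwatch display then quantizes each reading to 0.01 s.
readings = np.round(raw, 2)

print("first few readings:", readings[:5])
print("spread (std dev): %.3f s  <- the precision, ~0.1 s" % readings.std(ddof=1))
print("display step:     0.01 s  <- the resolution")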
There may be other contributions to poor accuracy, e.g., bias in reaction
time, parallax, poorly calibrated timer, etc.; here I am only discussing
the distinction between resolution and precision. BTW, because precision
refers to the repeatability of a measurement, poor precision can be
addressed somewhat by multiple readings, whereas poor resolution cannot.
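Here is the same kind of sketch for that last point: the scatter of the
average of N readings shrinks roughly as 1/sqrt(N), while the 0.01 s display
step of any single reading is untouched by averaging (again, the 0.1 s
scatter and 0.45 s fall time are assumed numbers):

import numpy as np

rng = np.random.default_rng(1)
t_true, sigma = 0.45, 0.1      # assumed fall time and reaction-time scatter

for n in (1, 10, 100):
    # Simulate many sets of N quantized readings and look at how much
    # their averages scatter.
    means = [np.round(rng.normal(t_true, sigma, size=n), 2).mean()
             for _ in range(2000)]
    print("N = %3d readings -> scatter of the mean = %.3f s"
          % (n, np.std(means, ddof=1)))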
I'm not an expert on error analysis so I may be mistaken about this. If
so, I'd appreciate it if someone would set me straight before I mislead all
of my students.