
Re: [Phys-L] Half-Life measurement : uncertainties, correlations, SVD



On 10/18/21 5:33 PM, I discussed the uncertainties in the
half-life data. I should have mentioned that the analysis was
for the my_data.csv observations.

Just now I did the same analysis for the sean_data.csv observations.

sean_data
   cost²    errorbar    fast.amp     fast.dk    slow.amp     slow.dk          bl    turn
 55387.0      0.425%   -0.007962    0.071437    0.786899    -0.60846    0.073478   15.0°
  4287.48     1.527%    0.011017   -0.090177   -0.603719   -0.768252    0.192509   22.7°
   659.969    3.893%   -0.790928    0.583555   -0.085985    -0.05101   -0.154603    5.0°
    87.595   10.685%    0.472814     0.75671   -0.051547    0.069502    0.443113   13.2°
    33.393   17.305%   -0.388195   -0.271329    0.079134    0.179258    0.858656   23.6°
(Each row is one eigenvalue/eigenvector pair: "bl" is the baseline
parameter, and "turn" is the angle relative to the corresponding
my_data eigenvector tabulated below.)

To a first approximation this is similar to the other data set
(see below).
++ The big uncertainty is still more than 40× larger than the small
uncertainty. The log-improbability landscape is taco-shaped.
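
To make that concrete: the errorbar column is just 1/√(cost²), and
the 40× ratio is √(55387/33.393). Here is a minimal numpy sketch that
reproduces those numbers from the sean_data table; the suggestion that
the eigenpairs come from diagonalizing the cost-function Hessian is my
assumption, not something stated above.

    import numpy as np

    # Eigenvalues of cost², copied from the sean_data table above.
    eigvals = np.array([55387.0, 4287.48, 659.969, 87.595, 33.393])

    # Along each eigenvector the relative uncertainty goes as
    # 1/sqrt(eigenvalue); this reproduces the errorbar column.
    errorbars = 1.0 / np.sqrt(eigvals)
    for lam, eb in zip(eigvals, errorbars):
        print(f"{lam:10.3f}   {100*eb:7.3f}%")

    # Biggest vs smallest uncertainty: more than 40x.
    print(errorbars[-1] / errorbars[0])        # ~40.7

    # If H were the Hessian of the cost function at the fit minimum,
    # np.linalg.eigh(H) would yield such eigenvalue/eigenvector pairs.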

Even so, there are some differences we can notice.
−− The eigenvector with the most uncertainty has rotated about 24°
relative to the other data set, becoming more nearly aligned with
the "baseline" parameter.
−− The eigenvectors with the least and next-to-least uncertainty
have both rotated to become much less sensitive to the "baseline"
parameter. They are now mostly the gerade and ungerade combinations
of the slow amplitude and slow decay constant.
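
Those rotation angles can be checked directly: the turn column in the
sean_data table is reproduced by the angle between each sean_data
eigenvector and the corresponding my_data eigenvector. A small sketch,
with the rows copied from the two tables:

    import numpy as np

    # Eigenvector rows copied from the tables above, ordered from
    # largest to smallest cost² eigenvalue.
    sean = np.array([
        [-0.007962,  0.071437,  0.786899, -0.60846,   0.073478],
        [ 0.011017, -0.090177, -0.603719, -0.768252,  0.192509],
        [-0.790928,  0.583555, -0.085985, -0.05101,  -0.154603],
        [ 0.472814,  0.75671,  -0.051547,  0.069502,  0.443113],
        [-0.388195, -0.271329,  0.079134,  0.179258,  0.858656]])
    mine = np.array([
        [-0.00775,   0.065394,  0.837471, -0.46193,   0.284476],
        [ 0.010599, -0.078896, -0.532074, -0.633576,  0.556005],
        [-0.772627,  0.60252,  -0.082759, -0.130203, -0.12734 ],
        [ 0.459398,  0.734074, -0.028012,  0.293975,  0.403588],
        [-0.437983, -0.29598,   0.088915,  0.530875,  0.656379]])

    # Angle between corresponding eigenvectors; abs() makes the result
    # insensitive to the arbitrary overall sign of each eigenvector.
    for vs, vm in zip(sean, mine):
        print(f"{np.degrees(np.arccos(abs(vs @ vm))):5.1f}°")
    # Prints 15.0°, 22.7°, 5.0°, 13.2°, 23.6° -- matching the turn column.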

my_data
   cost²    errorbar    fast.amp     fast.dk    slow.amp     slow.dk          bl
 42177.2      0.487%   -0.00775     0.065394    0.837471    -0.46193    0.284476
  3165.41     1.777%    0.010599   -0.078896   -0.532074   -0.633576    0.556005
   375.998    5.157%   -0.772627     0.60252   -0.082759   -0.130203    -0.12734
    40.978   15.622%    0.459398    0.734074   -0.028012    0.293975    0.403588
    23.718   20.533%   -0.437983    -0.29598    0.088915    0.530875    0.656379


Looking farther upstream, the main difference between the two data
sets is that Sean started out with a markedly larger amount of the
fast component.
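
For context, here is how I am reading the five fit parameters: a
two-exponential decay plus a constant baseline. The functional form
is my assumption, inferred from the parameter names, not something
stated above.

    import numpy as np

    def model(t, fast_amp, fast_dk, slow_amp, slow_dk, bl):
        # Assumed fit model: fast and slow decay components plus a
        # constant baseline. Names mirror the table headers.
        return (fast_amp * np.exp(-fast_dk * t)
                + slow_amp * np.exp(-slow_dk * t)
                + bl)

On that reading, any delay between preparing the sample and starting
the count eats directly into the observable fast amplitude.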

Action item: This suggests that technique is important. If I were
doing it, I would /practice/ with a blank sample, carrying it across
the room, emplacing it, slamming the shields closed, and starting
the clock and counter. Minimize the time it takes for all that.

This is what we call a /pre-thought/ process. I never advocate
doing anything thoughtlessly ... but often it helps to think
things through in advance, and to practice, so that you don't
need to spend much time on /additional/ thinking when dealing
with the live sample.

Also I renew the suggestion to split the experiment: Do it once
optimized for the fast component, and again optimized for the slow
component.