To a first approximation, this is similar to what we saw for the
other data set (see below).
++ The largest uncertainty is still more than 40× larger than the
smallest uncertainty. The log-improbability landscape is
taco-shaped.
Even so, there are some differences we can notice.
−− The eigenvector with the most uncertainty has rotated about 24°
relative to the other data set, becoming more nearly aligned with
the "baseline" parameter.
−− The eigenvectors with the least and next-to-least uncertainty
have both rotated to become much less sensitive to the "baseline"
parameter. They are now mostly the gerade and ungerade combinations
of the slow amplitude and the slow decay constant (see the sketch
below).
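In case it is useful, here is a minimal sketch of the eigen-analysis
I have in mind, assuming the fit hands back a covariance matrix and
assuming the parameter ordering shown in the comments. The names and
helper functions are mine, not anything from the actual fitting code.

  import numpy as np

  # Assumed parameter ordering, for illustration only:
  # [fast amplitude, fast decay, slow amplitude, slow decay, baseline]
  PARAMS = ["A_fast", "tau_fast", "A_slow", "tau_slow", "baseline"]

  def uncertainty_eigs(cov):
      """Eigen-decomposition of a fit covariance matrix.

      Returns the principal-axis uncertainties (square roots of the
      eigenvalues) and the matching eigenvectors, smallest first.
      """
      evals, evecs = np.linalg.eigh(cov)      # cov is symmetric
      order = np.argsort(evals)
      return np.sqrt(evals[order]), evecs[:, order]

  def rotation_angle(v1, v2):
      """Angle in degrees between corresponding eigenvectors."""
      c = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
      return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

  # Given covariance matrices cov_a and cov_b from the two fits:
  #   sig_a, vec_a = uncertainty_eigs(cov_a)
  #   sig_b, vec_b = uncertainty_eigs(cov_b)
  #   print("big/small uncertainty ratio:", sig_a[-1] / sig_a[0])
  #   print("rotation of the big-uncertainty axis (deg):",
  #         rotation_angle(vec_a[:, -1], vec_b[:, -1]))
  #
  # The "gerade" and "ungerade" combinations mentioned above are the
  # directions that weight A_slow and tau_slow equally, with the same
  # sign (gerade) or opposite signs (ungerade).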
Looking farther upstream, the main difference between the two data
sets is that sean started out with a markedly larger amount of the
fast component.
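For concreteness, here is a minimal sketch of the model I am
picturing: a fast and a slow exponential decay plus a constant
baseline. The function and parameter names are mine; adjust them to
match whatever the real analysis uses.

  import numpy as np
  from scipy.optimize import curve_fit

  def decay_model(t, A_fast, tau_fast, A_slow, tau_slow, baseline):
      """Two-component exponential decay plus a constant baseline."""
      return (A_fast * np.exp(-t / tau_fast)
              + A_slow * np.exp(-t / tau_slow)
              + baseline)

  # Given arrays t (bin centers) and counts (counts per bin), with
  # rough initial guesses p0 (the values here are placeholders):
  #   p0 = [1000.0, 10.0, 100.0, 300.0, 5.0]
  #   popt, pcov = curve_fit(decay_model, t, counts, p0=p0,
  #                          sigma=np.sqrt(np.maximum(counts, 1)))
  # The ratio popt[0] / popt[2] is one way to quantify "a markedly
  # larger amount of the fast component", and pcov is the covariance
  # matrix that feeds the eigen-analysis sketched above.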
Action item: This suggests that technique is important. If I were
doing it, I would /practice/ with a blank sample, carrying it across
the room, emplacing it, slamming the shields closed, and starting
the clock and counter. Minimize the time it takes for all that.
This is what we call a /pre-thought/ process. I never advocate
doing anything thoughtlessly ... but often it helps to think
things through in advance, and to practice, so that you don't
need to spend much time on /additional/ thinking when dealing
with the live sample.
Also, I renew the suggestion to split the experiment: do it once,
optimized for the fast component, and again, optimized for the slow
component.