The clock estimator has a potential division by zero.
Using `iteration + 1` also seems more logical to me for
an average.
Found with Coverity in a downstream project.
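
For illustration, here is a minimal sketch of the kind of running-average
loop this describes; the names are hypothetical, not the actual estimator
code. Dividing by `iteration` is undefined on the first pass and off by one
afterwards; `iteration + 1` is the number of samples taken so far.

```cpp
#include <chrono>

// Hypothetical sketch of a clock-cost estimator loop.
template <typename Clock>
double estimate_clock_cost(int iterations) {
    double mean = 0.0;
    double total_ns = 0.0;
    for (int iteration = 0; iteration < iterations; ++iteration) {
        auto a = Clock::now();
        auto b = Clock::now();
        total_ns += std::chrono::duration<double, std::nano>(b - a).count();
        // was: mean = total_ns / iteration;  // divides by zero when iteration == 0
        mean = total_ns / (iteration + 1);    // samples taken so far
    }
    return mean;
}
```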
Some platforms (e.g. TDM-GCC) can have terrible timer resolution,
and our resolution-checking code will then loop for an inordinate
amount of time.
This change makes the calibration give up after 3 seconds and
just use the values measured so far.
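
Roughly, the idea is a deadline check like the following sketch
(hypothetical names and structure, not the actual calibration code):

```cpp
#include <chrono>
#include <vector>

// Hypothetical sketch: collect clock-resolution samples, but stop
// after 3 seconds and keep whatever was measured so far.
template <typename Clock>
std::vector<double> sample_resolution(int wanted) {
    auto deadline = Clock::now() + std::chrono::seconds(3);
    std::vector<double> samples;
    samples.reserve(wanted);
    while (static_cast<int>(samples.size()) < wanted
           && Clock::now() < deadline) {       // give up after 3 seconds
        auto a = Clock::now();
        auto b = Clock::now();
        while (b == a) { b = Clock::now(); }   // wait for the next tick
        samples.push_back(
            std::chrono::duration<double, std::nano>(b - a).count());
    }
    return samples;   // possibly fewer than `wanted` on bad clocks
}
```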
This leaves one open question: how do we signal that the resolution
is terrible and benchmarking should not happen?
Fixes #1237