* Experimental performance metering
* Average over repeated runs with the `--repeat` option (a sketch of the averaging follows this list)
* Add "version" field to the report
  The version is taken from `air/Cargo.toml` (a sketch follows this list).
* Allow disabling binary preparation
  with the `--no-prepare-binaries` option.
* Human-readable execution time in the report
* Add dashboard benchmark
* Human-readable text report
* Fix stale benchmarks
* Data (de)serialization and execution benchmarks:
  there are two kinds of benchmarks, a relatively short one with huge call results,
  and one with a long trace of small call results. Moreover, there are two cases
  for each: the same data merged with result comparison, and data from different
  par branches merged without comparison (see the sketch after this list).
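
A minimal sketch of the averaging behind the `--repeat` option; the helper name
and the way a run is invoked are assumptions for illustration, not the metering
tool's actual code.

```rust
use std::time::{Duration, Instant};

/// Hypothetical helper: run the workload `repeat` times and return the mean
/// wall-clock duration. The real tool may also record per-run details; this
/// only shows the averaging itself.
fn average_duration<F: FnMut()>(repeat: u32, mut execute: F) -> Duration {
    let mut total = Duration::ZERO;
    for _ in 0..repeat {
        let started = Instant::now();
        execute();
        total += started.elapsed();
    }
    total / repeat
}

fn main() {
    // Stand-in workload; a benchmark would execute an AIR script instead.
    let avg = average_duration(5, || {
        std::thread::sleep(Duration::from_millis(10));
    });
    println!("average over 5 runs: {avg:?}");
}
```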
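
A rough sketch of how the "version" field could be taken from `air/Cargo.toml`;
the use of the `toml` crate and the simplified error handling are assumptions
about the approach, not the project's actual code.

```rust
use std::fs;

/// Hypothetical sketch: read `package.version` from a Cargo.toml so it can be
/// written into the benchmark report. Requires the `toml` crate as a dependency.
fn crate_version(manifest_path: &str) -> Option<String> {
    let content = fs::read_to_string(manifest_path).ok()?;
    let manifest: toml::Value = content.parse().ok()?;
    manifest
        .get("package")?
        .get("version")?
        .as_str()
        .map(|version| version.to_string())
}

fn main() {
    // The changelog entry says the version comes from `air/Cargo.toml`.
    match crate_version("air/Cargo.toml") {
        Some(version) => println!("report version: {version}"),
        None => eprintln!("could not read the version field"),
    }
}
```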
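
The 2×2 benchmark matrix from the last item could be spelled out like this;
the type and variant names are illustrative only, not taken from the suite.

```rust
/// Shape of the data a benchmark feeds to the interpreter.
#[derive(Clone, Copy, Debug)]
enum TraceShape {
    /// Relatively short trace, but with huge call results.
    ShortHugeResults,
    /// Long trace made of many small call results.
    LongSmallResults,
}

/// How the two input data pieces relate during merging.
#[derive(Clone, Copy, Debug)]
enum MergeMode {
    /// The same data on both sides, so call results are compared on merge.
    SameDataWithComparison,
    /// Data from different par branches, merged without comparison.
    DifferentParBranches,
}

fn main() {
    // Enumerate the four (shape, mode) combinations the changelog describes.
    let shapes = [TraceShape::ShortHugeResults, TraceShape::LongSmallResults];
    let modes = [
        MergeMode::SameDataWithComparison,
        MergeMode::DifferentParBranches,
    ];
    for shape in shapes {
        for mode in modes {
            println!("benchmark case: {shape:?} + {mode:?}");
        }
    }
}
```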