* feat(aquavm-air-cli): `run` fails if AquaVM fails
Unless the `--no-fail` flag is passed to `run`, the command now exits with an error
whenever AquaVM fails. As a result, benchmarks fail on errors unless `--no-fail` is
set for a specific benchmark (see the usage sketch after this list).
* Fix dashboard and network_explore benches
* Convert benchmark data to new format
* `performance_metering`: use dirs only
Ordinary files like README.md are no longer treated as benchmarks.
* Update `benches/performance_metering/README.md`
* Fix performance report
It looks like performance reports were merged in the wrong order: the data was not
sorted by machine ID. Sorting is needed for stable diffs.
* Run benchmarks on a MacBook Air M1
* Experimental performance metering
* Average over repeated runs with the `--repeat` option
* Add "version" field to the report
The version is taken from `air/Cargo.toml`.
* Allow skipping binary preparation
with the `--no-prepare-binaries` option.
* Human-readable execution time in the report
* Add dashboard benchmark
* Human-readable text report
* Fix stale benchmarks
* Data (de)serialization and execution benchmarks:
Two kinds of benchmarks: a relatively short trace with huge call results, and a
long trace of small call results. Moreover, there are two cases for each: the same
data merged with comparison, and data from different par branches merged without
comparison.
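
A minimal usage sketch for the flags mentioned above. The binary names, whether
`--repeat` takes a count, and all other arguments are assumptions for illustration,
not the actual interface:

```sh
# `run` now fails when AquaVM fails; opt out for a specific benchmark
# (binary name and the remaining arguments are placeholders):
air run --no-fail <other-run-args>

# Average each benchmark over several runs and skip rebuilding the binaries
# before measuring (runner name and the count argument are placeholders):
performance_metering --repeat 5 --no-prepare-binaries
```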