* chore(benchmark): clear stale benchmark data
Originally, the benchmark util merged benchmark data, i.e. it only inserted new
data into the benchmark store. However, benches sometimes disappear, and
stale, incorrect information was kept for them (e.g. the AquaVM version,
which is per-machine).
Nevertheless, this merging behavior is sometimes useful, e.g. when you want
to add a new benchmark without rerunning all the others. For such cases,
the `aquavm_performance_metering run --unsafe-merge-results` option is
added.
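A minimal Rust sketch of the two update modes; the store layout and function names are hypothetical, not the actual implementation of the metering utility:

```rust
use std::collections::HashMap;

// Hypothetical in-memory view of the benchmark store, keyed by bench name.
type BenchStore = HashMap<String, f64>;

/// New default: replace the stored data entirely, so benches that no longer
/// exist are dropped together with their stale metadata.
fn store_results(store: &mut BenchStore, fresh: BenchStore) {
    *store = fresh;
}

/// Old merge-only behavior, now opt-in via `--unsafe-merge-results`:
/// only insert or overwrite the benches that were actually run.
fn merge_results(store: &mut BenchStore, fresh: BenchStore) {
    store.extend(fresh);
}

fn main() {
    let mut store = BenchStore::from([("removed_bench".to_string(), 12.5)]);
    let fresh = BenchStore::from([("new_bench".to_string(), 3.1)]);

    let mut merged = store.clone();
    merge_results(&mut merged, fresh.clone());
    assert!(merged.contains_key("removed_bench")); // stale entry survives

    store_results(&mut store, fresh);
    assert!(!store.contains_key("removed_bench")); // stale entry is cleared
}
```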
* Restore the `null` benchmark
* Generate the text report exactly once
It was previously regenerated anywhere from 0 to N times, which was wrong.
* feat(avm-server)!: keypair and particle ID arguments
Add a `&fluence_keypair::KeyPair` argument to `AVM::call` and
`AVMRunner::call`. This value is further forwarded in a deconstructed
form to the WASM AIR interpreter, but is not used there yet. Also,
`AVMRunner::call` gets a `particle_id: String` argument.
feat(air-interpreter)!: `invoke` methods have three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>`, and `particle_id: String`.
feat(aquavm-air): `air::execute_air` has the same three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>`, and `particle_id: String`.
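As a rough illustration of how these pieces flow together, here is a minimal Rust sketch; `run_air` and all the surrounding names are stand-ins, not the actual `execute_air`/`invoke` signatures:

```rust
/// Stand-in for the interpreter entry point with the three new arguments.
fn run_air(
    air_script: String,
    prev_data: Vec<u8>,
    current_data: Vec<u8>,
    key_format: u8,
    secret_key_bytes: Vec<u8>,
    particle_id: String,
) {
    // The real interpreter can reassemble the keypair from
    // (key_format, secret_key_bytes); as of this change it is not used yet.
    let _ = (air_script, prev_data, current_data, key_format, secret_key_bytes, particle_id);
}

fn main() {
    // An embedder such as AVMRunner::call deconstructs its
    // &fluence_keypair::KeyPair into these raw pieces; the literal
    // values below are placeholders only.
    let key_format: u8 = 0;
    let secret_key_bytes: Vec<u8> = vec![0; 32];

    run_air(
        "(null)".to_string(),
        vec![],
        vec![],
        key_format,
        secret_key_bytes,
        "particle-id-demo".to_string(),
    );
}
```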
feat(aquavm-air-cli)!: add `--random-key`/`--ed25519-key file` options to AIR CLI.
* feat(avm-server)!: Add `RunnerError::KeypairError`
* chore(bench): Add signature performance benchmarks
These benchmarks contain valid signatures, so they should work with
verification out of the box.
---------
Co-authored-by: Artsiom Shamsutdzinau <shamsartem@gmail.com>
Co-authored-by: folex <0xdxdy@gmail.com>
* feat(aquavm-air-cli): `run` fails if AquaVM fails
Unless `run --no-fail` is provided. This makes benchmarks fail on errors,
unless you pass `--no-fail` to a specific benchmark.
* Fix dashboard and network_explore benches
* Convert benchmark data to new format
* `performance_metering`: use dirs only
Ordinary files like README.md are no longer considered benchmarks.
* Update `benches/performance_metering/README.md`
* Fix performance report
It looks like performance reports were merged in the wrong order: the data
was not sorted by machine ID. Sorting is needed for stable diffs.
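A tiny illustrative sketch of the ordering requirement, with an assumed (not actual) report structure:

```rust
// Illustrative only: merged report data is kept ordered by machine ID so
// that regenerating the report produces stable diffs.
struct MachineReport {
    machine_id: u64,
    // ... benchmark entries elided
}

fn sort_for_stable_diffs(reports: &mut Vec<MachineReport>) {
    reports.sort_by_key(|r| r.machine_id);
}

fn main() {
    let mut reports = vec![
        MachineReport { machine_id: 3 },
        MachineReport { machine_id: 1 },
    ];
    sort_for_stable_diffs(&mut reports);
    assert_eq!(reports[0].machine_id, 1);
}
```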
* Run benchmarks on MacBook Air M1
* Experimental performance metering
* Average over repeated runs with the `--repeat` option
* Add "version" field to the report
The version is taken from `air/Cargo.toml`.
* Allow disabling binary preparation
with the `--no-prepare-binaries` option.
* Human-readable execution time in the report
* Add dashboard benchmark
* Human-readable text report