* chore(testing-framework)!: fix WASM test runner
Native mode was used previously because some packages relied on the native runner
for their tests.
This PR makes it possible to select the test runner explicitly. Many testing-framework
types are now parameterized with a runner type, with mostly compatible defaults, as sketched below.
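A minimal sketch of the idea, not the actual testing-framework API: a hypothetical `AirRunner` trait and an executor type whose runner parameter defaults to the WASM runner, so existing tests keep compiling while a different runner can be selected explicitly.

```rust
// Illustrative only: these trait and struct definitions are stand-ins,
// not the real testing-framework types.
trait AirRunner {
    fn name(&self) -> &'static str;
}

struct WasmAirRunner;
struct NativeAirRunner;

impl AirRunner for WasmAirRunner {
    fn name(&self) -> &'static str { "wasm" }
}

impl AirRunner for NativeAirRunner {
    fn name(&self) -> &'static str { "native" }
}

// The runner type parameter defaults to the WASM runner, so test code
// that does not name a runner keeps its previous behavior.
struct AirScriptExecutor<R: AirRunner = WasmAirRunner> {
    runner: R,
}

impl<R: AirRunner> AirScriptExecutor<R> {
    fn new(runner: R) -> Self {
        Self { runner }
    }

    fn runner_name(&self) -> &'static str {
        self.runner.name()
    }
}

fn main() {
    // Default: WASM runner.
    let default_executor = AirScriptExecutor::new(WasmAirRunner);
    // Explicitly selected native runner.
    let native_executor = AirScriptExecutor::<NativeAirRunner>::new(NativeAirRunner);
    println!("{} / {}", default_executor.runner_name(), native_executor.runner_name());
}
```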
* chore(testing-framework): Add `ReleaseWasmAirRunner`
* chore(testing-framework)!: Rename `AirScriptExecutor::simple` to `AirScriptExecutor::from_annotated`.
* feat(avm-server)!: keypair and particle ID arguments
Add a `&fluence_keypair::KeyPair` argument to `AVM::call` and
`AVMRunner::call`. This value is forwarded in a deconstructed form to
the WASM AIR interpreter, but is not used there yet. `AVMRunner::call`
also gains a `particle_id: String` argument.
feat(air-interpreter)!: `invoke` methods have three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>` and `particle_id: String`.
feat(aquavm-air): `air::execute_air` has three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>` and `particle_id: String`
(a sketch of the new argument shape follows this entry).
feat(aquavm-air-cli)!: add `--random-key`/`--ed25519-key file` options to AIR CLI.
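A minimal, self-contained sketch of the argument shape, using hypothetical stand-in types rather than the real `avm-server` or `air-interpreter` signatures: the key pair is deconstructed into `key_format: u8` and `secret_key_bytes: Vec<u8>`, and a `particle_id: String` is passed alongside.

```rust
// Hypothetical stand-in for a key pair; the real code uses
// `fluence_keypair::KeyPair`.
struct KeyPair {
    key_format: u8,       // e.g. a discriminant for the key algorithm
    secret_key: Vec<u8>,  // raw secret key bytes
}

impl KeyPair {
    // Deconstruct the key pair into the form forwarded to the WASM AIR
    // interpreter.
    fn into_parts(self) -> (u8, Vec<u8>) {
        (self.key_format, self.secret_key)
    }
}

// Stand-in for the interpreter-facing call that now receives the extra
// arguments; the real `invoke`/`execute_air` take more parameters.
fn execute_air_sketch(
    air: String,
    key_format: u8,
    secret_key_bytes: Vec<u8>,
    particle_id: String,
) {
    // The key arguments are forwarded but not used by the interpreter yet.
    let _ = (air, key_format, secret_key_bytes, particle_id);
}

fn main() {
    let keypair = KeyPair { key_format: 0, secret_key: vec![0u8; 32] };
    let (key_format, secret_key_bytes) = keypair.into_parts();
    execute_air_sketch("(null)".to_string(), key_format, secret_key_bytes, "particle-id".to_string());
}
```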
* feat(avm-server)!: Add `RunnerError::KeypairError`
* chore(bench): Add signature performance benchmarks
These benchmarks contain a valid signature, so they should work with
verification out of the box.
---------
Co-authored-by: Artsiom Shamsutdzinau <shamsartem@gmail.com>
Co-authored-by: folex <0xdxdy@gmail.com>
* feat(aquavm-air-cli): `run` fails if AquaVM fails
Unless `run --no-fail` is provided. This makes benchmarks fail on errors
unless `--no-fail` is passed to the specific benchmark.
* Fix dashboard and network_explore benches
* Convert benchmark data to new format
* `performance_metering`: use dirs only
Ordinary files like README.md are no longer considered benchmarks.
* Update `benches/performance_metering/README.md`
* Fix performance report
It looks like performance reports were merged in the wrong order: the data was not
sorted by machine ID. Sorting is needed for stable diffs.
* Run benchmarks on Macbook Air M1
* Experimental performance metering
* Average on repeated runs with `--repeat` option
* Add "version" field to the report
The version is taken from `air/Cargo.toml`.
* Allow disabling binary preparation
with the `--no-prepare-binaries` option.
* Human-readable execution time in the report
* Add dashboard benchmark
* Human-readable text report
* Fix stale benchmarks
* Data (de)serialization and execution benchmarks:
Two kinds of benchmark: a relatively short trace with huge call results, and
a long trace of small call results. Moreover, there are two cases for each:
the same data merged with comparison, and data from different
`par` branches merged without comparison (see the sketch below).
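A hedged sketch, not part of the benchmark suite itself, that only enumerates the resulting 2x2 case matrix of trace shape and merge mode; the variant names are illustrative.

```rust
// Illustrative enumeration of the four benchmark variants described above.
#[derive(Debug, Clone, Copy)]
enum TraceShape {
    ShortTraceHugeCallResults,
    LongTraceSmallCallResults,
}

#[derive(Debug, Clone, Copy)]
enum MergeMode {
    SameDataWithComparison,
    DifferentParBranchesWithoutComparison,
}

fn main() {
    let shapes = [
        TraceShape::ShortTraceHugeCallResults,
        TraceShape::LongTraceSmallCallResults,
    ];
    let modes = [
        MergeMode::SameDataWithComparison,
        MergeMode::DifferentParBranchesWithoutComparison,
    ];

    // Four benchmark variants in total.
    for shape in shapes {
        for mode in modes {
            println!("{shape:?} x {mode:?}");
        }
    }
}
```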