Instead of boxing each node, the `Instruction` type now boxes its variable-size elements. Thus `Instruction` itself is quite lean, and the allocator deals with the variable-sized element bodies.
The total number of allocations is more or less the same, but less space is wasted on unused memory: previously `Instruction` was 112 bytes on WASM; now it is 16.
This reduces memory consumption on large AIR scripts (heap size decreased from 7.7 MiB to 4.625 MiB in the parser-10000-100 benchmark, and from 5.115 MiB to 4.375 MiB in the new parser-calls-10000-100 benchmark).
This is a breaking change because the API changes (though code that navigates the parsed tree should generally work as is).
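A minimal sketch of the layout change, with illustrative types rather than the real AIR AST definitions:

```rust
// Illustrative sketch only; these are not the real AIR AST types.

// Before: large variants stored inline make every value of the enum
// as big as its biggest variant.
enum FatInstruction {
    Call {
        peer_id: String,
        service_id: String,
        function_name: String,
    },
    Seq(Box<FatInstruction>, Box<FatInstruction>),
    Null,
}

// After: each variant boxes its variable-size body, so the enum is
// just a tag plus a pointer, and the allocator deals with the
// variable-sized element behind the box.
enum LeanInstruction {
    Call(Box<Call>),
    Seq(Box<Seq>),
    Null,
}

struct Call {
    peer_id: String,
    service_id: String,
    function_name: String,
}

struct Seq(LeanInstruction, LeanInstruction);

fn main() {
    use std::mem::size_of;
    // Exact numbers depend on the target; the 112 -> 16 bytes quoted
    // above are for the real type on WASM.
    println!("fat:  {} bytes", size_of::<FatInstruction>());
    println!("lean: {} bytes", size_of::<LeanInstruction>());
}
```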
* chore(benchmark): clear stale benchmark data
Originally, the benchmark utility merged benchmark data, i.e. it only
inserted new data into the benchmark store. However, benches sometimes
disappear, and incorrect information was kept for them (e.g. the AquaVM
version, which is per-machine).
However, this functionality is sometimes useful, e.g. when you want to
add a new benchmark without rerunning all the others. For such cases,
the `aquavm_performance_metering run --unsafe-merge-results` option was
added.
* Restore the `null` benchmark
* Generate the text report exactly once
Previously it was generated anywhere from 0 to N times, which was wrong.
* feat(avm-server)!: keypair and particle ID arguments
Add a `&fluence_keypair::KeyPair` argument to `AVM::call` and
`AVMRunner::call`. This value is forwarded in a deconstructed form to
the WASM AIR interpreter, but is not used there yet. Also,
`AVMRunner::call` gets a `particle_id: String` argument.
feat(air-interpreter)!: `invoke` methods have three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>` and `particle_id: String`.
feat(aquavm-air): `air::execute_air` has the same three new arguments:
`key_format: u8`, `secret_key_bytes: Vec<u8>` and `particle_id: String`.
feat(aquavm-air-cli)!: add `--random-key`/`--ed25519-key file` options to AIR CLI.
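A hypothetical sketch of the deconstruction step described above; the `KeyPair` stand-in and both function signatures are illustrative, not the real `avm-server`/`fluence_keypair` API:

```rust
// Hypothetical stand-in for `fluence_keypair::KeyPair`; the real API
// may differ.
struct KeyPair {
    format: u8,
    secret: Vec<u8>,
}

// Interpreter side: `invoke` receives plain values that can cross the
// WASM boundary. Per the note above, they are not used there yet.
fn invoke(air: &str, key_format: u8, secret_key_bytes: Vec<u8>, particle_id: String) {
    let _ = (air, key_format, secret_key_bytes, particle_id);
}

// Host side: a runner-like call takes the typed keypair by reference
// plus the particle ID, and forwards the keypair deconstructed into
// its format tag and raw secret bytes.
fn runner_call(air: &str, keypair: &KeyPair, particle_id: String) {
    invoke(air, keypair.format, keypair.secret.clone(), particle_id);
}

fn main() {
    let keypair = KeyPair { format: 0, secret: vec![0u8; 32] };
    runner_call("(null)", &keypair, "some-particle-id".to_owned());
}
```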
* feat(avm-server)!: Add `RunnerError::KeypairError`
* chore(bench): Add signature performance benchmarks
These benchmarks contain valid signatures, so they should work with
verification out of the box.
---------
Co-authored-by: Artsiom Shamsutdzinau <shamsartem@gmail.com>
Co-authored-by: folex <0xdxdy@gmail.com>
* feat(aquavm-air-cli): `run` fails if AquaVM fails
Unless `run --no-fail` is provided. This makes benchmarks fail on
errors, unless you pass `--no-fail` to a specific benchmark.
* Fix dashboard and network_explore benches
* Convert benchmark data to new format
* `performance_metering`: use dirs only
Ordinary files like README.md are no longer considered benchmarks.
* Update `benches/performance_metering/README.md`
* Fix performance report
It looks like performance reports were merged in the wrong order: the
data was not sorted by machine ID. Sorting is needed for stable diffs.
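A minimal sketch of the ordering invariant, assuming report entries are keyed by machine ID (the report shape here is illustrative, not the real schema):

```rust
use std::collections::BTreeMap;

// A BTreeMap iterates in key order, so a report keyed by machine ID
// always serializes in the same order and its diffs stay stable.
fn stable_report(entries: Vec<(String, String)>) -> BTreeMap<String, String> {
    entries.into_iter().collect()
}

fn main() {
    let report = stable_report(vec![
        ("machine-b".into(), "bench data".into()),
        ("machine-a".into(), "bench data".into()),
    ]);
    // Prints machine-a first, regardless of insertion order.
    for (machine_id, data) in &report {
        println!("{machine_id}: {data}");
    }
}
```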
* Run benchmarks on Macbook Air M1
* Experimental performance metering
* Average over repeated runs with the `--repeat` option
* Add "version" field to the report
The version is read from `air/Cargo.toml`.
* Allow disabling binary preparation
with the `--no-prepare-binaries` option.
* Human-readable execution time in the report
* Add dashboard benchmark
* Human-readable text report