We have a peerstore that keeps all data for all observed peers in memory with no eviction.
This is fine when you don't discover many peers, but when using the DHT you encounter a significant number of peers, so peer storage grows without bound over time.
We do have a persistent peerstore, but it only periodically writes peers to the datastore so they can be read back at startup, while still keeping everything in memory.
It also means a restart gives no temporary reprieve from the memory growth, as the previously observed peer data is read straight back into memory at startup.
This change refactors the peerstore to use a datastore by default, reading and writing peer info as it arrives. It can be configured with a MemoryDatastore if desired.
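For illustration, a rough sketch of what the configuration could look like (the package names, exports and config keys below are assumptions for the sake of the example, not the exact API introduced by this change):

```js
const Libp2p = require('libp2p')
const TCP = require('libp2p-tcp')
const Mplex = require('libp2p-mplex')
const { NOISE } = require('libp2p-noise')
const { MemoryDatastore } = require('datastore-core')

async function createNode () {
  return Libp2p.create({
    addresses: { listen: ['/ip4/0.0.0.0/tcp/0'] },
    modules: {
      transport: [TCP],
      streamMuxer: [Mplex],
      connEncryption: [NOISE]
    },
    // Peer data is read from and written to this datastore as it arrives.
    // A MemoryDatastore keeps everything in memory; a persistent
    // implementation (e.g. a LevelDB-backed datastore) can be supplied instead.
    datastore: new MemoryDatastore()
  })
}
```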
It was necessary to change the peerstore and *book (address book, proto book, etc.) interfaces to be asynchronous, since the datastore API is asynchronous.
BREAKING CHANGE: `libp2p.handle`, `libp2p.registrar.register` and the peerstore methods have become async
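A rough sketch of the impact on calling code (the protocol name and peerstore calls below are illustrative; `run` is a hypothetical helper that receives an already-created node):

```js
const PeerId = require('peer-id')
const multiaddr = require('multiaddr')

async function run (libp2p) {
  // Registering a protocol handler is now asynchronous
  await libp2p.handle('/echo/1.0.0', ({ stream }) => {
    // handle the incoming stream here
  })

  // Peerstore reads and writes now go through the datastore, so they return promises
  const peerId = await PeerId.create()
  await libp2p.peerStore.addressBook.set(peerId, [multiaddr('/ip4/127.0.0.1/tcp/4001')])
  const addrs = await libp2p.peerStore.addressBook.get(peerId)
  console.log(addrs)
}
```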
* chore: auto relay example
* chore: update examples to use process arguments
* chore: add test setup for node tests and test for auto-relay
* chore: apply suggestions from code review
* chore: do not use promise for multiaddrs event on example