We have a peerstore that keeps all data for all observed peers in memory with no eviction.
This is fine when you don't discover many peers, but when using the DHT a node encounters a significant number of peers, so peer storage grows unbounded over time.
We have a persistent peerstore, but it only periodically writes peers into the datastore to be read back at startup, still keeping them all in memory.
This also means a restart gives no temporary reprieve from the memory leak, since the previously observed peer data is read back into memory at startup.
This change refactors the peerstore to use a datastore by default, reading and writing peer info as it arrives. It can be configured with a MemoryDatastore if desired.
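As a rough illustration, the datastore is supplied through libp2p's top-level `datastore` option. The sketch below is illustrative only: the exact package names and import paths (`datastore-level`, `datastore-core`) and the surrounding module setup are assumptions that depend on the release in use.

```js
const Libp2p = require('libp2p')
const TCP = require('libp2p-tcp')
const MPLEX = require('libp2p-mplex')
const { NOISE } = require('libp2p-noise')
const LevelDatastore = require('datastore-level')
// const { MemoryDatastore } = require('datastore-core') // in-memory alternative

async function createNode () {
  // peer records are read and written here as they arrive,
  // instead of accumulating in memory
  const datastore = new LevelDatastore('./peerstore')
  await datastore.open()

  return Libp2p.create({
    datastore,
    modules: {
      transport: [TCP],
      streamMuxer: [MPLEX],
      connEncryption: [NOISE]
    }
  })
}
```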
Since the datastore API is asynchronous, it was necessary to make the peerstore and the `*Book` interfaces asynchronous as well.
BREAKING CHANGE: `libp2p.handle`, `libp2p.registrar.register` and the peerstore methods have become async
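In practice this means calls that were previously synchronous now return promises and must be awaited, roughly as sketched below (the peer ID, protocol name and `addressBook` accessors are illustrative):

```js
async function example (libp2p, peerId, multiaddrs) {
  // registering a protocol handler is now asynchronous
  await libp2p.handle('/my-protocol/1.0.0', ({ stream }) => {
    // ...handle the incoming stream
  })

  // libp2p.registrar.register(...) must be awaited as well

  // peerstore reads and writes now go through the datastore, so they are async
  await libp2p.peerStore.addressBook.set(peerId, multiaddrs)
  const addresses = await libp2p.peerStore.addressBook.get(peerId)
  console.log(addresses)
}
```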
* fix: do not dial all known peers on startup
* feat: connection manager should proactively connect to peers from the peerStore (see the config sketch after this list)
* chore: increase bundle size
* fix: do connMgr proactive dial on an interval
* chore: address review
* chore: use retimer reschedule
* chore: address review
* fix: use minConnections in default config
* chore: rename minPeers to minConnections everywhere
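Putting the connection manager changes together, a hedged sketch of the renamed option (assuming the same module setup as above; `25` is an arbitrary value, not the library default):

```js
const Libp2p = require('libp2p')
const TCP = require('libp2p-tcp')
const MPLEX = require('libp2p-mplex')
const { NOISE } = require('libp2p-noise')

async function createNode () {
  return Libp2p.create({
    modules: {
      transport: [TCP],
      streamMuxer: [MPLEX],
      connEncryption: [NOISE]
    },
    connectionManager: {
      // renamed from `minPeers`: when the node has fewer open connections
      // than this, it proactively dials known peers from the peerStore on an interval
      minConnections: 25
    }
  })
}
```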