We have a peerstore that keeps all data for all observed peers in memory with no eviction.
This is fine when you don't discover many peers, but when using the DHT you encounter a significant number of them, so peer storage grows unbounded over time.
We have a persistent peer store, but it just periodically writes peers into the datastore to be read at startup, while still keeping everything in memory.
This also means a restart gives no temporary reprieve from the memory leak, as the previously observed peer data is read back into memory at startup.
This change refactors the peerstore to use a datastore by default, reading and writing peer info as it arrives. It can be configured with a `MemoryDatastore` if desired.
It was necessary to make the peerstore and `*Book` interfaces asynchronous since the datastore API is asynchronous.
BREAKING CHANGE: `libp2p.handle`, `libp2p.registrar.register` and the peerstore methods have become async
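A minimal sketch of the new shape, assuming a `Libp2p.create`-style factory and placeholder `remotePeerId`/`multiaddrs` values; exact option names may differ between releases:

```js
import Libp2p from 'libp2p'
import { MemoryDatastore } from 'datastore-core'

const libp2p = await Libp2p.create({
  // opt in to keeping peer data in memory only
  datastore: new MemoryDatastore(),
  modules: {
    // ...transports, stream muxers, connection encryption
  }
})

// registering a protocol handler is now async
await libp2p.handle('/my-protocol/1.0.0', ({ stream }) => {
  // ...handle incoming streams
})

// the peerstore's *Book methods read and write through the datastore,
// so they return promises too
await libp2p.peerStore.addressBook.set(remotePeerId, multiaddrs)
const addresses = await libp2p.peerStore.addressBook.get(remotePeerId)
```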
The `AbortController` class is supported by browsers and Node.js 14+. We only support Node.js 16+ (i.e. LTS and Current), so the `abort-controller` module isn't needed any more.
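For illustration, aborting a slow dial with the built-in class, assuming an existing `libp2p` node and `remotePeerId`:

```js
// no `abort-controller` import needed: the class is global in browsers
// and in the Node.js versions we support
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 5000)

try {
  // the dial is aborted if it takes longer than five seconds
  const connection = await libp2p.dial(remotePeerId, { signal: controller.signal })
  // ...use the connection
} finally {
  clearTimeout(timeout)
}
```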
Implements the idea from #1060 - allows us to get some insight into what's happening in a libp2p node beyond just bandwidth stats.
Configures a few default metrics if metrics are enabled: current connections, the state of the dial queue, etc.
Also decouples the `Metrics` class from the `ConnectionManager` class; the previous circular dependency prevented collecting simple metrics from the connection manager.
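A sketch of turning this on, assuming the `metrics.enabled` configuration flag of this era:

```js
const libp2p = await Libp2p.create({
  modules: {
    // ...transports, etc.
  },
  metrics: {
    // collect the default metrics described above as well as bandwidth stats
    enabled: true
  }
})
```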
Looks like this project stopped running the `test:node` npm script when it was migrated to GitHub Actions.
Re-enables it and fixes all the related test failures.
Do not abort all attempts to find peers when `findPeers` on one router throws synchronously
Co-authored-by: Robert Kiel <robert.kiel@hoprnet.io>
Co-authored-by: achingbrain <alex@achingbrain.net>
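A sketch of the fix, assuming hypothetical `routers`, `peerId`, `options` and `log` values plus an `it-merge`-style helper: each router's `findPeers` call is wrapped so a synchronous throw is contained rather than propagated, and the other routers' queries keep running.

```js
import merge from 'it-merge'

// wrap a single router so a (possibly synchronous) throw from
// findPeers is logged instead of aborting the merged query
async function * findPeersSafe (router, peerId, options) {
  try {
    yield * router.findPeers(peerId, options)
  } catch (err) {
    log.error('findPeers failed for one router', err)
  }
}

// results from healthy routers keep flowing even if one misbehaves
const peers = merge(...routers.map(router => findPeersSafe(router, peerId, options)))
```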
`interface-datastore` now only contains the interface definition;
`datastore-core` has the various implementations.
BREAKING CHANGE: datastore implementations provided to libp2p must be compliant with interface-datastore@6.0.0
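For example, an in-memory datastore now comes from `datastore-core`:

```js
// before: the interface and the implementations lived together
// import { MemoryDatastore } from 'interface-datastore'

// after: interface-datastore@6 only exports the interface definition,
// concrete implementations live in datastore-core
import { MemoryDatastore } from 'datastore-core'

const datastore = new MemoryDatastore()
```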
* chore: remove ipfs-utils dep
We only use it for environment detection, so use [wherearewe](https://www.npmjs.com/package/wherearewe)
instead, which is the same detection code pulled out into a tiny module.
The `TextDecoder` class is global in every environment we support, so we
don't need to pull it in from `ipfs-utils`; it has been removed from v8
of that module anyway.
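For example:

```js
// the same environment-detection flags, now from a dedicated module
import { isNode, isBrowser, isWebWorker } from 'wherearewe'

if (isNode) {
  // e.g. only wire up a TCP transport where raw sockets exist
}
```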
* chore: update ipfs-http-client
BREAKING CHANGES:
top-level types were updated, multiaddr@9.0.0 is used, internal property names of the dialer and keychain changed, and the connectionManager `minPeers` option is no longer supported
The NAT manager test will throw if the current computer is behind a
double NAT, as it won't be possible to map external ports at runtime.
We don't care about that during the test, so configure a fake external
IP to ensure the test still exercises the shut-down functionality on a
computer that is behind a double NAT.
* feat: add uPnP nat manager
Adds a really basic NAT manager that attempts to use UPnP to punch
a hole through your router for any IPv4 TCP addresses you have
configured.
Adds any configured addresses to the node's observed address list
and adds observed addresses to `libp2p.multiaddrs`, so we exchange
them with peers when performing `identify` and other peers can dial you.
Adds configuration options under `config.nat` (see the sketch below).
Hole punching is asynchronous so it doesn't affect start-up time.
Co-authored-by: Vasco Santos <vasco.santos@moxy.studio>
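A sketch of the new options, assuming a `Libp2p.create`-style factory; the option names follow the shape described above (including the `externalIp` override the double-NAT test relies on) but may differ between releases:

```js
const libp2p = await Libp2p.create({
  modules: {
    // ...transports, muxers, connection encryption
  },
  config: {
    nat: {
      enabled: true,
      description: 'my libp2p node', // shown in some routers' port-mapping UIs
      ttl: 7200, // seconds to keep the mapping open
      keepAlive: true, // refresh the mapping before the ttl expires
      // assumed override: skip UPnP discovery and use this external address,
      // e.g. in tests on machines behind a double NAT
      externalIp: '80.1.1.1'
    }
  }
})
```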
* fix: store provider multiaddrs during find providers
Changes the behaviour of `libp2p.contentRouting.findProviders` to store
the multiaddrs reported by the routers before yielding results to
the caller, so when they try to dial the provider, the multiaddrs are
already in the peer store's address book.
Also dedupes providers reported by routers but keeps all of the addresses
reported, even for duplicates.
Also fixes a performance bug where the previous implementation would
wait for each router to completely finish finding providers before sending
any results to the caller. It now yields results as they come in, which
makes it much, much faster.
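Usage is unchanged; a sketch with a placeholder `cid` and an existing `libp2p` node:

```js
for await (const provider of libp2p.contentRouting.findProviders(cid)) {
  // the provider's multiaddrs are already in the address book,
  // so the dial can proceed without another lookup
  const connection = await libp2p.dial(provider.id)
  // ...request the content over the connection
}
```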