Mirror of https://github.com/fluencelabs/dweb-transports (synced 2025-03-15 18:30:49 +00:00)
Copy files from omnibus dweb-transport, and comment out unused parts
This commit is contained in:
parent 2dabacf4f7
commit 47e32e42ed
661
dist/LICENSE
vendored
Normal file
@@ -0,0 +1,661 @@
                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

 (The remainder of this vendored file is the standard, unmodified text of the GNU Affero General Public License, version 3: preamble, terms and conditions 0-17, and the "How to Apply These Terms to Your New Programs" appendix; 661 lines in total.)
2
dist/README.md
vendored
Normal file
@@ -0,0 +1,2 @@
# DwebTransports
General transport library for the Decentralized Web; handles multiple underlying transports.
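As a rough illustration of what the library is for, a minimal consumer sketch follows. It is illustrative only: it assumes TransportHTTP implements the setup/fetch contract defined by the Transport base class later in this commit (in particular a static setup0, which is not visible in the truncated TransportHTTP excerpt), and it passes options in the {http: {urlbase: ...}} shape read by the TransportHTTP constructor.

    // sketch.js - hypothetical usage, not part of this commit
    const TransportHTTP = require('./src/TransportHTTP');

    async function main() {
        const verbose = false;
        // p_setup comes from the Transport base class: construct (setup0), then connect (p_setup1/p_setup2).
        const t = await TransportHTTP.p_setup({http: {urlbase: "https://gateway.dweb.me:443"}}, verbose);
        // p_rawstore resolves to a url for the stored data; p_rawfetch retrieves it again.
        const url = await t.p_rawstore("hello world", verbose);
        const data = await t.p_rawfetch(url, {verbose});
        console.log(url, "=>", data);
    }

    main().catch(err => console.error(err));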
38
package.json
Normal file
@@ -0,0 +1,38 @@
{
  "name": "dweb-transports",
  "version": "0.0.1",
  "description": "Internet Archive Decentralized Web Transports Library",
  "dependencies": {
    "cids": "latest",
    "ipfs": "latest",
    "ipld-dag-pb": "latest",
    "ipfs-unixfs": "latest",
    "readable-stream": "latest",
    "node-fetch": "latest",
    "webtorrent": "latest",
    "yjs": "latest",
    "y-memory": "latest",
    "y-array": "latest",
    "y-text": "latest",
    "y-map": "latest",
    "y-ipfs-connector": "latest",
    "y-indexeddb": "latest"
  },
  "keywords": [],
  "license": "AGPL-3.0",
  "author": {
    "name": "Mitra Ardron",
    "email": "mitra@archive.org",
    "url": "http://www.mitra.biz"
  },
  "devDependencies": {
    "browserify": "^14.5.0",
    "watchify": "^3.11.0"
  },
  "scripts": {
    "bundle": "cd src; browserify ./dweb-transports_all.js > ../dist/dweb_transports_bundle.js",
    "watch": "watchify src/dweb-transports_all.js -o dist/dweb_transports_bundle.js --verbose",
    "test": "cd src; node ./test.js",
    "help": "echo 'test (test it)'; echo 'bundle (packs into ../examples)'; echo 'watch: continually bundles whenever files change'"
  }
}
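For reference, the scripts above are run with npm from the repository root; src/dweb-transports_all.js and src/test.js are referenced by the scripts but are not part of this commit:

    npm install        # install the dependencies listed above
    npm run bundle     # browserify src/dweb-transports_all.js into dist/dweb_transports_bundle.js
    npm run watch      # rebuild that bundle automatically whenever source files change
    npm test           # run src/test.js under node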
106
src/Errors.js
Normal file
@@ -0,0 +1,106 @@
const errors = {};

// Use this when the code logic has been broken - e.g. something is called with an undefined parameter; it's preferable to console.assert
// Typically this is an error that should have been caught higher up.
class CodingError extends Error {
    constructor(message) {
        super(message || "Coding Error");
        this.name = "CodingError";
    }
}
errors.CodingError = CodingError;

// These are the equivalent of Python exceptions; they will log and raise an alert in most cases - exceptions aren't caught
class ToBeImplementedError extends Error {
    constructor(message) {
        super("To be implemented: " + message);
        this.name = "ToBeImplementedError";
    }
}
errors.ToBeImplementedError = ToBeImplementedError;

class TransportError extends Error {
    constructor(message) {
        super(message || "Transport failure");
        this.name = "TransportError";
    }
}
errors.TransportError = TransportError;

/*---- Below here are errors copied from previous Dweb-Transport and not currently used */
/*
class ObsoleteError extends Error {
    constructor(message) {
        super("Obsolete: " + message);
        this.name = "ObsoleteError";
    }
}
errors.ObsoleteError = ObsoleteError;

// Use this when the logic of encryption won't let you do something; typically something higher should have stopped you trying.
// Examples include signing something when you only have a public key.
class EncryptionError extends Error {
    constructor(message) {
        super(message || "Encryption Error");
        this.name = "EncryptionError";
    }
}
errors.EncryptionError = EncryptionError;

// Use this when something that should have been signed isn't - this is externally signed, i.e. a data rather than coding error
class SigningError extends Error {
    constructor(message) {
        super(message || "Signing Error");
        this.name = "SigningError";
    }
}
errors.SigningError = SigningError;

class ForbiddenError extends Error {
    constructor(message) {
        super(message || "Forbidden failure");
        this.name = "ForbiddenError";
    }
}
errors.ForbiddenError = ForbiddenError;

class AuthenticationError extends Error {
    constructor(message) {
        super(message || "Authentication failure");
        this.name = "AuthenticationError";
    }
}
errors.AuthenticationError = AuthenticationError;

class IntentionallyUnimplementedError extends Error {
    constructor(message) {
        super(message || "Intentionally Unimplemented Function");
        this.name = "IntentionallyUnimplementedError";
    }
}
errors.IntentionallyUnimplementedError = IntentionallyUnimplementedError;

class DecryptionFailError extends Error {
    constructor(message) {
        super(message || "Decryption Failed");
        this.name = "DecryptionFailError";
    }
}
errors.DecryptionFailError = DecryptionFailError;

class SecurityWarning extends Error {
    constructor(message) {
        super(message || "Security Warning");
        this.name = "SecurityWarning";
    }
}
errors.SecurityWarning = SecurityWarning;

class ResolutionError extends Error {
    constructor(message) {
        super(message || "Resolution failure");
        this.name = "ResolutionError";
    }
}
errors.ResolutionError = ResolutionError;
*/
exports = module.exports = errors;
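A short sketch of how these classes are meant to be consumed: transports throw TransportError for network or URL level failures that callers may want to handle, while CodingError and ToBeImplementedError indicate bugs and are normally left to propagate. The require path matches this commit's layout; the failing-scheme check is illustrative only.

    // errors-usage.js - illustrative only, not part of this commit
    const errors = require('./src/Errors');

    function checkScheme(url) {
        // A transport would throw TransportError when given an unusable URL.
        if (!url.includes(':')) {
            throw new errors.TransportError("URL has no scheme: " + url);
        }
    }

    try {
        checkScheme("no-scheme-url");
    } catch (err) {
        if (err instanceof errors.TransportError) {
            console.log("Transport level failure:", err.message); // err.name === "TransportError"
        } else {
            throw err; // CodingError / ToBeImplementedError etc. indicate bugs and are rethrown
        }
    }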
308
src/Transport.js
Normal file
@@ -0,0 +1,308 @@
|
||||
const Url = require('url');
|
||||
const stream = require('readable-stream');
|
||||
const errors = require('./Errors'); // Standard Dweb Errors
|
||||
|
||||
function delay(ms, val) { return new Promise(resolve => {setTimeout(() => { resolve(val); },ms)})}
|
||||
|
||||
|
||||
class Transport {
|
||||
|
||||
constructor(options, verbose) {
|
||||
/*
|
||||
Doesnt do anything, its all done by SuperClasses,
|
||||
Superclass should merge with default options, call super
|
||||
*/
|
||||
}
|
||||
|
||||
setup0(options, verbose) {
|
||||
/*
|
||||
First part of setup, create obj, add to Transports but dont attempt to connect, typically called instead of p_setup if want to parallelize connections.
|
||||
*/
|
||||
throw new errors.IntentionallyUnimplementedError("Intentionally undefined function Transport.setup0 should have been subclassed");
|
||||
}
|
||||
|
||||
p_setup1(options, verbose) { return this; }
|
||||
p_setup2(options, verbose) { return this; }
|
||||
|
||||
static async p_setup(options, verbose) {
|
||||
/*
|
||||
Setup the resource and open any P2P connections etc required to be done just once.
|
||||
In almost all cases this will call the constructor of the subclass
|
||||
|
||||
:param obj options: Data structure required by underlying transport layer (format determined by that layer)
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve Transport: Instance of subclass of Transport
|
||||
*/
|
||||
let t = await this.setup0(options, verbose) // Sync version that doesnt connect
|
||||
.p_setup1(verbose); // And connect
|
||||
|
||||
return t.p_setup2(verbose); // And connect
|
||||
}
|
||||
togglePaused() {
|
||||
switch (this.status) {
|
||||
case Transport.STATUS_CONNECTED:
|
||||
this.status = Transport.STATUS_PAUSED;
|
||||
break;
|
||||
case Transport.STATUS_PAUSED:
|
||||
this.status = Transport.STATUS_CONNECTED; // Superclass might change to STATUS_STARTING if needs to stop/restart
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
supports(url, func) {
|
||||
/*
|
||||
Determine if this transport supports a certain set of URLs and a func
|
||||
|
||||
:param url: String or parsed URL
|
||||
:return: true if this protocol supports these URLs and this func
|
||||
:throw: TransportError if invalid URL
|
||||
*/
|
||||
if (typeof url === "string") {
|
||||
url = Url.parse(url); // For efficiency, only parse once.
|
||||
}
|
||||
if (url && !url.protocol) {
|
||||
throw new Error("URL failed to specific a scheme (before :) " + url.href)
|
||||
} //Should be TransportError but out of scope here
|
||||
// noinspection Annotator supportURLs is defined in subclasses
|
||||
return ( (!url || this.supportURLs.includes(url.protocol.slice(0, -1)))
|
||||
&& (!func || this.supportFunctions.includes(func)))
|
||||
}
|
||||
|
||||
p_rawstore(data, verbose) {
|
||||
/*
|
||||
Store a blob of data onto the decentralised transport.
|
||||
Returns a promise that resolves to the url of the data
|
||||
|
||||
:param string|Buffer data: Data to store - no assumptions made to size or content
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve string: url of data stored
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Intentionally undefined function Transport.p_rawstore should have been subclassed");
|
||||
}
|
||||
|
||||
async p_rawstoreCaught(data, verbose) {
|
||||
try {
|
||||
return await this.p_rawstore(data, verbose);
|
||||
} catch (err) {
|
||||
|
||||
}
|
||||
}
|
||||
p_store() {
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_store - may define higher level semantics here (see Python)");
|
||||
}
|
||||
|
||||
//noinspection JSUnusedLocalSymbols
|
||||
|
||||
p_rawfetch(url, {verbose=false}={}) {
|
||||
/*
|
||||
Fetch some bytes based on a url, no assumption is made about the data in terms of size or structure.
|
||||
Where required by the underlying transport it should retrieve a number if its "blocks" and concatenate them.
|
||||
Returns a new Promise that resolves currently to a string.
|
||||
There may also be need for a streaming version of this call, at this point undefined.
|
||||
|
||||
:param string url: URL of object being retrieved
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve string: Return the object being fetched, (note currently returned as a string, may refactor to return Buffer)
|
||||
:throws: TransportError if url invalid - note this happens immediately, not as a catch in the promise
|
||||
*/
|
||||
console.assert(false, "Intentionally undefined function Transport.p_rawfetch should have been subclassed");
|
||||
}
|
||||
|
||||
p_fetch() {
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_fetch - may define higher level semantics here (see Python)");
|
||||
}
|
||||
|
||||
p_rawadd(url, sig, verbose) {
|
||||
/*
|
||||
Store a new list item, ideally it should be stored so that it can be retrieved either by "signedby" (using p_rawlist) or
|
||||
by "url" (with p_rawreverse). The underlying transport does not need to guarantee the signature,
|
||||
an invalid item on a list should be rejected on higher layers.
|
||||
|
||||
:param string url: String identifying an object being added to the list.
|
||||
:param Signature sig: A signature data structure.
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve undefined:
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_rawadd");
|
||||
}
|
||||
|
||||
p_rawlist(url, verbose) {
|
||||
/*
|
||||
Fetch all the objects in a list, these are identified by the url of the public key used for signing.
|
||||
(Note this is the 'signedby' parameter of the p_rawadd call, not the 'url' parameter
|
||||
Returns a promise that resolves to the list.
|
||||
Each item of the list is a dict: {"url": url, "date": date, "signature": signature, "signedby": signedby}
|
||||
List items may have other data (e.g. reference ids of underlying transport)
|
||||
|
||||
:param string url: String with the url that identifies the list.
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve array: An array of objects as stored on the list.
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_rawlist");
|
||||
}
|
||||
|
||||
p_list() {
|
||||
throw new Error("Undefined function Transport.p_list");
|
||||
}
|
||||
p_newlisturls(cl, verbose) {
|
||||
/*
|
||||
Must be implemented by any list, return a pair of URLS that may be the same, private and public links to the list.
|
||||
returns: ( privateurl, publicurl) e.g. yjs:xyz/abc or orbitdb:a123
|
||||
*/
|
||||
throw new Error("undefined function Transport.p_newlisturls");
|
||||
}
|
||||
|
||||
//noinspection JSUnusedGlobalSymbols
|
||||
p_rawreverse(url, verbose) {
|
||||
/*
|
||||
Similar to p_rawlist, but return the list item of all the places where the object url has been listed.
|
||||
The url here corresponds to the "url" parameter of p_rawadd
|
||||
Returns a promise that resolves to the list.
|
||||
|
||||
:param string url: String with the url that identifies the object put on a list.
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve array: An array of objects as stored on the list.
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_rawreverse");
|
||||
}
|
||||
|
||||
listmonitor(url, callback, verbose) {
|
||||
/*
|
||||
Setup a callback called whenever an item is added to a list, typically it would be called immediately after a p_rawlist to get any more items not returned by p_rawlist.
|
||||
|
||||
:param url: string Identifier of list (as used by p_rawlist and "signedby" parameter of p_rawadd
|
||||
:param callback: function(obj) Callback for each new item added to the list
|
||||
obj is same format as p_rawlist or p_rawreverse
|
||||
:param verbose: boolean - true for debugging output
|
||||
*/
|
||||
console.log("Undefined function Transport.listmonitor"); // Note intentionally a log, as legitamte to not implement it
|
||||
}
|
||||
|
||||
|
||||
// ==== TO SUPPORT KEY VALUE INTERFACES IMPLEMENT THESE =====
|
||||
// Support for Key-Value pairs as per
|
||||
// https://docs.google.com/document/d/1yfmLRqKPxKwB939wIy9sSaa7GKOzM5PrCZ4W1jRGW6M/edit#
|
||||
|
||||
async p_newdatabase(pubkey, verbose) {
|
||||
/*
|
||||
Create a new database based on some existing object
|
||||
pubkey: Something that is, or has a pubkey, by default support Dweb.PublicPrivate, KeyPair or an array of strings as in the output of keypair.publicexport()
|
||||
returns: {publicurl, privateurl} which may be the same if there is no write authentication
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_newdatabase");
|
||||
}
|
||||
//TODO maybe change the listmonitor / monitor code for to use "on" and the structure of PP.events
|
||||
//TODO but note https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy about Proxy which might be suitable, prob not as doesnt map well to lists
|
||||
async p_newtable(pubkey, table, verbose) {
|
||||
/*
|
||||
Create a new table,
|
||||
pubkey: Is or has a pubkey (see p_newdatabase)
|
||||
table: String representing the table - unique to the database
|
||||
returns: {privateurl, publicurl} which may be the same if there is no write authentication
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_newtable");
|
||||
}
|
||||
|
||||
async p_set(url, keyvalues, value, verbose) { // url = yjs:/yjs/database/table/key
|
||||
/*
|
||||
Set one or more keys in a table.
|
||||
url: URL of the table
|
||||
keyvalues: String representing a single key OR dictionary of keys
|
||||
value: String or other object to be stored (its not defined yet what objects should be supported, e.g. any object ?
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_set");
|
||||
}
|
||||
async p_get(url, keys, verbose) {
|
||||
/* Get one or more keys from a table
|
||||
url: URL of the table
|
||||
keys: Array of keys
|
||||
returns: Dictionary of values found (undefined if not found)
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_get");
|
||||
}
|
||||
|
||||
async p_delete(url, keys, verbose) {
|
||||
/* Delete one or more keys from a table
|
||||
url: URL of the table
|
||||
keys: Array of keys
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_delete");
|
||||
}
|
||||
|
||||
async p_keys(url, verbose) {
|
||||
/* Return a list of keys in a table (suitable for iterating through)
|
||||
url: URL of the table
|
||||
returns: Array of strings
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_keys");
|
||||
}
|
||||
async p_getall(url, verbose) {
|
||||
/* Return a dictionary representing the table
|
||||
url: URL of the table
|
||||
returns: Dictionary of Key:Value pairs, note take care if this could be large.
|
||||
*/
|
||||
throw new errors.ToBeImplementedError("Undefined function Transport.p_keys");
|
||||
}
|
||||
// ------ UTILITY FUNCTIONS, NOT REQD TO BE SUBCLASSED ----
|
||||
|
||||
static mergeoptions(a) {
|
||||
/*
|
||||
Deep merge options dictionaries
|
||||
*/
|
||||
let c = {};
|
||||
for (let i = 0; i < arguments.length; i++) {
|
||||
let b = arguments[i];
|
||||
for (let key in b) {
|
||||
let val = b[key];
|
||||
if ((typeof val === "object") && !Array.isArray(val) && c[key]) {
|
||||
c[key] = Transport.mergeoptions(a[key], b[key]);
|
||||
} else {
|
||||
c[key] = b[key];
|
||||
}
|
||||
}
|
||||
}
|
||||
return c;
|
||||
}
|
||||
|
||||
async p_test_kvt(urlexpectedsubstring, verbose=false) {
|
||||
/*
|
||||
Test the KeyValue functionality of any transport that supports it.
|
||||
urlexpectedsubstring: Some string expected in the publicurl of the table.
|
||||
*/
|
||||
if (verbose) {console.log(this.name,"p_test_kvt")}
|
||||
try {
|
||||
let table = await this.p_newtable("NACL VERIFY:1234567","mytable", verbose);
|
||||
let mapurl = table.publicurl;
|
||||
if (verbose) console.log("newtable=",mapurl);
|
||||
console.assert(mapurl.includes(urlexpectedsubstring));
|
||||
await this.p_set(mapurl, "testkey", "testvalue", verbose);
|
||||
let res = await this.p_get(mapurl, "testkey", verbose);
|
||||
console.assert(res === "testvalue");
|
||||
await this.p_set(mapurl, "testkey2", {foo: "bar"}, verbose); // Try setting to an object
|
||||
res = await this.p_get(mapurl, "testkey2", verbose);
|
||||
console.assert(res.foo === "bar");
|
||||
await this.p_set(mapurl, "testkey3", [1,2,3], verbose); // Try setting to an array
|
||||
res = await this.p_get(mapurl, "testkey3", verbose);
|
||||
console.assert(res[1] === 2);
|
||||
res = await this.p_keys(mapurl);
|
||||
console.assert(res.includes("testkey") && res.includes("testkey3"));
|
||||
res = await this.p_delete(mapurl, ["testkey"]);
|
||||
res = await this.p_getall(mapurl, verbose);
|
||||
if (verbose) console.log("getall=>",res);
|
||||
console.assert(res.testkey2.foo === "bar" && res.testkey3["1"] === 2 && !res.testkey); // testkey was deleted above
|
||||
await delay(200);
|
||||
if (verbose) console.log(this.name, "p_test_kvt complete")
|
||||
} catch(err) {
|
||||
console.log("Exception thrown in ", this.name, "p_test_kvt:", err.message);
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
}
|
||||
Transport.STATUS_CONNECTED = 0; // Connected - all other numbers are some version of not ok to use
|
||||
Transport.STATUS_FAILED = 1; // Failed to connect
|
||||
Transport.STATUS_STARTING = 2; // In the process of connecting
|
||||
Transport.STATUS_LOADED = 3; // Code loaded, but haven't tried to connect. (this is typically hard coded in subclasses constructor)
|
||||
Transport.STATUS_PAUSED = 4; // It was launched, probably connected, but now paused so will be ignored by validFor
|
||||
exports = module.exports = Transport;
|
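/* Illustrative sketch (not part of the original commit): the registration pattern the subclasses
   below follow - TransportXYZ and the 'xyz' scheme are placeholders, not real transports.

   const Transport = require('./Transport');
   const Transports = require('./Transports');

   class TransportXYZ extends Transport {
       constructor(options, verbose) {
           super(options, verbose);
           this.name = "XYZ";
           this.supportURLs = ['xyz'];
           this.supportFunctions = ['fetch', 'store'];
           this.status = Transport.STATUS_LOADED;
       }
       static setup0(options, verbose) {            // Create and register, but don't connect yet
           const t = new TransportXYZ(options, verbose);
           Transports.addtransport(t);
           return t;
       }
       async p_setup1(verbose) {                    // Actually connect, then report status
           this.status = Transport.STATUS_CONNECTED;
           return this;
       }
   }
   Transports._transportclasses["XYZ"] = TransportXYZ;
*/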
331
src/TransportHTTP.js
Normal file
@ -0,0 +1,331 @@
|
||||
const errors = require('./Errors'); // Standard Dweb Errors
|
||||
const Transport = require('./Transport'); // Base class for TransportXyz
|
||||
const Transports = require('./Transports'); // Manage all Transports that are loaded
|
||||
const nodefetch = require('node-fetch'); // Note: we were previously using node-fetch-npm, which had a warning in webpack (see https://github.com/bitinn/node-fetch/issues/421) and is intended for clients
|
||||
const Url = require('url');
|
||||
|
||||
var fetch,Headers,Request;
|
||||
if (typeof(Window) === "undefined") {
|
||||
//var fetch = require('whatwg-fetch').fetch; //Not as good as node-fetch-npm, but might be the polyfill needed for browser.safari
|
||||
//XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest; // Note this doesnt work if set to a var or const, needed by whatwg-fetch
|
||||
console.log("Node loaded");
|
||||
fetch = nodefetch;
|
||||
Headers = fetch.Headers; // A class
|
||||
Request = fetch.Request; // A class
|
||||
} else {
|
||||
// If on a browser, need to find fetch,Headers,Request in window
|
||||
console.log("Loading browser version of fetch,Headers,Request");
|
||||
fetch = window.fetch;
|
||||
Headers = window.Headers;
|
||||
Request = window.Request;
|
||||
}
|
||||
//TODO-HTTP to work on Safari or mobile will require a polyfill, see https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/fetch for comment
|
||||
|
||||
const defaulthttpoptions = {
|
||||
urlbase: 'https://gateway.dweb.me:443'
|
||||
};
|
||||
|
||||
const servercommands = { // What the server wants to see to return each of these
|
||||
rawfetch: "content/rawfetch",
|
||||
rawstore: "contenturl/rawstore",
|
||||
rawadd: "void/rawadd",
|
||||
rawlist: "metadata/rawlist",
|
||||
get: "get/table",
|
||||
set: "set/table",
|
||||
delete: "delete/table",
|
||||
keys: "keys/table",
|
||||
getall: "getall/table"
|
||||
};
|
||||
|
||||
|
||||
class TransportHTTP extends Transport {
|
||||
|
||||
constructor(options, verbose) {
|
||||
super(options, verbose);
|
||||
this.options = options;
|
||||
this.urlbase = options.http.urlbase;
|
||||
this.supportURLs = ['contenthash', 'http','https'];
|
||||
this.supportFunctions = ['fetch', 'store', 'add', 'list', 'reverse', 'newlisturls', "get", "set", "keys", "getall", "delete", "newtable", "newdatabase"]; //Does not support: listmonitor - reverse is disabled somewhere not sure if here or caller
|
||||
this.supportFeatures = ['fetch.range'];
|
||||
this.name = "HTTP"; // For console log etc
|
||||
this.status = Transport.STATUS_LOADED;
|
||||
}
|
||||
|
||||
static setup0(options, verbose) {
|
||||
let combinedoptions = Transport.mergeoptions({ http: defaulthttpoptions },options);
|
||||
try {
|
||||
let t = new TransportHTTP(combinedoptions, verbose);
|
||||
Transports.addtransport(t);
|
||||
return t;
|
||||
} catch (err) {
|
||||
console.log("Exception thrown in TransportHTTP.p_setup", err.message);
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
async p_setup1(verbose) {
|
||||
return this;
|
||||
}
|
||||
|
||||
async p_status(verbose) { //TODO-BACKPORT
|
||||
/*
|
||||
Return a string for the status of a transport. No particular format, but keep it short as it will probably be in a small area of the screen.
|
||||
resolves to: String representing type connected (always HTTP) and online if online.
|
||||
*/
|
||||
try {
|
||||
this.info = await this.p_info(verbose);
|
||||
this.status = Transport.STATUS_CONNECTED;
|
||||
} catch(err) {
|
||||
console.log(this.name, ": Error in p_status.info",err.message);
|
||||
this.status = Transport.STATUS_FAILED;
|
||||
}
|
||||
return this.status;
|
||||
}
|
||||
|
||||
async p_httpfetch(httpurl, init, verbose) { // Embrace and extend "fetch" to check result etc.
|
||||
/*
|
||||
Fetch a url from the default server at command/multihash
|
||||
|
||||
url: optional (depends on command)
|
||||
resolves to: data as text or json depending on Content-Type header
|
||||
throws: TransportError if fails to fetch
|
||||
*/
|
||||
try {
|
||||
if (verbose) console.log("httpurl=%s init=%o", httpurl, init);
|
||||
//console.log('CTX=',init["headers"].get('Content-Type'))
|
||||
// Using window.fetch, because it doesn't appear to be in scope otherwise in the browser.
|
||||
let response = await fetch(new Request(httpurl, init));
|
||||
// fetch throws "TypeError: Failed to fetch" (on Chrome; untested on Firefox or Node)
|
||||
// Note response.body gets a stream and response.blob gets a blob and response.arrayBuffer gets a buffer.
|
||||
if (response.ok) {
|
||||
let contenttype = response.headers.get('Content-Type');
|
||||
if (contenttype === "application/json") {
|
||||
return response.json(); // promise resolving to JSON
|
||||
} else if (contenttype.startsWith("text")) { // Note in particular this is used for responses to store
|
||||
return response.text(); // promise resolving to text (binary content is handled by the Buffer branch below, since text() distorts binaries e.g. jpegs)
|
||||
} else { // Typically application/octetStream when don't know what fetching
|
||||
return new Buffer(await response.arrayBuffer()); // Convert arrayBuffer to Buffer which is much more usable currently
|
||||
}
|
||||
}
|
||||
// noinspection ExceptionCaughtLocallyJS
|
||||
throw new errors.TransportError(`Transport Error ${response.status}: ${response.statusText}`);
|
||||
} catch (err) {
|
||||
// The error here is particularly unhelpful - if rejected during the CORS process it throws a TypeError
|
||||
console.log("Note error from fetch might be misleading especially TypeError can be Cors issue:",httpurl);
|
||||
if (err instanceof errors.TransportError) {
|
||||
throw err;
|
||||
} else {
|
||||
throw new errors.TransportError(`Transport error thrown by ${httpurl}: ${err.message}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async p_GET(httpurl, opts={}) {
|
||||
/* Locate and return a block, based on its url
|
||||
Throws TransportError if fails
|
||||
opts {
|
||||
start, end, // Range of bytes wanted - inclusive i.e. 0,1023 is 1024 bytes
|
||||
verbose }
|
||||
resolves to: data as text, JSON, or Buffer depending on the Content-Type of the response (see p_httpfetch)
|
||||
*/
|
||||
let headers = new Headers();
|
||||
if (opts.start || opts.end) headers.append("range", `bytes=${opts.start || 0}-${opts.end || ""}`);
|
||||
let init = { //https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/fetch
|
||||
method: 'GET',
|
||||
headers: headers,
|
||||
mode: 'cors',
|
||||
cache: 'default',
|
||||
redirect: 'follow', // Chrome defaults to manual
|
||||
keepalive: true // Keep alive - mostly we'll be going back to same places a lot
|
||||
};
|
||||
return await this.p_httpfetch(httpurl, init, opts.verbose); // This is a real http url
|
||||
}
|
||||
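// Illustrative example (not part of the original file): a range request for the first
// 1024 bytes of a stored block, assuming "t" is a connected TransportHTTP instance and
// "Qm..." is a placeholder multihash.
//   const buf = await t.p_GET(t._url("contenthash:/contenthash/Qm...", servercommands.rawfetch),
//                             {start: 0, end: 1023});
// The start/end options become a "range: bytes=0-1023" header in the init above.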
async p_POST(httpurl, type, data, verbose) {
|
||||
// Locate and return a block, based on its url
|
||||
// Throws TransportError if fails
|
||||
//let headers = new window.Headers();
|
||||
//headers.set('content-type',type); Doesn't work, it ignores it
|
||||
let init = {
|
||||
//https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/fetch
|
||||
//https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name for headers that can't be set
|
||||
method: 'POST',
|
||||
headers: {}, //headers,
|
||||
//body: new Buffer(data),
|
||||
body: data,
|
||||
mode: 'cors',
|
||||
cache: 'default',
|
||||
redirect: 'follow', // Chrome defaults to manual
|
||||
keepalive: true // Keep alive - mostly we'll be going back to same places a lot
|
||||
};
|
||||
return await this.p_httpfetch(httpurl, init, verbose);
|
||||
}
|
||||
|
||||
_cmdurl(command) {
|
||||
return `${this.urlbase}/${command}`
|
||||
}
|
||||
_url(url, command, parmstr) {
|
||||
if (!url) throw new errors.CodingError(`${command}: requires url`);
|
||||
if (typeof url !== "string") { url = url.href }
|
||||
url = url.replace('contenthash:/contenthash', this._cmdurl(command)) ; // Note leaves http: and https: urls unchanged
|
||||
url = url.replace('getall/table', command);
|
||||
url = url + (parmstr ? "?"+parmstr : "");
|
||||
return url;
|
||||
}
|
||||
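// Illustrative example (not part of the original file): how a Dweb url maps onto the gateway
// ("Qm..." is a placeholder multihash).
//   this._cmdurl(servercommands.rawfetch)   => "https://gateway.dweb.me:443/content/rawfetch"
//   this._url("contenthash:/contenthash/Qm...", servercommands.rawfetch)
//                                           => "https://gateway.dweb.me:443/content/rawfetch/Qm..."
// An http: or https: url passes through unchanged apart from an appended "?parmstr" if supplied.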
async p_rawfetch(url, opts={}) {
|
||||
/*
|
||||
Fetch from underlying transport,
|
||||
Fetch is used both for contenthash requests and for tables, since when passed to SmartDict.p_fetch we may not know what we have
|
||||
url: Of resource - which is turned into the HTTP url in p_httpfetch
|
||||
opts: {start, end, verbose} see p_GET for documentation
|
||||
throws: TransportError if fails
|
||||
*/
|
||||
//if (!(url && url.includes(':') ))
|
||||
// throw new errors.CodingError("TransportHTTP.p_rawfetch bad url: "+url);
|
||||
if (((typeof url === "string") ? url : url.href).includes('/getall/table')) {
|
||||
console.log("XXX@176 - probably dont want to be calling p_rawfetch on a KeyValueTable, especially since dont know if its keyvaluetable or subclass"); //TODO-NAMING
|
||||
return {
|
||||
table: "keyvaluetable",
|
||||
}
|
||||
} else {
|
||||
return await this.p_GET(this._url(url, servercommands.rawfetch), opts);
|
||||
}
|
||||
}
|
||||
|
||||
p_rawlist(url, verbose) {
|
||||
// obj being loaded
|
||||
// Locate and return a block, based on its url
|
||||
if (!url) throw new errors.CodingError("TransportHTTP.p_rawlist: requires url");
|
||||
return this.p_GET(this._url(url, servercommands.rawlist), {verbose});
|
||||
}
|
||||
rawreverse() { throw new errors.ToBeImplementedError("Undefined function TransportHTTP.rawreverse"); }
|
||||
|
||||
async p_rawstore(data, verbose) {
|
||||
/*
|
||||
Store data on http server,
|
||||
data: string
|
||||
resolves to: {string}: url
|
||||
throws: TransportError on failure in p_POST > p_httpfetch
|
||||
*/
|
||||
//PY: res = self._sendGetPost(True, "rawstore", headers={"Content-Type": "application/octet-stream"}, urlargs=[], data=data, verbose=verbose)
|
||||
console.assert(data, "TransportHttp.p_rawstore: requires data");
|
||||
let res = await this.p_POST(this._cmdurl(servercommands.rawstore), "application/octet-stream", data, verbose); // resolves to URL
|
||||
let parsedurl = Url.parse(res);
|
||||
let pathparts = parsedurl.pathname.split('/');
|
||||
return `contenthash:/contenthash/${pathparts.slice(-1)}`
|
||||
|
||||
}
|
||||
|
||||
p_rawadd(url, sig, verbose) { //TODO-BACKPORT turn date into ISO before adding
|
||||
//verbose=true;
|
||||
if (!url || !sig) throw new errors.CodingError("TransportHTTP.p_rawadd: invalid parms",url, sig);
|
||||
if (verbose) console.log("rawadd", url, sig);
|
||||
let value = JSON.stringify(sig.preflight(Object.assign({},sig)))+"\n";
|
||||
return this.p_POST(this._url(url, servercommands.rawadd), "application/json", value, verbose); // Returns immediately
|
||||
}
|
||||
|
||||
p_newlisturls(cl, verbose) {
|
||||
let u = cl._publicurls.map(urlstr => Url.parse(urlstr))
|
||||
.find(parsedurl =>
|
||||
(parsedurl.protocol === "https" && parsedurl.host === "gateway.dweb.me" && parsedurl.pathname.includes('/content/rawfetch'))
|
||||
|| (parsedurl.protocol === "contenthash:" && (parsedurl.pathname.split('/')[1] === "contenthash")));
|
||||
if (!u) {
|
||||
u = `contenthash:/contenthash/${ cl.keypair.verifyexportmultihashsha256_58() }`; // Pretty random, but means same test will generate same list and server is expecting base58 of a hash
|
||||
}
|
||||
return [u,u];
|
||||
}
|
||||
|
||||
// ============================== Key Value support
|
||||
|
||||
|
||||
// Support for Key-Value pairs as per
|
||||
// https://docs.google.com/document/d/1yfmLRqKPxKwB939wIy9sSaa7GKOzM5PrCZ4W1jRGW6M/edit#
|
||||
async p_newdatabase(pubkey, verbose) {
|
||||
//if (pubkey instanceof Dweb.PublicPrivate)
|
||||
if (pubkey.hasOwnProperty("keypair"))
|
||||
pubkey = pubkey.keypair.signingexport()
|
||||
// By this point pubkey should be an export of a public key of form xyz:abc where xyz
|
||||
// specifies the type of public key (NACL VERIFY being the only kind we expect currently)
|
||||
let u = `${this.urlbase}/getall/table/${encodeURIComponent(pubkey)}`;
|
||||
return {"publicurl": u, "privateurl": u};
|
||||
}
|
||||
|
||||
|
||||
async p_newtable(pubkey, table, verbose) {
|
||||
if (!pubkey) throw new errors.CodingError("p_newtable currently requires a pubkey");
|
||||
let database = await this.p_newdatabase(pubkey, verbose);
|
||||
// If have use cases without a database, then call p_newdatabase first
|
||||
return { privateurl: `${database.privateurl}/${table}`, publicurl: `${database.publicurl}/${table}`} // No action required to create it
|
||||
}
|
||||
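// Illustrative example (not part of the original file): for a pubkey export such as
// "NACL VERIFY:abc123" (placeholder) and table "mytable", both urls resolve to roughly
//   "https://gateway.dweb.me:443/getall/table/NACL%20VERIFY%3Aabc123/mytable"
// i.e. privateurl === publicurl, since HTTP does no write authentication here.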
|
||||
//TODO-KEYVALUE needs signing with private key of list
|
||||
async p_set(url, keyvalues, value, verbose) { // url = yjs:/yjs/database/table/key //TODO-KEYVALUE-API
|
||||
if (!url || !keyvalues) throw new errors.CodingError("TransportHTTP.p_set: invalid parms", url, keyvalues);
|
||||
if (verbose) console.log("p_set", url, keyvalues, value);
|
||||
if (typeof keyvalues === "string") {
|
||||
let kv = JSON.stringify([{key: keyvalues, value: value}]);
|
||||
await this.p_POST(this._url(url, servercommands.set), "application/json", kv, verbose); // Returns immediately
|
||||
} else {
|
||||
let kv = JSON.stringify(Object.keys(keyvalues).map((k) => ({"key": k, "value": keyvalues[k]})));
|
||||
await this.p_POST(this._url(url, servercommands.set), "application/json", kv, verbose); // Returns immediately
|
||||
}
|
||||
}
|
||||
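// Illustrative example (not part of the original file): the body POSTed to the set/table
// endpoint is always a JSON array of {key, value} pairs, so
//   p_set(url, "color", "red")            sends  [{"key":"color","value":"red"}]
//   p_set(url, {color: "red", size: 2})   sends  [{"key":"color","value":"red"},{"key":"size","value":2}]
// ("color" and "size" are made-up keys for the example).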
|
||||
_keyparm(key) {
|
||||
return `key=${encodeURIComponent(key)}`
|
||||
}
|
||||
//TODO-KEYVALUE got to here on KEYVALUE in HTTP
|
||||
async p_get(url, keys, verbose) {
|
||||
if (!(url && keys)) throw new errors.CodingError("TransportHTTP.p_get: requires url and at least one key");
|
||||
let parmstr = Array.isArray(keys) ? keys.map(k => this._keyparm(k)).join('&') : this._keyparm(keys);
|
||||
let res = await this.p_GET(this._url(url, servercommands.get, parmstr), {verbose});
|
||||
return Array.isArray(keys) ? res : res[keys]
|
||||
}
|
||||
|
||||
async p_delete(url, keys, verbose) { //TODO-KEYVALUE-API need to think this one through
|
||||
if (!(url && keys)) throw new errors.CodingError("TransportHTTP.p_delete: requires url and at least one key");
|
||||
let parmstr = keys.map(k => this._keyparm(k)).join('&');
|
||||
await this.p_GET(this._url(url, servercommands.delete, parmstr), {verbose});
|
||||
}
|
||||
|
||||
async p_keys(url, verbose) {
|
||||
if (!url) throw new errors.CodingError("TransportHTTP.p_keys: requires url");
|
||||
return await this.p_GET(this._url(url, servercommands.keys), {verbose});
|
||||
}
|
||||
async p_getall(url, verbose) {
|
||||
if (!url) throw new errors.CodingError("TransportHTTP.p_getall: requires url");
|
||||
return await this.p_GET(this._url(url, servercommands.getall), {verbose});
|
||||
}
|
||||
/* Make sure doesnt shadow regular p_rawfetch
|
||||
async p_rawfetch(url, verbose) {
|
||||
return {
|
||||
table: "keyvaluetable",
|
||||
_map: await this.p_getall(url, verbose)
|
||||
}; // Data struc is ok as SmartDict.p_fetch will pass to KVT constructor
|
||||
}
|
||||
*/
|
||||
|
||||
p_info(verbose) { return this.p_GET(`${this.urlbase}/info`, {verbose}); } //TODO-BACKPORT
|
||||
|
||||
static async p_test(opts={}, verbose=false) {
|
||||
if (verbose) {console.log("TransportHTTP.test")}
|
||||
try {
|
||||
let transport = await this.p_setup(opts, verbose);
|
||||
if (verbose) console.log("HTTP connected");
|
||||
let res = await transport.p_info(verbose);
|
||||
if (verbose) console.log("TransportHTTP info=",res);
|
||||
res = await transport.p_status(verbose);
|
||||
console.assert(res === Transport.STATUS_CONNECTED);
|
||||
await transport.p_test_kvt("NACL%20VERIFY", verbose);
|
||||
} catch(err) {
|
||||
console.log("Exception thrown in TransportHTTP.test:", err.message);
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
static async test() {
|
||||
return this;
|
||||
}
|
||||
|
||||
}
|
||||
Transports._transportclasses["HTTP"] = TransportHTTP;
|
||||
exports = module.exports = TransportHTTP;
|
||||
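/* Illustrative usage sketch (not part of the original commit; assumes the default gateway is
   reachable and that p_setup on the base class runs setup0 then p_setup1, as used in p_test above):

   const TransportHTTP = require('./TransportHTTP');

   async function demo() {
       const t = await TransportHTTP.p_setup({}, false);
       const url = await t.p_rawstore("hello world", false);     // resolves to "contenthash:/contenthash/..."
       const data = await t.p_rawfetch(url, {verbose: false});   // round-trips the stored bytes
       console.log(url, data.toString());
   }
*/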
|
371
src/TransportIPFS.js
Normal file
@ -0,0 +1,371 @@
|
||||
/*
|
||||
This is a shim to the IPFS library, (Lists are handled in YJS or OrbitDB)
|
||||
See https://github.com/ipfs/js-ipfs but note its often out of date relative to the generic API doc.
|
||||
*/
|
||||
|
||||
// IPFS components
|
||||
|
||||
const IPFS = require('ipfs');
|
||||
const CID = require('cids');
|
||||
// noinspection NpmUsedModulesInstalled
|
||||
const dagPB = require('ipld-dag-pb');
|
||||
// noinspection Annotator
|
||||
const DAGNode = dagPB.DAGNode; // So can check its type
|
||||
const unixFs = require('ipfs-unixfs');
|
||||
|
||||
// Library packages other than IPFS
|
||||
const Url = require('url');
|
||||
const stream = require('readable-stream'); // Needed for the pullthrough - this is NOT Ipfs streams
|
||||
// Alternative to through - as used in WebTorrent
|
||||
|
||||
// Utility packages (ours) And one-liners
|
||||
//No longer reqd: const promisify = require('promisify-es6');
|
||||
//const makepromises = require('./utils/makepromises'); // Replaced by direct call to promisify
|
||||
|
||||
// Other Dweb modules
|
||||
const errors = require('./Errors'); // Standard Dweb Errors
|
||||
const Transport = require('./Transport.js'); // Base class for TransportXyz
|
||||
const Transports = require('./Transports'); // Manage all Transports that are loaded
|
||||
const utils = require('./utils'); // Utility functions
|
||||
|
||||
const defaultoptions = {
|
||||
ipfs: {
|
||||
repo: '/tmp/dweb_ipfsv2700', //TODO-IPFS think through where, esp for browser
|
||||
//init: false,
|
||||
//start: false,
|
||||
//TODO-IPFS-Q how is this decentralized - can it run offline? Does it depend on star-signal.cloud.ipfs.team
|
||||
config: {
|
||||
// Addresses: { Swarm: [ '/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star']}, // For Y - same as defaults
|
||||
// Addresses: { Swarm: [ ] }, // Disable WebRTC to test browser crash, note disables Y so doesnt work.
|
||||
Addresses: {Swarm: ['/dns4/ws-star.discovery.libp2p.io/tcp/443/wss/p2p-websocket-star']}, // from https://github.com/ipfs/js-ipfs#faq 2017-12-05 as alternative to webrtc
|
||||
},
|
||||
//init: true, // Comment out for Y
|
||||
EXPERIMENTAL: {
|
||||
pubsub: true
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
class TransportIPFS extends Transport {
|
||||
/*
|
||||
IPFS specific transport
|
||||
|
||||
Fields:
|
||||
ipfs: object returned when starting IPFS
|
||||
yarray: object returned when starting yarray
|
||||
*/
|
||||
|
||||
constructor(options, verbose) {
|
||||
super(options, verbose);
|
||||
this.ipfs = undefined; // Undefined till start IPFS
|
||||
this.options = options; // Dictionary of options { ipfs: {...}, "yarrays", yarray: {...} }
|
||||
this.name = "IPFS"; // For console log etc
|
||||
this.supportURLs = ['ipfs'];
|
||||
this.supportFunctions = ['fetch', 'store']; // Does not support reverse, createReadStream fails on files uploaded with urlstore TODO reenable when Kyle fixes urlstore
|
||||
this.status = Transport.STATUS_LOADED;
|
||||
}
|
||||
|
||||
/*
|
||||
_makepromises() {
|
||||
//Utility function to promisify Block
|
||||
//Replaced promisified utility since only two to promisify
|
||||
//this.promisified = {ipfs:{}};
|
||||
//makepromises(this.ipfs, this.promisified.ipfs, [ { block: ["put", "get"] }]); // Has to be after this.ipfs defined
|
||||
this.promisified = { ipfs: { block: {
|
||||
put: promisify(this.ipfs.block.put),
|
||||
get: promisify(this.ipfs.block.get)
|
||||
}}}
|
||||
}
|
||||
*/
|
||||
p_ipfsstart(verbose) {
|
||||
/*
|
||||
Just start IPFS - not Y (note used with "yarrays" and will be used for non-IPFS list management)
|
||||
Note - can't figure out how to use async with this, as we resolve the promise based on the event callback
|
||||
*/
|
||||
const self = this;
|
||||
return new Promise((resolve, reject) => {
|
||||
this.ipfs = new IPFS(this.options.ipfs);
|
||||
this.ipfs.on('ready', () => {
|
||||
//this._makepromises();
|
||||
resolve();
|
||||
});
|
||||
this.ipfs.on('error', (err) => reject(err));
|
||||
})
|
||||
.then(() => self.ipfs.version())
|
||||
.then((version) => console.log('IPFS READY',version))
|
||||
.catch((err) => {
|
||||
console.log("Error caught in p_ipfsstart");
|
||||
throw(err);
|
||||
});
|
||||
}
|
||||
|
||||
static setup0(options, verbose) {
|
||||
/*
|
||||
First part of setup, create obj, add to Transports but dont attempt to connect, typically called instead of p_setup if want to parallelize connections.
|
||||
*/
|
||||
const combinedoptions = Transport.mergeoptions(defaultoptions, options);
|
||||
if (verbose) console.log("IPFS loading options %o", combinedoptions);
|
||||
const t = new TransportIPFS(combinedoptions, verbose); // Note doesnt start IPFS
|
||||
Transports.addtransport(t);
|
||||
return t;
|
||||
}
|
||||
|
||||
async p_setup1(verbose) {
|
||||
try {
|
||||
if (verbose) console.log("IPFS starting and connecting");
|
||||
this.status = Transport.STATUS_STARTING; // Should display, but probably not refreshed in most case
|
||||
await this.p_ipfsstart(verbose); // Throws Error("websocket error") and possibly others.
|
||||
this.status = (await this.ipfs.isOnline()) ? Transport.STATUS_CONNECTED : Transport.STATUS_FAILED;
|
||||
} catch(err) {
|
||||
console.error("IPFS failed to connect",err);
|
||||
this.status = Transport.STATUS_FAILED;
|
||||
}
|
||||
return this;
|
||||
}
|
||||
|
||||
async p_status(verbose) {
|
||||
/*
|
||||
Return a string for the status of a transport. No particular format, but keep it short as it will probably be in a small area of the screen.
|
||||
*/
|
||||
this.status = (await this.ipfs.isOnline()) ? Transport.STATUS_CONNECTED : Transport.STATUS_FAILED;
|
||||
return this.status;
|
||||
}
|
||||
|
||||
// Everything else - unless documented here - should be opaque to the actual structure of a CID
|
||||
// or a url. This code may change as its not clear (from IPFS docs) if this is the right mapping.
|
||||
static urlFrom(unknown) {
|
||||
/*
|
||||
Convert a CID into a standardised URL e.g. ipfs:/ipfs/abc123
|
||||
*/
|
||||
if (unknown instanceof CID)
|
||||
return "ipfs:/ipfs/"+unknown.toBaseEncodedString();
|
||||
if (typeof unknown === "object" && unknown.hash) // e.g. from files.add
|
||||
return "ipfs:/ipfs/"+unknown.hash;
|
||||
if (typeof unknown === "string") // Not used currently
|
||||
return "ipfs:/ipfs/"+unknown;
|
||||
throw new errors.CodingError("TransportIPFS.urlFrom: Cant convert to url from",unknown);
|
||||
}
|
||||
|
||||
static cidFrom(url) {
|
||||
/*
|
||||
Convert a URL e.g. ipfs:/ipfs/abc123 into a CID structure suitable for retrieval
|
||||
url: String of form "ipfs:/ipfs/<hash>" or parsed URL or CID
|
||||
returns: CID
|
||||
throws: TransportError if cant convert
|
||||
*/
|
||||
if (url instanceof CID) return url;
|
||||
if (typeof(url) === "string") url = Url.parse(url);
|
||||
if (url && url["pathname"]) { // On browser "instanceof Url" isn't valid)
|
||||
const patharr = url.pathname.split('/');
|
||||
if ((url.protocol !== "ipfs:") || (patharr[1] !== 'ipfs') || (patharr.length < 3))
|
||||
throw new errors.TransportError("TransportIPFS.cidFrom bad format for url should be ipfs:/ipfs/...: " + url.href);
|
||||
if (patharr.length > 3)
|
||||
throw new errors.TransportError("TransportIPFS.cidFrom not supporting paths in url yet, should be ipfs:/ipfs/...: " + url.href);
|
||||
return new CID(patharr[2]);
|
||||
} else {
|
||||
throw new errors.CodingError("TransportIPFS.cidFrom: Cant convert url",url);
|
||||
}
|
||||
}
|
||||
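// Illustrative example (not part of the original file): urlFrom and cidFrom are intended to be
// inverses for plain ipfs urls ("Qm..." below is a placeholder multihash):
//   TransportIPFS.cidFrom("ipfs:/ipfs/Qm...")    => CID instance wrapping "Qm..."
//   TransportIPFS.urlFrom(new CID("Qm..."))      => "ipfs:/ipfs/Qm..."
// Anything with a path component after the hash is rejected for now (see the checks above).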
|
||||
static ipfsFrom(url) {
|
||||
/*
|
||||
Convert to a ipfspath i.e. /ipfs/Qm....
|
||||
Required because of strange differences in APIs between files.cat and dag.get see https://github.com/ipfs/js-ipfs/issues/1229
|
||||
*/
|
||||
if (url instanceof CID)
|
||||
return "/ipfs/"+url.toBaseEncodedString();
|
||||
if (typeof(url) !== "string") { // It better be URL which unfortunately is hard to test
|
||||
url = url.path;
|
||||
}
|
||||
if (url.indexOf('/ipfs/') > -1) {
|
||||
return url.slice(url.indexOf('/ipfs/'));
|
||||
}
|
||||
throw new errors.CodingError(`TransportIPFS.ipfsFrom: Cant convert url ${url} into a path starting /ipfs/`);
|
||||
}
|
||||
|
||||
static multihashFrom(url) {
|
||||
if (url instanceof CID)
|
||||
return url.toBaseEncodedString();
|
||||
if (typeof url === 'object' && url.path)
|
||||
url = url.path; // /ipfs/Q...
|
||||
if (typeof(url) === "string") {
|
||||
const idx = url.indexOf("/ipfs/");
|
||||
if (idx > -1) {
|
||||
return url.slice(idx+6);
|
||||
}
|
||||
}
|
||||
throw new errors.CodingError(`Cant turn ${url} into a multihash`);
|
||||
}
|
||||
|
||||
async p_rawfetch(url, {verbose=false, timeoutMS=60000, relay=false}={}) {
|
||||
/*
|
||||
Fetch some bytes based on a url of the form ipfs:/ipfs/Qm..... or ipfs:/ipfs/z.... .
|
||||
No assumption is made about the data in terms of size or structure, nor can we know whether it was created with dag.put or ipfs add or http /api/v0/add/
|
||||
|
||||
Where required by the underlying transport it should retrieve a number of its "blocks" and concatenate them.
|
||||
Returns a new Promise that resolves currently to a string.
|
||||
There may also be a need for a streaming version of this call, at this point undefined since we haven't (currently) got a use case.
|
||||
|
||||
:param string url: URL of object being retrieved
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve buffer: Return the object being fetched. (may in the future return a stream and buffer externally)
|
||||
:throws: TransportError if url invalid - note this happens immediately, not as a catch in the promise
|
||||
*/
|
||||
if (verbose) console.log("IPFS p_rawfetch", utils.stringfrom(url));
|
||||
if (!url) throw new errors.CodingError("TransportIPFS.p_rawfetch: requires url");
|
||||
const cid = TransportIPFS.cidFrom(url); // Throws TransportError if url bad
|
||||
const ipfspath = TransportIPFS.ipfsFrom(url) // Need because dag.get has different requirement than file.cat
|
||||
|
||||
try {
|
||||
const res = await utils.p_timeout(this.ipfs.dag.get(cid), timeoutMS);
|
||||
// noinspection Annotator
|
||||
if (res.remainderPath.length)
|
||||
{ // noinspection ExceptionCaughtLocallyJS
|
||||
throw new errors.TransportError("Not yet supporting paths in p_rawfetch");
|
||||
} //TODO-PATH
|
||||
let buff;
|
||||
if (res.value instanceof DAGNode) { // It's a file or something added with the HTTP API for example; TODO not yet handling multiple files
|
||||
if (verbose) console.log("IPFS p_rawfetch looks like its a file", url);
|
||||
//console.log("Case a or b" - we can tell the difference by looking at (res.value._links.length > 0) but dont need to
|
||||
// since we don't know if we are on node or browser, the best approach is to try files.cat and, if it fails, try block.get to get an approximate file.
|
||||
// Works on Node, but fails on Chrome, cant figure out how to get data from the DAGNode otherwise (its the wrong size)
|
||||
buff = await this.ipfs.files.cat(ipfspath); //See js-ipfs v0.27 version and https://github.com/ipfs/js-ipfs/issues/1229 and https://github.com/ipfs/interface-ipfs-core/blob/master/SPEC/FILES.md#cat
|
||||
|
||||
/* Was needed on v0.26, not on v0.27
|
||||
if (buff.length === 0) { // Hit the Chrome bug
|
||||
// This will get a file padded with ~14 bytes - 4 at front, 4 at end and cant find the other 6 !
|
||||
// but it seems to work for PDFs which is what I'm testing on.
|
||||
if (verbose) console.log("Kludge alert - files.cat fails in Chrome, trying block.get");
|
||||
let blk = await this.promisified.ipfs.block.get(cid);
|
||||
buff = blk.data;
|
||||
}
|
||||
END of v0.26 version */
|
||||
} else { //c: not a file
|
||||
buff = res.value;
|
||||
}
|
||||
if (verbose) console.log(`IPFS fetched ${buff.length} from ${ipfspath}`);
|
||||
return buff;
|
||||
} catch (err) {
|
||||
console.log("Caught misc error in TransportIPFS.p_rawfetch");
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
async p_rawstore(data, verbose) {
|
||||
/*
|
||||
Store a blob of data onto the decentralised transport.
|
||||
Returns a promise that resolves to the url of the data
|
||||
|
||||
:param string|Buffer data: Data to store - no assumptions made to size or content
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve string: url of data stored
|
||||
*/
|
||||
console.assert(data, "TransportIPFS.p_rawstore: requires data");
|
||||
const buf = (data instanceof Buffer) ? data : new Buffer(data);
|
||||
//return this.promisified.ipfs.block.put(buf).then((block) => block.cid)
|
||||
//https://github.com/ipfs/interface-ipfs-core/blob/master/SPEC/DAG.md#dagput
|
||||
//let res = await this.ipfs.dag.put(buf,{ format: 'dag-cbor', hashAlg: 'sha2-256' });
|
||||
const res = (await this.ipfs.files.add(buf,{ "cid-version": 1, hashAlg: 'sha2-256'}))[0];
|
||||
//TODO-IPFS has been suggested to move this to files.add with no filename.
|
||||
return TransportIPFS.urlFrom(res);
|
||||
//return this.ipfs.files.put(buf).then((block) => TransportIPFS.urlFrom(block.cid));
|
||||
}
|
||||
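// Illustrative example (not part of the original file): storing a short string resolves to a
// CIDv1-style url, e.g. p_rawstore("The quick brown fox") resolves to "ipfs:/ipfs/zdpu..."
// (see the expected qbf_url in p_test below for a concrete value).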
|
||||
// Based on https://github.com/ipfs/js-ipfs/pull/1231/files
|
||||
|
||||
async p_offsetStream(stream, links, startByte, endByte) {
|
||||
let streamPosition = 0
|
||||
try {
|
||||
for (let l in links) {
|
||||
const link = links[l];
|
||||
if (!stream.writable) { return } // The stream has been closed
|
||||
// DAGNode Links report unixfs object data sizes 14 bytes larger due to the protobuf wrapper
|
||||
const bytesInLinkedObjectData = link.size - 14
|
||||
if (startByte > (streamPosition + bytesInLinkedObjectData)) {
|
||||
// Start byte is after this block so skip it
|
||||
streamPosition += bytesInLinkedObjectData;
|
||||
} else if (endByte && endByte < streamPosition) { // TODO-STREAM this is copied from https://github.com/ipfs/js-ipfs/pull/1231/files but I think it should be endByte <= since endByte is first byte DONT want
|
||||
// End byte was before this block so skip it
|
||||
streamPosition += bytesInLinkedObjectData;
|
||||
} else {
|
||||
let lmh = link.multihash;
|
||||
let data;
|
||||
await this.ipfs.object.data(lmh)
|
||||
.then ((d) => unixFs.unmarshal(d).data)
|
||||
.then ((d) => data = d )
|
||||
.catch((err) => {console.log("XXX@289 err=",err);});
|
||||
if (!stream.writable) { return; } // The stream was closed while we were getting data
|
||||
const length = data.length;
|
||||
if (startByte > streamPosition && startByte < (streamPosition + length)) {
|
||||
// If the startByte is in the current block, skip to the startByte
|
||||
data = data.slice(startByte - streamPosition);
|
||||
}
|
||||
console.log(`Writing ${data.length} to stream`)
|
||||
stream.write(data);
|
||||
streamPosition += length;
|
||||
}
|
||||
}
|
||||
} catch(err) {
|
||||
console.log(err.message);
|
||||
}
|
||||
}
|
||||
async p_f_createReadStream(url, verbose=false) { // Asynchronously return a function that can be used in createReadStream TODO-API
|
||||
verbose = true;
|
||||
if (verbose) console.log("p_f_createReadStream",url);
|
||||
const mh = TransportIPFS.multihashFrom(url);
|
||||
const links = await this.ipfs.object.links(mh)
|
||||
let throughstream; //Holds pointer to stream between calls.
|
||||
const self = this;
|
||||
function crs(opts) { // This is a synchronous function
|
||||
// Return a readable stream that provides the bytes between offsets "start" and "end" inclusive
|
||||
console.log("opts=",JSON.stringify(opts));
|
||||
/* Can replace rest of crs with this when https://github.com/ipfs/js-ipfs/pull/1231/files lands (hopefully v0.28.3)
|
||||
return self.ipfs.catReadableStream(mh, opts ? opts.start : 0, (opts && opts.end) ? opts.end+1 : undefined)
|
||||
*/
|
||||
if (!opts) return throughstream; //TODO-STREAM unclear why called without opts - take this out when figured out
|
||||
if (throughstream && throughstream.destroy) throughstream.destroy();
|
||||
throughstream = new stream.PassThrough();
|
||||
|
||||
self.p_offsetStream( // Ignore promise returned, this will write to the stream asynchronously
|
||||
throughstream,
|
||||
links, // Uses the array of links created above in this function
|
||||
opts ? opts.start : 0,
|
||||
(opts && opts.end) ? opts.end : undefined);
|
||||
return throughstream;
|
||||
}
|
||||
return crs;
|
||||
}
|
||||
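// Illustrative example (not part of the original file): the resolved function can be handed to
// code that expects a createReadStream(opts) factory ("Qm..." is a placeholder multihash):
//   const crs = await transport.p_f_createReadStream("ipfs:/ipfs/Qm...", false);
//   const rs = crs({start: 0, end: 1023});   // readable stream of the first 1024 bytes
//   rs.on("data", (chunk) => { /* consume chunk */ });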
|
||||
static async p_test(opts, verbose) {
|
||||
if (verbose) {console.log("TransportIPFS.test")}
|
||||
try {
|
||||
const transport = await this.p_setup(opts, verbose); // Assumes IPFS already setup
|
||||
if (verbose) console.log(transport.name,"setup");
|
||||
const res = await transport.p_status(verbose);
|
||||
console.assert(res === Transport.STATUS_CONNECTED)
|
||||
|
||||
let urlqbf;
|
||||
const qbf = "The quick brown fox";
|
||||
const qbf_url = "ipfs:/ipfs/zdpuAscRnisRkYnEyJAp1LydQ3po25rCEDPPEDMymYRfN1yPK"; // Expected url
|
||||
const testurl = "1114"; // Just a predictable number can work with
|
||||
const url = await transport.p_rawstore(qbf, verbose);
|
||||
if (verbose) console.log("rawstore returned", url);
|
||||
const newcid = TransportIPFS.cidFrom(url); // Its a CID which has a buffer in it
|
||||
console.assert(url === qbf_url, "url should match url from rawstore");
|
||||
const cidmultihash = url.split('/')[2]; // Store cid from first block in form of multihash
|
||||
const newurl = TransportIPFS.urlFrom(newcid);
|
||||
console.assert(url === newurl, "Should round trip");
|
||||
urlqbf = url;
|
||||
const data = await transport.p_rawfetch(urlqbf, {verbose});
|
||||
console.assert(data.toString() === qbf, "Should fetch block stored above");
|
||||
//console.log("TransportIPFS test complete");
|
||||
return transport
|
||||
} catch(err) {
|
||||
console.log("Exception thrown in TransportIPFS.test:", err.message);
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
Transports._transportclasses["IPFS"] = TransportIPFS;
|
||||
exports = module.exports = TransportIPFS;
|
293
src/TransportWEBTORRENT.js
Normal file
@ -0,0 +1,293 @@
|
||||
/*
|
||||
This Transport layer builds on WebTorrent
|
||||
|
||||
Y Lists have listeners and generate events - see docs at ...
|
||||
*/
|
||||
|
||||
// WebTorrent components
|
||||
|
||||
const WebTorrent = require('webtorrent');
|
||||
const stream = require('readable-stream');
|
||||
|
||||
// Other Dweb modules
|
||||
const errors = require('./Errors'); // Standard Dweb Errors
|
||||
const Transport = require('./Transport.js'); // Base class for TransportXyz
|
||||
const Transports = require('./Transports'); // Manage all Transports that are loaded
|
||||
|
||||
let defaultoptions = {
|
||||
webtorrent: {}
|
||||
};
|
||||
|
||||
class TransportWEBTORRENT extends Transport {
|
||||
/*
|
||||
WebTorrent specific transport
|
||||
|
||||
Fields:
|
||||
webtorrent: object returned when starting webtorrent
|
||||
*/
|
||||
|
||||
constructor(options, verbose) {
|
||||
super(options, verbose);
|
||||
this.webtorrent = undefined; // Undefined till start WebTorrent
|
||||
this.options = options; // Dictionary of options
|
||||
this.name = "WEBTORRENT"; // For console log etc
|
||||
this.supportURLs = ['magnet'];
|
||||
this.supportFunctions = ['fetch', 'createReadStream'];
|
||||
this.status = Transport.STATUS_LOADED;
|
||||
}
|
||||
|
||||
p_webtorrentstart(verbose) {
|
||||
/*
|
||||
Start WebTorrent and wait until it is ready.
|
||||
*/
|
||||
let self = this;
|
||||
return new Promise((resolve, reject) => {
|
||||
this.webtorrent = new WebTorrent(this.options.webtorrent);
|
||||
this.webtorrent.once("ready", () => {
|
||||
console.log("WEBTORRENT READY");
|
||||
resolve();
|
||||
});
|
||||
this.webtorrent.once("error", (err) => reject(err));
|
||||
this.webtorrent.on("warning", (err) => {
|
||||
console.warn("WebTorrent Torrent WARNING: " + err.message);
|
||||
});
|
||||
})
|
||||
}
|
||||
|
||||
static setup0(options, verbose) {
|
||||
/*
|
||||
First part of setup, create obj, add to Transports but dont attempt to connect, typically called instead of p_setup if want to parallelize connections.
|
||||
*/
|
||||
let combinedoptions = Transport.mergeoptions(defaultoptions, options);
|
||||
console.log("WebTorrent options %o", combinedoptions);
|
||||
let t = new TransportWEBTORRENT(combinedoptions, verbose);
|
||||
Transports.addtransport(t);
|
||||
|
||||
return t;
|
||||
}
|
||||
|
||||
async p_setup1(verbose) {
|
||||
try {
|
||||
this.status = Transport.STATUS_STARTING;
|
||||
await this.p_webtorrentstart(verbose);
|
||||
} catch(err) {
|
||||
console.error("WebTorrent failed to connect",err);
|
||||
this.status = Transport.STATUS_FAILED;
|
||||
}
|
||||
return this;
|
||||
}
|
||||
|
||||
async p_status(verbose) {
|
||||
/*
|
||||
Return a string for the status of a transport. No particular format, but keep it short as it will probably be in a small area of the screen.
|
||||
*/
|
||||
if (this.webtorrent && this.webtorrent.ready) {
|
||||
this.status = Transport.STATUS_CONNECTED;
|
||||
} else if (this.webtorrent) {
|
||||
this.status = Transport.STATUS_STARTING;
|
||||
} else {
|
||||
this.status = Transport.STATUS_FAILED;
|
||||
}
|
||||
|
||||
return this.status;
|
||||
}
|
||||
|
||||
webtorrentparseurl(url) {
|
||||
/* Parse a URL
|
||||
url: URL as string or already parsed into Url
|
||||
returns: torrentid, path
|
||||
*/
|
||||
if (!url) {
|
||||
throw new errors.CodingError("TransportWEBTORRENT.p_rawfetch: requires url");
|
||||
}
|
||||
|
||||
const urlstring = typeof url === "string" ? url : url.href
|
||||
const index = urlstring.indexOf('/');
|
||||
|
||||
if (index === -1) {
|
||||
throw new errors.CodingError("TransportWEBTORRENT.p_rawfetch: invalid url - missing path component. Should look like magnet:xyzabc/path/to/file");
|
||||
}
|
||||
|
||||
const torrentId = urlstring.slice(0, index);
|
||||
const path = urlstring.slice(index + 1);
|
||||
|
||||
return { torrentId, path }
|
||||
}
|
||||
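// Illustrative example (not part of the original file): the first unencoded "/" splits the
// magnet uri from the path inside the torrent, e.g. (uri shortened with "..." for readability)
//   webtorrentparseurl("magnet:?xt=urn:btih:dd82.../Big Buck Bunny.en.srt")
//   => { torrentId: "magnet:?xt=urn:btih:dd82...", path: "Big Buck Bunny.en.srt" }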
|
||||
async p_webtorrentadd(torrentId) {
|
||||
return new Promise((resolve, reject) => {
|
||||
// Check if this torrentId is already added to the webtorrent client
|
||||
let torrent = this.webtorrent.get(torrentId);
|
||||
|
||||
// If not, then add the torrentId to the torrent client
|
||||
if (!torrent) {
|
||||
torrent = this.webtorrent.add(torrentId);
|
||||
|
||||
torrent.once("error", (err) => {
|
||||
reject(new errors.TransportError("Torrent encountered a fatal error " + err.message));
|
||||
});
|
||||
|
||||
torrent.on("warning", (err) => {
|
||||
console.warn("WebTorrent Torrent WARNING: " + err.message + " (" + torrent.name + ")");
|
||||
});
|
||||
}
|
||||
|
||||
if (torrent.ready) {
|
||||
resolve(torrent);
|
||||
} else {
|
||||
torrent.once("ready", () => {
|
||||
resolve(torrent);
|
||||
});
|
||||
}
|
||||
|
||||
if (typeof window !== "undefined") { // Check running in browser
|
||||
window.WEBTORRENT_TORRENT = torrent;
|
||||
torrent.once('close', () => {
|
||||
window.WEBTORRENT_TORRENT = null
|
||||
})
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
webtorrentfindfile (torrent, path) {
|
||||
/*
|
||||
Given a torrent object and a path to a file within the torrent, find the given file.
|
||||
*/
|
||||
const filePath = torrent.name + '/' + path;
|
||||
const file = torrent.files.find(file => {
|
||||
return file.path === filePath;
|
||||
});
|
||||
|
||||
if (!file) {
|
||||
//debugger;
|
||||
throw new errors.TransportError("Requested file (" + path + ") not found within torrent ");
|
||||
}
|
||||
|
||||
return file;
|
||||
}
|
||||
|
||||
p_rawfetch(url, {verbose=false}={}) {
|
||||
/*
|
||||
Fetch some bytes based on a url of the form:
|
||||
|
||||
magnet:xyzabc/path/to/file
|
||||
|
||||
(Where xyzabc is the typical magnet uri contents)
|
||||
|
||||
No assumption is made about the data in terms of size or structure. Returns a new Promise that resolves to a buffer.
|
||||
|
||||
:param string url: URL of object being retrieved
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve buffer: Return the object being fetched.
|
||||
:throws: TransportError if url invalid - note this happens immediately, not as a catch in the promise
|
||||
*/
|
||||
return new Promise((resolve, reject) => {
|
||||
if (verbose) console.log("WebTorrent p_rawfetch", url);
|
||||
|
||||
const { torrentId, path } = this.webtorrentparseurl(url);
|
||||
this.p_webtorrentadd(torrentId)
|
||||
.then((torrent) => {
|
||||
torrent.deselect(0, torrent.pieces.length - 1, false); // Dont download entire torrent as will pull just the file we want
|
||||
const file = this.webtorrentfindfile(torrent, path);
|
||||
file.getBuffer((err, buffer) => {
|
||||
if (err) {
|
||||
return reject(new errors.TransportError("Torrent encountered a fatal error " + err.message + " (" + torrent.name + ")"));
|
||||
}
|
||||
resolve(buffer);
|
||||
});
|
||||
})
|
||||
.catch((err) => reject(err));
|
||||
});
|
||||
}
|
||||
|
||||
async p_f_createReadStream(url, verbose) { //TODO-API
|
||||
if (verbose) console.log("TransportWEBTORRENT p_f_createreadstream %o", url);
|
||||
try {
|
||||
const {torrentId, path} = this.webtorrentparseurl(url);
|
||||
let torrent = await this.p_webtorrentadd(torrentId);
|
||||
let filet = this.webtorrentfindfile(torrent, path);
|
||||
let self = this;
|
||||
return function (opts) {
|
||||
return self.createReadStream(filet, opts, verbose);
|
||||
};
|
||||
} catch(err) {
|
||||
console.log(`p_f_createReadStream failed on ${url} ${err.message}`);
|
||||
throw(err);
|
||||
};
|
||||
}
|
||||
|
||||
createReadStream(file, opts, verbose) {
|
||||
/*
|
||||
Fetch bytes progressively, using a node.js readable stream, based on a url of the form:
|
||||
|
||||
magnet:xyzabc/path/to/file
|
||||
|
||||
(Where xyzabc is the typical magnet uri contents)
|
||||
|
||||
No assumption is made about the data in terms of size or structure. Returns a new Promise that resolves to a node.js readable stream.
|
||||
|
||||
Node.js readable stream docs:
|
||||
https://nodejs.org/api/stream.html#stream_readable_streams
|
||||
|
||||
:param string url: URL of object being retrieved
|
||||
:param boolean verbose: true for debugging output
|
||||
:returns stream: The readable stream.
|
||||
:throws: TransportError if url invalid - note this happens immediately, not as a catch in the promise
|
||||
*/
|
||||
if (verbose) console.log("TransportWEBTORRENT createreadstream %o %o", file.name, opts);
|
||||
|
||||
let through; // Declared outside the try so the catch clause below can reference it
try {
|
||||
through = new stream.PassThrough();
|
||||
const fileStream = file.createReadStream(opts);
|
||||
fileStream.pipe(through);
|
||||
return through;
|
||||
} catch(err) {
|
||||
if (through && typeof through.destroy === 'function') through.destroy(err);
|
||||
else if (through) through.emit('error', err);
|
||||
};
|
||||
}
|
||||
|
||||
static async p_test(opts, verbose) {
|
||||
try {
|
||||
let transport = await this.p_setup(opts, verbose); // Sets up the WebTorrent client
|
||||
if (verbose) console.log(transport.name, "setup");
|
||||
let res = await transport.p_status(verbose);
|
||||
console.assert(res === Transport.STATUS_CONNECTED)
|
||||
|
||||
// Creative commons torrent, copied from https://webtorrent.io/free-torrents
|
||||
let bigBuckBunny = 'magnet:?xt=urn:btih:dd8255ecdc7ca55fb0bbf81323d87062db1f6d1c&dn=Big+Buck+Bunny&tr=udp%3A%2F%2Fexplodie.org%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.empire-js.us%3A1337&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337&tr=wss%3A%2F%2Ftracker.btorrent.xyz&tr=wss%3A%2F%2Ftracker.fastcast.nz&tr=wss%3A%2F%2Ftracker.openwebtorrent.com&ws=https%3A%2F%2Fwebtorrent.io%2Ftorrents%2F&xs=https%3A%2F%2Fwebtorrent.io%2Ftorrents%2Fbig-buck-bunny.torrent/Big Buck Bunny.en.srt';
|
||||
|
||||
let data1 = await transport.p_rawfetch(bigBuckBunny, {verbose});
|
||||
data1 = data1.toString();
|
||||
assertData(data1);
|
||||
|
||||
const crsf = await transport.p_f_createReadStream(bigBuckBunny, verbose); // Returns a createReadStream-style factory for the file
const stream = crsf({});
|
||||
|
||||
const chunks = [];
|
||||
stream.on("data", (chunk) => {
|
||||
chunks.push(chunk);
|
||||
});
|
||||
stream.on("end", () => {
|
||||
const data2 = Buffer.concat(chunks).toString();
|
||||
assertData(data2);
|
||||
});
|
||||
|
||||
function assertData(data) {
|
||||
// Test for a string that is contained within the file
|
||||
let expectedWithinData = "00:00:02,000 --> 00:00:05,000";
|
||||
|
||||
console.assert(data.indexOf(expectedWithinData) !== -1, "Should fetch 'Big Buck Bunny.en.srt' from the torrent");
|
||||
|
||||
// Test that the length is what we expect
|
||||
console.assert(data.length === 129, "'Big Buck Bunny.en.srt' was " + data.length);
|
||||
}
|
||||
} catch (err) {
|
||||
console.log("Exception thrown in TransportWEBTORRENT.p_test:", err.message);
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
Transports._transportclasses["WEBTORRENT"] = TransportWEBTORRENT;
|
||||
|
||||
exports = module.exports = TransportWEBTORRENT;
|
335
src/TransportYJS.js
Normal file
@ -0,0 +1,335 @@
|
||||
/*
|
||||
This Transport layer builds on the YJS DB and uses IPFS as its transport.
|
||||
|
||||
Y Lists have listeners and generate events - see docs at ...
|
||||
*/
|
||||
const Url = require('url');
|
||||
|
||||
//const Y = require('yjs/dist/y.js'); // Explicit require of dist/y.js to get around a webpack warning, but causes a different error in YJS
|
||||
const Y = require('yjs'); // See the commented-out explicit require of dist/y.js above
|
||||
require('y-memory')(Y);
|
||||
require('y-array')(Y);
|
||||
require('y-text')(Y);
|
||||
require('y-map')(Y);
|
||||
require('y-ipfs-connector')(Y);
|
||||
require('y-indexeddb')(Y);
|
||||
//require('y-leveldb')(Y); //- can't be there for browser, node seems to find it ok without this, though not sure why..
|
||||
|
||||
// Utility packages (ours) And one-liners
|
||||
function delay(ms, val) { return new Promise(resolve => {setTimeout(() => { resolve(val); },ms)})}
|
||||
|
||||
// Other Dweb modules
|
||||
const errors = require('./Errors'); // Standard Dweb Errors
|
||||
const Transport = require('./Transport.js'); // Base class for TransportXyz
|
||||
const Transports = require('./Transports'); // Manage all Transports that are loaded
|
||||
const utils = require('./utils'); // Utility functions
|
||||
|
||||
let defaultoptions = {
|
||||
yarray: { // Based on how IIIF uses them in bootstrap.js in ipfs-iiif-db repo
|
||||
db: {
|
||||
name: 'indexeddb', // leveldb in node
|
||||
},
|
||||
connector: {
|
||||
name: 'ipfs',
|
||||
//ipfs: ipfs, // Need to link IPFS here once created
|
||||
},
|
||||
}
|
||||
};
|
||||
|
||||
class TransportYJS extends Transport {
|
||||
/*
|
||||
YJS specific transport - over IPFS, but could probably use other YJS transports
|
||||
|
||||
Fields:
|
||||
ipfs: object returned when starting IPFS
|
||||
yarray: object returned when starting yarray
|
||||
*/
|
||||
|
||||
constructor(options, verbose) {
|
||||
super(options, verbose);
|
||||
this.options = options; // Dictionary of options { ipfs: {...}, "yarrays", yarray: {...} }
|
||||
this.name = "YJS"; // For console log etc
|
||||
this.supportURLs = ['yjs'];
|
||||
this.supportFunctions = ['fetch', 'add', 'list', 'listmonitor', 'newlisturls',
|
||||
'connection', 'get', 'set', 'getall', 'keys', 'newdatabase', 'newtable', 'monitor']; // Only does list functions, Does not support reverse,
|
||||
this.status = Transport.STATUS_LOADED;
|
||||
}
|
||||
|
||||
async p__y(url, opts, verbose) {
|
||||
/*
|
||||
Utility function to get Y for this URL with appropriate options, and open a new connection if there is not one already
|
||||
|
||||
url: URL string to find list of
|
||||
opts: Options to add to defaults
|
||||
resolves: Y
|
||||
*/
|
||||
if (!(typeof(url) === "string")) { url = url.href; } // Convert if its a parsed URL
|
||||
console.assert(url.startsWith("yjs:/yjs/"));
|
||||
try {
|
||||
if (this.yarrays[url]) {
|
||||
if (verbose) console.log("Found Y for", url);
|
||||
return this.yarrays[url];
|
||||
} else {
|
||||
let options = Transport.mergeoptions(this.options.yarray, {connector: {room: url}}, opts); // Copies options, ipfs will be set already
|
||||
if (verbose) console.log("Creating Y for", url); //"options=",options);
|
||||
return this.yarrays[url] = await Y(options);
|
||||
}
|
||||
} catch(err) {
|
||||
console.log("Failed to initialize Y");
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
async p__yarray(url, verbose) {
|
||||
/*
|
||||
Utility function to get a Yarray for this URL and open a new connection if there is not one already
|
||||
url: URL string to find list of
|
||||
resolves: Y
|
||||
*/
|
||||
return this.p__y(url, { share: {array: "Array"}}); // Copies options, ipfs will be set already
|
||||
}
|
||||
async p_connection(url, verbose) {
|
||||
/*
|
||||
Utility function to get a Y connection (Map share) for this URL and open a new one if there is not one already
|
||||
url: URL string to find list of
|
||||
resolves: Y - a connection to use for get's etc.
|
||||
*/
|
||||
return this.p__y(url, { share: {map: "Map"}}); // Copies options, ipfs will be set already
|
||||
}
|
||||
|
||||
|
||||
|
||||
static setup0(options, verbose) {
|
||||
/*
|
||||
First part of setup, create obj, add to Transports but dont attempt to connect, typically called instead of p_setup if want to parallelize connections.
|
||||
*/
|
||||
let combinedoptions = Transport.mergeoptions(defaultoptions, options);
|
||||
if (verbose) console.log("YJS options %o", combinedoptions); // Log even if !verbose
|
||||
let t = new TransportYJS(combinedoptions, verbose); // Note doesnt start IPFS or Y
|
||||
Transports.addtransport(t);
|
||||
return t;
|
||||
}
|
||||
|
||||
async p_setup2(verbose) {
|
||||
/*
|
||||
This sets up for Y connections, which are opened each time a resource is listed, added to, or listmonitored.
|
||||
p_setup2 is defined because IPFS will have started during the p_setup1 phase.
|
||||
Throws: Error("websocket error") if WiFi off, probably other errors if fails to connect
|
||||
*/
|
||||
try {
|
||||
this.status = Transport.STATUS_STARTING; // Should display, but probably not refreshed in most case
|
||||
this.options.yarray.connector.ipfs = Transports.ipfs(verbose).ipfs; // Find an IPFS to use (IPFS's should be starting in p_setup1)
|
||||
this.yarrays = {};
|
||||
} catch(err) {
|
||||
console.error("YJS failed to start",err);
|
||||
this.status = Transport.STATUS_FAILED;
|
||||
}
|
||||
return this;
|
||||
}
|
||||
|
||||
async p_status(verbose) {
|
||||
/*
|
||||
Return a string for the status of a transport. No particular format, but keep it short as it will probably be in a small area of the screen.
|
||||
For YJS, its online if IPFS is.
|
||||
*/
|
||||
this.status = (await this.options.yarray.connector.ipfs.isOnline()) ? Transport.STATUS_CONNECTED : Transport.STATUS_FAILED;
|
||||
return this.status;
|
||||
}
|
||||
|
||||
async p_rawlist(url, verbose) {
|
||||
/*
|
||||
Fetch all the objects in a list, these are identified by the url of the public key used for signing.
|
||||
(Note this is the 'signedby' parameter of the p_rawadd call, not the 'url' parameter)
|
||||
Returns a promise that resolves to the list.
|
||||
Each item of the list is a dict: {"url": url, "date": date, "signature": signature, "signedby": signedby}
|
||||
List items may have other data (e.g. reference ids of underlying transport)
|
||||
|
||||
:param string url: String with the url that identifies the list.
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve array: An array of objects as stored on the list.
|
||||
*/
|
||||
try {
|
||||
let y = await this.p__yarray(url, verbose);
|
||||
let res = y.share.array.toArray();
|
||||
// .filter((obj) => (obj.signedby.includes(url))); Cant filter since url is the YJS URL, not the URL of the CL that signed it. (upper layers verify, which filters)
|
||||
if (verbose) console.log("p_rawlist found", ...utils.consolearr(res));
|
||||
return res;
|
||||
} catch(err) {
|
||||
console.log("TransportYJS.p_rawlist failed",err.message);
|
||||
throw(err);
|
||||
}
|
||||
}
|
||||
|
||||
listmonitor(url, callback, verbose) {
|
||||
/*
|
||||
Setup a callback called whenever an item is added to a list, typically it would be called immediately after a p_rawlist to get any more items not returned by p_rawlist.
|
||||
|
||||
:param url: string Identifier of list (as used by p_rawlist and "signedby" parameter of p_rawadd
|
||||
:param callback: function(obj) Callback for each new item added to the list
|
||||
obj is same format as p_rawlist or p_rawreverse
|
||||
:param verbose: boolean - true for debugging output
|
||||
*/
|
||||
let y = this.yarrays[typeof url === "string" ? url : url.href];
|
||||
console.assert(y,"Should always exist before calling listmonitor - async call p__yarray(url) to create");
|
||||
y.share.array.observe((event) => {
|
||||
if (event.type === 'insert') { // Currently ignoring deletions.
|
||||
if (verbose) console.log('resources inserted', event.values);
|
||||
//cant filter because url is YJS local, not signer, callback should filter
|
||||
//event.values.filter((obj) => obj.signedby.includes(url)).map(callback);
|
||||
event.values.map(callback);
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
rawreverse() {
|
||||
/*
|
||||
Similar to p_rawlist, but return the list item of all the places where the object url has been listed.
|
||||
The url here corresponds to the "url" parameter of p_rawadd
|
||||
Returns a promise that resolves to the list.
|
||||
|
||||
:param string url: String with the url that identifies the object put on a list.
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve array: An array of objects as stored on the list.
|
||||
*/
|
||||
//TODO-REVERSE this needs implementing once list structure on IPFS more certain
|
||||
throw new errors.ToBeImplementedError("Undefined function TransportYJS.rawreverse"); }
|
||||
|
||||
async p_rawadd(url, sig, verbose) {
|
||||
/*
|
||||
Store a new list item, it should be stored so that it can be retrieved either by "signedby" (using p_rawlist) or
|
||||
by "url" (with p_rawreverse). The underlying transport does not need to guarantee the signature,
|
||||
an invalid item on a list should be rejected on higher layers.
|
||||
|
||||
:param string url: String identifying list to post to
|
||||
:param Signature sig: Signature object containing at least:
|
||||
date - date of signing in ISO format,
|
||||
urls - array of urls for the object being signed
|
||||
signature - verifiable signature of date+urls
|
||||
signedby - urls of public key used for the signature
|
||||
:param boolean verbose: true for debugging output
|
||||
:resolve undefined:
|
||||
*/
|
||||
console.assert(url && sig.urls.length && sig.signature && sig.signedby.length, "TransportYJS.p_rawadd args", url, sig);
|
||||
if (verbose) console.log("TransportYJS.p_rawadd", typeof url === "string" ? url : url.href, sig);
|
||||
let value = sig.preflight(Object.assign({}, sig));
|
||||
let y = await this.p__yarray(url, verbose);
|
||||
y.share.array.push([value]);
|
||||
}
|
||||
|
||||
p_newlisturls(cl, verbose) {
|
||||
let u = cl._publicurls.map(urlstr => Url.parse(urlstr))
|
||||
.find(parsedurl =>
|
||||
(parsedurl.protocol === "ipfs" && parsedurl.pathname.includes('/ipfs/'))
|
||||
|| (parsedurl.protocol === "yjs:"));
|
||||
if (!u) {
|
||||
u = `yjs:/yjs/${ cl.keypair.verifyexportmultihashsha256_58() }`; // Pretty random, but means same test will generate same list
|
||||
}
|
||||
return [u,u];
|
||||
}
|
||||
|
||||
|
||||
// Support for Key-Value pairs as per
|
||||
// https://docs.google.com/document/d/1yfmLRqKPxKwB939wIy9sSaa7GKOzM5PrCZ4W1jRGW6M/edit#
|
||||
async p_newdatabase(pubkey, verbose) {
|
||||
//if (pubkey instanceof Dweb.PublicPrivate)
|
||||
if (pubkey.hasOwnProperty("keypair"))
|
||||
pubkey = pubkey.keypair.signingexport()
|
||||
// By this point pubkey should be an export of a public key of form xyz:abc where xyz
|
||||
// specifies the type of public key (NACL VERIFY being the only kind we expect currently)
|
||||
let u = `yjs:/yjs/${encodeURIComponent(pubkey)}`;
|
||||
return {"publicurl": u, "privateurl": u};
|
||||
}
|
||||
|
||||
//TODO maybe change the listmonitor / monitor code for to use "on" and the structure of PP.events
|
||||
//TODO but note https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy about Proxy which might be suitable, prob not as doesnt map well to lists
|
||||
async p_newtable(pubkey, table, verbose) {
|
||||
if (!pubkey) throw new errors.CodingError("p_newtable currently requires a pubkey");
|
||||
let database = await this.p_newdatabase(pubkey, verbose);
|
||||
// If have use cases without a database, then call p_newdatabase first
|
||||
return { privateurl: `${database.privateurl}/${table}`, publicurl: `${database.publicurl}/${table}`} // No action required to create it
|
||||
}
|
||||
|
||||
async p_set(url, keyvalues, value, verbose) { // url = yjs:/yjs/database/table/key //TODO-KEYVALUE-API
|
||||
let y = await this.p_connection(url, verbose);
|
||||
if (typeof keyvalues === "string") {
|
||||
y.share.map.set(keyvalues, JSON.stringify(value));
|
||||
} else {
|
||||
Object.keys(keyvalues).map((key) => y.share.map.set(key, keyvalues[key]));
|
||||
}
|
||||
}
|
||||
_p_get(y, keys, verbose) {
|
||||
if (Array.isArray(keys)) {
|
||||
return keys.reduce(function(previous, key) {
|
||||
let val = y.share.map.get(key);
|
||||
previous[key] = typeof val === "string" ? JSON.parse(val) : val; // Handle undefined
|
||||
return previous;
|
||||
}, {});
|
||||
} else {
|
||||
let val = y.share.map.get(keys);
|
||||
return typeof val === "string" ? JSON.parse(val) : val; // Surprisingly this is sync, the p_connection should have synchronised
|
||||
}
|
||||
}
|
||||
async p_get(url, keys, verbose) { //TODO-KEYVALUE-API - return dict or single
|
||||
return this._p_get(await this.p_connection(url, verbose), keys);
|
||||
}
|
||||
|
||||
async p_delete(url, keys, verbose) { //TODO-KEYVALUE-API
|
||||
let y = await this.p_connection(url, verbose);
|
||||
if (typeof keys === "string") {
|
||||
y.share.map.delete(keys);
|
||||
} else {
|
||||
keys.map((key) => y.share.map.delete(key)); // Surprisingly this is sync, the p_connection should have synchronised
|
||||
}
|
||||
}
|
||||
|
||||
async p_keys(url, verbose) {
|
||||
let y = await this.p_connection(url, verbose);
|
||||
return y.share.map.keys(); // Surprisingly this is sync, the p_connection should have synchronised
|
||||
}
|
||||
async p_getall(url, verbose) {
|
||||
let y = await this.p_connection(url, verbose);
|
||||
let keys = y.share.map.keys(); // Surprisingly this is sync, the p_connection should have synchronised
|
||||
return this._p_get(y, keys);
|
||||
}
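    /* Usage sketch for the key-value API above (hypothetical table url; assumes setup has completed
       so p_connection can open the YJS document):
        let url = "yjs:/yjs/EXAMPLEPUBKEY/EXAMPLETABLE";               // hypothetical
        await transport.p_set(url, "name", "example", verbose);        // single key
        await transport.p_set(url, {a: "1", b: "2"}, undefined, verbose); // dict form
        let one = await transport.p_get(url, "name", verbose);         // -> "example"
        let all = await transport.p_getall(url, verbose);              // -> {name: ..., a: ..., b: ...}
        await transport.p_delete(url, ["a", "b"], verbose);
    */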

    async p_rawfetch(url, {verbose=false}={}) {
        return { // See identical structure in TransportHTTP
            table: "keyvaluetable", //TODO-KEYVALUE it's unclear if this is the best way, as we may want to know the real type of table e.g. domain
            _map: await this.p_getall(url, verbose)
        }; // Data structure is ok, as SmartDict.p_fetch will pass it to the KVT constructor
    }

    async monitor(url, callback, verbose) {
        /*
        Set up a callback called whenever a key/value pair is added or updated; typically it would be called immediately after a p_getall to catch any changes it didn't return.
        Stack: KVT()|KVT.p_new => KVT.monitor => (a: Transports.monitor => YJS.monitor)(b: dispatchEvent)

        :param url: string Identifier of the table (as used by p_set and p_getall)
        :param callback: function({type, key, value}) Callback for each new item added to the table
        :param verbose: boolean - true for debugging output
        */
        url = typeof url === "string" ? url : url.href;
        let y = this.yarrays[url];
        if (!y) {
            throw new errors.CodingError("Should always exist before calling monitor - async call p__yarray(url) to create");
        }
        y.share.map.observe((event) => {
            if (['add','update'].includes(event.type)) { // Currently ignoring deletions.
                if (verbose) console.log("YJS monitor:", url, event.type, event.name, event.value);
                // Ignores event.path (only in observeDeep) and event.object
                if (!(event.type === "update" && event.oldValue === event.value)) {
                    // Don't trigger on a no-op update, as we've seen loops with p_set
                    let newevent = {
                        "type": {"add": "set", "update": "set", "delete": "delete"}[event.type],
                        "value": JSON.parse(event.value),
                        "key": event.name,
                    };
                    callback(newevent);
                }
            }
        })
    }
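    /* Usage sketch (hypothetical url; monitor is sync, so it assumes the YJS document already exists
       in this.yarrays, e.g. created by an earlier async p_connection / p__yarray call):
        await transport.p_connection(url, verbose);
        transport.monitor(url, ({type, key, value}) => console.log(type, key, value), verbose);
        await transport.p_set(url, "name", "example", verbose);   // should trigger {type: "set", key: "name", value: "example"}
    */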

}
TransportYJS.Y = Y; // Allow node tests to find it
Transports._transportclasses["YJS"] = TransportYJS;
exports = module.exports = TransportYJS;
522 src/Transports.js Normal file
@@ -0,0 +1,522 @@
const Url = require('url');
const errors = require('./Errors');

/*
Handles multiple transports; the API should be (almost) the same as for an individual transport.
*/


class Transports {
    constructor(options, verbose) {
        if (verbose) console.log("Transports(%o)", options);
    }

    static _connected() {
        /*
        Get an array of transports that are connected, i.e. currently usable
        */
        return this._transports.filter((t) => (!t.status));
    }

    static connectedNames() {
        return this._connected().map(t => t.name);
    }

    static connectedNamesParm() {
        return this.connectedNames().map(n => "transport="+n).join('&')
    }

    static validFor(urls, func, options) {
        /*
        Finds an array of Transports that can support this URL.

        Excludes any transports whose status != 0 as they aren't connected.

        urls: Array of urls
        func: Function to check support for: fetch, store, add, list, listmonitor, reverse - see supportFunctions on each Transport class
        returns: Array of pairs of url & transport instance [ [ u1, t1], [u1, t2], [u2, t1]]
        */
        console.assert((urls && urls[0]) || ["store", "newlisturls", "newdatabase", "newtable"].includes(func), "Transports.validFor failed - coding error - urls=", urls, "func=", func); // For debugging old calling patterns with [ undefined ]
        if (!(urls && urls.length > 0)) {
            return this._connected().filter((t) => (t.supports(undefined, func)))
                .map((t) => [undefined, t]);
        } else {
            return [].concat(
                ...urls.map((url) => typeof url === 'string' ? Url.parse(url) : url) // parse URLs once
                    .map((url) =>
                        this._connected().filter((t) => (t.supports(url, func))) // [ t1, t2 ]
                            .map((t) => [url, t]))); // [[ u, t1], [u, t2]]
        }
    }
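    /* Usage sketch (hypothetical urls; assumes HTTP and IPFS transports have been set up and connected):
        let pairs = Transports.validFor(["ipfs:/ipfs/EXAMPLEHASH", "https://example.org/x"], "fetch");
        // -> e.g. [ [parsed ipfs url, IPFS transport instance], [parsed https url, HTTP transport instance] ]
        // Each pair is [parsed url, transport]; an empty array means no connected transport supports "fetch" for these urls.
    */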

    static http(verbose) {
        // Find an HTTP transport if it exists, so that for example YJS can use it.
        return Transports._connected().find((t) => t.name === "HTTP")
    }

    static ipfs(verbose) {
        // Find an IPFS transport if it exists, so that for example YJS can use it.
        return Transports._connected().find((t) => t.name === "IPFS")
    }

    static async p_resolveNames(urls) {
        /* If and only if TransportNAME was loaded (it might not be, as it depends on higher level classes like Domain and SmartDict)
           then resolve urls that might be names, returning a modified array.
        */
        if (this.namingcb) {
            return await this.namingcb(urls); // Array of resolved urls
        } else {
            return urls;
        }
    }

    static resolveNamesWith(cb) {
        // Set a callback for p_resolveNames
        this.namingcb = cb;
    }

    static async _p_rawstore(tt, data, verbose) {
        // Internal method to store at known transports
        let errs = [];
        let rr = await Promise.all(tt.map(async function(t) {
            try {
                return await t.p_rawstore(data, verbose); //url
            } catch(err) {
                console.log("Could not rawstore to", t.name, err.message);
                errs.push(err);
                return undefined;
            }
        }));
        rr = rr.filter((r) => !!r); // Trim any that had errors
        if (!rr.length) {
            throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); // New error with concatenated messages
        }
        return rr;
    }

    static async p_rawstore(data, verbose) {
        /*
        data: Raw data to store - typically a string, but it's passed on unmodified here
        returns: Array of urls where the data was stored
        throws: TransportError with message being the concatenated messages of the transports if NONE of them succeed.
        */
        let tt = this.validFor(undefined, "store").map(([u, t]) => t); // Valid connected transports that support "store"
        if (verbose) console.log("Valid for transports:", tt.map(t => t.name));
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_rawstore: Can't find transport for store");
        }
        return this._p_rawstore(tt, data, verbose);
    }
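    /* Usage sketch (assumes at least one connected transport supports "store";
       the returned urls depend on which transports are loaded):
        let urls = await Transports.p_rawstore("hello world", verbose);
        // -> e.g. ["ipfs:/ipfs/EXAMPLEHASH", ...]                     (hypothetical)
        let data = await Transports.p_rawfetch(urls, {verbose});        // round-trip the same bytes back
    */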

    static async p_rawlist(urls, verbose) {
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        let tt = this.validFor(urls, "list"); // Valid connected transports that support "list"
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_rawlist: Can't find transport for urls:" + urls.join(','));
        }
        let errs = [];
        let ttlines = await Promise.all(tt.map(async function([url, t]) {
            try {
                return await t.p_rawlist(url, verbose); // [sig]
            } catch(err) {
                console.log("Could not rawlist ", url, "from", t.name, err.message);
                errs.push(err);
                return [];
            }
        })); // [[sig,sig],[sig,sig]]
        if (errs.length >= tt.length) {
            // All Transports failed (maybe only 1)
            throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); // New error with concatenated messages
        }
        let uniques = {}; // Used to filter duplicates
        return [].concat(...ttlines)
            .filter((x) => (!uniques[x.signature] && (uniques[x.signature] = true)));
    }

    static async p_rawfetch(urls, opts) {
        /*
        Fetch the data for a url; transports act on the data, typically storing it.
        urls: array of urls to retrieve (any are valid)
        opts {
            verbose,
            start,      integer - first byte wanted
            end         integer - last byte wanted (note this is inclusive: start=0, end=1023 is 1024 bytes)
            timeoutMS   integer - max time to wait on transports (IPFS) that support it
        }
        returns: string - arbitrary bytes retrieved.
        throws: TransportError with concatenated error messages if none succeed.
        */
        let verbose = opts.verbose;
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        let tt = this.validFor(urls, "fetch"); //[ [Url,t],[Url,t]]
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_fetch can't find any transport for urls: " + urls);
        }
        //With multiple transports, it should return when the first one returns something.
        let errs = [];
        let failedtransports = []; // Will accumulate any transports that fail before the success
        for (const [url, t] of tt) {
            try {
                let data = await t.p_rawfetch(url, opts); // throws errors if it fails or times out
                //TODO-MULTI-GATEWAY working here
                if (opts.relay && failedtransports.length) {
                    console.log(`Relaying ${data.length} bytes from ${typeof url === "string" ? url : url.href} to ${failedtransports.map(t=>t.name)}`);
                    this._p_rawstore(failedtransports, data, verbose)
                        .then(uu => console.log(`Relayed to ${uu}`)); // Happens async; not waiting, and don't care if it fails
                }
                //END TODO-MULTI-GATEWAY
                return data;
            } catch (err) {
                failedtransports.push(t);
                errs.push(err);
                console.log("Could not retrieve ", url.href, "from", t.name, err.message);
                // Don't throw anything here; loop round for the next one, and only throw if we drop out of the bottom
                //TODO-MULTI-GATEWAY potentially copy from success to failed URLs.
            }
        }
        throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); //Throw err with combined messages if none succeed
    }
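    /* Usage sketch (hypothetical urls; "relay" asks the loop above to copy the data back to any
       transports that failed before the successful one):
        let data = await Transports.p_rawfetch(
            ["ipfs:/ipfs/EXAMPLEHASH", "https://example.org/x"],
            {verbose: true, start: 0, end: 1023, timeoutMS: 5000, relay: true});
    */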

    static async p_rawadd(urls, sig, verbose) {
        /*
        urls: urls of the lists to add to
        sig: Sig to add
        returns: undefined
        throws: TransportError with message being the concatenated messages of the transports if NONE of them succeed.
        */
        //TODO-MULTI-GATEWAY might be smarter about not waiting, but Promise.race is inappropriate as it returns after a failure as well.
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        let tt = this.validFor(urls, "add"); // Valid connected transports that support "add"
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_rawadd: Can't find transport for urls:" + urls.join(','));
        }
        let errs = [];
        await Promise.all(tt.map(async function([u, t]) {
            try {
                await t.p_rawadd(u, sig, verbose); //undefined
                return undefined;
            } catch(err) {
                console.log("Could not rawadd ", u, "to", t.name, err.message);
                errs.push(err);
                return undefined;
            }
        }));
        if (errs.length >= tt.length) {
            // All Transports failed (maybe only 1)
            throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); // New error with concatenated messages
        }
        return undefined;
    }

    static listmonitor(urls, cb) {
        /*
        Add a listmonitor for each transport - note this means that if multiple transports support it, you will get duplicate events back when more than one of them is notifying.
        */
        // Note: can't do p_resolveNames here since this is sync, but the real urls of the resource should be known by this point.
        this.validFor(urls, "listmonitor")
            .map(([u, t]) => t.listmonitor(u, cb));
    }

    static async p_newlisturls(cl, verbose) {
        // Create a new list in any transport layer that supports lists.
        // cl is a CommonList or subclass and can be used by the Transport to get info for choosing the list URL (normally it won't use it)
        // Note that normally the CL will not have been stored yet, so you can't use its urls.
        let uuu = await Promise.all(this.validFor(undefined, "newlisturls")
            .map(([u, t]) => t.p_newlisturls(cl, verbose)) ); // [ [ priv, pub] [ priv, pub] [priv pub] ]
        return [uuu.map(uu=>uu[0]), uuu.map(uu=>uu[1])]; // [[ priv priv priv ] [ pub pub pub ] ]
    }

    // Stream handling ===========================================

    static async p_f_createReadStream(urls, verbose, options) { // Note options is for selecting a stream, not the start/end of a createReadStream call
        /*
        urls: Urls of the stream
        returns: f(opts) => stream returning bytes from opts.start || start of file to opts.end-1 || end of file
        */
        let tt = this.validFor(urls, "createReadStream", options); //[ [Url,t],[Url,t]] // Passing options - most callers will ignore it TODO-STREAM support options in validFor
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_createReadStream can't find any transport for urls: " + urls);
        }
        //With multiple transports, it should return when the first one returns something.
        let errs = [];
        for (const [url, t] of tt) {
            try {
                return await t.p_f_createReadStream(url, verbose);
            } catch (err) {
                errs.push(err);
                console.log("Could not retrieve ", url.href, "from", t.name, err.message);
                // Don't throw anything here; loop round for the next one, and only throw if we drop out of the bottom
                //TODO-MULTI-GATEWAY potentially copy from success to failed URLs.
            }
        }
        throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); //Throw err with combined messages if none succeed
    }
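    /* Usage sketch (hypothetical url; the returned factory is typically handed to a media element
       helper, which calls it with whatever byte range it needs):
        let f = await Transports.p_f_createReadStream(["ipfs:/ipfs/EXAMPLEVIDEOHASH"], verbose);
        let stream = f({start: 0, end: 1024});                  // readable stream over the requested range
        stream.on('data', (chunk) => console.log("got", chunk.length, "bytes"));
    */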


    // KeyValue support ===========================================

    static async p_get(urls, keys, verbose) {
        /*
        Fetch the values for a url and one or more keys; transports act on the data, typically storing it.
        urls: array of urls to retrieve (any are valid)
        keys: array of keys wanted, or a single key
        returns: string - arbitrary bytes retrieved, or dict of key: value
        throws: TransportError with concatenated error messages if none succeed.
        */
        let tt = this.validFor(urls, "get"); //[ [Url,t],[Url,t]]
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_get can't find any transport for urls: " + urls);
        }
        //With multiple transports, it should return when the first one returns something.
        let errs = [];
        for (const [url, t] of tt) {
            try {
                return await t.p_get(url, keys, verbose); //TODO-MULTI-GATEWAY potentially copy from success to failed URLs.
            } catch (err) {
                errs.push(err);
                console.log("Could not retrieve ", url.href, "from", t.name, err.message);
                // Don't throw anything here; loop round for the next one, and only throw if we drop out of the bottom
            }
        }
        throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); //Throw err with combined messages if none succeed
    }

    static async p_set(urls, keyvalues, value, verbose) {
        /* Set a series of key/values or a single value
        keyvalues: Either a dict or a string
        value: if keyvalues is a string, this is the value to set
        throws: TransportError with message being the concatenated messages of the transports if NONE of them succeed.
        */
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        let tt = this.validFor(urls, "set"); //[ [Url,t],[Url,t]]
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_set can't find any transport for urls: " + urls);
        }
        let errs = [];
        let success = false;
        await Promise.all(tt.map(async function([url, t]) {
            try {
                await t.p_set(url, keyvalues, value, verbose);
                success = true; // Any one success counts
            } catch(err) {
                console.log("Could not set at", t.name, err.message);
                errs.push(err);
            }
        }));
        if (!success) {
            throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); // New error with concatenated messages
        }
    }

    static async p_delete(urls, keys, verbose) { //TODO-KEYVALUE-API
        /* Delete a key or a list of keys
        keys: Either a single key string or an array of keys
        throws: TransportError with message being the concatenated messages of the transports if NONE of them succeed.
        */
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        let tt = this.validFor(urls, "set"); //[ [Url,t],[Url,t]]
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_delete can't find any transport for urls: " + urls);
        }
        let errs = [];
        let success = false;
        await Promise.all(tt.map(async function([url, t]) {
            try {
                await t.p_delete(url, keys, verbose);
                success = true; // Any one success counts
            } catch(err) {
                console.log("Could not delete at", t.name, err.message);
                errs.push(err);
            }
        }));
        if (!success) {
            throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); // New error with concatenated messages
        }
    }

    static async p_keys(urls, verbose) {
        /*
        Fetch the list of keys stored at a url.
        urls: array of urls to retrieve (any are valid)
        returns: array of keys
        throws: TransportError with concatenated error messages if none succeed.
        */
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        let tt = this.validFor(urls, "keys"); //[ [Url,t],[Url,t]]
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_keys can't find any transport for urls: " + urls);
        }
        //With multiple transports, it should return when the first one returns something.
        let errs = [];
        for (const [url, t] of tt) {
            try {
                return await t.p_keys(url, verbose); //TODO-MULTI-GATEWAY potentially copy from success to failed URLs.
            } catch (err) {
                errs.push(err);
                console.log("Could not retrieve keys for", url.href, "from", t.name, err.message);
                // Don't throw anything here; loop round for the next one, and only throw if we drop out of the bottom
            }
        }
        throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); //Throw err with combined messages if none succeed
    }

    static async p_getall(urls, verbose) {
        /*
        Fetch all the key/value pairs stored at a url.
        urls: array of urls to retrieve (any are valid)
        returns: dict of key: value //TODO consider issues around returning a data type rather than an array of strings
        throws: TransportError with concatenated error messages if none succeed.
        */
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        let tt = this.validFor(urls, "getall"); //[ [Url,t],[Url,t]]
        if (!tt.length) {
            throw new errors.TransportError("Transports.p_getall can't find any transport for urls: " + urls);
        }
        //With multiple transports, it should return when the first one returns something.
        let errs = [];
        for (const [url, t] of tt) {
            try {
                return await t.p_getall(url, verbose); //TODO-MULTI-GATEWAY potentially copy from success to failed URLs.
            } catch (err) {
                errs.push(err);
                console.log("Could not retrieve all keys for", url.href, "from", t.name, err.message);
                // Don't throw anything here; loop round for the next one, and only throw if we drop out of the bottom
            }
        }
        throw new errors.TransportError(errs.map((err)=>err.message).join(', ')); //Throw err with combined messages if none succeed
    }
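    /* Usage sketch for the multi-transport key-value API (hypothetical urls, e.g. as returned
       by p_newtable below; each call fans out to every connected transport that supports it):
        let urls = ["yjs:/yjs/EXAMPLEPUBKEY/EXAMPLETABLE"];
        await Transports.p_set(urls, "name", "example", verbose);
        let v = await Transports.p_get(urls, "name", verbose);         // first transport to answer wins
        let keys = await Transports.p_keys(urls, verbose);
        let all = await Transports.p_getall(urls, verbose);
        await Transports.p_delete(urls, ["name"], verbose);
    */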

    static async p_newdatabase(pubkey, verbose) {
        /*
        Create a new database in any transport layer that supports databases (key value pairs).
        pubkey: CommonList, KeyPair, or exported public key
        resolves to: { privateurls: [...], publicurls: [...] }
        */
        let uuu = await Promise.all(this.validFor(undefined, "newdatabase")
            .map(([u, t]) => t.p_newdatabase(pubkey, verbose)) ); // [ { privateurl, publicurl} { privateurl, publicurl} { privateurl, publicurl} ]
        return { privateurls: uuu.map(uu=>uu.privateurl), publicurls: uuu.map(uu=>uu.publicurl) }; // { privateurls: [], publicurls: [] }
    }

    static async p_newtable(pubkey, table, verbose) {
        /*
        Create a new table in any transport layer that supports the function (key value pairs).
        pubkey: CommonList, KeyPair, or exported public key
        resolves to: { privateurls: [...], publicurls: [...] }
        */
        let uuu = await Promise.all(this.validFor(undefined, "newtable")
            .map(([u, t]) => t.p_newtable(pubkey, table, verbose)) ); // [ [ priv, pub] [ priv, pub] [priv pub] ]
        return { privateurls: uuu.map(uu=>uu.privateurl), publicurls: uuu.map(uu=>uu.publicurl)}; // {privateurls: [ priv priv priv ], publicurls: [ pub pub pub ] }
    }

    static async p_connection(urls, verbose) {
        /*
        Do any asynchronous connection-opening work prior to potentially synchronous methods (like monitor)
        */
        urls = await this.p_resolveNames(urls); // If naming is loaded, resolve any names into urls
        await Promise.all(
            this.validFor(urls, "connection")
                .map(([u, t]) => t.p_connection(u, verbose)));
    }

    static monitor(urls, cb, verbose) {
        /*
        Add a monitor for each transport - note this means that if multiple transports support it, you will get duplicate events back when more than one of them is notifying.
        Stack: KVT()|KVT.p_new => KVT.monitor => (a: Transports.monitor => YJS.monitor)(b: dispatchEvent)
        */
        // Can't call p_resolveNames here; it's async and this method is sync.
        this.validFor(urls, "monitor")
            .map(([u, t]) => t.monitor(u, cb, verbose));
    }

    static addtransport(t) {
        /*
        Add a transport instance to _transports.
        */
        Transports._transports.push(t);
    }

    // Setup Transports - setup0 is called once and should return quickly; p_setup1 and p_setup2 are asynchronous and p_setup2 relies on p_setup1 having resolved.

    static setup0(transports, options, verbose) {
        /*
        Setup Transports for a range of classes.
        transports is a list of abbreviations, e.g. ["HTTP", "IPFS"] or ["LOCAL"]
        Handles "LOCAL" specially, turning it into an HTTP transport pointed at a local server (for debugging)

        returns array of transport instances
        */
        let localoptions = {http: {urlbase: "http://localhost:4244"}};
        return transports.map((tabbrev) => {
            let transportclass;
            if (tabbrev === "LOCAL") {
                transportclass = this._transportclasses["HTTP"];
            } else {
                transportclass = this._transportclasses[tabbrev];
            }
            if (!transportclass) {
                let tt = Object.keys(this._transportclasses);
                console.error(`Requested ${tabbrev} but ${tt.length ? tt : "No"} transports have been loaded`);
                return undefined;
            } else {
                return transportclass.setup0(tabbrev === "LOCAL" ? localoptions : options, verbose);
            }
        }).filter(f => !!f); // Trim out any undefined
    }

    static async p_setup1(verbose) {
        /* Second stage of setup, connect if possible */
        // Does all setup1a before setup1b since 1b can rely on ones with 1a, e.g. YJS relies on IPFS
        await Promise.all(this._transports.map((t) => t.p_setup1(verbose)));
    }

    static async p_setup2(verbose) {
        /* Third stage of setup, connect if possible */
        // Runs after p_setup1 has resolved, since some transports (e.g. YJS) rely on others (e.g. IPFS) being set up first
        await Promise.all(this._transports.map((t) => t.p_setup2(verbose)));
    }
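    /* Usage sketch of the three-stage setup (assumes the HTTP and IPFS transport classes have been
       require'd so they have registered themselves in Transports._transportclasses; the options
       object shape is transport-specific and shown here only as a placeholder):
        Transports.setup0(["HTTP", "IPFS"], {}, verbose);   // create instances, returns quickly
        await Transports.p_setup1(verbose);                 // connect where possible
        await Transports.p_setup2(verbose);                 // later stage, may depend on p_setup1
        console.log("Connected:", Transports.connectedNames());
    */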

    static async test(verbose) {
        if (verbose) {console.log("Transports.test")}
        try {
            /* Could convert this - copied from YJS to do a test at the "Transports" level
            let testurl = "yjs:/yjs/THISATEST";  // Just a predictable name we can work with
            let res = await transport.p_rawlist(testurl, verbose);
            let listlen = res.length;   // Holds length of list run intermediate
            if (verbose) console.log("rawlist returned ", ...utils.consolearr(res));
            transport.listmonitor(testurl, (obj) => console.log("Monitored", obj), verbose);
            let sig = new Dweb.Signature({urls: ["123"], date: new Date(Date.now()), signature: "Joe Smith", signedby: [testurl]}, verbose);
            await transport.p_rawadd(testurl, sig, verbose);
            if (verbose) console.log("TransportIPFS.p_rawadd returned ");
            res = await transport.p_rawlist(testurl, verbose);
            if (verbose) console.log("rawlist returned ", ...utils.consolearr(res)); // Note not showing return
            await delay(500);
            res = await transport.p_rawlist(testurl, verbose);
            console.assert(res.length === listlen + 1, "Should have added one item");
            */
            //console.log("TransportYJS test complete");
            /* TODO-KEYVALUE re-enable these tests, but catch http examples
            let db = await this.p_newdatabase("TESTNOTREALLYAKEY", verbose);    // { privateurls, publicurls }
            console.assert(db.privateurls[0] === "yjs:/yjs/TESTNOTREALLYAKEY");
            let table = await this.p_newtable("TESTNOTREALLYAKEY","TESTTABLE", verbose); // { privateurls, publicurls }
            let mapurls = table.publicurls;
            console.assert(mapurls[0] === "yjs:/yjs/TESTNOTREALLYAKEY/TESTTABLE");
            await this.p_set(mapurls, "testkey", "testvalue", verbose);
            let res = await this.p_get(mapurls, "testkey", verbose);
            console.assert(res === "testvalue");
            await this.p_set(mapurls, "testkey2", {foo: "bar"}, verbose);
            res = await this.p_get(mapurls, "testkey2", verbose);
            console.assert(res.foo === "bar");
            await this.p_set(mapurls, "testkey3", [1,2,3], verbose);
            res = await this.p_get(mapurls, "testkey3", verbose);
            console.assert(res[1] === 2);
            res = await this.p_keys(mapurls);
            console.assert(res.length === 3 && res.includes("testkey3"));
            res = await this.p_getall(mapurls, verbose);
            console.assert(res.testkey2.foo === "bar");
            */

        } catch(err) {
            console.log("Exception thrown in Transports.test:", err.message);
            throw err;
        }
    }

}
Transports._transports = [];    // Array of transport instances connected
Transports.namingcb = undefined;
Transports._transportclasses = {}; // Pointers to classes whose code is loaded.

exports = module.exports = Transports;
118 src/utils.js Normal file
@@ -0,0 +1,118 @@

const errors = require('./Errors'); // Needed for the CodingError thrown by stringfrom below

utils = {}; //utility functions

// ==== OBJECT ORIENTED JAVASCRIPT ===============
// This is a general purpose library of functions; the commented-out ones come from uses of this library in other places.

// Utility function to print an array of items, showing just the count and the last item.
utils.consolearr = (arr) => ((arr && arr.length >0) ? [arr.length+" items inc:", arr[arr.length-1]] : arr );

/*
//Return true if two shortish arrays a and b intersect, or if b is not an array, whether b is in a
//Note better solutions exist for longer arrays
//This is intended for comparing two sets of probably equal, but possibly just intersecting URLs
utils.intersects = (a,b) => (Array.isArray(b) ? a.some(x => b.includes(x)) : a.includes(b));


utils.mergeTypedArraysUnsafe = function(a, b) { // Take care of inability to concatenate typed arrays such as Uint8
    //http://stackoverflow.com/questions/14071463/how-can-i-merge-typedarrays-in-javascript also has a safe version
    const c = new a.constructor(a.length + b.length);
    c.set(a);
    c.set(b, a.length);
    return c;
};
*/
/*
//TODO-STREAM, use this code and return stream from p_rawfetch that this can be applied to
utils.p_streamToBuffer = function(stream, verbose) {
    // Return a promise that resolves to a Buffer built from the stream.
    // Note this comes from one example ...
    // There is another, very different, example at https://github.com/ipfs/js-ipfs/blob/master/examples/exchange-files-in-browser/public/js/app.js#L102
    return new Promise((resolve, reject) => {
        try {
            let chunks = [];
            stream
                .on('data', (chunk) => { if (verbose) console.log('on', chunk.length); chunks.push(chunk); })
                .once('end', () => { if (verbose) console.log('end chunks', chunks.length); resolve(Buffer.concat(chunks)); })
                .on('error', (err) => { // Note error behavior untested currently
                    console.log("Error event in p_streamToBuffer",err);
                    reject(new errors.TransportError('Error in stream'))
                });
            stream.resume();
        } catch (err) {
            console.log("Error thrown in p_streamToBuffer", err);
            reject(err);
        }
    })
};
*/
/*
//TODO-STREAM, use this code and return stream from p_rawfetch that this can be applied to
//TODO-STREAM debugging in streamToBuffer above, copy to here when fixed above
utils.p_streamToBlob = function(stream, mimeType, verbose) {
    // Return a promise that resolves to a Blob built from the stream - currently untested as we are using Buffer
    return new Promise((resolve, reject) => {
        try {
            let chunks = [];
            stream
                .on('data', (chunk)=>chunks.push(chunk))
                .once('end', () =>
                    resolve(mimeType
                        ? new Blob(chunks, { type: mimeType })
                        : new Blob(chunks)))
                .on('error', (err) => { // Note error behavior untested currently
                    console.log("Error event in p_streamToBlob",err);
                    reject(new errors.TransportError('Error in stream'))
                });
            stream.resume();
        } catch(err) {
            console.log("Error thrown in p_streamToBlob",err);
            reject(err);
        }
    })
};
*/

utils.stringfrom = function(foo, hints={}) {
    try {
        // Generic way to turn anything into a string
        if (foo.constructor.name === "Url") // Can't use instanceof for some bizarre reason
            return foo.href;
        if (typeof foo === "string")
            return foo;
        return foo.toString(); // Last chance: try and convert to a string based on a method of the object (could check for its existence)
    } catch (err) {
        throw new errors.CodingError(`Unable to turn ${foo} into a string ${err.message}`)
    }
};
/*
utils.objectfrom = function(data, hints={}) {
    // Generic way to turn something into an object (typically expecting a string, or a buffer)
    return (typeof data === "string" || data instanceof Buffer) ? JSON.parse(data) : data;
}

utils.keyFilter = function(dic, keys) {
    // Utility to return a new dic containing each of the keys (equivalent to python { dic[k] for k in keys })
    return keys.reduce(function(prev, key) { prev[key] = dic[key]; return prev; }, {});
}
*/
utils.p_timeout = function(promise, ms, errorstr) {
    /* Timeout and reject if the promise does not complete within a certain period
    promise:  A promise we want to watch to completion
    ms:       Time in milliseconds to allow it to run
    errorstr: Error message in the rejection error
    */
    let timer = null;

    return Promise.race([
        new Promise((resolve, reject) => {
            timer = setTimeout(reject, ms, errorstr || `Timed out in ${ms}ms`);
        }),
        promise.then((value) => {
            clearTimeout(timer);
            return value;
        })
    ]);
}
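/* Usage sketch (hypothetical promise; note the rejection value is the errorstr string rather than an
   Error object, so callers that expect err.message may want to wrap it):
    let slow = new Promise((resolve) => setTimeout(resolve, 10000, "done"));
    try {
        let result = await utils.p_timeout(slow, 5000, "Timed out after 5s");
    } catch (err) {
        console.log("failed or timed out:", err.message || err);   // -> "Timed out after 5s"
    }
*/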

exports = module.exports = utils;