Besides distributing binary files, the build inputs of a derivation could also be cached in IPFS.
That would speed up CI quite a bit, since the cache could move "closer" to the build instance (e.g. via an IPFS gateway inside the local network).
Current Status:
@knupfer created fetchIPFS: NixOS/nix#859 (comment)
@CMCDragonkai is doing something similar @ https://github.com/MatrixAI/Forge-Package-Archiving
NAR replacement discussion: NixOS/nix#1006
There are different ways to approach this:
- Put build inputs into IPFS and maintain a `sha256 -> IPFS` mapping inside nixpkgs (a huge set).
  Migration: easy; a script can build all `/nix/store` paths and add them to IPFS.
  Support: every fetch derivation needs to be patched to look up the IPFS hash first and fetch it from the locally running daemon or a gateway. If no hash is found, the derivation falls back to its normal operation (curl, git, svn, etc.).
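The lookup-then-fall-back behaviour described above can be sketched as follows. This is only an illustration: the mapping dictionary, the example CID, and the gateway addresses are hypothetical placeholders, not an actual nixpkgs API.

```python
# Hypothetical stand-in for the proposed sha256 -> IPFS mapping in nixpkgs.
SHA256_TO_IPFS = {
    "0f5a3examplehash": "QmExampleCid",  # hypothetical entry
}

def fetch_urls(sha256, upstream_url, gateway="http://127.0.0.1:8080"):
    """Return candidate URLs in priority order: local IPFS daemon/gateway
    first, a public gateway second, and the normal fetcher (curl, git,
    svn, ...) as the last resort."""
    cid = SHA256_TO_IPFS.get(sha256)
    if cid is None:
        return [upstream_url]            # no mapping entry: normal operation
    return [
        f"{gateway}/ipfs/{cid}",         # IPFS gateway inside the local network
        f"https://ipfs.io/ipfs/{cid}",   # public gateway fallback
        upstream_url,                    # last resort: upstream source
    ]
```

The important property is that a missing mapping entry degrades gracefully to today's behaviour, so the mapping can grow incrementally.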
- Append the IPFS hash to each package (@knupfer's idea).
  Migration: takes a lot of effort (every derivation must be touched).
  Support: already implemented, but the hash must be added manually to each package.
Should we produce an intermediate cache format (tar? -> can be chunked efficiently in IPFS) that is used to cache each src? It could then be used within each fetch derivation. As a big plus, the build inputs would be deduplicated and share blocks with the official package.
However, it would be better to reuse an already archived src (tar.*, .zip, etc.) instead of archiving it again -> we need some form of manifest -> we need IPLD.
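To illustrate why a chunkable archive format deduplicates well, here is a minimal sketch assuming fixed-size chunking at IPFS's default block size of 256 KiB (IPFS also offers content-defined chunkers, which this toy example ignores): two archives that share a common prefix end up sharing blocks.

```python
import hashlib

def chunk_hashes(data: bytes, size: int = 256 * 1024):
    """Split data into fixed-size chunks and hash each one; identical
    chunks across different archives map to the same content-addressed
    block, so they are stored only once."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

# Two archives with the same leading 256 KiB share their first block:
archive_a = b"x" * (256 * 1024) + b"tail-a"
archive_b = b"x" * (256 * 1024) + b"tail-b"
shared = set(chunk_hashes(archive_a)) & set(chunk_hashes(archive_b))
```

Compressed formats (`tar.gz`, `.zip`) defeat this, since a one-byte change can alter the entire compressed stream, which is one argument for a manifest/IPLD layer over the raw contents instead of re-archiving.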