Currently the README states that this verifier is limited to IPFS objects with no links. This is because the serialized protobuf bytes change when an object defines a links array in its protobuf message.
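To make the byte-layout issue concrete, here is a minimal Python sketch of why links shift the serialization. The field numbers follow the dag-pb merkledag schema (`Data = 1`, `Links = 2`, with links serialized before data); the hand-rolled encoder and the fake link payload are purely illustrative assumptions, not the verifier's actual parser:

```python
import hashlib

def _varint(n):
    # Minimal unsigned varint encoder (protobuf wire format).
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def pbnode(data, links=()):
    # Hand-rolled dag-pb style encoding: repeated Links (field 2),
    # then Data (field 1), both length-delimited (wire type 2).
    buf = bytearray()
    for link in links:
        buf += b"\x12" + _varint(len(link)) + link
    buf += b"\x0a" + _varint(len(data)) + data
    return bytes(buf)

leaf = pbnode(b"hello")                                  # no links
fake_link = b"\x0a\x22" + b"\x00" * 34                   # placeholder PBLink bytes
linked = pbnode(b"hello", links=[fake_link])             # same data, one link

# Once a links array is present, the encoded bytes no longer start at the
# same offset, so a verifier that assumes the link-free layout breaks:
print(leaf.hex())
print(hashlib.sha256(leaf).hexdigest() == hashlib.sha256(linked).hexdigest())
```

The point is only that a link prepends additional length-delimited fields to the message, so a fixed byte layout for link-free objects cannot be reused as-is.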
We may be stuck when it comes to verifying IPFS objects with links: the IPFS chunk size is currently hard-coded to 256 KiB, so any file large enough to have links is already sizable, and the input data would add significant overhead to the SHA-256 hashing. I am interested in finding the maximum input data size this verifier can handle, perhaps at peak optimization, doing only SHA-256 hashing and comparison without conversions.
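As a starting point for probing that limit, a rough native-hashing baseline can be measured like this (a sketch only: it times plain `hashlib` SHA-256 over zero-filled buffers, so it ignores whatever conversion or verification overhead the verifier itself adds):

```python
import hashlib
import time

CHUNK_SIZE = 256 * 1024  # current hard-coded IPFS chunk size, 256 KiB

def measure(size):
    # Hash a zero-filled buffer of `size` bytes and time the digest alone.
    data = b"\x00" * size
    start = time.perf_counter()
    digest = hashlib.sha256(data).hexdigest()
    return digest, time.perf_counter() - start

for size in (32 * 1024, CHUNK_SIZE, 4 * CHUNK_SIZE):
    digest, elapsed = measure(size)
    print(f"{size >> 10:5d} KiB -> sha256 in {elapsed * 1e3:.3f} ms")
```

Timings from a sketch like this would only bound the best case; the real ceiling depends on how the verifier feeds input into the hash.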
However, if we reduced the fixed IPFS chunk size and found the hard limit on input data, we could perhaps handle the protobuf definition for an IPFS object with links.
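The chunk-size knob itself is simple; a minimal sketch of splitting a blob at a configurable (smaller-than-256 KiB) chunk size, assuming a plain fixed-size chunker rather than IPFS's actual chunking pipeline:

```python
def chunk(data, chunk_size):
    # Split a byte string into fixed-size chunks; the last chunk may be short.
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

blob = bytes(300)
print([len(c) for c in chunk(blob, 128)])  # -> [128, 128, 44]
```

With a smaller chunk size, each leaf's input to the hash stays under whatever hard limit the verifier turns out to have, at the cost of more links per object.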
Proposal:
These are just quick thoughts, but I would love to take a stab at a PR on this if we hammer it out a bit more.