Rather than relying on a third-party score like npms.io, we should create our own transparent score.
This means we need to do the following:
- Research which metrics could be used in the scoring algorithm
- Trial some of these out against a random set of packages to see what scores they produce
- Ensure whichever metrics we settle on are publicly available
Importantly, we need to be able to explain why a package was scored the way it was. We can't really do that if we use a third-party scoring mechanism, so we should build our own.
This will take some discussion, so nobody needs to try to implement this yet. For now, we just need to research which metrics may be useful.
If you have any suggestions, reply here and we will maintain a list in this post.
If anyone wants to pick up the trial work (creating the algorithm and trying it out against real packages), let us know here.
Summary
- All metrics should be publicly available
- Packages should show their total score
- A drilldown page or panel should show each individual underlying score and what that means
- The e18e data project may be useful here as we will have enriched registry data with things like deep license types, ESM/CJS, etc
- Proposed algorithms need to be trialled out against a random set of packages
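To make the "total score plus drilldown" idea from the summary concrete, here is a minimal sketch of what a transparent scoring function could look like. The metric names, weights, and explanations are placeholders, not agreed-upon values; the point is that every underlying score and its meaning stays visible alongside the total.

```typescript
// Hypothetical sketch of a transparent package score.
// Metric names and weights below are illustrative only.

interface MetricResult {
  name: string;
  value: number;       // normalized to 0..1
  weight: number;      // relative importance in the total
  explanation: string; // shown on the drilldown page/panel
}

function scorePackage(metrics: MetricResult[]): {
  total: number;
  breakdown: MetricResult[];
} {
  const totalWeight = metrics.reduce((sum, m) => sum + m.weight, 0);
  // Weighted average keeps the total in 0..1 regardless of how
  // many metrics we eventually settle on.
  const total =
    metrics.reduce((sum, m) => sum + m.value * m.weight, 0) / totalWeight;
  return { total, breakdown: metrics };
}

// Example with made-up metrics for a single package:
const result = scorePackage([
  { name: "license", value: 1, weight: 2, explanation: "Permissive license (MIT)" },
  { name: "esm", value: 0.5, weight: 1, explanation: "Dual ESM/CJS build" },
]);
// result.total is the single displayed score; result.breakdown
// carries everything the drilldown needs to explain it.
```

Keeping the breakdown as structured data (rather than only computing the total) is what lets the UI show each individual underlying score and what it means, which is the whole point of owning the algorithm.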