This project provides tools to migrate IPFS content to Storacha with optional Pinata pinning, tracking progress in a Redis database.
- Node.js 16+ installed
- Redis server running locally (for initial import)
- IPFS daemon running
- Storacha private key and proof (required)
- Pinata API key and secret (optional, for dual storage)
- (Optional) Slack workspace and bot token for notifications
The system uses Storacha as the primary storage provider and can optionally pin to Pinata:
- Files are uploaded to Storacha first
- After successful upload, if Pinata credentials are provided, the CID is automatically pinned to Pinata without waiting for the pinning process to complete
- This provides redundancy across both storage providers with minimal processing time
To enable dual storage, make sure PINATA_JWT is set in your environment.
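The "upload first, then pin without waiting" flow described above can be sketched as follows. The function names and signatures here are illustrative stand-ins, not the project's actual internals:

```typescript
// Stand-in for the Storacha upload; resolves once the upload succeeds.
async function uploadToStoracha(cid: string): Promise<string> {
  // Real code would export a CAR file and upload it via the w3up-client library.
  return cid;
}

// Stand-in for a Pinata pin-by-CID request.
async function pinToPinata(cid: string): Promise<void> {
  // Real code would call Pinata's API using the configured JWT.
}

// Upload to Storacha (awaited), then kick off the Pinata pin without awaiting it,
// so pinning never blocks the migration loop.
async function processCid(cid: string, pinataJwt?: string): Promise<string> {
  const uploaded = await uploadToStoracha(cid);
  if (pinataJwt) {
    // Fire-and-forget: errors are logged, not propagated.
    pinToPinata(uploaded).catch((err) =>
      console.error(`Pinata pin failed for ${uploaded}:`, err)
    );
  }
  return uploaded;
}
```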
- Install dependencies:

```bash
npm install
```

- Make sure your local Redis server is running:

```bash
redis-server --daemonize yes
```

- Configure environment variables:
Create a .env file in the project root with the following variables:
Ensure your cids.csv file is in the project root with the format:

```
article_version_id,article_id,media_hash,data_hash
```
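For illustration, a row in this format can be split into its four fields like so (a minimal sketch; the field and function names here are assumptions, not the import script's actual code):

```typescript
// Hypothetical representation of one cids.csv row.
interface CidRow {
  articleVersionId: string;
  articleId: string;
  mediaHash: string;
  dataHash: string;
}

// Parse a single comma-separated line into a CidRow.
function parseRow(line: string): CidRow {
  const [articleVersionId, articleId, mediaHash, dataHash] = line
    .trim()
    .split(",");
  return { articleVersionId, articleId, mediaHash, dataHash };
}
```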
```bash
npm run import
```

(Optional) Dump and import to another Redis instance:

```bash
# Dump
# https://github.com/upstash/upstash-redis-dump
upstash-redis-dump -db 0 -host localhost -port 6379 > redis.dump

# Import
redis-cli -u redis://localhost:6379 --pipe < redis.dump
```

```bash
npm run start
```

The script uploads content to Storacha. Configure your Storacha credentials in the .env file:
```
STORACHA_RATE_LIMIT_PER_MINUTE=250
STORACHA_PRIVATE_KEY=your_storacha_private_key_here
STORACHA_PROOF=your_storacha_proof_here
```
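One simple way a per-minute cap like `STORACHA_RATE_LIMIT_PER_MINUTE` can be honored is by spacing requests evenly across the minute. This is a sketch of that pacing strategy, not necessarily the project's actual implementation:

```typescript
// Evenly spaced requests: at N per minute, wait 60000/N ms between starts.
function delayBetweenRequestsMs(ratePerMinute: number): number {
  return 60_000 / ratePerMinute;
}

// Process items sequentially, pausing between them to stay under the cap.
async function paced<T>(
  items: T[],
  fn: (item: T) => Promise<void>,
  ratePerMinute = 250
): Promise<void> {
  const gap = delayBetweenRequestsMs(ratePerMinute);
  for (const item of items) {
    await fn(item);
    await new Promise((resolve) => setTimeout(resolve, gap));
  }
}
```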
To get your private key and proof:

- In your command line where `w3cli` is configured with the Space you want to use:

```bash
# Generate a new key and get the private key
w3 key create
# The output will include a private key starting with "Mg..."
# Store this as STORACHA_PRIVATE_KEY

# Create a delegation from your w3cli agent to the agent you generated above
# First make sure you're using the right space with: w3 space use <your_space_did>
w3 delegation create <did_from_key_create_command> --base64
# Store the output as STORACHA_PROOF
```

The script will:
- Export the CID to a CAR file using `ipfs dag export`
- Upload the CAR file to Storacha using the w3up-client library
For dual storage, you can configure Pinata credentials in the .env file:
```
PINATA_JWT=your_pinata_jwt_here
PINATA_GROUP_ID=your_pinata_group_id_here
PINATA_GATEWAY_URL=your_pinata_gateway_url_here
```
When Pinata credentials are provided, after a successful upload to Storacha, the CID will be pinned to Pinata without waiting for the pinning process to complete.
This project is written in TypeScript. The source code is in the src directory and compiled JavaScript is output to the dist directory.
To build the project:
```bash
npm run build
```

The script provides progress updates during processing. You can also check the Redis sets:

- `pending_cids`: CIDs waiting to be processed
- `success_cids`: Successfully processed CIDs
- `failure_cids`: Failed CIDs
- `skipped_cids`: CIDs that were skipped (not found locally)
For each CID, detailed information is stored in a hash at `cid:{cid}`.
The application can send progress updates and notifications to a Slack channel. To enable this feature:
- Create a Slack app in your workspace and generate a bot token with the following permissions: `chat:write`, `chat:write.public`

- Add the following environment variables to your `.env` file:

```
SLACK_TOKEN=xoxb-your-slack-token-here
SLACK_CHANNEL_ID=C12345678
SLACK_ENABLED=true
```
When enabled, the application will send notifications for:
- Process start and completion
- Progress updates at regular intervals
- Error notifications
- Final results summary
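As an illustration, a progress update of this kind might be formatted like so. This helper is hypothetical; the exact wording the application posts to Slack is not specified here:

```typescript
// Build a human-readable progress line from processed/total counts.
function progressMessage(done: number, total: number): string {
  const pct = ((done / total) * 100).toFixed(1);
  return `Migration progress: ${done}/${total} CIDs (${pct}%)`;
}
```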
- If the upload script gets interrupted, it can be safely restarted as it tracks progress in Redis
- CIDs that were successfully processed won't be processed again
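The restart-safety above boils down to a membership check before processing. A minimal sketch, with an in-memory `Set` standing in for the Redis `success_cids` set (the real script talks to Redis instead):

```typescript
// Stand-in for the Redis success_cids set.
const successCids = new Set<string>();

// Skip CIDs already marked successful; otherwise process and record them.
async function processOnce(
  cid: string,
  upload: (cid: string) => Promise<void>
): Promise<"processed" | "skipped"> {
  if (successCids.has(cid)) return "skipped"; // done on a previous run
  await upload(cid);
  successCids.add(cid);
  return "processed";
}
```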
- Environment variables: batch size, IPFS paths, etc.
- Install dependencies: npm packages, redis, ipfs, w3cli
- Configure Storacha credentials locally
- Backup Redis database to S3 after migration
- Restore IPFS archived snapshots and create volumes
- Mount all volumes to EC2 instance on different paths
- Delete all volumes and snapshots after migration