A high-performance Solana blockchain indexer that streams real-time blockchain data over yellowstone-grpc into a PostgreSQL database, using Redis as a pub/sub buffer for efficient data processing and storage.
This project implements a multi-crate Rust workspace that processes Solana blockchain data through a pipeline:
```
Solana Network → Geyser Adapter → Redis → Database Consumer → PostgreSQL
```
- `core`: Shared data models and Solana gRPC client integration
- `geyser-adapter`: Connects to Solana gRPC streams and publishes updates to Redis
- `redis-adapter`: Handles Redis pub/sub operations for data distribution
- `db`: Database operations and consumer logic for processing Redis messages
- `config`: Centralized configuration management
- Data Ingestion: `geyser-adapter` connects to Solana gRPC streams and receives real-time updates
- Data Processing: Incoming Solana data is filtered and converted to internal models
- Data Distribution: Processed data is published to Redis channels via `redis-adapter` for asynchronous consumption
- Data Storage: The `db` consumer reads from Redis channels and batches transactions into PostgreSQL
- Data Persistence: Structured blockchain data is stored with proper indexing for efficient queries
- Rust 1.70+ and Cargo
- Docker and Docker Compose
- Diesel CLI for database migrations
Create a `.env` file in the project root:

```env
# Values match the docker-compose.yml config
DATABASE_URL=postgres://solana_user:secure_password@localhost:5432/indexer_db
REDIS_URL=redis://localhost:6379
RPC_URL=""
```

```shell
# Start PostgreSQL and Redis containers
docker compose up -d

# Run database migrations
cd crates/db
diesel migration run
```

```shell
# From the project root, start the geyser adapter
cd crates/geyser-adapter
cargo run
```

```shell
# From the project root, start the database consumer
cd crates/db
cargo run
```

- Redis channels will show incoming data
- Logs will display processing status and database insertions
- Database tables will be created and populated as data flows through the system

Configure what data to index in `crates/geyser-adapter/filters.json`:
```jsonc
// Example filter to index all transactions involving the Vote program
{
  "accounts": [
    {
      "accounts": ["Vote111111111111111111111111111111111111111"],
      "owners": ["TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"],
      "filters": []
    }
  ],
  "accounts_memcmp": [],
  "accounts_datasize": null,
  "include_slots": false,
  "include_blocks": false,
  "blocks_include_transactions": false,
  "blocks_include_accounts": false,
  "blocks_include_entries": false,
  "transactions": {
    "vote": false,
    "failed": true,
    "account_include": ["MemoSq4gqABAXKb96qnH8TysNcWxMyWCqXgDLGmfcHr", "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"],
    "account_exclude": [],
    "account_required": []
  }
  // More filter options...
  // like blocks, accounts and slots
}
```

The system creates three main tables:

- `transactions`: Transaction details and metadata
- `accounts`: Account state changes
- `slots`: Slot information and status

Indexes are created on frequently queried fields for performance.
```
sol-indexer/
├── crates/
│   ├── core/            # Shared models and Solana integration
│   ├── config/          # Configuration management
│   ├── geyser-adapter/  # Solana gRPC client and data publisher
│   ├── redis-adapter/   # Redis pub/sub implementation
│   └── db/              # Database operations and consumer
├── docker-compose.yml   # Infrastructure configuration
└── Cargo.toml           # Workspace configuration
```
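The root `Cargo.toml` ties these crates together. A sketch of what the workspace manifest likely looks like, with the member list inferred from the layout above:

```toml
# Workspace manifest sketch (illustrative; see the actual Cargo.toml)
[workspace]
resolver = "2"
members = [
    "crates/core",
    "crates/config",
    "crates/geyser-adapter",
    "crates/redis-adapter",
    "crates/db",
]
```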
```shell
# Monitor Redis channels
docker exec -it sol_indexer_redis redis-cli monitor

# Check a specific channel
docker exec -it sol_indexer_redis redis-cli subscribe transactions
```

```shell
# Connect to PostgreSQL
docker exec -it sol_indexer_db psql -U solana_user -d indexer_db
```

```sql
-- Check transaction count
SELECT COUNT(*) FROM transactions;

-- View recent transactions
SELECT id, slot, signature, fee FROM transactions ORDER BY id DESC LIMIT 10;
```

```shell
# View geyser adapter logs
cargo run -p geyser-adapter 2>&1 | tee geyser.log

# View database consumer logs
cargo run -p db 2>&1 | tee db.log
```

- Batched Inserts: Transactions are batched (100 per insert) for optimal database performance
- Connection Pooling: Uses an r2d2 connection pool for efficient database connections
- Asynchronous Processing: Non-blocking Redis pub/sub for high-throughput data handling
- Configurable Filtering: Selective data indexing to reduce storage and processing overhead
- PostgreSQL Connection Failed
  - Ensure Docker containers are running: `docker compose ps`
  - Check that the `.env` file has the correct `DATABASE_URL`
- Redis Connection Failed
  - Verify the Redis container is running: `docker exec -it sol_indexer_redis redis-cli ping`
  - Check `REDIS_URL` in the `.env` file
- Compilation Errors
  - Install the PostgreSQL client: `brew install postgresql` (macOS)
  - Ensure the Rust toolchain is up to date: `rustup update`
- Adjust the batch size in `crates/db/src/store.rs` to match your database performance
- Modify Redis channel buffer sizes based on memory constraints
- Tune PostgreSQL connection pool settings in `crates/db/src/store.rs`
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License.
