
[Infra] Fix 429 rate limits — batch transport, memory cache, more endpoints #399

@realproject7

Description

Summary

PlotLink gets constant 429 rate limit errors because it's missing three techniques that mintpad and mint.club-v2-web use to prevent them. This is NOT an architecture change — it's 3 targeted fixes.

Root Cause Analysis

| Technique | mintpad | mint.club | PlotLink (current) |
|---|---|---|---|
| `batch: true` on transport | Yes | Yes | No |
| In-memory cache with request dedup | Yes (singleton) | Yes (Zustand) | No |
| RPC endpoints (Base) | 14 | 10 | 5 |

Fix 1: Add `batch: true` + reduce timeout on transport

Files: `lib/rpc.ts`

The biggest win. With `batch: true`, viem automatically combines multiple pending `readContract()` calls into a single JSON-RPC batch HTTP request. If TradingWidget makes 3 reads, they become 1 HTTP request.

Also reduce timeout from 5s to 2s — fail fast and rotate to next endpoint sooner (both mintpad and mint.club use 2s).

Apply to BOTH `browserClient` and `publicClient` transports:

```typescript
// browserClient transport (CORS endpoints)
http(url, {
  timeout: 2_000, // was 5_000
  retryCount: 0,
  batch: true, // NEW — combines multiple reads into 1 HTTP request
  fetchOptions: { mode: "cors", credentials: "omit" },
})

// publicClient transport (server endpoints)
http(url, {
  timeout: 2_000, // was 10_000
  retryCount: 0, // was 1
  batch: true, // NEW
})
```
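To make the mechanism concrete, here is a minimal standalone sketch of the idea behind `batch: true` (an illustration only, not viem's actual implementation — the real transport also handles JSON-RPC ids, batch size limits, and a configurable wait window): calls issued in the same tick are collected and flushed as a single array, i.e. one HTTP request instead of three.

```typescript
// Illustrative micro-batcher: requests made in the same tick are queued
// and flushed together in one microtask. `send` stands in for the single
// batched HTTP POST that viem would make.
type Call = { method: string; params: unknown[] };

function makeBatcher(send: (batch: Call[]) => Promise<unknown[]>) {
  let pending: { call: Call; resolve: (v: unknown) => void }[] = [];
  let scheduled = false;

  return (call: Call): Promise<unknown> =>
    new Promise((resolve) => {
      pending.push({ call, resolve });
      if (!scheduled) {
        scheduled = true;
        // Flush once, after the current tick has queued all its calls
        queueMicrotask(async () => {
          const batch = pending;
          pending = [];
          scheduled = false;
          const results = await send(batch.map((p) => p.call));
          batch.forEach((p, i) => p.resolve(results[i]));
        });
      }
    });
}
```

With this, three `readContract`-style calls fired in one render pass resolve from a single `send` invocation, which is exactly the reduction in HTTP traffic the fix relies on.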

Fix 2: Add in-memory price cache with request deduplication

New file: `lib/cache.ts`

Port mintpad's singleton cache pattern. Two key features:

  1. TTL-based caching — cached values returned for 60 seconds without hitting RPC
  2. In-flight deduplication — if the same key is already being fetched, await that promise instead of firing a duplicate RPC call

```typescript
class MemoryCache {
  private cache = new Map<string, { value: unknown; expires: number }>();
  private inFlight = new Map<string, Promise<unknown>>();

  async get<T>(key: string, fetcher: () => Promise<T>, ttlSeconds = 60): Promise<T> {
    // Return cached value if still fresh
    const cached = this.cache.get(key);
    if (cached && Date.now() < cached.expires) return cached.value as T;

    // Dedup: if the same key is already being fetched, await that promise
    const pending = this.inFlight.get(key);
    if (pending) return pending as Promise<T>;

    // Fetch, cache, and clean up the in-flight entry
    const promise = fetcher()
      .then((value) => {
        this.cache.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
        this.inFlight.delete(key);
        return value;
      })
      .catch((err) => {
        this.inFlight.delete(key);
        throw err;
      });

    this.inFlight.set(key, promise);
    return promise;
  }
}

export const priceCache = new MemoryCache();
```
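As a sanity check on the dedup behavior, the same logic can be exercised in a compact standalone form (re-declared here as a closure so the snippet runs on its own; the class above is what actually ships):

```typescript
// Standalone check of the in-flight dedup idea: concurrent get() calls
// for the same key share one fetcher invocation, and a later call within
// the TTL is served from the cache without touching RPC at all.
function makeDedupCache() {
  const cache = new Map<string, { value: unknown; expires: number }>();
  const inFlight = new Map<string, Promise<unknown>>();

  return async function get<T>(
    key: string,
    fetcher: () => Promise<T>,
    ttlSeconds = 60,
  ): Promise<T> {
    const hit = cache.get(key);
    if (hit && Date.now() < hit.expires) return hit.value as T;

    const pending = inFlight.get(key);
    if (pending) return pending as Promise<T>;

    const promise = fetcher()
      .then((value) => {
        cache.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
        return value;
      })
      .finally(() => inFlight.delete(key));

    inFlight.set(key, promise);
    return promise;
  };
}
```

Three concurrent `get()` calls for one key should produce exactly one fetcher invocation; a fourth call within the TTL is answered from the cache.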

Then wrap RPC reads in `lib/price.ts`:

```typescript
import { priceCache } from "./cache";

export async function getTokenPrice(tokenAddress: Address, client = publicClient) {
  return priceCache.get(
    `price:${tokenAddress.toLowerCase()}`,
    async () => {
      const [priceRaw, totalSupplyRaw] = await Promise.all([
        client.readContract({ ... priceForNextMint ... }),
        client.readContract({ ... totalSupply ... }),
      ]);
      return { pricePerToken: formatUnits(priceRaw, 18), ... };
    },
    60, // 1 minute TTL
  );
}
```

Same pattern for `getTokenTVL()`, `get24hPriceChange()`, and `getBatchTokenData()`.

Fix 3: Add more CORS RPC endpoints (5 → 12)

File: `lib/rpc.ts`

mintpad uses 14 Base endpoints, mint.club uses 10. PlotLink only has 5.

```typescript
const PUBLIC_CORS_ENDPOINTS = [
  "https://base-rpc.publicnode.com",
  "https://mainnet.base.org",
  "https://base.drpc.org",
  "https://base.llamarpc.com",
  "https://base.meowrpc.com",
  "https://base-mainnet.public.blastapi.io",
  "https://1rpc.io/base",
  "https://base.gateway.tenderly.co",
  "https://rpc.notadegen.com/base",
  "https://base.blockpi.network/v1/rpc/public",
  "https://developer-access-mainnet.base.org",
  "https://base.api.onfinality.io/public",
];
```

Also update `PUBLIC_RPC_ENDPOINTS` (server-side) with the same expanded list.

Important: Move `mainnet.base.org` to position 2 (not first). `publicnode.com` has higher free-tier limits.
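The fail-fast-and-rotate behavior that the 2s timeout enables can be sketched as a small standalone helper (hypothetical names — the real rotation logic lives in `lib/rpc.ts`; `call` stands in for an HTTP RPC request):

```typescript
// Illustrative endpoint rotation: try endpoints in order starting from
// the last known-good position; on failure, remember to skip the failed
// endpoint next time; give up only after every endpoint has been tried.
function makeRotator<T>(
  endpoints: string[],
  call: (url: string) => Promise<T>,
) {
  let cursor = 0; // index of the endpoint to try first

  return async (): Promise<T> => {
    const start = cursor;
    let lastError: unknown;
    for (let i = 0; i < endpoints.length; i++) {
      const url = endpoints[(start + i) % endpoints.length];
      try {
        return await call(url);
      } catch (err) {
        lastError = err;
        // Next request starts after the endpoint that just failed
        cursor = (start + i + 1) % endpoints.length;
      }
    }
    throw lastError;
  };
}
```

The shorter the per-endpoint timeout, the faster this loop moves past a rate-limited endpoint, which is why 2s beats 5s here even though both eventually succeed.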

Files to change

| File | Change |
|---|---|
| `lib/rpc.ts` | Add `batch: true`, reduce timeout to 2s, expand endpoint list to 12 |
| `lib/cache.ts` | New file — singleton memory cache with TTL + dedup |
| `lib/price.ts` | Wrap `getTokenPrice()`, `getTokenTVL()`, `get24hPriceChange()`, `getBatchTokenData()` with `priceCache.get()` |

DO NOT

  • Do not build server API proxy routes — adds latency, complexity, server bottleneck
  • Do not add Zustand store — the singleton cache + React Query is sufficient
  • Do not copy Prisma cache layer — overkill for current scale
  • Do not change any component files — the fixes are all in lib/

Expected Result

| Before | After |
|---|---|
| 15+ HTTP requests per page load | 2-3 batched HTTP requests |
| 5 endpoints, 5s timeout | 12 endpoints, 2s timeout |
| No dedup, duplicate concurrent calls | Singleton cache deduplicates |
| 429 bombs in console | Clean console |

Acceptance Criteria

  • `batch: true` on both browserClient and publicClient transports
  • Timeout reduced to 2s on all transports
  • 12 CORS endpoints, 12+ server endpoints
  • `lib/cache.ts` with MemoryCache (TTL + in-flight dedup)
  • `getTokenPrice()`, `getTokenTVL()`, `get24hPriceChange()` wrapped with cache
  • No 429 errors in browser console during normal browsing
  • `npm run typecheck` passes
  • `npm run lint` passes

Branch

`task/{issue-number}-rpc-batch-cache`
