USPS API Rate Limits: How to Handle 60 req/hr
USPS v3 REST API ships with a 60 request/hour rate limit. That's 1 request per minute. Here's how to build a production shipping system that doesn't fall over.
The Problem
A typical e-commerce order touches the USPS API 3-5 times:
- Address validation at checkout
- Rate shopping (1-3 calls for service comparison)
- Label creation
- Tracking number registration
- Tracking status polling (ongoing)
At 60 req/hr, you can process 12-20 orders per hour before hitting the wall. A mid-size Shopify store doing 200 orders/day needs 600+ API calls minimum.
200 orders/day × 3 API calls each = 600 calls/day
600 calls / 24 hours = 25 calls/hour ← under limit
But orders aren't evenly distributed:
Peak hour (2-4 PM): 40 orders × 3 calls = 120 calls
USPS limit: 60/hour ← 2x over limit
Strategy 1: Aggressive Caching
Address validation is the lowest-hanging fruit. A validated address doesn't change — cache it for 30 days and you eliminate 30-40% of API calls.
```python
import hashlib
import time

class AddressCache:
    TTL = 30 * 86400  # 30 days, in seconds

    def __init__(self):
        self.store = {}  # swap for Redis/KV in production

    def cache_key(self, street, city, state, zip_code):
        # Normalize so "123 Main St" and "123 MAIN ST" share one entry
        raw = f"{street}|{city}|{state}|{zip_code}".upper()
        return hashlib.sha256(raw.encode()).hexdigest()

    def get(self, key):
        entry = self.store.get(key)
        if entry and time.time() - entry["ts"] < self.TTL:
            return entry["data"]  # Cache hit
        return None  # Miss → call USPS

    def set(self, key, data):
        self.store[key] = {"ts": time.time(), "data": data}
```
What to cache: address validation (30 days), city/state lookups (forever), service standards (7 days). Don't cache: tracking (stale within minutes), prices (change seasonally), labels (one-time).
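To quantify the payoff: if a fraction h of requests hit the cache, the same 60 upstream calls serve 60/(1-h) logical requests per hour. A rough model (illustrative arithmetic, not a USPS guarantee):

```python
def effective_limit(upstream_limit, hit_rate):
    """Logical requests served per hour when `hit_rate` of them
    never reach the upstream API."""
    return upstream_limit / (1 - hit_rate)

print(round(effective_limit(60, 0.35)))  # → 92 req/hr at a 35% hit rate
print(round(effective_limit(60, 0.40)))  # → 100 req/hr at a 40% hit rate
```

That 30-40% hit rate is where the "~100 req/hr effective" figure in the comparison table comes from.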
Strategy 2: Request Queuing
Instead of sending requests immediately, queue them and process at a steady rate. This smooths peak-hour bursts and prevents 429 errors.
┌──────────┐ ┌───────────────┐ ┌──────────┐
│ Your App │───▶│ Request Queue │───▶│ USPS API │
└──────────┘ └───────────────┘ └──────────┘
│
Rate: 1 req/min
Burst: 5 req/min
Retry: 3x exponential
Priority levels:
P0: Label creation (blocking checkout)
P1: Address validation (blocking form)
P2: Rate shopping (can show spinner)
P3: Tracking polls (background, no rush)
Cloudflare Queues are ideal for this — they're native to the edge runtime, support dead letter queues, and batch up to 25 messages per consumer invocation.
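The diagram's rate/burst numbers can be sketched as a token bucket fronted by a priority heap. This is an illustrative in-process version (class and method names are mine, not from any USPS SDK); in production the same logic would live behind a durable queue like the Cloudflare setup above.

```python
import heapq
import time

class PriorityRateLimiter:
    """Token bucket (steady rate + burst) draining a priority queue.
    Defaults mirror the diagram: 1 req/min steady, burst of 5."""

    def __init__(self, rate_per_min=1, burst=5):
        self.capacity = burst
        self.tokens = float(burst)
        self.refill_per_sec = rate_per_min / 60.0
        self.last = time.monotonic()
        self.queue = []  # (priority, seq, request) — lowest priority number first
        self.seq = 0     # tie-breaker so equal priorities stay FIFO

    def submit(self, priority, request):
        heapq.heappush(self.queue, (priority, self.seq, request))
        self.seq += 1

    def _refill(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now

    def drain(self):
        """Pop as many queued requests as current tokens allow, P0 first."""
        self._refill()
        ready = []
        while self.queue and self.tokens >= 1:
            self.tokens -= 1
            ready.append(heapq.heappop(self.queue)[2])
        return ready
```

A burst of label creations (P0) jumps ahead of queued tracking polls (P3), so checkout never waits behind background work.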
Strategy 3: Request a Rate Limit Increase
Most developers don't know this: USPS will increase your rate limit if you ask.
Contact USPS API support via emailus.usps.com. Include:
- Your USPS Developer Portal CRID and app name
- Estimated monthly volume (be specific: "3,000 address validations + 1,500 labels")
- Business justification (you ship mail and need faster throughput)
Typical increases: 300 req/hr (small business), 1,000 req/hr (mid-volume), 5,000+ req/hr (enterprise). Response time varies: 1-5 business days.
Strategy 4: BYOK (Bring Your Own Keys)
If you use a managed API like RevAddress with BYOK support, your requests use your USPS credentials — meaning your rate limit increase applies through the managed infrastructure.
This gives you the best of both worlds:
- Your rate limit — not shared with other tenants
- Managed caching — address cache, token refresh, retry logic
- Your data stays yours — AES-GCM encrypted credentials, no proxy snooping
Strategy Comparison
| Strategy | Effort | Effective Limit | Best For |
|---|---|---|---|
| Caching | Low | ~100 req/hr | Everyone |
| Queue smoothing | Medium | ~60 req/hr (smoothed) | Bursty workloads |
| Rate limit increase | Low (email) | 300-5,000 req/hr | Growing businesses |
| Managed API (BYOK) | Lowest | 600 req/min | Production systems |
Strategy 5: Use an SDK with Built-in Rate Handling
The official USPS v3 SDKs, available for both Python and Node.js, handle token caching and provide a clean interface for building your own rate limiting layer.
Both SDKs auto-manage OAuth tokens (8-hour lifetime, 30-minute proactive refresh). Pair with your caching layer of choice and the rate limit becomes manageable.
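The token lifecycle described above (8-hour lifetime, refresh 30 minutes before expiry) can be approximated in a few lines. This is a hedged sketch of the pattern, not the SDKs' actual API; `fetch_token` stands in for your real OAuth client-credentials call.

```python
import time

class TokenManager:
    """Proactive OAuth token cache: reuse the token until it is within
    30 minutes of its 8-hour expiry, then fetch a fresh one."""

    LIFETIME = 8 * 3600        # token validity, in seconds
    REFRESH_MARGIN = 30 * 60   # refresh this long before expiry

    def __init__(self, fetch_token):
        self._fetch = fetch_token  # callable returning a fresh access token
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh if missing or inside the proactive-refresh window
        if self._token is None or time.time() >= self._expires_at - self.REFRESH_MARGIN:
            self._token = self._fetch()
            self._expires_at = time.time() + self.LIFETIME
        return self._token
```

Every API call goes through `get()`, so token refresh never consumes a request at the worst possible moment (mid-checkout, mid-burst).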
Done fighting rate limits?
RevAddress handles caching, queuing, token management, and rate limit smoothing. BYOK support on Pro and Enterprise plans.