Rate Limits

The API enforces rate limits per tenant to ensure fair usage and platform stability. Limits are enforced at the gateway level using distributed Redis-backed counters.

Default Limits

| Endpoint Group | Limit | Window |
|---|---|---|
| General (all endpoints) | 100 requests | 1 minute |
| Transfers | 30 requests | 1 minute |
| Transactions (queries) | 50 requests | 1 minute |
| Journal posting | 50 requests | 1 minute |
| Standing instructions | 20 requests | 1 minute |
| Loan operations | 20 requests | 1 minute |

Burst allowance: up to 20 concurrent requests above the base rate.
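
One way to stay under these limits proactively, rather than reacting to 429s, is a client-side token bucket. The sketch below is illustrative (the class name and structure are not part of the API) and assumes the general limit of 100 requests per minute:

```javascript
// Minimal client-side token bucket, assuming the general limit of
// 100 requests per 60-second window. Tokens refill continuously;
// each request consumes one token.
class TokenBucket {
  constructor(limit = 100, windowMs = 60_000) {
    this.capacity = limit;
    this.tokens = limit;
    this.refillPerMs = limit / windowMs;
    this.lastRefill = Date.now();
  }

  tryAcquire() {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.lastRefill) * this.refillPerMs
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(100, 60_000);
console.log(bucket.tryAcquire()); // → true
```

Gate each outgoing request on `tryAcquire()` and queue or delay requests when it returns `false`; this keeps sustained traffic under the window limit without relying on server-side rejection.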

> **Note:** Sandbox environments use 10x the production limits. See Sandbox Testing.

Response Headers

Every response includes rate limit headers:

| Header | Description | Example |
|---|---|---|
| `X-RateLimit-Limit` | Max requests per window | 100 |
| `X-RateLimit-Remaining` | Requests remaining in current window | 87 |
| `X-RateLimit-Reset` | Unix timestamp when the window resets | 1740009600 |
```
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1740009600
```

Rate Limit Exceeded

When you exceed a limit, the API returns a 429 response:

```
HTTP/1.1 429 Too Many Requests
Retry-After: 45

{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Too many requests, please retry after 45 seconds",
  "retry_after": 45,
  "request_id": "req-a1b2c3d4"
}
```

The `retry_after` field (also sent in the `Retry-After` header) tells you how many seconds to wait before retrying.

Handling Rate Limits

```js
async function apiCallWithRateLimit(url, options, attempt = 0) {
  const res = await fetch(url, options);

  if (res.status === 429) {
    if (attempt >= 5) {
      throw new Error("Rate limit retries exhausted");
    }
    // Honor the server's Retry-After header; fall back to 60 seconds.
    const retryAfter = parseInt(res.headers.get("Retry-After"), 10) || 60;
    await new Promise((r) => setTimeout(r, retryAfter * 1000));
    return apiCallWithRateLimit(url, options, attempt + 1);
  }

  const remaining = parseInt(res.headers.get("X-RateLimit-Remaining"), 10);
  const resetAt = parseInt(res.headers.get("X-RateLimit-Reset"), 10);

  if (remaining < 5) {
    console.warn(
      `Rate limit low: ${remaining} remaining, resets at ${new Date(resetAt * 1000)}`
    );
  }

  return res;
}
```

Strategies

  • Monitor `X-RateLimit-Remaining` — Back off before the count reaches zero.
  • Use exponential backoff — On 429 responses, wait at least the `Retry-After` duration, increasing the delay between successive retries.
  • Batch operations — Use batch journal endpoints (POST /api/v1/journals/batch) instead of individual calls.
  • Cache responses — Cache account lists, GL account hierarchies, and other slowly changing data.
  • Use cursor pagination — Fetch larger pages (up to `limit=200`) to reduce the number of requests.
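
The exponential-backoff strategy above can be sketched as follows; `doFetch`, the base delay, and the attempt cap are illustrative assumptions, not documented API behavior:

```javascript
// Exponential backoff with full jitter around 429 responses.
// doFetch is any function returning a fetch-style Response.
async function withBackoff(doFetch, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await doFetch();
    if (res.status !== 429) return res;

    // Prefer the server's Retry-After value; otherwise back off
    // exponentially (1s, 2s, 4s, ...) with random jitter so that
    // many clients don't retry in lockstep.
    const retryAfter = parseInt(res.headers.get("Retry-After"), 10);
    const delayMs = Number.isNaN(retryAfter)
      ? Math.random() * 1000 * 2 ** attempt
      : retryAfter * 1000;
    await new Promise((r) => setTimeout(r, delayMs));
  }
  throw new Error("Rate limit retries exhausted");
}
```

The jitter matters in practice: without it, clients that were throttled together retry together and hit the limit again at the same instant.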

Next Steps