## Rate limits
The Datamingle API does not currently impose a hard per-key rate limit. Two practical limits apply:
- Per-request payload size. Bodies above 10 MB are rejected at the ingress layer (`413 Request Entity Too Large`).
- Concurrent requests per key. Bursty single-key traffic above 50 in-flight requests is queued at the gateway, not rejected.
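To stay under the payload cap, bulk uploads can be split into batches before sending. A minimal sketch, assuming rows are JSON-serializable dicts (the helper name and the serialization details are illustrative; the 10 MB figure is the documented ingress limit):

```python
import json

MAX_BODY_BYTES = 10 * 1024 * 1024  # documented 10 MB ingress limit

def chunk_rows(rows, max_bytes=MAX_BODY_BYTES):
    """Split rows into batches whose JSON array body stays under max_bytes.

    Assumes each individual row serializes well under the limit. The size
    estimate adds 2 bytes per row for separators, which slightly overcounts
    and therefore errs on the safe side.
    """
    batch, size = [], 2  # 2 bytes for the surrounding "[]"
    for row in rows:
        row_bytes = len(json.dumps(row).encode("utf-8")) + 2
        if batch and size + row_bytes > max_bytes:
            yield batch
            batch, size = [], 2
        batch.append(row)
        size += row_bytes
    if batch:
        yield batch
```

Each yielded batch can then be sent as one bulk request, keeping every body below the ingress cutoff.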
Expect this to evolve. Clients should assume standard HTTP throttling can arrive at any time, with two headers used as the contract:
| Header | Meaning |
|---|---|
| `Retry-After` | Seconds to wait before retrying |
| `X-RateLimit-Remaining` | Requests left in the current window |
## Recommended client behavior
- Exponential backoff with jitter on `429`, `502`, `503`, `504`, and network timeouts.
- Respect `Retry-After` when present. Treat the value as a floor, not a ceiling: wait at least that long before the next attempt.
- Cap total retry time. A sync job that runs every 15 minutes shouldn't retry a single request for 10 minutes.
- Parallelize cautiously. Prefer 4–8 concurrent workers per key for bulk loads; higher counts rarely speed things up and risk hitting a future limit.
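The retry guidance above can be sketched as a small loop. This is an illustration, not a client library: `do_request` is a stand-in the caller supplies, returning the status code, any `Retry-After` value in seconds, and the body.

```python
import random
import time

RETRYABLE = {429, 502, 503, 504}

def retry_delay(attempt, retry_after=None, base=0.5, cap=30.0):
    """Exponential backoff with full jitter for the given 0-based attempt,
    never shorter than the server's Retry-After hint (a floor)."""
    exp = min(cap, base * (2 ** attempt))
    delay = random.uniform(0, exp)       # full jitter
    if retry_after is not None:
        delay = max(delay, retry_after)  # Retry-After is a minimum wait
    return delay

def send_with_retries(do_request, max_total=60.0):
    """Call do_request() -> (status, retry_after_or_None, body), retrying
    on retryable statuses until ~max_total seconds of waiting are spent."""
    spent, attempt = 0.0, 0
    while True:
        status, retry_after, body = do_request()
        if status not in RETRYABLE:
            return status, body
        delay = retry_delay(attempt, retry_after)
        if spent + delay > max_total:
            return status, body          # cap total retry time: give up
        time.sleep(delay)
        spent += delay
        attempt += 1
```

Network timeouts would be handled the same way, by catching the client's timeout exception inside `do_request` and mapping it to a retryable result.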
## Reducing API pressure
- Prefer bulk endpoints (`POST /inventory/` with multiple rows) over many single-row requests.
- Debounce upstream events; don't push the same order on every ERP tick.
- Cache product/location lookups client-side; they change rarely.
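Caching rarely changing lookups can be as simple as a dict with expiry. A minimal sketch, assuming the caller wraps its own lookup call as `fetch` (the class name and TTL default are illustrative):

```python
import time

class TTLCache:
    """Client-side cache for slowly changing lookups such as products
    and locations. `fetch(key)` is the caller's actual API call."""

    def __init__(self, fetch, ttl_seconds=3600):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        hit = self._store.get(key)
        if hit and hit[0] > time.monotonic():
            return hit[1]                 # fresh: no API call
        value = self.fetch(key)           # stale or missing: refetch
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value
```

A one-hour TTL is usually plenty for product and location data; shorten it if your catalog updates more often.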