Rate Limits
The Alsorn Protocol API enforces rate limits to ensure fair usage and platform stability. Limits are applied per API key and vary by plan tier.
Limits by Plan
Each plan tier has a fixed number of requests allowed per hour. Once you exceed the limit, the API returns a 429 status code until the window resets.
| Plan | Limit | Description |
|---|---|---|
| starter | 1,000 requests/hour | For individual developers and small projects getting started with the Alsorn Protocol. |
| growth | 10,000 requests/hour | For teams running production workloads with moderate agent traffic. |
| enterprise | 100,000 requests/hour | For organizations operating large-scale agent fleets. Custom limits available on request. |
| unauthenticated | 100 requests/hour | Requests without a valid API key. Intended only for initial exploration and health checks. |
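To stay under an hourly cap without bursting into a 429, you can pace requests at a fixed interval derived from your plan's limit. A minimal sketch (the tier-to-limit mapping simply mirrors the table above; the helper name is illustrative, not part of the Alsorn SDK):

```python
# Hourly request limits per plan tier, as documented above.
PLAN_LIMITS_PER_HOUR = {
    "unauthenticated": 100,
    "starter": 1_000,
    "growth": 10_000,
    "enterprise": 100_000,
}

def min_request_interval(plan: str) -> float:
    """Seconds to wait between requests to stay within the hourly limit."""
    return 3600.0 / PLAN_LIMITS_PER_HOUR[plan]
```

For example, the starter tier's 1,000 requests/hour works out to one request every 3.6 seconds of sustained traffic.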
Rate Limit Headers
Every API response includes headers that tell you about your current rate limit status. Use these headers to implement proactive throttling in your application.
Response Headers
| Header | Type | Description |
|---|---|---|
| X-RateLimit-Limit | integer | The maximum number of requests allowed in the current window for your plan tier. |
| X-RateLimit-Remaining | integer | The number of requests remaining in the current window. |
| X-RateLimit-Reset | integer (unix timestamp) | The UTC epoch timestamp (in seconds) when the current rate limit window resets. |
| Retry-After | integer (seconds) | Only present on 429 responses. The number of seconds to wait before retrying the request. |
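As a sketch of proactive throttling built on these headers: when `X-RateLimit-Remaining` falls below a threshold, sleep until `X-RateLimit-Reset`. The helper below is an illustrative assumption, not part of the Alsorn SDK; it only does the arithmetic, so you can call it with the headers of any HTTP response:

```python
import time

def throttle_delay(headers, min_remaining=5, now=None):
    """Seconds to pause before the next request, given rate limit headers.

    Returns 0.0 while plenty of budget remains; otherwise waits out
    the rest of the current window.
    """
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", min_remaining + 1))
    reset = int(headers.get("X-RateLimit-Reset", 0))
    if remaining > min_remaining:
        return 0.0
    # Budget nearly spent: wait until the window resets (never negative).
    return max(0.0, reset - now)
```

A caller would `time.sleep(throttle_delay(response.headers))` between requests, which keeps clients under the limit without ever triggering a 429.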
Handling Rate Limits
When you receive a 429 Too Many Requests response, back off and retry after the duration specified in the Retry-After header. The example below shows an exponential backoff strategy with jitter.
retry.py
```python
import time
import random

from alsorn import AlsornClient, RateLimitError

client = AlsornClient(api_key="als_your_api_key")

def request_with_retry(fn, max_retries=5):
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError as e:
            if attempt == max_retries - 1:
                raise
            # Use Retry-After header if available, otherwise exponential backoff
            wait = e.retry_after or (2 ** attempt + random.random())
            print(f"Rate limited. Retrying in {wait:.1f}s...")
            time.sleep(wait)

# Usage
trust = request_with_retry(lambda: client.trust.query("agnt_abc123"))
```
Need higher limits?
Contact sales for custom rate limits on the Enterprise plan. We can tailor throughput, burst allowances, and priority queuing to match your workload.