Rate limits

To ensure stability and fair usage, we enforce rate limits on our public APIs. These limits prevent abuse and provide a reliable experience for all users. This page gives you an overview of the rate limits applied to our APIs and best practices for handling them.

API rate limits

The following rate limits apply to each client IP address:

  • Authentication API

    • Endpoint: Generate access token

    • Maximum: 1,000 requests per five minutes

  • Meter Type APIs

    • Endpoints: CRUD operations for meter types

    • Maximum: 5,000 requests per five minutes

  • Meter APIs

    • Endpoints: CRUD operations for meters

    • Maximum: 10,000 requests per five minutes

How rate limiting works

Our rate limiting uses the token bucket algorithm:

  • Token bucket: Each client IP address has a bucket that holds tokens; each request consumes one token.

  • Capacity: The bucket has a maximum capacity.

  • Refill rate: The system refills tokens at a set rate within the defined time window.

If the bucket is empty, the system denies further requests until more tokens are added.
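The algorithm above can be sketched in a few lines of Python. This is a minimal illustration, not our server implementation; the class and method names (`TokenBucket`, `allow`) are hypothetical.

```python
import time

class TokenBucket:
    """Minimal token bucket: refills at `refill_rate` tokens per second,
    up to `capacity` tokens."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)       # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With a capacity of 2 and a refill rate of 1 token per second, two back-to-back calls to `allow()` succeed and a third immediate call is denied until the bucket refills.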

Handling rate limit exceeded errors

When you exceed the rate limit, the API responds with a 429 Too Many Requests status code. You must:

  • Implement retry logic using exponential backoff to retry requests after a delay.

  • Monitor your application's requests to ensure you do not exceed the rate limits.
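The retry logic above can be sketched as follows. This is a hedged example, not an official client: `send_request` stands in for any callable that returns a response object with a `status_code` attribute (such as a `requests` response), and the retry and delay parameters are illustrative.

```python
import random
import time

def request_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry on HTTP 429 using exponential backoff with jitter."""
    response = None
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Double the delay on each attempt and add random jitter so
        # that many clients do not retry in lockstep.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    return response  # give up after max_retries attempts
```

In practice you would also honor a `Retry-After` header if the API returns one, using it as the minimum delay before the next attempt.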

Burst rate limits

Burst rate limits let you temporarily exceed the steady request rate for short periods. This helps you handle sudden increases in traffic without exceeding the overall rate limit immediately.

Example - Delete parameter API rate limit

Using the Delete parameter API as a reference, each client can make up to one request per second, with a burst capacity of two requests.

  • You can send two requests back-to-back.

  • If you try to send a third request immediately, the API returns a 429 Too Many Requests response.

  • After one second, you can send another request as the bucket refills.

Burst rate limits help you manage occasional spikes in usage while protecting the system from sustained high traffic.
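The Delete parameter scenario above can be simulated with a token bucket whose capacity equals the burst size. This is a client-side sketch with hypothetical names (`BurstLimiter`, `try_request`) and the example's illustrative values (steady rate of 1 request per second, burst of 2), not the service's actual limiter.

```python
import time

class BurstLimiter:
    """Token bucket sized for bursts: `burst` tokens of capacity,
    refilled at `rate` tokens per second."""

    def __init__(self, rate=1.0, burst=2):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_request(self):
        """Return 200 if the request is allowed, 429 if rate limited."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 200
        return 429
```

Two immediate calls return 200, a third immediate call returns 429, and after about one second the bucket has refilled enough to allow another request, matching the behavior described in the bullets above.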