Summary
The Timepiece app employs rate limiting to prevent overuse of the service.
Rate limits control the number of reports you can get from the Timepiece service in a predefined time period. This includes all report requests from:
- Main reporting screen
- Dashboard gadgets
- Issue view screen tabs
- REST API
- File exports
Timepiece prepares its reports from data it retrieves from Jira Cloud. Sending very frequent report requests puts excessive load on both the Timepiece service and your Jira Cloud instance, so rate limits are in place to protect the performance of both.
What are the limits?
The exact limits are not public because limit definitions change over time; we continually monitor and fine-tune the system for better performance.
Limits are set well above average use, so customers should not hit rate limits during normal daily use of the app. Limits come into play mainly in REST integration scenarios where an unrestrained number of requests is sent in a very short time.
What happens when you hit the limit?
When you hit the rate limit, your requests return HTTP 429 with a response body like the following:
Timepiece rate-limit exceeded. You have sent too many requests in a short time period. See the following document for details about rate-limit: https://dev.obss.com.tr/confluence/display/MD/TiS+Cloud+-+Rate+Limit
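A common way for an integration to handle this response is to retry with an exponential backoff instead of failing outright. The sketch below shows the idea; `fetch` is a caller-supplied function standing in for whatever HTTP client you use against the Timepiece REST API (the real endpoint and client library are not shown here and are up to your integration).

```python
import time

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fetch() and retry on HTTP 429 with exponential backoff.

    `fetch` is a caller-supplied function returning (status_code, body);
    it stands in for a real HTTP call to the Timepiece REST API.
    """
    for attempt in range(max_retries):
        status, body = fetch()
        if status != 429:
            return status, body
        # Back off exponentially before retrying: 1s, 2s, 4s, ...
        sleep(base_delay * (2 ** attempt))
    raise RuntimeError("Rate limit still exceeded after retries")

# Example with a fake fetch that is rate-limited twice, then succeeds.
# The sleep function is stubbed out so the example runs instantly.
responses = iter([(429, ""), (429, ""), (200, "report data")])
status, body = fetch_with_backoff(lambda: next(responses), sleep=lambda s: None)
```

The delay values here are illustrative, not documented Timepiece thresholds; the important part is that retries slow down rather than hammering the service while it is rejecting requests.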
How to avoid hitting rate limits?
There are several things you can do to avoid hitting rate limits.
First, if you are retrieving paged data via the REST API, consider retrieving the same data as a single file export. File exports are also available through the REST API, and they return the same data with far fewer requests to the Timepiece service, making it much easier to stay under the rate limits.
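The request-count difference is easy to quantify. Assuming, for illustration, a page size of 100 rows (the actual page size of your integration may differ), pulling a large report page by page takes many calls, while a file export takes one:

```python
def requests_needed(total_rows, page_size):
    """Number of paged REST calls needed to pull total_rows rows."""
    return -(-total_rows // page_size)  # ceiling division

# Pulling a 5,000-row report at 100 rows per page takes 50 paged
# requests; a single file export returns the same data in 1 request.
paged_calls = requests_needed(5000, 100)
export_calls = 1
```

Against a rate limit counted per time window, replacing dozens of paged calls with one export call is usually the single biggest reduction an integration can make.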
Second, if you cannot use file exports for some reason, lower the frequency of your report requests.
Further Support
If you are hitting rate limits unexpectedly and the guidance in this document is not enough, please reach out to our support team as instructed here.