
What is the limit of API calls in ADF?

Published in Azure Data Factory Limits · 2 min read

The primary limit for Read API calls in Azure Data Factory (ADF) is 12,500 per hour.

Understanding API Call Limits in Azure Data Factory

Azure Data Factory operations, including reading monitoring data and managing resources, often involve calls to underlying Azure services. Notably, the most prominent "API call" limit affecting ADF users for read operations is not imposed by Azure Data Factory itself, but by Azure Resource Manager (ARM), which handles all control-plane requests for the subscription.

This means that various actions such as fetching factory metadata, checking pipeline run statuses, or querying dataset configurations through the Azure portal, PowerShell, Azure CLI, or SDKs contribute to this limit.
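Because every one of these metadata reads counts against the same shared budget, caching slowly changing configuration data on the client side is an easy way to cut the call count. A minimal TTL-cache sketch in Python (the `fetch_factory_metadata` callable is a hypothetical stand-in for whatever SDK or REST call your code actually makes):

```python
import time

class TTLCache:
    """Cache values for a fixed number of seconds to avoid repeat API calls."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_timestamp)

    def get_or_fetch(self, key, fetch_fn):
        value, expiry = self._store.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value  # still fresh: no API call made
        value = fetch_fn()  # cache miss or stale entry: one real API call
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

# Hypothetical usage: reuse factory metadata for 5 minutes between refreshes.
cache = TTLCache(ttl_seconds=300)
# metadata = cache.get_or_fetch("factory-meta", fetch_factory_metadata)
```

With a 5-minute TTL, a dashboard that would otherwise poll factory metadata every few seconds makes at most 12 ARM reads per hour for that key instead of hundreds.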

Key API-Related Limits in Azure Data Factory

Here's a summary of the API and query-related limits that are relevant to Azure Data Factory operations:

| Resource | Default Limit | Maximum Limit | Notes |
| --- | --- | --- | --- |
| Read API calls | 12,500/hour | 12,500/hour | Imposed by Azure Resource Manager; affects control-plane operations such as reading resource configurations and metadata. |
| Monitoring queries per minute | 1,000 | 1,000 | Applies to the rate at which monitoring data can be queried within Azure Data Factory. |

When the limit for Read API calls is reached, you will encounter throttling errors (HTTP 429 Too Many Requests), which temporarily block further control-plane operations until the rate window resets.
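Rather than reacting to 429s after the fact, automation that issues many reads can pace itself proactively. The 12,500/hour figure works out to roughly 3.5 calls per second sustained; a client-side token bucket is one common way to enforce that budget. A sketch (the 12,500/hour number comes from the table above; the rest is illustrative):

```python
import time

class TokenBucket:
    """Pace outgoing API calls so a per-hour budget is never exceeded."""

    def __init__(self, calls_per_hour):
        self.capacity = calls_per_hour
        self.tokens = float(calls_per_hour)
        self.refill_rate = calls_per_hour / 3600.0  # tokens added per second
        self.last = time.monotonic()

    def acquire(self):
        """Block until one call's worth of budget is available, then spend it."""
        while True:
            now = time.monotonic()
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.refill_rate)

bucket = TokenBucket(calls_per_hour=12_500)
# bucket.acquire()  # call before each ARM read request
```

Because the bucket starts full, short bursts are still allowed; only the sustained rate is capped at the hourly budget.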

Practical Insights and Best Practices for Managing Limits

To effectively manage these API and query limits and ensure smooth operation of your Azure Data Factory solutions, consider the following best practices:

  • Optimize Read Operations:
    • Batching: When possible, consolidate multiple individual read operations into a single API call to reduce the overall call count.
    • Caching: For frequently accessed static or slowly changing metadata and configuration data, implement caching mechanisms to avoid repetitive API calls.
  • Efficient Monitoring:
    • Targeted Queries: Design your monitoring queries to retrieve only the essential data, avoiding broad or overly frequent polling of information.
    • Event-Driven Monitoring: Leverage Azure Monitor alerts and metrics-based monitoring instead of continuous, manual API polling for status updates.
  • Implement Robust Error Handling:
    • Retry Logic: Incorporate retry mechanisms with exponential back-off in your automation scripts or applications that interact with Azure Data Factory APIs. This helps gracefully handle transient throttling errors (HTTP 429).
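The retry-with-backoff advice above can be sketched as a small wrapper. This assumes `request_fn` returns an object exposing a `status_code` attribute (as responses from the `requests` library do); the function name and parameters are illustrative:

```python
import random
import time

def call_with_retry(request_fn, max_attempts=5, base_delay=1.0):
    """Retry request_fn on HTTP 429, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        response = request_fn()
        if response.status_code != 429:
            return response
        # Exponential backoff (base_delay, 2x, 4x, ...) plus a little jitter
        # so many throttled clients do not all retry at the same instant.
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    raise RuntimeError(f"Still throttled after {max_attempts} attempts")
```

A production version would also honor the `Retry-After` header when the throttled response includes one, instead of relying purely on the computed delay.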

By understanding these limits and adopting efficient practices, you can design and operate resilient and high-performing Azure Data Factory workflows within the Azure ecosystem.