
What Is the Limit of a Temporal Workflow?


The most important operational limit on a Temporal workflow, and the one that most directly determines its scale and longevity, is the Workflow Execution Event History. To protect system stability and performance, this history is capped at 51,200 Events or 50 MB, whichever is reached first.

Understanding Temporal Workflow Event History Limits

The "Event History" of a Temporal Workflow Execution is a comprehensive, immutable log of every command, event, and result that occurs throughout the workflow's lifecycle. This includes workflow tasks, activity tasks, timers, signals, queries, and their outcomes. Temporal uses this history to reconstruct the workflow state deterministically during replays, ensuring fault tolerance and durability.

To prevent workflows from consuming excessive resources and to maintain the integrity of the system, Temporal enforces strict limits on this history.

Here's a breakdown of the Event History limits and warning thresholds:

Limit Type             Warning Threshold    Hard Limit
Number of Events       10,240 Events        51,200 Events
History Size (Data)    10 MB                50 MB

When a workflow execution approaches these limits, Temporal issues a warning. If either the event count or the total history size reaches the hard limit, the Workflow Execution is terminated, because it can no longer reliably record its progress or be replayed.
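To put the event cap in concrete terms, here is a rough back-of-the-envelope estimate: a workflow that calls activities sequentially in a loop typically records on the order of six events per iteration (the activity's scheduled/started/completed events plus the surrounding workflow task events). At that rate, roughly 51,200 / 6 ≈ 8,500 sequential activity calls in a single run will exhaust the hard limit, so a workflow designed as an endless loop will not survive long without the strategies described below.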

Why Event History Limits Matter

These limits are fundamental to Temporal's design for several reasons:

  • Performance: Larger histories require more memory and processing power to store, transmit, and replay, impacting the overall performance and latency of the Temporal cluster.
  • Reliability & Replayability: The deterministic replay of workflow history is a core tenet of Temporal. Extremely large histories can make this process unwieldy, slow, or prone to resource exhaustion.
  • Resource Management: Limiting history size helps in efficient management of database storage and network bandwidth within the Temporal service.
  • Debugging & Observability: While detailed, an overly long history can become cumbersome to analyze when debugging complex workflows.

Strategies to Manage Workflow History

To design robust and long-running Temporal workflows that operate within these limits, several key strategies can be employed:

  1. Utilize ContinueAsNew: This is the most common and powerful mechanism for managing large histories. ContinueAsNew restarts the workflow execution with a fresh history, passing any necessary state to the new run. This allows workflows to run indefinitely without accumulating an unbounded history (see the first sketch after this list).
  2. Externalize Large State: Avoid storing large amounts of data directly in workflow state or history. Instead, keep large payloads in external data stores (e.g., S3, databases) and pass only references (IDs, URLs) through the workflow (see the second sketch after this list).
  3. Optimize Activity and Child Workflow Invocations: Be mindful of the granularity of your activities and child workflows, since each invocation and completion adds events to the history. Fine-grained activities are often useful for progress tracking, but consider combining very small, frequently executed operations into larger activities where appropriate to reduce event churn.
  4. Use Timers and Signals Judiciously: While essential, an excessive number of timers or signals also contributes to history growth, so use them deliberately.
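The following is a minimal Go sketch of strategy 1. The workflow name BatchWorkflow, the ProcessBatch activity, and the batchesPerRun constant are illustrative assumptions, not part of the Temporal SDK; the point is the shape of the loop: do a bounded amount of work per run, then hand a small cursor to a fresh run via ContinueAsNew.

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// batchesPerRun bounds how much history a single run may accumulate
// before the workflow restarts itself via ContinueAsNew.
const batchesPerRun = 500

// ProcessBatch is a hypothetical activity: it would process one batch of work
// identified by cursor and report whether all work is finished.
func ProcessBatch(ctx context.Context, cursor int) (bool, error) {
	// ... fetch and process the batch identified by cursor ...
	return false, nil
}

// BatchWorkflow processes batches starting at cursor. Instead of looping
// forever in one run (and growing the Event History without bound), it
// performs a fixed number of iterations and then continues as a new run.
func BatchWorkflow(ctx workflow.Context, cursor int) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	})

	for i := 0; i < batchesPerRun; i++ {
		var done bool
		if err := workflow.ExecuteActivity(ctx, ProcessBatch, cursor).Get(ctx, &done); err != nil {
			return err
		}
		cursor++
		if done {
			return nil // all work finished; no new run needed
		}
	}

	// Start a fresh run with an empty history, passing only the small cursor forward.
	return workflow.NewContinueAsNewError(ctx, BatchWorkflow, cursor)
}
```

Each new run starts with an empty Event History, so the workflow can run indefinitely while every individual run stays well under the 51,200-event cap.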
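And here is a sketch of strategy 2, again with hypothetical activity names (GenerateReport, EmailReport): large payloads live in external storage, and only a short reference ever enters the Event History.

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// GenerateReport is a hypothetical activity: it would build a large report,
// write it to external storage (e.g. S3 or a database), and return only the
// object key, never the report itself.
func GenerateReport(ctx context.Context, orderID string) (string, error) {
	// ... build the report, upload it, return its storage key ...
	return "reports/" + orderID, nil
}

// EmailReport is a hypothetical activity that loads the report by key and sends it.
func EmailReport(ctx context.Context, reportKey string) error {
	// ... download the report by key and email it ...
	return nil
}

// ReportWorkflow records only the small reportKey in its Event History; the
// multi-megabyte report never passes through Temporal payloads.
func ReportWorkflow(ctx workflow.Context, orderID string) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 5 * time.Minute,
	})

	var reportKey string
	if err := workflow.ExecuteActivity(ctx, GenerateReport, orderID).Get(ctx, &reportKey); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, EmailReport, reportKey).Get(ctx, nil)
}
```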

By understanding and proactively managing the Workflow Execution Event History, developers can build scalable, resilient, and long-lived applications on the Temporal platform.