Apache Airflow 3.2.0 release notes
Published 20 hours ago

Minor release. Contains breaking changes.

- 📦 PyPI: https://pypi.org/project/apache-airflow/3.2.0/
- 📚 Docs: https://airflow.apache.org/docs/apache-airflow/3.2.0/
- 🛠 Release Notes: https://airflow.apache.org/docs/apache-airflow/3.2.0/release_notes.html
- 🐳 Docker Image: docker pull apache/airflow:3.2.0
- 🚏 Constraints: https://github.com/apache/airflow/tree/constraints-3.2.0
The headline feature of Airflow 3.2.0 is asset partitioning — a major evolution of data-aware scheduling. Instead of triggering Dags based on an entire asset, you can now schedule downstream processing based on specific partitions of data. Only the relevant slice of data triggers downstream work, making pipeline orchestration far more efficient and precise.
This matters when working with partitioned data lakes — date-partitioned S3 paths, Hive table partitions, BigQuery table partitions, or any other partitioned data store. Previously, any update to an asset triggered all downstream Dags regardless of which partition changed. Now only the right work gets triggered at the right time.
For detailed usage instructions, see :doc:`/authoring-and-scheduling/assets`.
Airflow 3.2 introduces multi-team support, allowing organizations to run multiple isolated teams within a single Airflow deployment. Each team can have its own Dags, connections, variables, pools, and executors, enabling true resource and permission isolation without requiring separate Airflow instances per team.
This is particularly valuable for platform teams that serve multiple data engineering or data science teams from shared infrastructure, while maintaining strong boundaries between teams' resources and access.
For detailed usage instructions, see :doc:`/core-concepts/multi-team`.
.. warning::

    Multi-Team Deployments are experimental in 3.2.0 and may change in future versions based on user feedback.
Deadline Alerts now support synchronous callbacks via SyncCallback in addition to the existing
asynchronous AsyncCallback. Synchronous callbacks are executed by the executor (rather than
the triggerer), and can optionally target a specific executor via the executor parameter.
A Dag can also define multiple Deadline Alerts by passing a list to the deadline parameter,
and each alert can use either callback type.
.. warning::

    Deadline Alerts are experimental in 3.2.0 and may change in future versions based on
    user feedback. Synchronous deadline callbacks (SyncCallback) do not currently
    support Connections stored in the Airflow metadata database.
For detailed usage instructions, see :doc:`/howto/deadline-alerts`.
Grid View Virtualization: The Grid view now uses virtualization -- only visible rows are rendered to the DOM. This dramatically improves performance when viewing Dags with large numbers of task runs, reducing render time and memory usage for complex Dags. (#60241)
XCom Management in the UI: You can now add, edit, and delete XCom values directly from the Airflow UI. This makes it much easier to debug and manage XCom state during development and day-to-day operations without needing CLI commands. (#58921)
HITL Detail History: The Human-in-the-Loop approval interface now includes a full history view, letting operators and reviewers see the complete audit trail of approvals and rejections for any task. (#56760, #55952)
Gantt Chart Improvements:
--only-idle flag for the scheduler CLI: The airflow scheduler command has a new --only-idle flag that counts runs only when the scheduler is idle. This helps users run the scheduler once to process all triggered Dags and queued tasks. It requires and complements the --num-runs flag, so one can set a small value instead of guessing how many iterations the scheduler needs.
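A usage sketch (the run count is illustrative):

```shell
# Let the scheduler drain triggered Dags and queued tasks, then exit:
# --num-runs bounds the iterations, and --only-idle counts a run only
# when the scheduler is idle.
airflow scheduler --num-runs 3 --only-idle
```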
The grid, graph, gantt, and task-detail views now fetch task-instance
summaries through a single streaming HTTP request
(GET /ui/grid/ti_summaries/{dag_id}?run_ids=...) instead of one request
per run. The server emits one JSON line per run as soon as that run's task
instances are ready, so columns appear progressively rather than all at once.
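The NDJSON framing is what enables the progressive rendering: each line is a complete JSON document, so a client can act on a run's summary the moment its line arrives. A minimal sketch of consuming such a stream (the payload fields here are invented for illustration, not the actual GridTISummaries schema):

```python
import io
import json

# Simulated application/x-ndjson response body: one JSON object per line,
# each describing the task-instance summaries for a single run.
response_body = io.StringIO(
    '{"run_id": "manual__2025-01-01", "task_count": 3}\n'
    '{"run_id": "manual__2025-01-02", "task_count": 5}\n'
)

summaries = []
for line in response_body:  # a real client would iterate the streaming response
    if line.strip():
        summaries.append(json.loads(line))  # each line parses independently

print([s["run_id"] for s in summaries])
```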
What changed:

- GET /ui/grid/ti_summaries/{dag_id}?run_ids=... is now the sole endpoint for TI summaries, returning an application/x-ndjson stream where each line is a serialized GridTISummaries object for one run.
- GET /ui/grid/ti_summaries/{dag_id}/{run_id} has been removed.
- dag_version_id is used to avoid redundant deserialization.

The new json_logs option under the [logging] section makes Airflow produce all its output as newline-delimited JSON (structured logs) instead of human-readable formatted logs. This covers the API server (gunicorn/uvicorn), including access logs, warnings, and unhandled exceptions.

Not all components support this yet (notably airflow celery worker), but any non-JSON output when json_logs is enabled will be treated as a bug. (#63365)
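Enabling it in airflow.cfg looks like this (equivalently, set AIRFLOW__LOGGING__JSON_LOGS=true; the environment-variable name assumes Airflow's standard section/option mapping):

```ini
[logging]
# Emit newline-delimited JSON instead of human-readable log lines
json_logs = True
```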
The interfaces and functions in airflow.traces were internal code that provided a standard way to manage spans within Airflow. They were never intended as user-facing code and were never documented. They are no longer needed and have been removed in 3.2. (#63452)
Airflow now sources task-facing exceptions (AirflowSkipException, TaskDeferred, etc.) from
airflow.sdk.exceptions. airflow.exceptions still exposes the same exceptions, but they are
proxies that emit DeprecatedImportWarning so Dag authors can migrate before the shim is removed.
What changed:

- Task-facing exceptions are now defined in the Task SDK rather than in airflow-core at runtime.
- airflow.providers.common.compat.sdk centralizes compatibility imports for providers.

Behaviour changes:

- Sensors now raise ValueError (instead of AirflowException) when poke_interval / timeout arguments are invalid.
- Importing from airflow.exceptions logs a warning directing users to the SDK import path.

Exceptions now provided by airflow.sdk.exceptions:

- AirflowException and AirflowNotFoundException
- AirflowRescheduleException and AirflowSensorTimeout
- AirflowSkipException, AirflowFailException, AirflowTaskTimeout, AirflowTaskTerminated
- TaskDeferred, TaskDeferralTimeout, TaskDeferralError
- DagRunTriggerException and DownstreamTasksSkipped
- AirflowDagCycleException and AirflowInactiveAssetInInletOrOutletException
- ParamValidationError, DuplicateTaskIdFound, TaskAlreadyInTaskGroup, TaskNotFound, XComNotFound
- AirflowOptionalProviderFeatureException

Backward compatibility:

- Imports from airflow.exceptions continue to work, though they log warnings.
- Providers can import from airflow.providers.common.compat.sdk to keep one import path that works across supported Airflow versions.

Migration:

- Import exceptions from airflow.sdk.exceptions (or from the provider compat shim).
- Catch ValueError for invalid sensor arguments if your code previously caught AirflowException.

The retry_exponential_backoff parameter now accepts numeric values to specify custom exponential backoff multipliers for task retries. Previously, this parameter only accepted boolean values (True or False), with True using a hardcoded multiplier of 2.0.
New behavior:
- Numeric values (e.g., 2.0, 3.5) directly specify the exponential backoff multiplier
- retry_exponential_backoff=2.0 doubles the delay between each retry attempt
- retry_exponential_backoff=0 or False disables exponential backoff (uses a fixed retry_delay)

Backwards compatibility:

Existing Dags using boolean values continue to work:

- retry_exponential_backoff=True → converted to 2.0 (maintains the original behavior)
- retry_exponential_backoff=False → converted to 0.0 (no exponential backoff)

API changes:
The REST API schema for retry_exponential_backoff has changed from type: boolean to type: number. API clients must use numeric values (boolean values will be rejected).
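The multiplier semantics can be sketched in plain Python. The helper below is illustrative only; it mirrors the documented behavior and is not Airflow's internal implementation:

```python
from datetime import timedelta

def backoff_delay(retry_delay: timedelta, multiplier: float, attempt: int) -> timedelta:
    """Illustrative delay before retry number `attempt` (1-based).

    A multiplier of 0 (or False) means no exponential backoff: fixed retry_delay.
    """
    if not multiplier:
        return retry_delay
    return retry_delay * (multiplier ** (attempt - 1))

base = timedelta(minutes=1)
print(backoff_delay(base, 2.0, 1))  # 0:01:00
print(backoff_delay(base, 2.0, 3))  # 0:04:00 (delay doubles on each retry)
print(backoff_delay(base, 3.5, 2))  # 0:03:30 (custom multiplier)
print(backoff_delay(base, 0, 5))    # 0:01:00 (fixed delay)
```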
Migration:
While boolean values in Python Dags are automatically converted for backwards compatibility, we recommend updating to explicit numeric values for clarity:
- retry_exponential_backoff=True → retry_exponential_backoff=2.0
- retry_exponential_backoff=False → retry_exponential_backoff=0

Airflow now sources serde logic from airflow.sdk.serde instead of airflow.serialization.serde. Serializer modules have moved from airflow.serialization.serializers.* to airflow.sdk.serde.serializers.*. The old import paths still work but emit DeprecatedImportWarning to guide migration. The backward compatibility layer will be removed in Airflow 4.

What changed:

- Serde logic moved from airflow-core to the task-sdk package
- Serializer modules moved from airflow.serialization.serializers.* to airflow.sdk.serde.serializers.*
- Custom serializers live in the airflow.sdk.serde.serializers.* namespace

Code interface changes:

- Import serializers from airflow.sdk.serde.serializers.* instead of airflow.serialization.serializers.*
- Import serde logic from airflow.sdk.serde instead of airflow.serialization.serde

Backward compatibility:

- Imports from airflow.serialization.serializers.* continue to work with deprecation warnings

Migration:

- Update imports to airflow.sdk.serde.serializers.*
- Register custom serializers in the airflow.sdk.serde.serializers.* namespace (e.g., create task-sdk/src/airflow/sdk/serde/serializers/your_serializer.py)

On the (experimental) class PriorityWeightStrategy, the functions serialize() and deserialize() were never used anywhere and have been removed. They should not be relied on in user code. (#59780)
On the class TaskInstance, the functions run(), render_templates(), get_template_context(), and private members related to them have been removed. The class has been considered internal since 3.0 and should not be relied on in user code. (#59780, #59835)
DagBag:

New behavior:

- DagBag now uses Path.relative_to for consistent cross-platform behavior.
- FileLoadStat now has two additional nullable fields: bundle_path and bundle_name.

Backward compatibility:

FileLoadStat will no longer produce paths beginning with / with the meaning of "relative to the dags folder". This is a breaking change for any custom code that performs string-based path manipulation relying on this behavior. Users are advised to update such code to use pathlib.Path. (#59785)
Removed --conn-id option from airflow connections list: The redundant --conn-id option has been removed from the airflow connections list CLI command. Use airflow connections get instead. (#59855)
render_template_as_native_obj override: Operators can now override the Dag-level render_template_as_native_obj setting, enabling fine-grained control over whether templates are rendered as native Python types or as strings on a per-task basis. Set render_template_as_native_obj=True or False on any operator to override the Dag setting, or leave it as None (the default) to inherit from the Dag.
The API server now supports gunicorn as an alternative server with rolling worker restarts to prevent memory accumulation in long-running processes.
Key Benefits:
Rolling worker restarts: New workers spawn and pass health checks before old workers are killed, ensuring zero downtime during worker recycling.
Memory sharing: Gunicorn uses preload + fork, so workers share memory via copy-on-write. This significantly reduces total memory usage compared to uvicorn's multiprocess mode where each worker loads everything independently.
Correct FIFO signal handling: Gunicorn's SIGTTOU kills the oldest worker (FIFO), not the newest (LIFO), which is correct for rolling restarts.
Configuration:
.. code-block:: ini

    [api]
    # Use gunicorn instead of uvicorn
    server_type = gunicorn

    # Enable rolling worker restarts every 12 hours
    worker_refresh_interval = 43200

    # Restart workers one at a time
    worker_refresh_batch_size = 1
Or via environment variables:
.. code-block:: bash

    export AIRFLOW__API__SERVER_TYPE=gunicorn
    export AIRFLOW__API__WORKER_REFRESH_INTERVAL=43200
Requirements:
Install the gunicorn extra: pip install 'apache-airflow-core[gunicorn]'
Note on uvicorn (default):

The default uvicorn mode does not support rolling worker restarts. If you need worker recycling or memory-efficient multi-worker deployment, use gunicorn. (#60921)
The config max_num_rendered_ti_fields_per_task is renamed to num_dag_runs_to_retain_rendered_fields
(old name still works with deprecation warning).
Retention is now based on the N most recent dag runs rather than N most recent task executions, which may result in fewer records retained for conditional/sparse tasks. (#60951)
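In airflow.cfg the renamed option would look like this; the [core] section and the value shown are assumptions for illustration, since the release note does not state them:

```ini
[core]
# Renamed from max_num_rendered_ti_fields_per_task (the old name still works,
# with a deprecation warning). Retention now counts Dag runs, not task executions.
num_dag_runs_to_retain_rendered_fields = 30
```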
Backfill permissions now use requires_access_dag with DagAccessEntity.Run: The method is_authorized_backfill of the BaseAuthManager interface has been removed. Core will no longer call this method, and its provider counterpart implementations will be marked as deprecated.

Permissions for backfill operations are now checked against the DagAccessEntity.Run permission using the existing requires_access_dag decorator. In other words, if a user has permission to run a Dag, they can perform backfill operations on it.

Please update your security policies to ensure that users who need to perform backfill operations have the appropriate DagAccessEntity.Run permissions. (Users who have Backfill permissions but not DagRun permissions will no longer be able to perform backfill operations without this update.)
Airflow 3.2.0 adds support for Python 3.14. (#63787)
Reduced SerializedDAG loads on task start: The API server no longer loads the full SerializedDAG when starting tasks, significantly reducing memory usage. (#60803)
MySQL client support has been removed from official Airflow container images. MySQL users building on official images must install the client themselves. (#57146)
The PythonOperator parameter python_callable now also supports async callables in Airflow 3.2,
allowing users to run async def functions without manually managing an event loop. (#60268)
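Conceptually, this means a coroutine function can be passed where a plain callable was required before, and Airflow drives the event loop for you. A plain-Python sketch of that dispatch (a hypothetical helper for illustration, not Airflow's actual code):

```python
import asyncio
import inspect

async def fetch_status() -> str:
    # Stand-in for real async work (an HTTP call, an async DB query, ...)
    await asyncio.sleep(0)
    return "ok"

def run_callable(python_callable):
    """Hypothetical helper mirroring the documented behavior:
    run async callables on an event loop, plain callables directly."""
    if inspect.iscoroutinefunction(python_callable):
        return asyncio.run(python_callable())
    return python_callable()

print(run_callable(fetch_status))    # ok
print(run_callable(lambda: "sync"))  # sync
```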
The schedule="@continuous" parameter now works without requiring a start_date, and any Dags with this schedule will begin running immediately when unpaused. (#61405)
Other changes and fixes:

- PYTHON_LTO build argument (#58337)
- --queues CLI option for the trigger command (#59239)
- --show-values and --hide-sensitive flags to CLI connections list and variables list to hide sensitive values by default (#62344)
- AIRFLOW__SECRETS__BACKEND_KWARG__<KEY> environment variables (#63312)
- only_new parameter to Dag clear to only clear newly added task instances (#59764)
- log_timestamp_format config option for customizing component log timestamps (#63321)
- --action-on-existing-key option to pools import and connections import CLI commands (#62702)
- --use-migration-files flag for airflow db init (#62234)
- AllowedKeyMapper for partition key validation in asset partitioning (#61931)
- ChainMapper for chaining multiple partition mappers (#64094)
- AgenticOperator (#63081)
- @task.stub decorator to allow tasks in other languages to be defined in Dags (#56055)
- TriggerDagRunOperator (#60810)
- allowed_run_types to whitelist specific Dag run types (#61833)
- dag_id and dag_run_id in bulk task instance endpoint (#57441)
- operator_name_pattern, pool_pattern, queue_pattern as task instance search filters (#57571)
- update_mask support for bulk PATCH APIs (#54597)
- source parameter to Param (#58615)
- TaskInstance on RuntimeTaskInstance (#59712)
- max_trigger_to_select_per_loop config for Triggerer HA setup (#58803)
- uvicorn_logging_level config option to control API server access logs (#56062)
- executor.running_dags gauge metric to expose count of running Dags (#52815)
- GitDagBundle (#59911)
- GitHook for Dag bundles (#58194)
- RemoteIO for ObjectStorage (#54813)
- --dev flag (#57741)
- auth list-envs command to list CLI environments and auth status (#61426)
- airflow info command output (#59124)
- db_clean to explicitly include or exclude Dags (#56663)
- TaskStreamFilter (#60549)
- globalCss in custom themes (#61161)
- run_after date filter on Dag runs page (#62797)
- AIRFLOW__API__THEME in addition to brand (#64232)
- non-sensitive-only value as True (#59880)
- InvalidStatsNameException for pool names with invalid characters by auto-normalizing them when emitting metrics (#59938)
- AIRFLOW__API__BASE_URL basename is configured (#63141)
- /mapped to group URLs (#63205)
- ti_skip_downstream overwriting RUNNING tasks to SKIPPED in HA deployments (#63266)
- @task decorator failing for tasks that return falsy values like 0 or empty string (#63788)
- LatestOnlyOperator not working when direct upstream of a dynamically mapped task (#62287)
- XCom return type in mapped task groups with dynamic mapping (#59104)
- DagRun span emission crash when context_carrier is None (#64087)
- next_dagrun fields are None (#63962)
- relativedelta (#61671)
- task_instance_mutation_hook receiving run_id=None during TaskInstance creation (#63049)
- None dag_version access (#62225)
- MetastoreBackend.expunge_all() corrupting shared session state (#63080)
- airflowignore negation pattern handling for directory-only patterns (#62860)
- TYPE_CHECKING-only forward references in TaskFlow decorators (#63053)
- structlog JSON serialization crash on non-serializable objects (#62656)
- queued_tasks type mismatch in hybrid executors (CeleryKubernetesExecutor, LocalKubernetesExecutor) (#63744)
- pathlib.Path objects incorrectly resolved by Jinja templater in Task SDK (#63306)
- make_partial_model for API Pydantic models (#63716)
- _execution_api_server_url() ignoring configured value and falling back to edge config (#63192)
- DetachedInstanceError for airflow tasks render command (#63916)
- @task definition causing Dag parsing errors (#62174)
- limit parameter not sent in execute_list server requests (#63048)
- airflow.configuration causing ImportError on Python 3.14 (#63787)
- map_index range validation in CLI commands (#62626)
- FabAuthManager race condition on startup with multiple workers (#62737)
- FabAuthManager race condition when workers concurrently create permissions, roles, and resources (#63842)
- JWTValidator not handling GUESS algorithm with JWKS (#63115)
- FabAuthManager first idle MySQL disconnect in token auth (#62919)
- JWTBearerTIPathDep import errors in Human-In-The-Loop routes (#63277)
- log_pos (#63531)
- null dag_run_conf causing serialization error in BackfillResponse (#63259)
- savepoints with per-Dag transactions (#63591)
- deadline.callback_id (#63612)
- interval causing query failures in deadline_alert (#63494)
- serialize_dag query failure during deadline migration (#63804)
- visibility_timeout that kills long-running Celery tasks (#62869)
- CronPartitionedTimetable (#62441)
- AssetModel when updating asset partition DagRun (adds mutex lock) (#59183)
- auth_manager load_user causing PendingRollbackError (#61943)
- joinedload for asset in dags_needing_dagruns() (#60957)
- NotMapped exception when clearing task instances with downstream/upstream (#58922)
- partition_key filter in PALK when creating DagRun (#61831)
- ObjectStoragePath to exclude conn_id from storage options passed to fsspec (#62701)
- XComObjectStorageBackend (#55805)
- default_email_on_failure/default_email_on_retry from config (#59912)
- TaskInstance.get_dagrun returning None in task_instance_mutation_hook (#60726)
- dag_display_name property bypass for DagStats query (#64256)
- TaskAlreadyRunningError not raised when starting an already-running task instance (#60855)
- enable_swagger_ui config not respected in API server (#64376)
- conf.has_option not respecting default provider metadata (#64209)
- TaskInstance crash when refreshing task weight for non-serialized operators (#64557)
- XCom edit modal value not repopulating on reopen (#62798)
- RenderedJsonField collapse behavior (#63831)
- RenderedJsonField not displaying in table cells (#63245)
- (-1) slots not rendering correctly (#62831)
- DurationChart labels and disable animation flicker during auto-refresh (#62835)
- /dagruns page not working (#62848)
- total_received count in partitioned Dag runs view (#62786)
- RenderedJsonField flickering when collapsed (#64261)
- TISummaries not refreshing when gridRuns are invalidated (#64113)
- DagRun window (#64179)
- api.page_size config in favor of api.fallback_page_limit (#61067)
- get_dag_runs API endpoint performance (#63940)
- filter_authorized_dag_ids (#63184)
- gc.freeze (#62212)
- get_task_instances endpoint (#62910)
- IN clause in asset queries with CTE and JOIN for better SQL performance (#62114)
- ConnectionResponse serializer safeguard to prevent accidental sensitive field exposure (#63883)
- dag_id filter on DagRun task instances API query (#62750)
- expose_stacktrace is disabled (#63028)
- update_mask fields in PATCH API endpoints against Pydantic models (#62657)
- order_by parameter to GET /permissions endpoint for pagination consistency (#63418)
- BaseXcom to airflow.sdk public exports (#63116)
- TaskSDK conf respect default config from provider metadata (#62696)
- airflow.sdk.observability.trace (#63554)
- external_executor_id on PostgreSQL (#63625)
- DagRun.created_at during migration for faster upgrades (#63825)
- ExecuteCallback by including dag_id and run_id (#62616)
- get_connection_form_widgets and get_ui_field_behaviour hook methods (#63711)
- [workers] config section (#63659)
- TaskInstance API for external task management (#61568)
- airflow.datasets, airflow.timetables.datasets, and airflow.utils.dag_parsing_context modules (#62927)
- PyOpenSSL from core dependencies (#63869)
- SerializedDAG (#56694)
- pop(0) to popleft() (#61376)
- .git folder from versions in GitDagBundle to reduce storage size (#57069)
- airflow.utils.process_utils (#57193)
- KubernetesPodOperator handling of deleted pods between polls (#56976)
- ToXXXMapper to StartOfXXXMapper in partition-mapper for clarity (#64160)
- partition_date (#62866)
- AIRFLOW__API__THEME config (#64232)
- DeadlineReferences (#57222)
- FilterBar with DateRangeFilter for compact UI (#56173)
- RedisTaskHandler configuration example (#63898)
- _shared folders (#63468)
- modules_management docs (#63634)
- max_active_tasks Dag parameter documentation (#63217)
- GitHook parameters (#63265)
- example_bash_decorator (#62948)
Published 20 hours ago
MinorContains breaking changes📦 PyPI: https://pypi.org/project/apache-airflow/3.2.0/ 📚 Docs: https://airflow.apache.org/docs/apache-airflow/3.2.0/ 🛠 Release Notes: https://airflow.apache.org/docs/apache-airflow/3.2.0/release_notes.html 🐳 Docker Image: "docker pull apache/airflow:3.2.0" 🚏 Constraints: https://github.com/apache/airflow/tree/constraints-3.2.0
The headline feature of Airflow 3.2.0 is asset partitioning — a major evolution of data-aware scheduling. Instead of triggering Dags based on an entire asset, you can now schedule downstream processing based on specific partitions of data. Only the relevant slice of data triggers downstream work, making pipeline orchestration far more efficient and precise.
This matters when working with partitioned data lakes — date-partitioned S3 paths, Hive table partitions, BigQuery table partitions, or any other partitioned data store. Previously, any update to an asset triggered all downstream Dags regardless of which partition changed. Now only the right work gets triggered at the right time.
For detailed usage instructions, see :doc:/authoring-and-scheduling/assets.
Airflow 3.2 introduces multi-team support, allowing organizations to run multiple isolated teams within a single Airflow deployment. Each team can have its own Dags, connections, variables, pools, and executors— enabling true resource and permission isolation without requiring separate Airflow instances per team.
This is particularly valuable for platform teams that serve multiple data engineering or data science teams from shared infrastructure, while maintaining strong boundaries between teams' resources and access.
For detailed usage instructions, see :doc:/core-concepts/multi-team.
.. warning::
Multi-Team Deployments are experimental in 3.2.0 and may change in future versions based on user feedback.
Deadline Alerts now support synchronous callbacks via SyncCallback in addition to the existing
asynchronous AsyncCallback. Synchronous callbacks are executed by the executor (rather than
the triggerer), and can optionally target a specific executor via the executor parameter.
A Dag can also define multiple Deadline Alerts by passing a list to the deadline parameter,
and each alert can use either callback type.
.. warning::
Deadline Alerts are experimental in 3.2.0 and may change in future versions based on
user feedback. Synchronous deadline callbacks (SyncCallback) do not currently
support Connections stored in the Airflow metadata database.
For detailed usage instructions, see :doc:/howto/deadline-alerts.
Grid View Virtualization: The Grid view now uses virtualization -- only visible rows are rendered to the DOM. This dramatically improves performance when viewing Dags with large numbers of task runs, reducing render time and memory usage for complex Dags. (#60241)
XCom Management in the UI: You can now add, edit, and delete XCom values directly from the Airflow UI. This makes it much easier to debug and manage XCom state during development and day-to-day operations without needing CLI commands. (#58921)
HITL Detail History: The Human-in-the-Loop approval interface now includes a full history view, letting operators and reviewers see the complete audit trail of approvals and rejections for any task. (#56760, #55952)
Gantt Chart Improvements:
--only-idle flag for the scheduler CLIThe airflow scheduler command has a new --only-idle flag that only counts runs when the
scheduler is idle. This helps users run the scheduler once and process all triggered Dags and
queued tasks. It requires and complements the --num-runs flag so one can set a small value
instead of guessing how many iterations the scheduler needs.
The grid, graph, gantt, and task-detail views now fetch task-instance
summaries through a single streaming HTTP request
(GET /ui/grid/ti_summaries/{dag_id}?run_ids=...) instead of one request
per run. The server emits one JSON line per run as soon as that run's task
instances are ready, so columns appear progressively rather than all at once.
What changed:
GET /ui/grid/ti_summaries/{dag_id}?run_ids=... is now the sole endpoint
for TI summaries, returning an application/x-ndjson stream where each
line is a serialized GridTISummaries object for one run.GET /ui/grid/ti_summaries/{dag_id}/{run_id}
has been removed.dag_version_id, avoiding redundant deserialization.run_ids.The new json_logs option under the [logging] section makes Airflow
produce all its output as newline-delimited JSON (structured logs) instead of
human-readable formatted logs. This covers the API server (gunicorn/uvicorn),
including access logs, warnings, and unhandled exceptions.
Not all components support this yet — notably airflow celery worker but
any non-JSON output when json_logs is enabled will be treated as a bug. (#63365)
The interfaces and functions located in airflow.traces were
internal code that provided a standard way to manage spans in
internal Airflow code. They were not intended as user-facing code
and were never documented. They are no longer needed so we
remove them in 3.2. (#63452)
Airflow now sources task-facing exceptions (AirflowSkipException, TaskDeferred, etc.) from
airflow.sdk.exceptions. airflow.exceptions still exposes the same exceptions, but they are
proxies that emit DeprecatedImportWarning so Dag authors can migrate before the shim is removed.
What changed:
airflow-core at runtime.airflow.providers.common.compat.sdk centralizes compatibility imports for providers.Behaviour changes:
ValueError (instead of
AirflowException) when poke_interval/ timeout arguments are invalid.airflow.exceptions logs a warning directing users to
the SDK import path.Exceptions now provided by airflow.sdk.exceptions:
AirflowException and AirflowNotFoundExceptionAirflowRescheduleException and AirflowSensorTimeoutAirflowSkipException, AirflowFailException, AirflowTaskTimeout, AirflowTaskTerminatedTaskDeferred, TaskDeferralTimeout, TaskDeferralErrorDagRunTriggerException and DownstreamTasksSkippedAirflowDagCycleException and AirflowInactiveAssetInInletOrOutletExceptionParamValidationError, DuplicateTaskIdFound, TaskAlreadyInTaskGroup, TaskNotFound, XComNotFoundAirflowOptionalProviderFeatureExceptionBackward compatibility:
airflow.exceptions continue to work, though
they log warnings.airflow.providers.common.compat.sdk to keep one import path that works
across supported Airflow versions.Migration:
airflow.sdk.exceptions (or from the provider compat shim).ValueError for invalid sensor arguments if it
previously caught AirflowException.The retry_exponential_backoff parameter now accepts numeric values to specify custom exponential backoff multipliers for task retries. Previously, this parameter only accepted boolean values (True or False), with True using a hardcoded multiplier of 2.0.
New behavior:
2.0, 3.5) directly specify the exponential backoff multiplierretry_exponential_backoff=2.0 doubles the delay between each retry attemptretry_exponential_backoff=0 or False disables exponential backoff (uses fixed retry_delay)Backwards compatibility:
Existing Dags using boolean values continue to work:
retry_exponential_backoff=True → converted to 2.0 (maintains original behavior)retry_exponential_backoff=False → converted to 0.0 (no exponential backoff)API changes:
The REST API schema for retry_exponential_backoff has changed from type: boolean to type: number. API clients must use numeric values (boolean values will be rejected).
Migration:
While boolean values in Python Dags are automatically converted for backwards compatibility, we recommend updating to explicit numeric values for clarity:
retry_exponential_backoff=True → retry_exponential_backoff=2.0retry_exponential_backoff=False → retry_exponential_backoff=0Airflow now sources serde logic from airflow.sdk.serde instead of
airflow.serialization.serde. Serializer modules have moved from airflow.serialization.serializers.*
to airflow.sdk.serde.serializers.*. The old import paths still work but emit DeprecatedImportWarning
to guide migration. The backward compatibility layer will be removed in Airflow 4.
What changed:
airflow-core to task-sdk packageairflow.serialization.serializers.* to airflow.sdk.serde.serializers.*airflow.sdk.serde.serializers.* namespaceCode interface changes:
airflow.sdk.serde.serializers.* instead of airflow.serialization.serializers.*airflow.sdk.serde instead of airflow.serialization.serdeBackward compatibility:
airflow.serialization.serializers.* continue to work with deprecation warningsMigration:
airflow.sdk.serde.serializers.*airflow.sdk.serde.serializers.* namespace (e.g., create task-sdk/src/airflow/sdk/serde/serializers/your_serializer.py)On (experimental) class PriorityWeightStrategy, functions serialize()
and deserialize() were never used anywhere, and have been removed. They
should not be relied on in user code. (#59780)
On class TaskInstance, functions run(), render_templates(),
get_template_context(), and private members related to them have been
removed. The class has been considered internal since 3.0, and should not be
relied on in user code. (#59780, #59835)
DagBagNew behavior:
DagBag now uses Path.relative_to for consistent cross-platform behavior.FileLoadStat now has two additional nullable fields: bundle_path and bundle_name.Backward compatibility:
FileLoadStat will no longer produce paths beginning with / with the meaning of "relative to the dags folder".
This is a breaking change for any custom code that performs string-based path manipulations relying on this behavior.
Users are advised to update such code to use pathlib.Path. (#59785)
--conn-id option from airflow connections listThe redundant --conn-id option has been removed from the airflow connections list CLI command.
Use airflow connections get instead. (#59855)
render_template_as_native_obj overrideOperators can now override the Dag-level render_template_as_native_obj setting,
enabling fine-grained control over whether templates are rendered as native Python
types or strings on a per-task basis. Set render_template_as_native_obj=True or
False on any operator to override the Dag setting, or leave as None (default)
to inherit from the Dag.
The API server now supports gunicorn as an alternative server with rolling worker restarts to prevent memory accumulation in long-running processes.
Key Benefits:
Rolling worker restarts: New workers spawn and pass health checks before old workers are killed, ensuring zero downtime during worker recycling.
Memory sharing: Gunicorn uses preload + fork, so workers share memory via copy-on-write. This significantly reduces total memory usage compared to uvicorn's multiprocess mode where each worker loads everything independently.
Correct FIFO signal handling: Gunicorn's SIGTTOU kills the oldest worker (FIFO), not the newest (LIFO), which is correct for rolling restarts.
Configuration:
.. code-block:: ini
[api]
# Use gunicorn instead of uvicorn
server_type = gunicorn
# Enable rolling worker restarts every 12 hours
worker_refresh_interval = 43200
# Restart workers one at a time
worker_refresh_batch_size = 1
Or via environment variables:
.. code-block:: bash
export AIRFLOW__API__SERVER_TYPE=gunicorn
export AIRFLOW__API__WORKER_REFRESH_INTERVAL=43200
Requirements:
Install the gunicorn extra: pip install 'apache-airflow-core[gunicorn]'
Note on uvicorn (default):
The default uvicorn mode does not support rolling worker restarts.
If you need worker recycling or a memory-efficient multi-worker deployment, use gunicorn. (#60921)
The config option max_num_rendered_ti_fields_per_task has been renamed to num_dag_runs_to_retain_rendered_fields
(the old name still works but emits a deprecation warning).
Retention is now based on the N most recent Dag runs rather than the N most recent task executions, which may result in fewer records being retained for conditional or sparse tasks. (#60951)
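The behavioral difference matters for tasks that do not execute in every Dag run; a sketch of the two retention rules (the data and names are illustrative):

```python
# Each tuple is (run_id, task_ran). A conditional task ran only in runs 1 and 2.
history = [(1, True), (2, True), (3, False), (4, False), (5, False)]
N = 3

# Old rule: keep the task's N most recent executions -> both records survive.
executions = [run for run, ran in history if ran]
kept_old = executions[-N:]

# New rule: keep rendered fields only for the N most recent Dag runs
# (runs 3-5), in which this task never executed -> nothing survives.
recent_runs = {run for run, _ in history[-N:]}
kept_new = [run for run in executions if run in recent_runs]

assert kept_old == [1, 2]
assert kept_new == []
```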
Backfill permissions now checked via requires_access_dag on DagAccessEntity.Run

is_authorized_backfill has been removed from the BaseAuthManager interface. Core will no longer call this method, and its
provider counterpart implementations will be marked as deprecated.
Permissions for backfill operations are now checked against the DagAccessEntity.Run permission using the existing
requires_access_dag decorator. In other words, if a user has permission to run a Dag, they can perform backfill operations on it.
Please update your security policies to ensure that users who need to perform backfill operations have the appropriate
DagAccessEntity.Run permissions. (Users holding the Backfill permission without the DagRun one will no longer be able to perform backfill operations without such an update.)
Airflow 3.2.0 adds support for Python 3.14. (#63787)
SerializedDAG loads on task start

The API server no longer loads the full SerializedDAG when starting tasks,
significantly reducing memory usage. (#60803)
MySQL client support has been removed from official Airflow container images. MySQL users building on official images must install the client themselves. (#57146)
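MySQL users can restore the client in a derived image; a sketch following the documented pattern of extending the official image (the package name follows current Debian naming and may differ by base image):

```dockerfile
FROM apache/airflow:3.2.0

# The MySQL client was dropped from the official image; install it here.
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends default-mysql-client \
    && rm -rf /var/lib/apt/lists/*
USER airflow
```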
The PythonOperator parameter python_callable now also supports async callables in Airflow 3.2,
allowing users to run async def functions without manually managing an event loop. (#60268)
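Conceptually, Airflow now drives such a callable on an event loop for you; the snippet below shows the kind of async def function that can be passed directly (the operator usage in the comment and the loop handling are illustrative assumptions, not the exact internals):

```python
import asyncio

async def fetch_report() -> int:
    # Stand-in for awaiting an async HTTP client or database driver.
    await asyncio.sleep(0)
    return 42

# In a Dag file you would now pass the coroutine function directly, e.g.:
#   PythonOperator(task_id="fetch", python_callable=fetch_report)
# Airflow 3.2 runs it on an event loop for you, roughly equivalent to:
result = asyncio.run(fetch_report())
assert result == 42
```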
The schedule="@continuous" parameter now works without requiring a start_date, and any Dags with this schedule will begin running immediately when unpaused. (#61405)
- PYTHON_LTO build argument (#58337)
- --queues CLI option for the trigger command (#59239)
- --show-values and --hide-sensitive flags to CLI connections list and variables list to hide sensitive values by default (#62344)
- AIRFLOW__SECRETS__BACKEND_KWARG__<KEY> environment variables (#63312)
- only_new parameter to Dag clear to only clear newly added task instances (#59764)
- log_timestamp_format config option for customizing component log timestamps (#63321)
- --action-on-existing-key option to pools import and connections import CLI commands (#62702)
- --use-migration-files flag for airflow db init (#62234)
- AllowedKeyMapper for partition key validation in asset partitioning (#61931)
- ChainMapper for chaining multiple partition mappers (#64094)
- AgenticOperator (#63081)
- @task.stub decorator to allow tasks in other languages to be defined in Dags (#56055)
- TriggerDagRunOperator (#60810)
- allowed_run_types to whitelist specific Dag run types (#61833)
- dag_id and dag_run_id in bulk task instance endpoint (#57441)
- operator_name_pattern, pool_pattern, queue_pattern as task instance search filters (#57571)
- update_mask support for bulk PATCH APIs (#54597)
- source parameter to Param (#58615)
- TaskInstance on RuntimeTaskInstance (#59712)
- max_trigger_to_select_per_loop config for Triggerer HA setup (#58803)
- uvicorn_logging_level config option to control API server access logs (#56062)
- executor.running_dags gauge metric to expose count of running Dags (#52815)
- GitDagBundle (#59911)
- GitHook for Dag bundles (#58194)
- RemoteIO for ObjectStorage (#54813)
- --dev flag (#57741)
- auth list-envs command to list CLI environments and auth status (#61426)
- airflow info command output (#59124)
- db_clean to explicitly include or exclude Dags (#56663)
- TaskStreamFilter (#60549)
- globalCss in custom themes (#61161)
- run_after date filter on Dag runs page (#62797)
- AIRFLOW__API__THEME in addition to brand (#64232)
- non-sensitive-only value as True (#59880)
- InvalidStatsNameException for pool names with invalid characters by auto-normalizing them when emitting metrics (#59938)
- AIRFLOW__API__BASE_URL basename is configured (#63141)
- /mapped to group URLs (#63205)
- ti_skip_downstream overwriting RUNNING tasks to SKIPPED in HA deployments (#63266)
- @task decorator failing for tasks that return falsy values like 0 or empty string (#63788)
- LatestOnlyOperator not working when direct upstream of a dynamically mapped task (#62287)
- XCom return type in mapped task groups with dynamic mapping (#59104)
- DagRun span emission crash when context_carrier is None (#64087)
- next_dagrun fields are None (#63962)
- relativedelta (#61671)
- task_instance_mutation_hook receiving run_id=None during TaskInstance creation (#63049)
- None dag_version access (#62225)
- MetastoreBackend.expunge_all() corrupting shared session state (#63080)
- airflowignore negation pattern handling for directory-only patterns (#62860)
- TYPE_CHECKING-only forward references in TaskFlow decorators (#63053)
- structlog JSON serialization crash on non-serializable objects (#62656)
- queued_tasks type mismatch in hybrid executors (CeleryKubernetesExecutor, LocalKubernetesExecutor) (#63744)
- pathlib.Path objects incorrectly resolved by Jinja templater in Task SDK (#63306)
- make_partial_model for API Pydantic models (#63716)
- _execution_api_server_url() ignoring configured value and falling back to edge config (#63192)
- DetachedInstanceError for airflow tasks render command (#63916)
- @task definition causing Dag parsing errors (#62174)
- limit parameter not sent in execute_list server requests (#63048)
- airflow.configuration causing ImportError on Python 3.14 (#63787)
- map_index range validation in CLI commands (#62626)
- FabAuthManager race condition on startup with multiple workers (#62737)
- FabAuthManager race condition when workers concurrently create permissions, roles, and resources (#63842)
- JWTValidator not handling GUESS algorithm with JWKS (#63115)
- FabAuthManager first idle MySQL disconnect in token auth (#62919)
- JWTBearerTIPathDep import errors in Human-In-The-Loop routes (#63277)
- log_pos (#63531)
- null dag_run_conf causing serialization error in BackfillResponse (#63259)
- savepoints with per-Dag transactions (#63591)
- deadline.callback_id (#63612)
- interval causing query failures in deadline_alert (#63494)
- serialize_dag query failure during deadline migration (#63804)
- visibility_timeout that kills long-running Celery tasks (#62869)
- CronPartitionedTimetable (#62441)
- AssetModel when updating asset partition DagRun (adds mutex lock) (#59183)
- auth_manager load_user causing PendingRollbackError (#61943)
- joinedload for asset in dags_needing_dagruns() (#60957)
- NotMapped exception when clearing task instances with downstream/upstream (#58922)
- partition_key filter in PALK when creating DagRun (#61831)
- ObjectStoragePath to exclude conn_id from storage options passed to fsspec (#62701)
- XComObjectStorageBackend (#55805)
- default_email_on_failure/default_email_on_retry from config (#59912)
- TaskInstance.get_dagrun returning None in task_instance_mutation_hook (#60726)
- dag_display_name property bypass for DagStats query (#64256)
- TaskAlreadyRunningError not raised when starting an already-running task instance (#60855)
- enable_swagger_ui config not respected in API server (#64376)
- conf.has_option not respecting default provider metadata (#64209)
- TaskInstance crash when refreshing task weight for non-serialized operators (#64557)
- XCom edit modal value not repopulating on reopen (#62798)
- RenderedJsonField collapse behavior (#63831)
- RenderedJsonField not displaying in table cells (#63245)
- (-1) slots not rendering correctly (#62831)
- DurationChart labels and disable animation flicker during auto-refresh (#62835)
- /dagruns page not working (#62848)
- total_received count in partitioned Dag runs view (#62786)
- RenderedJsonField flickering when collapsed (#64261)
- TISummaries not refreshing when gridRuns are invalidated (#64113)
- DagRun window (#64179)
- api.page_size config in favor of api.fallback_page_limit (#61067)
- get_dag_runs API endpoint performance (#63940)
- filter_authorized_dag_ids (#63184)
- gc.freeze (#62212)
- get_task_instances endpoint (#62910)
- IN clause in asset queries with CTE and JOIN for better SQL performance (#62114)
- ConnectionResponse serializer safeguard to prevent accidental sensitive field exposure (#63883)
- dag_id filter on DagRun task instances API query (#62750)
- expose_stacktrace is disabled (#63028)
- update_mask fields in PATCH API endpoints against Pydantic models (#62657)
- order_by parameter to GET /permissions endpoint for pagination consistency (#63418)
- BaseXcom to airflow.sdk public exports (#63116)
- TaskSDK conf respect default config from provider metadata (#62696)
- airflow.sdk.observability.trace (#63554)
- external_executor_id on PostgreSQL (#63625)
- DagRun.created_at during migration for faster upgrades (#63825)
- ExecuteCallback by including dag_id and run_id (#62616)
- get_connection_form_widgets and get_ui_field_behaviour hook methods (#63711)
- [workers] config section (#63659)
- TaskInstance API for external task management (#61568)
- airflow.datasets, airflow.timetables.datasets, and airflow.utils.dag_parsing_context modules (#62927)
- PyOpenSSL from core dependencies (#63869)
- SerializedDAG (#56694)
- pop(0) to popleft() (#61376)
- .git folder from versions in GitDagBundle to reduce storage size (#57069)
- airflow.utils.process_utils (#57193)
- KubernetesPodOperator handling of deleted pods between polls (#56976)
- ToXXXMapper to StartOfXXXMapper in partition-mapper for clarity (#64160)
- partition_date (#62866)
- AIRFLOW__API__THEME config (#64232)
- DeadlineReferences (#57222)
- FilterBar with DateRangeFilter for compact UI (#56173)
- RedisTaskHandler configuration example (#63898)
- _shared folders (#63468)
- modules_management docs (#63634)
- max_active_tasks Dag parameter documentation (#63217)
- GitHook parameters (#63265)
- example_bash_decorator (#62948)