
Terraform Stacks on Azure: Is it ready to replace Workspaces?


HashiCorp released Terraform Stacks into general availability at HashiConf 2025 after a year in public beta. Stacks introduces a new way to organise and deploy Terraform configurations, with a particular focus on managing the same infrastructure across multiple environments.

In this post, I'll cover what Stacks is and how it compares to the workspace pattern most Azure teams use today, walk through a basic two-environment deployment, and share my thoughts on whether it is ready to replace workspaces.

What is a Stack?

A Stack is a configuration layer that sits above your Terraform modules. Instead of defining a root module and running it once per workspace, you define two new concepts:

Components - references to your Terraform modules with their inputs and providers configured. A component is the "what to deploy."

Deployments - concrete instances of the whole set of components. A deployment is the "where to deploy it." Dev, staging, and prd each become a deployment.

The configuration across deployments stays identical. Only the input values change. HCP Terraform then plans and applies each deployment, understands the dependency graph between components, and can orchestrate rollouts across deployments.

Stacks uses two file extensions. Component configuration lives in *.tfcomponent.hcl files, and deployment configuration lives in *.tfdeploy.hcl files. If you followed beta tutorials, note that .tfstack.hcl was renamed to .tfcomponent.hcl at GA.

How Stacks compares to Workspaces on Azure

The workspace-per-environment pattern has served us well, but it has some common pain points on Azure that Stacks addresses directly.

Multi-subscription deployments - Azure landing zones typically place development, staging, and production in different subscriptions. With workspaces, each one needs its own provider configuration, service principal or federated credential, and variable set. With Stacks, a single identity_token block per deployment handles OIDC federation, and the component configuration stays the same across environments.

Cross-workspace dependencies - If your networking is in one workspace and your AKS cluster is in another, you likely use terraform_remote_state or a data source lookup to wire them together. In a Stack, you pass outputs between components natively, and Terraform builds the dependency graph for you.
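For contrast, the workspace-era wiring looks something like this (the organisation, workspace, and output names here are illustrative):

```hcl
# In the AKS workspace: read the networking workspace's state from HCP Terraform.
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "my-org"
    workspaces = {
      name = "networking-prd"
    }
  }
}

module "aks" {
  source    = "./modules/aks"
  subnet_id = data.terraform_remote_state.network.outputs.subnet_id
}
```

In a Stack, that whole lookup collapses into a direct reference to the networking component's output.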

Ordered rollouts - Rolling a change from dev to prd across multiple workspaces usually means multiple pipeline stages. Stacks has deployment groups and auto-approval checks built into the language, so you describe the rollout once in your deployment configuration and HCP Terraform handles it.

Prerequisites

  • An HCP Terraform organisation on a plan that supports Stacks (legacy Team plans are not supported)

  • Two Azure subscriptions, or a single subscription scoped to different resource groups

  • An Entra ID app registration per environment, with federated credentials trusting HCP Terraform's OIDC issuer

  • A VCS repository connected to HCP Terraform (GitHub, GitLab, Azure DevOps, and Bitbucket are all supported)

Walkthrough: a two-environment Azure Stack

For this walkthrough, I will deploy a resource group and a storage account to dev and prd Azure subscriptions, authenticate with OIDC, auto-approve dev, and require manual approval for prd.

Repository layout

my-azure-stack/
├── components.tfcomponent.hcl
├── providers.tfcomponent.hcl
├── variables.tfcomponent.hcl
├── deployments.tfdeploy.hcl
└── modules/
    ├── resource_group/
    └── storage/

The modules folder contains standard Terraform modules. Nothing in them is Stacks-specific, which means your existing modules will continue to work as components.
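As a sketch, the resource_group module could be as simple as this (the variable and output names are chosen to match the component wiring later in the post):

```hcl
# modules/resource_group/main.tf -- a plain Terraform module, nothing Stacks-specific.
variable "name" {
  type = string
}

variable "location" {
  type = string
}

resource "azurerm_resource_group" "this" {
  name     = var.name
  location = var.location
}

# Exposed so other components can consume it as component.resource_group.name.
output "name" {
  value = azurerm_resource_group.this.name
}
```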

Define the variables

Create variables.tfcomponent.hcl to declare the variables your deployments will pass in:

variable "location" {
  type = string
}

variable "environment" {
  type = string
}

variable "subscription_id" {
  type = string
}

variable "client_id" {
  type = string
}

variable "tenant_id" {
  type = string
}

Configure the provider

Create providers.tfcomponent.hcl. The identity_token is passed in from the deployment, which means no credentials are hardcoded:

required_providers {
  azurerm = {
    source  = "hashicorp/azurerm"
    version = "~> 4.0"
  }
}

provider "azurerm" "this" {
  config {
    features {}
    subscription_id = var.subscription_id
    client_id       = var.client_id
    tenant_id       = var.tenant_id
    use_oidc        = true
    oidc_token      = var.identity_token
  }
}

variable "identity_token" {
  type      = string
  ephemeral = true
}

Define the components

Create components.tfcomponent.hcl to wire your modules in as components:

component "resource_group" {
  source = "./modules/resource_group"

  inputs = {
    name     = "rg-app-${var.environment}"
    location = var.location
  }

  providers = {
    azurerm = provider.azurerm.this
  }
}

component "storage" {
  source = "./modules/storage"

  inputs = {
    resource_group_name = component.resource_group.name
    location            = var.location
    environment         = var.environment
  }

  providers = {
    azurerm = provider.azurerm.this
  }
}

The component.resource_group.name reference is the native way to pass outputs between components. This replaces the terraform_remote_state pattern you would use between workspaces.
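On the receiving side, the storage module just declares ordinary variables; it has no idea its resource_group_name input arrives from another component. A minimal sketch (the storage account naming scheme is illustrative, and the name must be globally unique):

```hcl
# modules/storage/main.tf -- component inputs arrive as ordinary module variables.
variable "resource_group_name" {
  type = string
}

variable "location" {
  type = string
}

variable "environment" {
  type = string
}

resource "azurerm_storage_account" "this" {
  # Storage account names: 3-24 lowercase letters and digits, globally unique.
  name                     = "stapp${var.environment}"
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```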

Define the deployments

Create deployments.tfdeploy.hcl. This is where the multi-environment setup lives:

identity_token "azurerm" {
  audience = ["api://AzureADTokenExchange"]
}

deployment "dev" {
  inputs = {
    location        = "uksouth"
    environment     = "dev"
    subscription_id = ***
    client_id       = ***
    tenant_id       = ***
    identity_token  = identity_token.azurerm.jwt
  }

  deployment_group = deployment_group.fast_lane
}

deployment "prd" {
  inputs = {
    location        = "uksouth"
    environment     = "prd"
    subscription_id = ***
    client_id       = ***
    tenant_id       = ***
    identity_token  = identity_token.azurerm.jwt
  }
}

deployment_group "fast_lane" {
  auto_approve_checks = [
    deployment_auto_approve.no_deletes,
  ]
}

deployment_auto_approve "no_deletes" {
  check {
    condition = context.plan.changes.remove == 0
    reason    = "Auto-approval requires zero resource deletions."
  }
}

The deployment_group and auto_approve_checks syntax is the GA replacement for the old orchestrate "auto_approve" block from the beta. If you are following older tutorials, you will see the deprecated version. The dev deployment uses the fast_lane group, which auto-approves any plan that is not deleting resources. The prd deployment has no group assigned, so it requires manual approval by default.
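For reference, the beta-era block looked roughly like this (recalled from the beta tutorials, so treat the exact shape as indicative rather than authoritative):

```hcl
# Deprecated beta syntax -- replaced at GA by deployment_group and deployment_auto_approve.
orchestrate "auto_approve" "no_deletes" {
  check {
    condition = context.plan.changes.remove == 0
    reason    = "Plan is removing resources."
  }
}
```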

Connect it to HCP Terraform

In HCP Terraform, create a new Stack, connect it to your VCS repository, and point it at your branch. HCP Terraform will pick up the two deployments automatically. On every commit, you will see two plans side by side. Dev applies itself when the auto-approval condition passes. Prd waits for a human to approve.

To tear a deployment down, add destroy = true to its deployment block and commit. The destroy option was removed from the UI at GA, so destruction is now declarative.
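Tearing down dev, for example, is a one-line change to its existing deployment block:

```hcl
deployment "dev" {
  inputs = {
    # ...inputs unchanged...
  }

  # Committing this flag queues a destroy plan for the deployment.
  destroy = true
}
```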

Things to be aware of

Stacks has a hard dependency on HCP Terraform. You cannot run it against a local backend or an Azure Storage backend. Terraform Community Edition has only minimal local tooling for validation, and self-hosted Terraform Enterprise support was expected after GA but should be verified against your current version.

Each Stack supports up to 10,000 resources, can link to 20 upstream Stacks, and can expose values to 25 downstream Stacks.

Billing also changed at GA. During beta, Stacks resources did not count toward Resources Under Management. They now aggregate with your workspace RUM for billing.

My thoughts on Terraform Stacks

I believe Stacks is a better default than workspaces for new multi-environment Azure deployments on HCP Terraform. You get one source of truth for your component graph, native cross-component dependencies, OIDC per deployment without duplicating provider code, and declarative rollout orchestration in the language itself.

For existing workspace setups, I would be more cautious. HashiCorp released a terraform migrate tool in public beta in late 2025 to help move workspaces to Stacks, but I would wait for GA on the migration tool, or migrate a non-critical workload first to learn the failure modes before committing a whole estate.

If you are not on HCP Terraform, this is not a question you need to answer yet. Workspaces are not going anywhere.

Are you planning to move to Stacks, or staying on workspaces for now? Let me know in the comments.
