Terraform Plan as a Pull Request Comment in GitHub Actions
If you have ever sat through a Terraform pull request review where the reviewer simply trusted that the author had run plan locally and got the right output, you already know why this post exists. The whole point of a code review for infrastructure is to catch the thing that is about to be destroyed before it is destroyed, and you cannot do that without seeing the plan.
The fix is not complicated. We can have GitHub Actions run terraform plan on every pull request and post the output back into the PR as a comment, so the reviewer sees exactly what is going to change before they hit merge. In this post, we will walk through setting that up end to end, including the federated authentication to Azure, the workflow itself, and a few of the niceties that make the comment actually pleasant to read.
What you need
To follow this post, you will need:
- An Azure subscription
- A GitHub repository containing some Terraform
- An Azure Storage Account for remote state (or another backend you trust)
- Permissions in the Azure tenant to create an App Registration and federated credentials
We will be authenticating from GitHub to Azure using OIDC, so there are no long-lived secrets stored in the repository. If you are still using a service principal client secret in a GitHub Actions secret, this post is also a good prompt to migrate.
Setting up Azure authentication with OIDC
Before we get to the workflow, we need GitHub Actions to be able to talk to Azure. The cleanest way to do that is to use workload identity federation, which lets GitHub mint a short-lived token that Azure trusts, with no client secret involved.
I have already covered the full setup for this in a previous post, Using OpenID Connect to access Azure from GitHub, so I will not repeat the App Registration and federated credential walkthrough here. If you have not done this yet, work through that post first and then come back.
There is one thing worth flagging that the original post does not really lean on, because it caught me out the first time I built this workflow. The federated credential's subject identifier has to match the exact scenario the workflow is running under. A pull request run uses repo:<org>/<repo>:pull_request, and a push to main uses repo:<org>/<repo>:ref:refs/heads/main. They are two different subjects, so they need two separate federated credentials on the same App Registration. If you only set up the branch one and then wonder why your PR plan workflow keeps failing at the login step, this is almost always why.
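As a sketch of what that looks like with the Azure CLI, the two credentials can be created against the same App Registration like this. The credential names, the org and repo, and the `<app-object-id>` placeholder are all illustrative; substitute your own values:

```shell
# Sketch only: <app-object-id> is the App Registration's object id,
# and my-org/my-repo is a placeholder repository.
az ad app federated-credential create --id <app-object-id> --parameters '{
  "name": "github-pr",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:my-org/my-repo:pull_request",
  "audiences": ["api://AzureADTokenExchange"]
}'

az ad app federated-credential create --id <app-object-id> --parameters '{
  "name": "github-main",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:my-org/my-repo:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}'
```

The only difference between the two is the `subject`, which is exactly the distinction that trips people up.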
Make a note of the App Registration's Client ID, the Tenant ID, and the Subscription ID. We will plug those into the workflow as repository variables, not secrets, because none of them are sensitive on their own.
The Terraform backend
For this to work in CI, the state has to live somewhere both the PR job and the apply job can reach. A local backend is fine for tinkering, but the moment you have more than one person or more than one pipeline, you need remote state.
Here is a minimal backend.tf using an Azure Storage Account:
```hcl
terraform {
  required_version = ">= 1.6.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "rg-tfstate-prod"
    storage_account_name = "sttfstateprod001"
    container_name       = "tfstate"
    key                  = "platform.tfstate"
    use_oidc             = true
  }
}

provider "azurerm" {
  features {}
  use_oidc = true
}
```
The use_oidc = true on both the backend and the provider is what tells the AzureRM provider to use the federated token from GitHub Actions instead of looking for a client secret. Miss this off and the workflow will fall back to interactive auth, which obviously will not work in CI.
The workflow
Now for the interesting part. Create a file at .github/workflows/terraform-plan.yml and drop in the following:
```yaml
name: Terraform Plan

on:
  pull_request:
    branches:
      - main
    paths:
      - '**.tf'
      - '.github/workflows/terraform-plan.yml'

permissions:
  id-token: write
  contents: read
  pull-requests: write

jobs:
  plan:
    name: Plan
    runs-on: ubuntu-latest
    env:
      ARM_CLIENT_ID: ${{ vars.AZURE_CLIENT_ID }}
      ARM_SUBSCRIPTION_ID: ${{ vars.AZURE_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ vars.AZURE_TENANT_ID }}
      ARM_USE_OIDC: true
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure login
        uses: azure/login@v2
        with:
          client-id: ${{ vars.AZURE_CLIENT_ID }}
          tenant-id: ${{ vars.AZURE_TENANT_ID }}
          subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.9.5

      - name: Terraform fmt
        id: fmt
        run: terraform fmt -check -recursive
        continue-on-error: true

      - name: Terraform init
        id: init
        run: terraform init

      - name: Terraform validate
        id: validate
        run: terraform validate -no-color

      - name: Terraform plan
        id: plan
        run: |
          terraform plan -no-color -out=tfplan 2>&1 | tee plan_output.txt
          echo "exitcode=${PIPESTATUS[0]}" >> "$GITHUB_OUTPUT"
        continue-on-error: true
```
A few of these lines are worth explaining.
The permissions block
- `id-token: write` - Required for the OIDC token exchange. Without this, `azure/login` cannot get a token to hand to Azure
- `contents: read` - The default needed for the checkout
- `pull-requests: write` - Required to post the comment back to the PR. This one is easy to forget, and the failure mode is a 403 right at the end of the workflow
continue-on-error on plan
This looks counter-intuitive but it is deliberate. If terraform plan fails, we still want the workflow to carry on and post the failure into the PR comment, otherwise the reviewer has to dig into the Actions logs to find out what went wrong. We capture the exit code, post the comment, and then fail the job at the very end if the plan was not successful.
Capturing both stdout and the exit code
The 2>&1 | tee plan_output.txt pattern captures the plan output to a file we can read in a later step, while ${PIPESTATUS[0]} grabs the actual exit code from terraform plan rather than from tee, which will always be zero. This is a common bash pitfall and worth being explicit about.
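You can see the pitfall in plain bash, no Terraform required. Here `false` stands in for a failing `terraform plan`:

```shell
#!/usr/bin/env bash
# "false" stands in for a failing terraform plan.
false | tee /dev/null
echo "exit code of the pipeline: $?"          # prints 0 - that is tee's exit code

false | tee /dev/null
rc=${PIPESTATUS[0]}                           # capture immediately; the next pipeline resets it
echo "exit code of the first command: ${rc}"  # prints 1 - the failure we actually care about
```

Also note that `PIPESTATUS` has to be read straight after the pipeline, because any subsequent command overwrites it.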
Posting the plan as a comment
With the plan output saved to a file, we now need a step that reads it and posts it to the PR. Before we get to the script, there is one constraint that shapes the whole approach: GitHub has a hard limit of 65,536 characters per issue comment. Go over it and the API call fails outright.
That sounds like a lot until you actually look at a real plan. A medium-sized change to a single module can produce a plan in the tens of thousands of characters, and once you are touching anything with a lot of nested blocks, like a Front Door or an API Management instance, you will hit the limit easily. Truncating the plan and pointing the reviewer at the workflow logs is one option, but it is a bad option, because the most interesting line in the plan, the destroy at the very bottom, is exactly the bit that gets cut off.
A much better pattern is to extract a summary of what is changing, post that at the top of the comment, and put the full plan inside a collapsed <details> block underneath. The reviewer always sees the headlines, and the full plan is one click away if they want to read it.
We need a small step before the comment to extract the summary from the plan file:
```yaml
- name: Build plan summary
  id: summary
  if: always()
  run: |
    summary_line=$(grep -E '^Plan: ' plan_output.txt || echo "No changes")
    changes=$(grep -E '^  # ' plan_output.txt | sed 's/^  # //' || true)
    {
      echo "summary_line=${summary_line}"
      echo "changes<<EOF"
      echo "${changes}"
      echo "EOF"
    } >> "$GITHUB_OUTPUT"
```
Terraform helpfully prints a single line at the bottom of the plan in the form `Plan: 1 to add, 2 to change, 1 to destroy.`, and every resource being acted on is announced earlier in the plan with an indented line that starts with `#`, followed by the action. Pulling those two pieces out gives us everything we need for the summary.
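If you want to sanity-check the extraction locally, here is the same grep and sed run against a fabricated fragment of plan output (the resource name is made up; real input comes from `terraform plan -no-color`):

```shell
#!/usr/bin/env bash
# Fabricated plan fragment standing in for real terraform output.
cat > plan_output.txt <<'EOF'
Terraform will perform the following actions:

  # azurerm_storage_account.logs will be created
  + resource "azurerm_storage_account" "logs" {
    }

Plan: 1 to add, 0 to change, 0 to destroy.
EOF

grep -E '^Plan: ' plan_output.txt
# Plan: 1 to add, 0 to change, 0 to destroy.

grep -E '^  # ' plan_output.txt | sed 's/^  # //'
# azurerm_storage_account.logs will be created
```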
Now the comment step itself, which uses actions/github-script to give us a Node.js environment with the GitHub API client already wired up:
```yaml
- name: Post plan to PR
  uses: actions/github-script@v7
  if: github.event_name == 'pull_request'
  env:
    PLAN_EXIT: ${{ steps.plan.outputs.exitcode }}
    SUMMARY_LINE: ${{ steps.summary.outputs.summary_line }}
    CHANGES: ${{ steps.summary.outputs.changes }}
  with:
    script: |
      const fs = require('fs');
      const planOutput = fs.readFileSync('plan_output.txt', 'utf8');

      const maxPlanLength = 55000;
      const truncated = planOutput.length > maxPlanLength
        ? planOutput.slice(0, maxPlanLength) + '\n\n... (truncated, see workflow logs for full output)'
        : planOutput;

      const status = process.env.PLAN_EXIT === '0' ? '✅ Success' : '❌ Failed';
      const summaryLine = process.env.SUMMARY_LINE || 'No changes';
      const changes = process.env.CHANGES || '_No resource changes detected._';

      const body = `### Terraform Plan ${status}

      **${summaryLine}**

      #### Resources
      \`\`\`
      ${changes}
      \`\`\`

      <details><summary>Show full plan</summary>

      \`\`\`hcl
      ${truncated}
      \`\`\`

      </details>

      *Pushed by: @${{ github.actor }}, Workflow: \`${{ github.workflow }}\`*`;

      const { data: comments } = await github.rest.issues.listComments({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
      });

      const existing = comments.find(c =>
        c.user.type === 'Bot' && c.body.includes('### Terraform Plan')
      );

      if (existing) {
        await github.rest.issues.updateComment({
          owner: context.repo.owner,
          repo: context.repo.repo,
          comment_id: existing.id,
          body: body,
        });
      } else {
        await github.rest.issues.createComment({
          owner: context.repo.owner,
          repo: context.repo.repo,
          issue_number: context.issue.number,
          body: body,
        });
      }

- name: Fail if plan failed
  if: steps.plan.outputs.exitcode != '0'
  run: exit 1
```
A few things about this step worth calling out, because they all came from running an earlier version of this workflow on a real codebase and finding it annoying.
The summary block at the top is the bit reviewers will actually look at. They get the Plan: X to add, Y to change, Z to destroy headline immediately, followed by the list of resources being touched. If a resource is being destroyed, they see it at the top of the comment, not buried thousands of lines deep in the full output.
The full plan still goes into the comment, but inside a <details> block so it collapses by default. On a small change this is overkill, but the first time someone refactors a module and the plan is two thousand lines long, an uncollapsed comment will push every other PR comment off the screen.
The truncation limit is set to 55,000 characters rather than the full 65,536, to leave headroom for the summary, the changes list, and the rest of the comment scaffolding. If the full plan does get truncated, the summary at the top still tells the reviewer what is happening, so the comment is still useful.
The script also looks for an existing plan comment on the PR and updates it in place rather than posting a new one every time. Without this, every push to the PR branch leaves another comment behind, and after a few iterations the conversation tab is unreadable.
Finally, the last step re-fails the job if the plan exit code was non-zero. This is what makes the branch protection rule actually useful, because a failed plan should block the merge regardless of what the comment says.
What the comment looks like
When the workflow runs, the reviewer gets a comment on the PR that looks roughly like this:
```
### Terraform Plan ✅ Success

Plan: 1 to add, 2 to change, 1 to destroy

#### Resources
azurerm_storage_account.logs will be created
azurerm_key_vault.platform will be updated in-place
azurerm_log_analytics_workspace.platform will be updated in-place
azurerm_storage_account.legacy_logs will be destroyed

▼ Show full plan

Terraform will perform the following actions:

  # azurerm_storage_account.logs will be created
  + resource "azurerm_storage_account" "logs" {
      + access_tier  = "Hot"
      + account_kind = "StorageV2"
      ...
    }

Plan: 1 to add, 2 to change, 1 to destroy.

Pushed by: @cookjames, Workflow: Terraform Plan
```
It is not flashy, but it is exactly what you need during a review. The summary at the top tells the reviewer what is being touched and, crucially, what is being destroyed, before they even decide whether to open the full plan. If everything looks expected they can approve in seconds. If something looks wrong, the full plan is one click away.
Adding the apply workflow
The plan workflow is only half of the story. Once the PR is merged, you usually want a separate workflow to run terraform apply against the same state. That workflow lives in a separate file, triggers on push to main, and uses the second federated credential we set up earlier.
I am not going to walk through the full apply workflow here because it is largely the same shape as the plan one, minus the comment posting and with terraform apply -auto-approve tfplan at the end. The important bit is that both workflows authenticate the same way, and both use the same remote state, so the plan you reviewed in the PR is the plan that gets applied.
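For orientation, here is a minimal sketch of the shape that apply workflow might take. Treat it as a starting point, not a finished workflow; without the artifact approach described below it has to re-run the plan before applying:

```yaml
# .github/workflows/terraform-apply.yml - sketch only
name: Terraform Apply

on:
  push:
    branches:
      - main
    paths:
      - '**.tf'

permissions:
  id-token: write
  contents: read

jobs:
  apply:
    runs-on: ubuntu-latest
    env:
      ARM_CLIENT_ID: ${{ vars.AZURE_CLIENT_ID }}
      ARM_SUBSCRIPTION_ID: ${{ vars.AZURE_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ vars.AZURE_TENANT_ID }}
      ARM_USE_OIDC: true
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ vars.AZURE_CLIENT_ID }}
          tenant-id: ${{ vars.AZURE_TENANT_ID }}
          subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}
      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.9.5
      - run: terraform init
      - run: terraform plan -no-color -out=tfplan
      - run: terraform apply -auto-approve tfplan
```

The `push` trigger on `main` is what exercises the second federated credential, with the `ref:refs/heads/main` subject.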
If you want belt-and-braces, you can also upload the tfplan file from the PR run as an artifact, then download it in the apply run and apply that exact file. That guarantees the applied changes are bit-for-bit identical to what the reviewer saw, which removes the small risk of drift between the time the PR was approved and the time it was merged.
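A sketch of that variant: upload the plan file in the PR workflow, then fetch it in the apply workflow. One caveat worth hedging on: downloading an artifact from a different workflow run requires the `run-id` and `github-token` inputs on `actions/download-artifact@v4`, and you would need to resolve the PR run's id yourself (shown here as a placeholder environment variable):

```yaml
# In the plan workflow, after the plan step:
- name: Upload plan artifact
  uses: actions/upload-artifact@v4
  with:
    name: tfplan
    path: tfplan

# In the apply workflow, before terraform apply:
- name: Download plan artifact
  uses: actions/download-artifact@v4
  with:
    name: tfplan
    run-id: ${{ env.PLAN_RUN_ID }}           # placeholder: the PR run's id, resolved separately
    github-token: ${{ secrets.GITHUB_TOKEN }}
```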
Wrapping Up
A plan in a PR comment is one of those small workflow improvements that has a disproportionate impact on the quality of your reviews. It removes the trust gap between the author and the reviewer, it gives you a record of what was about to be applied at the point of merge, and it forces the conversation about destructive changes to happen before they happen rather than after.
The pattern we built here is not the only way to do this, and there are some excellent third-party actions that wrap up the same behaviour into a single step if you would rather not maintain the script yourself. But knowing how the underlying pieces fit together is worth doing at least once, because the moment something breaks, you will be glad you did not treat it as a black box.





