
Ultimate Guide to Using Azure DevOps CLI for Pipeline Runs


Most of us spend a fair amount of time clicking around the Azure DevOps portal to trigger pipelines, check run statuses, or pull logs out of a failed job. It works, but once you are doing it several times a day across multiple projects, it starts to drag. The Azure DevOps CLI, which is an extension of the Azure CLI, lets you do all of this from the terminal and it plays very nicely with scripts and automation.

In this post, I will walk through getting the CLI set up, the different ways to trigger pipelines, managing runs, and a handful of other commands I have found genuinely useful day-to-day.

Installing the Azure DevOps CLI extension

The Azure DevOps CLI is not a separate tool; it is an extension of the Azure CLI. You will need the Azure CLI installed first. If you do not have it, grab it from the Microsoft docs page.

Once the Azure CLI is installed, add the DevOps extension:

az extension add --name azure-devops

You can confirm it installed correctly by running:

az extension list --output table

You should see azure-devops listed in the output.

Logging in and setting defaults

Before running any commands, you need to authenticate. The standard az login will work, but for Azure DevOps specifically, you can also use a Personal Access Token (PAT). I find the PAT approach better for scripts and CI contexts.

To log in with a PAT:

az devops login --organization https://dev.azure.com/your-org-name

You will be prompted to paste in your PAT. Make sure the token has the correct scopes — for pipeline operations you will want at least Build (Read & execute) and Release (Read, write & execute).
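For scripts and CI, where an interactive prompt is no good, the extension also recognises the AZURE_DEVOPS_EXT_PAT environment variable, and az devops login will read the token from stdin if you pipe it in. A minimal sketch of a non-interactive login helper, assuming your secret store has already exported the variable:

```shell
# Sketch of a non-interactive login helper for CI.
# Assumes AZURE_DEVOPS_EXT_PAT has been exported by your secret store.
devops_login() {
  # az devops login reads the PAT from stdin when it is piped in
  echo "$AZURE_DEVOPS_EXT_PAT" | az devops login --organization "$1"
}

# usage: devops_login https://dev.azure.com/your-org-name
```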

To save yourself from typing the organisation and project in every command, set them as defaults:

az devops configure --defaults organization=https://dev.azure.com/your-org-name project=your-project-name

From here on, you can omit --organization and --project from every command.

Listing available pipelines

Before you can trigger anything, it helps to know what pipelines exist. The command for this is:

az pipelines list --output table

This returns a table with the pipeline ID, name, folder, and a few other fields. The ID is what you will use to reference the pipeline in later commands. If you have a lot of pipelines, you can filter by name:

az pipelines list --name "terraform-deploy-*" --output table

You can also list by folder, which is useful when you keep pipelines organised in a folder structure:

az pipelines list --folder-path "\\infrastructure\\terraform" --output table

Triggering a pipeline by ID or name

This is the command you will likely use the most. To queue a pipeline run by its ID:

az pipelines run --id 42

Or by name if you prefer:

az pipelines run --name "terraform-deploy-prod"

The command returns a JSON object with the run details, including the run ID, which you will need for anything that follows.

Triggering against a specific branch

By default, a run is queued against the pipeline's default branch. To target a different branch, pass the --branch flag:

az pipelines run --name "terraform-deploy-prod" --branch feature/new-module

This is useful when you want to test a pipeline change in a feature branch before merging it to main.

Triggering against a specific commit

If you need to trigger a run against a specific commit rather than the latest commit on a branch, you can pass the --commit-id flag:

az pipelines run --name "terraform-deploy-prod" --branch main --commit-id a1b2c3d4

This is handy when rolling back, or when you need to re-run a pipeline against a known-good commit to reproduce an earlier result.

Triggering with variables and parameters

Pipelines often have runtime variables or parameters that change the behaviour of a run. You can pass variables using the --variables flag:

az pipelines run --name "terraform-deploy-prod" \
  --branch main \
  --variables environment=staging region=uksouth

For parameters declared in the pipeline YAML under parameters:, use --parameters:

az pipelines run --name "terraform-deploy-prod" \
  --branch main \
  --parameters deployApproval=true terraformAction=apply

The distinction between variables and parameters matters here — if you pass an input to the wrong flag, the pipeline will either ignore it or fail validation, depending on how the YAML is written.

Triggering with open mode for browser review

If you want the CLI to kick off the run but still see the portal view afterwards, add the --open flag. This queues the run and opens it in your browser:

az pipelines run --name "terraform-deploy-prod" --open

A nice middle ground when you want the speed of the CLI but still want to watch progress in the UI, or handle approvals.

Checking the status of a run

Once you have queued a pipeline, you will want to see how it is getting on. The command for a specific run is:

az pipelines runs show --id 1234

The id here is the run ID, not the pipeline ID. This will return the run status, result, who queued it, and other metadata.

If you want to list recent runs for a pipeline, use:

az pipelines runs list --pipeline-ids 42 --top 5 --output table

The --top flag limits how many results come back, which keeps the output readable.

For only the runs that are currently in progress, filter by status:

az pipelines runs list --status inProgress --output table

Other valid statuses include completed, cancelling, notStarted, and postponed.
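Since every az command also accepts a JMESPath expression via --query, you can filter on the result field as well. As a sketch (the pipeline ID and the --top limit are placeholders), a small helper that lists only the failed runs:

```shell
# Sketch: list only the failed runs for a pipeline (ID passed as $1).
failed_runs() {
  az pipelines runs list --pipeline-ids "$1" --top 20 \
    --query "[?result=='failed'].{id:id, finishTime:finishTime}" \
    --output table
}

# usage: failed_runs 42
```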

A useful pattern: queue and wait

One of the things the portal does not do well is give you a single view of "queue this pipeline and tell me when it is done". You can script this fairly easily with the CLI. Here is a bash example:

RUN_ID=$(az pipelines run --name "terraform-deploy-prod" --branch main --query "id" -o tsv)

echo "Queued run $RUN_ID, waiting for completion..."

while true; do
  STATUS=$(az pipelines runs show --id $RUN_ID --query "status" -o tsv)
  if [ "$STATUS" = "completed" ]; then
    RESULT=$(az pipelines runs show --id $RUN_ID --query "result" -o tsv)
    echo "Run $RUN_ID finished with result: $RESULT"
    break
  fi
  sleep 15
done

I use a variant of this in a couple of helper scripts when I need to chain things together locally without waiting in front of the portal. It is also handy for triggering a pipeline from within another pipeline and waiting for the downstream one to finish.

Queue and wait inside an agentic coding session

This pattern has become a lot more useful to me recently when working with AI coding agents like Claude Code, Copilot's agent mode, or Codex. One of the more frustrating parts of an agentic workflow is when the agent makes a Terraform change, opens a pull request, and then just sits there with no idea whether the downstream pipeline has passed, failed, or even started. You end up being the middleman, flicking between the portal and the chat window copying statuses back to the agent.

The queue and wait script solves that quite neatly. If you give the agent access to run the Azure DevOps CLI as part of its tool set, it can trigger the pipeline itself, poll for completion, and then read the result into its own context. The agent stays fully up to speed on whether its change worked without you having to tell it. Combined with the log-fetching REST call covered later in this post, the agent can also pull the failure logs on its own and attempt a fix.

A slightly more agent-friendly version of the script returns structured output:

RUN_ID=$(az pipelines run --name "terraform-plan-pr" \
  --branch $BRANCH \
  --variables prNumber=$PR_NUMBER \
  --query "id" -o tsv)

while true; do
  STATUS=$(az pipelines runs show --id $RUN_ID --query "status" -o tsv)
  if [ "$STATUS" = "completed" ]; then
    az pipelines runs show --id $RUN_ID \
      --query "{id:id, result:result, finishTime:finishTime, url:_links.web.href}" \
      -o json
    break
  fi
  sleep 15
done

The JSON output at the end gives the agent everything it needs — the result, the timestamp, and a direct link to the run. I have also used this pattern for letting an agent bootstrap its own CI setup: it creates the pipeline with az pipelines create, runs it against a feature branch, and iterates on the YAML until the run succeeds. The agent is effectively driving the feedback loop for itself, which is a big step up from having to paste errors back in.

A couple of things worth being aware of when giving an agent this level of access:

Scope the PAT tightly — only grant the minimum scopes needed, and set a short expiry. Agents can and will run commands you did not expect, and a scoped PAT limits the blast radius.

Pin the target pipelines — rather than letting the agent discover and run anything, I prefer giving it a specific list of pipelines it is allowed to trigger. A simple wrapper script that validates the pipeline name before passing it to az pipelines run goes a long way.

Watch the polling interval — a tight sleep inside an agent loop will chew through API rate limits quickly, especially if the agent is running multiple pipeline checks in parallel. Fifteen to thirty seconds is usually fine.
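To make the pinning point concrete, here is a sketch of the kind of wrapper script I mean. The allowlist contents and pipeline names are illustrative, not a standard:

```shell
# Sketch: only pipelines on the allowlist may be triggered by the agent.
ALLOWED_PIPELINES=("terraform-plan-pr" "terraform-deploy-dev")

run_pipeline() {
  local name="$1"
  shift
  for allowed in "${ALLOWED_PIPELINES[@]}"; do
    if [ "$name" = "$allowed" ]; then
      # Pass any remaining arguments (branch, variables, etc.) straight through
      az pipelines run --name "$name" "$@"
      return
    fi
  done
  echo "refusing to run pipeline not on the allowlist: $name" >&2
  return 1
}

# usage: run_pipeline terraform-plan-pr --branch feature/x
```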

Cancelling a run

If you have queued a run by mistake, or you spot something wrong mid-flight, you can cancel it:

az pipelines runs update --id 1234 --status cancelling

The state goes to cancelling first, then to completed with a result of canceled once the build agents have stopped.

Pulling logs from a run

When a run fails, you normally need the logs. You can list the logs available for a run with:

az pipelines runs show --id 1234 --query "logs"

To actually download a specific log, you will need to call the Azure DevOps REST API directly, as the CLI does not have a first-class command for downloading logs yet. The URL pattern is:

https://dev.azure.com/{organisation}/{project}/_apis/build/builds/{runId}/logs/{logId}

You can curl this with your PAT as basic auth, which makes it easy to pipe into grep when hunting a specific error.
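As a sketch, assuming the PAT lives in an AZDO_PAT environment variable (my naming, not a convention), the fetch looks something like this:

```shell
# Build the REST URL for one log of one run:
# organisation, project, run ID, log ID, in that order.
build_log_url() {
  printf 'https://dev.azure.com/%s/%s/_apis/build/builds/%s/logs/%s' \
    "$1" "$2" "$3" "$4"
}

# Fetch the log with the PAT as basic auth and grep for errors.
# AZDO_PAT is an assumed environment variable holding your token.
# usage: curl -s -u ":$AZDO_PAT" "$(build_log_url my-org my-project 1234 7)" | grep -i error
```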

Managing pipeline definitions

The CLI is not just for running pipelines. You can also create, update, and delete the pipeline definitions themselves.

To create a new pipeline from a YAML file in a repository:

az pipelines create \
  --name "terraform-deploy-dev" \
  --repository myorg/infrastructure \
  --repository-type github \
  --branch main \
  --yaml-path pipelines/terraform-dev.yml

To update the name or configuration of an existing pipeline:

az pipelines update --id 42 --new-name "terraform-deploy-development"

And to delete a pipeline definition:

az pipelines delete --id 42

I have found the create command particularly useful when spinning up new projects where you want to bootstrap a standard set of pipelines programmatically, rather than clicking through the UI for each one.
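That bootstrap case can be sketched as a loop over the YAML files in a repo. The repository name, branch, and folder layout here are placeholders for your own setup, and the --skip-first-run flag (which stops each new pipeline from queuing a run immediately) is something you may or may not want:

```shell
# Sketch: create one pipeline per YAML file under pipelines/.
# Repo, branch, and folder layout are placeholders for your own setup.
bootstrap_pipelines() {
  for yml in pipelines/*.yml; do
    # Derive the pipeline name from the filename, e.g. terraform-dev.yml -> terraform-dev
    name=$(basename "$yml" .yml)
    az pipelines create --name "$name" \
      --repository myorg/infrastructure --repository-type github \
      --branch main --yaml-path "$yml" --skip-first-run true
  done
}

# usage: bootstrap_pipelines
```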

Managing pipeline variables and variable groups

You can manage pipeline variables directly from the CLI, which is useful for bulk updates or for keeping variables in sync across environments.

To add a variable to a specific pipeline:

az pipelines variable create \
  --pipeline-id 42 \
  --name "tfVersion" \
  --value "1.9.5"

For variable groups, which are shared across pipelines, use the variable-group subcommand:

az pipelines variable-group create \
  --name "terraform-shared" \
  --variables tfVersion=1.9.5 backendRg=rg-tfstate

You can also mark variables as secret using the --secret flag, which prevents them from being shown in logs.
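For example (the pipeline ID and variable names are placeholders), a secret variable can be created like this:

```shell
# Sketch: create a pipeline variable marked secret so it is masked in logs.
create_secret_variable() {
  az pipelines variable create \
    --pipeline-id "$1" --name "$2" --value "$3" --secret true
}

# usage: create_secret_variable 42 tfBackendKey "value-from-your-secret-store"
```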

Working with agent pools

For admin-heavy workflows, you can query and manage agent pools:

az pipelines pool list --output table

To see agents within a specific pool:

az pipelines agent list --pool-id 5 --output table

I do not use this one often, but it is useful when troubleshooting why a run is stuck in the queue and you want to check if the self-hosted agents are online.

Why bother using the CLI

For a single run, clicking the Run pipeline button is probably quicker. Where the CLI really earns its place is when you are doing any of the following:

Automation and scripting — chaining pipeline runs together, triggering deployments from CI scripts, or integrating with tooling outside of Azure DevOps.

Bulk operations — running the same pipeline across multiple branches for testing, or cancelling a batch of stuck runs.

Faster feedback loops — when you are actively working on a pipeline and need to trigger it ten times in a row to test a change, the CLI is noticeably quicker than the portal.

Bootstrapping new projects — creating pipelines, variable groups, and default configuration programmatically when standing up a new project.

Integrating with IaC workflows — I often use it alongside Terraform deployments where I want to trigger a downstream pipeline after an infrastructure change, without jumping over to the UI.

Giving coding agents a feedback loop — as covered above, letting an AI agent trigger pipelines, poll for results, and read failure logs keeps it fully self-sufficient during an agentic coding session.

The CLI is not a replacement for the portal, but once you have it configured, it will quietly save you time most days. Well worth the five minutes it takes to set up.