All the Small Things: Azure CLI Leakage and Problematic Usage Patterns

Nov 14, 2023
11 minutes

At the beginning of July 2023, I took a stroll around the azure/login GitHub Action repository. Looking through the repository’s issues section, I immediately noticed issue number 315. The issue was titled “SECURITY: Azure/login in some cases leaks Azure Application Variables to the GitHub build log”. And don’t you just love when things leak stuff? I had to click! Let’s see what’s up.

Excuse Me? You Dropped Your Environment Variables

The issue reported by @NoCopy stated that “azure/login in some cases leaks Azure Application Variables to the GitHub build log.” The user included an example workflow, a relevant az command use case and an example output that contains alleged credentials.

Figure 1: Security issue reported in the Azure/login project

Well, this is pretty straightforward, I thought. You’re telling me that the Azure CLI simply outputs environment variables to CI/CD logs without anyone (or at least without many people) knowing? And I can simply try to find these occurrences in the wild? (Remember: “some cases”.) That doesn’t sound hard. I decided to give it a shot.

A search for the string shown in the report, “az webapp config appsettings”, using GitHub’s code search, yielded the following result in a Microsoft-owned repository. See line 49 in figure 2.

Figure 2: GitHub Actions workflow running an az CLI command
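To give a feel for the risky pattern, here’s roughly what such a step boils down to (a hedged reconstruction, not the actual workflow from figure 2; the resource group, app name and setting are hypothetical):

    # Shell step inside a GitHub Actions workflow.
    # Setting a single value makes the CLI echo back every app setting on
    # the web app (names and values), including values that were never
    # defined as secrets in this workflow, so GitHub Actions has nothing to mask.
    az webapp config appsettings set \
      --resource-group my-rg \
      --name my-webapp \
      --settings "WEBSITE_RUN_FROM_PACKAGE=1"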

Ok, I thought, let’s see if it’s really that easy.

I clicked the View Runs button at the top to see the GitHub Actions workflow logs, scrolled to the relevant step of the workflow run, and then saw these two lurking around:

Figure 3: Microsoft’s workflow logs exposing sensitive information

Well, that was easy. I smiled.

Seeing that the issue was indeed real, I did an initial lookup while also trying to see if I could find other commands.

The initial lookup yielded five vulnerability reports, four to Microsoft and one to GitHub. Throughout the research, I was also able to report additional findings to other organizations that I can’t name, per their requests. I reported the findings to the relevant vulnerability disclosure programs, and all were accepted and fixed. The findings' severities ranged from informative to critical.

The Azure CLI: Bug or Feature?

In fact, many az commands (run via the Azure CLI) echo back the accessed/created/updated/deleted resource along with its environment variables, secrets, etc.
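For a concrete feel, this is roughly what such an echo looks like (an illustrative, trimmed example; the names and values are made up):

    # Listing the settings of a web app (hypothetical resource names).
    az webapp config appsettings list --resource-group my-rg --name my-webapp
    # Typical response, printed verbatim to wherever stdout ends up:
    # [
    #   { "name": "DB_PASSWORD",         "slotSetting": false, "value": "hunter2" },
    #   { "name": "STORAGE_ACCOUNT_KEY", "slotSetting": false, "value": "bXktc3VwZXItc2VjcmV0LWtleQ==" }
    # ]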

Down the line, I also found the following issues:

  1. https://github.com/Azure/login/issues/27 - [Security: Potential leak of az secrets on cmdline]
  2. https://github.com/Azure/k8s-create-secret/issues/3 - [Security: Pass secrets with --from-file instead of over the command line]

Both of these issues showed environment variables echoing back to the log. That said, I didn’t find a bug here. The Azure CLI actually echoes back this information as intended, so there’s nothing buggy regarding the tool or its output.

What’s actually problematic is the combination of where this tool is running and who can access the run logs.

So while the Azure CLI isn’t doing anything buggy, once it runs in a pipeline and the echoed credentials land in the pipeline’s log, we suddenly find ourselves with a “who should be able to read the logs” kind of problem.

For public repositories and pipelines, this problem is easy to see and understand: random internet stalkers (comme moi!) shouldn’t be able to access your production database keys.

For private repositories, you may get a false sense of security from the “private” label. But it takes just one compromised account or token with the lowest “read” permission, and suddenly an actor can access raw production credentials and possibly escalate their privileges. Whoops.

Moving on, I wanted to find more variants and occurrences. To do so, I cloned the Azure CLI repo and looked through its various modules (roughly 64 command groups available when running az CMD, e.g., “az webapp”), searching CI logs for existing leaks from the different command variants.

Figure 5: az commands list

Observing Usage Patterns of the Azure CLI in the Wild, Wild GitHub Actions

When I looked at Azure CLI usages, I noted that even in cases where the tool was “supposed” to leak credentials, the way developers used it made the difference between a full leak and a mitigated one.

While some developers didn’t know about the tool’s tendency to emit sensitive data, others did know (or at least played it safe) and proactively mitigated the problem.

Classifying the usages, I found three main usage patterns for the Azure CLI in GitHub Actions.

Pattern 1: Folks Who Didn’t Know

Use cases among people who didn’t anticipate the issue are especially problematic and an easy target for attackers. The developers weren’t aware that the tool was spewing their credentials, so they didn’t put any mitigations in place. This means their logs contain raw sensitive information.

In some implementations, though, I saw developers getting “saved by the bell”. This is when the developers defined the about-to-be-echoed credentials as secrets in the workflow, though mainly for the input phase. GitHub Actions later masked, or partially masked, the tool’s output, protecting its users. Whether they knew about the nature of the tool’s output, I can’t tell.

In the majority of the “saved by the bell” cases, I wasn’t able to find full raw credentials. In the remaining cases, I encountered partial or insufficient masking that still left secrets and sensitive data exposed. So no bell today.

Figure 6: Workflow logs with masking and credentials leakage

Pattern 2: Folks Who Had It Right

Some developers knew, or assumed, that the Azure CLI would leak sensitive data. In these workflows, the developers either manually masked the entirety of the returned values or stored the responses in variables rather than letting them echo to the log. This usage pattern yielded zero credentials. Kudos!
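For reference, the careful variant often boils down to something like this (a minimal sketch using GitHub Actions’ add-mask workflow command and jq; the resource and setting names are hypothetical):

    # Capture the response in a variable instead of letting it hit the log.
    RESPONSE=$(az webapp config appsettings list \
      --resource-group my-rg --name my-webapp --output json)

    # Extract the sensitive value and explicitly register it for masking,
    # so any later occurrence of it in the log gets redacted.
    DB_PASSWORD=$(echo "$RESPONSE" | jq -r '.[] | select(.name == "DB_PASSWORD") | .value')
    echo "::add-mask::$DB_PASSWORD"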

Pattern 3: The Folks Who Almost Had It Right

Incidents where folks almost escaped without mishap but didn’t quite make it in the end were unfortunate to witness and yet fun to find. These incidents happened where developers set up separate pipelines for create and delete actions (or their equivalents).

To explain, let’s look at an example. Let’s assume there's a resource definition in a pipeline called Pipeline A that consumes a secret called “MY_SECRET”. When Pipeline A runs and executes the az command, it prints the echoed secret from the Azure CLI response — but masked. This is because GitHub Actions identifies the string as sensitive information, as it should, and masks the string for us (similar to pattern 2; see figure 7).

Figure 7: Pipeline A, defining a secret

Meanwhile, its sibling pipeline, Pipeline B, performs other actions on the same resource, like delete. This time the secret “MY_SECRET” isn’t needed to execute the delete command and is not defined or used in Pipeline B.

So, when Pipeline B executes the delete command, the Azure CLI echoes the resource data securely created by Pipeline A back to Pipeline B! And since Pipeline B never defined “MY_SECRET” as a secret, GitHub Actions doesn’t mask the returned credentials. Eventually, we find ourselves with a pipeline emitting raw credentials to its log, similar to pattern 1.

Figure 8: Pipeline B log leaking the secret

How to Safely Use Azure CLI in Pipelines

So how can you sleep at night, not knowing whether your Azure CLI usage will emit sensitive information?

If you’re working solely with private repositories and CI instances, you’re “saved” by the authentication and authorization mechanisms you have. The problem remains bad, just not as bad as it would be for public repositories. Make no mistake, though. Relying on the privacy of your repositories and CIs is an incident waiting to happen, so don’t do it.

To mitigate the issue, you have a few options, depending on your needs.

Before trying to handle the output in the log, you should consider replacing the static values in your applications with a more robust mechanism. Azure has a solution in its Key Vault service: by using Bicep, for example, you could replace the static sensitive values in your application settings with references to secrets stored in the vault. Doing so makes all the ‘leakages’ in the tool harmless, as the settings now reference secrets instead of containing their values.
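As one illustration of this approach (hedged; the vault, secret and resource names are hypothetical), an App Service setting can point at a Key Vault secret instead of carrying the value itself:

    # The app setting now stores a Key Vault reference, not the secret value,
    # so whatever the CLI echoes back is harmless. (The app needs a managed
    # identity with read access to the vault for the reference to resolve.)
    az webapp config appsettings set \
      --resource-group my-rg --name my-webapp \
      --settings "DB_PASSWORD=@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/db-password/)" \
      --output none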

If you need to use the output of the az command, you could do either of the following:

  1. Store the output in a variable so it doesn’t get echoed to the log and use it later in your workflow. This holds up, for example, when testing the return code of an “az” invocation or grepping specific parts of the output.
  2. Use JMESPath queries when fetching information with the tool, via the built-in “--query” option (see the sketch below).

JMESPath (JSON Matching Expression paths) is a query language for JSON that lets you declaratively extract elements from a JSON document.

By using JMESPath you could directly access the desired property in the tool’s response and output only the relevant section/value.
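Combining both options, a step that needs a single value could look something like this (a minimal sketch; the resource and setting names are hypothetical):

    # Fetch only the one property you need and keep it in a variable,
    # so nothing else from the response reaches the log.
    DB_HOST=$(az webapp config appsettings list \
      --resource-group my-rg --name my-webapp \
      --query "[?name=='DB_HOST'].value | [0]" --output tsv)

    # Use it later in the workflow without echoing the value itself.
    [ -n "$DB_HOST" ] || { echo "DB_HOST setting not found"; exit 1; }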

If you don’t need the output of the az command, you could:

  1. Redirect the output to /dev/null — This is a basic redirection you could use to mute the output. Apply it like: “az webapp config ... &> /dev/null”. Note that it’s best to redirect both streams (stdout + stderr) to the location-of-no-return, as the Azure CLI sometimes emits the credentials as part of its error messages. In other words, a simple “az ... > /dev/null” may not suffice.
  2. Use the Azure CLI “output” option — Although I’ve seen few usages of it, the Azure CLI allows setting the desired output format with the “--output/-o” option. It supports various values, and for our purposes we could use “--output none”.
  3. Selective masking [Not recommended] — You could go and start masking every returned value in your pipeline, but this will generate a headache and require attention and maintenance, as the usage will change with time. And the tool will change. And GitHub Actions will change. And TL;DR … I do not recommend this approach.
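Put together, a quiet invocation in a workflow step could look like either of these (a sketch with hypothetical resource names):

    # Option 1: redirect both stdout and stderr to /dev/null.
    az webapp config appsettings set \
      --resource-group my-rg --name my-webapp \
      --settings "FEATURE_FLAG=on" &> /dev/null

    # Option 2: ask the CLI itself not to print the response.
    az webapp config appsettings set \
      --resource-group my-rg --name my-webapp \
      --settings "FEATURE_FLAG=on" --output none

Keep in mind that the first option also swallows error messages, so check the command’s exit code if you need the step to fail loudly on errors.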

Famous Last Words

This lookup was fun and a cool thing to accidentally pick up from a random GitHub issues pile. As I stated at the beginning of this post, the bug isn’t sophisticated, or actually a bug at all. The usage patterns around the Azure CLI, though, are “bugged” and should be reported.

So while we love the relatively new ease of cloud-service usage in 202x, we need to remain mindful of what’s being printed into those logs, where the logs reside and who can read them.

Happy coding!

Update: Collaborating with Microsoft on CVE-2023-36052

In addition to fixing the issues reported on their open-source projects, Microsoft validated the issues with the Azure CLI and assigned CVE-2023-36052, with a CVSS score of 8.6. Microsoft then made changes to the Azure CLI, Azure Pipelines and GitHub Actions, and published a new release of the Azure CLI as part of their November 2023 Patch Tuesday. By no longer echoing secrets, the new release prevents leakage into CI pipeline logs, developers’ machines and log aggregators.

We recommend updating the Azure CLI versions used on CI runners and developers’ machines to 2.54 or later, to make sure no secrets are printed to the logs.
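To check what a runner or workstation is currently using, and to upgrade it, something like the following works (one option among several; installations done through a package manager can also be upgraded via that package manager):

    # Print the installed Azure CLI version.
    az version --query '"azure-cli"' --output tsv

    # Upgrade to the latest release without an interactive prompt.
    az upgrade --yes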

Learn More

Because of the data they store and the workloads they run, CI/CD systems are among the most critical and sensitive assets in your organization. Discover how to apply policy-as-code, implement an effective secrets scanning strategy, adopt least-privileged access, and establish robust logging and monitoring with our CI/CD Security Checklist.

If you haven’t tried Prisma Cloud and would like to, we’d love for you to experience a free 30-day Prisma Cloud trial.

