Do you know what the main threat is to your CI/CD systems? It’s not the code you write, the tools you use, or the cloud provider you rely on. It’s the supply chain, and that is frequently the most vulnerable part of the development process. Today, let’s understand why.
It begins with the components you use to build your code. The libraries and frameworks you rely on are often open-source and community-driven. Most developers decide on the packages to use based on word of mouth, popularity, or simple internet searches. They don’t often consider the security of the package or the practices of the maintainers. While they may review some of the code, it’s rare to find companies that fully review the code or development practices.
As teams become busier, it’s harder to spend the time required for a complete review. That leads development teams to rely on third-party tools to identify vulnerable packages. Interestingly, most of these tools miss the fact that modern packages are designed to influence the build process. These packages can include tooling that changes the process, adds generated code, or runs scripts during the lifecycle. As a result, a package can introduce vulnerabilities that are not easily detected by a simple code scan. A malicious package could modify the built code, or it could take over the build environment to perform a malicious task. Most companies have no processes or practices in place to identify or review these scripts for potential issues.
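As a concrete illustration, npm runs lifecycle scripts automatically when a package is installed. A hypothetical package manifest could declare hooks like these (the package and script names here are invented for illustration):

```json
{
  "name": "innocuous-lib",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node scan-environment.js",
    "postinstall": "node patch-output.js"
  }
}
```

Both scripts execute with the installing user’s permissions the moment the package is installed, well before any human looks at the code.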
Ironically, this problem is often made riskier by corporate security practices. Many companies self-host their CI/CD systems on their own networks in the name of security. In truth, this can have the opposite effect. Building on an ephemeral, cloud-hosted runner (such as a GitHub-hosted runner) means the process occurs in an isolated network: while the resulting compiled code could be compromised, the corporate network is not. By comparison, most self-hosted CI/CD systems are connected to internal resources and services. This means a compromised package can gain access to internal systems, including the central package stores. With enough privileges, it might even compromise the system hosting the runners. In short, this approach to self-hosting substantially increases the blast radius of an exploit. That’s one of several reasons security-conscious organizations avoid self-hosting these kinds of systems.
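The difference comes down to a single line of workflow configuration. A minimal sketch, using GitHub Actions syntax:

```yaml
jobs:
  build:
    # Ephemeral GitHub-hosted runner: isolated network, discarded after the job.
    runs-on: ubuntu-latest
    # By contrast, a self-hosted runner sits inside the corporate network:
    # runs-on: [self-hosted, linux]
```

A compromised dependency runs in whichever environment the `runs-on` label selects, so this one line largely determines the blast radius.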
Sadly, that’s not the only risk. The supply chain can also be compromised by third-party dependencies pulled in through the build system’s tooling. While many companies have policies in place for their NPM, Maven, and NuGet packages, it’s rare to see a similar approach to GitHub Actions (or the equivalent features on other platforms). Extensions for CI/CD systems, including GitHub Actions, are frequently given more trust than they deserve.
With GitHub Actions, most companies fall into one of two camps. The first optimizes for development velocity and allows any Action to be used. Some of these companies restrict developers to GitHub-verified Actions, believing that adds extra security. In truth, the verification process is not a security review. The blue checkmark for “Published domain and email verified” only indicates:
- The publisher has verified their domain (and has the verified badge on their profile)
- The publisher confirmed their email address, allowing GitHub Support to reach them
- The publisher has required two-factor authentication for their organization
Notice there’s nothing security-related? It’s just verifying the publisher is reachable. GitHub Apps have additional guidelines, but none of those are security-related either. In short, you still need to understand and manage your supply chain.
The second approach is more restrictive. These companies only allow Actions from an approved list, typically after an internal code review. In my experience, most teams perform only a cursory level of checks due to a lack of time. They may even allow developers to reference the Action using a version tag (such as `@v1`), not considering that the owner of the repository can change the code that tag points to at any time. In fact, that’s how semantic versioning of Actions is implemented. In the wrong hands, however, this creates a security risk.
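The mutability of tags is easy to demonstrate locally. This sketch creates a throwaway repository, tags a commit as `v1`, and then re-points the tag at a newer commit — exactly what a compromised Action repository could do:

```shell
# Sketch: tags are mutable references, so "@v1" is not a guarantee.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "v1 release"
git tag v1
first=$(git rev-parse v1)
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "new (possibly malicious) code"
git tag -f v1   # re-point the existing tag; 'git push -f origin v1' would publish it
second=$(git rev-parse v1)
[ "$first" != "$second" ] && echo "tag v1 now resolves to different code"
```

Anyone consuming `@v1` would silently pick up the new code on their next workflow run.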
The risks with Actions come from two places. The first is the code itself. This is easy enough to review and secure, assuming a company has the resources. Once a specific commit is reviewed, teams can use the commit SHA to ensure they are using the approved version of the code. For example, instead of `actions/setup-node@v4` (or a more specific version tag), developers would use `actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8`. That said, code reviews of Actions are often not as straightforward as they might appear.
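In a workflow file, the two reference styles look like this (the pinned SHA is the `setup-node` commit from the example above; a trailing comment recording the reviewed version is a common convention):

```yaml
steps:
  # Tag reference: resolves to whatever the tag points at *today*.
  - uses: actions/setup-node@v4
  # SHA reference: resolves to exactly one reviewed commit.
  - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # pinned, reviewed
```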
Most JavaScript code used in Actions is minified and transpiled into a single JavaScript file in a `dist` folder. This makes it harder to review the code (assuming the code even originates from the provided source). It also means that the dependencies listed in `package.json` may or may not be included in the final distribution; if they are, they are often altered by the packaging process, which removes unused code (or injects additional code). That means you also have to understand how the code is being packaged!
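One way to verify the packaging is to rebuild `dist` from source in CI and fail if it differs from what was committed. A sketch, assuming the Action’s `package.json` defines a `build` script:

```yaml
steps:
  - uses: actions/checkout@v4
  - run: npm ci                     # install the exact locked dependencies
  - run: npm run build              # regenerate dist/ from source
  - run: git diff --exit-code dist/ # fail if committed dist/ doesn't match
```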
The second risk with Actions comes from the Action’s external dependencies. These are rarely reviewed by companies. If the Action uses a Docker image (or Dockerfile), it’s important to review that image. Pinning the Action version won’t ensure the image isn’t altered, so you also need to trust the vendor and their security practices. The same is true for Actions that rely on external binaries. The `actions/setup-node` Action, for example, downloads the Node.js binaries from https://github.com/actions/node-versions/releases. This means you must trust the security practices of the `actions` organization, as well as their practices for maintaining and versioning those binaries. Not all organizations are as transparent as GitHub in this regard.
Bringing all of this together can create a perfect storm. Most workflows run with excessive permissions (instead of least privilege). This gives Actions access to read or write issues, pull requests, and other repository details, making workflows an easy target for an Action with a malicious payload. If the company is using self-hosted runners connected to corporate resources, the risk is substantially higher: a malicious Action can access internal systems, databases, or other resources. It may even be able to update one or more repositories, allowing its impact to spread beyond the workflows that reference the Action.
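Both problems have configuration-level mitigations. A minimal sketch of a workflow that requests least-privilege token scopes and runs on an ephemeral hosted runner:

```yaml
permissions:
  contents: read   # restrict the default GITHUB_TOKEN; no write scopes granted

jobs:
  build:
    runs-on: ubuntu-latest   # ephemeral runner, isolated from internal systems
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```

With `permissions` set explicitly, a malicious Action in this job cannot use the workflow’s token to modify issues, pull requests, or other repository contents.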
The only real fix for this is education. You can’t rely on a single security team to review all of this code. Instead, developers have to own much of the responsibility. To do that, they need to understand these exploit vectors and the risks they create. They also need to know how to review the code in their packages and dependencies to find potential risks.
To be clear, vendors also have a responsibility, one that is often neglected. Look at the workflows in `actions/setup-java` for an example of this. Beyond the typical validations of the code itself, these workflows include pull request and merge triggers to help with the review:
- Validate that the code in `dist` is perfectly recreated from the provided source code (`check-dist.yml`)
- Ensure that CodeQL code scanning was used to identify potential exploits (`codeql-analysis.yml`)
- Test the specific behaviors of the Action to ensure it does what it claims
- Keep the configurations of these tests up-to-date with the latest versions of the tools (`update-config-files.yml`)
- Review the licenses of all of the dependencies to ensure they are approved for use, avoiding potential legal hazards (`licensed.yml`)
And since it’s a GitHub repository, the SBOM (software bill of materials) is also available, allowing you to understand the dependencies and their licenses. This repository is a great example of how a vendor should provide transparency and security to users. These workflows give anyone using the Action some guarantees about the code, and those guarantees can be independently verified. It’s a great way to help end users trust the Action.
Hopefully this gives you some insights into the supply chain risks in your CI/CD systems and how you can avoid them. It’s a complex problem that requires a multi-faceted approach to solve. It’s not just about the code you write or the tools you use. It’s about the entire process, including the trust you place in the components you use and the transparency and security that third-parties provide.