Dependency Security in 2026
The Attack Surface Nobody Owns
In 2021, a vulnerability in a ubiquitous logging library exposed thousands of services worldwide to remote code execution. In 2022, a typosquatted npm package exfiltrated environment variables from developer machines across the industry. In 2024, a backdoored release of a compression library, planted by a malicious co-maintainer, nearly shipped in major Linux distributions.
The pattern is consistent: the attack surface is not your code. It is everything your code depends on — and the tooling to secure it has been catching up to the threat faster than most teams realize.
By 2026, the tooling is genuinely good. An SBOM can be generated in under a minute. The SLSA framework has real, production-ready implementations. Artifact signing via Sigstore requires about 30 lines of CI configuration. The gap is not capability — it's adoption. Most teams still don't know what's in their dependency tree, can't verify the provenance of their build artifacts, and have no response plan for a transitive dependency compromise.
This post closes that gap.
What Is Actually In Your Dependency Tree
Before you can protect your dependencies, you need to know what they are. Not just your direct dependencies — the full transitive graph. For a typical Node.js microservice with 50 direct dependencies, the transitive tree routinely contains 800–1,200 packages. For a Python data pipeline, it can exceed 2,000.
Most of those packages were written by individuals you've never heard of, aren't actively maintained, and have never had a security audit. That's not a criticism — it's just the reality of the open-source ecosystem. The question is whether you have visibility into this graph.
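If you want the number for your own service, the package managers can produce a rough count. A quick sketch, not a substitute for an SBOM:
# npm: count every installed package in the tree (drop the root entry)
npm ls --all --parseable | tail -n +2 | wc -l
# Go: count distinct modules in the build graph
go mod graph | awk '{print $2}' | sort -u | wc -l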
An SBOM (Software Bill of Materials) is a formal inventory of every component in your software. The two dominant formats are SPDX and CycloneDX. Both are machine-readable, integrate with vulnerability databases, and can be generated straight from your existing package manager.
# Node.js — generate CycloneDX SBOM
npx @cyclonedx/cyclonedx-npm --output-file sbom.json --output-format json
# Python — the cyclonedx-bom package installs the cyclonedx-py CLI
pip install cyclonedx-bom
cyclonedx-py requirements requirements.txt -o sbom.json
# Java/Maven
mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom
# Go
go install github.com/CycloneDX/cyclonedx-gomod/cmd/cyclonedx-gomod@latest
cyclonedx-gomod app -output sbom.json
Once you have an SBOM, you can cross-reference it against the OSV (Open Source Vulnerabilities) database:
# Install osv-scanner
go install github.com/google/osv-scanner/cmd/osv-scanner@latest
# Scan using the SBOM
osv-scanner --sbom sbom.json
# Or scan the source tree directly
osv-scanner -r ./
The SBOM also needs to ship with the artifact. If you're building a container, generate the SBOM during the build and copy it into the final image:
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Generate SBOM during build
RUN npx @cyclonedx/cyclonedx-npm --output-file /sbom.json --output-format json
FROM node:22-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/src ./src
COPY --from=build /sbom.json /sbom.json
# LABEL values are not shell-expanded at build time; pass the timestamp in as a build arg:
#   docker build --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" .
ARG BUILD_DATE
LABEL org.opencontainers.image.created=$BUILD_DATE
SLSA: Provenance You Can Verify
An SBOM tells you what's in the artifact. SLSA (Supply-chain Levels for Software Artifacts) tells you how the artifact was built — and lets you verify that the artifact you're running matches the source code it claims to come from.
SLSA defines four levels of build integrity. The practical distinction for most teams: Level 1 means provenance exists, Level 2 means the provenance is signed and generated by a hosted build platform, and Level 3 hardens the build platform so that build tenants cannot forge provenance.
For most product teams, SLSA Level 2 is the right target. It means your CI system generates a signed provenance document that proves: this artifact was built from this source commit, by this workflow, at this time. No one can substitute a different artifact without the signature failing verification.
GitHub Actions makes this nearly free:
# .github/workflows/build-and-attest.yml
name: Build with SLSA Provenance
on:
  push:
    branches: [main]
  release:
    types: [published]
permissions:
  contents: read
  id-token: write
  attestations: write
  packages: write
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      digest: ${{ steps.build.outputs.digest }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
      - name: Generate SLSA provenance
        uses: actions/attest-build-provenance@v1
        with:
          subject-name: ghcr.io/${{ github.repository }}
          subject-digest: ${{ steps.build.outputs.digest }}
          push-to-registry: true
Verification at deploy time:
gh attestation verify \
  oci://ghcr.io/myorg/myservice@sha256:abc123... \
  --owner myorg \
  --predicate-type https://slsa.dev/provenance/v1
If the artifact was tampered with after build — even by a single byte — verification fails and the deploy is rejected.
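Wiring that rejection into a pipeline takes a few lines of shell. A minimal sketch of a deploy gate, assuming an authenticated gh CLI and a DIGEST variable exported by the build job (both assumptions, not part of the workflow above):
#!/usr/bin/env bash
set -euo pipefail
IMAGE="ghcr.io/myorg/myservice@${DIGEST}"
# Hard-fail the deploy if provenance does not verify
if ! gh attestation verify "oci://${IMAGE}" \
    --owner myorg \
    --predicate-type https://slsa.dev/provenance/v1; then
  echo "provenance verification failed for ${IMAGE}; refusing to deploy" >&2
  exit 1
fi
# Only verified digests reach the cluster
kubectl set image deployment/myservice myservice="${IMAGE}"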
Signing and Verifying Artifacts with Sigstore
Sigstore is the keyless signing infrastructure that now underpins most of the open-source ecosystem's artifact signing. It works by generating a short-lived signing certificate tied to an OIDC identity (your GitHub Actions workflow, your Google account, your GitHub username) and recording the signature in a public, append-only transparency log called Rekor.
The "keyless" part matters. You don't manage a long-lived private key that can be stolen. The signing certificate is valid for 10 minutes, after which it's useless to an attacker.
For container images:
# Sign after push (run in CI with OIDC token available)
cosign sign --yes ghcr.io/myorg/myservice@sha256:abc123...
# Verify — checks Rekor transparency log, confirms OIDC identity
cosign verify \
--certificate-identity-regexp "https://github.com/myorg/myservice/.github/workflows/.*" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
ghcr.io/myorg/myservice@sha256:abc123...
For arbitrary files — release binaries, SBOMs, config files:
# Sign
cosign sign-blob --yes sbom.json --bundle sbom.json.bundle
# Verify
cosign verify-blob \
--bundle sbom.json.bundle \
--certificate-identity you@example.com \
--certificate-oidc-issuer https://accounts.google.com \
sbom.json
Add a verification step to your Kubernetes admission controller so unsigned or unverified images cannot run in production:
# Using Kyverno policy
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-image-signature
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaces: [production]
      verifyImages:
        - imageReferences: ["ghcr.io/myorg/*"]
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/myorg/*"
                    issuer: "https://token.actions.githubusercontent.com"
                    rekor:
                      url: https://rekor.sigstore.dev
Transitive Dependency Risk
Direct dependencies are only the start. The real supply chain risk is in the transitive graph — the packages your packages depend on.
The key insight is that you have no contractual relationship with transitive dependencies. When a maintainer of a package three levels deep introduces a backdoor, you have no notification channel, no influence over their practices, and often no awareness that the package exists in your system.
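What you can do is trace how a deep package got there. npm ships this; for Python, the third-party pipdeptree tool does the same (the package names below are just examples):
# Show every dependency chain that pulls a transitive package into the tree
npm explain minimist
# Python equivalent: reverse dependency tree via pipdeptree
pipdeptree --reverse --packages urllib3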
The mitigations are complementary:
Lock files with integrity hashes. npm's package-lock.json, Python's pip-compile output, and Go's go.sum all pin exact versions with content hashes. If a package is tampered with on the registry, the hash won't match and the install fails.
# Verify npm lockfile integrity
npm ci   # installs from the lockfile, verifies integrity checksums, and never updates it
# Python with pip-tools
pip-compile --generate-hashes -o requirements.txt requirements.in
pip install --require-hashes -r requirements.txt
Private artifact mirrors. Mirror the packages you use in your own registry (Artifactory, GitHub Packages, AWS CodeArtifact). This prevents supply chain attacks via registry compromise or package removal. Your builds always pull from your mirror, not from the public internet.
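Pointing builds at the mirror is a one-line config change per ecosystem. A sketch, with packages.internal.example.com standing in for your registry host:
# .npmrc: route all npm installs through the internal mirror
registry=https://packages.internal.example.com/npm/
# pip.conf: same idea for Python (or set PIP_INDEX_URL)
[global]
index-url = https://packages.internal.example.com/pypi/simple/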
Automated dependency updates with security scanning. Dependabot and Renovate both flag known vulnerabilities in PRs. The pattern that works is: enable auto-merge for patch updates to non-critical packages, require human review for minor/major updates, and block merges when CVE severity exceeds your threshold.
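The auto-merge half of that pattern is a small workflow of its own. A sketch, assuming GitHub's dependabot/fetch-metadata action and auto-merge enabled on the repository:
# .github/workflows/dependabot-automerge.yml
name: Dependabot auto-merge
on: pull_request
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    if: github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - name: Fetch update metadata
        id: meta
        uses: dependabot/fetch-metadata@v2
      - name: Auto-merge patch updates only
        if: steps.meta.outputs.update-type == 'version-update:semver-patch'
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
The Dependabot configuration that generates the update PRs: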
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: npm
    directory: "/"
    schedule:
      interval: weekly
    open-pull-requests-limit: 10
    groups:
      dev-dependencies:
        patterns: ["*"]
        dependency-type: "development"
    ignore:
      - dependency-name: "*"
        update-types: ["version-update:semver-major"]
Response Playbook for Transitive Compromise
When a transitive dependency is disclosed as compromised — which will happen — you need to be able to answer three questions in under an hour:
- Are we using this package? (SBOM query)
- Are we using the affected version? (SBOM + lockfile)
- Is the vulnerable code path reachable? (Reachability analysis)
The SBOM answers questions 1 and 2 immediately. A CycloneDX SBOM is just JSON — you can query it with jq:
# Is log4j-core in our SBOM, and what version?
jq '.components[] | select(.name == "log4j-core") | {name, version}' sbom.json
Question 3 requires a reachability analyzer. Commercial tools such as Endor Labs can determine whether the vulnerable function is actually called by your code, and osv-scanner ships experimental call analysis for some ecosystems. Elsewhere this is still maturing, and conservative teams treat "present in SBOM" as sufficient reason to update regardless of reachability.
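For Go projects, osv-scanner can attempt this filtering itself. The flag is experimental, so treat the exact spelling as an assumption to check against your installed version:
# Only report vulnerabilities whose affected functions are reachable from your code
osv-scanner --experimental-call-analysis ./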
The response timeline: the SBOM query takes minutes, confirming the affected version against the lockfile takes minutes more, and the remainder of the hour goes to reachability assessment and deciding whether to patch immediately or schedule the update.
If you don't have an SBOM, question 1 takes hours of manual grep-and-search. This is the moment that makes teams wish they had generated one.
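This query is worth scripting before the incident. A sketch, assuming your CycloneDX SBOMs are collected one-per-service under ./sboms/ (the path and layout are assumptions):
#!/usr/bin/env bash
# usage: ./affected.sh <package-name> <bad-version>
set -euo pipefail
pkg="$1"; bad="$2"
for sbom in ./sboms/*.json; do
  # Emit name@version when the SBOM contains the affected component
  hit=$(jq -r --arg n "$pkg" --arg v "$bad" \
    '.components[] | select(.name == $n and .version == $v) | "\($n)@\($v)"' \
    "$sbom")
  if [ -n "$hit" ]; then
    echo "AFFECTED: $(basename "$sbom" .json) uses $hit"
  fi
done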
Key Takeaways
- SBOMs are not a compliance checkbox — they are the operational tooling that lets you answer "are we affected?" in minutes rather than hours when a supply chain incident hits.
- SLSA Level 2 is achievable with roughly 50 lines of GitHub Actions configuration and blocks one of the most common supply chain attacks: artifact substitution between build and deployment.
- Sigstore's keyless signing model eliminates the long-lived key management problem that made artifact signing impractical for most teams historically.
- Transitive dependencies are the real risk; direct dependencies are just the entry point — lock files with content hashes and private artifact mirrors reduce exposure to registry-level compromises.
- A pre-written response playbook for transitive dependency compromise cuts response time from hours to under an hour when the next major disclosure happens.
- The supply chain tooling ecosystem has matured significantly — the gap is adoption, not capability, and the effort to close it is now measured in days, not quarters.