Security · DevSecOps · CI/CD · Automation

Building a DevSecOps Pipeline That Doesn't Slow You Down

Kavora Systems · March 3, 2026 · 11 min read

The security scanning problem

Most teams know they should be scanning for vulnerabilities. The problem isn't awareness -- it's implementation. Security tools are notorious for generating mountains of findings, most of which are either false positives or so low-severity that fixing them is a waste of engineering time.

The result? Teams either ignore security scanning entirely or set it up once, get overwhelmed by the noise, and quietly disable it. Neither outcome is acceptable, especially if you're handling customer data or pursuing SOC 2 compliance.

This guide builds a practical DevSecOps pipeline that catches the vulnerabilities that actually matter, runs fast enough that developers don't hate it, and integrates into your existing GitHub Actions workflow without a separate security team to manage it.

The five layers of DevSecOps

A complete DevSecOps pipeline scans at five different layers. Each catches a different category of vulnerability:

| Layer | What it catches | Tool | Speed |
| --- | --- | --- | --- |
| Secret scanning | API keys, passwords, tokens committed to code | GitLeaks | < 30s |
| Dependency scanning | Known vulnerabilities in third-party packages | Snyk / Dependabot | < 60s |
| Static analysis (SAST) | Code-level vulnerabilities (SQL injection, XSS, etc.) | Semgrep | 1-3 min |
| Container scanning | Vulnerabilities in base images and OS packages | Trivy | 1-2 min |
| Dynamic analysis (DAST) | Runtime vulnerabilities in deployed applications | OWASP ZAP | 5-15 min |

You don't need all five on day one. Start with secrets and dependencies (they catch the highest-severity issues with the least effort), then layer in the rest as your pipeline matures.

Layer 1: Secret scanning with GitLeaks

Leaked secrets are the most critical vulnerability category because they provide direct access to your systems. A single committed AWS key can cost you thousands in unauthorized compute charges -- or worse, a data breach.

GitLeaks scans your codebase for patterns that look like secrets: API keys, database passwords, private keys, tokens. It runs in seconds and has a low false-positive rate.

```toml
# .gitleaks.toml -- customize to reduce noise
[allowlist]
  description = "Global allowlist"
  paths = [
    '''(.*)test(.*)''',
    '''(.*)mock(.*)''',
    '''(.*)fixture(.*)''',
  ]
  regexes = [
    '''EXAMPLE_KEY''',
    '''test-api-key''',
  ]
```

What it catches that matters: Real API keys, database connection strings, private keys, OAuth tokens. We've seen production AWS credentials, Stripe secret keys, and database passwords caught by GitLeaks in client codebases -- all of which would have been catastrophic if they'd reached a public repository.

What's noise: Test fixtures with fake keys, documentation examples, environment variable templates. Use the allowlist to suppress these.
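To build intuition for what GitLeaks is doing, here's a toy sketch of regex-based secret detection with a path allowlist. The two patterns are simplified illustrations, not GitLeaks' actual rules:

```javascript
// Toy version of the pattern matching a secret scanner performs.
// Real GitLeaks rules add entropy checks and hundreds more patterns.
const SECRET_PATTERNS = [
  { id: "aws-access-key", regex: /AKIA[0-9A-Z]{16}/ },
  { id: "generic-api-key", regex: /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9]{20,}['"]/i },
];

// Mirrors the .gitleaks.toml allowlist: skip test/mock/fixture paths.
const ALLOWLIST_PATHS = [/test/, /mock/, /fixture/];

function scanLine(path, line) {
  if (ALLOWLIST_PATHS.some((p) => p.test(path))) return [];
  return SECRET_PATTERNS.filter((p) => p.regex.test(line)).map((p) => p.id);
}

console.log(scanLine("src/config.js", 'apiKey = "a1b2c3d4e5f6g7h8i9j0k1"')); // flagged
console.log(scanLine("test/fixtures.js", 'apiKey = "a1b2c3d4e5f6g7h8i9j0k1"')); // allowlisted
```

The same finding in `src/` and `test/` gets opposite treatment, which is exactly the signal-versus-noise trade the allowlist encodes.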

Layer 2: Dependency scanning

Your application's dependencies are the single largest attack surface you have. A typical Node.js application pulls in 500-1,500 transitive dependencies. Each one is a potential vulnerability.

Dependabot is free and built into GitHub. Enable it and configure it to open PRs automatically:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "monday"
    open-pull-requests-limit: 10
    reviewers:
      - "your-team"
    labels:
      - "dependencies"
      - "security"
    # Group minor and patch updates to reduce PR noise
    groups:
      production-deps:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"
```

Snyk provides deeper analysis if you need it -- it understands whether a vulnerable code path is actually reachable in your application, which dramatically reduces false positives. The free tier covers most small-to-mid-size teams.

What it catches that matters: Known CVEs in your dependency tree, especially critical and high severity ones with public exploits. The Log4Shell vulnerability (CVE-2021-44228) was caught by dependency scanners days before most teams knew it existed.

What's noise: Low-severity findings in dev-only dependencies. Filter your pipeline to fail only on high and critical findings in production dependencies.
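That filtering policy can be sketched as a small script. The report shape below is a simplified stand-in (the `devOnly` field is an assumption for illustration) -- map it onto the actual `npm audit --json` output for your npm version:

```javascript
// Sketch of the "fail only on high/critical in production deps" policy.
const FAIL_SEVERITIES = new Set(["high", "critical"]);

// `report.vulnerabilities` maps package name -> finding. `devOnly` is an
// assumed field for illustration, not npm's exact schema.
function actionableFindings(report) {
  return Object.entries(report.vulnerabilities)
    .filter(([, v]) => FAIL_SEVERITIES.has(v.severity) && !v.devOnly)
    .map(([name, v]) => ({ name, severity: v.severity }));
}

const report = {
  vulnerabilities: {
    lodash: { severity: "high", devOnly: false },
    "some-linter": { severity: "critical", devOnly: true },
    express: { severity: "low", devOnly: false },
  },
};
console.log(actionableFindings(report)); // only lodash makes the cut
```

A dev-only critical and a production low both drop out; only the production high survives, and an empty result means the gate passes.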

Layer 3: Static analysis with Semgrep

SAST tools analyze your source code for vulnerability patterns without running it. Semgrep is our recommendation because it's fast, has excellent language support, and its rule syntax is readable enough that you can write custom rules.

```yaml
# .semgrep.yml -- custom rules for your codebase
rules:
  - id: no-raw-sql-queries
    patterns:
      - pattern: $DB.query($SQL, ...)
      - pattern-not: $DB.query($SQL, $PARAMS, ...)
    message: "Use parameterized queries to prevent SQL injection"
    severity: ERROR
    languages: [typescript, javascript]

  - id: no-eval
    pattern: eval(...)
    message: "eval() is a security risk -- use safer alternatives"
    severity: ERROR
    languages: [typescript, javascript]

  - id: no-hardcoded-jwt-secret
    pattern: jwt.sign($PAYLOAD, "...", ...)
    message: "JWT secret should come from environment variables, not hardcoded strings"
    severity: ERROR
    languages: [typescript, javascript]
```

What it catches that matters: SQL injection, cross-site scripting, insecure cryptographic usage, hardcoded secrets in code (complements GitLeaks), and unsafe deserialization. These are the vulnerability classes that lead to actual breaches.

What's noise: Style issues masquerading as security findings, findings in generated code. Use Semgrep's severity levels and focus your CI gate on ERROR-level findings only.
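To see what the `no-raw-sql-queries` rule is steering you toward, compare string interpolation with a parameterized call. The `db` client here is a stand-in with a node-postgres-style `query(text, values)` signature:

```javascript
// Flagged by the rule: user input interpolated into the SQL string.
function findUserUnsafe(db, email) {
  return db.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Passes the rule: a placeholder plus a values array keeps input inert.
function findUserSafe(db, email) {
  return db.query("SELECT * FROM users WHERE email = $1", [email]);
}

// A fake client that just records what it was asked to run.
const fakeDb = { query: (text, values) => ({ text, values }) };

const evil = "'; DROP TABLE users; --";
console.log(findUserUnsafe(fakeDb, evil).text); // injected SQL lands in the query text
console.log(findUserSafe(fakeDb, evil));        // input stays in the values array
```

In the safe version the attacker-controlled string never becomes part of the SQL; the driver sends it as data, which is the whole point of parameterization.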


Layer 4: Container scanning with Trivy

If you're deploying containers (and you should be), the base image you choose brings along hundreds of OS-level packages, each with their own vulnerability history. Trivy scans your container images and reports known CVEs.

```bash
# Scan a local image
trivy image --severity HIGH,CRITICAL your-app:latest

# Scan and fail CI on critical findings
trivy image --exit-code 1 --severity CRITICAL your-app:latest

# Scan your IaC files too
trivy config --severity HIGH,CRITICAL ./terraform/
```

Pro tip: Use minimal base images. Switching from node:20 to node:20-alpine typically reduces your vulnerability count by 60-80% because Alpine has a fraction of the installed packages. Distroless images are even leaner.
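To make the Alpine switch concrete, here's a hypothetical multi-stage Dockerfile sketch -- the build script, `dist/` output path, and entrypoint are assumptions to adapt to your project:

```dockerfile
# Build with the full image, ship on Alpine so Trivy has
# far fewer OS packages to flag.
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```

The build toolchain (and its vulnerability surface) stays in the discarded first stage; only the runtime and production dependencies ship.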

Putting it together: the complete pipeline

Here's a GitHub Actions workflow that chains all four automated layers together. DAST (Layer 5) runs separately against your staging environment after deployment -- it's too slow and too flaky for PR-level checks.

```yaml
# .github/workflows/security.yml
name: Security Pipeline

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

permissions:
  contents: read
  security-events: write

jobs:
  secret-scan:
    name: Secret Scanning
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Run GitLeaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  dependency-scan:
    name: Dependency Scanning
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install dependencies
        run: npm ci

      - name: Run npm audit
        run: npm audit --audit-level=high --omit=dev
        continue-on-error: false

      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high --all-projects
        continue-on-error: false

  sast:
    name: Static Analysis
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/owasp-top-ten
            p/nodejs
            p/typescript
            .semgrep.yml
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}

  container-scan:
    name: Container Scanning
    runs-on: ubuntu-latest
    needs: [secret-scan, dependency-scan, sast]
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t app:ci .

      - name: Run Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: "app:ci"
          format: "sarif"
          output: "trivy-results.sarif"
          severity: "HIGH,CRITICAL"
          exit-code: "1"

      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: "trivy-results.sarif"

  security-gate:
    name: Security Gate
    runs-on: ubuntu-latest
    needs: [secret-scan, dependency-scan, sast, container-scan]
    steps:
      - name: All security checks passed
        run: echo "All security scans passed successfully"
```

Notice a few design decisions here:

  • Secret scanning, dependency scanning, and SAST run in parallel. They don't depend on each other, so running them simultaneously keeps the pipeline fast.
  • Container scanning runs after the first three pass. No point building and scanning an image if the code itself has problems.
  • The security gate job gives you a single status check to require on your branch protection rules. If any scan fails, the gate fails, and the PR can't merge.
  • SARIF output from Trivy integrates with GitHub's Security tab, giving you a centralized view of findings alongside Dependabot alerts.

Tuning for signal over noise

The biggest mistake teams make is setting every scanner to maximum sensitivity on day one. You'll get hundreds of findings, most of which are low-risk, and your team will learn to ignore the security checks entirely.

Instead, start with a narrow, high-severity-only gate and widen gradually:

  1. Week 1: Enable secret scanning and dependency scanning (high/critical only). These catch the most dangerous issues with nearly zero false positives.
  2. Week 2: Add Semgrep with the OWASP Top 10 ruleset. Review findings and suppress false positives.
  3. Week 3: Add container scanning with Trivy (critical only at first).
  4. Week 4+: Gradually lower thresholds as your team builds the habit of addressing findings quickly.

The goal is a pipeline where every finding is actionable. If your team is clicking "dismiss" on most findings, your thresholds are too aggressive.

Compliance-as-code bonus

If you're pursuing SOC 2 or ISO 27001, this pipeline generates evidence automatically. Every PR shows that security scanning ran and passed before code reached production. Export your workflow run history as compliance evidence -- your auditor will love you for it.

```yaml
# Add to your security gate job for compliance logging
- name: Log compliance evidence
  run: |
    echo "Security scan completed at $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> compliance-log.txt
    echo "Commit: ${{ github.sha }}" >> compliance-log.txt
    echo "PR: ${{ github.event.pull_request.number }}" >> compliance-log.txt
    echo "All checks: PASSED" >> compliance-log.txt
```
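To keep that log available after the run completes, a follow-up step using the standard actions/upload-artifact action works well (the artifact name and 90-day retention here are arbitrary choices, not requirements):

```yaml
# Persist the evidence beyond the workflow run for auditors
- name: Upload compliance evidence
  uses: actions/upload-artifact@v4
  with:
    name: compliance-log-${{ github.sha }}
    path: compliance-log.txt
    retention-days: 90
```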

The bottom line

A DevSecOps pipeline isn't a one-time project -- it's a practice. Start with the layers that catch the highest-severity issues (secrets and dependencies), add layers as your team matures, and tune relentlessly for signal over noise.

The pipeline above runs in under 5 minutes for most codebases and catches the vulnerability categories responsible for the vast majority of real-world breaches. That's a better security posture than most companies achieve with a dedicated security team -- and it runs automatically on every pull request.
