
Netlify Regression Testing Strategy for Deploy Previews

Jan, 2026

Quick summary: Build a regression testing pipeline for Netlify deploy previews using Playwright and GitHub Actions. Test against actual Netlify infrastructure instead of localhost to catch bugs before production. This guide covers three workflow approaches (wait-for-netlify-deploy action, get-netlify-url with polling, manual workflow_dispatch), Netlify-specific authentication (Basic, Team, SSO), parallel execution with sharding, selective testing, and cost optimization. Includes complete code examples for Playwright configuration, GitHub Actions workflows, and handling Netlify's unique deployment triggers.

Table of Contents

  1. Introduction
  2. Understanding Netlify's Deployment Architecture
  3. Setting Up Playwright for Netlify Deploy Previews
  4. GitHub Actions Workflows: From Trigger to Results
  5. Advanced Patterns for Robust Pipelines
  6. Best Practices and Common Pitfalls
  7. When Traditional Automation Isn't Enough
  8. Conclusion
  9. FAQ

Introduction

You pushed your code. All tests passed locally. You merged to main. Netlify deployed to production. Three hours later, production was down. A critical bug slipped through because your tests never ran against a real Netlify deployment.

This happens more often than you think. Teams ship with confidence, only to discover their local testing environment doesn't match Netlify's infrastructure. Different build artifacts, different edge behavior, different environment variables: plenty breaks that your local tests never catch.

Netlify's deploy previews change the game for Netlify regression testing. Every push to a branch generates a live, production-like URL you can test against. Combine that with automated regression testing, and suddenly you're catching bugs before they ever reach production.

This guide shows you how to build a robust regression testing pipeline for Netlify deploy previews using Playwright and GitHub Actions. You'll get complete, production-ready code examples, Netlify-specific authentication strategies, and battle-tested patterns that intermediate-to-advanced developers need.

Netlify Regression Testing: Understanding Deployment Architecture

Before building your testing pipeline, understand how Netlify deploy previews work. Every push to a non-main branch triggers Netlify to build your code, deploy artifacts to the global CDN, and generate a unique preview URL.

Netlify dashboard showing deploy previews

Preview URLs follow this pattern: https://deploy-preview-<PR number>--<site name>.netlify.app. Each deployment has isolated cache, environment variables, and function instances. This isolation is perfect for regression testing: each test run targets a fresh, production-like environment.
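
As a quick sanity check, that pattern can be expressed as a tiny helper; the site name and PR number below are placeholders, not values from this guide:

```typescript
// Derive a deploy preview URL from a PR number and Netlify site name.
// "my-site" is a placeholder: substitute your actual Netlify site name.
function previewUrl(prNumber: number, siteName: string): string {
  return `https://deploy-preview-${prNumber}--${siteName}.netlify.app`;
}

console.log(previewUrl(42, "my-site"));
// https://deploy-preview-42--my-site.netlify.app
```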

Testing against Netlify deployments differs from local testing. Your local dev server runs with different Node versions, build outputs, and network latency. Netlify Functions behave differently locally than on Netlify's infrastructure. Environment variables might not match. Database connections might use different endpoints. If you're unsure whether you need full regression testing or simpler integration tests, our integration vs E2E testing guide explains when to use each approach.

Netlify's atomic deployments mean each build replaces the previous deployment entirely rather than updating files in place. This affects how you handle test data and database resets between runs. The challenge becomes triggering tests at the right moment and handling Netlify's different authentication mechanisms.

The real advantage of regression testing on Netlify isn't just catching bugs; it's catching them in the same environment where they'd break production, with the same CDN, functions, and build process.

Playwright Netlify Integration: Setting Up for Deploy Previews

Playwright excels at Netlify integration: it handles parallel execution natively, generally runs faster than comparable frameworks, and supports multiple authentication strategies for Netlify's protection methods. If you're using Page Object Model for maintainable tests, see our Page Object Model guide; for configuration details, see the official Playwright documentation. While this guide focuses on Playwright, teams considering other frameworks should review our comprehensive framework comparison. Here's a complete configuration optimized for Netlify deploy previews.

import { defineConfig, devices } from '@playwright/test';
import dotenv from 'dotenv';
 
dotenv.config();
 
export default defineConfig({
  testDir: './tests/e2e',
  timeout: 30000,
  expect: {
    timeout: 5000,
  },
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
 
  reporter: [
    ['html', { outputFolder: 'playwright-report' }],
    ['blob', { outputFolder: 'blob-report' }],
    ['junit', { outputFile: 'test-results/junit.xml' }],
  ],
 
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
    headless: true,
    viewport: { width: 1280, height: 720 },
    ignoreHTTPSErrors: true,
 
    extraHTTPHeaders: {
      // Basic Auth for password-protected previews
      ...(process.env.NETLIFY_PREVIEW_BASIC_AUTH ? {
        'Authorization': `Basic ${process.env.NETLIFY_PREVIEW_BASIC_AUTH}`,
      } : {}),
    },
 
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
 
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
  ],
 
  webServer: process.env.CI ? undefined : {
    command: 'npm run dev',
    url: 'http://localhost:3000',
    reuseExistingServer: !process.env.CI,
    timeout: 120000,
    stdout: 'pipe',
    stderr: 'pipe',
  },
});

This configuration handles several Netlify-specific requirements. The BASE_URL environment variable lets you switch between localhost during development and your preview deployment during CI. The Authorization header handles Netlify's Basic auth protection for password-protected previews.
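
If you need to generate that Basic auth value, it's a one-liner in Node. The credentials below are placeholders; substitute your site's preview username and password, and store the result in GitHub Secrets as NETLIFY_PREVIEW_BASIC_AUTH:

```typescript
// Encode username:password for Netlify's Basic auth protection.
// Credentials here are placeholders for illustration only.
const user = "preview-user";
const pass = "s3cret-password";
const basicAuth = Buffer.from(`${user}:${pass}`).toString("base64");

console.log(basicAuth); // store this string in GitHub Secrets
```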

For team login or SSO protection, you'll need a different approach. Team login requires authentication cookies from a Netlify team member, and SSO tokens expire after 1 hour. Here's how to handle team login authentication:

// tests/auth-helper.ts
import { test as base, request, type APIRequestContext } from '@playwright/test';

export const test = base.extend<{ authContext: APIRequestContext }>({
  authContext: async ({}, use) => {
    // Reuse cookies captured from an authenticated Netlify session
    const authContext = await request.newContext({
      storageState: 'netlify-auth-state.json',
    });

    await use(authContext);
    await authContext.dispose();
  },
});

Notice the webServer configuration. When running locally (!process.env.CI), Playwright starts your dev server. In CI, webServer is undefined, so no local server starts; tests run directly against your Netlify preview deployment via BASE_URL.

For test isolation, reset your test database before each test suite. Netlify makes this straightforward with environment-specific database URLs:

// tests/setup.ts
import { test as base } from '@playwright/test';
import { resetDatabase } from './db-utils'; // your own database reset helper

const databaseUrl = process.env.NETLIFY_ENV === 'preview'
  ? process.env.DATABASE_URL_PREVIEW
  : process.env.DATABASE_URL_PRODUCTION;

export const test = base.extend<{ resetDb: void }>({
  // Auto fixture: reset the database before each test file
  resetDb: [async ({}, use) => {
    await resetDatabase(databaseUrl);
    await use();
  }, { auto: true }],
});

GitHub Actions Netlify: Three Regression Testing Workflows

Netlify doesn't provide a built-in deployment_status event or a native GitHub integration for deployment triggers. You have three approaches, each with trade-offs.

Approach 1: wait-for-netlify-deploy Action (Recommended)

The kukiron/wait-for-netlify-deploy@v1.2.2 action polls the Netlify API until your preview deployment completes successfully, then outputs the URL. This is the most reliable approach because tests only run when Netlify reports the deployment is actually ready.

name: E2E Tests on Netlify Deploy Preview
 
on:
  pull_request:
    types: [opened, synchronize, reopened]
 
jobs:
  e2e-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 30
 
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
 
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
 
      - name: Install dependencies
        run: npm ci
 
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
 
      - name: Wait for Netlify preview deployment
        uses: kukiron/wait-for-netlify-deploy@v1.2.2
        id: waitForNetlifyPreviewDeployment
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
        with:
          max_timeout: 600
          check_interval: 15
 
      - name: Run Playwright tests
        run: npx playwright test
        env:
          BASE_URL: ${{ steps.waitForNetlifyPreviewDeployment.outputs.url }}
          NETLIFY_PREVIEW_BASIC_AUTH: ${{ secrets.NETLIFY_PREVIEW_BASIC_AUTH }}
 
      - name: Upload Playwright report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30

This workflow extracts the preview URL from the wait-for-netlify-deploy action using ${{ steps.waitForNetlifyPreviewDeployment.outputs.url }}. The action polls Netlify's API every 15 seconds until the deployment status becomes deploy_succeeded.
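
Under the hood, this kind of action boils down to polling Netlify's public Deploys API (GET /api/v1/sites/:site_id/deploys). Here's a hedged sketch, assuming the same NETLIFY_SITE_ID and NETLIFY_AUTH_TOKEN variables as the workflow and Netlify's documented commit_ref, state, and deploy_ssl_url fields:

```typescript
type Deploy = { commit_ref: string; state: string; deploy_ssl_url: string };

// Pure selection logic: pick the ready deploy for a given commit SHA.
function findReadyPreview(deploys: Deploy[], sha: string): string | null {
  const match = deploys.find((d) => d.commit_ref === sha && d.state === "ready");
  return match ? match.deploy_ssl_url : null;
}

// Polling wrapper; assumes NETLIFY_SITE_ID and NETLIFY_AUTH_TOKEN env vars.
async function waitForPreview(sha: string, timeoutMs = 600_000): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetch(
      `https://api.netlify.com/api/v1/sites/${process.env.NETLIFY_SITE_ID}/deploys`,
      { headers: { Authorization: `Bearer ${process.env.NETLIFY_AUTH_TOKEN}` } },
    );
    const url = findReadyPreview((await res.json()) as Deploy[], sha);
    if (url) return url;
    await new Promise((r) => setTimeout(r, 15_000)); // poll every 15 s
  }
  throw new Error("Timed out waiting for the Netlify preview deployment");
}
```

The pure findReadyPreview helper keeps the selection logic testable without hitting the network.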

Approach 2: get-netlify-url Action with Polling

Sometimes you want more control or a simpler polling mechanism. The voorhoede/get-netlify-url@v2 action queries the Netlify API for the latest deploy preview URL without waiting for completion-you handle the waiting yourself.

name: E2E Tests on Netlify Preview (Manual Polling)
 
on:
  pull_request:
    types: [opened, synchronize, reopened]
 
jobs:
  test_setup:
    name: Get Netlify Preview URL
    runs-on: ubuntu-latest
    outputs:
      preview_url: ${{ steps.getNetlifyUrl.outputs.url }}
    steps:
      - name: Get Netlify preview URL
        uses: voorhoede/get-netlify-url@v2
        id: getNetlifyUrl
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
 
      - name: Wait for deployment readiness
        run: |
          timeout 600 bash -c 'until curl -f -s -o /dev/null ${{ steps.getNetlifyUrl.outputs.url }}; do sleep 15; done'
 
  test:
    name: Playwright Tests
    needs: test_setup
    runs-on: ubuntu-latest
    timeout-minutes: 60
 
    steps:
      - uses: actions/checkout@v4
 
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
 
      - name: Install dependencies
        run: npm ci
 
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
 
      - name: Run Playwright tests
        run: npx playwright test
        env:
          BASE_URL: ${{ needs.test_setup.outputs.preview_url }}
          NETLIFY_PREVIEW_BASIC_AUTH: ${{ secrets.NETLIFY_PREVIEW_BASIC_AUTH }}
 
      - name: Upload Playwright report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30

The polling approach gives you flexibility but adds complexity. You're responsible for implementing the wait logic correctly, and tests might run against incomplete deployments if the timeout isn't sufficient. Choose it when you want full control over the wait logic rather than delegating it to a single action.

Approach 3: Manual workflow_dispatch

For on-demand testing or debugging, use workflow_dispatch with manual URL input. This approach doesn't automate testing but gives you granular control when you need it.

name: E2E Tests (Manual Trigger)
 
on:
  workflow_dispatch:
    inputs:
      preview_url:
        description: 'Netlify preview URL to test'
        required: true
 
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 60
 
    steps:
      - uses: actions/checkout@v4
 
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
 
      - name: Install dependencies
        run: npm ci
 
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
 
      - name: Run Playwright tests
        run: npx playwright test
        env:
          BASE_URL: ${{ inputs.preview_url }}
          NETLIFY_PREVIEW_BASIC_AUTH: ${{ secrets.NETLIFY_PREVIEW_BASIC_AUTH }}
 
      - name: Upload Playwright report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30

GitHub Actions workflow running E2E tests on Netlify deployment

Which Approach to Choose?

The wait-for-netlify-deploy action is the simplest and most reliable choice for most teams: it actively polls Netlify, handles timeouts correctly, and only runs tests once the deployment has actually succeeded. Use get-netlify-url when you want a lighter-weight action and are willing to implement the wait logic yourself. Use manual workflow_dispatch only for on-demand testing or debugging; it's useful for ad-hoc runs but not recommended as your primary automated approach.

Netlify CI/CD Pipeline: Advanced Testing Patterns

Once you have basic workflows running, you'll want patterns that scale. Large test suites need parallelization. Frequent PRs need selective testing. CI budgets need optimization.

Netlify Preview URL Automation: Parallel Execution with Sharding

Playwright supports test sharding, splitting your test suite across multiple runners for faster execution. This is critical when your regression suite exceeds 10 minutes.

name: Sharded Playwright Tests on Netlify
 
on:
  pull_request:
    types: [opened, synchronize, reopened]
 
jobs:
  playwright-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 60
 
    strategy:
      fail-fast: false
      matrix:
        shardIndex: [1, 2, 3, 4]
        shardTotal: [4]
 
    steps:
      - uses: actions/checkout@v4
 
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
 
      - name: Install dependencies
        run: npm ci
 
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
 
      - name: Wait for Netlify preview deployment
        uses: kukiron/wait-for-netlify-deploy@v1.2.2
        id: waitForNetlifyPreviewDeployment
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
        with:
          max_timeout: 600
          check_interval: 15
 
      - name: Run Playwright tests (sharded)
        run: npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}
        env:
          BASE_URL: ${{ steps.waitForNetlifyPreviewDeployment.outputs.url }}
          NETLIFY_PREVIEW_BASIC_AUTH: ${{ secrets.NETLIFY_PREVIEW_BASIC_AUTH }}
 
      - name: Upload blob report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: blob-report-${{ matrix.shardIndex }}
          path: blob-report
          retention-days: 1
 
  merge-reports:
    needs: playwright-tests
    runs-on: ubuntu-latest
    if: always()
 
    steps:
      - uses: actions/checkout@v4
 
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
 
      - name: Install dependencies
        run: npm ci
 
      - name: Download blob reports
        uses: actions/download-artifact@v4
        with:
          pattern: blob-report-*
          path: all-blob-reports
          merge-multiple: true
 
      - name: Merge reports
        run: npx playwright merge-reports --reporter html ./all-blob-reports
 
      - name: Upload HTML report
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report
          path: playwright-report/

Sharding cuts a 20-minute test suite to roughly 5 minutes of wall-clock time across four runners. The blob reporter collects results from each shard, and the merge job combines them into a single HTML report.

Netlify Automated Testing: Selective Testing Based on Changed Files

Running your full regression suite on every PR change wastes CI minutes. A better approach: run only tests related to changed files.

name: Selective E2E Tests on Netlify
 
on:
  pull_request:
    types: [opened, synchronize, reopened]
 
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      changed_files: ${{ steps.changed-files.outputs.all_changed_files }}
 
    steps:
      - uses: actions/checkout@v4
 
      - name: Detect changed files
        id: changed-files
        uses: tj-actions/changed-files@v45
 
  wait-for-deploy:
    name: Wait for Netlify Preview
    needs: detect-changes
    runs-on: ubuntu-latest
    outputs:
      preview_url: ${{ steps.waitForNetlifyPreviewDeployment.outputs.url }}
    steps:
      - name: Wait for Netlify preview deployment
        uses: kukiron/wait-for-netlify-deploy@v1.2.2
        id: waitForNetlifyPreviewDeployment
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
        with:
          max_timeout: 600
          check_interval: 15
 
  e2e-tests:
    needs: [detect-changes, wait-for-deploy]
    runs-on: ubuntu-latest
    timeout-minutes: 30
 
    steps:
      - uses: actions/checkout@v4
 
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
 
      - name: Install dependencies
        run: npm ci
 
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
 
      - name: Run tests based on changed files
        run: |
          if echo "${{ needs.detect-changes.outputs.changed_files }}" | grep -q "pages/auth"; then
            npx playwright test --grep "@auth"
          elif echo "${{ needs.detect-changes.outputs.changed_files }}" | grep -q "pages/checkout"; then
            npx playwright test --grep "@checkout"
          else
            npx playwright test
          fi
        env:
          BASE_URL: ${{ needs.wait-for-deploy.outputs.preview_url }}
          NETLIFY_PREVIEW_BASIC_AUTH: ${{ secrets.NETLIFY_PREVIEW_BASIC_AUTH }}

Tag your tests by feature area: @auth, @checkout, @billing. Run only the relevant tags based on changed file paths. This typically reduces CI usage by 60-80% on PRs; run the full suite on main-branch merges to preserve complete coverage.
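
The inline bash mapping in the workflow above can also live in a small script, which is easier to unit test. The pages/... prefixes are examples; adjust them to your repository layout:

```typescript
// Map changed file paths to a Playwright --grep tag expression.
// Path prefixes ("pages/auth", "pages/checkout") are illustrative assumptions.
function grepForChanges(changedFiles: string[]): string | null {
  if (changedFiles.some((f) => f.startsWith("pages/auth"))) return "@auth";
  if (changedFiles.some((f) => f.startsWith("pages/checkout"))) return "@checkout";
  return null; // no mapping matched: run the full suite
}

console.log(grepForChanges(["pages/auth/login.tsx"])); // @auth
```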

Handling Flaky Tests

Flaky tests waste engineering time. The most common cause in Netlify testing? Tests running before deployments are fully ready. Implement explicit waits and retry logic. For broader strategies, see our guide on reducing test flakiness.

- name: Wait for deployment readiness
  run: |
    echo "Waiting for deployment to be ready..."
    timeout 600 bash -c '
      until curl -f -s -o /dev/null ${{ steps.waitForNetlifyPreviewDeployment.outputs.url }}/health; do
        echo "Deployment not ready, waiting 15 seconds..."
        sleep 15
      done
      echo "Deployment is ready!"
    '

Combine this with Playwright's retry configuration: 2 retries in CI, 0 locally, and trace recording on first retry. When tests fail, the trace file shows you exactly what happened.

Cost Optimization Strategies

GitHub Actions charges by minute. Reduce costs with:

  1. Cache dependencies and browsers: Use actions/cache@v4 for node_modules and Playwright's browser cache. This cuts 2-3 minutes per workflow.
  2. Shard intelligently: Group fast tests on fewer shards, slow tests on more shards.
  3. Selective testing by default: Run full suite only on main branch. PRs get selective testing.
  4. Limit browser diversity: Test on Chromium by default. Firefox and WebKit weekly, not on every PR.
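
To reason about these trade-offs concretely, here's a back-of-envelope cost model. All numbers are illustrative assumptions (the $0.008/min Linux runner rate and a 3-minute per-shard setup overhead), not values from GitHub's billing:

```typescript
// Rough monthly CI cost estimate. Total test minutes stay roughly constant
// when sharding, but setup (checkout, npm ci, browser install) repeats on
// every shard, so billed minutes grow with shard count.
function monthlyCiCost(
  suiteMinutes: number,
  runsPerMonth: number,
  shards = 1,
  setupMinutesPerShard = 3,
  ratePerMin = 0.008,
): number {
  const billedMinutesPerRun = suiteMinutes + setupMinutesPerShard * shards;
  return billedMinutesPerRun * runsPerMonth * ratePerMin;
}

console.log(monthlyCiCost(10, 200));    // 10-minute suite, 200 runs, unsharded
console.log(monthlyCiCost(10, 200, 4)); // same suite sharded across 4 runners
```

This is why caching (which shrinks setup minutes) pays off most on heavily sharded suites.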

Best Practices and Common Pitfalls

Building a robust pipeline requires avoiding common mistakes. Teams learn these the hard way; you don't have to.

Critical Practices

Always test against actual Netlify deployments. Local testing is for development speed. Regression testing must validate the exact environment that will serve production traffic. The Netlify Function you deployed behaves differently from your local dev server.

Testing locally gives you confidence. Testing against Netlify gives you certainty. The difference matters when you're deploying at 3 AM.

Wait for deployment readiness explicitly. Never use sleep 60 or fixed timeouts. Netlify deployments complete in 30 seconds sometimes, 5 minutes other times. Use active polling.

Configure proper authentication for your protection type. Netlify supports three protection methods, each requiring a different auth strategy: password protection uses Basic auth, team login requires cookies, and SSO requires token handling with 1-hour expiration. Pick the right strategy for your setup.
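
One way to keep those strategies straight is a small helper that builds the right headers per protection type. Only the Basic auth shape appears earlier in this guide; the SSO branch (the nf_jwt cookie and the NETLIFY_SSO_TOKEN variable) is an assumption to adapt to your setup:

```typescript
// Build HTTP headers for the active Netlify protection type.
// The "sso" branch is an assumption: Netlify's JWT-based protection commonly
// reads a token from the nf_jwt cookie; NETLIFY_SSO_TOKEN is a made-up name.
type Protection = "none" | "password" | "sso";

function authHeaders(
  protection: Protection,
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  switch (protection) {
    case "password":
      // base64("user:pass"), as stored in NETLIFY_PREVIEW_BASIC_AUTH
      return { Authorization: `Basic ${env.NETLIFY_PREVIEW_BASIC_AUTH}` };
    case "sso":
      // Short-lived token; remember the 1-hour expiry discussed above
      return { Cookie: `nf_jwt=${env.NETLIFY_SSO_TOKEN}` };
    default:
      return {};
  }
}
```

The result plugs straight into Playwright's extraHTTPHeaders option.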

Implement retry logic in CI. Configuring two retries in CI filters out most transient false negatives. When tests fail on retry 1 but pass on retry 2, it's usually a transient network issue, not a real bug.

Upload artifacts for debugging. Screenshots, videos, and trace files from failed tests save hours. Store them for 30 days, enough time to investigate flaky tests.

Common Pitfalls to Avoid

Assuming Netlify has native deployment events. It doesn't. You must use community actions or implement polling logic yourself, or tests won't trigger automatically.

Using the wrong authentication strategy. Tests fail with 401 or 403 errors, and it's easy to spend hours debugging test code before discovering the real cause: each Netlify protection type requires its own auth handling.

Forgetting about SSO token expiration. SSO tokens expire after 1 hour. If your test suite runs longer than this, tests fail mid-execution. Either implement token refresh or switch to Basic auth for CI environments.

Running full suite on every PR change. This kills CI budgets and slows down development. Use selective testing or run full suite only on main branch merges.

Not handling Netlify's atomic deployment behavior. Each deployment replaces the previous entirely. Test data you created in one deploy doesn't persist to the next. Implement proper data seeding and cleanup in your test setup.

| Practice | Why It Matters | How to Implement |
| --- | --- | --- |
| Test against Netlify deployments | Validates the actual production environment | Use the wait-for-netlify-deploy URL as BASE_URL |
| Use polling for deployment readiness | Netlify has no native deployment_status event | Use wait-for-netlify-deploy or manual polling |
| Match auth strategy to protection type | Netlify has 3 protection methods | Basic auth header, team login cookies, or SSO tokens |
| Implement retry logic | Reduces false negatives from transient issues | Set retries: 2 in the Playwright config for CI |
| Upload artifacts | Enables debugging failed tests | Always upload screenshots, videos, and traces |
| Handle SSO token expiration | SSO tokens expire after 1 hour | Implement refresh or use Basic auth for CI |

When Traditional Automation Isn't Enough

Traditional regression testing catches bugs before production, but at a cost. UI changes break brittle selectors, tests require constant maintenance, and false positives waste engineering time. Eventually, maintaining test suites costs more than the bugs they catch. Teams spend hours fixing tests instead of shipping features.

At scale, traditional automation breaks down. AI-powered, self-healing tests adapt automatically as your UI evolves. Tests are generated from user stories, not brittle scripts, eliminating ongoing maintenance. Teams focus on building, not babysitting test suites. Autonoma provides a no-code platform for zero-maintenance test automation.

Conclusion

Building a robust regression testing pipeline transforms how you ship code. Bugs get caught before production. PR reviews become faster. Confidence in deployments increases.

Start with the wait-for-netlify-deploy action workflow. Configure Playwright with the right authentication for your protection type. Add selective testing for PRs. Implement sharding when your suite grows. Optimize costs with caching and intelligent test selection.

The competitive advantage isn't just catching bugs; it's catching them in the same environment where they'd break production. Your competitors might have tests. You'll have tests that actually work: on Netlify infrastructure, with proper authentication and reliable deployment triggers.

FAQ

How do I get a Netlify deploy preview URL in GitHub Actions?

Use community GitHub Actions like voorhoede/get-netlify-url@v2 or kukiron/wait-for-netlify-deploy@v1.2.2. These actions query the Netlify API and output the preview URL. Alternatively, use Netlify webhooks to trigger workflows with deployment details.

How do I test password-protected deploy previews?

Netlify uses Basic auth for password protection on Pro plans. Configure Playwright to send an Authorization: Basic base64(username:password) header. Store credentials in GitHub Secrets and encode them in your workflow. Team login and SSO protection require different auth strategies.

How do I handle team login or SSO protection?

Team login requires authentication cookies from a Netlify team member. SSO tokens expire after 1 hour (check the `Netlify-Site-Protection-Expires-In` response header). For automation, either disable protection for testing or use Basic auth instead of team login.

How long should tests wait for a Netlify deployment?

Use polling actions instead of fixed timeouts. Netlify deployments typically complete in 1-3 minutes, but complex builds can take up to 10 minutes. Set a maximum timeout of 600 seconds and poll every 15 seconds until the deployment reaches the deploy_succeeded state.

Which GitHub Action should I use?

Use kukiron/wait-for-netlify-deploy@v1.2.2 for most cases: it waits for the deployment and outputs the URL. Use voorhoede/get-netlify-url@v2 if you need simpler polling. Use manual workflow_dispatch only for on-demand testing or debugging.

How do I reduce flaky tests against deploy previews?

Implement three strategies: retry logic (2 retries in CI), deployment readiness checks (polling instead of fixed delays), and trace recording on first retry. When tests fail consistently across retries, it's a real bug. When they fail randomly, investigate Netlify deployment timing or CDN edge propagation.

Can I use Cypress instead of Playwright?

Yes. Cypress works with Netlify deploy previews. Configure CYPRESS_BASE_URL as your preview URL and use cy.intercept to add auth headers for deployment protection. However, Playwright offers better CI/CD integration, native parallel execution, and faster execution times.

How much does this pipeline cost in CI minutes?

For a 10-minute test suite: roughly 10 billed minutes per PR, which is about $0.08 per PR at GitHub Actions' standard Linux rate of $0.008 per minute. With caching and selective testing, most teams spend $10-50/month on CI. Sharding keeps total test minutes roughly constant but adds per-shard setup overhead to the bill.

How often should regression tests run?

Run tests on every push to non-main branches. This catches issues early in the development cycle. Run the full regression suite on main branch merges. For teams with high PR volume, implement selective testing on pushes and the full suite on PR merges only.

What happens when tests fail on a PR?

GitHub Actions marks the workflow as failed. If you've configured the workflow as a required status check in your repository settings, the PR cannot be merged until tests pass. The workflow uploads test reports and artifacts for debugging; review these in the Actions tab to investigate failures.

Can I test multiple Netlify sites in one workflow?

Yes. Configure your workflow with multiple jobs, each targeting a different Netlify site. Use GitHub Secrets to store multiple NETLIFY_AUTH_TOKEN values and site IDs. Each job waits for its respective deployment to complete before running tests.

How do I handle SSO token expiration during long runs?

SSO tokens expire after 1 hour. If your test suite runs longer than this, implement token refresh logic or switch to Basic auth protection for CI environments. Check the `Netlify-Site-Protection-Expires-In` response header to monitor expiration.