How to fix Playwright worker crashes from memory pressure in parallel runs?

Playwright

Memory-related worker crashes in Playwright parallel runs manifest as browser processes being killed mid-test, producing errors like TargetClosedError or Protocol error: Target closed with no obvious cause in your test code. Each Playwright worker launches its own browser instance (a full Chromium process) that consumes roughly 150–400 MB of RAM depending on page complexity. Multiplied by the default worker count (half of the logical CPU cores), the total memory footprint can easily exceed CI container limits, causing the OS to OOM-kill browser processes unpredictably.
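As a quick sanity check, the expected footprint can be estimated from the core count. A minimal sketch; the 300 MB per-browser figure is an assumed ballpark, not a measured value:

```typescript
import * as os from 'os';

// Rough peak-memory estimate. PER_WORKER_MB is an assumption; profile
// your own suite for a real number.
const PER_WORKER_MB = 300;
// Playwright's default worker count is half of the logical CPU cores
const defaultWorkers = Math.max(1, Math.floor(os.cpus().length / 2));
const estimatedPeakMB = defaultWorkers * PER_WORKER_MB;
console.log(`~${estimatedPeakMB} MB peak across ${defaultWorkers} workers`);
```

If the estimate is anywhere near your CI container's memory limit, expect sporadic OOM kills rather than clean failures.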

Common mistake

// playwright.config.ts — default config with no memory considerations
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,
  // On a 16-core CI runner: 8 default workers × 300 MB ≈ 2.4 GB peak
  // Typical container memory limit: 2 GB → OOM kills
});

Large test fixtures loaded in every test compound the problem:

test.beforeEach(async ({ page }) => {
  // Loading a 5 MB JSON fixture in every test across 16 workers
  await page.route('**/api/data', (route) =>
    route.fulfill({ json: require('./fixtures/large-dataset.json') })
  );
});

The fix

Cap workers in CI, isolate heavy test suites, and avoid loading large data in every test:

import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Cap workers on CI; locally, fall back to Playwright's default
  workers: process.env.CI ? 2 : undefined,
  // Disable intra-file parallelism on CI to bound concurrent pages
  fullyParallel: !process.env.CI,
  retries: process.env.CI ? 2 : 0,
  use: {
    trace: 'on-first-retry',
    launchOptions: {
      // Write Chromium shared memory files to /tmp instead of the small /dev/shm
      args: ['--disable-dev-shm-usage'],
    },
  },
});
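Isolating heavy suites can be done with projects. A sketch, assuming the memory-heavy specs live under a heavy/ directory (the path and project names here are illustrative):

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  workers: process.env.CI ? 2 : undefined,
  projects: [
    // Everything except the heavy specs keeps full parallelism
    { name: 'default', testIgnore: '**/heavy/**', fullyParallel: true },
    // Heavy specs run tests serially within each file
    { name: 'heavy', testMatch: '**/heavy/**', fullyParallel: false },
  ],
});
```

Since fullyParallel can be set per project, this keeps fast suites fast while capping the page count the heavy suite can hold open at once.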

For suites with large fixtures, load them once at the worker level:

import { test as base, expect } from '@playwright/test';

// Worker-scoped fixture: created once per worker, not once per test
const test = base.extend<{}, { largeDataset: object }>({
  largeDataset: [
    async ({}, use) => {
      // JSON loaded via dynamic import lives on the module's default export
      const { default: data } = await import('./fixtures/large-dataset.json');
      await use(data);
    },
    { scope: 'worker' },
  ],
});

test('renders table with data', async ({ page, largeDataset }) => {
  await page.route('**/api/data', (route) =>
    // `json:` serializes the object and sets the application/json content type
    route.fulfill({ json: largeDataset })
  );
  await page.goto('/table');
  await expect(page.getByRole('table')).toBeVisible();
});

For diagnosing which tests cause peak memory, run shards sequentially and observe memory:

# Run each shard with 1 worker to find memory-heavy tests
npx playwright test --shard=1/4 --workers=1
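To correlate individual tests with memory growth, a hook can log the runner process's resident set size after each test. Note this measures only the Node-side runner, not the browser processes, but spikes there often track fixture loading. A minimal sketch (the hook wiring in the comment is illustrative):

```typescript
// Sketch: snapshot this process's resident memory in MB. Call it from a
// test.afterEach hook, e.g.:
//   test.afterEach(async ({}, testInfo) => {
//     console.log(`[mem] ${testInfo.title}: ${rssMB()} MB RSS`);
//   });
function rssMB(): number {
  return Math.round(process.memoryUsage().rss / 1024 / 1024);
}

console.log(`runner RSS: ${rssMB()} MB`);
```

Sorting the resulting log by RSS usually points straight at the tests worth moving into an isolated shard.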

Why it works

Worker-scoped fixtures in Playwright are created once per worker process and shared across all tests in that worker. Loading a large JSON fixture at worker scope instead of test scope cuts allocations from one per test to one per worker: a 100-test suite on a 2-worker config goes from 100 allocations to 2. The --disable-dev-shm-usage flag makes Chromium write its shared memory files to /tmp instead of /dev/shm, which Docker caps at 64 MB by default; this prevents renderer and GPU process crashes in memory-constrained containers.

Tips

  • Monitor your CI job's peak memory using the runner's built-in resource metrics — most CI platforms show memory graphs per job run.
  • If specific test files consistently cause worker crashes when run in parallel, mark them with test.describe.configure({ mode: 'serial' }) to run them sequentially within a shard.
  • Close extra pages your tests open manually (for example via context.newPage()) in a test.afterEach hook; the built-in page fixture is torn down automatically, but manually opened pages are not.
  • When crashes persist despite worker reduction, check whether your test suite creates contexts manually without closing them — leaked contexts accumulate browser processes until OOM.