The Setup: next dev in Production
Our production app had been running `next dev` for months. Nobody noticed because: (a) dev mode actually works, just slowly; (b) we were focused on features, not infrastructure; and (c) the VPS had enough RAM to absorb dev mode's overhead. We only discovered it when we tried to add a proper build step.
The moment we ran `next build` for the first time, the process consumed 28 GB of RSS and got OOM-killed by the kernel. Our 8-core VPS has 16 GB of RAM plus swap. The build never stood a chance.
What followed was a 10-attempt, multi-day odyssey through Turbopack OOM kills, Webpack swap thrashing, Tailwind scanner explosions, and ultimately the discovery that one configuration entry changes everything.
Why the Build Uses 30 GB
When you run `next build`, the bundler resolves the dependency tree for every imported package — for both client and server bundles. If your server code imports `@aws-sdk/client-s3`, the bundler traces all 50–80 transitive dependencies (`@aws-sdk/middleware-*`, `@smithy/*`, etc.) and builds module graphs for both sides.
Our app imports 12 server-only packages, several of which have massive dependency trees:
- `@aws-sdk/client-s3` — 50–80 transitive packages
- `duckdb` / `duckdb-async` — native binary (~100 MB)
- `pg` / `pg-boss` / `pg-copy-streams` — native bindings, libpq
- `parquetjs` — Thrift, snappy, compression libs
- `@modelcontextprotocol/sdk` — protocol buffers, transport
- `commander`, `tsx`, `bottleneck`, `dotenv`, `gray-matter`, `remark`, `remark-html`
None of these packages run in browsers. But the bundler doesn't know that. It resolves every import, builds every module graph, and holds all of it in memory simultaneously. At scale, this means gigabytes of ASTs and module metadata that serve no purpose in the client bundle.
The 10 Attempts
Here's every approach we tried, with measured results on an 8-core, 16 GB VPS:
| # | Bundler | Key Change | Peak RSS | Time | Result |
|---|---------|------------|----------|------|--------|
| 1 | Turbopack | None (first ever build) | 28 GB | — | OOM |
| 2 | — | Tailwind v3 downgrade | — | — | Broke CSS |
| 3 | Turbopack | Stop pdl-web service | 28 GB | — | OOM |
| 4 | Turbopack | `@source` + stop ALL services | ~31 GB | — | Fragile |
| 5a | Turbopack | + `outputFileTracingExcludes` | 31 GB | — | OOM |
| 5b | Turbopack | + move data/ directory | 31 GB | — | OOM |
| 6 | Webpack | `@source` + excludes (no `serverExternalPackages`) | 30 GB | 5 min | Swap |
| 7 | Turbopack | + `serverExternalPackages` | 31 GB | — | OOM |
| 8 | Turbopack | + all optimizations combined | 31 GB | — | OOM |
| 9 | Webpack | + `serverExternalPackages` | ??? | — | Unmeasured |
| 10 | Webpack | `serverExternalPackages` (measured) | 1.16 GB | 50 s | Baseline |
Attempt 4 is the most interesting failure. Turbopack did succeed once — but only after stopping every other process on the VPS and letting it use 31 GB (RAM + swap). The build was fragile and unrepeatable. Turbopack's Rust-based allocator (jemalloc) doesn't respect V8-style GC pressure signals, so memory grows monotonically until the OOM killer intervenes.
The Turbopack Dead End
Attempts 7 and 8 prove something important: `serverExternalPackages` doesn't help Turbopack on a memory-constrained machine. Even with 12 packages excluded, Turbopack still OOM-killed at 31 GB.
The reason is architectural. Turbopack uses Rust with jemalloc. Once memory is allocated, jemalloc rarely returns it to the OS — it holds freed pages for reuse. Webpack, by contrast, runs in V8's managed heap where the garbage collector can reclaim memory between compilation phases. On a 16 GB VPS, this difference is fatal.
If your machine has abundant RAM (32+ GB), Turbopack may build faster. But on VPS instances, CI runners, or any environment where memory is constrained, Webpack with `serverExternalPackages` is the reliable choice. Don't assume the newer bundler is better for your environment.
The Fix: serverExternalPackages
`serverExternalPackages` tells Next.js: "these packages are server-only. Don't resolve their dependency trees for client bundles. Just `require()` them at runtime."
The bundler stops tracing imports for these packages entirely. No ASTs, no module graphs, no transitive dependency resolution for the client side. The packages are loaded via Node.js's native `require()` at runtime, exactly as they would be in a non-bundled Node.js application.
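Conceptually, the externalization decision is a prefix match against the configured list. Here's a simplified sketch of that decision for illustration only — `isExternal` is a hypothetical helper, not Next.js's actual resolver, and real bundlers handle more cases (aliases, conditional exports):

```typescript
// Simplified model: a module specifier is external if it names a
// listed package or a subpath inside one (e.g. "pg/lib/client").
const externals = ["duckdb", "@aws-sdk/client-s3", "pg"];

function isExternal(specifier: string): boolean {
  return externals.some(
    (pkg) => specifier === pkg || specifier.startsWith(pkg + "/")
  );
}

// External: emitted as a runtime require(), dependency tree untouched.
// Not external: the bundler recurses into the package's module graph.
console.log(isExternal("pg"));            // external
console.log(isExternal("pg/lib/client")); // subpath, also external
console.log(isExternal("react"));         // bundled normally
```

Note that the match is exact on the package name: `pg-boss` would not be covered by listing `pg`, which is why each package needs its own entry.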
```ts
const nextConfig = {
  // Force Webpack (Turbopack OOMs on a 16 GB VPS):
  // run `next build --webpack`
  serverExternalPackages: [
    "duckdb", "duckdb-async",
    "@aws-sdk/client-s3",
    "pg", "pg-boss", "pg-copy-streams",
    "parquetjs", "@modelcontextprotocol/sdk",
    "commander", "tsx", "bottleneck", "dotenv",
    "gray-matter", "remark", "remark-html",
  ],
};

export default nextConfig;
```
That's it. This single configuration entry dropped peak RSS from 30 GB to 1.16 GB — a 26x reduction. Build time dropped from 5 minutes (with swap thrashing) to 50 seconds (warm cache). Swap usage went from ~3 GB to zero.
The Misleading Attempt #9
This is worth calling out because it almost derailed us. Attempt #9 applied `serverExternalPackages` to Webpack and appeared to still use 30 GB. We nearly concluded the fix didn't work.
The problem: the `.next` cache was stale from a previous build without the config. The bundler was reusing cached module graphs that still included the fully resolved dependency trees. Attempt #10 ran after clearing the cache, and peak RSS dropped to 1.16 GB.
If you add or modify `serverExternalPackages`, delete the `.next` directory before building. Stale caches reuse the old module graphs and you won't see the improvement. Run `rm -rf .next && next build --webpack`.
Supporting Cast
`serverExternalPackages` is the critical fix, but two other configurations contributed to a clean build:
Tailwind @source Directive
Tailwind CSS v4 scans for class names by default, walking the entire file tree. On our VPS, this included multi-gigabyte data directories, Parquet files, and `node_modules`. The `@source` directive constrains the scanner to only look at actual source files:
```css
@import "tailwindcss";
@source "../src/**/*.{ts,tsx}";
@source "../components/**/*.{ts,tsx}";
```
outputFileTracingExcludes
This tells Next.js's file tracing (used for standalone builds and deployment) to skip large directories during build analysis:
```ts
outputFileTracingExcludes: {
  "/**": [
    "./data/**",
    "./public/uploads/**",
    "./.git/**",
  ],
},
```
Neither of these alone solved the OOM problem. Attempt #6 had both of these but not `serverExternalPackages` — it still peaked at 30 GB. They're good practice, but `serverExternalPackages` is what actually fixes the memory explosion.
How to Audit Your Own Build
If your Next.js build is consuming more memory than expected, here's how to diagnose it:
1. Measure peak RSS
```bash
# Clean build with measurement
rm -rf .next
/usr/bin/time -v npm run build 2>&1 | grep "Maximum resident"
```
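GNU time reports the peak in kilobytes, so comparing runs means converting by hand. A small helper can do the conversion when you're logging builds programmatically — a sketch, with `peakRssGb` as a hypothetical name and an illustrative sample value:

```typescript
// Extract peak RSS in GiB from GNU time's verbose (-v) output.
// GNU time reports "Maximum resident set size" in kilobytes (KiB).
function peakRssGb(timeOutput: string): number | null {
  const m = timeOutput.match(/Maximum resident set size \(kbytes\): (\d+)/);
  if (!m) return null;
  return Number(m[1]) / (1024 * 1024); // KiB -> GiB
}

const sample = "Maximum resident set size (kbytes): 1216000";
console.log(peakRssGb(sample)?.toFixed(2)); // "1.16"
```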
2. Identify server-only packages
Any package that:
- Uses Node.js built-ins (`fs`, `net`, `child_process`)
- Has native bindings (C/C++ addons, Rust via NAPI)
- Is only imported in API routes, server components, or server actions
- Has a large transitive dependency tree

...should go in `serverExternalPackages`.
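A quick way to surface candidates is to scan your source for imports of Node built-ins. This is a rough heuristic sketch, not a complete audit: `serverOnlySpecifiers` is a hypothetical helper, the built-ins list here is partial, and the regex misses dynamic `import()` and `require()` calls:

```typescript
// Flag import specifiers that indicate server-only code.
const NODE_BUILTINS = new Set(["fs", "net", "child_process", "path", "crypto"]);

function serverOnlySpecifiers(source: string): string[] {
  const hits: string[] = [];
  const re = /from\s+["']([^"']+)["']/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(source)) !== null) {
    const spec = m[1].replace(/^node:/, ""); // normalize "node:fs" -> "fs"
    if (NODE_BUILTINS.has(spec)) hits.push(m[1]);
  }
  return hits;
}

const src = `
import { readFile } from "node:fs";
import { spawn } from "child_process";
import React from "react";
`;
console.log(serverOnlySpecifiers(src)); // [ "node:fs", "child_process" ]
```

Any file this flags is server-only code, and the third-party packages it imports are candidates for the externals list.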
3. Check if Turbopack or Webpack is better for your machine
```bash
# Turbopack (default in Next.js 15+)
rm -rf .next && /usr/bin/time -v npx next build

# Webpack (explicit)
rm -rf .next && /usr/bin/time -v npx next build --webpack
```
If Turbopack OOMs and Webpack doesn't, you've hit the jemalloc vs V8 GC boundary. Stick with Webpack.
The Complete next.config.ts Pattern
Here's the production configuration pattern we use. Copy and adapt the `serverExternalPackages` list for your own dependencies:
```ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Server-only packages: don't resolve these for client bundles.
  // Each entry prevents the bundler from tracing the package's
  // entire dependency tree for client-side compilation.
  serverExternalPackages: [
    // Native binaries
    "duckdb", "duckdb-async",
    // AWS SDK (50-80 transitive deps)
    "@aws-sdk/client-s3",
    // Database drivers
    "pg", "pg-boss", "pg-copy-streams",
    // File format / protocol libs
    "parquetjs",
    "@modelcontextprotocol/sdk",
    // CLI and runtime utilities
    "commander", "tsx", "bottleneck", "dotenv",
    // Markdown processing
    "gray-matter", "remark", "remark-html",
  ],

  // Exclude large directories from file tracing
  outputFileTracingExcludes: {
    "/**": [
      "./data/**",
      "./public/uploads/**",
      "./.git/**",
    ],
  },
};

export default nextConfig;
```
When You Need This
- Your `next build` OOM-kills or uses more than 4 GB of RSS
- You import server-only packages (database drivers, cloud SDKs, native bindings)
- You're building on a VPS, CI runner, or any machine with less than 32 GB of RAM
- Your build spills to swap (check with `free -h` during the build)
When You Don't Need This
- Your app only uses browser-compatible packages
- You don't have server components, API routes, or server actions
- Your build already completes within reasonable RSS (under 2–3 GB)
Maintenance Rule
When you add a new server-only dependency to `package.json`, also add it to `serverExternalPackages` in `next.config.ts`. Forgetting this will cause build RSS to spike as the bundler resolves the new package's full dependency tree for client bundles. Make this part of your PR review checklist.
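The rule is also easy to enforce automatically in CI. A sketch of such a check, under stated assumptions — `missingExternals` is a hypothetical helper, and the curated known-server-only list is something you'd maintain alongside the config:

```typescript
// Report dependencies that are known to be server-only but are
// missing from the serverExternalPackages list in next.config.ts.
function missingExternals(
  dependencies: string[],
  serverExternalPackages: string[],
  knownServerOnly: string[]
): string[] {
  const externals = new Set(serverExternalPackages);
  return dependencies.filter(
    (dep) => knownServerOnly.includes(dep) && !externals.has(dep)
  );
}

// Example: pg-boss was added to package.json but not to the config.
const deps = ["react", "pg", "duckdb", "pg-boss"];
const externals = ["pg", "duckdb"];
const serverOnly = ["pg", "duckdb", "pg-boss", "@aws-sdk/client-s3"];
console.log(missingExternals(deps, externals, serverOnly)); // [ "pg-boss" ]
```

In practice you'd read `dependencies` from `package.json` and the externals list from the Next.js config, and fail the CI job when the result is non-empty.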
Key Takeaways
- **`serverExternalPackages` is the fix.** It dropped our build from 30 GB to 1.16 GB — a 26x reduction.
- **Turbopack isn't always better.** On memory-constrained machines, Webpack's V8 GC wins over Turbopack's jemalloc.
- **Clean your cache.** Always `rm -rf .next` after changing build config. Stale caches hide improvements.
- **Measure with `/usr/bin/time -v`.** Don't guess at memory usage; `Maximum resident set size` is the number that matters.
- **Infrastructure debt compounds silently.** Our app ran `next dev` in production for months. Nobody noticed until we tried to build it properly.