A slow Next.js app is almost always a configuration problem, not a framework problem. Here are 10 battle-tested optimisations to achieve sub-second load times.
1. Commit to Static Generation — Your CDN Will Thank You
The single highest-impact performance decision in Next.js is choosing Static Site Generation (SSG) over Server-Side Rendering (SSR) wherever possible. Static pages are pre-built at deploy time and served directly from CDN edge nodes worldwide — no server computation, no database query, no wait. The result is response times measured in single-digit milliseconds rather than hundreds. For pages with content that updates infrequently — blog posts, landing pages, service pages, case studies — use generateStaticParams to pre-render every route at build time. For content that needs to stay fresh without a full rebuild, use Incremental Static Regeneration with a revalidate interval. The default should always be: can this be static? Only add server-side dynamism when the use case genuinely demands it.
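As a minimal sketch — assuming an App Router blog route and a hypothetical `getAllPosts`/`getPost` data layer — SSG with ISR might look like this:

```tsx
// app/blog/[slug]/page.tsx
// Hypothetical data layer: swap in your own CMS or database calls.
import { getAllPosts, getPost } from '@/lib/posts';

// ISR: re-generate this page in the background at most once per hour,
// keeping content fresh without a full rebuild.
export const revalidate = 3600;

// Pre-render every known post at build time.
export async function generateStaticParams() {
  const posts = await getAllPosts();
  return posts.map((post) => ({ slug: post.slug }));
}

export default async function PostPage({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.body}</div>
    </article>
  );
}
```

Omitting `revalidate` gives you pure SSG; adding it turns the same page into ISR with no other changes.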
2. Replace Every img Tag with next/image — No Exceptions
If your Next.js codebase contains any plain HTML img tags, replace them immediately. The next/image component does an extraordinary amount of work behind the scenes: it converts images to WebP or AVIF format automatically, generates multiple size variants for different viewport widths, implements lazy loading by default, and prevents Cumulative Layout Shift by reserving space before the image loads. The performance delta between a plain img tag and next/image can be a 50-70% reduction in image payload — typically the single largest contributor to page weight. Set the priority prop on your above-the-fold hero images so they are preloaded immediately, and always set width and height (or fill together with sizes) so the browser can allocate space correctly before the image arrives.
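A sketch of the hero-image case described above — the src path and dimensions are placeholders:

```tsx
import Image from 'next/image';

export function Hero() {
  return (
    <Image
      src="/hero.jpg"          // placeholder asset path
      alt="Product dashboard"
      width={1200}             // intrinsic dimensions reserve space, preventing CLS
      height={630}
      priority                 // above the fold: preload instead of lazy loading
      sizes="(max-width: 768px) 100vw, 1200px"
    />
  );
}
```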
3. Dynamic Imports: Load What You Need, When You Need It
Every kilobyte of JavaScript your application sends to the browser must be downloaded, parsed, and executed before the page becomes interactive. For components that are not immediately visible — modals, off-screen sections, interactive charts, rich text editors, animation-heavy elements — this is wasted work on initial load. Use next/dynamic to defer these components until they are actually needed. A modal that opens on button click does not need to be in your initial bundle — it should download when the user triggers it. This pattern can reduce your First Contentful Paint significantly on pages with complex secondary functionality. Combine with the ssr: false option for client-only components like map libraries or canvas-based animations that have no server-side utility.
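A minimal sketch, assuming hypothetical ContactModal and Map components — note that in the App Router, ssr: false is only allowed inside Client Components:

```tsx
'use client';

import { useState } from 'react';
import dynamic from 'next/dynamic';

// Split into its own chunk; downloaded only when the modal first renders.
const ContactModal = dynamic(() => import('./ContactModal'), {
  loading: () => <p>Loading…</p>,
});

// Client-only library (maps, canvas): skip server rendering entirely.
const Map = dynamic(() => import('./Map'), { ssr: false });

export function ContactSection() {
  const [open, setOpen] = useState(false);
  return (
    <>
      <button onClick={() => setOpen(true)}>Contact us</button>
      {open && <ContactModal />} {/* chunk downloads on first click */}
      <Map />
    </>
  );
}
```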
4. Server Components Are Free — Use Them Aggressively
The App Router's most powerful paradigm shift is the Server Component model. Server Components execute on the server and send pure HTML to the client — zero JavaScript payload, zero hydration cost, zero bundle bloat. The instinct to add 'use client' to every component is one of the most expensive mistakes in Next.js development. Before adding the directive, ask: does this component use useState, useEffect, event handlers, browser APIs, or third-party hooks? If the answer to all of those is no, it can and should remain a Server Component. Audit your component tree aggressively. In our experience, 40-60% of components that developers mark as client components could remain server-rendered without any functional limitation — and the performance benefit is substantial.
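A sketch of the split, with a hypothetical fetchProducts helper: the page stays a Server Component, and only the interactive leaf opts into 'use client':

```tsx
// app/products/page.tsx
// Server Component by default (no 'use client'): none of this code ships to the browser.
import { fetchProducts } from '@/lib/products'; // hypothetical data helper
import { AddToCartButton } from './AddToCartButton';

export default async function ProductsPage() {
  const products = await fetchProducts(); // runs on the server, near your data
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>
          {p.name}
          <AddToCartButton id={p.id} /> {/* only this leaf hydrates on the client */}
        </li>
      ))}
    </ul>
  );
}
```

```tsx
// app/products/AddToCartButton.tsx
'use client'; // justified: this component needs an onClick handler

import { addToCart } from '@/lib/cart'; // hypothetical client-side cart helper

export function AddToCartButton({ id }: { id: string }) {
  return <button onClick={() => addToCart(id)}>Add to cart</button>;
}
```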
5. next/font Eliminates Flash of Unstyled Text and External Requests
Custom web fonts are one of the most frequently overlooked performance problems. The traditional approach — linking to Google Fonts via an HTML link tag — makes an external network request to Google's servers before your page can render, introduces a layout shift when the custom font swaps in, and creates a privacy linkage to Google's infrastructure. The next/font module solves all three problems simultaneously: it downloads and self-hosts fonts at build time (eliminating the external request), applies size-adjust automatically to prevent layout shift, and adds font-display: swap to prevent invisible text during loading. Load only the specific weights and subsets your design actually uses — font files are larger than most developers realise, and loading 400, 500, 600, and 700 weights when your design uses only 400 and 700 is pure waste.
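With next/font, the setup reduces to a few lines in the root layout; Inter is used here as a stand-in for your actual typeface:

```tsx
// app/layout.tsx
import { Inter } from 'next/font/google';

// Downloaded and self-hosted at build time: no runtime request to Google.
// Load only the weights and subsets the design actually uses.
const inter = Inter({
  subsets: ['latin'],
  weight: ['400', '700'],
  display: 'swap',
});

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  );
}
```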
6 & 7. Bundle Analysis and Surgical Tree Shaking
Most Next.js production bundles contain significantly more code than necessary, and the path to fixing this begins with visibility. Add @next/bundle-analyzer to your project (it is free and takes five minutes to configure) and run next build to generate an interactive treemap of exactly what is in your JavaScript bundle. Common discoveries include the entirety of moment.js (330KB) imported when only date formatting is needed (replace with date-fns, import only the functions you use), an entire icon library loaded when only twelve icons appear on the site (switch to importing individual icons), and lodash imported wholesale when three utility functions would suffice. Tree shaking — the automatic elimination of unused code — only works with ES module syntax. If you or your dependencies use CommonJS require() calls anywhere in the chain, tree shaking cannot operate. Audit and modernise.
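A sketch of the analyzer setup, followed by the import patterns that tree shaking can and cannot handle:

```js
// next.config.js: run with `ANALYZE=true next build` to open the treemap
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config
});
```

```js
// Tree-shakeable: an ES module named import pulls in only `format`.
import { format } from 'date-fns';

// Not tree-shakeable: CommonJS require defeats static analysis,
// so the whole library lands in the bundle.
// const _ = require('lodash');

format(new Date(), 'yyyy-MM-dd');
```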
8 & 9. Cache Everything That Does Not Change Frequently
Caching is the highest-leverage performance strategy available to you because, for returning visitors, it eliminates the work of serving a resource entirely. Configure long-lived cache headers on all static assets — JavaScript bundles, CSS files, fonts, and images should have Cache-Control: max-age=31536000, immutable because Next.js appends content hashes to their filenames, making them safe to cache forever. For dynamic API routes, implement stale-while-revalidate: serve the cached response immediately and refresh it in the background so users never wait. Deploy your static export behind Cloudflare's free CDN tier to serve assets from the edge node geographically nearest your visitors — critical for reducing latency across Southeast Asia, where your users may be in Brunei, Malaysia, Singapore, or beyond.
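A sketch of both policies via the headers option; the paths are placeholders, and note that with output: 'export' these headers must instead be configured at the CDN (e.g. as Cloudflare cache rules):

```js
// next.config.js
module.exports = {
  async headers() {
    return [
      {
        // Content-hashed static assets: safe to cache for a full year.
        source: '/fonts/:path*',
        headers: [
          { key: 'Cache-Control', value: 'public, max-age=31536000, immutable' },
        ],
      },
      {
        // API route: serve the cached copy for 60s, refresh in the background.
        source: '/api/listings',
        headers: [
          {
            key: 'Cache-Control',
            value: 'public, s-maxage=60, stale-while-revalidate=86400',
          },
        ],
      },
    ];
  },
};
```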
10. Core Web Vitals Monitoring: Performance Is Never Finished
The most critical mindset shift in performance engineering is understanding that optimisation is not a project with a completion date — it is an ongoing operational practice. Dependencies update and introduce regressions. New marketing pixels get added and balloon page weight. Design changes add animation libraries. Without continuous monitoring, even a highly optimised codebase degrades over time. Set up real-user monitoring with the web-vitals library, which surfaces your actual LCP, INP, and CLS scores from real users on real devices and connections — not just the ideal conditions of Lighthouse. Configure alerts for performance regressions so that a new deployment that hurts your scores is caught immediately rather than discovered months later. What gets measured gets managed. Treat your Core Web Vitals scores with the same seriousness you treat your conversion rates — because in Google's current algorithm, they are directly connected.
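A minimal sketch using Next.js's built-in wrapper around the web-vitals library; /api/vitals is a placeholder endpoint:

```tsx
// app/components/WebVitals.tsx
'use client';

import { useReportWebVitals } from 'next/web-vitals';

export function WebVitals() {
  useReportWebVitals((metric) => {
    // metric.name is 'LCP', 'INP', 'CLS', etc.; metric.value is the measurement.
    const body = JSON.stringify(metric);
    // sendBeacon survives page unload; fall back to a keepalive fetch.
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/api/vitals', body);
    } else {
      fetch('/api/vitals', { body, method: 'POST', keepalive: true });
    }
  });
  return null; // renders nothing; mount once in the root layout
}
```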
