Technical SEO

How to Fix Page Speed Issues: Core Web Vitals Guide 2026

18 min read · Performance Optimization · Updated for INP (replaced FID March 2024)

Page speed is a confirmed Google ranking factor, and Core Web Vitals are the metrics that matter most. In March 2024, Google replaced FID with INP, fundamentally changing how responsiveness is measured. This guide covers every Core Web Vital with current thresholds, root causes, and production-tested fixes.

TL;DR -- Quick Summary

  • LCP (Largest Contentful Paint): Target 2.5s or less -- optimize hero images, use CDN, preload critical assets
  • INP (Interaction to Next Paint): Target 200ms or less -- replaced FID in March 2024, measures ALL interactions, fix with code splitting and yielding to main thread
  • CLS (Cumulative Layout Shift): Target 0.1 or less -- set image dimensions, use font-display: swap, reserve space for ads
  • Google uses field data (real users from CrUX) not lab data for ranking signals -- always check the field data section in PageSpeed Insights
  • Mobile performance is what Google uses for ranking -- not desktop

Core Web Vitals Thresholds (2026)

Metric | Measures         | Good    | Needs Work    | Poor
LCP    | Loading          | ≤ 2.5s  | 2.5s - 4s     | > 4s
INP    | Interactivity    | ≤ 200ms | 200ms - 500ms | > 500ms
CLS    | Visual Stability | ≤ 0.1   | 0.1 - 0.25    | > 0.25

FID (First Input Delay) was retired in March 2024 and replaced by INP

Core Web Vitals thresholds for 2026 -- all three metrics must pass at the 75th percentile of real user visits
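These thresholds can be expressed as a small lookup, handy in a monitoring dashboard. The following is a sketch (not an official API): the three-way rating mirrors how PageSpeed Insights labels each metric, with LCP and INP in milliseconds.

```javascript
// Rate a metric value against the 2026 Core Web Vitals thresholds.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  INP: { good: 200, poor: 500 },   // milliseconds
  CLS: { good: 0.1, poor: 0.25 },  // unitless score
};

function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}

console.log(rate('INP', 142));  // 'good'
console.log(rate('LCP', 3100)); // 'needs-improvement'
```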

Core Web Vitals as a Google Ranking Signal

Google confirmed Core Web Vitals as a ranking signal in June 2021 as part of the Page Experience update. The update rolled out to desktop in February 2022, making page experience a full ranking signal across all device types. In July 2024, Google completed the shift to 100% mobile-first indexing, which means your mobile Core Web Vitals performance is what determines your ranking signal -- not desktop.

Core Web Vitals measure three aspects of real user experience: loading performance (LCP), interactivity (INP), and visual stability (CLS). Google evaluates these metrics using field data from the Chrome User Experience Report (CrUX) -- real measurements from Chrome users who visit your site. To pass, at least 75% of your page visits must meet the "good" threshold for each metric.
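The 75th-percentile rule is easy to misread, so here is a minimal sketch of the computation (nearest-rank percentile; the sample values are illustrative, not real CrUX data):

```javascript
// Nearest-rank percentile: the value at or below which p% of samples fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil(p * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// A page passes a metric if its 75th-percentile value meets the good threshold.
function passesThreshold(fieldValues, goodThreshold) {
  return percentile(fieldValues, 0.75) <= goodThreshold;
}

// Illustrative LCP samples (seconds) from five visits
const lcpSamples = [1.2, 1.8, 2.1, 2.4, 3.9];
console.log(passesThreshold(lcpSamples, 2.5)); // true: 75th percentile is 2.4s
```

Note that one slow visit (3.9s here) does not fail the page; the 75th percentile must exceed the threshold.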

Key Insight: Content Still Matters Most

Core Web Vitals are a tiebreaker signal, not a dominant ranking factor. Relevant, high-quality content will always outrank a faster page with worse content. However, when two pages are equally relevant, the one with better Core Web Vitals will rank higher. For highly competitive queries, page speed can be the deciding factor.

According to Google's own case studies, sites that pass all three Core Web Vitals see a 24% lower page abandonment rate. A 2023 analysis by web.dev found that passing Core Web Vitals correlates with longer session durations and higher conversion rates across industries.

LCP Deep Dive: Largest Contentful Paint

LCP measures how long it takes for the largest content element visible in the viewport to render. This is the metric users perceive as "how fast the page loaded." The element can be an image, a video poster, a background image loaded via CSS, or a text block -- whichever is largest within the initial viewport.

What Counts as the LCP Element?

The browser identifies the LCP element dynamically as content loads. Common LCP elements include:

  • Hero images -- the most common LCP element on content and e-commerce sites
  • H1 heading text blocks -- common on text-heavy pages without hero images
  • Video poster images -- the thumbnail shown before a video plays
  • CSS background images -- if rendered via background-image and large enough
  • SVG elements -- inline or referenced SVGs within the viewport

Critical Warning: Lazy-Loading the LCP Image

Never add loading="lazy" to your LCP image. This tells the browser to defer loading the element, directly increasing LCP time. Your hero image or above-the-fold image should have fetchpriority="high" and no lazy loading attribute.

Top Causes of Poor LCP

  1. Unoptimized hero images: Serving a 4MB JPEG when a 200KB WebP would suffice. Convert images to WebP or AVIF format and serve appropriate sizes with srcset.
  2. Slow server response (TTFB): If Time to First Byte exceeds 800ms, LCP cannot possibly be under 2.5s. Use a CDN, enable server-side caching, and consider edge computing.
  3. Render-blocking resources: CSS and synchronous JavaScript in the <head> block rendering until fully downloaded and parsed. Inline critical CSS and defer non-critical resources.
  4. Client-side rendering delays: SPAs that render content with JavaScript add an extra round trip. Server-side rendering (SSR) or static generation (SSG) deliver content in the initial HTML response.
  5. Missing resource hints: Without <link rel="preload"> for the LCP image and <link rel="preconnect"> for third-party origins, the browser discovers these resources late.

LCP Optimization: Identify and Fix

  1. Identify the LCP element: Open DevTools > Performance panel, record a page load, and look for the "Largest Contentful Paint" marker. Hover it to see which element is the LCP.
  2. Diagnose the cause: A slow TTFB points to a server or hosting issue; late resource discovery means a missing preload; a large file needs compression.
  3. Apply the fix: Preload the LCP image with fetchpriority="high", convert it to WebP/AVIF, and serve it from a CDN.

Three-step process for identifying your LCP element and applying targeted fixes

LCP Fix: Code Example

HTML -- preload LCP image

<!-- BEFORE: Browser discovers image late -->
<img src="hero.jpg" alt="Hero" width="1200" height="600">

<!-- AFTER: Preloaded with high priority -->
<link rel="preload" as="image" href="hero.webp" fetchpriority="high">
<link rel="preconnect" href="https://cdn.example.com">

<img src="hero.webp" alt="Hero" width="1200" height="600"
     fetchpriority="high" decoding="async">

INP Deep Dive: Interaction to Next Paint (Replaced FID)

March 2024: INP Replaced FID

FID (First Input Delay) is no longer a Core Web Vital. It was officially replaced by INP (Interaction to Next Paint) in March 2024. FID only measured the delay before the browser began processing the first user interaction. INP measures responsiveness across all interactions throughout the entire page lifecycle -- clicks, taps, and keyboard inputs -- making it a far more comprehensive measure of real-world interactivity.

INP observes the latency of every user interaction during a page visit and reports a value close to the worst case (for pages with many interactions, one high-latency outlier is ignored for every 50 interactions). The metric captures the full cycle: input delay (time before the event handler runs), processing time (event handler execution), and presentation delay (time for the browser to paint the next frame).
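The reporting rule can be sketched in a few lines (latencies in milliseconds; this mirrors the definition above, not Chrome's exact implementation):

```javascript
// Approximate INP from a visit's interaction latencies: take the worst
// interaction, but skip one high outlier for every 50 interactions.
function estimateINP(latencies) {
  if (latencies.length === 0) return null;
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.floor(latencies.length / 50);
  return sorted[Math.min(outliersToSkip, sorted.length - 1)];
}

// A visit with only a few interactions reports the single worst one
console.log(estimateINP([40, 85, 120, 310])); // 310
```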

Aspect                    | FID (Retired March 2024)       | INP (Current)
What it measures          | Delay before first interaction | Full latency of all interactions
Interactions counted      | Only the first one             | Every click, tap, key press
Includes processing time? | No -- only input delay         | Yes -- delay + processing + paint
Good threshold            | Was 100ms                      | 200ms or less
Poor threshold            | Was 300ms                      | Above 500ms

Top Causes of Poor INP

  1. Heavy JavaScript on the main thread: Long tasks (over 50ms) block the browser from responding to user input. The most common cause is large, unoptimized JavaScript bundles that execute synchronously.
  2. Third-party scripts: Analytics, chat widgets, and ad scripts often add hundreds of milliseconds of processing time. Each third-party script competes for main thread time.
  3. Expensive event handlers: Click handlers that trigger complex DOM updates, large state changes, or synchronous layout calculations (forced reflows) delay the next paint.
  4. Large DOM size: Pages with more than 1,500 DOM nodes make style recalculations and layout operations significantly slower, increasing processing time for every interaction.
  5. Lack of yielding: JavaScript that runs in a single long task without yielding to the browser prevents the browser from processing pending user interactions.

INP Fixes

JavaScript -- yield to main thread
// 1. Yield to the main thread using scheduler.yield()
async function handleClick() {
  updateUI();              // Quick visual feedback
  await scheduler.yield(); // Let browser paint
  doExpensiveWork();       // Heavy processing after paint
}

// 2. Break long tasks into smaller chunks
async function processLargeList(items) {
  let start = performance.now();
  for (const item of items) {
    processItem(item);
    // Yield every 5ms to stay responsive
    if (performance.now() - start > 5) {
      await scheduler.yield();
      start = performance.now();
    }
  }
}

// 3. Code-split with dynamic imports
const HeavyComponent = lazy(() => import('./HeavyComponent'));

Quick INP Wins

  • Defer non-critical JavaScript with async or defer attributes
  • Use content-visibility: auto on off-screen content to skip rendering work
  • Debounce rapid-fire events like scroll, resize, and mousemove handlers
  • Move heavy computation to Web Workers to avoid blocking the main thread
  • Reduce DOM size by virtualizing long lists and removing unnecessary wrapper elements
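The last point, list virtualization, comes down to rendering only the rows that intersect the viewport. Here is a minimal sketch of the index math (the row-height model and overscan value are illustrative; real implementations usually come from a library):

```javascript
// Compute which rows of a long fixed-height list actually need DOM nodes.
// overscan adds a few extra rows above/below to hide scroll pop-in.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { first, last };
}

// 10,000-row list, 600px viewport, 40px rows: render ~22 rows, not 10,000
console.log(visibleRange(4000, 600, 40, 10000)); // { first: 97, last: 118 }
```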

CLS Deep Dive: Cumulative Layout Shift

CLS quantifies how much the visible content shifts unexpectedly during page load and user interaction. A CLS score of 0.1 or less is considered good. Every time a visible element changes position without user initiation, the browser calculates a layout shift score based on the fraction of the viewport affected and the distance moved.

CLS uses a session window approach: layout shifts are grouped into windows of at most 5 seconds, with a maximum 1-second gap between shifts. The CLS value is the maximum session window score, not the sum of all shifts. This means a brief burst of shifts during page load is penalized less than continuous shifting throughout the page lifecycle.
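The session-window rule described above can be sketched in code (timestamps in seconds; the shift scores are illustrative):

```javascript
// CLS = maximum session window score, where a window groups shifts
// separated by <=1s gaps and spans at most 5s total.
function computeCLS(shifts) { // shifts: [{ t: seconds, score: number }]
  let cls = 0;
  let winScore = 0;
  let winStart = -Infinity;
  let prev = -Infinity;
  for (const { t, score } of shifts) {
    if (t - prev > 1 || t - winStart > 5) { // gap too long or window full
      winStart = t;
      winScore = 0;
    }
    winScore += score;
    prev = t;
    cls = Math.max(cls, winScore);
  }
  return cls;
}

// A load-time burst (0.05 + 0.04) forms one window; the shift at t=8s
// starts a new window and is not added on top.
console.log(computeCLS([
  { t: 0.2, score: 0.05 },
  { t: 0.6, score: 0.04 },
  { t: 8.0, score: 0.06 },
]));
```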

Top Causes of Poor CLS

Before (CLS = 0.35, Poor): an ad loads with no reserved space and pushes content down, and an image without width/height shifts everything below it as it loads.

After (CLS = 0.02, Good): the ad slot reserves space with min-height: 250px, the image declares width=800 height=400, and nothing shifts.
Before and after fixing CLS -- reserve space for dynamic content and set explicit image dimensions
  1. Images without explicit dimensions: Without width and height attributes (or CSS aspect-ratio), the browser cannot reserve space for images before they load. This causes content below to jump down. Always specify dimensions.
  2. Ads, embeds, and iframes without reserved space: Third-party content that loads asynchronously without a placeholder container causes the most severe layout shifts. Reserve space with a fixed-size container using min-height.
  3. Web fonts causing FOUT/FOIT: When a custom font loads and replaces the fallback font, text may reflow if the metrics differ. Use font-display: swap with a size-adjusted fallback, or font-display: optional to prevent any flash at all.
  4. Dynamically injected content: Banners, cookie notices, and notification bars pushed into the DOM after initial render cause everything below them to shift. Insert dynamic content above the fold only if it reserves space or uses CSS transforms (which do not trigger layout shifts).
  5. CSS animations using layout properties: Animating top, left, width, or height triggers layout recalculations. Use transform and opacity instead -- these are composited properties that do not affect layout.
CLS prevention techniques
<!-- 1. Always set image dimensions -->
<img src="photo.webp" width="800" height="450" alt="..." />

<!-- 2. Or use CSS aspect-ratio -->
<style>
  .hero { aspect-ratio: 16 / 9; width: 100%; }
</style>

<!-- 3. Reserve space for ads -->
<div style="min-height: 250px; contain: layout;">
  <!-- Ad loads here without shifting content -->
</div>

<!-- 4. Size-adjusted fallback font -->
<style>
  @font-face {
    font-family: 'CustomFont';
    src: url('custom.woff2') format('woff2');
    font-display: swap;
  }
  /* Fallback face scaled to match CustomFont's metrics, so the swap
     causes minimal reflow. Use as: font-family: CustomFont, CustomFont-fallback */
  @font-face {
    font-family: 'CustomFont-fallback';
    src: local('Arial');
    size-adjust: 105%;
  }
</style>

Measuring Core Web Vitals

There are three primary tools for measuring Core Web Vitals, each with different strengths. Use them in combination for the most accurate picture of your site's performance.

1. PageSpeed Insights (PSI)

PageSpeed Insights is Google's primary tool. It shows two sections:

  • Field data (top section): Real user metrics from the Chrome User Experience Report (CrUX). This is what Google uses for ranking signals. Data is collected over a rolling 28-day period from real Chrome users.
  • Lab data (bottom section): A controlled Lighthouse test run in a simulated environment. Useful for debugging but not what Google uses for rankings. Lab data does not include INP because it requires real user interactions.

PageSpeed Insights Score Breakdown

Field Data (What Google Uses for Rankings) -- CrUX / Real Users:
  LCP 1.8s | INP 142ms | CLS 0.04

Lab Data (For Debugging Only) -- Lighthouse / Simulated:
  Performance score 92
  FCP: 1.2s | LCP: 2.1s | TBT: 120ms | CLS: 0.03 | Speed Index: 1.8s
  Note: INP not available in lab data

PageSpeed Insights shows both field data (used for rankings) and lab data (used for debugging) -- focus on the field data section

2. Google Search Console -- Core Web Vitals Report

The Search Console Core Web Vitals report shows how all your pages perform, grouped into "Good," "Needs Improvement," and "Poor." It uses the same CrUX field data as PageSpeed Insights but shows trends over time and groups pages with similar issues. This is the best tool for monitoring your entire site's health at scale.

3. Chrome DevTools -- Performance Panel

For debugging specific interactions, Chrome DevTools' Performance panel provides frame-by-frame analysis. Record a trace, interact with the page, and the timeline shows exactly where long tasks block the main thread, which event handlers are slow, and when layout shifts occur. The Web Vitals lane in the timeline marks LCP, CLS events, and interaction timings.

Field Data vs Lab Data: Which Google Uses

This distinction is critical and widely misunderstood. Google uses field data -- not lab data -- for its ranking signal. Field data comes from the Chrome User Experience Report (CrUX), which aggregates anonymous performance metrics from real Chrome users who visit your site over a rolling 28-day period.
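CrUX field data can also be queried programmatically via the CrUX API. The sketch below assumes you have created an API key in Google Cloud Console; the endpoint and metric names are the real ones, but the `p75` helper is a hypothetical convenience function:

```javascript
// Query mobile (PHONE) field data for a URL from the CrUX API.
// apiKey is a placeholder you must supply.
async function getCruxMetrics(pageUrl, apiKey) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url: pageUrl, formFactor: 'PHONE' }),
    },
  );
  if (!res.ok) throw new Error(`CrUX API error: ${res.status}`);
  return (await res.json()).record?.metrics;
}

// Hypothetical helper: read the 75th-percentile value for one metric.
function p75(metrics, name) {
  return metrics?.[name]?.percentiles?.p75;
}

// Usage (in an async context):
//   const m = await getCruxMetrics('https://example.com/', 'YOUR_API_KEY');
//   p75(m, 'largest_contentful_paint');  // ms
//   p75(m, 'interaction_to_next_paint'); // ms
//   p75(m, 'cumulative_layout_shift');
```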

Aspect             | Field Data (CrUX)                  | Lab Data (Lighthouse)
Used for rankings? | Yes                                | No
Source             | Real Chrome users                  | Simulated environment
Includes INP?      | Yes                                | No (uses TBT as proxy)
Time period        | Rolling 28 days                    | Single point in time
Variability        | Reflects real device/network mix   | Fixed simulated conditions
Best for           | Understanding real user experience | Debugging specific issues

This means your Lighthouse performance score of 95 does not guarantee you pass Core Web Vitals for Google's ranking signal. A site can score 95 in the lab but fail in the field because real users have slower devices, worse connections, or interact with the page differently than a synthetic test. Conversely, a site can score 70 in the lab but pass in the field if most real visitors have fast connections and devices.

Important: New Sites Without Field Data

New websites or pages with low traffic may not have enough field data in CrUX. In this case, PageSpeed Insights shows "The Chrome User Experience Report does not have sufficient real-world speed data for this page." Google may fall back to origin-level data (aggregated across your entire domain) or use no CWV signal at all. Lab data still helps you optimize proactively.

Mobile vs Desktop: Which One Matters for Rankings

Since July 2024, Google uses 100% mobile-first indexing. This means Google uses the mobile version of your page for indexing and ranking -- including Core Web Vitals. Your desktop performance matters for desktop user experience, but it is the mobile field data that determines your Core Web Vitals ranking signal.

Mobile Core Web Vitals scores are typically worse than desktop for several reasons:

  • Lower CPU power: Mobile processors are significantly slower than desktop CPUs, increasing JavaScript execution time (directly hurting INP)
  • Network variability: Mobile users frequently switch between WiFi, 4G, and 5G with varying latency and bandwidth (directly hurting LCP)
  • Smaller viewport: Content reflows differently on mobile screens, and font swaps can cause more visible layout shifts (hurting CLS)
  • Touch interactions: Tap targets and mobile-specific behaviors add complexity to interaction handling

Action: Always Check Mobile First

In PageSpeed Insights, always switch to the Mobile tab first. That is the data Google uses for rankings. A common mistake is celebrating a 95 desktop score while ignoring a failing 45 mobile score. Your mobile performance is what matters for search.

Quick Wins: Immediate Page Speed Improvements

These are the highest-impact, lowest-effort changes you can make today. Each one addresses a specific Core Web Vital:

  • [LCP] Compress images to WebP: Convert JPEG/PNG to WebP for 25-35% size reduction. Use AVIF for 50% savings where supported. Directly improves LCP.
  • [LCP] Preload the LCP image: Add <link rel="preload"> for your hero image with fetchpriority="high". Eliminates late discovery. Directly improves LCP.
  • [LCP] Remove render-blocking CSS/JS: Inline critical CSS, defer non-critical CSS, add async/defer to scripts. Improves both FCP and LCP.
  • [CLS] Set image width and height: Every <img> needs explicit width and height attributes or CSS aspect-ratio. Prevents images from causing layout shifts.
  • [CLS] Use font-display: swap: Prevents invisible text during font loading. Pair with size-adjust on the fallback font to minimize CLS from font swaps.
  • [INP] Defer third-party scripts: Load analytics, chat widgets, and ads after the page becomes interactive. Reduces main thread blocking and improves INP.
  • [LCP] Enable Brotli compression: Brotli compresses text assets 15-20% better than gzip. Reduces transfer sizes for HTML, CSS, JS, and JSON.
  • [LCP] Use a CDN: Serve assets from edge servers near your users. Reduces TTFB from seconds to milliseconds. Directly improves LCP.

Prioritization Order

Fix issues in this order for maximum impact: 1) LCP image optimization (often the single biggest improvement), 2) Render-blocking resource removal (improves FCP and LCP simultaneously), 3) CLS fixes (usually straightforward HTML attribute additions), 4) INP optimization (often the most complex, requiring JavaScript refactoring).

Key Takeaways

  • INP replaced FID in March 2024 -- measure and optimize for INP (200ms threshold), not FID
  • Google uses field data from CrUX for ranking signals -- your Lighthouse score alone does not determine whether you pass
  • Mobile performance is what Google uses for rankings (100% mobile-first indexing since July 2024)
  • LCP: Preload hero image, use WebP/AVIF, deploy a CDN -- target 2.5 seconds or less
  • CLS: Set image dimensions, reserve space for dynamic content, use font-display: swap -- target 0.1 or less


Frequently Asked Questions

What replaced FID in Core Web Vitals?
INP (Interaction to Next Paint) replaced FID (First Input Delay) as a Core Web Vital in March 2024. FID only measured the delay before the browser started processing the first interaction. INP is far more comprehensive -- it measures the full latency (input delay + processing + presentation delay) of all interactions throughout the page lifecycle, not just the first one. The good threshold for INP is 200ms or less.

What are the three Core Web Vitals in 2026?
The three Core Web Vitals are: LCP (Largest Contentful Paint) with a good threshold of 2.5 seconds or less, measuring loading performance; INP (Interaction to Next Paint) with a good threshold of 200ms or less, measuring interactivity; and CLS (Cumulative Layout Shift) with a good threshold of 0.1 or less, measuring visual stability. FID is no longer a Core Web Vital.

Does page speed directly affect Google rankings?
Yes. Google confirmed Core Web Vitals as a ranking signal in 2021. However, it functions as a tiebreaker -- content relevance is still the primary factor. When two pages have similar content quality and relevance, the one with better Core Web Vitals will rank higher. For competitive keywords where many pages have similar content, page speed can be a significant differentiator.

Why is my mobile Lighthouse score much lower than desktop?
Lighthouse simulates a mid-tier mobile device (Moto G Power) on a throttled 4G connection for mobile tests, while desktop tests use no CPU throttling and a faster network. This is intentional -- most users browse on phones with limited resources. Since Google uses mobile-first indexing, always optimize for the mobile score first. A score of 70-80 on mobile is acceptable; below 50 needs urgent attention.

Does Google use Lighthouse scores or field data for ranking?
Google uses field data from the Chrome User Experience Report (CrUX), not Lighthouse lab scores. Field data captures real performance from actual Chrome users visiting your site over a rolling 28-day period. Lighthouse is excellent for debugging specific issues, but your Lighthouse score alone does not determine your ranking signal. Always check the "field data" section at the top of PageSpeed Insights.

How long does it take for Core Web Vitals improvements to affect rankings?
CrUX data uses a rolling 28-day window, so improvements take at least 28 days to fully reflect in field data. After that, Google must recrawl your pages and reassess the CWV signal. In practice, expect 4-8 weeks from implementation to visible ranking changes. Use Search Console's Core Web Vitals report to track your progress over time.

What is a good PageSpeed Insights score?
A Lighthouse performance score of 90-100 is good, 50-89 needs improvement, and below 50 is poor. However, the score matters less than passing Core Web Vitals in field data. A page with a Lighthouse score of 75 that passes all three CWV in the field is better positioned for rankings than a page scoring 95 in the lab but failing CWV with real users. Focus on the field data section first.

How can I measure INP if it requires real user interactions?
INP requires real user interactions and is only available in field data (CrUX). You can measure it in PageSpeed Insights (field data section), Search Console Core Web Vitals report, or your own Real User Monitoring (RUM) setup using the web-vitals JavaScript library. In the lab, Total Blocking Time (TBT) correlates with INP -- reducing TBT usually improves INP. Chrome DevTools' Performance panel can also show interaction timings during manual testing.
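A minimal RUM setup with the web-vitals library can look like the sketch below. The package and its onLCP/onINP/onCLS callbacks are real; the /vitals endpoint and payload shape are assumptions for illustration, and the browser-only registration is shown in comments so the payload builder stays runnable anywhere:

```javascript
// Browser usage (web-vitals is a real npm package):
//   import { onLCP, onINP, onCLS } from 'web-vitals';
//   onLCP(report); onINP(report); onCLS(report);

// Serialize the parts of a web-vitals Metric object we want to keep.
function buildPayload(metric) {
  return JSON.stringify({
    name: metric.name,     // 'LCP' | 'INP' | 'CLS'
    value: metric.value,
    id: metric.id,         // unique per page load
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  });
}

// Send to a collection endpoint ('/vitals' is illustrative).
// sendBeacon survives page unload; keepalive fetch is the fallback.
function report(metric) {
  const body = buildPayload(metric);
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon('/vitals', body);
  } else if (typeof fetch !== 'undefined') {
    fetch('/vitals', { method: 'POST', body, keepalive: true });
  }
}
```

Aggregating these beacons server-side at the 75th percentile gives you the same view of your site that CrUX gives Google, but with no 28-day lag.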