What are Core Web Vitals?
Core Web Vitals are a set of real-world, user-centered metrics that Google uses to evaluate the loading performance, interactivity, and visual stability of a web page. Introduced in May 2020 and incorporated into ranking signals in June 2021, they are now a confirmed part of Google's Page Experience ranking system.
As of March 2024, the three Core Web Vitals are Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). INP replaced the previous metric, First Input Delay (FID), which was officially deprecated in March 2024 because it only measured the delay of the first interaction rather than the full responsiveness of the page throughout its lifecycle.
Meeting Core Web Vitals thresholds is not just about SEO rankings. Pages that pass these thresholds provide a significantly better user experience, leading to lower bounce rates, higher engagement, and better conversion rates.
Core Web Vitals Thresholds
| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP | ≤ 2.5s | 2.5s – 4.0s | > 4.0s |
| INP | ≤ 200ms | 200ms – 500ms | > 500ms |
| CLS | ≤ 0.1 | 0.1 – 0.25 | > 0.25 |
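The thresholds in the table map each metric value to one of three buckets. As a sketch, the same good / needs-improvement / poor classification that PageSpeed Insights applies can be expressed as a small helper (the constant and function names here are illustrative, not part of any Google API):

```javascript
// Upper bounds for "good" and "needs improvement", mirroring the table above.
const THRESHOLDS = {
  LCP: [2500, 4000], // milliseconds
  INP: [200, 500],   // milliseconds
  CLS: [0.1, 0.25],  // unitless score
};

// Classify a measured value for one of the three Core Web Vitals.
function rate(metric, value) {
  const [good, needsImprovement] = THRESHOLDS[metric];
  if (value <= good) return 'good';
  if (value <= needsImprovement) return 'needs-improvement';
  return 'poor';
}
```

For example, an LCP of 2,400 ms rates as "good", while a CLS of 0.3 rates as "poor".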
Understanding Each Metric
Page speed encompasses far more than just "how fast the page loads." Each metric captures a different dimension of the user experience. Understanding what each metric measures and what causes it to degrade is essential for targeted optimization.
Largest Contentful Paint (LCP)
LCP measures the time from when the page starts loading to when the largest text block or image element is rendered within the viewport. It represents the point at which the user perceives that the main content of the page has loaded. The target for a good LCP is 2.5 seconds or less.
Common causes of poor LCP:
- Slow server response time (high TTFB)
- Render-blocking CSS and JavaScript in the document head
- Slow-loading hero images or large media files without preload hints
- Client-side rendering that delays content display until JavaScript executes
- Missing preload hints for the LCP resource
Interaction to Next Paint (INP)
INP measures the responsiveness of a page to user interactions throughout its entire lifecycle. It tracks the latency of all click, tap, and keyboard interactions, and reports a value near the worst-case latency. A low INP means the page consistently responds quickly to user input. The target for a good INP is 200 milliseconds or less.
In lab testing environments where real user interactions cannot be measured, Total Blocking Time (TBT) serves as the closest proxy metric for INP.
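Concretely, web.dev defines the reported INP value as the worst observed interaction latency, ignoring one outlier for every 50 interactions on the page. A sketch of that aggregation (the helper name is mine):

```javascript
// Estimate INP from a list of interaction latencies in milliseconds:
// take the worst value after discarding one outlier per 50 interactions.
function estimateINP(latencies) {
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const skip = Math.min(Math.floor(latencies.length / 50), sorted.length - 1);
  return sorted[skip];
}
```

A page with only a handful of interactions therefore reports its single worst latency, while a long-lived page with hundreds of interactions tolerates a few outliers.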
Common causes of poor INP:
- Long-running JavaScript tasks blocking the main thread
- Excessive DOM size requiring expensive layout recalculations
- Heavy third-party scripts (analytics, ads, chat widgets)
- Synchronous event handlers that delay visual feedback
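A general mitigation for the long-task problem listed above is to split work into chunks and yield back to the event loop between chunks, so the browser can paint and handle input in the gaps. A minimal sketch (function names are illustrative; newer browsers also offer `scheduler.yield()` for the same purpose):

```javascript
// Yield control back to the event loop so pending input can be handled.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in small chunks instead of one long blocking task.
async function processInChunks(items, handle, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handle(item));
    }
    if (i + chunkSize < items.length) await yieldToMain();
  }
  return results;
}
```

Each chunk stays well under the 50 ms long-task threshold, keeping the page responsive while the full workload still completes.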
Cumulative Layout Shift (CLS)
CLS measures the sum of all unexpected layout shifts that occur during the entire lifespan of a page. A layout shift happens when a visible element changes its position from one rendered frame to the next without user interaction triggering it. CLS quantifies how "jumpy" the page feels. The target for a good CLS is 0.1 or less.
Common causes of poor CLS:
- Images and iframes without explicit width and height attributes
- Dynamically injected content above existing content (ads, banners, cookie notices)
- Web fonts causing a Flash of Unstyled Text (FOUT) or Flash of Invisible Text (FOIT)
- Content inserted by JavaScript after the initial render
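Under the hood, CLS does not simply add up every shift on the page: shift scores are summed within "session windows" (shifts less than 1 second apart, each window capped at 5 seconds), and the worst window is reported. A sketch of that aggregation over layout-shift entries (the `startTime`/`value` field names follow the Layout Instability API; the function itself is illustrative):

```javascript
// Compute CLS from layout-shift entries ({ startTime, value }) using the
// session-window rule: shifts < 1 s apart, window capped at 5 s; CLS is
// the largest window sum.
function cumulativeLayoutShift(shifts) {
  let cls = 0;
  let windowSum = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const { startTime, value } of shifts) {
    // Start a new session window when the gap or window length is exceeded.
    if (startTime - prevTime > 1000 || startTime - windowStart > 5000) {
      windowSum = 0;
      windowStart = startTime;
    }
    windowSum += value;
    prevTime = startTime;
    cls = Math.max(cls, windowSum);
  }
  return cls;
}
```

Two shifts more than a second apart land in separate windows, so a page with occasional small shifts scores better than one where shifts cluster together.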
First Contentful Paint (FCP)
FCP measures the time from navigation to when the browser renders the first piece of DOM content (text, image, SVG, or non-white canvas element). It marks the point at which the user first sees something on screen, providing early feedback that the page is loading. A good FCP is under 1.8 seconds. FCP is closely related to LCP but captures the initial paint rather than the largest meaningful paint.
Time to First Byte (TTFB)
TTFB measures the time between the browser's request for a resource and when the first byte of the response begins to arrive. It reflects server processing time, network latency, and any redirects that occur before the response. A good TTFB is under 800 milliseconds. High TTFB directly delays all subsequent metrics because nothing can render until the server begins responding. Common causes include slow databases, missing CDN, cold server starts, and redirect chains.
Total Blocking Time (TBT)
TBT measures the total amount of time between First Contentful Paint and Time to Interactive during which the main thread was blocked long enough to prevent input responsiveness. Any task longer than 50 milliseconds is considered a "long task," and the amount of time beyond 50ms for each long task contributes to TBT. A good TBT is under 200 milliseconds. TBT is the primary lab-based proxy for INP and accounts for approximately 30% of the Lighthouse Performance score.
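The 50 ms rule makes TBT straightforward to compute from a list of main-thread task durations, as in this sketch:

```javascript
// Sum the blocking portion (time beyond 50 ms) of each main-thread task.
// Tasks at or under 50 ms contribute nothing.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce((tbt, d) => tbt + Math.max(0, d - 50), 0);
}
```

For example, tasks of 30 ms, 70 ms, and 250 ms contribute 0 + 20 + 200 = 220 ms of blocking time.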
How Page Speed Affects SEO
Page speed has been a Google ranking factor since 2010 for desktop searches and since 2018 for mobile searches (the "Speed Update"). With the introduction of Core Web Vitals as ranking signals in 2021, Google made it clear that user experience metrics directly influence search rankings. Here are the key ways page speed impacts your SEO performance.
Google Page Experience Signals
Core Web Vitals are part of Google's broader Page Experience signals, which also include mobile-friendliness, HTTPS usage, and the absence of intrusive interstitials. Sites that pass all Page Experience criteria may receive a ranking boost, especially in competitive SERPs where content quality is similar across results.
Mobile-First Indexing
Google uses the mobile version of your site for indexing and ranking. Since mobile devices often have slower network connections and less processing power, page speed optimization is even more critical for mobile. A site that loads quickly on desktop but slowly on mobile will be evaluated based on its mobile performance.
Bounce Rate and Engagement
Research by Google shows that as page load time increases from 1 second to 3 seconds, the probability of a bounce increases by 32%; from 1 second to 5 seconds, it increases by 90%. Slow pages not only rank lower but also lose visitors before they can engage with the content, compounding the negative SEO impact through reduced engagement signals.
Conversion Impact
Studies consistently show that every 100ms improvement in page load time can increase conversion rates by up to 1%. For e-commerce sites, a 1-second delay in page load can result in a 7% reduction in conversions. Page speed directly impacts your bottom line, making it both an SEO and a business priority.
Field Data vs Lab Data
Understanding the difference between field data and lab data is essential for interpreting page speed results correctly. Both have their place in the optimization workflow, but they serve fundamentally different purposes.
Chrome User Experience Report (CrUX) — Field Data
Field data comes from real users visiting your website. The Chrome User Experience Report (CrUX) collects anonymized performance data from Chrome users who have opted in, aggregated over a rolling 28-day period. This is the data Google uses for ranking purposes.
Field data reflects the actual experience of your users across their diverse devices, network conditions, and geographic locations. It captures metrics like INP that can only be accurately measured through real interactions. However, field data requires sufficient traffic volume to be available (a minimum number of page loads over the 28-day window) and takes time to reflect changes you have deployed.
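When CrUX data is available for your origin, you can pull it programmatically via the CrUX API. A sketch of building such a request (the endpoint and metric names follow Google's CrUX API documentation; `YOUR_API_KEY` is a placeholder):

```javascript
// Sketch: build a CrUX API queryRecord request for an origin's field data.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

function cruxRequest(origin, apiKey = 'YOUR_API_KEY') {
  return {
    url: `${CRUX_ENDPOINT}?key=${apiKey}`,
    method: 'POST',
    body: JSON.stringify({
      origin,
      metrics: [
        'largest_contentful_paint',
        'interaction_to_next_paint',
        'cumulative_layout_shift',
      ],
    }),
  };
}
```

The resulting object can be passed to `fetch(req.url, { method: req.method, headers: { 'Content-Type': 'application/json' }, body: req.body })`; the response contains 75th-percentile values and good/needs-improvement/poor distributions per metric.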
Lighthouse — Lab Data
Lab data comes from tools like Lighthouse that simulate a page load in a controlled environment with a predefined device profile and network speed. Lab data is reproducible, available immediately for any URL, and useful for debugging specific performance bottlenecks before deploying fixes.
Lab data cannot measure INP (since there are no real user interactions), so it uses Total Blocking Time (TBT) as a proxy. Lab results may not reflect the conditions experienced by real users on varied devices and networks. Our page speed checker uses the Google PageSpeed Insights API, which provides both lab data (via Lighthouse) and field data (via CrUX) when available.
Why Field Data Matters More for Google Rankings
Google uses field data (CrUX) for its Page Experience ranking signals, not lab data. This means a site could score perfectly in Lighthouse lab tests but still fail Core Web Vitals if real users on slow devices experience poor performance. Conversely, a site with modest lab scores but excellent field data will pass Google's requirements. Always prioritize field data when it is available for your domain.
When Lab Data Is Useful
Lab data is invaluable for debugging and development. Use it when you need to identify specific performance bottlenecks, test the impact of code changes before deployment, audit new pages or sites with insufficient traffic for CrUX data, or establish a baseline before beginning optimization work. Lab data provides a consistent, controlled measurement that is ideal for comparing before-and-after results of optimizations.
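Lab runs can also be automated via the PageSpeed Insights API, which returns the Lighthouse report (and CrUX field data when available) as JSON. A sketch of building the request URL (endpoint and parameters per Google's PSI v5 documentation; an API key is only needed for heavier usage):

```javascript
// Sketch: build a PageSpeed Insights v5 request URL for a page.
function psiUrl(pageUrl, strategy = 'mobile') {
  const endpoint =
    'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const params = new URLSearchParams({
    url: pageUrl,
    strategy,                 // 'mobile' or 'desktop'
    category: 'performance',  // which Lighthouse category to audit
  });
  return `${endpoint}?${params}`;
}
```

Fetching that URL and reading `lighthouseResult.audits` gives you the same metric values shown in the PageSpeed Insights UI, which is convenient for tracking before-and-after results in CI.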
Our 5-Category Scoring System
Our page speed checker evaluates your website across five weighted categories, providing a comprehensive assessment that goes beyond Core Web Vitals alone. The total possible score is approximately 120 points, capped at 100, ensuring that excellence in multiple areas is rewarded while still requiring broad competence across all categories.
1. Core Web Vitals (40% weight)
The most heavily weighted category, reflecting Google's emphasis on these metrics as ranking signals. This category evaluates six performance metrics:
- LCP — Largest Contentful Paint (target: ≤ 2.5 seconds)
- INP — Interaction to Next Paint (target: ≤ 200 milliseconds)
- CLS — Cumulative Layout Shift (target: ≤ 0.1)
- FCP — First Contentful Paint (target: ≤ 1.8 seconds)
- TTFB — Time to First Byte (target: ≤ 800 milliseconds)
- TBT — Total Blocking Time (target: ≤ 200 milliseconds)
2. Lighthouse Scores (25% weight)
Google Lighthouse provides four audited scores that reflect overall page quality. Each score ranges from 0 to 100, and a score of 90 or above is considered good:
- Performance — Overall performance based on metric timings and weights
- Accessibility — WCAG compliance, screen reader support, and keyboard navigation
- Best Practices — Security, modern APIs, and browser compatibility
- SEO — Technical SEO fundamentals (meta tags, crawlability, semantic HTML)
3. Resource Optimization (15% weight)
Evaluates how efficiently your page uses JavaScript, CSS, and DOM resources:
- Unused JavaScript and CSS (dead code that should be eliminated or deferred)
- Render-blocking resources that delay the first paint
- JavaScript and CSS minification (removing whitespace, comments, and shortening variable names)
- DOM size (total number of elements, maximum depth, and maximum children per element)
4. Image Optimization (10% weight)
Images are often the largest resources on a page. This category checks:
- Properly sized images (no oversized images scaled down by CSS in the browser)
- Modern image formats (WebP, AVIF instead of PNG or unoptimized JPEG)
- Compression and quality optimization (appropriate quality levels for the use case)
- Lazy loading for below-the-fold images and preloading for the LCP image
5. Caching and Compression (10% weight)
Proper caching and compression reduce repeat load times and overall bandwidth usage:
- Browser cache headers (Cache-Control, ETag, Expires) for static assets
- Text compression (GZIP or Brotli) for HTML, CSS, JavaScript, and JSON responses
- HTTP/2 or HTTP/3 protocol usage for multiplexed connections and header compression
How to Improve Page Speed
Improving page speed requires a systematic approach. Start with the highest-impact changes and measure the improvement after each optimization. Here are the key techniques, each with practical implementation guidance and code examples.
Optimize Images
Images are frequently the largest resources on a page. Converting to modern formats like WebP and AVIF, properly sizing images to match their display dimensions, and implementing lazy loading can dramatically reduce load times. Preloading the LCP image with the fetchpriority="high" attribute ensures the most important visual element renders as quickly as possible.
<!-- Preload the LCP image for faster rendering -->
<link rel="preload" as="image" href="/hero.webp" type="image/webp" />
<!-- Use modern formats with fallback -->
<picture>
<source srcset="/hero.avif" type="image/avif" />
<source srcset="/hero.webp" type="image/webp" />
<img src="/hero.jpg" alt="Hero image"
width="1200" height="600"
loading="eager"
fetchpriority="high" />
</picture>
<!-- Lazy load below-the-fold images -->
<img src="/product.webp" alt="Product photo"
width="400" height="300"
loading="lazy"
decoding="async" />
Reduce JavaScript
JavaScript is the most expensive resource on the web per byte. It must be downloaded, parsed, compiled, and executed — each step consuming CPU time that blocks the main thread. Use code splitting to load only what is needed for the current page, tree shake unused exports at build time, and defer non-critical scripts so they do not block the initial render.
<!-- Defer non-critical JavaScript -->
<script src="/analytics.js" defer></script>
<!-- Async for independent third-party scripts -->
<script src="/third-party-widget.js" async></script>
// Dynamic import for code splitting (Next.js)
import dynamic from 'next/dynamic';
const HeavyComponent = dynamic(
  () => import('./HeavyComponent'),
  { loading: () => <Skeleton /> }
);
// Tree-shake by importing only what you need
import { debounce } from 'lodash-es'; // NOT import _ from 'lodash'
Enable Compression
Text-based resources (HTML, CSS, JavaScript, JSON, SVG) should always be served with compression enabled. Brotli provides 15–25% better compression ratios than GZIP and is supported by all modern browsers. GZIP remains a solid fallback for older clients.
# Nginx - Enable Brotli compression (requires the ngx_brotli module)
brotli on;
brotli_types text/plain text/css application/javascript
application/json image/svg+xml;
# Apache - Enable GZIP compression
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/css
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/json
</IfModule>
Leverage Browser Caching
Proper cache headers allow browsers to store static resources locally, eliminating redundant network requests on repeat visits. Use long cache durations with the immutable directive for versioned assets and shorter durations with revalidation for HTML pages.
# Nginx - Cache static assets for 1 year
location ~* \.(js|css|png|webp|avif|woff2)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# HTML pages - short cache with revalidation
location ~* \.html$ {
add_header Cache-Control "public, max-age=3600, must-revalidate";
}
Use a CDN
A Content Delivery Network distributes your content across servers worldwide, reducing the physical distance between users and your resources. CDNs dramatically reduce TTFB for geographically distributed audiences. Most modern CDNs also provide automatic compression, HTTP/2 or HTTP/3 support, edge caching, and image optimization out of the box, making them one of the most impactful single changes you can make for global performance.
Minimize Render-Blocking Resources
CSS and synchronous JavaScript in the <head> block rendering until they are fully downloaded and parsed. Inline the critical CSS needed for above-the-fold content and load the full stylesheet asynchronously to allow the page to render sooner.
<!-- Inline critical CSS for above-the-fold content -->
<style>
body { font-family: system-ui, sans-serif; margin: 0; }
.hero { min-height: 100vh; display: flex; align-items: center; }
</style>
<!-- Load full stylesheet asynchronously -->
<link rel="preload" href="/styles.css" as="style"
onload="this.onload=null;this.rel='stylesheet'" />
<noscript><link rel="stylesheet" href="/styles.css" /></noscript>
Reduce DOM Size
A large DOM tree (more than 1,500 nodes) increases memory usage, causes longer style recalculations, and slows down layout and paint operations. Review your HTML structure to remove unnecessary wrapper elements, flatten deeply nested structures, and virtualize long lists. Lighthouse recommends fewer than 1,500 DOM elements and a maximum depth of 32 levels.
// Virtualize long lists instead of rendering all items (react-window)
import { FixedSizeList } from 'react-window';
<FixedSizeList
height={600}
width="100%"
itemSize={50}
itemCount={10000}
>
{({ index, style }) => (
<div style={style}>Row {index}</div>
)}
</FixedSizeList>
Common Page Speed Issues
These are the most frequently encountered performance problems across websites of all sizes. Addressing even a few of these can result in significant, measurable improvements to your page speed scores and user experience.
Large Uncompressed Images
Serving images in legacy formats like BMP, using lossless PNG for photographic content, or serving JPEGs without quality optimization is one of the most common performance issues. A single unoptimized hero image can be several megabytes, dominating page weight and causing extremely slow LCP times. Convert to WebP or AVIF and use appropriate quality settings (80–85% for photographs is usually visually indistinguishable from 100%).
Unused JavaScript and CSS
Many websites ship large bundles containing code that is never executed on the current page. JavaScript frameworks, component libraries, and third-party scripts frequently include features that go unused. Use the browser DevTools Coverage tab to identify unused code, then implement code splitting and tree shaking to reduce bundle sizes. Aim for less than 20% unused code per resource.
No Browser Caching Headers
Without proper Cache-Control headers, browsers must re-download every resource on each visit. This wastes bandwidth and makes repeat visits nearly as slow as the first load. Static assets like images, fonts, CSS, and JavaScript should have cache durations of at least one month — ideally one year for files with versioned filenames (content hashing).
Render-Blocking Third-Party Scripts
Analytics scripts, ad networks, chat widgets, social media embeds, and A/B testing tools often load synchronously and block rendering. Each third-party script adds DNS lookups, TCP connections, and download time. Audit your third-party scripts regularly, load them with async or defer attributes, and consider delaying non-essential scripts until after the first user interaction.
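One common pattern for the "delay until first interaction" advice is to inject the script tag only when the user first touches, clicks, scrolls, or types. A browser-side sketch (the event list and function name are illustrative):

```javascript
// Load a non-essential third-party script only after the first user
// interaction, keeping it off the critical rendering path entirely.
function loadOnFirstInteraction(src) {
  const events = ['pointerdown', 'keydown', 'scroll'];
  const load = () => {
    const script = document.createElement('script');
    script.src = src;
    script.defer = true;
    document.head.appendChild(script);
    // Remove all triggers once the script has been injected.
    events.forEach((e) => removeEventListener(e, load));
  };
  events.forEach((e) => addEventListener(e, load, { passive: true }));
}
```

A typical usage would be `loadOnFirstInteraction('/chat-widget.js')`; the widget then costs nothing during initial load and first paint.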
Large DOM Tree
Pages with more than 1,500 DOM elements suffer from slow style recalculations, expensive reflows, and increased memory consumption. Complex navigation menus, long product listings without virtualization, and deeply nested component structures are common culprits. Flatten your HTML hierarchy, remove unnecessary wrapper divs, and use virtual scrolling for long lists.
No Text Compression
Serving HTML, CSS, JavaScript, and JSON without GZIP or Brotli compression wastes bandwidth and increases transfer times. Text compression typically reduces file sizes by 60–80%. Most web servers and CDNs support compression out of the box but may need it explicitly enabled in the server configuration.
Missing Image Dimensions Causing CLS
When images load without explicit width and height attributes (or equivalent CSS), the browser cannot reserve space for them during the initial layout. As images load, they push surrounding content down, creating layout shifts that hurt your CLS score. Always specify dimensions on images and use the CSS aspect-ratio property as an additional safeguard for responsive layouts.
Tools for Measuring Page Speed
Multiple tools exist for measuring and analyzing page speed. Each has different strengths, and using a combination provides the most complete picture of your site's performance.
Google PageSpeed Insights
The official Google tool that combines real-world CrUX field data with Lighthouse lab data. It provides both mobile and desktop analysis and is the closest reflection of how Google evaluates your page's performance for ranking purposes. It is free, requires no installation, and is available at pagespeed.web.dev.
Lighthouse (Chrome DevTools)
Built into Chrome DevTools (accessible via F12 or right-click and Inspect), Lighthouse performs a comprehensive audit of your page including performance, accessibility, best practices, and SEO. It generates a detailed report with specific, actionable recommendations for improvement. For consistent results, run it in incognito mode to avoid browser extension interference.
Chrome User Experience Report (CrUX)
CrUX provides real-world performance data collected from millions of Chrome users who have opted in to sharing usage statistics. It is available via the CrUX API, Google BigQuery, and the CrUX Dashboard. This is the definitive source for understanding how real users experience your site and what Google sees when evaluating your pages for Core Web Vitals ranking signals.
WebPageTest
An advanced, open-source tool that allows you to test from different geographic locations, real devices, and network conditions. It provides waterfall charts, filmstrip views, connection-level analysis, and multi-step test scripts. Particularly useful for diagnosing the impact of third-party scripts, redirect chains, and server configuration issues.
InstaRank SEO Page Speed Checker
Our Page Speed Checker combines data from the Google PageSpeed Insights API with our proprietary 5-category scoring system. It evaluates Core Web Vitals, Lighthouse scores, resource optimization, image optimization, and caching in a single comprehensive report with actionable recommendations. Use it as part of your full website audit or as a standalone tool for quick performance checks.
Frequently Asked Questions
What are Core Web Vitals?
Core Web Vitals are three user-centered metrics — LCP, INP, and CLS — that Google uses to evaluate loading performance, responsiveness, and visual stability as part of its Page Experience ranking system.
What happened to First Input Delay (FID)?
FID was deprecated in March 2024 and replaced by INP, which measures responsiveness across the entire page lifecycle rather than only the delay of the first interaction.
Does page speed directly affect SEO rankings?
Yes. Page speed has been a ranking factor since 2010 for desktop and 2018 for mobile, and Core Web Vitals — measured from CrUX field data — have been part of Google's Page Experience ranking signals since June 2021.
What is a good page speed score?
A Lighthouse Performance score of 90 or above is considered good, but for ranking purposes what matters is passing all three Core Web Vitals thresholds (LCP ≤ 2.5s, INP ≤ 200ms, CLS ≤ 0.1) in field data.
Why are my Lighthouse scores different every time?
Lab results vary with network conditions, available CPU, and browser extensions. Run tests in incognito mode and compare several runs rather than relying on a single measurement.
How can I improve my LCP score?
Reduce server response time (TTFB), preload the hero image with fetchpriority="high" on the LCP element, and ensure critical CSS is inlined for above-the-fold content.