Core Web Vitals Engineers

Core Web Vitals hitting Good for real users, not just on localhost.

LCP, INP and CLS fixed at the root cause. CrUX field data baseline, code-level PRs, monthly RUM scorecard. Not synthetic theatre.

  • Field data from CrUX, not localhost runs
  • Per-metric root cause fixes for LCP/INP/CLS
  • Monthly RUM scorecard, segmented by device

Trusted by ambitious brands worldwide

Rated #1 SEO agency in The Netherlands by SEMrush

Most agencies hand you a Lighthouse score from localhost, then claim victory while the CrUX report stays Poor. We optimise for real Chrome users.

Lighthouse is a synthetic lab tool running once on a developer's laptop with throttled 4G. The Chrome User Experience Report aggregates 28 days of real Chrome users on real devices on real networks. Google ranks on the second one. We do too, and we fix the metrics in your codebase until the field data turns green.

See CWV outcomes

How a Core Web Vitals engagement runs

How a LASEO Core Web Vitals engagement runs

Five phases over 10 to 14 weeks. Field data baseline, per-metric root cause fixes for LCP, INP and CLS, then RUM monitoring so the next deploy does not regress.

  • 01

    CrUX field data baseline (week 1)

    We pull your CrUX data from the public BigQuery dataset: 28-day Origin score, plus per-URL data for your top 20 organic landing pages via the CrUX API. We cross-reference with the GSC Core Web Vitals report. Output: a baseline scorecard showing p75 LCP, INP and CLS for Origin and per-URL, broken down by device (mobile vs desktop).

  • 02

    LCP root cause and fix (weeks 2-4)

    Profile the LCP element in the Chrome DevTools Performance panel and PageSpeed Insights. Typical fixes: preload the LCP image, set fetchpriority="high" on the hero img tag, convert the hero to AVIF or WebP with a correct srcset, size the image to its actual rendered dimensions, put a CDN in front of static assets, and shave server response time (TTFB) by caching or by moving rendering off the critical path.

  • 03

    INP root cause and fix (weeks 5-8)

    Profile long tasks in the DevTools Performance panel with 4x CPU throttling. Typical fixes: split long JavaScript tasks into chunks under 50ms each, defer non-critical scripts with type="module" or async, move heavy work off the main thread into Web Workers, schedule low-priority work via requestIdleCallback, and audit your tag manager for synchronous third-party scripts blocking interaction.

  • 04

    CLS root cause and fix (weeks 9-11)

    Identify layout shift sources via DevTools Performance Insights and PageSpeed Insights. Typical fixes: add explicit width and height attributes on every image and video, set font-display: optional or swap with a size-adjusted fallback, reserve space for ad slots and embeds with min-height containers, and avoid inserting DOM content above existing content after page load.

  • 05

    Continuous RUM and monthly scorecard (ongoing)

    Instrument the official Web Vitals JS library and report to GA4 or a dedicated endpoint. Build a Looker Studio dashboard segmented by device, country and template. Monthly: a one-page CWV scorecard with p75 deltas, the regression alerts that fired, and the next fixes queued. Review with your engineering lead.
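As a sketch of that instrumentation: the serialiser below is ours, the onLCP/onINP/onCLS callbacks are the official web-vitals library API, and /vitals stands in for whatever endpoint (GA4 proxy or dedicated collector) you report to.

```javascript
// Sketch: serialise a web-vitals Metric object and beacon it to a
// collector. The /vitals endpoint is a placeholder; in the page this
// pairs with the official library:
//   import { onCLS, onINP, onLCP } from "web-vitals";
//   onLCP(report); onINP(report); onCLS(report);
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name,     // "LCP" | "INP" | "CLS"
    value: metric.value,   // milliseconds for LCP/INP, unitless for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id,         // unique per page load, used for deduplication
  });
}

function report(metric) {
  // sendBeacon survives page unload, unlike a plain fetch
  navigator.sendBeacon("/vitals", toPayload(metric));
}
```

sendBeacon matters here because LCP and CLS values are often finalised only when the page is being hidden or unloaded.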

The model
02 - How we work

Per-metric root cause, not a generic speed report.

A bad LCP, a bad INP and a bad CLS each have entirely different causes. LCP is usually a render-blocking critical resource or a slow server. INP is usually a long JavaScript task on a click handler. CLS is usually a font swap or an image with no dimensions. We diagnose them separately and ship separate fixes for each.

3 metrics diagnosed separately
28-day CrUX field data window
75th percentile is what Google ranks on

What Core Web Vitals actually measures

Core Web Vitals, decoded metric by metric

Core Web Vitals is a set of three field metrics Google uses as a small but real ranking signal: LCP for loading, INP for interactivity, CLS for visual stability. Each is measured at the 75th percentile of real Chrome users across a 28-day window. Lighthouse approximates them in a lab, but ranking is based on field data.

01

LCP (Largest Contentful Paint)

Time until the largest above-the-fold element renders. Good is under 2.5s at p75. Usually the hero image, hero video poster, or a large block of text. Fixed by preloading the LCP resource, setting fetchpriority="high", serving modern formats (AVIF/WebP), correct sizing, CDN delivery, and improving server response time.
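Pulled together, those fixes look roughly like this in the page. The file names, sizes and dimensions below are illustrative, not from any real site:

```html
<!-- In <head>: preload the hero image so the browser fetches it early -->
<link rel="preload" as="image" href="/img/hero-1200.avif"
      imagesrcset="/img/hero-800.avif 800w, /img/hero-1200.avif 1200w"
      imagesizes="100vw">

<!-- Hero img: high fetch priority, modern format, explicit dimensions -->
<img src="/img/hero-1200.avif"
     srcset="/img/hero-800.avif 800w, /img/hero-1200.avif 1200w"
     sizes="100vw"
     width="1200" height="600"
     fetchpriority="high"
     alt="Hero">
```

The explicit width and height also feed the CLS fix: the browser reserves the right aspect ratio before the image arrives.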

02

INP (Interaction to Next Paint)

Time from any user interaction (click, tap, key press) to the next paint. Replaced FID in March 2024. Good is under 200ms at p75. Usually broken by long JavaScript tasks on the main thread. Fixed by splitting tasks into chunks under 50ms, deferring scripts, offloading work to Web Workers, and scheduling low-priority work with requestIdleCallback.
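The task-splitting pattern can be sketched as follows. scheduler.yield() is the modern browser API and setTimeout is the fallback; processInChunks and its chunk size are our illustration, to be tuned per codebase:

```javascript
// Sketch: break one long task into chunks, yielding to the main thread
// between chunks so input events can be handled sooner (better INP).
// Tune chunkSize so each chunk finishes in well under 50ms.
function yieldToMain() {
  if (typeof scheduler !== "undefined" && scheduler.yield) {
    return scheduler.yield(); // Scheduler API, where available
  }
  return new Promise((resolve) => setTimeout(resolve, 0)); // fallback
}

async function processInChunks(items, work, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(work(item));
    }
    // Let clicks and keypresses run before the next chunk starts
    if (i + chunkSize < items.length) await yieldToMain();
  }
  return results;
}
```

The output is identical to the single long task; the only change is where the main thread gets a chance to breathe.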

03

CLS (Cumulative Layout Shift)

Sum of unexpected layout shifts during the page lifecycle. Good is under 0.1. Usually caused by images without width/height, font swaps that resize text, ads or embeds loading without reserved space, or banners inserted above existing content. Fixed by reserving space, sizing media, and tuning font-display.
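In CSS, the reserved-space and font fixes look roughly like this. The font names and the 280px/104% values are illustrative and must be tuned per site:

```css
/* Web font with font-display set explicitly
   (swap shows fallback text immediately; optional avoids the swap shift) */
@font-face {
  font-family: "Brand";
  src: url("/fonts/brand.woff2") format("woff2");
  font-display: swap;
}

/* Size-adjusted fallback so the swap does not reflow the text */
@font-face {
  font-family: "BrandFallback";
  src: local("Arial");
  size-adjust: 104%; /* illustrative; match the web font's metrics */
}

body { font-family: "Brand", "BrandFallback", sans-serif; }

/* Reserve space for the ad slot so the page does not shift when it loads */
.ad-slot { min-height: 280px; }
```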

04

Origin score vs URL score

CrUX reports both. Origin is the aggregate across the entire domain over 28 days. URL-level is per-page (only if it has enough traffic for the sample). Your Origin can be Poor while a specific URL is Good, and vice versa. We optimise both, focusing first on the templates your organic traffic actually lands on.
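Both scores come from the same CrUX API queryRecord endpoint; only the request body differs. A sketch, with API key handling omitted (metric identifiers follow the public CrUX API; the helper names are ours):

```javascript
// Endpoint and metric names from the public CrUX API; POST the body
// to CRUX_ENDPOINT?key=YOUR_API_KEY to get p75 values back.
const CRUX_ENDPOINT =
  "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

const METRICS = [
  "largest_contentful_paint",
  "interaction_to_next_paint",
  "cumulative_layout_shift",
];

// Origin score: 28-day aggregate across the whole domain
function originQuery(origin, formFactor = "PHONE") {
  return { origin, formFactor, metrics: METRICS };
}

// URL score: per-page; the API returns 404 NOT_FOUND when the page
// lacks enough traffic for a sample
function urlQuery(url, formFactor = "PHONE") {
  return { url, formFactor, metrics: METRICS };
}
```

In the response, record.metrics[metric].percentiles.p75 carries the number that decides Good, Needs Improvement or Poor.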

05

Field data vs synthetic data

Field data (CrUX, RUM) is what real users experience. Synthetic data (Lighthouse, WebPageTest) is a single run in a controlled lab. Google ranks on field data. Lighthouse is useful for debugging a specific change, not for declaring victory. A Lighthouse 95 with a Poor CrUX is common, and it does not help your rankings.

06

How much CWV affects rankings

Core Web Vitals has been a confirmed Google ranking factor since 2021, but it is a small one among many. It is a tie-breaker more than a primary lever. Where it matters most: mobile commerce SERPs with very similar competitors, where a Good Origin score moves you from page two to page one. It also affects conversion rate directly, which is often the bigger commercial reason to fix it.

Philips
WOOOD
Bugaboo
Balmain Hair
Martini
Intersport
CUBE
Lovens

Why LASEO vs alternatives

Core Web Vitals optimisation compared honestly

Lighthouse-only audits and speed plugins both have a place. Here is where LASEO fits and where it does not.

Typical CWV audit

Lighthouse screenshot, no field data

  • One Lighthouse run on the auditor's laptop. Mobile throttled to a fake 4G profile. A score that can swing 30 points on the next refresh.
  • Recommendation: 'optimise images'. No identification of which element is the LCP or what is blocking it.
  • Recommendation: 'reduce JavaScript'. No breakdown of which tasks are long or which scripts to defer.
  • Recommendation: 'add image dimensions'. Misses font swap shifts and late-inserted content.
  • Slides, Excel, PDF. You hire a developer separately to implement, often months later.
  • None. The auditor disappears after handover. The next deploy regresses LCP and nobody notices for weeks.
LASEO

CrUX field data and code-level PRs

  • CrUX field data from the public BigQuery dataset, 28-day aggregate of real Chrome users at the 75th percentile. The exact data Google uses for ranking.
  • DevTools Performance trace identifies the LCP element per template. Fix specifies preload tag, fetchpriority, format conversion, CDN config, and TTFB target.
  • Long-task breakdown per template via DevTools and Web Vitals attribution. Specific fix per task: split, defer, Web Worker, or requestIdleCallback.
  • Shift sources identified per template via Performance Insights. Fixes for images, fonts (font-display tuning), reserved space on ads and embeds, and DOM stability.
  • PRs to your Git repo: head template preload tags, fetchpriority on image components, font CSS, layout containers. Your engineers review and merge.
  • Real User Monitoring via the Web Vitals JS library piped to GA4 or BigQuery. Slack alerts on regressions. Monthly scorecard with CrUX deltas.

What our CWV clients say

Core Web Vitals work in their own words

Our previous agency sent us a Lighthouse 92 screenshot every month. CrUX stayed Poor. LASEO actually pulled our BigQuery CrUX data, found the LCP image was a 3.2MB JPEG, and shipped a PR that took p75 LCP from 3.8s to 1.9s in three weeks.
Daniel Okafor
Head of Engineering · Loop Earplugs
INP was the metric we could not move. Our cart click handler ran a 480ms long task. LASEO split the work, deferred the analytics tag chain, and moved INP from 340ms to 140ms at p75. Conversion rate moved with it.
Eva Lindqvist
VP Product · Daniel Wellington
We had a 0.27 CLS we could not figure out. They identified a late-loading consent banner pushing content down and the hero font swap resizing text. Both fixed in one PR, CLS now 0.04 at p75 and holding through six releases.
Tom Verlinden
Lead Frontend · Bever

Honest answers about Core Web Vitals optimisation at LASEO

What engineering leads and SEO buyers actually ask before signing a Core Web Vitals engagement.


Core Web Vitals is Google's set of three user-experience metrics measured on real Chrome users in the field. LCP (Largest Contentful Paint) measures loading speed, with Good under 2.5 seconds at the 75th percentile. INP (Interaction to Next Paint) measures responsiveness, with Good under 200 milliseconds. CLS (Cumulative Layout Shift) measures visual stability, with Good under 0.1. All three are aggregated over a 28-day window in the Chrome User Experience Report and used as a small but real ranking signal in Google Search. You need all three Good to pass.
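The pass/fail logic in that answer reduces to a small check. The thresholds and the all-three-Good rule come from the text above; the input shape is our assumption:

```javascript
// Good thresholds at p75: LCP <= 2500 ms, INP <= 200 ms, CLS <= 0.1
const GOOD_THRESHOLDS = { LCP: 2500, INP: 200, CLS: 0.1 };

// p75 is e.g. { LCP: 1900, INP: 140, CLS: 0.04 };
// the origin passes only when all three metrics are Good
function passesCwv(p75) {
  return Object.entries(GOOD_THRESHOLDS).every(
    ([metric, limit]) => p75[metric] <= limit
  );
}
```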

Core Web Vitals diagnosis

Bring your CrUX problem to a performance engineer.

30 minutes with a LASEO performance engineer. We will pull your CrUX data live, identify the metric that is failing hardest, name the likely root cause, and tell you whether we are the right team to fix it.

Agency Partner
Sortlist Elite Agency

Book a Core Web Vitals diagnosis

Fill in the form; a performance engineer responds within one business day.