Agile Collective

We’re a worker-owned agency that designs, builds and supports websites for organisations we believe in.

Improving Your Website's Performance: From Audit to Impact

Lessons from Our Latest Webinar

Earlier this month, our lead developer, James Hall, ran a lunchtime webinar demystifying web performance and Google’s Core Web Vitals (CWV). The session was packed with practical, actionable insights to help web managers, product owners and digital leads make their sites faster, more accessible and easier to use.

In this write-up, we’ve pulled together the key ideas, tools, and takeaways — including what Core Web Vitals actually measure, why real-user data matters, and how to prioritise performance improvements to deliver meaningful results for your users.

Watch the video

Why Core Web Vitals matter

“Web performance” can feel like an abstract goal. We all want our sites to be faster — but faster for whom, under what conditions, and measured in which way? As James explained, Google’s Core Web Vitals were introduced in 2020 to answer exactly that. Rather than only measuring technical speed, CWV focuses on how fast a site feels to real users.

The three Core Web Vitals are:

  • Largest Contentful Paint (LCP) - how quickly the largest visible element (often a hero image or headline) renders.
  • Interaction to Next Paint (INP) - how quickly a page responds to user input.
  • Cumulative Layout Shift (CLS) - how stable the page layout is while loading.

These metrics are particularly powerful because they’re user-centric. A site may appear fast to you on fibre broadband, but painfully slow to someone on a mid-range Android device on 4G — a scenario many organisations underestimate. As James puts it: “There’s often a five-second difference between a broadband user and someone on a mobile network.” Understanding the reality of those differences is the first step toward fixing them.
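Google publishes “good / needs improvement / poor” thresholds for each of the three metrics, assessed at the 75th percentile. As a small illustration (our own sketch, not code from the webinar), here is how those published thresholds map a measured value to a rating:

```javascript
// Google's published Core Web Vitals thresholds:
//   LCP: good <= 2500 ms, poor > 4000 ms
//   INP: good <= 200 ms,  poor > 500 ms
//   CLS: good <= 0.1,     poor > 0.25
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
};

// Classify a single metric value into Google's three CWV buckets.
function rateVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`Unknown metric: ${metric}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}

console.log(rateVital('LCP', 1800)); // 'good'
console.log(rateVital('INP', 350));  // 'needs-improvement'
console.log(rateVital('CLS', 0.3));  // 'poor'
```

Note that LCP and INP are measured in milliseconds, while CLS is a unitless score.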

Synthetic testing vs real-user measurement (RUM)

James outlined two broad ways to measure CWV:

1. Synthetic tests (lab-style tests)

Tools like PageSpeed Insights, WebPageTest and Chrome DevTools simulate user conditions such as network speed, device type or location. They’re quick, easy and great for checking regressions during development.

But they come with a big catch — synthetic tests aren’t real life. They can give you a false sense of confidence if your controlled conditions don’t match your users’ actual experiences.
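Synthetic checks can also be scripted. Google exposes PageSpeed Insights as a public HTTP API (the v5 `runPagespeed` endpoint, with `url` and `strategy` parameters); the small helper below is our own illustration of calling it:

```javascript
// Sketch: querying the public PageSpeed Insights v5 API for a synthetic run.
// The endpoint and its `url`/`strategy` parameters are part of Google's API;
// the helper function itself is just an illustration.
const PSI_ENDPOINT =
  'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

function buildPsiRequest(pageUrl, strategy = 'mobile') {
  const params = new URLSearchParams({ url: pageUrl, strategy });
  return `${PSI_ENDPOINT}?${params}`;
}

// In a real script you would fetch() this URL and read `lighthouseResult`
// (lab data) and `loadingExperience` (CrUX field data) from the JSON response.
console.log(buildPsiRequest('https://example.org/'));
```

Running this on a schedule and diffing the results is a cheap way to catch regressions between releases.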

2. Real User Measurements (RUM)

RUM data is collected from real people using your site on real devices, over real networks. That means:

  • You see trends in genuine user behaviour
  • You get a fuller picture of performance issues
  • And you avoid misleading results

The downside is that RUM is more complex to set up — and you need enough traffic to gather useful data.

Using CrUX: Google’s public RUM dataset

Most web teams aren’t gathering their own RUM data, which is where the Chrome User Experience Report (CrUX) comes in. CrUX collects CWV metrics from consenting Chrome users and makes the aggregated data available via several tools.

Three of the most helpful tools James demoed were:

  • PageSpeed Insights — shows CrUX data alongside synthetic insights
  • Google Search Console (Experience > Core Web Vitals) — groups similar pages and highlights patterns
  • Looker Studio — useful for building custom dashboards if you have RUM data feeding into it

CrUX data is aggregated over a 28-day window at the 75th percentile, which means improvements take time to show up. Useful for reliable trends, less useful if you’re trying to debug something today. And importantly, not all sites are included. You need:

  • Enough traffic
  • Public, indexable pages (no login-only content)

If you don’t meet these criteria, CrUX simply can’t help — meaning you need another solution, for example, setting up RUM analytics using services like DataDog or DebugBear.
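If you do collect your own RUM samples, you can mimic the 75th-percentile aggregation CrUX uses. A rough sketch (using the simple “nearest-rank” method; real RUM tools may interpolate differently):

```javascript
// Rough sketch: percentile aggregation over raw RUM samples,
// using the nearest-rank method.
function percentile(values, p) {
  if (values.length === 0) throw new Error('no samples');
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// e.g. LCP samples (in ms) collected from real users over 28 days:
const lcpSamples = [1200, 1800, 2100, 2600, 3400, 900, 4100, 2300];
console.log(percentile(lcpSamples, 75)); // 2600
```

The p75 is deliberately pessimistic: it tells you the experience that three-quarters of your users get or beat, rather than an average that fast connections can mask.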

Setting up your own RUM analytics

For organisations that want more control — or quicker feedback loops — James recommended rolling your own RUM analytics pipeline with tools many teams already have.

James used this kind of setup for Agile Collective’s audit of Friends of the Earth, enabling him to pinpoint the exact elements causing performance issues — including a cookie banner being treated as the largest contentful element for users who hadn’t yet dismissed it.

This level of granularity makes it far easier to prioritise work. But it does depend on having at least 4–6 weeks of data before an audit begins.
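A minimal self-hosted RUM reporter usually has two halves: the open-source `web-vitals` library collecting metrics in the browser, and a small endpoint receiving them. The endpoint path below (`/rum`) is hypothetical, and the browser wiring is shown only in comments; the payload builder itself is plain JavaScript:

```javascript
// Sketch of a minimal RUM reporter. In the browser you would pair this
// with the web-vitals library (the endpoint path '/rum' is hypothetical):
//   import { onLCP, onINP, onCLS } from 'web-vitals';
//   onLCP((m) => navigator.sendBeacon('/rum', buildPayload(m, location.pathname)));
// Below, only the payload builder, which runs anywhere.
function buildPayload(metric, pageUrl) {
  return JSON.stringify({
    name: metric.name,     // 'LCP' | 'INP' | 'CLS'
    value: metric.value,   // ms for LCP/INP, unitless score for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,         // unique per page load, for deduplication
    page: pageUrl,         // which page the sample came from
  });
}

console.log(buildPayload({ name: 'LCP', value: 2100, rating: 'good', id: 'v1' }, '/'));
```

Recording the page URL alongside each sample is what makes element-level findings like the cookie-banner one possible, because you can slice the data by template or page.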

The issues we see most often

Across the many performance audits we’ve run, some issues come up repeatedly:

  • High Time to First Byte (TTFB) - Often caused by slow hosting, heavy backend processing or lots of logged-in users. TTFB drags everything else down.
  • Poor caching strategy - If pages can’t be cached effectively, servers must rebuild them on every request — slowing down the experience for everyone.
  • Oversized images - Still one of the biggest (and easiest) wins.
  • Unused JavaScript or CSS - Sending unnecessary code increases load times and delays interactivity.
  • Layout shift issues - Ad scripts, images without dimensions or late-loading UI components all contribute to frustrating CLS scores.

The sweet spot is identifying fixes that are high impact, low effort — something synthetic tools can help you spot, but real-user data helps you prioritise.
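To make the caching point concrete, here is one way a response handler might choose a `Cache-Control` header per response type. The policy values are illustrative examples, not recommendations from the webinar:

```javascript
// Illustration of the caching issue above: pick a Cache-Control header
// based on what is being served. The TTL values are example choices.
function cacheControlFor({ loggedIn, contentType }) {
  if (loggedIn) {
    return 'private, no-store'; // never let shared caches hold personalised pages
  }
  if (contentType.startsWith('image/')) {
    return 'public, max-age=31536000, immutable'; // long-lived, fingerprinted assets
  }
  return 'public, max-age=300, stale-while-revalidate=60'; // short TTL for HTML
}

console.log(cacheControlFor({ loggedIn: false, contentType: 'image/webp' }));
```

The `private, no-store` branch is why “lots of logged-in users” shows up under TTFB above: personalised pages bypass the cache and must be rebuilt on every request.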

Continuous monitoring beats one-off audits

When asked how often organisations should check their CWV data, James’ answer was clear: continuously. Websites change, content changes, and regressions slip in easily. Daily dashboards highlight problems early, and trend lines help you distinguish noise from issues affecting real users.
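One simple way to turn a daily dashboard into an alert is to compare today’s p75 against a rolling baseline with some tolerance for noise. The 20% tolerance below is an arbitrary illustrative choice:

```javascript
// Sketch: flag a regression when today's p75 exceeds a rolling baseline
// by more than a tolerance. The 20% default is illustrative, not a standard.
function isRegression(todayP75, baselineP75, tolerance = 0.2) {
  return todayP75 > baselineP75 * (1 + tolerance);
}

console.log(isRegression(3100, 2500)); // true  (24% worse than baseline)
console.log(isRegression(2600, 2500)); // false (within normal variation)
```

Checks like this catch the slow regressions — a new ad script, a heavier hero image — that one-off audits miss.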

Want help improving your site performance?

If you’d like support setting up RUM analytics, improving your Core Web Vitals or running a performance audit, our team would love to help. Get in touch — or explore our Site Performance Audit service for more details on what’s involved.
