Tech Breakout Summit/2021: Improve Your Site With A Real-Time Core Web Vitals View

Google’s Core Web Vitals provide unified guidance for website performance related to user experience, and they’re redefining the way websites are measured. Google has already started using Core Web Vitals as a signal that impacts site ranking within Google search.

In 2021, WP Engine’s own Web Marketing team undertook the major project of understanding and achieving >90% Core Web Vitals across the board. This session offers a closer look at that project and the use of New Relic’s Browser to gain real-world insights into the elements that affect Core Web Vitals.

Video: Improve Your Site With A Real-Time Core Web Vitals View

Slides from the session

In this session, WP Engine Web Development Manager Ryan Hoover and New Relic Senior Product Manager Lindsy Farina discuss:

  • The impact Core Web Vitals will have on your business in 2021 and beyond and how to understand your scores using tools like New Relic’s Browser. 
  • The importance of real user data to overcome the limitations of synthetic tests and gain peace of mind that you’re in control of the way Google is scoring your website’s user experience. 

There are tons of ways to pivot, slice, and dice your data to figure out what’s important to you. Obviously, we have out-of-the-box experiences but we also let you build dashboards on your own. You are the masters of your data with New Relic. We’re going to let you have access to all of it, and we want you to be able to build the experiences that are important to you.

New Relic Senior Product Manager Lindsy Farina

Full text transcript

LINDSY FARINA: Hello everyone and welcome to our talk today about how to improve your site with real-time Core Web Vitals from Google. My name is Lindsy Farina, and I am from New Relic. I’m a senior product manager there. I’ve been there a little over two years focusing on front-end web performance and also expanding into some other areas of New Relic, like our very cool anomaly-detecting product called Lookout. And today I will be co-presenting with Ryan, who will introduce himself.

RYAN HOOVER: I’m Ryan Hoover. I’m the WP Engine Manager of our Web Development team. We work to make sure that WP Engine’s public web properties really showcase what’s possible with WordPress and with WP Engine’s technologies.

LINDSY FARINA: Great. So today let’s go through just a quick agenda of what we’re going to be talking about. First off, we want to make sure that you understand what the Core Web Vitals are because, in order to optimize them, you need to know what they are. We’ll give you a little bit of information about how you measure them. And then some of the exciting things that you can do with New Relic’s Browser, not just Core Web Vitals but everything else. And then we’re going to take a look at what WP Engine has done on their own to go on this journey of optimization and getting their site up to par with the Core Web Vitals.

RYAN HOOVER: Let’s kick it off by just talking about, what are web vitals and why are we talking about them so much these days? We’ve been talking about web performance for decades now, and we’ve been trying to figure out how to make the web fast, how to make users’ experiences good when they visit our websites.

We’ve used a lot of different measurements and processes over the years. We’ve talked about Time to First Byte, Window Load Events, Document Ready, all kinds of different metrics, but it’s always been a little bit off and a little artificial. Last year, in the summer of 2020, Google came out with their Core Web Vitals standards, which are some really clear-cut standards of how we should measure page experience. It goes beyond just how fast did the content load to what does the user feel.

With web vitals, Google has decided to really double down on making sure the web is a useful place for everyone, that everybody has a good experience when they’re visiting sites. And so they’re basically pressuring companies and site owners around the world to start pushing their page experiences forward by factoring those core vitals into the everyday Google products.

Starting this month, and rolling out through August, Google will slowly increase how much Core Web Vitals factor into your search rankings. They’re using this as another signal of how well your site performs and how often people visit it, and so it’s going to go into your organic search rankings. It’s also going to factor into the effectiveness of your paid search ads: you’ll see potential cost-per-click changes, you’ll see changes in how often your ads show, and something that Google has hinted at is that they may release what we’ve personally nicknamed a “fast badge.”

Basically, for sites that deliver a good page experience, Google will label that on the search results. They’re not necessarily going to ding you for being slow, but they will highlight those sites that have a really good page experience, which is going to give them more traffic.

This is Google’s chance to really try and drive home the idea that page experience is critical and that these are the metrics they’re using to define what a good page experience looks like.

LINDSY FARINA: So speaking of these metrics, what are they and what do they mean to you? The thing that I always like to remember is that we are all consumers of websites and so we all want a good experience. And we’ve classically over time as we’ve gotten adjusted to the internet become incredibly impatient and demand the best experience when we go to a website. And so what does the best experience really look and feel like? And Google has decided that these three metrics are the things that answer that question. So you’ll see LCP, FID, and CLS. So what do those mean?

Largest Contentful Paint. It’s essentially: how long does it take for the most important or key piece of content to load on my page? What’s the biggest thing, and how long does it take to get there?

First Input Delay, which is my favorite metric. It’s really that measurement of like, great, you got me content. What is it? What does it do? Can I use it? If I click that button, am I going to feel like it’s snappy? And if I don’t get a response when I click that button, what happens when I start right-clicking, right? So that experience that we’ve all had when we’ve gotten a fast load of visual content, we go to click that Login button and nothing happens, and we click it 75 more times and then we leave the page. So optimizing that First Input Delay is really the thing that you want to focus on to keep your users from getting that frustration of feeling that your website doesn’t work.

And then CLS, which is called Cumulative Layout Shift, and this is the one that’s probably the new kid on the block. It’s not factored quite as much into the Lighthouse scores yet, but it’s that jankiness, that feeling that your website is jumpy. So you go to a page, let’s say you’re reading a news article, and suddenly an ad pops in the middle of your page. That’s a terrible user experience. I’m in the middle of consuming something from your website, and now it’s gone and I have to scroll and go find it.
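For the curious, each individual layout shift is scored as the product of two numbers: the impact fraction (how much of the viewport the unstable elements touched) and the distance fraction (how far they moved, relative to the viewport’s largest dimension). CLS is then an aggregate of those per-shift scores. A rough sketch of just the per-shift arithmetic, not Google’s or New Relic’s actual implementation:

```python
def layout_shift_score(impact_fraction: float, distance_fraction: float) -> float:
    """Score for a single layout shift: impact fraction times distance fraction.

    impact_fraction: share of the viewport affected by the shifting elements.
    distance_fraction: how far they moved, relative to the viewport's
    largest dimension. Both values are in [0, 1].
    """
    return impact_fraction * distance_fraction

# An ad popping in that disturbs 60% of the viewport and pushes
# content down by 25% of the viewport height:
score = layout_shift_score(0.6, 0.25)  # 0.15, already past the 0.1 "good" line
```

A single badly placed ad can blow the CLS budget on its own, which is why reserving space for late-loading content matters so much.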

So not only did they say, OK, these are the three metrics that are going to tell you that your visual load is good, your responsiveness is good, your stability is good, but they’ve given you key ranges that say what is good. You don’t have to guess whether 3 seconds is good enough. They’ve given discrete points in time, and for CLS it’s a little different: it’s a score, not an amount of time. So you have these ranges, and they’ve also told you to measure your population at the 75th percentile. When 75% of your population is experiencing something in a given range, that’s really what you want to focus on getting up into that green good spot.
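Google’s published ranges are: LCP good at or under 2.5 seconds, poor over 4 seconds; FID good at or under 100 ms, poor over 300 ms; CLS good at or under 0.1, poor over 0.25, each judged at the 75th percentile of real users. A minimal sketch of that bucketing for LCP, using toy sample data rather than any real New Relic API:

```python
def percentile(values, p):
    """Nearest-rank percentile: the value at or below which p% of samples fall."""
    ordered = sorted(values)
    # ceil(p/100 * n) gives the 1-based nearest rank
    rank = max(1, -(-len(ordered) * p // 100))
    return ordered[int(rank) - 1]

def rate_lcp(seconds):
    """Bucket an LCP value using Google's published thresholds."""
    if seconds <= 2.5:
        return "good"
    if seconds <= 4.0:
        return "needs improvement"
    return "poor"

# Hypothetical real-user LCP samples, in seconds:
samples = [1.2, 1.8, 2.1, 2.4, 2.6, 3.9, 8.0, 1.5, 2.0, 2.2]
p75 = percentile(samples, 75)
print(p75, rate_lcp(p75))  # 2.6 needs improvement
```

Note how the single 8-second straggler barely matters: the 75th percentile deliberately ignores the worst tail, which is exactly the leniency Ryan describes later.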

So what’s next? Handing over to Ryan to talk about some of the ways that you can see these today.

RYAN HOOVER: All right. Let’s talk about some of the tools you can use to figure out how you’re doing on these three different metrics.

Google’s giving us two tools out of the box that we can use. They have the Chrome User Experience Report, the CrUX database. This is a massive data set. Everybody uses Chrome as a web browser, and whenever you visit any website, any web page, via Chrome, it takes the experience that you had and sends it back to this massive database. It has billions of records inside of it.

Chrome has also put this into BigQuery and made it publicly available for all of us to query. So you can actually go and run reports and see how your website is doing, and how it has done historically. This is incredibly useful, and this is also the big data set that Google uses to see how you are doing. This is their source for determining how good your website is.
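As a sketch of what querying CrUX looks like: the public dataset exposes per-month tables keyed by origin, with each metric stored as a histogram of bins. The table and column names below are assumptions based on my recollection of the public CrUX schema; verify them against the current documentation before running anything. This example only builds the SQL string:

```python
def crux_lcp_query(origin: str, yyyymm: str) -> str:
    """Build a BigQuery SQL string for an origin's LCP histogram in CrUX.

    Table and column names (chrome-ux-report.all.YYYYMM, the
    largest_contentful_paint histogram bins) are assumptions; check
    the CrUX docs before use.
    """
    return f"""
    SELECT bin.start, SUM(bin.density) AS density
    FROM `chrome-ux-report.all.{yyyymm}`,
      UNNEST(largest_contentful_paint.histogram.bin) AS bin
    WHERE origin = '{origin}'
    GROUP BY bin.start
    ORDER BY bin.start
    """

sql = crux_lcp_query("https://example.com", "202105")
```

Summing the density per bin gives you the share of real Chrome users landing in each LCP range for that month, which is the same shape of data Search Console summarizes for you.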

Unfortunately, it has its limitations. One, you can’t really dig down deeper than your entire domain. You can’t look at how your blog is doing specifically, or that new landing page that you just launched. You can’t look at those nuances. Also, they release data on a monthly basis, in the second week of the month. So just last week, we got the data for May. Unfortunately, that means that you’re always a little bit behind, and that means that you can’t really look at how you’re doing right now.

Google is also giving us, via Google Search Console, more information beyond what you get directly in Core Web Vitals for your own domains, the ones that you’ve registered in Search Console. They’ll give you breakdowns by mobile and desktop and score you as poor, needs improvement, or good across the different metrics. They’ll let you drill down a little bit and show you groups of URLs that might be having problems. This will tell you if your blog is having issues, but it won’t necessarily tie issues to a single blog article that might be slowing you down.

Both of these are great tools. They also really drive home that 75th percentile mark that Google is stressing: three out of four visitors to your site should get a good experience. If a few stragglers still get a bad experience, if they have a long load time, that’s OK to Google. And this helps to show you what that looks like and how you’re doing.

Unfortunately, neither of these gives you good real-time views. That’s where synthetic tests come into play. WebPageTest is something that web developers have been using for years to test how their sites are doing, how they load. It has actually recently been updated, with a big refresh, to include Core Web Vitals in the metrics it shows. It reports on that data for you, and it also includes all the other features we know and love from WebPageTest, like waterfall graphs, breakdowns of resource loads, types of content, all the great things that we love about WebPageTest.

Google also has released Lighthouse. Lighthouse is open source software that Google has put up on its GitHub that will run a test on your website and report back its Lighthouse score. Or, if you’re used to PageSpeed Insights, Google’s PageSpeed Insights tool also uses Lighthouse. So this kicks back that nice little score from 0 to 100. You see on the screen, we’ve got an example of one of our pages with a score of 85. Doing pretty good, not the best we could be, but we’re doing pretty good there. This is great in that it gives you a good, simple, holistic score and also gives you some more detail about what that particular bot saw when it ran a test on your site. You can see what its Largest Contentful Paint was, what its Cumulative Layout Shift was. You won’t actually see First Input Delay. That’s a bit of an oddity, in that it needs a more holistic view of multiple visits, but Lighthouse does give you things like total blocking time and speed index, which factor into that First Input Delay.

Unfortunately, both of these have an issue. These are both synthetic tests. This is a bot that goes to a single website, a single web page that you gave it, that you told it to crawl, loads that page once, and just reports what it saw on that one page for that one load. That leads to a lot of problems. Synthetic tests can actually give you a false sense of success. We saw this ourselves. We started a big effort to get our Core Web Vitals scores up last October. We sat down, we heard some rumors Google might be going this direction and really factoring it in, and so we said, our scores aren’t where we need them to be. Honestly, they were in the 20s. We were not looking too good in terms of our Lighthouse scores. So we set out to get those scores higher. Actually, pretty quickly, we got that score up to a 98. We were awesome. We felt, oh, we crushed this, we did great. We gave ourselves a pat on the back and went about our business.

We checked a month later; we went over to the CrUX database, took a look at what CrUX was saying, and we actually didn’t see any difference. We really hadn’t moved the needle on what CrUX was saying. So we started digging a little bit more, and you’ll see on these graphs that we’ve got, I know they’re messy, we’re not going to dig into them a lot, but there’s just not a lot of really high numbers there. People weren’t seeing good experiences on our websites. We had to go back and look, and what we actually discovered is that how the bot loaded and recorded the page was different from how users were experiencing it. We were using some tricks and techniques to lazy-load content in a way the bot would never see. So people were still getting content lazy-loaded, and that lazy-loading was still dragging us down to the point where we weren’t seeing the gains that we wanted.

So we were in a tough spot. We had this old data from Google that just wasn’t really good enough for us. It wasn’t up to date, so we couldn’t actively work on getting those scores higher. It wasn’t responsive enough. We also had these synthetic tests that just weren’t a good fit; we knew they didn’t match what was actually going on. We needed something better. We were looking around, and actually, some of our New Relic partners steered us to New Relic Browser, and that really turned into the answer for us. So Lindsy is going to explain a little bit about how New Relic Browser works.

LINDSY FARINA: Great. So yeah, New Relic Browser. We’re not just a Core Web Vitals reporter. We want to give you a look at all of your data for your users, right? Everything that’s happening from the real user perspective. When we talk about some of the things that you can get from those synthetic tests, these user-centric perceived performance metrics that you’ve heard about, there’s not just those three. There are many, many more. They’re delineated by real user versus lab. So some of the metrics are lab-only, like Total Blocking Time or Time To Interactive, and some of the metrics are RUM-only, which means they require a user to actually have engaged with the site.

So First Input Delay is predicated on the fact that you need to actually do something. A bot isn’t really going to go click some buttons, and therefore it can’t really report back that data to you. But from a real user perspective, we know when your users are clicking buttons. We know when your users are taking actions or making route changes or doing things, and so we can report all of that information to you. So on top of the Core Web Vitals, which obviously we put front and center on the summary page of your application, we also report things like your JavaScript errors, because no product would be complete without telling you if you have errors. You can do a lot of performance improvements, but if you don’t have good support around your errors, then you might be missing some key things that are also frustrating your customers.

And then we want to be able to give you a look in, too, because a good front end is nothing without a good back end behind it. So, a look at that flow of your data, your AJAX requests, all the way through your back-end services. If something is going wrong in, say, a worker downstream that is behaving in an anomalous manner, you can track that all the way through and ultimately decide who to blame for the bad performance your customers are seeing. So we give you that, and with all of this rich data, we also give you the ability to understand who your customers are. There are so many different attributes associated with your data that tell you a great deal about how your customers think and move through your site, and where they are spatially. And that is really the thing that you’re going to need in order to understand performance and the way that you prioritize your data.

And ultimately, the goal is that you really want to be armed with all of this information and move from that reactive state, where you’re getting a support call and having to go and try to debug something, to a more proactive state. You’re using these Core Web Vitals to find these problems before they’re happening and solve them, so that you know you’re delivering a good customer experience.

So it sounds scary, especially if you’ve never embarked upon a performance journey of your own at your business. Where do you start, right? And the thing that I always want to remind people of is kind of what I just said: know your customers. Know what kind of site you manage. If you’re a news site, you need to prioritize stability and content. You’re less involved in clicking buttons and taking action. But if you’re an e-commerce site, where you really need people to be able to move through a shopping experience seamlessly, click that button, add it to a cart, go to checkout, then you want to really focus on making sure that your page is both visually loaded quickly but also very responsive.

So think about things like: is my main thread being overutilized by activities that are going to block it from processing that action when the user clicks a button? Then take all of that information and figure out where you start. Find that handful of users that are having that bad experience. Focus on the URL that you know that most, if not all, of your users hit. Maybe it’s your home page that you want to focus on straight out of the gate. Maybe it’s your login experience. Maybe it’s your shopping cart experience. What do you know about your own site, as the subject matter experts, that would help you decide which place is the most important? And with New Relic Browser, we can help you get all of that data in order to define that experience.

So there’s tons of ways to pivot, slice, and dice your data to figure out what’s important to you. Obviously, we have out-of-the-box experiences but we also let you build dashboards on your own. If you– you are the masters of your data with New Relic. We’re going to let you have access to all of it, and we want you to be able to build the experiences that are important to you.

The other thing that I would note is that if you are not currently a New Relic customer, there is a free option available where you can just go and start playing around with it: upload your data, install our agent, and you can have every single feature available to you out of the box for free, which is amazing. But now that you get an idea of where you’re going, focus on that customer experience, because that’s what this is all about. You’re improving the experience for your customers. So what works for one company may not be the same performance-journey story for yours. It’s very important not to get caught up in the fact that there are all these different metrics and you need to be perfect. It’s not going to happen, at least not straight out of the gate, but you can set goals.

And yeah, let’s look at what an actual customer has done.

RYAN HOOVER: All right. Yeah, so going off of what Lindsy was just saying: sometimes you get too many metrics, and you need to find the metrics that work. So New Relic helped us to set up a dashboard that was great, full of data. Unfortunately, it honestly got to be a little too much. We looked at it and got overwhelmed with all the different things we were trying to track. So we went back, worked with that, and came up with what you’re seeing here, a very simple, straightforward dashboard. What we’ve done is narrowed down what our big performance goals are for our two primary flagship websites. We separated them into mobile and desktop traffic, in part because we just see very different experiences for them and in part because we know that Google is tracking those separately. And we’re looking at the three different web vitals for all four different experiences.

So we’ve got 12 total metrics that we’re tracking, and we’ve got this great dashboard that we’ve put together that really quickly, at a glance, shows us how we are doing. We’ve got those holistic scores in a big column that we can read very fast. How has our average been over the past week? What is our average score? Are we in the green good zone? Are we in that yellow zone, or are we actually into the red? We see ourselves bounce up and down a little bit there. You’ll see right now, honestly, we’ve been doing pretty good. We still have some Largest Contentful Paint issues to straighten out, especially on mobile, and we’re working through those right now, but this has been really helpful. This is actually the dashboard that we share all the time with both our own team and with our senior leadership. This is something that we constantly show off as, hey, this is our view of how our site performance is doing.

These graphs have even let us see some interesting little trends pop up. You’ll notice right there in the middle, one of our sites’ Largest Contentful Paint has a cyclical behavior where it gets slower, faster, slower, faster. It gets slower when US traffic goes to sleep and we start getting more international traffic. That traffic just has a harder time getting our content from West Virginia, so those visitors tend to see slower times. We’re still having to work through that as we figure out how to make things faster for that international traffic.

So that was one of these aspects of this dashboard that really helped us dive in, focus on only what we cared about, and really helped to improve those specific metrics that we’re going after.

LINDSY FARINA: So on top of all of that is being able to understand where you need to start your journey. You might have picked a metric. You might have decided that LCP is the thing that you want to optimize, but what do you do? And again, it’s one of those things where the journey is not always going to be the same. Google has put out a ton of amazing content for each of these metrics with tips and tricks for optimization, places to go and look, things to explore when it comes to LCP. Things like making sure that if you are going to load an ad, you’ve created space for it in advance so that it’s not just going to bounce in and take over the screen. Make sure all of your images and videos are optimized before you send them out into the wild. Pay attention to what’s running on your main thread: do you have a lot of long tasks? Use tools like Lighthouse to find out what your total blocking time is. That’s going to really help you understand, OK, these are the areas where I might be having some friction, and it will help you.
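One concrete example of the main-thread analysis mentioned above: Total Blocking Time is roughly the sum, over every long task, of however much of that task ran past the 50 ms mark. A toy sketch of that arithmetic over a list of task durations (the numbers are hypothetical; Lighthouse computes this from a real trace between First Contentful Paint and Time To Interactive):

```python
LONG_TASK_THRESHOLD_MS = 50

def total_blocking_time(task_durations_ms):
    """Sum the portion of each long task beyond 50 ms.

    This only models the arithmetic of TBT, not Lighthouse's actual
    trace processing.
    """
    return sum(d - LONG_TASK_THRESHOLD_MS
               for d in task_durations_ms
               if d > LONG_TASK_THRESHOLD_MS)

# Three main-thread tasks: 30 ms (not a long task), 120 ms, and 250 ms.
tbt = total_blocking_time([30, 120, 250])  # (120-50) + (250-50) = 270 ms
```

The takeaway: one 250 ms task hurts far more than five 50 ms tasks, so breaking big scripts into smaller chunks is usually the first fix.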

It’s not necessarily going to be a templated guideline on this, step one, step two, and you may end up having to kind of go back and forth with things that you try. But the good thing about having something like a mix of synthetic and real-world testing is that you can try these things out before you release them to production and see if you’ve made an impact. So test your staging environment, explore different opportunities. Don’t be scared if what you change didn’t have the effect that you thought it was going to have. This is a journey. It’s definitely a marathon and not a sprint. And if you do manage to do it in a sprint, I would love to hear about that because I have yet to experience that.

And there’s also, like I said, some awesome content Google has put out; there are some YouTube videos of presentations where they’ve taken a very specific customer use case and they run you all the way through, with code snippets, what they did to optimize and get them out on the other side. So there are a lot of great examples of how to get started. And speaking of real-world examples, let’s talk about what WP Engine has done on their own to explore making things better.

RYAN HOOVER: All right, thanks. So just digging in a little bit, I want to take a little bit of time and talk to you about some of the stuff that we’ve learned along the way. I’m not going to get into details about how we made our site fast; I’m happy to do that in other forums. But I think so many sites are so idiosyncratic that once you get past Google’s general recommendations, you really need to look at what you’re doing. So instead, I just want to talk about some things that we’ve done along the way that made this performance project go smoother, lessons we’ve learned the hard way.

First off, promote a common language. One thing we found is that, depending on how long you have been working with the web, you have a different definition for what a fast site means. When we talk with our engineers that run the hosting platform, they focus on Time To First Byte because that’s what they primarily have control over. They really can’t control what the layout shift is. That’s more the content on the page. When we talk to SEO people who’ve been around for a while, window load comes up a lot because that’s something that we’ve looked at for a long time. Even on our own team between developers, we’ve had different experiences and different understandings.

So something that we’ve learned is to really create those clear definitions, those clear terms of what it is that you’re tracking. How do you define what fast is, and how do you measure it? That’s where that New Relic dashboard has been so good for us. We use this thing constantly. Our team is pulling it up a couple of times a day to check for ourselves. Whenever we have a team meeting and talk about it, we pull up the dashboard. When we talk with leadership, we pull up that dashboard and we share what that looks like. That is our common definition of what fast looks like, and that has really helped us make sure that everybody’s on the same page. And honestly, it’s gotten us past that dreaded comment of, I tried to load the website from home and it was really slow, do we have a performance problem? Those little one-off comments that you might get sometimes are just so infuriating, because you can’t recreate them and you don’t know what happened.

LINDSY FARINA: We always have that common thing: if it doesn’t happen on my machine…

RYAN HOOVER: [LAUGHING] It works fine on mine.

LINDSY FARINA: You can’t hide from it when you have real user data.

RYAN HOOVER: Yeah. Oh, so speaking of “it works fine on my machine,” performance is a whack-a-mole game. Oh man, we have struggled with this one. A lot of times when you’re working through performance issues on your site, you’ll fix one, but fixing that one problem will cause another one. Lindsy mentioned a little bit about web fonts and how you can work with those as one of Google’s recommendations. We honestly have struggled back and forth with that. We initially optimized our web fonts and got our Largest Contentful Paint down by really delaying how our web fonts load. But doing that caused our CLS to shoot back up. So it’s one of these games where you fix one problem, and another one pops up.

So the graph that you see there, right underneath where the poor little moles are getting whacked in the head, that’s our actual homepage’s Lighthouse scores for the past year. You’ll see that through the summer and fall we actually weren’t doing too great there. We knew there was a problem. It was just one of those things that we were going to get to, going to get to, and we finally put in that concerted effort. And you see those big jumps up as we started making those big performance pushes. You can see that little green spike there where we hit that 98 and we thought we were done. Once we got past that, we got into the reality of how our site was really performing, and not just for the robots. We started in on this issue, and you’ll see those spikes up and down and up and down. And that’s us testing out features, deploying something that we think is going to make it better, and it just makes something else worse, back and forth and back and forth.

So this is just something you’re going to have to deal with. There are even going to be times, you’ll notice we had a little drop-off about three-quarters of the way through that graph, where, honestly, we didn’t deploy code for a whole week before that. What happened is that Google just changed how it calculates its scores, which is great, and they did a much better job. The algorithm got better, but it made our score drop a good 10-15 points, and we’ve been working back up to that score that we had for a little bit of time there.

Last thing I’ll say is, be open to change. This goes along with that idea of whack-a-mole, where you fix one problem and you cause another. One thing we’re finding as we work through performance is that what used to work last month isn’t going to work again. From how you handle web fonts (maybe you need to delay them, maybe you don’t) to how you handle images. We ourselves have struggled with this. We have a lot of illustrations, and we initially moved to have a lot of those be inline SVGs.

We got to a point where that made things better, but it eventually hit a point where it didn’t, and we had to go back to loading them from outside as static files. Another of our sites has a lot of videos on it. We’ve played around with having HTML5 videos on there, and that got our load times down on some pages, which got our speed to a much better score. On some pages, actually, embedding things from YouTube or Wistia was faster.

And it’s a thing where we have to keep trying, and there really wasn’t just one thing that would fit and would work, even on the same site. This goes across everything: do you delay JavaScript, or do you load it as fast as you can? Do you put your CSS in external files? Do you put it inline? There are a whole lot of trade-offs, and you’ve just got to work at it to find what that balance might be. And then Google might come along and figure out that, you know, actually, there is a better way to measure this. They’re doing that right now with Cumulative Layout Shift. They’re figuring out there’s a better way to measure layout shift, and so they’re reworking those definitions. And we might have to change our strategy a little bit to respond to that. So that’s just something that’s going to have to keep going on.
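For reference, the reworked CLS definition mentioned above groups shifts into “session windows”: shifts less than 1 second apart join the same window, a window spans at most 5 seconds, and the page’s CLS becomes the largest window’s sum rather than the all-time total. A rough sketch of that aggregation, assuming that published definition (timestamps in seconds, list assumed sorted by time):

```python
MAX_GAP_S = 1.0     # a shift more than 1 s after the previous one starts a new window
MAX_WINDOW_S = 5.0  # a session window spans at most 5 s

def windowed_cls(shifts):
    """shifts: list of (timestamp_s, score) pairs, sorted by time.

    Returns the maximum session-window sum, per the updated CLS
    definition. A sketch only, not browser-accurate.
    """
    best = 0.0
    window_sum = 0.0
    window_start = None
    prev_time = None
    for t, score in shifts:
        new_window = (
            prev_time is None
            or t - prev_time > MAX_GAP_S
            or t - window_start > MAX_WINDOW_S
        )
        if new_window:
            window_sum = 0.0
            window_start = t
        window_sum += score
        best = max(best, window_sum)
        prev_time = t
    return best

# An early burst of shifts totaling 0.12, then one isolated 0.05 shift much later.
cls = windowed_cls([(0.2, 0.05), (0.6, 0.07), (9.0, 0.05)])  # ~0.12
```

The practical effect is that long-lived pages (think infinite scroll) stop accumulating CLS forever, which is exactly the kind of definition change that can move your score without a single deploy.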

LINDSY FARINA: Yep, and the one thing I’ll note is that these are recommendations that are going to keep coming out annually. So the Core Web Vitals are going to be something that Google is continuously researching. If you’ve been with the user-centric performance journey from its inception, you’ll know that it has gone through a lot of change. We’ve had metrics that have come and gone. We’ve had brand new metrics that have jumped in and been weighted highly in the Lighthouse scores but then suddenly been demoted. And there are a lot of folks who’ve latched on to a particular metric and decided, ‘This is the one,’ and the reality is we are no longer in a space where one metric rules them all. It is going to be a consistent thing that we have to look at multiple signals in order to define user experience.

And as we get more data and as we continue to research, it’s a science. This is data science, and as more data comes in and Google does this evaluation, they’re going to be changing the way they weight things and the algorithms used to define them, and they may also completely deprecate a metric. They may decide that one isn’t the thing and we need a different one. So Largest Contentful Paint and CLS are still somewhat new.
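As a concrete reference point, Google does publish fixed thresholds for each Core Web Vital (good / needs improvement / poor). A minimal sketch of classifying a measured value against those thresholds follows; the function name and structure are our own illustration, not a Google or New Relic API:

```typescript
// Google's published Core Web Vitals thresholds (as of 2021).
// LCP and FID are in milliseconds; CLS is a unitless score.
type VitalName = "LCP" | "FID" | "CLS";
type Rating = "good" | "needs improvement" | "poor";

const THRESHOLDS: Record<VitalName, [number, number]> = {
  LCP: [2500, 4000], // ms: good <= 2500, poor > 4000
  FID: [100, 300],   // ms: good <= 100, poor > 300
  CLS: [0.1, 0.25],  // unitless: good <= 0.1, poor > 0.25
};

// Map a measured value to the rating bucket Google would assign it.
function rateVital(name: VitalName, value: number): Rating {
  const [good, needsImprovement] = THRESHOLDS[name];
  if (value <= good) return "good";
  if (value <= needsImprovement) return "needs improvement";
  return "poor";
}
```

Because the thresholds themselves are a moving target, keeping them in one table like this makes it easy to update your dashboards when Google revises a definition.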

We had First Paint. We had First Contentful Paint. We had Time To Interactive as a lab metric. All of these things kind of swirl around, so it got very confusing. So be on the lookout for the 2021 recommendations; they may change a little bit. This is a marathon, not a sprint. Don’t panic. Be open to change. All of these things are part of your performance journey.

So where do we end? This is something that we at New Relic feel very passionate about. I love this space. I find it very exciting, and somewhat selfishly too: I want my website experiences to be good, and I want my customers who manage websites to feel like they have the data they need to make that happen, so that when I selfishly go use their websites it’s a good experience. And so that takes you to: really know your customers. Invest that upfront time in looking at your data, getting your baselines, and setting goals. It doesn’t have to be from poor to good in a week. It can be a journey over time, and knowing your users’ performance, and how that looks and feels to them, is really going to help you figure out where to prioritize that journey.
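On baselines: Google assesses each Core Web Vital at the 75th percentile of real-user page loads, so a natural starting baseline is the p75 of your own field data. A minimal percentile sketch, with illustrative names of our own choosing:

```typescript
// Compute the p-th percentile of a set of real-user (RUM) samples,
// e.g. LCP values in milliseconds collected from the field.
function percentile(samples: number[], p: number): number {
  // Sort a copy so the caller's array is left untouched.
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: the smallest value with at least p% of
  // samples at or below it.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Usage sketch: const baseline = percentile(lcpSamples, 75);
```

Tracking that one p75 number per metric over time gives you the "baseline" to set goals against, rather than chasing individual slow page loads.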

RYAN HOOVER: I got it. And on that: as you progress, it is going to be a journey. You may be like us and want to take on a big project to get the scores up to a good, healthy spot, but once you get there you can’t just sit around and say, ‘Great, we’re done, let’s go back to the other things we’ve been working on.’ You need performance to factor into how you work and how you think about your website. It needs to be a part of your lifestyle. The metrics in that dashboard you create for your big project need to be something you keep front and center, something you keep returning to time and again to make sure your scores stay up. Because frankly, if you let your setup sit, your scores will slowly go down over time. I’ve seen it time and again with websites: little things creep in over time, little nuggets, little script changes or images, whatever, and start dragging your score back down.

This is going to be something that you have to keep working on. Bake it into your daily workflows. Bake it into your quality assurance test that you might run on your website. Whatever that might mean for you, make sure that performance is something that you stick with and that you make a part of just how you think about working with your website.

Finally, as with any good marathon, you can’t take on the whole thing at once. You’ve got to set clear, simple, straightforward goals that you can achieve. Don’t overwhelm your team by thinking you’ve got to get everything up to 90% great scores immediately. There’s too much complexity, too much data to consume, too much to process. Instead, take on the project and get a general sense of what you want your angle to be, but then set short-term goals you can achieve and work toward them. Give yourself some breaks in between. Our team loves performance; for many of us it’s what we most enjoy about web development. It’s one of the most enjoyable projects we have, but even after a month or six weeks of plugging hard at it, we need to take a break and work on something else. We’ll come back with fresh eyes. So set yourself some good, clear, short-term goals and work toward those, and just make sure that in the long term you’re getting those scores, that user experience, that page experience, to be what you want it to be for your customers.

LINDSY FARINA: And what we can all hope is that once you get there, you can stay there. It makes me happy, as the product manager on New Relic Browser and Lookout, which is the other screenshot here, to see these black numbers and those gray bubbles. That’s what you want to see. You want to see that your data is stable, that you haven’t toggled into a red or yellow scenario, that your data over time hasn’t suddenly started deviating, that thing where you see a creep and things aren’t behaving as well as they did before. We have all of those tools, and you can continuously watch them. So we want you to be able to have that experience: find problems as quickly as possible, and notice when you have a deviation and something starts to blip out of that good range, or out of ‘needs improvement’ back into ‘poor.’ Just be mindful, keep checking, and we wish you good luck.
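The drift Lindsy describes, where a stable metric blips out of its rating band, can be watched for with a very simple check. This is an illustrative sketch only (it is not Lookout’s actual anomaly-detection algorithm), assuming a daily series of p75 LCP values in milliseconds and Google’s published 2500 ms "good" ceiling for LCP:

```typescript
// Return the day indexes where a daily p75 LCP series (ms) crosses
// from the "good" range (<= goodMax) to above it, i.e. where a
// regression alert would fire.
function regressions(dailyP75Lcp: number[], goodMax = 2500): number[] {
  const alerts: number[] = [];
  for (let i = 1; i < dailyP75Lcp.length; i++) {
    const wasGood = dailyP75Lcp[i - 1] <= goodMax;
    const nowBad = dailyP75Lcp[i] > goodMax;
    if (wasGood && nowBad) {
      alerts.push(i); // the day the series left the "good" band
    }
  }
  return alerts;
}
```

A monitoring product would layer baselining and noise suppression on top of this, but the core idea is the same: alert on the band crossing, not on every slow sample.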

I hope that everyone enjoyed what we presented to you today. It’s definitely my pleasure to be able to co-present with you, Ryan. I really enjoyed hearing your story, and I’m very happy that New Relic Browser was able to work for you.

RYAN HOOVER: Thanks, Lindsy. This has been great. It’s great to always help people try to get their sites faster and try to make the web a better experience for everybody. Have a great rest of the Summit, and thank you so much for listening.

LINDSY FARINA: Thank you all.
