Introduction
Web Vitals is a project by Google whose main purpose is to provide a set of metrics for quantifying a website's performance. The Google Chrome Developers channel previously published an interesting video in Taiwanese introducing Core Web Vitals with examples. I highly recommend checking it out ↓
Nowadays, many websites measure Web Vitals as part of their workflow to identify areas for improvement. Several tools are available to help developers with the measuring, such as:
- Lighthouse
- PageSpeed Insights
- CrUX
- Search Console
Sentry's Web Vitals Statistics Feature
What fewer people know, however, is that Sentry has also introduced a Web Vitals statistics feature. It lets developers track Web Vitals metrics such as LCP, FP, and CLS and view the related charts in its dashboard; it even marks the average values, which is quite considerate.
That said, Sentry only seems to retain the data for 30 days, and there is no way to run comparisons, such as comparing last week's metrics with this week's. The charts can also only be viewed inside Sentry, which adds switching costs for other teams. As developers, it is natural to want to automate the whole process.
After some investigation: although Sentry provides API access (with a token), there doesn't seem to be any API documentation for Web Vitals. Out of curiosity, though, I inspected the network console and saw that the charts are clearly populated through an API. By replicating the endpoint and parameters I successfully reproduced the call with curl, so it appears the API simply isn't documented.
There are two main APIs (note: since there is no documentation, the parameters or paths may change):
- `/events-measurements-histogram/?...`
- `/eventsv2`
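As a rough, non-authoritative sketch, the request can be replayed in Node.js (18+, for the built-in `fetch`) along these lines. The organization slug, project id, query parameters, and token are all placeholders pieced together from the network tab, and since the endpoint is undocumented, any of it may change:

```js
// Sketch only: replay the request seen in the browser's network tab.
// YOUR_ORG, YOUR_PROJECT_ID, and YOUR_SENTRY_TOKEN are placeholders.
const url =
  'https://sentry.io/api/0/organizations/YOUR_ORG/events-measurements-histogram/' +
  '?project=YOUR_PROJECT_ID&statsPeriod=7d&field=measurements.lcp';

fetch(url, {
  headers: { Authorization: 'Bearer YOUR_SENTRY_TOKEN' },
})
  .then((res) => res.json())
  .then((data) => console.log(data));
```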
Automated Workflow Design
With the data in hand, we can pipe it into Slack to track it regularly and share it with other team members easily; for more detail, one can still go back to the Sentry dashboard. The entire workflow is as follows:
By calling the API from a cron job (or a serverless function), we retrieve the data from Sentry, upload it, and generate reports.
One thing to note is that the query parameters contain `+` signs. If we use `encodeURIComponent` or `URLSearchParams` directly, the `+` sign gets encoded as `%2B`, which makes Sentry return an error. So we need to either leave the `+` out or find a way to keep it from being encoded.

For example, an API query string may look like this: `field=percentile(measurements.fp%2C+0.75)&field=percentile(measurements.fcp%2C+0.75)&...`. The data to return is defined in a function-call-like manner; for instance, `percentile(measurements.fp,+0.75)` returns the 75th percentile of the FP data (in milliseconds). Note that the `,` is encoded as `%2C`, but the `+` sign is left as-is. If building the query parameters produces errors, this is the part to check.
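Here is a minimal sketch of building such a query string while keeping the `+` intact. The idea (one possible approach, not necessarily what the original setup does) is that the `+` is really a URL-encoded space, so we encode the human-readable value and turn the encoded space back into `+`; the metric list is just an example:

```js
// Encode "percentile(measurements.fp, 0.75)" but keep the space as "+".
// encodeURIComponent alone would produce %20 here (or %2B if the raw
// string already contained a literal "+"), which Sentry rejects.
const percentileField = (metric, p = 0.75) =>
  'field=' + encodeURIComponent(`percentile(${metric}, ${p})`).replace(/%20/g, '+');

const query = ['measurements.fp', 'measurements.fcp', 'measurements.lcp']
  .map((metric) => percentileField(metric))
  .join('&');

console.log(query);
// field=percentile(measurements.fp%2C+0.75)&field=percentile(measurements.fcp%2C+0.75)&...
```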
The returned data looks like this:

```json
{
  "path": "/my-page",
  "data": [
    {
      "count": 10,
      "bin": 2000
    },
    ...
  ]
}
```
Next, we generate a `.json` file and upload it to S3 or any other storage. This lets us compare it with yesterday's (or earlier) data and generate reports to send to Slack:
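As an illustration of the comparison-and-report step, a sketch like the following could work. The file names, snapshot shape, and webhook URL are all hypothetical placeholders, and it assumes both snapshots have already been downloaded from storage:

```js
const fs = require('fs');

// Hypothetical snapshot shape: { "lcp": 2400, "fp": 900, ... } (p75 values in ms).
const today = JSON.parse(fs.readFileSync('vitals-today.json', 'utf8'));
const yesterday = JSON.parse(fs.readFileSync('vitals-yesterday.json', 'utf8'));

// Build one summary line per metric, with the delta against yesterday.
const lines = Object.entries(today).map(([metric, value]) => {
  const delta = value - (yesterday[metric] ?? value);
  const arrow = delta > 0 ? '▲' : '▼';
  return `${metric.toUpperCase()}: ${value}ms (${arrow} ${Math.abs(delta)}ms vs. yesterday)`;
});

// Post the summary to Slack through an incoming webhook (placeholder URL).
fetch('https://hooks.slack.com/services/XXX/YYY/ZZZ', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: lines.join('\n') }),
});
```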
If you want to generate charts, you can use chart.js together with chartjs-node-canvas in Node.js, which renders charts without relying on a browser canvas. The only caveat is that if the required fonts are not installed on the server, the layout may break or the fonts may be substituted.
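For example, a minimal sketch with chartjs-node-canvas might look like this. The histogram data mirrors the `{ count, bin }` shape returned by the measurements-histogram API above, but the values, size, and labels are made up:

```js
const fs = require('fs');
const { ChartJSNodeCanvas } = require('chartjs-node-canvas');

const chartCanvas = new ChartJSNodeCanvas({ width: 800, height: 400 });

// Sample bins in the { count, bin } shape returned by Sentry.
const bins = [
  { bin: 1000, count: 4 },
  { bin: 2000, count: 10 },
  { bin: 3000, count: 3 },
];

(async () => {
  const png = await chartCanvas.renderToBuffer({
    type: 'bar',
    data: {
      labels: bins.map((b) => `${b.bin}ms`),
      datasets: [{ label: 'LCP distribution', data: bins.map((b) => b.count) }],
    },
  });
  // The PNG can then be uploaded to Slack alongside the report.
  fs.writeFileSync('lcp-histogram.png', png);
})();
```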
As for the cron job itself, there are many mature solutions available today; even hardcoding an entry in a server's crontab would work. In our case, we used drone CI's cron feature, mainly because drone's configuration file is convenient to write and we already run a dedicated drone CI server internally. Since external services are not allowed here anyway, integrating with drone CI was the simplest way to set everything up.
However, for some reason, every time the cron event fired, the other pipelines were triggered as well, so we eventually modified the `.yaml` file and created an additional branch dedicated to calling the API:
```yaml
steps:
  - name: upload and report
    image: byrnedo/alpine-curl
    commands:
      - curl -X POST YOUR_SERVER_ENDPOINT
```
With this, the automation is complete. Reports now land in Slack every day, making it easier for developers to observe changes and get feedback.