Chrome Cookie Policy Adjustments and Reflections

Written by Kalan

If you have any questions or feedback, please fill out this form.

This post is translated by ChatGPT and originally written in Mandarin, so there may be some inaccuracies or mistakes.

As users increasingly prioritize privacy, services have gradually begun to adjust their privacy policies and security measures. For instance, since macOS Catalina, it has become quite annoying to be constantly asked whether you agree to let some app access certain resources.

While security has indeed become more stringent, this has sometimes led to unfortunate side effects. For example, users reported that Wacom drawing tablets stopped working properly after the update.

In 2019, Google announced security policy adjustments at the Google I/O conference, the most significant of which for current web development is the gradual removal of support for third-party cookies in Chrome.

Starting from Chrome 80, the SameSite attribute of cookies defaults to Lax (before Chrome 80, the default behaved like None). In this article, we will explore what SameSite is and what it is for, and then reflect on cookies and my thoughts regarding these changes.

What are Cookies?

Cookies are a small client-side storage mechanism (about 4 KB per cookie) controlled by response headers returned by the server. Traditionally, the cookie mechanism relied heavily on the browser: based on the headers it receives, the browser decides whether to store a cookie, when it expires, and whether to attach it to a request. As long as a cookie has not expired and certain conditions are met, it is automatically included in every request sent out.
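As a hedged illustration (the header names are real; the domain, cookie name, and values are made up), the exchange looks like this: the server sets the cookie once, and the browser then attaches it to every subsequent matching request on its own:

```http
HTTP/1.1 200 OK
Set-Cookie: sessionId=abc123; Path=/; Expires=Wed, 01 Jan 2025 00:00:00 GMT; HttpOnly

GET /profile HTTP/1.1
Host: example.com
Cookie: sessionId=abc123
```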

For stateless HTTP requests, the cookie mechanism allows us to retain some user data, enabling the server to assess the current user state or for tracking purposes.

I would say that the most convenient yet potentially dangerous aspect of cookies is this very mechanism.

As long as the cookie has not expired and meets certain conditions, the cookie will automatically be included in every request sent out.

Why do I say this? In typical scenarios, elements like <form>, <iframe>, <a>, <link>, and <img> will send cookies by default, allowing us to perform actions such as the following:

  1. Tracking through iframe

    • You log in to your Google account, and google.com returns a Set-Cookie header that is stored in your browser.
    • While browsing website B, it embeds a Google iframe for tracking.
    • The iframe sends the cookie to Google.
  2. Tracking through <img>

    • You log in to your Google account, and google.com returns a Set-Cookie header that is stored in your browser.
    • While browsing website B, it sends an image request via <img src="xxxx.google.com/track/pageview" /> during page load.
    • The cookie is sent to Google for statistics, revealing which site you are currently visiting.
  3. Malicious actions through <a>

    • You log in to website B.
    • This site has a poor implementation: it deletes a user via a GET request, e.g. GET /user/delete.
    • A hacker sends you an email with a link: <a href="xxx.com/user/delete">click me</a>.
    • You click it, and your account gets deleted.
  4. Malicious actions through <form>

    • You log in to website B.
    • You fill out a form on a malicious site A, which actually submits to website B, for example, payment information.
    • Your payment data is submitted to website B, causing significant losses.

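Attack 4 above can be sketched as a single hidden page. The domain and field names here are made up for illustration; before the SameSite defaults changed, the browser would attach website B's cookies to this cross-site submission:

```html
<!-- Hypothetical attack page hosted on site A: it auto-submits a form
     to site B, and the browser attaches B's session cookie. -->
<form action="https://bank.example/transfer" method="POST">
  <input type="hidden" name="to" value="attacker" />
  <input type="hidden" name="amount" value="10000" />
</form>
<script>document.forms[0].submit();</script>
```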
Points 3 and 4 are the well-known CSRF (Cross-Site Request Forgery) attacks. The common defenses are: (1) avoid using GET for operations that have side effects, and (2) validate a CSRF token to ensure that requests genuinely originate from trusted sources.

To mitigate CSRF attacks, the most common approach is to include a CSRF token in HTML, and the server checks this token before executing any operations to ensure the request comes from its own service.

CSRF tokens do address the security issues posed by cookies. But experienced engineers know that implementing a stateful CSRF token mechanism can be cumbersome and error-prone (especially in high-traffic environments), so discussions about CSRF tokens are often met with reluctant expressions.

In fact, as early as 13 years ago, some suggested preventing elements like <img> and <link> from sending cookies (article), but the response was:

The attack described here is well-known and called "Cross-site request forgery". Most believe that it is the web application's responsibility to fix it, not the web browser's.

To tackle the issues caused by CSRF, Chrome 51+ introduced the SameSite attribute. Its principle is to let the implementer decide when cookies are sent, offering three values:

  • strict: Cookies are never sent on cross-site requests. Although the safest option, it is not always what you want: for instance, if you follow a link from website B to YouTube, the cookie is not sent and you appear logged out.
  • lax: Cookies are sent only on top-level navigations that use safe methods such as GET (e.g. clicking a link), not on cross-site subrequests like images, iframes, or POSTed forms.
  • none: Cookies are sent on all requests, including cross-site ones (from Chrome 80, SameSite=None must also be marked Secure).
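Concretely, the attribute rides along on the Set-Cookie header. These example headers are illustrative (cookie names and values are made up):

```http
Set-Cookie: sessionId=abc123; SameSite=Strict; Secure; HttpOnly
Set-Cookie: prefs=dark; SameSite=Lax
Set-Cookie: tracker=xyz; SameSite=None; Secure
```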

The much-discussed SameSite cookie policy refers to Chrome 80+ changing the default SameSite value from None to Lax to enhance user security.

Now that we have the background, let's discuss my reflections.

Reflection 1: Does adding SameSite=lax mean I no longer need to implement CSRF tokens?

Since the SameSite standard is relatively new (though not brand new), if users are on older browsers that do not support SameSite, could they still be vulnerable to CSRF attacks?

This policy change feels somewhat redundant to me; aside from preventing third-party tracking, it still necessitates implementing CSRF tokens for your own services to effectively guard against attacks.

In fact, cookie implementations vary across browsers, and cookies have had plenty of security issues of their own over the years.

In summary, I want to point out that relying on browser mechanisms for cookies isn't always convenient, which leads me to my second reflection.

Reflection 2: What if we try to avoid cookies altogether?

A quick online search led me to this article (Cookies Are Bad for You), and the practices mentioned there seem worth considering.

The key is to choose a mechanism that is controlled by the web application, not the browser.

Since using cookies means relying on browser mechanisms, why not eliminate cookies altogether and shift the entire implementation to JavaScript? What does this mean?

  • In JavaScript, you can use fetch with credentials: 'include' (together with the appropriate CORS headers on the server) to decide whether cookies are sent.
  • JavaScript can effectively prevent CSRF attacks (detailed below).
  • No need to depend on various browser implementations.
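A hedged sketch of the first point: a small helper that builds the options object passed to fetch(), making the cookie decision explicit instead of leaving it to browser defaults. The endpoint in the usage line is a placeholder.

```javascript
function buildRequestOptions({ sendCookies = false } = {}) {
  return {
    method: 'GET',
    // 'include' sends cookies even cross-origin (the server must also
    // respond with Access-Control-Allow-Credentials: true);
    // 'omit' never sends cookies.
    credentials: sendCookies ? 'include' : 'omit',
  };
}

// Usage: fetch('/api/profile', buildRequestOptions({ sendCookies: true }));
```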

To avoid CSRF attacks, we can require any request needing user authentication to include a Request Header, such as an Authorization header, thus preventing CSRF without implementing any CSRF token mechanism.

When submitting forms, we also avoid using the browser's mechanism directly and instead use JavaScript APIs.

```javascript
const form = new FormData();
form.append('keyA', 'valueA');

fetch('/my-api', {
  method: 'POST',
  body: form,
  headers: { Authorization: 'xxx' },
});
```

However, using JavaScript comes with its own challenges, including the need to manage expiration and request handling, as well as considering the following points:

  • Where to store data?
  • What if there’s an XSS attack?
  • What if the user disables JavaScript?

Where to store data?

Data (like access tokens) can be stored in memory, localStorage, sessionStorage, or even IndexedDB. I know this might sound a bit odd at first: what about XSS attacks? Let's delve deeper.

First, avoid storing too much sensitive information on the client side, and try to keep access token lifetimes short to minimize impact in case of leakage, while using a refresh token mechanism to maintain user experience.
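The short-lifetime idea above can be sketched as a small in-memory token store. This is a minimal sketch under assumptions: `refresh` is a hypothetical function that talks to your auth server and resolves to `{ token, expiresInMs }`; nothing here is a real library API.

```javascript
function createTokenStore(refresh) {
  // The access token lives only in memory, never in persistent storage.
  let accessToken = null;
  let expiresAt = 0;

  return {
    async getAccessToken(now = Date.now()) {
      // Re-fetch the token via the refresh mechanism once it expires,
      // so a leaked token is only useful for a short window.
      if (!accessToken || now >= expiresAt) {
        const { token, expiresInMs } = await refresh();
        accessToken = token;
        expiresAt = now + expiresInMs;
      }
      return accessToken;
    },
  };
}
```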

What about XSS attacks?

I would argue that any mechanism comes with inherent risks; cookies have had their share of security vulnerabilities as well. With the escaping that modern frontend frameworks apply by default, we can avoid most XSS vulnerabilities.

What if users disable JavaScript?

Disabling JavaScript is indeed a trade-off. Look at Facebook, YouTube, and Netflix: don't these services already require JavaScript to be enabled?

Regarding screen readers, I believe that more screen readers will incorporate basic JavaScript for better experiences in the future. While recent trends have led to JavaScript packages ballooning to hundreds of KB, in terms of screen readers or accessibility, JavaScript offers more nuanced interactions.

So, should we also use JavaScript to handle API calls for images, forms, and the like? In the era of SPAs, an increasing number of services call their APIs via XHR or fetch. Although this requires JavaScript and adds some complexity, it results in more reliable security.

Reflection 3: OAuth

This was mentioned in the article.

With the OAuth protocol, we can exchange tokens with an authorization server, store them on the client side, and then use an HMAC algorithm to ensure the request method and URL have not been tampered with.

Conclusion

Personally, I believe that security and convenience are two sides of the same coin. Moving all implementation, validation, and expiration mechanisms to the server side can be cumbersome. I used to think cookies were safe and convenient, but seeing this policy shift alongside numerous case studies has prompted me to rethink my stance. There may be perspectives I overlooked in this article, and I welcome any feedback.

If you found this article helpful, please consider buying me a coffee ☕ It'll make my ordinary day shine ✨
