Some ideas on writing tests (front-end)

Written by Kalan
💡 If you have any questions or feedback, please fill out this form

This post is translated by ChatGPT and originally written in Mandarin, so there may be some inaccuracies or mistakes.

In the past month of project development, I have completely changed my perspective on writing tests.

From a front-end standpoint, I used to enjoy writing tests: not just unit tests, but also tests for component behavior, such as simulating clicks and other user interactions, especially when developing with React. If Redux was used to manage side effects, I would also write tests for the Redux logic, because in the world of Redux we can keep things pure, which makes tests relatively straightforward to write.

For example, let's consider a feature for fetching articles. Suppose we break it down into three actions, FETCH_ARTICLE, FETCH_ARTICLE_SUCCESS, and FETCH_ARTICLE_FAILED, with the first one triggered when the user clicks a button. The code might look like this:

const Article = ({
  fetchArticle,
  status,
  article,
}) => {
  if (status === 'idle') {
    return <button onClick={fetchArticle}>click me</button>;
  }
  if (status === 'loading') {
    return <Loading />;
  }
  if (status === 'error') {
    return <Error />;
  }
  return <article>{article}</article>;
};

const mapStateToProps = state => ({
  article: state.article.article,
  status: state.article.status,
});

const mapDispatchToProps = {
  fetchArticle,
};

export default connect(mapStateToProps, mapDispatchToProps)(Article);

Now, let's look at the testing part, which we can break down into several cases:

describe('<Article />', () => {
  it('should render the button when status is idle', () => {
    expect(...); // button exists
  });

  it('should call fetchArticle when the button is clicked', () => {
    find('button').simulate('click');
    expect(...); // fetchArticle should be triggered
  });

  it('should render the article when status is loaded', () => {
    expect(...); // button doesn't exist and the article gets rendered
  });
});

The Redux part is even simpler:

describe('actions', () => {
  it('should return the correct type', () => {
    expect(fetchArticle()).toEqual({
      type: 'FETCH_ARTICLE',
    });
  });

  it('should return the correct type and payload on success', () => {
    expect(fetchArticleSuccess(content)).toEqual({
      type: 'FETCH_ARTICLE_SUCCESS',
      payload: content,
    });
  });

  it('should return the correct type and payload on failure', () => {
    expect(fetchArticleFailed(err)).toEqual({
      type: 'FETCH_ARTICLE_FAILED',
      payload: err,
    });
  });
});

describe('reducers', () => {
  it('should move to loading on FETCH_ARTICLE', () => {
    expect(reducer(initialState, fetchArticle())).toEqual({
      status: 'loading',
      article: null,
    });
  });

  it('should store the article on FETCH_ARTICLE_SUCCESS', () => {
    expect(reducer(initialState, fetchArticleSuccess(content))).toEqual({
      status: 'loaded',
      article: content,
    });
  });

  it('should store the error on FETCH_ARTICLE_FAILED', () => {
    expect(reducer(initialState, fetchArticleFailed(err))).toEqual({
      status: 'error',
      error: err,
    });
  });
});

I would say these tests look quite good and are, in fact, fairly ideal. However, I’ve recently realized that even if I diligently write comprehensive tests, there will always be overlooked cases, and they occur quite frequently, prompting me to rethink the purpose of testing.

Even from this simple scenario, a few questions come to mind:

  • What if the API returns an unexpected value for the article? What happens if we try to access a field on it at that moment?
  • During the loading state, could unexpected errors arise from network instability, timeouts, bad parameters, server errors, or issues while the browser mounts the component? Is it appropriate to lump all of these under the same error handling?
  • Is it a problem if users click the button multiple times? Should we use disabled or some other method to prevent repeated clicks? (See the sketch after this list.)
  • Does the article need line breaks? If so, how should we break it? Should users scroll horizontally or vertically?
  • Is the copy for the button and the other parts of the UI correct?
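For the repeated-click question, one common answer is simply to disable the button while the request is in flight. A minimal sketch, reusing the status prop from the earlier example (ArticleButton itself is a hypothetical extraction, not code from the original component):

// ArticleButton is a hypothetical extraction of the idle-state button;
// disabling it while status is 'loading' prevents duplicate requests.
const ArticleButton = ({ status, fetchArticle }) => (
  <button onClick={fetchArticle} disabled={status === 'loading'}>
    {status === 'loading' ? 'loading...' : 'click me'}
  </button>
);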

None of these aspects are covered by the tests above. You might say we should just add them incrementally, but as we add tests we are also modifying the components. I used to find this comforting; in practice, after every change some unexpected scenario would emerge, leading to an endless cycle of QA issues, deployment delays, and, ultimately, chaos for everyone involved.

There are countless front-end scenarios that involve interacting with the UI, and that behavior is hard to cover with typical tests.

Eventually I realized that writing test cases won't magically cover scenarios (states) nobody thought of. And on the front end, if you don't actually simulate user behavior, you can still run into all kinds of issues across different browsers, devices, and real-world conditions (like unstable networks).

In my recent projects, what has changed my perspective the most is my approach to writing tests. Don’t get me wrong, I still strive to write tests whenever possible, but I want to focus on more meaningful tests rather than writing seemingly irrelevant test cases for self-entertainment.

So while writing tests is now a necessity for me, it doesn’t mean we should dive straight into writing tests from the get-go.

The most important first step is to thoroughly clarify various requirements. This is essential, and there’s no other way around it.

Too many QA issues arise simply because we didn't fully understand the requirements from the beginning, or because the two sides understood them differently.

For example, in the article retrieval case, we should clarify when to call the API, what possible responses the API might return, whether any additional handling is needed for the various API scenarios, whether the UI content needs special handling (like the aforementioned line breaks or other UI considerations), and whether to prevent duplicate clicks. It's not too late to start writing tests once all of these have been confirmed.

The reason is simple: the UI is the medium through which users interact with the application, yet we run far too few E2E tests against it (although this is gradually being recognized and corrected, which is encouraging). As a result, no matter how many tests we write, things can still fail at the UI level. Running complete E2E tests is challenging and sometimes requires backend cooperation (such as preparing a database and an environment for simulation) so that the test environment matches reality as closely as possible.

Another point is, if you’re going to write tests, don’t spend too much time on Unit Tests. Of course, you should still cover necessary tests, but there’s no need to obsess over obvious cases like expect(1+1).toBe(2).

In this area, I highly recommend Cypress and Puppeteer as integration testing tools (Puppeteer can also be used for other purposes). They are comprehensive and have everything you need. With tools like these, "it's too much hassle" is no longer an excuse to skip them.
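As a rough illustration, an integration-style test for the article flow in Cypress might look something like the sketch below; the route, button text, and markup are assumptions for this example, not something taken from the original project:

describe('article page', () => {
  it('loads the article after the button is clicked', () => {
    // '/article' and the 'click me' label are placeholders for this sketch
    cy.visit('/article');
    cy.contains('button', 'click me').click();
    cy.get('article').should('be.visible');
  });
});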

Be Cautious with API Calls in useEffect

While useEffect is designed for executing side effects, it can easily turn into a nightmare if not handled properly. If the API response will change the component's internal state, remember to cancel the request on unmount; otherwise you risk memory leaks.

Why? If the component unmounts before the request completes, the callback will still run when the response arrives, and React will warn you because the callback is trying to update the state of an unmounted component.
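A minimal sketch of one way to guard against this, using AbortController to cancel the request in the effect's cleanup (the endpoint and the response's content field are assumptions; Loading is the same placeholder component as in the earlier example):

import { useEffect, useState } from 'react';

const Article = () => {
  const [article, setArticle] = useState(null);

  useEffect(() => {
    const controller = new AbortController();

    fetch('/api/article', { signal: controller.signal }) // '/api/article' is a placeholder endpoint
      .then(res => res.json())
      .then(data => setArticle(data.content))            // the 'content' field is an assumption
      .catch(err => {
        if (err.name !== 'AbortError') {
          // a real error, not the abort we trigger on unmount
        }
      });

    // cleanup: abort the request so the callbacks above never fire after unmount
    return () => controller.abort();
  }, []);

  return article ? <article>{article}</article> : <Loading />;
};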

Additionally, calling APIs directly inside useEffect (for example with fetch) makes testing quite hard. Testing such a component can require extensive mocking, and you have to carefully keep the mocks aligned with the real API responses. Don't sacrifice long-term stability for short-term convenience.
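One way to keep this testable is to inject the data-fetching function instead of hard-coding fetch inside the effect. This is a sketch of one option under that assumption; the loadArticle prop name is hypothetical:

import { useEffect, useState } from 'react';

// Instead of calling fetch inside the effect, the component receives a
// loadArticle function (hypothetical name) as a prop.
const Article = ({ loadArticle }) => {
  const [article, setArticle] = useState(null);

  useEffect(() => {
    let cancelled = false;
    loadArticle().then(data => {
      if (!cancelled) setArticle(data); // ignore the result after unmount
    });
    return () => { cancelled = true; };
  }, [loadArticle]);

  return article ? <article>{article}</article> : <Loading />;
};

// A test can now stub the data source without mocking the network layer:
// const loadArticle = jest.fn().mockResolvedValue('hello');
// render(<Article loadArticle={loadArticle} />) with the renderer of your choice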

State Explosion

Moreover, many QA bugs stem from the complexity of state management. In the early stages of iteration, when there aren't many states, you can usually get by with a simple useState. But as the number of states grows, if statements start to litter the code, and adding a new state can make you question your sanity, especially when you run into code like this:

if (isLoading && !isEditing && profile && isNotEmpty && isLoggedIn) {
  // fuck my logic
}

If there’s only one such instance, it might be manageable, but if the component is riddled with this kind of code, several QA issues are almost guaranteed. Writing more tests won’t hide the complexity of the state management, nor will it turn bad logic into good logic. Even if your current tests cover every state, errors can still appear when a new state is added.

Thus, how we manage state (in an elegant way) is a significant question in front-end development. Writing Redux can be frustrating, but this approach can help people write more maintainable code, and more importantly, manage complex states through reducers.

In React, we can use useReducer to avoid state explosion because a reducer essentially acts like a state machine, allowing us to observe all state transitions through the reducer function. However, this alone isn’t enough because states aren’t like enums; they resemble a tree or a graph, where state changes are interdependent. For instance, the idle state doesn’t jump directly to success; it must first move to loading before reaching success, or the failed state cannot directly revert to idle but must return to loading, and then decide where to go based on the returned result.
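For reference, a plain useReducer version of the article state might look like the sketch below (the exact state shape is an assumption based on the earlier example). Note that nothing in this reducer prevents an illegal jump such as going straight from idle to loaded:

const initialState = { status: 'idle', article: null, error: null };

function articleReducer(state, action) {
  switch (action.type) {
    case 'FETCH_ARTICLE':
      return { ...state, status: 'loading' };
    case 'FETCH_ARTICLE_SUCCESS':
      return { status: 'loaded', article: action.payload, error: null };
    case 'FETCH_ARTICLE_FAILED':
      return { ...state, status: 'error', error: action.payload };
    default:
      return state;
  }
}

// inside a component:
// const [state, dispatch] = useReducer(articleReducer, initialState);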

Standard reducers cannot achieve this because they can’t detect the dependencies between states. This means we have to rely solely on the discipline and integrity of engineers, which we all know can be the least reliable factor in the world and prone to errors.

A better approach is to use a more descriptive language to model states. The recently popular xstate covers nearly every state-management scenario and is well worth exploring.
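As a taste of what that looks like, a minimal xstate machine for the same article flow might be written roughly like this, using xstate's createMachine; the event names here (FETCH, RESOLVE, REJECT, RETRY) are assumptions for the sketch:

import { createMachine } from 'xstate';

const articleMachine = createMachine({
  id: 'article',
  initial: 'idle',
  states: {
    idle:    { on: { FETCH: 'loading' } },                    // idle can only move to loading
    loading: { on: { RESOLVE: 'loaded', REJECT: 'failed' } }, // loading decides where to go next
    loaded:  { on: { FETCH: 'loading' } },
    failed:  { on: { RETRY: 'loading' } },                    // failed has to go back through loading
  },
});

Any event that is not declared for the current state is simply ignored, so the illegal transitions are ruled out by the model itself rather than by engineer discipline.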

If you found this article helpful, please consider buying me a coffee ☕ It'll make my ordinary day shine ✨
