Introduction
- This article uses JavaScript's primitive types as examples; the way they work may differ in other programming languages.
- Primitive types behave differently from objects and arrays. Here, we focus only on primitive types.
What is a Mental Model?
A mental model is how we anticipate the way things will develop and operate: the cognitive process by which we understand how something works.
It may sound a bit academic, so take the example of seeing a button on the screen. We expect it to be clickable, and we expect that clicking it will trigger a series of events. So when a user finds that the button does not behave as expected, they feel confused.
But why do we consider this UI element a button in the first place? Partly because of the context we are in, and partly because of experience: users have been identifying buttons by these cues since they first started using web pages, so they naturally apply the same rule to other pages as well.
Take a dial as an example. Why do we think of rotating it rather than pressing it when we see one? Because in daily life, most circular mechanisms are adjusted by rotation, so we apply that rule to other things as well.
Or consider a door handle. Why do we press it down rather than rotate it? Again, because we already have an understanding of how a handle works, we expect that pressing it will open the door.
When learning a programming language, we likewise develop a mental model of how that language operates as we progress. We mentally "compile" snippets of code and anticipate how they will execute.
The Importance of Mental Models
Let's look at an example in JavaScript.
let a = 3;
let b = a + 3;
a += 1;
console.log(b); // ?
This is very basic JavaScript code. Experienced readers already know the answer and where the pitfall lies. Suppose we depict this code with an incorrect mental model; we might get the following result.
1. Before executing a += 1:
2. After executing a += 1:
Naturally, because of a += 1, we change the 3 inside a's circle to 4. And since b = a + 3, once a has changed, the circle pointed to by b should also change to 7. Therefore, someone with this incorrect understanding would naturally answer "7."
I want to emphasize that although the answer is wrong, I don't think it's entirely the learner's fault. From an everyday way of thinking, it is natural to interpret b = a + 3 as a formula, so when a += 1 runs, it is natural to expect b to change accordingly.
Once learners adopt this thinking pattern, it becomes particularly difficult to correct later on, and they may run into bugs they cannot explain.
Where Did the Problem Arise?
In JavaScript, you cannot change the value of a primitive. This statement may seem short, but for beginners it is almost impossible to grasp what it really means.
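To see what this immutability means in running code, here is a minimal sketch. Strings make it easiest to observe, because every string method returns a new string instead of modifying the original:
let s = "cat";
s.toUpperCase();   // returns a NEW string "CAT"; s itself is not modified
console.log(s);    // "cat"
s[0] = "b";        // silently ignored (throws a TypeError in strict mode)
console.log(s);    // still "cat"
The same rule applies to numbers: operations like a + 3 or a += 1 never modify an existing number; they always produce a new one.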
The correct behavior should be as follows:
Because you cannot change the value of any primitive type, we cannot directly change the 3 inside the circle to 4 (3 + 1). Instead, we must create a new number, 4, and point a's arrow at it. The original 3 is never modified (the blue circle), so no matter how a changes, it will not affect the result of b.
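Translating the diagram back into code, the original example annotated with the actual values looks like this:
let a = 3;       // a points to the number 3
let b = a + 3;   // a + 3 creates a brand-new number, 6; b points to it
a += 1;          // creates another new number, 4, and repoints a to it
console.log(a);  // 4
console.log(b);  // 6, not 7; the 6 that b points to was never changed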
Rather than quoting a textbook, I prefer to answer this way. It aids understanding, and it is in fact how JavaScript actually operates. If you look carefully, the two models are very similar; the key lies in whether the learner understands the characteristics and principles of JavaScript's primitive values.
Whether learners can apply this to other examples depends on their mental model. If they keep using the first mental model, then even if they have come across the statement "primitive types are immutable" during their studies, they are still likely to arrive at incorrect results.