The Reason for Intricately Defining the Convergence of Sequences in University Mathematics
Definition
Let $\left\{ x_{n } \right\}_{n = 1}^{\infty}$ be a sequence of real numbers. If for every $\varepsilon > 0$, there exists $N \in \mathbb{N}$ such that $n \ge N \implies | x_{n} - a | < \varepsilon$ is satisfied, then we say that $\left\{ x_{n } \right\}$ converges to $a \in \mathbb{R}$.
$$ \lim_{n \to \infty} x_{n} = a \iff \forall \varepsilon > 0 , \exists N \in \mathbb{N} : n \ge N \implies | x_{n} - a | < \varepsilon $$
Explanation
This style of definition and argument is often referred to as the epsilon-delta argument. A beginner might grudgingly accept that the limit of a sequence "has to" be redefined rigorously, but being told so is hardly helpful on its own.
Merely reading a formula like this aloud can be challenging at first, but it is generally read as follows:
For every positive epsilon, there exists some natural number "big $N$" such that whenever "small $n$" is greater than or equal to big $N$, the absolute value of $x_n$ minus $a$ is less than epsilon.
To some, this may even feel like deliberately hostile wordplay. Setting the question of necessity aside, explanations of why the definition has to be this complicated are scarce, so some commentary is worth adding. Mathematicians prefer concise notation for a reason, and every piece of this seemingly convoluted expression serves a purpose.
Why specifically $\varepsilon$?
Epsilon $\varepsilon$ comes from the first letter of "Error": in the context of convergence, it can be viewed as a permissible error, i.e., a bound on the deviation of the sequence $x_{n}$ from the limit $a$. If the deviation is smaller than $\varepsilon$, then $x_{n}$ is at least that close to $a$.
Why must it be every $\varepsilon>0$?
The phrase "for every positive $\varepsilon$" means there is absolutely no gap left between $x_{n}$ and $a$. This isn't merely about $x_{n}$ and $a$ being very close; it's about them becoming arbitrarily close, which is exactly what a discussion of convergence requires.
Despite the definition clearly saying "every positive number", if you felt that only very small $\varepsilon$ really matter, then you've understood the concept perfectly. If some $N_{0}$ exists for a small $\varepsilon_{0}$, then for any larger $\varepsilon_{1}$, that same $N_{0}$ automatically satisfies the condition, so there is nothing to worry about. However, "nothing to worry about" does not mean the quantifier can be weakened: the condition still has to hold for every positive number.
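This observation can be checked numerically. The sketch below (with the sequence $x_n = 1/n$ converging to $0$ chosen purely as an illustration, and only a finite sample of terms inspected) shows that an $N_0$ found for a small $\varepsilon_0$ keeps working for every larger $\varepsilon_1$:

```python
# Sketch: an N that works for a small epsilon_0 also works for any larger epsilon_1.
# Illustration only: x_n = 1/n, converging to a = 0; a finite sample of n is checked.

def works(N, eps, a=0.0, terms=10_000):
    """Check |x_n - a| < eps for sampled n >= N, with x_n = 1/n."""
    return all(abs(1.0 / n - a) < eps for n in range(N, N + terms))

eps_0 = 0.001
N_0 = 1001          # 1/n < 0.001 for every n >= 1001
assert works(N_0, eps_0)

# The very same N_0 satisfies the condition for every larger epsilon as well.
for eps_1 in [0.01, 0.1, 1.0]:
    assert works(N_0, eps_1)
```

Of course, a finite check is not a proof; it only mirrors the logic that $|x_n - a| < \varepsilon_0 \le \varepsilon_1$ makes the larger tolerance satisfied for free.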
Why is there talk of $N$ existing?
The existence of $N$ is exactly what defines convergence. Conversely, suppose some small $\varepsilon$ is given and no $N$ satisfies the condition: then no matter how far out we go, $x_{n}$ keeps returning to a distance of at least $\varepsilon$ from $a$, so the sequence cannot be said to 'converge'.
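The failure mode can also be made concrete. The sketch below uses the divergent sequence $x_n = (-1)^n$: once $\varepsilon \le 1$, no candidate limit $a$ admits a valid $N$, because consecutive terms sit at $-1$ and $1$ and at least one of them is always far from $a$ (only finitely many $N$ are sampled, as an illustration):

```python
# Sketch: for the divergent sequence x_n = (-1)**n, no N works once eps <= 1.
# Whatever candidate limit a we pick, some term beyond any N stays eps-far from a.

def fails_everywhere(a, eps, max_N=1000):
    """True if for every sampled N, some n >= N violates |x_n - a| < eps."""
    x = lambda n: (-1) ** n
    return all(
        # any two consecutive terms contain both -1 and 1
        any(abs(x(n) - a) >= eps for n in range(N, N + 2))
        for N in range(1, max_N + 1)
    )

# With eps = 1/2, neither a = 1, a = -1, nor a = 0 admits a valid N.
for a in [1.0, -1.0, 0.0]:
    assert fails_everywhere(a, 0.5)
```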
This is where the power of the epsilon-delta argument shines. Bluntly claiming that a sequence converges "as $n \to \infty$" sounds too radical to verify. Time may be infinite, but humans are finite; how could we possibly check every $\varepsilon$ at once?
Thus, rather than handling every $\varepsilon$ at once, we only need to ensure that for whichever $\varepsilon$ is placed in front of us, a suitable $N$ exists at that moment. When a professor writes something like $N = N ( \varepsilon )$, this is precisely what it means.
For instance, to show that the sequence $\displaystyle {{ 1 } \over { e^{n} }}$ converges to $0$: since $e > 2$, we have $\displaystyle \left| {{ 1 } \over { e^{n} }} - 0 \right| < {{ 1 } \over { 2^{n} }}$. Given any $\varepsilon > 0$, set $N := - [ \log_{2} \varepsilon ]$, so that $\displaystyle {{ 1 } \over { 2^{N} }} \le \varepsilon$; then $n \ge N$ implies $\displaystyle \left| {{ 1 } \over { e^{n} }} - 0 \right| < {{ 1 } \over { 2^{n} }} \le {{ 1 } \over { 2^{N} }} \le \varepsilon$, so such an $N$ always exists. Viewing $N$ as a function of $\varepsilon$, we find it can be written as $N ( \varepsilon ) = - [ \log_{2} \varepsilon ]$.
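The choice $N(\varepsilon) = -[\log_2 \varepsilon]$ can be checked numerically, as in the sketch below (valid for $\varepsilon < 1$, with `math.floor` playing the role of the bracket $[\,\cdot\,]$ and only finitely many $n$ sampled):

```python
import math

def N_of_eps(eps):
    """N(eps) = -[log2(eps)], so that 1/2**N <= eps (valid for eps < 1)."""
    return -math.floor(math.log2(eps))

for eps in [0.5, 0.1, 0.01, 1e-6]:
    N = N_of_eps(eps)
    assert 1.0 / 2 ** N <= eps
    # For every sampled n >= N: |1/e**n - 0| < 1/2**n <= 1/2**N <= eps.
    assert all(abs(math.exp(-n)) < eps for n in range(N, N + 100))
```

This is exactly the $N = N(\varepsilon)$ idea in action: the code does not inspect every $\varepsilon$ at once, it just produces a working $N$ for whichever $\varepsilon$ it is handed.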
Then why do we need expressions like $n \ge N$?
Simply put, for some sequences there is no guarantee that the error decreases at every step; it may temporarily grow before shrinking again. The condition $n \ge N$ only asks about the tail of the sequence: once an $N$ is exhibited, every term from that point on must stay within the error bound. This $N$ is often referred to in Korean textbooks as a 'sufficiently large number', one that fulfills the condition with room to spare.
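A concrete example of such a sequence, sketched below, is $x_n = \dfrac{2 + (-1)^n}{n}$: it converges to $0$ since $|x_n| \le 3/n$, yet the error repeatedly jumps up from each odd $n$ to the following even $n$. The choice $N = \lceil 3/\varepsilon \rceil + 1$ used here is one sufficient choice (the $+1$ secures a strict inequality), not the smallest possible:

```python
import math

# Sketch: x_n = (2 + (-1)**n)/n converges to 0, yet the error |x_n - 0|
# is not monotone: it repeatedly grows from odd n to the following even n.
x = lambda n: (2 + (-1) ** n) / n

assert x(2) > x(1) and x(4) > x(3)  # the error temporarily increases

# Still, since |x_n| <= 3/n, the choice N = ceil(3/eps) + 1 works:
# for n >= N we get |x_n| <= 3/n < eps (the +1 makes the inequality strict).
for eps in [0.5, 0.1, 0.01]:
    N = math.ceil(3 / eps) + 1
    assert all(abs(x(n) - 0) < eps for n in range(N, N + 1000))
```

This is exactly why the definition quantifies over all $n \ge N$ rather than asking the error to shrink at each step: only the tail behavior matters.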
What if I still don’t understand after reading all this?
It's possible. The content is inherently difficult, so as long as you don't lose confidence, you're fine. Solving many exercises helps. And if you are a math major, you will have to keep staring at this argument until graduation whether you want to or not, so you will inevitably get used to it, at least by your junior year.