Substituting Zero

(Pengchong here). While in NS, I've been looking at some topics that I might've encountered back in school but not really gone too deeply into - one of which is real analysis, quite a common starting point for undergrad math. Though for those preparing for olympiads it probably doesn't have a high 'ROI', I think much of this stuff provides plenty of intuition for some olympiad problems I've encountered before. And of course, most importantly, I found it pretty fun (looking at Rudin's classic book, in particular).

I'll give a brief introduction to some fundamental analysis concepts using ISL 2020 A8 as the vessel, and as an example of how these concepts can sometimes (though rarely) turn out pretty critical for a 'real' olympiad problem. I used this problem during a session for this year's IMO team a while back (and actually got it in a mock during my own IMO training). It will only use the very fundamentals of analysis (the kind of material that would show up in a more rigorous calculus course rather than in a real analysis one), which is already enough for some pretty important results, so I think it's quite a good starting point. As such, this post is more targeted towards the younger ones who might not have seen these yet.

First, some definitions.

What is a limit?

For a function $f: \mathbb{R} \rightarrow \mathbb{R}$, we often say that $f(x)$ approaches some limit $L$ as $x$ approaches some constant $p$. Intuitively, when $x$ is close to $p$, $f(x)$ is close to $L$. But we need something more rigorous to do further math on this. Can you see how the definition below essentially says the same thing, but in a rigorous way?

We say that $\lim_{x\to p} f(x)=L$ if, for every $\epsilon > 0$, there exists a neighbourhood $E$ of $p$ (in this case that's some open interval $(p-\delta, p + \delta)$ where $\delta > 0$) such that if $x \in E$ and $x \neq p$, then $|f(x) - L| < \epsilon$.

Translated into English: no matter how close to $L$ you want $f(x)$ to be, I can guarantee that by forcing $x$ to be in some narrow range around $p$. This sounds more like our initial intuitive idea of a limit. Notably, we ignore the value of $f$ at $p$ itself; that value is irrelevant, as we are interested in the function's behaviour around $p$.
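For concreteness, here's a quick worked example (not from the problem, just to see the mechanics): let's verify that $\lim_{x\to 2} 3x = 6$. Given any $\epsilon > 0$, choose $\delta = \frac{\epsilon}{3}$. Then for any $x \in (2-\delta, 2+\delta)$ with $x \neq 2$, $$|3x - 6| = 3|x-2| < 3\delta = \epsilon,$$ exactly as the definition demands.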

For completeness, we can also define a neighbourhood of $+\infty$ or $-\infty$ as $(c, \infty)$ or $(-\infty, c)$ respectively for some constant $c$, so that we can take limits to infinity.

You might also have come across limits like these: $\lim_{x\to p^+} f(x)=q$, where we approach $p$ 'from above', 'from the right'. In these cases, the neighbourhood $E$ is cut in half, i.e. taken as $(p, p + \delta)$. The rest of the definition is the same; try to read it again and make sense of it with your initial intuitive understanding.

A simple visualisation. Here, $\lim_{x\to p} f(x)$ does not exist (try it: no possible $L$ can be chosen that satisfies the conditions set out in the definition). But $\lim_{x\to p^+} f(x)=q_2$ and $\lim_{x\to p^-} f(x)=q_1$. For some functions, one-sided limits might not exist either. Think about the behaviour of $f(x) = \sin(\frac{1}{x})$ around zero. The limit towards zero does not exist, as the function oscillates between $-1$ and $1$. No open interval, no matter how narrow, tames this function around zero.
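If you want to see this numerically, here's a small sketch (Python, purely for illustration): we sample $\sin(\frac{1}{x})$ along two sequences that both shrink to zero from the right, and the sampled values refuse to settle on any single $L$.

```python
import math

# Two sequences approaching 0 from the right:
# at x = 1/(2*pi*n + pi/2), sin(1/x) = sin(2*pi*n + pi/2) = +1
# at x = 1/(2*pi*n - pi/2), sin(1/x) = sin(2*pi*n - pi/2) = -1
for n in range(1, 6):
    x_hi = 1 / (2 * math.pi * n + math.pi / 2)
    x_lo = 1 / (2 * math.pi * n - math.pi / 2)
    print(f"x = {x_hi:.6f} -> {math.sin(1 / x_hi):+.3f}    "
          f"x = {x_lo:.6f} -> {math.sin(1 / x_lo):+.3f}")

# Both sequences tend to 0+, yet the function values alternate
# between +1 and -1, so no limit L can work for, say, epsilon = 1/2.
```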

inf and sup

Something else fundamental to analysis proofs is this: the real numbers possess this thing called the least upper bound property.

If $S$ is a non-empty set of real numbers, call $M$ an upper bound if $M \geq x$ for all $x \in S$. Suppose an upper bound exists. Then there exists a 'least upper bound' $m$: an upper bound satisfying $m \leq M$ for any upper bound $M$ of $S$.

This least upper bound is called the supremum, written as $\sup$. This is different from the largest element of a set: for example, the set of values of the sequence $a_n = 1 - \frac{1}{n}$ (over positive integers $n$) does not have a maximum element, but its $\sup$ is $1$. For finite sets, the supremum is simply the largest element.

The corresponding concept of a 'greatest lower bound' is the infimum, written as $\inf$.
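A quick worked check on the example above: $1$ is an upper bound of $\{1 - \frac{1}{n}\}$, since every term is less than $1$. And no $m < 1$ can be an upper bound, because picking any integer $n > \frac{1}{1-m}$ gives $$1 - \frac{1}{n} > 1 - (1 - m) = m.$$ So the supremum is exactly $1$, even though no term of the sequence attains it.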

ISL 2020 A8

Congratulations, we already have the necessary tools to tackle this problem.

Let $\mathbb{R}^+$ be the set of positive real numbers. Determine all functions $f: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ such that for all positive real numbers $x$ and $y$, $$f(x+f(xy))+y=f(x)f(y)+1$$

There are some standard FE things we can prove first, as we try to find an entry point to make some progress. For $\mathbb{R}^+$ FEs, we know we'll probably have to do some bounding eventually - make use of the fact that anything $f$ spits out is positive to derive some common properties we might suspect of the function (injectivity/surjectivity, boundedness, monotonicity, periodicity, etc.). Injectivity, for example, comes pretty easily:

Suppose $f(a) = f(b)$ for some $a \neq b$. Substituting $(x, y) = (1, a)$ and $(x, y) = (1, b)$: $$f(1+f(a)) + a = f(1)f(a) + 1$$ $$f(1+f(b)) + b = f(1)f(b) + 1$$ Since $f(a) = f(b)$, everything except $a$ and $b$ matches between the two equations, $$\implies a = b$$ a contradiction, so $f$ is injective.

Okay, but pretty useless for now. It's an $\mathbb{R}^+$ FE, but something I like to try (and I think I did try when I first got this FE, though I didn't know how to proceed afterwards) is something illegal - what happens if we just sub zero?

$x = 0$: $f(f(0)) + y = f(0)f(y) + 1$

Oh, I see why the domain is restricted to positives now. A zero sub kills the FE: rearranging the above shows that $f$ is linear. Easy algebra will find the necessary constants and finish from there.
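Explicitly, the rearrangement is $$f(y) = \frac{y - 1 + f(f(0))}{f(0)},$$ which is linear in $y$ with slope $\frac{1}{f(0)}$.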

And here comes our act of defiance: let's sub zero anyways.

"Subbing" Zero

The idea is this: we can't sub zero, so the next best thing we can do is take the limit to zero (from the right, i.e. $\lim_{x\to 0^+}$) on both sides. It's as if we're putting in stuff arbitrarily close to zero, but never zero itself. But how can we ensure that a limit exists? It would be quite challenging to come up with this if you have not seen the key result beforehand - the fact that one-sided limits work very well with monotone functions. It'll make sense soon, so bear with me: to proceed, let's first prove that $f$ is monotone.

Here's the original FE again, rearranged slightly:

$f(x+f(xy))=f(x)f(y)+1 - y$

The idea is to force a contradiction if $f$ fails to be non-decreasing.

Suppose $f(a) < f(b)$ but $a > b$.

The above supposition allows us to perform one of the most quintessential FE magic tricks: keeping one side the same while changing the other. Since $f$ fails to be non-decreasing, it is possible to increase $xy$ (by increasing $x$) while simultaneously decreasing $f(xy)$. Because $x$ has increased but $f(xy)$ has decreased, it's possible to solve for a substitution that keeps $x+f(xy)$, and therefore the LHS, constant!

Let $x_1y = a$, $x_1 = \frac{a}{y}$

Let $x_2y = b$, $x_2 = \frac{b}{y}$

We try to have:

$\frac{a}{y} +f(a) = \frac{b}{y} + f(b)$

$\implies y = \frac{a-b}{f(b)-f(a)} > 0$ (positive precisely because $a > b$ and $f(a) < f(b)$)
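With this $y$ in hand, reverse engineer $x_1 = \frac{a}{y}$ and $x_2 = \frac{b}{y}$; substituting $x = x_1$ and then $x = x_2$ (with the same $y$) into the rearranged FE gives $$f(x_1+f(a))=f(x_1)f(y)+1 - y$$ $$f(x_2+f(b))=f(x_2)f(y)+1 - y$$ where all of $y$, $x_1$, $x_2$ are positive, so the substitutions are legal, and the left-hand sides are equal by our choice of $y$.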

Perfect. Since the left-hand sides are equal, so are the right-hand sides; cancelling $f(y) > 0$ gives $f(\frac{a}{y}) = f(\frac{b}{y})$, and by injectivity $a = b$, contradiction. So $f$ is non-decreasing, and being injective, monotone (strictly) increasing.

Alright, we're ready.

Claim (well-known result for monotone functions): one-sided limits exist for monotone functions, i.e. we can let $\lim_{x\to 0^+}f(x) = p$, $\lim_{x\to p^+}f(x)= q$

Proof: We prove that $\lim_{x\to c^+}f(x)$ exists for any $c \geq 0$. What should this limit be? The function is monotone increasing, so intuitively, our limit $L$ should be smaller than or equal to every $f(x)$ with $x > c$. A lower bound of $\{f(x) \mid x > c\}$ clearly exists ($0$ works). We should take the greatest of all the lower bounds, otherwise there will be a 'gap' between $L$ and the elements of $\{f(x) \mid x > c\}$. Therefore, we guess that the limit $L$ is the infimum of the set $\{f(x) \mid x > c\}$. Let's show it.

Going back to definitions, we want to show that for any $\epsilon > 0$, we can find a $\delta$ such that $|f(x)-L|<\epsilon$ holds for $c < x < c +\delta$. For the sake of contradiction, suppose no such $\delta$ exists for some particular $\epsilon$. Then, arbitrarily close to $c$, we can find $x > c$ such that $f(x) \geq L + \epsilon$ (note $f(x) \geq L$ always, since $L$ is a lower bound). Because $f$ is monotone increasing, this implies no $f(x) < L +\epsilon$ at all when $x > c$: any $x > c$ sits above some such $x' \in (c, x)$ with $f(x') \geq L + \epsilon$, so $f(x) \geq f(x') \geq L + \epsilon$. But then $L + \epsilon > L$ is a lower bound, contradicting the fact that $L$ is the greatest lower bound. (As an easy consequence, $p \geq 0$, which gets rid of some potential headaches in the subsequent step.)
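Here's a small numeric sketch of the claim (Python; the jump function is just an arbitrary monotone example, not from the problem): approaching a jump point from the right, the values decrease towards the infimum of the values to the right, not towards $f(c)$ itself.

```python
# An increasing function with a jump at x = 1 (arbitrary example):
# f(x) = x for x <= 1, and f(x) = x + 5 for x > 1.
def f(x):
    return x if x <= 1 else x + 5

c = 1
# Approach c from the right: f(c + delta) should shrink towards
# inf{f(x) | x > c} = 6, even though f(c) = 1.
for k in range(1, 8):
    delta = 10 ** (-k)
    print(f"delta = {delta:.0e}    f(c + delta) = {f(c + delta):.7f}")

# The right-hand limit is 6, the infimum of the values to the right;
# the value at c itself is ignored, exactly as in the limit definition.
```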

Then, take $\lim_{x\to 0^+}$ on both sides of the rearranged FE (so the two quantities below are equal):

$\lim_{x\to 0^+}\left[f(x)f(y) + 1- y\right] = pf(y) + 1 - y$.

$\lim_{x\to 0^+}f(x + f(xy)) = \lim_{u\to p^+}f(u) = q$.

Exercise: I know these make intuitive sense, but can you supply a proof of these two statements from the definition of limits? The first one is pretty straightforward; for the second, you need to 'do it twice': for small enough $x$, $xy$ is positive but close to zero, therefore $x + f(xy)$ is strictly greater than $p$ but close to $p$, and so... Write this out rigorously with the $\epsilon$-$\delta$ structure.

Okay, we're done. Rearrange a bit to show that $f$ is linear (after quickly disposing of the $p=0$ case), solve for the constants, and the A8 is done. $f(x) = x+1$ is the only solution, by the way.
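Spelling out that last rearrangement: equating the two limits gives $q = pf(y) + 1 - y$ for all $y > 0$. If $p = 0$, this reads $q = 1 - y$ for all $y$, impossible since $q$ is a constant; so $p > 0$ and $$f(y) = \frac{y - 1 + q}{p},$$ linear as promised. And it's worth verifying that $f(x) = x+1$ really does satisfy the original FE: $$f(x+f(xy))+y = (x + xy + 1 + 1) + y = xy + x + y + 2,$$ $$f(x)f(y)+1 = (x+1)(y+1)+1 = xy + x + y + 2.$$ Both sides match.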

Some Concluding Thoughts

This argument might be a bit strange to some. At first glance it is easy to be a bit skeptical: how come we can look at zero in an FE that specifically ruled it out?! The answer, of course, is that we aren't. We're looking at the function's behaviour as it approaches zero, just like how we might examine a sequence's behaviour as it approaches the $\infty$th term. We haven't stepped into any space in which our function is not defined.

Notice that all this trouble is just because we're missing zero in the domain. I think this is a pretty common theme in analysis - the boundaries make all the difference; closed vs. open intervals/sets have vastly different behaviours. And you can blame these subtleties as the source of pain for several olympiad problems. Off the top of my head: when you do smoothing in inequalities, you can't immediately go "assume this gives the minimal result", "do this optimisation", "it goes lower, contradiction", because that minimum might not exist. In these cases, knowing the concepts of continuity, and in particular the properties of a continuous function on a closed interval (or more generally, a compact metric space), might fix this issue, but that's a story for another time.

Anyways, to clarify again, analysis is NOT the best use of your time if your goal is to mug for Olympiads. But if you're interested, you can take a look, or read up after you graduate and before uni starts. It can be quite fascinating. The more senior people will have more to say about this of course, but personally I think Rudin's classic analysis book is quite good. Yes, he doesn't explain stuff much, and would rather give you an elegant proof than an intuitive one. But sometimes trying to reverse engineer the motivation / the intuitive meaning, without it being given to you, is half the fun.
