A tricky functional equation
(David here.) In this post, I go through a surprisingly tricky functional equation that appeared on the 2018 edition of the IMO Revenge, a contest where the contestants made problems for their trainers.
The problem
(IMO Revenge 2018/4) Find all functions such that for all .
Fun fact - I was actually at that IMO as an observer! I had good memories of attempting the test, solving problem 3 and meeting the contestant-proposer (who later went on to propose an actual IMO Q3).
Initial observations
Clearly, works. Furthermore, the equation is homogeneous in - if works then so must . It's also cheap to get that and .
This was roughly where I ran out of cheap things to find - I didn't manage to get any more special values or any standard properties (like injectivity or surjectivity).
Some progress
When you get stuck on a functional equation (e.g. on or ), it's often fruitful to try to solve for all integer inputs first. I'll spare you the trial and error and point at these: Therefore, because the first two add up to the third, we get
So either:
- , which must mean that , or
- satisfies a linear recurrence of the form where .
This linear recurrence can be solved using standard methods: consider the characteristic equation
- When , we get that for some . But because , we have that .
- When , we get that , which is really forced to be .
- When , the equation has two roots satisfying , and the general form of the sequence is for some constants . But again, because we must have , so .
- Notice that if then is real, otherwise if then is a complex number of norm 1.
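To make the case analysis concrete, here is a tiny numerical check (my own sketch; I'm assuming the recurrence has the shape a(n+1) = c a(n) - a(n-1), which is how I read the case split above). The three kinds of sequences cosh(nt), n itself, and cos(nθ) all satisfy such a recurrence, with c = 2 cosh(t) > 2, c = 2, and c = 2 cos(θ) strictly between -2 and 2 respectively.

```python
import math

def satisfies_recurrence(a, c, n_max=20):
    """Check a(n+1) == c*a(n) - a(n-1) for n = 1..n_max-1 (up to floating-point error)."""
    return all(math.isclose(a(n + 1), c * a(n) - a(n - 1), rel_tol=1e-9, abs_tol=1e-9)
               for n in range(1, n_max))

t, theta = 0.7, 1.3  # arbitrary demo parameters

print(satisfies_recurrence(lambda n: math.cosh(n * t), 2 * math.cosh(t)))        # hyperbolic: True
print(satisfies_recurrence(lambda n: float(n), 2.0))                             # parabolic:  True
print(satisfies_recurrence(lambda n: math.cos(n * theta), 2 * math.cos(theta)))  # elliptic:   True
```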
So we're done, at least for the integers. (Secretly, we're done also for any integer multiple of some fixed real number.)
Generalizing to the rationals
You might expect this step to be very easy or very hard. It turns out to be a bit of both.
Suppose we have two rational numbers and . Then, we end up with two sequences
each satisfying a linear recurrence, which we can solve for separately. However, nothing stops a nasty "mixture of solutions": could it happen that but ?
It turns out that this is impossible - we can always find some common subdivision where both and are multiples of . Then we note that each of those sequences is a subsequence of , so we have to be able to generate both from a single "formula". In other words, any such sequence must be "consistent" with some underlying sequence!
It's useful to divide the solutions that we had for into three classes that cannot "mix" for this reason:
- ("hyperbolic") for real
- ("parabolic") or
- ("elliptic") for complex of norm 1. If we write , this can also be written as .
These classes don't mix because the first grows exponentially, the second grows linearly and the last is bounded, so any subsequence you take must also be in the same "class" of growth rate.
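As a quick sanity check on the "classes don't mix" claim (again a sketch of my own, not from the post): subsampling the elliptic sequence cos(nθ) at every k-th term gives cos(n·kθ), which is still bounded and still satisfies a recurrence of the same shape with a constant of absolute value less than 2, so the subsequence stays elliptic.

```python
import math

theta, k = 1.3, 5  # demo parameters: an "elliptic" angle and a subsampling step

a = [math.cos(n * theta) for n in range(200)]
sub = a[::k]                     # the subsequence cos(0), cos(k*theta), cos(2k*theta), ...

c_sub = 2 * math.cos(k * theta)  # constant for the subsequence's recurrence
still_elliptic = (
    abs(c_sub) < 2
    and max(abs(x) for x in sub) <= 1
    and all(math.isclose(sub[n + 1], c_sub * sub[n] - sub[n - 1], abs_tol=1e-9)
            for n in range(1, len(sub) - 1))
)
print(still_elliptic)  # True: bounded, same elliptic-type recurrence
```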
The easy cases
The hyperbolic case turns out to be really easy - suppose then it follows that and (because we're simply taking a subsequence going from multiples of to multiples of ). Therefore there is a single canonical value of for which .
The parabolic case is also really easy - basically cannot happen. I'll omit the for rescaling, but imagine if you just considered every "half-step" of such a sequence: There are no solutions we found previously that fit this pattern, so this cannot happen. Hence the only "parabolic" solution is .
The elliptic case... remember that I said something about "very hard"?
Uh oh
In what follows we assume that we are firmly in the elliptic case.
You might be wondering what goes wrong if we just follow the argument for the hyperbolic case. The problem is that is injective, but is certainly not (and so there is no unique single that we can infer).
At this point, I had a scary thought. Clearly, because the elliptic case solution is a solution, we have the identity but what if we had and simply had ? Then it would still follow that so then such an satisfies the original functional equation! But consider this problem:
(USA TST 2015/4) Does there exist an additive that's not linear mod 1? ( denotes " mod 1".)
Cue some vertical space, in case you don't want the answer spoiled.
...
...
...
...
...
The answer is that such exists 👻.
Additive but nonlinear
How can such a thing be possible? We briefly mentioned that over " mod 1" there is no canonical halving operation, so one possible solution here (in the AoPS thread) is to construct a function by iteratively defining the half-step values in a self-consistent way that doesn't match any linear function.
But here's also a conceptual way. Let's think about the minimum amount of information that we'd need to specify such a function. The perhaps surprising claim is that we only need to specify subject to some consistency conditions, like and that's basically a well-defined function , since any is a multiple of some .
This "consistency" clause gets a little less mysterious if we focus less on the value on and more on the "implied coefficient" of the linear function - then, for any , we have
So really, what we want is a sequence of integers that satisfies this strong "modular" relation. This leads to the following slick solution (also in the AoPS thread): consider which makes no sense as a number, but considering makes total sense over and yet cannot match any "regular" linear function. (This solution certainly makes my top 10 list for shock value.)
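Here is a concrete sketch of this kind of construction in code (my own variant, so the details may differ from the one in the thread): take the implied coefficient on denominator n! to be the partial sum 1! + 2! + ... + n!, read modulo n!. The consistency condition holds because each new term (n+1)! is a multiple of n!, yet no single integer reproduces all of these coefficients, so the resulting additive-mod-1 function is not multiplication by any fixed integer.

```python
from fractions import Fraction
from math import factorial
from itertools import count

def coeff(n):
    """Implied coefficient on denominator n!: the partial sum 1! + ... + n!, reduced mod n!."""
    return sum(factorial(k) for k in range(1, n + 1)) % factorial(n)

def g(x):
    """Additive-mod-1 function determined by the coefficients above (x a rational)."""
    x = Fraction(x) % 1
    n = next(k for k in count(1) if factorial(k) % x.denominator == 0)  # x is a multiple of 1/n!
    a = int(x * factorial(n))                                           # write x = a / n!
    return Fraction(coeff(n) * a, factorial(n)) % 1

# Additivity mod 1 on a few samples:
xs = [Fraction(1, 3), Fraction(5, 8), Fraction(7, 30), Fraction(2, 7)]
print(all((g(x) + g(y)) % 1 == g(x + y) for x in xs for y in xs))  # True

# ...but g is not multiplication by a single integer: the coefficients keep drifting mod n!.
print([coeff(n) for n in range(2, 7)])  # [1, 3, 9, 33, 153]
```

This mirrors the "implied coefficient" discussion above: each coefficient is only ever pinned down modulo n!, and nothing forces those residues to come from one honest integer.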
Diversion: really solving it
We can attempt the more ambitious:
(Harder?) Find all additive .
To really classify all the solutions, we resort to the one important trick when we deal with moduli in general. By the Chinese Remainder Theorem, to specify something mod (or mod ), all you really have to do is to specify things modulo the prime powers that divide it. So in our case, we can have a consistent sequence of integers satisfying .
These things have a name: they are the -adic integers . Usually we write a -adic integer (in ) as but really, it can also be thought of as a sequence of partial sums that are consistent mod , mod starting from the second term, mod starting from the third, and so on.
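To make the "consistent sequence of residues" picture tangible, here is a small standalone example (mine, not from the post): -1 has a square root in the 5-adic integers, and we can compute its residues mod 5, 5^2, 5^3, ... by Hensel/Newton lifting. Each new residue reduces to the previous one, which is exactly the consistency condition described above.

```python
def padic_sqrt_minus_one(p, k):
    """Residues mod p, p^2, ..., p^k of one square root of -1 in the p-adic integers.
    Assumes -1 is a quadratic residue mod p (true for p = 5, since 2^2 = 4 = -1 mod 5)."""
    x = next(a for a in range(p) if (a * a + 1) % p == 0)  # solve x^2 = -1 (mod p)
    residues = [x]
    for i in range(2, k + 1):
        m = p ** i
        x = (x - (x * x + 1) * pow(2 * x, -1, m)) % m      # one Newton/Hensel step mod p^i
        residues.append(x)
    return residues

res = padic_sqrt_minus_one(5, 6)
print(res)  # [2, 7, 57, 182, 2057, 14557]
print(all((r * r + 1) % 5 ** (i + 1) == 0 for i, r in enumerate(res)))        # True
print(all(res[i + 1] % 5 ** (i + 1) == res[i] for i in range(len(res) - 1)))  # True: consistent
```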
In conclusion, to produce a , we pick some element of for each prime , which corresponds to picking for all . However, these are completely independent choices! So the set of 's can be reasonably called
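And here is the "independent choices" point in miniature (again my own illustration): by the Chinese Remainder Theorem, picking residues mod 8, mod 9 and mod 5 independently is the same as picking a single residue mod 360, so gluing together one choice per prime is unproblematic.

```python
from math import prod

def crt(residues, moduli):
    """Combine residues r_i mod pairwise-coprime m_i into one residue mod prod(m_i)."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli)) % M

# Independent choices modulo the prime powers 8, 9 and 5 pin down a unique residue mod 360.
x = crt([3, 7, 2], [8, 9, 5])
print(x, x % 8, x % 9, x % 5)  # 187 3 7 2
```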
Hello from the other side
I'll talk about some interesting connections to advanced math here, but feel free to skip this section.
Categorical limits. Let's rephrase how we defined the -adic integers. I'll write the integers mod as . Then we have the following diagram where each arrow describes an additive function - for example, the refers to the function of "taking mod " (that is, we are given something mod , and we take mod to get something mod ).
There's a general idea (that works for any kind of diagram with arrows) that we can define an (inverse) limit from this diagram by considering consistent choices of elements, exactly in the way we had to do for this problem. We can therefore reasonably write something like
You can see some parallels between this idea of a "limit" and the limits you'd meet in analysis - both are built from a sequence of things satisfying consistency relations between elements of the sequence.
Pontryagin duality. Let be " mod 1". We can talk about the so-called Pontryagin dual of - the set of all continuous group homomorphisms . It turns out that this ends up being simply "all additive functions ", which is the exact question that we solved, so we can reasonably say that:
is the Pontryagin dual of .
In general, Pontryagin duality is an indication that we can do a kind of generalized Fourier analysis on the group, but I won't attempt to explain it further here.
Adele rings. If you google "adele ring", you'll get lots of pictures of Adele's 10-carat engagement ring. But for us, the adelic ring ends up being the "rational version" of . I unfortunately don't have any concrete understanding of what this is for - based on Wikipedia/ncatlab it seems to (also?) be related to a more general "Fourier analysis".
Back to the problem
We've looked at a class of pathological 's, but we don't yet know whether that's the end of it. Luckily for us, we're kinda close!
There are two differences between our problem and the USA TST problem:
- instead of being just mod 1, we have some kind of equivalence condition based on the value of . Let's write if , and work mod .
- instead of the target being , it can now be any real number.
- there's one additional step to make sure that the 's are consistent
These turn out to be not too hard to work around, and I leave you to conclude that in the elliptic case, must be of the form where is additive mod 1.
Meta-epilogue
Over the years, there has been much chatter about how "olympiad problems" are not reflective of "real mathematics", and there is certainly some truth to that. However, if you've reached the top tier of contest problem-solving, you'll certainly have bumped into some problems where the solution seems unsatisfying.
My advice is to hold on to those problems, because occasionally, as with these two problems, you'll get to connect the dots. Sometimes such problems show up years apart, but I think the mathematical adventure that comes out of it is well worth the wait, and along the way you'll have accidentally learned some things that people would consider "deep mathematics".