Putnam 2024 Testsolve

(David here!)

Putnam 2024 was released recently, so I decided to try a bunch of the problems while on the plane ride from NYC to Chicago.

(A3) Let $T$ be a (uniformly) random bijection $$ T: \{1,\,2,\,3\}\times\{1,\,2,\,\ldots,\,2024\}\to\{1,\,2,\,\ldots,\,6072\} $$ conditioned on $T$ being increasing in both arguments. Do there exist indices $(a,b)$ and $(c,d)$ such that $$1/3 \le P(T(a,b)\lt T(c,d)) \le 2/3?$$

I took some liberties with reformulating it probabilistically here.

My initial reaction was that this was probably true. My first idea was that if $(a,b) \lneq (c,d)$ componentwise, then the probability is 1, and flipped around it is 0. So maybe we can take some intermediate sequence of indices that slowly switches them? I didn't get very far with this.

Well, thinking slightly meta, the problem can't possibly be that hard - there is no way we're figuring out exactly how many $T$ satisfy $T(a,b) \lt T(c,d)$, so maybe we should be looking for bijections (or partial bijections) $$\{T:T(a,b) \lt T(c,d)\} \longleftrightarrow \{T:T(a,b) \gt T(c,d)\}.$$

One idea here is to take the leftmost $3\times 3$ grid of indices and flip it about the main diagonal. This is a bijection but only if it preserves the conditions, which is somewhat awkward.

Where else can we get a bijection? I thought about how I could flip a square to minimize the disruption to the surrounding cells, and the best thing that could happen is have two consecutive indices along a diagonal: $$ \begin{array}{ccccc} \bullet & 3 & \bullet & \bullet & \bullet \\ 2 & \bullet & \bullet & \bullet & \bullet \\ \bullet & \bullet & \bullet & \bullet & \bullet \end{array} $$ Then, we can safely switch 2 and 3, because from the perspective of all the other indices they are the same. In fact, we can safely switch 2 and 3 whenever their cells are not adjacent.

What about any two other numbers? For any $a\lt b$, it seems that you can trace a path through $a, a+1, a+2, \ldots, b$, and as long as they don't form a pairwise adjacent path, you can devise some kind of map that cyclically shifts the values, replacing each of $a,\ldots,b-1$ with its successor and $b$ with $a$. (This turns out to not be very helpful.)

I mulled over the line in my notebook that "the case that cannot [sic] is if there is a path", and as I was drawing some basic cases $$ \begin{matrix} 1 & \boxed 4 \\ \boxed{2} & 5 \\ \boxed{3} & 6 \\ \end{matrix}\qquad\to \qquad \begin{matrix} 1 & \boxed 2 \\ \boxed 3 & 5 \\ \boxed 4 & 6 \\ \end{matrix} $$ I realized that 1 has to be in the top left corner (duh!) and there probably weren't many cases for things around it: $$ \begin{matrix} 1 & 3 \\ 2 & \bullet \\ \bullet & \bullet \\ \end{matrix}, \qquad \begin{matrix} 1 & 2 \\ 3 & \bullet \\ \bullet & \bullet \\ \end{matrix}, \qquad \begin{matrix} 1 & 4 \\ 2 & \bullet \\ 3 & \bullet \\ \end{matrix}, \qquad \begin{matrix} 1 & 2 & 3 & ... & x-1\\ x & \bullet & \bullet & ... & \bullet \\ \bullet & \bullet & \bullet & ... & \bullet \\ \end{matrix}, \quad \text{where } x>4 $$ so our pair was most likely $T(1,2)$ and $T(2,1)$. Well, the first two cases paired up by switching 2 and 3. The third case bijects onto a subset of the second: $$ \begin{matrix} 1 & 4 \\ 2 & \bullet \\ 3 & \bullet \\ \end{matrix} \longleftrightarrow \begin{matrix} 1 & 3 \\ 2 & \bullet \\ 4 & \bullet \\ \end{matrix} \longleftrightarrow \begin{matrix} 1 & 2 \\ 3 & \bullet \\ 4 & \bullet \\ \end{matrix} $$ so this case is definitely smaller than the first two. The very last case is tricky, because the naive thing $$ \begin{matrix} 1 & 2 & 3 & ... & x-1\\ x & \bullet & \bullet & ... & \bullet \\ \bullet & \bullet & \bullet & ... & \bullet \\ \end{matrix} \longleftrightarrow \begin{matrix} 1 & 3 & 4 & ... & x\\ 2 & \bullet & \bullet & ... & \bullet \\ \bullet & \bullet & \bullet & ... & \bullet \\ \end{matrix} $$ doesn't produce disjoint cases across $x$. I spent a long time figuring out how to remap this to different cases, and eventually stumbled on $$ \begin{matrix} 1 & 3 & 4 & ... & x-1 & x\\ 2 & \bullet & \bullet & ... & \bullet & \bullet \\ \bullet & \bullet & \bullet & ... & \bullet & \bullet \\ \end{matrix} \longleftrightarrow \begin{matrix} 1 & 3 & 4 & ... & x-1 & x+1\\ 2 & \bullet & \bullet & ... & \bullet & \bullet \\ \bullet & \bullet & \bullet & ... & \bullet & \bullet \\ \end{matrix} $$ and now these are disjoint and part of the case $\begin{pmatrix}1&3 \\ 2&\bullet\end{pmatrix}$.

Hence, cases 3 and 4 together count fewer fillings than case 1 = case 2, so $P(T(1,2)\lt T(2,1)) = \frac{c_2}{c_1+c_2+c_3+c_4}$ is between 1/3 and 1/2, and in particular between 1/3 and 2/3.
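A brute-force sanity check, on a $3\times 4$ grid as a small stand-in for $3\times 2024$ (the helper name is mine): enumerate all increasing fillings and compute $P(T(1,2)\lt T(2,1))$ exactly.

```python
from fractions import Fraction

def syt_probability(rows, cols):
    """Enumerate all fillings of a rows x cols grid that increase along rows
    and columns (standard Young tableaux), returning (total count, count of
    fillings with T(1,2) < T(2,1), i.e. cell (0,1) filled before cell (1,0))."""
    total = fav = 0
    filled = [0] * rows  # filled[i] = cells used so far in row i

    def place(step, t12, t21):
        nonlocal total, fav
        if step > rows * cols:
            total += 1
            fav += t12 < t21
            return
        for i in range(rows):
            # the next value may extend row i iff row lengths stay weakly decreasing
            if filled[i] < cols and (i == 0 or filled[i] < filled[i - 1]):
                filled[i] += 1
                place(step + 1,
                      step if (i == 0 and filled[0] == 2) else t12,
                      step if (i == 1 and filled[1] == 1) else t21)
                filled[i] -= 1

    place(1, 0, 0)
    return total, fav
```

On the $3\times 4$ grid there are 462 fillings (matching the hook length formula), and the probability indeed lands strictly between 1/3 and 2/3.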

$\newcommand{\E}{\mathbb E}$

(B3) Let $n$ be a positive integer. Set $a_0 = 1$. For $k\geq 0$, let $$ a_{k+1}= a_{k} + \mathrm{sign}(m_k-a_k), \qquad m_k \sim \textrm{Unif}(\{1,2,...,n\}) $$ Determine $\lim_{n\to\infty}\E [a_n]/n$.

In the original problem, the sequence was double-indexed to emphasize the fact that the sequence generation depends on $n$. But with that in mind, it's much cleaner to write the problem this way.

Turns out the first thing I thought of worked. The key (as with martingales) is that the conditional expectation is very clean:

$$ \begin{align*} \E[a_{k+1} | a_k] &= a_k + \E[\mathrm{sign}(m_k-a_k)] \\ &= a_k + \left(\frac{n-a_k}{n} - \frac{a_k-1}{n}\right) \\ &= \left(1-\frac{2}{n}\right)a_k + \frac{n+1}{n} \end{align*} $$

Thus, $$ \E[a_n] = \left(1-\frac{2}{n}\right) \E[a_{n-1}] + \frac{n+1}{n} $$ which is really easy to deal with. We want a $c$ such that $$ \E[a_n]-c = \left(1-\frac{2}{n}\right) \left(\E[a_{n-1}]-c\right) $$ so $c = \frac{n+1}{2}$. So, $$ \E[a_n] = \frac{n+1}{2} + \left(1-\frac{2}{n}\right)^{n} \left(\E[a_0] - \frac{n+1}{2}\right) =\boxed{\frac 1 2 (1-e^{-2})}n + O(1). $$
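Out of paranoia, a quick numerical check of the closed form (function names are mine): evolve the exact distribution of $a_k$ and compare.

```python
import math

def expected_a_n(n):
    """E[a_n] computed exactly by evolving the distribution of a_k."""
    dist = [0.0] * (n + 2)  # dist[a] = P(a_k = a); a_k stays within 1..n
    dist[1] = 1.0           # a_0 = 1
    for _ in range(n):      # n steps
        nxt = [0.0] * (n + 2)
        for a in range(1, n + 1):
            p = dist[a]
            if p:
                nxt[a + 1] += p * (n - a) / n  # m_k > a_k: step up
                nxt[a - 1] += p * (a - 1) / n  # m_k < a_k: step down
                nxt[a]     += p / n            # m_k = a_k: stay
        dist = nxt
    return sum(a * dist[a] for a in range(1, n + 1))

def closed_form(n):
    """(n+1)/2 + (1 - 2/n)^n (a_0 - (n+1)/2) with a_0 = 1."""
    return (n + 1) / 2 + (1 - 2 / n) ** n * (1 - (n + 1) / 2)
```

For $n=200$ the two agree to floating-point accuracy, and $\E[a_n]/n$ is already within 0.01 of $\frac 1 2(1-e^{-2}) \approx 0.432$.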

Pretty straightforward if you know the trick.

(B5) Let $k$ and $m$ be positive integers. For a positive integer $n$, let $f(n)$ be the number of integer sequences $x_1,\,\ldots,\,x_k,\,y_1,\,\ldots,\,y_m,\,z$ satisfying $1\leq x_1\leq\cdots\leq x_k\leq z\leq n$ and $1\leq y_1\leq\cdots\leq y_m\leq z\leq n$. Show that $f(n)$ can be expressed as a polynomial in $n$ with nonnegative coefficients.

I initially read the question as showing that this is a polynomial. So even before realizing what's going on, I write down $$f(n) = \sum_{z=1}^n \binom{z+k-1}{k} \binom{z+m-1}{m}$$ and pour 'snake oil' on it: $$ \sum_{n\ge 1} f(n)t^n = \sum_{n\ge 1} \sum_{z=1}^n \binom{z+k-1}{k} \binom{z+m-1}{m} t^n = \sum_{z\ge 1} \binom{z+k-1}{k} \binom{z+m-1}{m} \frac{t^z}{1-t}. $$
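A quick brute-force check that the counting formula is right (helper names mine): each nondecreasing sequence is a multiset, so `combinations_with_replacement` enumerates them directly.

```python
from itertools import combinations_with_replacement
from math import comb

def f_brute(n, k, m):
    """Directly enumerate 1<=x_1<=...<=x_k<=z<=n and 1<=y_1<=...<=y_m<=z<=n."""
    count = 0
    for z in range(1, n + 1):
        xs = sum(1 for _ in combinations_with_replacement(range(1, z + 1), k))
        ys = sum(1 for _ in combinations_with_replacement(range(1, z + 1), m))
        count += xs * ys
    return count

def f_binom(n, k, m):
    """The multiset-count formula from the text."""
    return sum(comb(z + k - 1, k) * comb(z + m - 1, m) for z in range(1, n + 1))
```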

Well, this is not simplified yet, so naturally we pour even more snake oil on it (summing from $k,m=0$ for tidiness): $$ \sum_{k,m\ge 0} \sum_{z\ge 1} \binom{z+k-1}{k} \binom{z+m-1}{m} \frac{t^z}{1-t} x^k y^m = \sum_{z\ge 1} \frac{t^z}{(1-t)(1-x)^{z}(1-y)^{z}} = \frac{t}{(1-t)\left((1-x)(1-y)-t\right)}. $$

At this point, I realized that it's trivial to note that $f(n)$ is a polynomial, even without any of this stuff. So what was the question, actually? Ah, we have to show that the coefficients of the polynomial are nonnegative.

Now this is kind of tricky. I couldn't think of very good ways to recover the coefficients from the values of the polynomial itself. On the other hand, polynomials with non-negative coefficients form a "cone" - they are closed under non-negative linear combinations (i.e. $a_1P_1+a_2P_2$ must have non-negative coefficients if $a_1,a_2\ge 0$ and $P_1,P_2$ themselves have non-negative coefficients).

I had fantasies of some general inductive criteria (maybe if $P(n)-P(n-1)$ was non-negative then $P(n)$ was non-negative? And dealing with the constant term somehow?), but it didn't lead anywhere. Eventually, I fell back on looking at some small cases.

  • If $m=0$, then the statement is trivial, since $\sum_{z\le n} \binom{z+k-1}{k} = \binom{n+k}{k+1}$ by the hockey stick identity, which has only nonnegative coefficients (it is the product of the factors $n, n+1, \ldots, n+k$ divided by $(k+1)!$).

  • Let's look at the first nontrivial case. If $k=m=1$, then clearly $$\sum_{z\le n} z^2 = \frac{n(n+1)(2n+1)}{6}$$ but how could we have known this?

  • Even for $m=1$ in general, the sum feels like something we should be able to do. We use the Abel summation trick to get $$ \begin{align*} \sum_{z\le n} z\binom{z+k-1}{k} &= \sum_{1\le i\le n} \left(\binom{i+k-1}{k} + \binom{i+k}{k} + ... + \binom{n+k-1}{k}\right)\\ &= \sum_{1\le i\le n} \left(\binom{n+k}{k+1} - \binom{i+k-1}{k+1}\right)\\ &= \sum_{1\le i\le n} \binom{n+k}{k+1} - \sum_{1\le i\le n} \binom{i+k-1}{k+1}\\ &= n\binom{n+k}{k+1} - \binom{n+k}{k+2}\\ &= \binom{n+k}{k+1} \cdot \frac{(k+1)n + 1}{k+2} \end{align*} $$
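A sanity check of that last identity with exact arithmetic (helper names mine):

```python
from fractions import Fraction
from math import comb

def lhs(n, k):
    """Direct sum of z * C(z+k-1, k)."""
    return sum(z * comb(z + k - 1, k) for z in range(1, n + 1))

def rhs(n, k):
    """The closed form from the Abel-summation computation."""
    return Fraction(comb(n + k, k + 1) * ((k + 1) * n + 1), k + 2)
```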

This feels doable! What about the next power?

$$ \begin{align*} \sum_{z\le n} \binom{z+1}{2} \binom{z+k-1}{k} &= \binom{n+1}{2} \binom{n+k}{k+1} - \sum_{z\le n} z \binom{z+k-1}{k+1}\\ &= \binom{n+1}{2} \binom{n+k}{k+1} - n\binom{n+k}{k+2} + \binom{n+k}{k+3} \\ &= \binom{n+k}{k+1} \left(\frac{n(n+1)}{2} - \frac{n(n-1)}{k+2}\right) + \binom{n+k}{k+3} \end{align*} $$

At this point, I realized that you can pretty much "continue" this process: repeatedly Abel-summing peels down the second factor, writing $\sum_{z\le n} \binom{z+k-1}{k} \binom{z+m-1}{m}$ as an alternating sum of products of binomial coefficients, and each adjacent pair of terms combines into a polynomial with non-negative coefficients (just as in the two computations above), so we're done.
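The non-negativity claim itself can also be spot-checked numerically: interpolate $f$ exactly as a polynomial of degree $k+m+1$ and look at the coefficients. This is just a sketch (all names mine), not part of the proof.

```python
from fractions import Fraction
from math import comb

def f_coeffs(k, m):
    """Coefficients (low degree first) of f(n) = sum_{z<=n} C(z+k-1,k)C(z+m-1,m),
    a polynomial of degree k+m+1 in n, interpolated exactly through k+m+2 points."""
    d = k + m + 1
    xs = list(range(1, d + 2))
    ys = [sum(comb(z + k - 1, k) * comb(z + m - 1, m) for z in range(1, x + 1))
          for x in xs]
    # Newton's divided differences
    coef = [Fraction(y) for y in ys]
    for j in range(1, d + 1):
        for i in range(d, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # expand the Newton form p(n) = sum_i coef[i] * prod_{j<i} (n - xs[j])
    poly = [Fraction(0)] * (d + 1)
    basis = [Fraction(1)]
    for i in range(d + 1):
        for t, c in enumerate(basis):
            poly[t] += coef[i] * c
        new = [Fraction(0)] * (len(basis) + 1)  # multiply basis by (n - xs[i])
        for t, c in enumerate(basis):
            new[t] -= c * xs[i]
            new[t + 1] += c
        basis = new
    return poly
```

For $k=m=1$ this recovers $\frac{n}{6}+\frac{n^2}{2}+\frac{n^3}{3}$, i.e. $\frac{n(n+1)(2n+1)}{6}$.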

Footnote. I actually fakesolved this on the first pass because I read $\le$ as $\lt$. Also, it's not true that summing a polynomial with non-negative coefficients gives a polynomial with non-negative coefficients: for example, $\sum_{z\le n} z^4 = \frac{6n^5+15n^4+10n^3-n}{30}$ has a negative coefficient.

(A4) Find all primes $p>5$ for which there exists an integer $a$ with the following property: the sequence $1,\,a,\,a^2,\,\ldots,\,a^{p-5}$ is a permutation of a (non-constant) arithmetic progression modulo $p$.

If $a$ is not a primitive root, then we have at most $(p-1)/2$ distinct values, but a non-constant AP mod $p$ (of length at most $p$) has distinct values, so immediately we have $(p-1)/2 \ge p-4$, or $p\le 7$. For $p=7$, take $a=3$: $1,3,9\equiv 2$ is a permutation of an AP.

Otherwise, we assume $a$ is a primitive root. Then because $1,a,\ldots,a^{p-2}$ is a permutation of $1,\ldots,p-1$, the remaining three numbers $a^{p-4}, a^{p-3}, a^{p-2}$ together with 0 must form an AP in some order (they are the four leftover terms of the full $p$-term AP cycle, which are consecutive). There aren't too many ways this can play out now, and I'm too lazy to work through all of them.
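Since I didn't work through the endgame by hand, here's a small search sketch (names mine) that checks, for each small prime, whether some $a$ works. The AP test divides out the putative common difference and looks for a cyclic interval of residues.

```python
def is_ap_permutation(vals, p):
    """True iff the values are distinct and form a permutation of a
    non-constant arithmetic progression mod p."""
    s = set(vals)
    if len(s) != len(vals) or len(s) >= p:
        return False
    for d in range(1, p):
        # dividing by d turns an AP with difference d into consecutive
        # residues, i.e. a cyclic interval mod p
        inv = pow(d, -1, p)
        t = {(v * inv) % p for v in s}
        if sum((x + 1) % p in t for x in t) == len(t) - 1:
            return True
    return False

def good_primes(limit):
    """Primes 5 < p <= limit admitting a valid a (brute force over all a)."""
    found = []
    for p in range(7, limit + 1):
        if any(p % q == 0 for q in range(2, p)):
            continue  # composite
        if any(is_ap_permutation([pow(a, e, p) for e in range(p - 4)], p)
               for a in range(2, p)):
            found.append(p)
    return found
```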

(B6) For a real number $a$, let $F_a(x)=\sum_{n\geq 1}n^ae^{2n}x^{n^2}$ for $0\leq x<1$. Find a real number $c$ such that $$ \begin{align*} \lim_{x\to 1^-}F_a(x)e^{-1/(1-x)}&=0 \ \ \ \text{for all $a<c$, and}\\ \lim_{x\to 1^-}F_a(x)e^{-1/(1-x)}&=\infty \ \ \ \text{for all $a>c$.} \end{align*} $$

I was really bored at this point, so I actually decided to try to find the value of $c$, even though it fails my aesthetic test. Also, this is a problem about sum estimation, and my [entire senior thesis](https://web.stanford.edu/~linkewei/pdf/senior-thesis.pdf) was about estimating sums, so I was curious to see if I still had it in me.

The first obvious step is the substitution $t=1/(1-x)$, so that we have a better parametrization of $x\to 1^-$ as well as the limit we're working with. Abusing notation, we have $$F_a(t) e^{-t} = \sum_{n\ge 1}n^a e^{2n-t} (1-1/t)^{n^2}.$$ I'm going to take some liberties and say that because $t\to\infty$ anyway, we might as well convert that last factor into an exponential: $$F_a(t) e^{-t} \approx \sum_{n\ge 1}n^a e^{2n - t-n^2/t}.$$ The exponent is kind of interesting, since we can see that $t + n^2 / t \ge 2n$ by AM-GM. Well, let's just rewrite it as a square: $$F_a(t) e^{-t} \approx \sum_{n\ge 1}n^a e^{-(n-t)^2 / t}.$$ So, which terms in this sum actually matter? If we fix $t$, it feels kind of Gaussian-y, and there we know that we should probably sum over where the weight is above some constant (maybe $e^{-1}$?), so $$F_a(t) e^{-t} \approx \sum_{n\in [t\pm \sqrt{t}]}n^a e^{-(n-t)^2 / t}$$ and whatever, let's just make that last term 1: $$F_a(t) e^{-t} \approx \sum_{n\in [t\pm \sqrt{t}]}n^a e^{-(n-t)^2 / t} \approx \sqrt{t} \cdot t^a$$ So if we don't want this to go to 0 or $\infty$ as $t\to\infty$, we really only have one choice, which is $a=-1/2$.
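To make sure I wasn't fooling myself with all those $\approx$'s, a quick numerical check (the function name is mine), summing in log space to avoid overflow: the quantity should blow up for $a > -1/2$, die for $a < -1/2$, and stabilize at $a = -1/2$.

```python
import math

def G(t, a):
    """F_a(x) * exp(-1/(1-x)) at x = 1 - 1/t, summed in log space."""
    x = 1.0 - 1.0 / t
    # terms beyond the Gaussian window n in [t +- O(sqrt t)] are negligible
    nmax = int(t + 50 * math.sqrt(t))
    logs = [a * math.log(n) + 2 * n + n * n * math.log(x) - t
            for n in range(1, nmax + 1)]
    m = max(logs)
    return math.exp(m) * sum(math.exp(L - m) for L in logs)
```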

(A2) For which real polynomials $p$ does $(p(x) - x)^2 \mid p(p(x)) - x$?

Well, for easier "power" tracking we write $p(x) = x + r(x)$, which turns this into $$r(x)^2 | r(x+r(x)) + r(x).$$ We naturally expand the right part using a Taylor expansion (since we're taking mod $r(x)$): $$ \begin{align*} r(x+r(x)) &= r(x) + r'(x)r(x) + \frac{r''(x)r(x)^2}{2} + \cdots \\ &\equiv r(x) + r'(x)r(x) \pmod {r(x)^2} \end{align*} $$ so we have $r(x)^2 | (r'(x)+2)r(x)$, i.e. $r(x) \mid r'(x)+2$, which by comparing degrees cannot happen unless $r$ is constant or $r'(x) = -2$. So $p(x) = \pm x + c$ are the only solutions.
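A quick divisibility check with exact rational polynomial arithmetic (all helper names mine), confirming that $p(x) = \pm x + c$ pass and a non-example fails:

```python
from fractions import Fraction

def pmul(p, q):
    """Multiply coefficient lists (low degree first)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def psub(p, q):
    """Subtract coefficient lists."""
    r = [Fraction(0)] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] += a
    for i, b in enumerate(q):
        r[i] -= b
    return r

def pmod(p, q):
    """Remainder of p mod q; q needs a nonzero leading coefficient."""
    p = [Fraction(c) for c in p]
    while True:
        while p and p[-1] == 0:
            p.pop()
        if len(p) < len(q):
            return p
        c, shift = p[-1] / q[-1], len(p) - len(q)
        for i, b in enumerate(q):
            p[i + shift] -= c * b

def compose(p, q):
    """p(q(x)) via Horner's rule."""
    r = [Fraction(0)]
    for c in reversed(p):
        r = pmul(r, q)
        r[0] += c
    return r

def satisfies(p):
    """Does (p(x) - x)^2 divide p(p(x)) - x?"""
    x = [Fraction(0), Fraction(1)]
    r = psub(p, x)               # r(x) = p(x) - x
    num = psub(compose(p, p), x)
    den = pmul(r, r)
    while den and den[-1] == 0:
        den.pop()
    if not den:                  # p(x) = x: both sides vanish
        return all(c == 0 for c in num)
    return all(c == 0 for c in pmod(num, den))
```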

Epilogue

At the end of the day, the Putnam is a 6-hour contest with 12 problems. This means that the test is (typically) designed so that each problem is fast to solve. Often, this means that the first thing you think of will work, and if you don't think of it you'll spend a while looking for the trick. Consistency across topics and speed end up playing a much bigger role than IMO-style deep problem solving.

Still, there are some interesting problems from time to time, and the style gets heavily switched up because the problem committees rotate every 3 years. Putnam problems also tend not to be used for IMO training, so you should have a look at the Putnam archives (especially old ones!) to see if you can find some interesting problems.
