Dynamic Programming in Math Problems

(Yu Peng here)

For this post, I'll go through a common idea in competitive programming - dynamic programming - and some of my thought processes when I apply it to problems.

Let's start with a recent problem:

Problem

(2022 IMO P6) Let $n$ be a positive integer. A Nordic square is an $n \times n$ board containing all the integers from $1$ to $n^2$ so that each cell contains exactly one number. Two different cells are considered adjacent if they share a common side. Every cell that is adjacent only to cells containing larger numbers is called a valley. An uphill path is a sequence of one or more cells such that:

(i) the first cell in the sequence is a valley, (ii) each subsequent cell in the sequence is adjacent to the previous cell, and (iii) the numbers written in the cells in the sequence are in increasing order.

Find, as a function of $n$, the smallest possible total number of uphill paths in a Nordic square.

I won't be going through the solution to this problem, just some of the initial thoughts I had on reading it. It's just to give a sense of what this post will be about.

My first thought when reading the problem was, given a board configuration, how do I even count the number of total uphill paths? Since it seems fairly non-trivial to count this quantity, it makes sense to consider this first rather than jumping straight into a lower bound / construction, to get a better understanding of the problem.

Anyway, it's not hard to come up with an algorithm that counts the number of uphill paths - just enumerate all possible paths and check if they are uphill. But obviously, this is extremely useless as it doesn't give us any additional insight into the problem. We want an algorithm that is more efficient and also gives us some idea about how the number of uphill paths behaves, and a dynamic programming approach is quite natural for this.
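To make this concrete, here is a minimal sketch of what such a DP could look like (Python; the function name and the $2 \times 2$ example are my own, and this is just one natural way to set it up): process the cells in increasing order of the numbers they contain, and count the uphill paths ending at each cell in terms of its smaller neighbours.

```python
def count_uphill_paths(board):
    """Count uphill paths in an n x n Nordic square (a 2D list of ints)."""
    n = len(board)

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < n and 0 <= c + dc < n:
                yield r + dr, c + dc

    # Process cells in increasing order of the number they contain.
    cells = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: board[rc[0]][rc[1]])

    paths = {}  # cell -> number of uphill paths ending at that cell
    for r, c in cells:
        smaller = [d for d in neighbours(r, c) if board[d[0]][d[1]] < board[r][c]]
        is_valley = not smaller  # adjacent only to larger numbers
        # An uphill path ending here is either the single-cell path (if this
        # cell is a valley) or an uphill path ending at a smaller neighbour,
        # extended by one step.
        paths[(r, c)] = (1 if is_valley else 0) + sum(paths[d] for d in smaller)
    return sum(paths.values())


print(count_uphill_paths([[1, 2],
                          [4, 3]]))  # 5 uphill paths for this 2x2 board
```

Each cell only looks at its smaller neighbours, which have already been processed, so the whole count takes $O(n^2 \log n)$ time (dominated by the sort) - and more importantly, it expresses the total as a sum of per-cell contributions.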

Dynamic Programming (DP)

If you're wondering why it's named like that: from Wikipedia, "The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive".

Let's look at another problem now, a famous result by Erdős and Szekeres.

Problem 1

Given a sequence of $mn + 1$ distinct real numbers, show that there exists an increasing subsequence of length $n + 1$ or a decreasing subsequence of length $m + 1$.

According to Wikipedia, "a subsequence of a given sequence is a sequence that can be derived from the given sequence by deleting some or no elements without changing the order of the remaining elements."

So again, when looking at a problem, one thing to think about is: how do I verify this result empirically? So basically, given a sequence of real numbers, how do I find an increasing sequence of length $n + 1$ or a decreasing sequence of length $m + 1$? (subsequence is too long to type, so I'll just say sequence)

Instead of just verifying this result, why not go one step further - how do we find the longest increasing sequence or the longest decreasing sequence?

Suppose the longest increasing sequence is $b_1, b_2, \dots, b_k$ (these are the values, not their indices in the sequence - I'm just lazy). Then the longest increasing sequence ending at $b_{k - 1}$ must have length exactly $k - 1$: it is at least $k - 1$ because of $b_1, \dots, b_{k - 1}$, and if it were any longer, appending $b_k$ would give an increasing sequence longer than $k$. Hence, we see that there is some optimal substructure in the problem - the longest increasing sequence ending at each point depends on the longest increasing sequences ending at earlier points.

Let's define this rigorously. Let the entire sequence be $a_1, \dots, a_N$, where $N = mn + 1$. Define $LIS(i)$ and $LDS(i)$ to be the lengths of the longest increasing sequence and the longest decreasing sequence ending at $a_i$ respectively. It's easy to see the recurrence relations:

$$\begin{aligned} LIS(i) &= 1 + \max_{\substack{1 \leq j < i \\ a_j < a_i}} LIS(j) \\ LDS(i) &= 1 + \max_{\substack{1 \leq j < i \\ a_j > a_i}} LDS(j) \end{aligned}$$

where an empty maximum is taken to be $0$, so in particular $LIS(1) = LDS(1) = 1$.
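In code, this is a short quadratic DP (a Python sketch; the function name and the example sequence are my own):

```python
def lis_lds(a):
    """For a sequence of distinct numbers, return lists LIS and LDS where
    LIS[i] / LDS[i] are the lengths of the longest increasing / decreasing
    sequences ending at a[i] (0-indexed here)."""
    N = len(a)
    LIS = [1] * N
    LDS = [1] * N
    for i in range(N):
        for j in range(i):
            if a[j] < a[i]:
                LIS[i] = max(LIS[i], LIS[j] + 1)
            else:  # the values are distinct, so a[j] > a[i]
                LDS[i] = max(LDS[i], LDS[j] + 1)
    return LIS, LDS


# Example with m = n = 2, so mn + 1 = 5 terms.
LIS, LDS = lis_lds([3, 1, 4, 1.5, 2])
print(list(zip(LIS, LDS)))
```

For this example the printed pairs are $(1,1), (1,2), (2,1), (2,2), (3,2)$ - all distinct, and the last term has $LIS = 3 = n + 1$. Keep those pairs in mind.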

Ok, so now we have some structure relating the LIS and LDS ending at different points, and hopefully we can use this to show that some $LIS(i) \geq n + 1$ or some $LDS(i) \geq m + 1$. In other words, we want to show that the LIS and LDS values can't both stay small, so something must force them to increase at some point.

So how do they increase? An easy observation is that for $j < i$, if $a_j < a_i$ then $LIS(j) < LIS(i)$, and if $a_j > a_i$ then $LDS(j) < LDS(i)$. Since the numbers are distinct, exactly one of these two cases always holds, so one of the LIS or LDS values is forced to increase!

More specifically, this means that the ordered pairs $(LIS(i), LDS(i))$ are all distinct. Now this means that $LIS(i), LDS(i)$ can't always be small! Indeed, if every $LIS(i) \leq n$ and every $LDS(i) \leq m$, the pairs could take at most $mn$ distinct values, but there are $mn + 1$ of them - contradiction. So some $LIS(i) \geq n + 1$ or some $LDS(i) \geq m + 1$, as required.

So as a summary, when asked to prove the existence of something, or to maximise / minimise some value, it can be helpful to think about how to verify the statement empirically with some algorithm, as this gives insight into structures and substructures in the problem.

Ok, now let's try another problem that's slightly more complicated.

Problem 2

(All-Russian 2014) In a country with $n$ cities, an express train runs both ways between every two cities. For each train, the ticket price is the same in both directions, but no two trains have the same price. Prove that a traveller can choose a starting city, leave it, and then ride $n - 1$ trains in succession so that each fare is smaller than the previous one. (The traveller may enter the same city several times.)

Let's just take the obvious graph theory interpretation: cities are vertices, and each train is an edge weighted by its fare. So once again, how do we verify this result? Could we try to find an algorithm that constructs such a path of length $n - 1$ given that it exists, or try to determine the longest such path in a given graph?

Well, obviously we'll take the second approach since that's what this post is about, but in general, finding an algorithm to compute something is usually easier than finding an algorithm to construct something, and we should try the easier approach first to get a sense of the problem.

Alright, so we want to determine the longest path of decreasing train fares in the graph. But how? In the previous problem, we were given a sequence, which already has some sort of ordering, so we got an iterative algorithm just by going through the terms in order. In general, if we want to take a dynamic programming approach (the whole idea being to build the answer from substructures), we need some sort of order in which to do things. But can we even find an order in this case?

Looking back to the previous example, what we were doing was effectively extending the increasing and decreasing sequences. So for this case, we can try to find some way to extend our paths of decreasing train fares. Then if we want to extend a path of decreasing train fares, what we need is an edge with a lower fare...

So let's just look at the edges in decreasing order of train fares. We'll add the edges in this order and see how existing paths get extended. Consider adding a new edge between cities $a, b$. The only paths that get extended are those ending at $a$ or $b$, and any such path can be extended, since the edge we are adding has a lower fare than every edge already added.

Thus, it makes sense to define $path(i)$ to be the length of the longest path ending at city $i$ given the current set of edges that have been added. So when the graph is empty, $path(i) = 0$ for all $i$.

Now when we add an edge between cities $a, b$, we simply change $path(a)$ to $\max(path(a), path(b) + 1)$. This is because the longest path ending at city $a$ is either the existing longest path ending there, or the longest path ending at city $b$ extended by the edge $a-b$ so that it now ends at $a$. Similarly, $path(b)$ becomes $\max(path(b), path(a) + 1)$, where both updates use the values from before the edge was added (the new edge can only be used once).

Now, we can see that the values of $path(i)$ never decrease as we add edges, so we want to make use of this to show that we have some $path(i) \geq n - 1$ after adding all the edges.

A fairly clear monovariant is that $path(a) + path(b)$ increases by at least $2$ in the above process, since the new values are at least $path(b) + 1$ and $path(a) + 1$ respectively. Therefore, the sum of all $path(i)$ increases by at least $2$ every time we add an edge, so the sum is at least $2 \cdot \binom{n}{2} = n(n - 1)$ at the end, and hence some $path(i) \geq n - 1$, done!
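If you want to watch this argument in action, here is a small simulation sketch (Python; the random fare assignment and all names are my own) of the edge-adding process described above:

```python
import itertools
import random


def longest_decreasing_path_lengths(n, fare):
    """Add the edges of the complete graph on n cities in decreasing order of
    fare, maintaining path[i] = length in edges of the longest decreasing-fare
    path ending at city i among the edges added so far."""
    path = [0] * n
    edges = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: fare[e], reverse=True)
    for a, b in edges:
        # Both updates use the values from before this edge was added.
        path[a], path[b] = max(path[a], path[b] + 1), max(path[b], path[a] + 1)
    return path


n = 6
# Assign distinct fares to the n(n-1)/2 trains at random.
fare = dict(zip(itertools.combinations(range(n), 2),
                random.sample(range(1, 10**6), n * (n - 1) // 2)))
path = longest_decreasing_path_lengths(n, fare)
print(path, max(path) >= n - 1)  # the second value is always True
```

The final check always comes out `True`, matching the sum argument above, and playing with the final `path` values (or printing them after each edge) is a decent way to start guessing what an equality case might look like.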

For this problem, the key insight was identifying what we wanted (to extend paths of decreasing train fares) and responding accordingly (adding the edges in decreasing order of fare). This is not limited to the idea of DP, or even to algorithms - it's a mindset worth taking in general.

On a side note, this DP approach gives you an idea of how you might try to construct an equality case for the problem where the longest path of decreasing train fares is exactly $n - 1$, so you can think about whether the equality case exists for all $n$, or which $n$ it exists for. Perhaps you could extend this problem to "find the largest $M$ such that there always exists a path of decreasing train fares of length $M$"?

Conclusion

When dealing with a problem, try to explore beyond just solving the problem to gain insight into its structure. Think about things like whether it is easy to verify the result empirically, whether we can generalize the problem, or whether the given result is even tight. DP is just one of the techniques you can use for these purposes.

With these ideas, do try the IMO problem in the introduction - you can probably make some progress. Don't be too scared by the P6 label!

(Maybe even try IO if you're interested)
