Single Round Match 814 Editorials



We had a thematic SRM: most of the problems were about steps, hops and jumps in some levels of computer games.

StepHopJumpEasy

We are given a level: a line that consists of safe blocks and traps. We need to produce one way to traverse it.

Clearly, if there are three traps in a row anywhere, we cannot traverse the level - we cannot jump that far.

In all other cases a solution exists.

One way to construct the solution is to explicitly step on each of the safe blocks: make a step if the next block is safe (“-”), a hop if there is one trap (“*-”), and a jump if there are two traps (“**-”) immediately to your right.


public String solve(String level) {
    String answer = "";
    int where = 0;
    while (where + 1 < level.length()) {
        if (level.charAt(where + 1) == '-') {
            answer += 'S'; where += 1; continue; // step onto the next block
        }
        if (where + 2 < level.length() && level.charAt(where + 2) == '-') {
            answer += 'H'; where += 2; continue; // hop over one trap
        }
        if (where + 3 < level.length() && level.charAt(where + 3) == '-') {
            answer += 'J'; where += 3; continue; // jump over two traps
        }
        return ""; // three traps in a row: the level cannot be traversed
    }
    return answer;
}

Another way that also works: always take the longest valid movement - jump if you can, hop if you cannot jump but can hop, and step if neither longer option works.
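For illustration, here is a minimal sketch of that greedy variant (the method name solveGreedy is ours, not part of the problem):

public String solveGreedy(String level) {
    String answer = "";
    int where = 0;
    while (where + 1 < level.length()) {
        int move = 0;
        // prefer the longest movement: jump (3), then hop (2), then step (1)
        for (int s = 3; s >= 1; --s) {
            if (where + s < level.length() && level.charAt(where + s) == '-') {
                move = s;
                break;
            }
        }
        if (move == 0) return ""; // three traps in a row
        answer += "SHJ".charAt(move - 1);
        where += move;
    }
    return answer;
}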

StepHopJumpMedium

We have the same setting, but now we are asked to count the number of ways in which a level can be traversed. This is a good introductory problem for dynamic programming.

Let dp[n] be the number of ways in which we can reach block number n. For the start we have dp[0] = 1. Whenever a block x contains a trap, it cannot be reached at all, and thus dp[x] = 0.

In all other cases, we can argue as follows:

  • Cell x was reached by a step, a hop, or a jump.

  • If it was reached by a step, it was reached from cell x-1. The number of ways to reach cell x by a step is therefore equal to the number of ways to reach cell x-1.

  • The same for x-2 and a hop, and for x-3 and a jump.

  • The three groups of solutions we just described are clearly disjoint - e.g., no solution can end in a step and at the same time end in a hop.

  • Thus, in this case dp[x] = dp[x-1] + dp[x-2] + dp[x-3].


public int countWays(String level) {
    int N = level.length();
    int[] dp = new int[N];
    dp[0] = 1;
    for (int n = 1; n < N; ++n) {
        if (level.charAt(n) == '*') continue; // traps cannot be reached
        for (int s = 1; s <= 3; ++s) {
            if (n >= s && level.charAt(n-s) == '-') dp[n] += dp[n-s];
            if (dp[n] >= 1_000_000_007) dp[n] -= 1_000_000_007; // keep the count modulo 10^9 + 7
        }
    }
    return dp[N-1];
}

StepLeapSurviveTraps

In this task we slightly modify the setting. We still have a 1-dimensional level of the game, but now we are allowed to make longer jumps: any distance from 1 to J is allowed. We can also jump to the left, if we want to. All cells contain traps of different strengths. Our goal is to get ourselves from the left end to the right end while suffering the least total amount of damage.

Clearly, each solution starts with a movement to the right. We now claim that an optimal solution never moves left. Why? Consider the first movement to the left, from block B to block C. The previous movement was a movement to the right, from some block A to block B. We can replace these two movements by a single movement from A to C (note that its length is strictly less than J), or drop both movements entirely if A = C. Either way we get a strictly better solution, as we no longer step onto B.

Thus, we now have a problem in which we only jump to the right. Here, we can again use dynamic programming. Let b[x] be the minimum total damage with which we can reach block x. To start, we have b[0] = 0, as we take no damage by just standing at the start. In general, block x must be reached directly from one of the previous J blocks (or fewer, if we are still close to the beginning). Thus, b[x] = T[x] + min( b[y] : max(0, x-J) <= y < x ), where T[x] is the damage of the trap on block x.

A straightforward implementation of this recurrence takes O(NJ) time to evaluate, which is too slow. Luckily, there are several easy ways to speed up the computation.
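For concreteness, a minimal sketch of the straightforward implementation (it assumes the damage values T[0..N] have already been generated, with T[0] = 0, as in the full solution below):

public long minDamageSlow(int N, int J, int[] T) {
    long[] dp = new long[N + 1]; // dp[0] = 0: no damage at the start
    for (int n = 1; n <= N; ++n) {
        // block n is reached directly from one of the previous J blocks
        long best = Long.MAX_VALUE;
        for (int y = Math.max(0, n - J); y < n; ++y) best = Math.min(best, dp[y]);
        dp[n] = T[n] + best;
    }
    return dp[N];
}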

The more obvious improvement is to use a better data structure. When computing b[x], we need to know the minimum of the set { b[x-1], …, b[x-J] }. We can store these values in a multiset. Each time we are computing a new b[x], we query the multiset in O(log J) time for the smallest value it contains. Then, we update the multiset to be ready for the next calculation. This involves two changes: first, deleting b[x-J], which can no longer be used, and second, adding the value b[x] we just computed. These updates also take O(log J) each, so the total time complexity of this approach is O(N log J).


public long minDamage(int N, int J, int seed, int M) {
    // generate the trap damage values T[1..N] from the given seed
    int[] T = new int[N+1];
    T[0] = 0;
    long state = seed;
    for (int n = 1; n <= N; ++n) {
        state = (state * 1103515245 + 12345) % (1L << 31);
        T[n] = (int)(1 + (state % M));
    }
    long[] dp = new long[N+1];
    dp[0] = 0;
    // Java has no multiset, so we keep value -> multiplicity in a TreeMap (java.util.TreeMap)
    TreeMap<Long, Integer> options = new TreeMap<Long, Integer>();
    options.put(0L, 1);
    for (int n = 1; n <= N; ++n) {
        // remove the option that is now out of jumping range
        int x = n - J - 1;
        if (x >= 0) {
            int cnt = options.get(dp[x]);
            if (cnt == 1) {
                options.remove(dp[x]);
            } else {
                options.put(dp[x], cnt - 1);
            }
        }
        // find the best option we currently have
        dp[n] = T[n] + options.firstKey();
        // add it to the options available later
        if (options.containsKey(dp[n])) {
            int cnt = options.get(dp[n]);
            options.put(dp[n], cnt + 1);
        } else {
            options.put(dp[n], 1);
        }
    }
    return dp[N];
}

There is also a different approach that is more tricky but leads to an asymptotically faster solution: instead of a multiset we can cleverly use a double-ended queue. We’ll show this approach on an example. Imagine that J = 6 and that b[1] through b[6] are 10, 8, 14, 13, 13, 27. What can we tell about these? We will never use b[1] in any future solution. Why? Because b[2] is smaller and it will be available longer - b[1] can only be used when computing b[7], b[2] will be usable for b[7] and b[8]. For the same reason, b[3] and b[4] are also useless because b[5] is at least as good and it will be usable longer.

Thus, let’s look at the last J values we computed and let’s call a value a candidate if it is smaller than all the values that follow it. Once a value stops being a candidate, it is never needed again. So, instead of the whole multiset of the last J values, we only need to keep the candidates. And as the candidates are already ordered by size (do you see how that follows from their definition?), we don’t need to sort them explicitly: we can simply store them in a deque.

Each new value b[x] is now computed and processed as follows:

  • look at the front of the deque to find the best candidate

  • jump from there to x to get the value b[x]

  • if the left end of the deque is now too old, remove it

  • while the right end of the deque is greater or equal to b[x], remove it (as that value just stopped being a candidate)

  • insert b[x] as the new last candidate into the deque (on the right)

This solution runs in O(N) time.
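A minimal sketch of this deque solution, assuming, as before, the precomputed damage array T[0..N] with T[0] = 0 (needs java.util.ArrayDeque and java.util.Deque):

public long minDamageDeque(int N, int J, int[] T) {
    long[] dp = new long[N + 1];
    // the deque holds indices of candidates; their dp values strictly
    // increase from front to back, so the front is always the cheapest one
    Deque<Integer> candidates = new ArrayDeque<>();
    candidates.addLast(0);
    for (int n = 1; n <= N; ++n) {
        // jump to n from the best candidate
        dp[n] = T[n] + dp[candidates.peekFirst()];
        // the front is now too old to be used when computing dp[n+1]
        if (candidates.peekFirst() <= n - J) candidates.pollFirst();
        // values >= dp[n] at the back just stopped being candidates
        while (!candidates.isEmpty() && dp[candidates.peekLast()] >= dp[n]) candidates.pollLast();
        candidates.addLast(n);
    }
    return dp[N];
}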

DevuAndEqualizingLCM

This was the only problem that did not include any jumps. Instead, we are modifying a sequence of integers B to have the same LCM as another sequence of integers A.

If any B[i] does not divide LCM(A), we must change it. If we have at least one such B[i], it is sufficient to change precisely all these B[i] and nothing else: we can change each of them into LCM(A). After this change clearly LCM(B) = LCM(A).

If all B[i] divide LCM(A), we know that LCM(B) divides LCM(A). We can now do the same test in the other direction: if each A[i] divides LCM(B), we know that LCM(A) also divides LCM(B), and therefore they must be equal and no changes are needed.

If LCM(B) divides but does not equal LCM(A), we need to change at least one B[i]. But, on the other hand, changing exactly one B[i] to LCM(A) clearly makes LCM(B) equal LCM(A), so that’s the optimal solution in this case.
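Putting the three cases together, a sketch of the top-level logic could look as follows. The helper dividesLCM(b, A), which tests whether b divides LCM(A), is implemented below; the method name and the return value (the minimum number of changed elements) are our assumptions based on the discussion above.

public int minChanges(long[] A, long[] B) {
    int bad = 0;
    for (long b : B) if (!dividesLCM(b, A)) ++bad;
    if (bad > 0) return bad; // change exactly the offending B[i] to LCM(A)
    for (long a : A) if (!dividesLCM(a, B)) return 1; // LCM(B) is a proper divisor of LCM(A)
    return 0; // the two LCMs are already equal
}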

All that remains is to describe how to do the check we just used multiple times. How can we check whether a specific value b divides LCM(A)? Of course, the trick is to do this without having to actually compute LCM(A), which can be quite huge.

One possible approach is based on computing the prime factorization of b. The exponent of each prime p in LCM(A) is the maximum of its exponents over all elements of A. Thus, b divides LCM(A) if and only if for each prime power p^k that divides b, some element of A is also divisible by p^k. We can check this by iterating over all elements of A and, for each prime that divides b, looking at the power of that prime in our current element of A.
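A sketch of this factorization-based check (the helper names are ours):

public boolean dividesLcmByFactoring(long b, long[] A) {
    long rest = b;
    for (long p = 2; p * p <= rest; ++p) {
        if (rest % p != 0) continue;
        int need = 0;
        while (rest % p == 0) { rest /= p; ++need; } // p^need divides b exactly
        if (!someElementHasPower(A, p, need)) return false;
    }
    // anything left over is a single prime dividing b exactly once
    return rest == 1 || someElementHasPower(A, rest, 1);
}

// does some A[i] contain at least the need-th power of the prime p?
private boolean someElementHasPower(long[] A, long p, int need) {
    for (long a : A) {
        int have = 0;
        while (a % p == 0 && have < need) { a /= p; ++have; }
        if (have >= need) return true;
    }
    return false;
}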

Another (asymptotically much faster) approach can be based on the computation of the greatest common divisor. Again, we want to check whether LCM(A) is a multiple of some specific value b. Let C[i] = gcd( A[i], b ). Then, b divides LCM(A) if and only if b divides LCM(C). (Proof: from each A[i] we discarded only the parts that don't matter: primes that don't appear in b, and too high powers of primes that do appear in b.)

On the other hand, each C[i] divides b and therefore LCM(C) divides b. This means that we can actually compute LCM(C) without getting an overflow, and all that remains is to check whether LCM(C) equals b.
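In code, the whole gcd-based check might look like this (again, the method name is ours):

public boolean dividesLCM(long b, long[] A) {
    long lcmC = 1; // the running LCM of the values C[i] = gcd(A[i], b)
    for (long a : A) {
        long c = gcd(a, b);
        lcmC = lcmC / gcd(lcmC, c) * c; // stays a divisor of b, so no overflow
    }
    return lcmC == b;
}

private long gcd(long x, long y) { return y == 0 ? x : gcd(y, x % y); }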

StepHopJumpConstruct

There are two fairly obvious observations we can make about constructions like this one.  First, if you give the player multiple independent options (connect several smaller mazes "in parallel"), the total number of options is the sum of options for the individual mazes.  Second, if you give the player several independent mazes to solve sequentially, one after another (connect them "in series"), the total number of options multiplies: if I have A options to solve the first maze and then B options to solve the second one, I can solve the whole thing in A*B different ways.

In any setting with these two properties it is possible to construct mazes with an arbitrary number of solutions. One possible construction: use two paths in parallel to get a maze with 2 solutions, connect k copies of it in series to get a maze with 2^k solutions, and then connect such mazes in parallel to get any number of solutions. (Write N in binary, collect the distinct powers of two that sum up to N, and connect mazes with those numbers of solutions in parallel to build a maze with N solutions.)

This construction always works and yields mazes of decently small size (usually with area roughly proportional to log^2 N), but our problem intentionally tried to push the limits somewhat lower: fitting this type of solution into a 50 by 50 grid was (probably) impossible.

Another similar approach is to construct a maze with N solutions from a maze with floor(N/2) solutions by multiplying the number of solutions by 2 and then adding 1 if N is odd.  Depending on the details of the specific construction, this may be possible to do in O(log N) space.
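To make the order of operations concrete, here is a tiny illustrative sketch (names are ours) that reads off the construction plan for a given N; the actual maze-building is omitted:

// requires java.util.ArrayList, java.util.Collections, java.util.List
List<String> plan = new ArrayList<>();
for (long n = N; n > 1; n /= 2) {
    if (n % 2 == 1) plan.add("add one extra path (+1 solution)");
    plan.add("double the number of solutions");
}
Collections.reverse(plan); // construction order, starting from a maze with 1 solution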

Yet another approach, which compresses the space used by the first construction, is to build just one chain of O(log N) small mazes with 2 solutions each, and then to obtain N solutions by adding paths from some points on this chain to the end of the maze. In our task, this fits into the 50 by 50 grid, but we have to be very efficient with space. Ideally, we want to spend only one extra row and column on each of the "times two" mazes, and we need to make sure that the adjacent paths serving as shortcuts to the end don't interfere with each other. For a solution along these lines, take a look at the code submitted by ecnerwal during the contest.

In the rest of this writeup I will describe a different type of solution.

In our problem, we can easily make sure that different parts of a maze are independent if we leave three empty rows (i.e., rows consisting of traps) between them - recall that a jump can cross at most two traps in a row. If we, for example, restrict ourselves to mazes with at most two rows each, we can easily fit ten such mazes into 50 rows.  Is it possible to solve our task this way?

Given a maze, we can easily count the number of ways in which it can be solved (using dynamic programming). We can now start by implementing this DP, generating some small random 2xC mazes and calculating the number of solutions for each of them. What we'll get is a collection of numbers that has no very large gaps, and in particular contains all possible small values.

With some intuition for knapsack-style (and more specifically coin-change-style) problems, we should now expect that such a dense set of "coins" should be able to "pay" any sum using just a few coins. And this is indeed the case. Even better, we don't even have to look for the optimal way of paying, we can simply use the naive greedy approach.

The best part is that we can easily test this. Once we have generated enough random mazes, we can spend a few seconds on actually running the greedy algorithm for paying each sum between 1 and 10^9, and make sure that we can get each of those values as a sum of a small enough number of coins.
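A sketch of that verification (it assumes an array coins[] with the solution counts of the generated mazes, sorted in increasing order and containing the value 1, so the greedy loop always terminates):

public int greedyCoinsNeeded(long value, long[] coins) {
    int used = 0;
    int i = coins.length - 1;
    while (value > 0) {
        while (coins[i] > value) --i; // the largest coin that still fits
        value -= coins[i];
        ++used;
    }
    return used;
}

Running this for every value between 1 and 10^9 and taking the maximum of the results confirms the claim about the number of coins needed.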

The reference solution is written using this approach, with one extra (fairly unnecessary) optimization to make its code short: after generating the random mazes we remembered just the mazes for all small answers (1 to 100) and then we took a sequence of additional mazes such that the next one has always roughly 1.1 times the number of solutions of the previous one. We have verified that the set of “coins” constructed this way can greedily “pay” each value up to 10^9 using at most 8 coins.

Below is one possible output of the reference solution, constructing a maze with W = 12741 solutions using three small mazes with 11717 + 1007 + 17 solutions. For better readability, the safe blocks are shown as hash signs ('#') and the traps as periods ('.').


#..##.#.###########..............................
...##.#############..#..#..#..#..#..#..#..#..#..#
#................................................
.................................................
................................................#
#..##########.#..................................
...#######.##.#..#.#.#..#..#..#..#..#..#..#..#..#
#................................................
.................................................
................................................#
#..#####.........................................
...####.#..#.#.#..#..#..#..#..#..#..#..#..#..#..#
#................................................
.................................................
................................................#
................................................#
