If you want a ticket to the Finals, it all comes down to Round 3! With only two and a half hours and four problems, you have to know what you can solve, and solve it fast. The competitors are among the best in the world, and there is little room for error!
The round opened with Zillionim, an interactive coin-taking game in which it was far from trivial to outsmart even a randomly playing AI. Pancake Pyramid was the obligatory pancake problem — actually, we almost didn't have one at all this year! It was challenging, but approachable with more "standard" techniques. Datacenter Duplex and Napkin Folding were more unusual, and the latter in particular was a very difficult geometry problem. (We wonder how many folded pieces of paper we created on our contestants' desks!) Napkin Folding was worth a lot of points, but not enough to render the other problems unnecessary.
After half an hour, each of the first three problems had been fully solved at least once. Contestants vied to be the first to finish A + B + C + the first test set of D, knowing that the second test set of D was a real monster that probably wouldn't get 25 solves. ACRushTC was the first to hit 61 points, at 1:37:35, followed by rng..58 with 1:41:03 and our longtime defending champion Gennady.Korotkevich with 1:41:47. The Code Jam staff eagerly watched to see if someone would fully solve D, but alas, nobody did. That made 61 (or a fast enough 57) the (unofficial) advancement bar.
If you ranked high enough, we will be in touch about the Finals. As always, though, making it to Round 3 is a major accomplishment, and you will soon have the shirt to prove it! Even if you just watched the contest, preparing for next year, you have our thanks and our best wishes for success in 2020. In the meantime, though, get ready for the Finals on August 9!
Zillionim: Written by Pablo Heiber and Igor Naverniouk. Prepared by Pablo Heiber.
Pancake Pyramid: Written by Andy Huang. Prepared by John Dethridge.
Datacenter Duplex: Written by Pablo Heiber. Prepared by Jonathan Irvin Gunawan and Micah Stairs.
Napkin Folding: Written by Pablo Heiber. Prepared by Timothy Buzzelli.
Solutions and other problem preparation and review by David Arthur, Liang Bai, Darcy Best, Timothy Buzzelli, Jonathan Irvin Gunawan, Md Mahbubul Hasan, Igor Naverniouk, Trung Thanh Nguyen, Pi-Hsun Shih, Micah Stairs, Ian Tullis and Tony Wong.
Zillionim is a turn-based game for two players. Initially, 10^12 coins are arranged end-to-end in a single line, numbered from 1 to 10^12 from left to right. During a turn, a player must select 10^10 consecutive coins and remove them. Two coins that were not originally consecutive do not become consecutive even if all of the coins in between them are removed.
On their turn, a player makes a valid move if possible, and then it is their opponent's turn. If a player cannot make a valid move on their turn, they lose the game (and the opponent wins the game).
Because our engineers are still hard at work training our machine learning model to play Zillionim, we have created a simple AI that plays Zillionim by making random moves. The AI always gets the first turn. On each of the AI's turns, the AI determines all valid moves and chooses one of them uniformly at random.
Can you beat this AI... at least most of the time?
This is an interactive problem. You should make sure you have read the information in the Interactive Problems section of our FAQ.
Initially, your program should read a single line containing two integers T, the number of test cases, and W, the minimum number of games you need to win for your solution to be considered correct. Then, you need to process T test cases, each of which is a single game of Zillionim.
Each test case is processed by making exchanges with the judge until one player wins the game. For each exchange, the judge first outputs a single line with a single integer P, to be interpreted as follows:
If 1 ≤ P ≤ 10^12 - 10^10 + 1, the AI has made a move removing coins numbered P, P + 1, ..., P + 10^10 - 1, and it is now your turn.
If P = -2, your last move caused you to win the current game.
If P = -3, the AI has made a move that caused it to win the current game. Notice that in this case, the judge does not send you the AI's last move.
If P = -1, the last information you sent to the judge was malformed data or an invalid move (out of range or attempting to remove coins that were not there), meaning that you will get a Wrong Answer verdict for not playing correctly (more below).
After receiving a positive integer P, you should send back a single line with a positive integer Q (1 ≤ Q ≤ 10^12 - 10^10 + 1) representing that you are removing coins numbered Q, Q + 1, ..., Q + 10^10 - 1. Each of these coins must not have been previously removed during the current game.
After the judge sends a -2 or a -3, if it was the last game, the judge will terminate and so should your program. Otherwise, the judge will proceed to send data corresponding to the first exchange of the next game. The judge will not check how many games you have won or lost until all games have been processed correctly. For example, if you win T - 1 games and then send malformed data during the last game, you will receive a Wrong Answer verdict, regardless of the value of W.
After receiving a -1, your program should terminate to receive a Wrong Answer verdict. If your program continues to wait for the judge after receiving a -1, your program will time out, resulting in a Time Limit Exceeded error. Notice that it is your responsibility to have your program exit normally and within the time limit to receive a Wrong Answer verdict instead of a Runtime Error or Time Limit Exceeded.
The seed for the random generator is predetermined (and is different) for each game. This means that two submissions that make the exact same sequence of moves in a given game will receive the exact same sequence of moves from the AI for that game. It also means the play of the AI in a game does not depend, even in the pseudo-random generation sense, on the plays made in previous games within the same test set.
Time limit: 50 seconds per test set.
Memory limit: 1GB.
T = 500.
-3 ≤ P ≤ 10^12 - 10^10 + 1.
P ≠ 0.
P represents a valid play or valid information about the game's status, as explained above.
Test set 1: W = 300.
Test set 2: W = 475.
Test set 3: W = 499.
You can use this testing tool to test locally or on our platform. To test locally, you will need to run the tool in parallel with your code; you can use our interactive runner for that. For more information, read the instructions in comments in that file, and also check out the Interactive Problems section of the FAQ.
Instructions for the testing tool are included in comments within the tool. We encourage you to add your own test cases. Please be advised that although the testing tool is intended to simulate the judging system, it is NOT the real judging system and might behave differently. If your code passes the testing tool but fails the real judge, please check the Coding section of the FAQ to make sure that you are using the same compiler as us.
For simplicity, the following interaction uses 50 coins in total instead of 10^12, and each move removes 10 consecutive coins instead of 10^10. The rules are otherwise the same.
t, w = readline_int_list()  // reads 500 into t and 300 into w
p = readline_int()          // reads 23 into p; this is the beginning of the first game.
                            //   The AI has taken coins 23 through 32, inclusive.
printline 38 to stdout      // we decide to take coins 38 through 47, inclusive
flush stdout
p = readline_int()          // reads 3 into p. The AI has taken coins 3 through 12, inclusive.
printline 13 to stdout      // we decide to take coins 13 through 22, inclusive
                            //   (and this is our only remaining move!)
flush stdout
p = readline_int()          // reads -2 into p. We won the first game since the AI had no move.
p = readline_int()          // reads 32 into p; this is the beginning of the second game.
                            //   The AI has taken coins 32 through 41, inclusive.
printline 13 to stdout      // we decide to take coins 13 through 22, inclusive
flush stdout
p = readline_int()          // reads -3 into p. We don't know the AI's move, but it left us
                            //   with no valid move, so we lost the second game.
p = readline_int()          // reads 10 into p; this is the beginning of the third game.
                            //   The AI has taken coins 10 through 19, inclusive.
printline 0 to stdout       // we select an invalid index (coin numbering starts at 1!)
flush stdout
p = readline_int()          // reads -1 into p -- we have made a mistake!
exit                        // exits to avoid an ambiguous TLE error
You have just finished cooking for some diners at the Infinite House of Pancakes. There are S stacks of pancakes in all, and you have arranged them in a line, such that the i-th stack from the left (counting starting from 1) has Pi pancakes.
Your supervisor was about to bring out the stacks to the customers, but then it occurred to her that a picture of the stacks might make for a good advertisement. However, she is worried that there might be too many stacks, so she intends to remove the L leftmost stacks and the R rightmost stacks, where L and R are nonnegative integers such that L + R ≤ S - 3. (Notice that at least 3 stacks of pancakes will remain after the removal.)
Your supervisor also thinks the remaining stacks will look aesthetically pleasing if they have the pyramid property. A sequence of N stacks of heights H1, H2, ... , HN has the pyramid property if there exists an integer j (1 ≤ j ≤ N) such that H1 ≤ H2 ≤ ... ≤ Hj-1 ≤ Hj and Hj ≥ Hj+1 ≥ ... ≥ HN-1 ≥ HN. (It is possible that this sequence might not look much like a typical "pyramid" — a group of stacks of the same size has the pyramid property, and so does a group in which the stack heights are nondecreasing from left to right, among other examples.)
Note that the sequence of stacks remaining after your supervisor removes the L leftmost and R rightmost stacks might not yet have the pyramid property... but you can fix that by adding pancakes to one or more of the stacks! The pyramidification cost of a sequence of stacks is the minimum total number of pancakes that must be added to stacks to give the sequence the pyramid property.
While your supervisor is carefully deciding which values of L and R to choose, you have started to wonder what the sum of the pyramidification costs over all valid choices of L and R is. Compute this sum, modulo the prime 10^9 + 7 (1000000007).
The first line of the input gives the number of test cases, T. T test cases follow. Each begins with one line containing one integer S: the number of stacks of pancakes. Then, there is one more line containing S integers P1, P2, ..., PS. The i-th of these is the number of pancakes in the i-th stack from the left.
For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is the sum of the pyramidification costs over all valid choices of L and R, modulo the prime 10^9 + 7 (1000000007).
Time limit: 30 seconds per test set.
Memory limit: 1GB.
1 ≤ T ≤ 100.
1 ≤ Pi ≤ 10^9, for all i.
Test set 1:
S = 3000, for up to 20 test cases.
3 ≤ S ≤ 500, for all remaining cases.
Test set 2:
S = 10^6, for up to 1 test case.
S = 10^5, for up to 3 test cases.
3 ≤ S ≤ 10000, for all remaining cases.
3
3
2 1 2
5
1 6 2 5 7
4
1000000000 1 1 1000000000
Case #1: 1
Case #2: 16
Case #3: 999999991
In Sample Case #1, your supervisor must choose L = 0 and R = 0, so that is the only scenario you need to consider. The optimal strategy for that scenario is to add a single pancake to the middle stack. Although the resulting sequence of stacks looks flat, notice that it has the pyramid property; in fact, any index will work as the j value.
In Sample Case #2, here are all possible choices of L and R, the corresponding remaining stacks, and the pyramidification cost in each scenario:
L = 0, R = 0: stacks 1 6 2 5 7; cost 5 (raise the third stack to 6 and the fourth to 6).
L = 0, R = 1: stacks 1 6 2 5; cost 3 (raise the third stack to 5).
L = 0, R = 2: stacks 1 6 2; cost 0.
L = 1, R = 0: stacks 6 2 5 7; cost 5 (raise the second stack to 6 and the third to 6).
L = 1, R = 1: stacks 6 2 5; cost 3 (raise the second stack to 5).
L = 2, R = 0: stacks 2 5 7; cost 0.
So the answer is (5 + 3 + 0 + 5 + 3 + 0) modulo (10^9 + 7), which is 16.
In Sample Case #3, we only need to add extra pancakes to create the pyramid property when L = 0 and R = 0. In that case, it is optimal to add 999999999 pancakes to each of the second and third stacks. (We hope the diners are hungry!) So the answer is (999999999 + 999999999) modulo (10^9 + 7) = 999999991.
Two companies, Apricot Rules LLC and Banana Rocks Inc., are sharing the same datacenter. The datacenter is a matrix of R rows and C columns, with each cell containing a single server tower. Each tower contains intellectual property belonging to exactly one of the two companies.
At first, they built walls on the edges between cells assigned to different companies. This allowed orthogonally adjacent cells belonging to the same company to remain connected. Also, two cells x and y are considered connected if x is connected to a cell that is, directly or indirectly, connected to y. With this definition, it was possible that two cells assigned to the same company were not connected, which was unacceptable.
Both companies agreed to build narrow hallways running through cell corners that allow two diagonally adjacent cells to be connected directly. Let us write (i, j) to represent the cell at row i and column j. At most one narrow hallway can be built through any given vertex, which means either (i, j) and (i + 1, j + 1) can be connected, or (i + 1, j) and (i, j + 1) can be connected, or neither pair, but not both. Of course, only hallways between cells assigned to the same company can be built.
Given a matrix where each cell is labeled A or B depending on which company it is assigned to, find a way to add connections through diagonal adjacencies such that all As are connected and all Bs are connected.
The first line of the input gives the number of test cases, T. T test cases follow. Each test case begins with one line containing two integers R and C, the number of rows and columns of the matrix representing the datacenter. Then, there are R more lines containing C characters each. The j-th character on the i-th of these lines is either A or B, indicating which company owns the cell at (i, j).
For each test case, first output one line containing Case #x: y, where x is the test case number (starting from 1) and y is IMPOSSIBLE if there is no way to assign the diagonal connections such that all A cells are connected and all B cells are connected, or POSSIBLE otherwise. Then, if you output POSSIBLE, output R - 1 more lines of C - 1 characters each. These characters must correspond to a valid arrangement as described in the statement above. The j-th character of the i-th of those lines must be \ if cells (i, j) and (i + 1, j + 1) are to be connected, / if cells (i + 1, j) and (i, j + 1) are to be connected, or . if neither pair is to be connected.
1 ≤ T ≤ 100.
Time limit: 20 seconds per test set.
Memory limit: 1GB.
2 ≤ C ≤ 100.
Mi,j = uppercase A or uppercase B, for all i and j.
Mi,j = uppercase A for at least one pair of i and j.
Mi,j = uppercase B for at least one pair of i and j.
Test set 1: 2 ≤ R ≤ 4.
Test set 2: 2 ≤ R ≤ 100.
3
2 2
AB
BA
2 3
AAB
ABB
3 4
BBAA
BABA
BBAA
Case #1: IMPOSSIBLE
Case #2: POSSIBLE
..
Case #3: POSSIBLE
//\
.//
In Sample Case #1, the pair of A cells and the pair of B cells both need to be connected, but since both connections would have to cross the same vertex, at most one of the connections can exist.
In Sample Case #2, the cells are already connected in the required way in the input, so no additional connections are necessary. Note that you can add unnecessary valid connections, so another valid answer would be /. However, \ would be wrong.
In Sample Case #3, there are also multiple solutions, one of which is displayed.
Chalk has been actively traveling the world with his friends taking pictures in all the coolest places. Most recently, he made his way to Europe, where he studied the history of napkin folding. Ever since, Chalk has been collecting a wide variety of napkins to practice the art of napkin folding.
Chalk's napkins can be defined as simple polygons. A simple polygon is a polygon in which no edges intersect except for adjacent edges which meet at their shared vertex. Each vertex of the polygon is on exactly two edges.
Chalk folds his napkins by first drawing a folding pattern on them. A folding pattern is a set of K-1 line segments which are drawn on the napkin. Each line segment connects two points with rational coordinates on the border of the polygon defining the napkin and is fully contained in the polygon. No two line segments in a folding pattern may touch or overlap, except possibly at common endpoints. A folding pattern of K-1 line segments splits the napkin into K polygonal regions. Two points are in the same region if there exists some continuous line (not necessarily straight) between them which does not intersect any edge of the polygon or any line segment in the folding pattern — even at endpoints.
Chalk is only interested in neat folding patterns. A folding pattern is neat if any two regions that are adjacent to the same folding line segment F are symmetric with respect to F. This means that folding the napkin along that line segment would result in the two regions lining up perfectly.
The following picture illustrates a neat folding pattern with K=8 regions.
Chalk has been successfully folding his collection of napkins using neat folding patterns. But he has some napkins in his collection that he has not been able to find a neat folding pattern for. For each of those napkins, Chalk needs your help to find a neat folding pattern with K regions or determine that no such neat folding pattern exists.
The first line of the input gives the number of test cases, T. T test cases follow. Each test case starts with a line containing two integers N and K: the number of points in the polygon defining Chalk's napkin and the number of regions to split the napkin into with a neat folding pattern containing K-1 line segments.
The polygon defining the napkin is represented as a list of the N vertices, as encountered when traveling along the perimeter of the polygon in the clockwise direction, with the first vertex being chosen arbitrarily. The next N lines represent that list. The i-th of these contains two integers Xi and Yi, indicating that the i-th point is located at (Xi, Yi) in two-dimensional space.
For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is POSSIBLE if it is possible to make a neat folding pattern with K regions, and IMPOSSIBLE otherwise. If it is possible to make a neat folding pattern with K regions, output K-1 more lines listing the segments of a neat folding pattern with K regions, in any order. Each line should represent a different segment as Ax Ay Bx By, where (Ax, Ay) and (Bx, By) are the two endpoints of the segment, in any order. Each of Ax, Ay, Bx and By should be in the form N/D, where N and D are positive integers (written with no leading zeroes) sharing no common prime factors, and representing the rational number N/D. There must be no whitespace between N and /, or between / and D.
Time limit: 60 seconds per test set.
Memory limit: 1GB.
1 ≤ T ≤ 100.
3 ≤ N ≤ 200.
1 ≤ Xi ≤ 1000, for all i.
1 ≤ Yi ≤ 1000, for all i.
The N points are given in clockwise order.
No two adjacent edges of the polygon are collinear.
The polygon is a simple polygon with strictly positive area.
No two edges intersect except for adjacent edges at their shared endpoint.
Test set 1: K = 2.
Test set 2: 2 ≤ K ≤ 10.
4
4 2
1 1
1 2
2 2
2 1
3 2
1 1
1 2
2 1
8 2
1 3
3 5
5 5
4 4
7 3
5 1
4 2
3 1
8 2
1 3
3 5
4 4
5 5
7 3
5 1
4 2
3 1
Case #1: POSSIBLE
1/1 2/1 2/1 1/1
Case #2: POSSIBLE
1/1 1/1 3/2 3/2
Case #3: IMPOSSIBLE
Case #4: POSSIBLE
1/1 3/1 7/1 3/1
1
10 8
4 1
3 1
2 2
2 3
1 3
1 4
2 4
3 3
3 2
4 2
Case #1: POSSIBLE
3/1 1/1 4/1 2/1
3/1 1/1 3/1 2/1
2/1 2/1 3/1 2/1
2/1 2/1 3/1 3/1
2/1 3/1 3/1 3/1
2/1 3/1 2/1 4/1
1/1 3/1 2/1 4/1
Note: Sample 2 is not valid for Test set 1. Only Sample 1 will be tested prior to running Test set 1 (the same way samples normally are). Moreover, Sample 2 will not be tested prior to running Test set 2.
For Sample Case #1, a neat folding pattern with K=2 can be drawn using any of
the 4 dashed lines:
For Sample Case #2, a neat folding pattern with K=2 can be drawn as follows:
For Sample Case #3, there are no neat folding patterns:
For Sample Case #4, there are two possible neat folding patterns with K=2:
For the test set 2 sample case, a neat folding pattern with K=8 can be drawn as follows:
There may not be a zillion possible solutions to this problem, but there are more than we can discuss in this analysis! We'll only touch on a few here, some of which work, and some of which don't. You're welcome to share yours in the Code Jam Google Group!
What if we always choose a move uniformly at random from among all available moves, just like the AI does? Intuitively, since the overall number of coins is large relative to the size of any one move, going first or second is not very important (given that in this case, both sides are playing randomly and not using any strategy), and every game is close to a fair coin flip. Suppose that we have a 50% chance of winning a game with this strategy; then, per the binomial distribution, we have only about a 0.0004% chance of winning 300 or more games out of 500. So this approach won't even get us the 1 point for test set 1!
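That 0.0004% figure can be checked with an exact binomial tail computation; a quick sketch (our code):

```python
from math import comb

# Probability of winning at least 300 of 500 independent fair games:
# sum of C(500, k) / 2^500 for k = 300..500.
tail = sum(comb(500, k) for k in range(300, 501)) / 2**500
print(f"{tail:.6%}")  # on the order of 0.0004%
```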
If we were allowed to go first in this game, we could guarantee a win using the following strategy. Imagine a "center line" drawn between the (10^12 / 2)-th and ((10^12 / 2) + 1)-th coins, dividing the coins into two regions of equal size. On the first turn, we could take the group of 10^10 coins bisected by this line — that is, the coins numbered from ((10^12 - 10^10) / 2) + 1 to ((10^12 + 10^10) / 2). Then, after every move by the opponent, we could make the same move, but reflected across this center line. For example, if the opponent were to take the 10^10 coins starting from coin number 2, we would respond by taking the group of 10^10 coins starting from the next-to-last coin and going left. We would always be guaranteed a move, by symmetry, so the opponent would have to eventually lose.
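The reflection arithmetic is easy to get off by one, so here is a sketch written out for the small version from the sample interaction (50 coins, moves of 10); the names N, M and the helpers are ours:

```python
N, M = 50, 10   # total coins and move size (10**12 and 10**10 in the real game)

def mirror(q):
    """Start of the move that mirrors a move starting at q: coin c
    reflects to N + 1 - c, so the interval [q, q + M - 1] maps to the
    interval starting at N - q - M + 2."""
    return N - q - M + 2

def crosses_center(q):
    """True if the move [q, q + M - 1] straddles the center line
    (between coins N/2 and N/2 + 1), making it impossible to mirror."""
    return q <= N // 2 < q + M - 1
```

For instance, mirroring the opponent's move starting at coin 2 gives the move starting at coin 40, which removes coins 40 through 49 and ends at the next-to-last coin, as described above.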
Sadly, we cannot move first, but we can try to adapt this strategy by just making the mirrored version of the opponent's move. This will always be possible unless the opponent makes a move that crosses the center line, in which case we cannot mirror it. But if the opponent happens to move close to the center line without crossing it — specifically, taking a coin fewer than 10^10 / 2 coins away from the center without crossing the center — then they will ensure their own defeat. If they do happen to make a move that crosses the center line, we can abandon our strategy and move randomly.
We can run a local simulation and find that this strategy does better than randomness, winning around 57.5% of the time. Unfortunately, this only gives us about a 14% chance of passing test set 1. Moreover, because our moves depend on the judge's moves, we cannot easily tweak the judge's randomness; we only get some control over that once the judge has made a move that hurts our strategy. So, all in all, it's probably best to abandon this approach.
Let's call a remaining set of (at least 10^10) contiguous coins a "run". Observe that a "run" of exactly 2 × 10^10 remaining coins can be very useful for us. We have the option of taking the first 10^10 or the last 10^10 coins from that run, and leaving a run of exactly 10^10 coins (and therefore one possible move) behind. On the other hand, if we take any other group of 10^10 coins from that run, we will leave no moves behind, "destroying" the run. Also notice that if the AI happens to move within this run, it will virtually always make the second type of move; the odds of it happening to choose the first or last 10^10 coins are negligible.
Because of this, a run of exactly 2 × 10^10 coins lets us control the parity of the game. Suppose, as a thought experiment, that the only remaining runs are of exactly 2 × 10^10 coins each, and that it is our turn. If the number of remaining runs is odd, we can move in a way that destroys one of the runs, and then the AI will do the same, and so on until we leave the AI with no moves. If the number of remaining runs is even, we can take the first 10^10 coins from one run, leaving the other 10^10 behind as a smaller run. Now the AI is in the same bad "even number of remaining runs" situation that we were just in.
We can set up a situation like this by making many moves early on that leave behind runs of exactly 2 × 10^10 coins. For example, we can repeatedly choose the largest remaining run of size 3 × 10^10 or greater, and start from the (2 × 10^10 + 1)-th coin from the left in that group, leaving a run of exactly size 2 × 10^10 behind (in addition to any leftover piece to the right of our move). Then, as long as there are still runs larger than 2 × 10^10 but smaller than 3 × 10^10, we can use our moves to destroy them by moving in their centers. Our goal is to eliminate all runs larger than 2 × 10^10 before the AI randomly destroys all of our runs of 2 × 10^10. Intuitively, we do not need to worry too much about this, since the AI is more likely to move within some remaining larger run than within one of our runs of exactly 2 × 10^10 coins, so it will usually be helping us out! Once all runs are of size 2 × 10^10 or smaller, and we have at least one run of size 2 × 10^10, we have imposed the near-lockdown situation described above.
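The move selection above can be sketched as follows; this is our own simplification (helper names and structure are ours), parameterized by the move size m so the small 10-coin version can be used for illustration:

```python
def choose_move(runs, m=10**10):
    """Starting coin of our next move, given the remaining runs as
    (start, length) pairs. A sketch of the strategy above, not a full
    solution; the endgame branch assumes the thought-experiment state."""
    runs = sorted(runs, key=lambda run: -run[1])
    start, length = runs[0]                  # largest remaining run
    if length >= 3 * m:
        # Carve a run of exactly 2*m off its left end; whatever is left
        # to the right of our move becomes a separate, smaller run.
        return start + 2 * m
    if length > 2 * m:
        # Destroy it by moving near its center; for lengths up to
        # 3*m - 2 both leftover pieces are shorter than m.
        return start + (length - m) // 2
    if length == 2 * m:
        twos = sum(1 for run in runs if run[1] == 2 * m)
        if twos % 2 == 1:
            return start + m // 2            # off-center move destroys the run
        return start                         # take the first m coins, leaving a run of m
    return start                             # m <= length < 2*m: any move destroys it
```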
We hope you will forgive us for not trying to calculate an exact success probability for the above strategy, but one can use a local simulation to show that it wins, e.g., all 100000 of 100000 games. The chances of it losing more than one of 500 games are vanishingly small... and even if the worst somehow happens, in this case, we do have some control over the overall randomness, and we can try again with a slight tweak.
In Round 1C this year, we had Bacterial Tactics, another problem about a turn-taking game. You may recall that the analysis brought up the Sprague–Grundy theorem; can we use a similar parity-based strategy here?
As in our 2 × 10^10 strategy, we can try to keep the number of remaining runs even for the AI by moving in the middle of a run if there is an even number of runs, or otherwise taking the left end of some other large run. Empirically, this strategy solves test set 1, and it even solves test set 2 if we always take the largest remaining run. It ends up being similar to our best strategy described above, even though it does not allow for such fine control.
We can even achieve a perfect solution for this problem by exhaustively calculating Grundy numbers and storing them using run-length encoding! However, it is possible to arrive at the 2 × 10^10 idea above, which is good enough, without knowing anything about this theory.
We can make a couple of useful observations at the outset. First, if we have an interval of length 1 or 2, we do not need to add any pancakes for it to have the pyramid property, so we can ignore the restriction of length ≥ 3 in the problem. Second, for any interval, the "peak" in the optimal answer is the largest stack in the original interval (we leave this for you to think about). If there are multiple largest stacks in an interval, we will take the leftmost largest stack as the peak.
For each of the ((S + 1) choose 2) intervals, determine where the peak will be located once the interval is turned into a pyramid. Once we know where the peak will be, we have two smaller problems: we need a non-decreasing sequence to the peak's left and a non-increasing sequence to the peak's right. To compute how many pancakes we need in the non-decreasing part, we may simply sweep from the leftmost point and add pancakes until no stack in the interval to the left of the i-th stack is strictly taller than the i-th stack. By maintaining the running maximum as we sweep, we can compute the number of pancakes needed per interval in O(S) operations. Since there are O(S^2) intervals, this algorithm requires O(S^3) operations in total.
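The per-interval sweep can be written directly; a sketch (function and variable names are ours):

```python
def pyramid_cost(h):
    """Minimum pancakes to give h the pyramid property, taking the
    leftmost largest stack as the peak (as argued above)."""
    j = h.index(max(h))
    cost = run_max = 0
    for i in range(j):                    # left side: make non-decreasing
        run_max = max(run_max, h[i])
        cost += run_max - h[i]
    run_max = 0
    for i in range(len(h) - 1, j, -1):    # right side: make non-increasing
        run_max = max(run_max, h[i])
        cost += run_max - h[i]
    return cost

def brute_force_total(p):
    """O(S^3) overall: sum pyramid_cost over every valid (L, R)."""
    s, mod = len(p), 10**9 + 7
    return sum(pyramid_cost(p[l:s - r])
               for l in range(s) for r in range(s - l - 2)) % mod
```

On the samples this reproduces the expected answers (1, 16 and 999999991).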
The ideas above lay the framework for a quicker solution. Instead of independently recomputing the number of pancakes needed to make an interval non-decreasing (or non-increasing), we can reuse the results from other intervals. Say we know the index of the largest stack of pancakes in the range [L, R] (call this index M[L, R]) and the smallest number of pancakes needed to make the interval [L, R] into a non-decreasing sequence (call this number X[L, R]). We can compute both M[L, R+1] and X[L, R+1] in O(1) time, since PM[L, R+1] = max(PM[L, R], PR+1) and X[L, R+1] = X[L, R] + (PM[L, R+1] - PR+1). Similarly, we can store Y[L, R], which is the smallest number of pancakes needed to make the interval [L, R] into a non-increasing sequence.
For any interval [L, R], the smallest number of pancakes needed to turn the interval into a pyramid is simply X[L, M[L, R]] + Y[M[L, R], R]. The precomputation takes O(S^2) time and memory and the second step uses O(1) time per interval to compute the answer. Thus, in total, this is O(S^2).
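A sketch of this O(S^2) approach (our code, 0-based indices):

```python
def total_cost_quadratic(p):
    """Sum of pyramidification costs over all valid (L, R), following
    the M, X and Y tables described above."""
    s, mod = len(p), 10**9 + 7
    # M[l][r]: leftmost largest index in p[l..r].
    # X[l][r]: pancakes to make p[l..r] non-decreasing.
    M = [[l] * s for l in range(s)]
    X = [[0] * s for _ in range(s)]
    for l in range(s):
        for r in range(l + 1, s):
            M[l][r] = M[l][r - 1] if p[M[l][r - 1]] >= p[r] else r
            X[l][r] = X[l][r - 1] + p[M[l][r]] - p[r]
    # Y[l][r]: pancakes to make p[l..r] non-increasing.
    Y = [[0] * s for _ in range(s)]
    for r in range(s):
        run_max = p[r]
        for l in range(r - 1, -1, -1):
            run_max = max(run_max, p[l])
            Y[l][r] = Y[l + 1][r] + run_max - p[l]
    total = 0
    for l in range(s):
        for r in range(l + 2, s):
            j = M[l][r]                  # the peak of interval [l, r]
            total += X[l][j] + Y[j][r]
    return total % mod
```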
The above strategy will be too slow and require too much memory to handle the larger bounds. For this test set, we will still use the same underlying idea of computing the number of pancakes needed to make an interval non-decreasing or non-increasing. But instead of computing X and Y, we will compute cumulative values: define X'[L, R] = X[L, R] + X[L+1, R] + ... + X[R, R] and Y'[L, R] = Y[L, L] + Y[L, L+1] + ... + Y[L, R]. Now, instead of focusing on the left and right endpoints, we will base our strategy on the peaks of the intervals.
Initially, we do not know the value of X' or Y' for any interval. In our analysis, we will assume that we only know the X' and Y' values for maximal intervals. That is, if two intervals are side-by-side, we will merge them (we will never have intersecting intervals). For example, if we know X'[L, k] and X'[k+1, R], we will merge these together into X'[L, R] and forget about X'[L, k] and X'[k+1, R]. The full process of how to merge is explained below. In particular, this means that any given stack is in at most one known interval for X' and one known interval for Y'.
We will process the peaks from smallest to largest. When we process the i-th stack, we are only interested in intervals that have stack i as their peak. If X'[L, i-1] is computed for some value of L, then L must be the furthest left index such that PL, PL+1, ..., Pi-1 are all less than Pi (since we are processing the stacks from smallest to largest). Similarly, if Y'[i+1, R] is computed for some value R, then R must be the furthest right index such that Pi+1, Pi+2, ..., PR are all at most Pi. If such L and R exist, then we can compute the number of pancakes needed over all intervals that have i as their peak:
X'[L, i-1] * (R - i + 1) + Y'[i+1, R] * (i - L + 1)
Note that if we don't know X'[L, i-1] for any L, then Pi-1 ≥ Pi, so i cannot be a peak with any interval that includes both i-1 and i. The answer for those intervals will be computed later when we consider stack i-1 as the peak (and likewise for stack i+1 if we do not know Y'[i+1, R] for any R). In these cases, we may use X' = 0 (or Y' = 0). Note that since our intervals are maximal and we are computing from smallest to largest, PL-1 ≥ Pi (similarly, PR+1 > Pi).
Now we want to merge X'[L, i-1], X'[i+1, R] and stack i into X'[L, R]. We will do this in two steps. First, note that X'[L, i] = X'[L, i-1]: since we are processing the stacks from smallest to largest, Pi can be freely added as the right endpoint of any non-decreasing sequence in this range. Now let's merge X'[L, i] and X'[i+1, R]. Observe that X'[L, R] sums over intervals that end at the R-th stack. If an interval starts in the range [i+1, R], then it is already counted in X'[i+1, R]. If an interval starts in [L, i], then we can start with some sequence in [L, i], but since Pi is the peak, every value on the right must be exactly Pi. The number of pancakes needed to bring every value in [i+1, R] up to Pi can be computed in O(1) time using cumulative sums. Thus, the full merge is:
X'[L, R] = X'[i+1, R] + X'[L, i-1] + (i - L + 1) × (Pi × (R - i) - (Pi+1 + ... + PR))
The Y' values can be computed similarly. In terms of complexity, we need to sort the stacks at the beginning in O(S log S) time and the remaining steps take constant time per peak, so O(S) overall. This means the algorithm takes O(S log S) time.
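We can sanity-check the X' merge against a brute force on a small example (our code and example values; 0-based indices, with Pi × (R - i) - (Pi+1 + ... + PR) as the cumulative-sums term from the text):

```python
def nondec_cost(h):
    """Pancakes needed to make h non-decreasing (raise each stack to
    the running maximum)."""
    cost = run_max = 0
    for v in h:
        run_max = max(run_max, v)
        cost += run_max - v
    return cost

def xprime(p, l, r):
    """X'[l, r]: sum of nondec_cost(p[a..r]) over a = l..r."""
    return sum(nondec_cost(p[a:r + 1]) for a in range(l, r + 1))

# Check the merge identity where p[i] is the unique largest stack in
# p[l..r] (here i = 3, value 9).
p = [2, 5, 3, 9, 1, 4, 4]
l, i, r = 0, 3, 6
lhs = xprime(p, l, r)
rhs = (xprime(p, i + 1, r) + xprime(p, l, i - 1)
       + (i - l + 1) * (p[i] * (r - i) - sum(p[i + 1:r + 1])))
assert lhs == rhs
```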
Although the O(S log S) solution is fast enough to solve test set 2, an O(S) solution is also possible! We present a sketch of the idea here, which can be read independently of the solution above.
To make things easier, we pretend that the stacks all have different heights: whenever we compare two stacks of identical height, we break the tie by treating the stack with the larger index as the higher one.
Let us think through the solution starting from the end. For each stack, we want to compute the pyramidification cost for all the ranges in which this stack is the highest; in such cases, it will be the peak of the pyramid. Then we can sum all of those values for all of the stacks, and that will be our overall result.
In order to compute those pyramidification costs, we can compute the following for each stack s:
- DL: the number of ranges ending at s in which s is the highest stack; equivalently, the distance from s to the nearest higher stack on its left (its left blocker).
- CL: the total number of pancakes needed to make all of those DL ranges non-decreasing.
- DR and CR: the analogous quantities toward the right.
Then the pyramidification cost for s can be calculated as CL × DR + CR × DL: each choice of left endpoint pairs with each choice of right endpoint, so each left cost is counted DR times and each right cost DL times.
Now, how can we compute CL and DL for each stack? (The solution is analogous for CR and DR; the only major difference is that the inequality used to compare stacks is strict on one side and non-strict on the other, due to tie-breaking.)
We can traverse the stacks from left to right, maintaining a Stack structure (capital S to avoid confusion) X that remembers the longest decreasing sequence of stacks ending at the current stack. When we see a stack s, we consume from X all stacks that are lower than s, adding their contributions to the current left pyramidification cost. Let t' = s at the beginning, and later let t' be the previously consumed stack. The contribution of a consumed lower stack t is the number of pancakes missing between t and t' (calculated in constant time if we precompute the cumulative sums of stack sizes), multiplied by the distance to the left blocker of t'. The left blocker of s is the first stack we cannot consume from X, because it is higher than s. Once we are done with s, we push s onto X and keep going.
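The left-blocker bookkeeping above is the classic monotonic-stack pattern. A minimal sketch of just that part (the function name is ours; it applies the tie-breaking rule that the stack with the larger index counts as higher, so an equal-height stack to the left is consumed rather than treated as a blocker):

```python
def left_blockers(P):
    # blocker[s] = index of the first stack to the left of s that is
    # higher than s (equal heights on the left count as lower, per the
    # tie-breaking rule), or -1 if no such stack exists.
    # DL for stack s is then simply s - blocker[s].
    blocker = [-1] * len(P)
    X = []  # indices forming a decreasing sequence (the Stack structure)
    for s, h in enumerate(P):
        while X and P[X[-1]] <= h:  # consume stacks lower than s
            X.pop()
        blocker[s] = X[-1] if X else -1
        X.append(s)
    return blocker
```

Running this over the whole array takes O(S) time in total, since each index is pushed and popped at most once.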
We can start by modeling the problem as a graph where each cell of the input matrix represents a node. Orthogonally adjacent cells with the same label are connected by edges, and we can optionally add edges connecting diagonally adjacent cells with the same label. The goal is to end up with exactly two connected components.
Test set 1 can be solved with dynamic programming, iterating over columns. When considering the i-th column from left to right, we can summarize the state of the connections we have added in the submatrix S of columns 1 through i by recording: (1) whether we have seen (at least one cell of) each of A and B so far, and (2) whether any two cells in the i-th column that contain the same label but are not orthogonally adjacent are connected by a path in S. Then, we can use brute force and consider all possible choices of how to connect each of the up to 3 cell corners between columns i and i+1 (there are at most 4 rows in Test set 1). There are 3 possibilities for each corner: \, / and no connection. This can be reduced by noticing that if at least one of the diagonal connections is valid, it is always optimal to use one, which brings the number of choices per corner down to at most 2. If at any point we discover an isolated A component that can no longer be connected to previously seen As, we have failed, and the same is true for B. If we finish, we can then use a second pass to reconstruct one choice of diagonal connections that solves the problem.
This seemingly simple idea has several technical details that we are omitting here. Some of them can be simplified by starting and ending the process in columns that contain both As and Bs and preprocessing to check if any leftmost or rightmost columns that contain only one type already disconnect the other type.
The overall time complexity of this solution is exponential in R, because we are trying all combinations of O(R) corners and recording the connected status of O(R) cells at each state, and linear in C, with the exact formula depending on how the technical details are handled. As long as the base of the exponential factor is not too large, this is fast enough to pass within the time limit.
The key observation is: after we decide to add some edges (diagonal adjacencies), two cells c and d containing the same label X end up separated regardless of any future decisions if and only if one of the following two conditions holds:
- c and d are not connected even in the graph with every possible edge added, or
- every possible path connecting c and d crosses every possible path connecting some pair of cells labeled Y ≠ X, so joining c and d would necessarily leave that Y-labeled pair separated.
Let us use G to denote the graph with no diagonal edges added and H to denote the final graph with all edges added. Consider the border of the matrix. Suppose that it contains two cells c and d labeled X that are not connected in G. Since they are not connected in G, that means, going around the border, there are cells e and f labeled Y ≠ X such that e is in between c and d in clockwise order and f is between c and d in counter-clockwise order. That means neither c and d nor e and f can be connected in H with a path of only border cells. Therefore, if c and d are connected in H, the path connecting them disconnects e and f, and vice versa. So, by the second condition, having two cells on the border with the same label that are disconnected in G results in an impossible case.
Notice that there is never an incentive to generate an edge from a diagonal adjacency to connect two cells that are already connected. Per the paragraph above, if a case is possible, then all border cells with the same label are already connected in G. Therefore, if an algorithm never adds an edge from a diagonal adjacency that connects two cells that are already connected, it will never generate a path between two border cells. Additionally, if we never connect cells that are already connected, any cycle in H is a cycle that was already present in G, so again, if we end up in a disconnected situation, the case must have been impossible from the start, before we added any connections.
These observations directly suggest the following algorithm: consider each diagonal adjacency and generate an edge if and only if it would connect two previously disconnected cells. If both diagonals of a corner qualify, we can choose either, since we already established that declining to connect previously connected cells is enough to guarantee that the algorithm never produces an H with more than 2 connected components when a different H with exactly 2 was possible. After this process, check whether exactly 2 connected components remain, and print the result accordingly.
If we implement the algorithm above using a union-find to maintain the connected components, we need O(R × C) checks for whether two cells are connected and O(R × C) connections (joins). Since union-find provides almost constant amortized time for both operations, the overall time complexity of the algorithm is quasilinear.
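A compact sketch of this union-find approach (naming and conventions are ours; it returns one valid pattern of diagonal choices, or None when two components cannot be achieved):

```python
class DSU:
    # Union-find with path halving; near-constant amortized operations.
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.p[ra] = rb

def solve(grid):
    # grid: list of strings over {'A', 'B'}.  Returns a list of strings
    # over {'.', '\\', '/'} describing diagonal connections, or None.
    R, C = len(grid), len(grid[0])
    dsu = DSU(R * C)
    idx = lambda r, c: r * C + c
    # Orthogonally adjacent cells with the same label are always connected.
    for r in range(R):
        for c in range(C):
            if r + 1 < R and grid[r][c] == grid[r + 1][c]:
                dsu.union(idx(r, c), idx(r + 1, c))
            if c + 1 < C and grid[r][c] == grid[r][c + 1]:
                dsu.union(idx(r, c), idx(r, c + 1))
    # Greedy: add a diagonal edge only if it joins two components.
    out = [['.'] * (C - 1) for _ in range(R - 1)]
    for r in range(R - 1):
        for c in range(C - 1):
            if grid[r][c] == grid[r + 1][c + 1] and \
               dsu.find(idx(r, c)) != dsu.find(idx(r + 1, c + 1)):
                dsu.union(idx(r, c), idx(r + 1, c + 1))
                out[r][c] = '\\'
            elif grid[r][c + 1] == grid[r + 1][c] and \
                 dsu.find(idx(r, c + 1)) != dsu.find(idx(r + 1, c)):
                dsu.union(idx(r, c + 1), idx(r + 1, c))
                out[r][c] = '/'
    comps = len({dsu.find(i) for i in range(R * C)})
    return [''.join(row) for row in out] if comps == 2 else None
```

This sketch omits union by rank, which is fine at these sizes; path halving alone keeps the operations fast in practice.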
An equivalent algorithm is to calculate the connected components of G first and then use shortest paths to join any two components of the same label until we cannot do it anymore. Since shortest paths cannot create new cycles, an argument similar to the one above proves that this solution is also correct. This solution can be implemented in linear time if the partial minimum path tree graph is reused for each new connection we need.
For Test Set 1, we want to find exactly one folding segment such that when we fold the napkin across the segment, the regions on either side of the segment line up perfectly. Because the folding segment must split the napkin into two symmetric regions, we can show that each of the folding segment's endpoints either coincides with a vertex of the polygon defining the napkin, or is the midpoint of an edge of the polygon. If we tried to make a folding segment connecting any other points, the two parts of the split edge could not possibly line up perfectly. Thus, all we need to do is try every line segment connecting any pair of points that are vertices or midpoints of the polygon's edges. Then, for each potential folding segment, we check whether it is valid by making sure it is fully contained within the polygon and that the two regions it creates are symmetric across the folding line segment.
Checking for intersections between line segments can be done using only integers. Reflecting points across a line or taking the midpoint of an edge can produce points with non-integer coordinates, but we can scale up our points initially such that if a reflection were to produce a point with non-integer coordinates, it could not possibly be a valid endpoint of a folding line segment. Of course, we can also choose to work with fractions.
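For instance, reflecting a point across the line through two other points can be done exactly with rational arithmetic. A small sketch using Python's Fraction (the function name is ours):

```python
from fractions import Fraction

def reflect(p, a, b):
    # Reflect point p across the line through points a and b, exactly.
    # Points are (x, y) tuples of ints or Fractions.
    px, py = Fraction(p[0]), Fraction(p[1])
    ax, ay = Fraction(a[0]), Fraction(a[1])
    dx, dy = Fraction(b[0]) - ax, Fraction(b[1]) - ay
    # Project p onto the line to get the foot of the perpendicular q,
    # then the reflection is 2q - p.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    qx, qy = ax + t * dx, ay + t * dy
    return (2 * qx - px, 2 * qy - py)
```

For example, reflecting (0, 1) across the x-axis (the line through (0, 0) and (1, 0)) yields (0, -1), with no floating-point error.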
With N points in our polygon, we have 2×N points to choose from for our folding segment's endpoints. Each potential folding segment can be checked in O(N) time. Because there are O(N²) possible segments to check, this results in an O(N³) overall time complexity.
Note that in order to check for symmetry across a folding segment, we cannot just check that the sets of vertices of the polygons defining each region are symmetrical. Rather, we must show that for every edge of the polygon of one region, there exists exactly one edge in the polygon defining the second region which is symmetric across the folding line segment. In other words, the order in which the points appear on each side matters.
Finally, we can note that if a given line is indeed an axis of symmetry of the polygon, then it is guaranteed that it doesn't intersect the polygon more than twice. This means that we don't need an explicit check in the code for the folding segment not to intersect the polygon outside its endpoints. A similar simplification can be applied to the solution of Test set 2, coming up next.
Since we want to draw a neat folding pattern of K-1 non-intersecting line segments, we must partition our napkin into K regions of identical size and shape, each of which is the mirror image of any region with which it shares a folding line segment. Each of these K regions is a polygon whose edges are made up of folding line segments and/or edges or parts of edges of the polygon defining the napkin. It can be shown that at least two of these regions are adjacent to only one line segment of our folding pattern, with the other edges that define the region's polygon coming from the original polygon. Let's call these regions corner regions.
If we have a corner region, we can reflect that region across its one folding line segment to find the region that must be adjacent to it. If we keep reflecting these regions across the edges of their polygons, being careful not to create intersecting regions or leave the polygon defining the napkin, we can reconstruct the neat folding pattern. Thus, every neat folding pattern can be uniquely defined by just one folding line segment connecting two points on the border of the polygon defining the napkin. That segment cuts off a corner region which can be successively reflected to give us the entire folding pattern.
We need to consider pairs of points on the napkin's border defining a folding line segment. For a given candidate folding line segment, we can successively reflect the corner region we form to get the full folding pattern. If we use more than K-1 reflections, or, if after we finish reflecting we don't end up creating the original polygon, we know that the chosen line segment does not give us a neat folding pattern.
Now all that remains is to select all pairs of points on the napkin's border. Clearly we cannot test every pair of points with rational coordinates, because there are infinitely many such points. Rather, we can show that the endpoints of the line segments in our neat folding pattern must be either vertices or points on the polygon's edges that are X/Y of the way between consecutive vertices, where 1 ≤ X ≤ Y-1 and 2 ≤ Y ≤ K (the proof for this is below). Therefore, we can create all of these candidate points and check every pair. With K ≤ 10 and N ≤ 200, there are at most 6400 candidate points for the vertices of the line segments in our neat folding pattern. This puts an upper bound of 6400² on the number of pairs that we might need to check. But we can reduce this significantly by only considering pairs which create a corner region with an area equal to 1/K of the napkin's area. Every point can pair with at most 2 other points to create a line segment which is fully within the polygon and splits off a corner region with the proper area. Therefore, we only need to check these pairs.
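Enumerating the candidate points is straightforward with exact fractions. A sketch (the helper name and representation are ours):

```python
from fractions import Fraction

def candidate_points(poly, K):
    # All vertices of the polygon, plus every point that is X/Y of the
    # way along an edge, for 1 <= X <= Y-1 and 2 <= Y <= K.  The set
    # deduplicates coinciding fractions such as 1/2 and 2/4.
    pts = set()
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        pts.add((Fraction(x1), Fraction(y1)))  # the vertex itself
        for Y in range(2, K + 1):
            for X in range(1, Y):
                t = Fraction(X, Y)
                pts.add((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return pts
```

For the unit square with K = 2 this produces the 4 vertices plus the 4 edge midpoints, 8 points in total.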
We can check if a single line segment from a pair of points gives us a valid folding pattern in O(N) time if we are careful to stop once a reflection creates a point which is not on the border of one of our polygon's line segments. We can precompute all the valid points and use a hash table for this check. Because all of the points are of the form X/Y × p, where p is a point with integer coordinates and Y is between 2 and K, we can multiply everything by lcm(2, 3, ..., K) at the beginning and then work in integers, dividing and simplifying fractions only for the output.
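The scaling factor mentioned above is just lcm(2, 3, ..., K), for example (helper name is ours):

```python
from math import gcd
from functools import reduce

def scale_factor(K):
    # lcm(2, 3, ..., K): multiplying all input coordinates by this makes
    # every X/Y-of-the-way candidate point (with Y <= K) land on
    # integer coordinates.
    return reduce(lambda a, b: a * b // gcd(a, b), range(2, K + 1), 1)
```

With K ≤ 10 this is at most lcm(2, ..., 10) = 2520, so the scaled coordinates stay comfortably within 64-bit integers.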
To summarize, the algorithm requires these steps:
1. Generate all candidate points: the polygon's vertices, plus the points X/Y of the way between consecutive vertices for 1 ≤ X ≤ Y-1 and 2 ≤ Y ≤ K.
2. Find the candidate folding segments: pairs of candidate points whose segment lies within the polygon and cuts off a corner region with exactly 1/K of the napkin's area.
3. For each candidate folding segment, successively reflect the corner region it cuts off, accepting it if this reproduces the original polygon using exactly K regions.
There are up to O(N × K²) possible endpoints. If we fix one endpoint P and iterate over the other, we can compute the area as we advance, finding the up to 2 segments that have an endpoint at P in O(N × K²) time. So step 2 takes O(N² × K⁴) time in total. For step 3, each unfolding requires reflecting the current region and finding new folding segments. All of that is linear in the size of the region, and the sum of the sizes of all regions is at most the total number of endpoints plus 2 × K shared ones, so this takes O(N × K²) time overall. Putting it all together, the algorithm takes O(N² × K⁴) time in total. It is possible to make it significantly more efficient than that, but the implementation is complicated enough as it is.
To prove that all folding segment endpoints are X/Y of the way between consecutive vertices for 1 ≤ X ≤ Y-1 and 2 ≤ Y ≤ K, we can prove that the number of folding segments incident on a non-vertex point of the input polygon is odd or zero. If that is true, let P be such an endpoint and let Q and R be the two vertices and/or folding segment endpoints closest to P on either side along the same polygon edge E as P. Then, by reflection, the lengths PQ and PR are equal. By repeating this along E, we can see that E is divided into equal-length sections by the points that are folding segment endpoints.
Now we prove that the number of folding segments incident on a non-vertex point of the polygon is odd or zero. Assume that P is a point on an edge E of the polygon with an even number I > 1 of incident folding segments. Let Q_i for i = 1, 2, ..., I be the non-P endpoints of each of those segments, in clockwise order. Let Q_0 be the reflection of Q_2 across PQ_1 and Q_{I+1} the reflection of Q_{I-1} across PQ_I. Notice that both of those points have to be on E. Now, the I+1 angles Q_iPQ_{i+1} for i = 0, 1, ..., I are all equal. Let R_i be the reflection of Q_i across E. Because Q_i must reflect onto Q_{i+2}, the lengths of PQ_0, PQ_2, PQ_4, ..., PQ_I are all equal, and equal to the lengths of PR_2, PR_4, ..., PR_I. Since the angles between consecutive segments among those are also all equal, Q_0Q_2Q_4...Q_IR_IR_{I-2}...R_2 is a regular polygon. All of its points have rational coordinates, because they are either input points, endpoints of folding segments, or reflections calculated from other points with rational coordinates. It is known that the only regular polygon in 2 dimensions whose vertices all have rational coordinates is the square, which means I/2 + 1 + I/2 = I + 1 = 4, so I = 3, contradicting the assumption that I is even.
The following picture illustrates the above proof for the case I = 4. P is the point in the center, and line segments of the same color are guaranteed to be of the same length.
A consequence of this proof is that for any non-vertex point on the original polygon, it must be adjacent to exactly 0, 1 or 3 folding line segments.