The final round turned out to be unusually difficult. Most contestants started with problem A and later realized that repeated simulation would work for E-small. Moving beyond that, however, proved to be a nearly impossible feat.
The winner, rng..58, ended up being the only contestant to solve the fiendishly difficult E-large. mystic took second place with B-small and D-small on top of A and E-small. meret came very close to taking the top spot, but a failed B-large (caused by a 32-bit integer overflow) cost him; he finished in third.
Congratulations to the winners! We hope to see everybody next year. The 2012 finals will be held in New York City!
Cast
Problem A. Runs Written by Albert Mao and Bartholomew Furrow. Prepared by Jorge Bernadas Saragoza.
Problem B. Rains Over Atlantis Written by Onufry Wojtaszczyk and Bartholomew Furrow. Prepared by Onufry Wojtaszczyk and Jorge Bernadas Saragoza.
Problem C. Program within a Program Written by David Arthur and Jorge Bernadas Saragoza. Prepared by Jorge Bernadas Saragoza and David Arthur.
Problem D. Ace in the Hole Written by David Arthur and Jorge Bernadas Saragoza. Prepared by Bartholomew Furrow.
Problem E. Google Royale Written by David Arthur and Bartholomew Furrow. Prepared by Onufry Wojtaszczyk.
Contest analysis by Bartholomew Furrow and David Arthur.
Solutions and other problem preparation by Jorge Bernadas Saragoza, John Dethridge, Igor Naverniouk, David Arthur, Bartholomew Furrow, Tomek Czajka, and Onufry Wojtaszczyk.
I have a string S consisting of lowercase alphabetic characters, 'a' through 'z'. Each maximal sequence of contiguous characters that are the same is called a "run". For example, "bookkeeper" has 7 runs. How many different permutations of S have exactly the same number of runs as S?
Two permutations a and b are considered different if there exists some index i at which they have a different character: a[i] ≠ b[i].
The first line of the input gives the number of test cases, T. T lines follow. Each contains a single nonempty string of lowercase alphabetic characters, S, the string of interest.
For each test case, output one line containing "Case #x: y", where x is the case number (starting from 1) and y is the number of different permutations of S that have exactly the same number of runs as S, modulo 1000003.
Limits:
1 ≤ T ≤ 100.
S is at least 1 character long.
Memory limit: 1 GB.

Small dataset:
S is at most 100 characters long.
Time limit: 30 seconds.

Large dataset:
S is at most 450,000 characters long.
S has at most 100 runs.
The input file will not exceed 1 megabyte in size.
Time limit: 60 seconds.
Input:
2
aabcd
bookkeeper

Output:
Case #1: 24
Case #2: 7200
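For strings as small as the samples, the definitions above can be checked directly by brute force; a quick sketch (helper names are ours):

```python
from itertools import groupby, permutations

def count_runs(s):
    # A run is a maximal block of equal adjacent characters,
    # so counting runs is just counting groups of equal characters.
    return sum(1 for _ in groupby(s))

def count_same_run_permutations(s):
    # Brute force: count distinct permutations of s that have exactly
    # as many runs as s. Only feasible for very short strings.
    target = count_runs(s)
    return sum(1 for p in set(permutations(s)) if count_runs(p) == target)
```

For example, count_runs("bookkeeper") is 7, and count_same_run_permutations("aabcd") recovers the first sample answer, 24.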
Rains fall on the isle of Atlantis, and will erode all the land to nothingness. What you want to know, so that you can organize the evacuation, is how soon it will happen.
You have a map of Atlantis. The map is a square grid, and each square contains the height of the land in that square, in metres, above sea level. All squares outside the map have height 0; all squares with height 0 are water, and all squares with larger heights are land. There are no squares with lower height.
Water can flow from a source square to a target square if the source square and target square share an edge, and if the height of the water in the target square is lower than or equal to the height of water in the source square.
It's raining very quickly, which means that if there is nowhere for the rain water in a square to flow, water in that square will accumulate until there is a square to which the rain water can flow. Squares that are not on the map can accept any amount of flow. For example, the following map:

5 9 9 9 9 9
0 8 9 0 2 5
3 9 9 9 9 9

will quickly fill up with water. We'll call the height of water in each square, plus the height of the land, the water level. It will be:

5 9 9 9 9 9
0 8 9 5 5 5
3 9 9 9 9 9
Note that the 0 in the middle of the land, although it's water, is not connected to the outside of the map and so just accumulates water. The 0 on the border of the land, however, is connected to the outside of the map, and so the water from the 8 can flow through it to the outside.
The direction in which water flows is determined by the water level. If there are multiple possible squares where water could flow from one particular source square, the water from that source will flow to the square with the lowest water level (ties don't matter, as you will see).
Now the erosion begins. Each day, a square is eroded (its height decreases) depending on how water is flowing from it. If water is flowing from S to T, then S's height decreases by min(WaterLevel(S) - WaterLevel(T), M). All erosion happens at exactly the same time, at the end of the day. For example, with M=5, the map above will erode to:

0 4 4 4 4 4
0 3 5 0 2 0
0 4 4 4 4 4

After a day's erosion, excess water will flow away: squares with water level higher than a neighbour's water level will lose water until they are of the same height. Water will also accumulate in the same way that it did on the first day. After the first day, this map's water level will become:

0 4 4 4 4 4
0 3 5 2 2 0
0 4 4 4 4 4

After another day of erosion, the map will look like:

0 0 0 0 0 0
0 0 2 0 0 0
0 0 0 0 0 0
...and the Atlanteans will need to escape in a big hurry. Your task is to determine how many days it will take for all the heights to erode to 0.
The first line of the input gives the number of test cases, T. T test cases follow. Each test case begins with a line containing three space-separated integers: H, W and M. The first two denote the size of the map, while the third is the maximum amount a square can erode in one day, as described above. H lines follow, each of which contains W space-separated integers. The i^{th} integer on the j^{th} line denotes the height of the square at (i, j).
For each test case, output one line containing "Case #x: y", where x is the case number (starting from 1) and y is the number of days it takes to erode the entire island.
Limits:
1 ≤ T ≤ 40.
Memory limit: 1 GB.

Small dataset:
1 ≤ H, W ≤ 10.
1 ≤ M ≤ 100.
0 ≤ all heights ≤ 100.
Time limit: 30 seconds.

Large dataset:
1 ≤ H, W ≤ 20.
1 ≤ M ≤ 10^{15}.
0 ≤ all heights ≤ 10^{15}.
Time limit: 60 seconds.
Input:
2
3 6 5
5 9 9 9 9 9
0 8 9 0 2 5
3 9 9 9 9 9
3 6 3
3 8 10 11 10 8
7 5 2 12 8 8
6 9 11 9 8 4

Output:
Case #1: 3
Case #2: 5
In the second case, the water height looks like:
3 8 10 11 10 8
7 7 7 12 8 8
6 9 11 9 8 4
After one day the island looks as follows:
0 5 7 8 7 5
4 5 2 9 8 5
3 6 8 6 5 1
And after the second day:
0 2 4 5 4 2
1 4 2 6 5 2
0 3 5 3 2 0
And the third day:
0 0 1 2 1 0
0 1 2 3 2 0
0 0 2 0 0 0
After the fourth day, things are looking desperate for the Atlanteans:
0 0 0 0 0 0
0 0 1 0 0 0
0 0 0 0 0 0
Finally, on the fifth day the last square erodes away. Atlantis lasted for five days; they probably shouldn't have built their city out of brown sugar.
You have a robot on an infinite east-west highway, and it has a cake to deliver. Every mile along the highway, in both directions, there is a lamppost. You want to program the robot to move exactly N lampposts to the east, and release the cake there. The route does not have to be direct, as long as the robot eventually releases the cake in the right place.
Unfortunately, the robot comes equipped with only very little memory, and it is capable of no advanced logic. To control the robot you will have to give it a very simple program at the start that will get it to release the cake at the proper location. This program must be composed of one or more statements, each of which tells the robot what to do under certain conditions. These statements must be in the following format:
<S> <M> -> <action>

which means that if all of the following conditions are met:

- the robot's current state is S, and
- the mark on the current lamppost is M,

then it will perform <action>, which must take exactly one of the following forms:

- A move action must be formatted as "<D> <NS> <NM>", where D is the direction to move (use 'W' for west and 'E' for east), NS is the robot's new state, and NM is the new mark for the current lamppost.
- A release action must be formatted as "R", which makes the robot release the cake at the current lamppost.
If you output two or more statements with the same values of S and M, the robot will misbehave and destroy the cake. If at any time the robot is in a state X at a lamppost marked with Y such that there is no statement with S=X and M=Y, then the robot will get confused and eat the cake.
All states and marks must be integers with absolute value no greater than one million (10^{6}). Assume that initially the robot is in state zero and all lampposts are marked with zero.
Given N, write a program so the robot releases the cake in the appropriate place. Your program must use at most 30 statements, and it must terminate within X steps.
The first line of the input gives the number of test cases, T. T test cases follow. Each test case consists of a single line containing an integer N, which indicates the lamppost where the robot must release the cake.
For each test case, first output "Case #x: y", where x is the number of the test case (starting with 1) and y is the number of statements you will use. Next output y lines, each of which represents a statement for the robot in the format described previously.
WARNING: Judge's response might take up to 5 seconds longer than usual to appear, because your output is run as part of the validation.
Limits:
1 ≤ T ≤ 15.
Memory limit: 1 GB.

Small dataset:
0 ≤ N ≤ 500.
X = 250,000 (2.5 × 10^{5}).
Time limit: 30 seconds.

Large dataset:
0 ≤ N ≤ 5000.
X = 150,000 (1.5 × 10^{5}).
Time limit: 60 seconds.
Input:
3

Output:
Case #1: 1
In the second case, the robot has five states: 0, 1, 2, 3, and -1. The robot performs the following actions:
In the third case, the robot has two states, and performs the following actions:
Amy has a deck of N cards with values 1 through N. She arranges the deck so that the values of the cards have no decreasing subsequence of length 3. For example, 1, 5, 4, 6, 3, 2 would be an illegal ordering because 5, 3, 2 is decreasing.
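The "no decreasing subsequence of length 3" condition is easy to test directly: a value is the middle of a decreasing triple exactly when some larger value precedes it and some smaller value follows it. A small sketch (function name ours):

```python
def has_decreasing_triple(deck):
    # True iff there are indices i < j < k with deck[i] > deck[j] > deck[k].
    # Check each position as the potential middle element of the triple.
    n = len(deck)
    for j in range(n):
        bigger_before = any(deck[i] > deck[j] for i in range(j))
        smaller_after = any(deck[k] < deck[j] for k in range(j + 1, n))
        if bigger_before and smaller_after:
            return True
    return False
```

For the ordering above, has_decreasing_triple([1, 5, 4, 6, 3, 2]) is True because of the triple 5, 3, 2.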
Amy now gives the deck of cards to Ben. Ben knows that the deck has no decreasing subsequence of length 3, but he does not know the exact ordering. He wants to find the card with value 1. He does this by choosing an arbitrary card, picking it up to observe its value, and then repeating until he has found the card with value 1. At each step, Ben chooses a card that will minimize the worst-case number of cards he has to examine.
Ben later tells you that he got unlucky and had to examine all N cards before finding the card with value 1. Given the order in which he examined the cards of the deck, what value did each card have? If there are multiple possibilities, choose the lexicographically greatest one.
A deck A is lexicographically greater than a deck B if and only if, at the first index at which they differ, the card in A has a value greater than the value of the card in B.
Example: N = 3, and Ben tried the cards in order 2, 1, 3 (the indices are 1based). The values of the cards must have been: 2, 3, 1.
Explanation: If card #2 had value 1, then Ben would have stopped immediately. If card #2 had value 2, then Ben would have known the first card must have been the 1, because the ordering (3, 2, 1) is a decreasing subsequence of length 3, and thus could not have been the ordering. In either case, Ben would not have needed 3 guesses. Therefore, we can deduce that card #2 must have had value 3. Similarly, card #1 could not have had value 1, or Ben would have stopped early. Therefore, the card values must have been 2, 3, 1.
The first line of the input gives the number of test cases, T. T test cases follow. Each test case begins with a line containing one integer N, the number of cards in the deck. The next line will contain N integers separated by single spaces, describing the order in which Ben examined the deck: the first integer denotes the 1based position of the first card he examined, the second integer denotes the 1based position of the second card he examined, and so on.
For each test case, output one line containing "Case #x: y", where x is the case number (starting from 1) and y is the sequence of cards' values, separated by spaces.
Limits:
1 ≤ T ≤ 100.
For the provided sequence of guesses, there will be at least one deck that meets all the constraints of the problem, including the constraint that Ben's strategy required him to look at N cards.
Memory limit: 1 GB.

Small dataset:
1 ≤ N ≤ 8.
Time limit: 30 seconds.

Large dataset:
1 ≤ N ≤ 300.
Time limit: 60 seconds.
Input:
3
3
2 1 3
1
1
3
3 2 1

Output:
Case #1: 2 3 1
Case #2: 1
Case #3: 1 3 2
While visiting the planet Theta VIII, your team of space explorers is forced to participate in the plot of a badly-written book, which takes place in a hotel/casino called the Google Royale. In order to escape the Royale, you will have to make enough money from gambling that you can buy the hotel for V dollars and leave.
You start with A dollars, and you will participate in betting rounds until one of two conditions is met. If you finish any betting round with ≤ 0 dollars, you will lose; if you finish a betting round with ≥ V dollars, you will buy the hotel and leave. Otherwise you'll keep starting new betting rounds.
Each betting round consists of one or more coin flips. If you have X dollars at the start of the round, you can choose any integer B between 1 and min(X, M) to bet on the first coin flip.

With probability 50%, you win the coin flip, and the Royale immediately pays you B dollars. You now have X + B dollars, and the betting round ends.

With probability 50%, you lose the coin flip and owe the Royale B dollars. You can now pay the B dollars you owe and end the round. Or, if 2B ≤ M, you can instead bet 2B dollars on another coin flip. If you lose again, then you owe the Royale B + 2B = 3B dollars. You can continue doubling your bet in this way to 4B, 8B, etc., until either you win a coin flip, you choose to stop, or your next bet would exceed M. You can even continue if the total of all your bets in the current betting round exceeds X.
Once the round is over, you must pay the Royale for each coin flip you lost, and if you won a coin flip, the Royale pays you for that. For example, if you start with a bet of 1 dollar, lose three coin flips, and then win one, you would gain $8 - $4 - $2 - $1 = $1. If you lose three coin flips and then stop, you would lose $4 + $2 + $1 = $7. If you are left with $0 or less after paying, then you are broke, and you have just lost the game.
Luckily you have brought an android with you, and he is able to compute the probability that you will win if you follow an optimal strategy. What is that probability, and what is the largest possible first bet you could make to have that probability? Remember that you are not allowed to bet more than M!
Suppose that you decide to use the following (suboptimal) strategy. You have A=5 dollars, M=20, and V=40. The following sequence of events is possible:

- You bet 2 dollars and win the coin flip. The Royale pays you 2 dollars, and the betting round ends. Now you have 7 dollars.
- You bet 5 dollars and lose the coin flip. Since 5*2 ≤ 20, you may do another coin flip with a bet of 5*2=10 dollars. You choose not to. You lose 5 dollars, and the betting round ends. Now you have 2 dollars.
- You bet 2 dollars and lose four coin flips in a row, doubling your bet each time (2, 4, 8, 16). Since 2*16 > M, you cannot flip another coin and you must pay what you owe: 2+4+8+16 = 30 dollars. Now you have -28 dollars; you have lost.

The first line of the input gives the number of test cases, T. T lines follow. Each line contains three integers separated by single spaces: A, M and V, in that order.
For each test case, output one line containing "Case #x: y z", where x is the case number (starting from 1); y is the probability of winning if you follow an optimal strategy; and z is the maximum first bet you can make without reducing your probability of winning. y must be correct to within an absolute or relative error of 10^{-6}.
Limits:
1 ≤ T ≤ 100.
Memory limit: 1 GB.

Small dataset:
1 ≤ M ≤ 20.
1 ≤ A < V ≤ 20.
Time limit: 30 seconds.

Large dataset:
1 ≤ M ≤ 10^{16}.
1 ≤ A < V ≤ 10^{16}.
Time limit: 60 seconds.
Input:
4
1 1 3
3 6 12
4 20 15
13 6 20

Output:
Case #1: 0.333333333 1
Case #2: 0.500000000 3
Case #3: 0.755555555 3
Case #4: 0.730769231 6
On a contest made up of unconventional problems, this one stands out as being a little more "normal" than the rest. That doesn't make it easy though! The main techniques here are good old-fashioned dynamic programming and counting, but you need to be very good at them to stand a chance.
The most important observation is that you cannot afford to build strings from left to right. With 450,000 characters, there is just far too much state to keep track of. The correct approach turns out to be placing all of one character type, then all of the next, and so on. Towards that end, let's define N_{c} to be the number of characters of type c, and define X_{c,r} to be the number of ways of arranging all characters of type 1, 2, ..., c so that there are exactly r runs. We will compute the X values iteratively using dynamic programming.
Consider a string S using only the first c - 1 character types, and having r_{0} runs. Note that S has exactly M = N_{1} + N_{2} + ... + N_{c-1} characters altogether. Additionally, it has two different types of "character boundaries": there are r_{0} + 1 boundaries that lie at an end of S or between two characters of different types (inserting a block of new characters there creates one new run), and there are M - r_{0} boundaries between two equal characters (inserting a block there splits an existing run, creating two new runs).
Therefore, the number of ways of adding all N_{c} type-c characters to S so as to get a string with exactly r runs can be calculated as the sum, over all y ≥ 0 with r_{0} + 2y ≤ r, of Choose(r_{0} + 1, x) * Choose(M - r_{0}, y) * Choose(N_{c} - 1, x + y - 1), where x = r - r_{0} - 2y counts the boundaries of the first type that receive new characters.
This method is actually quite fast. We can use O(450,000 * 100) time to precompute all the choose values. Everything else runs in O(26 * 100^{3}) time.
By the way, you probably need to calculate the choose values iteratively rather than recursively. 450,000 recursive calls will cause most programs to run out of stack space and crash! Here is a sample implementation in Python-style pseudocode (modulo operations omitted for clarity):

def CountTransitions(M, Nc, r0, r):
    # Special case: if we are adding the first batch of characters, the
    # only possible result is to have one run, and there is only
    # one way to achieve that.
    if r0 == 0:
        return 1 if r == 1 else 0
    result = 0
    y = 0
    while r0 + 2 * y <= r:
        x = r - (r0 + 2 * y)
        nways_select_x = Choose(r0 + 1, x)
        nways_select_y = Choose(M - r0, y)
        nways_split = Choose(Nc - 1, x + y - 1)
        result += nways_select_x * nways_select_y * nways_split
        y += 1
    return result

def Solve(freq, runs_goal):
    runs_count = [0] * (runs_goal + 1)
    runs_count[0] = 1
    M = 0
    for Nc in freq:
        if Nc > 0:
            old_runs_count = list(runs_count)
            runs_count = [0] * (runs_goal + 1)
            for r0 in range(runs_goal + 1):
                for r in range(r0 + 1, runs_goal + 1):
                    nways = CountTransitions(M, Nc, r0, r)
                    runs_count[r] += nways * old_runs_count[r0]
            M += Nc
    return runs_count[runs_goal]
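One iterative way to precompute the Choose values (a sketch, not necessarily the authors' table-based approach) is to build factorial tables modulo the prime 1000003 and invert them with Fermat's little theorem:

```python
MOD = 1000003  # prime, so Fermat's little theorem gives modular inverses

def build_choose(max_n):
    # Factorials and inverse factorials mod MOD, computed iteratively
    # (no recursion, so no stack-depth problems at n = 450,000).
    fact = [1] * (max_n + 1)
    for i in range(1, max_n + 1):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [1] * (max_n + 1)
    inv_fact[max_n] = pow(fact[max_n], MOD - 2, MOD)
    for i in range(max_n, 0, -1):
        inv_fact[i - 1] = inv_fact[i] * i % MOD

    def choose(n, k):
        if k < 0 or k > n:
            return 0
        return fact[n] * inv_fact[k] % MOD * inv_fact[n - k] % MOD

    return choose
```

This works here because n never exceeds 450,000 < 1000003, so no factorial is divisible by the modulus.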
One obvious solution (which works for small input) is to simulate. The trouble starts when the heights are large, and M is small. Thus, we need to be able to do multiple steps at once.
Begin by noting that, given the current map of heights, we can determine the water levels (i.e., what areas are submerged and what areas are not) using a single run of Dijkstra's shortest path algorithm, in O(RC log(RC)) time.
We will have a single-step procedure. What it does is: compute the water levels with Dijkstra's algorithm, find for each square the lowest water level among its neighbours, and then decrease the height of every square from which water flows by min(WaterLevel(S) - WaterLevel(T), M), all at the same time.
This is one step of the simulation, which is also needed for the naive solution.
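For the small dataset, that naive simulation can be sketched directly (our interpretation of the rules: water levels come from a Dijkstra-style flood inward from the map border, then all squares erode simultaneously; the function names are ours):

```python
import heapq

def water_levels(grid):
    # Level of a cell = the highest point on its best drainage path to the
    # outside (which has level 0), but at least the cell's own height.
    H, W = len(grid), len(grid[0])
    level = [[float('inf')] * W for _ in range(H)]
    pq = []
    for i in range(H):
        for j in range(W):
            if i in (0, H - 1) or j in (0, W - 1):
                level[i][j] = grid[i][j]  # border cells drain straight out
                heapq.heappush(pq, (level[i][j], i, j))
    while pq:
        l, i, j = heapq.heappop(pq)
        if l > level[i][j]:
            continue
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < H and 0 <= nj < W:
                nl = max(l, grid[ni][nj])
                if nl < level[ni][nj]:
                    level[ni][nj] = nl
                    heapq.heappush(pq, (nl, ni, nj))
    return level

def days_to_erode(grid, M):
    H, W = len(grid), len(grid[0])
    days = 0
    while any(h > 0 for row in grid for h in row):
        level = water_levels(grid)
        get = lambda i, j: level[i][j] if 0 <= i < H and 0 <= j < W else 0
        new = [row[:] for row in grid]
        for i in range(H):
            for j in range(W):
                lo = min(get(i + 1, j), get(i - 1, j), get(i, j + 1), get(i, j - 1))
                if lo < level[i][j]:  # water flows out of this square
                    new[i][j] = max(0, grid[i][j] - min(level[i][j] - lo, M))
        grid = new
        days += 1
    return days
```

On the first example map with M=5 this reproduces the three days shown in the statement, and on the second sample it gives five days.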
We will also have a multistep procedure, which tries to perform multiple single steps at once, as long as all of the following are true:
The multistep procedure will work as follows:
Note that in each multistep, one field that was submerged becomes uncovered, and it's easy to see that a field that is not submerged will never become submerged in the future; thus there are at most RC multisteps in the algorithm (actually fewer, because the fields on the edge of the board are never submerged).
Now we want to know how many steps are possible before we can perform a multistep. Define an auxiliary graph that has a node for each unsubmerged field. For each lake we join all the fields in this lake to an arbitrary field on the boundary of the lake which defines the water level for the lake (the outlet of the lake). We define natural edges in the graph: two nodes have an edge if any of the fields merged into the two nodes were adjacent. For each node we can define its "parent" to be one of the adjacent nodes with the lowest level. Thus, we have a tree, rooted in the external sea. We define any path going upward in the tree to be "fixed" if all of the edges along the path have a height difference of at least M.
We can now prove that each day, either all vertices already lie on fixed paths, or at least one new vertex joins a fixed path.
If all vertices lie on fixed paths, then we can perform a multistep. As there can be at most RC increases before all vertices are on fixed paths, this means we get at most RC steps and at most one multistep before some field is uncovered, meaning at most (RC)^{2} steps and RC multisteps in total, giving a running time of (RC)^{3} log(RC). The constants are very small here, so this will easily run in time.
Now consider any fixed path. We assume the node distribution remains the same (i.e., the lakes remain lakes, and nothing gets uncovered). First notice that it remains a fixed path after one day: all of the fields on the path decrease by the same amount, so the differences remain the same. Now take any vertex that is not on a fixed path, but whose parent is. Note that the sea level is on a fixed path by definition, so if no such vertex exists, it means that all vertices are on a fixed path. Then after one day the height of this vertex decreases to the height of the parent, while the height of the parent decreases by M. So the new difference is M, meaning that the vertex has joined the fixed path. This ends the proof.
Note that the upper bound can actually be (more or less) hit in the following map with M = 2: take a winding path whose length is on the order of RC and which does not touch itself, like so:
Here X represents very large values, and P represents the path. Thus, for the foreseeable future, the tree described above is actually the single path marked by Ps.
Now take the following values on the path: A, A - 2L - 1, A+1, A - 4L - 1, A+1, A - 6L - 1, A+1, A - 8L - 1, etc., where L is the length of the path. It is not difficult to check that "off by one" changes get propagated up the path with speed one, and each change gets to be fully propagated before the next one kicks in (due to the uncovering of another lake).
Here at the Google Code Jam, we have always tried to encourage a variety of programming languages. But in this problem, we took things a little further by forcing you to program in the language that started them all: the abstract Turing Machine! Unfortunately, abstract Turing Machines are pretty unwieldy, and that makes even the simple task here quite difficult.
With so few rules allowed, it is important to take advantage of the numbers you can mark onto the lampposts. The obvious approach is to write down, perhaps in binary, the number of steps you need to make, and then have the robot make decisions based on that. Unfortunately, there is also a rather tight limit on the number of moves, which means you cannot afford to keep going back to the start to check on your number. You will need to take the data with you as you move.
Here is one effective way of doing this:
And the copy phase is similar but conceptually simpler.
Almost any implementation of this algorithm should be good enough to solve the problem, but there is room for more optimization if you want a challenge! With the 30-rule limit, we were able to bring the number of steps down to about 95,000. Here is the full program to move 5,000 lampposts:
0 0 -> E 1 1
1 0 -> E 2 3
2 0 -> E 3 1
3 0 -> E 4 3
4 0 -> E 5 2
5 0 -> E 6 3
6 0 -> E 7 1
7 0 -> E 8 3
8 0 -> W 9 3
9 0 -> R
10 0 -> E 11 0
9 1 -> W 9 3
10 1 -> W 10 1
9 2 -> W 10 1
10 2 -> W 10 2
9 3 -> W 10 2
10 3 -> W 10 3
11 1 -> E 12 0
12 0 -> W 9 3
12 1 -> E 12 1
12 2 -> E 13 1
12 3 -> E 14 1
13 0 -> W 10 1
13 1 -> E 12 2
13 2 -> E 13 2
13 3 -> E 14 2
14 0 -> W 10 2
14 1 -> E 12 3
14 2 -> E 13 3
14 3 -> E 14 3
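A program like this is easiest to check by running it through a small interpreter. This sketch (names ours) simulates the robot on an infinite tape of marks, with statements written one per line as "<S> <M> -> <action>":

```python
def run_robot(program, max_steps=250000):
    # program: text with one statement "<S> <M> -> <action>" per line.
    # Returns (release position, number of moves made).
    rules = {}
    for line in program.strip().splitlines():
        left, right = line.split('->')
        s, m = map(int, left.split())
        rules[(s, m)] = right.split()
    pos, state, steps = 0, 0, 0
    marks = {}  # all lampposts start marked 0
    while True:
        action = rules[(state, marks.get(pos, 0))]
        if action[0] == 'R':
            return pos, steps
        direction, new_state, new_mark = action[0], int(action[1]), int(action[2])
        marks[pos] = new_mark
        pos += 1 if direction == 'E' else -1
        state = new_state
        steps += 1
        if steps > max_steps:
            raise RuntimeError('step limit exceeded')
```

Fed the 30 statements above, the interpreter should report a release at lamppost 5,000, well within the 150,000-step limit of the large dataset.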
Can you do better? We'd love to hear about it if so!
This is a rather unconventional problem that requires a lot of thought upfront and relatively little implementation.
Let's think about the problem as a competitive game. Ben chooses a card position, and then Amy chooses the value of that card. Amy is not allowed (a) to reuse values, (b) to create a decreasing subsequence of length 3, or (c) to place the 1 before the final turn. Our task is to understand what decisions Amy could have made during the game. We will present the answer first, and then prove it is correct. Define an "adversarial strategy" for Amy as follows:
We claim that the problem conditions are equivalent to saying that Amy chooses the deck values according to an adversarial strategy. Once that is established, it will be pretty straightforward to find the lexicographically largest solution.
The main technical challenge is to prove that adversarial strategies really do require Ben to look at every card. In a contest, of course, you would not need to make this argument quite so rigorously.
Lemma: If the deck values are assigned according to an adversarial strategy, then the deck contains no decreasing subsequence of length 3.
Proof: We follow along as Ben looks at the cards one at a time. At each step, we will say a card is "safe" if Ben has looked at it, and either (a) all cards with a higher position have also been looked at, or (b) all cards with a higher value have also been looked at. Let's suppose the remaining cards are in positions p_{1} < p_{2} < ... < p_{m}, and have values v_{1} < v_{2} < ... < v_{m}. These are the "unsafe" cards.
We claim the following is true:
Now let's suppose Ben looks at a card C in some position p_{k}.
In any case, the inductive hypothesis holds at each step, and therefore the lemma is proven.
It now follows immediately that Ben requires all N guesses if the cards are assigned according to an adversarial strategy. No matter what cards Ben looks at, Amy can continue her adversarial strategy and avoid revealing either a decreasing subsequence of length 3 or the card with value 1. In particular, Ben can never finish early.
The previous argument is important from a technical perspective, but in practice, you would begin solving this problem from the other side: first showing how Ben can exploit poor strategies. Towards that end, we show that if Amy ever deviates from an adversarial strategy, then Ben can find the 1card without requiring all N turns.
Let's begin by focusing on the first card Ben looks at.
Observation 1: Suppose Ben looks at card i < N. Then Amy must assign it value i + 1.
Reason: Suppose he sees value j ≠ i + 1. We claim that the value-1 card cannot be in position N. Indeed, if it was in that position, then the condition that the deck has no decreasing subsequence of length 3 implies that (a) all cards with position less than i have value in the range [2, j-1], and (b) all cards with position between i + 1 and N - 1 have value in the range [j+1, N]. If j < i + 1, then there are not enough cards in the range [2, j-1] for this to be possible, and if j > i + 1, then there are not enough cards in the range [j+1, N] for this to be possible.
Therefore, if Amy assigns this card a value other than i + 1, then Ben need never look at card N. But we know he did indeed have to look at all the cards, so it must be that Amy assigned value i + 1.
Observation 2: Suppose Ben looks at card N. Then Amy must assign it value N - 1 or N.
Reason: Suppose he sees value j < N - 1. If N = 3, then Ben just found the value-1 card, which is an immediate contradiction. Otherwise, N ≥ 4, and we show that Ben can find the value-1 card in less than N - 1 further card checks, which is another contradiction.
Forgetting the information Ben has already obtained, we know the remaining N - 1 card values are placed in the N - 1 preceding positions. By Observation 1, if Ben checks the card in position i, then it must have the (i+1)^{th} smallest value of the remaining cards. (Otherwise, we already know he can find the value-1 card without checking everything.) So if Ben checks the cards in all positions except N - 3 and N - 1, he must see the following values:
2, 3, ..., j-1, j+1, j+2, ..., N-2, ???, N, ???, j.
The two unknown cards have value 1 and N - 1 in some order. However, we know the second-last card cannot have value N - 1, or we would have a decreasing subsequence: N, N - 1, j. Therefore, the 1 must be in that position, and so Ben never needs to check the fourth-last card, and the proof is complete.
There is only one tricky case left. At a given point, let the remaining unrevealed cards be in positions p_{1} < p_{2} < ... < p_{m}, and have values v_{1} < v_{2} < ... < v_{m}. Suppose that there is a previously revealed card in position less than p_{m} with value between v_{m-1} and v_{m}. We need to show that if Ben looks at the card in position p_{m}, then Amy must assign it value v_{m}.
Reason: Suppose Amy does not do this. Consider the first time where she does not. Let C be the card in position p_{m}, and let C' be the revealed card that is positioned before C and that has value between v_{m-1} and v_{m}.
Up until this point, Amy has followed an adversarial strategy, so we can use the inductive statement proven in our first lemma to divide the deck into safe and unsafe cards. We know there is an unrevealed card positioned after C' (namely C) and there is an unrevealed card value that is higher than the value of C' (namely v_{m}), so C' must be an unsafe card.
Let the unsafe cards have positions q_{1}, q_{2}, ..., q_{r} and values w_{1}, w_{2}, ..., w_{r}. Since v_{m-1} is an unrevealed value, it belongs to an unsafe card and we have v_{m-1} = w_{u} for some u. Also C' has value w_{t} for some t > u and position q_{t-1}.
Now suppose Amy assigns C a value of v_{m-1} = w_{u}. Then the card in position q_{u-1} must be unrevealed, or it would have had the same value. Also, as argued earlier, the card in position q_{r-1} must be unrevealed or it would have value w_{r} and therefore be safe already. Ben can now exploit Amy's strategy by looking at every card other than q_{u-1} and q_{r-1}. It is easy to check this will leave only the values 1 and w_{u+1}. But the card in position q_{r-1} cannot have value w_{u+1} or we would get a decreasing subsequence: w_{t}, w_{u+1}, w_{u}. Therefore, this card must have value 1, and Ben can avoid checking the card with position q_{u-1}. This proves the final part!
We are now finally ready to discuss the solution. At each step, Amy's strategy is almost completely determined. The only question is whether she should assign v_{m} or v_{m-1} to the card in position p_{m} when she has the choice. In fact, the two choices lead to identical deck orderings except with v_{m} and v_{m-1} swapped. In order to make the ordering as large lexicographically as possible, we want v_{m} early on, and so we should always prefer using v_{m-1}.
That's it! We just need to implement the adversarial strategy, always preferring v_{m-1} over v_{m}. But of course it took a lot of reasoning to get to this stage!
Google Royale is perhaps the most unapproachable problem we have ever posed for the Google Code Jam. It requires a great deal of understanding before any progress at all can be made on the large input. This is because the interaction between winning probability and starting money is rather complicated, and you cannot possibly afford to do a complete search through all 10^{16} states. Although the scenario here might seem similar to Millionaire, a previous Code Jam problem, you simply cannot afford the kind of computation that worked there.
First, let's try to understand a betting round. Let the initial bet be y. If you lose k times and then win, your total earnings will be y * 2^{k} - y * 2^{k-1} - y * 2^{k-2} - ... - y, which equals y no matter what k is. If you lose, your total earnings will be negative, and the exact value will depend on how many times you doubled.
The key to the whole problem is that your expected winnings from a betting round is exactly 0, no matter what your initial bet is and no matter how many times you are willing to double if you keep losing. (Recall that the expected winnings is your "average" winnings: formally, it is defined as sum p_{i} * i, where p_{i} is the probability that you win exactly i dollars.) The expected value is 0 because at each step of each round, you have a 50% chance of winning some money, and a 50% chance of losing the same amount of money. The average winnings is always 0 in each step, and therefore 0 overall.
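This zero-expectation property is easy to verify by enumerating a round's outcomes exactly (a sketch; the function name is ours):

```python
from fractions import Fraction

def expected_round_winnings(y, k):
    # One betting round: initial bet y, doubling after each loss,
    # up to k coin flips in total (then you stop and pay up).
    expectation = Fraction(0)
    for i in range(k):
        # Win flip i after losing flips 0..i-1: probability 2^-(i+1),
        # net = (bet won) - (sum of lost bets) = y*2^i - y*(2^i - 1) = y.
        net = y * 2**i - sum(y * 2**j for j in range(i))
        assert net == y
        expectation += Fraction(1, 2 ** (i + 1)) * net
    # Lose all k flips: probability 2^-k, net = -y * (2^k - 1).
    expectation += Fraction(1, 2 ** k) * (-sum(y * 2**j for j in range(k)))
    return expectation
```

expected_round_winnings(y, k) comes out to 0 for every bet y and every number of flips k ≥ 1.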
Now let's think about strategy a little bit. In general, there will be several different strategies that maximize your probability of winning. For now, we will work on identifying only one of them. We will come back to the question of finding the maximum bet later.
Observation 1: If you have x dollars, there is no reason to start a betting round with more than V - x.
Reason: If you bet more than V - x, then winning will always put you over V. If you decrease the bet slightly, then winning is just as good, but losing is no worse. Therefore, you might as well bet less.
From now on, we will always restrict our attention to strategies that follow Observation 1, and therefore, we will never end up with more than V dollars, no matter how lucky or unlucky we are.
Observation 2: Fix a strategy. Let P be the probability of reaching V dollars, and let L be the expected number of dollars you end up with if you lose. Then P = 1 - (V - A) / (V - L). Since V and A are fixed, this means maximizing P is equivalent to minimizing L.
Reason: This follows from the fact that your expected winnings after any number of betting rounds is always equal to 0, or equivalently, your total expected money is always equal to A. Let's see what this tells us at the end of all betting rounds. At that point, you will have won with probability P, in which case your money is exactly V by Observation 1. Or you will have lost with probability 1 - P, in which case your expected money is exactly L. Therefore, we get:
P * V + (1 - P) * L = A
=> P * (V - L) = A - L
=> P = 1 - (V - A) / (V - L).
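As a sanity check, consider the simplest strategy: bet 1 every round and never double, so you lose only by hitting 0 and L = 0. Solving the resulting random walk exactly reproduces the formula (a sketch, with names of our choosing):

```python
from fractions import Fraction

def ruin_probabilities(V):
    """P[x] = probability of reaching V before 0 when betting 1 per round.
    P_x = (P_{x-1} + P_{x+1}) / 2 rearranges to P_{x+1} = 2*P_x - P_{x-1}."""
    P = [Fraction(0), Fraction(1, V)]  # choosing P_1 = 1/V normalizes P_V to 1
    for _ in range(2, V + 1):
        P.append(2 * P[-1] - P[-2])
    return P

V, A = 20, 7
P = ruin_probabilities(V)
assert P[V] == 1
assert P[A] == 1 - Fraction(V - A, V - 0)  # Observation 2 with L = 0
```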
So now the question reduces to making L as small as possible. Using these ideas, we can make a couple major deductions about the optimal strategy.
Observation 3: If you have x dollars, you might as well do one of two things: (1) bet exactly x and double until you win or until it is no longer possible to double, or (2) bet exactly 1 and do not double even if you lose.
Reason: An optimal strategy will bet some number y and double up to k times. If one of the betting steps results in a win, you will end up with x + y dollars. Otherwise, you will end up with some number x - z dollars. As always, your expected number of dollars is constant, and so is equal to x. We will show how to replace the strategy of betting y and doubling up to k times with a new strategy (possibly requiring multiple betting rounds) that follows the rules we want, and that is at least as effective at reaching x + y dollars.
Case 1: x - z ≥ 0. Instead of betting y, repeatedly bet 1 and do not double. Stop only when your total amount of money increases to x + y or decreases to x - z. Since x - z ≥ 0, it is legal to continue betting until one of these outcomes happens. Like the original strategy, this strategy will result in the same two outcomes (x + y dollars or x - z dollars), and the expected money at the end will still be x for the exact same reason. However, as Observation 2 shows, if two strategies have identical winning and losing outcomes and identical expected values at the end, they have identical probability of getting the better outcome. Therefore, this strategy is identical to the original strategy, and we can just do this instead.
Case 2: x - z < 0. Instead of betting y, repeatedly bet 1 and do not double. Stop only when your total amount of money increases to x + y or decreases to y. If you end up at y, then bet y, and keep doubling until you win or go broke. If you win, go back to the first step and continue betting 1 until you reach x + y or return back down to y. If you end up at y again, bet y, and repeat. Like the original strategy, this will end with you having exactly x + y dollars or with you being broke. However, it is easy to check that L is strictly smaller here than in the original strategy while the expected money at the end, as always, stays equal to x. Therefore, Observation 2 guarantees that this new strategy is strictly better than the original strategy.
We have shown that any strategy can switch to betting 1 or x without becoming any worse, and that proves Observation 3.
This is a very powerful observation, but there is still more to understand! With 10^{16} possible money amounts, we cannot afford to try both strategies in every situation. There is one more idea that drastically cuts down the search space.
Observation 4: For each non-negative integer i, let M_{i} be the largest integer such that M_{i} * 2^{i} ≤ M. We will call these numbers "inflection points". Then you should never bet all your money unless the amount you have is an inflection point.
Reason: The proof here is very similar to Observation 3. Consider a strategy that bets everything for x satisfying M_{i} < x < M_{i-1}. Let y equal x/2, rounded up. Note that 2y ≤ x + 1 ≤ M_{i-1}, so if we bet y, then we can double i times.
The current strategy either gives you 2x dollars or x * (1 - 1 - 2 - ... - 2^{i-1}) = -2x * (2^{i-1} - 1) dollars. However, y / x ≥ 1/2 > (2^{i-1} - 1) / (2^{i} - 1). Therefore, the losing amount of money for this strategy is more than -2y * (2^{i} - 1).
Now we consider an alternate strategy. Instead of going all in at x, repeatedly bet 1 and do not double. Stop only when your total amount of money increases to 2x or decreases to y. In the latter case, go all in and double up to i times, or until you win. As noted above, your bet will never exceed M, so this is a legal strategy. If you win, start over from the beginning. Like the original strategy, this will either leave you with 2x dollars or broke. However, if you are broke, you end up with y * (1 - 1 - 2 - ... - 2^{i}) = -2y * (2^{i} - 1). As argued above, this losing value is less than it was for the original strategy, and so Observation 2 implies the new strategy has a higher probability of reaching 2x. Therefore, the original strategy could not have been optimal.
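Since M_{i} is just floor(M / 2^{i}), there are only O(log M) inflection points even for M as large as 10^{16}. A quick sketch:

```python
def inflection_points(M):
    """M_i = largest integer with M_i * 2^i <= M, i.e. floor(M / 2^i)."""
    points, i = [], 0
    while M >> i > 0:
        points.append(M >> i)
        i += 1
    return points

print(inflection_points(100))  # [100, 50, 25, 12, 6, 3, 1]
```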
When to go All-In: Let's say a dollar amount is an "all-in" point if we should bet everything when we have that amount of money. Observation 4 guarantees that all-in points are a subset of the inflection points, but it is not true that every inflection point is an all-in point.
Consider the inflection points in increasing order. The smallest inflection point will be 1, and certainly that is an all-in point. Let's now consider the second smallest inflection point. Observation 2 says that our goal is to minimize L, so we calculate what we end up with if we go all in from this inflection point and lose. If it is our lowest total yet, we should go all in there. Otherwise, we can do better by betting 1 until we get to the lower inflection point. The same argument applies for the third inflection point, and then the fourth and so on. In this way, we can very quickly calculate an optimal strategy at every dollar amount.
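The scan might look like the following sketch. We take "lowest total yet" to mean strictly lower, and use the fact that going all in from M_{i} and losing every one of the i + 1 bets leaves M_{i} - M_{i} * (2^{i+1} - 1) dollars:

```python
def all_in_points(M):
    """Scan inflection points in increasing order, keeping those whose
    all-in losing outcome is strictly lower than anything seen so far."""
    pts, i = [], 0
    while M >> i > 0:
        pts.append((M >> i, i))   # (M_i, number of doublings available)
        i += 1
    pts.reverse()                 # smallest dollar amount first

    chosen, best = [], None
    for m, i in pts:
        end = m - m * (2 ** (i + 1) - 1)  # total after losing all i + 1 bets
        if best is None or end < best:
            best = end
            chosen.append(m)
    return chosen

print(all_in_points(100))  # [1, 3]
```

For M = 100 only the inflection points 1 and 3 survive: going all in from 3 (doubling 5 times) can lose 189 dollars in total, which is deeper than any losing outcome reachable from the larger inflection points.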
There are two tricky cases to watch out for here:
Calculating the Winning Probability: Let P_{x} denote the probability of reaching V if your current amount of money is x. If x is not a strict all-in point, then one optimal strategy is to bet 1 until we reach the strict all-in point directly above x or directly below x, or until we reach V itself. For any x between these all-in points, we have the recurrence: P_{x} = (P_{x-1} + P_{x+1})/2, or in other words, P_{x} - P_{x-1} = P_{x+1} - P_{x}. This implies P_{x} is linear in this range, and therefore we can calculate P_{x} if we know the value of P at both endpoints.
We can now calculate P by starting from the largest all-in point and working down. For example, suppose the largest all-in point is y and the probability of losing immediately from going all in there is 1 - p. Then P_{y} = p * P_{2y}. Also, by linearity, P_{2y} = P_{y} * (V - 2y) / (V - y) + 1 * y / (V - y). We know p, V, and y, so it is easy to calculate P_{y} from here. Once we have P_{y}, we can use the same trick to calculate P for the second-largest all-in point, then for the third-largest all-in point, and so on, until eventually we have calculated all probabilities.
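Substituting the linearity relation into P_{y} = p * P_{2y} and solving for P_{y} gives P_{y} = p * y / ((V - y) - p * (V - 2y)). A sketch of this one step (assuming 2y ≤ V; the function name is ours):

```python
from fractions import Fraction

def p_at_largest_all_in_point(p, V, y):
    """Solve P_y = p * P_{2y} with P_{2y} = P_y*(V - 2y)/(V - y) + y/(V - y)."""
    return p * y / ((V - y) - p * (V - 2 * y))

# If the all-in round from y can never lose (p = 1), we always reach V.
print(p_at_largest_all_in_point(Fraction(1), 100, 10))     # 1
print(p_at_largest_all_in_point(Fraction(3, 4), 100, 10))  # 1/4
```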
Calculating the Largest Optimal Bet: It remains only to calculate the largest optimal bet. If A is an all-in point, either strict or optional, then certainly we can and should bet A there.
Otherwise, consider a dollar amount x. If x is not a strict all-in point, then we already saw that P_{x} - P_{x-1} = P_{x+1} - P_{x}. This means P_{x} is completely linear between strict all-in points, and so we can certainly increase the bet until either winning or losing would take us to a strict all-in point on either side. If x is equal to a strict all-in point, however, then 1 is not an optimal bet, and we have P_{x} > (P_{x-1} + P_{x+1}) / 2, or equivalently, P_{x} - P_{x-1} > P_{x+1} - P_{x}. In particular, this means that increasing the bet further would hurt us. Therefore, the largest possible bet is the distance between A and the nearest strict all-in point.
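Putting this together, the final answer might be computed as in the following sketch (the cap at V - A reflects Observation 1; the strict all-in points [1, 3] in the usage line are a hypothetical example):

```python
def largest_optimal_bet(A, V, strict_points):
    """Largest optimal bet with A dollars: A itself if A is an all-in
    point, otherwise the distance from A to the nearest strict all-in
    point, capped at V - A (Observation 1)."""
    if A in strict_points:
        return A
    nearest = min(abs(A - s) for s in strict_points)
    return min(nearest, V - A)

print(largest_optimal_bet(7, 100, [1, 3]))  # 4
```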