The last of our Round 1 sub-rounds is over! A further 1000 contestants have advanced to Round 2 and Distributed Round 1. For those of you who did not advance, we hope that you found the Round 1 problems enjoyable, and we certainly hope to see you back next year!

This round featured *Ample Syrup*, a surprise additional pancake-themed
problem for 2017. Although it was superficially geometry-based, it was really
more about reducing the complexity of a search space.
*Parenting Partnering* was inspired by real life events on the Code
Jam team; to solve it, it helped to think carefully about intervals and have
a greedy insight. Finally, *Core Training* was a bit of a wild card. The
first dataset is tractable, and solving it provides a hint toward the solution
of the second dataset, but getting there still required some tricky observations
and perhaps some experimentation. Even with multiple attempts allowed, less than
1% of the people who solved the first dataset went on to solve the second one
correctly!

Slightly more than 1000 coders scored 57 points or better (the score for solving both datasets of each of the first two problems), so almost all of those people will advance to the next round.

EgorKulikov won the round, solving all three problems with a time of 43:59. Seven more coders also got a perfect score of 100.

**Cast**

Problem A (Ample Syrup): Written by Pablo Heiber and Ian Tullis. Prepared by John Dethridge.

Problem B (Parenting Partnering): Written by Pablo Heiber. Prepared by Ian Tullis.

Problem C (Core Training): Written by David Arthur and Ian Tullis. Prepared by Ahmed Aly.

Solutions and other problem preparation and review by Md Mahbubul Hasan, Brian Hirashiki, Lalit Kundu, Petr Mitrichev, Rohan Mukkamala, Trung Thanh Nguyen, and Erick Wong.

Analysis authors:

- Ample Syrup: Ian Tullis
- Parenting Partnering: Ian Tullis
- Core Training: David Arthur, Pablo Heiber, and Ian Tullis

**Problem A: Ample Syrup**

The kitchen at the Infinite House of Pancakes has just received an order for
a stack of **K** pancakes! The chef currently has **N** pancakes
available, where **N** ≥ **K**. Each pancake is a cylinder, and
different pancakes may have different radii and heights.

As the sous-chef, you must choose **K** out of the **N**
available pancakes, discard the others, and arrange those **K** pancakes
in a stack on a plate as follows. First, take the pancake that has the
largest radius, and lay it on the plate on one of its circular faces. (If
multiple pancakes have the same radius, you can use any of them.) Then, take
the remaining pancake with the next largest radius and lay it on top of that
pancake, and so on, until all **K** pancakes are in the stack and the
centers of the circular faces are aligned in a line perpendicular to the
plate, as illustrated by this example:

You know that there is only one thing your diners love as much as they love pancakes: syrup! It is best to maximize the total amount of exposed pancake surface area in the stack, since more exposed pancake surface area means more places to pour on delicious syrup. Any part of a pancake that is not touching part of another pancake or the plate is considered to be exposed.

If you choose the **K** pancakes optimally, what is the largest total
exposed pancake surface area you can achieve?

The first line of the input gives the number of test cases, **T**.
**T** test cases follow. Each begins with one line with two integers
**N** and **K**: the total number of available pancakes, and the size
of the stack that the diner has ordered. Then, **N** more lines follow.
Each contains two integers **R_{i}** and **H_{i}**: the radius and height of the i-th pancake, in millimeters.

For each test case, output one line containing `Case #x: y`, where `x` is the test case number (starting from 1) and `y` is the maximum possible total exposed pancake surface area, in square millimeters. `y` will be considered correct if it is within an absolute or relative error of 10^{-6} of the correct answer. See the FAQ for an explanation of what that means, and what formats of real numbers we accept.

Time limit: 20 seconds per test set.

Memory limit: 1 GB.

1 ≤ **T** ≤ 100.

1 ≤ **K** ≤ **N**.

1 ≤ **R_{i}** ≤ 10^{6}, for all i.

1 ≤ **H_{i}** ≤ 10^{6}, for all i.

Small dataset

1 ≤ **N** ≤ 10.

Large dataset

1 ≤ **N** ≤ 1000.

Sample Input

```
4
2 1
100 20
200 10
2 2
100 20
200 10
3 2
100 10
100 10
100 10
4 2
9 3
7 1
10 1
8 4
```

Sample Output

```
Case #1: 138230.076757951
Case #2: 150796.447372310
Case #3: 43982.297150257
Case #4: 625.176938064
```

In sample case #1, the "stack" consists of only one pancake. A stack of just the first pancake would have an exposed area of π × **R_{0}**^{2} + 2 × π × **R_{0}** × **H_{0}** = 10000π + 4000π = 14000π mm^{2}. A stack of just the second pancake has an exposed area of 40000π + 4000π = 44000π mm^{2}, which is larger, so that is the optimal choice.

In sample case #2, we can use both of the same pancakes from case #1. The
first pancake contributes its top area and its side, for a total of 14000π
mm^{2}. The second pancake contributes some of its top area (the
part not covered by the first pancake) and its side, for a total of 34000π
mm^{2}. The combined exposed surface area is 48000π
mm^{2}.

In sample case #3, all of the pancakes have radius 100 and height 10. If we
stack two of these together, we effectively have a single new cylinder of
radius 100 and height 20. The exposed surface area is 14000π
mm^{2}.

In sample case #4, the optimal stack uses the pancakes with radii of 8 and 9.

**Problem B: Parenting Partnering**

Cameron and Jamie are longtime life partners and have recently become parents! Being in charge of a baby, exciting as it is, is not without challenges. Given that both parents have a scientific mind, they have decided to take a scientific approach to baby care.

Cameron and Jamie are establishing a daily routine and need to decide who will be the main person in charge of the baby at each given time. They have been equal partners their whole relationship, and they do not want to stop now, so they decided that each of them will be in charge for exactly 12 hours (720 minutes) per day.

Cameron and Jamie have other activities that they either need or want to do on their own. Cameron has **A_{C}** of these activities and Jamie has **A_{J}**. Each activity takes place during a fixed interval of the day, specified in minutes; the day is 24 × 60 = 1440 minutes long, starting at minute 0 at midnight.
Cameron and Jamie want to come up with a daily baby care schedule such that:

- Scheduled baby time must not interfere with a scheduled activity. That is, during Cameron's activities, Jamie has to be in charge of the baby, and vice versa.
- Each of Cameron and Jamie must have exactly 720 minutes assigned to them.
- The number of exchanges — that is, the number of times the person in charge of the baby changes from one partner to the other — must be as small as possible.

For example, suppose that Jamie and Cameron have a single activity each: Jamie has a morning activity from 9 am to 10 am, and Cameron has an afternoon activity from 2 pm to 3 pm. One possible but suboptimal schedule would be for Jamie to take care of the baby from midnight to 6 am and from noon to 6 pm, and for Cameron to take care of the baby from 6 am to noon and 6 pm to midnight. That fulfills the first two conditions, and requires a total of 4 exchanges, which happen at midnight, 6 am, noon and 6 pm. If there is an exchange happening at midnight, it is counted exactly once, not zero or two times.

A better option would be for Cameron to take care of the baby from midnight to noon, and Jamie to take care of the baby from noon to midnight. This schedule also fulfills the first two conditions, but it uses only 2 exchanges, which is the minimum possible.

Given Cameron's and Jamie's lists of activities, and the restrictions above, what is the minimum possible number of exchanges in a daily schedule?

The first line of the input gives the number of test cases, **T**. **T** test cases follow. Each test case starts with a line containing two integers **A_{C}** and **A_{J}**: the number of activities for Cameron and Jamie, respectively. Then, **A_{C}** lines follow; the i-th of these contains two integers **C_{i}** and **D_{i}**, the starting and ending times of one of Cameron's activities. Then, **A_{J}** more lines follow, giving the starting times **J_{i}** and ending times **K_{i}** of Jamie's activities in the same way.

For each test case, output one line containing `Case #x: y`, where `x` is the test case number (starting from 1) and `y` is the minimum possible number of exchanges, as described in the statement.

Time limit: 20 seconds per test set.

Memory limit: 1 GB.

1 ≤ **T** ≤ 100.

0 ≤ **C_{i}** < **D_{i}** ≤ 24 × 60, for all i.

0 ≤ **J_{i}** < **K_{i}** ≤ 24 × 60, for all i.

Any two of the intervals of {[**C_{i}**, **D_{i}**]} ∪ {[**J_{i}**, **K_{i}**]} may share endpoints, but do not otherwise overlap.

sum of {**D_{i}** - **C_{i}**} ≤ 720.

sum of {**K_{i}** - **J_{i}**} ≤ 720.

Small dataset

0 ≤ **A_{C}** ≤ 2.

0 ≤ **A_{J}** ≤ 2.

1 ≤ **A_{C}** + **A_{J}** ≤ 2.

Large dataset

0 ≤ **A_{C}** ≤ 100.

0 ≤ **A_{J}** ≤ 100.

1 ≤ **A_{C}** + **A_{J}** ≤ 200.

Sample Input

```
5
1 1
540 600
840 900
2 0
900 1260
180 540
1 1
1439 1440
0 1
2 2
0 1
1439 1440
1438 1439
1 2
3 4
0 10
1420 1440
90 100
550 600
900 950
100 150
1050 1400
```

Sample Output

```
Case #1: 2
Case #2: 4
Case #3: 2
Case #4: 4
Case #5: 6
```

Note that Cases #4 and #5 would not appear in the Small dataset.

Case #1 is the one described in the problem statement.

In Case #2, Jamie must cover for all of Cameron's activity time, and then Cameron must cover all the remaining time. This schedule entails four exchanges.

In Case #3, there is an exchange at midnight, from Cameron to Jamie. No matter how the parents divide up the remaining 1438 non-activity minutes of the day, there must be at least one exchange from Jamie to Cameron, and there is no reason to add more exchanges than that.

In Case #4, note that back-to-back activities can exist for the same partner or different partners. There is no exchange at midnight because Cameron has activities both right before and right after that time. However, the schedule needs to add some time for Cameron in between Jamie's activities, requiring a total of 4 exchanges. Notice that it is optimal to add a single interval for Cameron of length 718 somewhere between minutes 2 and 1438, but the exact position of that added interval does not impact the number of exchanges, so there are multiple optimal schedules.

In Case #5, a possible optimal schedule is to assign Cameron to the intervals (in minutes) 100-200, 500-620, and 900-1400.

**Problem C: Core Training**

Writing Code Jam problems is hard, so we have built an AI to come up with new
ideas. To make the AI as creative as possible, we have given it **N**
different "cores", each of which has its own "personality". However, just
like people, these cores may become distracted or corrupt or may refuse to
work; the i-th core has a *success probability* **P_{i}** of functioning properly. As long as at least **K** of the cores function properly, the AI will function properly; otherwise, it will not.

To prevent this from happening, we plan to train one or more of the cores to
become more reliable. We have a total of **U** "training units" that we
can use to improve the cores. Spending X units on the i-th core will
add X to its success probability. We can divide up the units among the
cores however we like, and it is possible that one or more cores may not
receive any units. Of course, a core's success probability cannot be
increased above 1.

If we assign the training units to maximize the probability that the AI will function properly, what is that probability?

This problem has 2 Small datasets and no Large dataset. You must solve the first Small dataset before you can attempt the second Small dataset. You will be able to retry either of the datasets (with a time penalty).

The first line of the input gives the number of test cases, **T**.
**T** test cases follow; each consists of three lines. The first line
contains two integers **N** and **K**: the total number of cores, and
the minimum number of cores that must succeed for the AI to function
properly. The second line contains one rational **U**: the number of
training units. The third line contains **N** rational numbers
**P _{i}**; the i-th of these gives the probability that the i-th
core will function properly. All of these probabilities are specified to
exactly four decimal places of precision.

For each test case, output one line containing `Case #x: y`, where `x` is the test case number (starting from 1) and `y` is the probability that the AI will function properly if the training units are assigned optimally. `y` will be considered correct if it is within an absolute or relative error of 10^{-6} of the correct answer. See the FAQ for an explanation of what that means, and what formats of real numbers we accept.

Memory limit: 1 GB.

1 ≤ **T** ≤ 100.

1 ≤ **N** ≤ 50.

For all i, 0.0000 ≤ **P _{i}** ≤ 1.0000.

0.0000 ≤ **U**.

Small dataset 1

Time limit: 20 seconds.

**K** = **N**. (All of the cores must function properly for the AI to function properly.)

Small dataset 2

Time limit: 40 seconds.

1 ≤ **K** ≤ **N**.

Sample Input

```
4
4 4
1.4000
0.5000 0.7000 0.8000 0.6000
2 2
1.0000
0.0000 0.0000
2 1
0.0000
0.9000 0.8000
2 1
0.1000
0.4000 0.5000
```

Sample Output

```
Case #1: 1.000000
Case #2: 0.250000
Case #3: 0.980000
Case #4: 0.760000
```

Note that the last two sample cases would not appear in Small dataset 1.

In Sample Case #1, we have enough training units to spend to give all cores a success probability of 1, so the AI will certainly function properly.

In Sample Case #2, both of the cores must function properly for the AI to function properly, so we must give each core at least some training units. The best option turns out to be to train each one up to 0.5. Then the probability that the AI functions properly is 0.5 × 0.5 = 0.25. Any other assignment is inferior; for instance, if we train one core to 0.9 and the other core to 0.1, the probability of success is only 0.9 × 0.1 = 0.09.

In Sample Case #3, we have no training units to spend, and at least one of
the two cores must function properly for the AI to function properly. We can
approach this by first calculating the probability that the AI does
*not* function properly, which happens only if both cores fail to
function properly. The probability that both cores fail is (1 - 0.9) ×
(1 - 0.8) = 0.02. So the probability that at least one core functions
properly, and thus that the AI functions properly, is 1 - 0.02 = 0.98.

In Sample Case #4, the optimal strategy is to give all the training units to the second core. That makes the probability of at least one core functioning properly 1 - (0.4 × 0.6) = 0.76. All other options are inferior; for example, giving all the training units to the first core only yields 0.75, and dividing them equally among the cores gives 0.7525.

**Analysis: Ample Syrup**

With at most ten pancakes to consider, we can easily enumerate and check all subsets of size **K**, perhaps using a library function such as Python's `itertools.combinations`. When we are considering a subset, the statement's rules tell us exactly how to stack them: in nondecreasing radius order from top to bottom. So all we need is a way to calculate a stack's exposed pancake surface area.

Except for the top pancake, any pancake that is not completely covered by the
one above it exposes a ring-shaped outer area of its upper surface. It is
possible to calculate and sum the areas of these rings, but there is a much
easier method. Observe that the exposed top areas of all pancakes in the
stack exactly add up to the top area of the bottom pancake in the stack. That
is, if you were to look down on the exact center of the stack, ignoring the
heights of the pancakes, the view would be indistinguishable from the top of
the bottom pancake. So the exposed surface area of a stack is equal to the
top area of the bottom pancake, plus the combined areas of all pancakes'
sides. This is π × **R**^{2} for the bottom pancake, plus the sum (over all pancakes in the stack) of 2 × π × **R_{i}** × **H_{i}**.
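Putting these two pieces together, the Small solution fits in a few lines. Here is a minimal sketch (the function names are ours, not from the statement), representing each pancake as a `(radius, height)` pair:

```python
import itertools
import math

def exposed_area(stack):
    # The exposed top areas telescope into the top area of the widest
    # (bottom) pancake; every pancake contributes its full side area.
    return (math.pi * max(r for r, h in stack) ** 2
            + sum(2 * math.pi * r * h for r, h in stack))

def best_area_small(pancakes, k):
    # Enumerate every subset of size k; feasible for the Small dataset (N <= 10).
    return max(exposed_area(subset)
               for subset in itertools.combinations(pancakes, k))
```

For example, with the pancakes from sample case #4, `best_area_small([(9, 3), (7, 1), (10, 1), (8, 4)], 2)` evaluates to 199π ≈ 625.1769.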

For the Large dataset, we cannot afford to check every possible subset, so we
need a better approach. Suppose that we choose a certain pancake P to be on
the bottom of our stack. Then every other pancake in the stack must have a
radius no larger than the radius of P. Recall from our area-calculating
simplification above that the other pancakes besides P effectively only
contribute their sides to the total exposed surface area, so we do not need
to think about their tops. So, out of the pancakes besides P, we should choose the **K** - 1 of them that have the largest values of **R_{i}** × **H_{i}**, that is, the largest side areas.

So, we can try every pancake as a possible bottom; once we choose a possible
bottom, the criterion above tells us exactly which other pancakes we should
stack on top of it. We can simply search the list for these each time we try
a new bottom, given the low maximum value of **N**, but we can do better.
For example, we can start with one list of pancakes sorted in nonincreasing
order by side area, and another list of pancakes sorted in nonincreasing
order by radius. Then we can go through the list of possible radii in
decreasing order, treating each pancake in turn as the possible bottom; for
each one, we choose the first **K** - 1 pancakes from the list sorted by
side area. Whenever we encounter a pancake in the list sorted by side area
that comes earlier in the list sorted by radius than our current possible
bottom does, we can remove it forever from the list sorted by side area. It
is always safe to do this. If that pancake's radius is larger than the
radius of our possible bottom, we cannot use it now or ever again, since all
future possible bottoms will have a radius that is no larger. If that
pancake's radius is equal to the radius of our possible bottom, we have
already tried to use that pancake as a bottom previously (since it is earlier
in the list sorted by radius), so we have already checked an equivalent stack
in which pancakes of the same radius differed only in their order in the
stack. Of course, if we encounter our current possible bottom in the list
sorted by side area, we should remove that too, because we cannot use the
same pancake twice in a stack.

This strategy drops the time complexity to O(**N** log **N** (for the
sorts) + **N** × **K**), which is effectively
O(**N**^{2}) in the worst case. We can further improve upon this
by storing our best set of **K** - 1 as a min heap / priority queue
based on index in the radius list, so that we do not need to check
all **K** - 1 values each time to see whether they have "expired". This
drops the complexity to O(**N** log **N** + **K** log **K**),
which is equivalent to O(**N** log **N**).
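The heap idea can also be swept in the opposite direction, which makes for a compact sketch (names are ours): sort by radius ascending, and when pancake i is the candidate bottom, the candidate tops are exactly the pancakes before it in sorted order, tracked in a min-heap holding the **K** - 1 largest side areas seen so far.

```python
import heapq
import math

def best_area_large(pancakes, k):
    # pancakes: list of (radius, height). Sort by radius ascending; when
    # pancake i is the bottom, the best tops are the k - 1 largest side
    # areas among pancakes 0..i-1. (Ties in radius are covered because an
    # equal-radius stack is scored when its last-sorted member is the bottom.)
    ps = sorted(pancakes)
    best = 0.0
    heap = []            # min-heap of the kept side areas
    side_sum = 0.0       # running sum of the heap's contents
    for i, (r, h) in enumerate(ps):
        if i >= k - 1:   # enough earlier pancakes to supply the k - 1 tops
            best = max(best,
                       math.pi * r * r + 2 * math.pi * r * h + side_sum)
        side = 2 * math.pi * r * h
        heapq.heappush(heap, side)
        side_sum += side
        if len(heap) > k - 1:          # keep only the k - 1 largest sides
            side_sum -= heapq.heappop(heap)
    return best
```

Each pancake enters and leaves the heap at most once, so the sweep costs O(**N** log **N**) overall.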

Test Data

We recommend that you practice debugging solutions without looking at the test data.

**Analysis: Parenting Partnering**

It is clear that whenever one parent has an activity, the other parent must care for the baby throughout that activity. But how should Cameron and Jamie divide up their remaining time after all activities are covered?

In the Small dataset, there are at most two activities, so we can use casework:

- If only one parent has an activity, an optimal solution is to have the other parent care for the baby in a continuous 720 minute block that includes that activity. Then the parent with the activity can handle the other 720 minutes. So only 2 exchanges are required.
- If both parents have one activity each, the same idea as above works. It is always possible to choose some way to divide the 24-hour cycle exactly in half, such that Cameron's activity falls completely within Jamie's half and vice versa. So the answer is again 2.
- If one parent (without loss of generality, say Cameron) has two activities, those activities divide the 24-hour cycle into two gaps: one from activity 1 to activity 2, and one from activity 2 back to activity 1. Jamie has to cover both activities. If one of these gaps is empty (which can occur if the activities are back-to-back) or can be completely filled in by Jamie, then the split-the-day-in-half strategy works again and the answer is 2. But if the activities are too far apart and/or too long, Jamie may not have enough remaining time to fill in either gap; in that case, both gaps must contain a switch from Jamie to Cameron and back, so the answer is 4. (We will explain later how we can fill in such gaps without adding more exchanges than we need to, regardless of how much time each parent contributes to filling in the gap.)

It does not take much code to implement the above arguments. However, care must be taken to handle midnight correctly when calculating the lengths of the gaps in the third case.

Let's consider a list of the activities, sorted by time. To gracefully handle the midnight transition and the time before the first activity / after the last activity, we will add a copy of the first activity to the end of the list. In this list, each interval is surrounded by two activities. Some intervals may be instantaneous transitions between activities that are back-to-back (let's call these "empty"); the other intervals represent all of the non-activity time. Each of these other intervals is either surrounded by activities covered by different parents, or by activities covered by the same parent.

We have no decisions to make about the empty intervals, but we must count up the exchanges they add. What about the different-parent intervals? For each, we will have to add some time from one or both parents to cover the interval; we can always do this optimally by starting with the time (if any) from the parent covering the activity on the left, and ending with the time (if any) from the parent covering the activity on the right. This strategy always leaves us with just one exchange. We can do no better than that (since the two different parents guaranteed that there would be at least one exchange), and we have no reason to do worse than that. So we do not need to worry about the different-parent intervals at all! We can assume that whatever time we have available (from one or both parents) can fill them up without creating any new exchanges.

Same-parent intervals require some more thought. If we can fill one in with
time from the parent who is covering the endpoint activities, we can avoid an
exchange, whereas if we include any time at all from the other parent, we
will add *two* more exchanges. (We can avoid adding more than two by
putting the time from the other parent in one continuous chunk.) We may
not be able to avoid the latter, depending on how much available time each
parent has left, but we should fill as many intervals with "matching" time as
we can, to minimize the number of new exchanges we create. Each interval
contributes equally to the number of possible exchanges, but the intervals
may have different lengths, and longer ones take up more of a parent's
available time. So our best strategy is to sort the list of Cameron-bounded
intervals and greedily fill in the shortest ones with Cameron time until we
do not have enough Cameron time left to fill in the shortest remaining
interval. Then, we do the same for the Jamie-bounded intervals. For every
interval we fail to fill in this way, we add two exchanges to our total.

After we have handled the same-parent intervals, we are done and our current total number of exchanges is our answer. Whatever remaining time we have from one or both parents can safely go into the different-parent intervals without creating any new exchanges. As explained above, we do not need to think about the details; we leave those as an exercise for Cameron and Jamie.

This method is O(**N** log **N**) (because of the required sorting
operations) and is easily fast enough for the Large dataset.
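The whole algorithm condenses into a short function. A sketch (the function name and input format are ours; each activity is a `(start, end)` pair of minutes):

```python
def min_exchanges(cameron_acts, jamie_acts):
    # During each activity, the OTHER parent is in charge of the baby.
    covered = ([(s, e, 'J') for s, e in cameron_acts]
               + [(s, e, 'C') for s, e in jamie_acts])
    covered.sort()
    # Handle the midnight wrap-around by repeating the first interval a day later.
    s0, e0, w0 = covered[0]
    covered.append((s0 + 1440, e0 + 1440, w0))

    exchanges = 0
    gaps = {'C': [], 'J': []}   # same-parent gaps, keyed by the in-charge parent
    for (s1, e1, w1), (s2, e2, w2) in zip(covered, covered[1:]):
        if w1 != w2:
            exchanges += 1      # exactly one exchange must occur in this gap
        elif s2 > e1:
            gaps[w1].append(s2 - e1)   # may be fillable without exchanges
    # Spare time each parent has beyond what the partner's activities force.
    spare = {'C': 720 - sum(e - s for s, e in jamie_acts),
             'J': 720 - sum(e - s for s, e in cameron_acts)}
    for parent in 'CJ':
        for gap in sorted(gaps[parent]):   # greedily fill the shortest gaps
            if spare[parent] >= gap:
                spare[parent] -= gap       # filled: no new exchanges
            else:
                exchanges += 2             # the other parent must step in
    return exchanges
```

Running this on the five sample cases reproduces the sample output (2, 4, 2, 4, 6).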

Test Data

We recommend that you practice debugging solutions without looking at the test data.

**Analysis: Core Training**

In the Small dataset, every core must succeed. The probability that this will happen is the product of the success probabilities of the individual cores: **P_{0}** × **P_{1}** × ... × **P_{N-1}**.

We will show that it is always best for us to spend our units on the core with the lowest success probability. Suppose that **P_{0}** ≤ **P_{1}**, and that we have a small amount d of units to spend on one of these two cores. Spending it on core 0 changes the product of their probabilities to (**P_{0}** + d) × **P_{1}**, whereas spending it on core 1 yields **P_{0}** × (**P_{1}** + d); the difference between the former and the latter is d × (**P_{1}** - **P_{0}**) ≥ 0, so raising the lower probability is always at least as good.
The argument above has one issue to iron out: what if increasing **P_{0}** causes it to rise above **P_{1}**? In that case, we should stop spending as soon as the two probabilities are equal, and from then on raise them together as a single lowest value. Continuing in this way, we repeatedly raise the current set of minimal probabilities, never pushing any probability above 1, until the units run out.
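Equivalently, the final assignment raises every probability below some common level up to that level, and a binary search finds the level at which exactly **U** units are spent. A sketch of the Small solution (names are ours):

```python
def max_all_succeed(probs, units):
    # Cost of raising every probability below `level` up to `level`;
    # probabilities are never pushed above 1.
    def cost(level):
        return sum(max(0.0, min(level, 1.0) - p) for p in probs)

    lo, hi = 0.0, 2.0              # the optimal level lies in [0, 2)
    for _ in range(100):           # binary search on the common level
        mid = (lo + hi) / 2
        if cost(mid) <= units:
            lo = mid
        else:
            hi = mid
    product = 1.0
    for p in probs:
        product *= min(1.0, max(p, lo))
    return product
```

On sample case #2, `max_all_succeed([0.0, 0.0], 1.0)` raises both cores to 0.5 and returns 0.25.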

In the Large dataset, **K** of the cores must succeed. Now it is no longer
necessarily optimal to improve the smallest probability. For example, suppose
that **N** = 2, **K** = 1, and **U** = 0.01, and the
**P _{i}** values for the two cores are 0.99 and 0.01. If we spend
all of our 0.01 units on the first core, its success probability becomes 1
and we succeed regardless of whether the second core succeeds. There is no
reason to consider spending any units on the second core.

Let's extend this strategy of investing most heavily in a subset of the cores
and ignoring the rest. We will sort the cores' success probabilities from
lowest to highest, and focus on the ones from some index i onward. As in the
Small solution, we will start by improving the success probability of the
core at index i to match the success probability of the core at index i+1,
then improve those two until they match the success probability of the core
at index i+2, and so on. (If all of the success probabilities including and
beyond index i become 1, then we can improve the core at index i-1, and so
on.) We will show that, for some value of i, this is the optimal strategy.
We can then try every possibility for i and keep the one that yields the
largest overall answer. Notice that, for an optimal i, at most one "previous" core i-1 ever needs to be improved: there is at most one i for which the available units are enough to bring cores i, i+1, ..., **N** all the way up to probability 1 and still have some left over for core i-1 without bringing it to 1 (otherwise, we could simply have chosen i-1 as the starting index instead).

First, let us consider whether it is better to improve core i or core
i+1. Let A_{i} and B_{i} be the probability of exactly
**K**-2 and **K**-1, respectively, of the cores
1, 2, ..., i - 1, i + 2, i + 3, ..., **N** succeeding. Let
P_{i,d} be the probability of at least **K** cores succeeding
if the success probability of core i is P_{i}+d and the
probability of success of any other core j is P_{j}.
By replacing the definitions and cancelling out some values, we
can see that P_{i,d} - P_{i+1,d} =
(A_{i} - B_{i}) ×
(P_{i+1} - P_{i}) × d.
Since (P_{i+1} - P_{i}) × d is positive,
improving core i+1 is better than improving core i if and only if
B_{i} > A_{i}. Moreover, this doesn't depend on
the initial success probabilities of cores i+1 and i, but only on their
relative values. So, if we improve core i+1 a little bit, it is still better
to keep improving core i+1 if we can instead of switching to improve core i.

Now we want to show that there is some i_{0} such that
B_{i} > A_{i} if and only if i ≥ i_{0}.
That i_{0} is the core where we want to start improving.
Notice that A_{i} depends on **N**-2 probabilities,
and A_{i+1} depends on other **N**-2 cores, but
**N**-3 of those overlap. Assume fixed **N**-3 core probabilities
and let A(p) be the probability of exactly **K**-2 total successes among the **N**-3 fixed cores together with one additional core whose success probability is p. Define B(p) similarly, for exactly **K**-1 successes. We will show that, as p grows, B(p) - A(p) can change sign from negative to positive, but never from positive to negative.
Let U, V and W be the probabilities of having exactly **K**-3,
**K**-2 and **K**-1 successes out of the fixed **N**-3 cores.
Then, A(p) = U × p + V × (1 - p) and
B(p) = V × p + W × (1 - p). Then, B(p) - A(p) is a linear
function of p, which means that if it ever changes from positive to
negative, it must be B(0) - A(0) > 0 and B(1) - A(1) < 0, which
implies W > V and V < U. However, this is impossible: it is a well-known fact in probability theory that the distribution of the number of successes in a given set of independent experiments (a Poisson binomial distribution) is log-concave, and therefore unimodal, so it cannot have a local minimum at **K**-2.

With the above claim established, we can check all possible values of i and then take the largest overall success probability that we find. To compute the probability of at least K successes, we can use a dynamic programming method similar to the one described in the analysis for last year's Red Tape Committee problem.
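Putting the pieces together, here is a sketch of the full search (helper names are ours): for each starting index i, raise the suffix to a common level, spill any leftover units onto cores i-1, i-2, ..., and score the resulting assignment with the standard "exactly j successes so far" DP.

```python
def prob_at_least_k(probs, k):
    # dp[j] = probability that exactly j of the cores seen so far succeed.
    dp = [1.0] + [0.0] * len(probs)
    for p in probs:
        ndp = [0.0] * len(dp)
        for j, q in enumerate(dp):
            ndp[j] += q * (1.0 - p)
            if j + 1 < len(ndp):
                ndp[j + 1] += q * p
        dp = ndp
    return sum(dp[k:])

def water_fill(probs, units):
    # Raise the lowest probabilities to a common level, spending `units`;
    # returns the new probabilities and any leftover units.
    def cost(level):
        return sum(max(0.0, min(level, 1.0) - p) for p in probs)
    lo, hi = 0.0, 2.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if cost(mid) <= units:
            lo = mid
        else:
            hi = mid
    return [min(1.0, max(p, lo)) for p in probs], units - cost(lo)

def best_probability(probs, k, units):
    ps = sorted(probs)
    best = 0.0
    for i in range(len(ps)):       # train cores i, i+1, ..., N-1
        suffix, left = water_fill(ps[i:], units)
        prefix = ps[:i]
        j = i - 1                  # spill leftover onto core i-1, then i-2, ...
        while left > 1e-9 and j >= 0:
            add = min(left, 1.0 - prefix[j])
            prefix[j] += add
            left -= add
            j -= 1
        best = max(best, prob_at_least_k(prefix + suffix, k))
    return best
```

With **N** ≤ 50, trying all i costs at most 50 DP evaluations of O(**N**^{2}) each, which is far within the limits.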

It is difficult to prove this method under time pressure in a contest, but we can discover it via brute force simulation, e.g., by dividing up the available units into quanta and exploring all possible ways to partition them among a small number of cores, and seeing which assignment yields the greatest chance of success. It is also possible to arrive at the correct answers via other methods such as gradient descent.
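Such an experiment takes only a few lines: quantize the available units, enumerate every way to split the quanta among a handful of cores, and keep the best outcome. A sketch for small **N** (all names are ours):

```python
def prob_at_least_k(probs, k):
    # dp[j] = probability that exactly j of the cores seen so far succeed.
    dp = [1.0] + [0.0] * len(probs)
    for p in probs:
        ndp = [0.0] * len(dp)
        for j, q in enumerate(dp):
            ndp[j] += q * (1.0 - p)
            if j + 1 < len(ndp):
                ndp[j + 1] += q * p
        dp = ndp
    return sum(dp[k:])

def splits(total, parts):
    # All ways to write `total` as an ordered sum of `parts` nonnegative ints.
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in splits(total - first, parts - 1):
            yield (first,) + rest

def brute_force(probs, k, units, quantum=0.01):
    best = 0.0
    for split in splits(round(units / quantum), len(probs)):
        trained = [min(1.0, p + s * quantum) for p, s in zip(probs, split)]
        best = max(best, prob_at_least_k(trained, k))
    return best
```

On sample case #4, `brute_force([0.4, 0.5], 1, 0.1)` discovers that giving every quantum to the second core is best, matching the analysis.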

Test Data

We recommend that you practice debugging solutions without looking at the test data.