Google Code Jam Archive — World Finals 2018 problems

Overview

When a Code Jam final round has five problems instead of six, watch out! However, the first two problems were somewhat safe places to start. Jurisdiction Restrictions was not exactly a warm-up, but it could be solved by piecing together some classic algorithmic techniques in the right way. Two-Tiling looked like it had many edge cases, and approaching it via casework would have eaten up far too much time, but it was solvable with some relatively straightforward backtracking code plus a couple of key optimizations.

The other three problems required even more creativity and coding accuracy. Go, Gophers!, our final interactive problem of 2018, was structured in a way that shut down obvious approaches, and contestants had to find another way in. The Cartesian Job had an outer layer of planar geometry that masked a simple but hard-to-find solution. Finally, in Swordmaster, understanding the fairly complex scenario was only the first step along the way to becoming a virtuoso of dueling (and, perhaps, the Code Jam champion!)

The scoring breakdown was unusual by Code Jam standards; there were two "easier" problems worth the same amount, and three harder ones worth the same larger amount. This gave the contestants some flexibility to pick their preferred problem(s) at a particular point level, but it also did not offer many clues about what was hardest! Of course, the three hardest problems tested very different skillsets, so it would be difficult to pick an objective "most difficult"...

During the round, Jurisdiction Restrictions and Go, Gophers! got a lot of attention from contestants; so did The Cartesian Job, which proved to be dangerous! By design, it was very difficult to squeeze a floating-point solution through. A couple of hours in, it became clear that nobody was going to solve all five problems, or even four; contestants vied for three, and we saw the usual handful of heroic last-minute submissions. By the end of the contest, the optimistic scoreboard showed three contestants (rng..58, SnapDragon, and LHiC) tied at 124, with two more (Gennady.Korotkevich and PavelKunyavskiy) following at 104.

As usual, it isn't over until it's over, and the final reveal changed everything! rng..58 moved into third with 86 (beating out LHiC, who had a higher penalty); Errichto.rekt jumped to second place thanks to a truly last-minute Cartesian Job submission. But in first, with all 104 of his points surviving judgment, it was our now-five-time Code Jam World Champion, Gennady.Korotkevich! Every problem was solved by at least one contestant; overtroll was the only contestant to solve Swordmaster. Congratulations to all of our contestants for making it through another very tough problem set! You can check out individual contestants' screencasts here.

2018 was an exciting year for Code Jam! Whether you solved a single Qualification Round problem, sweated it out in the Finals, or just followed along with the contest, we're glad you joined us and we sincerely hope you'll do so again next year.


Cast

Jurisdiction Restrictions: Written by Pablo Heiber. Prepared by Jonathan Irvin Gunawan.

Two-Tiling: Written by Petr Mitrichev. Prepared by Petr Mitrichev and Ian Tullis.

Go, Gophers!: Written by Pablo Heiber. Prepared by Pablo Heiber and Ian Tullis.

The Cartesian Job: Written by David Arthur. Prepared by Pi-Hsun Shih and Kevin Tran.

Swordmaster: Written by Onufry Wojtaszczyk. Prepared by John Dethridge.

Solutions and other problem preparation and review by Liang Bai, Shane Carr, Jackson Gatenby, Jonathan Irvin Gunawan, Md Mahbubul Hasan, Igor Naverniouk, Trung Thanh Nguyen, and Micah Stairs.

Analysis authors:

  • Jurisdiction Restrictions: Pablo Heiber
  • Two-Tiling: Ian Tullis
  • Go, Gophers!: Pablo Heiber
  • The Cartesian Job: Pablo Heiber
  • Swordmaster: Jonathan Irvin Gunawan and Onufry Wojtaszczyk

A. Jurisdiction Restrictions

Problem

The city of Gridtopia is a matrix of square cells ("blocks") with R rows and C columns; rows are numbered (starting from 1) from top to bottom, and columns are numbered (starting from 1) from left to right. The city is served by S different police stations; the i-th station is in the block located in the Ri-th row and the Ci-th column, and no block contains more than one station.

Each station is only able to patrol blocks that are no more than Di blocks away from that station, both horizontally and vertically. That is, the i-th station can only patrol the block in row R' and column C' if max(|R' - Ri|, |C' - Ci|) ≤ Di. Put another way, the i-th station can patrol only blocks within the square of side length 2Di + 1 centered on that station.

As the new police commissioner, you need to assign blocks within the city to stations; each assigned block must be assigned to exactly one station that is able to patrol it. Blocks that contain stations and blocks that no station is able to patrol should not be assigned. All other blocks have to be assigned. Moreover, you must distribute this assignment load as evenly as possible among stations. Let Ai denote the number of blocks assigned to the i-th station; then your goal is to minimize the difference between the maximum of all the Ai values and the minimum of all of the Ai values. If you make optimal assignments, what is the smallest possible difference?

Input

The first line of the input gives the number of test cases, T. T test cases follow. Each case begins with one line with three integers R, C, and S: respectively, the numbers of rows and columns in the grid of cells, and the number of stations. Then, there are S more lines. The i-th of these has three integers Ri, Ci, and Di: respectively, the row and column in which the i-th station is located, and the parameter that determines which blocks that station is able to patrol, as described above.

Output

For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is the difference described above.

Limits

1 ≤ T ≤ 100.
2 ≤ S ≤ 15.
1 ≤ Ri ≤ R, for all i.
1 ≤ Ci ≤ C, for all i.
For all i ≠ j, Ri ≠ Rj and/or Ci ≠ Cj. (No two stations are in the same block.)
1 ≤ Di < max(R, C), for all i.
Time limit: 30 seconds per test set.
Memory limit: 1GB.

Test set 1 (Visible)

1 ≤ R ≤ 20.
1 ≤ C ≤ 20.

Test set 2 (Hidden)

1 ≤ R ≤ 10^9.
1 ≤ C ≤ 10^9.

Sample

Sample Input
content_copy Copied!
2
3 4 2
1 1 1
3 3 2
5 5 2
4 1 2
3 2 2
Sample Output
content_copy Copied!
Case #1: 4
Case #2: 0

In Sample Case #1, the city consists of a grid with 3 rows and 4 columns, with one station in the upper left block and one station in the block to the left of the lower right block. The first station can only patrol the three blocks that touch the edge or corner of its block; every other block is at a horizontal or vertical distance of more than 1 away. The second station can patrol any block in the grid (except for the blocks containing the stations). The difference in number of blocks assigned is minimized if you assign station 1 all three of the blocks it can patrol, and then assign the remaining seven blocks to station 2.

In Sample Case #2, one optimal strategy is to assign the blocks as follows. In this picture, 1 represents station 1, 2 represents station 2, ! represents a block assigned to station 1, @ represents a block assigned to station 2, and . represents a block assigned to neither station (because neither station can patrol it). Notice that a station's assigned blocks do not need to form a single continuous area.

@@@@.
!!!@.
!2!@.
1!!@.
!@!@.

B. Two-Tiling

Problem

Your game company just ordered a lot of square game boards with 64 blank unit cells, intending to turn them into chessboards, but your boss has suddenly declared that chess is out of fashion. To make the best use of all of these boards, you have designed a new puzzle that uses tiles.

A tile is a contiguous set of unit cells, connected edge-to-edge, that can fit inside a 3x3 square of unit cells. For example, the following are valid tiles (with each @ denoting a piece of the tile, and extra . characters for padding):

... @@@ @@@ .@@
... @@@ @.@ @.@
.@. @@@ @.. @@@

The following would NOT be valid tiles:

@@. @.@ .@@.
... .@. @@@@
.@@ @.@ .@@.

When the solver of your new puzzle places a tile on the board, its unit cells must exactly overlap some unit cells on the board that have not already been covered by other tiles. A tile is still considered the same type of tile even after being arbitrarily translated, rotated (by multiples of 90 degrees), and/or reflected, and the solver is allowed to do any of those things to a tile while placing it. For example, all of these are considered to be the same tile (and other variants of that tile are possible):

.@. ..@ @.. ... @@.
@@. .@@ @@. .@@ .@@
@.. .@. .@. @@. ...

To make your puzzle, you will color one or more of the cells on the board red. The solver will solve the puzzle by placing tiles on the board such that all red cells are covered up, but no other cell is covered up. To save on manufacturing costs, the solver receives only one type of tile, but they are given exactly enough copies of it to be able to cover all of the red cells.

Your job is to decide which of the board's cells to color red. Unfortunately, your boss is still deciding which of two particular tiles to use for the game. You are tired of waiting, so you have decided to try to color a set of cells such that the puzzle can be solved regardless of which of the tiles ends up being used.

Input

The first line of the input gives the number of test cases, T. T test cases follow; each consists of four lines. Each of the first three lines has three characters, then a single space, then three more characters. The fourth line is a blank line.

When looking at an entire case, the space characters separate a 3x3 grid on the left and a 3x3 grid on the right. Each grid represents a frame in which one of the two tiles is displayed. In each grid, each character is either @, representing a cell that is part of the tile, or ., representing a cell that is not part of the tile. Note that these . cells have nothing to do with the puzzle or the board, and are just padding to make the shape of the tile clear. It is guaranteed that the two tiles are not the same, as described in the statement above.

Output

For each test case, output one line with Case #x: y, where x is the test case number (starting from 1), and y is POSSIBLE if there is a solution to the problem, and IMPOSSIBLE if there is not. Then, if there is a solution, output eight more lines of seventeen characters each, forming two 8x8 grids with one column of space characters in between. Each grid must use dot (.) characters to denote any blank cells, or characters from the following set of 64:

!?0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

to denote the individual tiles used in a solution to the puzzle. Within each 8x8 grid, all cells sharing a non-dot character must form a single tile, and different characters must denote different tiles. Each tile in the grid on the left must be the same as the tile on the left in the input, up to rotations, translations and reflections. Each tile in the grid on the right must be the same as the tile on the right in the input, up to rotations, translations and reflections. The set of cells that are not dots in both 8x8 grids must be the same, and must be nonempty.

If there are multiple valid solutions, you may output any one of them.

Limits for Test set 1 (Visible; the only test set)

T = 595. (Every possible test case, up to isomorphism, is included.)
Time limit: 30 seconds.
Memory limit: 1GB.
The cells in each tile in the input form a single contiguous group via their edge-to-edge connections.
The two tiles in the input are not the same, as described in the statement.

Sample

Sample Input
content_copy Copied!
4
.@@ .@.
.@. .@.
.@@ @@.

@@@ @@@
@.@ @@@
@@@ @@@

.@. ...
@@. .@@
@.. ...

... ..@
... ..@
@.. ...

Sample Output
content_copy Copied!
Case #1: POSSIBLE
....11.. ....11..
...221.. ...221..
...211.. ...321..
...22... ...32...
.333.... .433....
4343.... 5444....
444..... 555.....
........ ........
Case #2: IMPOSSIBLE
Case #3: POSSIBLE
........ ........
..T..I.. ..T..I..
.TT..II. .tT..Ii.
.T....I. .t....i.
........ ........
.LL..EE. .LL..EE.
..LLEE.. ..llee..
........ ........
Case #4: POSSIBLE
the.CODE AAB.CDDE
Jam.2018 FFB.CGGE
........ ........
World... HHIIJ...
.FiNALS. .KLLJMM.
.cup.... .KNN....
........ ........
TRIUMPH! OOPPQQRR

The sample output displays one set of answers to the sample cases. Other answers may be possible.

In Sample Case #2, there is no possible set of red cells that would make the puzzle solvable regardless of which of the two tiles is chosen.

In Sample Cases #3 and #4, note that the chosen set of red cells is not required to be contiguous. Also note that the dots in the input for a tile are not considered part of the tile, and have no significance in creating the puzzle. For example, the given answer would also be acceptable for the following input:

... ...
.@. .@.
... .@.

Moreover, that input is isomorphic with Sample Case #4, and would not appear in the same test set along with Sample Case #4.

C. Go, Gophers!

Problem

Earlier this year, the Code Jam team planted an orchard with the help of an industrious gopher. It must have told other gophers, because we now have somewhere between 2 and 25 gophers living in the orchard. But it is hard to be sure exactly how many there are, because these gophers only emerge from their underground tunnels to eat at night, and we are too tired after a hard day of tree-pruning to stay up and watch for them. However, we do know how to make one "gopher snack" per day, which we can leave out each night to see whether it gets eaten. We think we can use this information to determine the number of gophers.

Here is what we know about the way that gophers eat. The N gophers meet during one day in a council to determine an order in which they will emerge over the following N nights, one at a time. Then, on the i-th of the next N nights, the i-th gopher in the order emerges and looks for a gopher snack. Each gopher has its own particular taste level (which never changes), and it will eat a snack if and only if the snack's quality level is at least as high as that gopher's taste level. During the day after the N-th gopher in the order has emerged, the gophers choose a new order and the process continues. Notice that even if a gopher chooses not to eat the snack that it finds, it still does not emerge again until it comes up in the next order chosen by the council.

We must make exactly one new gopher snack each day; even if a snack is not eaten, it spoils and cannot be reused the next night. Each morning, we learn whether or not the previous night's snack was taken.

Today, we know that the gophers are meeting in their council to determine their next order, so tonight will mark the start of that order. We are willing to devote some serious time to this investigation — as many as 10^5 nights. Using S or fewer snacks, can you help us figure out how many gophers there are?

Input and output

This problem is interactive, which means that the concepts of input and output are different than in standard Code Jam problems. You will interact with a separate process that both provides you with information and evaluates your responses. All information comes into your program via standard input; anything that you need to communicate should be sent via standard output. Remember that many programming languages buffer the output by default, so make sure your output actually goes out (for instance, by flushing the buffer) before blocking to wait for a response. See the FAQ for an explanation of what it means to flush the buffer. Anything your program sends through standard error is ignored, but it might consume some memory and be counted against your memory limit, so do not overflow it. To help you debug, a local testing tool script (in Python) is provided at the very end of the problem statement. In addition, sample solutions to a previous Code Jam interactive problem (in all of our supported languages) are provided in the analysis for Number Guessing.

Initially, your program should read a single line containing a single integer T indicating the number of test cases. Then, you need to process T test cases. For each test case, your program will first read one line containing one integer S: the maximum number of snacks you can use. Then, your program will process up to S + 1 exchanges with our judge, in which the last exchange must be a guess at the answer.

For the i-th exchange, your program needs to use standard output to send a single line containing an integer Qi.

  • If Qi is in the inclusive range [1, 10^6], it represents that you will leave out a gopher snack with quality level Qi. In response, the judge will print a single line with a single integer: 1 if the gopher ate the snack, or 0 if it did not. This line will be printed to your input stream, as described above, and your program must read it through standard input. Then, you can start another exchange.
  • If Qi is in the inclusive range [-25, -2], it represents that your answer to the test case is that there are -Qi gophers. If your answer is correct, the judge will proceed to the next test case, if there is one.

The judge will print a single line with the integer -1, and then stop sending output to your input stream, if any of the following happen:

  1. Your program sends a malformed or out-of-bounds value (e.g., 1000001, -1, or GO_IS_THE_BEST_LANGUAGE), or too many values (e.g., 1 2).
  2. Your program sends a value not in the inclusive range [-25, -2] after having already sent S values for the current test case.
  3. Your program sends a value in the inclusive range [-25, -2] that is not a correct answer. Note that this means that you only get one chance to answer a test case correctly.

If your program continues to wait for the judge after receiving -1, your program will time out, resulting in a Time Limit Exceeded error. Notice that it is your responsibility to have your program exit in time to receive the appropriate verdict (Wrong Answer, Runtime Error, etc.) instead of a Time Limit Exceeded error. As usual, if the total time or memory is exceeded, or your program gets a runtime error, you will receive the appropriate verdict.

You should not send additional information to the judge after solving all test cases. In other words, if your program keeps printing to standard output after sending the answer to the last test case, you will get a Wrong Answer judgment.

Important warning

Context switching between your program and the judge is expensive, and more so in our judging system. As opposed to other interactive problems, we found it necessary in all our reference solutions for this problem to bundle the exchanges to the server. That is, instead of "print taste level, read response, print taste level, read response" we can do "print taste level, print taste level, read response, read response" which requires less context switching.

Benchmarks: To give you some idea of how a given bundling of queries will perform in our system, we are providing some benchmarks. We wrote a program that performs S = 10^5 exchanges bundled into groups of specific sizes B — that is, it prints B taste levels, then reads B responses, then prints B more, then reads B more, and so on, S / B times. We implemented this in both Python and C++, always printing the B taste levels to a string variable and printing that string later, ensuring the buffer is not flushed within a bundle. Here are the results for each bundle size B, in seconds (rounded up to the next half-second, and taking the worst case over multiple runs):

B        1      10     50     100    200    500    10^5
Python   167    21     6.5    5.5    5      5      >250
C++      130    18     5.5    5.5    4.5    2.5    >250

Notice that with somewhat small bundle sizes, the context switching time gets below 5s per test, which is under a minute per test set.
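
As a concrete (unofficial) illustration of the bundling idea, here is a minimal Python sketch; the helper name and the bundle contents are ours, and the rest of the solution logic is omitted.

  import sys

  def send_bundle(levels):
      # Print a whole bundle of taste levels, flushing only once, then read
      # the same number of responses (1 = the snack was eaten, 0 = it was not).
      sys.stdout.write("".join("%d\n" % q for q in levels))
      sys.stdout.flush()
      return [int(sys.stdin.readline()) for _ in levels]

  # For example, offering 100 snacks of the same quality in one bundle:
  #   responses = send_bundle([quality] * 100)

Flushing once per bundle rather than once per query is what keeps the context-switching cost near the benchmark numbers above.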

Limits

1 ≤ T ≤ 10.
Time limit: 90 seconds per test set. (See important warning in the input/output section).
Memory limit: 1GB.
The number of gophers is between 2 and 25, inclusive. The taste level of each gopher is between 1 and 10^6, inclusive. S = 10^5.

Test set 1 (Visible)

No two gophers have the same taste level.
The order in which the gophers emerge each night is chosen uniformly at random from all possible orders, and independently of all other orders.

Test set 2 (Hidden)

The GCD of the set {x : there exist exactly x ≥ 1 gophers in the input that share a taste level} = 1.
The order in which the gophers emerge is chosen independently of the provided snacks.

For each test case, the multiset of taste levels and the seed for the random number generation are generated by the problem setters in advance of the contest, and will be the same for any contestant, for any submission. That means that two submissions that offer the same number of snacks for test case i will see the gophers emerge in the same order.

For example, the following scenario would be possible in either of the test sets:

  • two gophers, one with taste level 1, and one with taste level 2

The following scenario would be possible in test set 2, but not in test set 1:

  • three gophers, two with taste level 1, and one with taste level 2

The following scenarios would not be possible in either of the test sets:

  • six gophers, four with taste level 1, and two with taste level 2
  • two gophers, both with taste level 7

Sample Interactions

The following interaction is for Test set 1.

  // In this example, the problem setters have already determined that the first
  // test case has two gophers with taste levels 1 and 2 (we will call them A
  // and B, respectively), and that the second test case has four gophers with
  // taste levels 1, 999, 123, and 4567 (we will call them C, D, E, and F,
  // respectively).
  // The judge randomly generates the first order: A, B.
  t = readline_int()           // Code reads 2 into t.
  s = readline_int()           // Code reads 100000 into s.
  printline 1 to stdout        // Code sends a snack with quality level 1.
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher A ate the
                               //   snack).
  printline 1 to stdout
  flush stdout
  resp = readline_str()        // Code reads 0 into resp (gopher B did not eat
                               //   the snack).
                               // Judge randomly generates B, A as the next
                               //   order.
  printline 2 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher B ate the
                               //   snack).
  printline 1 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher A ate the
                               //   snack).
                               // Judge randomly generates B, A as the next
                               //   order.
  printline 2 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher B ate the
                               //   snack).
  printline 2 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher A ate the
                               //   snack).
  printline -2 to stdout       // Code correctly determines that the only
  flush stdout                 //   scenario consistent with the information
                               //   given so far is two gophers with taste
                               //   levels 1 and 2.
                               // Judge rules that the answer is correct, and
                               //   prepares the next test case...
                               // Judge randomly generates C, E, F, D as the
                               //   first order.
  s = readline_int()           // Code reads 100000 into s. (This also shows
                               //   that the answer to the first test case was
                               //   correct.)
  printline 0 to stdout        // Code sends an invalid value.
  flush stdout
  resp = readline_str()        // Code reads -1 into resp.
  exit                         // Code exits to avoid an ambiguous TLE error.

The following interaction is for Test set 2. Notice that the interactions in the first test case are the same as in the previous example, but the outcome is different.

  // In this example, the problem setters have already determined that the first
  // test case has three gophers with taste levels 1, 2, and 1; we will call
  // them A, B, and C, respectively, and they will be ordered ABCCBAABCCBA...
  t = readline_int()           // Code reads 1 into t.
  s = readline_int()           // Code reads 100000 into s.
  printline 1 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher A ate
                               //   the snack).
  printline 1 to stdout
  flush stdout
  resp = readline_str()        // Code reads 0 into resp (gopher B did not eat
                               //   the snack).
  printline 1 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher C ate the
                               //   snack).
  printline 2 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher C ate the
                               //   snack).
  printline 2 to stdout
  flush stdout
  resp = readline_str()        // Code reads 1 into resp (gopher B ate the
                               //   snack).
  printline -2 to stdout       // Code erroneously decides that there
                               //   are two gophers A and B with taste levels
                               //   1 and 2; this is consistent with the
                               //   information given so far for the order
                               //   A,B,A,B,A, but the true number of gophers
  flush stdout                 //   is different, so judge rules it is wrong.
  s = readline_str()           // Code tries to read s but gets -1, meaning
                               //   that the answer to the last test case was
                               //   wrong.
  exit                         // Code exits to avoid an ambiguous TLE error.

Testing Tool

You can use this testing tool to test locally or on our platform. To test locally, you will need to run the tool in parallel with your code; you can use our interactive runner for that. For more information, read the instructions in comments in that file, and also check out the Interactive Problems section of the FAQ.

Instructions for the testing tool are included in comments within the tool. We encourage you to add your own test cases. Please be advised that although the testing tool is intended to simulate the judging system, it is NOT the real judging system and might behave differently. If your code passes the testing tool but fails the real judge, please check the Coding section of the FAQ to make sure that you are using the same compiler as us.

Download testing tool

D. The Cartesian Job

Problem

You may have heard of the platinum-iridium cylinder that serves as the standard for the kilogram, but did you know that there is a special line segment used as the standard for the kilometer? It runs from (0, 0) to (0, 1000) in a 2D plane in a confidential and very flat location.

Naturally, this segment is extremely valuable, so it is protected by N rotating surveillance lasers, which are rays in the 2D plane. Each laser has a fixed endpoint, and it rotates around that endpoint at a constant speed of 1 revolution per second. Whether each laser rotates clockwise or counterclockwise is chosen uniformly and independently at random by the security system.

Lasers are not blocked by other lasers or their endpoints, or the segment itself. No laser has an endpoint on the segment.

You have been hired to audit the security system, but all you have to work with is a single snapshot from an instant in time, which shows the endpoint and orientation (at that instant) of each laser. Since the image is just a snapshot, you have no way of inferring the rotation directions of the lasers.

You have determined that the segment could be stolen in a heist if there is ever a non-empty open interval of time during which no laser is touching the segment. What is the probability of this happening?

Input

The first line of the input gives the number of test cases, T. T test cases follow. Each case begins with one line containing one integer N: the number of lasers. Then, N more lines follow. The i-th of these lines represents the ray that is the i-th laser, and contains four integers Xi, Yi, Xi', and Yi', representing the 2D coordinates of the endpoint of the ray, followed by the 2D coordinates of some other point on the ray.

Output

For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is the probability described above. y will be considered correct if it is within an absolute or relative error of 10^-6 of the correct answer. See the FAQ for an explanation of what that means, and what formats of real numbers we accept.

Limits

1 ≤ T ≤ 100.
Time limit: 40 seconds per test set.
Memory limit: 1GB.
-10^6 ≤ Xi ≤ 10^6, for all i.
-10^6 ≤ Yi ≤ 10^6, for all i.
-10^6 ≤ Xi' ≤ 10^6, for all i.
-10^6 ≤ Yi' ≤ 10^6, for all i.
(Xi, Yi) ≠ (Xi', Yi'), for all i.
If Xi = 0, then either Yi < 0 or Yi > 1000, for all i. (No laser's endpoint is on the segment.)

Test set 1 (Visible)

1 ≤ N ≤ 10.

Test set 2 (Hidden)

1 ≤ N ≤ 10000.
There are at most 8 cases with N > 100.

Sample

Sample Input
content_copy Copied!
3
5
0 1001 -1 1001
0 1001 -1 1001
0 1001 -2 1001
0 1001 0 500
0 1002 1234 5678
4
500 500 1000 1000
500 500 0 1000
500 500 0 0
500 500 1000 0
4
500 500 1000 1001
500 500 0 1000
500 500 0 0
500 500 1000 0
Sample Output
content_copy Copied!
Case #1: 1.000000
Case #2: 0.750000
Case #3: 1.000000

In Sample Case #1, note that multiple lasers might share the same endpoint and initial orientation, but this does not necessarily imply that they rotate in the same direction. (Also note that the second and third lasers have the same initial orientation even though it is specified differently.) Regardless of their rotation directions, though, each of these lasers only touches the segment at the instant that it is pointing in the negative y direction, so there is clearly some other open interval during which no laser is touching the segment, and the answer is 1.

In Sample Case #2, each of the lasers touches the segment during exactly 1/4 of its rotation, and the segment will be touched by a laser at all times if and only if lasers 1 and 4 rotate in the same direction, and lasers 2 and 3 rotate in the same direction. The probability of that is 1/4, so the answer is 3/4.

Sample Case #3 is like Sample Case #2, but with a slight difference that guarantees that there will be an instant at which no laser is touching the segment, even if the lasers are all rotating the same way. So the answer is 1.

E. Swordmaster

Problem

You are a duelist aspiring to become the next Swordmaster. You will work toward this title by dueling with opponents until you win against every opponent. Every opponent is always available for dueling, and opponents do not duel each other.

Each duelist (including you) knows at least one attack, and at least one defense. There are at most P pairs of attacks and defenses in the world; the i-th defense only counters the i-th attack, and the i-th attack is only countered by the i-th defense. It is possible that there are attacks and/or defenses that no duelist knows. You can use any attack or defense that you know as many times as you like; they do not get "used up".

Here are the rules for each individual duel with an opponent:

  • As the aspiring Swordmaster, you always get to attack first. You select an attack that you know. If the opponent knows the corresponding defense, they may choose to use it. If they do not know that defense, or they choose not to use it, then they do not defend.
  • Then, the opponent selects an attack that they know. If you know the corresponding defense, you may choose to use it. If you do not know that defense, or you choose not to use it, then you do not defend.
  • If you successfully defended and the opponent did not, you win the duel! Otherwise, you do not win, but your quest to become the Swordmaster can continue.

You can fight as many duels as you want, including multiple duels with the same opponent, regardless of the outcomes of any previous duels. You do not need to determine a complete schedule of duels in advance; you can base your next decision on what has already happened. Once you have won at least once against every opponent, you become the Swordmaster!

You are an especially quick learner. After each duel, regardless of the outcome of the duel, you can add the attack and the defense (if any) used by the opponent to your own set of known attacks/defenses. (Note that if an opponent uses an unfamiliar defense against you, you do not learn it during the duel itself, so you cannot use it against the opponent's attack in the same duel.) Only you have this advantage; the attacks and defenses known by your opponents never change.

Moreover, after you win against an opponent, and before your next duel, that opponent will teach you all of the attacks and defenses that they know and that you do not already know. (Once they have lost to you, it looks better for them if you eventually do become the Swordmaster!)

You know which attacks and defenses each opponent knows. If you make optimal choices, is it possible to guarantee that you will become the Swordmaster, regardless of what choices your opponents make?

Input

The first line of the input gives the number of test cases, T. T test cases follow.

  • Each case begins with one line with two integers N and P: the number of duelists (including you), and the maximum number of attack/defense pairs in the world.
  • Then, there are N groups of three lines each. The i-th of these groups represents one of the duelists; in particular, the first of them represents you. Each group has the following structure:
    1. One line with two integers Attacksi and Defensesi: the numbers of different attacks and defenses, respectively, known by the i-th duelist.
    2. One line with Attacksi different integers Aij, sorted in increasing order: the identities of the attacks known by the i-th duelist.
    3. One line with Defensesi different integers Dij, sorted in increasing order: the identities of the defenses known by the i-th duelist.

Output

For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is YES if you can guarantee that you will become the Swordmaster (as described in the problem statement), or NO otherwise.

Limits

1 ≤ T ≤ 100.
2 ≤ N ≤ 1000.
1 ≤ P ≤ 1000.
1 ≤ Attacksi ≤ P, for all i.
1 ≤ Defensesi ≤ P, for all i.
1 ≤ Aij < Ai(j+1) ≤ P, for all i and j.
1 ≤ Dij < Di(j+1) ≤ P, for all i and j.
The sum of all Attacksi + the sum of all Defensesi, over all i, does not exceed 50000.
Time limit: 10 seconds per test set.
Memory limit: 1GB.

Test set 1 (Visible)

Ai1 = 1, for all i. (Attack 1 is known by all the duelists, including you.)
Di1 = 1, for all i. (Defense 1 is known by all the duelists, including you.)

Test set 2 (Hidden)

No extra restrictions.

Sample

Sample Input
content_copy Copied!
5
2 2
1 2
1
1 2
2 1
1 2
1
2 2
1 1
1
2
1 1
2
1
2 5
1 1
2
3
2 1
2 4
2
3 5
3 2
1 2 3
3 4
2 4
3 4
2 3 4 5
2 5
4 5
1 2 3 4 5
4 4
1 1
1
4
2 3
2 3
2 3 4
1 3
4
1 2 4
1 3
4
1 3 4
Sample Output
content_copy Copied!
Case #1: NO
Case #2: YES
Case #3: NO
Case #4: NO
Case #5: YES

Note that the last four sample cases would not appear in Test set 1.

In Sample Case #1, as long as your opponent keeps choosing defense 1 and attack 1, you cannot win the duel. There is no guarantee that your opponent will ever choose attack 2 or choose not to use defense 1, so it is not possible to guarantee that you will become the Swordmaster.

In Sample Case #2, you know attack 1 and defense 2, and your (only) opponent knows attack 2 and defense 1. The following strategy is guaranteed to make you the Swordmaster:

  • In your first duel, you must choose attack 1; the opponent may defend with defense 1. Then, the opponent must choose attack 2; you should choose defense 2.
    • If the opponent did not defend, then you won and you are now the Swordmaster.
    • Otherwise, you do not win, but you learn attack 2 and defense 1 afterward. Then, start a second duel with that opponent. This time, choose attack 2; the opponent cannot defend against it. Once again, the opponent must choose attack 2; you should choose defense 2. You have won and you are now the Swordmaster.

In Sample Case #3, in your first duel, if your opponent always chooses attack 4, you will never be able to defend, since nobody knows the defense to that attack. So, there is no way for you to ever become the Swordmaster. Note that there can be attacks and/or defenses that exist in the world, but are not known by any of the duelists in this problem.

In Sample Case #4, there is an opponent that knows every defense, so you cannot guarantee that you will ever win against them (they would have to be nice and not defend!)

Here is one guaranteed winning strategy for Sample Case #5:

  1. Duel the first opponent. You must choose attack 1, and they cannot defend. We will proceed assuming that they choose attack 2. (If they choose attack 3, an isomorphic strategy will work.) You cannot defend, and you do not win the duel, but you learn attack 2.
  2. Duel the third opponent, and use attack 2 and defense 4 for a guaranteed win. You learn attack 4 (which you will never use) and defenses 1 and 3.
  3. Duel the second opponent, and use attack 2. You are guaranteed to learn defense 2: either the opponent will use it against you, or they will not use it and you will win (and learn all of their attacks and defenses).
  4. Duel the first opponent again, and choose attack 1. Now, whichever attack they use, you can defend, and you win. You learn attack 3.
  5. Duel the second opponent again, using attack 3, if you did not already win against them before.

Analysis — A. Jurisdiction Restrictions

Test set 1

We begin by modeling the problem as a flow network. We construct a directed graph G where the set of nodes is {source, sink} ∪ S ∪ B, where S is the set of blocks that contain a station and B is the set of blocks that do not contain a station and are within reach of some station. The edges of the graph are {source → s : s ∈ S} ∪ {s → b : s ∈ S, b ∈ B, s can patrol b} ∪ {b → sink : b ∈ B}. Notice that each path from source to sink links a specific station with a specific block it can patrol. If we refine G by assigning capacity |B| to each edge source → s, and capacity 1 to each other edge, a valid flow of the network is a union of paths from source to sink, and due to the capacities, each block will belong to at most one of those paths. Moreover, a maximum flow will cover all blocks, so there is a bijection between (integral) maximum flows and valid assignments of blocks to stations. In this way, we can reframe the problem as minimizing, over all maximum flows of G, the difference between the maximum and minimum flow going through the edges that come out of the source.

If we want to set an upper bound U on the flow going through edges coming out of the source, we can adjust the capacities of those edges to U. If we also want to set a lower bound L on those quantities, we can split each of those edges into two copies: one with capacity L and cost 0, and one with capacity U - L and cost 1 (all other edges have cost 0). Using min-cost max-flow instead of max flow will give preference to flows that push at least L units of flow through each station, and if the retrieved flow does not saturate all of the cost-0 edges, it means that no flow respecting the lower bound exists.

With the above, we can just try every possibility for L and U, discarding the combinations that don't achieve a max flow of |B| that saturates all the cost-0 edges coming out of the source. The non-discarded combination with minimum U - L is the answer. Both U and L are bounded by |B|, which is bounded by R × C. The size of G is O(S × R × C), and min-cost max-flow on a graph with only 0 and 1 costs takes only linear time per augmenting path, so quadratic time overall. This means the described algorithm takes time O(S^2 × R^4 × C^4). This can be too slow even for test set 1, but there are some optimizations that will shed some of that time, and you only need one of them to pass test set 1.

  • Instead of trying all combinations of L and U, you can try only the ones that have a chance to improve: if a combination (L, U) doesn't work, then we know (L+1, U), (L+2, U), etc. will also not work, so we can move on to (L, U+1). If a combination (L, U) works, we know that (L, U+1), (L, U+2), etc. will also work but will yield a larger difference, so we can skip them and move directly to (L+1, U) (see the sketch after this list). This requires us to test a linear (instead of quadratic) number of combinations of L and U.
  • An even simpler version of the optimization above is to try all possibilities for only one of L or U, and then use binary search to find the optimal choice for the other. This doesn't take the number of combinations down to linear in |B|, but it does make it quasilinear.
  • A flow for the combination (L, U) is also a valid flow for the combination (L, U+1), so instead of starting the flow from scratch, we can use it, dramatically reducing the number of augmenting paths we need to find overall.
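
Here is a sketch in Python (with names of our own choosing) of the first optimization in the list above. It assumes a hypothetical assignment_exists(L, U) helper that runs the min-cost max-flow check described earlier and reports whether all blocks can be assigned with every station receiving between L and U blocks.

  def smallest_difference(num_blocks, assignment_exists):
      # assignment_exists(L, U) is assumed to perform the min-cost max-flow
      # check described above.
      best = num_blocks
      L, U = 0, 0
      while U <= num_blocks:
          if L > U:
              U = L
              continue
          if assignment_exists(L, U):
              best = min(best, U - L)
              L += 1      # (L, U+1), (L, U+2), ... can only give larger differences
          else:
              U += 1      # (L+1, U), (L+2, U), ... also fail
      return best

Each of L and U only ever increases, so the helper is called only a linear number of times.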

Test set 2

For test set 2, the values of R and C are too high to appear linearly in the running time of the algorithm. That means we have to improve upon the solution above on at least two fronts: the size of G, which is linear in R and C and ultimately impacts any algorithm we want to run on G, and the number of combinations of L and U we try, which is also linear or more.

We start by defining G for test set 2 a little differently, using a technique commonly known as coordinate compression. We define B' as a set of sets of blocks that form a partition of B above. The way to partition is by taking the full grid and cutting through up to 2S horizontal and 2S vertical lines at the boundaries of the jurisdiction of each station. The resulting rectangles have the property that the set of stations that can reach any block of the rectangle is the same. Rectangles only represent the blocks within them that do not contain a station. We can now define G as having nodes {source, sink} ∪ S ∪ B' and edges {source → s : s ∈ S} ∪ {s → b' : s ∈ S, b' ∈ B'} ∪ {b' → sink : b' ∈ B'}. Note that each node in B' now represents a set of blocks, so we also have to fiddle with capacities. Edges source → s have capacity |B| (effectively infinity) as before, and so do edges s → b'. Edges b' → sink, however, have capacity |b'|. In this way, each node b' is the union of many nodes b from the previous version of G, so its outgoing edge has the sum of the capacities of the outgoing edges from those nodes b. The size of this G is O(S^3) with O(S^2) nodes.
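
As an illustration of the compression step, here is a Python sketch that builds the rectangles; the function and variable names are ours, and, for simplicity, it also cuts at each station's own row and column (a slight variation on the 2S + 2S cut lines described above) so that every station cell becomes its own 1x1 rectangle and can simply be skipped.

  def compress(R, C, stations):
      # stations: list of (r, c, d) triples with 1-indexed coordinates.
      # Returns a list of (block_count, reachable_station_set) pairs, one per
      # non-empty rectangle of blocks that do not contain a station.
      def cuts(limit, ranges):
          cut = {1, limit + 1}
          for a, b in ranges:
              cut.add(max(1, a))
              cut.add(min(limit, b) + 1)
          cut = sorted(cut)
          return list(zip(cut, cut[1:]))          # half-open [start, end) strips

      row_strips = cuts(R, [(r - d, r + d) for r, c, d in stations] +
                           [(r, r) for r, c, d in stations])
      col_strips = cuts(C, [(c - d, c + d) for r, c, d in stations] +
                           [(c, c) for r, c, d in stations])
      station_cells = {(r, c) for r, c, d in stations}

      rects = []
      for r0, r1 in row_strips:
          for c0, c1 in col_strips:
              if (r0, c0) in station_cells:       # a 1x1 rectangle holding a station
                  continue
              reach = frozenset(i for i, (r, c, d) in enumerate(stations)
                                if r - d <= r0 and r1 - 1 <= r + d and
                                   c - d <= c0 and c1 - 1 <= c + d)
              if reach:
                  rects.append(((r1 - r0) * (c1 - c0), reach))
      return rects

Each rectangle then becomes one node b', with an edge of capacity equal to its block count toward the sink.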

Now, in G, each path source → s → b' → sink that carries flow X represents that X blocks from set b' are assigned to station s. Notice that there is no longer a bijection between flows and station assignments, but there is a bijection if we consider permutations of blocks within a rectangle to be equivalent. Since all those blocks are equivalent for the purpose of this problem, this bijection is enough. As before, we now have to find a maximum flow in G such that the difference between the maximum and minimum flow in edges going out of the source is minimized.

Since we cannot try all possibilities for L and U, we will use a little theory. Let G_C be a copy of G, but with the capacities of all edges source → s changed to C. Note that G = G_|B|. Notice that a valid flow in G_C is also a valid flow in any G_C' with C' ≥ C.

Now, let U be the minimum value such that G_U allows a flow of total size |B|. Let L be the maximum value such that there exists a maximum flow of size |S| × L in G_L, that is, a flow that saturates all edges of type source → s. Any maximum flow in G has at least one edge of type source → s carrying flow at least U (otherwise, such a maximum flow would be a valid flow in G_(U-1), contradicting the definition of U). Also, any maximum flow in G has at least one edge of type source → s carrying flow at most L (otherwise, every such edge would carry at least L+1, and we could subtract flow from source-to-sink paths through any edge that carries more than L+1 to obtain a flow that carries exactly L+1 through every edge of type source → s; that would be a valid flow in G_(L+1) of size |S| × (L+1), contradicting the definition of L). Note also that L ≤ U: since |S| × L ≤ |B| (a flow of size |S| × L must fit through the sink edges, whose total capacity is |B|), any flow of size |B|, in particular one in G_U, has some source edge carrying at least |B| / |S| ≥ L, so U ≥ L. Finally, if we start with the flow in G_L that justifies the definition of L and use it as a valid flow in G_U, we know that it can be extended to a maximum flow F in G_U via augmenting paths. Since augmenting paths are simple paths starting at the source, they never decrease the flow in edges of type source → s. Therefore, F is a maximum flow in G_U such that the difference between the maximum and minimum flow going through edges coming out of the source is exactly U - L. And by definition, a maximum flow of G_U is a maximum flow of G. The previous observations show that there is no maximum flow in G with such a difference less than U - L, so U - L is the answer to the problem. We can binary search for L and U independently using their original definitions, which yields an algorithm that takes time O(S^5 × log(R × C)).
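
A sketch of the two binary searches in Python, assuming a hypothetical max_flow_value(cap) helper that builds G_cap (all source → s edges given capacity cap) and returns its maximum flow value:

  def find_U_and_L(num_blocks, num_stations, max_flow_value):
      # U: smallest capacity for which all blocks can be assigned.
      lo, hi = 0, num_blocks
      while lo < hi:
          mid = (lo + hi) // 2
          if max_flow_value(mid) == num_blocks:
              hi = mid
          else:
              lo = mid + 1
      U = lo

      # L: largest capacity that every station can carry simultaneously.
      lo, hi = 0, num_blocks // num_stations
      while lo < hi:
          mid = (lo + hi + 1) // 2
          if max_flow_value(mid) == num_stations * mid:
              lo = mid
          else:
              hi = mid - 1
      L = lo

      return U, L

The final answer is then U - L, as argued above.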

Test Data
We recommend that you practice debugging solutions without looking at the test data.

Analysis — B. Two-Tiling

The restriction on tile size limits the problem to just 35 distinct tiles, and therefore "only" 35 * 34 / 2 = 595 possible cases. We might consider whittling away at this number via individual insights — e.g., any case with a 9-cell tile and an 8-cell tile must be impossible, since the least common multiple of 9 and 8 is 72, which is larger than the allowed board size. We may even be tempted to cut out some paper tiles and do some solving by hand, but if so, we will quickly discover that this is harder than it seems, and it is particularly hard to be sure that a case is impossible. We need to either write a Master's thesis on properties of n-ominoes, or take an exhaustive approach.

We cannot simply find all of the ways to tile part or all of the board with each tile, and then intersect those two sets; there could be too many possible tilings to generate (e.g., 2^64 for the 1x1 tile). Even if we could come up with special-case solutions to get around this issue for enough of the smaller tiles, we would still waste a lot of time enumerating sets that are obviously incompatible with the other tile.

Instead, we can use a backtracking strategy that takes advantage of our constraints, using the placement of one type of tile to guide how we place the other type. The strategy does not have to be lightning-fast; we know that every case will appear in the test set, so we can enumerate and solve them all offline and then submit once we have an answer for each one. However, it must be fast enough to generate all 595 answers before the World Finals end!

Let us number the cells of the 8x8 board in row-major order, which is the same order in which our backtracking search will explore. We can translate each tile (and all of its rotations and reflections) around the 8x8 board to exhaustively find all possible placements. Then, we can assign each of those placements to the lowest-numbered cell that the placement uses. For example, one placement of the 2x2 square tile can cover cells 1, 2, 9, and 10 of the board, and the lowest-numbered of those cells is 1, so we assign that placement to cell 1. Now we have a helpful map from each board cell to the tile placements that fit there. Notice that it would be redundant to list our example 2x2 square tile placement under cell 10: by the time our search reaches cell 10, it will have already considered that same placement at cell 1, so there is no need to consider it again.
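
A Python sketch of this precomputation (all names are ours, and cells are numbered 0 through 63 here rather than 1 through 64):

  def variants(cells):
      # cells: set of (row, col) pairs describing the tile inside its 3x3 box.
      seen = set()
      cur = cells
      for _ in range(4):
          cur = {(c, -r) for r, c in cur}                     # rotate 90 degrees
          for shape in (cur, {(r, -c) for r, c in cur}):      # rotation and its mirror
              mr = min(r for r, c in shape)
              mc = min(c for r, c in shape)
              seen.add(frozenset((r - mr, c - mc) for r, c in shape))
      return seen

  def placements_by_lowest_cell(cells):
      # Maps each board cell index (0..63) to the list of 64-bit placement
      # masks whose lowest-numbered covered cell is that index.
      by_cell = {}
      for shape in variants(cells):
          h = 1 + max(r for r, c in shape)
          w = 1 + max(c for r, c in shape)
          for dr in range(8 - h + 1):
              for dc in range(8 - w + 1):
                  mask = 0
                  for r, c in shape:
                      mask |= 1 << ((r + dr) * 8 + (c + dc))
                  lowest = (mask & -mask).bit_length() - 1
                  by_cell.setdefault(lowest, []).append(mask)
      return by_cell

The backtracking search described next then only needs to consult by_cell[c] when it decides to fill cell c.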

Then, starting in the upper left cell and going in row-major order, we try to build a set of cells that can be tiled by both tiles. For each cell c, we decide whether to fill it. If it is already covered by a previous placement of either tile type, it must be filled; whenever a cell is to be filled but is not yet covered by one or both of the tile types, we have our search explore all of the placements of the missing tile type(s) that are assigned to that cell in our map above.

Since the board is 8x8, we can use the bits of a 64-bit data type to represent each tile's coverage of the board, and we can use bitmasking to quickly see whether desired cells are occupied, or generate one state from an existing state. It is also very quick to check whether two states are equal; if this is ever the case, we stop immediately, step back through our backtracking to figure out exactly how to divide that set up into tiles of each type, and output that tiling. If the backtracking process finishes without finding such a set, then the case is impossible.

This solution turns out to be fast in practice, running in seconds. Intuitively, small tiles are more "flexible" and it is easier to find solutions quickly for cases involving them, whereas large tiles fill up the board's cells rapidly and do not open up as many choices. That is, the recursion tree for the backtracking is wide but short in the case of small tiles, and tall but narrow in the case of large tiles.

This approach does not necessarily find minimal sets of cells, but it can generate some artistically pleasing shapes and tilings, and/or puzzles that really would be fun to solve! Here are a couple examples generated by one of our solutions:

These angular tiles create a solution with a surprising mix of dense and holey areas!

aaa.bbb. AAA.BBB.
a..cbcb. A..CCCB.
aaacbcb. ADDDECB.
...ccc.. ...DEC..
..dddeee ..FDEEEG
..d....e ..F....G
..dddeee ..FFFGGG
........ ........

We hope you "loved" this problem! Notice that a test case is not necessarily any easier when the two tiles differ in only a very small way.

aa...bb. AA...BB.
aaa..bbb AAA..CBB
acacdbdb DDACCCBB
.cccddd. .DDCCEE.
.cceedd. .DDFFEE.
..eefff. ..FFEEG.
..eeeff. ..FFGGG.
....ff.. ....GG..
Test Data
We recommend that you practice debugging solutions without looking at the test data.

Analysis — C. Go, Gophers!

Let M = 25 be the maximum possible number of gophers.

Test set 1

Our approach to test set 1 will be as follows: first, we will find the minimum taste level among all gophers, and then we will use it to determine the total number of gophers N.

To find the minimum taste level, we can simply binary search using the question "is the minimum taste level strictly greater than X?" This is equivalent to asking: "Would a snack of quality X go uneaten by all gophers?" To answer such a question, we can offer 2M - 1 snacks of quality X consecutively, which guarantees that each gopher is exposed to at least one of them. If none of those snacks are eaten, every gopher's taste level must be greater than X, but if at least one snack is eaten, then there is at least one gopher with taste level at or below X. We can even stop offering snacks as soon as one gets eaten, if we want to save some snacks. This requires ceil(log2 10^6) × (2M - 1) = 20 × 49 = 980 snacks at most.
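
For concreteness, here is a Python sketch of this binary search; ask(q) is a hypothetical helper that offers one snack of quality q (possibly bundled, as discussed in the statement) and returns whether it was eaten.

  def minimum_taste_level(ask, M=25):
      # Binary search on "is the minimum taste level greater than mid?".
      lo, hi = 1, 10**6
      while lo < hi:
          mid = (lo + hi) // 2
          eaten = False
          for _ in range(2 * M - 1):     # every gopher sees at least one snack
              if ask(mid):
                  eaten = True
                  break                  # stop early to save snacks
          if eaten:
              hi = mid                   # some gopher has taste level <= mid
          else:
              lo = mid + 1               # all taste levels are > mid
      return lo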

Once we know that there is a gopher g with taste level exactly L, and that L is the minimum taste level, we can use our snacks to answer the query "is it g's turn?" by offering a snack of quality L, because g is the only gopher who would eat it (taste levels are distinct in this test set). If we offer that snack many times in a row and calculate the fraction of eaten snacks, that fraction should approximate 1 / (number of gophers) fairly well. At this point, we might as well use our enormous number of leftover snacks to estimate, and then just answer with the N such that the result is closest to 1/N. It turns out that M^3 tries of this experiment guarantee that the answer is correct, even in the worst case. The proof is included below as part of the explanation of the solution for test set 2.
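
And a sketch of the final estimation step, under the same assumed ask helper, with min_level being the minimum taste level found above:

  def estimate_num_gophers(ask, min_level, tries, M=25):
      # Offer `tries` snacks of quality min_level; only the unique gopher with
      # the minimum taste level eats them, so the eaten fraction is roughly
      # 1/N. Answer with the N whose 1/N is closest to it (e.g. tries = M**3).
      eaten = sum(1 for _ in range(tries) if ask(min_level))
      fraction = eaten / tries
      return min(range(2, M + 1), key=lambda n: abs(fraction - 1 / n))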

Test set 2

The solution from test set 1 does not extend naturally to test set 2. In particular, it no longer suffices to find out what fraction of gophers have the minimum (or maximum) taste level, because there could be multiple gophers with that taste level; if we find a fraction of 1/K, the number of gophers could be any multiple of K. So, we need to investigate other levels. Investigating taste levels other than an extreme one brings about the problem of the result being impacted by gophers with taste levels other than the one we are interested in. We can still use the idea of answering general queries by repeating snacks of the same quality, but they are significantly more complicated than a simple disjunction of the snack outcomes.

The first query Q we describe is somewhat natural: What is the exact fraction of gophers with taste level ≥ X?

Notice that this query alone is enough to solve the problem via a binary search of sorts: given a range [A, B] of at least two levels, with X and Y the known fractions of gophers with taste level ≥ A and ≥ B, respectively, we can calculate the fraction Z of gophers with taste level ≥ C, where C = (A + B) / 2. We recurse over the range [A, C] if X ≠ Z and over the range [C, B] if Y ≠ Z, because the fractions differ if and only if there is a gopher with a taste level inside the corresponding range. This algorithm allows us to identify all levels at which at least one gopher exists. For each such level L, we calculate the fraction of gophers with taste level exactly L by using our query and then subtracting the fractions of gophers found at each level greater than L. Finally, we can take the least common multiple (LCM) of the denominators of all those fractions to find the answer.
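
A Python sketch of this "multi-leaf binary search", assuming a hypothetical frac_at_least(X) helper that implements query Q exactly and returns a Fraction (how to realize such a query with a bounded number of snacks is the subject of the rest of this analysis):

  from fractions import Fraction
  from math import lcm

  def find_num_gophers(frac_at_least, max_level=10**6):
      levels = {}   # level -> fraction of gophers with taste level >= level

      def recurse(a, b, fa, fb):
          # Invariant: fa = frac_at_least(a), fb = frac_at_least(b), a < b.
          if fa == fb:
              return                      # no gopher has a level in [a, b)
          if a + 1 == b:
              levels[a] = fa              # at least one gopher at level a
              return
          c = (a + b) // 2
          fc = frac_at_least(c)
          recurse(c, b, fc, fb)           # upper half first (higher levels)
          recurse(a, c, fa, fc)

      recurse(1, max_level + 1, frac_at_least(1), Fraction(0))

      # Fraction of gophers at exactly level L, for each discovered level.
      exact = {}
      higher = Fraction(0)
      for lvl in sorted(levels, reverse=True):
          exact[lvl] = levels[lvl] - higher
          higher = levels[lvl]
      return lcm(*(f.denominator for f in exact.values()))

With exact fractions in hand, the LCM of their denominators is forced to be the number of gophers, thanks to the GCD-1 guarantee on the test data.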

This algorithm requires about N × ceil(log(10^6 - N)) queries like Q to be made. Unfortunately, we see below that this would be too many.

One way W1 of solving Q is to try enough snacks and then round to the nearest feasible fraction. If we use enough snacks, the rounding will give the exact answer. Note that if we offer some number of consecutive snacks, we are guaranteed to have the right answer in all but a prefix and a suffix of the tries, both of which have length up to M-1. This bounds the error by 2M-2. Moreover, since our experiment has a binary answer, the farthest we can be from the true number of positive (or negative) answers is M-1, in the case when there are exactly ceil(M/2) answers of one type and floor(M/2) of the other, and we happen to hit both a prefix and a suffix of size floor(M/2) giving the same answer (this is for odd M; the worst case would be M for even M).

Additionally, since we only consider fractions with denominators up to M, the distance between two different results is at least 1 / (M × (M-1)). This means that if our experiment has an error of less than half of that, rounding is guaranteed to produce the right answer. Putting this together with the total error bound of M, this implies that we need M^3 snacks to get a perfect answer for Q. A total of M × ceil(log(10^6 - M)) × M^3 snacks exceeds the allotted total by a large margin.

A different way W2 to answer Q is to always use R consecutive snacks, where R is the LCM of all the possible results. That means the error is always zero, since we give each gopher exactly the same number of snacks. Unfortunately, LCM(2,3,...,25) is also much too large, so this doesn't work either.

Another strategy to reduce the number of queries is to notice that we only need an exact number for the final levels, right before doing the LCM to get the result. For all intermediate values of our "multi-leaf binary search", we only need to decide whether there is some gopher in a range or not. One way W3 to do it would be to, instead of using exact fractions X, Y and Z for A, B and C above, have approximations X', Y' and Z' that are good enough that we can decide whether the real numbers are equal. Using 2M^2 snacks guarantees that we will encounter each gopher at least 2M times, which means that if X and Z are different, their approximations of the number of total positives will differ by at least 2M. Since we showed that the total error of both approximations is at most M-1, the error of the difference is at most 2M-1, which means comparing that difference with 2M is enough to determine whether the real fractions are equal or not. This significantly reduces the total required number of snacks, since we only need to use M × ceil(log(10^6 - M)) × M^2 for the multi-leaf binary search, and then M × M^3 or M × R to get the final precise answers. However, this is still too many.

The final algorithm requires us to use all three of the variants above: we start the multi-leaf binary search using W3. Each time we find a level, we use either W1 or W2, whichever is cheapest, to get the real fraction for the found level. If we recurse on the intervals of larger levels first, we will find the levels from highest to lowest, so we can do the subtraction. Once we have the real fraction, we potentially restrict the number of possible results N to the multiples of the denominator. This reduces R. Eventually, R might be small enough that we can use W2 for the binary search as well, instead of W3. Notice that each time we switch to W2, since we started with other methods, we may need up to R additional snacks to "align" ourselves, depending on how many snacks we have used so far. However, since we eventually use R for everything, the additional cost of these alignments is extremely small.

We leave the precise analysis of the total number of snacks needed to the reader, but it is possible, with some careful bounding, to prove that it never exceeds S; we couldn't find a case that gets above ~85% of S. The reason is that if the denominator of a fraction is larger than M/2, then there is only one possible result left and we are done. If it is less than M/2 but still somewhat large, the number of possibilities for the result is reduced significantly, and their LCM is reduced even more because the GCD of those possibilities is more than 1. If the denominator is small, it means that the found level has multiple gophers, which reduces the total cost of the multi-leaf binary search.

Analysis — D. The Cartesian Job

The first thing to notice about the problem is that all lasers rotate at the same speed. That means that after each second, regardless of direction, the lasers will be back at the configuration given in the input, and each subsequent second will just be a copy of the previous one. Then, we only need to check what happens within a second. In what follows, we assume the configuration given in the input happens at time = 0 seconds and we only care about what happens for times in the range 0 to 1 seconds.

Test set 1

In Test set 1, there are so few lasers that we can try every possible combination of directions and add 2^(-N) to the result for each one that leaves the segment uncovered some of the time.

For fixed directions of rotation, we can check whether the segment is covered by first mapping each laser to the interval of time during which it covers the segment, and then seeing whether the union of all those intervals covers the full second. We work with intervals in the modulo-1-second ring; that is, an interval can start at t0 and end at t1 < t0, covering the time from t0 to 1 second and from 0 seconds to t1. There are several ways to check whether a union of intervals, even intervals in a ring, covers the full second, but a simple one is to split the intervals that wrap around into two non-wrapping ones, sort those intervals by start time, set t = 0, and then, for each interval (t0, t1): if t0 > t, there is a hole; otherwise, set t = max(t, t1). If we reach the end, there is a hole if and only if t < 1.
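
Here is a minimal C++ sketch of that coverage check. It uses doubles for clarity, although, as discussed below, an exact representation of the times is preferable.

  #include <algorithm>
  #include <utility>
  #include <vector>

  // A sketch of the coverage check described above. Each interval is a pair
  // (t0, t1) of times in [0, 1); an interval with t1 < t0 wraps around the end
  // of the second.
  bool coversFullSecond(const std::vector<std::pair<double, double>>& intervals) {
    std::vector<std::pair<double, double>> flat;
    for (auto [t0, t1] : intervals) {
      if (t0 <= t1) {
        flat.push_back({t0, t1});
      } else {                // split a wrapping interval into two pieces
        flat.push_back({t0, 1.0});
        flat.push_back({0.0, t1});
      }
    }
    std::sort(flat.begin(), flat.end());  // by start time
    double t = 0.0;
    for (auto [t0, t1] : flat) {
      if (t0 > t) return false;           // a hole before this interval starts
      t = std::max(t, t1);
    }
    return t >= 1.0;                      // otherwise, a hole at the end
  }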

To obtain the covering interval for a given laser with endpoint p and second point q, we check the angles qp(0,0) and qp(0,1000), and then divide by 2π. Notice that if we do this with floating-point arithmetic, it is highly likely that we will have precision problems. We can do it with integer-only arithmetic by representing the times symbolically as the angles given by the original vectors. To simplify the presentation, we assume we computed actual times, but all the operations needed for the rest of the solution can be implemented for an indirect representation of those time intervals.
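
As one possible building block for such a symbolic representation (this is our illustration, not necessarily how contest solutions implemented it), here is a comparator that orders integer direction vectors by angle in [0, 2π) using only integer arithmetic:

  // One possible building block for such a representation: ordering integer
  // direction vectors by their angle in [0, 2*pi) around the origin without
  // any floating point. The vector (0, 0) is not a valid input.
  struct Vec { long long x, y; };

  int half(const Vec& v) {            // 0 for angles in [0, pi), 1 for [pi, 2*pi)
    return (v.y < 0 || (v.y == 0 && v.x < 0)) ? 1 : 0;
  }

  long long cross(const Vec& a, const Vec& b) { return a.x * b.y - a.y * b.x; }

  bool angleLess(const Vec& a, const Vec& b) {
    if (half(a) != half(b)) return half(a) < half(b);
    return cross(a, b) > 0;           // within a half-turn, compare by cross product
  }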

Since we need to check every combination of clockwise and counterclockwise rotations, and evaluating a particular combination requires looping over each laser, this solution takes O(N × 2^N) time, which is sufficient for Test set 1.
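
Putting these pieces together, the Test set 1 brute force might look like the sketch below, assuming the covering interval of each laser for each rotation direction has already been computed; it reuses coversFullSecond from the sketch above.

  #include <array>
  #include <utility>
  #include <vector>

  // A sketch of the Test set 1 brute force. lasers[i][d] is the precomputed
  // covering interval of laser i when it rotates in direction d.
  double uncoveredProbability(
      const std::vector<std::array<std::pair<double, double>, 2>>& lasers) {
    int n = lasers.size();
    double answer = 0.0;
    for (int mask = 0; mask < (1 << n); mask++) {    // each combination of directions
      std::vector<std::pair<double, double>> chosen;
      for (int i = 0; i < n; i++) chosen.push_back(lasers[i][(mask >> i) & 1]);
      if (!coversFullSecond(chosen)) answer += 1.0 / (1 << n);  // weight 2^(-N)
    }
    return answer;
  }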

Test set 2

We start by observing that the two time intervals corresponding to the two directions of rotation of a given laser are symmetrical; that is, they are of the form (t0, t1) and (1-t1, 1-t0). Notice that the symmetry is both around the 1/2-second point and around the extreme 0 seconds / 1 second, because we are in a ring. Adding the fact that each interval is less than 1/2 second long, we see that the pair of intervals coming from a given laser falls into one of three types:

  • Two non-overlapping intervals, one within (0, 1/2) and the other within (1/2, 1). Example: [0.2, 0.3] and [0.7, 0.8].
  • Two intervals that overlap around 1/2. Example: [0.3, 0.6] and [0.4, 0.7].
  • Two intervals that overlap around 0 a.k.a. 1. Example: [0.8, 0.1] and [0.9, 0.2].

For the last two types, the overlapped area is an interval of time during which the segment is guaranteed to be guarded. Therefore, we can remove the overlapped part from further consideration and assume that the two intervals are just their symmetric difference. In the example for the second type above, we'd remove [0.4, 0.6] from consideration and keep the pair of intervals [0.3, 0.4] and [0.6, 0.7]. After we do this with all intervals, the part that remains to be considered is some subinterval of (0, 1/2) and some subinterval of (1/2, 1), and each pair of intervals consists of exactly two symmetrical intervals, one on each side.

At this point, the problem we need to solve is: given two symmetrical intervals u1 within (0, 1/2) and u2 within (1/2, 1), and a set of pairs of intervals (a, b), (1 - b, 1 - a), what is the probability that, if we pick one interval from each pair uniformly at random, the union does not cover both u1 and u2? Notice that, because of the symmetry, a particular split (a collection of one interval from each pair) covers u1 if and only if the opposite split covers u2. So, an alternate way to frame the problem is: given the list of intervals from each pair that are inside u1, what is the probability that at least one part of a random split of them doesn't cover u1? We can solve this problem with a dynamic programming approach over the list of intervals.

The dynamic programming algorithm is, in a way, a simulation of the algorithm we presented above to check whether the union of the intervals covers the universe, run over all combinations at once. We iterate over the intervals in non-decreasing order of left end, and we keep as state which prefix of u1 each side of the split has already covered. Formally, if u1 = (v, w), we calculate a function f(i, x, y) = the probability that a split of intervals i, i+1, ..., N does not fully cover both the remainder of the first side, (x, w), and the remainder of the second side, (y, w). Equivalently, this is the probability of the cover not being full given that intervals 1, 2, ..., i-1 are split in a way that the union of the intervals from one side of the split is (v, x) and the union of the intervals from the other side is (v, y). The base cases are: if min(x, y) ≥ w, then f(i, x, y) = 0; otherwise, if i > N or min(x, y) is less than the left end of the i-th interval, then f(i, x, y) = 1. For all other cases, we can calculate f(i, x, y) = (f(i+1, max(x, b), y) + f(i+1, x, max(y, b))) / 2, where (a, b) is the i-th interval in the order. Finally, the answer to the problem is f(1, v, v).

Noticing that the values x and y for which we need to calculate f are always either v or the right end of some interval bounds the size of the part of f's domain that we need, and thus the running time of the algorithm, by O(N^3). We can further notice that max(x, y) is always equal to the maximum of v and the largest right end among intervals 1, 2, ..., i-1. This reduces the domain size and running time to O(N^2). One further optimization is needed: if min(x, y) is not one of the K largest right ends among intervals 1, 2, ..., i-1, then the value of f(i, x, y) gets multiplied by 2^(-K) or less on its way to the final answer. For K as small as 50, the impact of such values of f on the answer is negligible, so we can simply approximate them as 0, making the size of the domain of f that we have to recursively calculate only O(K × N).

Implementing the dynamic programming algorithm as explained above can be tricky, especially if you want to memoize using integer indices over the intervals and the K largest last values, as opposed to floating-point numbers. However, a forward-looking iterative algorithm can be a lot easier. We maintain a dictionary from states to probabilities, where a state is just the two values x and y, always sorted so that x ≤ y. We start with just {(v, v): 1}. Then, we consider each interval (a, b) in order; for each state (s1, s2) with probability p in the previous step's result, if a ≤ s1, we add probability p / 2 to each of the new states sorted(max(s1, b), s2) and sorted(s1, max(s2, b)). If a > s1, we add p to our accumulated result and don't generate any new state, since state (s1, s2) is guaranteed to leave some unguarded time. After the last interval, any surviving state with min(s1, s2) < w also adds its probability to the result. This is a simple implementation of the quadratic version; the real trick is the optimization that brings the time down to linear, which in this case is simply ignoring states with too low a probability (i.e., if p < ε, do nothing).
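
The following C++ sketch follows that iterative formulation. It uses doubles and an illustrative pruning threshold for brevity, whereas a contest solution would compare the symbolic times exactly; the names are ours.

  #include <algorithm>
  #include <map>
  #include <utility>
  #include <vector>

  // A sketch of the forward iterative DP described above. `intervals` holds,
  // for each pair, the interval that lies inside u1 = (v, w); the return value
  // is the probability that at least one side of a uniformly random split
  // leaves part of u1 uncovered. The pruning threshold eps is illustrative.
  double probabilityOfHole(std::vector<std::pair<double, double>> intervals,
                           double v, double w, double eps = 1e-12) {
    std::sort(intervals.begin(), intervals.end());        // by left end
    std::map<std::pair<double, double>, double> states;   // (s1 <= s2) -> probability
    states[{v, v}] = 1.0;
    double holeProbability = 0.0;
    for (auto [a, b] : intervals) {
      std::map<std::pair<double, double>, double> next;
      for (const auto& [state, p] : states) {
        auto [s1, s2] = state;
        if (p < eps) continue;        // the pruning that makes the DP effectively linear
        if (s1 >= w) continue;        // both sides already cover u1: contributes nothing
        if (a > s1) {                 // the gap (s1, a) can never be covered later
          holeProbability += p;
          continue;
        }
        // The current interval goes to one side of the split or the other,
        // each with probability 1/2; keep the pair sorted.
        double n1 = std::max(s1, b), n2 = s2;
        next[{std::min(n1, n2), std::max(n1, n2)}] += p / 2;
        next[{s1, std::max(s2, b)}] += p / 2;
      }
      states = std::move(next);
    }
    for (const auto& [state, p] : states)  // a side that never reached w leaves a hole
      if (state.first < w) holeProbability += p;
    return holeProbability;
  }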

Test Data
We recommend that you practice debugging solutions without looking at the test data.

Analysis — E. Swordmaster

The only way we can fail to become the Swordmaster is to get stuck in a situation where we cannot defeat any of the remaining duelists (duelists that we have not beaten yet) without first learning additional skills. At any point in time, the remaining duelists can be divided into four groups:

  1. D1: Duelists we certainly can defeat.
  2. D2: Duelists that have an attack we cannot defend against and a defense for every one of our attacks.
  3. D3: Duelists that do not have a defense for one of our attacks, but have an attack we cannot defend against.
  4. D4: Duelists that do not have an attack we cannot defend against, but have a defense for every one of our attacks.

If there is any remaining duelist in D1, we should just defeat them. If there is someone in D3 and there is someone in D4, then we can make some progress (i.e. reducing the number of remaining duelists, or moving a remaining duelist to D1) by doing the following:

  1. We choose any duelist in D3 (call them A) and any duelist in D4 (call them B).
  2. We ask A for a duel. Since A is in D3, A has an attack that we can't defend against. Either A will use an attack that we can't defend against (and we will learn that attack), or we can defend against everything A uses and win the duel (thus learning all of A's attacks). Either way, at the end of the duel, we will have learned an attack that we can't defend against. Let us call this attack a.
  3. If B cannot defend against a, then B has moved to D1 and we have made some progress. Otherwise, we ask B for a duel and use attack a. If B cannot defend, then we will win the duel and make some progress. Otherwise, we will learn defense a.
  4. Now that we have learned defense a, A might have moved to D1, or might still be in D3 (by having another attack that we can't defend against). If A moved to D1, then we have made progress. Otherwise, we repeat step 2 until A has no attack that we can't defend against.

Therefore, the only possibilities in which we might fail to become the Swordmaster are when either every remaining duelist is in D2 or D3, or every remaining duelist is in D2 or D4.

If every remaining duelist is in D2 or D3, then every remaining duelist has an attack we can't defend against. From this point on, every remaining duelist can just attack us with an attack we can't defend against and choose not to defend, which means that we will not learn any new defenses. Therefore, we can't defeat any more duelists and are certain to fail to become the Swordmaster. Notice that we can actually detect this situation up front: if there exists a group of duelists G1 (not containing us) such that every duelist in G1 has at least one attack against which nobody outside G1 can defend, then we are doomed. The strategy for the duelists is that everybody in G1 can just successfully attack us and choose not to defend (thus we never learn any new defenses known by duelists in G1).

We can check the existence of G1 by starting with G1', the set of all duelists except us, and iteratively removing from G1' any duelist all of whose attacks we can defend against, adding their defenses to ours as we do. If a non-empty set remains, it is a valid G1. This can be done in O(N × P) time. This complexity is also bounded by the sum of the numbers of attacks and defenses known by all duelists, which is linear in the size of the input.
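
Here is a straightforward C++ sketch of that check (not the O(N × P) version). It assumes duelist 0 is us and that attacks and defenses are given as sets of skill ids; both are our own conventions for illustration.

  #include <set>
  #include <vector>

  // Illustration: duelist 0 is us; attacks and defenses are sets of skill ids.
  // We repeatedly remove from the candidate set any duelist whose every attack
  // we can already defend against, learning their defenses as we do; whatever
  // remains, if non-empty, is a valid G1.
  struct Duelist { std::set<int> attacks, defenses; };

  bool existsG1(const std::vector<Duelist>& duelists) {
    std::set<int> known = duelists[0].defenses;   // defenses available outside G1'
    std::vector<bool> removed(duelists.size(), false);
    removed[0] = true;                            // we are never part of G1
    bool changed = true;
    while (changed) {
      changed = false;
      for (size_t i = 1; i < duelists.size(); i++) {
        if (removed[i]) continue;
        bool allDefendable = true;
        for (int a : duelists[i].attacks)
          if (!known.count(a)) { allDefendable = false; break; }
        if (allDefendable) {                      // we can beat i: learn their defenses
          removed[i] = true;
          known.insert(duelists[i].defenses.begin(), duelists[i].defenses.end());
          changed = true;
        }
      }
    }
    for (size_t i = 1; i < duelists.size(); i++)
      if (!removed[i]) return true;               // a non-empty G1 remains
    return false;
  }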

Test set 1

For test set 1, the situation when every remaining duelist is in D2 or D4 is similar. This means every remaining duelist can defend against all of our attacks. Therefore, there exists a group of duelists G2 (not containing us) such that every duelist in G2 has a defense for every attack available outside G2. We are then also doomed, since everybody in G2 can just defend against our attacks and always use attack 1 (thus we never learn any new attacks known by duelists in G2). We can check the existence of such a G2 up front with an algorithm analogous to the one presented above for G1.

Test set 2

For test set 2, the situation when every remaining duelist is in D2 or D4 is more complex. The difference is that our opponents have to attack, so even when dueling a duelist who can defend against all of our attacks, we can learn a new attack. This new attack may cause some duelist to move from D4 to D1, or from D2 to D3. Therefore, the fact that every remaining duelist is in D2 or D4 is not a sufficient condition for us to fail to become the Swordmaster.

Now imagine that we are actually doomed: we have reached a position where, finally, every remaining duelist is in D2 or D4, we will never defeat another duelist, and we will never learn a new attack or defense. Such a situation has to be reached at some point, since there are only finitely many learning steps (of attacks or defenses) to be made. In this situation, we could fight a duel with every remaining duelist and learn the attacks they use. By definition, we already knew all of these attacks, and so, also by definition, all our opponents can defend against all of them. So, for us to be doomed along this path, there must exist a group G2 of duelists and an assignment of an attack to each of them, a : G2 -> Attacks, such that every duelist d in G2 knows attack a(d) and knows how to defend against every attack known outside G2 and every attack a(d') for d' in G2.

This condition is harder to check. Let us assume there exists such G2. Consider any two duelists X and Y. If X has an attack that Y cannot defend against, then Y being in G2 implies X must be in G2 as well. This defines a directed graph on the duelists, with the edge going from Y to X.

We find the strongly connected components (SCCs) of this graph. If any duelist of an SCC is in G2, then the whole SCC is in G2. Therefore, G2 must be an upper subset (a subset with no edges going out of it) of the directed acyclic graph (DAG) of the strongly connected components of the graph. Notice that if some upper subset of duelists is a valid G2, then any non-empty subset of it that is also an upper subset is a valid G2 with the same attack selections. Therefore, it is enough to check only the leaf SCCs of the DAG: if any valid G2 exists, then a G2 consisting of a single leaf SCC also exists.

Therefore, for each leaf SCC in the DAG, we check whether each duelist in the SCC can choose an attack that every duelist in the SCC can defend against. To do this, we first find the set D, the intersection of the defenses known by all duelists in the SCC, and then check whether every duelist in the SCC has an attack in D.
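
A sketch of that per-SCC check, reusing the illustrative Duelist representation from the earlier sketch (computing the SCCs themselves, e.g. with Tarjan's algorithm, is assumed to happen elsewhere):

  #include <set>
  #include <vector>

  struct Duelist { std::set<int> attacks, defenses; };  // same as in the earlier sketch

  // Check a single (leaf) SCC, given as a list of duelist indices: compute D,
  // the intersection of all defenses in the SCC, and verify that every member
  // has at least one attack in D.
  bool sccCanBeG2(const std::vector<Duelist>& duelists, const std::vector<int>& scc) {
    std::set<int> D = duelists[scc[0]].defenses;
    for (size_t i = 1; i < scc.size(); i++) {
      std::set<int> narrowed;
      for (int d : D)
        if (duelists[scc[i]].defenses.count(d)) narrowed.insert(d);
      D = narrowed;
    }
    for (int who : scc) {
      bool hasAttackInD = false;
      for (int a : duelists[who].attacks)
        if (D.count(a)) { hasAttackInD = true; break; }
      if (!hasAttackInD) return false;
    }
    return true;
  }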

To construct the graph in O(N × P) time, we can introduce nodes for attacks as well. We add an edge from a duelist to an attack if the duelist cannot defend against the attack, and an edge from an attack to a duelist if the duelist knows that attack. The overall solution runs in O(N × P) time.
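
A sketch of this construction, under the same illustrative representation as before and assuming attack ids are 0 through P-1; running an SCC algorithm on the resulting graph and extracting its leaf components is omitted.

  #include <set>
  #include <vector>

  struct Duelist { std::set<int> attacks, defenses; };  // same as in the earlier sketches

  // Nodes 0..N-1 are duelists and nodes N..N+P-1 are attacks (attack ids are
  // assumed to be 0..P-1 here). A standard SCC algorithm would then be run on
  // the returned adjacency lists.
  std::vector<std::vector<int>> buildGraph(const std::vector<Duelist>& duelists, int P) {
    int N = duelists.size();
    std::vector<std::vector<int>> adj(N + P);
    for (int i = 0; i < N; i++) {
      for (int a = 0; a < P; a++)
        if (!duelists[i].defenses.count(a)) adj[i].push_back(N + a);  // i cannot defend against a
      for (int a : duelists[i].attacks) adj[N + a].push_back(i);      // i knows attack a
    }
    return adj;
  }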

Test Data
We recommend that you practice debugging solutions without looking at the test data.

Statistics — A. Jurisdiction Restrictions

Test set 1: 19 correct solutions (76.0% solve rate)

First
LHiC C++, 39:54
Radewoosh C++, 40:38
tourist aka Gennady.Korotkevich C++, 44:00
jcvb C++, 50:07
msg555 C++, 66:34
Shortest
msg555 C++, 1645 bytes
tourist aka Gennady.Korotkevich C++, 2092 bytes
rng_58 aka rng..58 C++, 2937 bytes
Syloviaely C++, 3837 bytes
jcvb C++, 3941 bytes

Test set 2: 15 correct solutions (60.0% solve rate)

First
LHiC C++, 39:54
tourist aka Gennady.Korotkevich C++, 44:00
msg555 C++, 66:34
Syloviaely C++, 66:48
dario2994 C++, 68:55
Shortest
msg555 C++, 1645 bytes
tourist aka Gennady.Korotkevich C++, 2092 bytes
rng_58 aka rng..58 C++, 2937 bytes
Syloviaely C++, 3837 bytes
V--o_o--V aka overtroll C++, 4264 bytes

Statistics — B. Two-Tiling

Test set 1: 3 correct solutions (12.0% solve rate)

First
tourist aka Gennady.Korotkevich C++, 134:37
pashka C++, 210:17
PavelKunyavskiy C++, 214:30
Shortest
tourist aka Gennady.Korotkevich C++, 18275 bytes
pashka C++, 68301 bytes
PavelKunyavskiy C++, 290011 bytes

Statistics — C. Go, Gophers!

Statistics — D. The Cartesian Job

Test set 1: 3 correct solutions (12.0% solve rate)

First
rng_58 aka rng..58 C++, 136:45
SnapDragon C++, 205:11
Errichto C++, 238:48
Shortest
rng_58 aka rng..58 C++, 3861 bytes
Errichto C++, 5654 bytes
SnapDragon C++, 10449 bytes

Test set 2: 2 correct solutions (8.0% solve rate)

First
rng_58 aka rng..58 C++, 161:46
Errichto C++, 239:18
Shortest
rng_58 aka rng..58 C++, 3920 bytes
Errichto C++, 5653 bytes

Statistics — E. Swordmaster

Test set 1: 3 correct solutions (12.0% solve rate)

First
V--o_o--V aka overtroll C++, 137:32
eatmore Java, 172:23
LHiC C++, 193:33
Shortest
LHiC C++, 3212 bytes
V--o_o--V aka overtroll C++, 6163 bytes
eatmore Java, 6668 bytes

Test set 2: 1 correct solution (4.0% solve rate)

First
V--o_o--V aka overtroll C++, 239:49
Shortest
V--o_o--V aka overtroll C++, 6163 bytes