Problem 1: Unique Elements
There are several solutions to this problem, but the easiest is to sort the list of numbers, and
count the number of indices where the element at that index is not equal to either of its
adjacent elements. Since the array is sorted, this would indicate that the chosen element is
unique. You could also count occurrences with a map (or any other frequency counter) and count how many elements appear exactly once.
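As a concrete sketch of the counting approach (in Python, assuming the input is simply a list of integers and the answer is the number of values that occur exactly once):

from collections import Counter

def count_unique(nums):
    # Count how many times each value occurs, then count the values
    # that occur exactly once.
    freq = Counter(nums)
    return sum(1 for v in freq.values() if v == 1)

# Example: 1, 3, and 5 each occur exactly once, so this prints 3.
print(count_unique([2, 1, 2, 3, 4, 4, 5]))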
Problem 2: Egocentric Subarrays
Since the constraints are low, we can run a brute force solution that iterates over all possible
subarrays (each subarray is determined by its start and end indices, so there are O(n^2) of
them). For each subarray, we can find its minimum and maximum in O(n), making the total
time complexity of the algorithm O(n^3), which is fast enough.
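A brute-force sketch of this loop structure is shown below; the exact condition for a subarray being "egocentric" is not restated in this editorial, so it is left here as a hypothetical is_egocentric predicate to be filled in from the problem statement:

def count_egocentric(a, is_egocentric):
    # is_egocentric(sub, lo, hi) is a hypothetical predicate standing in for
    # the actual condition from the problem statement (sub is the subarray,
    # lo its minimum, hi its maximum).
    n = len(a)
    count = 0
    for i in range(n):               # start index of the subarray
        for j in range(i, n):        # end index of the subarray
            sub = a[i:j + 1]
            lo, hi = min(sub), max(sub)   # O(n) per subarray, O(n^3) overall
            if is_egocentric(sub, lo, hi):
                count += 1
    return count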
Problem 3: Infectious Letters
The easiest solution to this problem is to just simulate the process of infection. First, we’ll
consider all of the “a” letters infecting characters adjacent to them on the right. We can do
this by looping from left to right, and changing a character into an “a” if the character
adjacent to it on the left is an “a”, and the character itself isn’t a “b”. This will take care of all
infections that go to the right. We’ll repeat this for all of the infections that go to the left, but
we’ll now loop in reverse order. The process for this is similar, but now we’ll change a
character into an “a” if the character adjacent to it on the right is an “a”, and the character
itself isn’t a “b”. After we do this, we can count the total number of “a” characters in the
resulting string, which is our answer.
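A direct simulation of the two passes might look like this (assuming the input is a single string and the answer is the number of "a" characters once the infections settle):

def count_infected(s):
    chars = list(s)
    n = len(chars)
    # Left-to-right pass: an 'a' infects its right neighbour unless it is a 'b'.
    for i in range(1, n):
        if chars[i - 1] == 'a' and chars[i] != 'b':
            chars[i] = 'a'
    # Right-to-left pass: an 'a' infects its left neighbour unless it is a 'b'.
    for i in range(n - 2, -1, -1):
        if chars[i + 1] == 'a' and chars[i] != 'b':
            chars[i] = 'a'
    return chars.count('a')

# Hypothetical example: the 'c' on the left gets infected, the 'b' blocks the
# spread to the right, so this prints 3.
print(count_infected("caabxc"))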
Problem 4: Park Fountains
We can run a brute force program that updates the fountains after each query, and prints out
the final answer at the end. This runs in O(NQ), which passes the easier subtask, but is too
slow for the full problem.
To speed this up, we can think about the fountains from left to right. Because of the
constraints, Q people will always have used the first fountain (since everyone will have used
at least one fountain). When we transition to the second fountain, the same number of people
have used this fountain as the first fountain, except for the people that have used exactly one
fountain. This is similar to the prefix sum technique.
After observing this, we can make a counter array A, where A[i] stores how many people use exactly the fountains from 1 to i (in other words, how many people stop after fountain i). Based on this, we know that Q people will use the
first fountain, and Q - A[1] (1-based indexing) will use the second fountain (everyone except
people who used only the first fountain). We can keep a tally of people for each fountain, and
subtract A[i] from it at each iteration. Finally, we can count the number of fountains that will
break based on this, which will be our answer.
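A sketch of this counting approach, under the assumptions that each query gives the index of the last fountain that person uses (so they use fountains 1 through that index) and that a fountain breaks once its number of users reaches some threshold limit:

def count_broken(n, queries, limit):
    # cnt[i] = how many people use exactly the fountains 1..i
    cnt = [0] * (n + 1)
    for last in queries:
        cnt[last] += 1

    broken = 0
    users = len(queries)          # everyone uses fountain 1
    for i in range(1, n + 1):
        if users >= limit:        # assumed breaking condition
            broken += 1
        users -= cnt[i]           # people whose last fountain is i stop here
    return broken

# Hypothetical example: 3 fountains, each query is a person's last fountain,
# and a fountain breaks once 2 or more people have used it. The fountains get
# 4, 3, and 2 users respectively, so this prints 3.
print(count_broken(3, [1, 3, 2, 3], 2))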
Problem 5: Channel Surfing
Since watching more channels can't decrease your total TV score, the TV score can only stay the same or increase as your TV becomes capable of recording more channels at once. Thus, we can binary search on
the answer. If your TV can record C channels at once, you can calculate the TV score by
iterating over all possible minutes in time, and for each minute, choosing the C highest
enjoyment values that have interesting content. If the total TV score is at least K, then we
know that this value of C works, and otherwise, we know that it doesn’t, which we can use in
our binary search code. Finally, we can check if using N channels works. If it doesn’t, the
answer is -1, and otherwise, the answer is whatever we calculated in the binary search.
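A sketch of the binary search and the feasibility check, assuming the schedule is given as, for each minute, the list of enjoyment values of the channels that have interesting content during that minute (this representation is an assumption, not the actual input format):

def best_capacity(minutes, n, k):
    # minutes[t] = enjoyment values of channels with interesting content
    # during minute t (assumed representation); n = number of channels.

    def score_with(c):
        # With capacity c, record the c highest enjoyment values available
        # at each minute; the TV score is the sum over all minutes.
        total = 0
        for values in minutes:
            total += sum(sorted(values, reverse=True)[:c])
        return total

    if score_with(n) < k:        # even recording every channel is not enough
        return -1

    lo, hi = 0, n                # smallest capacity c with score >= k
    while lo < hi:
        mid = (lo + hi) // 2
        if score_with(mid) >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo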
Problem 6: Birdwatching
For this problem, consider the difference between the position of each camera and each bird, over all pairs; each pair starts with some fixed difference. When we move the cameras K positions to the right, each of these differences increases by K as well, since every camera moves by the same amount.
After making this observation, the problem boils down to having an array of N * M values (the
differences between all pairs of cameras and birds), and you’re trying to increase or decrease
all of them by a certain value in order to make the maximum number of them equal to zero.
Thus, the maximum number of bird-camera matchups that we can make is equal to the highest frequency of any single value in this array, since that is the largest number of differences that a single shift can make equal to zero. You can calculate this frequency using a map, or by sorting the array of differences and finding the longest run of equal values.
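A sketch of this frequency count, assuming the cameras and birds are given as lists of integer positions:

from collections import Counter

def max_matchups(cameras, birds):
    # For every camera/bird pair, record the shift that would move the camera
    # onto the bird; the best shift is the one that appears most often.
    diffs = Counter(bird - cam for cam in cameras for bird in birds)
    return max(diffs.values()) if diffs else 0

# Prints 2: shifting by +2 matches the cameras at 1 and 4 with the birds at 3 and 6.
print(max_matchups([1, 4, 7], [3, 6, 10]))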
Problem 7: Trailing Zeros
The first thing to observe in this problem is that no matter how we order the array, the only thing that matters is how many factors of two each number has (since this is the only thing
that will affect the number of trailing zeros in the binary representation of cumulative
product value). So, we can make a new array B where the i’th element of B is the number of
factors of two A[i] has.
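Computing B from A is just a matter of counting factors of two, for example:

def factors_of_two(x):
    # Number of times 2 divides x (the number of trailing zero bits of x).
    count = 0
    while x > 0 and x % 2 == 0:
        x //= 2
        count += 1
    return count

A = [12, 5, 8, 6]
B = [factors_of_two(a) for a in A]   # [2, 0, 3, 1]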
Once we do this, we can return to the main task, which is finding the reordering of the array that maximizes the number of trailing zeros in the binary representation of the cumulative product value. For a given array B consisting of the counts of factors of two in the main array, the number of trailing zeros in the binary representation of the cumulative
product value can be rewritten as B[0] * 1 + B[1] * 2 + B[2] * 3 + … + B[N-1] * N, since each factor
of two adds one more trailing zero to the binary representation. By the rearrangement
inequality, the optimal solution for this would be to place the smallest values of B towards
the front of the array, i.e. the array B would be sorted in non-decreasing order in the optimal
solution. Since each operation consists of a swap of two adjacent elements, it will take a
number of swaps equal to the number of inversions in B in order to sort B.
We can calculate the number of inversions using a segment tree or BIT, but this is not necessary since the values of B are small (at most about log2(A[i])). Instead, we can loop from left to right in B, and for each value strictly greater than the current element, add the number of times we have seen that value so far (using a counter array) to the total number of inversions.
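A sketch of this inversion count over the small values of B (here the values are assumed to be at most 60, roughly the number of bits in the input values):

def count_inversions_small_values(B, max_val=60):
    # Counts pairs (i, j) with i < j and B[i] > B[j], exploiting the fact
    # that every value lies in the small range [0, max_val].
    seen = [0] * (max_val + 1)
    inversions = 0
    for b in B:
        # every previously seen value strictly greater than b forms an inversion
        inversions += sum(seen[b + 1:])
        seen[b] += 1
    return inversions

# Prints 3: the inversions in [2, 0, 3, 1] are (2,0), (2,1), and (3,1).
print(count_inversions_small_values([2, 0, 3, 1]))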
Problem 8: Maximum Donut
To solve the easier subtask, we can find the maximum donut using prefix sums. For a pair of corner positions (i, j) and (k, l) in the 2D grid, the donut sum is made up of four parts: the elements from column j to column l in row i, the elements from column j to column l in row k, the elements from row i to row k in column j, and the elements from row i to row k in column l. You also have to be careful not to double-count the four corner cells of the donut. Each part can be computed with row and column prefix sums, so iterating over all pairs of corners gives an O(n^4) solution.
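A sketch of this O(n^4) subtask solution, using row and column prefix sums and counting each corner cell exactly once (a donut is assumed here to need at least 3 rows and 3 columns so that it has a hole):

def max_donut_slow(grid):
    n, m = len(grid), len(grid[0])
    # row_pre[r][c] = sum of grid[r][0..c-1]; col_pre[c][r] = sum of grid[0..r-1][c]
    row_pre = [[0] * (m + 1) for _ in range(n)]
    col_pre = [[0] * (n + 1) for _ in range(m)]
    for r in range(n):
        for c in range(m):
            row_pre[r][c + 1] = row_pre[r][c] + grid[r][c]
            col_pre[c][r + 1] = col_pre[c][r] + grid[r][c]

    best = None
    for r1 in range(n):
        for r2 in range(r1 + 2, n):
            for c1 in range(m):
                for c2 in range(c1 + 2, m):
                    # Top and bottom rows from c1 to c2, plus the left and
                    # right columns strictly between the two rows, so the
                    # four corners are counted exactly once.
                    total = (row_pre[r1][c2 + 1] - row_pre[r1][c1]
                             + row_pre[r2][c2 + 1] - row_pre[r2][c1]
                             + col_pre[c1][r2] - col_pre[c1][r1 + 1]
                             + col_pre[c2][r2] - col_pre[c2][r1 + 1])
                    best = total if best is None else max(best, total)
    return best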
The solution above is too slow for the full problem. To speed it up, we’ll use a trick similar to
the solution to the Maximum Subarray Sum problem. We’ll first iterate over all pairs of the
bottom and top row of the donut. Then, we'll sweep over the possible right columns. For a given right column, we'll calculate a "green" sum: the elements of the top row and the bottom row from the left edge of the grid up to and including the right column, plus the elements of the right column strictly between the two rows (all computable with prefix sums on each row and column).
Then, for a candidate left column, define the "red" sum as the elements of the top and bottom rows strictly to the left of that left column, and the "yellow" sum as the elements of that left column strictly between the two rows. The sum of the donut with this left column, and the right column, top row, and bottom row that we're looking at, is the green sum minus the red sum plus the yellow sum (the red part is overcounted inside the green sum, and the yellow part is not counted at all).
Thus, the maximum value of a donut with the right column, top row, and bottom row that we're looking at will be green - (red - yellow). Since the green sum is fixed, we want to subtract the minimum value of (red - yellow). This can be done by keeping a running minimum of (red - yellow) over all left columns that are at least two columns to the left of the current right column (to avoid counting a degenerate 2-column donut); each (red - yellow) value is again computed with prefix sums. Since there are O(n^2) pairs of top row and bottom row, and we're
taking O(n) time to iterate from left to right, the total time complexity of the algorithm will be
O(n^3), which is fast enough to solve the problem.
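A sketch of the full O(n^3) solution, following the green/red/yellow decomposition above (with the same assumptions about the donut shape as in the previous sketch):

def max_donut(grid):
    n, m = len(grid), len(grid[0])
    row_pre = [[0] * (m + 1) for _ in range(n)]
    col_pre = [[0] * (n + 1) for _ in range(m)]
    for r in range(n):
        for c in range(m):
            row_pre[r][c + 1] = row_pre[r][c] + grid[r][c]
            col_pre[c][r + 1] = col_pre[c][r] + grid[r][c]

    best = None
    for r1 in range(n):
        for r2 in range(r1 + 2, n):
            def col_between(c):
                # sum of column c strictly between rows r1 and r2
                return col_pre[c][r2] - col_pre[c][r1 + 1]

            min_red_minus_yellow = None
            for c2 in range(2, m):
                # the left column c2 - 2 becomes allowed for this right column
                c1 = c2 - 2
                red = row_pre[r1][c1] + row_pre[r2][c1]   # rows strictly left of c1
                yellow = col_between(c1)
                value = red - yellow
                if min_red_minus_yellow is None or value < min_red_minus_yellow:
                    min_red_minus_yellow = value

                green = row_pre[r1][c2 + 1] + row_pre[r2][c2 + 1] + col_between(c2)
                donut = green - min_red_minus_yellow
                best = donut if best is None else max(best, donut)
    return best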
Problem 9: Plane and Simple
For clarity, let’s call M the number of conveyor belt positions (i.e. the number of “x” characters
in the grid). To solve the partial subtask, you can iterate over all subsets of conveyor belt
positions of size k, and for each one, just simulate the process of moving the conveyor belt,
checking how close the adjacent bags will get at any point in the rotation. This will run in
O(2^M * M^2). This is fast enough for the partial subtask, but is way too slow for the full problem (it would take an astronomically long time to run on a large input).
To speed this up, the first observation to make is that if two adjacent bags are X units apart
on the conveyor belt, then there is a fixed minimum distance that these bags will have to be
away from each other, no matter the position of the two bags, since the conveyor will rotate
through all possible position offsets. Given this, we can compute the minimum Manhattan distance that two bags will ever reach, for each possible gap between them on the conveyor belt. Now, the problem boils down to choosing K positions from 1 to M such that the minimum, over every pair of adjacent chosen positions (including the pair formed by the 1st and the K'th), of the corresponding minimum distance is as large as possible. To do this, we can use dynamic programming. We'll make an array DP[M][K], defining DP[i][j] to be the maximum possible minimum distance if we place the first j bags on the positions from 0 to i (with the j'th bag at position i), excluding the wrap-around distance between the j'th bag and the 1st bag. Then, the answer is the maximum over i of DP[i][K], except that for some of these positions the wrap-around distance between the K'th bag and the 1st bag can be smaller than DP[i][K], so we take the minimum of the two in that case. There are O(MK) DP states to consider, and the transitions take O(M), so the total time complexity of the algorithm is O(K*M^2), which is fast enough.
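A sketch of this DP, assuming min_dist[g] has already been precomputed as the minimum distance two bags g positions apart on the belt will ever reach (the simulation that produces it is problem-specific and omitted here), and assuming we may fix the first bag at position 0, which is safe because the gaps only depend on differences between chosen positions:

def best_min_distance(m, k, min_dist):
    # min_dist[g] for 1 <= g <= m - 1 is assumed precomputed as described above.
    # Assumes at least two bags are placed (k >= 2).
    NEG = float("-inf")
    # dp[i][j] = best possible minimum distance when the first j bags occupy
    # positions 0..i, with the 1st bag at 0 and the j'th bag at i, ignoring the
    # wrap-around gap between the j'th bag and the 1st bag.
    dp = [[NEG] * (k + 1) for _ in range(m)]
    dp[0][1] = float("inf")            # one bag placed, no gaps yet
    for j in range(2, k + 1):
        for i in range(j - 1, m):
            for p in range(j - 2, i):  # position of the (j-1)'th bag
                cand = min(dp[p][j - 1], min_dist[i - p])
                if cand > dp[i][j]:
                    dp[i][j] = cand
    # Close the circle: the wrap-around gap between the K'th bag (at i) and the
    # 1st bag (at 0) is m - i, so take the minimum with that gap's distance.
    answer = NEG
    for i in range(k - 1, m):
        answer = max(answer, min(dp[i][k], min_dist[m - i]))
    return answer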