Editorial for COCI '23 Contest 3 #4 Restorani
We can reformulate the task in the following way: given a tree with $n$ nodes and two sets of nodes $A$ and $B$ of size $k$ each, we need to find the length of and construct the shortest path that begins and ends in node $1$, alternates between nodes from set $A$ and nodes from set $B$, and visits each node from both sets at least once.

Let's call a node $x$ interesting if $x = 1$, $x \in A$ or $x \in B$. Notice that there are at most $2k + 1$ interesting nodes.
Let $dp(x, mask_A, mask_B)$ be the minimal length of a path if we are currently in node $x$ and we've already visited exactly the nodes from $mask_A \subseteq A$ and from $mask_B \subseteq B$. If $|mask_A| = |mask_B| + 1$, the node $x$ that we are currently in belongs to set $A$. The transition is
$$dp(x, mask_A, mask_B) = \min_{y \in mask_B} \bigl( dp(y, mask_A \setminus \{x\}, mask_B) + d(x, y) \bigr),$$
where $d(x, y)$ is the distance between interesting nodes $x$ and $y$ (the base case is $dp(x, \{x\}, \emptyset) = d(1, x)$). Otherwise, $|mask_A| = |mask_B|$ and node $x$ belongs to set $B$; the transition is similar to the one mentioned. We can solve this with dynamic programming with bitmasks. The answer is then contained in $\min_{x \in B} \bigl( dp(x, A, B) + d(x, 1) \bigr)$, and we can construct the solution by backtracking our $dp$ transitions.
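As an illustration, here is a minimal sketch of this DP, assuming an unweighted tree with nodes $1, \dots, n$, a small $k$ (the table has $4^k \cdot k$ states), and a made-up input format (first $n$ and $k$, then the $n-1$ edges, then the $k$ nodes of $A$ and the $k$ nodes of $B$). Only the length is computed; the path itself would be recovered by backtracking.

```cpp
// Sketch of the bitmask DP over interesting nodes (small k only).
// Assumptions: unweighted tree, nodes 1..n, hypothetical input format:
// n k, then n-1 edges, then the k nodes of A, then the k nodes of B.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

int main() {
    int n, k;
    cin >> n >> k;
    vector<vector<int>> adj(n + 1);
    for (int i = 0; i < n - 1; i++) {
        int u, v;
        cin >> u >> v;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    vector<int> A(k), B(k);
    for (int &x : A) cin >> x;
    for (int &x : B) cin >> x;

    // Unweighted distances from one source, by BFS.
    auto bfs = [&](int src) {
        vector<ll> d(n + 1, -1);
        queue<int> q;
        q.push(src), d[src] = 0;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) if (d[v] == -1) d[v] = d[u] + 1, q.push(v);
        }
        return d;
    };
    vector<ll> fromRoot = bfs(1);            // distances from node 1
    vector<vector<ll>> fromA(k);             // fromA[i][v] = dist(A[i], v)
    for (int i = 0; i < k; i++) fromA[i] = bfs(A[i]);

    int full = (1 << k) - 1;
    // dp[ma][mb][j]: shortest walk from node 1 that alternates A, B, A, B, ...,
    // has visited exactly the subsets ma of A and mb of B, and currently stands
    // on the j-th node of the set visited last (A if |ma| = |mb| + 1, B if |ma| = |mb|).
    vector<vector<vector<ll>>> dp(full + 1, vector<vector<ll>>(full + 1, vector<ll>(k, INF)));
    for (int j = 0; j < k; j++) dp[1 << j][0][j] = fromRoot[A[j]];  // first stop: some A[j]

    for (int ma = 1; ma <= full; ma++)
        for (int mb = 0; mb <= full; mb++)
            for (int j = 0; j < k; j++) {
                ll cur = dp[ma][mb][j];
                if (cur == INF) continue;
                int ca = __builtin_popcount(ma), cb = __builtin_popcount(mb);
                if (ca == cb + 1) {                 // standing on A[j]: go to an unvisited B[t]
                    for (int t = 0; t < k; t++) if (!((mb >> t) & 1))
                        dp[ma][mb | (1 << t)][t] =
                            min(dp[ma][mb | (1 << t)][t], cur + fromA[j][B[t]]);
                } else {                            // ca == cb: standing on B[j], go to an unvisited A[t]
                    for (int t = 0; t < k; t++) if (!((ma >> t) & 1))
                        dp[ma | (1 << t)][mb][t] =
                            min(dp[ma | (1 << t)][mb][t], cur + fromA[t][B[j]]);
                }
            }

    ll ans = INF;
    for (int j = 0; j < k; j++)                     // finish on some B[j], then walk back to node 1
        ans = min(ans, dp[full][full][j] + fromRoot[B[j]]);
    cout << ans << '\n';                            // the path itself: backtrack the dp transitions
}
```

One BFS from node $1$ and one from every node of $A$ is enough to obtain all the distances the DP needs for this subtask.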
To calculate the minimum path length, let's consider each edge separately and determine the minimum number of times it must be traversed. This edge divides the tree into two subtrees. Let $a$ be the number of nodes from set $A$ in the subtree not containing node $1$, and let $b$ be the number of nodes from set $B$ in the same subtree. If $a = b = 0$, this edge will never need to be traversed, and its contribution to the answer is $0$. Otherwise, we must traverse this edge at least $2 \cdot \max(|a - b|, 1)$ times. We claim that there always exists a construction that traverses each edge exactly this many times!
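For concreteness, a short sketch of this lower bound (the function name and the 0/1 membership arrays are invented): root the tree at node $1$ and, for every edge between a node $v \neq 1$ and its parent, count how many nodes of $A$ and $B$ lie in the subtree of $v$. For an unweighted tree the resulting sum is exactly the claimed minimum length.

```cpp
// Sketch of the per-edge lower bound. inA[v] / inB[v] are 0/1 membership flags,
// nodes are 1-indexed and the tree is rooted at node 1; the name is hypothetical.
#include <bits/stdc++.h>
using namespace std;

long long edgeLowerBound(int n, const vector<vector<int>>& adj,
                         const vector<int>& inA, const vector<int>& inB) {
    vector<int> par(n + 1, 0), order;           // order lists every node after its parent
    vector<bool> seen(n + 1, false);
    vector<int> st = {1};
    seen[1] = true;
    while (!st.empty()) {
        int u = st.back(); st.pop_back();
        order.push_back(u);
        for (int v : adj[u]) if (!seen[v]) {
            seen[v] = true;
            par[v] = u;
            st.push_back(v);
        }
    }
    vector<int> cntA(n + 1, 0), cntB(n + 1, 0);
    long long total = 0;
    for (int i = n - 1; i >= 0; i--) {          // children are processed before their parents
        int v = order[i];
        cntA[v] += inA[v], cntB[v] += inB[v];
        if (v == 1) continue;
        int a = cntA[v], b = cntB[v];           // A / B nodes below the edge (v, par[v])
        if (a + b > 0) total += 2LL * max(abs(a - b), 1);
        cntA[par[v]] += cntA[v], cntB[par[v]] += cntB[v];
    }
    return total;                               // equals the answer for an unweighted tree
}
```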
In the second subtask, the tree is a chain, and we can identify it with a sequence of length $n$. For an arbitrary sequence, let $pref_A(i)$ be the number of elements from set $A$ in the prefix of the sequence of length $i$. Similarly, let $pref_B(i)$ be the number of elements from set $B$ in the prefix of the sequence of length $i$.

Let's call a sequence of length $m$ positive if it satisfies $pref_A(m) = pref_B(m)$ and $pref_A(i) \geq pref_B(i)$ for all $i$. Analogously, let's call a sequence of length $m$ negative if it satisfies $pref_A(m) = pref_B(m)$ and $pref_A(i) \leq pref_B(i)$ for all $i$. The entire sequence can now be divided into consecutive positive and negative subsequences (you will best understand this with an example: writing $A$ and $B$ for the elements, $ABAB\,BABA\,AB$ splits into the positive block $ABAB$, the negative block $BABA$ and the positive block $AB$).

Observation. Any negative sequence in reverse order is positive.

We can traverse a positive sequence by pairing the $i$-th element of set $A$ with the $i$-th element of set $B$, in order of position from the beginning of the sequence.

First, we will traverse the entire sequence from left to right (from node $1$ to the last interesting node), visiting all positive subsequences. On the way back to node $1$, we will traverse all negative subsequences (which, due to the opposite direction, will effectively be positive). It is easy to verify that this achieves the minimum number of passes through each edge. The complexity of this solution is $O(n)$.
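To make the block decomposition concrete, here is a sketch for the chain (the interface is invented for the illustration): the interesting nodes are given in order of their position on the chain, labelled $+1$ for $A$ and $-1$ for $B$, node $1$ is assumed to be the left end of the chain, and the walk is assumed to start with a node from $A$. The function splits the sequence into maximal positive and negative blocks and returns the order in which the interesting nodes are visited.

```cpp
// Sketch of the block decomposition for the chain subtask (hypothetical interface).
// seq holds the interesting nodes ordered by position on the chain as
// {position, +1 if in A, -1 if in B}; node 1 is assumed to be the left end.
#include <bits/stdc++.h>
using namespace std;

vector<int> visitOrder(const vector<pair<int,int>>& seq) {
    int m = seq.size();
    vector<int> out;                         // visits on the way out (left to right)
    vector<vector<int>> negBlocks;           // per-block visits saved for the way back
    int i = 0;
    while (i < m) {
        int j = i, sum = 0;
        do { sum += seq[j].second; j++; } while (sum != 0);   // one maximal block [i, j)
        vector<int> as, bs;                  // positions of the A / B nodes in the block
        for (int t = i; t < j; t++) (seq[t].second == +1 ? as : bs).push_back(seq[t].first);
        if (seq[i].second == +1) {           // positive block: pair the i-th A with the i-th B
            for (size_t p = 0; p < as.size(); p++) out.push_back(as[p]), out.push_back(bs[p]);
        } else {                             // negative block: read from the right it is positive
            vector<int> blk;
            for (size_t p = 0; p < as.size(); p++)
                blk.push_back(as[as.size() - 1 - p]), blk.push_back(bs[bs.size() - 1 - p]);
            negBlocks.push_back(blk);
        }
        i = j;
    }
    for (int b = (int)negBlocks.size() - 1; b >= 0; b--)      // the way back runs right to left
        out.insert(out.end(), negBlocks[b].begin(), negBlocks[b].end());
    return out;                              // visit these positions in order, then return to node 1
}
```

For the example above, the blocks $ABAB$ and $AB$ are served on the way out, while $BABA$ is served on the way back and, read from the right, is visited as $A, B, A, B$.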
For an arbitrary tree, rooted at node $1$, similarly to the second subtask let $cnt_A(x)$ be the number of elements from set $A$ in the subtree rooted at node $x$ (including the node itself), and let $cnt_B(x)$ be the number of elements from set $B$ in the subtree rooted at node $x$. Let's call the subtree rooted at node $x$ positive if $cnt_A(x) \geq cnt_B(x)$, and negative if $cnt_A(x) \leq cnt_B(x)$. Note that a subtree without interesting nodes is both positive and negative. We will never need to traverse such subtrees, so this case is ignored in the following text.

We solve the third subtask by traversing the tree node by node. We say we are in transition if the last interesting node we visited belonged to set $A$ (and now we are on the way to a node from set $B$). Otherwise, we say we are not in transition.
Let's assume we are currently at node $x$. We propose the following naive algorithm:
- If we are not in transition and $x \in A$, add $x$ to the solution path and remove $x$ from $A$.
- If we are in transition and $x \in B$, add $x$ to the solution path and remove $x$ from $B$.
- If we are not in transition and there is a child $y$ of node $x$ whose subtree is positive, move to $y$.
- If we are in transition and there is a child $y$ of node $x$ whose subtree is negative, move to $y$.
- If none of these conditions is satisfied, go to the parent of node $x$ (or terminate the algorithm if we are at node $1$).
There are many incorrect algorithms, but the key idea is to ensure that if a subtree has elements from both sets, we pair them within that subtree (without unnecessarily leaving and re-entering the subtree). This ensures that we traverse each edge the minimum required number of times.
Each time we change the set $A$ or $B$, we need to recalculate $cnt_A(x)$ and $cnt_B(x)$ for each node $x$. This can be done in $O(n)$, and we will change the sets $O(k)$ times. Therefore, the overall complexity of this algorithm is $O(nk)$.
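A rough sketch of this naive walk, with invented helper names and assuming the adjacency list and the 0/1 membership arrays have already been filled in, could look as follows; it records every node it steps on, and pairs the nodes of $A$ and $B$ exactly according to the rules listed above.

```cpp
// Sketch of the naive walk (invented helper names). adj, inA and inB are assumed
// to be filled in beforehand; inA[v] / inB[v] are 0/1 flags for unvisited A / B nodes.
#include <bits/stdc++.h>
using namespace std;

int n;
vector<vector<int>> adj;                     // 1-indexed adjacency list of the tree
vector<int> par, inA, inB, cntA, cntB;
vector<int> walkOrder;                       // every node we step on, in order

void recount(int u) {                        // O(n): unvisited A / B nodes in every subtree
    cntA[u] = inA[u], cntB[u] = inB[u];
    for (int v : adj[u]) if (v != par[u]) {
        par[v] = u;
        recount(v);
        cntA[u] += cntA[v], cntB[u] += cntB[v];
    }
}

void solveNaive() {
    par.assign(n + 1, 0), cntA.assign(n + 1, 0), cntB.assign(n + 1, 0);
    recount(1);
    int u = 1;
    bool transition = false;                 // true = the last interesting node was from A
    walkOrder.push_back(1);
    while (true) {
        if (!transition && inA[u]) { inA[u] = 0; transition = true;  recount(1); continue; }
        if ( transition && inB[u]) { inB[u] = 0; transition = false; recount(1); continue; }
        int next = -1;
        for (int v : adj[u]) if (v != par[u] && cntA[v] + cntB[v] > 0) {
            if (!transition && cntA[v] >= cntB[v]) { next = v; break; }   // positive subtree
            if ( transition && cntB[v] >= cntA[v]) { next = v; break; }   // negative subtree
        }
        if (next != -1) { u = next; walkOrder.push_back(u); continue; }
        if (u == 1) break;                   // nothing left anywhere: we are done
        u = par[u], walkOrder.push_back(u);  // nothing left below: climb to the parent
    }
    // walkOrder starts and ends at node 1; its number of edges is the path length.
}
```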
In the last subtask, we cannot go node by node; instead, we need a better algorithm to determine the order of visiting the interesting locations. For each node, we will maintain the optimal order of visiting the interesting locations in its subtree, together with sets $S_A \subseteq A$ and $S_B \subseteq B$ of locations that need to be paired with some locations outside the subtree of that node.

Observation. We will never have both $S_A \neq \emptyset$ and $S_B \neq \emptyset$ for any node.

Otherwise, with the same intuition as in the previous subtask, we would want to pair elements from different sets within the same subtree (to avoid the need for an additional traversal of the edge leading outside the subtree). Specifically, for a positive subtree we will have $S_B = \emptyset$, and for a negative one $S_A = \emptyset$.
First, we will calculate the required values for each child of node $x$, and then we will merge the traversals of the children into one traversal. Let's first assume that each child has an empty traversal. We will then traverse from a positive to a negative subtree and vice versa, alternately visiting nodes from set $A$ and set $B$ (until one of these sets becomes empty). If some child $y$ has a non-empty traversal, and for it $S_A = S_B = \emptyset$, it is sufficient to add the traversal of its subtree to the beginning or end of the overall traversal for node $x$.
The remaining case is when a child for which $S_A \neq \emptyset$ or $S_B \neq \emptyset$ holds has a non-empty traversal. Now
we need to choose the moment when we will traverse its subtree. However, instead of choosing this when
merging multiple subtrees into a larger one, during the alternating traversal of subtrees, we can "attach"
the obtained path to some unvisited node. In the future, when passing through that node, we will add
the attached traversal and continue the algorithm. The added traversal will leave us in the same subtree,
so the overall path will not be unnecessarily extended. This way, we bypass several cases in which our
algorithm repeats an edge more times than necessary.
There are multiple ways to write the solution, and to avoid a lengthy implementation, you can find a solution using a stack and a linked list in the provided code. The overall complexity of the algorithm is $O(n \log n)$.
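Regarding the linked list: its relevant property here is that one traversal can be spliced into another in $O(1)$, without copying nodes, which is what keeps the merging and "attaching" of child traversals cheap. A tiny illustration with std::list (not the official code):

```cpp
// Tiny illustration of O(1) splicing of traversals with std::list.
#include <bits/stdc++.h>
using namespace std;

int main() {
    list<int> traversal = {5, 9, 2, 7};      // traversal built so far for the current subtree
    list<int> childTraversal = {4, 8};       // finished traversal of some child

    // Append the child's traversal to the end in O(1) (childTraversal becomes empty).
    traversal.splice(traversal.end(), childTraversal);

    // "Attach" another traversal right after a chosen node (here: after the value 2).
    list<int> attached = {6, 3};
    auto it = find(traversal.begin(), traversal.end(), 2);
    traversal.splice(next(it), attached);    // also O(1)

    for (int v : traversal) cout << v << ' ';   // prints: 5 9 2 6 3 7 4 8
    cout << '\n';
}
```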