Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.
Author: zhouzixiang2004
Subtask 1
Hint
Which sequences of $N$ numbers can be the degree sequence of a tree with $N$ nodes?
Solution
For subtask 1, note that any sequence of integers $d_1, d_2, \dots, d_N$ with $d_i \ge 1$ for all $i$ and $d_1 + d_2 + \dots + d_N = 2N - 2$ can be the degree sequence of a tree with $N$ nodes.
Proof
Clearly, these conditions are necessary. To prove they are sufficient, induct on $N$, with the base case $N = 2$ being clear. As $d_1 + d_2 + \dots + d_N = 2N - 2$, there exist indices $i$ and $j$ with $d_i = 1$ (otherwise the sum would be at least $2N$) and $d_j \ge 2$ (otherwise the sum would be only $N < 2N - 2$ for $N \ge 3$), so we can subtract $1$ from $d_j$, delete $d_i$, inductively construct a tree for the remaining sequence, and attach a leaf labelled $i$ to the node labelled $j$.
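For concreteness, here is a short sketch (not the model solution) of this construction: it repeatedly attaches a remaining degree-$1$ node to a node of remaining degree at least $2$, exactly as in the induction above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Given degrees d[0..n-1] with d[i] >= 1 and sum d[i] = 2n - 2 (n >= 2),
// return the edges of a tree whose i-th node has degree d[i].
vector<pair<int, int>> build_tree(vector<int> d) {
    int n = d.size();
    vector<int> leaves, big;   // nodes needing exactly one / at least two more edges
    for (int i = 0; i < n; i++) (d[i] == 1 ? leaves : big).push_back(i);
    vector<pair<int, int>> edges;
    // Induction step: attach a degree-1 node to some node of remaining degree >= 2.
    while (!big.empty()) {
        int i = leaves.back(); leaves.pop_back();
        int j = big.back();
        edges.push_back({i, j});
        if (--d[j] == 1) { big.pop_back(); leaves.push_back(j); }
    }
    // Base case: exactly two degree-1 nodes remain; join them.
    edges.push_back({leaves[0], leaves[1]});
    return edges;
}

int main() {
    // Example: degrees {1, 3, 1, 2, 1} sum to 8 = 2*5 - 2.
    for (auto [u, v] : build_tree({1, 3, 1, 2, 1}))
        cout << u << " " << v << "\n";
}
```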
We can now perform a 2D knapsack-style dynamic programming, where the state is (number of nodes whose degree has been decided, total degree assigned so far), storing the minimum cost of such a partial assignment, with one transition for each possible degree of the next node at each state.
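As an illustration of this DP, the sketch below assumes the cost being minimized is the sum of a given per-degree cost $c_d$ over all nodes, with degrees bounded by some $D$; this cost model is an assumption made for the sketch, not a restatement of the problem.

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e18;

// dp[i][j] = minimum cost after assigning degrees to i nodes with total degree j.
// Assumed cost model (not taken from the statement): a node of degree d costs c[d].
long long subtask1(int N, const vector<long long>& c) {  // c[1..D]
    int D = c.size() - 1, cap = 2 * N - 2;
    vector<vector<long long>> dp(N + 1, vector<long long>(cap + 1, INF));
    dp[0][0] = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j <= cap; j++) {
            if (dp[i][j] == INF) continue;
            for (int d = 1; d <= D && j + d <= cap; d++)   // degree of node i + 1
                dp[i + 1][j + d] = min(dp[i + 1][j + d], dp[i][j] + c[d]);
        }
    return dp[N][cap];   // degrees of all N nodes must sum to 2N - 2
}
```

This sketch runs in time proportional to $N^2 D$.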
Time Complexity: 
Subtask 2
Hint
The subtask 1 solution performs the same transitions $\mathcal{O}(N)$ times. There is a technique, similar to fast exponentiation, that reuses previously computed results so that we only need to perform the transitions $\mathcal{O}(\log N)$ times.
Solution
The subtask 1 solution transitions only from the array of answers for $i$ nodes to the array for $i + 1$ nodes, by looping through all possible degrees of the new node. However, we can also transition directly from the array for $i$ nodes to the array for $2i$ nodes, by looping through all ways to split the total degree between the two halves (a min-plus convolution of the array with itself). This allows us to compute the array for $N$ nodes using only $\mathcal{O}(\log N)$ transitions.
This is a doubling trick: if we view the transitions as performing min-plus convolutions (so that the array for $N$ nodes is the $N$-th min-plus convolution power of the array for a single node), the similarity to fast exponentiation becomes clear.
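Under the same assumed per-degree cost model as in the subtask 1 sketch, the doubling can be written as fast exponentiation over min-plus convolutions.

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e18;

// Min-plus convolution of two arrays, truncated to indices 0..cap.
vector<long long> minplus(const vector<long long>& a, const vector<long long>& b, int cap) {
    vector<long long> res(cap + 1, INF);
    for (int i = 0; i <= cap; i++) {
        if (a[i] == INF) continue;
        for (int j = 0; i + j <= cap; j++)
            if (b[j] != INF) res[i + j] = min(res[i + j], a[i] + b[j]);
    }
    return res;
}

// Assumed cost model: a node of degree d costs c[d], 1 <= d <= D.
long long subtask2(int N, const vector<long long>& c) {
    int cap = 2 * N - 2;
    vector<long long> one(cap + 1, INF), res(cap + 1, INF);
    for (int d = 1; d < (int)c.size() && d <= cap; d++) one[d] = c[d];  // array for 1 node
    res[0] = 0;                                    // identity for min-plus convolution
    for (int e = N; e > 0; e >>= 1) {              // fast exponentiation on convolutions
        if (e & 1) res = minplus(res, one, cap);
        if (e > 1) one = minplus(one, one, cap);
    }
    return res[cap];
}
```

Each convolution takes quadratic time in the array length, and only $\mathcal{O}(\log N)$ of them are needed.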
Time Complexity: 
Subtask 3
Hint
Find a dynamic programming approach that only uses $\mathcal{O}(N)$ states.
Hint 2
The subtask 1 approach may be misleading. Try to find a recurrence that works more directly with the tree structure.
Solution
Consider a group of $k$ leaves with the same parent, where the parent has no other children. If we delete all of these leaves, we have a smaller tree, and the degree of the parent node, which was $k + 1$, gets replaced by a degree of $1$ (it becomes a leaf of the smaller tree). This leads to a new dynamic programming formulation, where we let $dp_n$ be the minimum cost of a tree with $n$ nodes, with one transition for each possible group size $k$.
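A sketch of this recurrence, with the per-group cost abstracted into an array $w$ (here $w_k$ stands for the cost contributed by a group of $k$ leaves; its exact formula depends on the problem's cost function and is not reproduced here):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e18;

// dp[n] = minimum cost of a tree with n nodes, where w[k] (1 <= k <= K) is the
// cost of attaching a group of k leaves as described above.  The base case is
// left schematic since it depends on the problem's cost function.
vector<long long> subtask3(int N, const vector<long long>& w) {
    int K = w.size() - 1;
    vector<long long> dp(N + 1, INF);
    dp[1] = 0;                                     // placeholder base case
    for (int n = 2; n <= N; n++)
        for (int k = 1; k <= K && k < n; k++)      // last group of k deleted leaves
            if (dp[n - k] != INF)
                dp[n] = min(dp[n], dp[n - k] + w[k]);
    return dp;
}
```

This is the plain quadratic-style form; the later subtasks speed it up.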
Time Complexity: 
Subtask 4
Hint
View the subtask 3 solution as an instance of unbounded knapsack with one item per possible group size. When $N$ is really large, what items should we take?
Solution
We can write the subtask 3 recurrence in the form $dp_n = \min_k \left( dp_{n-k} + w_k \right)$, where $w_k$ is the cost contributed by a group of $k$ leaves, which is an instance of unbounded knapsack.
Intuitively, when $N$ is really large, we should repeatedly take the item $k^*$ minimizing $w_{k^*}/k^*$, as this minimizes the cost per unit of "weight." In fact, this is true whenever $n$ is at least roughly $K^2$, where $K$ is the largest group size.
Proof
Suppose $n$ is at least this bound and that some optimal solution never uses the minimal item $k^*$. As each item occupies a "weight" of at most $K$, we must have used at least $k^*$ items, say $k_1, k_2, \dots, k_m$, to form $n$ (possibly with duplicates). It is well known that in any set of $t$ numbers, some nonempty subset of them sums to a multiple of $t$ (proof: pigeonhole on the $t + 1$ prefix sums modulo $t$ and take the difference of two equal ones). Applying this lemma to $k^*$ of the $k_i$, some nonempty subset of them sums to a multiple of $k^*$. We can replace this subset with copies of item $k^*$, which doesn't make the result worse since $w_{k^*}/k^*$ is minimal, giving a contradiction.
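The lemma used here is constructive; below is a small standalone sketch that finds such a subset via prefix sums.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Among any n integers, some nonempty consecutive block (in the given order)
// has sum divisible by n: two of the n + 1 prefix sums agree modulo n, and the
// elements strictly between those two prefixes form the desired subset.
pair<int, int> zero_sum_block(const vector<long long>& a) {  // returns half-open [l, r)
    int n = a.size();
    vector<int> seen(n, -1);   // seen[v] = length of a prefix whose sum is v (mod n)
    seen[0] = 0;               // the empty prefix
    long long pref = 0;
    for (int i = 0; i < n; i++) {
        pref = ((pref + a[i]) % n + n) % n;
        if (seen[pref] != -1) return {seen[pref], i + 1};
        seen[pref] = i + 1;
    }
    return {-1, -1};           // unreachable by pigeonhole
}

int main() {
    auto [l, r] = zero_sum_block({3, 1, 4, 1, 5});   // n = 5
    cout << l << " " << r << "\n";                   // block {1, 4} has sum 5
}
```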
How tight is this bound?
It is tight up to terms linear in $K$. We may have a test case where $K$ and $K - 1$ are the only group sizes whose costs aren't too big to be useful, with $w_K/K$ the smaller of the two ratios. If the capacity of the knapsack is congruent to $1$ modulo $K$, then we must use at least $K - 1$ copies of item $K - 1$ to fully fill the knapsack, occupying a total space of $(K - 1)^2 = K^2 - 2K + 1$.
Thus we can do the dynamic programming up to roughly $K^2$ and do some math to simulate repeatedly using item $k^*$ after that.
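Continuing the abstract-item sketch from subtask 3, this looks roughly as follows (the threshold constant is chosen loosely for illustration and may differ from the intended solution):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e18;

// w[k] (1 <= k <= K) is the cost of a group of k leaves, as in the earlier sketch.
long long subtask4(long long N, const vector<long long>& w) {
    int K = w.size() - 1;
    long long T = min(N, (long long)K * K + K);    // DP threshold, roughly K^2
    vector<long long> dp(T + 1, INF);
    dp[1] = 0;                                     // placeholder base case, as before
    for (long long n = 2; n <= T; n++)
        for (int k = 1; k <= K && k < n; k++)
            if (dp[n - k] != INF) dp[n] = min(dp[n], dp[n - k] + w[k]);
    if (N <= T) return dp[N];

    int best = 1;                                  // item k* minimizing w[k] / k
    for (int k = 2; k <= K; k++)
        if ((__int128)w[k] * best < (__int128)w[best] * k) best = k;

    // Beyond the threshold an optimal solution always uses item k*, so
    // dp[n] = dp[n - best] + w[best]; peel off t copies of k* in one step.
    long long t = (N - T + best - 1) / best;       // smallest t with N - t * best <= T
    return dp[N - t * best] + t * w[best];
}
```

The final index $N - t \cdot k^*$ lands in the interval $(T - k^*, T]$, so it is covered by the precomputed table.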
Time Complexity: 
Subtask 5
Hint
Recall the subtask 2 solution. Can we apply a similar doubling idea?
Hint 2
It suffices to know the values of $dp$ near $\frac{n}{2}$ to calculate $dp_n$.
Solution
Note that it suffices to know the values $dp_m$ for $m$ within $\frac{K}{2}$ of $\frac{n}{2}$ to calculate $dp_n$, as we can partition the items used in forming $dp_n$ into two subsets with sums between $\frac{n - K}{2}$ and $\frac{n + K}{2}$. If we were to compute $dp_N$ recursively in this way, then the only states we visit are within $\mathcal{O}(K)$ of $\frac{N}{2^t}$ for some $t$, and it follows that there are at most $\mathcal{O}(K \log N)$ states, with $\mathcal{O}(K)$ transitions each. Implementing this idea iteratively (in descending order of $t$, i.e. from the smallest relevant values of $n$ upwards) gives an $\mathcal{O}(K^2 \log N)$ solution.
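A sketch of this idea, written recursively with memoization instead of the iterative order described above (same states and transitions), again over abstract item costs $w_1, \dots, w_K$:

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e18;

int K;                                     // largest group size
vector<long long> w;                       // w[k] for 1 <= k <= K
vector<long long> small_dp;                // exact DP for n <= 2K
unordered_map<long long, long long> memo;

// For n > 2K, any multiset of items summing to n can be split into two parts
// whose sums differ by at most K, so it suffices to try splits i + (n - i) = n
// with both parts within K/2 of n/2.
long long dp(long long n) {
    if (n <= 2 * K) return small_dp[n];
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;
    long long best = INF;
    for (long long i = (n - K + 1) / 2; 2 * i <= n; i++) {
        long long a = dp(i), b = dp(n - i);
        if (a < INF && b < INF) best = min(best, a + b);
    }
    return memo[n] = best;
}

long long subtask5(long long N, const vector<long long>& items) {
    w = items;
    K = (int)w.size() - 1;
    small_dp.assign(2 * K + 1, INF);
    small_dp[1] = 0;                       // placeholder base case, as before
    for (int n = 2; n <= 2 * K; n++)
        for (int k = 1; k <= K && k < n; k++)
            if (small_dp[n - k] < INF)
                small_dp[n] = min(small_dp[n], small_dp[n - k] + w[k]);
    memo.clear();
    return dp(N);
}
```

Only $\mathcal{O}(K)$ distinct values near $\frac{N}{2^t}$ get memoized at each level $t$, matching the state count above.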
A similar idea, with a flavour closer to the doubling trick from subtask 2, is described here (Knapsack Speedup 3).
Time Complexity: 
Alternative Solution
Trying to extend the idea from subtask 4, it is natural to wonder: does it suffice to only consider the few items with the smallest $w_k/k$? It turns out that if we only consider a bounded number of the best items (by this ratio) when calculating each $dp_n$, then this solution is provably correct.
Proof
Suppose this is false for the first time at some $dp_n$. Let $T$ be the set of best items (those with the smallest $w_k/k$) that we keep, and let $k_1, k_2, \dots, k_m$ be the items used to form $dp_n$ in an optimal solution. The main idea is that we will find a nonempty subset of $k_1, \dots, k_m$ whose sum can be represented as a linear combination of the elements of $T$, each coefficient being positive. Towards this, note that:
- A theorem of Erdős and Graham (1972) on Frobenius numbers states that for any sequence of positive integers $a_1 < a_2 < \dots < a_t$ with $\gcd(a_1, \dots, a_t) = 1$, every sufficiently large number (larger than a bound on the order of $a_t^2/t$) can be represented as a linear combination of $a_1, \dots, a_t$ with all positive coefficients. We can extend this theorem to the case when $\gcd(a_1, \dots, a_t) = g > 1$ by applying the theorem to $a_1/g, \dots, a_t/g$, giving the result that every sufficiently large multiple of $g$ can be represented in this way.
- To apply this theorem, we need to first bound $g$, the greatest common divisor of the elements of $T$. Sort the elements of $T$ in increasing order of value. As they are all between $1$ and $K$, some difference of adjacent terms is at most roughly $K/|T|$, and since $g$ divides every such difference, it follows that $g$ is at most roughly $K/|T|$.
- Now we need to find a subset of $k_1, \dots, k_m$ with a large sum that is a multiple of $g$. Recall the lemma that every set of $g$ numbers has a nonempty subset that sums to a multiple of $g$. We can repeatedly use this lemma to extract a subset of unused elements of $k_1, \dots, k_m$ summing to a multiple of $g$, as long as the sum of the remaining unused elements is at least $gK$ (which guarantees that at least $g$ of them remain). This way, we can find a subset of $k_1, \dots, k_m$ that sums to a multiple of $g$ and is at least $n - gK$.
- To apply the theorem to $T$ and the sum of elements in this subset, we need to check that this sum exceeds the threshold from the theorem, which follows from $n$ being sufficiently large, as needed.
Now replacing this subset with copies of items from $T$ as needed gives the contradiction.
Time Complexity: 
Comments