Editorial for BPC 1 S5 - Temple
Submitting an official solution before solving the problem yourself is a bannable offence.
Author:
Subtask 1
Picking up a treasure essentially consists of going to the node on the shortest path, increasing the weight of each edge by
Let
Time Complexity:
Subtask 2
For this subtask, we need to be able to BFS efficiently on the complement of a graph.
We can modify normal BFS to do this.
We keep a list of all nodes that were not yet visited.
Then, whenever we pop a node from our BFS queue, we iterate over all unvisited nodes and check whether they have an edge to the current node.
If not, we remove them from the list of unvisited nodes and add them to the BFS queue as usual.
This has good time complexity because each check either removes a node from the list, which happens at most once per node, or finds an edge of the input graph between the two nodes, which happens at most twice per edge since every node is popped from the queue at most once.
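As an illustration, here is a minimal C++ sketch of this complement BFS. It assumes the edges of the given graph (exactly the edges missing from its complement) are stored in `missing` as one hash set per node so that an edge check is expected constant time; all names are illustrative and not taken from the model solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS on the complement of a graph.  `missing[v]` holds the neighbours of v
// in the *given* graph, i.e. exactly the nodes v is NOT adjacent to in the
// complement.  Returns distances from `source`, with -1 for unreachable.
vector<int> complement_bfs(int n, int source, const vector<unordered_set<int>>& missing) {
    vector<int> dist(n, -1);
    list<int> unvisited;                 // nodes we have not reached yet
    for (int v = 0; v < n; v++)
        if (v != source) unvisited.push_back(v);

    queue<int> q;
    dist[source] = 0;
    q.push(source);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (auto it = unvisited.begin(); it != unvisited.end(); ) {
            int v = *it;
            if (missing[u].count(v)) {
                ++it;                    // given edge u-v: no complement edge, v stays
            } else {
                dist[v] = dist[u] + 1;   // complement edge u-v: visit v
                q.push(v);
                it = unvisited.erase(it);
            }
        }
    }
    return dist;
}
```

Using a linked list for the unvisited nodes makes removal during iteration constant time; a plain vector with swap-and-pop works just as well.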
Time Complexity:
Subtask 3
Before moving on to the actual solution, it is useful to quantify some properties of the worst-case distances of nodes.
First, all nodes will have distance at most proportional to
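For intuition, here is one standard way to make such a bound precise; we assume here that M denotes the number of given edges (the pairs absent from the almost complete graph), which is our own notation rather than the problem's.

```latex
% A node v at complement-distance k >= 2 from the source has no complement
% edge to any node at distance <= k-2, so in the given graph v is adjacent
% to every one of those nodes; in particular \deg(v) \ge k - 1.
% Taking one node from each BFS layer 0, 1, ..., D and summing their degrees:
\frac{D(D-1)}{2} \;=\; \sum_{k=2}^{D} (k-1) \;\le\; \sum_{v} \deg(v) \;=\; 2M
\quad\Longrightarrow\quad D \;=\; O\!\left(\sqrt{M}\right).
```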
For this subtask, we need to dynamically maintain the shortest distance to each node. We will process the queries in reverse order since that is more convenient for this solution. This means that each query will add an edge to our almost complete graph instead of removing one. This will only ever decrease the distances of nodes.
Whenever we add an edge and decrease the distance of a node, we can effectively start a mini-BFS at that node to determine which other nodes need to have their distance values reduced.
To make it more efficient than the normal BFS on a complement graph as outlined in subtask 2, we group the nodes by their current distance and, when relaxing a node, only scan the nodes whose stored distance could still decrease, i.e. those in buckets of strictly larger distance.
This solution can be implemented efficiently with dynamically sized arrays or linked lists for storing the list of nodes for each distance.
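Below is a rough C++ sketch of this bookkeeping, under assumptions that are ours rather than the model solution's: the currently deleted edges are kept as one hash set per node, unreachable nodes sit at a sentinel distance n, and the initial distances are computed beforehand with the complement BFS from subtask 2.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of maintaining complement-graph distances while edges are added
// back (queries processed in reverse).  `missing[v]` holds the nodes u such
// that edge (v, u) is currently absent from the almost complete graph,
// `dist[v]` is the current distance from the fixed source (n = unreachable),
// and `bucket[d]` stores the nodes whose current distance is d.
struct DynamicComplementDistances {
    int n;
    vector<unordered_set<int>> missing;
    vector<int> dist;
    vector<list<int>> bucket;           // bucket[d] = nodes at distance d
    vector<list<int>::iterator> where;  // each node's position in its bucket

    // `initial_dist` should come from a complement BFS (subtask 2) on the
    // graph as it looks after all deletions have been applied.
    DynamicComplementDistances(vector<unordered_set<int>> missing_, vector<int> initial_dist)
        : n((int)initial_dist.size()), missing(move(missing_)),
          dist(move(initial_dist)), bucket(n + 1), where(n) {
        for (int v = 0; v < n; v++)
            where[v] = bucket[dist[v]].insert(bucket[dist[v]].end(), v);
    }

    void move_to(int v, int d) {        // move v into the bucket for distance d
        bucket[dist[v]].erase(where[v]);
        dist[v] = d;
        where[v] = bucket[d].insert(bucket[d].end(), v);
    }

    // One reversed query: edge (a, b) reappears in the almost complete graph.
    void add_edge(int a, int b) {
        missing[a].erase(b);
        missing[b].erase(a);
        queue<int> q;
        if (dist[a] + 1 < dist[b])      { move_to(b, dist[a] + 1); q.push(b); }
        else if (dist[b] + 1 < dist[a]) { move_to(a, dist[b] + 1); q.push(a); }
        // Mini-BFS: only nodes in strictly higher buckets can still improve.
        while (!q.empty()) {
            int x = q.front(); q.pop();
            // For brevity this scans every higher bucket; a tuned version
            // would stop at the largest occupied finite distance and keep
            // the unreachable nodes in a separate list.
            for (int d = dist[x] + 2; d <= n; d++) {
                for (auto it = bucket[d].begin(); it != bucket[d].end(); ) {
                    int y = *it;
                    ++it;               // advance first: y may leave this bucket
                    if (!missing[x].count(y)) {   // x-y is a complement edge
                        move_to(y, dist[x] + 1);
                        q.push(y);
                    }
                }
            }
        }
    }
};
```

Storing each node's iterator into its bucket keeps both the removal and the re-insertion constant time whenever a distance decreases.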
The constant factor of this solution isn't bad in practice, since the worst-case analysis relies on several conditions that are hard to satisfy simultaneously.
For example, whenever we try to reduce the distance of a node and fail because there is a given edge between it and the node we are relaxing from, the unaffected node must itself be incident to many given edges in order to still have a large distance, which limits how often such wasted checks can happen.
However, we can prove a lower bound for the solution that matches this upper bound by considering a graph where the number of nodes is proportional to
Time Complexity:
Subtask 4
The solution to the final subtask is much the same as that of the third.
The only difference is that we need to add some data structure to calculate the total cost given the
Time Complexity: