Problem-dependent reparameterization of a knapsack problem for asymptotic efficiency

Consider the following problem:

We want to schedule some long-running tasks on two servers, a paid one and a free one. A task can run on the free server only while some task is running on the paid server. The ith task takes time[i] units of time and costs cost[i] dollars to complete on the paid server, but the free server is magical: any task completes there at zero cost in a single unit of time. Find the minimum cost to complete all the tasks.

We can formalize this problem by letting \(b_i \in \{0, 1\}\) denote whether task i is assigned to the paid server. Let \(b \in \{0, 1\}^n\) be the vector collecting all the \(b_i\). Then the cost paid is \(c^T b\), and the constraint is \(t^T b \ge n - 1^T b\): the time spent on the paid server must be at least the number of tasks left for the free server, \(\sum_{i=1}^n (1 - b_i) = n - 1^T b\), since the free server finishes one task per unit of time. So the problem to solve is

\( \min\, c^T b \ \text{ such that }\ (t+1)^T b \ge n,\ b \in \{0, 1\}^n \qquad\qquad (1)\)
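As a baseline, here is a brute-force oracle for (1): it enumerates all \(2^n\) assignment vectors, so it is only usable for tiny \(n\), but it is handy for sanity-checking the faster solvers below (bruteForceMinCost is a name introduced here, not from the original):

from itertools import product

def bruteForceMinCost(cost, time):
    n = len(cost)
    best = float('inf')
    for b in product([0, 1], repeat=n):  # every b in {0, 1}^n
        # feasibility: (t + 1)^T b >= n
        if sum((t + 1) * bi for t, bi in zip(time, b)) >= n:
            best = min(best, sum(c * bi for c, bi in zip(cost, b)))
    return best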

Now this problem is closely related to the 0-1 knapsack problem. Let \(b' = 1 - b\); then we are trying to find

\(\displaystyle\begin{aligned} &\min\, c^T(1 - b') \text{ such that } (t+1)^T (1 - b') \ge n,\ b' \in \{0, 1\}^n\\ \equiv\ & c^T 1 - \left(\max\, c^T b' \text{ such that } (t + 1)^T b' \le t^T 1,\ b' \in \{0, 1\}^n\right), \end{aligned}\)

where the capacity follows from \((t+1)^T(1 - b') \ge n \iff (t+1)^T b' \le (t+1)^T 1 - n = t^T 1\).
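The maximization on the right is exactly a 0-1 knapsack: items with values \(c_i\), weights \(t_i + 1\), and capacity \(t^T 1\). Here is a minimal sketch of this generic route, assuming a textbook \(\mathscr{O}(nW)\) table (knapsack_01 and getMinCostViaKnapsack are illustrative names, not from any library):

def knapsack_01(values, weights, W):
    # best[w] = maximum value achievable with total weight <= w
    best = [0] * (W + 1)
    for v, wt in zip(values, weights):
        for w in range(W, wt - 1, -1):  # iterate downward: 0/1 items, not unbounded
            best[w] = max(best[w], best[w - wt] + v)
    return best[W]

def getMinCostViaKnapsack(cost, time):
    W = sum(time)                    # capacity t^T 1
    weights = [t + 1 for t in time]  # item weights t_i + 1
    return sum(cost) - knapsack_01(cost, weights, W)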

So one could just feed a standard knapsack solver the inputs (cost vector, time vector + 1, capacity \(t^T 1\)), as sketched above. But DP algorithms for knapsack run in pseudo-polynomial time \(\mathscr{O}(nW)\), and here \(W = t^T 1\) is the total time required, which can be far larger than \(n\). In the original problem (1), however, the right-hand side of the constraint is exactly \(n\), so the requirement can be capped at \(n\); a DP written specifically for this use case runs in \(\mathscr{O}(n^2)\)!

So we can see that reformulating/reducing an optimization problem to fit a standard formulation can change the asymptotic runtime of the solution: the generic reduction here costs \(\mathscr{O}(nW)\), while a formulation that keeps the problem-specific structure of (1) costs only \(\mathscr{O}(n^2)\).
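Concretely, let \(P(i, k)\) be the minimum cost of choosing, among the first \(i\) tasks, a subset whose weights \(t_j + 1\) sum to at least \(k\). Each task is either skipped or taken:

\( P(i, k) = \min\{\, P(i-1, k),\ P(i-1,\ k - t_i - 1) + c_i \,\}, \qquad P(i, k) = 0 \ \text{ for } k \le 0, \)

and the answer to (1) is \(P(n, n)\). Because the requirement \(k\) never needs to exceed \(n\), the table has \(\mathscr{O}(n^2)\) entries, each computed in constant time. The memoized implementation below follows this recurrence directly.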

def P(i, k, cost, time, table):
    """Minimum cost of choosing, among the first i tasks (1-indexed), a subset
    whose weights time[j] + 1 sum to at least k; memoized in table."""
    if (i, k) in table:
        return table[i, k]
    if k <= 0:
        ret = 0  # requirement already satisfied, nothing more to pay
    elif i == 1:
        # Only task 1 remains: take it if its weight covers k, else infeasible.
        ret = float('inf') if time[0] + 1 < k else cost[0]
    else:
        # Either skip task i, or take it: pay cost[i-1] and reduce the
        # remaining requirement by its weight time[i-1] + 1.
        ret = min(P(i - 1, k, cost, time, table),
                  P(i - 1, k - time[i - 1] - 1, cost, time, table) + cost[i - 1])
    table[i, k] = ret
    return ret

def getMinCost(cost, time):
    table = {}
    n = len(cost)
    # Fill the memo table bottom-up in i so no call recurses deeply;
    # k only ever needs to range over 0..n.
    for i in range(1, n + 1):
        for k in range(n + 1):
            P(i, k, cost, time, table)
    # The answer to (1): cover a requirement of n using all n tasks.
    return P(n, n, cost, time, table)
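
As a quick check, here is a small made-up instance (the numbers are illustrative, not from the original problem statement):

# With time = [1, 2, 3, 2] and cost = [1, 2, 3, 2] we have n = 4; running the
# third task alone on the paid server gives weight 3 + 1 = 4 >= n, so the
# optimal cost is 3. All three solvers agree:
print(bruteForceMinCost([1, 2, 3, 2], [1, 2, 3, 2]))      # -> 3
print(getMinCostViaKnapsack([1, 2, 3, 2], [1, 2, 3, 2]))  # -> 3
print(getMinCost([1, 2, 3, 2], [1, 2, 3, 2]))             # -> 3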