
Problem set

Given $k\in\mathbb{N}$ (a positive integer), I want to find the minimum sum over $k$ distinct pairs $\left(a_{i},b_{i}\right)\in\mathbb{N}\times\mathbb{N}$, i.e. to find:

$$\min_{\left(a_{i},b_{i}\right)\in\mathbb{N}\times\mathbb{N}}\sum_{i=1}^{k}\left(a_{i}+b_{i}\right)$$ $$\text{subject to}\quad\forall i\neq j:\ \left(a_{i},b_{i}\right)\neq\left(a_{j},b_{j}\right)$$

So, for example, I have:

  • $k=1 \Rightarrow$ the minimum sum is 2 since $(1,1)$ is the minimum pair possible.
  • $k=2 \Rightarrow$ the minimum sum is 5: $(1,1), (1,2)$ is a possible solution.
  • $k=4 \Rightarrow$ the minimum sum is 12: $(1,1), (1,2), (2,1), (2,2)$ is a possible solution.
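These examples can be checked with a short greedy sketch (the helper name `greedy_min_sum` is mine, not from the question): for each possible pair sum $s=2,3,\dots$ there are $s-1$ distinct pairs, so we take them in increasing order of $s$ until $k$ pairs are used.

```python
from itertools import count

def greedy_min_sum(k):
    """Sum of the k smallest pair sums over pairs (a, b) with a, b >= 1.

    For each pair sum s there are s - 1 distinct pairs
    (1, s - 1), (2, s - 2), ..., (s - 1, 1), so we consume sums
    s = 2, 3, ... greedily until k pairs have been taken.
    """
    total, remaining = 0, k
    for s in count(2):                  # smallest pair sum is 1 + 1 = 2
        take = min(remaining, s - 1)    # s - 1 distinct pairs have sum s
        total += take * s
        remaining -= take
        if remaining == 0:
            return total

print(greedy_min_sum(1))  # 2
print(greedy_min_sum(2))  # 5
print(greedy_min_sum(4))  # 12
```

This reproduces the three examples above; whether this greedy choice is actually optimal is exactly what the question asks.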

Intuition

My thought on this problem is that the best strategy is to always take the pair whose sum is minimal, i.e. a greedy algorithm. I'm trying to prove it via the greedy-choice property or some induction on the choices, but I'm not quite sure this is the right proof (e.g. mimicking Kruskal's proof of correctness).

Another attempt is to use a bijection $f:\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N}$, similar to the bijection here: Bijective Function from N to N x N, and then translate the optimization problem into an optimization over $\mathbb{N}$, which is much simpler. (But I don't know how to prove that minimizing one problem gives the same minimum as the other, i.e. that if $x$ is a solution to one problem, then $f(x)$ is a solution to the other.)

I would like to hear your thoughts on this.

  • So, you do not consider $0$ a natural number? – Thomas Andrews Jun 15 '24 at 19:25
  • I don't consider 0 to be in N – linuxbeginner Jun 15 '24 at 19:28
  • 1
    You can write your condition more clearly as $$\forall i\neq j:(a_i,b_i)\neq (a_j,b_j).$$ It also might let you see a way to approach the question. – Thomas Andrews Jun 15 '24 at 19:30
  • is there a reason why $(1,1),(1,2),(2,1),(1,3),(3,1),\dots$ is not the minimal such sequence of pairs? – AnCar Jun 15 '24 at 19:37
  • Because you can do better. For example, we can replace $(4,1)$ with $(1,1)$ and get a smaller sum. @AnCar Basically, the list should be in terms of the values $a_i+b_i.$ So the sequence should be $$(1,1),(1,2),(2,1),(1,3),(2,2),(3,1),(1,4),(2,3),(3,2),(4,1),\dots$$ – Thomas Andrews Jun 15 '24 at 19:42
  • ah, actually there is, after $(3,1)$ we can squeeze in $(2,2)$ before following with $(1,4)$ – AnCar Jun 15 '24 at 19:42
  • Basically we sort all pairs in increasing order by $a_i+b_i.$ – Thomas Andrews Jun 15 '24 at 19:43
  • 1
    Indeed, then the idea is as follows: for $j\geq 2$, define $s_j$ to be the number of distinct ways one can write $j$ as a sum of two natural numbers. for $n\geq 2$, define $S_n=\sum_{j=2}^n s_j$. There will exist a largest $n$ such that $S_n\leq k$. then the minimal sum will just be $\sum_{j=2}^n j s_j+(n+1)(k-S_n)$. It is also not difficult to see that $s_j=j-1$. – AnCar Jun 15 '24 at 19:48

2 Answers


As stated above, the problem reduces to counting how many distinct pairs of each sum appear. For sum $j+1$ this number is always $j$, as the distinct pairs are $(i,j+1-i)$ for $1\leq i \leq j$. If your $k$ is of the form $1+2+...+j=\binom{j+1}{2}$ for some $j$, then your answer will be $\sum_{i=1}^j i\cdot(i+1)$ (counting $i$ instances of the summand $i+1$). Otherwise, find the maximal $j$ with $\binom{j+1}{2}<k$, and the answer will be $\left(\sum_{i=1}^j i\cdot(i+1)\right) + \left(k-\binom{j+1}{2}\right)\cdot(j+2)$ (counting the same as before, plus the remaining pairs, each of sum $j+2$).
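As a sanity check, here is a small sketch of this closed form (the function name is mine; it folds the two cases into one by searching for the maximal $j$ with $\binom{j+1}{2}\leq k$, and uses $j+2$ as the sum of the leftover pairs, per the correction in the comment below):

```python
from math import comb

def closed_form_min_sum(k):
    """Closed form: take all pairs with sums 2..j+1 (there are C(j+1, 2)
    of them), then the remaining k - C(j+1, 2) pairs each have sum j+2."""
    j = 0
    while comb(j + 2, 2) <= k:   # find the maximal j with C(j+1, 2) <= k
        j += 1
    full = sum(i * (i + 1) for i in range(1, j + 1))  # i pairs of sum i+1
    return full + (k - comb(j + 1, 2)) * (j + 2)

print(closed_form_min_sum(4))  # 12
```

For $k=1,2,4$ this returns $2,5,12$, matching the examples in the question.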

  • small fix to your calculation, I think it's:

    $\left(k-\binom{j+1}{2}\right)\cdot\left(j+2\right)$

    and not

    $\left(k-\binom{j+1}{2}\right)\cdot\left(j+1\right)$

    – linuxbeginner Oct 24 '24 at 12:50

Since the solution is a greedy algorithm, you can use the methods for proving greedy algorithms, e.g. the "greedy stays ahead" method.

For this case, sort the possible tuples ascending by their sum. Let's call the summing function $\Sigma:\mathbb N\times \mathbb N\to \mathbb N, \quad (a,b)\mapsto a+b$.

Your greedy solution is to always pick the lowest-sum element you haven't picked so far, until all $k$ elements are accounted for. Let this solution be $G=(g_1,...,g_k)$.
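This greedy choice can be sketched concretely (the helper name `greedy_pairs` is mine): enumerate tuples in ascending order of $\Sigma$ and keep the first $k$.

```python
def greedy_pairs(k):
    """Return k distinct pairs chosen greedily: enumerate pairs (a, b)
    in ascending order of their sum a + b and take the first k."""
    pairs, s = [], 2
    while len(pairs) < k:
        for a in range(1, s):           # all pairs (a, s - a) with sum s
            pairs.append((a, s - a))
            if len(pairs) == k:
                break
        s += 1
    return pairs

print(greedy_pairs(4))  # [(1, 1), (1, 2), (2, 1), (1, 3)]
```

Ties between tuples of equal sum are broken arbitrarily here; any tie-break gives the same total, e.g. $2+3+3+4=12$ for $k=4$.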

Now let $F=(f_1,...,f_k)$ denote another feasible choice of tuples. Let $F$ be sorted so that the sums of the tuples are monotonically increasing.

Now assume that for each $i$ we have $\Sigma f_i = \Sigma g_i$. Then $\sum_{i=1}^k \Sigma f_i =\sum_{i=1}^k \Sigma g_i$, so both solutions are equally good.

Next we'll use induction. Let $F^{(n)} := (f_1,...,f_n)$ and $G^{(n)}:= (g_1,...,g_n)$ be two sequences whose elements are unique, sorted in ascending order of $\Sigma$, and drawn from a set $M\subseteq \mathbb N ^2$. Let further $G^{(n)}$ be the sequence obtained by running our greedy algorithm on $M$.

Now our induction hypothesis is: if for some $i$ it holds that $\Sigma f_i\neq \Sigma g_i$, then $G^{(n)}$ is the better of the two solutions.

For $n=1$ this holds trivially. Now let the statement be proven for $n-1$, and let $G^{(n)}$ and $F^{(n)}$ be given. Assume that for some $i$ we have $\Sigma f_i\neq \Sigma g_i$. Since $G$ was chosen at each step as the element with minimal tuple sum, we must have $\Sigma f_i > \Sigma g_i$. So if we only had $i$ tuples to pick, $G$ would be better than $F$.

Since $F$ is sorted ascendingly, none of the tuples $f_{i+1},...,f_n$ can have a sum smaller than $\Sigma f_i > \Sigma g_i$. So every choice $F$ could make from point $i+1$ on, $G$ could make as well.

So we can split $G^{(n)}$ and $F^{(n)}$ each into their first $i$ elements and the remaining. For the first part we've shown $G^{(n)}$ to be superior. For the second part we've shown that both sequences choose from the same set and now have $<n$ elements, so our induction hypothesis can be used.

Therefore both the first and the second part of $G^{(n)}$ are superior to the corresponding parts of $F^{(n)}$, and therefore $G^{(n)}$ is the superior solution.

With this we've shown the induction hypothesis for $n$, and thus it holds for all $n$, in particular for $n=k$.

ConnFus