
I came across this question during my course on Discrete Math, while we were discussing the pigeonhole principle:

We are given $n$ coins, each of weight $0$ or $1$. We are given a scale with which we can weigh any subset of the coins. The objective is to determine the weight of each of the $n$ coins using a minimal number of weighings. We are allowed to use information from previous weighings: for example, if we know $\{1, 2\}$ weighs $1$ and $\{2, 3\}$ weighs $2$, we can conclude that coin $1$ has weight $0$. Show that you will need to weigh at least $n/\log_2(n+1)$ subsets of coins to determine the weights of all the coins.

Unfortunately, I am at a loss to solve it and unsure how to proceed. Here is what I have gathered:

  1. No matter what algorithm is given, there is a case in which it is necessary to weigh at least $n/\log_2(n+1)$ subsets.
  2. The inequality essentially simplifies to $(n+1)^k \geq 2^n$, where $k$ is the number of weighings.
  3. Since we're talking about subsets and the pigeonhole principle, the number $2^n$ of subsets of $n$ coins is relevant.

Can I have a hint as to how to proceed? Is it possible that induction could be a way forward?

  • I don't understand your question. Since we can take any arbitrary subset, we could weigh the whole set itself, using only $1$ weighing. – Ritam_Dasgupta May 14 '21 at 11:11
  • @Ritam_Dasgupta I had the same doubt, but we're considering the worst case scenario, and we have to figure out the weight of each individual coin. – Lt. Commander. Data May 14 '21 at 11:12
  • You don't need a scale. Throw all the coins at the ceiling. The ones with weight 0 will stay there, and the ones of weight 1 will fall to the floor. –  May 14 '21 at 11:17
  • For more discussion of this problem, see https://math.stackexchange.com/q/25270/177399 – Mike Earnest May 15 '21 at 20:00

1 Answer


Let $\texttt{A}$ be an algorithm that chooses subsets of coins to weigh in order to determine the weights $w = (w_1, w_2, \ldots, w_n) \in \{0,1\}^n$. Suppose that $\texttt{A}$ never requires more than $k$ weighings to determine $w_1, \ldots, w_n$. At step $j$ of $\texttt{A}$ (that is, immediately after the $j$th weighing), there is some set $A_j \subseteq \{0,1\}^n$ of weight vectors still consistent with the outcomes so far. The $(j+1)$st weighing whittles $A_j$ down to $A_{j+1}$.

We can therefore think of the algorithm as a rooted tree $T$. On a particular input $w$, the algorithm runs from the root $A_0 = \{0,1\}^n$ down to some leaf, and for the algorithm to succeed, that leaf must contain only the element $w$. A weighing of at most $n$ coins has at most $n+1$ possible outcomes ($0, 1, \ldots, n$), so each parent node of $T$ has at most $n+1$ children. Since the depth of the tree is at most $k$, $$\#\{\text{leaves of } T\} \leq (n+1)^k.$$

Now think of the $2^n$ possible weight vectors $w$ as pigeons and the leaves of $T$ as pigeonholes. Each leaf contains at most one $w$, and every $w$ lands in some leaf, so $$\# \{\text{leaves of } T\} \geq 2^n.$$ Combining the two inequalities, $$2^n \leq (n+1)^k,$$ which (as you pointed out in your question) translates to $$k \geq \frac{n}{\log_2(n+1)}.$$
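As a quick numerical sanity check of the final inequality (a Python sketch; the function name is mine), the smallest integer $k$ with $(n+1)^k \geq 2^n$ agrees with the closed form $\lceil n/\log_2(n+1)\rceil$:

```python
import math

def min_weighings_bound(n):
    """Smallest k with (n+1)**k >= 2**n, in exact integer arithmetic.

    Each weighing has at most n+1 outcomes, so k weighings can
    distinguish at most (n+1)**k of the 2**n weight vectors.
    """
    k = 0
    while (n + 1) ** k < 2 ** n:
        k += 1
    return k

for n in (1, 3, 7, 15, 100):
    # Compare against the closed form k = ceil(n / log2(n+1)).
    print(n, min_weighings_bound(n), math.ceil(n / math.log2(n + 1)))
```

For instance $n = 100$ gives $k = 16$: fifteen weighings offer only $101^{15} \approx 1.2 \cdot 10^{30}$ outcome sequences, fewer than the $2^{100} \approx 1.3 \cdot 10^{30}$ possible weight vectors.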

CTVK