13

I've been given the following problem in an interview (that I've already failed to solve, not trying to cheat my way past): The game starts with a positive integer number $A_0$. (E.g. $A_0 = 1234$.) This number is converted to binary representation, and $N$ is the number of bits set to $1$. (E.g. $A_0 = b100\ 1101\ 0010$, $N = 5.$)

Player 1 chooses a number $B_0$ less than $A_0$. $B_0$ must have only one bit set to 1. (E.g. $B_0 = b10\ 0000\ 0000 = 512$.) Let $A_1 = A_0 - B_0$. (E.g. $A_1 = 1234-512 = 722 = b10\ 1101\ 0010$.) A move is valid if $B_0$ satisfies the previous constraints, and if the number of bits set in $A_1$ is still equal to $N$.

Player 2 continues from $A_1$ by choosing a valid $B_1$, then player 1 continues from $A_2$, and so forth. A player loses if they have no valid moves left.

Assuming both players play optimally, determine the winning player using a reasonably efficient method. (In my problem definition, the constraints on this were that the program has to be able to deliver a solution for a few million input numbers that fit into a signed 32-bit integer.) That is, the solution doesn't need to be fully analytical.
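To make the rules concrete, here is a minimal sketch of the move validity check in Python (the function name is mine, not part of the problem statement):

```python
def valid_moves(a):
    """All valid B for position A: a power of two less than A whose
    subtraction leaves the number of set bits unchanged."""
    n = bin(a).count("1")  # N, the popcount that must be preserved
    return [1 << k for k in range(a.bit_length())
            if (1 << k) < a and bin(a - (1 << k)).count("1") == n]
```

For $A_0 = 1234$, this yields the moves $1, 8, 32, 512$; a player loses when this list is empty.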


My personal interest here is figuring out whether it was reasonable to expect me to find and implement the correct solution, with no feedback on correctness, in the 120 minutes I was given; or if this was one of those "let's see if they've seen this puzzle before" questions.

I failed because I chose to implement what seemed like a reasonable strategy, which gave correct results for the few test cases I was given up front; I then wasted too much time making it run fast, and ended up handing in incorrect output as my time ran out.

In retrospect I should've implemented a brute-force search and memoized partial solutions for small starting numbers, but hindsight is always 20/20. I'm curious, however, if there's a different common approach that eluded me as a flunkee.
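The memoized brute force mentioned above can be sketched in a few lines (a hedged sketch, not the interview's reference solution; names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(a):
    """True if the player to move from position A wins with optimal play."""
    n = bin(a).count("1")
    # Valid moves: subtract a power of two below A without changing the popcount.
    moves = [1 << k for k in range(a.bit_length())
             if (1 << k) < a and bin(a - (1 << k)).count("1") == n]
    # Standard game-tree recursion: you win if some move puts the
    # opponent in a losing position; with no moves, any() is False.
    return any(not first_player_wins(a - b) for b in moves)
```

This is exponentially slower than the closed-form answers below, but it is exactly the kind of oracle that lets you guess and check the pattern for small inputs.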

millimoose
  • 233
  • 2
  • 6

3 Answers

21

Take a moment to realize that if we can only subtract a power of two, and the popcount can't change, we have to subtract a $1$ at a position where the number has the bit pattern $10$. The result of that is always $01$ in those two positions, and the number doesn't change anywhere else.

In other words, the game is a series of swaps $10 \rightarrow 01$, and the game ends when all the ones are packed at the right-hand end. Note that it's impossible for this game to end early - you cannot get stuck. You will always end up at the position where all zeroes are on the left, and all ones are on the right.

So there is no winning or losing strategy at all: every play of the game takes the same number of swaps to reach the state where all ones are on the right, and the only determining factor is the parity of that number of swaps.

So how many swaps does it take? Note that $1$s can't cross each other, so if we numbered them and tracked them through the swaps, they'd remain in the same order in the final state. Each swap brings one of them one step closer to its final position.

So if the $i$th $1$ (counting from the right, the rightmost $1$ is the $0$th $1$) is in position $k$ from the right, it needs $k - i$ swaps to get to its correct position. This gives us an algorithm to count the number of swaps required:

i = 0      # index of the current 1 bit, counting from the right
k = 0      # current bit position
total = 0  # total number of swaps needed
while n > 0:
    if n & 1:
        total += k - i  # the i-th 1 must travel from position k to position i
        i += 1
    n >>= 1
    k += 1

We can now just look at the parity of total to see who wins. Time complexity is $O(\log n)$.
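Wrapped up as a complete function (a sketch; the winner numbering 1/2 follows the question's "Player 1"/"Player 2", the function names are mine):

```python
def swaps_to_finish(n):
    """Number of 10 -> 01 swaps needed to pack all ones to the right."""
    i = k = total = 0
    while n > 0:
        if n & 1:
            total += k - i  # the i-th 1 travels from position k to position i
            i += 1
        n >>= 1
        k += 1
    return total

def winner(a0):
    """1 if player 1 wins from A_0, 2 if player 2 wins, with optimal play.
    Odd parity means the player to move makes the last swap."""
    return 1 if swaps_to_finish(a0) & 1 else 2
```

For the question's example, $A_0 = 1234$ needs 18 swaps, so player 2 wins; $722$ needs 17, so player 1 wins.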

orlp
  • 13,988
  • 1
  • 26
  • 41
6

Note from @orlp's answer that we want the parity of the sum of the displacements from the start position to the end position. Let's annotate this:

       9876543210
       9 76 4  1    (positions at start)
start: 1011010010
end:   0000011111
            43210   (positions at end)

So we want

  ((1 - 0) + (4 - 1) + (6 - 2) + (7 - 3) + (9 - 4)) & 1
= ((1 + 4 + 6 + 7 + 9) - (0 + 1 + 2 + 3 + 4)) & 1
= ((1 + 0 + 0 + 1 + 1) - (0 + 1 + 0 + 1 + 0)) & 1

The first part is just the parity of the number of bits in the odd positions. You can build a mask for those positions by taking the maximum unsigned integer, dividing it by 0b11 (giving 0b0101…01), and negating it (giving 0b1010…10).

= (bitcount(x & ~(UINT_MAX / 0b11)) ^ (0 + 1 + 0 + 1 + 0)) & 1

The second part is the parity of half the number of bits in x.

= (bitcount(x & ~(UINT_MAX / 0b11)) ^ (bitcount(x) >> 1)) & 1

bitcount can either use the hardware popcnt instruction, or can be implemented manually, exploiting that only the last or second-to-last bit of the count is needed, which admits fast reductions.
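The whole closed form fits in a couple of lines of Python (a sketch assuming 64-bit inputs; the mask constant and names are mine):

```python
ODD_MASK = 0xAAAAAAAAAAAAAAAA  # ~(UINT_MAX / 0b11): bits in odd positions

def swap_parity(x):
    """Parity of the total number of swaps: 1 means the player to move wins."""
    # parity of sum of 1-bit positions, XOR parity of floor(popcount / 2)
    return (bin(x & ODD_MASK).count("1") ^ (bin(x).count("1") >> 1)) & 1
```

This agrees with the swap-counting loop in @orlp's answer: e.g. $1234$ needs 18 swaps (parity 0) and $722$ needs 17 (parity 1).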

Veedrac
  • 992
  • 1
  • 7
  • 16
5

One way to solve such a problem is as follows:

  • Find the solution for a few simple values using the "memoized brute-force" approach you are suggesting.

  • Guess the answer (which positions are winning and which are losing).

  • Try to prove your answer. If you succeed, great. Otherwise, try to find a counterexample, and use it to guess another answer. Here it could be helpful to solve a few more cases.

It is really hard to say how much time that takes. However, in interviews you are not necessarily expected to find the solution. Rather, the interviewers want to know how you approached solving the problem, and what progress you managed to make.

Yuval Filmus
  • 280,205
  • 27
  • 317
  • 514