Following the comment on Gono's (apt) answer, I add that for large $k,n$ one would expect the approximation of assuming independent variables to be reasonable.
This, of course, assumes that $\sum_{i=1}^n p_i = k$ (a necessary condition, given that $E (\sum X_i) = k = \sum E(X_i)=\sum p_i $).
A heuristic justification: for large $n$, the sum of the independent (unconditioned) variables will be near its expected value, hence the conditioning becomes largely irrelevant (a similar argument to the one used in "Poissonization" approximations).
In this approximation, then
$$H(\mathbf{X}) \approx \sum_{i=1}^n h(p_i) \tag{1}$$
where $h(\cdot)$ is the binary entropy function.
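As a quick brute-force check of $(1)$, here is a minimal Python sketch that computes the exact conditional entropy by enumerating all weight-$k$ binary vectors. The choices $n=16$, $k=4$ and the zero-sum cosine perturbation of the $p_i$ around $k/n$ are only illustrative; they serve to make $\sum p_i = k$ hold exactly.

```python
import math
from itertools import combinations

n, k = 16, 4
# p_i = k/n plus a zero-sum perturbation, so that sum(p) = k holds exactly
p = [k / n + 0.1 * math.cos(2 * math.pi * i / n) for i in range(n)]
assert abs(sum(p) - k) < 1e-9 and all(0 < q < 1 for q in p)

def h(q):  # binary entropy, in bits
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

# Conditional law: P(x | S=k) is proportional to
# prod_i p_i^{x_i} (1-p_i)^{1-x_i} over the weight-k vectors x
weights = [math.prod(p[i] if i in ones else 1 - p[i] for i in range(n))
           for ones in combinations(range(n), k)]
Z = sum(weights)  # = P(S = k)
H_exact = -sum(w / Z * math.log2(w / Z) for w in weights)
H_indep = sum(h(q) for q in p)  # approximation (1)
print(f"exact H(X) = {H_exact:.4f} bits,  sum h(p_i) = {H_indep:.4f} bits")
```

The independence approximation overshoots, by an amount that $(4)$ below identifies as roughly $H(S)$.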
Further, as a quick sanity check, notice that for the case of constant $p_i=\frac{k}{n}$, the exact entropy you got was $H=\log_2 {n \choose k}$, which for large $n,k$ can be approximated by $ n \, h(\frac{k}{n}) $. This coincides with the approximation above, which assumes (approximate) independence.
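This can also be checked numerically (the values of $n$ below are arbitrary; the relative gap shrinks as $n$ grows, since the correction term is only logarithmic in $n$):

```python
import math

def h(q):  # binary entropy, in bits
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

for n in (100, 1000, 10000):
    k = int(0.3 * n)
    exact = math.log2(math.comb(n, k))   # log2 C(n, k)
    approx = n * h(k / n)                # large-n approximation
    print(f"n={n:6d}:  log2 C(n,k) = {exact:9.1f},  n h(k/n) = {approx:9.1f}")
```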
Edit: let me make this a little more rigorous.
Let $\mathbf{Y}$ be the vector of independent Bernoullis, with $\sum_{i=1}^n p_i = k$, and let $S=\sum_{i=1}^n Y_i$.
From the chain rule, $H(\mathbf{Y},S)= H(\mathbf{Y})+ H(S \mid \mathbf{Y}) = H(S)+H(\mathbf{Y} \mid S) $.
But $H(S \mid \mathbf{Y}) =0$, since $S$ is a deterministic function of $\mathbf{Y}$. Hence
$$ H(\mathbf{Y} \mid S)=\sum_s H( \mathbf{Y} \mid S=s) P(S=s)=H(\mathbf{Y}) - H(S) \tag{2}$$
For large $n$, the distribution of $S$ will be Gaussian-like, with mean $\mu_S=k$. Assuming $H( \mathbf{Y} \mid S=s)$ is well behaved (smooth and not too asymmetric) around $s=\mu_S$, we should be able to approximate the weighted sum by its value at the mean, which is precisely the entropy of $\mathbf{X}$:
$$\sum_s H( \mathbf{Y} \mid S=s) P(S=s) \approx H( \mathbf{Y} \mid S=k)=H(\mathbf{X}) \tag{3}$$
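For small $n$, both the identity $(2)$ and the approximation $(3)$ can be verified by brute force, enumerating all $2^n$ outcomes of $\mathbf{Y}$ and bucketing them by their sum (again, $n$, $k$ and the $p_i$ are illustrative choices with $\sum p_i = k$):

```python
import math
from itertools import product

n, k = 12, 4
p = [k / n + 0.1 * math.cos(2 * math.pi * i / n) for i in range(n)]

# P(y) for every outcome y, bucketed by s = sum(y)
by_sum = {s: [] for s in range(n + 1)}
for y in product((0, 1), repeat=n):
    w = math.prod(p[i] if y[i] else 1 - p[i] for i in range(n))
    by_sum[sum(y)].append(w)

H_Y = -sum(w * math.log2(w) for ws in by_sum.values() for w in ws)
P_S = {s: sum(ws) for s, ws in by_sum.items()}
H_S = -sum(q * math.log2(q) for q in P_S.values() if q > 0)

def H_cond(s):  # H(Y | S = s): entropy of the renormalized bucket
    Z = P_S[s]
    return -sum(w / Z * math.log2(w / Z) for w in by_sum[s])

weighted = sum(P_S[s] * H_cond(s) for s in range(n + 1) if P_S[s] > 0)
print(f"H(Y) - H(S)          = {H_Y - H_S:.4f}")  # left side of (2)
print(f"sum_s H(Y|S=s) P(s)  = {weighted:.4f}")   # equal, by (2)
print(f"H(Y|S=k) = H(X)      = {H_cond(k):.4f}")  # approximation (3)
```

The first two numbers agree exactly (they are the same quantity, by $(2)$); the third should be in the same ballpark, with the agreement improving as $n$ grows.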
We also know that $H( \mathbf{Y}) =\sum_{i=1}^n h(p_i) $; hence
$$ H(\mathbf{X}) \approx \sum_{i=1}^n h(p_i) - H(S) \tag{4}$$
It remains to compute $H(S)$. There is no simple formula, but it can be bounded above (probably tightly) by the entropy of the Binomial distribution with the same mean, which in turn is well approximated by the entropy of a Gaussian of matching variance. Hence we can refine the approximation (plugging in yet another approximation):
$$ H(\mathbf{X}) \approx \sum_{i=1}^n h(p_i) - \frac12 \log_2(2 \pi e k (1 - \frac{k}{n})) \tag{5}$$
It also looks reasonable that $H(\mathbf{X}) < H(\mathbf{Y}) $, because we are adding a restriction.
Again, this result can be sanity-checked against the case of constant $p_i$: the term $H(S)$ gives a (fair) second-order correction to $(1)$.
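To close the loop, here is a numerical comparison of the exact $H(\mathbf{X})$ against $(4)$, with $H(S)$ computed exactly from the Poisson-binomial pmf via the standard one-Bernoulli-at-a-time convolution, and against the refined formula $(5)$. As before, $n$, $k$ and the $p_i$ are merely illustrative choices satisfying $\sum p_i = k$:

```python
import math
from itertools import combinations

n, k = 16, 4
p = [k / n + 0.1 * math.cos(2 * math.pi * i / n) for i in range(n)]

def h(q):  # binary entropy, in bits
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

# Exact H(X), by enumerating the weight-k configurations
weights = [math.prod(p[i] if i in ones else 1 - p[i] for i in range(n))
           for ones in combinations(range(n), k)]
Z = sum(weights)
H_X = -sum(w / Z * math.log2(w / Z) for w in weights)

# Exact H(S): build the Poisson-binomial pmf by convolving one Bernoulli at a time
pmf = [1.0]
for pi in p:
    pmf = [(pmf[s] * (1 - pi) if s < len(pmf) else 0.0)
           + (pmf[s - 1] * pi if s > 0 else 0.0)
           for s in range(len(pmf) + 1)]
H_S = -sum(q * math.log2(q) for q in pmf if q > 0)

sum_h = sum(h(q) for q in p)
approx4 = sum_h - H_S                                                      # eq. (4)
approx5 = sum_h - 0.5 * math.log2(2 * math.pi * math.e * k * (1 - k / n))  # eq. (5)
print(f"exact H(X) = {H_X:.4f},  approx (4) = {approx4:.4f},  approx (5) = {approx5:.4f}")
```

Any residual gap between the exact value and $(4)$ is the smoothing error of $(3)$, while $(5)$ differs from $(4)$ only through the Gaussian surrogate for $H(S)$.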