Here's the intuition from my perspective.
First, at a high level: mutual information (MI) measures the (expected, logarithmic) discrepancy between the joint distribution $p(x,y)$ and the product of the marginals $p(x)p(y)$. Estimating it therefore requires some way of estimating densities from the samples. The obvious approach is a histogram: count the (normalized) number of points falling into each bin.
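For reference, the quantity being estimated is

$$
I(X;Y) \;=\; \iint p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,\mathrm{d}x\,\mathrm{d}y,
$$

i.e. the Kullback–Leibler divergence between $p(x,y)$ and $p(x)p(y)$, and every factor in the integrand is a density that somehow has to come from the data.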
However, keep in mind that what we are ultimately after is the probability density, which is directly reflected in the sample density. Intuitively, samples are dense in a particular region if many of them lie close together, i.e. the distances to their nearest neighbours are small. Conversely, in regions of low density there are few samples, so the distances to their closest neighbours are large.
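A standard way to make this quantitative (textbook background, not something specific to the paper) is the k-nearest-neighbour density estimate: the ball around $x_i$ that just reaches its $k$-th nearest neighbour contains $k$ of the $N$ samples, so

$$
\hat p(x_i) \;\approx\; \frac{k}{N \, c_d \, r_k(x_i)^d},
$$

where $r_k(x_i)$ is the distance to the $k$-th neighbour and $c_d$ is the volume of the $d$-dimensional unit ball. Small neighbour distances translate directly into high estimated density, and vice versa.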
Hence, the paper tries to

> estimate MI from k-nearest neighbour statistics
So basically, we can think of neighbour distances as another proxy for sample density, instead of counting samples in bins. It is somewhat reminiscent of kernel density estimation, except that the "bandwidth" is set implicitly by the local spacing of the points.
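To make the proxy concrete, here is a minimal sketch of such a k-NN density estimate. This is my own illustration, not code from the paper; it assumes scipy is available, and $k=5$ is an arbitrary choice.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def knn_density(samples, k=5):
    """Crude k-NN density estimate at each sample point.

    The ball around a point that just reaches its k-th neighbour contains
    k samples, so the local density is roughly k / (N * ball volume).
    """
    samples = np.asarray(samples, dtype=float).reshape(len(samples), -1)
    n, d = samples.shape
    # Distance to the k-th neighbour of each point, skipping the point
    # itself (hence querying neighbour number k + 1).
    r_k = cKDTree(samples).query(samples, k=[k + 1])[0][:, 0]
    unit_ball_volume = np.pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (n * unit_ball_volume * r_k ** d)

# Dense regions -> small r_k -> high estimated density, and vice versa.
samples = np.random.default_rng(0).normal(size=(1000, 2))
print(knn_density(samples, k=5)[:5])
```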
More concretely, though, this paper works from a different decomposition of the mutual information, namely $I(X;Y) = H(X) + H(Y) - H(X,Y)$: the sum of the marginal entropies minus the joint entropy. They then plug in entropy estimators built from nearest-neighbour distances.
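If the paper in question is the Kraskov–Stögbauer–Grassberger (KSG) estimator (Kraskov et al., 2004), which is the usual reference for this idea, the entropy estimators are of the Kozachenko–Leonenko type, and the correction terms combine so that the final estimate involves only digamma functions of neighbour counts, roughly

$$
\hat I(X;Y) \;=\; \psi(k) + \psi(N) - \big\langle \psi(n_x + 1) + \psi(n_y + 1) \big\rangle,
$$

where $\psi$ is the digamma function, $k$ is the number of neighbours used in the joint space, $n_x$ and $n_y$ count how many points fall within that point's joint-space neighbourhood distance in each marginal, and the angle brackets denote an average over the data points.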
Another intuition, I think, is that counting samples within local neighbourhood distances (determined in the joint space) makes the algorithm adaptive: the effective "bin size" shrinks where samples are dense and grows where they are sparse, instead of being fixed over the whole sample space.
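Putting those pieces together, here is a rough sketch of what such an estimator can look like in practice. This is my own illustration of the KSG-style recipe as I understand it (scipy-based, max-norm distances, and the tiny offset approximating the strict inequality are my choices), so treat it as a sketch rather than a reference implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    """Sketch of a KSG-style MI estimate from k-NN statistics.

    x, y : arrays of shape (n_samples, d_x) and (n_samples, d_y)
    k    : number of neighbours used in the joint space
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = len(x)
    xy = np.hstack([x, y])

    # Adaptive "bin size": distance to the k-th neighbour of each point in
    # the *joint* space, using the max-norm (k + 1 because the query
    # includes the point itself at distance zero).
    eps = cKDTree(xy).query(xy, k=k + 1, p=np.inf)[0][:, -1]

    # Count, for each point, the neighbours falling strictly inside eps in
    # each marginal space (the small offset approximates the strict
    # inequality; the -1 removes the point itself).
    x_tree, y_tree = cKDTree(x), cKDTree(y)
    nx = np.array([len(x_tree.query_ball_point(x[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])
    ny = np.array([len(y_tree.query_ball_point(y[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])

    # The entropy estimators combine into digamma terms of the counts.
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Quick sanity check on correlated Gaussians: the analytic MI for
# Y = X + 0.5 * noise is 0.5 * ln(1 + 1 / 0.25) ≈ 0.80 nats.
rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 1))
y = x + 0.5 * rng.normal(size=(2000, 1))
print(ksg_mi(x, y, k=3))
```

Note how the per-point `eps` here is exactly the adaptive "bin" from the previous paragraph: tiny in dense regions, wide in sparse ones.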