Earlier I asked some questions and found interesting sources on the internet about how to do approximate division by scaling the fraction and counting the number of $0$s following the most significant bit of the denominator.
Example: we represent the fraction $7/3$ as the ordered pair of integers $(7,3)$.
We want to calculate a floating point approximation of it.
If we multiply both the numerator and the denominator by $11$, we get the representation $(77,33)$ of the same fraction.
Looking at the binary representation of $33$, which is $100001$, we are tempted to replace the division by a bit shift: if we pretend the expansion is $100000$ (treating the second $1$ and everything after it as $0$), then dividing by $33$ becomes a right shift by $5$ bits.
The relative error is about $1/32$, in other words roughly $3\%$. But all the operations involved are extremely fast to perform in hardware (as long as we know that a binary multiplicative "number-friend" of $3$ is $11$).
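To make the trick concrete, here is a minimal Python sketch (the function name and the hard-coded constants are my own, purely for illustration):

```python
# Approximate n/3 with the multiply-and-shift trick: 11 is a
# "number-friend" of 3 because 3 * 11 = 33 = 0b100001, which we
# truncate to 32 = 2^5, turning the division into a 5-bit shift.
def approx_div_by_3(n: int) -> float:
    scaled = 11 * n        # n/3 == (11*n)/33
    return scaled / 32     # pretend 33 is 32; integer part is scaled >> 5

print(approx_div_by_3(7))  # 2.40625, versus the true 7/3 = 2.333...
```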
Now to my question: can we apply this same idea to approximate logarithms?
For example, to calculate the base-$2$ logarithm of $3$, we compute $3 \cdot 3 = 9$, with bit expansion $1001$, which we pretend to be $1000$. This gives the approximation $\log_2(9) \approx 3$, or by the log law $2\log_2(3) \approx 3$, and therefore $\log_2(3) \approx \frac{3}{2} = 1.5$. The true value is $1.585\ldots$, which is somewhat close.
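In code, the single-squaring version looks like this (again a sketch of my own, using Python's `int.bit_length()` to locate the most significant bit):

```python
# One squaring: 2*log2(x) is approximated by the position of the most
# significant bit of x*x (i.e. everything below that bit is truncated).
def approx_log2_one_squaring(x: int) -> float:
    squared = x * x                  # 3*3 = 9 = 0b1001
    msb = squared.bit_length() - 1   # index of the leading 1 -> 3
    return msb / 2                   # 2*log2(x) ~ msb, so log2(x) ~ msb/2

print(approx_log2_one_squaring(3))   # 1.5, versus log2(3) = 1.58496...
```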
Given that only multiplications and bit shifts are required, and that we can discard the least significant half of each product without trouble, could this be a feasible approach for a floating-point log approximator?
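To show what I have in mind, here is a sketch of the iterated scheme: square repeatedly, throw away low bits whenever the product outgrows a fixed word size, and keep a running count of the discarded bits (the parameters `squarings` and `word_bits` are arbitrary choices of mine, not tuned values):

```python
# After k squarings, x^(2^k) ~ v * 2^s, where v is the truncated working
# value and s counts the bits shifted away (s doubles at each squaring,
# since the power-of-2 factor gets squared along with v).
# Then log2(x) ~ (msb position of v + s) / 2^k.
def approx_log2(x: int, squarings: int = 16, word_bits: int = 64) -> float:
    assert x > 0
    s = 0
    for _ in range(squarings):
        x = x * x
        s *= 2                            # squaring doubles the 2^s factor
        excess = x.bit_length() - word_bits
        if excess > 0:                    # discard the low bits of the product
            x >>= excess
            s += excess
    msb = x.bit_length() - 1              # truncate below the leading 1
    return (msb + s) / (1 << squarings)

print(approx_log2(3))  # about 1.5849, versus log2(3) = 1.58496...
```

The final division by $2^{\text{squarings}}$ is itself just an exponent adjustment in floating point, so only multiplies and shifts remain.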
I suspect this has already been tried, so any references you may have to such methods are heartily welcomed.