This is a question about simple arithmetic, especially multiplication, of positive integers. The usual way of computing the product of, for example, $53$ and $47$ is $$ 53\times 47=50\times 40 + 3 \times 40+50\times 7+3 \times 7 = 2000+120+350+21 = 2491, $$ but by playing with formulae like $(a+b)(a-b)=a^2-b^2$ for a while, one easily picks up a lot of tricks for more efficient computation in special cases.
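For instance, this very product succumbs to the difference-of-squares identity in a single step: $$ 53\times 47=(50+3)(50-3)=50^2-3^2=2500-9=2491. $$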
Some people use these tricks quite often, while others prefer the standard route (unless the shortcut is very obvious) and do not find these tricks particularly efficient.
Here is another "shortcut": some people memorize the products of all pairs of two-digit integers and then do all their computations in base $100$. This appears to increase the efficiency of multiplication.
On the other hand, for integer multiplication in computers, I have not seen an algorithm that stores all the products from $1\times 1$ to $127\times 127$ in memory and then computes in base $128$. (Or, to take it to the extreme for 16-bit signed integers: store all the products from $0\times 0$ to $32767\times 32767$ in memory, and when multiplying, simply look the result up in the table.)
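To make the lookup-table idea concrete, here is a minimal sketch in Python (the names `TABLE` and `table_multiply` are mine, purely for illustration, not from any real implementation): ordinary schoolbook multiplication in base $128$, except that each single-digit product is fetched from a precomputed table instead of being recomputed.

```python
B = 128  # the base; the table then has B*B entries

# Precompute every single-"digit" product once, as suggested above.
TABLE = [[x * y for y in range(B)] for x in range(B)]

def to_digits(n, base=B):
    """Little-endian digits of a nonnegative integer n in the given base."""
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits or [0]

def table_multiply(a, b, base=B):
    """Schoolbook multiplication where each digit product is a table lookup."""
    da, db = to_digits(a, base), to_digits(b, base)
    result = [0] * (len(da) + len(db))
    for i, x in enumerate(da):
        carry = 0
        for j, y in enumerate(db):
            total = result[i + j] + TABLE[x][y] + carry  # lookup, not x*y
            carry, result[i + j] = divmod(total, base)
        result[i + len(db)] += carry
    # Convert the digit list back to an integer.
    return sum(d * base**k for k, d in enumerate(result))

assert table_multiply(53, 47) == 53 * 47
```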
Also, I have never seen computers use formulae like $(a+b)(a-b)=a^2-b^2$ to achieve faster arithmetic.
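As far as I know, the closest thing to this that has actually been used is quarter-square multiplication, which rewrites the identity as $ab=\big((a+b)^2-(a-b)^2\big)/4$, so that a single table of $\lfloor n^2/4\rfloor$ values turns any multiplication into two lookups and a subtraction. A minimal Python sketch:

```python
N = 256  # handle factors 0 <= a, b < N; indices run up to 2N - 2

# One table of floor(n^2 / 4) replaces a full N-by-N product table.
QUARTER_SQUARES = [n * n // 4 for n in range(2 * N)]

def qs_multiply(a, b):
    """Multiply via two table lookups and a subtraction (0 <= a, b < N)."""
    if a < b:
        a, b = b, a
    # a+b and a-b have the same parity, so the floors cancel exactly
    # and the result is ((a+b)^2 - (a-b)^2) / 4 = ab.
    return QUARTER_SQUARES[a + b] - QUARTER_SQUARES[a - b]

assert all(qs_multiply(a, b) == a * b for a in range(N) for b in range(N))
```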
So, this leads to the following questions:
- Why do these tricks and shortcuts not work well for computers?
- Why do these tricks sometimes work well for humans? (Well, we have to answer this first: do they actually work for humans, or is that just an illusion?)