
This is a question about simple arithmetic, especially multiplication, of positive integers. The usual way of computing the product of, for example, $53$ and $47$ is $$ 53\times 47=50\times 40 + 3 \times 40+50\times 7+3 \times 7 = 2000+120+350+21 = 2491, $$ but by playing with formulae like $(a+b)(a-b)=a^2-b^2$ for a while, one easily gets a lot of tricks for more efficient computation in special cases.
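
For instance, applying that identity to the very same product collapses it into a single subtraction: $$ 53\times 47=(50+3)(50-3)=50^2-3^2=2500-9=2491. $$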

Some people use these tricks quite often, while others prefer the standard route (unless the shortcut is very obvious) and do not find these tricks particularly efficient.

Here is another "shortcut". Some people memorize the product of any two two-digit integers, and do all computations in base $100$. This appears to increase the efficiency of multiplication.

On the other hand, for integer multiplication in computers, I have not seen an algorithm that stores all results from $1\times 1$ to $127\times 127$ in memory and then does the computation in base $128$. (Or, to take it to the extreme for 16-bit signed integers: store all results from $0\times 0$ to $32767\times 32767$ in memory, and when multiplying, just look the product up in the table.)
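
To make the question concrete, here is a minimal sketch of what such a table-driven scheme might look like (no real library works this way, as far as I know; all names are invented for illustration). The products of all 7-bit "digits" are precomputed once, and a schoolbook multiplication over base-$128$ digits then uses only table lookups and additions:

```c
#include <stdint.h>
#include <stddef.h>

/* All products of 7-bit "digits": 128*128 entries of 16 bits each,
 * i.e. a 32 KiB table. Filled once at startup by init_prod(). */
static uint16_t prod[128][128];

static void init_prod(void) {
    for (int a = 0; a < 128; a++)
        for (int b = 0; b < 128; b++)
            prod[a][b] = (uint16_t)(a * b);
}

/* Schoolbook multiply of two numbers stored as little-endian arrays of
 * base-128 digits; out must have room for na + nb digits. The inner
 * loop performs no hardware multiplication, only lookups and adds. */
static void mul_base128(const uint8_t *x, size_t na,
                        const uint8_t *y, size_t nb, uint8_t *out) {
    for (size_t k = 0; k < na + nb; k++)
        out[k] = 0;
    for (size_t i = 0; i < na; i++) {
        uint32_t carry = 0;                    /* stays below 128 */
        for (size_t j = 0; j < nb; j++) {
            uint32_t t = out[i + j] + prod[x[i]][y[j]] + carry;
            out[i + j] = (uint8_t)(t & 127);   /* keep the low 7 bits */
            carry = t >> 7;
        }
        out[i + nb] = (uint8_t)carry;
    }
}
```

The trade-off is immediate: every digit product is now a load from a 32 KiB table, and on modern hardware that memory traffic typically costs more than the dedicated multiplier circuit the CPU already has, which delivers a full product in a few cycles.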

Also, I have never seen computers use formulae like $(a+b)(a-b)=a^2-b^2$ to achieve faster arithmetic.
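
There is, for what it is worth, a classical software trick built on exactly this identity: "quarter-square" multiplication uses $ab=\big((a+b)^2-(a-b)^2\big)/4$ so that a single table of squares replaces the multiplier; it saw use on early processors that lacked a multiply instruction. A minimal sketch, with invented names:

```c
#include <stdint.h>

/* qsq[n] = floor(n*n/4) for n = 0..510, covering a+b for two 8-bit
 * operands. Filled once at startup by init_qsq(). */
static uint16_t qsq[511];

static void init_qsq(void) {
    for (uint32_t n = 0; n < 511; n++)
        qsq[n] = (uint16_t)(n * n / 4);
}

/* 8x8 -> 16-bit multiply from two lookups and one subtraction:
 * ab = floor((a+b)^2/4) - floor((a-b)^2/4). The result is exact
 * because a+b and a-b have the same parity, so both floors discard
 * the same fractional part. */
static uint16_t mul8_qsq(uint8_t a, uint8_t b) {
    unsigned s = (unsigned)a + b;           /* 0..510 */
    unsigned d = a > b ? a - b : b - a;     /* |a-b|, 0..255 */
    return (uint16_t)(qsq[s] - qsq[d]);
}
```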

So, this leads to the following questions:

  1. Why do these tricks and shortcuts not work well for computers?
  2. Why do these tricks sometimes work well for humans? (Of course, we should answer this first: do they actually work for humans, or is that just an illusion?)
Ma Joad
  • What do you mean by "it does not work for computers"? – Peter Feb 15 '21 at 12:44
  • @Peter See my explanation above. It means that computers hardly ever use such a strategy of computation (as far as I know). – Ma Joad Feb 15 '21 at 12:47
  • 1
    Do not forget, computers are manmade, and there must be a strong reason why the professionals must take extra efforts to reduce computing. The reason why such Algorithms are uncommon ( they do exist) is that that are not needed much, especially in the modern era... – Aatmaj Feb 15 '21 at 12:52
  • 3
    In response to 1: lookup tables require memory, and accessing memory is slow, in most cases, way slower than manipulating internal registers. – user3733558 Feb 15 '21 at 13:06
  • @Aatmaj Simple arithmetic is common, so is it the case that introducing such algorithms does not offer significant improvements? What are some examples of such algorithms? – Ma Joad Feb 15 '21 at 14:07
  • I am no expert, but I have a rough idea, and a reference. Too long for a comment; I will answer in a day or two. – Aatmaj Feb 15 '21 at 16:10
  • When I looked at $53$ and $47$ the first thing I thought of was $50+3$ and $50-3$ and then $2500-9$. But I know at least one person that does it the long way in his head just about as fast as I do using the short cut. His hrair limit is much larger than mine. – Steven Alexis Gregory Feb 16 '21 at 00:23
  • https://math.stackexchange.com/questions/3956994/a-problem-related-to-minimizing-the-multiplication-signs--refrence – Aatmaj Feb 17 '21 at 15:57
  • The basic key lies in the fact that computers "see" things differently than we do. E.g. $99^2$ can be computed easily by us using the $(a-b)^2$ formula, since we know $99$ is "near" $100$; but it is difficult for the computer to know it is near, so going the hard way is easier for the computer than for us. The complexity of implementing such algorithms is worse than the computation itself... – Aatmaj Feb 17 '21 at 15:59
  • You might like https://en.wikipedia.org/wiki/Karatsuba_algorithm – Aatmaj Jun 19 '21 at 09:49
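
For readers following that last link: a minimal sketch of the Karatsuba idea, assuming 32-bit operands split into 16-bit halves so that every intermediate fits in 64 bits. Three small multiplications replace the four of the schoolbook method.

```c
#include <stdint.h>

/* 32x32 -> 64-bit product from three 16x16 multiplies (Karatsuba)
 * instead of the schoolbook method's four. */
static uint64_t mul32_karatsuba(uint32_t x, uint32_t y) {
    uint64_t x0 = x & 0xFFFF, x1 = x >> 16;   /* low/high halves */
    uint64_t y0 = y & 0xFFFF, y1 = y >> 16;

    uint64_t z0 = x0 * y0;                          /* low product  */
    uint64_t z2 = x1 * y1;                          /* high product */
    uint64_t z1 = (x0 + x1) * (y0 + y1) - z0 - z2;  /* cross terms  */

    /* x*y = z2*2^32 + z1*2^16 + z0 */
    return (z2 << 32) + (z1 << 16) + z0;
}
```

At this size the trick does not pay off; the savings appear only once the operands are hundreds of bits long, which is why big-integer libraries use it and CPUs do not.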

0 Answers