There seems to be something I am deeply missing about the assumptions made when calculating the complexity of multiplication.
Say we have two numbers $m, n$ that we want to multiply, with $n > m$ and $\log(n) = b$. Suppose we have a set of primes $p_1, p_2, \dots, p_k$ such that $\prod_{i=1}^k p_i > n \times m$, and suppose we already have $n_i \equiv n \pmod{p_i}$, with $m_i$ defined similarly. It is quite obvious that if we start at $p_1 = 2$ and take primes incrementally, then $p_k = O(\log(n \times m))$.

Now it is quite clear that we can represent $n = (n_1, n_2, \dots, n_k)$ and $m = (m_1, m_2, \dots, m_k)$, where $n \times m = ((n_1 \times m_1) \bmod p_1, \dots, (n_k \times m_k) \bmod p_k)$, and we can recover $n \times m$ using the Chinese remainder theorem. The number of multiplications is of order $b$, and each multiplication involves numbers with $O(\log b)$ digits. Therefore $T(b) = b \times T(\log b)$, which should mean the complexity is $O(b^{1+\epsilon})$ and not the $O(n \log n)$ (with $n$ the number of digits) implied by Wikipedia: https://en.wikipedia.org/wiki/Multiplication_algorithm

I understand this algorithm is not practical: the primes have to be known beforehand, and actually recovering the number in decimal format would be comparatively slow. But it seems like the algorithms don't account for the time taken to put a number into their representation in the complexity calculation. So what is it that I am missing here?
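For concreteness, here is a minimal Python sketch of the scheme I have in mind: take primes until their product exceeds $n \times m$, multiply componentwise, and reconstruct with the CRT. The helper names (`crt_multiply`, `is_prime`) are just illustrative, and the sketch cheats by using $n \times m$ itself to decide how many primes to take; it also ignores the cost of computing the residues in the first place, which is exactly the cost my question is about.

```python
from math import prod

def is_prime(q):
    """Trial division; fine for the small moduli used here."""
    if q < 2:
        return False
    d = 2
    while d * d <= q:
        if q % d == 0:
            return False
        d += 1
    return True

def crt_multiply(n, m):
    """Multiply n and m via residues modulo small primes,
    then recombine with the Chinese remainder theorem."""
    # Take primes 2, 3, 5, ... until their product exceeds n*m
    # (cheats: uses n*m itself just to decide how many primes to take).
    primes, q = [], 2
    while prod(primes) <= n * m:
        if is_prime(q):
            primes.append(q)
        q += 1
    # Componentwise multiplication of the residue representations.
    residues = [((n % p) * (m % p)) % p for p in primes]
    # Standard CRT reconstruction of the unique value below prod(primes).
    P = prod(primes)
    x = 0
    for r, p in zip(residues, primes):
        Pi = P // p
        x = (x + r * Pi * pow(Pi, -1, p)) % P  # pow(Pi, -1, p): modular inverse (Python 3.8+)
    return x

# The reconstructed value matches the ordinary product.
assert crt_multiply(123456789, 987654321) == 123456789 * 987654321
```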