Before the rest of this answer, I should say that there is essentially no way to show that any problem is hard, except perhaps showing that it implies or reduces to some other problem which people have deemed (for one reason or another) hard. Ramanujan presumably thought this was hard because he tried (and failed) to show that it was multiplicative, and many other people presumably thought this was hard because Ramanujan tried and failed.
For context, the year is 1916. This is when Ramanujan presented his paper "On Certain Arithmetical Functions" to the Cambridge Philosophical Society and stated that he empirically believed that $\tau(\cdot)$ was multiplicative.
The study of multiplicative functions wasn't new, but the functions studied happened to be interesting arithmetical functions in their own right, and multiplicativity frequently appears in elementary number theory. Functions weren't studied simply because they were multiplicative (though this would come about later). In fact, from reading Ramanujan's paper (and Hardy's and Mordell's papers from around the same time), it appears that the term "multiplicative" hadn't even been introduced yet. [I do not know when that term was introduced.]
Thus there was no systematic study of multiplicativity at the time. Simple statements of multiplicativity for elementary functions were considered worth remarking on --- for instance, Glaisher proved that many functions were multiplicative in "The Arithmetical Functions $P(m)$, $Q(m)$, $\Omega(m)$" in the Quarterly Journal of Mathematics, and this was considered publication-worthy (though the standard of what was publication-worthy was quite different at the time, and Glaisher was serving as the editor of the journal, so perhaps his decision was what mattered).
For essentially all previously studied multiplicative arithmetic functions, the multiplicative property was established with combinatorial tools; the elementary functions lend themselves to that very nicely. In the same vein, many of the tools that Ramanujan used to study $\tau(n)$ were combinatorial (or very nearly combinatorial).
We can phrase questions of multiplicativity in terms of Dirichlet series, which were continuing to rise in prominence at the time. Some additional context: Dirichlet proved his theorem on primes in arithmetic progressions in 1837, introducing Dirichlet $L$-functions. Riemann's memoir appeared in 1859, inspiring greater interaction between complex analysis and functions similar to $\zeta(s)$. Hadamard and de la Vallée-Poussin handled some subtle technicalities in Riemann's analysis and (independently) proved the prime number theorem in 1896. This is to say that in 1916, Dirichlet series and multiplicative functions were not yet widely understood and mastered.
In his paper, Ramanujan included a statement boiling down to an Euler product for the associated Dirichlet series $\sum \tau(n) n^{-s}$. But the product differs from the simpler ones previously studied, in that it is a degree $2$ product.
The zeta function has a representation
$$ \sum_{n \geq 1} \frac{1}{n^s} = \prod_p \frac{1}{1 - \frac{1}{p^{s}}},$$
which is the simplest Euler product. This reflects both unique factorization and the fact that the constant function $1$ is multiplicative.
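To see the mechanism, expand each factor as a geometric series:
$$ \prod_p \frac{1}{1 - \frac{1}{p^s}} = \prod_p \left( 1 + \frac{1}{p^s} + \frac{1}{p^{2s}} + \cdots \right), $$
and multiplying out the right-hand side produces each term $\frac{1}{n^s}$ exactly once, since every $n$ factors uniquely into prime powers.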
The multiplicativity of Dirichlet characters corresponds to the Euler product
$$ \sum_{n \geq 1} \frac{\chi(n)}{n^s} = \prod_p \frac{1}{1 - \frac{\chi(p)}{p^{s}}}.$$
Perhaps the simplest degree 2 Euler product comes from the divisor function $d(n)$, whose Dirichlet series factors as
$$ \sum_{n \geq 1} \frac{d(n)}{n^s} = \prod_p \left(\frac{1}{1 - \frac{1}{p^{s}}}\right)^2 = \zeta(s)^2,$$
but this is admittedly pretty simple.
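Concretely, the squared local factor just encodes the fact that $d(p^k) = k + 1$, via the power series identity
$$ \left( \frac{1}{1 - x} \right)^2 = \sum_{k \geq 0} (k + 1) x^k, \qquad x = \frac{1}{p^s}. $$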
The proposed Euler product (and thus the proposed structure of the multiplicativity) for $\tau(n)$ was
$$ \sum_{n \geq 1} \frac{\tau(n)}{n^s} = \prod_p \frac{1}{1 - \frac{\tau(p)}{p^s} + \frac{p^{11}}{p^{2s}}}.$$
Each individual factor in the Euler product is a degree 2 polynomial in $p^{-s}$.
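Unwinding the local factors, this Euler product is equivalent to the pair of statements that $\tau(mn) = \tau(m) \tau(n)$ whenever $\gcd(m, n) = 1$, and that
$$ \tau(p^{k+1}) = \tau(p) \tau(p^k) - p^{11} \tau(p^{k-1}) $$
for every prime $p$ and every $k \geq 1$. In particular, the single value $\tau(p)$ determines $\tau$ on all powers of $p$.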
On the level of multiplicativity, this corresponds to the fact that $\tau(\cdot)$ is not completely multiplicative (as opposed to $1$ or $\chi(\cdot)$). And unlike functions like $d(\cdot)$, it doesn't appear that $\tau(\cdot)$ is built out of simpler multiplicative functions. For reference, one classical way to show that $d(\cdot)$ is multiplicative is to show that the Dirichlet convolution of two multiplicative functions is multiplicative, and then to observe that $d(n) = (1 * 1)(n)$ (this is equivalent, of course, to the identity between Dirichlet series $\sum d(n) n^{-s} = \zeta(s)^2$).
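As a quick numerical sanity check (not a proof, of course), one can compute $\tau(n)$ for small $n$ directly from the defining $q$-expansion $\sum_{n \geq 1} \tau(n) q^n = q \prod_{n \geq 1} (1 - q^n)^{24}$ and test both identities. Here is a minimal Python sketch:

```python
N = 30  # compute tau(1), ..., tau(N)

# Coefficients of prod_{n >= 1} (1 - q^n)^24, truncated at degree N;
# poly[k] holds the coefficient of q^k.
poly = [0] * (N + 1)
poly[0] = 1
for n in range(1, N + 1):
    for _ in range(24):
        # Multiply in place by (1 - q^n); the downward loop reads
        # only not-yet-updated (old) coefficients.
        for k in range(N, n - 1, -1):
            poly[k] -= poly[k - n]

# Delta(q) = q * poly(q), so tau(m) is the coefficient of q^(m-1) in poly.
tau = {m: poly[m - 1] for m in range(1, N + 1)}

assert tau[6] == tau[2] * tau[3]        # multiplicativity at coprime arguments
assert tau[4] == tau[2]**2 - 2**11      # degree-2 recursion at p = 2, k = 1
print(tau[2], tau[3], tau[6])           # -24 252 -6048
```

Everything here is exact integer arithmetic, so there are no precision issues, and the same truncated-product computation extends to larger $n$ if desired.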
Thus the structure of the multiplicativity of $\tau(\cdot)$ is a bit different from that of previously studied multiplicative functions, and it took a new idea. In short, it is in this context that I would say that proving that $\tau(\cdot)$ is multiplicative is hard.
But it's not actually that hard. Less than a year after Ramanujan's paper appeared, Mordell showed that $\tau(\cdot)$ was indeed multiplicative in his paper "On Mr. Ramanujan's Empirical Expansions of Modular Functions." The idea is new, but simple, and the proof is essentially done in the first 3 pages of the article.
But the idea was new, which is the hardest part. It is interesting to place this in context as well. At the end of his paper, Mordell notes that proving corresponding Euler products for functions modular on subgroups of $\mathrm{SL}(2, \mathbb{Z})$ is harder, and in particular "it seems hardly worth while to go into details." But later Hecke would go into these details and study what we now call Hecke operators (which essentially standardize Mordell's approach), and show that there are bases of modular forms which are simultaneous eigenfunctions of all the Hecke operators, which in turn implies that their coefficients are multiplicative and have a degree $2$ Euler product like $\tau(\cdot)$. And this in turn has been generalized to modular forms on $\mathrm{SL}(3, \mathbb{Z})$, and indeed $\mathrm{SL}(n, \mathbb{Z})$. And so on.
In hindsight, the multiplicativity of $\tau(\cdot)$ is the simplest example of a phenomenon shared by a large family of modular functions.