My question is not meant to be dumb. I'm working on a much more complicated problem, but I forgot some of the basics.
Often we perform steps in math without really knowing why we are allowed to do them. Why does $\frac{10}{35} = \frac{2}{7}$? Obviously it simply reduces to $\frac{2}{7}$, but exactly why? We simply divide both numerator and denominator by $5$, but isn't this really a form of multiplication? I'm wondering operationally how we do this.
Is it simply $\frac{10}{35} \cdot \frac{5}{5}$? Then we cross-reduce? If instead I think of it as $\frac{10}{35}$ divided by $5$, that would be $\frac{10}{35} \cdot \frac{1}{5}$ (multiplying by the reciprocal), BUT $\frac{10}{35} \cdot \frac{1}{5} = \frac{2}{35}$ (cancelling the $5$ against the $10$ to get $2$). Is there an actual proof or law that allows us to do this?
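To make the two operations I'm comparing explicit, here is each one written out in full:

$$\frac{10}{35} \cdot \frac{5}{5} = \frac{50}{175} = \frac{2}{7}, \qquad \frac{10}{35} \cdot \frac{1}{5} = \frac{10}{175} = \frac{2}{35}.$$

So multiplying by $\frac{5}{5} = 1$ leaves the value unchanged, while multiplying by $\frac{1}{5}$ (i.e. actually dividing the fraction by $5$) gives a different number. My question is about what justifies the first operation as "reducing."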