Whether they are independent depends on the distribution of the errors. The context and general conventions make it clear that what is assumed is something like this:
$$
Y_i = \alpha + \beta_1 x_{1i} + \cdots + \beta_p x_{pi} + \varepsilon_i
$$
where
- $\varepsilon_i \sim \operatorname{i.i.d. N}(0,\sigma^2)$ for $i=1,\ldots,n$ and typically $n\gg p;$
- $x_{ji}$ for $j=1,\ldots,p,$ $i=1,\ldots,n$ are constant (i.e. not random) and observable;
- $\alpha,\beta_1,\ldots,\beta_p$ are constant and unobservable;
- $Y_i$ for $i=1,\ldots,n$ are observable (and of course random since $\varepsilon_i$ are random).
One can write
$$
Y = X\beta + \varepsilon
$$
where
- $Y\in\mathbb R^{n\times 1}$ (a long column vector);
- $X\in\mathbb R^{n\times(p+1)}$ (a matrix with many rows and few columns; it is written out explicitly below);
- $\beta\in\mathbb R^{(p+1)\times1}$ (a short column vector);
- $\varepsilon\in\mathbb R^{n\times 1}$ (the same size and shape as $Y$, of course).
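Writing this out once is worthwhile, since the intercept $\alpha$ is absorbed into $\beta$ and the corresponding first column of $X$ consists entirely of $1$s (a fact we will use at the very end):
$$
X = \begin{bmatrix} 1 & x_{11} & \cdots & x_{p1} \\ 1 & x_{12} & \cdots & x_{p2} \\ \vdots & \vdots & & \vdots \\ 1 & x_{1n} & \cdots & x_{pn} \end{bmatrix}, \qquad \beta = \begin{bmatrix} \alpha \\ \beta_1 \\ \vdots \\ \beta_p \end{bmatrix}, \qquad \varepsilon \sim N_n(0,\sigma^2 I_n), \text{ so } \operatorname{var}(Y) = \sigma^2 I_n.
$$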
We will show that $\operatorname{SSE} = \|AY\|^2$ and $\operatorname{SSR} = \|BY\|^2$ where $A$ and $B$ are certain matrices with $n$ columns (and also $n$ rows, as we will see).
Central to the problem is this identity:
$$
\operatorname{cov}(AY, BY) = A\Big( \operatorname{var}(Y) \Big) B^\top \tag{main identity}
$$
and here $\operatorname{var}(Y)$ is an $n\times n$ nonnegative-definite matrix and $\operatorname{cov}(AY, BY)$ is a matrix with as many rows as $A$ and as many columns as $B^\top$ (thus also $n\times n$).
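This identity is just bilinearity of covariance written in matrix form; from the definition $\operatorname{cov}(U,V)=E\big[(U-EU)(V-EV)^\top\big]$ it follows in one line:
$$
\operatorname{cov}(AY,BY) = E\Big[ A(Y-EY)\big(B(Y-EY)\big)^\top \Big] = A\, E\Big[ (Y-EY)(Y-EY)^\top \Big] B^\top = A\Big(\operatorname{var}(Y)\Big)B^\top.
$$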
By the information in your quoted "Hint", we only need to show that this covariance is $0$ (i.e. the $n\times n$ zero matrix). It will follow that $AY$ and $BY$ are independent, and therefore functions of them are independent.
The vector of fitted values is the orthogonal projection of the vector $Y$ onto the column space of the matrix $X$. It is therefore
$$
\widehat Y = HY
$$
where $H\in\mathbb R^{n\times n}$ is the "hat matrix" (so called because it transforms $Y$ to $\widehat Y$)
$$
H = X\Big( X^\top X\Big)^{-1} X^\top = \underbrace{\quad X\quad}_{n\times(p+1)} \Big( \underbrace{\quad X^\top X \quad}_{(p+1)\times(p+1)} \Big)^{-1} \underbrace{\quad X^\top \quad}_{(p+1)\times n}.
$$
To show that that is the orthogonal projection, it suffices to show two things: (1) If $Y$ is orthogonal to the column space, then $HY=0$. That's easy because in that case $X^\top Y=0.$ (2) If $Y$ is in the column space, then $HY=Y.$ That is shown by saying that in this case $Y=Xu$ for some $u\in\mathbb R^{(p+1)\times1}$ and then multiplying.
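Spelled out, those two computations are:
$$
X^\top Y = 0 \implies HY = X\big(X^\top X\big)^{-1}\underbrace{X^\top Y}_{=\,0} = 0, \qquad\qquad Y = Xu \implies HY = X\big(X^\top X\big)^{-1}X^\top X u = Xu = Y.
$$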
The vector of residuals is
$$
\widehat\varepsilon = (I-H)Y
$$
i.e. observed minus fitted equals residual. (This should not be confused with the unobservable vector $\varepsilon$ of true errors.)
Therefore $I-H$ will be in the role of the matrix $B$ in the main identity. And we have
$$
\operatorname{SSE} = \|\widehat\varepsilon\|^2 = \|(I-H)Y\|^2.
$$
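It is also easy to check from the formula for $H$ that $H$ is symmetric and idempotent ($H^\top = H$ and $H^2 = H$, since $X^\top X$ and its inverse are symmetric), hence $I-H$ is too; for instance, this lets one write $\operatorname{SSE}$ as a quadratic form:
$$
\operatorname{SSE} = \|(I-H)Y\|^2 = Y^\top (I-H)^\top (I-H) Y = Y^\top (I-H) Y.
$$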
The way $\operatorname{SSR}$ is usually defined is as $\sum_{i=1}^n (\widehat Y_i - \overline Y)^2,$ where $\overline Y = (Y_1+\cdots+Y_n)/n,$ the average $Y$ value. This is
$$
\|(H-P)Y\|^2
$$
where $P$ is the $n\times n$ matrix whose every entry is $1/n.$
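Indeed, each row of $P$ computes the average, so $PY$ is the column vector whose every entry is $\overline Y,$ and therefore
$$
(H-P)Y = \widehat Y - \overline Y\,\mathbf 1 \qquad\text{and}\qquad \|(H-P)Y\|^2 = \sum_{i=1}^n \big(\widehat Y_i - \overline Y\big)^2 = \operatorname{SSR},
$$
where $\mathbf 1\in\mathbb R^{n\times 1}$ is the vector of all $1$s.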
Thus $H-P$ will be in the role of $A$ in the main identity.
Now apply the main identity. (You will need to show that $(H-P)(I-H)=0.$ For that you need to notice that the columns of $P$ are in the column space of $H$.)
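A sketch of how that computation goes: since $\operatorname{var}(Y) = \operatorname{var}(\varepsilon) = \sigma^2 I_n,$ the main identity reduces everything to matrix algebra. The algebra uses $H^2 = H,$ the symmetry of $H$ and $P,$ and $HP = P$ (every column of $P$ is a constant vector, hence in the column space of $X$ thanks to the column of $1$s), so that also $PH = (HP)^\top = P^\top = P$:
$$
\operatorname{cov}\big((H-P)Y,\,(I-H)Y\big) = (H-P)\big(\sigma^2 I_n\big)(I-H)^\top = \sigma^2 (H-P)(I-H) = \sigma^2\big(H - H^2 - P + PH\big) = 0.
$$
Since $(H-P)Y$ and $(I-H)Y$ are jointly normal (both are linear functions of the normal vector $Y$), zero covariance gives independence, and therefore $\operatorname{SSR} = \|(H-P)Y\|^2$ and $\operatorname{SSE} = \|(I-H)Y\|^2$ are independent as well.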