I found the original question and amd's answer both ... sneaky, in the way typical of mathematicians. Correct, but ... sneaky, for lack of a better term.
A hypersphere of radius $R$ centered at $\vec{c}$ is defined by points $\vec{p}$ that fulfill
$$\lVert \vec{p} - \vec{c} \rVert^2 = R^2$$
A hyperplane is defined by its unit normal vector $\hat{n}$, and its minimum signed distance $D$ from origin; its points $\vec{p}$ fulfill
$$\vec{p} \cdot \hat{n} = D$$
The minimum distance between the hyperplane and the center of the hypersphere is
$$\begin{array}{rl}
& d = \left( \vec{c} - D\hat{n} \right ) \cdot \hat{n}\\
\iff & d = \vec{c}\cdot\hat{n} - D\end{array}$$
where the first right-hand side is as in amd's answer (but using the point on the hyperplane closest to the origin, $D\hat{n}$, as the measurement point). The second form says the same thing another way: the length of the hypersphere center vector $\vec{c}$ projected onto the hyperplane surface normal, minus the minimum signed distance $D$ from the hyperplane to the origin; or, in other terms, the length of the vector from the point on the hyperplane nearest the origin to the center of the hypersphere, measured along the hyperplane surface normal.
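As a quick sanity check, with small numbers of my own (not from the original question): in three dimensions, take
$$\vec{c} = (1, 2, 3), \qquad \hat{n} = (0, 0, 1), \qquad D = 1,$$
so the hyperplane is simply $z = 1$, and
$$d = \vec{c}\cdot\hat{n} - D = 3 - 1 = 2,$$
which matches the obvious picture: the center sits at height $3$, the plane at height $1$.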
The intersection is nonempty if and only if $-R \le d \le +R$. When nonempty, it corresponds to a hypersphere of one less dimension, centered at
$$\begin{array}{rl}
&\vec{o} = \vec{c} - d\hat{n}\\
\iff&\vec{o} = \vec{c} - \left(\vec{c}\cdot\hat{n} - D\right)\hat{n}\\
\iff&\vec{o} = \vec{c} + D\hat{n} - \left(\vec{c}\cdot\hat{n}\right)\hat{n}\end{array}$$
with radius
$$\begin{array}{rl}
&r = \sqrt{R^2 - d^2}\\
\iff&r = \sqrt{R^2 - \left(\vec{c}\cdot\hat{n} - D\right)^2}\end{array}$$
and perpendicular to $\hat{n}$.
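Continuing the small example above with $R = 3$: we have $-3 \le 2 \le +3$, so the intersection is nonempty, with
$$\vec{o} = (1, 2, 3) - 2\,(0, 0, 1) = (1, 2, 1), \qquad r = \sqrt{3^2 - 2^2} = \sqrt{5},$$
i.e. a circle of radius $\sqrt{5}$ in the plane $z = 1$.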
The part I call sneaky is hiding in three little words above: perpendicular to $\hat{n}$.
The intersection is the part of the hyperplane bounded by the original hypersphere. That is, for points $\vec{p}$ on the intersection, both
$$\begin{cases}
\left\lVert\vec{p} - \vec{c}\right\rVert^2 = R^2\\
\vec{p} \cdot \hat{n} = D\end{cases}$$
apply. If you rotate the coordinate system so that the hyperplane unit normal $\hat{n}$ is parallel to an axis, the coordinate along that axis can be omitted, and in the rotated coordinate system the intersection simplifies to just a hypersphere of one less dimension: centered at $\vec{o} = \vec{c} - d\hat{n}$ in the original coordinates, with radius $r = \sqrt{R^2 - d^2}$.
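In the small example above, $\hat{n} = (0, 0, 1)$ is already parallel to the $z$ axis, so no rotation is needed: dropping the $z$ coordinate leaves just the circle
$$(x - 1)^2 + (y - 2)^2 = 5,$$
and the hyperplane condition $z = 1$ is enforced simply by remembering which plane those two coordinates live in.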
The original asker obviously forgot about the rotation needed. In the original coordinate system, both the hypersphere and the hyperplane equations apply to the points; only in the rotated coordinates is the one-less-dimensional hypersphere alone enough, and only because the rotation turns the hyperplane perpendicular to one axis, allowing that axis to be dropped from consideration. Essentially, the rotation implicitly enforces the hyperplane requirement!
Now, amd did mention this explicitly, just in mathematical terms: a new orthonormal basis spanning the space orthogonal to the hyperplane unit normal vector $\hat{n}$ is just a mathematical description of the rotation needed.
I found this sneaky because it hides the crucial, core point of the answer -- the point the OP had missed -- as concise jargon. It felt like the tiny little over-powered hand cannon in the movie Men in Black.
Mathematicians do this all the time. I suspect it is because they want to keep their arts secret from us mere mortals with weaker math-fu.
Implementation-wise, we haven't yet (as of this writing) mentioned any practical ways of constructing the rotation matrix (or the new orthonormal basis vectors, which basically amount to the same thing).
Again, the rotation matrix needed is an orthonormal $N$-by-$N$ square matrix, where $N$ is the number of dimensions, that rotates the hyperplane surface normal parallel to one of the axes. The most useful choice is to rotate it parallel to the positive final axis, as that makes the matrix easiest to use in practice.
Although in specific dimensions (2 and 3, in particular) it is possible to construct the rotation directly (via the vector cross product and the axis-angle representation of the rotation), there are at least two approaches that work in any number of dimensions (greater than one).
First, and my preferred one, is based on Givens rotations. Start with the original hyperplane surface normal vector and an identity matrix. Pick the dimension whose component is largest in magnitude, excluding the last dimension, and apply to the current rotation matrix the Givens rotation that moves that component into the final dimension. (This modifies only two columns or two rows of the rotation matrix, depending on how you implement it, and two components of the current unit normal vector: the earlier component becomes zero, and the final component grows in magnitude.) Repeat until the final component of the normal vector is one and all other components are zero; the accumulated matrix is then the rotation needed. A sketch in C follows.
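Here is a minimal sketch of that construction in C (my own code for this answer; the function name `rotation_to_last_axis` and the row-major storage are just my choices, and it assumes a unit-length normal and at least two dimensions):

```c
#include <math.h>
#include <stddef.h>

/* Sketch: build an n-by-n rotation matrix q (row-major) such that
 * q * normal = (0, ..., 0, +1).  'normal' must be a unit vector and
 * n must be at least 2.  Uses Givens rotations, always pivoting on
 * the largest remaining component, as described above.  C99 (VLA). */
static void rotation_to_last_axis(const size_t n, const double *normal, double *q)
{
    const size_t last = n - 1;
    double w[n];  /* working copy of the normal; rotated as we go */
    size_t i, j, k;

    /* Start with q = identity, w = normal. */
    for (i = 0; i < n; i++) {
        w[i] = normal[i];
        for (j = 0; j < n; j++)
            q[i*n + j] = (i == j) ? 1.0 : 0.0;
    }

    for (;;) {
        /* Pivot: largest |w[k]| among the non-final components. */
        k = 0;
        for (i = 1; i < last; i++)
            if (fabs(w[i]) > fabs(w[k]))
                k = i;
        if (fabs(w[k]) < 1e-15)
            break;  /* only the final component is (effectively) nonzero */

        {
            /* Givens rotation in the (k, last) plane: with c = b/r and
             * s = a/r, the plane rotation [ c -s ; s c ] maps the pair
             * (w[k], w[last]) = (a, b) to (0, r), zeroing w[k] and
             * growing w[last]. */
            const double a = w[k], b = w[last];
            const double r = hypot(a, b);
            const double c = b / r, s = a / r;

            /* Accumulate: q <- G * q.  Only rows k and last change. */
            for (j = 0; j < n; j++) {
                const double qk = q[k*n + j], ql = q[last*n + j];
                q[k*n + j]    = c * qk - s * ql;
                q[last*n + j] = s * qk + c * ql;
            }
            w[k] = 0.0;
            w[last] = r;
        }
    }

    /* If the normal pointed (essentially) along the negative final
     * axis, w[last] is now -1; a half-turn in the (0, last) plane
     * fixes the sign while keeping det q = +1. */
    if (w[last] < 0.0)
        for (j = 0; j < n; j++) {
            q[j]          = -q[j];          /* row 0 */
            q[last*n + j] = -q[last*n + j]; /* row last */
        }
}
```

Note the sign fix at the end: if the normal points (essentially) along the negative final axis, the loop never fires, and a half-turn in the $(0, N-1)$ plane is needed to land on the positive final axis without turning the matrix into a reflection (negating a single row would flip the determinant to $-1$).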
Another method is to copy the hyperplane unit normal into the rotation matrix row corresponding to the final dimension, and then orthogonalize each new basis vector against the previously added ones. Mathematically this looks appealing, but in my experience it is numerically terribly unstable in higher numbers of dimensions: subtracting the accumulated sum of projections onto the existing basis vectors is too sensitive to rounding errors to really work in practice. A sketch follows.
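For comparison, a sketch of this approach too (again my own code and naming); note that it subtracts the projections one accepted row at a time, rather than as a single accumulated sum:

```c
#include <math.h>
#include <stddef.h>

/* Sketch of the orthogonalization approach; the last row of the
 * row-major n-by-n matrix q becomes the unit normal, and the other
 * rows are built from standard basis vectors, each orthogonalized
 * against every row accepted so far.  Returns 0 on success, -1 if it
 * ran out of usable candidates.  As noted above, expect numerical
 * trouble as n grows.  C99 (VLA). */
static int basis_by_gram_schmidt(const size_t n, const double *normal, double *q)
{
    const size_t last = n - 1;
    size_t row = 0, cand, i, j;

    for (j = 0; j < n; j++)
        q[last*n + j] = normal[j];  /* final row = unit normal */

    for (cand = 0; cand < n && row < last; cand++) {
        double v[n], len = 0.0;

        /* Candidate: the 'cand'-th standard basis vector. */
        for (j = 0; j < n; j++)
            v[j] = (j == cand) ? 1.0 : 0.0;

        /* Subtract projections onto the normal (row 'last') and onto
         * each previously accepted row.  This is the numerically
         * delicate part. */
        for (i = 0; i <= row; i++) {
            const size_t r = (i == 0) ? last : i - 1;
            double dot = 0.0;
            for (j = 0; j < n; j++)
                dot += v[j] * q[r*n + j];
            for (j = 0; j < n; j++)
                v[j] -= dot * q[r*n + j];
        }

        for (j = 0; j < n; j++)
            len += v[j] * v[j];
        len = sqrt(len);
        if (len < 1e-10)
            continue;  /* candidate was (nearly) in the span already */

        for (j = 0; j < n; j++)
            q[row*n + j] = v[j] / len;
        row++;
    }
    return (row == last) ? 0 : -1;
}
```

Subtracting the projections sequentially, as above, already behaves somewhat better than accumulating all the dot products first and subtracting their sum, which is the variant that really falls apart. One more caveat: this produces an orthonormal basis, but its determinant may be $-1$; if a proper rotation is required, negate one of the non-final rows.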
When I first tried to write the Givens code, my initial attempt didn't rotate the hyperplane unit normal correctly to $(0,\dots,0,1)$, although the rotation matrix was orthonormal. Such issues are common in code doing such rotations (or changing to a new orthonormal basis), since it is hard for us code-oriented math-weaklings to keep track of the order of the matrix multiplications (the order of the rotations is important) and of when to use the transpose (for orthonormal matrices, the transpose is the same as the inverse, and this rotation matrix is an orthonormal matrix). The algorithm is not hard per se; there is just an annoying amount of detail and complexity to keep in mind. It really needs some care from a math-oriented person, verifying the details (order and transposes taken).
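Given those pitfalls, it is worth testing any implementation numerically. Here is a minimal check for the Givens sketch above (reusing its hypothetical `rotation_to_last_axis`): normalize a random vector, build the matrix, and print the product with the original normal, which should come out as $(0, \dots, 0, 1)$:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Assumes rotation_to_last_axis() from the sketch above is in scope. */
int main(void)
{
    enum { N = 5 };
    double normal[N], q[N*N], len = 0.0;
    size_t i, j;

    /* A fixed seed keeps the test deterministic. */
    srand(12345);
    for (i = 0; i < N; i++) {
        normal[i] = (double)rand() / (double)RAND_MAX - 0.5;
        len += normal[i] * normal[i];
    }
    len = sqrt(len);
    for (i = 0; i < N; i++)
        normal[i] /= len;

    rotation_to_last_axis(N, normal, q);

    /* Print q * normal: expect ~0 everywhere except ~1 in the last slot. */
    for (i = 0; i < N; i++) {
        double sum = 0.0;
        for (j = 0; j < N; j++)
            sum += q[i*N + j] * normal[j];
        printf("%+.9f\n", sum);
    }
    return 0;
}
```

Compile with e.g. `gcc -std=c99` and link with `-lm`. Checking that $Q Q^T$ is (close to) the identity is a worthwhile second test, for the same reasons.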