$
\def\tss{\mathop{\bigotimes}\limits}
\def\dss{\mathop{\odot}\limits}
\def\bbR#1{{\mathbb R}^{#1}}
\def\a{\alpha}\def\b{\beta}\def\g{\gamma}\def\t{\theta}
\def\l{\lambda}\def\s{\sigma}\def\e{\varepsilon}
\def\n{\nabla}\def\o{{\tt1}}\def\p{\partial}
\def\E{{\cal E}}\def\F{{\cal F}}\def\G{{\cal G}}
\def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)}
\def\vec#1{\operatorname{vec}\LR{#1}}
\def\diag#1{\operatorname{diag}\LR{#1}}
\def\Diag#1{\operatorname{Diag}\LR{#1}}
\def\trace#1{\operatorname{Tr}\LR{#1}}
\def\qiq{\quad\implies\quad}
\def\grad#1#2{\frac{\p #1}{\p #2}}
\def\hess#1#2#3{\frac{\p^2 #1}{\p #2\,\p #3}}
\def\c#1{\color{red}{#1}}
\def\m#1{\left[\begin{array}{r}#1\end{array}\right]}
\def\notimplies{\;\not\!\!\!\implies}
$The following method can be used even if you do not have a simple series definition of the function, as long as you can evaluate it at any point in its domain by some black-box procedure.
The first insight is that, as a consequence of the Cayley-Hamilton theorem, any analytic function of an $n\times n$ matrix equals a polynomial in that matrix of degree at most $(n-1)$ with scalar coefficients.
For example, in the case of the matrix $C$ above
$$f(C) = \b_0 I + \b_1C + \b_2C^2$$
The problem boils down to calculating the $\b_k$ coefficients.
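This reduction is easy to verify numerically. Here is a minimal sketch, using an arbitrary random $3\times 3$ matrix for illustration: the characteristic polynomial annihilates the matrix, so $A^3$ (and with it every convergent power series in $A$) collapses onto $I$, $A$, $A^2$.
```python
import numpy as np

# Cayley-Hamilton check on an arbitrary 3x3 matrix: the characteristic
# polynomial t^3 + c2*t^2 + c1*t + c0 annihilates A, so A^3 is a linear
# combination of I, A, A^2 (and so is any power series in A).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

_, c2, c1, c0 = np.poly(A)   # characteristic polynomial coefficients
residual = A @ A @ A + c2 * (A @ A) + c1 * A + c0 * np.eye(3)
print(np.allclose(residual, 0))   # True
```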
The next insight is that, if $\l_k$ are the eigenvalues of $C$,
then $f(\l_k)$ are the eigenvalues of $f(C)$. Therefore, you can write an eigenvalue version of the polynomial
$$f(\l_k) = \b_0 + \b_1\l_k + \b_2\l_k^2$$
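This spectral mapping property can also be confirmed numerically; a small sketch, with $f=\exp$ and an arbitrary example matrix, using scipy.linalg.expm purely as a reference evaluator:
```python
import numpy as np
from scipy.linalg import expm

# Spectral mapping: the eigenvalues of f(A) are f applied to the
# eigenvalues of A, shown here for f = exp on an arbitrary matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

lhs = np.sort_complex(np.linalg.eigvals(expm(A)))    # eigenvalues of exp(A)
rhs = np.sort_complex(np.exp(np.linalg.eigvals(A)))  # exp of eigenvalues of A
print(np.allclose(lhs, rhs))   # True
```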
Substituting the eigenvalues of $C$ yields a $3\times 3$ linear system, which can be solved for the $\b_k$ coefficients. The eigenvalues of
$C$ are known to be $\{\pm i\a,0\}$ where
$$
a = \m{a_1\\a_2\\a_3},\qquad
\a = \big\|a\big\|_2 = \frac{1}{\sqrt 2}\big\|C\big\|_F
$$
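For concreteness, these facts can be checked numerically, assuming (consistent with the quantities above) that $C$ is the skew-symmetric cross-product matrix built from $a$; the particular vector below is an arbitrary example.
```python
import numpy as np

# C as the skew-symmetric cross-product matrix of a (an assumption
# consistent with the stated eigenvalues and norm identity).
a = np.array([0.3, -1.2, 0.8])
C = np.array([[    0, -a[2],  a[1]],
              [ a[2],     0, -a[0]],
              [-a[1],  a[0],     0]])

alpha = np.linalg.norm(a)
print(np.isclose(alpha, np.linalg.norm(C, 'fro') / np.sqrt(2)))  # True

# Eigenvalues should be {+i*alpha, -i*alpha, 0} (up to rounding)
print(np.sort_complex(np.linalg.eigvals(C)))
```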
The linear system
$$\eqalign{
&\b_0 + i\a\b_1 - \a^2\b_2 &= f(i\a) &\doteq f_p
\quad&\big(f\;{\rm plus}\big) \\
&\b_0 - i\a\b_1 - \a^2\b_2 &= f(-i\a) &\doteq f_m
\quad&\big(f\;{\rm minus}\big) \\
&\b_0 &= f(0) &\doteq f_z
\quad&\big(f\;{\rm zero}\big) \\
}$$
can be solved by Cramer's Rule to obtain
$$\eqalign{
\b_0 &= f_z \qquad
\b_1 &= \frac{f_p-f_m}{2i\a} \qquad
\b_2 &= \frac{2f_z-(f_p+f_m)}{2\a^2} \\
}$$
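The same solve can be packaged for any black-box $f$. Here is a minimal numpy sketch (the helper name `poly_coeffs` is my own, and it solves the system numerically rather than via Cramer's Rule); $f$ only needs to accept complex arguments:
```python
import numpy as np

# Solve the 3x3 system above for (b0, b1, b2) given a black-box f:
# evaluate f at the eigenvalues {+i*alpha, -i*alpha, 0} and solve the
# Vandermonde system whose rows are [1, lam, lam^2].
def poly_coeffs(f, alpha):
    lams = np.array([1j * alpha, -1j * alpha, 0.0])
    V = np.vander(lams, N=3, increasing=True)
    return np.linalg.solve(V, f(lams))     # complex (b0, b1, b2)

# Example with f = exp; this matches the closed forms derived below.
alpha = 1.3
b0, b1, b2 = poly_coeffs(np.exp, alpha)
print(np.allclose([b0, b1, b2],
                  [1.0, np.sin(alpha)/alpha, (1 - np.cos(alpha))/alpha**2]))
# True
```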
Again, these formulas are valid for any function that is analytic at the eigenvalues of $C$:
$\;\operatorname{erf}(),\;\sinh(),\;\cosh(),\,\ldots\;$
(but not $\log()$, which is undefined at the zero eigenvalue).
Let's evaluate the coefficients in the case of exp()
$$\eqalign{
\b_0 &= e^0 = \o \\
\b_1 &= \frac{e^{i\a}-e^{-i\a}}{2i\a}
= \frac{2i\sin(\a)}{2i\a}
= \frac{\sin(\a)}{\a} \\
\b_2 &= \frac{2-(e^{i\a}+e^{-i\a})}{2\a^2}
= \frac{2-2\cos(\a)}{2\a^2}
= \frac{\o-\cos(\a)}{\a^2} \\
}$$
Therefore
$$
\exp(C) = I + \frac{\sin(\a)}{\a}\;C + \frac{\o-\cos(\a)}{\a^2}\;C^2
$$
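As a final numerical check, this closed form (recognizable as the Rodrigues rotation formula) can be compared against a general-purpose matrix exponential, under the same cross-product-matrix assumption for $C$ as above:
```python
import numpy as np
from scipy.linalg import expm

# Compare the closed form above (Rodrigues' formula) against scipy's
# general-purpose matrix exponential; C is built from an example vector a.
a = np.array([0.3, -1.2, 0.8])
C = np.array([[    0, -a[2],  a[1]],
              [ a[2],     0, -a[0]],
              [-a[1],  a[0],     0]])
alpha = np.linalg.norm(a)

R = (np.eye(3)
     + np.sin(alpha) / alpha * C
     + (1 - np.cos(alpha)) / alpha**2 * (C @ C))
print(np.allclose(R, expm(C)))   # True
```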
This idea of eigenvalue interpolation is associated with the names of Sylvester and Hermite.