I have come across many papers that reference the Jacobian when solving certain finite-difference inverse problems, and I have seen many articles and textbooks that discuss the mathematical properties of the Jacobian in the abstract. I have also seen worked examples of computing the Jacobian when the functions are known analytically.
But, amidst all this, I still have no idea how to sit down at my computer and compute a Jacobian for real data and non-analytic functions.
For example, I have some predicted data vector, $\mathbf{f}$ and some model vector, $\mathbf{m}$.
The Jacobian is defined as:
$$\mathbf{J}_{ij} = \frac{\partial f_i}{\partial m_j} $$
But how can this actually be computed?
Note: I have come across many answers saying that the Jacobian can be computed using finite differences (for example, see this answer here). In that case, writing the predicted data as a vector-valued function of the model, we can say:
$$\mathbf{F}(\mathbf{m}) = \big(f_1(\mathbf{m}),\, f_2(\mathbf{m}),\, f_3(\mathbf{m}),\, \dots\big)^{\mathsf T}$$
and so the Jacobian can be approximated by a forward difference, perturbing one model parameter at a time:
$$\mathbf{J}_{ij} \approx \frac{f_i(\mathbf{m}+\Delta m\,\mathbf{e}_j)-f_i(\mathbf{m})}{\Delta m},$$
where $\mathbf{e}_j$ is the unit vector along the $j$-th model parameter.
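To make the finite-difference recipe concrete, here is a minimal sketch in Python/NumPy. The function name `fd_jacobian` and the step size `dm` are my own choices, and `F` stands in for whatever forward model produces the predicted data vector; this is an illustration of the formula above, not a production implementation (a real one would choose `dm` per-parameter and handle scaling).

```python
import numpy as np

def fd_jacobian(F, m, dm=1e-6):
    """Forward-difference Jacobian of F: R^M -> R^N at the model vector m.

    Costs M + 1 evaluations of F: one baseline run plus one run per
    model parameter -- exactly the expense discussed below.
    """
    m = np.asarray(m, dtype=float)
    f0 = np.asarray(F(m))                  # baseline prediction, shape (N,)
    J = np.empty((f0.size, m.size))
    for j in range(m.size):
        m_pert = m.copy()
        m_pert[j] += dm                    # perturb only the j-th parameter
        J[:, j] = (np.asarray(F(m_pert)) - f0) / dm
    return J

# Toy check against a known analytic Jacobian:
# F(m) = [m0*m1, m0**2] has J = [[m1, m0], [2*m0, 0]].
F = lambda m: np.array([m[0] * m[1], m[0] ** 2])
J = fd_jacobian(F, np.array([2.0, 3.0]))
```

At $\mathbf{m} = (2, 3)$ the result should be close to $[[3, 2], [4, 0]]$, up to the $O(\Delta m)$ truncation error of the forward difference.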
However, this approach requires a full evaluation of $\mathbf{F}(\mathbf{m})$ for every model perturbation, i.e. one evaluation per model parameter. My particular problem involves thousands of predicted points and millions of model parameters, and each function evaluation takes minutes to hours, so this method is not feasible.
Any explanation or resources are appreciated.