
Yesterday I learned about the strange Fabius function $f$ in this question. Given my interest in neural networks and the fact that this function has a distinct sigmoid shape, I became curious about how to calculate this function.

Three formulas are given on the Wikipedia page for the function:

  1. One concerning its Fourier transform: $$ \mathcal F(f(x))(z)=\hat f (z) = \prod_{m=1}^{\infty} \left(\cos\left(\frac{\pi z}{2^m}\right)\right)^m$$

  2. A probabilistic characterization (used for the brute-force sketch after this list): $f$ equals the cumulative distribution function of

$$\sum_{n=1}^{\infty}2^{-n}\xi_n, \text{ where the } \xi_n \sim U(0,1) \text{ are independent}$$

  3. A functional-differential equation that it satisfies:

$$f'(x) = 2f(2x)$$


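For concreteness, formula 2 can be turned directly into a brute-force Monte Carlo baseline: sample the (truncated) random series many times and take the empirical CDF at the query points. The sketch below assumes Python/NumPy; the name `fabius_cdf_estimate` and all parameter values are made up for illustration, and this is certainly not the efficient scheme I am hoping for.

```python
import numpy as np

def fabius_cdf_estimate(xs, n_samples=200_000, n_terms=30, seed=0):
    """Monte Carlo estimate of f at the points xs, via formula 2:
    f is the CDF of sum_{n>=1} 2^(-n) * xi_n with xi_n i.i.d. uniform on [0, 1].
    The series is truncated after n_terms; the neglected tail is below 2^(-n_terms)."""
    rng = np.random.default_rng(seed)
    weights = 2.0 ** -np.arange(1, n_terms + 1)           # 1/2, 1/4, 1/8, ...
    samples = rng.random((n_samples, n_terms)) @ weights  # draws of the truncated series
    samples.sort()
    # Empirical CDF: fraction of samples <= x for each query point x.
    return np.searchsorted(samples, xs, side="right") / n_samples
```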
So now to the question: given these equations, or whatever can be derived from them, what would be a practical approach to computing values of this function on an equally spaced grid?

In other words, to estimate $$f(\Delta_t n), \quad \text{where} \quad \begin{cases} n \in \{0,\cdots ,2^{m}-1\} \\ \Delta_t = 2^{-m} \end{cases}$$

Bonus points for also considering the overall computational resources required for the calculation.
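For reference, the brute-force baseline sketched after the formula list would be applied to this grid roughly as follows (again just an illustrative snippet, reusing `numpy` and the hypothetical `fabius_cdf_estimate` from above). Its cost is on the order of $n_\text{samples}\cdot n_\text{terms}$ operations plus a sort, and the statistical error only shrinks like $1/\sqrt{n_\text{samples}}$, which is exactly why I am asking for something better.

```python
m = 6
dt = 2.0 ** -m                      # Delta_t = 2^-m
grid = dt * np.arange(2 ** m)       # n = 0, ..., 2^m - 1
values = fabius_cdf_estimate(grid)  # estimates of f(Delta_t * n), up to Monte Carlo error
```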

mathreadler
  • Just pointing out, the Fabius function is not a sigmoid function because it's not monotonically increasing. And I'm not sure you could use it for neural networks, because it's slower to compute than other traditionally used functions. But this is a good question nonetheless. – Miksu Oct 31 '17 at 14:53
  • @Miksu : It seems monotonically increasing on $[0,1[$, which is the interval I consider in the question, though I haven't proved it. That could be a follow-up question. I don't think it should be too hard if one starts with the functional equation involving the derivative. – mathreadler Oct 31 '17 at 15:54
  • Yeah, it should be increasing on $[0, 1[$, but it's not increasing on the whole real line, so it's not a sigmoid function. And that's also one reason why you can't use it in neural networks: neural networks need sigmoid functions that are monotonically increasing on the whole real line... at least in theory... – Miksu Oct 31 '17 at 16:05
  • @Miksu : That is not true: if you know, for example, a limit on the dynamic range of your inputs (which you do for many applications), it hardly requires the whole real line to work. Also, if it did, that is easy to fix with some remapping. Ok, it is a bit too short to explain in a comment, but I can make a new question about it, ok? – mathreadler Oct 31 '17 at 16:16
  • No, your question is very good and I did understand your comment. Maybe I don't know enough about neural nets so let's just leave it here :D – Miksu Oct 31 '17 at 18:55
  • It is a very good question. A good exercise for anyone interested in neural nets to try and show. – mathreadler Oct 31 '17 at 18:56
  • I can imagine that – Miksu Oct 31 '17 at 19:06
  • See https://mathematica.stackexchange.com/questions/120331/how-do-i-numerically-evaluate-and-plot-the-fabius-function, https://arxiv.org/abs/1702.05442, https://arxiv.org/abs/1702.06487, https://arxiv.org/abs/1609.07999, https://www.pdf-archive.com/2017/02/20/rvachev/rvachev.pdf, https://www.facebook.com/groups/afunctions – Vladimir Reshetnikov Apr 02 '18 at 20:24
  • See also: https://math.stackexchange.com/questions/240687/recursive-integration-over-piecewise-polynomials-closed-form, https://math.stackexchange.com/questions/218832/how-to-compute-the-values-of-this-function-fabius-function – Vladimir Reshetnikov Apr 02 '18 at 20:30
  • Wow, that's a lot, @VladimirReshetnikov. Thank you, I will check it out tomorrow. – mathreadler Apr 02 '18 at 20:51

1 Answer


This article answers your question: Jan Kristian Haugland, "Evaluating the Fabius function", https://arxiv.org/pdf/1609.07999.pdf