I'm not an expert in code obfuscation, but two things jump out at me in reading that article:
They don't seem to have a definition for what "securely obfuscate" means. What they do have is abstract and built on top of assumptions that probably won't hold in the real world. [*]
Nowhere in that article is anybody proposing a solution. It reads more like "if a secure obfuscator ever exists, it might have some of these properties".
Neither that article nor the source paper by Amit Sahai et al. addresses the issue that for a program to be executable, it must contain everything it needs to de-obfuscate itself into machine code, which means that with enough effort, reverse engineers can do the same. So the whole idea of un-reverse-engineerable-yet-executable code seems a bit questionable. [**]
These points put this research in the early phases of mathematical dreamland; I wouldn't expect to see it in Haskell or Lisp any time soon.
Matthew Green's section starting with "We can obfuscate some things!" basically says that it's possible to architect code such that the secrets can't be retrieved: simply don't put the secrets in the code, put hashes of the secrets instead, or some other trick.
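To make that trick concrete, here's a minimal sketch (in Python, using only the standard library; the salt, password, and function names are illustrative, not from the article). The shipped program stores only a salted hash; a reverse engineer who reads the binary recovers the digest, but not the secret itself:

```python
import hashlib

# In real use, only SALT and EXPECTED_DIGEST ship with the program.
# The plaintext secret never appears in the distributed code; we compute
# the digest here only so the example is self-contained and runnable.
SALT = b"example-salt"  # would be generated randomly, once, offline
EXPECTED_DIGEST = hashlib.sha256(SALT + b"correct horse").hexdigest()

def check_secret(attempt: bytes) -> bool:
    """Hash the attempt with the same salt and compare digests."""
    return hashlib.sha256(SALT + attempt).hexdigest() == EXPECTED_DIGEST
```

An attacker can still brute-force guesses against the digest, so this protects only secrets with enough entropy; that's exactly the sense in which Green says we can obfuscate *some* things.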
If I paraphrase your question as:
Is there a compiler that will take badly designed code and make it secure?
No, no there is not. And nobody is suggesting that there will be.
Expanding on my thoughts:
- [*] The article by Matthew Green defines the problem as:
Alice should not get any more information from seeing the obfuscated program
but quickly admits that:
One thing you may have noticed is that the user Alice who views the obfuscated program truly does learn something more ... the user who got the obfuscated program still has a copy of the obfuscated program.
Similarly, in the abstract of the paper by Amit Sahai et al., they state:
In this work, we initiate a theoretical investigation of obfuscation. Our main result is that,
even under very weak formalizations of the above intuition, obfuscation is impossible.
- [**] It's possible that under some strict conditions we could obtain this un-reverse-engineerability that we want, for example if you ship your software inside a FIPS 140-2 Hardware Crypto Module which will self-destruct if it detects tampering, or if operating systems / motherboards start offering hardware obfuscation support (I have no idea what this would look like, but I won't rule it out). But the idea of a Haskell compiler that will allow your source code to execute but stay obfuscated against an arbitrary attacker on arbitrary hardware seems unlikely.