I'm trying to use automatic differentiation in Haskell for a nonlinear control problem, but I'm having trouble getting it to work. Essentially, I have a cost function that should be optimized given an initial state. The types are:
{-# LANGUAGE DeriveFunctor #-}

import Numeric.AD (gradientDescent)

data Reference a = Reference a deriving Functor
data Plant a = Plant a deriving Functor

optimize :: (RealFloat a) => Reference a -> Plant a -> [a] -> [[a]]
optimize ref plant initialInputs = gradientDescent (cost ref plant) initialInputs

cost :: (RealFloat a) => Reference a -> Plant a -> [a] -> a
cost = ...
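(The real cost function is more involved; purely as a stand-in, something like the following type-checks on its own and still triggers the same error in optimize, so the exact body shouldn't matter:)

-- Hypothetical placeholder, not my real cost function: treat the plant as a
-- static gain and penalize the squared tracking error of each input.
cost :: (RealFloat a) => Reference a -> Plant a -> [a] -> a
cost (Reference r) (Plant p) inputs = sum [ (p * u - r) ^ (2 :: Int) | u <- inputs ]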
This results in the following error message:
Couldn't match expected type `Reference
                                (Numeric.AD.Internal.Reverse.Reverse s a)'
            with actual type `t'
  because type variable `s' would escape its scope
This (rigid, skolem) type variable is bound by
  a type expected by the context:
    Data.Reflection.Reifies s Numeric.AD.Internal.Reverse.Tape =>
    [Numeric.AD.Internal.Reverse.Reverse s a]
    -> Numeric.AD.Internal.Reverse.Reverse s a
  at test.hs:13:5-50
Relevant bindings include
  initialInputs :: [a] (bound at test.hs:12:20)
  ref :: t (bound at test.hs:12:10)
  optimize :: t -> t1 -> [a] -> [[a]] (bound at test.hs:12:1)
In the first argument of `cost', namely `ref'
In the first argument of `gradientDescent', namely
  `(cost ref plant)'
I'm not even sure I understand the error correctly. Is it that the types of ref and plant would need access to the type variable s, which is only in scope inside the first argument of gradientDescent?
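For context, the type of gradientDescent in the ad package is, as far as I can tell from the Haddock documentation, roughly this (so the objective function has to be polymorphic in s):

gradientDescent :: (Traversable f, Fractional a, Ord a)
                => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a)
                -> f a
                -> [f a]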
Is it possible to make this work? While looking for a solution, I tried reducing the problem to a minimal example and found that the following definition produces a similar error message:
optimize f inputs = gradientDescent f inputs
This seems odd to me, because the eta-reduced version optimize = gradientDescent doesn't produce any error.
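If I'm reading this right, the inferred type of f is monomorphic, while gradientDescent needs it to be polymorphic in s. Indeed, copying gradientDescent's signature onto the eta-expanded version seems to make it compile (a sketch, assuming the ad-4.x module layout):

{-# LANGUAGE RankNTypes #-}

import Data.Reflection (Reifies)
import Numeric.AD (gradientDescent)
import Numeric.AD.Internal.Reverse (Reverse, Tape)

-- Same body as before, but with an explicit higher-rank signature so that
-- the objective function f stays polymorphic in s.
optimize :: (Traversable f, Fractional a, Ord a)
         => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a)
         -> f a
         -> [f a]
optimize f inputs = gradientDescent f inputs

But I don't see how to apply the same trick to my real optimize, since ref and plant are fixed at the outer type a rather than being polymorphic in s.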