Dear TensorFlow experts,
I am trying to understand the following output of the TensorFlow LeakyReLU function, which seems to have very low precision:
import tensorflow as tf
import numpy as np

tf.keras.backend.set_floatx('float64')
np.set_printoptions(precision=16)

print(tf.__version__)  # 2.15.0 (newest)

alpha = 1. / 3.
x0 = np.array([[-1., -1.]])  # float64 input of shape (1, 2)
#----- EXPECTED RESULT ------
alpha * x0
Plain NumPy gives the full float64 precision I expect:
array([[-0.3333333333333333, -0.3333333333333333]])
#----- TENSORFLOW RESULT ------
tf.keras.layers.LeakyReLU(alpha=alpha)(x0).numpy()
The TensorFlow result, however, is accurate to only about 8 significant digits, which I did not expect:
array([[-0.3333333432674408, -0.3333333432674408]])
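
One observation (my own guess, not verified against the source): the TensorFlow output matches exactly what I get if alpha is first rounded to float32 and then used in the float64 multiplication, so perhaps alpha is stored or cast as float32 somewhere inside the layer:

# Hypothesis (unverified): alpha gets rounded to float32 inside LeakyReLU.
alpha32 = np.float64(np.float32(alpha))  # 1/3 rounded to float32, widened back to float64
print(alpha32 * x0)
# array([[-0.3333333432674408, -0.3333333432674408]])  <- identical to the TF output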
My questions are:
1. Why is the numerical precision of the result so low?
2. Where exactly (in the implementation) does the imprecision originate?
Regarding 2):
I have already located the leaky_relu source code at https://github.com/tensorflow/tensorflow/blob/8e742eb385332ad4f53b4bbeeeec550e2bcc44a9/tensorflow/python/ops/nn_ops.py#L3670, but it calls a function gen_nn_ops.leaky_relu, which I have been unable to locate anywhere in the TensorFlow source tree.
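
For what it's worth, the gen_nn_ops module can be imported directly, so one thing I tried was printing its file location and the docstring of the op wrapper (this only shows where the installed module lives, not the underlying implementation):

# Locate the gen_nn_ops module in the installed package:
from tensorflow.python.ops import gen_nn_ops
print(gen_nn_ops.__file__)            # path of the module that defines leaky_relu
print(gen_nn_ops.leaky_relu.__doc__)  # docstring of the generated op wrapper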