
Dear TensorFlow experts,

I am trying to understand the following output of the TensorFlow LeakyReLU function, which seems to have very low precision:

```python
import tensorflow as tf
tf.keras.backend.set_floatx('float64')

import numpy as np
np.set_printoptions(precision=16)

tf.__version__  # 2.15.0 (newest)

alpha = 1./3.
x0 = np.array([np.array([-1., -1.])])
```

```python
# ----- EXPECTED RESULT ------
alpha * x0
```

high-precision output (as expected):

```
array([[-0.3333333333333333, -0.3333333333333333]])
```

```python
# ----- TENSORFLOW RESULT ------
tf.keras.layers.LeakyReLU(alpha=alpha)(x0).numpy()
```

low-precision output (not expected):

```
array([[-0.3333333432674408, -0.3333333432674408]])
```

My questions are:

  1. Why is the numerical precision of the result so low?
  2. Where is the numerical imprecision originating from (in detail)?

Regarding 2), I have already located the leaky_relu source code at https://github.com/tensorflow/tensorflow/blob/8e742eb385332ad4f53b4bbeeeec550e2bcc44a9/tensorflow/python/ops/nn_ops.py#L3670, but it calls a function gen_nn_ops.leaky_relu, which I am somehow unable to locate in the TensorFlow source code.

1 Answer


The alpha parameter is float32: the underlying LeakyRelu op stores alpha as a 32-bit float attribute, so 1/3 gets rounded to float32 precision even when the input tensor is float64. The effect is easy to reproduce in plain NumPy:

```python
import numpy as np

x = np.array([-1, -1], dtype=np.float64)

alpha1 = np.array(1/3, dtype=np.float64)
(x * alpha1).tolist()
# > [-0.3333333333333333, -0.3333333333333333]

alpha2 = np.array(1/3, dtype=np.float32)
(x * alpha2).tolist()
# > [-0.3333333432674408, -0.3333333432674408]
```
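You can verify that the "imprecise" output is exactly 1/3 rounded to float32, and if you need full float64 precision, one possible workaround is to compute the leaky ReLU yourself so alpha never passes through float32. A minimal sketch, assuming plain NumPy and a hypothetical `leaky_relu64` helper (not a TensorFlow API):

```python
import numpy as np

# The observed value is exactly 1/3 rounded to float32 precision:
print(float(np.float32(1.0 / 3.0)))  # 0.3333333432674408

# Hypothetical workaround: a leaky ReLU written directly in NumPy,
# keeping alpha in float64 so no float32 rounding ever occurs.
def leaky_relu64(x, alpha):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x >= 0, x, np.float64(alpha) * x)

print(leaky_relu64([[-1.0, -1.0]], 1.0 / 3.0).tolist())
# [[-0.3333333333333333, -0.3333333333333333]]
```

The same idea carries over to TensorFlow ops (e.g. `tf.where` on float64 tensors), as long as alpha stays a float64 tensor rather than the op's float32 attribute.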

Karl