
In an epsilon-delta proof, one chooses an epsilon as a tolerance on the error in the value of the function, then uses it to specify a delta bounding the error in the argument of the function. But why not flip the script: require that whenever the error in the argument is within delta, the error in the value stays within some epsilon? That is, instead of making delta depend on epsilon, why not make epsilon depend on delta?
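
To make the question concrete, here is my own attempt at formalizing the two versions (not taken from a textbook, so please correct me if I have mangled it). The standard definition makes delta depend on epsilon:

$$\forall \varepsilon > 0 \;\exists \delta > 0 :\quad 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon,$$

whereas the reversal I am asking about would make epsilon depend on delta:

$$\forall \delta > 0 \;\exists \varepsilon > 0 :\quad 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$$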
