I want to compute the exp function under finite precision (the data type is double). Should I use a Taylor series or some other special algorithm?
Generally, the best way to implement e^x is by calling the exp function provided by your computing platform. Failing this, implementing the exp function is complicated and requires several esoteric skills. An implementation typically involves:
- Testing the input for various special cases, such as NaNs.
- Multiplying the input by a specially prepared representation of log2(e), to transform the problem from e^x to 2^y, where y = x * log2(e).
- Moving the integer part of y into the exponent field of a floating-point encoding.
- Evaluating the exponential of the fractional part of y with a minimax polynomial.
- Combining the two results above.
The minimax polynomial is engineered, often with special software, using the Remez algorithm or something similar. The work must be done with some extended precision so that the final result can be calculated precisely.
Taylor series are inappropriate for evaluating functions in floating-point because they are inaccurate away from their center points, so they take too many terms to converge to the necessary precision. Having too many terms not only takes time but also makes it difficult to do the arithmetic accurately.