Fix-up coefficients and smallest denormal value for ieeeDouble in exp() #3514
MartinNowak merged 1 commit into dlang:stable
Conversation
Note, though, this doesn't actually fix the bug. The problem is with the test itself.
Can you make a pull request to fix the test, then?
According to @kinke, the test works on LDC+Win64 and when using LLVM intrinsics. However, I get an "off by one" precision difference when using this function with GDC+Linux (-mlong-double-64), when using glibc, and when using GCC builtins. I think the conclusive fix might be to ensure that we are accurate only up to mantissa bits - 1. This would mean that all tests would be affected.
I understand too little of the topic to make that decision and lack the time to go into the details. Can we merge this PR already, since it seems to fix some constants, and follow up on the test failure afterwards?
I don't know. When I run the test using GDC to build, I get the same result back as when using GCC to call the libm function. @kinke used LDC and LLVM and got back a different result (albeit on a different platform). It might be purely down to a conflict between compiler backends/C runtimes.
Issue 14732 – [2.068 beta] Failing unittest in std.math