improve expt on a flonum to a large exact power #950
Conversation
I think you're mixing up the terms accuracy and precision in your comment. It's been a long time since my numerical analysis class, but I'm pretty sure the error-propagation rules say that there are no significant digits left after using 2^56 as an exponent on a value that only has 53 bits of precision to start with. I agree that we'd like to be as accurate as is feasible, but this whole issue is poorly motivated. Flonums are, by definition, inexact values, and the computation being performed on them magnifies the "inexactness" (i.e., the range of real values that might be represented by the inexact number) until it's larger than the result. The operation simply doesn't make sense.

On the other hand, if we want to pretend that we're doing an operation on the value represented exactly by the IEEE double-precision floating-point number, then the test needs to use a number with an exact floating-point representation. The difference in the mathematical result of

As an aside, I suspect that the difference you're seeing in the C library implementations may have something to do with whether the underlying hardware (or VM, in the case of GitHub CI) supports 80-bit extended precision (which I think is an Intel-only thing).
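The error-propagation arithmetic behind this claim can be checked directly. Here is a minimal sketch in Python (whose floats are IEEE doubles), using the 53-bit significand and the 2^56 exponent mentioned above:

```python
# A double's significand carries 53 bits, i.e. a relative precision of
# about 2^-53.  Raising x to the n-th power multiplies the relative
# error by roughly n, so with n = 2^56 the accumulated relative error is
#     2^56 * 2^-53 = 8
# which exceeds 1: no significant digits survive.
ulp_rel = 2.0 ** -53     # relative precision of a double
n = 2 ** 56              # exponent used in the test under discussion
magnified = n * ulp_rel  # first-order error magnification
print(magnified)         # 8.0
```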
Thanks — I tried to correct the comment and release note.
I will have to defer to @gambiteer's expertise on this question. I've just learned (over many years) to take whatever improvements @gambiteer has been kind enough to offer. :)
Probably my lack of expertise here again, but I'm not clear on what you're saying. I could write the test as
The current algorithm used for

The latter algorithm is not very accurate for large

The second answer has only 7 digits of accuracy; it's smaller than the first answer, when it should be bigger. The correctly rounded answer, computed with my computable-reals package, is

and in Gambit, which uses the proposed algorithm, one has

i.e., you get the correctly rounded result (which doesn't always happen, of course). The current Chez code gives the correctly rounded result for
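To illustrate what "correctly rounded" means here, one can exponentiate the exact rational value stored in the flonum and round once at the end. This Python sketch is only an illustration of the target semantics, not the proposed algorithm itself (which presumably avoids huge exact intermediates); `x` and `n` are arbitrary stand-ins, not the values from the report:

```python
from fractions import Fraction
import math

x = 1.7   # the stored double is some dyadic rational near 1.7
n = 1000  # a moderate exponent keeps the exact intermediate manageable

exact = Fraction(x) ** n     # exact power of the value the double holds
reference = float(exact)     # one correctly rounded conversion back to double
via_pow = math.pow(x, n)     # C library pow; last-bit accuracy varies by platform

print(reference == via_pow)  # may be False on a sloppy libm
```

In CPython, converting a `Fraction` to `float` performs a single correctly rounded integer division, so `reference` is the correctly rounded double for the stored input, independent of the platform's `pow`.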
I think I confused myself into thinking that the particular number in the test was something other than an arbitrary number close enough to 1.0 that the result isn't +inf. (Specifically, I was thinking that it was supposed to be 1+ε, but it isn't.)

But something is still nagging at me about this test. I think maybe it's just a bunch of "magic" numbers with no indication of why we expect those to be the correct values. They definitely aren't the mathematically correct values (i.e., if the input were exact and the result computed with infinite precision), not least because the input has no exact flonum representation, and the rounding error there will compound until it's larger than the result. Perhaps that concern can be "solved" with a short comment like "these values are computed by decomposing the calculation like blah and computing in 80-bit extended precision" or similar.

This also illustrates my other concern, which has very little to do with the accuracy of the computation. The numbers involved in this example are so large that any difference between the exact value stored in the flonum and the mathematical quantity the user thought they were operating on will be magnified so much that the computation may well be nonsense. But I suppose it isn't our job to make sure the user understands error propagation and what that means in terms of what conclusions they can draw from the result of one of these calculations.

In summary, I agree that this is an improvement, and we absolutely should accept it, but I am concerned that it may give users an unwarranted idea of how much they can rely on the results of these calculations with huge exponents unless they are very sure about the inputs.
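The input-representation concern can be made concrete with a short Python sketch; the decimal constant below is a made-up stand-in for the test's input, not the actual value:

```python
from fractions import Fraction

x = 1.000000000000001                    # hypothetical decimal input
stored = Fraction(x)                     # exact value the double actually holds
intended = Fraction(10**15 + 1, 10**15)  # value the user meant to write
rel_err = abs(stored - intended) / intended

# The representation error is at most half an ulp (~2^-53).  Raising to
# the 2^56-th power magnifies it by about 2^56, so the compounded
# relative error can approach 2^56 * 2^-53 = 8: larger than the
# result's entire precision.
print(float(rel_err), float(rel_err) * 2 ** 56)
```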
Advice and implementation provided by @gambiteer.
Makes sense. I've added a comment with the explanation as I understand it so far:
Thanks, @gambiteer and @jltaylor-us!
Advice and implementation provided by @gambiteer via racket/racket#5228.
I'm not sure about my approach to testing here. In principle, `expt` should produce an accurate result, but the result relies on `pow` from the C library, and the quality of that function varies. On GitHub CI, even `a6osx` doesn't produce an accurate result, even though it does on machines that I have access to — at least, I get a precise result with the changes in `prim5.c`. Overall, maybe there is a better approach.
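One possible way to make such a test independent of the platform's `pow` (a sketch only; whether it fits the existing test suite is another question) is to compare against a correctly rounded reference computed with exact rational arithmetic, keeping the exponent modest so the exact intermediate stays small:

```python
from fractions import Fraction
import math

def correctly_rounded_expt(x: float, n: int) -> float:
    """Reference result: the exact rational power of the stored value,
    rounded to double once at the end.  (CPython's float(Fraction) uses a
    correctly rounded integer division.)"""
    return float(Fraction(x) ** n)

# Compare the platform's pow against the reference:
x, n = 1.0000001, 4096
ref = correctly_rounded_expt(x, n)
got = math.pow(x, n)
print(got == ref or abs(got - ref) / ref)  # True, or else the relative error
```

This sidesteps libm entirely, at the cost of being impractical for exponents anywhere near 2^56.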