I heard Bill talk at Cardiff University about 5 years ago and was an instant convert to interval arithmetic. I agreed then, as I do now, that intervals will play an increasingly important part in numerical computing. This will not be a niche area of computing that you can afford to ignore. Older readers will remember the days when floating-point units were optional and programmers lived in a world of integers; single-point (i.e. non-interval) arithmetic will experience the same paradigm shift.
One minor comment: Bill points out that parallelization can change the order of floating-point operations and, hence, can change the answer. In my experience the result is very sensitive to the order in which you perform the operations; if you doubt this, why not sum a std::vector of a few thousand reals with std::accumulate, then std::reverse it, sum it again and compare the answers? A couple of similar problem areas I can see are:
- 3rd party libraries, where the accuracy of the answer might actually exceed the quoted error bounds (assuming the library is sophisticated enough to offer error bounds at all). Intervals would always deliver this better-than-expected accuracy rather than hiding it behind pessimistic bounds.
- Synthetic programs (as advocated by John Koza), where the inner workings of the algorithm are not well suited to analytical tolerance analysis. Through the “evolutionary” process, these programs could exploit the rounding and underflow inherent in single-point arithmetic to give the false impression of having found the correct answer.
There is much more to say on interval arithmetic, but that will have to wait for another posting.
I haven’t read any of the other Contrarian Minds interviews, but they all look interesting.