The use of twice the usual number of bits to represent a number, giving greater arithmetic accuracy.
- My binary math is a little rusty, but that limit seems to correspond to 64-bit double-precision real arithmetic.
- When writing software, this can be achieved by using statistical software or a programming language offering double precision rather than a spreadsheet, and by obtaining error estimates using the methods reviewed in this paper.
- The single-precision type is notorious for its overflow behavior, so most people end up using double even when double precision is not required.
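The accuracy difference the definition describes can be seen directly. The sketch below uses Python's standard `struct` module to round-trip a value through the IEEE 754 single-precision (32-bit) format; Python's native `float` is already double precision (64-bit), so comparing the two shows the error introduced by the narrower type. The helper name `to_single` is illustrative, not from any library.

```python
import struct

def to_single(x: float) -> float:
    """Round-trip x through IEEE 754 single precision (32-bit)."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.1                  # not exactly representable in binary
double = x               # Python floats are 64-bit doubles
single = to_single(x)    # same value squeezed into 32 bits

print(f"double: {double:.17f}")  # accurate to ~15-16 significant digits
print(f"single: {single:.17f}")  # accurate to only ~7 significant digits
```

Doubling the number of bits roughly doubles the number of reliable significant digits, which is why the spreadsheet limit mentioned above disappears when software that works in double precision is used instead.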