On the Standard Rounding Rule for Multiplication and Division

Christopher L. Mulliss1 and Wei Lee2
1Department of Physics and Astronomy, University of Toledo,
Toledo, Ohio 43606, U.S.A.
2Department of Physics, Chung Yuan Christian University,
Chung-Li, Taiwan 320, R.O.C.
(Received February 23, 1998)

A detailed study of the standard rounding rule for multiplication and division is presented including its derivation from a basic assumption. Through Monte-Carlo simulations, it is shown that this rule predicts the minimum number of significant digits needed to preserve precision only 46.4% of the time and leads to a loss in precision 53.5% of the time. An alternate rule is studied and is found to be significantly more accurate than the standard rule and completely safe for data, never leading to a loss in precision. It is suggested that this alternate rule be adopted as the new standard.

PACS. 01.30.Pp – Textbooks for undergraduates.
PACS. 01.55.+b – General physics.


I. Introduction

    Introductory physics textbooks often describe a standard rounding rule for multiplication and division. This rule states that the proper number of significant figures in the result of a multiplication or division is the same as the smallest number of significant figures in any of the numbers used in the calculation. Over the years, college students and teachers of physics have come to rely on this rule. In a recent publication, Good [1] discusses a simple division problem where the application of the standard rounding rule leads to a loss of precision in the result. Even more disturbing is the long list, given in his note, of popular first-year university physics textbooks that advocate the standard rule without warning their readers that data may be jeopardized because valuable information can be lost. The fact that rounding rules can lead to such a loss is due to their approximate nature and has been well documented by Schwartz [2]. Some researchers feel that the approximate nature of significant figures and rounding rules precludes the need for a detailed investigation of their effects on error propagation [3]. Other researchers, including the authors, however, recognize the importance of rounding rules as common and convenient (even if approximate) tools in error analysis [1, 4], especially for students of introductory physics and chemistry. The purpose of this work is, therefore, to investigate the standard rounding rule for multiplication and division in detail and to use the results to show that an equally simple rounding rule exists which is more accurate than the standard rule and always preserves precision.
    In this paper, the theoretical basis for the standard rounding rule (from a fundamental assumption) is presented. The standard rounding rule, as applied to the simple multiplication and division problems x = y · z and x = y / z, is then considered. A statistical test is applied to quantify the accuracy of the standard rule, determining the percentage of multiplication and division problems that fall into the following three categories: those where the true uncertainty is as large as or larger than that predicted by the standard rule but of the same order of magnitude, those where the true uncertainty is less than predicted, and those where the true uncertainty is an order of magnitude larger than predicted. In the first case, the standard rule is said to "work" because it predicts the minimum number of significant digits that can be written down without losing precision, and therefore valuable information, in the result. In the second and third cases the standard rule clearly "fails," predicting fewer or more significant figures than are needed and, therefore, losing or overstating precision.
    The same analysis is then applied to the often suggested alternate rule advocating the use of an additional significant figure over that required by the standard rule. It is shown that it is not possible to obtain an a priori rule that always works because the proper number of significant figures depends critically on the result of the calculation. The alternate rule is, however, shown to be significantly more accurate than the standard rule for multiplication and nearly as accurate as the standard rule for division. Most important, the alternate rule is shown never to lead to a loss of precision.

II. Simple derivation of the standard rounding rule

    The standard rounding rule for the multiplication and division of two numbers can be inferred from one very simple assumption. This assumption states that the precision (percentage error) of a number is approximately related to the number of significant figures in that number [5]. The fundamental principles that lead to this derivation are discussed in earlier literature, but not explicitly developed in a rigorous mathematical manner [6]. Written in mathematical form, this assumption expresses the precision in a number x with Nx significant figures in the form:

    Precision (x) ≈ 10^(2 – Nx) %.                                                                                                   (1)

As an illustration of this relationship, consider the number 52.37 written to 1, 2, 3, and 4 significant digits. Following Bevington and Robinson [7], the absolute error in this number is taken to be ± ½ in the least significant decimal place. See Table I, where the corresponding absolute error and precision are also included.
    While Eq. (1) is only an approximate relationship, it often gives the correct order of magnitude for the precision. In reality, the precision is given by the following modified form of Eq. (1):

    Precision (x) = Cx · 10^(2 – Nx) %,                                                                                           (2)

where Cx is a constant that can range from approximately 0.5 to exactly 5 depending on the actual value of the number x. The value of Cx is almost entirely determined by the value of the first (and the second, if present) significant digit in the number x; for example, if x equals 10, 5.0, or 90.0 then the constant Cx equals 5, 1.0, and approximately 0.556, respectively. The effects of the leading constant in Eq. (2) will be critical in understanding why the standard rounding rule sometimes fails.
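    As a short illustration of Eqs. (1) and (2) (our own sketch, not code from the paper), the snippet below computes the absolute error, the precision, and the leading constant implied by writing a value with a given number of significant figures; applied to the representations of 52.37 used in Table I, it reproduces that table to the accuracy quoted there.

    import math

    def implied_precision(value, n_sig):
        # +/- 1/2 in the last kept digit, the corresponding precision in percent,
        # and the leading constant C of Eq. (2)
        abs_err = 0.5 * 10.0 ** (math.floor(math.log10(abs(value))) - (n_sig - 1))
        precision = 100.0 * abs_err / abs(value)
        return abs_err, precision, precision / 10.0 ** (2 - n_sig)

    # the four representations of 52.37 listed in Table I
    for value, n_sig in ((50, 1), (52, 2), (52.4, 3), (52.37, 4)):
        print(n_sig, implied_precision(value, n_sig))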
    For the derivation of the standard rounding rule, consider the simple division problem x = y / z. Through differentiation, one can easily show that the uncertainty in the ratio x is described by dx / x = dy / y – dz / z. The errors dy and dz can have the same sign or opposite signs. In the simplest approximation, it is customary to quote the maximum uncertainty, leading to

    max(dx / x) = abs(dy / y) + abs(dz / z),                                                                                  (3)
 

TABLE I.    An illustration of the relationship between the number of significant figures and the
                    precision of a quantity.
 

Number of Significant Figures    Number    Absolute Error    Precision (%)
1                                50        ± 5               10
2                                52        ± 0.5             1
3                                52.4      ± 0.05            0.1
4                                52.37     ± 0.005           0.01

or

    Precision (x) = Precision (y) + Precision (z).                                                                            (4)

Substituting the approximate relationship Eq. (1) into Eq. (4) yields

    10^(–Nx) = 10^(–Ny) + 10^(–Nz).                                                                                                         (5)

Differentiation of a simple multiplication problem x = y · z leads directly to the same relationships as those given for division in Eqs. (3), (4), and (5).
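    As a worked version of these steps (our own shorthand, not taken from the paper): for the product x = y · z,

    \[
        dx = z \, dy + y \, dz
        \quad \Longrightarrow \quad
        \frac{dx}{x} = \frac{dy}{y} + \frac{dz}{z} ,
    \]

so taking the magnitudes of the two contributions reproduces Eq. (3), and substituting Eq. (1) into Eq. (4) and cancelling the common factor of 10^2 % reproduces Eq. (5).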
    At this point in the derivation, two separate cases must be considered: the case where y and z do not have the same number of significant figures and the case where they do.

Case 1. (Ny ≠ Nz) When Ny does not equal Nz, they must be different by at least 1. Even when they differ by this minimum amount, the term in Eq. (5) involving the smaller value of Ny and Nz is 10 times larger than the other term and thus completely dominates. One is, therefore, justified in replacing Eq. (5) by

    10^(–Nx) = 10^(–min(Ny, Nz)),                                                                                                          (6)

implying that Nx = min(Ny, Nz). This is, clearly, a statement of the standard rounding rule.

Case 2. (Ny = Nz) When Ny equals Nz, both terms in Eq. (5) become equally important. Under this condition, Eq. (5) reduces to

    10^(–Nx) = 2 · 10^(–N) = 10^(–N + log(2)),                                                                                          (7)

where N = Ny = Nz. One can see that Nx is the integer obtained by rounding N – log(2) to the nearest whole number. Because log(2) is smaller than 0.5, the rounding can never bring Nx down to N – 1. Thus Nx = N = min(Ny, Nz), which is, again, the standard rounding rule.

III. A statistical study of the standard rule

III-1. The method
    To investigate the statistical properties of the standard rounding rule, a Monte-Carlo procedure was used. A computer code was written in Version 4.0.1 of Research Systems Inc.'s Interactive Data Language (IDL). The code uses IDL's uniform, floating-point, random number generator [8] to create two numbers. Each of these numbers has a randomly determined number of significant figures ranging from 1 to 5 and a randomly determined number of places to the left of the decimal point ranging from 0 to 5. Each digit in these numbers is randomly assigned a value from 0 to 9, except for the leading digit, which is randomly assigned a value from 1 to 9. Each resulting number can range from the smallest and least precise value of 0.1 to the largest and most precise value of 99999. The program calculates the product or ratio of the two generated numbers and determines the number of significant figures that should be kept according to the standard rule. It then uses Eqs. (2) and (4) to compute the true precision and converts it into a true absolute error. The absolute error in the product or ratio, as predicted by the standard rule, is taken to be ± ½ in the least significant decimal place [7]. The true absolute error and the value predicted by the standard rule are compared, and the multiplication or division problem is assigned to one of the three categories described previously. The program repeats the calculation for one million multiplications or divisions and computes statistics.
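    The short Python sketch below mirrors the procedure just described. It is our own reconstruction for illustration only (the original code was written in IDL); the function names, the use of Python's built-in random module, and the classification thresholds are our assumptions, and it should yield percentages comparable to those in Table II.

    import math
    import random

    def random_quantity():
        # 1 to 5 significant figures, 0 to 5 digits to the left of the decimal
        # point, leading digit 1-9, remaining digits 0-9 (as described above)
        n_sig = random.randint(1, 5)
        digits = [random.randint(1, 9)] + [random.randint(0, 9) for _ in range(n_sig - 1)]
        n_left = random.randint(0, 5)
        value = int("".join(map(str, digits))) * 10.0 ** (n_left - n_sig)
        return value, n_sig

    def half_last_place(value, n_sig):
        # absolute error of +/- 1/2 in the least significant digit [7]
        return 0.5 * 10.0 ** (math.floor(math.log10(abs(value))) - (n_sig - 1))

    def classify(y, ny, z, nz, divide):
        x = y / z if divide else y * z
        predicted = half_last_place(x, min(ny, nz))                             # standard rule
        true_error = (half_last_place(y, ny) / y + half_last_place(z, nz) / z) * x   # Eq. (4)
        if true_error < predicted:
            return "1 more digit needed"         # precision is lost
        if true_error >= 10.0 * predicted:
            return "1 digit too many"            # precision is overstated
        return "worked"

    trials = 100_000            # the study used one million trials per operation
    for divide in (False, True):
        counts = {"worked": 0, "1 more digit needed": 0, "1 digit too many": 0}
        for _ in range(trials):
            (y, ny), (z, nz) = random_quantity(), random_quantity()
            counts[classify(y, ny, z, nz, divide)] += 1
        print("division" if divide else "multiplication",
              {k: round(100.0 * v / trials, 1) for k, v in counts.items()})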

III-2. Properties of the standard rule
    Table II shows the results for the standard rule as applied to simple multiplication and division problems. Clearly the standard rule is not accurate. On average, the application of the standard rule works only 46.4% of the time. For simple multiplication, the standard rule was found to preserve precision only 38.0% of the time. This is consistent with the work of Schwartz [2], who estimated that the standard rule predicts the correct number of significant figures (to an order of magnitude) 37% to 63.3% of the time, based on a small grid of multiplications of the form x = y^n where n is an integer. When the standard rule fails, it has an overwhelming tendency to predict one less significant digit than is needed and, therefore, to lead to a loss of precision. Despite this, many researchers still incorrectly claim that a product or quotient cannot have more significant figures than the smallest number of significant figures in any of the numbers used in the calculation [9]. The results also show that it is possible for the standard rule to predict one more significant digit than is needed, a possibility that has been shown by Schwartz [2] for some multiplications of the form x = y^n. While the fact that the standard rule can fail is well established [1, 2], this is the first work known to the authors to quantify the success rate of the standard rounding rule for the general case of simple multiplication and division.
    To illustrate the application of the standard rounding rule, examples taken from the output of the Monte-Carlo program are displayed in Table III. For multiplication and division problems, one example is given for each of the three categories described earlier.

III-3. Why the standard rule fails
    To see why the rounding rule fails, one must find the true precision that results from a multiplication or division problem. To do this, one must substitute Eq. (2) into Eq. (4). Let N = min(Ny, Nz) and N' = max(Ny, Nz). Let C and C' be the constants from Eq. (2) that correspond to the numbers (y or z) with N and N', respectively.
 

TABLE II.    The statistical results of the application of the standard rounding rule to simple
                    multiplication and division problems. It shows the statistical likelihood that the
                    application of the standard rounding rule will fall into each of the three categories
                    described in the text.
 

 
Category               Multiplication    Division    Average
Worked                 38.0%             54.8%       46.4%
1 More Digit Needed    61.9%             45.1%       53.5%
1 Digit Too Many       0.01%             0.09%       0.05%

If Ny = Nz, then N = Ny = Nz and one can arbitrarily take C to be Cy and C' to be Cz. One can show that this substitution yields

    Nx = N + { log(Cx) – log[C + C' · 10^(–(N' – N))] }.                                                                  (8)

The two bracketed terms in the above equation determine whether the standard rule will work for any given problem. If one insists that the "correct" number of significant digits be the minimum number needed to preserve precision, then Eq. (8) must be evaluated in such a way that Nx = N – 1 when –2 < SUM ≤ –1, Nx = N when –1 < SUM ≤ 0, and Nx = N + 1 when 0 < SUM ≤ 1, where SUM is the sum of the bracketed terms in Eq. (8).
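    As a concrete illustration of this prescription, the following sketch (our own, not code from the paper; the helper names are assumptions) evaluates Eq. (8) and bins SUM as described above.

    import math

    def leading_constant(v):
        # C in Eq. (2); with an error of +/- 1/2 in the last kept digit this
        # reduces to 5 * 10^floor(log10 |v|) / |v|, independent of the digits kept
        return 5.0 * 10.0 ** math.floor(math.log10(abs(v))) / abs(v)

    def minimum_sig_figs(y, ny, z, nz, divide=False):
        # evaluate Eq. (8); the ceiling implements the binning of SUM given above
        x = y / z if divide else y * z
        N, Nprime = min(ny, nz), max(ny, nz)
        C, Cprime = ((leading_constant(y), leading_constant(z)) if ny <= nz
                     else (leading_constant(z), leading_constant(y)))
        SUM = (math.log10(leading_constant(x))
               - math.log10(C + Cprime * 10.0 ** -(Nprime - N)))
        return N + math.ceil(SUM)                # N - 1, N, or N + 1

    # example from Table III: 0.86 * 0.2326 needs one digit more than the
    # two significant figures given by the standard rule
    print(minimum_sig_figs(0.86, 2, 0.2326, 4))  # -> 3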
    A careful examination of Eq. (8) provides much insight into the standard rounding rule. There are several important points that must be emphasized. Notice that log(Cx) appears in Eq. (8). This fact alone proves that there is no a priori rule that can be used to accurately predict the correct number of significant digits to be used in all cases. In order to know how many significant digits should be used, one must first know log(Cx) and, therefore, the result of the calculation.
    The presence of log(Cx) in Eq. (8) also explains why the standard rule behaves differently for multiplication and division problems. The reason lies in the different relationship between Cx, C, and C' for the two types of problems. As an example, consider the multiplication and division of the numbers y = 2.7 and z = 2.6. The results are approximately 7.0 and 1.0, respectively. In both problems the values of C and C' are approximately 1.9, but the value of Cx is approximately 0.7 for the multiplication problem and 5.0 for the division problem. Thus the multiplication and division of the same two numbers can lead to a very different value for Cx which can, as a result, affect the evaluation of Eq. (8). There is a second difference between multiplication and division dealing with order. In the division of two numbers, switching the numbers can lead to very different results. These different results can also lead to very different values for Cx and, therefore, to different evaluations of Eq. (8). Obviously, this complication does not occur in multiplication problems.
    Eq. (8) also shows that the number of significant digits predicted by the standard rounding rule can never be more than one digit away from the correct value. This fact can be seen by studying the bracketed terms in Eq. (8). Table IV shows the minimum and maximum values of the sum of the bracketed terms in Eq. (8) as a function of N' – N. When Eq. (8) is evaluated in the manner previously described, the only possible values for Nx are N – 1, N, or N + 1. Thus, the standard rounding rule is never "wrong" by more than one significant digit.
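    The extrema quoted in Table IV can be verified numerically. The sketch below (our own check, not from the paper) evaluates the bracketed sum at the corner values Cx, C, C' in {0.5, 5}, the range quoted below Eq. (2); because the sum is monotonic in each constant, these corners give its minimum and maximum.

    import math

    for gap in range(5):                          # gap = N' - N
        corners = [math.log10(cx) - math.log10(c + cp * 10.0 ** -gap)
                   for cx in (0.5, 5.0)
                   for c in (0.5, 5.0)
                   for cp in (0.5, 5.0)]
        print(gap, round(min(corners), 3), round(max(corners), 3))
    # reproduces the rows of Table IV: (0, -1.301, +0.699), (1, -1.041, +0.959), ...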
 

TABLE III.    Comparison of standard rounding rule predictions (xSR) with true results (xtrue).  For
                    simple multiplication and division, one example problem is chosen randomly from the
                    output of the Monte-Carlo program for each of the categories described in the text.
                    The true result of each problem is obtained by assuming an uncertainty of ± ½ in the
                    right-most digit and then propagating the errors.
 

Multiplication (x = y · z)

Category               y · z            xSR              xtrue
Worked                 0.5 · 0.1427     0.07 ± 0.005     0.07 ± 0.007
1 More Digit Needed    0.86 · 0.2326    0.20 ± 0.005     0.200 ± 0.001
1 Digit Too Many       8.92 · 1.08      9.63 ± 0.005     9.6 ± 0.05

Division (x = y / z)

Category               y / z            xSR              xtrue
Worked                 532.8 / 60       9. ± 0.5         9. ± 0.7
1 More Digit Needed    6.5 / 5.66       1.1 ± 0.05       1.15 ± 0.01
1 Digit Too Many       11. / 12.        0.92 ± 0.005     0.9 ± 0.08

III-4. Properties of the alternate rule
    Besides the standard rounding rule, there is an often used alternate rounding rule. This rule requires one to use an extra significant digit above that suggested by the standard rule. In order to test the alternate convention, the Monte-Carlo procedure described earlier was applied to the alternate rounding rule.
    Table V shows the results for the alternate rule as applied to simple multiplication and division problems. The alternate rule is almost as accurate as the standard rule for division, but is significantly more accurate for multiplication. The average accuracy of the alternate rule is 58.9% compared with 46.4% for the standard rule. The most significant aspect of the alternate rule is that it never leads to a loss of precision; this can be seen in Table V. The reason for this comes from the fact that the standard rule can, at its worst, predict only one less significant digit than actually needed. In these cases, the "extra" significant digit that the alternate rule provides comes to the rescue. Thus, the alternate rule is more accurate than the standard rule and completely safe for data. The only problem with the alternate rule is that the results of calculations may have one or, in rare cases, two too many significant digits. This disadvantage is minor when compared with the standard rule where precision is lost over 50% of the time.
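    For concreteness, a minimal helper (our own sketch; the function name is an assumption) that applies the alternate rule to a single product or quotient is shown below.

    import math

    def alternate_rule(value, n_sig_inputs):
        # keep one significant digit more than the standard rule prescribes
        n_keep = min(n_sig_inputs) + 1
        return round(value, n_keep - 1 - math.floor(math.log10(abs(value))))

    # division example from Table III: the standard rule keeps 1.1 and loses
    # precision, while the alternate rule keeps 1.15 (true result 1.15 +/- 0.01)
    print(alternate_rule(6.5 / 5.66, (2, 3)))    # -> 1.15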
 

TABLE IV.    The minimum and maximum values of the sum of the bracketed terms in
                     Eq. (8) as a function of N' – N.
 
N' – N               0         1         2         3         4
Minimum Value        –1.301    –1.041    –1.004    –1.000    –1.000
Maximum Value        +0.699    +0.959    +0.996    +1.000    +1.000

TABLE V.    The statistical results of the application of the alternate rounding rule to simple
                    multiplication and division problems.
 

Category              Multiplication    Division    Average
Worked                66.8%             51.1%       58.9%
More Digits Needed    0                 0           0
Too Many Digits       33.2%             48.9%       41.1%

IV. Summary

    Although the best expression for the result of a calculation includes the precise description of the uncertainty in terms of the absolute or percentage error, this is often only possible for experimental data. Many problems encountered by physics students in daily life, including those in textbooks, do not deal with quantities where the uncertainties are explicitly stated. In these cases, the number of significant figures is the only available information upon which to base an error estimate and a rounding rule becomes useful.
    The original purpose of the standard rounding rule was to provide a method for quoting the results of calculations without grossly overstating the precision contained therein. This rounding convention is conservative because it tries to ensure that the true result of a calculation is included within the error bars implied by the number of significant digits to which that result is written. Our work shows that the standard rule is too conservative, leading to a loss of precision over 50% of the time.
    While a simple, accurate, and safe rounding rule has been called for [1], this work proves that there is no a priori rule that can accurately predict the number of significant digits in all cases. With no perfect rounding rule possible, the best rounding rule is the simplest rule that is relatively accurate and safe. Because the alternate rule is simple, more accurate than the standard rule, and never leads to a loss of precision, it is far superior to the standard rule and should be adopted as the new standard.

Appendix: Generalization to a series of multiplications and divisions

    The results of this work can easily be extended to calculations in which an arbitrary number of variables, m, are combined in a series of multiplications and/or divisions. To apply the alternate rule to such a series, each step in the calculation must be treated as a simple multiplication or division and evaluated in turn. The application of the alternate rounding rule to this series must, by its nature, preserve precision because precision is preserved at each step in the calculation. Consider x = (1.2 · 3.45) · 6.000 as an example. Using the alternate rule for each step leads to the result x = 4.14 · 6.000 = 24.84, which has four significant figures (Nx = 4). Now consider the same example written as x = (1.2 · 6.000) · 3.45 and x = (6.000 · 3.45) · 1.2. The application of the alternate rule at each step now leads to x = 24.84 (Nx = 4) and x = 24.8 (Nx = 3), respectively. Thus, the order in which the operations are performed may affect the predicted number of significant figures even though the result of the calculation is unaffected. Because any prediction made by the application (or series of applications) of the alternate rule must preserve precision, the smallest predicted number of significant figures is the minimum number required to preserve precision. In the preceding example, this number was found to be Nx = N + 1, where N is the smallest number of significant figures used in the calculation (and the number predicted by the standard rule).
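    A short sketch of this step-by-step bookkeeping (our own illustration; the pair representation and helper names are assumptions) reproduces the two orderings discussed above, tracking each intermediate quantity as a (value, significant figures) pair.

    import math

    def round_sig(x, n):
        # round x to n significant figures
        return round(x, n - 1 - math.floor(math.log10(abs(x))))

    def alt_step(a, b):
        # one multiplication under the alternate rule: keep min(Na, Nb) + 1 digits
        (va, na), (vb, nb) = a, b
        n_keep = min(na, nb) + 1
        return round_sig(va * vb, n_keep), n_keep

    print(alt_step(alt_step((1.2, 2), (3.45, 3)), (6.000, 4)))   # -> (24.84, 4)
    print(alt_step(alt_step((6.000, 4), (3.45, 3)), (1.2, 2)))   # -> (24.8, 3)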
    The general case of m variables involved in a series of multiplications and/or divisions can now be considered. Applying the alternate rounding rule leads to m! / 2 predictions for Nx, where m! / 2 is the number of unique orderings of simple multiplications and/or divisions that can reproduce any such series. Extensive numerical testing shows that the alternate rule can yield no more than m – 1 unique values of Nx and that these values lie between N + 1 and N + (m – 1), where N is the smallest number of significant figures among the m variables. In the special case where all m variables have the same number of significant figures, all possible values of Nx are equal to the minimum value of N + 1. Thus, according to the alternate rounding rule, the minimum number of significant figures needed to preserve precision in a series of multiplications and/or divisions is N + 1. The fact that the alternate rule, in exactly the same form, applies to simple and multiple operations of multiplication and division makes its application as simple and straightforward as that of the standard rounding rule. In practice, a series of multiplications and/or divisions should be carried out using the full calculator result, which is then rounded to N + 1 significant figures.
    It is also interesting to consider the special case investigated by Schwartz [2], x = y^n. By generalizing Eq. (4) to include n terms and substituting Eq. (2) into it, it can easily be shown that Nx = Ny – Int[log(n) + log(Cy / Cx)]. This implies that the number of significant figures needed in the result (x) decreases as the number of multiplications, n, increases. This is exactly the overall behavior that Schwartz [2] observed. The somewhat erratic deviations observed by Schwartz are due to the log(Cy / Cx) term, which is very sensitive to the result (x) and can fluctuate between –1 and +1. As Schwartz also discovered, the operation x = y^n can yield results where the true uncertainty is an order of magnitude larger than that implied by a result written with just a single significant digit. In the current notation, this occurs when Nx ≤ 0 and the precision of the result degrades to a value that exceeds 100%. Thus, the results of Schwartz can be completely described and explained by the formalism developed in this paper.

References

[1]    R. H. Good, Phys. Teach. 34, 192 (1996).
[2]    L. M. Schwartz, J. Chem. Educ. 62, 693 (1985).
[3]    B. L. Earl, J. Chem. Educ. 65, 186 (1988).
[4]    S. Stieg, J. Chem. Educ. 64, 471 (1987).
[5]    J. R. Taylor, An Introduction to Error Analysis: The Study of Uncertainties in Physical
        Measurements, 2nd ed. (University Science Books, Sausalito, CA, 1997), pp. 30–31.
[6]    B. M. Shchigolev, Mathematical Analysis of Observations (London Iliffe Books, London,
        1965), p. 22.
[7]    P. R. Bevington and D. K. Robinson, Data Reduction and Error Analysis for the Physical
        Sciences (McGraw-Hill, New York, 1992), p. 5.
[8]    S. K. Park and K. W. Miller, Commun. ACM 31, 1192 (1988).
[9]    See, for example, C. E. Swartz, Used Math for the First Two Years of College Science,
        2nd ed. (American Association of Physics Teachers, College Park, MD, 1993), pp. 6–15.