Recently, I was asked to look at a bug reported in an ASP.NET MVC 5 application that another developer was debugging. The application under development supported the entering, tracking, and approving of employee labor hours for a major enterprise. It was written in C# and used a SQL Server database as its back end.
This post provides a short case study of how the bug was analyzed and how the issue was resolved.
The customer reported an issue with the accuracy of a time calculation in the application. As part of the QA process, the tester entered canned values into the application and observed the results, then compared them with values he had worked out on a hand calculator. The two results were slightly off, and for this customer, any error was unacceptable.
Coming into the application cold, I worked with one of the developers to configure the application so we could try to reproduce the problem. We put the data in the database into the appropriate state and set breakpoints at the point where the calculations were performed.
I followed the same process to compute the results manually using a hand calculator, and confirmed that the results were slightly off.
The code in question was doing a number of Math.Floor and Math.Ceiling operations, so I assumed that something was truncating the numbers and throwing off the end result.
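To illustrate the suspicion: flooring or ceiling a scaled value is a common way fractional parts get silently dropped. This is only a hypothetical sketch (the application's actual rounding code isn't shown here, and the quarter-hour rounding is an invented example); the app was C#, but Python's `math.floor`/`math.ceil` behave the same way on decimal values.

```python
import math
from decimal import Decimal

# Hypothetical example: rounding a fractional hour value to quarter hours.
hours = Decimal("0.7224997110")

# Scaling, truncating, and scaling back discards part of the value:
quarter_hours_floor = math.floor(hours * 4) / 4  # floor(2.8899988440) / 4
quarter_hours_ceil = math.ceil(hours * 4) / 4    # ceil(2.8899988440) / 4

print(quarter_hours_floor)  # 0.5
print(quarter_hours_ceil)   # 0.75
```

Either operation moves the result noticeably, which is why truncation was the first suspect.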
It was possible that the issue lay with the units that were being used. The final calculation came down to the following:
A * B = C
A was fixed at 4.335 by the program.
At the breakpoint we examined the code to view the value of B, which was 0.1666666666…, the decimal expansion of 1/6.
Performing the calculation on the hand calculator, 4.335 * 0.1666666… yielded a product of C = 0.7224997.
However, the calculation in the application yielded a result of C = 0.7225. All calculations were done using the decimal data type.
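The discrepancy is easy to reproduce. The application used the .NET decimal type; the sketch below uses Python's `decimal.Decimal`, which behaves analogously for this arithmetic. Dividing by 6 exactly gives one answer; multiplying by a seven-digit truncation of 1/6 gives another.

```python
from decimal import Decimal

A = Decimal("4.335")

# What the application effectively computed: an exact division by 6.
app_result = A / Decimal("6")

# What the tester computed: multiplying by a truncated approximation of 1/6.
calculator_result = A * Decimal("0.1666666")

print(app_result)         # 0.7225
print(calculator_result)  # 0.7224997110
```

Both computations are individually exact in decimal arithmetic; the difference comes entirely from feeding in a truncated value of 1/6.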
I suspected that we might have accidentally used an integer, float, or double somewhere in the calculation, introducing rounding errors, but further analysis found no instances of this.
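That kind of contamination is worth checking for, because a binary float can corrupt a decimal value before any arithmetic happens. A minimal Python sketch of the effect (the .NET decimal type guards against the same hazard when a double sneaks into a calculation): constructing a decimal from a string preserves the exact value, while routing the same literal through a float first bakes in binary rounding.

```python
from decimal import Decimal

# Constructed from a string: exactly 4.335.
exact = Decimal("4.335")

# Constructed from a float: 4.335 has no exact binary representation
# (its denominator, 200, contains a factor of 5), so the float carries
# a tiny rounding error that the Decimal then faithfully preserves.
contaminated = Decimal(4.335)

print(exact == contaminated)  # False
```

Had a float or double been in the mix, this is exactly the sort of tiny drift we expected to find, but didn't.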
I created a small program that would repeat the calculation for me to step through, but to no avail. I was stumped.
The Aha Moment!
The aha moment came when I performed the calculation long hand (4.335 / 6.0) and got a value of 0.7225, exactly what the application produced. The error crept in when the tester multiplied 4.335 by an approximation of 1/6 (0.1666666) on his hand calculator and got 0.7224997. The calculator carried only seven decimal places, so the discrepancy was simply round-off error in the approximation of 1/6. I tried the same thing with the calculator on my computer and saw similar results.
In the end, thankfully, there was no bug in the application. The expected results calculated on the hand calculator were computed from a truncated value of 1/6, so they were less precise than the application's, which caused a bug to be reported where no bug existed.
Moral of the Story
If you use hand calculations to check an application's calculated results, make sure the values used in the calculations are consistent with what the application uses. In this case, 0.1666666 does not equal 1/6.
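One way to build expected values that cannot drift is to compute them with exact rational arithmetic instead of truncated decimals. The sketch below is an illustration of the idea using Python's `fractions` module (not something from the original application): 4.335 * 1/6 is exactly 289/400, i.e. 0.7225, which matches the application's decimal result and exposes the calculator's truncated value as wrong.

```python
from decimal import Decimal
from fractions import Fraction

# The expected result, computed exactly as a rational number.
expected = Fraction("4.335") * Fraction(1, 6)
print(expected)  # 289/400, which is exactly 0.7225

# The application's decimal result agrees with the exact value...
assert Fraction(Decimal("4.335") / Decimal("6")) == expected

# ...while the truncated calculator approximation of 1/6 does not.
assert Fraction("4.335") * Fraction("0.1666666") != expected
```

Expected results derived this way are consistent by construction with any implementation that computes the quantity exactly.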
This post discussed how a bug was reported based on comparing a hand-calculated result from a calculator with the value the application was producing. The hand calculation used values that were less precise than those the application was using, yielding erroneous expected results.
Taking care to create test cases with expected results that are consistent with how the application should calculate the results is important to an efficient development process.