15/4/2019, 13:04

When redundant type casts matter

In the last episode from the rabbit hole of broken dreams and despair that is trying to make floating-point operations deterministic in .NET, we saw how a seemingly innocent series of operations led to behaviour that depended on both the target architecture and the compiler's optimization settings.



This time around, we'll see that even Microsoft's own C# editor, Visual Studio, doesn't quite get it right, and how an operation that Visual Studio considers redundant can change the results of a C# program.



Background


C# has statically typed variables: compile-time verification helps ensure that algorithms behave correctly, for example by ensuring that one cannot pass a string to a function that expects an integer.
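
As a minimal sketch of what that means in practice (the function below is made up purely for illustration):

static void PrintSquare(int n) => Console.WriteLine(n * n);

PrintSquare(4);          // fine: prints 16
// PrintSquare("four");  // rejected at compile time: a string is not an int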



Moreover, under certain conditions, it is possible to convert a variable from one type to another. For example, given int a = 60, the variable's type, int, can be converted to long, either implicitly through long b = a or explicitly through long c = (long)a (try it).
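
To make the two forms concrete, here is a minimal sketch of the implicit and explicit conversions just mentioned (the Console lines are only there for illustration):

int a = 60;
long b = a;        // implicit conversion from int to long
long c = (long)a;  // explicit conversion via a cast
Console.WriteLine(b); // 60
Console.WriteLine(c); // 60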



Not all conversions are possible – for instance, you cannot convert string to int – but the mathematically minded may appreciate that the specification explicitly spells out that identity conversions are always possible. That is, any type T can be converted to T. And this is where things get a bit weird.
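
Before moving on, a trivial sketch of what such an identity cast looks like – the compiler accepts it without complaint, and one would expect it to be a no-op (this snippet is not from the original post):

int x = 42;
int y = (int)x;          // identity conversion: int to int
string s = "hello";
string t = (string)s;    // identity conversion: string to string
Console.WriteLine(y);
Console.WriteLine(t);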



The example


Let us return to our example from the previous post, changed ever so slightly:


float f1 = 0.00000000002f;
float f2 = 1 / f1;
double d1 = f2;
double d2 = (float) f2;
Console.WriteLine(d1);
Console.WriteLine(d2);

Here, we perform a simple operation on single-precision floating-point numbers, convert the result to a double-precision value in two different ways, and print the results. The thing to pay attention to is the explicit cast (float) f2, in which we perform the identity conversion on f2, which is already a float, converting its type to float. One would expect this to do nothing, and that d1 and d2 would contain the same value. Yet if one builds the program for the x86 target architecture in release mode and runs it, one gets the following output:


50000000199,7901
49999998976

This is rather surprising. So much so that Visual Studio itself stumbles and claims that the cast is redundant.





The explanation


We reported this behaviour as a possible bug in the Visual Studio code analyzer, and the subsequent discussion reveals what is going on here.



In the previous post, we saw that for local variables, float really means "single precision or higher", and digging into the ECMA specification of C#, one finds that the explicit cast effectively changes the meaning of float to "exactly single precision". From §11.1.2:



In most cases, an identity conversion has no effect at runtime. However, since floating point operations may be performed at higher precision than prescribed by their type (§9.3.7), assignment of their results may result in a loss of precision, and explicit casts are guaranteed to reduce precision to what is prescribed by the type.



Somewhat curiously, the Microsoft version of the specification contains the exact same text as the ECMA version, except it leaves out the above crucial point on identity conversions of floating-point types.
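
The practical upshot is that this seemingly redundant cast is exactly the tool one reaches for when chasing deterministic single-precision results: casting each intermediate back to float discards any extra precision at every step. A minimal sketch of the idea, relying on the guarantee quoted above (the helper name and the choice of operation are just for illustration):

// Relying on the identity-conversion guarantee: each explicit (float) cast
// reduces the intermediate result to true single precision, even if the
// runtime computed it at higher precision.
static float MultiplyAddSinglePrecision(float a, float b, float c)
{
    return (float)((float)(a * b) + c);
}

This is only a sketch, of course; as the rest of this series suggests, there are more pitfalls lurking before results are truly reproducible across architectures.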



