When mscorlib is prejitted (the default), platform intrinsics are not used, e.g. `MathF.Round` won't use SSE (see dotnet/coreclr#25365). Then, as far as I understand, if the method is hot it can be re-compiled at tier 1 and will start using SSE. Is it OK that it might start returning different values for the same input? E.g.:
```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;

public class Program
{
    static void Main()
    {
        Thread.Sleep(1000);
        float x = 0.499999969f;
        for (int i = 0; i < 100; i++)
        {
            // around the i=30 iteration, tiered compilation re-compiles MyRound
            // and it starts returning a different result for the same x
            Console.WriteLine(MyRound(x));
            Thread.Sleep(100);
        }
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    public static float MyRound(float x) => MathF.Round(x);
}
```
Output is:
```
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
0
0
0
0
0
0
0
0
...
```
/cc @tannergooding