On my computer this code takes 17 seconds (one billion calls in total):
static void Main(string[] args)
{
    var sw = new Stopwatch();
    sw.Start();
    int r;
    for (int i = 1; i <= 100000000; i++)
    {
        for (int j = 1; j <= 10; j++)
        {
            MyDivRem(i, j, out r);
        }
    }
    Console.WriteLine(sw.ElapsedMilliseconds);
}
static int MyDivRem(int dividend, int divisor, out int remainder)
{
    int quotient = dividend / divisor;
    remainder = dividend - divisor * quotient;
    return quotient;
}
while the equivalent loop calling Math.DivRem takes 27 seconds.
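For reference, the Math.DivRem measurement uses the same harness with only the call swapped. The original comparison code wasn't shown, so this is a sketch (MainWithDivRem is a hypothetical name):

static void MainWithDivRem(string[] args)
{
    var sw = new Stopwatch();
    sw.Start();
    int r;
    for (int i = 1; i <= 100000000; i++)
    {
        for (int j = 1; j <= 10; j++)
        {
            Math.DivRem(i, j, out r); // library version under test
        }
    }
    Console.WriteLine(sw.ElapsedMilliseconds);
}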
.NET Reflector gives me this code for Math.DivRem:
public static int DivRem(int a, int b, out int result)
{
    result = a % b;
    return (a / b);
}
And the corresponding CIL:
.method public hidebysig static int32 DivRem(int32 a, int32 b, [out] int32& result) cil managed
{
    .maxstack 8
    L_0000: ldarg.2
    L_0001: ldarg.0
    L_0002: ldarg.1
    L_0003: rem        // first division: a % b
    L_0004: stind.i4
    L_0005: ldarg.0
    L_0006: ldarg.1
    L_0007: div        // second division: a / b
    L_0008: ret
}
Theoretically, performing the two operations separately might help on machines with multiple cores, but in fact it shouldn't need two operations at all: x86 CPUs produce both the quotient and the remainder from a single integer division with DIV or IDIV (http://www.arl.wustl.edu/~lockwood/class/cs306/books/artofasm/Chapter_6/CH06-2.html#HEADING2-451)!
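One caveat with this kind of microbenchmark: since the results are discarded, a sufficiently aggressive JIT could in principle eliminate the calls entirely. A more defensive harness accumulates the results so the work can't be optimized away. This is a sketch (DivRemBench is a hypothetical name), not the code actually timed above:

using System;
using System.Diagnostics;

static class DivRemBench
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        long sink = 0; // consume the results so the JIT can't remove the work
        for (int i = 1; i <= 100000000; i++)
        {
            for (int j = 1; j <= 10; j++)
            {
                int r;
                sink += MyDivRem(i, j, out r) + r;
            }
        }
        Console.WriteLine("{0} ms (sink = {1})", sw.ElapsedMilliseconds, sink);
    }

    static int MyDivRem(int dividend, int divisor, out int remainder)
    {
        int quotient = dividend / divisor;
        remainder = dividend - divisor * quotient;
        return quotient;
    }
}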