Performance Testing and Timestamp Precision Analysis

When running performance tests, you frequently need to measure how long code takes to execute in order to evaluate its efficiency. This article uses C# code samples to show how to run such tests, and analyzes the performance differences between several ways of obtaining timestamps.

First, consider the following C# performance-test code. It runs a loop of empty operations (NOP) to measure the cost of the loop itself, and then measures the cost of obtaining timestamps via Environment.TickCount, DateTime.Now.Ticks, and Stopwatch.ElapsedTicks.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace TimerPerformance
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Performance Tests");
            Console.WriteLine("Stopwatch Resolution (nS): " + (1000000000.0 / Stopwatch.Frequency).ToString());
            RunTests();
            Console.WriteLine("Tests Finished, press any key to stop...");
            Console.ReadKey();
        }

        public static long DummyValue;

        public static void RunTests()
        {
            const int loopEnd = 1000000;
            Stopwatch watch = new Stopwatch();

            Console.WriteLine();
            Console.WriteLine("Reference Loop (NOP) Iterations: " + loopEnd);
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += i;
            }
            watch.Stop();
            Console.WriteLine("Reference Loop (NOP) Elapsed Time (ms): " + ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());

            Console.WriteLine();
            Console.WriteLine("Query Environment.TickCount");
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += Environment.TickCount;
            }
            watch.Stop();
            Console.WriteLine("Query Environment.TickCount Elapsed Time (ms): " + ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());

            Console.WriteLine();
            Console.WriteLine("Query DateTime.Now.Ticks");
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += DateTime.Now.Ticks;
            }
            watch.Stop();
            Console.WriteLine("Query DateTime.Now.Ticks Elapsed Time (ms): " + ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());

            Console.WriteLine();
            Console.WriteLine("Query Stopwatch.ElapsedTicks");
            watch.Reset();
            watch.Start();
            for (int i = 0; i < loopEnd; ++i)
            {
                DummyValue += watch.ElapsedTicks;
            }
            watch.Stop();
            Console.WriteLine("Query Stopwatch.ElapsedTicks Elapsed Time (ms): " + ((double)watch.ElapsedTicks / Stopwatch.Frequency * 1000).ToString());
        }
    }
}

Running this code on different hardware produced the following results:

Hardware                               | Empty Loop | Environment.TickCount | DateTime.Now.Ticks
AMD Opteron 4174 HE 2.3 GHz            | 8.7 ms     | 16.6 ms               | 2227 ms
AMD Athlon 64 X2 5600+ 2.9 GHz         | 6.8 ms     | 15.1 ms               | 1265 ms
Intel Core 2 Quad Q9550 2.83 GHz       | 2.1 ms     | 4.9 ms                | 557.8 ms
Azure A1 (Intel Xeon E5-2660 2.2 GHz)  | 5.2 ms     | 19.9 ms               | 168.1 ms

The results show that calling DateTime.Now is far more expensive than the empty loop or Environment.TickCount. Dividing by the one million iterations (after subtracting the empty-loop time), a DateTime.Now call costs roughly 0.2–2 microseconds, while an Environment.TickCount call costs only a few to a few tens of nanoseconds.

Consider, for example, an HTTP request where both response time and throughput (data transfer rate) must be measured: a timestamp has to be taken for every chunk of data received from the web server. Before the operation completes, at least three timestamps (start, response, end) are needed to measure response time and download time; measuring throughput requires one timestamp per chunk, so the count depends on how many chunks arrive. Multi-threaded access makes this worse: because Environment.TickCount and DateTime.Now are shared resources, every call has to go through their synchronization mechanisms, which means they cannot be parallelized.
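The three-timestamp scheme described above can be sketched with a single Stopwatch, reading ElapsedTicks at each point of interest. This is an illustrative sketch, not code from the article; a MemoryStream stands in for the network response stream so it runs without a web server.

```csharp
using System;
using System.Diagnostics;
using System.IO;

class HttpTimingSketch
{
    static void Main()
    {
        var watch = Stopwatch.StartNew();
        long startTicks = watch.ElapsedTicks;          // timestamp 1: request start

        // Stand-in for the HTTP response stream (64 KiB of simulated data).
        Stream stream = new MemoryStream(new byte[64 * 1024]);
        long responseTicks = watch.ElapsedTicks;       // timestamp 2: response received

        var buffer = new byte[8192];
        long totalBytes = 0;
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            totalBytes += read;
            long chunkTicks = watch.ElapsedTicks;      // one timestamp per chunk, for throughput
        }
        long endTicks = watch.ElapsedTicks;            // timestamp 3: transfer finished

        double msPerTick = 1000.0 / Stopwatch.Frequency;
        Console.WriteLine("Response time (ms): " + (responseTicks - startTicks) * msPerTick);
        Console.WriteLine("Download time (ms): " + (endTicks - responseTicks) * msPerTick);
        Console.WriteLine("Bytes received: " + totalBytes);
    }
}
```

Because Stopwatch.ElapsedTicks is read from an instance owned by the measuring thread, this avoids the shared-resource contention that Environment.TickCount and DateTime.Now suffer under parallel load.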

In practice, a real system like the Crawler-Lib Engine can execute 20,000–30,000 HTTP requests per second on reasonably good hardware, so it is clear that time measurement has an impact on the maximum throughput.

Some may argue that DateTime.Now is more precise than Environment.TickCount. That is only partially true. Here is a code snippet that measures the granularity of these timestamps:

// Requires: using System; using System.Threading;

if (Environment.TickCount > int.MaxValue - 60000)
    throw new InvalidOperationException("Tick Count will overflow in the next minute, test can't be run");

var startTickCount = Environment.TickCount;
var currentTickCount = startTickCount;
int minGranularity = int.MaxValue;
int maxGranularity = 0;
while (currentTickCount < startTickCount + 1000)
{
    var tempMeasure = Environment.TickCount;
    if (tempMeasure - currentTickCount > 0)
    {
        minGranularity = Math.Min(minGranularity, tempMeasure - currentTickCount);
        maxGranularity = Math.Max(maxGranularity, tempMeasure - currentTickCount); // original referenced the undefined "currentGranularity"
    }
    currentTickCount = tempMeasure;
    Thread.Sleep(0);
}
Console.WriteLine("Environment.TickCount Min Granularity: " + minGranularity + ", Max Granularity: " + maxGranularity + "ms");
Console.WriteLine();

var startTime = DateTime.Now;
var currentTime = startTime;
double minGranularityTime = double.MaxValue;
double maxGranularityTime = 0.0;
while (currentTime < startTime + new TimeSpan(0, 0, 1))
{
    var tempMeasure = DateTime.Now;
    if ((tempMeasure - currentTime).TotalMilliseconds > 0)
    {
        minGranularityTime = Math.Min(minGranularityTime, (tempMeasure - currentTime).TotalMilliseconds);
        maxGranularityTime = Math.Max(maxGranularityTime, (tempMeasure - currentTime).TotalMilliseconds);
    }
    currentTime = tempMeasure;
    Thread.Sleep(0);
}
Console.WriteLine("DateTime Min Granularity: " + minGranularityTime + ", Max Granularity: " + maxGranularityTime + "ms");

Running this snippet on several machines shows that the granularity of Environment.TickCount is about 16 ms (15.6 ms), which is the default system-wide timer resolution. The system-wide timer resolution can be changed to 1 ms (on Windows, via the winmm timeBeginPeriod function), but this is generally not recommended because it affects every application on the machine.
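For illustration only (this is not from the original article): on Windows the resolution change is done through P/Invoke against winmm.dll, and the request should always be paired with a matching timeEndPeriod call. A minimal, guarded sketch:

```csharp
using System;
using System.Runtime.InteropServices;

class TimerResolution
{
    // winmm.dll exports timeBeginPeriod/timeEndPeriod, which request a
    // minimum system-wide timer resolution in milliseconds (Windows only).
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        if (Environment.OSVersion.Platform != PlatformID.Win32NT)
        {
            Console.WriteLine("timeBeginPeriod is Windows-only; skipping.");
            return;
        }

        timeBeginPeriod(1);       // request 1 ms resolution (affects all processes)
        try
        {
            // ... timing-sensitive work runs here ...
        }
        finally
        {
            timeEndPeriod(1);     // always restore the previous resolution
        }
        Console.WriteLine("Timer resolution restored.");
    }
}
```

Because the setting is system-wide, the try/finally pairing matters: leaving the resolution raised increases timer interrupt frequency (and power use) for the whole machine.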
