The other day, a coworker mentioned that he wasn’t using high-resolution timers on Windows because he had heard “they caused clock drift”. His statement struck me as odd. Why would checking the time change the time? Are we causing some sort of quantum observer effect? So I asked him: why don’t we just check what the function does?
True, we can’t just open up the source files, but we have something almost as good: the binary itself! A short C function almost always compiles down to a short assembly function, and GetTickCount is no different. To follow along at home, fire up WinDbg attached to any 64-bit process (Notepad will do) and run “uf KERNELBASE!GetTickCount”. We see this:
0:000> uf KERNELBASE!GetTickCount
000007ff`14211250 b92003fe7f mov ecx,offset SharedUserData+0x320 (00000000`7ffe0320)
000007ff`14211255 488b09 mov rcx,qword ptr [rcx]
000007ff`14211258 8b04250400fe7f mov eax,dword ptr [SharedUserData+0x4 (00000000`7ffe0004)]
000007ff`1421125f 480fafc1 imul rax,rcx
000007ff`14211263 48c1e818 shr rax,18h
000007ff`14211267 c3 ret
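Translated back into C, the function is tiny. This is a hedged reconstruction from the disassembly, not Microsoft’s actual source: SharedUserData+0x320 holds a 64-bit tick counter, and SharedUserData+0x4 holds TickCountMultiplier, a fixed-point scale factor with 24 fractional bits (milliseconds per tick times 2^24):

```c
#include <stdint.h>

/* Sketch of what the disassembly above computes.
 * tick_count  - the 64-bit counter at SharedUserData+0x320
 * multiplier  - TickCountMultiplier at SharedUserData+0x4,
 *               i.e. (milliseconds per clock tick) << 24      */
static uint32_t tick_count_ms(uint64_t tick_count, uint32_t multiplier)
{
    /* imul rax,rcx ; shr rax,18h : fixed-point multiply, drop the
     * 24 fractional bits; the truncation to 32 bits is the return
     * value landing in eax. */
    return (uint32_t)((tick_count * (uint64_t)multiplier) >> 24);
}
```

For example, with the default 64 Hz clock interrupt (15.625 ms per tick), the multiplier would be 15.625 × 2^24 = 0x0FA00000, so 64 ticks come out as exactly 1000 ms.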
We could analyze this function step by step, but even from a quick glance we can see it’s just an immediate load, two memory reads, a multiply, and a shift. It seems very unlikely that this could cause any “clock drift”. So how is it possible to read the current tick count so simply? A quick Google search reveals that these addresses live in the “shared user page”, a page the kernel maps into every usermode process. All the kernel needs to do is update the tick count there on every clock interrupt, and magically we have a GetTickCount that is so fast it’s nearly free. And no worries about clock drift.
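One detail worth noting: how does a reader safely load a value the kernel is concurrently updating, with no lock? On x64 the 8-byte read in the disassembly is naturally atomic, but a 32-bit reader can’t load 64 bits in one instruction, which is why the tick count is stored as a KSYSTEM_TIME structure that keeps a second copy of the high 32 bits. A reader simply retries until both copies agree. Here’s a portable sketch of the reader side (the struct mirrors the KSYSTEM_TIME layout; the retry loop is my reconstruction of the idea, not kernel source):

```c
#include <stdint.h>

/* Same layout as Windows' KSYSTEM_TIME in the shared user page.
 * The kernel writes the high word into both High1Time and High2Time;
 * a reader that catches an update mid-write sees them unequal. */
typedef struct {
    volatile uint32_t LowPart;
    volatile int32_t  High1Time;
    volatile int32_t  High2Time;
} ksystem_time;

/* Lock-free 64-bit read for a 32-bit reader: retry until both copies
 * of the high word match, proving LowPart wasn't torn by an update. */
static uint64_t read_system_time(const ksystem_time *t)
{
    int32_t  high;
    uint32_t low;
    do {
        high = t->High1Time;
        low  = t->LowPart;
    } while (high != t->High2Time);
    return ((uint64_t)(uint32_t)high << 32) | low;
}
```

The pattern is essentially a tiny seqlock: the writer pays a little extra on each update so that an unbounded number of readers never need to synchronize at all.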
So does this mean we’ve dispelled the myth of the mysterious clock drift? Well, not exactly. We now know that GetTickCount is unlikely to ever cause the clock to drift, but it turns out there is a real “clock drift problem” on Windows; that, however, deserves its own blog post.