Hi,
bcoffee wrote:
We obtain the system clock at one-second resolution from the PC's CMOS, e.g. 21:23:22.
But under Windows, people can access the system time through the API at one-millisecond resolution, e.g. 21:23:22.296.
Where does Windows get the millisecond time (from the PIT?), and how does Windows keep it accurate?
Most OSs read the RTC once during boot, and then keep track of time internally using some other timer while they're running - the PIT, or the RTC's "periodic interrupt", etc.
Most OSs are also imprecise. For example, they might return milliseconds but keep track of time in 10 millisecond intervals, so that if you read the time at 1 ms intervals you might actually read 21:23:22.010 ten times before you read 21:23:22.020 (rather than reading 21:23:22.010, 21:23:22.011, 21:23:22.012, etc).
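As a rough illustration of that (a hypothetical sketch, not how any particular OS actually implements it), imagine a kernel whose timer IRQ fires every 10 ms and that only increments a tick count in the handler. Reported "milliseconds" are just the tick count scaled up, so ten consecutive 1 ms reads all return the same value:

```c
#include <stdint.h>

/* Hypothetical sketch: time is kept as a count of 10 ms ticks but
   reported in milliseconds, so the reported time only changes once
   per IRQ even if you read it every millisecond. */

static uint64_t tick_count;                   /* advanced by the timer IRQ */

static void timer_irq(void) { tick_count++; } /* fires every 10 ms */

static uint64_t current_time_ms(void) {
    return tick_count * 10;                   /* real resolution: 10 ms */
}
```

Between two IRQs, every call to `current_time_ms()` returns the same millisecond value; the extra digits of "resolution" are fictional.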
None of the timers in a typical computer are accurate over long periods of time. To improve accuracy, you could use a utility to adjust the amount of time added to the time count on each IRQ. For example, if the PIT is set to generate an IRQ every 1 ms, you could add 1.02 ms to the time count, where the difference has been set by users (from earlier measurements) to improve long-term accuracy.
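A minimal sketch of that correction (the variable names and the 1.02 ms figure are just illustrative, matching the example above): keep the time count in microseconds so a fractional per-IRQ correction accumulates exactly instead of being rounded away each tick.

```c
#include <stdint.h>

/* Hypothetical sketch: the IRQ handler adds a calibrated increment
   (here 1020 us, i.e. 1.02 ms) rather than the nominal 1 ms, to
   compensate for a PIT that was measured to run slow. */

static uint64_t time_us;            /* system time in microseconds      */
static uint32_t us_per_irq = 1020;  /* calibrated increment per 1 ms IRQ */

static void timer_irq(void) {
    time_us += us_per_irq;          /* fractional correction accumulates */
}
```

After 1000 nominal-1 ms IRQs the clock has advanced 1.02 seconds, which is the whole point: the 2% correction would be lost if time were kept in whole milliseconds.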
Also, it's possible to use something like NTP (Network Time Protocol) to set the time on one or more machines according to the time on another (more accurate) machine. In this case you could also use NTP to fine-tune the amount you add to the time count. For example, if you check NTP every ten minutes and NTP says the PIT is slow, then you could increase the amount you add to the time count on each IRQ to improve the timer's accuracy between NTP time checks.
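The fine-tuning step above could be sketched like this (a simplified illustration, not real NTP discipline logic, and the function name is made up): compare how much the local clock advanced between two NTP checks against how much the reference clock advanced, and scale the per-IRQ increment by that ratio.

```c
#include <stdint.h>

/* Hypothetical sketch: if the local clock gained less time than the
   reference between NTP checks, it's running slow, so the per-IRQ
   increment is scaled up proportionally (and down if it's fast). */

static uint32_t adjust_increment(uint32_t us_per_irq,
                                 uint64_t local_elapsed_us,
                                 uint64_t ref_elapsed_us) {
    return (uint32_t)((uint64_t)us_per_irq * ref_elapsed_us
                      / local_elapsed_us);
}
```

For instance, if the reference says 600 seconds passed but the local clock only counted 599.4 seconds, the increment is scaled up by 600/599.4 so the clock runs closer to true rate until the next check.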
Cheers,
Brendan