The Logical/Physical Disk % Disk Time counters look wrong. What gives?

The % Disk Time counters are capped at 100% in System Monitor because reporting disk utilization greater than 100% would be confusing. The catch is that the % Disk Time counters do not actually measure disk utilization, and the Explain text implying that they do is very misleading.

What the Logical Disk and Physical Disk % Disk Time counters actually do measure is a little complicated to explain.

The % Disk Time counter is not measured directly. It is a value derived by the diskperf filter driver, the layer of software in the disk driver stack that provides disk performance statistics. As I/O Request Packets (IRPs) pass through this layer, diskperf keeps track of when I/Os start and when they finish: on the way to the device, diskperf records a timestamp for the IRP; on the way back from the device, it records the completion time. The difference is the duration of the I/O request. Averaged over the collection interval, this becomes Avg. Disk sec/Transfer, a direct measure of disk response time from the point of view of the device driver. diskperf also maintains byte counts and separate counters for Reads and Writes, at both the Logical and Physical Disk level. (This allows Avg. Disk sec/Transfer to be broken out into Reads and Writes.)
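The bookkeeping described above can be sketched in a few lines of Python. This is an illustrative model only, not actual driver code; the IRP timestamps and the interval length are made-up values.

```python
# Illustrative model of diskperf-style accounting (not actual driver code).
# Each tuple is (start_timestamp, completion_timestamp) for one IRP, in seconds.
irps = [(0.00, 0.012), (0.01, 0.030), (0.05, 0.058), (0.09, 0.104)]
interval = 0.120  # collection interval, in seconds (hypothetical)

durations = [done - start for start, done in irps]      # per-IRP round-trip time
transfers_per_sec = len(irps) / interval                # Disk Transfers/sec
avg_sec_per_transfer = sum(durations) / len(durations)  # Avg. Disk sec/Transfer
```

With these sample IRPs, the model yields an Avg. Disk sec/Transfer of 13.5 ms at about 33 transfers per second.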

The Avg. Disk sec/Transfer measurement reported is based on the complete round-trip time of a request. Strictly speaking, it is a direct measure of disk response time, which means it includes queue time: time spent waiting for the device while it is busy with another request, or waiting for a busy SCSI bus on the way to the device.

% Disk Time is a value derived by diskperf from the sum of all IRP round-trip times over the interval, divided by the interval duration. That works out to:

% Disk Time = Avg. Disk sec/Transfer * Disk Transfers/sec

(expressed as a percentage), a calculation subject to capping when it exceeds 100%! You can verify it easily enough for yourself.
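The derivation, including the capping behavior, amounts to something like the following sketch (the counter values fed in are hypothetical):

```python
def pct_disk_time(avg_sec_per_transfer, transfers_per_sec):
    # % Disk Time as derived by diskperf: response time * throughput,
    # expressed as a percentage and capped at 100%.
    raw = avg_sec_per_transfer * transfers_per_sec * 100.0
    return min(raw, 100.0)

light = pct_disk_time(0.008, 50)    # a lightly loaded disk: 40%
busy = pct_disk_time(0.025, 120)    # 300% raw, but reported as 100%
```

The second call shows exactly the problem the article describes: a disk with significant queuing produces a raw value well over 100%, and the cap silently hides it.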

Because the Avg. Disk sec/Transfer that diskperf measures includes disk queuing, % Disk Time can grow greater than 100% when there is significant disk queuing (at either the Physical or Logical Disk level). The Explain text in the official documentation suggests that this product of Avg. Disk sec/Transfer and Disk Transfers/sec measures % Disk busy. If (big if) IRP round-trip time represented service time only, then the % Disk Time calculation would correspond to disk utilization. But Avg. Disk sec/Transfer includes queue time, so the formula really calculates something entirely different.

The formula used in the calculation to derive % Disk Time corresponds to Little’s Law, a well-known equivalence relation that gives the number of requests in the system as a function of the arrival rate and the response time. According to Little’s Law, Avg. Disk sec/Transfer * Disk Transfers/sec properly yields the average number of requests in the system, more formally known as the average queue length. The average queue length calculated in this fashion includes IRPs queued for service as well as those actually in service.
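Little’s Law (N = X * R) can be checked numerically with a toy example. The numbers here are hypothetical: a disk completing 100 transfers per second with an average round-trip time of 25 ms.

```python
# Little's Law: N = X * R, where X is throughput and R is response time.
transfers_per_sec = 100       # Disk Transfers/sec (hypothetical)
avg_sec_per_transfer = 0.025  # Avg. Disk sec/Transfer: service + queue time

# Average number of requests in the system, in service or queued.
avg_requests_in_system = transfers_per_sec * avg_sec_per_transfer
```

The result, 2.5 requests on average, is a queue length, not a utilization, which is why presenting the same product as a capped percentage is misleading.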

A direct measure of disk response time like Avg. Disk sec/Transfer is a useful metric. Since people tend to buy disk hardware based on a service time expectation, it is unfortunate that there is no way to break out disk service time and queue time separately in NT 4.0. (The situation is greatly improved in Windows 2000, however.) Given the way diskperf hooks into the I/O driver stack, the software RAID functions associated with Ftdisk, and SCSI disks that support command tag queuing, one could argue this is the only feasible way to do things in the Windows 2000 architecture. The problem of interpretation arises because of the misleading Explain text and the arbitrary (and surprising) use of capping.

Microsoft’s fix to the problem beginning in NT 4.0 is a different version of the Counter that is not capped. This is Avg. Disk Queue Length. Basically, this is the same field as % Disk Time without capping and without being printed as a percent.

For example, if % Disk Time is 78.3%, Avg. Disk Queue Length is 0.783. When % Disk Time is pinned at 100%, Avg. Disk Queue Length shows the actual value before capping. We recently had a customer reporting values like 2.63 in this field. That’s a busy disk! The interpretation of this counter is the average number of disk requests that are active and queued, i.e., the average queue length.
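The relationship between the two counters can be written out directly. The 78.3% and 2.63 figures are the examples above; the conversion itself is just a factor of 100 plus the cap.

```python
def avg_disk_queue_length(pct_disk_time_uncapped):
    # Avg. Disk Queue Length is the same derived quantity as % Disk Time,
    # but uncapped and not scaled into a percentage.
    return pct_disk_time_uncapped / 100.0

queue_len = avg_disk_queue_length(78.3)   # 78.3% Disk Time -> 0.783

# The customer's 2.63 queue length would have appeared as a capped
# 100% Disk Time, hiding the real (uncapped) 263% value.
capped = min(2.63 * 100.0, 100.0)
```

In other words, once queuing pushes the derived value past 1.0, only Avg. Disk Queue Length tells you how bad things really are.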
