4th June 2013, 20:23   #18991
e-t172
Quote: Originally Posted by cyberbeing
I have no idea what you consider as "video clock", but the "audio clock" is what DirectShow uses as a reference clock as interpreted by the "system clock". The clock deviance shown by madVR, multiplied by and added to the video frame rate is the "real" playback rate in DirectShow before anything ever gets passed over HDMI or other output. For example, in Anandtech's screenshot with 0.00472% clock deviance, the "real" video & audio playback rate is equivalent to ~23.97715 fps, which is higher than his display refresh rate of ~23.97605 Hz.
What I mean is, the way audio over HDMI works is that audio is transferred during the blanking interval between two frames. The simplest way for a GPU manufacturer to make this work is to ensure that the number of audio samples transferred between two frames corresponds exactly to one frame's worth of audio, i.e. the sample rate divided by the refresh rate. So, for example, if the refresh rate is 24Hz and the sampling rate is 48kHz, then 2000 audio samples will be sent between each frame. As long as the hardware (driver, GPU, whatever) enforces that (which is very easy: just push 2000 samples between each frame, no more, no less), everything should be fine and no clock deviation should ever occur, because there's nothing to deviate from: there's only one clock, and it's the video clock (i.e. the clock that controls the refresh rate). Sure, that one clock might not be perfectly accurate, but it doesn't matter: the device at the other end of the cable is slaved to the clock of the sending side anyway, so that's not an issue.
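
To illustrate the bookkeeping I'm describing, here's a rough sketch (nothing from an actual driver; the function names and structure are mine) of how a sender slaved purely to vsync could decide how many samples to push per blanking interval, including refresh rates where the per-frame count isn't a whole number:

Code:
# Illustrative only: audio scheduling slaved to the video clock. A whole
# number of samples is pushed per blanking interval and any fractional
# remainder is carried over to the next frame, so no second clock is needed.
from fractions import Fraction

def samples_per_frame(sample_rate_hz, refresh_rate_hz):
    """Exact amount of audio (in samples) that fits in one frame period."""
    return Fraction(sample_rate_hz) / Fraction(refresh_rate_hz)

def schedule(sample_rate_hz, refresh_rate_hz, frames):
    """Yield how many whole samples to send in each blanking interval."""
    per_frame = samples_per_frame(sample_rate_hz, refresh_rate_hz)
    carry = Fraction(0)
    for _ in range(frames):
        carry += per_frame
        n = int(carry)   # whole samples sent during this blanking interval
        carry -= n       # fractional remainder carried to the next frame
        yield n

# 24 Hz video with 48 kHz audio: exactly 2000 samples per frame, every frame.
print(set(schedule(48000, 24, 240)))                          # {2000}

# 59.94 Hz (60000/1001) with 48 kHz: 800.8 samples per frame on average, so
# the schedule mixes 800- and 801-sample intervals while staying locked to vsync.
print(sorted(set(schedule(48000, Fraction(60000, 1001), 50))))  # [800, 801]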

Contrast this with the traditional situation where you have a video output and an analog sound card output: there you're dealing with the video (refresh rate) clock and the audio (sample rate) clock, which won't be perfectly in sync. Instead of sending 2000 audio samples during each vsync interval (assuming 24p and 48kHz), the sound card will sometimes consume 1998 samples, or 2002, or something else; the two clocks deviate from each other, and that's why you can't prevent frame drops (unless you use ReClock and/or FRC, of course). If the video output is also taking care of sending the audio, the issue goes away completely: one component (the GPU) has control over everything, so it just has to use the same clock for everything and send the correct, constant number of samples each time. In the case of 24p and 48kHz, it knows it has to send 2000 samples between each frame, because 48000 / 24 = 2000. That's it. It's that simple. There's no clock deviation, because the audio is slaved to the vsync.
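
For a feel of how quickly that deviation bites, here's a back-of-the-envelope calculation (the helper is purely illustrative, and the deviance value is just the figure from the screenshot quoted above) of how long the two free-running clocks take to drift apart by a whole frame, which is the point where a drop or repeat becomes unavoidable:

Code:
# Illustrative only: time until accumulated audio/video drift reaches one
# whole frame, for a given relative clock error between the two clocks.
def seconds_per_dropped_frame(refresh_hz, clock_deviance_percent):
    frame_duration = 1.0 / refresh_hz                    # seconds per frame
    drift_per_second = abs(clock_deviance_percent) / 100.0
    return frame_duration / drift_per_second

# Example: 23.976 Hz video with the audio clock off by 0.00472% (the figure
# quoted above) -> roughly one dropped/repeated frame every ~15 minutes.
t = seconds_per_dropped_frame(24000 / 1001, 0.00472)
print(f"one frame drop every ~{t / 60:.1f} minutes")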

At least that's the theory. In practice we're still seeing clock deviation when playing both audio and video over HDMI, which, I must say, puzzles me. That would mean that GPU manufacturers are using two clocks for video and audio, which doesn't make any sense because that's actually harder than just doing the right thing and using one single clock.
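
For reference, this is the arithmetic behind the numbers in the quote above, i.e. how the reported deviance translates into the "real" playback rate that ends up slightly faster than the display (the variable names are mine):

Code:
# Illustrative only: effective playback rate implied by madVR's clock deviance.
nominal_fps = 24000 / 1001     # ~23.976 fps source material
deviance_percent = 0.00472     # deviance shown by madVR in the screenshot
display_hz = 23.97605          # measured display refresh rate

real_rate = nominal_fps * (1 + deviance_percent / 100.0)
print(f"effective playback rate: {real_rate:.5f} fps vs display {display_hz} Hz")
# -> about 23.9772 fps, slightly faster than the display can show, which is
#    exactly the mismatch that shouldn't exist if audio and video shared a clock.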

Last edited by e-t172; 4th June 2013 at 20:29.