Old 28th November 2016, 09:58   #40841  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
Quote:
Originally Posted by madshi View Post
Why? NGU could upscale 8x, then downscale to the target resolution. Just an example, of course.
Sorry, but this is going to hurt a lot on a UHD screen.

720x480 -> 5760x3840 -> 3840x2160 doesn't sound viable to me. I don't think it is possible to skip the last upscale step for every normal source.

I still think super-xbr should stay, for the simple reason that it doesn't highlight compression artefacts as much as NGU does on DVDs.
Old 28th November 2016, 09:59   #40842  |  Link
flossy_cake
Registered User
 
Join Date: Aug 2016
Posts: 606
Quote:
Originally Posted by madshi View Post

It's real time rendered, similar to games, not a movie with pre-recorded frames.
A hires timer doesn't help. Have you read my FAQ? I asked for an API which defines at which exact point in time a frame will be shown in the future and for how long the frame will stay on screen. I'm not interested in trying to do this myself with a hires timer. The API needs to provide this functionality (and with a "hardware interrupt" supported backend, not with hires timers working in the background), otherwise it's useless for video rendering. Or do you want to have stuttering motion in situations where the CPU is busy for a few milliseconds (which is pretty normal for a Windows PC)? Relying on a hires timer to do the frame syncing would be a big step *back*, compared to the reliable motion smoothness madVR achieves today with a conventional VSync display, and I'm not interested in spending time and money on a solution which is actually a step back in motion smoothness reliability.
Well it's working perfectly in full screen exclusive mode so I guess I'll just be happy with that.

When you call Present, that's still a CPU command that could be blocked or delayed by the OS kernel for a few ms.
Old 28th November 2016, 10:05   #40843  |  Link
Q-the-STORM
Registered User
 
Join Date: Sep 2012
Posts: 174
Quote:
Originally Posted by madshi View Post
Quote:
Originally Posted by sauma144 View Post
@madshi
2) Is something like "sync playback to display" from Kodi planned for madVR?
I don't even know what that is/does?
It's basically what ReClock does with 23.976fps video on a 24Hz display: it slightly increases the video and audio speed (resampling the audio) to match the output refresh rate...
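For scale, the speed change involved is tiny; a quick back-of-the-envelope calculation (an illustration, assuming a 24.000Hz display and 24000/1001 fps content; not ReClock/madVR code):

Code:
#include <cmath>
#include <cstdio>

int main() {
    const double videoFps  = 24000.0 / 1001.0;     // NTSC film rate, ~23.976 fps
    const double displayHz = 24.0;
    const double speedup   = displayHz / videoFps; // = 1001/1000 = 1.001

    // playing the audio 0.1% faster shifts its pitch by this many cents
    // (unless the resampler also corrects pitch):
    const double cents = 1200.0 * std::log2(speedup); // ~1.73 cents

    std::printf("speed-up: %.4f (%.2f%% faster)\n", speedup, (speedup - 1) * 100);
    std::printf("pitch shift: %.2f cents\n", cents);
    return 0;
}

The ~0.1% speed-up and sub-2-cent pitch shift are far below audibility, which is why the trick works so well.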


Quote:
Originally Posted by Backflash View Post
I don't understand something: why do we need the chroma and downscaling tabs then, if it gets selected in the image upscaling menu anyway?
The downscaling tab settings are used when you are only downscaling. "Downscaling quality" in the upscaling tab is used when doubling to a higher resolution and then downscaling to your display size.
The chroma tab is only used to upscale 4:2:0 chroma to 4:4:4; "chroma quality" in the upscaling tab is used to upscale further. So the chroma tab is used with every video, regardless of resolution, while "chroma quality" is only used when you actually upscale the resolution.
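Put as a sketch, that selection logic looks like this (illustrative pseudo-C++ with made-up names, not actual madVR code):

Code:
#include <cstdio>

void use(const char* setting) { std::printf("uses: %s\n", setting); }

void selectScalers(int srcW, int srcH, int dstW, int dstH, bool doubling) {
    use("chroma tab (4:2:0 -> 4:4:4, applies to every video)");
    if (dstW > srcW || dstH > srcH) {                  // upscaling
        use("image upscaling tab");
        use("'chroma quality' (chroma upscaled further)");
        if (doubling && (2 * srcW > dstW || 2 * srcH > dstH))
            use("'downscaling quality' (doubling overshot the target)");
    } else if (dstW < srcW || dstH < srcH) {           // pure downscale
        use("image downscaling tab");
    }
}

int main() {
    // 720p doubled to 2560x1440, then downscaled to a 1080p display:
    selectScalers(1280, 720, 1920, 1080, /*doubling=*/true);
}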

Last edited by Q-the-STORM; 28th November 2016 at 10:10.
Old 28th November 2016, 10:10   #40844  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
And as with ReClock, that would be the job of the audio renderer, not the video renderer.

If I'm not mistaken, madVR supports the new planned "reclock" feature in sanear, so madVR "supports" this.
Old 28th November 2016, 10:15   #40845  |  Link
Backflash
Registered User
 
Join Date: Jan 2016
Posts: 52
Quote:
Originally Posted by Q-the-STORM View Post
The downscaling tab settings are used when you are only downscaling. "Downscaling quality" in the upscaling tab is used when doubling to a higher resolution and then downscaling to your display size.
The chroma tab is only used to upscale 4:2:0 chroma to 4:4:4; "chroma quality" in the upscaling tab is used to upscale further. So the chroma tab is used with every video, regardless of resolution, while "chroma quality" is only used when you actually upscale the resolution.
Yeah, I got it a minute after posting, thank you.
Basically it's a condensed doubling menu; it took me way too long to understand this simple concept.
Old 28th November 2016, 10:26   #40846  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Originally Posted by madshi View Post
Can't you use NNEDI3-16 instead? That's the one I left in for low quality SD material..
Personally, I have preferred super-xbr over NNEDI3 in the past, because the OpenCL used for NNEDI3 is generally a bit error-prone and has produced some issues, while super-xbr just works with no complications.
Most of what I watch is clean content though, and I've been wondering if I shouldn't figure out some pre-processing step to clean up some of the lower-quality sources.

So I'm not sure yet whether I'll miss it. I haven't had time to test NGU yet.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 28th November 2016 at 10:43.
Old 28th November 2016, 10:37   #40847  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,229
v0.91.3 works better than v0.91.1; however, I am still having issues with D3D.

I have two log files. The first is playback on a FreeSync monitor (it doesn't seem to like that at times), and the second is on a TV driven by the same computer. It works some of the time. If you put playback into a window and the computer turns off the screen as part of power saving, the video doesn't resume; it just freezes while the audio keeps playing. Then, when you close the player, it gets 'stuck' under Processes, and you have to hard-shutdown the computer, because a restart just hangs on the restart screen.

Yes, I am using Insider build 14971 x64 and Crimson driver 16.11.4, but these issues already presented themselves on an older build and driver when I upgraded from an R9 280X to the RX 480 with the FreeSync monitor.

Where do I upload the logs? The FreeSync one is 21.6 MB; the other, which only covers a very brief playback, window mode, and the monitor going off and back on again (where it kind of worked for the first time :S, but it did jam again), is 258 MB.

The issues only seem to affect Direct3D 11 mode, and more so under exclusive than windowed mode. The GPU can handle the settings very well; even in a warmish room it stayed a couple of degrees under 60°C during playback, so the fans didn't need to run.
Old 28th November 2016, 10:44   #40848  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by HillieSan View Post
New test

Code:
0.91.2 with SE and AG both off
-----------------------------------
low        5.3 ms
med        7.2 ms
high      13.3 ms with CQ: normal, DQ:low
very high 35.0 ms with CQ: normal, DQ:low
CQ = Chroma Quality, DQ = Downscale Quality.

Hmm, Chroma Quality set to Automatic is not clear to me. What are the criteria?
Ah, so the chroma quality probably was causing the slowdown. I've explained the criteria a couple of posts above. FWIW, I'd recommend setting DQ to normal or auto.

Quote:
Originally Posted by huhn View Post
Sorry, but this is going to hurt a lot on a UHD screen.

720x480 -> 5760x3840 -> 3840x2160 doesn't sound viable to me. I don't think it is possible to skip the last upscale step for every normal source.
How about 6x? 720x480 -> 4320x2880 -> 3840x2160. I've not even started working on larger than 2x NGU algorithms yet, but you will probably be surprised about how fast they are going to be.
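Just to put numbers on the intermediate images (plain arithmetic, for scale):

Code:
#include <cstdio>

int main() {
    auto mp = [](double w, double h) { return w * h / 1e6; };
    std::printf("source 720x480:    %5.1f MP\n", mp(720, 480));    //  0.3 MP
    std::printf("8x     5760x3840:  %5.1f MP\n", mp(5760, 3840));  // 22.1 MP
    std::printf("6x     4320x2880:  %5.1f MP\n", mp(4320, 2880));  // 12.4 MP
    std::printf("target 3840x2160:  %5.1f MP\n", mp(3840, 2160));  //  8.3 MP
    // the 8x intermediate carries ~2.7x the pixels of the UHD target,
    // the 6x intermediate only ~1.5x -- a much cheaper final downscale
    return 0;
}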

Quote:
Originally Posted by huhn View Post
I still think super-xbr should stay, for the simple reason that it doesn't highlight compression artefacts as much as NGU does on DVDs.
But isn't that what NNEDI16 is for? Yes, I know, NNEDI16 is slower than super-xbr. But DVDs are really low res. Can't every GPU these days perform NNEDI16 on DVDs? Blu-Rays are a different topic, but there you really want to use NGU, don't you?

Quote:
Originally Posted by flossy_cake View Post
When you call Present, that's still a CPU command that could be blocked or delayed by the OS kernel for a few ms.
That's why madVR presents (up to) 16 frames in advance. At least in FSE mode these 16 frames should then be flipped via VSync hardware interrupt. You can even suspend the media player process with the task manager. All pre-presented frames will still be displayed smoothly. With 24fps, basically playback will continue to run perfectly for 0.667 seconds after you've suspended the media player process. So with the current madVR presentation logic the CPU would have to be blocked for longer than 0.667 seconds for motion to start stuttering. Now compare that to a FreeSync/G-SYNC solution, where motion would already start stuttering if the CPU is blocked for just a couple of milliseconds!
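In D3D11/DXGI terms the present-ahead idea looks roughly like this (a minimal sketch assuming an already-created device and swap chain; madVR's actual queueing is far more elaborate):

Code:
#include <windows.h>
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void presentAhead(ID3D11Device* device, IDXGISwapChain* swapChain) {
    ComPtr<IDXGIDevice1> dxgiDevice;
    device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
    dxgiDevice->SetMaximumFrameLatency(16);   // DXGI allows at most 16

    for (;;) {
        // ... render the next video frame into the back buffer ...

        // Present(1, 0) queues the frame for the next VSync. The call only
        // blocks once 16 frames are queued; the GPU flips queued frames on
        // the VSync hardware interrupt, so even hundreds of ms of CPU
        // starvation can't disturb playback until the queue runs dry.
        swapChain->Present(1, 0);
    }
}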

Quote:
Originally Posted by Q-the-STORM View Post
It's basically what ReClock does with 23.976fps video on a 24Hz display: it slightly increases the video and audio speed (resampling the audio) to match the output refresh rate...
madVR has nothing to do with audio. So this is not something I can possibly implement.

Quote:
Originally Posted by nevcairiel View Post
Personally, I have preferred super-xbr over NNEDI3 in the past, because the OpenCL used for NNEDI3 is generally a bit error-prone and has produced some issues, while super-xbr just works with no complications.
Fair enough. So you'd also like super-xbr to stay (or rather come back)?

Quote:
Originally Posted by nevcairiel View Post
Most of what I watch is clean content though, and I've been wondering if I shouldn't figure out some pre-processing step to clean up some of the lower-quality sources.
Yes, I think we really do need some good compression artifact reducer.

Quote:
Originally Posted by burfadel View Post
v0.91.3 works better than v0.91.1; however, I am still having issues with D3D.

I have two log files. The first is playback on a FreeSync monitor (it doesn't seem to like that at times), and the second is on a TV driven by the same computer. It works some of the time. If you put playback into a window and the computer turns off the screen as part of power saving, the video doesn't resume; it just freezes while the audio keeps playing. Then, when you close the player, it gets 'stuck' under Processes, and you have to hard-shutdown the computer, because a restart just hangs on the restart screen.

Yes, I am using Insider build 14971 x64 and Crimson driver 16.11.4, but these issues already presented themselves on an older build and driver when I upgraded from an R9 280X to the RX 480 with the FreeSync monitor.

Where do I upload the logs? The FreeSync one is 21.6 MB; the other, which only covers a very brief playback, window mode, and the monitor going off and back on again (where it kind of worked for the first time :S, but it did jam again), is 258 MB.

The issues only seem to affect Direct3D 11 mode, and more so under exclusive than windowed mode. The GPU can handle the settings very well; even in a warmish room it stayed a couple of degrees under 60°C during playback, so the fans didn't need to run.
Is FreeSync active? Can you totally disable it? Does that help?
Old 28th November 2016, 10:56   #40849  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
Quote:
Originally Posted by madshi View Post
How about 6x? 720x480 -> 4320x2880 -> 3840x2160. I've not even started working on larger than 2x NGU algorithms yet, but you will probably be surprised about how fast they are going to be.
Of course this sounds a lot better, but we are not that far yet.

Quote:
But isn't that what NNEDI16 is for? Yes, I know, NNEDI16 is slower than super-xbr. But DVDs are really low res. Can't every GPU these days perform NNEDI16 on DVDs? Blu-Rays are a different topic, but there you really want to use NGU, don't you?
Not sure. I haven't used NGU properly yet; I still have a Polaris GPU, and I don't like to judge a scaling algorithm without watching a movie or something.

I was never a fan of NNEDI3 anyway. I'm currently using XBR for everything.

I haven't installed the new madVR version because I simply need XBR for now. But I didn't bring that up as a reason to leave XBR in, because I still hope this will change in the "future".

I could order a new RX 480 for my gaming PC and have it sent to you for a month or so.

I'm not a programmer, but this sounds like a lot of work for something that should work out of the box, and just the thought of the card arriving defective gives me nightmares...
Old 28th November 2016, 11:05   #40850  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
I've no idea why the RX480 is so slow with NGU. I can only imagine that the drivers have issues with what my PS3.0 shaders are doing, for some weird reason. I might still try running the NGU shaders in D3D11, but I'm not convinced it will make a difference. I'm hopeful the problem might be fixed by newer drivers at some point. If you can't wait, it might make sense to replace the RX480 with an NVidia card, for the time being...

That said, I'm considering bringing back super-xbr next weekend.
Old 28th November 2016, 11:12   #40851  |  Link
cork_OS
Registered User
 
Join Date: Mar 2016
Posts: 160
Quote:
Originally Posted by madshi View Post
Yes, I think we really do need some good compression artifact reducer.
Great news!
My GPU can run sXBR+SR1 or NGU-med, but not NNEDI3. So I vote for sXBR to remain until madVR gets its own denoiser/deblocker/demosquitoer.
BTW, the latest KNLMeansCL is fast enough for real-time SD (720x480) processing.
Old 28th November 2016, 11:13   #40852  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,903
Well, I ran into a huge problem with my GTX 960: very heavy slowdowns (avg. 60 ms or more) when using a 3D LUT, and this is completely gone with the RX 480.

This could come back with a 1060, so I'm in a pickle, one would say.
But that's not your problem.

Ohh, these drivers...
Old 28th November 2016, 11:14   #40853  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,229
Quote:
Originally Posted by madshi View Post
Is FreeSync active? Can you totally disable it? Does that help?
No, it doesn't. I actually had it turned off by mistake, probably from testing earlier. The screen is set to 72 Hz as the base frequency, though (Samsung S27F350).

Is madVR DirectX 11.0? What difference, in terms of madVR, would there be between 11, 11.1, 11.2 (from Windows 8.1), and 11.3 (Windows 10)? I'm not suggesting that's why it's not working; the later versions just add onto the earlier ones, so those are still supported.

Direct3D 11.4 (TH2, updated in RS1):
https://msdn.microsoft.com/en-us/lib...(v=vs.85).aspx

Direct3D 11.3 (Windows 10) additional features:
https://msdn.microsoft.com/en-us/lib...(v=vs.85).aspx

Direct3D 11.2 (Windows 8) additional features:
https://msdn.microsoft.com/en-us/lib...(v=vs.85).aspx

Direct3D 11.1 (Windows 7 with platform update, and later):
https://msdn.microsoft.com/en-us/lib...(v=vs.85).aspx

Last edited by burfadel; 28th November 2016 at 11:26.
Old 28th November 2016, 11:19   #40854  |  Link
flossy_cake
Registered User
 
Join Date: Aug 2016
Posts: 606
Quote:
Originally Posted by madshi View Post
That's why madVR presents (up to) 16 frames in advance.
Now compare that to a FreeSync/G-SYNC solution, where motion would already start stuttering if the CPU is blocked for just a couple of milliseconds!
You can prerender frames in G-SYNC mode too (in the NVidia control panel, "max prerendered frames"), and it won't stutter anyway; you're still thinking in VSync logic. In G-SYNC you can have frame delivery like 38ms -> 42ms -> 40ms -> 35ms -> 42ms and it's perfectly smooth, because the inter-frame delta is so small compared to the average. But one dropped frame at 60Hz is disastrous, because it creates a massive 33.4ms spike delta.
Old 28th November 2016, 11:22   #40855  |  Link
HillieSan
Registered User
 
Join Date: Sep 2016
Posts: 176
Quote:
Originally Posted by madshi View Post
I've no idea why the RX480 is so slow with NGU. I can only imagine that the drivers have issues with what my PS3.0 shaders are doing, for some weird reason. I might still try running the NGU shaders in D3D11, but I'm not convinced it will make a difference. I'm hopeful the problem might be fixed by newer drivers at some point. If you can't wait, it might make sense to replace the RX480 with an NVidia card, for the time being...

That said, I'm considering bringing back super-xbr next weekend.
Please try D3D11. Currently, madVR works best with D3D11 on the RX480. D3D9 has issues with FSE; it only works (though not stably) when I clear all 'trade quality for performance' options.
Old 28th November 2016, 11:29   #40856  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Originally Posted by madshi View Post
Fair enough. So you'd also like super-xbr to stay (or rather come back)?
Like I said before, I haven't had time to test NGU yet, so I can't really say if it would cover all my needs. But in general, an alternative without the extra complexity of OpenCL might not be a bad idea.

Quote:
Originally Posted by flossy_cake View Post
it won't stutter anyway; you're still thinking in VSync logic. In G-SYNC you can have frame delivery like 38ms -> 42ms -> 40ms -> 35ms -> 42ms and it's perfectly smooth, because the inter-frame delta is so small compared to the average.
No, it's you who is still thinking in gaming logic. Video frames are from an exact point in time; if you don't manage to present a frame at that exact point in time, it (micro-)stutters. No magic in the world can fix that. It needs to be shown at the precise moment it was meant to be shown.
Matching V-SYNC to the video frame rate gives you that result perfectly. And as it happens, typical monitor refresh rates match typical video frame rates. So all is well as it is!

It's not like V-SYNC drops plenty of frames; if your system is set up properly, you get flawless playback without a single dropped frame, so absolutely no stuttering.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 28th November 2016 at 11:36.
Old 28th November 2016, 11:31   #40857  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by cork_OS View Post
BTW, the latest KNLMeansCL is fast enough for real-time SD (720x480) processing.
KNLMeansCL is a good denoiser, I think, but I'm not sure it's the right algo for compression artifact removal.

Quote:
Originally Posted by huhn View Post
Well, I ran into a huge problem with my GTX 960: very heavy slowdowns (avg. 60 ms or more) when using a 3D LUT, and this is completely gone with the RX 480.
Argh.

Quote:
Originally Posted by burfadel View Post
No, it doesn't. I actually had it turned off by mistake, probably from testing earlier. The screen is set to 72 Hz as the base frequency, though (Samsung S27F350).
Ok. So the problem you're having is only in combination with the display being sent to sleep? As long as that doesn't happen everything's fine?

Quote:
Originally Posted by burfadel View Post
Is madVR DirectX 11.0? What difference, in terms of madVR, would there be between 11, 11.1, 11.2 (from Windows 8.1), and 11.3 (Windows 10)?
Rendering is a mixture of D3D9, D3D11, DirectCompute and OpenCL. Presentation can be done in either D3D9 or D3D11. Whether it's 11.1, 11.2 or 11.3 makes no difference for me.

Quote:
Originally Posted by flossy_cake View Post
You can prerender frames in G-SYNC mode too (in the NVidia control panel, "max prerendered frames")
And if you do prerender 3 frames in advance and present them in advance with G-SYNC, at which exact interval are they displayed on screen?

Quote:
Originally Posted by flossy_cake View Post
In G-SYNC you can have frame delivery like 38ms -> 42ms -> 40ms -> 35ms -> 42ms and it's perfectly smooth, because the inter-frame delta is so small compared to the average.
No, 35ms vs 42ms is not perfectly smooth at all. It might be smooth with a game, but not with a video. It seems to me you don't really understand the difference. With a game, if you have 10ms -> 10ms -> 10ms -> 40ms, then a car moving from the left side of the screen to the right will move a little -> a little -> a little -> a lot. Because of that, it's not a big problem if the exact frame intervals vary a bit. With video the situation is different: the position of the car is already defined by the recorded video. If you have 10ms -> 10ms -> 10ms -> 40ms, that is a catastrophe for video, because the car moves the same amount in each frame. For video we need *perfect* frame intervals, and unless G-SYNC can deliver that, it's useless for video.
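To put numbers on it (an illustration of the argument, assuming a car panning at a constant 1000 px/s in a 25fps video):

Code:
#include <cstdio>

int main() {
    // at 25fps the car has moved exactly 40 px from one frame to the next,
    // because the motion is baked into the recording
    const double pxPerFrame = 40.0;
    const double intervalsMs[] = {10, 10, 10, 40}; // jittered display times

    for (double ms : intervalsMs)
        std::printf("frame shown %2.0f ms -> perceived speed %4.0f px/s\n",
                    ms, pxPerFrame / ms * 1000.0);
    // prints 4000, 4000, 4000, 1000 px/s: a 4:1 speed wobble (visible
    // stutter) with video, whereas a game rendered at those intervals would
    // simply move the car less or more per frame and stay smooth
    return 0;
}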
Old 28th November 2016, 11:59   #40858  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,229
Quote:
Originally Posted by HillieSan View Post
Please try D3D11. Currently, madVR works best with D3D11 on the RX480. D3D9 has issues with FSE; it only works (though not stably) when I clear all 'trade quality for performance' options.
That's funny! It's much the same for me, except D3D11 doesn't work well; D3D9 actually runs quite beautifully.

It's not just sleep that can trigger the D3D11 issue; it can happen at any time. It seems to only affect D3D11; D3D9 is either unaffected or less affected. I don't mean full sleep mode either, just turning off the monitor after 5 minutes (or setting it to trigger at a button press through power options). I think it's because the card is put to sleep when nothing needs to be displayed, and madVR doesn't seem to like that.

In terms of performance though, the new version is much better on the RX 480. You mentioned a workaround in an earlier build for AMD cards and D3D11; I am wondering whether that workaround affects the RX 480, or whether it is even still required, if it is still in place.

Last edited by burfadel; 28th November 2016 at 12:04.
Old 28th November 2016, 12:13   #40859  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by madshi View Post
You can disable chroma NGU by setting "chroma quality" to "normal". You can select luma NGU directly, not sure what you mean there.
Oh OK, cool, this wasn't documented anywhere. I'll do my homework and read the OSD then.

Quote:
Originally Posted by madshi View Post
Is it? Why would the "image downscaling" settings page play any role when we actually upscale the video? I think you're stuck in the old way the settings worked. Once you understood it, it made sense, but it was kinda backwards. The new settings are very strict: The "image downscaling" options are now only used when we actually downscale the video. Every option that plays a role when we upscale the image is now actually in the "image upscaling" settings page. That is not confusing but more logical, IMHO, if you try to forget what you were used to from older madVR builds and look at this with a fresh set of eyes.
Well, I use CR AR LL to downscale doubled 720p@1080p, and SSIM2D100LL+50%AB to downscale SD. Is this not possible anymore? No more AB for a doubled upscale? SR needs it badly, you know.

Why not use the "image downscaling" settings when downscaling a doubled upscale? I really don't see the need for an extra "downscale quality" option there, especially when all it offers is Bicubic150 AR and SSIM1D100 without AB.

Most ppl messing with this will create automatic profiles anyway, and just as offering an option to set Jinc upscaling when doubling didn't make sense, I don't see the need for a "downscale quality" preset when the "image downscaling" panel will remain completely unused.

You said "For downscaling after doubling in most cases Bicubic150 AR should be good enough", but I certainly don't want that, and I don't find SSIM2D overkill when downscaling a doubled upscale: it's very sharp and retains a lot of resolution that Bicubic150 is certainly unable to provide. And again, I very much need AB there when using SR on SD.

Quote:
Originally Posted by madshi View Post
JFWIW, did you ask for super-xbr to stay when I asked for feedback when releasing v0.90.0/1?
I don't recall having to ask for things to stay, hah. I only thought that SR would be ditched, so I whined about that; several ppl claimed that sxbr worked better than NGU for them on noisy low-res sources, so I didn't think you would dismiss it altogether. Anyway, I can see that others raised the subject, so I fully agree that sxbr can be useful at times.

Last edited by leeperry; 28th November 2016 at 12:20.
Old 28th November 2016, 12:13   #40860  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Quote:
Originally Posted by flossy_cake View Post
Here's one example, used by the Frafs Test Pattern app (DX9). You can use the command-line argument to select an fps limit to engage the hires timer, and turn on the scrolling bar to check for dropped frames.

Source code of the area where the hires timer is implemented:
https://sourceforge.net/p/frafstestp...ttern.cpp#l872
https://sourceforge.net/p/frafstestp...HiResTimer.cpp

Note: CPU usage is quite high with the hires timer (~15% CPU load). I was able to eliminate it by adding a 1ms sleep at line 916 of FrafsTestPattern.cpp.

Also
https://msdn.microsoft.com/en-us/lib...(v=vs.85).aspx
This is a software timer that relies on a user-mode thread being woken up at precisely the right time, which is not a viable option on a non-realtime OS such as Windows. The frame interval needs to be managed purely in hardware (i.e. by the GPU, as with fixed VSync) to achieve high-quality playback; that is the only way to guarantee high precision of frame presentation times.
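This is easy to verify; a small Win32 test of user-mode timer precision (an illustration only; link winmm.lib):

Code:
#include <windows.h>
#include <cstdio>

int main() {
    timeBeginPeriod(1);                  // request 1 ms scheduler granularity
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    double worst = 0;
    for (int i = 0; i < 1000; ++i) {
        QueryPerformanceCounter(&t0);
        Sleep(1);                        // ask to wake up after exactly 1 ms
        QueryPerformanceCounter(&t1);
        double ms = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
        if (ms > worst) worst = ms;
    }
    std::printf("worst Sleep(1) latency: %.2f ms\n", worst);
    // on a loaded system the worst case routinely reaches several ms --
    // far too coarse to time a video frame, which is why the timing must
    // live in GPU hardware (the VSync interrupt), not in a software thread
    timeEndPeriod(1);
    return 0;
}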

Quote:
Originally Posted by flossy_cake View Post
I think you are taking the frame-time delta of 33.4ms (one dropped frame at 60Hz) and trying to apply it to a delta of +/- 5-10ms, which suggests you don't own one of these variable-refresh monitors. If you did, you would already know that in D3D applications, while the fps/Hz meter fluctuates all over the place, the motion is still seamless, because the inter-frame delta is still much, much lower than one dropped frame at 60Hz. You are still thinking of everything snapping to the nearest 16.7ms, which is not the case here. A rapid rise from, say, 24 to 90Hz in G-SYNC will still have a very low average inter-frame delta, nowhere near the 33.4ms caused by one dropped frame at 60Hz.
A delta of 5-10 ms is unacceptable for video playback (which is not the same as video games), and would be a regression from the current fixed VSync approach. I was not assuming deltas of 33.4ms.

Quote:
Originally Posted by flossy_cake View Post
Of course there are no guarantees. At the end of the day the CPU is still running the application regardless of the scenario and relies on not being interrupted by other processes. If the vsync implementation is done by the CPU (such as waiting for a vsync signal) then it will be vulnerable to the same problem if some other process interrupts it.
Quote:
Originally Posted by flossy_cake View Post
When you call Present, that's still a CPU command that could be blocked or delayed by the OS kernel for a few ms.
No, because these calls are made in advance. madVR is telling the GPU "hey, at the next VSync, present this frame, then, at the VSync after that, present this other frame, etc.". After that madVR's job is done, and the GPU just goes through the queue of pre-rendered frames on every VSync event, purely in hardware, with no software interrupts involved. As madshi explained, if the present call is delayed by a few milliseconds (or even tens of milliseconds), nothing bad happens, as long as there are still frames in the queue. In other words, frames are buffered in hardware, precisely to avoid the problem you're mentioning. You can even configure the size of that queue in the madVR settings.

Now, in theory it should be possible to do that with G-Sync too, but now you can't simply tell the GPU "hey, present this frame on next VSync" anymore, because there is no periodic VSync event anymore. Instead, madVR needs to tell the GPU "present this frame at this exact time" (relative to the GPU's hardware clock, presumably). Problem is, there is no API to do that. Hence, madshi cannot properly implement G-Sync in madVR.
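Sketched as a hypothetical interface, the missing piece would look something like this (entirely made up; no such DXGI API exists, which is exactly the problem):

Code:
#include <windows.h>

// HYPOTHETICAL -- nothing like this exists in DXGI today.
struct IScheduledPresentSwapChain /* : IDXGISwapChain */ {
    // read the GPU's hardware clock, so presentation times can be
    // expressed against the clock the display engine actually uses
    virtual HRESULT GetGpuClock(LONGLONG* now100ns) = 0;

    // queue a frame to be flipped by the display hardware when the GPU
    // clock reaches 'when100ns', and hold it on screen for 'duration100ns'
    // -- the two guarantees madshi's FAQ asks for
    virtual HRESULT PresentAt(LONGLONG when100ns, LONGLONG duration100ns) = 0;
};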

Note that none of the above applies to video games. Which is why you should stop making comparisons to video games. It's not the same use case at all and the constraints are fundamentally different.

Quote:
Originally Posted by nevcairiel View Post
Matching V-SYNC to the video frame rate gives you that result perfectly. And as it happens, typical monitor refresh rates match typical video frame rates. So all is well as it is!

It's not like V-SYNC drops plenty of frames; if your system is set up properly, you get flawless playback without a single dropped frame, so absolutely no stuttering.
That's not entirely true, though. In practice it is very difficult on a PC to perfectly match the display refresh rate to the actual playback rate of the video, because playback is slaved to the audio clock, not the video clock. So you will always get a slight mismatch, which causes quite a few dropped/repeated frames over the duration of a full-length movie. This is why we have things like ReClock or Smooth Motion, but those are basically hacks that shouldn't be needed in the first place.

G-Sync could help with this, because it would allow madVR to automatically compensate for the clock mismatch by slightly adjusting presentation times based on an observed estimate (basically like ReClock, but operating on video timings, not audio timings). But, as madshi already pointed out in the new FAQ, that's only viable if there is a way to use G-Sync while still presenting multiple frames in advance (and having the GPU time the refresh in hardware); otherwise it's going to jitter like crazy, and the slightest delay in madVR's presentation-thread scheduling will result in a visible hiccup.
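The size of that clock mismatch problem is easy to estimate (assuming, for illustration, a 50 ppm error between the audio and display clocks):

Code:
#include <cstdio>

int main() {
    const double fps      = 24000.0 / 1001.0; // 23.976 fps content
    const double ppm      = 50e-6;            // assumed clock mismatch
    const double movieSec = 2 * 3600.0;       // two-hour movie

    // every second, playback drifts fps*ppm frames relative to the display
    double secPerSlip = 1.0 / (fps * ppm);    // ~834 s per one-frame slip
    double perMovie   = movieSec / secPerSlip;
    std::printf("one dropped/repeated frame every %.0f s -> %.1f per movie\n",
                secPerSlip, perMovie);
    // even a good 50 ppm match yields a hiccup roughly every 14 minutes --
    // which is what ReClock, Smooth Motion, or a timed-present API address
    return 0;
}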