Old 16th November 2011, 14:58   #10941  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
madVR checks whether the registry value "HKCU\Software\DScaler5\MpegVideo Filter\Inverse Telecine" exists. If it does, it considers DScaler to be the IVTC Mod. If you want to go back to the non-modded version, maybe deleting that registry value will help?
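
If you want to check (or script) that yourself, here is a minimal sketch using the plain Win32 registry API (just an illustration, not the exact madVR code):

Code:
#include <windows.h>
#include <stdio.h>

int main()
{
    HKEY key;
    // madVR treats DScaler as the IVTC Mod when this value exists
    if (RegOpenKeyExA(HKEY_CURRENT_USER,
                      "Software\\DScaler5\\MpegVideo Filter",
                      0, KEY_QUERY_VALUE | KEY_SET_VALUE, &key) == ERROR_SUCCESS)
    {
        bool isMod = RegQueryValueExA(key, "Inverse Telecine",
                                      NULL, NULL, NULL, NULL) == ERROR_SUCCESS;
        printf(isMod ? "IVTC Mod detected\n" : "plain DScaler assumed\n");
        // deleting the value should revert the detection:
        // RegDeleteValueA(key, "Inverse Telecine");
        RegCloseKey(key);
    }
    return 0;
}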

But why are you using DScaler in the first place, if the IVTC Mod doesn't work for you? It's very much outdated, based on rather old and buggy decoder sources. I'd suggest using LAV Video Decoder or the internal madVR decoder instead. Both are better MPEG2 decoders, IMHO; they just don't do IVTC.
madshi is offline   Reply With Quote
Old 16th November 2011, 15:19   #10942  |  Link
pie1394
Registered User
 
Join Date: May 2009
Posts: 212
Quote:
Originally Posted by madshi View Post

It's not a question of easy or not. Changing this would indeed be easy. The question for me is whether it generally makes sense to deinterlace 1080i60 content if the GPU outputs 1080i60 anyway. Maybe it does make sense, maybe not. I'm not 100% sure. Any more opinions on that?
As long as the deinterlacing process does not touch the content of the original decoded fields, it won't affect visual quality; it just wastes GPU processing power generating deinterlaced frames that will never be output. When the original field contents are touched, I can often see the reduced visual quality on an interlaced CRT / PDP display.

If madVR skips the deinterlacing process entirely, then in theory a 25 / 30 fps frame presentation rate should be fine for 1080i50 / 1080i60 content during playback. But my very old tests with madVR showed jaggy moving objects on NVIDIA GPUs (both G80 and GT100). Maybe it would work if each interlaced frame were presented twice?

For odd-field-first content played without deinterlacing, the frames must no doubt still be presented odd field first, since that field has a smaller presentation timestamp than the matching even field output by the MPEG video decoder.

So on my HTPC (Win7 x64, GTX260+) with a Hitachi 42" PDP P42A01A, it is not a problem to combine frame-rate-doubling deinterlacing with output in the original content's interlaced mode. The video output mode is always set to 1920x1080i60 to play 29.97/30 fps frame-rate-doubled deinterlaced MPEG-2 content.

Yet I did a comparison of the following two playback methods, and found that method 1 renders human skin tones somehow more vivid and three-dimensional:

[1] LAVVideo CUDA MPEG2 (+deinterlace) + madVR0.79
[2] LAVVideo MPEG2 + madVR0.79 (+ DXVA2 deinterlace)

Yet I cannot tell the two methods apart on an Ion + Samsung 203B LCD monitor (6-bit TN + FRC).

Thus I suspect that NVIDIA's driver does higher-precision video post-processing on the GTX260+, much like madVR does. Otherwise there is no way the same madVR scaling method could produce such a difference.

Last edited by pie1394; 16th November 2011 at 15:23.
pie1394 is offline   Reply With Quote
Old 16th November 2011, 15:27   #10943  |  Link
iSeries
Registered User
 
Join Date: Jan 2009
Posts: 625
Quote:
Originally Posted by madshi View Post
According to the log file, the splitter outputs really bad timestamps. Often multiple frames get exactly the same timestamp. That's a splitter problem.
Hi,

I was reluctant to call it a splitter problem, since I am using LAV like many others here. I don't understand (1) why playing the m2ts directly is smooth, and (2) why, after reverting to madVR 0.75 (the last version I have) and playing the index.bdmv or mpls, playback is as smooth as it should be. Something must have changed between versions 0.75 and 0.78 / 0.79?
iSeries is offline   Reply With Quote
Old 16th November 2011, 15:33   #10944  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by madshi View Post
It works perfectly fine with my HD 6850 and HD 4770 on XPSP3. Haven't tested NVidia on XPSP3 yet, but in theory it should work, too. LAVVideo does not deinterlace itself (unless you're using CUVID). Normally the renderer does the deinterlacing. CUVID is the exception.

So did you use CUVID? Or did you use software decoding?
I had not tried it yet when I posted this, but I just did, and indeed, as you said, I do need deinterlacing to be performed before my Avisynth script runs, so either way I would have no use for deinterlacing done by the VR.

CUVID does an amazing job on 29.97 true interlaced DVDs, but IME it's a major failure on 1080i: every scene change produces nasty artifacts.

Anyway, as I feared, DXVA2 doesn't seem to exist in XP, so even if ATI converts the calls to DXVA1, all I get on my 8800GS is a black screen, and not even madVR's OSD will appear. If I mark this 29.97fps DVD as progressive, then I get the usual combing.

I might soon stop the XPSP3 testing, because I want to benefit from the HPET for audio purposes.

Last edited by leeperry; 16th November 2011 at 15:40.
leeperry is offline   Reply With Quote
Old 16th November 2011, 15:41   #10945  |  Link
Gser
Registered User
 
Join Date: Apr 2008
Posts: 418
Quote:
Originally Posted by madshi View Post
madVR checks whether the registry value "HKCU\Software\DScaler5\MpegVideo Filter\Inverse Telecine" exists. If it does, it considers DScaler to be the IVTC Mod. If you want to go back to the non-modded version, maybe deleting that registry value will help?

But why are you using DScaler in the first place, if the IVTC Mod doesn't work for you? It's very much outdated, based on rather old and buggy decoder sources. I'd suggest using LAV Video Decoder or the internal madVR decoder instead. Both are better MPEG2 decoders, IMHO; they just don't do IVTC.
Sweet, thanks, deleting the registry value worked. I've noticed that using DScaler's reference IDCT results in a slightly sharper and more detailed picture, while the ones in ffmpeg produce a softer picture. I don't know why the mod's IVTC feature never works for me; I get lots of combing and mismatched fields with it.

Last edited by Gser; 16th November 2011 at 15:47.
Gser is offline   Reply With Quote
Old 16th November 2011, 15:43   #10946  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
Well, madVR calculates the number of MB it allocates, but this includes both GPU RAM and PCIe RAM. I'm not sure which textures/surfaces Direct3D allocates in which RAM. I suppose GPU-Z and NVidia Inspector etc report only GPU RAM usage, while madVR counts all the RAM it uses in either GPU RAM or PCIe RAM. I'm not sure if I can improve that. Maybe I can figure out which resources are allocated where and take that into account. Not sure...
You've mentioned this before about PCIe RAM. What exactly is PCIe RAM, and where is it being allocated? It certainly doesn't seem to be going into system RAM or GPU RAM.

For example, with Queue maxed out on 1440x1080i30 16:9:

madVR reports 1188MB/512MB GPU RAM in Use.
Actual GPU RAM use is only 392MB.
MPC-HC is only using 115MB.

Assuming madVR is reporting GPU + PCIe RAM, is there any reason that combined number should be kept below GPU RAM capacity? Is it just some sort of theoretical maximum size that madVR would use in cases of massive upscaling/downscaling?

This also has me curious whether madVR in any way confirms that 16-bit/32-bit FP textures are being stored properly by the GPU. Seeing as some other people have much higher GPU RAM usage numbers in GPU-Z (a side effect of Aero?), hopefully my low numbers don't mean the GPU is actually holding 8-bit RGB textures in the queues?

Quote:
Originally Posted by madshi View Post
@cyberbeing, does deinterlacing work for you now? I think it didn't work for you in v0.78 due to GPU RAM problems? Which queue sizes are working for you now?
It doesn't crash now, which is a success, but I still need to set all scaling to Bilinear and disable the 3DLUT for it to be usable without delayed/dropped frames on 1440x1080i30 16:9 video.

Could you add an option under Trade Quality for Performance, something like 'Use Bilinear and Shaders Only when Deinterlacing or 60fps+ framerates', to temporarily set these options when deinterlacing is active and/or for high-framerate content in general? That's the only practical way I can see to keep deinterlacing enabled in madVR. What this comes back to is that my GPU can barely play back 60fps content with madVR, even with progressive video.

Another thing I've noticed is that performance is much, much slower (3-6ms vs 0.15-0.25ms deinterlacing) if Inverse Telecine is enabled in the NVCPL. Since madVR doesn't seem to support decimation like VMR (?), I guess everybody should disable that setting? There's no change in quality as far as I can tell using madVR, just slower performance. I'll later test a DVD which VMR9 decimates correctly with the NVIDIA PureVideo MPEG2 decoder (even switching between 24p and 30p on hybrid content) with that 'Inverse Telecine' setting enabled, to see if anything changes in madVR.

As for queue sizes for deinterlacing:
Absolute minimum: 8CPU/6GPU (below this results in massive dropped frames)
Absolute maximum: 32CPU/24GPU
Requirement 1: Inverse Telecine is disabled in NVCPL
Requirement 2: CPU Queue is at least +2 greater than GPU queue

Last edited by cyberbeing; 16th November 2011 at 15:48.
cyberbeing is offline   Reply With Quote
Old 16th November 2011, 15:46   #10947  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by pie1394 View Post
As long as the deinterlacing process does not touch the content of the original decoded fields, it won't affect visual quality; it just wastes GPU processing power generating deinterlaced frames that will never be output. When the original field contents are touched, I can often see the reduced visual quality on an interlaced CRT / PDP display.
I don't know whether DXVA deinterlacing touches the original fields or not. Haven't tested that. Might depend on ATI/NVidia.

Quote:
Originally Posted by pie1394 View Post
If madVR skips the deinterlacing process entirely, then in theory a 25 / 30 fps frame presentation rate should be fine for 1080i50 / 1080i60 content during playback. But my very old tests with madVR showed jaggy moving objects on NVIDIA GPUs (both G80 and GT100). Maybe it would work if each interlaced frame were presented twice?
In D3D9 exclusive mode with "prepresented frames" active, madVR will already show each weaved frame twice, provided that the timestamps are clean. If the timestamps jitter a lot, nasty things like 2:2:2:3:1:2:2:2 can happen.
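
To illustrate with a toy model (not the actual madVR code): if each frame is shown for round(frame duration / refresh period) vblanks, clean 30fps timestamps on a 59.94Hz display give a steady 2:2:2:2, while jittered ones produce runs like 2:2:3:1:2:

Code:
#include <cmath>
#include <cstdio>

int main()
{
    const double refresh = 1.0 / 59.94;  // display refresh period in seconds
    // jittery ~30fps timestamps: the 4th stamp comes late, the 5th right behind it
    const double pts[] = { 0.0, 0.0333, 0.0667, 0.1167, 0.1334, 0.1668 };
    for (int i = 0; i + 1 < 6; i++)
    {
        long vblanks = std::lround((pts[i + 1] - pts[i]) / refresh);
        std::printf("frame %d: shown for %ld vblank(s)\n", i, vblanks);
    }
    return 0;  // prints 2, 2, 3, 1, 2 -- the cadence breaks where the jitter is
}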

Quote:
Originally Posted by pie1394 View Post
For odd-field-first content played without deinterlacing, the frames must no doubt still be presented odd field first, since that field has a smaller presentation timestamp than the matching even field output by the MPEG video decoder.
If you turn deinterlacing off, madVR does *nothing*. If the fields are swapped then madVR does not correct that.

Quote:
Originally Posted by pie1394 View Post
So on my HTPC (Win7 x64, GTX260+) with a Hitachi 42" PDP P42A01A, it is not a problem to combine frame-rate-doubling deinterlacing with output in the original content's interlaced mode. The video output mode is always set to 1920x1080i60 to play 29.97/30 fps frame-rate-doubled deinterlaced MPEG-2 content.
You just said above that activating deinterlacing would be a waste of GPU processing power. But now you're saying in your own setup you have deinterlacing active even when using 1080i60 output? That's kinda confusing.

Quote:
Originally Posted by pie1394 View Post
Yet I did a comparison of the following two playback methods, and found that method 1 renders human skin tones somehow more vivid and three-dimensional:

[1] LAVVideo CUDA MPEG2 (+deinterlace) + madVR0.79
[2] LAVVideo MPEG2 + madVR0.79 (+ DXVA2 deinterlace)
Can you make comparison screenshots, please?

Quote:
Originally Posted by iSeries View Post
I was reluctant to call it a splitter problem, since I am using LAV like many others here. I don't understand (1) why playing the m2ts directly is smooth, and (2) why, after reverting to madVR 0.75 (the last version I have) and playing the index.bdmv or mpls, playback is as smooth as it should be. Something must have changed between versions 0.75 and 0.78 / 0.79?
The splitter (or decoder??) is definitely producing bad timestamps. It is possible, though, that some older madVR versions handled the problem better than the latest builds. I recently did a change in this area, which may have made things worse for you. I'd like to debug that. Can you please create a log with v0.79? I know you already created one with v0.78, but it didn't contain enough information. I've added more logging to v0.79, so that should help.

Anyway, I still recommend trying to figure out where the bad timestamps are coming from. It would be better to have that problem fixed, too, instead of relying on madVR to work around it somehow.

Quote:
Originally Posted by leeperry View Post
That's not true. That website you're quoting is either wrong or outdated. EVR + DXVA2 gets retrofitted into XP by installing .NET 3.0 or 4.0. On my PC (with an ATI GPU) DXVA2 deinterlacing works exactly the same as in win7, while that link you posted says "The DXVA 2 API requires Windows Vista or later". Clearly the link is wrong.

Quote:
Originally Posted by leeperry View Post
so even if ATI converts the calls to DXVA1, all I get on my 8800GS is a black screen, and not even madVR's OSD will appear.
Do you get DXVA2 deinterlacing with EVR? Have you installed .NET 3.0?

Anybody using NVidia on XP? Does madVR's DXVA2 deinterlacing work for you?
madshi is offline   Reply With Quote
Old 16th November 2011, 15:48   #10948  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Originally Posted by cyberbeing View Post
Another thing I've noticed is that performance is much, much slower (3-6ms vs 0.15-0.25ms deinterlacing) if Inverse Telecine is enabled in the NVCPL. Since madVR doesn't seem to support decimation like VMR (?), I guess everybody should disable that setting? There's no change in quality as far as I can tell using madVR, just slower performance.

Disabling Inverse Telecine in the NVIDIA Control Panel can result in artifacts on telecined discs: without it, the driver might do full deinterlacing instead of just weaving the fields back together properly, which could degrade the image.
As always, however, trust your eyes over anything else.
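
To spell out the difference (a rough sketch of mine, not what the driver actually does internally): proper inverse telecine just weaves the two source fields of a film frame back together losslessly, while full deinterlacing interpolates and can soften a frame that was progressive to begin with:

Code:
#include <cstdint>
#include <cstring>

// Weave: reassemble a progressive frame from its two fields, byte for byte.
// A full deinterlacer would instead interpolate the "missing" lines of each
// field, degrading content that was really progressive film underneath.
void weave(const uint8_t* topField, const uint8_t* bottomField,
           uint8_t* frame, int width, int height, int stride)
{
    for (int y = 0; y < height; y++)
    {
        const uint8_t* src = (y & 1) ? bottomField + (y / 2) * stride
                                     : topField    + (y / 2) * stride;
        std::memcpy(frame + y * stride, src, width);
    }
}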
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote
Old 16th November 2011, 15:50   #10949  |  Link
dansrfe
Registered User
 
Join Date: Jan 2009
Posts: 1,210
I see dropped frames when upscaling more than approximately 1280 x 720. No delayed frames or presentation glitches.
GPU: ATI Mobility Radeon HD 4570
OS: Windows 7 x64
Movie Resolution: 704 x 480 (16:9) DVD
Output Resolution: 853 x 480 (default 16:9) or upscaled to 1920 x 1080
Display Refresh Rate: 59.940Hz
Decoder: ffdshow decoder

I'm deinterlacing from 29.970 interlaced (3:2 pulldown, but I guess that doesn't make a difference).

Scaling: Spline @ 3 taps on all. Scaling in linear light off.

Basically, beyond a certain upscaling resolution the queues progressively fill less and less, until they sit at 0-2 roughly 400 horizontal pixels before I reach the full 1920-pixel fullscreen width. I'm guessing I have to get a faster GPU to fix this, or is there hope in waiting for more speed improvements in madVR's DXVA2 implementation?

Is there an equivalent of CUDA on ATI that you can utilize to attain GPU acceleration on ATI cards?

Also, I have noticed that madVR is still not accurately detecting whether the input video needs to be deinterlaced or not. I just played a 25fps PAL VOB and it showed that deinterlacing was on. I'm guessing this has to do with incorrect flag information in the VOB.

Also, the GPU RAM usage section in the OSD seems to be a bit off, or something else is happening with my GPU drivers. When I fill up all the queues on a 1080p file or something large, it shows 1068/512. My card is a 1GB card, but I'm used to seeing it reported as 512MB everywhere (e.g. in the driver control panel, Windows display information, etc). What does this mean?

Last edited by dansrfe; 16th November 2011 at 16:04.
dansrfe is offline   Reply With Quote
Old 16th November 2011, 15:53   #10950  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Originally Posted by madshi View Post
That's not true. That website you're quoting is either wrong or outdated. EVR + DXVA2 gets retrofitted into XP by installing .NET 3.0 or 4.0. On my PC (with an ATI GPU) DXVA2 deinterlacing works exactly the same as in win7, while that link you posted says "The DXVA 2 API requires Windows Vista or later". Clearly the link is wrong.
The DXVA2 DDI requires Windows Vista and a WDDM display driver; however, you're not calling the DDI directly, you call the Direct3D9 interfaces.

http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx

"If the graphics drivers uses the older Windows XP Display Driver Model (XPDM), DXVA 2 API calls are converted to DXVA 1 DDI calls. "

So, while XP doesn't support DXVA2 natively, if you install the .NET Framework 3 or 4 (not sure which version), there will be an emulation layer in place that can convert DXVA2 API calls to DXVA1 hardware calls.
Note that this also applies to very old drivers on Vista/7 which didn't implement WDDM yet. However, those drivers won't really be around anymore.

I think when you query for a DXVA2 device, there should be some information flag somewhere that tells you if it's an emulated device based on DXVA1.
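
If my memory of dxva2api.h is right, it's the DeviceCaps field of the video processor caps; a sketch (vpService, guid and desc are assumed to be set up beforehand):

Code:
#include <dxva2api.h>

// Returns true when a "DXVA2" video processor is really the DXVA1
// emulation layer (XPDM drivers, or XP with the retrofitted EVR bits).
bool IsEmulatedDxva1(IDirectXVideoProcessorService* vpService,
                     REFGUID guid, const DXVA2_VideoDesc& desc)
{
    DXVA2_VideoProcessorCaps caps = {};
    if (FAILED(vpService->GetVideoProcessorCaps(guid, &desc,
                                                D3DFMT_X8R8G8B8, &caps)))
        return false;
    return (caps.DeviceCaps & DXVA2_VPDev_EmulatedDXVA1) != 0;
}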

PS:
Using DXVA2 *decoding* on XP is another matter and a lot harder (close to impossible); video processing, however, is possible.

Quote:
Originally Posted by dansrfe View Post
Is there an equivalent of CUDA in ATI that you can utlize to attain GPU acceleration in ATI cards?
ATI doesn't have a real equivalent. They have OpenCL-based APIs to decode video, but they failed to properly link that to D3D, so it's not really possible to couple it with a Direct3D-based rendering chain.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 16th November 2011 at 16:00.
nevcairiel is offline   Reply With Quote
Old 16th November 2011, 15:57   #10951  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by nevcairiel View Post
Disabling Inverse Telecine in the NVIDIA Control Panel can result in artifacts on telecined discs: without it, the driver might do full deinterlacing instead of just weaving the fields back together properly, which could degrade the image.
As always, however, trust your eyes over anything else.
I'll keep my eye out for artifacts when I test telecined DVD content with a pattern that DXVA can actually detect properly.

For these 1440x1080i30 MPEG2-TS files, inverse telecine never works unless I force the DScaler mod to IVTC in software. The slowness with that NVCPL setting enabled on files where inverse telecine is broken is a bit of a concern, though. It's not practical to enable and disable it all the time, even if it does benefit DVDs. madVR deinterlacing may just be a lost cause on this ancient GPU, even if it does sort of work in a pinch.

Last edited by cyberbeing; 16th November 2011 at 16:04.
cyberbeing is offline   Reply With Quote
Old 16th November 2011, 16:01   #10952  |  Link
iSeries
Registered User
 
Join Date: Jan 2009
Posts: 625
Quote:
Originally Posted by madshi View Post
The splitter (or decoder??) is definitely producing bad timestamps. It is possible, though, that some older madVR versions handled the problem better than the latest builds. I recently did a change in this area, which may have made things worse for you. I'd like to debug that. Can you please create a log with v0.79? I know you already created one with v0.78, but it didn't contain enough information. I've added more logging to v0.79, so that should help.

Anyway, I still recommend trying to figure out where the bad timestamps are coming from. It would be better to have that problem fixed, too, instead of relying on madVR to work around it somehow.
Hi Madshi,

Before I upload another log, could it be a muxing problem? I have been using EasyBD - I just remuxed one using tsMuxer and hey presto, index.bdmv plays perfectly.

Still not sure why playing the m2ts directly plays smoothly either way, though?
iSeries is offline   Reply With Quote
Old 16th November 2011, 16:05   #10953  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Gser View Post
I've noticed that using DScaler's reference IDCT results in a much sharper and more detailed picture, while the ones in ffmpeg result in a much softer picture.
Can you please post screenshots that show this difference? Ideally also a small video sample? Thanks!

Quote:
Originally Posted by cyberbeing View Post
You've mentioned this before about PCIe RAM. What exactly is PCIe RAM, and where is it being allocated? It certainly doesn't seem to be going into system RAM or GPU RAM.
Microsoft mentions it here:

http://msdn.microsoft.com/en-us/libr...=VS.85%29.aspx

It's called "AGP memory" there, but we're using PCIe instead of AGP these days, so I've now called it "PCIe RAM". Here's a good description I found of what exactly it is:

> AGP memory is just a chunk of regular system memory
> -- the memory on the motherboard -- that's given
> special treatment. It's marked as WC (write-combining)
> by the CPU which allows for fast writes but slow reads.
> It's also mapped by the AGP GART ("Graphics Address
> Remapping Table"), which means that the video card
> can read directly from it relatively quickly. But not as
> quickly as local video memory.
> It's basically a compromise between regular system
> memory and local video memory. Because the CPU can
> write to it quickly and the GPU can read from it
> relatively quickly, it's a natural home for
> D3DUSAGE_DYNAMIC resources.
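
To make that concrete, this is the kind of allocation that typically lands in that write-combined memory (just an illustration, not the actual madVR code):

Code:
#include <d3d9.h>

// A dynamic texture in the default pool gets written by the CPU every
// frame, so drivers tend to place it in AGP/PCIe (write-combined system)
// memory rather than in local video memory.
IDirect3DTexture9* CreateUploadTexture(IDirect3DDevice9* device)
{
    IDirect3DTexture9* tex = NULL;
    device->CreateTexture(1920, 1080, 1,
                          D3DUSAGE_DYNAMIC,   // candidate for PCIe RAM
                          D3DFMT_A8R8G8B8,
                          D3DPOOL_DEFAULT,    // driver decides the placement
                          &tex, NULL);
    return tex;
}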

Quote:
Originally Posted by cyberbeing View Post
Assuming madVR is reporting GPU + PCIe RAM, is there any reason that combined number should be kept below GPU RAM capacity?
Nope. I didn't think of AGP/PCIe memory at all when implementing that RAM consumption stuff in v0.79. I'll try to revise the logic to count only true GPU RAM usage and to disregard PCIe memory. That should help bring the numbers madVR reports nearer to what the other tools are reporting.

The only thing that counts is GPU RAM. OK, I don't know whether PCIe memory is limited, too. Maybe it is; I don't think it's specified anywhere. So we can only make sure that GPU RAM is not overused.

Quote:
Originally Posted by cyberbeing View Post
This also has me curious whether madVR in any way confirms that 16-bit/32-bit FP textures are being stored properly by the GPU.
What makes you think they would not be properly stored? I've never yet heard of drivers refusing to allocate the exact format I'm asking for.

Quote:
Originally Posted by cyberbeing View Post
It doesn't crash now, which is a success, but I still need to set all scaling to Bilinear and disable the 3DLUT for it to be usable without delayed/dropped frames on 1440x1080i30 16:9 video.

Could you add an option under Trade Quality for Performance, something like 'Use Bilinear and Shaders Only when Deinterlacing or 60fps+ framerates', to temporarily set these options when deinterlacing is active and/or for high-framerate content in general? That's the only practical way I can see to keep deinterlacing enabled in madVR.
I'm planning to implement some kind of profiles some time in the future. For now I don't want to make the settings dialogs overly complicated by making some of the options apply only to this or that. What you're asking does make some sense, but the next thing on your list would then be to also switch to Bilinear for native 60p content. So the proper solution will be something else, sometime in the future. For now you'll have to live with the way things are.

Quote:
Originally Posted by cyberbeing View Post
Another thing I've noticed is that performance is much, much slower (3-6ms vs 0.15-0.25ms deinterlacing) if Inverse Telecine is enabled in the NVCPL. Since madVR doesn't seem to support decimation like VMR (?)
Huh? DXVA2 itself does not support decimation; neither VMR, EVR nor madVR can do it. Don't be confused by soft-telecined content, which decoders output untouched (weaved). Such content may look decimated, but it isn't really.

Quote:
Originally Posted by cyberbeing View Post
I guess everybody should disable that setting? There's no change in quality as far as I can tell using madVR, just slower performance.
It all depends on the content. With native video content, the IVTC option will do nothing but waste GPU power. With soft-telecined movie content, same. But with hard-telecined movie content, the IVTC option should definitely improve image quality. So I wouldn't turn it off.

Quote:
Originally Posted by cyberbeing View Post
As for queue sizes for deinterlacing:
Absolute minimum: 8CPU/6GPU (below this results in massive dropped frames)
Absolute maximum: 32CPU/24GPU
Requirement 1: Inverse Telecine is disabled in NVCPL
Requirement 2: CPU Queue is at least +2 greater than GPU queue (in order to keep the CPU queue fluctuations above GPU queues)
Interesting. So you can actually use 32CPU/24GPU? That's probably with SD content, though?
madshi is offline   Reply With Quote
Old 16th November 2011, 16:20   #10954  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by dansrfe View Post
Basically, beyond a certain upscaling resolution the queues progressively fill less and less, until they sit at 0-2 roughly 400 horizontal pixels before I reach the full 1920-pixel fullscreen width. I'm guessing I have to get a faster GPU to fix this, or is there hope in waiting for more speed improvements in madVR's DXVA2 implementation?
This does sound like your GPU is running out of steam. There's not much hope that the madVR DXVA2 implementation is going to get any faster. It's now roughly the same speed as nevcairiel's CUVID implementation, so I don't think madVR is wasting any performance anymore.

Quote:
Originally Posted by dansrfe View Post
Also, I have noticed that madVR is still not accurately detecting whether the input video needs to be deinterlaced or not. I just played a 25fps PAL VOB and it showed that deinterlacing was on. I'm guessing this has to do with incorrect flag information in the VOB.
What makes you think that 25fps PAL does not need to be deinterlaced? It all depends. Some PAL DVDs need to be "deinterlaced" (IVTCed), some don't. Depends on how they were encoded. madVR's deinterlacer gets active if the decoder says so.
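
For reference, one way the decoder "says so" in DirectShow (a simplified sketch, not the exact check) is via the interlace flags in the VIDEOINFOHEADER2 of the connection media type:

Code:
#include <dshow.h>
#include <dvdmedia.h>

// Static part of the signal: the decoder's output media type. Per-sample
// properties can additionally mark individual frames as weave/field.
bool SignalsInterlaced(const AM_MEDIA_TYPE& mt)
{
    if (mt.formattype != FORMAT_VideoInfo2 || mt.pbFormat == NULL)
        return false;
    const VIDEOINFOHEADER2* vih2 =
        reinterpret_cast<const VIDEOINFOHEADER2*>(mt.pbFormat);
    return (vih2->dwInterlaceFlags & AMINTERLACE_IsInterlaced) != 0;
}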

Quote:
Originally Posted by dansrfe View Post
Also, the GPU RAM usage section in the OSD seems to be a bit off, or something else is happening with my GPU drivers. When I fill up all the queues on a 1080p file or something large, it shows 1068/512. My card is a 1GB card, but I'm used to seeing it reported as 512MB everywhere (e.g. in the driver control panel, Windows display information, etc). What does this mean?
madVR reads the GPU RAM size (in your case "512") from the registry. It's the same source of information used by the OS' control panel to show the GPU RAM size. If that's wrong, it's a bug in the GPU driver installer.

Quote:
Originally Posted by nevcairiel View Post
The DXVA2 DDI requires Windows Vista and a WDDM display driver; however, you're not calling the DDI directly, you call the Direct3D9 interfaces.

http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx

"If the graphics drivers uses the older Windows XP Display Driver Model (XPDM), DXVA 2 API calls are converted to DXVA 1 DDI calls. "

So, while XP doesn't support DXVA2 natively, if you install the .NET Framework 3 or 4 (not sure which version), there will be an emulation layer in place that can convert DXVA2 API calls to DXVA1 hardware calls.
What you say makes a lot of sense and I'm sure it's true. Still, I consider that website to be incorrect. It only speaks about Vista and never mentions that any part of DXVA2 would work on XP. The only time they mention XP is when they talk about using XP-style drivers in Vista. But then, Microsoft rarely ever mentions that EVR is now available on XP, too. So the whole MS documentation has not been updated about the availability of EVR/DXVA2 in XP.

Quote:
Originally Posted by iSeries View Post
Before I upload another log, could it be a muxing problem? I have been using EasyBD - I just remuxed one using tsMuxer and hey presto, index.bdmv plays perfectly.
Yes, it could. I'd still like to get a log, though. And maybe a sample which shows the problem?

Quote:
Originally Posted by iSeries View Post
Still not sure why playing the m2ts directly plays smoothly either way, though?
I've no idea.
madshi is offline   Reply With Quote
Old 16th November 2011, 16:40   #10955  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Originally Posted by madshi View Post
So the whole MS documentation has not been updated about the availability of EVR/DXVA2 in XP.
I think it's more of a mistake / unwanted side effect than an actual feature.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote
Old 16th November 2011, 16:41   #10956  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
Regarding AGP/PCIe RAM: device memory mapped into the CPU address space was already available e.g. as VESA VBE Linear Framebuffer Addressing in protected mode under DOS...
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 16th November 2011, 16:42   #10957  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
What makes you think they would not be properly stored? I've never yet heard of drivers refusing to allocate the exact format I'm asking for.
Because if I multiply 5.9MB 1920x1080 RGB textures by the number of queued frames, it works out almost exactly to the GPU RAM usage reported in GPU-Z... If this is actually an issue, it's been an issue in madVR since the earliest versions, so hopefully it's just a coincidence.
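
(To spell out the arithmetic: 1920 x 1080 x 3 bytes is about 5.9MB for an 8-bit RGB frame, while a 16-bit floating point RGBA texture would be 1920 x 1080 x 8 bytes, about 15.8MB, so the GPU-Z totals line up with the 8-bit size rather than the 16-bit one.)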


Quote:
Originally Posted by madshi View Post
Huh? DXVA2 itself does not support decimation; neither VMR, EVR nor madVR can do it. Don't be confused by soft-telecined content, which decoders output untouched (weaved). Such content may look decimated, but it isn't really.

It all depends on the content. With native video content, the IVTC option will do nothing but waste GPU power. With soft-telecined movie content, same. But with hard-telecined movie content, the IVTC option should definitely improve image quality. So I wouldn't turn it off.
So when I play back a DVD where the NVIDIA PureVideo MPEG2 decoder dynamically switches between a 23.976fps and a 29.97fps display rate, or holds a stable 23.976fps, it's not actually decimated but always soft-telecined? So basically any 30i content which DXVA deinterlacing doesn't display at 23.976 is hard-telecined? Does madVR currently support displaying 30i soft-telecined DVDs @ 23.976fps using DXVA2 deinterlacing?

I watch so little interlaced content normally. Does someone have an example of what a hard-telecine artifact would look like when not using the Inverse Telecine setting in the NVCPL with madVR deinterlacing?

Quote:
Originally Posted by madshi View Post
Interesting. So you can actually use 32CPU/24GPU? That's probably with SD content, though?
No, that's with 1920x1080 HD content.

Last edited by cyberbeing; 16th November 2011 at 16:45.
cyberbeing is offline   Reply With Quote
Old 16th November 2011, 16:47   #10958  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Originally Posted by cyberbeing View Post
Does madVR currently support displaying 30i soft-telecined DVDs @ 23.976fps using DXVA2 deinterlacing?
"Soft-Telecined" is just 23.976 progressive video which has some extra flags that instruct the decoder/renderer to repeat fields to actually make it 30fps, if you just ignore those flags its (nearly) perfect 23.976 content (maybe with a bit odd timestamps, but should play smoothly, at least at 24p screen refresh, at 48 or 72Hz, it might show issues)
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 16th November 2011 at 16:51.
nevcairiel is offline   Reply With Quote
Old 16th November 2011, 16:55   #10959  |  Link
andyvt
Registered User
 
Join Date: Jan 2010
Posts: 265
Quote:
Originally Posted by nevcairiel View Post
I think it's more of a mistake / unwanted side effect than an actual feature.
WPF uses the EVR to draw its forms. I'm not sure its use for other purposes is officially supported (which is probably why the docs haven't been updated).
__________________
babgvant.com
Missing Remote
andyvt is offline   Reply With Quote
Old 16th November 2011, 17:03   #10960  |  Link
Andy o
Registered User
 
Join Date: Mar 2009
Posts: 962
Quote:
Originally Posted by madshi View Post
It's not a question of easy or not. Changing this would indeed be easy. The question for me is whether it generally makes sense to deinterlace 1080i60 content if the GPU outputs 1080i60 anyway. Maybe it does make sense, maybe not. I'm not 100% sure. Any more opinions on that?
I think it generally won't affect people either way, since you'd have to manually add the "29i" tag to the file name to get this behavior. In particular, it will let me do what I do with the Pioneer TV, and it might help others with similar processors/TVs.

Quote:
You're right, that wouldn't work. So it probably makes no sense to use either "24p" or "48i" tags, unless the content is soft-telecined. In the long run I may implement decimating, but it will be difficult to do on the GPU. I'll probably have to use CUDA or OpenCL, and unfortunately ATI doesn't offer Direct3D9 <-> OpenCL interoperability, so I'm not sure if it will be possible at all. There may be a different solution to remove the 3:2 judder without decimating. But let's talk about that later...
The 24p tag would still work for 29i content if I switch to NV and CUVID decoding/deinterlacing again, so thanks anyway.
__________________
MSI MAG X570 TOMAHAWK WIFI, Ryzen 5900x, RTX 3070, Win 10-64.
Pioneer VSX-LX503, LG OLED65C9
Andy o is offline   Reply With Quote