16th November 2011, 14:58 | #10941
Registered Developer
Join Date: Sep 2006
Posts: 9,140
madVR checks whether the registry value "HKCU\Software\DScaler5\MpegVideo Filter\Inverse Telecine" exists. If it does, it considers DScaler to be the IVTC Mod. If you want to go back to the non-modded version, maybe deleting that registry value will help?
But why are you using DScaler in the first place, if the IVTC Mod doesn't work for you? It's very much outdated, based on rather old and buggy decoder sources. I'd suggest using LAV Video Decoder or the internal madVR decoder instead. Both are better MPEG2 decoders, IMHO; they just don't do IVTC.
16th November 2011, 15:19 | #10942
Registered User
Join Date: May 2009
Posts: 212
Quote:
If madVR does not send it through a deinterlacing process then, in theory, the 25/30fps frame presentation speed should be OK for 1080i50/1080i60 content during playback. But my very old test with madVR showed jaggy moving objects on nVidia GPUs (both G80/GT100). Maybe it would work if each interlaced frame were presented twice?

For odd-field-first video content without deinterlacing, there is no doubt that the odd field must still be presented first, since it has a smaller presentation timestamp than the combined even-field part output by the MPEG video decoder. So this is not a problem on my HTPC (Win7 x64, GTX260+) / Hitachi 42" PDP P42A01A system when doing frame-rate-doubling deinterlacing plus output in the original content's interlaced mode. The video output mode is always set to 1920x1080i60 to play 29.97/30fps double-frame-rate deinterlaced MPEG-2 content.

Yet I did a comparison of the following 2 playback methods, and found that method 1 shows human skin colors that are somehow more vivid and three-dimensional:

[1] LAV Video CUDA MPEG2 (+ deinterlace) + madVR 0.79
[2] LAV Video MPEG2 + madVR 0.79 (+ DXVA2 deinterlace)

Yet I cannot tell the difference between the two methods on an Ion + Samsung 203B LCD monitor (6-bit TN + FRC). Thus I suspect that nVidia's driver does more high-precision video image post-processing on the GTX260+, just like what madVR does. Otherwise there is no way the same madVR video scaling method could produce such a difference.

Last edited by pie1394; 16th November 2011 at 15:23.
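The timing question above (whether each interlaced frame could simply be presented twice, and which field leads) comes down to timestamp arithmetic. Here is a minimal sketch of how a bob/frame-rate-doubling deinterlacer assigns presentation timestamps; the function and field labels are illustrative only, not madVR's actual code:

```python
def bob_field_pts(frame_pts, fps, top_field_first=True):
    """Frame-rate-doubling deinterlace: each coded frame yields two
    output frames, one per field, half a frame duration apart.
    The leading field keeps the original PTS, since it is the
    temporally earlier one."""
    half = 0.5 / fps
    first, second = frame_pts, frame_pts + half
    # For bottom-field-first material the bottom field is presented
    # first, but the two PTS instants themselves are unchanged.
    return [(first, 'top' if top_field_first else 'bottom'),
            (second, 'bottom' if top_field_first else 'top')]

# 29.97fps interlaced in -> 59.94fps presentation out
out = [p for f in range(3) for p in bob_field_pts(f / 29.97, 29.97)]
```

Presenting one interlaced frame twice without splitting fields would keep the cadence right but leave combing in motion, which matches the jaggy objects described above.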
16th November 2011, 15:27 | #10943
Registered User
Join Date: Jan 2009
Posts: 625
|
Quote:
I was reluctant to say it was a splitter problem, as I am using LAV like many others here. I don't understand 1. why playing the m2ts directly plays smoothly, and 2. why, after reverting back to madVR 0.75 (the last version I have) and playing the index.bdmv or mpls, playback is as smooth as it should be. Something must have changed between versions 0.75 and 0.78/0.79?
16th November 2011, 15:33 | #10944
Kid for Today
Join Date: Aug 2004
Posts: 3,477
|
Quote:
CUVID does an amazing job on 29.97 true interlaced DVDs, but it's a major failure on 1080i IME... each scene change produces nasty artifacts. Anyway, as I feared, DXVA2 doesn't seem to exist in XP, so even though ATi might convert the calls to DXVA1, all I get is a black screen on my 8800GS and not even madVR's OSD will appear. If I mark this 29.97fps DVD as progressive, then I get the usual combing. I might soon stop the XP SP3 testing, because I want to benefit from the HPET for audio purposes.

Last edited by leeperry; 16th November 2011 at 15:40.
16th November 2011, 15:41 | #10945
Registered User
Join Date: Apr 2008
Posts: 418
|
Quote:
Last edited by Gser; 16th November 2011 at 15:47.
16th November 2011, 15:43 | #10946
Broadband Junkie
Join Date: Oct 2005
Posts: 1,859
Quote:
For example, with the queue maxed out on 1440x1080i30 16:9, madVR reports 1188MB/512MB GPU RAM in use. Actual GPU RAM use is only 392MB; MPC-HC is only using 115MB. Assuming madVR is reporting GPU + PCIe RAM, is there any reason that combined number should be kept below GPU RAM capacity? Or is it just some sort of theoretical maximum size that madVR would use in cases of massive upscaling/downscaling? This also has me curious whether madVR in any way confirms that 16bit/32bit FP textures are being stored properly by the GPU. Seeing as some other people have much higher GPU RAM use numbers in GPU-Z (a side-effect of Aero?), hopefully the low numbers don't mean the GPU is actually holding 8bit RGB textures in the queues? Quote:
Could you add an option under Trade Quality for Performance for something like 'Use Bilinear and Shaders Only when Deinterlacing or at 60fps+ framerates', to temporarily set these options when deinterlacing is active and/or just for high-framerate content in general? That's the only practical way I can see to keep deinterlacing enabled in madVR. What this comes back to is that my GPU can barely play back 60fps content with madVR, even with progressive video.

Another thing I note is that performance is much, much slower (3-6ms deint vs 0.15-0.25ms deint) if Inverse Telecine is enabled in the NVCPL. Since madVR doesn't seem to support decimation like VMR (?), I guess everybody should disable that setting? No change in quality as far as I can tell with madVR, just slower. I'll test a DVD later which VMR9 decimates correctly with the NVIDIA PureVideo MPEG2 decoder (even changing between 24p and 30p on hybrid content) with that 'Inverse Telecine' setting enabled, to see if anything changes in madVR.

As for queue sizes for deinterlacing:
Absolute minimum: 8 CPU / 6 GPU (below this results in massive dropped frames)
Absolute maximum: 32 CPU / 24 GPU
Requirement 1: Inverse Telecine is disabled in the NVCPL
Requirement 2: the CPU queue is at least 2 greater than the GPU queue

Last edited by cyberbeing; 16th November 2011 at 15:48.
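The queue rules above can be written down as a quick sanity check. Note these thresholds (8/6 minimum, 32/24 maximum, CPU at least 2 above GPU) are just one poster's measurements on one particular GPU, not anything madVR documents:

```python
def deint_queue_ok(cpu_queue, gpu_queue):
    """Check madVR queue sizes against the empirical deinterlacing
    rules reported above: minimum 8 CPU / 6 GPU, maximum 32 CPU /
    24 GPU, and the CPU queue at least 2 larger than the GPU queue."""
    return (8 <= cpu_queue <= 32 and
            6 <= gpu_queue <= 24 and
            cpu_queue >= gpu_queue + 2)
```

So both the stated minimum (8/6) and maximum (32/24) pass the check, while e.g. 8 CPU / 7 GPU fails because the CPU queue is not 2 ahead.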
16th November 2011, 15:46 | #10947
Registered Developer
Join Date: Sep 2006
Posts: 9,140
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Anyway, I still recommend trying to figure out where the bad timestamps are coming from. It would be better to have that problem fixed, too, instead of relying on madVR to work around it somehow. Quote:
Quote:
Anybody using NVidia on XP? Does madVR's DXVA2 deinterlacing work for you?
16th November 2011, 15:48 | #10948
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
Quote:
Disabling Inverse Telecine in the NVIDIA Control Panel can result in artifacts on telecined discs; without it, the driver might do full deinterlacing instead of just weaving the fields back together properly, which could degrade the image. As always, however, trust your eyes over anything else.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
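To illustrate why inverse telecine is just field bookkeeping rather than real deinterlacing, here is a toy model of 2:3 pulldown and its reversal. Weaving fields that came from the same progressive frame loses nothing, whereas a full deinterlacer would interpolate between fields and soften the image. This is a conceptual sketch only, not how any particular driver implements it:

```python
def telecine_32(frames):
    """Apply 2:3 pulldown: 4 progressive frames become 10 fields
    (5 interlaced frame periods). Field parity alternates
    top/bottom across the whole sequence."""
    counts = [2, 3, 2, 3]          # fields contributed per source frame
    fields, parity = [], 0
    for frame, n in zip(frames, counts):
        for _ in range(n):
            fields.append((frame, 'tb'[parity]))
            parity ^= 1
    return fields

def ivtc(fields):
    """Inverse telecine: weave one top and one bottom field from the
    same source frame back together, dropping the repeated fields.
    No interpolation, so the recovered frames are pixel-identical."""
    seen, out = {}, []
    for frame, par in fields:
        seen.setdefault(frame, set()).add(par)
        if seen[frame] == {'t', 'b'} and frame not in out:
            out.append(frame)
    return out

fields = telecine_32(['A', 'B', 'C', 'D'])   # 10 fields
recovered = ivtc(fields)                     # back to ['A', 'B', 'C', 'D']
```

The two "mixed" frame periods in the field stream (fields from two different source frames) are exactly where a plain weave would show combing, and where a per-pixel deinterlacer would start blurring instead of matching fields.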
16th November 2011, 15:50 | #10949
Registered User
Join Date: Jan 2009
Posts: 1,210
|
I see dropped frames when upscaling more than approximately 1280 x 720. No delayed frames or presentation glitches.
GPU: ATI Mobility Radeon HD 4570
OS: Windows 7 x64
Movie Resolution: 704 x 480 (16:9) DVD
Output Resolution: 853 x 480 (default 16:9) or upscaled to 1920 x 1080
Display Refresh Rate: 59.940Hz
Decoder: ffdshow decoder

I'm deinterlacing from 29.970 interlaced (3:2 pulldown, but I guess that doesn't make a difference). Scaling: Spline @ 3 taps on all. Scaling in linear light off. Basically, after upscaling past a certain resolution the queues progressively fill less and less, until they sit at 0-2 when I'm still about 400 horizontal pixels short of the fullscreen 1920 horizontal resolution. I'm guessing I have to get a faster GPU to fix this, or is there hope in waiting for more speed improvements in madVR's DXVA2 implementation? Is there an equivalent of CUDA on ATI that you can utilize to attain GPU acceleration on ATI cards?

Also, I have noticed that madVR is still not accurately detecting whether the input video needs to be deinterlaced or not. I just played a 25fps PAL VOB and it showed that deinterlacing was on. I'm guessing that this has to do with incorrect flag information in the VOB.

Also, the GPU RAM usage section in the OSD seems to be a bit off, or something else is happening with my GPU drivers. When I fill up all the queues on a 1080p file or something large, it shows 1068/512. My card is a 1GB card, however I'm used to seeing it say 512MB everywhere (e.g. in the driver control panel, Windows display information etc). What does this mean?

Last edited by dansrfe; 16th November 2011 at 16:04.
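Some back-of-the-envelope arithmetic may explain OSD numbers larger than physical VRAM: if the render queues hold high-precision intermediate textures, they add up quickly, and a counter that sums GPU-local plus system-memory ("PCIe/AGP") allocations can exceed the card's own memory. The 16-bit-per-channel RGBA format below is an assumption for illustration, not a statement about what madVR actually allocates:

```python
def texture_mb(width, height, channels=4, bytes_per_channel=2):
    """Size of one texture in MiB, assuming a high-precision
    format such as 16-bit-per-channel RGBA."""
    return width * height * channels * bytes_per_channel / (1024 * 1024)

def queue_mb(width, height, frames):
    """Total memory for a queue holding that many full frames."""
    return frames * texture_mb(width, height)

# one 1920x1080 16-bit RGBA frame is ~15.8 MiB,
# so a 24-frame GPU queue alone is ~380 MiB -- before decoder
# surfaces, upload buffers, and the backbuffer are counted.
```

With several such queues plus decoder surfaces in flight, a combined GPU+system total passing 1GB on a "512MB" card is plausible without anything being wrong.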
16th November 2011, 15:53 | #10950
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
|
Quote:
http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx

"If the graphics drivers uses the older Windows XP Display Driver Model (XPDM), DXVA 2 API calls are converted to DXVA 1 DDI calls."

So, while XP doesn't support DXVA2 natively, if you install the .NET Framework 3 or 4 (not sure which version), there will be an emulation layer in place that can convert DXVA2 API calls to DXVA1 hardware calls. Note that this also applies to very old drivers on Vista/7 which didn't implement WDDM yet; however, those drivers won't really be around anymore. I think when you query for a DXVA2 device, there should be some information flag somewhere that tells you if it's an emulated device based on DXVA1.

PS: Using DXVA2 *decoding* on XP is another matter, and a lot harder (close to impossible); however, video processing is possible.

ATI doesn't have a real equivalent. They have OpenCL-based APIs to decode video, however they failed to properly link that to D3D, so it's not really possible to couple that with a Direct3D-based rendering chain.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 16th November 2011 at 16:00.
16th November 2011, 15:57 | #10951
Broadband Junkie
Join Date: Oct 2005
Posts: 1,859
|
Quote:
For these 1440x1080i30 MPEG2-TS files, inverse telecine never works unless I force the DScaler mod to IVTC in software. The slowness with that NVCPL setting enabled, on files where inverse telecine is broken, is a bit of a concern though. It's not something practical to enable and disable all the time, even if it does benefit DVDs. madVR deinterlacing may just be a lost cause on this ancient GPU, even if it does sort of barely work in a pinch.

Last edited by cyberbeing; 16th November 2011 at 16:04.
16th November 2011, 16:01 | #10952
Registered User
Join Date: Jan 2009
Posts: 625
|
Quote:
Before I upload another log: could the problem be a muxing problem? I have been using EasyBD. I just remuxed one using tsMuxer and hey presto, index.bdmv plays perfectly. Still not sure why playing the m2ts directly always played smoothly, though.
16th November 2011, 16:05 | #10953
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
Quote:
Quote:
http://msdn.microsoft.com/en-us/libr...=VS.85%29.aspx It's called "AGP memory" there, but we're using PCIe instead of AGP these days. So I've now called it "PCIe RAM". Here's a good description I found of what it exactly is: > AGP memory is just a chunk of regular system memory > -- the memory on the motherboard -- that's given > special treatment. It's marked as WC (write-combining) > by the CPU which allows for fast writes but slow reads. > It's also mapped by the AGP GART ("Graphics Address > Remapping Table"), which means that the video card > can read directly from it relatively quickly. But not as > quickly as local video memory. > It's basically a compromise between regular system > memory and local video memory. Because the CPU can > write to it quickly and the GPU can read from it > relatively quickly, it's a natural home for > D3DUSAGE_DYNAMIC resources. Quote:
The only thing that counts is GPU RAM. OK, I don't know whether PCIe memory is limited, too. Maybe it is; I don't think it's specified anywhere. So we can only make sure that GPU RAM is not overused. Quote:
Quote:
Quote:
Quote:
Quote:
16th November 2011, 16:20 | #10954
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
Quote:
Quote:
Quote:
Quote:
Quote:
I've no idea.
16th November 2011, 16:40 | #10955
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
|
I think it's more of a mistake/unwanted side-effect than an actual feature.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders |
16th November 2011, 16:41 | #10956
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
|
Regarding AGP/PCIe RAM: device memory, mapped into the CPU address space, was already available e.g. as VESA VBE Linear Framebuffer Addressing in Protected Mode under DOS...
16th November 2011, 16:42 | #10957
Broadband Junkie
Join Date: Oct 2005
Posts: 1,859
|
Quote:
Quote:
I watch so little interlaced content normally. Does someone have an example of what a hard-telecine artifact would look like when not using the Inverse Telecine setting in the NVCPL with madVR deinterlacing? No, that's with 1920x1080 HD content.

Last edited by cyberbeing; 16th November 2011 at 16:45.
16th November 2011, 16:47 | #10958
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
|
"Soft-telecined" is just 23.976 progressive video with some extra flags that instruct the decoder/renderer to repeat fields to make it 30fps. If you just ignore those flags, it's (nearly) perfect 23.976 content — maybe with slightly odd timestamps, but it should play smoothly, at least at a 24p screen refresh; at 48 or 72Hz it might show issues.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 16th November 2011 at 16:51.
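The flag mechanics described above can be sketched numerically: with repeat_first_field set on alternate frames, four coded 23.976p frames yield ten displayed fields, which at a 59.94 field rate works out to exactly the original 23.976 frames per second. The pattern below is the generic MPEG-2 2:3 soft-pulldown scheme, simplified:

```python
def displayed_fields(rff_flags):
    """Soft telecine: each coded frame is shown for 2 field periods,
    or 3 when its repeat_first_field flag is set. No pixels are
    touched; only the display schedule changes."""
    return sum(3 if rff else 2 for rff in rff_flags)

# classic 2:3 soft pulldown: RFF set on every other frame
fields = displayed_fields([True, False, True, False])     # 10 fields
effective_fps = 4 / (fields / 59.94)                      # 23.976
```

Ignoring the flags simply drops the repeats and plays the four frames at their own rate, which is why soft-telecined material can be treated as progressive 23.976 content.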
16th November 2011, 17:03 | #10960
Registered User
Join Date: Mar 2009
Posts: 962
|
Quote:
Quote:
__________________
MSI MAG X570 TOMAHAWK WIFI, Ryzen 5900x, RTX 3070, Win 10-64. Pioneer VSX-LX503, LG OLED65C9