Old 28th January 2014, 12:25   #22121  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,926
Quote:
Originally Posted by leeperry View Post
AFAIK these don't provide any kind of R/G/B gain/offset settings so you'll have to use a CLUT on the PC in order to reach D65.
Are you sure? My old (and not really good) PF4600 from week 11 of 2011 has all these options. For my 3D LUT I use R128 G122 B122 to reach D65. If newer models no longer have these, that would be terrible...
huhn is offline   Reply With Quote
Old 28th January 2014, 12:28   #22122  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,926
Quote:
Originally Posted by ryrynz View Post
This is why I inquired about a CPU version.. unfortunately no answer from Madshi though :P So many goodies.. so much GPU..
If a GPU can't handle it, how could a CPU do it? And like nev said, it's the last step... Of course you could download the frame to RAM, process it on the CPU and upload it again, but this would be slow, really slow.
huhn is offline   Reply With Quote
Old 28th January 2014, 12:29   #22123  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,347
huhn is right, trying to get the CPU to do dithering is practically impossible, as it would mean you have to download the image, process it (which isn't going to be very fast either), and upload it again. It's going to be insanely slow.
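For anyone wondering why the round trip is only part of the problem: classic error diffusion is also inherently sequential. Below is a minimal single-channel Floyd-Steinberg sketch in plain C - not madVR's actual kernel, just the textbook form - where every pixel consumes error pushed onto it by pixels processed earlier, so the whole frame has to be walked in order, and a CPU path would pay the VRAM download/upload on top of that.

Code:
#include <math.h>

/* Textbook Floyd-Steinberg error diffusion on one channel (0..255 floats),
   quantized down to 'levels' output steps. Illustration only, NOT madVR's
   implementation. Note the serial dependency: each pixel's input already
   contains error spread from previously processed pixels. */
static void error_diffuse(float *img, int w, int h, int levels)
{
    const float step = 255.0f / (float)(levels - 1);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float old = img[y * w + x];
            float q   = roundf(old / step) * step;          /* quantize    */
            if (q < 0.0f)   q = 0.0f;
            if (q > 255.0f) q = 255.0f;
            float err = old - q;                            /* lost detail */
            img[y * w + x] = q;
            /* spread the error onto not-yet-processed neighbours (7,3,5,1)/16 */
            if (x + 1 < w)              img[ y      * w + x + 1] += err * 7.0f / 16.0f;
            if (y + 1 < h && x > 0)     img[(y + 1) * w + x - 1] += err * 3.0f / 16.0f;
            if (y + 1 < h)              img[(y + 1) * w + x    ] += err * 5.0f / 16.0f;
            if (y + 1 < h && x + 1 < w) img[(y + 1) * w + x + 1] += err * 1.0f / 16.0f;
        }
    }
}

At 1080p60 that loop has to visit roughly 124 million pixels per second per channel, in strict order, before you even count the PCIe transfers.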
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote
Old 28th January 2014, 12:31   #22124  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Quote:
Originally Posted by DragonQ View Post
Who can actually use error diffusion though? It seems as if it's about as demanding as NNEDI3 from all the reports so far.
It doesn't seem to take much juice (2-3ms) on my laptop's Radeon Mobility HD 4650, which surprises me a little (especially given that it's stuck on the legacy driver).

madshi, I don't suppose it would be possible to test that the OpenCL dithering is working correctly? E.g. by testing a pre-dithered sample against the result of using it. I hope it either works fully or not at all, but with my laptop card coping so well I'm a little worried that it's just disabling dithering altogether (call me paranoid).
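One crude check I could probably do myself (just a generic idea, nothing madVR-specific): screenshot a shallow near-black gradient, crop a region, and count how many distinct 8-bit levels appear. A dithered output mixes neighbouring levels together across the crop, while an undithered one collapses into a few flat bands. A minimal C sketch, assuming the crop is already available as a grayscale byte buffer:

Code:
#include <stdint.h>

/* Count distinct 8-bit levels in a cropped region of a captured frame.
   On a shallow gradient, active dithering yields many mixed levels;
   if dithering was silently skipped you only get a few flat bands. */
static int count_levels(const uint8_t *crop, int n)
{
    int seen[256] = {0};
    int levels = 0;
    for (int i = 0; i < n; i++) {
        if (!seen[crop[i]]) { seen[crop[i]] = 1; levels++; }
    }
    return levels;
}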

Last edited by Ver Greeneyes; 28th January 2014 at 12:34.
Ver Greeneyes is offline   Reply With Quote
Old 28th January 2014, 12:37   #22125  |  Link
antonyfrn
Registered User
 
Join Date: Nov 2012
Posts: 17
Quote:
Originally Posted by madshi View Post
Great!! Please keep us posted, if there's any progress...
Will do. I posted your list for him and suggested he try to get in contact with you, hope you don't mind.
__________________
Windows 10 Pro 64Bit
i7 6700K + H110i GT @ 4.6 GHz
16GB Corsair Vengeance DDR4
EVGA GTX 970 FTW ACX 2.0
ASUS Maximus VIII Hero
ASUS Xonar D2X
antonyfrn is offline   Reply With Quote
Old 28th January 2014, 12:53   #22126  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Quote:
Originally Posted by huhn View Post
If a GPU can't handle it, how could a CPU do it? And like nev said, it's the last step... Of course you could download the frame to RAM, process it on the CPU and upload it again, but this would be slow, really slow.
Damn, makes sense. Well, I would think an i7 would be faster at certain things than the OpenCL performance of an HD3000 or 4000.. but I guess there's that image transfer which makes it completely pointless performance-wise, oh well.

Quote:
Originally Posted by madshi View Post
How much does the checkerboard pattern distract in real-life viewing? With real movie content every frame should be (at least ever so slightly) different compared to the next, which should totally change the whole error diffusion pattern. It's different with test patterns, where each frame might be 100% identical to the previous.
Nah, not distracting at all.. I don't know how anyone could actually see that with the naked eye, you'd have to be a hawk. I will say things look very, very clean with it enabled upon close inspection, I just wish it had less of an impact on the GPU.
ryrynz is offline   Reply With Quote
Old 28th January 2014, 13:04   #22127  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Oh well, scratch that... 720p at 1080p looks a hell of a lot better with 16-neuron NNEDI than regular Jinc3 AR, too bad I can't swing 32 neurons.

And I do get tearing at the bottom of the screen (roughly 5%) in FSW, so I'll have to use FSE. I just wish the frigging black frame flashing didn't occur..

I also get tearing with other renderers, but it's much lower, around 1% at the bottom of the screen. I can't believe tearing is still a problem on a mid-range H87 Haswell mobo with an HD7850... I wasn't getting any tearing on a P35 s775 mobo.

Quote:
Originally Posted by huhn View Post
Are you sure? My old (and not really good) PF4600 from week 11 of 2011 has all these options.
Philips TVs are Chinese OEM sets made by TCL these days, from what I read, so YMMV I guess.

Last edited by leeperry; 28th January 2014 at 13:11.
leeperry is offline   Reply With Quote
Old 28th January 2014, 14:04   #22128  |  Link
noee
Registered User
 
Join Date: Jan 2007
Posts: 530
Quote:
Originally Posted by Ver Greeneyes View Post
It doesn't seem to take much juice (2-3ms) on my laptop's Radeon Mobility HD 4650, which surprises me a little (especially given that it's stuck on the legacy driver).

madshi, I don't suppose it would be possible to test that the OpenCL dithering is working correctly? E.g. by testing a pre-dithered sample against the result of using it. I hope it either works fully or not at all, but with my laptop card coping so well I'm a little worried that it's just disabling dithering altogether (call me paranoid).
Edit: Oh wait, madshi stated that error diffusion would only work at 8 bit? So many interdependencies now, time for a spreadsheet...

Last edited by noee; 28th January 2014 at 14:15.
noee is offline   Reply With Quote
Old 28th January 2014, 15:00   #22129  |  Link
Plutotype
Registered User
 
Join Date: Apr 2010
Posts: 235
I have read the last few pages and would like to know which NNEDI3 configuration (720p/1080p to 2160p at x neurons) one would need to achieve maximum upscaling clarity without artifacts. Can anybody also estimate which CPU and GPU would be needed to achieve this?
Thanks
__________________
System: Intel Core i5-6500, 16GB RAM, GTX1060, 75" Sony ZD9, Focal speakers, OS Win10 Pro, Playback: madvr/JRiver
Plutotype is offline   Reply With Quote
Old 28th January 2014, 15:02   #22130  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,926
Quote:
Originally Posted by 6233638 View Post
I may have to edit my 3DLUT now, as values near black seem to be darker than before when using error diffusion.

[image comparison]

Brightened:

[image comparison, brightened]

As you should be able to see though, with error diffusion enabled each level is much more distinct from the last. When brightened like this, 20-25 all look roughly the same with random dither.

Hopefully editing my 3DLUT will not affect the results I'm seeing on a full black screen:

[image, brightened significantly]

With error diffusion enabled, black is actually solid black, which means that my display can turn off the local dimming zones rather than keeping them on at a low level.
This is a significant improvement and absolutely worth the performance hit.

Of course, if you don't have that problem, it may not matter, and I really hope that changing my 3DLUT values does not affect this result.

I agree with cyberbeing that error diffusion does have a tendency to create patterns though, which random dither avoids.
But the noise level is so much lower near black, and the noise is monochrome with error diffusion, rather than being multicolored with random dither.
I can't reproduce this at all... the brightness isn't changing at all with my 3D LUT.

Quote:
Edit: Oh wait, madshi stated that error diffusion would only work at 8 bit? So many interdependencies now, time for a spreadsheet...
Render times jump up by ~13ms on 10-bit sources, so I think 10 bit works.

And error diffusion gets 16-bit YCbCr 4:4:4 anyway...
huhn is offline   Reply With Quote
Old 28th January 2014, 15:05   #22131  |  Link
noee
Registered User
 
Join Date: Jan 2007
Posts: 530
Quote:
Render times jump up by ~13ms on 10-bit sources, so I think 10 bit works.
I was talking about the panel's native bit depth setting.
__________________
Win7Ult || RX560/4G || Ryzen 5
noee is offline   Reply With Quote
Old 28th January 2014, 15:26   #22132  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by iSunrise View Post
That is actually very interesting. I never thought that error diffusion (haven't checked it myself yet, though) would improve something so drastically.
I have just finished creating a new 3DLUT and tested this again - I am not getting the same results now.

With the new 3DLUT the two images look almost identical, other than the dithering. Again, these have been brightened considerably, as these bars are very near black. Edit: I now realize that I left debanding on (low preset).

[image comparison]

The only thing I can think of which may have caused the differences I saw before is that I simply hit PrtScn when the bars were on-screen, and perhaps I managed to pick a frame that was different?
Or worse, perhaps I disabled the 3DLUT without realizing.

This time I counted the number of flashes and restarted the video when I did the comparison.

I am very happy to report that black is still black with the new 3DLUT though.

Quote:
Originally Posted by DragonQ View Post
Who can actually use error diffusion though? It seems as if it's about as demanding as NNEDI3 from all the reports so far.
I have no problem using it on my system. If I recall correctly, it adds about 5ms to the rendering times with 1080p video displayed at 100%.

Quote:
Originally Posted by leeperry View Post
So is there any way to avoid the black frames flashing during FSE/FSW transitions? This is happening whenever I want to open a context menu, and that never occurred on XP.
I've only ever used madVR on Windows 7/8, and this has always been the case. I was not aware that XP didn't do this.

Quote:
Originally Posted by leeperry View Post
All this said, FSE was mandatory on XP but isn't it a bit of overkill on W7? The OS knows how to handle VSYNC for a change.
FSE is also for performance, and preventing other windows overlaying the madVR image.

Quote:
Originally Posted by leeperry View Post
And I do get tearing at the bottom of the screen (roughly 5%) in FSW, so I'll have to use FSE. I just wish the frigging black frame flashing didn't occur..
Have you disabled Aero/Desktop Composition? (switching the Windows theme might also do this?)

Last edited by 6233638; 28th January 2014 at 15:30.
6233638 is offline   Reply With Quote
Old 28th January 2014, 15:40   #22133  |  Link
pie1394
Registered User
 
Join Date: May 2009
Posts: 212
Quote:
Originally Posted by pie1394 View Post
- The video screen becomes totally black when the 1st subtitle arrives from XySubFilter -- if the SmoothMotion logic is NOT engaged. It happens with any 24-frame or 60-field content regardless of resolution. The Debanding option ON/OFF does not matter, either.
Hi madshi,

This issue, which was introduced in 0.87, is finally fixed in 0.87.4!

So the remaining old & weird case is the black box under ASS subtitle text with the SmoothMotion feature on the ION-LE (GF9300) platform + GeForce 320.49 since 0.86.x. It happens with all of my samples that use an external ASS file. Thus I also wonder why it does not happen on your ION (GF9400) machine.

--
C2D P9700 + ION-LE(GF9300) + dual-ch 8GB DDR3-1333 + Dell U2412M + Win7x64SP1 + MPC-BE 1.3.0.3 + LavFilter 0.60.1 + madVR (deband low, all Bilinear, all for performance) + XySubFilter 3.1.0.546
pie1394 is offline   Reply With Quote
Old 28th January 2014, 16:34   #22134  |  Link
MSL_DK
Registered User
 
Join Date: Nov 2011
Location: Denmark
Posts: 137
It has been hell to get OpenCL to work with the latest driver from AMD.

Here's what I've done, hope it can help others (a quick OpenCL sanity check follows the steps).
  1. Download and run the AMD Catalyst Uninstall Utility.
  2. Go to Device Manager > Display adapters > right-click > Properties > Driver > Uninstall.
  3. Go to Start -> Search and type in gpedit.msc.
  4. Click the file to open the Local Group Policy Editor.
  5. Go to Computer Configuration -> Administrative Templates -> System -> Device Installation. Click on the subfolder Device Installation on the left and on the right side you will see the possible restrictions.
  6. Right-click on "Prevent installation of devices not described by other policy settings", edit this option and set it to ENABLED.
  7. Reboot.
  8. Go back to Computer Configuration -> Administrative Templates -> System -> Device Installation, same as in step 5.
  9. Right-click on "Prevent installation of devices not described by other policy settings", edit this option and set it to DISABLED.
  10. Install the AMD Catalyst drivers.
  11. Reboot.
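After step 11, a quick way to verify that the reinstalled driver actually exposes OpenCL again is to enumerate the platforms with the standard OpenCL C API (generic diagnostic only, nothing madVR-specific - tools like GPU-Z or clinfo report the same information):

Code:
#include <stdio.h>
#include <CL/cl.h>

/* Link against OpenCL.lib / -lOpenCL. Lists OpenCL platforms and how many
   GPU devices each one reports; zero platforms means the install is still broken. */
int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platforms found - the driver install is still broken.\n");
        return 1;
    }
    for (cl_uint p = 0; p < num_platforms; p++) {
        char name[256] = {0};
        cl_uint num_gpus = 0;
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 0, NULL, &num_gpus);
        printf("Platform '%s': %u GPU device(s)\n", name, num_gpus);
    }
    return 0;
}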
MSL_DK is offline   Reply With Quote
Old 28th January 2014, 16:52   #22135  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by 6233638 View Post
Have you disabled Aero/Desktop Composition?
Yes I did, as I don't like transparent GUIs. Out of desperation I tried to set the Windows GUI to "best appearance" and found out the hard way that "desktop composition" is the fancy name for Aero.

As soon as I uncheck it in mVR, the tearing is back, so I wonder what that mVR option is actually good for?

Quote:
Originally Posted by 6233638 View Post
I've only ever used madVR on Windows 7/8, and this has always been the case. I was not aware that XP didn't do this.
Oh, so for once XP beats W7, hah... I presume that madshi didn't find a way to avoid the flashing on Vista+?

Quote:
Originally Posted by 6233638 View Post
FSE is also for performance, and preventing other windows overlaying the madVR image.
Well, the flashing is quite a PITA... if Aero can work as effectively in FSW as mVR on its own in FSE, then I don't think I'll bother with the latter anymore.

Quote:
Originally Posted by MSL_DK View Post
It has been hell to get OpenCL to work with the latest driver from AMD
On x64 or something? I simply disabled the driver signature check before installing the 13.12 AMD drivers on W7SP1, et voilà ))
leeperry is offline   Reply With Quote
Old 28th January 2014, 16:56   #22136  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by AGKnotUser View Post
In the madVR settings I had image upscaling and image downscaling set to DXVA2. If I change the settings to Bilinear, the video plays.
Quote:
Originally Posted by vomanci View Post
I can confirm that setting image upscaling and/or downscaling to DXVA2 freezes the image, whether it's up- or downscaled. On an Nvidia 8200 onboard (that's right; I use bilinear and it works )
Quote:
Originally Posted by madshi View Post
I can confirm the problem. Will look at that for v0.87.5. But not today, anymore...
Strange. I could reproduce it yesterday, but today I can't anymore. Can anybody still reproduce this problem? If so, please upload your settings somewhere so I can try to reproduce it again.

Quote:
Originally Posted by huhn View Post
Still problems with HD4000 and deint: split drops first, GPU load still at 84-95% with 87.4. With an old test, 86.11 was at ~82%; now it is at 88-95% but working.
The back buffer is set to 8 and fullscreen exclusive is disabled, else it isn't working on 86.11 either; the rest is totally default.
Hmmmm... I thought it was confirmed by some other users that deinterlacing was back to v0.86.11 levels. Now you say it isn't? FWIW, I forgot (once again) to re-enable optimization for v0.87.4, so CPU consumption is slightly higher than necessary. But I don't think that's the cause of whatever issue you're seeing.

Quote:
Originally Posted by huhn View Post
There are hybrid displays out there; German list: http://geizhals.at/de/?cat=monlcd19wide&xf=103_Tuner~99_26#xf_top

I highly recommend Philips TVs for PC usage; they normally all support unlimited RGB in PC mode at >all< refresh rates.
I'm not interested in a TV due to the tuner. What interests me most is having a bigger display (30"), a good panel (not TN), and HDMI 1.4 3D playback ability. I can't seem to find a good monitor which ticks all these boxes. Most 3D-capable monitors only support 3D when using an NVidia GPU, which I find a terrible idea.

Quote:
Originally Posted by omarank View Post
Profiling works nicely here, but keyboard shortcuts for profiles don't work for "processing" and "rendering" settings. The shortcuts work only for "scaling algorithms".
I can confirm that. Will be fixed in the next build.

Quote:
Originally Posted by omarank View Post
I am using a profiling script for smooth motion too, and I found that the profile which enables smooth motion, when activated, doesn't consider the option selected in the smooth motion settings page and always enables smooth motion.
I can't confirm that. That seems to work here.

Quote:
Originally Posted by omarank View Post
I was wondering how to use "command line to execute when this profile is activated/deactivated". Can someone please show me an example of using this functionality?
Oh well, I actually didn't get around to implementing this yet... Will add that in the next build, too. What do you mean by an example? You mean you're wondering what the purpose of this functionality is and what it could be good for? Well, I don't know, to be honest, with the current feature set. But one of the next planned features will make the command line stuff useful for some users.

Quote:
Originally Posted by iSunrise View Post
Last night (at 4 o'clock in the evening/morning!) Blaire from 3dcenter already contacted ManuelG of Nvidia, who requested step-by-step guidance on the problem. Since I've linked your last and more detailed post to him, he was actually able to reproduce the problem and gave all required information to ManuelG in step-by-step form.

They are looking into it right now. The only thing left now is to wait for a new beta driver where they fix the problem, but I guess Blaire will let me know soon enough. Going to keep you updated on this as soon as I get some feedback.
Quote:
Originally Posted by antonyfrn View Post
Will do. I posted your list for him and suggested he try to get in contact with you, hope you don't mind.
Great! Thanks, guys, that sounds promising. Of course I wouldn't mind at all if any NVidia guy contacted me.

Quote:
Originally Posted by Qaq View Post
MadVR 87.3, ATI 7750, driver 13.12, Win7 SP1 x86. OS and video driver are set to high performance.

NNEDI3 image doubling: dropouts ~30%
OpenCL error diffusion: dropouts ~30% (only works fine for small-size videos without upscaling to full screen)
NNEDI chroma upsampling: dropouts ~30% (this one worked fine for me in 87.1b, btw)

Timings are fine, it's just that the present queue goes to zero. Maybe the new CCC 13.12 needs more fine-tuning.
Not sure what's going on there for you. On my PC with Win 8.1 x64 and HD 7770 all queues are perfectly full, as long as rendering time is a bit (not much) lower than the frame interval. No problems whatsoever produced by OpenCL, both NNEDI3 and error diffusion increase the rendering times but have no negative effect on queues and produce no frame drops.

Quote:
Originally Posted by leeperry View Post
So is there any way to avoid the black frames flashing during FSE/FSW transitions? This is happening whenever I want to open a context menu, and that never occurred on XP.
XP was a bit faster and more seamless when doing FSE/FSW transitions. There's nothing we can do about that. Well, maybe I could find a way (or maybe not), but I consider that a cosmetic problem and as such not very important at the moment.

Quote:
Originally Posted by leeperry View Post
All this said, FSE was mandatory on XP but isn't it a bit of overkill on W7? The OS knows how to handle VSYNC for a change.
Windows was never intended to guarantee that every thread gets enough CPU time without any serious interruptions. When talking about 60fps, every frame is only visible for ~16.7ms. If you don't use FSE, playback can only be smooth if madVR always gets enough time in every 16.7ms slot to prepare and present the required frame. If Windows fails even once to give madVR enough CPU time for a duration of about 16.7ms, you'll automatically get a frame drop. This is where FSE helps: by presenting several frames in advance, you increase the critical time window from 16.7ms to a multiple of 16.7ms. E.g. with 8 frames presented in advance, playback will still be perfect even if Windows stops giving madVR CPU time for 100ms.
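To put rough numbers on it (back-of-the-envelope only, the actual slack depends on how many frames your queue really holds at any moment):

Code:
#include <stdio.h>

/* How long Windows may starve the renderer before a drop becomes visible:
   one frame interval in windowed mode vs. several queued frames in FSE. */
int main(void)
{
    const double refresh_hz   = 60.0;
    const double frame_ms     = 1000.0 / refresh_hz;    /* ~16.7 ms             */
    const int    frames_ahead = 8;                      /* presented in advance */

    printf("windowed tolerance: ~%.1f ms (one frame)\n", frame_ms);
    printf("FSE tolerance     : ~%.1f ms (%d frames queued)\n",
           frames_ahead * frame_ms, frames_ahead);
    return 0;
}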

Quote:
Originally Posted by leeperry View Post
It's the Sammy UE32F5000, it does support 4:4:4 but then no more BFI so it becomes blurryland and there's the usual panel and backlight homogeneity lotteries....
Does it support 3D playback via HDMI 1.4? And does it support 4:4:4 at every refresh rate, or just at 60Hz?

Quote:
Originally Posted by DragonQ View Post
Who can actually use error diffusion though? It seems as if it's about as demanding as NNEDI3 from all the reports so far.
WHAT!?

On my PC (Windows 8.1 x64, AMD HD7770) I get the following rendering times:

1080p24 playback, 1680x1050 monitor:
- random dithering: 2.0 ms
- error diffusion: 8.3 ms

1080p60 playback, 1680x1050 monitor:
- random dithering: 2.6 ms
- error diffusion: 10.1 ms

With a 1920x1080 monitor the cost of error diffusion would be about 15% higher, but that's it. Sure, error diffusion is not free. But it's not *that* demanding, either. And we're talking about an HD7770 here. There are some much more powerful beasts out there.

Quote:
Originally Posted by ryrynz View Post
This is why I inquired about a CPU version.. unfortunately no answer from Madshi though
I did answer (though a bit later than normal). You seem to have missed it.

Quote:
Originally Posted by ryrynz View Post
Hey Madshi, I was wondering how a 2x upscale with Jinc3 AR + 2x upscale with NNEDI, then a downscale with Lanczos3, would look in madVR. Basically getting that NNEDI sharpness and antialiasing without too much thinning happening.. could be a nice blend of both. Do you think it's worth having a pre-upscaling option? I've played around a bit in Avisynth with pre Jinc and Lanczos upscaling, and from what I see things look quite nice when set up before NNEDI. Thoughts?
What would be the purpose of this? Just to remove aliasing? I don't think such a solution would make the image sharper, it would just remove aliasing from the source. I think it would be more effective to run a specialized antialiasing filter, if that's what you need. Most sources don't have aliasing problems, though...

Quote:
Originally Posted by Ver Greeneyes View Post
It doesn't seem to take much juice (2-3ms) on my laptop's Radeon Mobility HD 4650, which surprises me a little (especially given that it's stuck on the legacy driver).
Do you have 13.12 drivers installed? As I wrote in the v0.87.0 announcement post, OpenCL only works with AMD GPUs if you update to the latest drivers.

Quote:
Originally Posted by Plutotype View Post
I have read the last few pages and would like to know which NNEDI3 configuration (720p/1080p to 2160p at x neurons) one would need to achieve maximum upscaling clarity without artifacts. Can anybody also estimate which CPU and GPU would be needed to achieve this?
My HD7770 can do 1080p -> UHD upscaling with NNEDI3 - but only with 24fps content and 16 neurons, and only if I disable everything else. And I don't recommend using 16 neurons for luma doubling. So you'd probably need something which is a bit more than twice as fast as the HD7770. Maybe 2.5x as fast, to be on the safe side. Then you should be able to do 1080p -> 2160p luma doubling with 32 neurons. Probably the "HD 7870 XT" could do the job, but this is just a guess, I can't guarantee it. CPU doesn't matter.

Quote:
Originally Posted by turbojet View Post
For testing you can use http://www.mediafire.com/download/5l...ce_banding.mpg - set the display to 60Hz, turn on film mode and smooth motion. It's soft-telecined, so it must be decoded with LAV Video to find the correct cadence. Maybe it's just my setup; setting a profile helps, but it would be nice to use smooth motion and IVTC together.
Seems to work fine here. No cadence breaks detected and frame stepping looks alright. Can't see anything wrong when playing it back at 60Hz with smooth motion frc activated, either.

Have you tested in FSE mode? And there are no frame drops? Does smooth motion FRC work fine for you with other videos?

Quote:
Originally Posted by turbojet View Post
if (srcFps=29) and (filmMode=true) "interlaced" else "progressive"
29.97 ivtcd in film mode = progressive
29.97 in video mode= interlaced

is this a bug?
I've already explained this before. "srcFps" is information coming from the decoder, and it's not terribly exact. It could be 29.970 or 29.969 or 29.971. It will definitely not be 29. So I'd suggest using ">" or "<" or something like that. E.g. you could do "if (srcFps>29.9) and (srcFps < 30.1) and (filmMode) "interlaced" else "progressive"".
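To illustrate why an exact comparison can never match (plain C only for the arithmetic - the actual rule of course stays in the profile script syntax shown above):

Code:
#include <stdio.h>

/* NTSC "film" material reports 30000/1001 fps. It is never exactly 29 (nor
   exactly 29.97), which is why a range test is the robust way to write the rule. */
int main(void)
{
    double srcFps = 30000.0 / 1001.0;                   /* 29.970029...         */
    int exact     = (srcFps == 29.0);                   /* never true           */
    int ranged    = (srcFps > 29.9) && (srcFps < 30.1); /* what madshi suggests */

    printf("srcFps = %.6f, exact: %d, ranged: %d\n", srcFps, exact, ranged);
    return 0;
}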

Quote:
Originally Posted by pie1394 View Post
So the remaining old & weird case is the black box under ASS subtitle text with the SmoothMotion feature on the ION-LE (GF9300) platform + GeForce 320.49 since 0.86.x. It happens with all of my samples that use an external ASS file. Thus I also wonder why it does not happen on your ION (GF9400) machine.
Maybe I missed something. But since this is an old bug, please report it to the bug tracker and I'll have another look at it when I find the time. Thanks.

Quote:
Originally Posted by MSL_DK View Post
It has been hell to get OpenCL to work with the latest driver from AMD
Strange, worked just fine without any problems here.

Last edited by madshi; 28th January 2014 at 16:59.
madshi is offline   Reply With Quote
Old 28th January 2014, 17:23   #22137  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,926
I think the HD4000 issue is just a very, very small performance degradation.

After I set my RAM to 1600 it works fine, and I can't really see a huge difference in GPU load, so the HD4000 was at its limit with 1333 MHz RAM.

Quote:
I'm not interested in a TV due to the tuner. What interests me most is having a bigger display (30"), a good panel (not TN), and HDMI 1.4 3D playback ability. I can't seem to find a good monitor which ticks all these boxes. Most 3D capable monitors only support 3D when using an NVidia GPU which I find a terrible idea.
The big point was that Philips' PC mode works at all refresh rates, and that information is still easy to find in the manuals.

And Samsung is known for its problems in PC mode (PC mode on one HDMI input only, only at 60 Hz, the input needs a special name...), but time moves on and products should improve, so they may fix that general problem.


Quote:
As soon as I uncheck it in mVR, the tearing is back, so I wonder what that mVR option is actually good for?
Performance! Intel and Nvidia don't have tearing problems; I only get them with AMD and disabled Aero.

You can disable most of Aero in System Properties -> Performance Options: just disable everything except "Enable desktop composition". Aero is then running with no transparency effects.
huhn is offline   Reply With Quote
Old 28th January 2014, 17:23   #22138  |  Link
tFWo
Registered User
 
Join Date: Apr 2011
Posts: 24
Madshi, big thanks for the new features!

I don't know if this information about Error Diffusion was already mentioned. Might have missed it.
Error Diffusion performance is highly dependent on having Smooth Motion turned on.

For example, a 1080p24 (SM 24->60) file on my 270X gives these rendering times (I have NNEDI3 64-neuron chroma upscaling turned on and I'm downscaling with Lanczos3 AR in linear light to 1680x1050, so the rendering times are already high):
23ms (SM off, ED off)
36ms (SM off, ED on)
24ms (SM on, ED off)
51ms (SM on, ED on)

I don't know if this is the correct behavior. ED is applied after SM, so the performance hit is massive if you are doing 24->60. Is it necessary to dither after SM, or can it be done before it?
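Rough arithmetic on my numbers (illustrative only - I don't know exactly how madVR accounts its rendering-time statistic): since the dither pass runs on the presented frames, smooth motion 24->60 means it executes 2.5x as often per second.

Code:
#include <stdio.h>

/* Per-second error-diffusion work when dithering runs after frame rate
   conversion (hypothetical per-frame cost taken from the timings above). */
int main(void)
{
    const double ed_ms_per_frame = 13.0;   /* ~36ms - 23ms with SM off */
    const double src_fps = 24.0;
    const double out_fps = 60.0;

    printf("ED work per second, SM off: ~%.0f ms\n", ed_ms_per_frame * src_fps);
    printf("ED work per second, SM on : ~%.0f ms\n", ed_ms_per_frame * out_fps);
    return 0;
}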
tFWo is offline   Reply With Quote
Old 30th January 2014, 06:30   #22139  |  Link
turbojet
Registered User
 
Join Date: May 2008
Posts: 1,840
Quote:
Quote:
Originally Posted by turbojet View Post
For testing you can use http://www.mediafire.com/download/5l...ce_banding.mpg - set the display to 60Hz, turn on film mode and smooth motion. It's soft-telecined, so it must be decoded with LAV Video to find the correct cadence. Maybe it's just my setup; setting a profile helps, but it would be nice to use smooth motion and IVTC together.
Seems to work fine here. No cadence breaks detected and frame stepping looks alright. Can't see anything wrong when playing it back at 60Hz with smooth motion frc activated, either.

Have you tested in FSE mode? And there are no frame drops? Does smooth motion FRC work fine for you with other videos?

It works fine in FSE and overlay; the problem only occurs in windowed mode. It happens on all videos in the same scenario. It could be a GPU load issue: load goes from 55% to 60% after enabling FRC. Deinterlacing 60 fps at 30 fps shows 85% GPU and plays flawlessly on the 250; on the 650 the same scenario doesn't look good in windowed mode either, and GPU load is only about 30% in that case. So I doubt it's an issue with the 250's GPU load.

I've noticed another oddity when it comes to GPU usage. On the 650, bob deinterlacing at 60 fps only uses about 50% GPU but does not play well in any mode, while deinterlacing at 30 fps looks fine in every mode. Could this be an issue with the display? Also, using NNEDI3 to upscale SD to 1080p works fine with about 50% load; however, upscaling 720p drops many frames even though the GPU load is only about 60%. I can crank the GPU load up to about 90% with Jinc AR and it doesn't drop any frames. Any ideas?

Quote:
Quote:
Originally Posted by turbojet View Post
if (srcFps=29) and (filmMode=true) "interlaced" else "progressive"
29.97 ivtcd in film mode = progressive
29.97 in video mode= interlaced

is this a bug?
I've already explained this before. "srcFps" is information coming from the decoder, and it's not terribly exact. It could be 29.970 or 29.969 or 29.971. It will definitely not be 29. So I'd suggest using ">" or "<" or something like that. E.g. you could do "if (srcFps>29.9) and (srcFps < 30.1) and (filmMode) "interlaced" else "progressive"".
After some sleep I figured out my mistake.
__________________
PC: FX-8320 GTS250 HTPC: G1610 GTX650
PotPlayer/MPC-BE LAVFilters MadVR-Bicubic75AR/Lanczos4AR/Lanczos4AR LumaSharpen -Strength0.9-Pattern3-Clamp0.1-OffsetBias2.0
turbojet is offline   Reply With Quote
Old 30th January 2014, 07:16   #22140  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Quote:
Originally Posted by madshi View Post
What would be the purpose of this? Just to remove aliasing? I don't think such a solution would make the image sharper, it would just remove aliasing from the source.
On the one hand NNEDI gives beautiful sharp edges, but it does so at the expense of thinning things out a little more than they should be on those edges.. Quite noticeable on letters, for example; they look fine with Jinc and Lanczos.
I wanted to experiment to see if I could retain some of that NNEDI sharpness while keeping as much Jinc-style upscaling as possible, and hopefully avoid some of that funky pattern thing NNEDI does too.
ryrynz is offline   Reply With Quote