Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
10th June 2015, 02:10 | #30901 | Link
/人 ◕ ‿‿ ◕ 人\
Join Date: May 2011
Location: Russia
Posts: 643
Quote:
Shiandow's deband runs in several passes: the first n passes each downscale the image by a factor of 2, and the last one uses all of the downscaled images as input (or the last n passes each use the n-th image, I'm not sure).
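As a rough sketch of that multi-pass structure (my reading of the description above, not Shiandow's actual shader code; the final blend step is omitted):

```python
import numpy as np

def downscale2x(img):
    """Box-downscale a 2D image by a factor of 2 (average each 2x2 block)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def deband_pyramid(img, passes=3):
    """First n passes each halve the image; the last pass would then blend
    all of the downscaled levels back into the original (blend not shown)."""
    levels = [np.asarray(img, dtype=float)]
    for _ in range(passes):
        levels.append(downscale2x(levels[-1]))
    return levels

levels = deband_pyramid(np.zeros((16, 16)), passes=3)
print([lvl.shape for lvl in levels])  # [(16, 16), (8, 8), (4, 4), (2, 2)]
```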
10th June 2015, 02:23 | #30902 | Link
Registered User
Join Date: Jun 2014
Posts: 42
Quote:
That's using the shaders two times. The anti-ringing is ON two times! I like it, besides the rings, of course. It's so much more organic! I think the anti-ringing destroys some textures beyond the rings, kkk. You should use Jinc as a second pass. For now it's GPL, though I don't mind changing it for any use; I chose that licence only because RetroArch was using it. Last edited by Hyllian; 10th June 2015 at 02:55.
10th June 2015, 02:54 | #30903 | Link
Registered User
Join Date: Mar 2009
Posts: 3,646
|
Quote:
10th June 2015, 03:44 | #30905 | Link |
Registered User
Join Date: Jun 2010
Posts: 15
I'm sorry to bring up the topic of FreeSync again, but can't madVR force the output FPS (that is, the 3D/DirectX FPS) to be an integer multiple of the video frame rate, and let FreeSync or G-Sync handle the rest?
i.e. a 24 fps video is converted to 48 fps by frame doubling/repeating, then the output FPS is set equal to the output video FPS (in this case, 48 DX9 fps), and since it falls into the Free/G-Sync working range, Free/G-Sync gets activated. From what I've read about G-Sync and FreeSync, game developers don't have to do anything to implement them; it's all driver-side. The only requirements are that the FPS must be in the working range and the game must be in full screen. I haven't seen this suggested, so I decided to point it out. Last edited by blindbox; 10th June 2015 at 03:49. Reason: Edited some terms. I'm not sure what you guys call the output FPS. Composition FPS? Overlay FPS? So I decided to use DX9 FPS
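The frame-repetition arithmetic being suggested could look like this (the 40-144 Hz window is a made-up example range, not any particular monitor's spec):

```python
def vrr_repeat_factor(video_fps, vrr_min_hz, vrr_max_hz):
    """Smallest integer n such that video_fps * n lies inside the VRR window,
    or None if no integer multiple fits."""
    n = 1
    while video_fps * n <= vrr_max_hz:
        if video_fps * n >= vrr_min_hz:
            return n
        n += 1
    return None

# 24 fps film on a hypothetical 40-144 Hz FreeSync display:
# 24 is below the window, so repeat each frame twice -> present at 48 fps.
print(vrr_repeat_factor(24, 40, 144))      # 2
print(vrr_repeat_factor(23.976, 40, 144))  # 2 (presents at 47.952 Hz)
```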
10th June 2015, 08:04 | #30907 | Link |
Registered User
Join Date: Jun 2005
Posts: 504
I did some testing of SuperRes with video files of different resolutions, and the following settings proved quite good to my eyes.
For 1024x576 -> 1920x1080, the following settings are used:
- image doubling: off
- chroma upscaling: Bicubic 75 + AR + SuperRes
- image upscaling: Bicubic 75 + AR
- image downscaling: Catmull-Rom + AR + LL
- upscaling refinement: 2 passes, medium, strength 0.65, sharpness 0.25, softness 0.25, anti-aliasing 0.15, anti-ringing 0.14, refine the image only once after upscaling is complete

With 720x406 -> 1920x1080, the differences from the former settings are:
- image doubling: NEDI
- upscaling refinement: choosing "refine the image after every ~2x upscaling step" produces a better, less blocky result (the original compressed video has block artefacts)

Since 1024x576 -> 1920x1080 with image doubling ends up with a DOWNSCALING step to reach the screen resolution, while 720x406 ends up with an UPSCALING step, I guess a similar effect might hold for videos with other resolutions. With 1024x576 -> 1920x1080 files, the non-doubling defaults do not produce good quality, at least for Bicubic 75 + AR upscaling. Actually, I have tried SuperRes with other upscaling algorithms, including bilinear; it seems SuperRes works well with Bicubic and Bilinear but not with the other algorithms, to my eyes of course.

One question: if image doubling is used for 1024x576 -> 1920x1080 and "refine the image after every ~2x upscaling step" is chosen in the upscaling refinement settings, would the refinement be applied twice, or just once after doubling? And how exactly does it work with "refine the image only once after upscaling is complete" set to yes?
10th June 2015, 08:34 | #30908 | Link
Registered Developer
Join Date: Sep 2006
Posts: 9,140
Quote:
Please use the advanced search for this thread, search for my user name and "FreeSync", and you should get all the answers you need.
If you activate "refine after every ~2x upscaling step", madVR splits the upscaling into as many upscaling steps as needed to reach the target resolution. It usually does exact 2x upscaling steps, except for the very last step, which is done with the exact factor needed to reach the target resolution. For 2x steps it doesn't matter if you use image doubling or not, the behaviour is the same. E.g. if you have image doubling activated, madVR uses NNEDI3/NEDI to double the resolution; if you don't, madVR doubles the resolution with your selected upscaling algorithm (e.g. Jinc).

Some examples:
scaling factor 1.9x -> upscaling 1.9x + superres
scaling factor 2.2x -> upscaling 2.2x + superres
scaling factor 2.3x -> upscaling 2x + superres + upscaling 1.3x + superres
scaling factor 4.1x -> upscaling 2x + superres + upscaling 2.1x + superres

Please note that the upscaling factors 1.3x and 2.1x mentioned above are not fully correct, but you will probably know what I mean. Also please note that e.g. upscaling by 1.9x might include image doubling + downscaling, if you've activated NNEDI3/NEDI doubling.

If you deactivate "refine after every ~2x upscaling step", madVR first scales the same way v0.87.21 would, and then runs superres only once, directly after upscaling (or doubling + up/downscaling) is complete. Superres is skipped in this situation if the overall upscaling factor is smaller than 1.125x.

1024x576 -> 1920x1080 is less than 2x in both directions, so superres will only be applied once regardless of your settings. So you could leave "refine after every ~2x upscaling step" activated all the time; it would not change anything.

Last edited by madshi; 10th June 2015 at 08:40.
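The step-splitting described above can be mimicked with a small helper. The 2.25x split threshold is my guess from the examples (2.2x stays one step, 2.3x splits); the real rule inside madVR may differ, and the residual factors here are the computed ones rather than the rounded ones listed:

```python
def split_upscale(total_factor, split_above=2.25):
    """Split an overall upscaling factor into exact 2x steps plus a final
    residual step; SuperRes refinement would run after each step.

    split_above is an assumed threshold, not a documented madVR constant."""
    steps = []
    while total_factor > split_above:
        steps.append(2.0)
        total_factor /= 2.0
    steps.append(round(total_factor, 3))
    return steps

for f in (1.9, 2.2, 2.3, 4.1):
    print(f, "->", split_upscale(f))
# 1.9 -> [1.9]; 2.2 -> [2.2]; 2.3 -> [2.0, 1.15]; 4.1 -> [2.0, 2.05]
```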
10th June 2015, 08:52 | #30909 | Link |
Registered User
Join Date: Jun 2005
Posts: 504
|
My observations still stand then; it seems that I don't like image doubling when the scale factor is less than 2.0 and Catmull-Rom + AR + LL downscaling is involved, which is consistent with my PM to you.
madshi, maybe a detailed status report of the process would help us understand it better. Right now Ctrl+J doesn't show things correctly according to your explanation in the post above. Last edited by Anima123; 10th June 2015 at 08:59.
10th June 2015, 10:15 | #30911 | Link |
Registered User
Join Date: Oct 2010
Posts: 131
|
@madshi
" Superres is skipped in this situation if the overall upscaling factor is smaller than 1.125x" Why? SuperRes can be very useful as a sharpener even on not upscaled images. Another question: How processing changes when SuperRes is "applied first"? Last edited by toniash; 10th June 2015 at 10:22. |
10th June 2015, 11:05 | #30912 | Link
Registered User
Join Date: Aug 2008
Posts: 176
|
Quote:
But do the FineSharp ON and linear light ON (image enhancement) options magically fix the problem?
10th June 2015, 12:46 | #30914 | Link
Registered User
Join Date: Sep 2013
Posts: 919
|
Quote:
How can pre-processing fix a hardware trait? ... It can't. Going to test the sharpening shaders as soon as I have some time.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
10th June 2015, 12:59 | #30915 | Link |
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,342
|
These patterns work by exploiting how subsampling works in the majority of PCs. If a post-processing algorithm changes the pattern, they stop working.
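A toy illustration of why such patterns break (this simulates only the horizontal chroma averaging of 4:2:2; real test charts and converters are more involved):

```python
import numpy as np

# Alternating 1-pixel-wide chroma columns: equal luma, opposite chroma.
cb = np.where(np.arange(8) % 2 == 0, 192, 64)

# 4:4:4 keeps every chroma sample, so the stripes survive untouched.
cb444 = cb.copy()

# 4:2:2 keeps one chroma sample per horizontal 2-pixel pair; a typical
# converter averages the pair, collapsing the stripes to uniform grey (128).
cb422 = cb.reshape(-1, 2).mean(axis=1).repeat(2)

print(cb444.tolist())  # [192, 64, 192, 64, 192, 64, 192, 64]
print(cb422.tolist())  # [128.0, 128.0, ...] - the stripes are gone
```

Any post-processing that shifts or resamples the image before the subsampling stage no longer leaves those 1-pixel columns intact, which is why the detection stops working.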
__________________
LAV Filters - open source ffmpeg based media splitter and decoders |
10th June 2015, 13:27 | #30916 | Link
Registered User
Join Date: Jun 2010
Posts: 15
|
Quote:
Unfortunately, I don't have a FreeSync monitor to try this myself, for example by spoofing my refresh rate towards madVR. Last edited by blindbox; 10th June 2015 at 13:34.
10th June 2015, 13:31 | #30917 | Link
Registered User
Join Date: Aug 2008
Posts: 176
|
Quote:
The 4:2:2 side of the pattern is masked very well; practically only the 4:4:4 side is visible.
10th June 2015, 13:54 | #30918 | Link |
Registered User
Join Date: Apr 2014
Posts: 13
|
Basically, it would require madVR to deliver frames exactly when they need to be displayed. This works for games (a frame is rendered, then displayed instantly), but madVR has a target timing for each frame. However, madVR cannot guarantee to send the frame when that time comes (no CPU time available, and/or simply not precise enough timers).
This is why we have queues in FSE (apart from more stability): to let the GPU handle the presentation times of frames using a HW circuit.
TL;DR: You would need to present frames exactly when they need to be presented, with no queue. Current systems can't do this.
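A toy model of that point, with purely illustrative numbers (not measurements of madVR or any real scheduler):

```python
import random

random.seed(1)
FRAME_MS = 1000 / 48  # ~20.8 ms per frame at 48 Hz

def missed_presents(queue_depth, jitter_ms=5.0, frames=1000):
    """With no queue, the CPU must hand over each frame exactly on its
    deadline; any scheduler jitter past it drops the frame. A queue of
    pre-rendered frames gives the GPU's HW presenter that much slack."""
    slack_ms = queue_depth * FRAME_MS
    misses = 0
    for _ in range(frames):
        wakeup_delay = random.uniform(0.0, jitter_ms)
        if wakeup_delay > slack_ms:
            misses += 1
    return misses

print(missed_presents(queue_depth=0))  # nearly every frame missed
print(missed_presents(queue_depth=2))  # 0 - the queue absorbs the jitter
```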
10th June 2015, 13:55 | #30919 | Link
/人 ◕ ‿‿ ◕ 人\
Join Date: May 2011
Location: Russia
Posts: 643
|
Quote:
Again, this pattern was created for easy subsampling detection. If you change this pattern, then it stops working. Read what chroma subsampling is: http://en.wikipedia.org/wiki/Chroma_subsampling
10th June 2015, 14:22 | #30920 | Link
Registered User
Join Date: May 2011
Posts: 164
|
Quote:
VRR monitors like my ROG Swift have no issue running at a fixed refresh rate compatible with virtually any content anyway (100, 120, 144, 59, 60, 85...), and I could probably even create a custom resolution if I needed to. Last edited by kalston; 10th June 2015 at 14:33.