Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 8th June 2013, 09:42   #19101  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
What do you mean with "moving back"? As far as I remember, madVR has always used a TV levels rendering pipeline. Going PC levels would mean either losing BTB/WTW, or alternatively I'd have to use floating point textures for all rendering steps, which would make things slower rather than faster. The main reason for using a TV level rendering pipeline is that I don't have to use floating point textures to maintain BTB/WTW, to save performance!!
No, at least not from what I remember. You originally started out with 3DLUTs outputting PC range. Your decision to migrate the entire render chain to TV range + BTB/WTW and shaders came much later in madVR development, and actually hurt performance quite a bit back when I had my 7800GTX 512. On my GT440, 16bit float textures seem almost as cheap as 8bit integer; things only start to get slow with 32bit floats, which by comparison are ~3x slower. It's the shader operations rather than the textures that are the main cause of slowdowns.
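The BTB/WTW point being argued here can be illustrated numerically. A minimal sketch (hypothetical illustration, not madVR's actual code) of why an 8-bit TV -> PC -> TV round trip preserves in-range video levels but destroys blacker-than-black and whiter-than-white:

```python
def tv_to_pc(x):
    # Limited (16-235) -> full (0-255) range expansion, clamped to 8 bits;
    # the clamp is where BTB/WTW information is discarded.
    return min(255, max(0, round((x - 16) * 255 / 219)))

def pc_to_tv(y):
    # Full -> limited range compression.
    return round(y * 219 / 255) + 16

# In-range video levels survive the round trip unchanged...
assert all(pc_to_tv(tv_to_pc(x)) == x for x in range(16, 236))

# ...but BTB (0-15) and WTW (236-255) collapse onto black and white:
assert pc_to_tv(tv_to_pc(4)) == 16     # BTB lost
assert pc_to_tv(tv_to_pc(250)) == 235  # WTW lost
```

Keeping the whole pipeline in TV levels sidesteps that clamp, which is madshi's stated reason for avoiding PC-levels intermediates without floating point.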

Quote:
Originally Posted by madshi View Post
Sorry, but I'm pretty sure you misunderstood him completely. AFAIK, the issues he ran into had nothing to do with the 3DLUT being TV range, I believe.
I think you misunderstood him, but whatever. This problem with elevated BTB with Argyll 3DLUTs is unique to madVR, and from what I can tell is quite similar to what causes leeperry's BTB MPC-HC shader problems. He stated pretty clearly that he could only properly resolve it if madVR was using PC range output 3DLUTs, in similar fashion to madVR 0.61 or otherwise.

Quote:
Originally Posted by madshi View Post
As to performance, as I said above, going PC range would either mean losing BTB/WTW, or using floating point textures, the latter of which would make things slower instead of faster.
In old versions of madVR, it was implemented such that BTB/WTW was eliminated rather early in the render chain. Though as mentioned above, 16bit float texture storage is fast, at least on NVIDIA GPUs; it's only the shader operations which are slow, as far as I can tell. Overall I've never seen the usefulness of maintaining BTB/WTW throughout the entire processing chain if you are ultimately outputting 0-255 PC range.

Quote:
Originally Posted by madshi View Post
Yes, it means losing a little bit of GPU performance, but it's not really a "night and day" difference, is it?

I could live with adding support for TV levels subtitles to the subtitle interface, if performance is so critical to you. I do wonder whether the added complexity is worth the small performance benefit. But I could live with it...
Let's just take a wait-and-see approach then. If we start getting a number of reports from people who can no longer use their normal settings in madVR+XySubFilter, we can discuss adding support for TV level subtitles at that time.

Quote:
Originally Posted by madshi View Post
Is there? What kind of GPU does that large subset of the user-base have? With bilinear scaling, most DX9 GPUs should be able to handle madVR ok, as far as I can see. I would think that with even Intel HD3000 already running madVR just fine, most users today should have GPUs that should handle madVR with bilinear filtering just fine?
Honestly, I have no idea; I just see chatter occasionally on various web boards, since xy-VSFilter+madVR frequently get recommended together to the fansub crowd. This is usually the same group of people who have CPUs which can barely handle 720p 10bit H264 properly, if at all. Many reports probably come from people using old laptops or Pentium 4 generation computers, but who knows.

Quote:
Originally Posted by madshi View Post
Well, then let's collect some money and send it to him. A budget GPU is so cheap these days.
Yeah, I don't really know what's going on there, only that he's occasionally told me that debugging will have to wait because he doesn't have access to his madVR computer until XX date. It hasn't really caused any problems, since most of the heavy coding can be done blind without constant testing; it's only an issue when there is a need for him to actively debug something in real-time.

Quote:
Originally Posted by madshi View Post
Of course doing any processing in UHD costs more performance, but again, the simple TV -> PC -> TV conversion is so easy on any modern GPU that I don't expect any performance hit worth mentioning with mid-range or high-end GPUs in the near future, even with UHD.
I was talking about the mid-range and high-end GPUs of today, which as far as I can gather really struggle with UHD scaling at any decent settings. It would be good to get some performance metrics on this once we release a XySubFilter beta.

Quote:
Originally Posted by madshi View Post
Well, as I explained above, doing what you suggest would slow madVR down instead of speeding it up. I do know what I'm doing, you know, although you often seem to doubt that...
Old versions of madVR which were texture heavy + 3DLUT instead of shader heavy + 3DLUT were faster though...

Quote:
Originally Posted by madshi View Post
Hmmm... Just a thought: Try activating the trade quality for performance option "store custom pixel shader results in 16bit buffer instead of 32bit". That way the TV -> PC conversion for subtitle blending will use 16bit floating point textures instead of 32bit. That might help to get some performance back, on the cost of accuracy.
I already had the option enabled, and was under the impression that it was enabled by default? So all results I posted thus far were with 16bit buffers. Use of 32bit buffers seems to make things even slower.

Quote:
Originally Posted by madshi View Post
The funny thing is that the madVR subtitle color correction practically comes for free. What costs performance is the conversion of the PC levels subs to TV levels.
Yeah, I'm actually surprised the cost is as large as it is. Wouldn't it be possible to generate a hard-coded 1D LUT or something to speed up these conversions, instead of doing them dynamically in shaders?
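For what it's worth, the 1D LUT idea floated here is straightforward to sketch (a hypothetical illustration, not madVR's code): precompute the 256 possible 8-bit conversions once, then replace the per-pixel shader math with a table lookup.

```python
def build_tv_to_pc_lut():
    # One entry per 8-bit input code; standard limited -> full expansion.
    # Note the clamp: a LUT built this way discards BTB/WTW, which is
    # exactly the trade-off debated in this thread.
    lut = []
    for x in range(256):
        y = round((x - 16) * 255 / 219)
        lut.append(min(255, max(0, y)))
    return lut

LUT = build_tv_to_pc_lut()

assert LUT[16] == 0      # reference black -> 0
assert LUT[235] == 255   # reference white -> 255
assert LUT[0] == 0       # BTB clipped away
```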
cyberbeing is offline   Reply With Quote
Old 8th June 2013, 10:06   #19102  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
@cyberbeing:

I've just looked through the subtitle interface. As you know, it does already contain a "TV" vs "PC" levels indicator. But currently madVR ignores what XySubFilter sets there, and madVR reports TV vs PC depending on whether the source was detected as fullrange or not. I think we do need to re-think the way we handle levels because currently it's a bit of a mess. I've taken for granted that XySubFilter always renders in PC levels, but looking at the interface, it's not clearly defined that way. Ok, "None" is defined as being fullrange. But anything else is flexible, and I think neither madVR nor XySubFilter handle the flexible part correctly right now. So I think we should restart a discussion on how to handle levels in the best way. I'm not happy with the interface right now in regards to levels because it's not clear enough how either provider or consumer should behave in all situations.

Do you agree that we should re-discuss this? If so, where should we do this discussion?
I don't know about SubRenderIntf.h, but during our discussions there was consensus that the interface was to be clearly defined as 0-255 RGB only. I don't think we really need to re-discuss this unless we were going to support TV-range 16-235 or something like AYUV.

Those level range tags are just for informative purposes, and not immediately relevant to the subtitle interface. They only define intended level range to use for YCbCr output by xy-VSFilter and similar transform filter renderers. At the time, you were quite insistent on it being included in the subtitle interface, because you wanted to mimic incorrect xy-VSFilter behavior in those cases where the subtitle and video were not using the same YCbCr level range. Issues like xy-VSFilter outputting TV-range YCbCr subtitles on a PC-range YCbCr video and vice versa.
Old 8th June 2013, 10:19   #19103  |  Link
ajp2k11
Registered User
 
Join Date: Jul 2011
Posts: 57
Quote:
Originally Posted by ajp2k11 View Post
Thanks for the suggestion, tried it but same thing.

@Madshi,

I have 'auto-load subtitles' unchecked in MPC-HC when using the internal subtitle rendering but same thing...

EDIT: Remuxed one of the files I'm having problems with, removing the internal subtitle, after that it plays fine...
Blocking LAV splitter in MPC-HC and using Haali from CCCP instead and it works fine with subs and all...!?
Old 8th June 2013, 10:50   #19104  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Danat View Post
Not 100% full, but "full enough" I'd say. Around 100/128 if I just play the video through a first heavy fragment. The 4th test, though, was made by seeking into the middle of a long heavy fragment, pausing to allow the CPU queue to fill up to 60-80/128, then releasing pause and observing how the decoder's attempt to fill the CPU queue (it fluctuates around 70-90) sometimes makes trouble for the renderer.
Well, so it seems uploading is the problem for you. I have the render and presentation threads at HIGHEST priority, and uploading at ABOVE_NORMAL priority. And the render and presentation threads should sit idle most of the time. So unless the decoder is running with ABOVE_NORMAL, too, the uploading thread should have priority over the decoding thread(s).

Have you tried different decoders? Maybe some are running at ABOVE_NORMAL and some at NORMAL?

Quote:
Originally Posted by ajp2k11 View Post
EDIT: Remuxed one of the files I'm having problems with, removing the internal subtitle, after that it plays fine...
Well, that's interesting, but I have no explanation for that. And thus also no solution... You could try once again to use different subtitle renderers to see if it makes any difference...

Quote:
Originally Posted by ajp2k11 View Post
Blocking LAV splitter in MPC-HC and using Haali from CCCP instead and it works fine with subs and all...!?
Weird. You could try talking to nevcairiel about this, but I bet he will be as confused about this as I am...

Quote:
Originally Posted by cyberbeing View Post
No, at least not from what I remember. You originally started out with 3DLUTs outputting PC range. Your decision to migrate the entire render chain to TV range + BTB/WTW and shaders came much later in madVR development, and actually hurt performance quite a bit back when I had my 7800GTX 512.
Quote:
Originally Posted by cyberbeing View Post
In old versions of madVR, it was implemented such that BTB/WTW was eliminated rather early in the render chain.
I don't know where you pull that information from, but it's clearly wrong. I've just double checked with madVR 0.09, which is the oldest madVR build I still have on my PC, and it maintains BTB/WTW just fine. The rendering pipeline has *always* been TV levels. However, in versions prior to v0.62 madVR was using different 3dluts for PC and TV levels *output*. Since 3dlut processing and dithering are the last steps in the processing chain, that doesn't change anything, though. All the main processing was still in TV levels. The 3dlut input was also always TV levels, unless my memory is totally betraying me.

Quote:
Originally Posted by cyberbeing View Post
On my GT440, 16bit float textures seem almost as cheap as 8bit integer; things only start to get slow with 32bit floats, which by comparison are ~3x slower. It's the shader operations rather than the textures that are the main cause of slowdowns.
I'm using 16bit integer textures by default, which have the benefit of using all their accuracy on the important 0..1 range. Yes, 16bit floats are relatively fast, too, but they stretch their accuracy over a *much* wider range, which means that in the area which counts the most (16-235) they have a much lower accuracy compared to 16bit integer. If I had to use floating point textures, I would have to go 32bit floating point by default, in order to not compromise accuracy. So the decision for me is not 16bit integer vs 16bit float, but it is 16bit integer vs 32bit float, and that's a quite big difference in performance.
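The precision argument can be sanity-checked with a toy quantization sketch (an assumption-level illustration: half-precision floats have a 10-bit mantissa, so their grid spacing doubles with each power of two):

```python
def quantize(v, step):
    # Round v to the nearest multiple of step (a crude texture-format model).
    return round(v / step) * step

# 16-bit integer textures spread 65536 uniform steps over 0..1.
int16_step = 1.0 / 65535.0

# float16 spacing near 1.0 is 2^-10; in [0.5, 1.0) it is 2^-11.
f16_step_near_one = 2.0 ** -10
assert 60 < f16_step_near_one / int16_step < 70   # ~64x coarser near white

# A bright in-range video level mapped into 0..1:
v = 234 / 255
err_f16 = abs(quantize(v, 2.0 ** -11) - v)   # float16 grid in [0.5, 1.0)
err_i16 = abs(quantize(v, int16_step) - v)   # uniform int16 grid
assert err_f16 > err_i16                     # int16 wins where it matters
```

This is the gist of the int16-vs-float16 comparison: float16 spends its precision near zero, where the 16-235 video range doesn't benefit.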

Ok, if I went with floating point, people could compromise by using 16bit float instead of 32bit (trade quality for performance). They'd probably get identical performance to the current madVR logic, but with slightly faster subtitle rendering. On the negative side, they'd get less accuracy at the same speed as the current madVR logic. I think that's a bad compromise. Furthermore, if you look at the high quality option, I'd have to use float32 instead of int16, which costs twice the memory bandwidth. So the memory bandwidth cost of doing Jinc3 AR for a "no compromise" setup would double. All of that just to get a small boost in subtitle rendering performance? Sounds like a bad idea to me...

Quote:
Originally Posted by cyberbeing View Post
I think you misunderstood him, but whatever. This problem with elevated BTB with Argyll 3DLUTs is unique to madVR [...]. He stated pretty clearly that he could only properly resolve it if madVR was using PC range output 3DLUTs, in similar fashion to madVR 0.61 or otherwise.
If what you're saying is true then please explain to me why the eeColor box does not have this issue, although it is also using TV levels 3DLUTs (when using it for Blu-Ray/TV calibration)?

Furthermore, ArgyllCMS could make TV levels 3dluts behave identical to PC levels 3dluts by simply clipping BTB/WTW away. From a technical/scientific point of view, using a TV levels 3dlut with clipped BTB/WTW is identical to using a PC levels 3dlut.
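The equivalence claim above can be checked with a toy 1D version (hypothetical numbers, ignoring the 3D gamut part of a real 3dlut):

```python
def tv_to_pc(x):
    # Unclipped limited -> full expansion; BTB/WTW land outside 0..255.
    return (x - 16) * 255.0 / 219.0

def clamp(v, lo=0.0, hi=255.0):
    return min(hi, max(lo, v))

# Pipeline 1: clip BTB/WTW inside the "TV levels" LUT, then expand.
# Pipeline 2: expand PC-style and clamp the output.
# Both produce identical results for every 8-bit input code:
for code in range(256):
    via_tv = tv_to_pc(clamp(code, 16, 235))
    via_pc = clamp(tv_to_pc(code))
    assert abs(via_tv - via_pc) < 1e-9
```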

From what I've read the recent problems with ArgyllCMS with BTB information suddenly becoming visible with weird colors was due to Graeme trying to add xvYCC support. And the same problem also occurred with the eeColor. I'm not sure if that is the problem with elevated BTB that you're talking about, though.

Quote:
Originally Posted by cyberbeing View Post
I was talking about the mid-range and high-end GPUs of today which as far I as can gather really struggle with UHD scaling with any decent settings.
True, but the reason they're struggling is the performance cost of high quality upscaling algorithms. Try bilinear scaling and things look quite different. Doing TV <-> PC conversions via shader math is not *that* costly with modern GPUs. Let's just do the math: Jinc3 (without AR) needs to read 36 input pixels to produce 1 output pixel. For each of those 36 input pixels Jinc also needs to access a Jinc 1dlut. Add to that a lot of shader math. For a UHD image that means 288 million source pixels need to be read, another 288 million 1dlut accesses, for 8 million output pixels. All of that per video frame. And this doesn't even include the AR filter.

Now let's compare a UHD TV <-> PC conversion to that: Such a conversion needs 8 million input pixels and 8 million output pixels, with no 1dlut access and barely any shader math to speak of. Furthermore, having each input pixel position match exactly the output pixel also means that the GPU texture caches work much more effectively compared to the upscaling shaders. Can you see how doing a TV <-> PC conversion is not even remotely in the same cost category as an upscaling algorithm?
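Spelling out that arithmetic (assuming a 3840x2160 frame; the post's "288 million" uses the rounded 8-million-pixel figure):

```python
width, height = 3840, 2160
out_pixels = width * height              # output pixels per UHD frame

jinc3_taps = 36                          # input reads per output pixel
jinc3_reads = out_pixels * jinc3_taps    # plus one 1dlut access per tap

levels_reads = out_pixels                # TV<->PC: 1 read, 1 write, no LUT

assert out_pixels == 8_294_400                # ~8 million
assert jinc3_reads == 298_598_400             # ~300 million reads per frame
assert jinc3_reads // levels_reads == 36      # 36x the texture traffic
```

At 24fps that works out to roughly 7 billion texture reads per second for the upscale alone, versus about 200 million for the levels conversion, before even counting the 1dlut accesses and shader math.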

Quote:
Originally Posted by cyberbeing View Post
Old versions of madVR which were texture heavy + 3DLUT instead of shader heavy + 3DLUT were faster though...
With very old GPUs, probably. The problem is that newer GPUs get more and more shader power, while the rest of the GPU doesn't get faster as quickly. So for newer generation GPUs it's a better idea to use shader math instead of textures. That applies not only to mid-range and high-end GPUs, but also to low-end GPUs. The GPU generation makes all the difference here. I believe for madVR it is a good idea to concentrate on newer generation GPUs. I find it more important to make madVR run well on a new generation low-end GPU than on an old generation mid-range GPU. Hence my decision to use more shader math and fewer textures. And this decision should pay off more and more in the future.

Quote:
Originally Posted by cyberbeing View Post
I don't know about SubRenderIntf.h, but during our discussions there was consensus that the interface was to be clearly defined as 0-255 RGB only. I don't think we really need to re-discuss this unless we were going to support TV-range 16-235 or something like AYUV.

Those level range tags are just for informative purposes, and not immediately relevant to the subtitle interface. They only define intended level range to use for YCbCr output by xy-VSFilter and similar transform filter renderers. At the time, you were quite insistent on it being included in the subtitle interface, because you wanted to mimic incorrect xy-VSFilter behavior in those cases where the subtitle and video were not using the same YCbCr level range. Issues like xy-VSFilter outputting TV-range YCbCr subtitles on a PC-range YCbCr video and vice versa.
Wasn't our big fight back then about me wanting to get rid of levels from the interface, while you wanted it included? At least that's what I remember, but I might be wrong...

From where I stand right now, I'm not sure what madVR should do if XySubFilter reports TV vs PC. And I'm also not sure what XySubFilter should do if madVR reports TV vs PC. I don't think we should leave the interface like this. We at least need to clarify the intended behaviour. And while we're at it, we could consider adding support for 16-235 subtitles to avoid performance problems in the video renderer?
Old 8th June 2013, 11:07   #19105  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by cyberbeing View Post
At the time, you were quite insistent on it being included in the subtitle interface
Haha, this is kind of funny. I've just double checked and it was the other way round: You were insistent, and I was doubting that the added complexity of adding TV/PC flagging to the interface was worth it. See comment #217 and following:

https://code.google.com/p/xy-vsfilter/issues/detail?id=91#c224
Old 8th June 2013, 12:04   #19106  |  Link
Soukyuu
Registered User
 
Soukyuu's Avatar
 
Join Date: Apr 2012
Posts: 169
Quote:
Originally Posted by ajp2k11 View Post
Blocking LAV splitter in MPC-HC and using Haali from CCCP instead and it works fine with subs and all...!?
I've had this experience as well. For some reason LAV needs a higher CPU queue than Haali for playback not to lag with madVR on the same settings.
Old 8th June 2013, 12:11   #19107  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,346
Quote:
Originally Posted by Soukyuu View Post
I've had this experience as well. For some reason LAV needs a higher CPU queue than Haali for playback not to lag with madVR on the same settings.
Except that his issue is something else entirely.

In any case, the splitter has absolutely zero interaction with D3D or the GPU, so it's not really possible for it to cause any problems with madVR's vsync detection.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 8th June 2013 at 12:17.
Old 8th June 2013, 12:17   #19108  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
Since 3dlut processing and dithering are the last steps in the processing chain, that doesn't change anything, though. All the main processing was still in TV levels. The 3dlut input was also always TV levels, unless my memory is totally betraying me.
I thought that in old versions, back when the 3DLUT did YCbCr->RGB & TV->PC, this occurred as one of the first steps? I also seem to remember that madVR actually read the headers which contained the parameters the 3DLUTs were created with, and could handle both PC input and TV input.

Quote:
Originally Posted by madshi View Post
So the decision for me is not 16bit integer vs 16bit float, but it is 16bit integer vs 32bit float, and that's a quite big difference in performance.
Ok, if I went with floating point, people could compromise by using 16bit float instead of 32bit (trade quality for performance). They'd probably get identical performance to the current madVR logic, but with slightly faster subtitle rendering. On the negative side, they'd get less accuracy at the same speed as the current madVR logic. I think that's a bad compromise. Furthermore, if you look at the high quality option, I'd have to use float32 instead of int16, which costs twice the memory bandwidth. So the memory bandwidth cost of doing Jinc3 AR for a "no compromise" setup would double. All of that just to get a small boost in subtitle rendering performance? Sounds like a bad idea to me...
Well, the main issue I've seen with 32bit floats is that they limit maximum framerate throughput on weaker GPUs. I'd expect most modern entry-level GPUs could handle >60fps display with 32bit floats, though, if there wasn't any heavy shader stuff going on. Doing YCbCr->RGB & TV->PC in a 3DLUT in 0.61 was ~25% faster than your change to gamut shaders + TV->TV 3DLUT + level range shaders, back when I had my old 7800GTX 512 GPU. You could likely gain back some of the performance lost here by (optionally) moving some of these operations back onto a 3DLUT.


Quote:
Originally Posted by madshi View Post
If what you're saying is true then please explain to me why the eeColor box does not have this issue, although it is also using TV levels 3DLUTs (when using it for Blu-Ray/TV calibration)?

Furthermore, ArgyllCMS could make TV levels 3dluts behave identical to PC levels 3dluts by simply clipping BTB/WTW away. From a technical/scientific point of view, using a TV levels 3dlut with clipped BTB/WTW is identical to using a PC levels 3dlut.

From what I've read the recent problems with ArgyllCMS with BTB information suddenly becoming visible with weird colors was due to Graeme trying to add xvYCC support. And the same problem also occurred with the eeColor. I'm not sure if that is the problem with elevated BTB that you're talking about, though.
The eeColor box probably doesn't have the issue because it clips BTB/WTW, while you maintain it. Honestly, I'm not sure why in madVR an Argyll 3DLUT outputting an elevated 16-235 as 20-235 results in madVR displaying BTB 10-15 after expansion to PC range (random numbers used here for example).

He must be misunderstanding something then, because when I asked him he said that clipping BTB & WTW wasn't possible with a TV range in/out 3DLUT, and that it was something madVR should be doing, which you obviously aren't, since it becomes visible.

The xvYCC support thing is something else; these problems were from way before then.


Quote:
Originally Posted by madshi View Post
With very old GPUs, probably. The problem is that newer GPUs get more and more shader power, while the rest of the GPU doesn't get faster as quickly. So for newer generation GPUs it's a better idea to use shader math instead of textures. That applies not only to mid-range and high-end GPUs, but also to low-end GPUs. The GPU generation makes all the difference here. I believe for madVR it is a good idea to concentrate on newer generation GPUs. I find it more important to make madVR run well on a new generation low-end GPU than on an old generation mid-range GPU. Hence my decision to use more shader math and fewer textures. And this decision should pay off more and more in the future.
I hope so, since my GT440 Fermi isn't that old, but it has rather weak shader performance and rather strong texture performance. Overall I think that texture-heavy would still be a net positive over shader-heavy on NVIDIA GPUs, since starting with Fermi they went overkill on texture/tessellation performance. AMD/ATI GPUs in comparison have always had somewhat overkill shader performance and weaker texture performance.


Quote:
Originally Posted by madshi View Post
Wasn't our big fight back then about me wanting to get rid of levels from the interface, while you wanted it included? At least that's what I remember, but I might be wrong...
No, more specifically you were against adding level range to the YCbCr Matrix script tags at all. You insisted that if xy-VSFilter was going to support level ranges in YCbCr Matrix, then level support needed to be added to the subtitle interface as well, which you were unhappy about. My stance was that level range was useless to the subtitle interface and could be ignored, but you wanted to mimic issues which could result from incorrect level tagging in a script. I was insistent on enabling PC range support in xy-VSFilter via script tagging, so as a result, when those level tags got added to YCbCr Matrix, you added them to the subtitle interface as well. You were left in charge of documenting use of the level tags in the subtitle interface, and that was that.

Quote:
Originally Posted by madshi View Post
From where I stand right now, I'm not sure what madVR should do if XySubFilter reports TV vs PC. And I'm also not sure what XySubFilter should do if madVR reports TV vs PC. I don't think we should leave the interface like this. We at least need to clarify the intended behaviour. And while we're at it, we could consider adding support for 16-256 subtitles to avoid performance problems in the video renderer?
Subtitle providers always output 0-255 RGB; any weird special handling with those tags is entirely in the ballpark of the subtitle consumer. Currently, the reason XySubFilter's internal subtitle correction defaults to performing TV.601 on TV.709 video only, and reporting the actual yuvMatrix for everything else, is because we thought you wanted to deal with cases where there were level mismatches. As mentioned above, I never wanted level tags added to the interface in the first place, and would rather they be removed if you no longer desire to use them in your subtitle consumer. Otherwise, if you know what you want to use them for, I'll help you define them.

As far as actual changes, at the moment I'd rather just focus on getting the first XySubFilter beta released before we consider changing anything.

Quote:
Originally Posted by madshi View Post
Haha, this is kind of funny. I've just double checked and it was the other way round: You were insistent, and I was doubting that the added complexity of adding TV/PC flagging to the interface was worth it. See comment #217 and following:

https://code.google.com/p/xy-vsfilter/issues/detail?id=91#c224
Nothing funny about it, you are confused. I wanted levels support added to YCbCr Matrix for xy-VSFilter (VSFilter.dll) only. As far as I can tell, the comment you link to was me basically stating that using the level tags in the subtitle interface was pointless. You wanted to copy xy-VSFilter output exactly when it used those tags, even when what xy-VSFilter was doing was incorrect, and this is the sole reason TV & PC exist in the subtitle interface today rather than just being ignored. You wouldn't accept xy-VSFilter having a different result than madVR's consumer. In your mind it was an all-or-nothing affair. If I wanted support in xy-VSFilter, you were going to reluctantly support it as well in madVR. If you were not going to support it in madVR, you wouldn't allow me to add support to xy-VSFilter only. I was stubborn about adding levels support for xy-VSFilter, so we ended up with both... I still believe the YCbCr level tags in the subtitle interface are useless, though, but there they are because you gave me no other choice.

Last edited by cyberbeing; 8th June 2013 at 12:57.
Old 8th June 2013, 12:55   #19109  |  Link
zoyd
Registered User
 
Join Date: Sep 2009
Posts: 43

Quote:
Originally Posted by cyberbeing View Post
The eeColor box probably doesn't have the issue because it clips BTB/WTW, while you maintain it. Honestly, I'm not sure why in madVR an Argyll 3DLUT outputting an elevated 16-235 as 20-235 results in madVR displaying BTB 10-15 after expansion to PC range (random numbers used here for example).
The eeColor box does not clip BTB/WTW YCbCr input. It does no expansion whatsoever, video level out = LUT(video level in).
Old 8th June 2013, 13:05   #19110  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by zoyd View Post
The eeColor box does not clip BTB/WTW YCbCr input. It does no expansion whatsoever, video level out = LUT(video level in).
If the eeColor box doesn't do levels expansion or clip BTB/WTW, then I guess the difference is that madVR doesn't clip BTB before performing TV -> PC range output, or something. Most TVs clip BTB by default when converting TV range YCbCr input to PC range. Otherwise, only madshi would know why a 3DLUT with 16-235 input/output was making BTB in the 0-15 range visible. It still doesn't make much sense to me.

Last edited by cyberbeing; 8th June 2013 at 13:10.
Old 8th June 2013, 13:14   #19111  |  Link
zoyd
Registered User
 
Join Date: Sep 2009
Posts: 43
Quote:
Originally Posted by cyberbeing View Post
If the eeColor box doesn't do levels expansions, then I guess that the difference is that madVR doesn't clip BTB before performing TV -> PC range output or something. Most TVs clip BTB by default when performing TV range YCbCr input -> PC range. Otherwise, only madshi would know why a 3DLUT with 16-235 input/output was making BTB visible. It still doesn't make much sense to me.
I think it's a black point issue. An ArgyllCMS 3dLUT encoded with TV in/out levels still contains 0-255 levels; it just maps the data flow with the correct 16-235 precision. If level 16 in gets elevated to, say, 16.5, then there will be information at level 15, as the LUT interpolation has to be smooth.
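That interpolation effect can be sketched with a toy linearly interpolated LUT (hypothetical node spacing of 16; real ArgyllCMS cLUT grids differ):

```python
def lerp(x, x0, y0, x1, y1):
    # Linear interpolation between two LUT nodes.
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Suppose the calibration elevates video black: the node at input 16 maps
# to 16.5, while the node at input 0 still maps to 0.  Smooth interpolation
# then necessarily places non-zero values throughout the BTB range 1-15:
lut_at_15 = lerp(15, 0, 0.0, 16, 16.5)
assert abs(lut_at_15 - 15.46875) < 1e-9   # 15 * 16.5 / 16

# So BTB input levels come out elevated even though the LUT was only
# "aimed" at 16-235 -- unless the renderer clips them first.
```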
Old 8th June 2013, 13:26   #19112  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by zoyd View Post
I think it's a black point issue. ArgyllCMS 3dLUT encoded with TV in/out levels still contains 0-255 levels, it just maps the data flow with the correct 16-235 precision. If level 16 in gets elevated to say 16.5, then there will be information at level 15 as the LUT interpolation has to be smooth.
Well I guess that would explain why Graeme said madVR would need to clip this BTB information, and there was nothing he could do about it.
Old 8th June 2013, 13:28   #19113  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by cyberbeing View Post
Subtitle providers always output 0-255 RGB, any weird special handing with those tags is entirely in the ballpark of the subtitle consumer.
Yes. But you're not happy with the performance hit madVR takes for this. Which is why I suggested reconsidering the whole levels stuff in the subtitle interface, because that would be the cleanest way to avoid the performance hit.

Quote:
Originally Posted by cyberbeing View Post
As far as actual changes, at the moment I'd rather just focus on getting the first XySubFilter beta released before we consider changing anything.
Ok, that's fine with me.

Quote:
Originally Posted by cyberbeing View Post
Nothing funny about it, you are confused.
No, I'm not, we were just talking past each other (once again).

Quote:
Originally Posted by zoyd View Post
I think it's a black point issue. ArgyllCMS 3dLUT encoded with TV in/out levels still contains 0-255 levels, it just maps the data flow with the correct 16-235 precision. If level 16 in gets elevated to say 16.5, then there will be information at level 15 as the LUT interpolation has to be smooth.
Yeah, maybe it's something like that. I have a Display 3 on order now. Once it's here, I'll start playing with ArgyllCMS + madVR. The latest ArgyllCMS build now supports an "-r256" switch which will overwrite the ICC cLUT resolution to 256^3, identical to the madVR 3dlut size. If the raised BTB issue is caused by LUT interpolation problems, using "-r256" should fix it because this way the ICC cLUT will have identical dimensions and black/white points as the final madVR 3dlut.
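The grid-alignment reasoning here can be sketched numerically. This is a simplified 1-D view (an assumption for illustration; the real 3dlut is 3-D, but the alignment argument is the same per axis): with 256 nodes spanning 0-255, every 8-bit level lands exactly on a node, so no interpolation occurs, whereas with 255 nodes almost every level falls between nodes.

```python
# 1-D sketch of why a 256-point grid avoids interpolation for 8-bit
# video levels (the real LUT is 3-D, but alignment works per axis).

def grid_hits(res, levels=256):
    """Count how many 8-bit levels fall exactly on a node of a
    res-point grid spanning 0..255."""
    step = 255.0 / (res - 1)
    return sum(1 for v in range(levels)
               if abs(v / step - round(v / step)) < 1e-6)

print(grid_hits(256))  # every level is a grid node -> 256
print(grid_hits(255))  # only levels 0 and 255 align; the rest interpolate
```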
madshi is offline   Reply With Quote
Old 8th June 2013, 13:46   #19114  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
Yeah, maybe it's something like that. I have a Display 3 on order now. Once it's here, I'll start playing with ArgyllCMS + madVR. The latest ArgyllCMS build now supports an "-r256" switch which will overwrite the ICC cLUT resolution to 256^3, identical to the madVR 3dlut size. If the raised BTB issue is caused by LUT interpolation problems, using "-r256" should fix it because this way the ICC cLUT will have identical dimensions and black/white points as the final madVR 3dlut.
Thanks for the heads-up about the new build features, as I didn't see any mention of -r255 being fixed to -r256. Though the main problem with a resolution that high is that it takes forever to generate with -G. Like 2% progress per hour or something.

[Edit: The 6-7-2013 collink build still only supports -r255 , not -r256, did he provide you with a special build?]

Last edited by cyberbeing; 8th June 2013 at 14:04.
cyberbeing is offline   Reply With Quote
Old 8th June 2013, 13:54   #19115  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Madshi, the black frame flash bug with Smooth Motion enabled is still there with 0.86.3
ryrynz is offline   Reply With Quote
Old 8th June 2013, 14:04   #19116  |  Link
flashmozzg
Registered User
 
Join Date: May 2013
Posts: 77
I have a problem with at least all madVR versions above 0.84. If I update to a newer one, this happens.
Here is Ctrl+J:
http://img15.imageshack.us/img15/2088/67045988.png

If I downgrade to my previous version (0.84.7), everything goes back to normal.
Is there any way to fix it?
flashmozzg is offline   Reply With Quote
Old 8th June 2013, 14:16   #19117  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by cyberbeing View Post
Thanks for the head-up about the new build features, as I didn't see it mentioned anywhere about fixing -r255 to be -r256. Though main problem with using high resolution like that is they take forever to generate with -G. Like 2% progress per hour or something.

[Edit: The 6-7-2013 collink build still only supports -r255 , not -r256, did he provide you with a special build?]
No, it's the normal download link for the 3dlut collink. He did update it to support -r256, though. He did not make -r255 behave like -r256, by the way; those are two different options. Maybe he just hasn't updated the documentation yet.

Yes, creating a 3dlut like that will take a loooooong time. You could try whether -r128 or -r64 makes an acceptable compromise while still fixing the BTB elevation problem. I've not yet had a chance to play with all this myself, so I'm only throwing wild guesses around here...
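For a rough sense of scale (a back-of-the-envelope sketch, not a benchmark of collink itself): the -G gamut-mapping work grows with the number of 3-D grid points, which is cubic in the -r resolution.

```python
# Rough sketch: the number of 3-D grid points -G has to process grows
# cubically with the -r resolution, which is why -r256 is so slow.
for r in (64, 128, 256):
    print(f"-r{r}: {r**3:,} grid points")
```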

Quote:
Originally Posted by ryrynz View Post
Madshi, the black frame flash bug with Smooth Motion enabled is still there with 0.86.3
Oh noooooooo... How often does it occur? Can you create a log where you abort playback directly after the problem occurred? Not sure if that's doable if the problem only occurs every couple of hours...

Quote:
Originally Posted by flashmozzg View Post
I have a problem with at least all madVR versions above 0.84. If I update to a newer one, this happens [...]
If I downgrade to my previous version (84.7) it all comes back to normal.
Does this happen with all videos? Or only with some? Maybe only with one specific codec (e.g. h264 or MPEG2)? Which decoder are you using? Can you try using a different decoder?
madshi is offline   Reply With Quote
Old 8th June 2013, 14:18   #19118  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Quote:
Originally Posted by madshi View Post
Oh noooooooo... How often does it occur? Can you create a log where you abort playback directly after the problem occured? Not sure if that's doable, if the problem only occurs every couple of hours...
Yeah, for me it's every couple of hours. I watched for about an hour tonight and it happened after about 20 minutes, whereas the night before, with the same amount of viewing, there was no black screen.
ryrynz is offline   Reply With Quote
Old 8th June 2013, 14:38   #19119  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
No, it's the normal download link for the 3dlut collink. He did update it to support -r256, though. He did not make -r255 behave like -r256, btw, those are two different options. Maybe he didn't update the documentation.
Don't know, because I just downloaded it again but it still rejects -r256 here.
Code:
collink -v -3m -et -Et -IB -r256 -G -ila Rec709.icm TV.icm HD.icm
Diagnostic: Resolution flag (-r) argument out of range (256)
cyberbeing is offline   Reply With Quote
Old 8th June 2013, 14:46   #19120  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
This is a direct quote from his email:

Quote:
So this <http://www.argyllcms.com/Win32_collink_3dlut.zip> now supports -r256.
madshi is offline   Reply With Quote