Old 4th August 2022, 14:24   #1381
DTL
"I can get white NaN pixels (from the -0.1 values)."

If you think there is some bug around zero or with going negative: float allows you to add some 'big' positive offset like 10.0 to the 'nominal' 0.0..1.0 range before the operation and subtract it afterwards. That should completely avoid negative values and zero crossings in the resampling engine.
This also works without visible distortions:
Code:
Blankclip(width=256, height=256, pixel_type="YUV420PS")  # 32-bit float YUV test clip
Expr("sx 128 > range_max 0 ?","")                        # hard vertical edge: white for x > 128, black elsewhere (chroma copied)
ColorYUV(off_y=50000)                                    # push luma far above zero before resampling
bicubicresize(2048,2048,0,0.75)                          # sharp bicubic (b=0, c=0.75) that would normally over/undershoot at the edge
ColorYUV(off_y=-50000)                                   # remove the offset again
ColorYUV(gain_y=-100, off_y=50)                          # reduce gain and lift the level to make the edge region easy to inspect
ConvertBits(8)                                           # back to 8 bit for viewing
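For the literal float-offset variant described above (add the offset to the pixel values themselves, resize, then subtract it), an untested minimal sketch:
Code:
ConvertBits(32)                      # make sure samples are 32-bit float
Expr("x 10.0 +", "")                 # push luma to roughly 10.0..11.0, well away from zero ("" copies chroma)
BicubicResize(2048, 2048, 0, 0.75)
Expr("x 10.0 -", "")                 # restore the nominal 0.0..1.0 range after resampling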
Old 4th August 2022, 16:40   #1382
Dogway
Quote:
Originally Posted by kedautinh12 View Post
Hi Dogway, your deep_resize has a problem with 1440i; I tried it but got an error with both gpuid=0 and -1.

My script:
AnimeIVTC(mode=1)
deep_resize(1920,1080)

I got the error "NNEDI3CL: field must be 0 or 1 when dw=True"

and with this script:
AnimeIVTC(mode=1)
deep_resize(1920,1080, gpuid=-1)

the video doesn't match the audio.

my example: https://mega.nz/file/eHQSWJxB#eSXcMy...Ndg9B2t4mxACPw
Interlaced isn't supported in deep_resize, only progressive; also try checking the frame properties.

Thanks DTL, yes, I was trying to avoid adding additional calls like preshaping or clamping. It's for the scaling spaces in ConvertFormat (gamma, linear, sigmoid, log).
Old 4th August 2022, 18:24   #1383
kedautinh12
Quote:
Originally Posted by Dogway View Post
Interlaced isn't supported in deep_resize, only progressive; also try checking the frame properties.

Thanks DTL, yes, I was trying to avoid adding additional calls like preshaping or clamping. It's for the scaling spaces in ConvertFormat (gamma, linear, sigmoid, log).
But I used AnimeIVTC to deinterlace.

Old 5th August 2022, 01:41   #1384
kedautinh12
I have used deep_resize on other interlaced sources after deinterlacing with AnimeIVTC and got no error.
Old 5th August 2022, 02:42   #1385
Dogway
@kedautinh12, can you check frame properties after AnimeIVTC? I don't have the script and its dependencies installed so I can't check but for me your clip works by deinterlacing with ex_bob(nnedi3=true).
Try adding propSet("_FieldBased",0) before deep_resize()
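Applied to your script, that would be something like this (untested):
Code:
AnimeIVTC(mode=1)
propSet("_FieldBased", 0)   # mark the output as progressive so deep_resize/NNEDI3CL won't treat it as interlaced
deep_resize(1920,1080)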

Quote:
Originally Posted by DTL View Post
It looks like using or not using the OOTF part may also visibly affect other processing that tries to work in 'linear', like scaling -
https://forum.doom9.org/showthread.p...95#post1970995 . Maybe only the jpsdr plugin can switch the OOTF part on/off to see the difference, and many other plugins for converting to 'linear' cannot.
OOTF looks to be as simple as o_transfer/i_transfer. In any case I still don't support HDR (HLG or PQ) in ConvertFormat, so I can't help with your issue.


EDIT: hello_hello, you can now download the updated TransformsPack here. Default OETF and EOTF have been changed for consumer delivery spaces. I might need to add more in the future.

I also created a new function OOTF() in TransformsPack - Transfers. It takes a single remap gamma value for the OOTF, but you can also specify OETF and EOTF gamma values as floats, or their transfer names as strings. For those transfers with piece-wise functions (a linear part towards zero), a minimization algorithm was implemented to find the power-law match; for example, sRGB minimizes to 2.223. This is faster than linearizing and then gamma-encoding back.
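For reference, in the sRGB case the pieces being matched are the standard piece-wise EOTF and a single power law:

L = V / 12.92                    for V <= 0.04045
L = ((V + 0.055) / 1.055)^2.4    for V > 0.04045
L ≈ V^2.223                      (the minimized single-gamma match mentioned above)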

On another note I'm finally updating the SMDegrain documentation, half done already.
Old 5th August 2022, 16:42   #1386
kedautinh12
Quote:
Originally Posted by Dogway View Post
@kedautinh12, can you check frame properties after AnimeIVTC? I don't have the script and its dependencies installed so I can't check but for me your clip works by deinterlacing with ex_bob(nnedi3=true).
Try adding propSet("_FieldBased",0) before deep_resize()
Oh, it works with propSet("_FieldBased",0) before deep_resize(). Thanks!
Old 5th August 2022, 18:43   #1387
hello_hello
Well... I've tried, but I just don't understand the world anymore.....

720p Rec.709 YV12 source.
No Avisynth+ frame properties.
Transforms Pack 1.0 RC55

ConvertBits(16)
ConvertToDoubleWidth()
Matrix(From=709, To=601, Bitdepth=16)
ConvertFromDoubleWidth()
Spline36Resize(640,360)
https://i.ibb.co/zx4gq2g/1.jpg

ConvertToYUV444()
ConvertBits(16)
ConvertYUVtoLinearRGB(Color=2)
ConvertLinearRGBtoYUV(Color=4)
ConvertBits(8)
ConvertToYV12()
Spline36Resize(640,360)
https://i.ibb.co/dt886c9/2.jpg

fmtc_bitdepth(bits=16)
fmtc_resample(css="444")
fmtc_matrix(mat="709", bits=16)
fmtc_transfer(transs="709", transd="linear")
fmtc_primaries(prims="709", primd="601-625")
fmtc_transfer(transs="linear", transd="601")
fmtc_matrix(mat="601", bits=16)
fmtc_resample(640,360,css="420")
fmtc_bitdepth(bits=8)
https://i.ibb.co/191WJpG/3.jpg

ConvertFormat(640,360)
https://i.ibb.co/VHKC1Dc/4.jpg

ConvertFormat(640,360, OETF="170m", EOTF="170m")
https://i.ibb.co/3dGfynG/5.jpg

ConvertFormat(640,360, EOTF="170m")
https://i.ibb.co/CBN1ZLY/6.jpg

Bugs be here?
Display_Referred("1886")
https://i.ibb.co/PTPxJzd/7.jpg

Display_Referred()
ConvertFormat(640,360)
https://i.ibb.co/FgXV3j1/8.jpg
Old 5th August 2022, 19:39   #1388
Dogway
There's a regression from when I repurposed Matrix_coef() to retrieve primaries for XYZ_to_RGB() a few weeks ago, so it's reading 470M instead of 170M; hold on a bit. I've been thinking for some time about a way to restructure the matrices vertically, since I'm going to add many more, but I still haven't come up with a solution...

Display_Referred() should be working. You don't specify the transfer but the color space; if your source is "170M" and your display is calibrated to Rec.709 with the 1886 transfer, set:
Code:
Display_Referred ("170M","709") # 709 or sRGB whatever your display is calibrated against
Ideally you should set a LUT profile in the "profile" argument.


EDIT: @hello_hello, updated TransformsPack to RC56 with the above fix. I can't follow your first two examples but for fmtconv you might want to use "601-525" which corresponds to "170M". And as I commented a few posts back you want to use 1886 transfer for going to and from linear since all/most media is delivered in this ODT (display referred).
Old 5th August 2022, 20:22   #1389
ENunn
Sorry for the late response.

Quote:
Originally Posted by Dogway View Post
I borked it (QTGMC), fixed now hopefully (along ResizersPack). In any case previous version (3.85) should be working.

My guess is that something is not up to date...
Everything was up to date as of my last post. I just downloaded the pack again just to be safe and the issue still happens.

Quote:
Originally Posted by Dogway View Post
As per your comments everything runs fine until the SMDegrain call?
Yes, and if I move the converttoyuv422/convertbits/tweak/convertbits line after SMDegrain, the issue doesn't happen.

Quote:
Originally Posted by Dogway View Post
Are you on x86 or x64?
x64

Quote:
Originally Posted by Dogway View Post
Another thing, try with str=1 or str=0, str=0.8 invokes ex_retinex() which is good but that might be causing you some issue.
Looks like that fixed it.
Old 5th August 2022, 20:51   #1390
Dogway
@ENunn, that's very strange, especially after the last fix for ex_retinex(); simply try directly:
Code:
ex_retinex(lo=80, tv_in=true, tv_out=false)
And just in case, check that you are on the latest vsTCanny plugin version.
Old 6th August 2022, 18:39   #1391
ENunn
Quote:
Originally Posted by Dogway View Post
@ENunn, that's very strange, especially after the last fix for ex_retinex(); simply try directly:
Code:
ex_retinex(lo=80, tv_in=true, tv_out=false)
And just in case, check that you are on the latest vsTCanny plugin version.
I tried it before and after SMDegrain, and without SMDegrain, and the issue is still there; plus it makes the image much brighter as well.

I updated vsTCanny and it's still there.

Old 6th August 2022, 22:49   #1392
hello_hello
Dogway, is there an RC56 version of TransformsPack - Models? It's just that I'm only getting "no function named Matrix_coef" errors with ConvertFormat since updating the other two scripts.

Quote:
Originally Posted by Dogway View Post
I can't follow your first two examples but for fmtconv you might want to use "601-525" which corresponds to "170M".
1. Matrix-only conversion with HDRMatrix.
2. 709 -> 601-625 conversion with HDRTools.

What's the logic behind always converting to NTSC primaries?
Earlier I was pondering whether "170M" should still be used to describe a transfer function, which led me to discover... unless the list is incomplete or I'm blind... that it no longer is for h265. http://www.vapoursynth.com/doc/funct...eo/resize.html
And as rec.601 is the matrix for both digital "PAL" & "NTSC", the only reason I can think of for using 170M or 470bg is to differentiate between them.
601_525, 601_625 & 601 cover all the bases, although for CropResize I went with 601N, 601P and 601 even though the NTSC and PAL implication isn't accurate.

For a while I was fairly anal about converting the primaries to 601, and I generally chose 601_625 because they're closer to 709, but if HD displays and players don't concern themselves with the primaries and simply assume 709, which I suspect they mostly do, a matrix-only conversion might be better.

Quote:
Originally Posted by Dogway View Post
And as I commented a few posts back you want to use 1886 transfer for going to and from linear since all/most media is delivered in this ODT (display referred).
I'll take your word for that, but I still can't get my head around it as everything I've read indicates it's only a display transfer function, and because 1886 is designed to factor in the black (and white?) levels of a calibrated display, and because the display used for mastering is an unknown, and because the definition of "deliver" might be one that doesn't imply it's not converted to 709 (scene) before it reaches my TV, and because to me it defeats the purpose of 1886, but what do I know.....

Old 7th August 2022, 10:40   #1393
Dogway
TransformsPack - Models was updated here. I simply forgot to update the version number.
Quote:
Originally Posted by hello_hello View Post
What's the logic behind always converting to NTSC primaries?
You mean NTSC matrix coefficients? Both PAL and NTSC inherited the matrix coefficients of 470M (FCC) for historical reasons. Read here.
Quote:
Earlier I was pondering whether "170M" should still be used to describe a transfer function, which led me to discover... unless the list is incomplete or I'm blind... that it no longer is for h265. http://www.vapoursynth.com/doc/funct...eo/resize.html
And as rec.601 is the matrix for both digital "PAL" & "NTSC", the only reason I can think of for using 170M or 470bg is to differentiate between them.
The H.265 ITU paper calls the transfer functions by their related matrix, though they are inconsistent: they call the 2020 transfer ST.2084, which is the (SMPTE) paper that defines the 2020 space characteristics. Yes, a mess; they mix ITU, IEC, FCC, and SMPTE naming standards. I say stick to one or two; you can read some of my rant here. To start with, the first time (or one of the first times) the 170M transfer was defined was in SMPTE ST 170 in 1994. Your linked table uses this transfer for 709, 601, 2020_10 and 2020_12 and states they are "functionally the same"; yes, another mess. "Functionally the same" reads as different but yielding the same result, which is not true: they employ not a "functionally" equivalent but the exact same transfer, so I gave it a single name.
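For reference, the shared camera OETF those documents all point to (709/601/170M, with the 2020 12-bit variant only using more precise constants) is:

V = 1.099 * L^0.45 - 0.099    for 0.018 <= L <= 1
V = 4.5 * L                   for 0 <= L < 0.018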
Now, why did I keep the 470BG transfer (2.8 gamma)? Because in TransformsPack I allow you to do whatever you want. Limited TV range for float? Also OK. AP0 primaries? Also OK (not listed in ITU).
Another aspect is the matrix coefficient table: so RGB "is" a matrix? ICtCp is a matrix? YCoCg? Another mess. They are color models, and the said "matrix" coefficients refer only to the YCbCr model's matrix coefficients, not to ICtCp, not to YcCbcCrc or any other. No wonder you are confused; not even the standards get it right. I do my best to add comments in my scripts to give some background, but I don't adhere strictly to ITU or any other standard.
Quote:
For a while I was fairly anal about converting the primaries to 601, and I generally chose 601_625 because they're closer to 709, but if HD displays and players don't concern themselves with the primaries and simply assume 709, which I suspect they mostly do, a matrix-only conversion might be better.
I want to think they do convert from 170M for SD, for the same reason they still keep deinterlacing capabilities. I don't know what they assume for 640x360, though; maybe you should scale based on height instead (854x480 or 1024x576).
Quote:
I'll take your word for that, but I still can't get my head around it as everything I've read indicates it's only a display transfer function, and because 1886 is designed to factor in the black (and white?) levels of a calibrated display, and because the display used for mastering is an unknown, and because the definition of "deliver" might be one that doesn't imply it's not converted to 709 (scene) before it reaches my TV, and because to me it defeats the purpose of 1886, but what do I know.....
You can also take cretindesalpes' word for that, as I found an example yesterday in the docs.
The display used for mastering DVD/BD consumer media is calibrated to 1886; they make different (grade) trims depending on the delivery medium, and it's always display referred (for the last 20 years at least). The workflow typically involves an IDT (OETF) -> Log/Linear (trim pass under EOTF proof) -> ODT (EOTF). So they work in log or linear space under an EOTF (1886) viewing transform.
To be more specific, the input device might be calibrated, so the IDT could be transformed from a custom (unknown) profile, and similarly the EOTF viewing transform is also a mastering monitor profile characterized against 1886. When encoding the EOTF into the image you instead don't use your display profile (with custom 1886 settings) but a generic one that assumes a perfect imaginary display with 0 black, so in essence a power-law gamma of 2.4 and a generic color space with known primaries, like 709. This makes it possible to "undo" the transfer and primaries and convert to your display profile in case it's calibrated, or play very close to intended in case it's not.
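For reference, the BT.1886 EOTF being assumed here is:

L = a * max(V + b, 0)^2.4
a = (Lw^(1/2.4) - Lb^(1/2.4))^2.4
b = Lb^(1/2.4) / (Lw^(1/2.4) - Lb^(1/2.4))

where Lw and Lb are the display's white and black luminance. With Lb = 0 this collapses to b = 0 and L = Lw * V^2.4, i.e. the pure 2.4 power law of the "perfect" zero-black display mentioned above.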

--------------------------------------

@ENunn, what do you mean by "the issue is still there, plus it makes the image much brighter as well"? Either the issue is present, or it's making the image brighter, which means it's working fine (that's retinex). If it shows your image black, or as in your example above, then there's an issue. But I'm not experiencing it with the FFMS2 loader.
Old 8th August 2022, 00:16   #1394
hello_hello
Quote:
Originally Posted by Dogway View Post
TransformsPack - Models was updated here. I simply forgot to update the version number.
I didn't update it because the version number was the same. Obviously I should have.
It's working now so I'll have a play with it again soon.

Quote:
Originally Posted by Dogway View Post
You mean NTSC matrix coefficients? Both PAL and NTSC inherited the matrix coefficients of 470M (FCC) for historical reasons. Read here.
No I was referring to the primaries, as you said I should use 601_525 with fmtconv rather than 601_625. It's the fmtconv method for choosing the SD primaries.

Quote:
Originally Posted by Dogway View Post
The H.265 ITU paper calls the transfer functions by their related matrix, though they are inconsistent: they call the 2020 transfer ST.2084, which is the (SMPTE) paper that defines the 2020 space characteristics. Yes, a mess; they mix ITU, IEC, FCC, and SMPTE naming standards. I say stick to one or two; you can read some of my rant here. To start with, the first time (or one of the first times) the 170M transfer was defined was in SMPTE ST 170 in 1994. Your linked table uses this transfer for 709, 601, 2020_10 and 2020_12 and states they are "functionally the same"; yes, another mess. "Functionally the same" reads as different but yielding the same result, which is not true: they employ not a "functionally" equivalent but the exact same transfer, so I gave it a single name.

Now, why did I keep the 470BG transfer (2.8 gamma)? Because in TransformsPack I allow you to do whatever you want. Limited TV range for float? Also OK. AP0 primaries? Also OK (not listed in ITU).
I think it's fine to keep the 470bg transfer option, although given it was an analog-only thing it possibly adds to the confusion.
That's why I think replacing 170m with 601 (in terms of naming) as the transfer curve and matrix for digital is a good idea: if you read the Wikipedia page, it was intended for both PAL and NTSC and was later updated to include both the PAL and NTSC primaries.
Some plugins (colormatrix) use 601 as the SD matrix, while some (avsresize) use it for the transfer curve but not for the matrix.
Of course because the primaries are different for PAL and NTSC you do need a way to differentiate between them.

Quote:
Originally Posted by Dogway View Post
Another aspect is the matrix coefficient table: so RGB "is" a matrix? ICtCp is a matrix? YCoCg? Another mess. They are color models, and the said "matrix" coefficients refer only to the YCbCr model's matrix coefficients, not to ICtCp, not to YcCbcCrc or any other. No wonder you are confused; not even the standards get it right. I do my best to add comments in my scripts to give some background, but I don't adhere strictly to ITU or any other standard.
For RGB to have a matrix does seem meaningless, but from our perspective it'd be useful if you could use it to correct RGB colorimetry without having to convert to a different colorspace and back. For example, if YUV was converted to RGB using rec.709 instead of rec.601, it'd be nice to be able to correct it this way (using an avsresize type example)
709:srgb:170m=>601:srgb:170m
but I don't think any plugins will do that sort of thing for RGB without a format conversion in between. Maybe there's a reason for that I'm unaware of.
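For what it's worth, the algebra behind that idea (ignoring the limited-range offsets and any clipping in the original decode) would just be one 3x3 matrix applied directly to the gamma-encoded RGB:

RGB_wrong = M_709^-1 * YCbCr
RGB_right = M_601^-1 * YCbCr = (M_601^-1 * M_709) * RGB_wrong

so in principle a plugin could bake M_601^-1 * M_709 into a single RGB-to-RGB matrix without an intermediate format change.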

Quote:
Originally Posted by Dogway View Post
I want to think they do convert from 170M for SD, for the same reason they still keep deinterlacing capabilities. I don't know what they assume for 640x360, though; maybe you should scale based on height instead (854x480 or 1024x576).
640x360 was just a 16:9 resolution I chose because I'd originally planned to add the pictures to my post, but when there ended up being several of them I posted links instead.

Quote:
Originally Posted by Dogway View Post
You can also take cretindesalpes' word for that, as I found an example yesterday in the docs.
The display used for mastering DVD/BD consumer media is calibrated to 1886; they make different (grade) trims depending on the delivery medium, and it's always display referred (for the last 20 years at least). The workflow typically involves an IDT (OETF) -> Log/Linear (trim pass under EOTF proof) -> ODT (EOTF). So they work in log or linear space under an EOTF (1886) viewing transform.
To be more specific, the input device might be calibrated, so the IDT could be transformed from a custom (unknown) profile, and similarly the EOTF viewing transform is also a mastering monitor profile characterized against 1886. When encoding the EOTF into the image you instead don't use your display profile (with custom 1886 settings) but a generic one that assumes a perfect imaginary display with 0 black, so in essence a power-law gamma of 2.4 and a generic color space with known primaries, like 709. This makes it possible to "undo" the transfer and primaries and convert to your display profile in case it's calibrated, or play very close to intended in case it's not.
I understand what you're saying, but it seems to cover what production companies can do with delivered content, not necessarily how it ends up being streamed to my TV. I was looking at the Netflix delivery specs yesterday, and for HD SDR it's:
10 bit
ITU-R BT.709 / D65 / ITU-R BT.1886
(Gamma 2.4)


So then I checked out the packaging requirements:
All deliveries to Netflix must be compliant to either SMPTE ST 2067-21:2016 or SMPTE ST 2067-21:2020 Interoperable Master Format (IMF) Application #2E.

Okay, so what does SMPTE ST 2067-21:2020 say?
COLOR.1
Mapped as specified for 625-line systems in Section 2.6 of Recommendation ITU-R BT.601-7.
COLOR.2
Mapped as specified for 525-line systems in Section 2.6 of Recommendation ITU-R BT.601-7.
COLOR.3
Mapped as specified in Section 1 of Recommendation ITU-R BT.709-6.

Note: In Recommendation ITU-R BT.601 and Recommendation ITU-R BT.709, the signals... ...correspond to gamma pre-corrected signals.


The above seems to serve as support for only using 601 too, but off to ITU-R BT.709-6.
https://www.itu.int/dms_pubrec/itu-r...6-I!!PDF-E.pdf
Overall opto-electronic transfer characteristics at source
It doesn't seem to copy and paste properly, but it's the standard 709 OETF with the following footnote:
(1) In typical production practice the encoding function of image sources is adjusted so that the final picture has the desired look, as viewed on a reference monitor having the reference decoding function of Recommendation ITU-R BT.1886, in the reference viewing environment defined in Recommendation ITU-R BT.2035.


Just kill me now. Could "the encoding function of image sources" mean "the encoding function of the source video when it's decoded"?
It won't install on this PC, but sometime soon I'm going to check out Davinci Resolve to see what it actually does. I've looked at a few instructional videos and what they show is what you've said, with the exception of rendering when choosing rec.709 as the output colorspace. There, the transfer curve used wasn't really clear.

Old 8th August 2022, 16:43   #1395
Dogway
Quote:
Originally Posted by hello_hello View Post
although given it was an analog-only thing it possibly adds to the confusion.
That's why I think replacing 170m with 601 (in terms of naming) as the transfer curve and matrix for digital is a good idea: if you read the Wikipedia page, it was intended for both PAL and NTSC and was later updated to include both the PAL and NTSC primaries.
You are not going to see the 470BG curve referenced anywhere today, since it's considered "deprecated" by current standards, so all the references you are going to see for the PAL transfer are to the original 170M curve. I even had a hard time finding its alpha and phi values.

I'm adding or planning to add all sorts of analog models and spaces, for example YIQ, YPbPr and so on. NTSC 1953 was also analog only and it's included everywhere (for some reason).

My design choice is similar to avisynth's "give people enough rope to hang themselves", replacing "hang" with the chance of being more capable or experimental, all while keeping a good user experience and defaults.

Quote:
it'd be useful if you could use it to correct RGB colorimetry without having to convert to a different colorspace and back
YCbCr is not a linear model, so you can't change color space without going to RGB and back.
Quote:
709:srgb:170m=>601:srgb:170m
That line doesn't make sense to me: change the matrix while keeping the primaries? An sRGB transfer for YUV?

Quote:
Could "the encoding function of image sources" mean "the encoding function of the source video when it's decoded"?
It's only a convoluted way of saying: grade your (decoded, as in inverse OETF) footage until it looks fine on your 1886-calibrated reference monitor.
Old 8th August 2022, 18:06   #1396
ENunn
Quote:
Originally Posted by Dogway View Post
@ENunn, what do you mean by "the issue is still there, plus it makes the image much brighter as well"? Either the issue is present, or it's making the image brighter, which means it's working fine (that's retinex). If it shows your image black, or as in your example above, then there's an issue. But I'm not experiencing it with the FFMS2 loader.
The issue is still present, as shown in my first screenshot.

I loaded it with ffms2, and the issue is still there.

EDIT: It has something to do with the tweak line. I removed the convertbits parts and the issue persists. I also tried another tweak plugin, SmoothTweak, and the issue happens there as well. Tweak before SMDegrain causes the issue, tweak after SMDegrain does not have any issues.

Old 9th August 2022, 07:41   #1397
hello_hello
Quote:
Originally Posted by Dogway View Post
You are not going to see the 470BG curve referenced anywhere today, since it's considered "deprecated" by current standards, so all the references you are going to see for the PAL transfer are to the original 170M curve. I even had a hard time finding its alpha and phi values.
All I was trying to say is that 470bg is analog, as I assume 170M was originally; then along came rec.601 for digital, which once again I assume has the same transfer function as 170M. Either way, rec.601 replaced both 170M and 470bg for digital video, so to me it makes sense for 601 to be used to describe the digital SD matrix and transfer function. Just as an example, that's what fmtconv does for the matrix: there is no 170M option. For the transfer it's the same, except 470bg is still an option if you want to use it. For the SD primaries there are several presets to choose from, but my preference is to stick with the 601 theme and use the 601_525 and 601_625 presets for digital "NTSC" and "PAL".

Lots of options is good, but for presets that'd be my preferred naming scheme.

Quote:
Originally Posted by Dogway View Post
YCbCr is not a linear model, so you can't change color space without going to RGB and back.

That line doesn't make sense to me: change the matrix while keeping the primaries? An sRGB transfer for YUV?
Isn't changing the matrix while staying in YUV what colormatrix does?
My example probably should have looked like this
709:srgb:709=>601:srgb:170m
but it was in reference to my fantasy for being able to correct the incorrect matrix used when converting YUV to RGB without changing color space. So it'd be RGB in and RGB out. For the above example you'd have RGB that was converted from YUV using the 709 matrix when it should've been 601, so in my fantasy it would correct that while staying in RGB.
Maybe it's something that can't actually be done, although I was under the impression that's what the following would do for HDRTools, but looking at it again now I'm not really sure what it does. I think it's just changing the primaries.
ConvertToRGB(matrix="rec709")
ConvertRGBtoXYZ(Color=2)
ConvertXYZtoRGB(Color=4)

Quote:
Originally Posted by Dogway View Post
It's only a convoluted way of saying: grade your (decoded, as in inverse OETF) footage until it looks fine on your 1886-calibrated reference monitor.
Well it does say "the encoding function of image sources is adjusted", and if it's referring to the source video it shouldn't apply if the source video was 1886. Anyway, I'm obviously just guessing because I don't actually know.

Old 9th August 2022, 10:11   #1398
Dogway
"601" is too ambiguous; for SD I follow the SMPTE nomenclature because they were the reference standard at the time and CCIR (now ITU) was behind. They simply knew what they were doing. All SD-related content from ITU is either badly copy/pasted from SMPTE standards or simply doesn't have its own paper for the recommendation and just refers to the SMPTE one.

The good news is that ConvertFormat supports aliases, so if you prefer "601" or "601_525" (to remove ambiguity) simply input that and it will retrieve the best match. And the property IDs are unified, so they will be the same for fmtconv, TransformsPack, avsresize, etc., though I expanded them.

Quote:
Isn't changing the matrix while staying in YUV what colormatrix does?
The use case for ColorMatrix is such a corner case scenario that I haven't even implemented it.
Read here.
It shouldn't be a thing really. My question is, how do you know it's using the wrong matrix but the correct primaries?

Quote:
Well it does say "the encoding function of image sources is adjusted"
You shouldn't read too much into it; they are full of inconsistencies, otherwise we wouldn't be going around in circles about the topic on Doom9. You don't "adjust" a transfer function; you choose it according to your delivery choice and mastering monitor.
Old 11th August 2022, 11:30   #1399
madey83
Hello,

Could you advise how to set up a denoiser like SMDegrain to clean this up?


EDIT:
On the screenshot there is less noise than in the video.

I tried these without any spectacular effect:

SMDegrain(tr=2, thSAD=500,thSADC=100, thSCD2=80, contrasharp=false, refinemotion=true, truemotion=false, search=4, subpixel=3, Chroma=true, plane=4, LFR=false, DCTFlicker=false)

### Dogway this is a good compromise between quality and speed
Code:
smdegrain(tr=2,mode="temporalsoften",blksize=32,thSAD=700,LFR=2,contrasharp=false,refinemotion=true)
smdegrain(tr=2,mode="MDegrain",blksize=32,thSAD=400,LFR=false,contrasharp=true,refinemotion=true)

EDIT:
I've just adjusted the line below accordingly in the latest version of SMDegrain:

(prefilter== 5) ? pref.ex_KNLMeansCL(a=3,s=3,d=3,h=9.0,wmode=2,chroma=chroma,gpuid=gpuid,LFR=0*(nw/1920.)).ex_boxblur(0.5, mode="weighted", UV=Chr)

and got some promising results, but the encoding speed dropped because CPU usage is only around 70%.

Old 11th August 2022, 12:16   #1400
Dogway
I don't think you need to run the double call; the source doesn't look that grainy to me ('300' style).

Simply use a good prefilter, maybe BM3D, and raise 'tr' a bit. I don't see a reason to tweak thSCD2, and everything else is already the default, so you can omit it.

KNLMeansCL runs on the GPU with OpenCL; you can also try DGDenoise, which runs on CUDA.
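So, as an untested starting point, that call boils down to something like:
Code:
SMDegrain(tr=3, thSAD=500)   # everything else left at its defaults; a stronger prefilter (e.g. BM3D) would go through the prefilter option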