30th August 2017, 15:46 | #1 | Link |
Registered User
Join Date: Sep 2007
Posts: 5,373
What is wrong with this YUV => RGB => YUV roundtrip?
Is 16 bits not enough? Or did I make a simple error in the diff test? (I made sure to change the MakeDiff inputs when doing other tests.) Or some other user error, I hope.
Let's say the YUV420P10 is the "source" to test against. The 8-bit RGB test image is here: https://www.mediafire.com/file/l57xq...stchart720.png or http://www99.zippyshare.com/v/IVqs58H0/file.html

1) If the "RGB intermediate" step uses RGBH or RGBS instead of RGB48, it works: no difference. This would suggest 16 bits is not enough. But supposedly 12 bits should be enough for a 10-bit starting point.

2) With no chroma subsampling it works (i.e. if you test YUV444P10 => RGB48 => YUV444P10). It even works with RGB30 instead of RGB48 (10-bit 444 YUV => 10-bit RGB => 10-bit 444 YUV), which is unexpected. Granted, these are very limited test values, but something smells fishy here; I would have expected some error at the same bit depth.

3) Up/down scaling chroma with point/nearest neighbor alone works as expected (i.e. YUV444P10 <=> YUV420P10 works with Point; other bit depths work too).

4) Since duplicating/discarding chroma samples and YUV444 <=> RGB each seem to work individually, I thought: how about splitting the process into separate steps instead of one command (i.e. chroma upsample to YUV444P10 => RGB48 => YUV444P10 => YUV420P10)? But it actually gives worse results, and various combinations of split steps give worse results.

5) fmtconv gives similar results (but slightly different diff values).

6) Is there any way to export valid RGBH or RGBS from a vpy via vspipe? The IM plugin doesn't seem to preserve it when exporting to EXR (it writes int), unless there was an issue with linearizing via transfer_in_s and transfer_s with srgb and linear.

Code:
clip = core.lsmas.LWLibavSource(r'PATH\testchart720.png')
clip = core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="709")
clip2 = core.resize.Point(clip, format=vs.RGB48, matrix_in_s="709", dither_type="none", range_in_s="full")
clip3 = core.resize.Point(clip2, format=vs.YUV420P10, matrix_s="709", dither_type="none", range_s="full")
clip3 = core.std.SetFrameProp(clip3, prop="_ColorRange", intval=1)  # Mark video as limited range.
d = core.std.MakeDiff(clip, clip3)
da = core.std.Levels(d, min_in=511, max_in=513, gamma=1, min_out=0, max_out=1023, planes=[0, 1, 2])
da.set_output()
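For reference, a rough numpy sketch of what the MakeDiff + Levels combination in the script above does (made-up sample values; this mirrors the gamma=1 Levels formula and ignores VapourSynth's internal clamping details):

```python
import numpy as np

# std.MakeDiff stores (a - b) offset to the neutral value, 512 for 10-bit
a = np.array([100, 512, 700])   # hypothetical 10-bit samples from clip
b = np.array([100, 513, 698])   # the same samples from clip3
diff = np.clip(a - b + 512, 0, 1023)

# the Levels call stretches [511, 513] to full range, so even a +/-1
# difference jumps to an easily visible value
amplified = np.clip((diff - 511) / (513 - 511) * 1023, 0, 1023).astype(int)

print(amplified.tolist())  # [511, 0, 1023]
```

So mid-gray in the amplified clip means "no difference", and anything bright or dark is at least a one-code-value error.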
30th August 2017, 16:19 | #2 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
I hacked vsrawsource for GrayS support like eons ago; you could load the binary with that in vs. I didn't add the extra support for RGBS, cuz I always do my thing on luminance Code:
To export the binary:

vspipe xxx.vpy Sequence.bin -p

To import the binary back in vs:

clp = core.raws.Source("Sequence.bin", width, height, src_fmt="GRAYS")

To load the binary as a numpy array for research and academic stuff:

import numpy

def LoadRawBinaryGrayscaleSequence(Address, Width, Height, Length, PixelType='float32'):
    Sequence = numpy.memmap(Address, dtype=PixelType, mode='r', shape=(Length, Height, Width))
    Sequence = numpy.array(Sequence, dtype='float64')
    return Sequence

Sequence = LoadRawBinaryGrayscaleSequence("Sequence.bin", Width, Height, Length)
30th August 2017, 16:24 | #4 | Link |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
thanks f2, but I meant for use in other programs. A common reason to go into RGB land is to complete operations in other programs, so a standardized format is the most likely candidate. EXR should be the ticket - it supports 32-bit float - but there is a problem with the IM exporter keeping it.
Did you look at the other problem? In theory, that roundtrip should be possible with only 12-bit RGB - so is it a problem with zimg, or some other issue (hopefully user error)?
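One way to picture points 3 and 4 from the first post: point/nearest resampling only replicates and decimates samples, so a 420 -> 444 -> 420 chroma trip is exactly invertible, while anything with a smoothing kernel mixes neighbors and is not. A toy numpy sketch (made-up values; a crude 2-tap average stands in for a real bicubic kernel):

```python
import numpy as np

chroma = np.array([[100, 900],
                   [300, 500]])                  # tiny made-up 10-bit chroma plane

# point/nearest: replicate each sample 2x2 up to "444", then decimate back
up = np.repeat(np.repeat(chroma, 2, axis=0), 2, axis=1)
down = up[::2, ::2]
print(np.array_equal(chroma, down))              # True: exactly invertible

# a smoothing upsample (crude 2-tap horizontal average standing in for a
# real bicubic kernel) followed by an averaging downsample is not invertible
smooth = (up + np.roll(up, 1, axis=1)) / 2.0
down2 = smooth.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(np.array_equal(chroma, np.rint(down2)))    # False: neighbors were mixed in
```

That mixing alone doesn't explain the full 420 roundtrip error, but it shows why splitting the chain into more filtered steps can only accumulate, never undo, that loss.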
30th August 2017, 16:41 | #7 | Link |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
Or is there one of those million "bumpy numpy" programs that can take that input and output a valid EXR?
Maybe openimageio, which has python support: https://github.com/OpenImageIO/oiio . I started playing with an oiiotool binary that someone else compiled for Windows, but I can't get it to accept piped input: http://www.nico-rehberg.de/tools.html
30th August 2017, 17:11 | #9 | Link |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
It would be nice if it were integrated into vapoursynth (hint hint )
But I think I found a temp workaround. You can export 16-bit int as a PNG with the IM exporter (or probably pipe to ffmpeg, but I didn't test that specifically), re-import with IM, then convert to RGBS and continue as before - and it works (at least on this limited test). Although the IM importer "says" it's RGBS, it's actually int; vspipe --info at that stage even says it's RGBS (but it's not), so that explicit conversion step is necessary, and then it works.
Some bizarre and unexpected observations though...
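To picture why that explicit RGBS conversion step matters (this is my reading of the mislabeling, not verified against the IM plugin): if the clip really carries integer-scaled samples under a float label, the missing step is just the normalization to [0.0, 1.0] that genuine RGBS implies. A loose numpy analogy with made-up values:

```python
import numpy as np

# hypothetical 16-bit samples as stored in the intermediate PNG
rgb48 = np.array([0, 32768, 65535], dtype=np.uint16)

# a genuine RGBS (32-bit float) clip holds values normalized to [0.0, 1.0]
rgbs = rgb48.astype(np.float32) / 65535.0

# a clip merely *labeled* float but still carrying integer-scaled values
# sits far outside that range, so downstream float math goes wrong
mislabeled = rgb48.astype(np.float32)

print(rgbs.round(3).tolist())   # [0.0, 0.5, 1.0]
print(mislabeled.max())         # 65535.0
```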
30th August 2017, 17:19 | #10 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
theoretically you should be able to map a GrayS or RGBS VapourSynth video clip to a 4D numpy tensor (an array of (FrameCount, Height, Width, ChannelCount)) without going through vspipe, Myrsloik said it was possible, and I'm not sure how... then you use openexrpython to make ur numpy tensor an EXR sequence
30th August 2017, 17:28 | #11 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
Quote:
30th August 2017, 17:36 | #12 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
u could ask Myrsloik to do something for conversions between video clips and numpy arrays maybe, I did it a while ago and he didn't seem to be very interested, maybe he will be more motivated if there're 2 requesting the same thing
30th August 2017, 17:58 | #13 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
Quote:
So I'm like adding a 0.1 voice. So that's 1.1 people requesting
30th August 2017, 18:12 | #14 | Link |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
But if you can demonstrate how numpy arrays can achieve results, like a valid float EXR, then I'll cheat and vote twice
That's what I'm interested in - results. It's just the actual path and details in between that are a bit murky for me
31st August 2017, 06:53 | #15 | Link | |
Registered User
Join Date: Oct 2007
Posts: 135
|
Quote:
31st August 2017, 15:45 | #16 | Link | ||
Registered User
Join Date: Sep 2007
Posts: 5,373
|
Quote:
Thanks to both of you. Where does the "conventional wisdom" that you can round-trip with +2 bits go wrong, then? Shouldn't 16 bits be enough for 8- or 10-bit sources? Is that only possible in limited situations like synthetic test patterns that are "perfect" with limited colors? What if you start with a "non-perfect" 444 source? You can have natively recorded 444 footage from cameras with aliasing, ringing, and noise. Even raytraced CG renders can have those issues.
31st August 2017, 16:40 | #17 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
note ur untouched 8-bit YUV444 clip as A

the clip's gotta be converted to floating point before the conversion to RGB, cuz the matrix (Rec601 or 709 or whatever) has floating point values; note the floating point version of ur clip just before any actual conversion as B

then u convert B to RGB, call the result C

then u convert C back to YUV444 with the inverse matrix, call the result D

then u convert D, a floating point clip, back to an integer clip, call the result E

now, A and E are uint8 clips; B, C, D r floating point clips. mathematically speaking, B and D should be the exact same, but they r not, cuz floating point arithmetic introduced some precision loss, and... uh, here comes the fun part

E is converted from D, and u got many options to do that, and if E is rounded to the nearest integer from D, A and E will be the exact same! concretely, like (I actually made up all the values just to show u how it works):

A: 1, 1, 1
B: 1.0, 1.0, 1.0
C: 4.2, 5.5, 0.1
D: 0.997, 1.001, 0.994
E: 1, 1, 1

Last edited by feisty2; 31st August 2017 at 16:47.
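The A-to-E walk-through above can be checked numerically. A minimal sketch with numpy, using an approximate full-range BT.709-style matrix and random stand-in samples (my own illustrative values, not anything from the thread):

```python
import numpy as np

# approximate full-range BT.709 RGB->YUV matrix (illustrative values)
M = np.array([[ 0.2126,  0.7152,  0.0722],
              [-0.1146, -0.3854,  0.5000],
              [ 0.5000, -0.4542, -0.0458]])
Minv = np.linalg.inv(M)                      # YUV -> RGB

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(10000, 3))    # untouched 8-bit "YUV444" samples
B = A.astype(np.float64)                     # float version of A
C = B @ Minv.T                               # B converted to RGB
D = C @ M.T                                  # C converted back to YUV
E = np.rint(D).astype(np.int64)              # D rounded back to integers

# B and D differ by tiny float error, yet rounding lands E back on A exactly
print(np.array_equal(A, E))                  # True
```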
31st August 2017, 16:57 | #18 | Link |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
Ok thanks, got it - Rec floating point matrices. But my impression was that some math models predicted +2 bits was enough even using the standard "Rec" matrices - not talking about YCgCo or reversible transforms. Have I misunderstood or misread what is being said?
It's just that when some people say 16 bit is "overkill", blah blah... the specifics and steps in between are important. As they say, "the devil is in the details". But it's nice to be able to actually "concretely" do it with something (actual files, actual workflow), even if you need RGBS in the vpy.
31st August 2017, 17:01 | #19 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
major point: precision loss introduced by floating point arithmetic will be magically canceled out by rounding back to integers...
now I wanna talk a bit more about those 2 extra bits. say ur conversion trip is RGB -> YUV444 -> RGB, and the intermediate YUV444 result was not stored in floating point but rounded to an integer representation. the aforementioned magic sticks around as long as that intermediate result has at least 2 bits more precision than the RGB clip (theoretically at least 1 more bit with a cherry-picked matrix)
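The 2-extra-bits claim can also be checked numerically: with the intermediate YUV rounded to 10 bits, the worst-case error after converting back stays under half an 8-bit step, so the final rounding still recovers the source exactly. A self-contained numpy sketch (approximate BT.709-style coefficients, my own illustrative setup):

```python
import numpy as np

# approximate full-range BT.709 RGB->YUV matrix (illustrative values)
M = np.array([[ 0.2126,  0.7152,  0.0722],
              [-0.1146, -0.3854,  0.5000],
              [ 0.5000, -0.4542, -0.0458]])
Minv = np.linalg.inv(M)

# every 8-bit RGB triple on a coarse grid (the full 256^3 works too, just slower)
g = np.arange(0, 256, 5, dtype=np.float64)
rgb = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)

# RGB -> YUV, then round the intermediate to 10 bits (8 + 2 extra bits)
yuv10 = np.rint((rgb @ M.T) * 4.0)

# dequantize, YUV -> RGB, round back to 8-bit integers
back = np.rint((yuv10 / 4.0) @ Minv.T)

print(np.array_equal(rgb, back))  # True: 2 extra integer bits round-trip cleanly
```

With only 1 extra bit the back-converted error can exceed half a step (the inverse matrix rows sum to nearly 3 in absolute value), which is where the "at least 2 bits, or 1 with a cherry-picked matrix" figure comes from.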
31st August 2017, 17:24 | #20 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,373
|
Quote: