17th July 2020, 09:18 | #1 | Link |
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
|
Analog Remastering process, where to start
Hi there,
as a follow-up to This and after endless PowerPoint presentations to convince my boss that the project was worth it, on the 15th of June we finally began our remastering process for... uh... about 466,000 tapes that are stored in our underground library, with no light whatsoever and at constant temperature and humidity. Some of the tapes are really old, we're talking 1987, so oxide tapes; some others were recorded on 1-inch Type C; and we even have film, like, literally, film that I could hold in my bare hands (I didn't touch it directly of course, I don't wanna destroy it with my greasy hands after all these years). It's gonna take a lot of time.

Files are captured lossless at 720x576 25i 8bit, at about 166 Mbit/s, which comes to a bit more than 1 GB per minute, so that part is actually fine. Then I encode each and every file manually to get rid of every issue (noise, rainbowing, spots, scratches and so on), then I make sure levels are right, I convert matrix and primaries with 16bit precision, I upscale with Spline64 + NNEDI to Full HD, I dither back down to 8bit with Floyd-Steinberg error diffusion and I encode to our mezzanine format. The remaster is then archived and readily available to our system thanks to the metadata. Of course, since the contents are interlaced, I bob-deinterlace them first to 50p with QTGMC, filter everything and then re-interlace to 25i TFF.

Now a problem arises: it's too slow, and it would take me forever to encode 400k contents, so they're asking me to automate the encoding process. I tried to set up a script, but I can't really seem to get a grip on it 'cause there are so many contents, and they're so different, that it's really hard to set up something good. Besides, using ColorYUV with autogain=true and autowhite=true saved some materials and screwed up some others.
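For what it's worth, the quoted capture bitrate checks out; a quick plain-Python sanity check (numbers taken from the post):

```python
# Lossless SD capture at 166 Mbit/s, as quoted above.
bitrate_mbit_s = 166

# Mbit/s -> bits/s -> bytes/s -> bytes per minute of footage
bytes_per_minute = bitrate_mbit_s * 1_000_000 / 8 * 60

gb_per_minute = bytes_per_minute / 1_000_000_000  # decimal GB
print(f"{gb_per_minute:.2f} GB per minute")       # roughly 1.25 GB/min
```

That is indeed "a bit more than 1 GB per minute", so the storage estimate in the post is consistent.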
Although the "one script to rule them all" approach goes against my policy of encoding everything manually, I recognize that doing everything by hand is simply not doable at this scale. So, in the end, what should I do? What would you do if you were in my shoes? I was thinking about something like this: Code:
#Bob-deinterlace to 50p
QTGMC(Preset="placebo")
#Bring everything to 16bit planar
ConvertBits(bits=16)
#Convert to 4:2:2 planar 16bit
ConvertToYUV422(matrix="Rec601")
#Automatic levels correction with 16bit planar precision
ColorYUV(autowhite=true, autogain=true)
#Add borders to get a 1.33 pillarbox with 16bit planar precision
AddBorders(152, 0, 152, 0)
#Upscale to Full HD with Spline64 + NNEDI3 and 16bit planar precision
nnedi3_rpow2(cshift="Spline64ResizeMT", rfactor=2, fwidth=1920, fheight=1080, nsize=4, nns=4, qual=1, etype=0, pscrn=2, threads=0, csresize=true, mpeg2=true, threads_rs=0, logicalCores_rs=true, MaxPhysCore_rs=true, SetAffinity_rs=false)
#Sharpening with 16bit planar precision
SeeSaw(resized, Sstr=8.8, Spower=5)
#From 16bit planar to 16bit interleaved
ConvertToDoubleWidth()
#Matrix conversion from BT.601 to BT.709 with 16bit interleaved precision
Matrix(from=601, to=709, rg=1.0, gg=1.0, bg=1.0, a=16, b=235, ao=16, bo=235, bitdepth=16)
#From 16bit interleaved to 16bit planar
ConvertFromDoubleWidth()
#Dither from 16bit planar to 8bit planar with Floyd-Steinberg error diffusion
ConvertBits(bits=8, dither=1)
#Limiter, TV range 0.0 - 0.7V
Limiter(min_luma=16, max_luma=235, min_chroma=16, max_chroma=240)
#Loudness correction
Normalize(0.22, show=false)
#Re-interlace to 25i TFF PAL
AssumeTFF()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()

However, ColorYUV messed up some contents spectacularly. Should I be more "conservative" and keep that script but without ColorYUV(), leaving everything in, including dot crawl, tons of grain, noise and many other things, or should I do something more aggressive, like: Code:
#Bob-deinterlace to 50p
QTGMC(Preset="placebo")
#De-rainbowing
ChubbyRain2(th=10, radius=10, show=false, sft=10, interlaced=false)
#De-scratch
DeScratch(mindif=5, asym=10, maxgap=3, maxwidth=3, minlen=90, maxlen=2048, maxangle=5, blurlen=15, keep=30, border=2, modeY=3, modeU=3, modeV=3, mindifUV=0, mark=false, minwidth=1)
#De-spot
DeSpot(mthres=16, mwidth=7, mheight=5, merode=33, interlaced=false, show=0, show_chroma=false)
#De-halo
BlindDeHalo3(lodamp=0.0, hidamp=0.0, sharpness=0.0, tweaker=0.0, PPmode=0, interlaced=false)
#From 4:2:2 planar 8bit to 4:2:2 interleaved 8bit
ConvertToYUY2()
#Ghost removal
Ghostbuster(offset=3, strength=11)
#From 4:2:2 interleaved 8bit to 4:2:2 planar 8bit
ConvertToYV16(matrix="Rec601", interlaced=false)
#De-grain
super = MSuper(pel=2, sharp=1)
bv1 = MAnalyse(super, isb=true, delta=1, overlap=4)
fv1 = MAnalyse(super, isb=false, delta=1, overlap=4)
bv2 = MAnalyse(super, isb=true, delta=2, overlap=4)
fv2 = MAnalyse(super, isb=false, delta=2, overlap=4)
MDegrain2(super, bv1, fv1, bv2, fv2, thSADC=800, thSAD=800)
#Bring everything to 16bit planar
ConvertBits(bits=16)
#Convert to 4:2:2 planar 16bit
ConvertToYUV422(matrix="Rec601")
#Denoise with 16bit planar precision
neo_dfttest(sigma=64, tbsize=1, Y=3, U=3, V=3, dither=0, opt=0)
#Add borders to get a 1.33 pillarbox with 16bit planar precision
AddBorders(152, 0, 152, 0)
#Upscale to Full HD with Spline64 + NNEDI3 and 16bit planar precision
nnedi3_rpow2(cshift="Spline64ResizeMT", rfactor=2, fwidth=1920, fheight=1080, nsize=4, nns=4, qual=1, etype=0, pscrn=2, threads=0, csresize=true, mpeg2=true, threads_rs=0, logicalCores_rs=true, MaxPhysCore_rs=true, SetAffinity_rs=false)
#From 16bit planar to 16bit interleaved
ConvertToDoubleWidth()
#Matrix conversion from BT.601 to BT.709 with 16bit interleaved precision
Matrix(from=601, to=709, rg=1.0, gg=1.0, bg=1.0, a=16, b=235, ao=16, bo=235, bitdepth=16)
#From 16bit interleaved to 16bit planar
ConvertFromDoubleWidth()
#Dither from 16bit planar to 8bit planar with Floyd-Steinberg error diffusion
ConvertBits(bits=8, dither=1)
#Limiter, TV range 0.0 - 0.7V
Limiter(min_luma=16, max_luma=235, min_chroma=16, max_chroma=240)
#Loudness correction
Normalize(0.22, show=false)
#Re-interlace to 25i TFF PAL
AssumeTFF()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()

Or perhaps I should use the scripts made by Fred Van der Putte, or are they OK only for Super8? You know, I hate to ask you this and I hate not being able to provide a sample, but there are so many different contents that focusing on one single content would be pointless and would do more harm than good. So the question is: what would you do if your boss asked you to set up something for the automatic remastering of several different archive tapes from the 80s and 90s? Last edited by FranceBB; 17th July 2020 at 11:57. |
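For reference, the Matrix(from=601, to=709) step in the scripts above amounts to decoding Y'CbCr to R'G'B' with BT.601 coefficients and re-encoding with BT.709 coefficients. Here is a minimal plain-Python sketch of that math on normalized values (Y in [0,1], Cb/Cr in [-0.5, 0.5]); this is an illustration of the principle, not the plugin's actual code:

```python
# BT.601 luma coefficients: Kr=0.299, Kb=0.114
# BT.709 luma coefficients: Kr=0.2126, Kb=0.0722

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Decode normalized Y'CbCr to R'G'B' for the given coefficients."""
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return r, g, b

def rgb_to_ycbcr(r, g, b, kr, kb):
    """Encode R'G'B' back to normalized Y'CbCr for the given coefficients."""
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2.0 * (1.0 - kb))
    cr = (r - y) / (2.0 * (1.0 - kr))
    return y, cb, cr

def rematrix_601_to_709(y, cb, cr):
    """Re-matrix one Y'CbCr triplet from BT.601 to BT.709."""
    r, g, b = ycbcr_to_rgb(y, cb, cr, 0.299, 0.114)
    return rgb_to_ycbcr(r, g, b, 0.2126, 0.0722)

# A neutral gray passes through unchanged; saturated colors shift slightly.
y, cb, cr = rematrix_601_to_709(0.5, 0.0, 0.0)
```

Doing this at 16-bit precision, as in the script, keeps the rounding error of the intermediate RGB step negligible.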
17th July 2020, 10:14 | #2 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,653
|
Can you get yourself the transform-PAL-decoder?
It should completely suppress rainbowing. To my knowledge the ld-decode project has it implemented in its software. And the BBC runs it on custom hardware.
__________________
Don't forget the 'c'! Don't PM me for technical support, please. |
17th July 2020, 10:34 | #3 | Link | |
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
|
Quote:
That said, the workflow is already up and running, but I just wanna know how I should handle post-processing in Avisynth 'cause I'm not sure whether to go for a very basic detail-driven approach or for some heavy filter chain. Anyway, it has to be done through Avisynth (I already took care of the automation part). |
|
17th July 2020, 11:14 | #4 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
your "remaster" script seems a bit too naïve; some quick and simple suggestions:

1) 4:2:2 is useless, just convert everything to 4:4:4; it is possible to obtain native 4:4:4 out of 4:2:2 or 4:2:0 through (luma) guided resampling
2) upscaling to 1080p is a waste of space and also damages the quality
3) frequency-domain denoisers (DFTTest, FFT3D, etc.) are extremely destructive to components without a sparse frequency representation (in human language: high-frequency components) and introduce this gross "greasy" look. They should not be used as general-purpose denoising filters; instead, you should only denoise low-frequency components with them (something like DFTTest(sstring="0:64 0.5:12 0.75:0.1 1:0"))
4) sharpening should be strictly prohibited unless it actually increases the resolution (the image looks "finer", more "fine-grained" after sharpening). It is NOT possible to increase the resolution with most if not all naïve Avisynth sharpening scripts; you get the opposite when you apply something like LSFmod: the image becomes even "coarser" and "bloated".

Last edited by feisty2; 17th July 2020 at 11:24. |
17th July 2020, 12:03 | #5 | Link | ||||
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
|
Quote:
Quote:
Quote:
Quote:
|
17th July 2020, 12:41 | #6 | Link |
Registered User
Join Date: Dec 2005
Location: Germany
Posts: 1,795
|
I liked the sharpening results of the *new* CAS filter: https://github.com/HomeOfVapourSynth...apourSynth-CAS
I would also try denoising with BM3D: https://github.com/HomeOfVapourSynth...pourSynth-BM3D (maybe combine it with SMDegrain, scroll doooown). I know these are VapourSynth filters, but it is very easy to use AVS and VS together.
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth VapourSynth Portable FATPACK || VapourSynth Database Last edited by ChaosKing; 17th July 2020 at 12:44. |
17th July 2020, 13:39 | #7 | Link |
Registered User
Join Date: Nov 2009
Posts: 2,352
|
You can run a pass with RT_Stats and find the min, max and average for levels correction. This also works very well for finding out whether the video is dark or bright, so you can tune the x264 settings accordingly. I would also run an RT_Stats pass over a motion mask to check whether the clip has a lot of motion or not, similar to what Netflix does when encoding their shows.
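A rough sketch of that two-pass idea in plain Python (thresholds here are made up for illustration; in AviSynth the per-frame numbers would come from RT_Stats itself):

```python
# Hypothetical analysis pass: given per-frame average luma values (8-bit
# scale), report min/max/average and classify the clip as dark, normal or
# bright so the second (encoding) pass can pick settings accordingly.
# The 60/160 thresholds are illustrative, not from RT_Stats.

def classify_clip(frame_avg_lumas, dark_below=60, bright_above=160):
    lo = min(frame_avg_lumas)
    hi = max(frame_avg_lumas)
    avg = sum(frame_avg_lumas) / len(frame_avg_lumas)
    if avg < dark_below:
        label = "dark"
    elif avg > bright_above:
        label = "bright"
    else:
        label = "normal"
    return lo, hi, avg, label
```

The same structure works for a motion metric: run the stats pass over a motion mask instead of the luma plane and branch on the average.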
By the way, where can I find the Matrix() function? I downloaded HDRTools but still get an error. EDIT: never mind, it's the HDRCore plugin. Last edited by Dogway; 17th July 2020 at 13:47. |
17th July 2020, 13:58 | #8 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
the point of "remaster" is to enhance the perceptual quality, so things that are no good to the perceptual quality should definitely be avoided.
1) there's nothing stopping you from storing a 4:4:4 digital file; it's just a file. In fact, my remasters are simply stored as uncompressed fp32 bitstreams. The bitstream can be handily loaded by any memory-mapped raw source filter and then encoded to something like ProRes or H.264 as needed; I don't understand your argument here.
2) I don't know what "Omneon Playout" is, but high-quality renderers like madVR can do nnedi3 upscaling in real time while the video is playing. If you have to upscale SD content to 1080p, at least go for something more computationally expensive so it'd be worth it, something like GANs.
3) NLMeans works reasonably well on high-frequency components; you could merge the results of DFTTest and NLMeans with a bandpass filter to get the best of both. Code:

# vapoursynth script
def bandpass(clip):
    low_freq = core.dfttest.DFTTest(clip, slocation=[0,0, 0.48,0, 0.56,1024, 1,1024], ...)
    high_freq = core.std.MakeDiff(clip, low_freq)
    return low_freq, high_freq

dft = core.dfttest.DFTTest(clip, slocation=[0,16, 0.48,16, 0.56,0, 1,0])
nlm = core.knlm.KNLMeansCL(clip, a=8, h=2.4)

dft_low, _ = bandpass(dft)
_, nlm_high = bandpass(nlm)
clip = core.std.MergeDiff(dft_low, nlm_high)

4) "fine" (native sharpness) looks better than "blurry", which looks better than "deliberately enhanced". When you apply (naïve) sharpening, the result doesn't really resemble the look of native sharpness; you get this ugly "fat", "coarse", "over-enhanced" look that a pair of trained eyes can instantly tell. The whole thing looks very "cheap" and "dumb". Maybe take a look at this thread if you want more information about this part.

Last edited by feisty2; 17th July 2020 at 14:03. |
17th July 2020, 14:15 | #10 | Link | |
Registered User
Join Date: Nov 2009
Posts: 2,352
|
Quote:
|
|
17th July 2020, 17:29 | #11 | Link |
Registered User
Join Date: Mar 2017
Location: Germany
Posts: 234
|
Will the lossless files be deleted after encoding?
If so, I'd suggest: do absolutely nothing to the picture; instead, encode it interlaced at the highest possible bitrate. Don't filter for storage; filter only when you need final results for broadcasting. Even if there are no direct plans to delete the lossless captures, in a few years someone may suddenly have the idea to do so, since storage of big files is more expensive. During the last years I have had to work more and more often with wrongly and badly deinterlaced, upscaled, over-filtered materials, and very often no unharmed copy of these old treasures was kept. So, do the world a favour. |
17th July 2020, 18:29 | #12 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
My advice: forget about the restoration.
Reason: you are probably not doing it right (everyone's techniques get better over time), but more critically, it will massively increase the time this takes. Also, a LOT of the material you are going to capture is not important and probably doesn't merit the effort. Your goal instead should be to focus only on digitizing those assets, and to do that while they can still be digitized. Capture them in the best format that is financially viable, i.e., without wasting space on formats that don't provide visually perceptible improvements in quality. If you really have 466,000 tapes, and each of them contains half an hour of content, then if you run ten VTR machines for eight hours a day it will take 582 weeks to capture them all. If you multiply that number by any amount, where that multiple represents the increase in time required by your restoration, the project will quickly exceed your remaining lifetime. Even if you capture 24/7, you are still looking at multiple years to get the job done, unless you can employ a larger number of tape players. Oh, as far as film being fragile and worrying about handling it: don't get that worried. I have transferred film from 1928 and it was as supple as the day it came back from developing. Yes, I have also handled vinegar-syndrome film, and it is a sorry mess. If you are worried about hurting the film by touching it, just wear old-fashioned cotton work gloves. Nothing fancy is needed, although I can point you to some sites which specialize in materials used for film transfers. BTW, 1987 is hardly "really old" in the videotape world. Videotape was invented by Ampex in the late 1950s. If you had early-1960s Quadruplex tape, then that would be "really old." I say this only because tapes from 1987 onward should be absolutely fine.
I have transferred audio tape from 1948 (I have it sitting here in my office) and it did not shed, and I have transferred many consumer-grade VHS and Beta tapes from the late 1970s, also with no tape issues. The main shedding problems were with tape binders used on commercial tapes from the 1970s; it sounds like you don't have any of those. Given that your tapes have been stored in absolutely ideal conditions, you are unlikely to have very many tape issues, although there are always a few stinkers in any large lot. Last edited by johnmeyer; 17th July 2020 at 18:36. |
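The back-of-the-envelope capture-time estimate above can be checked in a few lines of plain Python (numbers from the post; five-day working weeks assumed):

```python
# 466,000 tapes at half an hour each, captured on ten VTRs
# running eight hours a day, five days a week.
tapes = 466_000
hours_of_content = tapes * 0.5           # 233,000 hours of footage
capture_hours_per_day = 10 * 8           # ten machines, eight-hour days
days = hours_of_content / capture_hours_per_day
weeks = days / 5                         # five-day working weeks
print(round(weeks))                      # ~582 weeks, as stated in the post
```

That is over eleven years of capture alone, before any restoration work, which is the whole point of the argument.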
18th July 2020, 01:10 | #13 | Link | ||||||||||
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
|
Quote:
As to the basics, I'm definitely gonna get the simple things right: interlacing, color matrix, color primaries and loudness. That's not really a problem. Perhaps I should really just do those few things... Quote:
Quote:
Anyway, it's funny that you mentioned Ampex, 'cause that's the brand I've seen on many tapes and the company my colleagues mentioned a lot. They also told me they weren't sure whether the tapes were still OK, 'cause Ampex guaranteed each and every tape to last 15 years, and about 36 years have passed... So far so good, though. As a side note, it's funny to read the log sheets / recording sheets left 36 years ago by other broadcast operators, with their notes on how they did what they did, where they inserted breaks, how they calibrated levels with the bars and tone reference and so on... I wonder if they ever thought that, 36 years later, someone would open the case, read their annotations and play the tape accordingly... :') |
18th July 2020, 01:35 | #14 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
then you should pick a more sophisticated upscaling algorithm than nnedi3; it's too "cheap" to be hard-encoded into the master.
GAN-based models produce results much closer to native HD than nnedi3. |
18th July 2020, 11:55 | #16 | Link | |
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
|
Quote:
If you look here https://forum.doom9.org/showthread.php?t=176655 every block is either a BAT, an Avisynth script or AutoIt code for the GUI. It would be a pain in the butt to integrate VapourSynth; besides, I would also have to ask Steinar and two other people, as I'm not the only developer and he's the main developer of the project. Lastly, I spent the years from 2006 to 2020 working on my own with Avisynth; I'm not really ready to move to a totally unknown scripting language like Python and VapourSynth. Last edited by FranceBB; 18th July 2020 at 12:00. |
|
18th July 2020, 12:41 | #17 | Link | |
Registered User
Join Date: Dec 2005
Location: Germany
Posts: 1,795
|
Quote:
Why don't you just try out VS? I knew nothing about VS and switched to it completely after using it for about three days. Python is very easy to understand (and to learn) even if you have never used it before; you just need to know the absolute basics to write/understand a script. VapourSource should be able to load any VS script in avs. If you're using avs 64-bit you will also need VS 64-bit. As an alternative you could use "avfs.exe" to create a fake AVI and open it in any program, or with AviSource in AVS-32 or 64. Yes, you need to install VS. Performance should not be affected, maybe a tiny bit but not by much.
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth VapourSynth Portable FATPACK || VapourSynth Database |
|
18th July 2020, 12:53 | #18 | Link | |||
Registered User
Join Date: Mar 2017
Location: Germany
Posts: 234
|
Quote:
The material of the DigiBeta cassettes of two of the copies began to dissolve. Maybe it's the very(!) last chance to save these programmes. Be a hero! |
18th July 2020, 13:07 | #19 | Link |
Registered User
Join Date: Dec 2005
Location: Germany
Posts: 1,795
|
And btw, I tried MCBob (with the parameters from the link) and it looked very good: https://forum.doom9.org/showthread.p...70#post1917370
I would compare it with QTGMC and see which one looks better.
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth VapourSynth Portable FATPACK || VapourSynth Database |
18th July 2020, 13:54 | #20 | Link | |
Pig on the wing
Join Date: Mar 2002
Location: Finland
Posts: 5,718
|
Quote:
__________________
And if the band you're in starts playing different tunes I'll see you on the dark side of the Moon... |
|