Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 17th July 2020, 09:18   #1  |  Link
FranceBB
Broadcast Encoder
 
Join Date: Nov 2013
Location: Metropolitan City of Milan, Italy
Posts: 1,470
Analog Remastering process, where to start

Hi there,
as a follow-up to This and after endless PowerPoint presentations to convince my boss that the project was worth it, on the 15th of June we finally began our remastering process for... uh... about 466'000 tapes that are stored in our underground library, with no light whatsoever and at constant temperature and humidity. Some of the tapes there are really old, we're talking about 1987, so oxide tapes; some others were recorded on 1 inch Type C; and we even have film, like, literally, film that I could hold in my bare hands (I didn't touch it directly of course, I don't wanna destroy it with my greasy hands after all these years). It's gonna take a lot of time.

Files are captured lossless at 720x576, 25i, 8 bit, at about 166 Mbit/s, which works out to a bit more than 1 GB per minute, so it's actually fine. Then I encode each and every file manually to get rid of every issue like noise, rainbowing, spots, scratches and so on; I make sure levels are right, convert matrix and primaries with 16 bit precision, upscale with Spline64 + NNEDI to FULL HD, dither back down to 8 bit with Floyd-Steinberg error diffusion and encode to our mezzanine format. The remaster is then archived and readily available to our system thanks to the metadata. Of course, since contents are interlaced, I bob-deinterlace them first to 50p with QTGMC, filter out everything and then re-interlace them to 25i TFF.
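The quoted bitrate is just the raw frame size times the frame rate; a quick sanity check, assuming the captures are lossless 8-bit 4:2:2 (e.g. YUY2, so 2 bytes per pixel, video only):

```python
# Sanity check of the quoted capture bitrate (assumption: lossless
# 8-bit 4:2:2, i.e. 2 bytes per pixel; video only, no audio).
width, height, fps, bytes_per_pixel = 720, 576, 25, 2
bytes_per_second = width * height * bytes_per_pixel * fps
mbit_per_second = bytes_per_second * 8 / 1e6      # ~165.9 Mbit/s
gb_per_minute = bytes_per_second * 60 / 1e9       # ~1.24 GB per minute
print(round(mbit_per_second, 1), round(gb_per_minute, 2))
```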
Now a problem arises: it's too slow, and it would take me forever to encode 400k contents, so they're asking me to automate the encoding process.
I tried to set up a script, but I can't really seem to get a grip on it 'cause there are so many contents and they're so different that it's really hard to set up something good. Besides, using ColorYUV with autogain=true and autowhite=true saved some materials and screwed up some others. Although a "one script to rule them all" goes against my policy of encoding everything manually, I recognize that doing them all by hand is simply not doable. So, in the end, what should I do? What would you do if you were in my shoes?
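For the automation side, a minimal sketch of the batch approach being discussed: generate a per-tape .avs from a shared template and hand it to an encoder. Everything here is hypothetical: the paths, the template contents, the source filter and the ffmpeg call are placeholders, not the actual workflow.

```python
from pathlib import Path
import subprocess

# Hypothetical shared filter chain; LWLibavVideoSource and the template
# body are placeholders for whatever the real workflow uses.
TEMPLATE = """LWLibavVideoSource("{src}")
QTGMC(Preset="placebo")
# ... rest of the shared filter chain ...
"""

def make_script(src: Path, out_dir: Path) -> Path:
    """Write a per-tape AviSynth script based on the shared template."""
    script = out_dir / (src.stem + ".avs")
    script.write_text(TEMPLATE.format(src=src.name))
    return script

def encode_all(capture_dir: Path, out_dir: Path) -> None:
    """Generate a script per capture and feed each one to the encoder."""
    for src in sorted(capture_dir.glob("*.avi")):
        avs = make_script(src, out_dir)
        # Illustrative encoder invocation; real mezzanine settings go here.
        subprocess.run(["ffmpeg", "-i", str(avs), str(out_dir / (src.stem + ".mxf"))])
```

The template is where a per-genre or per-decade filter chain could be swapped in, instead of one chain for everything.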

I was thinking about something like this:


Code:
#Bob-deinterlace to 50p
QTGMC(Preset="placebo")

#Bring everything to 16bit planar
ConvertBits(bits=16)

#Convert to 4:2:2 planar 16bit
Converttoyuv422(matrix="Rec601")

#Automatic levels correction with 16bit planar precision
ColorYUV(autowhite=true, autogain=true)

#Adding Borders to get a 1.33 PB with 16bit planar precision
AddBorders(152, 0, 152, 0)

#Upscale to FULL HD with Spline64 + NNEDI and 16bit planar precision
nnedi3_rpow2(cshift="Spline64ResizeMT", rfactor=2, fwidth=1920, fheight=1080, nsize=4, nns=4, qual=1, etype=0, pscrn=2, threads=0, csresize=true, mpeg2=true, threads_rs=0, logicalCores_rs=true, MaxPhysCore_rs=true, SetAffinity_rs=false)

#Sharpening with 16bit planar precision
SeeSaw(resized, Sstr=8.8, Spower=5)

#From 16bit planar to 16bit interleaved
ConvertToDoubleWidth()

#Matrix Conversion from BT601 to BT709 with 16bit interleaved precision
Matrix(from=601, to=709, rg=1.0, gg=1.0, bg=1.0, a=16, b=235, ao=16, bo=235, bitdepth=16)

#From 16bit interleaved to 16bit planar
ConvertFromDoubleWidth()

#Dithering from 16bit planar to 8bit planar with the Floyd-Steinberg error diffusion
ConvertBits(bits=8, dither=1)

#Limiter TV Range 0.0 - 0.7V
Limiter(min_luma=16, max_luma=235, min_chroma=16, max_chroma=240)

#Loudness Correction
Normalize(0.22, show=false)

#Re-Interlace to 25i TFF PAL
assumeTFF()
separatefields()
selectevery(4,0,3)
weave()

However, ColorYUV fucked up some contents spectacularly.
Should I be more "conservative" and keep that script but without ColorYUV(), keeping everything (dot crawl, tons of grain, noise and many other things), or should I do something more aggressive like:

Code:
#Bob-deinterlace to 50p
QTGMC(Preset="placebo")

#De-raimbowning
ChubbyRain2(th=10, radius=10, show=false, sft=10, interlaced=false)

#De-scratch
DeScratch(mindif=5, asym=10, maxgap=3, maxwidth=3, minlen=90, maxlen=2048, maxangle=5, blurlen=15, keep=30, border=2, modeY=3, modeU=3, modeV=3, mindifUV=0, mark=false, minwidth=1)

#De-spot
DeSpot(mthres=16, mwidth=7, mheight=5, merode=33, interlaced=false, show=0, show_chroma=false)

#De-haloing
BlindDeHalo3(lodamp=0.0, hidamp=0.0, sharpness=0.0, tweaker=0.0, PPmode=0, interlaced=false)

#From 4:2:2 planar 8bit to 4:2:2 interleaved 8bit
ConverttoYUY2()

#Ghost Removal
Ghostbuster(offset=3, strength=11)

#From 4:2:2 interleaved 8bit to 4:2:2 planar 8bit
Converttoyv16(matrix="Rec601", interlaced=false)

#De-grain
super = MSuper(pel=2, sharp=1)
bv1 = MAnalyse(super, isb = true, delta = 1, overlap=4)
fv1 = MAnalyse(super, isb = false, delta = 1, overlap=4)
bv2 = MAnalyse(super, isb = true, delta = 2, overlap=4)
fv2 = MAnalyse(super, isb = false, delta = 2, overlap=4)
MDegrain2(super,bv1,fv1,bv2,fv2,thSADC=800, thSAD=800)

#Bring everything to 16bit planar
ConvertBits(bits=16)

#Convert to 4:2:2 planar 16bit
Converttoyuv422(matrix="Rec601")

#Denoise with 16bit planar precision
neo_dfttest(sigma=64, tbsize=1, Y=3, U=3, V=3, dither=0, opt=0)

#Adding Borders to get a 1.33 PB with 16bit planar precision
AddBorders(152, 0, 152, 0)

#Upscale to FULL HD with Spline64 + NNEDI and 16bit planar precision
nnedi3_rpow2(cshift="Spline64ResizeMT", rfactor=2, fwidth=1920, fheight=1080, nsize=4, nns=4, qual=1, etype=0, pscrn=2, threads=0, csresize=true, mpeg2=true, threads_rs=0, logicalCores_rs=true, MaxPhysCore_rs=true, SetAffinity_rs=false)

#From 16bit planar to 16bit interleaved
ConvertToDoubleWidth()

#Matrix Conversion from BT601 to BT709 with 16bit interleaved precision
Matrix(from=601, to=709, rg=1.0, gg=1.0, bg=1.0, a=16, b=235, ao=16, bo=235, bitdepth=16)

#From 16bit interleaved to 16bit planar
ConvertFromDoubleWidth()

#Dithering from 16bit planar to 8bit planar with the Floyd-Steinberg error diffusion
ConvertBits(bits=8, dither=1)

#Limiter TV Range 0.0 - 0.7V
Limiter(min_luma=16, max_luma=235, min_chroma=16, max_chroma=240)

#Loudness Correction
Normalize(0.22, show=false)

#Re-Interlace to 25i TFF PAL
assumeTFF()
separatefields()
selectevery(4,0,3)
weave()
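As a sanity check on the re-interlacing tail used in both scripts (AssumeTFF, SeparateFields, SelectEvery(4,0,3), Weave), here is a tiny Python simulation with hypothetical frame labels showing which fields of the bobbed 50p clip survive:

```python
# Simulate the field bookkeeping of:
#   AssumeTFF() -> SeparateFields() -> SelectEvery(4, 0, 3) -> Weave()
# applied to a bobbed 50p clip. Frame labels f0, f1, ... are hypothetical.
frames = [f"f{i}" for i in range(8)]                   # eight 50p frames
fields = [f + p for f in frames for p in ("t", "b")]   # TFF: top field first
kept = [fld for i, fld in enumerate(fields) if i % 4 in (0, 3)]
woven = [kept[i] + "+" + kept[i + 1] for i in range(0, len(kept), 2)]
# Each 25i output frame pairs the top field of one bobbed frame with the
# bottom field of the next, halving 50p back to 25i without dropping cadence.
print(woven)  # ['f0t+f1b', 'f2t+f3b', 'f4t+f5b', 'f6t+f7b']
```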

Or perhaps I should use the scripts made by Fred Van der Putte, or are they OK only for Super8? You know, I hate to ask you this and I hate not being able to provide a sample, but there are so many different contents that focusing on one single content would be pointless and would do more harm than good. So the question is: what would you do if your boss asked you to set up an automatic remastering pipeline for several different archive tapes from the 80s and 90s?

Last edited by FranceBB; 17th July 2020 at 11:57.
Old 17th July 2020, 10:14   #2  |  Link
scharfis_brain
brainless
 
Join Date: Mar 2003
Location: Germany
Posts: 3,642
Can you get yourself the transform-PAL-decoder?
It should completely suppress rainbowing.

To my knowledge the ld-decode project has it implemented in its software.
And the BBC runs it on custom hardware.
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
Old 17th July 2020, 10:34   #3  |  Link
FranceBB
Broadcast Encoder
 
Join Date: Nov 2013
Location: Metropolitan City of Milan, Italy
Posts: 1,470
Quote:
Originally Posted by scharfis_brain View Post
Can you get yourself the transform-PAL-decoder?
It should completely suppress rainbowing.

To my knowledge the ld-decode project has it implemented in its software.
And the BBC runs it on custom hardware.
I'm actually using a Blackmagic card as suggested by Derek Prestegard, and I'm not planning to change it as it works great. Besides, it's SDI to SDI, so the dot crawl and rainbowing I'm seeing were recorded several years ago, for whatever reason.
That said, the workflow is already up and running; I just wanna know how I should handle post-processing in Avisynth, 'cause I'm not sure whether to go for a very basic detail-driven approach or for some heavy filter chain. Anyway, it has to be done through Avisynth (I already took care of the automation part).
Old 17th July 2020, 11:14   #4  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: void
Posts: 2,560
your "remaster" script seems a bit too naïve, some quick and simple suggestions:
1) 4:2:2 is useless, just convert everything to 4:4:4; it is possible to obtain native 4:4:4 out of 4:2:2 or 4:2:0 through (luma-)guided resampling

2) upscaling to 1080p is a waste of space and also damages the quality

3) frequency-domain denoisers (DFTTest, FFT3D, etc.) are extremely destructive to components without a sparse frequency representation (or, in human language, high-frequency components) and introduce this gross "greasy" look. these should not be used as general-purpose denoising filters; instead, you should only denoise low-frequency components with these filters (something like DFTTest(sstring = "0:64 0.5:12 0.75:0.1 1:0"))

4) sharpening should be strictly prohibited unless it increases the resolution (the image looks "finer", more "fine-grained" after sharpening). it is NOT possible to increase the resolution with most if not all naïve avisynth sharpening scripts; you get the opposite when you apply something like LSFMod: the image becomes even "coarser" and "bloated".

Last edited by feisty2; 17th July 2020 at 11:24.
Old 17th July 2020, 12:03   #5  |  Link
FranceBB
Broadcast Encoder
 
Join Date: Nov 2013
Location: Metropolitan City of Milan, Italy
Posts: 1,470
Quote:
Originally Posted by feisty2 View Post
your "remaster" script seems a bit too naïve, some quick and simple suggestions:
1) 4:2:2 is useless, just convert everything to 4:4:4, it is possible to obtain native 4:4:4 out of 4:2:2 or 4:2:0 thru (luma) guided resampling
BetaCAM was natively YV16. Besides, our FULL HD mezzanine files are stored as YV16 as well, so I can't really go to 4:4:4, 'cause in the end, even if I do, I'm gonna have to go back to 4:2:2.


Quote:
Originally Posted by feisty2 View Post
2) upscaling to 1080p is a waste of space and also damages the quality
I'd much rather have Spline64 + NNEDI upscale to FULL HD than rely on Omneon playout ports, which are gonna do it badly on playback using a FastBicubic().


Quote:
Originally Posted by feisty2 View Post
3) frequency domain denoisers (DFTTest, FFT3D, etc.) are extremely destructive to components without a sparse frequency representation (or in human language, high frequency components) and introduce this gross "greasy" look, these should not be used as general purpose denoising filters, instead, you should only denoise low frequency components with these filters (something like DFTTest(sstring = "0:64 0.5:12 0.75:0.1 1:0"))
I know, they're prone to produce a wax-face / oil-painting look sometimes, but what choice do I have...?


Quote:
Originally Posted by feisty2 View Post
4) sharpening should be strictly prohibited unless it increases the resolution (image looks "finer", more "fine grained" after sharpening), it is NOT possible to increase the resolution with most if not all naïve avisynth sharpening scripts, you get the opposite when you apply something like LSFMod, the image becomes even "coarser" and "bloated".
So you're more for a blurry approach? The resulting file will look extremely blurry if I denoise and upscale...
Old 17th July 2020, 12:41   #6  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,449
I liked the sharpening result of the *new* CAS filter https://github.com/HomeOfVapourSynth...apourSynth-CAS
I would also try out denoising with BM3D https://github.com/HomeOfVapourSynth...pourSynth-BM3D (maybe combine it with SMDegrain, scroll doooown)

I know these are VapourSynth filters, but it's very easy to use AVS+ and VS together.
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database || https://github.com/avisynth-repository

Last edited by ChaosKing; 17th July 2020 at 12:44.
Old 17th July 2020, 13:39   #7  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 1,147
You can run a pass with RT_Stats and find the min, max and average for levels correction. This also works very well for finding out whether the video is dark or bright, so you can tune the x264 settings accordingly. I would also run an RT_Stats pass over a motion mask to check whether the clip has a lot of motion or not, similar to what Netflix does when encoding their shows.
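A hedged mock-up of that analysis pass in plain Python (lists stand in for decoded 8-bit luma planes; the function name and the thresholds are illustrative, not RT_Stats' actual API):

```python
# Mock-up of an RT_Stats-style analysis pass: gather per-frame min / max /
# average luma so a levels correction can be decided per clip instead of
# applied blindly. Plain lists stand in for decoded 8-bit luma planes.
def analyse(frames):
    mins = [min(f) for f in frames]
    maxs = [max(f) for f in frames]
    avgs = [sum(f) / len(f) for f in frames]
    return min(mins), max(maxs), sum(avgs) / len(avgs)

# A clip whose luma never gets near TV white (235) is a gain candidate:
dark_clip = [[60] * 100, [90] * 100]          # two flat "frames"
lo, hi, avg = analyse(dark_clip)
needs_gain = hi < 200 and avg < 80            # thresholds are illustrative only
print(lo, hi, avg, needs_gain)
```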


By the way, where can I find the Matrix() function? I downloaded HDRTools but still get an error.
EDIT: never mind, it's the HDRCore plugin.

Last edited by Dogway; 17th July 2020 at 13:47.
Old 17th July 2020, 13:58   #8  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: void
Posts: 2,560
the point of a "remaster" is to enhance the perceptual quality, so things that do no good to the perceptual quality should definitely be avoided.

1) there's nothing stopping you from storing a 4:4:4 digital file, it's just a file; in fact my remasters are simply stored as uncompressed fp32 bitstreams. the bitstream can be handily loaded by any memory-mapped raw source filter, then encoded to something like ProRes or H.264 as needed. I don't understand your argument here.

2) I don't know what "Omneon Playout" is, but high-quality renderers like madVR can do nnedi3 upscaling in real time while the video is playing. if you have to upscale SD content to 1080p, at least go for something more computationally expensive so it'd be worth it, something like GANs.

3) NLMeans works reasonably well on high-frequency components; you could merge the results of DFTTest and NLMeans with a bandpass filter to get the best of both.
Code:
# vapoursynth script
import vapoursynth as vs
core = vs.core

def bandpass(clip):
    low_freq = core.dfttest.DFTTest(clip, slocation=[0,0, 0.48,0, 0.56,1024, 1,1024], ...)
    high_freq = core.std.MakeDiff(clip, low_freq)
    return low_freq, high_freq

dft = core.dfttest.DFTTest(clip, slocation=[0,16, 0.48,16, 0.56,0, 1,0])
nlm = core.knlm.KNLMeansCL(clip, a=8, h=2.4)
dft_low, _ = bandpass(dft)
_, nlm_high = bandpass(nlm)

clip = core.std.MergeDiff(dft_low, nlm_high)
As ChaosKing suggested, V-BM3D is also worth a shot.

4) "fine" (native sharpness) looks better than "blurry", which looks better than "deliberately enhanced". when you apply (naïve) sharpening, the result doesn't really resemble the look of native sharpness; you get this ugly "fat" and "coarse" and "over-enhanced" look that a pair of trained eyes can instantly tell. the whole thing looks very "cheap" and "dumb", maybe take a look at this thread if you want more information about this part.

Last edited by feisty2; 17th July 2020 at 14:03.
Old 17th July 2020, 14:01   #9  |  Link
tormento
Acid fr0g
 
Join Date: May 2002
Location: Italy
Posts: 1,736
I'd do that job for free, just to have access to that terrific movie collection...
__________________
@turment on Telegram
Old 17th July 2020, 14:15   #10  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 1,147
Quote:
Originally Posted by feisty2 View Post
4) "fine" (native sharpness) looks better than "blurry" looks better than "deliberately enhanced", when you apply (naïve) sharpening, the result doesn't really resemble the look of native sharpness, you get this ugly "fat" and "coarse" and "over enhanced" look that a pair of trained eyes could instantly tell. the whole thing looks very "cheap" and "dumb", maybe take a look at this thread if you want more information about this part.
I was reading your FineSharp approach and saw the deconvolution line, but no matrix whatsoever. What kind of sharpening matrix and size do you use for that example (SD)?
Old 17th July 2020, 17:29   #11  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 141
Will the lossless files be deleted after encoding?
If so, I'd suggest: do absolutely nothing; instead, encode interlaced at the highest possible bitrate. Don't filter for storage, only if you need final results for broadcasting.
If there are no direct plans to delete the lossless captures, in a few years maybe someone will suddenly have the idea to do so, since storage of big files is more expensive.
During the last years I have had to work more and more often with wrongly and badly deinterlaced, upscaled, over-filtered etc. materials. And very often no unharmed copy of these old treasures was kept, so do the world a favour.
Old 17th July 2020, 18:29   #12  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,447
My advice: forget about the restoration.

Reason: you are probably not doing it right (everyone's techniques get better over time), but more critical is that it will massively increase the time this takes. Also, a LOT of the material you are going to capture is not important and probably doesn't merit the effort.

Your goal instead should be to focus only on digitizing those assets, and do that while they can still be digitized. Capture them in the best format that is financially viable, i.e., without wasting space by choosing formats that don't provide visually perceptible improvements in quality.

If you really have 466,000 tapes, and each of them contains half an hour of content, then if you run ten VTR machines for eight hours a day it will take 582 weeks to capture them all. If you multiply that number by any amount, where that multiple represents the increase in time required by your restoration, the project will quickly exceed your remaining lifetime. Even if you capture 24/7, you are still looking at multiple years to get the job done, unless you can employ a larger number of tape players.
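The arithmetic above, spelled out (assuming half-hour tapes, ten decks, eight-hour days and five-day weeks):

```python
# Throughput estimate for real-time capture of the archive.
tapes, hours_per_tape = 466_000, 0.5
machines, hours_per_day, days_per_week = 10, 8, 5
total_hours = tapes * hours_per_tape                 # 233,000 hours of tape
weeks = total_hours / (machines * hours_per_day * days_per_week)
print(total_hours, weeks)   # ~582 weeks, i.e. over a decade of capturing
```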

Oh, as far as film being fragile and worrying about handling it, don't get that worried. I have transferred film from 1928 and it was as supple as the day it came back from developing. Yes, I have also handled vinegar syndrome film, and it is a sorry mess.

If you are worried about hurting the film by touching it, just wear old-fashioned cotton work gloves. Nothing fancy is needed, although I can point you to some sites which specialize in materials used for film transfers.

BTW, 1987 is hardly "really old" in the videotape world. Videotape was invented by Ampex in the late 1950s. If you had early 1960s Quadruplex tape, then that would be "really old." I say this only because tapes from 1987 onward should be absolutely fine. I have transferred audio tape from 1948 (I have it sitting here in my office) and it did not shed, and I have transferred many consumer-grade VHS and Beta tapes from the late 1970s, also with no tape issues. The main shedding problems were with tape binders used on commercial tapes from the 1970s. It sounds like you don't have any of those.

Given that your tapes have been stored in absolutely ideal conditions, you are unlikely to have very many tape issues, although there are always a few stinkers in any large lot.

Last edited by johnmeyer; 17th July 2020 at 18:36.
Old 18th July 2020, 01:10   #13  |  Link
FranceBB
Broadcast Encoder
 
Join Date: Nov 2013
Location: Metropolitan City of Milan, Italy
Posts: 1,470
Quote:
Originally Posted by feisty2 View Post
1) there's nothing stopping you from storing a 4:4:4 digital file, it's just a file, in fact my remasters are simply stored as uncompressed fp32 bitstreams. the bitstream could be handily loaded by any memory map raw source filter then encoded to something like ProRes or H264 as needed, I don't understand your argument here.
I know, but mezzanine files are meant to be playable on the fly by video servers, playout ports and so on, so you can't encode in whatever format you want, otherwise it's a mess. In broadcasting companies everything is encoded with the very same codec, resolution, sampling, frame rate, bit depth, primaries, color matrix and GOP. In my case it's YV16, so I'm not gonna encode it in YV24.


Quote:
Originally Posted by feisty2 View Post
2) I don't know what "Omneon Playout" is, but high quality renderers like madVR could do nnedi3 upscaling at realtime when the video is playing, if you have to upscale SD content to 1080p
I don't expect you to know Omneon, EVS etc., but just keep in mind that a playout port is essentially a video server which literally plays a masterfile, decoding it on the fly as it's aired, so it's better to leave it nothing to do other than playing the file as it is, 'cause even the upscaling is done poorly. For instance, have you ever wondered why, when you turn on your TV and a FULL HD channel is airing old SD content, your TV still says "1080i"? Well, unless it's something special, nobody upscaled the masterfile; it's probably some playout port playing an MPEG-2 IMX or DV25 file and upscaling it on the fly with something like a fast Bilinear or a fast Bicubic. You can also route an SDI signal through a hardware matrix to an Evertz device that will upscale it for you, but again it's done poorly. Even modern VTRs can be used to upscale, but poorly again, as it has to happen in real time. I would totally go for Spline64 + NNEDI over any crappy hardware upscale. (And... no, you can't air SD except on SD channels.)


Quote:
Originally Posted by feisty2 View Post
3) NLMeans works reasonably well on high frequency components, you could merge the results of DFTTest and NLMeans with a bandpass filter to get the best of both.
as ChaosKing suggested, V-BM3D is also worth a shot.
I'll take a look at them both.

Quote:
Originally Posted by tormento View Post
I'd do that job for free, just to have access to that terrific movie collection...
Hahahahaha I know you would XD

Quote:
Originally Posted by Frank62 View Post
Will the lossless files be deleted after encoding?
If so, I'd suggest: Do absolutely nothing, instead of encoding interlaced with highest possible bitrate. Don't filter for storage, only if you need final results for broadcasting.
If there are no direct plans to delete the lossless captures, in a few years maybe someone will suddenly have the idea to do so - storage of big files is more expensive.
During the last years I had to work more and more often with wrongly and bad deinterlaced, upscaled, too much filtered, a.s.o. materials. And very often they did not keep any not harmed copy of these old treasures, so, do the world a favour.
I know what you mean. I wanted to archive the lossless files too, but that's sadly not feasible. The original physical media will be kept, though, so if someone wants to re-encode them in the future, they will be able to digitize them again.
As for the basics, I'm definitely gonna get the simple things right: interlacing, color matrix, color primaries and loudness. That's not really a problem. Perhaps I should really just do those few things...

Quote:
Originally Posted by johnmeyer View Post
a LOT of the material you are going to capture is not important and probably doesn't merit the effort.
That's true: there are some inestimable treasures in there, but also some contents that will probably never be aired again; still, the plan is to digitize everything.

Quote:
Originally Posted by johnmeyer View Post
Your goal instead should be to focus only on digitizing those assets, and do that while they can still be digitized. Capture them in the best format that is financially viable, i.e., without wasting space by choosing formats that don't provide visually perceptible improvements in quality.
Absolutely; in fact, the guys from Production Metadata are making me a list of which contents I should digitize first.


Quote:
Originally Posted by johnmeyer View Post
If you really have 466,000 tapes, and each of them contains half an hour of content, then if you run ten VTR machines for eight hours a day it will take 582 weeks to capture them all. If you multiply that number by any amount, where that multiple represents the increase in time required by your restoration, the project will quickly exceed your remaining lifetime. Even if you capture 24/7, you are still looking at multiple years to get the job done, unless you can employ a larger number of tape players.
Give or take, they're more than 450k and fewer than 500k, but some of them are duplicates, 'cause people before me kept both the original tape and the duplicate. Our current infrastructure is made of 16 Sony VTRs (8 on one side + 8 on the other), but my colleagues like to joke that by the time I retire, the project will still be ongoing (and I'm young!) XD

Quote:
Originally Posted by johnmeyer View Post
Oh, as far as film being fragile and worrying about handling it, don't get that worried. I have transferred film from 1928 and it was as supple as the day it came back from developing. Yes, I have also handled vinegar syndrome film, and it is a sorry mess.
If you are worried about hurting the film by touching it, just wear old-fashioned cotton work gloves. Nothing fancy is needed, although I can point you to some sites which specialize in materials used for film transfers.
Oh, I see! You know, I'd never seen anything like that before in my life, as I'm young (I'm from the 90s generation), so I was really scared to handle those old things 'cause I thought they were going to break like nothing.


Quote:
Originally Posted by johnmeyer View Post
BTW, 1987 is hardly "really old" in the videotape world. Videotape was invented by Ampex in the late 1950s. If you had early 1960s Quadruplex tape, then that would be "really old." I say this only because tapes from 1987 onward should be absolutely fine. I have transferred audio tape from 1948 (I have it sitting here in my office) and it did not shed, and I have transferred many consumer-grade VHS and Beta tapes from the late 1970s, also with no tape issues. The main shedding problems were with tape binders used on commercial tapes from the 1970s. It sounds like you don't have any of those.

Given that your tapes have been stored in absolutely ideal conditions, you are unlikely to have very many tape issues, although there are always a few stinkers in any large lot.
Yeah... Well... I wasn't even born in the 80s, so handling things from that era means handling something older than me; that's why I said "really old"... Like, it is from my perspective, 'cause it's older than me xD
Anyway, it's funny that you named Ampex, 'cause that's the brand I've seen on many tapes and the company my colleagues mentioned a lot. They also told me they were not sure whether the tapes were still OK, 'cause Ampex guaranteed each and every tape to last 15 years, and about 36 years have passed... So far so good, though.


As a side note, it's funny to read the log sheets / recording sheets left 36 years ago by other broadcast operators and their notes on how they did what they did, where they inserted breaks, how they calibrated levels with the bars and tone reference and so on... I wonder if they ever thought that, 36 years later, someone would open the case, read their annotations and play the tape accordingly... :')
Old 18th July 2020, 01:35   #14  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: void
Posts: 2,560
then you should pick a more sophisticated upscaling algorithm than nnedi3; it's too "cheap" to be hard-encoded.
GAN-based models produce results much closer to native HD than nnedi3
Old 18th July 2020, 01:48   #15  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: void
Posts: 2,560
Quote:
Originally Posted by Dogway View Post
I was reading your finesharp approach and saw the deconvolution line but no matrix whatsoever. What kind of sharpening matrix and size do you use for that example (SD)?
iirc, it was deconvolution with a circular PSF of 1-pixel radius. it's not unsharp masking (convolution), so there's no kernel matrix to begin with. the deconvolution filter in that thread was probably FQSharp
Old 18th July 2020, 11:55   #16  |  Link
FranceBB
Broadcast Encoder
 
Join Date: Nov 2013
Location: Metropolitan City of Milan, Italy
Posts: 1,470
Quote:
Originally Posted by feisty2 View Post
then you should pick a more sophisticated upscaling algorithm than nnedi3, it's too "cheap" to be hard encoded.
GAN based models produce results much closer to native HD than nnedi3
Is there an Avisynth port? The whole workflow is based on Avisynth; it would be a pain to use VapourSynth. Does it work with VapourSource? Do I have to install VapourSynth to use VapourSource? If so, which version, and is it gonna be x64? Am I gonna lose performance?

If you look here https://forum.doom9.org/showthread.php?t=176655 every block is either a BAT, an Avisynth script or AutoIt code for the GUI. It would be a pain in the butt to integrate VapourSynth; besides, I would also have to ask Steinar and two other people, as I'm not the only developer and he's the main developer of the project. Lastly, I spent the years from 2006 to 2020 working on my own with Avisynth; I'm not really ready to move to a totally unknown scripting language like Python and VapourSynth.

Last edited by FranceBB; 18th July 2020 at 12:00.
Old 18th July 2020, 12:41   #17  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,449
Quote:
Originally Posted by FranceBB View Post
Is there an Avisynth port? The whole workflow is based on Avisynth; it would be a pain to use VapourSynth. Does it work with VapourSource? Do I have to install VapourSynth to use VapourSource? If so, which version, and is it gonna be x64? Am I gonna lose performance?
There is no avs port.

Why don't you just try out VS? I knew nothing about VS and switched completely to VS after using it for about 3 days. Python is very easy to understand (and to learn) even if you've never used it before. You just need to know the absolute basics to write/understand a script.

VapourSource should be able to load any VS script in avs. If you're using avs x64 you will also need VS x64. As an alternative you could use "avfs.exe" to create a fake AVI and open it in any program, or with AVISource in AVS 32 or 64. Yes, you need to install VS. Performance should not be affected, maybe a tiny bit but not by much.
Old 18th July 2020, 12:53   #18  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 141
Quote:
Originally Posted by FranceBB View Post
I know but mezzanine files are meant to be able to be played back on the fly by video servers, playout ports and so on, so you can't encode in whatever format you want, otherwise it's a mess. In broadcasting companies everything is encoded with the very same codec, resolution, sampling, frame rate, bit depth, primaries, color matrix and GOP.
You mean, you will HAVE TO encode in HD?

Quote:
Even modern VTR can be used to upscale but it's done poorly again as it has to be in real time. I would totally go for Spline64 + NNEDI over any crappy hardware upscale. (And... No, you can't air in SD except on SD channels).
Of course, but the ugliest thing about these upscales is not that they have to be done in real time. In 99% of cases it is bad handling of interlaced material. Bicubic, Lanczos and even Spline resizing would be fast enough to upscale in real time.

Quote:
The original physical supports will be kept, though, so if someone wants to re-encode them in the future, he will be able to digitalize them again.
If it's still possible to copy them without errors by then. We just worked on a series of which three different physically stored copies were corrupt, in almost each and every episode. We had to save it by mixing the parts that were OK; otherwise the whole series (5 seasons) would have been lost. There is no error-free copy left.
The material of the DigiBeta cassettes of two of the copies had begun to dissolve.
Maybe it's the very(!) last chance to save these programmes. Be a hero!
Old 18th July 2020, 13:07   #19  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,449
And btw, I tried MCBob (with the parameters from the link) and it looked very good https://forum.doom9.org/showthread.p...70#post1917370
I would compare it with QTGMC and see which one looks better.
Old 18th July 2020, 13:54   #20  |  Link
Boulder
Pig on the wing
 
Join Date: Mar 2002
Location: Hollola, Finland
Posts: 5,080
Quote:
Originally Posted by ChaosKing View Post
There is no avs port.

Why don't you just try out VS? I knew nothing about VS and switched completely to VS after using it for about 3 days. Python is very easy to understand (and to learn) even if you never used it before. You just need to know the absolute basics to write/understand a script.

Vapoursource should be able to load any VS script in avs. If you're using avs-64 you will need also VS-64. As an alternative you could use "avfs.exe" to create a fake avi and open it in any program or with avisource in AVS-32 or 64. Yes you need to install VS. Performance should not be affected, maybe a tiny bit but not by much.
MVTools in Avisynth is a bit ahead of the Vapoursynth port. I use MVTools quite a lot so I switched back to AVS+ from Vapoursynth some time ago.
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...