Old 14th February 2021, 17:53   #1  |  Link
smel
Registered User
 
Join Date: Mar 2013
Posts: 8
Remove Halo/Ghosting

I'm trying, so far unsuccessfully, to solve an edge haloing/ghosting problem (whatever it should be called) in several different videos.
Below is a sample from a DV tape, captured over FireWire and processed with: FFmpegSource2("clip.mov").QTGMC(Preset="placebo")

https://drive.google.com/file/d/1zNP...ew?usp=sharing



More samples, from the Betacam tape video, captured with an AJA device and processed with: FFmpegSource2("clip.mov").Crop(0,4,-0,-2).QTGMC(preset="fast").SRestore(frate=23.976)

https://drive.google.com/file/d/1InY...ew?usp=sharing


https://drive.google.com/file/d/10yr...ew?usp=sharing


As you can see, the Betacam video shows two different kinds of halo/ghosting.
I have tried many dehalo/deghost filters (too many to list) that I could find.
The last one I tried is the approach posted at: https://forum.doom9.org/showthread.p...32#post1177432.
Unfortunately the author did not disclose the whole script, and the part that was disclosed doesn't match the result posted in that thread.
Any suggestions are greatly appreciated.

Last edited by smel; 14th February 2021 at 18:16.
Old 16th February 2021, 22:10   #2  |  Link
Hotte
Registered User
 
Join Date: Oct 2014
Posts: 209
I'm not quite sure whether you are asking for the completion of that script; with that I cannot help.

However, I can tell you that DeHalo_alpha will most likely be very effective on all your examples except the 2nd Betacam sample.
Old 17th February 2021, 02:12   #3  |  Link
smel
Registered User
 
Join Date: Mar 2013
Posts: 8
DeHalo_alpha is way too aggressive and blurs the whole image, unlike the approach in the post I mentioned.

For now I'm using:

FFmpegSource2("clip.avi")
clip= last
dehalo = clip.dehalo_alpha(rx=3.5, ry=1.0, DarkStr=1.3, BrightStr=1.3).dehalo_alpha(rx=1.0, ry=3.0, DarkStr=1.6, BrightStr=1.6)
emask = clip.mt_edge("min/max",20,20,0,255,U=-128,V=-128).greyscale.mt_invert(U=2,V=2).Move(0,0,HDir=-0,VDir=-2,BordCol=$FF0000)
result=mt_merge(clip,dehalo,emask, U=2,V=2)

Somehow it works better than plain dehalo_alpha, but I'm still not very happy with the result, which is the reason for my post.

Last edited by smel; 17th February 2021 at 02:24.
Old 17th February 2021, 17:59   #4  |  Link
Hotte
Registered User
 
Join Date: Oct 2014
Posts: 209
I haven't seen any results, but I'm pretty sure you could improve the first two examples in a satisfying manner. What you might need is some tweaking.

First, always set rx=ry unless the halos appear only horizontally or only vertically.

A radius of 3.5 seems way too wide for what I have seen in your examples; it will probably blur your edges. Go down to 2.0 or less and magnify the area: tweak the radius so that the coverline hits the halo area as precisely as possible.

Set darkstr to 0 and start with the white halos. Pull up brightstr to determine how grey the coverline needs to be - only as much as necessary.

Use the lowsens parameter: 120 will discover more halos than the default of 50. If you discover too many, you risk blur.

Use highsens to reduce the loss of detail. I found that the distance between lowsens and highsens determines how much detail is preserved (the larger the distance, the more detail).

For inspiration: I am using this setup to remove fine white halos in some older FHD-footage:

Code:
dehalo_alpha(rx=2.4,ry=2.4,darkstr=0.0,brightstr=1.2,lowsens=150,highsens=500,ss=1.0)
The black halos in the same footage get this:
Code:
dehalo_alpha(rx=2.4,ry=2.4,darkstr=1.2,brightstr=0.0,lowsens=120,highsens=500,ss=1.0)
I am very satisfied with the results.
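If both kinds of halo show up in the same footage, the two calls can simply be chained - this is just my two setups applied one after the other, order to taste:
Code:
dehalo_alpha(rx=2.4,ry=2.4,darkstr=0.0,brightstr=1.2,lowsens=150,highsens=500,ss=1.0)
dehalo_alpha(rx=2.4,ry=2.4,darkstr=1.2,brightstr=0.0,lowsens=120,highsens=500,ss=1.0)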

Last edited by Hotte; 17th February 2021 at 18:08.
Old 18th February 2021, 23:28   #5  |  Link
smel
Registered User
 
Join Date: Mar 2013
Posts: 8
Responding to @Hotte:

Unfortunately I couldn't manage to get your settings to work.
The dark halos from my very last example (similar to the very first one) couldn't be removed either.
Attached Images

Last edited by smel; 18th February 2021 at 23:36.
Old 19th February 2021, 00:12   #6  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,795
You could also try Finedehalo2, maybe it helps or gives you new ideas.
Quote:
FineDehalo2, this function tries to remove 2nd order halos
http://avisynth.nl/index.php/FineDehalo
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database
Old 19th February 2021, 04:04   #7  |  Link
smel
Registered User
 
Join Date: Mar 2013
Posts: 8
"You could also try Finedehalo2."

The only use I have found for FineDehalo1/2 so far is mask generation: it lets me quickly generate and evaluate the mask quality and then use plain dehalo_alpha.
I couldn't achieve an acceptable result using FineDehalo1/2 alone, maybe because I don't fully understand its parameters, which in certain respects differ from dehalo_alpha's.
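For illustration, roughly what I mean - a minimal sketch with placeholder parameters rather than my actual values:
Code:
FFmpegSource2("clip.avi")
clip = last
# FineDehalo used only as a mask generator: showmask=1 returns its halo mask
halomask = clip.FineDehalo(rx=2.0, ry=2.0, showmask=1)
# plain dehalo_alpha applied everywhere, then limited to the masked halo regions
dehalo = clip.dehalo_alpha(rx=2.0, ry=2.0, darkstr=1.0, brightstr=1.0)
return mt_merge(clip, dehalo, halomask, luma=true)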
Old 21st February 2021, 06:48   #8  |  Link
geometer
Registered User
 
Join Date: Dec 2015
Posts: 59
Quote:
Originally Posted by smel
I'm trying, so far unsuccessfully, to solve an edge haloing/ghosting problem (whatever it should be called) in several different videos
....
Any suggestions are greatly appreciated.
I have _some_ experience with old DVDs that have flaws in the way the interleaving was treated.
I'm not a good AviSynth'er; I have just thrown together what I needed, in the most primitive and sloppy ways.
With luck, it may help.

Your 3 examples are all different, and I doubt there is one process that solves them all.
The third one (edge22) is called "ringing"; packaged processes for that have been available for a long time, but in this case it is so bad that I'm not sure a meaningful cure is possible.

Do you have all videos available in their interlaced original form? The one that needs the interlaced form most urgently is "edge11".

What I do is two levels of convolution.
One in the interlaced domain, then I simply use Yadif (you can change that of course, but looking at your examples it is moot anyway), then I use a different kernel after deinterlacing.
For each convolution there is a decision to make: whether it should be applied to the luma only, or also to the chroma, or whether a separate kernel is needed for the chroma.
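A very rough sketch of that chain (the kernels here are placeholders, only to show the structure; u=2, v=2 leaves the chroma untouched):
Code:
FFmpegSource2("source.mov")
# pass 1: horizontal kernel applied in the interlaced domain, luma only
mt_convolution(horizontal=" 1 2 4 2 1 ", vertical=" 1 ", u=2, v=2)
# simple deinterlacer between the two passes (mode=1 = bob)
Yadif(mode=1)
# pass 2: a different kernel after deinterlacing
mt_convolution(horizontal=" -1 2 6 2 -1 ", vertical=" 1 ", u=2, v=2)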

(De-ringing can be tried with convolution too, but it is difficult; actually it would take a complete mathematical process to compute it, and that's for a bigger lab. You have to "paint" the ideal result yourself, being a good artist at this; then the difference can be computed, the matrix inversion done and applied. Repeat and tune until happy.)

Post-processing is recommended. This means upsampling, filtering, downsampling. The result has 4/3 of the original resolution, e.g. 960x640 for an NTSC DVD.
This has to be experimented with. The point is to get rid of pixel artifacts and staircase patterns. Denoising is also recommended; I simply do it with the noise parameter in the encoder, but I can also throw in FluxSmooth.
For post-processing I use Avidemux, because it is very simple to handle, comes with encoders, and has multithreading, so it can do quite a bit of processing.
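If one wanted to keep that step inside AviSynth instead of Avidemux, the same idea could look roughly like this (the filter choices are placeholders, not what I actually run):
Code:
# upsample, filter, then downsample to ~4/3 of the source (960x640 for an NTSC DVD)
Spline36Resize(1440, 960)           # oversample for the filtering stage
FluxSmoothT(temporal_threshold=7)   # light temporal denoise
Spline36Resize(960, 640)            # final size, 4/3 of 720x480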

I'm not willing to wait for a QTGMC process to complete; the results are good enough for me without it. But you can try QTGMC after processing with the convolution - you can switch off a lot of its options then, especially the deinterlacer.
A bit more difficult would be to replace the deinterlacer between the two convolutions with QTGMC; again, most options except deinterlacing would have to be switched off.

The kernels need to be adapted exactly to the errors of the video; then they can remove quite a lot.
Much of what has happened to the picture can be modeled with convolution (though not the nonlinear stuff like signal clipping), so in theory there is an inverse matrix that can roll back the unwanted changes in the signal path - of course still with some artifacts and never completely.

With some typical halos the process is very powerful. But the existence of the halo makes further sharpening impractical, because the transitions/border lines won't really come back. You just gloss over the halo; restoration of the actual detail that existed behind the halo is rather poor.

This was a short overview of what I do with similar video data.

If you are interested, I'll be back in a day or two..

Last edited by geometer; 21st February 2021 at 07:23.
Old 21st February 2021, 13:48   #9  |  Link
Emulgator
Big Bit Savings Now !
 
 
Join Date: Feb 2007
Location: close to the wall
Posts: 1,531
For an upcoming project I thought so too.
Determine the shape of the unwanted sharpener's step response, then find the horizontal convolution that can be used to subtract that unwanted response.
From the halos I came across I guessed the convolution matrix would need to be wider than the usual 5 samples, but so far I haven't had time to try my way through, and my FIR filtering lessons were too long ago...
This is what .mp4guy came up with, and I will have to try my way through this too, matching the parameters.
Code:
# 13-tap horizontal kernel from .mp4guy; vertical pass left as identity, chroma processed as well (u=3, v=3)
string1 = string(" 0 -20 28 -32 36 36 128 128 72 -56 12 12 -4 ")
mt_convolution(horizontal=" "+string(string1)+" ", vertical=" 1 ", u=3, v=3)
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain)
"Data reduction ? Yep, Sir. We're that issue working on. Synce invntoin uf lingöage..."

Last edited by Emulgator; 26th February 2021 at 01:27.
Old 22nd February 2021, 20:14   #10  |  Link
smel
Registered User
 
Join Date: Mar 2013
Posts: 8
It would be nice if somebody could show an actual result of applying these theories to the 3rd picture posted in #1 - "...The third one (edge22) is called 'ringing'..."
Old 23rd February 2021, 09:32   #11  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,041
Quote:
Originally Posted by smel
from the Betacam tape video, captured with an AJA device and processed with: FFmpegSource2("clip.mov").Crop(0,4,-0,-2).QTGMC(preset="fast").SRestore(frate=23.976)
I think it is better to provide sources that are not distorted by extra complicated processing such as QTGMC when asking how to fix the immediate output of a large user-defined processing chain that has already been applied.

Maybe start with the clean captured source.
Old 23rd February 2021, 17:08   #12  |  Link
smel
Registered User
 
Join Date: Mar 2013
Posts: 8
Here is an untouched captured video segment with noticeable ringing.
https://drive.google.com/file/d/12eJ...ew?usp=sharing
Old 23rd February 2021, 19:25   #13  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,041
That is sad. It looks like the ringing starts somewhere in the analog Betacam machine, because it looks like 'analog', forward-in-time ringing. It is also non-linear - it starts only at high-amplitude signal transitions. Maybe try another Betacam machine for playing back this tape? I don't think a specially designed AviSynth filter for such long, non-linear and asymmetric ringing distortion exists yet.
Old 24th February 2021, 07:15   #14  |  Link
geometer
Registered User
 
Join Date: Dec 2015
Posts: 59
Quote:
Originally Posted by smel
Here is an untouched captured video segment with noticeable ringing.
https://drive.google.com/file/d/12eJ...ew?usp=sharing
Can you post 5 seconds from the other two?

What is the correct input statement for reading this Apple format?
?? ffmpg(x,y,z,...)

(Sorry, the only thing I ever do is MPEG2Source(), so I have no clue.)

It has compression artifacts from the capturing anyway, which adds to the quagmire.
To process the video in a meaningful way, it will take a lossless capture from the tape.

Some simplified theory for household common sense:
Also, while there are 486/512 lines correctly encoded (so the deinterlace works), the corresponding analog x-dimension is not clear, i.e. how many pixels must the capture have on the horizontal scale to represent the width of the ringing as an integer? (The ringing artifact must sit at a distance of precisely 2 or 4 px from the original, so what is the dimension of the whole screen in this direction?)
There are known defaults for all types of old video tape formats. Hopefully they are truthful to the internal "Nyquist/Gibbs" edge of the device, expressed as a frequency in the analog domain, because this is the root of the ringing. Something like 5 MHz for a good device; then we must figure out how many cycles of this max f can happen during one cycle of the line scan frequency, then we have to subtract the borders and gaps, etc. Then we know how many pixels this would correspond to in the digital domain.
We need to tweak anyway, because of manufacturing differences; the ringing frequency depends on component tolerances.
One more factor: if it is PAL, it already has a digital chopping for the delay line that prepares the color processing. But the actual limit is lower and can influence the ringing frequency.
720 does not look correct to me. Perhaps it is something like 640 or even less? You might also take the double value for higher precision; we end up with resizing anyway.
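A rough worked example of that estimate (my own assumed numbers, not measured from this tape): a ~5 MHz luma limit over the ~52 µs active part of a PAL line gives 5e6 × 52e-6 ≈ 260 cycles, i.e. roughly 520 Nyquist samples across the active width; NTSC (~52.7 µs active) lands in the same ballpark. So a 720-sample capture slightly oversamples the analog bandwidth, and resampling the width so the ringing period comes out near an integer number of pixels could look like this:
Code:
# hypothetical working width: ~2x the estimated ~520 analog samples, tune by eye
src  = FFmpegSource2("source.mov")
wide = src.Spline64Resize(1040, src.Height())
# ... build and apply the de-ringing kernel at this width ...
return wide.Spline64Resize(720, src.Height())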

Last edited by geometer; 24th February 2021 at 07:50.
Old 25th February 2021, 10:21   #15  |  Link
smel
Registered User
 
Join Date: Mar 2013
Posts: 8
"can you post 5 seconds from the other 2" - as I've mentioned above- I've manged to resolve light/dark haloes using the script in the post #3 (and as parts - @Hotte suggestions #4).
So, there is no need to post these clips again.
"it has compression artifacts from the capturing anyway" - video has been captured from the Betacam tape, using Betacam SP and AJA component" - in ANALOG (there is no such things like "compression artifacts" for such configuration exclude final ProRes output compression, that widely recognized as "professional" format, that couldn't be blamed (IMHO )
"what is the correct input statement for reading this apple format?" - as stated in my post #3:
FFmpegSource2("clip.avi") - just replace "clip.avi" to "source.mov". Alternatively you can use LSMASHVideoSource("source.mov") or LWLibavVideoSource("source.mov")

Last edited by smel; 25th February 2021 at 10:42.
Old 27th February 2021, 16:12   #16  |  Link
geometer
Registered User
 
Join Date: Dec 2015
Posts: 59
Was it a TV capture? Then the ringing can also come from "antenna ghosting" (i.e. analog TV signal reflections from houses, hills etc.; what do you call it in English?).

Of course I'm not saying the analog Betacam produced JPEG artifacts, but the candidates are the storage format used after capture, or even the TV side, if it used digital compression in the playback system.

Throwback: notoriously, TV stations rarely used digital compression better than S-VHS resolution.
This was Europe in the early 90s, but they received much content from Hollywood, and home S-VHS recordings came very close to the cable broadcasts of such material, so I'd say that was our basic resolution for movies, no matter what the digital format was, and fast movement in the scenes was flawed with artifacts - only a few movies on cable had full resolution on par with live sports events. In other words, recording sports on S-VHS meant a remarkable loss of quality, but not recording movies, which came with lower resolution anyway. So that's the experience I come from when looking at your example.



I don't come empty-handed, but this is only an intermediate check.
https://drive.google.com/file/d/1tYl...ew?usp=sharing

(Aspect ratio and pulldown, i.e. reversing the 24fps > 30fps frame doublings, were not checked. Colors are from the default script I use for my NTSC DVD conversions. The deinterlacer is Yadif, which can be replaced by any simple deinterlacer, but not by QTGMC; however, the raw pattern-aware deinterlacer within QTGMC might find some way into the process if somebody works it out.)

So far I have thrown together and aligned what I needed to start.
There is treatment for some halo issues, especially where there is a fat black line above a white horizontal object; watch the top of the bright shoulder.
I did not yet address the ringing or the vertical ghosting; so far the 5x5 convolution is not wide enough, but in time I might invoke something more powerful.
With this low resolution from the source, the quagmire starts a big 5 px away from the original signal.
I may be back if I find something.

With what you see here, a 90 min movie would convert in 2-3 hours. YMMV.

Last edited by geometer; 27th February 2021 at 16:53.
Old 27th February 2021, 16:15   #17  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Quote:
Was it a TV capture? Then the ringing can also come from "antenna ghosting" (i.e. analog TV signal reflections from houses, hills etc.; what do you call it in English?).
Maybe Reflection.

EDIT: you already used the word reflection, so maybe not. Just ignore this post.
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???
Old 27th February 2021, 16:23   #18  |  Link
geometer
Registered User
 
Join Date: Dec 2015
Posts: 59
Haha, well, in German this was called "Geisterbilder", literally "ghost pictures", because given the terrestrial situation it could appear at any distance from the original shape, i.e. the real actor. If the hill had the right distance, you could see a shadow of that person always on the right side, perhaps a third of the screen width away from the original. But if it was the neighbour's house reflecting the RF signal, it appeared at the distance of a typical ringing artifact, and of course there could be multiple instances, and the intensity could change wildly with time and contrast.
If we had to cure something like this, the process would have to be highly adaptive...

Addendum:
Please note that the process shown is entirely linear, i.e. it looks organic, and is suitable for analog content (less so for anime).

Last edited by geometer; 27th February 2021 at 16:38.