Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 9th January 2017, 12:11   #1  |  Link
huguru
How to remove white glow?

Sample images:


Sample clip

I tried using various dehalo filters but couldn't improve it.
Old 9th January 2017, 12:49   #2  |  Link
feisty2
Quote:
I tried using various dehalo filters but couldn't improve it.
because that's not a halo.
try something like WarpSharp.
Old 9th January 2017, 13:05   #3  |  Link
huguru
Quote:
Originally Posted by feisty2 View Post
cuz that's not halo
try something like warpsharp
Thanks for the reply. To be honest, I'd rather not use WarpSharp; I was hoping for a solution without it.
Old 9th January 2017, 22:38   #4  |  Link
GMJCZP
Try YLevels.
Old 10th January 2017, 05:35   #5  |  Link
lansing
It's a special effect, not an artifact, usually used to depict a light source seen from a long distance.
Old 12th January 2017, 20:12   #6  |  Link
Bloax
It's an effect - https://en.wikipedia.org/wiki/Bloom_%28shader_effect%29 - not an artifact.

Not sure if a bloom reversal is possible, since, much like field blending, bloom contaminates the surrounding data, making the areas you want to make "healthy" again look like part of the problem.
If it occurs near an edge you may have a hint of whether bloom has occurred, but if it sits in the middle of flat colors (like the bloomed highlights inside that wine glass) it's completely hopeless, unless you want to de-gradient every part of the picture that dares to have bright gradients.
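To see why reversal is so hard, here is a minimal numpy sketch of how a bloom pass is typically generated (threshold the highlights, blur them, add them back, clip). The function names and the box blur standing in for a Gaussian are my own illustration, not any renderer's actual pipeline; the final clip is the killer, because once a highlight saturates, the information needed for an exact inverse is gone.

```python
import numpy as np

def box_blur(img, radius=2):
    # plain 2-D box average with edge padding; a real bloom pass would use a Gaussian
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def bloom(img, threshold=0.7, strength=0.5):
    # classic bloom: blur only the bright parts, add them back, then clip to [0, 1]
    highlights = np.where(img > threshold, img, 0.0)
    return np.clip(img + strength * box_blur(highlights), 0.0, 1.0)
```

Undoing this would mean subtracting `strength * box_blur(highlights)` again, but `highlights` can no longer be recovered from the clipped result, which is exactly the contamination problem described above.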
Old 17th January 2017, 10:50   #7  |  Link
jmac698
No, it could be removed. In real photography there's something called glare, caused by the lens, and it can be removed with a convolution like anything else. In this case it's an effect, yes, but I think it's hand-painted, so there's no way to remove it perfectly. If it were a single click in a paint tool, there would be a uniform brightening effect around the edges of the masked object, but if you look carefully, the glow doesn't exactly follow the outline; it's missing an indent in the bottom left.

Algorithmically, removing a glow should be much like removing a logo.

p.s. I've read a paper that did effective glare removal, revealing items behind a window that were lost in the glare. The only problem is that the result comes out a bit noisy, since the glare reduces contrast. You can also see through fog with image processing, and haze-removal features have finally appeared in high-end raw processing programs, though the research has been around for years.

http://graphics.stanford.edu/papers/...re_removal.pdf
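As a toy illustration of that idea (not the paper's actual method): if glare is modeled as convolution of the true image with a glare-spread function, a regularized inverse filter in the frequency domain can undo most of it. The `eps` term is what trades residual glare against the noise amplification mentioned above; all names here are my own.

```python
import numpy as np

def convolve_fft(img, psf):
    # circular convolution via FFT; psf has the same shape as img, centered at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def deconvolve(img, psf, eps=1e-3):
    # regularized (Wiener-style) inverse filter: eps > 0 keeps the division
    # stable where the psf spectrum is small, at the cost of some noise
    H = np.fft.fft2(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))
```

With a well-conditioned glare-spread function the recovery is near-exact; with a real, heavy-tailed one, `eps` must be raised and the noisy result described in the paper appears.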

Old 19th January 2017, 00:28   #8  |  Link
Katie Boundary
Quote:
Originally Posted by lansing View Post
It was a special effect, not artifact, usually used to reflect a light source from a long distance
Quote:
Originally Posted by Bloax View Post
It's an effect - https://en.wikipedia.org/wiki/Bloom_%28shader_effect%29 - not an artifact.

Actually, I think the OP is referring to the extremely subtle halation-like effect surrounding all of the black lines in the image. It's kind of like Bloom, except that it treats ALL of the characters and objects as very weak light sources.
Old 19th January 2017, 03:41   #9  |  Link
geometer
besides the "painted halo" mentioned in the postings above,
there is another issue i can see that makes the picture look awkward, and it also exists in videos shot by cameras.
perhaps the OP is referring to that as well; at the least, it makes things a lot worse.

in the following, i will present a two-step treatment, plus some practical hints on tuning a convolution.
at the end, there is a simpler example script to get started with.

------------
so, if i am right, it has to do with incorrect treatment of interlacing.
symptom: vertical and horizontal sharpness are very different, and the horizontal lines (and the horizontal sectors of curves) carry a certain blur and a halo.

in our specimen, the vertical and horizontal halos are visibly different.
why there is this 3-4 px halo, and why it affects the chroma more than the luma, is another question. if it is not deliberate, something should be done to restore chroma sharpness in a dedicated path of the script. the repair of the bad interlace treatment should come first.

hypothesis: there was a ("producer panic") sharpening applied after the interlaced video had already been created. with cameras this is a known pattern: in former times, cams output an already-interlaced signal, and when it was not sharp enough, a sharpener tool was thrown on top. on today's progressive computer playback this effectively ruins vertical (y) resolution and creates a horrible double-width halo with a plus/minus effect (not exactly "ringing").
i am not sure how anime is produced; i'm ignorant about how they do the interlacing. but it might well be the same mistake, somewhere on the long journey from the paintings to the TV screen: an attempt at sharpening within the interlaced format, neglecting its particularities.


my attempt at a repair or mitigation:

a) interlaced sharpening error
i use a convolution with a 5x5 kernel whose nonzero coefficients only influence the y axis/spectrum.
we have to start the processing from the interlaced format (it must be available, and should have very few MPEG artifacts).
here we need a little blur that will undo the wrongful sharpening.
we must find out whether the sharpener had a reach of only one px up and down (which makes 2+2+center pixel after deinterlacing); otherwise things become more complex, and we need to apply two kernels with longer tails, one upward and one downward.

Code:
# at this point in the script we assume the video has already been loaded from an AVI or MPEG-2 file, and comes in interlaced format
# no other processing so far

CC=last
SeparateFields 
ConvertToRGB32(matrix="PC.709",interlaced=false,chromaresample="lanczos4")

# below are some example kernels. they are rather brute, so for the anime problem
# the center value should probably be a lot higher relative to the tail.

# asymmetric (center value = position #5):
C60 =  "0 -0   0  0 112   0  0  0  0 -5   0  0  -0  -0  30   0  0  0  0  -3   0  0  0  0  8" #C
C61 =  "0 -0   0  0 112   0  0  0  0 -6   0  0  -0  -0  10   0  0  0  0  -3   0  0  0  0  0" #C
# symmetric (center value = position #13):
C62 =  "0  0   0  0   0    0  0  0  0  0    0  0 112  0   0    0  0 25 0 0   0  0  0  0  0" #C
C64 =  "0  0   3  0   0    0  0  8  0  0    0  0 140  0   0    0  0 15 0 0   0  0  3  0  0" #C
C65 =  "0  0  15  0   0    0  0 27  0  0    0  0 141  0   0    0  0 37 0 0   0  0 27  0  0" #C

GeneralConvolution(0, C65) # select kernel to use, specify its variable name here; example I
# GeneralConvolution(0, C62) # example II

# these kernels treat the error above and below the cursor differently: often an old, not horribly
# expensive camera output a TV-ish signal with all its "smearing and bleeding" effects, which follow the direction of the "electron beam".
# when the anime was captured from TV, or is an older production, the same principle might apply.
# thus, the error should be treated as asymmetric.

ConvertToYV12(matrix="PC.709",interlaced=false,chromaresample="lanczos4")
Weave
MergeChroma(CC)  # here we assume the awkward sharpening was applied to the luma signal only.
# caution - omit this line if that is not the case. the anime might not need the MergeChroma.

#only after this is done, we deinterlace:
Load_Stdcall_plugin("... ... ...\plugins\yadif.dll") ## please insert correct dll path here
Yadif(mode=1,order=1)
# for this example and a fast proof of concept, Yadif will do. the complex deinterlacers change the signal so much
# that meaningful tuning of the kernels would be impossible.
so, here we should end up with a healthier, although quite blurred, signal.
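the effect of such a column-only kernel can be checked outside Avisynth. below is a small numpy sketch of a GeneralConvolution-style pass (the weights applied directly over a 5x5 neighborhood and divided by their sum; i believe that matches the filter's auto-normalization, but treat it as an assumption, not a verified port). a kernel whose nonzero coefficients sit in a single column leaves vertical edges alone and only touches horizontal ones, which is the point of step a):

```python
import numpy as np

def general_convolution5(img, coeffs, bias=0.0):
    # coeffs: 25 numbers, row-major 5x5, applied as direct weights over the
    # neighborhood (no kernel flip) and normalized by their sum
    k = np.asarray(coeffs, dtype=float).reshape(5, 5)
    padded = np.pad(img.astype(float), 2, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(5):
        for dx in range(5):
            out += k[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(out / k.sum() + bias, 0, 255)
```

with a C62-like kernel (112 at the center, 25 one row below), a vertical edge passes through unchanged while a horizontal edge picks up the intended vertical blur.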

b) next, a treatment of the remaining blur and halo with another convolution should become possible, though the result is still a bit anisotropic (y <> x).
note that the following example was used on a very old tape transfer of a video.

Code:
# we continue the above script like this:

AddBorders(4,4,4,8,$7F7F7F)
 
ConvertToRGB32(matrix="PC.709",interlaced=false,chromaresample="lanczos4")
# (unfortunately we need to deinterlace in the luma/chroma format while the convolution works in RGB, hence the double conversions.)

# the kernels in this example are wild guesses that have shown some merit on one particular video.
# the C40/C50 kernel series is asymmetric and will shift the picture, therefore the borders and cropping.
# if this has to become even more complex, because of the long tails of a blur,
# then we must apply a second, reversed version of the kernel that treats the other direction.
# with anime, this might become inevitable.

C24a= " 0 0  5 0 -0   0 -1 -3 -1 0   4 -3 99 -3 4   0 -1 -3 -1 0    0 0  5 0  0"  
C24b= " 0 0 -2 0 -0   0 -1 -3 -1 0   4 -3 44 -3 4   0 -1 -3 -1 0    0 0 -2 0  0"  
C40h ="5 -11 29 -25 130  0  0 -0 -3 -30   0  0 -5 -0  40   0  0  0  0 -8   0  0  0  0  3" 
C50g = "2 -8  14 -7 121  0  0 -0 -3 -10   0  0 -4 -0   1   0  0  0  0 -1   0  0  0  0  7" 

# below is a transposed pair of kernels; it was used together with C60, and does not end up shifting the picture,
# because the two shifts are opposite.
# the 2 kernels are similar but not identical, as they attempt to treat an asymmetric blur+halo in the y (interlace) direction.
# these are only half-baked attempts to show the principle of a more complex dehalo with a wider range.
# the result will practically cover a 10x10 area. for extreme results, the positive coefficients (except the big one)
# might be reduced. think of them as "linearizing".
C51  = "7 -6 17 -13 240   0  0 -0 -3 -10   0  0 -4 -0  20   0  0  0  0 -30   0   0   0  0 10"  #C5.1  w/ pre-deint C60
C51R = "15 0  0  0  0   -35  0  0  0  0   60  0 -4 -0   0  -13 -3  0  0  0   240 -13 17 -6 7"  #C5.1  w/ pre-deint C60


GeneralConvolution(0, C50g) # select kernel to use, specify its variable name here; example I
GeneralConvolution(0, C24b) # more sharpening; example I
# GeneralConvolution(0, C40h) # example II

Crop(0,6,-2,-2) # this is not complete, as there is postprocessing which I do in AviDemux including another cropping.

ConvertToYV12(matrix="PC.709",interlaced=false,chromaresample="lanczos4")

CShft(last,-0.5,0.0) # possibly there is a chroma shift error; here we can apply a sub-pixel-precise correction
#--------------------------------------

function CShft(clip In, float X, float Y) {
# X = right-shift; Y = down-shift
w = In.Width()
h = In.Height()
Tmp = In.BlackmanResize(w, h, -X, -Y, w-X, h-Y, 5) 
In = MergeChroma(In, Tmp)
return In}
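the same sub-pixel idea, sketched in numpy for a single row: the signal is resampled at fractionally offset positions. plain linear interpolation here stands in for the windowed-sinc interpolation that BlackmanResize performs; it is only meant to show what a fractional chroma shift does.

```python
import numpy as np

def subpixel_shift(row, x):
    # shift a 1-D signal to the right by x pixels (x may be fractional)
    # using linear interpolation, clamping at the borders
    n = len(row)
    pos = np.arange(n) - x          # sample the input at i - x
    i0 = np.clip(np.floor(pos).astype(int), 0, n - 1)
    i1 = np.clip(i0 + 1, 0, n - 1)
    frac = pos - np.floor(pos)
    return (1.0 - frac) * row[i0] + frac * row[i1]
```

a shift of 0.5 px splits a single-pixel impulse evenly between two neighbors, which is why a windowed-sinc kernel (sharper than this linear one) is preferable for chroma.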
what i do then is an upsample from 720x480 to 960x640, plus some further sharpening and "cheap" denoising there.
it is notable that sharpening in the upsampled format creates a much finer spectrum, and gives the impression of a higher resolution.
actually i went so far as to upsample to 1600x960, throw 2 full sharpeners plus one chroma sharpener (hardwired in AviDemux) on that, and downsample to 960x640.

an asymmetric kernel prints as (here: C50g)
Code:
2 -8  14 -7 121  
0  0 -0  -3 -10   
0  0 -4  -0   1   
0  0  0   0  -1   
0  0  0   0   7
with the cursor pixel (= the original signal) in the top-right corner.

an asymmetric kernel (also called an impulse) is legitimate when the coefficients alternate in sign (+/-). for anime, when the coefficients aren't very low, two transposed kernels might become mandatory;
in that case, double the factor between the original (cursor) coefficient and the tails. an example of how to transpose is shown in C51R.
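the layout and the transposition can be made concrete with a couple of lines of numpy (my own helpers, not part of any filter): the coefficient string is read row-major into a 5x5 matrix, and the "transposed" companion kernel is a 180-degree flip of it. note that the C51/C51R pair above is only approximately such a flip; the post deliberately keeps the two slightly different.

```python
import numpy as np

def parse_kernel(s):
    # read a GeneralConvolution coefficient string row-major into a 5x5 matrix
    return np.array([float(t) for t in s.split()]).reshape(5, 5)

def reversed_kernel(k):
    # 180-degree flip: the companion kernel that treats the opposite direction
    return k[::-1, ::-1]
```

applied to C50g, the big 121 coefficient moves from the top-right corner to the bottom-left, i.e. the blur compensation now reaches in the other direction.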

as for the remaining wide-range blur: in this example the positions of the (-8) and (-1) coefficients (positions #2 and #20) are the most relevant ones.
we might have to push them to more negative values (i.e. increase the negative/compensating vector).
the (7) position (the last one) is very relevant to the artifacts (banding etc.) that this method creates, and should be tuned to smooth out the "unsharpening" we did in step a) in the first half of the script.

actually, for the OP's video i would start with example II, as it deals with a more subtle error. example I produced a brutal, totally black halo above every sharp white horizontal object.

----
as an entry experiment, you might try the simpler script below, with the kernel
C40k2="3 -10 25 -28 200 0 0 2 -4 -27 0 0 -5 5 23 0 0 0 -1 -7 0 0 0 0 3"
it will have a very soft effect, but probably in the right direction.

Code:
# ... ...
# video has been loaded and deinterlaced at this point.

AddBorders(4,4,4,8,$7F7F7F)
ConvertToRGB32(matrix="PC.709",interlaced=false,chromaresample="lanczos4")
C40k2="3 -10 25 -28 230  0  0  2 -4 -27   0  0 -5  5  23   0  0  0 -1 -7   0  0  0  0  3" 
# reduce the (absolute value of) coefficient #4 when the result looks oversharpened and shows
# more of a narrow halo; push #2 more negative when the anti-halo effect is not wide enough.
# coefficient #3 is the "linearizing" factor that should reduce the finer ringing patterns.
GeneralConvolution(0, C40k2) 
ConvertToYV12(matrix="PC.709",interlaced=false,chromaresample="lanczos4")
CShft(last,-0.6,-0.3) # fine-tune chroma shift; these values are ad hoc and come from one particular video.
#   the post-filtering in AviDemux needs an up-shift on chroma of 0.5 - 1.5 px.
Crop(0,6,-2,-2)

#--------------------------- helper funcs ---------------------------------------
function CShft(clip In, float X, float Y) {
w = In.Width()
h = In.Height()
Tmp = In.BlackmanResize(w, h, -X, -Y, w-X, h-Y, 5) 
In = MergeChroma(In, Tmp)
return In}
in general, when this is not done with proper math (harvesting the pixel values from a typical 9x9 area of the picture that holds an edge, as input to reverse-convolution formulas), we will end up with quite some halo reduction, but also with a fine, wide, low-intensity ringing pattern, which may be tolerable depending on the screen and the distance from which we watch the video.

on another note, the color conversion with PC.709 used here works heuristically well together with the following filtering in AviDemux:
Code:
adm.addVideoFilter("contrast", "coef=0.970000", "offset=40", "doLuma=False", "doChromaU=False", "doChromaV=True")
adm.addVideoFilter("colorYuv", "y_gain=0.000000", "y_bright=0.000000", "y_gamma=3.400000", "y_contrast=10.400000"
, "u_gain=0.000000", "u_bright=0.000000", "u_gamma=0.000000", "u_contrast=11.500000", "v_gain=0.000000", "v_bright=0.000000"
, "v_gamma=0.000000", "v_contrast=0.000000", "matrix=0", "levels=0", "opt=False", "colorbars=0", "analyze=1", "autowhite=False", "autogain=False")

Old 19th January 2017, 08:03   #10  |  Link
geometer
on closer examination,
it turns out that the OP's example is not interlaced, so part a) does not apply here. some difference between x and y sharpness still exists, including a small y halo.

the wide, bright blur-halo is probably a deliberate effect.
the attempt to cancel it with a counter-effect works out such that as soon as the visible halos are reduced, the opposite halo pops up everywhere else.
it would take a nonlinear blend between the processed and unprocessed streams, and defining the detection criteria would be difficult.

I came up with something like this:
Code:
C51  = " 5 -40 25 -13 140   0  0 -0 -3 -10   0  0 -4 -0  20    0  0  0  0 -30     0   0   0  0 10"   
C51R = "15 0  0  0  0     -35  0  0  0  0   60  0 -4 -0  0   -13 -3  0  0  0    140 -3 30 -60  7"  
GeneralConvolution(0, C51R)
GeneralConvolution(0, C51)
then we have a shorter halo everywhere. this might be curable with a dehalo filter now.
but the black outlines have also become much thicker, and the style of the drawing has changed.
as a first guess, a 3x3 median blur will mitigate the halos. a more intelligent dehalo might take us a step further.
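as a quick check of that first guess, here is a 3x3 median in numpy (a generic sketch, not the AviDemux implementation): a median is good at deleting structures only one pixel wide, which is exactly what the residual halo lines are after the convolutions.

```python
import numpy as np

def median3x3(img):
    # 3x3 median filter with replicate padding at the borders
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)
```

a one-pixel-wide bright line (a typical leftover halo) vanishes completely, while structures at least two pixels thick survive, which is why the drawing's outlines are preserved.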

sorry that it hasn't actually worked so far, but it makes for a quite interesting experiment.


but a simple
C40k1="3 -10 25 -28 132 0 0 2 -4 -27 0 0 -5 5 23 0 0 0 -1 -7 0 0 0 0 3"
and nothing else already looks quite nice. it helps somewhat with the other little issues, though not with the effect halo.
to me, it seems to give a better spectral balance.


the picture is a primitive screenshot of the AviDemux filter preview and has the 3x3 median applied on luma. what it shows is that the halo is now smaller and similar everywhere, so a nonlinear (adaptive) dehalo filter might catch it more easily in this form. so it might be possible to fine-tune the convolution for the best performance of the dehalo. but we also see that the "glowing" effect is not constant at all.
Old 20th January 2017, 08:53   #11  |  Link
geometer
the actual halo issues can be reduced like this:
Code:
AviSource("YV12.avi")
ConvertToRGB32(matrix="PC.601",interlaced=false,chromaresample="lanczos4")
C24c   ="0   0  2  0  0    0 -5 -15 -5 0    5 -5 170 -5 5   0 -5 -15 -5 0   0  0  2  0  0" 
C40h1  ="5 -11 29 -25 177  0  0 -0 -3 -50   0  0 -5 -0 40   0  0  0  0 -5   0  0  0  0  5" 
C40h1RY="5   0  0  0  0   -5  0  0  0  0   80  0 -5  0  0 -50 -3  0  0  0 177 -25 29 -11 5"


GeneralConvolution(0, C24c)
GeneralConvolution(0, C40h1)
GeneralConvolution(0, C40h1RY)

ConvertToYV12(matrix="PC.601",interlaced=false,chromaresample="lanczos4")
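one remark on chaining three GeneralConvolution calls: because convolution is associative, the chain is equivalent to a single, wider kernel (two 5x5 passes cover 9x9, three cover 13x13), which is also why the kernels earlier were said to practically cover a 10x10 area. the sketch below composes two kernels into that equivalent one; it ignores the per-pass normalization and clipping the real filter applies, so treat it as an idealization.

```python
import numpy as np

def compose_kernels(k1, k2):
    # full 2-D convolution of the kernels themselves: applying k1 then k2
    # to an image equals a single pass with this combined (larger) kernel
    h1, w1 = k1.shape
    h2, w2 = k2.shape
    out = np.zeros((h1 + h2 - 1, w1 + w2 - 1))
    for y1 in range(h1):
        for x1 in range(w1):
            for y2 in range(h2):
                for x2 in range(w2):
                    out[y1 + y2, x1 + x2] += k1[y1, x1] * k2[y2, x2]
    return out
```

inspecting the composed kernel is a convenient way to see the total footprint and net gain of a chain before running it on video.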
----------------
in the screenshot i used the following color settings; they should be easy to translate to the Avisynth equivalents:
Code:
adm.addVideoFilter("contrast", "coef=0.970000", "offset=40", "doLuma=False", "doChromaU=False", "doChromaV=True")

adm.addVideoFilter("colorYuv", "y_gain=0.000000", "y_bright=0.000000", "y_gamma=5.400000"
, "y_contrast=11.000000", "u_gain=0.000000", "u_bright=0.000000", "u_gamma=0.000000"
, "u_contrast=11.000000", "v_gain=0.000000", "v_bright=0.000000"
, "v_gamma=0.000000", "v_contrast=0.000000", "matrix=0", "levels=0", "opt=False", "colorbars=0", "analyze=1", "autowhite=False", "autogain=False")
(the actual processed video looks better than the auto-converted jpg.)
i found some artifacts though; they come from MPEG artifacts in the source video, and the convolution really ought to run at higher precision than 8 bit.