Old 21st September 2022, 10:40   #1  |  Link
anton_foy
Runtime question

Probably a stupid question, but since I cannot find out how, for example, TemporalSoften is coded to handle scenechange, I thought I would ask.

I guess it must use some kind of AverageLuma or similar, with a threshold and a conditional call to ignore the frames above the given threshold?
It seems that the scenechange parameter does not slow it down a lot, yet using ScriptClip is pretty slow?

This 'cheat' calls each frame only once, so it is fast, while the scenechange threshold in TemporalSoften has to check every frame?
So I was thinking: if scenechange is coded to be that fast, maybe the same principle could be used, but with the filter instead adjusting its strength (tradius, lumathreshold, chromathreshold etc.)? Just an idea.
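For reference, this is roughly the kind of per-frame conditional I mean in ScriptClip form; the threshold of 15.0 and the TemporalSoften settings are only placeholder values:

Code:
# placeholder threshold and settings: skip the softening whenever the next
# frame differs too much from the current one (a likely scene change)
ScriptClip("""YDifferenceToNext() > 15.0 ? last : TemporalSoften(3, 4, 8, mode=2)""")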
Old 21st September 2022, 12:00   #2  |  Link
StainlessS
TemporalSoften:- http://avisynth.nl/index.php/Soften
Quote:
TemporalSoften(clip clip, int radius, int luma_threshold, int chroma_threshold [, int scenechange] [, int mode ] )

Blends corresponding pixels in neighboring frames. All frames no more than radius away are examined. Blending occurs only if corresponding pixels are within luma_threshold or chroma_threshold, as explained below.

AVS+ arguments are autoscaling – they are always 0-255 at all bit depths.

clip clip =

Source clip. Supported color formats: all except RGB24 and RGB48

int radius =

Filter radius. All frames no more than radius from the current frame are examined.
(for radius=2, FIVE frames are processed: the current frame, two ahead and two behind)
Range 0-7; radius=0 results in no smoothing.

int luma_threshold =

When smoothing a given luma pixel Y, the corresponding pixel in neighboring frame Yn
is ignored where Yn differs from Y by more than luma_threshold.

int chroma_threshold =

When smoothing a given chroma pixel C, the corresponding pixel in neighboring frame Cn
is ignored where Cn differs from C by more than chroma_threshold.

Good starting values are around 1 or 2 times luma_threshold.

int scenechange = 0

Defines the maximum average pixel change between frames; set properly, this will avoid blending across scene changes.

Good values are between 5 and 30, somewhat higher than luma_threshold.
scenechange not supported in RGB modes.

int mode = 1

New scripts should specify mode=2 to enable ISSE processing. For backward compatibility, mode=1 by default.


Examples

Good initial values:

TemporalSoften(3, 4, 8, scenechange=15, mode=2)
Runtime Functions:- http://avisynth.nl/index.php/Interna...time_functions
Quote:
Difference to next

YDifferenceToNext(clip [, int offset = 1])
UDifferenceToNext(clip [, int offset = 1])
VDifferenceToNext(clip [, int offset = 1])

RGBDifferenceToNext(clip [, int offset = 1])
BDifferenceToNext(clip [, int offset = 1]) AVS+
GDifferenceToNext(clip [, int offset = 1]) AVS+
RDifferenceToNext(clip [, int offset = 1]) AVS+
This group of functions returns the absolute difference in pixel value between the current and next frame of clip – either the combined RGB difference or the Luma, U-chroma or V-chroma differences, respectively. In v2.61 an offset argument is added, which enables you to access the difference between the RGB, luma or chroma plane of the current frame and of any other frame. Note that for example clip.RGBDifferenceToNext(-1) = clip.RGBDifferenceFromPrevious, and clip.RGBDifferenceToNext(0) = 0.

Examples:

# th1 and th2 are positive thresholds; th1 should be sufficiently larger than th2
scene_change = (YDifferenceFromPrevious > th1) && (YDifferenceToNext < th2)
scene_change ? some_filter(...) : another_filter(...)
YDifferenceToNext(offset=1) calculates the total Y pixel difference between current_frame and current_frame+offset, and returns that result divided by the number of pixels in the frame (the average pixel difference).
[The offset arg is only available in AVS+; default 1, i.e. the next frame.]
LumaDifference() does something similar, but between the same frame number of two different clips.

TemporalSoften() scenechange will likely do a sort of
CurrentOffsetDiff = YDifferenceToNext(Offset)
for each frame between current_frame and current_frame + Offset, where Offset is (+/-) 1 to radius.
If CurrentOffsetDiff for a particular frame is above scenechange, then that frame is not included in the softening [I would assume any frames further away from the current frame in that direction are then also ignored];
this would happen in both directions, i.e. within radius both before and after current_frame.

I note that the TemporalSoften scenechange arg is type int; it really should be float, as the YDifferenceToNext(Offset) calculation gives a float result.
[A float scenechange arg would give better precision for the user threshold, but it probably does not matter too much.]

Would be very slow in ScriptClip.
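Untested guesswork, but a radius=1 ScriptClip equivalent would look something like the sketch below (sc = 15.0 is an arbitrary threshold, and unlike the real filter, which drops only the offending neighbour frames, this all-or-nothing version is just to show the runtime cost); the runtime diff functions get evaluated at every single frame, hence slow:

Code:
# guesswork sketch, radius=1 only: soften only when both neighbour frames
# are within an arbitrary threshold sc, otherwise return the frame untouched
ScriptClip("""
    sc = 15.0
    (YDifferenceFromPrevious() <= sc && YDifferenceToNext() <= sc)
    \ ? TemporalSoften(1, 4, 8, mode=2)
    \ : last
""")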

Hope the above makes some sense and answers your question.

EDIT: The above is guesswork; I have not looked at the source.

EDIT:
Quote:
So I was thinking if the way scenechange is coded so fast maybe it would be possible to use the same principle but for the filter to instead adjust its filter strength (tradius, lumathreshold, chromathreshold etc.)? Just an idea.
I don't think that would be useful (probably counter-productive), at least not for TemporalSoften.
EDIT Reason:
Where frames are very similar: maximum processing; where frames are very dissimilar: minimum processing. That is almost a NOP.
Where frames are very similar: minimum processing; where very dissimilar: maximum processing. That does not seem useful to me; standard thresholding is better [at least for TemporalSoften].

Also, CurrentOffsetDiff would be calculated for each Offset from the current frame (both +ve and -ve offsets); I don't know how you might be thinking of mixing all those together to adjust the overall strength. [Any wildly fluctuating strength adjustment might also introduce temporal instability in the result.]
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???

Last edited by StainlessS; 21st September 2022 at 13:33.
Old 21st September 2022, 15:30   #3  |  Link
anton_foy
Quote:
I don't think that would be useful (probably counter-productive), at least not for TemporalSoften.
EDIT Reason:
Where frames are very similar: maximum processing; where frames are very dissimilar: minimum processing. That is almost a NOP.
Where frames are very similar: minimum processing; where very dissimilar: maximum processing. That does not seem useful to me; standard thresholding is better [at least for TemporalSoften].

Also, CurrentOffsetDiff would be calculated for each Offset from the current frame (both +ve and -ve offsets); I don't know how you might be thinking of mixing all those together to adjust the overall strength. [Any wildly fluctuating strength adjustment might also introduce temporal instability in the result.]
Yes, I agree, and thanks for the thorough reply.
My idea was not to use the exact same operation (and not TemporalSoften specifically; any denoiser could be used, I guess), but to ask whether the scenechange code could somehow be altered to use my noise detection to dynamically regulate the params of the denoiser/filter, since scene-change detection (in most plugins) seems fast, faster than using ScriptClip. Hope I was clear in my reply; they tend to be messy.

Edit: good idea! You mentioned "when frames are similar" then low noise, and when dissimilar, higher noise. I have tested my noise detection, and on all my test clips the reported noise-amount numbers correspond well with what is visible in the image.
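To show what I mean (even though this ScriptClip form is exactly the slow path I want to avoid), a rough sketch, where the 10.0/30.0 thresholds and the TemporalSoften settings are just placeholders for my noise detection and the denoiser of choice:

Code:
# placeholder thresholds and settings: treat a larger frame-to-frame diff as
# noisier and denoise harder, but skip softening across very large jumps
# (likely scene changes)
ScriptClip("""
    d = YDifferenceToNext()
    d > 30.0 ? last
    \ : d > 10.0 ? TemporalSoften(3, 8, 16, mode=2)
    \ : TemporalSoften(2, 4, 8, mode=2)
""")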

Last edited by anton_foy; 21st September 2022 at 15:35.
Old 21st September 2022, 15:48   #4  |  Link
StainlessS
The scenechange-type functionality basically just sets/calculates the extent of frames used in the averaging that produces the temporal softening;
where a "scene change" is detected, it is not necessarily a scene change, it just means the frames are so different that they are best not averaged together.
The lumathreshold and chromathreshold functionality acts similarly at the pixel level, where some offset-frame pixels are ignored in the averaging as too different from the current_frame pixels.

EDIT: Some scene change detectors [eg SCSelect()] don't necessarily detect a scene change either; detection just means "unreliable results if processing using this frame".
(That is also why scene change stuff does not necessarily force eg StartOfScene after EndOfScene.)

Quote:
I have tested my noise detection and it corresponds well on all my test clips noise amount numbers with the image.
Yes, you can use that kind of thing for single-frame noise-threshold adjustment, but where a range of frames is involved, as in TemporalSoften, it makes less sense [if any].

Last edited by StainlessS; 21st September 2022 at 16:17.
Old 21st September 2022, 19:36   #5  |  Link
anton_foy
Quote:
Originally Posted by StainlessS
The scenechange-type functionality basically just sets/calculates the extent of frames used in the averaging that produces the temporal softening;
where a "scene change" is detected, it is not necessarily a scene change, it just means the frames are so different that they are best not averaged together.
The lumathreshold and chromathreshold functionality acts similarly at the pixel level, where some offset-frame pixels are ignored in the averaging as too different from the current_frame pixels.

EDIT: Some scene change detectors [eg SCSelect()] don't necessarily detect a scene change either; detection just means "unreliable results if processing using this frame".
(That is also why scene change stuff does not necessarily force eg StartOfScene after EndOfScene.)


Yes, you can use that kind of thing for single-frame noise-threshold adjustment, but where a range of frames is involved, as in TemporalSoften, it makes less sense [if any].
Yes, I'm sure. I just thought some more about it: maybe you could use an mt_lutxy(z) difference between clip.Trim(1,0) and the unshifted clip to get some kind of diff clip and go from there. That would be faster than runtime.
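Something along these lines is what I am picturing (needs MaskTools2; untested and just to illustrate the idea):

Code:
# per-pixel absolute difference between each frame and the next, built
# non-runtime by comparing the clip against a one-frame Trim shift of itself
nextf = Trim(last, 1, 0)                    # clip shifted forward by one frame
diff  = mt_lutxy(last, nextf, "x y - abs")  # |current - next| for every pixel
# note: the shifted clip is one frame shorter, so the last frame will likely
# just compare against itself
# diff could then drive e.g. mt_merge between a weak and a strong denoise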

Edit:
RaffRiff42 posted mdiff2, which I have had a lot of use for; it compares two frames non-runtime, so it is very fast.

Last edited by anton_foy; 21st September 2022 at 19:48. Reason: Forgot something