Old 9th December 2018, 21:25   #141
Boulder

Quote:
Originally Posted by pinterf
Big thanks for the report, regression in v37, fixed in v38!
Hey, while you are here... could you consider porting the VMAF filter to Avisynth+? Currently it's VapourSynth-only, and it could be really useful with the optimizer.

https://github.com/Netflix/vmaf
Old 9th December 2018, 23:09   #142
zorr

Quote:
Originally Posted by Seedmanc
zorr, in your experience, did you notice any relation between the resulting searchRange and searchRangeFinest? I'm trying to figure out which one should normally be larger than the other, but the observations are inconclusive.
This is a good opportunity for me to introduce the latest feature: heat map visualization. To answer your question let's make a heatmap of these two parameters. The command is:

Code:
optimizer -mode evaluate -vismode seriesheatmap -map searchRange searchRangeFinest -log "../scripts/some_script*.log"
It will display a heat map where the brightest colors represent the best results. With my logs (a total of 77902 results) it looks like this:

[image: heat map of searchRange vs. searchRangeFinest, all results]
It's a bit difficult to see where the best results are so let's focus on the best 10% of results by adding -top 10 to the command:
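That is, the same command as above with -top added (argument order is presumably not significant):

Code:
optimizer -mode evaluate -vismode seriesheatmap -map searchRange searchRangeFinest -top 10 -log "../scripts/some_script*.log"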
[image: heat map, top 10% of results]
Ok, so it looks like most of the good results have searchRange=2 and searchRangeFinest is not that important.

But wait, let's look at another set of results, using a different source video and a different script (a total of 271729 results):

[image: heat map of the second data set, top 100]
Hmm... this one was using -top 100 but we need to focus on the best results again, using -top 20:

[image: heat map of the second data set, top 20]
Oh, this time the best results have searchRangeFinest 1 or 2 and searchRange is not that important.

So I guess there's no fixed rule. Maybe it's possible to get good results both ways, but these results are a good indication that the optimal settings sometimes need searchRange larger than searchRangeFinest and sometimes the opposite.

Here's the latest version, v0.9.16-beta, which includes the heat map visualization.

Quote:
Originally Posted by Seedmanc
Have you tried optimizing scripts with the MCompensate trick yet? I've just started and already found that overlap=0 crashes mvtools.
Sorry, haven't gotten around to doing that yet, though I did do some experiments several months ago.
Old 9th December 2018, 23:30   #143
zorr

Quote:
Originally Posted by Boulder
I have a small request regarding the log file: would it be possible to have the result of the best score written as the last item like it is in the command prompt output? When I run multiple consecutive optimizations with a for loop, the information is lost and I need to use Excel to get the parameters to use in the final script for encoding.
I use -mode evaluate for this purpose. It can search for and show the best result (and the whole Pareto front) from one or multiple log files. You don't need to save the values either: just add -scripts best and you get a script with the best settings already set.

Oh, and maybe you already noticed, but the latest version has the heat map visualization you asked for. Since your script only has two parameters you don't need to specify the -map argument (the default is to take the first two parameters from the script). It works with the autorefresh feature, so if you want to watch the process you can call it like this:

Code:
optimizer -mode evaluate -vismode heatmap -autorefresh true
If you want to analyze multiple runs then use -vismode seriesheatmap.
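That is, presumably something like this (whether -autorefresh also applies to seriesheatmap is my assumption):

Code:
optimizer -mode evaluate -vismode seriesheatmap -autorefresh true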

Old 10th December 2018, 04:52   #144
Boulder

Quote:
Originally Posted by zorr
I use -mode evaluate for this purpose. It can search for and show the best result (and the whole Pareto front) from one or multiple log files. You don't need to save the values either: just add -scripts best and you get a script with the best settings already set.
Hmm, there must be some bug there, as I've tried that and -scripts bestofrun, but there are no scripts to check out after the analysis is complete. The work folder is empty and the only scripts in the optimizer folder are the original ones used for the analysis.
Quote:
Oh, and maybe you already noticed, but the latest version has the heat map visualization you asked for.
Thank you very much for this, it looks really useful for my purposes. It's much easier to see the ballpark for sane values to use if I first run a quick test on some episode of a series.
Old 10th December 2018, 09:32   #145
zorr

Quote:
Originally Posted by Boulder
Hmm, there must be some bug there, as I've tried that and -scripts bestofrun, but there are no scripts to check out after the analysis is complete. The work folder is empty and the only scripts in the optimizer folder are the original ones used for the analysis.
Thanks for the report. I tried to reproduce this but didn't find any issues. The scripts should be written into the same directory as the original script file, and that folder is read from the log files: the first line starting with # script contains the script path that will be used. What does that line look like in your log files?

Code:
# script D:\optimizer\bin/script.avs
# output out1="ssim: MAX(float)" out2="time: MIN(time) ms" file="perFrameResults.txt"
19.70072 1980 b=-7 c=-93 
19.797161 1980 b=41 c=66 
19.828617 1980 b=-86 c=-27 
19.700043 1970 b=28 c=-56 
19.80134 1970 b=-75 c=-67 
...
Old 10th December 2018, 16:44   #146
Boulder
Running optimizer test2.avs -alg exhaustive -scripts best

Code:
# script C:\AvisynthOptimizer/test2.avs
# output out1="ssim: MAX(float)" out2="time: MIN(time) ms" file="perFrameResults.txt"
0.987764 70 b=-80 c=10
0.987764 70 b=-79 c=10
0.98775 70 b=-78 c=10
0.987745 70 b=-77 c=10
At the end of the run I get the information below, but there is no script with that best data available. test2.avs is unchanged from when I saved it.

Code:
Pareto front:
0.987969 70ms b=-68 c=52
0.987969 70ms b=-68 c=49
0.987969 70ms b=-72 c=45
0.98794 60ms b=-54 c=59

The heatmap is indeed very useful. I can easily see which values of b I can safely leave out, reducing the number of combinations by a fair amount.
Old 10th December 2018, 19:03   #147
zorr

Quote:
Originally Posted by Boulder
Running optimizer test2.avs -alg exhaustive -scripts best

Code:
# script C:\AvisynthOptimizer/test2.avs
# output out1="ssim: MAX(float)" out2="time: MIN(time) ms" file="perFrameResults.txt"
0.987764 70 b=-80 c=10
0.987764 70 b=-79 c=10
0.98775 70 b=-78 c=10
0.987745 70 b=-77 c=10
At the end of the run I get the information below, but there is no script with that best data available. test2.avs is unchanged from when I saved it.

Code:
Pareto front:
0.987969 70ms b=-68 c=52
0.987969 70ms b=-68 c=49
0.987969 70ms b=-72 c=45
0.98794 60ms b=-54 c=59
OK, now I see the problem. -scripts is an argument that only works with -mode evaluate. So after your optimization is done (or even while it's still running) you should call:

Code:
optimizer -mode evaluate -scripts best
Quote:
Originally Posted by Boulder
The heatmap is indeed very useful. I can easily see which values of b I can safely leave out, reducing the number of combinations by a fair amount.
That's nice! You can also use a filter to reduce the number of tried combinations by forcing the values to be divisible by 10 (or any other number). For example:

Code:
b = 33/100.0 # optimize b = _n_/100.0 | -100..100 ; filter:x 10 % 0 == | b
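(Read as reverse polish notation, the filter x 10 % 0 == is x % 10 == 0, i.e. a candidate value of _n_ is accepted only when it is divisible by 10.)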
Old 10th December 2018, 19:14   #148
Boulder

Quote:
Originally Posted by zorr
OK, now I see the problem. -scripts is an argument that only works with -mode evaluate. So after your optimization is done (or even while it's still running) you should call:

Code:
optimizer -mode evaluate -scripts best
Thanks, will try that. What if I have multiple log files, like I usually do after the for loop procedure?
Old 10th December 2018, 19:25   #149
zorr

Quote:
Originally Posted by Boulder
Thanks, will try that. What if I have multiple log files, like I usually do after the for loop procedure?
You can specify the log files with the -log argument and the * wildcard. Example:

Code:
optimizer -mode evaluate -log "./scripts/part_of_filename*.log"
If your logs are from the same script, they will all start with the script name, so you can use -log "./path/script_name*.log". If you put all the log files into one directory, you can use -log "./path_to_directory/*.log". If you only want to analyze the logs from script "abc" created this month, you can use -log "./path/abc*2018-12*.log". And so on.
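So to pull the best script out of everything your for loop produced, combining this with the -scripts argument from earlier should work along these lines:

Code:
optimizer -mode evaluate -scripts best -log "./scripts/script_name*.log"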

Old 10th December 2018, 19:29   #150
Boulder
Cheers, that will smooth things out a lot. It's a bit of a pain putting everything through Excel; I already have to do that too many times at work these days.
Old 12th December 2018, 18:03   #151
Seedmanc
How do I filter/limit boolean or string values? Suppose I want a variable to always be false when another variable is below 1. Usually I'd write it as filter:Clevel 1 > NULL true ? x != so that when Clevel is above 1, x is compared to something it is guaranteed not to equal, making the statement always true and allowing both true and false. But AvsOptim doesn't know what NULL is, or anything else I try to put in its place.
Old 12th December 2018, 20:46   #152
zorr

Quote:
Originally Posted by Seedmanc
How do I filter/limit boolean or string values? Suppose I want a variable to always be false when another variable is below 1. Usually I'd write it as filter:Clevel 1 > NULL true ? x != so that when Clevel is above 1, x is compared to something it is guaranteed not to equal, making the statement always true and allowing both true and false. But AvsOptim doesn't know what NULL is, or anything else I try to put in its place.
There are a couple of ways to approach this. The simplest is probably to return true whenever Clevel > 1 (all values of x are valid in that case) and also whenever x == false (no matter what Clevel is, x is allowed to be false). In infix form it's

(Clevel > 1) or (x == false)

and translated to reverse polish notation

Clevel 1 > x false == or

Note that in both your original filter and this one, false is the only allowed value when Clevel is below 2. If you want that to happen only when Clevel is below 1, use Clevel > 0 or Clevel >= 1 instead.
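As a sketch of how this might be attached to a parameter definition (the parameter name refine and the false,true value list are made up for illustration, following the annotation syntax used elsewhere in this thread):

Code:
refine = false # optimize refine = _n_ | false,true ; filter:Clevel 1 > x false == or | refine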
Old 23rd December 2018, 15:27   #153
Boulder
I came up with an idea to reduce the number of iterations needed; it works at least for the resizer test I run in exhaustive mode.

My results seem to follow a bell-shaped curve, so it would be safe to skip the rest of the iterations of b against a constant c. That is, when the SSIM result of a pair is lower than the previous result, start the next round of varying b against c.
Old 25th December 2018, 22:34   #154
zorr

Quote:
Originally Posted by Boulder
I came up with an idea to reduce the number of iterations needed; it works at least for the resizer test I run in exhaustive mode.

My results seem to follow a bell-shaped curve, so it would be safe to skip the rest of the iterations of b against a constant c. That is, when the SSIM result of a pair is lower than the previous result, start the next round of varying b against c.
Yes, that would work. Yours is the kind of problem that can be solved with a hill climbing algorithm, and the mutation algorithm works as hill climbing when run with a population of 1 and a small mutation amount. Try these parameters:

-alg mutation -pop 1 -runs 1 -mutcount 1 -mutamount 0.01
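Combined with the invocation style shown earlier, the full call might look like this (your test2.avs standing in for the actual script):

Code:
optimizer test2.avs -alg mutation -pop 1 -runs 1 -mutcount 1 -mutamount 0.01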

The search will start at some random location and gradually move towards the optimum. The heat map looks like this:

[image: heat map of the hill-climbing search path]
The heat map will show a "cross" when the optimum has been reached (the optimum being located at the center of the cross).
Old 26th December 2018, 17:48   #155
Boulder

Quote:
Originally Posted by zorr
Try these parameters:

-alg mutation -pop 1 -runs 1 -mutcount 1 -mutamount 0.01

The heat map will show a "cross" when the optimum has been reached (the optimum being located at the center of the cross).
Thanks, worked fine, at least with my test of 5 episodes. The first five runs below used those settings and the last five the exhaustive algorithm.

Code:
Run 1 best: 4.952373 570 b=-54 c=52
Run 2 best: 4.934916 600 b=-56 c=58
Run 3 best: 4.9637823 590 b=-46 c=34
Run 4 best: 4.97074 600 b=-41 c=60
Run 5 best: 4.8731537 570 b=-67 c=57
Run 6 best: 4.952373 570 b=-54 c=52
Run 7 best: 4.934916 600 b=-56 c=58
Run 8 best: 4.9637823 600 b=-47 c=34
Run 9 best: 4.97074 580 b=-41 c=60
Run 10 best: 4.8731537 590 b=-67 c=57
The results of a hillclimb series look like this, so there's probably room for improvement. There were 2000 iterations (the default?) but the best result would probably have been found with fewer. The exhaustive test currently needs 2856 iterations - b is -75..25 and c is 15..70. If the best result hits either of those borders, I run a short optimization with min-max limits close to the border in question.

[image: hillclimb series results]
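(For reference, ranges like those would be written roughly like this in the optimize annotations, assuming the same _n_/100.0 scaling as the earlier bicubic example - a sketch, not my actual script:)

Code:
b = 0.0  # optimize b = _n_/100.0 | -75..25 | b
c = 0.33 # optimize c = _n_/100.0 | 15..70 | c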
Old 28th December 2018, 22:35   #156
zorr

Quote:
Originally Posted by Boulder
The results of a hillclimb series look like this, so there's probably room for improvement. There were 2000 iterations (the default?) but the best result would probably have been found with fewer. The exhaustive test currently needs 2856 iterations - b is -75..25 and c is 15..70. If the best result hits either of those borders, I run a short optimization with min-max limits close to the border in question.
Yes, it looks like the optimal result can be found with fewer iterations. In my example it was found within 300 iterations, but that can vary (it's a randomized process after all). You can set the number of iterations with the -iters argument.

I think this is where the dynamic iteration count could be useful, because it can stop the optimization process when no better results are found. I ran some tests and found settings where the optimum was found in 10 out of 10 runs (using range -100..100 for both b and c). Try these:

-alg mutation -iters dyn -dyniters 12 -dynphases 2 -pop 1 -runs 1 -mutcount 1 -mutamount 0.1 0.01

The mutation amount is large (0.1) in the beginning and small (0.01) at the end, and the search looks like this:

[image: heat map of the two-phase mutation run]
It takes on average about 100 iterations per run. That's 28 times faster than the exhaustive algorithm, not bad!
Old 29th December 2018, 14:52   #157
Boulder

Quote:
Originally Posted by zorr
Try these:

-alg mutation -iters dyn -dyniters 12 -dynphases 2 -pop 1 -runs 1 -mutcount 1 -mutamount 0.1 0.01

It takes on average about 100 iterations per run. That's 28 times faster than the exhaustive algorithm, not bad!
Thanks, I'll definitely test that with the next batch of encodes I have. A hundred iterations sounds tremendous, as I can then use more frames for the analysis to make sure I get a good all-round result.
Old 31st December 2018, 23:35   #158
Dogway
Is it possible to resume a test that finished too early and take logs into consideration to refine future mutations?
I'm using the last dynamic mode posted above but it finished too early.

I might have understood it wrong, but in my tests the best result is always the one with the least change; naturally we want a change, since we are denoising. The following always ends up at smdegrain(tr=2,thSAD=100,contrasharp=true,blksize=8,overlap=4,divide=0,refinemotion=true,lsb=true)
Code:
sigma = 20*20          # optimize sigma = _n_*20 | 100..600 | sigma
blockSize = 8          # optimize blockSize = _n_ | 4,8,16,32 ; min:divide 0 > 8 2 ? ; filter:overlap 2 * x <= | blockSize
overlap = 4            # optimize overlap = _n_ | 4,6,8,10,12,14,16 ; max:blockSize 2 / ; filter:x divide 0 > 8 2 ? % 0 == | overlap
tr = 3                 # optimize tr = _n_ | 2..4 | tr
divide = 0             # optimize divide = _n_ | 0..2 ; max:blockSize 8 >= 2 0 ? overlap 4 % 0 == 2 0 ? min | divide
denoised=smdegrain(tr=tr,thSAD=sigma,contrasharp=true,prefilter=2,blksize=blockSize,divide=divide,overlap=overlap,refinemotion=true,lsb=true)

Old 1st January 2019, 15:26   #159
zorr

Quote:
Originally Posted by Dogway
Is it possible to resume a test that finished too early and take logs into consideration to refine future mutations?
There's no real resume support, but you can take the best results from the logs as the initial population with the -initial argument. Take a look here. I can see the value in resuming the optimization, for example when Windows suddenly decides that it's mandatory to install some updates and reboot the computer while I'm running an optimization with 50000 iterations... So I will probably implement that at some point.

Quote:
Originally Posted by Dogway
I'm using the last dynamic mode posted above but it finished too early.
Those settings are not really suitable for the general case, they were handcrafted to a specific kind of optimization which is easier than most. You probably need a lot more iterations than those settings provide. You should perhaps try the default optimization settings first and see if you get better results with those.

Quote:
Originally Posted by Dogway
I might have understood it wrong, but in my tests the best result is always the one with the least change; naturally we want a change, since we are denoising.
When doing denoising optimization you need to make a clip with added noise, then remove the noise, and finally compare the denoised clip with the original. See here for an example. I have thought about other ways to do it, but this is currently the most reliable one.
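A rough sketch of that setup in AviSynth (not the optimizer's own template - the AddGrainC and SSIM plugin calls and all parameter values here are assumptions for illustration):

Code:
source = last                       # the clean reference clip
noisy = source.AddGrainC(var=25.0)  # simulate noise on top of the reference
sigma = 300 # optimize sigma = _n_*20 | 100..600 | sigma
denoised = noisy.SMDegrain(tr=2, thSAD=sigma, prefilter=2)
# score the result against the clean source, not against the noisy input;
# the optimizer then maximizes this metric (out1="ssim: MAX(float)" in the logs)
SSIM(source, denoised)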

Can you show what kind of values you are returning from the script?
Old 1st January 2019, 17:28   #160
ChaosKing

Quote:
Originally Posted by zorr
When doing denoising optimization you need to make a clip with added noise, then remove the noise, and finally compare the denoised clip with the original.
It would be nice if we also had halo and "other crap" simulators, to easily test (optimize) other kinds of filters.