Old 22nd March 2017, 16:28   #21  |  Link
CruNcher
Registered User
 
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by manolito View Post
Are you talking about the three small angels in the background?

The human brain works differently. Everybody just sees the two large angels in the foreground, and they do not display motion artifacts. The information in the background is pretty much discarded because the brain determines that this information is not important. (This is governed by the formatio reticularis, which works like a spotlight.)


Cheers
manolito
Everything is connected: the edge warping of the moving object (the two angels) is causing the problems, and it stays visible right up to the end. The edge of the moving object "warps" everything it passes in front of, so whatever it moves across gets distorted. That is quite visible, and it was less visible a few versions back, when these problems were better hidden.

I'm pretty sure the warping distortions at the end got amplified; I didn't perceive them as being this severe at first sight before.

Just try an experiment and splice both results together: the beginning and middle of your newest result with the ending of the older result. Voila, a pretty stable overall result.

And of course you concentrate on those two angels: if you ran an eye-tracker experiment, you would see that the ROI is the two angels, and that everything in their close proximity is perceived with the highest priority.
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is too late!

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 22nd March 2017 at 16:51.
Old 24th March 2017, 17:46   #22  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,559
Quote:
Originally Posted by manolito View Post
Here is a "poor man's" fps conversion function by johnmeyer.

jm_fps.avsi


Motion-based fps conversion will always introduce some artifacts, but with these parameters johnmeyer really had a golden touch. I tested many different scripts for this, but this one gave by far the best results on all the clips I used it for.

Maybe you are interested in a standalone tool for slow motion using different methods for changing frame rates. Have a look here:
https://forum.doom9.org/showthread.p...31#post1789031


Cheers
manolito
How does this script compare with InterFrame?
Old 24th March 2017, 18:50   #23  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
Originally Posted by MysteryX View Post
How does this script compare with InterFrame?
For the answer, read my post here.

That post also explains why some in this thread thought my script provided some good results: I was able to tweak some things which are not "exposed" with SVP and its InterFrame front end.

As for things said in this thread, I still am not convinced that any of the DCT settings are going to provide any substantial, real improvement, i.e., they won't produce differences that actually matter. Note that I didn't say there wouldn't be differences, only that those differences won't really get at the reasons why motion estimation fails when doing frame-rate changes or other operations that involve creating new frames from adjacent frames.

The real issue is how to define "objects" and how to track them. The motion estimation done by any of these MVTools-derived filters relies on nothing more than tracking pre-determined blocks of pixels rather than pre-identifying actual objects in the frame. This is why block size is the most important variable to change when trying to get good results. Depending on the video and the size of the things being tracked (people's legs, vertical fence posts, and other difficult-to-track items), different block sizes will work better on some videos than on others. I find a block size of 16 to be a good starting point, but sometimes 8 or 32 works better. The block overlap can then provide some fine tuning.
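
To make those knobs concrete, here is a minimal, generic MVTools2 frame-doubling sketch (an illustration only, not the tuned script discussed earlier; "source" is a placeholder for the loaded clip, and the values shown are just the starting points described above):

Code:
super = source.MSuper(pel=2)
bv = MAnalyse(super, isb=true,  blksize=16, overlap=8)  # backward vectors
fv = MAnalyse(super, isb=false, blksize=16, overlap=8)  # forward vectors
# MAnalyse also exposes the dct parameter discussed above
MFlowFps(source, super, bv, fv, num=50, den=1)          # e.g. 25fps -> 50fps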

In general, I only use this technology in conjunction with something else and then mix the two together. The reason is that other technologies, such as frame blending, never fail badly, but they also don't produce results as good as motion estimation does when motion estimation is behaving. Unfortunately, when motion estimation (including other tools like Twixtor) fails, it fails spectacularly, ruining the viewing experience. You cannot rely on it.
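
For reference, the fallback that "never fails badly" can be as simple as the built-in conversions (target rate illustrative):

Code:
safe = source.ConvertFPS(50)   # frame blending: soft, but it never warps
# safe = source.ChangeFPS(50)  # frame repetition: judders, but also never warps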

So motion estimation is not a "set it and forget it" tool; if you use it that way to create new frames, you will get burned, sooner rather than later.
Old 25th March 2017, 04:56   #24  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,559
In the Natural Grounding Player / Yin Media Encoder, I upscale videos from 288p 25fps to 768p 60fps. I get the best results by running InterFrame between the two frame-doubling steps, and it is one of the most important steps. For low-quality videos with lots of artifacts, SVP actually removes many of those artifacts when creating the interpolated frames! Almost too good to be true, but it works well.
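
A sketch of such a pipeline (the nnedi3_rpow2 doubler, the 4:3 frame sizes, and the exact ordering are assumptions here, not necessarily the player's real chain):

Code:
source = AviSource("input.avi").ConvertToYV12()   # 384x288 @ 25fps
up = source.nnedi3_rpow2(rfactor=2)               # first double: 288p -> 576p
smooth = up.InterFrame(NewNum=60, NewDen=1, Cores=4, GPU=true)
smooth.nnedi3_rpow2(rfactor=2).Spline36Resize(1024, 768)  # second double, then fit to 768p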

I'm wondering whether the approach you suggest here would give better results by combining two approaches to reduce the severity of artifacts when one fails. I'm looking for a generic script that will work most of the time, with a few simple tweakable settings.

Do you have a specific script I could try to see the difference in my case?
Old 25th March 2017, 07:24   #25  |  Link
CruNcher
Registered User
 
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by johnmeyer View Post
[...]

So motion estimation is not a "set it and forget it" tool; if you use it that way to create new frames, you will get burned, sooner rather than later.
Exactly, and in this case the quantization noise from the lossy MPEG input in manolito's test conversion causes some of the tracking problems.
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is too late!

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 25th March 2017 at 07:27.
Old 25th March 2017, 22:05   #26  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
Originally Posted by MysteryX View Post
Do you have a specific script I could try to see the difference in my case?
Definitely not. Like so many video issues, you have to do things manually. In this case, you have to watch the video, see where it fails, and then switch over to the other approach at those points.

I put both versions on two timelines in my NLE, with the motion-estimated version on the dominant (i.e., default) track. I play the video, usually at 1.5x-2x normal speed (to get through it in a hurry). When I see a bad frame (or several bad frames), I simply cut to the other track until the problem goes away.

For somewhat critical work, I will sometimes crossfade from the ME version to the other version to make the switch less apparent. For really critical work, I will create a motion mask, feathered at the edges, to replace only the parts of the frame that are broken. This produces virtually perfect results, but it obviously takes quite a bit of time. When I do paid work (once in a while some of my stuff ends up in movies or on TV), it is worth the time to do this.
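
In Avisynth terms, the feathered-mask repair might look like the following rough sketch; the mask still has to be rotoscoped by hand, and "me", "safe", and "mask.avi" are placeholders:

Code:
# me = motion-estimated clip, safe = frame-blended clip (frame-aligned, same length)
mask = AviSource("mask.avi").ConvertToYV12()   # hand-rotoscoped: white = broken area
feathered = mask.Blur(1.5).Blur(1.5)           # feather the mask edges
Overlay(me, safe, mask=feathered)              # take "safe" pixels where the mask is white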
Old 26th March 2017, 00:22   #27  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,559
Quote:
Originally Posted by johnmeyer View Post
I put both versions on two timelines in my NLE, with the motion-estimated version on the dominant (i.e., default) track. I play the video, usually at 1.5x-2x normal speed (to get through it in a hurry). When I see a bad frame (or several bad frames), I simply cut to the other track until the problem goes away.
That's a perfect case where software automation would be most useful, but the process would be somewhat complicated to implement.
Old 26th March 2017, 00:44   #28  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
Originally Posted by MysteryX View Post
That's a perfect case where software automation would be most useful, but the process would be somewhat complicated to implement.
Complicated? No. Impossible? Yes.

You simply cannot anticipate the problems which get created. Also, things which "look good" to an algorithm look like heck to the human eye.

I've written over 100 AVISynth scripts, and have spent many thousands of hours editing video over the past twenty years. I understand both and, as an EE, programmer, and former software manager, I think I know what can and cannot be done (although I am often amazed and surprised by what software people manage to create).

The best way to express my skepticism about finding an algorithmic solution to this problem is that if you could actually detect the anomaly created by motion estimation, then the motion estimation itself would be able to avoid creating it in the first place. Since even the commercial software cannot do this (i.e., even people with advanced programming skills and economic incentives), and since they've been working on this for a long time, I don't think it will happen anytime soon.
Old 26th March 2017, 01:07   #29  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,559
What I'm saying is that software could allow you to manually go over the video at 50% speed to mark which frames are corrupt, and then let the software handle the rest. Exactly the same as you're doing now, but without having to hack around with manual scripts.

The first part of your process could probably be done easily.

Quote:
Originally Posted by johnmeyer View Post
For somewhat critical work, I will sometimes crossfade from the ME version to the other version to make the switch less apparent. For really critical work, I will create a motion mask, feathered at the edges, to replace only the parts of the frame that are broken. This produces virtually perfect results, but it obviously takes quite a bit of time. When I do paid work (once in a while some of my stuff ends up in movies or on TV), it is worth the time to do this.
The cross-fade could maybe be semi-automated too. But that last part with motion masks can only be done manually by someone who really knows what he's doing.

Right now I'm implementing Deshaker (a VirtualDub filter), which is difficult to use manually because of its two passes, especially if you want to preview various settings, and especially if you want to adjust settings for various segments.

I don't know if your process would be appropriate for my needs, but something that could be done is to enter the frame ranges for which to use the alternate method and handle everything else automatically. Ex: 100-115, 140-144, 160-180.

Last edited by MysteryX; 26th March 2017 at 02:42.
Old 26th March 2017, 04:25   #30  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,559
Essentially, what you're doing is pretty simple, if I understand correctly. You use two algorithms: a more aggressive frame interpolation (which causes more artifacts) and a safer method for when it fails. Then the idea is to identify which frames or areas should use the alternative method.

If this approach were to be semi-automated, it would be better done as an Avisynth script than as a standalone application. A script or plugin could be designed that takes a string like "100-115,140-144,160-180", perhaps even allows specifying rectangles or zones for masks, and automatically does everything you're manually scripting.

The only challenge I see is that this would require conditional filters, and ScriptClip doesn't currently work with MT.

If you're doing a lot of this and spending a lot of time on the process, perhaps it would be worth developing a utility that systematizes it.
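
For what it's worth, something close to that string-driven switch already exists in the RemapFrames plugin (assuming I recall the ReplaceFramesSimple signature correctly; "source" and the ranges are placeholders):

Code:
me   = source.InterFrame(NewNum=50, NewDen=1, Cores=4)   # aggressive version
safe = source.ConvertFPS(50)                             # blended fallback
# frames listed in mappings are taken from the second clip
ReplaceFramesSimple(me, safe, mappings="[100 115] [140 144] [160 180]")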
Old 26th March 2017, 04:57   #31  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
Originally Posted by MysteryX View Post
What I'm saying is that software could allow you to manually go over the video at 50% speed to mark which frames are corrupt, and then let the software handle the rest. Exactly the same as you're doing now, but without having to hack around with manual scripts.

The first part of your process could probably be done easily.

The cross-fade could maybe be semi-automated too. But that last part with motion masks can only be done manually by someone who really knows what he's doing.

Right now I'm implementing Deshaker (a VirtualDub filter), which is difficult to use manually because of its two passes, especially if you want to preview various settings, and especially if you want to adjust settings for various segments.
1. If you Google my name and "Deshaker" you will find that I wrote a very complex set of scripts to completely automate the process. One set of scripts was written within Sony Vegas Pro, my NLE, which provides its own scripting language. I also wrote a script in VirtualDub that interacts with the Vegas script. The end result is that, when you want to stabilize a clip on the timeline, you press one button and the entire process happens with no further interaction on your part, including both Deshaker passes.

This entire process is well documented in posts I made on the Vegas forum many years ago.

2. Because Vegas provides scripting, I already do the automation that you suggest. If, for instance, I want to cut to the other clip but do it via a two-frame cross-fade, I can simply press one key and it will do the entire cut and cross-fade. What's more, if I wanted to, I could instead simply insert markers at each place I want to fix, and then do all the cuts using a batch version of the same script.

Again, if you look at the scripting portion of the Vegas forum, you will find many of my posts describing some of my dozens of Vegas scripts.

Even though Sony pretty much abandoned Vegas, and the new owner, Magix, doesn't appear to be doing anything to make it better, it is still the most productive editing tool on the planet because of its scripting capability.
Old 26th March 2017, 06:47   #32  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,559
It could still be interesting to see how your process could be translated into pure Avisynth programming.
Old 26th March 2017, 17:55   #33  |  Link
TheFluff
Excessively jovial fellow
 
Join Date: Jun 2004
Location: rude
Posts: 1,100
Quote:
Originally Posted by MysteryX View Post
It could still be interesting to see how your process could be translated into pure Avisynth programming.
please don't
Old 26th March 2017, 23:30   #34  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Tachyon (used in broadcast for fps conversion) uses a fallback method with masking and feathering:
https://www.telestream.net/pdfs/app-...achyon_VPL.pdf

See pages 5 and 6.

They find problematic areas and then replace them with frame-blended or nearest frames, using masking and feathering. It looks like they use the quality of the motion vectors as the decision criterion.
I'm just surprised that this doesn't break "local" motion coherency between frames.

I'd love to see this in MVTools.
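
Something in that spirit can be approximated with MVTools' own SAD mask. A rough, untested sketch (kind=1 selects the SAD-based mask, ml scales it, and upsampling the mask with ChangeFPS to match frame counts is a crude approximation):

Code:
super = source.MSuper(pel=2)
bv = MAnalyse(super, isb=true,  blksize=16, overlap=8)
fv = MAnalyse(super, isb=false, blksize=16, overlap=8)
me   = MFlowFps(source, super, bv, fv, num=50, den=1)   # interpolated version
safe = source.ConvertFPS(50)                            # blended fallback
bad  = MMask(source, bv, kind=1, ml=100).Blur(1.5).ChangeFPS(50)  # bright = poor vectors
Overlay(me, safe, mask=bad)   # feathered fallback in the problem areas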

Last edited by kolak; 26th March 2017 at 23:33.
Old 27th March 2017, 00:53   #35  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,346
Quote:
Originally Posted by kolak View Post
I'm just surprised that this doesn't break "local" motion coherency between frames.
How do you know it doesn't?

Have you tested it or seen samples?

Manually doing it usually looks poor when you try to take care of occlusions in that manner (blending or nearest frames through accurate, user-defined rotoscoped masks), so I doubt an "automatic" approach using a less accurate method would look any better.
Old 27th March 2017, 00:56   #36  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
Originally Posted by MysteryX View Post
It could still be interesting to see how your process could be translated into pure Avisynth programming.
Quote:
Originally Posted by TheFluff View Post
please don't
Since there is no point, I am not even tempted.
Old 27th March 2017, 06:47   #37  |  Link
MysteryX
Soul Architect
 
 
Join Date: Apr 2014
Posts: 2,559
Johnmeyer, the first part of your process is actually very simple to do.

Write a filter that takes two clips and a string as parameters. The filter returns frames from either clip based on the frame number, as configured in the string. Frame blending and making the transitions seamless is a whole other story; perhaps blend both clips in the transition frames. Is it like an artist's work, where you have to draw it differently in each situation?

I'm just curious: what part of your process can't be systematized?
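
The switching core itself is short in Avisynth. A minimal sketch (ReplaceRange is a hypothetical helper; a real filter would parse the range string, and both clips are assumed frame-aligned, equal length, video-only):

Code:
# Return clip a, except frames s..e, which are taken from clip b
function ReplaceRange(clip a, clip b, int s, int e) {
    head = (s == 0) ? b.Trim(0, -(e+1)) : (a.Trim(0, -s) ++ b.Trim(s, -(e-s+1)))
    return (e >= FrameCount(a) - 1) ? head : (head ++ a.Trim(e+1, 0))
}

me   = source.InterFrame(NewNum=50, NewDen=1, Cores=4)   # aggressive version
safe = source.ConvertFPS(50)                             # blended fallback
me = ReplaceRange(me, safe, 100, 115)
me = ReplaceRange(me, safe, 140, 144)
ReplaceRange(me, safe, 160, 180)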
Old 27th March 2017, 10:07   #38  |  Link
videoFred
Registered User
 
 
Join Date: Dec 2004
Location: Terneuzen, Zeeland, the Netherlands, Europe, Earth, Milky Way,Universe
Posts: 689
A very simple solution is to create two clips from the same source: one with ChangeFPS() and one with motion interpolation (this can be done with MVTools2 or InterFrame()).

Then we can select scenes with ClipClop():

Code:
source = AviSource("L:\VdP\VdP_Sp4_gekuist.avi").ConvertToYV12()

V0 = source.ChangeFPS(25)                                        # plain frame repetition
V1 = InterFrame(source, NewNum=25, NewDen=1, Cores=8, GPU=true)  # motion-interpolated

NickNames = """  # Pseudonyms for clips
    I = 1
"""

SCMD = """  # Frame ranges taken from clip 1 (the interpolated version)
    I     0,   20
    I   836, 1285
    I  1723, 2312
    I  3032, 3202
    I  3986, 4456
    I  4682, 4938
    I  6060, 6600
"""

SHOW = true

ClipClop(V0, V1, scmd=SCMD, nickname=NickNames, show=SHOW)
The result is a mixed clip: frame doubling on "difficult" scenes (fast-moving objects) and frame interpolation on "easy" scenes (slow pans, for example). Of course, this requires manually searching for the scenes that can be interpolated.

Fred.
__________________
About 8mm film:
http://www.super-8.be
Film Transfer Tutorial and example clips:
https://www.youtube.com/watch?v=W4QBsWXKuV8
More Example clips:
http://www.vimeo.com/user678523/videos/sort:newest
Old 27th March 2017, 11:15   #39  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by poisondeathray View Post
How do you know it doesn't?

Have you tested it or seen samples?

Manually doing it usually looks poor when you try to take care of occlusions in that manner (blending or nearest frames through accurate, user-defined rotoscoped masks), so I doubt an "automatic" approach using a less accurate method would look any better.
Yes, I've seen results. They are quite good.
Old 27th March 2017, 16:55   #40  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Of course you can use ClipClop, but to do that you already have to know the frame numbers for your in/out points, and to get those you have already done all the work, presumably in your NLE. In Vegas (my NLE), once I arrive at the frame where the switch should be made, I just make it then and there. It takes less time to finish the job "on site" than to write down the frame numbers, transfer them to a script, and then execute the script. I see zero benefit to that workflow: it adds extra steps, takes more time, and doesn't let me nudge or make slight changes easily.

I have never understood why people insist on making AVISynth into an editing tool. It really is not well suited to that job. But if you want to do it, knock yourself out and have fun!