Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 20th September 2022, 10:09   #1  |  Link
wonkey_monkey
Formerly davidh*****
 
 
Join Date: Jan 2004
Posts: 2,493
AI EXTRApolation? (adding frames to the end of a video)

I've been looking at the likes of DAINapp and FlowFrames that do pretty amazing things when it comes to interpolating new frames in a video, but is there anything that will extrapolate new frames?

I have a short, fast-moving, fairly motion-blurred clip (a camera being pulled back through a forest) and I want to add just two frames to the end of it that continue the motion. There's a lot of parallax because of all the trees at different distances to the camera.

Any suggestions?
__________________
My AviSynth filters / I'm the Doctor
Old 25th September 2022, 22:22   #2  |  Link
AVIL
Registered User
 
Join Date: Nov 2004
Location: Spain
Posts: 408
Hi,

A mad idea: with mvtools, get the motion vectors between the second-to-last and last frames of the clip, and apply them with MCompensate to the last frame (e.g. by using a fake clip such as Trim(clip,1,0).DuplicateFrame(last)). Repeating the procedure on the newly created frame yields a second "extrapolated" frame (hopefully). No AI involved, though, and probably not much Natural Intelligence either.
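The idea could be sketched in AviSynth roughly as follows (untested; assumes mvtools2 is loaded, and ExtrapolateOne is a name made up here). The trick is a one-frame-delayed analysis clip, so that the vectors computed at the padded position describe the last real frame pair, pushing the last frame one constant-motion step further:

Code:
Function ExtrapolateOne(clip c)
{
    n      = c.Framecount()
    padded = c.DuplicateFrame(n-1)             # repeat the last frame as a slot to fill
    # Analysis clip delayed by one frame: at the padded position it presents
    # the last *real* frame pair, so MAnalyse measures non-zero motion there.
    anal   = padded.DuplicateFrame(0).Trim(0, n)
    sup    = padded.MSuper()
    supA   = anal.MSuper()
    fv     = supA.MAnalyse(isb=false, delta=1, truemotion=true)
    pred   = padded.MFlow(sup, fv)             # each frame predicted from its predecessor
    return c ++ pred.Trim(n, -1)               # keep only the newly synthesised frame
}

Calling it twice, ExtrapolateOne(ExtrapolateOne(clip)), would give the two extra frames asked for in the opening post.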

Last edited by AVIL; 25th September 2022 at 22:54. Reason: Completion of the idea
Old 26th September 2022, 00:18   #3  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Is this of any use at all? [only for single-frame prediction]

Quote:
Originally Posted by StainlessS View Post
You beat me to it (was busy with an edit).

Might be able to use these (manually),

Code:
Function PredictFromPrevious(clip Last)
{ # Predict i frame from Previous
	yuy2=(isYUY2())?true:false
	PreFilt=Last.DeGrainMedian() 	# prefiltered for better motion analysis
	Last= (yuy2)?Interleaved2planar():Last
	PreFilt= (yuy2)?PreFilt.Interleaved2planar():PreFilt
	super=Last.MSuper(planar=yuy2)
	superPreFilt=PreFilt.MSuper(planar=yuy2)
	fv = superPreFilt.MAnalyse(isb = false,  truemotion=true,delta = 1)
	Last.MFlow(super,fv,planar=yuy2)
	Last=(yuy2)?Planar2Interleaved():Last	
	Return Last
}


Function PredictFromNext(clip Last)
{
	yuy2=(isYUY2())?true:false
	PreFilt=Last.DeGrainMedian() 	# prefiltered for better motion analysis
	Last= (yuy2)?Interleaved2planar():Last
	PreFilt= (yuy2)?PreFilt.Interleaved2planar():PreFilt
	super=Last.MSuper(planar=yuy2)
	superPreFilt=PreFilt.MSuper(planar=yuy2)
	bv = superPreFilt.MAnalyse(isb = true,  truemotion=true,delta = 1)
	Last.MFlow(super,bv,planar=yuy2)
	Last=(yuy2)?Planar2Interleaved():Last
	return Last
}
And use the ClipClop() plugin to select replacement frames from the generated predicted clips.
EDIT: Returns a clip in which every frame is predicted from the previous/next frame.

EDIT: Or a bit more, from elsewhere:

Quote:
Originally Posted by StainlessS View Post
Here's a modification of a script function posted by Gavino (thank you, Maestro):
http://forum.videohelp.com/threads/3...=1#post2089696
EDIT: An updated version of the DoctorFrames function below is included in the zip, with frame interpolation of up to 22 frames.
Code:
Function DoctorFrames(clip c, String "Scmd",String "Cmd",bool "Show",int "dv") {
# Replace damaged single frames using commands in either a command string or a file.

    Scmd = Default(Scmd,"")     # User supplied list of newline or semicolon separated commands.
    Cmd  = Default(Cmd,"")      # User supplied Filename containing newline separated commands.
    Show = Default(Show,false)  # Show Info on frame
    dv   = Default(dv,0)        # ClipClop DebugView level (Need DebugView utility)

    sup         =   c.MSuper()
    PreFilt     =   c.DeGrainMedian()       # prefiltered for better motion analysis
    supPreFilt  =   PreFilt.MSuper()
    
    ci_bv   = sup.MAnalyse(isb=true, delta=2)
    ci_fv   = sup.MAnalyse(isb=false, delta=2)
    ci  = c.MFlowInter(sup,ci_bv, ci_fv, time=50.0, ml=100).DuplicateFrame(0).Trim(0,c.Framecount()-1)
    
    cp = c.SelectEvery(1,-1).Trim(0,c.Framecount()-1)           # chop off extra frame
    cn = c.SelectEvery(1,1).DuplicateFrame(c.Framecount()-1)    # Make same length as source

    pp_fv = supPreFilt.MAnalyse(isb = false,  truemotion=true,delta = 1)
    pp=c.MFlow(sup,pp_fv)

    pn_bv = supPreFilt.MAnalyse(isb = true,  truemotion=true,delta = 1)
    pn=c.MFlow(sup,pn_bv)

    NickName="  # Define Command mnemonics allowed in command string and file.
        CI=1    # CopyFromInterpolated
        CP=2    # CopyFromPrevious (frame n replaced with frame n-1)
        CN=3    # CopyFromNext (frame n replaced with frame n+1)
        PP=4    # PredictFromPrevious  (frame n replaced with frame predicted from frame n-1)
        PN=5    # PredictFromNext  (frame n replaced with frame predicted from frame n+1)
    "
    Return c.ClipClop(ci,cp,cn,pp,pn,scmd=Scmd,cmd=Cmd,nickname=Nickname,show=show,dv=dv)  
}

Avisource("D:\avs\avi\TEST.AVI")
Trim(1000,-10) ++ trim(2000,-10) ++ trim(3000,-10) # Make two scene changes

SCMD="
 CI 5           # CopyFromInterpolated clip frame 5
 CP 9           # CopyFromPrevious at last frame before scene change
 CN 10          # CopyFromNext at first frame of scene change
 CI 15          # CopyFromInterpolated clip frame 15
 PP 19          # PredictFromPrevious at last frame before scene change
 PN 20          # PredictFromNext at first frame of scene change
 CI 25          # CopyFromInterpolated clip frame 25
"
    
DoctorFrames(Scmd,show=true)
The function could easily be extended.

Thread on fixing broken frames here:-
http://forum.doom9.org/showthread.php?t=152758

EDIT:- For single-frame repair only.
EDIT:- Easy to modify for e.g. frame-range denoising.
EDIT:- @IanB & Gavino, thanks for implementing the array of clips; I would never have gotten here otherwise.

EDIT: SCMD using mnemonics is equivalent to this (the mnemonics are just an alternative way of giving the clip index number):
Code:
SCMD="
 1 5           # CopyFromInterpolated clip frame 5
 2 9           # CopyFromPrevious at last frame before scene change
 3 10          # CopyFromNext at first frame of scene change
 1 15          # CopyFromInterpolated clip frame 15
 4 19          # PredictFromPrevious at last frame before scene change
 5 20          # PredictFromNext at first frame of scene change
 1 25          # CopyFromInterpolated clip frame 25
"
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???

Last edited by StainlessS; 26th September 2022 at 01:32.
Old 26th September 2022, 00:31   #4  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Deblurring the motion blur beforehand (using various "AI" methods) will probably help improve the quality of forward-vector interpolation. You can add the blur back afterwards to match the original frames.
Old 22nd March 2023, 06:11   #5  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Here is an example of forward prediction, t+1 derived from t-1 and t0, with a usable working demo:

https://github.com/megvii-research/CVPR2023-DMVFN



(APNGs; they should animate in most browsers)
Demo example: the axial object rotation looks decently predicted.



example on different data set
<edit> see post 11 below, added t+2 frame



Quote:
Originally Posted by wonkey_monkey View Post
I have a short, fast-moving, fairly motion-blurred clip (a camera being pulled back through a forest) and I want to add just two frames to the end of it that continue the motion. There's a lot of parallax because of all the trees at different distances to the camera.
Not sure how well it would work on the motion-blurred tree clip, but you would likely improve results by de-motion-blurring, running inference, then adding the blur back if desired.

Another tool in the tool belt. Other possible uses are fixing bad splices/scene changes (maybe by reversing the sequence after the scene change) and extending a scene.

Last edited by poisondeathray; 22nd March 2023 at 22:34.
Old 22nd March 2023, 08:08   #6  |  Link
Emulgator
Big Bit Savings Now !
 
 
Join Date: Feb 2007
Location: close to the wall
Posts: 1,531
Wow. Even the torn ropes/cables look convincing.
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain)
"Data reduction ? Yep, Sir. We're that issue working on. Synce invntoin uf lingöage..."
Old 22nd March 2023, 10:38   #7  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 234
Very interesting!
How long does it take to predict let's say one frame in HD?
Old 22nd March 2023, 14:53   #8  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Frank62 View Post
How long does it take to predict let's say one frame in HD?
Maybe a few seconds. It's not as fast as mvtools2, but not as slow as something on the order of ESRGAN.


Another machine-learning prediction project:
https://github.com/YueWuHKUST/CVPR20...-Interpolation

I haven't had a chance to play with this one yet. It's Linux-only, and I'd have to break out a VM.
Old 22nd March 2023, 16:21   #9  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 234
Pretty fast. Did anyone ever try to predict more than one frame? If so, how does it look? Dramatically worse, or acceptable?
Old 22nd March 2023, 21:31   #10  |  Link
Emulgator
Big Bit Savings Now !
 
 
Join Date: Feb 2007
Location: close to the wall
Posts: 1,531
Simple. Your video ends at frame n_old.
Extrapolate frame n_old+1.
Append it to your source video.
The end is now n_new.
Then extrapolate n_new+1 again.
There it is: your n_old+2.
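The loop above in AviSynth terms (sketch only; ExtrapolateOne is a hypothetical helper that returns the clip with one predicted frame appended, whether mvtools-based or an external AI tool driven frame by frame):

Code:
one = source.ExtrapolateOne()   # now ends at n_old+1
two = one.ExtrapolateOne()      # now ends at n_old+2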
Old 22nd March 2023, 22:33   #11  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
It gets worse, and errors accumulate (as expected).

Here is the above example with t+2 (predicted from t and the interpolated t+1); I'll remove the one above.



General quality is worse than something like RIFE, where you have before/after frames.

It also "fails" in the usual failure situations: repeating patterns, picket fences, etc.
Old 23rd March 2023, 10:36   #12  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 234
Yes, I thought so. It would have been too magical. Thanks for the example.
Old 23rd March 2023, 16:04   #13  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
If you don't have the next frame (even with a big gap), then it's all guesswork.
Old 24th March 2023, 09:08   #14  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 234
What is currently the best way to interpolate a bigger gap? Let's say about 10-20 frames missing, with some movement?
Old 24th March 2023, 15:02   #15  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Frank62 View Post
What is currently the best way to interpolate a bigger gap? Let's say about 10-20 frames missing, with some movement?
That's too many frames to "tween" in most cases; the motion will look unnatural/robotic (unless those 10-20 frames were pure linear motion).

Normally you would photoshop a few frames in between, adjusted for motion, as reference points.
Old 26th March 2023, 10:23   #16  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 234
Thanks! So nothing new. I thought maybe AI would give better chances for this by now.
Old 30th March 2023, 06:31   #17  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,041
Quote:
Originally Posted by Frank62 View Post
What is currently the best way to interpolate a bigger gap? Let's say about 10-20 frames missing, with some movement?
The engine should be symmetrical along the time axis, so to reduce the errors of forward-only motion interpolation, two directions may be used:

1. Get a forward-interpolated frame sequence from the past good frames.
2. Get a backward-interpolated frame sequence from the next good frames.
3. Interleave the two sequences and pass them to a majority-based temporal 'denoiser'; it should select the sample or block values of higher probability, which may reduce the number of interpolation errors.

For best results, though, majority-based processing prefers an odd number of data samples per output sample (sample or block, depending on the engine: vsTTempSmooth is an example of a per-sample engine, while mvtools (MDegrainN pmode=1) is block-based).

So, to try to get better results, several interpolations from different interpolation engines may be fed to a majority-selector engine, which will try to output the most probable-looking result. A per-sample majority-selector mode is expected in vsTTempSmooth in the near future, once the feature request is carried over from MDegrainN, or possibly a faster algorithm will be implemented.
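A rough AviSynth sketch of steps 1-3 (untested; 'before' and 'after' stand for the good footage on each side of the gap, PredictN is a made-up n-frame extrapolator returning only the synthesised frames, and the external Median plugin is assumed as the majority selector):

Code:
gaplen = 10
fwd = before.PredictN(gaplen)                     # forward pass from past good frames
bwd = after.Reverse().PredictN(gaplen).Reverse()  # backward pass from next good frames
mid = Merge(fwd, bwd)                             # crude third estimate for an odd count
gap = Median(fwd, bwd, mid)                       # per-sample majority selection
before ++ gap ++ after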
Old 30th March 2023, 10:08   #18  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 234
Thank you very much for your hints. I did similar things some years ago and tested a lot of such strategies, also with different engines including Alchemist. But, logically, the quality of synthesised frames falls off exponentially when there is motion, even of bigger objects, no matter what you try. By the way, I also preferred block-based approaches in most cases. I also mixed several things (I don't remember exactly) and spent weeks on this, with no great success.
I didn't have much hope that AI could improve this a lot, but at least a bit. But nothing new under the sun.
Old 30th March 2023, 15:00   #19  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Quote:
Originally Posted by Frank62 View Post
But - logically - the quality of synthesised frames falls of exponentially when there is motion, even of bigger objects, no matter what you try.
Yes, and I think a big problem with this approach is using a "generic" model for everything. This gives mediocre results unless the test material is similar to the training material.

You generally see the best results in machine learning with a custom model matching the input characteristics. The three provided models are trained on limited, specific training sets. If you train your own model on similar source material, you will get better results from any of these machine-learning algorithms.


Quote:
I didn't have much hope that AI could improve this a lot, but at least a bit. But nothing new under the sun.
I would still consider it some improvement, because there is less manual work: instead of photoshopping every frame, you only have to touch up parts of frames. I consider it a useful time saver when using these tools.

The forward-only prediction (from previous frame(s)) of mvtools2, using any variant of MFlow/MCompensate/MRecalculate, is significantly worse than DMVFN.
Old 30th March 2023, 17:16   #20  |  Link
Frank62
Registered User
 
Join Date: Mar 2017
Location: Germany
Posts: 234
Quote:
Originally Posted by poisondeathray View Post
If you train your own model based on similar source material, you will get better results for any of the machine learning algorithms
That's what I'd like to do with colourisation too. For example, I have a Smurfs (yes, the small blue creatures) short feature from 1959 in black & white. If I had any idea about this whole model thing, I would train it with coloured scenes from the later series where the Smurfs are blue...
But unfortunately I don't, and I fear it would take too much time to learn all this.