Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
#1
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,456
AI EXTRApolation? (adding frames to the end of a video)
I've been looking at the likes of DAINapp and FlowFrames that do pretty amazing things when it comes to interpolating new frames in a video, but is there anything that will extrapolate new frames?
I have a short, fast-moving, fairly motion-blurred clip (a camera being pulled back through a forest) and I want to add just two frames to the end of it that continue the motion. There's a lot of parallax because of all the trees at different distances from the camera. Any suggestions?
#2
Registered User
Join Date: Nov 2004
Location: Spain
Posts: 404
Hi:
A mad idea: with mvtools, get the motion vectors from the second-to-last frame to the last frame of the clip, and apply MCompensate to the last frame (e.g. by using a fake clip: clip.Trim(1,0).DuplicateFrame(last)). Repeating the procedure with the newly created frame yields a second "extrapolated" frame (hopefully). But no AI involved; probably not much natural intelligence involved either. Last edited by AVIL; 25th September 2022 at 22:54. Reason: Completion of the idea
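The trick can also be sketched outside AviSynth. Below is a minimal, illustrative numpy sketch of the same idea: estimate one motion vector per block between the second-to-last and last frames (a crude stand-in for MAnalyse), then push each block of the last frame one more step along its vector (standing in for MCompensate on the faked clip). All function names are hypothetical, not the mvtools API.

```python
import numpy as np

def block_motion(prev, cur, block=8, search=4):
    """Estimate one motion vector per block by exhaustive block matching
    (a crude stand-in for mvtools' MAnalyse)."""
    h, w = cur.shape
    vecs = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = cur[by:by+block, bx:bx+block].astype(np.int32)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y+block, x:x+block].astype(np.int32)
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vecs[(by, bx)] = best_v
    return vecs

def extrapolate(prev, cur, block=8, search=4):
    """Push each block of the last frame one more step along its motion
    vector -- the 'fake clip + MCompensate' trick in plain numpy."""
    h, w = cur.shape
    out = cur.copy()
    for (by, bx), (dy, dx) in block_motion(prev, cur, block, search).items():
        # vector points from cur back into prev, so the forward motion
        # is (-dy, -dx); continue it one more step
        ty, tx = by - dy, bx - dx
        if 0 <= ty and 0 <= tx and ty + block <= h and tx + block <= w:
            out[ty:ty+block, tx:tx+block] = cur[by:by+block, bx:bx+block]
    return out
```

As with any naive forward warp, this leaves holes and trailing smears where blocks uncover the background; mvtools' occlusion handling does considerably better, which is the point of AVIL's suggestion.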
#3
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,678
Is this of any use at all? [only for single-frame prediction]
Quote:
EDIT: or a bit more from elsewhere, Quote:
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? Last edited by StainlessS; 26th September 2022 at 01:32.
#5
Registered User
Join Date: Sep 2007
Posts: 5,217
Here is an example of forward prediction: t+1 derived from t-1 and t0, with a usable working demo:
https://github.com/megvii-research/CVPR2023-DMVFN
(apngs; they should animate in most browsers) Demo example: axial object rotation looks decently predicted. Another example on a different data set. <edit> see post 11 below, added t+2 frame Quote:
Another tool in the tool belt. Other possible uses are fixing bad splices/scene changes (maybe by reversing the sequence after the scene change) or extending a scene. Last edited by poisondeathray; 22nd March 2023 at 22:34.
#6
Big Bit Savings Now !
Join Date: Feb 2007
Location: close to the wall
Posts: 1,361
Wow. Even the torn ropes/cables look convincing.
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain) "Data reduction ? Yep, Sir. We're working on that issue. Synce invntoin uf lingöage..."
#8
Registered User
Join Date: Sep 2007
Posts: 5,217
Maybe a few seconds per frame. It's not as fast as mvtools2, but not as slow as something on the order of ESRGAN.
Another machine-learning prediction project: https://github.com/YueWuHKUST/CVPR20...-Interpolation I haven't had a chance to play with this one yet. It's Linux-only and I'd have to break out a VM.
#10
Big Bit Savings Now !
Join Date: Feb 2007
Location: close to the wall
Posts: 1,361
|
Simple. Your video ends at frame n_old.
Extrapolate frame n_old+1 and add it to your source video; the last frame is now n_new. Then extrapolate n_new+1 in turn. There it is: your n_old+2.
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain) "Data reduction ? Yep, Sir. We're working on that issue. Synce invntoin uf lingöage..." |
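The loop described above is easy to express in code. A toy sketch, in which a naive per-pixel linear extrapolation (2*f[-1] - f[-2]) stands in for whatever one-step predictor you actually use (DMVFN, mvtools, ...); both function names are hypothetical:

```python
def extrapolate_one(frames):
    """Stand-in one-step predictor: naive per-pixel linear extrapolation
    from the last two frames (2*f[-1] - f[-2]), clamped to 8-bit range.
    In practice this would be a call into DMVFN, mvtools, etc."""
    a, b = frames[-2], frames[-1]
    return [min(255, max(0, 2 * pb - pa)) for pa, pb in zip(a, b)]

def extend(frames, n):
    """Append n extrapolated frames, re-running the predictor on the
    grown clip each time, exactly as the post describes."""
    frames = list(frames)
    for _ in range(n):
        frames.append(extrapolate_one(frames))
    return frames
```

Because each new frame is fed back in as input, prediction errors compound with every step, which is why only a couple of extrapolated frames are practical (as post #11 confirms).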
#11
Registered User
Join Date: Sep 2007
Posts: 5,217
It gets worse, and errors accumulate (as expected).
Here is the above with t+2 (from t and the predicted t+1), and I'll remove the one above.
General quality is worse than something like RIFE, where you have before/after frames. It also "fails" in the usual "fail" situations: repeating patterns, picket fences, etc.
#15
Registered User
Join Date: Sep 2007
Posts: 5,217
Quote:
Normally you would photoshop a few frames in between, adjusted with motion, for reference points.
#17
Registered User
Join Date: Jul 2018
Posts: 880
Quote:
1. Get a forward-interpolated frame sequence from the previous good frames.
2. Get a backward-interpolated frame sequence from the next good frames.
3. Interleave the two sequences and pass them to a majority-based temporal 'denoiser'; it should select the sample or block values with the highest probability, which may reduce the number of interpolation errors.
For best results, majority-based processing should have an odd number of data samples per output sample (a sample or a block, depending on the engine: vsTTempSmooth is an example of a per-sample engine, while mvtools (MDegrainN pmode=1) is block-based). So, to attempt better results, several interpolations from different interpolation engines may be fed to the majority-selector engine, which will try to output the most probable-looking result. A per-sample majority-selector mode is expected in vsTTempSmooth in the near future, after the feature request is transferred from MDegrainN, or perhaps a faster implementation of the algorithm.
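The per-sample majority selection in step 3 amounts to taking the median of each pixel across an odd number of candidate frames. A minimal numpy sketch of that idea (illustrative only, not the vsTTempSmooth or MDegrainN implementation):

```python
import numpy as np

def majority_select(candidates):
    """Per-sample majority vote over an odd number of candidate frames:
    the median of each pixel across the candidates.  An isolated
    interpolation error in one candidate is outvoted by the others."""
    stack = np.stack(candidates).astype(np.int32)
    return np.median(stack, axis=0).astype(np.uint8)
```

With three candidates (say forward interpolation, backward interpolation, and a third engine's output), a gross error at one pixel in any single candidate cannot reach the output, since the median ignores the single outlier.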
#18
Registered User
Join Date: Mar 2017
Location: Germany
Posts: 210
Thank you very much for your hints. I did similar things some years ago already and tested a lot of such strategies, also with different engines, including Alchemist. But, logically, the quality of synthesised frames falls off exponentially when there is motion, even of bigger objects, no matter what you try. By the way, I also preferred block-based approaches in most cases. I also mixed several things (I don't remember exactly) and spent weeks on this, with no great success.
I didn't have much hope that A.I. could improve this a lot, but at least a bit. But there is nothing new under the sun.
#19
Registered User
Join Date: Sep 2007
Posts: 5,217
Quote:
You generally see the best results in machine learning with a custom model matching the input characteristics. The three provided models are trained on limited, specific training sets. If you train your own model on similar source material, you will get better results from any of the machine-learning algorithms.
Quote:
The forward-only prediction (from previous frame(s)) of mvtools2 is significantly worse than DMVFN, using any variant of MFlow, MCompensate, or MRecalculate.
#20
Registered User
Join Date: Mar 2017
Location: Germany
Posts: 210
Quote:
But unfortunately I haven't, and I fear I would need too much time to learn all this.