Old 15th August 2019, 18:23   #281  |  Link
videophile
Registered User
 
Join Date: May 2019
Posts: 4
Are you THE John Meyer?

If yes, then thank you for your great script, which I have been using since 2013! Is there an official/updated version somewhere?

Regarding FrameRateConverter, that's bad news

I think I will stick to my current technique: I create two clips at 36 fps, one with your method, the other with frame blending, and then I manually select the one that works best on each scene, using my NLE software (Edius).
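For reference, here is a simplified sketch of the two renders (not John's exact script; the file name and the MVTools2 settings are just placeholders):

Code:
source = LWLibavVideoSource("input.mkv")                    # placeholder for the real source clip
sup    = MSuper(source, hpad=16, vpad=16)
bvec   = MAnalyse(sup, isb=true,  blksize=16, overlap=4)
fvec   = MAnalyse(sup, isb=false, blksize=16, overlap=4)
flow   = MFlowFps(source, sup, bvec, fvec, num=36, den=1)   # motion-estimated 36 fps
blend  = ConvertFPS(source, 36)                             # frame-blended 36 fps
return flow   # render this, then switch to "return blend" and render again
I then cut between the two renders per scene in Edius.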
Old 15th August 2019, 20:45   #282  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,695
Quote:
Originally Posted by videophile View Post
Are you THE John Meyer?
I am one of thousands, but yes, I am the guy who spent half a day trying hundreds of combinations of MVTools2 settings, trying to understand what each one did, and figure out what impact they had on the typical motion estimation artifacts that all motion estimation tools seem to create. These problems are present even in the expensive commercial programs, such as Twixtor, After Effects, Motionperfect, etc.

I managed to find some sort of "sweet spot" that has held up pretty well for many people. The most important setting was block size. What I have found is that for frame rate conversion you want to use a big block size, which makes perfect sense, since this keeps small details from producing "morphs." By contrast, when doing motion-estimated de-noising, you almost always want to use smaller block sizes.
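To put rough numbers on that (an illustrative sketch only, not my actual script; "source" stands for your input clip and the values are not tuned):

Code:
sup = MSuper(source, pel=2)
# frame rate conversion: big blocks keep small details from producing morphs
bv_frc = MAnalyse(sup, isb=true,  blksize=32, overlap=16)
fv_frc = MAnalyse(sup, isb=false, blksize=32, overlap=16)
interpolated = MFlowFps(source, sup, bv_frc, fv_frc, num=60, den=1)
# motion-estimated de-noising: small blocks track fine detail better
bv_dn = MAnalyse(sup, isb=true,  blksize=8, overlap=4)
fv_dn = MAnalyse(sup, isb=false, blksize=8, overlap=4)
denoised = MDegrain1(source, sup, bv_dn, fv_dn, thSAD=300)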

The mask idea is a good one, and when I am doing really critical work, I'll create both a motion estimated version as well as a frame-blended version, but then manually go through each frame and either cut between them for that one frame, or create a manual mask, which is the equivalent of what this tool tries to do. I only do this for paid jobs, of course, and it is very, very tedious.
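In script terms, that patching step is basically a masked merge, something like this (a sketch only; "flow", "blend" and the hand-painted "mask" clip are placeholders):

Code:
# mask: white where the motion-estimated version breaks down, black elsewhere
patched = Overlay(flow, blend, mask=mask)
# or, with masktools2:
# patched = mt_merge(flow, blend, mask, luma=true)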

Last edited by johnmeyer; 15th August 2019 at 20:48.
Old 15th August 2019, 23:29   #283  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
The best motion-adaptive engines are in XFile (formerly Alchemist) and in Tachyon. Both are GPU based. Apparently Nvidia's neural-network-based engine is also good.
Old 16th August 2019, 01:25   #284  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,695
Quote:
Originally Posted by kolak View Post
The best motion-adaptive engines are in XFile (formerly Alchemist) and in Tachyon. Both are GPU based. Apparently Nvidia's neural-network-based engine is also good.
Thanks for listing those two. I need to look at the results they can produce. I'm always looking for better tools, even if they cost money.
Old 16th August 2019, 12:58   #285  |  Link
videophile
Registered User
 
Join Date: May 2019
Posts: 4
Quote:
Originally Posted by johnmeyer View Post
The mask idea is a good one, and when I am doing really critical work, I'll create both a motion estimated version as well as a frame-blended version, but then manually go through each frame and either cut between them for that one frame, or create a manual mask, which is the equivalent of what this tool tries to do. I only do this for paid jobs, of course, and it is very, very tedious.
Yes, this is exactly what I am doing currently. I wish I could rely on a more automated, i.e. less time-consuming, method.
Old 16th August 2019, 15:08   #286  |  Link
doxel
Registered User
 
Join Date: Jul 2019
Posts: 1
I've found two John Meyer scripts on this forum that differ in only two lines, and I'm wondering which version people are using:

Code:
super = MSuper(source, hpad = 16, vpad = 16, levels = 1)
superfilt = MSuper(prefiltered, hpad = 16, vpad = 16)
Code:
super = MSuper(source, hpad = 16, vpad = 16, levels = 1, sharp = 1, rfilter = 4) 
superfilt = MSuper(prefiltered, hpad = 16, vpad = 16, sharp = 1, rfilter = 4)
What do 'sharp' and 'rfilter' do? Should I use them?

Last edited by doxel; 16th August 2019 at 15:11.
Old 16th August 2019, 17:25   #287  |  Link
manolito
Registered User
 
 
Join Date: Sep 2003
Location: Berlin, Germany
Posts: 3,079
Have a look at this post:
https://forum.doom9.org/showthread.p...52#post1841952

IMO these two params do make sense, but you need to test them for yourself...
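A quick way to compare is to render the script both ways and interleave the results (a sketch; the two clip variables stand for whatever your script produces with and without the extra parameters):

Code:
a = result_plain.Subtitle("hpad/vpad only")
b = result_sharp_rfilter.Subtitle("sharp=1, rfilter=4")
Interleave(a, b)   # step through adjacent frames in AvsPmod or VirtualDub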
Old 16th August 2019, 18:31   #288  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by johnmeyer View Post
Thanks for listing those two. I need to look at the results they can produce. I'm always looking for better tools, even if they cost money.

They cost serious money. XFile is $10k; Tachyon is similar.
Old 16th August 2019, 22:56   #289  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,377
Quote:
Originally Posted by kolak View Post
The best motion-adaptive engines are in XFile (formerly Alchemist) and in Tachyon. Both are GPU based. Apparently Nvidia's neural-network-based engine is also good.


You mentioned this a year or two ago. Is that marketing speak or actual testing?

I don't know about Tachyon, but Alchemist Ph.C / Ph.C high effort is terribly overrated in terms of its motion-compensated conversions.

I got a chance to test Alchemist at a trade show, then later in more detail through somebody I know at a facility that has it. The artifacts were quite similar to other solutions, and it would fail in exactly the same conditions and situations. I ran it through typical clips where most ME approaches fail in automatic mode, not cherry-picked marketing demo clips. It was slightly better in that the artifacts were maybe a bit smaller (there weren't massive fails, just slightly smaller edge-morphing fails).

I think the product got taken over by another company (Grass Valley now?), so maybe it has improved 100x. But when it was still Snell, it was similar to other solutions in pure automatic mode.

The Nvidia demos look great and promising, but until it's a usable product proven on more than a few cherry-picked test scenarios, it's still unproven vapourware in my mind.
Old 16th August 2019, 23:17   #290  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,695
Quote:
Originally Posted by poisondeathray View Post
I don't know about Tachyon, but Alchemist Ph.C / Ph.C high effort is terribly overrated in terms of its motion-compensated conversions.

I got a chance to test Alchemist at a trade show, then later in more detail through somebody I know at a facility that has it. The artifacts were quite similar to other solutions, and it would fail in exactly the same conditions and situations. I ran it through typical clips where most ME approaches fail in automatic mode, not cherry-picked marketing demo clips. It was slightly better in that the artifacts were maybe a bit smaller (there weren't massive fails, just slightly smaller edge-morphing fails).

I think the product got taken over by another company (Grass Valley now?), so maybe it has improved 100x. But when it was still Snell, it was similar to other solutions in pure automatic mode.

The Nvidia demos look great and promising, but until it's a usable product proven on more than a few cherry-picked test scenarios, it's still unproven vapourware in my mind.
That's really interesting information and will save me some time trying to do my own tests. I too have found that the artifact generation seems to be an issue with the concept itself, and not with the specific implementation: if I take video of a picket fence while driving by (one of the best torture tests), really bad things are going to happen when I use any of these products.

The problems are also highly dependent on the original frame rate: if you apply ME to 10 fps material and try to take it to 60 fps, all heck will break loose. But, if you want to take 120 fps to 720 fps, it will look perfect, no matter what program you use.

I have always thought that somewhere in that last sentence lies the solution to the problem of getting better results, perhaps by doing multiple estimations sequentially, rather than all at once.

As (admittedly flimsy) support for that idea, when doing broadband audio noise reduction (e.g., single-ended hiss removal), you get much better results by reducing the noise only a little bit, and then doing a second (and third, fourth, fifth, etc.) pass, each one doing just a little reduction. You get far fewer artifacts this way.

However, I don't know if it is possible to only do a "little ME" in each pass.

I'm just trying to come up with ideas ...
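For instance, doubling in stages instead of jumping straight to the target rate. Purely a sketch of the idea, completely untested:

Code:
# 25 -> 50 -> 100 in two smaller steps instead of 25 -> 100 in one jump
# "source" = the original 25 fps clip (placeholder)
sup1  = MSuper(source, hpad=16, vpad=16)
bv1   = MAnalyse(sup1, isb=true,  blksize=32, overlap=16)
fv1   = MAnalyse(sup1, isb=false, blksize=32, overlap=16)
step1 = MFlowFps(source, sup1, bv1, fv1, num=50, den=1)
sup2  = MSuper(step1, hpad=16, vpad=16)
bv2   = MAnalyse(sup2, isb=true,  blksize=32, overlap=16)
fv2   = MAnalyse(sup2, isb=false, blksize=32, overlap=16)
MFlowFps(step1, sup2, bv2, fv2, num=100, den=1)
# whether this beats a single 25 -> 100 pass is exactly the open question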
Old 16th August 2019, 23:51   #291  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,377
Quote:
Originally Posted by johnmeyer View Post
The problems are also highly dependent on the original frame rate: if you apply ME to 10 fps material and try to take it to 60 fps, all heck will break loose. But, if you want to take 120 fps to 720 fps, it will look perfect, no matter what program you use.

I have always thought that somewhere in that last sentence lies the solution to the problem of getting better results, perhaps by doing multiple estimations sequentially, rather than all at once.

As (admittedly flimsy) support for that idea, when doing broadband audio noise reduction (e.g., single-ended hiss removal), you get much better results by reducing the noise only a little bit, and then doing a second (and third, fourth, fifth, etc.) pass, each one doing just a little reduction. You get far fewer artifacts this way.

However, I don't know if it is possible to only do a "little ME" in each pass.

I'm just trying to come up with ideas ...


I don't think that's applicable here, but go ahead and test it out

The reason why higher-FPS sources work better, in general, is that you are starting with more frequent true motion samples. So, in general, the distance that a given object moves between frame A and frame B is smaller for the higher-FPS source (there are exceptions like whip pans, explosions, etc., but in general terms). The larger the motion (and the distance travelled by objects), the more difficult it is to interpolate.
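As a rough illustration (numbers made up): an object crossing a 1920-pixel-wide frame in two seconds moves about 38 px per frame at 25 fps, but only about 8 px per frame at 120 fps, so the matching has far less ground to cover between samples.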

Not only that, but complex motion paths and things in nature are not linear. Someone walking in a straight line does not mean their muscles bend and flex linearly, and the frame snapshot taken of an arm bending might miss the peak of the bend. You miss the trajectory and the motion curves.

And that only addresses some of the problems with ME. The other huge problem is occlusions and object boundaries: objects passing over one another, objects deforming (yet remaining the same object). This requires user intervention in the higher-end solutions. If that could be done accurately and automatically, you'd have a winner. Some of the github projects look amazingly accurate for object/mask generation, and the ones for super-resolution too.
Old 22nd October 2019, 18:12   #292  |  Link
Matt Kirby
Registered User
 
Join Date: Jan 2005
Location: Germany
Posts: 42
Hi guys,

I have a problem with FrameRateConverter and anime.
I get ugly blends or ghosts in my frames when I use it to create slow motion from my anime video. My original video is 25 fps. I pump it up with interpolated frames to 37.5 fps, then slow it down to 25 fps, so the video is a factor of 1.5 longer than the original. (I need it at this speed, don't ask why.)

My script:
Code:
LoadPlugin("...\tools\lsmash\LSMASHSource.dll")
LWLibavVideoSource("...\Mach a S  Test.mkv")
FrameRateConverter(NewNum=37500, NewDen=1000, Preset="Anime") 
AssumeFPS(25)
The ugly result: see the attachment.

Here is the original testfile:

https://www.dropbox.com/sh/pl29vs4t2...dinij6Q8a?dl=0


Is it possible to convert the video to 37.5 fps without these effects?

Last edited by Matt Kirby; 22nd October 2019 at 18:18.
Old 22nd October 2019, 23:15   #293  |  Link
manono
Moderator
 
Join Date: Oct 2001
Location: Hawaii
Posts: 7,406
Frame interpolation often doesn't work well with animations. You might be better off with frame duplication.

Code:
ChangeFPS(37.5)
AssumeFPS(25)
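Dropped into your script (reusing your paths), that would be something like:

Code:
LoadPlugin("...\tools\lsmash\LSMASHSource.dll")
LWLibavVideoSource("...\Mach a S  Test.mkv")
ChangeFPS(37.5)   # duplicates frames instead of interpolating, so no blends or ghosts
AssumeFPS(25)     # retimes playback, making the clip 1.5x as long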
Old 23rd October 2019, 08:22   #294  |  Link
Matt Kirby
Registered User
 
Join Date: Jan 2005
Location: Germany
Posts: 42
OK, it looks much better. Thank you!
Old 28th October 2019, 04:48   #295  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,377
Quote:
Originally Posted by poisondeathray View Post
The Nvidia demos look great and promising, but until it's a usable product proven on more than a few cherry-picked test scenarios, it's still unproven vapourware in my mind.

Mini review - Current status of available Optical Flow AI research methods (that are semi-usable right now and don't require a PhD in programming to use)

1) Super SloMo - This one was the biggest "letdown" for me

Remember the "Super SloMo" Nvidia AI demo a while back?
https://www.youtube.com/watch?v=MjViy6kyiqs
https://people.cs.umass.edu/~hzjiang...ts/superslomo/

It's going to be released for NGX on RTX cards eventually; the author said it could be implemented in pytorch.
https://developer.nvidia.com/rtx/ngx

So there is a working pytorch implementation with a pretrained model using the Adobe240fps dataset:
https://github.com/avinashpaliwal/Super-SloMo

The results? Mediocre. Slightly better in some areas than mvtools2, slightly worse in others. Basically the same problems as other optical flow methods: object/edge artifacts, picket-fence problems, etc. It might be an issue with this implementation, or perhaps the dataset and training aren't as good as the other, unavailable set. I tried several tests, including samples cut from Nvidia's own demo video, and the results weren't as good as the demo. Their demo video had clean edges and almost no warping artifacts, but the trained model used was different, and there might be pytorch implementation differences. Hopefully the official NGX release will improve with different distributed trained models.

There is a tensorflow implementation of Super-SloMo using the same Adobe240fps dataset, but the provided tensorflow model has motion problems. The results it produces are definitely worse than the bike GIF demo and the pytorch implementation.
https://github.com/rmalav15/Super-SloMo

2) CyclicGen, voxelflow based
I got CyclicGen to run, but the results are poor: lots of edge-morphing artifacts. I tried both the small and large models.
https://github.com/alex04072000/CyclicGen

3) sepconv using the "lf" model seems to do OK, with quality similar to Super SloMo; i.e. slightly better in some areas and slightly worse in others than mvtools2-based optical flow.
https://github.com/sniklaus/pytorch-sepconv


Of course, most of my tests were on "problematic" samples where OF typically "fails" or has problems, so it's probably not representative of general use
Old 28th October 2019, 15:04   #296  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,695
I just tuned past a re-run of M*A*S*H on MeTV and they have used optical flow to make it look like it was shot on video rather than film. I assume they had access to whatever "best of breed" is available for professional use. It too doesn't look very good.
Old 28th October 2019, 16:48   #297  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Has anybody done any tests to see whether high bit depth performs better in mvtools than low bit depth?
With low bit depth and e.g. pel=4, the max vector length is about 32, so if the movement is greater than that it cannot be used.

God I hate autocorrect. Mobile.

Edit: Presumably vectors can be much longer in HBD.
Edit: Of greater importance for HD clips.

Last edited by StainlessS; 28th October 2019 at 16:59.
Old 13th May 2020, 06:05   #298  |  Link
Adam65
Registered User
 
Join Date: May 2018
Posts: 8
Hi,
How do I install FrameRateConverter on Windows with MPC-HC/BE?

Last edited by Adam65; 13th May 2020 at 06:47.
Old 13th May 2020, 07:26   #299  |  Link
Sharc
Registered User
 
Join Date: May 2006
Posts: 3,997
Quote:
Originally Posted by Adam65 View Post
Hi,
How do I install FrameRateConverter on Windows with MPC-HC/BE?
It would put too much load on the CPU. Playback would just stutter.
Old 10th July 2020, 00:19   #300  |  Link
Treaties Of Warp
Registered User
 
Join Date: Apr 2019
Posts: 14
At the "best" settings (Preset="slowest", Dct=1, DctRe=1), it's WAAAAY better than Interframe at not producing noticeable artifacts. But it's also WAAAAAY slower.

It does seem to have one problem... it doesn't do well with scrolling end credits, especially white text on a black background. "Boxes" or sections of credit text seem to "slide" around a little as they scroll. Has anyone encountered that and have any settings that might fix it?
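For reference, my call is basically this (the target rate here is only an example):

Code:
FrameRateConverter(NewNum=60000, NewDen=1001, Preset="slowest", Dct=1, DctRe=1)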