22nd March 2017, 16:28 | #21 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
I'm pretty sure these warping distortions at the end got amplified; I didn't perceive them as this heavy at first sight before. Just try an experiment: glue both results together, the beginning and middle of your newest result with the end of the older result, and voilà, a pretty stable overall result. And of course you're concentrating on those two angels; if you conducted an eye-tracker experiment, you would see the ROI being the two angels, and everything in their close proximity is perceived with the highest priority.
__________________
all my compares are riddles so please try to decipher them yourselves :) It is about Time Join the Revolution NOW before it is to Late ! http://forum.doom9.org/showthread.php?t=168004 Last edited by CruNcher; 22nd March 2017 at 16:51. |
24th March 2017, 17:46 | #22 | Link | |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
Quote:
24th March 2017, 18:50 | #23 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
For the answer, read my post here.
That post also explains why some in this thread thought my script provided some good results: I was able to tweak some things which are not "exposed" with SVP and its Interframe front end. As for things said in this thread, I still am not convinced that any of the DCT settings are going to provide any substantial, real improvements, i.e., it won't produce differences that actually matter. Note I didn't say there wouldn't be differences, only that those differences won't really get at the reasons why motion estimation fails when doing frame rate changes or other operations which involve creating new frames from adjacent frames. The real issue is how to define "objects" and how to track them. The motion estimation done by any of these MVTools-derived filters relies on nothing more than tracking pre-determined blocks of pixels rather than pre-identifying actual objects in the frame. This is why block size is the most important variable to change when trying to get good results. Depending on the video and the size of the things being tracked (like people's legs, vertical fence posts, and other difficult-to-track items), different block sizes will work on some videos better than others. I find a block size of 16 to be a good starting point, but sometimes find 8 or 32 works better. The block overlap can then provide some fine tuning. In general, I only use this technology in conjunction with something else and then mix the two together. The reason is that other technologies, such as frame blending, never fail badly, but they also don't produce results that are as good as motion estimation, but only when motion estimation is behaving. Unfortunately, when motion estimation (including other tools like Twixtor) fails, it fails sepectacularly, ruining the viewing experience. You cannot rely on it. So, motion estimation is not a "set it, and forget it" tool, and if you use it that way for creating new frames, you will get burned, and it will be sooner rather than later. |
25th March 2017, 04:56 | #24 | Link |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
In the Natural Grounding Player / Yin Media Encoder, I upscale videos from 288p 25fps to 768p 60fps. I get the best results by running Interframe between the 2 frame doubles, and it is one of the most important steps. For low-quality videos with lots of artifacts, SVP actually removes a lot of those artifacts by creating the interpolated frames! It sounds too good to be true, but it works well.
I'm wondering whether the approach you suggest here would give better results, by combining two approaches to reduce the severity of artifacts when one fails. I'm looking for a generic script that will work most of the time, with a few simple tweakable settings. Do you have a specific script I could try, to see the difference in my case?
25th March 2017, 07:24 | #25 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Last edited by CruNcher; 25th March 2017 at 07:27.
25th March 2017, 22:05 | #26 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
I put both versions on two timelines in my NLE, with the motion-estimated version on the dominant (i.e., default) track. I play the video, usually at 1.5x to 2x normal speed (to get through it in a hurry). When I see a bad frame (or several bad frames), I simply cut to the other track until the problem goes away.

For somewhat critical work, I will sometimes crossfade from the ME version to the other version to make the switch less apparent. For really critical work, I will create a motion mask, feathered at the edges, to replace only the parts of the frame that are broken. This produces virtually perfect results, but it obviously takes quite a bit of time. When I do paid work (once in a while some of my stuff ends up in movies or on TV), it is worth the time to do this.
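The feathered-mask repair described above has a rough AviSynth analogue. This is only a sketch: the input names, rectangle coordinates, and frame range are hypothetical, and a real motion mask would be shaped to the broken object rather than being a box:

```avisynth
# me = motion-estimated clip, fallback = frame-blended clip (same format/fps)
me       = AviSource("me.avi")       # hypothetical inputs
fallback = AviSource("blend.avi")

# White box over the broken region, blurred repeatedly to feather the edges
box  = BlankClip(me, color=$FFFFFF).Crop(0, 0, 200, 150)
mask = Overlay(BlankClip(me, color=$000000), box, x=400, y=300)
mask = mask.Blur(1.5).Blur(1.5).Blur(1.5)

# Patch the fallback into the masked region (Overlay uses the mask's luma)
fixed = Overlay(me, fallback, mask=mask)

# Use the patched clip only for the broken frames, e.g. 1200-1230
Trim(me, 0, 1199) + Trim(fixed, 1200, 1230) + Trim(me, 1231, 0)
```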
26th March 2017, 00:22 | #27 | Link | |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
Quote:
26th March 2017, 00:44 | #28 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
You simply cannot anticipate the problems that get created. Also, things which "look good" to an algorithm can look like heck to the human eye.

I've written over 100 AVISynth scripts and have spent many thousands of hours editing video over the past twenty years. I understand both sides and, as an EE, programmer, and former software manager, I think I know what can and cannot be done (although I am often amazed and surprised by what software people manage to create).

The best way to express my skepticism about finding an algorithmic solution to this problem is this: if you could actually detect the anomaly created by motion estimation, then the motion estimation itself would be able to avoid creating it in the first place. Since even the commercial software cannot do this (i.e., even people with advanced programming skills and economic incentives), and since they've been working on it for a long time, I don't think it will happen anytime soon.
26th March 2017, 01:07 | #29 | Link | |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
What I'm saying is that software could let you manually go over the video at 50% speed to mark which frames are corrupt, and then handle the rest. Exactly what you're doing, but without having to hack around with manual scripts.
The first part of your process could probably be done easily. Quote:
Right now I'm implementing Deshaker (a VirtualDub filter), which is difficult to use manually with its two passes, especially if you want to preview various settings, and especially if you want to adjust settings for various segments. I don't know whether your process would be appropriate for my needs, but one thing that could be done is to enter the frame ranges for which to use the alternate method, and handle everything else automatically. E.g.: 100-115, 140-144, 160-180
__________________
FrameRateConverter | AvisynthShader | AvsFilterNet | Natural Grounding Player with Yin Media Encoder, 432hz Player, Powerliminals Player and Audio Video Muxer
Last edited by MysteryX; 26th March 2017 at 02:42.
26th March 2017, 04:25 | #30 | Link |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
|
Essentially, what you're doing is pretty simple, if I understand correctly. You use two algorithms: a more aggressive frame interpolation (that causes more artifacts) and a safer method for when it fails. Then the idea is to identify for which frames or areas to use the alternative method.
If this approach were to be semi-automated, it would be better done as an AviSynth script than as standalone software. A script or plugin could be designed that takes a string like "100-115,140-144,160-180", perhaps even allows specifying rectangles or zones for masks, and automatically does everything you're manually scripting. The only challenge I see is that this would require conditional filters, but ScriptClip doesn't currently work with MT. If you're doing a lot of this and spending a lot of time on the process, perhaps it would be worth developing such a utility to systematize it.
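For the range-driven switching itself, AviSynth's built-in ConditionalReader can already select between two clips from a plain text file of frame ranges, without ScriptClip. A sketch (the input names, file name, and ranges are hypothetical):

```avisynth
# interp = aggressive interpolated clip, safe = fallback clip (same format/fps)
interp = AviSource("interp.avi")   # hypothetical inputs
safe   = AviSource("safe.avi")

# Return "safe" wherever the per-frame variable use_safe is 1
result = ConditionalFilter(interp, safe, interp, "use_safe", "==", "1")
ConditionalReader(result, "bad_frames.txt", "use_safe")
```

where bad_frames.txt marks the corrupt ranges:

```
Type int
Default 0
R 100 115 1
R 140 144 1
R 160 180 1
```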
26th March 2017, 04:57 | #31 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
This entire process is well documented in posts I made in the Vegas forum many years ago.

2. Because Vegas provides scripting, I already have the automation you suggest. If, for instance, I want to cut to the other clip but do it via a two-frame crossfade, I can press one key and it will do the entire cut and crossfade. What's more, if I wanted to, I could instead simply insert markers at each place I want to fix, and then do all the cuts using a batch version of the same script. Again, if you look at the scripting portion of the Vegas forum, you will find many of my posts describing some of my dozens of Vegas scripts.

Even though Sony pretty much abandoned Vegas, and the new owner, Magix, doesn't appear to be doing anything to make it better, it is still the most productive editing tool on the planet because of its scripting capability.
26th March 2017, 23:30 | #34 | Link |
Registered User
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Tachyon (used in broadcast for fps conversion) uses a fallback method with masking and feathering:
https://www.telestream.net/pdfs/app-...achyon_VPL.pdf (pages 5-6). They find problematic areas and then replace them with frame-blended or nearest frames, using masking and feathering. It looks like they use motion-vector quality as the decision criterion. I'm just surprised that this doesn't break "local" motion coherency between frames. I'd love to see this in MVTools.

Last edited by kolak; 26th March 2017 at 23:33.
27th March 2017, 00:53 | #35 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,346
Quote:
Have you tested it or seen samples? Doing this manually usually looks poor when you try to take care of occlusions in that manner (blending or nearest-frame through accurate, user-defined rotoscoped masks), so I doubt an "automatic" method using a less accurate mask would look any better.
27th March 2017, 06:47 | #37 | Link |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
Johnmeyer, the first part of your process is actually very simple to do.
Write a filter that takes two clips and a string as parameters. The filter returns either clip based on the frame number, as configured in the string. Frame blending and making the transitions unnoticeable is a whole other story; perhaps blend both clips in the transition frames. Is it like an artist's work, where you have to handle each situation differently? I'm just curious: which part of your process can't be systematized?
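A filter very much like the one described above already exists: ReplaceFramesSimple from the RemapFrames plugin takes two clips and a range string. A sketch (the input names and ranges are hypothetical):

```avisynth
# Requires the RemapFrames plugin
interp = AviSource("interp.avi")   # hypothetical inputs
safe   = AviSource("safe.avi")

# Use "safe" for the listed frame ranges, "interp" everywhere else
ReplaceFramesSimple(interp, safe, mappings="[100 115] [140 144] [160 180]")
```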
27th March 2017, 10:07 | #38 | Link |
Registered User
Join Date: Dec 2004
Location: Terneuzen, Zeeland, the Netherlands, Europe, Earth, Milky Way,Universe
Posts: 689
A very simple solution is to create two clips from the same source: one with ChangeFPS() and one with interpolation (this can be done with MVTools2 or InterFrame()).
Then we can select scenes with ClipClop(): Code:
source = Avisource("L:\VdP\VdP_Sp4_gekuist.avi").ConvertToYV12()
changed = source.ChangeFPS(25)
V0 = changed
V1 = InterFrame(source, NewNum=25, NewDen=1, Cores=8, GPU=true)

NickNames = """ # Pseudonyms for clips
I = 1
"""

SCMD = """
I 0,20
I 836,1285
I 1723,2312
I 3032,3202
I 3986,4456
I 4682,4938
I 6060,6600
"""

SHOW = true
ClipClop(V0, V1, scmd=SCMD, nickname=NickNames, show=SHOW)

Fred.
__________________
About 8mm film: http://www.super-8.be Film Transfer Tutorial and example clips: https://www.youtube.com/watch?v=W4QBsWXKuV8 More Example clips: http://www.vimeo.com/user678523/videos/sort:newest |
27th March 2017, 11:15 | #39 | Link | |
Registered User
Join Date: Nov 2004
Location: Poland
Posts: 2,843
|
Quote:
27th March 2017, 16:55 | #40 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
Of course you can use ClipClop, but to do that you already have to know the frame numbers for your in/out points. To get those, you've already done all the work, presumably in your NLE. In Vegas (my NLE), once I arrive at the frame where the switch should be made, I just make it then and there. It takes less time to finish the job "on site" than to type or write out the frame numbers, transfer them to a script, and then execute the script. I see zero benefit to that workflow: it adds extra steps, takes more time, and doesn't let me nudge or make slight changes easily.
I have never understood why people insist on making AVISynth into an editing tool. It really is not well-suited to that job. But, if you want to do it, knock yourself out and have fun!