Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Development

Old 28th January 2006, 02:10   #521  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
A standalone version could be made, but I don't think that alone would make it noticeably faster, if at all. Also, the nice thing about having it as an AviSynth filter is that I don't have to mess with opening/decoding sources, saving the result, etc... which is a big pain in the ass. There are some potential areas for improving its speed, but only a couple of the routines in EEDI2 lend themselves to SIMD implementations, and those use only a little of the current processing time. I could probably speed up the more intensive routines by writing them in plain assembly myself, but that is a big job and I am still changing them some atm.
Old 28th January 2006, 02:34   #522  |  Link
Chainmax
Huh?
 
 
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
What about just making the standalone version without any optimizations? I'd love to see this implemented in emulators.
__________________
Read Decomb's readmes and tutorials, the IVTC tutorial and the capture guide in order to learn about combing and how to deal with it.
Old 28th January 2006, 05:02   #523  |  Link
foxyshadis
ангел смерти
 
 
Join Date: Nov 2004
Location: Lost
Posts: 9,565
HQ4x looks a lot nicer (for sharp, low-res, high-contrast pixelly animation) and retains fine details, imho, and it runs much faster than 2x2xEEDI2. EEDI2 smooths edges somewhat, much like the far faster 2xSaI or even cubic interpolation; where it does outperform the others is on curves in antialiased material.

HQ4x
EEDI2

Not a representative sample, of course, and a bit of a matter of taste.

heh, I wonder if I could use isochroma's old neat image script to interface with photozoom pro, with its amazing (and amazingly slow) s-spline resize.

EEDI does have one significant advantage: you can apply it as many times as you want, as far as I can tell. (AviSynth itself has problems with huge frames, though; that sucks, since I'd love to use it as a poster-scan processor.) Chaining HQXx, on the other hand, is generally a very bad idea: it causes bad haloing even on the second generation and just gets worse from there, with uninterpolated areas standing out much more. It was created for one very specific purpose, and sucks for anything else, heh.

Last edited by foxyshadis; 28th January 2006 at 05:17.
Old 28th January 2006, 15:08   #524  |  Link
Chainmax
I don't like HQ4x much; in my opinion it makes the picture look much too cartoonish. I prefer just using HQ2x and letting the video card scale up to monitor size (video cards use bilinear, right?). So for me it would be more interesting to see EEDI2 vs. HQ2x.

Last edited by Chainmax; 28th January 2006 at 15:16.
Old 29th January 2006, 04:16   #525  |  Link
tritical
EEDI2 is really not that great for enlarging a regular image. It takes too many risks, searches much farther than it needs to when finding connections, and blurs too much. The same is true of type=1/3 in TDeint. Interpolating a missing field in an image is a much different ballgame from normal resizing. I can think of a number of modifications that would improve how EEDI2 performs when upsizing a normal pic. Also, I don't think EEDI2 could ever really compete with HQ2x or HQ4x at resizing those kinds of non-antialiased, blocky, high-contrast video game images... that is exactly what HQ2x was designed for, and it does a pretty good job. I'm gonna try out a few modifications to improve how EEDI2 handles normal images.
Old 29th January 2006, 05:25   #526  |  Link
foxyshadis
Chainmax, I bet you would like something like 2xSaI, Eagle, or even bicubic followed by a strong sharpen filter with few halos. I might ask at the ZSNES board whether that would be possible. HQXx is custom-made for perfect sharpness, with everything else fuzzy and unsharp, and you're right that it's quite annoying to have no middle ground.

Tritical, that would be awesome, I love it. =D I'm a little curious on how it works, since I get fuzzy on C algorithms. Does it fit a line/curve to each edge? Or try to create vectors from each point?
Old 29th January 2006, 23:29   #527  |  Link
Chainmax
tritical: great, I hope you get some good results.

foxyshadis: I never was much of a fan of image enhancement filters until HQ2X, but it's currently my favorite image enhancement filter for emulators. What do you mean by "followed by a strong sharpen filter with few halos", by the way?
Old 31st January 2006, 08:20   #528  |  Link
foxyshadis
Tritical, I'm curious. I know EEDI has a very specific purpose, doubling a field's height, but since AviSynth currently lacks any really good vectorization functions, this is about as close as it gets. Spline36Resize is pretty decent too, sometimes even better, but overall it's all EEDI. I've been toying around and found it can do amazing things, although artifacts build up after a few generations.

So I'm wondering: would it be possible to use the already-calculated vectors to resize to any arbitrary width, interpolating as necessary, rather than applying it repeatedly and doing a final resize? I guess for very large sizes it'd have to smooth out the vectors/curves more than normal. If that'd be too difficult or would interfere with its primary use, then disregard.

Chainmax, I just meant to cut down the effect of the card's bilinear upsize, although I didn't find anything that's really preferable without being cartoony. The biggest issue with HQ2x (and 3x/4x) is that it's limited to 3x3 blocks, which is why EEDI and spline resizers look much smoother on long edges and curves, but they are also significantly slower.

Quick comparison to the above 2 pics:
Photozoom Pro

If I can compile it too, I'll add greycstoration's wavelet-ish resize, just for kicks.

Now that I look closer, eedi2 really does the best job by far on the background, even with a few generational artifacts.

Last edited by foxyshadis; 31st January 2006 at 09:28.
Old 31st January 2006, 09:45   #529  |  Link
tritical
Quote:
I'm a little curious on how it works, since I get fuzzy on C algorithms. Does it fit a line/curve to each edge? Or try to create vectors from each point?
Basically, it finds a vector at each point and does linear interpolation in that direction, but in two steps. It makes an initial pass to get tentative directions at each point in the field and then smooths/erodes/dilates the direction map. When it interpolates the missing field from those directions, it only connects pixels that are detected as having the same direction and matching in intensity. The directions are found using a 4-scan-line vector matching method (a custom method that combines ideas from 3 or 4 papers I've read). Vector matching versus directions based on gradients (the TDeint type=1 and pp=7 in TFM modes use a gradient direction method) is one of the differences between interpolation for deinterlacing and normal resizing. The gradient-based direction method simply doesn't work very well for deinterlacing interpolation, since half the signal is gone (detection gets poor as the slope gets smaller), so vector matching is pretty much the only good method for connecting long edges/lines. Whereas if I were going to make a normal resizer, I would probably use a combination of gradient/vector detection.
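The two-step idea (build a direction map, then interpolate linearly along it) can be sketched in miniature. This is a toy Python illustration of directional interpolation of one missing scan line, not EEDI2's actual code; the function name, the border clamping, and the integer per-pixel directions are my own simplifications:

```python
# Toy sketch of directional interpolation (NOT EEDI2's code): given the
# scan lines above and below a missing line, plus a per-pixel edge
# direction d (in pixels), interpolate along the direction instead of
# straight vertically.

def interpolate_line(above, below, directions):
    w = len(above)
    out = []
    for x, d in enumerate(directions):
        xa = min(max(x + d, 0), w - 1)  # clamp at the frame borders
        xb = min(max(x - d, 0), w - 1)
        out.append((above[xa] + below[xb]) // 2)
    return out

# With d = 0 everywhere this degenerates to plain vertical averaging:
print(interpolate_line([10, 20, 30], [30, 40, 50], [0, 0, 0]))  # [20, 30, 40]
```

The hard part in the real filter is producing a reliable direction map (the smoothing/eroding/dilating described above); the interpolation itself is the easy step.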

Quote:
So I'm wondering if it would be possible to use the already calculated vectors to resize to any arbitrary width, interpolating as necessary, rather than repeated application and a final resize? I guess for too-large sizes it'd have to smooth out the vectors/curves more than normal. If that'd be too difficult or interfere with its primary use, then disregard.
It would be somewhat complicated, but if you simply used the direction of the point nearest the interpolation point and then fit the desired curve or function in that direction, it wouldn't be too difficult. Although, if you wanted to be sub-pel accurate in finding the desired points for interpolation, it would definitely take a while to finish.

For the moment I've started working on the motion-mapping filter again (I'm getting tired of seeing TDeint's motion-mapping fail all the time), but I've already added a direction-based resizer for normal images to my todo list.
Old 17th February 2006, 03:55   #530  |  Link
Zarxrax
Registered User
 
Join Date: Dec 2001
Posts: 1,212
@tritical or others:

With a mixed 24/30fps source, is it possible to create an output which preserves all frames but is NOT variable frame rate? This would essentially be like doing ivtc on the 24fps parts, and like an assumefps() on the 30fps sections.
This will produce an incorrect running time, but with the advantage of preserving all frames perfectly while still using a constant framerate.

This could be useful in some instances where the purpose is to do some kind of editing on the video stream, and you are only concerned with having all the frames look correct.
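As a quick sanity check of how far off the running time would get (my own arithmetic, assuming standard NTSC rates, not something from the thread):

```python
# If 29.97fps sections keep every frame but the whole stream is flagged
# 23.976 fps, those sections play 30000/24000 = 1.25x slower.
FILM = 24000 / 1001    # 23.976... fps
VIDEO = 30000 / 1001   # 29.97... fps

def stretched_seconds(video_seconds):
    """Duration of a 29.97fps section once its frames play at 23.976 fps."""
    frames = video_seconds * VIDEO
    return frames / FILM

print(stretched_seconds(60))  # a 1-minute 30fps section lasts 75 seconds
```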
Old 17th February 2006, 06:09   #531  |  Link
foxyshadis
Process it as you normally would a hybrid in mode 4+5, and then use mkvout="nul". No timecodes, no problem! Set your fps as desired. (This will cause pain if you ever need to turn hybrid edits back into watchable video, though.)

120 fps is generally a better solution, since durations and sync are preserved, and reverting back to VFR or hybrid is relatively painless. Tritical made a neat little tool called avi_tc_package to turn a mode-5 decimated video into a super-fps one; you might like it.
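The arithmetic behind the 120 fps approach, for anyone curious (my own illustration, not output from avi_tc_package):

```python
# At 119.88 fps both NTSC rates divide evenly, so hybrid material can be
# represented with exact frame repeats and no duration error.
from fractions import Fraction

SUPER = Fraction(120000, 1001)  # 119.88... fps
FILM = Fraction(24000, 1001)    # 23.976... fps
VIDEO = Fraction(30000, 1001)   # 29.97... fps

repeats_film = SUPER / FILM     # each film frame appears 5 times
repeats_video = SUPER / VIDEO   # each video frame appears 4 times
print(repeats_film, repeats_video)  # 5 4
```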
Old 17th February 2006, 19:42   #532  |  Link
Zarxrax
Awesome, I should have thought of that
Old 22nd February 2006, 18:59   #533  |  Link
Zarxrax
@tritical: I have encountered what might possibly be a bug in TDeint... or maybe it's a bug in BlendBob, I really can't tell.

source = MPEG2Source("ep3.d2v", cpu=0)
deint = source.Tdeint(mode=1).BlendBob()
return deint


This script works fine for the most part; however, there are certain frames where, although they normally look correct, if I choose "refresh" in VirtualDubMod the frame suddenly appears jagged, as if it were not being passed through BlendBob. If you back up quite a ways and then come back, it displays fine again.

If I use a different bobber besides tdeint, this problem does not occur.


Edit: Also, there is a bug in cthresh of TFM. The documentation says that the minimum value is -1, and this should cause all frames to be postprocessed. However, -1 actually seems to do the opposite, and cause all frames to be reported as "clean".

Last edited by Zarxrax; 22nd February 2006 at 19:04.
Old 22nd February 2006, 21:11   #534  |  Link
tritical
Quote:
Edit: Also, there is a bug in cthresh of TFM. The documentation says that the minimum value is -1, and this should cause all frames to be postprocessed. However, -1 actually seems to do the opposite, and cause all frames to be reported as "clean".
I'm not in a position to test atm, but it is probably a problem with the mmx/sse2 routines and negative values for cthresh. Try setting opt=0 in tfm and see if it works correctly.

I'll try to look into the problem with your blendbob script tonight sometime and see what's wrong.

EDIT: I took a quick look at the source and the assembly comb detection routines are indeed broken for cthresh values < 0. I'll fix it in the next release, thanks for reporting it.
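For the curious, one plausible mechanism for that kind of breakage (an illustrative Python sketch of my own, not TFM's actual assembly) is that unsigned saturating byte math cannot represent a negative threshold, so -1 gets reinterpreted as the byte 255 and nothing ever exceeds it:

```python
# Illustration only -- not TFM's code. A common SIMD idiom tests
# diff > cthresh via unsigned saturating subtraction (psubusb-style),
# which breaks when cthresh is negative: -1 becomes the byte 255.

def combed_scalar(diff, cthresh):
    return diff > cthresh              # the intended signed comparison

def combed_simd_style(diff, cthresh):
    c = cthresh & 0xFF                 # byte the register holds: -1 -> 255
    return max(diff - c, 0) != 0       # saturating subtract, test nonzero

print(combed_scalar(0, -1))      # True: cthresh=-1 should flag everything
print(combed_simd_style(0, -1))  # False: with c=255 nothing is flagged
```

This matches the reported symptom (all frames suddenly "clean"), and also why the pure-C path selected by opt=0 would behave correctly.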

Last edited by tritical; 22nd February 2006 at 21:19.
Old 22nd February 2006, 23:00   #535  |  Link
Zarxrax
Regarding the TDeint problem, I *think* older versions of TDeint worked properly. I don't have an old version on hand to try anymore, though.

Edit: and here is a sample clip you should be able to see the problem with.
Open this script in virtualdubmod and look at frame 141. Then do edit>refresh.
http://zarxrax.kicks-ass.net/test.demuxed.zip

Last edited by Zarxrax; 22nd February 2006 at 23:13.
Old 23rd February 2006, 02:42   #536  |  Link
tritical
The reason the frame changes on refresh is that BlendBob bases part of its decision about which frames to blend on the average absolute difference of the past 7 frames. When you run up to frame 141 linearly, that history is completely filled, but when seeking to frame 141 it has only one valid value (from frame 140, which is processed just before). This changes the result of the following if statement, because the average calculation does not adjust for having fewer than 7 values (the unfilled entries are treated as 0 and drag the average down substantially):

Code:
if ((pnThresh != 0.0) &&
(absDiffPN > (avgAbsDiff/5)) && // Don't freak out on plain Bob()
((double)usedDiff-(double)avgAbsDiff > (double)absDiffPN*pnThresh))
When run up to linearly, that statement is false for frame 141, but on seeking it evaluates to true, and the average of P/N is used for frame 141 instead of P/C. I tried a quick hack of tracking the number of valid entries in the history when computing avgAbsDiff and adjusting the calculation accordingly. After the change it picked the correct blend for frame 141 when seeking. I'm afraid Leak might have made it the way it is for a reason, though... so you might want to ask him about it. And even with the change, there is still the possibility of seeking giving different results than linear playback.
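The failure mode can be reproduced in miniature (a toy Python sketch; the names are mine, not BlendBob's):

```python
# Averaging a fixed-size history whose unfilled slots are zero drags the
# average down right after a seek, flipping threshold comparisons.
HISTORY = 7

def avg_buggy(history):
    """Always divides by HISTORY, so empty (zero) slots lower the result."""
    buf = (list(history) + [0] * HISTORY)[:HISTORY]
    return sum(buf) / HISTORY

def avg_fixed(history):
    """Divide by the number of valid entries actually present."""
    valid = list(history)[:HISTORY]
    return sum(valid) / len(valid) if valid else 0.0

# After a seek only one frame diff is known:
print(avg_buggy([70]))  # 10.0 -- far below the true running level
print(avg_fixed([70]))  # 70.0
```

Once the history fills up, the two versions agree, which is why the problem only shows up right after a seek.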

Last edited by tritical; 23rd February 2006 at 02:48.
Old 23rd February 2006, 03:06   #537  |  Link
Zarxrax
Registered User
 
Join Date: Dec 2001
Posts: 1,212
Ah, I see. Thanks for checking into it.
Old 26th February 2006, 23:34   #538  |  Link
rig_veda
Registered User
 
Join Date: Aug 2005
Posts: 20
Does TTempSmooth have some effect on the decisions of TDecimate further up in the filter chain?
I have a script that goes like this (plugin loading part omitted):

mpeg2source("D:\Big\Project_Ghibli\Totoro.d2v")
TFM(PP=0,display=false,input="Totoro_TFM.txt")
TDecimate(input="Totoro_TDecimate.txt",display=true)

and another one

mpeg2source("D:\Big\Project_Ghibli\Totoro.d2v")
TFM(PP=0,display=false,input="Totoro_TFM.txt")
TDecimate(input="Totoro_TDecimate.txt",display=true)
TTempSmooth(maxr=7)

The TTempSmooth(maxr=7) is the only difference between the scripts.
Now TDecimate's output shows that even though it's running on the precalculated input files, it makes different decisions on one and the same frame in the first and in the second script. In one case, for example, it decided to use frame 336; in the other, to blend 334-336 (50.00,50.00). I thought that filters further down the chain wouldn't affect the ones higher up, but would just eat what they are fed. And in this case I was all the more surprised, since I was using an input file with TDecimate. (Not using one doesn't make a difference here, though.)
What is going on?

Last edited by rig_veda; 26th February 2006 at 23:36.
Old 27th February 2006, 07:06   #539  |  Link
tritical
Would you be able to post the clip that causes the problem? You're right that having TTempSmooth there shouldn't change things... I tested that scenario using the settings you posted on a clip on my computer, but didn't encounter any problems.
Old 27th February 2006, 08:22   #540  |  Link
rig_veda
I have no webspace available for uploading atm, but I can send you a small version of the clip that exhibits the problem, if your mailbox handles large attachments. The RAR is 17.3 MB. You can contact me at fede2a01 at uni-trier.de to give me your contact information, if you want to go that way. If not, I'll look into getting space to upload.