Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.


 

Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Development

Old 18th September 2004, 19:17   #21  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Put up version 0.9.3, changes:

+ Added order = -1 option, will detect parity from avisynth (this is now the default)
+ Added hints option for reading telecide hints for interlaced/progressive (use hints=true,post=1 in telecide)
+ 5 field motion check now includes checks over 4 field distances
- Fixed a bug in YUY2 type = 1 deinterlacing

Also, after testing on the sources Mug Funky sent, I would not recommend using type=1 (modified ELA) on non-anime/cartoon material. Go with either kernel or cubic interpolation for natural sources. I will probably change the default in the next version.

@Chainmax
I'll take a look at the clip and see just how bad it is.

Last edited by tritical; 18th September 2004 at 19:19.
tritical is offline   Reply With Quote
Old 19th September 2004, 13:53   #22  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,394
Quote:
Originally posted by tritical
after testing [...] I would not recommend using type=1 (modified ELA) on non anime/cartoon material.
Some description about why you wouldn't recommend that?

Actually, I'm working on a Live recording (DVB source) where the upper fields are just plain unusable - they are fieldblends of the shifted bottom fields (!). The experts from television ...
So, I've just one field available for upscaling. After trying out lots of things, I got the best results on jaggyness reduction, shimmer reduction and keeping things as sharp as possible with
TDeint(mode=1).TomsMoComp(1,-1,1).LanczosResize().
That's the best base filtering I could come up with so far.

(And BTW, it's astonishing how good a 464*208 source can look at 704*512, with some dedicated slow filtering.)
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Didée is offline   Reply With Quote
Old 19th September 2004, 14:08   #23  |  Link
scharfis_brain
brainless
 
scharfis_brain's Avatar
 
Join Date: Mar 2003
Location: Germany
Posts: 3,607
didée: for your purpose (the information of one field completely missing) ELA is the best way to go.

but for bobbed deinterlacing of natural video, ELA produces more flickering (but fewer jaggy diagonals) than the kernel interpolation method.

this is due to the nature of the kernel: it at least has some information from the other fields. ELA, in contrast, works blind.

[
I did something similar some months ago. the television experts sometimes knock out one of the fields and interpolate it by line averaging (like shifting the remaining field by 1/2 pixel).
They seem to intend to get the film look, but I say they produce a "pop-sta(i)r look".
for exactly this scenario I wrote a function that detects this kind of pseudo-progressive material and interpolates those blurred fields using tmc(-1,-1,0). video overlays that are interlaced/true progressive are not touched by my function.
]
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
scharfis_brain is offline   Reply With Quote
Old 20th September 2004, 17:56   #24  |  Link
erratic
member
 
erratic's Avatar
 
Join Date: Oct 2003
Location: Belgium
Posts: 106
I compared the speed of TDeint(mode=1,order=1,type=2) with Leak's KernelBob(order=1,sharp=true) and KernelBob was nearly 4 times faster. The source was normally interlaced PAL (not telecined stuff). Is there any way to make TDeint as fast as KernelBob? Or should I just use KernelBob for normally interlaced stuff and TDeint for fieldblended crap (as scharfis_brain calls it)?
erratic is offline   Reply With Quote
Old 20th September 2004, 18:19   #25  |  Link
Leak
ffdshow/AviSynth wrangler
 
Leak's Avatar
 
Join Date: Feb 2003
Location: Austria
Posts: 2,441
Quote:
Originally posted by erratic
Is there any way to make TDeint as fast as KernelBob?
Well, a quick peek at Tritical's sources shows that he's using pure C++ while my implementation of KernelBob uses MMX to speed things up; I'm not sure if you'll get a 4x speedup out of using MMX here (didn't look at the algorithm closely enough), but it sure would make things faster.

Then again, I don't have the intention to MMXise yet another deinterlacer in the near future, so somebody else would have to do it...

np: Arovane - Cry Osaka Cry (Lilies)
Leak is offline   Reply With Quote
Old 20th September 2004, 18:51   #26  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@erratic
There is one thing I can say with absolute certainty... it will never be as fast as Leak's MMX optimized kernel deint, even if it were MMX optimized. It's simply a matter of complexity... in terms of the motion checking (which is substantially more involved than kernel deint's), weaving from either the prev or next field (goes back to the motion checking), the motion map denoising (though it is pretty weak atm), the switching of the one-way kernel between forwards and backwards, etc... There is also the fact that I pretty much refuse to code or even try to make MMX/iSSE versions of things anymore... mainly out of laziness and not having the time, but also because it tends to limit how much you can test new ideas, since a major algo change can force you to rewrite all that assembly code. I am still testing/changing quite a bit of the main code atm... I may look into optimizing some functions when things get more stable... but don't count on it. I'm actually surprised it was only 4x slower... try mtnmode=1, maybe we can get it 8-10x slower.

@Didee
I probably generalized that statement too much. Like every edge-adaptive interpolation scheme, this one suffers from problems/artifacts in very detailed areas with multiple edges/lines. Anime usually doesn't have as much fine detail of that kind as movies, etc... except for kanji and some other things, of course. So I'll take back what I said and simply say that if ELA interpolation is causing a lot of artifacts, try kernel or cubic interpolation instead. Kernel interpolation is definitely much better when it comes to flickering in almost static areas; this can even be noticeable when doing same-framerate deinterlacing on noisy sources.

@Chainmax
Aside from being noisy, I didn't think the sample looked that bad; it was better than what I was expecting. In terms of getting progressive frames I thought it was pretty good, given how it was described in the other thread.

Last edited by tritical; 20th September 2004 at 18:54.
tritical is offline   Reply With Quote
Old 21st September 2004, 00:00   #27  |  Link
Chainmax
Huh?
 
Chainmax's Avatar
 
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
Well, the description in the thread corresponds to the (unfiltered) vob samples. The clip you downloaded was compressed using the script I posted earlier. That's why I offered to post some vob samples so that you could make a comparison.
And yeah, the results are very impressive. Did you see the scenes with close-ups of Spawn's face? With all the other methods I tried, the eyes showed a lot of jaggyness. TDeint is the only filter that almost eliminated that. I think I have finally found an adequate script for ripping this DVD (just need to throw some deen at it).
Chainmax is offline   Reply With Quote
Old 21st September 2004, 03:23   #28  |  Link
joshbm
Registered User
 
joshbm's Avatar
 
Join Date: Jun 2003
Location: Land of the Noobs & the Home of the Brave
Posts: 349
Quote:
Originally posted by Didée
Some description about why you wouldn't recommend that?

Actually, I'm working on a Live recording (DVB source) where the upper fields are just plain unusable - they are fieldblends of the shifted bottom fields (!). The experts from television
So, I've just one field for upscaling available. After trying out lots of things, I got the best results on jaggy-ness reduction, shimmer reduction and keeping things as sharp as possible with
TDeint(mode=1).TomsMoComp(1,-1,1).LanczosResize().
That's the best base filtering I could come up with so far.

(And BTW, it's astonishing how good a 464*208 source can look at 704*512, with some dedicated slow filtering )
I tried your TDeint(mode=1).TomsMoComp(1,-1,1).LanczosResize(). It said that the video framerates did not match - except when I set mode=0. BUT I want to bob my video to get double the framerate, so that doesn't help. What am I doing wrong here?

Thanks!
Josh
__________________
Tired of waiting for video encodes? Get a totally new P4 Desktop Computer for FREE.
joshbm is offline   Reply With Quote
Old 21st September 2004, 09:25   #29  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,394
Aaargh. I swapped parameters when recalling from my (poor) memory.
Of course I used TDeint(mode=0, field=1), since there is nothing in my stream that could be bobbed ... and the blends are in the bottom fields, not the tops. *Cough*.

Joshbm, I assume the framerate mismatch you're experiencing comes from the one-line script with bobbing. Try separating it:

TDeint(...)
TomsMoComp(1,-1,1)
LanczosResize(...)

Although TMC shouldn't mind, since with (1,-1,1) it does spatial-only interpolation without field matching ... but you never know. There are worse things in life.

***

However, with spatial-only interpolation, I never got anything out of any deinterlacer that *really* satisfied me. At least not when applying the filter "directly". There's always way too much jaggyness left for my taste - though it gets (much) better if one first blows the fields up to double the height.

So, here's another thought that has been wandering through my mind for quite some time now. It's about spatial-only interpolation of missing fields, to avoid jaggyness. Perhaps this is already sort-of implemented in some filter, perhaps it's not. I just don't know...

I think that all methods so far start by working on each single pixel, and then examine a more-or-less small area around the current pixel to gather information - be it 5x5 tap, modified or unmodified ELA, or whatever.
Well, as far as jaggyness is concerned, it is only an issue on edges close to "horizontal". Starting with edges having a slope of "2", like

____XX
__XX__
XX____


the problem already starts to go away, and on 45° edges a simple resize() works near-to-perfect. So, what we have to conquer are only the edges "below" 45°.

Now, what about ... a "scanline search"? A filter that searches 2 complete scanlines at a time, finds the correlating areas between them, and interpolates accordingly?
Searching (sort of "parsing", in this respect) would basically enable something like the following, relatively easily:


..........XXXXXXXXXXXXXXXXXX...........
....::::::.::::::::::::::::.:::::::....
....XXXXXXX................XXXXXXXX....

..........XXXXXXXXXXXXXXXXXX...........
.......xxxoxxxx........xxxxoxxx........
....XXXXXXX................XXXXXXXX....


o = found area match between scanlines
: = not-matching area of a "matched feature"
x = interpolation around matches

where the "horizontal length" of the interpolation is based on the length of the not-matching area. It could be 1/3 of the not-matching area in "dumb mode", or it could be adaptive based on scanlines +2 and -2, if including those in the evaluation can be afforded.

For areas, or "features", that have no "matching" on one side or the other, a simple (lanczos) resize would be appropriate, then.

Now, that's a draft, crude & basic, and of course needs some more comparing & computing. But basically, it doesn't seem too unreasonable to me. Or is that already something like an "ultra-primitive ELA"?

I even tried to script something in that direction ... but you know, *directional* searching is almost impossible in an avisynth script. :/
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Didée is offline   Reply With Quote
Old 22nd September 2004, 03:47   #30  |  Link
joshbm
Registered User
 
joshbm's Avatar
 
Join Date: Jun 2003
Location: Land of the Noobs & the Home of the Brave
Posts: 349
@Didée:

Weird, it still doesn't work... lol.. but this works:

TDeint(...).AssumeFps(last.framerate).TomsMoComp(1,-1,1).LanczosResize(...)

Whatever works.

Thanks!
Josh
__________________
Tired of waiting for video encodes? Get a totally new P4 Desktop Computer for FREE.

Last edited by joshbm; 22nd September 2004 at 04:42.
joshbm is offline   Reply With Quote
Old 22nd September 2004, 19:59   #31  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@Didee
Your idea is very similar to plain n+n tap ELA, though you're probably thinking of a slightly larger scale. Plain 5+5 tap ELA works like this (where o is the pixel to interpolate):
Code:
a b c d e
. . o . .
f g h i j
Take the absolute value of the differences of pairs (a,j), (b,i), (c,h), (d,g), and (e,f). Then simply average the two pixels that have the lowest difference and use that for o. This is exactly what tomsmocomp uses, except that it clips the calculated value into the range of +-2 or 3 of the min and max of c and h. The reason is that this method is insanely prone to artifacts, since the minimum difference can easily have nothing to do with edge direction, i.e. the edge runs through (b,i), but (e,f) has the minimum difference, etc... However, this clipping eliminates a lot of the benefits of the algorithm, since on most edges interpolated values need to be outside this range. It is easy to see why this method can't be used to search more than a few pixels away.
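In rough Python, the 5+5 tap scheme just described might look like this (a toy sketch for one missing pixel, not tomsmocomp's actual code; the +-3 clip range is illustrative):

```python
def ela5(above, below, x):
    """Toy 5+5 tap ELA for one missing pixel o at column x.

    Pairs are taken at mirrored offsets across o:
    (a,j)=(x-2,x+2), (b,i)=(x-1,x+1), (c,h)=(x,x), etc.
    """
    best_diff, best_avg = None, None
    for off in (-2, -1, 0, 1, 2):
        p, q = above[x + off], below[x - off]
        if best_diff is None or abs(p - q) < best_diff:
            best_diff, best_avg = abs(p - q), (p + q) // 2
    # tomsmocomp-style safety clip: stay near the vertical
    # neighbours, which suppresses bad-match artifacts
    lo = min(above[x], below[x]) - 3
    hi = max(above[x], below[x]) + 3
    return max(lo, min(hi, best_avg))
```

Note how the clip on the last two lines throws away most of what the directional search just found whenever the chosen average lies outside the vertical-neighbour range - exactly the trade-off described above.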

DCDi (directional correlational deinterlacing), which is very similar, uses the same technique but uses horizontal vectors of 2 pixels when doing the minimum difference comparisons, and also does a couple of extra pixel difference comparisons to make sure that the direction is along that of the edge. This helps to reduce artifacts somewhat and gets closer to what you are probably thinking of. You can expand on that idea and use 3 or 5 pixel horizontal vectors to search for corresponding areas in the scan lines above and below. This works well, except when you start expanding the search area to, say, 8-10 or more pixels to either side. If there is only one line in the surrounding area then you're probably ok, but in detailed areas you'll get all kinds of mismatches. I've actually tried out this exact idea (using 5 pixel groups to search the scan lines above and below, while limiting the search to edge areas only); it works well in areas with only one major edge. Another problem is that both of the above algorithms are limited to a small set of angles to which they can accurately adapt, since they use linear interpolation with only one pixel from each of the scan lines.
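A toy sketch of that vector-matching idea (3-tap vectors and a small search radius here; illustrative only, not DCDi's or TDeint's actual code):

```python
def vector_ela(above, below, x, radius=2, taps=3):
    """Pick an interpolation direction by matching short horizontal
    pixel vectors between the lines above and below, then average
    the two pixels the chosen direction points at."""
    half = taps // 2
    best = (float("inf"), 0)
    for off in range(-radius, radius + 1):
        cost = sum(
            abs(above[x + off + k] - below[x - off + k])
            for k in range(-half, half + 1)
        )
        best = min(best, (cost, off))   # ties favor more negative offsets
    off = best[1]
    return (above[x + off] + below[x - off]) // 2
```

With a clean single diagonal edge the matching finds the right offset; as the post says, in detailed areas several offsets can tie or mismatch, which is where the artifacts come from.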

Now, what TDeint uses, and what I've been calling modified ELA, I should probably just call edge-directed interpolation, since it does exactly that. It doesn't do any direct pixel comparisons at all. It literally finds the direction of the gradient vector via the local partial derivatives of the surrounding pixels in the scan lines above and below, and then rotates it 90 degrees to get the isophote direction, or direction of least change. It also calculates gradient magnitude and local variance, and drops back to linear interpolation in non-edge areas. Once it has the direction, it uses linear interpolation on the scan lines above and below to obtain the points at which a line passing through point o with the calculated direction would intersect each line, and then averages those values together. The main problem with this method is that it will almost always mess up in detailed areas where two lines intersect, since the gradient vector will most likely end up pointing in the isophote direction. However, I think I have finally come up with a simple way to fix this and eliminate some spurious directions due to noise. In general I've found that this method usually surpasses pixel matching/searching. However, since it calculates the derivatives in the x direction over a small area (only 3 pixels), it can't accurately detect the direction of nearly horizontal edges (it simply says they are purely horizontal). Atm I allow it to use directions that would go out to 4 pixels to either side, which corresponds to roughly 15 degrees. However, it is really only good for edges with around 25-30 degrees of slope or more. I'm currently working on improving this method around areas where lines intersect; an obvious improvement would be to calculate the directions for the 4 other surrounding pixels, which would give a better indication of the local geometry. Handling edges with smaller slopes could also be done by calculating approximations of the local partial derivatives over a wider area.
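A very rough sketch of that gradient/isophote scheme (toy Python with made-up derivative estimates; TDeint's real implementation differs in the details):

```python
def edge_directed(above, below, x, max_off=4.0):
    """Estimate the gradient from the two available scan lines,
    rotate it 90 degrees to get the isophote (least-change)
    direction, then sample each line where a line through o in
    that direction intersects it. Falls back to plain vertical
    averaging when the horizontal derivative vanishes."""
    gx = ((above[x + 1] - above[x - 1]) + (below[x + 1] - below[x - 1])) / 4.0
    gy = (sum(below[x - 1:x + 2]) - sum(above[x - 1:x + 2])) / 3.0
    if abs(gx) < 1e-6:                    # nearly horizontal edge: give up
        return (above[x] + below[x]) / 2.0
    off = max(-max_off, min(max_off, gy / gx))  # shift per line along isophote

    def lerp(line, pos):                  # linear interp at a fractional pos
        i = int(pos)
        return line[i] + (pos - i) * (line[i + 1] - line[i])

    return (lerp(above, x + off) + lerp(below, x - off)) / 2.0
```

On a clean diagonal edge this samples along the edge (giving a clean step), where plain vertical averaging would blur the two sides together; the max_off clamp mirrors the 4-pixel direction limit mentioned above.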

So your method would be pretty similar to DCDi or plain ELA, though with a slightly enlarged search area and possibly a larger search group, maybe 3-5 pixels instead of 1. TDeint's method isn't really that similar to ELA, except for the fact that it still does the interpolation using the pixels that correspond to the intersection points, instead of computing the final value by tuning the coefficients of the neighbor pixels as in NEDI or some other methods. I have been looking into edge-directed methods of interpolation for deinterlacing (read... faster than NEDI) quite a bit recently and am quite interested in any ideas anyone has...
tritical is offline   Reply With Quote
Old 23rd September 2004, 06:13   #32  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Does DCDi do a lerp between the 2 horizontal pixels before averaging between the 2 lines, or something? ELA with a wider aperture doesn't necessarily have to pick 2 opposing pixels to average straightaway ... you could first find the best match between two opposing X-pixel vectors, and then do a subpixel search. Just a couple more lerps.

BTW, you might want to try local centered moments to determine the isophote direction (they can be implemented with running averages, so the number of operations per pixel is pretty low ... probably cheaper than using DoG convolution masks greater than 3x3 for getting the gradient).
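For illustration, the moment trick might look something like this (my sketch, naive sums rather than the running averages MfA mentions; angle convention: 0 = horizontal, y pointing down):

```python
import math

def patch_orientation(patch):
    """Principal-axis angle of a small patch from its centered
    second-order moments: 0.5 * atan2(2*mu11, mu20 - mu02)."""
    total = sum(map(sum, patch)) or 1
    cx = sum(x * v for row in patch for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(patch) for v in row) / total
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
            mu11 += (x - cx) * (y - cy) * v
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
```

A horizontal bar comes out at angle 0, a main-diagonal bar at 45 degrees; in a real filter each sum can indeed be maintained incrementally as the window slides.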

Last edited by MfA; 23rd September 2004 at 06:34.
MfA is offline   Reply With Quote
Old 23rd September 2004, 09:18   #33  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,394
Thanks for the thorough explanation, tritical - It's much clearer now.

Well, the words of my description were probably a bad expression of my thoughts. What I was thinking of is, in some way, more like analog signal processing (or rather, a digital simulation of it). A "tension" between the scanlines that "tears" everything to the right places ... dunno how to explain it better.

Perhaps a comparison to the travelling salesman problem is appropriate. You can throw a supercomputer at it and let it calculate for hours and days. Or you can get the result within moments from a dedicated analog system that more or less lets the problem optimize itself (any given "start" routing modifies itself by moving towards a routing with lower overall tension).
However, the catch with analog systems is that they are working - analog ... :|
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Didée is offline   Reply With Quote
Old 26th September 2004, 06:28   #34  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@MfA
You're right, DCDi first averages the two pixels from each line together and then averages those values together... I had forgotten. And a sub-pixel search for ELA with larger apertures is possible as well. Image moments I hadn't really thought about, mainly because I don't know much about the subject. However, I'm not actually planning to use larger than 3x3 masks for the derivatives; it's just a possibility. Also, in the normal deinterlacing case, the derivatives are only calculated for about 15-20% of the pixels to begin with, since most are handled by the motion adaptation or are within +-3 of both the above and below pixels, in which case I just leave it. So the advantage of being able to use running averages wouldn't be that much.

@Didee
I see what you're saying; there is probably a good way to do that, but I can't think of one. At least not one that could work for lines with both very small and very large slopes without dynamically adjusting the aperture.


@All
I posted version 0.9.4. I haven't been able to work on this much the last week, but here are the changes:

+ Added auto detection of hints (if you don't set a value for hints manually, then TDeint checks the first frame to determine if hints are present and sets the value accordingly)

+ Added mtnmodes 2 and 3... these accomplish what I think scharfis_brain was suggesting earlier in the thread. They act like 0 and 1, but wherever 0 and 1 would have used the avg() of two pixels, 2 and 3 instead use the pixel value from the field that is most similar to the current field.

+ Added clip2 parameter. When using TDeint as a postprocessor for telecide you can get weird results since telecide changes the order of the fields (that's not good for a motion adaptive deinterlacer). So you can specify clip2 for the actual deinterlacing to be done from... here's an example of how it works:

mpeg2source("c:\mysource.d2v")
orig = last
telecide(guide=1,order=1,post=1)
tdeint(order=1,clip2=orig)

Then TDeint reads the hints and output from telecide as usual, but whenever a frame needs deinterlacing it does it from clip2. It also preserves the hints in case any filters later on need to read them. If you don't specify a clip for clip2 then the deinterlacing is done from the input clip as usual.

- Fixed field differencing in kernel interpolation: it was using the wrong fields, and was not correctly adjusting the direction of the kernel to that of the field most similar to the current field


I'm still planning on improving the edge directed interpolation in TDeint. Also plan on improving the motion map denoising and adding built in combed frame detection.

edit:
Posted v0.9.5; right after posting 0.9.4 I realized that I was doing mtnmodes 2 and 3 the hard way instead of the simple, faster, and blatantly obvious way. So now 2 and 3 aren't any slower than their 0 and 1 forms.

Last edited by tritical; 26th September 2004 at 07:27.
tritical is offline   Reply With Quote
Old 3rd October 2004, 23:02   #35  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Well, either no one has had any problems or no one uses it.

I just posted v0.9.6, haven't had time to work on improving the edge-directed interpolation, but I did manage to add built-in combed frame detection and per-field weaving.


Changes from v0.9.5:

+ Added full parameter, allows for ivtc post-processing. full defaults to true.
+ Added cthresh, chroma, and MI parameters... these are used when full=false and with tryWeave option
+ Added tryWeave option, allows TDeint to adaptively switch between per-field and per-pixel motion adaptation. tryWeave defaults to true.
+ Improved field differencing
+ changed mtnmode default to 1


Some explanations...

Full works the same way as the full parameter in fielddeinterlace(). If set to false, then all input frames are first checked to see if they are combed. If a frame isn't combed, then it is returned as is. If it is combed, then the frame is processed as normal. Full can only be used when mode = 0. Full defaults to true.

tryWeave, if set to true, works like this... The field most similar to the current field (either prev or next) is determined. A new frame is then made by weaving this field with the current field. This new frame is checked for combing. If it isn't combed, it is returned. If it is combed, then normal processing (as if tryWeave=false) is done. This allows TDeint to adaptively switch between per-pixel and per-field weaving. This idea is taken from scharfis_brain's suggestion earlier in the thread and his matchbob() function. tryWeave defaults to true.
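The tryWeave decision flow can be sketched like this (a toy model where each scan line is a single number and the helpers are made up; the real filter works per-pixel on real fields and respects field parity):

```python
def weave(field_a, field_b):
    """Interleave two fields (lists of 'scan lines') into a frame."""
    frame = []
    for a, b in zip(field_a, field_b):
        frame.extend([a, b])
    return frame

def is_combed(frame, thresh=50):
    """Crude comb check: a line differing strongly from both of
    its vertical neighbours."""
    return any(
        abs(frame[i] - frame[i - 1]) > thresh
        and abs(frame[i] - frame[i + 1]) > thresh
        for i in range(1, len(frame) - 1)
    )

def try_weave(curr, prev, nxt, fallback):
    """Weave the current field with whichever neighbouring field is
    most similar; keep the woven frame if it isn't combed, otherwise
    fall back to normal per-pixel deinterlacing (here a callable)."""
    d_prev = sum(abs(a - b) for a, b in zip(curr, prev))
    d_next = sum(abs(a - b) for a, b in zip(curr, nxt))
    woven = weave(curr, prev if d_prev <= d_next else nxt)
    return woven if not is_combed(woven) else fallback(curr)
```

On a static scene the woven frame passes the comb check and is returned whole (per-field weaving); with motion the weave combs and the fallback path runs instead.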
tritical is offline   Reply With Quote
Old 4th October 2004, 18:00   #36  |  Link
Mug Funky
interlace this!
 
Mug Funky's Avatar
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,547
you rock.

...that's all i've got to say until i test this a little further.

[edit]

shit, man, you REALLY rock. i barely need IVTC anymore now

[edit 2]

seems to be some odd field-order problems with the new matchbobbing... perhaps when it does the weaving it doesn't re-set the original field-order? i'm getting havoc with the intro sequence to Lain, which has a mix of interlaced, 30p and telecine.

i suspect some of the interlaced stuff on this disc has b0rked field order to begin with (there's bits shot on DV cam in the show where the field order is obviously wrong, but it was probably an effect, as it's that kind of show). however, assumeTFF().bob() is returning different stuff than tdeint(1,1).

i can post a sample if you like.
__________________
sucking the life out of your videos since 2004

Last edited by Mug Funky; 4th October 2004 at 18:23.
Mug Funky is offline   Reply With Quote
Old 4th October 2004, 23:49   #37  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Yeah, if it's not too much trouble a sample would be helpful. On normal material I can't reproduce the problem you described. I do have some weird clips that have what I would call "out-of-order" fields every once in a while; that might be what your clip suffers from. Even when the field order is set correctly, if you do a separatefields() and step through a field at a time, it will jump backwards on some fields. This will cause major problems for any motion adaptive deinterlacing approach. If there are entire sections of the clip that have a different field order from the rest of the clip, that will obviously produce garbage output as well. Even on this type of input, though, TDeint should preserve the same field as bob().

Providing a workaround for clips that have single out-of-order fields, or weird stuff in general, would not be that difficult: just test the output frame to see if it is combed and, if it is, deinterlace it w/o motion detection. That isn't ideal, though, since it would end up catching some good output frames as well. If a clip has whole sections with a different field order, that is a tougher problem; you'd probably have to just process it in sections...
tritical is offline   Reply With Quote
Old 5th October 2004, 06:08   #38  |  Link
pdottz
Registered User
 
Join Date: Jul 2003
Posts: 94
I read the readme file but can't understand it too clearly.
what would be the settings to use for ntsc captured tv? i cap at 720x480. this is the script i use:

Quote:
LoadPlugin("C:\PROGRA~1\GORDIA~1\TomsMoComp.dll")
LoadPlugin("C:\PROGRA~1\GORDIA~1\undot.dll")
LoadPlugin("C:\PROGRA~1\GORDIA~1\FluxSmooth.dll")
AVISource("D:\captures\maincap\eene.avi")
TomsMoComp(1,5,1)
Crop(0,0,720,480)
LanczosResize(512,384)
Undot()
FluxSmooth(7,7)
ConvertToRGB()

tomsmocomp does an awesome job but leaves me with some aliased lines that are clearly noticeable on the capped toons.
i want to try this new filter and would like to try it on this file.
can anyone help?
pdottz is offline   Reply With Quote
Old 5th October 2004, 07:49   #39  |  Link
Mug Funky
interlace this!
 
Mug Funky's Avatar
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,547
hmm... looks like it was my bad. i tracked the problem down to MPEGsource rather than MPEG2source. looks like there's some oddness in the repeat-field flag handling, causing chaotically reversed fields.

everything works fine using DGdecode. (a pity, because every little bit of speed counts, so the small decoding speed advantage of nic's mod was useful to me.)
everything works fine using DGdecode. (pity, because every little bit of speed is useful to me, so the very small decoding speed difference of nic's mod was useful to me).

here's the sample anyway, if you want it - it contains some good testing material besides the field-order oddness that turned out not to be a bug in your plugin (sorry... but i'll let nic know about this).

http://210.49.108.136:8080/lain_intro.m2v
__________________
sucking the life out of your videos since 2004
Mug Funky is offline   Reply With Quote
Old 6th October 2004, 08:33   #40  |  Link
xappy
Registered User
 
Join Date: Nov 2003
Posts: 3
Hi!

Thank you for the great plug-in. I have used it for bobbing my interlaced MiniDV PAL material before creating amazing slow motions.

The plug-in (0.9.6) still produces some very disturbing bobbing artefacts. I know why the artefacts are produced and I can send you some images about it. There is one heavily compressed image as an attachment.

I was thinking to make some modifications to your code, but when I tried to compile your source code without any modifications I got the following errors:

Code:
--------------------Configuration: TDeint - Win32 Release-------------------- 
Compiling... 
TDeinterlaceYUY2.cpp 
E:\TDeint\TDeinterlace.h(199) : error C2065: '_aligned_malloc' : undeclared identifier 
E:\TDeint\TDeinterlace.h(500) : error C2065: '_aligned_free' : undeclared identifier 
TDeinterlaceYV12.cpp 
E:\TDeint\TDeinterlace.h(199) : error C2065: '_aligned_malloc' : undeclared identifier 
E:\TDeint\TDeinterlace.h(500) : error C2065: '_aligned_free' : undeclared identifier 
Error executing cl.exe. 

TDeint.dll - 4 error(s), 0 warning(s)
What am I missing?
xappy is offline   Reply With Quote