Doom9's Forum > Capturing and Editing Video > Avisynth Development

Old 24th May 2005, 10:51   #221  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@MOmonster
Sorry for not getting back to you sooner, but I was on a trip the last four and a half days with no internet access. I read through the text and didn't quite follow some of the specifics, mainly the assumptions about which comparisons would produce more or less combing, but I think I got the general idea. Would you mind sending me a sample to test on? I don't have any clips with blending patterns as you describe. If so just pm me.

@All
[link removed]. This version adds iSSE optimizations for TDecimate's default metric calculation case, that is, when blockx = 32, blocky = 32, nt <= 0, and width and height are mod 16. The speedup is pretty significant: on a 10,000 frame test clip, TDecimate went from running about 120 fps to 1200 fps on my comp, so roughly a factor of 10. Your mileage may vary. The optimizations cover both YV12 and YUY2, with and without chroma.

Last edited by tritical; 7th June 2005 at 01:55.
Old 24th May 2005, 19:46   #222  |  Link
Chainmax
Huh?
 
Chainmax's Avatar
 
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
I have an interlaced clip that gives TDeint some trouble. I will try to upload it somewhere soon.
Old 27th May 2005, 14:54   #223  |  Link
MOmonster
Registered User
 
Join Date: May 2005
Location: Germany
Posts: 495
Sorry for my late answer; until now I only have internet access on weekends.
I have an 8 MB sample for you, but I don't know how to upload it. The sample is so badly telecined that TFM and Telecide use the postprocessor on a great many frames. But many of these postprocessed frames are blends, because your filter just chooses the first field to recreate the progressive frame.

Can anybody help me upload the short sample?

Last edited by MOmonster; 27th May 2005 at 20:35.
Old 27th May 2005, 16:36   #224  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
Free uploads:

http://rapidshare.de
Old 27th May 2005, 17:55   #225  |  Link
MOmonster
Registered User
 
Join Date: May 2005
Location: Germany
Posts: 495
Thank you, LigH.

Here is the sample.

Last edited by MOmonster; 27th May 2005 at 18:02.
Old 28th May 2005, 00:21   #226  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@MOmonster
Thanks. After looking at it, I think your idea could work well. I'm going to implement it in TDeint first, though, because it will be easier. Basically, add a new mode that outputs at the same framerate, but instead of always using the top or bottom field, it chooses adaptively based on your blend-detection-by-comparing-combing idea. We'll see how it goes.

@Chainmax
Any luck uploading that clip? If you don't have a spot I can just give you my ftp info.
Old 28th May 2005, 09:16   #227  |  Link
Chainmax
Huh?
 
Chainmax's Avatar
 
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
I have a place to upload it to, but I had a virus infection a couple of weeks ago and want to make sure I'm clean as a whistle before uploading anything anywhere. It will take a few more days (basically, I need to find someone willing to scan any file from my machine which should be infection free now).
Old 28th May 2005, 12:16   #228  |  Link
MOmonster
Registered User
 
Join Date: May 2005
Location: Germany
Posts: 495
@tritical
Oh, I hadn't expected that it would be easier to implement in TDeint.
I'm really happy that you are trying to implement this blend detection.
Will it also work together with your tryweave option? If so, it should work similarly to my idea for your TFM filter (maybe not equally well in all cases, but of course really nice).

Thanks for your attention.

Last edited by MOmonster; 28th May 2005 at 13:07.
Old 29th May 2005, 04:56   #229  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
I'm not sure exactly how it's all gonna work in the end, but as of right now I think it will be achieved via a tfm+tdeint combination. A script like:

deinted = tdeint()
tfm(clip2=deinted)

So far, I've added a new 3-way match mode into TFM that is like mode 4, but with all matches that go against the field order removed. So for a TFF stream, matching off the top field, it first tries c/p; if the better of those two is detected as combed, it tries u; if u isn't combed it uses that, otherwise it goes back to the best of c/p and does PP.
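For illustration, the selection logic reads something like this rough Python sketch (the `combing` dict and the threshold value are my own stand-ins, not TFM's actual metric or API):

```python
# Hypothetical sketch of the 3-way match selection described above.
# `combing` maps a match name ('c', 'p', 'u') to its combing score;
# lower means less combing. COMBED_THRESH is an invented threshold.

COMBED_THRESH = 9.0

def pick_match(combing):
    """Return (chosen match, whether postprocessing is needed)."""
    # first try c/p and take whichever combs less
    best_cp = min(('c', 'p'), key=lambda m: combing[m])
    if combing[best_cp] < COMBED_THRESH:
        return best_cp, False            # clean c/p match, no PP
    # best c/p is combed -> fall through to u
    if combing['u'] < COMBED_THRESH:
        return 'u', False
    # everything combed -> back to best c/p and postprocess
    return best_cp, True

match, needs_pp = pick_match({'c': 14.0, 'p': 11.5, 'u': 3.2})
```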

I've tested the blend detection method you mentioned above, and while it seems good in principle, it doesn't work that well in practice. The major problem is that the blend ratio is not constant, and that screws up the comparisons. Instead, I decided to try a method that simply checks whether the current field is a blend of the previous/next field. Since the two surrounding fields should be clean, simply determining that the current field is a blend should be enough. To do that it creates 5 frames. Assume we have the following fields:
Code:
 B  <= top field
A C <= bottom fields
and we want to know if B is a blend of A/C. The five frames it creates are:
Code:
0      1           2           3      4 <= frame #

B      B           B           B      B <= top field
A (25%A/75%C) (50%A/50%C) (75%A/25%C) C <= bottom field
It then checks the overall combing level of those five frames. If min(1,2,3) is less than min(0,4) by a significant margin then it decides that B is a blend of A/C. The extra 25/75, 50/50, and 75/25 frames are to account for uneven blend ratios. It seems to be working quite well so far, but will need more testing. Of course the above method will not work for true interlaced material, as it assumes that B either matches with A or C or is a blend of those two.
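As a toy Python sketch of that five-frame comparison (the combing metric here is a crude mean-absolute-difference stand-in, not TFM's real metric, and the `margin` significance factor is invented):

```python
import numpy as np

def comb_metric(top, bottom):
    # crude stand-in for a combing metric: mean absolute difference
    # between the two fields of the reconstructed frame
    return float(np.mean(np.abs(np.asarray(top, float) - np.asarray(bottom, float))))

def is_blend(A, B, C, margin=1.5):
    """True if field B looks like a blend of fields A and C."""
    candidates = [A,                       # frame 0
                  0.25 * A + 0.75 * C,     # frame 1
                  0.50 * A + 0.50 * C,     # frame 2
                  0.75 * A + 0.25 * C,     # frame 3
                  C]                       # frame 4
    scores = [comb_metric(B, f) for f in candidates]
    ends = min(scores[0], scores[4])             # min(0,4)
    mids = min(scores[1], scores[2], scores[3])  # min(1,2,3)
    # B counts as a blend only if the interpolated frames comb
    # significantly less than the straight A / C pairings
    return mids * margin < ends
```

If B genuinely matches A or C (true film frame), `ends` is near zero and the test fails, which is exactly the desired behavior; as noted above, this breaks down on true interlaced material.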

Last edited by tritical; 29th May 2005 at 04:59.
Old 29th May 2005, 06:56   #230  |  Link
Revgen
Registered User
 
Join Date: Sep 2004
Location: Near LA, California, USA
Posts: 1,545
@Chainmax

Try to use http://www.yousendit.com to send the file to Tritical.

You can upload any file up to 1 GB to their server for somebody to download, and the file will stay on their server for seven days.

It costs nothing to use. It also scans files for viruses, so you don't have to worry.

I hope this helps.
Old 29th May 2005, 15:10   #231  |  Link
MOmonster
Registered User
 
Join Date: May 2005
Location: Germany
Posts: 495
@tritical
Oh, I also thought about this problem, but I wasn't able to test it before. I thought we would still get the blended field if we compare the combing levels for both fields, but I seem to be wrong.
If MIN(0,4) really is significantly higher than MIN(1,2,3), would you also test field C and then compare the differences, or would you use a combing difference parameter to decide whether the test of field C is really necessary?
Your method seems to be accurate enough to find the blended fields.
By trying to match A & B or B & C we get the combing levels for 0 and 4, and can then compare them with MIN(1,2,3) to find the blending.
Maybe you could make this method a bit more efficient.
If B really is a blend of A and C, perhaps we could compare the combing levels from matching A with B and B with C to find out how strong the blending factor is.
If B is, for example, an A:C => 30:70 blend, the combing level from matching B with C should be much higher than from matching A with B.
So it shouldn't be necessary to create three new frames; we should be able to use an adaptive blend factor by comparing the combing levels. The rest would work as you said.
Maybe it is possible to choose the blend factor with finer granularity, for example between 20, 35, 50, 65, and 80 percent, but that is only an idea.
If C is the blend and not B, the blending factor should be unimportant, because the combing level should still be much higher than for the blended field, so we can work the same way.

Thank you for your work, tritical.

Last edited by MOmonster; 2nd June 2005 at 15:38.
Old 29th May 2005, 16:44   #232  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
I hardly dare to mention Restore24, but its blend detection works pretty well. Of course it is slower, since it works with edge masks. But the percentage of blend recognitions in non-static parts is usually >95%.
The idea is free to use; that's why Restore24 was posted in the first place.

(And also, that's why some are waiting for a suitable decimator, less unstable than SmartDecimate, to fall from heaven.)
__________________
- We're at the beginning of the end of mankind's childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)

Last edited by Didée; 29th May 2005 at 16:48.
Old 29th May 2005, 18:08   #233  |  Link
MOmonster
Registered User
 
Join Date: May 2005
Location: Germany
Posts: 495
Hi Didée,

You're right, the blend detection of Restore24 is really good.
But that is not the reason why I am searching for another way to do a good restoration.
Restore24 has many disadvantages, not quality-wise, but in other respects, such as the maximum video length, the stability of this complex function, and of course the speed.
The most important point is the separation of operations like deblending, decimating, and so on. There is no need to search for blends on matched frames, because there shouldn't be any blending there. Another thing to note is that only where there are blends are there no possible matches; this way, we only have to decide which of two fields is the blended one, instead of hunting for the fewest blends across the whole bobbed source. Also, it is harder to decimate a bobbed source correctly than a source at the same framerate. We could further tweak the decimation if the decimator were able to read the information from the matcher, and so on.
Once more, I don't think Restore24 works badly; no, it is an amazing function that copes with nearly all badly telecined material. But there are the other points I mentioned above, which is why I am looking for an alternative, and this approach also seems promising.
Old 30th May 2005, 02:53   #234  |  Link
Chainmax
Huh?
 
Chainmax's Avatar
 
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
Revgen, thanks for the tip, I'll use that.

tritical, YouSendIt needs a recipient email address. Please create a temporary account with a free mail provider and PM me the address so that I can upload the file for you.
Old 1st June 2005, 00:32   #235  |  Link
MOmonster
Registered User
 
Join Date: May 2005
Location: Germany
Posts: 495
@tritical
I use the display info from Donald A. Graft's Telecide to get something like a combing level for the different matches. You said you don't have such badly telecined material, so I'll try to run some tests on my own sources. So far my idea seems to work, but I have to make many more tests before I come to a conclusion. It will take some time, but I hope it will help you. I don't know whether the combing levels of Donald A. Graft's Telecide are that useful for you; maybe one of your filters could also display something like a combing level, so that you could use the results more directly, but for now I'll try it this way.

@all
Does anybody know a good way to show the combing level of the current picture on the display?
Old 1st June 2005, 23:47   #236  |  Link
MOmonster
Registered User
 
Join Date: May 2005
Location: Germany
Posts: 495
@tritical
I am waiting for your next TIVTC version, because I want to test your new three-way mode.
So far so good; for today I am stopping my tests. I have learned a lot from testing the different sources. First of all, my idea that a clear field without a match can never be followed by a second clear field without any possible match seems to be wrong. But the special mode for TFM I thought of wouldn't be useless.
I ran tests on ten different sources and more than eighty different motion scenes. On four of the ten sources there are occasionally (really seldom, and only slightly) two blended fields in a row; that is how bad the conversions are.
It is possible that if we have a u-match after a p-match, we lose a clear field.
This simply means that we should only look at the next two fields after the last possible match. If we have a p-match and no possible c-match (maybe both matches are possible, but p is better), we should just try to match the next two fields (the second field from the previous frame and the first field from the frame we try to restore). That means only a c- or p-match would be recommended. With a possible c-match of the previous frame, a u-match would also be useful. The other point is, of course, that only these two fields should be used for the postprocessing with blend detection.
If we use TDeint as postprocessor after TFM, we won't solve this problem. Maybe you can think about the implementation of such a mode in the future.

But now to the more important thing, the blend detection tests.

Sorry tritical, but your idea also wouldn't work that well. In nearly all scenes I looked at, I found some blends with blend factors between 10 and 15%, so the difference between MIN(0,4) and MIN(1,2,3) wouldn't be that high or clear, and sometimes we would detect the wrong field as blended.
A better method seems to be to compare the MIN(1,2,3) from testing the first field with the MIN(1,2,3) from testing the second field. In some cases this works better, but still not absolutely right in all cases.
I'm happy that my own idea seems to work pretty well in all cases I have tested so far. Finding the right blend factor by comparing matches 0 and 4 seems to work really well across the different sources (though not yet tested on really noisy and blocky material). I would still prefer to compare the combing levels from the matched blend creations with the test of the other field; in some cases this works better than comparing the difference of MIN(0,4) with this combing level.

I made a list of the blending factors depending on the combing level ratio (0,4). I made it this accurate simply because it should work better. Depending on the source, the deviation from the real blend factor isn't higher than 3% (in my tests). I think that doesn't sound bad. I'll make some more tests in the future and maybe modify the list a little, but so far this seems to work well. I hope these tests help you a little.
So, here is the list:
Code:
A = the ratio between the two matches involving the blended field,
	expressed as value:1 (using the match values displayed by Telecide)
B = the percentage blend factor of the field with the lower match values


A		B		A		B

1		50%		2.5		29%
1.02		49%		2.6		28%
1.05		48%		2.7		27%
1.08		47%		2.9		26%
1.12		46%		3.0		25%
1.16		45%		3.2		24%
1.20		44%		3.4		23%
1.24		43%		3.6		22%
1.28		42%		3.8		21%
1.33		41%		4.1		20%
1.38		40%		4.4		19%
1.44		39%		4.7		18%
1.5		38%		5.0		17%
1.6		37%		5.4		16%
1.7		36%		5.8		15%
1.8		35%		6.3		14%
1.9		34%		6.7		13%
2.0		33%		7.2		12%
2.2		32%		7.7		11%
2.3		31%		8.2		10%
2.4		30%
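For illustration, the table above can be turned into a simple lookup (a hypothetical Python sketch; function and constant names are my own, and values outside the table clamp to 50% and 10%):

```python
import bisect

# MOmonster's ratio -> blend-percentage table, transcribed as-is
RATIO_TO_BLEND = [
    (1.00, 50), (1.02, 49), (1.05, 48), (1.08, 47), (1.12, 46),
    (1.16, 45), (1.20, 44), (1.24, 43), (1.28, 42), (1.33, 41),
    (1.38, 40), (1.44, 39), (1.50, 38), (1.60, 37), (1.70, 36),
    (1.80, 35), (1.90, 34), (2.00, 33), (2.20, 32), (2.30, 31),
    (2.40, 30), (2.50, 29), (2.60, 28), (2.70, 27), (2.90, 26),
    (3.00, 25), (3.20, 24), (3.40, 23), (3.60, 22), (3.80, 21),
    (4.10, 20), (4.40, 19), (4.70, 18), (5.00, 17), (5.40, 16),
    (5.80, 15), (6.30, 14), (6.70, 13), (7.20, 12), (7.70, 11),
    (8.20, 10),
]
RATIOS = [r for r, _ in RATIO_TO_BLEND]

def blend_percent(ratio):
    """Estimated blend percentage of the weaker field, given the
    combing-level ratio (>= 1) between the two matches."""
    # pick the last table row whose ratio does not exceed the input
    i = bisect.bisect_right(RATIOS, ratio) - 1
    i = max(0, min(i, len(RATIO_TO_BLEND) - 1))
    return RATIO_TO_BLEND[i][1]
```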

Last edited by MOmonster; 2nd June 2005 at 15:32.
Old 3rd June 2005, 16:16   #237  |  Link
Chainmax
Huh?
 
Chainmax's Avatar
 
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
tritical, did you get a chance to examine the sample?
Old 4th June 2005, 09:56   #238  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@Chainmax
Yep, I took a look at it. I can't really give any suggestions or think of anything to improve the result significantly. It is just one of those tough clips to handle for a per-pixel motion adaptive deinterlacer. Hopefully handling of such cases will improve in the next version. Thanks for sending the clip, I will definitely keep it for future testing.

@MOmonster
Thanks for those tests. Indeed the 3 cases for the blend detection method I mentioned aren't always enough, smaller steps are needed.

I have been thinking more about the overall process and how it will work. The more I think about it, the less inclined I am to try to include this in TFM/TDeint; I am thinking it would be easier to make a separate filter (that internally uses TFM/TDeint) specifically for handling such video. After looking at Restore24 a little more in depth (I have never had any material that needed it), I think its general approach is quite good. Turning Restore24 into a filter (or making a filter that works like Restore24) and adding some of the ideas from here (field matching, matching patterns, alternative choices for blend detection, etc.) seems like the best way to go. The filter would be able to grab the needed info about matches from TFM/TDeint, since they already hint all relevant information. Anyway, I haven't had much time the last few days to work on it; hopefully that will change.

Last edited by tritical; 4th June 2005 at 09:59.
Old 4th June 2005, 14:18   #239  |  Link
scharfis_brain
brainless
 
scharfis_brain's Avatar
 
Join Date: Mar 2003
Location: Germany
Posts: 3,653
Hi tritical, I am currently playing with TDecimate within Restore24.

It works like it should!


Is there room for some minor improvement?

Restore24 delivers mainly this pattern:
Code:
Filmframes   0  1  2  3  4  5  6  7  8  9 10  11 12 13 14 15 16 17 18 19 20 21 22  23 24 25 26 ...
inputframes xd xd xd xd xd xd xd xd xd xd xd xdn xd xd xd xd xd xd xd xd xd xd xd xdn xd xd xd ...
x - new frame
d - absolute duplicate
n - near duplicate (is a duplicate, but affected by noise and deinterlacing artifacts)

Currently TDecimate chooses one of x, d, or n when it detects such a triple
(I didn't examine which one it chose).

Is it possible to force TDecimate to mix all near-duplicates, to get noise and artifacts reduced?

In the case of Restore24 for frames 11 and 23 either x&n or d&n should be mixed.

This duplicate-blending should also help on normal IVTCs, because noise and/or compression artifacts will be reduced by half.
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.

Last edited by scharfis_brain; 4th June 2005 at 21:14.
Old 4th June 2005, 23:58   #240  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Yep, that is possible... but how do you want to define near duplicate? I mean, how will tdecimate know which frames are near duplicates and not simply different frames with a small amount of motion? I guess it could be by looking at each frame that isn't dropped and seeing if its neighbors were dropped and are less than a certain difference threshold?
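That criterion could be sketched roughly as follows (a hypothetical Python illustration; `frame_diff`, the function name, and the threshold are all invented stand-ins, not TDecimate's internals):

```python
import numpy as np

def frame_diff(a, b):
    # mean absolute difference; stand-in for TDecimate's frame metric
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

def blend_with_near_dups(frames, dropped, idx, thresh=1.0):
    """Average the kept frame at `idx` with any dropped neighbours
    sitting below the (invented) near-duplicate difference threshold."""
    pool = [np.asarray(frames[idx], float)]
    for j in (idx - 1, idx + 1):
        if 0 <= j < len(frames) and dropped[j] \
                and frame_diff(frames[idx], frames[j]) < thresh:
            pool.append(np.asarray(frames[j], float))
    # averaging the near-duplicates attenuates the noise that differs
    # between them; frames with real motion are left untouched
    return np.mean(np.stack(pool), axis=0)
```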

Tags
tdeint, tivtc
