Old 10th September 2010, 19:06   #2581  |  Link
tormento
Acid fr0g
 
Join Date: May 2002
Location: Italy
Posts: 2,542
Which Avisynth filter do you think is best to apply after deinterlace=2 (bobbing) to convert 60i (60p after bobbing) to 30p while retaining as much detail as possible? I don't know how the Nvidia bobbing works when "expanding" the temporal resolution.
__________________
@turment on Telegram
tormento is offline   Reply With Quote
Old 10th September 2010, 19:10   #2582  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Hmm, good question.

I don't think you can do much better than simply SelectEven() or SelectOdd(). You may want to check them to see which is better, because sometimes sources have artifacts limited to, or more prevalent in, one field.
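
Something like this minimal sketch, for example (the file name is just a placeholder, and it assumes a DGSource() call with the deinterlace parameter we've been discussing):

Code:
DGSource("movie.dgi", deinterlace=2)  # double-rate (bob) deinterlacing: 60i -> 60p
SelectEven()                          # keep every second frame: 60p -> 30p
# ...or SelectOdd(); try both, since artifacts are sometimes worse in one field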

But also, have a look at simply using deinterlace=1, i.e., not bobbing.

I'm always open to be corrected by Didée, of course.

Last edited by Guest; 10th September 2010 at 19:12.
Guest is offline   Reply With Quote
Old 10th September 2010, 20:21   #2583  |  Link
tormento
Acid fr0g
 
Join Date: May 2002
Location: Italy
Posts: 2,542
I have read a lot of docs about 60i interlacing, and deinterlace=1 is a no-go, since you discard half of the information (one field per frame), unless NVIDIA provides a smarter way of deinterlacing. I suppose you should ask your "informer".
__________________
@turment on Telegram
tormento is offline   Reply With Quote
Old 10th September 2010, 20:28   #2584  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Bobbing followed by SelectEven() will also lose half the temporal resolution. I don't see any way to avoid it.
Guest is offline   Reply With Quote
Old 10th September 2010, 23:37   #2585  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
Of course you cannot have 60Hz information in 30Hz progressive frames. Except for simple field blending, but nobody really wants that. (Hopefully.)

So far I've only had a brief look at Nvidia's deinterlacing. After checking with a few of my standard test samples, I quickly lost interest. Sure, it is fast, but that's about it. Quality-wise, I can't see any particular magic in there; it's about in the same league as tdeint/yadif/etc. If you want blazing speed, then it's for you. If you aim for maximum detail/stability (and, inherently, compressibility), then it's no replacement for TGMC, which is still in a league of its own, regarding the result as well as the needed CPU time.
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Didée is offline   Reply With Quote
Old 10th September 2010, 23:43   #2586  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Quote:
Originally Posted by Didée View Post
then it's no replacement for TGMC, which is still in a league of its own
Of course TGMC is the quality champ! Nobody's claiming PV as a replacement for TGMC. It's just a useful sweet spot for the quality/performance tradeoff, as you say.
Guest is offline   Reply With Quote
Old 11th September 2010, 00:30   #2587  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
But I had hoped that Nvidia would pull a little more out of the hat. It's not easy to get deeper information, but the video engine generally offers three ways of deinterlacing: "spatial", "temporal", and "vector adaptive". Spatial is probably a simple interpolator; temporal is probably interpolation plus weaving according to some kind of usual motion check. Vector adaptive could be like temporal, but using the source's motion vectors to judge whether there "is" or "is not" motion. I'm pretty sure that no kind of active motion compensation is used: the results simply do not look like that.

Sample for demonstration is encoding right now. Check in a few minutes.
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Didée is offline   Reply With Quote
Old 11th September 2010, 00:34   #2588  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
I think it's a relatively simple EDI, but I could be wrong. We can ask Nvidia about it, but it may not be something they want to discuss.
Guest is offline   Reply With Quote
Old 11th September 2010, 00:57   #2589  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
Quite possible. As long as the curtain isn't pulled back, the illusion can persist that there is something special underneath. Still, asking wouldn't hurt ... perhaps someone has a weak moment and actually leaks some insight.
Also, from what I gathered on the internet, the video engine automatically uses the "highest" method, according to the capabilities of the card as well as the actual content. A particular example: the GT220/240 is said to use "vector adaptive" deinterlacing only up to SD resolution, while for HD resolution it uses only "temporal" deinterlacing. Such behaviour is understandable, given that the usual application is realtime playback. For offline processing, though, it would be interesting if one could choose the deinterlacing method to one's liking. If you contact Nvidia again, maybe you could ask about that, too.

Okay, the comparison ...

Here's a quickly produced sample from a real-world source: "Lord of the Dance", native PAL DVD. To demonstrate clearly, the content was first bobbed, then upscaled 200% with PointResize, and slowed down from 50 fps to 12.5 fps, just to make it easy to see what's really going on.

<sample> (MediaFire, ~9 MB)
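
(For the record, a rough Avisynth sketch of that preparation pipeline; Bob() here is only a stand-in for whichever deinterlacer is being inspected, and the file name is a placeholder:)

Code:
DGSource("lotd.dgi")                       # placeholder source: PAL DVD, 25i
Bob()                                      # stand-in for the deinterlacer under test: -> 50p
PointResize(last.width*2, last.height*2)   # 200% nearest-neighbour upscale, keeps pixels crisp
AssumeFPS(12.5)                            # slow playback from 50 fps down to 12.5 fps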

Three of them are more or less the same. It's hard to tell why one of the three should be preferable to the other two; they are interchangeable.

Hence ... if you're in a hurry, it doesn't matter too much which one you pick. If you're after quality, even at the cost of time, then there isn't much of a choice.
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Didée is offline   Reply With Quote
Old 11th September 2010, 02:09   #2590  |  Link
hydra3333
Registered User
 
Join Date: Oct 2009
Location: crow-land
Posts: 540
My goodness, that is rather confronting as a demonstration. Is there a link to the latest TGMC that you use?
hydra3333 is offline   Reply With Quote
Old 11th September 2010, 06:37   #2591  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Good news. I have found and fixed the dreaded "crash if you don't get a frame" bug.

It was a race condition between CUDA init and the destruction of the filter instance. Putting a frame fetch in there gave CUDA long enough to finish initializing before the application deinstantiated the filter. I've mitigated that in DGDecodeNV.

Will test a bit more and then release. Hopefully it will make DGNV work with CCE, Procoder, etc.

Thanks to Groucho2004 for providing the simple test app I used to recreate the crash.
Guest is offline   Reply With Quote
Old 11th September 2010, 07:05   #2592  |  Link
lych_necross
ZZZzzzz...
 
Join Date: Jan 2007
Location: USA
Posts: 303
I have a quick noobish question: does TGMC == TempGaussMC?
lych_necross is offline   Reply With Quote
Old 11th September 2010, 08:05   #2593  |  Link
tormento
Acid fr0g
 
Join Date: May 2002
Location: Italy
Posts: 2,542
Didée, Neuron2, please keep us informed about your VP deinterlacing findings. Are NVIDIA GPUs also capable of some noise reduction, or other video manipulation, in hardware? DGNV is such a useful program that some additional features would be welcome.
__________________
@turment on Telegram
tormento is offline   Reply With Quote
Old 11th September 2010, 09:09   #2594  |  Link
LigH
German doom9/Gleitz SuMo
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
@ lych_necross: Yes. The exact source might be important, though...
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 11th September 2010, 09:12   #2595  |  Link
stax76
Registered User
 
Join Date: Jun 2002
Location: On thin ice
Posts: 6,837
Quote:
It was a race condition between CUDA init and the destruction of the filter instance. Putting a frame fetch in there gave CUDA long enough to finish initializing before the application deinstantiated the filter. I've mitigated that in DGDecodeNV.
That explains the arbitrary behavior it had.

http://forum.doom9.org/showthread.ph...68#post1417168
stax76 is offline   Reply With Quote
Old 11th September 2010, 14:14   #2596  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Build 2026

* Fixed a race condition between CUDA init and filter deinstantiation that could cause
a crash when DGDecodeNV is instantiated and then deinstantiated without a call
to GetFrame(). Some third-party applications do that to get the video clip properties
returned by an Avisynth script.

http://neuron2.net/dgdecnv/dgdecnv.html
Guest is offline   Reply With Quote
Old 11th September 2010, 17:22   #2597  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Quote:
Originally Posted by tormento View Post
Are NVIDIA GPUs capable of some noise reduction too or other video manipulation in hardware? DGNV is such a useful program that some other features would be welcome
Sure, they are capable of it, but these things are not currently exposed in the CUVID API.

Since I now know how to write postprocessing functions that run on CUDA (I use it for the NV12->RGB conversion in DGIndexNV), I could contemplate writing some filters. At first, spatial only. Would you like to suggest any specific spatial denoising algorithm that I could implement?
Guest is offline   Reply With Quote
Old 11th September 2010, 17:43   #2598  |  Link
LigH
German doom9/Gleitz SuMo
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
Surely a "median" filter would be useful, and possibly rather simple to implement (hopefully).

a) "careful" method: capping a value to the range of the direct neighbors except self (not exactly the meaning of the term "median", but often used in filters with such a name)

b) "strict" method: setting a value to the middle value of the sorted list of self and neighbor values (the mathematical meaning of the term "median", but with stronger effect and "plateau" side effects)
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 11th September 2010, 17:52   #2599  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Do you think a 3x3 kernel is sufficient?

Last edited by Guest; 11th September 2010 at 17:56.
Guest is offline   Reply With Quote
Old 11th September 2010, 18:06   #2600  |  Link
LigH
German doom9/Gleitz SuMo
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
Hmm ... well ... in general, yes. But GPUs may have enough power to try a 5x5 kernel too, although that will have side effects which can get quite heavy. I once used a 5x5 median to simulate mesas in Terragen (the statistical filters in the "SOPack" are based on my suggestions to its author).
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote