Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
23rd October 2008, 04:14 | #581 | Link |
Guest
Join Date: Jan 2002
Posts: 21,901
|
Things are still bubbling. The thing is that ideally I'd like to be able to support a script like this:
a=AVCDecode("file.dga").reverse()
b=AVCDecode("file.dga")
interleave(a,b)

...which with a file of frames 0-95 would give:

95 0 94 1 93 2 92 3 ...

In other words, a script should support multiple independent AVCSource() instances. The script above worked with earlier versions of nvcuvid.dll when opening with VirtualDub. But now there are two problems:

1. Doing this with the latest nvcuvid.dll crashes. Nvidia has been informed.

2. It never worked with managed apps like MeGUI, which crash when trying to instantiate a second decoder instance.

Assuming that Nvidia can fix 1 still leaves us with 2. Problem 2 can be fixed either with the server model (working fine) or with the floating context model (working, with issues that are probably solvable). But both of these solutions preclude ever doing multiple independent decodes, because only one decoder is ever instantiated.

It seems like a big loss to give up multiple instantiation. So I'm tempted to hold off on the server model rollout and work issues 1 and 2 some more in hopes of solving them. But then users won't have a MeGUI solution in the meantime. Or I can roll out the server now and perhaps have to abandon it later.

One advantage of the server model is that the decoder code is always in one place, so fixing bugs there, or adding features there, fixes all applications that use it. And the server model is simple and robust. So there's something to be said for it.

So I don't know what to do. Your thoughts will be appreciated. Is it really so bad to lose multiple instantiation?

Last edited by Guest; 23rd October 2008 at 04:17. |
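For illustration, the reverse-and-interleave ordering above can be modeled in Python (frame indices only; this is a sketch of the expected frame order, not actual decoder calls):

```python
# Simulate the AviSynth reverse() + interleave() frame order described above.
# This models frame indices only; no actual AVC decoding is involved.

def reversed_clip(n):
    return list(range(n - 1, -1, -1))

def forward_clip(n):
    return list(range(n))

def interleave(a, b):
    # AviSynth's interleave(a, b) alternates frames: a[0], b[0], a[1], b[1], ...
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

frames = interleave(reversed_clip(96), forward_clip(96))
print(frames[:8])  # -> [95, 0, 94, 1, 93, 2, 92, 3]
```

Serving this pattern requires two decoder instances working the same file from opposite ends at once, which is exactly the multiple-instantiation case under discussion.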
23rd October 2008, 04:17 | #582 | Link |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
Why would you have to abandon the server model though? It seems like it has no disadvantages.
__________________
You can't call your encoding speed slow until you start measuring in seconds per frame. Last edited by Guest; 23rd October 2008 at 04:21. |
23rd October 2008, 04:21 | #583 | Link | |
Guest
Join Date: Jan 2002
Posts: 21,901
|
Quote:
AVCSource("fileA.dga")++AVCSource("fileB.dga")

There is a way to support that with the server model by forcing every GetFrame() call to do a full decoder reset, but the performance would be horrible.

Last edited by Guest; 23rd October 2008 at 04:24. |
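The performance gap can be illustrated with a toy model of a single shared decoder, where switching to a different source forces a full reset (the clip lengths and the reset-on-switch rule here are illustrative assumptions, not the server's actual behavior):

```python
# Toy model of one shared decoder: switching to a different source
# forces a full decoder reset, as described above.

def count_resets(access_pattern):
    resets = 0
    current = None
    for source in access_pattern:
        if source != current:
            resets += 1
            current = source
    return resets

n = 100  # frames per clip (assumed)
spliced = ["A"] * n + ["B"] * n   # AVCSource(A) ++ AVCSource(B): serialized access
interleaved = ["A", "B"] * n      # interleave(a, b): alternating access

print(count_resets(spliced))      # -> 2   (one reset per clip)
print(count_resets(interleaved))  # -> 200 (one reset per frame)
```

A splice costs one reset per clip, while interleaving costs one reset per frame, which is why serialized use is tolerable and interleaved use is not.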
|
23rd October 2008, 04:28 | #584 | Link |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
Well, #1 seems like it could be, or should be, a very simple fix -- something that Nvidia can resolve for us (hopefully). MeGUI, on the other hand... perhaps we can work with the developers to make it not crash.
Both seem completely solvable; it just requires someone to actually do it. I wish I knew enough code to help on #2.
__________________
You can't call your encoding speed slow until you start measuring in seconds per frame. |
23rd October 2008, 05:46 | #586 | Link |
Compiling Encoder
Join Date: Jan 2007
Posts: 1,348
|
hmmm.....
The server model works for MeGUI for a single instantiation call but not for several, and the 'normal' method (once fixed) supports multiple instantiation... so how viable would it be to have both? Seeing as there are people who want to do complicated multiple instantiation, and then there are the MeGUI users.

Probably an over-generalization here, but I'd say the multiple-instantiation users will probably encode the file to a lossless intermediate with whatever filtering they perform, and then use that in MeGUI -- removing the necessity for DGAVCDecodeNV within MeGUI. The MeGUI users generally want to straight-shot convert a single video, without multiple-instantiation capability.

Sure, if both worked simultaneously people would be foaming at the mouth with joy, but hey, if it absolutely requires two versions, I think people could live with that; we should be happy to get anything at all.

As for the file-concatenating idea: what CUDA restrictions are in place to prevent you from doing it within the project .dga, with only one AVCSource() call (like DGAVCIndex and DGIndex)? Is there any chance of getting it worked out with Nvidia, or is it an absolute no-go? If it never works out, MP4Box's stream concatenation feature can be used as a workaround for files that have the same properties.

... geh, what a wall of text :<

Last edited by kemuri-_9; 23rd October 2008 at 05:52. |
23rd October 2008, 06:39 | #587 | Link | |
Registered User
Join Date: Dec 2006
Posts: 69
|
Quote:
Alternatively, perhaps keep the code managed so that both can be contained within the same app, with selection via options/preferences, rather than dividing it up based on what the end user wants to do; better to keep it all in one app, imho (though it may mean some code repetition, but honestly, who's worried about memory/storage space on recent machines these days?). Just an idea...

----

edit: Just make sure that it's documented (hell, even well tool-tipped) what the differences are and what's best for which use.

Last edited by tre31; 23rd October 2008 at 06:43. Reason: more... |
|
23rd October 2008, 07:09 | #588 | Link |
Registered User
Join Date: Jun 2008
Posts: 177
|
Storage still matters: shadow copies, for example, can eat space pretty fast if you do massive writes. That requires separate storage for temporary data; it's not something that comes for free.
Memory also matters heavily in 32-bit apps. fft3dgpu, for example, eats address space that way, so it's not possible to use it twice on HD content. If you add the memory requirements for encoding and decoding, mvtools processing, etc., you will suffer from insufficient address space. (Sorry for my english) |
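A back-of-the-envelope sketch of the 32-bit address-space pressure being described (all per-stage sizes below are illustrative assumptions, not measured figures):

```python
# Rough address-space budget for an HD (1920x1080) pipeline in a
# default 32-bit Windows process. All per-stage figures are assumptions.

ADDRESS_SPACE = 2 * 1024**3         # 2 GiB user address space (default 32-bit process)

frame_bytes = 1920 * 1080 * 3 // 2  # one YV12 HD frame, ~3 MB

stages = {
    "decoder buffers (assumed 30 frames)": 30 * frame_bytes,
    "fft3dgpu working set (assumed)":      700 * 1024**2,
    "mvtools caches (assumed)":            300 * 1024**2,
    "encoder lookahead (assumed)":         500 * 1024**2,
}

used = sum(stages.values())
print(f"used: {used / 1024**2:.0f} MiB of {ADDRESS_SPACE / 1024**2:.0f} MiB")

# A second fft3dgpu instance would push the total past the 2 GiB ceiling:
second = used + stages["fft3dgpu working set (assumed)"]
print("second fft3dgpu instance fits:", second <= ADDRESS_SPACE)
```

Even before fragmentation is considered, one HD pipeline can sit uncomfortably close to the 2 GiB ceiling, so a second heavyweight filter instance simply has nowhere to go.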
23rd October 2008, 09:15 | #589 | Link | |
Registered User
Join Date: Dec 2006
Posts: 69
|
Quote:
Anyway, it's up to neuron2; I was just offering an idea that encompasses both, since both seem to be valid solutions to different problems. |
|
23rd October 2008, 15:48 | #590 | Link |
Guest
Join Date: Jan 2002
Posts: 21,901
|
Actually, it turns out that AVCSource()++AVCSource() works fine with the server because the uses of the decoder are serialized. It's only when you interleave their usage that problems arise. So I'll go ahead and release the server version tonight.
|
23rd October 2008, 23:47 | #591 | Link |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
How odd... I don't suppose most people would realistically use two sources interleaved, would they?
__________________
You can't call your encoding speed slow until you start measuring in seconds per frame. |
24th October 2008, 05:23 | #593 | Link |
Guest
Join Date: Jan 2002
Posts: 21,901
|
Version 1.0.5: Server implementation fixes MEGUI operation
OK, I'm rolling out the server version. Nvidia already fixed problem 1 described above but problem 2 remains intractable for now. Some notes:
* I strongly recommend killing the MeGUI preview before starting the encode. You can leave it open if you want to be perverse, but if you navigate in it after starting encoding, kiss your life goodbye. And don't say I didn't warn you.

* DGAVCIndexNV still uses a built-in GPU decoder; the server does not have to be active. But again, for the same reason as above, kill DGAVCIndexNV before trying to execute scripts.

* Don't even think about writing a script that in any way interleaves access to multiple instances of AVCSource(). It's OK if they are serialized, e.g., AVCSource()++AVCSource(). Note that normal PureVideo bobbing without multiple instances is coming.

* The server is licensed, so either put it in the same directory as DGAVCIndexNV.exe, or put a copy of your license.txt file with its executable.

* Tomorrow I will release a basic server client with source code, so that you can use the server in your own applications.

http://neuron2.net/dgavcdecnv/dgavcdecnv.html

Last edited by Guest; 14th November 2008 at 15:01. |
24th October 2008, 22:26 | #596 | Link |
Registered User
Join Date: Sep 2004
Location: Germany, Hamm
Posts: 161
|
Oh sorry, I have a lot of work at the moment; I will test it more tomorrow.
I tested a small Premiere HD stream. I only loaded the .dga project in an .avs script and encoded it with the DXVA HQ profile in MeGUI. The source file plays fine, but the encoded one has terrible lags. And sorry about my English. |
25th October 2008, 02:11 | #598 | Link |
Guest
Join Date: Jan 2002
Posts: 21,901
|
CUVID Sample Client Application version 1.0.0
I've released the sample client package with source code. It works with the CUVID Server contained in DGAVCDecNV 1.0.5. The package contains documentation for the CUVID Server interface. I would be interested in any ideas for improving the interface.
http://neuron2.net/dgavcdecnv/dgavcdecnv.html |
25th October 2008, 11:21 | #600 | Link |
Turkey Machine
Join Date: Jan 2005
Location: Lowestoft, UK (but visit lots of places with bribes [beer])
Posts: 1,953
|
No, because the license models are different.
__________________
On Discworld it is clearly recognized that million-to-one chances happen 9 times out of 10. If the hero did not overcome huge odds, what would be the point? Terry Pratchett - The Science Of Discworld |