23rd October 2008, 04:14 | #10 | Link
Guest
Join Date: Jan 2002
Posts: 21,901
Things are still bubbling. The thing is that ideally I'd like to be able to support a script like this:
a = AVCDecode("file.dga").reverse()
b = AVCDecode("file.dga")
interleave(a, b)

...which with a file of frames 0-95 would give:

95 0 94 1 93 2 92 3 ...

In other words, a script should support multiple independent AVCSource() instances. The script above worked with earlier versions of nvcuvid.dll when opened in VirtualDub. But now there are two problems:

1. Doing this with the latest nvcuvid.dll crashes. Nvidia has been informed.
2. It never worked with managed apps like MEGUI, which crashed when trying to instantiate a second decoder instance.

Assuming that Nvidia can fix problem 1, that still leaves us with problem 2. Problem 2 can be fixed either with the server model (working fine) or the floating-context model (working, with issues that are probably solvable). But both of these solutions preclude ever doing multiple independent decodes, because only one decoder is ever instantiated.

It seems like a big loss to give up multiple instantiation, so I'm tempted to hold off on the server model rollout and work problems 1 and 2 some more in hopes of solving them. But then users won't have a MEGUI solution in the meantime. Alternatively, I can roll out the server now and perhaps have to abandon it later.

One advantage of the server model is that the decoder code is always in one place, so fixing bugs or adding features there fixes all applications that use it. The server model is also simple and robust, so there's something to be said for it.

So I don't know what to do. Your thoughts will be appreciated. Is it really so bad to lose multiple instantiation?

Last edited by Guest; 23rd October 2008 at 04:17.
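To make the frame ordering above concrete, here is a small Python sketch (not DGAVCDecNV code, and no real decoding involved) that models what interleave(a.reverse(), b) produces for a 96-frame clip. The key point it illustrates is that the two clips are at different frame positions on every output frame, which is why the script needs two independent decoder instances rather than one shared one:

```python
# Hypothetical model of the AviSynth script's frame ordering.
# Frames are represented by their frame numbers, not actual video.

def interleave(a, b):
    """Alternate frames from two clips, starting with the first clip."""
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

frames = list(range(96))        # source clip: frames 0..95
a = list(reversed(frames))      # a = AVCDecode("file.dga").reverse()
b = frames                      # b = AVCDecode("file.dga")

result = interleave(a, b)
print(result[:8])               # [95, 0, 94, 1, 93, 2, 92, 3]
```

Because clip a is decoding near the end of the file while clip b is decoding near the start, a single shared decoder would have to seek back and forth on every frame; two independent instances each keep their own decode position.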